NeRF (Neural Radiance Fields) is a recent technique that generates highly detailed and realistic scene representations by modeling the volumetric appearance and geometry of a scene.
Unlike traditional approaches that rely on explicit structures such as 3D models or point clouds, NeRF represents a scene by approximating a continuous function, referred to as a neural radiance field, with a deep neural network.
A NeRF can be trained from fewer and simpler sensors, provides a continuous representation of obstacle geometry, and remains robust and accurate in the presence of specular or transparent materials.
This work aims to leverage the NeRF representation to research and develop new approaches that enable safe motion planning of autonomous robot systems through complex and challenging environments, while reducing the complexity and sensor requirements of classical approaches.
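To make the idea of "approximating a continuous function with a deep neural network" concrete, the following is a minimal sketch of a NeRF-style network in PyTorch. It is deliberately simplified (no positional encoding, far fewer layers than the original architecture; all names and sizes are illustrative): it maps a 3D position `x` and viewing direction `d` to a volume density `sigma` and an RGB radiance `c`.

```python
import torch
import torch.nn as nn

class TinyNeRF(nn.Module):
    """Simplified NeRF-style field: (position, direction) -> (density, color)."""
    def __init__(self, hidden: int = 128):
        super().__init__()
        # Density depends on position only; radiance is also view-dependent.
        self.trunk = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.sigma_head = nn.Linear(hidden, 1)      # volume density
        self.rgb_head = nn.Sequential(              # view-dependent color
            nn.Linear(hidden + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor, d: torch.Tensor):
        h = self.trunk(x)
        sigma = torch.relu(self.sigma_head(h))      # density is non-negative
        rgb = self.rgb_head(torch.cat([h, d], dim=-1))
        return sigma, rgb
```

Training follows the standard NeRF recipe: sample points along camera rays, composite `sigma` and `rgb` with the volume rendering integral, and minimize the photometric error against the input images.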
We plan to adapt and extend neural representations to make them suitable as state representations for autonomous robot navigation. Concretely, we intend to:

- Extend the quantities represented to include, in addition to radiance (a sketch of such an extended field follows this list):
  - Density/traversability (e.g., smoke appears solid, but it is traversable).
  - A motion field, used to predict how the elements in the scene will move.
  - Sensor-specific quantities, such as reflectance, needed for LIDAR data.
- Fuse diverse sensors (e.g., cameras, stereo cameras, and LIDAR) within the same representation.
- Predict future states in a Bayesian fashion, by projecting the representation forward in time along the velocity field (see the second sketch after this list).
- Integrate sampling-based planners with the neural representation to achieve fast occupancy queries.
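A hypothetical sketch of the extended field is below: alongside visual density, each point carries a traversability score, a 3D motion (velocity) field, and a LIDAR reflectance channel. The head names and layer sizes are illustrative assumptions, not a fixed design.

```python
import torch
import torch.nn as nn

class ExtendedField(nn.Module):
    """Radiance-field trunk with additional per-point output heads."""
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.sigma = nn.Linear(hidden, 1)           # visual density (opacity)
        self.traversability = nn.Linear(hidden, 1)  # physical traversability
        self.velocity = nn.Linear(hidden, 3)        # motion field
        self.reflectance = nn.Linear(hidden, 1)     # LIDAR return strength

    def forward(self, x: torch.Tensor):
        h = self.trunk(x)
        return {
            "sigma": torch.relu(self.sigma(h)),
            # Smoke: high sigma (appears solid) but traversability near 1.
            "traversable": torch.sigmoid(self.traversability(h)),
            "velocity": self.velocity(h),
            "reflectance": torch.sigmoid(self.reflectance(h)),
        }
```

Decoupling visual opacity (`sigma`) from traversability is what lets the planner drive through smoke that a purely photometric field would treat as an obstacle.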
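The second sketch shows how forward prediction and fast occupancy queries could combine. It is a minimal illustration under strong simplifying assumptions: a single semi-Lagrangian Euler step along the predicted velocity field stands in for full Bayesian state projection, `field` is an `ExtendedField` as above, and the thresholds are placeholders.

```python
import torch

def occupied_at(field, x: torch.Tensor, t: float,
                sigma_thresh: float = 5.0, trav_thresh: float = 0.5):
    """Boolean mask: True where x is predicted to be occupied at time t."""
    with torch.no_grad():
        # Backtrace: the matter occupying x at time t was (approximately)
        # at x - v*t at time 0, assuming locally constant velocity.
        v = field(x)["velocity"]
        out = field(x - v * t)
        solid = out["sigma"].squeeze(-1) > sigma_thresh
        blocking = out["traversable"].squeeze(-1) < trav_thresh
        return solid & blocking

def edge_collides(field, a: torch.Tensor, b: torch.Tensor, t: float,
                  n_samples: int = 32) -> bool:
    """RRT-style edge check reduced to a batched point query."""
    alphas = torch.linspace(0.0, 1.0, n_samples).unsqueeze(-1)
    pts = a * (1 - alphas) + b * alphas   # points along the segment a->b
    return occupied_at(field, pts, t).any().item()
```

Because every collision check is a batched forward pass through the network, a sampling-based planner can evaluate many candidate edges in parallel on the GPU, which is the source of the fast occupancy queries targeted above.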