Creating a Neural Radiance Field (NeRF) starts with training a deep neural network on a dataset of 2D images of a scene, each captured from a different viewpoint.
From this dataset, the network learns how the scene's appearance changes across viewpoints and, in doing so, estimates its 3D structure.
Once trained, the network can synthesize novel views of the 3D scene; with modern acceleration techniques, this can even run in real time.
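To make the novel-view step concrete, here is a minimal sketch of NeRF's core rendering idea: a field maps a 3D position to a density and a color, and a pixel's color is obtained by volume rendering along the camera ray through that pixel. The `toy_field` function below is a hypothetical analytic stand-in for the trained network, chosen only for illustration; a real NeRF would query an MLP here.

```python
import numpy as np

def toy_field(points):
    """Stand-in for the trained network: (density, rgb) per 3D point."""
    # Density: a soft sphere of radius 0.5 centered at the origin (toy choice).
    r = np.linalg.norm(points, axis=-1)
    density = 10.0 * np.exp(-((r / 0.5) ** 2))
    # Color: a smooth position-dependent RGB in [0, 1] (toy choice).
    rgb = 0.5 + 0.5 * np.tanh(points)
    return density, rgb

def render_ray(origin, direction, near=0.0, far=2.0, n_samples=64):
    """Classic NeRF-style volume rendering along a single ray."""
    t = np.linspace(near, far, n_samples)
    points = origin + t[:, None] * direction
    density, rgb = toy_field(points)
    delta = np.diff(t, append=t[-1] + (t[1] - t[0]))   # spacing between samples
    alpha = 1.0 - np.exp(-density * delta)              # per-sample opacity
    # Transmittance: probability the ray reaches each sample unoccluded.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = alpha * trans
    return (weights[:, None] * rgb).sum(axis=0)         # composited pixel color

pixel = render_ray(np.array([0.0, 0.0, -1.5]), np.array([0.0, 0.0, 1.0]))
print(pixel)  # one RGB value, each channel in [0, 1]
```

Rendering a full novel view amounts to repeating this per-ray computation for every pixel of a virtual camera placed at the desired viewpoint; training runs the same rendering and minimizes the difference between rendered and captured pixel colors.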
Nephelie's wireless solution is ideal for light-field capture arrays used to build NeRFs or 3D Gaussian Splatting (3DGS) renderings. The wireless camera module accommodates a range of CMOS sensor options, all with global shutters, which eliminates motion artifacts. These options span resolutions from 2 to 24 MP and frame rates from 1 to 160 fps, supporting high-quality 3D renderings.
Thanks to this modular, portable design, the image/video capture system can be located away from where neural network training and 3D rendering/presentation are performed. This capability enables distributed AR/VR applications, as shown in the example below.