Volumetric video production involves capturing real-world scenes from multiple angles to create detailed 3D models.
Traditionally, this requires a complex setup with dozens of synchronized cameras mounted on rigid trusses, connected via Ethernet or optical fiber to centralized computing systems. These systems process high volumes of image data in real time to produce video-grade 3D reconstructions.
Despite their technical capability, most current volumetric video systems have significant limitations:
Complex setup requiring fixed rigs and specialized cabling
Limited flexibility in adjusting to different scene sizes or layouts
High cost of scaling to larger setups and environments
Poor performance outdoors, especially in challenging lighting or with reflective and transparent surfaces
Original photo of a crystal ball
3D textured model using photogrammetry
As the example above shows, conventional photogrammetry struggles to accurately reconstruct objects with reflective or transparent surfaces, such as a crystal ball, especially under changing lighting conditions, and often produces distorted or incomplete results.
Nephelie Technologies offers a new approach—an agile volumetric video solution that combines:
A high-performance wireless network, purpose-built for time-sensitive video synchronization
An AI/ML-powered rendering pipeline based on radiance field techniques such as 3D Gaussian Splatting (see the sketch below), delivering higher realism with fewer cameras
This combination enables more accurate 3D reconstructions in real-world conditions while significantly reducing system cost and complexity.
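To make the rendering idea concrete, here is a minimal, illustrative sketch of the core mechanism behind 3D Gaussian Splatting: the scene is represented as a set of 3D Gaussians with color and opacity, which are projected ("splatted") onto the image plane and alpha-composited front to back. This is a simplified NumPy sketch of the general technique under stated assumptions, not Nephelie's actual nCOMM pipeline; all function and parameter names are illustrative, and production renderers tile the image and run on the GPU.

```python
import numpy as np

def render_gaussians(means, covs, colors, opacities, K, R, t, width, height):
    """Illustrative splatting of 3D Gaussians into an RGB image.

    means     (N, 3): Gaussian centers in world coordinates
    covs      (N, 3, 3): 3D covariance of each Gaussian
    colors    (N, 3): RGB values in [0, 1]
    opacities (N,): base opacity of each Gaussian
    K (3, 3): camera intrinsics; R (3, 3), t (3,): world-to-camera pose
    """
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]

    # 1. Move Gaussian centers into camera space, drop those behind the camera.
    cam = means @ R.T + t
    keep = cam[:, 2] > 1e-3
    cam, covs, colors, opacities = cam[keep], covs[keep], colors[keep], opacities[keep]
    x, y, z = cam[:, 0], cam[:, 1], cam[:, 2]

    # 2. Perspective-project the centers to pixel coordinates.
    px = fx * x / z + cx
    py = fy * y / z + cy

    # 3. Approximate each 2D footprint: Sigma_2D = J R Sigma R^T J^T,
    #    where J is the Jacobian of the perspective projection at the center.
    cov_cam = R @ covs @ R.T
    J = np.zeros((len(z), 2, 3))
    J[:, 0, 0] = fx / z
    J[:, 0, 2] = -fx * x / z**2
    J[:, 1, 1] = fy / z
    J[:, 1, 2] = -fy * y / z**2
    cov2d = J @ cov_cam @ J.transpose(0, 2, 1)
    cov2d_inv = np.linalg.inv(cov2d + 1e-6 * np.eye(2))

    # 4. Alpha-composite front to back, nearest Gaussians first.
    #    (Naive per-Gaussian loop; real renderers cull and tile for speed.)
    order = np.argsort(z)
    image = np.zeros((height, width, 3))
    transmittance = np.ones((height, width))
    ys, xs = np.mgrid[0:height, 0:width]
    for i in order:
        d = np.stack([xs - px[i], ys - py[i]], axis=-1)
        maha = np.einsum('hwi,ij,hwj->hw', d, cov2d_inv[i], d)
        alpha = np.clip(opacities[i] * np.exp(-0.5 * maha), 0.0, 0.99)
        image += (transmittance * alpha)[..., None] * colors[i]
        transmittance *= 1.0 - alpha
    return image
```

In practice the Gaussians' positions, covariances, colors, and opacities are optimized against the captured multi-view images, which is why the technique can reach high realism from comparatively few, well-synchronized viewpoints.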
3D model rendering using our nCOMM solution
Unlike rigid, wired systems, Nephelie’s solution is lightweight, portable, and completely wireless—making it ideal for outdoor use and dynamic setups.
In large-scale scenarios like sporting events, wireless camera modules can be mounted on drones or mobile platforms to capture action from all angles, enabling live volumetric capture in environments where traditional systems simply can't go.