Stop Overfitting: Reconstruct 360° Dynamic Objects From a Single Video
Based on research by Jae Won Jang, Yeonjin Chang, Wonsik Shin, Juhwan Cho, Nojun Kwak
Existing dynamic object reconstruction tools often hit a wall when capturing full 360-degree views: because they lean heavily on surface cues from visible angles, their geometry breaks down in occluded regions the camera never sees directly.

A new diffusion-free framework called 4DGS360 addresses this with a 3D-native initialization strategy that curbs overfitting and resolves geometric ambiguity without falling back on complex diffusion priors. At its core is a specialized 3D tracker, AnchorTAP360, which builds reinforced point-cloud trajectories by using confident 2D tracks as anchors to suppress drift, keeping reconstruction reliable even when parts of the object move out of view. The result is coherent 4D reconstruction of dynamic scenes from a single casual monocular video.

The researchers also introduce iPhone360, a new benchmark that places test cameras up to 135 degrees away from the training views, enabling rigorous 360-degree evaluation that was previously unavailable in the field. Across multiple datasets, 4DGS360 achieves state-of-the-art results both visually and quantitatively, showing that high-fidelity full-sphere reconstruction is achievable from standard video input.
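The paper's exact anchoring procedure is not spelled out here, but the core idea behind AnchorTAP360 (pinning a drifting 3D trajectory to frames where the 2D tracker is confident) can be sketched roughly as follows. The function name `suppress_drift`, the confidence threshold, and the linear interpolate-and-blend scheme are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def suppress_drift(traj, conf, conf_thresh=0.9):
    """Illustrative anchoring sketch (not the paper's algorithm):
    pull low-confidence samples of a 3D track toward values
    interpolated between high-confidence anchor frames.

    traj : (T, 3) estimated 3D positions of one point over T frames
    conf : (T,)   per-frame 2D-tracker confidence in [0, 1]
    """
    anchors = np.where(conf >= conf_thresh)[0]
    if anchors.size < 2:  # not enough anchors to interpolate between
        return traj
    t = np.arange(len(traj))
    # Interpolate each coordinate linearly between anchor frames.
    interp = np.stack(
        [np.interp(t, anchors, traj[anchors, d]) for d in range(3)], axis=1
    )
    # Blend: trust the raw estimate in proportion to its confidence.
    w = conf[:, None]
    return w * traj + (1.0 - w) * interp

# A drifted middle frame gets pulled toward its anchor-interpolated position.
traj = np.array([[0, 0, 0], [1, 0, 0], [5, 5, 5], [3, 0, 0], [4, 0, 0]], float)
conf = np.array([1.0, 1.0, 0.1, 1.0, 1.0])
print(suppress_drift(traj, conf)[2])  # near [2, 0, 0] instead of [5, 5, 5]
```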
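To make the 135-degree figure concrete: assuming the camera gap is measured as the angle subtended at the object center between a training and a test camera position (an assumption; the benchmark may define it differently), a minimal check looks like this.

```python
import numpy as np

def view_angle_deg(cam_a, cam_b, center=np.zeros(3)):
    """Angle in degrees between two camera positions as seen from an
    assumed object center. Hypothetical helper, not the benchmark's code."""
    va, vb = cam_a - center, cam_b - center
    cos = np.dot(va, vb) / (np.linalg.norm(va) * np.linalg.norm(vb))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# A test camera placed 135 degrees around the object from a training camera.
train = np.array([1.0, 0.0, 0.0])
test = np.array([np.cos(np.radians(135)), np.sin(np.radians(135)), 0.0])
print(view_angle_deg(train, test))  # ~135.0
```

A gap this wide means large portions of the test view were never observed during training, which is exactly the occluded-region regime the method targets.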
4DGS360: 360° Gaussian Reconstruction of Dynamic Objects from a Single Video by Jae Won Jang et al., https://arxiv.org/abs/2603.21618