My research focuses on developing physics-aware neural rendering techniques that unify computer graphics and physical optics to enable new capabilities in
next-generation AR/VR displays and
3D scene reconstruction.
I've interned at Meta Reality Labs, where I worked on neural rendering for 3D reconstruction and next-generation computational displays.
I've also served as a reviewer for SIGGRAPH, SIGGRAPH Asia, NeurIPS, ISMAR, TIP, TCI, ICIP, and ICASSP.
A fundamentally new alpha blending algorithm for Gaussian splats generates random-phase, full-bandwidth light field holograms,
enabling natural defocus blur, physically accurate parallax, and occlusion.
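For background, conventional Gaussian splatting renders a pixel by compositing depth-sorted splats with standard front-to-back alpha blending; the hologram-generating algorithm above replaces this step. A minimal sketch of the conventional baseline (function name and array shapes are illustrative, not from the work itself):

```python
import numpy as np

def composite_front_to_back(colors, alphas):
    """Standard front-to-back alpha compositing over depth-sorted splats.

    colors: (N, 3) per-splat RGB contributions, sorted near to far.
    alphas: (N,) per-splat opacities in [0, 1].
    Returns the composited RGB for one pixel.
    """
    transmittance = 1.0          # fraction of light not yet absorbed
    out = np.zeros(3)
    for c, a in zip(colors, alphas):
        out += transmittance * a * c   # weight each splat by what remains
        transmittance *= (1.0 - a)     # attenuate for splats behind it
    return out
```

Each splat's contribution is its color scaled by its opacity and by the accumulated transmittance of everything in front of it.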
When state-of-the-art neural rendering meets next-generation holographic displays: converting
optimized Gaussian splats to holograms that support natural focus cues.
A near-eye display design that pairs inverse-designed metasurface waveguides with AI-driven holographic displays
to enable full-colour 3D augmented reality from a compact glasses-like form factor.
The inclusion of parallax cues in CGH rendering plays a crucial role in enhancing perceptual realism,
and we show this through a live demonstration of 4D light field holograms.
A novel light-efficiency loss function, AI-driven
CGH techniques, and camera-in-the-loop calibration greatly improve holographic projector
brightness and image quality.
We propose an image-to-image translation algorithm based on generative adversarial networks
that rectifies fisheye images without requiring paired training data.
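Unpaired image-to-image translation is commonly made possible by a cycle-consistency constraint in the CycleGAN style; a sketch under that assumption (the paper's exact losses and architecture are not specified here, and the generator arguments are hypothetical):

```python
import numpy as np

def cycle_consistency_loss(G_fisheye2rect, G_rect2fisheye, fisheye_batch):
    """Cycle-consistency term for unpaired training (assumed CycleGAN-style):
    a fisheye image mapped to the rectified domain and back should
    reconstruct the original, so no paired ground truth is needed."""
    rectified = G_fisheye2rect(fisheye_batch)       # forward translation
    reconstructed = G_rect2fisheye(rectified)       # inverse translation
    return float(np.mean(np.abs(reconstructed - fisheye_batch)))  # L1 cycle loss
```

This term is typically summed with adversarial losses in both domains, so each generator must produce outputs that both fool a discriminator and remain invertible.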