My research focuses on integrating physics with machine-learning techniques (e.g., neural rendering, generative models) to design novel imaging and display systems,
enabling new capabilities in AR/VR and scene reconstruction.
I've interned at Meta Reality Labs, where I worked on neural rendering for 3D reconstruction.
I've also served as a reviewer for SIGGRAPH, NeurIPS, ISMAR, TIP, ICIP, and ICASSP.
When state-of-the-art neural rendering meets next-generation holographic displays: converting
optimized Gaussian splats to holograms that support natural focus cues.
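For a flavor of what the splat-to-hologram conversion involves, here is a minimal sketch of the underlying physics: back-propagating a target amplitude to the SLM plane with the angular spectrum method and keeping only the phase. It assumes a single-depth placeholder image rather than an optimized splat render, and the names (`angular_spectrum`, `phase_hologram`) are mine, not the paper's.

```python
import numpy as np

def angular_spectrum(field, wavelength, pitch, z):
    """Propagate a complex field over distance z (angular spectrum method)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z)
    H[arg < 0] = 0.0  # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Placeholder amplitude at one depth, standing in for a rendered splat image.
target = np.zeros((512, 512))
target[192:320, 192:320] = 1.0
wavelength, pitch, z = 532e-9, 8e-6, 5e-3
# Back-propagate the target field to the SLM plane; keep only the phase.
slm_field = angular_spectrum(target.astype(complex), wavelength, pitch, -z)
phase_hologram = np.angle(slm_field)
```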
A near-eye display design that pairs inverse-designed metasurface waveguides with AI-driven holographic displays
to enable full-color 3D augmented reality in a compact, glasses-like form factor.
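The "inverse-designed" part can be pictured as gradient descent through a differentiable optics model. The toy below optimizes a phase-only surface so its far-field intensity matches a target; the actual metasurface waveguide solver is far more involved, so treat this purely as an illustration of the optimization pattern.

```python
import torch

# Hypothetical toy inverse-design loop: fit a phase-only surface so that its
# far-field intensity (here just a Fourier transform) matches a target.
n = 256
phase = (0.1 * torch.randn(n, n)).requires_grad_()
target = torch.zeros(n, n)
target[96:160, 96:160] = 1.0
opt = torch.optim.Adam([phase], lr=0.05)

for step in range(200):
    opt.zero_grad()
    far_field = torch.fft.fft2(torch.exp(1j * phase), norm="ortho")
    loss = torch.nn.functional.mse_loss(far_field.abs() ** 2, target)
    loss.backward()  # gradients flow through the differentiable forward model
    opt.step()
```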
Incorporating parallax cues into CGH rendering plays a crucial role in enhancing perceptual realism,
which we show through a live demonstration of 4D light field holograms.
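One classic way to turn a 4D light field into a hologram is the holographic stereogram: each spatial patch ("hogel") is the inverse FFT of the local angular samples. The sketch below follows that textbook recipe on random placeholder data; it is not the rendering pipeline from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
U = V = 8    # angular samples per hogel
S = T = 32   # spatial hogel grid
light_field = rng.random((U, V, S, T))  # placeholder radiance L[u, v, s, t]

hologram = np.zeros((S * U, T * V), dtype=complex)
for s in range(S):
    for t in range(T):
        # Angular samples at this spatial location, given random phase.
        amp = np.sqrt(light_field[:, :, s, t])
        spectrum = amp * np.exp(1j * 2 * np.pi * rng.random((U, V)))
        # Each hogel is the inverse FFT of its local angular spectrum.
        hologram[s * U:(s + 1) * U, t * V:(t + 1) * V] = np.fft.ifft2(spectrum)
phase_hologram = np.angle(hologram)  # phase-only encoding, for illustration
```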
A novel light-efficiency loss function, AI-driven
CGH techniques, and camera-in-the-loop calibration greatly improve holographic projector
brightness and image quality.
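As a rough illustration of how a light-efficiency term can enter a CGH objective, the loss below combines image fidelity with a penalty on energy landing outside the image region; in a camera-in-the-loop setup, `recon_amp` would come from captured photos rather than simulation. The function name, weighting, and exact form are hypothetical and may differ from the paper's loss.

```python
import torch
import torch.nn.functional as F

def cgh_loss(recon_amp, target_amp, signal_mask, lam=0.1):
    """Illustrative CGH objective: fidelity inside the image region plus a
    penalty on light diverted outside it (hypothetical form)."""
    fidelity = F.mse_loss(recon_amp * signal_mask, target_amp * signal_mask)
    energy = recon_amp.pow(2)
    # Fraction of total optical energy that lands inside the image region.
    efficiency = (energy * signal_mask).sum() / (energy.sum() + 1e-8)
    return fidelity + lam * (1.0 - efficiency)
```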
We propose an image-to-image translation algorithm based on generative adversarial networks
that rectifies fisheye images without requiring paired training data.
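Unpaired training of this kind typically leans on cycle consistency in place of paired supervision, as in CycleGAN. A sketch of the generator-side losses is below; the generators, discriminator, and loss weight (`G_fr`, `G_rf`, `D_r`, the factor 10.0) are placeholders, not the paper's architecture.

```python
import torch
import torch.nn.functional as F

def generator_losses(G_fr, G_rf, D_r, fisheye):
    """CycleGAN-style generator objective for unpaired fisheye rectification.
    G_fr: fisheye -> rectified, G_rf: rectified -> fisheye, D_r: discriminator
    on the rectified domain. All names and weights are placeholders."""
    fake_rect = G_fr(fisheye)
    pred = D_r(fake_rect)
    # Adversarial term (LSGAN form): make fakes look real to D_r.
    adv = F.mse_loss(pred, torch.ones_like(pred))
    # Cycle consistency stands in for paired supervision.
    cyc = F.l1_loss(G_rf(fake_rect), fisheye)
    return adv + 10.0 * cyc
```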