My research focuses on computational imaging, displays, and physical optics, and how they
can be combined with modern graphics and 3D-ML techniques (such as NeRF and 3DGS) to create novel AR/VR experiences.
I've interned at Meta Reality Labs, where I worked on 3D reconstruction with Changil Kim.
A near-eye display design that pairs inverse-designed metasurface waveguides with AI-driven holographic displays
to enable full-colour 3D augmented reality from a compact glasses-like form factor.
Including parallax cues in CGH rendering plays a crucial role in enhancing perceptual realism,
which we show through a live demonstration of 4D light-field holograms.
A novel light-efficiency loss function, AI-driven
CGH techniques, and camera-in-the-loop calibration greatly improve holographic projector
brightness and image quality.
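One way a light-efficiency term can be combined with image fidelity is sketched below. This is a minimal, hypothetical illustration only: the scale variable `s`, the weight `lam`, and the exact penalty form are assumptions for exposition, not the paper's actual loss.

```python
import numpy as np

def brightness_aware_loss(recon, target, lam=0.5):
    """Toy fidelity + light-efficiency objective (hypothetical form).

    recon, target: intensity images, values roughly in [0, 1].
    A plain MSE with a free scale factor tolerates dim reconstructions
    (the scale just boosts them); the efficiency term penalizes needing
    a large boost, steering the optimizer toward bright solutions.
    """
    # best least-squares scalar aligning recon to target
    s = float((recon * target).sum() / (recon * recon).sum())
    fidelity = float(np.mean((s * recon - target) ** 2))
    # penalize reconstructions so dim they need a >1x brightness boost
    efficiency = lam * max(s - 1.0, 0.0) ** 2
    return fidelity + efficiency
```

Under this toy objective, a dim reconstruction that matches the target only after a large scale boost scores worse than an equally accurate bright one.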
We propose an image-to-image translation algorithm based on generative adversarial networks
that rectifies fisheye images without the need for paired training data.
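A cycle-consistency term in the style of CycleGAN is one standard way to supervise image-to-image translation without paired data. The sketch below is a hypothetical illustration, not the paper's exact objective: the generator names `G` and `F` and the L1 form are assumptions, and real generators would be learned networks rather than arbitrary callables.

```python
import numpy as np

def cycle_consistency_loss(G, F, fisheye_batch, rect_batch):
    """L1 cycle-consistency term for unpaired translation (sketch).

    G: maps fisheye -> rectilinear; F: maps rectilinear -> fisheye.
    Both stand in for learned generators; here any array-to-array
    callables will do.
    """
    # forward cycle: fisheye -> rectified -> fisheye should reconstruct the input
    fwd = np.mean(np.abs(F(G(fisheye_batch)) - fisheye_batch))
    # backward cycle: rectilinear -> fisheye -> rectilinear
    bwd = np.mean(np.abs(G(F(rect_batch)) - rect_batch))
    return float(fwd + bwd)
```

Because the loss only compares each image to its own round-trip reconstruction, no fisheye/rectilinear image pairs are ever required, which is what makes unpaired training possible.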