Mar 2023
Conventional vision systems are mostly based on imaging sensors consisting of arrays of light-sensitive pixels, each of which measures the intensity of the light falling upon it. Such pixels cannot directly acquire other important multimodal properties of light, such as its incident angle, wavelength, and phase. While intensity information is sufficient for conventional applications such as photography, it is significantly limiting for advanced vision tasks. We will present a new class of photodetectors, enabled by nanophotonics, that can measure multimodal information of light waves. Although multimodal information can be measured with conventional optics such as lenses, prisms, and gratings, these components are difficult to integrate on chip; they also degrade spatial resolution and reduce operational speed. Novel nanostructures can induce coherent interactions among nearby photosensitive materials, an effect that can be exploited to create extremely compact multimodal detectors, which in turn can form high-density imaging-chip arrays. Algorithms that exploit multimodal information could perform vision tasks beyond those possible with today’s intensity-only approach.
Zongfu Yu is a Jack St. Clair Kilby Associate Professor in the Department of Electrical and Computer Engineering at the University of Wisconsin–Madison. In the field of optics, he has authored and co-authored over 100 peer-reviewed papers, with more than 20,000 total citations and an h-index of 63. He has been named a Global Highly Cited Researcher in the past five years. He is a Fellow of OSA (The Optical Society) and a recipient of the DARPA Young Faculty Award and the NSF CAREER Award. He received his Ph.D. in applied physics and M.S. in management science and engineering, both from Stanford University, and a B.S. in physics from the University of Science and Technology of China.