
A large-scale coherent 4D imaging sensor | Nature

Source: Nature
Science | March 11, 2026

Subjects: Electrical and electronic engineering; Imaging and sensing; Integrated optics; Silicon photonics

Abstract

Detailed and accurate 3D mapping of dynamic environments is essential for machines to interface with their surroundings [1,2,3] and for human–machine interaction [4,5]. Although considerable effort has been made to create the equivalent of the complementary metal–oxide–semiconductor (CMOS) image sensor for the 3D world, scalable, high-performance, reliable solutions have proven elusive [6,7,8,9,10,11]. Focal plane array (FPA) sensors using frequency-modulated continuous-wave (FMCW) light detection and ranging (LiDAR) have shown potential to meet all of these requirements while also providing a direct measurement of radial velocity as a fourth dimension. Previous demonstrations [12,13], although promising, have not achieved the simultaneous scale and performance required by commercial applications. Here we present a large-scale, coherent LiDAR FPA enabled by comprehensive chip-scale optoelectronic integration. A 4D imaging camera is built around the FPA and used to acquire point clouds. At its core is a 352 × 176-pixel 2D FMCW LiDAR FPA comprising more than 0.6 million photonic components, all integrated on-chip together with their associated electronics. This represents a fivefold increase in pixel count over previous demonstrations [12]. The pixel architecture combines the outbound and inbound optical paths within the pixel in a monostatic configuration, together with coherent detectors and electronics. Frequency-modulated light is directed sequentially to groups of pixels by in-plane thermo-optic switches with integrated driving and calibration electronics. An integrated serial digital interface controls both optical switching and readout synchronously. Point clouds of objects ranging from 4 to 65 m are shown, with per-pixel integration times compatible with frame rates from 3 to 15 frames per second (fps).
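The abstract's claim that FMCW LiDAR measures radial velocity "as a fourth dimension" follows from standard triangular-chirp processing: the range and Doppler contributions to the beat note can be separated by combining the up-ramp and down-ramp beat frequencies. The sketch below illustrates that arithmetic; all parameter values (wavelength, chirp bandwidth, ramp duration) are illustrative assumptions, not figures from the paper.

```python
# Hedged sketch: recovering range and radial velocity from the up- and
# down-chirp beat frequencies of a triangular FMCW waveform. Parameter
# values are assumptions for illustration, not taken from the paper.

C = 299_792_458.0        # speed of light, m/s
WAVELENGTH = 1.55e-6     # assumed telecom-band carrier wavelength, m
BANDWIDTH = 4.0e9        # assumed chirp frequency excursion B, Hz
CHIRP_TIME = 10e-6       # assumed single-ramp duration T, s

def range_and_velocity(f_up: float, f_down: float) -> tuple[float, float]:
    """Separate the range and Doppler parts of the two beat notes.

    For a target at range R closing with radial velocity v, the beats are
        f_up   = f_range - f_doppler
        f_down = f_range + f_doppler
    with f_range = 2*R*B/(c*T) and f_doppler = 2*v/wavelength.
    """
    slope = BANDWIDTH / CHIRP_TIME          # chirp rate, Hz/s
    f_range = (f_up + f_down) / 2.0         # Doppler cancels in the sum
    f_doppler = (f_down - f_up) / 2.0       # range cancels in the difference
    r = C * f_range / (2.0 * slope)
    v = WAVELENGTH * f_doppler / 2.0
    return r, v

# Synthetic round trip: a target at 50 m closing at 10 m/s.
R_TRUE, V_TRUE = 50.0, 10.0
f_r = 2.0 * R_TRUE * BANDWIDTH / (C * CHIRP_TIME)
f_d = 2.0 * V_TRUE / WAVELENGTH
r_est, v_est = range_and_velocity(f_r - f_d, f_r + f_d)
```

Because range enters only the sum and velocity only the difference of the two beats, one triangular chirp period yields both quantities per pixel, which is the sense in which each point carries four dimensions.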
This result demonstrates the capabilities of FMCW LiDAR FPA sensors as enablers of ubiquitous, low-cost, compact coherent 4D imaging cameras.

Main

The introduction of automation into everyday life has marked a pivotal moment for our society. Across fields from spatial mapping for reconnaissance and construction [1,2,3] to facial recognition, virtual and augmented reality [4] and autonomous driving [5], accurate 3D representation of dynamically evolving environments is paramount for safe human–machine interaction. Substantial research effort is therefore directed towards developing a cost-effective, high-performance and scalable 3D imaging sensor comparable to the CMOS camera.

LiDAR systems are an ideal tool for 3D image reconstruction owing to their extended range, high precision and fine spatial resolution [14]. Time-of-flight (ToF) LiDAR systems based on single-photon avalanche diode arrays are used in various applications, including consumer electronics [6], owing to their low power consumption, affordability and compact design. However, for long-range applications such as those in the automotive industry, ToF LiDARs require microjoules of energy per point and/or scanned configurations to maintain adequate system performance, often at the expense of spatial resolution. As a result, such systems can achieve a 360° field of view (FOV) and ranges up to 500 m, but at the cost of increased size, higher power consumption and reduced mechanical stability [7].

Solid-state beam scanners based on silicon photonics have been extensively studied as an alternative LiDAR platform. By leveraging the existing CMOS infrastructure, silicon-photonics-based LiDARs offer the potential to integrate all essential imaging components, from the light source [15] to the imaging elements and control electronics, into a compact, cost-effective and power-efficient 3D imaging solution [8,9].
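The microjoule-per-point figure quoted for long-range ToF systems matters because average optical power scales as energy per point times point rate. A back-of-envelope sketch, using the pixel count stated in the abstract and an assumed mid-range frame rate (the per-point energy is an illustrative value from the "microjoules" regime, not a number from the paper):

```python
# Back-of-envelope sketch: why per-point energy matters at camera-like
# point rates. The 1 uJ/point figure and 10 fps are assumptions for
# illustration; the pixel count is the 352 x 176 array from the text.

PIXELS = 352 * 176                 # FPA pixel count from the abstract
FPS = 10                           # assumed rate within the 3-15 fps span
points_per_second = PIXELS * FPS   # one point per pixel per frame

energy_per_point_tof = 1e-6        # assumed 1 uJ/point ("microjoules" regime)
avg_optical_power = points_per_second * energy_per_point_tof  # watts
```

Under these assumptions a full-array camera at 10 fps emits roughly 620,000 points per second, so 1 uJ per point already implies over half a watt of average optical output, which motivates the lower per-point energies reported for the coherent, integrated approaches discussed next.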
Integrated single pixels with on-chip lasers [16] have demonstrated range measurements up to 75 m using 1,605 nJ per point. Fully integrated transceivers [17] on silicon photonics have achieved a maximum range of 60 m and a FOV of 70° with 330 nJ per point when combined with external fibre optics. In both cases, on-chip integration of beam steering and the question of scalability were not addressed. A further step towards full integration has been proposed through lens-assisted beam steering with in-pixel detectors [18], although monolithic integration of the control electronics is still needed. Solid-state LiDAR sensors based on optical phased arrays (OPAs) rely on the precise phase tuning of several hundred emitters, together with wide-band wavelength tuning, for 2D beam steering. Demonstrations on hybrid platforms with a FOV of up to 140° × 19.23° have reached a range of 100 m for 90%-reflective targets using 1,360 nJ per point [19]. Implementations with up to several thousand optical emitters on a silicon platform have achieved FOVs of 100° × 17° with a range of 35 m, using as
