This article explores the capabilities and limitations of each type of sensor, to provide a clear understanding of why LiDAR has emerged as a strong contender in the computer vision technology race.
Before delving into the relative strengths and weaknesses of these technologies, let's first provide a brief overview of how cameras, radars, and LiDAR systems operate and perceive the world around them.
Generally speaking, sensors are devices that measure physical properties and transform them into signals suitable for processing, displaying, or storing.
Radar and LiDAR are active sensors, whereas Cameras are passive ones.
Active sensors emit energy (e.g. Radio Waves or Laser light) and measure the reflected or scattered signal, while passive sensors detect the natural radiation or emission from the target or the environment (e.g. Sunlight or artificial light for Cameras).
Each sensing modality has its advantages and inconveniences.
The Insurance Institute for Highway Safety ("IIHS") found that, in darkness, camera- and radar-based Pedestrian Automatic Emergency Braking systems failed to detect pedestrians in every instance tested.
Additionally, the Governor's Highway Safety Association ("GHSA") of the USA found, in an evaluation of 2020 roadway fatalities, that 75% of pedestrian fatalities occurred at night.
Jessica B. Cicchino, Insurance Institute for Highway Safety, Effects of Automatic Emergency Braking Systems on Pedestrian Crash Risk 20 (2022).
Governor's Highway Safety Association, Pedestrian Traffic Fatalities by State: 2020 Preliminary Data 16 (2021).
However, the most crucial disparities among these sensing modalities lie in two key aspects that warrant further exploration:
1) the inherently different types of perception they provide, and 2) the potential privacy concerns they may give rise to or help evade.
Each type of sensor operates in distinct sections of the electromagnetic spectrum, utilising signals with varying wavelengths.
Cameras capture colours in two dimensions, lacking any notion of depth. Consequently, a large object positioned far away from the device can occupy the same number of pixels as a small object situated close to the camera.
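This size-distance ambiguity follows directly from the pinhole camera model, and can be sketched in a few lines of Python (the focal length value below is an illustrative assumption, not a property of any specific camera):

```python
# Pinhole-camera sketch: the apparent size of an object in pixels
# depends on both its physical size and its distance, so a camera
# alone cannot distinguish a large far object from a small near one.

def projected_size_px(object_size_m: float, distance_m: float,
                      focal_length_px: float = 1000.0) -> float:
    """Apparent size in pixels under the pinhole model: s = f * S / d."""
    return focal_length_px * object_size_m / distance_m

# A 2 m tall person standing 10 m away...
far_large = projected_size_px(2.0, 10.0)
# ...and a 0.2 m printed photo held 1 m away project to the same height:
near_small = projected_size_px(0.2, 1.0)

print(far_large, near_small)  # both 200.0 px
```

Both objects occupy the same 200 pixels, so without extra cues the image alone cannot tell them apart.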
This depth perception is not required in many simple Computer Vision tasks, such as classifying objects, where Cameras excel:
However, the lack of depth perception in Computer Vision software can pose challenges in tasks that require precise detection of object position, size, or movement.
This capability is especially important for applications such as precise crowd monitoring at scale:
In this example real persons and a printed representation are both displayed to the camera:
Active sensors like LiDAR and Radar can measure the distance to each object. In the case of LiDAR, these measurements are made in three dimensions: not only the depth, but the exact position of any object in space.
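Each LiDAR return is essentially a range plus two angles (azimuth and elevation); converting that to Cartesian coordinates yields the exact 3D position of the reflecting surface. A minimal sketch, with illustrative values:

```python
import math

# Sketch of how a single LiDAR return (range + two angles) becomes
# an exact 3D position. Values below are illustrative, not real sensor data.

def lidar_return_to_xyz(range_m: float, azimuth_deg: float,
                        elevation_deg: float) -> tuple:
    """Convert a spherical LiDAR measurement to Cartesian x, y, z."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)
    y = range_m * math.cos(el) * math.sin(az)
    z = range_m * math.sin(el)
    return (x, y, z)

# A return at 10 m, straight ahead, level with the sensor:
print(lidar_return_to_xyz(10.0, 0.0, 0.0))  # (10.0, 0.0, 0.0)
```

Aggregating thousands of such returns per second is what produces the familiar 3D point cloud.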
In a nutshell, 3D LiDAR provides Spatial Data but can't detect colours:
That means that tasks such as tracking people at scale, individually and with cm accuracy, are much more appropriate for 3D Spatial sensors like LiDAR:
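Once each return carries a ground-plane (x, y) position, separating individuals becomes a clustering problem. A hedged sketch, using a simple greedy distance-threshold approach (the 0.5 m threshold and the points below are illustrative assumptions, not real sensor output or Outsight's actual method):

```python
# Toy clustering of 2D LiDAR ground positions: points closer than
# `threshold` metres to an existing cluster join it; otherwise they
# start a new one. Real pipelines use more robust algorithms.

def cluster_points(points, threshold=0.5):
    """Greedy distance-based clustering of (x, y) points."""
    clusters = []
    for px, py in points:
        for cluster in clusters:
            if any((px - qx) ** 2 + (py - qy) ** 2 <= threshold ** 2
                   for qx, qy in cluster):
                cluster.append((px, py))
                break
        else:
            clusters.append([(px, py)])
    return clusters

# Two people ~2 m apart, each producing a few nearby returns:
points = [(0.0, 0.0), (0.1, 0.05), (2.0, 0.0), (2.1, 0.1)]
print(len(cluster_points(points)))  # 2
```

Because LiDAR positions are accurate to the centimetre, a tight threshold like this can keep nearby individuals separate, which is exactly what makes per-person tracking at scale feasible.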
As we've seen, both sensors share many characteristics: they are active, and both are able to detect distance.
The Key Differences Lie in Precision: LiDAR's Laser-Level Accuracy versus Radar's Lower Resolution
LiDAR boasts laser-level accuracy, providing cm-level precision (mm precision in some 2D LiDARs), while Radar's resolution is significantly lower, posing challenges in precise tracking and distinguishing individuals or objects in crowded environments.
One of the main reasons is the wavelength of radio waves compared to that of laser light:
While Imaging Radar, the 3D version of Radar, shows promising potential and is currently in development, its capabilities are still limited.
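A back-of-the-envelope calculation shows why wavelength matters so much: the angular resolution of an aperture is roughly λ/D (the diffraction limit). The apertures and the 77 GHz frequency below are illustrative assumptions chosen for the sketch:

```python
# Rough diffraction-limit comparison: cross-range resolution at distance d
# is approximately (wavelength / aperture) * d. Illustrative numbers only.

C = 3.0e8  # speed of light, m/s

def cross_range_m(wavelength_m: float, aperture_m: float,
                  distance_m: float) -> float:
    """Approximate cross-range resolution at a given distance."""
    return (wavelength_m / aperture_m) * distance_m

radar_wl = C / 77e9   # 77 GHz automotive radar -> ~3.9 mm wavelength
lidar_wl = 905e-9     # 905 nm LiDAR laser

# At 50 m, with assumed apertures of 10 cm (radar) and 2.5 cm (lidar):
print(f"radar: {cross_range_m(radar_wl, 0.10, 50):.2f} m")   # ~1.95 m
print(f"lidar: {cross_range_m(lidar_wl, 0.025, 50):.4f} m")  # ~0.0018 m
```

With these assumptions the radar's resolution cell at 50 m is about two metres wide, roughly the spacing between two pedestrians, while the LiDAR beam remains millimetre-scale: an orders-of-magnitude gap that no amount of signal processing fully closes.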
Cameras were purposefully designed for human consumption of images, enabling them to capture and deliver information that closely mirrors what a person would perceive in reality (e.g. cinema, TV, portable cameras, smartphones...).
Only recently have the same images taken by cameras found applications in automated processing by computers, utilising technologies such as Computer Vision and Image Processing AI.
This remarkable capability has expanded to encompass the identification of individuals through advanced techniques like FaceID.
The use of cameras in facial recognition technology raises legitimate apprehensions about data security and personal privacy.
Consequently, the significance of safeguarding privacy has gained considerable attention from policymakers and regulators worldwide. State policies, such as the General Data Protection Regulation (GDPR) in the European Union, have been put in place to address these concerns by imposing strict guidelines and limitations on the usage of cameras and biometric data.
The following chart presents a summary of the previous comparison points, along with some additional ones:
In the world of automated processing, cameras have become a ubiquitous choice not necessarily due to their superiority as sensors for the task, but rather because of their widespread availability.
As we explored earlier in this article, the use of cameras may be suitable for certain applications like object classification, where their effectiveness is evident. However, when it comes to capturing complex Spatial Data of the physical world, these sensors reveal their limitations.
The intricacies of spatial data processing necessitate more specialised and sophisticated sensor technologies to overcome challenges and provide more accurate and comprehensive results.
While Radar shares the benefits of being an active sensor, akin to LiDAR, and equally excels in preserving privacy, its significantly lower resolution and precision disqualify it as a viable candidate for numerous use cases, such as crowd monitoring and precise traffic management.
The orders of magnitude difference in resolution makes Radar less capable of capturing intricate details and spatial data with the same level of accuracy as LiDAR.
As LiDAR continues to evolve and find wider integration, it holds the key to unlocking unprecedented insights and driving us into a more advanced and interconnected world.
However, while LiDAR data, in the form of point-cloud information, provides a wealth of Spatial data, this raw data alone is essentially useless without effective processing and analysis.
Through advanced algorithms, cutting-edge techniques and a full set of tools, Outsight can derive valuable insights from LiDAR point-cloud data, enabling a wide range of applications across industries.
One such tool is the first Multi-Vendor LiDAR Simulator, an online platform that empowers our partners and customers to make informed decisions about which LiDAR to utilize, their optimal placement, and the projected performance and cost for any given project.