
LIDAR vs FSD: Which technology is the future of autonomous driving?

Many car manufacturers like Mercedes, Nissan, BMW, VW and Volvo are using LIDAR in their pursuit of self-driving cars.

BHPian dgindia recently shared this with other enthusiasts.

In yesteryears, we witnessed a fierce battle in the video industry over the underlying broadcast technology: PAL vs NTSC. I realised this when a camcorder bought in the US (which used NTSC) would not work with our Indian TV (which used PAL). We had to avail the services of experts who would burn a CD in a compatible format for viewing at home.

Something similar is brewing in the automobile industry - the rival technologies used for self-driving cars are LIDAR and FSD.

LIDAR (Light Detection and Ranging) uses laser light to sense objects in the surroundings and collects that information to create a 3D digital image. Many car manufacturers like Mercedes, Nissan, BMW, VW, and Volvo are using this technology in their pursuit of self-driving cars.
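For the technically inclined, here is a rough sketch in Python of how a single LIDAR return becomes a point in that 3D image (the function name and sample values are made up for illustration): the sensor times the laser pulse to get a range, knows the direction the beam was fired in, and converts the two into Cartesian coordinates.

```python
import math

def lidar_return_to_point(range_m, azimuth_rad, elevation_rad):
    """Convert one LIDAR return (range + beam angles) to a 3D point.

    Standard spherical-to-Cartesian conversion: the sensor times the
    reflected laser pulse to get the range and already knows the beam
    direction, so each return maps to one (x, y, z) point.
    """
    x = range_m * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = range_m * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = range_m * math.sin(elevation_rad)
    return (x, y, z)

# A full scan is just many such returns; accumulating them yields the
# 3D "digital image" (point cloud) of the surroundings.
scan = [(12.4, 0.10, 0.02), (12.1, 0.11, 0.02), (55.0, 1.57, -0.01)]
point_cloud = [lidar_return_to_point(r, az, el) for r, az, el in scan]
print(point_cloud)
```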

FSD (Full Self-Driving) is another technology, using cameras and computer vision to generate an AI-driven digital clone of the surroundings. Tesla has pioneered this camera-only approach and is its most prominent proponent.

Critics of the camera-only approach say LIDAR copes with a wider range of driving conditions, including darkness; FSD's backers counter that it is more cost-effective and mimics human driving behaviour.

It will be interesting to witness the future of driving as a high-stakes battle between lasers and cameras.

Given both options, in which car would you choose to relax while being auto-driven?

Here's what BHPian MotoBlip had to say about the matter:

Automated driving success hinges not just on sensor quantity or tech choice, but on aligning the hardware with the operational design domain (ODD): the real-world conditions in which the vehicle must operate safely and effectively. Consider Tesla's FSD Beta system, designed to eventually achieve full autonomy (Level 4/5), yet currently labelled as Level 2, potentially to navigate strict regulatory frameworks.

In China, companies like NIO, Xpeng, and Xiaomi are integrating LIDAR into their systems, though the current capabilities and potential improvements through software updates remain somewhat ambiguous. Levels 2/3 of automated driving necessitate human oversight, requiring a person to be prepared to assume control if the system encounters difficulties. This requirement persists regardless of the vehicle's sensor array.

It's worth noting that simply adding sensors beyond the initial design won't inherently enhance the vehicle's capabilities beyond its original specifications. The complexity of navigating urban environments with ADAS already presents significant challenges, irrespective of the diversity or quantity of sensors employed.

From my experience working with LIDAR sensors, I've observed certain limitations. Their performance can degrade in adverse weather conditions, with time-of-flight (ToF) LIDAR sensors particularly affected compared to frequency-modulated continuous wave (FMCW) sensors, though the latter come at a significantly higher cost. Another consideration is the choice between short-range or long-range LIDAR, or both, which adds to the system's overall cost and complexity. It's definitely not as simple as some people make it out to be.
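To make the ToF vs FMCW distinction concrete, here is a minimal sketch of the two range equations (the timings and chirp parameters below are illustrative, not taken from any real sensor): ToF halves the measured round-trip time of a pulse, while FMCW infers range from the beat frequency between the transmitted chirp and its returning echo.

```python
C = 299_792_458.0  # speed of light, m/s

def tof_range(round_trip_s):
    """Time-of-flight: the pulse travels out and back, so halve it."""
    return C * round_trip_s / 2.0

def fmcw_range(beat_hz, chirp_s, bandwidth_hz):
    """FMCW: a linear chirp of bandwidth B over duration T, mixed with
    its echo, yields a beat frequency f_b = (2R/c) * (B/T), hence
    R = f_b * c * T / (2B)."""
    return beat_hz * C * chirp_s / (2.0 * bandwidth_hz)

# Illustrative (hypothetical) numbers: a ~667 ns round trip is ~100 m;
# a 2 MHz beat on a 1 GHz bandwidth, 10 us chirp is ~3 m.
print(tof_range(667e-9))            # ~100.0
print(fmcw_range(2e6, 10e-6, 1e9))  # ~3.0
```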

Here's what BHPian Rehaan had to say about the matter:

Haven't researched this in too much detail, but I've also heard that LIDAR does not work well in the rain, and that interference could become an issue when there are dozens of LIDAR devices all running simultaneously in a small area.

For me, the ultimate choice is always "sensor fusion": combining the inputs from several different types of sensors, each of which has its own strong and weak points.

Think of humans -- not only are we using our eyes to drive, but also taking inputs from our ears and hands/butt, sense of balance, and even sometimes our noses.

However, this approach is difficult because:

  • Coding it (i.e. deciding which sensor to give priority to when there's a conflict) gets more and more complicated the more sensors you add; see the sketch after this list
  • It increases hardware cost as well as R&D cost
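
As a deliberately simplified illustration of that conflict-resolution problem, here is a sketch of inverse-variance weighting, one common fusion baseline (the sensor names and numbers are hypothetical): each sensor's estimate is weighted by how much we currently trust it, so a rain-degraded camera gets less say than a confident radar.

```python
def fuse(readings):
    """Minimal sensor-fusion sketch: inverse-variance weighting.

    Each sensor reports an estimate (say, distance in metres to the
    car ahead) plus a variance reflecting how trustworthy it is in the
    current conditions. Noisier sensors get proportionally less say,
    which is one simple way to resolve the "which sensor wins" conflict.
    """
    weights = [1.0 / var for _, var in readings]
    estimate = sum(w * val for w, (val, _) in zip(weights, readings))
    return estimate / sum(weights)

# Hypothetical readings: the camera is degraded by rain, the radar is
# confident, the LIDAR sits in between.
readings = [
    (24.0, 4.00),  # camera: 24.0 m, high variance (rain)
    (22.5, 0.25),  # radar:  22.5 m, low variance
    (23.0, 1.00),  # lidar:  23.0 m, medium variance
]
print(round(fuse(readings), 2))  # ~22.67, pulled towards the radar
```

Every extra sensor means another such trust model to maintain and tune, which is exactly where the coding complexity and the R&D cost come from.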

Here's what BHPian NomadSK had to say about the matter:

FSD/Autonomous driving is ages away. Not happening soon.

All sensors need to be calibrated, be it LIDAR, camera-based, IR, RADAR, ultrasonics (SONAR), or radiography, if you want 100% accuracy. They generally work nearly perfectly in controlled environments; outside of that envelope, it's very difficult to make them work according to your needs. Each technology has its own set of limitations.

We use all these sensors for various inspection/investigation techniques. They don't give exact results; the readings have to be interpreted by SMEs, with approximations. FSD and auto-drive, i.e. Level 5 of ADAS, is a farce at the moment and is probably at least a decade away, if not more. We have still not fully reached Level 3. You can get very close, but not there. Theoretically, the only way it can be achieved safely is when 100% of the cars have similar features; otherwise there are too many variables in the real world to come close to fully autonomous driving.

I'm of the opinion that if sensor-based autonomous travel does take off, the first industry to incorporate it will be aviation: it is already super close to full automation and has dedicated landing/take-off facilities. Else, there's a reason trains run on rails.

Check out BHPian comments for more insights and information.
