Autopilot and self-driving systems: we are not safe yet. Here are the problems

The American electric car manufacturer Tesla equips its vehicles with Autopilot, a system that can steer, brake and accelerate without human intervention.

Since its inception, Autopilot has been marked by a number of “safety flaws”, which have pushed Elon Musk’s multinational toward continuous improvement.

Tesla publishes a quarterly “Tesla Vehicle Safety Report”, which provides statistics on the number of kilometers driven between accidents, both when motorists use the Autopilot system and when they do not. These figures consistently show that accidents are less frequent when Autopilot is in use. But the numbers do not clear up every doubt.

Highway use cannot be generalized

According to the U.S. Department of Transportation (the counterpart of Italy’s Ministry of Sustainable Infrastructure and Mobility), Tesla’s Autopilot is mainly used for highway driving, which is generally twice as safe as driving on city roads. In layman’s terms, it is possible that there are fewer accidents with Tesla’s Autopilot simply because the tool is typically used in “safer” situations.
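
To see why this matters, here is a minimal numerical sketch (in Python, with invented crash rates and mileage mixes, not real Tesla data) of the confounding at play: even if Autopilot were exactly as safe as a human driver on any given road type, its aggregate crash rate would still look better simply because its miles skew toward highways.

```python
# Minimal sketch of the road-mix confounder, with invented numbers.
# Assumption: highways see 1 crash per 2M miles, city streets 1 per 1M,
# and Autopilot is *no safer* than manual driving on the same road type.

CRASH_RATE_PER_MILE = {"highway": 1 / 2_000_000, "city": 1 / 1_000_000}

def crashes_per_million_miles(miles_by_road: dict) -> float:
    """Aggregate crash rate implied by a given mix of road types."""
    total_miles = sum(miles_by_road.values())
    expected_crashes = sum(
        miles * CRASH_RATE_PER_MILE[road] for road, miles in miles_by_road.items()
    )
    return expected_crashes / total_miles * 1_000_000

# Hypothetical mileage mixes: Autopilot miles skew heavily toward highways.
autopilot_mix = {"highway": 900_000, "city": 100_000}
manual_mix = {"highway": 400_000, "city": 600_000}

print(f"Autopilot: {crashes_per_million_miles(autopilot_mix):.2f} crashes per M miles")  # 0.55
print(f"Manual:    {crashes_per_million_miles(manual_mix):.2f} crashes per M miles")     # 0.80
# Autopilot looks ~30% safer in aggregate even though, by construction,
# it confers no safety benefit on any individual road type.
```

In this toy example, the gap is produced entirely by where the system is used, which is exactly why data broken down by road type would be needed to judge the system itself.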

Moreover, Tesla has not provided data that would allow Autopilot’s safety to be compared on the same types of roads (nor have other car manufacturers offering similar systems, such as General Motors and Ford).

On U.S. roads, Autopilot has been present since 2015, General Motors’ Super Cruise since 2017 and Ford’s BlueCruise since 2021, but publicly available data on these systems is still sparse.

American drivers, whether they use these systems or “share the road” with those who do, are in effect “participants in an experiment” whose results have not yet been revealed. Car manufacturers and technology companies keep adding features to vehicles with the aim of improving safety, but it is not always easy to verify these solutions in practice.

What we do see, unfortunately, is that fatalities on U.S. highways and roads have continued to rise in recent years. The extra safety provided by technological advances, it seems, does not compensate for the “wrong decisions” drivers make behind the wheel.

General Motors collaborated with the University of Michigan Transportation Research Institute (UMTRI) on a study exploring the potential safety benefits of its Super Cruise; the study, however, ended without enough data to determine whether the General Motors system actually reduced accidents.

Data from the US Road Safety Agency is missing

One year ago, the National Highway Traffic Safety Administration (NHTSA), which oversees car safety on US roads, ordered the companies in question (the three giants above) to report potentially serious accidents involving their advanced driver assistance systems (such as Autopilot) within a day of learning of them. The order also requires NHTSA to publish the reports, but the agency has not yet done so, indicating only that publication will take place in the near future.

Moreover, of the three multinationals, only General Motors has reported serious accidents (one in 2018, the other in 2020). The NHTSA data is unlikely to provide a “complete picture of the situation”, but it may encourage US lawmakers (state and federal) and US drivers to take a closer look at these technologies, and ultimately change how they are marketed and regulated.

The importance of human driver cooperation

The human driver’s “lack of cooperation” is just as important. Despite its capabilities, Autopilot does not, of course, remove the responsibility of the driver who engages autonomous driving in their vehicle.

Tesla urges motorists to be vigilant and ready to take control of the car at any time. The same goes for Ford’s BlueCruise and General Motors’ Super Cruise. However, many experts fear that these systems, by allowing drivers to relinquish active control of the car, may lead them to believe that their cars are “driving themselves”.

In that case, when the technology fails or cannot handle a situation on its own, drivers may be unprepared to take control as quickly as the situation demands.

“Older” technologies, such as automatic emergency braking and lane departure warning, have long served as a safety net for motorists: they brake or stop the car, or warn drivers when they drift out of their lane (for example, if they fall asleep).

But the latest driver assistance systems reverse that relationship: it is now the driver who must stay ready to back up the automation. Moreover, the way Autopilot is marketed does not help.

For years, CEO Elon Musk has said the California company’s cars were on the verge of achieving true autonomy: driving themselves in pretty much any situation.

However, the very name of the system (Autopilot) implies a level of automation that, paradoxically, it has not yet achieved. Complete reliance on such a system can induce a kind of “relaxation” in the driver, which can keep them from reacting quickly when needed. As mentioned, Tesla has long promoted Autopilot as a road safety improvement, with the US company’s aforementioned quarterly reports appearing to confirm this.

But a study from the Virginia Transportation Research Council (VTRC), part of the Virginia Department of Transportation, shows that these reports “are not what they appear to be.”

According to the VTRC, cars using Autopilot do crash less frequently than those that do not; however, they are not driven in the same way, on the same roads, at the same times of day, or by the same drivers.

By analyzing US police and insurance data, the non-profit Insurance Institute for Highway Safety found that the already mentioned “old” safety technologies, such as automatic emergency braking and lane departure warning, have significantly improved safety on American roads, while there is no comparable evidence that autonomous driving has done the same.

Part of the problem is that U.S. police and insurance data do not always indicate whether these systems were in use at the time of the incident.

Publication of NHTSA data may not be enough

The National Highway Traffic Safety Administration (NHTSA) has ordered the affected companies to provide data on crashes in which driver assistance technologies were in use within the thirty seconds before impact. This could give a fuller picture of how these systems perform. But even with this data, safety experts say, it will be difficult to determine whether using these systems is safer than disabling them in the same situations.
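
The reporting criterion itself is easy to state precisely. Below is a hypothetical sketch (field and function names are invented for illustration, not NHTSA’s) of how such a rule might be applied to a vehicle’s telemetry log:

```python
# Hypothetical sketch of the reporting criterion described above: a crash
# is reportable if a driver assistance system was engaged at any point
# within the 30 seconds before impact. All names here are invented.

from dataclasses import dataclass

REPORTING_WINDOW_S = 30.0

@dataclass
class TelemetrySample:
    timestamp_s: float   # seconds since the start of the trip
    adas_engaged: bool   # was the driver assistance system active?

def is_reportable(crash_time_s: float, telemetry: list) -> bool:
    """True if the assistance system was engaged within 30 s of the crash."""
    window_start = crash_time_s - REPORTING_WINDOW_S
    return any(
        s.adas_engaged and window_start <= s.timestamp_s <= crash_time_s
        for s in telemetry
    )

# The system disengaged 5 s before a crash at t=100 s, but was engaged
# 15 s before it, so the crash still falls under the reporting rule.
trip = [TelemetrySample(85.0, True), TelemetrySample(95.0, False)]
print(is_reportable(100.0, trip))  # True
```

One effect of a window defined this way is that a system that disengages moments before impact is still counted, which speaks directly to the handover problem discussed earlier.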

The Alliance for Automotive Innovation, a Washington DC trade group that brings together several automotive companies, has warned that the NHTSA data could be misunderstood or misrepresented. Car manufacturers may also be reluctant to share some data with NHTSA: under its order, companies can even ask that data not be disclosed, claiming that disclosure would reveal trade secrets.

NHTSA also collects data on accidents involving automated driving systems, more advanced technologies that aim to remove drivers from cars entirely. These systems are often referred to as “self-driving cars”.

In some U.S. states, companies are already required to report accidents involving autonomous driving systems. But the most immediate concern is the safety of Autopilot and other driver assistance systems, which are installed on hundreds of thousands of vehicles; a safety that, as we have seen, remains far from proven.
