AI and automotive, the vehicle “speaks” to us: how to improve safety and sustainability

Cars have always spoken to us: through the engine, the gearbox, the brakes, the tyres. For some years now, also with a real voice, perhaps a little metallic, giving us directions to the nearest petrol station or the expected weather. But today, and in the near future, we can and will have more. Through artificial intelligence it is possible to collect all the data detected by the biometric sensors present on the vehicle (on the steering wheel, on the seats, cameras in the mirrors) and integrate them with AI algorithms to verify, for example, the driver's state of distraction or fatigue. The stakes are high: an accident occurs on European roads every 10 seconds, and 94% of them are caused by human error (EU Road Fatality Statistics).

Safety and more: what changes

If in past years car manufacturers developed ADAS (Advanced Driver Assistance Systems) for safety, today the need is changing. "Artificial intelligence exploits vehicle data (that is, what the driver does, ed.) to bring services to the driver," says Ilario Gerlero, senior manager at Sensor Reply, a multinational specializing in consultancy, system integration and digital services that works alongside car manufacturers. "Not reactive but proactive services: giving suggestions when improper driving situations arise in relation to the objectives you have at that moment. Once you know how the driver drives and how to monitor them," Gerlero continues, "you can give the driver indications to improve their driving: in terms of safety, by reducing the elements of distraction; in terms of energy consumed, by optimizing the 'mission' of the vehicle, which depends on how a person drives; and in terms of performance for the sportiest cars, a sort of virtual coach for the amateur on the track. We don't replace the sensors that are already in the vehicle; we are an integration."

The Reply car simulator: difficulties and advantages

Collecting data from sensors, however, is not without its difficulties, because while driving we encounter the most disparate situations, which are ambiguous for "machines" to interpret. "You look at the road, but you're still distracted. It is the so-called cognitive distraction," Gerlero points out. "And how do you realize it? You notice it because you drive badly: you take a turn differently than you normally do, or you have a more aggressive driving attitude because you notice things at the last minute."
And artificial intelligence can detect and report all this. The simulator solution developed by Reply correlates the sensors present on the vehicle with AI algorithms to detect the driver's state of distraction or tiredness, and to reliably define a reference driving model for each specific driver.
"We learn your normal driving style from the vehicle data, and through connectivity to the cloud," Gerlero points out, "the artificial intelligence checks at all times whether the personal model collected over time is still valid. Maybe you had a serious accident and your style has changed, or you got older, or you became a parent and are more cautious. These models are not static, but evolve over time. Think of a driving assistant that helps you to best set up your vehicle depending on where you are driving, or to pay attention where required: suggestions calibrated to how you drive and to the situation and environment around you, to improve performance and consumption."
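The idea of a personal model that evolves with the driver can be sketched as an exponentially weighted baseline: each new sample is scored against the driver's own history before being folded in. This is a minimal illustration of the principle, not Reply's actual system; the feature (steering jerk), the smoothing factor and the threshold are our own assumptions.

```python
import math
from dataclasses import dataclass

@dataclass
class DrivingBaseline:
    """Running estimate of a driver's 'normal' value for one driving
    feature (here: steering jerk). Purely illustrative."""
    mean: float = 0.0
    var: float = 1.0
    alpha: float = 0.02  # small alpha -> the baseline drifts slowly as the driver changes

    def update(self, x: float) -> float:
        # score the new sample against the *current* baseline first...
        z = (x - self.mean) / math.sqrt(max(self.var, 1e-9))
        # ...then fold it in with an exponentially weighted update, so the
        # personal model evolves over time instead of staying static
        self.mean = (1 - self.alpha) * self.mean + self.alpha * x
        self.var = (1 - self.alpha) * self.var + self.alpha * (x - self.mean) ** 2
        return z

baseline = DrivingBaseline()
for jerk in [0.10, 0.12, 0.09, 0.11] * 50:  # 200 calm samples establish the norm
    baseline.update(jerk)
z = baseline.update(0.90)  # a sudden aggressive steering input
# a large z-score means the input deviates strongly from this driver's own model
```

Because the baseline keeps drifting with every update, a style change after "a serious accident" or with age is gradually absorbed rather than flagged forever.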

Connectivity: from cloud to the edge

In the data integration and communication project, connectivity becomes fundamental. How can we make up for any losses or connection drops?
"We are bringing everything to the edge because we need to be highly autonomous from the cloud," Fausto Ferrettini, partner at Sensor Reply, tells us. "Edge and cloud coexist and do different things. If I have to send you an audible warning because you are distracted or driving too aggressively, I have to tell you immediately."

Recall that edge computing is a model in which data processing takes place as close as possible to where the data is generated, so that response times are faster. Reply's car simulator solution thus has a double benefit: processing speed and reliability, since the sensors can function even without a data network connection (such as in tunnels).
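The edge-first split Ferrettini describes can be sketched like this: the safety decision is taken locally with no network round-trip, while telemetry is merely buffered for the cloud until the link returns (for example, after a tunnel). A toy Python sketch; the threshold and payloads are our own illustrative assumptions.

```python
import queue
import time

def process_frame(distraction_score: float, cloud_up: bool,
                  backlog: queue.Queue) -> str:
    """Edge-first loop: the safety decision never waits for the network."""
    # 1) immediate local decision (the 0.8 threshold is an assumption)
    action = "warn_driver" if distraction_score > 0.8 else "ok"
    # 2) telemetry is buffered regardless of connectivity...
    backlog.put((time.time(), distraction_score))
    # 3) ...and drained to the cloud only when the link is available
    if cloud_up:
        while not backlog.empty():
            backlog.get()  # stand-in for an upload to the cloud model
    return action

q = queue.Queue()
offline_action = process_frame(0.9, cloud_up=False, backlog=q)  # in a tunnel
buffered = q.qsize()                 # telemetry kept locally
online_action = process_frame(0.2, cloud_up=True, backlog=q)    # link restored
drained = q.qsize()                  # backlog synced to the cloud
```

The warning is issued even with the link down, which is exactly the reliability benefit the article attributes to the edge.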
"Furthermore, the cloud is expensive," Ferrettini adds, "and has problems in terms of cyber security."

Privacy and understanding: the eye of the camera

Understanding what the camera detects, and how it interprets it, becomes fundamental. I can have the right posture but still be distracted, or sit in an awkward way but be perfectly concentrated. "The AI must understand where I am and what I'm really doing," Ferrettini points out: "am I looking at the view, at a billboard, or simply turning my head at an intersection?"
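The ambiguity Ferrettini describes, a head turn at an intersection versus a genuinely wandering gaze, is why the camera signal has to be read together with driving context. A toy rule in Python; the inputs, labels and thresholds are our own illustrative assumptions, not a real driver-monitoring pipeline.

```python
def classify_gaze(off_road_seconds: float, at_intersection: bool,
                  turn_signal_on: bool) -> str:
    """Combine how long the gaze has been off the road with vehicle
    context before flagging distraction."""
    if at_intersection or turn_signal_on:
        # looking sideways here is expected behaviour, not distraction
        return "context_glance" if off_road_seconds < 3.0 else "prolonged_glance"
    # on a straight road, a long off-road gaze is the warning case
    return "distracted" if off_road_seconds > 2.0 else "attentive"

a = classify_gaze(1.5, at_intersection=True, turn_signal_on=False)
b = classify_gaze(2.5, at_intersection=False, turn_signal_on=False)
```

The same 2.5-second glance is benign in one context and a warning in the other, which is the whole point of fusing camera and vehicle data.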
"The majority of accidents occur due to distraction," Ferrettini adds. "Nobody watches where they are going anymore: drivers have very rich infotainment, heavily tied to the touchscreen, which is why the interface of the future will increasingly be vocal, with more advanced systems that better understand our requests and give sensible responses. Pedestrians often walk with their heads in their cell phones or with headphones on. So if I can't control pedestrians, I control drivers."
But the camera recording poses a privacy problem. "No one wants a camera looking at them," Ferrettini specifies. "Driving is a private moment; I do things I don't want people to know about. This is why everything must remain on the edge: that way no one knows. It wouldn't be the same on the cloud."
"The number of road deaths is unbearable for a world that claims to be civilized," Ferrettini stresses. "So we need to increase safety. One prospect being discussed, though we are not working on it for now, is an AI system that could intervene invasively in driving: first it warns the driver, then it could degrade the car's performance. For example, if you are smoking and this is considered a behavior that generates distraction or danger, the AI can warn you once or twice with a sound signal and then slowly lower the speed of the vehicle. But it's just a topic of discussion, because this itself can create a danger: I might be stopped in an unsafe place, or need to accelerate to avoid a risk."
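The escalation Ferrettini frames as a discussion topic only would amount to a small decision ladder: warn first, degrade performance afterwards, and never while the driver may need full power (the very risk he raises). A purely hypothetical sketch; the article is explicit that no one is building this, and every state and count below is our own assumption.

```python
def next_intervention(warnings_given: int, driver_needs_power: bool) -> str:
    """Hypothetical escalation ladder: audible warnings first, gradual
    speed reduction only afterwards, suspended whenever degrading the
    car could itself create danger (e.g. mid-overtake)."""
    if driver_needs_power:
        return "hold"  # cutting performance here could cause the accident
    if warnings_given < 2:
        return "audible_warning"
    return "reduce_speed_gradually"

first = next_intervention(0, driver_needs_power=False)
after_two = next_intervention(2, driver_needs_power=False)
overtaking = next_intervention(2, driver_needs_power=True)
```

Even this toy version shows why the idea is contentious: the "hold" branch has to override the safety logic precisely when the situation is most dangerous.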

Regulatory context

Reply's work takes place within the framework set by the European Parliament, which has required new safety measures to prevent risky driving situations (Regulation (EU) 2019/2144). These require new technological integrations, and the European Commission is encouraging manufacturers to develop technologies based on biometric information. The regulation requires that motor vehicles of categories M (intended for the transport of people) and N (designed for the transport of goods) be equipped with driver drowsiness and attention warning systems (DDAW), which must warn the driver if a distracted driving condition is detected, starting from 2024 for new type approvals (new models placed on the market) and from 2026 for new registrations (any vehicle put on the road).

Back to the Future

What does the future hold for us? First of all, a complete renewal of the fleet: "Today the push towards electric cars is very strong," Ferrettini notes. "Retrofitting (adapting old cars with innovative software systems, ed.) exists, but the renewal of the fleet will happen. So will autonomous driving, which is unthinkable in an Italian city today."

But will we still drive in our cities in the future or will we do so just to savor the pleasure on the track and in dedicated spaces?