MOBILITY | VW integrated ChatGPT into its cars – Synopsys

0

Dennis Kengo Oka, Principal Automotive Security Strategist at Synopsys Software Integrity Group, shares his thoughts about this.

Source: Contributed file photo

Recent advancements in artificial intelligence, exemplified by Volkswagen’s latest unveiling at CES 2024, showcase the integration of the ChatGPT-based chatbot into the IDA voice assistant. The industry trend is steering towards elevating digital assistants beyond basic commands like unlocking doors or starting engines. With the potential of powerful AI language models like ChatGPT, automakers can now create bespoke digital assistants, trained specifically for automotive applications.

However, amidst the excitement of these advancements, it is crucial to address potential risks. Similar to the early days of ChatGPT, where unrestricted usage led to the creation of malware and hacking tools, deploying digital assistants in cars without proper safeguards poses security concerns.

Dennis Kengo Oka, Principal Automotive Security Strategist at Synopsys Software Integrity Group, shares his thoughts about this below:

The automotive industry is working towards improving the user experience in cars and allowing a more seamless transition from smart homes to smart cars. That is, the same digital assistants you are using in your smart home have also been available in your car for the past few years. However, these systems have been more general and often been limited to only supporting certain commands, e.g., unlock the door or start the engine.

Based on these powerful AI language models like ChatGPT, automakers can build their own digital assistants and train the AI model with automotive specific information. Similar to how ChatGPT was trained with, e.g., Linux and Unix man pages, and C and Python programming languages, one can imagine an automaker training their digital assistant with information from the car user manual as well as information on how to support common use cases including route planning, integration with smart homes and devices, charging, etc. This would allow a user to easily ask questions about a warning light blinking on the dashboard, plan an efficient route to the airport, to open the garage door or connect a user device, find and reserve a charging spot etc., without having to dig through a large user manual or use and manage multiple devices or systems.

But what about the risks? It is extremely important to consider what type of training data is used as well as apply policies that define what responses with what type of information are allowed. Similar to how early usage of ChatGPT with limited restrictions allowed it to write malware and hacking tools or to gain information that could be used with malicious intent, a digital assistant in your car could also be abused to potentially gain certain harmful information, e.g., how to clone keys or run unauthorized commands which could lead to attackers stealing cars.

While deploying a digital assistant in your car would provide many benefits and definitely improve the user experience, it is also important to consider the risks. Therefore, it’s imperative that automotive organizations consider what training data is used as well as consider providing some type of restrictions on content in responses, in order to prevent abuse or actions with malicious intent.

Moreover, OWASP has published the “OWASP Top 10 for LLM Applications”, which is a good source of information for automotive organizations to consider when developing their AI systems. It is important to be aware of the different types of cybersecurity concerns or attacks in order to develop proper security countermeasures. For example, a Prompt Injection attack is when an attacker feeds the AI system with certain data to make it behave in a way it was not intended for. Sensitive Information Disclosure can occur if an attacker is able to extract specific IP-related data or privacy-related data. The AI model itself can also be targeted through a Training Data Poisoning attack, where it becomes tainted by being trained on incorrect data.

There is also a concern of AI Model Theft, where attackers can reverse-engineer or analyze the contents of the model. Additionally, previous studies have shown that AI systems generate appropriate content 80% of the time but 20% of the time it seemingly just makes up content, so-called “AI hallucinations”. Therefore, it is important to consider what tasks the AI system is used for and to avoid over-reliance on the AI system.

WATCH: TECHSABADO and ‘TODAY IS TUESDAY’ LIVESTREAM on YOUTUBE

WATCH OUR OTHER YOUTUBE CHANNELS:

PLEASE LIKE our FACEBOOK PAGE and SUBSCRIBE to OUR YOUTUBE CHANNEL.

autoceremony >> experimental sound, synths, retro tech, shortwave

RACKET MUSIC GROUP >> alternative manila

Burning Chrome >>RC, die-cast cars, vintage anime, plus other collectibles

Zero Interrupt >>Vintage gadgets, gear and gizmos, plus some new one too!

PLEASE LIKE our FACEBOOK PAGE and SUBSCRIBE to OUR YOUTUBE CHANNEL.

Commentary article

Leave a Reply

Your email address will not be published. Required fields are marked *