Researchers Warn AI ‘Blind Spot’ Could Allow Attackers to Hijack Self-Driving Vehicles
A newly discovered vulnerability could allow cybercriminals to silently hijack the artificial intelligence (AI) systems in self-driving cars, raising concerns about the security of autonomous systems increasingly used on public roads.
Georgia Tech cybersecurity researchers discovered the vulnerability, dubbed VillainNet, and found it can remain dormant in a self-driving vehicle’s AI system until triggered by specific conditions.
Once triggered, VillainNet is almost certain to succeed, the researchers found, giving attackers control of the targeted vehicle.
The research finds that attackers could program almost any action within a self-driving vehicle’s AI super network to trigger VillainNet. In one possible scenario, it could be triggered when a self-driving taxi’s AI responds to rainfall and changing road conditions.
Once in control, hackers could hold the passengers hostage and threaten to crash the taxi.