NYU Tandon-led Research Team Develops Innovative System for Vehicles to Share AI Models
NYU Tandon-led research team develops system that lets vehicles pass along AI models like messages in a social network, even when they don’t meet directly
An NYU Tandon School of Engineering-led research team has developed a groundbreaking system that allows self-driving vehicles to share their knowledge about road conditions indirectly. This system enables vehicles to learn from the experiences of others, even when they rarely encounter each other on the road.
The research, presented in a paper at the Association for the Advancement of Artificial Intelligence Conference on February 27, 2025, addresses a common challenge in artificial intelligence: how to facilitate knowledge sharing among vehicles while safeguarding their data privacy. Typically, vehicles only exchange information during brief direct interactions, limiting their ability to quickly adapt to new conditions.
Professor Yong Liu, who supervised the research led by Ph.D. student Xiaoyu Wang, described the system as creating a network of shared experiences for self-driving cars. This innovative approach allows vehicles to learn about road conditions in areas they have not personally visited, enhancing their preparedness for diverse scenarios.
The researchers named their new approach Cached Decentralized Federated Learning (Cached-DFL). Unlike traditional Federated Learning methods that rely on a central server, Cached-DFL enables vehicles to train their AI models locally and share them directly with other vehicles.
When vehicles come into close proximity, they use high-speed device-to-device communication to exchange trained models rather than raw data. Importantly, vehicles can also pass along models they have received from previous encounters, enabling knowledge to propagate beyond immediate interactions. Each vehicle maintains a cache of up to 10 external models and updates its AI every 120 seconds.
To ensure optimal performance, the system automatically removes outdated models based on a staleness threshold, prioritizing recent and relevant knowledge.
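The caching behavior described above can be sketched in a few lines. The capacity of 10 models and the 120-second update interval come from the article; the specific staleness threshold, the eviction order, and the simple averaging step are illustrative assumptions, not the authors' published code.

```python
from dataclasses import dataclass

CACHE_CAPACITY = 10          # max external models a vehicle keeps (from the article)
UPDATE_INTERVAL = 120.0      # seconds between local AI updates (from the article)
STALENESS_THRESHOLD = 600.0  # hypothetical expiry age in seconds (assumption)

@dataclass
class CachedModel:
    origin_id: str    # vehicle that originally trained the model
    weights: list     # model parameters (toy flat-list representation)
    timestamp: float  # when the model was trained

class ModelCache:
    def __init__(self):
        self.models: dict[str, CachedModel] = {}

    def add(self, model: CachedModel, now: float):
        """Insert or refresh a received model, then enforce staleness and capacity."""
        existing = self.models.get(model.origin_id)
        if existing is None or model.timestamp > existing.timestamp:
            self.models[model.origin_id] = model
        self.evict(now)

    def evict(self, now: float):
        # Drop models older than the staleness threshold.
        self.models = {k: m for k, m in self.models.items()
                       if now - m.timestamp <= STALENESS_THRESHOLD}
        # If still over capacity, drop the oldest entries first.
        while len(self.models) > CACHE_CAPACITY:
            oldest = min(self.models.values(), key=lambda m: m.timestamp)
            del self.models[oldest.origin_id]

    def merge_into(self, local_weights: list) -> list:
        """Average the local model with all cached models (a FedAvg-style step)."""
        stacks = [local_weights] + [m.weights for m in self.models.values()]
        n = len(stacks)
        return [sum(ws) / n for ws in zip(*stacks)]
```

In a full system the merge step would run on real network weights every `UPDATE_INTERVAL` seconds; here it is reduced to element-wise averaging to keep the eviction logic in focus.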
In simulated experiments using Manhattan’s street layout, virtual vehicles navigated the city grid, making probabilistic turns at intersections. Unlike conventional decentralized learning methods, Cached-DFL allows models to travel indirectly through the network, similar to how information spreads in social networks.
This multi-hop transfer mechanism reduces the limitations of traditional model-sharing approaches, enabling learning to propagate more efficiently across an entire vehicle fleet. By acting as relays, vehicles can pass along knowledge even if they do not directly experience certain conditions.
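A toy simulation makes the multi-hop relay effect concrete: vehicles wander a small grid, and whenever two meet they swap everything in their caches, so a model can eventually reach vehicles that never met its original trainer. The grid size, movement rule, and unlimited-cache exchange here are deliberate simplifications of the paper's Manhattan-layout simulator, not a reproduction of it.

```python
import random

GRID = 5  # toy 5x5 street grid (assumption; the paper uses Manhattan's layout)

def step(pos):
    """Move one block in a random direction: a probabilistic turn at each intersection."""
    x, y = pos
    dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
    return (min(max(x + dx, 0), GRID - 1), min(max(y + dy, 0), GRID - 1))

def simulate(n_vehicles=10, steps=200, seed=0):
    random.seed(seed)
    pos = [(random.randrange(GRID), random.randrange(GRID)) for _ in range(n_vehicles)]
    # Each vehicle starts knowing only its own model, tracked by origin id.
    known = [{i} for i in range(n_vehicles)]
    for _ in range(steps):
        pos = [step(p) for p in pos]
        # Co-located vehicles exchange all cached models, so knowledge
        # relays hop by hop beyond direct encounters.
        for i in range(n_vehicles):
            for j in range(i + 1, n_vehicles):
                if pos[i] == pos[j]:
                    merged = known[i] | known[j]
                    known[i] = set(merged)
                    known[j] = set(merged)
    return known
```

After a few hundred steps, most vehicles hold models from many origins they never directly encountered, which is the relay effect the paragraph above describes.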
The technology allows connected vehicles to learn about road conditions, signals, and obstacles while maintaining data privacy. This is particularly beneficial in urban environments where vehicles face varied conditions but seldom interact for extended periods.
The study highlights the impact of vehicle speed, cache size, and model expiration on learning efficiency. Faster speeds and frequent communication enhance results, while outdated models diminish accuracy. A group-based caching strategy further improves learning by prioritizing diverse models from different areas.
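One way to read the group-based caching strategy is as a quota rule: cache slots are spread across region groups so that fresh models from many different areas survive, instead of the cache filling up with models from one neighborhood. The function below is a hypothetical sketch of that idea; the grouping key, quota split, and tie-breaking are assumptions, not the paper's exact algorithm.

```python
from collections import defaultdict

CACHE_CAPACITY = 10  # cache size from the article

def group_cache(received, n_groups, capacity=CACHE_CAPACITY):
    """Keep the freshest models while spreading slots evenly across region groups.

    `received` is a list of (group_id, model_id, timestamp) tuples.
    Returns the retained tuples, freshest first.
    """
    per_group = max(1, capacity // n_groups)  # per-group quota (assumed even split)
    buckets = defaultdict(list)
    for g, m, t in received:
        buckets[g].append((t, m))
    kept = []
    for g, items in buckets.items():
        items.sort(reverse=True)  # freshest first within each group
        kept.extend((g, m, t) for t, m in items[:per_group])
    kept.sort(key=lambda x: -x[2])
    return kept[:capacity]
```

With this rule, a vehicle that receives eight models from one district and two from another keeps a mix of both, preserving the diversity the study found beneficial.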
As AI deployment shifts towards edge devices, Cached-DFL offers a secure and efficient method for self-driving cars to collectively enhance their learning capabilities. This system can also be applied to other smart mobile agent networks, such as drones, robots, and satellites, to facilitate decentralized learning and achieve swarm intelligence.
The researchers have made their code publicly available; detailed information can be found in their technical report. In addition to Liu and Wang, the research team includes collaborators from Stony Brook University and New York Institute of Technology.