26 September 2022 | Engineering
Industry-first: Cracking the code on a generalisable AI Driver
To build autonomous vehicle (AV) technology that can truly scale to many places or different use cases, you need an intelligent solution that’s capable of generalisation – the ability to apply what it has learned to new, previously unseen situations. It’s a core feature of AV2.0, unlocking Wayve’s unique ability to enter markets at very low cost with a vehicle-agnostic artificial intelligence (AI) Driver that requires significantly less data for each new domain. We know this to be true because we’ve shown that our system can generalise in two key ways: 1) to new, previously unseen cities, and 2) across two very different vehicle platforms. This is an industry first.

Why generalisation matters for scaling autonomy
AVs stand to offer huge benefits to society and will enable greater access to safer, smarter and more sustainable forms of transportation. To bring a useful autonomy solution to the world, it’s our view that autonomous vehicles need the onboard intelligence to see, learn and adapt, much as human drivers tackle the challenges of driving. This is because we operate in diverse, dynamic environments that are constantly changing. What makes driving such a complex engineering problem is that no two situations on the road are ever the same. Variables such as weather, lighting, traffic conditions, roadworks and other road users are constantly in flux. This means it’s extremely difficult, if not impossible, to exhaustively hand-code every situation a car might encounter. That’s why we need an intelligent solution that can learn the concepts of driving and generalise, or apply that knowledge, to new situations.
From a machine learning perspective, generalisation is the ability of an AI model to adapt properly to new, previously unseen data after having learned from training data. It is a measure of how accurately our AI driving model learns from the given data and applies that learned knowledge elsewhere.
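To make this concrete, here is a minimal sketch of one way to quantify generalisation: compare performance on the training domain with performance on a held-out, unseen domain. The metric and the scores are illustrative assumptions, not Wayve’s evaluation methodology.

```python
def generalisation_gap(train_score: float, unseen_score: float) -> float:
    """Gap between driving performance measured on the training domain and
    on a domain the model never saw (e.g. a new city). A small gap suggests
    the model has learned transferable driving concepts rather than
    memorising its training data."""
    return train_score - unseen_score

# Illustrative numbers only, not Wayve results: a model scoring 0.92 in its
# training city and 0.88 in an unseen city shows a small gap of 0.04.
gap = generalisation_gap(train_score=0.92, unseen_score=0.88)
print(f"generalisation gap: {gap:.2f}")
```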
Generalisation is a core feature of AV2.0, and Wayve has now demonstrated two forms of it: generalisation to new cities and to new vehicle types. This is an industry first.
Why is this important for our product?
Multi-city generalisation demonstrates that we can enter new markets at low cost, requiring significantly less data for each new domain. Multi-vehicle generalisation unlocks our ability to provide a vehicle-agnostic driver and support the vehicles that fleet customers want to use.
Generalisation is a challenge for AV1.0 autonomy technology because these systems rely on many configuration parameters and vehicle-specific software that affect how the vehicle behaves, particularly in their planning components. This makes it difficult for them to generalise without substantial re-engineering effort.
Last year, we showed that we could learn to drive in London and deploy our fleet in other major UK cities, without any prior experience or HD maps of those cities. Our AI driving model learned complex driving concepts, like navigating around temporary roadworks, from its London training experience and then handled new roadworks scenarios it encountered in Cambridge, Manchester, Leeds, Coventry and Liverpool. Our multi-city generalisation test proved that by solving driving in London, we can fast-track deployments in other cities. Since we require significantly less data for each new domain, we are developing an AV system that can be deployed in new markets quickly.
The next challenge was proving that we could generalise on two different vehicle platforms: a passenger car and a light commercial van.
Watch the inside story of the Wayve team, who created the industry’s first AI Driver that can generalise to different cities and vehicle types. This video was filmed during our first week driving our autonomous van on public roads in London.
How we built an autonomous vehicle system that could generalise from cars to vans
We developed a single generalisable driving model capable of driving two vehicle platforms with very different geometry. First, we prototyped this in simulation. We then brought up an autonomous light commercial van, a Maxus e9. Finally, we demonstrated vehicle generalisation with our two autonomous vehicle platforms: a Jaguar I-PACE passenger car and a Maxus e9 van.
1. Explore: Simulated generalisation
Firstly, we wanted to understand whether generalisation between vehicle platforms was possible. There are notable physical differences between the two vehicles:
- different mass and braking response affect longitudinal motion,
- different geometry affects lateral motion and sensor position, and
- the vehicle type and acceleration profile influence how other drivers interact with the vehicle.
We introduced a light commercial vehicle to our simulation environment and quickly created a driving dataset for our simulated van. This allowed us to explore how to adapt our modelling to drive both passenger cars and vans, prior to going on the road or having real-world data.
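To give a sense of what parameterising a new vehicle platform might look like, here is a rough sketch that captures the physical differences listed above. The dataclass, field names and values are our assumptions for illustration, not Wayve’s simulator configuration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SimVehicleConfig:
    """Per-platform physical parameters for a simulated vehicle (illustrative)."""
    mass_kg: float               # affects braking and longitudinal motion
    wheelbase_m: float           # affects turning geometry / lateral motion
    max_brake_decel_mps2: float  # braking response
    camera_height_m: float       # sensor position differs between platforms
    camera_offset_m: float       # longitudinal camera offset from the rear axle

# Values are rough assumptions for illustration, not measured figures.
CAR = SimVehicleConfig(mass_kg=2200, wheelbase_m=3.0,
                       max_brake_decel_mps2=9.0,
                       camera_height_m=1.4, camera_offset_m=1.8)
VAN = SimVehicleConfig(mass_kg=2600, wheelbase_m=3.4,
                       max_brake_decel_mps2=7.5,
                       camera_height_m=2.0, camera_offset_m=2.3)
```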
We tested simulated driving performance using the same suite of tests we use for AV development with our I-PACE platform.
From this testing, we drew three conclusions:
- We could train a driving model capable of generalising between vehicle platforms.
- We did so with a comparatively small dataset from the new vehicle platform, which confirms that we were able to leverage the wider data corpus for generalisation.
- We observed an uplift in performance on both the I-PACE and the Maxus van by using a combination of car and van data to train the model. This joint data corpus outperformed a single-vehicle data corpus on our simulation evaluation benchmarks. We attribute this to the greater diversity of driving states observed in a joint corpus compared to a single-vehicle corpus; one way to train on such a joint corpus is sketched below.
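A common way to train on a joint corpus like this is to re-weight sampling so the much smaller van dataset is not drowned out by the car data. Below is a generic PyTorch sketch under that assumption; it is not Wayve’s training pipeline, and the dataset sizes and 50/50 mixing ratio are illustrative.

```python
import torch
from torch.utils.data import (TensorDataset, ConcatDataset,
                              WeightedRandomSampler, DataLoader)

# Stand-ins for the two corpora; in reality these would hold camera
# frames and driving labels (sizes are illustrative).
car_ds = TensorDataset(torch.randn(10_000, 8))  # large passenger-car corpus
van_ds = TensorDataset(torch.randn(500, 8))     # much smaller van corpus
joint = ConcatDataset([car_ds, van_ds])

# Upweight van samples so each batch mixes both platforms despite the
# imbalance; the 50/50 target ratio is an assumption for illustration.
weights = torch.cat([
    torch.full((len(car_ds),), 0.5 / len(car_ds)),
    torch.full((len(van_ds),), 0.5 / len(van_ds)),
])
sampler = WeightedRandomSampler(weights, num_samples=len(joint),
                                replacement=True)
loader = DataLoader(joint, batch_size=64, sampler=sampler)
```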

The simulation test results indicated that our AI model trained on a combination of car and van data could generalise to the different camera positioning and vehicle dynamics between the two different vehicle platforms. It gave us the confidence to go further and build up an autonomous light commercial van (LCV) adapted from our Generation 1 I-PACE technology.
2. Build: Creating an autonomous van
We deliberately chose a vehicle platform that we felt would stress the ability of an autonomous system to generalise. We adapted our AV system to the Maxus e9 light commercial van.
We kept the autonomy stack identical to that of our I-PACE passenger cars, including the same vehicle abstraction layer. This was important to enable the same AI model to drive both vehicles.
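Wayve hasn’t published the details of this layer, but conceptually a vehicle abstraction layer is a thin per-vehicle adapter beneath an unchanged driving model: the model emits one normalised command format, and each platform translates it into its own actuation. The sketch below illustrates the idea; the class names, interface and steering limits are hypothetical.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class Control:
    """Normalised command emitted by the AI model, identical for every vehicle."""
    steering: float  # [-1, 1], fraction of the platform's maximum steering angle
    throttle: float  # [0, 1]
    brake: float     # [0, 1]

class VehiclePlatform(ABC):
    """Per-vehicle adapter; everything above this layer is shared."""

    @abstractmethod
    def apply(self, cmd: Control) -> None:
        """Translate a normalised command into platform-specific actuation."""

class IPace(VehiclePlatform):
    MAX_STEER_DEG = 35.0  # assumed value for illustration

    def apply(self, cmd: Control) -> None:
        steer_deg = cmd.steering * self.MAX_STEER_DEG
        ...  # forward steer_deg, throttle and brake to the car's drive-by-wire

class MaxusE9(VehiclePlatform):
    MAX_STEER_DEG = 42.0  # assumed value for illustration

    def apply(self, cmd: Control) -> None:
        steer_deg = cmd.steering * self.MAX_STEER_DEG
        ...  # forward steer_deg, throttle and brake to the van's drive-by-wire
```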

3. Test: Confirming vehicle generalisation in reality
Our simulation test results were promising, but real-world results speak for themselves. We wanted to confirm our expectation of generalisation across our two vehicles through on-road testing.
We applied what we learned from the simulation testing to the real world, training our driver with thousands of hours of I-PACE data and only eighty hours of van data. We then tested the van for a week using the same structured testing methodology we use with our I-PACEs.
We made two key observations:
- We observed that the model transferred behavioural concepts to our van (like handling pedestrian crossings or overtaking double-parked vehicles), which we only saw on our I-PACE fleet after larger-scale training with substantially more than eighty hours of driving data.
- We observed that the model demonstrated van-specific behaviour, such as accounting for its wider turning radius and different sensor geometry, which it learned from a much smaller amount of in-domain experience.
No two driving scenarios are ever the same. That’s why it’s important that our autonomous driving system can generalise the driving behaviours it has learned to new, unseen situations.
See the types of London traffic our autonomous van encountered during the first few days of on-road driving.
Watch our autonomous van (bottom screen) following our autonomous passenger car (top screen) through London. Both vehicles are driven by the same autonomy stack and the same AI model, without any inter-vehicle communication.
Concluding thoughts
The multi-vehicle test demonstrated that it’s possible to build a single generalisable driving model capable of driving two vehicle platforms with very different geometry. Why is this important? The future of autonomous transportation requires a multi-modal solution. Even amongst our delivery trial partners, we see very different vehicle requirements. Generalisation to new vehicle types unlocks our ability to provide a vehicle-agnostic driver and work with the vehicles that fleet customers want to use.
Achieving both multi-city and multi-vehicle generalisation is an industry first. At Wayve, we are focused on building a central driving intelligence as a foundational machine learning model, learning from all experiences and expanding to new domains with significantly less data. Now that we’ve validated AV2.0’s ability to generalise, and therefore to scale, we will continue to expand the capabilities of our autonomous driver. We are also excited to roll this out for testing with our commercial partners and show that it works in real-world use cases.