1 November 2021  |  Engineering

Unlocking markets faster: Building AVs that generalise

In this article, we discuss the results of a recent multi-city generalisation test, conducted to explore how we are building the most scalable approach to autonomous driving.

To build autonomous driving technology that can easily scale to new markets, we are pioneering a data-driven approach to self-driving. At the heart of AV2.0 is a fully learned end-to-end motion planner that can quickly and safely adapt to complex driving environments anywhere in the world. The full AV2.0 platform combines a camera-led sensor suite, this end-to-end neural motion planner, and an autonomous driving system designed with safety and redundancy in mind.
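
To make the idea concrete, here is a minimal sketch of what an end-to-end neural motion planner can look like: raw camera pixels in, a short-horizon motion plan out, with no hand-designed perception interface in between. The architecture, layer sizes, and output format are illustrative assumptions, not a description of Wayve's actual model.

```python
# Illustrative sketch only: a toy end-to-end planner that maps camera frames
# directly to a short motion plan. Layer sizes, names, and the output format
# are assumptions for exposition; they do not describe Wayve's actual model.
import torch
import torch.nn as nn

class EndToEndPlanner(nn.Module):
    def __init__(self, horizon: int = 10):
        super().__init__()
        self.horizon = horizon
        # Learned scene representation straight from raw pixels,
        # with no hand-designed perception interface in between.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Motion-plan head: (steering, speed) for each future step.
        self.planner = nn.Sequential(
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, horizon * 2),
        )

    def forward(self, camera: torch.Tensor) -> torch.Tensor:
        features = self.encoder(camera)        # (B, 64)
        plan = self.planner(features)          # (B, horizon * 2)
        return plan.view(-1, self.horizon, 2)  # (B, horizon, [steer, speed])

# One camera frame in, a motion plan out.
plan = EndToEndPlanner()(torch.randn(1, 3, 128, 256))
```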

Limits of the traditional approach

It is instructive to contrast AV2.0 with the approach widely used in the AV industry today, which we call AV1.0. The traditional approach is a modular perception-prediction-planning stack derived from classical robotics principles. AV1.0 is, broadly speaking, motivated by the principle that if perception is solved, then motion planning is easy. Despite years of engineering effort and billions in investment by numerous companies, this premise is yet to be proven.

Well-engineered AV1.0 stacks that follow this modular design principle have the benefit that the perception, planning, and control modules can be engineered independently and in parallel. However, these stacks are very expensive to design, adapt, and maintain, and are reliant on expensive hardware, HD mapping, and localization systems. They are also brittle, because they place extremely high demands on the sensing and perception modules. Furthermore, the interfaces between the fundamental modules need constant adaptation, and errors propagate throughout the stack. These planners, although evolving from classical algorithms to more data-driven ones, still suffer from perception and localization errors.
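
For contrast, the modular AV1.0 pattern described above can be caricatured as a chain of hand-designed interfaces. This is a schematic sketch only; the stage names and data types are assumptions for exposition.

```python
# Schematic sketch of the AV1.0 modular pattern described above; the stage
# names and data types are assumptions for exposition only.
from dataclasses import dataclass

@dataclass
class Detections:
    objects: list       # perception output: boxes, classes, ...

@dataclass
class Forecasts:
    trajectories: list  # prediction output: future paths per object

@dataclass
class Plan:
    waypoints: list     # planner output consumed by control

def perceive(sensors) -> Detections: ...
def predict(detections: Detections) -> Forecasts: ...
def plan_motion(forecasts: Forecasts, hd_map) -> Plan: ...

def drive(sensors, hd_map) -> Plan:
    # Each hand-designed interface is a failure surface: a missed detection
    # in perceive() silently corrupts predict() and plan_motion(), and any
    # change to one module's output schema forces rework downstream.
    return plan_motion(predict(perceive(sensors)), hd_map)
```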

Our alternative vision

We reframe the driving problem as one that can be solved fully using machine learning, i.e., jointly learning to represent any driving scene and to plan motion with a deep neural network trained on large quantities of human driving demonstrations. This approach enables us to build an autonomous mobility platform that can quickly and safely adapt to new cities, use cases, and vehicle types, which is the core promise of AV2.0. Achieving this is game-changing for scaling autonomy: it means we can deploy in new markets faster and at substantially lower cost, moving us closer to our goal of being the first to bring AVs to 100 cities.
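
In its simplest form, learning from human driving demonstrations looks like behavioural cloning. The loop below is a hedged sketch of that idea, reusing the illustrative EndToEndPlanner from the earlier snippet; the loss, optimiser, and data format are placeholder assumptions, not Wayve's actual training setup.

```python
# A hedged behavioural-cloning sketch, reusing the illustrative
# EndToEndPlanner above. The loss, optimiser, and data format are
# placeholder assumptions, not Wayve's actual training setup.
import torch
from torch.utils.data import DataLoader

model = EndToEndPlanner()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = torch.nn.MSELoss()

def train_epoch(loader: DataLoader) -> float:
    total = 0.0
    for camera, expert_plan in loader:  # expert_plan: (B, horizon, 2) from human driving
        predicted = model(camera)
        loss = loss_fn(predicted, expert_plan)  # imitate the demonstration
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        total += loss.item()
    return total / len(loader)
```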

Developing AVs that can easily drive in new places is significantly different from current AV industry methods, which require time-consuming and expensive city-specific adaptations, such as building and maintaining highly detailed, customized HD maps for every road driven.

To deploy in a new location, an AV1.0 team starts by manually driving sensor-equipped vehicles down every street to build a 3D picture of the environment, down to the centimeter. They process this data into a detailed map with additional context such as speed limits, lanes, and traffic light locations. The maps are then tested and verified before deployment, and constantly updated as city streets change. With the environment mapped, there remains the challenge of adapting the behavior planning to the new driving culture and environment. This is notably difficult, even when moving between two seemingly similar environments; the usual remedy is an engineering team redesigning components of a large, complex planner, a process that takes months.

In contrast, at Wayve, we are building AVs that generalise and are intelligent enough not to need these cumbersome HD maps. The world is constantly changing, so we need to be able to adapt to drive anywhere. In practice, this means we can train our AV2.0 system to drive autonomously on, say, London roads, and it can then apply this acquired driving skill to new, unseen places and cities without any place- or city-specific adaptation.

How did we test this?

To demonstrate this capability, we recently conducted a multi-city generalisation test in which we took our best-performing AV2.0 model to 5 different cities across the UK that we had never previously driven in. The goal was to see whether our AV2.0 model, trained in London, could generalise its driving intelligence to new cities, with no prior data collection in those cities to influence model performance.

Map showing testing in Leeds, Manchester, Liverpool, Cambridge, Coventry and London
  • Training city: London
  • New test cities: Cambridge, Coventry, Leeds, Liverpool, Manchester
  • Trial duration: a 3-week period in September 2021

As you can see in the charts below, we selected cities and routes that varied in terms of road layout, road features, driving complexity, and traffic density. For example, the areas we drove through in Manchester had more pedestrians, bus lanes, and traffic lights, whereas our Leeds testing routes had almost 4 times as many cycle lanes.

Chart showing road features detected on test routes

When it comes to road density, London, for the most part, had more traffic in all forms (e.g. cars, buses, pedestrians, cyclists), with a few exceptions, such as cars in Coventry, pedestrians in Manchester, and cyclists in Cambridge.

Chart showing road density detected on test routes
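
As a hypothetical illustration of the kind of analysis behind these charts, the snippet below counts detected road features per city from a table of per-route detections. The column names and rows are invented for exposition, not our actual telemetry schema.

```python
# Hypothetical illustration of the analysis behind these charts: counting
# detected road features per city. Column names and rows are invented.
import pandas as pd

detections = pd.DataFrame({
    "city":    ["London", "Leeds", "Leeds", "Manchester", "Manchester"],
    "feature": ["cycle_lane", "cycle_lane", "cycle_lane", "bus_lane", "traffic_light"],
})
per_city = detections.groupby(["city", "feature"]).size().unstack(fill_value=0)
print(per_city)  # rows: cities; columns: feature counts along each test route
```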

What did we observe?

Video showing highlights from the multi-city generalisation test (available on YouTube)

This test produced promising evidence that our AV2.0 model can indeed generalise its learned driving skill to new places. Without any prior city-specific adaptation, our autonomous driving system drove over 610km in previously unseen cities, demonstrating all of the skills it learned in London. This is exciting, as it gives us a clear signal that we are building an autonomous mobility platform that can drive anywhere based on what it learns from training in London.

For example, this is our model driving autonomously through busy traffic in Central London (left frame), and then applying the same skill to driving in traffic for the first time in Leeds (right frame).
Here, you can see our car driving autonomously around a bus on its first drive in Liverpool (right frame) in a similar fashion to how it learned to drive around buses and change lanes in London (left frame).
In Cambridge (right frame), the model drove autonomously around roadworks as competently as it does during training in London (left frame).
Roundabouts can be quite tricky, even for a person to navigate. But in Coventry (right frame), we saw our autonomous vehicle safely drive through a roundabout just like it does in Central London (left frame).
Finally, one of the hardest maneuvers in driving: unprotected right turns. In the left frame, you can see our car driving autonomously through a busy, unstructured intersection in Camden, avoiding cyclists, pedestrians, cars, and a motorbike. On the right is an example of our AV making a similar unprotected right turn through a busy intersection in Manchester for the first time.

With all of these examples, we see that our AV2.0 model demonstrates the same driving competencies it learned from its training curriculum in London. Our training dataset, captured in London, includes many samples of 20mph single-lane roads, navigating busy intersections, making left and right turns, changing lanes, and passing buses. From basic machine learning principles, we would expect that where the training and test data distributions overlap, model performance should transfer accordingly. Our multi-city generalisation test confirmed this.
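
One hedged way to make "the training and test data distributions overlap" measurable is to compare normalised histograms of a route attribute across cities. The sketch below uses histogram intersection over speed-limit buckets; all the numbers are invented for illustration.

```python
# One way to quantify train/test distribution overlap: histogram
# intersection over speed-limit buckets. All numbers are invented.
import numpy as np

def overlap(p: np.ndarray, q: np.ndarray) -> float:
    """Histogram intersection in [0, 1]; 1.0 means identical distributions."""
    p, q = p / p.sum(), q / q.sum()
    return float(np.minimum(p, q).sum())

london = np.array([70, 25, 5])  # assumed % of route at 20 / 30 / 40 mph
leeds = np.array([40, 50, 10])
print(f"speed-limit overlap: {overlap(london, leeds):.2f}")
```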

Watch additional driving footage from each of our 5 test cities

Video showing driving footage in Cambridge

Cambridge’s test routes had medium traffic density with many 30mph roads, bus lanes, and cyclists.

Video showing driving footage in Coventry

Coventry had higher car traffic density on our test routes, with many 30mph single-lane roads, roundabouts, and intersections.

Video showing driving footage in Leeds

Leeds had medium traffic and pedestrian density, with many 20-30mph single-lane roads with cycle and bus lanes.

Video showing driving footage in Liverpool

Liverpool’s test routes had high traffic density with a mix of 30mph multi-lane and single-lane roads, traffic lights, and cycle lanes.

Video showing driving footage in Manchester

The routes selected in Manchester had the highest traffic and pedestrian density of the 5 test city routes, with a mix of 20-30mph single- and multi-lane roads.

Video showing driving footage in London

London is where we train. It has very high traffic and pedestrian density with mainly 20mph single lane roads, busy intersections, bus lanes, and cyclists.

What if the training and test data distributions don’t overlap? How quickly can we adapt model performance?

As we analyzed the routes in the new cities, we found a few differences between the training and test distributions. As you can see in the chart below, one notable difference was that speed limits outside of London were skewed more towards 30mph, whereas our usual London training routes are mainly 20mph roads.

Diagram showing operational design domain distribution comparison

Therefore, to adapt model performance to the new cities, we retrained the model with a higher proportion of 30mph driving data from the London training set. We found that this quick adaptation improved performance on 30mph routes by nearly 30%. While this may seem like a small change, what’s important to emphasize here is both the simplicity and scalability of our data-driven AV2.0 approach, which doesn’t require programming new hand-written rules when entering new markets.
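
A minimal sketch of this kind of adaptation, assuming per-sample speed-limit labels on the existing training set: re-weight the sampling so 30mph segments appear more often, with no new data collection and no hand-written rules. PyTorch’s WeightedRandomSampler is a standard utility; the labels and weights here are illustrative assumptions.

```python
# Sketch of the re-weighting described above: oversample 30mph segments
# from the existing London training set. WeightedRandomSampler is standard
# PyTorch; the labels and weights here are illustrative assumptions.
import torch
from torch.utils.data import WeightedRandomSampler

speed_limit = torch.tensor([20, 20, 30, 20, 30, 20])  # mph bucket per training sample
weights = (speed_limit == 30).float() * 2.0 + 1.0     # 30mph samples weighted 3x
sampler = WeightedRandomSampler(weights, num_samples=len(weights), replacement=True)
# Passing `sampler` to a DataLoader makes each retraining batch contain a
# higher proportion of 30mph driving, shifting the training distribution
# toward the new cities' routes without collecting any new data.
```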

What lies ahead

This multi-city generalisation test is a stepping stone to reaching our grand goal of bringing autonomy to 100 cities. In this post, we showcased what AV2.0 offers in terms of simplicity, efficiency, and scalability to bring autonomy to new, previously unseen places. Moreover, this test demonstrated that solving driving in London can fast-track deployment anywhere.

From an engineering perspective, we will continue to adjust and adapt our machine learning data curriculum to map more closely to a general driving experience. This means we will keep teaching our driving models with diverse experience across routes, roads, and environments to ensure our AV models do not overfit to particular driving domains. Wayve’s roadmap prioritizes building the most performant, comfortable, and safe driver that can quickly and safely generalise using machine learning.

As we continue to scale performance with data, we are equally excited by new developments in machine learning models to further boost the learning efficiency of AV2.0 models and the ability to reason and plan over longer horizons. Stay tuned for future updates on our work in this area!