
From London, to Paris, to New York, with no re-engineering

Wayve's deep learning AI technology offers a solution to self-driving: it is the only approach that can truly handle the complexities of driving in dense urban areas and scale efficiently to new regions.

Instead of relying on the traditional AV stack, HD maps and hand-coded rules, we are focused on building a data-driven learned driver that can scale, adapt and generalise its driving intelligence to new places.

In designing our AV2.0 autonomous driving system, we make no compromises that could limit its ability to adapt to new environments. Our embodied intelligence does not depend on virtual infrastructure or costly sensing and mapping. Instead, it's data-driven at every layer, allowing for continuous learning without re-engineering.

In other words, our AV technology learns much like a person does. It can learn how to drive in one city and then apply that knowledge to a new city it's never seen before. It is this adaptability that makes our AV2.0 system compelling for unlocking urban use cases.

Reimagining Autonomous Mobility Through Embodied Intelligence

Driving Intelligence

Wayve's learned perception and planning algorithms can drive from point A to point B in the most complex urban driving environments. Our integrated driving system is structured as a single unified neural network that drives autonomously.

We model the world robustly with computer vision, learning a representation of the scene's semantics, motion and geometry. This representation is learned jointly with the planning and control algorithms, end-to-end. Multitask perception produces a number of intermediate outputs: semantic and instance segmentation, learned depth estimation, learned ego and scene motion, and learned 3D detection, tracking and future prediction. We perceive the scene with surround cameras and use recent advances in policy learning to learn motion planning for driving.
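As a structural sketch only, the pipeline above maps surround-camera images to a set of intermediate perception outputs and then to a motion plan. The class, function and field names below are illustrative assumptions, not Wayve's actual architecture:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class PerceptionOutputs:
    """Intermediate outputs of a multitask perception network."""
    semantic_seg: List[int]                  # per-pixel class labels
    instance_seg: List[int]                  # per-pixel instance ids
    depth: List[float]                       # learned per-pixel depth
    ego_motion: Tuple[float, float, float]   # (dx, dy, dyaw) of the ego vehicle
    detections_3d: List[dict] = field(default_factory=list)  # boxes, tracks, futures

def perceive(surround_images: List[list]) -> PerceptionOutputs:
    """Stand-in for a shared backbone with multiple output heads."""
    n = len(surround_images)
    return PerceptionOutputs(
        semantic_seg=[0] * n,
        instance_seg=[0] * n,
        depth=[1.0] * n,
        ego_motion=(0.0, 0.0, 0.0),
    )

def plan(outputs: PerceptionOutputs) -> List[Tuple[float, float]]:
    """Stand-in for the jointly learned planner: scene representation in,
    waypoints out (here, a trivial straight-ahead path)."""
    return [(0.5 * t, 0.0) for t in range(5)]

# Six surround cameras in, a short trajectory of (x, y) waypoints out.
trajectory = plan(perceive([[0]] * 6))
```

In the real system the two stages are one network trained end-to-end; the split into `perceive` and `plan` here is only to make the intermediate outputs visible.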

We use all available learning signals: supervision and self-supervision for computer vision, and learning from expert demonstrations and reinforcement learning for motion planning.
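One common way to combine such signals is a weighted sum of per-task losses. The task names, loss values and weights below are hypothetical, chosen only to illustrate the idea:

```python
def total_loss(losses: dict, weights: dict) -> float:
    """Weighted sum of per-signal training losses."""
    return sum(weights[name] * value for name, value in losses.items())

# Hypothetical per-batch loss values, one per learning signal.
batch_losses = {
    "supervised_seg": 0.8,   # labelled semantic segmentation
    "self_sup_depth": 0.4,   # photometric self-supervision for depth
    "imitation": 0.6,        # behavioural cloning on expert demonstrations
    "rl_policy": 0.2,        # reinforcement-learning policy term
}
weights = {"supervised_seg": 1.0, "self_sup_depth": 0.5,
           "imitation": 1.0, "rl_policy": 0.1}

loss = total_loss(batch_losses, weights)  # ≈ 1.62
```

In practice the weights themselves are often tuned or learned, since poorly balanced tasks can dominate the gradient.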

Fleet Learning

Wayve builds machine learning technology that scales, using both real and simulated city- and country-scale data to learn robust driving algorithms.

Our fleet learning framework is built to iterate on data at massive scale, letting us quickly sort, inspect, and curate vehicle data.

We use this framework to develop, deploy, and iterate on our driving algorithms, training for thousands of GPU-hours with hybrid cloud infrastructure.
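A minimal sketch of that curation loop, assuming hypothetical clip metadata fields (the `Clip` schema and `curate` helper below are illustrations, not Wayve's tooling):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Clip:
    clip_id: str
    city: str
    weather: str
    intervention: bool   # did a safety driver take over?

def curate(clips: List[Clip], city: Optional[str] = None,
           interventions_only: bool = False, limit: int = 10) -> List[Clip]:
    """Sort, inspect and select fleet data: filter by metadata and
    prioritise clips where the safety driver intervened."""
    selected = [c for c in clips
                if (city is None or c.city == city)
                and (not interventions_only or c.intervention)]
    selected.sort(key=lambda c: c.intervention, reverse=True)
    return selected[:limit]

fleet = [
    Clip("a1", "London", "rain", True),
    Clip("a2", "London", "sun", False),
    Clip("a3", "Paris", "rain", True),
]
hard_cases = curate(fleet, city="London", interventions_only=True)
```

Selecting the hardest examples from fleet data, rather than training on everything uniformly, is what makes iterating at massive scale tractable.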


Embodied AI

We’re building the sensing and computational systems for scalable autonomous driving.

We are a camera-first company: vision provides the richest possible source of sensing information at a price point that works.

We develop our technology from our London HQ using a fleet of Jaguar I-Pace electric vehicles as our reference vehicle architecture for autonomy.

Our hardware is designed to be robust and extensible to support a wide range of vehicles and use cases for autonomy. We use hardware acceleration for image processing and machine learning computational workloads.

Simulation

We procedurally generate worlds for algorithmic development and validation, varying scenario geometry, the visual appearance of the city, traffic density, weather and environmental conditions, and more. This lets Wayve's engineers test algorithms and prove ideas at a scale beyond what is possible with real-world testing or with modelling individual real-world test environments.
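Procedural generation of this kind can be sketched as sampling each scenario axis from a seeded distribution; the parameter names and ranges below are illustrative assumptions:

```python
import random

def generate_scenario(seed: int) -> dict:
    """Sample one simulated world from a seeded distribution, so any
    scenario can be regenerated exactly from its seed."""
    rng = random.Random(seed)
    return {
        "seed": seed,
        "road_curvature": rng.uniform(0.0, 0.3),   # scenario geometry
        "building_style": rng.choice(["brick", "glass", "stone"]),
        "traffic_density": rng.uniform(0.0, 1.0),
        "weather": rng.choice(["clear", "rain", "fog", "snow"]),
        "time_of_day": rng.uniform(0.0, 24.0),
    }

# The same seed always reproduces the same world, which is what makes
# large-scale regression testing of driving algorithms practical.
assert generate_scenario(42) == generate_scenario(42)
```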

Beyond this diversity of worlds and scenarios, simulation also lets us evaluate situations that are simply too dangerous to test in the real world.

We resimulate failures observed in the real world, allowing Wayve's engineers to reproduce them and stress-test improvements to our driving systems.
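Resimulation can be sketched as replaying a logged failure case against a candidate driving policy and checking whether it now passes. The toy 1-D dynamics and names below are assumptions for illustration only:

```python
def resimulate(logged_failure: dict, driver) -> bool:
    """Replay the recorded scenario and check whether the driving
    policy now clears the previously failing case."""
    state = logged_failure["initial_state"]
    for _ in range(logged_failure["steps"]):
        state = state + driver(state)                # toy 1-D dynamics
        if abs(state) > logged_failure["safe_bound"]:
            return False                             # still fails
    return True

failure = {"initial_state": 0.5, "steps": 10, "safe_bound": 2.0}
old_driver = lambda s: 0.3        # drifts steadily out of bounds
new_driver = lambda s: -0.2 * s   # corrects back toward the centre

resimulate(failure, old_driver)   # → False: failure reproduced
resimulate(failure, new_driver)   # → True: improvement verified
```

The key property is determinism: the logged scenario replays identically every time, so a pass or fail reflects the policy change and nothing else.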

Our virtual fleet runs 24/7, and is a key part of Wayve's technology stack.

Selected Publications

Reimagining an Autonomous Vehicle.
Jeffrey Hawke, Haibo E, Vijay Badrinarayanan, Alex Kendall.
August, 2021.

Probabilistic Future Prediction for Video Scene Understanding.
Anthony Hu, Fergal Cotter, Nikhil Mohan, Corina Gurau, Alex Kendall.
European Conference on Computer Vision (ECCV). August, 2020.

Urban Driving with Conditional Imitation Learning.
Jeffrey Hawke, Richard Shen, Corina Gurau, Siddharth Sharma, Daniele Reda, Nikolay Nikolov, Przemysław Mazur, Sean Micklethwaite, Nicolas Griffiths, Amar Shah, Alex Kendall.
Proceedings of the International Conference on Robotics and Automation (ICRA). May, 2020.

Learning to Drive from Simulation without Real World Labels.
Alex Bewley, Jessica Rigley, Yuxuan Liu, Jeffrey Hawke, Richard Shen, Vinh-Dieu Lam and Alex Kendall.
Proceedings of the International Conference on Robotics and Automation (ICRA). May, 2019.

Learning to Drive in a Day.
Alex Kendall, Jeffrey Hawke, David Janz, Przemyslaw Mazur, Daniele Reda, John-Mark Allen, Vinh-Dieu Lam, Alex Bewley, Amar Shah.
Proceedings of the International Conference on Robotics and Automation (ICRA). May, 2019.

Our Research and Engineering

A New Approach to Self-Driving: AV2.0

A perspective from Jeff Hawke, our VP Technology
Company News, 15th August 2021
