New autonomous driving AIs train on real roads

Four years ago, Alex Kendall, founder and CEO of UK self-driving car company Wayve, was driving down a small country road in rural Britain when he took his hands off the wheel. The car, equipped with cheap cameras and a huge neural network, swerved to one side. Kendall gripped the steering wheel for a few seconds to correct it. The car swerved again, and Kendall corrected it again. It took less than 20 minutes for the car to learn to follow the road on its own, Kendall remembers.

It was the first time reinforcement learning, an artificial intelligence (AI) technique that trains a neural network to perform a task through trial and error, had been used to teach a car to drive itself from scratch on a real road. It was a small step in a new direction, one that the latest generation of startups thinks could be the breakthrough that makes autonomous cars an everyday reality.
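The trial-and-error loop Kendall describes can be sketched in a few lines. This is a toy, hypothetical example, not Wayve's actual algorithm: the steering values, learning rate, and update rule are all invented. The point it illustrates is that each human correction acts as an error signal that nudges the car's policy toward the right behavior.

```python
# Toy sketch of learning from driver corrections (hypothetical values;
# not Wayve's actual algorithm). The car proposes a steering angle,
# the human correction supplies the error, and the policy is nudged
# toward the corrected behavior on every attempt.

target_steering = 0.3   # the angle a human driver would choose here
policy = 0.0            # the car's current guess, starting from scratch
learning_rate = 0.5     # how strongly each correction updates the policy

for attempt in range(20):                   # a handful of attempts
    correction = target_steering - policy   # the driver grabs the wheel
    policy += learning_rate * correction    # learn from the correction

print(round(policy, 3))  # converges toward 0.3
```

After only 20 corrections the policy has essentially converged, which is the flavor of the "less than 20 minutes" anecdote above.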

Reinforcement learning has been hugely successful at producing computer programs that can play video games and Go with superhuman ability; it has even been used to control a nuclear fusion reactor. But driving was thought to be too complicated. “They laughed at us,” Kendall confesses.

Wayve currently trains its cars during rush hour in London. Last year, it showed that a car trained on the streets of London could drive in five other UK cities, Cambridge, Coventry, Leeds, Liverpool and Manchester, without additional training. It’s something industry leaders like Cruise and Waymo have struggled to achieve. This month Wayve announced it will partner with Microsoft to train its neural network on Azure, the tech giant’s cloud supercomputer.

Investors have spent more than $100 billion (€95.4 billion) on building cars capable of driving themselves. That’s a third of what NASA spent to put people on the Moon. Yet despite a decade and a half of development and countless miles of road testing, driverless technology is stuck in the pilot phase. “We’re seeing extraordinary amounts being invested for very limited results,” admits Kendall.

That is why Wayve and other autonomous-vehicle startups, such as Waabi and Ghost, both in the US, and Autobrains, based in Israel, are betting on AI. Under the AV2.0 banner, they believe smarter, cheaper technology will let them overtake the current market leaders.

The overblown machines

Wayve wants to be the first company to deploy autonomous cars in 100 cities. But isn’t that yet another stretch for an industry that has been believing its own hype for years?

“There is too much hype in this field,” warns Raquel Urtasun, who spent four years as head of Uber’s autonomous driving team before leaving to found Waabi in 2021. “There is also a lack of appreciation of how difficult this task is in the first place. But I don’t think the current overall strategy for autonomous driving gets us to where we need to be to deploy the technology safely.”

That widespread approach dates back to at least 2007 and the DARPA Urban Challenge, when six teams of researchers got their robotic vehicles to navigate a mock small town on a disused US Air Force base.

Waymo and Cruise launched off the back of that success, and the robotics approach taken by the winning teams stuck. Their strategy treats perception, decision making and vehicle control as separate problems, with a different module for each. But this can make the overall system difficult to build and maintain, with bugs in one module spilling over into others, Urtasun explains. “We need the AI mindset, not the robotics one,” says the expert.

The new idea is this: instead of building a system out of multiple manually connected neural networks, Wayve, Waabi and other companies are creating one large neural network that figures out the details on its own. Given enough data, the AI will learn to convert input (data from cameras or LIDAR about the road ahead) into output (turning the steering wheel or hitting the brakes), like a child learning to ride a bike.

Skipping straight from input to output is known as end-to-end learning, and it is what GPT-3 did for natural language processing and AlphaZero for Go and chess. “In the last 10 years this has caused numerous seemingly intractable problems to be solved,” says Kendall. “End-to-end learning pushed us toward superhuman capabilities. Driving will be no different.”
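As a rough illustration of what "end-to-end" means here, the sketch below maps a camera-like input straight to steering and braking outputs, with no separate perception, planning, or control modules in between. The names, layer sizes, and random weights are invented stand-ins for a trained network, not any company's real architecture.

```python
import numpy as np

# Hypothetical end-to-end sketch: one network maps raw sensor input
# directly to control outputs. Random weights stand in for training.
rng = np.random.default_rng(0)

# Fake "camera frame": 64 pixel values flattened into a vector.
camera_frame = rng.random(64)

# A single hidden layer standing in for a much larger network.
W1 = rng.standard_normal((32, 64)) * 0.1
W2 = rng.standard_normal((2, 32)) * 0.1   # 2 outputs: steering, brake

hidden = np.tanh(W1 @ camera_frame)
steering, brake = np.tanh(W2 @ hidden)    # both bounded in [-1, 1]

print(f"steering={steering:+.3f}, brake={brake:+.3f}")
```

In the modular approach this one mapping would instead be split into hand-built perception, prediction, and planning stages; the end-to-end bet is that a single trained network can discover those stages itself.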

Like Wayve, Waabi uses end-to-end learning. However, it does not (yet) use real vehicles. It is developing its AI almost entirely inside a hyper-realistic driving simulation, overseen by an AI driving instructor. Ghost also takes an AI-first approach, using driverless technology that not only navigates roads but also learns how to react to other drivers.

200,000 little problems

Autobrains also goes for the end-to-end approach, but does something different with it. Instead of training a huge neural network to deal with anything a car might encounter, the company is training many smaller networks (hundreds of thousands in fact) so that each handles a very specific scenario.

“We are translating the hardest autonomous-car problem into hundreds of thousands of smaller AI problems,” says Igal Raichelgauz, CEO of the company. Using one large model makes the problem more complex than it really is, he explains: “When I’m driving, I’m not trying to understand every pixel on the road. It’s about pulling out contextual cues.”

Autobrains takes data from a car’s sensors and passes it through an AI that matches the current situation to one of many possible scenarios: rain, a pedestrian crossing, a traffic light, a bicycle turning right, a car behind, and so on. After analyzing a million miles of driving data, Autobrains says its AI has identified around 200,000 unique scenarios, and the company is training an individual neural network to handle each of them.
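The scenario-matching idea can be sketched as a nearest-neighbor lookup. Everything below is hypothetical for illustration, three made-up scenarios instead of 200,000, with hand-picked feature vectors: the car's sensor features are compared against known scenario embeddings, and the closest match decides which specialized network would take over.

```python
import numpy as np

# Hypothetical scenario embeddings (invented for illustration; the
# real system reportedly distinguishes around 200,000 scenarios).
scenarios = {
    "rain":                np.array([1.0, 0.0, 0.0]),
    "pedestrian_crossing": np.array([0.0, 1.0, 0.0]),
    "bicycle_turning":     np.array([0.0, 0.0, 1.0]),
}

def match_scenario(features: np.ndarray) -> str:
    """Return the scenario whose embedding is closest to the input."""
    return min(scenarios, key=lambda s: np.linalg.norm(scenarios[s] - features))

# Fake sensor-derived features for the current moment of driving.
current = np.array([0.1, 0.9, 0.2])
print(match_scenario(current))  # -> pedestrian_crossing
```

Each matched scenario would then dispatch to its own small, dedicated network, which is the sense in which one huge driving problem becomes hundreds of thousands of smaller ones.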

The company has partnered with several carmakers to test its technology and has just launched a small fleet of its own vehicles.

Kendall thinks what Autobrains does could work well for advanced driver-assistance systems, but doesn’t see an advantage over his own approach. “By tackling the whole problem of autonomous driving, I would expect them to run into the same complexity that exists in the real world,” he says.

The autopilot

So, should we trust this new wave of companies chasing the market leaders? Unsurprisingly, Mo ElShenawy, Cruise’s executive vice president of engineering, isn’t convinced. “The state of the art as it exists today isn’t enough to get to the stage Cruise is at,” he points out.

Cruise is one of the most advanced autonomous-car companies in the world. Since November last year it has been offering a live robo-taxi service in San Francisco (USA). Its vehicles operate in a limited area, but currently anyone can use the Cruise app to hail a driverless car that will pull up to the curb. “We see a wide variety of reactions from our customers,” says ElShenawy. “It’s very exciting.”

Cruise has built a huge virtual factory to support its software, with hundreds of engineers working on different parts of the system. ElShenawy argues that the conventional modular approach is an advantage because it allows the company to swap in new technologies as they emerge.

He also dismisses the idea that Cruise’s approach won’t generalize to other cities. “We could have thrown ourselves into a suburb somewhere years ago, and that would have cornered us,” he explains. “Choosing a complex urban environment like San Francisco, where there are hundreds of thousands of cyclists and pedestrians and emergency vehicles and cars blocking the way, was deliberate. It forces us to build something that can be easily expanded.”

But before Cruise enters a new city, it first has to map its streets in centimeter-level detail. Most driverless-car companies use high-definition 3D maps of this kind, which give the vehicle additional information on top of the raw sensor data it receives on the go, typically including details such as the location of lane edges and traffic lights, or whether there are curbs on a particular stretch of road.

These so-called HD maps are created by combining road data collected by cameras and LIDAR with satellite images. Hundreds of millions of miles of roads have been mapped in this way in the United States, Europe, and Asia. But the design of the roads changes every day, which means that map creation is a never-ending process.

Many autonomous-car companies use HD maps created and maintained by specialized firms, but Cruise builds its own. “We can recreate entire cities: driving conditions, street layouts and everything,” says ElShenawy.

This gives Cruise an edge over its major rivals, but newer companies like Wayve and Autobrains have abandoned HD maps altogether. Wayve’s cars have GPS, but otherwise learn to read the road using only sensor data. That can be harder, but it means they are not tied to a particular location.

For Kendall, this is the key to widespread use of self-driving cars. “It’s going to take longer to get to our first city,” he admits. “But once we get to one city, we can scale everywhere.”

Debate aside, there is still a long way to go. While Cruise’s robo-taxis drive paying customers around San Francisco, Wayve, the most advanced of the new wave, has yet to test its cars without safety drivers. Waabi doesn’t even use real cars.

Still, these new AV2.0 companies have recent history on their side: end-to-end learning has rewritten the rules of what’s possible in computer vision and natural language processing. So their confidence may not be misplaced. “If everyone goes in the same direction and it turns out to be wrong, we are not going to solve this problem,” concludes Urtasun. “We need a diversity of approaches, because we have not yet found the solution.”

