Automotive AI poses EV range and sustainability challenge

SAN JOSE, Calif. — The promise of artificial intelligence for automobiles took center stage at the GTC technology conference here this week.

Nvidia showed how it is using large language models to help autonomous vehicles process rare events. Volvo is collecting real-world data from connected vehicles to train the algorithms governing its advanced driver-assist functions. Elsewhere, GM Motorsports shared how it is using physics-based AI predictions to make R&D decisions about aerodynamics.

But all this powerful software requires ever more of the building blocks of computation: processing power, memory and storage, resources collectively referred to as compute. Those resources also consume a lot of electricity.

The automotive industry has yet to thoroughly address how the increasing compute requirements tied to artificial intelligence in the vehicle might affect company sustainability goals. There's also a tension between diverting all this power to computation and extending EV range, since the same battery powers the AI-enabled functions onboard.

“It is significant enough to where you want to design something that is as efficient as possible so that” compute “isn’t what’s decreasing your range,” said Harrison Xue, a managing director and partner at Boston Consulting Group.


As automakers seek to employ AI across their processes, three areas are seeing additional compute requirements, Xue said.

First, automakers are using more compute to make the software and hardware governing the vehicles. The companies employing artificial intelligence to design vehicles, for example, are training algorithms to recognize the physical laws that govern drag and aerodynamics. That takes computing power.

Training the algorithms that underpin automated driving is even more compute-intensive. AI powering robotaxis, for example, must learn to respond to all manner of possible scenarios and variants using simulations and a digital twin of the vehicle. Generating those scenarios, rendering them in simulation and training the algorithm all require compute.

Then, companies are using compute in the cloud for operations such as updating vehicles over the air. There is also computing that happens in the cloud to manage commercial fleets.

Finally, vehicles themselves require compute. That work cannot be offloaded to the cloud because safety-critical computation must happen instantly. Advanced driver-assist functions, which include perceiving surroundings, processing that perception, and planning and executing controls, run on compute onboard. Other vehicle functions, such as infotainment, require compute to a lesser extent.
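That perceive-process-plan-execute pipeline can be pictured as a latency-bounded loop. The sketch below is purely illustrative Python, not any automaker's actual stack: the function names, the data format and the 50-millisecond budget are all assumptions chosen to show why the loop must run on in-vehicle compute rather than in the cloud.

```python
import time

# Hypothetical, highly simplified stand-ins for real perception and
# planning stacks, which in practice are large neural networks.
def perceive(sensor_frame):
    """Turn raw sensor data into a list of detected objects."""
    return [obj for obj in sensor_frame if obj.get("visible")]

def plan(objects):
    """Choose a control action based on what was perceived."""
    return "brake" if any(o["distance_m"] < 10 for o in objects) else "cruise"

def control_loop(sensor_frame, budget_ms=50):
    """One tick of the onboard pipeline: perceive -> plan -> act.

    The whole tick must finish inside a tight latency budget; a
    round trip to a cloud server cannot guarantee that deadline,
    which is why this work stays on in-vehicle compute.
    """
    start = time.perf_counter()
    action = plan(perceive(sensor_frame))
    elapsed_ms = (time.perf_counter() - start) * 1000
    assert elapsed_ms < budget_ms, "missed the safety-critical deadline"
    return action

frame = [{"visible": True, "distance_m": 8.0},
         {"visible": False, "distance_m": 3.0}]
print(control_loop(frame))  # prints "brake": a visible object is within 10 m
```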


Many factors impact the compute requirements for different automotive tasks, including the speed of processing, resolution and accuracy.

Since 1965, a rule of thumb for the IT industry, known as Moore's law, has held that the computing capability of hardware roughly doubles every two years. OpenAI said in 2018 that "the amount of compute used in the largest AI training runs has been increasing exponentially with a 3.4-month doubling time." That was before OpenAI released ChatGPT, the chatbot that exposed the public to the possibilities of generative AI, in 2022.

It’s difficult to quantify the increase in compute requirements for the automotive industry. First, companies keep this information close to the vest because the unique cocktail of networking requirements, storage, speed and resolution could give one company an advantage over another.

The number of variables also means there's not yet a standard measurement.

But the hardware underlying many autonomous vehicle simulators, platforms from Nvidia, has increased its compute capacity by 1,000 times over eight years.
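The growth rates quoted in this section can be checked with a little arithmetic. A minimal sketch (the function names are mine, not from any cited source):

```python
import math

def growth(doubling_months, horizon_months):
    """Total growth factor over a horizon, given a doubling time."""
    return 2 ** (horizon_months / doubling_months)

def doubling_time(factor, horizon_months):
    """Doubling time implied by a total growth factor over a horizon."""
    return horizon_months / math.log2(factor)

# Moore's law: doubling every ~24 months gives 2x over two years.
moore = growth(24, 24)

# OpenAI's 2018 estimate: a 3.4-month doubling time gives roughly
# 130x over those same two years.
ai_training = growth(3.4, 24)

# Nvidia's simulator hardware: 1,000x over eight years implies a
# doubling roughly every 9.6 months.
nvidia = doubling_time(1000, 8 * 12)

print(round(moore, 1), round(ai_training, 1), round(nvidia, 1))
# prints 2.0 133.3 9.6
```

The comparison makes the article's point concrete: AI training demand has been growing far faster than the historical hardware baseline, so efficiency gains have to close the gap.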


Nvidia CEO Jensen Huang said the company is constrained by the physical limitations of the hardware, so the way Nvidia has increased compute capacity is by improving energy efficiency.

“Energy efficiency and cost efficiency is, in fact, at the core of everything we do,” he said.

Automakers have to consider that efficiency, and related factors such as thermal load, throughout the design and manufacturing process. AI is permeating even lighter compute workloads, such as voice assistants trained on language data to respond better to passenger and driver inquiries.

Alwin Bakkenes, head of software engineering at Volvo Cars, said the company is building power-saving modes into features, and saving compute capacity for new features as they arise.

“We are balancing” compute requirements and power draw, “and I think as you can see, from the EX90, it has a stated range well over 300 miles,” he said. “We seem to balance that quite well.”

But while companies such as Nvidia and individual automakers are trying to maintain efficiency, AI-enabled workloads are likely to increase.

Ben Ellencweig, a senior partner at the consulting firm McKinsey & Co., said it's tough to predict whether the energy efficiency of hardware, such as chips, and software, such as simulators, will keep pace with increasing AI workloads and data volumes.

The automotive industry “needs to think through” that “equation, as we’re talking about sustainability and moving to electrification,” he said. “The beauty is, it’s early days enough.”

