Elon Musk, the CEO of Tesla Inc. TSLA, expressed his dissatisfaction with the term GPU while announcing that the company's core AI infrastructure is no longer tied to training.
What happened: During his first-quarter earnings call on Tuesday, Musk revealed that Tesla has been actively expanding its core artificial intelligence infrastructure. He said: “At this point we are no longer tied to training and so we are making rapid progress.”
The tech billionaire also revealed that Tesla has installed and activated 35,000 H100 GPUs, and the company expects that number to reach roughly 85,000 by the end of the year, mostly for training purposes.
See also: Elon Musk says Tesla Optimus humanoid robot could be available externally by end of next year – here’s the progress report
“We’re making sure we’re as efficient as possible in our lineup,” Musk said, adding that it’s not just a matter of how many H100s Tesla has but “how efficiently they’re used.”
During the call, Musk also expressed his discomfort with the term GPU. “I always get startled when I say GPU because it’s not [a GPU]. The G stands for graphics, and it doesn’t do graphics,” the tech mogul said.
“The GPU is [the] wrong word,” he said, adding: “They need a new word.”
Why it matters: Musk’s statement came after Tesla reported first-quarter revenue of $21.0 billion, down 9% year-over-year and missing the Street consensus estimate of $22.15 billion. The company said its revenue was impacted by lower average selling prices and lower vehicle deliveries during the quarter.
On the other hand, Nvidia Corporation NVDA made a significant impact on the artificial intelligence and computing industries last year with its H100 data center chip, which helped add more than $1 trillion to the company's overall value.
In February, it was reported that demand for the H100 chip, which is four times faster than its predecessor, the A100, at training large language models (LLMs) and 30 times faster at responding to user requests, has been so substantial that customers face wait times of up to six months.
Meanwhile, earlier this month, Piper Sandler analyst Harsh V. Kumar spoke directly with Nvidia’s management team and reported that demand for Nvidia’s Hopper GPUs remains strong and continues to outpace supply, despite the architecture having been on the market for nearly two years. Customers are reluctant to move their orders from Hopper to Blackwell, fearing extended wait times due to expected supply limitations.
Learn more about the future of consumer technology from Benzinga by following this link.
Read next: Elon Musk likes games from this country because they haven’t been corrupted by “Woke DEI Lies”
Disclaimer: This content was partially produced with the help of Benzinga Neuro and has been reviewed and published by Benzinga Editors.
Photo via Shutterstock