The growth path for tech giant Nvidia lies in the development of advanced artificial intelligence (AI) chips. CEO Jensen Huang believes the new products represent a revenue opportunity of at least $1 trillion for the company by 2027.

Huang made the prediction this Monday, March 16, during Nvidia's annual developer conference, held in California.

Although Huang did not detail how the company might capture this market, his outlook is far more ambitious than the one he offered on the last earnings call, in February.

At the time, the CEO had said the company could capture opportunities worth at least US$500 billion through 2026. In practice, he is now doubling that figure and extending the timeframe by another year.

At the event, Nvidia presented a new central processing unit and an AI system based on technology from the American startup Groq. The launch is part of Huang's initiative to strengthen the company's position in so-called "inferential computing."

Huang said that inference, the process by which AI systems answer questions or perform tasks, will be split into two stages. Nvidia's Vera Rubin chips will handle the first, known as "prefill," in which the user's request is converted from human language into tokens.
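The two-stage split Huang describes (prefill, then token-by-token generation) can be illustrated with a toy sketch. The whitespace tokenizer and the "next token" rule below are placeholders chosen for illustration only, not anything Nvidia or Groq actually ships:

```python
# Toy illustration of the two inference stages: prefill converts the
# user's words into token IDs, decode then produces output tokens one
# at a time. Both functions are deliberately simplistic stand-ins.

def prefill(prompt: str) -> list[int]:
    """Stage 1 ("prefill"): turn the user's words into token IDs.
    A trivial whitespace tokenizer stands in for a real one."""
    vocab: dict[str, int] = {}
    tokens = []
    for word in prompt.lower().split():
        # Assign each new word the next free ID, reuse IDs for repeats.
        tokens.append(vocab.setdefault(word, len(vocab)))
    return tokens

def decode(tokens: list[int], max_new: int = 3) -> list[int]:
    """Stage 2 ("decode"): generate output tokens one at a time,
    each step conditioned on everything produced so far.
    A placeholder arithmetic rule stands in for a real model."""
    out = list(tokens)
    for _ in range(max_new):
        out.append(sum(out) % 7)  # hypothetical "next token" rule
    return out

prompt_tokens = prefill("What did Nvidia announce")  # → [0, 1, 2, 3]
answer_tokens = decode(prompt_tokens)
```

The point of the split is operational: prefill processes the whole prompt in one parallel pass, while decode is inherently sequential, which is why the two stages can benefit from different hardware.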

Groq, which signed a $17 billion strategic licensing agreement with Nvidia in December, specializes in fast, inexpensive "inference" computing, in which an AI model uses what it has already learned to answer a question or make a prediction in real time.

Nvidia also unveiled at the conference a next-generation AI chip called Feynman, in honor of the American physicist Richard Feynman, winner of the 1965 Nobel Prize in Physics, who died in 1988.

Huang said at the event that part of Nvidia's competitive advantage lies in CUDA, its software platform for programming the company's chips, which some analysts consider its greatest asset.

“The installed base is what attracts developers who, in turn, create the new algorithms that enable innovative technologies,” said Huang. “We are in every cloud. We are in every IT company. We serve virtually every sector.”

He also unveiled a new line in the chip family, the Groq 3 language processing unit, designed to speed up AI systems' responses to user questions.

The move shows Nvidia seeking to consolidate its lead in the AI chip market by exploring new architectures, moving beyond its history of offering a single GPU design for AI workloads.

“We are in high-volume production now,” said the company founder. “We will launch [the Groq 3] in the second half of 2026, probably in the third quarter.”

Huang announced that the chip will be manufactured by South Korea's Samsung, a change for Nvidia, which normally uses Taiwan Semiconductor Manufacturing Company (TSMC) to build its AI processors.

“Huang’s outline, which envisions a $1 trillion opportunity, reinforces the enduring demand for Nvidia’s AI infrastructure, despite investor concerns,” Jacob Bourne, an analyst at Emarketer, told Reuters.

"This demonstrates that Nvidia is maintaining its leadership in the AI chip market, while the industry as a whole expands, leaving behind the initial experimentation phase and entering large-scale deployment," he added.

In any case, Huang's prediction of $1 trillion in revenue from AI hardware is much higher than the consensus Wall Street estimates for Nvidia's total revenue.

Analysts' forecasts for fiscal years 2027 and 2028 (which run through the end of January 2028) total approximately US$835 billion for the company, according to the market intelligence platform CapitalIQ.

Nevertheless, the market reacted positively to Monday's announcements by Nvidia's CEO. On the Nasdaq, the company's shares closed up 1.63%; over the past 12 months, the stock has gained 53.2%.

Nvidia is valued at US$4.45 trillion.