Nvidia's stellar Q3 proves the age of AI is in full swing

Shawn Knight

Staff member
TL;DR: Nvidia has posted record revenue for the third quarter ending October 27, 2024, serving as further proof that the age of AI is in full swing. For the three-month period, Nvidia generated $35.1 billion in revenue. That is an increase of 17 percent compared to the previous quarter and a whopping 94 percent more than it brought in during the same period a year earlier.

Non-GAAP earnings per share were $0.81 – up 19 percent sequentially and 103 percent from a year ago. For reference, analysts were expecting $33.2 billion in revenue and EPS of $0.74.

The vast majority of Nvidia's earnings came from the data center division, which generated $30.8 billion. That's up 17 percent quarter over quarter and 112 percent compared to Q3 2023. Gaming revenue in the quarter was $3.3 billion – an increase of 14 percent versus last quarter and 15 percent from a year earlier.
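As a quick sanity check on those growth figures (a back-of-the-envelope sketch using only the percentages quoted above, not Nvidia's reported numbers for the earlier periods), the implied prior-period revenue follows from dividing current revenue by one plus the growth rate:

```python
# Implied prior-period revenue from a stated growth rate:
# prior = current / (1 + rate). Figures in billions of dollars.

def implied_prior(current_billions: float, growth_pct: float) -> float:
    """Revenue in the earlier period implied by the stated growth."""
    return current_billions / (1 + growth_pct / 100)

# Total revenue: $35.1B, up 17% q/q and 94% y/y
print(round(implied_prior(35.1, 17), 1))   # prior quarter: ~30.0
print(round(implied_prior(35.1, 94), 1))   # year-ago quarter: ~18.1

# Data center: $30.8B, up 112% y/y
print(round(implied_prior(30.8, 112), 1))  # year-ago quarter: ~14.5
```

Those implied figures line up with the growth rates quoted in the article.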

Speaking of gaming, it was just over a month ago that Nvidia celebrated the 25th anniversary of the original GeForce card, the GeForce 256. That legendary unit dropped on October 11, 1999, and was marketed as the world's first GPU. While not the first aftermarket graphics card I would own, it was the first to be slotted into my first custom-built PC. I still remember being surprised to see the active cooling solution when I unboxed it. My, how time flies.

Shares in Nvidia are down 2.44 percent on the earnings report as of this writing to $142.33, but are still up nearly 195 percent year to date and more than 2,600 percent over the last five years. Nvidia's recent success has also made CEO Jensen Huang one of the richest people in the world.

Looking ahead, Nvidia forecasts revenue of $37.5 billion (plus or minus two percent) for the fourth quarter with gross margins between 73 percent and 73.5 percent. For comparison, Wall Street is expecting $37 billion in revenue from Nvidia in the final quarter of the year.

nVidia is still a giant because investors aren't asking for ROI from Google, MS, and OpenAI. They're selling picks and pans to the gold miners.

Heck, if I were nVidia I'd love to make money off of the generative AI boom. Good for them; they also won't suffer the losses the way all the big companies will, either.

And I hate this narrative that nVidia has had this generative AI goal ever since they released their first GPU. They tried for over a decade to profit off of CUDA and couldn't produce a useful product for the mass market.

And what we have isn't AGI, it's just generative AI. An artificial general intelligence is still very far away, and people will keep building data centers with nVidia cards for as long as they can sell them on the promise of being first to market.
 

I like the picks and pans analogy.

As for AGI (Artificial General Intelligence), it's a hotly debated topic.

AI has already solved some pure mathematical proofs, and pure maths is often seen as the bastion of creative thinking.

There are a lot more models out there than LLMs, and even LLMs with other models layered on top can do some pretty amazing stuff.

AI models can tease out relationships between two different branches of mathematics that would be very hard for humans to see.

So yes, there is no set-it-free AGI at the moment.
But researchers with highly formalised inputs, highly curated techniques, and meta neural models on top can get some genuinely groundbreaking stuff.

Even LLMs will get better BS filters put on top of them, so to speak, to do better.

So the truth lies somewhere in the middle: not just some dumb statistical model, but not something you can feed anything and let loose to research and procure whatever resources it needs, either.

We do hold AI to a high standard and keep moving the goalposts.
But humans are subject to biases, faulty logic and reasoning, hallucinations, poor perception, illusions, etc.

We are seeing an acceleration, and we really don't know what emergent properties we will get; scientists were mostly surprised by how much simple LLMs could do.
Plus, in the scheme of human history another 30 years is nothing, but to us it's 33% of a healthy life.
 
"AI" currently is massive language neural models in which each word is chosen based on what a human would pick, given the text the model knows. It has no capacity to construct anything outside the thinking of its source material, or language terms that don't exist yet. It's great at writing text, translating, and so on, but actual general thought is eons away, and not in this format ever.
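The word-by-word selection described above can be sketched with a toy example. This is purely illustrative (the vocabulary and scores are made up, not from any real model): a language model assigns a score to every candidate next token, converts the scores to probabilities with a softmax, and samples one.

```python
# Toy next-token sampling: the mechanism behind "each word is chosen
# based on what a human would pick". Scores and vocabulary are invented.
import math
import random

def softmax(scores):
    """Turn raw scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Made-up candidates for the context "the cat sat on the"
vocab = ["mat", "roof", "keyboard"]
scores = [2.0, 1.0, 0.5]

probs = softmax(scores)
next_token = random.choices(vocab, weights=probs, k=1)[0]
print(next_token)  # most often "mat", the highest-probability token
```

The model only ever recombines tokens seen in training, which is the limitation the comment describes.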
 
Nvidia making $30.8 billion from data centers while gaming revenue is just $3.3 billion puts the whole "RTX 4090 for gaming" pitch into perspective.
 
Odd, no one has been able to compete with nVidia for the past few years on AI. I think everyone is taking the duopoly route and trickling in competition to keep prices high.
 

Models are even showing intuition now, guiding humans in their research efforts.

In mathematical modelling, a lot of work is being put into the Lean language, so faster proofs may be coming.
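For readers unfamiliar with Lean, here is a minimal example of the kind of machine-checkable statement it accepts (a standard library lemma, shown purely as illustration of the formalism, not of any AI system):

```lean
-- Commutativity of natural-number addition, closed by a library lemma.
-- Proofs written this way can be mechanically verified, which is what
-- lets AI-proposed proof steps be checked rather than trusted.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```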

Your reductionism is too simplistic and wrong.
How do you account for DeepMind's ability to play chess or Go? I think a newer model (AlphaGo? Maybe, without checking) learned to beat DeepMind's earlier system in a few days, and I think it can beat the best chess engines, though I could be wrong again without checking.

ChatGPT can play chess quite well and quite badly (it forgets the board, etc.) and can analyse a position fairly well, using the statistical model you outlined.

I don't think the AI systems playing Pac-Man, Donkey Kong, Doom, etc. use LLMs like ChatGPT either.

My answer was highly qualified, i.e. highly curated models can produce genuinely AI-like answers.
Even the model you refer to, the statistical LLM, has amazed researchers with some of its emergent abilities.
It is a very wide field; melding different models, meta-models, etc. can help.
There is an argument that human IQ, even consciousness, is an emergent ability of underlying resources; that consciousness or IQ is not on or off but, like a dimmer light, can grow stronger.

If so, by that analogy the AI field is glowing dimly, but I see no reason why more emergent AGI won't evolve in the next decades and glow brighter.

Any time a computer can beat a human, that ability is then relegated to a dumb ability.

If we have a creativity model (think outside the box) and the model just runs a million things/concepts/methods against each other exploring possibilities, any novel products, methods, or ideas will be called just a dumb process.
We already have reductionism for human activities, and overdoing it can be a downer: the Great Pyramid is just an oversized kids' set of building blocks, built to bury a human who wasn't a god or anything special compared with other humans at the time, same number of legs and arms, etc.

This also belittles humans who have this ability: "I think these variations of an antibiotic might work, and there is a good chance they would not be too poisonous for the body."
What do you call humans who can see patterns, or have intuition about how good something is from all their years of experience? An LLM?

Plus, on a more pragmatic basis, it's just a tool; use it right and new, better stuff can be discovered and created.
 

You are romanticising 'AI', pretending that it's creative because it can infer things from its programming. Solving mathematical equations is still based on its programming, not on an ability to 'think outside the box'. There's no way for LLMs to be considered 'AI' because they're not intelligent in any way; they're just using the data they've been provided and coming to conclusions based on their parameters, nothing more.

Midjourney and other image generators can create beautiful pictures now, but they are far from perfect, with many glaring issues, and none of it is 'creative'; it's just using LLMs to generate based on prompts. It's generating based on what has come before; it's not creating anything new.

To think that the marketing term 'AI' can create anything new shows a serious shortsightedness and misunderstanding of what LLMs truly are. If they were able to create something new they wouldn't need text prompts; they would take a pen or brush and create their own works. But LLMs are not Van Gogh, Tolkien, the Shelleys, or Chaucer; they use the aforementioned works to create something in their style, but they're unable to create something new.

Your chess analogy is a perfect example of LLMs' inability to create.
 
We are going to have to agree to disagree.

Plus, if our fundamental assumptions about AGI differ, which I think they do, then we are always going to disagree.
To take a human analogy: some extreme psychopaths/sociopaths may not really know a certain emotion, but they know how to mimic it to get what they want.
I.e. an AGI may appear intelligent but doesn't really understand anything; it just responds to prompts.

I remember talking to someone who said it took him until about age 9 to really understand that he had no sense of smell, yet he knew when he heard a fart to say "ew, that stinks!"

There is an argument that if an LLM or AI model doesn't have a body with lots of needs and sensory inputs, it can't have intelligence; e.g. humans saying "I feel it in my stomach" or "chills down my spine". Plus, we strongly need to keep this body alive, and I imagine it goes both ways: the mind controlling the body and the body controlling the mind.

Three interesting facts about the body controlling the mind:
1. Unconsciously, our body will reach to grab a bottle of water before the thought hits our conscious mind.
2. If we force someone to make a body movement in reality, or make them think they have done it via an illusion, their conscious mind will sometimes rationalise why they decided to do it (when they never decided at all).
3. Cults, the military, and manipulators good and bad know that if you make people perform certain actions, control of the mind will follow. Most people don't consciously set out to smoke, but the performance of a group of kids doing the action and passing it to you, especially if a bit drunk, may mean you will mimic that action without thought. The military and cults can take years of actions to control the mind. A green recruit will find it hard to shoot to kill someone.
Control the body, control the mind.

Anyway, back to chess: DeepMind and AlphaGo, when starting from scratch, recreated all the most successful human openings.

The definition I was using: given some general enquiry, an AGI could figure out the tools, resources, methods, and models it might use to try to get an answer; if it can't, it may say why and what it needed. It might not "understand" in any emotional way, but it could say why something might be good for humans, the planet, etc.

Like I said, IQ grows brighter: only a few animals pass the mirror test.
Only certain animals, and children past a certain age, know that another animal or child will make the wrong choice because they themselves saw the controller swapping the boxes with the cookies; they all saw which box the cookies started in, yet the other animal/child did not see the swap, so it will select the wrong box.
Speaking of deception, the more intelligent the kid, the sooner they lie.
The more intelligent an animal, it seems, the more emotions it has, including the joy of quite abstract stuff like pranking others, as seen in great apes.
 
I think they use the word "AI" so often... to convince investors and entice them, solidifying the stock and the idea that AI can create. But can AI actually create anything? It can't without human input.


 

People here who actually read this site and expect AI to become "self-sufficient, self-thinking, self-innovating" are dead WRONG. That will never happen.

Everything is based on algorithms that are created by a HUMAN. When AI can create by "itself" its "own" algorithms, or even an LLM, without HUMAN intervention, and SUCCEEDS in creating something new and pragmatic (that is key: pragmatic), then you can say that it's something awesome.
 