Is AMD a beneficiary of the large language models/ChatGPT boom? Everyone talks about NVDA benefiting. What makes AMD different/worse off to benefit? #personalfinance #investments #chatgpt #amd #nvda #nvidia
NVIDIA is in its own league
How?
Most if not all ML models that require a GPU run on CUDA. CUDA is a proprietary language/SDK that lets people run their general-purpose computations on NVIDIA GPUs. Because NVIDIA was early to the game, CUDA is well developed compared to AMD's alternatives (OpenCL, ROCm, HIP, etc.). All the libraries people generally use support CUDA, while ROCm support is very limited if not unavailable, so AMD doesn't get much of the benefit.
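To make the HIP point above concrete: AMD's portability story is mechanical source translation of CUDA code into HIP code (their real tools are hipify-perl/hipify-clang). The CUDA-to-HIP name pairs below are real runtime API equivalents, but the translator itself is a deliberately toy sketch — the actual tools are far more careful than a string replace.

```python
# Toy sketch of AMD's "hipify" idea: map CUDA runtime calls to their
# HIP equivalents. The API-name pairs are real; the translator is not.
CUDA_TO_HIP = {
    "cudaMalloc": "hipMalloc",
    "cudaMemcpy": "hipMemcpy",
    "cudaFree": "hipFree",
    "cudaDeviceSynchronize": "hipDeviceSynchronize",
}

def hipify(source: str) -> str:
    """Naively rewrite CUDA runtime calls as HIP calls."""
    for cuda_name, hip_name in CUDA_TO_HIP.items():
        source = source.replace(cuda_name, hip_name)
    return source

print(hipify("cudaMalloc(&ptr, n); cudaFree(ptr);"))
# hipMalloc(&ptr, n); hipFree(ptr);
```

This is why "CUDA lock-in" is about the ecosystem more than the syntax: translating the kernel calls is the easy part; matching the maturity of CUDA's libraries and tooling is the hard part.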
Thanks for explaining!! Sounds like there are network effect benefits to using CUDA
But do you think this much NVDA hype is justified?
NVDA has a first mover advantage in the space. I'm long NVDA
I'd take the contrarian view here. I agree CUDA and NVIDIA are better, but no one wants vendor lock-in. AMD doesn't need to match NVIDIA on generalized machine-learning workloads; it only has to match NVIDIA's performance on a few select customers' workloads. The European Union and big companies are pushing for "green" AI, and AMD arguably has an advantage there with its chiplet design and better power efficiency.
Kind of true! People don’t realize how important the Xilinx acquisition is for AMD long term, especially for AI. There are definitely risks, but it could pay off so well!
Free from vendor lock-in? Lol, how many usable GPU platforms for AI are even out there? It's an implicit vendor lock-in play.
It’s only a matter of software. If ROCm can become as versatile as CUDA, AMD GPUs are more power efficient and could benefit more.
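The "only a matter of software" claim rests on how ML frameworks already abstract the backend: user code targets a common API, and the framework dispatches to whichever GPU stack is installed. A toy sketch of that dispatch (all names here are hypothetical, not a real framework's API):

```python
# Hypothetical sketch of framework backend selection. Most libraries
# treat NVIDIA/CUDA as the default GPU target, which is the
# network-effect advantage discussed above.

def pick_backend(available: set) -> str:
    """Prefer CUDA, fall back to ROCm, else run on CPU."""
    for backend in ("cuda", "rocm"):
        if backend in available:
            return backend
    return "cpu"

print(pick_backend({"cuda", "rocm"}))  # cuda (CUDA wins when both exist)
print(pick_backend({"rocm"}))          # rocm
print(pick_backend(set()))             # cpu
```

If ROCm reached feature parity, swapping hardware would ideally be invisible behind this layer — which is exactly why the software gap, not the silicon, is the moat.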
Rumor has it NVIDIA is going to use Intel's fabs.
AMD, why can’t you guys be the first movers instead of following NVDA’s lead?
You guys have 25k HC; AMD has like 10k at most on the GPU side, and that's after the COVID ramp.
Our headcount only grew after we entered other domains like robotics, AV, and healthcare. Before that, 6-7 years ago at least, back when CUDA was pioneered around 2010, we had far fewer people than this.
Disclaimer: obvious NVDA bias here. I think the AMD "power efficient" argument is all copium. They're extrapolating from our consumer/gaming GPUs to the datacenter offerings, but NVIDIA GPUs consistently offer the best performance per watt in the datacenter. Datacenter is power-limited, not cost-limited.
A little bit of copium, yes, and it is needed. But the arguments in favour aren't completely baseless.
I wouldn’t count AMD out. You’ll need top-shelf hardware for the best LLMs, but remember that while the scale of training has increased over the years, the cost of training has gone down — Stanford recently demonstrated this. There will be tons of clients satisfied with training on the low-cost, mid-range hardware AMD can provide.
I don't think your comment about the hardware outperforming is correct. But NVIDIA does have a distinct software advantage.
Ladies and gentlemen, stand up and clap for @hey_m_d