Looking for an Nvidia H100 AI GPU? You can get it a bit faster now, says Dell
Lead times drop to 8-12 weeks, according to Terence Liao, GM of Dell Taiwan
These days, AI is front and center, especially with ChatGPT, Copilot, and Gemini, but few realize how much raw processing power it takes to keep these services running. Nvidia's H100 is a prime example: it is one of the leading graphics processing units used for AI workloads.
But there aren't many of them to go around, and demand keeps climbing. Take Sora, for example: training a single model reportedly requires an estimated 4,200-10,500 Nvidia H100 GPUs running for a month. Factor in the recent H100 shortage, and it's easy to see why wait times for new hardware skyrocketed.
The wait times for H100 AI GPUs have significantly shortened.
At the end of 2023, businesses and tech enthusiasts alike faced a grueling 40-52-week wait for their GPU orders. Today, lead times have dropped to just 8-12 weeks. As DigiTimes reports, citing Terence Liao, General Manager of Dell Taiwan, this marks a pivotal moment for the tech industry, particularly for anyone invested in artificial intelligence (AI) and high-performance computing.
But what's behind this sudden improvement in availability? The supply constraints that once throttled Nvidia's H100 AI GPUs appear to be easing. Going from a roughly 11-month wait to a 2-3 month timeframe is remarkable, and it's not just a win for Dell: it signals a potential end to the supply chain problems that have plagued the entire industry.
Even with supply constraints easing, demand for AI-capable hardware remains sky-high. Businesses increasingly opt for AI servers over general-purpose ones despite the hefty price tag, underscoring the central role AI now plays in technology and business strategy. The shrinking lead times show that the supply chain is finally adapting to that demand.
Surplus H100 stock also seems to play a role
Interestingly, this improvement in availability is partly due to companies with surplus H100 GPUs reselling some of their stock to offset the high cost of maintaining unused inventory. Amazon Web Services (AWS) has also relieved some of the demand pressure by making it easier to rent Nvidia H100 GPUs through the cloud.
For large companies like OpenAI, which are at the forefront of developing large language models (LLMs), the easing of supply constraints couldn't come at a better time. These companies require thousands of GPUs to train their models efficiently, and the continued shortening of lead times is a promising sign that they may soon have all the resources they need to push the boundaries of AI even further.
In essence, the GPU landscape is undergoing a significant transformation, and this shift not only benefits the big players in the field but also opens up new possibilities for innovators and creators everywhere.