Credit: Nvidia
Given the company’s two-year release cadence, Nvidia’s next-generation Blackwell architecture is due in late 2024. Despite that timeframe, sources inside Nvidia are already discussing what it might bring to the table, according to several YouTube tech journalists. The most interesting revelation is that Nvidia will likely stick with a monolithic die for its 50-series GPUs instead of switching to a chiplet design like the one AMD used for the 7900 series. Based on the performance and issues with AMD’s first chiplet-based GPUs, these sources say Nvidia has no interest in going down that path, at least for now, though it will likely use chiplets in future HPC GPUs.
YouTubers Moore’s Law is Dead and RedGamingTech have reportedly been talking to Nvidia insiders, and they have much to say. Notable Twitter tipsters are also starting to chime in with Blackwell info, and Wccftech has collected this batch of rumors. The big news is that, like Ada Lovelace, Blackwell will be a quantum leap forward for Nvidia as it moves from TSMC’s 4/5nm process to the much-hyped 3nm node. This could allow it to deliver a 2-2.6X performance improvement over its current GPUs, even with a monolithic design. The flagship GB102 is expected to offer just 144 SMs, the same as AD102. Therefore, the advancements will come not from throwing more hardware at the problem but from a new, more efficient design.
Nvidia’s Blackwell architecture will reportedly follow a similar pattern to the Ampere-to-Ada jump, with a roughly 2X boost in performance and a focus on ray tracing. Credit: RedGamingTech
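To put those rumored numbers in perspective, here is a back-of-the-envelope sketch of where a 2-2.6X uplift could come from with the SM count held at 144. The clock speeds and the simple "throughput scales with SMs × clock × per-SM work" model are illustrative assumptions, not leaked specs.

```python
# Back-of-the-envelope scaling: throughput ~ SM count x clock x work per SM per clock.
# The clocks below are illustrative assumptions, not confirmed Blackwell specs.

ad102_sms, ad102_clock_ghz = 144, 2.5   # AD102 full die; ~2.5GHz boost is typical for Ada
gb102_sms, gb102_clock_ghz = 144, 3.0   # rumor: same SM count, clocks in the 3GHz range

sm_gain = gb102_sms / ad102_sms                       # 1.0x: no extra hardware
clock_gain = gb102_clock_ghz / ad102_clock_ghz        # ~1.2x from frequency alone
for target in (2.0, 2.6):
    per_sm_gain = target / (sm_gain * clock_gain)     # remaining gain per SM per clock
    print(f"{target}x overall needs ~{per_sm_gain:.2f}x more work per SM per clock")
```

Under those assumptions, most of the claimed uplift would have to come from each SM doing substantially more work per clock, which is consistent with the rumor that the gains are architectural rather than from a bigger die.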
The RTX 50-series will follow previous generations in that there will be a monster flagship die named GB100 for data centers, which will be cut down for gamers into GB102, GB104, and so on. It will reportedly offer specs similar to the current top-tier platform but switch to GDDR7 memory on a 384-bit memory bus. It will also be Nvidia’s first PCIe Gen 5 GPU, with clock speeds in the 3GHz range for consumer cards. Nvidia will reportedly focus heavily on ray and path tracing for this series, where it currently holds a considerable advantage over AMD. Part of that approach is new denoiser technology for Blackwell that could be a separate block on the die, a new instruction set, or new tensor core functionality.
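For a sense of what GDDR7 on a 384-bit bus could deliver, the snippet below computes peak bandwidth for a few per-pin data rates. GDDR7 speeds have not been announced, so the higher rates are assumptions for illustration, with the RTX 4090’s 21Gbps GDDR6X as a baseline.

```python
# Peak memory bandwidth (GB/s) = bus width (bits) x per-pin rate (Gbps) / 8 bits per byte.
# The GDDR7 rates below are assumptions, not announced figures.

bus_width_bits = 384
for label, gbps_per_pin in [("GDDR6X (RTX 4090)", 21),
                            ("GDDR7 (assumed)", 28),
                            ("GDDR7 (assumed)", 32)]:
    bandwidth_gb_s = bus_width_bits * gbps_per_pin / 8
    print(f"{label}: {gbps_per_pin} Gbps/pin -> {bandwidth_gb_s:.0f} GB/s")
```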
The biggest drawback to Nvidia’s monolithic approach to Blackwell will be cost, as 3nm wafers are supposedly 25% more expensive than 5nm wafers. Despite that, the company is reportedly unfazed, given the performance potential and the fact that it doesn’t mind paying for bleeding-edge technology, and neither do its customers.
AMD, on the other hand, is more concerned with saving money, which is one of the reasons it’s using a chiplet design in the first place. Chiplets let it use cheaper 6nm I/O dies and get better yields at the same time thanks to the smaller individual dies. The 7900 XTX’s combined die area (the 5nm graphics die plus its 6nm memory cache dies) is 529mm², of which roughly 300mm² is the 5nm die, compared with 608mm² for the monolithic AD102 in the RTX 4090.
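As a rough illustration of that yield argument, the sketch below estimates cost per good die using a simple Poisson yield model. The wafer prices and defect density are placeholder assumptions, not TSMC figures, and the dies-per-wafer estimate ignores edge losses and die shape.

```python
import math

# Crude cost-per-good-die comparison: smaller dies fit more per wafer and lose
# fewer candidates to defects. Prices and defect density are assumptions.

WAFER_AREA_MM2 = math.pi * (300 / 2) ** 2    # 300mm wafer, edge losses ignored
DEFECTS_PER_MM2 = 0.001                      # assumed defect density (~0.1 per cm^2)

def cost_per_good_die(die_mm2: float, wafer_cost_usd: float) -> float:
    dies_per_wafer = WAFER_AREA_MM2 / die_mm2               # ignores rectangular packing
    yield_fraction = math.exp(-DEFECTS_PER_MM2 * die_mm2)   # Poisson yield model
    return wafer_cost_usd / (dies_per_wafer * yield_fraction)

# 608mm^2 monolithic AD102-sized die vs. a ~300mm^2 chiplet graphics die;
# the $20k/$17k wafer costs are hypothetical.
print(f"608mm^2 monolithic: ${cost_per_good_die(608, 20_000):,.0f} per good die")
print(f"300mm^2 chiplet GCD: ${cost_per_good_die(300, 17_000):,.0f} per good die")
```

Even with made-up prices, the shape of the result holds: halving the die area more than halves the cost per good die, because both the die count per wafer and the yield improve.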
One of the more spurious claims from Moore’s Law is Dead is that once Nvidia goes below 3nm, the reticle limit of the masking process will only allow dies of roughly 400mm², about half the size of its flagship data center products. His Nvidia source reportedly told him that by then the company would be known more as an AI business than a gaming one and would likely stop producing x90 GPUs, opting for x80 and lower cards to compete with AMD. That’s still years away, around 2026, so who knows what the future will hold, but it’s an interesting notion to ponder.
Nvidia has recently begun earning more from HPC than from gaming, though, so it’s not surprising the company would want to pivot a bit, especially given how ChatGPT has taken the world by storm and the fact that thousands of Nvidia A100 GPUs power it.