"Passive coolers" in server racks mean there are multiple fans running 10k+ rpm blowing a fucking hurricane of air through the card. It is normal for high TDP server CPUs to have just a heatsink on top of them. A measly fan in the card itself makes very little difference there. So what if you shut up for a change.
Guys, I wrote this before, and I also informed Paul (RedGamingTech). My sources tell me the 3080 will indeed be 40%-70% faster than the 2080, but in ray tracing. Rasterization is another story for another day.
Nvidia next gen gaming will be late.
There is NOTHING "FAKE" about the kind of Ray Tracing that Turing is doing. You CAN totally do film-quality path tracing with it – and it IS being done, with the likes of Autodesk Arnold, etc. It just isn't fast enough for games/realtime. But that doesn't make the gaming RTX "fake", it's just compromising on ray-depth and ray-count to make it run at real-time speeds.
75% more revenue for shareholders; that's all Nvidia cares about, and they've exploited the ignorance, lack of self-respect and fanboyism of gamers. Ignorance has always been bliss for the powers that be 🙂
Is this news going to hurt Nvidia if they don't deliver at release, like it did AMD? People tore the RX 480 apart for not matching GTX 980 performance. Will people complain when it happens to Nvidia? I don't think Nvidia fans will ever complain; they'll just give Nvidia their money again.
Paul, not to be a spanner in the works or anything, but passive cooling has an actual definition. It's not about where the fans are; it's about cooling without fans, using increased surface area and convection. Servers have extremely high airflow and don't require large heatsinks or fans directly on components because the chassis acts as a high-velocity wind tunnel… With all due respect, you're wrong about the definition, and doubling down on it is not a good look.
It's not 75% faster, that's incorrect; according to TAQI KAMASAKi it is 77% faster… so please don't spread false news.
Will this run Warcraft III Reforged???
Passively cooled in a server rack that has air blown through it at 90 km/h is going to run colder than in a home case with the best cooler you can find, man.
BEWARE THE TITAN T!!!!!!!
Paul, PAUL! Ye don't know, Ngreedia breeds the best GPU dogs and they'll chase those fps cats out of the house because they need to justify 360 hertz, or 360 cats per second. And AMD's donkeys might get there, but a year too late, and they'll sit and bray about how they've got better drivers and…
Here we go….. Turing's "Poor Volta" moment. In the end it will be poor Navi and I cannot wait for that video.
Maybe the 75% improvement will only apply to ray tracing? I doubt we'll see a 75% overall performance boost. And how much will this new card cost?! An arm, a leg and NAAF's eyeballs??
Yeah, that's pretty much what I was saying months ago: 15-25% faster, like always. But 75%? Sure, keep it going…
8192 / 5120 = 60% more cores, and in HPC the core count actually scales, and performance with it. Add some architecture and process gains and I don't think 75% for the GA100 part would be unrealistic. There is already a "leak" of GA100/GA101 HPC Tesla-grade specs.
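For anyone who wants to sanity-check that estimate, here's the arithmetic as a quick Python sketch. The ~10% architecture/process uplift below is purely an illustrative assumption, not a leaked figure:

```python
# Back-of-the-envelope scaling estimate for a rumored GA100-class HPC part.
# Assumes near-linear scaling with core count, which holds well in HPC loads.
v100_cores = 5120       # Tesla V100 (Volta) CUDA core count
rumored_cores = 8192    # rumored next-gen HPC part, per the comment above

core_gain = rumored_cores / v100_cores - 1
print(f"from core count alone: +{core_gain:.0%}")

assumed_arch_gain = 0.10  # hypothetical architecture + process contribution
total_gain = (1 + core_gain) * (1 + assumed_arch_gain) - 1
print(f"with assumed extra gains: +{total_gain:.0%}")
```

Core count alone gives +60%, so even a modest extra uplift pushes the total past the rumored 75%.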
The 3000 series will be AT MOST 30-40% faster than the 2000 series! I'm sure AMD's "Big Navi" will be around the same performance as Nvidia this year! Next year, RDNA2/3 will be a bit faster than Nvidia, just like AMD did with Intel! Radeon will pull a "Ryzen" on Nvidia in 2021/2022!!!
It is MAYBE 75% faster ……… AT RAY TRACING !!!
What matters is speed/cost ratio. 75% power for even more money isn't going to cut it…
If Nvidia can make cards 75% faster, they're not going to the gaming PC market… unless the competition forces them to. Hopefully AMD's RDNA2 will be the push that forces Nvidia's hand to release the 100 die (even a defective one is fine; small steps, right?).
Such an AMD bias here….
I feel more fragmented under AMD than on the Intel side: different sets of code for every different type of chip. Imagine AMD following that with their CPUs??? It's exactly the weakness Intel is looking for: software fragmentation. Well, that's what happens when you fall in love with open source; it opens a can of worms, with too much fragmented software and no one voice to code for. Is this what Linux users really want?? Hell no, I just want the thing to work, not political suicide under Linux drivers.
I really do want AMD to win, I really do, but with so many setbacks and changes in architecture… developers are not paid to wait for AMD, or to figure out which of three architectures to target for PC games. Developers decide the whole ecosystem when making a game, and with AMD having more than one architecture it's harder for them to work out what to optimize for. Nvidia is pretty straightforward: CUDA, anyone? And their drivers are already optimized, without the errors AMD keeps showing. It's 2020 and AMD still can't quite get it right. Jesus, AMD, we've been over this for more than 20 years, since the first Catalyst. Fix your goddamn drivers, AMD, please, for the love of God.
It's shock and awe, Nvidia style, in performance. AMD had their chance with the RDNA stuff for a whole year; now it's Nvidia's turn. You know what that means: VICTORY AGAIN, just like the 1080 Ti's victory party. More dancing and partying for the 3080 Ti.
Why are you upset? Big Red 2's server farms don't lie when upgrading to Ampere cards, and they're amazed by the results, and reduced their rack space doing so. You need to see the videos for yourself: lots of empty space thanks to Ampere's extra power and boost.
All this talk about Ampere being such-and-such faster, and also cheaper… people are smoking crack! Ampere will be just another incremental performance increase, with an incremental price increase. I expect it to go like this:
3060 = 15-25% better than a 2080 after driver maturation.
3070 = 15-25% better than a 2080 Super after driver maturation.
3080 = 15-25% better than a 2080 Ti after driver maturation.
3080 Ti = 15-25% faster than a Titan (Turing) after driver maturation.
Titan Ampere = a totally new flagship performance point, at a totally new flagship price point.
And expect each of the performance tiers to go up $100+ RRP for each segment, the 3060 will be priced like a 2070 was, the 3070 will be priced like the 2080 was etc etc…
And consumer whore cash-whales will open their maws wide to receive steaming-hot fresh caca from Nvidia in droves, rinse and repeat ad infinitum.
Magnus Rasmussen is right, but nobody is explaining that the 70-75% performance gain pertains strictly to deep learning and data-center computing. They DO go over that info, but all of the videos regarding the Nvidia 3000 series just state "next-gen Nvidia 75% faster" 😑
So I will do my yearly prediction. I foresee the 3000-series to be at max 36% better performance than their predecessor models in the 2000-series 👍
That guy didn't say faster at what; maybe some edge-case compute stuff.
Bro, when you're sick you sound like The Godfather… not an Apple fan, or Paul as I know him. He is The Cardfather!
The transition from 12nm (TSMC) to 7nm (Samsung) doesn't provide the same improvement potential as 14nm (TSMC) to 7nm (TSMC). Traditionally speaking, sure, we could talk about node-to-node comparisons between fabricators as if they were interchangeable, because the difference in process approach was so negligible that it really didn't make any difference.
Now that we're in the sub-20nm era, those differences actually have quite a large impact on the end product. As it stands, Samsung 7nm is roughly on par with TSMC 10nm in practical terms… and it's important to keep in mind that Pascal was on 16/14nm (a 14nm process on a 16nm node) just as Turing is 12/10nm (a 10nm process on a 12nm node).
NVIDIA simply made the decision, starting with Pascal, to refer to the larger process, as this gives consumers and investors the impression that their architectural efficiency is much better; unlike AMD, who instead choose to cite the smaller process, as they prefer to claim the technological high ground.
And as a further key point: NONE of the present nodes actually hit their on-paper improvements in practical application. Intel has only just been able to hit 14nm's on-paper figures with their 3rd-generation refinement (14nm+++), which is what we can actually call a "real" 14nm, as it finally delivers, in transistor density, power efficiency and frequency stability ("performance"), what 14nm was originally intended to provide over 24nm.
The other thing to keep in mind is, when "improvement citations" are made, what are they actually comparing against? At present, TSMC 7nm+ EUV is arguably the most accurate in its claims, as they're specifically citing gains versus their own 7nm… but others, such as when they cited 7nm, were using 14nm as the baseline, NOT the 12nm (14nm+) that was actually being moved from.
And those (+) improvements within a node can be quite considerable. Keep in mind that 14nm, when first used for HPE, was actually hitting ~30% BELOW what it should have been capable of in practical terms. So it was more like 24nm > 18nm instead of 24nm > 14nm… and as I noted above, NVIDIA got around this via a half-node approach: it was 14nm, but everything was spaced at 16nm, which reduced artefacts (quantum tunnelling, for example), so less power was needed to offset them, and higher frequencies were possible before the resulting instability became problematic for operation. Still, that's a trade-off, because you end up with a substantially reduced transistor density… plus there was still a need for slightly more power than should have been necessary had the node process actually worked as intended.
Now, as NVIDIA moves from TSMC (12/10nm) to Samsung (7nm), there is another thing to keep in mind. NVIDIA and TSMC jointly own the "large die" process IP; this is what allowed them to create dies of up to 800mm² instead of being capped at 640mm². Without it, the RTX 2080 SUPER (TU104) would essentially have been the largest and most powerful Turing-based part they could produce.
Samsung (as far as I'm aware) hasn't been able to license this, and TSMC really has no reason to grant a license, as NVIDIA aren't allowing others (AMD, Intel or Apple, for example) to use this technology at TSMC. This means that IF NVIDIA want to keep making "large die" GeForce parts, they essentially HAVE to use TSMC; and TSMC's 7nm and 7nm+ nodes are completely saturated right now by Apple and AMD. (This makes sense too; after all, AMD and Apple did bankroll and help develop these nodes, likely with the provision that they get first selection for production runs.)
I'd argue this is why NVIDIA have switched to Samsung, who at present only serve custom ASICs and Qualcomm; this gives NVIDIA domination (more or less) over a 7nm node. The thing is, in terms of density we're looking at about +45% going to Samsung 7nm (which, remember, they've renamed to 8nm to prevent confusion with their 7nm EUV; that EUV process is only available for LPE applications). So NVIDIA's choice will be either frequencies of up to 2,000MHz with +45% density, or 1,450MHz with +65% density…
To put that into perspective, take the RTX 2080 SUPER as an example: 3072 SU / 384 TC / 48 RTC in 545mm². Using the same die size, this means:
Samsung 8nm: 4480 SU / 560 TC / 70 RTC @ 2,000MHz (17.92 TF)
Samsung 7nm: 5056 SU / 632 TC / 79 RTC @ 1,450MHz (14.67 TF)
These would be within ±5mm², but close enough; the point I'm making is to consider the actual performance output. The key difference is that Samsung 8nm would use about the same power as the current RTX 2080 SUPER, while Samsung 7nm would use about 25% less. So a 250W card vs. a 190W card… while the performance delta between them is only 22%.
The trade-off is a +60% vs. +32% performance delta over the RTX 2080 SUPER (TSMC 12nm), depending on which node they use. Now, Ampere is clearly using Samsung 8nm, as they've gone with the higher frequency to allow substantially more computing power in the same space… while in the other factors (power and frequency) it essentially stays the same.
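For anyone checking those numbers, the arithmetic works out as a quick Python sketch (FP32 TFLOPS = shader units × 2 FLOPs per clock for FMA × clock in GHz ÷ 1000). The 8nm and 7nm configurations are the hypotheticals from the comment above; the ~1,815MHz baseline is the 2080 SUPER's reference boost clock:

```python
# FP32 throughput: shader units * 2 FLOPs per clock (FMA) * clock in GHz -> TFLOPS
def fp32_tflops(shader_units, clock_ghz):
    return shader_units * 2 * clock_ghz / 1000

baseline = fp32_tflops(3072, 1.815)    # RTX 2080 SUPER, reference boost clock
samsung_8nm = fp32_tflops(4480, 2.00)  # hypothetical Samsung 8nm config above
samsung_7nm = fp32_tflops(5056, 1.45)  # hypothetical Samsung 7nm EUV config above

print(f"8nm: {samsung_8nm:.2f} TF (+{samsung_8nm / baseline - 1:.0%} vs 2080 SUPER)")
print(f"7nm: {samsung_7nm:.2f} TF (+{samsung_7nm / baseline - 1:.0%} vs 2080 SUPER)")
print(f"8nm over 7nm: +{samsung_8nm / samsung_7nm - 1:.0%}")
```

The results land within a point or two of the +60% / +32% / 22% deltas cited above, the small differences coming down to rounding.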
Plus, as noted, they don't have extra space to play with… so the above is very close to the largest they'll be able to go on Samsung 8nm ("7nm"). Still, that doesn't mean that Baer is also going to take the same approach… remember that Volta and Turing are different as well, even though both are technically derived from the same root architectural design.
I actually have a feeling that NVIDIA might choose the 7nm EUV path, which on the surface will mean more processing units and much better power efficiency, but will also mean noticeably lower clocks and thus a more "expected" performance delta.
When you think about it, what makes more sense from NVIDIA's standpoint, i.e. what will be best for their profitability? Something that could actually compete with AMD's 7nm+ EUV 2nd-gen Navi (especially if AMD pull their thumb out of their arse and create a "Big Navi"), or something that just keeps NVIDIA in the lead on the assumption that AMD won't bother bringing anything competitive to the table in the high end?
I know what I'd bet on.
"Proper 12" for the sore throat, man… let's continue.
If AMD releases Big Navi and it's 25-30% faster than a stock 2080 Ti, then the 3080 will arguably be better than Big Navi, and a 3080 Ti will appear with 25-30% over the 3080. If Big Navi doesn't come, then the 3080 will be around the 2080 Ti's performance and the 3080 Ti will appear much later.
The 3080 Ti will be 75% faster than the 2080, sure.
I don't think it matters whether AMD reaches Nvidia's level at this point, once the new Navi comes out, for one main reason: most games will be optimized for consoles anyway, and the consoles are going to have almost the same GPU core as AMD's desktop cards (new Navi, RDNA2), so games will (for a good part at least) already be optimized for AMD cards. Assuming this is correct, and it makes sense, there is no reason to go for any overpriced Nvidia card (unless something changes drastically, of course) rather than an AMD one.
Yeah, it might be 75% faster… in a specific task. It might be 12% faster than the current gen, if Turing is any metric to go by.