/g/ - Technology

>https://gpureport.cz/info/Graphics_Architecture_06102019.pdf

Of the 1.5x perf/watt improvement over Vega, 60% is attributed to the new architecture rather than the node shrink.
As in, if Navi were on 16nm still at the same clocks as Vega, it'd have around a 30% improved perf/watt over Vega.
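
Back-of-envelope on that split, treating the two contributions as multiplicative (an assumption on my part; the PDF doesn't spell the math out):

#include <cstdio>

int main() {
    // AMD's claim: 1.5x perf/watt overall, with 60% of the 0.5 gain
    // (i.e. ~1.3x) coming from the RDNA architecture itself. If the
    // factors multiply, the node's share is whatever is left over.
    double total = 1.5;  // overall perf/watt vs Vega
    double arch  = 1.3;  // architecture-only improvement (the "30%" above)
    std::printf("node share: ~%.2fx\n", total / arch);  // ~1.15x from 7nm
}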

It is likely that launch drivers are going to leave a lot of performance on the table, as they'll need newly optimized shader compilers.

RDNA CUs are more different from Vega's ("GCN 1.4") than Turing SMs are from Pascal's, and Turing SMs changed quite a lot.
>>
it was worth the wait
>>
Don't worry, it's still terrible compared with the 2070.
>>
>inb4 it goes toe to toe with the rtx 3080 in a few years
>>
I look forward to Just Wait™ing until the drivers are good.
>>
>>71387053
You must have never gotten to experience FineWine™
>Buy 7970
>It's okay at launch. Much better than Fermi. You just couldn't know then that it was the best GPU ever made, relative to the time it was released.
>Kepler comes and is competitive, but then drivers make the 7970 you already got better than it and the Kepler refresh
>More years later, it still runs all the new stuff great, thanks to improved drivers, games being more compute heavy, and it having enough VRAM, which Nvidia cards always skimped on
>Sell it to a miner for 1/3rd of what you paid, 6 years later

>Buy Ryzen 1000 series
>At launch, just okay. Better than $250 for 4c/4t stutterlake, even if averages are sometimes worse
>Next year, microcode upgrades greatly improve memory compatibility, and also improve IPC by about 6%. Now it's even better
>Another year later, the Windows scheduler gives up to another 20% improvement
>Don't even need to upgrade to Zen 2 like you thought because it FineWine™'d

It's not that it's bad on launch. 7970 wasn't bad on launch, it was still better perf/$, perf/watt, even if drivers had weird bugs. Ryzen 1 wasn't bad on launch. It just gets even better with time.
>>
>>71387106
Windows update "allegedly" fixed the 1% fps drops while gaming. Wait for real tests and not just MS shilling.
>>
>>71386813
Here comes the wait and see bullshit again..
Anyone remember vega? I actually bought a card and put my money on the line.. Yeah, the wait never paid off.. Drivers never came. Features were never enabled. Just a half-assed, power-inefficient architecture that delivered sensible middle-ground performance and nothing more.
>>
>>71387233
You are definitely memeing if you think vega didn't deliver.
V56 is around 1070ti performance and when undervolted runs at around 200w or less.
V64 is good too and trades blows with the 1080 depending on the game.
Yes, they do take more energy, but their performance was good.
>>
Updated my X470 chipset drivers, the latest BIOS, and the Windows 10 May Update. My 2700x is absolutely a monster. Everything is snappy and smooth, and it already was very nice. Glad to see MS partnering more with AMD. New consoles are Zen 2 and Navi. The second hand market is really good right now and AMD is luring Nvidia into a price war. The future is bright!
Not to mention SSDs and RAM are getting cheaper! Can't decide on the 3800x or the 3950x.
>>
>>71387305
10 Advanced Micro Dollars have been deposited into your account.
>>
RDNA uarch is based
>>
>>71386813
I actually kinda wonder whether the performance/power improvements they gave in the presentation were for Navi, which is supposedly a GCN/RDNA hybrid, or for the future full RDNA architecture. I'd say it's a valid question because the slide with those numbers was about RDNA in general and not about Navi specifically
>>
>>71387305
Meanwhile I still use the win7 installation I did on my 3570k, now on my r5 1600, and everything was snappy all along. Too bad win10 requires so much tinkering; it's probably still laggy compared to win7.
>>
>>71386813
and already the "just wait, fine wine" has started before the new gpus have even released. Shareholders must be really worried about amd stock prices.
>>
>>71387529
They go with the Fine Wine™ bullshit while at the same time bashing nvidia for bad drivers. How shit must AMD's drivers be if they leave that much performance on the table at launch?
>>
>>71387381
Cope. :)
>>
>>71387053
considering the CUs now use IF, it's gonna take a lot more time and NOT just drivers, sadly

this effectively means that each dual CU acts like a core and they are all basically in a crossfire situation

in a tl;dr version:

this is a bulldozer GPU that somehow is faster
>>
>>71386813
RDNA appears to have made optimizations and improvements over GCN across the board. I don't get why the power requirement is still higher than the competing gpu's, which is made on an inferior process.

>>71387233
What anandtech managed to get from RTG about the primitive shaders on vega:
https://twitter.com/RyanSmithAT/status/1138561780244869121
>>
SIMD & Wave execution:
GCN: CU has 4 x SIMD16; a Wave64 executes on a SIMD16 over 4 cycles.
RDNA: CU has 2 x SIMD32; a Wave32 executes on a SIMD32 in 1 cycle.

LDS:
GCN: 10 Wave64 per SIMD16, 2560 threads per CU. 2560 threads (1 CU) share 64KB LDS.
RDNA: 20 Wave32 per SIMD32, 1280 threads per CU. 2560 threads (2 CUs) share 64KB LDS.

Shared cache:
GCN: 4 schedulers & 4 scalar units (4 CUs) share I$, K$
RDNA: 4 schedulers & 4 scalar units (2 CUs) share I$, K$
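
The issue-rate difference is just this arithmetic; a toy illustration of the numbers above, nothing more:

#include <cstdio>

// Cycles to push one wavefront's lanes through one SIMD:
// lanes / SIMD width = number of passes the unit needs.
int issueCycles(int waveLanes, int simdWidth) {
    return waveLanes / simdWidth;
}

int main() {
    // GCN: Wave64 on a SIMD16 -> 4 cycles per instruction issue.
    std::printf("GCN  wave64/SIMD16: %d cycles\n", issueCycles(64, 16));
    // RDNA: Wave32 on a SIMD32 -> 1 cycle, so single-wave latency drops 4x.
    std::printf("RDNA wave32/SIMD32: %d cycles\n", issueCycles(32, 32));
}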
>>
>>71388562
because amd won't ditch the hardware scheduler no matter what (there is no way of emulating async, especially under heavy load, on a software scheduler; nvidia is a prime example of it in WWZ)

remember how the power dropped like a stone from maxwell to pascal?
the 780ti has a similar tdp to navi also
>>
>>71386980
Could've gotten the same perf/w back in 2016 with the GTX1080
>>
anyone know how anti lag works?
>>
>>71388631
Are you retarded?

https://www.nvidia.com/en-us/geforce/news/rage-2-game-ready-driver/

Nvidia OBLITERATES AMD in WWZ now with a single driver release
>>
>>71387233
kek you fell for raja's poor volta
>>71387296
vega is trash, there is no defending it. v56 is mostly fine, but mediocre. v64 is too slow to command its msrp when the gtx1080 exists.
>>
File: p1rdeui66vs21.png (62 KB, 872x920)
>>71388659
160+18% is still SLOWER than amd
>>
>No HDMI 2.1
>No VirtualLink
>No Variable Rate Shading
>No Ray Tracing
>225W HOUSEFIRES on 7nm finally reaching GTX 1080 performance from 3 years ago

OH NO NO NO NO NO NO NO NO NO NO NO

AHAHAHAHAHAHAHAHAHAHAHAHAHAHA
>>
>>71388708
>no vrs
amd had a patent on it long before nvidia even had a working prototype of it
http://www.freepatentsonline.com/y2019/0066371.html

pretty sure navi will have it
>>
>>71388729
https://www.anandtech.com/show/14528/amd-announces-radeon-rx-5700-xt-rx-5700-series/2

>With a single exception, there also aren’t any new graphics features. Navi does not include any hardware ray tracing support, nor does it support variable rate pixel shading

KILL YOURSELF FAGGOT :^)
>>
>>71388708
what is all this meme shittery you pretend to care about
>>
>>71388753
That's ok anon, all those are useless and shit.
Next year when AMD finally starts to support them they'll be groundbreaking must-have features though.
>>
>>71388659
>+18% for 2080ti
>+14% for 2070
>+12% for 2060

oh yeah, they barely reach amd, let alone beat them LOL
>>
>>71388753
>doesnt support it

thats weird considering it's a software feature. i wonder if the drivers support it; he'll just have to eat his hat
>>
>>71388708
>225W HOUSEFIRES on 7nm finally reaching GTX 1080 performance
the only valid point here, jensen.
>>
>>71388773
If Poovi supported it, AYYMD would have stated it in the PDF. It doesn't, and it's not software; it's required to be supported in hardware

https://devblogs.microsoft.com/directx/variable-rate-shading-a-scalpel-in-a-world-of-sledgehammers/
>>
>>71386813
I'm more curious about their mid / low end tier gpus desu. I have an nvidia gtx1050ti and I want to see if amd can match its performance with a new gpu.
>>
>>71388817
are they even going to make a mid / low end?
>>
>>71388923
250mm2 used to be mid for $300-330.
>>
>>71388817
what the fuck made you buy that over an rx 570? shitty prebuilt/low profile?
>>
>>71389018
was cheaper, the 1050ti cost 140 while the rx470/570 was almost 100 bucks more expensive IIRC. It's been so long, but money definitely was the deciding factor.
>>
File: Untitled.jpg (1009 KB, 2560x3300)
>>71388792
hello retard, it's me, reality
amd is lacking on SOFTWARE
>>
>>71388562
so what does this mean?
>>
>>71388635
And give nvidia my money? Why would I do that?
>>
>>71389194
You are a retard and it shows

Navi does not support VRS period
>>
>>71389390
>navi cant do pixel rasterization on hardware

jesus fucking christ /g/ doesnt have any sense of technology
>>
>>71389269
Because I would've had the performance your shit company is trying to sell me now for 3 years.
You won't be receiving dividends from me /biz/ amd nigger.
>>
>>71389552
Navi does not support VRS and this has been confirmed by many tech websites

Can't get any simpler than that but you insist on arguing about it like a faggot
>>
Super SIMD when
>>
>>71389608
navi LACKS software you moron
the HARDWARE is there; rasterization of pixel shaders has existed since the 2xxx series
>>
>>71386813
>As in, if Navi were on 16nm still at the same clocks as Vega, it'd have around a 30% improved perf/watt over Vega.
They should have shoveled out tons of them cheap using 12nm or 16nm. A vega64 sized chip would probably be decent and still cheaper than 7nm.
>>
File: GTX1080.png (71 KB, 1324x1664)
>>71388635
>Could've gotten the same perf/w back in 2016 with the GTX1080
In reality the 1080 is 7% slower than the 2070, and the 5700 XT is 6% faster than the 2070. The 5700 XT is Vega 64 +15% perf.
>>
>>71388708

Based. Keep dabbing on the AMDrones here, most everyone hates these cards.
>>
>>71388708
>>71388781
>>71389780
For real though, I think navi is gonna be pretty okay desu. This is coming from a GTX 1070 owner who wants to upgrade.

If we take a look at the cheaper $379 36 CU Rx 5700 you'll notice how it wipes the floor with the RTX 2060 by about 10% on average. Does that put it closer to the RTX 2060 or 2070?

Because it it gets really close to the 2070, say 95% as fast then it properly competes with it in my book. That's not taking the driver updates into account like OP said btw.
>>
Personally I really look forward to essentially a ~doubling of FPS at 1440p with the 5700XT from my 1070 without having to pay the RTX tax of the 2080.
>>
>>71390012
>hurrrrr double the framerate
what is the actual improvement in frametimes though
>>
Also remember the $499 price tag was confirmed fake, the Rx 5700XT will be $449. That's a pretty sweet deal if it gets within ~95% of RTX 2080 performance, which retails for ~$800 on average.

https://wccftech.com/amd-radeon-rx-5700-xt-7nm-navi-rdna-gpu-official-launch/
https://www.nowinstock.net/computers/videocards/nvidia/rtx2080/
>>
>>71390041
I'll be honest here: I honestly EXPECT AMD to fuck up day 1 drivers but that's okay in my book too because this is what polaris and vega were like at launch. AMD does all the hard grinding with the hardware but then buffs everything out later with drivers.

I've been waiting 3 years for an upgrade and I don't mind waiting another few months after I get my 5700XT for frametime problems to be smoothed out (probably just the power limit, which can easily be fixed with a slight UV).
>>
>>71390072
>~95% within RTX 2080
more like Radeon VII
>>
>>71390182
Which is 90% as fast as the 2080
2070 is 80% as fast as the 2080
see >>71390012


So throw 5% on top with driver updates and you have 95% 2080 performance in the end. Not bad.
>>
>>71389928
>1070 user
>upgrade to 5700
well, suit yourself. i don't see a reason to do so. 1070 is fine for 1440p (exactly my setup right now) but not high refresh rate
>>
>>71390417
the only game that runs bad on it right now is shadow of war on ultra
everything else is pretty good at 1440 on 1070
>>
>>71389708
>The 5700 XT is Vega 64 +15% perf.
At practically the same price as Vega 64.
Horrible pricing.
349.99 would have been a lot better.
>>
File: 1560200869974.jpg (151 KB, 1000x561)
>>71390417
>>71390512
Nah, 5700XT. I have some IC Diamond tubes lying around which I plan to lap the heatsink with and use as TIM, to OC as high as possible. My 1070 only went up to 1720 MHz but maybe I'll get a lucky 100-200 MHz OC with navi.

Also seeing how big of a gap there is between the base and boost, there seems to be a lot of untapped performance on this little demon. Just needs more juice and better cooling, OC will just be icing on the cake.
>>
>>71387435
That's what Nvidia has been doing since maxwell. That's why Radeons were mostly slower compared to Nvidia unless the game was optimized for the GCN arch.
>>
>>71387106
>7970 wasn't bad on launch, it was still better perf/$, perf/watt
It was literally so bad at launch, and getting its ass kicked by the 680 a few months after, that AMD pushed out the factory-OC ghz edition refresh, which is exactly the WHYY AMD OVERVOLT THEIR CARDS, ITS OUT OF ITS COMFORT ZONE meme.

If you want to know how GCN should actually be clocked, look no further than the 7970 vs the 7970 ghz edition
>>
>>71386813
I've seen the slides; AMD didn't lie, this time they really changed the architecture.
No wonder the 40CU part is outperforming the 56CU one, and not just because of its higher clocks.
>>
>>71387435
I told you guys it was a wavefront saturation problem
>>
>>71390591
More like 20%, if it outperforms the 2070 by 10% and the 2070 itself is 10% faster than V64. Right?
>>
>>71390592
navi is power locked like nvidia now. it won't go above 225w TDP spec.
>>
>>71388708
vrs is shit.
>>
File: amd-rx-5700-xt-die-size.jpg (120 KB, 1100x619)
>>71390708
I doubt it
>>
>>71390770
I ignored everything gpu since vega, what is VRS?
>>
>>71387554
AMD bad drivers?

At least I don't have to step into my DeLorean and accelerate to 88MPH over 30 fucking seconds, just to change some settings.
>>
File: Wd9ylDhYzwU1xRZW.jpg (286 KB, 2165x1126)
>>71390740
Source? Because aren't they releasing a 2GHz XT with """225W TDP""" or was that fake?
>>
>Vega 56 - 495 mm2=$399
>5700XT - 251 mm2=$449
hmmmm.
>>
>>71390815
better binned limited edition for $500, also it's 1980MHz
>>
>>71390815
ah yes, source is GN, they asked AMD directly about unlocked BIOSes, it's all limited now
I think it's a good thing, retarded journos won't show you 500w power draws anymore
>>
Yeah, Navi is fucking epic. I bet nvidia doesn't even bother reacting.
>>
>>71390800
it reduces shading quality in places to achieve higher fps.
>>
>>71390815
It is everywhere. That is really sad, because overclocking Radeons is really fun. Power tables, modded BIOSes, all of it is now gone.
>>
File: prices for 5700XT.jpg (58 KB, 666x174)
>>71390841
shut it down, don't let the goyim know!
>>
>>71391049
fuck that. nvidia already makes games look worse with aggressive culling
>>
The only thing wrong with Navi is the price.
>>
>>71390860
>>71390878
>>71391061
What about wattman's power limit offset thingie? Not even like 10%. I'm starting to reconsider navi desu. I just wanted rtx 2080 perf without the rtx price tag.
>>
File: 952.jpg (145 KB, 1280x720)
>>71391070
GTX 970 - 398 mm2 = $329
GTX 1070 - 314 mm2 = $379
RTX 2070 - 445 mm2 (120-130mm2 of it is cheap cheap tensor cores) = $499
>>
>>71391070
>thinking the yield would be 70%
You people just blindly parrot made up bullshit from AdoredTV. The Zen2 chiplet won't even have yields that high, let alone a fucking 200mm2+ dense logic IC.
>>
>>71386813
AMD goes full SIMT; this solves a lot of the unused capacity problem and sets things up better for future raytracing hardware.
>>
Coming from a 970, how big of an upgrade would the 5700xt be?
>>
>>71391487
x2
>>
>>71390841
The 5700 XT is massively faster than Vega 56, and ~15% faster than Vega 64 which launched at $499.
>>
>>71390878
>>71391131
There would be no reason to put an 8 pin plus 6 pin on the 5700 XT unless they were going to allow significant power offsets in wattman. They had to lock the BIOS for DRM reasons.
>>
>>71391587
it's fucking jewish to raise the price for increased performance between generations, as that is expected to happen
however, as all the nvidiots cheered for that, i say it suits them right not getting the benefits of a price war
>>71391617
right, there might be some massive OC headroom
>75 Watts: None
>150 Watts: One six-pin connector
>225 Watts: Two six-pin connectors
>300 Watts: One eight-pin + one six-pin connector
>375 Watts: Two eight-pin connectors
>>
>>71391070
>Using yield numbers from a literal scam artist who makes shit up to grift NEET bux on Patreon
Lmoa
>>
>>71391271
>chiplets half the size of the previous one, which had insane yields, somehow won't have the same or even better yields now..

/g/ ladies and gents
>>
>>71390841
7nm is more expensive than 14nm.
>>
>>71391692
>underage retard kid thinks he understands semicon industry
14LPP is not 7HPC, you retard. 14LPP is not an immersion process. TSMC's 7nm node is. It has radically more lith masks, more multi-exposure; complexity is exponentially higher.
Die size alone does not determine defect density, you retarded little kid. TSMC's 7nm lines will never produce high yields.
>>
>>71391692
>Another clueless dumbfuck believing Scottish NEET Patreon bux shill lies
Stop listening to people who make everything they say up to scam money out of dumbfucks like you.
>>
>>71391157
whats your point?
anyway, amd priced 5700XT and 5700 so badly that it looks like intentional sabotage. I have no idea why they think they can be more greedy than fucking nvidia, who are hated far and wide for their greedy margins
>>
>>71391271
>>thinking the yield would be 70%
First off that is the Worst case scenario, not best. all those components are taken at their highest possible prices
Second, it's not AdoredTV dingus.
>The Zen2 chiplet won't even have yields that high, let alone a fucking 200mm2+ dense logic IC.
You have no idea what you are talking about. Then again maybe you do and just have stocks in AMD.
>>
>>71391587
so? the 970 was massively faster than the 770
it didn't cost an arm and a leg
the 390 was barely faster than the 290 and they still asked 970 price plus a premium.
>>
File: giphy.gif (1.91 MB, 480x270)
>>71391617
>>71391660
AMD better not fuck this up, this fucking gay RTX tax nonsense needs to end.
>>
Why is nobody mentioning the latency improvements?
>>
>>71391843
>thinking 70% yield is worst case scenario
LOOOOOL You're a riot, kid. You AdoredTV paypigs are hilariously detached from reality.
A process with 100+ lith masks and numerous litho steps on critical layers does not produce a high percentage of known good candidates per wafer. 7nm will never be a high yield node for TSMC. 7nm+ with partial EUV inclusion is a stopgap cost reduction measure. They will not see high yields on anything until 5nm with heavy EUV integration and mature pellicles
>>
>>71391816
>amd priced 5700XT and 5700 so badly that it looks like intentional sabotage
I don't think so.
For example here in my country the cheapest 2070 is going for like €530 and the 2080 is starting from around €730.
If we can get performance between the 2070 and 2080 for ~€550, it's not a bad deal at all.
And considering how AMD usually improves their drivers by a large margin as time goes by, I wouldn't be surprised at all to see this eventually trading blows with the 2080 or at least getting close to it.
If I can get something that's only marginally slower than the 2080 for 550€ then it's a very good deal.
>>
>>71392266
numerous immersion steps*
>>
>>71392252
because something is fishy about it; no-vsync game latency on a 144hz monitor is around 10ms as is.
>>
>>71392156
honestly I have more hope for intel at this point.
>>
>>71392281
it will cost the same as the 2070 here in slavlands, maybe 5% less, which is irrelevant considering the downsides
>>
>>71392449
it's not frame latency

it's motion-to-photon latency you fucking retard
>>
>>71392480
you may look up monitor testing with high speed cameras or tft central signal lag testing if you like
>>
>>71392504
i'd wonder about their testing procedure then because 144hz has at a minimum 6ms of frame latency, and keyboard to action is at a minimum 1ms at 1000hz polling, with extra processing time and lower polling putting it at 7 or 8 ms.
>>
>>71392504
this isn't talking about the monitor's input latency

it's not talking about frame latency

it's talking about mtp, which is how long it takes for your input to show up as changes on the screen and reach your eyes. you can have a 1ms gtg monitor + 240fps@244hz and still have 50+ms mtp

there's a lot of things that affect mtp, like your mouse/keyboard polling rate etc, but the new amd thing reduces the cpu's contribution to it
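
To make that concrete, here's a toy motion-to-photon budget; every number below is an illustrative assumption, not a measurement:

#include <cstdio>

int main() {
    // All values in milliseconds; assumed, for a 144Hz no-vsync setup.
    double mouse   = 1.0;                 // 1000Hz polling
    double cpu     = 8.0;                 // input sampling + game sim + submit
    double queue   = 2 * 1000.0 / 144.0;  // frames buffered ahead of the GPU
    double gpu     = 1000.0 / 144.0;      // render time at 144fps
    double scanout = 1000.0 / 144.0;      // one refresh to reach the panel
    double panel   = 4.0;                 // pixel response
    std::printf("mtp: ~%.0f ms\n",
                mouse + cpu + queue + gpu + scanout + panel);  // ~41 ms
    // Dropping the queued frames from 2-3 to 1 is where driver "anti-lag"
    // style settings claw back most of the difference argued about above.
}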
>>
>>71386980
Yes, wait for Navi2. The performance boost will be massive. Expect a full 1080ti equivalent.
>>
>>71392504
Telling him to look up something when you didn't even look up Motion-to-Photon..

Imagine.
>>
>>71392621
*240hz kill me
>>
>>71390789
>2.3X performance per area
When do we see a 495mm^2 navi?
>>
>>71392637
>>71392621
you two really do not understand how lag is measured; it's exactly what you describe
most gaming monitors have it around 10ms for the full signal processing chain, meaning click-processing-image
>>
>>71392252
Because my latency right now with no meme tech is around 10-20ms.
I have no idea where they are pulling their retarded 55ms and 44ms.
Literally seems like they are making up latency issues and then "heroically" overcoming them for marketing reasons.
>>
>>71392621
Are you literally an amd stockholder? Is that why we have shills pouring in?
If people test games and measure the input latency from when the mouse is clicked until you see the result on the image, it's 10-20ms on average.
WHAT THE FUCK, WHERE THE FUCK ARE YOU GETTING 55MS FROM?
WHAT IS THIS SHILLING?
DO I HAVE TO MAKE A COPY PASTA?
Like 5 different links to videos showing you high speed footage frame by frame of what the average latency is right now?

GOD FUCKING GOD.
>>
>>71390841
Historical pricing would put the XT closer to 300
>>
>>71392865
You can thank all the dumbasses buying overpriced Pascal cards and then buying overpriced RTX cards. Not even Nvidia's fault at this point, the average consumer is an idiot.
>>
>>71392895
the invisible hand of the market forced people to buy cards at all time supply lows
>>
>>71388762
They would be groundbreaking features if they didn’t drop your frame rate by 60%.
>>
>>71392895
>In the details shared by NVIDIA, the market to market revenue chart shows that Gaming was able to make $954 million in Q4 FY19 compared to $1764 million in the previous quarter. Not only does that make it a 46% decline from the previous quarter, but looking at Year-To-Year results, NVIDIA also saw a 45% decline.
wake up call is here.
>>
>>71392621
>>71392252
Here you go, amd stockholders:
https://youtu.be/L42nx6ubpfg?t=465
BEGONE!
>>
>>71393018
better than same graphics as on console.
>>
>>71391097
>aggressive culling
most retards even claim that upscaling is a fucking feature.
>>
>>71392281
>For example here in my country the cheapest 2070 is going for like €530 and the 2080 is starting from around €730.
>If we can get performance between the 2070 and 2080 for ~€550, it's not a bad deal at all.
You are insane, all of those prices are bad.
It's like you are used to abuse and beatings so anybody who doesn't beat you every hour is good because they only beat you at 1 hour and 10min intervals
>>
>>71392504
you may look up chicken tendies recipe first.
>>
>>71393052
30% import tax+10% store, we don't even consider it expensive anymore
average salary is $200/mo btw.
>>
>>71393087
like I said you sound insane for accepting such prices
>>
>>71393107
1. start a revolution
2. do not go to prison for life
3. buy overpriced hardware and be happy with videogames in internal immigration
I choose option 3.
>>
File: NVIDIA-Market-Revenue.jpg (73 KB, 1116x566)
>>71393052
That was all mining right
>>
>>71393035
Amd niggerz btfo.
It's speculated that their anti-lag is just forcing the default pre-rendered frames setting in botnet 10 from 3 down to 1.
Something nvidia drivers have supported for ages for VR and exposed in their latest beta drivers for normal 3d.
>>
>>71388708
>>71388729
>>71390770
>>71391049
>>71391097
Fuck VRS, you’re literally cucking yourself if you use it, same as that AI upscaling!
The fact they’re trying to claim “higher performance” with it is disgusting. No, you’re literally run the game at a lower res shittier quality and stocking generated fake detail back in
God fucking damn it the fact this is taking over pisses me off
>>
>>71393136
>1. start a revolution
GAMERS RISE UP WITH AMD!


oh wait.
>>
fuck this /g/ i'm gonna start my own GPU manufacturing company. what should my flagship card be called?
>>
File: Bizarro_World_002.jpg (43 KB, 334x371)
>>71393187
in 10 years GPU manufacturers will sell us the picture quality of 1080p from 2015! They already sell us performance from 3 years back at prices from 3 years back, why stop at that?
>>
>>71393275
GRTX XT105780
>>
>>71393275
intel calls theirs odyssey, call it argo to piss them off
>>
>>71393028
Market segmentation is by products. So mining probably would have gone under gaming on their revenue reports in previous years, hence the inflation and the consequent decline in 2019
>>
>>71390041
Amd already has better frame times

Google nvidia stuttering and cope
>>
>>71393275
PTX 2800 Au Super Saiyan
>>
>>71393275
I'll make the logo
>>
>>71393275
WogFurnace 8800 SS
>>
>>71391727
>7nm wont produce high yields

i mean come on, how can you allow your retarded brain to actually vomit such a stupid comment

there are two types of yield losses
1) die yield losses
95% of them have to do with design, i.e. the parent company's fault
and 5% with defects on the wafer itself, found after WAT tests are conducted

2) throughput yield losses
these mainly have to do with the wafer manufacturing itself, rare earth metals, bla bla, things you wouldn't understand in the first place

in both situations lower yields mean increased costs to the end user
and first gen 7nm isn't mature, just like 14lpp wasn't mature enough for first gen zen, and still yields were so fucking high that amd was selling a six core for a mere 150 bucks

just let that sink in for a moment before you actually reply something stupid again
>>
>>71393999
Why did you think you could come here and bluff without getting called out kid?
Every single die that is thrown away is done so for being on the edge of the wafer, defect binning, or not qualifying to power leakage/clock standards of the client. The overwhelming majority of dies are thrown out for defects.
Try reading semiwiki or something. You don't even know the right buzzwords to throw around. It's just sad.

Samsung's 14LPP process was years into volume production before GloFo ever had it up and running at Fab8, by the way. Your desperation to appear informed is quite something.
>>
>>71394070
>Every single die that is thrown away is done so for being on the edge of the wafer, defect binning, or not qualifying to power leakage/clock standards of the client. The overwhelming majority of dies are thrown out for defects.

thank you captain obvious, it's not like i literally stated that
kid
>Try reading semiwiki or something
yeah thank you, i'd rather stick with my work at nxp cause you know, i fucking know what im talking about
>>
>anti-lag
It's placebo for gaymers.
>>
>>71394145
You don't have a job. You're not fooling anyone. You can't even get basic facts right, kid. Stick to AdoredTV's youtube comments. That's clearly where you came from.
>>
>>71394149
it's actually "we create lag that is not typical for gamers unless you use old fashioned V-sync, and then we reduce it to make ourselves look better"
>>
>>71394149
I am an artist and i would really appreciate a quicker tablet response.
>>71394276
Professional monitors don't have gaming sync.
>>
>>71394337
>Professional monitors don't have gaming sync.
Even without sync there is no fucking way input lag is that high unless something is TERRIBLY wrong. just look here >>71393035
>>
I like how the nvidia defense squad is in this very thread, ready to snap out a response to anything slightly related to gaming performance of graphics cards. LMAO.
>>
>>71394176
>you cant even get basic stuff right

im not the one tying up masks with defect yields, am i now

im not the one stating that a node won't have high yields because your words mean a lot now, am i
im not the one getting triggered when he gets corrected and whose only comeback is
"you are a kid"
"HA HA stick with adoredtv youtube comments"

like adored ever went into details or even knows what WAT is in the first place (not that you do, but anyway)
>>
File: 6mishggx6w331.jpg (161 KB, 1200x749)
>>71394360
it's coincidental ofc that nvidia invented yet another metric today, and even on that one they lied
>>
>>71394360
>we got caught lying bros, we got too cocky bros
>how do we turn this shill failure around
>how about we ironically laugh at getting BTFO?
hmm
>>
>>71394407
yikes my vega 64 consumes 300W total power, nearly twice as much as 2070, while delivering less performance, how
>>
>>71394377
Lithmasks and exposures directly correlate to complexity. Complexity leads to defects. There is no getting around this.
You just sat there and tried to argue that TSMC was yielding 70% known good dies per wafer on a 251mm2 7nm part, worst case scenario.
You're a fucking low IQ NEET talking out of your ass.
>>
File: 1487818729490.jpg (119 KB, 796x805)
>>71394407
>nvidia paying attention to navi
oh no no no
>>
>>71394407
is this real? what the hell nvidia. their own specs say the 2070 is 175W TDP ffs
>>
>>71394459
It's closer to 50% more
>>
>>71394469
again sir
die yield defects ARE purely the parent company's fault
learn your shit

im arguing that your twisted views are wrong because you are basically spewing numbers out of your ass, cause nobody knows the true yields
>>
>>71394515
67%*
>>
>>71394459
his vega consumes 300watts
twice as much as a card that eats 200 watts
twice as much
>>
>>71394407
How are they lying? This is actually the convention for CPUs. PCB, power stages, etc. consume power either because they need it or because they waste it. TDP is a metric for cooling, so not all of those components may be included, although it's conventional for most coolers to at least cover the VRAM, and we're seeing more and more VRMs and board components just go to the mounting plate or backplate rather than direct to the heatsink.

You probably wouldn't include the board + chipset power in the CPU TDP, for example. Motherboards in the past could consume 20-30W, including the 5-10W for the chipset, because you had other components like 3rd party USB controllers, DRM chipsets, the BIOS chipset, general waste, etc. And internally they may set guidelines for core and uncore TDP vs package (entire cpu die) TDP.
>>
>>71394513
They literally just moved "TDP" to "TGP". The Navi TGP values are the listed TDP values from AMD. There's literally nothing misleading about it, it's just a renaming convention
>>
>>71394562
>hurr durr let me keep trying to bluff like I'm not a pathetic NEET
You're not fooling anyone, and you're just making yourself out to be an even bigger joke.
Defects are virtually always on the side of the foundry. Always. The IC designer runs simulations with process design rules before the lith masks are ever produced. If there were an issue with the design itself that was preventing it from being translated to working silicon then it would be identified and changed before it ever went into volume production. There is a reason why tape out is an extensive process and not just a one click to manufacture scenario, dipshit. The foundry is responsible for almost every defect because they are artifacts of the process, of the tooling not having perfect atomic deposition, one molecule of contamination in a metal layer. That is on the foundry, not the designer.

Keep up the effort though. You might just learn a thing or two, kid. Maybe it'll help you quit being a NEET loser so you can finally get a CS degree like all those trannies.
>>
>>71394630
It's more accurate too
>>
>>71394657
>You're not fooling anyone, and you're just making yourself out to be an even bigger joke.
sure
>Defects are virtually always on the side of the foundry
maybe you need to have the facts illustrated for you, cause apparently words don't reach from your eyes to your brain
http://yieldwerx.com/conducting-yield-analysis-semiconductor-manufacturing/

and that's the end of your little hissy fit around here
>>
>>71394609
first of all there isnt a single 2070 that eats so little power
nor a 2060
they are at least 25 watts above that

second of all, using a metric like tdp to measure chip-only power that CANNOT be measured is very dangerous... especially given the past of this company
>>
>>71394758
>I googled one article and didn't understand it but I'm bluffing and basing all of my asspull off of it anyway
That's real cute, kid.
>>
>>71394852
>i googled one article
LOL
>i dont know shit and i bluff
LOL

i mean come on, at some point just admit that you are full of shit and get on with your life
you
were
WRONG

end of story
>>
>>71394908
Sure thing, kid. AMD getting 70% yields on Navi XT, WORST CASE SCENARIO. You're totally not a pathetic NEET retard Adored fanboy. Totally.
This is one for the archives.
>>
>>71394801
>first of all there isnt a single 2070 that eats so little power
>nor a 2060
>they are at least 25 watts above that

So like most cards.

https://www.techpowerup.com/reviews/AMD/Radeon_VII/31.html

That's a reference Vega 56 cheating on its power btw. And I know for a fact it throttles with the blower so it's not using as much as it wants to, mine was like that. VII as well. The reality is Nvidia is plainly more efficient so they have nothing to even lie about. AMD plays loose with TDP the same way.

>second of all, using a metric like tdp to measure chip-only power that CANNOT be measured is very dangerous... especially given the past of this company
Fuck off retard.

AMD did the same thing with reviewers for Vega.

https://www.gamersnexus.net/news-pc/3004-rx-vega-64-and-vega-56-power-specs-price

220W and 165W for power for the GPU vs board power ("TDP").
>>
>>71394932
>navi

i responded to the cuck about zen, and now you are shifting and saying you were talking about navi? LOL

more so, you were comparing an LPP node with an HP node on a totally different product?

HAHAHAHAHAHHAHAHAHAHA
AHAHHAHAHAHAHAHHAHAHA

sorry i need to be the bigger man
and laugh harder

HAHAHAHAHAHHAHAHAHAHA
AHAHHAHAHAHAHAHHAHAHAHAHAHAHAHAHHAHAHAHAHA
AHAHHAHAHAHAHAHHAHAHAHAHAHAHAHAHHAHAHAHAHA
AHAHHAHAHAHAHAHHAHAHAHAHAHAHAHAHHAHAHAHAHA
AHAHHAHAHAHAHAHHAHAHA
>>
>>71395085
Try again, little kid.
See:>>71391070

Right there, the original claim that AMD is getting 70% yield on Navi XT. Which you claimed was WORST CASE SCENARIO. Don't try to backpedal now because you're frantically googling around and realizing you've made an ass of yourself.
You're an embarrassment. Even for a NEET loser you're fucking dumb.
>>
>First off that is the Worst case scenario, not best.
>hurr I work at NXP guys seriously
lmao reddit spacing NEETs
>>
7nm yields are 'decent', but not what you would call good. That's why Zen2 and Navi are more expensive and why sony and ms are only launching the next gen consoles in 2020. 7nm needs EUV for actually good yields and lower prices.
>>
>>71395416
But the expert here just said 70% would be the worst case scenario. Obviously TSMC is shitting out these substantial 250mm2+ dies like it's nothing. Zen2 is obviously going to cost just pennies.
Big bad Lisa Su is raising prices to be mean.
>>
>>71388001
It's likely to increase async compute perf.
GCN only had around 20-25% scaling of async compute.
Turing has 60%.
That's why Turing outperformed AMD in a lot of traditionally AMD favored games which supported async compute.
And Async compute is legitimately the future of games programming.

>>71388562
Just because the TDP is 180/220W doesn't mean it'll be pegged to that number.
Nvidia cards don't hit their TDP power-limit in all games/applications.

>>71388659
You're lying.

>>71388708
>no HDMI 2.1
It has DP 1.4.
>No Raytracing
Yes it does :^)
>No Variable Rate Shading which makes graphics quality worse
Yes, worse quality graphics is an Nvidia feature
>225W is a housefire now
Reaching

>>71388781
>225W HOUSEFIRES on 7nm finally reaching GTX 1080 performance
>the only valid point here, jensen.
The 180W card appears to exceed 1080 perf.
The 225W card is between 1080 and 250W 1080Ti perf.

>>71394932
I doubt they're 70%.
>70% is what they're getting at TSMC with dies half the size.
>>71395546
TSMC isn't fabbing Navi.
>>
>>71395578
TSMC is in fact producing Navi, and this has been known since 2017.
https://www.digitimes.com/news/a20171023PD201.html

AMD's Vega series GPUs are fabricated by GlobalFoundries on 14nm process, but Taiwan Semiconductor Manufacturing Company (TSMC) has won the order from AMD to fabricate its NAVI GPUs using 7nm process technology. As TSMC is also keen on making deployments in advanced packaging technologies, it will continue to maintain coopetition relationships with local OSAT

Welcome to two entire fucking years ago.
>>
>>71394598
175W is not 200W
>>
>>71388707
>Vega 56 - 149fps
>GTX 1070 - 88fps
wut?
>>
>>71393275
Dilate 40%
>>
>>71395619
I could swear it was Samsung for Navi fab, but yeah you're right.
>>
>>71395172
>claim
so basically you took random numbers and just made up your own shit on top of that
and you CALL ME a kid

good laugh mate
>>
>>71395881
Samsung wasn't offering non EUV 7nm to external clients. Only TSMC elected to do so.
>>
>>71395909
They weren't offering it at all. Even Samsung Electronics didn't use it internally. 8nm is the ultimate marketing name
>>
>>71395902
Those random numbers are what you argued, retarded little kid. You staunchly argued that 70% yields were "worst case scenario." Are you going to try and pretend like this entire thread didn't happen? Sad.
>>
>>71389583
If I were an investor I'd be in Nvidia. But I'm not. I would rather have my sphincter intact than give Jensen free rein to stretch it out.

P.S. Nvidia's dividends would be much, MUCH higher than AMD's, broke bigger.
>>
>>71395578
are you insane? turing has 60% scaling on async?
in what universe do you live where turing, without a hardware scheduler, is actually capable of more than double the async throughput of amd?

as it stands right now, the norm on consoles is 30 to 40% depending on the game
>>
>>71395937
>you are literally just vomiting random shit thinking that im somebody else
you don't even know where i fucking started replying, despite the fact that i literally told you like 2 replies back that i was arguing with someone else about ZEN, and you started saying random shit about navi
>>
>>71395949
i mean, wwz is the heaviest async game to date and it's more like 19-22% according to them
aots was about 5% at max

for turing to surpass amd with some magical sauce they would have to not only have a hardware scheduler but a comparable core design such as amd has
and turing cores compared to amd ones are literally primitive
>>
>>71388708
>implying that 2080 is not just a marginal improvement over the 1080
>inb4 "b-but the ray tracing meme"
>>
>>71395971
Look at you. It's pathetic how you backpedal and deny it when the posts are all right there.
See: >>71391070
>muh 70%
See: >>71391843
>First off that is the Worst case scenario, not best. all those components are taken at their highest possible prices
You made a complete and total ass of yourself, because you're a low IQ lying NEET, and now all you can do is circle the drain.

TSMC isn't hitting 70% on Navi XT. They're not hitting that with the Zen 2 chiplet either. That total nonsense has no source other than fucking AdoredTV, your idol, and all the internet clickbait sites that recirculated his bullshit. Neither you nor Adored understands the basics of defect binning and yields, which has been made hilariously obvious with every one of your pathetic attempts at a post.
Keep up the shitposting though. I'll love looking back on this thread in the archives the next time I see a low IQ retard pretending he works in the industry, because I know it'll probably be you again, still talking out of your ass.
>>
>>71388708
>hdmi 2.1
yeah as if there is a card capable of that
>no virtual link
aib will have the dock
>no vrs
amd doesnt have pixel shading raster engines
are you mad?
>no ray tracing
as opposed to nvidia, where it tanks the performance
>>
>>71395949
60% is the peak. I think average is more around 40%. 50% peak by this slide, but I could swear I saw a 60% example.
It's obviously true by the tests that async compute scaling is better on Turing than GCN.
But many of these improvements in Navi look to improve that async compute scaling.

Turing's is also limited to doing int and fp simultaneously to get that sort of throughput.
>>
File: asdf.jpg (95 KB, 1871x230)
>>71396099
and i was right
you are a moron
>>
>>71396131
thats bullshit, nvidia simply doesn't have the means yet to leverage such async compute capability
they literally need a hardware scheduler and cores that can flip, pause, discard and flush to another core at the same time if needed
and nvidia cores haven't done anything of the sort since the 780ti
>>
>>71396163
It CLEARLY fucking does since games which support async compute got the largest performance improvements over Pascal. RTX 2060 outperforms the GTX 1080 in Doom significantly. It matches Vega 56 in Doom.

Holy shit you're delusional.
>>
>>71396187
again thats bullshit go check the scores at aots
2080ti single card at 9900
1080ti single card at 9600
at 1440p

similar at 4k, 300 points difference
>>
>>71387233
>Not mining cryptonight coins with your Vega on release and making it pay for itself.

That's your own fault my dude.
>>
>>71389928
>+21% in BFV
>+16% in Metro Exodus
was raytracing active for the RTX 2060 or why does the RX 5700 have such a huge lead here? aren't both nvidia titles?
>>
>>71395578
I got a vega 64 today, how much did I fuck up?
>>
>>71396815
Entirely depends how much you can UV it. You get an additional 5-10% higher perf from that alone before OC'ing, due to higher sustained turbo. That puts you around 90% of RTX 2070 perf.

Aim for 1000-1260mV: start at the latter, keep cranking it down by 20mV while it stays stable, then add 20mV of margin to the lowest stable point.

Got my v64 to 1120mV @ 1536 MHz that way.
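
The procedure as a loop, with a hypothetical stand-in for the stress test you'd actually run by hand in WattMan:

#include <cstdio>

// Hypothetical: in reality you stress test manually at each step and watch
// for crashes/artifacts. This stub pretends 1100mV is the lowest stable point.
bool stableAt(int millivolts) { return millivolts >= 1100; }

int main() {
    int mv = 1260;                      // start at the top of the range
    while (mv > 1000 && stableAt(mv - 20))
        mv -= 20;                       // step down while it stays stable
    mv += 20;                           // then add 20mV of safety margin
    std::printf("settled at %d mV\n", mv);  // 1120 with the stub above
}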
>>
>>71396906
What about vega 64 at 1700mhz? and 1050mem?
>>
>>71393152
>cards costs 1000 USD
>wow record revenue
>>
>>71396815
uhh pretty bad?
RX 5700 is better performance at less power consumption for cheaper.
Unless you got it for close to $300 I guess.
>>71396935
No way you're getting good power consumption at 1700MHz.

>>71396805
RTX is obviously off on those titles. I don't think AMD's driver even has DXR support yet does it?

I have a feeling that Navi will do ray tracing pretty well though with its greatly improved CUs and the extra scheduler per CU, just not as good as a dedicated accelerator.
>>
>>71397222
>Unless you got it for close to $300 I guess.
nope at 425euros

>>71397222
>No way you're getting good power consumption at 1700MHz.
Define "good power consumption"
>>
>>71386980
You slurp up their cum no matter what you fucking faggot. Kill yourself
>>
>>71397402
How lucky for you that leatherjacketman not only shot his cum in yours earlier, he's up for the second round already. I hope you can swallow it all!
>>
>>71397365
>Define "good power consumption"
depending on what narrative they wanna push now, >150W is housefire tier
>>
>>71387296
>V64 is good too and trades blows with the 1080 depending on the game.
Stopped reading there.
It's obvious that you're an AMD fanboy
>OH OH you have to undervolt our 500 USD CARD to achieve 200watts, yes I know Nvidia cards are more efficient and are better at delivering everything but AMD VEGAAAAAAAAAAA
>>
>>71397536
I remember my dual 250W 480 firestarters.
>>
>>71397222
>I don't think AMD's driver even has DXR support yet does it?
yeah and that's why raytracing would be active for the RTX 2060 if you use ultra settings while it wouldn't be active for the RX 5700 thus giving the RX 5700 such a huge lead
>>
>>71397640
t. braindead Nvidia stock holder
>>
>>71396805
Probably RTX off. And 2060 mostly shines when DLSS is used, which is something very shitty to rely on.
>>
>>71390841
>old as fuck node vs brand new node
>>
>>71398265
Not really. It's a decent AA alternative for games that are properly trained, especially ones that lack good AA already. It's just overhyped. But the future will inevitably be performance saving tricks like DLSS and coarse shaders as we march towards 4k and high-res VR. I wouldn't be surprised to learn sometime soon that the customized Navi for next gen consoles supports some hardware accelerated form of variable rate shading to boost performance.
>>
>>71397536
Damn, so every RTX card is a housefire, including the reference 2060. Noted.

>>71397640
You can't be this stupid.

>>71390841
The 5700XT has only 20% fewer transistors.
And Vega 56 cards are datacenter leftovers.

>>71391117
Yeah. And the price isn't even THAT bad.
10% lower and it'd be great. And it probably will be 10% lower when Super launches.
>>
>>71398621
>a decent

lowering quality isnt decent
>>
File: rbko8acbev331.png (1.31 MB, 1915x1075)
It looks like nvidia's hardware encoder/decoder is several years ahead of AMD's.
>>
>>71399713
I won't deny the encoder part, the new one from Nvidia is pretty cash for streaming. It's only a matter of time until AMD does their own implementation however. People like to meme about AMD features, but I'd argue they have far more useful features on the driver stack for the average consumer right now.
>>
>>71399713
Is it a new encoder or the same ones used for the previous cards?
>>
>>71398621
Well Microsoft has made some very bold claims for the next Xbox. AMD is single handedly responsible for making PC gaming extremely expensive. A $700 card won't be anywhere near a 2020 $700 console.
>>
>>71399713
since when was 4K60 not enough?
>>
>>71398937
>And Vega 56 cards are datacenter leftovers.
You're thinking of VII. Vega 10 was very obviously made for content creation.
>>
>>71398937
>The 5700XT has only 20% fewer transistors.
7nm was expected to be more expensive per gate, but not 1.25*1.1 as expensive. And HBM is still less expensive than GDDR6, even if GDDR6 is like 1.7x the cost of GDDR5.
>>
>>71400072
7nm is a little over twice the cost of 12nm/16nm, afaik, per wafer.
But also twice the density, so you can fit twice as many chips on a wafer.
That'd make it roughly the same cost, except that defect rate is also vastly higher.

We know AMD was getting "over 70%" yields on 74mm^2 chiplets, vs well over 80% on the 214mm^2 zeppelin die.
That's massively worse yields. The yields on these 255mm^2 Navi chips must be under 50% unless there is (which there probably is) a lot of duplicate redundant silicon that they can switch off.

These Navi die yields are surely far, far worse than Vega's were, given that yields are already worse on the Zen2 chiplet that's almost 1/3rd the size of Zeppelin, and Navi isn't even half the size of Vega.
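
Plugging those rumored numbers into the simplest textbook yield model backs that up. A sketch, assuming a plain Poisson model (real fabs use Murphy/negative-binomial variants) and assuming the 70%-on-74mm^2 figure is even real:

#include <cmath>
#include <cstdio>

// Poisson die yield: Y = exp(-A * D0), A in cm^2, D0 in defects/cm^2.
double dieYield(double areaCm2, double d0) { return std::exp(-areaCm2 * d0); }

int main() {
    // Back out the defect density implied by "70% on a 74mm^2 chiplet".
    double d0 = -std::log(0.70) / 0.74;                   // ~0.48 /cm^2
    std::printf("implied D0: %.2f per cm^2\n", d0);
    // Apply it to a ~251mm^2 Navi die, assuming no redundancy to fall back on.
    std::printf("251mm^2 die yield: %.0f%%\n", 100.0 * dieYield(2.51, d0));  // ~30%
}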

Anyway, I forgot if it was Su or Papermaster, but one of them said in the past year that their 7nm products would have >50% margins. So it shouldn't be surprising to anyone that a 10 billion transistor 7nm GPU is retailing at around $400.
Glofo 14nm was much cheaper.

What I would like to see is for AMD (and Nvidia) to put the encoder, multimedia engine, etc, on a separate die like the i/o die. That's about 15% of the die size which could be on a cheaper process and improve yields for the GPU itself.
>>
>>71399713
I'd like to see the comparison of Relive vs Shadowplay and the FPS loss.
Relive on Polaris lowered fps <1% compared to 1.5-3.5% with Shadowplay on Pascal. I have not seen the comparison done with Turing.
>>
>>71400350
>We know AMD was getting "over 70%" yields on 74mm^2 chiplets
An anonymous source through bitsandchips, who've lied before. Parroted by adored, another liar.

>Anyway, I forgot if it was Su or Papermaster, but one of them said in the past year that their 7nm products would have >50% margins
Source?

I'm of the belief that 7nm is not yielding 70% even for Zen. TSMC said 76% for SRAM

https://wccftech.com/tsmc-7nm-proces-sram-details/

I'm pretty sure large scale integrated circuits have significantly worse yields than memory in general. And wafer yield doesn't mean it's produced the same number of functional die. Zen is probably more forgiving than GPUs, only two SKUs for Navi whereas Zen 2 will inevitably have quad cores later down the line with the same chiplet die.

Unless you can confirm that >50% margin I'm of the belief that AMD is having a tough time making 7nm comfortably high profitable, although I'm sure they are making acceptable profit. The reason Zen 2 is cheap is because they're trying to capitalize on Intel's weak production right now. And I expect Zen 3 on 7nm+ will have the same pricing structure per core despite better yield because they're not trying to offer a value as their business model per se, they're trying to lock in swathes of the market while Intel is messing up their supply chain fussing with 10nm. Hence the socket compatible strategy and why AM4 is overbuilt. On the other hand Navi is "low effort" with less attractive pricing because they don't expect nvidia to give up their crown any time soon.
>>
Corelet gpus when?
>>
>>71400626
Here's your prototype
>>
>>71400679
More than a thousand mm2 of 7nm die size.
Lewd.
>>
>>71392811
2020 earliest
>>
>>71392811
7nm fabbing is very immature, the yields will be abysmal
>>
File: pv.jpg (48 KB, 1100x825)
>>71400679
>>
>>71400626
Probably never. Would require new graphics programming.
It would be like dual core CPUs all over again, where it took almost 10 years to just START catching on, and almost 20 years to become commonplace.

mGPU support was only finally added to Vulkan and DX12 this year.
>>
>>71388762
That's Nvidia logic though.
>>
>>71396122
The argument that raytracing is dumb is itself retarded though. Yes, I agree that right now RT doesn't make a lot of sense for gamers, but for AMD to omit that hardware yet ask the same price as Nvidia is bullshit. Everyone would be all over the new cards if Lisa had said: we omitted RT hardware, but on the flipside our cards are $100 cheaper than the competition. But she didn't do that.

I still wish that someone used the RT hardware to raytrace sound propagation in games instead of some dumb light shafts. It would probably not tank performance as much (as nothing would have to be rendered, just calculated in the background) and would make for a much more realistic soundscape, especially in shooters.
>>
>>71401150
sound was perfected 20 years ago. damn deus ex got perfect stereo reproduction, and it's not the best one from those years.
>>
>>71393180
The retarded thing: AMD catalyst used to have a Flip Queue Size setting (which is the exact same thing), but Crimson doesn't. So if that is the setting, then they're literally adding their own old setting into the driver again and selling it as new. Which is a shit move that I'd only expect from Nvidia, not from AMD.

If this is actually something new, I'll stand corrected.
>>
>>71401150
Noises don't need a lot of rays. You can raycast with bounces on a CPU just fine. Most "3D" audio already works that way. Line of sight uses raycasting.

AMD thought that processing the audio based on the results was more pressing seeing as how trueaudio was a DSP block and SDK. I also like how no one brings that shit up when it was basically the same thing as physx but really shit and used by like one indie game. It only opened up after it died and they moved everything to GPGPU. At least RTX is just a DXR and Vulkan RT implementation to run on accelerators, so it's not breaking standards.
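
The line-of-sight part really is that cheap: one slab test per occluder is enough for a "muffle it or not" decision. A minimal sketch with made-up types, not any engine's actual API:

#include <algorithm>
#include <cstdio>

struct Vec3 { float x, y, z; };

// Does the segment from `from` to `to` pass through the axis-aligned box?
// Standard slab test, clipped to the segment's [0,1] parameter range.
bool segmentHitsBox(Vec3 from, Vec3 to, Vec3 bmin, Vec3 bmax) {
    float t0 = 0.0f, t1 = 1.0f;
    const float o[3]  = {from.x, from.y, from.z};
    const float d[3]  = {to.x - from.x, to.y - from.y, to.z - from.z};
    const float lo[3] = {bmin.x, bmin.y, bmin.z};
    const float hi[3] = {bmax.x, bmax.y, bmax.z};
    for (int i = 0; i < 3; ++i) {
        if (d[i] == 0.0f) {                  // parallel to this slab
            if (o[i] < lo[i] || o[i] > hi[i]) return false;
            continue;
        }
        float tn = (lo[i] - o[i]) / d[i];
        float tf = (hi[i] - o[i]) / d[i];
        if (tn > tf) std::swap(tn, tf);
        t0 = std::max(t0, tn);
        t1 = std::min(t1, tf);
        if (t0 > t1) return false;           // slab intervals don't overlap
    }
    return true;
}

int main() {
    // Listener at the origin, sound source 10m away, wall spanning x=[4,5].
    bool occluded = segmentHitsBox({0,0,0}, {10,0,0}, {4,-3,-3}, {5,3,3});
    std::printf("occluded: %s\n", occluded ? "yes" : "no");  // yes -> muffle it
}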
>>
>>71401534
>Flip Queue Size setting
Literally just max frames to render ahead
>>
>>71394276
No it must be different because AMD already has something they call Enhanced Sync in their current drivers that is supposedly like Vsync but without the input lag.
>>
>>71394459
Undervolt it.
I run 1075mV in the highest power state (1632MHz) and it draws 220W max while running cooler and actually boosting to 1660
>>
>>71401553
>At least RTX is just a DXR and Vulkan RT
is it? why it's not allowed to run on AMD cards then? I know it will run terribly but I would like to see it.
>>
>>71401578
Nvidia also has a "fast" vsync setting which caps frame rate to refresh rate.
>>
>>71401534
I'm pretty sure it's simply an improved Enhanced Sync with a new name. That's what their description of it seemed like.

I use Enhanced Sync for a few games. It's good, as long as you can render twice the framerate of your monitor's refresh rate.
Fast Sync is largely the same, though usually not quite as low latency overall.
Maybe they improved it so you don't need double the framerate? That'd be nice. But it could simply be a renaming.
>>71401609
No, that's not what it does.

>>71401600
>why it's not allowed to run on AMD cards then?
Need driver support for the APIs. When the API makes a call for a feature, the driver needs to know how to "translate" that to what the GPU can understand. It technically could, just as it runs on Pascal since Nvidia updated the driver. BFV RTX On lowers FPS by like 80% on the 1080Ti versus like 45% on the 2080Ti.
I'm fairly certain Navi is going to get support for it at some point, but it might not be until AMD also launches a card with a dedicated accelerator.

>>71401553
Yep.
>>
>>71401600
AMD literally doesn't have DXR drivers. Nvidia released them for Turing somewhere around BFV and a few months ago for Pascal.

An API is like molecules. They have keys that fit together to work in the body. The API is a hollow molecule that has the keys. The drivers fill it up so the OS and user mode software can use those molecules. In this case DXR. For nvidia, they implemented support for RTX (hardware) in their drivers for DXR, Vulkan, and a separate SDK (including an API) called OptiX, although that seems to be for more professional stuff. They implemented a GPGPU solution for DXR for Pascal which they released a while ago.

https://www.guru3d.com/news-story/download-geforce-425-31-whql-driver-(adds-pascal-dxr-support).html

AMD hasn't said anything about DXR or Vulkan RT support as far as I'm aware. They're just completely ignoring it. It's not something they can pin on nvidia either. Vulkan and DXR are industry standard APIs.

That said, I'm sure at this point, since AMD hasn't even lifted a finger, current DXR games might not even work well if AMD did implement it. Games are known for hacky solutions, and they can't do anything for a vendor who's completely absent while they're developing the games.
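
For what it's worth, asking the driver about DXR is a single D3D12 feature query; on a driver with no DXR path it just reports tier 0. A minimal Windows-only probe using the standard API:

#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>
#pragma comment(lib, "d3d12.lib")

int main() {
    Microsoft::WRL::ComPtr<ID3D12Device> device;
    // Default adapter; feature level 11_0 is the D3D12 minimum.
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device)))) {
        std::puts("no D3D12 device");
        return 1;
    }
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts = {};
    // RaytracingTier is D3D12_RAYTRACING_TIER_NOT_SUPPORTED (0) when the
    // driver has no DXR support, D3D12_RAYTRACING_TIER_1_0 (10) and up otherwise.
    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                              &opts, sizeof(opts)))) {
        std::printf("raytracing tier: %d\n", (int)opts.RaytracingTier);
    }
    return 0;
}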
>>
>>71401673
https://www.pcbuildersclub.com/en/2019/02/amd-radeon-gpus-feature-raytracing-via-microsoft-dxr/
they do support it, they said their cards would support DXR before nvidia did, last year, if I remember it right
I think it's simply locked in games if the card isn't nvidia.
>>
File: Dz-Iu7sX4AYoU_r.jpg (57 KB, 918x177)
>>71401691
>I think it's simply locked in games if the card isn't nvidia.
You're making bullshit up, man. You didn't even read your own article, or the original twitter thread it's citing. From the article:

>Although there is still a lack of raytracing games at the moment, some games will be launched in the future. Fortunately, they all use the same interface, DXR from Microsoft. According to AMD, Radeon graphics cards that support at least DX12 should theoretically be compatible with raytracing through a DXR fallback layer. However, the function is not integrated in the driver and therefore cannot be used

>However, the function is not integrated in the driver and therefore cannot be used

>NOT INTEGRATED IN THE DRIVER AND THEREFORE CANNOT BE USED

https://twitter.com/coreteks/status/1098174427609612288

https://twitter.com/Locuza_/status/1098489149038972928

>Seeing news based on it there are some wrong ideas how this is working.
>The fallback layer is a library which has its own interfaces and methods and needs to be shipped with the app.
Volta does support the DXR API directly, BFV/Metro is not using the fallback layer for it.

The so-called Fallback Layer is a specific runtime (program) that needs to be running in the actual game to be used. And it was designed only to help MS design DXR, meaning it's probably not even compatible with the final DXR specification. It's leftovers from when MS and vendors were designing the API.
>>
>>71401691
>posts article trying to blame developers for locking out poor amd
>said article says it's up to amd to enable support in their drivers
kys seriously
>>
>>71401749
https://wccftech.com/amds-david-wang-we-wont-implement-directx-raytracing-dxr-until-its-offered-in-all-product-ranges/
found Wang's talk on this, so it's a bust. Still curious whether RTX is directly the DXR API or something Nvidia cooked up themselves
>>
>>71401787
Are you fucking retarded? RTX is HW acceleration for DXR and Vulkan RT.
Let me spell it out like I'm explaining to a brainlet.
It's the same as requiring HW tessellation support to run DX11 games.
>>
>>71401787
That doesn't mean they won't add driver support for DXR to current cards.
It means they won't add a dedicated accelerator to the silicon until they can offer it in a full product stack (which Nvidia is currently not doing either; the 2060 is too weak to actually run DXR anyway).
>simply curious if RTX is directly DXR API or is it something Nvidia cooked up
Read the thread. Already been stated numerous times.
>>
>>71401787
RTX is just a brand name for the entire RT core related tech stack. RT cores are hardware accelerators on the SMs and RTX is a name for the entire platform that includes the DXR and Vulkan drivers that use the RT core accelerators.

In theory, at least, none of the games should be locked to a vendor. They claim to be using DXR, which is an extension of DirectX, so AMD can support it if they want. But we can only know for sure once AMD actually gives us DXR drivers, because it has to be tested to be certain. And like I mentioned above, it may not even work correctly even if AMD does implement DXR drivers, because games are hacky, and Nvidia and AMD normally consult during development of big games and engines to make sure things go right. AMD can't do that while they don't support DXR.

So AMD has to pick up the pace. If the new Xbox supports raytracing and they're really going to put off DXR until "all products support raytracing", then Nvidia has a killer feature. You can imagine most multiplatform titles will have raytracing because of the Xbox.
>>
>>71401813
>>71401818
nvidia was selling it as their own AI-bullshit thing and DXR was released after their launch; I didn't expect them to be this scummy with naming

>>71401827
aren't RT cores just tensor cores? all this naming is a nightmare
so games that "support RTX" simply use the DXR extension for development simplicity? there's absolutely nothing Nvidia developed on their own, and basically any RTX feature can run on anything with DXR?
>>
>>71401866
>nvidia was selling it as their own AI-bullshit thing
Jesus Christ. Nvidia upscaler AI shit has nothing to do with DXR and RTX.
>>
>>71390592
2560 Stream Processors
64 ROPs / 160 TMUs
8 GB RAM

those are the same specs as my old 390. WTF are they charging $500 for?
>>
>>71401866
>aren't RT cores just tensor cores? all this naming nightmare
>basically games that support RTX simply have DXR extension for development simplicity? there is absolutely nothing nvidia developed on their own and basically RTX feature can be ran on anything DXR?

Tensor cores are separate, smaller computing units that accompany the INT32 and FP32 basic arithmetic blocks. They specialize in low-precision math (FP16, INT8, etc.) and matrix math in particular. RT cores also accompany the main block, and specialize in operations on a particular data structure called a bounding volume hierarchy (BVH), a common data structure for raytracing.
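To make "bounding volume hierarchy" concrete, here's a toy sketch of the structure; this is the textbook idea, not any vendor's actual hardware layout:

[code]
// Toy illustration of the data structure RT cores traverse in hardware:
// a tree of axis-aligned boxes where a ray can skip whole subtrees it misses.
#include <cstdint>

struct AABB { float min[3], max[3]; };

struct BVHNode {
    AABB bounds;          // box enclosing everything below this node
    uint32_t left, right; // child indices; leaves reference triangles instead
    bool isLeaf;
};

// Slab test: does the ray hit this node's box at all? If not, the whole
// subtree is culled. This test, repeated millions of times per frame,
// is the kind of fixed-function work the RT cores accelerate.
bool hitsBox(const AABB& b, const float origin[3], const float invDir[3])
{
    float tmin = 0.0f, tmax = 1e30f;
    for (int axis = 0; axis < 3; ++axis) {
        float t0 = (b.min[axis] - origin[axis]) * invDir[axis];
        float t1 = (b.max[axis] - origin[axis]) * invDir[axis];
        if (t0 > t1) { float tmp = t0; t0 = t1; t1 = tmp; }
        if (t0 > tmin) tmin = t0;
        if (t1 < tmax) tmax = t1;
    }
    return tmin <= tmax;
}
[/code]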

Tensor cores are the AI units. Not used for much, and pretty much unrelated to raytracing. Most games use a completely generic GPGPU solution for the denoising (since even RT cores can't produce a perfect output), like temporal denoising or something, so there should be no vendor lock-in there. Nvidia's own SDK does offer a tensor core accelerated AI denoiser, but I don't think any games use it.

Games just implement a renderer that uses DXR. The game calls specific methods from the API, and the system sends those to the drivers, which figure out what to do. For Nvidia, that means their Turing DXR drivers tell the GPU to use the RT cores to accelerate raytracing.
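Concretely, the hot end of that call chain looks roughly like this on the app side (names are from the public D3D12 headers; the state object and shader table setup it assumes is omitted):

[code]
// Sketch of the app side of DXR: the game fills a descriptor and calls
// DispatchRays; everything past that point is the driver's problem
// (RT cores on Turing, nothing at all on current Radeon drivers).
#include <d3d12.h>

void TraceFrame(ID3D12GraphicsCommandList4* cmdList,
                ID3D12StateObject* rtPipeline,   // created earlier
                D3D12_DISPATCH_RAYS_DESC& desc,  // shader tables filled earlier
                UINT width, UINT height)
{
    desc.Width  = width;   // one ray-generation invocation per pixel
    desc.Height = height;
    desc.Depth  = 1;
    cmdList->SetPipelineState1(rtPipeline);
    cmdList->DispatchRays(&desc);
}
[/code]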

Everything RTX related is developed by Nvidia. Like I said, an API is an empty shell until it's filled. MS, Nvidia, and AMD usually cooperate to write APIs, so they serve everyone's best interests. In this case AMD for whatever reason decided not to play along, seeing as all they have is a very old "fallback layer" from when the API was being designed and nothing else. Nvidia implemented their DXR drivers, including a path specifically for Turing to use the RT cores, and they made the RT core hardware to run the code on, obviously.

So
>RTX = RT core platform/brand including a DXR driver, vulkan driver, and Optix development kit
>>
>>71401866
>nvidia was selling it as their own AI-bullshit thing and DXR was released after their launch, I didn't expect them to be this scummy with naming
You've completely misunderstood their marketing, apparently. DLSS is AI-trained upscaling. It's an independent feature, like anti-aliasing. Nvidia recommends using it along with RTX/DXR to get better performance, claiming that DLSS looks like running at a higher resolution.

So instead of running RTX/DXR at 4K, run it at 1440p + DLSS, because that's "like 4K". That's what they're saying. DLSS is a little scummy because there's no way upscaling can truly match natively rendered 4K, but RTX is a novel piece of work that's good in my book.
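The pixel math behind that claim, for reference (plain arithmetic, nothing DLSS-specific):

[code]
// "1440p + DLSS instead of 4K": the card only shades ~44% of the
// pixels and the upscaler has to invent the rest.
#include <cstdio>

int main()
{
    const long p4k   = 3840L * 2160; // 8,294,400 px
    const long p1440 = 2560L * 1440; // 3,686,400 px
    printf("1440p is %.1f%% of 4K's pixels\n", 100.0 * p1440 / p4k); // ~44.4%
    return 0;
}
[/code]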
>>
>>71401985
DLSS actually looks worse and runs worse than regular old-fashioned upscaling in almost all cases.
>>
>>71394149
>>71401578
>>71401661
>From what Scott Wasson said about it, it works in GPU limited scenarios by having the driver stall the CPU on the next frame calculation. This means your inputs will be less "stale" by the time the GPU finishes the current frame and starts on this new one.

>This is something quite different than pre-rendered frame control. If you have a 16.7ms frametime on the GPU, but the CPU frametime is only 8ms, then this feature is supposed to delay the start of CPU frame calculation by 8.7ms, meaning that much less input latency.
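If that description is accurate, the back-of-envelope model is trivial. A sketch of the idea (purely illustrative, not AMD's actual driver logic):

[code]
// Model of the quoted mechanism: delay the CPU's frame start so it
// finishes right as the GPU becomes free, shrinking input age.
float inputLagSavedMs(float gpuFrameMs, float cpuFrameMs)
{
    // Only helps in the GPU-bound case, where the CPU would otherwise
    // race ahead and queue a frame built from inputs that are
    // (gpuFrameMs - cpuFrameMs) stale by the time the GPU starts it.
    return (gpuFrameMs > cpuFrameMs) ? gpuFrameMs - cpuFrameMs : 0.0f;
}
// inputLagSavedMs(16.7f, 8.0f) -> ~8.7ms, matching the example above.
[/code]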
>>
>>71401594
give me your entire vega 64 clocks/voltage table
>>
>>71402004

This. I simply don't get it. Any SRCNN variant, like Waifu2x or the FSRCNNX shader mpv uses, runs in realtime and outperforms the shit out of whatever DLSS is doing, and I have no idea why Nvidia didn't base their research on that.
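For reference, running FSRCNNX in mpv is two lines of config; the exact shader filename depends on which release you grab (the one below is just an example):

[code]
# mpv.conf -- realtime CNN upscaling on any GPU, no tensor cores involved
# (shader path/filename is an example; use whichever FSRCNNX build you downloaded)
profile=gpu-hq
glsl-shaders="~~/shaders/FSRCNNX_x2_8-0-4-1.glsl"
[/code]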
>>
>>71401902
you probably didn't watch the whole Jensen "it just works" thing; RTX and DLSS only got separated out later
>>
>>71402062
because 70fps "4k" games
>>
>>71399713
why? because nvidia offers 8K encode only with H.265?
>>
>>71401150
the pricing AMD went with is meant to counter the Super lineup, since those are pretty much the cards they're going to face
>>
>>71402332
they already failed to counter the 2070 and are only on par with the 2060.
Super is going to murder them again.
>>
>>71388707
>GayGPU
Bitch please
>>
>>71399713
>improved encoding
fingers crossed that it'll be at least as good as Pascal's NVENC. Having VP9 encode would have been a godsend.
>>
>>71402048
Will do when I'm home
>>
>>71396906
>>71396935
I got my Vega 64 running at 1630MHz (but boosting into the 1660s) @ 1075mV,
HBM2 at 1050MHz @ 990mV.
Asus Strix variant, slightly more aggressive fan curve. Never exceeds 66°C even under full load. Gets loud, though, but I play with headphones anyway. Draws around 220W that way.

Will post my Wattman profile / OverDriveNTool when I get home.
>>
>>71402898
>Asus Strix variant, slightly more aggressive fan curve. Never exceeds 66°C even under full load. Gets loud though, but I play with headphones anyway. Draws around 220W that way.
How loud?
I was considering getting a Nitro+ (not the Limited Edition) myself.
How much of a noise difference is there between the Strix 64 and the Nitro+ 64 (not LE)?
>>
>>71397563
You misunderstood. Vega is as good as a 1080 on average (by now it might be slightly better on average, not sure), but its perf/watt is worse.
Undervolting mitigates the issue, but no one said it's a better card on every metric.
Currently, if you want 1080 performance, you absolutely should buy a Vega 56, as they are now cheap enough.
1080s cost an arm and a leg since they're not being produced any more.
>>
>>71397640
Are you retarded? On the Nvidia side this would simply tank the fps, while on AMD cards it would be disregarded entirely, since their drivers don't support it.
No matter which way you lean, that would be an unfair comparison.
>>
>>71386813
Those would be game changers if they were priced like Polaris.

However, AMD went with RTX-tier pricing for some reason...
>>
>>71400369
Huh? I thought ShadowPlay had no performance impact since it uses the dedicated encoder block (NVENC)?
>>
If the Nvidia Super rumours are true, AMD would be better off not releasing Navi. They would have to cut prices drastically to even be worth considering.
Navi being DOA is all too real with Super coming and AMD fucking up big time on pricing.
I hope for AMD's sake that they can release a competitive 5800 XT pretty fucking soon, or it's over for them for good.
>>
File: 1540264640007.jpg (439 KB, 1920x1080)
>>71388708
based
>>
>>71402484
>failed to counter
>better performance, up to +15% as expected from Super, is "failed to counter"
got it
>>
>>71402048
>>71402898
pic related
>>71402963
Not a turbine, but too loud to comfortably watch a movie. But again, this only happens under high load. 4K decoding (even with AMD Fluid Motion added) doesn't raise the clocks by more than 300MHz (so it runs at like 600 or so).
>>
>>71402696
>as good as pascal's nvenc

when was nvenc ever good? the quality IS SHIT
>>
File: vega undervolt.png (789 KB, 2562x1391)
>>71403379
forgot pic
>>
>>71387106
had a 7870; those cards/drivers SUCKED SO MUCH BALLS HOLY SHIT. the card also fried itself after 2 years. worst piece of shit I ever owned.
>>
File: 1520492680318.png (218 KB, 600x540)
>>71387305
LOL this looks like a "sponsored post" that you could fucking frame on the wall, jesus christ...
I don't even care about brands but this is fucking pathetic.
>>
File: 1554199216620.png (498 KB, 694x1115)
>>71403363
well *shits in diaper and masturbates to hentai* DAS cuz gamers r stoopid
me very big boy compile neckbeardOS with nogaems and buy amd
>>
>>71403370
Oh, how quickly you forgot the same posts during the Vega paper launch.
>>
can you show the ASIC quality score of your Vega?
>>
>>71403439
>how fast

the fact that nvidia went and invented a bullshit metric today just to shit on navi pretty much says a lot
the numbers ARE REAL
>>
I think I'm going to get an RTX 2060 Super instead.
>>
Nvidia RTX is for obeses. You gain 200 lbs upon purchasing an RTX obesity.
>>
>>71403192
>I hope for AMD that they can release a competitive 5800XT pretty fucking soon

This.
I finally broke away from Nvidia as a Linux user, did the Vega 64 LC thing, and bought a big, nice power supply to power the damn thing instead of spending that money on an Nvidia card. I have unfortunately since found out about the world of Vega crashing issues (which everybody blames on the power supply) and just ordered a used GTX 1080 SC2 to replace it.
Yeah, native driver support on Linux seems nice, but the proprietary Nvidia drivers for Linux have always worked well too, and they actually have graphical interfaces to adjust things easily.
When your top-of-the-line cards need 600 watts of sketchiness to compete with the upper-mid-range cards of your competitor, it's time for a drastic overhaul, just like they had to do with their CPUs, which were in the same boat.
>>
>>71403458
you already forgot the same graphs for the Radeon VII?
>>
As long as ROCm doesn't get the same level of support as CUDA, it will be DOA for professionals.
>>
>>71403380
https://obsproject.com/forum/threads/comparison-of-x264-nvenc-quicksync-vce.57358/
You'd be surprised. NVENC has been further improved on Turing and it now matches x264's fast preset in quality. It's why people were disappointed the GTX 1650 came with the older Volta NVENC.
https://www.techspot.com/article/1740-game-streaming-best-quality-settings/
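If you want to sanity check those articles yourself, the comparison boils down to encoding the same clip both ways at a matched bitrate, roughly like this (flags are illustrative; check what your ffmpeg build supports):

[code]
# Encode the same source with x264 and NVENC at the same bitrate,
# then compare visually or with VMAF.
ffmpeg -i clip.mkv -c:v libx264 -preset fast -b:v 6M x264.mp4
ffmpeg -i clip.mkv -c:v h264_nvenc -preset slow -b:v 6M nvenc.mp4
[/code]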
>>
>>71392281
5700XT will age worse than the 2060 by 2021. Next-gen consoles are confirmed to have RT cores, so many games will actually use ray tracing, and hence the 2070 will have a major feature that the 5700XT doesn't, especially if it gets a discount
>>
>>71403848
Rtx on the 2060 will age like milk
>>
>>71403848
doubt it, RTX already barely works on the 2060
I don't know why they even bothered for the 25fps it gives
as much as I hate the pricing, the 5700 would objectively age better with its 8GB VRAM, just like the 480 8GB aged better than the 1060 3GB.
>>
>>71403848
>RT cores on consoles
No.
>>
>>71387554
"Nvidia display driver crashed and was restored successfully." If I had a dollar for each time that happened, I could afford to buy the whole of Apple and run it into the ground.
>>
>>71403848
I seriously, really seriously hope you realize next gen consoles have Navi GPUs in them. Now stand in the corner for an hour.
>>
>>71403848
lmao leatherjacketman sounds scared
>>
i just bought amd because i'm not dumb enough to think that RTX is going to be anything but another PhysX for them

ray tracing uses random math to get an approximation of a deterministic outcome. that's retarded. it's not just retarded because we are on the bottom end of scaling for it even with more powerful GPUs than ever, but also because it's using random sampling to approximate a deterministic result. there's a reason graphical effects have generally revolved around approximations of real-life phenomena that give you consistent results, rather than approximations that require a high sample rate to get an acceptable result: there's no consistency unless you throw enough horsepower at it to burn your house down
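to make the "random math" point concrete, here's a toy Monte Carlo estimator (estimating pi instead of a pixel's light integral, but the noise behaviour is the same idea): error only shrinks as 1/sqrt(N), so halving the noise costs 4x the samples

[code]
// Why RT needs "a high sample rate to get an acceptable result":
// Monte Carlo error falls off as 1/sqrt(N), hence denoisers and RT cores.
#include <cstdio>
#include <cstdlib>

int main()
{
    for (int n = 16; n <= 65536; n *= 16) {
        int hits = 0;
        for (int i = 0; i < n; ++i) {
            double x = rand() / (double)RAND_MAX;
            double y = rand() / (double)RAND_MAX;
            if (x * x + y * y <= 1.0) ++hits; // inside the quarter circle?
        }
        printf("%6d samples: pi ~= %f\n", n, 4.0 * hits / n);
    }
    return 0;
}
[/code]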

the reason physx never caught on wasn't that it didn't offer something unique, or something better than a cheaper alternative; it was that nobody implemented physx in a manner that made people think it was worth buying a dedicated co-processor to make a bunch of rocks fly around. the same goes for real-time reflections
>>
>>71404800
PhysX was actually pretty impressive; the main problem was the good old typical njewia behaviour. They put a heavy patent on it, absolutely prevented the competition from using it, and then planned to use it to create a performance gap between the two brands. But things don't work that way. It's like comparing electric trains to gas-fueled cars: there's no comparison. You can't just completely fuck over people who don't have that feature in their card, so as a result no one used it, because it was a heavily restricted niche gimmick. And the same applies to ray tracing as well.
>>
>>71404800
>>71404893
Do I have to explain this every time? PhysX is a whole package. You cannot separate the "CPU" and "accelerated" sides; you just don't implement the accelerated part when you don't want it. PhysX as a result has a high adoption rate, since it's built right into Unity and Unreal Engine 3 and 4. That's a lot of games off the bat, and devs can use the accelerated functionality at will. Takes work, but it's there to use.

Also, PhysX's CPU path runs just fine on any hardware. The accelerated part is written in CUDA, so AMD would need to license CUDA if they want to use it, and Nvidia offered to license it for dirt cheap when CUDA and PhysX were still young, but AMD declined to even get back to them on the offer. Instead they were busy backing the equally proprietary Havok, owned by Intel. AMD was run by dummies.
>>
>>71404791
why would nvidia be scared about navi?
maybe if navi gets a price cut nvidia could be in trouble, but at the prices they're asking, navi is pretty average
>>
>>71405214
Sure, sure. Next you'll say Nvidia offered HairWorks to AMD for free, right after they practically stole tessellation from them.
>>
>>71405337
They went on record

https://www.extremetech.com/computing/82264-why-wont-ati-support-cuda-and-physx

>But what about PhysX? Nvidia claims they would be happy for ATI to adopt PhysX support on Radeons. To do so would require ATI to build a CUDA driver, with the benefit that of course other CUDA apps would run on Radeons as well. ATI would also be required to license PhysX in order to hardware accelerate it, of course, but Nvidia maintains that the licensing terms are extremely reasonable—it would work out to less than pennies per GPU shipped.

>I spoke with Roy Taylor, Nvidia’s VP of Content Business Development, and he says his phone hasn’t even rung to discuss the issue. “If Richard Huddy wants to call me up, that’s a call I’d love to take,” he said.

>Keosheyan says, “We chose Havok for a couple of reasons. One, we feel Havok’s technology is superior. Two, they have demonstrated that they’ll be very open and collaborative with us, working together with us to provide great solutions. It really is a case of a company acting very indepently from their parent company. Three, today on PCs physics almost always runs on the CPU, and we need to make sure that’s an optimal solution first.” Nvidia, he says, has not shown that they would be an open and truly collaborative partner when it comes to PhsyX. The same goes for CUDA, for that matter.

>Though he admits and agrees that they haven’t called up Nvidia on the phone to talk about supporting PhysX and CUDA, he says there are lots of opportunities for the companies to interact in this industry and Nvidia hasn’t exactly been very welcoming.
>>
>>71405480
So basically they haven't even responded to nvidia about the offer or terms of licensing or anything. But nvidia's the one that's not open to collaboration? Wow, maybe that's something you find out after talking to the other party.

Instead AMD teams up with fucking INTEL, INTEL, literally a company that got caught in the most blatant anti-competitive bribery to stall AMD not even a decade before this article was written, AND caught forcing their compiler to specifically produce deoptimized software, running the worst case instructions when non-Intel CPUs (aka AMD) were detected. Really, that's who AMD decided was more "open" and "collaborative".

The entire thing was AMD being a little bitch towards Nvidia, since Nvidia offered a merger on the condition that Jensen become CEO of AMD. If they had just declined, in any case, I wouldn't even say this. But AMD specifically didn't call back, didn't talk to Nvidia about their offer, trash-talked them, and then went to bat for fucking INTEL. AMD's a bitch-ass company. Very arrogant and ready to spite the other, at the time, small guy and go running to Intel's cock.

It's better now but they might as well just ask again about CUDA. Even though at this point it's probably too valuable to just license out for the dirt under AMD's nails.
>>
>>71405480
Yes, nvidia absolutely, surely tells the truth about the licensing price and the whole deal. Trust them.
>>
>>71405311
Nvidia has no reason to be scared right now, since APUs aren't even as fast as an RX 570, but what's to say about the next 5-7 years? Nvidia is probably in an even worse position than Intel right now, because they don't have much to fall back on. The post mentioned 2021, but by then we would be seeing Zen 3 APUs, possibly Zen 4 APUs with stacked dies/chiplets/DRAM, that can challenge these Navi cards all in one simple package. People just want to slap in a processor and not think about it. Look at all the questions you get in fucking building threads over CPU and GPU combinations; you know how much easier it would be to just recommend an AMD or Intel APU and be done with it?

They have the enterprise market, but what if those customers start making ASICs for their particular use case instead of spending a fortune on Nvidia's multipurpose, less efficient offerings? Nvidia does not have the best track record making APUs, and most of theirs have been terrible alternatives to what the competition offered. After those two markets are checked and only the most die-hard customers are left, what will Nvidia offer us? I'm just voicing these thoughts because I have no idea what Nvidia is planning. Last I heard, they bought a networking company or some shit, probably to stay relevant with fast interconnects.
>>
>>71405545
>caught forcing their compiler to specifically produce deoptimized software, running the worst case instructions when non-Intel CPUs (aka AMD) were detected
Nvidia did the exact same. How are they better than intel?
>>
>>71405581
Except they didn't, retard.
>>
>>71393288
What the fuck are you talking about you retard
You couldn't get double a 390X's performance in 2016 for $450
>>
>>71401907
Because it costs >$12,000 for a 7nm wafer and it's twice as fast, you moron.
>>
>>71405678
Except they did, retard.
>>
>>71403393
only people with bad coolers undervolt, change my mind


