I've been running integrated graphics on my Ryzen 5 5600G and I keep getting crashes during renders. I assume the 512MB of VRAM keeps running out or something. Anyway, I figure I need a GPU. I'm looking at either an RX 7600, RX 6650 XT, or an A770 if I find a good deal, something like that.
Curious to hear what you guys are running.
You either buy a used 3090 or you buy a new 4090. Those are your two options. Self /thread.
>>962800
I'm too poor for Nvidia. My budget is ~$200.
>>962801
Another anon here. Run away from AMD, and I say that as an AMD guy: buy a 3060 with 12GB, nothing less.
>>962806
Yeah, looking into it further: from a quick Google/Reddit search it seems like people unanimously recommend Nvidia for the CUDA cores. I didn't realise it made that big of a difference. I'm looking at used 3060/3060 Ti/3070 cards now; they're actually more reasonably priced than I thought they'd be.
>>962806
Follow-up question: what's better, a 3060 with 12GB of VRAM or a 3060 Ti/3070 with 8GB?
>>962808
Get the 12GB 3060 and you can run your uncensored, unmonitored AI girlfriend locally on your machine. The popular 13-billion-parameter LLMs are surprisingly good at roleplay and will fit entirely in 12GB of VRAM.
And uhh, yeah, it's a good card for 3D too I guess.
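Back-of-the-envelope math for the "fits entirely in 12GB" claim: weights take roughly parameter count times bytes per weight, plus headroom for the KV cache and activations. A rough sketch — the 20% overhead factor and the bytes-per-weight figures are ballpark assumptions, not measured numbers:

```python
def fits_in_vram(params_billion: float, bytes_per_weight: float,
                 vram_gb: float, overhead: float = 1.2) -> bool:
    """Rough check: do the weights, plus ~20% assumed overhead for the
    KV cache and activations, fit in the given VRAM budget?"""
    weights_gb = params_billion * bytes_per_weight  # 1e9 params * bytes / 1e9
    return weights_gb * overhead <= vram_gb

# A 13B model quantized to ~4-bit (~0.5 bytes/weight) fits in 12GB;
# the same model at fp16 (2 bytes/weight) does not.
print(fits_in_vram(13, 0.5, 12))   # True
print(fits_in_vram(13, 2.0, 12))   # False
```

So the 13B-in-12GB claim holds for quantized weights, not for full fp16.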
>>962808You need all the VRAM you can get otherwise your shit just breaks. 3060ti will be faster but it doesn't matter if you can't load the scene. Only buy NVIDIA.
>>962813
>he can't fit the 20B MLewd model
ngmi
>>962816
>MLewd 20B
vanillashit, I sleep. MLewdBoros 13B enjoyer
>7900 XTX
>MBA reference card
First one had the defective vapor chamber; second one has been fine. It's nice and I don't have to worry about a damn thing.
If you only have $200 you aren't going to make it in this.
>>962827
>Navi 31 GPU
Wait until you get pump-out. This die hates living. How loud is it, by the way? Just wondering if the MBA design is decent.
>>962827
Why'd you choose AMD over Nvidia?
>>962799
Pretty much, you do what >>962800 said.
>>962806
You COULD get away with AMD if you are mostly using Blender, but for anything else you are fucked and have to get Nvidia, because everything just runs on CUDA or the CPU.
An RTX 3060 is the bare minimum you should get; 12GB is a decent amount of VRAM, and 8GB just won't do if you have a "medium" sized scene. When I had a 2070 four years ago I could barely push renders for my finals. No issues now with a 3090; try to buy one if possible. Otherwise, get the 3060, use it until you can save some more, and then get the 3090, used again.
Without enough RAM, be it video or system, you are just fucked on doing 3D stuff, period.
>>962806
>>962813
>>962917
Got a used EVGA 3060 12GB because I'm poor. Thanks, bros.
>>962920
>EVGA
EVGA is the brand most prone to breaking. ONLY get MSI.
>>962922
Well, it's too late now, I already bought it.
Also, I always heard EVGA was one of the best GPU manufacturers and that MSI was one of the cheapest. I had an MSI AMD card in the past that died on me, and I watched some Northwest Repair vids where he shits on MSI, so I just assumed they're not great, but maybe that's just for AMD.
There are still 500 days left on the warranty, it'll probably be fine. You're literally the first person I've ever seen say EVGA is bad.
>>962922
>>962927
>Northwest Repair has 2 vids repairing the exact card I bought
What the FUCK. Should I resell it and buy something else?
>>962922
>EVGA bad, buy the shit brand that made 4090 bricks
Lmao
>>962927
Don't worry, anon. When you do heavy 3D work, anything will break eventually. EVGA has the best RMA at least, so if it dies and the dude who sold it to you has the receipt, they will do the RMA service.
I've owned cards from pretty much all brands and never had issues, except one time with a cheap H110 MSI board that died, but it was a POS anyway. Nowadays I've stuck with Gigabyte because of mobo features/price.
>>962981
>EVGA has the best RMA at least
Dude, EVGA is so shit they completely exited the Nvidia graphics card business. They are DONE.
Any reason to upgrade from my 1070ti 12gb or is it just a meme
>>962989
The 1070 Ti is an 8GB-only card.
>>962982
>EVGA bad because they left the Nvidia graphics card business
They left because Nvidia were fucking assholes to do business with; that's why they stopped working with them and didn't do newer GPUs, but they STILL do RMA service on their products.
Other board partners were threatening to leave Nvidia too because they got screwed badly, mostly during the whole crypto/COVID years.
>>962799
I have a 3080 10GB. I have the money to upgrade, but it does everything I want, so I see no reason to.
Unfortunately, as much as I hate Ngreedia, I have to agree with the other anons recommending the 3060 12GB: having had AMD cards in the past, the software support is just not there.
Get a used one off eBay or hardwareswap; that should keep you in budget.
>>962903
It's fine, I promise. I'd rather not do any water cooling; my workload doesn't justify it. Also, loops are a bitch to maintain, so I'd simply rather not. I can afford to run noisy fans.
The MBA is loud under maximum load near its temp limits, but the real problem is coil whine. Holy fuck, I can hear the coil whine over the fans going full speed on this thing. I recommend against the MBA model for 2 reasons:
1) the chances of a defective vapor chamber are 1 in 10, and I can personally attest that's true
2) coil whine and fan noise are better on other models
I went for it because Yeston was my backup if this second MBA turned out defective. Almost the smallest design, and 2 8-pins was a must. Yeston was the best model for my size and pin needs. Gigabyte has one too, but fuck Gigabyte, it looks cheaper than my 750 Ti SC.
>>962907
I've been Nvidia-free since 2012. Please understand this is both a cost and an autistic reason. I like reference cards and blower fans more. I just want a fucking rectangle with no gamer bullshit at a more affordable price point. I know my needs. Yes, Nvidia can do my workloads in slightly less time, but the ROI is better for AMD for my purposes. I don't use CUDA or RT in my workloads, nor do they benefit from either to any significant degree.
Personally I want to get the W7900 because, like I said, I'm autistic and I like blower fans and rectangles, and the pro line is basically what I want. Yes, Nvidia offers that too, but again, price matters to me, and I don't benefit from the extras they offer, so AMD is my best fit.
For work I'm actually building a number of remote workstations, and we're debating between the W7500 and the W7600, because single-slot cards like those are nice. Yeah, x8 lanes is a bit lame, but for our use case it's more than enough. We need 4 of the fuckers though, and that adds up. Thankfully GPU passthrough on Proxmox (and me forcing their hand) makes this a bit easier than normal. Also, a Threadripper board with 6 PCIe slots, all at x16 Gen 4, makes this a breeze. The hard part will be the storage.
Nvidia is obviously treating consumer GPUs like a legacy business, and AMD is too retarded to be competition. I would switch to CPU rendering and invest in a decent one; otherwise it will be buying tokens and paying $0.99 for every render in a few years.
>>963023
>I don't use CUDA or RT in my workloads nor do they benefit from it to any significant degree.
Oh, so you're a beg then, using a liquid-cooled AMD GPU, and the icing on the cake is you're writing a wall of text as well. Great. Just great.
>>963024
>NVidia is obviously treating consumer GPU like legacy business and AMD is too retarded to be competition. I would switch to CPU and invest into decent one, otherwise it will be buying tokens and paying 0.99$ for every render in few years.
AI has not been proven to be actually profitable in the arts.
>>963032
It's true. It hasn't turned a profit. The only use is in medicine (identifying afflictions) and the military (tax-funded). You may say: but anon, all those movies and TV shows coming out, surely they must rely on AI and be profitable. This isn't true. Not only is streaming media not profitable for anyone, but fewer and fewer movies are being made each year now.
>>963036
>movies and tv
What are you, 70 years old? Nobody here is arguing it's used in "TV", obviously it's not.
>The only use is in medicine (identifying afflictions) and the military (tax funded)
AI in medicine was something hyped like a decade ago and turned out to be a commercial failure; what are you even talking about, old man? People use it all the time in their day-to-day lives:
my friend uses GPT-4 to draft scripts for coding
he and a lot of other people I've seen also just use it like a search engine for general queries
I know two people who use ChatGPT for law, specifically tax codes and criminal law
students are notoriously using it to write their essays; I was shocked to catch my sister using it for a college essay
just less than a month ago, "AI" (really it should be called ML, but whatever) was used to digitally "unwrap" and reveal the partial text of a Herculaneum scroll, a breakthrough in the classics
Of course, if we're going to talk ML, which all "AI" is, OCR has been used for decades now for a million different things; facial recognition in phones/surveillance and image classification in general are huge.
In 3D/VFX it would be used as a minor tool in a workflow, either for texturing, making HDRIs, Photoshop assets for design work, etc.
In terms of profitability in art, it's mostly in independent work, since obviously the tech is new. I've seen AI clip art in many YouTube videos, videos with hundreds of thousands or millions of views, i.e. they're profitable. The AI voice synthesis tech is popping off recently. There's a handful of indie artists on Twitter making money off AI work. AI is great for creating in-between frames for animation.
AI is mostly great at taking dozens of gigabytes of disk space
>>963030
Not everyone needs that shit, especially at increased cost. If I can save money by getting an equivalent, I will.
I work in game dev, and knowing the programmers I'm dealing with, I need options. I keep some Arc cards around just to make sure we're thorough.
>>963030
>>963065
Forgot to add that I clearly stated I don't use liquid-cooled cards and prefer blower cards. Fuck liquid cooling, it's more effort and maintenance than it's worth.
>>963040
So it hasn't been profitable in the arts. Your friend wrote some bad, derivative, STOLEN code that breaks its original license.
>Ive seen AI clip art in many youtube videos, videos with hundreds of thousands or millions of views, ie theyre profitable
You are a joke.
>>963086
If you don't see the potential, you are retarded.
>>963088
Sorry bud, but now you are pivoting to POTENTIAL. You want to do something, do it right: create a generative script that respects copyright and doesn't just rip from the entire internet (including all of GitHub, including specifically licensed code, for example GPL). Make something that isn't susceptible to bias. Make something that can be done via an understandable, debuggable script, and not a 50,000-unit cluster outputting biased works, or in the case of ChatGPT, extremely neutered non-answers that just rip information from the web and don't give credit, even for code examples that require credit and attribution.
>>963091
I gave you about 10 different real-world use cases, half of them anecdotes from people I know IRL, and you ignore them all and think I'm pivoting. I gave you examples of where it's currently profitable and you also ignored those. NFT grifts would be another one. Not saying these are particularly admirable use cases, but they're certainly profitable. And like I said, it's just another tool to integrate into a pre-existing workflow, not an end-all be-all.
Large language models don't just rip shit from the internet; even though they are often trained on internet data, they give an original presentation every time. You're a little too old and cranky to understand, that's okay.
>>963091
>NFT
>youtube
>stolen gpt prompts
Get out of here, young man.
>>963088
>>963091
Look, I'm trolling a bit, but I'll be fair and grant you that in the arts it's not a big player yet. Your following posts, though, were utterly retarded and betrayed that you don't know jack shit about how ML is used right now IRL, and that's what my posts were mostly arguing against.
Where we disagree on the first point is that you tacitly believe ML is not going anywhere, when that's clearly not the case. Paradoxically, however, you ALSO tacitly believe that AI, if it is to succeed, MUST be this magic do-it-all bullet that completely replaces every CG software. I'm just saying that a tool that allows you to create images from a prompt in any style you specify will be extremely useful. They obviously still have a certain look to them, but they've gotten WAY better at realism in recent years; even I've been fooled at first glance by some AI-generated images.
Also, the in-between frames thing is probably the best use case in the arts for the near future. For animation, those frames are usually outsourced and take thousands of man-hours; being able to do it with AI lowers the barrier of entry to animation substantially.
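For context on the in-between frames point: the naive non-ML baseline is a plain cross-fade between the two keyframes, which smears instead of moving anything — that's exactly why motion-aware ML interpolators are an improvement. A toy sketch of the baseline, with plain lists standing in for pixel buffers:

```python
def inbetween(frame_a, frame_b, t=0.5):
    """Naive in-between frame: a linear blend of two frames at time t.
    Real ML interpolators estimate per-pixel motion instead of cross-fading,
    so moving objects translate rather than ghost."""
    return [(1 - t) * a + t * b for a, b in zip(frame_a, frame_b)]

# midpoint between a black and a white 4-pixel "frame"
print(inbetween([0, 0, 0, 0], [255, 255, 255, 255]))  # [127.5, 127.5, 127.5, 127.5]
```

Anything that moves between the keyframes just ghosts under this blend, which is the gap the ML approaches fill.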
>>963094
>Utterly retarded
>Clinging to NFT, youtube, and stolen code from gpl repos from prompting
I don't even know what to say, man.
>>963093
You have poor reading comprehension.
>>962808
>3060 12GB in case you want the extra VRAM to render stuff in 3D programs or AI
>3070 for the bus speed to play vidya
I would rather go for a 40xx 8GB card instead if your answer is vidya. 30xx cards only have DLSS 2.0; 40xx cards have DLSS 3.0 with AI frame generation that boosts your framerate in new games. Pick your poison.
>>963246
40xx cards have power connectors that are so busted they had to recall them and are CURRENTLY actively remaking them.
>>963247
I've only heard of that issue with 4090s and some Ti versions of the 4080/4070. In my opinion, a standard 4070 is the best GPU on the market right now: decent power consumption, plays everything with memetracing, and has 4K capabilities. You are also saved from the coil whine headache.
>>963246
>>963258
Sir, this is the 3DCG board.
>>963259
Yeah, I know. I'm just making things clear. Btw, I too have a 3060 12GB, and it runs Blender nicely. Since I also play video games, I had the same dilemma as OP.
>>963258
>You are also saved from the coil whine headache.
Ha, joke's on you Nvidia, I have tinnitus.
>>963023
>Personally I want to get the W7900
>He's falling for the "workstation" GPU scam
>And wants to use an AMD "Prosumer" card
You just proved that you are retarded. All /3/ anons know that for personal use you just buy the usual gaming card, because it works the same as the others without getting scammed out of $2K for some "tech support" you will never get/use. Leave that to multi-million enterprises that buy heaps of these for servers; that's the reason they make them, nothing else.
>>963246
>Pick a DLSS feature meant for vidya engines for a /3/ software workflow that doesn't even use DLSS at all.
You don't know shit about development. Go where you belong.
>>>/v/
>>963298
I want one because I like blower fans. Nothing more. You're looking too deep into this, anon. I'm just very irresponsible with money.
>>962799
The consumer-grade Nvidia card with the largest amount of VRAM you can get is the only valid answer. With enterprise-grade cards you pay out the ass for 24/7 specialist support, which you will never make use of as a solo. Anything else is gaymer poorfag cope.
You also get to dunk on /v/irgin gpulets in your free time. Win/win.
I'm thinking of getting an RTX 4080 to succeed my aging GTX 1060.
>>963355
You don't need a 4080. Get a 4060.
>>963498
If he can afford it, why stop him? Poorfag mindset.
>>963588
He doesn't need it, and the 4080 is an old card now. Wait for the 50 series and get a 4060 in the meantime.
>>962799
>Ryzen integrated graphics
If VRAM alone is the problem, you can adjust the system's maximum VRAM (the UMA frame buffer) in the BIOS: either set it to 8GB or leave it dynamic so the system can size it on the fly. Try this before selling your house to buy a scammy Nvidia card.
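After changing the BIOS UMA setting, it's worth confirming what the driver actually sees. A small sketch for Linux, assuming the amdgpu driver and its usual sysfs attribute (`/sys/class/drm/card0/device/mem_info_vram_total`, which reports bytes); the card index and path can differ per system:

```python
from pathlib import Path
from typing import Optional

def vram_total_mib(card: str = "card0") -> Optional[int]:
    """Read the total VRAM the amdgpu driver reports for a card, in MiB.

    Returns None if the sysfs file is missing (different driver, or no
    GPU exposed at this path)."""
    path = Path(f"/sys/class/drm/{card}/device/mem_info_vram_total")
    if not path.exists():
        return None
    return int(path.read_text().strip()) // (1024 * 1024)

if __name__ == "__main__":
    total = vram_total_mib()
    if total is None:
        print("no amdgpu VRAM info found at the expected sysfs path")
    else:
        print(f"VRAM reported by the driver: {total} MiB")
```

If the number still reads ~512 MiB after the BIOS change, the setting didn't take.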
I would recommend that you get the 6650 XT: same performance as the 7600 and a lot cheaper (at least where I live). Since it is older, it also has good support on Linux if you wish to use that.
>>962799
When is the 4090 coming back in stock?
>>965626
Does your country have a computer chain, or do you only have Best Buy to choose from? There's tons of stock at Canada Computers, though for some reason Best Buy is completely sold out, despite the prices being higher.
>>965627
I'm in the US. I'm aiming to get the Founders Edition, and there are two places I know of that officially sell them, which are Best Buy and Nvidia's own store.
>>963246
Frame generation is a fucking joke.
>>965629
>Founders Edition
Why? I mean, I guess it's a bit cheaper, but it also runs a bit hotter under load. And if your card is under load for long periods of time, you want it to run as cool as possible.
For anyone considering a 40-series RTX that isn't the 4090, just keep waiting: the Super series has leaked and will be released soon enough. The 4070 Ti Super looks like it will come with 16GB of VRAM, and considering that one also comes with the dual encoder, it's the best one to get when it's released.
>>966170
>encoding chip
So you're a streamer and a gamer. Get out.
>>962813
>13b model
>good
lol, lmao even
>>962922
Fucking jerk, you are.
>>963056
It's true; most of the time you end up doing more work getting it to not fuck up than anything else. It's a glorified filter for kids to use in school projects.
So far, most AI use in practical products has just been Chinese devs making phone games to steal money.
>>963619
This is actually a good take. The 4080 was never worth it and just got hobbyists and scalpers to snatch them up on a high.
>>966171
The dual encoder also works for rendering, you fucking dunce.
>>966170
What about CUDA, and is it worth moving from a 3060 to it?