/3/ - 3DCG

  • There are 25 posters in this thread.

File: IMG_6815.png (1.05 MB, 1222x976)
I've been running integrated graphics on my Ryzen 5 5600G and I keep getting crashes during renders; I assume the ~500MB of VRAM keeps running out or something.
Anyway, I figure I need a GPU. I'm looking at either an RX 7600, an RX 6650 XT, or an A770 if I find a good deal, something like that.

curious to hear what you guys are running
>>
You either buy a used 3090 or you buy a new 4090. Those are your two options. Self /thread.
>>
>>962800
im too poor for nvidia
my budget is ~$200
>>
>>962801
Another anon here: run away from AMD, and I say that as an AMD fag. Buy a 3060 with 12GB, nothing less.
>>
>>962806
yeah, looking into it further, from a quick google/reddit search it seems like people unanimously recommend nvidia for the CUDA cores
I didn't realise it made that big of a difference
I'm looking at used 3060/3060 Ti/3070 cards now, they're actually more reasonably priced than I thought they'd be
>>
>>962806
follow-up question: what's better, a 3060 with 12GB of VRAM or a 3060 Ti/3070 with 8GB?
>>
File: 443.jpg (18 KB, 239x400)
>>962808
Get the 12GB 3060 and you can run your uncensored AI girlfriend locally on your machine. The popular 13-billion-parameter LLMs are surprisingly good at roleplay and will fit entirely in 12GB of VRAM.
And uhh, yeah, it's a good card for 3D too I guess.
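As a sanity check on the "13B fits in 12GB" claim, here's a hypothetical back-of-envelope estimator; the 20% overhead factor for KV cache and activations is my own assumption, not a measured figure:

```python
def model_vram_gb(params_billions, bytes_per_param, overhead=1.2):
    """Rough VRAM needed to load an LLM: weight count times quantization
    width, padded ~20% for KV cache and activations (assumed, not measured)."""
    return params_billions * bytes_per_param * overhead

print(model_vram_gb(13, 0.5))  # 13B at 4-bit quant -> ~7.8 GB, fits in 12GB
print(model_vram_gb(13, 2.0))  # 13B at fp16 -> ~31 GB, nowhere close
```

Which is why the 4-bit quantized releases are what people actually run on a 3060.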
>>
>>962808
You need all the VRAM you can get, otherwise your shit just breaks. A 3060 Ti will be faster, but that doesn't matter if you can't load the scene. Only buy Nvidia.
>>
>>962813
>he can't fit the 20B MLewd model
ngmi
>>
>>962816
>MLewd 20B
vanillashit, I sleep
t. MLewdBoros 13B enjoyer
>>
>>962808
VRAM matters.
>>
>7900 XTX
>MBA reference card
The first one had the defective vapor chamber; the second one has been fine. It's nice and I don't have to worry about a damn thing.
>>
If you only have $200 you aren't going to make it in this
>>
>>962813
>AI
ngmi
>>
>>962828
die
>>
>>962827
>Navi 31 GPU
Wait until you get pump-out. This die hates living. How loud is it, by the way? Just wondering if the MBA design is decent.
>>
>>962827
why'd you choose AMD over Nvidia?
>>
>>962799
Pretty much you do what >>962800 said.
>>962806
You COULD get away with AMD if you are mostly using Blender, but for anything else you are fucked and have to get jewvidia, cuz all that shit just runs on CUDA or the CPU.

The RTX 3060 is the bare minimum you should get; 12GB is a decent amount of VRAM, and 8GB just won't do if you have a "medium" sized scene. When I had a 2070 four years ago I could barely push renders for my finals. Now I have no issues with a 3090, so try to buy one if possible. Otherwise, get the 3060, use it until you can save some more, and then get a 3090, used again.

Without enough RAM, be it video or system, you are just fucked doing 3D stuff, period.
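To put rough numbers on how fast a scene eats VRAM, here's a hypothetical texture-memory estimate. It's my own sketch with uncompressed sizes; a real renderer also spends VRAM on geometry, the BVH, and framebuffers on top of this:

```python
def texture_mib(resolution, channels=4, bytes_per_channel=1, mipmaps=True):
    """Size of one square uncompressed texture in MiB.
    A full mip chain adds roughly one third on top of the base level."""
    size = resolution * resolution * channels * bytes_per_channel
    if mipmaps:
        size = size * 4 // 3
    return size / 2**20

print(texture_mib(4096, mipmaps=False))  # one 4K RGBA8 texture: 64.0 MiB
# with mipmaps it's ~85 MiB, so ~100 such textures already fill an 8GB card
```

A hundred 4K textures sounds like a lot until you count albedo, normal, roughness, and displacement maps per material in a "medium" scene.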
>>
>>962806
>>962813
>>962917
got a used EVGA 3060 12GB because I'm poor, thanks bros
>>
>>962920
>Evga
EVGA is the most prone to breaking. ONLY get MSI.
>>
>>962922
well, it's too late now, I already bought it
also, I always heard EVGA was one of the best GPU manufacturers and that MSI was one of the cheapest
I had an MSI AMD card in the past that died on me, and I watched some Northwest Repair vids where he shits on MSI, so I just assumed they're not great, but maybe that's just for AMD

there's still 500 days left on the warranty
it'll prob be fine

you're literally the first person I've ever seen say EVGA is bad
>>
>>962922
>>962927
>northwest repair has 2 vids repairing the exact card I bought
what the FUCK
should I resell it and buy something else?
>>
>>962922
>Evga bad, buy shit brand that made 4090 bricks
Lmao
>>962927
Don't worry anon, when you do heavy 3D shit, anything will break down eventually. EVGA has the best RMA at least, so if it dies and the dude who sold it to you has the receipt, they will do the RMA service.

I've owned cards from pretty much all brands and never had issues, except one time with a cheap H110 MSI board that died, but it was a POS anyway. Nowadays I've kinda stuck to Gigabyte because of mobo features/price.
>>
>>962981
>EVGA has the best RMA at least
Dude, EVGA is so shit they completely exited the nvidia graphics card business. They are DONE
>>
Any reason to upgrade from my 1070ti 12gb or is it just a meme
>>
>>962989
The 1070 Ti is an 8GB-only card
>>
1660S
just werks
>>
>>962982
>EVGA bad because they left nvidia graphics card business
They left because Nvidia were fucking assholes to do business with; that's why they stopped working with them and didn't make newer GPUs, but they STILL do RMA service on their products.
Even other board partners were threatening to leave jewvidia because they screwed them badly, mostly during the whole crypto/coof years.
>>
>>962799
I have a 3080 10GB; I have the money to upgrade, but it does everything I want, so I see no reason to.
Unfortunately, as much as I hate ngreedia, I have to agree with the other anons recommending the 3060 12GB; having had AMD cards in the past, the software support is just not there.
Get a used one off eBay or hardwareswap, that should keep you in budget.
>>
>>962903
It's fine, I promise. I'd rather not do any water cooling; my workload doesn't justify it. Also, loops are a bitch to maintain, so I'd simply rather not. I can afford to run noisy fans.
The MBA is loud under maximum load near its temp limits, but the real problem is coil whine. Holy fuck, I can hear the coil whine over the fans going full speed on this thing. I recommend against the MBA model for 2 reasons:
1) the odds of a defective vapor chamber are about 1 in 10, and I can attest that this is true
2) coil whine and fan noise are better on other models
I went for it because Yeston was my backup if this second MBA turned out defective. It's nearly the smallest design, and 2 8-pins was a must. Yeston was the best model for my size and pin needs. Gigabyte has one too, but fuck Gigabyte, it looks cheaper than my 750 Ti SC.

>>962907
I've been Nvidia-free since 2012. Please understand this is for both cost and autistic reasons. I like reference cards and blower fans more. I just want a fucking rectangle with no gamer bullshit at a more affordable price point. I know my needs. Yes, Nvidia can do my workloads in slightly less time, but the ROI is better for AMD for my purposes. I don't use CUDA or RT in my workloads, nor do they benefit from them to any significant degree.
Personally I want to get the W7900 because, like I said, I'm autistic and I like blower fans and rectangles, and the pro line is basically what I want. Yes, Nvidia offers that too, but again, price matters to me, and I don't benefit from the extras they offer, so AMD is my best fit.

For work I'm actually building a number of remote workstations, and we're debating between the W7500 and the W7600, because single-slot cards like that are nice. Yeah, x8 is gay, but for our use case it's more than enough. We need 4 of the fuckers though, and that adds up. Thankfully GPU passthrough on Proxmox, and me forcing their hand, makes this a bit easier than normal. Also, a Threadripper board with 6 PCIe slots, all at x16 Gen4, makes this a breeze. The hard part will be the storage.
>>
Nvidia is obviously treating consumer GPUs like a legacy business, and AMD is too retarded to be competition. I would switch to CPU rendering and invest in a decent one; otherwise it will be buying tokens and paying $0.99 for every render in a few years.
>>
>>963023
>I don't use CUDA or RT in my workloads nor do they benefit from it to any significant degree.
oh so you're a beg then and using a liquid cooled AMD gpu and the icing on the cake is you're writing a wall of text as well. Great. Just great.
>>
>>963024
>NVidia is obviously treating consumer GPU like legacy business and AMD is too retarded to be competition. I would switch to CPU and invest into decent one, otherwise it will be buying tokens and paying 0.99$ for every render in few years.
AI has not been proven to be actually profitable in the arts.
>>
>>963031
lol
lmao
>>
>>963032
It's true.

It hasn't turned a profit. The only uses are in medicine (identifying afflictions) and the military (tax-funded). You may say: but anon, all those movies and TV shows coming out, surely they must rely on AI and be profitable. This isn't true. Not only is streaming media not profitable for anyone, but fewer and fewer movies are being made each year now.
>>
>>963036
>movies and tv
what are you, 70 years old?
nobody here is arguing it's used in "tv", obviously it's not

>The only use is in medicine (identifying afflictions) and the military (tax funded)
AI in medicine was something hyped like a decade ago that turned out to be a commercial failure, what are you even talking about, old man
people use it all the time in their day-to-day lives
my friend uses GPT-4 to draft scripts for coding
he and a lot of other people I've seen also just use it like a search engine for general queries
I know two people who use ChatGPT for law, specifically tax codes and criminal law
students are notoriously using it to write their essays, I was shocked to catch my sister using it for a college essay

just less than a month ago "AI" (really it should be called ML, but whatever) was used to digitally "unwrap" and reveal the partial text of a Herculaneum scroll, a breakthrough in the classics
of course, if we're going to talk ML, which all "AI" is, OCR has been used for decades now for a million different things, and facial recognition in phones/surveillance and image classification in general are huge

in 3D/VFX it would be used as a minor tool in a workflow, either for texturing, making HDRIs, Photoshop assets for design work, etc

in terms of profitability in art it's mostly in independent work, since obviously the tech is new
I've seen AI clip art in many youtube videos, videos with hundreds of thousands or millions of views, i.e. they're profitable
the AI voice synthesis tech is popping off recently
there's a handful of indie artists on twitter making money off AI work
AI is great for creating in-between frames for animation
>>
AI is mostly great at taking up dozens of gigabytes of disk space
>>
>>963030
Not everyone needs that shit, especially at increased cost. If I can save money by getting an equivalent, I will.
I work in game dev, and knowing the programmers I'm dealing with, I need options. I keep some Arc cards around just to make sure we're thorough.
>>
>>963030
>>963065
Forgot to add that I clearly stated I don't use liquid-cooled cards and prefer blower cards. Fuck liquid cooling, it's more effort and maintenance than it's worth.
>>
>>963040
so it hasn't been profitable in the arts.

Your friend wrote some bad, derivative, STOLEN code that breaks its original license

>Ive seen AI clip art in many youtube videos, videos with hundreds of thousands or millions of views, ie theyre profitable

you are a joke
>>
>>963072
if you don't see the potential you are retarded
>>
>>963086
sorry bud, but now you are pivoting to POTENTIAL.

You want to do something, do it right: create a generative script that respects copyright and doesn't just rip from the entire internet (including all of GitHub, including specifically licensed code, for example GPL). Make something that isn't susceptible to bias. Make something that can be done via an understandable, debuggable script, and not a 50,000-unit cluster outputting biased works or, in the case of ChatGPT, extremely neutered non-answers that just rip information from the web and don't give credit, even for code examples that require credit and attribution.
>>
>>963088
I gave you about 10 different real-world use cases, half of them anecdotes from people I know IRL, and you ignored them all and think I'm pivoting
I gave you examples of where it's currently profitable and you also ignored those
NFT grifts would be another one
not saying these are particularly admirable use cases, but they're certainly profitable

and like I said, it's just another tool to integrate into a preexisting workflow, not an end-all be-all
large language models don't just rip shit from the internet; even though they are often trained on internet data, they give an original presentation every time

you're a little too old and cranky to understand, that's okay
>>
>>963091
>NFT
>youtube
>stolen gpt prompts

get out of here young man
>>
>>963088
>>963091
look, I'm trolling a bit, but I'll be fair and grant you that in the arts it's not a big player yet. your following posts, though, were utterly retarded and betrayed that you don't know jack shit about how ML is used right now IRL, and that's what my posts were mostly arguing against
Where we disagree on the first point is that you tacitly believe ML is not going anywhere, when that's clearly not the case
Paradoxically, however, you ALSO tacitly believe that AI, if it is to succeed, MUST be this magic-bullet do-it-all that completely replaces every piece of CG software

I'm just saying that a tool that allows you to create images from a prompt in any style you specify will be extremely useful
They obviously still have a certain look to them, but they've gotten WAY better at realism in recent years; even I've been fooled at first glance by some AI-generated images

Also, the in-between frames thing is probably the best use case in the arts for the near future
for animation those frames are usually outsourced and take thousands of man-hours; being able to do it with AI lowers the barrier to entry for animation substantially
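For context, the crudest possible "in-between" is a plain cross-dissolve; the actual interpolation models (RIFE and friends) instead warp pixels along estimated motion, but the interface is the same: two key frames in, an intermediate frame out. A toy sketch of the naive version (my own illustration, not any tool's API):

```python
import numpy as np

def inbetween(frame_a, frame_b, t):
    """Naive in-between frame at time t in [0, 1]: linear blend of two keys.
    Real interpolators estimate optical flow and warp pixels along it, which
    avoids the ghosting this version produces on fast motion."""
    mix = (1.0 - t) * frame_a.astype(np.float32) + t * frame_b.astype(np.float32)
    return mix.astype(frame_a.dtype)

# halfway between a black frame and a white frame is mid-grey
black = np.zeros((4, 4, 3), dtype=np.uint8)
white = np.full((4, 4, 3), 255, dtype=np.uint8)
mid = inbetween(black, white, 0.5)
```

The learned models earn their keep precisely where this blend fails: large, fast motion between keys.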
>>
>>963093
>Utterly retarded
>Clinging to NFT, youtube, and stolen code from gpl repos from prompting

I don't even know what to say, man
>>
>>963094
you have poor reading comprehension
>>
>>962808
>3060 12gb in case you want the extra vram to render stuff in 3d programs or ai
>3070 for the bus speed to play vidya
I would rather go for a 40xx 8GB card instead if your answer is vidya. 30xx cards only have DLSS 2; 40xx cards have DLSS 3 with AI frame generation that boosts your framerate in new games. Pick your poison.
>>
>>963246
40xx cards have power connectors so busted that the design had to be revised, and they are CURRENTLY actively reworking them
>>
>>963247
I've only heard of that issue mostly with 4090s and some Ti versions of the 4080/4070. In my opinion, a standard 4070 is the best GPU on the market right now. Decent power consumption, plays everything with memetracing, and has 4K capabilities. You are also saved from the coil whine headache.
>>
>>963246
>>963258
sir this is the 3dcg board
>>
>>963259
Yeah, I know. I'm just making things clear. Btw, I too have a 3060 12GB, and it runs Blender nicely. Since I also play video games, I had the same dilemma as OP.
>>
>>963258
>You are also saved from the coil whine headache.
Ha, joke's on you Nvidia, I have tinnitus.
>>
>>963023
>Personally I want to get the W7900
>He's falling for the "workstation" GPU scam
>And wants to use an AMD "Prosumer" card
You just proved here that you are retarded. All /3/ fags know that for personal use you just buy the usual gaming card, because it works the same as the other ones, without being scammed out of $2K for some "tech support" that you will never get/use. Leave that shit to the multi-million enterprises that buy heaps of these for servers; that's the reason they make them, nothing else.
>>
>>963246
>Picks a DLSS script meant for vidya engines as the thing for a /3/ software workflow that doesn't even use DLSS at all
You don't know shit about development, faggot, go where you belong.

>>>/v/
>>
>>963298
I want one because I like blower fans. Nothing more. You're looking too deep into this anon. I'm just very irresponsible with money
>>
File: 1690333435079346.png (11 KB, 399x125)
>>
>>962799
The consumer-grade Nvidia card with the largest amount of VRAM you can get is the only valid answer.
With enterprise-grade cards you pay out the ass for 24/7 specialist support, which you will never make use of as a solo.

Anything else is gaymer poorfag cope
You also get to dunk on /v/irgin gpulets in your free time. Win/win.
>>
I'm thinking of getting an RTX 4080 to succeed my aging GTX 1060.
>>
>>963355
You don't need a 4080. Get a 4060.
>>
>>963498
if he can afford it why stop him
poorfag mindset
>>
>>963588
he doesn't need it, and the 4080 is an old card now. Wait for the 50 series and get a 4060 in the meantime
>>
>>962799
>ryzen integrated graphics
if VRAM alone is the problem, you can adjust the system's max VRAM in the BIOS: either set it to 8GB or leave it dynamic so the system can define it on the fly. Try this before selling your house to buy a scamming Nvidia card.
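If you want to confirm from the OS side what the BIOS actually allotted, on Linux the amdgpu driver exposes the carve-out size in sysfs. The exact path below is a typical example and may differ per system, so treat it as an assumption:

```python
def vram_gib(raw_bytes):
    """Convert the raw byte count reported by the driver into GiB."""
    return raw_bytes / 2**30

# Hypothetical usage on Linux with amdgpu (card index may vary):
#   with open("/sys/class/drm/card0/device/mem_info_vram_total") as f:
#       print(vram_gib(int(f.read())))
print(vram_gib(512 * 2**20))  # a 512 MiB allocation is 0.5 GiB
```

If that prints 0.5 on OP's box, the crashes during renders line up with the carve-out running out.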
>>
I would recommend that you get the 6650, same performance as the 7600 and cheaper by a lot (at least where I live). Since it is older, it also has good support on Linux if you wish to use it.
>>
File: pepe-desk.png (20 KB, 638x547)
>>962799
When is the 4090 coming back in stock?
>>
>>965626
Does your country have a computer chain, or do you only have Best Buy to choose from? There's tons of stock at Canada Computers, though for some reason Best Buy is completely sold out, despite the prices being higher.
>>
>>965627
I'm in the US. I'm aiming to get the Founders Edition, and there are two places I know of that officially sell them, which are Best Buy and Nvidia's own store.
>>
>>963246
Frame generation is a fucking joke
>>
>>965629
>Founders Edition
Why? I mean I guess it's a bit cheaper, but it also runs a bit hotter under load. And if your card is under load for long periods of time, you want it to be as chill as possible.
>>
For all who are considering getting a 40-series RTX that isn't the 4090: just keep waiting. The Super series was leaked and will be released soon enough; the 4070 Ti Super seems like it will come with 16GB of VRAM, and considering that one comes with the double encoding chip, it's the best one to get when it's released.
>>
>>966170
>encoding chip
so you're a streamer and a gamer. Get out.
>>
>>962813
>13b model
>good
lol, lmao even
>>
>>962922
fuckin faggot you are
>>
>>963056
It's true, most of the time you end up doing more work getting it to not fuck up than anything else. It's a glorified filter for kids to use in school projects.

So far, most AI use in practical products has just been Chinese devs making phone games to steal money.
>>
>>963619
This is actually a good take. The 4080 was never worth it and just got hobbyists and scalpers to snatch them up on a high.
>>
>>966171
The double encoder also works for rendering, fucking retard
>>
>>966170
What about CUDA? And is it worth moving from a 3060 to it?


