/3/ - 3DCG

  • There are 26 posters in this thread.


File: Questions guy.jpg (51 KB, 975x1300)
Questions Thread:
Fucked up the OP edition

Got a question about anything? Have multiple questions? Ask them here!
When did you suck your first cock?
Inb4 flooded by blendlets
File: screencaprobo01.png (69 KB, 752x547)
I want to make low poly modular buildings, which could be used in building a town. I want all buildings to share one texture map.
I would like to know what a good ratio of triangles to pixels is. For example, if I have a wall that could be built 3 units wide and one unit high, should I split the mesh into 3 quads, each sharing a 1:1 texture space, or have one 1x3 quad mapped to a 1x3 region of the texture as well?
Basically, is it better to have 1 quad with 32 by 96 pixels, or 3 quads each with 32 by 32 pixels? Obviously it would be scaled up a lot. If I am not clear in my description, let me know and I will do my best to clarify.

Pic not mine
Look into trim sheets. Then just set up your model's UVs accordingly. Maybe have part of the texture be dedicated to trim sheet stuff, then like towards the bottom have things that can't use them, like manhole covers or other little graphics. Then be creative with your textures by editing colors with overlays and shit.
As far as size stuff goes, just try to maintain a uniform texel density as much as you can. On your texture, just make things all relative as best you can as well.
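The texel-density point above can be sanity-checked with quick arithmetic. A minimal sketch in plain Python, using the 32/96-pixel numbers from the question; the function name is just illustrative:

```python
# Texel density = texture pixels per world unit of surface. Both layouts
# in the question resolve to the same density, so they look identical on
# screen; the real trade-off is vertex count vs. unique texture detail.

def texel_density(pixels: float, world_units: float) -> float:
    """Pixels of texture mapped per world unit along one axis."""
    return pixels / world_units

# One 1x3 quad mapped to a 32x96 strip of the texture:
one_quad = texel_density(96, 3)

# Three 1x1 quads, each mapped to the same 32x32 square:
three_quads = texel_density(32, 1)

print(one_quad, three_quads)  # 32.0 32.0 -- identical density
```

The 3-quad version repeats one 32x32 tile (cheaper texture space, but visible repetition up close); the 1-quad version spends three times the texture area on unique detail.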
That doesn't answer the question though. Trim sheets are how I was going to do it, but the question is, e.g., looking at image >>696171:
the cross section of the tree trunk was copied and pasted 15 times along the full length of the texture for 15 cross sections, and on the model that is one rectangle. But it could have been a single tree cross section, i.e. 1/15 of the texture length, repeated across 15 squares.

It's texture size vs model triangle number.
Rent free
>Then be creative with your textures by editing colors with overlays and shit.
Your post has some good info, but can you elaborate on this part please? I'm not sure what you mean.
is this for game boy?
File: Colors.jpg (456 KB, 2324x1558)
Just mix in a color to your diffuse map before you send it into a shader.
I'm using the "hue" blending mode in this case, but you can play around with the blending modes to find what works.

>inb4 Blender
I'm sure this is pretty simple to do in just about any 3d software.

You could even go a step further and set up a gradient of colors to pick from randomly. So each object will be differently colored based on the range you have in the gradient.
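The "gradient of colors to pick from randomly" idea above can be sketched in a few lines. A minimal, software-agnostic example in plain Python; the function names and brick colors are made up for illustration:

```python
import random

def lerp(a, b, t):
    """Linearly interpolate between two RGB tuples at parameter t."""
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def random_gradient_color(stop_a, stop_b, rng=random):
    """Pick a color somewhere along the gradient between two RGB stops."""
    return lerp(stop_a, stop_b, rng.random())

# e.g. vary brick hues between a dull red and an orange-brown,
# one random pick per object so instances don't look cloned.
brick_a = (0.55, 0.18, 0.12)
brick_b = (0.70, 0.35, 0.15)
color = random_gradient_color(brick_a, brick_b)
print(color)
```

In practice you would seed the pick per object (e.g. from an object ID) so the color stays stable between frames.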
What's the best tool/method for character animation, second only to mocap (which is outside of my reach)?
How do I learn it best?
no, but I want to pretend... I just want to make it properly. I know you can say that it doesn't matter, but I would like to know which one is better and when one way is better than the other.
The answer will depend on your intended application, but the go-to method is obviously classic keyframe animation using an IK/FK character rig.

Another option is to drive animations through custom scripted rigs utilizing any input devices (mouse, keyboard, game controllers, etc.), possibly in conjunction with physics simulation, to create a digital marionette that can act out the animation you need.

Best way to learn it is to clear up the coming ten years in your calendar and start reading and practicing anything and everything you can find on the subject.
Why are you still not making money from India?
You can film a scene and then directly animate on top of it, basically like a 2d rotoscoping but with 3D.
Procedural animation is also a thing. Use procedurals as base, keyframes on top.
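The "procedurals as base, keyframes on top" idea can be sketched as layered evaluation. A minimal sketch in plain Python under the assumption of a single animated value; the sinusoidal bob and the key values are invented for illustration:

```python
import math

def procedural_bob(t, amplitude=0.1, freq=2.0):
    """Procedural base layer: a simple sinusoidal bob over time t (seconds)."""
    return amplitude * math.sin(2 * math.pi * freq * t)

def keyframe_layer(t, keys):
    """Linear interpolation between sparse (time, value) keyframes."""
    keys = sorted(keys)
    if t <= keys[0][0]:
        return keys[0][1]
    if t >= keys[-1][0]:
        return keys[-1][1]
    for (t0, v0), (t1, v1) in zip(keys, keys[1:]):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

def pose(t, keys):
    """Final value: procedural base with hand-keyed offsets layered on top."""
    return procedural_bob(t) + keyframe_layer(t, keys)

keys = [(0.0, 0.0), (0.5, 0.3), (1.0, 0.0)]
print(pose(0.25, keys))  # ~0.15: keyed offset, bob crosses zero here
```

The point of the layering is that the procedural pass handles the repetitive motion for free, and the keyframes only have to describe the deviations from it.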
Do I need to bake a normal map for a plain subdivided sphere, or does it add nothing? Is there any way to just add normal-map chamfers to a cube without having to bake it? Don't know if I am explaining it right. Baking takes so long for basic stuff.
File: 1549400710877.jpg (97 KB, 800x800)
>Superb Pixel and Colors
File: 1565488188164.gif (2.79 MB, 300x252)
How the ever loving FUCK do I get into UV unwrapping? I've just made a high-poly character and have no idea where to begin. I'm not walking through the tutorial minefield again; anybody got a good primer video/playlist?
No, you don't need unique normal maps for that. Face-weighted normals are what you need. Search the Polycount wiki; it has a lot of info. When you understand how it works, find plugins for your software of choice.
You don't wanna UV map something too dense. For a classic UV unwrap of something "highpoly" you would do it at subdiv level zero, where the polycount is manageable, in the ~100K poly range max.

If you have an asset with millions of polygons you wanna texture, use tech like Ptex instead.

Don't expect UV-unwrapping to be something you get done very quickly, esp not the first few times you try it. Unwrapping a character cleanly can be a several hour long process even with experience, have patience.
I want Models-Resource models fully rigged and operational in Daz Studio / DAZ3D. How do?

alright. Thanks anon.
does intersecting geometry slow down render time?
I don't mean millions of polygons or a whole object in the middle of another object. I'm talking more in terms of the bottom or a corner of one object being a couple of centimeters into another.
If you're using raytracing, those rays won't go through the object to see the other one, except in the places it's poking through.
Plenty of modellers just say fuck it and make things with intersecting objects instead of doing it cleanly.

It works just fine in game engines as well, though I don't know the mechanics. Probably they draw them anyway, but if it's not super dense it doesn't matter. Engines are good at pushing polygons.
no, it won't slow down your render, nor does it really matter.
how do i get started? everything is so intimidating and scary, especially the technical parts
just have fun. It's ok to mess up, so don't expect yourself to get everything right.
>It's ok to mess up
Messing up is preferable even. You can learn a lot just by winging it and googling/making a post asking how to fix whatever you fucked up.
Genuinely confused with animation transferring between different rigs in Maya.

Got a bunch of Maya ASCII files as well as some BVH files. Managed to either import them or convert them to FBX to use in Maya with my own defined rig.

The only way I can use the animations from these files is by changing the target to the other character.

However, I'd like to copy the keyframes from one rig, to another.

No idea how
Anyone have some good particle/fibermesh hair tutorials?
I get how to make hair, in blender, generally. But I have no idea how hair works cos I'm a fucking baldlet.

So I mean like, masking out parts of the scalp, creating new particles, combing, hiding, repeat, etc. Any good tutorials that cover the process of why certain areas work?

Zbrush or blender okay, I'm not really looking for software specific but general hair creation tips.
Realistic preferred, not cartoony/anime.
File: UV issue.jpg (114 KB, 890x1390)
Hey guys, I'm having a problem and I don't really know what's up.
Is there any way in Substance Painter to have shit like blur only blur inside the islands? Pic related has the blur cross over into a separate UV island which is obviously not what I want. Kinda makes the blur and the other blur filters useless. I know it's possible to just mask it out with a paint layer over the top, but it seems like something that shouldn't need to be done.
Any help appreciated.
Anyone know why roughness gets fucked up when importing textures from Substance Painter into Blender?
Things would look perfectly fine in SP, but get way too glossy in Blender.
Am I exporting in the wrong file format or what (PNG 8bit)? I could do 16bit, but honestly things don't look that different since the 8bit output is dithered.
Gonna ask the obvious: did you need to invert the texture but forget to do it?
How do I go from zBrush to Substance painter? I don't understand much about UVs. A simplified explanation would be appreciated
does Substance really not have a way to apply a gradient from the top down in global space, regardless of UVs? Seems like it should.

You would need a fairly low-poly model for Substance Painter. If you want an easy UV solution, look up Unfold3D, or use 3D-Coat to do UVs.
Nah it ain't that.
Inverting does exactly what you'd expect and fucks everything.
Most of the time I have to throw a color ramp after it to dial it in by bringing the black color closer to gray. So something is up where it's all offset or something. Using Non-Color as well.

Blender is shit.
Like without using the position filter?
Wow! Great contribution faggot. That really helps.
Ctrl-Z yourself.
he's not wrong to be fair
>bringing the black color closer to gray
A gamma issue? Gloss/rough should be linear, make sure it's being written and read as such. Painter can be a bit fucky with that sometimes.
oh you mean like apply a layer via position map? I'll try it
File: positional blending.jpg (338 KB, 2004x748)
He's not (to a degree), but I gotta work with what I got, and what I know. If someone can't make something halfway decent regardless of software, they're a shit artist. Blender has a ton of hangups and shit implementation in places, but it's possible to supplement them in other places with other software (which is what I'm doing).

Yeah that sounds like it could be the issue. Where would I specify the linear interpolation in SP? Pretty sure Blender already uses linear as default when importing maps.

Yeah, there's a position filter when you're doing masks. The mask generator/builder has a positional filter as well, which means you can use it plus the curvature and other shit on top. That's why the position maps are baked.
>That's why the position maps are baked.
makes a lot of sense, I didn't realize how useful they could be
Yeah they're pretty useful, but I honestly find them a bit difficult to use at times. They can be pretty touchy, and it can be hard sometimes to get them exactly where you want and blended right.
For those times I just use a gradient fill, then move and scale it in the 2d/UV view.
Then if it's over anything I don't want, just mask it out with a paint layer on top. The paint layer isn't very procedural, but I don't know of another good way to do it. Mainly because I don't know every little in and out of SP.
>Where would I specify the linear interpolation in SP?
Hm, I thought it was possible to specify that by output, but apparently it can only be done on Designer. Anyway, I output a couple of normal maps in PNG and EXR format, and it seems that Painter is baking the sRGB gamma for PNG. You could try to use EXR to force linear export, or tell Blender to linearize the map.
*Actually, I think the correct way is to tell Blender to take the map as linear and not do any gamma correction. This shit always confuses me.
Yeah I just did a test on my own and Painter is definitely baking sRGB for the Roughness/Metallic maps. Which is honestly pretty weird. You'd think it'd force grayscale on those no matter what.
Switching to EXR and forcing linear seems to have done the trick. Thanks.
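The mismatch being described comes from the standard sRGB transfer function (IEC 61966-2-1). A minimal sketch in plain Python of the encode/decode pair, to show how big the error is when a value baked through the sRGB curve gets read back as linear; the 0.5 roughness value is just an example:

```python
def srgb_encode(linear: float) -> float:
    """Linear value -> sRGB-encoded value (piecewise sRGB curve)."""
    if linear <= 0.0031308:
        return linear * 12.92
    return 1.055 * linear ** (1 / 2.4) - 0.055

def srgb_decode(encoded: float) -> float:
    """sRGB-encoded value -> linear value (inverse of the above)."""
    if encoded <= 0.04045:
        return encoded / 12.92
    return ((encoded + 0.055) / 1.055) ** 2.4

# A mid roughness of 0.5, baked through the sRGB curve on export:
stored = srgb_encode(0.5)      # ~0.735 ends up in the 8-bit PNG
print(stored)
# If the renderer reads that as linear without decoding, the shading
# error is the gap between the two, which is large at mid values:
print(abs(stored - 0.5))       # ~0.235
```

This is why the fix is either exporting a format that stays linear (EXR) or telling the renderer to undo the curve; either way, encode and decode have to cancel out exactly.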
File: images (49).jpg (30 KB, 694x442)
>Anyone have some good particle/fibermesh hair tutorials?
>I get how to make hair, in blender, generally. But I have no idea how hair works cos I'm a fucking baldlet.
>So I mean like, masking out parts of the scalp, creating new particles, combing, hiding, repeat, etc. Any good tutorials that cover the process of why certain areas work?
>Zbrush or blender okay, I'm not really looking for software specific but general hair creation tips.
>Realistic preferred, not cartoony/anime.
I have started making textures for Blender and Daz and have noticed this also. It's especially problematic with Daz because you can't add a ramp, so usually I have to tweak each roughness map in PS.

I don't know what it is, even when I export relatively high roughness from SP, it's like they lighten it on the export
Do you mean you are exporting them as EXR?
Not him, but I export everything from Substance in EXR. Saves a few headaches, and there's always the option of converting to a more adequate format/bitdepth if needed.
How do you texture film assets? How does the workflow differ between texturing for offline rendering vs realtime?
I have a decent understanding on how to texture for games, but I haven't got a clue on what the standard workflow is for movies/films. Do they still use image textures, or is it all procedural? Are there any differences in how you unwrap assets? I don't even know where to begin looking for this information.
Heavy use of UDIMs. Procedural is helpful for any workflow, and should be used as much as possible before going in and putting human touches on something to finalize it.
Texture sizes and stuff are a bit more lenient, but efficiency and economy are still important. It's oftentimes just easier to add another UDIM tile than to bother trying to pack things into a UV map as efficiently as possible, though.
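For reference, UDIM tile numbering follows a fixed convention: tiles count up from 1001 across UV space, ten columns per row, then jump by 10 per row of V. A minimal sketch in plain Python:

```python
import math

def udim_tile(u: float, v: float) -> int:
    """UDIM tile number for a UV coordinate.
    Tiles 1001-1010 cover the first row (0 <= u < 10, 0 <= v < 1);
    each additional row of V adds 10 to the tile number."""
    col = math.floor(u)
    row = math.floor(v)
    assert 0 <= col < 10, "UDIM only spans 10 tiles in U"
    return 1001 + col + 10 * row

print(udim_tile(0.5, 0.5))   # 1001, the default tile
print(udim_tile(1.2, 0.3))   # 1002, one tile to the right
print(udim_tile(0.5, 1.5))   # 1011, one row up
```

This is why adding "another tile" is cheap: the asset just references one more numbered texture file, with no repacking of existing UVs.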
Is there something like this in maya?

I've been trying to use lattices but it's not the same.
Look up Mari tutorials. Mari is pretty much the industry standard in texturing for big budget films.
Wanna explain why, to someone who doesn't know much about different file formats aside from PNG vs JPEG?
Using EXR ideally saves everything in linear gamma, so it's more comfortable (to me) to later load the textures without worrying too much about which gamma curve was used. PNG and other formats that save with an sRGB gamma rely on the export software for doing the gamma encoding, and this is a point of possible failure for a pipeline where consistent color/values are desired.

Then there's the issue of bit-depth and precision. EXR allows you to use 32-bit float per channel, which is a lot of precision for things that may benefit from it, like displacement maps, Z-maps, vector data in general, etc. Of course you can also reduce bit-depth in EXR for certain textures (and it should generally be done, like for example with diffuse or roughness maps). PNG can use up to 16-bit integer per channel, which afaik is comparable in range to 32-bit float, but has less precision where it matters -- in low-to-mid values. In other words, high values are given the same space (in the field of representable values) as lower ones, but they aren't used as much, especially for visual data; and, for non-visual data, most of the time it's between -1 and 1, where float has the most precision.

You can also store different planes in one EXR file (think diffuse + specular + sss render passes, or really any arbitrary data you want to export), which can make your work more streamlined, especially when compositing.
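The bit-depth point above can be made concrete by comparing quantization steps. A minimal sketch in plain Python, assuming values normalized to [0, 1]; the float32 spacing of 2^-24 applies in the interval [0.5, 1):

```python
# Smallest representable step for each storage format, near mid-gray.
# Integer formats spread precision evenly across [0, 1]; float
# concentrates it where the exponent is small, i.e. low-to-mid values.

step_8bit = 1 / 255            # ~0.0039   (8-bit PNG channel)
step_16bit = 1 / 65535         # ~0.000015 (16-bit PNG channel)
step_f32_mid = 2.0 ** -24      # ~6e-8     (float32 spacing in [0.5, 1))

print(step_8bit, step_16bit, step_f32_mid)
print(step_8bit / step_f32_mid)   # float32 is ~65000x finer than 8-bit here
```

For values below 0.5 the float32 spacing shrinks further (the exponent drops), which is the "precision where it matters" point: smooth displacement or vector data survives, where 8-bit would band visibly.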
how do i challenge one of the 5 masters of lowpoly in order to take their seat?
What exactly is the deal with those japanese MMD videos, specifically the highly-detailed dance ones? Do they actually hand-animate all those or do they just attach premade dance scripts to the models? It's bizarre seeing such good movement alongside obvious physics errors, clipping, etc and generally anything that isn't a dance video in MMD is very amateurish by comparison.
I have no idea, but if I had to guess, I'm pretty sure it's all hand-animated. Typically you'll see Western MMD users, or nips trying to make a meme, produce results that look like shit. But I think the nips who really like their robot music will go autistically far for their vocaloid waifu. It makes sense to me, since you can look at other weeb mediums and it's pretty similar: Westerners attempt to mimic nip magic and it always looks like shit, whereas hikikomori try really hard and produce really great stuff.
You have to defeat all their lieutenants first.
Maybe have a hard time with the first few, then finish off the rest in a music montage.
After that, you'll probably have an Earth shattering secret reveal dumped on you that makes you question everything you've done thus far, and completely changes your arc for the rest of the series. But at least you'll get some sick power boosts, and make friends with your rivals.
