Thursday, March 29, 2012

VFX Back to Basics Series: 5. What is shading and texturing?


This is part of a series on the basic elements of Visual Effects. Each post talks about one of the basic building blocks used to create VFX shots.

In this fifth post I will talk about shading and texturing.

In the previous posts I talked about 3D modeling and 3D animation. It is great to have cool animated models, but without color they look dull. Moreover, if you want a photoreal result you need to spend some time on proper shading and texturing. Many styles are possible though, from very cartoony to hyperreal.


The plain standard grey shader on a model of a gun.


Now we are getting somewhere. The shell has been properly shaded and textured.


Since good shading needs texturing, let's start at the top and explain shading first. Almost every object in the real world reflects light, and that reflected light makes us perceive color. The reason we perceive an object as red, for example, is that the blue and green wavelengths of the light shining on it get absorbed. Another important property of an object is its shape: a flat surface reflects light differently than a round surface. Computers need to simulate this behavior to get us the desired result.

In short, shading is the art of defining the properties of the material of an object. These properties get used during the rendering stage to simulate how the light in the scene is reflected. Shaders are little programs which take the incoming light, do some calculations and send the reflected light to the render camera. Most 3D packages have built-in shaders, so there is no need for programming there, but if you really want full flexibility then programming them is the way to go. The best-known shading language in the film industry is the RenderMan Shading Language, or RSL for short, which is specific to the RenderMan standard.
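To make this less abstract, here is a minimal RSL surface shader, modeled closely on the classic plastic shader that ships with RenderMan's examples. Treat it as a sketch rather than production code; the parameter names (Ka, Kd, Ks, roughness) are just the conventional ones:

    surface simpleplastic(
        float Ka = 1, Kd = 0.5, Ks = 0.5;
        float roughness = 0.1;
        color specularcolor = 1;)
    {
        /* Flip the normal so it faces the camera. */
        normal Nf = faceforward(normalize(N), I);
        vector V = -normalize(I);

        Oi = Os;
        /* Combine ambient, diffuse (Lambert) and specular contributions. */
        Ci = Os * (Cs * (Ka * ambient() + Kd * diffuse(Nf))
             + specularcolor * Ks * specular(Nf, V, roughness));
    }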


Something to remember: shaders are render engine dependent. Shaders programmed for the RenderMan standard do not work with other engines like Mental Ray or V-Ray, shaders programmed for Mental Ray can only be used with Mental Ray, and so on. It is good practice to choose a render engine before you start shading your models. Most software packages have a built-in render engine but also offer plug-ins that allow others.

The two most important shader types are surface shaders and displacement shaders.


Surface shaders

At the base of a surface shader there is a Bidirectional Reflectance Distribution Function, or BRDF for short. Without getting too technical, it is a function that describes how much light is reflected for a given incoming and outgoing direction. Common models are Lambert, Blinn, Phong and Oren-Nayar.
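As an illustration, the Lambert model boils down to a cosine falloff between the surface normal and the light direction. In RSL that can be written by hand with an illuminance loop, which is roughly what the built-in diffuse() call does; a sketch, not the actual implementation:

    surface lambert_example(float Kd = 1;)
    {
        normal Nf = faceforward(normalize(N), I);

        color C = 0;
        /* Gather all light arriving in the hemisphere above the surface. */
        illuminance (P, Nf, PI/2) {
            /* Cl is the light color; L points from the surface to the light.
               The dot product gives the cosine falloff of the Lambert model. */
            C += Cl * (normalize(L) . Nf);
        }

        Oi = Os;
        Ci = Os * Cs * Kd * C;  /* roughly equivalent to Kd * diffuse(Nf) */
    }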


Four common BRDF examples. From left to right: Lambert, Blinn, Phong, Oren-Nayar with a high roughness value.

Surface shaders are usually divided into several components. Diffuse, specular, reflection and refraction are the most prominent ones. These components are combined to get a result for the final image; a sketch of such a combination follows the list below.
  • Diffuse means that the reflected light is scattered and no clean reflections are visible.
  • Specular was invented to simulate highlights from light sources. It is a kind of simplified reflection which is only affected by lights. This can be very useful as calculations for a specular highlight are much cheaper than true reflections. Nowadays, when going for photoreal renderings, specular gets replaced by true reflections.
  • Reflections are just what the name says: they reflect the environment, including light sources but also other objects in the scene.
  • Refractions are needed when it is possible to look through objects like glass or a liquid. These types of objects also bend the light that is passing through them.
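Here is a sketch of how these components could be combined in one RSL surface shader. An environment map lookup stands in for true ray traced reflections, refraction is left out for brevity, and the parameter names (Kd, Ks, Kr, envname) are assumptions for the example:

    surface shiny(
        float Kd = 0.8, Ks = 0.5, Kr = 0.2;
        float roughness = 0.1;
        string envname = "";)
    {
        normal Nf = faceforward(normalize(N), I);
        vector V = -normalize(I);
        /* Mirror direction for the reflection lookup. */
        vector R = reflect(normalize(I), Nf);

        color Crefl = 0;
        if (envname != "")
            Crefl = color environment(envname, R);

        Oi = Os;
        /* Weighted sum of the diffuse, specular and reflection components. */
        Ci = Os * (Cs * Kd * diffuse(Nf)
             + Ks * specular(Nf, V, roughness)
             + Kr * Crefl);
    }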

Diffuse color only.

Specular highlight only.

Reflections only.


Displacement Shaders

Displacement shaders handle light a bit differently. Instead of handling the light itself, they change the apparent shape of the object, which in turn influences how light is bounced back. There are three commonly used methods and one newer one which combines two of the others; a small shader sketch follows the list.
  • Bump mapping is the simplest and most economical method. It only changes the normals in one direction and gives an apparent relief to the material.
  • Normal mapping is the advanced version of bump mapping and moves the normals in three directions (xyz) instead of one. It gives a lot more detail to an object at a fairly low cost in render time. The bump and normal mapping methods do not alter the geometry itself.
  • Displacement mapping is also a one dimensional move, just like bump mapping, but it actually moves the geometry and therefore alters the space the object takes up in a scene. Although it can give very nice results, it does affect render times.
  • The last one is vector displacement mapping. This is a combination of normal mapping and displacement mapping. It moves the geometry in three dimensions and can give highly realistic results.
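In RSL the difference between bump and true displacement comes down to whether the surface point P is actually moved or only the normal is recalculated. A minimal sketch of the one dimensional case, with texname as an assumed grayscale height map:

    displacement bumpy(float Km = 0.05; string texname = "";)
    {
        float amp = 0;
        if (texname != "")
            amp = Km * float texture(texname, s, t); /* height from the map */

        /* True displacement: actually move the point, then rebuild the normal. */
        P += amp * normalize(N);
        N = calculatenormal(P);

        /* Bump-only variant: leave P untouched and fake the relief instead:
           N = calculatenormal(P + amp * normalize(N)); */
    }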

An example of bump mapping. The pattern appears to shape the geometry, but if you look at the edges you can see that they are still smooth.

An example of true displacement. You can see on the edges that the geometry has been altered.

Three dimensional maps for normal mapping and vector displacement mapping are harder to make, and you need specialized software like Mudbox to create them, while one dimensional maps can be painted in any paint program like Photoshop.


Texturing

So how does texturing come into all this? Textures are two dimensional images which are wrapped around the three dimensional geometry and drive the calculations of the shader. There are procedural textures, which are generated by functions in the shader code, and regular textures, which are simply images. Photographs are a good source for textures, but texture artists can create some amazing hand painted textures too.
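A procedural texture like the checkerboard from the Maya network shown below can also be written directly in shader code. A minimal RSL sketch, using the surface parameters s and t as texture coordinates:

    surface checker(float Kd = 1; float freq = 8;)
    {
        /* Alternate between two colors based on the (s, t) coordinates. */
        float check = mod(floor(s * freq) + floor(t * freq), 2);
        color dark = color(0.1, 0.1, 0.1);
        color bright = color(0.9, 0.9, 0.9);
        color Ct = mix(dark, bright, check);

        normal Nf = faceforward(normalize(N), I);
        Oi = Os;
        Ci = Os * Ct * Kd * diffuse(Nf);
    }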


A network for a procedural checkerboard shader in Maya. 


Simple hand painted map to drive the color of the diffuse channel.

First we need to make sure the texture fits on the geometry. If the geometry is a NURBS object, the mapping is straightforward as a NURBS patch has fixed texture coordinates. When using polygons or subdivision surfaces, a UV map is needed to define the relationship between the 3D geometry and the 2D map. This UV map can be seen as the polygon mesh being unwrapped onto a plane. The UV map can then be used as a guide to create texture maps.
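Once the UVs are laid out, a painted map can be looked up in the shader through those coordinates. A small sketch, where mapname is an assumed file texture painted over the UV template:

    surface paintedcolor(float Kd = 1; string mapname = "";)
    {
        /* Look up the diffuse color in the painted map, using the (s, t)
           texture coordinates that come from the UV layout. */
        color Ct = Cs;
        if (mapname != "")
            Ct = color texture(mapname, s, t);

        normal Nf = faceforward(normalize(N), I);
        Oi = Os;
        Ci = Os * Ct * Kd * diffuse(Nf);
    }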


Clean UV map of the shell used in the other images. This can be used as a template for painting the textures.


A procedural checkerboard texture used to check if the UVs are properly mapped. All the squares seem to have the same size, so the UV layout is a success.

Each component of a shader, like diffuse and specular, can be driven by a texture map. The map influences the color and value of that particular component to create patterns. Let's have a look at two more maps which drive the shader for the shell used in the examples.
A bump map. The lines are very obvious on the render of the gun as they are dark. The lighter pattern has only a subtle effect on the result.


A grime map to control the reflections. The darker the color, the more the reflections get blocked.
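Inside a surface shader like the shiny sketch earlier, such a grime map could scale the reflection component before it is added in. A hedged fragment, reusing that sketch's Kr and Crefl and assuming a grayscale grimemap parameter:

    /* Dark texels in the grime map block reflections,
       light texels let them through. */
    float grime = 1;
    if (grimemap != "")
        grime = float texture(grimemap, s, t);

    Ci += Kr * grime * Crefl;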

Shaders are not only functional for creating materials to show in a beauty pass but can also be used to enhance compositing. Ambient occlusion, reflection occlusion and material ID shaders are good examples of that.
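A material ID shader, for instance, can be as simple as an unlit constant color per asset, so the compositor can pull clean mattes from the render. A minimal sketch:

    surface matid(color idcolor = color(1, 0, 0);)
    {
        /* Flat, unlit color; assign a unique idcolor to each material. */
        Oi = Os;
        Ci = Os * idcolor;
    }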

There are also more advanced shaders which handle calculations like subsurface scattering. Although these are extremely interesting shaders, they go beyond the basics and thus the scope of this series.

This concludes the fifth part of the VFX Back to Basics series.
Make sure to subscribe (at the top) or follow me on Twitter (check the link on the right) if you want to stay informed on the release of new posts.
