Thursday, March 29, 2012

VFX Back to Basics Series: 5. What is shading and texturing?


This is part of a series on the basic elements of Visual Effects. Each post will talk about a certain element which is one of the basic bricks used for building VFX shots.

In this fifth post I will talk about shading and texturing.

In the previous posts I talked about 3D modeling and 3D animation. It is great to have cool animated models, but without color they do look dull. Moreover, if you want a photoreal result you need to spend some time on proper shading and texturing. Many styles are possible though, from very cartoony to hyperreal.


The plain standard grey shader on a model of a gun.


Now we are getting somewhere. The shell has been properly shaded and textured.


Since good shading needs texturing, let's start at the top and explain shading first. Almost every object in the real world reflects light. That reflected light makes us perceive color. The reason we perceive an object as being red, for example, is that the blue and green wavelengths of the light shining on it get absorbed. Another important property of an object is its shape. A flat surface reflects light differently than a round surface. Computers need to simulate this behavior to get us the desired result.

In short, shading is the art of defining the properties of the material of an object. These properties get used during the rendering stage to simulate how the light in the scene is reflected. Shaders are little programs which take the incoming light, do some calculations and send the reflected light to the render camera. Most 3D packages have built-in shaders, so there is no need for programming, but if you really want full flexibility then writing your own is the way to go. The best-known shading language in the film industry is the RenderMan Shading Language, also called RSL, which is specific to the RenderMan standard.
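To make the idea of a shader as a "little program" concrete, here is a minimal sketch in Python rather than in a real shading language like RSL. It only does the simplest diffuse calculation; the function name is made up for the example and the vectors are assumed to be normalized.

```python
def lambert_shader(normal, light_dir, light_color, surface_color):
    """A very simple diffuse surface shader (Lambert's cosine law)."""
    # Cosine of the angle between the surface normal and the light direction.
    n_dot_l = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    # Scale the surface color by the light color and the cosine term,
    # and send that reflected color towards the render camera.
    return tuple(s * c * n_dot_l for s, c in zip(surface_color, light_color))

# A red surface lit head-on by a white light: full red comes back to the camera.
print(lambert_shader((0, 1, 0), (0, 1, 0), (1, 1, 1), (1, 0, 0)))  # (1.0, 0.0, 0.0)
```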


Something to remember: shaders are render engine dependent. Shaders programmed for the RenderMan standard do not work with other engines like Mental Ray or V-Ray. Shaders programmed for Mental Ray can only be used with Mental Ray, and so on. It is good practice to choose a render engine before you start shading your models. Most software packages have a built-in render engine but also have plug-ins that allow others.

The two most important shader types are surface shaders and displacement shaders.


Surface shaders

At the base of a surface shader there is a Bidirectional Reflectance Distribution Function, or BRDF for short. Without getting too technical, it is a function that describes how light is reflected. Common models are Lambert, Blinn, Phong and Oren-Nayar.
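To give a feel for what such a function looks like, here is a hedged Python sketch of two of the models named above, Phong and Blinn, which are both specular models; the diffuse Lambert term was already sketched earlier. All vectors are assumed normalized and the shininess exponent is an arbitrary example value.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = math.sqrt(dot(v, v)) or 1.0  # avoid division by zero for degenerate vectors
    return tuple(c / length for c in v)

def phong_specular(normal, light_dir, view_dir, shininess=32.0):
    """Phong: compare the mirrored light direction with the view direction."""
    reflected = tuple(2.0 * dot(normal, light_dir) * n - l
                      for n, l in zip(normal, light_dir))
    return max(0.0, dot(reflected, view_dir)) ** shininess

def blinn_specular(normal, light_dir, view_dir, shininess=32.0):
    """Blinn: compare the normal with the half-vector between light and view."""
    half = normalize(tuple(l + v for l, v in zip(light_dir, view_dir)))
    return max(0.0, dot(normal, half)) ** shininess
```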


Four common BRDF examples. From left to right: Lambert, Blinn, Phong, Oren-Nayar with a high roughness value.

Surface shaders are usually divided into several components. Diffuse, specular, reflection and refraction are the most prominent ones. These components are combined to get a result for the final image; a short sketch of that combination follows the list below.
  • Diffuse means that the reflected light is scattered and no clean reflections are visible.
  • Specular was invented to simulate highlights from light sources. It is a kind of simplified reflection which is only affected by lights. This can be very useful, as calculations for a specular highlight are much cheaper than true reflections. Nowadays, when going for photoreal renderings, specular gets replaced by true reflections.
  • Reflections are just what the name implies: they reflect the environment, including light sources but also other objects in the scene.
  • Refractions are needed when it is possible to look through objects like glass or a liquid. These types of objects also bend the light that passes through them.
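Here is the promised sketch of how those components might be combined, in Python. The weights and the example colors are made up; a real shader exposes them as user-tweakable material parameters and the result is usually clamped or tone-mapped later.

```python
def combine_components(diffuse, specular, reflection, refraction,
                       kd=0.8, ks=0.3, kr=0.2, kt=0.0):
    """Weighted sum of the four components for one shading point (r, g, b)."""
    return tuple(kd * d + ks * s + kr * rf + kt * rr
                 for d, s, rf, rr in zip(diffuse, specular, reflection, refraction))

# An opaque, slightly reflective red material: no refraction, a faint sky reflection.
print(combine_components((0.8, 0.2, 0.2),    # diffuse
                         (1.0, 1.0, 1.0),    # specular highlight
                         (0.3, 0.4, 0.6),    # reflection of the environment
                         (0.0, 0.0, 0.0)))   # refraction
```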

Diffuse color only.

Specular highlight only.

Reflections only.


Displacement Shaders

Displacement shaders handle light a bit differently. Instead of handling the light itself, they change the apparent shape of the object, which influences how light is bounced back. There are three commonly used methods and a newer one which combines two of the others; a small sketch contrasting bump mapping and true displacement follows the list below.
  • Bump mapping is the simplest and most economical method. It only changes the normals in one direction and gives an apparent relief to the material.
  • Normal mapping is the advanced version of bump mapping and moves the normals in three directions (xyz) instead of one. It gives a lot more detail to an object for a fairly low cost on rendering times. The bump and normal mapping methods do not alter the geometry itself.
  • Displacement mapping is also a one dimensional move, just like bump mapping, but it does actually move the geometry and therefore alters the space the object takes up in the scene. Although it can give very nice results, it does affect render times.
  • The last one is vector displacement mapping. This is a combination of normal mapping and displacement mapping. It moves the geometry in three dimensions and can give highly realistic results.
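The sketch below contrasts the two ideas for a single vertex and a single height sample. It is deliberately simplified: real bump mapping works in tangent space and reads the height slope from a map, which this Python sketch glosses over.

```python
def bump_normal(normal, du, dv, strength=1.0):
    """Bump mapping: the geometry stays put; only the shading normal is tilted
    by the slope (du, dv) of the height map, so silhouettes stay smooth."""
    nx, ny, nz = normal
    perturbed = (nx - strength * du, ny - strength * dv, nz)
    length = sum(c * c for c in perturbed) ** 0.5
    return tuple(c / length for c in perturbed)

def displace_vertex(position, normal, height, strength=1.0):
    """Displacement mapping: the vertex itself is pushed along its normal,
    so the silhouette and the space the object occupies really change."""
    return tuple(p + strength * height * n for p, n in zip(position, normal))

print(bump_normal((0.0, 0.0, 1.0), du=0.2, dv=0.0))            # tilted shading normal
print(displace_vertex((1.0, 2.0, 0.0), (0.0, 0.0, 1.0), 0.5))  # moved vertex
```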

An example of bump mapping. The pattern appears to shape the geometry, but if you look at the edges you can see that they are still smooth.

An example of true displacement. You can see on the edges that the geometry has been altered.

Three dimensional maps for normal mapping and vector displacement mapping are harder to make and require specialized software like Mudbox, while one dimensional maps can be painted in any paint program like Photoshop.


Texturing

So how does texturing come into all this? Textures are two dimensional images which are wrapped around the three dimensional geometry and which drive the calculations of the shader. There are procedural textures, which are generated by functions in the shader code, and regular textures, which are simply images. Photographs are a good source for textures, but texture artists can do some amazing hand painted textures too.
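Since the next image shows a procedural checkerboard network in Maya, here is a hedged sketch of the same idea as a plain Python function of the UV coordinates instead of a node network; the tile count is an arbitrary example value.

```python
import math

def checkerboard(u, v, tiles=8):
    """Return 1.0 (white) or 0.0 (black) depending on which square (u, v) falls in."""
    return 1.0 if (math.floor(u * tiles) + math.floor(v * tiles)) % 2 == 0 else 0.0

print(checkerboard(0.05, 0.05))  # 1.0 -> white square
print(checkerboard(0.05, 0.20))  # 0.0 -> black square
```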


A network for a procedural checkerboard shader in Maya. 


Simple hand painted map to drive the color of the diffuse channel.

First we need to make sure the texture fits on the geometry. If the geometry is a NURBS object then the mapping is straightforward, as a NURBS patch has fixed texture coordinates. When using polygons or subdivision surfaces there is a need for a UV map to define the relationship between the 3D geometry and the 2D map. This UV map can be seen as the polygon mesh being unwrapped onto a plane. The UV map can then be used as a guide to create texture maps.
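The sketch below shows what a UV coordinate ultimately does at render time: it picks a pixel out of the texture image. It uses the crudest possible lookup (nearest neighbour, with the image as a nested list); real renderers filter the texture and handle the UV orientation according to their own conventions.

```python
def sample_texture(image, u, v):
    """Nearest-neighbour lookup: map (u, v) in [0, 1] to a pixel in the image."""
    height = len(image)
    width = len(image[0])
    x = min(int(u * width), width - 1)
    y = min(int(v * height), height - 1)
    return image[y][x]

# A 2x2 "texture": first row red and green, second row blue and white.
tiny = [[(1, 0, 0), (0, 1, 0)],
        [(0, 0, 1), (1, 1, 1)]]
print(sample_texture(tiny, 0.75, 0.75))  # (1, 1, 1): the last texel of the second row
```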


Clean UV map of the shell used in the other images. This can be used as a template for painting the textures.


A procedural checkerboard texture used to check if the UVs are properly mapped. All the squares seem to have the same size, so the UV layout is a success.

Each component of a shader, like diffuse and specular, can be driven by a texture map. The map influences the color and value of that particular component to create patterns. Let's have a look at two more maps which drive the shader for the shell used in the examples.
A bump map. The lines are very obvious on the render of the gun as they are dark. The lighter pattern is only a subtle effect on the result.


A grime map to control the reflections. The darker the color, the more the reflections get blocked.
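A small Python sketch of how such a grime map might gate the reflections: the map value (0 for black, 1 for white) simply scales the reflection component. How a particular production shader combines the map exactly is an assumption here, but the principle is the same.

```python
def apply_grime(reflection_color, grime_value):
    """Darker grime values block more of the reflection."""
    return tuple(c * grime_value for c in reflection_color)

print(apply_grime((0.4, 0.5, 0.6), 1.0))  # clean area: full reflection
print(apply_grime((0.4, 0.5, 0.6), 0.2))  # grimy area: reflection mostly blocked
```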

Shaders are not only used to create materials for the beauty pass; they can also be used to enhance compositing. Ambient occlusion, reflection occlusion and material ID shaders are good examples of that.

There are also more advanced shaders which handle calculations like subsurface scattering. Although these are extremely interesting shaders, covering them would go beyond the basics and thus the scope of this series.

This concludes the fifth part of the VFX Back to Basics series.
Make sure to subscribe (at the top) or follow me on Twitter (check the link on the right) if you want to stay informed on the release of new posts.

Friday, March 16, 2012

VFX Back to Basics Series: 4. What is 3D animation?


This is part of a series on the basic elements of Visual Effects. Each post will talk about a certain element which is one of the basic bricks used for building VFX shots.

In this fourth post I will talk about 3D animation. Although it is difficult to demonstrate this by still images, I will use them to show certain principles.

A simple humanoid skeleton.

In one of the previous posts I talked about 3D modeling. Although it is not always necessary to animate models, they do get a lot more exciting when they are. Creatures and humans especially are calling out to be animated. Models can be hand animated by an animator or procedurally animated by computer simulations and automatic processes.

Let's start with hand animated non-deformable objects. To conveniently hand animate models it is necessary to build an animation rig; in short, this process is called rigging. It enables the animator to take control over the movements of the object without having to worry about every individual vertex or polygon. Of course, when moving an object from point A to point B one hardly needs a rig; just using the built-in transforms will do the trick. But when the model has several parts, like a car with turning wheels and doors which can be opened and closed, just using the built-in transforms will give the animator a hard time.

If we look closer at the example of a door, we can see that a real door has limits. It is impossible to open the door further than the hinge allows, and a closed door fits firmly into the door frame. A door from a 3D model will rotate in any direction and will just penetrate the geometry around it. It cannot easily detect solid matter unless there is some collision detection going on. Collision detection is a simulation technique and is too complicated for something as simple as a door. Instead we can use a mini rig which limits the movement of the door and which shows only one handle, so the animator immediately sees what can be animated.
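A minimal sketch of what that mini rig does, in Python: whatever value the animator types or drags, the hinge angle is clamped to the allowed range. The limit values are made up for the example.

```python
CLOSED_ANGLE = 0.0    # the door fits in its frame
OPEN_ANGLE = 110.0    # the hinge does not allow further rotation

def set_door_angle(requested_angle):
    """Return the angle the rig actually applies to the door geometry."""
    return max(CLOSED_ANGLE, min(OPEN_ANGLE, requested_angle))

print(set_door_angle(45.0))    # 45.0  -> a half-open door
print(set_door_angle(200.0))   # 110.0 -> clamped to fully open
print(set_door_angle(-30.0))   # 0.0   -> clamped to closed
```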

A simple door with no rig. An animator could rotate it in any direction which is confusing and prone to errors.
A simple door with a simple rig. The door can now only rotate on its hinges and is limited by the closed position and open position. The circular handle makes it easy for the animator to animate and keyframe. 

Let's take this a step further and look at a human character. When a human moves there are hundreds of muscles contracting and relaxing. It is nearly impossible and usually not necessary to build every single muscle into a rig, so it can be simplified a lot. There is the extra concern that the body changes shape when making movements, so the rig will be a bit more complicated. What defines the movement and the limits of a human is its skeleton, and therefore most programs, like Maya or 3ds Max, have skeleton tools built in.

A skeleton in a 3D program is built out of a hierarchy of joints. There are two important principles when rigging up a skeleton for animation: forward kinematics and inverse kinematics.
With forward kinematics, each joint in the skeleton drives the joints further down the hierarchy. For example, when the torso rotates, the shoulders and arms will follow that rotation.
With inverse kinematics we instead control a child joint, and the movement of the joints between the child and the root is automatically calculated by the computer. For example, hands are usually animated this way to save time: the child joint is the hand, the root joint is the shoulder, and the elbow joint automatically follows the movement of the hand. Inverse kinematics does require more setup, as you also need to define the limits, but it saves time during the animation process.
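Here is a hedged 2D sketch of both principles for a two-bone arm (shoulder, elbow, hand) in Python. Real rigs work on full 3D joint hierarchies with limits and preferred angles; the bone lengths and the law-of-cosines IK solve below are just the textbook two-bone case.

```python
import math

def forward_kinematics(shoulder_angle, elbow_angle, upper_len=1.0, lower_len=1.0):
    """FK: the animator sets the joint angles, the hand position follows."""
    elbow = (upper_len * math.cos(shoulder_angle),
             upper_len * math.sin(shoulder_angle))
    hand = (elbow[0] + lower_len * math.cos(shoulder_angle + elbow_angle),
            elbow[1] + lower_len * math.sin(shoulder_angle + elbow_angle))
    return elbow, hand

def inverse_kinematics(target_x, target_y, upper_len=1.0, lower_len=1.0):
    """IK: the animator places the hand, the computer solves the joint angles
    using the law of cosines for a two-bone chain."""
    dist = min(math.hypot(target_x, target_y), upper_len + lower_len)  # clamp unreachable targets
    cos_elbow = (dist ** 2 - upper_len ** 2 - lower_len ** 2) / (2 * upper_len * lower_len)
    elbow_angle = math.acos(max(-1.0, min(1.0, cos_elbow)))
    shoulder_angle = math.atan2(target_y, target_x) - math.atan2(
        lower_len * math.sin(elbow_angle),
        upper_len + lower_len * math.cos(elbow_angle))
    return shoulder_angle, elbow_angle

# Place the hand at (1.5, 0.5); feeding the solved angles back through FK
# reproduces that target position.
angles = inverse_kinematics(1.5, 0.5)
print(forward_kinematics(*angles)[1])   # approximately (1.5, 0.5)
```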

A short chain of joints which together form a skeleton.

An example of forward kinematics. The rotation of joint 1 will influence the joints down the hierarchy. The rotation of joint 2 will not influence joint 1 or the root.
An example of inverse kinematics. Moving the outermost joint will also control the rotation of the joints in between the outermost joint and the root joint.

Another issue when animating deformable objects like humans is that the skin has to follow the skeleton and has to be able to stretch and deform when muscles bulge. We can solve this with a task called skinning. It connects the modeled mesh to the skeleton and defines how much the skeleton influences each part of that mesh. For example, when moving a shoulder, the mesh in the shoulder area will deform, maybe the neck will deform a bit too, but the legs aren't influenced at all.
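A hedged sketch of the simplest common form of skinning, linear blend skinning, in Python: each vertex follows a weighted mix of the joints that influence it. Whether a given package uses exactly this scheme is an assumption, but the idea of per-vertex weights is the same.

```python
def skin_vertex(vertex, joint_transforms, weights):
    """Linear blend skinning: the vertex follows a weighted mix of the joints.
    joint_transforms: functions that move a point the way that joint currently moves.
    weights: how strongly each joint influences this vertex (they sum to 1)."""
    x = y = z = 0.0
    for transform, weight in zip(joint_transforms, weights):
        tx, ty, tz = transform(vertex)
        x, y, z = x + weight * tx, y + weight * ty, z + weight * tz
    return (x, y, z)

def shoulder(point):   # a shoulder pose that lifts points by one unit
    return (point[0], point[1] + 1.0, point[2])

def neck(point):       # the neck joint is not moving in this pose
    return point

# A vertex in the shoulder area: mostly driven by the shoulder, a little by the neck.
print(skin_vertex((0.0, 0.0, 0.0), [shoulder, neck], [0.8, 0.2]))  # (0.0, 0.8, 0.0)
```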

Once a rig has been set up it still has to be animated, of course. The computer helps us out even when we are hand animating a character or object. The animator sets up certain key poses on the timeline and the computer calculates the in-between poses over time. This process is called keyframing and is extremely powerful, as it makes animation smooth with minimal effort. You could compare it with classic drawn animation, where the lead animator would set up the keyframes and the junior animators would draw the in-between frames, but in this case the computer takes the role of the junior animator. In the example of the door one would only have to key the position of the closed door on frame 1 and the open door on frame 20. When scrolling through the timeline from frame 1 to frame 20 the door will gradually open and has a different position on each frame.
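The sketch below shows keyframing for the door example in Python. Real packages interpolate with editable spline curves rather than straight lines; linear in-betweens just keep the sketch short.

```python
def interpolate(keyframes, frame):
    """Linear interpolation between keyed (frame, value) pairs."""
    keyframes = sorted(keyframes)
    if frame <= keyframes[0][0]:
        return keyframes[0][1]
    if frame >= keyframes[-1][0]:
        return keyframes[-1][1]
    for (f0, v0), (f1, v1) in zip(keyframes, keyframes[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)
            return v0 + t * (v1 - v0)

# Door closed (0 degrees) keyed on frame 1, fully open (110 degrees) keyed on frame 20.
keys = [(1, 0.0), (20, 110.0)]
for frame in (1, 10, 20):
    print(frame, interpolate(keys, frame))   # 1 -> 0.0, 10 -> ~52.1, 20 -> 110.0
```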

Just as with classic drawn animation, many styles are possible. For example, you can go for a very cartoony style with a lot of squashing, stretching and exaggerated movements. This can require more complicated rigs. There is also the physically correct style, where outside influences like gravity are important. Especially with this last style it is possible for the computer to lend us a hand.
Motion capture is such a tool. Instead of hand animating the character, an actor gives the performance and his movements are recorded to disk and transferred to a mesh. This can give highly realistic results, but keep in mind that an animator usually has to tweak them. Creatures like Gollum from The Lord of the Rings are animated this way.

Another type of animation is the simulation. Breaking objects can be an almost impossible task to animate by hand when those objects consist of thousands of parts. Luckily there are tools to simulate these kinds of events. The computer will calculate the interactions between the pieces, like collision detection and the force of gravity, and will move each piece accordingly. Animating fluids and gases is an even more specialized job and is usually not done by animators but by FX technical directors.
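To show the flavour of what such a tool computes every frame, here is a deliberately crude Python sketch of one simulation step: gravity pulls each piece down and a very basic collision test stops it at the ground plane. Real rigid body solvers do far more (piece-to-piece collisions, rotation, bouncing); the values here are illustrative.

```python
GRAVITY = -9.81   # metres per second squared
GROUND = 0.0      # height of the ground plane

def simulate_step(pieces, dt=1.0 / 24.0):
    """Advance every piece by one frame at 24 fps using a simple Euler step.
    pieces: list of dicts with a 'height' and a 'velocity'."""
    for piece in pieces:
        piece["velocity"] += GRAVITY * dt
        piece["height"] += piece["velocity"] * dt
        if piece["height"] < GROUND:          # very crude collision detection
            piece["height"] = GROUND
            piece["velocity"] = 0.0

pieces = [{"height": 2.0, "velocity": 0.0}, {"height": 5.0, "velocity": 0.0}]
for _ in range(48):                            # two seconds of animation
    simulate_step(pieces)
print([round(p["height"], 2) for p in pieces]) # both pieces have landed: [0.0, 0.0]
```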

This concludes the fourth part of the VFX Back to Basics series.
Make sure to subscribe (at the top) or follow me on Twitter (check the link on the right) if you want to stay informed on the release of new posts.
