Friday, April 27, 2012

VFX Back to Basics Series: 6. What is lighting and rendering?


This is part of a series on the basic elements of Visual Effects. Each post covers one of the basic building blocks used for constructing VFX shots.

In this sixth post I will talk about lighting and rendering.

A ray traced image with a directional light and a sky dome used for image based lighting.

The reason lighting and rendering usually go together is that lighting technical directors handle both, and certain render techniques directly influence how the scene is lit.

Rendering in a nutshell

Let's start this post with rendering, and more specifically with render engines. A render engine is a piece of software which translates scene data, like models, shaders and lights, into a final viewable image. These calculations can take anywhere from mere seconds for a simple scene to hours for a complex one, just for a single frame. Keep in mind that a movie in the theater needs 24 frames per second, so you can imagine that a movie packed with VFX would take years to render if it were done on a single CPU. Tackling a large number of frames is solved by using many CPUs at once. A room full of computers set up for this purpose is called a render farm.
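
To get a feel for the numbers, here is a quick back-of-the-envelope sketch in Python (all figures are made up for illustration):

    # Back-of-the-envelope render farm math (all numbers hypothetical).
    seconds_per_frame = 2 * 3600            # 2 hours per frame on one CPU
    frames = 100 * 60 * 24                  # a 100-minute film at 24 fps
    cpus = 1000                             # size of the render farm

    total_cpu_hours = frames * seconds_per_frame / 3600
    print(f"One CPU: {total_cpu_hours / 24 / 365:.1f} years")      # ~32.9 years
    print(f"{cpus} CPUs: {total_cpu_hours / cpus / 24:.1f} days")  # ~12.0 days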

The render engine can be seen as a separate entity and is not really part of the animation and modeling package, although it might seem so. Most packages come with a built-in renderer, but external render engines are usually chosen when working on bigger projects. Photorealistic RenderMan, 3Delight, Mental Ray and V-Ray are only a few examples. There are plenty of good renderers out there. Just choose the one which works for you.

Diagram showing the relationship between 3D packages and render engines.

The above diagram shows how software packages like Maya and 3D Studio Max talk to the render engines with the help of a translator. Each render engine has its own "language", so the translator is provided by the render engine.

There is one family of renderers which speaks the same language: the RenderMan standard. This standard was created by Pixar, and for a long time Pixar was the only company with a RenderMan-compliant renderer. Now other commercial renderers are available as well. The scene file from the 3D program gets translated to a RIB file, which any RenderMan-compliant renderer should be able to read and render out. This open standard is extremely powerful and flexible. It allows for very complex render pipelines, so it is mostly used in VFX for film and not so much for small projects.
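
To make that tangible, here is a minimal sketch of such a RIB stream, written out from Python. The scene is hypothetical, but any RenderMan-compliant renderer of that generation (PRMan, 3Delight, ...) should be able to pick up a file like this:

    # A minimal, hypothetical RIB stream written from Python. Render the
    # resulting file with a RenderMan-compliant renderer such as PRMan or 3Delight.
    rib = """\
    Display "sphere.tif" "file" "rgba"
    Format 640 480 1
    Projection "perspective" "fov" [30]
    WorldBegin
        LightSource "distantlight" 1 "intensity" [1.0]
        Translate 0 0 5
        Color [1 0.5 0]
        Sphere 1 -1 1 360
    WorldEnd
    """

    with open("sphere.rib", "w") as f:
        f.write(rib)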


Render algorithms

Every renderer has its own algorithm, but there are two distinct approaches to calculating an image: scanline rendering and ray tracing. I am explaining shadow creation here as well, although it could also fit in the lighting section below.

Scanline rendering is a technique which sorts the geometry according to depth and then renders a row at a time. It is very efficient, as it discards geometry which is invisible and therefore limits calculations. Shadow calculations are usually done through shadow mapping. Shadow maps are depth maps which can be stored in a file and reused. High-resolution depth maps can be expensive to calculate, but their reusability in certain circumstances makes up for that. The technique handles large amounts of geometry rather well. Although it has been very popular and is used for example by PRMan, it is being replaced more and more by ray tracing.
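
The lookup at render time is simple. A rough sketch in Python, assuming a depth map was already rendered from the light's point of view (the map below is just a placeholder):

    import numpy as np

    # Toy shadow-map lookup. `depth_map` stands in for a depth pass rendered
    # from the light's point of view; here it is just a placeholder.
    SHADOW_MAP_RES = 512
    depth_map = np.full((SHADOW_MAP_RES, SHADOW_MAP_RES), np.inf)

    def in_shadow(x, y, z, bias=1e-3):
        """(x, y) in [0, 1] are the point's coordinates in light space,
        z is its distance from the light."""
        px = min(int(x * SHADOW_MAP_RES), SHADOW_MAP_RES - 1)
        py = min(int(y * SHADOW_MAP_RES), SHADOW_MAP_RES - 1)
        # If the light "saw" something closer at this pixel, we are occluded.
        # The bias avoids self-shadowing artifacts from depth precision.
        return z > depth_map[py, px] + bias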

Very simple example of shadows made with a shadow map. This one does not have a high enough resolution for a nice shadow; the edges are pixelated.

A new render with a higher-resolution shadow map. The edges aren't pixelated anymore.

This is what a shadow map looks like. It is a depth map rendered from the light's point of view, where the lighter grey is closer than the darker grey.

Ray tracing is a technique which calculates one pixel at a time. It shoots a ray from the camera to the objects, which bounces off to the lights present in the scene. Shadows are an automatic result of this technique and therefore easy to make, but they can become rather expensive when soft shadows are needed. This increases the number of rays per pixel and directly affects the render times. The big benefit is that true reflections and refractions become possible. Mental Ray and V-Ray belong to this category.
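
At its core sits an intersection test. A minimal Python sketch of the ray-sphere case, with a hypothetical scene value to show the result:

    import math

    def ray_sphere(origin, direction, center, radius):
        """Nearest hit distance t along the ray, or None. `direction` is normalized."""
        oc = [o - c for o, c in zip(origin, center)]
        b = 2.0 * sum(d * o for d, o in zip(direction, oc))
        c = sum(o * o for o in oc) - radius * radius
        disc = b * b - 4.0 * c
        if disc < 0.0:
            return None
        t = (-b - math.sqrt(disc)) / 2.0
        return t if t > 1e-6 else None

    # Per pixel: shoot a primary ray; at the hit point, shoot a *shadow ray*
    # toward the light. If any object blocks it, the point is in shadow --
    # no shadow map needed.
    hit = ray_sphere((0, 0, 0), (0, 0, 1), center=(0, 0, 5), radius=1.0)
    print(hit)  # 4.0: the ray hits the front of the sphere 4 units away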

Ray traced shadows. These are sharp clean shadows.

An attempt to get softer shadows, but not enough samples are used, so it looks bad and pixelated.

This render uses more samples than the previous image and therefore has a much smoother result, but render times have gone up considerably: from 7 seconds to 23 seconds.
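
That sample trade-off comes from how ray traced soft shadows are made: many jittered shadow rays toward the area light, averaged per pixel. A rough sketch, where the occluder_test callback is assumed to be supplied by the scene:

    import random

    def soft_shadow(point, light_center, light_radius, occluder_test, samples=16):
        """Fraction of an area light visible from `point`.
        occluder_test(p, q) -> True if the segment p -> q is blocked.
        More samples give a smoother penumbra at linearly higher cost."""
        visible = 0
        for _ in range(samples):
            # Jitter a target point on the light (crude square sampling).
            dx = (random.random() * 2 - 1) * light_radius
            dy = (random.random() * 2 - 1) * light_radius
            target = (light_center[0] + dx, light_center[1] + dy, light_center[2])
            if not occluder_test(point, target):
                visible += 1
        return visible / samples

    # With no occluders at all, the light is fully visible:
    print(soft_shadow((0, 0, 0), (0, 5, 0), 1.0, lambda p, q: False))  # 1.0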

Render times in both techniques are influenced by the objects in the scene and the complexity of the shaders and lights used. It is imperative to keep render times under control when running a production. Render algorithms can also be combined: it may be possible to use a scanline renderer and activate ray traced shadows, if the render engine allows it.

There are more subcategories, like global illumination and radiosity, but those are a bit too specialized for this article.

Lighting

A 3D scene without light would turn out black, just like in the real world. The techniques for lighting a scene are pretty much like lighting a live action set. There are some differences though. It is impossible to subtract light (and I do not mean block it) in the real world, whereas in the digital world it is just a matter of mathematics. Shadows are another difference: it is possible to change their color and softness without affecting the light itself, or even to turn them off completely. This gives a huge amount of flexibility, but be careful when lighting a photorealistic scene. Using these tricks usually makes the scene look less real.
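
As an illustration of light subtraction, most packages accept a negative intensity. A hypothetical Maya snippet (this only runs inside Maya's Python interpreter):

    import maya.cmds as cmds

    # A point light with negative intensity *subtracts* light from the
    # region it covers -- something no physical lamp can do.
    dark_light = cmds.pointLight(intensity=-0.5)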

Classic lights

Lights in a 3D scene are controlled by light shaders, just like surfaces are controlled by surface shaders. Most of the time you do not have to assign a light shader to your light though, as most packages do this automatically. A light shader is a little program which defines the properties of the light. Light shaders are also render engine specific. All 3D packages have a ready-to-use set of lights available. Here are the most important examples.
  • Point light: light emitted from a single point, like a light bulb.
  • Spot light: A light like a regular spot on a set. It can be controlled by barn doors and by adjusting its diameter.
  • Directional light: A light source with parallel light beams. It is well suited for simulating the sun for example. 
  • Area light: A light which emanates from a surface. Good examples are Kino Flo lamps and light through a window into a room.
Note that area lights are a special case for shadows. The light is bigger than a point and will therefore always give soft shadows. Soft shadows are nice but are always a bit more expensive to calculate, especially when you don't want them to look too grainy.
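
For reference, creating these four classic lights from script in Maya could look like this (values are hypothetical; runs inside Maya only):

    import maya.cmds as cmds

    point_lt = cmds.pointLight(intensity=1.0)               # bare light bulb
    spot_lt = cmds.spotLight(coneAngle=40)                  # 40-degree spot
    dir_lt = cmds.directionalLight(rotation=(-45, 30, 0))   # sun-like parallel rays
    area_lt = cmds.shadingNode("areaLight", asLight=True)   # soft area source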

Newer techniques

When lighting CGI which needs to be incorporated into live action, another technique can be used if your renderer allows it. This is called image based lighting, or IBL for short. Instead of lighting the scene completely yourself, it is possible to use the lighting which was present on the set. This can be done by taking a high dynamic range (or HDR) panorama photograph. HDR images contain much more information than a regular photograph: hot spots in the image are not clipped and the blacks are not crushed.

A panoramic image of a kitchen. This is just a representation of an HDR image; real HDR images cannot be displayed directly on the web.

The renderer will use this HDR image as the basis to light the scene. This can give highly realistic results. There are two main techniques to make these photographs. The cheap way is to use a mirror ball. It works well for capturing the lighting but has certain limits when the image is used for crisp reflections on the CGI objects. The more expensive way is to use a fish eye lens.
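
Under the hood, the renderer treats every pixel of the panorama as a tiny light source, so it needs to map directions in the scene to pixels in the image. A sketch of the usual lat-long lookup (the axis convention is an assumption; it varies per package):

    import math

    def latlong_uv(x, y, z):
        """Map a normalized 3D direction to (u, v) in a lat-long panorama.
        Convention assumed here: +Y is up, u wraps around the horizon."""
        u = 0.5 + math.atan2(x, -z) / (2.0 * math.pi)
        v = 0.5 - math.asin(y) / math.pi
        return u, v

    print(latlong_uv(0.0, 1.0, 0.0))  # straight up -> (0.5, 0.0), top of the image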

If you're interested in good lighting, make sure you read some books on cinematography. There is much to learn from the real thing. Understanding color theory and how light behaves is very important.


This concludes the sixth part of the VFX Back to Basics series.
Make sure to subscribe (at the top) or follow me on Twitter (check the link on the right) if you want to stay informed on the release of new posts.

Thursday, April 12, 2012

How we made Agent Orange



Jef-Aram Van Gorp and I joined forces as the Belgian Boomsticks and we produced our first YouTube short. In this post I explain how we made it all happen.


The Idea

Jef-Aram and I are fans of shorts like the ones made by Freddie Wong and Brandon Laatsch. They are low budget, have decent enough effects and often have great stories.
Since making shorts is fun, we decided to give it a go ourselves. Guns and action are always well received by the YouTube public, but we wanted an original angle, so Jef-Aram came up with the idea of using bananas as guns, just like kids usually do. To extend this a bit we added the double-barrel leek to the arsenal.

With the general theme being fruit and vegetables, sugar and salt were quickly added as the so-called drugs, as these are bad for you when eaten in big quantities.

We consider this a hobby project, so we had only a limited amount of time and resources to complete it next to our full-time jobs.

Equipment on set

We both have some filming equipment so we threw everything together for this shoot.

Cameras:
  • Panasonic HVX 200
  • Nikon D7000
  • Canon 600D/T3i
  • GoPro Hero2
Sound:
  • Rode VideoMic Pro
  • Rode NTG2
  • Rode NT2-A (studio mic for additional sounds)
and also some tripods and the Edelkrone Pocket Rig (Link to Edelkrone Pocket Rig review).

The HVX 200 is an older camera, so it was the limiting factor resolution-wise. 720p is more than enough for YouTube, so we decided to stick to that.
The biggest problem when shooting with different cameras is color balance. Every camera has its own characteristics, so we had to solve this one in post production. We did make the mistake of shooting a bit overexposed. The H.264 codec in the DSLRs is a lossy 8-bit format, which limits what you can do once color channels get clipped at their maximum value.
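
A toy illustration of why that clipping hurts, with made-up pixel values:

    import numpy as np

    # Once a channel clips at 255, pulling exposure down in post cannot
    # recover the lost detail: clipped values all land on the same grey.
    scene = np.array([200.0, 300.0, 450.0])  # hypothetical "true" light values
    recorded = np.clip(scene, 0, 255)        # an 8-bit camera stores at most 255
    graded = recorded * 0.5                  # grade: halve the exposure in post
    print(graded)                            # [100.  127.5 127.5] -- the two
                                             # overexposed values became identical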

We left out any footage from the GoPro camera. Although we had some cool shots, the angle was so wide that even the shadow of the cameraman filming with one of the other cameras was in view. Since we don't have a clip-on LCD screen for the GoPro, it was hard to estimate what we were shooting exactly. Something we need to prepare better next time around.

A screenshot from the GoPro. Although it looks cool, it is useless because the cameraman's shadow is visible in the lower left corner.

The microphones gave good results. They are condenser microphones with a cardioid pattern, which helps pick up the voice of the actor and not so much the environment. Luckily for us there was not too much wind that morning. It was -15 degrees Celsius though, which made it hard to manipulate the small knobs on the cameras without our fingers freezing off.

Music and Sound FX

All music is royalty free. We bought the tracks some time ago at a big discount. This is a good way to get cheap music which does not infringe on the YouTube copyright rules.
The sound FX were free samples found on the net, also royalty free. When you grab files from the web, make sure to check their license before using them.

Post Production

Post production consisted of cleaning up bits and pieces we didn't want to have in view after all, adding muzzle flashes, some smoke, a bullet hole in my head, a bit of blood and the orange disks which we use for the glasses of Agent Orange. The effects are quite rough but they'll do the trick when everything is moving fast.

Using compositing to clean up the plates is really powerful, and it is not too hard to do. When something is visible for just a couple of frames, it is easy enough to copy the background from a couple of frames earlier. If the camera is handheld it is a bit trickier, but nothing a tracker cannot solve.

The banana in the lower left corner came into view a bit too early when Agent Orange points it at my head. It was easily repaired by taking a part of the wall from two frames earlier.

The muzzle flashes and dust clouds come from the Action Essentials 2 pack from Video Copilot. The pack is not too expensive, but I am sure you can find free muzzle flashes on the web. Check whether they have an alpha channel, as it makes compositing them over your footage much easier. If they are against a black background, it is easy enough to pull your own matte.
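
For elements shot on black there are two quick routes, sketched here with toy float images (sizes and values are hypothetical):

    import numpy as np

    plate = np.zeros((720, 1280, 3), np.float32) + 0.2  # stand-in background
    flash = np.zeros_like(plate)                        # muzzle flash element on black

    # 1. Additive merge: black contributes nothing, so no matte is needed.
    add_comp = np.clip(plate + flash, 0.0, 1.0)

    # 2. Pull a quick luma matte from the element, then do a normal "over".
    alpha = flash.max(axis=-1, keepdims=True)
    over_comp = flash + plate * (1.0 - alpha)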

The blood spatter is just a paint splatter image we pulled from the web. We created a matte, gave it the right color and made it blurry. Since it sticks to the lens, there is no need to track it in.

Frame without the blood spatter.

Frame with the blood spatter and the hole in my head.

The trickier effects are the ones you need to track over more than 10 frames. Both the oranges and the bullet hole in my head move around. We started with a full 3D track, but everything moves so much that it would have become a nightmare to clean up the jumpy tracking points and get a solid track.
So we switched to the classic four point track instead. We let the computer do the track, but every so often it got totally lost and we had to correct it manually. The perspective change was done by hand, as it was just quicker to do this for a couple of frames than for the computer to try and figure it out.
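
Under the hood, a four point (corner pin) track boils down to solving a homography from four point correspondences. A minimal sketch with hypothetical coordinates:

    import numpy as np

    def corner_pin(src, dst):
        """3x3 homography mapping four src corners to four dst corners --
        the math behind a classic four point / corner pin track."""
        rows = []
        for (x, y), (u, v) in zip(src, dst):
            rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
            rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
        # The smallest singular vector of the 8x9 system gives the homography.
        _, _, vt = np.linalg.svd(np.array(rows, dtype=float))
        h = vt[-1].reshape(3, 3)
        return h / h[2, 2]

    src = [(0, 0), (1, 0), (1, 1), (0, 1)]            # element corners
    dst = [(10, 12), (110, 8), (115, 108), (8, 104)]  # tracked corners
    print(corner_pin(src, dst))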

Oranges fixed to the glasses with gaffer tape. It didn't look very cool, so we had to replace them in post.

Oranges replaced by a clean image.



What's next?

Jef-Aram and I are already working on the second short, so make sure to stay tuned. You can do this by following us on Twitter or by subscribing to our YouTube channel.

Links:
Belgian Boomsticks YouTube Channel
Jef-Aram's Blog

Frederic's Twitter Page
Jef-Aram's Twitter Page