Sunday, February 24, 2013

IBL and Environment Maps for CGI



Image Based Lighting, also known as IBL, has been around for a while now and is a great solution for lighting and integrating your VFX scene into shot footage. In this article I'd like to show you how I create the photographs needed for this technique.


A low dynamic range example of an equirectangular image, also known as a lat/long environment map. Keep reading if you'd like to know how I make these.


The basics

It is perfectly doable to light your scene with CG lights. I have created hundreds of realistically lit shots this way, but it can become very time consuming when you want to get the fine details completely right. Thanks to IBL we can gain some time and spend it on other parts of the project. The idea is that a photograph of the place where you shot the footage contains all the light information needed and can be used by your graphics software to simulate the lighting.

There are two problems we have to overcome to be able to use this technique.
  • We need the light information of the whole scene. One photograph won't give us that unless you have a very expensive 360-degree camera. This means we need to take multiple photographs and stitch them together until we have a complete view of our scene. Each photograph needs some overlap with the previous one. The field of view needs to be big enough, otherwise we have to take too many photos, which becomes a nightmare to organise, not to mention all the time spent taking those pictures.
  • We need to capture all the light information, including the little light there is in the shadows as well as the super bright highlights of the sun. The dynamic range of digital cameras isn't big enough to capture all this information in one photo. The shadows will be crushed and the highlights will be burned, especially when storing the photo in an 8-bit image format like JPEG. The subtlety will be lost and the lighting won't look realistic.
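To get a feel for the numbers involved: each stop of exposure doubles the light, so the contrast a scene spans grows as a power of two. A minimal sketch (the stop counts are illustrative assumptions, not measurements from the article):

```python
# Each stop doubles the amount of light, so N stops span a 2**N : 1
# contrast ratio.  An 8-bit JPEG holds roughly 8 usable stops, while a
# sunny outdoor scene with deep shadows can easily span 17 stops or more.
def contrast_ratio(stops):
    """Luminance ratio spanned by a given number of stops."""
    return 2 ** stops

jpeg_range = contrast_ratio(8)
outdoor_scene = contrast_ratio(17)
print(f"An 8-bit JPEG holds roughly {jpeg_range}:1")       # 256:1
print(f"A sunny outdoor scene can exceed {outdoor_scene}:1")  # 131072:1
```

That gap of nine or more stops is exactly what the multiple-exposure technique below recovers.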

Important remark: keep in mind that you should have as few moving objects in your scene as possible. You will be taking pictures at different shutter speeds, and moving objects will become a blur at slow shutter speeds.

Today I use a better but slightly more expensive method than I did a couple of years back. In the first section I will briefly explain how I used to do it, and in the second section I will explain how I do it today.

The old way


A mirror ball. Note the scratches and blemishes. Although the map will work for lighting it might be a bit rough for good reflections.

You need at least a camera that can shoot in manual mode. You have to have full control over aperture, shutter speed and ISO. A regular DSLR will do the trick. It doesn't even have to be a very expensive one.

To capture a complete environment with a regular lens you need far too many pictures to stitch together, which is very time consuming. A neat solution is to photograph not your environment directly but a spherical mirror, better known as a mirror ball. You don't want those faceted disco balls but rather the smooth, chrome-like balls which give a perfect reflection without breakups. The great thing about a spherical mirror is that the reflection covers more than 180 degrees. It is actually almost 360 degrees, with the exception of the view directly behind the ball. The drawback is that the edges are extremely distorted and a lot of information gets squeezed into a few pixels. To counter this it is good practice to photograph the sphere from three or more different angles and stitch those together.
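A small sketch of why the ball sees almost everything: assuming an orthographic view of the ball along the -Z axis, the reflected direction at any point on the ball follows the standard reflection formula r = d - 2(d·n)n. The coordinate conventions here are my own illustration, not from the article:

```python
import math

def mirror_ball_direction(u, v):
    """Map normalized image coordinates (u, v) inside the unit disc of a
    mirror-ball photo to a world-space reflection direction.
    Assumes an orthographic camera looking down the -Z axis."""
    r2 = u * u + v * v
    if r2 > 1.0:
        raise ValueError("point lies outside the ball")
    # Surface normal of the sphere at this point.
    n = (u, v, math.sqrt(1.0 - r2))
    # Incoming view direction.
    d = (0.0, 0.0, -1.0)
    dot = sum(a * b for a, b in zip(d, n))
    # Reflect d about n: r = d - 2(d.n)n
    return tuple(di - 2.0 * dot * ni for di, ni in zip(d, n))

print(mirror_ball_direction(0.0, 0.0))  # centre: (0, 0, 1), back at the camera
print(mirror_ball_direction(1.0, 0.0))  # rim: (0, 0, -1), behind the ball
```

The centre of the ball reflects straight back at the camera (which is why you are always in the picture), while the rim reflects the direction directly behind the ball, so only a small region behind it is missing.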

Pros:
  • Cheap, available in garden shops unless you want a perfect chrome ball with no blemishes.
  • Good enough for capturing the general lighting information of your scene.
  • A regular DSLR camera with a regular lens will do the trick although I recommend a long one as you will be less visible in the reflection of the sphere.
Cons:
  • Every blemish on the sphere makes your picture less sharp.
  • You will always be in the picture, as you are reflected as well. You can paint yourself out, but that takes time.
  • Low resolution. Might not be enough for perfect reflections in your CGI image.
  • Measuring the distance between the camera and the sphere is critical. You want all the angles to be taken from the same distance.

Since regular JPEGs have a low dynamic range, we need to shoot different exposures and merge them into a High Dynamic Range Image, or HDRI. This means the camera needs to be on a tripod, as long exposures will be unavoidable. You also need to undo the spherical distortion caused by the mirror ball. There are several programs available to do this for you. I used to use HDR Shop 1.0, but it is old and a newer version is available.
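The idea behind merging brackets can be sketched in a few lines. This is my own simplified illustration, assuming a linear sensor response; real tools such as HDR Shop or Photoshop also recover the camera's response curve:

```python
def merge_exposures(samples):
    """Merge one pixel's values from an exposure bracket into a single
    radiance estimate.  `samples` is a list of (value, shutter_seconds)
    pairs with `value` in 0..255.  Assumes a linear sensor response."""
    def weight(v):
        # Hat function: trust mid-tones, distrust crushed or clipped values.
        return min(v, 255 - v) / 127.5

    num = den = 0.0
    for value, t in samples:
        w = weight(value)
        num += w * (value / t)   # radiance estimate from this one exposure
        den += w
    return num / den if den else 0.0

# A pixel that clips at 1 s but reads sensibly at faster shutter speeds:
bracket = [(255, 1.0), (128, 0.25), (32, 0.0625)]
print(merge_exposures(bracket))
```

The clipped 1 s sample gets zero weight, while the two unclipped samples agree on the same radiance, which is what makes the merged result trustworthy across the whole range.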

Check steps 4 to 6 in the next section to get an idea of how to set the exposure and how many pictures to take.


The new way


A low dynamic range, tone mapped lat/long environment map. It wasn't stitched properly; notice the soft edges of the buildings.

Last year I invested in a whole new setup. It is superior to the method above and gets better quality environment maps. Instead of working with a mirror ball I use a fisheye lens. You know, those funky super wide lenses which capture between 140 and 180 degrees field of view (depending on which lens you buy).

The aim is to take pictures with multiple exposures from six different angles, plus a top and a bottom shot. You always need a tripod for this.

Let's look at the kit.
  • DSLR: I use a Nikon D7000. This is a good mid-level DSLR. Why Nikon? Because I have already invested quite a bit of money in Nikon lenses over the past decade. Any other brand will do as long as you can shoot in manual mode.
  • The lens: This is the important part. I use a Samyang 8mm f/3.5 fisheye lens. This lens has a 180-degree field of view when used on a DX camera like the D7000. You have to do a little research to know which lens will be the best choice for your camera. This Samyang is great quality, but it is a fully manual lens, including the focus. It has no proper chip for EXIF data (although the new November 2011 model does). This is not a problem as such; you need fixed settings for your pictures anyway. I know this lens is also available for other cameras.
  • Tripod: A regular stable tripod will do.
  • 360 degree rig: Another essential piece of equipment. This allows you to rotate the camera in fixed intervals around the central Y-axis. A good rig will measure the rotation intervals for you. I use the Nodal Ninja 4 for measuring the intervals, with the EZ-Leveler II to get the camera perfectly horizontal.
  • A laptop for tethered shooting: A laptop is not necessary, but it is always great to save all those images straight to the hard drive and to automate the whole process. I use Sofortbild, a free Mac application from Stefan Hafeneger which allows me to take multiple exposures with one mouse click. Sofortbild works only with Nikon cameras. I know Canon has its own software.
  • The HDRI conversion software: I just use Photoshop to join my multiple exposure brackets into one HDR image. Sofortbild can do this on the fly while taking the pictures, but I had some trouble with it lately and haven't yet figured out why.
  • The stitching software: Although you could try to stitch all the images together in Photoshop, it is far more convenient to use an HDR panoramic stitching program. I use PTGui Pro for Mac. You can indicate where pictures overlap and it will try to stitch them together for you. There is also a manual mode if it doesn't manage to stitch them automatically. It also transforms the whole image into a longitude/latitude format and saves it in a 32-bit image format of your choice. I always use the Radiance file format, which has the .hdr extension. These files seem to work flawlessly in Maya.


Now we have to put all this kit into practice.
  • Step 1: Put the whole rig on the spot where you want your light to be captured. This is usually the location where you want your CG element to be in the scene.
  • Step 2: Make sure your camera is level. Use a spirit bubble or the built-in sensor to measure this. Be as accurate as possible. This becomes relatively easy when using an EZ-Leveler II.
  • Step 3: Put the nodal point of the lens right above the nodal point of the rig. If you skip this step your pictures won't align when stitching them. You can check this by taking pictures at different angles; you should get no parallax shift between the two pictures. If you do, then you need to adjust the placement of the camera in comparison to the rig's nodal point accordingly.
  • Step 4: We need to take pictures at different exposures. Use the shutter speed to control this and fix all other settings. Set the ISO to 100 to get noise-free images and the aperture to f/22 so you get as much depth of field as possible. It will make everything in the picture sharp, and that is exactly what you want.
  • Step 5: Take a couple of stills to check the darks and the brights. Use the histogram function on your camera to see when the blacks are no longer crushed and the whites no longer clipped. This tells you what the minimum and maximum shutter speeds should be.
  • Step 6: Start with the slowest shutter speed and take a picture every two stops until you reach the fastest shutter speed. This will usually be between 5 and 8 pictures. A program like Sofortbild will take them all in one go when tethered.
  • Step 7: Rotate the camera exactly 60 degrees (this can vary with other lenses but is a good benchmark) and repeat step 6. Make sure you take the same number of pictures.
  • Step 8: Keep doing this until you have a full rotation.
  • Step 9: Take 2 extra sets for the zenith and nadir. You could do without them, but they give a better result. The cheap rigs won't allow you to do this, though.
  • Step 10: You should have 8 sets of images now. Convert each set to an HDR image in Photoshop (for CS6: File > Automate > Merge to HDR Pro).
  • Step 11: Import the images into PTGui Pro and go through the whole procedure to stitch them together into an equirectangular image.
  • Step 12: Export the result as a new HDR and use it in your 3D software.
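The shooting plan from steps 4 to 9 can be sketched as follows; the metered limits (2 s for the shadows, 1/2000 s for the sun) are made-up example values, not measurements from the article:

```python
def bracket(slowest, fastest, stops_per_step=2):
    """Shutter speeds (in seconds) from slowest to fastest, one frame
    every `stops_per_step` stops (each stop halves the exposure)."""
    speeds = []
    t = slowest
    while t >= fastest:
        speeds.append(t)
        t /= 2 ** stops_per_step
    return speeds

# Example: the histogram said 2 s holds the shadows and 1/2000 s the sun.
speeds = bracket(2.0, 1 / 2000)
angles = list(range(0, 360, 60))   # six positions around the Y-axis

print(len(speeds), "exposures per angle:", speeds)
print(len(angles), "angles:", angles, "plus zenith and nadir")
```

With these example limits you end up with 6 exposures per angle, comfortably inside the usual 5-to-8 range, and 8 sets in total once the zenith and nadir shots are added.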

Something extra:
If you don't want your rig in the photograph, it is possible to paint it out. It gives a nicer result but it can be time consuming.

I realize this article doesn't explain all the intricate details of the process but it should be enough to get you on your way and to try it out for yourself.



2 comments:

Unknown said...

Would a Tokina 11-16mm work alright for doing this? I already own one and would like to skip spending more money than I have to. :D

Nice write up and thanks.

Frederic said...

Thanks for the question and the nice feedback.

In theory you can use any lens for this, but the wider the lens, the fewer pictures you need to take. With my 8mm fisheye lens I only need 8 pictures in total (6 around, 1 up and 1 down). I rotate the rig 60 degrees for every picture.

You will have to calculate how many pictures you need to shoot for a full environment. You can do this by checking the horizontal field of view of your lens and dividing 360 by that number. You also need some overlap, so make sure to take at least double that number of pictures.
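As a rough sketch of that calculation (the field-of-view numbers are illustrative assumptions, not lens specifications):

```python
import math

def shots_per_row(horizontal_fov_degrees, overlap_factor=2.0):
    """Minimum shots for one full horizontal rotation: 360 divided by
    the lens's horizontal field of view, scaled up to guarantee overlap."""
    return math.ceil(360.0 / horizontal_fov_degrees * overlap_factor)

print(shots_per_row(180))  # very wide fisheye: 4 shots around
print(shots_per_row(90))   # narrower wide-angle: 8 shots around
```

In practice I shoot six around with the 180-degree fisheye, which gives even more overlap than this minimum.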

I hope this helps.