
Friday, September 19, 2014

PRMan RMS 18: Some tips on Rendering in Maya



RMS 19 is around the corner and will be released this fall. The new RIS engine works differently from the RMS 18 reyes and ray-trace hiders, but these old hiders will still be available. So why not write an article on how to get decent quality images out and how to streamline your workflow? Some of the workflow tips will still be useful in RMS 19.


What are my render options while working on a scene?


There are three ways to get your previews rendered: the internal renderer, the interactive re-render option, or the PRMan external renderer.
  • The internal renderer: Works just like the Maya renderer or Mental Ray. My biggest problem with this method is that Maya becomes unresponsive while rendering. You can stop the render process by pressing Esc but you can't modify any parameters in Maya. You can only twiddle your thumbs while waiting.
  • The re-render option: In this mode the renderer will constantly update the image while Maya stays responsive. You can change settings on the fly and see them change in the Render View. I have seen people use it successfully, but personally I have not had a great experience with it. It does sometimes freeze Maya, as it is still a process within Maya, and this can really disrupt the workflow. It works best when you output your image to "it" and not to the Render View in Maya.
  • The PRMan external renderer: In this mode the scene gets exported and queued up in the Local Queue manager (or the farm if you have one) and then rendered by RenderMan Pro Server. The big benefit is that it is an external process which won't block Maya from being used once the scene is launched. It is slightly less interactive than the re-render mode but is very stable. Since it uses the Local Queue it is possible to queue up multiple renders. It also uses "it", which is a superior image viewer compared to the Render View.



How to speed up your workflow by using the external renderer


RMS 18 has two modes of rendering: reyes or ray trace. Reyes is a hybrid renderer (it used to be scanline only) and the other mode is of course a full ray tracer. They both produce excellent results but they respond slightly differently to the quality settings. I have made a test scene with a pencil and three black spheres lit with an HDR map.

Let's have a look at those settings (click on the images to see a full res version):


Image 1: RenderMan Globals: Quality tab and Advanced tab (click image to see full res).

Under the Advanced tab you can choose your render mode, also called the hider (right window on Image 1). When choosing reyes there is not much else to set in the hider tab. When choosing ray trace it is possible to tweak some settings. I prefer the adaptive path tracer. When you check incremental, the image will start out noisy but will improve over time. I really like this as you get immediate results, but you can also wait for more detail to show up while it continues calculating the image. It allows you to judge the image quickly so you can start making tweaks. The image keeps rendering while you make those shading tweaks in Maya or update texture maps (but it does not incorporate them until the next render). For me it means I can do two things at once.

Image 2: RenderMan Render Options

To set up your external renderer you need to adjust some settings in the RenderMan Render Options; a scripted sketch follows the list below.

  • Check the external renderer and choose "it" as your image display or your image viewer.
  • Choose local queue and local render (unless you have a complete render farm at your finger tips).
  • Set the environment key to "rms-18.0-maya-2015 prman-18.0". Important: you do need to have RenderMan Pro Server installed for this to work.
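These options live on the RenderMan globals node, so they can also be set from the script editor. Below is a minimal sketch of that idea; I have not verified the exact attribute names against RMS 18, so the setAttr lines use hypothetical placeholder names and are commented out. List the real attributes on your own globals node first.

# Discover and set RenderMan render options from Python inside Maya.
# The attribute names in the commented lines are hypothetical placeholders.
import maya.cmds as cmds

globals_node = "renderManGlobals"   # assumption: the default RMS globals node name
for attr in cmds.listAttr(globals_node) or []:
    if "render" in attr.lower() or "env" in attr.lower():
        print(attr)                 # find the real names for the options below

# Hypothetical equivalents of the checkboxes described above:
# cmds.setAttr(globals_node + ".externalRender", 1)                                  # render externally
# cmds.setAttr(globals_node + ".imageDisplay", "it", type="string")                  # view frames in "it"
# cmds.setAttr(globals_node + ".envKey", "rms-18.0-maya-2015 prman-18.0", type="string")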


How to simply adjust quality.


The next section shows some differences in quality. Make sure to click the images to see the actual differences between them. The blurred reflection on the spheres and the text on the pencil are good places to look.

Reyes is easily controllable with the shading rate setting (left window on Image 1). Large values will produce coarse quality but will render very fast. A shading rate of 5 will give very quick renders but your texture maps will look blurred. This can be enough to see if everything renders but isn't great for checking the detail in your maps while you are painting them. The letters on the pencil are unreadable. Check out the test image below (Image 3):


Image 3: Reyes: shading rate 5, pixel samples 3x3

When you lower the shading rate, the quality will improve. Production renders are always done at a shading rate of 1 or lower (like 0.5), which increases rendering time. On the next image (Image 4) I lowered the shading rate to 0.1, which makes the small letters on the pencil very readable, but the render time was seven times that of shading rate 5.

Image 4: Reyes: shading rate 0.1, pixel samples 3x3

The quality controls work a bit differently when you are using the ray tracer. In general you can leave the shading rate at 1; it is in fact the pixel samples setting which controls the quality of the image (check the right window on Image 1). You can see on the next image (Image 5) that the ray tracer shows the texture maps more clearly but has more problems with noise and anti-aliasing. It is very noticeable in the blurred reflections.

Image 5: Ray trace: shading rate 1, pixel samples 2x2

Increasing the pixel samples will give better quality but will increase render times considerably. On the next image (Image 6) we can see that 4x4 pixel samples is enough for a still image without depth of field. It is very comparable to the high quality reyes image. Once you start adding movement or shallow depth of field you need many more samples to get rid of the noise.

Image 6: Ray trace: shading rate 1, pixel samples 4x4
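As a rough mental model of how the two quality knobs scale the amount of work, here is a small back-of-envelope sketch. The resolution is an assumption, and actual render times scale far less than linearly with these counts (the seven-times figure above against a fifty-times smaller shading rate shows that), so treat this purely as a way to reason about the knobs, not to predict timings.

# Back-of-envelope view of the two quality controls. Numbers are illustrative only.
width, height = 1920, 1080
pixels = width * height

# reyes: the shading rate is roughly the area in pixels of one micropolygon,
# so lowering it increases the number of micropolygons that must be shaded.
for shading_rate in (5.0, 1.0, 0.5, 0.1):
    micropolygons = pixels / shading_rate
    print(f"shading rate {shading_rate:>4}: ~{micropolygons / 1e6:.1f} M micropolygons")

# ray trace: pixel samples are per axis, so 4x4 fires four times as many
# camera rays as 2x2 for the same frame (before any secondary bounces).
for samples in (2, 3, 4):
    rays = pixels * samples * samples
    print(f"pixel samples {samples}x{samples}: ~{rays / 1e6:.1f} M camera rays")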



Conclusion


As you can see, both the good old reyes and the newer ray tracer produce excellent results. I tend to lean toward the ray tracer as I like the incremental path tracer; it is just great to see your image improve over time. It is a bit slower than reyes but delivers very sharp images when the pixel samples are set correctly.

Sunday, February 24, 2013

IBL and Environment Maps for CGI



Image Based Lighting, also known as IBL, has been around for a while now and is a great solution for lighting and integrating your VFX scene into shot footage. In this article I would like to show you how I create the photographs needed for this technique.


Low dynamic range example of an equirectangular image, also known as a lat/long environment map. Keep reading if you would like to know how I make this.


The basics

It is perfectly doable to light your scene with CG lights. I have created hundreds of realistically lit shots this way but it can become very time consuming when you want to get the fine details completely right. Thanks to IBL we can gain some time and spend it on other parts of the project. The idea is that a photograph of the place where you shot the footage contains all the needed light information and can be used with your graphics software to simulate the lighting.

There are two problems we have to overcome to be able to use this technique.
  • We need the light information of the whole scene. One photograph won't give us that unless you have a very expensive 360-degree camera. This means we need to take multiple photographs and stitch those together until we have a complete view of our scene. Each photograph needs some overlap with the previous one. The field of view needs to be big enough, otherwise we have to take too many photos, which becomes a nightmare to organise, not to mention all the time spent taking those pictures.
  • We need to capture all light information, including the little light there is in the shadows as well as the super bright highlights of the sun. The dynamic range of digital cameras isn't big enough to capture all this information in one photo. The shadows will be crushed and the highlights will be burned, especially when storing the photo in an 8-bit image type like JPEG. The subtlety will be lost and the lighting won't look realistic. (A small illustration of this clipping follows this list.)
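To see what that clipping does, here is a tiny sketch with made-up relative radiance values: everything brighter than the exposure's white point collapses into the same 255, and the deep shadow detail rounds away to 0.

# Why one 8-bit exposure can't hold the whole range. The radiance values are made up,
# running from deep shadow detail up to the reflection of the sun.
radiance = [0.002, 0.05, 0.4, 0.9, 30.0, 5000.0]

exposure = 1.0                                           # chosen so mid-grey sits inside the range
for value in radiance:
    eight_bit = min(int(value * exposure * 255), 255)    # clip at white
    print(f"radiance {value:>8}: stored as {eight_bit:>3}/255")

# The 30.0 and 5000.0 entries both come out as 255 (highlights burned) and 0.002
# rounds to 0 (shadows crushed). Bracketing exposures and merging them into a
# floating-point HDR image keeps both ends, which is what the rest of this article is about.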

Important remark: keep in mind that you should have as few moving objects in your scene as possible. You will be taking pictures at different shutter speeds, and moving objects will become a blur at slow shutter speeds.

Today I am using a better but slightly more expensive method than a couple of years back. In the first section I will briefly explain how I used to do it, and in the second section I will explain how I do it today.

The old way


A mirror ball. Note the scratches and blemishes. Although the map will work for lighting it might be a bit rough for good reflections.

You need at least a camera which can shoot in manual mode. You have to have full control over aperture, shutter speed and ISO. A regular DSLR will do the trick. It doesn't even have to be a very expensive one.

To capture a complete environment with a regular lens you need far too many pictures to stitch together, which would be very time consuming. A neat solution is not to photograph your environment directly but to shoot a spherical mirror, better known as a mirror ball. You don't want those faceted disco balls but rather the smooth chrome-like balls which give a perfect reflection without break-ups. The great thing about a spherical mirror is that the reflection covers more than 180 degrees; it is actually almost 360 degrees, with the exception of the view directly behind the ball. The drawback is that the edges are extremely distorted and a lot of information gets squeezed into a few pixels. To counter this drawback it is good practice to photograph the sphere from three or more different angles and stitch those together.

Pros:
  • Cheap, available in garden shops unless you want a perfect chrome ball with no blemishes.
  • Good enough for capturing the general lighting information of your scene.
  • A regular DSLR camera with a regular lens will do the trick, although I recommend a long lens as you will be less visible in the reflection of the sphere.
Cons:
  • Every blemish on the sphere makes your picture unsharp.
  • You will always be in the picture as you are being reflected as well. You can paint yourself out but it consumes time.
  • Low resolution. Might not be enough for perfect reflections in your CGI image.
  • Measuring the distance between the camera and the sphere is critical. You want all the angles to be taken from the same distance.

Since regular JPEGs have a low dynamic range we need to shoot different exposures and join those together into a High Dynamic Range Image, or HDRI. This means the camera needs to be on a tripod, as long exposures are unavoidable. You also need to undo the spherical distortion caused by the mirror ball. There are several programs available to do this for you. I used to use HDR Shop 1.0, but it is old and there is a newer version available.
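If you prefer to script the merging step, OpenCV ships with a Debevec-style merge that does the same job as HDR Shop or Photoshop. A minimal sketch, assuming the bracketed file names and shutter speeds below; undistorting the mirror ball is a separate step and is not handled here.

# Merge an exposure bracket into one floating-point HDR image with OpenCV.
# File names and shutter speeds are assumptions for the example.
import cv2
import numpy as np

files = ["ball_2s.jpg", "ball_0.5s.jpg", "ball_0.125s.jpg",
         "ball_0.03s.jpg", "ball_0.008s.jpg"]                 # two stops apart
times = np.array([2.0, 0.5, 0.125, 1 / 32, 1 / 128], dtype=np.float32)

images = [cv2.imread(f) for f in files]

# Recover the camera response curve, then merge the bracket into linear radiance.
calibrate = cv2.createCalibrateDebevec()
response = calibrate.process(images, times)
merge = cv2.createMergeDebevec()
hdr = merge.process(images, times, response)

cv2.imwrite("ball_merged.hdr", hdr)   # Radiance .hdr, readable by most 3D packages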

Check steps 4 to 6 in the next section below to get an idea of how to set the exposure and how many pictures to take.


The new way


A low dynamic range, tone-mapped lat/long environment map. It wasn't stitched properly; notice the soft edges of the buildings.

Last year I invested in a whole new setup. It is superior to the above method and gets better quality environment maps. Instead of working with a mirror ball I use a fisheye lens. You know, those funky super wide lenses which capture between 140 and 180 degrees field of view (depending on which lens you buy).

The aim is to take pictures with multiple exposures from six different angles plus a top and bottom shot. You always need a tripod for this.

Let's look at the kit.
  • DSLR: I use a Nikon D7000. This is a good midlevel DSLR. Why Nikon? Because I have already invested quite a bit of money into Nikon lenses over the past decade. Any other brand will do as long as you can shoot in manual mode.
  • The lens: This is the important part. I use a Samyang 8mm f/3.5 fisheye lens. This lens has a 180-degree field of view when used on a DX camera like the D7000. You have to do a little research to know which lens will be the best choice for your camera. This Samyang is great quality but it is a fully manual lens, including the focus, and it has no proper chip for EXIF data (although the new November 2011 model does). This is not a problem as such; you need fixed settings for your pictures anyway. I know this lens is also available for other cameras.
  • Tripod: A regular stable tripod will do.
  • 360 degree rig: Another very essential piece of equipment. This allows you to rotate the camera in fixed intervals around the central Y-axis. A good rig will measure the intervals for you. I use the Nodal Ninja 4 for measuring the intervals, with the EZ-Leveler II to get the camera perfectly horizontal.
  • A laptop for tethered shooting: A laptop is not necessary but it is always great to save all those images straight to a hard drive and to automate the whole process. I use Sofortbild, a free Mac application from Stefan Hafeneger which allows me to take multiple exposures with one mouse click. Sofortbild works only with Nikon cameras; I know Canon has its own software.
  • The HDRI conversion software: I just use Photoshop to join my multiple exposure brackets into one HDR image. Sofortbild can do this on the fly while taking the pictures, but I have had some trouble with it lately and haven't yet bothered to figure out why.
  • The stitching software: Although you could try to stitch all the images together in Photoshop, it is far more convenient to use an HDR panoramic stitching program. I use PTGui Pro for Mac. You can give indications of where pictures overlap and it will try to stitch them together for you. There is also a manual mode if it doesn't manage to stitch them automatically. It also transforms the whole image into a longitude/latitude format and saves it in a 32-bit image format of your choice. I always use the Radiance file format, which has the .hdr extension. These files seem to work flawlessly in Maya.


Now we have to put all this kit into practice.
  • Step 1: Put the whole rig on the spot where you want your light to be captured. This is usually the location where you want your CG element to be in the scene.
  • Step 2: Make sure your camera is level. Use a spirit bubble or the built-in sensor to measure this. Be as accurate as possible. This becomes relatively easy when using an EZ-Leveler II.
  • Step 3: Put the nodal point of the lens right above the nodal point of the rig. If you skip this step your pictures won't align when stitching them. You can check this by taking pictures at different angles: you should get no parallax shift between the two pictures. If you do, adjust the placement of the camera relative to the rig's nodal point accordingly.
  • Step 4: We need to take pictures at different exposures. Use the shutter speed to control this and fix all other settings. Put the ISO at 100 to get noise-free images and the aperture at f/22 so you get as much depth of field as possible. It will make everything in the picture sharp, and that is exactly what you want.
  • Step 5: Make a couple of test stills to check the darks and the brights. Use the histogram function on your camera to see when the blacks aren't crushed anymore and the whites aren't clipped. This will show you what the minimum and the maximum shutter speed should be.
  • Step 6: Start with the slowest shutter speed and take a picture every two stops until you reach the fastest shutter speed. This will usually be 5 to 8 pictures. A program like Sofortbild will take them all in one go when using tethering. (A small script for generating such a bracket is sketched after this list.)
  • Step 7: Rotate the camera exactly 60 degrees (this can vary with other lenses but is a good benchmark) and repeat step 6. Make sure you take the same number of pictures.
  • Step 8: Keep doing this until you have a full rotation.
  • Step 9: Take two extra sets for the zenith and nadir. You could do without, but it gives a better result. The cheap rigs won't allow you to do this though.
  • Step 10: You should have eight sets of images now. Convert each set to an HDR image in Photoshop (for CS6: File > Automate > Merge to HDR Pro).
  • Step 11: Import the images into PTGui Pro and go through the whole procedure to stitch them together into an equirectangular image.
  • Step 12: Export the result as a new HDR and use it in your 3D software.
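Step 6 asks for a picture every two stops between the extremes found in step 5. A small sketch of how that bracket can be worked out; the two example shutter speeds are assumptions.

# Generate a two-stop exposure bracket between the slowest and fastest shutter speeds
# found with the histogram check in step 5 (example values assumed).
slowest = 2.0        # seconds: the blacks are no longer crushed
fastest = 1 / 1000   # seconds: the sun and other highlights are no longer clipped

speeds = []
speed = slowest
while speed >= fastest:
    speeds.append(speed)
    speed /= 4.0     # two stops = a quarter of the exposure time

for s in speeds:
    print(f"{s:g}s" if s >= 1 else f"1/{round(1 / s)}s")

# With these example extremes the bracket is 2s, 1/2s, 1/8s, 1/32s, 1/128s and 1/512s:
# six frames per angle, in line with the 5 to 8 pictures mentioned in step 6.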

Something extra:
If you don't want to have your rig in the photograph it is possible to paint it out. It will give a nicer result but it can be time consuming.

I realize this article doesn't explain all the intricate details of the process but it should be enough to get you on your way and to try it out for yourself.



Thursday, October 11, 2012

RenderMan Studio 4 and RenderMan Pro Server 17




Last week Pixar released their latest instalment of RenderMan Studio and RenderMan Pro Server. This is quite exciting news and a good reason to write something about it.

Versions and Pricing for RenderMan Studio 4

Pixar used to have two product lines: the cheaper RenderMan for Maya, which was a limited plug-in but came with an embedded render license, and the more expensive RenderMan Studio, which had all the regular tools like Slim and "it" but needed a RenderMan Pro Server license to actually render anything at all. Pixar has now consolidated these two Maya plug-ins into one and has adjusted the pricing and functionality. The RenderMan for Maya product line no longer exists and only RenderMan Studio is now available. RenderMan Studio still has all its functionality, but Pixar has added an embedded render license. In combination with the price drop to $1300 a license this gives bigger companies a considerable expense cut and gives smaller companies the full set of tools for only $300 more than the old RenderMan for Maya license.

This new pricing gives smaller studios the opportunity to start building their RenderMan pipeline without the huge investment that it used to be.


New Features in RenderMan Studio 4

RMS 4 contains the following applications.
  • RenderMan for Maya 5
  • Slim 10
  • "it" 10
  • Pixar's RenderMan 17 (embedded version)
  • LocalQueue
  • Tractor 1.6
The two new applications are the embedded RenderMan 17 and LocalQueue. The latter is particularly handy for artists who like to render locally under the control of a render manager without having to set up an entire render management infrastructure. The embedded renderer is exactly the same as the Pro Server version, so there is no difference whether you render locally or on the render farm.

There is a long list of improvements and efficiency updates, but one of the more interesting features is "Physically Plausible Shading". This is a new, easy-to-use advanced shading and lighting workflow which creates very realistic results. Keep in mind that this new workflow is not compatible with the older RMS 3 workflow, although the old workflow is still available if you wish to use it. You have to choose which workflow you would like to use at the start of your project and stick with it.

Some interesting features of the new RMS 4 plausible shading workflow.
  • Raytraced and point-based global illumination: This is controlled with special global GI lights. It supports light linking so it is possible to use pre-computed point clouds for a set while rendering the hero objects with raytraced global illumination.
  • Image based lighting is now decoupled from global illumination. The new RMSEnvLight is a bit slower to calculate but the quality has improved a lot.
  • New Area Lights: Area lights are quite hot nowadays. They provide realistic lighting and soft shadows.
  • Light Blockers: Although I used this feature many years ago as a custom light shader, it is now included as standard. This is incredibly handy for subtracting light in certain areas of your scene.
  • Subsurface scattering through raytracing: No need to compute pre-passes anymore. This can be handy for relighting purposes, as pre-passes can take up too much time.
To facilitate these features Pixar has added new shading nodes which are directly accessible in Maya or through Slim. The general purpose surface shader supports layering, which in my opinion is a very important feature.

New features in RenderMan Pro Server 17

RPS 17 is mainly a speed and efficiency update. I'll mention the features which I think are the most important ones. Hair and fur render up to five times faster, and the new implementation of RSL gives a 20% speed increase on shading calculations on average. There is also a volume rendering optimization and a new implementation of object instancing.

Also new: RenderMan On Demand

Pixar has also created a new online service called RenderMan On Demand. Whenever you don't have enough render capacity you can send your scenes to Pixar and let them render them for you. The service fee starts at 70 cents per core per hour.


Friday, April 27, 2012

VFX Back to Basics Series: 6. What is lighting and rendering?


This is part of a series on the basic elements of Visual Effects. Each post will talk about a certain element which is one of the basic bricks used for building VFX shots.

In this sixth post I will talk about lighting and rendering.

A ray traced image with a directional light and a sky dome which is used for the image based lighting.

The reason why lighting and rendering usually go together is that lighting technical directors handle both of them, and certain render techniques directly influence how the scene is lit.

Rendering in a nutshell

Let's start this post with rendering, and more specifically with render engines. A render engine is a piece of software which translates scene data like models, shaders and lights into a final viewable image. These calculations can take from mere seconds for a simple scene to hours for a complex scene, just for one frame. Keep in mind that a movie in the theater needs 24 frames a second, so you can imagine that movies packed with VFX would take years to render if it were done by a single CPU. Tackling a large number of frames is solved by using many CPUs at once. A room full of computers set up for this purpose is called a render farm.
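To put some very rough numbers on that (all of them assumptions, just to show the scale of the problem a render farm solves):

# Rough scale of the rendering problem, with assumed numbers:
# a 100-minute film rendered at an average of 2 CPU-hours per frame.
fps = 24
minutes = 100
hours_per_frame = 2.0                      # assumption; heavy VFX shots can be far higher

frames = fps * minutes * 60
cpu_hours = frames * hours_per_frame
print(f"{frames:,} frames -> {cpu_hours:,.0f} CPU-hours")
print(f"on a single CPU: about {cpu_hours / 24 / 365:.0f} years")
print(f"on a 1000-core farm: about {cpu_hours / 1000 / 24:.0f} days")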

The render engine can be seen as a separate entity and is not really part of the animation and modeling package, although it might seem so. Most packages come with a built-in renderer, but external render engines are usually chosen when working on bigger projects. Photorealistic RenderMan, 3Delight, Mental Ray and V-Ray are only a few examples. There are plenty of good renderers out there; just choose the one which works for you.

Diagram which shows the relationship between 3D packages and render engines.

The above diagram shows how software packages like Maya and 3D Studio Max talk to the rendering packages with the help of a translator. Each render engine has its own "language", so the translator is provided by the render engine.

There is one family of renderers which all talk the same language: the RenderMan standard. This standard was created by Pixar and for a long time Pixar was the only company which had a RenderMan compliant renderer. Now there are other commercial renderers available. The scene file from the 3D program gets translated to a RIB file which any RenderMan compliant renderer should be able to read and render. This open standard is extremely powerful and flexible. It allows for very complex render pipelines, so it is mostly used in VFX for film and not so much for small projects.
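To make the idea of a RIB file concrete, here is a minimal sketch that writes one by hand from Python. The Maya translator emits far richer RIB than this, and the shader names available depend on the renderer, but any RenderMan-compliant renderer should accept something along these lines.

# Write a minimal RIB file: one camera, one point light, one matte sphere.
rib = """\
Display "sphere.tif" "file" "rgba"
Format 640 480 1
Projection "perspective" "fov" [40]
WorldBegin
  LightSource "pointlight" 1 "from" [0 4 2] "intensity" [20]
  Translate 0 0 5
  Surface "matte"
  Sphere 1 -1 1 360
WorldEnd
"""

with open("sphere.rib", "w") as f:
    f.write(rib)

# Feed the file to the RenderMan-compliant renderer of your choice, for example
# "prman sphere.rib" (Pixar) or "renderdl sphere.rib" (3Delight).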


Render algorithms

Every renderer has its own algorithm but there are two distinct approaches to calculating an image: scanline rendering and ray tracing. I am explaining shadow creation here as well, although it could also fit in the lighting section below.

Scanline rendering is a technique which sorts the geometry according to depth and then renders a row at a time. It is very efficient as it discards geometry which is invisible and therefore limits calculations. Shadow calculations are usually done through shadow mapping. Shadow maps are depth maps which can be stored in a file and reused. High-resolution depth maps can be expensive to calculate, but their reusability in certain circumstances makes up for that. The technique handles large amounts of geometry rather well. Although it has been very popular and is used for example by PRMan, it is being replaced more and more by ray tracing.
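The heart of the shadow map technique is a simple comparison: when shading a point, measure its distance to the light and compare it with the depth stored in the map along that direction. A toy sketch of the lookup, with made-up numbers:

# Toy shadow-map test: a point is in shadow when something else was closer to the
# light in the same direction at the time the depth map was rendered.
import numpy as np

# Pretend 4x4 depth map rendered from the light's point of view (distances in scene units).
shadow_map = np.array([
    [9.0, 9.0, 4.2, 4.2],
    [9.0, 4.2, 4.2, 4.2],
    [9.0, 9.0, 9.0, 9.0],
    [9.0, 9.0, 9.0, 9.0],
])

def in_shadow(map_x, map_y, distance_to_light, bias=0.01):
    """True when the stored depth is closer to the light than this point is."""
    return shadow_map[map_y, map_x] + bias < distance_to_light

# A point on the ground 8.5 units from the light, projecting onto texel (2, 1):
print(in_shadow(2, 1, 8.5))   # True  -> occluded by whatever wrote the 4.2 depth
# A point projecting onto a texel where nothing was in front of it:
print(in_shadow(0, 3, 8.5))   # False -> lit

# The pixelated shadow edges in the low resolution example below come from exactly
# this: one coarse texel decides the result for many image pixels.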

Very simple example of shadows made with a shadow map. This one does not have a high enough resolution for a nice shadow; the edges are pixelated.

A new render with a higher resolution shadow map. The edges aren't pixelated anymore.

This is what a shadow map looks like. It is a depth map where the lighter grey is closer to the camera than the darker grey.

Raytracing is a technique which calculates a pixel at a time. It shoots a ray from the camera to the objects, which then bounces off towards the lights present in the scene. Shadows are an automatic result of this technique and therefore easy to make, but they can become rather expensive when soft shadows are needed: this increases the number of rays per pixel and directly affects the render times. The big benefit is that true reflections and refractions are possible. Mental Ray and V-Ray belong to this category.
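A stripped-down sketch of that idea: fire a primary ray from the camera at a sphere, then fire a shadow ray from the hit point towards the light. The shadow falls out of the very same intersection test, which is why it comes for free with ray tracing. This is a toy scene, not how any production renderer is structured.

# Toy ray tracing core: one primary ray against a sphere, then one shadow ray to a light.
import math

def ray_sphere(origin, direction, center, radius):
    """Distance along the (normalized) ray to the nearest hit, or None if it misses."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

def normalize(v):
    length = math.sqrt(sum(x * x for x in v))
    return [x / length for x in v]

camera = [0.0, 0.0, 0.0]
sphere_center, sphere_radius = [0.0, 0.0, 5.0], 1.0
blocker_center, blocker_radius = [2.0, 2.0, 3.5], 0.75     # object between the light and the sphere
light = [4.0, 4.0, 2.0]

# Primary ray straight down the camera axis.
t = ray_sphere(camera, [0.0, 0.0, 1.0], sphere_center, sphere_radius)
hit = [camera[0], camera[1], camera[2] + t]

# Shadow ray: from the hit point towards the light; any hit on the way means shadow.
to_light = normalize([l - h for l, h in zip(light, hit)])
shadow_origin = [h + 1e-4 * d for h, d in zip(hit, to_light)]   # nudge off the surface
occluded = ray_sphere(shadow_origin, to_light, blocker_center, blocker_radius) is not None
print(f"primary ray hits at t={t:.2f}, point in shadow: {occluded}")

# Soft shadows replace the single shadow ray with many rays spread over an area light,
# which is exactly why they drive the render times up.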

Ray traced shadows. These are sharp clean shadows.

An attempt to get softer shadows but not enough samples are used and it looks bad and pixelated.

This render uses more samples than the previous image and therefore has a much smoother result, but render times have gone up considerably: from 7 seconds to 23 seconds.

Render times in both techniques are influenced by the objects in the scene and the complexity of the shaders and lights used. It is imperative to keep render times under control when running a production. Render algorithms can also be combined: it is possible, for example, to use a scanline renderer and activate ray-traced shadows if the render engine allows this.

There are more subcategories like global illumination and radiosity but that is a bit too specialized for this article.

Lighting

A 3D scene without light would turn out black, just like in the real world. The techniques for lighting a scene are pretty much like lighting for a live action set. There are some differences though. It is impossible to subtract light (and I do not mean block it) in the real world, whereas in the digital world it is just a matter of mathematics. Shadows are another difference: it is possible to change their color and softness without affecting the light itself, or even turn them off completely. This gives a huge amount of flexibility, but be careful when lighting a photorealistic scene; using these tricks usually makes the scene look less real.

Classic lights

Lights in a 3D scene are controlled by light shaders, just as surfaces are controlled by surface shaders. Most of the time you do not have to assign a light shader to your light though, as most packages do this automatically. A light shader is a little program which defines the properties of the light. Light shaders are also render engine specific. All 3D packages have a ready-to-use set of lights available. Here are the most important examples; a small scripted sketch follows below.
  • Point light: light emitting from a single point like a light bulb.
  • Spot light: A light like a regular spot on a set. It can be controlled by barn doors and by adjusting its diameter.
  • Directional light: A light source with parallel light beams. It is well suited for simulating the sun for example. 
  • Area light: A light which emanates from a surface. Good examples are Kino Flo lamps and light coming through a window into a room.
Take note that area lights are a special case for shadows. The light is bigger than a point and will therefore always give soft shadows. Soft shadows are nice but are always a bit more expensive to calculate, especially when you don't want them to look too grainy.
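As a quick Maya illustration of the list above, here is a minimal sketch that creates the four classic light types through Python (maya.cmds). The pointLight, spotLight and directionalLight commands exist as written; the area light line is one common way to create that node type but may vary between Maya versions, so treat it as an assumption.

# Create the classic light types (run inside Maya's script editor).
import maya.cmds as cmds

bulb = cmds.pointLight(intensity=1.5)                    # light bulb: emits from a single point
spot = cmds.spotLight(coneAngle=45, penumbra=5)          # stage-style spot with a soft edge
sun = cmds.directionalLight(rotation=(-45, 30, 0))       # parallel rays, good for sunlight
area = cmds.shadingNode("areaLight", asLight=True)       # emits from a surface, gives soft shadows

# The digital-only trick mentioned earlier: a negative intensity subtracts light.
cmds.pointLight(bulb, edit=True, intensity=-0.5)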

Newer techniques

When lighting CGI which needs to be incorporated into live action, another technique can be used if your renderer allows it. This is called image based lighting, or IBL for short. Instead of lighting the scene completely yourself, it is possible to use the lighting which was present on the set. This is done by taking a high dynamic range (or HDR) panorama photograph. HDR images contain much more information than a regular photograph: hot spots in the image are not clipped and the blacks are not crushed.

A panoramic image of a kitchen. This is just a representation of an HDR image; real HDR images can't be displayed properly on the web.

The renderer will use this HDR image as a basis to light the scene. This can give highly realistic results. There are two main techniques for making these photographs. The cheap way is to use a mirror ball; it works well for grabbing the lighting but has certain limits when the image is used for crisp reflections on the CGI objects. The more expensive way is to use a fisheye lens.

If you're interested in good lighting make sure you read some books on cinematography. There is much to learn from the real thing. Understanding color theory and how light behaves is very important.


This concludes the sixth part of the VFX Back to Basics series.
Make sure to subscribe (at the top) or follow me on Twitter (check the link on the right) if you want to stay informed on the release of new posts.