
Friday, September 19, 2014

PRMan RMS 18: Some tips on Rendering in Maya



RMS 19 is around the corner and will be released this fall. Its new RIS engine works differently from the reyes and ray trace hiders in RMS 18, but those old hiders will still be available. So why not write an article on how to get decent-quality images out of them and how to streamline your workflow? Some of the workflow tips will remain useful in RMS 19.


What are my render options while working on a scene?


There are three ways to get your previews rendered: the internal renderer, the interactive re-render option, or the PRMan external renderer.
  • The internal renderer: Works just like the Maya renderer or Mental Ray. My biggest problem with this method is that Maya becomes unresponsive while rendering. You can stop the render process by pressing Esc, but you can't modify any parameters in Maya; you can only twiddle your thumbs while waiting.
  • The re-render option: In this mode the renderer constantly updates the image while Maya stays responsive. You can change settings on the fly and see them change in the Render View. I have seen people use it successfully, but personally I have not had a great experience with it. It does sometimes freeze Maya, as it is still a process within Maya, and this can really disrupt the workflow. It works best when you output your image to "it" and not to the Render View in Maya.
  • The PRMan external renderer: In this mode the scene gets exported and queued up in the Local Queue manager (or the farm if you have one) and then rendered by RenderMan Pro Server. The big benefit is that it is an external process which won't block Maya from being used once the scene is launched. It is slightly less interactive than the re-render mode but it is very stable. Since it uses the Local Queue it is possible to queue up multiple renders. It also uses "it", which is a superior image viewer compared to the Render View.



How to speed up your workflow by using the external renderer


RMS 18 has two modes of rendering: reyes and ray trace. Reyes is a hybrid renderer (it used to be scanline only) and the other mode is of course a full ray tracer. They both produce excellent results, but they respond slightly differently to the quality settings. I have made a test scene with a pencil and three black spheres lit with an HDR map.

Let's have a look at those settings (click on the images to see a full res version):


Image 1: RenderMan Globals: Quality tab and Advanced tab (click the image to see the full-res version).

Under the Advanced tab you can choose your render mode, also called the hider (right window on Image 1). When choosing reyes there is not much else to set in the hider tab. When choosing ray trace you can tweak a few settings; I prefer the adaptive path tracer. When you check incremental, the image starts out noisy but improves over time. I really like this, as you get immediate results but can also wait for more detail to show up while it continues refining the image. It allows you to judge the image quickly so you can start making tweaks. The image keeps rendering while you make those shading tweaks in Maya or update texture maps (although it does not incorporate them until the next render). For me it means I can do two things at once.

Image 2: RenderMan Render Options

To set up your external renderer you need to adjust some settings in the RenderMan Render Options; a sketch of what the queue runs under the hood follows the list below.

  • Check the external renderer and choose "it" as your image display or your image viewer.
  • Choose local queue and local render (unless you have a complete render farm at your finger tips).
  • Set the environment key to "rms-18.0-maya-2015 prman-18.0". Important: You do need to have the RenderMan Pro server installed for this to work.
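For the curious, this is roughly what happens after the scene is exported: LocalQueue hands the generated RIB files to RenderMan Pro Server's prman binary. A minimal sketch, assuming prman is on your PATH; the RIB path is a made-up example, as the real one depends on your project and frame naming:

```python
# Minimal sketch: render an exported RIB by hand with RenderMan Pro Server.
# Assumes the "prman" binary is on the PATH; the RIB path is hypothetical.
import subprocess

rib_file = "renderman/myscene/rib/0001/perspShape_Final.0001.rib"  # example path only

subprocess.run(
    [
        "prman",
        "-t:4",        # render with 4 threads
        "-progress",   # print progress to the terminal
        rib_file,
    ],
    check=True,
)
```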


A simple way to adjust quality


The next section shows some differences in quality. Make sure to click the images to see the actual differences between them. The blurred reflections on the spheres and the text on the pencil are good places to look.

Reyes is easily controllable with the shading rate setting (left window on Image 1). Large values will produce coarse quality but will render very fast. A shading rate of 5 will give very quick renders, but your texture maps will look blurred. This can be enough to see if everything renders, but it isn't great for checking the detail in your maps while you are painting them; the letters on the pencil are unreadable. Check out the test image below (Image 3):


Image 3: Reyes: shading rate 5, pixel samples 3x3

When you lower the shading rate, the quality will improve. Production renders are always done at a shading rate of 1 or lower (0.5, for example), which increases rendering time. In the next image (Image 4) I lowered the shading rate to 0.1, which makes the small letters on the pencil perfectly readable, but the render time was seven times that of shading rate 5.

Image 4: Reyes: shading rate 0.1, pixel samples 3x3

The quality controls work a bit differently when you are using the ray tracer. In general you can leave the shading rate at 1; it is in fact the pixel samples setting which controls the quality of the image (check the right window on Image 1). You can see in the next image (Image 5) that the ray tracer shows the texture maps more clearly but has more problems with noise and anti-aliasing. This is very noticeable in the blurred reflections.

Image 5: Ray trace: shading rate 1, pixel samples 2x2

Increasing the pixel samples will improve quality but will increase render times considerably. In the next image (Image 6) we can see that 4x4 pixel samples is enough for a still image without depth of field; it is very comparable to the high-quality reyes image. Once you start adding movement or shallow depth of field you need many more samples to get rid of the noise.

Image 6: Ray trace: shading rate 1, pixel samples 4x4
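If you switch between preview and final quality a lot, it can be quicker to script the two presets than to click through the Globals each time. A rough sketch using Maya's Python commands; the renderManGlobals attribute names below are placeholders I have not verified, so list the real ones first with cmds.listAttr:

```python
# Rough sketch: toggle between a fast preview and a near-final quality preset.
# The attribute names are assumptions -- find the real ones in your RMS version with:
#   cmds.listAttr("renderManGlobals", st="*ShadingRate*")
#   cmds.listAttr("renderManGlobals", st="*PixelSamples*")
import maya.cmds as cmds

def set_quality(shading_rate, pixel_samples):
    """Set the reyes shading rate and the pixel samples on the RenderMan globals."""
    cmds.setAttr("renderManGlobals.rman__riopt__ShadingRate", shading_rate)       # hypothetical attribute
    cmds.setAttr("renderManGlobals.rman__riopt__PixelSamples_0", pixel_samples)   # hypothetical attribute
    cmds.setAttr("renderManGlobals.rman__riopt__PixelSamples_1", pixel_samples)   # hypothetical attribute

set_quality(5.0, 3)     # coarse but fast preview
# set_quality(0.5, 4)   # closer to production quality
```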



Conclusion


As you can see, both the good old reyes and the newer ray tracer produce excellent results. I tend to lean towards the ray tracer as I like the incremental path tracer: it is just great to see your image improve over time. It is a bit slower than reyes, but it delivers very sharp images when the pixel samples are set correctly.

Friday, June 14, 2013

Does the new Mac Pro 2013 have the potential to be a good CGI workstation?

Apple sneak previews the new Mac Pro on the first day of WWDC 2013


On the 10th of June Phil Schiller, the senior vice president of worldwide marketing at Apple Inc., presented the new Mac Pro. This new machine looks nothing like the previous model; in fact, the whole thing has been redesigned from the ground up.

The internals of the new Mac Pro. You can see two memory slots on either side for a total of four.

What's under the cylindrical hood?


The new cylindrical design has already been mocked by the internet community as a trash can, a jet engine or a Darth Vader-inspired machine. I agree that it is a strange design, but I can see the benefits for keeping the machine cool: all components are cooled by a central cooling element and one large fan, which should keep noise levels down.

It will have up to 12 new Intel Xeon E5 cores, dual AMD FirePro graphics cards, 1866 MHz DDR3 memory with a bandwidth of up to 60 GB/s, PCI Express flash storage, Thunderbolt 2, HDMI 1.4, USB 3 and Gigabit Ethernet. Even the latest wireless technology is incorporated.


So, is it any good as a graphics workstation?


The new Xeon cores will definitely provide plenty of power for local render jobs, and the fast memory and flash drive will run 3D software like Maya quickly. In the pictures released on the Apple website you can see that there are only four memory slots. Compared to the current Mac Pro that seems rather few, although it all depends on how much you can fit into each slot, of course.

"So what about the graphics card?" you might ask. The dual professional AMD GPUs are not a bad choice, but it is not clear whether they will be upgradable once the machine leaves the factory. Graphics cards evolve much faster than the rest of a machine; the card usually gets replaced halfway through the lifetime of the machine itself. There is also the question of whether NVIDIA GPUs will become available, since a lot of graphics software relies on CUDA acceleration. Apple is gambling a bit that software companies will develop more OpenCL applications in the future. That said, Mari, a great painting tool for 3D artists, will become available for the Mac later this year.

One of the things people believe to be the biggest problem is expandability. There are no expansion slots anymore and there is no space for extra hard drives; Apple believes that all expansion should go through the Thunderbolt 2 interface. It does show potential though. With six ports you can add up to 36 devices (six daisy-chained per port), which I am sure is more than enough. And Thunderbolt 2 is twice as fast as the current generation, going up to 20 Gb/s.

I honestly think that hard drive expansion, although external, will not be such a big issue. There are already plenty of Thunderbolt solutions out there, and besides that, most post-production facilities have network-attached storage. Yes, four internal drives might be neater and mean less cable clutter, but I don't think performance will suffer too much.

A potentially bigger problem might be replacing solutions that needed a PCIe expansion slot before. Some facilities have invested a lot of money in peripheral hardware that can't be used with the new Mac Pro anymore. There are PCIe expansion chassis available, but they add to the cost. And, although I am not 100% sure about this, Thunderbolt 2 might not have enough bandwidth for adding an extra GPU in an external enclosure. Then again, you get two professional GPUs fitted when you buy a new Mac Pro, which already provide a lot of GPU power.


Am I getting one once it is available?


So am I getting one when it becomes available? Well, that will all depend on the price. The machine looks great on paper and will definitely outperform my 2008 Mac Pro, but if the entry model costs more than 3000 euros it will be hard to justify the purchase. The 2008 Mac Pro was one of the cheapest workstations in its class compared to its competitors. If Apple can repeat this, then I will be one of the first to get one.

Anyway, I am looking forward to this machine hitting the shelves.

Sunday, February 24, 2013

IBL and Environment Maps for CGI



Image Based Lighting, also known as IBL, has been around for a while now and is a great solution for lighting your VFX scene and integrating it into shot footage. In this article I'd like to show you how I create the photographs needed for this technique.


A low dynamic range example of an equirectangular image, also known as a lat/long environment map. Keep reading if you'd like to know how I make these.


The basics

It is perfectly doable to light your scene with CG lights. I have created hundreds of realistically lit shots this way but it can become very time consuming when you want to get the fine details completely right. Thanks to IBL we can gain some time and spend it on other parts of the project. The idea is that a photograph of the place where you shot the footage contains all the needed light information and can be used with your graphics software to simulate the lighting.

There are two problems we have to overcome to be able to use this technique.
  • We need the light information of the whole scene. One photograph won't give us that unless you have a very expensive 360-degree camera. This means we need to take multiple photographs and stitch them together until we have a complete view of our scene. Each photograph needs some overlap with the previous one, and the field of view needs to be big enough, otherwise we have to take too many photos, which becomes a nightmare to organise, let alone the time spent taking them (see the sketch after this list for a rough way to estimate the shot count).
  • We need to capture all the light information, including the little light there is in the shadows as well as the super-bright highlights of the sun. The dynamic range of digital cameras isn't big enough to capture all this information in one photo: the shadows will be crushed and the highlights will be burned, especially when storing the photo in an 8-bit image format like JPEG. The subtlety will be lost and the lighting won't look realistic.
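As a quick illustration of the first point, here is a small sketch that estimates how many shots one horizontal row takes for a given lens. The 25% overlap is just an assumption you can tune, and a row count like this ignores the extra rows you need to cover the zenith and nadir:

```python
import math

def shots_per_row(horizontal_fov_deg, overlap=0.25):
    """Estimate how many photos are needed to cover 360 degrees horizontally.

    Each photo only contributes the part of its field of view that does not
    overlap with the previous shot.
    """
    effective_fov = horizontal_fov_deg * (1.0 - overlap)
    return math.ceil(360.0 / effective_fov)

print(shots_per_row(40))    # a normal ~50mm-equivalent lens: 12 shots, and that is just one row
print(shots_per_row(100))   # a very wide rectilinear lens: 5 shots per row
```

Remember that every one of those positions is a whole exposure bracket, not a single photo, so a narrow lens quickly becomes unmanageable.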

Important remark: keep in mind that you should have as few moving objects in your scene as possible. You will be taking pictures at different shutter speeds, and moving objects will become a blur at slow shutter speeds.

Today I am using a better but slightly more expensive method than a couple of years back. In the first section I will briefly explain how I used to do it, and in the second section I will explain how I do it today.

The old way


A mirror ball. Note the scratches and blemishes. Although the map will work for lighting it might be a bit rough for good reflections.

You need at least a camera which can shoot in manual mode; you have to have full control over aperture, shutter speed and ISO. A regular DSLR will do the trick, and it doesn't even have to be a very expensive one.

To grab a complete environment with a regular lens you need far too many pictures to stitch together, which is very time-consuming. A neat solution is not to photograph your environment directly but to shoot a spherical mirror, better known as a mirror ball. You don't want those faceted disco balls but rather the smooth, chrome-like balls which give a perfect reflection without break-ups. The great thing about a spherical mirror is that it reflects more than 180 degrees; in fact it is almost 360 degrees, with the exception of the view directly behind the ball. The drawback is that the edges are extremely distorted and a lot of information gets squeezed into a few pixels. To counter this drawback it is good practice to photograph the sphere from three or more different angles and to stitch those together.

Pros:
  • Cheap, available in garden shops unless you want a perfect chrome ball with no blemishes.
  • Good enough for capturing the general lighting information of your scene.
  • A regular DSLR camera with a regular lens will do the trick although I recommend a long one as you will be less visible in the reflection of the sphere.
Cons:
  • Every blemish on the sphere makes your picture less sharp.
  • You will always be in the picture as you are being reflected as well. You can paint yourself out but it consumes time.
  • Low resolution. Might not be enough for perfect reflections in your CGI image.
  • Measuring the distance between the camera and the sphere is critical. You want all the angles to be taken from the same distance.

Since regular JPEGs have a low dynamic range, we need to shoot different exposures and merge them into a High Dynamic Range Image, or HDRI. This means the camera needs to be on a tripod, as long exposures will be unavoidable. You also need to remove the spherical distortion caused by the mirror ball. There are several programs available to do this for you; I used to use HDR Shop 1.0, but it is old and there is a newer version available.
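If you prefer a scriptable alternative to HDR Shop or Photoshop for the merging step, OpenCV can do it too. A minimal sketch, assuming three bracketed JPEGs and their shutter speeds; the file names and exposure times are just examples:

```python
# Minimal sketch: merge an exposure bracket into a Radiance .hdr file with OpenCV.
# File names and exposure times are examples; use your own bracket.
import cv2
import numpy as np

files = ["bracket_01.jpg", "bracket_02.jpg", "bracket_03.jpg"]
times = np.array([1.0, 1.0 / 4.0, 1.0 / 16.0], dtype=np.float32)  # shutter speeds in seconds

images = [cv2.imread(f) for f in files]

# Recover the camera response curve, then merge to a linear HDR image (Debevec's method)
calibrate = cv2.createCalibrateDebevec()
response = calibrate.process(images, times)

merge = cv2.createMergeDebevec()
hdr = merge.process(images, times, response)

cv2.imwrite("merged.hdr", hdr)  # Radiance format, which loads fine as an IBL map
```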

Check steps 4 to 6 in the next section below to get an idea of how to set the exposure and how many pictures to take.


The new way


A low dynamic range, tone-mapped lat/long environment map. It wasn't stitched properly; notice the soft edges of the buildings.

Last year I invested in a whole new setup. It is superior to the above method and gives better-quality environment maps. Instead of working with a mirror ball I use a fisheye lens: you know, those funky super-wide lenses which capture between 140 and 180 degrees field of view (depending on which lens you buy).

The aim is to take pictures with multiple exposures from six different angles plus a top and bottom shot. You always need a tripod for this.

Let's look at the kit.
  • DSLR: I use a Nikon D7000. This is a good midlevel DSLR. Why Nikon? Because I have already invested quite a bit of money into Nikon lenses over the past decade. Any other brand will do as long as you can shoot in manual mode.
  • The lens: This is the important part. I use a Samyang 8mm f/3.5 fisheye lens, which has a 180-degree field of view when used on a DX camera like the D7000. You have to do a little research to know which lens will be the best choice for your camera. This Samyang is great quality, but it is a fully manual lens, including the focus, and it has no proper chip for EXIF data (although the newer November 2011 model does). This is not a problem as such; you need fixed settings for your pictures anyway. I know this lens is also available for other camera mounts.
  • Tripod: A regular stable tripod will do.
  • 360 degree rig: Another essential piece of equipment. This allows you to rotate the camera at fixed intervals around the central Y-axis, and a good rig will measure the intervals for you. I use the Nodal Ninja 4 to set the intervals, with the EZ-Leveler II to get the camera perfectly horizontal.
  • A laptop for tethered shooting: A laptop is not necessary, but it is always great to save all those images straight to hard drive and to automate the whole process. I use Sofortbild, a free Mac application from Stefan Hafeneger which allows me to take multiple exposures with one mouse click. Sofortbild works only with Nikon cameras; I know Canon has its own software.
  • The HDRI conversion software: I just use Photoshop to join my multiple exposure brackets into one HDR image. Sofortbild can do this on the fly while taking the pictures, but I have had some trouble with it lately and haven't bothered to figure out why yet.
  • The stitching software: Although you could try to stitch all the images together in Photoshop, it is far more convenient to use an HDR panoramic stitching program. I use PTGui Pro for Mac. You can give indications of where pictures overlap and it will try to stitch them together for you, and there is also a manual mode if it doesn't manage to do so automatically. It also transforms the whole image into a longitude/latitude format and saves it in a 32-bit image format of your choice. I always use the Radiance file format, which has the .hdr extension; these files seem to work flawlessly in Maya.


Now we have to put all this kit into practice.
  • Step 1: Put the whole rig on the spot where you want the light to be captured. This is usually the location where you want your CG element to be in the scene.
  • Step 2: Make sure your camera is level. Use a spirit bubble or the built-in sensor to measure this and be as accurate as possible. This becomes relatively easy when using an EZ-Leveler II.
  • Step 3: Put the nodal point of the lens right above the nodal point of the rig. If you skip this step your pictures won't align when stitching them. You can check this by taking pictures at different angles: you should get no parallax shift between the two pictures. If you do, adjust the placement of the camera relative to the rig's nodal point accordingly.
  • Step 4: We need to take pictures at different exposures. Use the shutter speed to control this and fix all other settings. Set the ISO to 100 to get noise-free images and set the aperture to f/22 so you get as much depth of field as possible. It will make everything in the picture sharp, and that is exactly what you want.
  • Step 5: Make a couple of stills to check the darks and the brights. Use the histogram function in your camera to see when the blacks are no longer crushed and the whites are no longer clipped. This will show you what the minimum and maximum shutter speeds should be.
  • Step 6: Start with the slowest shutter speed and take a picture every two stops until you reach the fastest shutter speed. This will usually be between 5 and 8 pictures (the sketch after this list shows a quick way to compute such a bracket). A program like Sofortbild will take them all in one go when using tethering.
  • Step 7: Rotate the camera exactly 60 degrees (this can vary with other lenses but it is a good benchmark) and repeat step 6. Make sure you take the same number of pictures.
  • Step 8: Keep doing this until you have completed a full rotation.
  • Step 9: Take two extra sets for the zenith and nadir. You could do without them, but they give a better result. Cheap rigs won't allow you to do this though.
  • Step 10: You should have eight sets of images now. Convert each set to an HDR image in Photoshop (in CS6: File > Automate > Merge to HDR Pro).
  • Step 11: Import the images into PTGui Pro and go through the whole procedure to stitch them together into an equirectangular image.
  • Step 12: Export the result as a new HDR and use it in your 3D software.
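To illustrate steps 5 and 6: once the histogram has given you the slowest and fastest useful shutter speeds, the bracket is simply every two stops in between. A small sketch; the example speeds are made up:

```python
def bracket(slowest, fastest, stops_per_step=2):
    """List shutter speeds from slowest to fastest, one every stops_per_step stops.

    One stop halves the exposure time, so a two-stop step divides it by four.
    """
    speeds = []
    t = slowest
    while t > fastest:
        speeds.append(t)
        t /= 2 ** stops_per_step
    speeds.append(fastest)  # always include the fastest speed so the highlights are covered
    return speeds

# Example: the histogram says 2 s keeps the shadows and 1/2000 s keeps the sun
for t in bracket(2.0, 1.0 / 2000.0):
    print(f"{t:.6f} s")
# -> 2, 1/2, 1/8, 1/32, 1/128, 1/512 and 1/2000 s: seven exposures, within the 5-8 range of step 6
```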

Something extra:
If you don't want to have your rig in the photograph it is possible to paint it out. It will give a nicer result but it can be time consuming.

I realize this article doesn't explain all the intricate details of the process but it should be enough to get you on your way and to try it out for yourself.



Thursday, October 11, 2012

RenderMan Studio 4 and RenderMan Pro Server 17




Last week Pixar released the latest instalment of RenderMan Studio and RenderMan Pro Server. This is quite exciting news and a good reason to write something about it.

Versions and Pricing for RenderMan Studio 4

Pixar used to have two product lines: the cheaper RenderMan for Maya, a limited plug-in with an embedded render license, and the more expensive RenderMan Studio, which had all the regular tools like Slim and "it" but needed a RenderMan Pro Server license to actually render anything at all. Pixar has now consolidated the two Maya plug-ins into one and has adjusted the pricing and functionality. The RenderMan for Maya product line no longer exists and only RenderMan Studio is available. RenderMan Studio still has all the functionality, but Pixar has added an embedded render license. In combination with the price drop to $1,300 a license, this gives bigger companies a considerable expense cut and gives smaller companies the full set of tools for only $300 more than the old RenderMan for Maya license.

This new pricing gives smaller studios the opportunity to start building their RenderMan pipeline without the huge investment that it used to be.


New Features in RenderMan Studio 4

RMS 4 contains the following applications.
  • RenderMan for Maya 5
  • Slim 10
  • "it" 10
  • Pixar's RenderMan 17 (embedded version)
  • LocalQueue
  • Tractor 1.6
The two new applications are the embedded RenderMan 17 and LocalQueue. The latter is particularly handy for artists who want to render locally under the control of a render manager without having to set up an entire render management infrastructure. The embedded renderer is exactly the same as the Pro Server version, so there is no difference whether you render locally or on the render farm.

There is a long list of improvements and efficiency updates, but one of the more interesting features is "Physically Plausible Shading". This is a new, easy-to-use advanced shading and lighting workflow which creates very realistic results. Keep in mind that this new workflow is not compatible with the older RMS 3 workflow, although the old workflow is still available if you wish to use it. You have to choose which workflow you want to use at the start of your project and stick with it.

Some interesting features of the new RMS 4 plausible shading workflow.
  • Raytraced and point-based global illumination: This is controlled with special global GI lights. It supports light linking so it is possible to use pre-computed point clouds for a set while rendering the hero objects with raytraced global illumination.
  • Image based lighting is now de-coupled from global illumination. The new RMSEnvLight is a bit slower to calculate but the quality has improved a lot.
  • New Area Lights: Area lights are quite hot nowadays. They provide realistic lighting and soft shadows.
  • Light Blockers: Although I used this feature many years ago as a custom light shader, it is now included as standard. It is incredibly handy for subtracting light in certain areas of your scene.
  • Subsurface scattering through raytracing: No need to compute pre-passes anymore. This can be handy for relighting purposes, as pre-passes can take up too much time.
To facilitate these features Pixar has added new shading nodes which are directly accessible in Maya or through Slim. The general-purpose surface shader supports layering, which in my opinion is a very important feature.

New features in RenderMan Pro Server 17

RPS 17 is mainly a speed and efficiency update, so I'll mention the features I think are the most important. Hair and fur render up to five times faster, and the new implementation of RSL gives an average 20% speed increase on shading calculations. There is also a volume rendering optimization and a new implementation of object instancing.

Also new: RenderMan On Demand

Pixar also created a new online service called RenderMan On Demand. Whenever you don't have enough render capacity, you can send your scenes to Pixar and let them render them for you. The service fee starts at 70 cents per core per hour.


Saturday, August 18, 2012

RenderMan Basics




I recently put my old graduation animation online (you can watch it on YouTube: A Plug's Life). It was made back in 2001 and it was my first big project working with Maya and Pixar's RenderMan.

The other day I was asked in the video comments if I could write some tutorials on the use of RenderMan. That sounds like a great idea, but before I come up with some hands-on tutorials it is important to learn something about the RenderMan architecture.

I often hear that RenderMan is not suitable for small studios as it is too complex and is for tech heads rather than artists. I beg to differ. RenderMan is a very efficient render engine, and small studios which do not have a lot of render capacity can really benefit from the lower render times. The shading tools are quite extensive and can give superb results without the need for any programming.


What is RenderMan?

First I'd like to define RenderMan. RenderMan is actually an API (application programming interface) and not a render engine. For a long time Pixar had the only RenderMan-compliant renderer (they invented the standard), called PhotoRealistic RenderMan, or PRMan for short. People quickly started to call the renderer itself RenderMan, though, and the name has stuck ever since. Today there are other commercial RenderMan-compliant render engines available, such as 3Delight.

Since Pixar's RenderMan is the industry standard (they say so themselves and honestly it is true), I will use their software to explain my examples.


RIB or RenderMan Interface Bytestream

Since PRMan is a renderer and Maya an animation package, there is a need for a common language between the two. The API mentioned before provides this language. Have a look at the following schematic.

The scene translator converts Maya data into a RIB file which the render engine understands.

The scene information from Maya is translated into a RIB file. This RIB file contains everything from the geometry and information on which shaders are used, to the render resolution and render settings like the shading rate. Since a RIB file is written in the common RenderMan language, every RenderMan-compliant renderer can interpret and render it.

A RIB file can be written as ASCII; it looks a bit like a programming language and is actually quite readable. We often opened up the RIB file to see where things went wrong when the renderer didn't give us the expected results.
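To give you an idea of what such a file looks like, here is a sketch that writes a tiny hand-made RIB; the scene is deliberately trivial (one light, one sphere) and the parameter choices are just illustrative:

```python
# Sketch: write a minimal hand-made RIB file. A RIB exported from Maya is far
# larger, but it follows the same structure of options, a world block and attributes.
minimal_rib = """\
##RenderMan RIB
version 3.04
Display "sphere.tif" "file" "rgba"
Format 640 480 1
Projection "perspective" "fov" [40]
Translate 0 0 5
WorldBegin
    LightSource "distantlight" 1 "intensity" [1.5]
    AttributeBegin
        Color [1 0.2 0.2]
        Surface "matte"
        Sphere 1 -1 1 360
    AttributeEnd
WorldEnd
"""

with open("sphere.rib", "w") as f:
    f.write(minimal_rib)

# Any RenderMan-compliant renderer can now render it, e.g.:  prman sphere.rib
```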

In larger studios this RIB file is often modified to add extra elements before the final render is made.


RSL or RenderMan Shading Language

Shaders are render-engine dependent. This means that when you go from one render engine to another you need to redo the shading. To tackle this problem between RenderMan-compliant renderers, the standard provides a common shading language called the RenderMan Shading Language. This is a simplified programming language for coding shaders, which are then compiled and used by the render engine.

Coding shaders is not what most artists want to do, but since RenderMan Studio has a visual tool to create shaders called Slim, artists don't have to feel left behind. Understanding how to code shaders can give you better insight into how shading works in CGI, though.
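To give a flavour of what RSL looks like, here is a sketch of a tiny Lambertian surface shader, written out from Python and compiled. I'm assuming Pixar's RSL compiler (traditionally a command-line tool called "shader") is on your PATH; it produces a compiled simpleMatte.slo next to the source:

```python
# Sketch: a minimal Lambertian RSL surface shader, written to disk and compiled.
import subprocess

rsl_source = """\
surface simpleMatte(float Ka = 1; float Kd = 0.8;)
{
    normal Nf = faceforward(normalize(N), I);   /* shading normal facing the camera */
    Oi = Os;                                    /* pass the surface opacity through */
    Ci = Os * Cs * (Ka * ambient() + Kd * diffuse(Nf));
}
"""

with open("simpleMatte.sl", "w") as f:
    f.write(rsl_source)

# Pixar's RSL compiler; adjust the command if your install names it differently
subprocess.run(["shader", "simpleMatte.sl"], check=True)
```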


RenderMan Studio

Pixar's RenderMan is available as a package called RenderMan Studio. It contains:
  • RenderMan for Maya (Pro)
  • Slim
  • it
  • Tractor
RenderMan for Maya is the core plug-in. It deals with the scene settings and takes care of translating the scene information into a RIB file. It comes with its own Maya menu and custom shelf.

Slim is the shading management tool. It is an externally running program, but it can be connected to your Maya scene. Custom shading networks can be generated visually as well as through coding in RSL.

"It" is the image tool. When rendering out your images you can send them to the Maya Render View but also to "it". "It" is much more flexible and even allows simple compositing through scripting. It also allows the use of look-up tables and displays actual pixel values, something the Maya Render View is lacking.

Tractor is the render farm tool which queues and manages your renders. Not only can you manage your RenderMan renders, but also other jobs, like Nuke composites, that you want calculated on the render farm.

RenderMan Studio comes with an embedded render license so even small VFX studios can get started straight away.



Monday, August 06, 2012

CGI Workstations



Because I am a bit of a techie I often get asked what kind of workstation people should get for their post-production and CGI work. In this article I look into what is useful and what is merely fluff. Since hardware specifications change regularly, I will try to be as general as possible, so hopefully this article will still be valid in years to come.

The machine

Any computer can be used to do graphics on, but I am sure you will get frustrated quickly when things don't move along smoothly. Let's not kid ourselves though: machines get faster every year, but scene complexity goes up as well. In the end you always need a good machine to do CGI.

CPU or processor

Let's start with the heart of the workstation. The CPU will take care of most calculations. A fast processor is great to have but there are some things to consider.
  • MHz vs cores: Multicore processors have become quite common over the last couple of years. These are great for multitasking but also for programs that are multithreaded. Most CGI software is multithreaded and uses more than one core at a time. A high clock speed helps as well, as it executes each thread faster. Finding the balance is a bit tricky, but I always check CPU benchmark reports that give the combined performance of all cores in a processor. This way you can see whether it is better to go for the high-clocked quad-core processor or the lower-clocked hexa-core processor.
  • Price vs performance: Now that you have an idea of how each processor performs, you have to balance that against how much it costs. Most processor series have a sweet spot where you get the best performance for the price. It is usually a good idea to pick one that performs a couple of steps better than the sweet-spot model; yes, it is more expensive, but it is a bit more future-proof. If you have unlimited funds you can pick the fastest one, but keep in mind that it may be only 20% to 30% faster at more than triple the price.
I am a fan of lots of cores, since my main competence lies in lighting and rendering. It used to be that you had to buy a render license for each core, but luckily those days are over. Buying a 12-core machine instead of an 8-core one can be beneficial if your post-production software is really expensive, since you get more cores out of each license.

I also suggest getting server-rated processors like the Intel Xeon and AMD Opteron series. They are a bit more expensive but have no trouble running 24/7.

Memory or RAM

Next up is memory. 3D, compositing and render software use tons of memory. The good part is that, in comparison to processors, memory is rather cheap. Don't skimp on it! I know it is easy to add more later, but getting enough memory from the start will take you a long way. It also allows you to have multiple programs open at the same time; I often have Maya and Nuke open while rendering a scene in the background with Mental Ray.

When you run out of memory the workstation will start swapping memory to disk. This really grinds it to a halt, as disks are dead slow in comparison to RAM. Once it starts swapping you will pull your hair out in frustration, and it can take literally minutes before your machine becomes responsive again.

So how much should you get, you ask? As a rule of thumb, take at least twice as much as the market puts in machines by default. For example, most well-performing machines have 8 GB of RAM at the time of writing, so I suggest you put in 16 GB. This number will go up each year, so adjust accordingly.

Make sure to choose the right RAM for your motherboard. Some motherboards need the more expensive ECC memory.

Graphics Cards

Graphics cards are probably the most discussed items in a workstation. They are not only important for displaying your graphics on screen, but they also have compute capabilities which can boost the performance of the workstation.

  • Game card vs professional card: This is one of the big questions: should you get a cheaper, well-performing game card or a slower, very expensive professional card? The reason there is so much discussion about it is that it is a difficult question to answer. Post-production software vendors usually claim their software is only certified to work with professional cards; in practice most game cards cope quite well though. If you are building a dedicated workstation and are willing to spend the money, it might be worth getting the professional card. If you are on a limited budget and also want to use your machine for games, go for good processors first and get a well-performing game card. NVIDIA has a document which promotes the Quadro series over the GeForce series for professional work; look it up and see what is important to you.
  • NVIDIA or ATI (from AMD): The race to make the fastest graphics card is an ongoing process. NVIDIA has the Quadro series and ATI has the FirePro series for professional graphics. In my opinion NVIDIA is the clear winner here. They have developed the very popular CUDA platform, which lets you run calculations on your graphics card that are usually done by the CPU. A lot of programs already take advantage of this, and even the game cards support the technology.
At the time of writing this article there is not much choice when you are using a Mac Pro as your workstation and want an NVIDIA card. Only the expensive Quadro 4000 is currently available. I hope this will change in the near future.

Motherboard

Since processors dictate what kind of technology you use, choosing a motherboard becomes slightly less important. Most of the time the hardware manufacturers won't even give you a real choice. If you go for server rated processors then you usually also get a server rated motherboard which is good enough.

Hard drives

Most computers have only one hard drive. If you work in a facility with a server then this one disk will be enough. It just needs to be big enough to store all your post production software. All created content will be stored on the server so multiple people can have easy access to it.

If you have a standalone workstation then it is a good idea to get two extra disks and put them in a RAID 0 array. This RAID volume will be used for all your data, while your main disk contains the operating system and the installed software. It will increase read performance quite a bit. There is a caveat with this kind of setup though: since the data is divided over two disks, the chance of losing it to a disk failure is doubled. Make sure to back up your data regularly, preferably onto an external disk (which can be a SAN).

Take server rated drives which run at 7200 rpm or faster. These are manufactured to run 24/7. Cheap green disks just don't have enough performance for this kind of work.

Sound card

Most motherboards have built in audio and if you are not making any music then this will do.

Mouse and keyboard

Just pick a keyboard you feel comfortable with, but do pay some attention to the mouse. Most mice are too light and are not comfortable to work with. Keep in mind that the mouse is used a lot while creating graphics, and getting carpal tunnel syndrome from a bad mouse will kill your VFX career rather quickly. A heavier mouse works more accurately; I use gaming mice with a good grip to which I can add little weights. Most post-production software makes heavy use of the middle mouse button. This is usually a scroll wheel, so make sure it feels comfortable enough to be used as a button too.

Tablet and pen

This is a bit of an investment, but I have had a tablet since 2000 and I really can't do without it anymore. I don't use it for Maya, but it is incredibly handy when editing, compositing and, last but actually really important, when doing Photoshop paintwork. It has a completely different feel than a mouse and it just works way faster for certain things.

Monitors

Depending on what you do you can get a cheap one (when modeling or animating) or an expensive one (when doing color-critical work like painting or lighting). I suggest not skimping too much on a good monitor. I only use 1920 by 1200 LCD panels nowadays; they have enough pixels to show all the important screen assets and can handle full HD. If you do color-critical work, make sure to get a monitor that can be calibrated. It will save you a lot of hassle later on.

I have worked for production companies that didn't bother too much about getting good monitors, and the result was that everything we produced looked different on each monitor. You can imagine how frustrating it becomes when a director sees your image on his screen and tells you it is too dark or too red, while on your own screen it is too light and too green. If you work together with other people and can't afford multiple good monitors, you need to pick a reference monitor on which everyone will judge the color fidelity of the entire project. This way everyone sees the same image with the same color balance.

Other Peripherals

Feel free to add other peripherals that may make your life easier, like a Blu-ray player or an old-school floppy drive. Do check that they do not eat too many resources, like CPU power and memory.

Conclusion

You might have noticed that building a dedicated workstation can cost quite a bit of money; yes, it usually surpasses the price of a very expensive gaming rig. If you are a hobbyist it might be enough to use that gaming rig. If you are a professional trying to make money out of VFX work, then you had better go for a professional workstation. Having a decent system will accelerate your workflow, and since time is money you can calculate for yourself how much you will save over time by making the initial investment.

Saturday, June 09, 2012

VFX Back to Basics Series: 8. What is compositing?



This is part of a series on the basic elements of Visual Effects. Each post will talk about a certain element which is one of the basic bricks used for building VFX shots.

In this eighth and final post in the Back to Basics Series I will talk about compositing.

A CGI image split through the middle, showing the color channels in the top part and the alpha channel in the bottom part. Alpha channels are the key to compositing.

Where all the previous posts talked about generating elements, this one will talk about combining these elements into a final image. For people who don't know the word compositing, I always compare it to "Photoshop with moving images": we layer up different images and combine them into one, creating the illusion that everything was filmed at the same time.

Back in the old days

Although digital compositing is a very powerful tool in filmmaking, compositing itself started out as an analog process using the optical printer. The first optical printers were created in the 1920s and were improved until the 1980s. At first the effects were quite simple, like a fade-in or fade-out, but they became increasingly complex with the addition of things like matte paintings and blue-screen effects. All these elements were combined by exposing the film several times.

This also explains the necessity of using mattes. If the film were exposed twice it would create a double exposure, unless the area where the second element needs to go is masked out. For blue- or green-screen shots these mattes were made with high-contrast film and a color filter to separate out the background color. In other cases mattes were painted by hand, which could result in chattering edges.

Films like 2001: A Space Odyssey and Star Wars had a staggering amount of success with this optical workflow.

The digital era

At the end of the 1980s digital compositing started to take over. It brought two major improvements over its optical counterpart:

  • In the optical process it is necessary to create the various layers, like the mattes, by copying from one film strip to another. Each copy degrades the image and adds extra noise to the result. In the digital realm a copy is 100% identical to the original (unless you resample or recompress it, but that is usually not done).
  • The second problem is that film going through a machine can drift a bit, which could result in chattering mattes or halos. If done correctly, digital compositing never suffers from this.

When we need to composite for film a digital intermediate is created by scanning the film. When shot on digital cameras the source is already digital and does not need scanning.

Popular compositing packages today are Nuke, Digital Fusion and After Effects. The first two are node-based packages, while After Effects is a layer-based package and works in a similar way to Photoshop. Node-based systems are represented by a flow chart where every node applies an operation to the tree. Live-action and CGI compositing benefit more from this method, while motion graphics are usually done in layer-based packages.

A Nuke node-based network.

Layer-based compositing in After Effects.

Color and Alpha channels

Let's have a look at how compositing works in the digital realm. A digital image consists of pixels, each represented by a combination of the three primary colors red, green and blue. This combination can create a broad range of the colors in the visible spectrum. Each primary color is stored in its own channel.

In order to combine different images, it is necessary to have a fourth channel, called the alpha or matte channel. It contains the transparency information of each pixel, and this dictates what the result looks like when one image is put atop another. It works just like its analog counterpart.

There are two ways to store an image with an alpha channel: premultiplied, where the color values have already been multiplied by the alpha value, or unpremultiplied, where the color values keep their original values. This is very important to understand, as failing to grasp the concept can make composites look horrible, with weird matte lines as a result. Check what your software package expects you to work with.

CGI images are usually premultiplied automatically when rendered.
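A small numpy sketch of how this plays out in practice: the classic "over" operation expects a premultiplied foreground, and unpremultiplying is simply dividing the color back by the alpha:

```python
# Sketch: the "over" operation on premultiplied RGBA images, using numpy arrays
# of shape (height, width, 4) with values in the 0-1 range.
import numpy as np

def over(fg_premult, bg):
    """Composite a premultiplied foreground over a background."""
    alpha = fg_premult[..., 3:4]                # foreground alpha
    return fg_premult + bg * (1.0 - alpha)      # fg + bg * (1 - alpha_fg)

def unpremultiply(img, eps=1e-6):
    """Divide the color channels by alpha to get the original color back."""
    out = img.copy()
    alpha = np.maximum(img[..., 3:4], eps)      # avoid dividing by zero
    out[..., :3] = img[..., :3] / alpha
    return out

# Tiny example: a 50%-transparent red pixel over an opaque blue background
fg = np.array([[[0.5, 0.0, 0.0, 0.5]]])         # premultiplied: color already times alpha
bg = np.array([[[0.0, 0.0, 1.0, 1.0]]])
print(over(fg, bg))                             # -> [[[0.5, 0.0, 0.5, 1.0]]]
```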

A magnified CGI image. Look at the anti-aliased edges. This image is premultiplied.

The corresponding alpha channel image of the above image which has the same anti-aliased edges.

When unpremultiplied the edges become aliased.

The Extras

Compositing packages do not only put one image over another, but have a vast set of tools available:

  • Rotoscope tools: With these it is possible to create mattes to isolate parts of the image. They also include operations to modify edges or to paint over existing imagery.
  • Retiming tools: All kinds of operators to manipulate the speed of the sequence.
  • Color correction tools: Operators to analyze and change the color of the image. Very useful for matching foreground and background elements.
  • Filters: All kinds of filters like blur, motion blur, noise, denoise and many more.
  • Keying tools: These are useful for matte extraction from blue- or green-screen images, but other methods like luma keying are also available.
  • Layering tools: From simple operators that put one image over another to operators for premultiplication.
  • Transformation tools: Operators such as translate, rotate, scale, resize and distortions.
  • Tracking tools: Tools to track objects in a scene or to stabilize a shot.

Although compositing is a 2D process, nowadays most packages support 3D environments. This adds extra functionality: it is possible to have proper parallax, use projections, and integrate particle systems in a genuinely useful way.

Another rather recent development is stereo conversion tools, which allow regular images to be converted into stereographic images. Note that I use the term stereographic instead of the more common and commercial term 3D. Stereographic images are not truly 3D, as the audience cannot look behind objects by moving their head; I therefore prefer the term stereographic.


This concludes the eighth part of the VFX Back to Basics series.
Make sure to subscribe (at the top) or follow me on Twitter (check the link on the right) if you want to stay informed on the release of new posts.

6. What is lighting and rendering?
7. What is matte painting?