Thursday, November 22, 2012

Playtime with the Sony NEX FS700



Lately I get more filming jobs than real VFX work, which in a way is a pity, but I also see it as an opportunity to learn new things. For one of my latest clients we needed a high-speed camera. Luckily for us, Sony released the NEX FS700, an affordable 1080p camera with amazing frame rates.

Not a Review

First of all, I'd like to say that this is not really a review but merely my thoughts after using the camera for two days. I didn't have time to do an in-depth test or read the complete manual. We only used the camera for shooting at high frame rates, as we wanted cool slow-motion shots.

The Kit

The camera was a rental and came with the kit zoom lens, which isn't the best glass around. It did OK, but with Nikon glass on my DSLR we are used to better. It has an aperture of f/3.5 at 18mm and f/5.6 at 200mm. There are lens adapters for sale, so it is possible to use the glass you already own. We used a 32GB SDHC memory card, which records the video in AVCHD. A great feature is the built-in ND filter. The camera also comes with an XLR boom mic. This is hardly important for a slow-motion shoot, but a great feature nonetheless.

Handling the Camera and Shooting Slow Motion

If you have ever used a video camera like the HVX200, you will quickly find your way around the FS700. It took us less than an hour to get used to handling the camera in manual mode.

We had the PAL version of the camera, so the available frame rates for me were 200, 400 and 800 fps. The NTSC version does 240, 480 and 960 fps. At 200 fps you get full HD. The resolution goes down a bit at 400 fps, but it still looks great. At 800 fps the resolution and quality really drop, so 800 fps is great fun to test but no longer usable if quality is an issue. You can see the difference even on the camera's LCD screen, as the image gets cropped to record only a smaller section of the sensor.

Some remarks when shooting slow motion:
  • You need extra light. The shutter speed goes up quite a bit, which lowers the light input considerably. You could raise the ISO, but that really brings out a lot of noise in the image.
  • Don't use flickering lights like fluorescent tubes. We did the test and the flicker is horrible. Tungsten and sunlight are much safer here. We haven't checked HMI lights, but as far as I know they flicker as well, even with high-frequency electronic ballasts. In a pinch you can get rid of bad flicker with the FurnaceCore tools from The Foundry, but why fix it in post if you can simply avoid the mistake while shooting?
The camera records the footage to a buffer which is only big enough for around ten seconds (the exact length depends on the frame rate you choose). After recording your scene, the camera needs time to write the images to the memory card, which can take half a minute or so. The result is stored as 25 fps footage, so you can immediately play it back on the camera to see if your shot is OK. We connected the camera to a monitor with an HDMI cable so the client could watch as well, without everybody having to bend over the little built-in screen.
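
As a quick illustration of the arithmetic behind that conform (my own sketch, not Sony's published specs):

```python
def playback_seconds(burst_seconds, capture_fps, playback_fps=25.0):
    """Length of the conformed clip once the burst is stored at playback speed."""
    frames = burst_seconds * capture_fps
    return frames / playback_fps

# An 8-second burst at 400 fps is 3200 frames,
# which play back as 128 seconds of 25 fps slow motion (16x slower).
print(playback_seconds(8, 400))  # 128.0
```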

The Result of the Test

As a picture is worth a thousand words, and a video even more, I'd like to show you the video we made after an evening of test shooting. We used a 1K tungsten lamp to light the scene and shot various items at 400 fps. This is not the footage we shot for our client.


Thursday, October 11, 2012

RenderMan Studio 4 and RenderMan Pro Server 17




Last week Pixar released the latest instalments of RenderMan Studio and RenderMan Pro Server. This is quite exciting news, and a good reason to write something about it.

Versions and Pricing for RenderMan Studio 4

Pixar used to have two product lines: the cheaper RenderMan for Maya, a limited plug-in but with an embedded render license, and the more expensive RenderMan Studio, which had all the regular tools like Slim and "it" but needed a RenderMan Pro Server license to actually render anything at all. Pixar has consolidated these two Maya plug-ins into one and adjusted the pricing and functionality. The RenderMan for Maya product line no longer exists; only RenderMan Studio is available now. RenderMan Studio still has all the functionality, but Pixar added an embedded render license. In combination with the price drop to $1300 a license, this gives bigger companies a considerable expense cut and gives smaller companies the full set of tools for only $300 more than the old RenderMan for Maya license.

This new pricing gives smaller studios the opportunity to start building their RenderMan pipeline without the huge investment that it used to be.


New Features in RenderMan Studio 4

RMS 4 contains the following applications:
  • RenderMan for Maya 5
  • Slim 10
  • "it" 10
  • Pixar's RenderMan 17 (embedded version)
  • LocalQueue
  • Tractor 1.6
The two new applications are the embedded RenderMan 17 and LocalQueue. The latter is particularly handy for artists who want their local renders controlled by a render manager without having to set up an entire render management infrastructure. The embedded renderer is exactly the same as the Pro Server version, so there is no difference whether you render locally or on the render farm.

There is a long list of improvements and efficiency updates, but one of the more interesting features is "Physically Plausible Shading". This is a new, easy-to-use shading and lighting workflow which creates very realistic results. Keep in mind that this new workflow is not compatible with the older RMS 3 workflow, although the old workflow is still available if you wish to use it. You have to choose which workflow you want at the start of your project and stick with it.

Some interesting features of the new RMS 4 plausible shading workflow:
  • Raytraced and point-based global illumination: This is controlled with special global GI lights. Light linking is supported, so it is possible to use pre-computed point clouds for a set while rendering the hero objects with raytraced global illumination.
  • Image-based lighting is now decoupled from global illumination. The new RMSEnvLight is a bit slower to calculate, but the quality has improved a lot.
  • New area lights: Area lights are quite hot nowadays. They provide realistic lighting and soft shadows.
  • Light blockers: Although I used this feature many years ago as a custom light shader, it is now included as standard. Incredibly handy for subtracting light from certain areas of your scene.
  • Subsurface scattering through raytracing: No need to compute pre-passes anymore. This can be handy for relighting purposes, as pre-passes can take up too much time.
To facilitate these features Pixar has added new shading nodes which are directly accessible in Maya or through Slim. The general purpose surface shader supports layering, which in my opinion is a very important feature.

New features in RenderMan Pro Server 17

RPS 17 is mainly a speed and efficiency update. I'll mention the features I think are most important: hair and fur render up to five times faster, and the new implementation of RSL gives an average 20% speed increase on shading calculations. There is also a volume rendering optimization and a new implementation of object instancing.

Also new: RenderMan On Demand

Pixar also created a new online service called RenderMan On Demand. Whenever you don't have enough render capacity, you can send your scenes to Pixar and have them rendered there. The service fee starts at 70 cents per core per hour.
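
To get a feel for what that means in practice, here is a back-of-the-envelope sketch in Python (my own illustration; it assumes the fee is simply billed per core per wall-clock hour, and the job numbers are hypothetical):

```python
def render_cost(frames, minutes_per_frame, cores, usd_per_core_hour=0.70):
    """Estimate the fee for a job, assuming per-core, per-hour billing.

    minutes_per_frame is the wall-clock render time of one frame
    while it occupies all the given cores.
    """
    core_hours = frames * (minutes_per_frame / 60.0) * cores
    return core_hours * usd_per_core_hour

# Hypothetical job: 500 frames at 20 minutes each on 8-core nodes.
print(round(render_cost(500, 20, 8), 2))  # 933.33
```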


Tuesday, September 25, 2012

Getting into the VFX business



Every now and then I am asked how to get into the VFX business. In this post I'll give some pointers which can help you on your way. I have to warn you though: it is not an easy business to get into and a tough one to stay in. The competition can be quite stiff. Although I do not encourage working lots of extra hours, you probably will at some point, as you have to meet deadlines. In practice it is never a nine-to-five job. If you do not have a passion for computer graphics, it might be better to find another occupation.

Now that I have warned you, let's go over the pointers.


Education


In the old days CGI was so new that no courses on the subject existed. It was something you had to learn on the job. Since it involved computers, it was helpful to get a degree in computer science first, but it wasn't always necessary. Many people started in VFX companies at the bottom of the ladder as runners and were allowed to train on the expensive workstations during the evening. If you were good, it was possible to get promoted from runner to a junior position in the company, usually in the tracking or rotoscoping department.

In the last decade this has all changed. Software has become more user-friendly and you no longer need to know how to program. Artistic people belong in this business as much as the tech heads do. It can all be done on mainstream hardware, and many software packages have learning editions which let you get first-hand experience for little money, or sometimes even for free.

As with many things, it is always smart to get some education first. When I started there were no VFX courses in Belgium at all, so I had to move to the UK for an education. Today your local school may have a computer animation course, or you can try online courses like the ones available at FXPHD. Keep in mind that just learning how to work with Maya or any other software package is not good enough. I really recommend that you learn about traditional filmmaking too. Learning about cinematography and storytelling adds huge value to your knowledge and will make your work so much better.

Practice


I can't stress this enough: practice, practice, practice... till it makes perfect. Nobody can model a perfect human on day one, nor paint a beautiful matte painting when opening Photoshop for the first time. Everyone needs to practice, even the super talented. It is only by doing that you learn. You need to constantly hone your skills. Learn to understand why something is good and why something is crap. Get others to look at your work and let them comment on it. Learn from the advice which more experienced people give you, and then go back to practicing. Reading about it and watching tutorials is great, but if you do not practice it doesn't help you all that much.

This seems like a lot of hard work, and it is. I spent many late nights trying things out and practicing my skills while I was a student. Don't get disheartened when your progress is slow; some things take a while to learn.


Once you start working you will have less time to practice and it can become harder to pick up and train new skills.

Generalist vs Specialist


Whether to become a generalist or a specialist really depends on the kind of industry you will work in. Being a jack of all trades and a master of none can be really beneficial in a small company or when making VFX for commercials. You will probably get many small projects each year, and usually the teams are small too. You may even have to do an entire job on your own, which means modeling, animating, shading, lighting, rendering and compositing it all yourself.


When you are aiming to become a VFX artist in the film industry, it is very likely that you will have to specialize in certain skills, as the work demands high quality. This means you will be much better trained in your particular field and therefore produce better and faster results. It also means that other skills will be neglected and you will probably never become good at them.


I do recommend going over every discipline and trying each of them at least a couple of times. This way you can see what you like most, and you will understand the whole VFX pipeline.


Portfolio and Showreel


When you apply for a job, make sure to have a portfolio (for matte painters and concept artists) and/or a showreel which showcases your best work. Nobody will give you a job if you can't show the work you have produced in the past. It is even more important than your CV.


Only show your best work. One bad shot on your reel will pull the quality of the whole reel down. Don't make a ten-minute reel; nobody will watch it, as most recruiters simply don't have the time. Two minutes is more than enough. If you don't impress them in those first two minutes, you probably never will.


Focus your reel on the job you are applying for. If you want to be an animator, show animation. If you want to become a lighting TD, show finished shots with a breakdown of how they were done, and so on.


Conclusion


I don't want to discourage anyone, but VFX is much more hard work than glory. If you are really passionate about VFX it can become a very rewarding job, but keep in mind it is a tough one.

Now go practice some more.

Saturday, August 18, 2012

RenderMan Basics




I recently put my old graduation animation online (you can watch it on YouTube: A Plug's Life). It was made back in 2001 and was my first big project working with Maya and Pixar's RenderMan.

The other day I was asked in the video comments if I could write some tutorials on the use of RenderMan. That sounds like a great idea, but before I come up with hands-on tutorials, it is important to learn something about the RenderMan architecture.

I often hear that RenderMan is not suitable for small studios, as it is too complex and is for tech heads rather than artists. I beg to differ. RenderMan is a very efficient render engine, and small studios which do not have a lot of render capacity can really benefit from the lower render times. The shading tools are quite extensive and can give superb results without the need for any programming.


What is RenderMan?

First, let me define RenderMan. RenderMan is actually an API (application programming interface), not a render engine. For a long time Pixar had the only RenderMan-compliant renderer (they invented the standard), called PhotoRealistic RenderMan, or PRMan for short. People quickly started calling it RenderMan, though, and the name has stuck ever since. Today more commercial RenderMan-compliant render engines are available, like 3Delight.

Since Pixar's RenderMan is the industry standard (they say so themselves, and honestly it is true), I will use their software for my examples.


RIB or RenderMan Interface Bytestream

Since PRMan is a renderer and Maya an animation package, there is a need for a common language between the two. The API mentioned before is this language. Have a look at the following schematic.

The scene translator converts Maya data into a RIB file which the render engine understands.

The scene information from Maya is translated into a RIB file. This file contains everything from the geometry and which shaders are used, to the render resolution and render settings like the shading rate. Since a RIB file is written in the common RenderMan language, every RenderMan-compliant renderer can interpret and render it.

A RIB file can be written out as ASCII; it looks a bit like a programming language and is actually quite readable. We often opened up the RIB file to see where things went wrong when the renderer didn't give us the expected results.
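
To give you an idea, a hand-written RIB file for a single red sphere could look something like this (a minimal sketch, not what the Maya translator actually emits):

```
Display "sphere.tif" "file" "rgba"
Format 640 480 1
Projection "perspective" "fov" [40]
Translate 0 0 5
WorldBegin
  LightSource "pointlight" 1 "from" [2 4 -3] "intensity" [30]
  Color [1 0.2 0.2]
  Surface "plastic"
  Sphere 1 -1 1 360
WorldEnd
```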

In larger studios this RIB file is usually hacked to add extra elements before the final render is made.


RSL or RenderMan Shading Language

Shaders are render engine dependent. This means that when you move from one render engine to another, you need to redo the shading. To tackle this problem between RenderMan-compliant renderers, the standard provides a common shading language called the RenderMan Shading Language (RSL). This is a simplified programming language for coding shaders, which are then compiled and used by the render engine.

Coding shaders is not what most artists want to do, but since RenderMan Studio has a visual tool to create shaders, called Slim, artists don't have to feel left behind. Understanding how to code shaders can give you better insight into how shading works in CGI, though.
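
To give you a taste of the language, a minimal diffuse surface shader could look like this (a hand-written sketch; the shader name and defaults are my own):

```
/* simpleDiffuse.sl -- a minimal RSL surface shader */
surface simpleDiffuse(
    float Kd = 0.8;               /* diffuse strength */
    color tint = color(1, 1, 1);) /* overall color multiplier */
{
    /* face the normal towards the camera and normalize it */
    normal Nf = faceforward(normalize(N), I);
    Oi = Os;                                 /* pass the opacity through */
    Ci = Oi * Cs * tint * Kd * diffuse(Nf);  /* simple diffuse shading */
}
```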


RenderMan Studio

Pixar's RenderMan is available as a package called RenderMan Studio. It contains:
  • RenderMan for Maya (Pro)
  • Slim
  • it
  • Tractor
RenderMan for Maya is the core plug-in. It deals with the scene settings and takes care of translating the scene information into a RIB file. It comes with its own Maya menu and custom shelf.

Slim is the shading management tool. It runs as an external program but can be connected to your Maya scene. Custom shading networks can be created visually as well as through coding in RSL.

"It" is the image tool. When rendering out your images you can do so to the Maya renderview but also to "it". "It" is much more flexible and allows even simple compositing trough scripting. It also allows the use of Look Up Tables and displays actual pixel values, something the Maya renderview is lacking.

Tractor is the render farm tool which queues and manages your renders. Not only can you manage your RenderMan renders, but also other jobs, like Nuke composites, which you want calculated on the render farm.

RenderMan Studio comes with an embedded render license so even small VFX studios can get started straight away.



Monday, August 06, 2012

CGI Workstations



Because I am a bit of a techie, I am often asked what kind of workstation people should get for their post production and CGI work. In this article I look into what is useful and what is merely fluff. Since hardware specifications change regularly, I will try to stay as general as possible, so hopefully this article will still be valid in years to come.

The machine

Any computer can be used for graphics, but I am sure you will get frustrated quickly when things don't move along smoothly. Let's not kid ourselves though: machines get faster every year, but scene complexity goes up as well. In the end you always need a good machine for CGI.

CPU or processor

Let's start with the heart of the workstation. The CPU will take care of most calculations. A fast processor is great to have but there are some things to consider.
  • MHz vs cores: Multicore processors have become quite common over the last couple of years. They are great for multitasking, but also for programs which are multithreaded. Most CGI software is multithreaded and uses more than one core at a time. A high clock speed helps as well, as each thread executes faster. Finding the balance is a bit tricky, but I always check CPU benchmark reports which give the combined performance of all cores in a processor. This way you can see whether it is better to go for the high-MHz quad-core processor or the lower-MHz hexa-core one.
  • Price vs performance: Now that you have an idea how each processor performs, you have to balance that against its cost. Most processor series have a sweet spot where you get the best performance for the price. It is usually a good idea to pick the one which performs a couple of steps better than the sweet spot; yes, it is more expensive, but it is a bit more future proof. If you have unlimited funds you can pick the fastest one, but keep in mind that it is only 20% to 30% faster at more than triple the price.
I am a fan of lots of cores, since my main competence lies in lighting and rendering. It used to be that you had to buy a render license for each core, but luckily those days are over. Buying a 12-core machine instead of an 8-core one can be beneficial if your post production software is really expensive.

I also suggest getting server-rated processors like the Intel Xeon and AMD Opteron series. They are a bit more expensive but have no trouble running 24/7.

Memory or RAM

Next up is memory. 3D, compositing and render software use tons of memory. The good part is that, in comparison to processors, memory is rather cheap. Don't skimp on it! I know it is easy to add more later, but having enough memory from the start will help you a long way. It also allows you to have multiple programs open at the same time. I often have Maya and Nuke open while rendering a scene in the background with Mental Ray.

When you run out of memory the workstation starts swapping memory to disk. This really grinds things to a halt, as disks are dead slow in comparison to RAM. Once it starts swapping you will pull your hair out in frustration; it can take literally minutes before your machine becomes responsive again.

So how much should you get, you ask? As a rule of thumb, take at least twice as much as the market puts in machines by default. For example, most well-performing machines have 8 GB of RAM at the time of writing; I suggest you put in 16 GB. This number will go up each year, so adjust accordingly.

Make sure to choose the right RAM for your motherboard. Some motherboards need the more expensive ECC memory.

Graphics Cards

Graphics cards are probably the most discussed items in a workstation. They are not only important for putting your graphics on the screen; they also have computation capabilities which can boost the performance of the workstation.

  • Game card vs professional card: This is one of the big questions. Should you get a cheap, well-performing game card or a slower, very expensive professional card? The reason there is so much discussion is that it is a difficult question to answer. Post production vendors usually claim their software is only certified to work with professional cards. In practice, most game cards cope quite well. If you are building a dedicated workstation and are willing to spend the money, it might be worth getting the professional card. If you are on a limited budget and also like to use your machine for games, go for good processors first and get a well-performing game card. NVIDIA has a document which promotes the Quadro series over their GeForce series for professional work. Look it up and see what is important to you.
  • NVIDIA or ATI (from AMD): The race to make the fastest graphics card is ongoing. NVIDIA has the Quadro series and ATI has the FirePro series for professional graphics. In my opinion NVIDIA is the clear winner here. They developed the very popular CUDA platform, which allows calculations that are usually done by the CPU to run on your graphics card. A lot of programs already take advantage of this, and even the game cards support the technology.
At the time of writing this article there is not much choice when you are using a Mac Pro as your workstation and want an NVIDIA card. Only the expensive Quadro 4000 is currently available. I hope this will change in the near future.

Motherboard

Since processors dictate what kind of technology you use, choosing a motherboard becomes slightly less important. Most of the time the hardware manufacturers won't even give you a real choice. If you go for server-rated processors, you usually also get a server-rated motherboard, which is good enough.

Hard drives

Most computers have only one hard drive. If you work in a facility with a server then this one disk will be enough. It just needs to be big enough to store all your post production software. All created content will be stored on the server so multiple people can have easy access to it.

If you have a standalone workstation, it is a good idea to get two extra disks and put them in a RAID 0. This RAID volume will hold all your data, while your main disk contains the operating system and the installed software. It increases read performance quite a bit. There is a caveat with this kind of setup though: since the data is divided over two disks, the chance of losing it to a hard disk failure is doubled. Make sure to back up your data regularly, preferably onto an external disk (which can be a SAN).

Take server-rated drives which run at 7200 rpm or faster; these are manufactured to run 24/7. Cheap green disks just don't have enough performance for this kind of work.

Sound card

Most motherboards have built-in audio, and if you are not making any music, that will do.

Mouse and keyboard

Just pick a keyboard you feel comfortable with, but do pay some attention to the mouse. Most mice are too light and not comfortable to work with. Keep in mind that the mouse is used a lot while creating graphics, and getting carpal tunnel syndrome from a bad mouse will kill your VFX career rather quickly. A heavier mouse works more accurately. I use game mice with a good grip, to which I can add little weights. Most post production software makes heavy use of the middle mouse button. This is usually a scroll wheel, so make sure it feels comfortable enough to be used as a button too.

Tablet and pen

This is a bit of an investment, but I have had a tablet since 2000 and I really can't do without it anymore. I don't use it for Maya, but it is incredibly handy when editing, compositing and, last but actually really important, when doing Photoshop paintwork. It has a completely different feel than a mouse and just works way faster for certain things.

Monitors

Depending on what you do, you can get a cheap one (for modeling or animating) or an expensive one (for color critical work like painting or lighting). I suggest you don't skimp too much on a good monitor. I only use 1920 by 1200 LCD panels nowadays; they have enough pixels to show all the important screen assets and can handle full HD. If you do color critical work, make sure to get a monitor which can be calibrated. It will save you a lot of hassle later on.

I have worked for production companies which didn't bother too much with good monitors, and the result was that everything we produced looked different on each monitor. You can imagine how frustrating it becomes when a director sees your image on his screen and tells you it is too dark or too red, while on your own screen it looks too light and too green. If you work together with other people and can't afford multiple good monitors, pick one reference monitor on which everyone judges the color fidelity of the entire project. This way everyone sees the same image with the same color balance.

Other Peripherals

Feel free to add other peripherals that may make your life easier, like a Blu-ray player or an old school floppy drive. Do check that they do not eat too many resources like CPU power and memory.

Conclusion

You might have noticed that building a dedicated workstation can cost quite a bit of money. Yes, it usually surpasses the price of a very expensive gaming rig. If you are a hobbyist, that gaming rig might be enough. If you are a professional trying to make money out of VFX work, you had better go for a professional workstation. A decent system will accelerate your workflow, and since time is money, you can calculate for yourself how much you save over time by making the initial investment.

Wednesday, July 18, 2012

Render Farms



Projects range from small one-person achievements to huge VFX productions where hundreds of artists contribute to the result. What they have in common is that they need computational power to finish certain steps in the process. Rendering is the first that comes to mind, but simulations and compositing take up their own share of CPU cycles.

It is possible to let your workstation chug away on those calculations, and eventually they will get done, but more often than not a tight deadline does not give you this luxury.

The speed increase with a Render Farm

Render farms are all about speeding things up. Let's say you have only one workstation and you work 8 hours a day (OK, you have to be lucky to work only 8-hour days in VFX; usually you need more to reach the deadline). That leaves 16 hours a day for rendering your sequence. Even at 10 minutes a frame, which is quite acceptable, that gives you 96 frames, which is only 4 seconds of animation (at 24 fps) per day. You could try to simplify the scene to speed things up, but that is not always possible.

The simplest form of a farm is a second machine next to your workstation which can do calculations while you continue working. This more than doubles your capacity and will get you 10 seconds of animation per day for the same scene. With only one extra machine it is easy enough to manage the rendering of the different scene files manually; no extra management software is needed at this point.

Let's move to a bigger setup with 5 artists in the shop, each with their own workstation. With the same 10-minutes-a-frame setup, they can render 20 seconds of animation each day when working 8-hour days. You can see that the math is simple enough to make predictions once you have an idea how long a frame takes. I must admit it can be tricky to predict render times when the contents of the scene, and hence the render times per frame, change a lot. But an indication is better than nothing.
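
The estimates above are simple enough to wrap in a few lines; here is a throwaway Python sketch of the same arithmetic (my own illustration):

```python
def animation_seconds_per_day(machines, minutes_per_frame,
                              render_hours_per_day=16, fps=24):
    """Screen seconds rendered per day, using the assumptions above."""
    frames = machines * render_hours_per_day * 60 / minutes_per_frame
    return frames / fps

print(animation_seconds_per_day(1, 10))  # 4.0  (the single workstation)
print(animation_seconds_per_day(5, 10))  # 20.0 (the five-artist shop)
```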

It becomes obvious that the more machines you get, the more complicated managing the jobs becomes. In these circumstances it is wise to introduce a render manager into the pipeline.

The render manager

The render manager is a piece of software which automates the distribution of jobs across your render farm. It consists of two parts: the server-side program, which is the actual manager, and the client-side program, which launches the right renderer for each job. The server-side program usually runs on a dedicated machine, very often the same machine as the license server for your software. The client program is installed on each machine that can handle a render job.

Instead of launching the job manually on the client machines, you submit it to the render manager. When a job is accepted, the render manager has several tasks to do. First it checks the availability of the clients. It then sends a chunk of the job to a client. The chunk size can be dictated by the user and can range from part of a frame, to a full frame, to a group of frames. If your frames render fast it is usually better to group them; otherwise one frame per client will do fine.

The render manager now divides all the chunks between the clients. When a client finishes, it reports back to the manager and automatically receives the next chunk.
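
Conceptually the chunking is very simple; here is an illustrative Python sketch (my own, not any particular render manager's API):

```python
def make_chunks(first_frame, last_frame, chunk_size):
    """Split a frame range into the chunks a manager hands out to clients."""
    return [(start, min(start + chunk_size - 1, last_frame))
            for start in range(first_frame, last_frame + 1, chunk_size)]

print(make_chunks(1, 10, 4))  # [(1, 4), (5, 8), (9, 10)]
```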

Another task of the render manager is to queue the jobs from the users. It allows several people to submit jobs without the fear that they will fail because the render farm is too busy. Render managers usually have the option to prioritize jobs according to preset rules.

A good render manager will tell you when things go wrong. When a render fails it usually generates an error message which is passed on to the manager. It will then show you which jobs didn't render properly.

What is needed for running a render farm

  • Computers: OK, that is pretty obvious. Fast CPUs and a lot of memory are preferred; you do not want them to start swapping memory to disk, as that grinds the render to a halt. Big hard drives are not important, but do get server-grade ones. Rack machines are the preferred choice when not using workstations. They are space efficient and usually server grade, so they are made to run 24/7 at full capacity. Do not use the render manager server as a render box; you need one machine dedicated to running the manager.
  • Operating system: A good render manager works cross-platform. It doesn't matter whether Maya was installed under Windows, Mac OS X or Linux. Most render farms run Linux, as you can install it without a GUI. This saves memory and makes the machine more efficient (oh, and Linux is free!). Some software packages do not have a Linux or Mac version, and then the choice of OS becomes obvious.
  • A render manager: If you are serious about rendering, you need a render manager. It automates everything, speeds things up and takes a lot of worry out of your hands. It also makes your farm very scalable. There are many different ones out there, all at different prices, so it is hard to recommend any particular one.
  • Software licenses: That's right, software is usually not free and you need to buy the right number of licenses. Most VFX packages have separate render licenses; they tend to be cheaper and are sometimes sold in bundles. For example, Maya comes with 5 render licenses when you buy a floating license. Note that you can only use packages which have a command line render option, as a render manager cannot start GUIs.
  • Network: A fast network is a must. The bigger your farm becomes, the more data you have to pull through those wires. Yes, wires: running a farm on WiFi is a bad idea.
  • A storage server: Rendered images or baked-out data can fill up disks pretty quickly. A good server with a RAID system for redundancy and continuity is ideal for storing the data. You do not want to lose all that render time because a disk went bad. Remember: a RAID is not a backup system; it was designed to let you keep working even when a disk dies on you. If you want to be sure that your render data is safe, copy it to a second server after the render job finishes.

Scalability

One of the things we notice in VFX is that render and simulation times never seem to go down. You would assume that with faster processors render times would drop, but instead they stay the same while the complexity of the scenes goes up.

Lucky for us, render farms are, up to a point, very scalable. Just add more machines and licenses when needed. It only becomes a problem when the server or the network cannot handle the traffic anymore. You can imagine that a farm at ILM or Pixar is a complex thing to maintain.

If you need a quick boost in render capacity but don't have the cash to expand the farm, you can always hire the services of an online render farm. You could call it rendering in the cloud. It works very similarly to your local render farm, with the main difference that you need to upload your data to the cloud first. With a slow internet connection this may take a while, but once the data is in the cloud, the renders go really fast.

Tuesday, July 10, 2012

Review: Rode Videomic Pro and Stereo Videomic Pro



Sound is 50% of a movie. Most people stop watching your movie when the sound is badly recorded. With the whole revolution of shooting on DSLRs, new types of microphones have been designed. Rode has come up with the Videomic Pro and the Stereo Videomic Pro.

We have those microphones available to us so what is better than making a blog post about it? That's right, a video on our bbrevisited channel! Watch it and discover how these Rode microphones perform under different circumstances.


Some afterthoughts

I must say it is rather annoying that the Nikon D7000 has no audio level metering. This caused some of the recordings to distort as the signal clipped. I had set the internal amplification to medium; the distortion could have been avoided by leaving the setting on automatic. That aside, the rest of the test went rather well.

The cardioid patterns of both microphones are very useful: they pick up less sound behind the camera and more in front. They still pick up sound from quite a large environment, though, and do not filter out as much as a shotgun or a lavalier mic. For the Stereo Videomic Pro this is an advantage rather than a drawback, as it is designed to capture that environment. The big advantage of both is that they record straight into the camera, so no separate sound recorder is needed, and they beat the built-in microphones of DSLRs hands down.

I found that the Videomic Pro picked up a bit more bass than the Stereo version, but both performed well. The Stereo Videomic Pro records two channels, which no built-in mic can do.

Battery life is really not an issue. More than 70 hours on the Videomic Pro, and over 100 hours on its stereo brother, is really good performance. It will last you many days if you turn the microphone off whenever you are not recording. Both microphones are also quite compact, so they won't take up too much space in your bag.

If you are serious about recording sound properly I really recommend these two models from Rode.

Tuesday, June 19, 2012

How we made Toilet Run



Toilet Run is Belgian Boomsticks' second production.


The idea

Queueing for a public toilet is something which happens to all of us now and then but is never pleasant when you need to go urgently. It gets worse when it is out of order and you need to find another one. What happens when it becomes a race to get there first but the path to it is full of obstacles?

Since all the obstacles are in different locations, we wanted something to tie it all together. Using the map with the animated heads was the ideal solution to avoid discontinuity between environments. All the obstacles are merely accidents; we didn't want them to be caused by direct rivalry between the two characters.

Equipment

We used almost the same camera and sound equipment as on Agent Orange (check this link), but we did have some extra ropes and rigs to get over the ditch. The rope was a regular climbing rope, and we had an issue getting reasonable tension on it. Without tension, Jef-Aram would get wet feet rather quickly. Luckily we had a Grigri with us, which helped us tighten the rope. We know this is not the proper way to use a Grigri, but it worked, and that is all that matters when you need to shoot in a couple of hours.

Again we tried to use the GoPro to get some nice shots on the cables, but the shots didn't work that well. It is better to cut them out, as bad shots would bring the quality down a lot. I guess we still need more practice positioning the GoPro camera; the lack of a viewfinder just doesn't make it easy.

Music and Sound FX

We tapped into our royalty free library of music and sound FX again, but this time we also added more of our own foley recordings. All cloth movement and some impacts were done by us. We noticed that when two objects collide, you do not need one sound but two: each object has its own distinct sound. The truck hitting Jef-Aram is a combination of a metal sound from the body of the truck and a human impact sound. This makes it much more dramatic.

Post Production

The first small bits to fix were the toilets. As a matter of fact, those are not toilets at all but electricity booths. The first one was solved practically with the "out of order" messages. On the second one we painted out the voltage signs and added a toilet sign in After Effects.

A bit more work was the map. We designed one big map in Illustrator and colored it in Photoshop. The colored map was then taken into After Effects for the animation. In the end we showed only a fraction of the map in the clip, but drawing the whole map was useful as it gave us more continuity while editing.

The full map. Click to enlarge.

The biggest challenge in compositing was the truck hitting Jef-Aram. We first wanted to hit him with me driving a car, but since all the obstacles were things that just happened as accidents rather than direct rivalry, that seemed too violent. We decided to go for a truck instead of a car, as the impact would be more dramatic. The problem was that we didn't know anybody who could drive a truck for us. At first we thought of using a still image of a truck, but that idea was quickly ditched when we realized a translating image has no perspective change. So we jumped in the car and drove around an industrial area where lots of trucks pass by. Most people are not too happy getting caught on camera, so instead of filming a truck passing by, we drove past a parked truck ourselves, which gives exactly the same perspective change. After some roto work and color grading, the truck fitted right in.

A still of the truck while driving past it. Just using a photo of a truck would never have the right perspective change while driving by.

The truck moves so fast that it was not even necessary to animate the wheels properly to make the effect convincing. Luckily for us, the weather was the same as on the day we shot the electricity booths; the cloudy sky provides soft shadows in both images.

What's next?

Jef-Aram and I are already working on our next short, so make sure to stay tuned. You can do this by following us on Twitter or by subscribing to our YouTube channel.


Saturday, June 09, 2012

VFX Back to Basics Series: 8. What is compositing?



This is part of a series on the basic elements of Visual Effects. Each post will talk about a certain element which is one of the basic bricks used for building VFX shots.

In this eighth and final post in the Back to Basics Series I will talk about compositing.

A CGI image split through the middle, showing the color channels in the top part and the alpha channel in the bottom part. Alpha channels are the key to compositing.

Where all the previous posts talked about generating elements, this one talks about combining those elements into a final image. For people who don't know the word compositing, I always compare it to "Photoshop with moving images": we layer up different images and combine them into one, with the illusion that everything in the final image was filmed at the same time.

Back in the old days

Although digital compositing is a very powerful tool in filmmaking, compositing itself started out as an analog process using the optical printer. The first optical printers were created in the 1920s and were improved until the 1980s. At first the effects were quite simple, like a fade in or fade out, but they became increasingly complex with the addition of things like matte paintings and blue screen effects. All these elements were combined by exposing the film several times.

This also explains the necessity of mattes. If the film were exposed twice, it would create a double exposure, unless the area where the second element goes is masked out. For blue or green screen shots, these mattes were made with high-contrast film and a color filter to separate out the background color. In some other cases mattes were painted by hand, which could result in chattering edges.

Films like 2001: A Space Odyssey and Star Wars had a staggering amount of success with this optical workflow.

The digital era

At the end of the 1980s digital compositing started to take over. It has two major improvements over its optical counterpart:

  • In the optical process, layers such as mattes have to be created by copying from one film strip to another. Each copy degrades the image and adds extra noise to the result. In the digital realm, a copy is 100% identical to the original (unless you resample or recompress, but that is usually not done).
  • Film going through a machine can drift a bit, which could result in chattering mattes or halos. Done correctly, digital compositing never suffers from this.

When we need to composite for film, a digital intermediate is created by scanning the film. Footage shot on digital cameras is already digital and does not need scanning.

Popular compositing packages today are Nuke, Digital Fusion and After Effects. The first two are node-based packages, while After Effects is layer-based and works in a similar way to Photoshop. Node-based systems are represented by a flow chart where every node applies an operator to the tree. Live action and CGI compositing benefit more from this method, while motion graphics are usually done in layer-based packages.

A Nuke node-based network.

Layer-based compositing in After Effects.

Color and Alpha channels

Let's have a look at how compositing works in the digital realm. A digital image consists of pixels, each represented by a combination of the three primary colors red, green and blue. This combination can create a broad range of the colors in the visible spectrum. Each color is stored in a channel.

In order to combine different images, a fourth channel is needed: the alpha or matte channel. It contains the transparency information of each pixel, which dictates how the result looks when one image is put atop another. It works just like its analog counterpart.

There are two ways to store an image with an alpha channel: premultiplied, where the color values have already been multiplied by the alpha value, and unpremultiplied, where the color values keep their original values. This is very important to understand, as failing to grasp the concept can make composites look horrible, with weird matte lines as a result. Check what your software package expects you to work with.

CGI images are usually premultiplied automatically when rendered.
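
For a single pixel, the classic "over" operation boils down to a couple of lines. Here is a Python sketch of the standard Porter-Duff formula (my own illustration, not any particular package's API):

```python
def premultiply(rgb, alpha):
    """Multiply the color values by the alpha value."""
    return tuple(c * alpha for c in rgb)

def over(fg_rgb, fg_alpha, bg_rgb):
    """Porter-Duff 'over'; the foreground must be premultiplied."""
    return tuple(f + b * (1.0 - fg_alpha) for f, b in zip(fg_rgb, bg_rgb))

# A 50% transparent red pixel over a blue background:
fg = premultiply((1.0, 0.0, 0.0), 0.5)   # (0.5, 0.0, 0.0)
print(over(fg, 0.5, (0.0, 0.0, 1.0)))    # (0.5, 0.0, 0.5)
```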

A magnified CGI image. Look at the anti-aliased edges. This image is premultiplied.

The corresponding alpha channel of the image above, with the same anti-aliased edges.

When unpremultiplied the edges become aliased.

The Extras

Compositing packages do not only put one image over another; they have a vast set of tools available:

  • Rotoscope tools: With these it is possible to create mattes to isolate parts of the image. They also include operations to modify edges or to paint over existing imagery.
  • Retiming tools: All kinds of operators to manipulate the speed of a sequence.
  • Color correction tools: Operators to analyze and change the colors of the image. Very useful for matching foreground and background elements.
  • Filters: All kinds of filters like blur, motion blur, noise, denoise and many more.
  • Keying tools: Useful for matte extraction from images with blue or green screens, but other methods, like luma keying, are also available.
  • Layering tools: From simple operators that put one image over another to operators for premultiplication.
  • Transformation tools: Operators such as translate, rotate, scale, resize and distortions.
  • Tracking tools: Tools to track objects in a scene or to stabilize a shot.

Although compositing is a 2D process, most packages nowadays support 3D environments. This adds extra functionality: it makes proper parallax possible, allows projections, and makes the integration of particle systems really useful.

Another rather recent development is stereo conversion tools, which allow regular images to be converted to stereographic images. Note that I use the term stereographic instead of the more common and commercial term 3D. Stereographic images are not truly 3D, as the audience cannot look behind objects by moving their head. I therefore prefer the term stereographic.


This concludes the eighth part of the VFX Back to Basics series.
Make sure to subscribe (at the top) or follow me on Twitter (check the link on the right) if you want to stay informed on the release of new posts.

6. What is lighting and rendering?
7. What is matte painting?