
Tuesday, April 19, 2016

Tips and tricks to avoid common Pitfalls when Shooting Stop Motion Video



Having young kids, I got interested in stop motion animation so I could make some short entertaining videos with their own toys, in particular their Lego Duplo sets. I already had most of the equipment and only needed to invest in some white paper and cardboard for the background. So €10 later I could set up a mini studio. The results can be seen on my Legobrick Parade YouTube channel.

I did encounter some pitfalls which can really ruin the quality of the video. In this article I will show you what problems I encountered and how I solved them.


Lighting the Scene


When lighting the scene I wanted soft lighting which didn't produce hard shadows. Hard shadows give too much contrast and are usually not very nice to look at. As the key light I used a 1K tungsten spot with a softbox. This diffuses the light enough to get rid of harsh shadows. As a fill light I used a 650 watt spot. I didn't have a second softbox to diffuse it, so I put a polystyrene panel above the scene and pointed the spot towards it. The light bounces back down and gives a soft fill which removes even more shadows. The white background paper also helps bounce the light a bit to make it softer.



Basic scene lighting for my stop motion project.


Only the spotlights illuminate the scene.


Stop motion and the dreaded flicker


One of the difficulties of shooting stop motion is keeping the lighting consistent to avoid differences between frames. Even small differences cause a nasty flicker once you edit your clip together. It seems simple enough, but there are many different causes of flicker which you all need to address to get clean footage.

1. Shoot in an environment where you have full control over the light. This seems trivial, but shooting on a sunny day with clouds will give you a different lighting situation every couple of minutes. On video this is usually not that big of an issue since you shoot continuously at 25 frames per second or so. In stop motion there might be a pause of a couple of seconds or longer while you change the scene before you take the next frame. This is a guaranteed way to get flicker. Preferably shoot in a room where no external light sources influence the lighting conditions. You can see in the pictures above how I have set up my mini studio in my basement.

2. Don't use fluorescent lights as they flicker at a certain frequency. Tungsten or LED is your best option. If you use them anyway, the next tip might help cancel out some of the flicker.

3. Use a slow shutter speed on your camera. At 1/200th of a second, for example, you have a bigger chance of getting flicker, especially when using fluorescent lights. They pulse at a relatively low frequency, so you might take the picture right between two light pulses and therefore capture less light than in the previous frame. A slow shutter averages out those pulses; 1/5th of a second works for me.

4. Use an open diaphragm if possible. The aperture gives you control over your exposure but also over the depth of field, so experiment with what works for you. I still had enough sharpness at f/5.6 on my Nikon D7000, which has an APS-C sensor. Most modern DSLR lenses have their diaphragm controlled by the camera. The camera keeps the diaphragm fully open in its default state and only stops it down to your chosen aperture when taking the picture. This means the diaphragm changes state with every shot, and the camera might have difficulty returning to exactly the same state as the frame before. Even the smallest difference in aperture between two frames will give you a difference in lighting. If you leave the diaphragm open, the change of state is smaller than when you close it almost completely, so less error will occur. A small difference on an open diaphragm also has a smaller relative effect than on an almost closed one. If you use old school manual lenses this problem is nonexistent, as there is no change of state when taking the picture. If too much light comes into the camera due to a large aperture, you can use an ND filter to cut it down.

5. Wear dark or even black clothing while shooting. White clothing might reflect indirect light into the room and therefore change your lighting conditions as you move around. You could try to stand in the exact same position for every picture, but that is usually very impractical. Black clothing doesn't reflect light and will have no impact on your lighting conditions.

6. Don't use the automatic white balance function. Yes, it is easy when shooting pictures, but for stop motion it is another potential source of flicker. Instead, fix the value depending on which lights you use. Tungsten might need a different value than LED. Test it and then keep it the same for the whole shoot.
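The reasoning behind the slow shutter speed in tip 3 is easy to verify with a bit of arithmetic. Here is a quick Python sketch; the function name and the 50 Hz mains assumption are my own illustration, so adapt it to your local grid:

```python
def pulses_captured(shutter_s, mains_hz=50):
    """Fluorescent tubes pulse at twice the mains frequency.
    A long exposure averages many pulses; a short one may fall
    into an unlucky gap between two pulses."""
    pulse_hz = 2 * mains_hz          # 100 pulses per second on 50 Hz mains
    return shutter_s * pulse_hz

print(pulses_captured(1 / 200))  # 0.5 pulses: the exposure can miss a pulse
print(pulses_captured(1 / 5))    # 20 pulses: the flicker averages out
```

At 1/200th of a second the exposure can fall almost entirely between two pulses, while at 1/5th of a second every frame averages the same twenty pulses and stays consistent.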

Which frame rate should I use?


When shooting stop motion you have to keep in mind which frame rate you want to use while shooting. There are a couple of options.
  • 24 frames per second: This is the ideal world. It looks nice and matches regular feature film. The drawback is that you need to set up 24 positions for every second of video you want to make, which can be time consuming.
  • 24/2 frames per second: You take only 12 positions per second but use every frame twice. This is usually more achievable and gives decent results.
  • 30/2 frames per second: Here you take 15 positions per second and use every frame twice. This is closer to the American NTSC standard and sits between the two options above concerning the time spent recording.
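To get a feel for the workload of each option, here is a little Python sketch that counts the positions you need to set up; the function name is my own illustration:

```python
def positions_needed(seconds, fps, hold=1):
    """How many puppet positions you must set up for a clip:
    each position is photographed `hold` times (shooting "on twos" = hold 2)."""
    return int(seconds * fps / hold)

for label, fps, hold in [("24 fps on ones", 24, 1),
                         ("24 fps on twos", 24, 2),
                         ("30 fps on twos", 30, 2)]:
    print(label, "->", positions_needed(10, fps, hold), "positions for 10 s")
```

For a 10 second clip that is 240 positions at full 24 fps, but only 120 when doubling frames, which explains why shooting on twos is so popular.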

Keeping the camera in the same spot


Most of the time you will be doing locked off shots (shots without any camera movement). For stop motion you need a good sturdy tripod that keeps your camera in the same spot. But even then you can get slightly shifted frames, and they can be quite distracting. This is usually caused by pressing the button on your camera: by exerting a bit of force on the camera you can move it ever so slightly. The solution is to use a remote. I use a wired one, but wireless remotes are even better as you don't have to touch the camera at all. Also make sure not to bump into the tripod. It will move a tiny bit and that is very visible once you convert your project to video. It happened to me once and I had to start all over again. Luckily I was only a hundred frames in or so.


What if you want a pan or a travel in your shot?


Although you could do it manually, it might be better to use a motion controlled slider. I have no hands-on experience with this, but systems like Edelkrone's Sliderplus with the action and target modules can generate great repeatable results.

Conclusion


Making stop motion videos can be fun, but attention to detail is important to create good results. I have made two videos so far, and each took me about an evening to shoot.




Saturday, August 18, 2012

RenderMan Basics




I recently put my old graduation animation online (you can watch it on YouTube: A Plug's Life). It was made back in 2001 and was my first big project working with Maya and Pixar's RenderMan.

The other day I was asked in the video comments if I could write some tutorials on the use of RenderMan. That sounds like a great idea, but before I come up with hands-on tutorials it is important to learn something about the RenderMan architecture.

I often hear that RenderMan is not suitable for small studios as it is too complex, and that it is for tech heads and not artists. I beg to differ. RenderMan is a very efficient render engine, and small studios which do not have a lot of render capacity can really benefit from the lower render times. The shading tools are quite extensive and can give superb results without the need for any programming.


What is RenderMan?

First I'd like to define RenderMan. RenderMan is actually an API (application programming interface) and not a render engine. For a long time Pixar was the only one with a RenderMan compliant renderer (as they invented the standard), called PhotoRealistic RenderMan, or PRMan for short. People quickly started to call it RenderMan though, and the name has stuck ever since. Today more commercial RenderMan compliant render engines are available, like 3Delight.

Since Pixar's RenderMan is the industry standard (they say so themselves and honestly it is true), I will use their software to explain my examples.


RIB or RenderMan Interface Bytestream

Since PRMan is a renderer and Maya an animation package, there is a need for a common language between the two. The API mentioned before is this language. Have a look at the following schematic.

The scene translator converts Maya data into a RIB file which the render engine understands.

The scene information from Maya is translated into a RIB file. This RIB file contains everything from geometry and information on which shaders are used to render resolution and certain render settings like shading rate. Since a RIB file is written in the common RenderMan language every RenderMan compliant renderer can interpret it and render it.

The RIB file can be saved as an ASCII file; it looks a bit like a programming language and is actually quite readable. We often opened up the RIB file to see where things went wrong when the renderer didn't give us the expected results.
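To give you an idea of what such a file looks like, here is a Python sketch that emits a minimal hand-made RIB scene. The structure follows the RenderMan Interface conventions, but the file name and settings are my own illustration; a real translator like RenderMan for Maya generates far richer files:

```python
def minimal_rib(image="sphere.tiff", xres=640, yres=480):
    """Emit a hand-written RIB scene: one sphere with a standard surface shader."""
    return "\n".join([
        'Display "%s" "file" "rgba"' % image,
        "Format %d %d 1" % (xres, yres),
        'Projection "perspective" "fov" [40]',
        "WorldBegin",
        "  Translate 0 0 5     # move the sphere in front of the camera",
        '  Surface "matte"     # a standard RenderMan shader',
        "  Sphere 1 -1 1 360   # radius, zmin, zmax, sweep angle",
        "WorldEnd",
    ])

print(minimal_rib())
```

Any RenderMan compliant renderer should accept a file with this structure, which is exactly the portability the standard was designed for.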

In larger studios this RIB file is often modified to add extra elements before the final render is made.


RSL or RenderMan Shading Language

Shaders are render engine dependent. This means that when you go from one render engine to another you need to redo the shading. To tackle this problem between RenderMan compliant renderers, the standard provides a common shading language called RenderMan Shading Language. This is a simplified programming language to code shaders. These shaders are then compiled and used by the render engine.

Coding shaders is not what most artists want to do, but since RenderMan Studio has a visual tool to create shaders called Slim, artists don't have to feel left behind. Understanding how to code shaders can give you better insight into how shading works in CGI though.


RenderMan Studio

Pixar's RenderMan is available as a package called RenderMan Studio. It contains:
  • RenderMan for Maya (Pro)
  • Slim
  • it
  • Tractor
RenderMan for Maya is the core plug-in. It deals with the scene settings and takes care of translating the scene information into a RIB file. It comes with its own Maya menu and custom shelf.

Slim is the shading management tool. It is an externally running program but can be connected to your Maya scene. Custom shading networks can be generated visually as well as through coding in RSL.

"It" is the image tool. When rendering out your images you can do so to the Maya renderview but also to "it". "It" is much more flexible and even allows simple compositing through scripting. It also allows the use of Look Up Tables and displays actual pixel values, something the Maya renderview is lacking.

Tractor is the render farm tool which queues and manages your renders. Not only can you manage your RenderMan renders but also other jobs like Nuke composites which you want to be calculated on the render farm.

RenderMan Studio comes with an embedded render license so even small VFX studios can get started straight away.



Monday, August 06, 2012

CGI Workstations



Because I am a bit of a techie, I often get asked what kind of workstation people should buy for their post production and CGI work. In this article I look into what is useful and what is merely fluff. Since hardware specifications change regularly, I will try to be as general as possible so hopefully this article will still be valid in years to come.

The machine

Any computer can be used to do graphics on, but I am sure you will get frustrated quickly when things don't move along smoothly. Let's not kid ourselves though: machines get faster every year, but scene complexity goes up as well. In the end you always need a good machine to do CGI.

CPU or processor

Let's start with the heart of the workstation. The CPU will take care of most calculations. A fast processor is great to have but there are some things to consider.
  • MHz vs cores: Multicore processors have become quite common over the last couple of years. They are great for multitasking but also for programs which are multithreaded. Most CGI software is multithreaded and uses more than one core at a time. A high clock speed helps as well, since it executes each thread faster. Finding the balance is a bit tricky, but I always check these CPU benchmark reports. They give the combined performance of all cores in a processor. This way you can see whether it is better to go for the high MHz quad core processor or for the lower MHz hexa core processor.
  • Price vs performance: Now that you have an idea how each processor performs, you have to balance that against how much it costs. Most processor series have a sweet spot where you get the best performance for the price. It is usually a good idea to pick one which performs a couple of steps better than the sweet spot: yes, it is more expensive, but it is a bit more future proof. If you have unlimited funds you can pick the fastest one, but keep in mind that these are only 20% to 30% faster at more than triple the price.
I am a fan of a lot of cores since my main competence lies in lighting and rendering. It used to be that you had to buy a render license for each core, but luckily those days are over. Buying a 12 core machine over an 8 core one can be beneficial if your post production software is really expensive.

I also suggest server rated processors like the Intel Xeon and AMD Opteron series. They are a bit more expensive but have no trouble running 24/7.

Memory or RAM

Next up is memory. 3D, compositing and render software use tons of memory. The good part is that, compared to processors, memory is rather cheap. Don't skimp on it! I know it is easy to add more later, but getting enough memory from the start will help you a long way. It also allows you to have multiple programs open at the same time. I often have Maya and Nuke open while rendering a scene in the background with Mental Ray.

When you run out of memory the workstation will start swapping memory to disk. This really grinds things to a halt, as disks are dead slow in comparison to RAM. Once it starts swapping you will pull your hair out in frustration, and it can take literally minutes before your machine becomes responsive again.

So how much should you get, you ask? As a rule of thumb, take at least twice as much as the market puts in machines by default. For example, most good performing machines have 8 GB of RAM at the time of writing, so I suggest you put in 16 GB. This number will go up each year, so adjust accordingly.

Make sure to choose the right RAM for your motherboard. Some motherboards need the more expensive ECC memory.

Graphics Cards

Graphics cards are probably the most discussed items in a workstation. They are not only important for displaying your graphics on screen but also have computational capabilities which give a boost to the performance of the workstation.

  • Game card vs professional card: This is one of the big questions: should you get a cheaper, well performing game card or a slower, very expensive professional card? The reason there is so much discussion about it is that it is a difficult question to answer. Post production vendors usually claim their software is only certified to work with professional cards, yet in practice most game cards cope quite well. If you build a dedicated workstation and are willing to spend the money, it might be worth getting the professional card. If you are on a limited budget and also like to use your machine for games, go for good processors first and get a well performing game card. NVIDIA has a document which promotes the Quadro series over their GeForce series for professional work. Look it up and see what is important to you.
  • NVIDIA or ATI (from AMD): The race to make the fastest graphics card is ongoing. NVIDIA has the Quadro series and ATI has the FirePro series for professional graphics. In my opinion NVIDIA is the clear winner here. They developed the very popular CUDA platform, which allows you to run calculations on your graphics card which are usually done by the CPU. A lot of programs already take advantage of this, and even the game cards support the technology.
At the time of writing this article there is not much choice when you are using a Mac Pro as your workstation and want an NVIDIA card. Only the expensive Quadro 4000 is currently available. I hope this will change in the near future.

Motherboard

Since processors dictate what kind of technology you use, choosing a motherboard becomes slightly less important. Most of the time the hardware manufacturers won't even give you a real choice. If you go for server rated processors then you usually also get a server rated motherboard which is good enough.

Hard drives

Most computers have only one hard drive. If you work in a facility with a server then this one disk will be enough. It just needs to be big enough to store all your post production software. All created content will be stored on the server so multiple people can have easy access to it.

If you have a standalone workstation then it is a good idea to get two extra disks and put them into a RAID 0. This RAID disk will be used for all your data while your main disk will contain the operating system and the installed software. It will increase reading performance quite a bit. There is a caveat with this kind of setup though. Since data is divided over two disks the chance of a hard disk failure is doubled. Make sure to backup your data regularly, preferably onto an external disk (which can be a SAN).

Take server rated drives which run at 7200 rpm or faster. These are manufactured to run 24/7. Cheap green disks just don't have enough performance for this kind of work.

Sound card

Most motherboards have built in audio and if you are not making any music then this will do.

Mouse and keyboard

Just pick a keyboard you feel comfortable with, but do pay some attention to the mouse. Most mice are too light and are not comfortable to work with. Keep in mind that the mouse is used a lot while creating graphics, and getting carpal tunnel syndrome because of a bad mouse will kill your VFX career rather quickly. A heavier mouse works more accurately. I use game mice with a good grip to which I can add little weights. Most post production software makes heavy use of the middle mouse button. This is usually a scroll wheel, so make sure it feels comfortable enough to be used as a button too.

Tablet and pen

This is a bit of an investment, but I have had a tablet since 2000 and I can't do without it anymore. I don't use it for Maya, but it is incredibly handy when editing, compositing and, last but actually really important, when doing Photoshop paintwork. It has a completely different feel than a mouse and just works way faster for certain things.

Monitors

Depending on what you do you can get a cheap one (for modeling or animating) or an expensive one (for color critical work like painting or lighting). I suggest not skimping on the monitor. I only use 1920 by 1200 LCD panels nowadays. They have enough pixels to show all important screen assets and can handle full HD. If you do color critical work, make sure to get a monitor which can be calibrated. It will save you a lot of hassle later on.

I worked for production companies who didn't bother too much to get good monitors and the result was that everything we produced looked different on each monitor. You can imagine that it can become quite frustrating when a director sees your image on his screen and tells you it is too dark or too red while on your own screen it is too light and too green. If you work together with other people and can't afford multiple good monitors, you need to pick a reference monitor on which everyone will judge the color fidelity of the entire project. This way everyone sees the same image with the same color balance.

Other Peripherals

Feel free to add other peripherals that may ease your life, like a Blu-ray player or an old school floppy drive. Do check that they do not eat too many resources like CPU power and memory.

Conclusion

You might have noticed that building a dedicated workstation can cost quite a bit of money. Yes, it usually surpasses the price of a very expensive gaming rig. If you are a hobbyist it might be enough to use that gaming rig, but if you are a professional trying to make money out of VFX work then you'd better go for a professional workstation. A decent system will accelerate your workflow, and since time is money you can calculate for yourself how much you save over time by making the initial investment.

Wednesday, July 18, 2012

Render Farms



Projects range from small one person achievements to huge VFX productions where hundreds of artists contribute to the result. The thing they have in common is that they need computational power to finish certain steps in the process. Rendering is the first one that comes to mind but simulations and compositing take up their own share of CPU cycles.

It is possible to let your workstation chug away at those calculations and eventually it will get done, but more often than not a tight deadline does not give you this luxury.

The speed increase with a Render Farm

Render farms are all about speeding things up. Let's say you have only one workstation and you work 8 hours a day (OK, you have to be lucky to work only 8 hour days in VFX; usually you need to do more to reach the deadline). That means you have 16 hours left for rendering your sequence. Even at 10 minutes a frame, which is quite acceptable, that gives you 96 frames, which is only 4 seconds of animation (at 24 fps) per day. You could try to simplify the scene to speed things up, but that is not always possible.

The simplest form of a farm is to have a second machine next to your workstation which can do calculations while you continue working. This will more than double your capacity and will get you 10 seconds of animation done per day for the same scene. With only one render machine it is easy enough to manage the rendering of different scene files manually; no extra management software is needed at this point.

Let's move to a bigger setup with 5 artists in the shop, each with their own workstation. With the same 10 minutes a frame it means they can render 20 seconds of animation each day when working 8 hour days. You can see that the math behind it is simple enough to make predictions once you have an idea how long a frame will take. I must admit that it can be tricky to predict render times when the contents of the scene, and hence the render times per frame, change a lot. But of course, an indication is better than nothing.
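The math from the examples above fits in a few lines of Python; the function name is my own:

```python
def seconds_per_day(machines, hours_rendering, minutes_per_frame, fps=24):
    """Daily animation output: idle machine hours turned into frames."""
    frames = machines * hours_rendering * 60 / minutes_per_frame
    return frames / fps

print(seconds_per_day(1, 16, 10))  # one workstation, overnight: 4.0 s
print(seconds_per_day(5, 16, 10))  # five workstations: 20.0 s
```

Plug in your own average frame time and you get a rough but useful delivery estimate.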

It becomes obvious: the more machines you get, the more complicated the managing of the jobs will become. In these circumstances it is wise to introduce a render manager into the pipeline.

The render manager

The render manager is a piece of software which automates the distribution of jobs to your render farm. It consists of two parts: the server side program, which is the actual manager, and the client side program, which activates the right renderer for the job. The server side program usually runs on a dedicated machine, very often the same machine as the license server for your software. The client program is installed on each machine which can handle a render job.

Instead of launching the job manually on the client machines, it is submitted to the render manager. When a job is accepted, the render manager has several tasks to do. First it checks the availability of the clients. It then sends a chunk of the job to a client. The chunk size can be dictated by the user and can range from part of a frame to a full frame to even a group of frames. If your frames render fast it is usually better to group them; otherwise one frame per client will do fine.

The render manager will now divide all the chunks between the clients. When a client is finished it will report back to the manager and a new job will be given automatically.
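The chunking itself is simple to picture. Here is a minimal Python sketch of how a manager might split a frame range, not the actual code of any real render manager:

```python
def make_chunks(first, last, chunk_size):
    """Split a frame range into chunks the manager hands out to clients."""
    return [(f, min(f + chunk_size - 1, last))
            for f in range(first, last + 1, chunk_size)]

# Frames 1..100 in chunks of 8; the final chunk is simply shorter.
chunks = make_chunks(1, 100, 8)
print(len(chunks), chunks[0], chunks[-1])
```

As clients report back, the manager pops the next chunk off this list until the job is done.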

Another task of the render manager is queuing the jobs from the users. It allows several people to submit jobs without the fear that they will fail because the render farm is too busy. Render managers usually have the option to prioritize jobs according to preset rules.

A good render manager will tell you when things go wrong. When a render fails it usually generates an error message which is passed on to the manager. It will then show you which jobs didn't render properly.

What is needed for running a render farm

  • Computers: OK, that is pretty obvious. Fast CPUs and a lot of memory are preferred. You do not want them to start swapping memory to disk; it will grind the render to a halt. Big hard drives are not important, but do get server grade ones. Rack machines are the preferred choice when not using workstations. They are space efficient and are usually server grade, so they are made to run 24/7 at full capacity. Do not use the render manager server as a render box, so you need one machine dedicated to running the manager.
  • Operating system: A good rendering manager works cross platform. It doesn't matter if Maya was installed under Windows, MacOSX or Linux. Most render farms run Linux as you can install it without a GUI. This saves memory and makes the machine more efficient (oh, and Linux is free!). Some software packages do not have a Linux or Mac version and then the choice of OS becomes obvious.
  • A render manager: If you are serious about rendering you need a render manager. It will automate everything, speed things up and take a lot of worry out of your hands. It will also make your farm very scalable. There are many different ones out there and all come at different prices so it is hard to recommend any of them.
  • Software Licenses: That's right, software is usually not free and you need to buy the right amount of licenses. Most VFX packages have separate render licenses. They tend to be cheaper and are sometimes sold in bundles. For example, Maya comes with 5 render licenses when you buy a floating license. Take note that you can only use packages which have a command line render option. A render manager cannot start GUIs.
  • Network: A fast network is a must. The bigger your farm becomes, the more data you have to pull through those wires. Yes, wires. Running a farm on WiFi is a bad idea.
  • A storage server: Rendering images or baking out data can fill up disks pretty quickly. A good server with a RAID system for redundancy and continuity is ideal for storing data. You do not want to lose all that render time because a disk went bad. Remember: a RAID is not a backup system, it was designed to let you keep on working even when a disk dies on you. If you want to be sure that your render data is safe, copy it to a second server after the render job finishes.

Scalability

One of the things we notice in VFX is that render and simulation times never seem to go down. You would assume that faster processors would lower render times, but instead the render times stay the same while the complexity of the scenes goes up.

Lucky for us, render farms are to a point very scalable. Just add more machines and licenses when needed. It only becomes a problem when the server or the network cannot handle the traffic anymore. You can imagine that a farm at ILM or Pixar is a complex matter to maintain.

If you need a quick boost in render capacity but don't have the cash to expand the farm, you can always hire the services of an online render farm. You could say it is rendering in the cloud. It works very similarly to your local render farm, with the main difference that you need to upload your data to the cloud first. With a slow internet connection this may take a while, but once in the cloud the renders go really fast.

Friday, March 16, 2012

VFX Back to Basics Series: 4. What is 3D animation?


This is part of a series on the basic elements of Visual Effects. Each post will talk about a certain element which is one of the basic bricks used for building VFX shots.

In this fourth post I will talk about 3D animation. Although it is difficult to demonstrate this by still images, I will use them to show certain principles.

A simple humanoid skeleton.

In one of the previous posts I talked about 3D modeling. Although it is not always necessary to animate models, they do get a lot more exciting when you do. Especially creatures and humans are calling out to be animated. Models can be animated by hand by an animator, or procedurally by computer simulations and automatic processes.

Let's start with hand animated non deformable objects. To conveniently hand animate models it is necessary to build an animation rig; in short, this process is called rigging. It enables the animator to take control over the movements of the object without having to worry about every individual vertex or polygon. Of course, when moving an object from point A to point B one hardly needs a rig; just using the built-in transforms will do the trick. But when the model has several parts, like a car with turning wheels and doors which can be opened and closed, just using the built-in transforms will give the animator a hard time.

If we look closer at the example of a door, we can discover that a real door has limits. It is impossible to open the door further than the hinge allows, and a closed door fits firmly into the door frame. A door in a 3D model will rotate in any direction and just penetrate the geometry around it. It cannot easily detect solid matter unless there is some collision detection going on. Collision detection is a simulation technique and is too complicated for something as simple as a door. Instead we can use a mini rig which limits the movement of the door and which shows only one handle, so the animator immediately sees what can be animated.

A simple door with no rig. An animator could rotate it in any direction which is confusing and prone to errors.
A simple door with a simple rig. The door can now only rotate on its hinges and is limited by the closed position and open position. The circular handle makes it easy for the animator to animate and keyframe. 

Let's take this a step further and look at a human character. When a human moves there are hundreds of muscles contracting and relaxing. It is nearly impossible, and usually not necessary, to build every single muscle into a rig, so it can be simplified a lot. There is the extra concern that the body changes shape when making movements, so the rig will be a bit more complicated. What defines the movement and the limits of a human is its skeleton, and therefore most programs, like Maya or 3ds Max, have skeleton tools built in.

A skeleton in a 3D program is built out of a hierarchy of joints. There are two important principles when rigging up a skeleton for animation: forward kinematics and inverse kinematics.
With forward kinematics each joint in the skeleton drives the joints further down the hierarchy. For example, when the torso rotates, the shoulders and arms will follow that rotation.
With inverse kinematics we instead control a child joint, and the movement of the joints between the child and the root is automatically calculated by the computer. For example, hands are usually animated this way to save time: with the hand as the child joint and the shoulder as the root, the elbow joint will automatically follow the movement of the hand. Inverse kinematics require more setup, as you also need to define the limits, but save time during the animation process.

A short chain of joints which form up a skeleton.

An example of forward kinematics. The rotation of joint 1 will influence the joints down the hierarchy. The rotation of joint 2 will not influence joint 1 or the root.
An example of inverse kinematics. Moving the outer most joint will also control the rotation of the joints in between the outer most joint and the root joint.
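The forward kinematics principle can be sketched in a few lines of Python: each joint adds its rotation to the accumulated rotation of its parents. This is a 2D toy example of the idea, not how Maya implements it:

```python
import math

def forward_kinematics(bone_lengths, joint_angles_deg):
    """Walk down the joint hierarchy: each joint's rotation is added to
    the accumulated rotation of its parents (forward kinematics)."""
    x, y, total_angle = 0.0, 0.0, 0.0
    positions = [(x, y)]
    for length, angle in zip(bone_lengths, joint_angles_deg):
        total_angle += math.radians(angle)   # child inherits parent rotation
        x += length * math.cos(total_angle)
        y += length * math.sin(total_angle)
        positions.append((x, y))
    return positions

# Two bones of length 1: rotate the root 90 degrees and the whole chain follows.
print(forward_kinematics([1.0, 1.0], [90.0, 0.0])[-1])
```

Inverse kinematics is the reverse problem, solving for these angles given a desired end position, which is why it needs more setup and a numerical solver.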

Another issue when animating deformable objects like humans is that the skin has to follow the skeleton and must be able to stretch and deform when muscles bulge. We solve this with a task called skinning. It connects the modeled mesh to the skeleton and defines how much the skeleton influences each part of that mesh. For example, when moving a shoulder, the mesh in the shoulder area will deform, maybe the neck will deform a bit too, but the legs aren't influenced at all.
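The blending idea behind skinning can be illustrated with a toy example. Below is a Python sketch with translation-only joints; real skinning blends full joint matrices, and the names are my own:

```python
def skin_vertex(vertex, joint_offsets, weights):
    """Linear blend skinning, simplified to translation-only joints:
    the skinned position is the weight-blended sum of each joint's
    transform applied to the vertex."""
    assert abs(sum(weights) - 1.0) < 1e-9, "skin weights must sum to 1"
    x = sum(w * (vertex[0] + ox) for (ox, oy), w in zip(joint_offsets, weights))
    y = sum(w * (vertex[1] + oy) for (ox, oy), w in zip(joint_offsets, weights))
    return (x, y)

# The shoulder joint moves up by 2; a vertex near the neck has only a
# 10% shoulder weight, so it follows the shoulder by 10%.
print(skin_vertex((0.0, 0.0), [(0.0, 2.0), (0.0, 0.0)], [0.1, 0.9]))
```

Painting these weights per vertex is exactly what the skinning tools in a 3D package let you do.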

Once a rig has been set up it still has to be animated, of course. The computer helps us out even when we are hand animating a character or object. The animator sets up certain key poses on the timeline and the computer calculates the in-between poses over time. This process is called keyframing and is extremely powerful, as it makes animation smooth with minimal effort. You could compare it with classic drawn animation, where the lead animator would set up the key poses and the junior animators would draw the in-between frames, but in this case the computer takes the role of the junior animator. In the example of the door one would only have to key the position of the closed door on frame 1 and the open door on frame 20. When scrolling through the timeline from frame 1 to frame 20, the door will gradually open and has a different position on each frame.
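The door example translates directly into code. Here is a minimal Python sketch of linear keyframe interpolation; real packages also offer fancier interpolation curves such as splines:

```python
def interpolate(keyframes, frame):
    """Linear in-betweening: the computer fills the frames between key poses."""
    keys = sorted(keyframes.items())
    # clamp outside the keyed range
    if frame <= keys[0][0]:
        return keys[0][1]
    if frame >= keys[-1][0]:
        return keys[-1][1]
    for (f0, v0), (f1, v1) in zip(keys, keys[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)
            return v0 + t * (v1 - v0)

door = {1: 0.0, 20: 90.0}     # closed at frame 1, fully open at frame 20
print(interpolate(door, 10))  # roughly halfway open
```

With just two keys the computer produces a unique pose for every frame in between, which is the whole point of keyframing.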

Just as with classic drawn animation, many styles are possible. For example, you can go for a very cartoony style with a lot of squashing, stretching and exaggerated movements; this can require more complicated rigs. There is also the physically correct style where outside influences like gravity are important. Especially with this last style the computer can lend us a hand.
Motion capture is such a tool. Instead of hand animating the character, an actor gives a performance, and his movements are recorded to disk and transferred to a mesh. This can give highly realistic results, but keep in mind that an animator usually has to tweak them. Creatures like Gollum from The Lord of the Rings are animated this way.

Another type of animation is the simulation. Breaking objects can be almost impossible to animate by hand when those objects consist of thousands of parts. Luckily there are tools to simulate these kinds of events. The computer will calculate the interactions between the pieces, like collision detection and the forces of gravity, and will move each object accordingly. Animating fluids and gases is an even more specialized job and is usually done not by animators but by FX technical directors.

This concludes the fourth part of the VFX Back to Basics series.
Make sure to subscribe (at the top) or follow me on Twitter (check the link on the right) if you want to stay informed on the release of new posts.

In this series: