Wednesday, November 06, 2013

Review: DJI Phantom Quadcopter

I have always been a fan of remote-controlled devices, so when I came across the DJI Phantom I was immediately sold. This reasonably priced quadcopter can carry a GoPro camera and has built-in GPS to make sure your camera returns safely to you. I already own a GoPro Hero 2, and in combination with the Phantom you can shoot really cool footage.

Assembled Phantom and the rest of the box content.

Features of the Phantom

The Phantom is stable and easy to fly. It will simply hover when you release the controls, unlike RC helicopters, which need constant adjustment during flight to stay in the same spot. That makes it ideal for taking pictures and filming. Thanks to the GPS it will even counter any wind and won't drift off. More experienced pilots can disable this feature and have only the altitude regulated, or disable all correction features altogether and fly fully manually.

The Phantom has great orientation lights. The red and green LEDs give you a clear sense of direction, even when flying high up. They also make it possible to fly at night.

To avoid disaster, the Phantom has some nice failsafes built in. It returns to your lift-off location if the connection between your controller and the Phantom fails (provided you have good GPS reception). It will also go into landing mode when the batteries are nearly depleted, instead of just dropping out of the sky once the voltage gets too low.

A full charge will give you between 10 and 15 minutes of flight depending on the load it is carrying. The maximum speed is about 10 m/s.

What's in the box (or at least in mine)?

  • A quick start manual V1.0
  • The Phantom quadcopter
  • Two legs of landing gear with a built-in compass
  • A white remote control
  • 4 sets of propellers
  • 2 sets of decals
  • A special mount for the GoPro camera
  • 1 LiPo battery
  • A balance charger with a set of universal plugs
  • Screws to mount the landing gear
  • A small wrench and 4 nuts to screw on the propellers
  • A USB extension cable
  • A spare USB converter (USB to Micro USB)
The guys from DJI promise that assembling the Phantom and getting it ready to fly takes only minimal effort.

Getting the Phantom ready to make its first flight.

The first thing you need to do is read the manual. Despite the promise of getting the Phantom quickly into the air, this is by no means a simple device, and it can become a potential hazard. Fast-rotating propellers, even plastic ones, are dangerous. I don't recommend letting kids fly this thing.

A good source of information next to the quick start guide is the DJI website. You can find extra manuals, videos and the software to calibrate the compass and IMU. The software is unfortunately Windows only.

Before you do the calibration and assembly it is wise to start charging the battery. The charging cycle takes about one to two hours, so it is a good idea to get that underway while you prepare the rest of the Phantom.

Calibration of the IMU

The heart of the Phantom is a NAZA-M + GPS multi-rotor autopilot system. Part of that system is the IMU, which interprets the built-in gyroscopes, the accelerometers and the external compass. It is important to calibrate this system, because a faulty interpretation of the flight data might crash your Phantom.

You need a charged battery to start the calibration process. Always turn on the remote control first and then connect the battery to the Phantom.

The calibration is done by connecting the Phantom to the calibration software with the provided USB cable. At this stage it is also wise to upgrade to the latest firmware and to start out with the standard calibration file, which can also be downloaded from the DJI site.

You can also test the response of the remote control and recalibrate it if necessary. When all the updates and calibrations are done it is time to assemble the rest of the parts.

Phantom assembly

There are two plastic legs which need to be mounted on the body with four screws each to form the landing gear. One of the legs has the compass module on it, and it needs to be connected with the 5-pin cable so data can be passed on to the IMU. The manual states that this step can be done after IMU calibration, but the video shows it is better to screw on the legs and connect the compass before the calibration.

At this stage you can also screw on the connector plate for the GoPro camera if you're planning to film during flights.

The last bits to be assembled are the propellers. There are two different types, so it is very important to mount the right propeller on the right motor. There are arrow marks on the Phantom and the propellers to make sure you screw on the right one. Make sure they are well tightened. Losing a propeller in mid-flight is a fast but fatal way to land the Phantom.

Calibration of the compass and the first flight

The last step is to calibrate the compass. It is best to do this outside, away from large pieces of metal or magnetic sources. It is one of the steps I neglected a bit, and my Phantom wouldn't fly because of it. Remember to turn on the remote control first and only then connect the battery to the Phantom; the other way around will give you an error message.

It is possible that the compass is so out of whack that it can't be calibrated with the regular procedure. Through one of the videos I learned that you can straighten it out with a magnet.

Once this is all done you are ready for your first flight.

My personal experience

I am not great at reading manuals. I want to dive in immediately, which is actually not a very good idea with a machine as complex as the Phantom. As stated above, read the manual first. I skipped some sections and later on wondered why the Phantom didn't want to fly.

There was also a bit of confusion concerning the box content. It didn't match the manuals and the videos I have seen online. I did get 2 sets of spare rotor blades instead of 1, but you won't see me complaining about that.

My remote control is also different. It has an extra lever on the back, and the throttle control is automatically centered. This was really confusing, as one of the important steps for lift-off is setting the throttle stick all the way down, which is impossible when a spring centers it. I even opened up the controller to see what was wrong with it. It was only after searching several online forums that I figured out I had the new controller type, which they only started shipping recently. The extra lever on the back can be used to tilt an optional gimbal.

The last problem I had was the connection between the remote control and the Phantom. Although the Phantom itself indicated that there was a connection to the remote control, the software was unable to detect it. The remote control is supposed to be of the PPM type, and this is also the default value in the standard calibration file. It was only after changing to D-Bus that it started to work. Again, the forums helped me figure out that this updated controller behaves a bit differently. The documentation wasn't updated though, which led to this long search.

It all went smoothly once I had it in the air. Well, except for landing, that is. My first landings were a bit rough and I did cut some grass with the propellers. Once I figured out how to do it, I had no issues whatsoever.

Always check the weather forecasts for the wind speeds. Once it gets too windy, the Phantom really has to fight to stay in the same spot, becomes highly unstable and might crash. Since its maximum speed is 10 m/s, I tend not to fly when the wind is blowing faster than 8 m/s.
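
That rule of thumb is easy to sketch as a pre-flight check. This is just my own reasoning in code form: the 10 m/s figure comes from the spec sheet, but the 2 m/s safety margin is an assumption of mine, not a DJI recommendation.

```python
# Rough pre-flight wind check. MAX_SPEED is the Phantom's quoted
# top speed; the margin is whatever speed you want left over for
# actually making headway against the wind.
MAX_SPEED = 10.0  # m/s, from the spec sheet

def safe_to_fly(wind_speed, margin=2.0):
    """Return True if the forecast wind leaves enough speed margin."""
    return MAX_SPEED - wind_speed >= margin

print(safe_to_fly(8.0))  # borderline: exactly at the assumed 2 m/s margin
print(safe_to_fly(9.0))
```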

As you can see, it wasn't a complete walk in the park to get it into the air. Luckily the Phantom is very popular and there are many forums on which you can find a ton of information or ask questions to other Phantom owners. Overall I am very happy with the Phantom. I got mine for 419€.

Flying the Phantom with the GoPro Hero2

Using the GoPro Hero 2 to take pictures. I did some basic grading and removed the lens distortion.

The Phantom is designed with the GoPro Hero3 in mind. That said, it also works with previous models like the Hero2. When checking other people's footage I noticed a consensus among Phantom and GoPro owners that the footage suffered from what they call the jello effect. This is caused by micro-vibrations that get transferred from the Phantom to the camera during flight. It makes the footage rather unusable for anything professional.

There are a couple of things you can do to avoid this effect.

You can do it the expensive way by getting a gimbal. This is by far the best option and gives extra benefits like camera tilt control. The gimbal is more expensive than the Phantom so it wasn't an option for me (for the time being at least).

Another option is to put some foam or rubber between the Phantom and the connection piece for the camera. This absorbs some of the micro-vibrations. It worked for me and reduced them to an acceptable level, although not completely.

A last trick is to balance the propellers by sanding off excess weight from one side. There are some handy balancing gizmos on the market which make it easy enough to get right. So far I haven't tried this option myself, as I was making rough landings which weren't great for the propellers. Every time you damage your propellers a bit they might become unbalanced, and you have to start all over again.

Some last tips to get better footage and photos.

It is good practice to shoot at 50 (or 60) fps. The footage will look more stable when you slow it down to 25 (or 30) fps in the edit.
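
The arithmetic behind that conform is simple: every recorded frame gets shown on the slower timeline, so the clip plays at half speed and lasts twice as long. A tiny sketch (the function name is just illustrative):

```python
# Conforming footage shot at a high frame rate to a slower timeline:
# all recorded frames are played back, so the clip stretches by the
# ratio of the two frame rates.
def conformed_duration(recorded_seconds, shoot_fps, timeline_fps):
    """Duration of the clip once interpreted at the timeline frame rate."""
    return recorded_seconds * shoot_fps / timeline_fps

print(conformed_duration(10, 50, 25))  # a 10 s take becomes 20 s of slow motion
```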

It is better to use the camera housing which ships with your GoPro camera. It protects the camera from impact should you crash your Phantom.

Another trick to stabilize your footage is to use After Effects' Warp Stabilizer. You will lose some resolution but it works like a charm.

The GoPro has quite a bit of lens distortion. This is something you want to get rid of when shooting still images. Camera Raw, which is accessible through Adobe Bridge, has built-in lens profiles for the Hero3. Using the Hero3 Silver setting also works for the Hero2 camera. This eliminates the lens distortion and is great when shooting panoramas or buildings.


Friday, June 14, 2013

Does the new Mac Pro 2013 have the potential to be a good CGI workstation?

Apple sneak previews the new Mac Pro on the first day of WWDC 2013

On the 10th of June Phil Schiller, the senior vice president of worldwide marketing at Apple Inc., presented the new Mac Pro. This new machine looks nothing like the previous model. In fact, the whole thing is redesigned from the bottom up.

The internals of the new Mac Pro. You can see two memory slots at either side for a total of four.

What's under the cylindrical hood?

The new cylindrical design has already been mocked by the internet community as a trash can, a jet engine or a Darth Vader inspired machine. I agree that it is a strange design, but I can see the benefits for keeping the machine cool. All components are cooled by a central cooling element and one large fan, which should keep the noise levels down.

It will have up to 12 new Intel Haswell Xeon cores, dual AMD FirePro graphics cards, 1866MHz DDR3 memory with a bandwidth of up to 60 GB/s, PCI Express flash storage, Thunderbolt 2, HDMI 1.4, USB 3 and gigabit ethernet. Even the latest wireless technology is incorporated.

So, is it any good as a graphics workstation?

The new Xeon cores will definitely provide plenty of power for local render jobs, and the fast memory and flash drive will make 3D software like Maya run quickly. In the pictures released on the Apple website you can see that there are only four memory slots. Compared to the current Mac Pro that seems rather few, although it all depends on how much memory you can fit in each slot, of course.

"So what about the graphics card?" you might ask. The dual professional AMD GPUs are not a bad choice, but it is not certain whether they will be upgradable once the machine leaves the factory. Graphics cards evolve much faster than the rest of a machine; the GPU usually gets replaced halfway through the lifetime of the machine itself. There is also the question whether NVIDIA GPUs will become available, as a lot of graphics software works with CUDA acceleration. Apple is gambling a bit that software companies will develop more OpenCL applications in the future. That said, Mari, a great painting tool for 3D artists, will become available for Mac later this year.

One of the things people believe to be the biggest problem is expandability. There are no expansion slots anymore and there is no space for extra hard drives. Apple believes that all expansion should go through the Thunderbolt 2 interface. It does show potential though. With 6 ports you can add up to 36 devices (6 daisy-chained per port), which I am sure is more than enough. And Thunderbolt 2 I/O is twice as fast as the current version, going up to 20 Gb/s.

I honestly think that hard drive expansion, although external, will not be such a big issue. There are already plenty of Thunderbolt solutions out there. Besides that, most post-production facilities have network-attached storage. Yes, four internal drives might be neater and mean less cable clutter, but I don't think performance will suffer too much.

A potentially bigger problem might be replacing solutions that needed a PCIe expansion slot. Some facilities have invested a lot of money in peripheral hardware which can't be used with the new Mac Pro anymore. There are PCIe expansion racks available, but this adds to the cost. And, although I am not 100% sure this is true, Thunderbolt 2 might not have enough bandwidth for adding an extra GPU in an external rack. Then again, you get 2 professional GPUs fitted when you buy a new Mac Pro, which already provide a lot of GPU power.

Am I getting one once it is available?

So am I getting one when it becomes available? Well, that will all depend on the price. The machine looks great on paper and will definitely outperform my 2008 Mac Pro, but if the entry model costs more than 3000 euros it will become hard to justify the purchase. The 2008 Mac Pro was one of the cheapest Mac Pros on the market compared to its competitors. If Apple can repeat this, I will be one of the first to get one.

Anyway, I am looking forward to this machine hitting the shelves.

Sunday, February 24, 2013

IBL and Environment Maps for CGI

Image Based Lighting, also known as IBL, has been around for a while now and is a great solution for lighting and integrating your VFX scene into shot footage. In this article I would like to show you how I create the photograph needed for this technique.

Low dynamic range example of an equirectangular image, also known as a lat/long environment map. Keep reading if you would like to know how I make this.

The basics

It is perfectly doable to light your scene with CG lights. I have created hundreds of realistically lit shots this way, but it can become very time consuming when you want to get the fine details completely right. Thanks to IBL we can gain some time and spend it on other parts of the project. The idea is that a photograph of the place where you shot the footage contains all the needed light information and can be used by your graphics software to simulate the lighting.

There are two problems we have to overcome to be able to use this technique.
  • We need the light information of the whole scene. One photograph won't give us that unless you have a very expensive 360-degree camera. This means we need to take multiple photographs and stitch them together until we have a complete view of our scene. Each photograph needs some overlap with the previous one, and the field of view needs to be big enough; otherwise we have to take too many photos, which becomes a nightmare to organise, let alone all the time spent taking those pictures.
  • We need to capture all light information, including the little light there is in the shadows as well as the super bright highlights of the sun. The dynamic range of digital cameras isn't big enough to capture all this information in one photo. The shadows will be crushed and the highlights will be burned, especially when storing the photo in an 8-bit image format like JPEG. The subtlety will be lost and the lighting won't look realistic.
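
The trade-off in the first point can be put into rough numbers. Each shot covers its field of view minus the overlap shared with its neighbour, so a full 360-degree row needs the following shot count. The 25% overlap here is just an assumed figure for illustration, not a camera spec:

```python
import math

# How many horizontal shots does one full 360-degree row of a
# panorama need? Each shot only contributes its field of view minus
# the overlap with the neighbouring shot (25% overlap is an assumed
# illustrative value).
def shots_for_360(fov_degrees, overlap=0.25):
    usable = fov_degrees * (1.0 - overlap)
    return math.ceil(360.0 / usable)

print(shots_for_360(40))   # a normal lens: a lot of shots per row
print(shots_for_360(180))  # a fisheye: only a handful
```

This is exactly why a wide field of view matters: a fisheye brings the count down to a few frames per row, while a normal lens forces a dozen or more.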

Important remark: keep in mind that you should have as few moving objects in your scene as possible. You will be taking pictures at different shutter speeds, and moving objects will become a blur at slow shutter speeds.

Today I am using a better but slightly more expensive method than a couple of years back. In the first section I will explain briefly how I used to do it, and in the second section how I do it today.

The old way

A mirror ball. Note the scratches and blemishes. Although the map will work for lighting it might be a bit rough for good reflections.

You need at least a camera that can shoot in manual mode, with full control over aperture, shutter speed and ISO. A regular DSLR will do the trick. It doesn't even have to be a very expensive one.

To capture a complete environment with a regular lens you need far too many pictures to stitch together, which is very time consuming. A neat solution is not to photograph your environment directly but to shoot a spherical mirror, better known as a mirror ball. You don't want those faceted disco balls but rather the smooth chrome-like balls which give a perfect reflection without breakups. The great thing about a spherical mirror is that the reflection covers more than 180 degrees. It is actually almost 360 degrees, with the exception of the view directly behind the ball. The drawback is that the edges are extremely distorted and a lot of information gets squeezed into a few pixels. To counter this it is good practice to photograph the sphere from three or more different angles and stitch those together.

  • Cheap and available in garden shops, unless you want a perfect chrome ball with no blemishes.
  • Good enough for capturing the general lighting information of your scene.
  • A regular DSLR camera with a regular lens will do the trick, although I recommend a long lens as you will be less visible in the reflection of the sphere.
  • Every blemish on the sphere makes your picture unsharp.
  • You will always be in the picture, as you are being reflected as well. You can paint yourself out, but it takes time.
  • Low resolution. Might not be enough for perfect reflections in your CGI image.
  • Measuring the distance between the camera and the sphere is critical. You want all the angles to be taken from the same distance.

Since regular JPEGs have a low dynamic range, we need to shoot different exposures and join them together into a High Dynamic Range Image, or HDRI. This means the camera needs to be on a tripod, as long exposures will be unavoidable. You also need to correct the spherical distortion caused by the mirror ball. There are several programs available to do this for you. I used to use HDR Shop 1.0, but it is old and there is a newer version available.
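
To give an idea of what these HDR tools do under the hood, here is a minimal sketch of the merge itself. Each bracketed exposure is divided by its exposure time to estimate scene radiance, and the estimates are averaged with a weight that trusts mid-tones and distrusts crushed blacks and clipped whites. Real tools like HDR Shop or Photoshop also recover the camera's response curve; this sketch assumes the pixel values are already linear and normalized to [0, 1].

```python
import numpy as np

# Minimal HDR merge from bracketed exposures. Assumes linear pixel
# values in [0, 1]; real tools also recover the camera response curve.
def merge_hdr(images, exposure_times):
    images = [np.asarray(im, dtype=np.float64) for im in images]
    radiance = np.zeros_like(images[0])
    weight_sum = np.zeros_like(images[0])
    for im, t in zip(images, exposure_times):
        # Hat-shaped weight: highest at mid-grey, zero at pure
        # black (crushed) and pure white (clipped).
        w = 1.0 - np.abs(im - 0.5) * 2.0
        radiance += w * (im / t)  # divide by exposure time -> radiance estimate
        weight_sum += w
    return radiance / np.maximum(weight_sum, 1e-8)
```

For a pixel that is well exposed in every bracket, all the estimates agree and the merge simply recovers the scene radiance; the weighting only matters where some brackets are crushed or clipped.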

Check steps 4 to 6 in the section below to get an idea of how to set the exposure and how many pictures to take.

The new way

A low dynamic range tone-mapped lat/long environment map. It wasn't stitched properly; notice the soft edges of the buildings.

Last year I invested in a whole new setup. It is superior to the method above and gets better quality environment maps. Instead of working with a mirror ball I use a fisheye lens. You know, those funky super wide lenses which capture between 140 and 180 degrees field of view (depending on which lens you buy).

The aim is to take pictures with multiple exposures from six different angles plus a top and bottom shot. You always need a tripod for this.

Let's look at the kit.
  • DSLR: I use a Nikon D7000. This is a good mid-level DSLR. Why Nikon? Because I have already invested quite a bit of money in Nikon lenses over the past decade. Any other brand will do, as long as you can shoot in manual mode.
  • The lens: This is the important part. I use a Samyang 8mm f/3.5 fisheye lens. This lens has a 180-degree field of view when used with a DX camera like the D7000. You have to do a little research to know which lens is the best choice for your camera. This Samyang is great quality, but it is a fully manual lens, including the focus. It has no proper chip for EXIF data (although the new November 2011 model does). This is not a problem as such; you need fixed settings for your pictures anyway. I know this lens is also available for other cameras.
  • Tripod: A regular stable tripod will do.
  • 360-degree rig: Another essential piece of equipment. It allows you to rotate the camera in fixed intervals around the central Y-axis. A good rig measures the intervals for you. I use the Nodal Ninja 4 for measuring the intervals, with the EZ-Leveler II to get the camera perfectly horizontal.
  • A laptop for tethered shooting: A laptop is not necessary, but it is always great to save all those images straight to hard drive and to automate the whole process. I use Sofortbild, a free Mac application from Stefan Hafeneger, which allows me to take multiple exposures with one mouse click. Sofortbild works only with Nikon cameras. I know Canon has its own software.
  • The HDRI conversion software: I just use Photoshop to join my multiple exposure brackets into one HDR image. Sofortbild can do this on the fly while taking the pictures, but I had some trouble with it lately and haven't bothered to figure out why yet.
  • The stitching software: Although you could try to stitch all the images together in Photoshop, it is far more convenient to use an HDR panoramic stitcher. I use PTGui Pro for Mac. You can indicate where pictures overlap and it will try to stitch them together for you. There is also a manual mode if it doesn't manage to stitch them automatically. It also transforms the whole image into a longitude/latitude format and saves it in a 32-bit image format of your choice. I always use the Radiance file format, which has the .hdr extension. It seems to work flawlessly in Maya.

Now we have to put all this kit into practice.
  • Step 1: Put the whole rig on the spot where you want the light to be captured. This is usually the location where you want your CG element to be in the scene.
  • Step 2: Make sure your camera is level. Use a spirit level or the built-in sensor to measure this. Be as accurate as possible. This becomes relatively easy when using an EZ-Leveler II.
  • Step 3: Put the nodal point of the lens right above the rotation point of the rig. If you skip this step your pictures won't align when stitching them. You can check this by taking pictures at different angles: you should get no parallax shift between two pictures. If you do, adjust the placement of the camera relative to the rig accordingly.
  • Step 4: We need to take pictures at different exposures. Use the shutter speed to control this and fix all other settings. Put the ISO at 100 to get noise-free images, and put the aperture at f/22 to get as much depth of field as possible. It will make everything in the picture sharp, and that is exactly what you want.
  • Step 5: Make a couple of stills to check the darks and the brights. Use the histogram function on your camera to see when the blacks are no longer crushed and the whites no longer clipped. This tells you what the minimum and maximum shutter speeds should be.
  • Step 6: Start with the slowest shutter speed and take a picture every two stops until you reach the fastest shutter speed. This will usually be between 5 and 8 pictures. A program like Sofortbild will take them all in one go when using tethering.
  • Step 7: Rotate the camera exactly 60 degrees (this can vary with other lenses but is a good benchmark) and repeat step 6. Make sure you take the same number of pictures.
  • Step 8: Keep doing this until you have a full rotation.
  • Step 9: Take 2 extra sets for the zenith and nadir. You could do without, but it gives a better result. The cheap rigs won't allow you to do this though.
  • Step 10: You should have 8 sets of images now. Convert each set to an HDR image in Photoshop (for CS6: File > Automate > Merge to HDR Pro).
  • Step 11: Import the images into PTGui Pro and go through the whole procedure to stitch them together into an equirectangular image.
  • Step 12: Export the result as a new HDR and use it in your 3D software.
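
Step 6 is just arithmetic: two stops faster means a quarter of the exposure time. A quick sketch of how the bracket series falls out of the two endpoints you found in step 5 (the 2 s and 1/2000 s values below are only an example, not fixed settings):

```python
# Exposure bracket series for step 6: start at the slowest shutter
# speed and step two stops (a factor of 4 in exposure time) each
# picture until the fastest measured speed is reached.
def bracket_times(slowest, fastest, stops_per_step=2):
    times = []
    t = slowest
    while t >= fastest:
        times.append(t)
        t /= 2 ** stops_per_step
    return times

# e.g. the histogram says 2 s at the dark end and 1/2000 s at the bright end
print(bracket_times(2.0, 1 / 2000))
```

With these example endpoints you end up with 6 exposures, comfortably inside the usual 5-to-8 range mentioned above.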

Something extra:
If you don't want your rig in the photograph, it is possible to paint it out. It gives a nicer result but can be time consuming.

I realize this article doesn't explain all the intricate details of the process but it should be enough to get you on your way and to try it out for yourself.


Wednesday, January 23, 2013

Review: Tascam DR-40

I talked about DSLR microphones in the past (check this link) and how they add extra quality to your video. We can take this a notch further and use a separate sound recorder instead of the audio recorded by the camera. We have a Tascam DR-40 at our disposal, so why not give it a quick review? We made a video about it, so check that out first and then continue with the article, where we cover some extra ground compared to the video.

The video

Why use an external sound recorder?

Sound recorders have many more features than the built-in recorder of a DSLR. Let's mention the most important ones and see how the Tascam DR-40 compares.
  • Most DSLRs have limited input level control. It is done automatically or there are only a few settings. The Tascam can be finely tuned thanks to the graphical metering and the input level buttons.
  • Unless you use a stereo mic on your DSLR, you will get only one channel of sound on your video. The Tascam DR-40 has up to 4 channels of sound available (two of which are used by the internal condenser mics). They can be used to record separate mics, but also to record a duplicate of a single mic with the second channel lowered in volume. This way, if your main channel clips by accident, the second channel will still be good.
  • The bit rate of the DSLR is usually fixed and baked into the video. Most sound recorders let you record in low as well as high quality, and the audio is of course always a separate file. That allows you to use different types of compression like MP3. Even the sampling rate can be adjusted: the Tascam DR-40 goes from 44.1kHz to 96kHz and saves the files as 16-bit or 24-bit.
  • The Tascam DR-40 has 2 XLR input connectors which provide phantom power. This allows you to use high quality condenser mics like a boom mic or lavalier mics. Phantom power allows you to use long cables, so you don't have to set up the mic close to the recorder if you don't want to.
  • Since it is a separate recorder, you aren't bound to the camera's location either. You can use a long lens and just leave the recorder close to the subject you are filming.
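
The dual-level trick in the second point is worth a small illustration: because the safety channel records the same mic at a lower level, samples that clipped on the main channel can be replaced by the scaled-up safety channel in post. The -12 dB offset below is an assumption for the sketch, not a fixed DR-40 setting:

```python
import numpy as np

# Repairing a clipped take using the lower-level safety channel.
# The safety channel was recorded safety_gain_db quieter, so we
# scale it back up (the "makeup" gain) wherever the main channel hit
# the clipping ceiling.
def repair_clipping(main, safety, safety_gain_db=-12.0, clip=0.999):
    main = np.asarray(main, dtype=np.float64)
    safety = np.asarray(safety, dtype=np.float64)
    makeup = 10 ** (-safety_gain_db / 20.0)  # undo the recording offset
    clipped = np.abs(main) >= clip
    return np.where(clipped, safety * makeup, main)
```

Wherever the main channel stayed below the ceiling, the original samples pass through untouched; only the clipped stretches are swapped for the reconstructed safety signal.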

The Tascam DR-40

So now that we have seen some advantages of an external sound recorder it is time to look at the DR-40 a bit more closely.

The unit is made of plastic. Only the protective braces for the condenser mics are metal. Dropping it is not a good idea, so handle it with care. It has an LCD screen with an amber-colored backlight and decent buttons for accessing its many features. The play, record and stop buttons are nicely centered and big, so no worries about pressing the wrong button with fat-finger syndrome.

You need 3 AA batteries to power the unit. You can use rechargeable batteries, and there is even a setting to change the battery type so the charge level is displayed correctly. The USB port can also be used to power the unit. This is really recommended when using phantom-powered condenser mics, as they drain the batteries rather fast.

The sound is recorded to an SD/SDHC card. There is a 2GB card in the box when you buy it. That looks small by today's standards, but it is enough for several hours of good quality sound. So far I haven't seen the need to upgrade the card. Your recordings can be uncompressed WAV files or compressed MP3 files.

The built-in condenser mics can be set in an A-B or X-Y configuration.

As mentioned before, you can record up to 4 channels simultaneously when using the two built in mics and two external mics.

The unit has a whole list of features which would take too long to cover in detail here. Things like overdubbing, auto-record when a certain sound level is detected, a pre-recording buffer of 2 seconds, limiter controls, a tuner function for tuning your instruments and much more are available.

My experience with the unit

I use this unit mostly with 2 Shure lavalier mics, and so far I think it is a really good piece of equipment. I think it is one of the cheapest units around with powered XLR inputs. It is small enough to fit in the palm of your hand and is lightweight.

The sound quality is great, but I did notice that when I import the files into Adobe Premiere they tend to sound really quiet. I always have to boost the volume quite a bit. Luckily this doesn't have an impact on the quality, as the noise levels are equally low.
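
For the curious, that volume boost is just a multiplication of the samples by a decibel-derived factor, which is also why a low noise floor matters: the noise gets scaled by exactly the same factor.

```python
# Applying a gain in decibels to audio samples: dB converts to a
# linear factor of 10^(dB/20), and every sample (signal and noise
# alike) is multiplied by it.
def apply_gain_db(samples, db):
    factor = 10 ** (db / 20.0)
    return [s * factor for s in samples]

print(apply_gain_db([0.1, -0.05], 6.0))  # +6 dB roughly doubles the amplitude
```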

Transferring the sound to disk is super easy. Just use the USB cable and plug the unit into your computer. It will show up as a USB drive for easy access. You can also take the SD card out, but it is well protected by a sturdy cover which takes some skill to remove, so using the USB port is more convenient.

I love the headphone jack for monitoring in the field. This is really missing on most DSLRs. The on-screen metering and the peak warning LED are great, but hearing your sound directly is way better. You can immediately identify a bad take due to unintended noise and redo the shot. I use good closed-back headphones so I can fully concentrate on the recording itself.


Check the official website to see the full feature list and some product stills.