Making a Video Recording Booth, Part 1: Brainstorming

My friends Jacqui and John got married, and they’ve asked me to provide a video booth/kiosk for their celebration party (less than three weeks from now, eek!). The idea is that guests will be able to record their own video messages to the bride and groom – essentially a multimedia guestbook. This is very similar to the self-service photo booth, which has become a ‘must-have’ item at weddings and parties, and there are a number of commercial and open-source photo booth software solutions available (SparkBooth, PhotoBoof, etc.).

I assumed that there would be similar software available for self-service video recording booths, but my Google-fu has largely failed me here. It seems that most of the software for this use is commercially developed and packaged with turn-key hardware solutions. I don’t have big bucks to spend here, so I won’t be buying/renting a commercial video booth. After much fruitless searching for a suitable product, I’ve determined that I’ll be writing my own video booth software.

My requirements for video booth software:

  1. Should have a simple, intuitive and bulletproof user interface. Anyone should be able to record a video without ever having used a computer before (I’m looking at you, Grandma), and the workflow must survive enthusiastic button-mashing by tipsy guests.
  2. Should not need expensive hardware (again, tipsy guests). I don’t have a powerful computer available to me for this project. This may be a challenging requirement, as video encoding in software is very CPU-intensive.
  3. Should be able to handle continuous use. If there’s a line of people waiting to record a greeting, a guest should be able to begin recording as soon as the previous person has finished their recording. This implies that I’ll either be encoding captured video/audio in real time or using background encoding tasks that are invisible to users, either of which may be a challenge considering my ‘no expensive hardware’ requirement.
  4. Should be able to record high-quality video. I would hate to see once-in-a-lifetime video memories look pixelated and crappy. My cell phone can take good video, so my video booth had better take great video. I’d be happiest with 1920×1080 (1080p HD), and I’d settle for 1280×720 (720p HD). A frame rate of 30FPS is preferable, but I’d go to 24 or 25FPS if needed, and I’d like to keep the data rate around 5000kbps. Absolute worst-case scenario: 640×480 (SD) resolution at 24FPS and a 500kbps data rate; I refuse to go any lower. I know I’m really challenging myself here, considering the need for cheap hardware and continuous use.

The spare computer that I have available for this project is an Acer Aspire AX1430. It’s got a wimpy little AMD E-450 processor, which is only slightly better than the dual-core Atoms.  The computer has 4GB of RAM, a 500GB HDD, and I’ve replaced the optical drive with a 32GB SSD. It’s essentially a high-end nettop, and doesn’t have much available horsepower for tasks like video encoding.  In order to keep resource usage as light as possible, the machine is running Lubuntu 12.04.

Other hardware that I have available to play with:

  • A Logitech C260 webcam.
  • An Arduino microcontroller.
  • A wireless RF keyboard and mouse.

I’m also considering using a Big Red Button for the user interface.

The user experience I have in mind is exceedingly simple:

  1. A webcam sits on top of a large monitor, which displays a full-screen preview of the video camera feed at all times. There won’t be a keyboard or mouse available, just a big illuminated red button in front of the monitor. In the idle (waiting) state, a text overlay invites the guest to “Press the button to begin recording.”
  2. When a guest presses the button, the text overlay gives them a quick countdown (3…2…1) and then recording begins. An overlay message (maybe including the ubiquitous flashing red circle ‘recording’ symbol) instructs the guest to “Press the button to stop recording.”  The big red button will flash during recording.
  3. The guest records their video message and then presses the button to complete their recording. A quick “Thank you!” message flashes, and then we’re back to the idle state, ready for the next person.
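
Rendered as code, that flow is a simple four-state machine. Below is a minimal Processing mock-up of it – a sketch only, with any keypress standing in for the big red button, and the camera preview and actual recording stubbed out:

// Kiosk state machine: IDLE -> COUNTDOWN -> RECORDING -> THANKS -> IDLE.
final int IDLE = 0, COUNTDOWN = 1, RECORDING = 2, THANKS = 3;
int state = IDLE;
int stateStart;  // millis() when we entered the current state

void setup() {
  size(640, 480);
  textAlign(CENTER, CENTER);
  textSize(32);
}

void draw() {
  background(0);  // the live camera preview would be drawn here
  fill(255);
  int elapsed = millis() - stateStart;
  if (state == IDLE) {
    text("Press the button to begin recording", width/2, height/2);
  } else if (state == COUNTDOWN) {
    int remaining = 3 - elapsed / 1000;
    if (remaining <= 0) enterState(RECORDING);  // startRecording() would go here
    else text(str(remaining), width/2, height/2);
  } else if (state == RECORDING) {
    if ((elapsed / 500) % 2 == 0) fill(255, 0, 0);  // flashing 'recording' dot
    ellipse(40, 40, 30, 30);
    fill(255);
    text("Press the button to stop recording", width/2, height - 50);
  } else if (state == THANKS) {
    text("Thank you!", width/2, height/2);
    if (elapsed > 2000) enterState(IDLE);  // back to idle for the next guest
  }
}

void enterState(int s) {
  state = s;
  stateStart = millis();
}

// Stand-in for the big red button; presses in other states are ignored,
// which is what keeps enthusiastic button-mashing from derailing the workflow.
void keyPressed() {
  if (state == IDLE) enterState(COUNTDOWN);
  else if (state == RECORDING) enterState(THANKS);
}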

There won’t be an Internet connection at the party, so I’m not considering features like streaming video, automated uploading, email notifications, etc. at this time.

For the video capture and encoding backend, GStreamer looks like the de facto choice as far as libraries go. I’ve used Processing to interface with Arduino in the past, so I’d like to reuse some of that code (there’s a time crunch here!). There’s a GStreamer binding for Processing called GSVideo which seems to have garnered a substantial following, so that may be the best choice for building the UI.
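
To get my feet wet, a webcam preview in GSVideo should be about this simple. This is adapted from GSVideo’s stock capture example rather than tested code – the exact API differs a little between GSVideo versions (older releases don’t need the start() call):

// Minimal GSVideo webcam preview, adapted from the stock capture example.
import codeanta.gsvideo.*;

GSCapture cam;

void setup() {
  size(640, 480);  // bump to 1280x720 to test HD capture
  cam = new GSCapture(this, 640, 480);
  cam.start();
}

void draw() {
  if (cam.available()) {
    cam.read();
    image(cam, 0, 0);  // full-window preview of the camera feed
  }
}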

Now I’m off to run some GStreamer tests and see if my wimpy little computer will be capable of on-the-fly previewing and encoding of HD video. My suspicion is ‘no’, but I’m ever-hopeful! I’ll follow up soon with those results.

Webcam-based Head Tracking with linux-track

There’s a new breed of RC hobbyist these days – the First Person View (FPV) pilot/driver. These folks install small cameras on their remote control vehicles and pilot/drive them by looking at a video monitor (generally a head-mounted display, i.e. video goggles) rather than by watching the vehicle directly. A couple of example videos are here and here. It’s pretty cool, and if I were into RC I might try to set up something like this. The part that caught my attention from a robotics standpoint was the head tracking aspect. These guys use video goggles with a gyro that reports their head position back to the camera on the aircraft, which is mounted on a servo-driven pan-tilt assembly. This allows them to remotely pan or tilt the camera just by turning their head. Pilot looks to their right, camera swivels to the right. Neat!

I did some research and discovered that there are a bunch of different methods of head tracking. You can use gyros like the FPV pilots do, or you can try an optical method, i.e. tracking with a camera. There are commercial solutions like the TrackIR, which is mainly intended for video gaming. (Did I mention that head tracking is huge in the flightsim and FPS gaming world now too?) The bad news is that the TrackIR costs $150, which in my mind is altogether too much to pay for what is essentially a webcam. So why not just use a normal webcam for face tracking? Well, it turns out that you can do exactly that – software like FaceAPI and FreeTrack can use any video source to track and quantify your head movement. A little more digging and I found Linux-track, a FreeTrack clone for Linux. Perfect, let’s play with it!

First we will add linux-track’s repository to apt, then we’ll update apt and install the package. (I’m using Ubuntu 11.10 – your sources.list entry will change based on your Ubuntu version, see this page for details.)

wget --quiet http://www.linuxtrack.eu/repositories/pubkey.gpg -O - | sudo apt-key add -
echo "deb http://www.linuxtrack.eu/repositories/ubuntu oneiric main"  | sudo tee -a /etc/apt/sources.list
sudo apt-get update
sudo apt-get install linuxtrack

Linux-track should now appear in the “Games” menu.

OK, let’s take a break from the software side and look at some of the hardware requirements of head tracking. Some software (e.g. FaceAPI) observes facial features and relative head size to calculate pose information. Linux-track is not so smart: it needs a “point model” – a series of LEDs or IR reflectors in a known configuration that it can track. With three “points” in a known pattern it can determine your head pose in 6 degrees of freedom (yaw, pitch and roll, plus translation along the X, Y and Z axes). For my purposes of pan and tilt, I need only 2DOF (yaw and pitch), so I can get away with a single-point model – just one LED or IR reflector. I found a cheap clip-on LED keychain in the garage which will be perfect for this.

Because I’m only interested in seeing the light emitted by my LED “point model”, and the LED emits plenty of IR along with visible light, I chose to add an IR-pass filter to my webcam to block out everything else. A small piece of exposed film negative (man, was that tough to find!) taped over the lens of the webcam acts as an effective filter. There are also other options out there for DIY IR filters. Finally, clipping the LED light to the brim of a ball cap finishes off the hardware side of this project.

Back in the linux-track config GUI, I created a new point model, specifying a single-point LED model.  I played with the sensitivity settings and visible blob threshold until I had a smooth-moving point.  All your preferences are stored in .linuxtrack/linuxtrack.conf in your home folder.

In my next post, I’ll be using ltr-pipe to create a virtual joystick that will drive a remote pan-tilt camera mount.  Stay tuned for that.

Making an Arduino-controlled Delta Robot

A delta robot is a parallel robot that’s designed for precise and high speed movement of light payloads.  They’re generally used for pick-and-pack operations.  I’ve wanted to build a delta robot since I saw this video, so I took a weekend last summer to put something together as a technology demonstration for the high school robotics club that I coach.

The complete robot

The servos and upper assembly

The upper link of an arm

A delta robot is composed of three two-segment ‘arms’ mounted at 120 degree intervals.  The arms use parallelograms to restrict the motion of the ‘wrist’ and keep it in a static orientation.  For a much better description, check out Wikipedia, and for the math geeks there’s an excellent writeup of delta robot kinematics over at Trossen Robotics.

To build my delta robot I started with three HS-645MG servos and mounted them on some particle board. I fabricated upper arms out of Tetrix beam and used some JB-Weld to attach long standoffs perpendicularly at the ends of the arms. The standoffs hold small ball joints (I used some ball ends that were intended for RC car suspensions) that provide the free movement required of these joints. The rods that make up the parallel linkage for the second segment of the arms are aluminum. I bored holes in the ends of these on the lathe and pressed small bolts into the holes to create mounting points for the ‘cap’ portion of the ball ends. The lower triangle is just a quick mock-up made of Lego, with some more long standoffs zip-tied to it to provide mounting points for the ball ends.

On the software side, there are two components. To drive the hardware, I modified an excellent Arduino sketch originally created by Bryan at Principia Labs. The Arduino sketch uses the Servo.h library to control the three servos. It listens over a serial connection for three-byte control commands in the format start_byte, servo_number, desired_position, where the start_byte is always 255. Servo number 99 is used as a control channel to enable and disable the servos: sending a desired position of 180 to servo 99 will turn all the servos on, and sending position 0 to servo 99 will disable all servos and power them off.
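
For illustration, sending those commands from the Processing side looks something like this (a sketch only – the port index and 9600 baud rate are my assumptions here; match them to whatever your Arduino sketch expects):

// Send three-byte servo commands: start_byte (255), servo number, position.
import processing.serial.*;

Serial port;

void setup() {
  port = new Serial(this, Serial.list()[0], 9600);  // first serial port found
  moveServo(99, 180);  // 'servo' 99 is the control channel: 180 enables all servos
  moveServo(1, 90);    // center servo 1
}

void draw() { }  // nothing to draw; just keep the sketch alive

void moveServo(int servoNumber, int position) {
  port.write(255);          // start byte
  port.write(servoNumber);  // servo 1-3, or 99 for the control channel
  port.write(position);     // desired position, 0-180 degrees
}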

Processing GUI

To control the robot I wrote some software in the Processing language. I used the really nice ControlP5 GUI library to make text boxes and sliders that control servo positions. The kinematics of converting three servo angles to a position in the XYZ axes and vice versa is… interesting, to say the least! Luckily great minds have gone before and completed this math for me, even including C code. Again, the Trossen Robotics forums are your friend.
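
For a taste of what that math looks like, here’s a rough Processing translation of the per-arm inverse kinematics from the Trossen writeup – you run it three times, rotating the target point by 120° each time. The dimensions below are placeholders, not my robot’s actual measurements:

// Per-arm inverse kinematics, after the Trossen Robotics delta robot tutorial.
// Placeholder dimensions (mm): f = base triangle side, e = effector triangle side,
// rf = upper arm length, re = lower (parallelogram) rod length.
float f = 140, e = 60, rf = 100, re = 200;

// Returns the servo angle (degrees) for one arm, given the target point
// (x0, y0, z0) in that arm's frame, or NaN if the point is unreachable.
float deltaCalcAngleYZ(float x0, float y0, float z0) {
  float y1 = -0.5 * tan(radians(30)) * f;  // arm pivot's Y offset from center
  y0 -= 0.5 * tan(radians(30)) * e;        // shift target to account for effector size
  float a = (x0*x0 + y0*y0 + z0*z0 + rf*rf - re*re - y1*y1) / (2*z0);
  float b = (y1 - y0) / z0;
  float d = -(a + b*y1)*(a + b*y1) + rf*(b*b*rf + rf);  // discriminant
  if (d < 0) return Float.NaN;             // target out of reach for this arm
  float yj = (y1 - a*b - sqrt(d)) / (b*b + 1);  // 'knee' point, outer solution
  float zj = a + b*yj;
  return degrees(atan(-zj / (y1 - yj))) + ((yj > y1) ? 180.0 : 0.0);
}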

I’m making all the code for the Arduino sketch and the Processing GUI available for download from my Box.net account (Edit: and on GitHub).  The code is well-commented, but I did not originally intend to publish it, so it’s a little rough around the edges.  Feel free to ask any questions in the post comments and I’ll try and clear things up if it’s confusing.

I’ll wrap things up with a quick video of the delta robot in action. Enjoy!

Installing Arduino 0023 on Ubuntu 11.10 (Oneiric Ocelot)

I recently installed Ubuntu 11.10 on an old laptop to be used as a dedicated robotics machine (I’m hoping to play with MOOS or ROS in the near future), and the first thing I wanted to do was install the latest Arduino IDE (0023). A quick look in the Ubuntu repository told me that the latest package for Ubuntu is the older 0022 version, so I decided to install the software directly from the Arduino website. It turns out that this process is not as ‘point-and-click’ as it should be, so I decided to document the installation process.

First, download and extract the Linux version of the Arduino software from the Arduino website. I’m using the 32-bit version, and I extracted it to /home/matt/arduino-0023.

I then tried to launch the IDE but received an error about missing Java. Oh yeah, this is a fresh install of Ubuntu – I guess I need to install Java.

sudo apt-get install openjdk-7-jre

OK, now the IDE launches properly. I tried to compile Blink and immediately received a complaint about a missing compiler.

“Cannot run program “avr-g++”: java.io.IOException: error=2, No such file or directory”

A quick Google search got me to this (somewhat outdated and only sort of correct) page on the Arduino site where I learned that I had to install the AVR C library and the AVR C cross-compiler.  It also claimed to require the C++ compiler for AVR, but it seems that this package doesn’t exist (maybe it’s included in gcc-avr now?).

sudo apt-get install avr-libc gcc-avr

Let’s try compiling again.  Surprise – a compilation error!

In file included from /usr/lib/gcc/avr/4.5.3/../../../avr/include/util/delay.h:44:0,
                 from /usr/lib/gcc/avr/4.5.3/../../../avr/include/avr/delay.h:37,
                 from /home/matt/arduino-0023/hardware/arduino/cores/arduino/wiring_private.h:30,
                 from /home/matt/arduino-0023/hardware/arduino/cores/arduino/WInterrupts.c:34:
/usr/lib/gcc/avr/4.5.3/../../../avr/include/math.h:426:15: error: expected identifier or ‘(’ before ‘double’
/usr/lib/gcc/avr/4.5.3/../../../avr/include/math.h:426:15: error: expected ‘)’ before ‘>=’ token

Some more Google-fu pointed me to a solution here. Apparently the Arduino folks used their own implementation of the math function ’round’ as the old versions of the gcc-avr math.h library didn’t include one.  The Arduino version of this function breaks the current gcc-avr math.h and needs to be commented out.  I edited /home/matt/arduino-0023/hardware/arduino/cores/arduino/wiring.h (your path will vary based on your install location) and commented out this line (79):

// #define round(x)     ((x)>=0?(long)((x)+0.5):(long)((x)-0.5))

One more compilation attempt, and…success, it builds!  I then tested uploading code to a Duemilanove (appears as /dev/ttyUSB0) and an Uno (appears as /dev/ttyACM0).  Both boards connected successfully.  OK, time to write some code – I’ll be working on a servo controlled pan-tilt rig for a pair of webcams.  Stay tuned for more on that project!