Kinect Experiments
An overview of available Kinect libraries and drivers

Over the last two years the Kinect camera has become an influential tool in a variety of situations, ranging from interactive installations to virtual user interfaces. Hence we have been following the development of the different Kinect software solutions from the beginning and have compared several Processing libraries:

• Daniel Shiffman’s OpenKinect (based on libfreenect, OSX only),
• SenseBloom OSCeleton (based on OpenNI skeleton tracking, cross-platform),
• SimpleOpenNI (an OpenNI and NITE wrapper, cross-platform).
Status Quo
Since there has been a lot of news and updates regarding the use of the Microsoft Kinect inside Processing, we have decided to update this article and share some insights into the current status of its development.

Our conclusion is that SimpleOpenNI by Max Rheiner is probably the best solution for Processing users playing with the Kinect. Besides its cross-platform architecture (supporting Windows, OSX and Linux), its documentation and its active community, it has a number of other advantages, which we point out in the following paragraphs.

1. Skeleton Calibration
The skeleton calibration of SimpleOpenNI has been automated since version 0.27. Once a user is detected, the virtual skeleton is mapped to his or her full-body movements. This is especially helpful in projects involving random users passing by who are not aware of the outdated and annoying “Psi pose to calibrate” rule.
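As a rough sketch of what the automated calibration looks like in a SimpleOpenNI sketch (the constants and callback signatures below follow the v0.27-era examples; details may differ slightly between library versions):

```processing
import SimpleOpenNI.*;

SimpleOpenNI context;

void setup() {
  size(640, 480);
  context = new SimpleOpenNI(this);
  // enable the depth map and user/skeleton tracking
  context.enableDepth();
  context.enableUser(SimpleOpenNI.SKEL_PROFILE_ALL);
}

void draw() {
  context.update();
  image(context.depthImage(), 0, 0);

  // query a joint once the skeleton is actually being tracked
  if (context.isTrackingSkeleton(1)) {
    PVector head = new PVector();
    context.getJointPositionSkeleton(1, SimpleOpenNI.SKEL_HEAD, head);
  }
}

// called as soon as a new user is detected; with the automated
// calibration there is no need to wait for a Psi pose anymore
void onNewUser(int userId) {
  context.startTrackingSkeleton(userId);
}
```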

2. Oni File Recording
Moreover, it is possible to save a sequence of the Kinect data into an .oni file. You can simply replay the previously recorded file and run your application on top of its data. No Kinect device needs to be plugged into your computer, which makes it possible to develop small applications even on the go. SimpleOpenNI also offers functions to control the playback. This feature relieves the coder tremendously, since it is no longer necessary to jump, dance and run in front of your Kinect device every time you test your application. For an introduction to the Oni recorder, check the examples inside the library or have a look at this sketch.
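A minimal sketch of the record/playback workflow, based on the recorder example shipped with the library (treat the exact method names as an approximation if your version differs):

```processing
import SimpleOpenNI.*;

SimpleOpenNI context;

void setup() {
  size(640, 480);

  // to record: open the live camera and attach a recorder
  context = new SimpleOpenNI(this);
  context.enableDepth();
  context.enableRecorder(SimpleOpenNI.RECORD_MEDIUM_FILE, "test.oni");
  context.addNodeToRecording(SimpleOpenNI.NODE_DEPTH,
                             SimpleOpenNI.CODEC_16Z_EMB_TABLES);

  // to play back later, replace the lines above with:
  //   context = new SimpleOpenNI(this, "test.oni");
  // no Kinect needs to be plugged in for playback
}

void draw() {
  context.update();
  image(context.depthImage(), 0, 0);
}
```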

3. Features
There are a lot of different features and utilities that make this library efficient for small and even bigger projects, be it a single-threaded run, a multi-threaded run, or the use of synchronized depth and RGB image data. All of these functions are invoked by a single line of code. Feel free to explore these features by running through the nicely documented examples provided with the library, or have a look at the list of features.
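To illustrate the “single line of code” claim, here is a sketch combining a multi-threaded run with depth/RGB alignment (the `RUN_MODE_MULTI_THREADED` constant and `alternativeViewPointDepthToImage()` call follow the library’s examples; verify against your installed version):

```processing
import SimpleOpenNI.*;

SimpleOpenNI context;

void setup() {
  size(1280, 480);

  // multi-threaded run: capturing happens on its own thread
  context = new SimpleOpenNI(this, SimpleOpenNI.RUN_MODE_MULTI_THREADED);

  context.enableDepth();
  context.enableRGB();

  // align the depth map with the RGB image
  context.alternativeViewPointDepthToImage();
}

void draw() {
  context.update();
  image(context.depthImage(), 0, 0);
  image(context.rgbImage(), 640, 0);
}
```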

The combination of all these features makes SimpleOpenNI a very comfortable and probably the best tool for developing with the Kinect in Processing. Check this link for the SimpleOpenNI installation guide for OSX, Windows and Linux: and here for general information and downloads:

Another framework you might want to use is the Microsoft Kinect SDK, which is a C++ library and therefore not exactly user friendly within Java ;). However, it definitely offers stable skeleton tracking, audio and image recording, and speech recognition. Furthermore, it enables impressive face-tracking functionality, which is a unique feature of the Microsoft Kinect SDK. You should seriously consider this framework if you are developing on Windows and are not necessarily chained to Processing. You can get it here: Microsoft Kinect SDK
There are still a couple more libraries for accessing the Kinect from Processing. We have collected a list of links to some of them, in case you cannot use SimpleOpenNI or the Microsoft SDK for some reason, or want to try something else:

Daniel Shiffman’s library, one of the first to make use of the Kinect with Processing, allows us to access both the depth image and the RGB image of the Kinect camera. The depth image is useful for quickly checking the depth at a certain position, as it returns a Z-depth for every pixel. While it only works on OSX, it is the easiest and quickest to install. It provides only basic functionality, with no skeleton tracking or proper hand recognition. If it is used to draw point clouds based on the depth information, it can slow down dramatically.
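A sketch of the per-pixel depth lookup with the openkinect Processing library (this follows the early API of Shiffman’s library as the article describes it; later releases changed some method names):

```processing
import org.openkinect.*;
import org.openkinect.processing.*;

Kinect kinect;

void setup() {
  size(640, 480);
  kinect = new Kinect(this);
  kinect.start();
  kinect.enableDepth(true);
}

void draw() {
  image(kinect.getDepthImage(), 0, 0);

  // raw depth values, one per pixel, in scanline order
  int[] depth = kinect.getRawDepth();
  // Z-depth under the mouse cursor
  int d = depth[mouseX + mouseY * 640];
  println(d);
}

void stop() {
  kinect.quit();
  super.stop();
}
```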

OSCeleton is essentially a C program without a GUI that tracks your skeleton via the Kinect and then sends the XYZ coordinates of each joint via OSC. You can receive this information in many applications and programming languages.
It is fast and accurate, but really only does one thing, skeleton tracking, which works well for game design but is inappropriate for any type of interface design. It also has the disadvantage of requiring a separate application to run, one which requires constant (re-)calibration and is not entirely stable.
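On the receiving side, a Processing sketch can pick up the joint coordinates with the oscP5 library. The sketch below assumes OSCeleton’s default behaviour of sending `/joint` messages (joint name, user ID, then x/y/z) to port 7110; adjust the port and message layout to your setup:

```processing
import oscP5.*;

OscP5 osc;

void setup() {
  // OSCeleton sends to port 7110 by default
  osc = new OscP5(this, 7110);
}

void oscEvent(OscMessage msg) {
  // each joint arrives as: /joint <name> <userId> <x> <y> <z>
  if (msg.checkAddrPattern("/joint")) {
    String jointName = msg.get(0).stringValue();
    int userId = msg.get(1).intValue();
    float x = msg.get(2).floatValue();
    float y = msg.get(3).floatValue();
    float z = msg.get(4).floatValue();
    println(jointName + " (user " + userId + "): " + x + ", " + y + ", " + z);
  }
}
```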

This library was developed by Thomas Diewald, but we have only had a very brief look at it. It provides access to the depth image and the RGB image, as well as to the Kinect motor and LED, but does not bring any built-in gesture or skeleton recognition. One notable feature, though, is the option to use more than one Kinect at a time. For the latest news, go to the Processing forum.