Kinect RGB Demo v0.4.0
Demo software to visualize, calibrate and process Kinect cameras output
This software was partly developed in the RoboticsLab and aims to provide a simple toolkit to start playing with Kinect data. Features include:
- Grab Kinect images and visualize / replay them
- Calibrate the camera to get point clouds in metric space
- Export to meshlab/blender using .ply files
- Demo of 3D scene reconstruction using a freehand Kinect
- Demo of people detection and localization
- Linux, MacOSX and (partial) Windows support
The project is divided into a library called nestk and some demo programs using it. The library itself is easy to integrate into an existing project using CMake: just copy the nestk folder as a subfolder of your project and you should be able to start working with Kinect data. You can get more information on the nestk page.
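As a minimal sketch of the integration described above (the target names "myapp" and main.cpp are placeholders, and it assumes nestk's own CMakeLists defines a nestk library target):

```shell
# Hypothetical host project embedding nestk as a subfolder.
mkdir -p myproject
# cp -r /path/to/nestk myproject/nestk   # copy nestk into your project
cat > myproject/CMakeLists.txt <<'EOF'
cmake_minimum_required(VERSION 2.8)
project(myproject)
add_subdirectory(nestk)             # build the nestk library
add_executable(myapp main.cpp)
target_link_libraries(myapp nestk)  # link your program against it
EOF
```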
- Source code as an archive RGBDemo-0.4.0-Source.tar.gz (LGPL License)
- Source code on github
- MacOSX Intel binaries
- Windows binaries
New features since v0.4.0rc1
- Improved Windows event loop. Performance is now similar to Linux.
- Automatically load a calibration file if present in the current directory.
- Minor fixes.
New features since v0.3.1
- New demo: experimental people detection and height estimation
- Windows support greatly improved (thanks to Armin Zürcher)
- Better MacOSX support
- Optional calibration without distortions for the depth image (--no-depth-undistort)
- Various bug fixes
You can have a look at the 3D freehand reconstruction on the following video:
And at the people detection feature on the following video:
Compilation on Linux (Ubuntu)
- The source includes a copy of OpenCV since the Ubuntu packages are buggy. If you want to use an external OpenCV installation (>= 2.2), disable the USE_EXTERNAL_OPENCV flag in CMake or directly use the
- Install required packages, e.g. on Ubuntu 10.10:
sudo apt-get install libusb-1.0-0-dev libqt4-dev libgtk2.0-dev cmake libglew1.5-dev libcv-dev libhighgui-dev libcvaux-dev libgsl0-dev libglut3-dev libxmu-dev
- Untar the source, use the provided scripts to launch cmake and compile:
tar xvfz rgbdemo-0.4.0rc1-Source.tar.gz
cd rgbdemo-0.4.0rc1-Source
./linux_configure.sh
./linux_build.sh
- Note from Stéphane Magnenat about compilation on Ubuntu 10.04:
> For information, to compile KinectRgbDemo under Ubuntu 10.04, I had to add png12 to all target_link_libraries(...), as well as to add GLU to the target_link_libraries(...) of rgbd-viewer.
Compilation on Mac
You will need:
- An install of Qt
- An install of libusb. You can install it manually from AS3 Download, or let the provided macosx_configure.sh script install it for you.
Then run the following commands:
tar xvfz rgbdemo-0.4.0rc1-Source.tar.gz
cd rgbdemo-0.4.0rc1-Source
./macosx_configure.sh
./macosx_build.sh
The configure script might ask for libusb installation. Say yes if you don’t have it installed.
If you still experience some issues with libusb, or have a custom install, you can try:
cmake -DLIBUSB_1_INCLUDE_DIR=$HOME/libusb/include -DLIBUSB_1_LIBRARY=$HOME/libusb/lib/libusb-1.0.dylib build
supposing that you have it installed in $HOME/libusb.
Compilation on Windows
- Install Qt opensource for Windows. This will also install MinGW.
- Add C:\Qt\2010.05\MinGW\bin to the Path environment variable
- Install and run
- Open the CMakeLists.txt in Qt Creator or compile manually using mingw-make.
You can get Win32 binaries from there: RGBDemo-0.4.0-Win32.zip (LGPL License).
You will first need to install the libfreenect drivers. They are shipped with the archive, in the drivers directory. When you plug in your Kinect, point the driver installer manually to the drivers\xbox nui motor directory. If you missed it, go into the device manager and update the drivers, giving this location. More details on the OpenKinect Windows page.
Running the viewer without calibration
- Binaries are in the build/bin/ directory; you can give it a try without calibration using:
build/bin/rgbd-viewer
If you get an error such as:
libusb couldn't open USB device /dev/bus/usb/001/087: Permission denied.
libusb requires write access to USB device nodes.
FATAL failure: freenect_open_device() failed
Give access rights to your user with:
sudo chmod 666 /dev/bus/usb/001/087
Or install the udev rules provided by libfreenect.
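As an illustration of the udev-rules approach, a minimal rules file could look like the sketch below. Vendor ID 045e is Microsoft; the product IDs (02ae camera, 02ad audio, 02b0 motor) are the commonly reported Kinect ones, so verify yours with `lsusb`. libfreenect ships an equivalent, more complete rules file; prefer that one.

```shell
# Sketch of a udev rules file granting all users read/write access to the
# Kinect's USB device nodes (assumed product IDs -- check with lsusb).
cat > 51-kinect.rules <<'EOF'
SUBSYSTEM=="usb", ATTR{idVendor}=="045e", ATTR{idProduct}=="02ae", MODE="0666"
SUBSYSTEM=="usb", ATTR{idVendor}=="045e", ATTR{idProduct}=="02ad", MODE="0666"
SUBSYSTEM=="usb", ATTR{idVendor}=="045e", ATTR{idProduct}=="02b0", MODE="0666"
EOF
# Then, as root:
#   cp 51-kinect.rules /etc/udev/rules.d/ && udevadm control --reload
```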
Calibrating your Kinect
A sample calibration file is provided in
data/kinect_calibration.yml. However, you should be able to get a more accurate mapping by estimating new parameters for each Kinect. Below is the procedure I follow.
1. Build a calibration pattern as shown in KinectCalibration. You can use the
Chessboard_A3.pdf file in the
data/ directory for this. I recommend printing the chessboard on a sheet of paper and gluing it onto a piece of cardboard. It is no longer necessary to cut the cardboard around the paper.
2. Grab some images of your chessboard using the viewer (File / Grab frame or Ctrl-G). WARNING: you need to grab images in Dual IR/RGB mode (enable it in the Capture menu). By default it will save them into directories
grab1/view????. These directories contain the raw files: raw/intensity.png, which corresponds to the color image; the depth image (in meters); and the IR image normalized to grayscale. You will also get an additional raw/depth.png, which is the depth image normalized to grayscale.
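To make the layout concrete, the sketch below mocks one grabbed frame with empty files (only the two files named above; a real grab contains more data than this):

```shell
# Mock of the on-disk layout of a single grabbed frame.
mkdir -p grab1/view0000/raw
touch grab1/view0000/raw/intensity.png   # corresponds to the color image
touch grab1/view0000/raw/depth.png       # depth normalized to grayscale
ls grab1/view0000/raw
```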
To get an optimal calibration, grabbed images should ensure the following:
- Cover as much of the image area as possible. In particular, check for coverage of the image corners.
- Try to get the chessboard as close as possible to the camera to get better precision.
- For depth calibration, you will need some images with IR and depth. But for stereo calibration, the depth information is not required, so feel free to cover the IR projector and get very close to the camera to better estimate IR intrinsics and stereo parameters. The calibration algorithm will automatically determine which grabbed images can be used for depth calibration.
- Move the chessboard with various angles.
- I usually grab a set of 30 images to average the errors.
- Typical reprojection error is < 1 pixel. If you get significantly higher values, it means the calibration failed.
3. Run the calibration program:
build/bin/calibrate_kinect_ir --pattern-size 0.025 grab1
The pattern size corresponds to the size in meters of one chessboard square. It should be 0.025 (25 mm) for the A4 model.
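Since printers often rescale the page, it is worth measuring one square on your actual printout and converting millimeters to meters yourself, e.g.:

```shell
# Derive the --pattern-size argument from a measured square side.
square_mm=25                     # measured side of one square, in millimeters
pattern_size=$(awk -v mm="$square_mm" 'BEGIN { printf "%.3f", mm / 1000 }')
echo "$pattern_size"             # -> 0.025
# build/bin/calibrate_kinect_ir --pattern-size "$pattern_size" grab1
```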
This will generate the
kinect_calibration.yml file storing the parameters for the viewer, and files such as
calibration_depth.yaml for ROS compatibility.
Note with Mac binaries: if there is a
grab1 directory in the current directory, it will be loaded automatically.
Running the viewer with calibration
- Just give it the path to the calibration file:
build/bin/rgbd-viewer --calibration kinect_calibration.yml
New since RGB Demo v0.4.0: if there is a
kinect_calibration.yml file in the current directory, it will be loaded automatically.
- You should get a window similar to this:
- The main frame is the color-encoded depth image. By moving the mouse, you can see the distance in meters towards a particular pixel. Images are now undistorted.
- You can filter out some values and normalize the depth color range with the filter window (Show / Filters). The Edge filter is recommended.
- You can get a very simple depth-threshold based segmentation with Show / Object Detector
- You can get a 3D view in Show / 3D Window.
- By default you get a grayscale point cloud. You can activate color:
- And finally textured triangles:
- You can also save the mesh using the
Save current mesh button; it will be stored into a
current_mesh.ply file that you can open with Meshlab:
- Or import into Blender:
- The associated texture is written into a
current_mesh.ply.texture.png file and can be loaded into the UV editor in Blender.
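If you are unsure whether an export worked, a quick sanity check is to look at the PLY header. The sketch below mocks a tiny ASCII .ply file (the real current_mesh.ply also carries faces and texture coordinates) just to show what the header looks like:

```shell
# Mock of the start of an ASCII PLY file; a valid file begins with the
# "ply" magic word followed by a format line and element declarations.
cat > current_mesh.ply <<'EOF'
ply
format ascii 1.0
element vertex 3
property float x
property float y
property float z
end_header
0 0 0
1 0 0
0 1 0
EOF
head -n 2 current_mesh.ply
```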
Getting Infrared Images
- You can activate the IR mode in the capture menu. There is also a dual RGB/IR mode alternating between the two modes.
- You can grab RGBD images using the
File/Grab Frame command. This stores the files into
viewXXXX directories (see the Calibration section), which can be replayed later using the fake image grabber. This can be activated using:
build/bin/rgbd-viewer --calibration kinect_calibration.yml --image grab1/view0000
- You can also replay a sequence of images stored in a directory with:
build/bin/rgbd-viewer --calibration kinect_calibration.yml --directory grab1
This will cycle through the set of viewXXXX images inside the given directory.
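The cycling behavior amounts to iterating over the numbered subdirectories in order, which can be sketched with a shell glob (mocked here with empty directories):

```shell
# Mock a grab directory with three frames and walk them in sequence,
# the way the --directory replay mode visits viewXXXX subdirectories.
mkdir -p replay/view0000 replay/view0001 replay/view0002
for v in replay/view????; do
  echo "replaying $v"
done
```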
Interactive scene reconstruction
- You can try an experimental interactive scene reconstruction mode using the
build/bin/rgbd-reconstructor program. This is similar to the interactive mapping of Intel RGBD, but still at a very preliminary stage. The relative pose between image captures is determined using feature point matching and least-squares minimization.
In this mode, point clouds will progressively be aggregated in a single reference frame using a Surfel representation.
People detection
The people detection demo is implemented in the
rgbd-people-tracker program. You need to specify a configuration file. Here is an example of a full command line:
build/bin/rgbd-people-tracker --calibration kinect_calibration.yml --config data/tracker_config.yml
Calibration and config files will be loaded automatically if they are in the current directory.