Kinect RGB Demo v0.2.1

Research.KinectRgbDemoV2 History


July 19, 2011, at 07:43 PM by 87.217.161.246 -
Deleted lines 0-3:
! [[KinectRgbDemoV5|%red%NEW: version 0.5]]%%

(:redirect KinectRgbDemoV4:)

Added lines 3-6:
! [[KinectRgbDemoV6|%red%NEW: version 0.6]]%%

(:redirect KinectRgbDemoV6:)

May 30, 2011, at 12:43 PM by 163.117.150.79 -
Changed lines 1-2 from:
! [[KinectRgbDemoV3|%red%NEW: version 0.3]]%%
to:
! [[KinectRgbDemoV5|%red%NEW: version 0.5]]%%
February 07, 2011, at 04:05 PM by 163.117.150.79 -
Added lines 3-4:
(:redirect KinectRgbDemoV4:)
Added lines 1-2:
! [[KinectRgbDemoV3|%red%NEW: version 0.3]]%%
December 16, 2010, at 11:54 AM by 163.117.150.79 -
Changed lines 105-107 from:
* Now you need to annotate the depth images to generate a @@view????/raw/depth.png.calib@@ file for each capture. This file should contain the coordinates of the four corners. To help you do so, there is a tool called @@annotate_image@@ that you can run on the directory containing the grabbed images:
to:
* The chessboard captures should cover as much of the image area as possible. In particular, check for coverage of the image corners. Also try to get the chessboard as close as possible to the camera for better precision. I usually grab about 30 images, but with 5-6 images covering the image area well, you should already get a rough calibration. Typical reprojection error is < 2 pixels. If you get significantly higher values, it means the calibration failed.

'''4.'''
Now you need to annotate the depth images to generate a @@view????/raw/depth.png.calib@@ file for each capture. This file should contain the coordinates of the four corners. To help you do so, there is a tool called @@annotate_image@@ that you can run on the directory containing the grabbed images:
Changed line 114 from:
'''4.''' Once your images are annotated, run:
to:
'''5.''' Once your images are annotated, run:
Changed lines 54-58 from:
!!! Compilation on Mac

You will need an installation of Qt and libusb >= 1.0.

You might experience some issues with libusb. If the configure script cannot find it and you installed it in your $HOME directory, you can try:
to:
* Note from Stéphane Magnenat about compilation on Ubuntu 10.04:
Changed lines 56-58 from:
cmake -DLIBUSB_1_INCLUDE_DIR=$HOME/libusb/include -DLIBUSB_1_LIBRARY=$HOME/libusb/lib/libusb-1.0.dylib build
to:
> For information, to compile KinectRgbDemo from [1] under Ubuntu 10.04, I had
> to add png12 to all target_link_libraries(...) as well as to add GLU to the
> target_link_libraries(...) of rgbd-viewer.
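As a sketch of what that note describes (the exact target names in the project's CMakeLists.txt are an assumption here, not taken from the source), the fix amounts to appending the extra libraries to the relevant @@target_link_libraries@@ calls:
[@
# In CMakeLists.txt -- hypothetical target and variable names for illustration:
target_link_libraries(rgbd-viewer ${OpenCV_LIBS} png12 GLU)
@]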
Added lines 61-69:
!!! Compilation on Mac

You will need an installation of Qt and libusb >= 1.0.

You might experience some issues with libusb. If the configure script cannot find it and you installed it in your $HOME directory, you can try:
[@
cmake -DLIBUSB_1_INCLUDE_DIR=$HOME/libusb/include -DLIBUSB_1_LIBRARY=$HOME/libusb/lib/libusb-1.0.dylib build
@]

December 02, 2010, at 07:35 PM by 163.117.150.79 -
Added lines 56-57:
You will need an installation of Qt and libusb >= 1.0.
December 02, 2010, at 07:20 PM by 163.117.150.79 -
Added lines 149-153:
!!! Getting Infrared Images

* You can activate the IR mode in the capture menu. There is also a dual RGB/IR mode that alternates between the two modes.
%height=320px% Attach:viewer_output_ir.png

December 02, 2010, at 07:14 PM by 163.117.150.79 -
Changed lines 1-2 from:
(:title [=Kinect RGBDemo v0.2=]:)
to:
(:title [=Kinect RGBDemo v0.2.1=]:)
Added lines 13-20:
!!! Bug fixes since v0.2.0

* Fix compilation issues
* Include a safe version of OpenCV
* annotate_image is easier to use
* Fix broken calibration
* Fix compilation on Mac OS X

Changed lines 36-37 from:
Source code for Linux [[http://handle2.uc3m.es/pub/kinect/rgbdemo-0.2.0-Source.tar.gz|rgbdemo-0.2.0-Source.tar.gz]] (GPL License)
to:
Source code for Linux [[http://handle2.uc3m.es/pub/kinect/rgbdemo-0.2.1-Source.tar.gz|rgbdemo-0.2.1-Source.tar.gz]] (GPL License)
Changed line 40 from:
* The source does not include OpenCV anymore, so you will need an installation of OpenCV >= 2.0.
to:
* The source includes a copy of OpenCV since the Ubuntu packages are buggy. If you want to use an external OpenCV installation, disable the USE_EXTERNAL_OPENCV flag in CMake.
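Assuming @@USE_EXTERNAL_OPENCV@@ is exposed as a regular CMake cache variable (an assumption, not confirmed by the source), it can also be set from the command line rather than through the CMake GUI:
[@
# Hypothetical invocation; adjust the value to the polarity you need:
cmake -DUSE_EXTERNAL_OPENCV=OFF build
@]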
Changed lines 48-49 from:
tar xvfz rgbdemo-0.2.0-Source.tar.gz
cd rgbdemo-0.2.0-Source
to:
tar xvfz rgbdemo-0.2.1-Source.tar.gz
cd rgbdemo-0.2.1-Source
Changed lines 54-56 from:
If you get compilation issues regarding OpenCV and have installed the official packages, you can try this:
to:
!!! Compilation on Mac

You might experience some issues with libusb. If the configure script cannot find it and you installed it in your $HOME directory, you can try:
Changed lines 58-61 from:
cd rgbdemo-0.2.0-Source
./linux_configure.sh
cmake -DOPENCV_DIR=/usr/share/opencv build
./linux_build.sh
to:
cmake -DLIBUSB_1_INCLUDE_DIR=$HOME/libusb/include -DLIBUSB_1_LIBRARY=$HOME/libusb/lib/libusb-1.0.dylib build
Added lines 61-62:
Note that the program has not been tested on Mac; I know it compiles, but I have no idea how it runs (or whether it runs at all).
Changed line 96 from:
* Now you need to annotate the depth images to generate a @@view????/raw/depth.png.calib@@ file for each capture. This file should contain the coordinates of the four corners. To help you do so, there is a tool called @@annotate_image@@ that you can run recursively:
to:
* Now you need to annotate the depth images to generate a @@view????/raw/depth.png.calib@@ file for each capture. This file should contain the coordinates of the four corners. To help you do so, there is a tool called @@annotate_image@@ that you can run on the directory containing the grabbed images:
Changed line 98 from:
for im in grab1/view*; do bin/annotate_image $im; done
to:
bin/annotate_image grab1
December 02, 2010, at 11:59 AM by 163.117.150.79 -
Changed line 80 from:
'''2.''' Create a pattern calibration file, e.g. by copying and adjusting @@data/pattern_calib_a3@@. The file should contain the number of points (4 here) and the coordinates of the four corners with respect to the first square corner, as shown here:
to:
'''2.''' Create a pattern calibration file, e.g. by copying and adjusting @@data/pattern_calib_a3.calib@@. The file should contain the number of points (4 here) and the coordinates of the four corners with respect to the first square corner, as shown here:
Changed lines 83-87 from:
* Note that @@data/pattern_calib_a3@@ is predefined for the given calibration model printed on an A3 sheet of paper and @@data/pattern_calib_a4@@ for the A4 chessboard version.

'''3.''' Grab some images of your chessboard using the viewer (File / Grab frame or Ctrl-G). By default it will save them into directories @@grab1/view????@@. These directories contain the raw files @@raw/color.png@@, @@raw/depth.yml@@, and @@raw/intensity.png@@, which correspond to the color image, the depth image (in meters), and the depth image normalized to grayscale.

* Now you need to annotate the depth images to generate a @@view????/raw/intensity.png.calib@@ file for each capture. This file should contain the coordinates of the four corners. To help you do so, there is a tool called @@annotate_image@@ that you can run recursively:
to:
* Note that @@data/pattern_calib_a3.calib@@ is predefined for the given calibration model printed on an A3 sheet of paper and @@data/pattern_calib_a4.calib@@ for the A4 chessboard version.

'''3.''' Grab some images of your chessboard using the viewer (File / Grab frame or Ctrl-G). By default it will save them into directories @@grab1/view????@@. These directories contain the raw files @@raw/color.png@@, @@raw/depth.yml@@, and @@raw/intensity.png@@, which correspond to the color image, the depth image (in meters), and the IR image normalized to grayscale. You will also get an additional @@raw/depth.png@@, which is the depth image normalized to grayscale.

* Now you need to annotate the depth images to generate a @@view????/raw/depth.png.calib@@ file for each capture. This file should contain the coordinates of the four corners. To help you do so, there is a tool called @@annotate_image@@ that you can run recursively:
Changed lines 92-93 from:
You need to click on the four points in the same order as in the @@pattern_calib@@ file (P1, P2, P3, P4). Then press @@Echap@@ to switch to the next image. You can leave some images without annotation, in which case they will only be used for color intrinsics estimation.
to:
You need to click on the four points in the same order as in the @@pattern_calib@@ file (P1, P2, P3, P4). Then press @@Esc@@ to switch to the next image. You can leave some images without annotation, in which case they will only be used for color intrinsics estimation.
December 01, 2010, at 10:50 AM by 163.117.150.79 -
Changed lines 46-48 from:
!!! Running the viewer without calibration

* Binaries are in the @@bin/@@ directory; you can give it a try without calibration using:
to:
If you get compilation issues regarding OpenCV and have installed the official packages, you can try this:
Changed lines 48-51 from:
bin/rgbd-viewer
to:
cd rgbdemo-0.2.0-Source
./linux_configure.sh
cmake -DOPENCV_DIR=/usr/share/opencv build
./linux_build.sh
Changed lines 54-55 from:
If you get an error such as:
to:
!!! Running the viewer without calibration

* Binaries are in the @@bin/@@ directory; you can give it a try without calibration using:
Changed lines 58-60 from:
libusb couldn't open USB device /dev/bus/usb/001/087: Permission denied.
libusb requires write access to USB device nodes.
FATAL failure: freenect_open_device() failed
to:
bin/rgbd-viewer
Changed lines 61-62 from:
Give access rights to your user with:
to:
If you get an error such as:
Changed lines 64-66 from:
sudo chmod 666 /dev/bus/usb/001/087
to:
libusb couldn't open USB device /dev/bus/usb/001/087: Permission denied.
libusb requires write access to USB device nodes.
FATAL failure: freenect_open_device() failed
Added lines 69-73:
Give access rights to your user with:
[@
sudo chmod 666 /dev/bus/usb/001/087
@]

Changed line 15 from:
* Faster image acquisition, much faster 3D rendering
to:
* Faster image acquisition, faster 3D rendering
Changed lines 13-14 from:
!!! New features
to:
!!! New features since v0.1
Changed lines 1-2 from:
(:title [=Kinect RGBDemo v2=]:)
to:
(:title [=Kinect RGBDemo v0.2=]:)
Changed lines 1-2 from:
(:title Kinect RGBDemo v2:)
to:
(:title [=Kinect RGBDemo v2=]:)
Changed lines 1-2 from:
(:title Kinect [@RGBDemo@] v2:)
to:
(:title Kinect RGBDemo v2:)
Changed lines 1-2 from:
(:title Kinect @@RGBDemo@@ v2:)
to:
(:title Kinect [@RGBDemo@] v2:)
Changed lines 1-2 from:
(:title Kinect [= RgbDemo =] v2:)
to:
(:title Kinect @@RGBDemo@@ v2:)
Changed lines 1-2 from:
(:title Kinect [= RGBDemo =] v2:)
to:
(:title Kinect [= RgbDemo =] v2:)
Changed lines 1-2 from:
(:title Kinect "[=RGBDemo=]" v2:)
to:
(:title Kinect [= RGBDemo =] v2:)
Changed lines 1-2 from:
(:title Kinect [=RGBDemo=] v2:)
to:
(:title Kinect "[=RGBDemo=]" v2:)
Changed lines 1-2 from:
(:title Kinect RGBDemo v2:)
to:
(:title Kinect [=RGBDemo=] v2:)
Added lines 5-6:
[[<<]]
Added line 21:
Added lines 20-22:
You can have a look at some features on the following video:
(:youtube bQWIz8BmCrg:)

Added lines 1-4:
(:title Kinect RGBDemo v2:)

(:htoc:)

Added lines 121-134:

!!! Replay mode

* You can grab RGBDImages using the @@File/Grab Frame@@ command. This stores the files into @@viewXXXX@@ directories (see the Calibration section), which can be replayed later using the fake image grabber. This can be activated using the @@--image@@ option:
[@
bin/rgbd-viewer --calibration kinect_calibration.yml --image grab1/view0000
@]

* You can also replay a sequence of images stored in a directory with the @@--directory@@ option:
[@
bin/rgbd-viewer --calibration kinect_calibration.yml --directory grab1
@]
This will cycle through the set of @@viewXXXX@@ images inside the @@grab1@@ directory.

Changed lines 95-96 from:
%height=240px% Attach:viewer_output_main.png
to:
%height=240px% Attach:viewer_output_main_v2.png
Changed lines 95-96 from:
%height=240px% Attach:viewer_output_main_v2.png
to:
%height=240px% Attach:viewer_output_main.png
Added line 10:
* Images can be saved on the hard disk and replayed offline
Changed lines 14-15 from:
* Mesh export is faster and includes the color texture file to simplify Blender import
to:
* Mesh export includes the color texture file to simplify Blender import
November 30, 2010, at 08:02 PM by 163.117.150.79 -
Changed lines 5-6 from:
The code is based on the [[https://github.com/OpenKinect/libfreenect/|freenect library]]. Note that it was initially designed to deal with small-resolution PMD Camcube cameras, so it is quite slow with the high-resolution images of the Kinect.
to:
The code is based on the [[https://github.com/OpenKinect/libfreenect/|freenect library]], and uses patches from ROS people to grab the infrared images.

!!! New features

* Faster image acquisition, much faster 3D rendering
* Support for raw infrared output
* Support for simultaneous RGB/IR/Depth output (very slow, ~3 FPS, but useful to grab synchronized images)
* Support for tilt motors (Filter Window / Tilt)
* Mesh export is faster and includes the color texture file to simplify Blender import

Changed lines 17-20 from:
Source code for Linux [[http://handle2.uc3m.es/pub/kinect/rgbdemo-0.1.0-Source.tar.gz|rgbdemo-0.1.0-Source.tar.gz]] (GPL License)

The archive is quite big because it includes a compatible version of OpenCV (Ubuntu 10.10 packages have annoying bugs).

to:
Source code for Linux [[http://handle2.uc3m.es/pub/kinect/rgbdemo-0.2.0-Source.tar.gz|rgbdemo-0.2.0-Source.tar.gz]] (GPL License)
Added line 21:
* The source does not include OpenCV anymore, so you will need an installation of OpenCV >= 2.0.
Changed line 24 from:
sudo apt-get install libusb-1.0-0-dev libqt4-dev libgtk2.0-dev cmake libglew1.5-dev
to:
sudo apt-get install libusb-1.0-0-dev libqt4-dev libgtk2.0-dev cmake libglew1.5-dev libcv-dev libhighgui-dev libcvaux-dev
Changed lines 29-30 from:
tar xvfz rgbdemo-0.1.0-Source.tar.gz
cd rgbdemo-0.1.0-Source
to:
tar xvfz rgbdemo-0.2.0-Source.tar.gz
cd rgbdemo-0.2.0-Source
Changed lines 94-97 from:
%height=240px% Attach:viewer_output_main.png

* The main frame is the color-encoded depth image. By moving the mouse, you can see the distance in meters to a particular pixel. Gray-level images are normalized depth. Images are now undistorted.
to:
%height=240px% Attach:viewer_output_main_v2.png

* The main frame is the color-encoded depth image. By moving the mouse, you can see the distance in meters to a particular pixel. Images are now undistorted.
Changed line 104 from:
* You can get a (slow) 3D view in Show / 3D Window.
to:
* You can get a 3D view in Show / 3D Window.
Changed line 110 from:
* And finally textured triangles (very slow):
to:
* And finally textured triangles:
Added lines 118-119:

* The associated texture is written into a @@current_mesh.ply.texture.png@@ file and can be loaded into the UV editor in Blender.
November 30, 2010, at 07:55 PM by 163.117.150.79 -
Added lines 1-110:
!! Demo software to visualize and calibrate Kinect cameras

This is software for Linux, developed in the [[http://roboticslab.uc3m.es | RoboticsLab]], that allows you to grab images with the Kinect camera and calibrate it in a semi-automatic way. Thanks to the calibration, the point cloud is in metric space.

The code is based on the [[https://github.com/OpenKinect/libfreenect/|freenect library]]. Note that it was initially designed to deal with small-resolution PMD Camcube cameras, so it is quite slow with the high-resolution images of the Kinect.

!!! Download

Source code for Linux [[http://handle2.uc3m.es/pub/kinect/rgbdemo-0.1.0-Source.tar.gz|rgbdemo-0.1.0-Source.tar.gz]] (GPL License)

The archive is quite big because it includes a compatible version of OpenCV (Ubuntu 10.10 packages have annoying bugs).

!!! Compilation

* Install required packages, e.g. on Ubuntu 10.10:
[@
sudo apt-get install libusb-1.0-0-dev libqt4-dev libgtk2.0-dev cmake libglew1.5-dev
@]

* Untar the source, use provided scripts to launch cmake and compile:
[@
tar xvfz rgbdemo-0.1.0-Source.tar.gz
cd rgbdemo-0.1.0-Source
./linux_configure.sh
./linux_build.sh
@]

!!! Running the viewer without calibration

* Binaries are in the @@bin/@@ directory; you can give it a try without calibration using:
[@
bin/rgbd-viewer
@]

If you get an error such as:

[@
libusb couldn't open USB device /dev/bus/usb/001/087: Permission denied.
libusb requires write access to USB device nodes.
FATAL failure: freenect_open_device() failed
@]

Give access rights to your user with:
[@
sudo chmod 666 /dev/bus/usb/001/087
@]

!!! Calibrating your Kinect

A sample calibration file is provided in @@data/kinect_calibration.yml@@. However, you should be able to get a more accurate mapping by estimating new parameters for each Kinect. Below is the procedure I follow.

'''1.''' Build a calibration pattern as shown in [[KinectCalibration]]. You can use the @@Chessboard_A4.pdf@@ or @@Chessboard_A3.pdf@@ file in the @@data/@@ directory for this. I recommend printing the chessboard on a sheet of paper, gluing it onto a piece of cardboard, and cutting the cardboard around the paper, as close as possible.

'''2.''' Create a pattern calibration file, e.g. by copying and adjusting @@data/pattern_calib_a3@@. The file should contain the number of points (4 here) and the coordinates of the four corners with respect to the first square corner, as shown here:
%width=320px% Attach:calibration_pattern.png

* Note that @@data/pattern_calib_a3@@ is predefined for the given calibration model printed on an A3 sheet of paper and @@data/pattern_calib_a4@@ for the A4 chessboard version.
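For illustration only (the exact syntax expected by the tools is not documented here, so treat this as a hypothetical sketch with made-up coordinates), such a file would hold the point count followed by the four corner coordinates in meters, in P1-P4 order:
[@
4
0.0    0.0
0.25   0.0
0.25   0.175
0.0    0.175
@]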

'''3.''' Grab some images of your chessboard using the viewer (File / Grab frame or Ctrl-G). By default it will save them into directories @@grab1/view????@@. These directories contain the raw files @@raw/color.png@@, @@raw/depth.yml@@, and @@raw/intensity.png@@, which correspond to the color image, the depth image (in meters), and the depth image normalized to grayscale.

* Now you need to annotate the depth images to generate a @@view????/raw/intensity.png.calib@@ file for each capture. This file should contain the coordinates of the four corners. To help you do so, there is a tool called @@annotate_image@@ that you can run recursively:
[@
for im in grab1/view*; do bin/annotate_image $im; done
@]

You need to click on the four points in the same order as in the @@pattern_calib@@ file (P1, P2, P3, P4). Then press @@Echap@@ to switch to the next image. You can leave some images without annotation, in which case they will only be used for color intrinsics estimation.

'''4.''' Once your images are annotated, run:
[@
bin/calibrate_kinect --pattern-width 10 --pattern-height 7 --pattern-size 0.025 --pattern-ref data/pattern_a4.calib grab1
@]
The pattern width is the number of inner corners along the horizontal axis (10 with these chessboards), the pattern height is the number of inner corners along the vertical axis (7 here), and the pattern size is the distance between consecutive corners, i.e. the square size. Here the parameters are set for the A4 pattern; the default parameters should be suitable for the A3 pattern. The output should look like this:
||
|| %height=200px% Attach:calibration_output_1.png || %height=200px% Attach:calibration_output_2.png || %height=200px% Attach:calibration_output_3.png ||
|| a) Automatic chessboard detection in color image || b) Automatic corner extraction in color image || c) Corner extraction in depth images from manual labeling ||

This will generate the @@kinect_calibration.yml@@ file storing the parameters for the viewer, and two files @@calibration_rgb.yaml@@ and @@calibration_depth.yaml@@ for use with the ROS kinect node.

!!! Running the viewer with calibration

* Just give it the path to the calibration file:
[@
bin/rgbd-viewer --calibration kinect_calibration.yml
@]

* You should get a window similar to this:
%height=240px% Attach:viewer_output_main.png

* The main frame is the color-encoded depth image. By moving the mouse, you can see the distance in meters to a particular pixel. Gray-level images are normalized depth. Images are now undistorted.

* You can filter out some values and normalize the depth color range with the filter window (Show / Filters). The Edge filter is recommended.
%height=240px% Attach:viewer_output_filters.png

* You can get a very simple depth-threshold based segmentation with Show / Object Detector.
%height=240px% Attach:viewer_output_detection.png

* You can get a (slow) 3D view in Show / 3D Window.
%height=240px% Attach:viewer_output_view3d_cloud.png

* By default you get a grayscale point cloud. You can activate color:
%height=240px% Attach:viewer_output_view3d_cloud_color.png

* And finally textured triangles (very slow):
%height=240px% Attach:viewer_output_view3d_triangles.png

* You can also save the mesh using the @@Save current mesh@@ button; it will store it into a @@current_mesh.ply@@ file that you can open with [[http://meshlab.sourceforge.net/|Meshlab]]:
%height=320px% Attach:viewer_output_meshlab.png

* Or import into [[http://www.blender.org/|Blender]]:
%height=320px% Attach:viewer_output_blender.png