About me

  • SLAM & 3D computer vision for AR/VR at Arcturus Industries.
  • Previously at Occipital after the ManCTL acquisition. 3D reconstruction, RGBD tracking, SLAM, mixed reality. Mostly on mobile, targeting the Structure Sensor.
  • Built the 3D scanning software Skanect and co-founded ManCTL (part of the Microsoft/Techstars accelerator program in 2012).
  • Participated in the Kinect hacking frenzy and developed RGBDemo.
  • Academic researcher until 2012. A-contrario statistical methods, object detection, RGBD reconstruction for robotics (publications).

See my LinkedIn or my resume (last update: Feb 2021) for more details.

Side projects (github)

  • DaltonLens. Desktop system tray utility to assist color-blind people with various real-time filters. Also in the Mac App Store. I also published a number of technical articles about color vision deficiency simulation.

  • stereodemo. Compare and visualize the output of recent stereo depth estimation algorithms on files or on OAK-D camera streams.

  • zv. Early stage attempt at creating a modern, open-source, lightweight and cross-platform replacement for the good old xv image viewer.

  • nbplot. Command-line utility to quickly plot files by generating a Jupyter notebook from the command line.

  • Transforms3D. Online tool to convert between different 3D rotation formalisms and visualize the transform in WebGL.

  • RGBDemo. Open-source software to experiment with and share potential applications of RGBD sensors. No longer maintained.

Blog Posts

Comparing some recent stereo algorithms in the wild

Tremendous progress has been made on estimating depth from a pair of stereo images. The current state-of-the-art methods all rely on deep learning to reach impressive accuracy levels on public benchmarks, even on hard textureless areas. But how do they actually perform in practice on stereo images captured in the wild? I am especially interested in indoor scenes (room scanning, object scanning), while the most frequently used large stereo benchmark is KITTI, which is focused on outdoor autonomous driving. So let’s see how well they generalize and what the performance/memory tradeoffs are for some of them.
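Whatever method produces the disparity map, turning it into metric depth uses the standard pinhole/rectified-stereo relation: depth = focal length (in pixels) × baseline (in meters) / disparity (in pixels). A minimal sketch of that conversion, with made-up calibration numbers for illustration:

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_length_px, baseline_m, eps=1e-6):
    """Convert a disparity map (pixels) to metric depth (meters).

    Assumes rectified stereo. Zero or negative disparities are treated
    as invalid and mapped to a depth of 0.
    """
    disparity = np.asarray(disparity_px, dtype=np.float32)
    depth = np.zeros_like(disparity)
    valid = disparity > eps
    depth[valid] = focal_length_px * baseline_m / disparity[valid]
    return depth

# Hypothetical calibration: 400 px focal length, 7.5 cm baseline.
depth = disparity_to_depth(np.array([[0.0, 10.0]]), 400.0, 0.075)
# A 10 px disparity then corresponds to 400 * 0.075 / 10 = 3.0 m.
```

This inverse relation is also why errors blow up at small disparities: far-away surfaces are exactly where the networks' generalization matters most.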

Using deep learning to undo line anti-aliasing

Line anti-aliasing makes color segmentation difficult on images that include thin lines or small markers, for example in plots. Undoing it would allow software tools for the colorblind to more easily highlight regions that share the same color in color charts. Since it’s relatively easy to generate ground truth data, let’s see if we can tackle this problem with deep learning.
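The "easy to generate ground truth" part can be sketched with a synthetic rasterizer: render the same random line twice, once with hard binary coverage (the target) and once supersampled so edge pixels get fractional coverage (the anti-aliased input). This is an illustrative NumPy sketch with hypothetical names and parameters, not the pipeline from the article:

```python
import numpy as np

def line_mask(h, w, p0, p1, thickness, supersample=1):
    """Rasterize a line segment by thresholding distance to the segment.

    supersample=1 gives a hard (aliased) mask; supersample>1 averages a
    finer grid, producing soft anti-aliased edges.
    """
    s = supersample
    ys, xs = np.mgrid[0:h * s, 0:w * s]
    ys = (ys + 0.5) / s
    xs = (xs + 0.5) / s
    (x0, y0), (x1, y1) = p0, p1
    dx, dy = x1 - x0, y1 - y0
    # Project each pixel center onto the segment, clamped to its endpoints.
    t = np.clip(((xs - x0) * dx + (ys - y0) * dy) / (dx * dx + dy * dy), 0.0, 1.0)
    dist = np.hypot(xs - (x0 + t * dx), ys - (y0 + t * dy))
    mask = (dist <= thickness / 2).astype(np.float32)
    if s > 1:
        # Box-filter downsample back to (h, w): fractional edge coverage.
        mask = mask.reshape(h, s, w, s).mean(axis=(1, 3))
    return mask

def make_training_pair(h=64, w=64, rng=None):
    """Return (anti-aliased input, aliased target) for one random line."""
    rng = rng or np.random.default_rng(0)
    p0 = tuple(rng.uniform(0, [w, h]))
    p1 = tuple(rng.uniform(0, [w, h]))
    thickness = rng.uniform(1.0, 3.0)
    aliased = line_mask(h, w, p0, p1, thickness, supersample=1)
    antialiased = line_mask(h, w, p0, p1, thickness, supersample=4)
    return antialiased, aliased
```

Generating pairs on the fly like this gives the network an effectively unlimited supervised dataset, which is what makes the problem a good fit for deep learning.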