Kinect-Like 3D Camera

By hackengineer

Want to take 3D pictures? Need cool things to make with your 3D printer? This 3D camera build uses a structured-light technique to create a 3D representation that can be rendered with any 3D software. Structured light is basically a set of images that are sequentially projected onto a scene and captured with an ordinary CMOS camera. The deformation of the structured light is run through an algorithm to determine the depth at each pixel. The resulting X, Y, and Z coordinates (a 3D point cloud) are used to reproduce a 3D model of the scene.
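
To make that concrete, here is a minimal sketch (not the project's actual source) of how Gray-code stripe patterns like the ones projected here can be generated with OpenCV; stacked together, the patterns give every projector column a unique code:

    #include <opencv2/core.hpp>
    #include <cmath>
    #include <vector>

    // Generate one image per Gray-code bit; column x is lit in pattern b
    // iff bit b of x's Gray code is 1, so the stack of patterns encodes
    // every projector column uniquely.
    std::vector<cv::Mat> makeGrayCodePatterns(int width, int height) {
        int bits = (int)std::ceil(std::log2((double)width));
        std::vector<cv::Mat> patterns;
        for (int b = 0; b < bits; ++b) {
            cv::Mat p(height, width, CV_8U);
            for (int x = 0; x < width; ++x) {
                int gray = x ^ (x >> 1);                // binary -> Gray code
                int bit = (gray >> (bits - 1 - b)) & 1;
                p.col(x).setTo(bit ? 255 : 0);          // vertical stripe
            }
            patterns.push_back(p);
        }
        return patterns;
    }

(As discussed in the comments below, each pattern is also followed by its inverse when projected, which makes the per-pixel decoding more robust.)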

I'll show you how to build this project step by step in the forthcoming posts. If you can't wait and want to dive into the source code, it's up on Google Code.

  1. Hardware overview
  2. Structured Light vs Microsoft Kinect
  3. Setting up your BeagleBoard with Angstrom, toolchain, and OpenCV
  4. Installing your camera module and ENV.txt
  5. Installing a pushbutton and using BeagleBoard GPIO interrupts
  6. Prototyping Structured Light Algorithm in Matlab
  7. Downloading the SL source code and compiling natively on the BeagleBoard
  8. Rendering results with Meshlab
[Image: 3D camera using BeagleBoard and Pico Projector]
[Image: 3D picture taken with the 3D camera!]


  1. Emmanuel says:

    Good idea… next time, to reduce costs, use a Raspberry Pi: http://www.raspberrypi.org

  2. mark says:

    Very nice indeed.
    Any ideas on how to merge two point clouds taken from different angles, to fill in the gaps?

    • natron3 says:

      Yes, that would be something similar to image stitching. I might do a write-up on that in the future, but for now look into SIFT (scale-invariant feature transform). It's needed to find the corresponding points between the two point clouds; apply the correct transformation and overlay the two point clouds!
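
      Roughly, the matching step might look like this (a sketch against the SIFT interface in recent OpenCV releases; the file names are placeholders, and the final rigid alignment itself is not shown):

          #include <opencv2/features2d.hpp>
          #include <opencv2/imgcodecs.hpp>
          #include <vector>

          int main() {
              cv::Mat img1 = cv::imread("scan1.png", cv::IMREAD_GRAYSCALE);
              cv::Mat img2 = cv::imread("scan2.png", cv::IMREAD_GRAYSCALE);

              // Detect keypoints and compute SIFT descriptors in both views.
              auto sift = cv::SIFT::create();
              std::vector<cv::KeyPoint> kp1, kp2;
              cv::Mat desc1, desc2;
              sift->detectAndCompute(img1, cv::noArray(), kp1, desc1);
              sift->detectAndCompute(img2, cv::noArray(), kp2, desc2);

              // Brute-force matching plus a ratio test to drop ambiguous matches.
              cv::BFMatcher matcher(cv::NORM_L2);
              std::vector<std::vector<cv::DMatch>> knn;
              matcher.knnMatch(desc1, desc2, knn, 2);
              std::vector<cv::DMatch> good;
              for (const auto &m : knn)
                  if (m.size() == 2 && m[0].distance < 0.7f * m[1].distance)
                      good.push_back(m[0]);
              // Each surviving match ties a pixel in one scan to a pixel in
              // the other; the corresponding 3D points then constrain the
              // rigid transformation between the two clouds.
          }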

      • Rick says:

        I wonder if you could connect your system to an electronic gyroscope to sense a change in camera position? A gyroscope should be able to automatically determine the relative position change between 2 images for easier image stitching.

      • Freddy says:

        Hi Natron3, I was wondering if you might be willing to speak with me about a project I am working on. If so, please let me know how I might contact you offline. Thanks, Freddy

    • pranav says:

      Try using the Iterative Closest Point (ICP) algorithm, available in MeshLab, PCL, or CloudCompare.
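
      A minimal PCL sketch (the file names are made up; MeshLab and CloudCompare expose the same algorithm through their GUIs):

          #include <iostream>
          #include <pcl/io/pcd_io.h>
          #include <pcl/point_types.h>
          #include <pcl/registration/icp.h>

          int main() {
              pcl::PointCloud<pcl::PointXYZ>::Ptr src(new pcl::PointCloud<pcl::PointXYZ>);
              pcl::PointCloud<pcl::PointXYZ>::Ptr tgt(new pcl::PointCloud<pcl::PointXYZ>);
              pcl::io::loadPCDFile("scan1.pcd", *src);
              pcl::io::loadPCDFile("scan2.pcd", *tgt);

              // ICP iteratively pairs nearest points and solves for the
              // rigid transform that minimizes their distances.
              pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
              icp.setInputSource(src);
              icp.setInputTarget(tgt);

              pcl::PointCloud<pcl::PointXYZ> aligned;
              icp.align(aligned);
              if (icp.hasConverged())
                  std::cout << icp.getFinalTransformation() << std::endl;
          }

      Note that ICP only refines: it needs a rough initial alignment (e.g., from SIFT correspondences, as above) to converge reliably.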

  3. mark says:

    Instead of the naive homography implied by aligning the two lenses and hoping the lens curvatures are a close match, how about computing the homography directly? Here's a simple Python tidbit that does the job and explains how to use the resulting matrix to align the two camera images:
    http://eclecti.cc/computergraphics/easy-interactive-camera-projector-homography-in-python
    I'd like to see you use this approach to make it more accurate and to remove the constraint of similar cameras. It also means you can add things like a second texture camera from another angle to gather higher-sampled textures, or a second triangulation camera to improve the 3D accuracy, help with occlusion problems, and improve the S/N ratio.
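
    For example, with OpenCV (the four correspondences here are made up for illustration):

        #include <opencv2/calib3d.hpp>
        #include <vector>

        int main() {
            // Where four projected markers were seen by the camera...
            std::vector<cv::Point2f> cameraPts = {
                {102.f, 88.f}, {530.f, 95.f}, {518.f, 410.f}, {110.f, 402.f}};
            // ...and where the projector actually drew them.
            std::vector<cv::Point2f> projectorPts = {
                {0.f, 0.f}, {640.f, 0.f}, {640.f, 480.f}, {0.f, 480.f}};

            // Fit the 3x3 homography; with more (noisy) correspondences,
            // RANSAC rejects the outliers automatically.
            cv::Mat H = cv::findHomography(cameraPts, projectorPts, cv::RANSAC);

            // A camera pixel maps to projector coordinates as
            // [u', v', w]^T = H [u, v, 1]^T, then divide through by w.
            return 0;
        }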

    Just for fun: when you gather the Gray-code numbers, if you also gather the intensity of a pixel, you can improve the virtual accuracy at the edges of the SL scans. That is, there will be some rolloff in intensity when a point lies on the edge of a stripe. This simple mod can improve your sampling by a factor of 2 to 4 in resolution. (This assumes the material has a constant albedo; you can check for this, since nearby edges can be inspected for changes too. If they change as well, it's albedo and not subsampling.)
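
    A sketch of that edge refinement, assuming constant albedo (illustrative only):

        // Estimate the fractional position of a pixel within a stripe
        // edge by linearly interpolating its intensity between the
        // fully-dark and fully-lit levels of that neighborhood.
        float subStripeOffset(float pixel, float dark, float lit) {
            if (lit <= dark) return 0.5f;             // no contrast: give up
            float t = (pixel - dark) / (lit - dark);  // 0 = dark, 1 = lit
            if (t < 0.f) t = 0.f;
            if (t > 1.f) t = 1.f;
            return t;
        }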

    • natron3 says:

      Mark, thanks for the link. A homography would be a useful addition to the project; I'll be sure to look into the tip.

      I didn't mention it in the post, but pixel intensities are used when decoding the SL images. For each SL pattern projected, the inverse is projected immediately afterwards (not shown in the gif). By comparing the two intensity values we can better determine the SL value of the pixel. No subsampling was needed in the algorithm: for each projected pixel a unique depth is determined, with a depth resolution proportional to the projector resolution.
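
      In sketch form (illustrative only, not the actual project source):

          #include <cstdint>

          // Standard Gray-to-binary conversion.
          uint32_t grayToBinary(uint32_t gray) {
              for (uint32_t mask = gray >> 1; mask != 0; mask >>= 1)
                  gray ^= mask;
              return gray;
          }

          // pattern[b] and inverse[b] hold one camera pixel's intensity
          // under the b-th pattern and under its inverse. Brighter under
          // the pattern than under the inverse means that bit is 1.
          uint32_t decodePixel(const uint8_t *pattern, const uint8_t *inverse,
                               int numPatterns) {
              uint32_t gray = 0;
              for (int b = 0; b < numPatterns; ++b)
                  gray = (gray << 1) | (pattern[b] > inverse[b] ? 1u : 0u);
              return grayToBinary(gray);  // projector column this pixel saw
          }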

  4. Damien says:

    If you took a photo of the lit scene, could you use its colour information to produce coloured output?

    • natron3 says:

      Yes. I think that could be done in at least two ways. First, instead of saving a standard point cloud (X, Y, Z), add color information for each vertex.

      The second option might be texture mapping in the 3D rendering stage.
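
      For the first option, a minimal sketch of writing per-vertex color as ASCII PLY (MeshLab reads this directly; the struct is illustrative):

          #include <cstdio>
          #include <vector>

          struct Vertex { float x, y, z; unsigned char r, g, b; };

          // Write an ASCII PLY point cloud with a color per vertex.
          void writeColoredPly(const char *path, const std::vector<Vertex> &pts) {
              FILE *f = std::fopen(path, "w");
              if (!f) return;
              std::fprintf(f,
                  "ply\nformat ascii 1.0\nelement vertex %zu\n"
                  "property float x\nproperty float y\nproperty float z\n"
                  "property uchar red\nproperty uchar green\nproperty uchar blue\n"
                  "end_header\n", pts.size());
              for (const auto &v : pts)
                  std::fprintf(f, "%f %f %f %u %u %u\n", v.x, v.y, v.z,
                               (unsigned)v.r, (unsigned)v.g, (unsigned)v.b);
              std::fclose(f);
          }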

  5. Anthony says:

    Hey, I'm one of the SimpleCV developers (http://www.simplecv.org), a cross-platform open-source vision framework (it uses OpenCV amongst others). I would love to get your code merged into our stuff; it may help it reach a bigger audience. I don't use Matlab, so we would have to port your algorithm to Python, but I think you will find SimpleCV with the built-in shell very similar to Matlab if you want to give it a try. You can always shoot me an e-mail if you want to chat some more. Seems like you are doing some awesome stuff here.

  6. uglygeorge says:

    3D stills are OK, but movies are a lot better. How soon until a practical 3D camcorder (with audio) that records to an SD card and is powered by batteries? Standing by.

  7. mark says:

    Looking forward to that update :-) nudge nudge…

  8. BCarroll says:

    Has any work been done to port this to the Raspberry Pi or Arduino?

  9. Andy says:

    Hi natron3.

    3 things of IMMENSE interest here:
    (i) Your comment – “While the hardware used in this project was designed for portability and small form-factor, the algorithm parameters can be easily updated for high resolution hardware.”
    (ii) Your comment that colour could be added.
    (iii) Possibility to merge multiple point clouds.

    Any chance you can contact me? I want to 3D print high-quality human figures and am currently investigating a “photogrammetry” solution, but your line of thinking looks much more promising. I'd really love to hear from you…

    • hackengineer says:

      Thanks for the interest.

      (i) Your comment – “While the hardware used in this project was designed for portability and small form-factor, the algorithm parameters can be easily updated for high resolution hardware.”

      So if you want high quality (high resolution), you will need to use a higher-resolution camera and projector. In the C code you will need to update the resolution for your specific hardware.
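
      Illustratively, it is the kind of hardware-dependent constants below that need retuning (these names are made up, not the actual identifiers in the source):

          /* Hypothetical hardware-dependent constants to retune. */
          #define CAM_WIDTH   640   /* camera capture resolution        */
          #define CAM_HEIGHT  480
          #define PROJ_WIDTH  1024  /* projector resolution; sets the   */
          #define PROJ_HEIGHT 768   /* pattern count: ceil(log2(width)) */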

      (ii) Your comment that colour could be added.

      Here I was referring to texture-mapping. Something like this http://meshlabstuff.blogspot.com/2010/07/remeshing-and-texturing-1.html

      (iii) Possibility to merge multiple point clouds.

      I know it can be done in MeshLab, but I don't have a good link to a how-to. I would try to keep the camera static and rotate the object by a fixed amount; each corresponding point cloud would then have to be rotated by the same amount in MeshLab…
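
      In sketch form (illustrative code, not part of the project): if scan k was taken with the object rotated k*step degrees, rotating cloud k back by the same angle about the turntable (vertical) axis puts all the clouds in one frame.

          #include <cmath>
          #include <vector>

          struct Pt { float x, y, z; };

          // Rotate a point cloud about the vertical (y) axis.
          void rotateAboutY(std::vector<Pt> &cloud, float degrees) {
              float r = degrees * 3.14159265f / 180.0f;
              float c = std::cos(r), s = std::sin(r);
              for (auto &p : cloud) {
                  float x =  c * p.x + s * p.z;
                  float z = -s * p.x + c * p.z;
                  p.x = x;
                  p.z = z;
              }
          }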

  10. Andy says:

    Thanks hackengineer. See the rig halfway down the page here: http://www.behance.net/gallery/Cyberpunk-2077/6573211 – I believe it actually uses 82 18 MP cameras and 8 36 MP cameras. While the results are stunning, I don't have the capital to implement that. As well as the cameras, there is (i) USB camera-control software, which also enables downloading the 90 photographs to a PC, and (ii) photogrammetry stitching software to produce a point cloud (with colour)/mesh; in total that has to be at least £100,000 GBP => $160,000 USD. While I have a degree in electronic engineering, it was 25 years ago; I slipped off the straight and narrow into an accounting job and no longer know one end of a Karnaugh map from the other (OO coding is more than confusing for me). Are you interested in a tie-up to try to implement a low-cost/high-quality solution with me? I'm sure there's a substantial market for that solution…

  11. Freddy says:

    Hi again natron3, if you would prefer to contact me I can give you my number. Would really like to get your thoughts on a project I am interested in. Hope to hear from you.

  12. FFR says:

    Hi! Can anyone here give me source code for our project on 3D mapping with a mobile robot using a Kinect? ^___________^ Thanks in advance.