Rendering results with Meshlab

Posted: 26th February 2012 by hackengineer in Computer Vision

Meshlab is pretty great for 3D point clouds, and it's free!  Here are a few steps that really help make the point clouds look good.

  • Open Meshlab and open the .xyz file from the 3D camera
  • Delete any points that look like they don't belong (if you only see a small group of points, you are probably zoomed out really far due to a rogue point; delete it and zoom in)
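If you would rather clean the cloud before it ever reaches Meshlab, the same idea can be scripted. The sketch below is a simple heuristic (not MeshLab's own selection tool): it drops any point whose distance from the median position is much larger than typical.

```python
import numpy as np

def remove_rogue_points(points, factor=3.0):
    """Drop rogue points from an (N, 3) point cloud.

    A point is kept if its distance to the median position is less than
    `factor` times the median of all such distances. This is a rough
    heuristic sketch, not what MeshLab does internally.
    """
    pts = np.asarray(points, dtype=float)
    center = np.median(pts, axis=0)
    dist = np.linalg.norm(pts - center, axis=1)
    return pts[dist < factor * np.median(dist)]
```

You could run this over the .xyz data (e.g. loaded with `np.loadtxt`) and save the result back out before opening it in Meshlab.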

  • Orient the point cloud so that it represents the original scene (looking straight at it).  We will now compute the normals for each point.
    • Filters->Point Set->Compute Normals for Point Sets
      • # of neighbors = 100
      • Check "flip normals w.r.t. viewpoint"
    • Render->Lighting->Light On
    • The points should now have a shading effect depending on their normals.  To verify, use Render->Show Vertex Normals
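Under the hood, this kind of filter typically fits a plane to each point's k nearest neighbors and takes the plane's normal. Here is a minimal sketch of that idea in numpy, assuming a brute-force neighbor search (fine for small clouds) and a flip toward a given viewpoint, mirroring the "flip normals w.r.t. viewpoint" option:

```python
import numpy as np

def estimate_normals(points, k=100, viewpoint=(0.0, 0.0, 0.0)):
    """Per-point normals via PCA over k nearest neighbors.

    The normal is the eigenvector of the neighborhood covariance with
    the smallest eigenvalue, flipped to face the viewpoint. A sketch of
    the concept, not MeshLab's exact implementation.
    """
    pts = np.asarray(points, dtype=float)
    vp = np.asarray(viewpoint, dtype=float)
    normals = np.empty_like(pts)
    for i, p in enumerate(pts):
        # k nearest neighbors by brute force
        d = np.linalg.norm(pts - p, axis=1)
        nbrs = pts[np.argsort(d)[:k]]
        cov = np.cov(nbrs.T)
        eigvals, eigvecs = np.linalg.eigh(cov)
        n = eigvecs[:, 0]          # smallest-eigenvalue direction
        if np.dot(n, vp - p) < 0:  # flip w.r.t. the viewpoint
            n = -n
        normals[i] = n
    return normals
```

The "# of neighbors" setting in the dialog corresponds to `k` here: more neighbors gives smoother normals but blurs fine detail.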

  • Let's add some color based on depth to make it stand out (darker as depth increases).  Meshlab has a tool to set color per vertex.  Use the z coordinate as a variable as shown below.
    • Filters->Color Creation and Processing->Per Vertex Color Function
    • For each of the color channels, adjust this formula for good results: z*0.78+125
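The formula is just a linear map from depth to an 8-bit channel value; applying the same expression to R, G, and B gives a grayscale depth shading. A quick sketch of what it computes, with clamping to the valid range (the 0.78 and 125 constants are the post's starting values, meant to be tuned per scene):

```python
import numpy as np

def depth_color(z):
    """Map a z coordinate to an 8-bit channel value using z*0.78+125,
    clamped to [0, 255]. Apply to all three channels for grayscale."""
    return np.clip(np.asarray(z, dtype=float) * 0.78 + 125.0,
                   0, 255).astype(np.uint8)
```

Points at z = 0 land mid-gray (125), and more negative z (farther from the camera in Meshlab's view) gets darker, which is the intended depth cue.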

  • We should have a good-looking point cloud at this point.  We can also generate a surface mesh from our point cloud.
    • Filters->Point Set->Surface Reconstruction: Poisson
      • Set Octree Depth to 10 or so
    • The toolbar has a list of different view options.  Click the ones with cylinders on them to view in surface mode.

And there you have it!  A 3D picture taken with the BeagleBoard.  The results look pretty good for a completely portable setup and an HVGA projector.  I look forward to seeing what others can do with higher-resolution hardware!  Thanks for reading!


  1. ALoopingIcon says:

    (re-posted here: )

    • John says:

      Hey, nice article! Very easy to follow, and cool results without monster hardware.

      I’ve been developing an eye tracking device using a camera+opencv, and have been craving a new project. I was interested in trying some structured light with my DSLR and my projector, so I read up a bit more and did it in a slightly less automated manner. I haven’t had a chance to get too far on it, but I thought I’d share my first decent result.

      I’m trying to learn more about automatically calibrating the camera/projector, since that’s still on my to-do list.

      Your slides really helped me wrap my head around gray code patterns. Thanks!