Built a 3D model of an object by collecting data as a series of photos taken from different angles (front, back, left and right) for a complete 360-degree view, using two calibrated cameras. I reconstructed four meshes (one per side) and stitched them together to generate the 3D object.
Software: programmed in MATLAB and used Poisson Surface Reconstruction
Pipeline:
Step 1: camera parameters and collecting the dataset
Used a tripod with two cameras. To calibrate them in MATLAB, I approximated the cameras' extrinsic parameters, chose an initial guess for those parameters, and then minimized the sum of squared re-projection errors.
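The calibration objective can be sketched as follows. This is a minimal Python/NumPy illustration of the sum-of-squared-re-projection-errors cost with a simple pinhole model (the project itself used MATLAB's calibration tools); the intrinsics `K` and extrinsics `R`, `t` here are hypothetical placeholders.

```python
import numpy as np

def project(points_3d, K, R, t):
    """Pinhole projection of Nx3 world points into pixel coordinates."""
    cam = (R @ points_3d.T).T + t        # world -> camera frame
    uv = (K @ cam.T).T                   # apply intrinsics
    return uv[:, :2] / uv[:, 2:3]        # perspective divide

def reprojection_error(points_3d, observed_uv, K, R, t):
    """Sum of squared distances between observed and re-projected pixels --
    the quantity calibration minimizes over the camera parameters."""
    diff = project(points_3d, K, R, t) - observed_uv
    return float(np.sum(diff ** 2))
```

Calibration then amounts to running a nonlinear least-squares solver over the camera parameters, starting from the initial guess, to drive this error down.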
Data sets: I took a series of 20 photos at each angle, for a total of 160 photos across the two cameras, using structured light (a binary-coded light pattern projected onto the object). Here is a sample of the photos.
I projected the structured light, with an embedded Gray code, onto the object while taking the photos. In MATLAB I loaded and decoded the Gray code: the script read in the set of images captured by each camera for the four scan data sets, recovered the per-pixel code, and used a noise threshold to prune unreliable data.
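A minimal sketch of the decoding step, in Python/NumPy for illustration (the project's decoder was written in MATLAB). It assumes each bit pattern is photographed twice, once normal and once inverted, so the per-pixel contrast between the pair can act as the noise threshold:

```python
import numpy as np

def decode_gray(images, inverses, noise_thresh=10.0):
    """Decode a stack of Gray-code pattern photos into per-pixel indices.

    images[k], inverses[k]: float arrays (H, W) for bit k (MSB first).
    Pixels whose pattern/inverse contrast falls below noise_thresh on any
    bit are marked invalid (-1), pruning shadowed or unreliable data.
    """
    H, W = images[0].shape
    bits = np.stack([img > inv for img, inv in zip(images, inverses)])
    valid = np.all(np.stack([np.abs(img - inv) > noise_thresh
                             for img, inv in zip(images, inverses)]), axis=0)
    # Gray -> binary: b[0] = g[0]; b[k] = b[k-1] XOR g[k]
    binary = np.zeros_like(bits)
    binary[0] = bits[0]
    for k in range(1, len(bits)):
        binary[k] = np.logical_xor(binary[k - 1], bits[k])
    # pack the bits (MSB first) into one integer code per pixel
    code = np.zeros((H, W), dtype=int)
    for b in binary:
        code = (code << 1) | b.astype(int)
    code[~valid] = -1
    return code
```

The recovered integer at each pixel identifies which projector column (or row) lit it, which is what makes per-pixel triangulation possible in the next step.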
Step 2 and 3: mesh building and cleaning
Points to Surface: Structured light gives a 3D coordinate for every decoded pixel in the camera image, so I could generate a triangulated mesh based on nearest neighbours; but because the points are sparse, I first needed to interpolate them to decide which ones belonged to the same surface neighbourhood. Due to the noise in the data, I also needed to apply some optimization algorithms to clean and smooth the surface.
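Since structured light yields one 3D point per camera pixel, the triangulation can simply follow the image grid; here is a minimal sketch in Python (illustrative, not the original MATLAB code), where each fully-decoded 2x2 pixel block contributes two triangles:

```python
import numpy as np

def grid_triangles(valid):
    """Connect each 2x2 block of valid pixels in the camera image into two
    triangles; vertex ids are row-major pixel indices."""
    H, W = valid.shape
    tris = []
    for r in range(H - 1):
        for c in range(W - 1):
            a, b = r * W + c, r * W + c + 1            # top-left, top-right
            d, e = (r + 1) * W + c, (r + 1) * W + c + 1  # bottom-left, bottom-right
            if valid[r, c] and valid[r, c + 1] and valid[r + 1, c] and valid[r + 1, c + 1]:
                tris.append((a, b, d))
                tris.append((b, e, d))
    return np.array(tris, dtype=int).reshape(-1, 3)
```

Blocks containing any pruned pixel produce no triangles, which is also why long-edge removal and hole filling are needed afterwards.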
The first technique was threshold-based pruning: first, remove the points outside the known bounding box; second, remove points whose neighbours are further away than a given threshold; and lastly, remove triangles with long edges and fill in the resulting holes. The second technique was smoothing the surface: fit a plane to the neighbours of each point and then project the point onto that plane. My object had well over 100K points, and after pruning bad pixels the count dropped significantly, to approximately 23K for the front and back. The initial cleaning also helped the run time: everything ran far more quickly after pruning than it would have on 100K points of bad data. However, because of the object's reflective surface, the cleaning algorithms could not account for the resulting artefacts and ended up dropping a lot of good points as well.
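The pruning and plane-fit smoothing could be sketched like this (a Python/NumPy illustration rather than the original MATLAB code; the neighbourhood size `k` and the thresholds are placeholder values):

```python
import numpy as np
from scipy.spatial import cKDTree

def prune_points(points, bbox_min, bbox_max, k=8, dist_thresh=0.05):
    """Drop points outside a known bounding box, then drop points whose
    k-th nearest neighbour is further away than dist_thresh."""
    inside = np.all((points >= bbox_min) & (points <= bbox_max), axis=1)
    points = points[inside]
    tree = cKDTree(points)
    d, _ = tree.query(points, k=k + 1)   # k+1: the first hit is the point itself
    return points[d[:, -1] < dist_thresh]

def smooth_points(points, k=8):
    """Fit a plane (via SVD) to each point's neighbours and project the
    point onto that plane."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k + 1)
    out = np.empty_like(points)
    for i, nbrs in enumerate(idx):
        nb = points[nbrs]
        centroid = nb.mean(axis=0)
        # the plane normal is the direction of least variance
        normal = np.linalg.svd(nb - centroid)[2][-1]
        out[i] = points[i] - np.dot(points[i] - centroid, normal) * normal
    return out
```

Both steps are local, which is why pruning first pays off: every k-nearest-neighbour query gets cheaper once the obviously bad points are gone.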
Reconstruction and mesh generation: there was a loss of data, leaving scattered fragments of the object, due to its shiny, reflective surface; on top of that, the shape of the object was a bit complicated. Lots of missing data!
If you compare these recovered meshes with the photos taken, the areas with reflections or “hot spots” from the flash were not recovered during decoding: because the decode algorithm uses a threshold to compare each set of images, it simply threw away all of this otherwise good data.
Here is the recovered mesh for each scan data set as well as an aligned front and back mesh:
Step 4 and 5: Aligning 3D data and surface reconstruction
For mesh alignment I combined rigid-body alignment via the SVD with the Iterative Closest Point (ICP) algorithm, using the distance from each recovered 3D point to its nearest neighbour in the other scan, and followed this loop:
- Estimate corresponding points.
- Compute the rigid-body alignment via the SVD, which brings the matching points closest together by minimizing the mean squared error.
- Repeat until convergence.
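The loop above can be sketched as follows: a minimal Python/NumPy version of point-to-point ICP with an SVD (Kabsch-style) alignment step, not the original MATLAB implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation and translation mapping src onto dst (SVD)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=30):
    """Iteratively: match each src point to its closest dst point,
    solve the rigid alignment by SVD, apply it, and repeat."""
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)               # estimate correspondences
        R, t = best_rigid_transform(cur, dst[idx])
        cur = cur @ R.T + t                    # minimize mean squared error
    return cur
```

ICP only converges to the right answer from a reasonably close starting pose, which is exactly why the user-supplied initial alignment estimate matters.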
My program lets the user provide an initial estimate of the alignment between the scans, and then runs the Iterative Closest Point algorithm.
This was probably the most challenging part of the project for me. Here is a point cloud example:
Used Poisson Surface Reconstruction software to combine the meshes into a single surface.
Shading Results:
Assessment and Evaluation: The biggest challenges in reconstructing this object were its proportions and the reflective surface of the material. Bender's body, feet and head are nice solid shapes to recover, but his arms and legs are very thin in comparison, so after recovering the meshes from all four sides, the arms and legs were so sparse that they caused problems during the alignment process. In the end, the recovered shape worked well for the front top half and poorly for the legs, which actually makes Bender look pretty buff (too much bending, I guess); he ends up looking like a cross between himself and a bulldog.