In late May of 2013 I started thinking about 3D printing and its cousin: 3D reconstruction. 3D printing has been popularized by countless interesting projects on the web: Instructables, YouTube channels dedicated to 3D printing. You can choose materials such as plastic, rubber and even wood! Wood that smells like wood and sounds like wood. It can even have rings, come on, how cool is that? See for yourselves => Hackaday. Now, if you need to replicate a small part and you are not very skilled in 3D modeling, you can let 123D Catch by Autodesk do it for you. If you don’t have a 3D printer, you can instead take the 3D reconstructed object and introduce it into a video clip. The technique is called “match moving”, see Wikipedia for more information.
Let’s just say that you have something you wish to print. Take a few photos of the object, download 123D Catch (click here) and send the photos to the server. It is free, so why not try it out?
The object
To try it out, I used a statue in the Botanical Garden in Gothenburg, Sweden. The statue is close to my home, and I have seen a small version of it at auctions. The statue is of a girl called “vårens huldra”, which according to Google Translate means “spring wood nymph”. It was sculpted by Gunnar Nilsson in 1954. The statue is perfect for 3D reconstruction: it is stationary, easily accessible from all sides and features both smooth surfaces and sharp corners. It is like a “stress test” for the 123D Catch software.
When you first use the 123D Catch software you can see some of the 3D reconstructions other users have performed. Some of them used only about 10 photos with stunning results. So I took about fifteen photos around the statue and went home to see what would happen. Unfortunately, the quality of the resulting mesh and texture turned out to be very low.
So I went back the next day, went for overkill instead and took over 350 photos. Ultimately I couldn’t use all 350: uploading took too long and 123D Catch crashed when I tried, and I was not going to try that again. I finally ended up with about 150 photos. 123D Catch calculated for about 45 minutes and then I got an email saying it had completed its 3D reconstruction. The result was interesting.
What I got was a 3D view of the statue, with all the camera positions and the photos I took superimposed in the render. It looked like the array of cameras used in “The Matrix” (The Matrix on IMDb).
My workflow:
- Take a set of photos of the object, effectively “scanning” it
- Send the photos to the Autodesk server using the 123D Catch software
- Let Autodesk calculate the 3D model and its texture; this may take a while
- Receive an email with a link to the object
- Download the .obj file and post-process the data: fill holes, smooth, optimize, … (a minimal sketch of this step follows below)
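To make the last step more concrete, here is a minimal post-processing sketch. It uses the open-source Open3D library in Python rather than the 3ds Max workflow I actually used, and the file name statue.obj is just a placeholder for the mesh exported from 123D Catch. Note that these operations do not preserve the texture (UV) mapping, and hole filling is left to a dedicated tool.

```python
# Minimal post-processing sketch (pip install open3d).
# "statue.obj" is a placeholder for the mesh downloaded from 123D Catch.
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("statue.obj")
print(f"Loaded {len(mesh.triangles)} triangles")

# Basic cleanup: drop degenerate triangles and duplicated/unused vertices.
mesh.remove_degenerate_triangles()
mesh.remove_duplicated_vertices()
mesh.remove_unreferenced_vertices()

# Taubin smoothing reduces surface noise while shrinking the model
# less than plain Laplacian smoothing would.
mesh = mesh.filter_smooth_taubin(number_of_iterations=10)
mesh.compute_vertex_normals()

o3d.io.write_triangle_mesh("statue_smoothed.obj", mesh)
```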
These are all simple steps, considering that the result is a full 3D reconstruction of the object. Not only that, you also get the texture of the object. 123D Catch does its job pretty well. However, there are some imperfections that I would like to discuss.
The artifacts
Even though I sent over 150 photos of the statue, the software was still not able to see the hole between the arm and the body. How come the other object was reconstructed perfectly from only 15 photos? Shouldn’t more data produce a better 3D object? Do the surroundings affect the outcome?
One imperfection is the texture “bleeding” that occurs. This is clearly visible in the photo below, on the head of the girl. It is due to the lack of photos taken from above the statue. The software can only see what the photos “see”, so grass from the background has been used to fill the gap.
There are also some indentations on the statue, see the photo below. Perhaps this is a manifestation of the background again. I believe it could be because the statue is shiny and reflects its surroundings differently at different angles. At least a part of the hole under the statue’s arm was properly captured, which is good.
An interesting thing is that we get a clean capture under the feet. I was as far away from the feet as from the torso, so why could 123D Catch capture details by the feet better? Perhaps because the background there is closer and of higher resolution than the background around her torso. Also, the faraway background may be “busy” with moving bushes and high-frequency information such as grass; maybe that disturbs the capture, I don’t know.
Another interesting observation is that the photo stitching is clearly visible when looking from above. We can see that the road is not smooth but jagged.
The results
In the download settings in 123D Catch we can tell the server that we wish to download a high-resolution version of the statue. The statue can then be cropped and downloaded as an .obj file. It is better to tell 123D Catch to work in high resolution because it gives us more control over the finished result: we will eventually optimize and smooth the model ourselves and can keep the detail we want, instead of losing that detail before we make the decision. The model consists of about a million triangles and requires some optimization. I used ProOptimize with the “keep textures” option in 3ds Max (Autodesk again) to accomplish this.
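For anyone without 3ds Max, a rough stand-in for the ProOptimize step is quadric edge-collapse decimation. The sketch below again assumes Open3D and placeholder file names; unlike ProOptimize’s “keep textures” option it does not preserve the texture mapping, which is why I did this step in 3ds Max.

```python
# Decimation sketch: reduce the ~1 million triangles from 123D Catch
# to a more manageable count. File names are placeholders.
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("statue_smoothed.obj")
print(f"Before: {len(mesh.triangles)} triangles")

# Collapse edges until roughly 100k triangles remain, keeping the
# geometry that contributes most to the overall shape.
decimated = mesh.simplify_quadric_decimation(target_number_of_triangles=100_000)
decimated.compute_vertex_normals()
print(f"After: {len(decimated.triangles)} triangles")

o3d.io.write_triangle_mesh("statue_decimated.obj", decimated)
```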
I mentioned earlier in this post that I took well over 250 photos in total. The extra photos were mainly close-ups of her feet and arm, because I expected the holes in the statue to prove a challenge for the software.
So I created a new capture and only sent the feet photos to the server. After some time I got the results as usual, and this time I was very surprised by the result: the capture of her feet was near perfect. The reason the feet capture was of higher quality is simply that I focused only on her feet, so the camera was literally closer to them. Naturally, the resolution was much higher than with my previous full-body photos.
The reason I took full-body photos of the statue was to help the software find reference points easily. If the software cannot find reference points, you have to help it by marking points in the photos manually. It just so happened that the photos of the feet were easily reconstructed by 123D Catch. In hindsight, I could have brought something to place close to the statue so the software could find reference points more easily.
To see the dramatic difference in resolution between the feet from my “whole body” capture and the feet from the “close-up” capture, hover the mouse pointer over the photo below, or if you are on a mobile device, just tap the photo itself. It may take a while to download, so please be patient.
The resulting capture is of higher quality, and I believe it would be better to do a statue in sections: by taking fewer photos around the statue and more close-ups we get a higher-resolution model and texture. Hover the mouse pointer over the photo below (or tap it if you are on a mobile device) to see the 3D reconstructed statue rotating.
Conclusions
The resulting 3D reconstructed statue was made from 150 photos and the result is impressive. However, it is far from perfect, as I have pointed out. By using close-up photos we can build an accurate statue in sections.
The result can be stunning, and it only takes a minimal amount of 3D modeling effort. Next time I do something like this, I will take fewer photos around the statue and focus more on close-ups. Also, when I get my hands on a 3D printer I will print this out.