First 3D scans of objects using 123D Catch

Added on February 2, 2013

This semester I am exploring 3D scanning technology to help make it easier for students and faculty to create 3D models and 3D prints of their work in the Art and Art History department. This work gives me the chance to fine-tune and document a friendly workflow that can be opened up to any interested educators and students, and it’s already paying off.

Right now, I am still experimenting with different pieces of software for cleaning and repairing the scanned 3D models, but the overall process will follow the same basic structure:

  1. Scan in an object using 123D Catch.
  2. Clean up the resulting mesh in 123D Catch.
  3. Use Meshmixer to make the mesh manifold and add fun modifications.
  4. Use netfabb to give the mesh a flat bottom for printing and perform any necessary repairs.
  5. Prepare and slice the 3D model using ReplicatorG or MakerWare for 3D printing.
  6. Print the model on the department’s MakerBot Replicator.

As the semester goes on, I’ll be providing more and more detail about these steps and trying to make the process as approachable as possible. I can tell you that the actual 3D scanning process is extremely quick and easy (usually about 10 minutes), and the software is not very difficult to learn. I produced the following three scans and prints within about 24 hours, and if needed, I’m confident I could take an object from scan to print within one hour!

All of these initial scans were performed in a smallish drawing studio with consistent fluorescent lighting. A single small window was letting in some harsh daylight, so I closed the blinds before scanning. Each object was photographed about 40-50 times in circular paths using both a compact camera and a DSLR. I didn’t notice a big enough difference between the models produced by the two cameras to justify expensive equipment, which is great news for people who don’t have access to amazing cameras!

I’m trying a new web app to embed the 3D scans directly into this article, but it may or may not work for you. The app uses the HTML5 canvas element and WebGL to render the models natively in the browser, so you’ll need a browser and machine capable of both.
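As a rough illustration (this is my own sketch, not part of the embedding app), a page can feature-detect WebGL before attempting to show a 3D embed and fall back to static images otherwise:

```javascript
// Hypothetical helper, not taken from the embed app: check whether the
// current environment can create a WebGL rendering context.
function supportsWebGL() {
  // Outside a browser there is no document, hence no WebGL.
  if (typeof document === "undefined") return false;
  const canvas = document.createElement("canvas");
  // "experimental-webgl" covers older browsers of this era.
  return Boolean(
    canvas.getContext("webgl") || canvas.getContext("experimental-webgl")
  );
}
```

A page could call `supportsWebGL()` on load and swap in a plain photo gallery when it returns false, so readers without capable hardware still see something.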

Glass “sea foot” piece

The first piece scanned was a glass piece by glassblowing student Kenny Galusha that resembles an underwater creature of some kind. Kenny and I have been collaborating in recent months on lighting up some of his pieces with high-power RGB LEDs, so it seemed obvious that his work should be scanned for future experimentation.

You can download and print your own copy of this object at

[flickr-gallery mode="photoset" photoset="72157632657680211"]

Small clay “guinea pig aviator” model

One of the main goals of exploring 3D scanning technology this semester is to be able to scan in small clay models produced by students. These models are often delicate and somewhat ephemeral, and would really benefit from 3D scanning and printing. This model was textured beautifully by 123D Catch, but many of the fine details (like the small hair-like indentations and the rivets on the goggles) didn’t come through well. I suspect the model itself would benefit from added surface texture (like speckled paint) to help the software discern finer features.

[flickr-gallery mode="photoset" photoset="72157632657674543"]

Cast bronze cube of gears (made by me!)

Finally, I tried scanning a slightly reflective object with a large amount of fine detail just to see what would happen. Using the compact camera, the 3D model was decently textured but poorly defined. Using the DSLR, however, the model was far more detailed and well defined. I was really surprised at how well this worked, and I’m looking forward to future scans.

You can download and print your own copy of this object at

[flickr-gallery mode="photoset" photoset="72157632661786088"]