Can you 3D print a lightbulb?

In my last post I discussed the difficulties of scanning a light bulb. It's transparent, and this causes quite a few problems with 3D scanning technology.


But we managed, sort of, so now I'm going to discuss the process of turning the digital scan into a 3D printed object.

The chances are you already know all about 3D printing and the potential it offers museums in numerous ways. If not, Liz Neely and Miriam Langer's Museums and the Web 2013 paper Please Feel the Museum: The Emergence of 3D Printing and Scanning is a good place to start, as is the Collections Trust post 'Are we ready for 3D printing?'

The 3D scanning of a historical light bulb from the UCL Science and Engineering collection raised a number of the challenges that museums face when wanting to scan their objects, but we held a rather 'suck it and see' attitude and wanted to test out the possibilities: a quick and dirty action research approach as a means to understand the technological potential.

A point of note: 3D printing really involves three steps: scanning, modelling and printing. Due to our novice status in the 3D printing world (and the time we had available), we pretty much skipped the modelling phase. Hindsight suggests we shouldn't have done that. The scans we produced were not faultless, but scans can be cleaned up using free tools. A good example is MeshMixer, a free download from Autodesk, which has built-in tools to identify gaps in a scanned mesh and can auto-fix them (see my earlier post about my 3D printed head, where MeshMixer was used to fix gaps in the models).
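Tools like MeshMixer handle this interactively, but the underlying idea of spotting a gap is quite simple: in a watertight mesh every edge is shared by exactly two triangles, so any edge belonging to only one triangle sits on the rim of a hole. A minimal illustrative sketch in Python (the tiny mesh here is invented for the example, not one of our scans):

```python
from collections import Counter

def boundary_edges(faces):
    """Return the edges that belong to exactly one triangle.

    In a watertight mesh every edge is shared by two faces, so an
    edge counted only once lies on the rim of a hole in the scan."""
    counts = Counter()
    for a, b, c in faces:
        for edge in ((a, b), (b, c), (a, c)):
            counts[tuple(sorted(edge))] += 1
    return sorted(e for e, n in counts.items() if n == 1)

# A tetrahedron with one face missing: the hole's rim is exactly
# the three edges of the absent face (1, 2, 3).
faces = [(0, 1, 2), (0, 1, 3), (0, 2, 3)]
print(boundary_edges(faces))  # [(1, 2), (1, 3), (2, 3)]
```

Once the rim of a hole has been traced like this, 'auto-fixing' is essentially a matter of stitching new triangles across it, which is what MeshMixer's repair tools do for you.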

The Printer:


We used a MakerBot Replicator 2 to print our light bulbs. UCL CASA kindly let us experiment with their Replicator 2. If you look carefully you should be able to see my 3D printed head in the photo.

The Replicator 2 uses heated plastic as its raw printing material and recreates objects from the scanned mesh files: a moving nozzle melts the plastic and repeatedly plots layers of molten plastic on top of each other until the object is built up.
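Before any plastic flows, the printer's software slices the mesh into those horizontal layers: each layer is just the set of line segments where the mesh's triangles cross a given height. A rough Python sketch of that slicing step for a single triangle (the coordinates are made up for the example, and vertices lying exactly on the plane are ignored in this simplified version):

```python
def slice_triangle(tri, z):
    """Intersect one triangle with the horizontal plane at height z.

    Returns the crossing points found on the triangle's edges; for a
    triangle that straddles the plane this gives the two endpoints of
    one segment of the layer outline."""
    points = []
    for i in range(3):
        (x0, y0, z0), (x1, y1, z1) = tri[i], tri[(i + 1) % 3]
        if (z0 - z) * (z1 - z) < 0:  # edge straddles the plane
            t = (z - z0) / (z1 - z0)  # linear interpolation along the edge
            points.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0), z))
    return points

# One upright triangle, sliced halfway up its height.
tri = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (0.0, 0.0, 2.0)]
print(slice_triangle(tri, 1.0))  # [(1.0, 0.0, 1.0), (0.0, 0.0, 1.0)]
```

A real slicer does this for every triangle at every layer height, chains the segments into closed outlines, and then converts them into the nozzle's toolpath.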


3D printing is a fairly temperamental process. Because the plastic reaches incredibly high temperatures and the printer has to run for a long time, it can be dangerous to leave it unattended. So you can spend a long time watching the printing process, which is fun to start off with, but soon gets really, really dull. This light bulb took about 3 hours to print.


The final product:


Overall we were quite impressed with the final product. It isn't a bad representation. I still don't think we can call it a replica, but it definitely looks like a lightbulb.

Once we had our printed objects, we did a couple of workshops in the Grant Museum and Sidmouth Museum, and talked to lots of people about 3D printing and whether creating new objects in this way can encourage a closer inspection and deeper understanding of historical museum objects. Overall, the visitors we spoke to were really interested in the process of 3D printing; many had heard of it, but had not seen, or more importantly held, a 3D printed object in real life. But does 3D printing create a deeper level of engagement with the original museum object? It certainly provoked visitors to look closely at both the original and the 3D printed object, but I'm not sure a deeper understanding of the historical objects was reached. There is definite potential there, though, which we want to explore in the future.

Despite the relatively rough and ready approach, we really learnt a lot, and the process provoked a lot of questions about how museums are responding to 3D printing.

How hard is it to 3D scan a lightbulb? Part 1


Last year, whilst curating the temporary Digital Frontiers exhibition in the brilliant Octagon gallery at UCL, Nick Booth (UCL curator of the science and engineering collections) and I became a bit obsessed with light bulbs. From this slightly odd obsession, and thanks to a kind research grant from the Institute of Making, Nick and I get to play with light bulbs and call it research.

We're looking at the process of materials and making, using 3D scanning and printing to see if creating new objects encourages a closer inspection and deeper understanding of historical objects – in our case light bulbs. Neither Nick nor I claim to be experts in 3D scanning museum objects; we wanted to see what we could do with the bare minimum of training on the devices. How easy is it for a relatively normal person to scan and print museum objects?

Firstly we played with 3D scanning.   In the next blog post I’ll talk about the process of 3D printing.

We wanted to look at different ways we could scan and create a 3D mesh of a light bulb. We tried two main ways of scanning: first using easily accessible and relatively cheap (in this case free) technology, 123D Catch, and then having a go with a NextEngine scanner.

123D Catch is a free application from Autodesk that enables you to take a series of photos and turn them into 3D models. We used the handy iPhone app. It works by taking multiple digital photos shot around a stationary object and submitting them to a cloud-based server for processing. The images are then stitched together to produce a 3D model.

NextEngine is a desktop 3D scanner which captures 3D objects in full colour with multi-laser precision.

We knew a light bulb wasn’t going to be easy to scan because scanners don’t tend to like transparent, shiny or mirrored objects.  But we thought we’d have a go anyway.

Scanning transparent objects in practice

Before we experimented with some of the historical science and engineering collection, we used a normal, everyday bulb to see what worked and what didn't.

Firstly 123D catch

And now NextEngine


As you can see, the fitting shows up pretty well, but the glass bulb itself really doesn't work, and 123D Catch has gone completely funny. So after a bit of googling, tweeting and advice from the 3D pros at UCL, we decided to try to disguise the transparency with talcum powder.

So here are the versions with talc.

123D catch



Which amazingly worked!

After checking with conservation, we decided to cover the historical lightbulb in talc and try scanning that. Here are the results:

123D catch



The NextEngine scan is pretty good. It doesn’t quite capture the peak at the top of the bulb, but it isn’t a bad representation.  I don’t think we can quite call it a replica but it definitely looks like a lightbulb.

Obviously, covering a historical object in talc throws up a lot of questions about how museums could utilise 3D scanning if they have to cover delicate and fragile glass objects in powder to get an adequate scan. We'd be really interested to hear if anyone has found a more conservation-friendly way of dealing with transparent objects without having to coat them in talc.

It also brings up questions about how accurate 3D representations of museum objects should be.  Should they be identical? Or is an approximate object acceptable?

PhD Acknowledgments


It’s been quiet on the blog front for some time, mostly due to the small matter of finishing my PhD. On Friday the 20th June 2014 I successfully completed my PhD viva and I can happily say that I passed with minor corrections!

It's going to be a while yet before my thesis is available online, and there are far too many thank-yous to fit into a tweet, so I thought I would share the acknowledgements section of my thesis.

Over the long course of completing this thesis, many people contributed to this research project in innumerable ways, and I am grateful to all of them.

I should like, first of all, to thank the Provost Strategic Development Fund (PSDF) for its support in funding this PhD, one of the first ever doctoral awards for the UCL Centre for Digital Humanities (UCLDH); without it I would not have been able to undertake this research. My heartfelt gratitude goes to Professor Melissa Terras, my academic supervisor and an extraordinary mentor and friend, who has been a constant source of inspiration. Not only did Melissa's understanding of my ideas around this research often exceed my own capability to articulate them, but her advice, support and nit-picking have managed to guide my sporadic thoughts into a scholarly work. Moreover, I consider myself incredibly fortunate to have Professor Claire Warwick as my second supervisor. I would like to thank Claire for her support during the undertaking of this research. I am thankful not only for her shrewd and insightful remarks but also for reminding me to believe in myself when things got too overwhelming.

Both Melissa and Claire also gave me the opportunity to learn important research and networking skills during my time on the Linksphere project and throughout my time at UCLDH which proved indispensable when carrying out my own work. Because of both of these fantastic mentors, I have developed the abilities and skills to question myself, my research, and to focus on achieving to the highest standard.

My case studies were possible only through the vital support and documentation provided by their host institutions, and I am especially indebted to the individuals within and outside those organisations who gave their time, advice and encouragement. I am grateful to all the museum staff and management at The Grant Museum of Zoology, Imperial War Museum London and Imperial War Museum North who offered information and hospitality while I was conducting my fieldwork and gathering data. At the Grant Museum I owe a particular debt to Jack Ashby, but would also like to thank Mark Carnall for his input and advice. This thesis could not have been completed without the assistance of Carolyn Royston and Jeremy Ottenvanger from Imperial War Museums, and of Jane Audas and Tom Grinsted, whose good humour and friendship got the Social Interpretation project off the ground.

I am especially indebted to the individuals within the UCL Centre for Advanced Spatial Analysis, without whom QRator wouldn't exist. A huge thank you goes to Steve Gray, for being the best developer I know and for sharing my eccentric sense of humour. Additional thanks go to Dr Andy Hudson Smith, who provided me with helpful comments on my work as well as an external perspective, which proved invaluable.

My great tower of strength throughout this research has been my friends and my family, who have given me love, help, and an important sense of perspective. Most of all, I thank my parents, whose support and encouragement throughout has been never-ending. Their words of wisdom and constant supply of love, support and reassurance have made me who I am today. Finally, to my soon-to-be husband, Matt, whose patience, and the sacrifices he has made so that I could complete this work, have been vast. I would like to dedicate this thesis to him, my biggest critic, best friend, supporter and proof-reader, and with whom this whole adventure began.