Print directly to 3D Hubs

I get lots of people asking me to print their lithophanes for them as they don’t have a printer.  They can be all over the world, so this is a little impractical.

3D Hubs recently opened their API to external partners, so it seemed like a good idea to integrate their printing service directly into the lithophane app.

Now, when you have a model generated, a “Print Now” icon will appear in the top corner of the model viewer and you can click on it to upload and print your model right away.

Litho3DHubs

Once you have uploaded your model, you will be redirected to the 3DHubs website and your model will be in your shopping basket.

Litho3DHubs2

Select where you want it printed, pay and you’re done. Instant gratification 🙂

 

 

SOPHAY now live

Image1
My daughter’s art will now be showcased on her website

http://Sophay.com

For anyone who’s interested.

Changes to add Angular and Bootstrap

Ok, it’s been a while since I updated the blog, sorry for being quiet, I’ve been thinking…

I’ve changed a few things about the lithophane site to make it more adaptable across platforms, i.e. different browsers, operating systems and mobile devices including phones.

LothoAppDome

I started out using Angular to simplify some of the repeated sections and data binding, then I added Bootstrap to make the UI consistent across all the platforms.  This meant I had to include jQuery as well.

I must admit, it’s all getting a bit heavy for a single page app that just converts one file to another!

I will revise this: I can conditionally select the bits of Bootstrap that I use, and I think I can safely remove some other bits and pieces that aren’t used (and probably never will be).  This will happen over the coming weeks.

I have added a few new features though: the ability to repeat the image across the surface in both the X and Y directions (you can also mirror and flip alternate images so that the edges tile cleanly); the option to choose whether the model updates automatically each time you click an image and whether the model downloads after each update; AND the option to output ASCII STL files if you’re using IE and you like large files.
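
The mirrored tiling idea can be sketched as a mapping from an output coordinate back to a source pixel (this is my own sketch of the technique, not the app’s actual code):

```javascript
// Hedged sketch: map an output coordinate onto the source image so that
// alternate repeats are mirrored and adjacent edges line up exactly.
function mirroredIndex(outCoord, srcSize) {
  var tile = Math.floor(outCoord / srcSize); // which repeat we are in
  var offset = outCoord % srcSize;           // position within that repeat
  // odd-numbered tiles are mirrored so the edges of neighbours match
  return (tile % 2 === 0) ? offset : srcSize - 1 - offset;
}
```

For a 10-pixel-wide source, output coordinates 9 and 10 both map to source pixel 9, so the seam between tiles is continuous.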

The other thing I added is the ability to place the lithophane on a rectangular pillow, a dome or around a heart – like you do…

This image was wrapped “around” the dome in the screen shot above:

celtic3

Hope you like the changes and that it’s not too confusing.

http://3dp.rocks/lithophane

P.S. I’ve moved all the text and settings into a separate file so that it can be translated to another language. Let me know if you’d like any help doing this and I’ll send you some information.

Cylindrical Lithophanes for 360° Panorama Images

We have our rectangular flat lithophanes for putting in the window.

Now we’re going to bend the lithophane surface so that we can produce curved ones including bending it 360 degrees into a cylinder for panoramic photos.

We can then produce single panoramic cylinders for 360 degree images or put a series of lithophanes in a circle to produce a cylindrical photo album.

photosLitho

It doesn’t take much imagination to see how curved lithophanes can be incorporated into lamps, rotating displays, light shades, dioramas, printing wheels, embossing tools, etc.

So, how do we convert our flat lithophanes into curved ones? Well, it doesn’t take much. We have a series of x,y,z points on the surface of our model; if we push the x and z coordinates off the flat by adding an offset based on a curve that meets our dimensions, we’ll have a curved lithophane.

There are a few things we need to calculate. If, for instance, we want to produce a lithophane that occupies a 30 degree arc, we need to know where the centre of that arc would be and the radius of the arc, so that we can calculate the offset to apply to each point on the surface.

CurveCalc

So, in the diagram above, (a) is the angle in degrees and (w) is the width of the flat lithophane. To work out where the centre of the arc is, we need (d) – the distance between the lithophane surface and the centre of the arc.  We can calculate (d) once we know the radius of the arc (r), and we can get both with the following formulas:

var arcRadius = (width/angle) * (180/Math.PI);
var distanceFromFlat = Math.sin(angle * (Math.PI/360)) * arcRadius; // sin of half the angle, in radians
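
As a quick worked example (the 100 mm width and 90° angle are made-up numbers, and the half-angle reading of the second formula is my interpretation of the diagram):

```javascript
// Worked example with hypothetical numbers: a 100 mm wide lithophane
// bent through a 90 degree arc.
function curveParams(width, angleDegrees) {
  var angleRadians = angleDegrees * (Math.PI / 180);
  var arcRadius = width / angleRadians; // arc length = radius * angle (radians)
  // distance term using the half-angle, as in the formula above
  var distanceFromFlat = Math.sin(angleRadians / 2) * arcRadius;
  return { arcRadius: arcRadius, distanceFromFlat: distanceFromFlat };
}

var p = curveParams(100, 90); // arcRadius comes out at about 63.66 mm
```

The sanity check is that bending the 100 mm strip around this radius covers exactly 90 degrees of circumference.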

Now that we know these, we can calculate the offset of each pixel in the lithophane using cos and sin just as if calculating the points on a circle:

//circle pseudo function
for (angle=0 to 360) { 
    x = centerX + radius * sin(angle); 
    y = centerY + radius * cos(angle);
}

Putting what we have together:

 var magnitude = heightData[index] + arcRadius;
 x = width/2          + magnitude * Math.sin(rotation);
 z = distanceFromFlat + magnitude * Math.cos(rotation);

Adding these lines to the processVectors routine that calculates the vectors from the 2D height map allows us to create curved lithophanes; the only thing we need to pass in addition to the original parameters is the curve.

We can add a little extra functionality by allowing the curve to be negative as well as positive.  This lets us produce curved lithophanes with the detail on either the inside or the outside.

var arcRadius=(width/curve)*(180/Math.PI);
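
Putting the whole thing together as a runnable sketch (the loop structure, the rotation mapping and the half-angle term are my assumptions, not the actual processVectors code):

```javascript
// Hedged sketch: bend one row of height samples (in mm) around an arc.
// 'curve' is the arc angle in degrees; a negative value flips the
// texture to the other side of the curve.
function bendRow(heightData, width, curve) {
  var arcRadius = (width / curve) * (180 / Math.PI);
  var distanceFromFlat = Math.sin(curve * (Math.PI / 360)) * arcRadius;
  var points = [];
  for (var i = 0; i < heightData.length; i++) {
    var flatX = (i / (heightData.length - 1)) * width;
    // spread the flat x positions across the arc, centred on rotation = 0
    var rotation = ((flatX / width) - 0.5) * curve * (Math.PI / 180);
    var magnitude = heightData[i] + arcRadius;
    points.push({
      x: width / 2 + magnitude * Math.sin(rotation),
      z: distanceFromFlat + magnitude * Math.cos(rotation)
    });
  }
  return points;
}
```

With a flat row (all heights zero) the points land on a circle of radius arcRadius, symmetrical about the middle point at x = width/2.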

TrafLitho                                bbLitho

I’ve added a curve parameter to the UI with values of Flat, 30, 45, 90, 120, 135, 180, 270 and 360 degrees, inner and outer (inner – texture on the inside of the curve; outer – texture on the outside):

CurvedLithoUI

http://3dp.rocks/lithophane

I hope you find this useful, my daughter has had all sorts of ideas regarding incorporating images into 3D designs, so there will be more to come.

Maybe you can produce a series of photos for a loved one arranged into a cylinder with an LED candle, just in time to save having to think of an alternative by February 14th?  Or maybe make a roller for lino-style printing; there are so many cylindrical things you can make better with a little texture.

I suppose I’d better finish off lithophanes with domes & spheres next time – until then, here’s a quick project:

Draw a pattern on a strip:

pic1

Take a picture and crop the part of the image you want to use:

pic2

Create an LED tea light holder:

pic5

What does Vectors per Pixel mean?

 

I’ve been asked to explain the “Vectors per Pixel:” parameter.

Imagine that you have a picture 30 by 30 pixels that is white with a black dot in the middle.  The dot is 2 by 2 pixels. Like this one:

VPerBasePixel

If you were to reproduce this as a lithophane of the same dimensions, i.e. 1 pixel = 1 mm (30mm x 30mm), at 1 Vector per Pixel, a grid of 30 by 30 points would be created in the X and Y planes. Each point (called a vector) would be placed in the centre of its pixel, and its Z (height) would be set based upon the inverse brightness of the original image.

raisedDot

Each of those points will then be converted into a surface of connected triangles.

Below, you can see the effect of going from 1 Vector per Pixel up to 5 Vectors per Pixel:

VPerBasePixelAll

For square areas such as our 2 by 2 pixel black dot, 1 Vector per Pixel looks OK, but it is often too coarse for more complex images.  Although 5 Vectors per Pixel looks best, this quality is unlikely to be achieved by an FDM printer, and the number of triangles is very large, causing the STL file size and the processing time to increase enormously.
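
To see why the file grows so fast, here is a rough back-of-the-envelope estimate (my own, not taken from the app): the top surface has two triangles per grid square, and a binary STL file costs 50 bytes per triangle after an 84-byte header.

```javascript
// Rough size estimate for the top surface of a lithophane.
// A binary STL file is an 80-byte header, a 4-byte triangle count,
// then 50 bytes per triangle.
function estimateStlBytes(pixelsX, pixelsY, vectorsPerPixel) {
  var gridX = pixelsX * vectorsPerPixel;
  var gridY = pixelsY * vectorsPerPixel;
  var triangles = 2 * (gridX - 1) * (gridY - 1); // 2 triangles per grid square
  return 84 + triangles * 50;
}

// our 30 x 30 pixel example at 1 vs 5 Vectors per Pixel
var small = estimateStlBytes(30, 30, 1); // roughly 84 kB
var large = estimateStlBytes(30, 30, 5); // roughly 2.2 MB
```

Going from 1 to 5 Vectors per Pixel multiplies the triangle count (and the file) by roughly 25.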

The output quality also depends on the original image and the output size you select, so it’s not possible for me to specify exactly what settings you should use, it all comes down to your preferences, the printer being used, how you are going to print the lithophane and how long you are prepared to wait to process the image into a print.

Hope that clears things up 🙂

Recognising Technical Debt

When people say that you can learn to ‘code in a day’, I think it’s like telling someone they can learn a new language in a day.  You might learn a few phrases and get a feeling for how it might suit you, but you are hardly going to translate War and Peace into Cherokee after a day’s tutorial.

One concept often skimmed over when learning how to program is “Technical Debt”.  Technical debt refers to all the temporary expedients used when programming a task that help you deliver it quickly and allow for experimentation and research, but don’t meet basic standards for maintainability, reuse or stability.

Technical debt builds up while you program and must be paid back at some point (generally re-factoring and documenting). If technical debt is allowed to build too high, the software becomes unmaintainable and eventually becomes so bad that it would be quicker to re-write it than fix it.

Technical debt is often overlooked when shipping software: you get the product just about ready, someone sells it and you ship it. Nobody wants to spend any more money fixing things that don’t appear to be broken, and if there are too many areas to correct, you won’t be able to spend anywhere near the amount of time needed to fix the things that nobody seems to care about – until the bugs start piling up and someone goes looking for who to blame.

The trick is to regularly factor technical debt into your development plans, you know it’s going to happen, you know it shouldn’t get out of hand and if you allocate time for it (usually after a feature delivery, before the next features are added) then nobody complains about the time it takes and the resources it consumes.  At the end of the day, you will have a product that you’re proud of and anyone who ends up looking after it or building upon it will thank you for your effort (sometimes that will also be you, some years/months down the line).

So, why am I telling you this? Well, I have just put together the lithophane utility. It works, people are using it and it’s just a quick hack, so I could move on to the next thing. BUT, there are some things that don’t quite work as I’d like, some pieces of code that I cut and pasted and squeezed into the rest of the code don’t comply with the naming conventions or code style I’ve used, and there is a big problem with the structure of the code in terms of reuse and readability – much of it is a long way from good practice.  I will quite probably re-use much of this code in the next utility, so it’s important that it is a stable platform to work on, and as one of the intentions is that people should be able to adapt it, it needs to be readable, well-structured and documented. After all, we all have to pay the piper.

So for the next few days, I’m going to re-factor and document – pay back some of the technical debt. If I think there’s anything significant in the changes, I’ll write it up for you to read.

Utility #1 – Lithophane from image

To pull all the work so far together, I’ve added a simple user interface:

Litho3

With panels to allow you to set the output dimensions and quality, view the monochrome 2D brightness reference image, view and zoom around the generated lithophane, and drag & drop images from your computer.

You can set:

  • Max Size – the largest X or Y dimension of the output lithophane.
  • Thickness – the maximum Z dimension of the output lithophane.
  • Border – the thickness of the border around the edge
  • Thinnest layer – This is the layer thickness for the brightest pixels in the image
  • Vectors per pixel – each of the pixels in the image is translated into a number of 3D points on the surface of the lithophane; the larger this number, the more detailed the output (and the larger the STL file and the slower the processing). 2 is a good value; you can go up to 5, but it will take time and use memory…
  • Base/Stand depth – I added this for RichRap; he likes to have a small stand on the base when printing vertically.  I haven’t used it, as all the test prints I performed stood on their edges quite happily without a stand.

Once you click on an image that you have dropped on the lower panel, the progress bar is updated to show you the conversion progress and once it has displayed the 3D view, the software converts the data into an STL file and initialises a download.

The steps are:

  • 2D processing – converting the image to a brightness monochrome image
  • Processing Vectors – adding each of the points to the 3D mesh
  • Processing Faces – adding each of the triangles (2 per square)
  • Processing Surface –  adding the features that allow light to reflect off the surface
  • Adding to scene – putting it into the three.js scene for viewing
  • Creating STL file –  Arranging the Vectors and Faces onto a binary STL format
  • Downloading – initialising a download of the STL Blob.
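
As background for the “Creating STL file” step, here is a hedged sketch of the binary STL layout (this is the standard format, but not the app’s actual code):

```javascript
// Hedged sketch of the binary STL layout: an 80-byte header, a 4-byte
// little-endian triangle count, then 50 bytes per triangle (normal plus
// 3 vertices as 32-bit floats, plus a 2-byte attribute count).
function createBinarySTL(triangles) {
  // triangles: array of { normal: [x,y,z], vertices: [[x,y,z] x 3] }
  var buffer = new ArrayBuffer(84 + triangles.length * 50);
  var view = new DataView(buffer);
  view.setUint32(80, triangles.length, true); // little-endian count
  var offset = 84;
  triangles.forEach(function (tri) {
    var floats = tri.normal.concat(tri.vertices[0], tri.vertices[1], tri.vertices[2]);
    floats.forEach(function (value) {
      view.setFloat32(offset, value, true);
      offset += 4;
    });
    offset += 2; // attribute byte count, normally zero
  });
  return buffer;
}
```

Because every triangle costs a fixed 50 bytes, the binary form is far smaller than the ASCII STL mentioned earlier.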

All of this happens on your machine with nothing being transferred over the internet, so it’s quick and private with no cloud access or spam emails to worry about.

My daughter is happy with the first tool I’ve made for her and I have received some external feedback that is positive as well, which is very nice to get 🙂

Use it here

The source is available here if you’re interested in the details.

Oh, and here’s a picture of a printed Lithophane:

LithoMotif1

 

Quick update:

People are having some trouble with the parameters. The program will only accept the values shown below; if you enter another value, it will be shown in red and ignored:

Output Dimension – between 1 mm and 1000 mm
Thickness – between 1 mm and 100 mm
Border Thickness – between 0 mm and Output Dimension / 2
Thinnest layer – between 0.1 mm and Thickness In MM
Vectors per pixel – between 1 and 5
Base/Stand depth between -50 mm and 50 mm (negative sticks out the back)

The Stand will be the same thickness as the border, unless the border is 0 in which case it will be 2mm thick.
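
The checks above could be sketched like this (a hypothetical validateParams helper, not the app’s actual validation code):

```javascript
// Hedged sketch of the parameter checks described above.
// Returns a list of error messages; an empty list means all values are valid.
function validateParams(p) {
  var errors = [];
  if (p.outputDimension < 1 || p.outputDimension > 1000)
    errors.push("Output Dimension must be between 1 mm and 1000 mm");
  if (p.thickness < 1 || p.thickness > 100)
    errors.push("Thickness must be between 1 mm and 100 mm");
  if (p.border < 0 || p.border > p.outputDimension / 2)
    errors.push("Border Thickness must be between 0 mm and Output Dimension / 2");
  if (p.thinnestLayer < 0.1 || p.thinnestLayer > p.thickness)
    errors.push("Thinnest layer must be between 0.1 mm and Thickness");
  if (p.vectorsPerPixel < 1 || p.vectorsPerPixel > 5)
    errors.push("Vectors per pixel must be between 1 and 5");
  if (p.baseDepth < -50 || p.baseDepth > 50)
    errors.push("Base/Stand depth must be between -50 mm and 50 mm");
  return errors;
}
```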

 

OK, 3D here we come

You’ll note that in the last post, we converted the image to monochrome, but the final step was left tantalisingly empty:

function setLevels(heightData, width, height) {
    // TODO - create 3D data from height data
}

So, let’s finish off and populate the function to produce a lithophane.

I’m going to use the 3D JavaScript library three.js, it’s easy to use and open source, so it’s a good candidate and will allow us to display and manipulate the 3D structures required.

There is a type of geometry in three.js called ParametricGeometry that lends itself very well to our task.  ParametricGeometry creates a planar structure with a set width and depth, but calls a function for each point on that plane to determine its height, like a 3D landscape with hills and valleys.

lithoFace=new THREE.ParametricGeometry(getPoint, width, height);

The function called (for each point in the plane) takes two arguments, u and v, each of which is a value between 0 and 1, where 0 is the far left/top of the image and 1 is the far right/bottom. Given these parameters, the function should create and return a 3D vector containing the x, y and z coordinates of the referenced point.

function getPoint(u,v) {
   // clamp the 0..1 parametric coordinates to valid integer pixel indices
   var x = Math.min(Math.round(width*u), width-1);
   var y = Math.min(Math.round(height*v), height-1);
   // use the height data collected from the image
   // to return a height for each pixel
   return new THREE.Vector3 (x,y,heightData[width*y+x]);
}

lithoFace now contains the geometry of our negative monochrome image and can be displayed by three.js (if a material is added):

var lithoMaterial = new THREE.MeshBasicMaterial( { color: 0x3030C0 }); 
var lithoMesh = new THREE.Mesh (lithoFace ,lithoMaterial);

scene.add(lithoMesh);

Which gives us a beautiful 3 Dimensional lithophane of Holbein’s masterpiece:

HenryLitho

Flipping this over (as we want to print it in negative and have the light shining through it) and adding 5 more planes to this face makes it an enclosed box. We can then use the internal structures of the Geometry to export a simple ASCII .STL string that can be saved to a file:

function vertexAsString(vert){
   return vert.x+" "+vert.y+" "+vert.z;
}
function generateSTL(geometry,name) {
   var vertices = geometry.vertices;
   var faces = geometry.faces;
   var stl = "solid "+name+"\n";
   for(var i = 0; i < faces.length; i++){
      stl += "facet normal "+vertexAsString( faces[i].normal )+" \n";
      stl += "outer loop \n";
      stl += "vertex "+vertexAsString( vertices[ faces[i].a ])+" \n";
      stl += "vertex "+vertexAsString( vertices[ faces[i].b ])+" \n";
      stl += "vertex "+vertexAsString( vertices[ faces[i].c ])+" \n";
      stl += "endloop \n";
      stl += "endfacet \n";
   }
   stl += ("endsolid "+name+"\n");
   return stl;
}

This string can be passed back to the browser as if a download link had been clicked, to save the .stl to the local hard disk:

function saveSTL( geometry, name ){ 
 var stlString = generateSTL( geometry,name );
 
 var blob = new Blob([stlString], {type: 'text/plain'});
 
 saveAs(blob, name + '.stl'); // add the .STL extension
}
function saveAs(blob,name) {
 var downloadLink = document.createElement("a");
 downloadLink.download = name;
 downloadLink.innerHTML = "Download File";
 if (window.webkitURL) {
 // Chrome allows the link to be clicked
 // without actually adding it to the DOM.
   downloadLink.href = window.webkitURL.createObjectURL(blob);
 }
 else {
 // Firefox requires the link to be added to the DOM
 // before it can be clicked.
   downloadLink.href = window.URL.createObjectURL(blob);
   downloadLink.onclick = destroyClickedElement;
   downloadLink.style.display = "none";
   document.body.appendChild(downloadLink);
 }
 downloadLink.click();
}

That’s it! You have a simple program written in JavaScript that runs in your browser and operates on local files.

The finished version allows you to drag an image from the desktop and sends back the Lithophane as an STL to your download folder.

http://3dp.rocks/lithophane/

Please give it a try and let me know what you think.

I’ll be tackling placing the image on the surface of a shape other than a plane in another post, so come back for more soon…

So, how do you convert an image to a Lithophane?

In short, you take your image, convert it to monochrome, invert it and then use the brightness of each pixel as the height of the output.

To elaborate:

ColourToInvGrey

Load a coloured image, convert the brightness of each pixel to monochrome and then use its negative.

ColourToInvGreyPix

If we look at a pixel in the middle of Henry VIII’s forehead, we can see that the pixel has the values of C5 for Red, A6 for Green and 9B for Blue; that is 77% of full brightness on the red channel, 65% on the green channel and 61% on the blue.

To convert this we could average the three values and put that average into each channel, but the human eye sees each of the colours slightly differently, green appearing much brighter than blue for instance. In fact we perceive brightness in approximately the proportions R=30%, G=59%, B=11%, so multiplying the colour channels by 0.3, 0.59 and 0.11 respectively and adding them together gives us a much more realistic monochrome image.
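
Plugging the forehead pixel into those weights (using the more precise 0.299/0.587/0.114 coefficients that appear in the code later):

```javascript
// Convert one RGB pixel to a perceptual brightness value using the
// luma weights discussed above.
function toLuminance(r, g, b) {
  return Math.round(r * 0.299 + g * 0.587 + b * 0.114);
}

// the pixel from Henry VIII's forehead: 0xC5, 0xA6, 0x9B
var brightness = toLuminance(0xC5, 0xA6, 0x9B); // 174 out of 255
```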

You’ll note in the second image that the red, green and blue channels are now all the same, and we have effectively reduced the image from 24 to 8 bits in resolution; this will help save some space and speed up processing.

See http://en.wikipedia.org/wiki/Grayscale for more information on RGB to Mono conversion.

As we are going to be passing light through our image, we need the lithophane to be more opaque where the image is darker and more transparent where it is lighter, i.e. thicker = darker, thinner = lighter.

The final stage of 2D processing is to invert each pixel: where it is white (highest value = 255) we need thin plastic, and where it is black (lowest value = 0) we need thick plastic. As each channel is now the same, we can use one channel’s value; subtracting each brightness from 255 gives us a level from 0-255 in the same form as needed for the output. We can then multiply the resultant value by a scale to set the maximum lithophane thickness and add an offset for the minimum lithophane thickness, for example:

thickness = (inverse_pixel_brightness * 0.02mm) + 0.2mm

Will give us a lithophane that is 0.2mm thick at its thinnest point and 5.3mm thick at its thickest.
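
Checking the arithmetic as a tiny helper (the 0.02 mm scale and 0.2 mm offset are just the example values above):

```javascript
// Map an inverted brightness value (0-255) to a lithophane thickness in mm.
function pixelThickness(inverseBrightness, scalePerLevel, minThickness) {
  return inverseBrightness * scalePerLevel + minThickness;
}

var thinnest = pixelThickness(0, 0.02, 0.2);   // pure white: 0.2 mm
var thickest = pixelThickness(255, 0.02, 0.2); // pure black: 255 * 0.02 + 0.2 = 5.3 mm
```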

We can now directly translate those pixel values into 3-dimensional coordinates for our lithophane. We have two options regarding how our lithophane is structured: we can use a cube to represent each pixel, setting the Z-height of the cube based on the inverse brightness value for each X/Y pixel coordinate:

ColourToInvGreyPixBoxLevels

Or, we can create a surface with smooth transitions between the levels of each pixel:

ColourToInvGreyPixTexLevels

Which version you use will depend on your image and how “rough” it is. Although the smooth version looks nicest, you may find that lots of small high or low features become very thin spikes that do not print well.  On the other hand, the array of cube pixel towers may take a long time to slice. We will develop both of these methods and see how they look in practice.

So let’s start by writing the code for the 2D image processing.

<body>
  <img src="mountain.jpg" onclick="onImageClicked(event);">
  <canvas id="outputcanvas" style="width:320px;height:240px"> </canvas>
</body>
function onImageClicked(event) {
 var image = event.target; // the image that was clicked
 
 // point at canvas element that will show image data 
 // once we've processed it
 var canvas = document.getElementById("outputcanvas");
 // make our canvas the same size as the image
 canvas.width = image.naturalWidth;
 canvas.height = image.naturalHeight;
 
 // we'll need the 2D context to manipulate the data
 var canvas_context = canvas.getContext("2d");
 canvas_context.drawImage(image, 0, 0); // draw the image on our canvas
 
 // image_data points to the image metadata including each pixel value
 var image_data = canvas_context.getImageData(0, 0, 
                                 image.naturalWidth, image.naturalHeight);
 // pixels points to the canvas pixel array, arranged in 4 byte 
 // blocks of Red, Green, Blue and Alpha channel
 var pixels = image_data.data; 
 
 var numb_pixels=pixels.length/4; // the number of pixels to process
 
 // an array to hold the result data
 var height_data = new Uint8Array(numb_pixels); 
 
 var image_pixel_offset=0;// current image pixel being processed
 // go through each pixel in the image
 for (var height_pixel_index = 0; 
       height_pixel_index < numb_pixels; 
       height_pixel_index++) {
 
    // extract red,green and blue from pixel array
    var red_channel = pixels[image_pixel_offset ],
    green_channel = pixels[image_pixel_offset + 1],
    blue_channel = pixels[image_pixel_offset + 2];
 
    // create negative monochrome value from red, green and blue values
    var negative_average = 255 - (red_channel * 0.299 + 
                                  green_channel * 0.587 + 
                                  blue_channel * 0.114);
 
    // store value in height array
    height_data[height_pixel_index]=negative_average; 
 
    // store value back in canvas for display of negative monochrome image
    pixels[image_pixel_offset] = 
       pixels[image_pixel_offset + 1] = 
       pixels[image_pixel_offset + 2] = 
       negative_average;
 
    image_pixel_offset+=4; // offset of next pixel in RGBA byte array
 }
 
 // display modified image
 canvas_context.putImageData(image_data, 0, 0, 0, 0, 
                            image_data.width, image_data.height);
 
 // create 3D lithophane using height data
 setLevels(height_data, image_data.width, image_data.height);
}
function setLevels(heightData, width, height) {
 // TODO - create 3D data from height data
}

Project with HTML, CSS and JavaScript in separate files

OK, we’re ready to go.  Start your VM and open Aptana Studio (don’t know how? see previously)

In File->New->Web Project->Basic Web Template, add the name ‘Lithophane’.

min web setup VM step1

On the Lithophane project, create a CSS and a JS folder (right click->New->Folder); under CSS create main.css, and under JS, main.js (right click->New->File). Double click on index.html and you’re ready to start programming.

min web setup VM step 2

The files main.css and main.js will contain our style sheet and JavaScript respectively, index.html is the main web page.  First, we need to include these files into our HTML page so that we can reference the definitions within them.

Now to check it all works. Replace the default HTML (in index.html), which probably looks like this:

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
"http://www.w3.org/TR/html4/loose.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
  <head>
   <meta http-equiv="Content-Type" content="text/html;charset=utf-8"/>
   <title>New Web Project</title>
  </head>
  <body>
    <h1>New Web Project Page</h1>
  </body>
</html>

With this:

<!doctype html>
<html>
  <head>
    <meta charset="utf-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <title>Lithophane Generator</title>
    <link rel="stylesheet" href="css/main.css"> <!--our style sheet-->
    <script src="js/main.js"></script> <!--our JavaScript-->
  </head>
  <body onload="initPage();"> <!--call function when page loaded-->
    <h1 id="pageheading">Lithophane Generator</h1>
  </body>
</html>

Add the following to the main.css file

body {
    color : #008; /* change colour of body section */
}

And the following to the main.js file

function initPage() { // called on page load
    // find heading in document and change the displayed text
    var headingText=document.getElementById('pageheading');
    headingText.innerHTML="2D->3D Lithophane Generator™";
}

Save all the files, select the tab containing index.html and click the green run icon min web setup VM run icon to see that you have everything set up correctly.

The browser should start and the web page should look like this:

min web setup VM step3

The title text has been changed by the JavaScript and the colour has been set to blue by the CSS. So everything is working 🙂

Time to start some real programming!