April 13, 2014

Emergent Properties of Meat

GemCraft: Chasing Shadows compass puzzle

Almost solved compass puzzle (increased contrast)
In GemCraft: Chasing Shadows, there are seven fields which have a special compass item. The game never explains the purpose of the compasses, though the first thing you'll note is that on the loading screen of a compass level, there are seven gem outlines shown. I googled around but did not find anyone else who has explained the compass puzzle, so here's my explanation. Spoilers (but not plot spoilers) follow.

April 13, 2014 02:12 PM

April 08, 2014


Custom G-code Generation with Mecode

If you've ever wanted to hand-code gcode but still retain some scripting flexibility (for art, science, or engineering), Jack Minardi just posted a custom g-code generation package he's been working on, and it looks great.

Check out the RepRap wiki entry and
also the github repo with instructions

This could be a big win for 3D printing sacrificial inks like sugars and pluronics, where each extruded filament needs to be placed at precise (x,y,z) coordinates. And for arcs and meanders, there are built-in functions too! Very exciting. From the Github README:
To use, simply instantiate the G object and use its methods to trace your desired tool path.
from mecode import G
g = G()
g.move(10, 10) # move 10mm in x and 10mm in y
g.arc(x=10, y=5, radius=20, direction='CCW') # counterclockwise arc to (10, 5) with a radius of 20
g.meander(5, 10, spacing=1) # trace a rectangle meander with 1mm spacing between passes
g.abs_move(x=1, y=1) # move the tool head to position (1, 1)
g.home() # move the tool head to the origin (0, 0)

We got a chance to meet Jack at MRRF and everyone had a great time. Jack Minardi is currently a Research Fellow in the Lewis Lab at Harvard.

by (jmil) at April 08, 2014 03:43 PM

Emergent Properties of Meat

Hacking flash apps...

I enjoy the GemCraft game series, so I was excited to play GemCraft: Chasing Shadows. Unfortunately, the game refuses to run on the latest version of Flash Player available for Linux in Firefox. Happily, I've been able to fix that for my own personal use.

April 08, 2014 03:41 PM

April 02, 2014

Espoorastit, Vermo

Orienteering season is here again!

Mostly running and no orienteering on this 1:4000 scale map.


by admin at April 02, 2014 06:38 PM

April 01, 2014

MetaRepRap Soup

March 30, 2014

Emergent Properties of Meat

Australia: Granite Gorge, Fitzroy Island and Lake Barrine

On our way to Cairns, we stopped at Granite Gorge, a private park. You can buy food to feed the Rock Wallabies (supposedly it doesn't make them fat, but they sure like it), and then take a nice hike over many boulders. It was a fun and very scenic day, and then we swam for a bit in their pond to cool off (though Jeff sunburned his back here). There's also a few more photos here from Fitzroy Island, where we enjoyed the rainforest in addition to the reef snorkeling already posted. Finally, we have some photos from our walk around Lake Barrine, a beautiful lake inland from Cairns.

March 30, 2014 06:59 PM

March 27, 2014

BodgeItQuick Rep Strap Bertha Project

The rain from the east took its time today, so I was able to pop up 7 solar panels onto the Home Office roof. Lesson learnt today: allowing only a +/- 1mm tolerance on a roof is far too tight, even when using laser-cut templates instead of a ruler. It was a very tight fit, but the panels are nice and snug on the roof. I just have to wait for another rain-free day to add the rest.

by (BodgeIt) at March 27, 2014 05:24 PM

March 26, 2014


More triangulation carpeting

Trying to get this surface triangulation distraction closed down today. The proposition was to see whether I could repurpose my toolpath strategy code and apply it to the problem of triangulating trimmed NURBS surfaces. After all, trimmed NURBS surfaces exist on a 2D parametric domain (like the 3-axis drive plane), and the trimming curve is very much like a machining boundary. You have to know that something like the constant scallop routine is actually modelling a surface that is repeatedly being eroded away, because the toolpaths you see are in fact the boundary outline of that surface.

One of the common triangulation results I kept getting looks like a nice Afghan carpet pattern, like this:


I have an explanation. This happens when maxHdiffFibre is too small.

The surface is divided up into cells, and the sides of each cell have at least length/maxHdiffFibre extra points inserted along them. The convex polygonal area of each cell is then triangulated by repeatedly lopping off the sharpest corners, until you are left with an 8-sided shape that still has points along the top and bottom sides (because they were the longest to begin with), so these are joined up. The outer boundary is the trimming curve, so one side of an over-sampled cell is trimmed away, leaving three sides with too many points, which are joined up in the best way they can be. Then, at the corners of the trimming curve, the cells are subdivided into smaller and smaller complete rectangular cells until they are small enough not to be over-sampled.

I don’t think anyone else in the world misuses a 3-axis toolpath algorithm to make a triangulated surface, so this type of buggy behavior is unique.

Meanwhile, a correspondent of the blog took up the challenge of making an excellent triangulation of a cone. He did this by starting with the triangular lattice with rotational symmetry of order 6, and then cutting a pie slice out of it in order to join it together to form a cone. Because he didn’t have the power of Twistcodewiki to draw 3D geometry direct from Python in your browser, he drew the picture of all the edges by exporting Postscript commands, and didn’t have the capability of drawing the triangles themselves in order to see the folds:


Twistcodewiki is great. I use it all the time, even though its light/shading features are crude and terrible, and the interface is almost non-existent. But the one thing it does is so important and unique that I can ignore the fact that it doesn’t do it very well. The same probably goes for a lot of software out there.

by Julian at March 26, 2014 12:29 PM

March 22, 2014 (MetaLab)

[alphabet] Internal Memo

0686 bb70

Internal Memo

Due to a broken lens the Lazzzor is out of order until further notice.

Metalab Facility Management

March 22, 2014 05:44 PM

March 21, 2014

Emergent Properties of Meat

Port Douglas Wildlife Habitat

We visited the Wildlife Habitat at Port Douglas, a zoo of North Queensland animals, including a great variety of birds, mammals and reptiles. There was plenty to see and do, and plenty of pictures to be taken. We'd already seen many of the birds in the wild, but it was neat to see them up close.

March 21, 2014 07:10 PM

March 20, 2014

BodgeItQuick Rep Strap Bertha Project

Sorry for the long delay in posts, but I have been very busy. By April I will be fully off-grid, solar powered for electricity, and today I will start to integrate the good bits of my 20-year-old fridge freezer into my 1-year-old laser cutter, ready for full-scale production in April if all goes well, fingers crossed. The reason: the laser cutter needs its coolant kept below 20C. Currently, even with a CW3000 industrial laser chiller, I'm limited to 2-3 hours of laser cutting. That is OK for R&D work, depending on my coolant start temperature, which is about 8-10C. As I discovered last year, ambient temperature soon creeps up and reduces my lasering time considerably. It took me 2-3 hours of careful destructive disassembly to recover the whole freezer refrigerant system intact, still full of its refrigerant gas.
You ask why does a 60W laser need so much cooling? Well, the laser tube is only 15% efficient, so there is about 300W of waste heat to get rid of. The American RF lasers are built into a huge lump of aluminium: the 50W one I had on loan was set in 25+ kilos of aluminium, which is then cooled by a stack of fans. Take a look at the FabLab 30W laser; it has a full row of fans on the top, bottom and back of the machine. Pictures of my laser cutter mods to follow...
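As a quick back-of-the-envelope sketch of that waste-heat figure (my own arithmetic, not from the post):

```python
# Waste-heat estimate for a nominally 15%-efficient CO2 laser tube
optical_out = 60.0                          # W of beam power
efficiency = 0.15
electrical_in = optical_out / efficiency    # power the tube draws from the supply
waste_heat = electrical_in - optical_out    # heat the coolant and fans must carry away
print(round(waste_heat))                    # roughly matches the "about 300W" above
```
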

by (BodgeIt) at March 20, 2014 05:06 PM

March 18, 2014


The impossibility of a good triangulation of a cone

I hadn’t worked on triangulating surfaces since the first version of machining strategist in 1994, but always thought I could do it better by distributing the triangles in a nice enough manner. It is true that the people who write surface triangulators in engineering CAD packages don’t in general try very hard, but I don’t think many folks realize that there is no right answer.

Suppose we have a smooth surface S, defined by a function from a subset of the plane to real space

  f: D ------> R3, where D is a subset of R2 
                  representing the set of points within a boundary B defined by:
  b: [0,1] ------> R2 where b(0) = b(1)

… roughly speaking.

We want to approximate this surface to the tolerance e with a set of connected triangles defined by triplets of points in R3-space:
T = { (p0, p1, p2) }

It would be nice if it had the following properties:

1. For every point p in every triangle, there is a point q in S where |p-q| < e.

2. For every point q in S, there is a point p in one of the triangles where |p-q| < e.

3. For every point p on a 1-sided triangle boundary, there is a point q in B where |p-q| < e.

4. For every point q in B, there is a point p on a 1-sided triangle boundary where |p-q| < e.

5. There is a smooth continuous homeomorphism from S to the surface formed by the set of points in R3 that lie in the triangles, one that doesn’t move any point by more than e.

6. The orientation of the fold at each 2-sided edge triangle boundary is consistent with the curvature of S in that region.

7. The triangle approximation should depend only on the embedding of S in R3, and not have any relation to the 2D parameter map f.

8. The triangles should be as close to equilateral as possible; there should be no shards.

9. There should be as few triangles as possible to satisfy the tolerance value e.
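Conditions 1 and 2 taken together amount to saying (my restatement, not the author's) that the Hausdorff distance between the union of the triangles T and the surface S stays below the tolerance:

```latex
d_H(T, S) = \max\left\{ \sup_{p \in T} \inf_{q \in S} |p - q|,\;
                        \sup_{q \in S} \inf_{p \in T} |p - q| \right\} < e
```

Conditions 3 and 4 make the same statement about the 1-sided triangle boundary and the trimming boundary B.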

Admittedly, these conditions aren’t very well specified, and condition 5 implies the previous four, but I’m just being a lazy amateur. A professional whose job it was to produce high-quality triangle approximations of engineering surfaces would do it properly, I’m sure.

Who cares about the speed? I wrote a program that flipped and cracked triangles under the following three triangular manifold transforms to gradually change an initial triangulation into something that satisfied the requirements above.


The first snag I hit was that the second transform, which moves a vertex into the centre of its surrounding polygon, wasn’t going to change the standard rectangular triangulation into the more even hexagonal form.


No matter. The smallest perturbation would shake it away from this initial form so things would quickly spread and crystallize out.

But, here’s the real problem. What’s going to happen with a developable surface, like a cone, which is made up of straight lines?


Clearly, the majority of the triangle boundaries must run along these lines. But at the thin end we need n sides to approximate the surface to tolerance, while at the fat end we need 2n sides. At some point there is a transition, and at this transition you will need to make edges that go around the cone. And there is no way to do this without edges that fold outwards and produce a manifold that bounds a non-convex volume.

So, although the cone bounds a purely convex volume, we cannot respect its convexity with a triangulated model without over-triangulating at the thin end.
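The chord-error arithmetic behind this can be sketched in Python (my own illustration: the sagitta of one chord of a regular n-gon on a circle of radius r is r(1 - cos(pi/n))):

```python
import math

def sides_needed(r, e):
    """Minimum number of chords so a regular n-gon stays within
    tolerance e of a circle of radius r: solve r*(1 - cos(pi/n)) <= e."""
    return math.ceil(math.pi / math.acos(1.0 - e / r))

e = 0.01                        # tolerance
n_thin = sides_needed(1.0, e)   # circle at the thin end of the cone
n_fat = sides_needed(4.0, e)    # circle at the fat end, 4x the radius
print(n_thin, n_fat)
```

Since the required n grows roughly like sqrt(r), quadrupling the radius about doubles the number of sides needed, which is the n-to-2n transition described above.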

Usually a CAD model is made up of different surface patches that join together. By default, because no one tries to make the triangulations within the patches interesting, the nasty wrong-folding transitions tend to happen at the joins, and so are mis-characterized as a multi-surface problem, rather than an issue that runs deeper: the impossibility of defining the perfect answer.

by Julian at March 18, 2014 01:54 PM

March 17, 2014

Simple Trajectory Generation


The world is full of PID-loops, thermostats, and PLLs. These are all feedback loops where we control a certain output variable through an input variable, with a more or less known physical process (sometimes called "plant") between input and output. The input or "set-point" is the desired output where we'd like the feedback system to keep our output variable.

Let's say we want to change the set-point. Now what's a reasonable way to do that? If our controller can act infinitely fast we could just jump to the new set-point and hope for the best. In real life the controller, plant, and feedback sensor all have limited bandwidth, and it's unreasonable to ask any feedback system to respond to a sudden change in set-point. We need a Trajectory - a smooth (more or less) time-series of set-point values that will take us from the current set-point to the new desired value.

Here are two simple trajectory planners. The first is called 1st order continuous, since the plot of position is continuous. But note that the velocity plot has sudden jumps, and the acceleration & jerk plots have sharp spikes. In a feedback system where acceleration or jerk corresponds to a physical variable with finite bandwidth this will not work well.


We get a smoother trajectory if we also limit the maximum allowable acceleration. This is a 2nd order trajectory since both position and velocity are continuous. The acceleration still has jumps, and the jerk plot shows (smaller) spikes as before.


Here is the python code that generates these plots:

# AW 2014-03-17
# GPLv2+ license
import math
import matplotlib.pyplot as plt
import numpy

# first order trajectory. bounded velocity.
class Trajectory1:
    def __init__(self, ts = 1.0, vmax = 1.2345):
        self.ts = ts            # sampling time
        self.vmax = vmax        # max velocity
        self.x = float(0)       # position = 0           # target position
        self.v = 0              # velocity
        self.t = 0              # time
    def setTarget(self, T): = T
    def setX(self, x):
        self.x = x
    def run(self):
        self.t = self.t + self.ts   # advance time
        sig = numpy.sign( - self.x )    # direction of move
        if sig > 0:
            if self.x + self.ts*self.vmax >
                # done with move
                self.x =
                self.v = 0
                return False
            else:
                # move with max speed towards target
                self.v = self.vmax
                self.x = self.x + self.ts*self.v
                return True
        else:
            # negative direction move
            if self.x - self.ts*self.vmax <
                # done with move
                self.x =
                self.v = 0
                return False
            else:
                # move with max speed towards target
                self.v = -self.vmax
                self.x = self.x + self.ts*self.v
                return True
    def zeropad(self):
        self.t = self.t + self.ts
    def prnt(self):
        print("%.3f\t%.3f\t%.3f" % (self.t, self.x, self.v))
    def __str__(self):
        return "1st order Trajectory."

# second order trajectory. bounded velocity and acceleration.
class Trajectory2:
    def __init__(self, ts = 1.0, vmax = 1.2345, amax = 3.4566):
        self.ts = ts
        self.vmax = vmax
        self.amax = amax
        self.x = float(0) = 0
        self.v = 0
        self.a = 0
        self.t = 0
        self.vn = 0     # next velocity
    def setTarget(self, T): = T
    def setX(self, x):
        self.x = x
    def run(self):
        self.t = self.t + self.ts
        sig = numpy.sign( - self.x )    # direction of move
        # time to reach the target if we decelerate at amax from now on
        tm = 0.5*self.ts + math.sqrt( pow(self.ts,2)/4 - (self.ts*sig*self.v - 2*sig*( / self.amax )
        if tm >= self.ts:
            self.vn = sig*self.amax*(tm - self.ts)
            # constrain velocity
            if abs(self.vn) > self.vmax:
                self.vn = sig*self.vmax
        else:
            # done (almost!) with move
            self.a = float(0.0 - sig*self.v)/float(self.ts)
            if not (abs(self.a) <= self.amax):
                # cannot decelerate directly to zero. this branch required due to rounding-error (?)
                self.a = numpy.sign(self.a)*self.amax
                self.vn = self.v + self.a*self.ts
                self.x = self.x + (self.vn + self.v)*0.5*self.ts
                self.v = self.vn
                assert( abs(self.a) <= self.amax )
                assert( abs(self.v) <= self.vmax )
                return True
            else:
                # end of move
                assert( abs(self.a) <= self.amax )
                self.v = 0
                self.x =
                return False
        # constrain acceleration
        self.a = (self.vn - self.v)/self.ts
        if abs(self.a) > self.amax:
            self.a = numpy.sign(self.a)*self.amax
            self.vn = self.v + self.a*self.ts
        # update position
        self.x = self.x + (self.vn + self.v)*0.5*self.ts
        self.v = self.vn
        assert( abs(self.v) <= self.vmax )
        return True
    def zeropad(self):
        self.t = self.t + self.ts
    def prnt(self):
        print("%.3f\t%.3f\t%.3f\t%.3f" % (self.t, self.x, self.v, self.a))
    def __str__(self):
        return "2nd order Trajectory."

vmax = 3        # max velocity
amax = 2        # max acceleration
ts = 0.001      # sampling time
# uncomment one of these:
#traj = Trajectory1( ts, vmax )
traj = Trajectory2( ts, vmax, amax )
traj.setX(0)        # current position
traj.setTarget(8)   # target position
# resulting (time, position) trajectory stored here:
t = []
x = []
# add zero motion at start and end, just for nicer plots
Nzeropad = 200
for n in range(Nzeropad):
    traj.zeropad()
    t.append( traj.t )
    x.append( traj.x )
# generate the trajectory
while
    t.append( traj.t )
    x.append( traj.x )
t.append( traj.t )
x.append( traj.x )
for n in range(Nzeropad):
    traj.zeropad()
    t.append( traj.t )
    x.append( traj.x )
# plot position, velocity, acceleration, jerk
plt.figure()
plt.subplot(4, 1, 1)
plt.title( str(traj) )
plt.plot( t , x , 'r')
plt.ylabel('Position, x')
plt.subplot(4, 1, 2)
plt.plot( t[:-1] , [d/ts for d in numpy.diff(x)] , 'g')
plt.plot( t , len(t)*[vmax] , 'g--')
plt.plot( t , len(t)*[-vmax] , 'g--')
plt.ylabel('Velocity, v')
plt.subplot(4, 1, 3)
plt.plot( t[:-2] , [d/pow(ts,2) for d in numpy.diff( numpy.diff(x) ) ] , 'b')
plt.plot( t , len(t)*[amax] , 'b--')
plt.plot( t , len(t)*[-amax] , 'b--')
plt.ylabel('Acceleration, a')
plt.subplot(4, 1, 4)
plt.plot( t[:-3] , [d/pow(ts,3) for d in numpy.diff( numpy.diff( numpy.diff(x) )) ] , 'm')
plt.ylabel('Jerk, j')


I'd like to extend this example, so if anyone has simple math+code for a third-order or fourth-order trajectory planner available, please publish it and comment below!

by admin at March 17, 2014 06:48 PM

Dan Heeks's Milling

HeeksCNC 1.0

I made a release of HeeksCNC 1.0, my CAD/CAM software, for Windows.

You can program drilling, profile and pocket operations.

You can then run a simulation of solid removal.

This amazing software is less than $20. Get it here

by (Dan Heeks) at March 17, 2014 05:42 PM

March 13, 2014


Factories and surfaces

The days and weeks are passing me by. I’ve got to stop doing this programming and get with something I’m interested in. I don’t think I’ve been outside of Liverpool since January.

Due to a surface triangulation crisis (the speed of machining operations being four times faster than the initial time it takes the CAD kernel to produce the triangles it needs to start work), I spent the last 10 days hacking up a trimmed NURBS surface triangulator and triangle flipper with Martin based on some of the machining algorithms. We pretend the trimming curves are machining boundaries and encode an XYZ point at each XY 3-axis sample point instead of just tool tip height.


The triangle flipping is a second component experiment I’d been meaning to try for a long time. Can you redistribute the triangles in a better way than simply along the UV parametric lines? Not sure I’ve got an answer yet, but it’s exhausting trying to find out. I’ll write it up later when I have the energy.

Meanwhile, in another part of the city, things are being built by real engineers. I’m looking forward to installing it in the office and making it work. Which is more than can be said about many of the programming projects I’ve put my hand to recently.

I’m also wasting time posting ideas onto the internal Autodesk idea database. Most are sinking like lead bricks into mud. It’s pearls before swine. The latest idea which everyone hates is to hold Simultaneous Satellite Tech Conferences in each region rather than wasting a shedload of carbon by flying all the techies to meet in a hotel in Canada for two days.

“Oh, but I find the in-person meetings are so important for building relationships,” they say. “This never happens with on-line meetings.”

No one seems to think that maybe it’s because on-line meetings generally last less than an hour, but when you travel half-way round the world to a meeting, the duration of the meeting is in practice like 20 to 40 hours long (with sleeping breaks).

Perhaps this extended time period is what’s important, eh?

I mean, look, if you teleported into your favourite tech conference for, let’s say, one hour and fifteen minutes before suddenly vanishing in a puff of smoke, you wouldn’t be able to build a lot of relationship experiences with the people there, would you? However, if you were trapped in an elevator for 10 hours, and all you had was your phone which you could use to call the person stuck in the other shaft, you’d become friends for life with that individual.

It’s all in the mind. Use your imagination. A functioning virtual telepresence system should not involve booking the suite for an hour on a stupid meeting. Instead it should require being time-locked in a tank for 12 or 24 hours where the only communication line is routed via the office to which your business travel destination has been designated. You can phone home, but the phone call will literally need to be directed into a physical acoustic coupler device in that office. You are there, with all the inconvenience of being stuck there, meaning that the place of least effort to communicate is there, and it will be rude and boring if the people in that office don’t pay attention and entertain you while you are stuck there. Maybe after the first six hours you will have dispensed with enough pleasantries and finally be talking about the things you need to be talking about, and building those bonds of friendship that take time to form. Glue and cement and gelatin all take time to set. Do you not think the same could be true for our frivolous minds?

by Julian at March 13, 2014 07:41 PM

Emergent Properties of Meat

Southern Florida Nature

Photos from various parts of Florida including the Everglades, Keys and Miami area.

March 13, 2014 01:27 AM

March 11, 2014


Buried nuts and hanging holes

I needed to make some M4 nuts that could be finger tightened but I didn't have room for a standard wing-nut, so I decided to embed nuts in a printed plastic knob. I knocked up a simple design in OpenScad :-

M4 nuts are nominally 3.2mm thick. I made the base and lid 2.4mm and sliced it with 0.4mm layers. That meant the top of the nut would be flush with a layer boundary at 5.6mm and I confirmed that the first covering layer was at 6.0mm in Skeinlayer. So I needed to pause the build before the start of the layer at Z=6.0 and insert the nuts.
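The layer arithmetic can be captured in a small helper (hypothetical, not part of nop head's toolchain); working in whole microns sidesteps floating-point surprises:

```python
def pause_z(base_mm, nut_mm, layer_mm):
    """Z of the first layer printed over the nut: the next layer
    boundary strictly above the top of the embedded nut."""
    top = round((base_mm + nut_mm) * 1000)   # 2.4 + 3.2 -> 5600 um, flush with a layer
    layer = round(layer_mm * 1000)           # 0.4 -> 400 um
    return (top // layer + 1) * layer / 1000.0

print(pause_z(2.4, 3.2, 0.4))  # -> 6.0, the layer to pause before
```
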

I run my USB machines using Raspberry Pis and OctoPrint (so that all my machines are connected via Ethernet) and noticed a post by the author, Gina Häußge, that said OctoPrint interprets an M0 in the gcode as a pause command. The host stops sending gcode until you press the pause button to un-pause it again. I believe other hosts use @PAUSE to do the same thing.

So M0 is exactly what I needed. The only problem is that up until then I mistakenly thought M0 meant end of program and placed it at the end of the PLA profiles that I distribute. Fortunately the version of Marlin I use ignores it but if you want to use the latest version, or OctoPrint, then you need to remove it from end.gcode, otherwise either the host or the firmware will pause at the end of the print and wait for a button press. Harmless but a bit confusing.

So, armed with a new appreciation of what M0 is, I searched my gcode for the first instance of Z6.0 which looks like this:

G1 X-9.082 Y3.907 Z6.0 F12000.0
G1 X-5.457 Y-3.937 Z6.0 F12000.0
G1 X-7.05 Y-3.803 Z6.0 F12000.0
G1 X-11.486 Y-4.991 Z6.0 F12000.0
G1 X-13.721 Y-10.229 Z6.0 F12000.0
G1 F1800.0
G1 E1.0
G1 F12000.0
G1 X-12.65 Y-10.848 Z6.0 F1837.1615 E0.036

What we have is a sequence of non-extruding moves followed by an un-retract and the first extrusion. The moves are the result of the comb module and not really relevant if we are restarting after a pause, so I removed all but the last move and inserted my pause code:

M104 S100
G1 Z6.0
G1 X-100 Y-100 F9000
M0
G1 X10.0 Y98.0 F9000
G1 Z0.05
M109 S250
G92 E0
G1 E3 F50
G1 E-1 F1200
G1 X40.0 F4000
G1 Z6.0 F9000

G1 X-13.721 Y-10.229 Z6.0 F12000.0
G1 F1800.0
G1 E1.0
G1 F12000.0
G1 X-12.65 Y-10.848 Z6.0 F1837.1615 E0.036

I set the extruder temperature to 100°C to minimise ooze and stop it discolouring while waiting for me to insert the nuts. The bed is left on so the half printed objects don't detach. It then moves up to Z = 6.0 to clear the objects before going to X = -100, Y =-100. That moves the bed to the front and the extruder to the far right on a Mendel90, giving the best access to the partially printed objects. M0 then pauses the program.
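The hand edit above could be automated along these lines (a hypothetical helper, not nop head's actual workflow): find the first move at the pause height, keep only the last of the preceding comb moves, and splice the pause block in front of it.

```python
def insert_pause(lines, z, pause_block):
    """Splice pause gcode in before the first move at height z, keeping
    only the last of the run of non-extruding comb moves at that height."""
    tag = " Z%.1f " % z                     # naive textual match, e.g. " Z6.0 "
    i = 0
    while i < len(lines) and tag not in lines[i]:
        i += 1
    if i == len(lines):
        return lines[:]                     # height never reached; nothing to do
    j = i                                   # skip the non-extruding travel moves
    while j < len(lines) and tag in lines[j] and " E" not in lines[j]:
        j += 1
    return lines[:i] + pause_block + [lines[j - 1]] + lines[j:]
```
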

I threaded the nuts onto a screw to insert them easily without touching the hot plastic. 

After pressing the pause button to make OctoPrint resume, the print head moves to the front of the bed to do another ooze free warmup. The only difference from the start of the print is it parks the nozzle 10mm further left to avoid the blob it has already made and it moves to Z = 6.0 before resuming the print.

This all worked very well except for a slight snag. ABS does not stick to steel, so when it extruded the circular holes on top of the nuts it made a bit of a mess.

Normally I would use a one layer support diaphragm when printing suspended holes and drill it out afterwards. In this case it can't be drilled because the nut is in the way, so I developed a method of printing holes in mid air. 

The last layer of the nut trap looks like this: 

You can't print a smaller hole on the next layer as the outline would be printed in mid air. The infill is also only attached at one end. After a few layers it does sort itself out but leaves a mess. However, what you can do is print two bridges over the large hole with a gap between them equal to the diameter of the small hole:

This is done by cutting out a one layer rectangle clipped to the hexagon. It is rotated to match the layer's infill direction because Skeinforge fails to detect it as a bridge, probably because the bridged area is tiny.

On the next layer we can bridge in the opposite direction and close down the hole to a square:

Two sides are supported by the edges of the rectangle below and the other two span the gap. 

On the next layer we can approximate the hole with an octagon. Four edges are coincident with the square and the other four span small gaps:

It is now a good enough approximation to a circle for such a small hole so it continues up to the top as an octagon. The resulting print is much neater:

The cavity for the nut is made by subtracting a shape like this: 

Here is the OpenScad code. It needs various functions from the Mendel90 source tree.

// Smaller alternative to a wingnut
include <conf/config.scad>

module hanging_hole(or, ir, ofn = 0) {
    union() {
        intersection() {                // one layer rectangle clipped to the hole outline
            if(ofn)
                cylinder(r = or, h = 3 * layer_height, center = true, $fn = ofn);
            else
                poly_cylinder(r = or, h = 3 * layer_height, center = true);

            rotate([0, 0, 90])
                cube([2 * or + 1, 2 * ir, 2 * layer_height], center = true);
        }
        rotate([0, 0, 90])              // square hole on the next layer
            cube([ir * 2, ir * 2, 4 * layer_height + 4 * eta], center = true);

        rotate([0, 0, 22.5])            // octagonal hole from there upwards
            translate([0, 0, 2 * layer_height])
                cylinder(r = corrected_radius(ir, 8), h = 100, $fn = 8);
    }
}

base_thickness = 2.4;
lid_thickness = 2.4;

function nut_knob_height(nut) = base_thickness + nut_thickness(nut) + lid_thickness;

module nut_knob_stl(screw = M4_hex_screw, d = 14) {
    nut = screw_nut(screw);
    h = nut_knob_height(nut);
    flutes = 3;

    rotate([0, 0, -45])
        difference() {
            cylinder(r = d / 2, h = h); // basic shape

            for(i = [0 : flutes - 1])   // flutes for finger grip
                rotate([0, 0, i * 360 / flutes + 30])
                    translate([d * cos(90 / flutes), 0, base_thickness])
                        cylinder(r = d / 2, h = 100);

            union() {                   // nut cavity
                difference() {
                    translate([0, 0, base_thickness + nut_thickness(nut)])
                        nut_trap(screw_clearance_radius(screw), nut_radius(nut), nut_thickness(nut));

                    translate([0, 0, base_thickness + nut_thickness(nut)])  // remove top of nut trap
                        cylinder(r = 20, h = 110);
                }
                translate([0, 0, base_thickness + nut_thickness(nut)])
                    hanging_hole(nut_radius(nut), screw_clearance_radius(screw), 6); // replace with hanging hole
            }
        }
}

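I'd guess the corrected_radius(ir, 8) call compensates for a polygonal hole being undersized: a polygon inscribed at radius r has its flats closer to the axis than r. A plausible stand-in for it (my assumption, not the actual Mendel90 helper):

```python
import math

def corrected_radius(r, n):
    # assumed behaviour: enlarge the polygon so its flats, rather than
    # its vertices, lie on the desired circle of radius r
    return r / math.cos(math.pi / n)

print(round(corrected_radius(2.0, 8), 4))  # octagon for a 2mm-radius hole
```
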

So this seems to be a general solution to printing holes in mid air without any support material. The only downside is that it is a bit weaker than using a membrane and drilling it out. In this case no strength above the nut was required. In general you can just make it two layers thicker.

by (nop head) at March 11, 2014 09:29 PM

ADF4350 PLL+VCO and AD9912 DDS power spectra

Here's the 1 GHz output of an ADF4350 PLL+VCO evaluation board, when used with a 25 MHz reference.

The datasheet shows a phase noise of around -100 dBc/Hz @ 1-100 kHz, so this measurement may in fact be dominated by the Rigol DSA1030A phase noise which is quoted as -88 dBc/Hz @ 10 kHz.


The 1 GHz output from the ADF4350 is used as a SYSCLK input for an AD9912 DDS. The spectrum below shows a 100 MHz output signal from the DDS with either a 660 MHz or 1 GHz SYSCLK. The 660 MHz SYSCLK is from a 10 MHz reference multiplied 66x by the AD9912 on-board PLL. The 1 GHz SYSCLK is from the ADF4350, with the AD9912 PLL disabled.

The AD9912 output is clearly improved when using an external 1 GHz SYSCLK. The noise-floor drops from -80 dBm to below -90 dBm @ 250 kHz from the carrier. The spurious peaks at +/- 50 kHz disappear. However, this result is still far from the datasheet result, where all noise is below -95 dBm just a few kHz from the carrier. It shouldn't matter much that the datasheet shows a 200 MHz output while I measured a 100 MHz output.

Again I suspect the Rigol DSA1030A's phase-noise performance of -88dBc/Hz @ 10 kHz may in fact significantly determine the shape of the peak. Maybe the real DDS output is a clean delta-peak, we just see it like this with the spectrum analyzer?


Martein/PA3AKE has similar but much nicer results over here: 1 GHz refclock and 14 MHz output from AD9910 DDS. Amazingly both these spectra show a noise-floor below -90 dBm @ 50 Hz! Maybe it's because the spectrum analyzer used (Wandel & Goltermann SNA-62) is much better?

by admin at March 11, 2014 05:11 PM

March 09, 2014

MetaRepRap Soup

Emergent Properties of Meat

Florida 2014 (mostly Everglades)

We recently travelled to Florida. I took a ton of photos, mostly in the Everglades. Here are some of the better examples. Warning: Contains spiders and animals eating other animals.

March 09, 2014 06:02 PM

March 07, 2014

DDS Front Panel

Two of these 1U 19" rack-enclosure front panels came in from Schaeffer today. Around 70 euros each, and the best thing is you get an instant quote on the front panel while you are designing it with their Front Panel Designer software. The box will house an AD9912 DDS.


From left to right: 10 MHz reference frequency input BNC, DDS output BNC, 40mm fan (1824257 and 1545841), 20x2 character LCD (73-1289-ND), rotary encoder with pushbutton (102-1767-ND), and rightmost an Arduino Due (1050-1049-ND) with Ethernet shield (1050-1039-ND). The panel is made to fit a Schroff 1U enclosure (1816030) with the inside PCBs mounted to a chassis plate (1370461).


Here's a view from the inside. I milled down the panel thickness from 4 mm to around 2.5 mm on the inside around the rotary encoder. Otherwise all components seem to fit as designed.


Next stop is coding an improved user-interface as well as remote-control of the DDS and PLL over Ethernet.

by admin at March 07, 2014 07:41 PM

February 26, 2014

Drop-Cutter toroid edge test

The basic CAM-algorithm called axial tool projection, or drop-cutter, works by dropping down a cutter along the z-axis, at a chosen (x,y) location, until we make contact with the CAD model. Since the CAD model consists of triangles, drop-cutter reduces to testing cutters against the triangle vertices, edges, and facets. The most advanced of the basic cutter shapes is the BullCutter, a.k.a. Toroidal cutter, a.k.a. Filleted endmill. On the other hand the most involved of the triangle-tests is the edge test. We thus conclude that the most complex code by far in drop-cutter is the torus edge test.

The opencamlib code that implements this is spread among a few files:

  • millingcutter.cpp translates/rotates the geometry into a "canonical" configuration with CL=(0,0), and the P1-P2 edge along the X-axis.
  • bullcutter.cpp creates the correct ellipse, calls the ellipse-solver, returns the result
  • ellipse.cpp ellipse geometry
  • ellipseposition.cpp represents a point along the perimeter of the ellipse
  • brent_zero.hpp has a general purpose root-finding algorithm

The special cases where we contact the flat bottom of the cutter, or the cylindrical shaft, are easy to deal with. So in the general case where we make contact with the toroid the geometry looks like this:


Here we've fixed the XY coordinates of the cutter location (CL) point, and we're using a filleted endmill of radius R1 with a corner radius of R2. In other words R1-R2 is the major radius and R2 is the minor radius of the Torus. The edge we are testing against is defined by two points P1 and P2 (not shown). The location of these points doesn't really matter, as we do the test against an infinite line through the points (at the end we check if CC is inside the P1-P2 edge). The z-coordinate of CL is the unknown we are after, and we also want the cutter contact point (CC).

There are many ways to solve this problem, but one is based on the offset ellipse. We first realize that dropping down an R2-minor-radius Torus against a zero-radius thin line is actually equivalent to dropping down a zero-minor-radius Torus (a circle or 'ring' or CylCutter) against a R2-radius edge (the edge expanded to an R2-radius cylinder). If we now move into the 2D XY plane of this circle and slice the R2-radius cylinder we get an ellipse:


The circle and ellipse share a common virtual cutter contact point (VCC). At this point the tangents of both curves match, and since the point lies on the R1-R2 circle its distance to CL is exactly R1-R2. In order to find VCC we choose a point along the perimeter of the ellipse (an ellipse-point or ePoint), find out the normal direction, and go a distance R1-R2 along the normal. We arrive at an offset-ellipse point (oePoint), and if we slide the ePoint around the ellipse, the oePoint traces out an offset ellipse.


Now for the shocking news: an offset-ellipse doesn't have any mathematically convenient representation that helps us solve this!
Instead we must numerically find the best ePoint such that the oePoint coincides with CL. Like so:


Once we've found the correct ePoint, this locates the ellipse along the edge and the Z-axis - and thus CL and the cutter. If we go back to looking at the Torus it is obvious that the real CC point is now the point on the P1-P2 line that is closest to VCC.
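That closest-point step is ordinary point-to-segment projection, with the clamp implementing the "is CC inside the P1-P2 edge" check mentioned earlier. A minimal illustrative sketch in Python (not opencamlib's actual code; the names are made up):

```python
def closest_point_on_segment(p1, p2, q):
    """Project q onto the line through p1-p2, clamped to the segment.

    Points are (x, y) tuples; this mirrors the CC-point step described
    above, where the projection is checked against the edge end-points.
    """
    vx, vy = p2[0] - p1[0], p2[1] - p1[1]
    wx, wy = q[0] - p1[0], q[1] - p1[1]
    t = (wx * vx + wy * vy) / (vx * vx + vy * vy)  # parameter along the line
    t = max(0.0, min(1.0, t))                      # clamp: CC must lie on the edge
    return (p1[0] + t * vx, p1[1] + t * vy)

# a VCC above the middle of the edge projects straight down onto it:
print(closest_point_on_segment((0, 0), (10, 0), (3, 5)))   # (3.0, 0.0)
```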

In order to run Brent's root finding algorithm we must first bracket the root. The error we are minimizing is the difference in Y-coordinate between the oePoint and the CL point. It turns out that oePoints calculated for diangles of 0 and 3 bracket the root. Diangle 0 corresponds to the offset normal aligned with the X-axis, and diangle 3 to the offset normal aligned with the Y-axis:


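The search can be sketched numerically. The toy below is not opencamlib's implementation: it uses a plain angle parameter instead of diangles, made-up numbers for the ellipse, and simple bisection where opencamlib uses Brent's method. But it shows the same idea: bracket the root between the two normal directions and solve for the ePoint whose oePoint has the required Y-coordinate.

```python
import math

def ellipse_point(a, b, t):
    # ePoint: a point on the perimeter of the ellipse
    return (a * math.cos(t), b * math.sin(t))

def offset_point(a, b, d, t):
    # oePoint: the ePoint moved a distance d along the outward normal
    x, y = ellipse_point(a, b, t)
    nx, ny = x / a**2, y / b**2          # gradient of (x/a)^2 + (y/b)^2
    norm = math.hypot(nx, ny)
    return (x + d * nx / norm, y + d * ny / norm)

def solve_epoint(a, b, d, target_y, tol=1e-12):
    # On the first quarter the oePoint's Y grows monotonically from 0 to b+d,
    # so t=0 and t=pi/2 bracket the root and bisection converges.
    lo, hi = 0.0, math.pi / 2
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if offset_point(a, b, d, mid)[1] < target_y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# made-up numbers: semi-axes a=2, b=1, offset distance d=0.5
t = solve_epoint(2.0, 1.0, 0.5, target_y=1.0)
```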

Finally a clarifying drawing in 2D. The ellipse center is constrained to lie on the edge, which is aligned with the X-axis, and thus we know the Y-coordinate of the ellipse center. What we get out of the 2D computation is actually the X-coordinate of the ellipse center. In fact we get two solutions, because there are two ePoints that match our requirement that the oePoint should coincide with CL:



Once the ellipse center is known in the XY plane, we can project onto the Edge and find the Z-coordinate. In drop-cutter we obviously approach everything from above, so between the two potential solutions we choose the one with a higher z-coordinate.

The two ePoint solutions have a geometric meaning. One corresponds to a contact between the Torus and the Edge on the underside of the Torus, while the other solution corresponds to a contact point on the topside of the Torus:

I wonder how this is done in the recently released pycam++?

by admin at February 26, 2014 08:16 PM


Washing the progress bar

Last year we got a chance to see SolidCAM’s iMachining and laughed at the way its progress bar jumped all over the place, from -14% to 200% and back again.

Then we looked at our own Adaptive Clearing strategy which we had just spent the past year making multicore — and noticed it did the same stupid thing!

How embarrassing.

You never notice yourself picking your own nose, but when someone else does it in your face, you realize it’s ugly.

Progress bars don’t get as much love and attention from the programmers as they ought to, given how much time the users have to stare at them. The users think it’s so normal for the progress bar to be absolutely wrong that it’s considered a sign of extreme naivety to think about complaining. They probably believe that we’re going to laugh at them if they raise the issue.

It turns out that the progress bar on multiple CPU processing is not hard to get right, but you do it differently to how you do it on a single-threaded process.

Let’s first think about what a progress bar is for. There are two different options. It could report the time remaining for the process to complete, or it could report the percentage of the process job that has been completed.

The time remaining might be the most useful information for organizing your life (is there enough time to grab lunch while this completes?), but there’s no way you’re going to get that information — even though it’s what everyone wants to know.

You will hear: “How many days till it’s done?” more often than “Are we at the 75% complete stage yet?” for a project — and that’s even before it’s over-run by a factor of two.

In fact, the only practical implementation for the time remaining is to run the whole job first, time it, and then set a count-down timer from that value when you run it again. It’ll make everything run twice as slow, but what’s the big deal?

And just suppose we implemented a percentage-complete progress bar that was accurate, in the sense that it moved at a constant rate from 0 to 100%. This would be a scale-free number that didn’t depend on the speed of your processor, on whether you decided to watch a video while you waited, or on all kinds of other factors that are impossible to predict.

If this percentage-complete value was accurate enough in time, you could measure how long it took for the first 5% to pass, and multiply that by 19 to estimate the time to completion. Good enough?

Washing dishes

There are 8 plates and 2 oven pans in the sink.

Single-threaded version 1: Take each item from the sink, wash it, dry it, put it in the cupboard, and set the progress bar to the value (number of items done)/(total number of items) = n/10.

nitems = 10; 
for (i = 0; i < nitems; i++) {
    setpercentprogress(100*(i+1)/nitems);
}

The item number i steps from item 0 to item 9, and you need to add 1 to this value to state the number of items that are complete after you have washed each one.

Progress is going to be a bit juddery and uneven, because the oven pans take longer than the plates to wash, and 10% is a big increment.

Single-threaded version 2: Oven pans are harder to wash than plates. Count 6 points for each pan, and 1 point for each plate. This adds up to 20 points. Do the pans first. Set the progress bar to the sum of the points for the items already done divided by 20.

npans = 2; nplates = 8; 
pantoplateworkratio = 6; 
totalpoints = pantoplateworkratio*npans + nplates;
for (i = 0; i < npans; i++) {
    setpercentprogress(100*((i+1)*pantoplateworkratio)/totalpoints);
}
for (j = 0; j < nplates; j++) {
    setpercentprogress(100*(npans*pantoplateworkratio + (j+1))/totalpoints); 
}

This implementation is getting a bit complicated, but it’s all based on in-line linear equations, and you can see how it evolves as part of the natural process of software development. The first loop moves us from 0% to 60%, and the second loop goes from 60% to 100%.

Suppose we wanted to break down each operation into proportions of 2/5 for washing, 2/5 for drying and 1/5 for putting in the cupboard components, then we’d expand the first loop to read:

for (i = 0; i < npans; i++) {
    setpercentprogress(100*(i+0.4)*pantoplateworkratio/totalpoints);  // washed
    setpercentprogress(100*(i+0.8)*pantoplateworkratio/totalpoints);  // dried
    setpercentprogress(100*(i+1.0)*pantoplateworkratio/totalpoints);  // in cupboard
}
The code evolves and spreads out like this. You simply keep inserting new terms into your setpercentprogress() function calls. It’s not how you would design it if you were starting fresh, but programming is a process of constantly repairing an old machine and patching up its pipe-work to handle new chemicals.

Multi-threaded version 1: How about we set one person to do the plates, and the other to do the pans? Now you’re going to see the progress bar jumping back and forth between a number in the 0 to 60 range and a number in the 60 to 100 range. If, furthermore, each person doesn’t know how many items are in the other person’s sink, like they’ve randomly picked 2 pans and 3 plates for one of the sinks, the conflicts are going to be greater and may even account for the jumpy -14% and +200% ranges.

Single-threaded version 3: Create a special progress object that keeps count of how much progress has been made. Using the points system we have allocated 5% to each plate and 30% to each pan. Also, subdivide these incremental progress percentages into the individual components, like so:

Pick a plate out of the sink, wash it (+2%), dry it (+2%), into the cupboard (+1%). Take a pan out of the sink, wash it (+12%), dry it (+12%), into the cupboard (+6%).

This is getting smoother. But you could break down the drying part into smaller operations of, say, each of three wipes of the tea towel. Also, it doesn’t take that much longer to put an oven pan into the cupboard than a plate, so you should reserve a larger proportion of the pan time for the scrubbing and less for the putting away. Take a pan out of the sink, wash it (+25%), dry it (+3%), into the cupboard (+2%).

Now you have a system where it doesn’t matter what order you do the items, how many people are doing it concurrently, or what jobs they do. You could have a stack of plates on the draining board and two people drying them. If one of the plate-washers finishes first, they could join in the job of putting them away.

Multi-threaded version 2: These percentages may look pretty simple, but there’s the minor issue of how to get the numbers allocated to each component of each item in a sensible way. Those messy calculations of the form setpercentprogress(100*(i+0.4)*pantoplateworkratio/totalpoints) have to happen somewhere, they don’t just go away.

We got into a tangle with this, until we stopped trying to do the calculations in-line, and instead allocated all the percentages at the very start of the operation. We stuck three numbers onto each item while they were still in the sink, one for the washing, another for the drying, and one for the putting away. This can be done in one thread that has the big picture and all the available information. Now, as each job gets done (a pan gets dried), the process peels the stamp displaying the incremental progress allocated to it for the work and sends it to the progress object. Easy.
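The allocate-up-front scheme can be sketched like this. The names and numbers below are illustrative, not the Adaptive Clearing code; the point is that all the stamps are computed once by a thread with the big picture, and then reported in any order by any worker:

```python
import threading

class ProgressObject:
    # collects incremental progress "stamps" from any number of workers
    def __init__(self):
        self._lock = threading.Lock()
        self._done = 0.0

    def add(self, stamp):
        with self._lock:
            self._done += stamp

    def percent(self):
        with self._lock:
            return 100.0 * self._done

def allocate_stamps(items, weights, stage_fractions):
    # one thread with the big picture allocates all the stamps up front
    total = sum(weights[kind] for kind in items)
    return [tuple(weights[kind] / total * f for f in stage_fractions)
            for kind in items]

# 2 pans and 8 plates; a pan counts 6 points to a plate's 1,
# and each item splits 2/5 wash, 2/5 dry, 1/5 put-away:
sink = ['pan', 'pan'] + ['plate'] * 8
stamps = allocate_stamps(sink, {'pan': 6, 'plate': 1}, (0.4, 0.4, 0.2))

progress = ProgressObject()
for wash, dry, put_away in stamps:       # any order, any number of threads
    for stamp in (wash, dry, put_away):
        progress.add(stamp)
print(round(progress.percent()))         # 100
```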

One of the challenges is that the sink water is dirty so you can’t see exactly how many plates there are left to do. To handle this, we work with an estimate and keep allocating an absolute percentage for each item that comes out of the sink from a remaining stock of allocations. When we run out of plates to wash, we report a residual allocation to the progress object, and it tries to spread out this error across the remaining space on the progress bar, as I outlined in this post about the progress object.

In the Adaptive Clearing algorithm we have 7 stages in the process, instead of just 3 for washing up, and some of the components depend on adjacent pairs (like relinking) so cannot be isolated. There are also two independent hierarchies of allocations and residuals, for the layers and for the toolpaths within each layer.

Here is a diagram of the process with the progress bar value super-imposed. It’s not a very good result, as you can see, but we’ve been working on it.


We should put some effort into balancing the proportions between the operations, but we don’t have time for that work right now. Maybe we’d do it if the users started complaining more about it. We can now see some problems with the strategy by the way the progress bar stalls. As a rule, if you can’t see a problem, you can’t fix it.

The original outline for this story involved harvesting potatoes from a valley with a number of farms, each farm with a number of fields, each field harvested by a number of tractor runs along its length producing truckloads of potatoes that were taken back to the barn to be individually washed, sorted and bagged into sacks, all overseen by an impatient manager who just wanted to know how much longer it was all going to take to get done.

Washing dishes is so much simpler. I don’t know why I didn’t think of it until now.

by Julian at February 26, 2014 01:41 PM

February 24, 2014

Emergent Properties of Meat

How do you check a signature on a PGP/Mime message

…I'll answer in Python. First, put your whole message in a file in unix "mbox" format (this will be the problem if you use some brain-dead GUI mailer!), then in Python extract the signature and the signed message, and finally invoke gpg:

import email
import os
import sys

with open(sys.argv[1]) as mfile: message = email.message_from_file(mfile)

if message.get_content_type() != "multipart/signed":
    raise SystemExit("message not signed")

# Really, you'd want safe temporary filenames, and you'd want to clean them up
with open("msg", "wb") as msg: msg.write(message.get_payload(0).as_string())
with open("sig", "wb") as sig: sig.write(message.get_payload(1).get_payload())

# Delegate the real business to gpg
os.execvp("gpg", ["gpg", "--verify", "sig", "msg"])

February 24, 2014 01:10 PM

February 23, 2014

OpenCAMlib and OpenVoronoi toolpath examples

There might be renewed hope for a FreeCAD CAM module. Here are some examples of what opencamlib and openvoronoi can do currently.

The drop-cutter algorithm is the oldest and most stable. This shows a parallel finish toolpath but many variations are possible.


The waterline algorithm has some bugs that show up now and then, but mostly it works. This is useful for finish milling of steep areas of the model. A smart algorithm would use parallel finish on flat areas and waterline finish on steep areas. Another use for the waterline algorithm is "terrace roughing", where the waterline defines a pocket at a certain z-height of the model, and we run a pocketing path to rough mill the stock at this z-height. Not implemented yet but doable.


This shows offsets calculated with openvoronoi. The VD algorithm is fairly stable for input geometries with line-segments, but some corner cases still cause problems. It would be nice to have arc-segments as input also, but this requires additional work.


The VD is easily filtered down to a medial axis, better known as a V-carving toolpath. I've demonstrated this for a 90-degree V-cutter, but the toolpath is easy to adapt to other angles. Input geometry comes from truetype-tracer (which in turn uses FreeType).


Finally there is an experimental advanced pocketing algorithm which I call medial-axis-pocketing. This is just a demonstration for now, but could be developed into an "adaptive" pocketing algorithm. These are the norm now in modern CAM packages and the idea is not to overload the cutter in corners or elsewhere - take multiple light passes instead.


The real fun starts when we start composing new toolpath strategies consisting of many of these simple/basic ones. One can imagine first terrace-roughing a part using the medial-axis-pocketing strategy, then a semi-finish path using a smart shallow/steep waterline/parallel-finish algorithm, and finally a finish operation or even pencil-milling.

by admin at February 23, 2014 08:32 PM

Emergent Properties of Meat

Northern Georgia Waterfalls

We spent a few days exploring the waterfalls in Northern Georgia. Gorgeous area!

February 23, 2014 02:11 AM

February 22, 2014

Photon-correlation test with modulated LED

Further testing of the time-stamping hardware. The idea was to generate a weak beam of light with an intensity modulation at 1-12 MHz, count and time-stamp the photons, and see if the modulation can be measured with a correlation histogram.

To generate a stream of photons intensity modulated at a frequency f_mod I used this simple LED circuit driven by an adjustable DC power-supply and a signal generator. I didn't test the bandwidth of the circuit and LED, but it seems to work well for this test at least up to 12 MHz.


If we are given such a stream of photons, with an average rate of say 10 kphotons/s, how do we detect that the modulation at MHz frequencies is there? Note that the average rate of photons is much much smaller than the modulation frequency. If we receive photons at 10 kcounts/s there are on average 1000 cycles of modulation between each photon-event, when f_mod=10 MHz.

One way is to count the photons with a photon-counter, and time-stamp each photon. We should now on average see more/less photons every 1/f_mod seconds. So if we histogram all time stamps modulo 1/f_mod, we should get a sine-shaped histogram. This assumes that the signal generator creating f_mod and our time-stamping hardware share a common time-base.
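The modulo-histogram idea can be sketched in a few lines of Python. This is an illustrative simulation, not the actual acquisition code; the photon stream is faked by thinning uniform random arrival times with the modulated keep-probability:

```python
import math
import random

def mod_histogram(timestamps, period, nbins):
    # bin each time-stamp modulo the modulation period
    hist = [0] * nbins
    for t in timestamps:
        phase = math.fmod(t, period) / period        # 0 .. 1
        hist[int(phase * nbins) % nbins] += 1
    return hist

# Fake photon stream: uniform random arrival times, thinned with a
# keep-probability proportional to 1 + sin(2*pi*f_mod*t).
random.seed(1)
f_mod = 1e6                                          # 1 MHz modulation
arrivals = [random.uniform(0.0, 0.01) for _ in range(40000)]
photons = [t for t in arrivals
           if random.random() < 0.5 * (1 + math.sin(2 * math.pi * f_mod * t))]

hist = mod_histogram(photons, 1.0 / f_mod, 20)
# hist comes out sine-shaped even though the mean photon rate is far
# below f_mod: many modulation cycles pass between successive photons.
```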

This works quite nicely!

At the start of the video we see only the dark counts of the PMT. A DC voltage is then applied to the LED and the histogram rises up, but remains flat. When the modulation is applied we immediately see a sine-shape of the histogram. If we adjust the frequency, phase, or amplitude of the modulation we see a corresponding change in the histogram.

The video first has testing at f_mod=1 MHz, with a histogram modulo 1/f_mod = 1000 ns, and later with f_mod=12 MHz and the histogram modulo 83333 ps. The latter part of the video also has a least-squares sin() fit to the data.

This technique is very sensitive to mismatch between the applied frequency f_mod, and the histogram mod-time 1/f_mod. I first wanted to try this at 12 MHz, so I set the histogram mod-time to 83333 ps - and saw no sine-histogram at all! This was because I had rounded 1/f_mod to the nearest picosecond, and 1/83333 ps is actually 12 000 048 Hz - not 12 MHz!
At 12 MHz a deviation of 48 Hz is a few ppm in fractional frequency, and I later tested that changing f_mod by a fraction of ca 1e-8 makes the histogram slowly wander to the left or right. Any larger deviation and the correlation is lost.
All of this is similar to a lock-in technique, so the same principles should apply.
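The rounding trap is easy to reproduce with exact rational arithmetic (a sketch, not the actual analysis code):

```python
from fractions import Fraction

period = Fraction(83333, 10**12)   # 83333 ps, i.e. 1/(12 MHz) rounded to 1 ps
f = 1 / period                     # the exact frequency that mod-time implies

print(round(f))                    # 12000048 -- not 12000000!
offset = abs(f - 12_000_000) / 12_000_000
print(float(offset))               # ~4e-06, i.e. a few ppm
```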

by admin at February 22, 2014 07:42 AM

February 20, 2014

White Rabbit Fine-Delay time-stamp testing

I'm testing the White Rabbit Fine-Delay FMC. It has an ACAM TDC-GXP time-to-digital converter that time-stamps the leading edge of an input trigger signal with ~30 ps resolution.

Recent work by Alessandro Rubini introduced a raw_tdc=1 driver mode which on my computer is able to collect time-stamps at a maximum rate of ca 150 kHz. Each time-stamp is 24 bytes, so this corresponds to a data-stream of roughly 4 MB/s.

The video shows two graphs that update in real-time on the machine that collects time-stamps. The first is simply a (reciprocal) frequency counter where we count how many time-stamps arrive within a certain gate/window.

The second part of the video shows a modulo(tau) histogram where we bin time-stamps modulo tau into a histogram. The histogram was calculated for tau=100 us and an input frequency of 1 kHz was used. This results in the central peak in the histogram. I then slightly increased the frequency to 1.000010 kHz which makes the peak wander to the right. The peak on the right was produced by again tuning the input frequency to exactly 1 kHz. Similarly a lower input frequency of 999.970 Hz makes the peak wander to the left, and the peak around 20us was produced after tuning back to 1 kHz.
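As a back-of-envelope check (not from the original post, just arithmetic on the numbers quoted above), the expected wander rate of the peak for the 1.000010 kHz case comes out at about 10 µs of phase per second:

```python
f_nominal = 1000.000   # Hz: pulses land at a fixed phase modulo tau = 100 us
f_detuned = 1000.010   # Hz: the slightly increased input frequency

slip_per_pulse = 1.0 / f_nominal - 1.0 / f_detuned   # ~10 ns of phase slip per pulse
drift_per_second = f_detuned * slip_per_pulse        # ~10 us of histogram drift per second
```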

This hardware/software combination will be useful for collecting statistics and correlations in any experiments where a pulse type detector is used to measure something - provided the pulse-rate is below 150 kHz or so.

by admin at February 20, 2014 06:08 PM

February 19, 2014

Emergent Properties of Meat

Red: the perfect unit of measurement?

If you're like me, when you write software that has to store distance information in floating point, you shudder because you know you have to pick either inches or millimeters, leaving most common multiples of the other as numbers that are not exactly expressible in binary floating point.

So let me make a simple proposal: define red = 625nm, so called because light of 625nm is commonly perceived as red. Now, you can store all of the following as exact integers:

  • Any binary fraction of an inch down to 1/64inch (1/64in = 635 red)
  • Any 1/10 of an inch (1/10in = 4064 red)
  • Any multiple of 5um (5um = 8 red)
Unfortunately, the following are still not exact:
  • 1mil = 40.64 red
  • 1pspt = 1/72in ~= 564.44 red
  • 1/300in (common print resolution) ~= 135.46 red
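The exactness claims are easy to check with rational arithmetic; a small sketch using the definitions above:

```python
from fractions import Fraction

RED = Fraction(625)                    # the proposed unit, in nm
INCH = Fraction(25_400_000)            # 1 inch in nm (exact by definition)

# The claimed exact cases come out as integers:
print(INCH / 64 / RED)                 # 635  (1/64 inch)
print(INCH / 10 / RED)                 # 4064 (1/10 inch)
print(Fraction(5_000) / RED)           # 8    (5 um)

# ...while the problem cases do not:
print(INCH / 1000 / RED)               # 1016/25, i.e. 40.64 (1 mil)
print(INCH / 72 / RED)                 # non-integer (1 pspt)
```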

Here are some other visible-light inspired fundamental distance units and the common distances they can express exactly as binary fractions:

Table of magic lengths
Color name   Exact definition      Wavelength (approx.)   Exact binary fractions for multiples of
red          8192/11162109375 m    733.91 nm              1/64in, mil, pspt, dpi, nm
red          64/89296875 m         716.71 nm              1/64in, mil, pspt, dpi, 25nm
red          512/744140625 m       688.04 nm              1/64in, mil, dpi, nm
red          4/5953125 m           671.92 nm              1/64in, .01in, dpi, um
red          32/49609375 m         645.04 nm              1/64in, mil, 5nm
red          1/1587500 m           629.92 nm              1/64in, .01in, 5um
yellow       256/446484375 m       573.37 nm              1/64in, mil, pspt, dpi, 5nm
green        2/3571875 m           559.93 nm              1/64in, .01in, pspt, 5um
green        2048/3720703125 m     550.43 nm              1/64in, mil, dpi, nm
green        16/29765625 m         537.53 nm              1/64in, mil, dpi, 25nm
green        128/248046875 m       516.03 nm              1/64in, mil, nm
green        1/1984375 m           503.94 nm              1/64in, .01in, um
blue         1024/2232421875 m     458.69 nm              1/64in, mil, pspt, dpi, nm
violet       8/17859375 m          447.94 nm              1/64in, .01in, pspt, dpi, um
violet       64/148828125 m        430.03 nm              1/64in, mil, dpi, 5nm
violet       1/2381250 m           419.95 nm              1/64in, .01in, 5um
violet       512/1240234375 m      412.83 nm              1/64in, mil, nm
violet       4/9921875 m           403.15 nm              1/64in, mil, 25nm
computed by, frequency range for each color name from Wikipedia. Unlike the original 625nm "red" constant, common lengths are binary fractions rather than whole numbers. And of course these aren't nice integer multiples of nm either. I wonder why no oranges make the table.


February 19, 2014 09:17 PM

February 15, 2014

Emergent Properties of Meat

Tropical Rainforest

The gorgeous tropical rainforest of North Queensland. It was full of amazing plants and was lush everywhere you looked. These photos are from a couple of sites—the first set are from the area around Mossman Gorge, where we went on a Dreamtime Tour with an Aboriginal guide. He told us both about what traditional life had been like and about what life is like for people now, and which traditions they have been able to hold onto. The second set is from Cape Tribulation, a little further north, where we took a number of different walks in the forest.

Warning: Spider pictures inside.

February 15, 2014 02:35 PM

Pulse Stretcher - v1

A first try at this pulse stretcher circuit based on the LT1711 comparator. I need it for stretching a short 10ns pulse from a PMT.


The idea is to use the output latch of the LT1711. Once the output goes high, the combination C4 R4 keeps the latch pin (and thus the output) high for a time R*C. The Schottky diode is there to prevent the latch pin from swinging too far negative once the output goes low.


The PCB is made to fit into a BNC-BNC enclosure such as the ones from Pomona.


Messing up the pin-order of voltage regulators is becoming a habit! Note how the regulator is mounted the wrong way round compared to the PCB design - because I had the pin order wrong in my schematic.


I used a Keithley 50 MHz function generator to generate a 20ns long input pulse (the shortest possible from the Keithley) and the pulse-stretcher outputs a ca 483 ns output pulse. The prototype used a 1 nF capacitor with a 500 Ohm resistor which gives a nominal time-constant of 500 ns. The output pulse duration is far from constant and varies quite a bit from pulse to pulse.


This verifies that the propagation delay of the LT1711 in this circuit is within specifications, ca 4.5 ns. In addition to the comparator there is also maybe 70 mm of BNC-connectors, wires, and PCB-traces in the signal path, but that would add only ~350 ps to the propagation delay (assuming 2e8 m/s signal velocity).

One problem with this design is that it is sensitive to the load impedance connected to the output. With a 1 MOhm setting on the oscilloscope the pulse-length is correct, but switching to a 50 Ohm load impedance allows the capacitor to discharge significantly through the load impedance.
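As a quick sanity check on the numbers above (the parallel-load model here is only an assumed topology, for illustration):

```python
R_latch = 500.0            # ohm, the prototype's latch resistor
C_latch = 1e-9             # F, the prototype's latch capacitor
tau = R_latch * C_latch    # 5e-07 s = 500 ns nominal, as stated above

# If a 50 ohm termination ends up effectively in parallel with R (an assumed
# topology, for illustration only), the discharge is an order of magnitude faster:
R_load = 50.0
tau_loaded = (R_latch * R_load / (R_latch + R_load)) * C_latch   # ~45 ns
```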

Version 2 of this circuit should thus add an output buffer (fast, low-jitter!) that can drive both 1 MOhm and 50 Ohm loads. An adjustable trigger level for the -Input of the LT1711 comparator could also be useful.

by admin at February 15, 2014 01:35 PM

February 14, 2014


Join us at 2014 Midwest RepRap Festival (MRRF), Goshen, Indiana, USA

I'm really looking forward to this!!

The 2014 Midwest RepRap Festival in Elkhart County, Indiana is the place to be March 14-16th. This event is totally FREE to attend: there are no tickets and no entry fees. Just come and hang out all weekend with other 3D printer guys and gals, but please fill out the RSVP form by following the link above so we know how many people to expect. This event will feature build events, guest speakers and more!

Highlights of the event:

STATE OF REPRAP Come hear Josef Prusa speak on the state of Reprap.
TEST AND TUNE Experts will be on hand to help you troubleshoot issues or take your prints to the next level!  The event is FULL of people who want to see everyone become an expert.  Whether it’s a simple question about a software setting, a new mechanical design, recommendations on where to go to get into reprap or more, don’t hesitate to ask anyone at the event.
MEET THE MAKERS  Meet some of the big names in RepRap, like MaxBots (Mendel MAX dev), Josef Prusa (Prusa Mendel/i3 and more), Logxen (Smoothieboard Dev) and many more
CRAZY NEW REPRAPS  Nicolas Seward (RepRap WALLY, SIMPSON, LISA) will be showing off his newest reprap creations, and talking about the unique features of his designs
BUILD EVENTS  More to come soon on Build events ….
3D PRINTING CHALLENGES  See some of the most difficult prints take shape over the course of the weekend, and some fun printing challenges too, like the hand-fed extruder print challenge

by (jmil) at February 14, 2014 12:38 AM

February 11, 2014


Rolling A-star linking

Taking a break from all my other mindful notions to do some proper work on this stay-down linking job.

I have an A-star function that works on the underlying weave structure. The white zones refer to the weave which defines the interior of the contour, and the red zones are the places where it’s safe to go without colliding with the uncut stock.

Most A-star algorithms involve one start point in each tile which spreads out to all sides of the tile. But in my implementation the paths from the sides of the tile are pulled from the receiving end, so it’s possible to have two start points in the same tile with two paths going through, as shown above.

One slight change to this algorithm resulted in paths crossing each other in the cell, which shouldn’t be possible. (The yellow paths are all the elements of the A-star tree, and the green path is the final chosen shortest route.)

I spent a long time gazing at this “bug” wondering why it didn’t choose the shortest path in the direction of the yellow arrow, but instead went round the bottom to point a and round the top to point b. Turns out that, while my model can handle more than one path in the same tile, it can’t handle more than one path claiming the same side of a tile. When the growth of the A-star tree reaches point a, that claims that side of the tile (from the other side), which means the only way forward from the start is into the upper tile and back round.

So, my function DirectWeaveAstarConnecting(ptfrom, ptto) is not perfect, but it’s qualitatively good enough to work with, because it has the property that if it’s possible to fit a straight line between the end points, you get that straight line.

The next thing is finding the shortest path. I do this recursively (using an array rather than the call-stack).

Here are the structs in C++

struct astarcelllink {
    tilereference cpi;
    P2 ptback; 
    P2 ptfore;
};

struct chainstackelem {
    list<astarcelllink>::iterator it0; 
    list<astarcelllink>::iterator it1;
    bool btoremap; 
};

double Length(list<astarcelllink>::iterator it0, list<astarcelllink>::iterator it1) {
    double length = (it0->ptfore - it0->ptback).Len(); 
    while (it0 != it1) {
        ASSERT(it0->ptfore == (it0+1)->ptback); // points match in sequence
        it0 = it0+1; // advance before accumulating the next link
        length += (it0->ptfore - it0->ptback).Len(); 
    }
    return length; 
}

You can’t actually apply +1 or -1 to a Bidirectional Iterator, which is what you get with a list (there’s only ++ and --), but you know what it means.

Here’s the function that actually does the work:

void PullChainTight(list<astarcelllink>& astarcellchain)
    tlist<chainstackelem> chainstack;
    chainstack.push_back(chainstackelem(astarcellchain.begin(),, false));

    while (!chainstack.empty())
        chainstackelem cse = chainstack.pop_back();
        if (cse.it0 == cse.it1)

        list<astarcelllink>::iterator itextreme = FindExtremumLink(astarcellchain, cse.it0, cse.it1);
        if (Line(cse.it0->ptback, cse.it1->ptfore).SideDistance(itextreme->ptfore) < 0.001)
        if (cse.btoremap)
            list<astarcelllink> lastarcellchain;
            DirectWeaveAstarConnecting(lastarcellchain, cse.it0->ptback, cse.it1->ptfore);
            if (Length(lastarcellchain.begin(), < Length(cse.it0, cse.it1))
                // splice in the new sequence
                list<astarcelllink>::iterator it1n = it1+1;
                list<astarcelllink>::iterator it10p = (it0 != astarcellchain.begin() ? it0-1 : astarcellchain.end()); 
                astarcellchain.erase(it0, it1n);
                astarcellchain.splice(it1n, lastarcellchain);

                // btoremap=false to enable split in next loop, but no new path
                chainstack.push_back(chainstackelem((it10p != astarcellchain.end() ? it10p+1 : astarcellchain.begin()), it1n-1, false));

        // cut chain into two and recursively set up the subpaths
        chainstack.push_back(chainstackelem(it0, itm0, true));
        chainstack.push_back(chainstackelem(itm0+1, it1, true));

I’m glossing over one of the extra stages here, which is that, after creating the linking motion, I add in some extra obstructions in the form of these circles at key turning points and splice the connecting path to go round them.


Then we pull the result tight within the area defined by (a) the contour slice, (b) the stock engagement calculated along each weave fibre, and (c) a set of additional circles.

Finally, we insert further subdivisions within each of the cells the path passes through to bring it up to the appropriate sample tolerance, and get a nifty routing path like so:

The tight green mesh represents the un-cut stock.

The code is about 2300 lines all in one vast file with 8 struct definitions. It’s got vast inefficiencies and duplicate calculations (for example, it recalculates the stock engagement on every single side of every single cell that we enter, which is a factor of 2 out right there since you should be able to copy the engagement across). But we don’t need it to be perfect for now.

The next job is to drop it into place and apply it to real linking examples and watch how the whole thing fails because I’ve only developed and debugged it against this one example.

This is, in essence, my software development process: (1) create a test case that’s reasonably challenging, (2) invent a general-purpose algorithm that works against that test case, using the failures on that test case to learn more about the problem, (3) drop your function into the real world and see how it performs.

This is not the same as Test Driven Development, because I’m not writing test cases for every single add-statement in the whole algorithm like I know what I’m doing. I can pretty much count on my initial designs totally not working (a reasonably good assumption). I’ve probably deleted another 2000 lines beyond the ones that are here.

Any of this makes sense? Probably not. Too early to say.

by Julian at February 11, 2014 05:43 PM

February 10, 2014

ZeroMQ REQuest/REPly pattern

More zeromq testing. See also PUB/SUB.

This pattern is for a request/reply situation where one machine asks for data from another machine. The pattern enforces an exchange of messages like this:

REQuester always first sends, then receives.

REPlyer always first receives, then sends.


import zmq
import time
# request data from this server:port
server = ""
port = 6000
context = zmq.Context()
socket = context.socket(zmq.REQ)
socket.connect ("tcp://%s:%s" % (server,port) )
while True:
    tstamp = time.time()
    msg = "%f" % tstamp # just send a timestamp
    # REQuester: TX first, then RX
    socket.send(msg)        # TX first
    rep = socket.recv() # RX second
    done = time.time()
    oneway = float(rep)-tstamp
    twoway = done - tstamp
    print "REQuest ",msg
    print "REPly   %s one-way=%.0f us two-way= %.0f us" % ( rep, 1e6*oneway,1e6*twoway )
    time.sleep(1) # pause between requests; the sample output is at ~1 s intervals
# typical output:
# REQuest  1392043172.864768
# REPly   1392043172.865097 one-way=329 us two-way= 1053 us
# REQuest  1392043173.866911
# REPly   1392043173.867331 one-way=420 us two-way= 1179 us
# REQuest  1392043174.869177
# REPly   1392043174.869572 one-way=395 us two-way= 1144 us

The one-way delay here assumes we have machines with synchronized clocks. This might be true to within 1ms or so if both machines are close to the same NTP server.

The two-way delay should be more accurate, since we subtract two time-stamps that both come from the REQuester's system clock. Much better performance can probably be achieved by switching to C or C++.


import zmq
import time
port = 6000
context = zmq.Context()
socket = context.socket(zmq.REP)
# reply to anyone who asks on this port
socket.bind("tcp://*:%s" % port) 
print "waiting.."
while True:
    #  Wait for next request from client
    message = socket.recv() # RX first
    print "Received REQuest: ", message
    tstamp = time.time()
    sendmsg = "%f" % tstamp # send back a time-stamp
    print "REPlying: ",sendmsg
    socket.send(sendmsg) # TX second
# typical output:
# Received REQuest:  1392043244.206799
# REPlying:  1392043244.207214
# Received REQuest:  1392043245.209014
# REPlying:  1392043245.209458
# Received REQuest:  1392043246.211170
# REPlying:  1392043246.211567

by admin at February 10, 2014 02:54 PM

ZeroMQ PUB/SUB pattern

I'm playing with zeromq for simple network communication/control. For a nice introduction with diagrams see Nicholas Piel's post "ZeroMQ an introduction"

Here is the PUBlish/SUBscribe pattern, which usually consists of one Publisher and one or many Subscribers.


import zmq
from random import choice
import time
context = zmq.Context()
socket = context.socket(zmq.PUB)
socket.bind("tcp://*:6000") # publish to anyone listening on port 6000
countries = ['netherlands','brazil','germany','portugal']
events = ['yellow card', 'red card', 'goal', 'corner', 'foul']
while True:
    msg = choice( countries ) +" "+ choice( events )
    print "PUB-> " ,msg
    socket.send( msg ) # PUBlisher only sends messages
    time.sleep(1) # throttle the publisher to one message per second
# output:
# PUB-> netherlands yellow card
# PUB-> netherlands red card
# PUB-> portugal corner

And here is the Subscriber:

import zmq
context = zmq.Context()
socket = context.socket(zmq.SUB)
# specific server:port to subscribe to
server = ""
port = 6000
socket.connect("tcp://%s:%s" % (server, port))
# subscription filter, "" means subscribe to everything
socket.setsockopt(zmq.SUBSCRIBE, "") 
print "waiting.."
while True:
	msg = socket.recv() # SUBscriber only receives messages
	print "SUB -> %s" % msg
# typical output:
# waiting..
# SUB -> portugal corner
# SUB -> portugal foul
# SUB -> portugal foul

by admin at February 10, 2014 02:28 PM

February 03, 2014

Emergent Properties of Meat

Australian Coral Reefs

During our week in Far North Queensland, we took two snorkeling trips on the coral reefs. The first (and better) was Quicksilver Cruises' deep reef trip on a big boat where we snorkeled from a pontoon platform. The second (and not quite as good) trip was from the beaches of Fitzroy Island, which is much closer to the mainland.

The good part about the Fitzroy trip is that we stayed overnight on the island's resort so one or both of us snorkeled both mornings and both afternoons, as opposed to the hurry-hurry feel of the outer reef (not to mention that we also saw island plants and animals on a couple of hikes). The bad parts were that Ingrid had some minor seasickness from our trip on a smaller boat; coral (as opposed to sandy) beaches are uncomfortable to walk on; the water seemed to be cloudier and the coral less colorful compared to the deep reef, though after doing some automatic adjustments on the photos they look comparable.

We recommend getting "stinger suits" (lycra bodysuits) even if you aren't being told there are jellyfish in the water and even if you feel they look unflattering on you. The big benefit is that you basically don't have to worry about sunburn or sunscreen, as the lycra blocks UV. Just about the only thing exposed is your face, and hopefully you have it stuck underwater 90% of the time. I didn't burn my back and shoulders after hours of snorkeling—it was the 20 minutes in an inland freshwater pool later in the week when I finally got a sunburn worth complaining about!

Each time, we rented ("hired") an underwater camera (basic P&S cameras with underwater enclosures), and got a few good pictures from the process.

Please forgive us for not attempting to identify the coral or fish in each photo.

February 03, 2014 10:33 PM

January 31, 2014

Emergent Properties of Meat

Mosh automatic cleanup of dead sessions

I love mosh, using it to connect to a long-lived screen session from multiple computers (laptop, chromebook, $DAY_JOB, and phone).

One problem with it is that a mosh-server process that has no living client will linger for a very long time (forever?). This can happen for various reasons, such as letting a device's battery run down.

With some brainstorming help from the participants from #mosh, I came up with a way to automatically kill old mosh-server processes that probably represented defunct clients without killing ones that may represent working clients.

For a long time, my basic setup has been to run a command equivalent to: mosh myserver -- bin/cs where cs is a script which sets some environment variables and ends with exec screen, so I already had an excellent location to put cleanup code.

I also felt that the cleanup rule I wanted was: kill all mosh-server processes started from the same client. $SSH_CLIENT was suggested for this, but it's not useful since my portable devices naturally connect from different networks with different IP addresses (that's the point of mosh after all!) So I jumped to the conclusion that I needed a unique identifier. Why not involve a UUID?

As soon as I realized I needed to pass the UUID on the commandline or environment of cs, I realized that meant it would show up in the mosh-server commandline. So that led to the following snippet: ($i is already identified as being the UUID= argument):

    ps h --sort start_time -o pid,command -C mosh-server |  # mosh-servers, oldest first
        grep "$i" |                 # only those carrying our UUID= marker
        head --lines=-1 |           # drop the last line, i.e. spare the newest
        awk '{print $1}' |          # keep just the PIDs
        xargs --no-run-if-empty -n 1 --verbose kill

That seems to work! One final wrinkle, though: In mosh irssi connectbot, there's no way to specify the mosh commandline directly. However, there is "Requested mosh server". I created a wrapper script reading:

exec mosh-server "$@" -- bin/cs UUID=...
and used that script (with full path, in my case) as the configuration value.

Now I shouldn't have to worry about mosh-server processes piling up on my main linux box anymore.

January 31, 2014 06:05 PM

January 30, 2014


Python code for calculating Allan deviation and related statistics:

Sample output:

Cheat-sheet (see

Phase PSD                      Frequency PSD                  ADEV
1/f^4, "Black?"                1/f^2, Brownian, random walk   sqrt(tau)
1/f^3 ?, "Black?"              1/f, Pink                      constant
1/f^2, Brownian, random walk   f^0, White                     1/sqrt(tau)
f^0, White                     f^2, Violet                    1/tau

Phase PSD is frequency dependence of the the power-spectral-density of the phase noise.
Frequency PSD is the frequency dependence of the power-spectral-density of the frequency noise.
ADEV is the tau-dependence of the Allan deviation.

These can get confusing! Please comment below if I made an error!
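To make the cheat-sheet concrete, here is a minimal sketch of a non-overlapping Allan deviation computed from fractional-frequency samples. This is my own illustration, not the attached code; `adev`, `y`, `rate` and `taus` are names I have invented.

```python
import math

def adev(y, rate, taus):
    """Non-overlapping Allan deviation of fractional-frequency samples y,
    taken at `rate` Hz, evaluated at each averaging time in `taus` (seconds)."""
    results = []
    for tau in taus:
        m = int(round(tau * rate))              # averaging factor in samples
        if m < 1 or 2 * m > len(y):
            continue                            # tau not reachable with this data
        # average y over non-overlapping blocks of m samples
        blocks = [sum(y[i:i + m]) / float(m)
                  for i in range(0, len(y) - m + 1, m)]
        # sigma_y^2(tau) = 1/(2(M-1)) * sum_i (ybar_{i+1} - ybar_i)^2
        diffs = [(blocks[i + 1] - blocks[i]) ** 2
                 for i in range(len(blocks) - 1)]
        results.append((m / float(rate),
                        math.sqrt(sum(diffs) / (2.0 * len(diffs)))))
    return results
```

For white frequency noise (the f^0 row above) this estimator falls off as 1/sqrt(tau); an overlapping estimator would give better confidence at large tau.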

by admin at January 30, 2014 07:41 AM

Emergent Properties of Meat

Big Day Out

On our first full day in Port Douglas, we went on a day-long birding tour with Murray Hunt of Daintree Boatman Nature Tours. We first went on a morning boat tour with one other person, and then in the late morning and afternoon he drove us to a number of sites that provided numerous opportunities to photograph birds (and a few other things). I took about 900 photos. The first six are from Ingrid's camera, the rest are a selection of the better ones from mine.

January 30, 2014 01:01 AM

January 29, 2014

White Rabbit DIO vs. FDELAY PPS output test

I ran a test to compare the quality of PPS outputs from two White Rabbit SPEC/FMC combinations: a DIO (used as GM) and an FDELAY (used as slave). Here is the phase data measured with a time-interval counter. The nice clean traces are derived from a H-maser. We then lock a BVA (internally 2x multiplied to 10MHz) to this PPS signal. The GM node is locked to this 10MHz signal.

The average of each trace was removed, and the traces are offset for clarity.


The Allan deviations look like this. Both the DIO and FDELAY PPS outputs have Allan deviations about 5x worse than the BVA used as input clock for the GM.


by admin at January 29, 2014 03:07 PM

Dan Heeks's Milling

model exhaust cone

I made a model exhaust cone from aluminium, to fit a friend's 1/5th scale Bloodhound SSC model.
I used the manual lathe for most of it, but did the bore with a 16mm tool on the Bridgeport milling machine.
The details were done using the 4th axis on my Sieg KX1 and a 3mm milling cutter. I made a bit of a mistake when I forgot to set the profile operation to the surface, so it dug into the part. Hopefully still looks good enough, though.

by (Dan Heeks) at January 29, 2014 02:51 PM

January 27, 2014

Emergent Properties of Meat

Royal Botanic Gardens Melbourne

On the second day of our trip, we spent the morning at the Royal Botanic Gardens Melbourne. Here are a few of my photos from our visit. (The first three are Ingrid's.)

January 27, 2014 06:03 PM

January 25, 2014

Emergent Properties of Meat

Ctrl-digit to switch to tab on Linux Firefox

Since I switch between Linux Firefox, Windows Firefox, and Chrome, it would be nice if I could access tabs with the same shortcut everywhere. The path of least resistance turns out to be to make Linux Firefox understand Ctrl-<digit> for tab switching. So here's my first Firefox extension, which seems to do the trick:

Files currently attached to this page:


(tested on Iceweasel 26.0)

January 25, 2014 04:17 PM

January 24, 2014


A* routing on a flexibly subdivided weave

Sometimes it’s a relief to be certain that I’m doing something which is exactly what I am paid to do. All the rest of the mouthing off comes for free. Though maybe it’s important to always question what you are paid to do, so you don’t wind up wasting everyone’s time doing something that’s actually pointless.

After having got the A* routing through the weave structure to work based on my unpatented subdividing model of a 2D area that forms the geometric basis of the adaptive clearing non-gouging motion algorithm, I noticed that it did not give enough sample points for a linking pass to be within tolerance. The route, which must avoid uncut stock, requires a greater sample rate than the basic weave cells. These cells were sampled to a resolution along the boundary contour so that it would be within tolerance, but it is insufficient to handle the uncut stock structures that exist in the interior space of this area partway through the clearing cycle.

There are two options to deal with this insufficiency of sampling. Either add in an additional sample rate system, or improve the basic underlying weave sampling structure.

The additional sample rate system is the more obvious way to do it, because it tackles the problem as it presents itself. One simply samples for collisions with the stock along the connecting path at several intermediate points within each wide square weavecell, and then codes up some kind of sideways deviation around the obstructions, including smooth linking and so forth.

The problem with this approach is that eventually you wind up coding the stock collision avoidance system two times in two different ways. The first way is on the underlying weave structure, and the second way is for paths that cross a single weave cell. Invariably, you’d code this second way as trivially as possible by making the assumption that what happens within a cell is going to be quite simple, so that it is probably sufficient to displace points of the path sideways and perpendicular to the underlying line across the cell.

This isn’t going to be completely easy, but it could be done, and it would be good for 99.95% of cases, until you encounter examples where the shape of the stock within the cell has a little more complexity. Now what are you going to do? Most likely, you’re going to abandon them, because it would be a whole lot of extra shoddy work to handle that 0.05% of annoying cases, and since they’re rare it would be reasonable to fudge the issue — setting aside the fact that even a 0.05% failure rate will seem to happen all the time.

Alternatively, I’ve programmed a way to overlay an additional subdividing structure onto the already subdivided weave, where each cell is binary chopped down in either direction in a not-very-efficient set of code, but which at least works.
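The binary chopping can be sketched like this on axis-aligned boxes (a toy stand-in for weave cells; the alternating split axis and the depth cap are my assumptions, not taken from the real code):

```python
def subdivide(cell, needs_refining, axis=0, depth=0):
    # cell is an axis-aligned box (x0, y0, x1, y1); chop it in half,
    # alternating between the x and y directions, until the predicate
    # says the sampling is fine enough (or we hit a depth cap)
    if depth > 12 or not needs_refining(cell):
        return [cell]
    x0, y0, x1, y1 = cell
    if axis == 0:
        xm = 0.5 * (x0 + x1)
        halves = [(x0, y0, xm, y1), (xm, y0, x1, y1)]
    else:
        ym = 0.5 * (y0 + y1)
        halves = [(x0, y0, x1, ym), (x0, ym, x1, y1)]
    result = []
    for h in halves:
        result.extend(subdivide(h, needs_refining, 1 - axis, depth + 1))
    return result
```

Here `needs_refining` stands in for whatever test decides the sampling inside a cell is too coarse, such as the uncut stock within it having more complexity than a single crossing.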

In the image above, the white lines are the boundaries of the weave cells, the green lines are the boundaries of extra subdivided cells (done arbitrarily for now), the yellow is part of the final connecting path, and the red lines are parts of the A* search tree.

Here are the underlying structures, all thrown into the current 1700-line file that does everything:

struct astarcellside;   // side of a single (subdivided) cell

struct astarcell;       // single cell (subdivided) structure
  // contains astarcellside acs[4]

struct astarcellhead : astarcell;   // single cell associated with weavecell
  // contains vector<astarcell> splitsubcells

struct astarcelllink;   // path across a single cell
  // contains reference to a weavecell and an index into splitsubcells

struct cellgoend; 	// element of priority queue
  // contains reference to a weavecell, an index into splitsubcells, 
  // and a reference to one of the four astarcellside sides of the cell
  // priority on (dist+distremain) where distremain respects the girth line

There are probably loads of bugs in the code. I’m going to have to handle subdivisions hitting exactly on contour points, or running coincident with contour lines. I hope there are enough ASSERTs and debug code there to direct the corrections efficiently. In theory, there is nothing to stop this method from achieving 100% of the result because it is not limited by any assumption about the level of geometric complexity within a single given weavecell.

On we go. Next up, adaptively target the subdivisions of the cells along the path until it achieves the appropriate sample rate.

by Julian at January 24, 2014 06:07 PM

Emergent Properties of Meat

Got A Chromebook

I have been wanting a travel computer lighter than my 15" laptop but more capable than my Nexus 7 combined with a bluetooth keyboard. And, frankly, I had just had gadget envy for the Samsung ARM Chromebook since it was announced.

When I'm on the go, my needs are basically:

  • Modern web browser with adblock and greasemonkey
  • A ssh (or, better yet, mosh) client
  • A comfortable keyboard
  • Fairly light
In particular, some things I didn't need are
  • Hundreds of GB of local storage
  • Four or eight cores of computing power
  • Full compatibility with desktop Linux or x86
I felt it would be nice to have
  • Long battery life
  • Ability to install Linux or customized ChromiumOS if I decided I needed it
  • Inexpensive enough to buy without being 100% sure it'll meet my needs

From my experience with Chromium-browser on Debian, I know that Chrome is an adequate web browser with Adblock, Ghostery, and TamperMonkey (though it's not 100% identical in function to Firefox), and I was aware of ssh apps for it. I also tried out the Samsung's keyboard at a local Best Buy and found it adequate. So when Amazon had them for $230 last week, I decided to treat myself.

I've now had the device for a few days, and it's been a positive adventure.

I immediately placed the device in developer mode and installed debian wheezy via crouton, but I'm not presently using anything in my debian chroot. For a short time, I used Secure Shell to ssh to the wheezy chroot and run mosh there.

I found that after a few customizations, mosh-chrome is a perfectly adequate mosh client. After I resolved a problem building it, I customized the colors to match my rxvt, added the ability to send a remote command, and hardcoded my default connection settings. However, on at least two occasions, mosh-chrome has failed to resume its session after suspending and changing wireless networks, so I'm not sure it's as reliable as desktop mosh.

Other thoughts:

  • I had to rewrite one of my private GreaseMonkey scripts to work in TamperMonkey.
  • Adblock and ghostery both seem to work adequately on the ChromeBook.
  • I do miss having a compose key.
  • I wish there was a SIP client I could use with callcentric, but I haven't found one yet.
  • I haven't had a chance to assess the battery lifetime yet.
  • Emacs users rejoice, you can map the "search" key (in the position of "caps lock" on a standard PC keyboard) to Control.

January 24, 2014 05:06 PM

January 23, 2014

Emergent Properties of Meat

pnacl vs Debian Jessie

If you get an error like
** Signal 6 from trusted code: pc=2abee0b751e5
when building pnacl software, it may be because your /run/shm is mounted noexec. You can fix it (for your current boot only) with:

sudo mount -o remount,exec /run/shm

As /run/shm is not mounted from /etc/fstab, I'm unsure how to make a durable fix.

Related naclports bug

January 23, 2014 10:18 PM

January 14, 2014


Multi-threading progress object

A quick day’s work to make a multi-threading percentage progress handling object in Python that handles residuals.

Let’s start with the test code (which I wrote after the working code — take that TDD!):

class ACProgress:
    def __init__(self):
        self.sumprogress = 0
    def ReportProgress(self, lprogress):
        self.sumprogress += lprogress
        print "The progress is at:", (self.sumprogress *100), "percent"

def addprogress(a, n):
    for i in range(n):
        acprogress.ReportProgress(a / n)
acprogress = ACProgress()
addprogress(1.0, 25)

This prints numbers up to 100% in steps of 4%.

Now we want to do this in threads.

import threading

acprogress = ACProgress()
ts = [ ]
for j in range(20):
    t = threading.Thread(target=addprogress, args=(1.0/20, 25))
    t.daemon = True
    t.start()
    ts.append(t)

for t in ts:
    t.join()
print "Final progress is", (acprogress.sumprogress *100), "%"

This steps up in increments of 0.2% (a 25th of a 20th) in 20 different threads, but the final result can vary between something like 97.6% and 100% due to the lack of thread safety in ReportProgress(), specifically on the line:

self.sumprogress += lprogress

This is because the code actually evaluates in the following 3 steps:

Step 1:  a = self.sumprogress
Step 2:  b = a + lprogress
Step 3:  self.sumprogress = b

So, if two threads execute these lines in tandem, they’ll both take local copies of sumprogress into a, and each will write back a b that includes its own increment but not the other’s. Whichever thread leaves the function last will set the value, and it will therefore be an undercount.

Now, normally, you work round this with a thread lock that blocks other threads from entering and simultaneously executing the same region of code to cause this type of nuisance. This is standard practice, but I didn’t want to do it.

Instead, let’s begin with the list of Python atomic operations, which include array.append() and array.pop() among them.
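If you want to convince yourself that list.append() really is safe under CPython’s GIL, here is a quick demonstration (not a proof; all names are mine):

```python
import threading

items = []

def worker(n):
    for i in range(n):
        items.append(i)   # one atomic list-append per iteration, no lock

threads = [threading.Thread(target=worker, args=(10000,)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# no lost updates: every append landed exactly once
assert len(items) == 80000
```

Running the same experiment with a shared read-modify-write counter in place of append() can lose counts, exactly as described for sumprogress above.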

Suppose we rewrite the object like so:

class ACProgress:
    def __init__(self):
        self.progressstack = [ ]
    def ReportProgress(self, lprogress):
        self.progressstack.append(lprogress)   # atomic
        print "The progress is at:", (sum(self.progressstack)*100), "percent"

Now it’s always going to sum to 100%, because the progressstack list will faithfully accumulate everything that’s appended into it.

Unfortunately, this is a little inefficient when we start having millions of tiny progress increments. What we need is a combination of the two:

class ACProgress:
    def __init__(self):
        self.sumprogress = 0
        self.progressstack = [ ]
        self.tokenlist = [ 999 ]

    def ReportProgress(self, lprogress):
        try:
            token = self.tokenlist.pop()          # atomic
        except IndexError:
            self.progressstack.append(lprogress)  # will be accumulated later
            return

        # add in the progress
        self.sumprogress += lprogress
        while self.progressstack:      # non-empty false positives not possible
            self.sumprogress += self.progressstack.pop()
        print "The progress is at:", (self.sumprogress*100), "percent"

        self.tokenlist.append(token)   # return the token

Who knows if this is more efficient because I have avoided use of a Lock? Avoiding Locks is good. The magic happens at the pop() where only one thread at a time can get the single value from the array; all others get the exception.

What more do I need?

I need to handle residuals.

What are residuals?

Well, the way the progress works is that I have divided up my operation into several large pieces, which I have distributed to the threads, and these threads do their work in thousands of little pieces, but we’re never quite sure how many.

The trick is to think in terms of overall proportion of the work package in relation to the job, not the time for the work.

Focus on creating a progress meter that goes from 0% to 100% sort of evenly.

Let’s say we have 7 fields we need to plough, and 3 farmers with tractors to do the ploughing. Only one farmer can work in a field at a time. We need to rate the progress towards completion through the day on the progress meter.

First, allocate percentages of the overall job per field so that they add up to 100%. Suppose there are 3 big fields and 4 small fields. Then we can assign 20% to each of the big fields, and 10% to each of the small fields.

Now each farmer goes to a field and starts work, knowing the percentage of the allocation for that field. Suppose he picks a small field and starts ploughing. Every time he completes a furrow line he’s going to phone back to the Controller with a number until the work is completed. His numbers should all add up to 10%, which is the allocation for this field.

In the beginning he looks at the field and thinks he can do it in about 5 furrow lines. So after each furrow he calls back with the number 2%. If he guessed correctly, then his numbers will all add up to 10%, which is the allocation.

All the farmers across the other fields are doing the same concurrently, and the Controller is adding up a progress value that advances to 100% relatively evenly throughout the day until the whole job is done.

But wait, the farmer in the small field estimated the furrows wrongly. When he gets to 4th furrow he can see there are actually 2 more furrows to do to make a total of 6. He can’t give 2% to each one of them, because that would add up to 12%, which is beyond his allocation. Instead he cuts down on his reporting, so the last 2 rows are given only 1% each.

This is not a big problem, because the overall progress meter will still look okay as it advances through the day.

Chances are he will have seen that his estimation was out a little earlier on and could spread the numbers more evenly.

But what happens if he over-estimates and there are only 3 furrows needed? Does he apply 2% on the first 2 furrows and lump 6% onto the last one? That would be uneven. The progress meter would suddenly jump forward. It’s usually a mess because towards the end of the work you’re never sure how many scraps are left. You could potentially be left with a very large unallocated amount.

So instead of creating an unnecessary bump, what he does when he completes a field is phone back with a residual — the part of his allocation that he did not use.

The residuals get cumulatively subtracted from a limitprogress value that starts at 100%. After the first farmer has returned his residual of 4%, the Controller knows that when everything is completed at the end of the day the sumprogress will only add up to 96%. This limitprogress will gradually decrease throughout the day as more residuals come in. And instead of setting the progress meter tracking the sumprogress value, it should be set to sumprogress/limitprogress, so that at the end of the day when the total job is done it will have reached 100%.

But still, when the residuals get added in, there would be a bump because, say, when the sumprogress is at 70%, and the limitprogress is reduced from 100% to 90%, the progress meter will jump from 70% to 77%.

What we need to do is spread that bump between the remaining 30% of the progress meter’s progress.

I do this by also providing a lowerprogress value, which starts at 0.0. For a given sumprogress, the progress meter is calculated to be:

  (sumprogress - lowerprogress) / (limitprogress - lowerprogress)

And when you add in a new residual, the values of the two ends change like this:

  lowerprogress += residual * (sumprogress - lowerprogress) / (limitprogress - sumprogress)
  limitprogress -= residual

This means that there isn’t a bump on the progress meter when we introduce the residual, because:

(sum - lower -  res * (sum - lower) / (limit - sum)) / 
                        (limit - res - lower -  res * (sum - lower) / (limit - sum))
    = ((sum - lower)*(limit - sum) -  res * (sum - lower)) / 
                        ((limit - res - lower)*(limit - sum) -  res * (sum - lower))
    = ((sum - lower)*(limit - sum - res)) / 
                        ((limit - res - lower)*(limit - sum) -  res * (sum - lower))
    = ((sum - lower)*(limit - sum - res)) / 
                        ((limit - lower)*(limit - sum) -  res * (limit - sum + sum - lower))
    = ((sum - lower)*(limit - sum - res)) / 
                        ((limit - lower)*(limit - sum - res))
    = (sum - lower) / (limit - lower)

In practice, to avoid a division by zero, it’s better to put, say, 90% of the residual into the limitprogress, and the remaining 10% into the sumprogress, because when each field is done it’s okay to have at least a small step forward.
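Putting the residual arithmetic together in one place, here is a minimal single-threaded sketch (class and method names are mine; the real object would combine this with the token trick above):

```python
class ResidualProgressMeter:
    # meter = (sum - lower) / (limit - lower), with residuals folded in
    # so that the displayed value never jumps
    def __init__(self):
        self.sumprogress = 0.0
        self.lowerprogress = 0.0
        self.limitprogress = 1.0

    def meter(self):
        return ((self.sumprogress - self.lowerprogress) /
                (self.limitprogress - self.lowerprogress))

    def report_progress(self, dprogress):
        self.sumprogress += dprogress

    def report_residual(self, residual):
        # move both ends of the scale at once so meter() is unchanged
        self.lowerprogress += (residual *
            (self.sumprogress - self.lowerprogress) /
            (self.limitprogress - self.sumprogress))
        self.limitprogress -= residual
```

Because report_residual() shifts both ends of the scale together, meter() reads the same immediately before and after the call, and the leftover work is spread over the remaining span of the display.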

by Julian at January 14, 2014 06:47 PM

January 12, 2014

MetaRepRap Soup

January 10, 2014


A minor A* improvement

The stay-down linking code relies on an implementation of the A* search algorithm to quickly find a connecting path between the start point and end point by spreading a network of paths in an optimal way.

There’s a couple of nice animations from Wikipedia showing what they look like.

Dijkstra’s A*

Suppose we have a network of paths, like so:

The path from Start to A and the path from Start to B rank the same in the priority queue because:

length(Start, B) + distance(B, End) = length(Start, A) + distance(A, End)

(The length function is the length of the green path, while the distance function is the length of the yellow path.)

According to the A* algorithm, the next step should extend outwards from the point B before we go from A on the basis that there could potentially be a direct path that takes us all the way from B to the End point that is shorter than the best one we are going to get by extending A.

But this is impossible, because the theoretical best direct path cuts through many paths (including the Start to A one) before it reaches End, and any one of these paths could be substituted up to the point of intersection into that Start-B-End path to make it shorter, which is illogical, captain.

Therefore, if there is going to be a shorter path from Start that passes through B, then it will have to avoid at the very least the girth of the search tree, defined as a line connecting two extreme points, like so:

And when you use this to increase the value of distance(B, End) in the formula above to the length of the two segments required to get round the girth line on either side (whichever is shorter), you don’t get quite so much unnecessary sprawl near the Start point. You can add more girth lines if you like.
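A sketch of the girth-adjusted heuristic with 2D points as tuples (all function names are mine; `girth` is the segment between the two extreme points of the search tree):

```python
import math

def _dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def _orient(a, b, c):
    # twice the signed area of triangle abc
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def _crosses(p, q, a, b):
    # strict proper intersection of segments pq and ab
    return (_orient(p, q, a) * _orient(p, q, b) < 0 and
            _orient(a, b, p) * _orient(a, b, q) < 0)

def girth_distance(p, end, girth):
    """Lower bound on path length from p to end when the straight line
    is blocked by the search tree's girth segment."""
    g0, g1 = girth
    if _crosses(p, end, g0, g1):
        # must get round one end of the girth line or the other
        return min(_dist(p, g0) + _dist(g0, end),
                   _dist(p, g1) + _dist(g1, end))
    return _dist(p, end)
```

In the priority queue, `girth_distance(B, End, girth)` replaces the plain `distance(B, End)`; by the argument above any real path from B must clear the girth line, so the larger estimate is still a valid lower bound and A* still returns the shortest route.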

It’s a small improvement, but practically for free, and may make a big difference for a lot of realistic cases where there is one major obstruction.

by Julian at January 10, 2014 04:22 PM

January 04, 2014

MetaRepRap Soup

January 02, 2014 (MetaLab)

Emergent Properties of Meat

Eek, ads

I'm experimenting with ads on my blog again, as well as google analytics. If you don't like these technologies, then install privacy and ad blocking software like adblockplus or ghostery. I don't mind a bit if you do—I wouldn't surf the internet without blocking ads either. But I also wouldn't mind making a few bucks off of people who will click ads.

January 02, 2014 03:07 PM

December 30, 2013

AD9912 DDS Test

Update: it turns out the PLL filter components were not connected at all during the first tests I did :) Here's a picture that compares the schematic against the actual board. When I changed the 0R resistor from position "R3" to position "R5"  the PLL started behaving much more nicely. If anyone from AD is reading - please update your documentation!


With this change, and a 66x PLL multiplier giving a 10MHz x 66 = 660 MHz SYSCLOCK I get quite nice output:

Fout100M_Span50M Fout100M_Span400M Fout100M_Span500k

The output is set to 100 MHz and has an amplitude of ~0 dBm. There are -60 dBm spurs at +/- 50 kHz (not sure why?), -70 dBm spurs at +/- 10MHz, and a -65 dBm second harmonic at 200 MHz.

If I activate the "2x Reference" setting which detects both rising and falling edges of the input clock, and use that with a 40x PLL for a 10MHz x2 x40 = 800 MHz SYSCLOCK I still get very strong spurs at 10 MHz:


I have been testing this AD9912 DDS evaluation board:

So far the results are a bit strange, with output as advertised only when no external input clock is supplied.


by admin at December 30, 2013 03:07 PM

December 22, 2013

Comparing GPS PPP solutions

Update3 on 2014-02-20: by popular demand, screenshots of options (at least roughly correct...) used in RTKPOST to get the results shown:


Update2: gLAB data now at 30s intervals, and an error-plot that compares CSRS to gLAB clock offset.



Update: using the latest RTKLib from github on Ubuntu, I now got this, with the lat/lon/height results between CSRS-PPP and RTKLib in better agreement. There is still a 4ns offset between the clock results - which curiously is about 0.5*ZTD, but this may be just a coincidence.


GPS based Precise Point Positioning is used worldwide to compare clocks against each other and maintain UTC.

The idea is to connect your clock to a dual-frequency GPS receiver that collects data (in RINEX format). The RINEX data is then post-processed by using satellite orbit and clock information available (with delay) from the IGS. While normal GPS has a precision of maybe 1-10 meters (~10 ns clock offset/noise), PPP can achieve <1 cm precision (<1 ns clock offset).

I tested a few of the PPP algorithms I found by running them on the same datafile (KAJA2900.13O) and got the results shown below.

If anyone wants to try this at home you will need the following from the IGS (I used RTKGet, a tool that comes with RTKLib, to download these):

  • clock data: CLK file igs17624.clk
  • orbit data: SP3 file igs17624.sp3
  • NAV file brdc2900.13n
  • antenna correction: ANTEX file igs08.atx
  • earth rotation parameters: ERP file igs17627.erp
  • possibly also an ocean tide loading BLQ file, available from Onsala Space Observatory


The CSRS-PPP and ESA-gLAB clock offset solutions agree reasonably well, but the clock offsets of the RTKLib/JPL-APPS solutions differ by 3-4 nanoseconds - which is a lot if we are using this for monitoring/steering an atomic clock. I'm not sure why this happens - if anyone knows how to reproduce the CSRS-PPP results with RTKLib please let me know!
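Comparing two solutions amounts to differencing the clock offsets on common epochs and summarizing the result. A small sketch of that bookkeeping (the numbers below are illustrative only, not the real CSRS/RTKLib output):

```python
import statistics

def compare_clock_solutions(sol_a, sol_b):
    """Mean and standard deviation (ns) of the difference between two
    PPP clock-offset solutions, each a dict keyed by epoch."""
    common = sorted(set(sol_a) & set(sol_b))
    diffs = [sol_a[t] - sol_b[t] for t in common]
    return statistics.mean(diffs), statistics.pstdev(diffs)

# illustrative clock offsets in ns at 30 s epochs -- made-up values
csrs   = {0: 12.1, 30: 12.3, 60: 12.2}
rtklib = {0: 15.9, 30: 16.1, 60: 16.3}
mean_ns, std_ns = compare_clock_solutions(csrs, rtklib)
print(f"offset {mean_ns:.2f} ns, scatter {std_ns:.2f} ns")
```

A constant mean difference with small scatter (as seen here between CSRS-PPP and RTKLib) points at a systematic modelling difference rather than noise.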

  • CSRS-PPP is an online web-based tool that returns results by e-mail. AFAIK the same code is used by the BIPM to maintain UTC.
  • JPL-APPS is a similar web-service by NASA.
  • ESA gLAB and RTKLib are open-source software packages for GPS/GNSS processing.

See also: A Comparison of Free GPS Online Post-Processing Services


by admin at December 22, 2013 08:43 PM

December 21, 2013

Emergent Properties of Meat

First few weeks on Digital Ocean

As I mentioned recently, I moved my hosting to Digital Ocean (link is referral code). All these comments apply to a 20GB instance I have operated in the nyc2 region for about 2 weeks, the sum total of my experience with Digital Ocean.

My needs are modest: I run this personal blog (python), a moin wiki, and some miscellaneous other web-facing stuff. I've selected the most basic 20GB disk / 512MB RAM / 1 core droplet size, which costs $5 a month.

Overall I'm happy with what I'm getting for the price I'm paying, and my sites are substantially more responsive than when I operated them on dreamhost. But there are some caveats:

  • The available graphs (instantaneous transfer, CPU, and disk) are unreliable -- sometimes it is hours or even days between updates, sometimes multiple updates per hour (based on communication with DigitalOcean, this problem may be worst in the nyc2 region)
  • "Real Time" graphs never seem to be realtime. If you want to look at a fresh graph, you have to reload the page and then click "graphs".
  • Graphs aren't available via API. Discussion of graphs here
  • I/O, Bandwidth and pystones are somewhat variable
    • Bonnie++ zcav gives 579MB/s (std dev is 40% of mean) over a 20GB /dev/vda. This apparently isn't data-dependent compression: the fast spots move around from run to run
    • Other performance measures vary as shown below (now based on 818 hourly tests):
      statistic   mean       std dev             units
      pystones    103229.1   ± 15757.1 (15.3%)   Pystone/s
      ping        14.9       ± 1.1 (7.3%)        ms
      down        24.5       ± 9.0 (36.6%)       Mbit/s
      up          49.7       ± 29.6 (59.6%)      Mbit/s
    • Apache file transfer as measured by 'ab' from remote is not as fast as speed measured by speedtest-cli, but that's probably my own lack of tuning
    • Contrary to my expectations, there was not a clear daily or weekly component to performance variations.
  • No IPV6 yet. They recommend setting up IPV6 tunneling. Here's the discussion at DigitalOcean. I've set up tunneling and it seems fine, with very low latency to their New York POP.
  • You have to host your own DNS to add AAAA records. This is no big deal as I already hosted my own DNS.
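The "± (…%)" figures above are just the mean and standard deviation over the logged hourly samples; a trivial sketch of that aggregation (sample values are made up for illustration):

```python
import statistics

def summarize(samples):
    """Mean, sample std dev, and std dev as a percentage of the mean."""
    m = statistics.mean(samples)
    s = statistics.stdev(samples)
    return m, s, 100 * s / m

# e.g. ping times (ms) from a few hourly runs -- illustrative only
pings = [14.1, 15.2, 13.8, 16.0, 14.6]
m, s, pct = summarize(pings)
print(f"{m:.1f} ± {s:.1f} ({pct:.1f}%) ms")
```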

Update, December 25, 2013: An earlier version of this page was inaccurate, particularly about pricing. An e-mail from digitalocean clarified that the price of the small droplet is capped at $5/month (and so on for other droplet sizes), so my (now-deleted) calculations based on per-minute pricing were wrong. They also clarified that IP addresses are never changed for a droplet as long as it is active, and that the degree of performance variation I measured is more or less what was expected. I also deleted some items about bandwidth quotas and monitoring because apparently no bandwidth charges are being made at present.

Update, January 9, 2014: Updated the statistics based on 818 hourly tests. I got tired of getting the hourly results in my mailbox, so that's the last update I expect to do.

December 21, 2013 03:53 AM

December 13, 2013

Emergent Properties of Meat

Benchmarking ungeli on real data

Since the goal of this little project is to actually read my geli-encrypted zfs filesystems on a Linux laptop, I had to get a USB enclosure that supports drives bigger than 2TB; I also got a model which supports USB 3.0. The news I have is:

Ungeli and zfs-on-linux work for this task. I was able to read files and verify that their content was the same as on the Debian kFreeBSD system.

The raw disk I tested with (WDC WD30 EZRX-00DC0B0) gets ~155MiB/s at the start, ~120MiB/s in the middle, and ~75MiB/s at the end of the first partition according to zcav. Even though ungeli has had no serious optimization, it achieves over 90% of this read rate when zcav is run on /dev/nbd0 instead of /dev/sdb1, up to 150MiB/s at the start of the device while consuming about 50% CPU.
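A zcav-style measurement is nothing more than timed sequential reads at different offsets. A rough sketch of the idea (block size and count are arbitrary choices of mine, and it works on any file or block device you can open read-only):

```python
import os
import time

def read_rate_mib_s(path, offset, block=1 << 20, blocks=64):
    """Sequential read throughput (MiB/s) starting at `offset` bytes."""
    fd = os.open(path, os.O_RDONLY)
    try:
        os.lseek(fd, offset, os.SEEK_SET)
        t0 = time.perf_counter()
        done = 0
        for _ in range(blocks):
            chunk = os.read(fd, block)
            if not chunk:          # hit end of device/file
                break
            done += len(chunk)
        dt = time.perf_counter() - t0
        return done / (1 << 20) / dt
    finally:
        os.close(fd)

# e.g. compare the raw partition against the decrypted nbd device:
# read_rate_mib_s("/dev/sdb1", 0)  vs  read_rate_mib_s("/dev/nbd0", 0)
```

Note that the page cache will inflate repeat runs on a file; zcav itself reads the device sequentially, which largely avoids that.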

(My CPU does have AES instructions but I don't know for certain whether my OpenSSL uses them the way I've written the software. I do use the envelope functions, so I expect that it will. "openssl speed -evp aes-128-xts" gets over 2GB/s on 1024- and 8192-byte blocks)

Unfortunately, zfs read speeds are not that hot. Running md5sum on 2500 files totalling 2GB proceeded at an average of less than 35MB/s. I don't have a figure for this specific device when it's attached to (k)FreeBSD via sata, but I did note that the same disk scrubs at 90MB/s. On the other hand, doing a similar test on my kFreeBSD machine (but on the raidz pool I have handy, not a volume made from a single disk) I also md5sum at about 35MB/s, so maybe this is simply what zfs performance is.

All in all, I'm simply happy to know that I can now read my backups on either Linux or (k)FreeBSD.

December 13, 2013 11:26 PM

Optics adapter plate

Mechanical design for an optics assembly:

I made an adapter plate between the XYZ translation stage and the 2"/1" lens-rings. The XYZ stage has a 50 mm x 50 mm M6 thread pattern, and the optics are mounted on a straight line with M4 screws through the adapter plate. The M6 countersinks are 12mm diameter and 7mm deep, while the M4 countersinks are 8mm diameter and 5mm deep.
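The hole pattern is simple enough to generate programmatically if the plate ever needs re-machining. A sketch using the dimensions from the text; the origin at the plate centre and the 30 mm pitch of the M4 line are my assumptions, not from the post:

```python
# M6 clearance holes on the 50 mm x 50 mm square pattern, centred on origin
m6_holes = [(x, y) for x in (-25.0, 25.0) for y in (-25.0, 25.0)]

# M4 holes for the lens rings on a straight line through the centre;
# the 30 mm pitch is an assumed value for illustration
m4_pitch = 30.0
m4_holes = [(i * m4_pitch, 0.0) for i in (-1, 0, 1)]

# countersink specs from the text: (diameter mm, depth mm)
countersink = {"M6": (12.0, 7.0), "M4": (8.0, 5.0)}
```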

[Images: adapter_plate, adapter_plate_machined]

In real life this looks almost like the CAD model:

[Images: pmt_optics_front, pmt_optics_rear, pmt_optics_top]

by admin at December 13, 2013 07:08 PM

December 10, 2013


Homing in on an A* structure

Of course, I could be going down another blind channel. But for the moment it looks like it is getting closer as I progressively fiddle with the search algorithm and allow it to pick the exit points from each cell and not stick to the mid-point of each edge.




I hacked the function over and over again, trying to make sure each version still worked, and ignored any optimization. This is now the fourth complete rewrite.

The most important change was to alter the structure from paths being defined by a sequence of points, to paths being defined as a sequence of edges.

Specifically, when you have a piecewise linear path, you can represent it either as a sequence of points

[p0, p1, p2,..., pn]

or as a sequence of edges

[ {p0,p1}, {p1,p2},... {pn-1,pn} ]

Obviously you always go for the points representation first, because it looks the simplest and avoids the redundant information of duplicated points. The redundancy matters not so much as a waste of memory, but because the duplicated points can get out of sync. What happens if your sequence goes:

[ ..., {a,b}, {b',c},...]

and then later b != b’? You’re leaving yourself open to inconsistencies.

The problem with the pointwise representation is that it doesn’t represent the whole of the geometry.

What happens when you get an intersection that hits on the edge between two points? How do you reference it?

One could establish the convention that any particular point (except the first one) represents the edge immediately preceding it.

Often the edge carries more associated data than the point, such as its length, its direction, and the cell in which it is embedded. Do these also, by convention, get associated with that same end point? Pretty soon you’ve got a sequence of what, for all intents and purposes, looks exactly like bloated structures representing edges, and the only thing they’re missing which prevents them from being conveniently self-contained is the start point of each edge, which you have to iterate one step back in the list to obtain. So why not include that in the structure as well, and complete this evolution?

And then each structure handles a single cell: the one in which the edge is embedded.
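In rough Python terms (the post's actual code is C++, and these field names are mine), the evolution ends with a self-contained edge structure like this:

```python
from dataclasses import dataclass
import math

@dataclass
class Edge:
    start: tuple            # (x, y) -- included so the edge is self-contained
    end: tuple
    cell: object = None     # the cell this edge is embedded in

    @property
    def length(self):
        return math.dist(self.start, self.end)

def points_to_edges(points, cells=None):
    """Convert a point-sequence path into the edge representation."""
    cells = cells or [None] * (len(points) - 1)
    return [Edge(a, b, c) for a, b, c in zip(points, points[1:], cells)]

path = points_to_edges([(0, 0), (3, 4), (3, 10)])
# consecutive edges agree by construction, so they can't get out of sync:
assert all(e.end == f.start for e, f in zip(path, path[1:]))
```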


Well, anyway, this was working pretty well, until I tried it on something huge and it took 5 minutes to calculate the A* route.

Zoomed in:

Eventually I got it right (after two false starts and hundreds of lines of deleted code) and used a priority_queue — the first one I’ve used in C++.
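The C++ priority_queue plays the same role that heapq does in a textbook A*. A minimal sketch on a plain weighted graph (not the cell/edge structure described above):

```python
import heapq

def astar(start, goal, neighbours, heuristic):
    """Textbook A*: `neighbours(n)` yields (next_node, step_cost) pairs,
    `heuristic(n)` is an admissible estimate of cost to the goal."""
    open_set = [(heuristic(start), 0.0, start, [start])]
    best = {start: 0.0}
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if g > best.get(node, float("inf")):
            continue  # stale queue entry, a cheaper route was found already
        for nxt, cost in neighbours(node):
            ng = g + cost
            if ng < best.get(nxt, float("inf")):
                best[nxt] = ng
                heapq.heappush(open_set,
                               (ng + heuristic(nxt), ng, nxt, path + [nxt]))
    return None

# e.g. on a tiny weighted graph:
graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)],
         "C": [("D", 1)], "D": []}
print(astar("A", "D", lambda n: graph[n], lambda n: 0))  # -> ['A', 'B', 'C', 'D']
```

Rather than a decrease-key operation (which neither std::priority_queue nor heapq offers), this pushes duplicate entries and discards stale ones on pop, which is the usual idiom.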

Next up there’s pulling the connecting path tight, then taking account of stock engagement, and finally making it move smoothly with as few sharp corners as possible — including the first and last where it is meant to be tangent to the incoming and outgoing toolpaths.

Someone did point out today that if the incoming toolpath had a sharp corner just before its endpoint, then one could be permitted to follow through with a tighter corner at the start of the linking path on the assumption that the machine tool would anyway be running at a slower feedrate when it got there. You can have a tight corner at the start of the entrance ramp to a highway because you are not yet up to speed.

Maybe this super-duper linking algorithm ought to allow for variable cornering rads along its length, even though this feature won’t be exposed for many years.

by Julian at December 10, 2013 06:40 PM