When this all ended, I don’t know. To some degree it never ended, but things have changed. I often blame GPUs, for essentially building one typical application of recreational programming into a chip included on all modern computers, ushering in an era in which “computer graphics” has come to be defined solely as rendering lit and textured 3D triangle meshes to the screen, leaving little oxygen in the room for splines and voxels and whatever was never invented. The death of the international demoscene also means that strange computational gems you’ve never seen before are no longer one click to some Belgian web site away. In any case, I am thinking about earlier times because I have recently been working on a project that I started thinking about in the 1990s, a project that is in some ways a child of that era of recreational programming and mathematics. At the time I had been implementing billiards simulations and found myself more interested in what the bugs looked like than in what the simulations did when they worked correctly.

The project is to come up with an internally consistent system of rules for a fictional alternative to Newtonian mechanics applied to frictionless pucks, specifically rules for 2D physics in which forces act along the arcs of circles rather than straight lines — something like an alternate reality billiard ball simulation. “Curvy physics”, as I have it in my notebooks from 25 years ago. Before diving into this I’d like to stress once more that this project is about making something up; it is an act of invention. I am not trying to simulate anything from the actual world. I will borrow real physics concepts, e.g. momentum, to the extent they are useful, but the goal is just to create something interesting, not to model anything real. I make the above disclaimer because I have noticed over the years that people who I would expect to be interested in this project often have a surprisingly hard time understanding what the project is.

Basically the project is to come up with something that looks like this:

(I posted some more videos here.)

We can design such a system by lifting the abstract structure of a simple billiard simulation but changing the details. We will adapt the way a typical simulation of 2D Newtonian collisions works at a high level while swapping out the low-level calculations. If we neglect spin, friction, and other forces and just want to simulate a totally elastic Newtonian collision between circular pucks of equal mass, then when pucks A and B collide while both are in motion, we proceed by:

1. finding the component of A’s momentum vector that is directed at B, and of B’s at A, calling these A and B’s “impulse vectors”;
2. finding the two “residual vectors” by subtracting the impulse vectors from the total momentum vectors;
3. resolving the collision by swapping the two impulse vectors;
4. finalizing A and B’s new momentum vectors by taking the sum of the residual vectors and the swapped impulse vectors, e.g. A’s new momentum equals the sum of A’s residual vector and B’s impulse vector.
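In classical vector terms, the four steps above amount to something like the following sketch. The names, the `Vec2` type, and the unit-mass assumption (so momentum and velocity coincide) are mine, for illustration:

```cpp
#include <cassert>
#include <cmath>

struct Vec2 {
    double x, y;
    Vec2 operator+(Vec2 o) const { return {x + o.x, y + o.y}; }
    Vec2 operator-(Vec2 o) const { return {x - o.x, y - o.y}; }
    Vec2 operator*(double s) const { return {x * s, y * s}; }
};

double dot(Vec2 a, Vec2 b) { return a.x * b.x + a.y * b.y; }

// Elastic collision of equal-mass pucks: swap the impulse components
// directed along the line between the two centers.
void resolve_collision(Vec2 a_pos, Vec2 b_pos, Vec2& a_vel, Vec2& b_vel) {
    Vec2 n = b_pos - a_pos;                 // collision normal, A toward B
    n = n * (1.0 / std::sqrt(dot(n, n)));
    Vec2 a_impulse = n * dot(a_vel, n);     // component of A's momentum toward B
    Vec2 b_impulse = n * dot(b_vel, n);     // component of B's momentum toward A
    Vec2 a_residual = a_vel - a_impulse;    // what A keeps
    Vec2 b_residual = b_vel - b_impulse;    // what B keeps
    a_vel = a_residual + b_impulse;         // steps 3 and 4: swap and recombine
    b_vel = b_residual + a_impulse;
}
```

For a head-on collision this reduces to the familiar result that the two pucks exchange velocities.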

Steps 1 to 4 above are general enough that they can be applied to curvy physics if the vectors we refer to are “curvy vectors”, if we can determine the impulse curvy vectors of a specific collision — this has two parts, the circle of the vector and the magnitude of the vector — and if curvy vector arithmetic, addition and subtraction, is defined.

**Curvy Vectors**

I never really considered anything but the following: a curvy vector is a circle, a magnitude, and an orientation — where by orientation we mean clockwise or counter-clockwise. Orientation can be thought of as the sign of the magnitude. The magnitude component could be an angular magnitude or a linear magnitude, which are largely interchangeable, but we choose a linear magnitude so that the magnitude remains defined even if a vector’s circle component is degenerate, i.e. if the circle has infinite radius and is a line. Note too that the circle component of a curvy vector must have an absolute location. Consider a puck under the influence of a curvy velocity vector. If the vector’s circle were relative to the puck’s position, as a classical vector is, then after the puck is displaced by the vector at time *t₁*, displacing it again along the same circle at time *t₂* would require a different vector; but we want the motion of the puck to be circular and governed by a single vector. Thus we define curvy vectors in terms of absolute coordinates, breaking with the classical version here. This means that curvy vector arithmetic has to be done in terms of a location: to add two vectors we will need to know where in 2D space we are adding them.
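One possible encoding of such a vector, sketched in C++. The struct layout and names are mine, not from the post, and the degenerate line case is deliberately left out:

```cpp
#include <cassert>
#include <cmath>

// A curvy vector as described above: a circle in absolute coordinates plus a
// signed linear (arc-length) magnitude whose sign encodes orientation
// (positive = counterclockwise, negative = clockwise).
struct CurvyVec {
    double cx, cy;   // absolute center of the circle component
    double radius;   // radius of the circle; the infinite-radius case is not handled
    double mag;      // signed linear magnitude
};

// Displace a point lying on the vector's circle along that circle by the
// vector's linear magnitude: rotate about the absolute center by mag/radius.
void displace(const CurvyVec& v, double& px, double& py) {
    double theta = v.mag / v.radius;  // arc length -> angle; sign gives orientation
    double dx = px - v.cx, dy = py - v.cy;
    px = v.cx + dx * std::cos(theta) - dy * std::sin(theta);
    py = v.cy + dx * std::sin(theta) + dy * std::cos(theta);
}
```

Applying `displace` repeatedly with the same vector keeps moving the point around the same circle, which is exactly the property that forces the circle to live in absolute coordinates.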

**The Circle of an Impulse Vector**

When a collision occurs what will be the equivalent of the green line segment in figure 1? We need some notion of “the circular direction” from one puck to another. That is, along which circles do pucks impose forces when they collide? To answer this question we can appeal to our expectations about the behavior of simple systems. Let us decide a priori that we’d like our physics to exhibit the following phenomenon: when a puck travels around a circle *c* and collides with a stationary puck of the same mass with a center also on *c* then the colliding puck imparts all of its (curvy) momentum along c, comes to a stop, and the object puck continues traveling on *c* with the same speed and orientation the colliding puck had had; thus creating a kind of “Newton’s Cradle” effect as below.

To get this behavior we define the impulse circle as the circle that passes through the centers of both pucks and that is tangent to the colliding puck’s circle of revolution. In the Newton’s cradle case above this definition results in the impulse circle resolving to the circle of revolution of the colliding puck as we want. In other cases the situation looks like the following:

In the above, puck A travels clockwise along C and collides with B. The black arrow points in A’s instantaneous direction at the moment of collision. The impulse circle, then, is the circle that is tangent to the black arrow at the center of A and that also passes through the center of B. If we look at the impulse circles for all possible centers of B, in the mathematical literature such circles are said to comprise a “parabolic pencil of circles” about the center of A.
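This construction can be computed directly: the impulse circle’s center lies on the line through A’s center perpendicular to A’s instantaneous direction, at the signed distance that makes the circle pass through B’s center. A sketch, with names of my own choosing:

```cpp
#include <cassert>
#include <cmath>
#include <optional>

struct Circle { double cx, cy, r; };

// Impulse circle: tangent to direction (dx,dy) at point a = (ax,ay) and
// passing through b = (bx,by). The center is a + t * perp(d), where
// |a + t*perp(d) - b| = |t| gives t = |b-a|^2 / (2 * perp(d) . (b-a)).
// Returns nothing when b lies on the tangent line, the degenerate case in
// which the "circle" is a straight line.
std::optional<Circle> impulse_circle(double ax, double ay, double dx, double dy,
                                     double bx, double by) {
    double px = -dy, py = dx;              // perpendicular to the direction
    double ux = bx - ax, uy = by - ay;     // vector from a to b
    double denom = 2.0 * (px * ux + py * uy);
    if (std::fabs(denom) < 1e-12) return std::nullopt;  // degenerate (line) case
    double t = (ux * ux + uy * uy) / denom;
    return Circle{ax + t * px, ay + t * py, std::fabs(t)};
}
```

Varying `(bx, by)` while holding `a` and the direction fixed traces out the parabolic pencil mentioned above.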

This definition of impulse circles implies a couple of differences from the classical case. One consequence is that there is no single circle analogous to the green line segment in figure 1: A imposes a circle on B and B imposes *a different* circle on A when both are in motion. This is because a circle passing through two given points with given tangent lines at those points is overdetermined; such a circle may happen to exist but generally doesn’t. I believe there is always an ellipse that passes through two points and is tangent to given lines at those points, and if so one could perhaps invent a curvy physics in which vectors are shaped like arcs of ellipses rather than circles, but this whole project is a forking path of such choices and my interest currently is in finding the best system that involves only circles.

Another consequence of this definition of impulse circles is that the behavior of the object puck after a collision depends on the (curvy) direction in which the colliding puck had been traveling. This is unlike the classical case. Under Newtonian physics, if the eight ball is located at some point *p* relative to a given pocket, where one must aim to sink the ball in the pocket depends only on *p*; it is independent of the location of the cue ball. Under curvy physics this is not so: where one must aim depends on the location of the object ball *and* on the particular arc the cue ball traverses to get to it.

**The Magnitude of an Impulse Vector**

The next question is *what is the magnitude of the impulse vector?* We need a function that determines how much momentum is transferred from puck to puck in a collision. If we assume the pucks are unit circles, the momentum transfer function will be a function of the angle of the collision relative to the colliding puck’s instantaneous direction and of the colliding puck’s radius of revolution. The function needs a radius because, if we want the Newton’s-cradle-like behavior above, the momentum transfer function must return 1 when the angle is such that the impulse circle a puck imposes coincides with the puck’s pre-collision circle of revolution. Further, we want the momentum transfer function to return 0 if the collision is completely oblique — if it just glances the object puck, i.e. if what I am calling the angle of collision is a right angle. If we let alpha equal the angle of collision at which the impulse circle is the puck’s circle of revolution — this works out to alpha = atan(2 / sqrt(-2 + 4r²)) for unit-circle pucks — then three example cases are as below.

Unlike defining impulse circles, I can find no momentum transfer function that is elegant and feels compellingly right. There are many options. The simplest is a piecewise function that maps [-pi/2, … , alpha] to [0, … , 1] and [alpha, … , pi/2] to [1, … , 0] via linear interpolation. A more complicated approach is to use inversive geometry: taking A’s center as the origin, invert B’s center about a particular circle that turns A’s circle of revolution into a line through the origin, and use the cosine of the angle between that line and the line from the origin to the inverted point. Both of these functions work. The circle inversion based one ends up looking much like cosine between -pi/2 and pi/2 but skewed so that the maximum of 1 occurs at alpha rather than at zero. (Some Mathematica plots below.)
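The simplest of those options, the linear interpolation one, can be written down directly from the description above (the function names are mine; the alpha formula is the one given earlier in the text):

```cpp
#include <cassert>
#include <cmath>

// Angle at which the impulse circle coincides with the circle of revolution,
// for unit-circle pucks with radius of revolution r (formula from the text).
double alpha(double r) {
    return std::atan(2.0 / std::sqrt(4.0 * r * r - 2.0));
}

// The simplest momentum transfer function described above: piecewise-linear
// interpolation mapping [-pi/2, alpha] onto [0, 1] and [alpha, pi/2] onto [1, 0].
double transfer_lerp(double theta, double r) {
    double a = alpha(r);
    const double h = std::acos(-1.0) / 2.0;   // pi/2
    if (theta <= a)
        return (theta + h) / (a + h);   // rises from 0 at -pi/2 to 1 at alpha
    else
        return (h - theta) / (h - a);   // falls back to 0 at pi/2
}
```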

In practice in a simulation of curvy physics, however, neither of these behaves the way we want. The trouble is that the relationship between theta and the radius of the impulse circle is severely nonlinear. The radius, after all, goes to infinity as theta goes to zero, because in that case the impulse circle is degenerate; thus around theta equals zero the radius grows extremely quickly. This means the momentum transfer function we seem to need is also extremely nonlinear. Slowly varying, roughly linear functions like the linear interpolation or the skewed cosine end up returning close to 1 for too wide a range of collisions, only returning values like 0.5 or 0.6 too close to -pi/2 or pi/2. The result is that pretty much every collision imparts all momentum and leaves the colliding puck stationary.

So we need the momentum transfer function to be nonlinear. We want something that is roughly hyperbolic. An elegant function to try is just the ratio between the circle of revolution and the impulse circle, choosing the numerator and denominator such that the ratio ends up less than one. We also need zeros at -pi/2 and pi/2, so we subtract the minimum possible impulse circle radius from the numerator and denominator. This function looks like the following:

The two maxima occur when theta is such that the impulse circle, in both clockwise and counterclockwise orientations, is the same size as the circle of revolution. The zero at zero means that no momentum would be passed to a puck when the impulse circle is degenerate. The maxima where the puck imposes a circle of opposite orientation, and the valley around zero, are both problematic: impulse circles larger than a puck’s circle of revolution are not unusual and shouldn’t be treated specially, and there is no reason to treat a degenerate impulse circle specially either. I thus use a piecewise function in which I stretch the left-hand side of the above so that the two maxima are stitched together, as below.

This function behaves the way I want, but its Frankenstein-like definition is ugly. Unfortunately I haven’t found anything better.

The following video is a visualization of the eight curvy vectors involved when two pucks, both with velocities, collide — puck A and puck B’s initial vectors, their impulse and residual vectors, and their final vectors post-collision — as the position of B changes relative to A. I have not yet explained how the residual vectors are derived via vector subtraction or how the final vector is derived via addition. I will explain curvy vector arithmetic in part 2 of this post.

Below is a video of a curvy force visualization tool I used to develop the rules for this fictional physics. I will discuss what this is visualizing in a later post.

The image above was generated from this tessera script, and the image of Robert Ammann’s A5 tiling in the header of this post was generated from this tessera script.

Tessera is a pure functional language implemented in C++17, source code here. I’m using Boost Spirit X3 to implement the parser, Eigen for matrices, Boost Multiprecision for greater than double precision floating point numbers, and my own “graph_ptr” class for garbage collecting smart pointers. These graph_ptrs are somewhat similar to Herb Sutter’s deferred_ptr, which I tried at first but found to perform too inefficiently for my use case. My implementation of Tessera parses source code into an expression tree and then compiles the expression tree into commands for a stack machine representation that actually executes the script.

Tessera is a Turing complete language and the scope of the project is big enough that I can’t easily document the whole thing in a single blog post. I plan on doing more posts like these, one focusing on advanced features of the language and perhaps one focusing on the language’s implementation. However, to get a feel for the language, consider the following script, which generates a “Sierpinski Carpet”.

    let square = { regular_polygon(4) } with {
        north, west, south, east is this.edges;
        color is "purple";
    };

    let sierpinski_carpet = func( n ) {
        lay s as N, s as NE, s as E, s as SE, s as S, s as SW, s as W, s as NW
        such_that
            N.east <-> NE.west, NE.south <-> E.north,
            E.south <-> SE.north, SE.west <-> S.east,
            S.west <-> SW.east, SW.north <-> W.south,
            W.north <-> NW.south
        with {
            north, west, south, east on [N.north, W.west, S.south, E.east];
        }
    } where {
        let s = if (n == 1) square else sierpinski_carpet(n-1);
    };

    tableau( num ) {
        lay sierpinski_carpet( num )
    }

- We define `square` to be a square-shaped tile with its edges named after the cardinal directions as in (1) below. This is done by wrapping the output of the canned `regular_polygon()` function in a “with expression”. Using `with` we can add fields to an existing object (well, to a clone, as this is a pure functional language; objects are immutable). In this case we add a color and names for the four edges.
- We define a recursive function `sierpinski_carpet`.
    - The meat of the work is done in a `lay` expression. Intuitively, `lay` expressions model laying patches of adjacent tiles.
        - We specify eight arguments to `lay`. Each argument is whatever object `s` is. Again, these will be clones of `s`, as objects are immutable.
        - We provide aliases for the eight arguments that can be used to refer to them within the `lay` expression.
        - In the `such_that` clause of the expression we use the aliases to specify edge-to-edge matchings that define a square ring as in (2).
        - In the `with` clause we specify the names of fields that will be part of the public interface of the patch that the `lay` expression evaluates to. We can specify the new fields in terms of the aliases of the arguments, as they are still in scope. We let “east” refer to the east side of the east square in the patch, etc., as in (3).
    - In the `where` clause of a function, which is evaluated before the function body, we can define local variables that will be in scope during the evaluation of the body. In this case we define an object `s` that is either a square tile, to terminate the recursion, or the result of evaluating a recursive call to `sierpinski_carpet`.
- In the `tableau` block we call `sierpinski_carpet` on a numeric argument that can be provided when the tessera script is executed. `tableau` is like the main function of a C program, the entry point of the program.

Calling sierpinski_carpet on 4 yields

The structure of the completed puzzle can be viewed as a continuation of a simple process in which you add neighbors to a cell in the oct-tet honeycomb. If you start with an octahedron and then add all of its tetrahedral neighbors, you get a stella octangula. If you then add all the octahedral neighbors to the stella octangula, you get the shape of the completed Octet-1, which looks like the following

The above can be viewed as a compound of six bars, each formed by connecting a row of three octahedra with four tetrahedra. Below is a higher order model of one of these bars, “higher order” in that each of the three octahedra in the bar is composed of multiple cells from the oct-tet lattice. This is the size, in terms of fundamental cells, that the puzzle uses.

To find the puzzle I searched the intersection of six of the above, allowing octahedron cells to be split into square pyramids (because this turned out to be necessary). The intersection is shaped like a stella octangula. I searched for six non-intersecting paths connecting the octahedron ends of the bars. Ultimately, to make the puzzle constructible I also had to split one of the bars into a “key” piece, in which a single bar is formed from two half-pieces. You can see me inserting those at the end in the video above. I believe there is no constructible puzzle of this kind that does not use some kind of key piece like this.

Below is a model of the assembled puzzle that I have “buffered” to make its internal structure somewhat visible.

Diamond-square is a modification of the old algorithm I implemented. It is supposed not to exhibit artifacts on grid lines the way that the simpler version does. (You can see these artifacts in the upper left of this image from the Wikipedia article — which looks to me like it was generated with the simpler version, actually.) I’ve always thought the diamond-square algorithm is kind of inelegant, because the version in which you only displace rectangle centers and just interpolate rectangle sides does not, in my opinion, produce output that is much worse, and it is trivial and fun to implement.

In any case, if you really want zero of these kinds of artifacts then don’t use a grid at all. Instead of using a recursive hierarchy of subdivided tilings of a rectangle, my idea is to use a recursive hierarchy of Delaunay triangulations. I implemented this algorithm last week, initially exactly as below.

1. Generate a small set of random vertices **V**. Assign a random value to each vertex in **V**.
2. Calculate the Delaunay triangulation **T** of **V**.
3. For each triangle *t* in **T**:
    - If the area of *t* is greater than some threshold, randomly generate a point *p* in *t*.
    - Assign a value to *p* that is the average value of *t*’s vertices weighted by their distance from *p*, displaced by a random amount proportional to the relative area of *t*.
    - Add *p* to **V**.
4. If any new points were added to **V**, go to 2; otherwise terminate.
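The value-assignment rule above can be sketched as follows. The text says the vertex values are “weighted by their distance” from the new point; inverse-distance weighting, so that nearer vertices count for more, is my assumption about what is meant:

```cpp
#include <cassert>
#include <cmath>

// Assign a base value to a point p inside a triangle: the inverse-distance
// weighted average of the triangle's vertex values. (The random displacement
// proportional to triangle area would be added on top of this.)
double weighted_value(const double p[2],
                      const double vx[3][2],   // triangle vertex positions
                      const double vv[3]) {    // triangle vertex values
    double wsum = 0.0, acc = 0.0;
    for (int i = 0; i < 3; ++i) {
        double dx = vx[i][0] - p[0], dy = vx[i][1] - p[1];
        double d = std::sqrt(dx * dx + dy * dy);
        if (d < 1e-12) return vv[i];   // p coincides with a vertex
        double w = 1.0 / d;            // nearer vertices get more weight
        wsum += w;
        acc += w * vv[i];
    }
    return acc / wsum;
}
```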

The above works but is horribly inefficient. The slow step is 2, generating full Delaunay triangulations at each iteration. The only thing the above algorithm has going for it is ease of implementation: it allows you to treat a typical Delaunay triangulation library call as a black box.

To achieve greater efficiency you need to delve into the inner workings of the triangulation process. Specifically, if you have an incremental implementation of Delaunay triangulation, you can modify the above to not need to re-triangulate everything at each iteration.

I chose a modified version of the Bowyer-Watson algorithm for this purpose. Briefly, Bowyer-Watson goes like this: given a set of vertices, iteratively add each vertex *v* to an in-progress triangulation by finding all triangles with circumcircles that contain *v*, deleting those triangles, inserting *v*, and wrapping *v* with triangles connecting *v* to each edge of the star-shaped polygon that deleting the triangles created. Done naively this is O(n^2). Done while maintaining the triangle adjacency graph and using some other data structure to efficiently find the triangle containing a given vertex, it is O(n log n). There is actually some confusion online about the time complexity of the Bowyer-Watson algorithm (see my answer to this StackOverflow question), but it is straightforward in my case because at every step I already know which triangle I randomly generated a point in — so I do not need an extra data structure to find triangles efficiently.
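The core geometric test in a Bowyer-Watson step — does a point lie inside a triangle’s circumcircle? — is the standard determinant predicate. A minimal sketch (it assumes the triangle is given in counterclockwise order):

```cpp
#include <cassert>

// True if point d lies strictly inside the circumcircle of triangle abc,
// where a, b, c are in counterclockwise order. This is the 3x3 determinant
// | ax-dx  ay-dy  (ax-dx)^2+(ay-dy)^2 |
// | bx-dx  by-dy  (bx-dx)^2+(by-dy)^2 |  > 0
// | cx-dx  cy-dy  (cx-dx)^2+(cy-dy)^2 |
bool in_circumcircle(double ax, double ay, double bx, double by,
                     double cx, double cy, double dx, double dy) {
    double adx = ax - dx, ady = ay - dy;
    double bdx = bx - dx, bdy = by - dy;
    double cdx = cx - dx, cdy = cy - dy;
    double ad2 = adx * adx + ady * ady;
    double bd2 = bdx * bdx + bdy * bdy;
    double cd2 = cdx * cdx + cdy * cdy;
    double det = adx * (bdy * cd2 - bd2 * cdy)
               - ady * (bdx * cd2 - bd2 * cdx)
               + ad2 * (bdx * cdy - bdy * cdx);
    return det > 0.0;
}
```

Note that with plain doubles this predicate is not robust for near-degenerate inputs; exact or adaptive arithmetic is the usual remedy when that matters.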

The modified algorithm is as below:

1. Generate a small set of random vertices **V**. Assign a random value to each vertex in **V**.
2. Calculate the Delaunay triangulation **T** of **V**.
3. While the area of *t*, the largest triangle in **T**, is greater than *k*:
    - Generate a random point *p* in *t*. Assign a random value to it as above.
    - Perform a Bowyer-Watson insertion step on *p* in *t*, i.e. search the triangle adjacency graph starting at *t* for all triangles with circumcircles containing *p*, delete them, etc. This set of “bad triangles” is guaranteed to be contiguous.
The above is very fast. My implementation is here. I wrote it in C# against the .NET Framework so that I would get GDI graphics routines for free. This implementation is indebted to Rafael Kuebler’s Delaunay/Voronoi code, which I modified for my purposes, and Renaud Bédard’s implementation of uniform Poisson-disk sampling. (I found that if the initial points are uniformly distributed there can be little cusps between seed points that are too close together; Poisson-disk sampling yields better results.) I also use Mono.Options for command line argument parsing.

It’s a command line program with options as follows. When I mention the area of a triangle below, area is always expressed as a percentage of the mean area of the initial seed triangles:

- w : Width of output (pixels)
- h : Height of output (pixels)
- i : Initial vertex distance. This is a parameter passed to the Poisson-disk sampler for the creation of seed points. It’s a percentage of min(w, h) defining a distance between sampled points. Higher percentages yield fewer seed points. Lower percentages yield more, which in turn yields more Perlin-noise-like output.
- m : Minimum area cutoff threshold — the area of the smallest triangle that the program will continue subdividing, i.e. the largest triangle allowed in the output.
- c : Contrast factor. Apply sigmoid contrast to the output values. I use a function similar to the one used by ImageMagick. Higher numbers mean more contrast; 10 is a lot of contrast.
- v : Perturbation method, an enumerated type parameter declaring the function used to displace triangle mid-point values. Can be “Normal”, “Uniform”, “ClampedNormal”, or “ClampedUniform”. The default is Normal, meaning: given the weighted mean value *v* of some triangle *t*’s vertices, generate a new value by randomly sampling a normal distribution with mean *v* and standard deviation equal to the area of *t* raised to the power of the perturbation parameter. Uniform is similar. Clamped perturbation methods cap values at 1.0 and disallow negative values. Unclamped methods do not, but I normalize all values before generating the output. Clamping tends to increase the perceived contrast in the output. I prefer using unclamped perturbation and then adjusting the contrast directly, however.
- p : Perturbation parameter, the exponent used by the perturbation method above.
- s : Scale factor applied to output. Useful for generating output with triangles that are effectively subpixel in size: generate large and then scale down.
- b : Color-blend. If specified, generates colored output; see examples below. Otherwise the output is grayscale.
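The -c option’s contrast adjustment can be sketched as a sigmoidal remapping of values in [0, 1], similar in spirit to ImageMagick’s -sigmoidal-contrast operator. This is my sketch of such a function, with the midpoint fixed at 0.5, not the program’s exact code:

```cpp
#include <cassert>
#include <cmath>

// Sigmoid contrast on a value x in [0,1] with contrast factor c. The raw
// logistic curve is rescaled so that 0 still maps to 0 and 1 still maps to 1;
// larger c pushes midtones harder toward the extremes.
double sigmoid_contrast(double x, double c) {
    auto s = [c](double v) { return 1.0 / (1.0 + std::exp(c * (0.5 - v))); };
    return (s(x) - s(0.0)) / (s(1.0) - s(0.0));   // rescale to [0,1]
}
```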

Below is some output along with the command line arguments that generated it. The program will also generate SVG based on the file extension. The first two examples are generated at high resolution and then scaled down so that the actual triangles are subpixel in size. The latter ones are color blends with the triangles showing.

-w 2400 -h 2400 -s 0.25 -i 0.6 -m 0.000005 -c 10

-w 2400 -h 2400 -s 0.25 -i 0.6 -m 0.000005 -c 2 -p 0.6

-w 600 -h 600 -i 0.8 -m 0.0005 -p 0.40 -b #FFA242-#FFFFFF-#87CEEB-#FF007F

-w 600 -h 600 -i 0.8 -m 0.001 -p 0.50 -b #CC7722-#fffd74-#507d2a

-w 1024 -h 1024 -i 0.8 -m 0.000005 -p 0.40 -b #FFA242-#FFFFFF-#87CEEB-#FF007F-#CC7722-#fffd74-#507d2a

Recently I spent the time to actually implement the Doyle and McMullen algorithm in C++ as a numeric algorithm. I don’t know of any other implementations. My code is here.

It is an interesting algorithm. Its existence implies that although there is no general formula, akin to the quadratic formula, that you can plug into to get the roots of a 5th degree equation — and in fact although the roots of most 5th degree equations cannot even be represented as expressions composed of ordinary arithmetic operations plus radicals — you *can* represent numbers with radical expressions that are arbitrarily close to these roots. Each of these roots can be defined exactly as the limit a recursive function converges on as the depth of recursion approaches infinity; each step along the way is an expression using arithmetic and radicals that approximates the root of an unsolvable quintic with increasing accuracy.

Above I specified that my implementation is “a numeric algorithm”; I mean as opposed to an implementation using a symbolic math package — I use ordinary floating point numbers. This is an important point: [1] comes from the wilds of number theory, not from a CS textbook, and it thus defines an algorithm over real numbers, which of course have arbitrary precision. It was an open question to me whether the algorithm has value as a numeric algorithm. The issue I saw is that it is a solution to quintics in the single-parameter Brioschi form. The Brioschi form, by magic, collapses every general quintic defined by six complex coefficients into a single complex number; it is impossible for such a reduction not to be limited by finite precision. That is, to some extent the finite precision of floating point numbers must determine for which general quintics a numeric implementation of [1] will return sensible results.

I will leave an analysis of this question to someone who is an actual mathematician or computer scientist, but empirically the Doyle and McMullen algorithm *does* seem to me to have merit as a numeric algorithm at double precision. On randomly generated general quintics with coefficients uniformly distributed between -1000 and 1000, my implementation of solving the quintic by iteration returns results that look to me to be about as good as or better than the implementation of Laguerre’s Method from *Data Structures and Algorithms in C++*, but is about 500 times faster. Both algorithms perform better in terms of speed and correctness when the coefficients are lower. When they are between -10 and 10 my implementation is more accurate and about 100 times faster than the *Data Structures and Algorithms* code.

My implementation works as follows:

1. Given a general quintic, convert to principal form. I do this conversion as explained here. I found the resultant and solved for the *c₁* and *c₂* being canceled out using SageMath.
2. Convert the principal quintic to Brioschi form. I followed the explanation here.
3. Find two solutions to the Brioschi-form quintic via iteration as described in [1], pages 32 to 33. The only difficulty here was that I needed to find the derivative of the function g(Z,w). I did this symbolically, again via SageMath; however, a speed optimization would be to drop the explicit C++ function for g′ and instead evaluate g(Z,w) and g′(Z,w) simultaneously as described in the chapter on polynomials in *Numerical Recipes: The Art of Scientific Computing*. Also, just as a note to anyone else who may want to write an implementation of this algorithm: the Wikipedia article here has a mistake in the definition of h(Z,w). Use the original paper. (Or better yet, cut-and-paste from Peter Doyle’s macsyma output and overload C++ such that the caret operator is exponentiation, which is what I did.)
4. Convert the two roots back to the general quintic.
5. Test both roots. If the error of each is less than a threshold *k*, pass along both roots. If only one is less than *k*, pass along the one good root. If both roots yield errors greater than *k*, perform *n* iterations of Halley’s Method and retest for one or two good roots. If neither root then has error less than *k*, pass both along anyway.
6. If you have two good roots *v₁* and *v₂*, perform synthetic division by (*z* – *v₁*)(*z* – *v₂*), yielding a cubic, and solve the cubic via radicals. If you have only one good root, divide the quintic by (*z* – *v*) and solve the resulting quartic. I’m using the cubic-solving procedure described in *Numerical Recipes* and the quartic formula described here.
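The Halley’s Method retest described above can be implemented compactly for polynomials by evaluating p, p′, and p″ together in one Horner-style pass. This is my own sketch of such a polishing step, not the code from the repo:

```cpp
#include <cassert>
#include <complex>
#include <vector>

using cplx = std::complex<double>;

// Polish a root estimate z of the polynomial coeffs[0] + coeffs[1]*z + ...
// (coefficients low to high) with Halley's method:
//   z <- z - 2 p p' / (2 p'^2 - p p'')
// p, p', and p''/2 are accumulated together in one pass over the coefficients.
cplx halley_polish(const std::vector<cplx>& coeffs, cplx z, int iterations) {
    for (int it = 0; it < iterations; ++it) {
        cplx p = 0.0, dp = 0.0, ddp = 0.0;
        for (int i = (int)coeffs.size() - 1; i >= 0; --i) {
            ddp = ddp * z + dp;        // accumulates p''/2
            dp = dp * z + p;           // accumulates p'
            p = p * z + coeffs[i];     // accumulates p
        }
        ddp *= 2.0;                    // now p''
        cplx denom = 2.0 * dp * dp - p * ddp;
        if (std::abs(denom) == 0.0) break;
        z -= 2.0 * p * dp / denom;     // Halley update
    }
    return z;
}
```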

My implementation is a header-only C++17 library (“quintic.hpp” in the github repo I linked to above) parametrized on the specific floating point type you want to use. Single precision is not good enough for this algorithm. Double precision works. I didn’t test on long doubles because Visual Studio does not support them.

    class ConvexHull
    {
        public static double cross(Point O, Point A, Point B)
        {
            return (A.X - O.X) * (B.Y - O.Y) - (A.Y - O.Y) * (B.X - O.X);
        }

        public static List<Point> GetConvexHull(List<Point> points)
        {
            if (points == null) return null;
            if (points.Count() <= 1) return points;

            int n = points.Count(), k = 0;
            List<Point> H = new List<Point>(new Point[2 * n]);

            points.Sort((a, b) => a.X == b.X ? a.Y.CompareTo(b.Y) : a.X.CompareTo(b.X));

            // Build lower hull
            for (int i = 0; i < n; ++i)
            {
                while (k >= 2 && cross(H[k - 2], H[k - 1], points[i]) <= 0) k--;
                H[k++] = points[i];
            }

            // Build upper hull
            for (int i = n - 2, t = k + 1; i >= 0; i--)
            {
                while (k >= t && cross(H[k - 2], H[k - 1], points[i]) <= 0) k--;
                H[k++] = points[i];
            }

            return H.Take(k - 1).ToList();
        }
    }

These cellular automata have state tables that can be thought of as 13 rows of 2 columns: there are 12 possible non-zero alive cell counts plus the zero count, and each of these counts can map to either alive or dead in the next generation depending on whether the cell in the current generation is alive or dead (column 1 or column 2). I looked at each of the 4096 cellular automata you get by filling the third through eighth rows of these state tables with each possible allocation of 0s and 1s and letting all other rows contain zeros.

A handful of these 4096 feature the spontaneous generation of gliders but one rule is clearly *the* triangular analog of Conway’s Life. I have no idea if this rule has been described before in the literature but it is the following:

On a triangular grid

- If a cell is dead and it has exactly four or six vertex-adjacent alive neighbors, then it becomes alive in the next generation.
- If a cell is alive and it has four to six vertex-adjacent alive neighbors, inclusive, then it remains alive in the next generation.
- Otherwise it is dead in the next generation.
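The rule above, expressed as a state-transition function (a direct transcription of the three bullets, in C++):

```cpp
#include <cassert>

// Next-generation state of a cell on the triangular grid, given its current
// state and its count of alive vertex-adjacent neighbors (0..12).
bool tri_life_next(bool alive, int alive_neighbors) {
    if (!alive)
        return alive_neighbors == 4 || alive_neighbors == 6;  // birth rule
    return alive_neighbors >= 4 && alive_neighbors <= 6;      // survival rule
}
```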

The above has a glider shown below that is often randomly generated and exhibits bounded growth.

Here it is running in Jack Kutilek’s web-based CA player:

Tri Life gliders are slightly rarer than in Conway’s Life because they are bigger in terms of the number of alive cells in each glider “frame”. If you don’t see a glider in the above, stir it up by dragging in the window.

I was, however, more interested in how such dissections could be turned into an interlocking puzzle, akin to a traditional burr puzzle, and as such needed code to generate 3D models of the dissections. My generation code is a dumb, constructive, brute force approach in which I just traverse the search space, adding rhombohedrons to a candidate dissection in progress and backtracking when reaching a state in which it is impossible to add a rhombohedron without intersecting those already added or the containing triacontahedron, keeping track of configurations that have already been explored.

Dissections of the rhombic triacontahedron into golden rhombohedrons (hereafter “blocks”) turn out to always require 10 of each of the two types of blocks that Hart refers to in the above as the “pointy” and “flat” varieties (and which I refer to as yellow and blue). Further, it turns out that in all of these dissections there are four blocks that are completely internal, i.e. sharing no face with the triacontahedron. I also believe that the four internal blocks are always three blue and one yellow, but I’m not sure about that.

My strategy for finding an interlocking puzzle was the following:

- Generate a bunch of raw dissections into blocks.
- For each dissection, search the adjacency graph for four pieces, each the union of a set of five blocks, such that:
    - Each piece forms a simple path in the dissection; that is, each block in the piece
        - is either an end block that is face adjacent to a next or previous block in the piece, or a non-end block that is face adjacent to both a next block and a previous block,
        - and does not share any edges with other blocks in the piece except for the edges of the face adjacencies.
    - Each piece contains at least one fully internal block.
    - Each piece is “single axis disentangle-able” from each other piece, by which we mean that there exists some edge *e* in the complete construction such that, given pieces *p1* and *p2*, offsetting *p1* in the direction of *e* by a small amount leaves *p1* not intersecting *p2*.
    - Each piece is not single axis disentangle-able from the union of the other three pieces.
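The single-axis disentangle-ability condition can be sketched in code like this; Vec3, the piece type, and the offset-intersection predicate are stand-ins for the real geometry code:

```cpp
#include <vector>

struct Vec3 { double x, y, z; };

// A sketch of the single-axis disentangle-ability test: p1 is
// disentangle-able from p2 if nudging p1 a small distance along some
// candidate edge direction separates the two. The intersection predicate
// answers "does p1, translated by offset, intersect p2?" and stands in
// for the real triangle-based test.
template <typename Piece, typename IntersectsOffset>
bool disentangleable(const Piece& p1, const Piece& p2,
                     const std::vector<Vec3>& edgeDirections,
                     IntersectsOffset intersectsWhenOffset)
{
    const double eps = 1e-3;  // "a small amount"
    for (const Vec3& e : edgeDirections) {
        Vec3 offset{ e.x * eps, e.y * eps, e.z * eps };
        if (!intersectsWhenOffset(p1, p2, offset))
            return true;  // separating along e works
    }
    return false;
}
```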

I never managed to do a complete enumeration generating all of the dissections, for reasons that I don’t feel like going into. (As I said above, I did not do anything fancy, and it would be easier to be smarter about how I do the generation than to make what I have more efficient; i.e. I could have used the George Hart algorithm had I known about it, or used one of the ways of transforming one dissection into another, which I don’t do; I do an exhaustive search, period. But I never did the smarter stuff because I found what I was looking for; see below.)

But from about 10 dissections I found one set of pieces that uniquely satisfies all of the above:

Here’s some video. (Pieces 3D printed via Shapeways.)

I’m calling the above “rhombo”. Those pieces are rough because I only 3D printed the individual rhombohedrons and then superglued them together into the pieces, which is imprecise. I had to sand them heavily to get them to behave nicely. I’ll eventually put full piece models up on Shapeways.

In the course of doing this work, it became apparent that there is no good computational geometry library for C# to use for something like this. There is one called Math.NET Numerics, along with Math.NET Spatial, that will get you vectors and matrices but not all the convenience routines you’d expect for treating vectors as 3D points and so forth. What I ended up doing was extracting the vectors and matrices out of MonoGame and search-and-replacing “float” with “double” to get double precision. Here is that code on GitHub. I also included 3D line segment/line segment intersection code and 3D triangle/triangle intersection code which I transliterated to C#. The line segment intersection code came from Paul Bourke’s web site, and the triangle intersection code came from running Tomas Moller’s C code through just a C preprocessor to resolve all the macros and then transliterating the result to C#.
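For what it’s worth, the heart of the segment/segment routine is the classic closest-points-of-two-3D-lines calculation; a minimal C++ paraphrase of that math (not the transliterated code itself) looks like:

```cpp
#include <cmath>

struct V3 { double x, y, z; };

static double dot(V3 a, V3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static V3 sub(V3 a, V3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }

// Closest points of the infinite lines through (p1, p2) and (p3, p4):
// finds s and t such that p1 + s*(p2 - p1) and p3 + t*(p4 - p3) are the
// mutually closest points, by solving the 2x2 normal equations of the
// squared-distance minimization. Returns false for (near-)parallel lines.
// Segment/segment intersection then amounts to checking 0 <= s, t <= 1
// and that the resulting distance is ~zero.
bool closestLinePoints(V3 p1, V3 p2, V3 p3, V3 p4, double& s, double& t)
{
    V3 d1 = sub(p2, p1), d2 = sub(p4, p3), r = sub(p1, p3);
    double a = dot(d1, d1), b = dot(d1, d2), c = dot(d2, d2);
    double d = dot(d1, r), e = dot(d2, r);
    double denom = a * c - b * b;
    if (std::fabs(denom) < 1e-12) return false;  // parallel lines
    s = (b * e - c * d) / denom;
    t = (a * e - b * d) / denom;
    return true;
}
```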

This implementation of signals is a re-work of some code by the user _pi on the forums for the cross-platform application framework JUCE — that thread is here — which was itself a re-work of some example code posted by Aardvajk in the GameDev.net forums (here). I just put it all together, cleaned things up, fixed some bugs, and added lambda support.

The basic idea is that with variadic templates added to C++, a more straightforward implementation of signal and slot functionality became possible than what was done for boost::signals et al. — that is what Aardvajk’s original article was about. _pi changed Aardvajk’s code so that slots are a thing you inherit from, which makes more sense to me. I changed _pi’s code so that it uses std::function to store the handlers, thus allowing lambdas with captures to be attached to a signal.

Usage is like the following:

```cpp
#include "signals.hpp"
#include <iostream>

// a handler is a kind of slot that, as an implementation detail, requires usage of the
// "curiously recurring template pattern". That is, the intention is for instances of
// a class C that will react to a signal firing to have an is-a relationship with a slot
// parametrized on class C itself.
class CharacterHandler : public Slot<CharacterHandler>
{
public:
    void HandleCharacter(char c)
    {
        std::cout << "The user entered '" << c << "'" << std::endl;
    }
};

class DigitHandler : public Slot<DigitHandler>
{
public:
    void HandleCharacter(char c)
    {
        if (c >= '0' && c <= '9') {
            int n = static_cast<int>(c - '0');
            std::cout << "  " << n << " * " << n << " = " << n * n << std::endl;
        }
    }
};

int main()
{
    bool done = false;
    char c;
    Signal<char> signal;
    CharacterHandler character_handler;
    DigitHandler digit_handler;

    // can attach a signal to a slot with matching arguments
    signal.connect(character_handler, &CharacterHandler::HandleCharacter);

    // can also attach a lambda, associated with a slot.
    // (the lambda could capture the slot and use it like anything else it captures;
    // however, technically all the associated slot is doing is allowing you to have
    // a way of disconnecting the lambda, e.g. in this case signal.disconnect(character_handler))
    signal.connect(character_handler,
        [&](char c) -> void {
            if (c == 'q')
                done = true;
        }
    );

    // can also attach a slot to a signal ... this means the same as the above.
    digit_handler.connect(signal, &DigitHandler::HandleCharacter);

    do {
        std::cin >> c;
        signal.fire(c);
    } while (!done);

    // can disconnect like this
    character_handler.disconnect(signal);
    // or this
    signal.disconnect(digit_handler);

    // although disconnecting wasn't necessary here, in that just having everything
    // go out of scope would've done the right thing.
    return 0;
}
```