Some of my images are in an online exhibition this month:

eRR0R(iii)

an exploration of the fertility of errors

in a world that covers its flaws in the blinding light of universal truths and institutionally reinforced regimes of visibility, we are interested in the fertile shades opened up by errors. the antiseptic intellectual environment our societies try to achieve, while arguably “healthy” and “safe” for the established values, has the huge disadvantage of obscuring any fundamentally different modes of existence. we [looked for] submissions that explore the fertility of errors and question our inherited worldview.

Here's my mini statement:

I work with mathematics and algorithms to make art. Sometimes it doesn't go to plan. I present recent failures experienced on the road to successful implementation of desired results.

I'm not sure if it'll be archived after the month is over, so experience it while you can.

Here is an algorithm for generating circle packings:

- start with an empty image
- while the image has gaps bigger than a pixel

- pick a random unoccupied point
- draw the largest circle centered on that point that doesn't overlap the previous circles or the image boundary

This has probably been rediscovered many times, but I don't have a reference. To pack shapes other than circles, find the largest circle as before and place the shape inside it, either oriented so that it touches the circle's point of tangency with the image, or at random, as you wish.

An alternative algorithm that picks the sizes up front and fits them into the image has nicer fractal properties, but also has halting problems. See:

An Algorithm for Random Fractal Filling of Space

John Shier and Paul Bourke

Computational experiments with a simple algorithm show that it is possible to fill any spatial region with a random fractalization of any shape, with a continuous range of pre-specified fractal dimensions D. The algorithm is presented here in 1, 2, or 3 physical dimensions. The size power- law exponent c or the fractal dimension D can be specified ab initio over a substantial range. The method creates an infinite set of shapes whose areas (lengths, volumes) obey a power law and sum to the area (length, volume) to be filled. The algorithm begins by randomly placing the largest shape and continues using random search to place each smaller shape where it does not overlap or touch any previously placed shape. The resulting gasket is a single connected object.

I implemented both (using the GNU Scientific Library function for the Hurwitz zeta function, as my naive summation was woefully inaccurate), but the halting problems of the paper's algorithm are very annoying, even though it produces nicer images. The problem with both is how to make them fast: the bottleneck I found is the "pick a random unoccupied point" step.

Image pyramids, aka mipmaps, consist of a collection of downscaled-by-2 images reduced from a base layer. By performing high quality low pass filtering at each reduction level, the resulting collection can be used for realtime texturing of 3D objects of varying sizes without aliasing.

Histogram pyramids are similar, though instead of containing image data, each cell contains a count of the number of active cells in the corresponding base layer region. Thus the smallest 1x1 histogram layer contains the total number of active cells in the base layer. It's similar to a quad tree, but without all the cache-busting pointer chasing.

There are two operations of interest: the first is "deactivate a cell in the base layer", which can be done by decrementing all the cells in the path through the layers from the base layer to the smallest 1x1 layer. The complexity of this operation is O(layers) = O(log(max{width, height})). For a flat image without a histogram pyramid it would be O(1).

The second thing we want to do is "pick an active cell uniformly at random", which is where the histogram pyramid comes into its own. The technique picks a random subcell of each cell, starting from the 1x1 layer and proceeding towards the base layer. The random numbers are weighted according to the totals stored in the histogram pyramid, which ensures uniformity. Assuming the pseudo-random number generator is O(1) (which is very likely), this algorithm is again O(layers). Without a histogram pyramid, the best alternatives I came up with were O(number of active cells), which is O(width * height) at the start of the packing algorithm, or O(number of previously drawn circles), which is O(width * height) at the end of the packing algorithm: both very poor. The histogram pyramid is a huge improvement (100 minutes vs 10 seconds in one small test).

Some code:

/* gcc -std=c99 -Wall -Wextra -pedantic -O3 -o x x.c -lm
   ./x W H > x.pgm */
#include <assert.h>
#include <math.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

struct histogram { int levels; int **counts; };

struct histogram *h_new(int width, int height) {
  int levels = 0;
  int size = 1 << levels;
  while (size < width || size < height) { levels += 1; size <<= 1; }
  struct histogram *h = malloc(sizeof(struct histogram));
  h->levels = levels;
  h->counts = malloc(sizeof(int *) * (levels + 1));
  for (int l = 0; l <= levels; ++l) {
    int d = 1 << l;
    int n = d * d;
    h->counts[l] = malloc(sizeof(int) * n);
  }
  for (int y = 0; y < size; ++y)
    for (int x = 0; x < size; ++x)
      h->counts[levels][(y << levels) + x] = y < height && x < width;
  for (int l = levels - 1; l >= 0; --l)
    for (int y = 0; y < 1 << l; ++y)
      for (int x = 0; x < 1 << l; ++x)
        h->counts[l][(y << l) + x]
          = h->counts[l+1][(((y<<1) + 0) << (l + 1)) + (x << 1) + 0]
          + h->counts[l+1][(((y<<1) + 0) << (l + 1)) + (x << 1) + 1]
          + h->counts[l+1][(((y<<1) + 1) << (l + 1)) + (x << 1) + 0]
          + h->counts[l+1][(((y<<1) + 1) << (l + 1)) + (x << 1) + 1];
  assert(h->counts[0][0] == width * height);
  return h;
}

void h_free(struct histogram *h) {
  for (int l = 0; l <= h->levels; ++l) free(h->counts[l]);
  free(h->counts);
  free(h);
}

void h_decrement(struct histogram *h, int x, int y) {
  for (int l = h->levels; l >= 0; --l) {
    int k = (y << l) + x;
    h->counts[l][k] -= 1;
    x >>= 1;
    y >>= 1;
  }
}

int h_empty(struct histogram *h) { return h->counts[0][0] == 0; }

int h_choose(struct histogram *h, int *x, int *y) {
  if (h_empty(h)) return 0;
  *x = 0;
  *y = 0;
  for (int l = 1; l <= h->levels; ++l) {
    *x <<= 1;
    *y <<= 1;
    int xs[4] = { *x, *x + 1, *x, *x + 1 };
    int ys[4] = { *y, *y, *y + 1, *y + 1 };
    int ks[4] = { (ys[0] << l) + xs[0], (ys[1] << l) + xs[1]
                , (ys[2] << l) + xs[2], (ys[3] << l) + xs[3] };
    int ss[4] = { h->counts[l][ks[0]], h->counts[l][ks[1]]
                , h->counts[l][ks[2]], h->counts[l][ks[3]] };
    int ts[4] = { ss[0], ss[0] + ss[1]
                , ss[0] + ss[1] + ss[2], ss[0] + ss[1] + ss[2] + ss[3] };
    int p = rand() % ts[3];
    int i;
    for (i = 0; i < 4; ++i) if (p < ts[i]) break;
    *x = xs[i];
    *y = ys[i];
  }
  return 1;
}

struct delta { int dx; int dy; };

int cmp_delta(const void *a, const void *b) {
  const struct delta *p = a;
  const struct delta *q = b;
  double x = p->dx * p->dx + p->dy * p->dy;
  double y = q->dx * q->dx + q->dy * q->dy;
  if (x < y) return -1;
  if (x > y) return 1;
  return 0;
}

struct delta *deltas;
int *image;
unsigned char *pgm;

struct histogram *initialize(int width, int height) {
  deltas = malloc(sizeof(struct delta) * width * height);
  image = malloc(sizeof(int) * width * height);
  pgm = malloc(sizeof(unsigned char) * width * height);
  int d = 0;
  for (int dx = 0; dx < width; ++dx)
    for (int dy = 0; dy < height; ++dy) {
      deltas[d].dx = dx;
      deltas[d].dy = dy;
      ++d;
    }
  qsort(deltas, width * height, sizeof(struct delta), cmp_delta);
  for (int k = 0; k < width * height; ++k) image[k] = 0;
  return h_new(width, height);
}

void draw_circle(struct histogram *h, int width, int height, double cx, double cy, double r, int c) {
  double r2 = r * r;
  int x0 = floor(cx - r);
  int x1 = ceil (cx + r);
  int y0 = floor(cy - r);
  int y1 = ceil (cy + r);
  for (int y = y0; y <= y1; ++y) if (0 <= y && y < height)
    for (int x = x0; x <= x1; ++x) if (0 <= x && x < width) {
      double dx = x - cx;
      double dy = y - cy;
      double d2 = dx * dx + dy * dy;
      if (d2 <= r2) {
        h_decrement(h, x, y);
        image[y * width + x] = c;
      }
    }
}

double find_radius(int width, int height, int cx, int cy) {
  for (int d = 0; d < width * height; ++d) {
    int dx = deltas[d].dx;
    int dy = deltas[d].dy;
    int r2 = dx * dx + dy * dy;
    int xs[2] = { cx - dx, cx + dx };
    int ys[2] = { cy - dy, cy + dy };
    for (int j = 0; j < 2; ++j) {
      int y = ys[j];
      if (y < 0) return sqrt(r2);
      if (y >= height) return sqrt(r2);
      for (int i = 0; i < 2; ++i) {
        int x = xs[i];
        if (x < 0) return sqrt(r2);
        if (x >= width) return sqrt(r2);
        if (image[y * width + x]) return sqrt(r2);
      }
    }
  }
  return 0;
}

void packing(struct histogram *h, int width, int height) {
  for (int c = 1; h->counts[0][0] > 0; ++c) {
    fprintf(stderr, "%16d / %d\r", h->counts[0][0], width * height);
    int cx = 0;
    int cy = 0;
    h_choose(h, &cx, &cy);
    double r = find_radius(width, height, cx, cy);
    draw_circle(h, width, height, cx, cy, r, c);
  }
}

void edges(int width, int height) {
  for (int y = 0; y < height; ++y) {
    for (int x = 0; x < width; ++x) {
      if (x == 0) pgm[y * width + x] = 0;
      else if (x == width-1) pgm[y * width + x] = 0;
      else if (y == 0) pgm[y * width + x] = 0;
      else if (y == height-1) pgm[y * width + x] = 0;
      else {
        int e = (image[y * width + x] != image[(y+1) * width + x+1])
             || (image[(y+1) * width + x] != image[y * width + x+1]);
        pgm[y * width + x] = e ? 0 : 255;
      }
    }
  }
}

void write_pgm(unsigned char *p, int width, int height) {
  fprintf(stdout, "P5\n%d %d\n255\n", width, height);
  fwrite(p, width * height, 1, stdout);
  fflush(stdout);
}

int main(int argc, char **argv) {
  if (argc < 3) return 0;
  srand(time(0));
  int width = atoi(argv[1]);
  int height = atoi(argv[2]);
  struct histogram *h = initialize(width, height);
  packing(h, width, height);
  edges(width, height);
  write_pgm(pgm, width, height);
  return 0;
}

Code:

/* gcc -std=c99 -Wall -pedantic -Wextra graph-paper.c -o graph-paper -lGLEW -lGL -lGLU -lglut
   ./graph-paper
   for i in *.pgm
   do
     pnmtopng -force -interlace -compression 9 -phys 11811 11811 1 <$i >${i%pgm}png
   done */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <GL/glew.h>
#include <GL/glut.h>

#define GLSL(s) #s

static const int width = 2480;
static const int height = 3508;
static const float size = 4.0;

static int gcd(int x, int y) {
  if (y == 0) { return x; } else { return gcd(y, x % y); }
}

static const char *src = GLSL(
  uniform vec3 twist;
  vec2 clog(vec2 x) {
    return vec2(log(length(x)), atan2(x.y, x.x)) * 0.15915494309189535;
  }
  void main() {
    vec2 p = gl_TexCoord[0].xy;
    p *= mat2(0.0, -1.0, 1.0, 0.0);
    vec2 q = clog(p);
    float a = atan2(twist.y, twist.x);
    float h = length(vec2(twist.x, twist.y));
    q *= mat2(cos(a), sin(a), -sin(a), cos(a)) * h;
    float d = length(vec4(dFdx(q), dFdy(q)));
    float l = ceil(-log2(d));
    float f = pow(2.0, l + log2(d)) - 1.0;
    l -= 6.0;
    float o[2];
    for (int i = 0; i < 2; ++i) {
      l += 1.0;
      vec2 u = q * pow(2.0, l);
      u *= twist.z;
      u -= floor(u);
      float r = min
        ( min(length(u), length(u - vec2(1.0, 0.0)))
        , min(length(u - vec2(0.0, 1.0)), length(u - vec2(1.0, 1.0)))
        );
      float c = clamp(1.5 * r / (pow(2.0, l) * d), 0.0, 1.0);
      vec2 v = q * pow(2.0, l - 2.0);
      v *= twist.z;
      v -= floor(v);
      float s = min(min(v.x, v.y), min(1.0 - v.x, 1.0 - v.y));
      float k = clamp(0.75 + 0.25 * s / (pow(2.0, l - 2.0) * d), 0.0, 1.0);
      o[i] = c * k;
    }
    gl_FragColor = vec4(vec3(mix(o[1], o[0], f)), 1.0);
  }
);

int main(int argc, char **argv) {
  glutInit(&argc, argv);
  glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE);
  glutCreateWindow("graphpaper");
  glewInit();
  GLint success;
  int prog = glCreateProgram();
  int frag = glCreateShader(GL_FRAGMENT_SHADER);
  glShaderSource(frag, 1, (const GLchar **) &src, 0);
  glCompileShader(frag);
  glAttachShader(prog, frag);
  glLinkProgram(prog);
  glGetProgramiv(prog, GL_LINK_STATUS, &success);
  if (!success) exit(1);
  glUseProgram(prog);
  GLuint utwist = glGetUniformLocation(prog, "twist");
  GLuint tex;
  glGenTextures(1, &tex);
  glBindTexture(GL_TEXTURE_2D, tex);
  glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, 4096, 4096, 0, GL_RED, GL_UNSIGNED_BYTE, 0);
  glBindTexture(GL_TEXTURE_2D, 0);
  GLuint fbo;
  glGenFramebuffers(1, &fbo);
  glBindFramebuffer(GL_FRAMEBUFFER, fbo);
  glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0);
  glViewport(0, 0, width, height);
  glLoadIdentity();
  gluOrtho2D(0, 1, 1, 0);
  unsigned char *buffer = malloc(width * height);
  for (int x = 1; x <= 5; ++x) {
    for (int y = 0; y <= x; ++y) {
      if ((x > 1 && y == 0) || (y > 0 && gcd(x, y) != 1)) { continue; }
      for (int n = 5; n <= 8; ++n) {
        glUniform3f(utwist, x, y, n / 8.0);
        glBegin(GL_QUADS);
        {
          float u = size / 2.0;
          float v = size * (height - width / 2.0) / width;
          glTexCoord2f( u, -v); glVertex2f(1, 0);
          glTexCoord2f( u,  u); glVertex2f(1, 1);
          glTexCoord2f(-u,  u); glVertex2f(0, 1);
          glTexCoord2f(-u, -v); glVertex2f(0, 0);
        }
        glEnd();
        glReadPixels(0, 0, width, height, GL_RED, GL_UNSIGNED_BYTE, buffer);
        char fname[200];
        snprintf(fname, 100, "graphpaper-%d-%d-%d.pgm", x, y, n);
        FILE *f = fopen(fname, "wb");
        fputs("P5\n2480 3508\n255\n", f);
        fflush(f);
        fwrite(buffer, width * height, 1, f);
        fflush(f);
        fclose(f);
      }
    }
  }
  free(buffer);
  glDeleteFramebuffers(1, &fbo);
  glDeleteTextures(1, &tex);
  glDeleteShader(frag);
  glDeleteProgram(prog);
  glutReportErrors();
  return 0;
}

Output:

This is an excavation from my archives, around 2013 or so. Print out your favourites and enjoy doodling!

Recently I've been revisiting the code from my Monotone, extending it to use OpenGL cube maps to store the feedback texture instead of nonlinear warping in a regular texture. This means I can use Möbius transformations instead of simple similarities and still avoid excessively bad blurriness and edge artifacts. I've been toying with colour too: unlike the chaos game algorithm for fractal flames (which can colour according to a "hidden" parameter, leading to interesting and dynamic colour structures), the texture feedback mechanism I'm using can only cope with "structural" RGB colours (with an alpha channel for overall brightness). A 4x4 colour matrix seems to be more interesting than the off-white multipliers I was using to start with.

Some videos:

- Moebius Bubble Chamber (stereographic projection, black and white)
- Moebius Blueprints (360, slight colour, low resolution)
- Moebius Blueprints 2 (360, more colour, high resolution)
- Moenotone Demo (360, colour, high resolution)
- Moenotone Demo 2 (stereographic projection, colour)

This blog post is about two separate things, but both involve Möbius transformations so I combined them into one.

Rotations of the Riemann sphere correspond to those elliptic Möbius transformations whose fixed points are antipodal. Suppose we have two vectors \(u, v \in \mathbb{R}^3\) with \(|u| = |v| = 1\) and we want to find the Möbius transformation for the corresponding rotation of the Riemann sphere that takes \(u\) to \(v\) in the shortest way. This rotation has fixed points \(w_\pm = \pm \frac{u \times v}{| u \times v |}\). By stereographic projection these become the fixed points of the Möbius transformation: \(g_\pm = \frac{w_x + i w_y}{1 - w_z}\) for each \(\pm\). Further, the elliptic transformation has characteristic constant \(k = e^{i \theta} = \cos \theta + i \sin \theta = u \cdot v + i |u \times v|\), from all of which the transformation is:

\[ M_{u \to v} = \begin{pmatrix} g_+ - k g_- & (k - 1) g_+ g_- \\ 1 - k & k g_+ - g_- \end{pmatrix} \]

If instead of vectors we wanted to rotate between complex numbers, just use stereographic unprojection to get the 3D coordinates and proceed as before: \( \frac{(2x, 2y, x^2+y^2-1)}{x^2+y^2+1} \). (Note: there may be a sign issue with the \(\sin \theta\) calculation, in my use case I didn't need to worry about it as all the issues cancelled each other out.)

Bézier curves are useful for generating smooth curves when only linear interpolation is available (De Casteljau's algorithm constructs the curve by repeated linear interpolation). Linear interpolation for Möbius transformations involves 2x2 complex matrix diagonalisation, to raise matrices to fractional powers (between 0 and 1). I wrote about this in more detail in my 2015 blog post interpolating Möbius transformations.

A C1-continuous (but not C2-continuous at the join points) piecewise cubic Bézier spline can be defined by specifying the points (Möbius transformations) \(P_i\) through which the curve passes, and the tangent \(T_i\) at each point. Then the pieces of the spline are defined by control points \((P_i, P_i T_i, T_{i+1}^{-1} P_{i+1}, P_{i+1})\). The tangents \(T_i\) might be "small" for smoother results; for example, they can be constructed by linearly interpolating between the identity (with large weight) and an arbitrary transform (with small weight). This interpolation scheme gives visually much more sensible results than naive Catmull-Rom spline interpolation of each matrix coefficient separately, while still maintaining smoothness.

I updated my Inflector Gadget, adding a keyframe animation feature among other goodies. I also made a new page for it, where all the downloads and documentation are to be found. Go check it out!

PS: Inflector Gadget can make images like these in very little time:

Previously I wrote about an automated Julia morphing method extrapolating patterns in the binary representation of external angles, and then tracing external rays. However this was impractical as it was \(O(p^2)\) for final period \(p\), and the period typically more than doubles at each next level of morphing. This week I devised an \(O(p)\) algorithm, which requires a little bit of setting up and doesn't always work, but when it works it works very well.

The first key insight was that in embedded Julia sets, the primary spirals and tips are distinguishable by the preperiods of the Misiurewicz points at their centers. Moreover when using the "full" Newton's method algorithm for Misiurewicz points that rejects lower preperiods by division, the basins of attraction comfortably enclose the center of the embedded Julia set itself.

So, we can choose the appropriate (pre)period to get to the center of the spiral either inwards towards the main body of the Mandelbrot set or outwards towards its tips. Now, from a Misiurewicz center of a spiral, Newton's method for periodic nucleus finding will work for any of the periods that form the structural spine of the spiral - these go up by a multiple of the period of the influencing island. From these nuclei we can jump to the Misiurewicz spiral on the other side, using Newton's method again. In this way we can algorithmically find any nucleus or Misiurewicz point in the structure of the embedded Julia set.

Some images should make this clearer at this point: blue means Newton's method for nucleus, red means Newton's method for Misiurewicz point, nuclei are labeled with their period, Misiurewicz points with preperiod and period in that order, separated by 'p'.

The second key insight was that the atom domain coordinate of the tip of the treeward branch at each successive level is scaled by a power of 1.5 from the one at the previous level. Because atom domain coordinates correspond to the unit disc, this means they are closer to the nucleus. This allowed an initial guess for finding the Misiurewicz point at the tip more precisely (the first insight only applies to "top-level" embedded Julia sets, not their morphings - there is a "symmetry trap" that breaks Newton's method, because the boundary of the basins of attraction passes through the point we want to start from). I implemented a Newton's method iteration to find a point with a given atom domain coordinate. This relationship is only true in the limit, so the input to the automatic morphing algorithm starts at the first morphing, rather than the top-level embedded Julia set.

My first test was quite challenging: to morph a tree with length 7 arms, from an embedded Julia set at angled internal address:

1_{1/2}→2_{1/2}→3_{2/5}→15_{4/7}→88

The C code (full link at the bottom) that sets up the parameters for this morphing looks like this:

#ifdef EXAMPLE_1
const char *embedded_julia_ray = ".011100011100011011100011011100011100011100011011100011011100011100011100011011100011100001110001101110010010010010010010010001110001110001101110001101110001110001110001101110001101110001110001110001101110001110000111000110111(001)";
int ray_preperiod = 225;
int ray_period = 3;
double _Complex ray_endpoint = -1.76525599938987623396492597243303e+00
                             + 1.04485517375987067290733632798876e-02 * I;
int influencing_island_period = 3;
int embedded_julia_set_period = 88;
int denominator_of_rotation = 5;
int arm_length = 7;
double view_size_multiplier = 3600;
#endif

The ray lands on the treeward-tip Misiurewicz point of the first morphed Julia set; this end point is cached to avoid long ray tracing computations. The next 4 numbers are involved in the iterative morphing calculations of the relevant periods and preperiods, with the arm length being the primary variable to adjust once the Julia set is found. The view size multiplier sets how far to zoom out from the central morphed figure to frame the result nicely; maybe I can find a good heuristic to determine this based on arm length.

The morphing looks like this:

The second example is similar, starting with the island with this angled internal address, with tree morphing arm length 9.

1_{1/2}→2_{1/2}→3_{1/2}→4_{1/2}→8_{1/15}→116_{1/2}→119

The third and final example (for now) is simpler still, starting at the island with this internal address, with tree morphing arm length 1.

1_{1/3}→3_{1/2}→4_{11/23}→89

The code for example 3 contains an ugly hack, because the method for guessing the location of the next Misiurewicz point (for starting Newton's method iterations) isn't good enough - the radius is accurate, but the angle is not - my atom domain coordinate method is clearly not the correct one in general...

Here are the timings in seconds for calculating the coordinates (not parallelized) and rendering the images. I used m-perturbator-offline at 1280x720; the parallel efficiency is somewhat low because it doesn't know the center point is already a good reference and it tries to find one in the view - it would be much faster if I let it take the primary reference as external input - more things TODO:

| morph | coordinates eg1 | coordinates eg2 | coordinates eg3 | eg1 real | eg1 user | eg2 real | eg2 user | eg3 real | eg3 user |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 0 | 0 | 0 | 0.633 | 2.04 | 0.625 | 2.00 | 0.551 | 1.74 |
| 2 | 0 | 0 | 0 | 0.817 | 2.41 | 0.943 | 2.31 | 0.932 | 3.19 |
| 3 | 0 | 0 | 0 | 1.16 | 3.56 | 1.43 | 4.53 | 1.26 | 4.19 |
| 4 | 1 | 0 | 0 | 1.37 | 4.38 | 1.86 | 6.06 | 1.73 | 5.86 |
| 5 | 1 | 2 | 1 | 2.29 | 7.45 | 3.43 | 11.4 | 2.67 | 8.78 |
| 6 | 2 | 4 | 2 | 3.95 | 12.3 | 5.73 | 18.4 | 4.26 | 14.9 |
| 7 | 10 | 14 | 2 | 7.42 | 23.6 | 8.66 | 26.7 | 6.90 | 21.1 |
| 8 | 24 | 37 | 7 | 28.2 | 95.1 | 42.4 | 142 | 12.6 | 36.4 |
| 9 | 92 | 155 | 27 | 63.7 | 257 | 92.6 | 292 | 21.5 | 63.5 |
| 10 | 288 | 442 | 51 | 141 | 419 | 207 | 609 | 77.5 | 263 |
| total | 418 | 654 | 90 | 259 | 774 | 372 | 1120 | 137 | 430 |

(The "coordinates" columns are a single wall-clock time per example; the "image rendering" columns give real and user time per example.)

The code is part of my mandelbrot-numerics project. You also need my mandelbrot-symbolics project to compile the example program, and you may also want mandelbrot-perturbator to render the output (note: the GTK version is currently hardcoded to 65536 maximum iteration count, which isn't enough for deeper morphed Julia sets - adding runtime configuration for this is my next priority). Other deep zoomers are available, for example my Kalles Fraktaler 2 + GMP fork with Windows binaries available (that also work in WINE on Linux).

On the fractal chats Discord server, it was discussed that the "elliptic" variation in fractal flame renderers suffered from precision problems. So I set about trying to fix them. The test parameters are here: elliptic-precision-problems.flame. It looks like this:

The black holes are the problem. Actually it turns out that the main cause of the large hole was the addition of an epsilon to prevent division by zero in the "spherical" variation; removing that gives this image, still with small black holes in the spirals:

The original code for the flam3 implementation of the elliptic variation is:

void var62_elliptic (flam3_iter_helper *f, double weight) {
   /* Elliptic in the Apophysis Plugin Pack */
   double tmp = f->precalc_sumsq + 1.0;
   double x2 = 2.0 * f->tx;
   double xmax = 0.5 * (sqrt(tmp+x2) + sqrt(tmp-x2));
   double a = f->tx / xmax;
   double b = 1.0 - a*a;
   double ssx = xmax - 1.0;
   double w = weight / M_PI_2;
   if (b<0) b = 0; else b = sqrt(b);
   if (ssx<0) ssx = 0; else ssx = sqrt(ssx);
   f->p0 += w * atan2(a,b);
   if (f->ty > 0) f->p1 += w * log(xmax + ssx);
   else f->p1 -= w * log(xmax + ssx);
}

When x is near +/-1 and y is near 0, xmax is near 1, so a is near +/- 1, so there is a catastrophic cancellation (loss of significant digits) in the calculation of b = 1 - a*a. But it turns out that b doesn't need to be computed at all, because atan(a / sqrt(1 - a*a)) is the same as asin(a).

There is a second problem with ssx = xmax - 1, as xmax is near 1 there is a catastrophic cancellation here too. So the next step is to see how to calculate ssx without subtracting two values of roughly equal size and thus losing precision. Some algebra:

ssx = xmax - 1
    = 0.5 (sqrt(tmp+x2) + sqrt(tmp-x2)) - 1
    = 0.5 (sqrt(tmp+x2) + sqrt(tmp-x2) - 2)
    = 0.5 (sqrt(tmp+x2)-1 + sqrt(tmp-x2)-1)
    = 0.5 (sqrt(x*x+y*y+2*x+1)-1 + sqrt(x*x+y*y-2*x+1)-1)
    = 0.5 (sqrt(u+1)-1 + sqrt(v+1)-1)

Now we have subexpressions of the form sqrt(u+1)-1, which will lose precision when u is near 0. One way of avoiding this is to use a Taylor series for the function expanded about u=0, then convert this to a Padé approximant. I used a Wolfram Alpha Open Code Notebook to do this, here is the highlight:

> PadeApproximant[Normal[Series[Sqrt[x+1]-1, {x, 0, 8}]], {x, 0, 4}] (x/2+(3 x^2)/4+(5 x^3)/16+x^4/32) / (1+(7 x)/4+(15 x^2)/16+(5 x^3)/32+x^4/256)

Inspecting a plot of the difference between the approximant and the original function shows that it's accurate to about 1e-16 in the range -0.0625..+0.0625, which gives the following code implementation:

double sqrt1pm1(double x) {
  if (-0.0625 < x && x < 0.0625) {
    /* evaluate numerator and denominator polynomials by Horner's rule */
    double num = 0;
    double den = 0;
    num += 1.0 / 32.0;  den += 1.0 / 256.0;
    num *= x;           den *= x;
    num += 5.0 / 16.0;  den += 5.0 / 32.0;
    num *= x;           den *= x;
    num += 3.0 / 4.0;   den += 15.0 / 16.0;
    num *= x;           den *= x;
    num += 1.0 / 2.0;   den += 7.0 / 4.0;
    num *= x;           den *= x;
    den += 1.0;
    return num / den;
  }
  return sqrt(1 + x) - 1;
}

Now we can compute xmax - 1 without subtracting, and finally we can use log1p() to avoid inaccuracy from log of values near 1. The final code looks like this:

void var62_elliptic (flam3_iter_helper *f, double weight) {
   double x = f->tx;
   double y = f->ty;
   double x2 = 2.0 * x;
   double sq = f->precalc_sumsq;
   double u = sq + x2;
   double v = sq - x2;
   double xmaxm1 = 0.5 * (sqrt1pm1(u) + sqrt1pm1(v));
   double a = x / (1 + xmaxm1);
   double ssx = xmaxm1;
   double w = weight / M_PI_2;
   if (ssx<0) ssx = 0; else ssx = sqrt(ssx);
   f->p0 += w * asin(clamp(a, -1, 1));
   if (y > 0) f->p1 += w * log1p(xmaxm1 + ssx);
   else f->p1 -= w * log1p(xmaxm1 + ssx);
}

The pudding, it works: the small black holes in the spirals are gone!

Finally, it seems elliptic is similar but not quite equal to the complex function 1 - acos(z) * 2 / PI. Standard library implementations of acos probably have accuracy-preserving techniques that might be worth a look; I haven't checked yet. But the difference may be significant for images: notably the acos version is conformal, while the elliptic variation doesn't seem to be. Here's a comparison (elliptic on the left, acos on the right):

**EDIT 2017-11-27** I also changed the badval threshold from 1e10
to 1e100, and I've been informed that this change is also critical for getting
the good appearance (i.e., you need both the numerical voodoo and the threshold
increase).

A black-and-white A5 paperback with 100 pages of Turing patterns, hand-selected from 1000s of candidate images generated by a multi-layer reaction-diffusion biochemistry system, simulated on a digital computer as a coupled cellular automaton.

Click the picture above for lots more information, including photos, and links to print-on-demand and source code too (a fork of my cca project).

I generated the page images as 100dpi bitmaps, then vectorized them with potrace. The printed copy is really nice: smooth white paper good for colouring, and a bright glossy cover. One small issue is that it's hard to get pencils right into the perfect binding, but I expected that before I had it made.

This project is what my GPU temperature throttle was for, though I'm sure it'll come in handy for other things as well.

A while ago I read this paper and finally got around to implementing it this week:

Real-Time Hatching

Emil Praun, Hugues Hoppe, Matthew Webb, Adam Finkelstein

Appears in SIGGRAPH 2001

The key concept in the paper is the "tonal art map", in which strokes are added to a texture array's mipmap levels to preserve coherence between levels and across tones - each stroke in an image is also present in all images above and to the right:

My possibly-novel contribution is to use the inverse (fast) Fourier transform (IFFT) to generate blue noise for the tonal art map generation. This takes a fraction of a second, compared to the many hours for void-and-cluster methods at large image sizes. The quality may be lower, but that's something to investigate another time - it's good enough for this hatching experiment. Here's a comparison of white and blue noise; the blue noise is perceptually much smoother, lacking low-frequency components:

The other parts of the paper I haven't implemented yet, namely adjusting the hatching to match the principal curvature directions of the surface. This is more of a mesh parameterization problem - I'm keeping it simple and generating UVs for the bunny by spherical projection, instead of something complicated and better-looking.

My code is here:

git clone https://code.mathr.co.uk/hatching.git

Note that there are horrible hacks in the shaders for the specific scene geometry at the moment; hopefully I'll find time to clean it up and make it more general soon. You'll need to download the `bunny.obj` from cs5721f07.

I implemented a little widget in HTML5 Javascript and WebGL:

/clusters/

It's inspired by Clusters by Jeffrey Ventrella, but its source seems to be obfuscated so I couldn't see how it worked. Instead I worked backwards from the referenced ideas of Lynn Margulis. I modelled a symbiotic system by a bunch of particles, each craving or disgusted by the emissions of the others. There is a settable number of different substances, and (currently hardcoded) 24 different species with their own tastes, represented by different colours. The particle count is settable too, but due to a bug in my code you have to manually refresh the page after changing it (and don't go too high: the slowdown is \(O(n^2)\)).

Some seeds give really interesting large-scale structures that chase each other around, with bits peeling off and joining other groupings. If A is attracted to B but B is repulsed by A, then a pursuit ensues. If the generated rule weights (576 numbers with the default settings) align just right you can get a chain or even a ring that becomes stable and spins of its own accord. Other structures include concentric shells in near-spherical blobs.

One thing I'm not happy with is the friction - I had to add it to make the larger clusters stable, but it makes smaller clusters less mobile. There's probably something my naive model misses from Ventrella's original, maybe some kind of satiation and transfer of actual materials between particles, rather than a per-species (dis)like tendency. If more satiated particles were to move less quickly than hungry particles, that might fix it. I'll try it another day!

The Mandelbrot set contains many hyperbolic components (cardioid-like and disc-like regions), with hairy filaments connecting them in a tree-like way. Each component has a nucleus at its center, whose periodic orbit contains 0. Each component is surrounded by an atom domain, which for discs has about 4 times the radius of the component (the relationship for cardioids is less regular, but the domain is often about the square root of the component's size). Labelling a picture of the Mandelbrot set with the periods can provide insights into its deeper structure, and most of the time using the atom domain size as the label size works pretty well.

Inspired by a feature of Power MANDELZOOM (scroll down to the third image, titled "Embedded Julia set") that locates periodic points too deep to see, I implemented a grid scan algorithm to find periodic points. I vaguely recall Robert P. Munafo explaining this algorithm to me in private email, so most of the credit belongs with him. The font size variation is all mine, though.

Using my mandelbrot-numerics and mandelbrot-graphics libraries, the period scan works like this:

```c
// scan successively finer grids for periods
for (int grid = mingridsize << 8; grid >= mingridsize; grid >>= 1)
  for (int y = grid/2; y < h; y += grid)
    for (int x = grid/2; x < w; x += grid)
    {
      double _Complex c0 = x + I * y;
      double _Complex dc0 = grid;
      // transform pixel coordinates to the 'c' plane
      m_d_transform_forward(transform, &c0, &dc0);
      // find the period of a nucleus within a large box
      // uses Robert P. Munafo's Jordan curve method
      int p = m_d_box_period_do(c0, 4.0 * cabs(dc0), maxiters);
      if (p > 0)
        // refine the nucleus location (uses Newton's method)
        if (m_converged == m_d_nucleus(&c0, c0, p, 16))
        {
          // verify the period with a small box
          // if the period is wrong, the size estimates will be way off
          as[atoms].period = m_d_box_period_do(c0, 0.001 * cabs(dc0), 2 * p);
          if (as[atoms].period > 0)
          {
            as[atoms].nucleus = c0;
            // size of component using algorithm from ibiblio.org M-set e-notes
            as[atoms].size = cabs(m_d_size(c0, as[atoms].period));
            // size of atom domain using algorithm from an earlier blog post of mine
            as[atoms].domain_size = m_d_domain_size(c0, as[atoms].period);
            // shape of component (either cardioid or disc) after Dolotin and Morozov (2008 eq. 5.8)
            as[atoms].shape = m_d_shape_discriminant(m_d_shape_estimate(c0, as[atoms].period));
            atoms++;
          }
        }
    }
```

This does give duplicates in the output array, but they can be removed later. I found it better to use a mask image (a 2D array), marking a circle around each accepted label and rejecting any candidate whose location is already marked, than to use a quadratic-time loop comparing locations against a threshold distance. Depending on the size of the circles, this also helps prevent messy label overlaps.
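
For illustration, a minimal sketch of that mask approach (a plain byte buffer with invented names; the actual code draws into an image):

```c
#include <stdbool.h>

// mask: w*h bytes, initially all 0
// returns true (and stamps a filled circle of radius r) if (x0, y0) was unmarked
bool try_mark(unsigned char *mask, int w, int h, int x0, int y0, int r) {
  if (x0 < 0 || x0 >= w || y0 < 0 || y0 >= h || mask[y0 * w + x0])
    return false; // out of bounds, or a label already claimed this spot
  for (int y = y0 - r; y <= y0 + r; ++y)
    for (int x = x0 - r; x <= x0 + r; ++x)
      if (0 <= x && x < w && 0 <= y && y < h &&
          (x - x0) * (x - x0) + (y - y0) * (y - y0) <= r * r)
        mask[y * w + x] = 1;
  return true;
}
```

Each candidate label calls `try_mark` once; a duplicate of an already-accepted nucleus lands on a stamped pixel and is rejected with a single lookup, instead of being compared against every previously accepted location.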

One problem is that the range of atom domain sizes can be huge, with domains in filaments being orders of magnitude smaller than those present in embedded Julia sets. This can be fixed with some hacks in the font size calculation:

The image above calculates the font size like this:

```c
// convert to pixel coordinates
int p = as[a].period;
double _Complex c0 = as[a].nucleus;
double _Complex dc0 = p == 1 ? 1 : as[a].domain_size; // period 1 domain is infinite
m_d_transform_reverse(transform, &c0, &dc0);
// shrink disc labels a bit to avoid overlaps
double fs = (as[a].shape == m_cardioid ? 1 : 0.5) * cabs(dc0);
// rescale filament labels using properties of periods in this particular embedded Julia set
if ((p % 4) != (129 % 4))
  fs = 8 * log2(fs) + maxfontsize;
// ensure a minimum label size
fs = fmax(fs, minfontsize);
```

The image below replaces the specific period property `(p % 4) != (129 % 4)` with `(p % 4) != 0`. I'll figure out how best to generalize this and allow command-line arguments; at the moment I've just been editing the code and recompiling to adapt to different views, which is hardly ideal.

You can click the pictures for bigger versions (a few MB each). The last 3 images are centered on `-1.9409856638151786271684397e+00 + 6.4820395780451436662598436e-04 i`. After a few more cleanups I'll push the code to my mandelbrot-graphics git repository linked above.