Some of my images are in an online exhibition this month:

eRR0R(iii)

an exploration of the fertility of errors

in a world that covers its flaws in the blinding light of universal truths and institutionally reinforced regimes of visibility, we are interested in the fertile shades opened up by errors. the antiseptic intellectual environment our societies try to achieve, while arguably “healthy” and “safe” for the established values, has the huge disadvantage of obscuring any fundamentally different modes of existence. we [looked for] submissions that explore the fertility of errors and question our inherited worldview.

Here's my mini statement:

I work with mathematics and algorithms to make art. Sometimes it doesn't go to plan. I present recent failures experienced on the road to successful implementation of desired results.

I'm not sure if it'll be archived after the month is over, so experience it while you can.

THSF#9 is taking place 10th-13th May in Toulouse, France. I'll be performing with Clive (live-coding audio in C) on Saturday night, and also giving a lightning talk about it. I'm a bit nervous about it, as my schoolboy French is very out of practice. Luckily I'll be giving my talk in English.

**EDIT:** it went well, and I uploaded my set and talk to the newly updated Clive website.

Recently I've been experimenting with different ways to improve the image quality of fractal renders. The first key step was jitter: adding independent uniform random offsets to the pixel coordinates before calculating the fractal iterations. This converts strongly aliased frequencies (like Moiré patterns) into much more perceptually acceptable broad-spectrum noise. The noise can be reduced by averaging many images rendered with different pseudo-random number generator seed values. I used a Wang/Burtle uint32 hash of the pixel coordinates and a subframe index; its statistical quality isn't well characterised, but it was visually indistinguishable from the cryptographic hash MD5 in my tests.
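For reference, a Thomas Wang-style 32-bit integer hash (the "Wang/Burtle" family mentioned above) can be sketched like this. The way the coordinates and subframe index are combined below is illustrative only, not necessarily the exact scheme used in my renderer:

```c
#include <stdint.h>

/* Thomas Wang's 32-bit integer hash (one commonly cited variant). */
static uint32_t wang_hash(uint32_t key) {
    key = ~key + (key << 15);
    key ^= key >> 12;
    key += key << 2;
    key ^= key >> 4;
    key *= 2057;
    key ^= key >> 16;
    return key;
}

/* Combine pixel coordinates and subframe index into one seed.
   This combination scheme is a hypothetical example. */
static uint32_t pixel_seed(uint32_t x, uint32_t y, uint32_t subframe) {
    return wang_hash(wang_hash(wang_hash(x) ^ y) ^ subframe);
}

/* Map the hash to a jitter offset in [-0.5, 0.5). */
static double jitter(uint32_t seed) {
    return seed / 4294967296.0 - 0.5;
}
```

Each step of `wang_hash` is invertible modulo 2^32, so the hash itself is a bijection; collisions can still occur in the coordinate-combining stage.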

Unfortunately it's not perfect: artifacts can return to visibility after averaging (which means they were never completely eliminated in the first place; possibly the colouring algorithm, which computes image-space derivatives of the fractal iteration data, defeated the jitter in some way). Correctly bandlimiting and sampling fractal images seems to be a hard problem, and simpler techniques like supersampling by a large factor with properly filtered downscaling can do better than jitter alone.

The 1D problem seems more tractable. Here you can see the spectrum of the audio version, a sine sweep starting at 8kHz and rising 4 octaves - with uniform sampling it folds over into a descending sweep:

With jittered sampling the aliased energy becomes white noise:

Unfortunately the noise reduction from averaging isn't great: doubling the number of jittered copies (each section in the image below) reduces the noise level by only 3dB. Uniformly oversampling by only 6x would eliminate aliasing completely for this toy example, but some signals might not have a known upper bandwidth limit.

Performance is terrible too: many minutes of CPU time for a few seconds of output. Anyway, here's the source code for the audio experiment, so you can listen to the output:

```c
// gcc non-uniform_sampling.c -lfftw3 -lm -O3
// time ./a.out > out.raw
// audacity # import out.raw f64 mono 48000Hz
#include <complex.h>
#include <math.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fftw3.h>

#define PI 3.141592653589793

// sweep parameters
#define HZ 8000
#define OCTAVES 4
#define SPEED 1
#define SAMPLERATE 48000
#define N (OCTAVES * SAMPLERATE / SPEED)

// fft parameters (size and overlap)
#define F 256
#define O 4

// jitter amount
#define SCALE 1

// buffers
double t[N]; // sample time
double f[N]; // sample value
double g[N]; // accumulated data
double h[N]; // normalized data for output

// fft buffers
fftw_complex *fft;
fftw_complex *sig;

// raised cosine window for overlap-add resynthesis
double window(double t) {
  return 0.5 - 0.5 * cos(2 * PI * t);
}

// entrypoint
int main() {
  // initialize
  fft = fftw_malloc(sizeof(fftw_complex) * F);
  sig = fftw_malloc(sizeof(fftw_complex) * F);
  fftw_plan plan = fftw_plan_dft_1d(F, fft, sig, FFTW_BACKWARD, FFTW_MEASURE);
  memset(g, 0, sizeof(g));
  // average many passes with distinct random seeds
  int pass = 0;
  for (int PASSES = 0; PASSES < 10; ++PASSES) {
    for (; pass < 1 << PASSES; ++pass) {
      fprintf(stderr, "%d\n", pass);
      // sample non-bandlimited signal at non-uniform intervals
      for (int i = 0; i < N; ++i) {
        t[i] = i + (rand() / (double) RAND_MAX - 0.5) * SCALE;
        f[i] = sin(2 * PI * exp(log(HZ << OCTAVES) * t[i] / N + (1 - t[i] / N) * log(HZ)));
      }
      // overlap-add resynthesis
      for (int start = -F; start < N; start += F / O) {
        // window and DFT
        memset(fft, 0, sizeof(fftw_complex) * F);
        for (int i = start; i < start + F; ++i) {
          if (0 <= i && i < N) {
            double t0 = (t[i] - start) / F;
            double f0 = window(t0) * f[i];
            for (int k = 0; k < F; ++k) {
              double w0 = 2 * PI * (k > F/2 ? k - F : k);
              double w = - t0 * w0;
              fft[k] += f0 * (cos(w) + I * sin(w));
            }
          }
        }
        // do IFFT
        fftw_execute(plan);
        // window and accumulate
        for (int i = start; i < start + F; ++i) {
          if (0 <= i && i < N) {
            double t1 = (i - start) / (double) F;
            double w = window(t1);
            double s = creal(sig[i - start]);
            g[i] += w * s;
          }
        }
      }
    }
    // normalize
    double grms2 = 0;
    for (int i = 0; i < N; ++i)
      grms2 += g[i] * g[i];
    grms2 /= N;
    grms2 = sqrt(grms2);
    grms2 *= 4;
    for (int i = 0; i < N; ++i)
      h[i] = g[i] / grms2;
    // output
    fwrite(h, sizeof(h), 1, stdout);
    fflush(stdout);
  }
  // cleanup
  fftw_destroy_plan(plan);
  fftw_free(sig);
  fftw_free(fft);
  return 0;
}
```

Reporting this "failed" experiment in the interest of science!

The essence of perturbation is to find the difference between the high precision values of a function at two nearby points, while using only the low precision value of the difference between the points. In this post I'll write the high precision points in CAPITALS and the low precision deltas in lowercase. There are two auxiliary operations needed to define the perturbation \(P\), \(B\) replaces all variables by their high precision version, and \(W\) replaces all variables by the sum of the high precision version and the low precision delta. Then \(P = W - B\):

\[\begin{aligned} B(f) &= f(X) &\text{ (emBiggen)}\\ W(f) &= f(X + x) &\text{ (Widen)}\\ P(f) &= W(f) - B(f) \\ &= f(X + x) - f(X) &\text{ (Perturb)} \end{aligned}\]

For example, perturbation of \(f(z, c) = z^2 + c\), i.e. \(P(f)\), works out like this:

\[\begin{aligned} & P(f) \\ \to & f(Z + z, C + c) - f(Z, C) \\ \to & (Z + z)^2 + (C + c) - (Z^2 + C) \\ \to & Z^2 + 2 Z z + z^2 + C + c - Z^2 - C \\ \to & 2 Z z + z^2 + c \end{aligned}\]

where in the final result the large terms \(Z^2\) and \(C\) have cancelled out and all the remaining terms are "small".

For polynomials, regular algebraic manipulation can lead to successful outcomes, but for other functions it seems some "tricks" are needed. For example, \(|x|\) (over \(\mathbb{R}\)) can be perturbed with a "diffabs" function proceeding via case analysis:

```js
// evaluate |X + x| - |X| without catastrophic cancellation
function diffabs(X, x) {
  if (X >= 0) {
    if (X + x >= 0) {
      return x;
    } else {
      return -(2 * X + x);
    }
  } else {
    if (X + x > 0) {
      return 2 * X + x;
    } else {
      return -x;
    }
  }
}
```

This formulation was developed by laser blaster at fractalforums.com.

For transcendental functions, other tricks are needed. Here for example is a derivation of \(P(\sin)\):

\[\begin{aligned} & P(\sin) \\ \to & \sin(X + x) - \sin(X) \\ \to & \sin(X) \cos(x) + \cos(X) \sin(x) - \sin(X) \\ \to & \sin(X) (\cos(x) - 1) + \cos(X) \sin(x) \\ \to & \sin(X) \left(-2\sin^2\left(\frac{x}{2}\right)\right) + \cos(X) \sin(x) \\ \to & \sin(X) \left(-2\sin^2\left(\frac{x}{2}\right)\right) + \cos(X) \left(2 \cos\left(\frac{x}{2}\right) \sin\left(\frac{x}{2}\right)\right) \\ \to & 2 \sin\left(\frac{x}{2}\right) \left(-\sin(X) \sin\left(\frac{x}{2}\right) + \cos(X) \cos\left(\frac{x}{2}\right)\right) \\ \to & 2 \sin\left(\frac{x}{2}\right) \cos\left(X + \frac{x}{2}\right) \end{aligned}\]

Knowing when to apply the sum- and double-angle formulae is a bit of a mystery, especially if the end goal is not known beforehand. This makes implementing a symbolic algebra program that can perform these derivations quite a challenge.

In lieu of a complete symbolic algebra program that does it all on demand, here are a few formulae that I calculated, some by hand, some using Wolfram Alpha:

\[\begin{aligned} P(a) &= 0 \\ P(a f) &= a P(f) \\ P(f + g) &= P(f) + P(g) \\ P(f g) &= P(f) W(g) + B(f) P(g) \\ P\left(\frac{1}{f}\right) &= -\frac{P(f)}{B(f)W(f)} \\ P(|f|) &= \operatorname{diffabs}(B(f), P(f)) \\ P(\exp) &= \exp(X) \operatorname{expm1}(x) \\ P(\log) &= \operatorname{log1p}\left(\frac{x}{X}\right) \\ P(\sin \circ f) &= \phantom{-}2 \sin\left(\frac{P(f)}{2}\right)\cos\left(\frac{W(f)+B(f)}{2}\right) \\ P(\cos \circ f) &= -2 \sin\left(\frac{P(f)}{2}\right)\sin\left(\frac{W(f)+B(f)}{2}\right) \\ P(\tan \circ f) &= \frac{\sin(P(f))}{\cos(B(f))\cos(W(f))} \\ P(\sinh \circ f) &= 2 \sinh\left(\frac{P(f)}{2}\right)\cosh\left(\frac{W(f)+B(f)}{2}\right) \\ P(\cosh \circ f) &= 2 \sinh\left(\frac{P(f)}{2}\right)\sinh\left(\frac{W(f)+B(f)}{2}\right) \\ P(\tanh \circ f) &= \frac{\sinh(P(f))}{\cosh(B(f))\cosh(W(f))} \\ \end{aligned}\]

I hope to find time to add these to et soon.

**EDIT** there is a simpler and more general way to derive \(P(\sin)\)
and so on, using \(\sin(a) \pm \sin(b)\) formulae...

Algorave's 6th birthday party is coming up next week!

Algosix

104 live streams of algorithmic dance music + friends over ~~52~~ 72 hours: a mixture of solo streams from across the world and live events in Buenos Aires, McMaster, Melbourne, NYC, Tokyo, Troy, São Paulo, Sheffield, London, and Medellín.

Starts 15 March at 19:30 GMT and ends 18 March at 19:30 GMT.

My time-slot is towards the end of the whole thing, and I'll be live-coding minimal/noisy/tech in C using my Clive system. You can see some preparations here.

**EDIT:** I uploaded an
audio + diff-cast (45MB) and a
video (360MB).

In my work-in-progress et project for escape-time fractals, I currently represent the viewing transform (from pixel grid to complex plane coordinates) by a translation (high precision coordinates of the center of the view), a scaling (high range scale factor), and a 2×2 matrix that accounts for non-uniform scaling and rotation (4 low precision numbers, defaulting to the identity [1,0;0,1]). This matrix should have determinant 1, as any global scaling belongs in the scale factor.

However, editing raw matrix values is not very user friendly, so I plan to
add a friendly user interface based on marking points in the GUI before moving
them around with the mouse (eventually: multi-touch support for tablets etc).
An intermediate step might be to represent the matrix in a more human-relevant
way, decomposing it into rotations and non-uniform scaling (a shear is a
non-uniform scaling conjugated by a rotation, no need to handle it separately).
It so happens that this is a well-known linear algebra problem, called
**polar decomposition**. The matrix M is decomposed into a
rotation V and a stretch P, such that M = V P, and further the stretch P can be
decomposed into a rotation U and a diagonal matrix D, such that
P = U D U^{-1} (though this last decomposition is not unique).

A good description of the problem and examples in higher dimensions is:

Matrix Animation and Polar Decomposition

Ken Shoemake and Tom Duff

**Abstract:** General 3×3 linear or 4×4 homogeneous matrices can be formed by composing primitive matrices for translation, rotation, scale, shear, and perspective. Current 3-D computer graphics systems manipulate and interpolate parametric forms of these primitives to generate scenes and motion. For this and other reasons, decomposing a composite matrix in a meaningful way has been a long-standing challenge. This paper presents a theory and method for doing so, proposing that the central issue is rotation extraction, and that the best way to do that is Polar Decomposition. This method also is useful for renormalizing a rotation matrix containing excessive error.

For the 2D case there is a simple explicit formula given in:

Explicit polar decomposition and a near-characteristic polynomial: The 2×2 case

Frank Uhlig

**Abstract:** Explicit algebraic formulas for the polar decomposition of a nonsingular real 2×2 matrix A are given, as well as a classification of all integer 2×2 matrices that admit a rational polar decomposition. These formulas lead to a functional identity which is satisfied by all nonsingular real 2×2 matrices A as well as by exactly one type of exceptional matrix A_{n}, for each n > 2.

Translated into Octave code (presumably Matlab-compatible), assuming that M is real with det(M) > 0:

```matlab
M = [1,-3;2,2]
scale = sqrt(det(M))
A = M / scale;
B = A + inv(A');
b = sqrt(abs(det(B)));
V = B / b;
P = (A' * A + eye(2)) / b;
[U,D] = eig(P);
stretch = D(1);
stretchAngle = atan2(U(2,1), U(1,1));
if (stretch < 1)
  stretch = 1 / stretch
  stretchAngle = stretchAngle + pi / 2;
endif
stretchA = mod(stretchAngle, pi)
rotation = atan2(V(2,1), V(1,1))
R = [ cos(rotation), -sin(rotation); sin(rotation), cos(rotation) ];
S = [ stretch, 0; 0, 1/stretch ];
T = [ cos(stretchA), -sin(stretchA); sin(stretchA), cos(stretchA) ];
N = scale * R * T * S * T';
error = norm(M - N)
```

Example output:

```
M =

   1  -3
   2   2

scale = 2.8284
stretch = 1.2808
stretchA = 1.4483
rotation = 1.0304
error = 1.0721e-15
```

Note that eigenvalues and eigenvectors can be found explicitly in 2D, see for example Eigenvalues and eigenvectors of 2x2 matrices.

Final things to note: currently et uses only the view scale factor to choose which number type to use for calculations. This might lead to pixelation artifacts from insufficient precision in highly stretched images near number type thresholds, so the stretch factor should be taken into account too when determining the minimal pixel spacing. Handling reflection (det < 0) is left for future investigation.

Here is an algorithm for generating circle packings:

- start with an empty image
- while the image has gaps bigger than a pixel:
  - pick a random unoccupied point
  - draw the largest circle centered on that point that doesn't overlap the previous circles or the image boundary

This has probably been rediscovered many times, but I don't have a reference. To pack shapes other than circles, find the largest circle and put the shape inside it, oriented towards the tangent point with the image, or at random, as you wish.

An alternative algorithm that picks the sizes up front and fits them into the image has nicer fractal properties, but also has halting problems. See:

An Algorithm for Random Fractal Filling of Space

John Shier and Paul Bourke

Computational experiments with a simple algorithm show that it is possible to fill any spatial region with a random fractalization of any shape, with a continuous range of pre-specified fractal dimensions D. The algorithm is presented here in 1, 2, or 3 physical dimensions. The size power-law exponent c or the fractal dimension D can be specified ab initio over a substantial range. The method creates an infinite set of shapes whose areas (lengths, volumes) obey a power law and sum to the area (length, volume) to be filled. The algorithm begins by randomly placing the largest shape and continues using random search to place each smaller shape where it does not overlap or touch any previously placed shape. The resulting gasket is a single connected object.

I implemented both (using the GNU Scientific Library function for the Hurwitz zeta function, as my naive summation was woefully inaccurate), but the halting problems of the paper's algorithm are very annoying, even though it produces nicer images. The problem with both is how to make them fast - the bottleneck I found is the "pick a random unoccupied point" step.

Image pyramids, aka mipmaps, consist of a collection of downscaled-by-2 images reduced from a base layer. By performing high quality low pass filtering at each reduction level, the resulting collection can be used for realtime texturing of 3D objects of varying sizes without aliasing.

Histogram pyramids are similar, though instead of containing image data, each cell contains a count of the number of active cells in the corresponding base layer region. Thus the smallest 1x1 histogram layer contains the total number of active cells in the base layer. It's similar to a quad tree, but without all the cache-busting pointer chasing.

There are two operations of interest: the first is "deactivate a cell in the base layer", which can be done by decrementing all the cells in the path through the layers from the base layer to the smallest 1x1 layer. The complexity of this operation is O(layers) = O(log(max{width, height})). For a flat image without a histogram pyramid it would be O(1).

The second operation is "pick an active cell uniformly at random", which is where the histogram pyramid comes into its own. The technique picks a random subcell of each cell, starting from the 1x1 layer and proceeding towards the base layer. The random numbers are weighted according to the totals stored in the histogram pyramid, which ensures uniformity. Assuming the pseudo-random number generator is O(1) (which is very likely), this algorithm is again O(layers). Without a histogram pyramid, the best alternatives I came up with were O(number of active cells), which is O(width * height) at the start of the packing algorithm (so the histogram pyramid is a huge improvement: 100 minutes vs 10 seconds in one small test), or O(number of previously drawn circles), which is O(width * height) at the end of the packing algorithm - again very poor.

Some code:

```c
/* gcc -std=c99 -Wall -Wextra -pedantic -O3 -o x x.c -lm
   ./x W H > x.pgm */
#include <assert.h>
#include <math.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

struct histogram {
  int levels;
  int **counts;
};

struct histogram *h_new(int width, int height) {
  int levels = 0;
  int size = 1 << levels;
  while (size < width || size < height) {
    levels += 1;
    size <<= 1;
  }
  struct histogram *h = malloc(sizeof(struct histogram));
  h->levels = levels;
  h->counts = malloc(sizeof(int *) * (levels + 1));
  for (int l = 0; l <= levels; ++l) {
    int d = 1 << l;
    int n = d * d;
    h->counts[l] = malloc(sizeof(int) * n);
  }
  for (int y = 0; y < size; ++y)
    for (int x = 0; x < size; ++x)
      h->counts[levels][(y << levels) + x] = y < height && x < width;
  for (int l = levels - 1; l >= 0; --l)
    for (int y = 0; y < 1 << l; ++y)
      for (int x = 0; x < 1 << l; ++x)
        h->counts[l][(y << l) + x]
          = h->counts[l+1][(((y<<1) + 0) << (l + 1)) + (x << 1) + 0]
          + h->counts[l+1][(((y<<1) + 0) << (l + 1)) + (x << 1) + 1]
          + h->counts[l+1][(((y<<1) + 1) << (l + 1)) + (x << 1) + 0]
          + h->counts[l+1][(((y<<1) + 1) << (l + 1)) + (x << 1) + 1];
  assert(h->counts[0][0] == width * height);
  return h;
}

void h_free(struct histogram *h) {
  for (int l = 0; l <= h->levels; ++l)
    free(h->counts[l]);
  free(h->counts);
  free(h);
}

void h_decrement(struct histogram *h, int x, int y) {
  for (int l = h->levels; l >= 0; --l) {
    int k = (y << l) + x;
    h->counts[l][k] -= 1;
    x >>= 1;
    y >>= 1;
  }
}

int h_empty(struct histogram *h) {
  return h->counts[0][0] == 0;
}

int h_choose(struct histogram *h, int *x, int *y) {
  if (h_empty(h))
    return 0;
  *x = 0;
  *y = 0;
  for (int l = 1; l <= h->levels; ++l) {
    *x <<= 1;
    *y <<= 1;
    int xs[4] = { *x, *x + 1, *x, *x + 1 };
    int ys[4] = { *y, *y, *y + 1, *y + 1 };
    int ks[4] =
      { (ys[0] << l) + xs[0]
      , (ys[1] << l) + xs[1]
      , (ys[2] << l) + xs[2]
      , (ys[3] << l) + xs[3]
      };
    int ss[4] =
      { h->counts[l][ks[0]]
      , h->counts[l][ks[1]]
      , h->counts[l][ks[2]]
      , h->counts[l][ks[3]]
      };
    int ts[4] =
      { ss[0]
      , ss[0] + ss[1]
      , ss[0] + ss[1] + ss[2]
      , ss[0] + ss[1] + ss[2] + ss[3]
      };
    int p = rand() % ts[3];
    int i;
    for (i = 0; i < 4; ++i)
      if (p < ts[i])
        break;
    *x = xs[i];
    *y = ys[i];
  }
  return 1;
}

struct delta {
  int dx;
  int dy;
};

int cmp_delta(const void *a, const void *b) {
  const struct delta *p = a;
  const struct delta *q = b;
  double x = p->dx * p->dx + p->dy * p->dy;
  double y = q->dx * q->dx + q->dy * q->dy;
  if (x < y) return -1;
  if (x > y) return 1;
  return 0;
}

struct delta *deltas;
int *image;
unsigned char *pgm;

struct histogram *initialize(int width, int height) {
  deltas = malloc(sizeof(struct delta) * width * height);
  image = malloc(sizeof(int) * width * height);
  pgm = malloc(sizeof(unsigned char) * width * height);
  int d = 0;
  for (int dx = 0; dx < width; ++dx)
    for (int dy = 0; dy < height; ++dy) {
      deltas[d].dx = dx;
      deltas[d].dy = dy;
      ++d;
    }
  qsort(deltas, width * height, sizeof(struct delta), cmp_delta);
  for (int k = 0; k < width * height; ++k)
    image[k] = 0;
  return h_new(width, height);
}

void draw_circle(struct histogram *h, int width, int height,
                 double cx, double cy, double r, int c) {
  double r2 = r * r;
  int x0 = floor(cx - r);
  int x1 = ceil (cx + r);
  int y0 = floor(cy - r);
  int y1 = ceil (cy + r);
  for (int y = y0; y <= y1; ++y)
    if (0 <= y && y < height)
      for (int x = x0; x <= x1; ++x)
        if (0 <= x && x < width) {
          double dx = x - cx;
          double dy = y - cy;
          double d2 = dx * dx + dy * dy;
          if (d2 <= r2) {
            h_decrement(h, x, y);
            image[y * width + x] = c;
          }
        }
}

double find_radius(int width, int height, int cx, int cy) {
  for (int d = 0; d < width * height; ++d) {
    int dx = deltas[d].dx;
    int dy = deltas[d].dy;
    int r2 = dx * dx + dy * dy;
    int xs[2] = { cx - dx, cx + dx };
    int ys[2] = { cy - dy, cy + dy };
    for (int j = 0; j < 2; ++j) {
      int y = ys[j];
      if (y < 0) return sqrt(r2);
      if (y >= height) return sqrt(r2);
      for (int i = 0; i < 2; ++i) {
        int x = xs[i];
        if (x < 0) return sqrt(r2);
        if (x >= width) return sqrt(r2);
        if (image[y * width + x]) return sqrt(r2);
      }
    }
  }
  return 0;
}

void packing(struct histogram *h, int width, int height) {
  for (int c = 1; h->counts[0][0] > 0; ++c) {
    fprintf(stderr, "%16d / %d\r", h->counts[0][0], width * height);
    int cx = 0;
    int cy = 0;
    h_choose(h, &cx, &cy);
    double r = find_radius(width, height, cx, cy);
    draw_circle(h, width, height, cx, cy, r, c);
  }
}

void edges(int width, int height) {
  for (int y = 0; y < height; ++y) {
    for (int x = 0; x < width; ++x) {
      if (x == 0) pgm[y * width + x] = 0;
      else if (x == width-1) pgm[y * width + x] = 0;
      else if (y == 0) pgm[y * width + x] = 0;
      else if (y == height-1) pgm[y * width + x] = 0;
      else {
        int e = (image[y * width + x] != image[(y+1) * width + x+1])
             || (image[(y+1) * width + x] != image[y * width + x+1]);
        pgm[y * width + x] = e ? 0 : 255;
      }
    }
  }
}

void write_pgm(unsigned char *p, int width, int height) {
  fprintf(stdout, "P5\n%d %d\n255\n", width, height);
  fwrite(p, width * height, 1, stdout);
  fflush(stdout);
}

int main(int argc, char **argv) {
  if (argc < 3) return 0;
  srand(time(0));
  int width = atoi(argv[1]);
  int height = atoi(argv[2]);
  struct histogram *h = initialize(width, height);
  packing(h, width, height);
  edges(width, height);
  write_pgm(pgm, width, height);
  return 0;
}
```

Nebulullaby An Interstellar Cloud Of Dust, released by Nebularosa in 2016, is now on SoundCloud. The record is (I think) still available to buy if you prefer to own a copy and support the cause. My track is only on the digital version NEB01D.

Code:

```c
/* gcc -std=c99 -Wall -pedantic -Wextra graph-paper.c -o graph-paper -lGLEW -lGL -lGLU -lglut
   ./graph-paper
   for i in *.pgm
   do
     pnmtopng -force -interlace -compression 9 -phys 11811 11811 1 <$i >${i%pgm}png
   done */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <GL/glew.h>
#include <GL/glut.h>

#define GLSL(s) #s

static const int width = 2480;
static const int height = 3508;
static const float size = 4.0;

static int gcd(int x, int y) {
  if (y == 0) { return x; } else { return gcd(y, x % y); }
}

static const char *src = GLSL(
  uniform vec3 twist;
  vec2 clog(vec2 x) {
    return vec2(log(length(x)), atan(x.y, x.x)) * 0.15915494309189535;
  }
  void main() {
    vec2 p = gl_TexCoord[0].xy;
    p *= mat2(0.0, -1.0, 1.0, 0.0);
    vec2 q = clog(p);
    float a = atan(twist.y, twist.x);
    float h = length(vec2(twist.x, twist.y));
    q *= mat2(cos(a), sin(a), -sin(a), cos(a)) * h;
    float d = length(vec4(dFdx(q), dFdy(q)));
    float l = ceil(-log2(d));
    float f = pow(2.0, l + log2(d)) - 1.0;
    l -= 6.0;
    float o[2];
    for (int i = 0; i < 2; ++i) {
      l += 1.0;
      vec2 u = q * pow(2.0, l);
      u *= twist.z;
      u -= floor(u);
      float r = min
        ( min(length(u), length(u - vec2(1.0, 0.0)))
        , min(length(u - vec2(0.0, 1.0)), length(u - vec2(1.0, 1.0)))
        );
      float c = clamp(1.5 * r / (pow(2.0, l) * d), 0.0, 1.0);
      vec2 v = q * pow(2.0, l - 2.0);
      v *= twist.z;
      v -= floor(v);
      float s = min(min(v.x, v.y), min(1.0 - v.x, 1.0 - v.y));
      float k = clamp(0.75 + 0.25 * s / (pow(2.0, l - 2.0) * d), 0.0, 1.0);
      o[i] = c * k;
    }
    gl_FragColor = vec4(vec3(mix(o[1], o[0], f)), 1.0);
  }
);

int main(int argc, char **argv) {
  glutInit(&argc, argv);
  glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE);
  glutCreateWindow("graphpaper");
  glewInit();
  GLint success;
  int prog = glCreateProgram();
  int frag = glCreateShader(GL_FRAGMENT_SHADER);
  glShaderSource(frag, 1, (const GLchar **) &src, 0);
  glCompileShader(frag);
  glAttachShader(prog, frag);
  glLinkProgram(prog);
  glGetProgramiv(prog, GL_LINK_STATUS, &success);
  if (!success) exit(1);
  glUseProgram(prog);
  GLuint utwist = glGetUniformLocation(prog, "twist");
  GLuint tex;
  glGenTextures(1, &tex);
  glBindTexture(GL_TEXTURE_2D, tex);
  glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, 4096, 4096, 0, GL_RED, GL_UNSIGNED_BYTE, 0);
  glBindTexture(GL_TEXTURE_2D, 0);
  GLuint fbo;
  glGenFramebuffers(1, &fbo);
  glBindFramebuffer(GL_FRAMEBUFFER, fbo);
  glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0);
  glViewport(0, 0, width, height);
  glLoadIdentity();
  gluOrtho2D(0, 1, 1, 0);
  unsigned char *buffer = malloc(width * height);
  for (int x = 1; x <= 5; ++x) {
    for (int y = 0; y <= x; ++y) {
      if ((x > 1 && y == 0) || (y > 0 && gcd(x, y) != 1)) { continue; }
      for (int n = 5; n <= 8; ++n) {
        glUniform3f(utwist, x, y, n / 8.0);
        glBegin(GL_QUADS);
        {
          float u = size / 2.0;
          float v = size * (height - width / 2.0) / width;
          glTexCoord2f( u, -v); glVertex2f(1, 0);
          glTexCoord2f( u,  u); glVertex2f(1, 1);
          glTexCoord2f(-u,  u); glVertex2f(0, 1);
          glTexCoord2f(-u, -v); glVertex2f(0, 0);
        }
        glEnd();
        glReadPixels(0, 0, width, height, GL_RED, GL_UNSIGNED_BYTE, buffer);
        char fname[200];
        snprintf(fname, 100, "graphpaper-%d-%d-%d.pgm", x, y, n);
        FILE *f = fopen(fname, "wb");
        fputs("P5\n2480 3508\n255\n", f);
        fflush(f);
        fwrite(buffer, width * height, 1, f);
        fflush(f);
        fclose(f);
      }
    }
  }
  free(buffer);
  glDeleteFramebuffers(1, &fbo);
  glDeleteTextures(1, &tex);
  glDeleteShader(frag);
  glDeleteProgram(prog);
  glutReportErrors();
  return 0;
}
```

Output:

This is an excavation from my archives, around 2013 or so. Print out your favourites and enjoy doodling!

Atom domains in the Mandelbrot set surround mini-Mandelbrot islands. So too in the Burning Ship fractal. These pictures are coloured using the period for hue, and distance estimation for value. Saturation is a simple switch on escaped vs unescaped pixels. Rendered with some Fragmentarium code.

The algorithm is simple: store the iteration count whenever |Z| reaches a new minimum; the last iteration count so stored is the atom domain. If you initialize Z with 0, start checking after the first iteration. IEEE floating point has infinities, so you can initialize the stored minimum |Z| value to 1.0/0.0.
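A minimal sketch of that bookkeeping, here for the familiar Mandelbrot iteration \(z \to z^2 + c\) (the Fragmentarium code does the same for the Burning Ship):

```c
#include <complex.h>

/* Atom domain of c under z -> z^2 + c: the last iteration count at which
   |z| reached a new minimum. The stored minimum starts at infinity
   (1.0/0.0), and checking starts after the first iteration since z = 0. */
int atom_domain(double complex c, int maxiter) {
    double complex z = 0;
    double minimum = 1.0 / 0.0;
    int domain = 0;
    for (int n = 1; n <= maxiter; ++n) {
        z = z * z + c;
        double z2 = creal(z) * creal(z) + cimag(z) * cimag(z);
        if (z2 < minimum) { minimum = z2; domain = n; }
        if (z2 > 4 * 4) break; /* escaped */
    }
    return domain;
}
```

As a check, the nucleus c = 0 has atom domain 1 and the period-2 nucleus c = -1 has atom domain 2.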

I was hoping to use atom domains for interior checking, by using Newton's method to find limit cycles and seeing if their maximal Lyapunov exponent is less than 1, but it didn't work. My guesses are that Newton's method doesn't converge to the limit cycle, but instead to some phantom attractor, or that the maximal Lyapunov exponent isn't an indicator of interiority as I had hoped (I tried with plain determinant too, no joy there either). The method marked some exterior points as interior.

One thing that is interesting to me is the grey region of unescaped pixels with chaotic atom domains (the region is that colour because the anti-aliasing blends subpixels scattered across the whole spectrum into a uniform grey). I'm not sure whether it is an artifact of rendering at a limited iteration count and should be exterior, or if it really is interior and chaotic.

The Burning Ship fractal is defined by iterations of:

\[ \begin{aligned} X &\leftarrow X^2 - Y^2 + A \\ Y &\leftarrow 2|XY| + B \end{aligned} \]

The Burning Ship set is those points \(A + i B \in \mathbb{C}\) whose iteration starting from \(X + i Y = 0\) remains bounded. In practice one iterates a maximum number of times, or until the point diverges (exercise suggested on Reddit: prove a lower bound on an escape radius that is sufficient for the Burning Ship, the Mandelbrot set has the bound \(R = 2\)). Note that traditionally the Burning Ship is rendered with the imaginary \(B\) axis increasing downwards, which makes the "ship" the right way up.
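A direct double-precision implementation of the escape test looks like this; the escape radius of 4 here is a conservative guess, not the proved bound the exercise asks for:

```c
#include <math.h>

/* Iterate the Burning Ship at parameter (A, B); return the escape time,
   or maxiter if the orbit stayed bounded. Escape radius 4 is an
   assumption, chosen generously. */
int burning_ship(double A, double B, int maxiter) {
    double X = 0, Y = 0;
    for (int n = 0; n < maxiter; ++n) {
        if (X * X + Y * Y > 4 * 4) return n;
        double X1 = X * X - Y * Y + A;
        double Y1 = 2 * fabs(X * Y) + B;
        X = X1; Y = Y1;
    }
    return maxiter;
}
```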

Traditional (continuous) iteration count (escape time) rendering tends to lead to a grainy appearance for this fractal, so I prefer distance estimation. To compute a distance estimate one can use partial derivatives (aka Jacobian matrix):

\[ \begin{aligned} \frac{\partial X}{\partial A} &\leftarrow 2 \left(X \frac{\partial X}{\partial A} - Y \frac{\partial Y}{\partial A}\right) + 1 \\ \frac{\partial X}{\partial B} &\leftarrow 2 \left(X \frac{\partial X}{\partial B} - Y \frac{\partial Y}{\partial B}\right) \\ \frac{\partial Y}{\partial A} &\leftarrow 2 \operatorname{sgn}(X) \operatorname{sgn}(Y) \left( X \frac{\partial Y}{\partial A} + \frac{\partial X}{\partial A} Y \right) \\ \frac{\partial Y}{\partial B} &\leftarrow 2 \operatorname{sgn}(X) \operatorname{sgn}(Y) \left( X \frac{\partial Y}{\partial B} + \frac{\partial X}{\partial B} Y \right) + 1 \end{aligned} \]

Then the distance estimate for an escaped point is (thanks to gerrit on fractalforums.org):

\[ d = \frac{||\begin{pmatrix}X & Y\end{pmatrix}||^2 \log ||\begin{pmatrix}X & Y\end{pmatrix}||}{\left|\left|\begin{pmatrix}X & Y\end{pmatrix} \cdot \begin{pmatrix} \frac{\partial X}{\partial A} & \frac{\partial X}{\partial B} \\ \frac{\partial Y}{\partial A} & \frac{\partial Y}{\partial B} \end{pmatrix} \right|\right|} \]

Then scale \(d\) by the pixel spacing, colouring points with small distance dark, and large distance light. I colour interior points dark too.

Perturbation techniques can be used for efficient deep zooms. Compute a high precision orbit of \(A,B,X,Y\), and have low precision deltas \(a,b,x,y\) for each pixel. It works out as:

\[ \begin{aligned} x &\leftarrow (2 X + x) x - (2 Y + y) y + a \\ y &\leftarrow 2 \operatorname{diffabs}(XY, Xy + xY + xy) + b \end{aligned} \]

where \(\operatorname{diffabs}(c, d) = |c + d| - |c|\) but expanded into case analysis to avoid catastrophic cancellation with limited precision floating point (this is I believe due to laser blaster on fractalforums.com):

\[ \operatorname{diffabs}(c, d) = \begin{cases} d & c \ge 0, c + d \ge 0 \\ -2c - d & c \ge 0, c + d < 0 \\ 2c + d & c < 0, c + d > 0 \\ -d & c < 0, c + d \le 0 \end{cases} \]
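In C, diffabs and the perturbed step read as follows; a sketch, easily checked against direct evaluation at magnitudes where double still has enough precision to compare:

```c
#include <math.h>

/* |X + x| - |X| by case analysis, avoiding the cancellation the direct
   formula suffers when x is tiny relative to X. */
double diffabs(double X, double x) {
    if (X >= 0) {
        if (X + x >= 0) return x;
        else            return -(2 * X + x);
    } else {
        if (X + x > 0)  return 2 * X + x;
        else            return -x;
    }
}

/* One perturbed Burning Ship step for the deltas:
   x' = (2X + x) x - (2Y + y) y + a
   y' = 2 diffabs(X Y, X y + x Y + x y) + b */
void perturbed_step(double X, double Y, double *x, double *y,
                    double a, double b) {
    double x1 = (2 * X + *x) * *x - (2 * Y + *y) * *y + a;
    double y1 = 2 * diffabs(X * Y, X * *y + *x * Y + *x * *y) + b;
    *x = x1;
    *y = y1;
}
```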

Due to the non-analytic functions, series approximation cannot be used. As with perturbation rendering of the Mandelbrot set, glitches can occur. It seems that Pauldelbrot's glitch criterion (originally posted on fractalforums.com) is also applicable, with a glitch when:

\[ |(X + x) + i (Y + y)|^2 < 10^{-3} |X + i Y|^2 \]

Glitched pixels can be recalculated with a new reference. It may be beneficial to pick as new references those pixels with the smallest LHS of the glitch criterion. The derivatives for distance estimation don't need to be perturbed as they are not "small", one can use \(X + x\) etc in the derivative recurrences.

When navigating the Burning Ship, it is noticeable that "mini-ships" occur, being distorted self-similar copies of the whole set. When passing by, embedded Julia sets appear, similarly to the Mandelbrot set, with period doubling when approaching mini-ships. To zoom directly to mini-ships, one can use Newton's method in 2 real variables. First one needs the period, which can be found by iterating the corners of a polygon until it surrounds the origin, that iteration number is the period (this method is due to Robert Munafo's mu-ency, originally for the Mandelbrot set, but seems to work for the Burning Ship too: perhaps the non-conformal folding is sufficiently rare to be unproblematic in practice). Newton's method iterations are like this:

\[ \begin{pmatrix} A \\ B \end{pmatrix} \leftarrow \begin{pmatrix} A \\ B \end{pmatrix} - \begin{pmatrix} \frac{\partial X}{\partial A} & \frac{\partial X}{\partial B} \\ \frac{\partial Y}{\partial A} & \frac{\partial Y}{\partial B} \end{pmatrix}^{-1} \begin{pmatrix} X \\ Y \end{pmatrix} \]

The final part is the mini-ship size estimate, to know how deep to zoom. The Mandelbrot size estimate seems to work with minor modifications to use Jacobian matrices instead of complex numbers.

These concrete equations are specific to the quadratic Burning Ship, but the methods in principle apply to many escape time fractals.

Recently I've been revisiting the code from my Monotone, extending it to use OpenGL cube maps to store the feedback texture instead of nonlinear warping in a regular texture. This means I can use Möbius transformations instead of simple similarities and still avoid excessive blurriness and edge artifacts. I've been toying with colour too: unlike the chaos game algorithm for fractal flames (which can colour according to a "hidden" parameter, leading to interesting and dynamic colour structures), the texture feedback mechanism I'm using can only cope with "structural" RGB colours (with an alpha channel for overall brightness). A 4x4 colour matrix seems to be more interesting than the off-white multipliers I started with.

Some videos:

- Moebius Bubble Chamber (stereographic projection, black and white)
- Moebius Blueprints (360, slight colour, low resolution)
- Moebius Blueprints 2 (360, more colour, high resolution)
- Moenotone Demo (360, colour, high resolution)
- Moenotone Demo 2 (stereographic projection, colour)