Atom domains in the Mandelbrot set surround mini-Mandelbrot islands. So too in the Burning Ship fractal. These pictures are coloured using the period for hue, and distance estimation for value. Saturation is a simple switch on escaped vs unescaped pixels. Rendered with some Fragmentarium code.

The algorithm is simple: store the iteration count whenever |Z| reaches a new minimum; the last iteration count so stored is the atom domain. If you initialize Z with 0, start checking only after the first iteration. IEEE floating point has infinities, so you can initialize the stored minimum |Z| to 1.0/0.0.

I was hoping to use atom domains for interior checking, by using Newton's method to find limit cycles and seeing if their maximal Lyapunov exponent is less than 1, but it didn't work. My guesses are that Newton's method doesn't converge to the limit cycle, but instead to some phantom attractor, or that the maximal Lyapunov exponent isn't an indicator of interiority as I had hoped (I tried with plain determinant too, no joy there either). The method marked some exterior points as interior.

One thing that is interesting to me is the grey region of unescaped pixels with chaotic atom domains (the region is that colour because the anti-aliasing blends subpixels scattered across the whole spectrum into a uniform grey). I'm not sure whether it is an artifact of rendering at a limited iteration count and should be exterior, or if it really is interior and chaotic.

The Burning Ship fractal is defined by iterations of:

\[ \begin{aligned} X &\leftarrow X^2 - Y^2 + A \\ Y &\leftarrow 2|XY| + B \end{aligned} \]

The Burning Ship set is those points \(A + i B \in \mathbb{C}\) whose iteration starting from \(X + i Y = 0\) remains bounded. In practice one iterates up to a maximum number of times, or until the point diverges (an exercise suggested on Reddit: prove a lower bound on an escape radius sufficient for the Burning Ship; for the Mandelbrot set the bound is \(R = 2\)). Note that traditionally the Burning Ship is rendered with the imaginary \(B\) axis increasing downwards, which makes the "ship" the right way up.
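A minimal escape-time loop for these equations might look like this (names are mine; escape radius 2 is assumed sufficient here, per the exercise above):

```c
#include <math.h>

/* Iterate the Burning Ship at parameter (a, b); return the iteration at
   which the orbit escapes the radius-2 circle, or max_iters if bounded. */
int burning_ship(double a, double b, int max_iters)
{
  double x = 0, y = 0;
  for (int i = 0; i < max_iters; ++i)
  {
    if (x * x + y * y > 4.0)
      return i;                       /* |Z| > 2: escaped */
    double xnew = x * x - y * y + a;  /* X <- X^2 - Y^2 + A */
    double ynew = 2 * fabs(x * y) + b;/* Y <- 2|XY| + B */
    x = xnew;
    y = ynew;
  }
  return max_iters;
}
```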

Traditional (continuous) iteration count (escape time) rendering tends to lead to a grainy appearance for this fractal, so I prefer distance estimation. To compute a distance estimate one can use partial derivatives (aka Jacobian matrix):

\[ \begin{aligned} \frac{\partial X}{\partial A} &\leftarrow 2 \left(X \frac{\partial X}{\partial A} - Y \frac{\partial Y}{\partial A}\right) + 1 \\ \frac{\partial X}{\partial B} &\leftarrow 2 \left(X \frac{\partial X}{\partial B} - Y \frac{\partial Y}{\partial B}\right) \\ \frac{\partial Y}{\partial A} &\leftarrow 2 \operatorname{sgn}(X) \operatorname{sgn}(Y) \left( X \frac{\partial Y}{\partial A} + \frac{\partial X}{\partial A} Y \right) \\ \frac{\partial Y}{\partial B} &\leftarrow 2 \operatorname{sgn}(X) \operatorname{sgn}(Y) \left( X \frac{\partial Y}{\partial B} + \frac{\partial X}{\partial B} Y \right) + 1 \end{aligned} \]

Then the distance estimate for an escaped point is (thanks to gerrit on fractalforums.org):

\[ d = \frac{||\begin{pmatrix}X & Y\end{pmatrix}||^2 \log ||\begin{pmatrix}X & Y\end{pmatrix}||}{\left|\left|\begin{pmatrix}X & Y\end{pmatrix} \cdot \begin{pmatrix} \frac{\partial X}{\partial A} & \frac{\partial X}{\partial B} \\ \frac{\partial Y}{\partial A} & \frac{\partial Y}{\partial B} \end{pmatrix} \right|\right|} \]

Then scale \(d\) by the pixel spacing, colouring points with small distance dark, and large distance light. I colour interior points dark too.

Perturbation techniques can be used for efficient deep zooms. Compute a high precision orbit of \(A,B,X,Y\), and have low precision deltas \(a,b,x,y\) for each pixel. It works out as:

\[ \begin{aligned} x &\leftarrow (2 X + x) x - (2 Y + y) y + a \\ y &\leftarrow 2 \operatorname{diffabs}(XY, Xy + xY + xy) + b \end{aligned} \]

where \(\operatorname{diffabs}(c, d) = |c + d| - |c|\) but expanded into case analysis to avoid catastrophic cancellation with limited precision floating point (this is I believe due to laser blaster on fractalforums.com):

\[ \operatorname{diffabs}(c, d) = \begin{cases} d & c \ge 0, c + d \ge 0 \\ -2c - d & c \ge 0, c + d < 0 \\ 2c + d & c < 0, c + d > 0 \\ -d & c < 0, c + d \le 0 \end{cases} \]
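The case analysis translates directly to C (function name as in the text):

```c
/* diffabs(c, d) = |c + d| - |c|, computed by case analysis so that no
   catastrophic cancellation occurs in limited precision floating point. */
double diffabs(double c, double d)
{
  if (c >= 0)
    return c + d >= 0 ? d : -2 * c - d;
  else
    return c + d > 0 ? 2 * c + d : -d;
}
```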

Due to the non-analytic functions, series approximation cannot be used. As with perturbation rendering of the Mandelbrot set, glitches can occur. It seems that Pauldelbrot's glitch criterion (originally posted on fractalforums.com) is also applicable, with a glitch when:

\[ |(X + x) + i (Y + y)|^2 < 10^{-3} |X + i Y|^2 \]

Glitched pixels can be recalculated with a new reference. It may be beneficial to pick as new references those pixels with the smallest LHS of the glitch criterion. The derivatives for distance estimation don't need to be perturbed, as they are not "small"; one can use \(X + x\) etc. in the derivative recurrences.
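As a sketch, the criterion can be checked per iteration like this (function name and signature are mine; the threshold \(10^{-3}\) is as in the text):

```c
/* Pauldelbrot's glitch criterion: flag a perturbed pixel orbit
   (X + x, Y + y) when its magnitude falls far below that of the
   reference orbit (X, Y). */
int is_glitched(double X, double Y, double x, double y)
{
  double pixel = (X + x) * (X + x) + (Y + y) * (Y + y);
  double reference = X * X + Y * Y;
  return pixel < 1e-3 * reference;
}
```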

When navigating the Burning Ship, it is noticeable that "mini-ships" occur: distorted self-similar copies of the whole set. When passing by, embedded Julia sets appear, as with the Mandelbrot set, with period doubling when approaching mini-ships. To zoom directly to mini-ships, one can use Newton's method in 2 real variables. First one needs the period, which can be found by iterating the corners of a polygon until it surrounds the origin; the iteration number at which this happens is the period (this method is due to Robert Munafo's mu-ency, originally for the Mandelbrot set, but it seems to work for the Burning Ship too: perhaps the non-conformal folding is sufficiently rare to be unproblematic in practice). Newton's method iterations look like this:

\[ \begin{pmatrix} A \\ B \end{pmatrix} \leftarrow \begin{pmatrix} A \\ B \end{pmatrix} - \begin{pmatrix} \frac{\partial X}{\partial A} & \frac{\partial X}{\partial B} \\ \frac{\partial Y}{\partial A} & \frac{\partial Y}{\partial B} \end{pmatrix}^{-1} \begin{pmatrix} X \\ Y \end{pmatrix} \]
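One Newton step could be implemented like this: iterate the Burning Ship and its Jacobian for `period` steps from \(X = Y = 0\), then solve the 2x2 linear system to update \((A, B)\). Function names are illustrative, not from any particular renderer.

```c
#include <math.h>

static double sgn(double x) { return x > 0 ? 1 : x < 0 ? -1 : 0; }

/* One Newton step towards a periodic nucleus of the given period. */
void newton_step(double *A, double *B, int period)
{
  double x = 0, y = 0;
  double dxa = 0, dxb = 0, dya = 0, dyb = 0;
  for (int i = 0; i < period; ++i)
  {
    double s = sgn(x) * sgn(y);
    double dxa1 = 2 * (x * dxa - y * dya) + 1;
    double dxb1 = 2 * (x * dxb - y * dyb);
    double dya1 = 2 * s * (x * dya + dxa * y);
    double dyb1 = 2 * s * (x * dyb + dxb * y) + 1;
    double x1 = x * x - y * y + *A;
    double y1 = 2 * fabs(x * y) + *B;
    dxa = dxa1; dxb = dxb1; dya = dya1; dyb = dyb1;
    x = x1; y = y1;
  }
  /* subtract J^{-1} (X Y)^T using the explicit 2x2 inverse */
  double det = dxa * dyb - dxb * dya;
  *A -= ( dyb * x - dxb * y) / det;
  *B -= (-dya * x + dxa * y) / det;
}
```

On the real axis the Burning Ship reduces to the Mandelbrot iteration, so e.g. starting near \((-1, 0)\) with period 2 converges to the period-2 nucleus at \(-1\).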

The final part is the mini-ship size estimate, to know how deep to zoom. The Mandelbrot size estimate seems to work with minor modifications to use Jacobian matrices instead of complex numbers.

These concrete equations are specific to the quadratic Burning Ship, but the methods in principle apply to many escape time fractals.

Recently I've been revisiting the code of my Monotone, extending it to use OpenGL cube maps to store the feedback texture instead of nonlinear warping in a regular texture. This means I can use Möbius transformations instead of simple similarities while still avoiding excessive blurriness and edge artifacts. I've been toying with colour too: unlike the chaos game algorithm for fractal flames (which can colour according to a "hidden" parameter, leading to interesting and dynamic colour structures), the texture feedback mechanism I'm using can only cope with "structural" RGB colours (plus an alpha channel for overall brightness). A 4x4 colour matrix seems more interesting than the off-white multipliers I started with.

Some videos:

- Moebius Bubble Chamber (stereographic projection, black and white)
- Moebius Blueprints (360, slight colour, low resolution)
- Moebius Blueprints 2 (360, more colour, high resolution)
- Moenotone Demo (360, colour, high resolution)
- Moenotone Demo 2 (stereographic projection, colour)

This blog post is about two separate things, but both involve Möbius transformations so I combined them into one.

Rotations of the Riemann sphere correspond to those elliptic Möbius transformations whose fixed points are antipodal. Suppose we have two vectors \(u, v \in \mathbb{R}^3\) with \(|u| = |v| = 1\) and we want to find the Möbius transformation for the corresponding rotation of the Riemann sphere that takes \(u\) to \(v\) in the shortest way. This rotation has fixed points \(w_\pm = \pm \frac{u \times v}{| u \times v |}\). By stereographic projection these become the fixed points of the Möbius transformation: \(g = \frac{w_x + i w_y}{1 - w_z}\) for each \(\pm\). Further, the elliptic transformation has characteristic constant \(k = e^{i \theta} = \cos \theta + i \sin \theta = u \cdot v + i |u \times v|\) from all of which the transformation is:

\[ M_{u \to v} = \begin{pmatrix} g_+ - k g_- & (k - 1) g_+ g_- \\ 1 - k & k g_+ - g_- \end{pmatrix} \]

If instead of vectors we wanted to rotate between complex numbers, just use stereographic unprojection to get the 3D coordinates and proceed as before: \( \frac{(2x, 2y, x^2+y^2-1)}{x^2+y^2+1} \). (Note: there may be a sign issue with the \(\sin \theta\) calculation; in my use case I didn't need to worry about it, as all the issues cancelled each other out.)

Bézier curves are useful for generating smooth curves when only linear interpolation is available (the De Casteljau construction works by repeated linear interpolation). Linear interpolation for Möbius transformations involves 2x2 complex matrix diagonalisation for raising to fractional powers (between 0 and 1). I wrote about this in more detail in my 2015 blog post on interpolating Möbius transformations.

A C1-continuous (but not C2-continuous at the join points) piecewise cubic Bézier spline can be defined by specifying the points (Möbius transformations) \(P_i\) through which the curve passes, and the tangent \(T_i\) at each point. Then the pieces of the spline are defined by control points \((P_i, P_i T_i, T_{i+1}^{-1} P_{i+1}, P_{i+1})\). The tangents \(T_i\) might be "small" for smoother results; for example, they can be constructed by linearly interpolating between the identity (with large weight) and an arbitrary transform (with small weight). This interpolation scheme gives visually much more sensible results than naive Catmull-Rom spline interpolation of each coefficient component separately, while still maintaining smoothness.

Earlier today I wrote about atom domain coordinates, and thought about extending them to Misiurewicz domains. By simple analogy, define the **Misiurewicz domain coordinate** \(G\) with \(0 \le r \lt q\) and \(1 \le p\):

\[G(c, p, q, r) = \frac{F^{q + p}(0, c) - F^{q}(0, c)}{F^{r + p}(0, c) - F^{r}(0, c)}\]

Calculated similarly to the atom domain size estimate, the **Misiurewicz domain size estimate** is:

\[|h| = \left| \frac{F^{r + p}(0, c) - F^{r}(0, c)}{\frac{\partial}{\partial c}F^{q + p}(0, c) - \frac{\partial}{\partial c}F^{q}(0, c)} \right| \]

Like the atom domain coordinate, Newton's method can be used to find a point with a given Misiurewicz domain coordinate. Implementing this is left as an exercise (expect an implementation in my mandelbrot-numerics repository at some point soon).

Previously I wrote about
atom domain size estimates
in the Mandelbrot set. A logical step given \(G(c) = 0\) at the center and
\(|G(c)| = 1\) on the boundary is to take \(G(c)\) as the
**atom domain coordinate**
for \(c\). It turns out to make sense to make \(1 \le q \lt p\) arguments to
the function, and evaluate them at the central nucleus, because otherwise the
assumption that \(p, q\) are constant throughout the domain can be violated
(particularly with neighbouring domains in embedded Julia sets, where the higher
period one is not "influenced" by the medium period one it overlaps, but instead
by the lower period "parent" of both):

\[G(c, q, p) = \frac{F^p(0, c)}{F^q(0, c)}\]

In my efficient automated Julia morphing experiments recently I used the atom domain coordinates for guessing an initial point for Newton's method to find Misiurewicz points. This worked because each next level of morph had an atom domain coordinate approximately the previous raised to the power \(\frac{3}{2}\). To do this I needed to implement Newton's method iterations to find \(c\) given \(G(c), p, q\). Pseudo-code for that looks like this:

```c
double _Complex m_domain_coord
  (double _Complex c0, double _Complex G, int q, int p, int n)
{
  double _Complex c = c0;
  for (int j = 0; j < n; ++j)
  {
    double _Complex zp = 0;
    double _Complex dcp = 0;
    double _Complex z = 0;
    double _Complex dc = 0;
    for (int i = 1; i <= p; ++i)
    {
      dc = 2 * z * dc + 1;
      z = z * z + c;
      if (i == q)
      {
        zp = z;
        dcp = dc;
      }
    }
    double _Complex f = z / zp - G;
    double _Complex df = (dc * zp - z * dcp) / (zp * zp);
    c = c - f / df;
  }
  return c;
}
```

You can find fuller implementations (including arbitrary precision) in my mandelbrot-numerics repository.

I updated my Inflector Gadget, adding a keyframe animation feature among other goodies. I also made a new page for it, where all the downloads and documentation are to be found. Go check it out!

PS: Inflector Gadget can make images like these in very little time:

Previously I wrote about an automated Julia morphing method extrapolating patterns in the binary representation of external angles, and then tracing external rays. However this was impractical, as it was \(O(p^2)\) for final period \(p\), and the period typically more than doubles at each next level of morphing. This week I devised an \(O(p)\) algorithm, which requires a little bit of setting up and doesn't always work, but when it works it works very well.

The first key insight was that in embedded Julia sets, the primary spirals and tips are distinguishable by the preperiods of the Misiurewicz points at their centers. Moreover when using the "full" Newton's method algorithm for Misiurewicz points that rejects lower preperiods by division, the basins of attraction comfortably enclose the center of the embedded Julia set itself.

So, we can choose the appropriate (pre)period to get to the center of the spiral either inwards towards the main body of the Mandelbrot set or outwards towards its tips. Now, from a Misiurewicz center of a spiral, Newton's method for periodic nucleus finding will work for any of the periods that form the structural spine of the spiral - these go up by a multiple of the period of the influencing island. From these nuclei we can jump to the Misiurewicz spiral on the other side, using Newton's method again. In this way we can algorithmically find any nucleus or Misiurewicz point in the structure of the embedded Julia set.

Some images should make this clearer at this point: blue means Newton's method for nucleus, red means Newton's method for Misiurewicz point, nuclei are labeled with their period, Misiurewicz points with preperiod and period in that order, separated by 'p'.

The second key insight was that the atom domain coordinate of the tip of the treeward branch at each successive level was scaled by a power of 1.5 from the one at the previous level. Because atom domain coordinates correspond to the unit disc, this means they are closer to the nucleus. This allowed an initial guess for finding the Misiurewicz point at the tip more precisely (the first insight only applies to "top-level" embedded Julia sets, not their morphings - there is a "symmetry trap" that breaks Newton's method because the boundary of the basins of attraction passes through the point we want to start from). I implemented a Newton's method iteration to find a point with a given atom domain coordinate. This relationship is only true in the limit, so the input to the automatic morphing algorithm starts at the first morphing, rather than the top-level embedded Julia set.

My first test was quite challenging: to morph a tree with length 7 arms, from an embedded Julia set at angled internal address:

1_{1/2}→2_{1/2}→3_{2/5}→15_{4/7}→88

The C code (full link at the bottom) that sets up the parameters for this morphing looks like this:

```c
#ifdef EXAMPLE_1
const char *embedded_julia_ray = ".011100011100011011100011011100011100011100011011100011011100011100011100011011100011100001110001101110010010010010010010010001110001110001101110001101110001110001110001101110001101110001110001110001101110001110000111000110111(001)";
int ray_preperiod = 225;
int ray_period = 3;
double _Complex ray_endpoint = -1.76525599938987623396492597243303e+00
                             + 1.04485517375987067290733632798876e-02 * I;
int influencing_island_period = 3;
int embedded_julia_set_period = 88;
int denominator_of_rotation = 5;
int arm_length = 7;
double view_size_multiplier = 3600;
#endif
```

The ray lands on the treeward-tip Misiurewicz point of the first morphed Julia set, this end point is cached to avoid long ray tracing computations. The next 4 numbers are involved in the iterative morphing calculations of the relevant periods and preperiods, with the arm length being the primary variable to adjust once the Julia set is found. The view size multiplier sets how to zoom out from the central morphed figure to frame the result nicely, maybe I can find a good heuristic to determine this based on arm length.

The morphing looks like this:

The second example is similar, starting with the island with this angled internal address, with tree morphing arm length 9.

1_{1/2}→2_{1/2}→3_{1/2}→4_{1/2}→8_{1/15}→116_{1/2}→119

The third and final example (for now) is simpler still, starting at the island with this internal address, with tree morphing arm length 1.

1_{1/3}→3_{1/2}→4_{11/23}→89

The code for example 3 contains an ugly hack, because the method for guessing the location of the next Misiurewicz point (for starting Newton's method iterations) isn't good enough - the radius is accurate, but the angle is not - my atom domain coordinate method is clearly not the correct one in general...

Here are the timings in seconds for calculating the coordinates (not parallelized) and rendering the images (I used m-perturbator-offline at 1280x720, the parallel efficiency is somewhat low because it doesn't know the center point is already a good reference and it tries to find one in the view - it would be much faster if I let it take the primary reference as external input - more things TODO):

| morph | coordinates eg1 | coordinates eg2 | coordinates eg3 | rendering eg1 real | rendering eg1 user | rendering eg2 real | rendering eg2 user | rendering eg3 real | rendering eg3 user |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 0 | 0 | 0 | 0.633 | 2.04 | 0.625 | 2.00 | 0.551 | 1.74 |
| 2 | 0 | 0 | 0 | 0.817 | 2.41 | 0.943 | 2.31 | 0.932 | 3.19 |
| 3 | 0 | 0 | 0 | 1.16 | 3.56 | 1.43 | 4.53 | 1.26 | 4.19 |
| 4 | 1 | 0 | 0 | 1.37 | 4.38 | 1.86 | 6.06 | 1.73 | 5.86 |
| 5 | 1 | 2 | 1 | 2.29 | 7.45 | 3.43 | 11.4 | 2.67 | 8.78 |
| 6 | 2 | 4 | 2 | 3.95 | 12.3 | 5.73 | 18.4 | 4.26 | 14.9 |
| 7 | 10 | 14 | 2 | 7.42 | 23.6 | 8.66 | 26.7 | 6.90 | 21.1 |
| 8 | 24 | 37 | 7 | 28.2 | 95.1 | 42.4 | 142 | 12.6 | 36.4 |
| 9 | 92 | 155 | 27 | 63.7 | 257 | 92.6 | 292 | 21.5 | 63.5 |
| 10 | 288 | 442 | 51 | 141 | 419 | 207 | 609 | 77.5 | 263 |
| total | 418 | 654 | 90 | 259 | 774 | 372 | 1120 | 137 | 430 |

The code is part of my mandelbrot-numerics project. You also need my mandelbrot-symbolics project to compile the example program, and you may also want mandelbrot-perturbator to render the output (note: the GTK version is currently hardcoded to 65536 maximum iteration count, which isn't enough for deeper morphed Julia sets - adding runtime configuration for this is my next priority). Other deep zoomers are available, for example my Kalles Fraktaler 2 + GMP fork with Windows binaries available (that also work in WINE on Linux).

The Mandelbrot set contains hyperbolic components (cardioid-like and circle-like regions), each with a super-attracting periodic nucleus at its center. The image above labels some with their periods (click for bigger).

The Mandelbrot set contains smaller copies of itself, with the periods all multiplied by a factor. The image above is the period 3 island in the antenna of the main set. The multiplication is called tuning or renormalization, and occurs because \(P\) iterations of \(z^2 + c\) are locally equivalent to \(1\) iteration of \(Z^2 + C\) where \(Z, C\) are a linear change of variable from \(z, c\).

Above are the two upper period 15 child bulbs at internal angles 1/5 and 2/5, with parent the period 3 cardioid. You can see that the islands in the primary antennae (of which there are 5) attached to the 1/5 bulb go up by 3 in period when you go 1 step around, but the corresponding ones of the 2/5 bulb go up by 3 in period when you go 2 steps around. This is described in the paper:

Geometry of the Antennas in the Mandelbrot Set

R.L. Devaney and M. Moreno-Rocha

April 11, 2000

In the Mandelbrot set, the bulbs attached directly to the main cardioid are called the p/q -bulbs. The reason for this is that the largest component of the interior of these bulbs consists of c-values for which the quadratic function \(Q_c (z) = z^2 + c\) admits an attracting cycle with rotation number p/q. In this paper we give a geometric method to read off p/q from the geometry of the antenna attached to the bulb.

The island copies are decorated with hairy filaments, and around islands in the hairs are embedded Julia sets, structures of filaments that look similar to Julia sets at the corresponding location to the influencing island. Hairs in the "seahorse valley" of an island will have embedded Julia sets that look like those for parameters in the "seahorse valley" of the main part of the Mandelbrot set (near \(-0.75 + 0.00 i\)).

This pattern carries over to the embedded Julia sets surrounding the period 88 islands in the hairs either side of the period 15 bulbs. Above are the 1/5, the periods go up by 3 going 1 step around the spirals with 5 arms. Below are the 2/5, the periods go up by 3 going 2 steps around the spirals. Note also that the spirals turn in opposite ways according to which side of the bulb the embedded Julia set is, but the direction of increase of the periods is the same in both cases.

Finally, note that the rays at each increasing period divide the embedded Julia set in half, and there is one island of period 3 higher in each part. So there is 1 island of period 88, 2 of 91, 4 of 94, 8 of 97, and so on. This binary subdivision property is not unique to these specific Julia sets. I conjecture that it holds for all of them, apart from possibly those in the filaments heading to the cardioid cusp or from the antenna tip of an island.

Future work includes investigating more embedded Julia sets to check that the conjecture holds, possibly proving it via combinatorial arguments to do with external angles and rays, and extending the investigation to doubly embedded Julia sets which appear when passing close to two islands in a deeper zoom.

PS: the diagrams were made with the new GTK GUI for mandelbrot-perturbator, which I ported from my old book project code. I added "plain" rendering to the library too, it's much faster at low zoom levels, especially because of the interior checking (I still need to add interior checking to the perturbation codepath...). You can download the parameters used for the images in this post: patterns-in-embedded-julia-sets.tbz.

On the fractal chats Discord server, it was discussed that the "elliptic" variation in fractal flame renderers suffered from precision problems. So I set about trying to fix them. The test parameters are here: elliptic-precision-problems.flame. It looks like this:

The black holes are the problem. Actually it turns out that the main cause of the hole was the addition of an epsilon to prevent division by zero in the "spherical variation", removing that gives this image, still with small black holes in the spirals:

The original code for the flam3 implementation of the elliptic variation is:

```c
void var62_elliptic (flam3_iter_helper *f, double weight)
{
  /* Elliptic in the Apophysis Plugin Pack */
  double tmp = f->precalc_sumsq + 1.0;
  double x2 = 2.0 * f->tx;
  double xmax = 0.5 * (sqrt(tmp + x2) + sqrt(tmp - x2));
  double a = f->tx / xmax;
  double b = 1.0 - a * a;
  double ssx = xmax - 1.0;
  double w = weight / M_PI_2;

  if (b < 0)
    b = 0;
  else
    b = sqrt(b);

  if (ssx < 0)
    ssx = 0;
  else
    ssx = sqrt(ssx);

  f->p0 += w * atan2(a, b);

  if (f->ty > 0)
    f->p1 += w * log(xmax + ssx);
  else
    f->p1 -= w * log(xmax + ssx);
}
```

When x is near +/-1 and y is near 0, xmax is near 1, so a is near +/- 1, so there is a catastrophic cancellation (loss of significant digits) in the calculation of b = 1 - a*a. But it turns out that b doesn't need to be computed at all, because atan(a / sqrt(1 - a*a)) is the same as asin(a).

There is a second problem with ssx = xmax - 1, as xmax is near 1 there is a catastrophic cancellation here too. So the next step is to see how to calculate ssx without subtracting two values of roughly equal size and thus losing precision. Some algebra:

```
ssx = xmax - 1
    = 0.5 (sqrt(tmp+x2) + sqrt(tmp-x2)) - 1
    = 0.5 (sqrt(tmp+x2) + sqrt(tmp-x2) - 2)
    = 0.5 ((sqrt(tmp+x2) - 1) + (sqrt(tmp-x2) - 1))
    = 0.5 ((sqrt(x*x+y*y+2*x+1) - 1) + (sqrt(x*x+y*y-2*x+1) - 1))
    = 0.5 ((sqrt(u+1) - 1) + (sqrt(v+1) - 1))
```

Now we have subexpressions of the form sqrt(u+1)-1, which will lose precision when u is near 0. One way of doing this is to use a Taylor series for the function expanded about u=0, then converting this to a Padé approximant. I used a Wolfram Alpha Open Code Notebook to do this, here is the highlight:

> PadeApproximant[Normal[Series[Sqrt[x+1]-1, {x, 0, 8}]], {x, 0, 4}]
>
> (x/2 + (3 x^2)/4 + (5 x^3)/16 + x^4/32) / (1 + (7 x)/4 + (15 x^2)/16 + (5 x^3)/32 + x^4/256)

Inspecting a plot of the difference between the approximant and the original function shows that it's accurate to about 1e-16 in the range -0.0625..+0.0625, which gives the following code implementation:

```c
double sqrt1pm1(double x)
{
  if (-0.0625 < x && x < 0.0625)
  {
    /* evaluate the Padé approximant numerator and denominator by Horner's rule */
    double num = 0;
    double den = 0;
    num += 1.0 / 32.0;  den += 1.0 / 256.0;
    num *= x;           den *= x;
    num += 5.0 / 16.0;  den += 5.0 / 32.0;
    num *= x;           den *= x;
    num += 3.0 / 4.0;   den += 15.0 / 16.0;
    num *= x;           den *= x;
    num += 1.0 / 2.0;   den += 7.0 / 4.0;
    num *= x;           den *= x;
    den += 1.0;
    return num / den;
  }
  return sqrt(1 + x) - 1;
}
```

Now we can compute xmax - 1 without subtracting, and finally we can use log1p() to avoid inaccuracy from log of values near 1. The final code looks like this:

```c
void var62_elliptic (flam3_iter_helper *f, double weight)
{
  double x = f->tx;
  double y = f->ty;
  double x2 = 2.0 * x;
  double sq = f->precalc_sumsq;
  double u = sq + x2;
  double v = sq - x2;
  double xmaxm1 = 0.5 * (sqrt1pm1(u) + sqrt1pm1(v));
  double a = x / (1 + xmaxm1);
  double ssx = xmaxm1;
  double w = weight / M_PI_2;

  if (ssx < 0)
    ssx = 0;
  else
    ssx = sqrt(ssx);

  f->p0 += w * asin(clamp(a, -1, 1));

  if (y > 0)
    f->p1 += w * log1p(xmaxm1 + ssx);
  else
    f->p1 -= w * log1p(xmaxm1 + ssx);
}
```

The proof is in the pudding: the small black holes in the spirals are gone!

Finally, it seems elliptic is similar but not quite equal to the complex function 1 - acos(z) * 2 / PI. The standard library implementations probably have accuracy-preserving techniques that might be worth a look; I haven't checked yet. But the difference may be significant for images: notably the acos version is conformal, while the elliptic variation doesn't seem to be. Here's a comparison (elliptic on the left, acos on the right):

**EDIT 2017-11-27** I also changed the badval threshold from 1e10
to 1e100, and I've been informed that this change is also critical for getting
the good appearance (i.e., you need both the numerical voodoo and the threshold
increase).

Today I presented GULCII, my Graphical Untyped Lambda Calculus Interactive Interpreter, at the University of Edinburgh Informatics department. It went well, I think. The first go-around in the morning I missed some slides about de Bruijn indices, but the afternoon audience seemed to be familiar with them already. First I performed with GULCII, the same set as at the FARM conference music evening, testing an equivalence between Church encoding and Scott encoding of natural numbers. Then I presented some slides about those encoding schemes for data in untyped lambda calculus, a bit about how GULCII works, some shortcomings, and ideas for future developments.

A week before the talk I discovered a bad bug in the evaluator ("exp two two" with Church numerals was not equal to "four"), but I managed to rewrite it in time, using the "scope extrusion" rules from the Lambdascope paper. However, sharing is still broken, so the next version will probably switch to using Lambdascope as a library, so that full lazy graph reduction will work properly. The bug had been there since 2011, and is visible in the video of my FARM performance.

You can download the slides from my presentation, or view the Pandoc Markdown for LaTeX Beamer source code. I might upload some video of the second talk soon. GULCII itself is on Hackage so you can install it with `cabal install gulcii`.

Incidentally, the brickwork opposite the Informatics Forum reminded me of Donkey Kong, with its ladders:

I benchmarked some Mandelbrot set renderers. Click pictures for bigger versions. The corresponding deep zoom images are these:

Location credits:

- Olbaid-ST Deep Mandelbrot Zoom 023 (self-made)
- Dinkydau Evolution Of Trees (self-made)
- Dinkydau Ssssssssss (self-made)

Traditional deep zoom rendering with arbitrary precision calculations has a constant cost per pixel. This is visible on the graph as horizontal lines for the renderer MDZ (Mandelbrot Deep Zoom); version 0.1.3 is an unreleased version with some small changes I made to allow the benchmarks to be run from a shell script. On the graph bottom right you can see that native machine precision in MDZ is much faster than any perturbation renderer.

Perturbation techniques allow native precision to be used for the bulk of the calculations, as deltas from an arbitrary precision reference. Series approximation techniques allow many per-pixel iterations to be skipped entirely. There is some per-image overhead, but the eventual per-pixel cost is lower when the image size increases. This is visible on the graph as downward sloping lines for the renderer Kalles Fraktaler, version 2.12.5 will be released next month with some bug fixes and command line rendering support, and the renderer mandelbrot-perturbator, which is still highly experimental.

Each of these renderers has two lines, with different thresholds for Pauldelbrot's glitch detection heuristic. One conclusion to be drawn is that the threshold has minimal impact on mandelbrot-perturbator render times, while the higher threshold (necessary for correctness with some locations) can slow Kalles Fraktaler down by a significant amount. Kalles Fraktaler is typically slower than mandelbrot-perturbator with this more accurate mode.

A final conclusion is that while mandelbrot-perturbator flattens out to a constant low cost per pixel as the image size increases, at some locations Kalles Fraktaler starts to slow down further (the lines slope upwards). This indicates some performance bug that I hope to investigate at some point next month. In the mean time tiled rendering might be cheaper.

These benchmarks represent 20 days of (single core) CPU time on a quad core AMD Athlon II X4 640 Processor underclocked to 2.3GHz due to thermal issues. The benchmark data is in the Kalles Fraktaler 2 source code repository.
