The Mandelbrot set \(M\) is formed by iterating the function \(z \to z^2 + c\) starting from \(z = 0\). If the iteration remains bounded for a given \(c\), then \(c\) is in \(M\); otherwise (if it escapes to infinity) \(c\) is not in \(M\). The interior of \(M\) consists of collections of cardioid-like and disk-like shapes: these are hyperbolic components, each associated with an integer, its period. For the \(c\) at the center of a component of period \(p\), after \(p\) iterations \(z\) returns to \(0\) and the iterations repeat (hence the name period). For points in the complex plane (whether in \(M\) or not) sufficiently near a hyperbolic component of period \(p\), \(|z|\) reaches a new minimum (discounting the initial \(z=0\)) at iteration \(p\). The region for which this is true is called the atom domain associated with the hyperbolic component.

To find the center (sometimes called the nucleus) of a hyperbolic component, one can use Newton's root-finding method in one complex variable. Iterate the derivative with respect to \(c\) along with \(z\) (using \(\frac{\partial z}{\partial c} \to 2 \frac{\partial z}{\partial c} z + 1\)) for \(p\) iterations, then update \(c \to c - \frac{z}{\frac{\partial z}{\partial c}}\), repeating until it converges. However, Newton's method requires a good initial guess for \(c\). As there are multiple roots, and being near a root means Newton's method brings you nearer to it, there must be a boundary where which root is reached depends sensitively on the initial guess. It turns out (if there are more than 2 roots) that the boundary is fractal, and any arbitrarily small neighbourhood of a boundary point contains initial guesses that reach every root. See 3blue1brown's videos on YouTube about the Newton fractal for further information. Which brings me to my conjecture:

Conjecture: points in the complement of the Mandelbrot set that lie in an atom domain of period \(p\) are good initial guesses for Newton's method to find the root of period \(p\) at the center of that atom domain.

It turns out that **this conjecture is false**. The proof
is by counter-example. The counter-example is the period \(18\) island
with angled internal address \(1 \to_{1/2} 2 \to_{1/8} 16 \to_{1/2} 18\),
whose upper external angle is \(.(010101010101100101)\) when
expressed in binary. I found this counter-example by brute force search:
for every period increasing from \(1\), trace every ray pair of that
period until the endpoints reach the atom domain. Then from each endpoint
use Newton's method to find the center of the hyperbolic component.
Compare the two centers reached; if they aren't the same, we have found
a counter-example. Here is a picture:

The Mandelbrot set is shown in black, using distance estimation to make its filaments visible. The fractal boundary of the Newton basins of period \(18\) is shown in white. The atom domain is shown in red. The complement of the Mandelbrot set is shown with a binary decomposition grid that follows the external rays and equipotentials. You can see that the path of the ray that goes from the cusp of the mini Mandelbrot island will intersect the Newton basin boundary at the corner of the atom domain, so that the eventual point of convergence of Newton's method is unpredictable. In my experiment it converged to the child bulb with angled internal address \(1 \to_{1/2} 2 \to_{1/9} 18\).

The above image used regular Newton's method, without factoring out the roots of lower period that divide the target period. With the reduced polynomials the basins are typically a little bigger, but in this case it made no difference and the problem persists with this counter-example:

I uploaded a short video showing the counter-example with both variants of Newton's method: watch on diode.zone. You can download the FragM source code for the visualisation.

This counter-example shows that the strategy of tracing rays until the atom domain is reached, before switching to Newton's method to find the root, is unsafe. A guaranteed safe strategy remains to be investigated.

The Mandelbrot set fractal is formed by iterations of \(z \to z^2 + c\) where \(c\) is determined by the coordinates of the pixel and \(z\) starts at the critical point \(0\). The critical point is where the derivative w.r.t. \(z\) is \(0\). The image is usually coloured by how quickly \(z\) escapes to infinity, regions that remain bounded forever are typically coloured black. It has a distinctive shape, with a cardioid adjoined by many disks decreasing in size, each with further disks attached. Looking closely there are smaller "island" copies of the whole, but they are actually attached to the main "continent" by thin filaments.

The Newton fractal is formed by applying Newton's method to the cube roots of unity, iterating \(z \to z - \frac{z^3 - 1}{3z^2}\) where the initial \(z\) is determined by the coordinates of the pixel. The image is usually coloured by which of the 3 complex roots of unity is reached, with brightness and saturation showing how quickly it converged. It has its own distinctive shape, with three rays extending from the origin towards infinity separating the immediate basins of attraction, each with an intricate shape: at each point on the Julia set, different points in an arbitrarily small neighbourhood will converge to all three of the roots.

The Nova fractal mashes these two fractals together: the Newton fractal is perturbed with a \(+ c\) term that is determined by the pixel, and the iteration starts from a critical point (any of the cube roots of unity). The image is coloured by how quickly the iteration converges to a fixed point (a different point for each pixel) and points that don't converge (or converge to a cycle of length greater than 1) are usually coloured black. The fractal appearance combines features of the Newton fractal and the Mandelbrot set, with mini-Mandelbrot set islands appearing in the filaments.

Deep zooms of the Mandelbrot set can be rendered efficiently using perturbation techniques: consider \[z = (Z+z) - Z \to ((Z + z)^2 + (C + c)) - (Z^2 + C) = (2 Z + z) z + c \] where \(Z, C\) is a "large" high precision reference and \(z, c\) is a "small" low precision delta for nearby pixels. Going deeper one can notice "glitches" around mini-Mandelbrot sets when the reference is not suitable, but these can be detected with Pauldelbrot's criterion \(|Z+z| << |Z|\), at which point you can use a different reference that might be more appropriate for the glitched pixels.

Trying to do the same thing for the Nova fractal works at first, but going deeper (to about \(10^{30}\) zoom factor) it breaks down and glitches occur that are not fixed by using a nearer reference. These glitches are due to the non-zero critical point recurring in the periodic mini-Mandelbrot sets: precision loss occurs when mixing tiny values with large values. They also occur when affine-conjugating the quadratic Mandelbrot set to have a critical point away from zero (e.g. \(z \to z^2 - 2 z + c\) has a critical point at \(z = 1\)). Affine-conjugation means using an affine function \(m(z) = a z + b\) to conjugate two functions \(f, F\) like \(m(f(z)) = F(m(z))\).

The solution is to affine-conjugate the Nova fractal formula, to move the starting critical point from \(1\) to \(0\). One way of doing it gives the modified Nova formula \[ z \to \frac{ \frac{2}{3} z^3 - 2 z - 1 }{ (z + 1)^2 } + c + 1 \] which seems to work fine when going beyond \(10^{30}\) at the same locations where the variant with critical point \(z=1\) fails. For example, see the ends of the following two short videos:

Newton's method can be used to trace external rays in the Mandelbrot set. See:

An algorithm to draw external rays of the Mandelbrot set

Tomoki Kawahira

April 23, 2009

Abstract. In this note I explain an algorithm to draw the external rays of the Mandelbrot set with an error estimate. Newton’s method is the main tool. (I learned its principle by M. Shishikura, but this idea of using Newton’s method is probably well-known for many other people working on complex dynamics.)

The algorithm uses \(S\) points in each dwell band; this number is called the "sharpness". Increasing the sharpness presumably makes the algorithm more robust when using the previous ray point \(c_n\) as the initial guess for Newton's method to find the next ray point \(c_{n+1}\), as the points are closer together.

I hypothesized that it might be better (faster) to use a different method for choosing the initial guess for the next ray point. I devised 3 new methods in addition to the existing one:

- nearest: \( c_{n+1} := c_n \)
- linear: \( c_{n+1} := c_n + (c_n - c_{n-1}) \)
- hybrid: \( c_{n+1} := c_n + (c_n - c_{n-1}) \left| \frac{c_n - c_{n-1}}{c_{n-1} - c_{n-2}} \right| \)
- geometric: \( c_{n+1} := c_n + \frac{(c_n - c_{n-1})^2}{c_{n-1} - c_{n-2}} \)

I implemented the methods in a branch of my mandelbrot-numerics repository:

git clone https://code.mathr.co.uk/mandelbrot-numerics.git
cd mandelbrot-numerics
git checkout exray-methods
git diff HEAD~1

I wrote a test program for real-world use of ray tracing, namely tracing rays of preperiod + period ~= 500 to dwell ~1000, with all 4 methods and varying sharpness. I tested for correctness by comparing with the previous method, which was known to work well with sharpness around 4 through 8.

Results were disappointing. The hybrid and geometric methods failed in all cases, no matter the sharpness. The linear method failed for sharpness below 7, but when it worked (sharpness 7 or 8) it took about 68% of the time of the nearest method. However, the nearest method at sharpness 4 took 62% of the time of nearest at sharpness 8, so this is not so impressive.

The nearest method seemed to work all the way down to sharpness 2, which was surprising and warrants further investigation: nearest at sharpness 2 took only 41% of the time of nearest at sharpness 8; if it turns out to be reliable, this would be a good speed boost.

You can download my raw data.

Reporting this failed experiment in the interests of science.

Melinda Green's webpage
The 4D Mandel/Juli/Buddhabrot Hologram
has a nice video at the bottom, titled
*ZrZi to ZrCr - only points Inside the m-set*.
I recalled my 2013 blog post about the
Ultimate Anti-Buddhabrot
where I used Newton's method to find the limit Z cycle of each C value
inside the Mandelbrot set and plotted them. The (anti-)Buddhagram is
just like the (anti-)Buddhabrot, but the Z points are plotted in 4D space
augmented with their C values. Then the 4D object can be rotated in
various ways before projection down to the 2D screen, possibly via a 3D step.

My first attempt was based on my ultimate anti-Buddhabrot code, computing all the points in a fine grid over the C plane. I collected all the points in a large array, then transformed (4D rotation, perspective projection to 3D, perspective projection to 2D) them to 2D and accumulated with additive blending to give an image. This worked well for videos at moderate image resolutions, achieving around 6 frames per second (after porting the point cloud rasterization to OpenGL) at the highest grid density I could fit into RAM, but at larger sizes the grid of dots became visible in areas where the \(z \to z^2 + c\) transformation magnified it.

Then I had a flash of inspiration while trying to find the surface normals for lighting. Looking at the formulas on Wikipedia I realized that each "pringle" is an implicit surface \(F_p(c, z) = 0\), with \(F_p(c, z) = f_c^p(z) - z\) and the usual \(f_c(z) = z^2 + c\); here \(p\) is the period of the hyperbolic component. Rendering implicit surfaces can be done via sphere-marching through signed distance fields, so I tried to construct a distance estimate. My first try was \(|F_p(c, z)| - t\), where \(t\) is a small thickness to make the shapes solid, but that extended beyond the edges of each pringle and looked very wrong. The interior of the pringle has \(\left|\frac{\partial}{\partial z}f_c^p(z)\right| \le 1\), so I added that condition to the distance estimate (using max() for intersection) to give:

float DE(vec2 c, vec2 z0)
{
    vec2 z = z0;
    vec2 dz = vec2(1.0, 0.0);
    float de = 1.0 / 0.0;
    for (int p = 1; p <= MaxPeriod; ++p)
    {
        dz = 2.0 * cMul(dz, z);
        z = cSqr(z) + c;
        de = min(de, max(length(z - z0), length(dz) - 1.0));
    }
    return 0.25 * de - Thickness;
}

Note that this has complexity linear in MaxPeriod; my first attempt was quadratic, which was way too slow for comfort when MaxPeriod got bigger than about 10. The 0.25 at the end is chosen empirically to avoid graphical glitches.

I have not yet implemented a 4D raytracer in FragM, though it's on my todo list. It's quite straightforward: most of the maths is the same as the 3D case when expressed in vectors, but the cross-product has 3 inputs instead of 2. See S. R. Hollasch's 1991 master's thesis Four-Space Visualization of 4D Objects for details. Instead I rendered 3D slices (with the 4th dimension constant) with 3D lighting, animating the slice coordinate over time, and eventually accumulating all the 3D slices into one image to create a holographic feel similar to Melinda Green's original concept.

Source code is in my fractal-bits repository:

git clone https://code.mathr.co.uk/fractal-bits.git

In yesterday's post I showed how dividing by unwanted roots leads to better stability when finding periodic nuclei \(c\) that satisfy \(f_c^p(0) = 0\) where \(f_c(z) = z^2 + c\). Today I'll show how two techniques can bring this gain to finding periodic cycles \(z_0\) that satisfy \(f_c^p(z_0) = z_0\) for a given \(c\).

The first attempt is just to do the Newton iterations without dividing out any wrong roots; unsurprisingly it isn't very successful. The second attempt divides by wrong-period roots, and is a bit better. The third algorithm is much more involved, thus slower, but is much more stable (in terms of larger Newton basins around the desired roots).

Here are some images; each row corresponds to one algorithm, in the order introduced. The colouring is based on lifted domain colouring of the derivative of the limit cycle: \(\left|\frac{\partial}{\partial z}f_c^p(z_0)\right| \le 1\) in the interior of hyperbolic components, and it acts as conformal interior coordinates which do extend a little into the exterior.

The third algorithm works by first finding a \(c_0\) that is a periodic nucleus; then we know that a good \(z_0\) for this \(c_0\) is simply \(0\). Now move \(c_0\) a little in the direction of the actual \(c\) that we wish to calculate, and use Newton's method with the previous \(z_0\) as the initial guess to find a good \(z_0\) for the moved \(c_0\). Repeat until \(c_0\) reaches \(c\); hopefully the resulting \(z_0\) lies in the periodic cycle for \(c\).

Source code for Fragmentarium: 2018-11-18_newtons_method_for_periodic_cycles.frag.

Previously on mathr: Newton's method for Misiurewicz points (2015). This week I applied the same "divide by undesired roots" technique to the periodic nucleus Newton iterations. I implemented it in GLSL in Fragmentarium, which has a Complex.frag with dual numbers for automatic differentiation (this part of the frag is mostly my work, though I largely copy/pasted from C99 standard library manual pages for the transcendental functions, and from Wikipedia for basic properties of differentiation like the product rule, quotient rule, and chain rule).

Here's the improved Newton's method; the newly added lines are the ones that accumulate \(G\) inside the loop:

vec2 nucleus(vec2 c0, int period, int steps)
{
    vec2 c = c0;
    for (int j = 0; j < steps; ++j)
    {
        vec4 G = cConst(1.0);
        vec4 z = cConst(0.0);
        for (int l = 1; l <= period; ++l)
        {
            z = cSqr(z) + cVar(c);
            if (l < period && period % l == 0)  // newly added
                G = cMul(z, G);                 // newly added
        }
        G = cDiv(z, G);
        c -= cDiv(G.xy, G.zw);
    }
    return c;
}

And some results; the top half of each image is without the added lines, the bottom half is with them. From left to right the target periods are 2, 3, 4, 9, 12:

You can download the FragM source code for the images in this article: 2018-11-17_newtons_method_for_periodic_points.frag.

Distance estimated Newton fractal for a rational function with zeros at {1, i, -1, -i} and poles at {-2, 2}. I already blogged about this calendar image for March: Distance estimation for Newton fractals. The only other thing to mention is dual numbers for differentiation, which I used to save effort computing derivative equations in an OpenGL GLSL fragment shader re-implementation for Fragmentarium:

#include "Progressive2D.frag"

const int nzeros = 4;
const vec2[4] zeros = vec2[4]
  ( vec2( 1.0,  0.0)
  , vec2( 0.0,  1.0)
  , vec2(-1.0,  0.0)
  , vec2( 0.0, -1.0)
  );
const int npoles = 2;
const vec2[2] poles = vec2[2]
  ( vec2( 2.0, 0.0)
  , vec2(-2.0, 0.0)
  );
const vec3[4] colours = vec3[4]
  ( vec3(1.0, 0.7, 0.7)
  , vec3(0.7, 0.7, 1.0)
  , vec3(0.7, 1.0, 0.7)
  , vec3(1.0, 1.0, 0.7)
  );
const float weight = 0.25; // change this according to render size
const float huge = 1.0e6;
const float epsSquared = 1.0e-6;
const int newtonSteps = 64;

float cmag2(vec2 z) { return dot(z,z); }
vec2 csqr(vec2 z) { return vec2(z.x*z.x - z.y*z.y, 2.0*z.x*z.y); }
vec2 crecip(vec2 z) { float d = cmag2(z); return z / vec2(d, -d); }
vec2 cmul(vec2 a, vec2 b) { return vec2(a.x*b.x - a.y*b.y, a.x*b.y + a.y*b.x); }

// dual numbers
vec4 constant(vec2 z) { return vec4(z, 0.0, 0.0); }
vec4 variable(vec2 z) { return vec4(z, length(vec4(dFdx(z), dFdy(z))), 0.0); }
vec4 crecip(vec4 z) { vec2 w = crecip(z.xy); return vec4(w, -cmul(z.zw, csqr(w))); }

vec4 newtonStep(vec4 z)
{
  vec4 s = vec4(0.0);
  for (int i = 0; i < nzeros; ++i) { s += crecip(z - constant(zeros[i])); }
  for (int i = 0; i < npoles; ++i) { s -= crecip(z - constant(poles[i])); }
  return z - crecip(s);
}

vec3 newton(vec4 z00)
{
  vec4 z0 = z00;
  bool done = false;
  int count = -1;
  vec3 rgb = vec3(0.0);
  float d0 = huge;
  float d1 = huge;
  for (int n = 0; n < newtonSteps; ++n)
  {
    d1 = d0;
    z0 = newtonStep(z0);
    d0 = huge;
    for (int i = 0; i < nzeros; ++i)
    {
      float d = cmag2(z0.xy - zeros[i]);
      if (d < d0) { d0 = d; rgb = colours[i]; }
    }
    if (d0 < epsSquared) { done = true; break; }
  }
  float de = 0.0;
  if (done) { de = 0.5 * abs(log(d0)) * sqrt(d0) / length(z0.zw); }
  return tanh(clamp(weight * de, 0.0, 8.0)) * rgb;
}

vec3 color(vec2 z)
{
  return newton(variable(z * 1.5));
}

You can download the above "Two Beetles Meet" Fragmentarium reimplementation.

Misiurewicz points in the Mandelbrot set are strictly preperiodic. Defining the quadratic polynomial \(F_c(z) = z^2 + c\), then a Misiurewicz point with preperiod \(q > 0\) and period \(p > 0\) satisfies:

\[\begin{aligned} {F_c}^{q + p}(c) &= {F_c}^{q}(c) \\ {F_c}^{q' + p}(c) &\ne {F_c}^{q'}(c)\text{ for all } 0 \le q' < q \\ {F_c}^{q + p'}(c) &\ne {F_c}^{q}(c)\text{ for all } 1 \le p' < p \end{aligned}\]

where the first line says it is preperiodic, the second line says that the preperiod is exactly \(q\), and the third line says that the period is exactly \(p\). A naive solution of the first equation would be to use Newton's method for finding a root of \(f_1(c) = 0\) where \(f_1(c) = {F_c}^{q + p}(c) - {F_c}^{q}(c)\), and it does work: but the root found might have lower preperiod or lower period, so it requires checking to see if it's really the Misiurewicz point we want.

This need for checking felt unsatisfactory, so I tried to figure out a way to reject wrong solutions during the Newton's method iterations. The second line equation for exact preperiod gives \({F_c}^{q' + p}(c) - {F_c}^{q'}(c) \ne 0\), so I tried dividing \(f_1(c)\) by all those non-zero values to give an \(f_2(c)\) and applying Newton's method for finding a root of \(f_2(c) = 0\). So:

\[\begin{aligned} f_2(c) &= \frac{g_2(c)}{h_2(c)} \\ g_2(c) &= {F_c}^{q + p}(c) - {F_c}^{q}(c) \\ h_2(c) &= \prod_{q'=0}^{q-1}\left( {F_c}^{q' + p}(c) - {F_c}^{q'}(c) \right) \\ f_2'(c) &= \frac{g_2'(c) h_2(c) - g_2(c) h_2'(c)}{h_2(c)^2} \\ g_2'(c) &= ({F_c}^{q + p})'(c) - ({F_c}^{q})'(c) \\ h_2'(c) &= h_2(c) \sum_{q'=0}^{q-1} \frac{({F_c}^{q' + p})'(c) - ({F_c}^{q'})'(c)}{{F_c}^{q' + p}(c) - {F_c}^{q'}(c)} \end{aligned}\]

with the Newton step \(c_{n+1} = c_{n} - \frac{f_2(c_n)}{f_2'(c_n)}\) where \(c_0\) is an initial guess. Here are some image comparisons, where each pixel is coloured according to the root found (grey for wrong (pre)period; saturated circles surround each root within its basin of attraction; edges between basins are coloured black; the Mandelbrot set is overlaid in white). The top half of each image uses the naive method, the bottom half the method detailed in this post (which seems better, because the saturated circles are larger):

The images are labelled with preperiod and period, but there might be an off-by-one error with respect to standard terminology: here I iterate \(F_c\) starting from \(c\), while some iterate \(F_c\) starting from \(0\). So my preperiods are one less than they would be if I'd started from \(0\). I tried extending the method to reject lower periods as well as lower preperiods, but it didn't work very well.

The C99 source code for this post is available: newton-misiurewicz.c, using code from my new (work-in-progress) mandelbrot-numerics library (git HEAD at 7fe3b89465390a712c7427093b8fc5377d2e65b6 when this post was written). Compiled using OpenMP for parallelism, it takes a little over 25 minutes to run on my quad-core machine.

References:

Interior distance estimates can be calculated for the Mandelbrot set using the formula:

\[\frac{1-\left|\frac{\partial}{\partial{z}}f_c^p(z_0)\right|^2}{\left|\frac{\partial}{\partial{c}}\frac{\partial}{\partial{z}}f_c^p(z_0) + \frac{\frac{\partial}{\partial{z}}\frac{\partial}{\partial{z}}f_c^p(z_0) \frac{\partial}{\partial{c}}f_c^p(z_0)} {1-\frac{\partial}{\partial{z}}f_c^p(z_0)}\right|}\]

where \(f_c^p(z_0) = z_0\). Obtaining \(z_0\) by iteration of \(f_c\) is impractical, requiring a huge number of iterations; moreover, that leaves \(p\) still to be determined. Assuming \(p\) is known, it's possible to find \(z_0\) more directly by using Newton's method to solve \(f_c^p(z_0) - z_0 = 0\), iterating:

\[z_0^{m+1} = z_0^{m} - \frac{f_c^p(z_0^m) - z_0^m}{\frac{\partial}{\partial{z}}f_c^p(z_0^m) - 1}\]

which can be implemented in C99 like this:

#include <complex.h>

int attractor
  ( complex double *z_out
  , complex double *dz_out
  , complex double z_in
  , complex double c
  , int period
  )
{
  double epsilon = 1e-10;
  complex double zz = z_in;
  for (int j = 0; j < 64; ++j)
  {
    complex double z = zz;
    complex double dz = 1;
    for (int i = 0; i < period; ++i)
    {
      dz = 2 * z * dz;
      z = z * z + c;
    }
    complex double zz1 = zz - (z - zz) / (dz - 1);
    if (cabs(zz1 - zz) < epsilon)
    {
      *z_out = z;
      *dz_out = dz;
      return 1;
    }
    zz = zz1;
  }
  return 0;
}

The derivative is returned because a simple test for interior-hood is \(\left|\frac{\partial}{\partial{z}}f_c^p(z_0)\right| \le 1\) (this follows directly from the interior distance formula, which gives a non-negative distance if the point is interior). Now the problem is finding the period \(p\). Recalling the atom domain representation function, a point \(c\) is in a domain of period \(p\) when \(\left|f_c^p(0)\right| < \left|f_c^q(0)\right| \text{ for all } 1 \le q < p\). These domains completely enclose the hyperbolic components of the same period, so good guesses for the period of an enclosing hyperbolic component of \(c\) are the partials, namely the values of \(p\) for which \(\left|f_c^p(0)\right|\) reach a new minimum. Here's what the first few atom domains look like:

Pseudo-code for interior distance estimate rendering now looks something like:

dc := 0
z := 0
m := infinity
p := 0
for (n := 1; n <= maxiters; ++n)
{
  dc := 2 * z * dc + 1
  z := z^2 + c
  if (|z| < m)
  {
    m := |z|
    p := n
    if (attractor(&z0, &dz0, z, c, p))
    {
      if (|dz0| <= 1)
      {
        // point is interior with period p and known z0
        // compute interior distance estimate
        break
      }
    }
  }
  if (|z| > R)
  {
    // point is exterior
    // compute exterior distance estimate from z and dc
    break
  }
}

Computing the interior distance estimate with known \(p\) and \(z_0\) is quite simple:

double interior_distance(complex double z0, complex double c, int period)
{
  complex double z = z0;
  complex double dz = 1;
  complex double dzdz = 0;
  complex double dc = 0;
  complex double dcdz = 0;
  for (int p = 0; p < period; ++p)
  {
    dcdz = 2 * (z * dcdz + dz * dc);
    dc = 2 * z * dc + 1;
    dzdz = 2 * (dz * dz + z * dzdz);
    dz = 2 * z * dz;
    z = z * z + c;
  }
  return (1 - cabs(dz) * cabs(dz)) / cabs(dcdz + dzdz * dc / (1 - dz));
}

So far we have two algorithms for rendering the Mandelbrot set, the
first **plain** just computes exterior distance estimates
with no interior checking, and the second **unbiased** checks
for interior-hood every time a partial is reached. One would think the
extra interior checks would slow down the rendering, but unbiased is
actually faster than plain for the default view because lots of pixels
are interior with low periods, reducing the total number of iterations
required (with plain all interior points are iterated up to the maximum
iteration limit). In the benchmark graph here, plain is red and unbiased
is green.

However, zooming in to a region with no interior pixels visible shows a different result entirely: the interior checks are all useless, and do indeed slow down the rendering dramatically.

The solution is a modified rendering algorithm **biased**,
which uses local connectedness properties to postpone interior checking
in exterior regions. We keep track of the outcome of the previous pixel
(whether it was interior or exterior) and use that to adjust the
calculations - if the previous pixel was interior, perform interior checking
as in unbiased, but if the previous pixel was exterior, instead of performing
interior checking for each period as it is encountered, store the inputs
to the interior checking and carry on. Postponing the expensive interior
checks until the maximum iteration count has been reached means that for
exterior points that escape before the maximum iteration count has been
reached, we don't need to perform the interior checking at all. In the
benchmark graphs, biased is blue, and you can see that it improves
performance to near that of plain for the exterior-dominated embedded
julia set view.

Similar speed improvements result for most views, and this can be explained by the mostly-green images - they are plots of the algorithm's performance: points are green when the bias from the previous pixel correctly predicted the outcome of the current pixel, and other colours indicate that the wrong algorithm was chosen (such as iterating all the way to maxiters only to find later that the point was interior, or performing expensive interior checks only to find later that the point was exterior).

You can download the full source code for the program used to render the images in this post. The code is slightly awkward because it includes runtime checks for which algorithm is being used - but now we have shown that the biased method is best we could strip out the checks and use the biased method always.

However, there is a flaw - for deep images (close to the resolution of double) where exterior distance estimates have no problems, the interior distance technique can fail in spectacularly ugly fashion:

It's usually possible to work around this by throwing extra precision at the problem (whether with double-double or MPFR-style arbitrary precision software floating point). Alternatively, with perturbation technique based renderers, given a reference at the nucleus of the dominating island, it's possible to compute interior distances using perturbed double-precision orbits. But this post is long enough, more on that another time perhaps.

K I Martin recently popularized a perturbation technique to accelerate Mandelbrot set rendering in his SuperFractalThing program. I wrote up some of the mathematics behind it, extending Martin's description to handle interior distance estimation too. Unfortunately it's very easy to get glitchy images that are wrong in sometimes subtle ways.

The most obvious reference point is usually in a central minibrot, which means it is strictly periodic:

-1.760732891182472726272e+00 + 1.302137831089206469511e-02 i 4.0194366942304651e-14 @ -1.760732891182472889620498413132e+00 +R 1.302137831089204904674633295328e-02 iR

The current version of mightymandel computes an error estimate and shades worse errors redder. A better reference point for this image is in a non-central minibrot near the tip of a solid red patch:

-1.760732891182472726272e+00 + 1.302137831089206469511e-02 i 4.0194366942304651e-14 @ -1.76073289118248636633168329119453103e+00 +R 1.30213783108675495217125732772512438e-02 iR

Nearby higher-period non-central minibrots tend to work even better, and their limit is a pre-periodic point - one that becomes periodic after a finite number of iterations. I explored the basins of attraction of preperiodic points a bit for a couple of embedded Julia sets (which are the features that are most often glitchy).

-1.7607328089719322109e+00 + 1.3021307542195548201e-02 i 1.5258789062500003e-05 @ 1 1/2 2 1/2 3 1/3 6 4/5 35 A

The saturated dots at the tips and spirals in this period 35 embedded Julia set are the preperiodic points of interest. They have period 3 (matching the outer influencing island) and preperiods 35 (matching the inner influencing island) and 36.

Then zooming deeper to near the period 35 island and into one of its hairs finds a doubly-embedded Julia set between the period 35 outer influencing island and a period 177 inner influencing island. Rendering the Newton basins now needs more than double precision floating point, and my Haskell code using qd's DoubleDouble took almost 4 hours on a quad core. This time the points of interest have period 35 (matching the outer island) and preperiods 177 (matching the inner island) and 178.

-1.76073288182181309054484516839 + 0.01302138541499395659022491468 i 2.2737367544323211e-13 @

Here's the same view rendered with mightymandel, first with the central minibrot as reference:

-1.76073288182181252065e+00 + 1.30213854149941790026e-02 i 2.2737367544323211e-13 @

Then a non-central minibrot:

-1.76073288182181252e+00 + 1.302138541499417901e-02 i 2.2737367544323211e-13 @ -1.760732881821779248788320301573183e+00 +R 1.302138541488527885806724080050218e-02 iR

And finally a limiting pre-periodic reference point:

-1.76073288182181252e+00 + 1.302138541499417901e-02 i 2.2737367544323211e-13 @ -1.7607328818218582927504155115035552 +R 1.3021385414920574393027950216090353e-2 iR

No single reference point gives a red-free image, but combining multiple reference points, each appropriate to various parts of the image, looks like it would be a promising approach - and the knowledge about where the view is in relation to outer and inner influencing islands and their (potentially echoed) embedded Julia sets could perhaps be used to calculate a few candidate reference points automatically.

I finally got around to postprocessing the photos I took of the pages in my notebook written on topics concerning the Mandelbrot set. You can see it here: Mandelbrot Notebook

One of the (too-many) long term projects I have is to write a book about the Mandelbrot set that bridges the gap between popular science books ("wow fractals are cool") and mathematical texts ("theorem: something hard and obscure"). I've never written a book before, and I found myself getting distracted by irrelevancies (like page layout, fonts, etc.) and re-editing text over and over instead of actually adding new content, so I've been experimenting with git version control commit messages:

git clone http://code.mathr.co.uk/book.git
cd book
git log --color -u --reverse

The working title is How to write a book about the Mandelbrot set.

The Buddhabrot fractal is related to the Mandelbrot set: for each
point \(c\) that is **not** in the Mandelbrot set, plot
all its iterates \(z_m\) where \(z_0 = 0\) and \(z_{n+1} = z_n^2 + c\).
The anti-Buddhabrot flips this, plotting all the iterates for points
that **are** in the Mandelbrot set. For the Buddhabrot,
the points not in the Mandelbrot set will escape to \(\infty\), so we
know when to stop iterating. Points in the interior don't escape, but
almost all converge to a periodic cycle. In the limit of plotting a
very large number of iterations, the iterates in these cycles will
strongly dominate due to the periodic repetition, such that the earlier
iterates will be invisible. Define the **ultimate**
anti-Buddhabrot to be the iterate plots of these limit cycles.

Now, a point \(z_c\) in the limit cycle for \(c\) of period \(p\)
satisfies \( F^p(z_c, c) = z_c \). There will be \(p\) different
\(z_c\) for each \(c\), but we just need one, and we can find it
numerically using Newton's method. Given an arbitrary \(c\) we need
to find its period first, which we can do by checking the interior
distance estimate for each **partial**. Define a partial
as a \(q\) such that \(|z_q| < |z_m|\) for all \(1 \le m < q\). If the interior
distance for \(c\) is negative, then \(c\) is not inside a component
of the Mandelbrot set of that period. Once we have \(z_c\), we can
plot the \(p\) iterates in the cycle.

Buddhabrot colouring commonly uses several monochrome image planes with different iteration limits, brighter where more iterates hit each pixel, which are combined into a colour image. With the ultimate anti-Buddhabrot, we can accumulate a colour image: each period is associated to an RGB value, and we can plot RGBA pixels (with A set to 1) with additive blending. The final A channel indicates how many iterates hit the pixel, but we also have the accumulated colours that can show which periods were involved. Post-processing the high dynamic range accumulation buffer to bring it down to something that can be displayed on a screen can bring out more of the details.

Calculation can be sped up using recursive subdivision. The root of the recursion passes over the Mandelbrot set parameter plane. There are a few different cases that can occur at each point:

- exterior: Compute the exterior distance estimate: if it's so large that all of the subdivided child pixels would be exterior, bail out now; otherwise subdivide and recurse without plotting iterates.
- interior: If the interior distance is so large that all of the subdivided child pixels would be interior to the same component, subdivide and recurse with the now-known period; otherwise recurse with the general method described above; plot the iterates in both cases.
- unknown: If the iteration limit is reached, bail out without recursing.

The key speedup is switching to a simpler algorithm that just calculates \(z_c\) and plots the iterates with a known period. Another optimisation exploits the symmetry of the Mandelbrot set about the real axis. Finally it's possible to parallelize using OpenMP, with atomic pragmas to avoid race conditions when accumulating pixels.

Source code in C99: Ultimate Anti-Buddhabrot reference implementation. Runtime just under 20mins on a 3GHz quadcore CPU.
