Distance estimated Newton fractal for a rational function with zeros at {1, i, -1, -i} and poles at {-2, 2}. I already blogged about this calendar image for March: Distance estimation for Newton fractals. The only other thing to mention is dual numbers for differentiation, which I used to save effort computing derivative equations in an OpenGL GLSL fragment shader re-implementation for Fragmentarium:

#include "Progressive2D.frag"

const int nzeros = 4;
const vec2[4] zeros = vec2[4]
  ( vec2( 1.0,  0.0)
  , vec2( 0.0,  1.0)
  , vec2(-1.0,  0.0)
  , vec2( 0.0, -1.0)
  );
const int npoles = 2;
const vec2[2] poles = vec2[2]
  ( vec2( 2.0, 0.0)
  , vec2(-2.0, 0.0)
  );
const vec3[4] colours = vec3[4]
  ( vec3(1.0, 0.7, 0.7)
  , vec3(0.7, 0.7, 1.0)
  , vec3(0.7, 1.0, 0.7)
  , vec3(1.0, 1.0, 0.7)
  );
const float weight = 0.25; // change this according to render size
const float huge = 1.0e6;
const float epsSquared = 1.0e-6;
const int newtonSteps = 64;

float cmag2(vec2 z) { return dot(z,z); }
vec2 csqr(vec2 z) { return vec2(z.x*z.x - z.y*z.y, 2.0*z.x*z.y); }
vec2 crecip(vec2 z) { float d = cmag2(z); return z / vec2(d, -d); }
vec2 cmul(vec2 a, vec2 b) { return vec2(a.x*b.x - a.y*b.y, a.x*b.y + a.y*b.x); }

// dual numbers
vec4 constant(vec2 z) { return vec4(z, 0.0, 0.0); }
vec4 variable(vec2 z) { return vec4(z, length(vec4(dFdx(z), dFdy(z))), 0.0); }
vec4 crecip(vec4 z) { vec2 w = crecip(z.xy); return vec4(w, -cmul(z.zw, csqr(w))); }

vec4 newtonStep(vec4 z)
{
  vec4 s = vec4(0.0);
  for (int i = 0; i < nzeros; ++i) { s += crecip(z - constant(zeros[i])); }
  for (int i = 0; i < npoles; ++i) { s -= crecip(z - constant(poles[i])); }
  return z - crecip(s);
}

vec3 newton(vec4 z00)
{
  vec4 z0 = z00;
  bool done = false;
  int count = -1;
  vec3 rgb = vec3(0.0);
  float d0 = huge;
  float d1 = huge;
  for (int n = 0; n < newtonSteps; ++n)
  {
    d1 = d0;
    z0 = newtonStep(z0);
    d0 = huge;
    for (int i = 0; i < nzeros; ++i)
    {
      float d = cmag2(z0.xy - zeros[i]);
      if (d < d0) { d0 = d; rgb = colours[i]; }
    }
    if (d0 < epsSquared) { done = true; break; }
  }
  float de = 0.0;
  if (done) { de = 0.5 * abs(log(d0)) * sqrt(d0) / length(z0.zw); }
  return tanh(clamp(weight * de, 0.0, 8.0)) * rgb;
}

vec3 color(vec2 z) { return newton(variable(z * 1.5)); }

You can download the above "Two Beetles Meet" Fragmentarium reimplementation.

Misiurewicz points in the Mandelbrot set are strictly preperiodic. Defining the quadratic polynomial \(F_c(z) = z^2 + c\), then a Misiurewicz point with preperiod \(q > 0\) and period \(p > 0\) satisfies:

\[\begin{aligned} {F_c}^{q + p}(c) &= {F_c}^{q}(c) \\ {F_c}^{q' + p}(c) &\ne {F_c}^{q'}(c)\text{ for all } 0 \le q' < q \\ {F_c}^{q + p'}(c) &\ne {F_c}^{q}(c)\text{ for all } 1 \le p' < p \end{aligned}\]

where the first line says it is preperiodic, the second line says that the preperiod is exactly \(q\), and the third line says that the period is exactly \(p\). A naive solution of the first equation would be to use Newton's method for finding a root of \(f_1(c) = 0\) where \(f_1(c) = {F_c}^{q + p}(c) - {F_c}^{q}(c)\), and it does work: but the root found might have a lower preperiod or a lower period, so checking is required to see whether it's really the Misiurewicz point we want.

This need for checking felt unsatisfactory, so I tried to figure out a way to reject wrong solutions during the Newton's method iterations. The second line equation for exact preperiod gives \({F_c}^{q' + p}(c) - {F_c}^{q'}(c) \ne 0\), so I tried dividing \(f_1(c)\) by all those non-zero values to give an \(f_2(c)\) and applying Newton's method for finding a root of \(f_2(c) = 0\). So:

\[\begin{aligned} f_2(c) &= \frac{g_2(c)}{h_2(c)} \\ g_2(c) &= {F_c}^{q + p}(c) - {F_c}^{q}(c) \\ h_2(c) &= \prod_{q'=0}^{q-1}\left( {F_c}^{q' + p}(c) - {F_c}^{q'}(c) \right) \\ f_2'(c) &= \frac{g_2'(c) h_2(c) - g_2(c) h_2'(c)}{h_2(c)^2} \\ g_2'(c) &= ({F_c}^{q + p})'(c) - ({F_c}^{q})'(c) \\ h_2'(c) &= h_2(c) \sum_{q'=0}^{q-1} \frac{({F_c}^{q' + p})'(c) - ({F_c}^{q'})'(c)}{{F_c}^{q' + p}(c) - {F_c}^{q'}(c)} \end{aligned}\]

with the Newton step \(c_{n+1} = c_{n} - \frac{f_2(c_n)}{f_2'(c_n)}\) where \(c_0\) is an initial guess. Here are some image comparisons, where each pixel is coloured according to the root found (grey for wrong (pre)period, saturated circles surrounding each root within its basin of attraction, edges between basins coloured black, and the Mandelbrot set overlaid in white). The top half of each image uses the naive method, the bottom half the method detailed in this post (which seems better, because the saturated circles are larger):

The images are labelled with preperiod and period, but there might be an off-by-one error with respect to standard terminology: here I iterate \(F_c\) starting from \(c\), while some iterate \(F_c\) starting from \(0\). So my preperiods are one less than they would be if I'd started from \(0\). I tried extending the method to reject lower periods as well as lower preperiods, but it didn't work very well.

The C99 source code for this post is available: newton-misiurewicz.c, using code from my new (work-in-progress) mandelbrot-numerics library (git HEAD at 7fe3b89465390a712c7427093b8fc5377d2e65b6 when this post was written). Compiled using OpenMP for parallelism, it takes a little over 25mins to run on my quad-core machine.


Interior distance estimates can be calculated for the Mandelbrot set using the formula:

\[\frac{1-\left|\frac{\partial}{\partial{z}}f_c^p(z_0)\right|^2}{\left|\frac{\partial}{\partial{c}}\frac{\partial}{\partial{z}}f_c^p(z_0) + \frac{\frac{\partial}{\partial{z}}\frac{\partial}{\partial{z}}f_c^p(z_0) \frac{\partial}{\partial{c}}f_c^p(z_0)} {1-\frac{\partial}{\partial{z}}f_c^p(z_0)}\right|}\]

where \(f_c^p(z_0) = z_0\). Obtaining \(z_0\) by iteration of \(f_c\) is impractical, requiring a huge number of iterations; moreover, that leaves \(p\) still to be determined. Assuming \(p\) is known, it's possible to find \(z_0\) more directly by solving \(f_c^p(z_0) = z_0\) with Newton's method, iterating:

\[z_0^{m+1} = z_0^{m} - \frac{f_c^p(z_0^m) - z_0^m}{\frac{\partial}{\partial{z}}f_c^p(z_0^m) - 1}\]

which can be implemented in C99 like this:

#include <complex.h>

int attractor
( complex double *z_out
, complex double *dz_out
, complex double z_in
, complex double c
, int period
)
{
  double epsilon = 1e-10;
  complex double zz = z_in;
  for (int j = 0; j < 64; ++j)
  {
    complex double z = zz;
    complex double dz = 1;
    for (int i = 0; i < period; ++i)
    {
      dz = 2 * z * dz;
      z = z * z + c;
    }
    complex double zz1 = zz - (z - zz) / (dz - 1);
    if (cabs(zz1 - zz) < epsilon)
    {
      *z_out = z;
      *dz_out = dz;
      return 1;
    }
    zz = zz1;
  }
  return 0;
}

The derivative is returned because a simple test for interior-hood is \(\left|\frac{\partial}{\partial{z}}f_c^p(z_0)\right| \le 1\) (this follows directly from the interior distance formula, which gives a non-negative distance if the point is interior). Now the problem is finding the period \(p\). Recalling the atom domain representation function, a point \(c\) is in a domain of period \(p\) when \(\left|f_c^p(0)\right| < \left|f_c^q(0)\right| \text{ for all } 1 \le q < p\). These domains completely enclose the hyperbolic components of the same period, so good guesses for the period of an enclosing hyperbolic component of \(c\) are the partials, namely the values of \(p\) for which \(\left|f_c^p(0)\right|\) reach a new minimum. Here's what the first few atom domains look like:

Pseudo-code for interior distance estimate rendering now looks something like:

dc := 0
z := 0
m := infinity
p := 0
for (n := 1; n <= maxiters; ++n) {
  dc := 2 * z * dc + 1
  z := z^2 + c
  if (|z| < m) {
    m := |z|
    p := n
    if (attractor(&z0, &dz0, z, c, p)) {
      if (|dz0| <= 1) {
        // point is interior with period p and known z0
        // compute interior distance estimate
        break
      }
    }
  }
  if (|z| > R) {
    // point is exterior
    // compute exterior distance estimate from z and dc
    break
  }
}

Computing the interior distance estimate with known \(p\) and \(z_0\) is quite simple:

double interior_distance(complex double z0, complex double c, int period)
{
  complex double z = z0;
  complex double dz = 1;
  complex double dzdz = 0;
  complex double dc = 0;
  complex double dcdz = 0;
  for (int p = 0; p < period; ++p)
  {
    dcdz = 2 * (z * dcdz + dz * dc);
    dc = 2 * z * dc + 1;
    dzdz = 2 * (dz * dz + z * dzdz);
    dz = 2 * z * dz;
    z = z * z + c;
  }
  return (1 - cabs(dz) * cabs(dz)) / cabs(dcdz + dzdz * dc / (1 - dz));
}

So far we have two algorithms for rendering the Mandelbrot set: the
first, **plain**, just computes exterior distance estimates
with no interior checking, and the second, **unbiased**, checks
for interior-hood every time a partial is reached. One would think the
extra interior checks would slow down the rendering, but unbiased is
actually faster than plain for the default view because lots of pixels
are interior with low periods, reducing the total number of iterations
required (with plain all interior points are iterated up to the maximum
iteration limit). In the benchmark graph here, plain is red and unbiased
is green.

However, zooming in to a region with no interior pixels visible shows a different result entirely: the interior checks are all useless, and do indeed slow down the rendering dramatically.

The solution is a modified rendering algorithm **biased**,
which uses local connectedness properties to postpone interior checking
in exterior regions. We keep track of the outcome of the previous pixel
(whether it was interior or exterior) and use that to adjust the
calculations - if the previous pixel was interior, perform interior checking
as in unbiased, but if the previous pixel was exterior, instead of performing
interior checking for each period as it is encountered, store the inputs
to the interior checking and carry on. Postponing the expensive interior
checks until the maximum iteration count has been reached means that for
exterior points that escape before the maximum iteration count has been
reached, we don't need to perform the interior checking at all. In the
benchmark graphs, biased is blue, and you can see that it improves
performance to near that of plain for the exterior-dominated embedded
Julia set view.

Similar speed improvements result for most views, and this can be explained by the mostly-green images - they are plots of the algorithm performance - points are green when the bias from the previous pixel was accurate to determine the outcome from the current pixel, other colours indicate that the wrong algorithm was chosen (such as iterating all the way to maxiters only to find later that the point was interior, or performing expensive interior checks only to find later that the point was exterior).

You can download the full source code for the program used to render the images in this post. The code is slightly awkward because it includes runtime checks for which algorithm is being used - but now we have shown that the biased method is best we could strip out the checks and use the biased method always.

However, there is a flaw - for deep images (close to the resolution of double) where exterior distance estimates have no problems, the interior distance technique can fail in spectacularly ugly fashion:

It's usually possible to work around this by throwing extra precision at the problem (whether with double-double or MPFR-style arbitrary precision software floating point). Alternatively, with perturbation technique based renderers, given a reference at the nucleus of the dominating island, it's possible to compute interior distances using perturbed double-precision orbits. But this post is long enough, more on that another time perhaps.

K I Martin recently popularized a perturbation technique to accelerate Mandelbrot set rendering in his SuperFractalThing program. I wrote up some of the mathematics behind it, extending Martin's description to handle interior distance estimation too. Unfortunately it's very easy to get glitchy images that are wrong in sometimes subtle ways.

The most obvious reference point is usually in a central minibrot, which means it is strictly periodic:

-1.760732891182472726272e+00 + 1.302137831089206469511e-02 i 4.0194366942304651e-14 @ -1.760732891182472889620498413132e+00 +R 1.302137831089204904674633295328e-02 iR

The current version of mightymandel computes an error estimate and shades worse errors redder. A better reference point for this image is in a non-central minibrot near the tip of a solid red patch:

-1.760732891182472726272e+00 + 1.302137831089206469511e-02 i 4.0194366942304651e-14 @ -1.76073289118248636633168329119453103e+00 +R 1.30213783108675495217125732772512438e-02 iR

Nearby higher period non-central minibrots tend to work even better, and their limit is a pre-periodic point - one that becomes periodic after a finite number of iterations. I explored a bit the basins of attraction of preperiodic points for a couple of embedded Julia sets (which are the features that are most often glitchy).

-1.7607328089719322109e+00 + 1.3021307542195548201e-02 i 1.5258789062500003e-05 @ 1 1/2 2 1/2 3 1/3 6 4/5 35 A

The saturated dots at the tips and spirals in this period 35 embedded Julia set are the preperiodic points of interest. They have period 3 (matching the outer influencing island) and preperiods 35 (matching the inner influencing island) and 36.

Then zooming deeper near the period 35 island and into one of its hairs finds a doubly-embedded Julia set between the period 35 outer influencing island and a period 177 inner influencing island. Rendering the Newton basins now needs more than double precision floating point, and my Haskell code using qd's DoubleDouble took almost 4 hours on a quad core. This time the points of interest have period 35 (matching the outer island) and preperiods 177 (matching the inner island) and 178.

-1.76073288182181309054484516839 + 0.01302138541499395659022491468 i 2.2737367544323211e-13 @

Here's the same view rendered with mightymandel, first with the central minibrot as reference:

-1.76073288182181252065e+00 + 1.30213854149941790026e-02 i 2.2737367544323211e-13 @

Then a non-central minibrot:

-1.76073288182181252e+00 + 1.302138541499417901e-02 i 2.2737367544323211e-13 @ -1.760732881821779248788320301573183e+00 +R 1.302138541488527885806724080050218e-02 iR

And finally a limiting pre-periodic reference point:

-1.76073288182181252e+00 + 1.302138541499417901e-02 i 2.2737367544323211e-13 @ -1.7607328818218582927504155115035552 +R 1.3021385414920574393027950216090353e-2 iR

No single reference point gives a red-free image, but combining multiple reference points, each appropriate to various parts of the image, looks like it would be a promising approach - and the knowledge about where the view is in relation to outer and inner influencing islands and their (potentially echoed) embedded Julia sets could perhaps be used to calculate a few candidate reference points automatically.

I finally got around to postprocessing the photos I took of the pages in my notebook written on topics concerning the Mandelbrot set. You can see it here: Mandelbrot Notebook

One of the (too many) long-term projects I have is to write a book about the Mandelbrot set that bridges the gap between popular science books ("wow fractals are cool") and mathematical texts ("theorem: something hard and obscure"). I've never written a book before and found myself getting distracted by irrelevancies (like page layout, fonts, etc.) and re-editing text over and over instead of actually adding new content, so I've been experimenting with git version control commit messages:

git clone http://code.mathr.co.uk/book.git
cd book
git log --color -u --reverse

The working title is How to write a book about the Mandelbrot set.

The Buddhabrot fractal is related to the Mandelbrot set: for each
point \(c\) that is **not** in the Mandelbrot set, plot
all its iterates \(z_m\) where \(z_0 = 0\) and \(z_{n+1} = z_n^2 + c\).
The anti-Buddhabrot flips this, plotting all the iterates for points
that **are** in the Mandelbrot set. For the Buddhabrot,
the points not in the Mandelbrot set will escape to \(\infty\), so we
know when to stop iterating. Points in the interior don't escape, but
almost all converge to a periodic cycle. In the limit of plotting a
very large number of iterations, the iterates in these cycles will
strongly dominate due to the periodic repetition, such that the earlier
iterates will be invisible. Define the **ultimate**
anti-Buddhabrot to be the iterate plots of these limit cycles.

Now, a point \(z_c\) in the limit cycle for \(c\) of period \(p\)
satisfies \( F^p(z_c, c) = z_c \). There will be \(p\) different
\(z_c\) for each \(c\), but we just need one, and we can find it
numerically using Newton's method. Given an arbitrary \(c\) we need
to find its period first, which we can do by checking the interior
distance estimate for each **partial**. Define a partial
as a \(q\) such that \(|z_q| \lt |z_m|\) for all \(1 \le m \lt q\). If the interior
distance for \(c\) is negative, then \(c\) is not inside a component
of the Mandelbrot set of that period. Once we have \(z_c\), we can
plot the \(p\) iterates in the cycle.

Buddhabrot colouring commonly uses several monochrome image planes with different iteration limits, brighter where more iterates hit each pixel, which are combined into a colour image. With the ultimate anti-Buddhabrot, we can accumulate a colour image: each period is associated to an RGB value, and we can plot RGBA pixels (with A set to 1) with additive blending. The final A channel indicates how many iterates hit the pixel, but we also have the accumulated colours that can show which periods were involved. Post-processing the high dynamic range accumulation buffer to bring it down to something that can be displayed on a screen can bring out more of the details.

Calculation can be sped up using recursive subdivision. The root of the recursion passes over the Mandelbrot set parameter plane. There are a few different cases that can occur at each point:

- **exterior**: Compute the exterior distance estimate: if it's so large that all of the subdivided child pixels would be exterior, bail out now, otherwise subdivide and recurse without plotting iterates.
- **interior**: If the interior distance is so large that all of the subdivided child pixels would be interior to the same component, subdivide and recurse with the now-known period, otherwise recurse with the general method described above; plot the iterates in both cases.
- **unknown**: If the iteration limit is reached, bail out without recursing.

The key speedup is switching to a simpler algorithm that just calculates \(z_c\) and plots the iterates with a known period. Another optimisation exploits the symmetry of the Mandelbrot set about the real axis. Finally it's possible to parallelize using OpenMP, with atomic pragmas to avoid race conditions when accumulating pixels.

Source code in C99: Ultimate Anti-Buddhabrot reference implementation. Runtime just under 20mins on a 3GHz quadcore CPU.

I started with the distance estimator for Julia sets for the case of a super-attracting fixed point:

\[ \delta = - \lim_{k \to \infty} \frac{|z_k - z_\infty| \log |z_k - z_\infty|}{\left|\frac{dz_k}{dz_0}\right|} \]

This formula is slightly different from the formula on the linked page; I haven't yet worked out exactly why it works or what the significance of the differences is. Anyway, I wanted to apply it to Newton fractals for rational functions.

Recall Newton's root finding method for a function \(G(z)\):

\[ z_{k+1} = z_k - \frac{G(z_k)}{\frac{d}{dz}G(z_k)} \]

If there are more than two roots of G, the boundary between regions that converge to different roots is a fractal. It's actually a Julia set for \(F(z)\) where

\[ F(z) = z - \frac{G(z)}{\frac{d}{dz}G(z)} \]

So we need to compute \(F^k(z)\) and \(\frac{d}{dz}F^k(z)\) for the distance estimate. By the chain rule, the derivative of the iterated map is the product of the derivatives at each step. It turns out that the actual calculations are very simple. Here's the derivation:

\[ \begin{aligned} F(z) &= z - \frac{G(z)}{\frac{d}{dz}G(z)} \\ \frac{d}{dz} F(z) &= 1 - (\frac{d}{dz} G(z) \frac{1}{\frac{d}{dz} G(z)} + G(z) \frac{d}{dz} \frac{1}{\frac{d}{dz} G(z)}) \\ &= 1 - (1 + G(z) \frac{- \frac{d}{dz}\frac{d}{dz} G(z)}{(\frac{d}{dz} G(z))^2})\\ &= \frac{G(z) \frac{d}{dz}\frac{d}{dz} G(z)}{(\frac{d}{dz} G(z))^2} \end{aligned} \]

As \(\frac{d}{dz}F(z)\) has a factor \(G(z)\), and iterations of F(z) converge to a root \(z_\infty\) where \(G(z_\infty) = 0\), the roots are super-attracting fixed points.

Now, \(G(z)\) is a rational function:

\[ G(z) = \frac{P(z)}{Q(z)} = \frac{\prod_{i} (z - p_i)^{P_i}}{\prod_{j} (z - q_j)^{Q_j}} \]

We need to compute \(F\) and \(\frac{d}{dz}F\), and happily this doesn't need the calculation of all of \(G\), \(\frac{d}{dz}G\) and \(\frac{d}{dz}\frac{d}{dz}G\), because lots of terms cancel each other out:

\[ \begin{aligned} \frac{d}{dz} G(z) &= \frac{ Q(z) \frac{d}{dz} P(z) - P(z) \frac{d}{dz} Q(z) }{ (Q(z))^2 } \\ \frac{d}{dz} P(z) &= \sum_I{\left(P_I (z - p_I)^{P_I - 1} \prod_{i \ne I} (z - p_i)^{P_i}\right)} \\ &= \left(\prod_i{ (z-p_i)^{P_i} }\right) \left(\sum_i{ \frac{P_i}{z - p_i}}\right) \\ &= \left(\sum_i{ \frac{P_i}{z - p_i}}\right) P(z) \\ \frac{d}{dz} Q(z) &= \left(\sum_j{ \frac{Q_j}{z - q_j}}\right) Q(z) \\ \frac{G(z)}{\frac{d}{dz}G(z)} &= \frac{\frac{P(z)}{Q(z)}}{\frac{Q(z)(\sum_i{ \frac{P_i}{z - p_i}})P(z) - P(z) (\sum_j{ \frac{Q_j}{z - q_j}}) Q(z)}{(Q(z))^2}} \\ &= (P / Q) / ((P Q (\sum_P) - P Q (\sum_Q)) / (Q Q)) \\ &= 1 / ((\sum_P) - (\sum_Q)) \\ F(z) &= z - \frac{1}{(\sum_i{ \frac{P_i}{z - p_i} }) - (\sum_j{ \frac{Q_j}{z - q_j}})} \\ \frac{d}{dz} F(z) &= 1 + \frac{ (\sum_j{ \frac{Q_j}{(z - q_j)^2}}) - (\sum_i{ \frac{P_i}{(z - p_i)^2} }) }{((\sum_i{ \frac{P_i}{z - p_i} }) - (\sum_j{ \frac{Q_j}{z - q_j}}))^2} \end{aligned} \]

where the derivation of the last line is left as an exercise (in other words, I couldn't be bothered to type up all the pages of equations I scribbled on paper).

Putting it into code, here's the algorithm in C99:

#include <complex.h>
#include <math.h>

typedef unsigned int N;
typedef double R;
typedef double complex C;

R // OUTPUT the distance estimate
distance
( C z0           // INPUT starting point
, N nzero        // INPUT number of zeros
, const C *zero  // INPUT the zeros
, const C *zerop // INPUT the power of each zero
, N npole        // INPUT number of poles
, const C *pole  // INPUT the poles
, const C *polep // INPUT the power of each pole
, N *which       // OUTPUT the index of the zero converged to
)
{
  C z = z0;
  C dz = 1.0;
  R eps = 0.1; // root radius, should be as large as possible
  for (N k = 0; k < 1024; ++k) // fixed iteration limit
  {
    for (N i = 0; i < nzero; ++i) // check if converged
    {
      R e = cabs(z - zero[i]);
      if (e < eps)
      {
        *which = i;
        return e * -log(e) / cabs(dz); // compute distance
      }
    }
    C sz = 0.0;
    C sz2 = 0.0;
    for (N i = 0; i < nzero; ++i)
    {
      C d = z - zero[i];
      sz += zerop[i] / d;
      sz2 += zerop[i] / (d * d);
    }
    C sp = 0.0;
    C sp2 = 0.0;
    for (N j = 0; j < npole; ++j)
    {
      C d = z - pole[j];
      sp += polep[j] / d;
      sp2 += polep[j] / (d * d);
    }
    C d = sz - sp;
    z -= 1.0 / d;
    dz *= (sp2 - sz2) / (d * d) + 1.0;
  }
  *which = nzero;
  return -1; // didn't converge
}

Complete C99 source code for distance estimated Newton fractals is available.

I previously wrote about
Mandelbrot set Newton basins
in the context of finding islands, whose nuclei are periodic points. A
periodic point of a function \(g\) satisfies \(g^p = g^0\), where
\(g^n\) denotes the \(n\)-fold composition of \(g\).

Newton's method tries to find a root of a function \(h(x) = 0\)
by iterating \(x \to x - h(x) / h'(x)\) where \(h' = dh/dx\),
namely the differential of \(h\) with respect to \(x\).
Here the quadratic function \(f\) in the Mandelbrot set is
considered as a function of \(c\), with \(f' = df/dc\),
and we want to find a preperiodic \(c_0\) satisfying
\({F_{c_0}}^{p+k} = {F_{c_0}}^{k}\).

Now, \(f\) and \(f'\) can be computed by recurrence relations:

\[\begin{aligned} F_c^0 &= 0 \\ {F'}_c^0 &= 0 \\ F_c^{n+1} &= \left(F_c^n\right)^2 + c \\ {F'}_c^{n+1} &= 2 F_c^n {F'}_c^n + 1 \end{aligned}\]

Applying Newton's method gives:

\[c \to c - \frac{F_c^{p+k} - F_c^k}{{F'}_c^{p+k} - {F'}_c^k}\]

But solving this isn't the whole story - it might converge to a preperiodic
point with a preperiod less than \(k\), say \(k'\).
(Even the target period \(p\) may be a multiple of the true period,
say \(p'\).) The next step is to find the true preperiod of the
resulting \(c_0\), which can be done by finding the smallest
\(k'\) such that \({F_{c_0}}^{p+k'} = {F_{c_0}}^{k'}\).

Enough of how (for full details read the source linked below), here are some images, each with a certain fixed period and coloured according to the true preperiod of the root converged to at each pixel.

Image *a* shows the whole Mandelbrot set with some preperiodic
basins of period 1 highlighted. You can see they surround some terminal
and branch points, but not all. Images *b* and *c* show
enlarged regions near the 1/3 and 2/5 bulbs. Image *d* starts to
get interesting - this is zoomed in near the 1/3 child of the 1/2 bulb.
Notice how only the outer filaments have basins attached. Compare with
image *e* which increases the target period to 2: here the inner
filaments have basins attached.

This leads me to conjecture that multiplicative *tuning* is at
work: the inner filaments near a child atom will have preperiodic branch
points that have a period a multiple of the parent atom's period, compared
to the corresponding preperiodic branch points at the root. This seems
to be supported by the remaining images: *f*, *g*, *h*
with periods 1, 2, 3 highlighted near the period 3 island; *i*
near a period 4 island with period 4 highlighted, and *j* near a
period 5 island with period 5 highlighted. Note the inner filaments
being highlighted when the periods match.

Newton's root finding method has a few applications when exploring the Mandelbrot set. You can use it to find the center of a component given a reasonable location estimate, or find a particular point on the boundary of a component given its center, and it is also used when tracing external rays from infinity inwards towards the boundary.

Tracing external rays is a slow process, needing many steps with many iterations of Newton's method for each step. When tracing rays to a particular component, it would be desirable to switch to Newton's method for center finding as soon as possible. A rough heuristic (ie, I haven't proved that it works everywhere) might be to trace a few rays in parallel, and check the atom domain at the ray end points, switching when all ray end points are in an atom domain of the target period, using the average of the endpoints as initial estimate.

The images show fractal basins of convergence for Newton's method for a particular period, with the atom domains of the target period highlighted, overlaid with the boundary of the Mandelbrot set. It seems that the atom domain is wholly within its own Newton basin, and also significantly larger than the corresponding component.

Combine fractals with juggling and you might get something like this.

This is from six months ago but I didn't get around to posting it before.

Videos available on the Internet Archive:

You can get the code as part of *fractaloids* here:

git clone http://code.mathr.co.uk/fractaloids.git

Or browse fractaloids on code.mathr.co.uk.
