I hacked on Inflector Gadget to make it use double precision floating point (53 bits of mantissa, compared to 24 for single precision). This allows more playing time before ugly pixelation artifacts appear. It requires OpenGL 4; if that doesn't work on your machine then you'll have to go back to v0.1 (sorry). Downloads:

inflector-gadget-0.2.tar.bz2 (sig)

inflector-gadget-0.2-win.zip (sig)

or get the freshest source from git (browse inflector-gadget):

git clone https://code.mathr.co.uk/inflector-gadget.git

There's also a new command, 'D', which prints out the inflection coordinates. I added it so I could play with adaptive precision algorithms, more on that soon.

Inspired by recent threads on FractalForums, I wrote a little gadget to do inflection mapping. This means translation and squaring of complex coordinates before regular Mandelbrot or Julia set iterations. Check the source for full details, the fragment shader GLSL is the place to start. You can download it here:

inflector-gadget-0.1.1.tar.bz2 (sig)

inflector-gadget-0.1-win.zip (sig)

or get the freshest source from git (browse inflector-gadget):

git clone https://code.mathr.co.uk/inflector-gadget.git

It's a bit rough around the edges, press H to print the help to the terminal you started it from (or read the documentation), but in short, each click wraps the pattern twice around the clicked point, allowing you to sculpt patterns out of Julia sets (or the Mandelbrot set if you press M).

Similar sculpting can be achieved by zooming into specific locations in the Mandelbrot set. The effect is much the same (the outer decorations are missing in the inflector gadget) but it takes a zillion times longer to zoom as deep as required.

The Mandelbrot set is full of hyperbolic components (the circle-like and cardioid-like regions), each of which has a nucleus at its center with a superstable periodic orbit. For example the biggest cardioid has center 0 and period 1, while the circle to the left has center -1 and period 2 (verify: \(\left(0^2 + (-1)\right)^2 + (-1) = 0\)).

Suppose we know the location of the nucleus (the \(c\) parameter) and we want to render a picture of the corresponding hyperbolic component. To do this we need to know its size. I tried to derive a size estimate myself, using Taylor series for \(\frac{\partial}{\partial z}\) and the fact that this derivative tends to \(1\) at the boundary of the component and is \(0\) at the nucleus, but the truncation error smashed everything to pieces. So I fell back on plan B: trying to understand the existing size estimate I found on ibiblio.org.

The size estimate, using the notation on that page (go read it first), is \(\frac{1}{\beta \Lambda_n^2}\). I found the page a bit confusing for the first several readings, but reading the referenced paper and thinking hard while writing notes on paper helped me crack it. The size estimate forms a small section of the paper near the start; for reference:

Structure in the parameter dependence of order and chaos for the quadratic map

Brian R Hunt and Edward Ott

J. Phys. A: Math. Gen. 30 (1997) 7067–7076

Many dynamical systems are thought to exhibit windows of attracting periodic behaviour for arbitrarily small perturbations from parameter values yielding chaotic attractors. This structural instability of chaos is particularly well documented and understood for the case of the one-dimensional quadratic map. In this paper we attempt to numerically characterize the global parameter-space structure of the dense set of periodic "windows" occurring in the chaotic regime of the quadratic map. In particular, we use scaling techniques to extract information on the probability distribution of window parameter widths as a function of period and location of the window in parameter space. We also use this information to obtain the uncertainty exponent which is a quantity that globally characterizes the ability to identify chaos in the presence of small parameter uncertainties.

The basic idea is that under iteration of \(z\to z^2+c\), the small neighbourhood of the nucleus \(c\) bounces around the complex plane, being slightly distorted and stretched each time, except for one "central interval" at which the neighbourhood of \(z_{k p}\) contains the origin \(0\) and the next iteration folds the interval in half with quadratic scaling. Now the bouncing around the plane can be approximated as linear, with scaling given by the first derivative (with respect to \(z\)), and there is only one interval \(n = kp\) in which the full quadratic map needs to be preserved. We end up with something like this:

\[\begin{aligned} z_{n + p} = & c + \frac{\partial}{\partial z} z_{n+p-1} ( \\ & \vdots \\ & c + \frac{\partial}{\partial z} z_{n+3} ( \\ & c + \frac{\partial}{\partial z} z_{n+2} ( \\ & c + \frac{\partial}{\partial z} z_{n+1} ( \\ & c + z_n^2 ) ) ) \ldots ) \end{aligned}\]

Expanding out the brackets gives:

\[ z_{n + p} = \left(\prod_{k = 1}^{p - 1} \frac{\partial}{\partial z} z_{n + k}\right) z_n^2 + \left(\sum_{m = 1}^p \prod_{k = m}^{p - 1} \frac{\partial}{\partial z} z_{n+k}\right) c \]

Writing:

\[\begin{aligned} \lambda_m &= \prod_{k = 1}^{m} \frac{\partial}{\partial z} z_{n + k} \\ \Lambda &= \lambda_{p - 1} \end{aligned}\]

the sum can have a factor of \(\Lambda\) drawn out to give:

\[ z_{n + p} = \Lambda \left( z_n^2 + \left( 1 + \lambda_1^{-1} + \lambda_2^{-1} + \ldots + \lambda_{p - 1}^{-1} \right) c \right) = \Lambda \left( z_n^2 + \beta c \right) \]

The final step is a change of variables where \(c_0\) is the nucleus:

\[\begin{aligned} Z &= \Lambda z \\ C &= \beta \Lambda^2 \left(c - c_0\right) \end{aligned}\]

Now there is self-similarity (aka renormalization):

\[Z_{n+1} = Z_n^2 + C\]

i.e. one iteration of the new variable corresponds to \(p\) iterations of the original variable. (Exercise: verify the renormalization.) Moreover the definition of \(C\) gives the scale factor in the parameter plane, which gives the size estimate when we multiply by the size of the top level window (the paper referenced above uses \(\frac{9}{4}\) as the size, corresponding to the interval \(\left[-2,\frac{1}{4}\right]\) from cusp to antenna tip - using \(\frac{1}{2}\) makes circles' sizes approximately their radii).

Finally some C99 code to show how easy this size estimate is to compute in practice (see also my mandelbrot-numerics library):

```c
#include <complex.h>

double _Complex m_size(double _Complex nucleus, int period) {
  double _Complex l = 1;
  double _Complex b = 1;
  double _Complex z = 0;
  for (int i = 1; i < period; ++i) {
    z = z * z + nucleus;
    l = 2 * z * l;
    b = b + 1 / l;
  }
  return 1 / (b * l * l);
}
```

As a bonus, using complex values gives an orientation estimate in addition to the size estimate - just use \(\arg\) and \(\left|.\right|\) on the result.

The Farey tree crops up in the Mandelbrot set; a nice introduction can be found in The Mandelbrot Set and The Farey Tree by Robert L. Devaney. The tree operates on rational numbers by Farey addition, and can be defined recursively starting from \(\left(\frac{0}{1},\frac{1}{1}\right)\) with an operation acting on neighbouring numbers:

\[\frac{a}{b} \oplus \frac{c}{d} = \frac{a + c}{b + d}\]

Section 6 of Devaney's paper begins

... Suppose \(0 < \frac{a}{b} < \frac{c}{d} < 1\) are the Farey parents of \(\frac{p}{q}\). ...

In practice it would be nice to be able to compute these Farey parents given \(\frac{p}{q}\). One approach is to perform a search through the tree, starting with bounds at 0 and 1, finding the Farey sum of the bounds and adjusting the bounds at each stage to keep the target fraction within them, stopping when it is reached. Unfortunately this has terrible asymptotic complexity, for example finding the Farey parents of \(\frac{1}{100}\) in this way would step through \(\frac{1}{2}, \frac{1}{3}, \frac{1}{4}, \ldots \frac{1}{98}\) before finding the parents \(\left(\frac{0}{1}, \frac{1}{99}\right)\).

Fortunately there is a better way suggested by Siddharth Prasad on math.stackexchange.com: using the extended Euclidean algorithm to solve for \(x\) and \(y\):

\[ q x + p y = \gcd(p, q) = 1 \]

Then one parent is \(-\frac{x}{y}\) and the other can be found by undoing the Farey addition. For example, to find the parents of \(\frac{3}{100}\):

| q  | r   | s | t   |
|----|-----|---|-----|
|    | 100 | 1 | 0   |
|    | 3   | 0 | 1   |
| 33 | 1   | 1 | -33 |
| 3  | 0   |   |     |

which gives one parent \(\frac{1}{33}\), and by Farey addition the other parent is \(\frac{2}{67}\).

The Euclidean algorithm has complexity \(O(\log q)\), which is a vast improvement over the naive search \(O(q)\).

As part of Brud's *luser stories* I wrote up a lecture/slideshow about the fractal dimension of Julia sets. You can download the PDF: julia-dim.pdf (3.5MB) and there's a page with source code and detailed results tables here: fractal dimension of Julia sets.

The image resembles the familiar Mandelbrot set because any ball around a boundary point of the Mandelbrot set contains parameters whose Julia sets have dimension arbitrarily close to 2.

Aaron Klebanoff's 2001 paper \(\pi\) in the Mandelbrot Set got me thinking, in particular the problem in the conclusion:

Another open problem is to determine the function of \(\epsilon\) that multiplies \(N(\epsilon)\). So far, we have \[a \epsilon^b N(\epsilon) \to \pi \] where we have seen \(a = 1, 2\) and \(b = 1, \frac{1}{2}\). In general, should we expect [this limit] to hold for some rational values \(a\) and \(b\)? If so, what does the pinch location in \(M\) tell us about \(a\) and \(b\)?

Here \(\epsilon\) is the distance from a point along a path heading to a cusp or pinch point between two hyperbolic components in the complement of the Mandelbrot set \(M\), and \(N(\epsilon)\) is the iteration count at that point. My conclusion after some scribbling is that \(b = 1\) for all pinch points, and \(a\) is not in general rational. My workings below aren't as formal as Klebanoff's, but hopefully the idea is sound.

As paths through the complement of \(M\) I take the external ray landing at the desired pinch point. The ray passes between lots of child bulbs on either side, which are increasing in period towards the pinch point. The increase is in increments of the period of the smaller (higher period) bulb at the pinch point. The rays landing at the pinch-point side of the roots of these child bulbs come together quickly heading out towards infinity, as they differ only in the last digits of the repeating part of the binary expansions of the corresponding rational external angles. In fact, where corresponding pairs meet in the same grid cell of the pictures below, the iteration count there is the lower of the two periods.

Next, consider the internal angles of the child bulbs of the smaller (larger period) bulb at the pinch point. These are in the sequence \(\frac{1}{p}\) where \(p\) is the period of the child bulb divided by the period of the parent bulb. Drawing triangles as \(p \to \infty\) shows that the distance to the bulb from the pinch point is approximately \(\epsilon \approx s \tan \frac{2\pi}{p}\) where \(s\) is the radius of the smaller pinch point bulb. Further, since \(\tan \theta \to \theta\) as \(\theta \to 0\), the distance reduces to \(\epsilon \approx s \frac{2\pi}{p}\).

Now, combining the previous two paragraphs, call the period of the smaller bulb at the pinch point \(P\). Then \(N(\epsilon)\) is approximately \(P p\). Substituting into the expression for \(\epsilon\), we get \(\epsilon \approx s \frac{2\pi P}{N(\epsilon)}\) which simplifies to:

\[ \pi \approx \frac{\epsilon N(\epsilon)}{2 P s}\]

So much for the theory, how does it work out in practice? Not very well, as it turns out. I plotted some numerical results for various external angles, and while they tend to flat lines as the iteration count increases which implies that \(\epsilon N(\epsilon) \to K\) for some constant \(K\), the scaling factors I calculated aren't quite there - the limits are some way off from being \(\pi\).

I suspect this is because the internal angles aren't exactly the same as geometric angles (there is a conformal transformation making up the difference), or something else is going wrong. In any case it was a fun experiment to try out - not all experiments give the desired result, which I guess is the point of experimentation in the first place.

Back in August last year I was experimenting with vector renditions of the Buddhabrot, reasoning that tracing the boundary of hyperbolic components and plotting those iterates would give a reasonable idea of what the limit of the Buddhabrot at infinite iterations would look like. It's not perfect (some straight lines instead of curves, due to problems with highly awkward behaviour near root points) and the level of detail could be a bit higher, but I think it looks quite ok.

The program expects a list of minibrot islands as input on stdin with lines like "period size cre cim", suitable values might be derived by regexp voodoo from my Mandelbrot set feature database. It then finds all child components recursively, down to a minimum size limit, and traces their boundaries and the iterates thereof (the image of a closed curve under the quadratic polynomial map is another loop). The colouring weight is adjusted by the amount of stretching involved in the transformations, as well as the period and size of the component it belongs to.

You can download the Haskell source: vector-buddhabrot.hs whose heaviest dependency is cairo, with other dependencies including strict, deepseq, parallel, and my mandelbrot-numerics library.

I implemented in Haskell some symbolic algorithms related to the Mandelbrot set, but setting up a Haskell tool-chain to try it out is quite a barrier to entry. Some time ago I wrote a server-side web interface, but I didn't run it live on the web because of security concerns (mainly denial of service from heavy computations). Last week I finally got around to installing GHCJS (pro-tip: trying to install in a sandbox is more trouble than it's worth), a Haskell to JavaScript compiler, and ported the web interface to run client-side in the browser, neatly side-stepping any security issues and also allowing offline usage.

You can try it out at /mandelbrot/web/, there's no documentation to speak of but hopefully the examples should be enough to get you started. Some calculations can take a long time, but GHCJS's scheduler avoids the dreaded browser "script taking too long" popup, provided the calculations are not in a synchronous callback (thanks to luite in the #ghcjs IRC channel on freenode for pointing this out).

The source code for the web interface is at mandelbrot-web which also requires my mandelbrot-symbolics and mandelbrot-text libraries. You can also download the compiled web interface as a tarball: web.tgz. It has been postprocessed with the Closure Compiler, as documented on the GHCJS deployment page, which reduced the size quite a bit. Pro-tip #2: Debian calls the nodejs binary "/usr/bin/nodejs", but npm packages expect just "node" - I put a little script in ~/opt/bin/node (which is in my $PATH) to make it work properly:

```sh
#!/bin/sh
exec nodejs "$@"
```

I plan to adapt the web interface further to include support for my mandelbrot-numerics library and eventually image rendering with annotations - though there won't be deep zoom support until I find an efficient JavaScript BigFloat library that I can FFI to, or an efficient pure Haskell BigFloat implementation (my variable-precision library is probably too slow and wasteful to be useful).

In a previous post I wrote about some symbolic algebra code that extracts the most parallelism possible for Mandelbrot set series approximation iterations. Unfortunately its time complexity was around \(O(n^{4.5})\) making it prohibitively slow to go beyond order 64. I was enlightened by discussions with hapf from fractalforums.com.

First it turns out that computing the series approximation coefficients to full precision is unnecessary, you can get away with 53 bits as provided by machine double, provided the exponent is extended, because the values get huge. This means the potential gains from parallelism are greatly reduced, because the overhead remains the same while the amount of work needed drops hugely.

Secondly, and more importantly, there is a simple formula for the series approximation coefficient iterations, which means that all the clever-stupid symbolic algebra can be done away with and replaced by a couple of nested loops, with the bonus that the order can be changed at runtime without needing generated code beforehand.

For the series approximation coefficients (using the notation from the previous post, extended with \(b\) for the series approximation of the derivative for distance estimation):

\[\begin{aligned} \left<\left< z_n \right>\right> &= \sum_{k=1}^\infty{a_{k,n} {\left<\left< c \right>\right>}^k} \\ \left<\left< z'_n \right>\right> &= \sum_{k=1}^\infty{b_{k,n} {\left<\left< c \right>\right>}^k} \end{aligned}\]

the iterations become:

\[\begin{aligned} a_{1,n+1} &= 2 z_n a_{1,n} + 1 \\ a_{k,n+1} &= 2 z_n a_{k,n} + \sum_{j=1}^{k-1} a_{j,n} a_{k-j,n} \\ b_{k,n+1} &= 2 \left(z_n b_{k,n} + z'_n a_{k,n} + \sum_{j=1}^{k-1} a_{j,n} b_{k-j,n}\right) \end{aligned}\]

The sum for the \(a\) coefficients has some redundancy, with terms multiplied in both orders - this can be optimized using case analysis for odd and even coefficient indices (left as an exercise).

Finally, hapf reassured me that it's normal for the number of per-pixel iterations that series approximation can skip to reach a plateau when zooming deeper towards a particular reference point: doubling the order of the approximation gets you an extra period's (of the reference) worth of skipping.

I implemented this in mandelbrot-perturbator but there are some issues with glitches with high orders at low zoom levels, so I think I'll make the order be determined automatically by the size of the reference minibrot.

Wolf Jung pointed me towards an interesting pattern in the angled internal addresses corresponding to the external angles in a previous post. The pattern is:

\(1_{\frac{1}{2}} \to 2_{\frac{1}{2}} \to 3_{\frac{1}{2}} \to 4_{\frac{1}{2}} \to 5_{\frac{5}{11}} \to 54_{\frac{1}{2}} \to 64_{\frac{1}{2}} \to 69\)

\(1_{\frac{1}{2}} \to 2_{\frac{1}{2}} \to 3_{\frac{1}{2}} \to 4_{\frac{1}{2}} \to 5_{\frac{5}{11}} \to 54_{\frac{1}{2}} \to 64_{\frac{1}{2}} \to 69_{\frac{2}{3}} \to 143\)

\(1_{\frac{1}{2}} \to 2_{\frac{1}{2}} \to 3_{\frac{1}{2}} \to 4_{\frac{1}{2}} \to 5_{\frac{5}{11}} \to 54_{\frac{1}{2}} \to 64_{\frac{1}{2}} \to 69_{\frac{2}{3}} \to 143_{\frac{2}{3}} \to 291\)

\(1_{\frac{1}{2}} \to 2_{\frac{1}{2}} \to 3_{\frac{1}{2}} \to 4_{\frac{1}{2}} \to 5_{\frac{5}{11}} \to 54_{\frac{1}{2}} \to 64_{\frac{1}{2}} \to 69_{\frac{2}{3}} \to 143_{\frac{2}{3}} \to 291_{\frac{2}{3}} \to 587\)

where the rotation number (internal angle) \(\frac{2}{3}\) is the same each time. The alternative automatic zooming method at the end of my earlier post doesn't respect this, it would unpredictably choose \(\frac{1}{3}\) or \(\frac{2}{3}\) with no efficient way to check whether you ended up with the desired alternative. I thought I'd check to see if this ambiguity matters in practice, so I rendered 16 images with angled internal addresses corresponding to all possible choices:

\[\left\{ 1_{\frac{1}{2}} \to 2_{\frac{1}{2}} \to 3_{\frac{1}{2}} \to 4_{\frac{1}{2}} \to 5_{\frac{5}{11}} \to 54_{\frac{1}{2}} \to 64_{\frac{1}{2}} \to 69_{\frac{a}{3}} \to 143_{\frac{b}{3}} \to 291_{\frac{c}{3}} \to 587_{\frac{d}{3}} \to 1179 \\ : a,b,c,d \in \left\{1,2\right\} \right\}\]

The good news is that the central morphed tree-like pattern is almost indistinguishable across all the variants, but the way it aligns with the surrounding ring of features shows some clear differences - comparing the first and last images, one has the longest thin filament to the left of a larger blob, the other to the right. The rotations are different of course, but I expected that - I didn't expect the alignment of the surrounding ring of features to be so consistent across the images.

Generating deep zoom images of the Mandelbrot set by iterating \( z_{n+1} = F(z_n, c) \) requires high precision for \(z_n\) and \(c\). Perturbation techniques use a high precision reference with low precision deltas, which I write \( \left<\left< . \right>\right> \):

\[ z_{n+1} + \left<\left< z_{n+1} \right>\right> \gets F\left( z_n + \left<\left< z_n \right>\right> , c + \left<\left< c \right>\right> \right) \]

Boring algebraic manipulation gives the perturbed iteration equation:

\[ \left<\left< z_{n+1} \right>\right> \gets \left<\left< F \right>\right> \left(z_n, \left<\left< z_n \right>\right>, \left<\left< c \right>\right> \right) \]

Series approximation techniques assume \( \left<\left< z_n \right>\right> \) can be expressed in terms of \( \left<\left< c \right>\right> \):

\[ \left<\left< z_n \right>\right> = \sum_{k=1}^\infty{a_{k,n} {\left<\left< c \right>\right>}^k} \]

Substituting the series expression for \( \left<\left< z_n \right>\right> \) into the right hand side of the perturbed iteration equation and collecting terms by powers of \( \left<\left< c \right>\right> \) gives a collection of iteration equations for the coefficients \( a_k \):

\[ a_{k, n + 1} = A_k \left( z_n , \left\{ a_{j,n} : j \le k \right\} \right) \]

These recurrence relations are in complex variables, but arbitrary precision arithmetic libraries like MPFR generally work with real variables, so more boring algebraic manipulation expands each recurrence into a pair of recurrences involving the real and imaginary parts.

The functions in libraries like MPFR are quite basic - typically they take two input variables and an output variable, and compute a single arithmetic operation. Modern CPUs (and GPUs) have many cores, and systems like OpenMP make it possible to parallelize code to utilize more than one core. Parallel loops perform the same operation multiple times (compare with SIMD), but the recurrence expressions are a mixed-up mash of operations. So I canonicalize the recurrences into a representation that allows extracting as much uniform-operation parallelism as possible.

The outer-most operation is a sum, with each term being a product of a small integer with another sum. Each inner sum term is an optional negation of an optional scaling by a power of two of a product of powers. Negation and scaling by a power of two are very cheap operations. Squaring a high precision real number is cheaper than multiplying two different numbers, so powers are factored to use as much squaring as possible.

The computations are split into a sequence of phases, with each phase performing a parallel operation. The first phases compute all the squarings that are necessary for the powers. The next phases perform all the non-squaring products. The product (or sum) of a number of terms can be performed partially in parallel by careful bracketing: a * b * c * d becomes (a * b) * (c * d) with the two inner products performed in parallel before the outer product. Here's a diagram for an order 4 series:

The code generator is implemented in Haskell, and outputs C code using MPFR and OpenMP. You can find it as part of my (still-experimental) mandelbrot-perturbator project. The Haskell is really very ugly though (lots of dirty String concatenation to emit C code), not to mention inefficient - expect high order series to take a vast amount of time. The generator theoretically supports arbitrary integer powers in \(z \to z^p+c\) but I've only tested with \(p = 2\) because the rest of the perturbator code (mostly C-style C++ with a few templates and overloaded numerics) hasn't got some needed functionality for other powers. Eventually I plan to extend the code generator to emit the missing pieces, but first I want to experiment with native precision operations for some coefficients in the hope that it isn't so deathly slow for high order series at very deep zooms. Currently rendering a (boring) test zoom, only 400 frames away from 1e-10000. It's taking a while though, might be done by May.

Note: most of this post was written in May last year, the generator probably changed a bit since then.

In a previous post I described a way to sculpt patterns in the Mandelbrot set, noting that I hoped to automate it in the future. This week I finally got around to it. The first 5 images in the gallery below show the first few stages of a Julia morphing sequence, with some periods annotated and external rays drawn on. I derived the external angles from periodic truncation at the relevant period of the bit sequence given by tracing external rays outwards towards infinity.

The external angles are:

period 5:
.(01111)
.(10000)

period 54:
.(011111000001111100000111110000011111000001111011111000)
.(011111000001111100000111110000011111000001111100000111)

period 69:
.(011111000001111100000111110000011111000001111011111000011111000010000)
.(011111000001111100000111110000011111000001111011111000100000111101111)

period 143:
.(01111100000111110000011111000001111100000111101111100010000011110111101111100000111110000011111000001111100000111101111100001111100001000010000)
.(01111100000111110000011111000001111100000111101111100010000011110111101111100000111110000011111000001111100000111101111100010000011110111101111)

period 291:
.(011111000001111100000111110000011111000001111011111000100000111101111011111000001111100000111110000011111000001111011111000100000111101111011110111110000011111000001111100000111110000011110111110001000001111011110111110000011111000001111100000111110000011110111110000111110000100001000010000)
.(011111000001111100000111110000011111000001111011111000100000111101111011111000001111100000111110000011111000001111011111000100000111101111011110111110000011111000001111100000111110000011110111110001000001111011110111110000011111000001111100000111110000011110111110001000001111011110111101111)

Letting the period 5 angles be .(a) and .(b), and the period 69 angles be .(A) and .(B), the period 143 angles can be written .(C) = .(BAb) and .(D) = .(BBa), and moreover the period 291 angles are then .(DCb) and .(DDa) - this suggests a pattern which could be extrapolated, and indeed repeating this concatenation process seems to work as the last 8 images in the gallery above show.

**However** tracing the external rays to a sufficient depth that Newton's method iterations can find the correct periodic nucleus is asymptotically \(O(p^2)\) for period \(p\), and the period is more than doubled each step, so the runtime increases by a factor of more than 4 for each successive location in the sequence. This makes it far too slow to be practical - it's much quicker to do the zooming and point selection by hand/eye (the last in the sequence in the gallery took over 24 hours just to find the location on my machine, dwarfing the time needed to render the actual image).

Here's the core of the code used to render the final sequence, using my mandelbrot-numerics and mandelbrot-symbolics libraries to calculate viewing parameters, and mandelbrot-perturbator to render the images. I need to turn the **bin/glfw3.c** of the latter project into a reusable library too, because the full program is pretty much the same minus input handling etc.
```c
...
extern int main(int argc, char **argv) {
  ...
  // initialize Julia morphing state
  int sharpness = 8;
  m_block a, b;
  m_block_init(&a);
  m_block_init(&b);
  m_block_from_string(&a, "01111");
  m_block_from_string(&b, "10000");
  m_binangle as[2], bs[2];
  m_binangle_init(&as[0]);
  m_binangle_init(&bs[0]);
  m_binangle_from_string(&as[0], ".(011111000001111100000111110000011111000001111011111000011111000010000)");
  m_binangle_from_string(&bs[0], ".(011111000001111100000111110000011111000001111011111000100000111101111)");
  m_binangle_init(&as[1]);
  m_binangle_init(&bs[1]);
  mpq_t q;
  mpq_init(q);
  mpc_t c[2], delta;
  mpc_init2(c[0], 53);
  mpc_init2(c[1], 53);
  mpc_init2(delta, 53);
  // create renderer
  struct perturbator *context = perturbator_new(workers, width, height, maxiters, chunk, escape_radius, glitch_threshold);
  for (int depth = 0; depth < 20; ++depth) {
    // poll GUI for quit event
    glfwPollEvents();
    if (glfwWindowShouldClose(window)) { break; }
    // trace external ray to nucleus
    int w = (depth + 1) & 1;
    m_binangle_to_rational(q, &as[1 - w]);
    m_r_exray_in *ray = m_r_exray_in_new(q, sharpness);
    for (int s = 0; s < 2 * as[1 - w].per.length * sharpness; ++s) {
      m_r_exray_in_step(ray);
    }
    m_r_exray_in_get(ray, c[1 - w]);
    m_r_nucleus(c[1 - w], c[1 - w], as[1 - w].per.length, 64);
    if (depth > 0) {
      // compute view from two nuclei in the embedded Julia set
      mpc_sub(delta, c[w], c[1 - w], MPC_RNDNN);
      mpc_abs(state.radius, delta, MPFR_RNDN);
      mpfr_mul_d(state.radius, state.radius, 12, MPFR_RNDN);
      mpfr_set_prec(state.centerx, mpc_get_prec(c[w]));
      mpfr_set_prec(state.centery, mpc_get_prec(c[w]));
      mpfr_set(state.centerx, mpc_realref(c[w]), MPFR_RNDN);
      mpfr_set(state.centery, mpc_imagref(c[w]), MPFR_RNDN);
      // render raw data and wait for it to complete
      perturbator_start(context, state.centerx, state.centery, state.radius);
      perturbator_stop(context, false);
      // refresh image and save to PPM file
      glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_RGBA, GL_FLOAT, perturbator_get_output(context));
      refresh_callback(&state);
      glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, ppm);
      printf("P6\n%d %d\n255\n", width, height);
      for (int y = height - 1; y >= 0; --y) {
        fwrite(ppm + y * width * 3, width * 3, 1, stdout);
      }
      fflush(stdout);
    }
    // extrapolate next external angle pair
    m_block_append(&as[w].per, &bs[1-w].per, &as[1-w].per);
    m_block_append(&as[w].per, &as[w].per, &b);
    m_block_append(&bs[w].per, &bs[1-w].per, &bs[1-w].per);
    m_block_append(&bs[w].per, &bs[w].per, &a);
  }
  ...
}
```

You can download the full automated julia morphing example code which you can drop into a mandelbrot-perturbator git clone bin/ directory and compile with the hints at the top of the file.

At some point I'll try a hybrid approach that mimics the hand/eye method more closely, approximating the zoom depth needed for each successive morph and using just the pattern of periods with Newton's method to find the next nucleus - but it might find the "wrong" nucleus (quite likely, due to symmetry), and there's a chance it might go way off...

In summary: this automated Julia morphing works in theory, but it's not practical computationally.
