**BEGIN UPDATE** I made a page about this Kalles Fraktaler 2 + GMP project. Fresh binary with bugfixes as of 2017-04-06! **END UPDATE**

Back at the end of 2014 I wrote some little programs to manipulate the output from Kalles Fraktaler: kf-extras. At that time I had tried to compile the source code on Linux, but failed miserably. More recently I tried again, and was finally successful. I then set about making some improvements, chief among which is using GMP (or optionally MPFR) for the high precision computations, instead of the original homebrew implementation. I used the Boost wrappers for C++. The advantage is a very significant speed boost from those libraries' optimized implementations.

My forked source is available, in the claude-gmp branch:

git clone https://code.mathr.co.uk/kalles-fraktaler-2.git
cd kalles-fraktaler-2
git checkout claude-gmp

There are 64-bit Windows binaries available too (they work fine in WINE), cross-compiled from Debian using MINGW. The first one listed below, updated to fix a crasher bug in the earlier build from the same day, is probably the one you want, unless you're testing for regressions:

- 2.11.1+gmp.20170330.1 (sig)
- 2.11.1+gmp.20170330 (sig)
- 2.11.1+gmp.20170313 (sig)
- 2.11.1+gmp.20170307 (sig)
- 2.9.3+gmp.20170307 (sig)

Significant differences from upstream (apart from the GMP support) are:

- make-based build system for MINGW g++;
- various C++ fixes to make g++ happy;
- long double support built-in instead of separate DLL;
- zoom depth limited only by the system memory needed to store very high precision numbers (GMP has a limit too but it is unlikely to be reached).

Support is available via FractalForums, or send me an email.

I hacked on Inflector Gadget to make it use double precision floating point (53 bits of mantissa, compared to 24 for single precision). This allows more playing time before ugly pixelation artifacts appear. It requires OpenGL 4; if that doesn't work on your machine then you'll have to go back to v0.1 (sorry). Downloads:

- inflector-gadget-0.2.tar.bz2 (sig)
- inflector-gadget-0.2-win.zip (sig)

or get the freshest source from git (browse inflector-gadget):

git clone https://code.mathr.co.uk/inflector-gadget.git

There's also a new command, 'D', which prints out the inflection coordinates. I added it so I could play with adaptive precision algorithms, more on that soon.

Inspired by recent threads on FractalForums, I wrote a little gadget to do inflection mapping. This means translation and squaring of complex coordinates before regular Mandelbrot or Julia set iterations. Check the source for full details, the fragment shader GLSL is the place to start. You can download it here:

- inflector-gadget-0.1.1.tar.bz2 (sig)
- inflector-gadget-0.1-win.zip (sig)

or get the freshest source from git (browse inflector-gadget):

git clone https://code.mathr.co.uk/inflector-gadget.git

It's a bit rough around the edges: press H to print the help to the terminal you started it from (or read the documentation). In short, each click wraps the pattern twice around the clicked point, allowing you to sculpt patterns out of Julia sets (or the Mandelbrot set if you press M).

Similar sculpting can be achieved by zooming into specific locations in the Mandelbrot set. The effect is much the same (though the outer decorations are missing in the inflector gadget), but it takes a zillion times longer to zoom to the required depth.

My video piece Monotone has been accepted to MADATAC 08 (2017). The exhibition runs January 12th to February 5th, at Centro Conde Duque, Madrid, Spain, and there is also a screening on January 17th.

There were some issues with video codecs; this is the invocation that worked out:

ffmpeg -i video.mkv -i audio.wav \
  -pix_fmt yuv420p -codec:v libx264 -profile:v high -level:v 4.1 -b:v 20M -b:a 192k \
  monotone.mov

The Mandelbrot set is full of hyperbolic components (the circle-like and cardioid-like regions), each of which has a nucleus at its center with a superstable periodic orbit. For example the biggest cardioid has center 0 and period 1, while the circle to the left has center -1 and period 2 (verify by: \(\left(0^2 + (-1)\right)^2 + (-1) = 0\)).

Suppose we know the location of the nucleus (the \(c\) parameter) and we want to render a picture of the corresponding hyperbolic component. To do this we need to know its size. I tried to derive a size estimate myself, using a Taylor series for \(\frac{\partial}{\partial z}\) together with the fact that this derivative tends to \(1\) at the boundary of the component and is \(0\) at the nucleus, but the truncation error smashed everything to pieces. So I fell back on plan B: trying to understand the existing size estimate I found on ibiblio.org.

The size estimate, using the notation on that page (go read it first), is \(\frac{1}{\beta \Lambda_n^2}\). I found the page confusing on the first several readings, but reading the referenced paper and thinking hard while writing notes on paper helped me crack it. The size estimate forms a small section near the start of the paper, which for reference is:

Structure in the parameter dependence of order and chaos for the quadratic map

Brian R Hunt and Edward Ott

J. Phys. A: Math. Gen. 30 (1997) 7067–7076

Many dynamical systems are thought to exhibit windows of attracting periodic behaviour for arbitrarily small perturbations from parameter values yielding chaotic attractors. This structural instability of chaos is particularly well documented and understood for the case of the one-dimensional quadratic map. In this paper we attempt to numerically characterize the global parameter-space structure of the dense set of periodic "windows" occurring in the chaotic regime of the quadratic map. In particular, we use scaling techniques to extract information on the probability distribution of window parameter widths as a function of period and location of the window in parameter space. We also use this information to obtain the uncertainty exponent which is a quantity that globally characterizes the ability to identify chaos in the presence of small parameter uncertainties.

The basic idea is that under iteration of \(z\to z^2+c\), the small neighbourhood of the nucleus \(c\) bounces around the complex plane, being slightly distorted and stretched each time, except for one "central interval" at which the neighbourhood of \(z_{k p}\) contains the origin \(0\) and the next iteration folds the interval in half with quadratic scaling. Now the bouncing around the plane can be approximated as linear, with scaling given by the first derivative (with respect to \(z\)), and there is only one interval \(n = kp\) in which the full quadratic map needs to be preserved. We end up with something like this:

\[\begin{aligned} z_{n + p} = & c + \frac{\partial}{\partial z} z_{n+p-1} ( \\ & \vdots \\ & c + \frac{\partial}{\partial z} z_{n+3} ( \\ & c + \frac{\partial}{\partial z} z_{n+2} ( \\ & c + \frac{\partial}{\partial z} z_{n+1} ( \\ & c + z_n^2 ) ) ) \ldots ) \end{aligned}\]

Expanding out the brackets gives:

\[ z_{n + p} = \left(\prod_{k = 1}^{p - 1} \frac{\partial}{\partial z} z_{n + k}\right) z_n + \left(\sum_{m = 1}^p \prod_{k = m}^{p - 1} \frac{\partial}{\partial z} z_{n+k}\right) c \]

Writing:

\[\begin{aligned} \lambda_m &= \prod_{k = 1}^{m} \frac{\partial}{\partial z} z_{n + k} \\ \Lambda &= \lambda_{p - 1} \end{aligned}\]

the sum can have a factor of \(\Lambda\) drawn out to give:

\[ z_{n + p} = \Lambda \left( z_n^2 + \left( 1 + \lambda_1^{-1} + \lambda_2^{-1} + \ldots + \lambda_{p - 1}^{-1} \right) c \right) = \Lambda \left( z_n^2 + \beta c \right) \]

The final step is a change of variables where \(c_0\) is the nucleus:

\[\begin{aligned} Z &= \Lambda z \\ C &= \beta \Lambda^2 \left(c - c_0\right) \end{aligned}\]

Now there is self-similarity (aka renormalization):

\[Z_{n+1} = Z_n^2 + C\]

ie, one iteration of the new variable corresponds to \(p\) iterations of the original variable. (Exercise: verify the renormalization.) Moreover the definition of \(C\) gives the scale factor in the parameter plane, which gives the size estimate when we multiply by the size of the top level window (the paper referenced above uses \(\frac{9}{4}\) as the size, corresponding to the interval \(\left[-2,\frac{1}{4}\right]\) from cusp to antenna tip - using \(\frac{1}{2}\) makes circles' sizes approximately their radii).

Finally some C99 code to show how easy this size estimate is to compute in practice (see also my mandelbrot-numerics library):

double _Complex m_size(double _Complex nucleus, int period) {
  double _Complex l = 1;
  double _Complex b = 1;
  double _Complex z = 0;
  for (int i = 1; i < period; ++i) {
    z = z * z + nucleus;
    l = 2 * z * l;
    b = b + 1 / l;
  }
  return 1 / (b * l * l);
}

As a bonus, using complex values gives an orientation estimate in addition to the size estimate: just use \(\arg\) and \(\left|\cdot\right|\) on the result.

My video piece Monotone has been accepted to the Mozilla Festival art exhibition. Mozilla Festival 2016 takes place October 28-30, at Ravensbourne College, London.

Since submitting the pre-rendered video loop I've been working on improving the real-time rendering mode of the Monotone software. The main bottleneck at the moment is the histogram equalisation that takes the high dynamic range calculations down to a low dynamic range image for display. I did manage to get a large speed boost by calculating the histogram on a \(\frac{1}{4} \times \frac{1}{4}\) downscaled image, but on my hardware it only achieves \(\frac{1}{2} \times \frac{1}{2}\) of the desired resolution (HD 1920x1080): on my NVIDIA GTX 550 Ti with 192 CUDA cores I get 960x540 at 30 frames per second. Recent hardware has thousands of cores, so perhaps it's just a matter of throwing more grunt at the problem.

If you want to try it (and have OpenGL 4 capable hardware, and development headers installed for GLFW and JACK, among other things; only tested on Debian):

git clone https://code.mathr.co.uk/monotone.git
git clone https://code.mathr.co.uk/clive.git
cd monotone/src
make
./monotone

You can also browse the monotone source code repository. If you do have a significantly more powerful GPU than me, you can try to edit the source code monotone.cpp to change "#define SCALE 2" to "#define SCALE 1", which will make it target 1920x1080 instead of half that in each dimension. I'd love to hear back if you get it working (or if you have trouble getting it running, maybe I can help).

**UPDATE** here are some photos, the projection was really
impressive, so I'm satisfied even though the sound aspect was absent:

The festival was pretty interesting, with many many things all going on at once. The highlight was the Sonic Pi workshop (though I spent most of the time dist-upgrade-ing to Debian Stretch so I could install it), and the Nature In Code workshop was also interesting (though it was packed full and uncomfortable so I didn't attend the full session).

Back in 2008 I made a patch with Pure-data, Gem, and PdLua that took the form of a sliding tile puzzle with generative ambient drones. You can see a video in my 2009 blog post. In 2013 I updated it with extra features (tiles that spin and flip over occasionally, plus black borders, and sound based on rhythms) but never published the changes. I recently got given a Raspberry Pi 3 B, and thought this would be a nice project to test out its graphics capabilities.

I installed the Raspbian Jessie full image on a 32GB SD card, and installed a few additions. Installing mesa-utils and running glxgears was very disappointing: 40fps with a tiny window and very high CPU load, with glxinfo reporting OpenGL 3.0 via Gallium llvmpipe. Some internet searching later I found the hint to run sudo raspi-config and enable the experimental OpenGL driver in the advanced settings. Then glxgears ran at 60fps with no load, and glxinfo reported OpenGL 2.1 via Broadcom.

I uncommented the deb-src line in /etc/apt/sources.list so I could run

sudo apt-get update
sudo apt-get build-dep puredata
sudo apt-get build-dep gem
sudo apt-get install liblua5.2-dev subversion

Then I got Pd source from git, Gem source from git, and pdlua source from svn (note: I got the whole trunk repository as the build system there is not self-contained):

mkdir ~/opt/src
cd ~/opt/src
git clone git://git.code.sf.net/p/pure-data/pure-data pure-data-git
git clone git://git.code.sf.net/p/pd-gem/gem gem-git
svn checkout svn://svn.code.sf.net/p/pure-data/svn/trunk pure-data-svn

I applied a small patch to force generating mipmaps in Gem's pix_snap2tex (hopefully in the future this patch won't be necessary) and compiled everything like this:

export CFLAGS="-g -O3 -mfloat-abi=hard -mfpu=neon"
export CXXFLAGS="${CFLAGS}"
cd pure-data-git
./autogen.sh
./configure --prefix=${HOME}/opt --enable-jack
make -j 4
make install
cd ..
cd pure-data-svn/externals/loaders/pdlua
make prefix=${HOME}/opt objectsdir=${HOME}/opt/lib/pd/extra \
    pkglibdir=${HOME}/opt/lib/pd/extra \
    CFLAGS="${CFLAGS} -Wall -W -g -I/usr/include/lua5.2" \
    ALL_LIBS="-llua5.2 -lc"
make prefix=${HOME}/opt objectsdir=${HOME}/opt/lib/pd/extra \
    pkglibdir=${HOME}/opt/lib/pd/extra \
    CFLAGS="${CFLAGS} -Wall -W -g -I/usr/include/lua5.2" \
    ALL_LIBS="-llua5.2 -lc" install
cd ../../../..
cd gem-git
./autogen.sh
./configure --prefix=${HOME}/opt
make -j 4
make install
cd ..

Building Gem especially takes a while (37 minutes on the quad-core Pi). Finally I tried running the Puzzle:

qjackctl & # configure and start JACK
git clone https://code.mathr.co.uk/puzzle.git
cd puzzle
./start-simple.sh

It works very comfortably at 512x512 window size, but (WARNING) I did try it at 1024x1024 and after a minute or so of struggling to keep up it eventually locked up my Pi hard and I had to cut the power. Luckily on next boot it seems everything is fine (no data loss). Here's a grab of the smaller size:

Fun!

As part of Brud's *luser stories* I wrote up a lecture/slideshow about the fractal dimension of Julia sets. You can download the PDF: julia-dim.pdf (3.5MB) and there's a page with source code and detailed results tables here: fractal dimension of Julia sets.

The image resembles the familiar Mandelbrot set because any ball around a boundary point of the Mandelbrot set contains parameters whose Julia sets have dimension arbitrarily close to 2.

Aaron Klebanoff's 2001 paper \(\pi\) in the Mandelbrot Set got me thinking, in particular the problem in the conclusion:

Another open problem is to determine the function of \(\epsilon\) that multiplies \(N(\epsilon)\). So far, we have \[a \epsilon^b N(\epsilon) \to \pi \] where we have seen \(a = 1, 2\) and \(b = 1, \frac{1}{2}\). In general, should we expect [this limit] to hold for some rational values \(a\) and \(b\)? If so, what does the pinch location in \(M\) tell us about \(a\) and \(b\)?

Here \(\epsilon\) is the distance from a point along a path heading to a cusp or pinch point between two hyperbolic components in the complement of the Mandelbrot set \(M\), and \(N(\epsilon)\) is the iteration count at that point. My conclusion after some scribbling is that \(b = 1\) for all pinch points, and \(a\) is not in general rational. My workings below aren't as formal as Klebanoff's, but hopefully the idea is sound.

As paths through the complement of \(M\) I take the external ray landing at the desired pinch point. The ray passes between lots of child bulbs on either side, which are increasing in period towards the pinch point. The increase is in increments of the period of the smaller (higher period) bulb at the pinch point. The rays landing at the pinch-point side of the roots of these child bulbs come together quickly heading out towards infinity, as they differ only in the last digits of the repeating part of the binary expansions of the corresponding rational external angles. In fact, where corresponding pairs meet in the same grid cell of the pictures below, the iteration count there is the lower of the two periods.

Next, consider the internal angles of the child bulbs of the smaller (higher period) bulb at the pinch point. These are in the sequence \(\frac{1}{p}\) where \(p\) is the period of the child bulb divided by the period of the parent bulb. Drawing triangles as \(p \to \infty\) shows that the distance to the bulb from the pinch point is approximately \(\epsilon \approx s \tan \frac{2\pi}{p}\) where \(s\) is the radius of the smaller pinch point bulb. Further, since \(\tan \theta \to \theta\) as \(\theta \to 0\), the distance reduces to \(\epsilon \approx s \frac{2\pi}{p}\).

Now, combining the previous two paragraphs, call the period of the smaller bulb at the pinch point \(P\). Then \(N(\epsilon)\) is approximately \(P p\). Substituting into the expression for \(\epsilon\), we get \(\epsilon \approx s \frac{2\pi P}{N(\epsilon)}\) which simplifies to:

\[ \pi \approx \frac{\epsilon N(\epsilon)}{2 P s}\]

So much for the theory; how does it work out in practice? Not very well, as it turns out. I plotted some numerical results for various external angles, and while they tend to flat lines as the iteration count increases, which implies that \(\epsilon N(\epsilon) \to K\) for some constant \(K\), the scaling factors I calculated aren't quite there: the limits are some way off from being \(\pi\).

I suspect this is because the internal angles aren't exactly the same as geometric angles (there is a conformal transformation making up the difference), or something else is going wrong. In any case it was a fun experiment to try out: not all experiments give the desired result, which I guess is the point of experimentation in the first place.

Back in August last year I was experimenting with vector renditions of the Buddhabrot, reasoning that tracing the boundary of hyperbolic components and plotting those iterates would give a reasonable idea of what the limit of the Buddhabrot at infinite iterations would look like. It's not perfect (some straight lines instead of curves, due to problems with highly awkward behaviour near root points) and the level of detail could be a bit higher, but I think it looks quite ok.

The program expects a list of minibrot islands as input on stdin, with lines like "period size cre cim"; suitable values might be derived by regexp voodoo from my Mandelbrot set feature database. It then finds all child components recursively, down to a minimum size limit, and traces their boundaries and the iterates thereof (the image of a closed curve under the quadratic polynomial map is another loop). The colouring weight is adjusted by the amount of stretching involved in the transformations, as well as the period and size of the component it belongs to.

You can download the Haskell source: vector-buddhabrot.hs whose heaviest dependency is cairo, with other dependencies including strict, deepseq, parallel, and my mandelbrot-numerics library.

The Collatz conjecture involves a piecewise integer recurrence:

\[ c(n) = \left\{ \begin{array}{ll} n/2 & n\text{ even} \\ 3n+1 & n\text{ odd} \end{array} \right. \]

The unsolved conjecture is that iterating \(c\) starting with a positive integer will always reach \(1\).

The integer function \(c\) can be extended to real and complex numbers like this (strictly, this interpolates the "shortcut" variant of the map, with \(\frac{3n+1}{2}\) in the odd case, which has the same dynamics):

\[ c(z) = \frac{1}{4}(1 + 4z - (1 + 2z)\cos(\pi z)) \]

and iterating this function for each pixel of an image gives an escape time fractal with escape condition \(|\Im(z)| \to \infty\). I implemented this in Fragmentarium, also calculating the running derivative for distance estimate colouring:

#include "Progressive2D.frag"

float pi = 3.141592653;

// hyperbolic cosine
float cosh(float x) { return (exp(x) + exp(-x))/2.0; }

// hyperbolic sine
float sinh(float x) { return (exp(x) - exp(-x))/2.0; }

// complex multiplication
vec2 cmul(vec2 a, vec2 b) { return vec2(a.x * b.x - a.y * b.y, a.x * b.y + a.y * b.x); }

// complex cosine
vec2 ccos(vec2 a) { return vec2(cos(a.x) * cosh(a.y), -sin(a.x) * sinh(a.y)); }

// complex sine
vec2 csin(vec2 a) { return vec2(sin(a.x) * cosh(a.y), cos(a.x) * sinh(a.y)); }

vec3 color(vec2 z0) {
  // initialize
  vec2 z = z0;
  vec2 dz = vec2(1.0, 0.0);
  int n = 0;
  // iterate
  for (int i = 1; i < 2048; ++i) {
    // bail out if z gets too big
    if (abs(z.y) > 4.0) { n = i; break; }
    // do one step of generalized Collatz function
    dz = cmul(dz, (vec2(4.0,0.0) - cmul(vec2(1.0, 0.0) + 2.0 * z, pi * -csin(pi * z)) - 2.0 * ccos(pi * z)) / 4.0);
    z = (vec2(1.0,0.0) + 4.0 * z - cmul(vec2(1.0,0.0) + 2.0 * z, ccos(pi * z))) / 4.0;
  }
  // colour pixel according to distance estimate
  float de = length(z) * log(length(z)) / length(dz);
  if (n == 0) { return vec3(1.0, 0.7, 0.0); }
  return vec3(tanh(clamp(de / length(dFdx(z0)), 0.0, 4.0)));
}

You can download the collatz.frag.

Back at the end of April last year I think I was futzing about on math.stackexchange.com answering a question about rendering negative multibrot sets, for example one produced by iterations of \(z \to z^{-2} + c\). I tried applying the atom domain colouring from the regular Mandelbrot set, but found it looked better if I accumulated all the partials with additive blending, not just the final domain. Here's a zoomed in view:

I implemented it as a GLSL fragment shader in Fragmentarium, here's the source code (which you can download too):

// Mandelbrot set for \( z \to z^{-n} + c \) coloured by Lyapunov atom domains
// Created: Thu Apr 30 15:10:00 2015
#include "Progressive2D.frag"
#include "Complex.frag"

const float pi = 3.141592653589793;
const float phi = (sqrt(5.0) + 1.0) / 2.0;

#group Lyapunov atom domains
uniform int Iterations; slider[10,200,5000]
uniform int Power; slider[-16,-2,16]

vec3 color(vec2 c) {
  // critical point is \( 0 \) for positive Power, and \( 0^Power + c = c \)
  // critical point is \( \infty \) for negative Power, and \( \infty^Power + c = c \)
  // so start iterating from \( c \)
  vec2 z = c;
  // Lyapunov exponent accumulator
  float le = 0.0;
  // atom domain accumulator
  float minle = 0.0;
  int mini = 1;
  // accumulated colour
  vec4 rgba = vec4(0.0);
  for (int i = 0; i < Iterations; ++i) {
    // \( zn1 \gets z^{Power - 1} \)
    vec2 zn1 = vec2(1.0, 0.0);
    for (int j = 0; j < abs(Power - 1); ++j) { zn1 = cMul(zn1, z); }
    if (Power < 0) { zn1 = cInverse(zn1); }
    // \( dz \gets Power z^{Power - 1} \)
    vec2 dz = float(Power) * zn1;
    // \( z \gets z^{Power} + c \)
    z = cMul(zn1, z) + c;
    // \( le \gets le + 2 \log |dz| \)
    float dle = log(dot(dz, dz));
    le += dle;
    // if the delta is smaller than any previous, accumulate the atom domain
    if (dle < minle) {
      minle = dle;
      mini = i + 1;
      float hue = 2.0 * pi / (36.0 + 1.0/(phi*phi)) * float(mini);
      vec3 rainbow = 2.0 * pi / 3.0 * vec3(0.0, 1.0, 2.0);
      vec3 domain = clamp(vec3(0.5) + 0.5 * sin(vec3(hue) + rainbow), 0.0, 1.0);
      rgba += vec4(domain, 1.0);
    }
  }
  // accumulated 'iterations' logs of squared magnitudes
  // so divide by 2 iterations
  le /= 2.0 * float(Iterations);
  // scale accumulated colour and blacken interior
  return mix(rgba.rgb / rgba.a, vec3(le < 0.001 ? 0.0 : tanh(exp(le))), 0.5);
}

Powers more negative than \(-2\) don't look very good, though.
