Since last year's article on deep zoom theory and practice, two new developments have been proposed by Zhuoran in the fractalforums.org thread "Another Solution to Perturbation Glitches".

The first technique ("**rebasing**"), explained in the
first post of the forum thread, means resetting the reference iteration to the
start when the pixel orbit (i.e. \(Z+z\), the reference plus delta) gets
near a critical point (like \(0+0i\) for the Mandelbrot set). If there is
more than one critical point, you need a reference orbit starting at each
of them, and the test can switch to a different reference orbit: pick the
orbit \(o\) that minimizes \(|(Z-Z_o)+z|\), among the current reference
orbit at its current iteration and the critical point orbits at iteration
\(0\). With rebasing you only need as many reference orbits as there are
critical points (which for simple formulas like the Mandelbrot set and
Burning Ship means only one), and glitches are avoided up front rather
than detected and corrected afterwards. This is a big boost to efficiency
(which is nice) and correctness (which is much more important).
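
As a concrete illustration, here is a minimal sketch of rebasing inside a perturbation loop, for the Mandelbrot set with its single critical point at \(0\) (names and structure are illustrative, not KF's actual implementation):

```cpp
#include <complex>
#include <vector>

// One perturbed pixel, with rebasing. Zref holds the reference orbit
// Z_0, Z_1, ... rounded to low precision. Returns the iteration count
// at escape, or maxiters if the pixel did not escape.
int iterate(const std::vector<std::complex<double>> &Zref,
            std::complex<double> c, int maxiters,
            double escape_radius2 = 4.0)
{
  std::complex<double> z = 0; // delta from reference
  int m = 0;                  // reference iteration counter
  for (int n = 0; n < maxiters; ++n)
  {
    // perturbed Mandelbrot step: z -> 2 Z z + z^2 + c
    z = 2.0 * Zref[m] * z + z * z + c;
    ++m;
    std::complex<double> w = Zref[m] + z; // full pixel orbit Z + z
    if (std::norm(w) > escape_radius2)
      return n + 1; // escaped
    // rebase: if the pixel orbit is nearer to the critical point (0)
    // than the delta's magnitude, restart the reference from iteration 0;
    // this also handles running off the end of the reference orbit
    if (std::norm(w) < std::norm(z) || m + 1 == int(Zref.size()))
    {
      z = w;
      m = 0;
    }
  }
  return maxiters;
}
```

The rebase test \(|Z+z| < |z|\) fires exactly when the pixel orbit approaches the critical point, which is where perturbation glitches would otherwise occur.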

Rebasing also works for hybrids, though you need more reference orbits, because the reference iteration can be reset at any phase of the hybrid loop. For example, a hybrid loop of "(M,BS,M,M)" needs reference orbits for each of "(M,BS,M,M)", "(BS,M,M,M)", "(M,M,M,BS)" and "(M,M,BS,M)". Similarly, if there is a pre-periodic part, you need references for each pre-periodic iteration (though for a zoomed-in view, the minimum escaping iteration in the image determines whether they will be used in practice): "M,M,(BS,BS,M,BS)" needs reference orbits "M,M,(BS,BS,M,BS)", "M,(BS,BS,M,BS)" and the four rotations of "(BS,BS,M,BS)". Each of these phases needs as many reference orbits as the starting formula has critical points. As each reference orbit calculation is intrinsically serial work, and modern computers typically have many cores, the extra wall-clock time taken by the additional references is minimal because they can be computed in parallel.
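
The phase bookkeeping can be done mechanically; this hypothetical helper enumerates the sequences needing reference orbits ("M" for a Mandelbrot step, "B" for a Burning Ship step), given a pre-periodic part and a loop:

```cpp
#include <string>
#include <vector>

// Enumerate the iteration sequences needing reference orbits for a
// hybrid with pre-periodic part `pre` and repeating loop `loop`:
// one per pre-periodic suffix, plus one per rotation of the loop.
std::vector<std::string> referencePhases(const std::string &pre,
                                         const std::string &loop)
{
  std::vector<std::string> phases;
  // suffixes of the pre-periodic part, each followed by the full loop
  for (size_t i = 0; i < pre.size(); ++i)
    phases.push_back(pre.substr(i) + "(" + loop + ")");
  // rotations of the loop
  for (size_t i = 0; i < loop.size(); ++i)
    phases.push_back("(" + loop.substr(i) + loop.substr(0, i) + ")");
  return phases;
}
```

For the "M,M,(BS,BS,M,BS)" example above this yields the two pre-periodic suffixes plus the four loop rotations, six phases in total.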

The second technique ("**bilinear approximation**") is
only hinted at in the thread. If you have a deep zoom, the region of
\(z\) values starts very small, and bounces around the plane typically
staying small and close together, in a mostly linear way, except for
when the region gets close to a critical point (e.g. \(x=0\) and \(y=0\) for the
Mandelbrot set) or line (e.g. either \(x=0\) or \(y=0\) for the Burning Ship),
when non-linear stuff happens (like complex squaring, or absolute
folding). For example for the Mandelbrot set, the perturbed iteration

\[ z \to 2 Z z + z^2 + c \]

when \(Z\) is not small and \(z\) is small, can be approximated by

\[ z \to 2 Z z + c \]

which is linear in \(z\) and \(c\) (linear in two variables, hence "bilinear"). In particular, this approximation is valid when \( |z^2| \ll |2 Z z + c| \), which can be rearranged with some handwaving (for a critical point at \(0\)) to

\[ |z| < r = \max\left(0, \epsilon \frac{\left|Z\right| - \max_{\text{image}} \left\{|c|\right\}}{\left|J_f(Z)\right| + 1}\right) \]

where \(\epsilon\) is the hardware precision (e.g. \(2^{-24}\)), and \(J_f(Z) = 2Z\) for the Mandelbrot set. For Burning Ship replace \(|Z|\) with \(\min(|X|,|Y|)\) where \(Z=X+iY\). In practice I divide \(|Z|\) by \(2\) just to be extra safe. For non-complex-analytic functions I use the operator norm for the Jacobian matrix, implemented in C++ by:

template <typename real>
inline constexpr real norm(const mat2<real> &a)
{
  using std::max;
  using std::sqrt, ::sqrt;
  const mat2<real> aTa = transpose(a) * a;
  const real T = trace(aTa);
  const real D = determinant(aTa);
  return (T + sqrt(max(real(0), sqr(T) - 4 * D))) / 2;
}

template <typename real>
inline constexpr real abs(const mat2<real> &a)
{
  using std::sqrt, ::sqrt;
  return sqrt(norm(a));
}
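
Putting the pieces together, a 1-step BLA for the Mandelbrot set might be constructed like this (a sketch: the `Bla` struct and `blaStep` helper are illustrative names, with `eps` the hardware precision and `max_abs_c` taken over the image):

```cpp
#include <algorithm>
#include <complex>

struct Bla
{
  std::complex<double> A, B; // z -> A z + B c
  double r;                  // validity radius: use when |z| < r
  int l;                     // iterations skipped
};

// One-step BLA for the perturbed Mandelbrot iteration z -> 2 Z z + z^2 + c,
// linearized as z -> 2 Z z + c; valid while the dropped z^2 term is below
// the rounding noise of the kept terms.
Bla blaStep(std::complex<double> Z, double eps, double max_abs_c)
{
  Bla b;
  b.A = 2.0 * Z;
  b.B = 1.0;
  b.l = 1;
  // r = max(0, eps (|Z| - max|c|) / (|J_f(Z)| + 1)) with J_f(Z) = 2 Z
  b.r = std::max(0.0, eps * (std::abs(Z) - max_abs_c)
                          / (std::abs(b.A) + 1.0));
  return b;
}
```

Note that when \(Z\) is near the critical point the radius collapses to \(0\), so the BLA is simply never used there and regular perturbation iterations take over.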

This gives a bilinear approximation for one iteration, which by itself is not so useful. The acceleration comes from combining neighbouring BLAs into a BLA that skips many iterations at once. For neighbouring BLAs \(x\) and \(y\), where \(x\) happens first in iteration order and each skips \(l\) iterations via \(z \to A z + B c\), one gets:

\[\begin{aligned} l_{y \circ x} &= l_y + l_x \\ A_{y \circ x} &= A_y A_x \\ B_{y \circ x} &= A_y B_x + B_y \\ r_{y \circ x} &= \min\left(r_x, \max\left(0, \frac{r_y - |B_x| \max_{\text{image}}\left\{|c|\right\}}{|A_x|}\right) \right) \end{aligned}\]

This is again a bit handwavy; higher-order terms of the Taylor expansion are probably necessary to get a bulletproof radius calculation, but it seems to work OK in practice.

For a reference orbit iterated to \(M\) iterations, one can construct a BLA table with \(2M\) entries: the first level has \(M\) 1-step BLAs, one for each iteration; the next level has \(M/2\) entries combining neighbours (without overlap); the next \(M/4\); and so on. It's best for each level to start from iteration \(1\), because iteration \(0\) always starts from a critical point (which makes the radius of BLA validity \(0\)). Now when iterating, pick the BLA that skips the most iterations, among those starting at the current reference iteration that satisfy \(|z| < r\). In between, when no BLA is valid, do regular perturbation iterations, rebasing as required. You need one BLA table for each reference orbit; these can be computed in parallel (and each level of reduction can be done in parallel too, perhaps using OpenCL on GPU).
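
A sketch of the table construction (illustrative names; level 0 holds the 1-step BLAs, and each subsequent level merges non-overlapping neighbouring pairs):

```cpp
#include <algorithm>
#include <complex>
#include <vector>

struct Bla { std::complex<double> A, B; double r; int l; };

// Merge of neighbouring BLAs, as described in the text.
Bla merge(const Bla &x, const Bla &y, double max_abs_c)
{
  return { y.A * x.A, y.A * x.B + y.B,
           std::min(x.r, std::max(0.0,
             (y.r - std::abs(x.B) * max_abs_c) / std::abs(x.A))),
           x.l + y.l };
}

// Build the levels of the table: level 0 has the 1-step BLAs (starting
// at iteration 1); each next level merges non-overlapping neighbours.
std::vector<std::vector<Bla>> buildTable(const std::vector<Bla> &onestep,
                                         double max_abs_c)
{
  std::vector<std::vector<Bla>> table{ onestep };
  while (table.back().size() > 1)
  {
    const std::vector<Bla> &prev = table.back();
    std::vector<Bla> next;
    for (size_t i = 0; i + 1 < prev.size(); i += 2)
      next.push_back(merge(prev[i], prev[i + 1], max_abs_c));
    if (prev.size() & 1)
      next.push_back(prev.back()); // odd one out passes through unmerged
    table.push_back(next);
  }
  return table;
}
```

Each level's merges are independent of each other, which is what makes the reduction parallelizable.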

BLA is an alternative to series approximation for the Mandelbrot set, but it's conceptually simpler, easier to implement, easier to parallelize, has better-understood stopping conditions, and is more general (it applies to other formulas like the Burning Ship, hybrids, ...). Benchmarks are needed to see how it compares speed-wise before declaring an overall winner.

It remains to research the BLA initialisation for critical points not at \(0\), and check rebasing with multiple critical points: so far I've only actually implemented it for formulas with a single critical point at \(0\), so there may be bugs or subtleties lurking in the corners.

Back in 2009 I posted a short video, Reflex, a preview of truncated 4D polytopes. Today I revisited that old code, reimplementing the core algorithms in OpenGL Shading Language (GLSL) using the FragM environment.

3D polytopes are also known as polyhedra. They can be defined by their Schlaefli symbol, which looks like {4,3} for a cube: faces that are squares ({4}), with three arranged around each vertex ({3}). This notation extends to 4D, where a hypercube (aka tesseract) has symbol {4,3,3}: it is made up of cubes ({4,3}), with three arranged around each edge ({3}).

These symbols are all well and good, but to render pictures you need concrete measurements: angles, lengths, etc. H.S.M. Coxeter's book Regular Polytopes gives the solution in the form of some recursive equations. Luckily the recursion depth is bounded by the number of dimensions, which is small (either 3 or 4 in my case, though in principle you can go higher). This is important for implementation in GLSL, which disallows recursion. In any case, GLSL doesn't have dynamic-length lists, so different functions are needed for different-length symbols.

I don't really understand the maths behind it (I don't think I did even when I first wrote the code in Haskell in 2009), but I can describe the implementation. The goal is to find 4 vectors, which are normal to the fundamental region of the tiling of the (hyper)sphere. Given a point anywhere on the sphere, repeated reflections through the planes of the fundamental region can eventually (if you pick the right ones) get you into the fundamental region. Then you can do some signed distance field things to put shapes there, which will be tiled around the whole space when the algorithm is completed.

Starting from the Schlaefli symbol {p,q,r} (doing 4D because it has some tricksy bits, 3D is largely similar), the main task is to find the radii of the sub polytopes ({p,q}, {p}, {q,r}, etc). This is because these radii can be used to calculate the angles of the planes of the fundamental region, using trigonometry. The recursive formula starts here:

float radius(int j, ivec3 p)
{
  return sqrt(radius2_j(j, p));
}

Here j will range from 0 to 3 inclusive, and the vector is {p,q,r}. Then radius2_j() evaluates using the radii squared of the relevant subpolytopes, according to j. I think this is an application of Pythagoras' theorem.

float radius2_j(int j, ivec3 p)
{
  switch (j)
  {
  case 0: return radius2_0(p);
  case 1: return radius2_0(p) - radius2_0();
  case 2: return radius2_0(p) - radius2_0(p.x);
  case 3: return radius2_0(p) - radius2_0(p.xy);
  }
  return 0.0;
}

The function radius2_0() is overloaded for different length inputs, from 0 to 3:

float radius2_0() { return 1.0; }
float radius2_0(int p) { return delta() / delta(p); }
float radius2_0(ivec2 p) { return delta(p.y) / delta(p); }
float radius2_0(ivec3 p) { return delta(p.yz) / delta(p); }

Here it starts to get mysterious: the delta function uses trigonometry to find the lengths. I don't know how or why this works; I copy/pasted it from the book. Note that it looks recursive at first glance, but in fact each delta calls deltas with strictly shorter input vectors, so it's just a non-recursive chain of function calls.

float delta()
{
  return 1.0;
}
float delta(int p)
{
  float s = sin(pi / float(p));
  return s * s;
}
float delta(ivec2 p)
{
  float c = cos(pi / float(p.x));
  return delta(p.y) - delta() * c * c;
}
float delta(ivec3 p)
{
  float c = cos(pi / float(p.x));
  return delta(p.yz) - delta(p.z) * c * c;
}

Now comes the core function to find the fundamental region: the cosines of the angles are found as ratios of successive radii, the sines by Pythagoras' theorem; 3 rotation matrices are constructed, then one axis vector is transformed. Finally these vectors are combined using the 4D cross product (which has 3 inputs), giving the final fundamental region. (I'm not sure why the cross products are necessary, but I do know the main property of the cross product is that the output is perpendicular to all of its inputs.) Note that some signs need wibbling, either that or permute the order of the inputs.

mat4 fr4(ivec3 pqr)
{
  float r0 = radius(0, pqr);
  float r1 = radius(1, pqr);
  float r2 = radius(2, pqr);
  float r3 = radius(3, pqr);
  float c1 = r1 / r0;
  float c2 = r2 / r1;
  float c3 = r3 / r2;
  float s1 = sqrt(1.0 - c1 * c1);
  float s2 = sqrt(1.0 - c2 * c2);
  float s3 = sqrt(1.0 - c3 * c3);
  mat4 m1 = mat4(c1, s1, 0, 0, -s1, c1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1);
  mat4 m2 = mat4(c2, 0, s2, 0, 0, 1, 0, 0, -s2, 0, c2, 0, 0, 0, 0, 1);
  mat4 m3 = mat4(c3, 0, 0, s3, 0, 1, 0, 0, 0, 0, 1, 0, -s3, 0, 0, c3);
  vec4 v0 = vec4(1, 0, 0, 0);
  vec4 v1 = m1 * v0;
  vec4 v2 = m1 * m2 * v0;
  vec4 v3 = m1 * m2 * m3 * v0;
  return mat4
    ( normalize(cross4(v1, v2, v3))
    , -normalize(cross4(v0, v2, v3))
    , normalize(cross4(v0, v1, v3))
    , -normalize(cross4(v0, v1, v2))
    );
}

4D cross product can be implemented in terms of 3D determinants:

vec4 cross4(vec4 u, vec4 v, vec4 w)
{
  mat3 m0 = mat3(u[1], u[2], u[3], v[1], v[2], v[3], w[1], w[2], w[3]);
  mat3 m1 = mat3(u[0], u[2], u[3], v[0], v[2], v[3], w[0], w[2], w[3]);
  mat3 m2 = mat3(u[0], u[1], u[3], v[0], v[1], v[3], w[0], w[1], w[3]);
  mat3 m3 = mat3(u[0], u[1], u[2], v[0], v[1], v[2], w[0], w[1], w[2]);
  return vec4(determinant(m0), -determinant(m1), determinant(m2), -determinant(m3));
}
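
The defining property, that the output is perpendicular to all three inputs, can be sanity-checked with a quick C++ port of the same cofactor expansion (helper names are mine):

```cpp
#include <array>
#include <cmath>

using vec4 = std::array<double, 4>;

// 3x3 determinant by explicit expansion.
double det3(double a, double b, double c,
            double d, double e, double f,
            double g, double h, double i)
{
  return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g);
}

// 4D cross product via cofactor expansion: each component is a signed
// 3x3 minor, so dotting the result with any input repeats a row of a
// 4x4 determinant, giving zero (i.e. perpendicularity).
vec4 cross4(const vec4 &u, const vec4 &v, const vec4 &w)
{
  return {  det3(u[1], u[2], u[3], v[1], v[2], v[3], w[1], w[2], w[3]),
           -det3(u[0], u[2], u[3], v[0], v[2], v[3], w[0], w[2], w[3]),
            det3(u[0], u[1], u[3], v[0], v[1], v[3], w[0], w[1], w[3]),
           -det3(u[0], u[1], u[2], v[0], v[1], v[2], w[0], w[1], w[2]) };
}

double dot(const vec4 &a, const vec4 &b)
{
  return a[0] * b[0] + a[1] * b[1] + a[2] * b[2] + a[3] * b[3];
}
```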

The projection from 4D to 3D is stereographic. Because signed distance fields are implicit, we need both directions: one to go from the input point in 3D to 4D, then after transformation/tessellation we need to go back to 3D to calculate distances:

vec4 unstereo(vec3 pos)
{
  float r = length(pos);
  return vec4(2.0 * pos, 1.0 - r * r) / (1.0 + r * r);
}
vec3 stereo(vec4 pos)
{
  pos = normalize(pos);
  return pos.xyz / (1.0 - pos.w);
}
float sdistance(vec4 a, vec4 b)
{
  return distance(stereo(a), stereo(b));
}

The tiling is done iteratively. (The limit of 600 is there because unbounded loops on a GPU are not really advisable; if this limit is set too low, e.g. 100, then visible artifacts occur. It's probably possible to prove an optimal bound somehow.)

float poly4(vec4 r)
{
  for (int i = 0, j = 0; i < 4 && j < 600; ++j)
  {
    if (dot(r, FR4[i]) < 0.0)
    {
      r = reflect(r, FR4[i]);
      i = 0;
    }
    else
    {
      i++;
    }
  }
  float de = 1.0 / 0.0;
  // signed distance field stuff goes here
  return de;
}

The user interface part of the code has some variables for selecting dimension and symmetry group (Schlaefli symbol). It also has 4 sliders for selecting the truncation amount in barycentric coordinates (which makes settings transfer in a meaningful way between polytopes), and 6 checkboxes for enabling different planes (there are 4 ways to choose the first axis and 3 left to choose from for the second axis, divided by 2 because the order doesn't matter).

vec4 s = inverse(transpose(FR4)) * vec4(BX, BY, BZ, BW);

The signed distance field stuff is quite straightforward in the end, though it took a lot of trial and error to get there. The first way I tried was to render tubes for line segments, by projecting the input point 'r' onto the planes of the fundamental region and doing an SDF circle:

float d = sdistance(s, r - FR4[0] * dot(r, FR4[0])) - thickness;

The planes are drawn by projecting 's' onto the cross product of the plane with 'r'. I don't know why this works:

vec4 p = normalize(cross4(FR4[0], FR4[1], r));
float d = sdistance(s, s - p * dot(s, p)) - 0.01;

Finally the DE() function for plugging into the DE-Raytracer.frag that comes with FragM has some animated rotation based on time, and the baseColor() function textures the solid with light and dark circles (actually slices of hyperspheres).

Full code download: reflex.frag.

A new release: kf-2.15.5. Kalles Fraktaler 2 + is fast deep zooming Free Software for fractal graphics (Mandelbrot, Burning Ship, etc). Full change log:

kf-2.15.5 (2021-12-05)

- new: single-reference implementation for avoiding glitches (thanks Zhuoran https://fractalforums.org/f/28/t/4360/msg29835#msg29835); enabled by default; also supported in nanomb1
    - known issue: does not work with every hybrid formula (only very simple cases work)
    - known issue: may fail if there is more than one critical point
- new: start of support for convergent formulas
    - known issue: convergent formulas are not supported in OpenCL
    - known issue: convergent formulas are not supported with derivatives (this means neither analytic DE nor analytic slopes)
- new: many new formulas (thanks to Alexandre Vachon aka FractalAlex)
- new: Nova formula; variant implemented with critical point at 0 instead of 1, to avoid precision loss when deep zooming
    - known issue: no OpenCL support yet
    - known issue: no derivatives support yet
    - known issue: Newton zooming does not work properly
- new: Separated Perpendicular formula (thanks Mr Rebooted); variant implemented with critical point at 0, and custom function to avoid precision loss when deep zooming
    - known issue: no OpenCL support yet
    - known issue: single reference method does not cure all glitches
- new: hybrid formulas support division operator (thanks FractalAlex)
    - known issue: implementation is incomplete
- new: Triangle Inequality Average colouring algorithm can be enabled in Formula dialog; requires OpenCL; replaces final angle in phase T channel data
    - known issue: likely to change in future versions, use at own risk
    - known issue: disable Series Approximation for predictable results
- fix: Newton dialog uses a compact layout (by popular request)
- fix: Newton zooming functions are correctly linked into the EXE (only kf-2.15.4 was broken)
- fix: control-click to zoom correctly views framed rectangle (thanks CFJH)
- fix: NR zoom log should no longer go out of the window (reported by Uma410)
- fix: typo bug in general power Mandelbrot series approximation (thanks superheal)
- fix: some typo bugs in CFixedFloat operators (maybe did not affect anything in the old code, if only by chance)
- fix: some typo bugs in the build system
- fix: name Polarbrot correctly everywhere
- fix: there is no long long in OpenCL (thanks shapeweaver)
- fix: command line detailed status reporting works for all frames of zoom out sequence
- fix: be more robust about stopping rendering before changing internal state; should fix some crashes like changing approximation terms (reported by CFJH)
- internal: support for custom reference orbit values for caching repeated computations (time/space trade-off)
    - known issue: no OpenCL support yet
- internal: output stream operators for more types
- internal: refactor smooth iterations handling
- internal: delete obsolete GlitchIter handling
- internal: more functions for CFixedFloat(): log() cosh() sinh()
- internal: more functions for floatexp: cosh() (thanks FractalAlex)
- internal: more functions for complex: sin() cos() cosh() (thanks FractalAlex)
- internal: more functions for preprocessor: cosh() sqrt() (thanks FractalAlex)
- internal: hack for fractions in preprocessor
- internal: complex constructor taking int to allow complex<T> x = 0
- internal: custom glitch tests in formula XML
- internal: brute force (high precision) renderers for tests

Get it from mathr.co.uk/kf/kf.html.

This is likely to be the last KF release from me for the foreseeable future as I'm increasingly busy with other things.

Another 2 months later and kf-2.15.4 is ready. Kalles Fraktaler 2 + is fast deep zooming Free Software for fractal graphics (Mandelbrot, Burning Ship, etc). Full change log:

kf-2.15.4 (2021-07-22)

- new: rewritten GUI for window size / image size (by popular request)
- new: “imaginary axis points up” option in the transformation dialog (requested by saka and others, makes complex plane comply with maths conventions)
- new: Hidden Mandelbrot formula (thanks to FractalAlex, Bruce Dawson) https://fractalforums.org/f/22/t/3576/msg22122#msg22122
- new: Hidden Mandelbrot a la Cos formula (thanks to 3Dickulus) https://fractalforums.org/f/74/t/3591/msg22215#msg22215
    - set Factor A real and imaginary parts to control shape (e.g. 1+1i)
- new: Polarbrot formula, for p = 2, 3, 4 (thanks to gerrit) https://fractalforums.org/f/15/t/1916/msg23377#msg23377
    - set Factor A real part to control power a
    - fractional and/or negative power a is possible
    - known issue: need to set Bailout escape radius very high (but not so high that derivatives overflow: try 1e10 or so)
    - known issue: for positive a, reference at 0 fails (blank image) (workaround: offset the center very slightly in the Location dialog)
    - known issue: for negative a, blank image (workaround: set Formula seeds non-zero (1e-300); this will reduce accuracy for deeper zooms)
    - known issue: seams with numeric differences DE (analytic DE is ok)
    - known issue: Newton zooming is not functional yet
    - known issue: auto-skew is not functional yet
- new: convert between built-in formulas and hybrid formulas (when possible) with new buttons in the Formula dialog
- new: option Ignore Hybrids in the Formula dialog to list only the built-in formulas that don’t have hybrid equivalents
- new: optimized some built-in formulas using common subexpression elimination (6%-58% faster perturbation calculations)
    - Burning Ship power 2, 3, 4, 5
    - Buffalo power 2, 3, 4
- new: optimized some hybrid OpenCL perturbation calculations
- fix: Hybrid operator multiplication works with OpenCL
- fix: Omnibrot works with OpenCL
- fix: Mandelbrot power 4 and above with derivatives works with OpenCL
- fix: formulas 52, 53, 69, 70, 71 now work with OpenCL
- fix: formulas 4 (power 3), 20, 23-26, 42-50 now have correct derivatives for analytic DE
- fix: renamed some formulas (Abs General Quadratic Plus/Minus, Omnibrot) (suggested by gerrit)
- fix: Zoom Amount spinner in Transformation dialog works live
- fix: Transformation dialog Zoom Amount sign inversion
- fix: right mouse button drag in Transformation dialog stretches in a more intuitive way
- fix: Transformation dialog displays/edits total transformation instead of difference from last set transformation
- fix: Newton zoom dialog (atom domain) size of period <=1 is set to 1
- fix: OpenCL error dialog no longer appears and disappears again instantly
- internal: formula build system refactored for parallel building and much faster incremental builds
- internal: include structure rationalized for faster builds
- internal: use intermediate ar archives for linking many object files
- internal: formula preprocessor supports temporary variables (can be used for common subexpression elimination)
- upgrade to gsl-2.7
- upgrade to openexr-2.5.7

Get it from mathr.co.uk/kf/kf.html.

*Old Wood Dish* (2010) by James W. Morris is a fractal artwork,
a zoomed-in view of part of the Mandelbrot set. The magnification factor
of \(10^{152}\) is quite shallow by today's standards, but in 2010 the
perturbation and series approximation techniques for speeding up image
generation had not yet been developed: this was a deep zoom for that era.
Thankfully JWM's (now defunct) gallery included the parameter files; the
image linked above is a high-resolution re-creation in Kalles Fraktaler,
thanks to a parameter file conversion script I wrote. You can find out
more about JWM's software MDZ and see more of his images on my
mirror of part of his old website.

*Old Wood Dish* is an example of what would now be called
"Julia morphing", using the property that zooming in towards baby Mandelbrot
set islands doubles-up (and then quadruples, octuples, ...) the features
you pass. This allows you to sculpt patterns, here the pattern has a tree
structure.

Each baby Mandelbrot set island has a positive integer associated to
it: its period. Iteration of the center of its cardioid repeats with that
period, returning to 0. Atom domain periods are "near miss" periods, where
the iteration gets nearer to 0 than it ever did before. They indicate a
nearby baby Mandelbrot set island (or child bulb) of that period.
The atom domain periods of the center of *Old Wood Dish* are:

1, 2, 34, 70, 142, 286, **574**, **862**, 1438, 2878, 5758

One can see a pattern: 2 × 34 + 2 = 70; 2 × 70 + 2 = 142; 2 × 142 + 2 = 286. But this pattern is broken at the numbers highlighted: 2 × 574 + 2 = 1150 ≠ 862.

Using Newton's root-finding method in one complex variable,
one can find the nearby baby Mandelbrot sets with those periods. When zooming
out, these eventually each become the lowest period island in the view in turn
(higher periods are closer to the starting point), and the zoom level at which
this happens is usually significant in terms of the decisions made when performing
Julia morphing. These zoom levels (log base 10) for *Old Wood Dish* are:

0.114, 0.591, 4.69, 8.44, 14.0, 22.4, 30.8, 43.4, 66.6, 101, 152

and the successive ratios of these numbers are

5.15, 7.94, 1.79, 1.66, 1.59, **1.37**, **1.40**, 1.53, 1.52, 1.50

Repeated Julia morphing leads to these ratios tending to a constant (often 1.5), but the two numbers highlighted are clearly outside the curve: one can see that these correspond to the two mismatching periods. I'll have to ask him to see if this was intentional or an accident.

A list of atom domain periods is related to a concept called an
internal address, which is an ascending list of the lowest periods of
the hyperbolic components (cardioid-like or disk-like shapes) that you
pass through along the filaments on the way to the target from the origin.
An extension, angled internal addresses, removes the ambiguity of which
way to turn (for example, there are two period 3 bulbs attached to the
period 1 cardioid, they have internal angles 1/3 and 2/3). One can find
angled internal addresses by converting from external angles, and one
can find external angles by tracing rays outwards from a point towards
infinity. The angled internal address of *Old Wood Dish* starts:

1 1/2 2 16/17 33 1/2 34 1/3 69 1/2 70 1/3 141 1/2 142 1/3 285 1/2 286 ...

and the pattern can be extended indefinitely by

... 1/3 (p-1) 1/2 p 1/3 (2p+1) 1/2 (2p+2) ...

The numerators of the angles in an angled internal address can be varied
freely, so one can create a whole family of variations. Varying the 1/3 to
2/3 only changes the alignments of the decorations outside the tree structure,
but varying 16/17 changes the shapes that tree is built from. Here are
*Old Wood Dish* variations 1-16, with the irregular zoom pattern
adjusted to a fully-regular zoom ending up with period 9214:

I found the center coordinates for these images by tracing external rays towards each period 9214 inner island. This took almost 5 hours wall-clock time with 16 threads in parallel (one for each ray). I then found the approximate view radius by atom domain size raised to the power of 1.125, multiplied by 10. These magic numbers were found by exploring shallower versions graphically. Using this radius I used Newton's method again, to find the pair of period 13820 minibrots at the first junctions near the center. I found these periods using KF-2.15.3's newly improved Newton zooming dialog. I used their locations to rotate and scale all the images for precise alignment. Animated it looks quite hypnotic I think:

Software used:

- mandelbrot-numerics m-describe program and script to get rough idea of period structure;
- mandelbrot-perturbator GTK program to explore the shallow layers and trace external rays to find external angles;
- mandelbrot-symbolics Haskell library in GHCI REPL to convert (both directions) between external angles and angled internal addresses;
- mandelbrot-numerics m-exray-in program to trace rays inwards given external angles;
- mandelbrot-numerics m-nucleus program to find periodic root from ray end point;
- mandelbrot-numerics m-domain-size program to find approximate view size;
- kf-2.15.3 interactive Newton zooming dialog to find period of the first junction nodes;
- custom code in C to align views, using mandelbrot-numerics library;
- custom code in bash shell to combine everything into KFS+KFR files;
- kf-2.15.3 command line mode to render each KFS+KFR to very large TIFF files;
- ImageMagick convert program to downscale for anti-aliasing (PNG for web, and smaller GIFs);
- gifsicle program to combine the 16 frames into 1 animated GIF.

After almost 2 months of work I'm happy to announce a new release of Kalles Fraktaler 2 +, fast deep zooming Free Software for fractal graphics (Mandelbrot, Burning Ship, etc). Most of the focus has been on speed improvements, with rescaled iterations ala Pauldelbrot providing a big speedup for most deep zoom locations. Full change log:

kf-2.15.3 (2021-05-26)

- new: updated progress reporting in status bar to include more information
- new: texture resize control in colouring dialog (disable to get actual image pixels in OpenGL GLSL shaders)
- new: texture browse dialog allows selecting BMP and PNG images as well as JPEG
- new: Newton-Raphson zooming dialog changes
    - user interface redesigned from scratch
    - new absolute zooming modes (previous mode is called relative)
    - new atom domain mode (for Mandelbrot set power 2 and hybrids only)
    - new size factor control
    - can auto-capture zoom depth after Newton zoom
    - auto skew (escape) moved to transformation dialog
- new: transformation dialog changes
    - new zoom adjustment control (with rotation on left mouse button)
    - now shows the difference between the original transformation and the new transformation, instead of the total transformation
    - stretch amount now displayed in cents for a more friendly range
    - spin buttons added so scroll wheel and arrows can adjust values, which live-update the transformed image
    - auto skew (escape) moved from Newton-Raphson zooming dialog
- new: single precision floating point support for shallow zooms (until zoom e20) (disabled by default due to some locations having undetected glitches)
- new: single precision floatexp extended floating point for arbitrarily deep zooms (disabled by default due to some locations having undetected glitches)
- new: OpenCL can work in single precision mode, for example on devices that don’t support double precision
- new: rescaled perturbation calculations for arbitrarily deep zooms (usually faster than old long double and floatexp implementations; with or without derivatives; with or without OpenCL; single or double precision, single precision disabled by default due to some locations having undetected glitches); supported formulas:
    - Mandelbrot power 2
    - Mandelbrot power 3
    - Burning Ship power 2
    - hybrid formula editor
- new: rescaled series approximation calculations for Mandelbrot power 2 (about 30% faster than the all-floatexp implementation, can be disabled if necessary in the perturbation and series approximation tuning dialog)
- new: number type selection dialog (advanced menu) allows fine-tuning allowed implementations
- new: “reuse reference” (advanced menu) can be used together with “auto solve glitches” (this uses additional memory for the primary reference)
- fix: “reuse reference” re-calculates reference when the used number type changes (fixes some issues with bad images and/or crashes)
- new: “reference strict zero” control in perturbation and series approximation tuning dialog (advanced menu, experimental); affects rescaled iterations only
- new: lower level implementation of reference calculations for hybrids is over 7x faster (now only 10% slower than built in versions)
- new: OpenCL can run threaded to improve user interface responsiveness (enabled by default; can be disabled in OpenCL device selection dialog)
- new: “‘Open’ resets default parameters” setting can be disabled to load minimal KFR/KFP without resetting missing parameters to defaults (this setting is enabled by default for backwards compatibility)
- new: “glitch low tolerance” can be a fraction between 0 and 1
- new: “approx low tolerance” can be a fraction between 0 and 1
- new: crash recovery offers to restore settings as well as parameters
- fix: correct power calculation for multiplied hybrid operators (symptom: seams between iteration bands with numeric DE)
- fix: documentation uses subsections instead of lists for improved navigation and table of contents
- known issue: some locations (especially Burning Ship “deep needle”) are much slower and need much more memory; workaround:
    - disable “rescaled single” in number type selection dialog; and if still slow:
    - disable “rescaled double” in number type selection dialog
- known issue: some locations have undetected glitches in single precision; workaround:
    - disable “single”, “rescaled single” and “floatexp single” in number type selection dialog; or
    - enable “glitch low tolerance” in perturbation and series approximation tuning dialog

Get it from mathr.co.uk/kf/kf.html.

Now I'll probably take a break from coding KF until after the summer, apart from bugfixes, as I don't have a big exciting idea to inspire and motivate me. Some ideas for when I come back include:

- stripe average colouring via a ring buffer of last few iterations
- auto skew method based on directional DE distribution in an image
- OpenCL/OpenGL sharing
- use multiple OpenCL platforms and devices at the same time
- GLSL file watcher so you can use your favourite text editor when writing colouring algorithms
- plain iterations (without perturbation) for very shallow zooms
- port/embed mandelbrot-perturbator engine for glitch correction of power 2 Mandelbrot by "rebasing and carrying on"
- store starting zoom in Newton-Raphson progress updates and add resume functionality
- rip out 75% of the built-in formulas and replace with hybrid formula designer versions
- refactor the build system for faster incremental development

If anyone out there wants to work on any of these or other ideas, I'll be more than happy to help you get started navigating the code to know where the changes should be made.

*(click the picture to view the video on diode.zone)*

The **legendary colour palette technique** embeds an image in the iteration
bands of an escape time fractal, by linearizing the image into scanlines
and synchronizing the scan rate to the iterations in the fractal spirals,
so that they line up to reconstruct the original image. Historically this
has been done by preparing palettes for fractal software using external
tools, and mostly only for small images (KF, for example, has a palette
limited to 1024 colour slots, which means 32x32 or 64x16 pixels at most).

Kalles Fraktaler 2 has an image texture feature, which historically only allowed you to warp a background through the semi-transparent fractal. I added the ability to create custom colouring algorithms in OpenGL shader language (GLSL), with which it is possible to repurpose this texture and (for example) use it as a legendary palette.

Here I scaled my avatar (originally 256x256) to 128x16 pixels, and fine tuned the iteration count divisor by hand after zooming to a spiral in the Seahorse Valley of the Mandelbrot set. Then the face from the icon is visible in the spirals all the way down to the end of the video. I used a work-in-progress (not yet released) build of KF 2.15.3, which has a new setting not to resize the texture to match the frame size: this allows the legendary technique to work much more straightforwardly.

I rendered exponential map EXR frames from KF and assembled into a zoom video with zoomasm. From KF I exported just the RGB channels with the legendary palette colouring, and the distance estimate channels. I did not colour the RGB with the distance estimate in KF, because with the exponential map transformation they would not be screen-space correct (the details would be smaller in the center of the reprojected video than at the edges). I could not do all the colouring in zoomasm either, because it does not support image textures. I added the boundary of the fractal in zoomasm afterwards, by mixing pink with the RGB from KF according to the length of the screen-space distance estimate channels (which zoomasm scales properly when reprojecting the exponential map).

This post was inspired by Fractal Universe's YouTube videos from 2017:

- Mandelbrot zoom 10^49 - Legendary color palette #1 : Phantom minibrot
- Mandelbrot zoom 10^94 - Legendary color palette #2 : Sierpinsky triangle
- Mandelbrot zoom 10^94 - Legendary color palette #3 : the word "MANDELBROT"

I hope to release KF 2.15.3 before the end of this month. There are big changes (aside from the texture resize setting) that I'm keen to get out into the world.

The complex beauty of the world's most famous fractal, the Mandelbrot set, emerges from the repeated iteration of a simple formula:

\[z \to z^2 + c\]

Zooming into the intricate boundary of the shape reveals ever more detail, but higher precision numbers and higher iteration counts are needed as you go deeper. The computational cost rises quickly with the classical rendering algorithms, which use high precision numbers for each pixel.

In 2013, K.I. Martin's SuperFractalThing and its accompanying white paper sft_maths.pdf popularized a pair of new acceleration techniques. First, one notes that the formula \(z \to z^2 + c\) is continuous, so nearby points remain nearby under iteration. This means you can iterate one point at high precision (the reference orbit) and compute differences from the reference orbit for each pixel in low precision (the perturbed orbits). Secondly, iterating the perturbed formula yields a polynomial series in the initial perturbation in \(c\), which depends only on the reference. The degree rises rapidly, but you can truncate it to get an approximation. This means you can compute the series approximation coefficients once, substitute in the perturbed \(c\) values for each pixel, and initialize the perturbed orbits at a later iteration, skipping potentially lots of per-pixel work.

The perturbation technique has since been extended to the Burning Ship fractal and other "abs variations", and it also works for hybrid fractals combining iterations of several formulas.

Prerequisites for the rest of this article: a familiarity with complex numbers and algebraic manipulation; knowing how to draw the unzoomed Mandelbrot set; understanding the limitations of computer implementation of numbers (see for example Hardware Floating Point Types).

In the remainder of this post, lower case and upper case variables with the same letter mean different things. Upper case means unperturbed or reference, usually high precision or high range. Lower case means perturbed per pixel delta, low precision and low range.

In perturbation, one starts with the iteration formula [1]:

\[Z \to Z^2 + C\]

Perturb the variables with unevaluated sums [2]:

\[(Z + z) \to (Z + z)^2 + (C + c)\]

Do symbolic algebra to avoid the catastrophic absorption when adding tiny values \(z\) to large values \(Z\) (e.g. 1 million plus 1 is still 1 million if you only have 3 significant digits to work with) [3]:

\[z \to 2 Z z + z^2 + c\]

\(C, Z\) is the "reference" orbit, computed in high precision using [1] and rounded to machine double precision, which works fine most of the time. \(c, z\) is the "pixel" orbit; you can compute many of these near each reference (e.g. an entire image).
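As a sanity check, here is a minimal Python sketch (illustrative, not KF's implementation) showing that the perturbed recurrence [3] reproduces direct iteration of a nearby pixel:

```python
C = -1.0 + 0.0j     # reference parameter (high precision in real code; a float here)
c = 1e-9 + 1e-9j    # pixel's tiny offset from the reference
Z = 0j              # reference orbit, iterated with [1]
z = 0j              # per-pixel delta, iterated with [3]
P = 0j              # direct classical iteration of the pixel C + c, for comparison
for _ in range(20):
    z = 2*Z*z + z*z + c    # [3]: cheap, low precision, per pixel
    Z = Z*Z + C            # [1]: in real code, precomputed once and shared
    P = P*P + (C + c)      # classical per-pixel iteration
# Z + z reproduces the directly iterated pixel value P up to rounding
```

In a real renderer the reference orbit \(Z\) is computed once in high precision, stored rounded to double, and reused for every pixel in the image.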

There is a problem that can be noticed when you zoom deeper near certain features in the fractal. There are parts that can have a "noisy" appearance, or there may be weird flat blobs that look out of place. These are the infamous perturbation glitches. It was observed that adding references in the glitches and recomputing the pixels could fix them, but there was no reliable way to detect them programmatically until Pauldelbrot discovered/invented a method: Perturbation Theory Glitches Improvement.

Glitches can occur if, at any iteration [4]:

\[|Z+z| \ll |Z|\]

The fix: retry the affected pixels with a new reference, or (for well-behaved formulas like the Mandelbrot set) rebase to a new reference and carry on.

Perturbation assumes exact maths, but some images have glitches when naively using perturbation in low precision. Pauldelbrot found his glitch criterion by perturbing the perturbation iterations: one has perturbed iteration as in [3] (recap: \(z \to 2 Z z + z^2 + c\)). Then one perturbs this with \(z \to z + e, c \to c + f\) [5]:

\[e \to (2 (Z + z) + e) e + f\]

We are interested what happens to the ratio \(e/z\) under iteration, so rewrite [3] as [6]:

\[z \to (2 Z + z) z + c\]

Pattern matching [5] against [6], the interesting part of the growth factor of \(e\) relative to \(z\) (assuming \(c\) and \(f\) are small) is \(2(Z + z) / (2 Z)\). When \(e/z\) is small, the nearby pixels "stick together": there is not enough precision in the number type to distinguish them, which makes a glitch. So a glitch can be detected when [7]:

\[|Z + z|^2 < G |Z|^2\]

where \(G\) is a threshold (somewhere between 1e-2 and 1e-8, depending how strict you want to be). This does not add much cost, as \(|Z+z|^2\) already needs to be computed for the escape test, and \(G|Z|^2\) can be computed once for each iteration of the reference orbit and stored.
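As a sketch of how the test [7] might sit in a per-pixel loop (in a real implementation \(G|Z_n|^2\) would be precomputed and stored alongside the reference orbit):

```python
def is_glitched(Z_plus_z, Z, G=1e-4):
    # Pauldelbrot's criterion [7]: |Z + z|^2 < G |Z|^2
    # Z_plus_z is the pixel value Z + z already computed for the escape test
    return abs(Z_plus_z) ** 2 < G * abs(Z) ** 2
```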

The problem now is: How to choose G? Too big and it takes forever as glitches are detected all over, too small and some glitches can be missed leading to bad images.

The glitched pixels can be recalculated with a more appropriate reference point: more glitches may result and adding more references may be necessary until the image is finished.

Double precision floating point (with 53 bits of mantissa) is more than enough for computing perturbed orbits: even single precision (with 24 bits) can be used successfully. But when zooming deeper another problem occurs: double precision has a limited range, once values get smaller than about 1e-308 then they underflow to 0. This means perturbation with double precision can only zoom so far, as eventually the perturbed deltas are smaller than can be represented.

An early technique for extending range is to store the mantissa as a double precision value, but normalized to be near 1 in magnitude, with a separate integer to store the exponent. This floatexp technique works for arbitrarily deep zooms, but the performance is terrible because it needs to handle every arithmetic operation in software (instead of them being a single CPU instruction).
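A toy Python sketch of the floatexp idea (illustrative only, not KF's actual C++ type): keep a double mantissa normalized near 1, plus a separate integer exponent, so products far below 1e-308 survive.

```python
import math

class FloatExp:
    """Toy extended-range float: value = m * 2**e with 0.5 <= |m| < 1."""
    def __init__(self, m, e=0):
        if m == 0.0:
            self.m, self.e = 0.0, 0
        else:
            f, k = math.frexp(m)       # m = f * 2**k, renormalize
            self.m, self.e = f, e + k
    def mul(self, o):
        # multiply mantissas in hardware, add exponents in software
        return FloatExp(self.m * o.m, self.e + o.e)
    def add(self, o):
        # align the smaller operand's exponent to the larger one's
        a, b = (self, o) if self.e >= o.e else (o, self)
        return FloatExp(a.m + math.ldexp(b.m, b.e - a.e), a.e)
    def to_float(self):
        return math.ldexp(self.m, self.e)  # may underflow back to 0 in double

# 1.5e-200 * 2e-200 = 3e-400: underflows in plain double, survives in FloatExp
p = FloatExp(1.5e-200).mul(FloatExp(2.0e-200))
```

Every operation goes through renormalization here, which is exactly why the real thing is slow compared to hardware doubles.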

The solution for efficient performance turned out to be using an unevaluated product (compare with the unevaluated sum of perturbation) to rescale the double precision iterations to be nearer 1 and avoid underflow: substitute \(z = S w\) and \(c = S d\) to get [8]:

\[S w \to 2 Z S w + S^2 w^2 + S d\]

and now cancel out one scale factor \(S\) throughout [9]:

\[w \to 2 Z w + S w^2 + d\]

Choose \(S\) so that \(|w|\) is around \(1\). When \(|w|\) is at risk of overflow (or underflow) after some iterations, redo the scaling; this is typically a few hundred iterations as \(|Z|\) is bounded by \(2\) except at final escape.
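A quick numerical check (illustrative values) that the rescaled recurrence [9] tracks the plain perturbed recurrence [3] when \(z = S w\) and \(c = S d\):

```python
C = -1.0 + 0.0j            # reference parameter (orbit stays bounded)
S = 1e-150                 # scale factor chosen so |w| is around 1
c = (1.0 + 2.0j) * S       # tiny pixel delta, at risk of underflow if much smaller
d = c / S
Z, z, w = 0j, 0j, 0j
for _ in range(10):
    z = 2*Z*z + z*z + c        # [3]: would underflow for much deeper zooms
    w = 2*Z*w + S*(w*w) + d    # [9]: keeps |w| around 1 instead
    Z = Z*Z + C                # reference iteration [1]
# z and S*w agree to rounding error, while |w| stays O(1)
```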

Optimization: if \(S\) underflowed to \(0\) in double precision, you don't need to calculate the \(+ S w^2\) term at all when \(Z\) is not small. Similarly you can skip the \(+ d\) if it underflowed. For higher powers there will be terms involving \(S^2 w^3\) (for example), which might not need to be calculated either due to underflow. Ideally these tests would be performed once at rescaling time, instead of in every inner loop iteration (though they would be highly predictable I suppose).

There is a problem: if \(|Z|\) is very small, it can underflow to \(0\) in unscaled double in [9]. One needs to store the full range \(Z\) and do a full range (e.g. floatexp) iteration at those points, because \(|w|\) can change dramatically. Rescaling is necessary afterwards. This was described by Pauldelbrot: Rescaled Iterations in Nanoscope.

To do the full iteration, compute \(z = S w\) in floatexp (using a floatexp for \(S\) so that there is no underflow), then do the perturbed iteration [3] with all variables in floatexp. To rescale afterwards, compute \(S = |z|\) and \(w = z/S, d = c/S\) in floatexp, rounding \(w\) and \(d\) to double precision afterwards. Then a double precision rounding of \(S\) can be computed for use in [9].

The Burning Ship fractal modifies the Mandelbrot set formula by taking absolute values of the real and imaginary parts before the complex squaring [10]:

\[X + i Y \to (|X| + i |Y|)^2 + C\]

When perturbing the Burning Ship and other "abs variations", one ends up with things like [11]:

\[|XY + Xy + xY + xy| - |XY|\]

which naively gives \(0\) by catastrophic absorption and cancellation. laser blaster made a case analysis Perturbation Formula for Burning Ship which can be written as [12]:

diffabs(c, d) := |c+d| - |c|
              = c >= 0 ? (c + d >= 0 ? d     : -(2*c+d))
                       : (c + d >  0 ? 2*c+d : -d)
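A direct transcription of the case analysis [12] as a Python sketch; it computes \(|c+d| - |c|\) exactly, without the catastrophic cancellation of the naive form:

```python
def diffabs(c, d):
    # |c+d| - |c| by case analysis on the signs, avoiding cancellation
    if c >= 0:
        return d if c + d >= 0 else -(2*c + d)
    else:
        return 2*c + d if c + d > 0 else -d
```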

When \(d\) is small, the \(\pm d\) cases are much more likely. With rescaling in the mix, [11] works out as [13]:

\[\operatorname{diffabs}(XY/s, Xy + xY + sxy)\]

which has the risk of overflow when \(s\) is small, but the signs work out ok even for infinite \(c\) as \(d\) is known to be finite. Moreover, if \(s = 0\) due to underflow, the \(\pm d\) branches will always be taken (except when \(XY\) is small, when a full floatexp iteration will be performed instead), and as \(s \ge 0\) by construction, [13] reduces to [14]:

\[\operatorname{sign}(X) * \operatorname{sign}(Y) * (X y + x Y)\]

(Note: this formulation helps avoid underflow in \(\operatorname{sign}(XY)\) when \(X\) and \(Y\) are small.)

For well-behaved functions like the Mandelbrot set iterations, one needs to do full iterations only when \(Z\) gets small. For the Burning Ship and other abs variations, this is not sufficient: problems occur if either \(X\) or \(Y\) is small, not only when both are small at the same time. Full iterations need to be done when either variable is small. This makes rescaled iterations for locations near the needle slower than just doing full floatexp iterations all the time (because of the extra wasted work handling the rescaling): near the needle all the iterations have \(Y\) near \(0\), which means floatexp iterations will be done anyway. Using floatexp from the start avoids many branches and rescaling in the inner loop, so it's significantly faster. The problem is worse in single precision because it has much less range: it underflows below about 1e-38, rather than 1e-308 for double precision.

The problem of automatically detecting these "deep needle" locations (which may be in the needles of miniships) and switching implementations to avoid the extra slowdown remains unresolved in KF.

The Mandelbrot set has lovely logarithmic spirals all over, and the Burning Ship has interesting "rigging" on the miniships on its needle. Hybridization provides a way to get both these features in a single fractal image. The basic idea is to interleave the iteration formulas, for example alternating between [1] and [10], but more complicated interleavings are possible (e.g. [1][10][1][1] in a loop, etc.).

Hybrid fractals in KF are built from stanzas, each has some lines, each line has two operators, and each operator has controls for absolute x, absolute y, negate x, negate y, integer power \(p\), complex multiplier \(a\). The two operators in a line can be combined by addition, subtraction or multiplication, and currently the number of lines in a stanza can be either 1 or 2 and there can be 1, 2, 3 or 4 stanzas. The output of each line is fed into the next, and at the end of each stanza the +c part of the formula happens. There are controls to choose how many times to repeat each stanza, and which stanza to continue from after reaching the end.

Implementing perturbation for this is quite methodical. Start from an operator, with inputs \(Z\) and \(z\). Set mutable variables:

z := input
W := Z + z
B := Z

If absolute x enabled in formula, then update

re(z) := diffabs(re(Z), re(z))
re(W) := abs(re(W))
re(B) := abs(re(B))

Similarly for the imaginary part. If negate x enabled in formula, then update

re(z) := -re(z)
re(W) := -re(W)
re(B) := -re(B)

Similarly for the imaginary part. Now compute

\[S = \sum_{i=0}^{p-1} W^i B^{p-1 - i}\]

and return \(a z S\). Combining operators into lines may be done by Perturbation Algebra. Combining lines into stanzas can be done by iterating unperturbed \(Z\) alongside perturbed \(z\); only the \(+C\) needs high precision, and that is not done within a stanza.
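For the simplest operator configuration (no abs/neg flags, multiplier \(a = 1\)), the sum \(S\) makes the perturbed result \(z S = W^p - B^p = (Z+z)^p - Z^p\) without catastrophic cancellation. A minimal Python sketch (illustrative, not KF's implementation):

```python
def perturbed_power(Z, z, p):
    # operator with all flags off and a = 1: returns (Z+z)**p - Z**p
    # via z * sum_{i=0}^{p-1} W**i * B**(p-1-i), avoiding cancellation
    W = Z + z
    B = Z
    S = sum(W**i * B**(p - 1 - i) for i in range(p))
    return z * S
```

For \(p = 2\) this reduces to \(z (W + B) = 2 Z z + z^2\), the familiar perturbed Mandelbrot step.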

Rescaling hybrid iterations seems like a big challenge, but it's not that hard: if either or both the real and imaginary parts of the reference orbit \(Z\) are small, one needs to do a full range iteration with floatexp and recalculate the scale factor afterwards, as with formulas like Burning Ship. Otherwise, thread \(s\) through from the top level down to the operators. Initialize with

W := Z + z*s

and modify the absolute cases to divide the reference by \(s\):

re(z) := diffabs(re(Z/s), re(z))

Similarly for imaginary part. When combining operators (this subterm only occurs with multiplication) replace \(f(o_1, Z + z)\) with \(f(o_1, Z + z s)\).

And that's almost all the changes that need to be made!

For distance estimation of hybrid formulas I use dual numbers for automatic differentiation. One small adjustment was needed for it to work with rescaled iterations: instead of initializing the dual parts (before iteration) with 1 and scaling by the pixel spacing at the end for screen-space colouring, initialize the dual parts with the pixel spacing and don't scale at the end. This avoids overflow of the derivative, and the same rescaling factor can be used for regular and dual parts.
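A minimal dual-number sketch of that trick (illustrative, not KF's code): seed the dual part with the pixel spacing rather than 1, so the derivative stays in range.

```python
class Dual:
    """Dual number x + dx*eps with eps^2 = 0; x and dx may be complex."""
    def __init__(self, x, dx):
        self.x, self.dx = x, dx
    def __add__(self, o):
        return Dual(self.x + o.x, self.dx + o.dx)
    def __mul__(self, o):
        # product rule carried in the dual part
        return Dual(self.x * o.x, self.x * o.dx + self.dx * o.x)

pixel_spacing = 1e-30                      # illustrative value
c = Dual(-0.1 + 0.1j, pixel_spacing + 0j)  # dc/dc, pre-scaled by pixel spacing
z = Dual(0j, 0j)
for _ in range(10):
    z = z * z + c
# z.dx now holds (dz/dc) * pixel_spacing, already in screen-space units
```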

Naive implementations of parametric hybrids are very slow due to all the branches in the inner loops (checking if absolute x enabled at every iteration for every pixel, etc). Using for example OpenCL, these branches can be done once when generating source code for a formula, instead of every iteration for every pixel. This runs much faster, even when compiled to run on the same OpenCL device that is interpreting the parametric code.

The other technique that K. I. Martin's SuperFractalThing popularized was that iteration of [3] gives a polynomial series in \(c\) [15]:

\[z_n = \sum A_{n,k} c^k\]

(with 0 constant term). This can be used to "skip" a whole bunch of iterations, assuming that truncating the series and/or low precision doesn't cause too much trouble. Substituting [15] into [3] gives [16]:

\[\sum A_{n+1,k} c^k = 2 Z \sum A_{n,k} c^k + (\sum A_{n,k} c^k)^2 + c\]

Equating coefficients of \(c^k\) gives recurrence relations for the series coefficients \(A_{n,k}\). See Simpler Series Approximation.
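For the first few coefficients, equating powers of \(c\) gives \(A_{n+1,1} = 2 Z_n A_{n,1} + 1\), \(A_{n+1,2} = 2 Z_n A_{n,2} + A_{n,1}^2\), \(A_{n+1,3} = 2 Z_n A_{n,3} + 2 A_{n,1} A_{n,2}\), and in general a convolution of the lower coefficients. A minimal Python sketch of the truncated recurrence (illustrative):

```python
def series_coeffs(C, iters, K):
    # A[k] holds A_{n,k} for k = 1..K; the constant term A[0] stays 0
    Z = 0j
    A = [0j] * (K + 1)
    for _ in range(iters):
        # A_{n+1,k} = 2 Z_n A_{n,k} + sum_{j=1}^{k-1} A_{n,j} A_{n,k-j} + [k == 1]
        A = [0j] + [2*Z*A[k] + sum(A[j] * A[k - j] for j in range(1, k))
                    + (1 if k == 1 else 0)
                    for k in range(1, K + 1)]
        Z = Z*Z + C     # reference orbit iterated alongside
    return A
```

Evaluating \(\sum_k A_{n,k} c^k\) then approximates the perturbed \(z_n\) directly, skipping the iterations.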

The traditional way to decide whether the series approximation is still valid at an iteration is to check that it doesn't deviate too far from regular iterations (or perturbed iterations) at a collection of "probe" points. When it starts to deviate, roll back an iteration and initialize all the image pixels with [15] at that iteration.

Later, knighty extended the series approximation to two complex variables. If the reference \(C\) is a periodic point (for example the center of a minibrot), the biseries in \(z, c\) allows skipping a whole period of iterations. Then multiple periods can be skipped by repeating the biseries step. This gives a further big speedup beyond regular series approximation near minibrots. An escape radius is needed for \(z\), based on properties of the reference, so as not to perform too many biseries iterations. After that, regular perturbed iterations are performed until final escape. This is available in KF as NanoMB1.

Current research by knighty and others involves a chain of minibrots at successively deeper zoom levels. One starts with the deepest minibrot, performing biseries iterations until it escapes its \(z\) radius. Then rebase the iterates to the next outer minibrot, and perform biseries iterations with that. Repeat until final escape. This is available in KF as NanoMB2, but it's highly experimental and fails for many locations. Perhaps it needs to be combined with more perturbation or higher precision: sometimes the iterates may still be too close to each other when they escape a deep minibrot, such that catastrophic absorption occurs. In progress...

For Burning Ship and other abs variations (and presumably hybrids too), series approximation can take the form of two bivariate real series in \(\Re(c)\) and \(\Im(c)\) for the real and imaginary parts of \(z\). But these are only good so long as the region is not folded by an absolute value, so typically only a few iterations can be skipped. Maybe the series can be split into two (or more) parts with the other side(s) shifted when this occurs? In progress...

Perturbation techniques that greatly reduce the quantity of high precision iterations needed, as well as (for well-behaved formulas) series approximation techniques that reduce the quantity of low precision iterations needed still further, provide a vast speedup over classical algorithms that use high precision for every pixel. Rescaling can provide an additional constant factor speedup over using full range floatexp number types for most (not "deep needle") locations. Chained biseries approximation ("NanoMB2") and series approximation for abs variations and hybrids are still topics of research.

It remains open how to choose the \(G\) for Pauldelbrot's glitch detection criterion, and how to robustly compute series approximation skipping: there is still no complete mathematical proof of correctness with rigorous error bounds, although the images most often look plausible and different implementations do tend to agree.

Kalles Fraktaler 2 + is fast deep-zooming Free Software for fractal graphics (Mandelbrot, Burning Ship, etc).
The new version *kf-2.15.2* I released a couple of days ago has a big new feature:
you can design your own custom colouring algorithms in OpenGL GLSL shader fragments,
compatible with *zoomasm-3.0*.
Several examples are included. Full change log:

kf-2.15.2 (2021-03-31)

- new: custom OpenGL colouring algorithms (compatible with zoomasm 3.0)

  - button bottom left of the colors dialog or Ctrl+G in main window
  - text area for GLSL editing, or import and export to `*.glsl` files
  - GLSL is stored in KFP palettes, KFR files and image metadata
  - user API allows access of KFP parameters within GLSL

- new: OpenGL implementation of colouring algorithm

  - default shader gives similar results as the regular implementation
  - there are tiny differences (1 or 2 ULP) due to different rounding
  - uses portions of libqd ported to GLSL for higher precision (49 bits, vs 24 bits for float and 53 bits for double on CPU)
  - some features remain unimplemented in this version:

    - least squares 2x2 numerical differencing algorithm
    - least squares 3x3 numerical differencing algorithm

- new: experimental ARM builds are possible using llvm-mingw

  - 64bit aarch64 can be built but crashes on start
  - 32bit armv7 is blocked on a GMP bug(?) (missing symbols in library)
  - still needs gcc windres because llvm windres is incomplete
- new: option to save EXR files without their 8bit RGBA preview images (makes files smaller and saves faster)
- new: sRGB gamma-correct downscaling (using patched pixman)
- fix: removed special case for zooms less than 1e3, fixes rendering of Redshifter 1 cubic a=2 z0=-4/3 (reported by gerrit)
- fix: make undo and redo refresh the colors dialog
- fix: remove warning about slow unused derivatives, because it is hard to tell if GLSL colouring might need them
- fix: dependency preparation script adapted to new architectures
- fix: dependency preparation script can build subsets of libraries
- fix: correctly initialize "no glitch detection" vector in SIMD
- fix: use `delete[]` instead of `delete` in some places
- fix: use `static inline` to prevent redefinition errors in llvm
- upgrade to pixman 0.40+git (claudeha/kf branch)
- upgrade to tiff 4.2.0
- upgrade to openexr 2.5.5

Get it from mathr.co.uk/kf/kf.html

For the next version, I plan on trying to implement the optimized rescaling replacement for the floatexp number type described by Pauldelbrot and implemented in rust-fractal (among others): by properties of iterations, it should be possible to renormalize much less frequently (about once every 200 iterations or more) instead of after every single arithmetic operation (which is very very slow). Hopefully the speedup should be dramatic, which will be very beneficial for the OpenCL engine because it uses floatexp much sooner (due to lack of support for 80bit x87 long double).

An iterated function system (IFS) is a collection of geometric transformations. If the functions are contractive, then the system has a fixed point, obtainable by iterating a point through the functions, choosing one at random at each step, and plotting all the points you travel through. This is called the chaos game algorithm for plotting (there are others, such as the multiple reduction copy machine). Often the resulting shape is a fractal.

To go further, if you count the number of times you plot each point, the densities approach a multifractal distribution. Varying the probabilities of each function changes the distribution, but not the shape (except where a probability is exactly 0). An iterated function system with probabilities (IFSP) has one probability for each transformation.

An extension of IFS called graph directed iterated function systems (GDIFS) makes each transformation into a node in a graph, which is more general because some transitions between nodes can be forbidden (by not having a corresponding edge). Naturally probabilities can be added to GDIFS to get GDIFSP, and now there is a matrix of probability weights with one value for each possible edge between nodes (if there is no edge, the probability is 0). The graph directs the IFS, and to overload terminology further the edges have a direction (so you can have A to B without B to A) which makes it a directed graph.

This all started when someone in Fractal Chats was trying to "balance" one of their fractal artworks. I found a Maths Stack Exchange question and answer that said for an IFSP of similitudes the optimal probability weights are related to the contraction ratios and the fractal dimension: How to optimally adjust the probabilities for the random IFS algorithm?. One of the comments on the question references a paper with a proof via multifractal spectrum:

A Multifractal Analysis of IFSP Invariant Measures with Application to Fractal Image Generation

J. M. GUTIÉRREZ, A. IGLESIAS and M. A. RODRÍGUEZ

https://doi.org/10.1142/S0218348X96000042

Abstract

In this paper, we focus on invariant measures arising from Iterated Function System with Probabilities (IFSP). We show the equivalence between an IFSP and a linear dynamical system driven by a white noise. Then, we use a multifractal analysis to obtain scaling properties of the resulting invariant measures, working within the framework of dynamical systems. Finally, as an application to fractal image generation, we show how this analysis can be used to obtain the most efficient choice for the probabilities to render the attractor of an IFS by applying the probabilistic algorithm known as “chaos game”.

The maths required to extend this to GDIFSP of non-linear functions is way beyond me (the paper only proves things for similarities, and affine functions are optimized numerically afaict), and I didn't fancy the maths involved for multifractal spectrum (this time), so I implemented a numerical algorithm that is conceptually quite simple: when plotting each point in the chaos game, compare the current location's plot count with the average plot count (averaged over non-empty locations) and adjust the weight of the current transformation according to whether it is too dense or not dense enough. This works for any transformations and for both IFSP and GDIFSP.

This average can be calculated efficiently by keeping track of the total count of plotted points, and the count of empty locations (start with this full, and decrement when incrementing a location from 0). This algorithm requires the histogram of plotting locations to cover the whole limit set; for the non-linear Moebius transformations I used the Riemann sphere (complex plane plus infinity), modelled as two unit discs using complex reciprocal for the one nearer infinity (an equirectangular projection would probably also work, as used in 360 video, or even a cube map, as used in 3D rendering like OpenGL).
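The loop above can be sketched in a few lines of Python (illustrative only, not the autoxaos code; the IFS, resolution and learning rate are made up for the example):

```python
import random

def balance_weights(maps, steps=20000, res=64, rate=1e-3, seed=1):
    """Chaos game into a histogram, nudging the weight of the transformation
    just used: down if the landing cell is denser than the average over
    non-empty cells, up if sparser."""
    rng = random.Random(seed)
    w = [1.0] * len(maps)
    hist = [[0] * res for _ in range(res)]
    total, nonempty = 0, 0
    x, y = 0.0, 0.0
    for _ in range(steps):
        i = rng.choices(range(len(maps)), weights=w)[0]
        x, y = maps[i](x, y)
        ix = min(int(x * res), res - 1)
        iy = min(int(y * res), res - 1)
        if hist[iy][ix] == 0:
            nonempty += 1          # one fewer empty cell
        hist[iy][ix] += 1
        total += 1
        if hist[iy][ix] > total / nonempty:   # denser than average
            w[i] = max(w[i] * (1 - rate), 1e-9)
        else:                                  # sparser than average
            w[i] *= 1 + rate
    s = sum(w)
    return [v / s for v in w]

# Sierpinski triangle IFS: three similitudes with equal contraction ratios,
# so the balanced weights should come out roughly uniform
sierpinski = [lambda x, y: (x/2,        y/2),
              lambda x, y: (x/2 + 0.5,  y/2),
              lambda x, y: (x/2 + 0.25, y/2 + 0.5)]
```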

Here is a collection of images of iterated function system fractals of randomly generated Moebius transformations, with (from left to right in each image) random weights, uniform weights, and weights optimized by my algorithm:

You can hopefully see that the right hand side is flatter / less dynamic, and the sparser fractals are more filled out.

Here are the same transformations optimized as GDIFSP, i.e. with a matrix of weights rather than a vector:

Whether this is actually useful in practice remains to be seen, but you can find the code in my fractal-bits repository, subdirectory autoxaos:

git clone https://code.mathr.co.uk/fractal-bits.git
cd fractal-bits/autoxaos

I actually released kf-2.15 a couple of weeks ago, but I didn't get around to posting here about it. The highlights include OpenCL support (if you have a good GPU, this can make rendering faster, especially for hybrid formulas), hybrid formula designer (lots of options to make custom fractals), and exponential map transformation for optimizing zoom animations.

kf-2.15.1 (2020-10-28)

- OpenCL support for perturbation iterations (requires double precision support on device: CPUs should work, some GPUs might not)
- hybrid formula editor (design your own fractal formula)
- exponential map coordinate transformation (useful for export to zoomasm for efficient video assembly)
- rotation and skew transformations are rewritten to be more flexible (but old skewed/rotated KFR locations will not load correctly)
- kf-tile.exe tool supports the new rotation and skew transformations
- the bitrotten skew animation feature is removed
- a few speed changes in built in formulas (one example, RedShiftRider 4 with derivatives is almost 2x faster due to using complex analytic derivatives instead of 2x2 Jacobian matrix derivatives)
- flip imaginary part of Buffalo power 2 (to match other powers; derivative was flipped already)
- slope implementation rewritten (appearance is different but it is now independent of zoom level and iteration count)
- smooth (log) iteration count is offset so dwell bands match up with the phase channel

swiftly followed by

kf-2.15.1.1 (2020-10-28)

- fix OpenCL support for NVIDIA GPUs (reported by bezo97)
- fix crash in aligned memory (de)allocation (reported by gerrit)
- documentation improvements (thanks to FractalAlex)

and today by

kf-2.15.1.2 (2020-11-08)

- refactor OpenCL error handling to display errors in the GUI without exiting
- OpenCL hybrids: fix breaking typo in neg x (reported by Microfractal)

I recorded a screencast video to show how to make fractal zoom videos with kf-2.15 and zoomasm-1.0, and there are a couple of zoom videos I made with them too:

- Making fractal zoom videos with kf-2.15 and zoomasm-1.0
- https://archive.org/details/making-fractal-zoom-videos-with-kf-2.15-and-zoomasm-1.0
- https://www.youtube.com/watch?v=72IIn7C3UeI
- Charred Bard
- https://archive.org/details/charred-bard
- https://www.youtube.com/watch?v=NMKBBk-yf_4
- Special Branch
- https://archive.org/details/special-branch
- https://www.youtube.com/watch?v=uQDV87vVIxk

Almost a decade ago I wrote about optimizing zoom animations by reusing the center portion of key frame images. In that post I used a fixed scale factor of 2, because I didn't think about generalizing it. This post here is to rectify that oversight.

So, fix the output video image size to \(W \times H\) with a pixel density (for anti-aliasing) of \(\rho^2\). For example, with \(5 \times 5\) supersampling (25 samples per output pixel), \(\rho = 5\).

Now we want to calculate the size of the key frame images \(W_K \times H_K\) and the scale factor \(R\). Suppose we have calculated an inner / deeper key frame in the zoom animation; then the next outer keyframe only needs its outer border calculated, because we can reuse the center. This means only \( W_K H_K \left(1 - \frac{1}{R^2}\right) \) pixels need to be calculated per keyframe. The number of keyframes decreases as \(R\) increases; it turns out to be proportional to \(\frac{1}{\log R}\) for a fixed total animation zoom depth.

Some simple algebra shows that \(H_K = R \rho H\) and \(W_K = R \rho W\). Putting this all together means we want to minimize the total number of pixels that need calculating, which is proportional to

\[ \frac{R^2 - 1}{\log R} \]

which decreases to a limit of \(2\) as \(R\) decreases to \(1\). But \(R = 1\) is not possible, as this wouldn't zoom at all; as-close-to-1-as-possible means a ring of pixels 1 pixel thick, at which point you are essentially computing an exponential map.

So if an exponential map is the most efficient way to proceed, how much worse is the key frame interpolation approach? Define the efficiency by \(2 / \frac{R^2 - 1}{\log R}\), then the traditional \(R = 2\) has an efficiency of only 46%, \(R = \frac{4}{3}\) 74%, \(R = \frac{8}{7}\) 87%.
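The figures above are easy to check numerically; a two-line Python sketch of the efficiency formula:

```python
import math

def efficiency(R):
    # total work per unit zoom depth is proportional to (R^2 - 1)/log R,
    # so efficiency is measured relative to the limit of 2 as R -> 1
    return 2 * math.log(R) / (R * R - 1)

# efficiency(2) ~ 0.46, efficiency(4/3) ~ 0.74, efficiency(8/7) ~ 0.87
```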

These results are pushing me towards adding an exponential map rendering mode to all of my fractal zoom software, because the efficiency savings are significant. Expect it in (at least the command line mode of) the next release of KF which is most likely to be in early September, and if time allows I'll try to make a cross-platform zoom video assembler that can make use of the exponential map output files.

]]>