Start with the iteration formula for the Burning Ship escape time fractal:

\[ X := X^2 - Y^2 + A \quad\quad Y := \left|2 X Y\right| + B \]

Perturb this iteration, replacing \(X\) by \(X+x\) (etc) where \(x\) is a small deviation:

\[ x := (2 X + x) x - (2 Y + y) y + a \quad\quad y := 2 \operatorname{diffabs}(X Y, X y + x Y + x y) + b \]

Here \(\operatorname{diffabs}(c,d)\) is laser blaster's formula for evaluating \(|c + d| - |c|\) without catastrophic cancellation; see my post about perturbation algebra for more details.

Now replace \(x\) and \(y\) by bivariate power series in \(a\) and \(b\):

\[ x = \sum_{i,j} s_{i,j} a^i b^j \quad\quad y = \sum_{i,j} t_{i,j} a^i b^j \]

To implement this practically (without lazy evaluation) I pick an order \(o\) and limit the sum to \(i + j \le o\). Substituting these series into the perturbation iterations, and collecting terms, gives iteration formulae for the series coefficients \(s_{i,j}\) and \(t_{i,j}\):

\[ s_{i,j} := 2 X s_{i,j} - 2 Y t_{i,j} + \sum_{k=0,l=0}^{k=i,l=j} \left( s_{k,l} s_{i-k,j-l} - t_{k,l} t_{i-k,j-l} \right) + \mathbb{1}_{i=1,j=0} \]

The formula for \(t\) requires knowing which branch of \(\operatorname{diffabs}(X Y, \cdot)\) was taken, which turns out to have a nice reduction to \(\operatorname{sgn}(X Y)\):

\[ t_{i,j} := 2 \operatorname{sgn}(X Y) \left( X t_{i,j} + s_{i,j} Y + \sum_{k=0,l=0}^{k=i,l=j} \left( s_{k,l} t_{i-k,j-l} \right) \right) + \mathbb{1}_{i=0,j=1} \]

\(\mathbb{1}_F\) is the indicator function, \(1\) when \(F\) is true, \(0\) otherwise. For distance estimation, the series of the derivatives are just the derivatives of the series, which can be computed very easily.
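To make the recurrences concrete, here is a minimal Python sketch (illustrative only, not production code): it iterates the truncated coefficient tables \(s_{i,j}\), \(t_{i,j}\) and compares the series prediction against direct perturbation of a nearby point, at an arbitrarily chosen reference whose orbit stays away from the axes for the few iterations tested.

```python
from math import copysign

def sgn(v):
    return 0.0 if v == 0.0 else copysign(1.0, v)

def conv(u, v, i, j):
    # coefficient of a^i b^j in the product of the series u and v
    return sum(u.get((k, l), 0.0) * v.get((i - k, j - l), 0.0)
               for k in range(i + 1) for l in range(j + 1))

def step(X, Y, s, t, o):
    # one iteration of the coefficient recurrences, truncated to i + j <= o
    ns, nt = {}, {}
    for i in range(o + 1):
        for j in range(o + 1 - i):
            ns[(i, j)] = (2 * X * s.get((i, j), 0.0)
                          - 2 * Y * t.get((i, j), 0.0)
                          + conv(s, s, i, j) - conv(t, t, i, j)
                          + (1.0 if (i, j) == (1, 0) else 0.0))
            nt[(i, j)] = (2 * sgn(X * Y)
                          * (X * t.get((i, j), 0.0)
                             + Y * s.get((i, j), 0.0)
                             + conv(s, t, i, j))
                          + (1.0 if (i, j) == (0, 1) else 0.0))
    return ns, nt

# arbitrary reference point, order, iteration count, and deltas
A, B, o, n = 0.5, 0.5, 3, 4
a, b = 1e-4, -1e-4
X, Y, s, t = 0.0, 0.0, {}, {}
Xd, Yd = 0.0, 0.0  # perturbed point iterated directly, for comparison
for _ in range(n):
    s, t = step(X, Y, s, t, o)
    X, Y = X * X - Y * Y + A, abs(2 * X * Y) + B
    Xd, Yd = Xd * Xd - Yd * Yd + (A + a), abs(2 * Xd * Yd) + (B + b)

x_series = sum(v * a**i * b**j for (i, j), v in s.items())
y_series = sum(v * a**i * b**j for (i, j), v in t.items())
```

As long as no fold is crossed, the only error is the order-\(o\) truncation, so the series and direct perturbation agree to many digits.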

The series is only valid in a region that doesn't intersect an axis, at which point the next iteration will fold the region in a way that a series can't represent. Moreover, the series loses accuracy the further from the reference point, so there needs to be a way to check that the series is ok to use. One approach is to iterate points on the boundary of the region using perturbation, and compare the relative error against the same points calculated with the series. Only if all these probe points are accurate, is it safe to try the next iteration.

This usually means that the series will skip at most \(P\) iterations near a central miniship reference of period \(P\). But one can do better, by subdividing the region into parts that are folded in different ways. The parts that are folded the same way as the reference can continue with series approximation, with probe points at the boundary of each part, with the remainder switching to regular perturbation initialized by the series.

It may even be possible to use knighty's techniques like Taylor shift, which is a way to rebase a series to a new reference point (for example, one on the "other side" of the fold) to split the region into two or more separate parts, each with its own series approximation. The Horner shift algorithm is not too complicated, and I think it can be extended to bivariate series by shifting along each variable in succession:

```c
// Horner shift code from FLINT2
for (i = n - 2; i >= 0; i--)
    for (j = i; j < n - 1; j++)
        poly[j] += poly[j + 1] * c;
```

Untested Haskell idea for bivariate shift:

```haskell
import Data.Map (Map)
import qualified Data.Map as M

shift :: Num v => v -> Map Int v -> Map Int v
shift = undefined -- left as an exercise

shift2 :: Num v => v -> v -> Map (Int, Int) v -> Map (Int, Int) v
shift2 x y
  = twiddle . pull . fmap (shift x) . push
  . twiddle . pull . fmap (shift y) . push

push :: (Ord a, Ord b) => Map (a, b) v -> Map a (Map b v)
push m = M.fromListWith M.union
  [ (i, M.singleton j e) | ((i, j), e) <- M.assocs m ]

pull :: (Ord a, Ord b) => Map a (Map b v) -> Map (a, b) v
pull m = M.fromList
  [ ((i, j), e) | (i, mj) <- M.assocs m, (j, e) <- M.assocs mj ]

twiddle :: (Ord a, Ord b) => Map (a, b) v -> Map (b, a) v
twiddle = M.mapKeys (\(i, j) -> (j, i))
```
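The same idea works with dense coefficient lists; here is a Python sketch (names are mine): `shift` is the FLINT-style Horner shift, and `shift2` shifts along each variable in succession by transposing in between.

```python
# Sketch: Taylor (Horner) shift for dense polynomials, then the
# bivariate version by shifting along each variable in succession.
# Representation (coefficient lists, lowest degree first) is illustrative.

def shift(poly, c):
    # Horner shift: poly represents p(x), result represents p(x + c)
    poly = list(poly)
    n = len(poly)
    for i in range(n - 2, -1, -1):
        for j in range(i, n - 1):
            poly[j] += poly[j + 1] * c
    return poly

def shift2(grid, cx, cy):
    # grid[i][j] is the coefficient of x^i y^j; shift y, then x
    grid = [shift(row, cy) for row in grid]      # each row is a poly in y
    cols = list(map(list, zip(*grid)))           # transpose
    cols = [shift(col, cx) for col in cols]      # each column is a poly in x
    return list(map(list, zip(*cols)))           # transpose back
```

For example, `shift([1, 2, 3], 5)` turns \(1 + 2x + 3x^2\) into \(86 + 32x + 3x^2 = p(x+5)\).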

Exciting developments! I hope to release a new version of Kalles Fraktaler 2 + containing at least some of these algorithms soon...

]]>The Burning Ship fractal is interesting because it has mini-ships among filaments. However, some of these filaments and mini-ships are highly stretched or skewed non-conformally, which makes them look ugly. Some fractal software (Kalles Fraktaler, Ultrafractal, ...) has the ability to apply a non-uniform scaling to the view, which unstretches the filaments and makes (for example) the period-doubling features on the way to the mini-ship appear more circular. However, to my knowledge, all of these programs require manual adjustment of the unskewing transformation, for example by dragging control points in a graphical user interface.

The size estimate for the Burning Ship mini-sets is based on the size estimate for the Mandelbrot set. The Burning Ship iterations are not a well behaved complex function, so Jacobian matrices over real variables are necessary. Generic pseudo-code for the size estimate of an arbitrary function:

```
d := degree of smallest non-linear power in f
a, b := coordinates of a mini-set with period p
x, y := critical point of f (usually 0, 0)
L := I
B := I
for j := 1 to p - 1
    x, y := f(x, y)
    L := Jf(x, y) L
    B := B + L^{-1}
size := 1 / sqrt(abs(det(L^(d/(d-1)) B)))
```
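As a sanity check, this pseudo-code can be instantiated for the complex Mandelbrot set (a sketch, not the code used in practice): there the Jacobian of \(z^2 + c\) is just multiplication by \(2z\), so \(L\) and \(B\) can be kept as complex numbers, and the 2x2 matrix representing a complex number \(m\) has determinant \(|m|^2\).

```python
# Sanity check of the generic size estimate for the Mandelbrot set.
from math import sqrt

def mandelbrot_size(c, p, d=2):
    z = 0.0 + 0.0j   # critical point
    L = 1.0 + 0.0j   # cumulative orbit derivative
    B = 1.0 + 0.0j   # accumulated sum of inverses
    for _ in range(1, p):
        z = z * z + c
        L = 2 * z * L
        B = B + 1 / L
    # det(L^(d/(d-1)) B) = |L^2 B|^2 when d = 2
    return 1.0 / sqrt(abs(L**2 * B) ** 2)

# the period-3 island on the real axis has its nucleus here
size = mandelbrot_size(-1.7548776662466927 + 0j, 3)
```

This gives a size of about 0.019 for the period-3 island, consistent with the usual complex-analytic size estimate \(1 / |\beta \lambda^2|\).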

In particular for the power 2 Burning Ship this resolves to:

```
d := 2
x := 0
y := 0
L(lxx, lxy; lyx, lyy) := (1, 0; 0, 1)
B(bxx, bxy; byx, byy) := (1, 0; 0, 1)
for j := 1 to p - 1
    x, y := (x^2 - y^2 + a, abs(2 x y) + b)
    L := ( 2 x lxx - 2 y lyx , 2 x lxy - 2 y lyy
         ; 2 sgn(x) sgn(y) (x lyx + lxx y) , 2 sgn(x) sgn(y) (x lyy + lxy y) )
    detL := lxx * lyy - lxy * lyx
    B := ( bxx + lyy / detL , bxy - lyx / detL
         ; byx - lxy / detL , byy + lxx / detL )
detL := lxx * lyy - lxy * lyx
detB := bxx * byy - bxy * byx
size := 1 / sqrt(abs(detL^2 * detB))
```

Note the renormalization of the \(c=a+bi\) plane by:

\[L^\frac{d}{d - 1} B\]

The size estimate takes out the uniform scaling factor from this 2x2 matrix, but it's also possible to use it to extract rotation and non-uniform scaling (stretching, skew) parameters. A problem comes when trying to raise the matrix \(L\) to a non-integer power when \(d > 2\), so I fudged it, hoped that the non-conformal contribution from \(L\) is negligible, and just used \(B\). To cancel out the uniform scaling from \(B\), I divide by the square root of its determinant, and I take its inverse to use as a coordinate transformation for the \(c=a+bi\) plane:

\[T = \left(\frac{B}{\sqrt{\det B}}\right)^{-1}\]
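In code, computing \(T\) from a given 2x2 matrix \(B\) is straightforward (a pure-Python sketch with an illustrative matrix, not values from an actual mini-set computation):

```python
# Sketch: T = (B / sqrt(det B))^(-1) for a 2x2 matrix B.
from math import sqrt

def unskew(b):
    (bxx, bxy), (byx, byy) = b
    d = sqrt(abs(bxx * byy - bxy * byx))
    # remove the uniform scaling, leaving determinant +-1 ...
    nxx, nxy, nyx, nyy = bxx / d, bxy / d, byx / d, byy / d
    # ... then invert the normalized 2x2 matrix
    det = nxx * nyy - nxy * nyx
    return [[nyy / det, -nxy / det],
            [-nyx / det, nxx / det]]

T = unskew([[2.0, 1.0], [0.5, 1.5]])
```

By construction \(|\det T| = 1\): the transform unskews without changing area.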

I tested it briefly and it seems to work for Burning Ship power 2 and 3, and also Mandelbar/Tricorn power 2. There is a problem however: to get this automatic skew matrix, one needs to zoom deep enough so that Newton's method can find a mini-set, but doing that is tricky in hard-skewed areas. So I still need to write GUI code for manual skew, to make that pre-navigation easier.

Here's just one image pair, power 2 Burning Ship, the first is with an identity transformation, the second with automatically calculated unskewing transformation:

One last thing: it should be possible to separate skew from rotation by applying polar decomposition.
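For 2x2 matrices with positive determinant there is a closed form for the polar decomposition \(M = R S\), with \(R\) a rotation and \(S\) symmetric positive definite; a Python sketch (illustrative only, not part of any released code):

```python
# Sketch: polar decomposition M = R S for a 2x2 matrix with det(M) > 0,
# using R = (a+d, b-c; c-b, a+d) / hypot(a+d, b-c), then S = R^T M.
# A full implementation should handle det(M) <= 0 as well.
from math import hypot

def polar2x2(m):
    (a, b), (c, d) = m  # assumes det(m) > 0
    h = hypot(a + d, b - c)
    r = [[(a + d) / h, (b - c) / h],
         [(c - b) / h, (a + d) / h]]  # rotation part
    s = [[r[0][0] * a + r[1][0] * c, r[0][0] * b + r[1][0] * d],
         [r[0][1] * a + r[1][1] * c, r[0][1] * b + r[1][1] * d]]  # R^T M
    return r, s

R, S = polar2x2([[2.0, 1.0], [0.0, 1.0]])
```

The rotation angle comes from \(R\), and the stretch/skew (for the unskewing transform) from \(S\).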

]]>Last week I implemented (in Haskell, using lazy ST with each STRef paired with Natural so that I can have Ord) the algorithm presented in this paper:

Images of Julia sets that you can trust

L. H. de Figueiredo, D. Nehab, J. Stolfi, and J. B. Oliveira

Last updated on January 8, 2013 at 10:45am.

Abstract: We present an algorithm for computing images of quadratic Julia sets that can be trusted in the sense that they contain numerical guarantees against sampling artifacts and rounding errors in floating-point arithmetic. We use cell mapping and color propagation in graphs to avoid function iteration and rounding errors. As a result, our algorithm avoids point sampling and can robustly classify entire rectangles in the complex plane as being on either side of the Julia set. The union of the regions that cannot be so classified is guaranteed to contain the Julia set. Our algorithm computes a refinable quadtree decomposition of the complex plane adapted to the Julia set which can be used for rendering and for approximating geometric properties such as the area of the filled Julia set and the fractal dimension of the Julia set.

Keywords: Fractals, Julia sets, adaptive refinement, cellular models, cell mapping, computer-assisted proofs

You can find my code in my mandelbrot-graphics repository. I reproduced most of the results, I coloured with black interior, white exterior, red unknown (Julia set is inside the red region), and the quad tree cell boundaries in grey:

The last two examples above show how it fails at parabolic Julia sets.

I also implemented a trustworthy Mandelbrot set, based on the idea that if the neighbourhood of the origin in the Julia set is all exterior, then the point cannot be in the Mandelbrot set, and if any interior exists in the Julia set, then the point must be in the Mandelbrot set. Now replace 'point' in those two clauses with "closed 2D square", and use the property of the algorithm in the paper that the proofs of interiorhood and exteriorhood of the Julia set range over the whole cell.

It's far too slow to be practical, if pretty pictures were the goal! The red zone of unknown doesn't shrink much with each depth increment.

]]>Define the iterated quadratic polynomial:

\[ f_c^0(z) = 0 \\ f_c^{n+1}(z) = \left(f_c^n(z)\right)^2 + c \]

The Mandelbrot set is those \(c\) for which \(f_c^n(0)\) remains bounded for all \(n\). Misiurewicz points are dense in the boundary of the Mandelbrot set. They are strictly preperiodic, which means they satisfy this polynomial equation:

\[ f_c^{q+p}(0) = f_c^{q}(0) \\ p > 0 \\ q > 0\]

and moreover the period \(p\) and the preperiod \(q\) of a Misiurewicz point \( c \in M_{q,p} \) are the lowest values that make the equation true. For example, \(-2 \in M_{2,1}\) and \(i \in M_{2,2}\), which can be verified by iterating the polynomial (exercise: do that).
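The exercise can be done numerically in a few lines of Python:

```python
# Iterate f_c(z) = z^2 + c from z = 0 and collect the orbit.
def orbit(c, n):
    z, zs = 0, [0]
    for _ in range(n):
        z = z * z + c
        zs.append(z)
    return zs

zs2 = orbit(-2, 3)   # [0, -2, 2, 2]: preperiod 2, period 1
zsi = orbit(1j, 4)   # [0, i, -1+i, -i, -1+i]: preperiod 2, period 2
```

The values are exactly representable, so the comparisons are exact even in floating point.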

Misiurewicz points are algebraic integers (a subset of the algebraic numbers), which means they are the roots of a monic polynomial with integer coefficients. A monic polynomial is one with leading coefficient \(1\), for example \(c^2+c\). Factoring a monic polynomial gives monic polynomials as factors. Factoring over the complex numbers \(\mathbb{C}\) gives the \(M_{q,p}\) in linear factors; factoring over the integers \(\mathbb{Z}\) can give irreducible polynomials of degree greater than \(1\). For example, here's the polynomial \(f_c^{4}(0) - f_c^{2}(0)\) for \(M_{2,2}\), factored over \(\mathbb{Z}\):

\[c^3\,\left(c+1\right)^2\,\left(c+2\right)\,\left(c^2+1\right)\]

Note that the repeated root \(0\) corresponds to a hyperbolic component of period \(1\) (the nucleus of the top level cardioid of the Mandelbrot set), and the repeated root \(-1\) corresponds to the period \(2\) circle to the left. And \(-2 \in M_{2,1}\), so the "real" equation we are interested in is the last term, \(c^2+1\), which is irreducible over the integers, but has complex roots \(\pm i\). There are two roots, so \(\left|M_{2,2}\right| = 2\).
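The factorization can be checked mechanically with exact integer polynomial arithmetic (a small Python sketch; polynomials are coefficient lists, lowest degree first):

```python
# Check that f^4(0) - f^2(0) = c^3 (c+1)^2 (c+2) (c^2+1) as polynomials in c.
def pmul(u, v):
    w = [0] * (len(u) + len(v) - 1)
    for i, a in enumerate(u):
        for j, b in enumerate(v):
            w[i + j] += a * b
    return w

def padd(u, v):
    n = max(len(u), len(v))
    return [(u[i] if i < len(u) else 0) + (v[i] if i < len(v) else 0)
            for i in range(n)]

def psub(u, v):
    return padd(u, [-a for a in v])

c = [0, 1]
f = [[0]]  # f^0(0) = 0
for _ in range(4):
    f.append(padd(pmul(f[-1], f[-1]), c))

lhs = psub(f[4], f[2])
rhs = [1]
for factor in ([0, 0, 0, 1],              # c^3
               pmul([1, 1], [1, 1]),      # (c+1)^2
               [2, 1],                    # (c+2)
               [1, 0, 1]):                # (c^2+1)
    rhs = pmul(rhs, factor)
```

Both sides come out as the same degree-8 integer coefficient list.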

So, a first **attempt** at enumerating Misiurewicz points works like this:

```haskell
-- using numeric-prelude and MemoTrie from Hackage
type P = MathObj.Polynomial.T Integer

-- h with all factors g removed
divideAll :: P -> P -> P
divideAll h g
  | isZero h = h
  | isOne g = h
  | isZero g = error "/0"
  | otherwise = case h `divMod` g of
      (di, mo) | isZero mo -> di `divideAll` g
               | otherwise -> h

-- h with all factors in the list removed
divideAlls :: P -> [P] -> P
divideAlls h [] = h
divideAlls h (g:gs) = divideAlls (h `divideAll` g) gs

-- the variable for the polynomials
c :: P
c = fromCoeffs [ 0, 1 ]

-- the base quadratic polynomial
f :: P -> P
f z = z^2 + c

-- the iterated quadratic polynomial
fn :: Int -> P
fn = memo fn_ where fn_ 0 = 0 ; fn_ n = f (fn (n - 1))

-- the raw M_{q,p} polynomial
m_raw :: Int -> Int -> P
m_raw = memo2 m_raw_ where m_raw_ q p = fn (q + p) - fn q

-- the M_{q,p} polynomial with lower (pre)periods removed
m :: Int -> Int -> P
m = memo2 m_ where
  m_ q p = m_raw q p `divideAlls`
    [ mqp
    | q' <- [ 0 .. q ]
    , p' <- [ 1 .. p ]
    , q' + p' < q + p
    , p `mod` p' == 0
    , let mqp = m q' p'
    , not (isZero mqp)
    ]

-- |M_{q,p}|
d :: Int -> Int -> Int
d q p = case degree (m q p) of Just k -> k ; Nothing -> -1
```

This is using numeric-prelude and MemoTrie from Hackage, but with a reimplemented divMod for monic polynomials that doesn't try to divide by an Integer (which will always be \(1\) for monic polynomials). The core polynomial divMod from numeric-prelude needs a Field for division, and the integers don't form a field.

Tabulating this **attempt** at \(\left|M_{q,p}\right|\) (`d q p`) for various small \(q,p\) gives:

| q \ p | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 1 | 1 | 3 | 6 | 15 | 27 | 63 | 120 | 252 | 495 | 1023 | 2010 | 4095 | 8127 | 16365 | 32640 |
| 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | |
| 2 | 1 | 2 | 6 | 12 | 30 | 54 | 126 | 240 | 504 | 990 | 2046 | 4020 | 8190 | 16254 | | |
| 3 | 3 | 3 | 12 | 24 | 60 | 108 | 252 | 480 | 1008 | 1980 | 4092 | 8040 | 16380 | | | |
| 4 | 7 | 8 | 21 | 48 | 120 | 216 | 504 | 960 | 2016 | 3960 | 8184 | 16080 | | | | |
| 5 | 15 | 15 | 48 | 90 | 240 | 432 | 1008 | 1920 | 4032 | 7920 | 16368 | | | | | |
| 6 | 31 | 32 | 96 | 192 | 465 | 864 | 2016 | 3840 | 8064 | 15840 | | | | | | |
| 7 | 63 | 63 | 189 | 384 | 960 | 1701 | 4032 | 7680 | 16128 | | | | | | | |
| 8 | 127 | 128 | 384 | 768 | 1920 | 3456 | 8001 | 15360 | | | | | | | | |
| 9 | 255 | 255 | 768 | 1530 | 3840 | 6912 | 16128 | | | | | | | | | |
| 10 | 511 | 512 | 1533 | 3072 | 7680 | 13824 | | | | | | | | | | |
| 11 | 1023 | 1023 | 3072 | 6144 | 15345 | | | | | | | | | | | |
| 12 | 2047 | 2048 | 6144 | 12288 | | | | | | | | | | | | |
| 13 | 4095 | 4095 | 12285 | | | | | | | | | | | | | |
| 14 | 8191 | 8192 | | | | | | | | | | | | | | |
| 15 | 16383 | | | | | | | | | | | | | | | |

\(|M_{0,p}|\) is known to be A000740. \(|M_{2,p}|\) appears to be A038199. \(|M_{q,1}|\) appears to be A000225. \(|M_{q,2}|\) appears to be A166920.

**HOWEVER there is a fatal flaw**. The polynomials might not be irreducible, which means that `divideAlls` might not be removing all of the lower (pre)period roots! A proper solution would be to port the code to a computer algebra system that can factor polynomials into irreducible polynomials. Or alternatively, mathematically prove that the polynomials in question will always be irreducible (as far as I know this is an open question, verified only for \(M_{0,p}\) up to \(p = 10\), according to Corollary 5.6 (Centers of Components as Algebraic Numbers)).

You can download my full Haskell code.

**UPDATE** I wrote some Sage code (Python-based) with an improved algorithm (I think it's perfect now). The values all matched the original table, and I extended it with further values and links to OEIS. All the polynomials in question are irreducible, up to the \(p + q < 16\) limit. No multiplicities greater than one were reported. Code:

```
@parallel(16)
def core(q, p, allroots):
    mpq = 0
    roots = set()
    R.<x> = ZZ[]
    w = 0*x
    for i in range(q):
        w = w^2 + x
    wq = w
    for i in range(p):
        w = w^2 + x
    wqp = w
    f = wqp - wq
    r = f.factor()
    for i in r:
        m = i[0]
        k = i[1]
        if not (m in allroots) and not (m in roots):
            roots.add(m)
            mpq += m.degree()
        if k > 1:
            print(("multiplicity > 1", k, "q", q, "p", p, "degree", m.degree()))
    return (q, p, mpq, roots)

allroots = set()
for n in range(16):
    print(n)
    res = sorted(list(core([(q, n - q, allroots) for q in range(n)])))
    for r in res:
        t = r[1]
        q = t[0]
        p = t[1]
        mpq = t[2]
        roots = t[3]
        print((q, p, mpq, len(roots), [root.degree() for root in roots]))
        allroots |= roots
```

**UPDATE2** I bumped the table to \(q + p < 17\). I ran into some OOM-kills, so I had to run it with less parallelism to get it to finish.

**UPDATE3** I found a simple function that fits all the data in the table, but I don't know if it is correct or will break for larger values. Code (the function is called `f`):

```haskell
import Math.NumberTheory.ArithmeticFunctions (divisors, moebius, runMoebius) -- arithmoi
import Data.Set (toList) -- containers

mu :: Integer -> Integer
mu = runMoebius . moebius

mqps :: [[Integer]]
mqps =
  [[1,1,3,6,15,27,63,120,252,495,1023,2010,4095,8127,16365,32640]
  ,[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
  ,[1,2,6,12,30,54,126,240,504,990,2046,4020,8190,16254]
  ,[3,3,12,24,60,108,252,480,1008,1980,4092,8040,16380]
  ,[7,8,21,48,120,216,504,960,2016,3960,8184,16080]
  ,[15,15,48,90,240,432,1008,1920,4032,7920,16368]
  ,[31,32,96,192,465,864,2016,3840,8064,15840]
  ,[63,63,189,384,960,1701,4032,7680,16128]
  ,[127,128,384,768,1920,3456,8001,15360]
  ,[255,255,768,1530,3840,6912,16128]
  ,[511,512,1533,3072,7680,13824]
  ,[1023,1023,3072,6144,15345]
  ,[2047,2048,6144,12288]
  ,[4095,4095,12285]
  ,[8191,8192]
  ,[16383]
  ]

m :: Integer -> Integer -> Integer
m q p = mqps !! fromInteger q !! fromInteger (p - 1)

f :: Integer -> Integer -> Integer
f 0 p = sum [ mu (p `div` d) * 2 ^ (d - 1) | d <- toList (divisors p) ]
f 1 _ = 0
f q 1 = 2 ^ (q - 1) - 1
f q p = (2 ^ (q - 1) - if q `mod` p == 1 then 1 else 0) * f 0 p

check :: Bool
check = and [ f q p == m q p | n <- [1 .. 16], p <- [1 .. n], let q = n - p ]

main :: IO ()
main = print check
```

**UPDATE4** Progress! I found a paper with the answer:

Misiurewicz Points for Polynomial Maps and Transversality

Benjamin Hutz, Adam Towsley

Corollary 3.3. The number of \((m,n)\) Misiurewicz points for \(f_{d,c}\) is \[ M_{m,n} = \begin{cases} \sum_{k \mid n} \mu\left(n \over k \right) d^{k-1} & m = 0 \\ (d^m - d^{m-1} - d + 1) \sum_{k \mid n} \mu\left(n \over k \right) d^{k-1} & m \ne 0 \text{ and } n \mid (m - 1) \\ (d^m - d^{m-1}) \sum_{k \mid n} \mu\left(n \over k \right) d^{k-1} & \text{otherwise} \end{cases} \]

They have \(f_{d,c}(z) = z^d + c\), so this result is more general than the case \(d = 2\) I was researching in this post. The formula I came up with is the same, with minor notational differences.
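For \(d = 2\) the corollary is easy to check against the table above (a Python sketch; `count_misiurewicz` is my name for it, not from the paper):

```python
# Corollary 3.3 for f_{d,c}(z) = z^d + c, checked against table values.
def mu(n):
    # Moebius function by trial factorization
    result, k = 1, 2
    while k * k <= n:
        if n % k == 0:
            n //= k
            if n % k == 0:
                return 0
            result = -result
        k += 1
    if n > 1:
        result = -result
    return result

def count_misiurewicz(m, n, d=2):
    base = sum(mu(n // k) * d**(k - 1)
               for k in range(1, n + 1) if n % k == 0)
    if m == 0:
        return base
    if (m - 1) % n == 0:   # n divides (m - 1)
        return (d**m - d**(m - 1) - d + 1) * base
    return (d**m - d**(m - 1)) * base
```

For example, \(|M_{6,5}| = (2^6 - 2^5 - 2 + 1) \cdot 15 = 465\), matching the table.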

]]>If you're near Split, Croatia this August you can check out the Sounding DIY exhibition, which features an updated version of my 2008 work Puzzle:

Sounding DIY

Tin Dožić, Claude Heiland-Allen, Noise Orchestra, Bioni Samp, Hrvoje Hiršl / Davor Branimir Vincze

Aug 6th 2018. - Aug 22nd 2018.

Curator/s: Darko Fritz, Laura Netz

Supported by: Ministarstvo kulture RH, Zaklada Kultura nova

MKC, Ulica slobode 28, Split

opening and live acts 6th August at 21 h

live acts by Hrvoje Hiršl / Davor Branimir Vincze and Tin Dožić.

Monday-Saturday 10-13 h, 17-21 h

free entrance

I didn't manage to get it running properly on an RPi, otherwise I could have sent just an SD card filesystem image, so it'll be running on my old 2006-era laptop, which I hope will survive the travel (both ways).

]]>Some of my images are in an online exhibition this month:

eRR0R(iii)

an exploration of the fertility of errors

in a world that covers its flaws in the blinding light of universal truths and institutionally reinforced regimes of visibility, we are interested in the fertile shades opened up by errors. the antiseptic intellectual environment our societies try to achieve, while arguably “healthy” and “safe” for the established values, has the huge disadvantage of obscuring any fundamentally different modes of existence. we [looked for] submissions that explore the fertility of errors and question our inherited worldview.

Here's my mini statement:

I work with mathematics and algorithms to make art. Sometimes it doesn't go to plan. I present recent failures experienced on the road to successful implementation of desired results.

I'm not sure if it'll be archived after the month is over, so experience it while you can.

]]>The essence of perturbation is to find the difference between the high precision values of a function at two nearby points, while using only the low precision value of the difference between the points. In this post I'll write the high precision points in CAPITALS and the low precision deltas in lowercase. There are two auxiliary operations needed to define the perturbation \(P\), \(B\) replaces all variables by their high precision version, and \(W\) replaces all variables by the sum of the high precision version and the low precision delta. Then \(P = W - B\):

\[\begin{aligned} B(f) &= f(X) &\text{ (emBiggen)}\\ W(f) &= f(X + x) &\text{ (Widen)}\\ P(f) &= W(f) - B(f) \\ &= f(X + x) - f(X) &\text{ (Perturb)} \end{aligned}\]

For example, perturbation of \(f(z, c) = z^2 + c\), i.e., \(P(f)\), works out like this:

\[\begin{aligned} & P(f) \\ \to & f(Z + z, C + c) - f(Z, C) \\ \to & (Z + z)^2 + (C + c) - (Z^2 + C) \\ \to & Z^2 + 2 Z z + z^2 + C + c - Z^2 - C \\ \to & 2 Z z + z^2 + c \end{aligned}\]

where in the final result the additions of \(Z\) and \(z\) have mostly cancelled out and all the terms are "small".

For polynomials, regular algebraic manipulation can lead to successful outcomes, but for other functions it seems some "tricks" are needed. For example, \(|x|\) (over \(\mathbb{R}\)) can be perturbed with a "diffabs" function proceeding via case analysis:

```
// evaluate |X + x| - |X| without catastrophic cancellation
function diffabs(X, x) {
  if (X >= 0) {
    if (X + x >= 0) {
      return x;
    } else {
      return -(2 * X + x);
    }
  } else {
    if (X + x > 0) {
      return 2 * X + x;
    } else {
      return -x;
    }
  }
}
```

This formulation was developed by laser blaster at fractalforums.com.

For transcendental functions, other tricks are needed. Here for example is a derivation of \(P(\sin)\):

\[\begin{aligned} & P(\sin) \\ \to & \sin(X + x) - \sin(X) \\ \to & \sin(X) \cos(x) + \cos(X) \sin(x) - \sin(X) \\ \to & \sin(X) (\cos(x) - 1) + \cos(X) \sin(x) \\ \to & \sin(X) \left(-2\sin^2\left(\frac{x}{2}\right)\right) + \cos(X) \sin(x) \\ \to & \sin(X) \left(-2\sin^2\left(\frac{x}{2}\right)\right) + \cos(X) \left(2 \cos\left(\frac{x}{2}\right) \sin\left(\frac{x}{2}\right)\right) \\ \to & 2 \sin\left(\frac{x}{2}\right) \left(-\sin(X) \sin\left(\frac{x}{2}\right) + \cos(X) \cos\left(\frac{x}{2}\right)\right) \\ \to & 2 \sin\left(\frac{x}{2}\right) \cos\left(X + \frac{x}{2}\right) \end{aligned}\]

Knowing when to apply the sum- and double-angle-formulae, is a bit of a mystery, especially if the end goal is not known beforehand. This makes implementing a symbolic algebra program that can perform these derivations quite a challenge.

In lieu of a complete symbolic algebra program that does it all on demand, here are a few formulae that I calculated, some by hand, some using Wolfram Alpha:

\[\begin{aligned} P(a) &= 0 \\ P(a f) &= a P(f) \\ P(f + g) &= P(f) + P(g) \\ P(f g) &= P(f) W(g) + B(f) P(g) \\ P\left(\frac{1}{f}\right) &= -\frac{P(f)}{B(f)W(f)} \\ P(|f|) &= \operatorname{diffabs}(B(f), P(f)) \\ P(\exp) &= \exp(X) \operatorname{expm1}(x) \\ P(\log) &= \operatorname{log1p}\left(\frac{x}{X}\right) \\ P(\sin \circ f) &= \phantom{-}2 \sin\left(\frac{P(f)}{2}\right)\cos\left(\frac{W(f)+B(f)}{2}\right) \\ P(\cos \circ f) &= -2 \sin\left(\frac{P(f)}{2}\right)\sin\left(\frac{W(f)+B(f)}{2}\right) \\ P(\tan \circ f) &= \frac{\sin(P(f))}{\cos(B(f))\cos(W(f))} \\ P(\sinh \circ f) &= 2 \sinh\left(\frac{P(f)}{2}\right)\cosh\left(\frac{W(f)+B(f)}{2}\right) \\ P(\cosh \circ f) &= 2 \sinh\left(\frac{P(f)}{2}\right)\sinh\left(\frac{W(f)+B(f)}{2}\right) \\ P(\tanh \circ f) &= \frac{\sinh(P(f))}{\cosh(B(f))\cosh(W(f))} \\ \end{aligned}\]
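These identities are easy to spot-check numerically (Python, using the standard library's `expm1` and `log1p`), comparing against the catastrophically-cancelling direct forms at magnitudes where double precision is still exact enough:

```python
from math import sin, cos, sinh, cosh, exp, expm1, log, log1p

X, x = 1.2345, 1e-3  # arbitrary reference value and small delta

# note (W + B)/2 = X + x/2 and P = x for the identity function
p_sin = 2 * sin(x / 2) * cos(X + x / 2)
p_cos = -2 * sin(x / 2) * sin(X + x / 2)
p_sinh = 2 * sinh(x / 2) * cosh(X + x / 2)
p_cosh = 2 * sinh(x / 2) * sinh(X + x / 2)
p_exp = exp(X) * expm1(x)
p_log = log1p(x / X)
```

At deep zooms \(x\) is far smaller than this and the direct forms lose all their digits, while these stay accurate.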

I hope to find time to add these to et soon.

**EDIT** there is a simpler and more general way to derive \(P(\sin)\) and so on, using \(\sin(a) \pm \sin(b)\) formulae...

Atom domains in the Mandelbrot set surround mini-Mandelbrot islands. So too in the Burning Ship fractal. These pictures are coloured using the period for hue, and distance estimation for value. Saturation is a simple switch on escaped vs unescaped pixels. Rendered with some Fragmentarium code.

The algorithm is simple: store the iteration count whenever \(|Z|\) reaches a new minimum; the last iteration count so stored is the atom domain. If you initialize \(Z\) with \(0\), start checking only after the first iteration (otherwise the initial \(0\) is always the minimum). IEEE floating point has infinities, so you can initialize the stored minimum \(|Z|\) to 1.0/0.0.
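A Python sketch of the algorithm for the Mandelbrot case (the Burning Ship version just swaps in its own iteration):

```python
# Atom domain: the iteration count at which |Z| last reached a new minimum.
def atom_domain(c, maxiters=100):
    z = 0j
    best = float("inf")  # the 1.0/0.0 initialization
    domain = 0
    for n in range(1, maxiters + 1):
        z = z * z + c
        if abs(z) < best:
            best = abs(z)
            domain = n
        if abs(z) > 2:   # escaped (Mandelbrot escape radius)
            break
    return domain
```

For \(c = -1\) (the period-2 nucleus) this gives 2; for \(c = -2\) (a Misiurewicz point whose orbit never comes back near 0) it gives 1.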

I was hoping to use atom domains for interior checking, by using Newton's method to find limit cycles and seeing if their maximal Lyapunov exponent is less than 1, but it didn't work. My guesses are that Newton's method doesn't converge to the limit cycle, but instead to some phantom attractor, or that the maximal Lyapunov exponent isn't an indicator of interiority as I had hoped (I tried with plain determinant too, no joy there either). The method marked some exterior points as interior.

One thing that is interesting to me is the grey region of unescaped pixels with chaotic atom domains (the region is that colour because the anti-aliasing blends subpixels scattered across the whole spectrum into a uniform grey). I'm not sure whether it is an artifact of rendering at a limited iteration count and should be exterior, or if it really is interior and chaotic.

]]>The Burning Ship fractal is defined by iterations of:

\[ \begin{aligned} X &\leftarrow X^2 - Y^2 + A \\ Y &\leftarrow 2|XY| + B \end{aligned} \]

The Burning Ship set is those points \(A + i B \in \mathbb{C}\) whose iteration starting from \(X + i Y = 0\) remains bounded. In practice one iterates a maximum number of times, or until the point diverges (exercise suggested on Reddit: prove a lower bound on an escape radius that is sufficient for the Burning Ship, the Mandelbrot set has the bound \(R = 2\)). Note that traditionally the Burning Ship is rendered with the imaginary \(B\) axis increasing downwards, which makes the "ship" the right way up.

Traditional (continuous) iteration count (escape time) rendering tends to lead to a grainy appearance for this fractal, so I prefer distance estimation. To compute a distance estimate one can use partial derivatives (aka Jacobian matrix):

\[ \begin{aligned} \frac{\partial X}{\partial A} &\leftarrow 2 \left(X \frac{\partial X}{\partial A} - Y \frac{\partial Y}{\partial A}\right) + 1 \\ \frac{\partial X}{\partial B} &\leftarrow 2 \left(X \frac{\partial X}{\partial B} - Y \frac{\partial Y}{\partial B}\right) \\ \frac{\partial Y}{\partial A} &\leftarrow 2 \operatorname{sgn}(X) \operatorname{sgn}(Y) \left( X \frac{\partial Y}{\partial A} + \frac{\partial X}{\partial A} Y \right) \\ \frac{\partial Y}{\partial B} &\leftarrow 2 \operatorname{sgn}(X) \operatorname{sgn}(Y) \left( X \frac{\partial Y}{\partial B} + \frac{\partial X}{\partial B} Y \right) + 1 \end{aligned} \]

Then the distance estimate for an escaped point is (thanks to gerrit on fractalforums.org):

\[ d = \frac{\left|\left|\begin{pmatrix}X & Y\end{pmatrix}\right|\right|^2 \log \left|\left|\begin{pmatrix}X & Y\end{pmatrix}\right|\right|}{\left|\left|\begin{pmatrix}X & Y\end{pmatrix} \cdot \begin{pmatrix} \frac{\partial X}{\partial A} & \frac{\partial X}{\partial B} \\ \frac{\partial Y}{\partial A} & \frac{\partial Y}{\partial B} \end{pmatrix} \right|\right|} \]

Then scale \(d\) by the pixel spacing, colouring points with small distance dark, and large distance light. I colour interior points dark too.
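Putting the recurrences and the distance formula together (a Python sketch, not renderer code): on the line \(b = 0\) with \(a > 0\) the Burning Ship reduces to the real Mandelbrot iteration, so for \(a = 0.5\) the estimate should be of the same order as the true exterior distance \(0.25\) (to the tip of the Mandelbrot set at \(c = 0.25\)).

```python
from math import sqrt, log, copysign

def sgn(v):
    return 0.0 if v == 0.0 else copysign(1.0, v)

def distance_estimate(a, b, maxiters=1000, escape2=1e12):
    x = y = 0.0
    dxa = dxb = dya = dyb = 0.0
    for _ in range(maxiters):
        # derivative recurrences use the pre-update x, y
        dxa, dxb, dya, dyb = (2 * (x * dxa - y * dya) + 1,
                              2 * (x * dxb - y * dyb),
                              2 * sgn(x) * sgn(y) * (x * dya + dxa * y),
                              2 * sgn(x) * sgn(y) * (x * dyb + dxb * y) + 1)
        x, y = x * x - y * y + a, abs(2 * x * y) + b
        r2 = x * x + y * y
        if r2 > escape2:
            # (X Y) . J, then the distance formula
            ux = x * dxa + y * dya
            uy = x * dxb + y * dyb
            return r2 * log(sqrt(r2)) / sqrt(ux * ux + uy * uy)
    return 0.0  # did not escape: interior, or maxiters too small

d = distance_estimate(0.5, 0.0)
```

The large escape radius makes the logarithmic term settle down before the formula is applied.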

Perturbation techniques can be used for efficient deep zooms. Compute a high precision orbit of \(A,B,X,Y\), and have low precision deltas \(a,b,x,y\) for each pixel. It works out as:

\[ \begin{aligned} x &\leftarrow (2 X + x) x - (2 Y + y) y + a \\ y &\leftarrow 2 \operatorname{diffabs}(XY, Xy + xY + xy) + b \end{aligned} \]

where \(\operatorname{diffabs}(c, d) = |c + d| - |c|\) but expanded into case analysis to avoid catastrophic cancellation with limited precision floating point (this is I believe due to laser blaster on fractalforums.com):

\[ \operatorname{diffabs}(c, d) = \begin{cases} d & c \ge 0, c + d \ge 0 \\ -2c - d & c \ge 0, c + d < 0 \\ 2c + d & c < 0, c + d > 0 \\ -d & c < 0, c + d \le 0 \end{cases} \]
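The case analysis is easy to verify against the direct (cancelling) definition at exactly-representable values (Python):

```python
# diffabs(c, d) = |c + d| - |c|, branch by branch, without cancellation.
def diffabs(c, d):
    if c >= 0:
        return d if c + d >= 0 else -(2 * c + d)
    else:
        return 2 * c + d if c + d > 0 else -d
```

At these test values the comparison is exact; the point of the branches is that they stay accurate when `c` is large and `d` is tiny.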

Due to the non-analytic functions, series approximation cannot be used. As with perturbation rendering of the Mandelbrot set, glitches can occur. It seems that Pauldelbrot's glitch criterion (originally posted on fractalforums.com) is also applicable, with a glitch when:

\[ |(X + x) + (Y + y) i|^2 < 10^{-3} |X + i Y|^2 \]

Glitched pixels can be recalculated with a new reference. It may be beneficial to pick as new references those pixels with the smallest LHS of the glitch criterion. The derivatives for distance estimation don't need to be perturbed as they are not "small", one can use \(X + x\) etc in the derivative recurrences.

When navigating the Burning Ship, it is noticeable that "mini-ships" occur, being distorted self-similar copies of the whole set. When passing by, embedded Julia sets appear, similarly to the Mandelbrot set, with period doubling when approaching mini-ships. To zoom directly to mini-ships, one can use Newton's method in 2 real variables. First one needs the period, which can be found by iterating the corners of a polygon until it surrounds the origin, that iteration number is the period (this method is due to Robert Munafo's mu-ency, originally for the Mandelbrot set, but seems to work for the Burning Ship too: perhaps the non-conformal folding is sufficiently rare to be unproblematic in practice). Newton's method iterations are like this:

\[ \begin{pmatrix} A \\ B \end{pmatrix} \leftarrow \begin{pmatrix} A \\ B \end{pmatrix} - \begin{pmatrix} \frac{\partial X}{\partial A} & \frac{\partial X}{\partial B} \\ \frac{\partial Y}{\partial A} & \frac{\partial Y}{\partial B} \end{pmatrix}^{-1} \begin{pmatrix} X \\ Y \end{pmatrix} \]
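A Python sketch of this Newton iteration, reusing the Jacobian recurrences above (illustrative, not production code); as a sanity check it is started on the real line, where the Burning Ship agrees with the real Mandelbrot map and the period-3 nucleus is known to be near \(a = -1.7548776662466927\), \(b = 0\).

```python
from math import copysign

def sgn(v):
    return 0.0 if v == 0.0 else copysign(1.0, v)

def newton_nucleus(a, b, p, steps=30):
    # Newton's method in 2 real variables for f^p(0, 0) = (0, 0)
    for _ in range(steps):
        x = y = 0.0
        dxa = dxb = dya = dyb = 0.0
        for _ in range(p):
            dxa, dxb, dya, dyb = (2 * (x * dxa - y * dya) + 1,
                                  2 * (x * dxb - y * dyb),
                                  2 * sgn(x) * sgn(y) * (x * dya + dxa * y),
                                  2 * sgn(x) * sgn(y) * (x * dyb + dxb * y) + 1)
            x, y = x * x - y * y + a, abs(2 * x * y) + b
        # solve J delta = (x, y) via the 2x2 inverse
        det = dxa * dyb - dxb * dya
        a, b = (a - (dyb * x - dxb * y) / det,
                b - (-dya * x + dxa * y) / det)
    return a, b

a, b = newton_nucleus(-1.8, 0.0, 3)
```

A real implementation needs high precision arithmetic and should bail out when the Jacobian becomes singular.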

The final part is the mini-ship size estimate, to know how deep to zoom. The Mandelbrot size estimate seems to work with minor modifications to use Jacobian matrices instead of complex numbers.

These concrete equations are specific to the quadratic Burning Ship, but the methods in principle apply to many escape time fractals.

]]>Recently I've been revisiting the code from my Monotone, extending it to use OpenGL cube maps to store the feedback texture instead of nonlinear warping in a regular texture. This means I can use Möbius transformations instead of simple similarities and still avoid excessively bad blurriness and edge artifacts. I've been toying with colour too: but unlike the chaos game algorithm for fractal flames (which can colour according to a "hidden" parameter, leading to interesting and dynamic colour structures), the texture feedback mechanism I'm using can only cope with "structural" RGB colours (with an alpha channel for overall brightness). A 4x4 colour matrix seems to be more interesting than the off-white multipliers I was using to start with.

Some videos:

- Moebius Bubble Chamber (stereographic projection, black and white)
- Moebius Blueprints (360, slight colour, low resolution)
- Moebius Blueprints 2 (360, more colour, high resolution)
- Moenotone Demo (360, colour, high resolution)
- Moenotone Demo 2 (stereographic projection, colour)

I updated my Inflector Gadget, adding a keyframe animation feature among other goodies. I also made a new page for it, where all the downloads and documentation are to be found. Go check it out!

PS: Inflector Gadget can make images like these in very little time:

]]>Previously I wrote about an automated Julia morphing method extrapolating patterns in the binary representation of external angles, and then tracing external rays. However this was impractical as it was \(O(p^2)\) for final period \(p\) and the period typically more than doubles at each next level of morphing. This week I devised an \(O(p)\) algorithm, which requires a little bit of setting up and doesn't always work but when it works it works very well.

The first key insight was that in embedded Julia sets, the primary spirals and tips are distinguishable by the preperiods of the Misiurewicz points at their centers. Moreover when using the "full" Newton's method algorithm for Misiurewicz points that rejects lower preperiods by division, the basins of attraction comfortably enclose the center of the embedded Julia set itself.

So, we can choose the appropriate (pre)period to get to the center of the spiral either inwards towards the main body of the Mandelbrot set or outwards towards its tips. Now, from a Misiurewicz center of a spiral, Newton's method for periodic nucleus finding will work for any of the periods that form the structural spine of the spiral - these go up by a multiple of the period of the influencing island. From these nuclei we can jump to the Misiurewicz spiral on the other side, using Newton's method again. In this way we can algorithmically find any nucleus or Misiurewicz point in the structure of the embedded Julia set.

Some images should make this clearer at this point: blue means Newton's method for nucleus, red means Newton's method for Misiurewicz point, nuclei are labeled with their period, Misiurewicz points with preperiod and period in that order, separated by 'p'.

The second key insight was that the atom domain coordinate of the tip of the treeward branch at each successive level was scaled by a power of 1.5 from the one at the previous level. Because atom domain coordinates correspond to the unit disc, this means they get closer to the nucleus. This provided an initial guess for finding the Misiurewicz point at the tip more precisely (the first insight only applies to "top-level" embedded Julia sets, not their morphings: there is a "symmetry trap" that breaks Newton's method, because the boundary of the basins of attraction passes through the point we want to start from). I implemented a Newton's method iteration to find a point with a given atom domain coordinate. This relationship is only true in the limit, so the input to the automatic morphing algorithm starts at the first morphing, rather than the top-level embedded Julia set.

My first test was quite challenging: to morph a tree with length-7 arms, from an embedded Julia set at angled internal address:

1_{1/2}→2_{1/2}→3_{2/5}→15_{4/7}→88

The C code (full link at the bottom) that sets up the parameters for this morphing looks like this:

```c
#ifdef EXAMPLE_1
const char *embedded_julia_ray = ".011100011100011011100011011100011100011100011011100011011100011100011100011011100011100001110001101110010010010010010010010001110001110001101110001101110001110001110001101110001101110001110001110001101110001110000111000110111(001)";
int ray_preperiod = 225;
int ray_period = 3;
double _Complex ray_endpoint = -1.76525599938987623396492597243303e+00
                             + 1.04485517375987067290733632798876e-02 * I;
int influencing_island_period = 3;
int embedded_julia_set_period = 88;
int denominator_of_rotation = 5;
int arm_length = 7;
double view_size_multiplier = 3600;
#endif
```

The ray lands on the treeward-tip Misiurewicz point of the first morphed Julia set; this endpoint is cached to avoid long ray-tracing computations. The next four numbers are involved in the iterative morphing calculations of the relevant periods and preperiods, with the arm length being the primary variable to adjust once the Julia set is found. The view size multiplier sets how far to zoom out from the central morphed figure to frame the result nicely; maybe I can find a good heuristic to determine this based on arm length.

The morphing looks like this:

The second example is similar, starting with the island with this angled internal address, with tree morphing arm length 9.

1_{1/2}→2_{1/2}→3_{1/2}→4_{1/2}→8_{1/15}→116_{1/2}→119

The third and final example (for now) is simpler still, starting at the island with this internal address, with tree morphing arm length 1.

1_{1/3}→3_{1/2}→4_{11/23}→89

The code for example 3 contains an ugly hack, because the method for guessing the location of the next Misiurewicz point (for starting Newton's method iterations) isn't good enough: the radius is accurate, but the angle is not, so my atom domain coordinate method is clearly not the correct one in general...

Here are the timings in seconds for calculating the coordinates (not parallelized) and rendering the images. I used m-perturbator-offline at 1280x720; the parallel efficiency is somewhat low because it doesn't know that the center point is already a good reference and tries to find one in the view - it would be much faster if I let it take the primary reference as external input - more things TODO.

| morph | coordinates (eg1) | coordinates (eg2) | coordinates (eg3) | rendering eg1 (real) | rendering eg1 (user) | rendering eg2 (real) | rendering eg2 (user) | rendering eg3 (real) | rendering eg3 (user) |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 0 | 0 | 0 | 0.633 | 2.04 | 0.625 | 2.00 | 0.551 | 1.74 |
| 2 | 0 | 0 | 0 | 0.817 | 2.41 | 0.943 | 2.31 | 0.932 | 3.19 |
| 3 | 0 | 0 | 0 | 1.16 | 3.56 | 1.43 | 4.53 | 1.26 | 4.19 |
| 4 | 1 | 0 | 0 | 1.37 | 4.38 | 1.86 | 6.06 | 1.73 | 5.86 |
| 5 | 1 | 2 | 1 | 2.29 | 7.45 | 3.43 | 11.4 | 2.67 | 8.78 |
| 6 | 2 | 4 | 2 | 3.95 | 12.3 | 5.73 | 18.4 | 4.26 | 14.9 |
| 7 | 10 | 14 | 2 | 7.42 | 23.6 | 8.66 | 26.7 | 6.90 | 21.1 |
| 8 | 24 | 37 | 7 | 28.2 | 95.1 | 42.4 | 142 | 12.6 | 36.4 |
| 9 | 92 | 155 | 27 | 63.7 | 257 | 92.6 | 292 | 21.5 | 63.5 |
| 10 | 288 | 442 | 51 | 141 | 419 | 207 | 609 | 77.5 | 263 |
| total | 418 | 654 | 90 | 259 | 774 | 372 | 1120 | 137 | 430 |

The code is part of my mandelbrot-numerics project. You also need my mandelbrot-symbolics project to compile the example program, and you may also want mandelbrot-perturbator to render the output (note: the GTK version is currently hardcoded to a 65536 maximum iteration count, which isn't enough for deeper morphed Julia sets - adding runtime configuration for this is my next priority). Other deep zoomers are available, for example my Kalles Fraktaler 2 + GMP fork, with Windows binaries available (which also work in WINE on Linux).

]]>