Pau Ros took some great pictures of my exhibition opening, part of Sonic Electronics Festival:

The exhibition is open until 27th April. Check the Chalton Gallery website for spacetime coordinates.

---

I have an exhibition coming up in April 2019 in London, UK.

Claude Heiland-Allen

Digital Art - Computer Graphics - Free/Libre Open Source Software

Chalton Gallery, 96 Chalton Street, Camden, London UK NW1 1HJ

Opening Thursday 11 April 2019, 6pm.

Concert Thursday 18 April 2019, 7pm.

Exhibition runs 12-27 April 2019.

Tuesdays: 8 am to 3 pm

Wednesday to Saturday: 11:30 am to 5:45 pm

*Digital print 120x60cm, framed*

Prismatic is rendered using a physics-based ray-tracer for spherically curved space. In spherical space the light ray geodesics eventually wrap around, meeting at the opposite pole to the observer. To compound the sphericity a projection is used that wraps the whole sphere-of-view from a point into a long strip.

The scene contains spheres of three different transparent materials (water, glass, quartz) symmetrically arranged at the vertices of a 24-cell. The equatorial plane is filled with a glowing opaque checkerboard, which acts as a light source with a daylight spectrum.

The 3D spherical space is embedded in 4D Euclidean (flat) space. Ray directions are represented by points on the “equator” around the ray source, and trigonometry transforms these ray directions appropriately when tracing the rays through curved space. The code is optimized to use simpler functions like square root and arithmetic instead of costly sines and cosines.
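The wrap-around of geodesics can be seen in a minimal sketch (my own illustration, not the exhibition's ray-tracer, and using the sines and cosines that the optimized code avoids): a geodesic of the unit 3-sphere embedded in \(\mathbb{R}^4\) is a great circle through the position \(P\) and the direction \(D\).

```python
import math

def geodesic_step(P, D, t):
    """Advance along a great-circle geodesic of the unit 3-sphere in R^4.

    P: position (unit 4-vector), D: direction (unit 4-vector orthogonal
    to P). Returns the new position and the transported direction."""
    c, s = math.cos(t), math.sin(t)
    P2 = tuple(c * p + s * d for p, d in zip(P, D))
    D2 = tuple(-s * p + c * d for p, d in zip(P, D))
    return P2, D2

# Example: start at a pole and head along one axis.
P = (1.0, 0.0, 0.0, 0.0)
D = (0.0, 1.0, 0.0, 0.0)
Q, _ = geodesic_step(P, D, math.pi)      # the antipode, where rays meet again
R, _ = geodesic_step(P, D, 2 * math.pi)  # full wrap-around back to the start
```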

The materials are all physically based, with refractive index varying with simulated light wavelength, which gives rainbow effects when different colours are refracted by different angles. To get the final image requires tracing a monochrome image at many different wavelengths, which are then combined into the XYZ colour space using tristimulus response curves for the light receptors in the human eye.

*Digital prints 20x30cm, 16 pieces, unframed*

The concept for Wedged is “playing Tetris optimally badly”. Badly in that no row is complete, and optimally in that each row has at most one empty cell, and the rectangle is filled. Additional aesthetic constraints are encoded in the source code to generate more pleasing images.

Starting from an empty rectangle, block off one cell in each row, subject to the constraint that blocked cells in nearby rows shouldn’t be too close to each other, and the blocked cells should be roughly evenly distributed between columns. Some of these blockings might be duplicates (taking into account mirroring and rotation), so pick only one from each equivalence class.
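The blocking stage can be sketched as follows (my illustration, with a hypothetical minimum-gap threshold and only the rectangle symmetries; Wedged's actual constraints, including the even column distribution, are in its source code):

```python
from itertools import product

def blockings(rows, cols, min_gap=2):
    """One blocked cell per row, requiring blocked cells in adjacent rows
    to be at least min_gap columns apart (an assumed threshold standing
    in for "not too close")."""
    result = []
    for bs in product(range(cols), repeat=rows):
        if all(abs(bs[r] - bs[r + 1]) >= min_gap for r in range(rows - 1)):
            result.append(bs)
    return result

def canonical(bs, cols):
    """One representative per equivalence class under the rectangle's
    symmetries: column mirror, row mirror, and 180-degree rotation."""
    images = [
        tuple(bs),
        tuple(cols - 1 - b for b in bs),           # mirror columns
        tuple(reversed(bs)),                       # mirror rows
        tuple(cols - 1 - b for b in reversed(bs)), # rotate 180 degrees
    ]
    return min(images)

boards = blockings(4, 5)
unique = sorted({canonical(bs, 5) for bs in boards})
```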

Starting from the top left empty cell in each of these boards, fill it with pieces that fit. Fitting means that the piece is entirely within the board, not overlapping any blocked cells or other pieces. There are some additional constraints to improve aesthetic appearance and reduce the number of output images: there should not be too many pieces of the same colour in the board, all adjacent pieces should be a different colour, and no piece should be able to slide into the space left when blocked cells are removed (this applies only to the long thin blue pieces, the other pieces can’t move due to the earlier constraint on nearby blocked cells).

The filling process has some other aesthetic constraints: the board must be diverse (there must be a wide range of distinct colours in each row and column), the complete board must have a roughly even number of pieces of each colour, and there shouldn’t be any long straight line boundaries between multiple pieces. The complete boards might have duplicates under symmetries (in the case that the original blocking arrangement was symmetrical), so pick only one from each equivalence class.

*Sound installation*

Generative techno. Dynamo creates music from carefully controlled randomness, using numbers to invent harmonies, melodies, and rhythms. Dynamo is a Pure-data patch which plays new techno tracks forever. It is a generative system, and not a DJ mix.

When it is time to generate a new track, Dynamo first picks some high level parameters like tempo, density, and the scale of notes to use. Then it fills in the details, such as the specific rhythms of each instrument and which notes to play in which order. Finally an overall sequence is applied to form the large scale musical structure.

Pure-data is deterministic, which makes Dynamo deterministic. To avoid the same output each time the patch is started, entropy is injected from outside the Pure-data environment.
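The two-stage process might be sketched like this (all ranges, scale choices, and pattern lengths are illustrative guesses, not Dynamo's actual values):

```python
import random

def new_track(seed):
    """Sketch of the generation order: high-level parameters first
    (tempo, density, scale), then the details (rhythm and melody)."""
    rng = random.Random(seed)
    tempo = rng.randrange(120, 150)           # BPM (hypothetical range)
    density = rng.uniform(0.25, 0.75)         # fraction of steps that sound
    scale = rng.choice([[0, 3, 5, 7, 10],         # minor pentatonic
                        [0, 2, 3, 5, 7, 8, 10]])  # natural minor
    rhythm = [rng.random() < density for _ in range(16)]
    melody = [rng.choice(scale) for _ in range(16)]
    return tempo, rhythm, melody

# The same seed always yields the same track, so injecting outside
# entropy (e.g. the clock) as the seed avoids identical output on start.
t1 = new_track(42)
t2 = new_track(42)
```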

*Audio-visual installation*

Sliding tile puzzles have existed for over a century; during the 15-puzzle craze of 1880 a cash prize was offered for a problem with no solution. In the Puzzle presented here the computer is manipulating the tiles. There is no malicious design, but insufficient specification means that no solution can be found; the automaton forever explores the state space, finding every way to position the tiles as good as the last…

Each tile makes a sound, and each possible position has a processing effect associated with it. Part of the Puzzle is to watch and listen carefully, to see and hear and try to pick apart what it is that the computer is doing, to reverse-engineer the machinery inside from its outward appearance. The video is built using eight squares; each coloured tile is textured with the whole Puzzle, descending into an infinite fractal cascade. The control algorithm is a Markov chain that avoids repetition.
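The repetition-avoiding control can be sketched as a random walk on the puzzle's state space that never immediately undoes its previous move (my Python illustration; the actual tile-control logic lives in pdlua and may differ):

```python
import random

OPPOSITE = {"up": "down", "down": "up", "left": "right", "right": "left"}
MOVES = {"up": -4, "down": 4, "left": -1, "right": 1}  # 4x4 row-major board

def legal_moves(blank):
    """Directions the blank cell can move on a 4x4 board."""
    ms = []
    if blank >= 4: ms.append("up")
    if blank < 12: ms.append("down")
    if blank % 4 > 0: ms.append("left")
    if blank % 4 < 3: ms.append("right")
    return ms

def walk(steps, seed=0):
    """Random walk that never immediately reverses the previous move,
    a minimal Markov chain avoiding the most obvious repetition."""
    rng = random.Random(seed)
    board = list(range(16))  # tile 0 is the blank
    blank, prev, history = 0, None, []
    for _ in range(steps):
        options = [m for m in legal_moves(blank)
                   if prev is None or m != OPPOSITE[prev]]
        move = rng.choice(options)
        target = blank + MOVES[move]
        board[blank], board[target] = board[target], board[blank]
        blank, prev = target, move
        history.append(move)
    return board, history
```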

Puzzle is implemented in Pure-data, using GEM for video and pdlua for the tile-control logic.

*Interactive installation*

A graph is a set of nodes and links between them. In GraphGrow the term is overloaded: there are visible graphs of nodes and links on the tablet computer, and a second implicit graph with links between the rules.

The visible graphs give GraphGrow its name - a fractal is grown from a seed graph by replacing each visible link with its corresponding rule graph, recursively. The correspondence is by colour: a yellow link corresponds to the graph with the yellow background, and so on. The implicit graph between rules thus *directs* the expansion. The implicit graph is also a *directed graph* (even more terminological overloading!).

The rule graphs are constrained, with two fixed nodes at left and right. When growing a graph, each link is replaced with the corresponding rule graph with the left-hand fixed node of the rule mapped to the start point of the link and the right-hand fixed node of the rule mapped to the end point of the link. The mapping is restricted to uniform scaling, rotation and translation. The fixed nodes are coloured white on the tablet.
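Using complex numbers, the restriction to uniform scaling, rotation and translation means each link induces a conformal affine map (my sketch, with hypothetical coordinates; the rule's fixed nodes are taken at \(0\) and \(1\)):

```python
def link_map(p, q):
    """Similarity (uniform scale, rotation, translation) taking the rule
    graph's left fixed node (at 0+0j) to the link start p and the right
    fixed node (at 1+0j) to the link end q. Complex multiplication keeps
    the map conformal, matching the restriction described above."""
    return lambda w: p + (q - p) * w

# Replacing a link from 2+1j to 4+1j: a rule node at 0.5+0.5j lands at
# the link's midpoint, displaced perpendicular to the link.
T = link_map(2 + 1j, 4 + 1j)
```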

The fractal is projected, along with rhythmic drones amplified through speakers. Both are generated from the graph data. Dragging the brightly coloured nodes in each of the four rule graphs on the tablet allows the gallery visitor to explore a subspace of graph-directed iterated function systems of similarities.

*Video installation*

Fractals are mathematical objects exhibiting detail at all scales. Escape-time fractals are plotted by iterating recurrence relations parameterised by pixel coordinates from a seed value until the values exceed an escape radius or until an arbitrary limit on iteration count is reached (this is to ensure termination, as some pixels may not escape at all). The colour of each pixel is determined by the distance of the point from the fractal structure: pixels near the fractal are coloured black and pixels far from the fractal are coloured white, or the reverse.

Escape-time fractals are generated by formulas, for example the Mandelbrot set emerges from *z* → *z*^{2} + *c* and the Burning Ship emerges from *x* + *i**y* → (|*x*| + *i*|*y*|)^{2} + *c*, where *c* is the coordinates of each pixel. Hybrid fractals combine different formulas into one more complicated formula: for example one might perform one iteration of the Mandelbrot set formula, then one iteration of the Burning Ship formula, then two more iterations of the Mandelbrot set formula, repeating this sequence in a loop.
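A hybrid loop might be sketched like this (my illustration, not any particular software's implementation), using `"MBMM"` for the example sequence in the text:

```python
def hybrid(c, sequence="MBMM", max_iter=100, escape_radius=2.0):
    """Escape-time iteration interleaving formulas in a fixed loop:
    'M' is one Mandelbrot step z -> z^2 + c, 'B' one Burning Ship step
    (absolute values of both parts taken first). Returns the iteration
    count, or max_iter if the point never escapes."""
    z = 0j
    for n in range(max_iter):
        if abs(z) > escape_radius:
            return n
        if sequence[n % len(sequence)] == "B":
            z = complex(abs(z.real), abs(z.imag))
        z = z * z + c
    return max_iter

inside = hybrid(0j)       # the origin never escapes
outside = hybrid(3 + 0j)  # far points escape almost immediately
```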

Claude Heiland-Allen is an artist from London interested in the complex emergent behaviour of simple systems, unusual geometries, and mathematical aesthetics.

From 2005 through 2011 Claude was a member of the GOTO10 collective, whose mission was to promote Free/Libre Open Source Software in Art. GOTO10 projects included the make art Festival (Poitiers, France), the Puredyne GNU/Linux distribution, and the GOSUB10 netlabel. Since 2011 he has continued as an unaffiliated independent artist and researcher.

Claude has performed, exhibited and presented internationally, including in the United Kingdom (London, Cambridge, Winchester, Lancaster, Oxford, Sheffield), the Netherlands (Leiden, Amsterdam), Austria (Linz, Graz), Germany (Cologne, Berlin), France (Toulouse, Poitiers, Paris), Spain (Gijon), Norway (Bergen), Slovenia (Maribor), Finland (Helsinki), and Canada (Montreal).

Claude’s larger artistic projects include RDEX (an exploration of digitally simulated reaction diffusion chemistry) and clive (a minimal environment for live-coding audio in the C programming language). As a software developer, Claude has developed several programs and libraries used by the wider free software community, including pdlua (extending the Puredata multimedia environment with the Lua programming language), buildtorrent (a program to create .torrent files), and hp2pretty (a program to graph Haskell heap profiling output).

---

A recent post by matty686 on fractalforums about Photoshop IFS fractals got me interested. I didn't manage to do it in GIMP 2.8.18, but succeeded with Inkscape 0.92.4. The process needs two PNG images; I didn't succeed with only one. Once you have set up the transformed linked images that reference "bitmap.png", repeatedly export the page to "bitmap2.png" (pressing return in the box with the filename does this quickly) and in a terminal run "mv bitmap2.png bitmap.png" when the export has finished. Here are some explanatory screenshots:

This could probably be scripted within Inkscape so you don't have to do so much manual repetitive work at the end: this is just a proof of concept.

---

In yesterday's post I showed how dividing by unwanted roots leads to better stability when finding periodic nuclei \(c\) that satisfy \(f_c^p(0) = 0\) where \(f_c(z) = z^2 + c\). Today I'll show how two techniques can bring this gain to finding periodic cycles \(z_0\) that satisfy \(f_c^p(z_0) = z_0\) for a given \(c\).

The first attempt just does the Newton iterations without any wrong-root division; unsurprisingly it isn't very successful. The second attempt divides by wrong-period roots, and is a bit better. The third algorithm is much more involved, thus slower, but is much more stable (in terms of larger Newton basins around the desired roots).

Here are some images, each row corresponds to an algorithm as introduced. The colouring is based on lifted domain colouring of the derivative of the limit cycle: \(\left|\frac{\partial}{\partial z}f_c^p(z_0)\right| \le 1\) in the interior of hyperbolic components, and acts as conformal interior coordinates which do extend a bit into the exterior.

The third algorithm works by first finding a \(c_0\) that is a periodic nucleus; then we know that a good \(z_0\) for this \(c_0\) is simply \(0\). Now move \(c_0\) a little bit in the direction of the target \(c\) that we wish to calculate, and use Newton's method with the previous \(z_0\) as the initial guess to find a good \(z_0\) for the moved \(c_0\). Repeat until \(c_0\) reaches \(c\), and hopefully the resulting \(z_0\) will be as hoped for, in the periodic cycle for \(c\).
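A sketch of this continuation approach in plain Python (my illustration, not the Fragmentarium code), using the period-1 case where the result can be checked against the closed-form fixed point \((1 - \sqrt{1 - 4c})/2\):

```python
def periodic_cycle(c_target, c0=0j, z0=0j, p=1, steps=100, newton_iters=20):
    """Walk c from a nucleus c0 (where z0 = 0 is exact) towards c_target,
    re-solving f_c^p(z) = z by Newton's method at each step, with the
    previous z as the initial guess."""
    z = z0
    for k in range(1, steps + 1):
        c = c0 + (c_target - c0) * k / steps
        for _ in range(newton_iters):
            w, dw = z, 1.0 + 0j
            for _ in range(p):       # evaluate f_c^p and its z-derivative
                dw = 2 * w * dw
                w = w * w + c
            g, dg = w - z, dw - 1
            if dg == 0:
                break
            z = z - g / dg           # Newton step on f_c^p(z) - z
    return z

# Period-1 cycle for c = -0.2, starting from the period-1 nucleus c0 = 0.
z = periodic_cycle(-0.2 + 0j)
```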

Source code for Fragmentarium: 2018-11-18_newtons_method_for_periodic_cycles.frag.

---

Start with the iteration formula for the Burning Ship escape time fractal:

\[ X := X^2 - Y^2 + A \quad\quad Y := \left|2 X Y\right| + B \]

Perturb this iteration, replacing \(X\) by \(X+x\) (etc) where \(x\) is a small deviation:

\[ x := (2 X + x) x - (2 Y + y) y + a \quad\quad y := 2 \operatorname{diffabs}(X Y, X y + x Y + x y) + b \]

Here \(\operatorname{diffabs}(c,d)\) is laser blaster's formula for evaluating \(|c + d| - |c|\) without catastrophic cancellation; see my post about perturbation algebra for more details.

Now replace \(x\) and \(y\) by bivariate power series in \(a\) and \(b\):

\[ x = \sum_{i,j} s_{i,j} a^i b^j \quad\quad y = \sum_{i,j} t_{i,j} a^i b^j \]

To implement this practically (without lazy evaluation) I pick an order \(o\) and limit the sum to \(i + j \le o\). Substituting these series into the perturbation iterations, and collecting terms, gives iteration formulae for the series coefficients \(s_{i,j}\) and \(t_{i,j}\):

\[ s_{i,j} := 2 X s_{i,j} - 2 Y t_{i,j} + \sum_{k=0,l=0}^{k=i,l=j} \left( s_{k,l} s_{i-k,j-l} - t_{k,l} t_{i-k,j-l} \right) + \mathbb{1}_{i=1,j=0} \]

The formula for \(t\) requires knowing which branch of \(\operatorname{diffabs}(X Y, 0)\) was taken, which turns out to have a nice reduction to \(\operatorname{sgn}(XY)\):

\[ t_{i,j} := 2 \operatorname{sgn}(X Y) \left( X t_{i,j} + s_{i,j} Y + \sum_{k=0,l=0}^{k=i,l=j} \left( s_{k,l} t_{i-k,j-l} \right) \right) + \mathbb{1}_{i=0,j=1} \]

\(\mathbb{1}_F\) is the indicator function, \(1\) when \(F\) is true, \(0\) otherwise. For distance estimation, the series of the derivatives are just the derivatives of the series, which can be computed very easily.
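The truncated double sums above are Cauchy products of truncated series; storing each series as a dictionary keyed by \((i,j)\), the product operation can be sketched like this (plain Python, my own illustration, not Kalles Fraktaler code):

```python
def series_mul(s, t, order):
    """Cauchy product of two truncated bivariate series, represented as
    dicts mapping (i, j) to the coefficient of a^i b^j, keeping only
    terms with i + j <= order (the truncation described in the text)."""
    r = {}
    for (i1, j1), u in s.items():
        for (i2, j2), v in t.items():
            i, j = i1 + i2, j1 + j2
            if i + j <= order:
                r[(i, j)] = r.get((i, j), 0) + u * v
    return r

# (a + b)^2 = a^2 + 2ab + b^2, truncated at order 2.
ab = {(1, 0): 1, (0, 1): 1}
sq = series_mul(ab, ab, 2)
```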

The series is only valid in a region that doesn't intersect an axis, at which point the next iteration will fold the region in a way that a series can't represent. Moreover, the series loses accuracy the further from the reference point, so there needs to be a way to check that the series is OK to use. One approach is to iterate points on the boundary of the region using perturbation, and compare the relative error against the same points calculated with the series. Only if all these probe points are accurate is it safe to try the next iteration.

This usually means that the series will skip at most \(P\) iterations near a central miniship reference of period \(P\). But one can do better, by subdividing the region into parts that are folded in different ways. The parts that are folded the same way as the reference can continue with series approximation, with probe points at the boundary of each part, with the remainder switching to regular perturbation initialized by the series.

It may even be possible to use knighty's techniques like Taylor shift, which is a way to rebase a series to a new reference point (for example, one on the "other side" of the fold), to split the region into two or more separate parts each with their own series approximation. The Horner shift algorithm is not too complicated, and I think it can be extended to bivariate series by shifting along each variable in succession:

```c
/* Horner shift code from FLINT2 */
for (i = n - 2; i >= 0; i--)
    for (j = i; j < n - 1; j++)
        poly[j] += poly[j + 1] * c;
```
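The same loop transcribed to Python (my transcription) computes the coefficients of \(p(x+c)\) in place:

```python
def taylor_shift(poly, c):
    """Compute the coefficients of p(x + c) from those of p(x) with the
    Horner-style double loop from FLINT2. poly[k] is the coefficient of
    x^k; the list is modified in place and returned."""
    n = len(poly)
    for i in range(n - 2, -1, -1):
        for j in range(i, n - 1):
            poly[j] += poly[j + 1] * c
    return poly

# p(x) = 1 + 2x + 3x^2 shifted by c = 1: p(x + 1) = 6 + 8x + 3x^2.
shifted = taylor_shift([1, 2, 3], 1)
```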

Untested Haskell idea for bivariate shift:

```haskell
shift :: Num v => v -> Map Int v -> Map Int v
shift = undefined -- left as an exercise

shift2 :: Num v => v -> v -> Map (Int, Int) v -> Map (Int, Int) v
shift2 x y
  = twiddle . pull . fmap (shift x) . push
  . twiddle . pull . fmap (shift y) . push

push :: (Ord a, Ord b) => Map (a, b) v -> Map a (Map b v)
push m = M.fromListWith M.union
  [ (i, M.singleton j e) | ((i,j), e) <- M.assocs m ]

pull :: (Ord a, Ord b) => Map a (Map b v) -> Map (a, b) v
pull m = M.fromList
  [ ((i,j), e) | (i, mj) <- M.assocs m, (j, e) <- M.assocs mj ]

twiddle :: (Ord a, Ord b) => Map (a, b) v -> Map (b, a) v
twiddle = M.mapKeys (\(i,j) -> (j,i))
```

Exciting developments! I hope to release a new version of Kalles Fraktaler 2+ containing at least some of these algorithms soon...

---

The Burning Ship fractal is interesting because it has mini-ships among filaments. However, some of these filaments and mini-ships are highly stretched or skewed non-conformally, which makes them look ugly. Some fractal software (Kalles Fraktaler, Ultrafractal, ...) has the ability to apply a non-uniform scaling to the view, which unstretches the filaments and makes (for example) the period-doubling features on the way to the mini-ship appear more circular. However, to my knowledge, all of these require manual adjustment of the unskewing transformation, for example by dragging control points in a graphical user interface.

The size estimate for the Burning Ship mini-sets is based on the size estimate for the Mandelbrot set. The Burning Ship iterations are not a well behaved complex function, so Jacobian matrices over real variables are necessary. Generic pseudo-code for the size estimate of an arbitrary function:

```
d := degree of smallest non-linear power in f
a, b := coordinates of a mini-set with period p
x, y := critical point of f (usually 0, 0)
L := I
B := I
for j := 1 to p - 1
    x, y := f(x, y)
    L := Jf(x, y) L
    B := B + L^{-1}
size := 1 / sqrt(abs(det(L^(d/(d-1)) B)))
```

In particular for the power 2 Burning Ship this resolves to:

```
d := 2
x := 0
y := 0
L(lxx, lxy; lyx, lyy) := (1, 0; 0, 1)
B(bxx, bxy; byx, byy) := (1, 0; 0, 1)
for j := 1 to p - 1
    x, y := (x^2 - y^2 + a, abs(2 x y) + b)
    L := ( 2 x lxx - 2 y lyx               , 2 x lxy - 2 y lyy
         ; 2 sgn(x) sgn(y) (x lyx + lxx y) , 2 sgn(x) sgn(y) (x lyy + lxy y) )
    detL := lxx * lyy - lxy * lyx
    B := ( bxx + lyy / detL , bxy - lyx / detL
         ; byx - lxy / detL , byy + lxx / detL )
detL := lxx * lyy - lxy * lyx
detB := bxx * byy - bxy * byx
size := 1 / sqrt(abs(detL^2 * detB))
```
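As a sanity check (mine, not from the post), the generic pseudo-code can be transcribed to Python and applied to the Mandelbrot set written over real variables, whose Jacobian is conformal, so the estimate reduces to the known complex-valued one:

```python
def mini_size(a, b, p, f, jac, d=2):
    """Transcription of the generic size estimate pseudo-code above.
    f(x, y, a, b) is one iteration; jac(x, y) is the 2x2 Jacobian as a
    row-major tuple (fxx, fxy, fyx, fyy)."""
    def mul(m, n):  # 2x2 matrix product
        return (m[0] * n[0] + m[1] * n[2], m[0] * n[1] + m[1] * n[3],
                m[2] * n[0] + m[3] * n[2], m[2] * n[1] + m[3] * n[3])
    def det(m):
        return m[0] * m[3] - m[1] * m[2]
    x, y = 0.0, 0.0            # critical point
    L = (1.0, 0.0, 0.0, 1.0)   # identity
    B = (1.0, 0.0, 0.0, 1.0)
    for _ in range(p - 1):
        x, y = f(x, y, a, b)
        L = mul(jac(x, y), L)
        dL = det(L)
        Linv = (L[3] / dL, -L[1] / dL, -L[2] / dL, L[0] / dL)
        B = tuple(u + v for u, v in zip(B, Linv))
    # det(L^(d/(d-1)) B) = det(L)^(d/(d-1)) * det(B)
    return 1.0 / abs(det(L) ** (d / (d - 1)) * det(B)) ** 0.5

# Mandelbrot set over real variables: period-2 component at c = -1.
size = mini_size(-1.0, 0.0, 2,
                 lambda x, y, a, b: (x * x - y * y + a, 2 * x * y + b),
                 lambda x, y: (2 * x, -2 * y, 2 * y, 2 * x))
```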

Note the renormalization of the \(c=a+bi\) plane by:

\[L^\frac{d}{d - 1} B\]

The size estimate takes out the uniform scaling factor from this 2x2 matrix, but it's also possible to use it to extract rotation and non-uniform scaling (stretching, skew) parameters. A problem comes when trying to raise the matrix \(L\) to a non-integer power when \(d > 2\), so I fudged it and hoped that the non-conformal contribution from \(L\) is non-existent, and just used \(B\). To cancel out the uniform scaling from \(B\), I divide by the square root of its determinant, and I take its inverse to use as a coordinate transformation for the \(c=a+bi\) plane:

\[T = \left(\frac{B}{\sqrt{\det B}}\right)^{-1}\]
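A sketch of computing \(T\) for a 2x2 matrix (plain Python, my own illustration), checking that the result has unit determinant as intended:

```python
import math

def unskew(B):
    """T = (B / sqrt(det B))^(-1) for a 2x2 row-major tuple
    B = (bxx, bxy, byx, byy): divide out the uniform scale, then invert
    to get a unit-determinant transformation of the c = a+bi plane."""
    d = B[0] * B[3] - B[1] * B[2]
    s = math.sqrt(abs(d))
    N = tuple(e / s for e in B)       # det(N) = +/-1
    dn = N[0] * N[3] - N[1] * N[2]
    return (N[3] / dn, -N[1] / dn, -N[2] / dn, N[0] / dn)

# Example with a shear: the uniform scale 2 is divided out.
T = unskew((2.0, 1.0, 0.0, 2.0))
```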

I tested it briefly and it seems to work for Burning Ship power 2 and 3, and also Mandelbar/Tricorn power 2. There is a problem however: to get this automatic skew matrix, one needs to zoom deep enough so that Newton's method can find a mini-set, but doing that is tricky in hard-skewed areas. So I still need to write GUI code for manual skew, to make that pre-navigation easier.

Here's just one image pair, power 2 Burning Ship, the first is with an identity transformation, the second with automatically calculated unskewing transformation:

One last thing: it should be possible to separate skew from rotation by applying polar decomposition.

---

Last week I implemented (in Haskell, using lazy ST with each STRef paired with a Natural so that I can have Ord) the algorithm presented in this paper:

Images of Julia sets that you can trust

L. H. de Figueiredo, D. Nehab, J. Stolfi, and J. B. Oliveira


Abstract: We present an algorithm for computing images of quadratic Julia sets that can be trusted in the sense that they contain numerical guarantees against sampling artifacts and rounding errors in floating-point arithmetic. We use cell mapping and color propagation in graphs to avoid function iteration and rounding errors. As a result, our algorithm avoids point sampling and can robustly classify entire rectangles in the complex plane as being on either side of the Julia set. The union of the regions that cannot be so classified is guaranteed to contain the Julia set. Our algorithm computes a refinable quadtree decomposition of the complex plane adapted to the Julia set which can be used for rendering and for approximating geometric properties such as the area of the filled Julia set and the fractal dimension of the Julia set.

Keywords: Fractals, Julia sets, adaptive refinement, cellular models, cell mapping, computer-assisted proofs

You can find my code in my mandelbrot-graphics repository. I reproduced most of the results; I coloured with black interior, white exterior, and red unknown (the Julia set is inside the red region), with the quadtree cell boundaries in grey:

The last two examples above show how it fails at parabolic Julia sets.

I also implemented a trustworthy Mandelbrot set, based on the idea that if the neighbourhood of the origin in the Julia set is all exterior, then the point cannot be in the Mandelbrot set, and if any interior exists in the Julia set, then the point must be in the Mandelbrot set. Now replace 'point' in those two clauses with 'closed 2D square', and use the property of the algorithm in the paper that the proofs of interiorhood and exteriorhood of the Julia set range over the whole interval.

It's far too slow to be practical if pretty pictures are the goal! The red zone of unknown doesn't shrink much with each depth increment.

---

Define the iterated quadratic polynomial:

\[ f_c^0(z) = 0 \\ f_c^{n+1}(z) = \left(f_c^n(z)\right)^2 + c \]

The Mandelbrot set is those \(c\) for which \(f_c^n(0)\) remains bounded for all \(n\). Misiurewicz points are dense in the boundary of the Mandelbrot set. They are strictly preperiodic, which means they satisfy this polynomial equation:

\[ f_c^{q+p}(0) = f_c^{q}(0) \\ p > 0 \\ q > 0\]

and moreover the period \(p\) and the preperiod \(q\) of a Misiurewicz point \( c \in M_{q,p} \) are the lowest values that make the equation true. For example, \(-2 \in M_{2,1}\) and \(i \in M_{2,2}\), which can be verified by iterating the polynomial (exercise: do that).
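The verification exercise can be done in a few lines (my sketch, plain Python, not part of the original post):

```python
def orbit(c, n):
    """The first n iterates f_c^1(0) .. f_c^n(0) of z -> z^2 + c."""
    z, zs = 0j, []
    for _ in range(n):
        z = z * z + c
        zs.append(z)
    return zs

# -2 is in M_{2,1}: orbit 0, -2, 2, 2, 2, ...  (preperiod 2, period 1)
o1 = orbit(-2 + 0j, 5)
# i is in M_{2,2}: orbit 0, i, -1+i, -i, -1+i, -i, ...  (preperiod 2, period 2)
o2 = orbit(1j, 6)
```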

Misiurewicz points are algebraic integers (a subset of the algebraic numbers), which means they are the roots of a monic polynomial with integer coefficients. A monic polynomial is one with leading coefficient \(1\), for example \(c^2+c\). Factoring a monic polynomial gives monic polynomials as factors. Factoring over the complex numbers \(\mathbb{C}\) gives the \(M_{q,p}\) in linear factors, factoring over the integers \(\mathbb{Z}\) can give irreducible polynomials of degree greater than \(1\). For example, here's the equation for \(M_{2,2}\):

\[c^3\,\left(c+1\right)^2\,\left(c+2\right)\,\left(c^2+1\right)\]

Note that the repeated root \(0\) corresponds to a hyperbolic component of period \(1\) (the nucleus of the top level cardioid of the Mandelbrot set), and the repeated root \(-1\) corresponds to the period \(2\) circle to the left. And \(-2 \in M_{2,1}\), so the "real" equation we are interested in is the last factor, \(c^2+1\), which is irreducible over the integers but has complex roots \(\pm i\). There are two roots, so \(\left|M_{2,2}\right| = 2\).
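A quick check of this factorization (my own illustration, multiplying the factors back together with plain coefficient lists, no computer algebra system needed):

```python
def pmul(p, q):
    """Multiply integer polynomials as coefficient lists (index = power)."""
    r = [0] * (len(p) + len(q) - 1)
    for i, u in enumerate(p):
        for j, v in enumerate(q):
            r[i + j] += u * v
    return r

def padd(p, q):
    """Add coefficient lists, padding the shorter with zeros."""
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
            for i in range(n)]

c = [0, 1]          # the polynomial c
fz = [0]            # f_c^0(0) = 0
fs = []
for _ in range(4):  # f_c^1(0) .. f_c^4(0), as polynomials in c
    fz = padd(pmul(fz, fz), c)
    fs.append(fz)

lhs = padd(fs[3], [-u for u in fs[1]])  # f_c^4(0) - f_c^2(0)
rhs = [1]
for factor in ([0, 0, 0, 1],    # c^3
               [1, 1], [1, 1],  # (c+1)^2
               [2, 1],          # (c+2)
               [1, 0, 1]):      # (c^2+1)
    rhs = pmul(rhs, factor)
```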

So, a first **attempt** at enumerating Misiurewicz points works like this:

```haskell
-- using numeric-prelude and MemoTrie from Hackage
type P = MathObj.Polynomial.T Integer

-- h with all factors g removed
divideAll :: P -> P -> P
divideAll h g
  | isZero h = h
  | isOne g = h
  | isZero g = error "/0"
  | otherwise = case h `divMod` g of
      (di, mo)
        | isZero mo -> di `divideAll` g
        | otherwise -> h

-- h with all factors in the list removed
divideAlls :: P -> [P] -> P
divideAlls h [] = h
divideAlls h (g:gs) = divideAlls (h `divideAll` g) gs

-- the variable for the polynomials
c :: P
c = fromCoeffs [ 0, 1 ]

-- the base quadratic polynomial
f :: P -> P
f z = z^2 + c

-- the iterated quadratic polynomial
fn :: Int -> P
fn = memo fn_ where fn_ 0 = 0 ; fn_ n = f (fn (n - 1))

-- the raw M_{q,p} polynomial
m_raw :: Int -> Int -> P
m_raw = memo2 m_raw_ where m_raw_ q p = fn (q + p) - fn q

-- the M_{q,p} polynomial with lower (pre)periods removed
m :: Int -> Int -> P
m = memo2 m_ where
  m_ q p = m_raw q p `divideAlls`
    [ mqp
    | q' <- [ 0 .. q ], p' <- [ 1 .. p ]
    , q' + p' < q + p, p `mod` p' == 0
    , let mqp = m q' p', not (isZero mqp)
    ]

-- |M_{q,p}|
d :: Int -> Int -> Int
d q p = case degree (m q p) of Just k -> k ; Nothing -> -1
```

This is using numeric-prelude and MemoTrie from Hackage, but with a reimplemented divMod for monic polynomials that doesn't try to divide by an Integer (which will always be \(1\) for monic polynomials). The core polynomial divMod from numeric-prelude needs a Field for division, and the integers don't form a field.

Tabulating this **attempt** at \(\left|M_{q,p}\right|\) (`d q p`) for various small \(q,p\) gives:

| q \ p | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 1 | 1 | 3 | 6 | 15 | 27 | 63 | 120 | 252 | 495 | 1023 | 2010 | 4095 | 8127 | 16365 | 32640 |
| 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | |
| 2 | 1 | 2 | 6 | 12 | 30 | 54 | 126 | 240 | 504 | 990 | 2046 | 4020 | 8190 | 16254 | | |
| 3 | 3 | 3 | 12 | 24 | 60 | 108 | 252 | 480 | 1008 | 1980 | 4092 | 8040 | 16380 | | | |
| 4 | 7 | 8 | 21 | 48 | 120 | 216 | 504 | 960 | 2016 | 3960 | 8184 | 16080 | | | | |
| 5 | 15 | 15 | 48 | 90 | 240 | 432 | 1008 | 1920 | 4032 | 7920 | 16368 | | | | | |
| 6 | 31 | 32 | 96 | 192 | 465 | 864 | 2016 | 3840 | 8064 | 15840 | | | | | | |
| 7 | 63 | 63 | 189 | 384 | 960 | 1701 | 4032 | 7680 | 16128 | | | | | | | |
| 8 | 127 | 128 | 384 | 768 | 1920 | 3456 | 8001 | 15360 | | | | | | | | |
| 9 | 255 | 255 | 768 | 1530 | 3840 | 6912 | 16128 | | | | | | | | | |
| 10 | 511 | 512 | 1533 | 3072 | 7680 | 13824 | | | | | | | | | | |
| 11 | 1023 | 1023 | 3072 | 6144 | 15345 | | | | | | | | | | | |
| 12 | 2047 | 2048 | 6144 | 12288 | | | | | | | | | | | | |
| 13 | 4095 | 4095 | 12285 | | | | | | | | | | | | | |
| 14 | 8191 | 8192 | | | | | | | | | | | | | | |
| 15 | 16383 | | | | | | | | | | | | | | | |

\(|M_{0,p}|\) is known to be A000740. \(|M_{2,p}|\) appears to be A038199. \(|M_{q,1}|\) appears to be A000225. \(|M_{q,2}|\) appears to be A166920.

**HOWEVER there is a fatal flaw**. The polynomials might not be irreducible, which means that `divideAlls` might not be removing all of the lower (pre)period roots! A proper solution would be to port the code to a computer algebra system that can factor polynomials into irreducible polynomials. Or alternatively, mathematically prove that the polynomials in question will always be irreducible (as far as I know this is an open question, verified only for \(M_{0,p}\) up to \(p = 10\), according to Corollary 5.6 (Centers of Components as Algebraic Numbers)).

You can download my full Haskell code.

**UPDATE** I wrote some Sage code (Python-based) with an
improved algorithm (I think it's perfect now). The values all matched the
original table, and I extended it with further values and links to OEIS.
All the polynomials in question are irreducible, up to the \(p + q < 16\)
limit. No multiplicities greater than one were reported. Code:

```
@parallel(16)
def core(q, p, allroots):
    mpq = 0
    roots = set()
    R.<x> = ZZ[]
    w = 0*x
    for i in range(q):
        w = w^2 + x
    wq = w
    for i in range(p):
        w = w^2 + x
    wqp = w
    f = wqp - wq
    r = f.factor()
    for i in r:
        m = i[0]
        k = i[1]
        if not (m in allroots) and not (m in roots):
            roots.add(m)
            mpq += m.degree()
        if k > 1:
            print(("multiplicity > 1", k, "q", q, "p", p, "degree", m.degree()))
    return (q, p, mpq, roots)

allroots = set()
for n in range(16):
    print(n)
    res = sorted(list(core([(q, n - q, allroots) for q in range(n)])))
    for r in res:
        t = r[1]
        q = t[0]
        p = t[1]
        mpq = t[2]
        roots = t[3]
        print((q, p, mpq, len(roots), [root.degree() for root in roots]))
        allroots |= roots
```

**UPDATE2** I bumped the table to \(q + p < 17\). I
ran into some OOM-kills, so I had to run it with less parallelism to get
it to finish.

**UPDATE3** I found a simple function that fits all the data in the table, but I don't know if it is correct or will break for larger values. Code (the function is called `f`):

```haskell
import Math.NumberTheory.ArithmeticFunctions (divisors, moebius, runMoebius) -- arithmoi
import Data.Set (toList) -- containers

mu :: Integer -> Integer
mu = runMoebius . moebius

mqps :: [[Integer]]
mqps =
  [ [1,1,3,6,15,27,63,120,252,495,1023,2010,4095,8127,16365,32640]
  , [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
  , [1,2,6,12,30,54,126,240,504,990,2046,4020,8190,16254]
  , [3,3,12,24,60,108,252,480,1008,1980,4092,8040,16380]
  , [7,8,21,48,120,216,504,960,2016,3960,8184,16080]
  , [15,15,48,90,240,432,1008,1920,4032,7920,16368]
  , [31,32,96,192,465,864,2016,3840,8064,15840]
  , [63,63,189,384,960,1701,4032,7680,16128]
  , [127,128,384,768,1920,3456,8001,15360]
  , [255,255,768,1530,3840,6912,16128]
  , [511,512,1533,3072,7680,13824]
  , [1023,1023,3072,6144,15345]
  , [2047,2048,6144,12288]
  , [4095,4095,12285]
  , [8191,8192]
  , [16383]
  ]

m :: Integer -> Integer -> Integer
m q p = mqps !! fromInteger q !! fromInteger (p - 1)

f :: Integer -> Integer -> Integer
f 0 p = sum [ mu (p `div` d) * 2 ^ (d - 1) | d <- toList (divisors p) ]
f 1 _ = 0
f q 1 = 2 ^ (q - 1) - 1
f q p = (2 ^ (q - 1) - if q `mod` p == 1 then 1 else 0) * f 0 p

check :: Bool
check = and [ f q p == m q p | n <- [1 .. 16], p <- [1 .. n], let q = n - p ]

main :: IO ()
main = print check
```

**UPDATE4** Progress! I found a paper with the answer:

Misiurewicz Points for Polynomial Maps and Transversality

Benjamin Hutz, Adam Towsley

Corollary 3.3. The number of \((m,n)\) Misiurewicz points for \(f_{d,c}\) is \[ M_{m,n} = \begin{cases} \sum_{k \mid n} \mu\left(n \over k \right) d^{k-1} & m = 0 \\ (d^m - d^{m-1} - d + 1) \sum_{k \mid n} \mu\left(n \over k \right) d^{k-1} & m \ne 0 \text{ and } n \mid (m - 1) \\ (d^m - d^{m-1}) \sum_{k \mid n} \mu\left(n \over k \right) d^{k-1} & \text{otherwise} \end{cases} \]

They have \(f_{d,c}(z) = z^d + c\), so this result is more general than the case \(d = 2\) I was researching in this post. The formula I came up with is the same, with minor notational differences.
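The corollary is easy to transcribe (my Python sketch, with naive divisor and Möbius helpers) and check against the table above for \(d = 2\):

```python
def divisors(n):
    """All positive divisors of n (naive trial division)."""
    return [d for d in range(1, n + 1) if n % d == 0]

def moebius(n):
    """Moebius function: 0 if n has a squared prime factor, else
    (-1)^(number of prime factors)."""
    primes, d = 0, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0  # squared prime factor
            primes += 1
        else:
            d += 1
    if n > 1:
        primes += 1
    return -1 if primes % 2 else 1

def count_misiurewicz(m, n, d=2):
    """Corollary 3.3 of Hutz & Towsley for f(z) = z^d + c."""
    base = sum(moebius(n // k) * d ** (k - 1) for k in divisors(n))
    if m == 0:
        return base
    if (m - 1) % n == 0:
        return (d ** m - d ** (m - 1) - d + 1) * base
    return (d ** m - d ** (m - 1)) * base
```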

---

If you're near Split, Croatia this August you can check out the Sounding DIY exhibition, which features an updated version of my 2008 work Puzzle:

Sounding DIY

Tin Dožić, Claude Heiland-Allen, Noise Orchestra, Bioni Samp, Hrvoje Hiršl / Davor Branimir Vincze

Aug 6th 2018. - Aug 22nd 2018.

Curator/s: Darko Fritz, Laura Netz

Supported by: Ministarstvo kulture RH, Zaklada Kultura nova

MKC, Ulica slobode 28, Split

opening and live acts 6th August at 21 h

live acts by Hrvoje Hiršl / Davor Branimir Vincze and Tin Dožić.

Monday-Saturday 10-13 h, 17-21 h

free entrance

I didn't manage to get it running properly on an RPi (otherwise I could have sent just an SD card filesystem image), so it'll be running on my old 2006-era laptop, which I hope will survive the travel (both ways).

---

Some of my images are in an online exhibition this month:

eRR0R(iii)

an exploration of the fertility of errors

in a world that covers its flaws in the blinding light of universal truths and institutionally reinforced regimes of visibility, we are interested in the fertile shades opened up by errors. the antiseptic intellectual environment our societies try to achieve, while arguably “healthy” and “safe” for the established values, has the huge disadvantage of obscuring any fundamentally different modes of existence. we [looked for] submissions that explore the fertility of errors and question our inherited worldview.

Here's my mini statement:

I work with mathematics and algorithms to make art. Sometimes it doesn't go to plan. I present recent failures experienced on the road to successful implementation of desired results.

I'm not sure if it'll be archived after the month is over, so experience it while you can.

---

The essence of perturbation is to find the difference between the high precision values of a function at two nearby points, while using only the low precision value of the difference between the points. In this post I'll write the high precision points in CAPITALS and the low precision deltas in lowercase. Two auxiliary operations are needed to define the perturbation \(P\): \(B\) replaces all variables by their high precision version, and \(W\) replaces all variables by the sum of the high precision version and the low precision delta. Then \(P = W - B\):

\[\begin{aligned} B(f) &= f(X) &\text{ (emBiggen)}\\ W(f) &= f(X + x) &\text{ (Widen)}\\ P(f) &= W(f) - B(f) \\ &= f(X + x) - f(X) &\text{ (Perturb)} \end{aligned}\]

For example, perturbation of \(f(z, c) = z^2 + c\), i.e. \(P(f)\), works out like this:

\[\begin{aligned} & P(f) \\ \to & f(Z + z, C + c) - f(Z, C) \\ \to & (Z + z)^2 + (C + c) - (Z^2 + C) \\ \to & Z^2 + 2 Z z + z^2 + C + c - Z^2 - C \\ \to & 2 Z z + z^2 + c \end{aligned}\]

where in the final result the large terms \(Z^2\) and \(C\) have cancelled out, leaving only terms that are "small".
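The perturbed iteration above can be sketched in code. This is an illustrative sketch (not from the original post): the names `cadd`, `cmul` and `perturbedStep` are mine, and complex numbers are represented as plain `{re, im}` objects for clarity rather than speed.

```javascript
// Complex arithmetic helpers (illustrative, not optimized).
function cadd(a, b) { return { re: a.re + b.re, im: a.im + b.im }; }
function cmul(a, b) {
  return { re: a.re * b.re - a.im * b.im, im: a.re * b.im + a.im * b.re };
}

// One perturbed Mandelbrot step: z -> 2 Z z + z^2 + c,
// where Z is the high precision reference orbit value (here already
// rounded to double) and z, c are the low precision deltas.
function perturbedStep(Z, z, c) {
  const twoZz = cmul({ re: 2 * Z.re, im: 2 * Z.im }, z);
  return cadd(cadd(twoZz, cmul(z, z)), c);
}
```

In a real renderer the reference orbit \(Z\) would be iterated once in high precision and reused for every pixel's delta orbit.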

For polynomials, regular algebraic manipulation can lead to successful outcomes, but for other functions it seems some "tricks" are needed. For example, \(|x|\) (over \(\mathbb{R}\)) can be perturbed with a "diffabs" function proceeding via case analysis:

// evaluate |X + x| - |X| without catastrophic cancellation
function diffabs(X, x) {
  if (X >= 0) {
    if (X + x >= 0) {
      return x;
    } else {
      return -(2 * X + x);
    }
  } else {
    if (X + x > 0) {
      return 2 * X + x;
    } else {
      return -x;
    }
  }
}

This formulation was developed by laser blaster at fractalforums.com.

For transcendental functions, other tricks are needed. Here for example is a derivation of \(P(\sin)\):

\[\begin{aligned} & P(\sin) \\ \to & \sin(X + x) - \sin(X) \\ \to & \sin(X) \cos(x) + \cos(X) \sin(x) - \sin(X) \\ \to & \sin(X) (\cos(x) - 1) + \cos(X) \sin(x) \\ \to & \sin(X) \left(-2\sin^2\left(\frac{x}{2}\right)\right) + \cos(X) \sin(x) \\ \to & \sin(X) \left(-2\sin^2\left(\frac{x}{2}\right)\right) + \cos(X) \left(2 \cos\left(\frac{x}{2}\right) \sin\left(\frac{x}{2}\right)\right) \\ \to & 2 \sin\left(\frac{x}{2}\right) \left(-\sin(X) \sin\left(\frac{x}{2}\right) + \cos(X) \cos\left(\frac{x}{2}\right)\right) \\ \to & 2 \sin\left(\frac{x}{2}\right) \cos\left(X + \frac{x}{2}\right) \end{aligned}\]

Knowing when to apply the sum and double-angle formulae is a bit of a mystery, especially if the end goal is not known beforehand. This makes implementing a symbolic algebra program that can perform these derivations quite a challenge.
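The end result of the derivation is easy to implement, though. A minimal sketch (the name `perturbSin` is mine, not from the post), computing \(\sin(X + x) - \sin(X)\) via the cancellation-free form:

```javascript
// P(sin): sin(X + x) - sin(X) = 2 sin(x/2) cos(X + x/2).
// For small x this avoids subtracting two nearly equal sines.
function perturbSin(X, x) {
  return 2 * Math.sin(x / 2) * Math.cos(X + x / 2);
}
```

For moderate values the naive difference and the derived form agree to machine precision; the derived form only pulls ahead when \(x\) is tiny relative to \(X\).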

In lieu of a complete symbolic algebra program that does it all on demand, here are a few formulae that I calculated, some by hand, some using Wolfram Alpha:

\[\begin{aligned} P(a) &= 0 \\ P(a f) &= a P(f) \\ P(f + g) &= P(f) + P(g) \\ P(f g) &= P(f) W(g) + B(f) P(g) \\ P\left(\frac{1}{f}\right) &= -\frac{P(f)}{B(f)W(f)} \\ P(|f|) &= \operatorname{diffabs}(B(f), P(f)) \\ P(\exp) &= \exp(X) \operatorname{expm1}(x) \\ P(\log) &= \operatorname{log1p}\left(\frac{x}{X}\right) \\ P(\sin \circ f) &= \phantom{-}2 \sin\left(\frac{P(f)}{2}\right)\cos\left(\frac{W(f)+B(f)}{2}\right) \\ P(\cos \circ f) &= -2 \sin\left(\frac{P(f)}{2}\right)\sin\left(\frac{W(f)+B(f)}{2}\right) \\ P(\tan \circ f) &= \frac{\sin(P(f))}{\cos(B(f))\cos(W(f))} \\ P(\sinh \circ f) &= 2 \sinh\left(\frac{P(f)}{2}\right)\cosh\left(\frac{W(f)+B(f)}{2}\right) \\ P(\cosh \circ f) &= 2 \sinh\left(\frac{P(f)}{2}\right)\sinh\left(\frac{W(f)+B(f)}{2}\right) \\ P(\tanh \circ f) &= \frac{\sinh(P(f))}{\cosh(B(f))\cosh(W(f))} \\ \end{aligned}\]
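Two entries from the table translate directly into code, since JavaScript (like C) provides the cancellation-avoiding primitives `Math.expm1` and `Math.log1p`. A sketch, with my own illustrative function names:

```javascript
// P(exp): exp(X + x) - exp(X) = exp(X) * expm1(x),
// where expm1(x) = exp(x) - 1 computed accurately for small x.
function perturbExp(X, x) {
  return Math.exp(X) * Math.expm1(x);
}

// P(log): log(X + x) - log(X) = log1p(x / X),
// where log1p(y) = log(1 + y) computed accurately for small y.
function perturbLog(X, x) {
  return Math.log1p(x / X);
}
```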

I hope to find time to add these to et soon.

**EDIT** there is a simpler and more general way to derive \(P(\sin)\) and so on, using the \(\sin(a) \pm \sin(b)\) formulae...

Atom domains in the Mandelbrot set surround mini-Mandelbrot islands. So too in the Burning Ship fractal. These pictures are coloured using the period for hue, and distance estimation for value. Saturation is a simple switch on escaped vs unescaped pixels. Rendered with some Fragmentarium code.

The algorithm is simple: store the iteration count whenever |Z| reaches a new minimum; the last iteration count so stored is the atom domain. If you initialize Z with 0, start checking only after the first iteration. Because IEEE floating point has infinities, you can initialize the stored minimum |Z| to 1.0/0.0 so the first comparison always succeeds.
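The original renders were done in Fragmentarium GLSL; here is a sketch of the same idea in JavaScript for the plain Mandelbrot iteration (the function name `atomDomain` and the iteration limit are my own choices):

```javascript
// Atom domain of a point c under z -> z^2 + c: the iteration count
// at which |z| last reached a new minimum.
function atomDomain(cre, cim, maxIters) {
  let zre = 0, zim = 0;
  let minMag2 = 1.0 / 0.0; // IEEE infinity: the first check always wins
  let domain = 0;
  for (let n = 1; n <= maxIters; n++) {
    // z starts at 0, so the first minimum check happens after one iteration
    const t = zre * zre - zim * zim + cre;
    zim = 2 * zre * zim + cim;
    zre = t;
    const mag2 = zre * zre + zim * zim;
    if (mag2 < minMag2) { minMag2 = mag2; domain = n; }
    if (mag2 > 4) break; // escaped
  }
  return domain;
}
```

At a nucleus of a hyperbolic component the orbit returns exactly to 0, so the atom domain there equals the period (1 for c = 0, 2 for c = -1).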

I was hoping to use atom domains for interior checking, by using Newton's method to find limit cycles and seeing if their maximal Lyapunov exponent is less than 1, but it didn't work: the method marked some exterior points as interior. My guess is that either Newton's method converges to some phantom attractor instead of the limit cycle, or the maximal Lyapunov exponent isn't the indicator of interiority I had hoped (I tried the plain determinant too, with no joy there either).

One thing that is interesting to me is the grey region of unescaped pixels with chaotic atom domains (the region is that colour because the anti-aliasing blends subpixels scattered across the whole spectrum into a uniform grey). I'm not sure whether it is an artifact of rendering at a limited iteration count and should be exterior, or if it really is interior and chaotic.
