Today I released a new version of KF. Kalles Fraktaler 2 + is my fork of Kalles Fraktaler 2, with many enhancements. KF is a fast deep zoom fractal renderer for the Mandelbrot set, Burning Ship, and many other formulas. It uses perturbation techniques and series approximation, which allow fast low precision deltas per pixel to be computed relative to a slow high precision reference orbit per image. The changelog entry is large: over 50 commits since the previous release, affecting 47 files and over 3000 lines. You can download 64-bit Windows binaries from the homepage: mathr.co.uk/kf/kf.html (they work in Wine on Linux, and are cross-compiled with MINGW64). I won't replicate the full list of changes, but here are some highlights.

The most visible change is that the old Iterations dialog is gone. In its place are three new dialogs: Formula, Bailout and Information. Some of the controls have moved to the Advanced menu as they shouldn't usually need to be adjusted. The Formula dialog has the fractal type and power and a few other things, the Bailout dialog has things that affect the exit from the inner loop (maximum iteration count, escape radius, and so on).

The main new feature in these dialogs is the ability to control the bailout test with four variables: custom escape radius, real and imaginary weights (which can now be any number, including fractions or negative values, instead of being limited to 0 or 1 as before), and norm power. Together these allow the shape of the iteration bands to be changed in many different ways. The glitch test is now always done with the unweighted Euclidean norm, and reference calculations are simpler because they no longer need to calculate their own pixel (I now prevent the infinite loop of the reference being detected as a glitch in a different way).
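As an illustrative sketch (the names and exact form here are my own, not KF's internals), a weighted bailout test along these lines might look like:

```python
def bailed_out(zr, zi, escape_radius, w_re, w_im, norm_power):
    # Hypothetical sketch of a generalized bailout test: with
    # w_re = w_im = 1 and norm_power = 2 this reduces to the
    # familiar |z|^2 > R^2 escape test.
    value = w_re * abs(zr) ** norm_power + w_im * abs(zi) ** norm_power
    return value > escape_radius ** norm_power
```

Setting one weight to 0 reproduces the old real-only or imaginary-only tests, while fractional or negative weights reshape the iteration bands.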

Colouring-wise, there is the return of the Texture option which had been broken for many years, there is a fourth-root transfer function, and a new phase channel is computed (saved in EXR as T channel in [0..1)). So far it is exposed to colouring only as a "phase strength" setting. It works best with Linear bailout smooth method (Log gives seams as it is independent of escape radius). There is also a new Flat toggle, if you can't stand palette interpolation.
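One plausible convention for such a phase channel (my assumption; KF's exact definition may differ) is the argument of the final iterate rescaled to [0..1):

```python
import math

def phase(zr, zi):
    # argument of z at escape, rescaled from (-pi, pi] to [0, 1)
    return (math.atan2(zi, zr) / (2.0 * math.pi)) % 1.0
```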

More code is parallelized, there are significant speedups for Mandelbrot power 3, and Newton-zooming is faster for analytic formulas (arbitrary power Mandelbrot and the Redshifter formulas). Upgrading to GMP 6.2 should improve performance, especially on AMD Ryzen CPUs.

Lots of bugfixes, including directional DE for NanoMB1+2 and an off-by-one in NanoMB1 that made colours differ from normal rendering. EXR load and save use much less memory, and there are new options to select which channels to export, for smaller files if you don't need the data.

In an appendix to the paper from which I implemented the slow mating algorithm in my previous post, there is a brief description of another algorithm:

The Thurston Algorithm for quadratic matings

Initialization A.2 (Spider algorithm with a path). Suppose \(\theta = \theta_1 \in \mathbb{Q} \backslash \mathbb{Z}\) has preperiod \(k\) and period \(p\). Define \((x_1(t), \ldots, x_{k+p}(t))\) for \(0 \le t \le 1\) as

\[ x_1(t) = t e^{i 2 \pi \theta_1} \\ x_p(t) = (1 - t) e^{i 2 \pi \theta_p}, \text{ if } k = 0 \\ x_j(t) = e^{i 2 \pi \theta_j}, \text{ otherwise.} \]

Pull this path back continuously with \(x_j(t + 1) = \pm \sqrt{x_{j+1}(t)-x_1(t)}\). Then it converges to the marked points of \(f_c\) with appropriate collisions.

In short, given a rational \(\theta\) measured in turns, this provides a way to calculate \(c\) in the Mandelbrot set that has corresponding dynamics. Here \(\theta_j = 2^{j - 1} \theta \mod 1\), and the desired \(c = x_1(\infty)\).
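For example, the angle orbit \(\theta_j\) can be computed exactly with rational arithmetic (a sketch using Python's `fractions`; the example angle 1/6 has preperiod 1 and period 2):

```python
from fractions import Fraction

def angle_orbit(theta, k, p):
    # theta_j = 2^(j-1) * theta mod 1, for j = 1 .. k + p
    return [(2 ** (j - 1) * theta) % 1 for j in range(1, k + p + 1)]
```

The doubling map on angles mirrors the squaring of \(z\) under iteration, which is why these angles mark the initial spider legs.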

This week I implemented it in my mandelbrot-numerics library, in the hope that it might be faster than my previous method of tracing external rays. Alas, it wasn't to be: both algorithms are \(O(n^2)\) when ignoring the way cost varies with numerical precision, and the spider path algorithm has higher constant factors and requires \(O(n)\) space vs ray tracing \(O(1)\) space. This meant spider path was about 6x slower than ray tracing when using a single-threaded implementation, in one test at period 469, and I imagine it would be slower still at higher periods and precisions.

This isn't entirely surprising: spider path does \(s n\) complex square roots to extend the paths from \(t \to t + 1\), while ray trace does \(s t\) arithmetical operations to extend the ray from depth \(t \to t + 1\). The \(O(n^2)\) comes from \(t\) empirically needing to be about \(2 n\) before it is close enough to switch to the faster Newton's method root finding.

Moreover, spider path needs very high precision all the way through: the initial points on the unit circle need at least \(n\) bits (I used about \(2 n\) to be sure) to resolve the small differences in external angles, even though the final root can usually be distinguished from other roots of the same period using much less precision. In fact I measured spider path time to be around \(O(n^{2.9})\), presumably because of the precision. Ray tracing was very close to \(O(n^2)\).

Ray tracing has a natural stopping condition: when the ray enters the atom domain with period \(p\), Newton's method is very likely to converge to the nucleus at its center. I imagine something similar will apply to preperiodic Misiurewicz domains, but I have not checked yet. I tried it with spider path but in one instance I got a false positive and ended up at a different minibrot to the one I wanted.

The only possible advantages that remain for the spider path algorithm are that it can be parallelized more effectively than ray tracing, and that the numbers all stay in the range \([-2,2]\), which means fixed point arithmetic could be used. Perhaps a GPU implementation of spider path would be competitive with ray tracing on an elapsed wall-clock time metric, though it would probably still lose on power consumption.

I plotted a couple of graphs of the spider paths: the path points end up log-spiraling around their final resting places, which I think means the algorithm converges linearly. Ray tracing is also linear when you are far from the landing point (before the period-doubling cascade starts in earnest). Newton's method converges quadratically, which means the number of accurate digits doubles each time, but you need to start from somewhere accurate enough.
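The digit-doubling of quadratic convergence is easy to see in a toy example, Newton's method for \(\sqrt{2}\) (illustrative only, unrelated to the fractal code):

```python
def newton_sqrt2(x, steps):
    # Newton's method for x^2 - 2 = 0: x -> x - (x^2 - 2)/(2x);
    # the error roughly squares on each step
    for _ in range(steps):
        x = 0.5 * (x + 2.0 / x)
    return x
```

Starting from 1.5, a handful of steps already reach machine precision, whereas a linearly converging iteration gains only a constant number of bits per step.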

I recently came across Arnaud Chéritat's polynomial mating movies and just had to try to recreate them.

If \(p\) and \(q\) are in the Mandelbrot set, they have connected Julia sets for the quadratic polynomial functions \(z^2+p\) and \(z^2+q\). If they are not in conjugate limbs (a limb is everything beyond the main period 1 cardioid attached at the root of a given immediate child bulb, conjugation here is reflection in the real axis, the 1/2 limb is self-conjugate) then the Julia sets can be mated: glue the Julia sets together respecting external angles so that the result fills the complex plane (which is conveniently represented as the Riemann sphere). It turns out that this mating is related to the Julia set of a rational function of the form \(\frac{z^2+a}{z^2+b}\).

One algorithm to compute \(a\) and \(b\) is called "slow mating". Wolf Jung has a pre-print which explains how to do it in chapter 5: The Thurston Algorithm for quadratic matings.

My first attempts used Wolf Jung's code, and later my own code, to compute the rational function and visualize it in Fragmentarium (FragM fork). This only worked for displaying the final limit set, while Chéritat's videos had intermediate forms. I found a paper which had this to say about it:

On The Notions of Mating

Carsten Lunde Petersen & Daniel Meyer

5.4. Cheritat movies. It is easy to see that \(R_\lambda\) converges uniformly to the monomial \(z^d\) as \(\lambda \to \infty\). Cheritat has used this to visualize the path of Milnor intermediate matings \(R_\lambda\), \(\lambda \in ]1,\infty[\) of quadratic polynomials through films. Cheritat starts from \(\lambda\) very large so that \(K_w^\lambda\) and \(K_b^\lambda\) are essentially just two down scaled copies of \(K_w\) and \(K_b\), the first near \(0\), the second near \(\infty\). From the chosen normalization and the position of the critical values in \(K_w^\lambda \cup K_b^\lambda\) he computes \(R_{\sqrt{\lambda}}\). From this \(K_w^{\sqrt{\lambda}} \cup K_b^{\sqrt{\lambda}}\) can be computed by pull back of \(K_w^\lambda \cup K_b^\lambda\) under \(R_{\sqrt{\lambda}}\). Essentially applying this procedure iteratively one obtains a sequence of rational maps \(R_{\lambda_n}\) and sets \(K_w^{\lambda_n} \cup K_b^{\lambda_n}\), where \(\lambda_n \to 1+\) and \(\lambda_n^2 = \lambda_{n-1}\). For more details see the paper by Cheritat in this volume.

What seems to be the paper referred to contains this comment:

Tan Lei and Shishikura’s example of non-mateable degree 3 polynomials without a Levy cycle

Arnaud Chéritat

Figure 4. The Riemann surface \(S_R\) conformally mapped to the Euclidean sphere, painted with the drawings of Figure 2. The method for producing such a picture is interesting and will be explained in a forthcoming article; it does not work by computing the conformal map, but instead by pulling-back Julia sets by a series of rational maps. It has connections with Thurston's algorithm.

I could not find that "forthcoming" article despite the volume having been published in 2012 following the 2011 workshop, so I emailed Arnaud Chéritat and got a reply to the effect that it had been cancelled by the author.

My first attempts at coding the slow mating algorithm worked by pulling back the critical orbits as described in Wolf Jung's preprint. The curves look something like this:

A little magic formula for finding the parameters \((a, b)\) for the function \(\frac{z^2+a}{z^2+b}\):

\[a = \frac{C(D-1)}{D^3(1-C)}\] \[b = \frac{D-1}{D^2(1-C)}\]

where \((C,D)\) are the pulled back first iterates. This was reverse-engineered from Wolf Jung's code, which worked with separate real and imaginary components, with no comments and heavy reuse of the same variable names. I'm not sure it is correct, but it seems to give usable results when plugged into FragM for visualization:
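As a sketch in Python (a direct transcription of the formulas above, assuming \(C\) and \(D\) are already available as complex numbers):

```python
def mating_parameters(C, D):
    # a = C(D-1) / (D^3 (1-C)), b = (D-1) / (D^2 (1-C));
    # note the algebraic relation a = C * b / D follows immediately
    a = C * (D - 1) / (D ** 3 * (1 - C))
    b = (D - 1) / (D ** 2 * (1 - C))
    return a, b
```

The built-in consistency check a = C·b/D is a cheap way to catch transcription errors.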

I struggled to implement the intermediate images at first: I tried pulling back from the coordinates of a filled in Julia set but that needed huge amounts of memory and the resolution was very poor:

Eventually I figured out that I could invert each pullback function into something of the form \(\frac{az^2+b}{cz^2+d}\) and push forward from pixel coordinates to colour according to which hemisphere it reached:
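To sketch the idea: on \(w = z^2\) the map acts as a Möbius transformation \(w \mapsto \frac{a w + b}{c w + d}\), whose inverse is again Möbius, \(w' \mapsto \frac{d w' - b}{a - c w'}\). A hypothetical Python rendering (my notation, not the actual C99 code):

```python
def push(a, b, c, d, z):
    # forward map z -> (a z^2 + b) / (c z^2 + d)
    w = z * z
    return (a * w + b) / (c * w + d)

def pull_back_w(a, b, c, d, w2):
    # inverse Moebius map recovering w = z^2 from the image point;
    # z itself is determined only up to sign (the two square roots)
    return (d * w2 - b) / (a - c * w2)
```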

I struggled further, until I found the two bugs that were almost cancelling each other out. The coordinates in each respective hemisphere can be rescaled, and thereafter regular iterations of \(z^2+c\) until escape or maximum iterations could be used to colour the filled in Julia sets expanding within each hemisphere:

After that it was quite simple to bolt on dual-complex-numbers for automatic differentiation, to compute the derivatives for distance estimation to make the filaments of some Julia sets visible:

I also added an adaptive super-sampling scheme: if the standard deviation of the current per-pixel sample population divided by the number of samples is less than a threshold, I assume that the next sample will make negligible changes to the appearance, and so I stop. This speeds up interior regions (which need to be computed to the maximum iteration count) because the standard deviation will be 0 and it will stop after only the minimum sample count. I also have a maximum sample count to avoid taking excessive amounts of time. I do blending of samples in linear colour space, with sRGB conversion only for the final output.
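In outline (a simplified scalar version of the idea; the real code works per channel on linear-light samples):

```python
import math

def adaptive_mean(sample, min_samples=4, max_samples=256, threshold=1e-3):
    # draw samples until the standard deviation of the population,
    # divided by the sample count, falls below the threshold
    # (or the maximum sample count is reached)
    values = [sample() for _ in range(min_samples)]
    while len(values) < max_samples:
        n = len(values)
        mean = sum(values) / n
        sd = math.sqrt(sum((v - mean) ** 2 for v in values) / n)
        if sd / n < threshold:
            break
        values.append(sample())
    return sum(values) / len(values)
```

For a constant (interior-like) pixel the standard deviation is 0, so it stops immediately after the minimum count, as described above.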

Get the code:

git clone https://code.mathr.co.uk/mating.git

Currently about 600 lines of C99 with GNU getopt for argument parsing, but I may port the image generation part to OpenCL because my GPU is about 8x faster than my CPU for some double-precision numerical algorithms, which will help when rendering animations.

Melinda Green's webpage "The 4D Mandel/Juli/Buddhabrot Hologram" has a nice video at the bottom, titled *ZrZi to ZrCr - only points Inside the m-set*. I recalled my 2013 blog post about the Ultimate Anti-Buddhabrot, where I used Newton's method to find the limit Z cycle of each C value inside the Mandelbrot set and plotted them. The (anti-)Buddhagram is just like the (anti-)Buddhabrot, but the Z points are plotted in 4D space augmented with their C values. Then the 4D object can be rotated in various ways before projection down to the 2D screen, possibly via a 3D step.

My first attempt was based on my ultimate anti-Buddhabrot code, computing all the points in a fine grid over the C plane. I collected all the points in a large array, then transformed (4D rotation, perspective projection to 3D, perspective projection to 2D) them to 2D and accumulated with additive blending to give an image. This worked well for videos at moderate image resolutions, achieving around 6 frames per second (after porting the point cloud rasterization to OpenGL) at the highest grid density I could fit into RAM, but at larger sizes the grid of dots became visible in areas where the z→z²+c transformation magnified it.

Then I had a flash of inspiration while trying to find the surface normals for lighting. Looking at the formulas on Wikipedia I realized that each "pringle" is an implicit surface \(F_p(c, z) = 0\), with \(F_p(c, z) = f_c^p(z) - z\) and the usual \(f_c(z) = z^2 + c\). \(p\) is the period of the hyperbolic component. Rendering implicit surfaces can be done via sphere-marching through signed distance fields, so I tried to construct a distance estimate. As a first try I used \(|F_p(c, z)| - t\), where \(t\) is a small thickness to make the shapes solid, but that extended beyond the edges of each pringle and looked very wrong. The interior of the pringle has \(\left|\frac{\partial F_p}{\partial z}\right| \le 1\), so I added that to the distance estimate (using max() for intersection) to give:

```glsl
float DE(vec2 c, vec2 z0) {
  vec2 z = z0;
  vec2 dz = vec2(1.0, 0.0);
  float de = 1.0 / 0.0;
  for (int p = 1; p <= MaxPeriod; ++p) {
    dz = 2.0 * cMul(dz, z);
    z = cSqr(z) + c;
    de = min(de, max(length(z - z0), length(dz) - 1.0));
  }
  return 0.25 * de - Thickness;
}
```

Note that this has complexity linear in MaxPeriod, my first attempt was quadratic which was way too slow for comfort when MaxPeriod got bigger than about 10. The 0.25 at the end is empirically chosen to avoid graphical glitches.

I have not yet implemented a 4D raytracer in FragM, though it's on my todo list. It's quite straightforward, most of the maths is the same as the 3D case when expressed in vectors, but the cross-product has 3 inputs instead of 2. Check S. R. Hollasch's 1991 masters thesis Four-Space Visualization of 4D Objects for details. Instead I rendered 3D slices (with 4th dimension constant) with 3D lighting, animating the slice coordinate over time, and eventually accumulating all the 3D slices into one image to create a holographic feel similar to Melinda Green's original concept.
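For reference, the 3-input "cross product" in 4D can be computed by cofactor expansion (a standard construction following Hollasch; this sketch is mine, not FragM code):

```python
def cross4(u, v, w):
    # returns a 4-vector orthogonal to u, v and w, via cofactor
    # expansion of the 3x4 matrix [u; v; w] along a phantom first row
    def det3(a, b, c, d, e, f, g, h, i):
        return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    return (
        det3(u[1], u[2], u[3], v[1], v[2], v[3], w[1], w[2], w[3]),
        -det3(u[0], u[2], u[3], v[0], v[2], v[3], w[0], w[2], w[3]),
        det3(u[0], u[1], u[3], v[0], v[1], v[3], w[0], w[1], w[3]),
        -det3(u[0], u[1], u[2], v[0], v[1], v[2], w[0], w[1], w[2]),
    )
```

Orthogonality follows because the dot product of the result with any input equals a 4x4 determinant with a repeated row, which is zero.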

Source code is in my fractal-bits repository:

git clone https://code.mathr.co.uk/fractal-bits.git

Back in 2017 I forked the Windows fractal explorer software Kalles Fraktaler 2. I've been working on it steadily since, adding plenty of new features (and bugs). My fork's website is here with binary downloads for Windows (including Wine on Linux).

I had been maintaining 3 branches of various ages, purely because the 2.12 branch was faster than the 2.13 and 2.14 branches and I couldn't figure out why, until recently. Hence this blog post. It turns out to be quite obscure. This is the patch that fixed it:

```diff
diff --git a/formula/formula.xsl b/formula/formula.xsl
index b47f763..d07c002 100644
--- a/formula/formula.xsl
+++ b/formula/formula.xsl
@@ -370,6 +370,7 @@ bool FORMULA(perturbation,<xsl:value-of select="../@type" />,<xsl:value-of selec
       (void) Ai; // -Wunused-variable
       (void) A; // -Wunused-variable
       (void) c; // -Wunused-variable
+      bool no_g = g_real == 1.0 && g_imag == 1.0;
       int antal = antal0;
       double test1 = test10;
       double test2 = test20;
@@ -385,7 +386,14 @@ bool FORMULA(perturbation,<xsl:value-of select="../@type" />,<xsl:value-of selec
         Xxr = Xr + xr;
         Xxi = Xi + xi;
         test2 = test1;
-        test1 = double(g_real * Xxr * Xxr + g_imag * Xxi * Xxi);
+        if (no_g)
+        {
+          test1 = double(Xxr * Xxr + Xxi * Xxi);
+        }
+        else
+        {
+          test1 = double(g_real * Xxr * Xxr + g_imag * Xxi * Xxi);
+        }
         if (test1 < Xz)
         {
           bGlitch = true;
```

In short, it adds a branch inside the inner loop, to avoid two multiplications by 1.0 (which would leave the value unchanged). Normally branches inside inner loops are harmful for optimization, but because the condition is invariant over the iterations, the compiler can swap the order of the loop and the branch ("loop unswitching"), generating code for two loops, one of which has the two multiplications completely gone. In real-world usage, the values *are* almost always both 1.0 - they determine which parts of the value to use in the escape test (and the glitch test, but that is probably a bug).

The performance boost from this patch was about **20%**
(CPU time), which is huge in the grand scheme of things, so I was quite
happy, because it brought performance of kf-2.14.7.1 back to the level
of the 2.12 branch, so I don't have to support it any more (by backporting
bugfixes).

But when you get a taste for speed, you want more. So far KF has not
taken advantage of CPUs to their fullest. Until now, KF has been
resolutely scalar, computing one pixel at a time in each thread. Last
night I started work on upgrading KF to use vectorization
(aka SIMD).
Now when I
compile KF for my CPU (which is not portable, so I won't ship binaries with
these flags enabled), I get an **80%** (CPU time) speed boost,
which is absolutely ginormous, and when compiling for more conservative CPU
settings (Intel Haswell / AMD Excavator) the speed boost is **61%**
which is still a very nice thing to have. With no CPU specific flags
(baseline x86_64) the speed boost is **55%** which is great
too.

The vectorization work is not finished yet; so far it is only added for "type R" formulae in `double` precision (which allows zoom depths to 1e300 or so). Unfortunately `long double` (used after `double` until 1e4900 or so) has no SIMD support at the hardware level, but I will try to add it for the `floatexp` type used for even deeper zooms (who knows, maybe `floatexp`+SIMD will be competitive with `long double`, but I doubt it...). I will also add support for "type C" formulae before the release, which is a little complicated by the hoops you have to jump through to get gcc to broadcast a scalar to a vector in initialization.

Here's a table of differently optimized KF versions:

| version | vector size | wall-clock time | CPU time | speed boost (wall) | speed boost (CPU) |
|---|---|---|---|---|---|
| 2.14.7.1 | 1 | 3m47.959s | 23m30.676s | 1.00 | 1.00 |
| git/64 | 1 | 3m46.703s | 23m26.290s | | |
| git/64 | 2 | 3m22.280s | 15m11.022s | 1.13 | 1.55 |
| git/64 | 4 | 3m55.158s | 25m26.638s | | |
| git/64+ | 1 | 3m46.977s | 23m26.065s | | |
| git/64+ | 2 | 3m13.442s | 14m34.363s | 1.18 | 1.61 |
| git/64+ | 4 | 3m26.012s | 14m54.546s | | |
| git/native | 1 | 3m42.554s | 21m51.381s | | |
| git/native | 2 | 3m10.440s | 13m26.100s | | |
| git/native | 4 | 3m08.784s | 13m06.386s | 1.21 | 1.80 |
| git/native | 8 | 3m50.812s | 24m01.230s | | |

All these benchmarks are with Dinkydau's "Evolution of Trees" location, quadratic Mandelbrot set at zoom depth 5e227, with maximum iteration count 1200000. Image size was 3840x2160. My CPU is an AMD Ryzen 7 2700X Eight-Core Processor (with 16 threads that appear as distinct CPUs to Linux). Wall-clock performance doesn't scale up as much as CPU time because some parts (computing reference orbits) are sequential; only the perturbed per-pixel orbits are embarrassingly parallel.

In yesterday's post I showed how dividing by unwanted roots leads to better stability when finding periodic nuclei \(c\) that satisfy \(f_c^p(0) = 0\) where \(f_c(z) = z^2 + c\). Today I'll show how two techniques can bring this gain to finding periodic cycles \(z_0\) that satisfy \(f_c^p(z_0) = z_0\) for a given \(c\).

The first attempt just does the Newton iterations without any wrong root division; unsurprisingly it isn't very successful. The second attempt divides by wrong period roots, and is a bit better. The third algorithm is much more involved, thus slower, but is much more stable (in terms of larger Newton basins around the desired roots).

Here are some images, each row corresponds to an algorithm as introduced. The colouring is based on lifted domain colouring of the derivative of the limit cycle: \(\left|\frac{\partial}{\partial z}f_c^p(z_0)\right| \le 1\) in the interior of hyperbolic components, and acts as conformal interior coordinates which do extend a bit into the exterior.

The third algorithm works by first finding a \(c_0\) that is a periodic nucleus, then we know that a good \(z_0\) for this \(c_0\) is simply \(0\). Now move \(c_0\) a little bit in the direction of the real \(c\) that we wish to calculate, and use Newton's method with the previous \(z_0\) as the initial guess to find a good \(z_0\) for the moved \(c_0\). Repeat until \(c_0 \to c\) and hopefully the resulting \(z_0\) will be as hoped for, in the periodic cycle for \(c\).
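A minimal sketch of this continuation idea in Python (my own reconstruction from the description above, not the Fragmentarium code), shown for a walk starting from the period 1 nucleus \(c_0 = 0\):

```python
def newton_cycle(c, z, p, steps=20):
    # Newton's method on g(z) = f_c^p(z) - z, with g'(z) = (f_c^p)'(z) - 1
    for _ in range(steps):
        w, dw = z, 1.0 + 0.0j
        for _ in range(p):
            dw = 2.0 * w * dw  # chain rule for the derivative in z
            w = w * w + c
        z = z - (w - z) / (dw - 1.0)
    return z

def trace_cycle(c_target, p, c0=0j, n=100):
    # walk c from the nucleus c0 (where z0 = 0) towards c_target,
    # re-converging the periodic point after each small step
    z = 0j
    for i in range(1, n + 1):
        c = c0 + (c_target - c0) * i / n
        z = newton_cycle(c, z, p)
    return z
```

Because each step starts Newton's method from the previous cycle point, the iteration stays in the basin of the desired root, which is the whole point of the continuation.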

Source code for Fragmentarium: 2018-11-18_newtons_method_for_periodic_cycles.frag.

Previously on mathr: Newton's method for Misiurewicz points (2015). This week I applied the same "divide by undesired roots" technique to the periodic nucleus Newton iterations. I implemented it in GLSL in Fragmentarium, which has a Complex.frag with dual numbers for automatic differentiation (this part of the frag is mostly my work, but I largely copy/pasted from C99 standard library manual pages for the transcendental functions, and from Wikipedia for basic properties of differentiation like the product rule, quotient rule, chain rule...).

Here's the improved Newton's method, with the newly added lines marked with `// added` comments:

```glsl
vec2 nucleus(vec2 c0, int period, int steps) {
  vec2 c = c0;
  for (int j = 0; j < steps; ++j) {
    vec4 G = cConst(1.0); // added
    vec4 z = cConst(0.0);
    for (int l = 1; l <= period; ++l) {
      z = cSqr(z) + cVar(c);
      if (l < period && period % l == 0) G = cMul(z, G); // added
    }
    G = cDiv(z, G); // added: divide out roots of lower period
    c -= cDiv(G.xy, G.zw);
  }
  return c;
}
```

And some results; the top half of each image is without the added lines, the bottom half is with the added lines, and from left to right the target periods are 2, 3, 4, 9, 12:

You can download the FragM source code for the images in this article: 2018-11-17_newtons_method_for_periodic_points.frag.

Last week I implemented (in Haskell, using lazy ST with each STRef paired with Natural so that I can have Ord) the algorithm presented in this paper:

Images of Julia sets that you can trust

L. H. de Figueiredo, D. Nehab, J. Stolfi, and J. B. Oliveira


Abstract: We present an algorithm for computing images of quadratic Julia sets that can be trusted in the sense that they contain numerical guarantees against sampling artifacts and rounding errors in floating-point arithmetic. We use cell mapping and color propagation in graphs to avoid function iteration and rounding errors. As a result, our algorithm avoids point sampling and can robustly classify entire rectangles in the complex plane as being on either side of the Julia set. The union of the regions that cannot be so classified is guaranteed to contain the Julia set. Our algorithm computes a refinable quadtree decomposition of the complex plane adapted to the Julia set which can be used for rendering and for approximating geometric properties such as the area of the filled Julia set and the fractal dimension of the Julia set.

Keywords: Fractals, Julia sets, adaptive refinement, cellular models, cell mapping, computer-assisted proofs

You can find my code in my mandelbrot-graphics repository. I reproduced most of the results, colouring with black interior, white exterior, red unknown (the Julia set is inside the red region), and the quadtree cell boundaries in grey:

The last two examples above show how it fails at parabolic Julia sets.

I also implemented a trustworthy Mandelbrot set, based on the idea that if the neighbourhood of the origin in the Julia set is all exterior, then the point cannot be in the Mandelbrot set, and if any interior exists in the Julia set, then the point must be in the Mandelbrot set. Now replace 'point' in those two clauses with "closed 2D square", and use the property of the algorithm in the paper that the proofs of interiorhood and exteriorhood of the Julia set hold over the whole interval.

It's far too slow to be practical, if pretty pictures were the goal! The red zone of unknown doesn't shrink much with each depth increment.

Define the iterated quadratic polynomial:

\[ f_c^0(z) = 0 \\ f_c^{n+1}(z) = \left(f_c^n(z)\right)^2 + c \]

The Mandelbrot set is those \(c\) for which \(f_c^n(0)\) remains bounded for all \(n\). Misiurewicz points are dense in the boundary of the Mandelbrot set. They are strictly preperiodic, which means they satisfy this polynomial equation:

\[ f_c^{q+p}(0) = f_c^{q}(0) \\ p > 0 \\ q > 0\]

and moreover the period \(p\) and the preperiod \(q\) of a Misiurewicz point \( c \in M_{q,p} \) are the lowest values that make the equation true. For example, \(-2 \in M_{2,1}\) and \(i \in M_{2,2}\), which can be verified by iterating the polynomial (exercise: do that).
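Doing that exercise numerically (a quick floating-point check, not a symbolic proof):

```python
def is_preperiodic(c, q, p, eps=1e-12):
    # check f_c^(q+p)(0) == f_c^q(0) by direct iteration of z^2 + c
    z = 0j
    orbit = [z]
    for _ in range(q + p):
        z = z * z + c
        orbit.append(z)
    return abs(orbit[q + p] - orbit[q]) < eps
```

For \(c = i\) the critical orbit is \(0, i, -1+i, -i, -1+i, \ldots\), so \(f^4(0) = f^2(0)\) while no shorter (pre)period works.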

Misiurewicz points are algebraic integers (a subset of the algebraic numbers), which means they are the roots of a monic polynomial with integer coefficients. A monic polynomial is one with leading coefficient \(1\), for example \(c^2+c\). Factoring a monic polynomial gives monic polynomials as factors. Factoring over the complex numbers \(\mathbb{C}\) gives the \(M_{q,p}\) in linear factors, factoring over the integers \(\mathbb{Z}\) can give irreducible polynomials of degree greater than \(1\). For example, here's the equation for \(M_{2,2}\):

\[c^3\,\left(c+1\right)^2\,\left(c+2\right)\,\left(c^2+1\right)\]

Note that the repeated root \(0\) corresponds to a hyperbolic component of period \(1\) (the nucleus of the top level cardioid of the Mandelbrot set), and the repeated root \(-1\) corresponds to the period \(2\) circle to the left. And \(-2 \in M_{2,1}\), so the "real" equation we are interested in is the last term, \(c^2+1\), which is irreducible over the integers, but has complex roots \(\pm i\). There are two roots, so \(\left|M_{2,2}\right| = 2\).

So, a first **attempt** at enumerating Misiurewicz points works like this:

```haskell
-- using numeric-prelude and MemoTrie from Hackage
type P = MathObj.Polynomial.T Integer

-- h with all factors g removed
divideAll :: P -> P -> P
divideAll h g
  | isZero h = h
  | isOne g = h
  | isZero g = error "/0"
  | otherwise = case h `divMod` g of
      (di, mo)
        | isZero mo -> di `divideAll` g
        | otherwise -> h

-- h with all factors in the list removed
divideAlls :: P -> [P] -> P
divideAlls h [] = h
divideAlls h (g:gs) = divideAlls (h `divideAll` g) gs

-- the variable for the polynomials
c :: P
c = fromCoeffs [ 0, 1 ]

-- the base quadratic polynomial
f :: P -> P
f z = z^2 + c

-- the iterated quadratic polynomial
fn :: Int -> P
fn = memo fn_ where fn_ 0 = 0 ; fn_ n = f (fn (n - 1))

-- the raw M_{q,p} polynomial
m_raw :: Int -> Int -> P
m_raw = memo2 m_raw_ where m_raw_ q p = fn (q + p) - fn q

-- the M_{q,p} polynomial with lower (pre)periods removed
m :: Int -> Int -> P
m = memo2 m_ where
  m_ q p = m_raw q p `divideAlls`
    [ mqp
    | q' <- [ 0 .. q ]
    , p' <- [ 1 .. p ]
    , q' + p' < q + p
    , p `mod` p' == 0
    , let mqp = m q' p'
    , not (isZero mqp)
    ]

-- |M_{q,p}|
d :: Int -> Int -> Int
d q p = case degree (m q p) of Just k -> k ; Nothing -> -1
```

This is using numeric-prelude and MemoTrie from Hackage, but with a reimplemented divMod for monic polynomials that doesn't try to divide by an Integer (which will always be \(1\) for monic polynomials). The core polynomial divMod from numeric-prelude needs a Field for division, and the integers don't form a field.
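The trick is that division by a monic polynomial stays entirely within integer coefficients. A standalone sketch of such a division (coefficient lists, lowest degree first; my own illustration, not the numeric-prelude internals):

```python
def divmod_monic(h, g):
    # polynomial long division of h by monic g, both with integer
    # coefficients stored lowest degree first; because g's leading
    # coefficient is 1, no division of coefficients is ever needed
    assert g and g[-1] == 1
    h = list(h)
    q = [0] * max(len(h) - len(g) + 1, 1)
    for i in range(len(h) - len(g), -1, -1):
        q[i] = h[i + len(g) - 1]
        for j, gj in enumerate(g):
            h[i + j] -= q[i] * gj
    while len(h) > 1 and h[-1] == 0:
        h.pop()
    return q, h
```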

Tabulating this **attempt** at \(\left|M_{q,p}\right|\) (`d q p`) for various small \(q,p\) gives:

| q \ p | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 1 | 1 | 3 | 6 | 15 | 27 | 63 | 120 | 252 | 495 | 1023 | 2010 | 4095 | 8127 | 16365 | 32640 |
| 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | |
| 2 | 1 | 2 | 6 | 12 | 30 | 54 | 126 | 240 | 504 | 990 | 2046 | 4020 | 8190 | 16254 | | |
| 3 | 3 | 3 | 12 | 24 | 60 | 108 | 252 | 480 | 1008 | 1980 | 4092 | 8040 | 16380 | | | |
| 4 | 7 | 8 | 21 | 48 | 120 | 216 | 504 | 960 | 2016 | 3960 | 8184 | 16080 | | | | |
| 5 | 15 | 15 | 48 | 90 | 240 | 432 | 1008 | 1920 | 4032 | 7920 | 16368 | | | | | |
| 6 | 31 | 32 | 96 | 192 | 465 | 864 | 2016 | 3840 | 8064 | 15840 | | | | | | |
| 7 | 63 | 63 | 189 | 384 | 960 | 1701 | 4032 | 7680 | 16128 | | | | | | | |
| 8 | 127 | 128 | 384 | 768 | 1920 | 3456 | 8001 | 15360 | | | | | | | | |
| 9 | 255 | 255 | 768 | 1530 | 3840 | 6912 | 16128 | | | | | | | | | |
| 10 | 511 | 512 | 1533 | 3072 | 7680 | 13824 | | | | | | | | | | |
| 11 | 1023 | 1023 | 3072 | 6144 | 15345 | | | | | | | | | | | |
| 12 | 2047 | 2048 | 6144 | 12288 | | | | | | | | | | | | |
| 13 | 4095 | 4095 | 12285 | | | | | | | | | | | | | |
| 14 | 8191 | 8192 | | | | | | | | | | | | | | |
| 15 | 16383 | | | | | | | | | | | | | | | |

\(|M_{0,p}|\) is known to be A000740. \(|M_{2,p}|\) appears to be A038199. \(|M_{q,1}|\) appears to be A000225. \(|M_{q,2}|\) appears to be A166920.

**HOWEVER there is a fatal flaw**. The polynomials might not be irreducible, which means that `divideAlls` might not be removing all of the lower (pre)period roots! A proper solution would be to port the code to a computer algebra system that can factor polynomials into irreducible polynomials. Or alternatively, mathematically prove that the polynomials in question will always be irreducible (as far as I know this is an open question, verified only for \(M_{0,p}\) up to \(p = 10\), according to Corollary 5.6 (Centers of Components as Algebraic Numbers)).

You can download my full Haskell code.

**UPDATE** I wrote some Sage code (Python-based) with an
improved algorithm (I think it's perfect now). The values all matched the
original table, and I extended it with further values and links to OEIS.
All the polynomials in question are irreducible, up to the \(p + q < 16\)
limit. No multiplicities greater than one were reported. Code:

```sage
@parallel(16)
def core(q, p, allroots):
    mpq = 0
    roots = set()
    R.<x> = ZZ[]
    w = 0*x
    for i in range(q):
        w = w^2 + x
    wq = w
    for i in range(p):
        w = w^2 + x
    wqp = w
    f = wqp - wq
    r = f.factor()
    for i in r:
        m = i[0]
        k = i[1]
        if not (m in allroots) and not (m in roots):
            roots.add(m)
            mpq += m.degree()
        if k > 1:
            print(("multiplicity > 1", k, "q", q, "p", p, "degree", m.degree()))
    return (q, p, mpq, roots)

allroots = set()
for n in range(16):
    print(n)
    res = sorted(list(core([(q, n - q, allroots) for q in range(n)])))
    for r in res:
        t = r[1]
        q = t[0]
        p = t[1]
        mpq = t[2]
        roots = t[3]
        print((q, p, mpq, len(roots), [root.degree() for root in roots]))
        allroots |= roots
```

**UPDATE2** I bumped the table to \(q + p < 17\). I
ran into some OOM-kills, so I had to run it with less parallelism to get
it to finish.

**UPDATE3** I found a simple function that fits all the data in the table, but I don't know if it is correct or will break for larger values. Code (the function is called `f`):

```haskell
import Math.NumberTheory.ArithmeticFunctions (divisors, moebius, runMoebius) -- arithmoi
import Data.Set (toList) -- containers

mu :: Integer -> Integer
mu = runMoebius . moebius

mqps :: [[Integer]]
mqps =
  [ [1,1,3,6,15,27,63,120,252,495,1023,2010,4095,8127,16365,32640]
  , [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
  , [1,2,6,12,30,54,126,240,504,990,2046,4020,8190,16254]
  , [3,3,12,24,60,108,252,480,1008,1980,4092,8040,16380]
  , [7,8,21,48,120,216,504,960,2016,3960,8184,16080]
  , [15,15,48,90,240,432,1008,1920,4032,7920,16368]
  , [31,32,96,192,465,864,2016,3840,8064,15840]
  , [63,63,189,384,960,1701,4032,7680,16128]
  , [127,128,384,768,1920,3456,8001,15360]
  , [255,255,768,1530,3840,6912,16128]
  , [511,512,1533,3072,7680,13824]
  , [1023,1023,3072,6144,15345]
  , [2047,2048,6144,12288]
  , [4095,4095,12285]
  , [8191,8192]
  , [16383]
  ]

m :: Integer -> Integer -> Integer
m q p = mqps !! fromInteger q !! fromInteger (p - 1)

f :: Integer -> Integer -> Integer
f 0 p = sum [ mu (p `div` d) * 2 ^ (d - 1) | d <- toList (divisors p) ]
f 1 _ = 0
f q 1 = 2 ^ (q - 1) - 1
f q p = (2 ^ (q - 1) - if q `mod` p == 1 then 1 else 0) * f 0 p

check :: Bool
check = and [ f q p == m q p | n <- [1 .. 16], p <- [1 .. n], let q = n - p ]

main :: IO ()
main = print check
```

**UPDATE4** Progress! I found a paper with the answer:

Misiurewicz Points for Polynomial Maps and Transversality

Benjamin Hutz, Adam Towsley

**Corollary 3.3.** The number of \((m,n)\) Misiurewicz points for \(f_{d,c}\) is \[ M_{m,n} = \begin{cases} \sum_{k \mid n} \mu\left(\frac{n}{k}\right) d^{k-1} & m = 0 \\ (d^m - d^{m-1} - d + 1) \sum_{k \mid n} \mu\left(\frac{n}{k}\right) d^{k-1} & m \ne 0 \text{ and } n \mid (m - 1) \\ (d^m - d^{m-1}) \sum_{k \mid n} \mu\left(\frac{n}{k}\right) d^{k-1} & \text{otherwise} \end{cases} \]

They have \(f_{d,c}(z) = z^d + c\), so this result is more general than the case \(d = 2\) I was researching in this post. The formula I came up with is the same, with minor notational differences.
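For the case \(d = 2\) the corollary is easy to check against the table above. A self-contained Python sketch (the Möbius function is implemented by trial factorization; the helper names are mine, not from the paper):

```python
def mobius(n):
    # Möbius function mu(n) via trial factorization
    result = 1
    p = 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0  # squared prime factor => mu = 0
            result = -result
        p += 1
    if n > 1:
        result = -result  # one remaining prime factor
    return result

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def misiurewicz_count(m, n, d=2):
    """Number of (m,n) Misiurewicz points for z^d + c (Hutz & Towsley, Cor. 3.3)."""
    s = sum(mobius(n // k) * d**(k - 1) for k in divisors(n))
    if m == 0:
        return s
    if (m - 1) % n == 0:  # n divides (m - 1)
        return (d**m - d**(m - 1) - d + 1) * s
    return (d**m - d**(m - 1)) * s
```

For example, `misiurewicz_count(4, 1)` gives 7 and `misiurewicz_count(2, 3)` gives 6, matching the corresponding entries of the table.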

The essence of perturbation is to find the difference between the high precision values of a function at two nearby points, while using only the low precision value of the difference between the points. In this post I'll write the high precision points in CAPITALS and the low precision deltas in lowercase. There are two auxiliary operations needed to define the perturbation \(P\): \(B\) replaces all variables by their high precision versions, and \(W\) replaces all variables by the sum of the high precision version and the low precision delta. Then \(P = W - B\):

\[\begin{aligned} B(f) &= f(X) &\text{ (emBiggen)}\\ W(f) &= f(X + x) &\text{ (Widen)}\\ P(f) &= W(f) - B(f) \\ &= f(X + x) - f(X) &\text{ (Perturb)} \end{aligned}\]

For example, perturbation of \(f(z, c) = z^2 + c\), i.e. \(P(f)\), works out like this:

\[\begin{aligned} & P(f) \\ \to & f(Z + z, C + c) - f(Z, C) \\ \to & (Z + z)^2 + (C + c) - (Z^2 + C) \\ \to & Z^2 + 2 Z z + z^2 + C + c - Z^2 - C \\ \to & 2 Z z + z^2 + c \end{aligned}\]

where in the final result the additions of \(Z\) and \(z\) have mostly cancelled out and all the terms are "small".
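The cancellation can be verified numerically: iterate the reference orbit \(Z\) and the delta orbit \(z\) side by side, then compare \(Z + z\) with a direct iteration of the perturbed parameter. A small Python check (the sample values are arbitrary; \(C\) is chosen inside the main cardioid so the orbits stay bounded):

```python
C, c = -0.4 + 0.3j, 1e-8 + 1e-8j  # reference parameter and small delta
Z, z = 0j, 0j
W = 0j  # direct iteration at the perturbed parameter C + c
for _ in range(20):
    z = 2 * Z * z + z * z + c  # perturbed delta iteration (uses the old Z)
    Z = Z * Z + C              # high precision reference iteration
    W = W * W + (C + c)        # direct iteration for comparison
# Z + z reconstructs the perturbed orbit from reference plus delta
assert abs((Z + z) - W) < 1e-12
```

In exact arithmetic \(Z + z\) and \(W\) agree identically; in floating point they differ only by rounding, which is the whole point of keeping the terms "small".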

For polynomials, regular algebraic manipulation can lead to successful outcomes, but for other functions it seems some "tricks" are needed. For example, \(|x|\) (over \(\mathbb{R}\)) can be perturbed with a "diffabs" function proceeding via case analysis:

```
// evaluate |X + x| - |X| without catastrophic cancellation
function diffabs(X, x) {
  if (X >= 0) {
    if (X + x >= 0) {
      return x;
    } else {
      return -(2 * X + x);
    }
  } else {
    if (X + x > 0) {
      return 2 * X + x;
    } else {
      return -x;
    }
  }
}
```

This formulation was developed by laser blaster at fractalforums.com.
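The case analysis is easy to spot-check against the naive formula. A Python translation, for testing only:

```python
def diffabs(X, x):
    # |X + x| - |X| without catastrophic cancellation (real-valued)
    if X >= 0:
        return x if X + x >= 0 else -(2 * X + x)
    else:
        return 2 * X + x if X + x > 0 else -x

# compare against the naive formula across all sign combinations
for X in (-2.0, -0.5, 0.0, 0.5, 2.0):
    for x in (-3.0, -0.25, 0.0, 0.25, 3.0):
        assert abs(diffabs(X, x) - (abs(X + x) - abs(X))) < 1e-12
```

The benefit over the naive formula only shows up when \(X\) is stored in high precision and \(x\) in low, which this double-only check cannot exercise; it confirms the case analysis itself.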

For transcendental functions, other tricks are needed. Here for example is a derivation of \(P(\sin)\):

\[\begin{aligned} & P(\sin) \\ \to & \sin(X + x) - \sin(X) \\ \to & \sin(X) \cos(x) + \cos(X) \sin(x) - \sin(X) \\ \to & \sin(X) (\cos(x) - 1) + \cos(X) \sin(x) \\ \to & \sin(X) \left(-2\sin^2\left(\frac{x}{2}\right)\right) + \cos(X) \sin(x) \\ \to & \sin(X) \left(-2\sin^2\left(\frac{x}{2}\right)\right) + \cos(X) \left(2 \cos\left(\frac{x}{2}\right) \sin\left(\frac{x}{2}\right)\right) \\ \to & 2 \sin\left(\frac{x}{2}\right) \left(-\sin(X) \sin\left(\frac{x}{2}\right) + \cos(X) \cos\left(\frac{x}{2}\right)\right) \\ \to & 2 \sin\left(\frac{x}{2}\right) \cos\left(X + \frac{x}{2}\right) \end{aligned}\]

Knowing when to apply the sum- and double-angle formulae is a bit of a mystery, especially if the end goal is not known beforehand. This makes implementing a symbolic algebra program that can perform these derivations quite a challenge.

In lieu of a complete symbolic algebra program that does it all on demand, here are a few formulae that I calculated, some by hand, some using Wolfram Alpha:

\[\begin{aligned} P(a) &= 0 \\ P(a f) &= a P(f) \\ P(f + g) &= P(f) + P(g) \\ P(f g) &= P(f) W(g) + B(f) P(g) \\ P(f^{n+1}) &= P(f) \sum_{k=0}^n W(f^k) B(f)^{n-k} \\ P\left(\frac{1}{f}\right) &= -\frac{P(f)}{B(f)W(f)} \\ P(|f|) &= \operatorname{diffabs}(B(f), P(f)) \\ P(\exp) &= \exp(X) \operatorname{expm1}(x) \\ P(\log) &= \operatorname{log1p}\left(\frac{x}{X}\right) \\ P(\sin \circ f) &= \phantom{-}2 \sin\left(\frac{P(f)}{2}\right)\cos\left(\frac{W(f)+B(f)}{2}\right) \\ P(\cos \circ f) &= -2 \sin\left(\frac{P(f)}{2}\right)\sin\left(\frac{W(f)+B(f)}{2}\right) \\ P(\tan \circ f) &= \frac{\sin(P(f))}{\cos(B(f))\cos(W(f))} \\ P(\sinh \circ f) &= 2 \sinh\left(\frac{P(f)}{2}\right)\cosh\left(\frac{W(f)+B(f)}{2}\right) \\ P(\cosh \circ f) &= 2 \sinh\left(\frac{P(f)}{2}\right)\sinh\left(\frac{W(f)+B(f)}{2}\right) \\ P(\tanh \circ f) &= \frac{\sinh(P(f))}{\cosh(B(f))\cosh(W(f))} \\ P(\exp \circ f) &= 2 \sinh\left(\frac{P(f)}{2}\right)\exp\left(\frac{W(f)+B(f)}{2}\right) \\ \end{aligned}\]
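These identities are straightforward to spot-check numerically. A small Python sanity check of three of them (the sample values are arbitrary; `expm1` and `log1p` avoid the relative precision loss the naive right-hand differences suffer for tiny \(x\), though at this magnitude an absolute comparison still works):

```python
import math

# sample reference value X and small delta x (arbitrary choices)
X, x = 1.2345, 1e-6

# P(exp) = exp(X) * expm1(x)
assert abs(math.exp(X) * math.expm1(x)
           - (math.exp(X + x) - math.exp(X))) < 1e-12

# P(log) = log1p(x / X)
assert abs(math.log1p(x / X)
           - (math.log(X + x) - math.log(X))) < 1e-12

# P(sin) = 2 sin(x/2) cos(X + x/2)
assert abs(2 * math.sin(x / 2) * math.cos(X + x / 2)
           - (math.sin(X + x) - math.sin(X))) < 1e-12
```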

I hope to find time to add these to et soon.

**EDIT** there is a simpler and more general way to derive \(P(\sin)\)
and so on, using \(\sin(a) \pm \sin(b)\) formulae...

**EDIT 2019-12-23** added \(P(\exp \circ f)\) via \(\exp = \sinh + \cosh\)

**EDIT 2020-05-28** added \(P(f^n)\) via induction

Earlier today I wrote about
atom domain coordinates,
and thought about extending it to
Misiurewicz domains.
By simple analogy, define the **Misiurewicz domain coordinate** \(G\) with
\(0 \le r \lt q\) and \(1 \le p\):

\[G(c, p, q, r) = \frac{F^{q + p}(0, c) - F^{q}(0, c)}{F^{r + p}(0, c) - F^{r}(0, c)}\]

Calculated similarly to the
atom domain size estimate,
the **Misiurewicz domain size estimate** is:

\[|h| = \left| \frac{F^{r + p}(0, c) - F^{r}(0, c)}{\frac{\partial}{\partial c}F^{q + p}(0, c) - \frac{\partial}{\partial c}F^{q}(0, c)} \right| \]

Like the atom domain coordinate, Newton's method can be used to find a point with a given Misiurewicz domain coordinate. Implementing this is left as an exercise (expect an implementation in my mandelbrot-numerics repository at some point soon).
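Pending that implementation, here is one possible Python sketch (the helper name is mine, not from mandelbrot-numerics), analogous to the atom domain Newton iteration. As a sanity check, with \(p = 1, q = 2, r = 1, G = 0\) it should converge to the Misiurewicz point \(c = -2\), where \(F^3(0,c) = F^2(0,c)\):

```python
def m_misiurewicz_domain_coord(c0, G, p, q, r, steps=20):
    """Newton's method for G(c, p, q, r) = G; a sketch, not the
    mandelbrot-numerics version."""
    c = c0
    for _ in range(steps):
        z, dc = 0j, 0j
        vals = {0: (0j, 0j)}  # F^0(0,c) = 0 and d/dc F^0 = 0, for r == 0
        for i in range(1, q + p + 1):
            dc = 2 * z * dc + 1  # derivative recurrence for d/dc F^i
            z = z * z + c
            if i in (r, q, r + p, q + p):
                vals[i] = (z, dc)
        num = vals[q + p][0] - vals[q][0]
        dnum = vals[q + p][1] - vals[q][1]
        den = vals[r + p][0] - vals[r][0]
        dden = vals[r + p][1] - vals[r][1]
        f = num / den - G
        df = (dnum * den - num * dden) / (den * den)  # quotient rule
        c = c - f / df
    return c
```

For example, `m_misiurewicz_domain_coord(-1.9 + 0j, 0j, 1, 2, 1)` converges to \(-2\).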

Previously I wrote about
atom domain size estimates
in the Mandelbrot set. A logical step given \(G(c) = 0\) at the center and
\(|G(c)| = 1\) on the boundary is to take \(G(c)\) as the
**atom domain coordinate**
for \(c\). It turns out to make sense to make \(q\) and \(p\) (with \(1 \le q \lt p\))
arguments to the function, and to evaluate them at the central nucleus, because
otherwise the assumption that \(p, q\) are constant throughout the domain can
be violated (particularly with neighbouring domains in embedded Julia sets, where
the higher period one is not "influenced" by the medium period one it overlaps,
but instead by the lower period "parent" of both):

\[G(c, q, p) = \frac{F^p(0, c)}{F^q(0, c)}\]

In my recent efficient automated Julia morphing experiments I used the atom domain coordinates for guessing an initial point for Newton's method to find Misiurewicz points. This worked because each next level of morph had an atom domain coordinate approximately equal to the previous one raised to the power \(\frac{3}{2}\). To do this I needed to implement Newton's method iterations to find \(c\) given \(G(c), p, q\). Pseudo-code for that looks like this:

```c
double _Complex m_domain_coord
  ( double _Complex c0
  , double _Complex G
  , int q
  , int p
  , int n
  )
{
  double _Complex c = c0;
  for (int j = 0; j < n; ++j)
  {
    double _Complex zp = 0;
    double _Complex dcp = 0;
    double _Complex z = 0;
    double _Complex dc = 0;
    for (int i = 1; i <= p; ++i)
    {
      dc = 2 * z * dc + 1;
      z = z * z + c;
      if (i == q)
      {
        zp = z;
        dcp = dc;
      }
    }
    double _Complex f = z / zp - G;
    double _Complex df = (dc * zp - z * dcp) / (zp * zp);
    c = c - f / df;
  }
  return c;
}
```
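The pseudo-code ports directly to Python. As a sanity check, for \(q = 1, p = 2\) the coordinate simplifies to \(G(c) = (c^2 + c)/c = c + 1\), so the root Newton finds can be verified by hand (a sketch, not the mandelbrot-numerics version):

```python
def m_domain_coord(c0, G, q, p, n):
    # Newton's method for F^p(0,c) / F^q(0,c) = G
    c = c0
    for _ in range(n):
        zp, dcp = 0j, 0j
        z, dc = 0j, 0j
        for i in range(1, p + 1):
            dc = 2 * z * dc + 1  # d/dc F^i
            z = z * z + c
            if i == q:
                zp, dcp = z, dc  # save F^q and its derivative
        f = z / zp - G
        df = (dc * zp - z * dcp) / (zp * zp)  # quotient rule
        c = c - f / df
    return c

# for q = 1, p = 2: G(c) = c + 1, so G = 1.5 + 0.5j should give c = 0.5 + 0.5j
assert abs(m_domain_coord(0.3 + 0j, 1.5 + 0.5j, 1, 2, 8) - (0.5 + 0.5j)) < 1e-9
```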

You can find fuller implementations (including arbitrary precision) in my mandelbrot-numerics repository.
