The Mandelbrot set is approximately self-similar, containing miniature baby Mandelbrot set copies. However, all of these copies are distorted, because there is only one perfect circle in the Mandelbrot set (the period-\(2\) disc centered on \(-1\)). The complex-valued size estimate can be used as a multiplier for looping zoom animations, though the difference in decorations and the visible distortion make the seam a little jarring. Here are some examples:

period \(3\) near \(-2\)

period \(4\) near \(i\)

period \(5\) near \(-1.5 + 0.5 i\)

The trick to the looping zoom is to find an appropriate center: if the nucleus of the baby is \(c\) and the complex size is \(r\), there is another miniature copy near the baby around \(c + r c\) with size approximately \(r^2\). Taking the limit gives a geometric progression:

\[c + r c + r^2 c + \cdots = \frac{c}{1 - r}\]

Here's the code used to render the images (also found in the mandelbrot-graphics repository):

#include <stdio.h>
#include <mandelbrot-graphics.h>

int main(int argc, char **argv) {
  (void) argc;
  (void) argv;
  const double _Complex r0 = 1;
  const double _Complex c0 = 0;
  int periods[3] = { 3, 4, 5 };
  double _Complex c1s[3] = { -2, I, -1.5 + I * 0.5 };
  int w = 512;
  int h = 512;
  m_pixel_t red = m_pixel_rgba(1, 0, 0, 1);
  m_pixel_t black = m_pixel_rgba(0, 0, 0, 1);
  m_pixel_t white = m_pixel_rgba(1, 1, 1, 1);
  double er = 600;
  int maxiters = 1000;
  m_image *image = m_image_new(w, h);
  if (image) {
    m_d_colour_t *colour = m_d_colour_minimal(red, black, white);
    if (colour) {
      for (int k = 0; k < 3; ++k) {
        int period = periods[k];
        double _Complex c1 = c1s[k];
        m_d_nucleus(&c1, c1, period, 64);
        double _Complex r1 = m_d_size(c1, period);
        for (int frame = 0; frame < 50; ++frame) {
          double f = (frame + 0.5) / 50;
          double _Complex r = cpow(r1, f) * cpow(r0, 1 - f);
          double _Complex c = c1 / (1 - r1);
          m_d_transform *rect = m_d_transform_rectangular(w, h, 0, 1);
          m_d_transform *move1 = m_d_transform_linear(- c / 2.25, 1);
          m_d_transform *zoom = m_d_transform_linear(0, r * 2.25);
          m_d_transform *move2 = m_d_transform_linear(c, 1);
          m_d_transform *rm1 = m_d_transform_compose(rect, move1);
          m_d_transform *zm2 = m_d_transform_compose(zoom, move2);
          m_d_transform *transform = m_d_transform_compose(rm1, zm2);
          m_d_render_scanline(image, transform, er, maxiters, colour);
          char filename[100];
          snprintf(filename, 100, "%d-%02d.png", k, frame);
          m_image_save_png(image, filename);
          m_d_transform_delete(transform);
          m_d_transform_delete(zm2);
          m_d_transform_delete(rm1);
          m_d_transform_delete(move2);
          m_d_transform_delete(zoom);
          m_d_transform_delete(move1);
          m_d_transform_delete(rect);
        }
      }
      m_d_colour_delete(colour);
    }
    m_image_delete(image);
  }
  return 0;
}

I used ImageMagick to convert each PNG to GIF, then gifsicle to combine into animations.

The Mandelbrot set is asymptotically self-similar about pre-periodic Misiurewicz points. The derivative of the cycle (with respect to \(z\)) can be used as a multiplier for seamlessly looping zoom animations. Here are some examples:

const dvec2 c0 = dvec2(-0.22815549365396179LF, 1.1151425080399373LF);
const int pre = 3;
const int per = 1;

const dvec2 c0 = dvec2(-0.10109636384562216LF, 0.9562865108091415LF);
const int pre = 4;
const int per = 3;

(The above example's Misiurewicz point has period 1, but using 3 here avoids rapid spinning.)

const dvec2 c0 = dvec2(-0.77568377LF, 0.13646737LF);
const int pre = 24;
const int per = 2;

Here is the rest of the code that made the images. It's for Fragmentarium with my (as yet unreleased, but coming soon) `Complex.frag` enhancements for dual numbers and double precision:

#version 400 core
#include "Complex.frag"
#include "Progressive2D.frag"

uniform float time;

// insert snippets from above in here to choose image

const double r0 = 0.00001LF;

vec3 color(vec2 p) {
  // calculate multiplier for zoom
  dvec4 z = cVar(0.0LF);
  dvec4 c = cConst(c0);
  for (int i = 0; i < pre; ++i) z = cSqr(z) + c;
  z = cVar(cVar(z));
  for (int i = 0; i < per; ++i) z = cSqr(z) + c;
  dvec2 m = r0 * dvec2(cPow(vec2(cInverse(cDeriv(z))), mod(time, float(per)) / float(per)));
  const int maxiters = 1000;
  const double er2 = 1000.0LF;
  c = cVar(c0 + cMul(m, p));
  z = cConst(0.0LF);
  double pixelsize = cAbs(m) * double(length(vec4(dFdx(p), dFdy(p))));
  int i;
  for (i = 0; i < maxiters; ++i) {
    z = cSqr(z) + c;
    if (cNorm(z) > er2) { break; }
  }
  if (i == maxiters) {
    return vec3(1.0, 0.7, 0.0);
  } else {
    double de = 2.0 * cAbs(z) * double(log(float(cAbs(z)))) / cAbs(cDeriv(z));
    float grey = tanh(clamp(float(de / pixelsize), 0.0, 8.0));
    return vec3(grey);
  }
}

"Algorithm 9" in Wolf Jung's program Mandel locates zeroes of the iterated polynomial of a given period, at points where 4 colours meet. But I wanted to find the zeroes for lots of periods all at once. Previously I did this in a way that didn't scale efficiently to deep zooms, so I adapted the "algorithm 9" technique. Extending this code to use perturbation techniques for deep zooms is not implemented yet, but it should be perfectly possible.

The first thing to do is to initialize the array of \(c\) values. Here I use my mandelbrot-graphics library, because the support code (not shown here) uses it for imaging:

void initialize_cs(int m, int n, m_d_transform *t, double _Complex *cs) {
  #pragma omp parallel for
  for (int j = 0; j < n; ++j) {
    for (int i = 0; i < m; ++i) {
      double _Complex c = i + I * j;
      double _Complex dc = 1;
      m_d_transform_forward(t, &c, &dc);
      int k = i + j * m;
      cs[k] = c;
    }
  }
}

Then in the iteration step, calculate a flag for which quadrant the \(z\) iterate is in. This is set as a bit mask, so ORing many masks together corresponds to set union:

void step_zs(int mn, char *qs, double _Complex *zs, const double _Complex *cs) {
  #pragma omp parallel for
  for (int i = 0; i < mn; ++i) {
    // load
    double _Complex c = cs[i];
    double _Complex z = zs[i];
    // step
    z = z * z + c;
    // compute quadrant
    char q = 1 << ((creal(z) > 0) | ((cimag(z) > 0) << 1));
    // store
    zs[i] = z;
    qs[i] = q;
  }
}

Now the meat of the algorithm: it scans across the data with a 3x3 window, to find where all 4 colours meet in one small square. If that happens, it checks that the 3x3 square has a local minimum at its center, which means that the point found is really near a zero (a proof of that assertion follows immediately from the minimum modulus principle).

int scan_for_zeroes(int m, int n, int ip, int *ops, double _Complex *ocs,
                    const char *qs, const double _Complex *zs,
                    const double _Complex *ics) {
  int o = 0;
  // loop over image interior, to avoid tests in inner 3x3 loop
  #pragma omp parallel for
  for (int j = 1; j < n - 1; ++j) {
    for (int i = 1; i < m - 1; ++i) {
      // find where 4 quadrants meet in 3x3 region
      char q = 0;
      for (int dj = -1; dj <= 1; ++dj) {
        int jdj = j + dj;
        for (int di = -1; di <= 1; ++di) {
          int idi = i + di;
          int kdk = idi + jdj * m;
          q |= qs[kdk];
        }
      }
      if (q == 0xF) {
        // 4 quadrants meet, check for local minimum at center
        double minmz = 1.0/0.0;
        for (int dj = -1; dj <= 1; ++dj) {
          int jdj = j + dj;
          for (int di = -1; di <= 1; ++di) {
            int idi = i + di;
            int kdk = idi + jdj * m;
            double mz = cabs(zs[kdk]);
            minmz = mz < minmz ? mz : minmz;
          }
        }
        int k = i + j * m;
        double mz = cabs(zs[k]);
        if (mz <= minmz && minmz < 1.0/0.0) {
          // we found a probable zero, output it
          double _Complex ic = ics[k];
          int out;
          #pragma omp atomic capture
          out = o++;
          ops[out] = ip;
          ocs[out] = ic;
        }
      }
    }
  }
  return o;
}

To be safe, the output arrays should be sized at least the desired number of elements plus the number of pixels in the image (which is the maximum number that can be output in one pass). Most of the extra space will be unused by the time the stopping condition (enough space left) is reached.

An earlier version was several times slower, partly due to caching `cabs()` calls for every pixel, even though only very few pixels are near a zero at any given iteration.

It is known that in the Mandelbrot set, the smallest hyperbolic component of a given period lies in the far west of the antenna. Each atom is 16 times smaller and 4 times nearer the tip, which also means each successive atom domain is 4 times smaller (they all meet at the tip). This gives atom size \(O(16^{-p})\) and domain size \(O(4^{-p})\) where \(p\) is the period. I verified this relationship with some code:

#include <stdio.h>
#include <mandelbrot-numerics.h>

int main() {
  for (int period = 1; period < 64; period += 1) {
    printf("# period %d\n", period);
    fflush(stdout);
    mpfr_prec_t prec = 16 * period + 8;
    mpc_t guess, nucleus, size;
    mpc_init2(guess, prec);
    mpc_init2(nucleus, prec);
    mpc_init2(size, prec);
    mpc_set_d(guess, -2, MPC_RNDNN);
    while (prec > 2) {
      mpfr_prec_round(mpc_realref(guess), prec, MPFR_RNDN);
      mpfr_prec_round(mpc_imagref(guess), prec, MPFR_RNDN);
      m_r_nucleus(nucleus, guess, period, 64);
      printf("%ld ", prec);
      fflush(stdout);
      m_r_size(size, nucleus, period);
      mpc_norm(mpc_realref(size), size, MPFR_RNDN);
      mpfr_log2(mpc_realref(size), mpc_realref(size), MPFR_RNDN);
      mpfr_div_2ui(mpc_realref(size), mpc_realref(size), 1, MPFR_RNDN);
      mpfr_printf("%Re ", mpc_realref(size));
      fflush(stdout);
      m_r_domain_size(mpc_realref(size), nucleus, period);
      mpfr_prec_round(mpc_realref(size), 53, MPFR_RNDN);
      mpfr_log2(mpc_realref(size), mpc_realref(size), MPFR_RNDN);
      mpfr_printf("%Re\n", mpc_realref(size));
      fflush(stdout);
      prec--;
      mpfr_prec_round(mpc_realref(nucleus), prec, MPFR_RNDN);
      mpfr_prec_round(mpc_imagref(nucleus), prec, MPFR_RNDN);
    }
    printf("\n\n");
    fflush(stdout);
  }
  return 0;
}

Plotting the output shows it is so:

This makes a rough worst-case estimate of the precision required to accurately compute a size estimate (whether for atom or domain) from the nucleus of the atom: around \(16 p\) bits. But it turns out that (at least for this sequence of atoms heading to the tip of the antenna) a lot less precision is actually required. Here's a graph showing a seam where the size estimate breaks down when the precision gets too low, and doing some maths shows that the seam is at precision \(2 p\), a factor of \(8\) better than the first guess:

It remains to investigate other sequences of atoms, to see how they behave. Chances are that the \(2 p\) bits of precision are only necessary in rare cases like heading toward filament tips, and that aesthetically-chosen iterated Julia morphing atoms (for example) will be very much larger than would be expected from their period.

Above is what atom domain rendering typically looks like. But there is a wealth of information in the periods of islands in the filaments, which isn't visible unless you scan for periods and annotate the image with labels (as in my previous post). After some experimentation, I figured out a way to make their domains visible, and moreover to get a domain size estimate (useful for labelling).

The hack I came up with is in two parts: the first part is in the iteration calculations. Atom domain calculation typically works like this:

for (iteration = 1; iterating; ++iteration) {
  z = z * z + c;
  if (abs(z) < minimum) {
    minimum = abs(z);
    p = iteration;
    zp = z;
  }
  ...
}

The filtering hack adds another pair of \((q, z_q)\) to the existing \((p, z_p)\), only this time filtered by a function of iteration number:

  ...
  if (abs(z) < minimum2 && accept(iteration)) {
    minimum2 = abs(z);
    q = iteration;
    zq = z;
  }
  ...

For the image above the `accept()` filter function was:

bool accept(int p) { return p >= 129 && (p % 4) != 1; }

The second part of the hack is filtering when colouring, to allow the original regular atom domains to be visible too (without this part they get squashed by the new domains). Here's the colouring hack for the image above:

  ...
  int p = (computed p);
  double _Complex zp = (computed zp);
  if (reject(p)) {
    p = (computed q);
    zp = (computed zq);
  }
  // colour using p and zp
  ...

bool reject(int p) { return p < 129; }

The image below uses slightly different filters:

bool accept(int p) { return p > 129 && (p % 4) == 2; }
bool reject(int p) { return p <= 129 || (p % 4) == 1; }

I did manage to find some filters that showed domains in the filaments at this deeper zoom level, but I lost them while making other changes and I've been unable to recreate them - very frustrating.

These images show another minor development, colouring the domains according to the quadrants of \(z_p\), which meet at the nucleus. I was inspired to do this by Algorithm 9 of Wolf Jung's Mandel, which shows the zeros of a particular period (I wanted to show all periods at once). This forms the basis of an improved periodicity scan algorithm, which iterates an image one step at a time, scanning for meeting quadrants at a local minimum of \(z_p\) and recording their locations and periods to an output buffer. This new algorithm is much more scalable to deep zooms (it will work fine with perturbation techniques, though I haven't implemented that yet - the previous algorithm definitely needed lots of arbitrary precision calculations). It might even be possible to accelerate it on a GPU, as its parallelism is amenable.

The size estimate for the filtered domains is fairly similar to the atom domain size estimate I derived previously:

double filtered_domain_size(double _Complex nucleus, int period) {
  double _Complex z = 0;
  double _Complex dc = 0;
  double zq = 1.0/0.0;
  for (int q = 1; q <= period; ++q) {
    dc = 2 * z * dc + 1;
    z = z * z + nucleus;
    double zp = cabs(z);
    if (q < period && zp < zq && accept(q)) {
      zq = zp;
    }
  }
  return zq / cabs(dc);
}

This can return infinity if the filter is too restrictive, but domain size of the period 1 cardioid is infinite too, so it's not a big deal for the caller to check and deal with it as appropriate.

I'll clean up the code a bit and push to my mandelbrot-* repositories soon.

A Latin square of order \(n\) is a matrix of \(n^2\) values each in the range \(\{1, 2, \ldots, n\}\), such that each value occurs exactly once in each row and each column. The number of Latin squares goes up very quickly with \(n\): see A002860 in the Online Encyclopedia of Integer Sequences. A subset is that of reduced Latin squares, where the first row and the first column are both the sequence \((1 2 \ldots n)\) (counted by A000315). A third group is Latin squares with the first row fixed as \((1 2 \ldots n)\) and no condition on the first column: A000479.

While answering a question on math.SE, I noticed the OEIS has very few terms of another sequence related to Latin squares, namely the number of classes of "structurally equivalent" Latin squares, where equivalence is over rotations, reflections, and permuting the symbols. The computer programs I wrote to search for the answers to the question finished in a long but manageable amount of time, so I wrote a program to search for the next term of A264603:

// gcc -std=c99 -Wall -Wextra -pedantic -O3 -march=native A264603.c
// ./a.out order

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

// order
static int O = 0;
// generated square
static char *square = 0;
// buffer for normalization of symmetrical squares
static char *squares = 0;
// counter for progress
static long long total = 0;
// counter for uniques
static long long unique = 0;

// make first row be ABC... in-place
static inline void relabel(char *s) {
  char label[O];
  for (int i = 0; i < O; ++i)
    label[s[i] - 'A'] = 'A' + i;
  for (int i = 0; i < O; ++i)
    for (int j = 0; j < O; ++j)
      s[O*i+j] = label[s[O*i+j] - 'A'];
}

// wrap strcmp with comparator type
static int compare(const void *a, const void *b) {
  return strcmp(a, b);
}

// find lexicographically least of all relabeled symmetries
// this acts as the canonical representative for the structure class
static inline void normalize() {
  // regular
  int k = 0;
  for (int i = 0; i < O; ++i)
    for (int j = 0; j < O; ++j)
      squares[k+O*i+j] = square[O*i+j];
  relabel(&squares[k]);
  // rotated 90
  k += O * O + 1;
  for (int i = 0; i < O; ++i)
    for (int j = 0; j < O; ++j)
      squares[k+O*(O-j-1)+i] = square[O*i+j];
  relabel(&squares[k]);
  // rotated 180
  k += O * O + 1;
  for (int i = 0; i < O; ++i)
    for (int j = 0; j < O; ++j)
      squares[k+O*(O-i-1)+(O-j-1)] = square[O*i+j];
  relabel(&squares[k]);
  // rotated 270
  k += O * O + 1;
  for (int i = 0; i < O; ++i)
    for (int j = 0; j < O; ++j)
      squares[k+O*j+(O-i-1)] = square[O*i+j];
  relabel(&squares[k]);
  // reflect I
  k += O * O + 1;
  for (int i = 0; i < O; ++i)
    for (int j = 0; j < O; ++j)
      squares[k+O*(O-i-1)+j] = square[O*i+j];
  relabel(&squares[k]);
  // reflect J
  k += O * O + 1;
  for (int i = 0; i < O; ++i)
    for (int j = 0; j < O; ++j)
      squares[k+O*i+(O-j-1)] = square[O*i+j];
  relabel(&squares[k]);
  // reflect IJ
  k += O * O + 1;
  for (int i = 0; i < O; ++i)
    for (int j = 0; j < O; ++j)
      squares[k+O*j+i] = square[O*i+j];
  relabel(&squares[k]);
  // reflect JI
  k += O * O + 1;
  for (int i = 0; i < O; ++i)
    for (int j = 0; j < O; ++j)
      squares[k+O*(O-1-j)+(O-1-i)] = square[O*i+j];
  relabel(&squares[k]);
  // normalize
  qsort(squares, 8, O * O + 1, compare);
}

// return 1 if square is not Latin at index i,j
static inline int prune(int i, int j) {
  char symbol = square[O*i+j];
  for (int q = 0; q < j; ++q)
    if (symbol == square[O*i+q])
      return 1;
  for (int p = 0; p < i; ++p)
    if (symbol == square[O*p+j])
      return 1;
  return 0;
}

static inline void output(void) {
  // output normalized representation
  normalize();
  if (! compare(square, squares))
    unique++;
  // report progress
  total++;
  if ((total & 0xFFFF) == 0)
    fprintf(stderr, "\r%lld %lld ", total, unique);
}

// depth first search across space of Latin squares with pruning
static void generate(int i, int j) {
  if (j == O) { i += 1; j = 0; }
  if (i == O) { output(); return; }
  if (i == 0) {
    // first row is ABC... wlog
    square[O*i+j] = 'A' + j;
    generate(i, j + 1);
  } else {
    // try each possibility for next cell
    for (int k = 0; k < O; ++k) {
      square[O*i+j] = 'A' + k;
      if (prune(i, j)) continue;
      generate(i, j + 1);
    }
  }
}

// entry point
int main(int argc, char **argv) {
  if (argc > 1) O = atoi(argv[1]);
  if (! (0 < O)) {
    fprintf(stderr, "usage: %s order\n", argv[0]);
    return 1;
  }
  square = calloc(1, O * O + 1);
  squares = calloc(1, 8 * (O * O + 1));
  generate(0, 0);
  printf("\norder: %d\ntotal: %lld\nunique: %lld\n", O, total, unique);
  return 0;
}

For order 1 through 6 it matches up with the OEIS page, and for order 7 the output after around 16 hours of computation is:

\[1524901344\]

You can download the C source code: A264603.c

Another year has arrived! 2017 is prime, so it occurs a lot in the Online Encyclopedia of Integer Sequences. I visualized one sequence that isn't dependent on its primality: A134169. Haskell code below:

{-# LANGUAGE FlexibleContexts #-}
import Prelude hiding (null)
import Data.Bits (bit, finiteBitSize, testBit, (.&.))
import Data.Set (Set)
import qualified Data.Set as S
import Diagrams.Prelude hiding (intersection)
import Diagrams.Backend.PGF.CmdLine (B, defaultMain)

type Z = Int
type S = Int
type P = Set S

intersection :: S -> S -> S
intersection = (.&.)

isSubsetOf :: S -> S -> Bool
x `isSubsetOf` y = (x `intersection` y) == x

isProperSubsetOf :: S -> S -> Bool
x `isProperSubsetOf` y = (x `isSubsetOf` y) && x /= y

null :: S -> Bool
null x = x == 0

member :: Z -> S -> Bool
member i x = testBit x i

toList :: S -> [Z]
toList x = [ i | i <- [0 .. finiteBitSize x - 1], i `member` x ]

nset :: Z -> S
nset n = bit n - 1

npower :: Z -> P
npower n = S.fromList [0 .. bit n - 1]

data T = A | B | C | D

t :: S -> S -> Maybe T
t x y
  | y > x = Nothing
  | null (x `intersection` y) && not (x `isSubsetOf` y) && not (y `isSubsetOf` x) = Just A
  | not (null (x `intersection` y)) && not (x `isSubsetOf` y) && not (y `isSubsetOf` x) = Just B
  | not (null (x `intersection` y)) && ((x `isProperSubsetOf` y) || (y `isProperSubsetOf` x)) = Just C
  | x == y = Just D
  | otherwise = Nothing

label is x =
  [ square 2 # strokeP # lc black
      # fc (if i `member` x then black else white) # pad 2
  | i <- is ]

xlabel s x = vcat $ label (reverse $ toList s) x
ylabel s y = hcat $ label (toList s) y

withEnvelope' :: Diagram B -> Diagram B -> Diagram B
withEnvelope' = withEnvelope

cell :: Maybe T -> Diagram B
cell Nothing = withEnvelope' (square 2) mempty
cell (Just A) = circle 1 # strokeP # lc red
cell (Just B) = triangle 2 # centerXY # strokeP # lc green
cell (Just C) = square 2 # strokeP # lc magenta
cell (Just D) = (p2 (-1, -1) ~~ p2 (1, 1) `atop` p2 (1, -1) ~~ p2 (-1, 1)) # lc blue

diagram n = lwL 0.25 . vcat $
    (hcat $ (++ [withEnvelope' (ylabel s 0) mempty]) [ xlabel s x | x <- S.toList p ])
  : [ hcat $ (++ [ylabel s y]) [ cell (t x y) # pad 2 | x <- S.toList p ]
    | y <- S.toList p ]
  where
    p = npower n
    s = nset n

key a b c d = vcat
  [ cell (Just D) # pad 2 ||| d
  , cell (Just A) # pad 2 ||| a
  , cell (Just B) # pad 2 ||| b
  , cell (Just C) # pad 2 ||| c
  ] # scale 8

txt = alignedText 0 0.5

main1 :: Z -> IO ()
main1 n = defaultMain $
  let a = txt "$ x \\cap y = \\emptyset \\wedge x \\not\\subseteq y \\wedge x \\not\\supseteq y $"
      b = txt "$ x \\cap y \\neq \\emptyset \\wedge x \\not\\subseteq y \\wedge x \\not\\supseteq y $"
      c = txt "$ x \\cap y \\neq \\emptyset \\wedge \\left( x \\subset y \\vee x \\supset y \\right) $"
      d = txt "$ x = y $"
      m = 2^(n - 1) * (2^n - 1) + 1
      count = txt $ "$ " ++ show m ++ " $"
      oeis = alignedText 0 0 "\\phantom{.} OEIS / A134169"
  in  bg white . pad 1.1 . centerXY $
        alignBR (alignBL (diagram n # centerXY) `atop` alignBL (key a b c d # centerXY)
          === alignTL ((strutY 1.1 ||| count) # bold # scale 96))
        `atop` alignBR (rotate (90 @@ deg) (oeis # bold # scale 8))

main :: IO ()
main = main1 6

So, plans for the year ahead? I guess "continuity" sums it up - working on my projects, improving them, maybe finding cross-links between them. Playing around researching new things too. And performing and exhibiting and presenting when opportunities arise. Concretely, here are a few project ideas:

- cca
- Blog about coupled cellular automata experiments, explore potential links with RDEX and BitBreeder projects, possible sonification.
- clive
- Look into the possibilities of cross-compilation and upload to Bela or other similar low-latency audio platform.
- cmcms
- Make year-rollover automatic instead of manual.
- graphgrow
- Blog about the updated Haskell GUI (GTK, OpenGL) with its Pd sonification. Think about how to present it as an installation, perhaps with touch screen. Also think about live performance possibilities.
- hgmp
- Battle-test my low-level Haskell bindings to GMP by writing FFI wrappers for my Mandelbrot-related C code. Will also require writing similar low-level bindings to MPFR, but which Haskell MPFR library to choose?
- incidents
- Try to make videos for another track or two at least. Blog about recent updates to the visualisation of the inflatable torus physics demo.
- mandelbrot-book
- Continue work on the reboot. Blog about it.
- mandelbrot-*
- Add new algorithms as I uncover them, document the existing code (with references to papers where possible), more examples.
- mightymandel
- Figure out how to merge with mandelbrot-perturbator.
- monotone
- Continue hacking at the code trying to speed it up enough to run in realtime on my hardware. Figure out how to profile the bottlenecks.
- pool-party
- Document and announce this water simulation demo.
- tilda
- Find a project or two for the EMFCamp 2016 badge.
- unarchive
- Blog about the recent bug-fixes to my Internet Archive downloader.

The Mandelbrot set is full of hyperbolic components (the circle-like and cardioid-like regions), each of which has a nucleus at its center with a superstable periodic orbit. For example, the biggest cardioid has center \(0\) and period \(1\), while the circle to its left has center \(-1\) and period \(2\) (verify: \(\left(0^2 + (-1)\right)^2 + (-1) = 0\)).

Suppose we know the location of the nucleus (the \(c\) parameter) and we want to render a picture of the corresponding hyperbolic component. To do this we need to know its size. I tried to derive a size estimate myself, using a Taylor series for \(\frac{\partial}{\partial z}\) and the fact that this derivative tends to \(1\) at the boundary of the component and is \(0\) at the nucleus, but the truncation error smashed everything to pieces. So I fell back on plan B: trying to understand the existing size estimate I found on ibiblio.org.

The size estimate, using the notation on that page (go read it first), is \(\frac{1}{\beta \Lambda_n^2}\). I found the page a bit confusing for the first several readings, but reading the referenced paper and thinking hard while writing notes on paper helped me crack it. The size estimate forms a small section of the paper near the start; for reference:

Structure in the parameter dependence of order and chaos for the quadratic map

Brian R Hunt and Edward Ott

J. Phys. A: Math. Gen. 30 (1997) 7067–7076

Many dynamical systems are thought to exhibit windows of attracting periodic behaviour for arbitrarily small perturbations from parameter values yielding chaotic attractors. This structural instability of chaos is particularly well documented and understood for the case of the one-dimensional quadratic map. In this paper we attempt to numerically characterize the global parameter-space structure of the dense set of periodic "windows" occurring in the chaotic regime of the quadratic map. In particular, we use scaling techniques to extract information on the probability distribution of window parameter widths as a function of period and location of the window in parameter space. We also use this information to obtain the uncertainty exponent which is a quantity that globally characterizes the ability to identify chaos in the presence of small parameter uncertainties.

The basic idea is that under iteration of \(z\to z^2+c\), the small neighbourhood of the nucleus \(c\) bounces around the complex plane, being slightly distorted and stretched each time, except for one "central interval" at which the neighbourhood of \(z_{k p}\) contains the origin \(0\) and the next iteration folds the interval in half with quadratic scaling. Now the bouncing around the plane can be approximated as linear, with scaling given by the first derivative (with respect to \(z\)), and there is only one interval \(n = kp\) in which the full quadratic map needs to be preserved. We end up with something like this:

\[\begin{aligned} z_{n + p} = & c + \frac{\partial}{\partial z} z_{n+p-1} ( \\ & \vdots \\ & c + \frac{\partial}{\partial z} z_{n+3} ( \\ & c + \frac{\partial}{\partial z} z_{n+2} ( \\ & c + \frac{\partial}{\partial z} z_{n+1} ( \\ & c + z_n^2 ) ) ) \ldots ) \end{aligned}\]

Expanding out the brackets gives:

\[ z_{n + p} = \left(\prod_{k = 1}^{p - 1} \frac{\partial}{\partial z} z_{n + k}\right) z_n + \left(\sum_{m = 1}^p \prod_{k = m}^{p - 1} \frac{\partial}{\partial z} z_{n+k}\right) c \]

Writing:

\[\begin{aligned} \lambda_m &= \prod_{k = 1}^{m} \frac{\partial}{\partial z} z_{n + k} \\ \Lambda &= \lambda_{p - 1} \end{aligned}\]

the sum can have a factor of \(\Lambda\) drawn out to give:

\[ z_{n + p} = \Lambda \left( z_n^2 + \left( 1 + \lambda_1^{-1} + \lambda_2^{-1} + \ldots + \lambda_{p - 1}^{-1} \right) c \right) = \Lambda \left( z_n^2 + \beta c \right) \]

The final step is a change of variables where \(c_0\) is the nucleus:

\[\begin{aligned} Z &= \Lambda z \\ C &= \beta \Lambda^2 \left(c - c_0\right) \end{aligned}\]

Now there is self-similarity (aka renormalization):

\[Z_{n+1} = Z_n^2 + C\]

ie, one iteration of the new variable corresponds to \(p\) iterations of the original variable. (Exercise: verify the renormalization.) Moreover the definition of \(C\) gives the scale factor in the parameter plane, which gives the size estimate when we multiply by the size of the top level window (the paper referenced above uses \(\frac{9}{4}\) as the size, corresponding to the interval \(\left[-2,\frac{1}{4}\right]\) from cusp to antenna tip - using \(\frac{1}{2}\) makes circles' sizes approximately their radii).
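Working the exercise is a one-liner: substitute the change of variables into \(z_{n+p} = \Lambda \left( z_n^2 + \beta c \right)\), glossing over the \(c_0\) offset just as the linearized derivation above does (near the nucleus the periodic part of the orbit absorbs it):

\[Z_{n+p} = \Lambda z_{n+p} = \Lambda^2 \left( z_n^2 + \beta c \right) = \left( \Lambda z_n \right)^2 + \beta \Lambda^2 c = Z_n^2 + C\]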

Finally some C99 code to show how easy this size estimate is to compute in practice (see also my mandelbrot-numerics library):

double _Complex m_size(double _Complex nucleus, int period) {
  double _Complex l = 1;
  double _Complex b = 1;
  double _Complex z = 0;
  for (int i = 1; i < period; ++i) {
    z = z * z + nucleus;
    l = 2 * z * l;
    b = b + 1 / l;
  }
  return 1 / (b * l * l);
}

As a bonus, using complex values gives an orientation estimate in addition to the size estimate - just use \(\arg\) and \(\left|.\right|\) on the result.

Interlocking space filling curves - in the limit the perimeter between adjacent blocks becomes infinite, like a perfect lung for transferring oxygen from air to blood and carbon dioxide in the other direction. Made with the Haskell Diagrams library:

{-# LANGUAGE TypeFamilies, FlexibleContexts #-}
import Diagrams.Prelude hiding (E)
import Diagrams.Backend.SVG.CmdLine (defaultMain)
import Control.Monad.State (state, evalState)
import Data.Colour.SRGB (sRGB24)

data H = N | E | S | W
data T = L | R

step L = [L,R,L]
step R = [L,R,R,R,L]

seed = [R,R,R,R]
levels = iterate (concatMap step) seed

heading N L = W
heading E L = N
heading S L = E
heading W L = S
heading N R = E
heading E R = S
heading S R = W
heading W R = N

angle N = 90 @@ deg
angle E = 0 @@ deg
angle S = -90 @@ deg
angle W = 180 @@ deg

draw d h = (arcD d a a', h')
  where
    a = angleDir $ angle h
    a' = case d of
      L -> 90 @@ deg
      R -> -90 @@ deg
    h' = heading h d

arcD L a b = arc a b
arcD R a b = scale (-1) $ arcCW a (rotate b a)

draws = mconcat . (`evalState` E) . mapM (state . draw)

base = draws (levels !! level) # closeTrail # pathFromTrail
     # translate (r2 (-1, 0)) # stroke

left = base
middle = base # scale (sqrt 0.5) # rotate (45 @@ deg) # translateX (2 ^ (level + 1))
right = base # scale 0.5 # translateX (2 ^ level * 3) # translateY (2 ^ level)
four = base # scale (sqrt 0.125) # rotate (45 @@ deg)
     # translateX (3 * 2 ^ level) # translateY (2 ^ (level + 1))

lung = (left # fc r <> middle # fc y <> right # fc g <> four # fc b)
     # rotate ((atan 5 :: Double) @@ rad) # centerXY # pad 1.01
     # lc black # lw 0.1 # bg white # rotate (-90 @@ deg)

y = sRGB24 0xff 0xdf 0x80
r = sRGB24 0xff 0x88 0xaa
b = sRGB24 0xaa 0xaa 0xff
g = sRGB24 0x88 0xff 0xaa

level = 4

main = defaultMain lung

A000129 Pell numbers

(0, 1, 2, 5, 12, 29, 70, 169, 408, 985, ...)

Number of lattice paths from (0,0) to the line x=n-1 consisting of U=(1,1), D=(1,-1) and H=(2,0) steps; for example, a(3)=5, counting the paths H, UD, UU, DU and DD. -- Emeric Deutsch

{-# LANGUAGE FlexibleContexts #-}
import Diagrams.Prelude
import Diagrams.Backend.SVG.CmdLine (B, defaultMain)
import Control.Monad (replicateM)
import Data.List (sort, transpose)
import Data.List.Split (chunksOf)

u, d, h, z :: (Int, Int)
u = (1, 1)
d = (1, -1)
h = (2, 0)
z = (0, 0)

add (a, b) (c, d) = (a + c, b + d)

v :: (Int, Int) -> V2 Double
v (a, b) = V2 (fromIntegral a) (fromIntegral b)

vs = map v . scanl add z

l = fst . foldr add z

paths n = [ q | m <- [0..n], q <- replicateM m [u,d,h], l q == n ]

draw n q = frame 0.5 . (`atop` centerXY (strutY (fromIntegral n))) . centerXY $
  mconcat [ circle 0.25 # fc white # translate pq # lw thin | pq <- vs q ]
  `atop` strokeT (trailFromOffsets (map v q))

grid = vcat . map hcat

diagram n m = bg white . centerXY . grid . transpose . chunksOf m
            . map (draw n) . sort $ paths n

main = defaultMain (diagram 5 10)

A000332 Binomial coefficient (n,4)

(0, 0, 0, 0, 1, 5, 15, 35, 70, 126, 210, 330, 495, 715, ...)

Number of equilateral triangles with vertices in an equilateral triangular array of points with n rows (offset 1), with any orientation. -- Ignacio Larrosa Cañestro

{-# LANGUAGE FlexibleContexts #-}
import Diagrams.Prelude
import Diagrams.Backend.SVG.CmdLine (B, defaultMain)
import Data.List (sort, sortOn, nub, transpose)
import Data.List.Split (chunksOf)

third :: (Int, Int) -> (Int, Int) -> (Int, Int)
third (p, q) (p', q') =
  let (s, t) = (p' - p, q' - q)
  in  (p - t, q + s + t)

inTriangle :: Int -> (Int, Int) -> Bool
inTriangle n (p, q) = 0 <= p && 0 <= q && p + q < n

sizeSquared :: [(Int, Int)] -> Int
sizeSquared [(p, q), (p', q'), _] =
  let (s, t) = (p' - p, q' - q)
  in  s * s + s * t + t * t

triangles :: Int -> [[(Int, Int)]]
triangles n = sortOn sizeSquared $ nub
  [ sort [(a, b), (c, d), (e, f)]
  | a <- [0..n]
  , b <- [0..n]
  , inTriangle n (a, b)
  , c <- [0..n]
  , d <- [0..n]
  , inTriangle n (c, d)
  , (a, b) /= (c, d)
  , (e, f) <- [ third (a, b) (c, d), third (c, d) (a, b) ]
  , inTriangle n (e, f)
  ]

t2 :: (Int, Int) -> V2 Double
t2 (p, q) = V2 (fromIntegral p + fromIntegral q / 2) (sqrt 3 * fromIntegral q / 2)

t2' = P . t2

draw n t@[ab,cd,ef] = frame 0.75 . scale 1.25 . rotate (15 @@ deg) $
  mconcat
    [ circle 0.25 # fc (if (p, q) `elem` t then grey else white)
        # translate (t2 (p, q)) # lw thin
    | p <- [0..n]
    , q <- [0..n]
    , inTriangle n (p, q)
    ]
  `atop` mconcat
    [ t2' ab ~~ t2' cd
    , t2' cd ~~ t2' ef
    , t2' ef ~~ t2' ab
    ]

grid = vcat . map hcat

diagram n m = bg white . grid . chunksOf m . map (draw n) $ triangles n

main = defaultMain (diagram 6 7)

A000984 Central binomial coefficient (2n,n)

(1, 2, 6, 20, 70, 252, 924, ...)

The number of direct routes from my home to Granny's when Granny lives n blocks south and n blocks east of my home in Grid City. For example, a(2)=6 because there are 6 direct routes: SSEE, SESE, SEES, EESS, ESES and ESSE. -- Dennis P. Walsh

{-# LANGUAGE FlexibleContexts #-}
import Diagrams.Prelude
import Diagrams.Backend.SVG.CmdLine (B, defaultMain)
import Control.Monad (replicateM)
import Data.List.Split (chunksOf)

-- the two kinds of step, and the origin
u, d, z :: (Int, Int)
u = (1, 0)
d = (0, 1)
z = (0, 0)

add (a, b) (c, d) = (a + c, b + d)

v :: (Int, Int) -> V2 Double
v (a, b) = V2 (fromIntegral a) (fromIntegral b)

-- vertices visited along a path
vs = map v . scanl add z

-- endpoint of a path
l = foldr add z

-- all direct routes of 2n steps ending n blocks in each direction
paths n = [ q | q <- replicateM (2 * n) [u,d], l q == (n, n) ]

draw n q
  = frame 0.5 . (`atop` centerXY (strutY (fromIntegral n))) . centerXY
  $ mconcat
      [ circle 0.25 # fc white # translate pq # lw thin | pq <- vs q ]
    `atop` strokeT (trailFromOffsets (map v q))

grid = vcat . map hcat

diagram n m = bg white . centerXY . grid . chunksOf m . map (draw n) $ paths n

main = defaultMain (diagram 4 7)

A001405 Binomial (n,floor(n/2))

(1, 1, 2, 3, 6, 10, 20, 35, 70, 126, 252, 462, 924, ...)

Number of distinct strings of length n, each of which is a prefix of a string of balanced parentheses. For n = 4, the a(4) = 6 distinct strings of length 4 are ((((, (((), (()(, ()((, ()(), and (()). -- Lee A. Newberg

{-# LANGUAGE FlexibleContexts #-}
import Diagrams.Prelude
import Diagrams.Backend.SVG.CmdLine (B, defaultMain)
import Control.Monad (replicateM)
import Data.List (sort, transpose)
import Data.List.Split (chunksOf)

-- open and close parentheses as up and down steps, and the origin
u, d, z :: (Int, Int)
u = (1, 1)
d = (1, -1)
z = (0, 0)

add (a, b) (c, d) = (a + c, b + d)

-- a prefix of balanced parentheses never dips below the axis
boundedBelow = not . any ((< 0) . snd) . scanl add z

paths n = [ q | q <- replicateM n [u,d], boundedBelow q ]

v :: (Int, Int) -> V2 Double
v (a, b) = V2 (fromIntegral a) (fromIntegral b)

-- vertices visited along a path
vs = map v . scanl add z

draw n q
  = frame 0.5 . (`atop` centerXY (strutY (fromIntegral n))) . centerXY
  $ mconcat
      [ circle 0.25 # fc white # translate pq # lw thin | pq <- vs q ]
    `atop` strokeT (trailFromOffsets (map v q))

grid = vcat . map hcat

diagram n m
  = bg white . centerXY . grid . transpose
  . chunksOf m . map (draw n) . sort $ paths n

main = defaultMain (diagram 8 10)

A002623 Generating function of 1/((1+x)(1-x)^4)

(1, 3, 7, 13, 22, 34, 50, 70, 95, 125, 161, 203, 252, 308, 372, 444, 525, 615, 715, 825, 946, ...)

Number of nondegenerate triangles that can be made from rods of length 1, 2, 3, 4, ..., n. -- Alfred Bruckstein

{-# LANGUAGE FlexibleContexts #-}
import Diagrams.Prelude
import Diagrams.Backend.SVG.CmdLine (B, defaultMain)
import Data.List (sort, sortOn, nub, transpose)
import Data.List.Split (chunksOf)

-- triangle inequality for sorted side lengths
nondegenerate :: [Int] -> Bool
nondegenerate [a,b,c] = a + b > c

-- corners of a triangle with the given side lengths
corners :: [Int] -> [V2 Double]
corners [a',b',c'] = [V2 0 0, V2 c 0, V2 x y]
  where
    a = fromIntegral a'
    b = fromIntegral b'
    c = fromIntegral c'
    x = (c^2 - a^2 + b^2) / (2 * c)
    y = sqrt $ b^2 - x^2

-- squared area by Heron's formula
sizeSquared :: [Int] -> Double
sizeSquared [a',b',c'] = s * (s - a) * (s - b) * (s - c)
  where
    a = fromIntegral a'
    b = fromIntegral b'
    c = fromIntegral c'
    s = (a + b + c) / 2

triangles :: Int -> [([Int], [V2 Double])]
triangles n = map (\t -> (t, corners t)) . sortOn sizeSquared $
  [ abc
  | a <- [1..n]
  , b <- [a..n]
  , c <- [b..n]
  , let abc = [a,b,c]
  , nondegenerate abc
  ]

-- an edge made of k rods, with a circle at each rod boundary
edge k a b = mconcat
  [ circle 0.25 # fc white # translate p # lw thin
  | p <- [ lerp t a b | i <- [0..k], let t = fromIntegral i / fromIntegral k ]
  ]
  `atop` (P a ~~ P b)

draw n ([a,b,c], t@[ab,cd,ef])
  = frame 0.5 . (`atop` centerXY (strut (fromIntegral n)))
  . centerXY . rotate (15 @@ deg)
  $ mconcat [ ] `atop` mconcat [ edge c ab cd, edge a cd ef, edge b ef ab ]

grid = vcat . map hcat

diagram n m = bg white . grid . chunksOf m . map (draw n) $ triangles n

main = defaultMain (diagram 8 7)

The Farey tree crops up in the Mandelbrot set; a nice introduction can be found in The Mandelbrot Set and The Farey Tree by Robert L. Devaney. The tree operates on rational numbers by Farey addition, and can be defined recursively starting from \(\left(\frac{0}{1},\frac{1}{1}\right)\) with an operation acting on neighbouring numbers:

\[\frac{a}{b} \oplus \frac{c}{d} = \frac{a + c}{b + d}\]

Section 6 of Devaney's paper begins

... Suppose \(0 < \frac{a}{b} < \frac{c}{d} < 1\) are the Farey parents of \(\frac{p}{q}\). ...

In practice it would be nice to be able to compute these Farey parents given \(\frac{p}{q}\). One approach is to perform a search through the tree, starting with bounds at \(0\) and \(1\): find the Farey sum of the bounds and adjust the bounds at each stage to keep the target fraction between them, stopping when the target is reached. Unfortunately this has terrible asymptotic complexity: for example, finding the Farey parents of \(\frac{1}{100}\) in this way would step through \(\frac{1}{2}, \frac{1}{3}, \frac{1}{4}, \ldots \frac{1}{98}\) before finding the parents \(\left(\frac{0}{1}, \frac{1}{99}\right)\).

Fortunately there is a better way suggested by Siddharth Prasad on math.stackexchange.com: using the extended Euclidean algorithm to solve for \(x\) and \(y\):

\[ q x + p y = \gcd(p, q) = 1 \]

Then one parent is \(-\frac{x}{y}\) and the other can be found by undoing the Farey addition. For example, to find the parents of \(\frac{3}{100}\):

 q |   r | s |   t
   | 100 | 1 |   0
   |   3 | 0 |   1
33 |   1 | 1 | -33
 3 |   0 |   |

The final \(s\) and \(t\) columns give \(100 \cdot 1 + 3 \cdot (-33) = 1\), so \(x = 1\) and \(y = -33\), which gives one parent \(-\frac{1}{-33} = \frac{1}{33}\); undoing the Farey addition gives the other parent \(\frac{3 - 1}{100 - 33} = \frac{2}{67}\).

The extended Euclidean algorithm has complexity \(O(\log q)\), a vast improvement over the \(O(q)\) of the naive search.

**EDIT** the results below are wrong: the 3 slow machines were compiled
with the wrong options, so they were unnecessarily slow. With "-march=native" (or for
the Pi "-mfloat-abi=hard -mfpu=neon") the 9th order polynomial is again conclusively
the best algorithm for all machines.

Last year I implemented a cosine approximation using a 9th order polynomial, and was pleased that in my tests it was both faster and more accurate than other methods using table lookup. However, more recently I re-ran the benchmarks on a number of less powerful computers, and it turned out that the table lookup implementations are faster there.

You can see the full results at /approximations/cosf.html. In short, the 9th order polynomial is best on Pentium 4, Pentium M, Core 2, and Athlon II, but interpolated table lookup is faster on Celeron T3100, Atom N455 (in 32-bit mode), and the ARM chip in the Raspberry Pi Model 3 B. Accuracy is the same across all machines, thanks to IEEE standards (despite being compiled with -ffast-math).
