It is known that in the Mandelbrot set, the smallest hyperbolic component of a given period is in the far west of the antenna. Each atom is 16 times smaller and 4 times nearer the tip, which also means each successive atom domain is 4 times smaller (they all meet at the tip). This gives atom size \(O(16^{-p})\) and domain size \(O(4^{-p})\), where \(p\) is the period. I verified this relationship with some code:

#include <stdio.h>
#include <mandelbrot-numerics.h>

int main() {
  for (int period = 1; period < 64; period += 1) {
    printf("# period %d\n", period);
    fflush(stdout);
    mpfr_prec_t prec = 16 * period + 8;
    mpc_t guess, nucleus, size;
    mpc_init2(guess, prec);
    mpc_init2(nucleus, prec);
    mpc_init2(size, prec);
    mpc_set_d(guess, -2, MPC_RNDNN);
    while (prec > 2) {
      mpfr_prec_round(mpc_realref(guess), prec, MPFR_RNDN);
      mpfr_prec_round(mpc_imagref(guess), prec, MPFR_RNDN);
      m_r_nucleus(nucleus, guess, period, 64);
      printf("%ld ", prec);
      fflush(stdout);
      m_r_size(size, nucleus, period);
      mpc_norm(mpc_realref(size), size, MPFR_RNDN);
      mpfr_log2(mpc_realref(size), mpc_realref(size), MPFR_RNDN);
      mpfr_div_2ui(mpc_realref(size), mpc_realref(size), 1, MPFR_RNDN);
      mpfr_printf("%Re ", mpc_realref(size));
      fflush(stdout);
      m_r_domain_size(mpc_realref(size), nucleus, period);
      mpfr_prec_round(mpc_realref(size), 53, MPFR_RNDN);
      mpfr_log2(mpc_realref(size), mpc_realref(size), MPFR_RNDN);
      mpfr_printf("%Re\n", mpc_realref(size));
      fflush(stdout);
      prec--;
      mpfr_prec_round(mpc_realref(nucleus), prec, MPFR_RNDN);
      mpfr_prec_round(mpc_imagref(nucleus), prec, MPFR_RNDN);
    }
    printf("\n\n");
    fflush(stdout);
  }
  return 0;
}

Plotting the output shows it is so:

This gives a rough worst-case estimate for the precision required to accurately compute a size estimate (whether for atom or domain) from the nucleus of the atom: around \(16 p\) bits. But it turns out that (at least for this sequence of atoms heading to the tip of the antenna) a lot less precision is actually required. Here's a graph showing a seam where the size estimate breaks down when the precision gets too low; doing some maths shows that the seam is at precision \(2 p\), a factor of \(8\) better than the first guess:

It remains to investigate other sequences of atoms, to see how they behave. Chances are the \(2 p\) bits of precision required estimate is only necessary in rare cases like heading toward filament tips, and that aesthetically-chosen iterated Julia morphing atoms (for example) will be very much larger than would be expected from their period.

Above is what atom domain rendering typically looks like. But there is a wealth of information in the periods of islands in the filaments, which isn't visible unless you scan for periods and annotate the image with labels (as in my previous post). After some experimentation, I figured out a way to make their domains visible and moreover get a domain size estimate (useful for labelling).

The hack I came up with is in two parts: the first part is in the iteration calculations. Atom domain calculation typically works like this:

for (iteration = 1; iterating; ++iteration) {
  z = z * z + c;
  if (abs(z) < minimum) {
    minimum = abs(z);
    p = iteration;
    zp = z;
  }
  ...
}

The filtering hack adds another pair of \((q, z_q)\) to the existing \((p, z_p)\), only this time filtered by a function of iteration number:

...
if (abs(z) < minimum2 && accept(iteration)) {
  minimum2 = abs(z);
  q = iteration;
  zq = z;
}
...

For the image above the `accept()` filter function was:

bool accept(int p) { return p >= 129 && (p % 4) != 1; }

The second part of the hack is filtering when colouring, to allow the original regular atom domains to be visible too (without this part they get squashed by the new domains). Here's the colouring hack for the image above:

...
int p = (computed p);
double _Complex zp = (computed zp);
if (reject(p)) {
  p = (computed q);
  zp = (computed zq);
}
// colour using p and zp
...

bool reject(int p) { return p < 129; }

The image below uses slightly different filters:

bool accept(int p) { return p > 129 && (p % 4) == 2; }
bool reject(int p) { return p <= 129 || (p % 4) == 1; }

I did manage to find some filters that showed domains in the filaments at this deeper zoom level, but I lost them while making other changes and I've been unable to recreate them - very frustrating.

These images show another minor development: colouring the domains according to the quadrants of \(z_p\), which meet at the nucleus. I was inspired to do this by Algorithm 9 of Wolf Jung's Mandel, which shows the zeros of a particular period (I wanted to show all periods at once). This forms the basis of an improved periodicity scan algorithm, which iterates an image one step at a time, scanning for meeting quadrants at a local minimum of \(|z_p|\) and recording their locations and periods to an output buffer. This new algorithm is much more scalable to deep zooms (it will work fine with perturbation techniques, though I haven't implemented that yet; the previous algorithm definitely needed lots of arbitrary precision calculations). It might even be possible to accelerate it on a GPU, as it is amenably parallel.

The size estimate for the filtered domains is fairly similar to the atom domain size estimate I derived previously:

double filtered_domain_size(double _Complex nucleus, int period) {
  double _Complex z = 0;
  double _Complex dc = 0;
  double zq = 1.0/0.0;
  for (int q = 1; q <= period; ++q) {
    dc = 2 * z * dc + 1;
    z = z * z + nucleus;
    double zp = cabs(z);
    if (q < period && zp < zq && accept(q)) {
      zq = zp;
    }
  }
  return zq / cabs(dc);
}

This can return infinity if the filter is too restrictive, but domain size of the period 1 cardioid is infinite too, so it's not a big deal for the caller to check and deal with it as appropriate.

I'll clean up the code a bit and push to my mandelbrot-* repositories soon.

A Latin square of order \(n\) is a matrix of \(n^2\) values each in the range \(\{1, 2, \ldots, n\}\), such that each value occurs exactly once in each row and each column. The number of Latin squares goes up very quickly with \(n\): see A002860 in the Online Encyclopedia of Integer Sequences. A subset is that of reduced Latin squares, where the first row and the first column are the sequence \((1 2 \ldots n)\) (counted by A000315). And a third group is Latin squares with the first row fixed as \((1 2 \ldots n)\) and no condition on the first column: A000479.

While answering a question on math.SE, I noticed the OEIS has very few terms of another sequence related to Latin squares, namely the number of classes of "structurally equivalent" Latin squares, where equivalence is over rotations, reflections, and permuting the symbols. The computer programs I wrote to search for the answers to the question finished in a long but manageable amount of time, so I wrote a program to search for the next term of A264603:

// gcc -std=c99 -Wall -Wextra -pedantic -O3 -march=native A264603.c
// ./a.out order

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

// order
static int O = 0;
// generated square
static char *square = 0;
// buffer for normalization of symmetrical squares
static char *squares = 0;
// counter for progress
static long long total = 0;
// counter for uniques
static long long unique = 0;

// make first row be ABC... in-place
static inline void relabel(char *s) {
  char label[O];
  for (int i = 0; i < O; ++i) label[s[i] - 'A'] = 'A' + i;
  for (int i = 0; i < O; ++i)
    for (int j = 0; j < O; ++j)
      s[O*i+j] = label[s[O*i+j] - 'A'];
}

// wrap strcmp with comparator type
static int compare(const void *a, const void *b) { return strcmp(a, b); }

// find lexicographically least of all relabeled symmetries
// this acts as the canonical representative for the structure class
static inline void normalize() {
  // regular
  int k = 0;
  for (int i = 0; i < O; ++i)
    for (int j = 0; j < O; ++j)
      squares[k+O*i+j] = square[O*i+j];
  relabel(&squares[k]);
  // rotated 90
  k += O * O + 1;
  for (int i = 0; i < O; ++i)
    for (int j = 0; j < O; ++j)
      squares[k+O*(O-j-1)+i] = square[O*i+j];
  relabel(&squares[k]);
  // rotated 180
  k += O * O + 1;
  for (int i = 0; i < O; ++i)
    for (int j = 0; j < O; ++j)
      squares[k+O*(O-i-1)+(O-j-1)] = square[O*i+j];
  relabel(&squares[k]);
  // rotated 270
  k += O * O + 1;
  for (int i = 0; i < O; ++i)
    for (int j = 0; j < O; ++j)
      squares[k+O*j+(O-i-1)] = square[O*i+j];
  relabel(&squares[k]);
  // reflect I
  k += O * O + 1;
  for (int i = 0; i < O; ++i)
    for (int j = 0; j < O; ++j)
      squares[k+O*(O-i-1)+j] = square[O*i+j];
  relabel(&squares[k]);
  // reflect J
  k += O * O + 1;
  for (int i = 0; i < O; ++i)
    for (int j = 0; j < O; ++j)
      squares[k+O*i+(O-j-1)] = square[O*i+j];
  relabel(&squares[k]);
  // reflect IJ
  k += O * O + 1;
  for (int i = 0; i < O; ++i)
    for (int j = 0; j < O; ++j)
      squares[k+O*j+i] = square[O*i+j];
  relabel(&squares[k]);
  // reflect JI
  k += O * O + 1;
  for (int i = 0; i < O; ++i)
    for (int j = 0; j < O; ++j)
      squares[k+O*(O-1-j)+(O-1-i)] = square[O*i+j];
  relabel(&squares[k]);
  // normalize
  qsort(squares, 8, O * O + 1, compare);
}

// return 1 if square is not Latin at index i,j
static inline int prune(int i, int j) {
  char symbol = square[O*i+j];
  for (int q = 0; q < j; ++q) if (symbol == square[O*i+q]) return 1;
  for (int p = 0; p < i; ++p) if (symbol == square[O*p+j]) return 1;
  return 0;
}

static inline void output(void) {
  // output normalized representation
  normalize();
  if (! compare(square, squares)) unique++;
  // report progress
  total++;
  if ((total & 0xFFFF) == 0) fprintf(stderr, "\r%lld %lld ", total, unique);
}

// depth first search across space of Latin squares with pruning
static void generate(int i, int j) {
  if (j == O) { i += 1; j = 0; }
  if (i == O) { output(); return; }
  if (i == 0) {
    // first row is ABC... wlog
    square[O*i+j] = 'A' + j;
    generate(i, j + 1);
  } else {
    // try each possibility for next cell
    for (int k = 0; k < O; ++k) {
      square[O*i+j] = 'A' + k;
      if (prune(i, j)) continue;
      generate(i, j + 1);
    }
  }
}

// entry point
int main(int argc, char **argv) {
  if (argc > 1) O = atoi(argv[1]);
  if (! (0 < O)) { fprintf(stderr, "usage: %s order\n", argv[0]); return 1; }
  square = calloc(1, O * O + 1);
  squares = calloc(1, 8 * (O * O + 1));
  generate(0, 0);
  printf("\norder: %d\ntotal: %lld\nunique: %lld\n", O, total, unique);
  return 0;
}

For order 1 through 6 it matches up with the OEIS page, and for order 7 the output after around 16 hours of computation is:

\[1524901344\]

You can download the C source code: A264603.c

Another year has arrived! 2017 is prime, so it occurs a lot in the Online Encyclopedia of Integer Sequences. I visualized one that isn't dependent on its primality: A134169. Haskell code below:

{-# LANGUAGE FlexibleContexts #-}

import Prelude hiding (null)
import Data.Bits (bit, finiteBitSize, testBit, (.&.))
import Data.Set (Set)
import qualified Data.Set as S
import Diagrams.Prelude hiding (intersection)
import Diagrams.Backend.PGF.CmdLine (B, defaultMain)

type Z = Int
type S = Int
type P = Set S

intersection :: S -> S -> S
intersection = (.&.)

isSubsetOf :: S -> S -> Bool
x `isSubsetOf` y = (x `intersection` y) == x

isProperSubsetOf :: S -> S -> Bool
x `isProperSubsetOf` y = (x `isSubsetOf` y) && x /= y

null :: S -> Bool
null x = x == 0

member :: Z -> S -> Bool
member i x = testBit x i

toList :: S -> [Z]
toList x = [ i | i <- [0 .. finiteBitSize x - 1], i `member` x ]

nset :: Z -> S
nset n = bit n - 1

npower :: Z -> P
npower n = S.fromList [0 .. bit n - 1]

data T = A | B | C | D

t :: S -> S -> Maybe T
t x y
  | y > x = Nothing
  | null (x `intersection` y) && not (x `isSubsetOf` y) && not (y `isSubsetOf` x) = Just A
  | not (null (x `intersection` y)) && not (x `isSubsetOf` y) && not (y `isSubsetOf` x) = Just B
  | not (null (x `intersection` y)) && ((x `isProperSubsetOf` y) || (y `isProperSubsetOf` x)) = Just C
  | x == y = Just D
  | otherwise = Nothing

label is x = [ square 2 # strokeP # lc black # fc (if i `member` x then black else white) # pad 2 | i <- is ]
xlabel s x = vcat $ label (reverse $ toList s) x
ylabel s y = hcat $ label (toList s) y

withEnvelope' :: Diagram B -> Diagram B -> Diagram B
withEnvelope' = withEnvelope

cell :: Maybe T -> Diagram B
cell Nothing = withEnvelope' (square 2) mempty
cell (Just A) = circle 1 # strokeP # lc red
cell (Just B) = triangle 2 # centerXY # strokeP # lc green
cell (Just C) = square 2 # strokeP # lc magenta
cell (Just D) = (p2 (-1, -1) ~~ p2 (1, 1) `atop` p2 (1, -1) ~~ p2 (-1, 1)) # lc blue

diagram n = lwL 0.25 . vcat $
    ( hcat $ (++[withEnvelope' (ylabel s 0) mempty]) [ xlabel s x | x <- S.toList p ] )
  : [ hcat $ (++[ylabel s y]) [ cell (t x y) # pad 2 | x <- S.toList p ] | y <- S.toList p ]
  where
    p = npower n
    s = nset n

key a b c d = vcat
  [ cell (Just D) # pad 2 ||| d
  , cell (Just A) # pad 2 ||| a
  , cell (Just B) # pad 2 ||| b
  , cell (Just C) # pad 2 ||| c
  ] # scale 8

txt = alignedText 0 0.5

main1 :: Z -> IO ()
main1 n = defaultMain $
  let a = txt "$ x \\cap y = \\emptyset \\wedge x \\not\\subseteq y \\wedge x \\not\\supseteq y $"
      b = txt "$ x \\cap y \\neq \\emptyset \\wedge x \\not\\subseteq y \\wedge x \\not\\supseteq y $"
      c = txt "$ x \\cap y \\neq \\emptyset \\wedge \\left( x \\subset y \\vee x \\supset y \\right) $"
      d = txt "$ x = y $"
      m = 2^(n - 1) * (2^n - 1) + 1
      count = txt $ "$ " ++ show m ++ " $"
      oeis = alignedText 0 0 "\\phantom{.} OEIS / A134169"
  in  bg white . pad 1.1 . centerXY $
      alignBR (alignBL (diagram n # centerXY) `atop` alignBL (key a b c d # centerXY)
               === alignTL ((strutY 1.1 ||| count) # bold # scale 96))
      `atop` alignBR (rotate (90 @@ deg) (oeis # bold # scale 8))

main :: IO ()
main = main1 6

So, plans for the year ahead? I guess "continuity" sums it up - working on my projects, improving them, maybe finding cross-links between them. Playing around researching new things too. And performing and exhibiting and presenting when opportunities arise. Concretely, here are a few project ideas:

- cca: Blog about coupled cellular automata experiments, explore potential links with RDEX and BitBreeder projects, possible sonification.
- clive: Look into the possibilities of cross-compilation and upload to Bela or other similar low-latency audio platform.
- cmcms: Make year-rollover automatic instead of manual.
- graphgrow: Blog about the updated Haskell GUI (GTK, OpenGL) with its Pd sonification. Think about how to present it as an installation, perhaps with touch screen. Also think about live performance possibilities.
- hgmp: Battle-test my low-level Haskell bindings to GMP by writing FFI wrappers for my Mandelbrot-related C code. Will also require writing similar low-level bindings to MPFR, but which Haskell MPFR library to choose?
- incidents: Try to make videos for another track or two at least. Blog about recent updates to the visualisation of the inflatable torus physics demo.
- mandelbrot-book: Continue work on the reboot. Blog about it.
- mandelbrot-*: Add new algorithms as I uncover them, document the existing code (with references to papers where possible), more examples.
- mightymandel: Figure out how to merge with mandelbrot-perturbator.
- monotone: Continue hacking at the code trying to speed it up enough to run in realtime on my hardware. Figure out how to profile the bottlenecks.
- pool-party: Document and announce this water simulation demo.
- tilda: Find a project or two for the EMFCamp 2016 badge.
- unarchive: Blog about the recent bug-fixes to my Internet Archive downloader.

The Mandelbrot set is full of hyperbolic components (the circle-like and cardioid-like regions), each of which has a nucleus at its center, which has a superstable periodic orbit. For example the biggest cardioid has center 0 and period 1, while the circle to the left has center -1 and period 2 (verify by: \(\left(0^2 + (-1)\right)^2 + (-1) = 0\)).

Suppose we know the location of the nucleus (the \(c\) parameter) and we want to render a picture of the corresponding hyperbolic component. To do this we need to know its size. I tried to derive a size estimate myself, using Taylor series for \(\frac{\partial}{\partial z}\) and the fact that this derivative tends to \(1\) at the boundary of the component and is \(0\) at the nucleus, but the truncation error smashed everything to pieces. So I fell back on plan B: trying to understand the existing size estimate I found on ibiblio.org.

The size estimate, using the notation on that page (go read it first), is \(\frac{1}{\beta \Lambda_n^2}\). I found the page confusing for the first several readings, but reading the referenced paper and thinking hard while writing notes on paper helped me crack it. The size estimate forms a small section near the start of the paper; for reference:

Structure in the parameter dependence of order and chaos for the quadratic map

Brian R Hunt and Edward Ott

J. Phys. A: Math. Gen. 30 (1997) 7067–7076

Many dynamical systems are thought to exhibit windows of attracting periodic behaviour for arbitrarily small perturbations from parameter values yielding chaotic attractors. This structural instability of chaos is particularly well documented and understood for the case of the one-dimensional quadratic map. In this paper we attempt to numerically characterize the global parameter-space structure of the dense set of periodic "windows" occurring in the chaotic regime of the quadratic map. In particular, we use scaling techniques to extract information on the probability distribution of window parameter widths as a function of period and location of the window in parameter space. We also use this information to obtain the uncertainty exponent which is a quantity that globally characterizes the ability to identify chaos in the presence of small parameter uncertainties.

The basic idea is that under iteration of \(z\to z^2+c\), the small neighbourhood of the nucleus \(c\) bounces around the complex plane, being slightly distorted and stretched each time, except for one "central interval" at which the neighbourhood of \(z_{k p}\) contains the origin \(0\) and the next iteration folds the interval in half with quadratic scaling. Now the bouncing around the plane can be approximated as linear, with scaling given by the first derivative (with respect to \(z\)), and there is only one interval \(n = kp\) in which the full quadratic map needs to be preserved. We end up with something like this:

\[\begin{aligned} z_{n + p} = & c + \frac{\partial}{\partial z} z_{n+p-1} ( \\ & \vdots \\ & c + \frac{\partial}{\partial z} z_{n+3} ( \\ & c + \frac{\partial}{\partial z} z_{n+2} ( \\ & c + \frac{\partial}{\partial z} z_{n+1} ( \\ & c + z_n^2 ) ) ) \ldots ) \end{aligned}\]

Expanding out the brackets gives:

\[ z_{n + p} = \left(\prod_{k = 1}^{p - 1} \frac{\partial}{\partial z} z_{n + k}\right) z_n^2 + \left(\sum_{m = 1}^p \prod_{k = m}^{p - 1} \frac{\partial}{\partial z} z_{n+k}\right) c \]

Writing:

\[\begin{aligned} \lambda_m &= \prod_{k = 1}^{m} \frac{\partial}{\partial z} z_{n + k} \\ \Lambda &= \lambda_{p - 1} \end{aligned}\]

the sum can have a factor of \(\Lambda\) drawn out to give:

\[ z_{n + p} = \Lambda \left( z_n^2 + \left( 1 + \lambda_1^{-1} + \lambda_2^{-1} + \ldots + \lambda_{p - 1}^{-1} \right) c \right) = \Lambda \left( z_n^2 + \beta c \right) \]

The final step is a change of variables where \(c_0\) is the nucleus:

\[\begin{aligned} Z &= \Lambda z \\ C &= \beta \Lambda^2 \left(c - c_0\right) \end{aligned}\]

Now there is self-similarity (aka renormalization):

\[Z_{n+1} = Z_n^2 + C\]

ie, one iteration of the new variable corresponds to \(p\) iterations of the original variable. (Exercise: verify the renormalization.) Moreover the definition of \(C\) gives the scale factor in the parameter plane, which gives the size estimate when we multiply by the size of the top level window (the paper referenced above uses \(\frac{9}{4}\) as the size, corresponding to the interval \(\left[-2,\frac{1}{4}\right]\) from cusp to antenna tip - using \(\frac{1}{2}\) makes circles' sizes approximately their radii).

Finally some C99 code to show how easy this size estimate is to compute in practice (see also my mandelbrot-numerics library):

double _Complex m_size(double _Complex nucleus, int period) {
  double _Complex l = 1;
  double _Complex b = 1;
  double _Complex z = 0;
  for (int i = 1; i < period; ++i) {
    z = z * z + nucleus;
    l = 2 * z * l;
    b = b + 1 / l;
  }
  return 1 / (b * l * l);
}

As a bonus, using complex values gives an orientation estimate in addition to the size estimate - just use \(\arg\) and \(\left|.\right|\) on the result.

Interlocking space filling curves - in the limit the perimeter between adjacent blocks becomes infinite, like a perfect lung for transferring oxygen from air to blood and carbon dioxide in the other direction. Made with the Haskell Diagrams library:

{-# LANGUAGE TypeFamilies, FlexibleContexts #-}

import Diagrams.Prelude hiding (E)
import Diagrams.Backend.SVG.CmdLine (defaultMain)
import Control.Monad.State (state, evalState)
import Data.Colour.SRGB (sRGB24)

data H = N | E | S | W
data T = L | R

step L = [L,R,L]
step R = [L,R,R,R,L]

seed = [R,R,R,R]

levels = iterate (concatMap step) seed

heading N L = W
heading E L = N
heading S L = E
heading W L = S
heading N R = E
heading E R = S
heading S R = W
heading W R = N

angle N = 90 @@ deg
angle E = 0 @@ deg
angle S = -90 @@ deg
angle W = 180 @@ deg

draw d h = (arcD d a a', h')
  where
    a = angleDir $ angle h
    a' = case d of
      L -> 90 @@ deg
      R -> -90 @@ deg
    h' = heading h d

arcD L a b = arc a b
arcD R a b = scale (-1) $ arcCW a (rotate b a)

draws = mconcat . (`evalState` E) . mapM (state . draw)

base = draws (levels !! level) # closeTrail # pathFromTrail # translate (r2 (-1, 0)) # stroke

left = base
middle = base # scale (sqrt 0.5) # rotate (45 @@ deg) # translateX (2 ^ (level + 1))
right = base # scale 0.5 # translateX (2 ^ level * 3) # translateY (2 ^ level)
four = base # scale (sqrt 0.125) # rotate (45 @@ deg) # translateX (3 * 2 ^ level) # translateY (2 ^ (level + 1))

lung = (left # fc r <> middle # fc y <> right # fc g <> four # fc b)
  # rotate ((atan 5 :: Double) @@ rad) # centerXY # pad 1.01
  # lc black # lw 0.1 # bg white # rotate (-90 @@ deg)

y = sRGB24 0xff 0xdf 0x80
r = sRGB24 0xff 0x88 0xaa
b = sRGB24 0xaa 0xaa 0xff
g = sRGB24 0x88 0xff 0xaa

level = 4

main = defaultMain lung

A000129 Pell numbers

(0, 1, 2, 5, 12, 29, 70, 169, 408, 985, ...)

Number of lattice paths from (0,0) to the line x=n-1 consisting of U=(1,1), D=(1,-1) and H=(2,0) steps; for example, a(3)=5, counting the paths H, UD, UU, DU and DD. -- Emeric Deutsch

{-# LANGUAGE FlexibleContexts #-}

import Diagrams.Prelude
import Diagrams.Backend.SVG.CmdLine (B, defaultMain)
import Control.Monad (replicateM)
import Data.List (sort, transpose)
import Data.List.Split (chunksOf)

u, d, h, z :: (Int, Int)
u = (1, 1)
d = (1, -1)
h = (2, 0)
z = (0, 0)

add (a, b) (c, d) = (a + c, b + d)

v :: (Int, Int) -> V2 Double
v (a, b) = V2 (fromIntegral a) (fromIntegral b)

vs = map v . scanl add z

l = fst . foldr add z

paths n = [ q | m <- [0..n] , q <- replicateM m [u,d,h] , l q == n ]

draw n q = frame 0.5 . (`atop` centerXY (strutY (fromIntegral n))) . centerXY $
  mconcat [ circle 0.25 # fc white # translate pq # lw thin | pq <- vs q ]
  `atop` strokeT (trailFromOffsets (map v q))

grid = vcat . map hcat

diagram n m = bg white . centerXY . grid . transpose . chunksOf m . map (draw n) . sort $ paths n

main = defaultMain (diagram 5 10)

A000332 Binomial coefficient (n,4)

(0, 0, 0, 0, 1, 5, 15, 35, 70, 126, 210, 330, 495, 715, ...)

Number of equilateral triangles with vertices in an equilateral triangular array of points with n rows (offset 1), with any orientation. -- Ignacio Larrosa Cañestro

{-# LANGUAGE FlexibleContexts #-}

import Diagrams.Prelude
import Diagrams.Backend.SVG.CmdLine (B, defaultMain)
import Data.List (sort, sortOn, nub, transpose)
import Data.List.Split (chunksOf)

third :: (Int, Int) -> (Int, Int) -> (Int, Int)
third (p, q) (p', q') = let (s, t) = (p' - p, q' - q) in (p - t, q + s + t)

inTriangle :: Int -> (Int, Int) -> Bool
inTriangle n (p, q) = 0 <= p && 0 <= q && p + q < n

sizeSquared :: [(Int, Int)] -> Int
sizeSquared [(p, q), (p', q'), _] = let (s, t) = (p' - p, q' - q) in s * s + s * t + t * t

triangles :: Int -> [[(Int, Int)]]
triangles n = sortOn sizeSquared $ nub
  [ sort [(a, b), (c, d), (e, f)]
  | a <- [0..n]
  , b <- [0..n]
  , inTriangle n (a, b)
  , c <- [0..n]
  , d <- [0..n]
  , inTriangle n (c, d)
  , (a, b) /= (c, d)
  , (e, f) <- [ third (a, b) (c, d) , third (c, d) (a, b) ]
  , inTriangle n (e, f)
  ]

t2 :: (Int, Int) -> V2 Double
t2 (p, q) = V2 (fromIntegral p + fromIntegral q / 2) (sqrt 3 * fromIntegral q / 2)

t2' = P . t2

draw n t@[ab,cd,ef] = frame 0.75 . scale 1.25 . rotate (15 @@ deg) $
  mconcat
    [ circle 0.25 # fc (if (p, q) `elem` t then grey else white) # translate (t2 (p, q)) # lw thin
    | p <- [0..n]
    , q <- [0..n]
    , inTriangle n (p, q)
    ]
  `atop` mconcat [ t2' ab ~~ t2' cd , t2' cd ~~ t2' ef , t2' ef ~~ t2' ab ]

grid = vcat . map hcat

diagram n m = bg white . grid . chunksOf m . map (draw n) $ triangles n

main = defaultMain (diagram 6 7)

A000984 Central binomial coefficient (2n,n)

(1, 2, 6, 20, 70, 252, 924, ...)

The number of direct routes from my home to Granny's when Granny lives n blocks south and n blocks east of my home in Grid City. For example, a(2)=6 because there are 6 direct routes: SSEE, SESE, SEES, EESS, ESES and ESSE. -- Dennis P. Walsh

{-# LANGUAGE FlexibleContexts #-}

import Diagrams.Prelude
import Diagrams.Backend.SVG.CmdLine (B, defaultMain)
import Control.Monad (replicateM)
import Data.List.Split (chunksOf)

u, d, z :: (Int, Int)
u = (1, 0)
d = (0, 1)
z = (0, 0)

add (a, b) (c, d) = (a + c, b + d)

v :: (Int, Int) -> V2 Double
v (a, b) = V2 (fromIntegral a) (fromIntegral b)

vs = map v . scanl add z

l = foldr add z

paths n = [ q | q <- replicateM (2 * n) [u,d] , l q == (n, n) ]

draw n q = frame 0.5 . (`atop` centerXY (strutY (fromIntegral n))) . centerXY $
  mconcat [ circle 0.25 # fc white # translate pq # lw thin | pq <- vs q ]
  `atop` strokeT (trailFromOffsets (map v q))

grid = vcat . map hcat

diagram n m = bg white . centerXY . grid . chunksOf m . map (draw n) $ paths n

main = defaultMain (diagram 4 7)

A001405 Binomial (n,floor(n/2))

(1, 1, 2, 3, 6, 10, 20, 35, 70, 126, 252, 462, 924, ...)

Number of distinct strings of length n, each of which is a prefix of a string of balanced parentheses; for n = 4, the a(4) = 6 distinct strings of length 4 are ((((, (((), (()(, ()((, ()(), and (()). -- Lee A. Newberg

{-# LANGUAGE FlexibleContexts #-}

import Diagrams.Prelude
import Diagrams.Backend.SVG.CmdLine (B, defaultMain)
import Control.Monad (replicateM)
import Data.List (sort, transpose)
import Data.List.Split (chunksOf)

u, d, z :: (Int, Int)
u = (1, 1)
d = (1, -1)
z = (0, 0)

add (a, b) (c, d) = (a + c, b + d)

boundedBelow = not . any ((< 0) . snd) . scanl add z

paths n = [ q | q <- replicateM n [u,d] , boundedBelow q ]

v :: (Int, Int) -> V2 Double
v (a, b) = V2 (fromIntegral a) (fromIntegral b)

vs = map v . scanl add z

draw n q = frame 0.5 . (`atop` centerXY (strutY (fromIntegral n))) . centerXY $
  mconcat [ circle 0.25 # fc white # translate pq # lw thin | pq <- vs q ]
  `atop` strokeT (trailFromOffsets (map v q))

grid = vcat . map hcat

diagram n m = bg white . centerXY . grid . transpose . chunksOf m . map (draw n) . sort $ paths n

main = defaultMain (diagram 8 10)

A002623 Generating function of 1/((1+x)(1-x)^4)

(1, 3, 7, 13, 22, 34, 50, 70, 95, 125, 161, 203, 252, 308, 372, 444, 525, 615, 715, 825, 946, ...)

Number of nondegenerate triangles that can be made from rods of length 1,2,3,4,...,n. -- Alfred Bruckstein

{-# LANGUAGE FlexibleContexts #-}

import Diagrams.Prelude
import Diagrams.Backend.SVG.CmdLine (B, defaultMain)
import Data.List (sort, sortOn, nub, transpose)
import Data.List.Split (chunksOf)

nondegenerate :: [Int] -> Bool
nondegenerate [a,b,c] = a + b > c

corners :: [Int] -> [V2 Double]
corners [a',b',c'] = [V2 0 0, V2 c 0, V2 x y]
  where
    a = fromIntegral a'
    b = fromIntegral b'
    c = fromIntegral c'
    x = (c^2 - a^2 + b^2) / (2 * c)
    y = sqrt $ b^2 - x^2

sizeSquared :: [Int] -> Double
sizeSquared [a',b',c'] = s * (s - a) * (s - b) * (s - c)
  where
    a = fromIntegral a'
    b = fromIntegral b'
    c = fromIntegral c'
    s = (a + b + c) / 2

triangles :: Int -> [([Int], [V2 Double])]
triangles n = map (\t -> (t, corners t)) . sortOn sizeSquared $
  [ abc
  | a <- [1..n]
  , b <- [a..n]
  , c <- [b..n]
  , let abc = [a,b,c]
  , nondegenerate abc
  ]

edge k a b = mconcat
  [ circle 0.25 # fc white # translate p # lw thin
  | p <- [ lerp t a b | i <- [0..k] , let t = fromIntegral i / fromIntegral k ]
  ]
  `atop` (P a ~~ P b)

draw n ([a,b,c], t@[ab,cd,ef]) = frame 0.5 . (`atop` centerXY (strut (fromIntegral n))) . centerXY . rotate (15 @@ deg) $
  mconcat [ ] `atop` mconcat [ edge c ab cd , edge a cd ef , edge b ef ab ]

grid = vcat . map hcat

diagram n m = bg white . grid . chunksOf m . map (draw n) $ triangles n

main = defaultMain (diagram 8 7)

The Farey tree crops up in the Mandelbrot set; a nice introduction can be found in The Mandelbrot Set and The Farey Tree by Robert L. Devaney. The tree operates on rational numbers by Farey addition, and can be defined recursively starting from \(\left(\frac{0}{1},\frac{1}{1}\right)\) with an operation acting on neighbouring numbers:

\[\frac{a}{b} \oplus \frac{c}{d} = \frac{a + c}{b + d}\]

Section 6 of Devaney's paper begins

... Suppose \(0 < \frac{a}{b} < \frac{c}{d} < 1\) are the Farey parents of \(\frac{p}{q}\). ...

In practice it would be nice to be able to compute these Farey parents given \(\frac{p}{q}\). One approach is to perform a search through the tree, starting with bounds at 0 and 1, finding the Farey sum of the bounds and adjusting the bounds at each stage to keep the target fraction within them, stopping when it is reached. Unfortunately this has terrible asymptotic complexity, for example finding the Farey parents of \(\frac{1}{100}\) in this way would step through \(\frac{1}{2}, \frac{1}{3}, \frac{1}{4}, \ldots \frac{1}{98}\) before finding the parents \(\left(\frac{0}{1}, \frac{1}{99}\right)\).

Fortunately there is a better way suggested by Siddharth Prasad on math.stackexchange.com: using the extended Euclidean algorithm to solve for \(x\) and \(y\):

\[ q x + p y = \gcd(p, q) = 1 \]

Then one parent is \(-\frac{x}{y}\) and the other can be found by undoing the Farey addition. For example, to find the parents of \(\frac{3}{100}\):

 q  |  r  | s |  t
    | 100 | 1 |   0
    |   3 | 0 |   1
 33 |   1 | 1 | -33
  3 |   0 |   |

which gives one parent \(\frac{1}{33}\), and by Farey addition the other parent is \(\frac{2}{67}\).

The Euclidean algorithm has complexity \(O(\log q)\), which is a vast improvement over the naive search \(O(q)\).

**EDIT**: the below is wrong; the 3 slow machines were compiled with the wrong options, so they were unnecessarily slow. With "-march=native" (or, for the Pi, "-mfloat-abi=hard -mfpu=neon") the 9th order polynomial is again conclusively the best algorithm on all machines.

Last year I implemented a cosine approximation using a 9th order polynomial, and was pleased that in my tests it was both faster and more accurate than other methods using table lookup. However, more recently I re-ran the benchmarks on a number of less powerful computers, and it turned out that the table lookup implementations are faster there.

You can see the full results at /approximations/cosf.html. In short, the 9th order polynomial is best for Pentium 4, Pentium M, Core 2, Athlon II; but interpolated table lookup is faster for Celeron T3100, Atom N455 (in 32-bit mode), and the ARM chip in the Raspberry Pi Model 3 B. Accuracy seems to be the same across all machines, thanks to IEEE standards (despite being compiled with -ffast-math).

As part of Brud's *luser stories* I wrote up a lecture/slideshow about the fractal dimension of Julia sets. You can download the PDF: julia-dim.pdf (3.5MB), and there's a page with source code and detailed results tables here: fractal dimension of Julia sets.

The image resembles the familiar Mandelbrot set because any ball around a boundary point of the Mandelbrot set contains parameters whose Julia sets have dimension arbitrarily close to 2.

I was intrigued by something I spotted on Wikipedia about the plastic number:

> There are two ways of partitioning a square into three similar rectangles: the trivial solution given by three equal rectangles with aspect ratio 1:3, and another solution in which the three rectangles all have different sizes, but the same shape, with the square of the plastic number as their aspect ratio.

Wikipedia lacked a diagram so I made one (above), and here follows a proof that the aspect ratio is as claimed. The plastic number \(p\) is the unique real root of \(x^3=x+1\). The sides labeled in the diagram satisfy:

\[ \begin{aligned} b &= 1 - a \\ c &= a (1 - a) \\ d &= 1 - a (1 - a) \end{aligned} \]

The rectangles are all similar, meaning they have the same aspect ratio, so:

\[ \frac{a}{1} = \frac{b}{d} = \frac{1 - a}{1 - a (1 - a)} \]

Cross-multiplying the first and last expressions, \(a(1 - a(1 - a)) = 1 - a\), and collecting terms gives:

\[ a^3 - a^2 + 2a - 1 = 0 \]

The claim is that \(a = \frac{1}{p^2}\): since \(p > 1\) while clearly \(a < 1\), the aspect ratio convention is inverted in my diagram relative to Wikipedia's statement. Substituting this for \(a\) gives:

\[ p^{-6} - p^{-4} + 2p^{-2} - 1 = 0 \]

which multiplies out to:

\[ 1 - p^2 + 2p^4 - p^6 = 0 \]

Now, substituting \(p^3 = p + 1\) gives:

\[ 1 - p^2 + 2p(p+1) - (p+1)^2 = 0 \]

and multiplying this out gives \(0 = 0\) as all the terms cancel.

Aaron Klebanoff's 2001 paper \(\pi\) in the Mandelbrot Set got me thinking, in particular the problem in the conclusion:

> Another open problem is to determine the function of \(\epsilon\) that multiplies \(N(\epsilon)\). So far, we have \[a \epsilon^b N(\epsilon) \to \pi \] where we have seen \(a = 1, 2\) and \(b = 1, \frac{1}{2}\). In general, should we expect [this limit] to hold for some rational values \(a\) and \(b\)? If so, what does the pinch location in \(M\) tell us about \(a\) and \(b\)?

Here \(\epsilon\) is the distance from a point along a path heading to a cusp or pinch point between two hyperbolic components in the complement of the Mandelbrot set \(M\), and \(N(\epsilon)\) is the iteration count at that point. My conclusion after some scribbling is that \(b = 1\) for all pinch points, and \(a\) is not in general rational. My workings below aren't as formal as Klebanoff's, but hopefully the idea is sound.

As paths through the complement of \(M\) I take the external ray landing at the desired pinch point. The ray passes between lots of child bulbs on either side, which are increasing in period towards the pinch point. The increase is in increments of the period of the smaller (higher period) bulb at the pinch point. The rays landing at the pinch-point side of the roots of these child bulbs come together quickly heading out towards infinity, as they differ only in the last digits of the repeating part of the binary expansions of the corresponding rational external angles. In fact, where corresponding pairs meet in the same grid cell of the pictures below, the iteration count there is the lower of the two periods.

Next, consider the internal angles of the child bulbs of the smaller (larger period) bulb at the pinch point. These are in the sequence \(\frac{1}{p}\) where \(p\) is the period of the child bulb divided by the period of the parent bulb. Drawing triangles as \(p \to \infty\) shows that the distance to the bulb from the pinch point is approximately \(\epsilon \approx s \tan \frac{2\pi}{p}\) where \(s\) is the radius of the smaller pinch point bulb. Further, since \(\tan \theta \to \theta\) as \(\theta \to 0\), the distance reduces to \(\epsilon \approx s \frac{2\pi}{p}\).

Now, combining the previous two paragraphs, call the period of the smaller bulb at the pinch point \(P\). Then \(N(\epsilon)\) is approximately \(P p\). Substituting into the expression for \(\epsilon\), we get \(\epsilon \approx s \frac{2\pi P}{N(\epsilon)}\) which simplifies to:

\[ \pi \approx \frac{\epsilon N(\epsilon)}{2 P s}\]

So much for the theory, how does it work out in practice? Not very well, as it turns out. I plotted some numerical results for various external angles, and while they tend to flat lines as the iteration count increases which implies that \(\epsilon N(\epsilon) \to K\) for some constant \(K\), the scaling factors I calculated aren't quite there - the limits are some way off from being \(\pi\).

I suspect this is because the internal angles aren't exactly the same as geometric angles (there is a conformal transformation making up the difference), or something else is going wrong. In any case it was a fun experiment to try out: not all experiments give the desired result, which I guess is the point of experimentation in the first place.
