
Deep zoom theory and practice (again)

Since last year's article on deep zoom theory and practice, two new developments have been proposed by Zhuoran on fractalforums.org, in the thread "Another solution to perturbation glitches".

Rebasing

The first technique ("rebasing"), explained in the first post of the forum thread, means resetting the reference iteration to the start when the pixel orbit (i.e. \(Z+z\), the reference plus delta) gets near a critical point (like \(0+0i\) for the Mandelbrot set). If there is more than one critical point, you need reference orbits starting at each of them, and this test can switch to a different reference orbit: pick the orbit \(o\) that minimizes \(|(Z-Z_o)+z|\), comparing the current reference orbit at its current iteration against each critical point orbit at iteration \(0\). Rebasing means you only need as many reference orbits as there are critical points (which for simple formulas like the Mandelbrot set and Burning Ship means only one), and glitches are avoided outright rather than detected and corrected afterwards. This is a big boost to efficiency (which is nice) and correctness (which is much more important).
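For concreteness, here is a minimal sketch of a perturbed Mandelbrot set inner loop with rebasing, assuming a single critical point at \(0\) and a precomputed low-precision reference orbit. The names (Zref, dc, and so on) are illustrative rather than taken from any particular renderer, and the comparison \(|Z+z| < |z|\) is used as one concrete choice for the "near a critical point" test:

#include <complex>
#include <vector>

// Zref: reference orbit Z_0 = 0, Z_{n+1} = Z_n^2 + C, stored at low precision
// dc: delta of the pixel parameter c from the reference parameter C
int iterate(const std::vector<std::complex<double>> &Zref,
            const std::complex<double> dc,
            const int maxiters, const double escape_radius_2)
{
  std::complex<double> z = 0;  // delta of the pixel orbit from the reference
  std::size_t m = 0;           // current reference iteration
  for (int n = 0; n < maxiters; ++n)
  {
    z = 2.0 * Zref[m] * z + z * z + dc;  // perturbed iteration
    ++m;
    const std::complex<double> w = Zref[m] + z;  // full pixel orbit
    if (std::norm(w) > escape_radius_2)
      return n + 1;  // escaped; return the iteration count
    // rebase: if the pixel orbit is closer to the critical point 0 than it is
    // to the reference, or the reference orbit has run out, restart the
    // reference at iteration 0 and absorb the full orbit into the delta
    if (std::norm(w) < std::norm(z) || m + 1 >= Zref.size())
    {
      z = w;
      m = 0;
    }
  }
  return 0;  // did not escape
}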

Rebasing also works for hybrids, though you need more reference orbits, because the reference iteration can be reset at any phase of the hybrid loop. For example, if you have a hybrid loop of "(M,BS,M,M)", you need reference orbits for each of "(M,BS,M,M)", "(BS,M,M,M)", "(M,M,M,BS)" and "(M,M,BS,M)". Similarly, if there is a pre-periodic part, you need references for each pre-periodic iteration (though for a zoomed-in view, the minimum escaping iteration in the image determines whether they will be used in practice): "M,M,(BS,BS,M,BS)" needs reference orbits "M,M,(BS,BS,M,BS)", "M,(BS,BS,M,BS)" and the four rotations of "(BS,BS,M,BS)". Each of these phases needs as many reference orbits as its starting formula has critical points. As each reference orbit calculation is intrinsically serial work, and modern computers typically have many cores, the extra wall-clock time taken by the additional references is minimal because they can be computed in parallel.
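To illustrate the counting, here is a small hypothetical helper (not from the article or any particular renderer) that enumerates the reference phases needed for a hybrid, given its pre-periodic prefix and periodic loop as lists of formula names; each returned phase would additionally need one reference per critical point of its first formula:

#include <string>
#include <vector>

struct phase
{
  std::vector<std::string> prefix;  // remaining pre-periodic formulas
  std::vector<std::string> loop;    // periodic loop, possibly rotated
};

std::vector<phase> reference_phases(const std::vector<std::string> &prefix,
                                    const std::vector<std::string> &loop)
{
  std::vector<phase> out;
  // one phase per pre-periodic iteration: drop the leading formulas
  for (std::size_t i = 0; i < prefix.size(); ++i)
  {
    phase p;
    p.prefix.assign(prefix.begin() + i, prefix.end());
    p.loop = loop;
    out.push_back(p);
  }
  // one phase per rotation of the periodic loop
  for (std::size_t i = 0; i < loop.size(); ++i)
  {
    phase p;
    p.loop.assign(loop.begin() + i, loop.end());
    p.loop.insert(p.loop.end(), loop.begin(), loop.begin() + i);
    out.push_back(p);
  }
  return out;
}

For "M,M,(BS,BS,M,BS)" this yields "M,M,(BS,BS,M,BS)", "M,(BS,BS,M,BS)" and the four rotations of "(BS,BS,M,BS)", matching the list above.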

Bilinear Approximation

The second technique ("bilinear approximation") is only hinted at in the thread. If you have a deep zoom, the region of \(z\) values starts very small, and bounces around the plane typically staying small and close together, in a mostly linear way, except for when the region gets close to a critical point (e.g. \(x=0\) and \(y=0\) for the Mandelbrot set) or line (e.g. either \(x=0\) or \(y=0\) for the Burning Ship), when non-linear stuff happens (like complex squaring, or absolute folding). For example for the Mandelbrot set, the perturbed iteration

\[ z \to 2 Z z + z^2 + c \]

when \(Z\) is not small and \(z\) is small, can be approximated by

\[ z \to 2 Z z + c \]

which is linear in \(z\) and \(c\) (linear in two variables, hence "bilinear"). In particular, this approximation is valid when \(\left|z^2\right| \ll \left|2 Z z + c\right|\), which can be rearranged with some handwaving (for a critical point at \(0\)) to

\[ \left|z\right| < r = \max\left(0, \epsilon \frac{\left|Z\right| - \max_{\text{image}} \left\{|c|\right\}}{\left|J_f(Z)\right| + 1}\right) \]

where \(\epsilon\) is the hardware precision (e.g. \(2^{-24}\) for single precision), and \(J_f(Z) = 2Z\) for the Mandelbrot set. For the Burning Ship, replace \(|Z|\) with \(\min(|X|,|Y|)\) where \(Z=X+iY\). In practice I divide \(|Z|\) by \(2\) just to be extra safe. For non-complex-analytic functions I use the operator norm of the Jacobian matrix, implemented in C++ by:

// Squared operator norm of a 2x2 matrix: the largest eigenvalue of
// transpose(a) * a, computed from its characteristic polynomial via the
// trace and determinant. mat2, transpose(), trace(), determinant() and
// sqr() (where sqr(x) = x * x) come from the surrounding codebase; this
// mirrors std::norm for complex numbers, which returns the squared magnitude.
template <typename real>
inline constexpr real norm(const mat2<real> &a)
{
  using std::max;
  using std::sqrt, ::sqrt;
  const mat2<real> aTa = transpose(a) * a;
  const real T = trace(aTa);
  const real D = determinant(aTa);
  return (T + sqrt(max(real(0), sqr(T) - 4 * D))) / 2;
}

// Operator norm (largest singular value) of a 2x2 matrix, mirroring
// std::abs for complex numbers.
template <typename real>
inline constexpr real abs(const mat2<real> &a)
{
  using std::sqrt, ::sqrt;
  return sqrt(norm(a));
}
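As a concrete illustration of the radius formula above (a sketch with made-up names, not the author's actual data structures), a 1-step BLA record for the Mandelbrot set could be built like this, where eps is the hardware precision and cmax is \(\max_{\text{image}}\left\{|c|\right\}\); the extra division of \(|Z|\) by \(2\) mentioned above is omitted:

#include <algorithm>
#include <complex>

struct bla
{
  std::complex<double> A, B;  // the approximation z -> A z + B c
  double r;                   // validity radius: usable while |z| < r
  int l;                      // number of iterations skipped
};

// single iteration of the Mandelbrot set at reference orbit value Z
bla bla_1step_mandelbrot(const std::complex<double> Z,
                         const double eps, const double cmax)
{
  const std::complex<double> A = 2.0 * Z;  // J_f(Z) for f(z) = z^2 + c
  const std::complex<double> B = 1.0;      // coefficient of c
  const double r = std::max(0.0, eps * (std::abs(Z) - cmax) / (std::abs(A) + 1.0));
  return { A, B, r, 1 };
}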

This gives a bilinear approximation for one iteration, which is not so useful on its own. The acceleration comes from combining neighbouring BLAs into a BLA that skips many iterations at once. For neighbouring BLAs \(x\) and \(y\), where \(x\) happens first in iteration order, each skipping \(l\) iterations via \(z \to A z + B c\) (valid while \(|z| < r\)), one gets:

\[\begin{aligned} l_{y \circ x} &= l_y + l_x \\ A_{y \circ x} &= A_y A_x \\ B_{y \circ x} &= A_y B_x + B_y \\ r_{y \circ x} &= \min\left(r_x, \max\left(0, \frac{r_y - |B_x| \max_{\text{image}}\left\{|c|\right\}}{|A_x|}\right) \right) \end{aligned}\]

This is a bit handwavy again: higher-order terms of the Taylor expansion are probably necessary to get a bulletproof radius calculation, but it seems to work ok in practice.
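In code, reusing the bla struct and cmax from the previous sketch (again illustrative, not the author's implementation), the merge of neighbours follows the formulas above directly:

// merge neighbouring BLAs x and y, where x happens first in iteration order
bla merge(const bla &x, const bla &y, const double cmax)
{
  bla m;
  m.l = x.l + y.l;         // total iterations skipped
  m.A = y.A * x.A;
  m.B = y.A * x.B + y.B;
  // shrink the radius so that y remains valid after applying x
  m.r = std::min(x.r,
        std::max(0.0, (y.r - std::abs(x.B) * cmax) / std::abs(x.A)));
  return m;
}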

For a reference orbit iterated to \(M\) iterations, one can construct a BLA table with about \(2M\) entries. The first level has \(M\) 1-step BLAs, one per iteration; the next level has \(M/2\) BLAs combining neighbours (without overlap), the next \(M/4\), and so on. It's best for each level to start from iteration \(1\), because iteration \(0\) always starts from a critical point (which makes the radius of BLA validity \(0\)). Now when iterating, pick the BLA that skips the most iterations, among those starting at the current reference iteration that satisfy \(|z| < r\). In between, when no BLA is valid, do regular perturbation iterations, rebasing as required. You need one BLA table for each reference orbit; the tables can be computed in parallel (and each level of the reduction can be done in parallel too, perhaps using OpenCL on GPU).
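Continuing the sketches above (same caveats: illustrative names, not the author's actual code), building the table by repeated pairwise merging, and looking up the best valid BLA starting at reference iteration m, might look like:

#include <vector>

// level 0 holds the 1-step BLAs for reference iterations 1, 2, ..., M-1;
// each further level merges non-overlapping neighbouring pairs, so the
// entry at index i of level k starts at reference iteration 1 + i * 2^k
std::vector<std::vector<bla>> build_table(const std::vector<bla> &level0,
                                          const double cmax)
{
  std::vector<std::vector<bla>> table;
  table.push_back(level0);
  while (table.back().size() > 1)
  {
    const std::vector<bla> &prev = table.back();
    std::vector<bla> next;
    for (std::size_t i = 0; i + 1 < prev.size(); i += 2)
      next.push_back(merge(prev[i], prev[i + 1], cmax));
    if (prev.size() % 2 == 1)
      next.push_back(prev.back());  // unpaired entry passes through unchanged
    table.push_back(next);
  }
  return table;
}

// return the valid BLA starting at reference iteration m that skips the most
// iterations, or nullptr if none is valid (then do a normal perturbed step)
const bla *lookup(const std::vector<std::vector<bla>> &table,
                  const std::size_t m, const double abs_z)
{
  if (m < 1)
    return nullptr;  // iteration 0 starts at the critical point: no BLA
  const bla *best = nullptr;
  for (std::size_t k = 0; k < table.size(); ++k)
  {
    if (((m - 1) & ((std::size_t(1) << k) - 1)) != 0)
      break;  // no entry on this level starts exactly at iteration m
    const std::size_t i = (m - 1) >> k;
    if (i >= table[k].size())
      break;
    const bla &b = table[k][i];
    if (!(abs_z < b.r))
      break;  // radii only shrink as levels merge, so stop at the first failure
    best = &b;  // valid, and skips at least as many iterations as lower levels
  }
  return best;
}

In the main loop, when lookup returns a BLA, apply \(z \to A z + B c\) and advance the reference iteration by \(l\); otherwise fall back to a single perturbed iteration, rebasing as described earlier.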

BLA is an alternative to series approximation for the Mandelbrot set, but it's conceptually simpler, easier to implement, easier to parallelize, has better-understood stopping conditions, and is more general (it applies to other formulas like the Burning Ship, hybrids, ...). I still need to do benchmarks to see how it compares speed-wise before declaring an overall winner.

It remains to research the BLA initialisation for critical points not at \(0\), and check rebasing with multiple critical points: so far I've only actually implemented it for formulas with a single critical point at \(0\), so there may be bugs or subtleties lurking in the corners.