The Mandelbrot set contains many hyperbolic components (cardioid-like and disc-like regions), with hairy filaments connecting them in a tree-like way. Each component has a nucleus at its center, a parameter whose periodic orbit contains 0. Each component is surrounded by an atom domain, which for discs has about 4 times the radius (the relationship for cardioids is less regular, but the domain often has about the square root of the component's size). Labelling a picture of the Mandelbrot set with the periods can provide insights into its deeper structure, and most of the time using the atom domain size as the label size works pretty well.

Inspired by a feature of Power MANDELZOOM (scroll down to the 3rd image, titled "Embedded Julia set") that locates periodic points that are too deep to see, I implemented a grid scan algorithm to find periodic points. I vaguely recall Robert P. Munafo explaining this algorithm to me in private email, so most of the credit belongs with him. The font size variation is all mine, though.

Using my mandelbrot-numerics and mandelbrot-graphics libraries, the period scan works like this:

```c
// scan successively finer grids for periods
for (int grid = mingridsize << 8; grid >= mingridsize; grid >>= 1)
  for (int y = grid/2; y < h; y += grid)
    for (int x = grid/2; x < w; x += grid)
    {
      double _Complex c0 = x + I * y;
      double _Complex dc0 = grid;
      // transform pixel coordinates to the 'c' plane
      m_d_transform_forward(transform, &c0, &dc0);
      // find the period of a nucleus within a large box
      // uses Robert P. Munafo's Jordan curve method
      int p = m_d_box_period_do(c0, 4.0 * cabs(dc0), maxiters);
      if (p > 0)
        // refine the nucleus location (uses Newton's method)
        if (m_converged == m_d_nucleus(&c0, c0, p, 16))
        {
          // verify the period with a small box
          // if the period is wrong, the size estimates will be way off
          as[atoms].period = m_d_box_period_do(c0, 0.001 * cabs(dc0), 2 * p);
          if (as[atoms].period > 0)
          {
            as[atoms].nucleus = c0;
            // size of component using algorithm from ibiblio.org M-set e-notes
            as[atoms].size = cabs(m_d_size(c0, as[atoms].period));
            // size of atom domain using algorithm from an earlier blog post of mine
            as[atoms].domain_size = m_d_domain_size(c0, as[atoms].period);
            // shape of component (either cardioid or disc) after Dolotin and Morozov (2008 eq. 5.8)
            as[atoms].shape = m_d_shape_discriminant(m_d_shape_estimate(c0, as[atoms].period));
            atoms++;
          }
        }
    }
```

This does give duplicates in the output array, but these can be removed later. I found it better to use a mask image (a 2D array) in which I mark a circle around each label, after checking whether the location has already been marked, than to use a quadratic-time loop comparing locations against a threshold distance. Depending on the size of the circles, this also helps prevent messy label overlaps.
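The mask idea can be sketched as follows (a hypothetical minimal version, not the actual library code; `mask` is assumed to be a zero-initialized `w`-by-`h` array of flags):

```c
#include <stdbool.h>
#include <stdlib.h>

// Return true if (x, y) was unmarked (a new label), and mark a disc of
// radius r around it in the mask; return false for a duplicate.
bool try_mark(bool *mask, int w, int h, int x, int y, int r)
{
    if (0 <= x && x < w && 0 <= y && y < h && mask[y * w + x])
        return false; // already labelled here, skip duplicate
    for (int j = -r; j <= r; ++j)
        for (int i = -r; i <= r; ++i)
            if (i * i + j * j <= r * r)
            {
                int u = x + i, v = y + j;
                if (0 <= u && u < w && 0 <= v && v < h)
                    mask[v * w + u] = true;
            }
    return true;
}
```

Each candidate costs O(r²) marking work, instead of comparing against every previously accepted label.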

One problem is that the range of atom domain sizes can be huge, with domains in filaments being orders of magnitude smaller than the sizes present in embedded Julia sets. This can be fixed with some hacks:

The image above calculates the font size like this:

```c
// convert to pixel coordinates
int p = as[a].period;
double _Complex c0 = as[a].nucleus;
double _Complex dc0 = p == 1 ? 1 : as[a].domain_size; // period 1 domain is infinite
m_d_transform_reverse(transform, &c0, &dc0);
// shrink disc labels a bit to avoid overlaps
double fs = (as[a].shape == m_cardioid ? 1 : 0.5) * cabs(dc0);
// rescale filament labels using properties of periods in this particular embedded Julia set
if ((p % 4) != (129 % 4))
  fs = 8 * log2(fs) + maxfontsize;
// ensure a minimum label size
fs = fmax(fs, minfontsize);
```

The image below replaces the specific period property `(p % 4) != (129 % 4)` with `(p % 4) != 0`. I'll figure out how best to generalize this and allow command-line arguments; at the moment I've just been editing the code and recompiling to adapt to different views, which is hardly ideal.

You can click the pictures for bigger versions (a few MB each). The last 3 images are centered on `-1.9409856638151786271684397e+00 + 6.4820395780451436662598436e-04 i`. After a few more cleanups I'll push the code to my mandelbrot-graphics git repository linked above.

Late last year I implemented some coupled continuous cellular automata, inspired by Softology's experiments. Now I'm finally getting around to blogging about it. I used OpenGL shaders, here's some of the fragment shader source of the main algorithm:

```glsl
void main()
{
    vec4 s1 = texture(state, coord, 1.0);
    vec4 s100 = texture(state, coord, 100.0);
    vec4 s;
    for (int k = 0; k < 4; ++k)
        s[k] = texture(state, coord, blur[k])[k];
    vec4 h = texture(history, coord);
    s = coupling * (s - s100) + h;
    s = speed * s;
    s = mix(s1, vec4(0.5) + 0.5 * cos(s), 0.125);
    h = mix(s, h, decay);
    state_out = s;
    history_out = h;
}
```

The non-linearity of the `cos()` on the coupled input acts like a "reaction", while the blurring (looking up reduced mipmap levels from the texture) acts like "diffusion".

Colouring is done with another affine matrix transform, the output of which is thresholded and clamped before an edge-detection filter is applied. The edge detection uses `dFdx` and `dFdy`, so the results are coarse (these derivatives are typically computed for blocks of 2x2 pixels, rendered together in parallel). For better results the edge detection could be done in another pass, or the whole thing could run at double the resolution and be resized down to screen size afterwards.

Here's a video of it in action:

Here are some static images:

Here is another video, from January when it was still in colour:

Here's where you can get the code:

git clone https://code.mathr.co.uk/cca.git

Future work might be to do proper Gaussian blurs (the kernel is separable, so even a large radius might be feasible in real time) instead of the cheap mipmap reduction (which yields squarish grid artifacts).

**EDIT** I worked on it some more, now in colour and with a
high quality mode that does Gaussian blur (on my system frame rate drops from
~60fps to between ~5fps and ~30fps depending on blur radius). Pictures:

I also added a mutation mode, which randomizes the parameters one by one at random. Here's a final example video showing off the new features:


A000129 Pell numbers

(0, 1, 2, 5, 12, 29, 70, 169, 408, 985, ...)

Number of lattice paths from (0,0) to the line x=n-1 consisting of U=(1,1), D=(1,-1) and H=(2,0) steps; for example, a(3)=5, counting the paths H, UD, UU, DU and DD. -- Emeric Deutsch

```haskell
{-# LANGUAGE FlexibleContexts #-}
import Diagrams.Prelude
import Diagrams.Backend.SVG.CmdLine (B, defaultMain)
import Control.Monad (replicateM)
import Data.List (sort, transpose)
import Data.List.Split (chunksOf)

u, d, h, z :: (Int, Int)
u = (1, 1)
d = (1, -1)
h = (2, 0)
z = (0, 0)

add (a, b) (c, d) = (a + c, b + d)

v :: (Int, Int) -> V2 Double
v (a, b) = V2 (fromIntegral a) (fromIntegral b)

vs = map v . scanl add z

l = fst . foldr add z

paths n =
  [ q
  | m <- [0..n]
  , q <- replicateM m [u,d,h]
  , l q == n
  ]

draw n q
  = frame 0.5
  . (`atop` centerXY (strutY (fromIntegral n)))
  . centerXY
  $ mconcat
      [ circle 0.25 # fc white # translate pq # lw thin
      | pq <- vs q
      ]
    `atop` strokeT (trailFromOffsets (map v q))

grid = vcat . map hcat

diagram n m
  = bg white . centerXY . grid . transpose
  . chunksOf m . map (draw n) . sort
  $ paths n

main = defaultMain (diagram 5 10)
```

A000332 Binomial coefficient (n,4)

(0, 0, 0, 0, 1, 5, 15, 35, 70, 126, 210, 330, 495, 715, ...)

Number of equilateral triangles with vertices in an equilateral triangular array of points with n rows (offset 1), with any orientation. -- Ignacio Larrosa Cañestro

```haskell
{-# LANGUAGE FlexibleContexts #-}
import Diagrams.Prelude
import Diagrams.Backend.SVG.CmdLine (B, defaultMain)
import Data.List (sort, sortOn, nub, transpose)
import Data.List.Split (chunksOf)

third :: (Int, Int) -> (Int, Int) -> (Int, Int)
third (p, q) (p', q') =
  let (s, t) = (p' - p, q' - q)
  in  (p - t, q + s + t)

inTriangle :: Int -> (Int, Int) -> Bool
inTriangle n (p, q) = 0 <= p && 0 <= q && p + q < n

sizeSquared :: [(Int, Int)] -> Int
sizeSquared [(p, q), (p', q'), _] =
  let (s, t) = (p' - p, q' - q)
  in  s * s + s * t + t * t

triangles :: Int -> [[(Int, Int)]]
triangles n = sortOn sizeSquared $ nub
  [ sort [(a, b), (c, d), (e, f)]
  | a <- [0..n]
  , b <- [0..n]
  , inTriangle n (a, b)
  , c <- [0..n]
  , d <- [0..n]
  , inTriangle n (c, d)
  , (a, b) /= (c, d)
  , (e, f) <-
      [ third (a, b) (c, d)
      , third (c, d) (a, b)
      ]
  , inTriangle n (e, f)
  ]

t2 :: (Int, Int) -> V2 Double
t2 (p, q) = V2 (fromIntegral p + fromIntegral q / 2) (sqrt 3 * fromIntegral q / 2)

t2' = P . t2

draw n t@[ab,cd,ef]
  = frame 0.75 . scale 1.25 . rotate (15 @@ deg)
  $ mconcat
      [ circle 0.25 # fc (if (p, q) `elem` t then grey else white)
          # translate (t2 (p, q)) # lw thin
      | p <- [0..n]
      , q <- [0..n]
      , inTriangle n (p, q)
      ]
    `atop` mconcat
      [ t2' ab ~~ t2' cd
      , t2' cd ~~ t2' ef
      , t2' ef ~~ t2' ab
      ]

grid = vcat . map hcat

diagram n m = bg white . grid . chunksOf m . map (draw n) $ triangles n

main = defaultMain (diagram 6 7)
```

A000984 Central binomial coefficient (2n,n)

(1, 2, 6, 20, 70, 252, 924, ...)

The number of direct routes from my home to Granny's when Granny lives n blocks south and n blocks east of my home in Grid City. For example, a(2)=6 because there are 6 direct routes: SSEE, SESE, SEES, EESS, ESES and ESSE. -- Dennis P. Walsh

```haskell
{-# LANGUAGE FlexibleContexts #-}
import Diagrams.Prelude
import Diagrams.Backend.SVG.CmdLine (B, defaultMain)
import Control.Monad (replicateM)
import Data.List.Split (chunksOf)

u, d, z :: (Int, Int)
u = (1, 0)
d = (0, 1)
z = (0, 0)

add (a, b) (c, d) = (a + c, b + d)

v :: (Int, Int) -> V2 Double
v (a, b) = V2 (fromIntegral a) (fromIntegral b)

vs = map v . scanl add z

l = foldr add z

paths n =
  [ q
  | q <- replicateM (2 * n) [u,d]
  , l q == (n, n)
  ]

draw n q
  = frame 0.5
  . (`atop` centerXY (strutY (fromIntegral n)))
  . centerXY
  $ mconcat
      [ circle 0.25 # fc white # translate pq # lw thin
      | pq <- vs q
      ]
    `atop` strokeT (trailFromOffsets (map v q))

grid = vcat . map hcat

diagram n m = bg white . centerXY . grid . chunksOf m . map (draw n) $ paths n

main = defaultMain (diagram 4 7)
```

A001405 Binomial (n,floor(n/2))

(1, 1, 2, 3, 6, 10, 20, 35, 70, 126, 252, 462, 924, ...)

Number of distinct strings of length n, each of which is a prefix of a string of balanced parentheses; for n = 4, the a(4) = 6 distinct strings of length 4 are ((((, (((), (()(, ()((, ()(), and (()). -- Lee A. Newberg

```haskell
{-# LANGUAGE FlexibleContexts #-}
import Diagrams.Prelude
import Diagrams.Backend.SVG.CmdLine (B, defaultMain)
import Control.Monad (replicateM)
import Data.List (sort, transpose)
import Data.List.Split (chunksOf)

u, d, z :: (Int, Int)
u = (1, 1)
d = (1, -1)
z = (0, 0)

add (a, b) (c, d) = (a + c, b + d)

boundedBelow = not . any ((< 0) . snd) . scanl add z

paths n =
  [ q
  | q <- replicateM n [u,d]
  , boundedBelow q
  ]

v :: (Int, Int) -> V2 Double
v (a, b) = V2 (fromIntegral a) (fromIntegral b)

vs = map v . scanl add z

draw n q
  = frame 0.5
  . (`atop` centerXY (strutY (fromIntegral n)))
  . centerXY
  $ mconcat
      [ circle 0.25 # fc white # translate pq # lw thin
      | pq <- vs q
      ]
    `atop` strokeT (trailFromOffsets (map v q))

grid = vcat . map hcat

diagram n m
  = bg white . centerXY . grid . transpose
  . chunksOf m . map (draw n) . sort
  $ paths n

main = defaultMain (diagram 8 10)
```

A002623 Generating function of 1/((1+x)(1-x)^4)

(1, 3, 7, 13, 22, 34, 50, 70, 95, 125, 161, 203, 252, 308, 372, 444, 525, 615, 715, 825, 946, ...)

Number of nondegenerate triangles that can be made from rods of length 1,2,3,4,...,n. -- Alfred Bruckstein

```haskell
{-# LANGUAGE FlexibleContexts #-}
import Diagrams.Prelude
import Diagrams.Backend.SVG.CmdLine (B, defaultMain)
import Data.List (sort, sortOn, nub, transpose)
import Data.List.Split (chunksOf)

nondegenerate :: [Int] -> Bool
nondegenerate [a,b,c] = a + b > c

corners :: [Int] -> [V2 Double]
corners [a',b',c'] = [V2 0 0, V2 c 0, V2 x y]
  where
    a = fromIntegral a'
    b = fromIntegral b'
    c = fromIntegral c'
    x = (c^2 - a^2 + b^2) / (2 * c)
    y = sqrt $ b^2 - x^2

sizeSquared :: [Int] -> Double
sizeSquared [a',b',c'] = s * (s - a) * (s - b) * (s - c)
  where
    a = fromIntegral a'
    b = fromIntegral b'
    c = fromIntegral c'
    s = (a + b + c) / 2

triangles :: Int -> [([Int], [V2 Double])]
triangles n
  = map (\t -> (t, corners t)) . sortOn sizeSquared
  $ [ abc
    | a <- [1..n]
    , b <- [a..n]
    , c <- [b..n]
    , let abc = [a,b,c]
    , nondegenerate abc
    ]

edge k a b = mconcat
  [ circle 0.25 # fc white # translate p # lw thin
  | p <-
      [ lerp t a b
      | i <- [0..k]
      , let t = fromIntegral i / fromIntegral k
      ]
  ] `atop` (P a ~~ P b)

draw n ([a,b,c], t@[ab,cd,ef])
  = frame 0.5
  . (`atop` centerXY (strut (fromIntegral n)))
  . centerXY . rotate (15 @@ deg)
  $ mconcat [ ]
    `atop` mconcat
      [ edge c ab cd
      , edge a cd ef
      , edge b ef ab
      ]

grid = vcat . map hcat

diagram n m = bg white . grid . chunksOf m . map (draw n) $ triangles n

main = defaultMain (diagram 8 7)
```

Last year I drew some things, and today I recreated one of them using Haskell with the Diagrams library. It features a circular Gray code with five bits, giving 32 possible combinations. A Gray code has the property that neighbouring combinations change only in one position, which is useful for detecting errors in rotary encoders.
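The diagram encodes the rings directly, but the one-bit-change property is easy to check for the standard binary-reflected Gray code (a generic sketch, not part of the diagram source):

```c
// Binary-reflected Gray code of n.
unsigned gray(unsigned n) { return n ^ (n >> 1); }

// Number of set bits (portable popcount).
int popcount(unsigned x)
{
    int c = 0;
    for (; x; x &= x - 1) ++c;
    return c;
}

// Check that consecutive codewords over `bits` bits differ in exactly
// one position, including the wrap-around from the last back to the first.
int is_cyclic_gray(int bits)
{
    unsigned n = 1u << bits;
    for (unsigned i = 0; i < n; ++i)
        if (popcount(gray(i) ^ gray((i + 1) % n)) != 1)
            return 0;
    return 1;
}
```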

Here is the source code:

```haskell
{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE TypeFamilies #-}
import Diagrams.Prelude hiding (size)
import Diagrams.Backend.SVG.CmdLine (B, defaultMain)

main :: IO ()
main = defaultMain diagram

diagram :: Diagram B
diagram = bg white . frame 0.125 . centerXY . mconcat $
  [ ring 6 2 1
  , ring 5 4 2
  , ring 4 8 4
  , ring 3 16 8
  , ring 2 16 0
  , txt
  ]

ring :: Double -> Int -> Int -> Diagram B
ring inr count offset
  = mconcat . zipWith (cell inr) [0..31] . drop offset . cycle
  $ (replicate count False ++ replicate count True)

cell :: Double -> Double -> Bool -> Diagram B
cell inr angle flag
  = rotate (angle / 32 @@ turn)
  . fc (if flag then dark else light)
  . lc white
  . translate (r2 (i, 0))
  . strokeT . closeTrail $ mconcat
      [ p2 (i, 0) ~~ p2 (o, 0)
      , arc' o h t
      , rotate t $ p2 (o, 0) ~~ p2 (i, 0)
      , arc' i (rotate t h) t'
      ]
  where
    t = 1 / 32 @@ turn
    t' = -1 / 32 @@ turn
    i = size inr
    o = size (inr + 1)
    h = direction (r2 (1, 0))

light, dark :: Colour Double
light = sRGB 0.7 0.7 0.7
dark = sRGB 0.3 0.3 0.3

size :: Double -> Double
size r = exp ((r - 7) / 5)

txt :: Diagram B
txt = scale 0.125 . centerXY . vcat
  $ centerXY
  . atop (translateY 0.4 $ strutY 0.8)
  . font "LMSans10" . italic . fontSizeL 1 . text
  <$> words "32 Shades of Gray"
```

I had some frustrations until I realized that **fontSizeL** was
what I needed to make the text change size when rendering at different sizes.

As part of Brud's
*luser stories* I wrote up a lecture/slideshow
about the fractal dimension of Julia sets. You can
download the PDF: julia-dim.pdf (3.5MB)
and there's a page with source code and detailed results
tables here: fractal dimension of Julia sets.

The image resembles the familiar Mandelbrot set because any ball around a boundary point of the Mandelbrot set contains parameters whose Julia sets have dimension arbitrarily close to 2.

I was intrigued by something I spotted on Wikipedia about the plastic number:

There are two ways of partitioning a square into three similar rectangles: the trivial solution given by three equal rectangles with aspect ratio 1:3, and another solution in which the three rectangles all have different sizes, but the same shape, with the square of the plastic number as their aspect ratio.

Wikipedia lacked a diagram so I made one (above), and here follows a proof that the aspect ratio is as claimed. The plastic number \(p\) is the unique real root of \(x^3=x+1\). The sides labeled in the diagram satisfy:

\[ \begin{aligned} b &= 1 - a \\ c &= a (1 - a) \\ d &= 1 - a (1 - a) \end{aligned} \]

The rectangles are all similar, meaning they have the same aspect ratio, so:

\[ \frac{a}{1} = \frac{b}{d} = \frac{1 - a}{1 - a (1 - a)} \]

Multiplying out the equation gives:

\[ a^3 - a^2 + 2a - 1 = 0 \]

The claim is that \(a = \frac{1}{p^2}\) (note that \(p > 1\) while clearly \(a < 1\), so the aspect ratio convention is flipped in my diagram). Substituting this for \(a\) gives:

\[ p^{-6} - p^{-4} + 2p^{-2} - 1 = 0 \]

which multiplies out to:

\[ 1 - p^2 + 2p^4 - p^6 = 0 \]

Now, substituting \(p^3 = p + 1\) gives:

\[ 1 - p^2 + 2p(p+1) - (p+1)^2 = 0 \]

and multiplying this out gives \(0 = 0\) as all the terms cancel.
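The algebra can also be double-checked numerically (a standalone sketch, nothing to do with the diagram): find the real root of \(x^3 = x + 1\) by Newton's method and confirm that \(a = 1/p^2\) satisfies \(a^3 - a^2 + 2a - 1 = 0\).

```c
#include <math.h>

// Real root of x^3 - x - 1 = 0 (the plastic number) via Newton's method.
double plastic(void)
{
    double x = 1.5; // start above the root, which lies in (1, 1.5)
    for (int i = 0; i < 64; ++i)
        x -= (x * x * x - x - 1) / (3 * x * x - 1);
    return x;
}
```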

Around this time last year I was playing around with a physics-based ray-tracer for spherically curved space. In spherical space the ray geodesics eventually wrap around, meeting at the opposite pole to the observer. To compound the sphericity I used a projection that wraps the whole sphere-of-view from a point into a long strip.

It's been so long I've forgotten the details of how it works, but I embedded the 3D spherical space in 4D Euclidean (flat) space. I represented ray directions by points on the "equator" around the ray source, and a lot of trigonometry was involved to transform these ray directions appropriately when tracing the rays through curved space. Eventually I optimized the code to use simpler functions like sqrt and arithmetic instead of costly sin and cos calls.

The materials are all physically based, with refractive index varying with simulated light wavelength, which gives a rainbow effect when different colours are refracted by different angles. To get the final image requires tracing a monochrome image at many different wavelengths, which are then combined into the XYZ colour space using tristimulus response curves. I collected some Wikipedia articles together into a little A5 booklet: colour.pdf (and a version with the pages rearranged for printing using bookletimposer: colour-booklet.pdf).

Here are some miscellaneous links related to how Prismatic works:

- Projection
- transformation projection (section cubic to/from spherical map)
- Lambert equal-area projection
- spherical coordinates
- Ray-surface intersection distance
- solve a = b cos x + c sin x
- double angle formulae
- Reflection and refraction
- Lambertian reflectance
- Fresnel equations
- Snell's law
- reflect()
- refract()
- Kramers-Kronig relations
- Absorption and emission
- Beer-Lambert law
- complex refractive index
- absorption spectroscopy
- emission spectrum
- Colour, wavelength, CIE XYZ, sRGB
- colour matching functions (section database / CMFs)
- sRGB specification
- illuminant D65
- CMYK colour model
- Materials
- refractiveindex.info
- corundum
- sapphire
- ruby and sapphire
- ruby
- chemistry webbook
- chemistry webbook UV/visible wavelengths
- crown glass
- flint glass

The prismatic code itself is online at code.mathr.co.uk/prismatic.

**UPDATE** my OpenGL 3.3 version may have been
faster, but modifications to the original program are faster
still (there was a misapplied patch slowing things down
drastically; once that was fixed, further improvement became easier).
Check
this rrv issue thread
for more details.

Radiosity is a method for computing diffuse lighting. Unlike raytracing, radiosity is viewpoint independent, which means the lighting calculations can be performed once for a given scene, then visualized multiple times with different virtual camera positions. But ray tracing also supports specular reflections and transparency, so eventually a more complete renderer could combine both.

While searching for radiosity implementations I found rrv, which uses OpenGL to render the view from each patch (triangle) in the scene. My first experiences were disappointing - it was very slow - but that turned out to be due to missing optimisation flags. I found some further ways to optimize it: first, using vertex buffers instead of glBegin() so that the scene geometry is uploaded to the GPU only once instead of once per patch; secondly, using a large flat array instead of a C++ std::map to accumulate the results of rendering. These optimisations gave a speed boost of around 7x, but I wasn't satisfied.

I ended up porting the radiosity renderer core to run almost entirely on the GPU, using OpenGL 3.3 (though it could probably be ported to OpenGL 2.1 with a few extensions, like floating point textures and framebuffers). The result is around 30x faster than the original OpenGL 1 implementation. Here's rrv's room4 demo scene with lighting calculated by my port in a little over 12 minutes (the visualizer code remains unchanged):

Another change I made involved the form factor calculations for hemicube projection. Radiosity ideally projects to a hemisphere and from there to a circle, but rectangular grids are more convenient for computers. So radiosity implementations tend to project to a hemi-cube, rasterizing the scene 5 times, once for the top and each of the four sides. Then each pixel in the result is scaled by a delta form factor, so that it corresponds more closely to the hemisphere circle projection.

rrv's form factor calculations used a product of cosines, which looked suspect to me as it didn't take into account the edges and corners of the hemicube, so I did some searching and found a paper which gave some different formulas:

The Hemi-cube: a Radiosity Solution for Complex Environments

Michael F. Cohen and Donald P. Greenberg

ACM SIGGRAPH Volume 19, Number 3, 1985

I implemented them in a test program and plotted a comparison between RRV2007 (magenta) and Cohen1985 (green):

The difference is small, but visible in the visualizer as slight shape differences between quantized bands on the gradients. Here's a comparison between the output after 1 step with the two different form factors, amplified 64 times (mid-grey is equal output):
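For reference, my reading of the Cohen-Greenberg delta form factors is sketched below (coordinates assume a unit hemicube with the top face in the plane \(z = 1\); treat this as an illustration, not the exact rrv patch). By construction the delta form factors over all five faces sum to 1.

```c
#include <math.h>
#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

// Delta form factor for a pixel of area dA at (x, y) on the hemicube's
// top face (the plane z = 1, with |x| <= 1 and |y| <= 1).
double dff_top(double x, double y, double dA)
{
    double r2 = x * x + y * y + 1;
    return dA / (M_PI * r2 * r2);
}

// Delta form factor for a pixel on a side face, at height z (0 <= z <= 1)
// with y running along the face (|y| <= 1).
double dff_side(double y, double z, double dA)
{
    double r2 = y * y + z * z + 1;
    return z * dA / (M_PI * r2 * r2);
}
```

Note how a side-face pixel near \(z = 0\) contributes almost nothing, which is exactly what a naive product of cosines can get wrong at the edges and corners.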

When time allows, I have some ideas for some additional features, like lighting groups (compute the radiosity for each light separately, then combine them at visualization time - hopefully I'm correct in assuming linearity - allowing the brightness (but not colour) of each light to be adjusted separately) and scene symmetry (like an infinite corridor repeating every 5m or so, the symmetry means the radiosity for translationally equivalent parts of the scene must be identical).

I put my changes in a branch at code.mathr.co.uk/rrv, which may end up merged to the upstream at github.com/kdudka/rrv.

I painted the above in acrylic yesterday; today I digitally recreated it:

The basic shape is the lemniscate of Bernoulli, which has the parametric equation:

\[\begin{aligned} x &= \frac{\cos t}{\sin^2 t + 1} \\ y &= \frac{\cos t \sin t}{\sin^2 t + 1} \\ z &= \frac{\sin t}{2} \end{aligned}\]

I added the third \(z\) coordinate to avoid self-intersection in 3D; the real lemniscate is 2D. The lemniscate is a curve with no width, but the painting represents it using a square cross-section, with a half-twist each time around the loop.

A Möbius strip is a 2D surface of a loop with a half-twist; the real strip has no thickness, but the painting represents it using a square cross-section, as if the two sides (which are really the same side!) of the surface were pushed apart. The single boundary of the strip then becomes a surface, which happens to be another Möbius strip with its surface pushed apart.

Thickening the 1D lemniscate in 3D space to a 2D surface surrounding a square cross-section with a half-twist involves some slightly fiddly maths. The steps involve finding a local coordinate frame (an orthonormal basis: three mutually perpendicular unit-length vectors) along the curve, applying the twist normal to the curve, and then using the twisted local coordinate frame to generate the square cross-section.

The local coordinate frame has one basis vector in the direction of the curve. This can be found by differentiation with respect to \(t\) followed by normalization. The parametric equations are a bit awkward to differentiate by hand, but the free computer algebra software Maxima can perform symbolic differentiation:

```
(%i1) diff( cos(t)/(sin(t)^2 + 1) , t );
(%o1) - sin(t)/(sin(t)^2 + 1) - 2*cos(t)^2*sin(t)/(sin(t)^2 + 1)^2
(%i2) diff( cos(t)*sin(t)/(sin(t)^2 + 1) , t );
(%o2) - sin(t)^2/(sin(t)^2 + 1) + cos(t)^2/(sin(t)^2 + 1)
      - 2*cos(t)^2*sin(t)^2/(sin(t)^2 + 1)^2
(%i3) diff( sin(t)/2 , t );
(%o3) cos(t)/2
```

This vector needs to be normalized for the basis, so divide each coordinate by the length of the vector (which is the square root of the sum of the squares of the coordinates, by Pythagoras' theorem).

Having one basis vector (call it \(\mathbf{u}\)) is a start, but there are two more to find. The vector cross product of two vectors gives a third vector orthogonal to both, so picking an arbitrary direction as the \(z\) axis \((0, 0, 1)^T\), the second basis vector is in the direction \(\mathbf{z} \times \mathbf{u}\), call it \(\mathbf{v}\) when normalized. The third and final basis vector can now be found: \(\mathbf{w} = \mathbf{v} \times \mathbf{u}\). This doesn't really need normalization because the arguments are orthonormal, but rounding errors might make it useful to renormalize anyway.

To give a half twist around the loop, the basis vectors need to be rotated around the curve - having picked one basis vector as the curve direction, the task reduces to a simple planar rotation, whose matrix is:

\[\begin{aligned} M &= \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos \theta & \sin \theta \\ 0 & -\sin \theta & \cos \theta \end{pmatrix} \\ \theta & = \frac{t}{2} + \frac{\pi}{4} \end{aligned}\]

The \(\frac{\pi}{4}\) phase offset is to make the ends of the loops flat. Now the curves along the corners of the square cross-section can be defined in terms of the transformed basis vectors, and the two flat surfaces (one white and the other black) are formed by interpolating between neighbouring curves. In the source code linked below the transformed basis vectors are called \(p\) and \(q\) and the edge curves are called \(i_1\), \(j_1\), \(i_2\) and \(j_2\), with the surfaces \(k_1\) and \(k_2\).

I used gnuplot for the visualization; it's a bit awkward having to separate out the vector components, and having to dump the parametric data to table files and replot them to be able to set the fill colour is even more awkward. But it did the trick; here's the source for the image: moebius_infinity.gnuplot.

**Exercise 1**: adjust the thickness of the square cross-section
to better match the original painting.

**Exercise 2**: generate an animation of the loop twisting, by
rendering frames with different \(\theta\) phase offsets.

**Exercise 3**: try a loop with a quarter twist, using a rainbow
colour gradient that wraps around the loop just once.

**Exercise 4**: try different cross-sections: for example a
triangle or pentagon.

Some image formats (like PNG) use RGB, some (like JPEG) use YUV which separates out the colour information from the brightness information (originating in the early history of colour television). Colour space transformation is theoretically lossless, but most image formats store data as small integers (usually 8bit, which gives 256 shades for each colour channel). If you save an RGB image (already quantized) to JPEG, it'll be converted to YUV and quantized again. Then when you load it, it'll be converted to RGB and quantized a third time. (The loss from quantization is different from the loss from JPEG compression, which I'm ignoring in this post.)

**Quantization** (converting from a high precision to a
lower precision) requires irreversibly discarding information.
**Dithering** adds a small amount of noise to each channel
of each pixel before quantization takes place, which averages out
quantization errors across the image. The point I want to make in this
post is that **dithering improves image quality**, at least
when starting with lossless computer-generated source material.

A small example: suppose you have a 1bit image format with one channel, each pixel can be 0 (black) or 1 (white). Moreover you have a source image at high precision with a smooth gradient from 0 to 1. When the gradient is at 0.25, quantization gives 0 100% of the time, and when the gradient is at 0.75, quantization gives 1 100% of the time. This gives a sharp transition from black to white at mid-level grey. If you add dither, however, 0.25 will turn out 1 25% of the time and 0 75% of time, and 0.75 will turn out 1 75% of the time and 0 25% of the time. Instead of a sharp transition band, there will be white speckles on a black background that gradually increase in density from no speckles at 0.00 to all speckles at 1.00. Here's an example in 1bit RGB colour:
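The example can be sketched in code (a hypothetical helper using plain `rand()` for uniform noise; real implementations often prefer triangular-PDF dither):

```c
#include <stdlib.h>

// Quantize x in [0,1] to one bit, optionally adding uniform dither noise
// in [-0.5, 0.5) (one quantization step) before thresholding.
int quantize1(double x, int dither)
{
    if (dither)
        x += rand() / (RAND_MAX + 1.0) - 0.5;
    return x < 0.5 ? 0 : 1;
}

// Average output over many trials: with dither this approaches x itself,
// without dither it is a hard threshold at 0.5.
double mean_output(double x, int dither, int trials)
{
    long sum = 0;
    for (int i = 0; i < trials; ++i)
        sum += quantize1(x, dither);
    return (double)sum / trials;
}
```

So dithering trades a sharp transition artifact for broadband noise: the local average tracks the source value.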

| dithered | undithered |
|---|---|

In 8bit colour the problem is less visible:

| dithered | undithered |
|---|---|

They look almost identical, but if you bump up the gain you can see the differences more clearly:

| dithered gain boosted | undithered gain boosted |
|---|---|

Comparing with the lossless (well, double-precision floating point) source image, the errors from quantization have a rather different character:

| dithered error | undithered error |
|---|---|

Performing a 2D FFT reveals the spectrum of the error is nearly flat when dithering, but rather dramatic without:

| dithered error FFT | undithered error FFT |
|---|---|

In the worst case (converting RGB to YUV to RGB, quantizing at each stage without dithering), the quantization errors are visible even without bumping up the gain (but I did it anyway):

| undithered via YUV | undithered via YUV gain boosted |
|---|---|

Even with 16bit image formats, the problem isn't eliminated, though it is much less visible - zooming in on the higher-resolution files linked in the table and looking carefully at the dark areas reveals some banding:

| 16bit gain boosted | 16bit via YUV gain boosted |
|---|---|

And if your image viewer doesn't dither when quantizing 16bit images to your (typically 8bit) display depth, you'll get banding as if you hadn't dithered and saved at 8bit anyway. However, the loss from quantizing 3 times during RGB to YUV to RGB is much lower at 16bit; this is the difference between the last two 16bit images, amplified 64 times:

| 16bit gain boosted - 16bit via YUV gain boosted |
|---|

Final thoughts:

- Dither before quantizing.
- If your target codec uses YUV, convert your high-precision source to high-precision YUV, then see (1).

Source code for this post:

- dither.dot
- graphviz source for the diagram
- dither.c
- C99 source for the image generation program
- dither.csv
- CSV error statistics output from the program

You may remember Snowglobe, a screensaver-like graphical demo of generative fractal snowflakes interacting in a particle system. This week I uploaded two new versions to Hackage: snowglobe-2.0.0.2, which is compatible with the latest Haskell OpenGL libraries, and snowglobe-3, which removes an awkward dependency on hmatrix (replaced with simple small matrix maths copied from an old project of mine) and adds a new feature: saving each generated snowflake to a file (hit shift-S to toggle saving the files to the current working directory). This makes snowglobe-2.0.0.2 obsolete already. You can get the latest (when you have ghc and cabal-install available) by:

cabal update && cabal install snowglobe

Having the flake images makes it possible to do all kinds of other fun stuff. They look like this:

I thought it would be nice to send all my neighbours a unique home made seasonal card, using these flake images. I got some card blanks from a local shop; they are 5x7 inches when folded. From experience my printer doesn't print right to the edge (and margins are good for aesthetic reasons anyway), so I thought a 4x6 grid of flakes, each 1 inch square, would be nice.

The flake images are saved as PGM, part of the NetPBM family of formats, also known as PNM. There's a large suite of command line tools for all kinds of image processing; here are the ones I'll be using:

- pngtopnm
- convert PNG to PNM, fairly self explanatory.
- pnmcat
- stack images next to each other, either from top to bottom (-tb) or left to right (-lr).
- pnmsplit
- takes a stream of PNM images and writes each to a separate file.
- pgmtoppm
- I use it to invert the colours, but it gives PPM (RGB) output, so...
- ppmtopgm
- ...this converts RGB back to greyscale.
- pnmgamma
- gamma adjustment, 0.25 makes it darker.
- pnmscale
- the flakes are 1024px square, so we've been working at 1024dpi - but the output should be 300dpi, with fewer pixels
- pnmtopng
- converts to PNG, -force makes the PNG greyscale instead of possibly indexed palette, -interlace makes the PNG interlaced (useful for previewing over a slow network connection, mainly), and the -phys flag sets the DPI (300dpi is 11811 pixels per metre).
- xargs
- not a NetPBM package at all, but I use it here to group same-sized batches of input lines into the pnmcat command

Here's the whole process. Maybe run it one line at a time in case something explodes; you need a fair bit of free disk space too (all the NetPBM formats are uncompressed, each flake is 1MB, and there are multiple copies in the temporary files):

```sh
#!/bin/bash
mkdir cards
cd cards
gimp
# make 5120x512 black background greyscale image with no alpha > edge.png
# make 1500x2100 white background greyscale image with no alpha > back.png
snowglobe
# hit shift-S to start saving images to current directory, q to quit
# in another window keep track of how many files there are
# you need 24 per card and a few extra because:
geeqie
# delete any you find ugly, there's always a few huge dense ones...
pngtopnm < edge.png > edge.pgm
pngtopnm < back.png > back.pgm
mkdir 4x1
ls snowglobe-*.pgm | xargs -n 4 pnmcat -lr | ( cd 4x1 ; pnmsplit )
mkdir 4x6
ls 4x1/* | xargs -n 6 pnmcat -tb | ( cd 4x6 ; pnmsplit )
mkdir 5x7
cd 4x6
for i in *
do
  pnmcat -tb -black ../edge.pgm $i ../edge.pgm |
  pgmtoppm white-black |
  ppmtopgm |
  pnmgamma 0.25 |
  pnmscale -xysize 1500 2100 |
  pnmcat -lr ../back.pgm - |
  pnmtopng -force -interlace -phys 11811 11811 1 > ../5x7/$i.png
done
cd ../5x7
geeqie
# delete any you find ugly, often the last one is incomplete
# finally, print them - load up your printer with card blanks first
lp -d R220 -o landscape *.png
```

The final output images look something like this:

And when printed and folded, like this:

I hope my neighbours like them!
