I updated my Inflector Gadget, adding a keyframe animation feature among other goodies. I also made a new page for it, where all the downloads and documentation are to be found. Go check it out!

PS: Inflector Gadget can make images like these in very little time:

A live-coded bytebeat music session inspired by approaching autumn:

A bit stroboscopic, be careful if that affects you. See Falling Leaves on archive.org or download:

MKV (164 MB) | MP4 (61 MB) | OGV (46 MB)

Falling Leaves audio code log, plus video rendering with mplayer and ffmpeg:

mplayer -benchmark -nosound -demuxer rawvideo \
  -rawvideo fps=31.25:w=16:h=16:y8 -vo pnm falling-leaves.1u8
ffmpeg -i falling-leaves.flac -framerate 31.25 -i "%08d.ppm" -sws_flags neighbor \
  -filter:v "scale=w=1024:h=1024, pad=w=1920:h=1080:x=448:y=28:color=0x808080" \
  -pix_fmt yuv420p -crf:v 0 -codec:a copy falling-leaves.mkv

(I uploaded this a couple of weeks ago but didn't get around to blogging about it until today.)

Late last year I implemented some coupled continuous cellular automata, inspired by Softology's experiments. Now I'm finally getting around to blogging about it. I used OpenGL shaders, here's some of the fragment shader source of the main algorithm:

void main() {
    vec4 s1 = texture(state, coord, 1.0);
    vec4 s100 = texture(state, coord, 100.0);
    vec4 s;
    for (int k = 0; k < 4; ++k)
        s[k] = texture(state, coord, blur[k])[k];
    vec4 h = texture(history, coord);
    s = coupling * (s - s100) + h;
    s = speed * s;
    s = mix(s1, vec4(0.5) + 0.5 * cos(s), 0.125);
    h = mix(s, h, decay);
    state_out = s;
    history_out = h;
}

The non-linearity of the `cos()` on the coupled input acts like a "reaction"; the blurring (looking up reduced mipmap levels from the texture) acts like "diffusion".

Colouring is done with another affine matrix transform, the output of which is thresholded and clamped before an edge-detection filter is applied. The edge detection uses dFdx and dFdy, so the results are coarse (these derivatives are typically computed for blocks of 2x2 pixels rendered together in parallel). For better results the edge detection could be done in a separate pass, or the whole thing could run at double the resolution and be downscaled to screen size afterwards.

Here's a video of it in action:

Here are some static images:

Here is another video, from January when it was still in colour:

Here's where you can get the code:

git clone https://code.mathr.co.uk/cca.git

Future work might be to do proper Gaussian blurs instead of the cheap mipmap reduction (which yields squarish grid artifacts); a Gaussian is separable, so even large radii might be feasible in real time.

**EDIT** I worked on it some more, now in colour and with a
high quality mode that does Gaussian blur (on my system frame rate drops from
~60fps to between ~5fps and ~30fps depending on blur radius). Pictures:

I also added a mutation mode, which randomizes the parameters one at a time in random order. Here's a final example video showing off the new features:


After instrumenting Monotone with OpenGL timer queries I could see where the major bottleneck lay:

IFS( 7011.936000 ) FLAT( 544.672000 ) PBO( 2921.728000 ) SORT( 6797.760000 ) LUP( 71136.064000 ) TEX( 284.224000 ) DISP( 272.480000 )

LUP is the per-pixel binary-search lookup for histogram equalisation (to compress the dynamic range of the HDR fractal to something suitable for display); the preceding SORT generates the histogram from a 4x4-downscaled image. A quick calculation shows that LUP is taking 80% of the GPU time, so it is a good focus for optimisation efforts.
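That quick calculation can be checked directly from the log above (`stage_share` is just an illustrative helper, not part of Monotone):

```c
/* GPU timer query results in microseconds, copied from the log above:
   IFS, FLAT, PBO, SORT, LUP, TEX, DISP */
static const double timings[7] = { 7011.936, 544.672, 2921.728,
                                   6797.760, 71136.064, 284.224, 272.480 };

/* Fraction of total GPU time spent in stage i. */
static double stage_share(int i) {
    double total = 0;
    for (int k = 0; k < 7; ++k) total += timings[k];
    return timings[i] / total;
}
```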

The 4x4-downscaled image for the histogram is still a lot of pixels: 480x270 = 129600. LUP involves finding an index into this array, which gives a value with around 17 bits of precision. However, typical computer displays are only 8-bit (256 values), so the extra 9 random-access texture lookups per pixel needed to get a more accurate value are a waste of time and effort. Combined with downscaling by 8x8 instead of 4x4, the optimisation to compute a less accurate (but visually indistinguishable) histogram equalisation allows Monotone to run at 30fps at 1920x1080 full HD resolution. Here are the post-optimisation detailed timing metrics:

IFS( 7087.104000 ) FLAT( 509.888000 ) PBO( 2744.864000 ) SORT( 1409.440000 ) LUP( 15696.352000 ) TEX( 281.472000 ) DISP( 290.848000 )
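The optimisation can be sketched as follows, assuming the histogram is represented as the sorted brightness samples of the downscaled frame (hypothetical code, not Monotone's actual implementation): stopping the binary search after 8 halvings still distinguishes 256 output levels, which is all an 8-bit display can show.

```c
/* Hypothetical sketch of the reduced-precision lookup: sorted holds
   the n brightness samples of the downscaled frame in ascending
   order; the pixel's approximate rank becomes an 8-bit grey level.
   8 binary-search steps narrow the rank to within n/256, i.e. to
   8 bits of precision; the remaining ~9 steps would only refine a
   value the display cannot show. */
static unsigned char equalise8(const float *sorted, int n, float v) {
    int lo = 0, hi = n;
    for (int step = 0; step < 8 && lo < hi; ++step) {
        int mid = (lo + hi) / 2;
        if (sorted[mid] < v) lo = mid + 1; else hi = mid;
    }
    return (unsigned char)(255.0f * lo / n + 0.5f);
}
```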

A productive day!

My video piece Monotone has been accepted to MADATAC 08 (2017). The exhibition runs January 12th to February 5th, at Centro Conde Duque, Madrid, Spain, and there is also a screening on January 17th.

There were some issues with video codecs; this is the encoding that worked out:

ffmpeg -i video.mkv -i audio.wav \
  -pix_fmt yuv420p -codec:v libx264 -profile:v high -level:v 4.1 \
  -b:v 20M -b:a 192k \
  monotone.mov

My video piece Monotone has been accepted to the Mozilla Festival art exhibition. Mozilla Festival 2016 takes place October 28-30, at Ravensbourne College, London.

Since submitting the pre-rendered video loop I've been working on improving the real-time rendering mode of the Monotone software. The main bottleneck at the moment is the histogram equalisation to take the high dynamic range calculations down to a low dynamic range image for display. I did manage to get a large speed boost by calculating the histogram on a \(\frac{1}{4} \times \frac{1}{4}\) downscaled image, but on my hardware it only achieves \(\frac{1}{2} \times \frac{1}{2}\) of the desired resolution (HD 1920x1080). On my NVIDIA GTX 550 Ti with 192 CUDA cores I get 960x540 at 30 frames per second. Recent hardware has thousands of cores, so perhaps it's just a matter of throwing more grunt at the problem.

If you want to try it (and have OpenGL 4 capable hardware, and development headers installed for GLFW and JACK, among other things; only tested on Debian):

git clone https://code.mathr.co.uk/monotone.git
git clone https://code.mathr.co.uk/clive.git
cd monotone/src
make
./monotone

You can also browse the monotone source code repository. If you have a significantly more powerful GPU than mine, you can edit the source file monotone.cpp to change "#define SCALE 2" to "#define SCALE 1", which will make it target 1920x1080 instead of half that in each dimension. I'd love to hear back if you get it working (or if you have trouble getting it running, maybe I can help).

**UPDATE** Here are some photos. The projection was really impressive, so I'm satisfied even though the sound aspect was absent:

The festival was pretty interesting, with many many things all going on at once. The highlight was the Sonic Pi workshop (though I spent most of the time dist-upgrading to Debian Stretch so I could install it), and the Nature In Code workshop was also interesting (though it was packed full and uncomfortable, so I didn't attend the full session).

June's calendar image is an exponential spiral on the Riemann sphere, rotated so that both poles are visible, and tiled with hexagons that seem to pop into cubes in both directions. I originally implemented it in February 2013, and today I updated it (porting it to Fragmentarium along the way). As a bonus, here's a video of the sphere rotating (flattened to 2D using stereographic projection):

You can download a high resolution Loxodrome video and the Loxodrome Fragmentarium source code.

Can't decide what to watch on TV? Why not watch all the channels at once, in an infinite fractal zoom...

git clone http://code.mathr.co.uk/fractal-channel-hopping.git

Check the **README** for instructions. You
need a fairly beefy machine with a good broadband connection,
plus it's probably UK-only due to BBC geographic restrictions.

(I originally wrote it in 2011 and only announced it on the
Openlab London mailing list; then it broke for a while, but it
seems **get-iplayer** has been updated and I got
it going again with some minor edits.)

In a previous post on stretching cusps in the Mandelbrot set I used Moebius transformations to map between generalized circles (circles plus straight lines), in particular mapping three points on a circle to a straight line through \(0\), \(1\) and \(\infty\). Yesterday I was wondering how to animate the transition, which requires interpolating between Moebius transformations.

Some research online led me to David Speyer's answer on mathoverflow, which suggested using a matrix representation and interpolating that:

\[h(t) = f \exp(t \log (f^{-1} g)) \]

The matrix representation for a Moebius transformation is:

\[ \left( \begin{array}{cc} a & b \\ c & d \end{array} \right) \sim \frac{a z + b}{c z + d} \]

and the trace \(\mathrm{tr}\), determinant \(\det\) and inverse \(.^{-1}\) of a 2x2 matrix are:

\[ \begin{aligned} \mathrm{tr} \left( \begin{array}{cc} a & b \\ c & d \end{array} \right) &= a + d \\ \det \left( \begin{array}{cc} a & b \\ c & d \end{array} \right) &= a d - b c \\ \left( \begin{array}{cc} a & b \\ c & d \end{array} \right)^{-1} &= \frac{1}{a d - b c} \left( \begin{array}{cc} d & -b \\ -c & a \end{array} \right) \end{aligned} \]

Matrix \(\exp\) and \(\log\) can be defined by power series, but it's also possible to compute them using diagonalization: the diagonalization of a matrix \(M\) is a pair of matrices \(D\), \(P\) such that \(D\) is diagonal (all elements not on the diagonal are \(0\)) and \(M = P D P^{-1}\). Then \(\exp\) and \(\log\) on \(M\) simplify to element-wise on the diagonal elements of \(D\):

\[ \begin{aligned} \exp M &= P \exp(D) P^{-1} \\ \log M &= P \log(D) P^{-1} \end{aligned} \]

Diagonalizing a matrix involves computing eigenvalues and eigenvectors, which can be quite an involved process. But for 2x2 matrices there is a simple closed form solution:

\[ \begin{aligned} M &= \left( \begin{array}{cc} a & b \\ c & d \end{array} \right) \\ \lambda_\pm &= \frac{\mathrm{tr}{M}}{2} \pm \sqrt{\frac{(\mathrm{tr}M)^2}{4} - \det{M}} \\ D &= \left( \begin{array}{cc} \lambda_+ & 0 \\ 0 & \lambda_- \end{array} \right) \\ P &= \left\{ \begin{array}{l l} \left( \begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array} \right) & \quad b = c = 0 \\ \left( \begin{array}{cc} b & b \\ \lambda_+ - a & \lambda_- - a \end{array} \right) & \quad b \ne 0 \\ \left( \begin{array}{cc} \lambda_+ - d & \lambda_- - d \\ c & c \end{array} \right) & \quad c \ne 0 \end{array} \right. \end{aligned} \]

I implemented this method, but sometimes the transitions go a long way round instead of taking a more direct route. This is hinted at in the other answers on the mathoverflow page, but I haven't found a solution for complex-valued matrices yet (negating when the eigenvalues are negative is tricky, because how do you define negative for a complex number?).

I uploaded a short video demoing this Moebius transformation interpolation: Mandelbrot Moebius Experiments. You can download the C99 source code that generated the video frames. It depends on my pre-release mandelbrot libraries mandelbrot-numerics (git HEAD at 3d55cfc99cc97decb0ba0d9c2fb271a504b8e504) and mandelbrot-graphics (git HEAD at ac34f197c1fc7ce13da2d3a056f4e6d588a82f1f).

Back in 2009 I blogged about Crystalline Cortex from 2006. The original code doesn't work any more (GridFlow changed syntax years ago, and seems to be no longer maintained), so I rewrote it in C99. Usage: run 'make run'; you need 'mplayer' and 'pngtopnm' along with the usual development tools.

Source code: crystalline-cortex.tar.bz2

A teaser trailer for Haystack Situations:

Prints will be available soon; stay tuned for further announcements.

Download options:

| Format | 1080p | 720p  | DVD   | WebM   | MP4    | Ogg    |
|--------|-------|-------|-------|--------|--------|--------|
| Size   | 52 MB | 23 MB | 21 MB | 5.7 MB | 5.7 MB | 5.9 MB |

The maths behind it involves icosahedral rotational symmetry and stereographic projection.

Mandelbrot Set fractal zoom music video available on the Internet Archive:

Made with mightymandel, using techniques from optimizing zoom animations to blend image sequences with *Pure-data* and *Gem*.