Pau Ros took some great pictures of my exhibition opening, part of Sonic Electronics Festival:

The exhibition is open until 27th April. Check the Chalton Gallery website for spacetime coordinates.

I have an exhibition coming up in April 2019 in London, UK.

Claude Heiland-Allen

Digital Art - Computer Graphics - Free/Libre Open Source Software

Chalton Gallery, 96 Chalton Street, Camden, London UK NW1 1HJ

Opening Thursday 11 April 2019, 6pm.

Concert Thursday 18 April 2019, 7pm.

Exhibition runs 12-27 April 2019.

Tuesdays: 8 am to 3 pm

Wednesday to Saturday: 11:30 am to 5:45 pm

*Digital print 120x60cm, framed*

Prismatic is rendered using a physics-based ray-tracer for spherically curved space. In spherical space the light ray geodesics eventually wrap around, meeting at the opposite pole to the observer. To compound the sphericity, a projection is used that wraps the whole sphere-of-view from a point into a long strip.

The scene contains spheres of three different transparent materials (water, glass, quartz) symmetrically arranged at the vertices of a 24-cell. The equatorial plane is filled with a glowing opaque checkerboard, which acts as a light source with a daylight spectrum.

The 3D spherical space is embedded in 4D Euclidean (flat) space. Ray directions are represented by points on the “equator” around the ray source, and trigonometry transforms these directions appropriately as the rays are traced through curved space. The code is optimized to use simpler functions like square root and arithmetic instead of costly sines and cosines.
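The geodesic wrap-around can be sketched with plain trigonometry. This is a minimal, unoptimized illustration (the production renderer replaces the sines and cosines with square roots and arithmetic, as noted above); `vec4` and `advance` are names invented here, not taken from the Prismatic source.

```cpp
#include <array>
#include <cassert>
#include <cmath>

// A point on the unit 3-sphere and a tangent direction at that point
// are both unit 4-vectors, kept orthogonal to each other.
typedef std::array<double, 4> vec4;

// Advance along the great-circle geodesic by arc length t:
//   p' =  cos(t) p + sin(t) d
//   d' = -sin(t) p + cos(t) d
void advance(vec4 &p, vec4 &d, double t)
{
  double c = std::cos(t), s = std::sin(t);
  for (int i = 0; i < 4; ++i)
  {
    double pi = p[i], di = d[i];
    p[i] =  c * pi + s * di;
    d[i] = -s * pi + c * di;
  }
}
```

Advancing by an arc length of pi lands the ray exactly at the antipode of its source, which is why all geodesics from the observer meet again at the opposite pole.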

The materials are all physically based, with refractive index varying with simulated light wavelength, which gives rainbow effects when different colours are refracted by different angles. Producing the final image requires tracing a monochrome image at many different wavelengths, which are then combined into the XYZ colour space using tristimulus response curves for the light receptors in the human eye.

*Digital prints 20x30cm, 16 pieces, unframed*

The concept for Wedged is “playing Tetris optimally badly”. Badly in that no row is complete, and optimally in that each row has at most one empty cell, and the rectangle is filled. Additional aesthetic constraints are encoded in the source code to generate more pleasing images.

Starting from an empty rectangle, block off one cell in each row, subject to the constraint that blocked cells in nearby rows shouldn’t be too close to each other, and the blocked cells should be roughly evenly distributed between columns. Some of these blockings might be duplicates (taking into account mirroring and rotation), so pick only one from each equivalence class.
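The de-duplication step above can be sketched by picking a canonical representative of each equivalence class. This assumes a blocking is stored as one blocked column per row; the three symmetries of a rectangle that preserve that representation are the left-right mirror, the top-bottom flip, and their combination (a 180-degree rotation). The function name and representation are inventions for illustration, not taken from the Wedged source.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// A blocking: one blocked column index per row.  The canonical
// representative of an equivalence class is the lexicographically
// smallest of the four symmetric variants.
std::vector<int> canonical(std::vector<int> b, int width)
{
  std::vector<int> best = b;
  for (int flip = 0; flip < 2; ++flip)
  {
    std::vector<int> m = b;
    if (flip) std::reverse(m.begin(), m.end()); // top-bottom flip
    best = std::min(best, m);
    for (int &c : m) c = width - 1 - c;         // left-right mirror
    best = std::min(best, m);
  }
  return best;
}
```

Two blockings are duplicates exactly when their canonical forms are equal, so keeping only boards whose blocking equals its own canonical form picks one member per class.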

Starting from the top left empty cell in each of these boards, fill it with pieces that fit. Fitting means that the piece is entirely within the board, not overlapping any blocked cells or other pieces. There are some additional constraints to improve aesthetic appearance and reduce the number of output images: there should not be too many pieces of the same colour in the board, all adjacent pieces should be a different colour, and no piece should be able to slide into the space left when blocked cells are removed (this applies only to the long thin blue pieces, the other pieces can’t move due to the earlier constraint on nearby blocked cells).

The filling process has some other aesthetic constraints: the board must be diverse (there must be a wide range of distinct colours in each row and column), the complete board must have a roughly even number of pieces of each colour, and there shouldn’t be any long straight line boundaries between multiple pieces. The complete boards might have duplicates under symmetries (in the case that the original blocking arrangement was symmetrical), so pick only one from each equivalence class.

*Sound installation*

Generative techno. Dynamo creates music from carefully controlled randomness, using numbers to invent harmonies, melodies, and rhythms. Dynamo is a Pure-data patch which plays new techno tracks forever. It is a generative system, and not a DJ mix.

When it is time to generate a new track, Dynamo first picks some high level parameters like tempo, density, and the scale of notes to use. Then it fills in the details, such as the specific rhythms of each instrument and which notes to play in which order. Finally an overall sequence is applied to form the large scale musical structure.

Pure-data is deterministic, which makes Dynamo deterministic. To avoid the same output each time the patch is started, entropy is injected from outside the Pure-data environment.
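The overall shape, deterministic generation from a seed with outside entropy choosing the seed, can be sketched in C++. Dynamo itself is a Pure-data patch; the structure, field names, and parameter ranges below are all invented for illustration.

```cpp
#include <cassert>
#include <cstdint>
#include <random>

// High-level parameters chosen at the start of each new track.
struct TrackParams
{
  int bpm;     // tempo
  int density; // percentage of steps that trigger a note
  int scale;   // index into a table of note scales
};

// Deterministic given the seed: the same seed always yields the same
// track.  Injecting an outside seed (e.g. the startup time) is what
// keeps successive runs from repeating.
TrackParams generate(std::uint32_t seed)
{
  std::mt19937 rng(seed);
  TrackParams p;
  p.bpm     = 120 + int(rng() % 41); // 120..160 BPM
  p.density = 25  + int(rng() % 51); // 25..75 %
  p.scale   = int(rng() % 7);        // one of 7 scales
  return p;
}
```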

*Audio-visual installation*

Sliding tile puzzles have existed for over a century. During the 15-puzzle craze of 1880, a cash prize was offered for a problem with no solution. In the Puzzle presented here, the computer manipulates the tiles. There is no malicious design, but insufficient specification means that no solution can be found; the automaton forever explores the state space, finding every way to position the tiles as good as the last…

Each tile makes a sound, and each possible position has a processing effect associated with it. Part of the Puzzle is to watch and listen carefully, to see and hear and try to pick apart what it is that the computer is doing, to reverse-engineer the machinery inside from its outward appearance. The video is built using eight squares, each coloured tile is textured with the whole Puzzle, descending into an infinite fractal cascade. The control algorithm is a Markov Chain that avoids repetition.
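The "Markov Chain that avoids repetition" idea can be sketched as a move chooser that never immediately undoes the previous move. The actual control logic lives in pdlua; this C++ sketch, and all the names in it, are illustrative only.

```cpp
#include <cassert>
#include <random>
#include <vector>

// Moves slide a tile into the blank; up/down and left/right are inverses.
enum Move { Up, Down, Left, Right };

Move inverse(Move m)
{
  switch (m)
  {
    case Up:   return Down;
    case Down: return Up;
    case Left: return Right;
    default:   return Left;
  }
}

// Choose the next move uniformly from the legal ones, excluding the
// inverse of the previous move: a simple Markov chain that avoids the
// most obvious repetition.
Move next_move(const std::vector<Move> &legal, Move previous, std::mt19937 &rng)
{
  std::vector<Move> candidates;
  for (Move m : legal)
    if (m != inverse(previous))
      candidates.push_back(m);
  if (candidates.empty()) // dead end: undoing is the only option
    return inverse(previous);
  return candidates[rng() % candidates.size()];
}
```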

Puzzle is implemented in Pure-data, using GEM for video and pdlua for the tile-control logic.

*Interactive installation*

A graph is a set of nodes and links between them. In GraphGrow the term is overloaded: there are visible graphs of nodes and links on the tablet computer, and a second implicit graph with links between the rules.

The visible graphs give GraphGrow its name - a fractal is grown from a seed graph by replacing each visible link with its corresponding rule graph, recursively. The correspondence is by colour: a yellow link corresponds to the graph with yellow background, and so on. The implicit graph between rules thus *directs* the expansion. The implicit graph is also a *directed graph* (even more terminological overloading!).

The rule graphs are constrained, with two fixed nodes at left and right. When growing a graph, each link is replaced with the corresponding rule graph with the left-hand fixed node of the rule mapped to the start point of the link and the right-hand fixed node of the rule mapped to the end point of the link. The mapping is restricted to uniform scaling, rotation and translation. The fixed nodes are coloured white on the tablet.
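With the two fixed nodes placed at 0 and 1 in the complex plane, the mapping described above is a one-liner. This is a sketch under that assumed coordinate convention, not code from GraphGrow itself; note that complex multiplication gives exactly uniform scaling plus rotation plus translation (a conjugate would add reflections, which are excluded here).

```cpp
#include <cassert>
#include <complex>

typedef std::complex<double> C;

// A rule graph has fixed nodes at 0 (left) and 1 (right).  The unique
// similarity taking 0 to the link's start a and 1 to its end b is
//   z -> a + (b - a) * z
C apply_link(C a, C b, C z)
{
  return a + (b - a) * z;
}
```

Growing one level of the fractal means pushing every node of the rule graph through `apply_link` for every link it replaces.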

The fractal is projected, along with rhythmic drones amplified through speakers. Both are generated from the graph data. Dragging the brightly coloured nodes on the tablet in each of the four rule graphs allows the gallery visitor to explore a subspace of graph-directed iterated function systems of similarities.

*Video installation*

Fractals are mathematical objects exhibiting detail at all scales. Escape-time fractals are plotted by iterating recurrence relations parameterised by pixel coordinates from a seed value until the values exceed an escape radius or until an arbitrary limit on iteration count is reached (this is to ensure termination, as some pixels may not escape at all). The colour of each pixel is determined by the distance of the point from the fractal structure: pixels near the fractal are coloured black and pixels far from the fractal are coloured white, or the reverse.

Escape-time fractals are generated by formulas, for example the Mandelbrot set emerges from *z* → *z*^{2} + *c* and the Burning Ship emerges from *x* + *i**y* → (|*x*| + *i*|*y*|)^{2} + *c*, where *c* is the coordinates of each pixel. Hybrid fractals combine different formulas into one more complicated formula: for example one might perform one iteration of the Mandelbrot set formula, then one iteration of the Burning Ship formula, then two more iterations of the Mandelbrot set formula, repeating this sequence in a loop.
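A minimal escape-time sketch of such a hybrid, alternating one Mandelbrot step with one Burning Ship step (a simpler interleaving than the 1-1-2 example above), might look like this:

```cpp
#include <cassert>
#include <cmath>
#include <complex>

typedef std::complex<double> C;

// Returns the iteration count at escape, or maxiter if the point
// never escaped (treated as interior).
int hybrid(C c, int maxiter)
{
  C z = 0.0;
  for (int n = 0; n < maxiter; ++n)
  {
    if (std::abs(z) > 4.0)
      return n; // escaped: colour by n, or by a distance estimate
    if (n % 2 == 0)
      z = z * z + c; // Mandelbrot step
    else
    {
      C w(std::fabs(z.real()), std::fabs(z.imag())); // Burning Ship step
      z = w * w + c;
    }
  }
  return maxiter;
}
```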

Claude Heiland-Allen is an artist from London interested in the complex emergent behaviour of simple systems, unusual geometries, and mathematical aesthetics.

From 2005 through 2011 Claude was a member of the GOTO10 collective, whose mission was to promote Free/Libre Open Source Software in Art. GOTO10 projects included the make art Festival (Poitiers, France), the Puredyne GNU/Linux distribution, and the GOSUB10 netlabel. Since 2011 he has continued as an unaffiliated independent artist and researcher.

Claude has performed, exhibited and presented internationally, including in the United Kingdom (London, Cambridge, Winchester, Lancaster, Oxford, Sheffield), the Netherlands (Leiden, Amsterdam), Austria (Linz, Graz), Germany (Cologne, Berlin), France (Toulouse, Poitiers, Paris), Spain (Gijon), Norway (Bergen), Slovenia (Maribor), Finland (Helsinki), and Canada (Montreal).

Claude’s larger artistic projects include RDEX (an exploration of digitally simulated reaction diffusion chemistry) and clive (a minimal environment for live-coding audio in the C programming language). As a software developer, Claude has developed several programs and libraries used by the wider free software community, including pdlua (extending the Puredata multimedia environment with the Lua programming language), buildtorrent (a program to create .torrent files), and hp2pretty (a program to graph Haskell heap profiling output).

The current work-in-progress version of GraphGrow has three components that communicate via OSC over the network. The visuals are rendered in OpenGL using texture array feedback; this process is graphgrow-video. The transformations are controlled by graphgrow-iface, with the familiar nodes-and-links graphical user interface. The interface runs on Linux using GLFW, and I'm working on an Android port for my tablet using GLFM. The component I'll be talking about in this post is the graphgrow-audio engine, which makes sounds using an audio feedback delay network with the same topology as the visual feedback network. Specifically, I'll be writing up my notes on what I did to make it around twice as CPU-efficient, while still making nice sounds.

First up, I tried gprof, but after following the instructions I only got an empty profile. My guess is that it doesn't like JACK doing the audio processing in a separate realtime thread. So I switched to perf:

```
perf record ./graphgrow-audio
perf report
```

Here's the first few lines of the first report, consider it a baseline:

```
Overhead  Command          Shared Object    Symbol
  34.41%  graphgrow-audio  graphgrow-audio  [.] graphgrow::operator()
  18.11%  graphgrow-audio  libm-2.24.so     [.] expm1f
  14.27%  graphgrow-audio  graphgrow-audio  [.] audiocb
   8.69%  graphgrow-audio  libm-2.24.so     [.] sincos
   8.34%  graphgrow-audio  libm-2.24.so     [.] __logf_finite
   4.47%  graphgrow-audio  libm-2.24.so     [.] tanhf
```

That was after I already made some algorithmic improvements: I had 32 delay lines, all of the same length, with 64 delay line readers in two groups of 32, each group reading at the same offset every sample. This meant there was a lot of duplicated work calculating the delay line interpolation coefficients. I factored out the computation of the delay coefficients into another struct, which could be calculated 1x per sample instead of 32x per sample. Then the delay readers are passed the coefficients, instead of computing them themselves.
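The factoring can be sketched like this, using linear interpolation for simplicity (the actual interpolation scheme in graphgrow-audio may differ, and the names here are invented): the coefficients are computed once per sample and every reader at that offset reuses them.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Interpolation coefficients for one fractional delay time, computed
// once per sample and shared by every reader at that offset.
struct DelayCoeffs
{
  std::size_t index; // integer part of the delay
  double w0, w1;     // linear interpolation weights
  DelayCoeffs(double delay_samples)
  {
    index = std::size_t(delay_samples);
    w1 = delay_samples - double(index);
    w0 = 1.0 - w1;
  }
};

// A reader just applies the precomputed weights to its own circular
// buffer; write_pos is the index of the most recently written sample.
double read(const std::vector<double> &buffer, std::size_t write_pos,
            const DelayCoeffs &c)
{
  std::size_t n = buffer.size();
  std::size_t i = (write_pos + n - c.index) % n;
  std::size_t j = (i + n - 1) % n;
  return c.w0 * buffer[i] + c.w1 * buffer[j];
}
```

With 64 readers in two groups sharing one offset per group, this replaces 64 coefficient computations per sample with 2.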

Looking at what to optimize, the calls to expm1f() seem to be a big target. Looking through the code I saw that I had 32 dynamic range compressors, each doing RMS to dB (and back) conversions every sample, which means a lot of log and exp. My compressor had a ratio of 1/8, so I replaced the gain logic by a version that worked in RMS with 3x sqrt instead of 1x log + 1x exp per sample:

```diff
index a6ba512..588d098 100644
--- a/graphgrow3/audio/graphgrow-audio.cc
+++ b/graphgrow3/audio/graphgrow-audio.cc
@@ -549,18 +549,25 @@ struct compress
   sample factor;
   hip hi;
   lop lo1, lo2;
+  sample thresrms;
   compress(sample db)
   : threshold(db)
   , factor(0.25f / dbtorms((100.0f - db) * 0.125f + db))
   , hi(5.0f), lo1(10.0f), lo2(15.0f)
+  , thresrms(dbtorms(threshold))
   { };
   signal operator()(const signal &audio)
   {
     signal rms = lo2(0.01f + sqrt(lo1(sqr(hi(audio)))));
+#if 0
     signal db = rmstodb(rms);
     db = db > threshold ? threshold + (db - threshold) * 0.125f : threshold;
     signal gain = factor * dbtorms(db);
+#else
+    signal rms2 = rms > thresrms ? thresrms * root8(rms / thresrms) : thresrms;
+    signal gain = factor * rms2;
+#endif
     return tanh(audio / rms * gain);
   };
 };
```
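The arithmetic behind the substitution can be checked in isolation. Assuming Pd-style conversions where 100 dB corresponds to an RMS of 1 (and ignoring Pd's clamping of tiny values), applying a 1/8 ratio in the dB domain is the same as taking an eighth root in the RMS domain, and an eighth root is three nested square roots:

```cpp
#include <cassert>
#include <cmath>

// Pd-style conversions: 100 dB corresponds to RMS 1.
double dbtorms(double db)  { return std::pow(10.0, (db - 100.0) / 20.0); }
double rmstodb(double rms) { return 100.0 + 20.0 * std::log10(rms); }

// Eighth root as three square roots: cheaper than log + exp.
double root8(double x) { return std::sqrt(std::sqrt(std::sqrt(x))); }

// Gain curve with ratio 1/8 above threshold, in the dB domain...
double gain_db(double rms, double threshold)
{
  double db = rmstodb(rms);
  db = db > threshold ? threshold + (db - threshold) * 0.125 : threshold;
  return dbtorms(db);
}

// ...and the equivalent curve computed directly in the RMS domain.
double gain_rms(double rms, double thresrms)
{
  return rms > thresrms ? thresrms * root8(rms / thresrms) : thresrms;
}
```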

This seemed to work; the new perf output was:

```
Overhead  Command          Shared Object    Symbol
  38.89%  graphgrow-audio  graphgrow-audio  [.] graphgrow::operator()
  22.11%  graphgrow-audio  libm-2.24.so     [.] expm1f
  10.78%  graphgrow-audio  libm-2.24.so     [.] sincos
   5.76%  graphgrow-audio  libm-2.24.so     [.] tanhf
```

The numbers are higher, but this is actually an improvement, because if graphgrow::operator() goes from 34% to 39%, everything else has gone from 66% to 61%, and I didn't touch graphgrow::operator(). Now, there are still some large amounts of expm1f(), but none of my code calls that, so I made a guess: perhaps tanhf() calls expm1f() internally? My compressor used tanh() for soft-clipping, so I tried simply removing the tanh() call and seeing if the audio would explode or not. In my short test, the audio was stable, and CPU usage was greatly reduced:

```
Overhead  Command          Shared Object    Symbol
  60.53%  graphgrow-audio  graphgrow-audio  [.] graphgrow::operator()
  17.62%  graphgrow-audio  libm-2.24.so     [.] sincos
  11.51%  graphgrow-audio  graphgrow-audio  [.] audiocb
```

The next big target was sincos(), using 18% of the CPU. The lack of an 'f' suffix told me it was being computed in double precision, and the only place in the code doing double precision maths was the resonant filter biquad implementation. The calculation of the coefficients used sin() and cos() at double precision, so I swapped them out for single precision polynomial approximations (9th order; I blogged about them before). The approximation is roughly accurate (only a bit or two out) for float (24 bits), which should be enough: it only controls the angle of the poles, and a few cents (or more, I didn't check) of error isn't much to worry about in my context. Another big speed improvement:

```
Overhead  Command          Shared Object    Symbol
  85.48%  graphgrow-audio  graphgrow-audio  [.] graphgrow::operator()
  11.45%  graphgrow-audio  graphgrow-audio  [.] audiocb
   1.22%  graphgrow-audio  libc-2.24.so     [.] __isinff
   0.64%  graphgrow-audio  libc-2.24.so     [.] __isnanf
   0.41%  graphgrow-audio  graphgrow-audio  [.] graphgrow::graphgrow
```
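For a flavour of what a 9th-order odd polynomial approximation to sin() looks like, here is a sketch using plain Taylor coefficients evaluated with Horner's rule in the squared argument. The coefficients actually used in graphgrow-audio are a minimax fit (covered in the earlier blog post), which spreads the error far more evenly over the whole input range than Taylor does; this version is only accurate near zero.

```cpp
#include <cassert>
#include <cmath>

// 9th-order odd polynomial approximation to sin(x): Taylor coefficients
// for illustration, evaluated in x^2 via Horner's rule, so the whole
// thing costs a handful of multiplies and adds instead of a libm call.
float sin_poly(float x)
{
  float x2 = x * x;
  return x * (1.0f + x2 * (-1.0f / 6.0f
            + x2 * (1.0f / 120.0f
            + x2 * (-1.0f / 5040.0f
            + x2 * (1.0f / 362880.0f)))));
}
```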

perf has a mode that annotates the assembly and source with hot instructions; looking at that showed that the resonator was using double precision sqrt() to calculate the gain, where single precision sqrtf() would be enough:

```
Overhead  Command          Shared Object    Symbol
  88.70%  graphgrow-audio  graphgrow-audio  [.] graphgrow::operator()
   8.49%  graphgrow-audio  graphgrow-audio  [.] audiocb
   1.24%  graphgrow-audio  libc-2.24.so     [.] __isinff
   0.65%  graphgrow-audio  libc-2.24.so     [.] __isnanf
```

Replacing costly double precision calculations with cheaper single precision calculations was fun, so I thought about how to refactor the resonator coefficient calculations some more. One part that definitely needed high precision was the calculation of 'r = 1 - t' with t near 0. But I saw some other code was effectively calculating '1 - r', which I could replace with 't', and make it single precision. Again, some code was doing '1 - c * c' with c the cosine of a value near 0 (so 'c' is near 1 and there is catastrophic cancellation), using basic trigonometry this can be replaced by 's * s' with s the sine of the value. However, I kept the final recursive filter in double precision, because I had bad experiences with single precision recursive filters in Pure-data (vcf~ had strongly frequency-dependent ring time, porting to double precision fixed it).

```
Overhead  Command          Shared Object    Symbol
  87.86%  graphgrow-audio  graphgrow-audio  [.] graphgrow::operator()
   9.18%  graphgrow-audio  graphgrow-audio  [.] audiocb
   1.37%  graphgrow-audio  libc-2.24.so     [.] __isinff
   0.68%  graphgrow-audio  libc-2.24.so     [.] __isnanf
```

The audiocb responds to OSC from the user interface process. It took so much time because it was busy-looping, waiting for the JACK processing to be idle, which was rare because at this point I still hadn't got the CPU load down to something that could run XRUN-free in realtime. I made it stop doing that, at the cost of increasing the likelihood of a race condition when storing the data from OSC:

```
Overhead  Command          Shared Object    Symbol
  96.87%  graphgrow-audio  graphgrow-audio  [.] graphgrow::operator()
   1.49%  graphgrow-audio  libc-2.24.so     [.] __isinff
   0.80%  graphgrow-audio  libc-2.24.so     [.] __isnanf
```

Still not running in realtime, I took drastic action: computing the resonator filter coefficients only every 64 samples instead of every sample, and linearly interpolating the 3 values (1x float for gain, 2x double for feedback coefficients). This is not a really nice way to do it from a theoretical standpoint, but it's way more efficient. I also check for NaN or Infinity only at the end of each block of 64 samples (if that happens I replace the whole block with zeroes/silence), which is also a bit of a hack - exploding filters sound bad whatever you do to mitigate them, but I haven't managed to make it explode very often.
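The block-rate coefficient smoothing can be sketched like this (the struct and method names are invented, not from the graphgrow-audio source): the expensive computation runs once per block, and per sample only one add remains.

```cpp
#include <cassert>
#include <cmath>

// Recompute an expensive coefficient once per block and linearly
// interpolate from the old value to the new one across the block.
struct SmoothedCoeff
{
  double current, step;
  SmoothedCoeff() : current(0.0), step(0.0) { }
  // called once per block with the freshly computed target value
  void retarget(double target, int block_size)
  {
    step = (target - current) / block_size;
  }
  // called once per sample: one addition instead of a full recompute
  double next()
  {
    current += step;
    return current;
  }
};
```

After block_size calls to next(), current has reached the target exactly (up to rounding), ready for the next retarget.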

Success: now it was using 60% of the CPU, comfortably running in real time with no XRUNs. So I added in a (not very accurate, but C2 continuous) rational approximation of tanh() to the compressor that I found on musicdsp (via Pd-list):

```cpp
signal tanh(const signal &x)
{
  signal x2 = x * x;
  return x < -3.0f ? -1.0f
       : x >  3.0f ?  1.0f
       : x * (27.0f + x2) / (27.0f + 9.0f * x2);
}
```

CPU usage increased to 72% (I have been doing all these tests with the CPU frequency scaling governor set to performance mode so that figures are comparable). I tried g++-8 instead of g++-6, CPU usage reduced to 68%. I tried clang++-3.8 and clang++-6.0, which involved some invasive changes to replace 'c?a:b' (over vectors) with a 'select(c,a,b)' template function, but CPU usage was over 100% with both versions. So I stuck with g++-8.
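The `select(c,a,b)` change can be illustrated with a scalar overload plus an element-wise overload. The real code uses GCC vector extensions, where `c ? a : b` over vectors is the GCC-only construct clang++ rejected; `std::array` stands in for the vector types here.

```cpp
#include <array>
#include <cassert>
#include <cstddef>

// Scalar case: identical to c ? a : b.
template <typename T>
T select(bool c, const T &a, const T &b)
{
  return c ? a : b;
}

// Element-wise case for fixed-size vectors, replacing the ternary
// operator over GCC vector types that clang++ would not accept.
template <typename T, std::size_t N>
std::array<T, N> select(const std::array<bool, N> &c,
                        const std::array<T, N> &a,
                        const std::array<T, N> &b)
{
  std::array<T, N> r;
  for (std::size_t i = 0; i < N; ++i)
    r[i] = c[i] ? a[i] : b[i];
  return r;
}
```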

The last thing I did was an algorithmic improvement: I was doing 4x the necessary amount of work in one place. Each of 4 "rules" gets fed through 4 "edges" per rule, and each edge pitchshifted its rule down an octave. By shifting (ahem) the pitchshifting from being internal to the edge to being internal to the rule, I saved 15% CPU (relative), 10% CPU (absolute): now there are only 8 pitchshifting delay lines instead of 32.

In conclusion, today I brought the CPU usage of graphgrow-audio down from "way too high to run in realtime" to "60% of 1 core", benchmarking on an Intel Atom N455 1.66GHz netbook running Linux in 32bit (i686) mode. A side channel may be lurking, as the CPU usage (in htop) of graphgrow-audio goes up to 80% temporarily while I regenerate my blog static site...

Seven years of archives for the netbehaviour mailing list filtered down to the most often occurring 1000 words of 4 letters or more, arranged in an infinite fractal zoom - each word is made up of the words that most likely follow it.

This is from six months ago but I didn't post about it yet.

Video available on the Internet Archive:

Made with an early version of the next generation of *graphgrow*,
whose source code is available here:

git clone http://code.mathr.co.uk/graphgrow.git

Or browse graphgrow on Gitorious, or graphgrow on code.mathr.co.uk.

A decade of irc logs for the #haskell irc channel on freenode filtered down to the most often occurring 1000 words of 4 letters or more, arranged in an infinite fractal zoom - each word is made up of the words that most likely follow it.

This is from six months ago but I didn't post about it yet.

Video available on the Internet Archive:

Made with an early version of the next generation of *graphgrow*,
whose source code is available here:

git clone http://code.mathr.co.uk/graphgrow.git

Or browse graphgrow on Gitorious, or graphgrow on code.mathr.co.uk.

You might remember my project **GraphGrow**, which started
as an **SVG** plus **ECMAScript** system for
designing **Graph-Directed Iterated Function Systems**,
then developed into a command-line video renderer written in
**C** for higher performance.

The project has been dormant for some months, but today I resumed
work on a new facet: a realtime preview system using **Pd**,
**Gem** and **Lua**. It's currently rather
rough/hardcoded - you have to make a new patch for each structure, and
there is currently a key piece in the puzzle missing (conversion from
**[gemlist_info]** to a format that the **[graphgrow]**
scene exporter understands).

On the plus side, the realtime rendering (implemented with OpenGL
texture feedback) makes it really quick to try out different ways of
animating the fractals, and once the exporter is working it should be
possible to record animations and later render high quality videos with
**graphgrow-engine**.

Patches are in my SVN under "2008/gg/", if you're curious.

**UPDATE**: I got the Gem->GraphGrow bridge working,
apart from a tiny issue (GraphGrow output is upside down compared to Gem
output) that should be easy to fix.

GraphGrow in SVN now has code to calculate the **Hausdorff
dimension** of the generated fractal. However, this calculation
is only guaranteed to be valid if an **open set condition**
holds. Moreover, there is **no algorithm** to check if
the condition holds (because there is no algorithm to construct the
feasible open sets required).

I guess I'll have to add a big fat warning sticker on the info pane of GraphGrow saying that the calculated dimension isn't necessarily valid, but it's frustrating that it seems impossible to check algorithmically if it's valid or not.

References

- A Fractal Dimension Estimate For A Graph-Directed IFS Of Non-Similarities -- G A Edgar, Jeffrey Golds
- Multifractal Decompositions Of Digraph Recursive Fractals -- G A Edgar, R Daniel Mauldin
- On The Open Set Condition For Self-Similar Fractals -- Christoph Bandt, Nguyen Viet Hung, Hui Rao

Footnote: I'm not a professional mathematician, so my ramblings above might be completely wrong.

I've been interested in doing something interactive with
**SVG** plus **Javascript** (or
**ECMAScript**) for a while, since I saw
Andre Schmidt's SVG GUI
widget ideas with JavaScript. I was inspired by remembering parts
of Benoit Mandelbrot's **The Fractal Geometry Of Nature**
in which a "seed graph" was recursively expanded into a fractal form
by replacing each edge with a "rule graph". The process can lead to
images like the above and below:

Tested with Firefox 1.5; it might not work properly in other browsers. No instructions yet, just click around and see what happens. There are some glitches in the event handling, which I hope to fix soon.

UPDATE: I recorded a short video showing how it works: GraphGrow video (Ogg Theora format, no sound).

UPDATE2: there is now a GraphGrow User Manual.
