Soon:

ALGORITHMS ARE INFECTING DISCOS AND RUINING LIVES

Book online (£10-£15): Resident Advisor // Party For The People

The Algorave scene was born in London back in 2012 and has since spread to around 90 cities worldwide; it is regularly billed as "the future of electronic music" in Wired magazine articles. Now enough producers are exploring algorithmic methods that we briefly declare ALGORAVE IS THE PRESENT OF ELECTRONIC MUSIC before becoming a footnote in history.

This one will be a corker though: two rooms full of algorithmic bangers in Elephant and Castle's lovely Corsica Studios. The two rooms allow parallel exploration of algorithmic flavours of bassline, 4/4 techno, drill 'n bass and vocal pop.

Featuring: Lil Data (PC Music) // Heavy lifting (Pickled Discs) x Graham Dunning (Fractal Meat) // Miri Kat (Establishment) // Deerful // Linux Lewis (Off Me Nut Records) // Hard On Yarn Sourdonk Communion (Hmurd x peb) // Class Compliant Audio Interfaces x Hellocatfood (Computer Club/Keysound) // Digital Selves // Mathr // xname // Luuma // BITPRINT // Deep Vain // Hortense // Tsun Winston Yeung // +777000 // Coral Manton // Rumblesan

Should be good!

Wikipedia's article on autostereograms doesn't exactly say how to construct them, so I drew some diagrams, scribbled some equations, and came up with this.

Given the background distance and eye separation in inches, the resolution in dots per inch, the width in pixels, and the count of vertical strips in the image background, compute the accommodation distance as follows:

accommodation = background * (1 - (width / count) / (separation * resolution))

This will be less than the background distance for positive eye separation (wall-eyed viewing) and greater for negative eye separation (cross-eyed viewing).
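As a sketch, the formula above wraps into a small helper (the function name and argument order are mine, not from any particular implementation):

```c
/* Accommodation distance for an autostereogram, per the formula above.
   background and separation are in inches, resolution in dots per inch,
   width in pixels, count is the number of vertical background strips. */
double accommodation(double background, double separation,
                     double resolution, double width, double count)
{
  return background * (1 - (width / count) / (separation * resolution));
}
```

With the cross-eyed settings used for the image in this post (separation -3, background 12, 1920 wide at 100 dpi, count 32) this gives 14.4 inches, greater than the background distance as expected.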

Then compute a depth value for each pixel, with the far plane at background inches from the camera. Ray marching a distance field is one way to do this; see Syntopia's blog for details. The scene should be between the camera and the far plane. Sharp depth discontinuities are disturbing, so position the scene as close to the far plane as possible.

The next step is converting the depth to a horizontal offset at the accommodation plane, using similar triangles:

delta = (depth - accommodation) * separation / depth

Then compute the normalized texture coordinate increment that matches that offset:

increment[i] = 1 / (delta * resolution)

Here i is the horizontal index of the pixel; you need the whole scanline at a time
if you want to center the texture instead of aligning it to an image edge. Now that
we have the rate of change of the texture coordinate, we can **integrate** it
to get the actual texture coordinate for each pixel:

    double sum = 0;
    for (int i = 0; i < width; ++i)
    {
      sum += increment[i];
      coordinate[i] = sum;
    }

and then do the texture lookup, rebasing it to the center of the image (the % is applied twice because C's % can give a negative result for a negative operand):

    int u = (int) floor((coordinate[i] - coordinate[width / 2]) * texture_width);
    u %= texture_width; u += texture_width; u %= texture_width;
    int v = j;
    v %= texture_height; v += texture_height; v %= texture_height;
    pixel[j][i] = texture[v][u];

The image above uses eye separation = -3 (cross-eyed), background distance = 12, 1920x1080 at 100 dpi, count = 32. The scene is a power 8 Mandelbulb copy-pasted from Fragmentarium, and the texture is a slice of a NASA starfield image made seamless in GIMP.

Pau Ros took some great pictures of my exhibition opening, part of Sonic Electronics Festival:

The exhibition is open until 27th April. Check the Chalton Gallery website for spacetime coordinates.

I have an exhibition coming up in April 2019 in London, UK.

Claude Heiland-Allen

Digital Art - Computer Graphics - Free/Libre Open Source Software

Chalton Gallery, 96 Chalton Street, Camden, London UK NW1 1HJ

Opening Thursday 11 April 2019, 6pm.

Concert Thursday 18 April 2019, 7pm.

Exhibition opens 12-27 April 2019.

Tuesdays: 8 am to 3 pm

Wednesday to Saturday: 11:30 am to 5:45 pm

*Digital print 120x60cm, framed*

Prismatic is rendered using a physics-based ray-tracer for spherically curved space. In spherical space the light ray geodesics eventually wrap around, meeting at the opposite pole to the observer. To compound the sphericity a projection is used that wraps the whole sphere-of-view from a point into a long strip.

The scene contains spheres of three different transparent materials (water, glass, quartz) symmetrically arranged at the vertices of a 24-cell. The equatorial plane is filled with a glowing opaque checkerboard, which acts as a light source with a daylight spectrum.

The 3D spherical space is embedded in 4D Euclidean (flat) space. Ray directions are represented by points on the “equator” around the ray source, and transformed with trigonometry as the rays are traced through curved space. The code is optimized to use simpler functions like square root and arithmetic instead of costly sines and cosines.

The materials are all physically based, with refractive index varying with simulated light wavelength, which gives rainbow effects when different colours are refracted by different angles. To get the final image requires tracing a monochrome image at many different wavelengths, which are then combined into the XYZ colour space using tristimulus response curves for the light receptors in the human eye.

*Digital prints 20x30cm, 16 pieces, unframed*

The concept for Wedged is “playing Tetris optimally badly”. Badly in that no row is complete, and optimally in that each row has at most one empty cell, and the rectangle is filled. Additional aesthetic constraints are encoded in the source code to generate more pleasing images.

Starting from an empty rectangle, block off one cell in each row, subject to the constraint that blocked cells in nearby rows shouldn’t be too close to each other, and the blocked cells should be roughly evenly distributed between columns. Some of these blockings might be duplicates (taking into account mirroring and rotation), so pick only one from each equivalence class.
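The first stage can be sketched as a small backtracking search (my own illustrative code; the even-distribution constraint and the symmetry deduplication are omitted here):

```c
#include <stdlib.h>

/* Block one cell per row such that blocked cells in adjacent rows are
   at least min_gap columns apart.  blocked[] receives the chosen
   column for each row; returns 1 on success, 0 if impossible.
   Call with row = 0 to fill the whole board. */
int block_rows(int rows, int cols, int min_gap, int blocked[], int row)
{
  if (row == rows) return 1;
  for (int c = 0; c < cols; ++c)
  {
    if (row > 0 && abs(c - blocked[row - 1]) < min_gap) continue;
    blocked[row] = c;
    if (block_rows(rows, cols, min_gap, blocked, row + 1)) return 1;
  }
  return 0;
}
```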

Starting from the top left empty cell in each of these boards, fill it with pieces that fit. Fitting means that the piece is entirely within the board, not overlapping any blocked cells or other pieces. There are some additional constraints to improve aesthetic appearance and reduce the number of output images: there should not be too many pieces of the same colour in the board, all adjacent pieces should be a different colour, and no piece should be able to slide into the space left when blocked cells are removed (this applies only to the long thin blue pieces, the other pieces can’t move due to the earlier constraint on nearby blocked cells).

The filling process has some other aesthetic constraints: the board must be diverse (there must be a wide range of distinct colours in each row and column), the complete board must have a roughly even number of pieces of each colour, and there shouldn’t be any long straight line boundaries between multiple pieces. The complete boards might have duplicates under symmetries (in the case that the original blocking arrangement was symmetrical), so pick only one from each equivalence class.

*Sound installation*

Generative techno. Dynamo creates music from carefully controlled randomness, using numbers to invent harmonies, melodies, and rhythms. Dynamo is a Pure-data patch which plays new techno tracks forever. It is a generative system, and not a DJ mix.

When it is time to generate a new track, Dynamo first picks some high level parameters like tempo, density, and the scale of notes to use. Then it fills in the details, such as the specific rhythms of each instrument and which notes to play in which order. Finally an overall sequence is applied to form the large scale musical structure.

Pure-data is deterministic, which makes Dynamo deterministic. To avoid the same output each time the patch is started, entropy is injected from outside the Pure-data environment.

*Audio-visual installation*

Sliding tile puzzles have existed for over a century; the 15-puzzle craze of 1880 offered a cash prize for a problem with no solution. In the Puzzle presented here, the computer is manipulating the tiles. There is no malicious design, but insufficient specification means that no solution can be found; the automaton forever explores the state space, finding every way to position the tiles as good as the last…

Each tile makes a sound, and each possible position has a processing effect associated with it. Part of the Puzzle is to watch and listen carefully, to see and hear and try to pick apart what it is that the computer is doing, to reverse-engineer the machinery inside from its outward appearance. The video is built using eight squares, each coloured tile is textured with the whole Puzzle, descending into an infinite fractal cascade. The control algorithm is a Markov Chain that avoids repetition.
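The repetition-avoiding Markov chain can be sketched like this (hypothetical names, not the actual patch logic): on a 4x4 board, the next move of the blank square is chosen uniformly among legal moves, excluding the one that would undo the previous move.

```c
#include <stdlib.h>

typedef enum { UP, DOWN, LEFT, RIGHT, NONE } Move;

/* Choose the next slide of the blank square: uniform over the legal
   moves, never the exact reverse of the previous move. */
Move next_move(int blank_row, int blank_col, Move previous)
{
  static const Move opposite[] = { DOWN, UP, RIGHT, LEFT, NONE };
  Move legal[4];
  int n = 0;
  if (blank_row > 0 && opposite[previous] != UP)    legal[n++] = UP;
  if (blank_row < 3 && opposite[previous] != DOWN)  legal[n++] = DOWN;
  if (blank_col > 0 && opposite[previous] != LEFT)  legal[n++] = LEFT;
  if (blank_col < 3 && opposite[previous] != RIGHT) legal[n++] = RIGHT;
  return legal[rand() % n];
}
```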

Puzzle is implemented in Pure-data, using GEM for video and pdlua for the tile-control logic.

*Interactive installation*

A graph is a set of nodes and links between them. In GraphGrow the term is overloaded: there are visible graphs of nodes and links on the tablet computer, and a second implicit graph with links between the rules.

The visible graphs give GraphGrow its name - a fractal is grown from a seed graph by replacing each visible link with its corresponding rule graph, recursively. The correspondence is by colour: a yellow link corresponds to the graph with the yellow background, and so on. The implicit graph between rules thus *directs* the expansion. The implicit graph is also a *directed graph* (even more terminological overloading!).

The rule graphs are constrained, with two fixed nodes at left and right. When growing a graph, each link is replaced with the corresponding rule graph with the left-hand fixed node of the rule mapped to the start point of the link and the right-hand fixed node of the rule mapped to the end point of the link. The mapping is restricted to uniform scaling, rotation and translation. The fixed nodes are coloured white on the tablet.

The fractal is projected, along with rhythmic drones amplified through speakers. Both are generated from the graph data. Dragging the brightly coloured nodes in each of the four rule graphs on the tablet allows the gallery visitor to explore a subspace of graph-directed iterated function systems of similarities.

*Video installation*

Fractals are mathematical objects exhibiting detail at all scales. Escape-time fractals are plotted by iterating recurrence relations parameterised by pixel coordinates from a seed value until the values exceed an escape radius or until an arbitrary limit on iteration count is reached (this is to ensure termination, as some pixels may not escape at all). The colour of each pixel is determined by the distance of the point from the fractal structure: pixels near the fractal are coloured black and pixels far from the fractal are coloured white, or the reverse.

Escape-time fractals are generated by formulas, for example the Mandelbrot set emerges from *z* → *z*^{2} + *c* and the Burning Ship emerges from *x* + *i**y* → (|*x*| + *i*|*y*|)^{2} + *c*, where *c* is the coordinates of each pixel. Hybrid fractals combine different formulas into one more complicated formula: for example one might perform one iteration of the Mandelbrot set formula, then one iteration of the Burning Ship formula, then two more iterations of the Mandelbrot set formula, repeating this sequence in a loop.

Claude Heiland-Allen is an artist from London interested in the complex emergent behaviour of simple systems, unusual geometries, and mathematical aesthetics.

From 2005 through 2011 Claude was a member of the GOTO10 collective, whose mission was to promote Free/Libre Open Source Software in Art. GOTO10 projects included the make art Festival (Poitiers, France), the Puredyne GNU/Linux distribution, and the GOSUB10 netlabel. Since 2011 he has continued as an unaffiliated independent artist and researcher.

Claude has performed, exhibited and presented internationally, including in the United Kingdom (London, Cambridge, Winchester, Lancaster, Oxford, Sheffield), the Netherlands (Leiden, Amsterdam), Austria (Linz, Graz), Germany (Cologne, Berlin), France (Toulouse, Poitiers, Paris), Spain (Gijon), Norway (Bergen), Slovenia (Maribor), Finland (Helsinki), and Canada (Montreal).

Claude’s larger artistic projects include RDEX (an exploration of digitally simulated reaction diffusion chemistry) and clive (a minimal environment for live-coding audio in the C programming language). As a software developer, Claude has developed several programs and libraries used by the wider free software community, including pdlua (extending the Pure-data multimedia environment with the Lua programming language), buildtorrent (a program to create .torrent files), and hp2pretty (a program to graph Haskell heap profiling output).

Sonic Electronics Festival has an open call:

SONIC ELECTRONICS FESTIVAL was born from the need to create a place to combine DIGITAL ARTS with ANALOGUE DEVICES. It is interested in showing processes of technological evolution, with the use of CODE as an original TECHNOLOGY for making MUSIC as a reference point. It enjoys the DIY and HANDMADE spirit which ARTISTS, MUSICIANS, CODERS, MAKERS & HACKERS share. The festival fosters a community of tool DEVELOPERS and creative PRACTITIONERS interested in supporting creative practice through DIGITAL and ANALOGUE processes.

SEF will present an EXHIBITION, WORKSHOPS, TALKS, CONCERTS, a PUBLICATION and a RECORD.

EXHIBITION – Chalton Gallery

Opening Thursday 11 April 2019. Exhibition open 12-27 April 2019.

WORKSHOPS, TALKS, CONCERTS – Iklectik Art Lab

Thursday 30 May, Friday 31 May, Saturday 01 June, and Sunday 02 June 2019.

OPEN CALL FOR:

1. Talks on Sound Arts / Sonic Arts. Thursday 30 May.

Iklectik Art Lab. From 8 pm.

Conditions: 40 minutes. Academics, independent researchers, any affiliation welcome. Sound Art Theory, Aesthetics, and Politics.

2. Live AV Performances. Saturday 01 June.

Iklectik Art Lab. From 8 pm.

Conditions: 30 minutes maximum. Female, trans and non-binary artists. Noise, Techno, Experimental electronics, Live Coding, Modular Synthesis, Free-improv, Electroacoustic, Acousmatic. Sound + Light / Projection.

3. Live Music for a 4.1 Sound System. Sunday 02 June.

Iklectik Art Lab. From 6.30 pm.

Conditions: 30 minutes maximum. Live Electroacoustic, Acousmatic, and Computer music for a four-channel sound system.

More information (including how to submit) at sonicelectronicsfestival.org.

This Friday evening I will be streaming to APO33's Audioblast Festival #7 in Nantes. Times are for France; for UTC/GMT subtract one hour.

From Friday 22nd to Sunday 24th February

A festival of sound creation using the INTERNET as a venue for diffusing LIVE experimental, drone, noise, field recordings, sound poetry, electronic and contemporary music… (concerts, retransmissions and performances).

- Friday
- 8:00 : Sébastien Job & Janusz Brudniewicz
- 9:00 : Rémy Carré
- 10:00 : Osvaldo Cibils
- 11:00 : Laura Netz – Mathr
- Saturday
- 2:00 : OFFAL
- 4:00 : Radio Noise Collective
- 5:00 : Les Lumières
- 6:30 : Les Lumières & Guilhem All
- 7:00 : a30t
- 8:00 : STM
- 9:00 : The Manta
- 10:00 : Sebastian Ernesto Pafundo
- Sunday
- 2:00 : Solar Return
- 4:00 : Bot Mix V2.0
- 6:00 : JRF
- 7:00 : Corpse Etanum

The festival is streamed live online and on a quadraphonic sound system at the venue “La Plateforme Intermédia” in Nantes, France.

This year’s theme is: SonoMorphoTectural / MorphoSonicEctural. Transformation of bodies, context and architecture by sound.

“I do not hear the world, I suffer it!”

My hand-drawn animation Lumberjackass has been selected for the One-Off Moving Image Festival on the theme of humans vs nature.

65 one-second movies

10 sixty-second movies

Movies are screening February 18-24, 2019 in public spaces in Valencia (ES) and Gol (Norway), in addition to the net, using QR codes and offline wifi spots for access with smart devices.

Participating artists:

Agne Petrulenaite, Alan Sondheim, Alexander Ness, Anne Fehres, Antonello Matarazzo, Bach Nguyen, Benna G. Maris, Brade Brace, Bubu Mosiashvili, Chih-Yang Chen, Claude Heiland-Allen, Dan Arenzon, Elaine Crowe, Elle Thorkveld, Eric van Zuilen, Eylul Dogruel, Fabian Heller, Fair Brane, Gyula Kovacs, Jaime Orlando Vera Zarate, Jeppe Lange, Jessica Gomula, Joonas Westerlund, Jorge Benet, Joseph Moore, Julia Dyck, Jun-Yuan Hong, Juno, Jurgen Trautwein, Kevin A. Perrin, Khalil Charif, Kirsten Carina Geisser and Ines Christine Geisser, Klaus Pinter, Lin Li, Luke Conroy, Maria-Leena Raihala, Michel Heen, Natallia Sakalova, Nico Vassilakis, Nigel Roberts, Oonagh Shaw, Paul Wiegerinck, Robin Vollmar, Sara Koppel, Sidsel Winther, Silvia Nonnenmacher, Stefanie Reling-Burns, Tatsunori Hosoi, Theodora Prassa, Tija Place, Tivon Rice, Vivian Cintra, Vreneli Harborth, Ynfab Bruno, Yuqi Wang, Zhu Hussel

We're collaborating with 60Seconds Festival in Copenhagen (DK), taking place in parallel, to screen a selection of the 1-second movies mixed with 1-minute movies in Copenhagen, Frederiksberg, Køge and Helsingør during the festival week.

In addition, all 1 second movies will be included in the next Leap Second Festival, an irregular x-ennale lasting one second.

Media: 4B pencil, 2H pencil, layout paper, flatbed scanner, GIMP. No sound.

Sonics Immersive Media Lab (SIML), Room G05, Hatcham Building, Goldsmiths, 25 St James's, SE14 6AD, London, UK (the old church)

Feb 15th 2019, doors 6:15pm, start 6:30pm, end 10:30pm

mathr / Deerful / Luuma / Cassiel / Lil Data / BITPRINT / rumblesan / lnfiniteMonkeys

Part of an international multi-day streaming celebration of 15 years of live-coding.

A recent post by matty686 on fractalforums about Photoshop IFS fractals got me interested. I didn't manage to do it in GIMP 2.8.18, but succeeded with Inkscape 0.92.4. The process needs two PNG images; I didn't succeed with only one. Once you have set up the transformed linked images that reference "bitmap.png", repeatedly export the page to "bitmap2.png" (pressing return in the box with the filename does this quickly), then run "mv bitmap2.png bitmap.png" in a terminal when each export has finished. Here are some explanatory screenshots:

This could probably be scripted within Inkscape so you don't have to do so much manual repetitive work at the end: this is just a proof of concept.

In the #supercollider channel on talk.lurk.org, there was recently
discussion about a "DJ filter" that transitions smoothly between low-pass
and high-pass filters. This made me curious to see if I could make one.
I found Miller Puckette's book section on
Butterworth filters,
but figure 8.18 is not quite there yet for my purposes: the normalization
is off for "shelf 2" (it would be better if the Nyquist gain was 1, instead
of having the DC gain as 1). The figure has 3 poles and 3 zeroes, but for
simplicity of implementing with 2 cascaded biquad sections I went with a
**4-pole** filter design.

After fixing the order, the next variable is the **center frequency**
\(\beta = 2 \pi f_c / SR\), which determines \(r = \tan(\beta/2)\).
Using the formula from the above link gives the pole locations:

\[ \frac{(1 - r^2) \pm \mathbf{i} (2 r \sin(\alpha)) }{ 1 + r^2 + 2 r \cos(\alpha) } \]

For a 4-pole filter, \(\alpha \in \{ \frac{\pi}{8}, \frac{3 \pi}{8} \} \).

The **hi/lo** control \(o\) is conveniently expressed in octaves relative
to the center frequency. It controls the stop band gain, which levels
off after \(o\)-many octaves (so these are really shelving filters).
The \(o\) control fixes the location of the zeroes of the filter, the
formula is the same as above but with \(r\) modified using
\(r_Z = \frac{r_P}{2^o}\).

The filter is normalized so that the pass-band gain (at DC for low-pass and Nyquist for high-pass) is unity. Then the gain in the stop band is \(-24 o\) dB, the transition slope is fixed by the order, and the center frequency gain is about \(-3\)dB when \(o\) is away from \(0\). This can be done by computing the gain of the unnormalized filter at \(\pm 1\) (sign chosen as appropriate). Computing the gain of a filter specified by poles and zeroes is simple: multiply by the distance to each zero and divide by the distance to each pole (phase response is left as an exercise).

The poles and zeroes come in conjugate pairs, which are easy to transform to biquad coefficients (see my previous post about biquad conversions). I put the gain normalization in the first biquad of the chain, not sure if this is optimal. The filters should be stable as long as the center frequency is not DC or Nyquist, as the poles are inside the unit circle. But modulating the filter may cause blowups - to be investigated.

You can browse my implementation.

Tomorrow, an afternoon of live coding at New River Studios:

Right on the heels of the International Conference on Live Coding and an Algorave at Access Space in Sheffield, livecodenyc in exile presents an afternoon of live coding at New River Studios.

Come join us to jam and hang out.

Featuring performances from:

- Codie (Sarah Groff Hennigh-Palermo and Melody Loveless, nyc) - @hi_codie
- Ulysses Popple (nyc) - @ulysses_le_sees
- Deerful - @deer_ful
- BITLIP (Evan Raskob, London) - pixelist.info
- Visor (Jack Purvis, New Zealand) - jackvpurvis.com
- mathr - mathr.co.uk

See you there!

New River Studios is at Ground Floor Unit E, 199 Eade Road, N4 1DN London.

I modified Pure-data and libpd just enough to compile it with Emscripten. See the microsite here:

mathr.co.uk/empd

Not user friendly yet, maybe someone else will contribute that stuff...
