Computerized representations of numbers come in a few different flavours.
The most common is an `int`, which is typically 4 bytes long, or
32 bits. A bit is either 0 or 1, and a byte is the smallest unit addressable
by the CPU; it consists of 8 bits, able to store 2^{8} or 256 distinct
values. An `unsigned int` stores numbers from `0` to `2^{32}-1`, and a
`signed int` uses twos-complement to represent negative numbers: the most
significant bit is `1` for negative numbers, and *sign extension* means it
really represents an infinite stream of `1`s extending to the left.
You can find the twos-complement of a number by inverting all the bits and then
adding 1 to the whole number.
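As a one-word sketch (the function name is my own, not from any code in this post), invert-and-add-one looks like this:

```c
#include <assert.h>
#include <stdint.h>

/* Two's complement negation of a 32-bit word:
   invert all the bits, then add 1. */
uint32_t negate_twos(uint32_t x)
{
    return ~x + 1u; /* equal to (uint32_t)(-x), i.e. negation modulo 2^32 */
}
```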

Computers also have `float` numbers, which have a floating point,
like scientific notation for numbers. The number of significant digits is fixed,
but you can represent very large numbers and very small numbers by combining the
mantissa digits with an exponent, which moves the point separating the fractional
part. I'll talk more about this in a future post. Floating point typically uses
a sign-and-magnitude representation instead of twos-complement.

Fixed-point numbers are somewhere in between: they can represent numbers with fractional parts, but the number of digits after the point is fixed, so very small numbers cannot be represented as accurately as numbers with magnitude near 1. For some purposes this is just fine, as the numbers you are concerned with always have a magnitude near 1. For example, when calculating the Mandelbrot set, each pixel's c value is between -2-2i and +2+2i, and the interesting parts have |c|>0.25 too. The escape time algorithm iterates z→z²+c starting from 0 until |z|>2 (or you give up after an arbitrary iteration count limit), which means most of the z values have magnitude near 1. This makes high precision fixed-point a good fit: you need more digits when zooming in, otherwise you can't represent neighbouring pixels' c values, but you don't need the more-complex algorithms of floating-point numerics because you don't need to represent very small or very large numbers.
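The escape time algorithm just described can be sketched in plain `double` arithmetic (a hypothetical illustration, not the fixed-point implementation this post is about):

```c
/* Escape time for the Mandelbrot iteration z -> z^2 + c, starting from 0.
   Returns the iteration at which |z| exceeded 2, or maxiter if it never
   escaped (the arbitrary limit that ensures termination). */
int escape_time(double cx, double cy, int maxiter)
{
    double x = 0.0, y = 0.0;
    for (int i = 0; i < maxiter; ++i)
    {
        if (x * x + y * y > 4.0) return i; /* |z| > 2: escaped */
        double x2 = x * x - y * y + cx;    /* real part of z^2 + c */
        y = 2.0 * x * y + cy;              /* imaginary part of z^2 + c */
        x = x2;
    }
    return maxiter; /* did not escape within the limit */
}
```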

So how do you implement fixed-point numerics on a computer? I used
`unsigned int` limbs, which are essentially digits in base 2^{32},
and stored numbers as an array of limbs plus two numbers counting the limbs
before the point and the limbs after the point. My code is quite
general, with mixed precision operations, but this aspect is not so well tested, as
the Mandelbrot set calculations only need 1 limb before the point. The most primitive
operations are reading a limb from the vector, which does sign extension in
twos-complement for limb indices beyond the most significant end of the stored
array and pads with 0 beyond the least significant end; and writing a limb to a
vector, which has some assertions to crash fast
instead of corrupting memory with an out-of-bounds write.
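A sketch of the read primitive, assuming little-endian limb order with index 0 as the least significant stored limb (the function name and layout are hypothetical, not my actual code):

```c
#include <stdint.h>

/* Hypothetical sketch: limbs[0] is the least significant stored limb.
   Indices below the stored range read as 0 (zero padding); indices above
   the stored range replicate the sign bit of the top limb (sign
   extension in twos-complement). */
uint32_t read_limb(const uint32_t *limbs, int count, int index)
{
    if (index < 0) return 0u;               /* pad with 0 below */
    if (index < count) return limbs[index]; /* stored limb */
    return (limbs[count - 1] & 0x80000000u) ? 0xFFFFFFFFu : 0u;
}
```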

**-a** is implemented by complementing all the limbs and adding 1
in the least significant place, propagating carries towards the more significant
limbs. Carries can be calculated by comparing the result of an addition with its
addends: if the result is smaller, then it overflowed (`unsigned int`
arithmetic is modulo 2^{32}) and you need to add 1 to the next limb.
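Put together, a hypothetical sketch of multi-limb negation with that carry test (little-endian limb order assumed; not my actual code):

```c
#include <stdint.h>

/* Negate a little-endian multi-limb number in place: complement every
   limb and add 1 at the least significant place.  The carry out of each
   limb is detected by the wrapped sum being smaller than its addend. */
void negate_limbs(uint32_t *limbs, int count)
{
    uint32_t carry = 1u; /* the "+1" of invert-and-add-one */
    for (int i = 0; i < count; ++i)
    {
        uint32_t c = ~limbs[i];
        uint32_t sum = c + carry;
        carry = sum < c ? 1u : 0u; /* overflowed iff result < addend */
        limbs[i] = sum;
    }
}
```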

**a<<n** and **a>>n** can be calculated
by reading two adjacent limbs into a double-wide `unsigned long long int`
(there is disagreement over whether `long` is 32-bit or 64-bit on
different systems; `long long` is 64-bit almost everywhere, I think) and
shifting the correct 32-bit part out of it while looping over the arrays, being careful
not to clobber the array you are reading from in case it is aliased to the array
you are writing to (it is nice to be able to shift in-place to save memory).
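A simplified sketch of the right shift (zero-padding at the top rather than sign-extending, shift amounts limited to less than one limb; hypothetical code, not my implementation):

```c
#include <stdint.h>

/* Shift a little-endian multi-limb number right by n bits (0 < n < 32).
   Two adjacent limbs are read into a 64-bit window and the wanted 32
   bits extracted.  Each slot is read before any later slot is written,
   so the loop is safe when shifting in place. */
void shift_right_limbs(uint32_t *limbs, int count, int n)
{
    for (int i = 0; i < count; ++i)
    {
        uint64_t lo = limbs[i];
        uint64_t hi = (i + 1 < count) ? limbs[i + 1] : 0u; /* zero pad */
        limbs[i] = (uint32_t)(((hi << 32) | lo) >> n);
    }
}
```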

**a+b** is implemented by zipping from the least significant limb to the
most significant, propagating carries along and adding them to the next pair of
limbs. **a-b** is implemented similarly, only propagating borrows
to subtract from the next pair. A borrow occurs when the subtracted limb is
larger than the limb from which it is subtracted. OpenGL 4 / GLSL 400 and above
have `uaddCarry()` and `usubBorrow()` functions which make
this easier to implement, but I wrote my implementation in OpenCL / C / C++,
which as far as I know don't have these.
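A hypothetical sketch of the addition (little-endian limb order assumed; names are mine, not from my actual code):

```c
#include <stdint.h>

/* c = a + b over little-endian limbs, propagating the carry from each
   position to the next.  A carry out is detected by a wrapped sum being
   smaller than one of its addends. */
void add_limbs(uint32_t *c, const uint32_t *a, const uint32_t *b, int count)
{
    uint32_t carry = 0u;
    for (int i = 0; i < count; ++i)
    {
        uint32_t s = a[i] + b[i];
        uint32_t carry1 = s < a[i]; /* did a[i] + b[i] overflow? */
        uint32_t t = s + carry;
        uint32_t carry2 = t < s;    /* did adding the carry overflow? */
        c[i] = t;
        carry = carry1 | carry2;    /* at most one of the two can occur */
    }
}
```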

**a^{2}** is implemented as an optimization of multiplication,
taking advantage of symmetry to replace duplicated work like
`x*y+y*x` with `(x*y)<<1`, which may make a small difference. Multiplication
works by multiplying every limb in `a` by every limb in `b`
and collecting up the results, so that is what I did as
a first pass for simplicity (more efficient algorithms like Karatsuba multiplication
may be interesting to investigate later). It differs from integer multiplication as
found in libgmp (for example) because the least significant limbs of the result can
often be discarded or not computed, at least if we don't care about correct rounding
(all the operations I implemented simply truncate towards negative infinity).
I made a diagram showing how I implemented it (actually a version
of this diagram came before I started writing any code at all):

The input numbers are fed in from the right hand side and the bottom, into limb-by-limb multiplication (32×32→64-bit) which outputs the most significant and least significant limbs of the result to two wide accumulator buses that flow diagonally down and left. At the end the accumulators are combined into limbs by propagating carries upwards. The accumulators don't need to be super wide; I think they need O(log(number of limbs)) extra bits beyond the width of the limb, but 64-bit is certainly plenty and is usually available.

One final thing to note: I tried benchmarking both big-endian and little-endian limb orderings, to see how that affected things. To my surprise, big-endian came out a little faster, even though most operations process limbs from the little end first; I think that was because I was only testing with 1 limb before the point, which potentially let an addition (statically known to be +0) be completely omitted by the optimizer. At zoom 1e90, which corresponds to about 10 limbs per number, simple escape time fixed-point computation of a Mandelbrot set location on my AMD RX 580 GPU was only about 12-13 times slower (total elapsed wall-clock time) than KF using advanced perturbation and series approximation algorithms on my AMD Ryzen 7 2700X CPU. I want to find a way to compare total power consumption; maybe I need to get a metering hardware plug or some such.

You can find my implementation here:
`git clone https://code.mathr.co.uk/fixed-point.git`

Back in 2017 I forked the Windows fractal explorer software Kalles Fraktaler 2. I've been working on it steadily since, adding plenty of new features (and bugs). My fork's website is here with binary downloads for Windows (including Wine on Linux).

I had been maintaining 3 branches of various ages, purely because the 2.12 branch was faster than the 2.13 and 2.14 branches and I couldn't figure out why, until recently. Hence this blog post. It turns out to be quite obscure. This is the patch that fixed it:

```diff
diff --git a/formula/formula.xsl b/formula/formula.xsl
index b47f763..d07c002 100644
--- a/formula/formula.xsl
+++ b/formula/formula.xsl
@@ -370,6 +370,7 @@ bool FORMULA(perturbation,<xsl:value-of select="../@type" />,<xsl:value-of selec
       (void) Ai; // -Wunused-variable
       (void) A; // -Wunused-variable
       (void) c; // -Wunused-variable
+      bool no_g = g_real == 1.0 && g_imag == 1.0;
       int antal = antal0;
       double test1 = test10;
       double test2 = test20;
@@ -385,7 +386,14 @@ bool FORMULA(perturbation,<xsl:value-of select="../@type" />,<xsl:value-of selec
         Xxr = Xr + xr;
         Xxi = Xi + xi;
         test2 = test1;
-        test1 = double(g_real * Xxr * Xxr + g_imag * Xxi * Xxi);
+        if (no_g)
+        {
+          test1 = double(Xxr * Xxr + Xxi * Xxi);
+        }
+        else
+        {
+          test1 = double(g_real * Xxr * Xxr + g_imag * Xxi * Xxi);
+        }
         if (test1 < Xz)
         {
           bGlitch = true;
```

In short, it adds a branch inside the inner loop, to avoid two
multiplications by 1.0 (which would leave the value unchanged). Normally
branches inside inner loops are harmful for optimization, but because the
condition is static and unchanging over the iterations, the compiler can
actually reverse the order of the loop and branch, generating code for
two loops, one of which has the two multiplications completely gone. In
real-world usage, the values *are* almost always both 1.0: they
determine which parts of the value to use for the escape test (and glitch
test, but this is probably a bug).
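This compiler transformation is known as loop unswitching. Hand-unswitched, the idea looks something like this (a hypothetical illustration with made-up names, not KF's actual code):

```c
#include <stdbool.h>

/* Sum of weighted squared norms.  Because no_g never changes inside the
   loop, the test can be hoisted out, yielding two specialised loops: in
   one of them the multiplications by g_real and g_imag are gone. */
double sum_norms(const double *xr, const double *xi, int n,
                 double g_real, double g_imag)
{
    bool no_g = g_real == 1.0 && g_imag == 1.0;
    double s = 0.0;
    if (no_g)
        for (int i = 0; i < n; ++i) /* multiplications removed */
            s += xr[i] * xr[i] + xi[i] * xi[i];
    else
        for (int i = 0; i < n; ++i)
            s += g_real * xr[i] * xr[i] + g_imag * xi[i] * xi[i];
    return s;
}
```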

The performance boost from this patch was about **20%**
(CPU time), which is huge in the grand scheme of things, so I was quite
happy, because it brought performance of kf-2.14.7.1 back to the level
of the 2.12 branch, so I don't have to support it any more (by backporting
bugfixes).

But when you get a taste for speed, you want more. So far KF has not
taken advantage of CPUs to their fullest. Until now, KF has been
resolutely scalar, computing one pixel at a time in each thread. Last
night I started work on upgrading KF to use vectorization
(aka SIMD).
Now when I
compile KF for my CPU (which is not portable, so I won't ship binaries with
these flags enabled), I get an **80%** (CPU time) speed boost,
which is absolutely ginormous, and when compiling for more conservative CPU
settings (Intel Haswell / AMD Excavator) the speed boost is **61%**
which is still a very nice thing to have. With no CPU specific flags
(baseline x86_64) the speed boost is **55%** which is great
too.

The vectorization work is not finished yet; so far it is only added
for "type R" formulae in `double` precision (which allows zoom
depths to 1e300 or so). Unfortunately `long double` (used after
`double` until 1e4900 or so) has no SIMD support at the hardware
level, but I will try to add it for the `floatexp` type used
for even deeper zooms (who knows, maybe `floatexp`+SIMD will
be competitive with `long double`, but I doubt it...). I will
also add support for "type C" formulae before the release, which is a little
complicated by the hoops you have to jump through to get gcc to broadcast
a scalar to a vector in initialization.

Here's a table of differently optimized KF versions:

version | vector size | wall-clock time | CPU time | speed boost (wall-clock) | speed boost (CPU)
---|---|---|---|---|---
2.14.7.1 | 1 | 3m47.959s | 23m30.676s | 1.00 | 1.00
git/64 | 1 | 3m46.703s | 23m26.290s | |
git/64 | 2 | 3m22.280s | 15m11.022s | 1.13 | 1.55
git/64 | 4 | 3m55.158s | 25m26.638s | |
git/64+ | 1 | 3m46.977s | 23m26.065s | |
git/64+ | 2 | 3m13.442s | 14m34.363s | 1.18 | 1.61
git/64+ | 4 | 3m26.012s | 14m54.546s | |
git/native | 1 | 3m42.554s | 21m51.381s | |
git/native | 2 | 3m10.440s | 13m26.100s | |
git/native | 4 | 3m08.784s | 13m06.386s | 1.21 | 1.80
git/native | 8 | 3m50.812s | 24m01.230s | |

All these benchmarks are with Dinkydau's "Evolution of Trees" location, quadratic Mandelbrot set at zoom depth 5e227, with maximum iteration count 1200000. Image size was 3840x2160. My CPU is an AMD Ryzen 7 2700X Eight-Core Processor (with 16 threads that appear as distinct CPUs to Linux). Wall-clock performance doesn't scale up as much as CPU time because some parts (computing reference orbits) are sequential; only the perturbed per-pixel orbits are embarrassingly parallel.

**rounded** provides properly rounded floating point numbers
of arbitrary precision. It does so by wrapping the GNU
MPFR library. Phantom types
carry the information about the precision and rounding mode, letting you treat
properly rounded floating point numbers as instances of `Num` or
`Floating`, like any other numeric type in Haskell. Unlike other
attempts to port MPFR to Haskell, this library does not require you to cripple
`Integer` performance or link your code in an unnatural way.

The new releases are twofold: v0.x is for MPFR 3.1 or above, and v1.x requires MPFR 4.0 or above. When in doubt, depend on the v0.x branch, because both will be supported for the foreseeable future and there's not that much new in MPFR 4.0 (at least, only two additional functions are exposed by rounded so far, though that may increase).

The main changes versus the previous release are bindings for a lot more of the MPFR API (I think all unary functions and binary functions are included now). There are still functions missing, mostly those with less common types, so pull requests adding them are welcome. There is also a fix for a memory-corrupting crash in the long-double support: a foreign function was imported with a missing argument, so the pointer used for writing flags ran wild.

rounded was originally written around 7 years ago by Edward Kmett and Daniel Peebles, but it couldn't be made to work properly until GHC's integer-gmp implementation was changed later (something to do with GMP custom memory allocation vs GHC's garbage collector). I did the work to rip out the semi-broken C-- code and replace it with a more straight-forward (though perhaps less performant) Foreign Function Interface (FFI) binding, and took over the maintainership about a year ago.

You can get it from Hackage or Github, and I have a mirror of the repository on my own code hosting.

On Thursday 11th July 2019 in the late afternoon, I'll be presenting my
paper *At the Helm of the Burning Ship* at the
EVA London
conference. The proceedings should be free to access, I'll update this
post with links once they are published. I don't know if the talks will
be recorded. I'll put my slides online afterwards too.

At the Helm of the Burning Ship

Abstract: The Burning Ship fractal is a non-analytic variation of the Mandelbrot set, formed by taking absolute values in the recurrence. Iterating its Jacobian can identify the period of attracting orbits; Newton’s root-finding method locates their mini-ships. Size estimates tell how deep to zoom to find the mini-ship or its embedded quasi-Julia set. Pre-periodic Misiurewicz points with repelling dynamics are located by Newton’s method. Stretched regions are automatically unskewed by the Jacobian, which is also good for colouring images using distance estimation. Perturbation techniques cheapen deep zooming. The mathematics can be generalised to other fractal formulas. Some artistic zooming techniques and domain colouring methods are also described.

Keywords: Burning Ship. Dynamical systems. Fractal art. Numerical algorithms. Perturbation theory.

If you're in town with cash to splash on registration fees, I think you have to book before Sunday. I'll also be attending on the Tuesday; there looks to be a good session on robot drawing.

**EDIT** I made a micro-site for the paper and slides.

I made an ambient drone piece Harmonic Protocol. It is a feedback process, with filters that analyze the sound according to the 12-tone equal temperament scale, and amplifies those components that are 7 semitones away (modulo octaves). The result is a smooth drone that changes over time.

Prototyped with clive and extracted to a standalone C application using SDL2 for audio, which has the advantage that it can be compiled by emscripten to run in the browser via web audio APIs. Source code on the page.

Out of the box, the standard emscripten HTML boilerplate does not work on Chromium or Google Chrome because of web audio autoplay restrictions: you have to start audio from an interaction with the page, so for those browsers I added a button with Javascript logic copy/pasted from somewhere I found online. View the source of "index.html" for details. Hopefully future emscripten versions will add a play button to their generated code.

Future ideas: maybe try a 53-tone equal temperament version, though the CPU load will be approximately 4.5 times higher, and the web version already uses 15% of a core in Firefox (standalone version uses much less).

Soon:

ALGORITHMS ARE INFECTING DISCOS AND RUINING LIVES

Book online (£10-£15): Resident Advisor // Party For The People

The Algorave scene was born in London back in 2012, since spreading to around 90 cities worldwide, and has been billed as "the future of electronic music" in Wired magazine articles on a regular basis. Now enough producers are exploring algorithmic methods that we briefly declare ALGORAVE IS THE PRESENT OF ELECTRONIC MUSIC before becoming a footnote in history.

This one will be a corker though, two rooms full of algorithmic bangers in Elephant and Castle's lovely Corsica Studios. The two rooms will allow parallel exploration of algorithmic flavours of bassline, 4/4 techno, drill 'n bass and vocal pop.

Featuring: Lil Data (PC Music) // Heavy lifting (Pickled Discs) x Graham Dunning (Fractal Meat) // Miri Kat (Establishment) // Deerful // Linux Lewis (Off Me Nut Records) // Hard On Yarn Sourdonk Communion (Hmurd x peb) // Class Compliant Audio Interfaces x Hellocatfood (Computer Club/Keysound) // Digital Selves // Mathr // xname // Luuma // BITPRINT // Deep Vain // Hortense // Tsun Winston Yeung // +777000 // Coral Manton // Rumblesan

Should be good!

Wikipedia on Autostereograms doesn't exactly say how to construct them, so I drew some diagrams and scribbled some equations, and came up with this.

Given the background distance and eye separation in inches, the resolution in dots per inch, the width in pixels, and count, the number of vertical strips in the image background, compute the accommodation distance as follows:

accommodation = background * (1 - (width / count) / (separation * resolution))

This will be less than the background distance for positive eye separation (wall-eyed viewing) and greater for negative eye separation (cross-eyed viewing).

Then compute a depth value for each pixel, with the far plane at background inches from the camera. Ray marching a distance field is one way to do this; see Syntopia's blog for details. The scene should be between the camera and the far plane. Sharp depth discontinuities are disturbing, so position the scene as close to the far plane as possible.

The next step is converting the depth to a horizontal offset at the accommodation plane, using similar triangles:

delta = (depth - accommodation) * separation / depth;

Then compute the normalized texture coordinate increment that matches that offset:

increment[i] = 1 / (delta * resolution)

The `i` here is the horizontal index of the pixel; you need the whole scanline at a time
if you want to center the texture instead of aligning it to an image edge. Now that we have the
speed of texture coordinate change, we can **integrate** it
to get the actual texture coordinate for each pixel:

```c
double sum = 0;
for (int i = 0; i < width; ++i)
{
    sum += increment[i];
    coordinate[i] = sum;
}
```

and then do the texture lookup, rebasing it to the center of the image (taking `%` twice because negative values behave weirdly in C):

```c
int u = floor((coordinate[i] - coordinate[width / 2]) * texture_width);
u %= texture_width;
u += texture_width;
u %= texture_width;
int v = j;
v %= texture_height;
v += texture_height;
v %= texture_height;
pixel[j][i] = texture[v][u];
```

Image above uses eye separation = -3 (cross-eyed), background distance = 12, 1920x1080 at 100dpi, count 32, the scene is a power 8 Mandelbulb copy-pasted from Fragmentarium, the texture is a slice of a NASA starfield image made seamless in GIMP.

Pau Ros took some great pictures of my exhibition opening, part of Sonic Electronics Festival:

The exhibition is open until 27th April. Check the Chalton Gallery website for spacetime coordinates.

I have an exhibition coming up April 2019 in London, UK.

Claude Heiland-Allen

Digital Art - Computer Graphics - Free/Libre Open Source Software

Chalton Gallery, 96 Chalton Street, Camden, London UK NW1 1HJ

Opening Thursday 11 April 2019, 6pm.

Concert Thursday 18 April 2019, 7pm.

Exhibition opens 12-27 April 2019.

Tuesdays: 8 am to 3 pm

Wednesday to Saturday: 11:30 am to 5:45 pm

*Digital print 120x60cm, framed*

Prismatic is rendered using a physics-based ray-tracer for spherically curved space. In spherical space the light ray geodesics eventually wrap around, meeting at the opposite pole to the observer. To compound the sphericity a projection is used that wraps the whole sphere-of-view from a point into a long strip.

The scene contains spheres of three different transparent materials (water, glass, quartz) symmetrically arranged at the vertices of a 24-cell. The equatorial plane is filled with a glowing opaque checkerboard, which acts as a light source with a daylight spectrum.

The 3D spherical space is embedded in 4D Euclidean (flat) space. Ray directions are represented by points on the “equator” around the ray source, and trigonometry transforms these ray directions appropriately when tracing the rays through curved space. The code is optimized to use simpler functions like square root and arithmetic instead of costly sines and cosines.

The materials are all physically based, with refractive index varying with simulated light wavelength, which gives rainbow effects when different colours are refracted by different angles. To get the final image requires tracing a monochrome image at many different wavelengths, which are then combined into the XYZ colour space using tristimulus response curves for the light receptors in the human eye.

*Digital prints 20x30cm, 16 pieces, unframed*

The concept for Wedged is “playing Tetris optimally badly”. Badly in that no row is complete, and optimally in that each row has at most one empty cell, and the rectangle is filled. Additional aesthetic constraints are encoded in the source code to generate more pleasing images.

Starting from an empty rectangle, block off one cell in each row, subject to the constraint that blocked cells in nearby rows shouldn’t be too close to each other, and the blocked cells should be roughly evenly distributed between columns. Some of these blockings might be duplicates (taking into account mirroring and rotation), so pick only one from each equivalence class.

Starting from the top left empty cell in each of these boards, fill it with pieces that fit. Fitting means that the piece is entirely within the board, not overlapping any blocked cells or other pieces. There are some additional constraints to improve aesthetic appearance and reduce the number of output images: there should not be too many pieces of the same colour in the board, all adjacent pieces should be a different colour, and no piece should be able to slide into the space left when blocked cells are removed (this applies only to the long thin blue pieces, the other pieces can’t move due to the earlier constraint on nearby blocked cells).

The filling process has some other aesthetic constraints: the board must be diverse (there must be a wide range of distinct colours in each row and column), the complete board must have a roughly even number of pieces of each colour, and there shouldn’t be any long straight line boundaries between multiple pieces. The complete boards might have duplicates under symmetries (in the case that the original blocking arrangement was symmetrical), so pick only one from each equivalence class.

*Sound installation*

Generative techno. Dynamo creates music from carefully controlled randomness, using numbers to invent harmonies, melodies, and rhythms. Dynamo is a Pure-data patch which plays new techno tracks forever. It is a generative system, and not a DJ mix.

When it is time to generate a new track, Dynamo first picks some high level parameters like tempo, density, and the scale of notes to use. Then it fills in the details, such as the specific rhythms of each instrument and which notes to play in which order. Finally an overall sequence is applied to form the large scale musical structure.

Pure-data is deterministic, which makes Dynamo deterministic. To avoid the same output each time the patch is started, entropy is injected from outside the Pure-data environment.

*Audio-visual installation*

Sliding tile puzzles have existed for over a century. The 15-puzzle craze in 1880 offered a cash prize for a problem with no solution. In the Puzzle presented here the computer is manipulating the tiles. No malicious design, but insufficient specification means that no solution can be found; the automaton forever explores the state space but finds every way to position the tiles as good as the last…

Each tile makes a sound, and each possible position has a processing effect associated with it. Part of the Puzzle is to watch and listen carefully, to see and hear and try to pick apart what it is that the computer is doing, to reverse-engineer the machinery inside from its outward appearance. The video is built using eight squares, each coloured tile is textured with the whole Puzzle, descending into an infinite fractal cascade. The control algorithm is a Markov Chain that avoids repetition.

Puzzle is implemented in Pure-data, using GEM for video and pdlua for the tile-control logic.

*Interactive installation*

A graph is a set of nodes and links between them. In GraphGrow the term is overloaded: there are visible graphs of nodes and links on the tablet computer, and a second implicit graph with links between the rules.

The visible graphs give the name of GraphGrow - a fractal is grown from a seed graph by replacing each visible link with its corresponding rule graph, recursively. The correspondence is by colour: a yellow link corresponds to the graph with yellow background, and so on. The implicit graph between rules thus *directs* the expansion. The implicit graph is also a *directed graph* (even more terminological overloading!).

The rule graphs are constrained, with two fixed nodes at left and right. When growing a graph, each link is replaced with the corresponding rule graph with the left-hand fixed node of the rule mapped to the start point of the link and the right-hand fixed node of the rule mapped to the end point of the link. The mapping is restricted to uniform scaling, rotation and translation. The fixed nodes are coloured white on the tablet.

The fractal is projected, along with rhythmic drones amplified through speakers. Both are generated from the graph data. Dragging the brightly coloured nodes on the tablet in each of the four rule graphs, allows the gallery visitor to explore a subspace of graph-directed iterated function system of similarities.

*Video installation*

Fractals are mathematical objects exhibiting detail at all scales. Escape-time fractals are plotted by iterating recurrence relations parameterised by pixel coordinates from a seed value until the values exceed an escape radius or until an arbitrary limit on iteration count is reached (this is to ensure termination, as some pixels may not escape at all). The colour of each pixel is determined by the distance of the point from the fractal structure: pixels near the fractal are coloured black and pixels far from the fractal are coloured white, or the reverse.

Escape-time fractals are generated by formulas, for example the Mandelbrot set emerges from *z* → *z*^{2} + *c* and the Burning Ship emerges from *x* + *i**y* → (|*x*| + *i*|*y*|)^{2} + *c*, where *c* is the coordinates of each pixel. Hybrid fractals combine different formulas into one more complicated formula: for example one might perform one iteration of the Mandelbrot set formula, then one iteration of the Burning Ship formula, then two more iterations of the Mandelbrot set formula, repeating this sequence in a loop.

Claude Heiland-Allen is an artist from London interested in the complex emergent behaviour of simple systems, unusual geometries, and mathematical aesthetics.

From 2005 through 2011 Claude was a member of the GOTO10 collective, whose mission was to promote Free/Libre Open Source Software in Art. GOTO10 projects included the make art Festival (Poitiers, France), the Puredyne GNU/Linux distribution, and the GOSUB10 netlabel. Since 2011 he has continued as an unaffiliated independent artist and researcher.

Claude has performed, exhibited and presented internationally, including in the United Kingdom (London, Cambridge, Winchester, Lancaster, Oxford, Sheffield), the Netherlands (Leiden, Amsterdam), Austria (Linz, Graz), Germany (Cologne, Berlin), France (Toulouse, Poitiers, Paris), Spain (Gijon), Norway (Bergen), Slovenia (Maribor), Finland (Helsinki), and Canada (Montreal).

Claude’s larger artistic projects include RDEX (an exploration of digitally simulated reaction diffusion chemistry) and clive (a minimal environment for live-coding audio in the C programming language). As a software developer, Claude has developed several programs and libraries used by the wider free software community, including pdlua (extending the Puredata multimedia environment with the Lua programming language), buildtorrent (a program to create .torrent files), and hp2pretty (a program to graph Haskell heap profiling output).

Sonic Electronics Festival has an open call:

SONIC ELECTRONICS FESTIVAL was born from the need to create a place in which to combine DIGITAL ARTS with ANALOGUE DEVICES. It is interested in showing processes of technological evolution and has as a reference the use of CODE as an original TECHNOLOGY for making MUSIC. It enjoys the DIY and HANDMADE spirit which ARTISTS, MUSICIANS, CODERS, MAKERS & HACKERS share. The activity fosters a community of tool DEVELOPERS and creative PRACTITIONERS interested in supporting creative practice through DIGITAL and ANALOGUE processes.

SEF will present an EXHIBITION, WORKSHOPS, TALKS, CONCERTS, a PUBLICATION and a RECORD.

EXHIBITION – Chalton Gallery

Opening Thursday 11 April 2019. Exhibition opens 12-27 April 2019.

WORKSHOPS, TALKS, CONCERTS – Iklectik Art Lab

Thursday 30 May, Friday 31 May, Saturday 01 June, and Sunday 02 June 2019.

OPEN CALL FOR:

1. Talks on Sound Arts / Sonic Arts. Thursday 30 May.

Iklectik Art Lab. From 8 pm.

Conditions: 40 minutes. Academics, independent researchers, any affiliation welcome. Sound Art Theory, Aesthetics, and Politics.

2. Live AV Performances. Saturday 01 June.

Iklectik Art Lab. From 8 pm.

Conditions: 30 minutes maximum. Females, Trans and Non-binary artists. Noise, Techno, Experimental electronics, Live Coding, Modular Synthesis, Free-improv, Electroacoustic, Acousmatic. Sound + Light / Projection.

3. Live Music for a 4.1 Sound System. Sunday 02 June.

Iklectik Art Lab. From 6.30 pm.

Conditions: 30 minutes maximum. Live Electroacoustic, Acousmatic, and Computer music for a four-channel sound system.

More information (including how to submit) at sonicelectronicsfestival.org.

This Friday evening I will be streaming to APO33's Audioblast Festival #7 in Nantes. Times are for France; for UTC/GMT subtract one hour.

From Friday 22nd to Sunday 24th February

Festival of sound creation using the INTERNET as a venue for diffusing LIVE experimental, drone, noise, field recordings, sound poetry, electronic, contemporary music…. (concerts, retransmissions and performances).

- Friday
- 8:00 : Sébastien Job & Janusz Brudniewicz
- 9:00 : Rémy Carré
- 10:00 : Osvaldo Cibils
- 11:00 : Laura Netz – Mathr
- Saturday
- 2:00 : OFFAL
- 4:00 : Radio Noise Collective
- 5:00 : Les Lumières
- 6:30 : Les Lumières & Guilhem All
- 7:00 : a30t
- 8:00 : STM
- 9:00 : The Manta
- 10:00 : Sebastian Ernesto Pafundo
- Sunday
- 2:00 : Solar Return
- 4:00 : Bot Mix V2.0
- 6:00 : JRF
- 7:00 : Corpse Etanum
The festival is streamed live online and in a quadraphonic sound system at the venue “La Plateforme Intermédia” in Nantes, France.

This year’s theme is : SonoMorphoTectural / MorphoSonicEctural Transformation of bodies, context and architecture by sound.

“I do not hear the world, I suffer it!”

My hand-drawn animation Lumberjackass has been selected for the One-Off Moving Image Festival on the theme of humans vs nature.

65 one second movies

10 60-second movies

Movies are screening February 18-24, 2019 in public spaces in Valencia (ES) and Gol (Norway), in addition to the net, using QR-codes and offline wifi-spots to access with smart devices.

Participating artists:

Agne Petrulenaite, Alan Sondheim, Alexander Ness, Anne Fehres, Antonello Matarazzo, Bach Nguyen, Benna G. Maris, Brade Brace, Bubu Mosiashvili, Chih-Yang Chen, Claude Heiland-Allen, Dan Arenzon, Elaine Crowe, Elle Thorkveld, Eric van Zuilen, Eylul Dogruel, Fabian Heller, Fair Brane, Gyula Kovacs, Jaime Orlando Vera Zarate, Jeppe Lange, Jessica Gomula, Joonas Westerlund, Jorge Benet, Joseph Moore, Julia Dyck, Jun-Yuan Hong, Juno, Jurgen Trautwein, Kevin A. Perrin, Khalil Charif, Kirsten Carina Geisser and Ines Christine Geisser, Klaus Pinter, Lin Li, Luke Conroy, Maria-Leena Raihala, Michel Heen, Natallia Sakalova, Nico Vassilakis, Nigel Roberts, Oonagh Shaw, Paul Wiegerinck, Robin Vollmar, Sara Koppel, Sidsel Winther, Silvia Nonnenmacher, Stefanie Reling-Burns, Tatsunori Hosoi, Theodora Prassa, Tija Place, Tivon Rice, Vivian Cintra, Vreneli Harborth, Ynfab Bruno, Yuqi Wang, Zhu Hussel

We're collaborating with 60Seconds Festival in Copenhagen (DK), taking place in parallel, to screen a selection of the 1 second movies mixed with 1 minute movies in Copenhagen, Frederiksberg, Køge and Helsingør during the festival week.

In addition, all 1 second movies will be included in the next Leap Second Festival, an irregular x-ennale lasting one second.

Media: 4B pencil, 2H pencil, layout paper, flatbed scanner, GIMP. No sound.

]]>