Back in 2009 I posted a short video preview of Reflex, showing truncated 4D polytopes. Today I revisited that old code, reimplementing the core algorithms in the OpenGL Shading Language (GLSL) using the FragM environment.

3D polytopes are also known as polyhedra. They can be defined by their Schläfli symbol, which looks like {4,3} for a cube. This can be read as: the faces are squares ({4}), with three of them ({3}) meeting at each vertex. The notation extends to 4D, where a hypercube (aka tesseract) has symbol {4,3,3}: its cells are cubes {4,3}, with three of them ({3}) meeting at each edge.

These symbols are all well and good, but to render pictures you need concrete measurements: angles, lengths, and so on. H.S.M. Coxeter's book Regular Polytopes gives the solution in the form of some recursive equations. Luckily the recursion depth is bounded by the number of dimensions, which is small (either 3 or 4 in my case, though in principle you can go higher). This matters for implementation in GLSL, which disallows recursion. GLSL doesn't have dynamic-length lists either, so different functions are needed for different-length symbols.

I don't really understand the maths behind it (I don't think I did even when I first wrote the code in Haskell in 2009), but I can describe the implementation. The goal is to find 4 vectors, which are normal to the fundamental region of the tiling of the (hyper)sphere. Given a point anywhere on the sphere, repeated reflections through the planes of the fundamental region can eventually (if you pick the right ones) get you into the fundamental region. Then you can do some signed distance field things to put shapes there, which will be tiled around the whole space when the algorithm is completed.

Starting from the Schläfli symbol {p,q,r} (I'll describe 4D because it has some tricksy bits; 3D is largely similar), the main task is to find the radii of the sub-polytopes ({p,q}, {p}, {q,r}, etc.), because these radii determine the angles of the planes of the fundamental region via trigonometry. The recursive formula starts here:

float radius(int j, ivec3 p)
{
    return sqrt(radius2_j(j, p));
}

Here j ranges from 0 to 3 inclusive, and the vector holds {p,q,r}. Then radius2_j() evaluates using the squared radii of the relevant sub-polytopes, according to j. I think this is an application of Pythagoras' theorem.

float radius2_j(int j, ivec3 p)
{
    switch (j)
    {
        case 0: return radius2_0(p);
        case 1: return radius2_0(p) - radius2_0();
        case 2: return radius2_0(p) - radius2_0(p.x);
        case 3: return radius2_0(p) - radius2_0(p.xy);
    }
    return 0.0;
}

The function radius2_0() is overloaded for different length inputs, from 0 to 3:

float radius2_0() { return 1.0; }
float radius2_0(int p) { return delta() / delta(p); }
float radius2_0(ivec2 p) { return delta(p.y) / delta(p); }
float radius2_0(ivec3 p) { return delta(p.yz) / delta(p); }

Here it starts to get mysterious: the delta function uses trigonometry to find the lengths. I don't know how or why this works; I copied it from the book. Note that it looks recursive at first glance, but in fact each delta calls a different delta overload with a strictly shorter input vector, so it's just a non-recursive chain of function calls.

// GLSL has no built-in pi; assumed defined elsewhere as:
// const float pi = 3.141592653589793;
float delta()
{
    return 1.0;
}

float delta(int p)
{
    float s = sin(pi / float(p));
    return s * s;
}

float delta(ivec2 p)
{
    float c = cos(pi / float(p.x));
    return delta(p.y) - delta() * c * c;
}

float delta(ivec3 p)
{
    float c = cos(pi / float(p.x));
    return delta(p.yz) - delta(p.z) * c * c;
}
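I still can't claim to understand the maths, but the chain is easy to sanity-check. Here's a quick Python transliteration (hypothetical helper names, one function per arity since overloading isn't needed), verified against known circumradii: a cube with edge length 2 has circumradius √3, and a tesseract with edge length 2 has circumradius 2.

```python
import math

# non-recursive chain of "delta" functions, as in the GLSL above
def delta0():
    return 1.0

def delta1(p):
    s = math.sin(math.pi / p)
    return s * s

def delta2(p, q):
    c = math.cos(math.pi / p)
    return delta1(q) - delta0() * c * c

def delta3(p, q, r):
    c = math.cos(math.pi / p)
    return delta2(q, r) - delta1(r) * c * c

# squared circumradii for edge length 2, cf. the radius2_0() overloads
def radius2_cube():       # {4,3}
    return delta1(3) / delta2(4, 3)

def radius2_tesseract():  # {4,3,3}
    return delta2(3, 3) / delta3(4, 3, 3)

print(math.sqrt(radius2_cube()))       # ~1.732, i.e. sqrt(3)
print(math.sqrt(radius2_tesseract()))  # ~2.0
```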

Now comes the core function to find the fundamental region. The cosines of the angles are found as ratios of successive radii, and the sines by Pythagoras' theorem. Three rotation matrices are constructed, then one axis vector is transformed. Finally these vectors are combined using the 4D cross product (which takes 3 inputs), giving the final fundamental region. I'm not sure why the cross products are necessary, but I do know the main property of the cross product: its output is perpendicular to all of its inputs. Note that some signs need wibbling; either that, or permute the order of the inputs.

mat4 fr4(ivec3 pqr)
{
    float r0 = radius(0, pqr);
    float r1 = radius(1, pqr);
    float r2 = radius(2, pqr);
    float r3 = radius(3, pqr);
    float c1 = r1 / r0;
    float c2 = r2 / r1;
    float c3 = r3 / r2;
    float s1 = sqrt(1.0 - c1 * c1);
    float s2 = sqrt(1.0 - c2 * c2);
    float s3 = sqrt(1.0 - c3 * c3);
    mat4 m1 = mat4(c1, s1, 0, 0, -s1, c1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1);
    mat4 m2 = mat4(c2, 0, s2, 0, 0, 1, 0, 0, -s2, 0, c2, 0, 0, 0, 0, 1);
    mat4 m3 = mat4(c3, 0, 0, s3, 0, 1, 0, 0, 0, 0, 1, 0, -s3, 0, 0, c3);
    vec4 v0 = vec4(1, 0, 0, 0);
    vec4 v1 = m1 * v0;
    vec4 v2 = m1 * m2 * v0;
    vec4 v3 = m1 * m2 * m3 * v0;
    return mat4
        (  normalize(cross4(v1, v2, v3))
        , -normalize(cross4(v0, v2, v3))
        ,  normalize(cross4(v0, v1, v3))
        , -normalize(cross4(v0, v1, v2))
        );
}

The 4D cross product can be implemented in terms of 3x3 determinants:

vec4 cross4(vec4 u, vec4 v, vec4 w)
{
    mat3 m0 = mat3(u[1], u[2], u[3], v[1], v[2], v[3], w[1], w[2], w[3]);
    mat3 m1 = mat3(u[0], u[2], u[3], v[0], v[2], v[3], w[0], w[2], w[3]);
    mat3 m2 = mat3(u[0], u[1], u[3], v[0], v[1], v[3], w[0], w[1], w[3]);
    mat3 m3 = mat3(u[0], u[1], u[2], v[0], v[1], v[2], w[0], w[1], w[2]);
    return vec4(determinant(m0), -determinant(m1), determinant(m2), -determinant(m3));
}
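A quick way to convince yourself of the perpendicularity property is a direct pure-Python port (hypothetical helper names): dotting the result with any of the three inputs amounts to expanding a 4x4 determinant with a repeated row, which is zero.

```python
def det3(m):
    # determinant of a 3x3 matrix given as a list of rows
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def cross4(u, v, w):
    # 4D cross product via cofactor expansion, matching the GLSL above
    def minor(k):
        idx = [i for i in range(4) if i != k]
        return [[x[i] for i in idx] for x in (u, v, w)]
    return [(-1) ** k * det3(minor(k)) for k in range(4)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

u = [1.0, 2.0, 3.0, 4.0]
v = [0.0, 1.0, -1.0, 2.0]
w = [5.0, 0.0, 2.0, 1.0]
n = cross4(u, v, w)
print([dot(n, x) for x in (u, v, w)])  # → [0.0, 0.0, 0.0]
```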

The projection from 4D to 3D is stereographic. Because signed distance fields are implicit, we need both directions: one to lift the 3D input point to 4D, and, after transformation and tessellation, one to go back to 3D to calculate distances:

vec4 unstereo(vec3 pos)
{
    float r = length(pos);
    return vec4(2.0 * pos, 1.0 - r * r) / (1.0 + r * r);
}

vec3 stereo(vec4 pos)
{
    pos = normalize(pos);
    return pos.xyz / (1.0 - pos.w);
}

float sdistance(vec4 a, vec4 b)
{
    return distance(stereo(a), stereo(b));
}

The tiling is done iteratively. The limit of 600 is there because unbounded loops on a GPU are not really advisable; if the limit is set too low (e.g. 100), visible artifacts occur. It's probably possible to prove an optimal bound somehow:

float poly4(vec4 r)
{
    for (int i = 0, j = 0; i < 4 && j < 600; ++j)
    {
        if (dot(r, FR4[i]) < 0.0)
        {
            r = reflect(r, FR4[i]);
            i = 0;
        }
        else
        {
            i++;
        }
    }
    float de = 1.0 / 0.0; // +infinity
    // signed distance field stuff goes here
    return de;
}
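The same bounded-iteration folding can be sketched in 2D with a kaleidoscope of two mirrors at angle π/p (a hypothetical setup, not the shader code). The invariants are easy to check: reflections preserve the norm, and on exit the point lies on the non-negative side of every mirror.

```python
import math

def reflect(r, n):
    # GLSL-style reflect: r - 2 * dot(r, n) * n, for a unit normal n
    d = sum(a * b for a, b in zip(r, n))
    return [a - 2.0 * d * b for a, b in zip(r, n)]

def fold(r, normals, limit=600):
    # push r into the region where dot(r, n) >= 0 for all normals,
    # same loop shape as poly4() above
    i = j = 0
    while i < len(normals) and j < limit:
        n = normals[i]
        if sum(a * b for a, b in zip(r, n)) < 0.0:
            r = reflect(r, n)
            i = 0
        else:
            i += 1
        j += 1
    return r

p = 5  # mirrors at angle pi/5: dihedral symmetry of order 10
normals = [[0.0, 1.0], [math.sin(math.pi / p), -math.cos(math.pi / p)]]
r = fold([-0.3, -0.9], normals)
print(r)  # inside the wedge: both dot products now >= 0
```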

The user interface part of the code has some variables for selecting dimension and symmetry group (Schlaefli symbol). It also has 4 sliders for selecting the truncation amount in barycentric coordinates (which makes settings transfer in a meaningful way between polytopes), and 6 checkboxes for enabling different planes (there are 4 ways to choose the first axis and 3 left to choose from for the second axis, divided by 2 because the order doesn't matter).

vec4 s = inverse(transpose(FR4)) * vec4(BX, BY, BZ, BW);
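The inverse-transpose has a tidy interpretation, which I believe is why the barycentric settings transfer between polytopes: it produces the point whose dot products with the four mirror normals are exactly the slider values. A numpy check with a random stand-in for the normal matrix (hypothetical data, not the shader's FR4):

```python
import numpy as np

rng = np.random.default_rng(1)
FR4 = rng.standard_normal((4, 4))   # columns play the role of the 4 normals
b = np.array([0.1, 0.2, 0.3, 0.4])  # the four truncation sliders

# GLSL: s = inverse(transpose(FR4)) * vec4(BX, BY, BZ, BW)
s = np.linalg.inv(FR4.T) @ b

# dot(s, normal_i) recovers b_i for each column
print(FR4.T @ s)  # ≈ [0.1, 0.2, 0.3, 0.4]
```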

The signed distance field stuff is quite straightforward in the end, though it took a lot of trial and error to get there. The first way I tried was to render tubes for line segments, by projecting the input point 'r' onto the planes of the fundamental region and doing an SDF circle:

float d = sdistance(s, r - FR4[0] * dot(r, FR4[0])) - thickness;

The planes are drawn by projecting 's' onto the cross product of the plane with 'r'. I don't know why this works:

vec4 p = normalize(cross4(FR4[0], FR4[1], r));
float d = sdistance(s, s - p * dot(s, p)) - 0.01;

Finally the DE() function for plugging into the DE-Raytracer.frag that comes with FragM has some animated rotation based on time, and the baseColor() function textures the solid with light and dark circles (actually slices of hyperspheres).

Full code download: reflex.frag.

Recently I've been revisiting the code of my Monotone piece, extending it to use OpenGL cube maps to store the feedback texture instead of nonlinear warping in a regular texture. This means I can use Möbius transformations instead of simple similarities and still avoid excessively bad blurriness and edge artifacts. I've been toying with colour too: but unlike the chaos game algorithm for fractal flames (which can colour according to a "hidden" parameter, leading to interesting and dynamic colour structures), the texture feedback mechanism I'm using can only cope with "structural" RGB colours (with an alpha channel for overall brightness). A 4x4 colour matrix seems to be more interesting than the off-white multipliers I started with.

Some videos:

- Moebius Bubble Chamber (stereographic projection, black and white)
- Moebius Blueprints (360, slight colour, low resolution)
- Moebius Blueprints 2 (360, more colour, high resolution)
- Moenotone Demo (360, colour, high resolution)
- Moenotone Demo 2 (stereographic projection, colour)

A black-and-white A5 paperback with 100 pages of Turing patterns, hand-selected from 1000s of candidate images generated by a multi-layer reaction-diffusion biochemistry system, simulated on a digital computer as a coupled cellular automaton.

Click the picture above for lots more information, including photos, and links to print-on-demand and source code too (a fork of my cca project).

I generated the page images as 100dpi bitmaps, then vectorized with potrace. The printed copy is really nice, smooth white paper good for colouring and a bright glossy cover. One small issue is that it's hard to get pencils right into the perfect binding, but I expected that before I had it made.

This project is what my GPU temperature throttle was for, though I'm sure it'll come in handy for other things as well.

While rendering some GPU-intensive OpenGL stuff I got scared when my graphics card hit 90°C, so I paused the process until it had cooled down. I got fed up pausing and restarting it by hand, so I wrote this small script:

#!/bin/bash
kill -s SIGSTOP "${@}"
running=0
stop_threshold=85
cont_threshold=75
while true
do
  temperature="$(nvidia-smi -q -d TEMPERATURE | grep 'GPU Current Temp' | sed 's/^.*: \(.*\) C$/\1/')"
  if (( running ))
  then
    if (( temperature > stop_threshold ))
    then
      echo "STOP ${temperature} > ${stop_threshold}"
      kill -s SIGSTOP "${@}"
      running=0
    fi
  else
    if (( temperature < cont_threshold ))
    then
      echo "CONT ${temperature} < ${cont_threshold}"
      kill -s SIGCONT "${@}"
      running=1
    fi
  fi
  sleep 1
done | ts

If you want to run it yourself, I advise checking the output from nvidia-smi on your system, because its manual page says the format isn't stable. I also suggest monitoring the temperature independently, at least until you're sure it's working ok for you. Usage is simple: just pass on the command line the PIDs of the processes you want to throttle by GPU temperature. Typically these would be OpenGL applications (or Vulkan / OpenCL / CUDA / whatever else they come up with next).

A while ago I read this paper and finally got around to implementing it this week:

Real-Time Hatching

Emil Praun, Hugues Hoppe, Matthew Webb, Adam Finkelstein

Appears in SIGGRAPH 2001

The key concept in the paper is the "tonal art map", in which strokes are added to a texture array's mipmap levels to preserve coherence between levels and across tones - each stroke in an image is also present in all images above and to the right:

My possibly-novel contribution is to use the inverse fast Fourier transform (IFFT) to generate blue noise for the tonal art map generation. This takes a fraction of a second, compared to the many hours void-and-cluster methods take at large image sizes. The quality may be lower, but that's something to investigate another time; it's good enough for this hatching experiment. Here's a comparison of white and blue noise; the blue noise is perceptually much smoother, lacking low-frequency components:
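Here's a small numpy sketch of the idea (my reconstruction from the description above, not the actual hatching code): shape a spectrum so that amplitude grows with frequency, randomise the phases, and take one inverse FFT.

```python
import numpy as np

def blue_noise(n, seed=0):
    # amplitude proportional to radial frequency, uniformly random phases
    rng = np.random.default_rng(seed)
    fy = np.fft.fftfreq(n)[:, None]
    fx = np.fft.fftfreq(n)[None, :]
    radius = np.sqrt(fx * fx + fy * fy)           # radial frequency, 0 at DC
    phase = np.exp(2j * np.pi * rng.random((n, n)))
    spectrum = radius * phase                      # high frequencies dominate
    spectrum[0, 0] = 0.0                           # no DC component -> zero mean
    noise = np.fft.ifft2(spectrum).real
    return noise / np.abs(noise).max()             # normalise to [-1, 1]

noise = blue_noise(64)
```

Taking the real part of the inverse transform implicitly symmetrises the spectrum, so there's no need to enforce Hermitian symmetry by hand.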

I haven't implemented the other parts of the paper yet, namely adjusting the hatching to match the principal curvature directions of the surface. This is more a mesh parameterization problem; I'm being simple and generating UVs for the bunny by spherical projection, instead of something complicated and good-looking.

My code is here:

git clone https://code.mathr.co.uk/hatching.git

Note that there are horrible hacks in the shaders for the specific scene geometry at the moment; hopefully I'll find time to clean it up and make it more general soon. You'll need to download the `bunny.obj` from cs5721f07.

I implemented a little widget in HTML5 JavaScript and WebGL:

/clusters/

It's inspired by Clusters by Jeffrey Ventrella, but its source seems to be obfuscated so I couldn't see how it worked. Instead I worked backwards from the referenced ideas of Lynn Margulis. I modelled a symbiotic system as a bunch of particles, each craving or disgusted by the emissions of the others. There is a settable number of different substances, and (currently hardcoded) 24 different species with their own tastes, represented by different colours. The particle count is settable too, but due to a bug in my code you have to manually refresh the page after changing it (and don't go too high, the slowdown is \(O(n^2)\)).

Some seeds give really interesting large-scale structures that chase each other around, with bits peeling off and joining other groupings. If A is attracted to B but B is repulsed by A, then a pursuit ensues. If the generated rule weights (576 numbers with the default settings) align just right, you can get a chain or even a ring that becomes stable and spins of its own accord. Other structures include concentric shells in near-spherical blobs.
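The pursuit effect is easy to reproduce in a toy two-particle model (a minimal sketch, not the actual widget code): A accelerates towards B, B accelerates away from A, which is the same direction, and with a little friction the pair settles into a constant-velocity chase at fixed separation.

```python
def chase(steps=2000, dt=0.01, k=1.0, friction=0.1):
    # 1D toy: a is attracted to b, b is repulsed by a
    a, b = 0.0, 1.0
    va = vb = 0.0
    for _ in range(steps):
        d = b - a
        va += dt * (k * d - friction * va)  # accelerate towards b
        vb += dt * (k * d - friction * vb)  # accelerate away from a (same direction!)
        a += dt * va
        b += dt * vb
    return a, b, va, vb

a, b, va, vb = chase()
print(b - a)   # ≈ 1.0: separation stays at its initial value
print(va, vb)  # both drifting the same way: a pursuit
```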

One thing I'm not happy with is the friction - I had to add it to make the larger clusters stable, but it makes smaller clusters less mobile. There's probably something my naive model misses from Ventrella's original, maybe some kind of satiation and transfer of actual materials between particles, rather than a per-species (dis)like tendency. If more satiated particles were to move less quickly than hungry particles, that might fix it. I'll try it another day!

Late last year I implemented some coupled continuous cellular automata, inspired by Softology's experiments. Now I'm finally getting around to blogging about it. I used OpenGL shaders; here's some of the fragment shader source of the main algorithm:

void main()
{
    vec4 s1 = texture(state, coord, 1.0);
    vec4 s100 = texture(state, coord, 100.0);
    vec4 s;
    for (int k = 0; k < 4; ++k)
        s[k] = texture(state, coord, blur[k])[k];
    vec4 h = texture(history, coord);
    s = coupling * (s - s100) + h;
    s = speed * s;
    s = mix(s1, vec4(0.5) + 0.5 * cos(s), 0.125);
    h = mix(s, h, decay);
    state_out = s;
    history_out = h;
}

The non-linearity of the `cos()` on the coupled input acts like a "reaction"; the blurring (looking up reduced mipmap levels from the texture) acts like "diffusion".
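The same reaction-diffusion flavour can be sketched with numpy, replacing the mipmap lookups with box blurs (a simplified toy standing in for the shader, not the actual code; it drops the history and speed terms):

```python
import numpy as np

def box_blur(a, k):
    # cheap separable box blur, standing in for the shader's mipmap lookups
    for axis in (0, 1):
        acc = np.zeros_like(a)
        for o in range(-k, k + 1):
            acc += np.roll(a, o, axis=axis)
        a = acc / (2 * k + 1)
    return a

def step(s, coupling=8.0):
    activator = box_blur(s, 1)   # short-range blur: "diffusion"
    inhibitor = box_blur(s, 6)   # long-range average, subtracted off
    # the cos() non-linearity is the "reaction", keeping values in [0, 1]
    return 0.5 + 0.5 * np.cos(coupling * (activator - inhibitor))

rng = np.random.default_rng(0)
s = rng.random((64, 64))
for _ in range(10):
    s = step(s)
```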

Colouring is done with another affine matrix transform, the output of which is thresholded and clamped before an edge-detection filter is applied. The edge detection uses dFdx and dFdy, so the results are coarse (these derivatives are typically computed for blocks of 2x2 pixels rendered together in parallel); for better results the edge detection could be done in another pass, or the whole thing could run at double the resolution and be downscaled to screen size afterwards.

Here's a video of it in action:

Here are some static images:

Here is another video, from January when it was still in colour:

Here's where you can get the code:

git clone https://code.mathr.co.uk/cca.git

Future work might be to do proper Gaussian blurs (the Gaussian is separable, so even a large radius might be feasible in real time) instead of the cheap mipmap reduction, which yields squarish grid artifacts.

**EDIT** I worked on it some more, now in colour and with a
high quality mode that does Gaussian blur (on my system frame rate drops from
~60fps to between ~5fps and ~30fps depending on blur radius). Pictures:

I also added a mutation mode, which randomizes the parameters one by one at random. Here's a final example video showing off the new features:


After instrumenting Monotone with OpenGL timer queries I could see where the major bottleneck lay:

IFS( 7011.936000 ) FLAT( 544.672000 ) PBO( 2921.728000 ) SORT( 6797.760000 ) LUP( 71136.064000 ) TEX( 284.224000 ) DISP( 272.480000 )

LUP is the per-pixel binary search lookup for histogram equalisation (to compress the dynamic range of the HDR fractal to something suitable for display); the preceding SORT generates the histogram from a 4x4 downscaled image. A quick calculation shows that LUP is taking 80% of the GPU time, so it is a good focus for optimisation efforts.
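The lookup can be sketched in Python (a toy of the idea, not the actual Monotone code): sort a subsampled copy of the values, then binary-search each pixel's rank, which maps arbitrary HDR values onto a uniform 8-bit display range.

```python
import bisect
import random

def equalise(pixels, downscale=16):
    # build the "histogram" from a subsample, like the 4x4 downscaled image
    sample = sorted(pixels[::downscale])
    n = len(sample)
    # per-pixel binary search: rank in the sample -> 8-bit display value
    return [min(255, (bisect.bisect_left(sample, p) * 256) // n) for p in pixels]

random.seed(1)
hdr = [random.expovariate(1.0) ** 4 for _ in range(4096)]  # very high dynamic range
ldr = equalise(hdr)
print(min(ldr), max(ldr))  # spans the full 8-bit range
```

Since the final value is quantised to 8 bits anyway, the binary search only needs 8 steps, which is the optimisation described below.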

The 4x4 downscaled image for the histogram still has a lot of pixels: 129600. LUP involves finding an index into this array, which gives a value with around 17 bits of precision. However, typical computer displays are only 8-bit (256 values), so the extra 9 random-access texture lookups per pixel for a more accurate value are a waste of time and effort. Combined with a reduction of the downscaled image to 8x8, the optimisation to compute a less accurate (but visually indistinguishable) histogram equalisation allows Monotone to now run at 30fps at 1920x1080 full HD resolution. Here are the post-optimisation detailed timing metrics:

IFS( 7087.104000 ) FLAT( 509.888000 ) PBO( 2744.864000 ) SORT( 1409.440000 ) LUP( 15696.352000 ) TEX( 281.472000 ) DISP( 290.848000 )

A productive day!

I blogged about this before with an animation (Rolling Torus); the only thing missing now is the Fragmentarium source code, so here it is:

#define providesColor
#include "Soft-Raytracer.frag"

uniform float time;
const float pi = 3.141592653;
const float s = 2.0;
const float ri = s / (sqrt(s * s + 1.0) + 1.0);
const float ro = s / (sqrt(s * s + 1.0) - 1.0);
const float rc = 0.5 * (ro + ri);
const float rt = 0.5 * (ro - ri);
const vec3 rgb[5] = vec3[]
  ( vec3(1.0, 0.7, 0.0)
  , vec3(0.7, 1.0, 0.0)
  , vec3(0.0, 0.7, 1.0)
  , vec3(0.7, 0.0, 1.0)
  , vec3(1.0, 0.0, 0.7)
  );

float torus(vec3 z)
{
    return length(z - rc * normalize(vec3(z.xy, 0.0))) - rt;
}

float plane(vec3 z)
{
    return z.z;
}

vec3 spin(vec3 z)
{
    float a = 2.0 * pi * time / 3.0;
    mat2 m = mat2(cos(a), sin(a), -sin(a), cos(a));
    z.xy = m * z.xy;
    return z.xzy - vec3(0.0, ro, 0.0);
}

vec3 baseColor(vec3 q, vec3 n)
{
    vec2 uv = vec2(0.0);
    if (q.z < 0.01)
    {
        uv = q.xy * sqrt(5.0);
    }
    else
    {
        vec3 p = spin(q);
        float k = -1.0;
        float l = 1.0;
        if (p.z > 0.0) { l = -l; }
        if (length(p.xy) > rc) { k = -k; }
        float a = (p.z * p.z * sqrt(s * s + 1.0) + k * sqrt(max(1.0 - p.z * p.z * s * s, 0.0))) / (p.z * p.z + 1.0);
        float y = l * acos(clamp(a, -1.0, 1.0)) / (2.0 * pi);
        float x = s * atan(p.x, p.y) / (2.0 * pi);
        if (x < 0.0) { x = s + x; }
        if (y < 0.0) { y = 1.0 + y; }
        x = 5.0 * x;
        y = 5.0 * y;
        uv = vec2(y, x);
        float b = atan(1.0, 2.0);
        mat2 m = mat2(cos(b), sin(b), -sin(b), cos(b)) * sqrt(5.0);
        uv = m * uv;
    }
    vec3 t;
    if (mod(uv.x, 1.0) < 0.25 || mod(uv.y, 1.0) < 0.25)
    {
        t = vec3(0.0);
    }
    else
    {
        int k = clamp(int(mod(floor(uv.x) + 2.0 * floor(uv.y), 5.0)), 0, 4);
        t = rgb[k];
    }
    return vec3(t);
}

float DE(vec3 z)
{
    return min(torus(spin(z)), plane(z));
}

#preset default
FOV = 0.3
Eye = -5,-5,2.25
Target = -0.593772,1.17202,1.5097
Up = 0,0,1
EquiRectangular = false
FocalPlane = 5
Aperture = 0.01062
Gamma = 1
ToneMapping = 1
Exposure = 1
Brightness = 1
Contrast = 1
Saturation = 1
GaussianWeight = 5.2308
AntiAliasScale = 1.7333
Detail = -5.47582
DetailAO = -3.26669
FudgeFactor = 1
MaxRaySteps = 317
BoundingSphere = 12
Dither = 1
NormalBackStep = 1
AO = 0,0,0,0.32456
Specular = 0.2
SpecularExp = 24.051
SpotLight = 0.737255,0.686275,0.533333,1.8841
SpotLightPos = 10,-5.0684,8.0822
SpotLightSize = 2.3
CamLight = 0.792157,0.556863,0.682353,0.6579
CamLightMin = 3e-05
Glow = 0,0.917647,1,0
GlowMax = 16
Fog = 0
Shadow = 0.82906 NotLocked
Sun = -1.29354,1.01588
SunSize = 0.001
Reflection = 0 NotLocked
BaseColor = 1,1,1
OrbitStrength = 1
X = 0.5,0.6,0.6,0.7
Y = 1,0.6,0,0.4
Z = 0.8,0.78,1,0.5
R = 0.4,0.7,1,0.12
BackgroundColor = 0.501961,0.501961,0.501961
GradientBackground = 1.66665
CycleColors = false
Cycles = 0.1
EnableFloor = false NotLocked
FloorNormal = 0,0,0
FloorHeight = 0
FloorColor = 1,1,1
#endpreset

My video piece Monotone has been accepted to the Mozilla Festival art exhibition. Mozilla Festival 2016 takes place October 28-30, at Ravensbourne College, London.

Since submitting the pre-rendered video loop I've been working on improving the real-time rendering mode of the Monotone software. The main bottleneck at this time is the histogram equalisation that takes the high-dynamic-range calculations down to a low-dynamic-range image for display. I did manage to get a large speed boost by calculating the histogram on a \(\frac{1}{4} \times \frac{1}{4}\) downscaled image, but on my hardware it only achieves \(\frac{1}{2} \times \frac{1}{2}\) of the desired resolution (HD 1920x1080). On my NVIDIA GTX 550 Ti with 192 CUDA cores I get 960x540 at 30 frames per second. Recent hardware has 1000s of cores, so perhaps it's just a matter of throwing more grunt at the problem.

If you want to try it (and have OpenGL 4 capable hardware, and development headers installed for GLFW and JACK, among other things; only tested on Debian):

git clone https://code.mathr.co.uk/monotone.git
git clone https://code.mathr.co.uk/clive.git
cd monotone/src
make
./monotone

You can also browse the monotone source code repository. If you do have a significantly more powerful GPU than me, you can try to edit the source code monotone.cpp to change "#define SCALE 2" to "#define SCALE 1", which will make it target 1920x1080 instead of half that in each dimension. I'd love to hear back if you get it working (or if you have trouble getting it running, maybe I can help).

**UPDATE** Here are some photos. The projection was really impressive, so I'm satisfied even though the sound aspect was absent:

The festival was pretty interesting, many many things all going on at once. The highlight was the Sonic Pi workshop (though I spent most of the time dist-upgrading to Debian Stretch so I could install it), and the Nature In Code workshop was also interesting (though it was packed full and uncomfortable, so I didn't attend the full session).

Back at the end of April last year I think I was futzing about on math.stackexchange.com, answering a question about rendering negative multibrot sets, for example the one produced by iterations of \(z \to z^{-2} + c\). I tried applying the atom-domain colouring from the regular Mandelbrot set, but found it looked better if I accumulated all the partial domains with additive blending, not just the final domain. Here's a zoomed-in view:

I implemented it as a GLSL fragment shader in Fragmentarium, here's the source code (which you can download too):

// Mandelbrot set for \( z \to z^{-n} + c \) coloured by Lyapunov atom domains
// Created: Thu Apr 30 15:10:00 2015
#include "Progressive2D.frag"
#include "Complex.frag"

const float pi = 3.141592653589793;
const float phi = (sqrt(5.0) + 1.0) / 2.0;

#group Lyapunov atom domains
uniform int Iterations; slider[10,200,5000]
uniform int Power; slider[-16,-2,16]

vec3 color(vec2 c)
{
    // critical point is \( 0 \) for positive Power, and \( 0^{Power} + c = c \)
    // critical point is \( \infty \) for negative Power, and \( \infty^{Power} + c = c \)
    // so start iterating from \( c \)
    vec2 z = c;
    // Lyapunov exponent accumulator
    float le = 0.0;
    // atom domain accumulator
    float minle = 0.0;
    int mini = 1;
    // accumulated colour
    vec4 rgba = vec4(0.0);
    for (int i = 0; i < Iterations; ++i)
    {
        // \( zn1 \gets z^{Power - 1} \)
        vec2 zn1 = vec2(1.0, 0.0);
        for (int j = 0; j < abs(Power - 1); ++j) { zn1 = cMul(zn1, z); }
        if (Power < 0) { zn1 = cInverse(zn1); }
        // \( dz \gets Power z^{Power - 1} \)
        vec2 dz = float(Power) * zn1;
        // \( z \gets z^{Power} + c \)
        z = cMul(zn1, z) + c;
        // \( le \gets le + 2 \log |dz| \)
        float dle = log(dot(dz, dz));
        le += dle;
        // if the delta is smaller than any previous, accumulate the atom domain
        if (dle < minle)
        {
            minle = dle;
            mini = i + 1;
            float hue = 2.0 * pi / (36.0 + 1.0/(phi*phi)) * float(mini);
            vec3 rainbow = 2.0 * pi / 3.0 * vec3(0.0, 1.0, 2.0);
            vec3 domain = clamp(vec3(0.5) + 0.5 * sin(vec3(hue) + rainbow), 0.0, 1.0);
            rgba += vec4(domain, 1.0);
        }
    }
    // accumulated 'Iterations' logs of squared magnitudes
    // so divide by 2 Iterations
    le /= 2.0 * float(Iterations);
    // scale accumulated colour and blacken interior
    return mix(rgba.rgb / rgba.a, vec3(le < 0.001 ? 0.0 : tanh(exp(le))), 0.5);
}
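For intuition, here's the Lyapunov accumulation in Python for the ordinary \(z \to z^2 + c\) case, where the sign behaviour is easy to check (a simplified sketch; the shader's negative powers need the complex-inverse helper from Complex.frag): the averaged log-derivative is negative for an attracting interior parameter and positive for an escaping one.

```python
import math

def lyapunov(c, power=2, iterations=100):
    # iterate z -> z^power + c from the critical value z = c,
    # accumulating log |d/dz (z^power + c)| = log |power * z^(power-1)|
    z = c
    le = 0.0
    for _ in range(iterations):
        dz = power * z ** (power - 1)
        le += math.log(abs(dz) + 1e-300)  # guard against log(0)
        z = z ** power + c
        if abs(z) > 1e100:
            break  # escaped; stop before overflow
    return le / iterations

print(lyapunov(-0.1))  # negative: attracting interior
print(lyapunov(1.0))   # positive: escaping exterior
```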

More negative powers than -2 don't look very good, though.

The classic Sayagata tiling is a Euclidean (flat) plane tessellation based on a square grid. For May's calendar image I thought it would be fun to make a hyperbolic variation, with 5 squares meeting at each vertex instead of 4, and furthermore to wrap the hyperbolic plane into a repeating ring shape like Bulatov did (warning: that page is slow to load, but worth it in the end).

I implemented it as a GLSL fragment shader in Fragmentarium (download my source code) with a few constants for changing the tiling parameters (the hyperbolic symmetry group, the orientation across the band, number of repetitions, whether to draw a pretty pattern or just triangles). Here are a few variations:

The code has a few comments; the gist of the overall operation is to transform the coordinates from the ring back into the Poincaré disc model of hyperbolic geometry, then repeatedly apply hyperbolic symmetries towards a central fundamental region, which finally gets the Sayagata texture. Note that tweaking some of the values leads to ugly seams with the parts not lining up; still some bugs I guess! (WONTFIX DONTDOTHAT)
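The ring trick rests on the band model of the hyperbolic plane: the conformal map w = (4/π) atanh(z) takes the Poincaré disc to the infinite strip |Im w| < 1, which can then be wrapped around a ring by treating Re w as an angle. This is my understanding of the Bulatov construction, not the shader code itself. A quick Python check of the disc-to-band map and its inverse:

```python
import cmath
import math

def disc_to_band(z):
    # Poincaré disc |z| < 1  ->  band |Im w| < 1
    return 4.0 / math.pi * cmath.atanh(z)

def band_to_disc(w):
    # inverse map, used to pull ring coordinates back into the disc
    return cmath.tanh(math.pi * w / 4.0)

z = 0.3 + 0.4j
w = disc_to_band(z)
print(abs(w.imag) < 1.0)         # True: inside the band
print(abs(band_to_disc(w) - z))  # ≈ 0: round trip recovers z
```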
