In my previous post I tried to solve a differential equation using a port of Octave's adaptive Runge-Kutta 4/5 integration algorithm. It failed with energy explosions. Today I tried a different, much simpler integration algorithm, and the energy explosions seem to be gone.

Velocity Verlet integration can be implemented for this problem in a few lines of code:

#include <math.h>

// compute acceleration from position
static inline void f(double *a, const double x[2]) {
  a[0] = -exp(x[1] * x[1]) * x[0];
  a[1] = -exp(x[0] * x[0]) * x[1];
}

// Velocity Verlet integration
void compute() {
  const double h = 0.01;
  double x[2] = { 0, 1 };
  double v[2] = { 1, 1 };
  double a[2] = { 0, 0 };
  f(a, x);
  v[0] += 0.5 * h * a[0];
  v[1] += 0.5 * h * a[1];
  while (1) {
    x[0] += h * v[0];
    x[1] += h * v[1];
    f(a, x);
    v[0] += h * a[0];
    v[1] += h * a[1];
  }
}

You can download C99 source code for a JACK client which sonifies the chaotic coupled oscillators. I've been running it for 45 minutes at 48kHz sample rate with no explosions yet, but be careful in case this is still transient behaviour...

No explosions may seem like a good thing, but what if the original differential equations are truly explosive, and the stability is a computational artifact? I still don't know the answer to that question.

]]>Simple harmonic motion is the solution to the differential equation:

\[\frac{\partial^2}{\partial t^2} x = -\omega^2 x\]

Interested in chaos, I wanted to make the angular frequency \(\omega\) cross-coupled in a pair of oscillators, and I came up with this differential equation:

\[\begin{aligned} \frac{\partial^2}{\partial t^2} x &= -e^{y^2} x \\ \frac{\partial^2}{\partial t^2} y &= -e^{x^2} y \end{aligned}\]

It sounds something like this: audio snippet.

I initially experimented with Octave's ode45() function, but it was rather slow, so I ported it to C99 (specialized to 4-vectors containing the displacement and velocity of each oscillator). Unfortunately it exploded after some time, with the amplitude of the oscillators swinging ever larger and the frequency of oscillation getting very high too (which meant that the adaptive-step-size Runge-Kutta integration scheme would effectively get stuck and stop making progress).

Investigating this crisis, I thought to plot the energy of the system, and sure enough it exploded:

So this experiment failed; I'll have to try some other coupling expressions to see whether they suffer the same fate. Eventually I want to try controlling chaos with small perturbations that nudge the oscillators into unstable periodic orbits of various kinds, but no joy this week.

You can download the C source code for the integration calculations, gnuplot source code for the diagrams, and Octave source code for converting to audio.

]]>Around this time last year I was playing around with a physics-based ray-tracer for spherically curved space. In spherical space the ray geodesics eventually wrap around, meeting at the opposite pole to the observer. To compound the sphericity I used a projection that wraps the whole sphere-of-view from a point into a long strip.

It's been so long I've forgotten the details of how it works, but I embedded the 3D spherical space in 4D Euclidean (flat) space. I represented ray directions by points on the "equator" around the ray source, and a lot of trigonometry was involved to transform these ray directions appropriately when tracing the rays through curved space. Eventually I optimized the code to use simpler functions like sqrt and arithmetic instead of costly sin and cos calls.

The materials are all physically based, with refractive index varying with simulated light wavelength, which gives a rainbow effect when different colours are refracted by different angles. To get the final image requires tracing a monochrome image at many different wavelengths, which are then combined into the XYZ colour space using tristimulus response curves. I collected some Wikipedia articles together into a little A5 booklet: colour.pdf (and a version with the pages rearranged for printing using bookletimposer: colour-booklet.pdf).

Here are some miscellaneous links related to how Prismatic works:

- Projection
- transformation projection (section cubic to/from spherical map)
- Lambert equal-area projection
- spherical coordinates
- Ray-surface intersection distance
- solve a = b cos x + c sin x
- double angle formulae
- Reflection and refraction
- Lambertian reflectance
- Fresnel equations
- Snell's law
- reflect()
- refract()
- Kramers-Kronig relations
- Absorption and emission
- Beer-Lambert law
- complex refractive index
- absorption spectroscopy
- emission spectrum
- Colour, wavelength, CIE XYZ, sRGB
- colour matching functions (section database / CMFs)
- sRGB specification
- illuminant D65
- CMYK colour model
- Materials
- refractiveindex.info
- corundum
- sapphire
- ruby and sapphire
- ruby
- chemistry webbook
- chemistry webbook UV/visible wavelengths
- crown glass
- flint glass

The prismatic code itself is online at code.mathr.co.uk/prismatic.

]]>This week I discovered just how bad the acoustics in my front room are. At my typical listening position (sat in front of the computer) there are two huge spikes in the frequency response at around 45Hz and 140Hz (measured with JAAA, an audio analyzer). Unsurprisingly this has a big effect on the music I make, to the extent that some of it sounds wrong to my ears when I listen in other environments (even including other spots in the same room):

- source audio composed and recorded digitally
- recorded with mics at four different listening positions

So I was looking around for ways to mitigate the problem, starting with simple things like moving the desk and speakers to a different position in the room. But my desk is heavy and bulky and has a zillion cables dangling all over it, so moving it around is awkward. Referring back to my blog post about modes on a plate I thought computer simulation could be part of the answer.

Happily I found a really nice paper full of equations (even two different room response modelling algorithms) and a section on just the problem I'm facing (namely optimising speaker and listener positions to get the frequency response as flat as possible):

Room Sizing and Optimization at Low Frequencies

Trevor J. Cox, Peter D'Antonio, and Mark R. Avis

J. Audio Eng. Soc., Vol. 52, No. 6, 2004 June

I implemented in Haskell (source code links at the end of this
post) the first prediction model in the paper, the **modal
decomposition model**. I made a couple of slight changes
to the equations in the paper: the term \(w_n \delta_n\) is a bit
redundant, because \(\delta_n\) has a division by \(w_n\), so I
avoid dividing and multiplying again (which avoids the dreaded NaN when
\(w_n\) is \(0\)). Here are some of its predictions (click for
larger):

The yellow room is where my computer is, and has roughly accurate
room size and speaker and listener positions, but I guessed the wall
absorption coefficients (and the algorithm to convert absorption
coefficients to admittances, I used Daniel A. Russell's
Absorption Coefficients and Impedance).
The bathroom likewise (though I have no
speakers in the bathroom and I don't sing in the shower), and the
standard room size and absorption is from the paper but I picked the
positions myself. I used \(343\ \mathrm{m\,s^{-1}}\) as the speed of sound.
I used **gnuplot** to smooth the computed data using its
kernel density method, as the specific locations of the narrow peaks
and troughs in the response are very sensitive to slight variations in
the parameters.

Then I generated some
pink noise
with **audacity**, and recorded it for analysis. Note that
this measures the combined frequency response of the sound-card DAC, the
speaker cones, the mics, and the recorder ADC; as well as the room response
I'm interested in - so it's not really a proper test, just enough to get
a rough idea. I also performed the test with stereo speakers, the modal
model code can be extended for more speakers quite easily but I didn't use
that for the graph above.

I wrote a small C program using **fftw** and
**sndfile** to get the spectrum (I used ~21 seconds), and
more gnuplot stuff to smooth it - the FFT has linear frequency but I
wanted to convert it to logarithmic frequency, and gnuplot kernel density
smoothing needs to know the resolution of the data, which increases with
frequency when taking logs. I added the modal prediction to the graph;
it's actually not too far out:

Source code download links:

]]>This all started when I was trying to add 3D sound to the reboot of Interstellar Interference. I found some HRIRs (head-related impulse responses), and I encoded the 3D sources into 1st-order ambisonics, then decoded the ambisonics to virtual speaker locations corresponding to each HRIR and convolved. But it was epically slow (220x slower than realtime), and didn't even sound very good, so I thought about modelling the head in a less brute-force way.

So the problem became, given a source location relative to the oriented head, find a simple filter that makes the far ear quieter than the near ear, taking into account diffraction (low frequencies are shadowed much less than high frequencies). I implemented the wave equation in a couple of OpenGL shaders using Fragmentarium (a simple GLSL IDE), and got some response curves for frequencies spaced every octave from about 75Hz to 4.8kHz. But I also was intrigued by the interference between the reflections from the edge of the screen, and that got me thinking about resonant modes in acoustics.

The wave equation simulation I made needed about 50k iterations to get a result for a single frequency, taking long enough to get annoying. The wave equation constrains a function \(F\) of time and space like this:

\[ \frac{\partial^2}{{\partial t}^2} F = c^2 \frac{\partial^2}{{\partial x}^2} F \]

where \(c\) is the speed of sound. Apparently some separation of variables technique that I didn't quite understand can be applied, which gives an equation that doesn't depend on time:

\[ \frac{\partial^2}{{\partial x}^2} F = -\left(\frac{\omega}{c}\right)^2 F \]

where \(\omega\) is the angular frequency of the wave. This made me think of eigensystems, which have:

\[ M v = \lambda v \]

where typically \(M\) is a matrix, \(v\) is a vector, and \(\lambda\) is a scalar. The solutions are pairs \((\lambda, v)\) of eigenvalues and eigenvectors.

So I needed to turn the second spatial derivative into a matrix, which is conceptually not too hard, but the implementation was a little grotty. In 1D, one can approximate like this (up to a factor of \(h\) or \(h^2\) or something, which turns out to be irrelevant for the purposes of this post):

\[ \frac{\partial^2}{{\partial x}^2} F \approx F(x-h) - 2 F(x) + F(x + h)\]

where \(h\) is the distance between cells of a discrete mesh approximating the continuum. Modelling \(F\) as a 1D vector, this gives a matrix like:

\[ \left( \begin{array}{rrrr} \ddots & & & \\ 1 & -2 & 1 & \\ & 1 & -2 & 1 \\ & & & \ddots \end{array} \right) \]

with all the \(-2\) along the main diagonal. This doesn't take into account what happens at the ends of the mesh; to make \(F\) zero there I set the top-left-most and bottom-right-most values to 1 (all the elements are zero unless otherwise specified - leaving a blank space is easier to read). In 2D, the stencil for a single point (previously \((1, -2, 1)\) in 1D) now looks like this:

\[ \left( \begin{array}{rrr} & 1 & \\ 1 & -4 & 1 \\ & 1 & \end{array} \right) \]

But we have to unwrap the 2D grid into a 1D vector, and make a 2D matrix (as before) to solve for eigenvalues and eigenvectors. With the same boundary conditions as before, here's what the resulting \(16 \times 16\) matrix looks like for a \(4 \times 4\) grid:

\[ \left( \begin{array}{rrrrrrrrrrrrrrrr} 1 & & & & & & & & & & & & & & & \\ & 1 & & & & & & & & & & & & & & \\ & & 1 & & & & & & & & & & & & & \\ & & & 1 & & & & & & & & & & & & \\ & & & & 1 & & & & & & & & & & & \\ & 1 & & & 1 & -4 & 1 & & & 1 & & & & & & \\ & & 1 & & & 1 & -4 & 1 & & & 1 & & & & & \\ & & & & & & & 1 & & & & & & & & \\ & & & & & & & & 1 & & & & & & & \\ & & & & & 1 & & & 1 & -4 & 1 & & & 1 & & \\ & & & & & & 1 & & & 1 & -4 & 1 & & & 1 & \\ & & & & & & & & & & & 1 & & & & \\ & & & & & & & & & & & & 1 & & & \\ & & & & & & & & & & & & & 1 & & \\ & & & & & & & & & & & & & & 1 & \\ & & & & & & & & & & & & & & & 1 \end{array} \right) \]

As you can see, it's pretty sparse, with very few non-zero values.
Solving eigensystems is not exactly trivial, but luckily several good
free software packages provide well-tested implementations. I used GNU
Octave, which has two key functions: **spconvert** takes a
dense matrix representation of a sparse matrix and builds a sparse matrix,
and **eigs** solves the sparse eigensystem - I chose to let it find
the 35 smallest-magnitude eigenvalues and their corresponding eigenvectors,
which correspond to the lowest-frequency modes.

Experimenting with image manipulation in Octave was less fruitful, so I saved the raw data and processed it with a small C program. The grid sizes I used were around \(256 \times 256\). Constructing the sparse matrix took about 5 times longer than it took to solve the eigensystem, which goes to show that interpreted loops in Octave are to be avoided at all costs. Enough maths, here's some pictures, first some eigenmodes for a square plate (I think some aren't as symmetric as they could be, due to duplicated eigenvalues giving a free choice of eigenvectors):

Rectangular plates are much less interesting; here's 2x1:

And 3x2:

Source code downloads:

- modes.m
- GNU Octave source to generate and solve a sparse eigensystem, outputs raw data.
- modes.c
- C99 source to take the raw data and turn it into pretty pictures.

Each source has hardcoded values (e, m, n) near the top; these must match across both files or bad things will happen.

]]>The first part of the process is a ball in free flight from one step down to the next:

Consider gravitational potential energy \(m g h\) (where \(m\) is mass, \(g\) is the acceleration due to gravity, and \(h\) is the height) at the peak height \(H\) of the flight path over steps of height \(S\), relative to A and B:

\[\begin{aligned} E_A &= m g (H - S) \\ E_B &= m g H \end{aligned}\]

Consider kinetic energy \(\frac{1}{2} m v^2 \) at A and B:

\[\begin{aligned} E_A &= \frac{1}{2} m v_A^2 \\ E_B &= \frac{1}{2} m v_B^2 \end{aligned}\]

Some simple algebra gives:

\[\begin{aligned} v_A &= \sqrt{ 2 g (H - S) } \\ v_B &= \sqrt{ 2 g H } \end{aligned}\]

When in flight, the only force acting on the ball is gravity. Using Newton's law of motion relating force and acceleration \(F = m a\) and integrating with the initial conditions at \(A\) with \(t = 0\) gives:

\[\begin{aligned}\frac{\partial^2}{{\partial t}^2} y & = -g \\ \frac{\partial}{\partial t} y &= v_A - g t \\ y &= r + v_A t - \frac{1}{2} g t^2 \end{aligned} \]

where \(y\) is the height of the ball center and \(r\) is the ball radius. Now we can find the duration \(T_f\) of the flight, which ends when \(y(T_f) = y(0) - S\):

\[\begin{aligned} r - S &= r + v_A T_f - \frac{1}{2} g T_f^2 \\ \frac{1}{2} g T_f^2 - v_A T_f - S &= 0 \\ T_f &= \frac{v_A + \sqrt{ v_A^2 + 2 g S } }{ g } \end{aligned}\]

The second part of the process is a compressive bounce when the ball reaches the lower step:

Modelling the bounce by a damped harmonic oscillator gives an equation of motion, where \(d\) is the maximal squashing of the ball:

\[y = r \left( 1 - d \sin (\beta t) e^{-\gamma t} \right)\]

Here \(\beta\) determines the speed of the bounce and \(\gamma\) determines the energy lost during the bounce. Using the boundary conditions \(v_C = v_B\) and \(v_D = v_A\) for a smooth transition from free flight to bouncing, and setting \(T_b\) to be the duration of the bounce (time of first return to initial \(y\) value), we can differentiate the equation of motion and solve for the unknowns:

\[ \begin{aligned} \frac{\partial}{\partial t} y &= r d e^{-\gamma t} \left( \gamma \sin(\beta t) - \beta \cos(\beta t) \right) \\ v_B &= r d \beta \\ \beta &= \frac{v_B}{r d} \\ T_b &= \frac{\pi}{\beta} \\ v_A &= r e^{-\gamma T_b} d \beta \\ \gamma &= -\frac{\log \frac{v_A}{r d \beta}}{T_b} \end{aligned} \]

The time taken for the whole loop is \(T = T_f + T_b\). Assuming the ball is initially spherical, and that volume remains constant, we can compute the squashing factor using the property of an ellipsoid that \(V = k r_x r_y r_z\) assuming that \(r_x = r_z\):

\[ \begin{aligned} r_y &= y \\ r_x &= \sqrt{\frac{r^3}{y}} \end{aligned} \]

Implementing all these equations in the Haskell programming language using the Diagrams library for visualization gives some cute animated GIFs:

You can download the full source code for this post.

]]>This toy physics demo featuring inflatable donuts is from April last year, but I didn't get around to blogging about it before. The inline code here is all from the physics tick function, and involves calculating all the forces on each of the vertices of the mesh approximating each torus.

The first step is to find the internal spring forces within
each mesh in isolation. For each of the **P** meshes, we loop
over each of its **M*N** vertices, considering a **B*B**
neighbourhood of vertices on the surface. We accumulate the difference
between the vertex and its neighbour vertex in **dx**, with
the length of this vector in **ldx**. The difference between
this length and the target length **s->l** for this link
determines a force, which we accumulate in **t->f**.

/* intra-mesh spring forces */
#pragma omp parallel for
for (int p = 0; p < P; ++p) {
  for (int m = 0; m < M; ++m) {
    for (int n = 0; n < N; ++n) {
      for (int i = -B/2; i <= B/2; ++i) {
        for (int j = -B/2; j <= B/2; ++j) {
          int mm = (m + i + M) % M;
          int nn = (n + j + N) % N;
          if (i == 0 && j == 0) { nn = (n + N / 2) % N; }
          R dx[D];
          R ldx = 0;
          for (int d = 0; d < D; ++d) {
            dx[d] = s->x[p][mm][nn][d] - s->x[p][m][n][d];
            ldx += dx[d] * dx[d];
          }
          if (!(ldx > 0)) { continue; }
          ldx = sqrt(ldx);
          R k = stiffness * (s->l[p][m][n][i+B/2][j+B/2] - ldx) / ldx;
          for (int d = 0; d < D; ++d) {
            t->f[p][m][n][d] -= dx[d] * k;
          }
        }
      }
    }
  }
}

To simplify the calculation of the forces between different meshes,
we squash the torus mesh down to a loop. We accumulate the center of mass
of each segment in **t->x** and the center of mass of the
whole torus in **t->x0**.

/* mesh center circle */
#pragma omp parallel for
for (int p = 0; p < P; ++p) {
  for (int m = 0; m < M; ++m) {
    for (int n = 0; n < N; ++n) {
      for (int d = 0; d < D; ++d) {
        t->x[p][m][d] += s->x[p][m][n][d] * s->m[p][m][n];
        t->x0[p][d] += s->x[p][m][n][d] * s->m[p][m][n];
      }
      t->m[p][m] += s->m[p][m][n];
      t->m0[p] += s->m[p][m][n];
    }
    for (int d = 0; d < D; ++d) {
      t->x[p][m][d] /= t->m[p][m];
    }
  }
  /* divide the whole-torus center of mass once, after all segments are summed */
  for (int d = 0; d < D; ++d) {
    t->x0[p][d] /= t->m0[p];
  }
}

We also use the previously computed center of mass of segments to model the air pressure inside the torus - this is a constant outwards force on each vertex from the center of its segment.

/* inflate */
#pragma omp parallel for
for (int p = 0; p < P; ++p) {
  for (int m = 0; m < M; ++m) {
    for (int n = 0; n < N; ++n) {
      R dx[D];
      R ldx = 0;
      for (int d = 0; d < D; ++d) {
        dx[d] = s->x[p][m][n][d] - t->x[p][m][d];
        ldx += dx[d] * dx[d];
      }
      ldx = sqrt(ldx);
      for (int d = 0; d < D; ++d) {
        t->f[p][m][n][d] += pressure * dx[d] / ldx;
      }
    }
  }
}

We now check for collisions. We only need to consider nearby meshes,
which we test for assuming the meshes are bounded by spheres
(**s->R,s->r** are the initial radii of each torus mesh).
If this test passes, we loop over nearby segments. Assuming they are
circular, the amount of overlap is scaled by a power law to turn it
into a repulsive force between the two segments, stored in **t->g**.

/* inter mesh forces */
#pragma omp parallel for
for (int p = 0; p < P; ++p) {
  for (int q = p + 1; q < P; ++q) {
    R l = 0;
    for (int d = 0; d < D; ++d) {
      l += (t->x0[p][d] - t->x0[q][d]) * (t->x0[p][d] - t->x0[q][d]);
    }
    l = sqrt(l);
    if (l < s->R[p] + s->r[p] + s->R[q] + s->r[q] + 0.01) {
      for (int mp = 0; mp < M; ++mp) {
        for (int mq = 0; mq < M; ++mq) {
          R dx[D];
          R ldx = 0;
          for (int d = 0; d < D; ++d) {
            dx[d] = t->x[p][mp][d] - t->x[q][mq][d];
            ldx += dx[d] * dx[d];
          }
          if (!(ldx > 0)) { continue; }
          ldx = sqrt(ldx);
          R k = pow(fmax(0.01, ldx - (s->r[p] + s->r[q])), -repulsion) / ldx;
          for (int d = 0; d < D; ++d) {
            R f = dx[d] * k;
            t->g[p][mp][d] += f;
            t->g[q][mq][d] -= f;
          }
        }
      }
    }
  }
}

For smooth lighting we need to calculate the surface normals **s->n** at each
vertex, which we assume point away from the center of the corresponding
segment.

/* normals */
#pragma omp parallel for
for (int p = 0; p < P; ++p) {
  for (int m = 0; m < M; ++m) {
    for (int n = 0; n < N; ++n) {
      R l = 0;
      for (int d = 0; d < D; ++d) {
        s->n[p][m][n][d] = s->x[p][m][n][d] - t->x[p][m][d];
        l += s->n[p][m][n][d] * s->n[p][m][n][d];
      }
      l = sqrt(l);
      for (int d = 0; d < D; ++d) {
        s->n[p][m][n][d] /= l;
      }
    }
  }
}

Almost all the forces are now gathered; there just remain gravity
and buoyancy, which both act in the vertical direction - gravity is a
constant, and buoyancy is proportional to depth below the surface of
the liquid. For each vertex of each mesh, force **f**
is turned to acceleration **a** by dividing by mass
and accumulated to vertex velocity **s->v**. Friction with
the environment (useful to ensure stability of the simulation) slows down
each vertex, and finally the vertex position **s->x** is updated.

/* accumulate */
#pragma omp parallel for
for (int p = 0; p < P; ++p) {
  for (int m = 0; m < M; ++m) {
    for (int n = 0; n < N; ++n) {
      for (int d = 0; d < D; ++d) {
        R f = t->f[p][m][n][d] + t->g[p][m][d];
        R a = f / s->m[p][m][n];
        if (d == D - 1) {
          a -= gravity + bouyancy * fmin(0, s->x[p][m][n][d]);
        }
        s->v[p][m][n][d] += a * dt;
        s->v[p][m][n][d] *= resistance;
        s->x[p][m][n][d] += s->v[p][m][n][d] * dt;
      }
    }
  }
}

You can download the full rubber dinghy rapids source code. You can compile it like:

gcc -std=c99 -Wall -pedantic -Wextra -march=native -O3 -fopenmp -o dinghy dinghy.c -lglfw -lGLEW -lGL

]]>Spin and jerk a hanging rope such that it is excited into a variety of modes.

This is from a year ago but I didn't get around to posting it before.

Videos available on the Internet Archive:

You can get the code as part of *dr1* here:

git clone http://code.mathr.co.uk/dr1.git

Or browse dr1 on Gitorious, or dr1 on code.mathr.co.uk.

]]>