**UPDATE** My OpenGL 3.3 version may have been faster, but modifications to the original program are faster still: a misapplied patch had been slowing things down drastically, and once that was fixed the original became much easier to optimize. Check this rrv issue thread for more details.

Radiosity is a method for computing diffuse lighting. Unlike ray tracing, radiosity is viewpoint-independent: the lighting calculations can be performed once for a given scene, then visualized many times with different virtual camera positions. Ray tracing, on the other hand, also supports specular reflections and transparency, so eventually a more complete renderer could combine both techniques.
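Concretely, the discrete equation such a method solves is the standard radiosity equation (stated here for context, not taken from rrv's code):

```latex
B_i = E_i + \rho_i \sum_{j} F_{ij} B_j
```

where $B_i$ is the radiosity of patch $i$, $E_i$ its emission, $\rho_i$ its diffuse reflectance, and $F_{ij}$ the form factor from patch $i$ to patch $j$. The form factors depend only on scene geometry, which is why the solution is viewpoint-independent.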

While searching for radiosity implementations I found rrv, which uses OpenGL to render the view from each patch (triangle) in the scene. My first experiences were disappointing - it was very slow - but that turned out to be missing compiler optimisation flags. I then found some further ways to optimize it: first, using vertex buffers instead of glBegin(), so that the scene geometry is uploaded to the GPU only once instead of once per patch; second, using a large flat array instead of a C++ std::map to accumulate the results of rendering. These optimisations gave a speed boost of around 7x, but I wasn't satisfied.
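The accumulation change is simple but pays off because it runs once per rendered pixel. A minimal sketch of the idea (function and variable names are illustrative, not rrv's actual identifiers): patch IDs are dense integers, so a flat vector indexed by ID can replace a tree-based std::map.

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <vector>

// Slow path: every rendered pixel costs an O(log n) red-black-tree
// lookup (plus allocation on first touch of a patch).
void accumulateMap(std::map<uint32_t, double> &acc,
                   uint32_t patchId, double formFactor) {
    acc[patchId] += formFactor;
}

// Fast path: patch IDs are dense (0 .. patchCount-1), so a flat array
// sized once up front turns the same update into a single indexed add.
void accumulateFlat(std::vector<double> &acc,
                    uint32_t patchId, double formFactor) {
    acc[patchId] += formFactor;
}
```

Both produce identical sums; the flat array just trades a little memory (8 bytes per patch) for much cheaper per-pixel updates.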

I ended up porting the radiosity renderer core to run almost entirely on the GPU, using OpenGL 3.3 (though it could probably be ported to OpenGL 2.1 with a few extensions like floating point textures and framebuffers). The result is around 30x faster than the original OpenGL 1 implementation. Here's rrv's room4 demo scene with lighting calculated by my port in a little over 12 minutes (the visualizer code remains unchanged):

Another change I made involved the form factor calculations for the hemicube projection. Radiosity ideally projects the scene onto a hemisphere and from there down to a circle, but rectangular grids are more convenient for computers. So radiosity implementations tend to project onto a hemicube instead, rasterizing the scene 5 times: once for the top and once for each of the four sides. Each pixel in the result is then scaled by a delta form factor, so that it corresponds more closely to the hemisphere-and-circle projection.

rrv's form factor calculations used a product of cosines, which looked suspect to me as it didn't take into account the edges and corners of the hemicube, so I did some searching and found a paper which gave some different formulas:

The Hemi-cube: a Radiosity Solution for Complex Environments

Michael F. Cohen and Donald P. Greenberg

ACM SIGGRAPH Volume 19, Number 3, 1985
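The Cohen-Greenberg delta form factors can be sketched as follows, for a unit hemicube (patch at the origin facing +z, all faces at distance 1). A useful sanity check is that the delta form factors over all five faces sum to 1 as resolution grows, since the hemicube covers the whole hemisphere. The function names and resolution here are illustrative, not rrv's actual code.

```cpp
#include <cassert>
#include <cmath>

static const double kPi = 3.14159265358979323846;

// Delta form factor for a top-face pixel centred at (x, y, 1),
// with pixel area dA.
double deltaTop(double x, double y, double dA) {
    double d = x * x + y * y + 1.0;
    return dA / (kPi * d * d);
}

// Delta form factor for a side-face pixel at height z in (0, 1],
// with in-face coordinate y, pixel area dA. By symmetry the same
// formula serves all four side faces.
double deltaSide(double y, double z, double dA) {
    double d = y * y + z * z + 1.0;
    return z * dA / (kPi * d * d);
}

// Sum the delta form factors over all five faces at resolution n;
// this should approach 1 as n grows.
double totalFormFactor(int n) {
    double h = 2.0 / n;    // pixel width on a face spanning [-1, 1]
    double dA = h * h;     // pixel area
    double sum = 0.0;
    for (int i = 0; i < n; ++i) {
        double u = -1.0 + (i + 0.5) * h;
        for (int j = 0; j < n; ++j) {
            double v = -1.0 + (j + 0.5) * h;
            sum += deltaTop(u, v, dA);
            if (v > 0.0)   // side faces only cover z in (0, 1]
                sum += 4.0 * deltaSide(u, v, dA);
        }
    }
    return sum;
}
```

Note that a plain product of cosines lacks the normalising distance term in the denominator, which is exactly what accounts for the edges and corners of the hemicube.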

I implemented them in a test program and plotted a comparison between RRV2007 (magenta) and Cohen1985 (green):

The difference is small, but visible in the visualizer as slight shape differences between quantized bands on the gradients. Here's a comparison of the output after one step with the two different sets of form factors, with the difference amplified 64 times (mid-grey means equal output):

When time allows, I have ideas for some additional features, such as lighting groups (compute the radiosity for each light source separately, then combine the results at visualization time; assuming linearity holds, this allows the brightness, though not the colour, of each light to be adjusted independently) and scene symmetry (for example an infinite corridor repeating every 5m or so; the symmetry means the radiosity of translationally equivalent parts of the scene must be identical).
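The linearity assumption is safe for the standard radiosity model. In matrix form, with $R = \mathrm{diag}(\rho_i)$ and $F$ the form factor matrix:

```latex
B = E + R F B \quad\Longrightarrow\quad B = (I - R F)^{-1} E
```

so the solution $B$ is linear in the emission vector $E$, and solutions computed for each light's emission separately can be scaled and summed at visualization time (as long as the combination stays within displayable range).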

I put my changes in a branch at code.mathr.co.uk/rrv, which may end up merged to the upstream at github.com/kdudka/rrv.