Volume rendering is something I have been playing with for a while. I have a voxel renderer that I originally developed as a final project for CS 488 at Waterloo, and I have continued to improve it since.
It is a realtime binary voxel renderer – it operates on volume data defined as a grid of points where each point is either on or off (i.e. inside or outside the surface). Since the data is just binary, it compresses extremely well in an octree. All the algorithms are written to operate on this octree format, so the entire 3D grid never has to be decompressed in RAM. This allows for fairly good performance. Each voxel is drawn as a simple circle, conceptually somewhat similar to a splat renderer. This puts less load on the rendering pipeline than using lots of very small triangles, as when using marching cubes for volume rendering.
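To make the octree idea concrete, here is a minimal Python sketch (not the renderer's actual code – the names and structure here are my own illustration). It builds an octree over a binary grid and collapses any region whose eight children are uniform into a single leaf, which is where the compression of large empty or solid regions comes from:

```python
class OctreeNode:
    """A node of a binary-voxel octree: either a uniform leaf or 8 children."""
    __slots__ = ("value", "children")

    def __init__(self, value=None, children=None):
        self.value = value        # True/False for a uniform leaf, None otherwise
        self.children = children  # list of 8 OctreeNodes, or None for a leaf

def build(grid, x, y, z, size):
    """Recursively build an octree over the cube grid[x:x+size, y:y+size, z:z+size]."""
    if size == 1:
        return OctreeNode(value=grid[x][y][z])
    h = size // 2
    children = [build(grid, x + dx, y + dy, z + dz, h)
                for dx in (0, h) for dy in (0, h) for dz in (0, h)]
    # If all eight children are uniform leaves with the same value, collapse
    # them into one leaf -- large solid or empty regions cost a single node.
    if all(c.children is None and c.value == children[0].value for c in children):
        return OctreeNode(value=children[0].value)
    return OctreeNode(children=children)

def count_nodes(node):
    """Total node count -- a rough proxy for the compressed size."""
    if node.children is None:
        return 1
    return 1 + sum(count_nodes(c) for c in node.children)
```

An empty 8×8×8 grid collapses to a single leaf, and turning on one voxel only expands the nodes along that voxel's path, far fewer than the 512 cells of the raw grid.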
The main difficulty of this approach is producing a normal value for lighting each rendered voxel. Because the data is discrete rather than continuous, there are no true normals, and some form of estimator must be used. If a large region of voxels is considered at once, then fairly precise normals can be computed, but they will tend to be blurred by the size of the region. If a small number of voxels is considered, then the normals will pick up small details, but won’t be very precise. My solution computes estimates at different levels of the octree, and replaces the precise low-resolution estimates with less precise high-resolution estimates where a discontinuity is detected. This approach is based partly on the papers “Normal Estimation in 3D Discrete Space” by Roni Yagel, Daniel Cohen, and Arie Kaufman, and “Normal Computation for Discrete Surfaces in 3D Space” by Grit Thürmer and Charles A. Wüthrich.
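One simple estimator in this family, sketched below in Python purely for illustration (this is not the post's actual algorithm, and `occupied`, `estimate_normal`, and the radius parameter `r` are my own names), sums the offsets toward empty space in a neighbourhood around a voxel. The radius plays the role of the octree level in the trade-off described above: a larger `r` averages over more voxels, giving smoother but blurrier normals, while a small `r` picks up detail at the cost of precision:

```python
import math

def estimate_normal(occupied, p, r):
    """Estimate a unit normal at voxel p from a binary occupancy set.

    For every voxel in the (2r+1)^3 neighbourhood of p, add its offset
    vector if that voxel is empty; the sum points out of the surface.
    Returns None for interior voxels, where no direction dominates.
    """
    nx = ny = nz = 0.0
    px, py, pz = p
    for dx in range(-r, r + 1):
        for dy in range(-r, r + 1):
            for dz in range(-r, r + 1):
                if (dx, dy, dz) == (0, 0, 0):
                    continue
                if (px + dx, py + dy, pz + dz) not in occupied:
                    nx += dx
                    ny += dy
                    nz += dz
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    if length == 0.0:
        return None
    return (nx / length, ny / length, nz / length)
```

On a flat slab of voxels this recovers the exact upward normal; on a stair-stepped diagonal surface, the same function with a larger `r` averages the steps away, which is exactly the blur-versus-detail trade-off the multi-level scheme above is designed to manage.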
This renderer currently runs quite interactively on 512x512x512 volumes: once the vertex list has been built, it can be rotated at high frame rates, the normals can be recomputed after a small modification to the volume in about a tenth of a second, and the whole volume can be recomputed in about 2 seconds.
I’m interested in voxel representations because they seem like a good way of representing objects that are not natural to model with polygons, such as rocks. Volume representations are also much more natural targets for procedural modification.
really like the cellular automata render.
would be interesting to see the reverse process, so maybe degenerating a complete model of the Stanford bunny. a kind of voxel decay i suppose.
well done with these,
they look great.