Tuesday, April 15, 2008

Geometry Can Be Abstract, Too

When I first created the geometry loader for the materials I described earlier, I assumed there was a single geometry source that would suit all platforms and materials, and that every material would perform any conversions it needed on the source geometry.

As an example, a material that renders multiple instances of a model using shader constants requires creating multiple copies of the input geometry and inserting instance indices as an additional vertex component in the vertex stream. But with hardware support for instancing, the geometry stream for the model would be exactly the same as the source one.
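As a sketch of that first conversion, the expansion might look like this; the function name, the flat float layout, and the single appended index component are assumptions for illustration, not the engine's actual code:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical sketch: expand a vertex stream for shader-constant instancing.
// Each vertex is 'stride' floats; we emit 'nInstances' copies of the stream,
// appending the instance index as one extra float component per vertex so the
// vertex shader can look up its per-instance constants.
std::vector<float> ExpandForInstancing(const std::vector<float>& src,
                                       std::size_t stride, std::size_t nInstances)
{
    const std::size_t nVertices = src.size() / stride;
    std::vector<float> dst;
    dst.reserve(nInstances * nVertices * (stride + 1));
    for (std::size_t inst = 0; inst < nInstances; ++inst)
        for (std::size_t v = 0; v < nVertices; ++v)
        {
            // copy the original components of this vertex
            for (std::size_t c = 0; c < stride; ++c)
                dst.push_back(src[v * stride + c]);
            // append the instance index as the extra component
            dst.push_back(static_cast<float>(inst));
        }
    return dst;
}
```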

But more complex operations could be performed on a geometry stream, e.g. merging several meshes together, adding or removing components, tessellating higher-order primitives, etc. Therefore, what we are interested in is encapsulating in a geometry class the operations performed on any geometry data source, and letting the class perform them behind the scenes, without the client code having to deal with the specifics of geometry processing.

In order to do this, I've created a base structure called a Mesh, which is a container for both the raw data and its metadescription: vertex components, submeshes, etc. It goes like this:

class Mesh
{
protected:
    uint    nVertices;   // number of vertices in the vertex array
    uint    nIndices;    // number of indices in the index array
    uint    nStreams;    // number of vertex component streams
    uint    nBatches;    // number of submeshes
    Stream* pStreams;    // description of each vertex component stream
    Batch*  pBatches;    // description of each submesh
    float*  pVertices;   // raw vertex data
    uint*   pIndices;    // raw index data
};

The Stream and Batch structures describe the vertex components and the mesh subdivisions, respectively (I've taken these terms from Emil "Humus" Persson's demo framework).
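As a sketch, these two structures might look like the following; the names follow the framework loosely, but the exact fields are an assumption, not the original definitions:

```cpp
#include <cassert>

typedef unsigned int uint;

// Assumed layout for the vertex component description.
struct Stream
{
    uint nComponents;  // floats per vertex in this stream (e.g. 3 for a position)
    uint offset;       // float offset of the stream within an interleaved vertex
};

// Assumed layout for the submesh description.
struct Batch
{
    uint startIndex;   // first index of this submesh in the index array
    uint nIndices;     // index count for this submesh
    uint startVertex;  // first vertex referenced by this submesh
    uint nVertices;    // vertex count referenced by this submesh
};
```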

Such a structure is general enough to hold most types of geometry that today's hardware can handle. But it doesn't provide any mechanism to initialize, load or otherwise create this geometry data. To do that, we create specialized classes.

In doing this, Tom Forsyth's article on material abstraction comes in handy yet again. By generalizing his idea of texture sources, we can likewise define Mesh-derived classes that perform operations such as:

  • Load geometry from a source file, e.g. MeshN3D2 and MeshNVX2 would load geometry from the Nebula2 n3d2 and nvx2 file formats.

  • Merge geometry together, e.g. CompositeMesh would merge several source meshes into one.

  • Process the geometry somehow, e.g. NormalMesh would compute the normals for a triangle stream, and TransformMesh would apply a given transform (translation, rotation, scale) to the source geometry.

  • Prepare the geometry for some specific rendering, e.g. ShadowMesh would insert the quads for shader-based shadow volume extrusion, and InstanceMesh would insert instance indices into a stream made of multiple copies of some source geometry.

  • Expose an interface for direct geometry manipulation, e.g. BuilderMesh would allow the application to add custom vertices through a comprehensive interface: AddCoord, AddTexCoord, AddTriangle, etc.
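The pattern behind all of these can be sketched in a few lines: a Mesh base with one virtual Load(), a trivial source mesh, and a TransformMesh chained on top of it. The class and member names here are illustrative stand-ins, not the engine's real definitions:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

typedef unsigned int uint;

// Minimal base: one virtual Load() that fills the vertex/index arrays.
class Mesh
{
public:
    virtual ~Mesh() {}
    virtual bool Load() = 0;
    const std::vector<float>& Vertices() const { return vertices; }
protected:
    std::vector<float> vertices;  // interleaved vertex data
    std::vector<uint>  indices;
};

// A trivial source mesh, standing in for a file loader such as MeshN3D2.
class TriangleMesh : public Mesh
{
public:
    virtual bool Load()
    {
        static const float v[9] = { 1,0,0,  0,1,0,  0,0,1 };
        vertices.assign(v, v + 9);
        indices.clear();
        for (uint i = 0; i < 3; ++i)
            indices.push_back(i);
        return true;
    }
};

// TransformMesh: loads its source, then applies a uniform scale. A real
// version would scale positions only, using the stream descriptions.
class TransformMesh : public Mesh
{
public:
    TransformMesh(Mesh* src, float s) : pSource(src), scale(s) {}
    virtual bool Load()
    {
        if (!pSource || !pSource->Load())
            return false;
        vertices = pSource->Vertices();
        for (std::size_t i = 0; i < vertices.size(); ++i)
            vertices[i] *= scale;
        return true;
    }
private:
    Mesh* pSource;
    float scale;
};
```

Chaining meshes this way is what keeps the client code ignorant of geometry processing: it only ever calls Load() on the outermost mesh.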

And the usage of such specific classes would look like this:

Mesh* pSrcMesh = MeshN3D2::Create( "torus.n3d2" );
Mesh* pNormalMesh = NormalMesh::Create( pSrcMesh );

TransformMesh* pScaleMesh = TransformMesh::Create( pSrcMesh );
pScaleMesh->Scale( 2.0f );

CompositeMesh* pCompositeMesh = CompositeMesh::Create();
pCompositeMesh->Add( pSrcMesh1 );
pCompositeMesh->Add( pSrcMesh2 );
pCompositeMesh->Add( pSrcMesh3 );

Mesh* pInstanceMesh = InstanceMesh::Create( pSrcMesh );
Mesh* pShadowMesh = ShadowMesh::Create( pSrcMesh );

Now this kind of abstraction makes it possible to create all sorts of derived classes with only one virtual method, Load(), which would perform the required operations, fill the vertex and index arrays, and make them available either to the client application (e.g. to load vertex and index buffers) or to another mesh that uses it as a source.

This abstraction has yet another useful application: reusing geometry across several models. If several models are merged together in order to share the same vertex buffer while using a different material for each submesh, the trick is to register every mesh under a unique string, similar to the scheme Forsyth describes for textures, e.g.:

ShadowMesh(CompositeMesh(MeshN3D2("upper_body.n3d2"),MeshN3D2("lower_body.n3d2")))
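A registry keyed by such construction strings could be as simple as the following sketch; MeshRegistry, its interface, and the stand-in Mesh type are assumptions for illustration:

```cpp
#include <cassert>
#include <map>
#include <string>

// Stand-in for the engine's real mesh type.
struct Mesh {};

// Forsyth-style sharing: meshes are keyed by their construction string, so
// two requests for the same composite mesh get the same object back instead
// of duplicated buffers.
class MeshRegistry
{
public:
    // Returns the mesh already registered under 'key', or registers 'candidate'.
    Mesh* FindOrAdd(const std::string& key, Mesh* candidate)
    {
        std::map<std::string, Mesh*>::iterator it = meshes.find(key);
        if (it != meshes.end())
            return it->second;   // reuse the shared geometry
        meshes[key] = candidate;
        return candidate;
    }
private:
    std::map<std::string, Mesh*> meshes;
};
```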

Now if we have different materials for the upper and lower body of a model using this composite mesh, we would do the following:

Material* material1;
Material* material2;

// pMesh is the composite mesh registered under the string above
void* pMeshData1 = material1->Load( pMesh );
void* pMeshData2 = material2->Load( pMesh );

Internally, both materials would load the same mesh; but instead of duplicating data, they would reuse the same geometry buffers and switch to different vertex and index ranges when rendering.
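That range switch can be sketched as follows: each material keeps only a batch index into the shared mesh and draws its own slice of the shared index buffer. The actual draw call is omitted since it depends on the rendering API; SharedMesh and RangeFor are illustrative names, and Batch mirrors the structure described earlier:

```cpp
#include <cassert>
#include <vector>

typedef unsigned int uint;

struct Batch
{
    uint startIndex;  // first index of this submesh in the shared index buffer
    uint nIndices;    // index count for this submesh
};

struct SharedMesh
{
    std::vector<uint>  indices;  // one index buffer for the whole composite
    std::vector<Batch> batches;  // one entry per submesh / material
};

// The index range that the material owning 'batchId' should draw.
Batch RangeFor(const SharedMesh& mesh, uint batchId)
{
    return mesh.batches[batchId];
}
```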
