Saturday, October 25, 2008

The unfinished swan


It's been a while since I last wrote here, so before I start rambling about technology (which I plan to do soon), I'm linking one of those game concepts that really get to you: a game set in an entirely colorless world where you orient yourself by splattering ink on the walls and floors. A really, really interesting take.

The Unfinished Swan - Tech Demo 9/2008 from Ian Dallas on Vimeo.

Tuesday, April 15, 2008

Geometry Can Be Abstract, Too

When I first created the geometry loader for the materials I described earlier, I assumed there was a single geometry source that would suit all platforms and materials, and that every material would perform whatever conversions it needed on the source geometry.

As an example, a material that renders multiple instances of a model using shader constants requires creating multiple copies of the input geometry and inserting instance indices as an additional component in the vertex stream. But with hardware support for instancing, the geometry stream for the model would be exactly the same as the source one.
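Just to make that idea concrete, here is a standalone sketch of the kind of conversion such a material would perform (not the actual loader code; the function name and vertex layout are made up for the example):

// Standalone sketch of shader-constant instancing preparation: the source
// vertex data is replicated once per instance, and an instance index is
// appended to every vertex so the vertex shader can pick its per-instance
// constants. Purely illustrative.
#include <vector>

std::vector<float> BuildInstancedVertices( const std::vector<float>& srcVertices,
                                           unsigned int vertexSize,     // floats per source vertex
                                           unsigned int numInstances )
{
    std::vector<float> result;
    const unsigned int numVertices = static_cast<unsigned int>( srcVertices.size() / vertexSize );
    result.reserve( numVertices * ( vertexSize + 1 ) * numInstances );

    for ( unsigned int instance = 0; instance < numInstances; ++instance )
    {
        for ( unsigned int v = 0; v < numVertices; ++v )
        {
            // copy the original vertex...
            for ( unsigned int c = 0; c < vertexSize; ++c )
                result.push_back( srcVertices[v * vertexSize + c] );
            // ...and append the instance index as an extra component
            result.push_back( static_cast<float>( instance ) );
        }
    }
    return result;
}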

But there could be more complex operations performed on a geometry stream, e.g. merging several meshes together, adding or removing components, tessellating higher-order primitives, etc. Therefore, what we want is to encapsulate in a geometry class the operations performed on any geometry data source, and let the class perform them behind the scenes, without the client code having to deal with the specifics of geometry processing.

In order to do this, I've created a base structure called Mesh, which is a container for both the raw data and its metadescription: vertex components, submeshes, etc. It goes like this:

class Mesh
{
protected:
    uint    nVertices;   // total number of vertices
    uint    nIndices;    // total number of indices
    uint    nStreams;    // number of vertex component streams
    uint    nBatches;    // number of submeshes
    Stream* pStreams;    // vertex component descriptions
    Batch*  pBatches;    // submesh descriptions (vertex/index ranges)
    float*  pVertices;   // raw vertex data
    uint*   pIndices;    // raw index data
};

with the Stream and Batch structures describing the vertex components and the mesh subdivisions respectively (I've borrowed these terms from Emil “Humus” Persson's demo framework).
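I haven't settled on the exact layout of these two yet, but a minimal sketch of what they might contain (the field names here are mine, not Humus', and uint is the same typedef used in Mesh) could be:

// Rough sketch only: fields and names are illustrative, not final.
enum StreamType { ST_POSITION, ST_NORMAL, ST_TEXCOORD, ST_COLOR };

struct Stream
{
    StreamType eType;        // which vertex component this stream holds
    uint       nComponents;  // e.g. 3 for position, 2 for texcoord
    uint       nOffset;      // offset of the component within a vertex, in floats
};

struct Batch
{
    uint nFirstVertex;       // first vertex of this submesh in pVertices
    uint nNumVertices;       // number of vertices in the submesh
    uint nFirstIndex;        // first index of this submesh in pIndices
    uint nNumIndices;        // number of indices in the submesh
};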

A Mesh like this is general enough to hold virtually all the types of geometry that today's hardware can handle. But it doesn't provide any means to initialize, load or otherwise create the geometry data. To do that, we create specialized classes.

In doing this, Tom Forsyth's article on material abstraction comes in handy yet again. By generalizing his idea of texture sources, we can likewise define Mesh-derived classes that perform operations such as:

  • Load geometry from a source file, e.g. MeshN3D2 and MeshNVX2 would load geometry from the n3d2 and nvx2 file formats from nebula2.
  • Merge geometry together, e.g. CompositeMesh would merge several source meshes into one.
  • Process the geometry somehow, e.g. NormalMesh would compute the normals for a triangle stream, and TransformMesh would apply a given transform (translation, rotation, scale) to the source geometry.
  • Prepare the geometry for some specific rendering, e.g. ShadowMesh would insert the quads for shader-based shadow volume extrusion, and InstanceMesh would insert instance indices in a stream made of multiple copies of some source geometry.
  • Expose an interface for direct geometry manipulation, e.g. BuilderMesh would allow the application to add custom vertices through a comprehensive interface: AddCoord, AddTexCoord, AddTriangle, etc.

And the usage of such specific classes would look like:

Mesh* pSrcMesh = MeshN3D2::Create( "torus.n3d2" );

Mesh* pNormalMesh = NormalMesh::Create( pSrcMesh );

TransformMesh* pScaleMesh = TransformMesh::Create( pSrcMesh );
pScaleMesh->Scale( 2.0f );

CompositeMesh* pCompositeMesh = CompositeMesh::Create();
pCompositeMesh->Add( pSrcMesh1 );
pCompositeMesh->Add( pSrcMesh2 );
pCompositeMesh->Add( pSrcMesh3 );

Mesh* pInstanceMesh = InstanceMesh::Create( pSrcMesh );

Mesh* pShadowMesh = ShadowMesh::Create( pSrcMesh );

Now this kind of abstraction makes it possible to create all sorts of derived classes with only one virtual method, Load(), that performs the required operations, fills the vertex and index arrays, and makes them available either to the client application (e.g. to fill vertex and index buffers) or to another mesh that uses it as a source.
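As a sketch of what one of these derived classes might look like (assuming Load() returns a success flag, and a couple of helpers, CopyFrom and GetVertexSize, that I haven't written yet):

// Illustrative sketch: TransformMesh scales the vertices of a source mesh.
// It assumes the position component is the first three floats of every vertex.
class TransformMesh : public Mesh
{
public:
    static TransformMesh* Create( Mesh* pSource ) { return new TransformMesh( pSource ); }

    TransformMesh( Mesh* pSource ) : pSrcMesh( pSource ), fScale( 1.0f ) {}

    void Scale( float f ) { fScale = f; }

    virtual bool Load()
    {
        if ( !pSrcMesh->Load() )
            return false;

        // clone the source metadescription and raw arrays
        CopyFrom( pSrcMesh );                 // assumed helper: copies streams, batches, vertices, indices

        // apply the transform to the position component of every vertex
        uint vertexSize = GetVertexSize();    // assumed helper: floats per vertex
        for ( uint i = 0; i < nVertices; ++i )
        {
            float* pCoord = pVertices + i * vertexSize;
            pCoord[0] *= fScale;
            pCoord[1] *= fScale;
            pCoord[2] *= fScale;
        }
        return true;
    }

protected:
    Mesh* pSrcMesh;
    float fScale;
};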

Now this abstraction has yet another useful application: reusing geometry across several models. If several models are merged together in order to share the same vertex buffer while using a different material for each submesh, the trick is to keep a registry of all meshes keyed by a unique string, similar to the one Forsyth describes for textures, e.g.:

ShadowMesh(CompositeMesh(MeshN3D2("upper_body.n3d2"),MeshN3D2("lower_body.n3d2")))
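I haven't written this part yet, but the registry itself would be little more than a map from that unique string to the loaded mesh, checked by the factory before building anything (the names below are hypothetical):

// Hypothetical sketch of the mesh registry: the factory looks the key up
// before creating and loading a new mesh, so two materials asking for the
// same composite string end up sharing the same Mesh instance.
#include <map>
#include <string>

class MeshRegistry
{
public:
    Mesh* Find( const std::string& key )
    {
        std::map<std::string, Mesh*>::iterator it = meshes.find( key );
        return ( it != meshes.end() ) ? it->second : 0;
    }

    void Register( const std::string& key, Mesh* pMesh )
    {
        meshes[key] = pMesh;
    }

private:
    std::map<std::string, Mesh*> meshes;
};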

Now if we have different materials for the upper and lower body of a model using this composite mesh, we would do the following:

Material* material1 = Material::Create( "materials/upper_body" );   // hypothetical material name
void* pMeshData1 = material1->Load( pMesh );                        // pMesh: the shared composite mesh

Material* material2 = Material::Create( "materials/lower_body" );   // hypothetical material name
void* pMeshData2 = material2->Load( pMesh );

Internally, both materials would load the same mesh, but instead of duplicating data they would reuse the same geometry buffers and simply switch to different vertex and index ranges when rendering.

Wednesday, April 2, 2008

Yet Another Material Framework

Now that I'm working on some visually appealing demos for the benefit of the world, I've set myself the goal of developing a Demo Framework. Yet another one.

Yep, no graphics programmer has yet been able to use someone else's demo framework, despite the number of them readily available. And yet not only does everyone create their own, they all do it in mostly the same way: create a homogeneous DirectX and OpenGL wrapper, encapsulate your renderer, your models and your textures, and start implementing your fancy demos on top of them.

Quite probably that's exactly what I've been doing too, but along the way I've been trying to prove my point: no matter which implementation you are using, you need to plan for the future, you need to support as many different features as possible without sticking to the lowest common denominator of all your target platforms, and you need to be able to take advantage of the most powerful features of every platform or programming interface.

Curiously enough, that's exactly what Tom Forsyth proposed in [Forsyth]. In this great paper, Forsyth exposes the concept of abstract materials, meaning an abstract representation of what a shader does in the form of attributes and operations, instead of providing explicit shaders. Not only would such a representation hide the complexity of the shader code from the user, it would also readily enforce code reuse across multiple shaders that share the same operations. Redundant shader code is probably one of the nastiest issues in shader maintenance, and it is the one that originally interested me the most when I read the paper.

But then I found that the implementation is a little less clean than expected, because in order to correctly describe abstract materials you need the kind of expressive power that only comes from programmability, be it scripting or a comprehensive graphical representation of shader effects. The rest of the points in Forsyth's document fall into the “simple” category: texture encapsulation, material fallbacks, static vs. dynamic materials, etc. But the way he presents a Material Description, as an arbitrary enumeration of attributes and rendering flags, is far too generic, and would probably lead to some inextensible, bloated code aggregator. The goal of achieving maximum expressiveness is limited by whatever form of code generation we can turn the abstract Material Description into.

For the sake of argument, let's review some known forms of shader encapsulation.

Shader library

This approach is the one that works best because it is simple and grants the programmer control over the variety and complexity of the shaders. Basically, it means that shaders are entirely coded by a programmer on demand, possibly using some external tool (NVIDIA and ATI have excellent shader editors, packed with samples for the benefit of the public). These shaders are then assigned to a Material, which would encapsulate details of the process such as sorting, grouping or rendering passes, and expose requirements such as system parameters or input formats.

As an example, a PhongDiffuse material could require the Position, Texcoord and Normal vertex components, the DiffuseColor material parameter, the LightDirection and LightColor environment parameters, and the WorldViewProjection system parameter (let's keep it simple). This information is enough to generate a suitable mesh and render it to the screen, assuming all the required data are present.
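A hypothetical sketch of how such a library material could declare its requirements (the RequireComponent/RequireParameter methods and the enum names are made up for the example, not part of any real API):

// Illustrative sketch: a library material declares what it needs, and the
// framework checks those requirements against the mesh and the scene.
class PhongDiffuseMaterial : public Material
{
public:
    PhongDiffuseMaterial()
    {
        // vertex components required from the geometry
        RequireComponent( VC_POSITION );
        RequireComponent( VC_TEXCOORD0 );
        RequireComponent( VC_NORMAL );

        // parameters required from the material, the environment and the system
        RequireParameter( PARAM_DIFFUSE_COLOR );          // material
        RequireParameter( PARAM_LIGHT_DIRECTION );        // environment
        RequireParameter( PARAM_LIGHT_COLOR );            // environment
        RequireParameter( PARAM_WORLD_VIEW_PROJECTION );  // system
    }
};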

The problem, obviously, is that artists don't have any power to experiment and define their own materials unless they are willing to program shaders themselves. It is a known fact that artists and designers usually hate coding, but still love to customize their data and behaviors through some procedural representation. That's exactly the same as coding, but they don't perceive it as such (think ActionScript or Renderman), and that's something we must take into consideration.

Assembled fragments

Shader fragments were made official in DirectX 9, but by then several authors had already built their own fragment-based approaches for constructing shaders, myself included. The idea behind shader fragments is that if you can isolate reusable operations into code blocks, and describe how those blocks relate to each other (dependencies, order, inputs and outputs, etc.), you can in theory build shaders just by selecting which fragments a Material requires. [Hargreaves] and [Osterlind] present different approaches to splitting shaders into fragments that can then be reassembled into a meaningful whole. For example, the PhongDiffuse shader above could be assembled from named fragments: Projection (transform the position to clip space), DiffuseLight (compute the diffuse color component from the dot product of the normal and the incident light direction), and so on. If we wanted to add bump mapping to this Material, the normal would simply be computed in an additional code fragment. With a clever definition of dependencies between code fragments, ensuring that all required fragments are present at assembly time, shaders can be built this way.
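The bookkeeping behind that assembly step can be sketched roughly like this (a deliberately naive version with made-up structures; a real assembler also has to worry about interpolator allocation, register limits and so on):

// Naive sketch of fragment assembly: each fragment declares what it consumes
// and what it produces, and the assembler orders fragments so that every one
// of them runs after the fragments producing its inputs, then concatenates
// their code in that order.
#include <string>
#include <vector>
#include <set>
#include <cstddef>

struct Fragment
{
    std::string name;                  // e.g. "Projection", "DiffuseLight"
    std::vector<std::string> inputs;   // semantics it needs already computed
    std::vector<std::string> outputs;  // semantics it produces
    std::string code;                  // the shader code block itself
};

std::vector<Fragment> OrderByDependency( std::vector<Fragment> pending )
{
    std::vector<Fragment> ordered;
    std::set<std::string> available;   // semantics produced so far

    while ( !pending.empty() )
    {
        bool progress = false;
        for ( std::size_t i = 0; i < pending.size(); ++i )
        {
            bool ready = true;
            for ( std::size_t j = 0; j < pending[i].inputs.size(); ++j )
                if ( available.find( pending[i].inputs[j] ) == available.end() )
                {
                    ready = false;
                    break;
                }

            if ( ready )
            {
                available.insert( pending[i].outputs.begin(), pending[i].outputs.end() );
                ordered.push_back( pending[i] );
                pending.erase( pending.begin() + i );
                progress = true;
                break;
            }
        }
        if ( !progress )
            break;                     // unsatisfied dependency: a real assembler would report it
    }
    return ordered;
}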

This solution enforces reusability but moves some flexibility and experimentation into the hands of artists, with the programmer still retaining most of the control over how shaders actually work, and it is probably the most comprehensive approach of all. I must warn you, though: it requires some heavy thinking about the best way to mix fragments once the number of combinations starts growing exponentially.

I've seen other takes on the same solution that keep working with plain shader code but let named fragments be invoked from within the code of other shaders. Others use preprocessor directives to conditionally include or exclude shader code depending on a number of defined values, thus producing a number of code combinations. While working at Tragnarion, I was personally responsible for one of the worst shader assemblers ever: a builder script that programmatically output lines of code depending on the attributes present in the material description. It seemed like a good and simple idea at the time, but it quickly reached critical mass.
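Just to give an idea of the kind of beast I'm talking about, here is a toy reconstruction (not the actual code, and the shader snippets it emits are only illustrative):

// Toy reconstruction of a code-emitting shader builder: it appends lines of
// pixel shader source depending on the attributes in the material description.
#include <string>

std::string BuildPixelShader( bool hasDiffuseMap, bool hasBumpMap, bool hasSpecular )
{
    std::string code = "float4 main( PS_INPUT input ) : COLOR\n{\n";

    code += hasBumpMap
          ? "    float3 normal = UnpackNormal( tex2D( bumpMap, input.uv0 ) );\n"
          : "    float3 normal = normalize( input.normal );\n";

    code += "    float4 diffuse = ";
    code += hasDiffuseMap ? "tex2D( diffuseMap, input.uv0 );\n"
                          : "materialColor;\n";

    code += "    float4 color = diffuse * saturate( dot( normal, lightDir ) );\n";

    if ( hasSpecular )
        code += "    color += SpecularTerm( normal, input.viewDir );\n";

    code += "    return color;\n}\n";
    return code;
}

// Every new attribute multiplies the number of paths through this function,
// which is exactly how it "reached critical mass".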

Shader Graph

The whole idea of shader programmability seems to point back to good old Renderman, and shader graphs basically rescue that original idea and put a very similar concept into the art pipeline, making it look more like an art tool. A shader graph is usually presented as a graph of connected blocks that represent either data or operations. The data implicitly defines the requirements of the Material (textures, numeric parameters, input streams), whereas the operations describe its results. These are usually fed to one of several rendering models that process every output of the graph into the corresponding visual result. For example, a graph that outputs Diffuse and Normal values can be processed by a Phong renderer, but the Diffuse value can be sampled directly from a texture or run through some other operations, and the Normal can come from the input geometry, a bump map, or any other source.
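Under the hood, such a graph doesn't need much more than nodes with typed input and output pins; a bare-bones sketch (all names are mine, purely illustrative) could look like this:

// Bare-bones sketch of a shader graph representation: nodes declare typed
// input and output pins, and connections wire an output pin of one node to an
// input pin of another. A code generator would then walk the graph backwards
// from the output node.
#include <string>
#include <vector>

struct Pin
{
    std::string name;   // e.g. "uv0", "rgba", "normal"
    std::string type;   // e.g. "float2", "float4"
};

struct GraphNode
{
    std::string kind;             // e.g. "sampler", "output", "multiply"
    std::vector<Pin> inputs;
    std::vector<Pin> outputs;
};

struct Connection
{
    GraphNode* fromNode; int fromPin;   // output pin of the source node
    GraphNode* toNode;   int toPin;     // input pin of the destination node
};

struct ShaderGraph
{
    std::vector<GraphNode*> nodes;
    std::vector<Connection> connections;
};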

The advantage of this approach over the previous one is that only the interface and the outputs of a graph are determined by the blocks it contains; the user is free to define its behavior arbitrarily.

Examples of this approach to shader synthesis include the Material editor in the Unreal Engine, or the mental mill tool from mental images, available as part of the NVIDIA SDK.

Fragments, on their own, are black boxes: they define their inputs, their outputs, and of course the operations they encapsulate. On the other hand, fragments are simpler to use, and they are a higher-level abstraction of shader behavior when it comes to material level of detail: a Bump fragment can have its own implementation for each profile (or be removed altogether) when a shader needs to be automatically simplified. The kind of low-level operations a shader graph usually involves (e.g. texture sampling, modulation, etc.) is basically a fancy form of coding (just like ActionScript or Renderman, remember?) and has the exact same problems when it comes to material level of detail, data hiding, shader diversity, etc. But it is too powerful a tool to ignore anyway.

Metamaterials

What is the right way to describe a Material? All of them, of course, as long as they get the job done. For many cases a shader library will do the trick, and it is a clean way to keep the complexity of the system under control (it won't easily get out of hand). Fragments are a simple and comprehensive way to combine known pieces, and most materials in a game could be built from just a fistful of them. Shader graphs are the perfect solution for every other effect you may require, letting both artists and graphics programmers play with them for the sake of trial and experimentation.

Each of these approaches works better in certain scenarios. It would be stupid of us to dismiss one or the other just because it seems too simple, or too complex, or for whatever other reason. But they all share one common trait: Metainformation. Whether the boxes encapsulate an entire material or small pieces of one, the actual description of the material is irrelevant to the design of the material system. This is why I've thought of creating the kind of Material description that Forsyth describes, but in its own abstract way, one that would make it possible to fit a complete shader, a fragment composite, or a shader graph without having to rewrite the whole system.

The basis of this system is that the Metainformation lives in a homogeneous namespace, which gives it the appearance of a shader library. This reduces the process of creating materials and using them to load and render geometry to its simplest form:

Material* material = Material::Create( "materials/phong/diffuse" );
Model* model = Model::Create( "models/cube" );
material->Load( model );
material->Render( model );

Now, could you tell me what kind of shader abstraction lies behind that Material? No, you can't, because the "materials/phong/diffuse" name is defined in a flat namespace where all kinds of Metamaterials are possible. Let's say the Metamaterial is an effect, in whatever format you prefer (HLSL, GLSL, Cg, etc.), and looks like this:

materials/phong/diffuse
{
    stream position
    stream normal
    profile dx9
    {
        effect dx9/phong.fx
        technique tDiffuse
    }
}

Now this makes the implementation of the Phong material explicit for every supported profile (roughly, a platform), each in its preferred format.

Now, if we would like to clearly separate the code into its building pieces (fragments), we could create a different type of Metamaterial:

materials/phong/diffuse
{
    diffusemap
    phonglighting
    diffuselighting
}

Now we're just saying that by assembling these three fragments (whatever they may be) we're going to achieve the desired effect. Of course the management of these fragments is not trivial, but one of them could look like:

fragment diffusemap
{
    stream texcoord0
    sampler diffmap0
    vertexshader
    {
        out.uv0 = in.uv0;
    }
    pixelshader
    {
        diffuse = sample( diffmap0, in.uv0 );
    }
}

This is probably too simple a shader fragment, but its purpose is to show that fragments are just one more level of indirection over directly providing a shader file. In the end it is the same code anyway.

Now let's try to achieve the same thing through a clever shader graph:

materials/phong/diffuse
{
    sampler diffuse
    {
        texture diffusemap
        texcoord uv0
    }
    color output
    {
        diffuse = diffuse.rgba
    }
}

Here, sampler and output are different types of graph nodes, each with a well-defined set of inputs and outputs. Sampler is a block that takes a texture and a texture coordinate set as input and outputs a 4-component value. Output is a block with a number of inputs, such as diffuse, normal, specular, etc., which it combines in any way the material wants to. This example is too simple to make it obvious, but when it comes to lighting, shadowing, environment mapping and other such advanced effects, there are many models that could be used to process the output block.

It is obvious that these approaches are all variations on the same theme. The difference is the interface and the expressiveness each gives the user: there's no rule preventing you from creating a Phong block in a shader graph that takes all the inputs and processes them into all the outputs. It's up to the actual material designer to decide the level of abstraction, after all.
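One way to make that concrete: whatever sits behind a name, the factory can resolve it to the right kind of Metamaterial and hand back the same Material interface. A hypothetical sketch (every type and helper below, MaterialDescription, FindDescription, EffectMaterial, FragmentMaterial, GraphMaterial, is made up for the example):

// Hypothetical sketch of the flat namespace lookup: Material::Create neither
// knows nor cares whether the description behind the name is an effect file,
// a fragment composite or a shader graph.
Material* Material::Create( const std::string& name )
{
    MaterialDescription* pDesc = FindDescription( name );   // parses the "materials/..." declarations
    if ( !pDesc )
        return 0;

    switch ( pDesc->GetType() )
    {
        case MATERIAL_EFFECT:    return new EffectMaterial( pDesc );    // wraps a complete effect/shader
        case MATERIAL_FRAGMENTS: return new FragmentMaterial( pDesc );  // assembles shader fragments
        case MATERIAL_GRAPH:     return new GraphMaterial( pDesc );     // compiles a shader graph
        default:                 return 0;
    }
}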

And this is where I'm stuck now: trying to build a Metadescription that can be turned into an actual shader (or set of render states) while still keeping the advantages of automatic quality downgrading, code reuse and extensibility. I will report back when I get there.

References
  • [Forsyth] Tom Forsyth, "Shader Abstraction", in ShaderX2: Shader Programming Tips and Tricks with DirectX 9
  • [Hargreaves] Shawn Hargreaves, "Generating Shaders From HLSL Fragments", in ShaderX3
  • [Osterlind] Magnus Österlind, "Shaderbreaker", in ShaderX