Now that I'm working on some visually appealing demos for the benefit of the world, I've set myself the goal of developing a Demo Framework. Yet another one.
Yep, no graphics programmer has ever managed to use someone else's demo framework, despite the number of them readily available. And yet, not only does everyone create their own, but they mostly do it the same way: create a homogeneous DirectX and OpenGL wrapper, encapsulate your renderer, your models and your textures, and start implementing your fancy demos on top of them.
Quite probably, that's exactly the same thing I've been doing, but along the way I've been trying to prove my point: no matter which implementation you use, you need to plan for the future, you need to support as many different features as possible without sticking to the lowest common denominator of all your target platforms, and you need to be able to take advantage of the most powerful features of every platform or programming interface.
Curiously enough, that's exactly what Tom Forsyth proposed in [Forsyth]. In this great paper, Forsyth presents the concept of abstract materials: an abstract representation of what a shader does, in the form of attributes and operations, instead of explicit shaders. Not only would such a representation hide the complexity of the shader code from the user, it would also readily enforce code reuse across shaders that share the same operations. Redundant shader code is probably one of the nastiest issues in shader maintenance, and it is the one that originally interested me the most when I first read the paper.
But then I found that the implementation is a little less clean than expected, because in order to correctly describe abstract materials you need the kind of expressive power that only comes from programmability, be it scripting or a comprehensive graphical representation of shader effects. The rest of the points in Forsyth's paper fall into the "simple" category: texture encapsulation, material fallback, static vs. dynamic materials, etc. But the way he presents a Material Description, as an arbitrary enumeration of attributes and rendering flags, is far too generic, and would probably lead to an inextensible, bloated code aggregator. The goal of achieving maximum expressiveness is limited by whatever form of code generation we can turn the abstract Material Description into.
For the sake of argument, let's review some known forms of shader encapsulation.
Shader library
This approach is the one that works best, because it is simple and grants the programmer control over the variety and complexity of the shaders. Basically, shaders are entirely coded by a programmer on demand, possibly using some external tool (NVIDIA and ATI have excellent shader editors, packed with samples for the benefit of the public). Each shader is then assigned to a Material, which would typically encapsulate details of the process such as sorting, grouping or rendering passes, and expose requirements such as system parameters or input formats.
As an example, a PhongDiffuse material could require the Position, Texcoord and Normal vertex components, the DiffuseColor material parameter, the LightDirection and LightColor environment parameters, and the WorldViewProjection system parameter (let's keep it simple). This information is enough to generate a suitable mesh and render it to the screen, assuming all the required data are present.
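To make this concrete, here is a minimal C++ sketch of how such a requirement list could be expressed (all the types and names are hypothetical, just to illustrate the idea):

// Hypothetical sketch: a material declares what it needs from the mesh,
// the material instance and the environment; the engine can check these
// requirements before building the vertex declaration and rendering.
#include <string>
#include <vector>

enum class VertexComponent { Position, Normal, Texcoord };
enum class ParameterScope  { Material, Environment, System };

struct ParameterRequirement
{
    std::string    name;   // e.g. "DiffuseColor", "LightDirection"
    ParameterScope scope;
};

struct MaterialDescription
{
    std::string                       name;
    std::vector<VertexComponent>      streams;
    std::vector<ParameterRequirement> parameters;
};

// The PhongDiffuse material from the text, expressed as data.
MaterialDescription MakePhongDiffuse()
{
    return MaterialDescription{
        "materials/phong/diffuse",
        { VertexComponent::Position, VertexComponent::Texcoord, VertexComponent::Normal },
        {
            { "DiffuseColor",        ParameterScope::Material    },
            { "LightDirection",      ParameterScope::Environment },
            { "LightColor",          ParameterScope::Environment },
            { "WorldViewProjection", ParameterScope::System      },
        }
    };
}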
The obvious problem is that artists don't have any power to experiment and define their own materials unless they are willing to program shaders themselves. It is a known fact that artists and designers usually hate coding, but still love to customize their data and behaviors through some procedural representation. That's exactly the same as coding, but they don't perceive it as such (think ActionScript or Renderman), and that's something we must take into consideration.
Assembled fragments
Shader fragments were made official in DirectX 9, but by that time several authors had built their own fragment-based approach to shader construction, myself included. The idea behind shader fragments is that if you can isolate reusable operations into code blocks, and describe how these blocks relate to each other (dependencies, order, inputs and outputs, etc.), you can theoretically build a shader just by selecting which fragments a Material requires. [Hargreaves] and [Osterlind] present different approaches to splitting shaders into fragments that can then be reassembled into a meaningful whole. For example, the PhongDiffuse shader above could be assembled from named fragments: Projection (transform the position to clip space), DiffuseLight (compute the diffuse color from the dot product of the normal and the incident light direction), and so on. If we wanted to add bump mapping to this Material, the normal would simply be computed in an additional code fragment. With a careful definition of the dependencies between code fragments, ensuring that all required fragments are present at assembly time, shaders can be built this way.
This solution enforces reusability and moves some flexibility and experimentation into the hands of artists, while the programmer still retains most of the control over how shaders actually work; it is probably the most comprehensive approach of all. I must warn you, though: it requires some heavy thinking about the best way to mix fragments once the combinations start growing exponentially.
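To give an idea of what the assembly step involves, here is a minimal sketch of resolving fragment dependencies before concatenating their code (hypothetical types, with no real fragment compiler behind them):

// Hypothetical sketch: each fragment lists the fragments it depends on;
// assembling a material is then a matter of pulling in dependencies and
// emitting the fragments in dependency order before compilation.
// (Cycle detection is omitted for brevity.)
#include <map>
#include <set>
#include <string>
#include <vector>

struct Fragment
{
    std::string              name;
    std::vector<std::string> dependencies; // fragments that must come first
    std::string              code;         // the shader code this block contributes
};

// Depth-first visit that appends each fragment after its dependencies.
void Resolve(const std::map<std::string, Fragment>& library,
             const std::string& name,
             std::set<std::string>& visited,
             std::vector<const Fragment*>& ordered)
{
    if (!visited.insert(name).second)
        return; // already included
    const Fragment& fragment = library.at(name);
    for (const std::string& dep : fragment.dependencies)
        Resolve(library, dep, visited, ordered);
    ordered.push_back(&fragment);
}

// Concatenate the requested fragments (plus their dependencies) into one source.
std::string AssembleShader(const std::map<std::string, Fragment>& library,
                           const std::vector<std::string>& requested)
{
    std::set<std::string> visited;
    std::vector<const Fragment*> ordered;
    for (const std::string& name : requested)
        Resolve(library, name, visited, ordered);

    std::string source;
    for (const Fragment* fragment : ordered)
        source += fragment->code + "\n";
    return source; // hand this to your shader compiler of choice
}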
I've seen other takes on the same solution that keep working with shader code but let named fragments be invoked from within the code of other shaders. Others use preprocessor directives to conditionally include or exclude shader code depending on a number of defined values, thus generating a number of code combinations. While working at Tragnarion, I was personally responsible for one of the worst shader assemblers ever: a builder script that programmatically output lines of code depending on the attributes present in the material description. It seemed like a good, simple idea at the time, but it quickly reached critical mass.
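The preprocessor flavour is the simplest to sketch: the attribute names below are made up, and the mechanism is just prepending a #define preamble to an uber-shader source before handing it to the compiler:

// Hypothetical sketch: build a #define preamble from the material attributes
// and prepend it to an uber-shader source, so the preprocessor strips out
// the code paths the material does not use.
#include <string>
#include <vector>

std::string BuildShaderSource(const std::string& uberShaderSource,
                              const std::vector<std::string>& attributes)
{
    std::string preamble;
    for (const std::string& attribute : attributes)
        preamble += "#define " + attribute + " 1\n"; // e.g. USE_BUMP, USE_DIFFUSEMAP
    return preamble + uberShaderSource;
}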
Shader Graph
The whole idea of shader programmability seems to point back to good old Renderman, and Shader Graphs basically rescue that original idea and put a very similar concept in the art pipeline, making it look more like an art tool. A Shader Graph is usually presented as a graph of connected blocks that represent either data or operations. The data implicitly defines the requirements of the Material (textures, numeric parameters, input streams), whereas the operations describe its results. These results are usually fed to one of several rendering models that process each output of the graph into the corresponding visual result. For example, a graph that outputs Diffuse and Normal values can be processed by a Phong renderer, but the Diffuse value can be sampled directly from a texture or processed through some other operations, and the Normal can come from the input geometry, a bumpmap, or any other source.
The advantage of this approach over the previous one is that only the interface and the outputs of a graph are determined by the blocks it contains; the user is free to define arbitrary behaviors in between.
Examples of this approach to shader synthesis include the Material editor in the Unreal Engine, or the mental mill tool from mental images, available as part of the NVIDIA SDK.
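A minimal sketch of what such a graph could look like as plain data (hypothetical types; real editors like the ones just mentioned store considerably more):

// Hypothetical sketch: a shader graph as data. Nodes either provide data
// (textures, parameters, vertex inputs) or perform operations; connections
// wire a node's output to another node's named input. Walking the graph
// from the output node yields both the material requirements and the code.
#include <map>
#include <string>
#include <vector>

enum class NodeType { TextureSample, VertexInput, Parameter, Multiply, Output };

struct Node
{
    NodeType                           type;
    std::map<std::string, std::string> properties; // e.g. {"texture", "diffusemap"}
};

struct Connection
{
    int         sourceNode;   // index of the node providing the value
    std::string sourceOutput; // e.g. "rgba"
    int         targetNode;   // index of the node consuming the value
    std::string targetInput;  // e.g. "uv", "diffuse", "normal"
};

struct ShaderGraph
{
    std::vector<Node>       nodes;
    std::vector<Connection> connections; // everything eventually feeds the Output node
};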
Compared to graph blocks, fragments are black boxes: they define their inputs and outputs, and hide the operations they encapsulate. On the other hand, fragments are simpler to use and offer a higher-level abstraction of shader behavior when it comes to Material level of detail: a Bump fragment can have its own implementation for each platform (or be removed altogether) when a shader is automatically simplified. The kind of low-level operations a shader graph usually deals in (e.g. texture sampling, modulation, etc.) is basically a fancy form of coding (just like ActionScript or Renderman, remember?) and suffers from the exact same problems when it comes to Material level of detail, data hiding, shader diversity, etc. But it is too powerful a tool to ignore anyway.
Metamaterials
What is the right way to describe a Material? All of them, of course, as long as they get the job done. In many cases a shader library will do the trick, and it is a clean way to control how complex the system gets (it won't easily spiral out of hand). Fragments are a simple and comprehensive way to combine known pieces, and most materials in a game could be built from just a handful of them. Shader graphs are the perfect solution for every other effect you may require, letting both artists and graphics programmers play with them for the sake of trial and experimentation.
Each of these approaches works better in certain scenarios. It would be stupid of us to dismiss one or the other just because it seems too simple, or too complex, or whatever other reason. But they all share one common trait: Metainformation. Whether the boxes encapsulate an entire material or small pieces of one, the actual description of the material is irrelevant to the design of the material system. This is why I've been thinking of creating the kind of Material description that Forsyth describes, but abstract enough that a complete shader, a fragment composite, or a shader graph can fit into it without having to rewrite the whole system.
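A minimal sketch of the kind of seam I have in mind (a hypothetical interface, nothing more): the material system only talks to the abstract description, and each flavour of Metamaterial hides its own way of producing a shader.

// Hypothetical sketch: the material system only depends on this interface.
// Whether the Metamaterial wraps a hand-written effect, a list of fragments
// or a shader graph is an implementation detail of each concrete class.
#include <memory>
#include <string>

class CompiledShader; // whatever the renderer ultimately consumes

class Metamaterial
{
public:
    virtual ~Metamaterial() = default;

    // The flat namespace name, e.g. "materials/phong/diffuse".
    virtual std::string Name() const = 0;

    // Turn the description into something the renderer can use,
    // for a given profile ("dx9", ...) and quality level.
    virtual std::unique_ptr<CompiledShader> Build(const std::string& profile,
                                                  int qualityLevel) const = 0;
};

class EffectMetamaterial   : public Metamaterial { /* wraps a .fx/.cgfx file  */ };
class FragmentMetamaterial : public Metamaterial { /* assembles fragments     */ };
class GraphMetamaterial    : public Metamaterial { /* evaluates a node graph  */ };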
The basis of this system is that the Metainformation lives in a homogeneous namespace, which gives it the appearance of a shader library. This reduces the process of creating materials and using them to load and render geometry to its simplest form:
Material* material = Material::Create( "materials/phong/diffuse" );
Model* model = Model::Create( "models/cube" );
material->Load( model );
material->Render( model );
Now, could you tell me what kind of shader abstraction lies behind Material? No, you can't, because the "materials/phong/diffuse" name is defined in a flat namespace where all kinds of Metamaterials are possible. Let's say the Metamaterial is an effect, in whatever format you prefer (HLSL, GLSL, Cg, etc.), and looks like this:
materials/phong/diffuse
{
    stream position
    stream normal

    profile dx9
    {
        effect dx9/phong.fx
        technique tDiffuse
    }
}
This makes the implementation of the Phong material explicit for every supported profile (roughly, a platform), each in its preferred format.
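At load time the framework would simply pick the block matching the current profile, along these lines (hypothetical code, ignoring error handling and fallbacks):

// Hypothetical sketch: given the parsed Metamaterial above, pick the
// implementation block that matches the platform we are running on.
#include <map>
#include <string>

struct ProfileBlock
{
    std::string effectFile; // e.g. "dx9/phong.fx"
    std::string technique;  // e.g. "tDiffuse"
};

const ProfileBlock* SelectProfile(const std::map<std::string, ProfileBlock>& profiles,
                                  const std::string& currentProfile)
{
    auto it = profiles.find(currentProfile); // e.g. "dx9"
    return it != profiles.end() ? &it->second : nullptr; // nullptr: fall back to a default material
}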
Now, if we would rather clearly separate the code into its building pieces (fragments), we could create a different type of Metamaterial:
materials/phong/diffuse
{
    diffusemap
    phonglighting
    diffuselighting
}
Now we're just saying that by assembling these three fragments (whatever they may be) we're going to achieve the desired effect. Of course the management of these fragments is not trivial, but one of them could look like:
fragment diffusemap
{
    stream texcoord0
    sampler diffmap0

    vertexshader
    {
        out.uv0 = in.uv0;
    }

    pixelshader
    {
        diffuse = sample( diffmap0, in.uv0 );
    }
}
This is probably too simple a shader fragment, but its purpose is to show that fragments are just one level of indirection above directly providing a shader file; in the end, it is the same code anyway.
Now let's try to achieve the same thing through a clever shader graph:
materials/phong/diffuse
{
    sampler diffuse
    {
        texture diffusemap
        texcoord uv0
    }

    color output
    {
        diffuse = diffuse.rgba
    }
}
Here, sampler and output are different types of graph nodes, each with a well-defined set of inputs and outputs. Sampler is a block that takes a texture and a texture coordinate set as inputs and outputs a 4-component value. Output is a block with a number of inputs, such as diffuse, normal, specular, etc., which it combines however the material wants. This example is too simple to make it obvious, but when it comes to lighting, shadowing, environment mapping and other such advanced effects, there are many models that could be used to process the output block.
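To make the render model idea a bit more concrete, here is a sketch of how an output block could be matched against a model (hypothetical, and glossing over the actual code generation):

// Hypothetical sketch: a render model declares which output-block inputs it
// understands; the framework picks a model that covers what the graph provides
// and fills in defaults for the rest (white diffuse, geometric normal, ...).
#include <set>
#include <string>

struct RenderModel
{
    std::string           name;      // e.g. "Phong", "Unlit"
    std::set<std::string> accepted;  // e.g. { "diffuse", "normal", "specular" }
};

bool CanProcess(const RenderModel& model, const std::set<std::string>& graphOutputs)
{
    // The model can be used if it understands everything the graph produces.
    for (const std::string& output : graphOutputs)
        if (model.accepted.count(output) == 0)
            return false;
    return true;
}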
It is obvious that these approaches are all variations on the same theme. The difference is the interface and the expressiveness they give the user: there's no rule preventing you from creating a Phong block in a shader graph that takes all the inputs and processes them into all the outputs. It's up to the material designer to decide the level of abstraction, after all.
And this is where I'm stuck now: trying to build a Metadescription that can be turned into an actual shader (or set of render states) while still keeping the advantages of automatic quality downgrading, code reuse, and extensibility. I will report back when I get there.
References
- [Forsyth] Tom Forsyth, "Shader Abstraction", in ShaderX2: Shader Programming Tips and Tricks with DirectX 9
- [Hargreaves] Shawn Hargreaves, "Generating Shaders From HLSL Fragments", in ShaderX3
- [Osterlind] Magnus Österlind, "Shaderbreaker", in ShaderX