Multi-texturing Demystified

Not too long ago, I finally got into multi-texturing. Apparently, the most recent video cards now accomplish multi-texturing through their pixel shader abilities, and those abilities do offer a somewhat deeper level of control. That aside, multi-texturing is a good introduction to the more advanced aspects of texture mapping.

I had a hard time getting my mind around it, though: there were plenty of code samples out there, but no tutorials that properly illustrated the methodology behind figuring out which parameters you need to set. Unfortunately, to figure that out, you must have a fairly complete picture of everything the multi-texturing pipeline can accomplish. I'll present a few samples here to get you started (something I wish I had found), along with pointers to the primary documentation for the APIs.

I'll walk through the examples below. For documentation, see the GL_ARB_multitexture and GL_ARB_texture_env_combine extension specifications at opengl.org, and the texture blending section of the DirectX SDK documentation.

NOTE: Opening those documents right now may cause information overload; you may want to look at my discussions below first.

A basic understanding

To get multi-texturing working, you have to enable the levels of the pipeline you plan to use, load the textures into the stages, and set the operations you want to perform at each stage. Note that the textures and the stage operations aren't necessarily tied to each other: you could be working with entirely different textures or results at each stage (although DirectX is a little limited in this capacity).

To tell DirectX you want to multi-texture, you just enable color and alpha operations beyond stage 0. Multi-texturing continues until the API encounters a stage where the color and alpha ops are disabled. Operations start at stage 0 and work their way up, and the final result is what is rendered.
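
For example, a sketch assuming a DirectX 8 device pointer named d3dDevice:

    // stages 0 and 1 have their ops set up elsewhere; disabling
    // stage 2's ops is what stops the cascade
    d3dDevice->SetTextureStageState(2, D3DTSS_COLOROP, D3DTOP_DISABLE);
    d3dDevice->SetTextureStageState(2, D3DTSS_ALPHAOP, D3DTOP_DISABLE);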

For OpenGL to multi-texture, you have to enable texturing on a texture unit higher than GL_TEXTURE0_ARB (that is, GL_TEXTURE1_ARB and up). Multi-texturing continues until the API encounters a texture unit for which texturing is disabled. Operations start at stage 0 and work their way up, and the final result is what is rendered.
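
For example:

    // enable texturing on units 0 and 1; leaving unit 2 disabled
    // stops the pipeline after stage 1
    glActiveTextureARB(GL_TEXTURE0_ARB);
    glEnable(GL_TEXTURE_2D);
    glActiveTextureARB(GL_TEXTURE1_ARB);
    glEnable(GL_TEXTURE_2D);
    glActiveTextureARB(GL_TEXTURE2_ARB);
    glDisable(GL_TEXTURE_2D);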

Example 1:

I want to render a texture on a polygon affected by light, then render an alpha decal on top of it that isn't affected by light.

Pseudo-code:
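
In rough terms:

    stage 0: color = base texture color * vertex color    (the lit wall)
    stage 1: color = blend(decal color, stage 0 result) by the decal's alpha
    stage 2: disabled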

OpenGL:
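
Something along these lines (a sketch; wallTexture and decalTexture are placeholder texture handles, and the constants come from the ARB multitexture and texture_env_combine extensions):

    // Stage 0: modulate the base texture with the vertex (light) colors
    glActiveTextureARB(GL_TEXTURE0_ARB);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, wallTexture);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_ARB);

    // COLOR: texture color * vertex color
    glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB_ARB, GL_MODULATE);
    glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB_ARB, GL_TEXTURE);
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB_ARB, GL_SRC_COLOR);
    glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB_ARB, GL_PRIMARY_COLOR_ARB);
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB_ARB, GL_SRC_COLOR);

    // ALPHA: just pass the vertex alpha through
    glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_ALPHA_ARB, GL_REPLACE);
    glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_ALPHA_ARB, GL_PRIMARY_COLOR_ARB);
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_ALPHA_ARB, GL_SRC_ALPHA);

    // Stage 1: blend the decal over the lit result using the decal's alpha
    // (GL_INTERPOLATE computes Arg0*Arg2 + Arg1*(1 - Arg2))
    glActiveTextureARB(GL_TEXTURE1_ARB);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, decalTexture);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_ARB);

    // COLOR: interpolate between decal color and the previous stage by decal alpha
    glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB_ARB, GL_INTERPOLATE_ARB);
    glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB_ARB, GL_TEXTURE);
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB_ARB, GL_SRC_COLOR);
    glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB_ARB, GL_PREVIOUS_ARB);
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB_ARB, GL_SRC_COLOR);
    glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE2_RGB_ARB, GL_TEXTURE);
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND2_RGB_ARB, GL_SRC_ALPHA);

    // ALPHA: carry the previous stage's alpha along
    glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_ALPHA_ARB, GL_REPLACE);
    glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_ALPHA_ARB, GL_PREVIOUS_ARB);
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_ALPHA_ARB, GL_SRC_ALPHA);

    // Stage 2: disabled, so the pipeline stops after stage 1
    glActiveTextureARB(GL_TEXTURE2_ARB);
    glDisable(GL_TEXTURE_2D);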

You'll notice a pattern to how things are set up in OpenGL. First I set GL_TEXTURE_ENV_MODE to GL_COMBINE (you can set simpler operations; the result will still be passed to the next stage). Then I set up operations and parameters for COLOR and ALPHA. In each section, I tell OpenGL which COLOR combine operation I want, then provide the parameters in the form of the source and which part of that source to use (the operand). Then I repeat for the ALPHA combine portion. Operations can require 1, 2 or 3 parameters: GL_REPLACE requires only 1, GL_MODULATE requires 2, and GL_INTERPOLATE requires 3. Disabling GL_TEXTURE2_ARB at the end tells the multi-texturing pipeline to stop when it's done with stage 1.

Also note that for stage 0, instead of GL_COMBINE, I could have just used GL_MODULATE and avoided setting up all the combine parameters, since a simple modulate is what we are doing there.
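
That shrinks stage 0's texture environment setup to a single call:

    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);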

Here's what the same thing looks like in DirectX:
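
Again a sketch, assuming a device pointer named d3dDevice and the same two placeholder textures:

    // Stage 0: modulate the base texture with the diffuse (vertex) colors
    d3dDevice->SetTexture(0, wallTexture);
    d3dDevice->SetTextureStageState(0, D3DTSS_COLOROP,   D3DTOP_MODULATE);
    d3dDevice->SetTextureStageState(0, D3DTSS_COLORARG1, D3DTA_TEXTURE);
    d3dDevice->SetTextureStageState(0, D3DTSS_COLORARG2, D3DTA_DIFFUSE);
    d3dDevice->SetTextureStageState(0, D3DTSS_ALPHAOP,   D3DTOP_SELECTARG1);
    d3dDevice->SetTextureStageState(0, D3DTSS_ALPHAARG1, D3DTA_DIFFUSE);

    // Stage 1: blend the decal over the lit result by the decal's alpha
    d3dDevice->SetTexture(1, decalTexture);
    d3dDevice->SetTextureStageState(1, D3DTSS_COLOROP,   D3DTOP_BLENDTEXTUREALPHA);
    d3dDevice->SetTextureStageState(1, D3DTSS_COLORARG1, D3DTA_TEXTURE);
    d3dDevice->SetTextureStageState(1, D3DTSS_COLORARG2, D3DTA_CURRENT);
    d3dDevice->SetTextureStageState(1, D3DTSS_ALPHAOP,   D3DTOP_SELECTARG1);
    d3dDevice->SetTextureStageState(1, D3DTSS_ALPHAARG1, D3DTA_CURRENT);

    // Stage 2: disabled, which is what stops the cascade
    d3dDevice->SetTextureStageState(2, D3DTSS_COLOROP, D3DTOP_DISABLE);
    d3dDevice->SetTextureStageState(2, D3DTSS_ALPHAOP, D3DTOP_DISABLE);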

It certainly doesn't take as much to set it up in DirectX. But you'll find that OpenGL has a slightly more flexible setup, sometimes allowing OpenGL to accomplish similar results with fewer texture stages, as you'll see in Example 3.

One primary difference between DirectX and OpenGL is that when you set up the COLOR and ALPHA sources for a stage's operations in DirectX, you don't need to specify whether you are using the color or alpha portion of each source. DirectX just assumes color in the color section and alpha in the alpha section. It is possible to substitute a source's alpha for its color in the color section using the D3DTA_ALPHAREPLICATE flag.
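
For example, this (hypothetical) stage 0 argument feeds the texture's alpha into a color operation:

    d3dDevice->SetTextureStageState(0, D3DTSS_COLORARG1,
                                    D3DTA_TEXTURE | D3DTA_ALPHAREPLICATE);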

Also note the differences in the terminologies: DirectX's D3DTA_TEXTURE corresponds to OpenGL's GL_TEXTURE, D3DTA_DIFFUSE to GL_PRIMARY_COLOR_ARB (the vertex colors), and D3DTA_CURRENT to GL_PREVIOUS_ARB (the result of the previous stage).

DirectX has a specific color op for blending by the texture's alpha (D3DTOP_BLENDTEXTUREALPHA), but with OpenGL you must use GL_INTERPOLATE and specify all three parameters.

To perform the actual rendering in OpenGL, it looks like this (using a quad as an example):
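
A sketch, with placeholder positions and one color for the whole quad (vary glColor4f per vertex if your lighting differs per corner):

    glBegin(GL_QUADS);
        // each vertex gets a color plus one texture coordinate per unit
        glColor4f(1.0f, 1.0f, 1.0f, 1.0f);
        glMultiTexCoord2fARB(GL_TEXTURE0_ARB, 0.0f, 0.0f);
        glMultiTexCoord2fARB(GL_TEXTURE1_ARB, 0.0f, 0.0f);
        glVertex3f(-1.0f, -1.0f, 0.0f);

        glMultiTexCoord2fARB(GL_TEXTURE0_ARB, 1.0f, 0.0f);
        glMultiTexCoord2fARB(GL_TEXTURE1_ARB, 1.0f, 0.0f);
        glVertex3f( 1.0f, -1.0f, 0.0f);

        glMultiTexCoord2fARB(GL_TEXTURE0_ARB, 1.0f, 1.0f);
        glMultiTexCoord2fARB(GL_TEXTURE1_ARB, 1.0f, 1.0f);
        glVertex3f( 1.0f,  1.0f, 0.0f);

        glMultiTexCoord2fARB(GL_TEXTURE0_ARB, 0.0f, 1.0f);
        glMultiTexCoord2fARB(GL_TEXTURE1_ARB, 0.0f, 1.0f);
        glVertex3f(-1.0f,  1.0f, 0.0f);
    glEnd();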

For DirectX, you must first tell D3D what vertex format you're planning to use (in DirectX 8 this is done with SetVertexShader and an FVF code). Make sure you specify one that includes two sets of texture coordinates (u, v, u2 and v2 in your vertex structure); u and v correspond to the GL_TEXTURE0_ARB coordinates above, and u2 and v2 correspond to GL_TEXTURE1_ARB. Then populate a 4-element vertex array with your positions, colors and texture coordinates and call DrawPrimitive (this assumes you already know the basics of DirectX rendering; sorry if that's confusing here).
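
A sketch assuming a DirectX 8 device (in DirectX 9, SetFVF takes over this role):

    // assumed vertex layout with two sets of texture coordinates
    struct QuadVertex
    {
        float x, y, z;     // position
        DWORD color;       // diffuse vertex color (and alpha)
        float u,  v;       // coordinates for stage 0
        float u2, v2;      // coordinates for stage 1
    };

    #define QUAD_FVF (D3DFVF_XYZ | D3DFVF_DIFFUSE | D3DFVF_TEX2)

    QuadVertex quad[4];    // fill in positions, colors, and both UV sets...

    d3dDevice->SetVertexShader(QUAD_FVF);

    // DrawPrimitiveUP draws straight from user memory; DrawPrimitive
    // works the same way from a vertex buffer
    d3dDevice->DrawPrimitiveUP(D3DPT_TRIANGLEFAN, 2, quad, sizeof(QuadVertex));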

Example 2: I want to simulate a spotlight shining on a wall, but also modulate the wall with the vertex colors to make it appear that two light sources are illuminating the wall.

Pseudo-code:
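
In rough terms:

    stage 0: color = vertex color + spotlight texture alpha    (the combined lightmap)
    stage 1: color = stage 0 result * wall texture color
    stage 2: disabled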

OpenGL:
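
A sketch (spotlightTexture and wallTexture are placeholder handles; the spotlight's shape is assumed to live in its texture's alpha channel):

    // Stage 0: add the spotlight texture's alpha to the vertex (light) colors
    glActiveTextureARB(GL_TEXTURE0_ARB);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, spotlightTexture);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_ARB);

    // COLOR: spotlight alpha (used as a color) + vertex color
    glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB_ARB, GL_ADD);
    glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB_ARB, GL_TEXTURE);
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB_ARB, GL_SRC_ALPHA);
    glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB_ARB, GL_PRIMARY_COLOR_ARB);
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB_ARB, GL_SRC_COLOR);

    // ALPHA: not really needed; pass the vertex alpha with a one-operand replace
    glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_ALPHA_ARB, GL_REPLACE);
    glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_ALPHA_ARB, GL_PRIMARY_COLOR_ARB);
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_ALPHA_ARB, GL_SRC_ALPHA);

    // Stage 1: modulate the combined lightmap with the wall texture
    glActiveTextureARB(GL_TEXTURE1_ARB);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, wallTexture);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_ARB);

    glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB_ARB, GL_MODULATE);
    glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB_ARB, GL_PREVIOUS_ARB);
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB_ARB, GL_SRC_COLOR);
    glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB_ARB, GL_TEXTURE);
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB_ARB, GL_SRC_COLOR);

    glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_ALPHA_ARB, GL_REPLACE);
    glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_ALPHA_ARB, GL_PREVIOUS_ARB);
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_ALPHA_ARB, GL_SRC_ALPHA);

    // Stage 2: disabled
    glActiveTextureARB(GL_TEXTURE2_ARB);
    glDisable(GL_TEXTURE_2D);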

This setup only allows a pure alpha spotlight with no color information. If you want a colored spotlight, you'll need a first stage that prepares the spotlight's contribution as texture color * texture alpha, a stage 1 that adds that to the vertex colors to arrive at your lightmap, and a stage 2 that modulates the lightmap with the wall texture. The spotlight color information has to live in the texture itself, since the wall is still lit via the color information in the vertices. If you want a dynamically colored spotlight, you will probably have to render to an offscreen texture and use that to modulate with the wall.

Note how we don't even really make use of the vertex alphas in this setup. When we refer to them in stage 0, it's just to get past the alpha operation with as few operands as possible.

Here is the same operation in DirectX:
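
Again a sketch, with the same placeholder texture names:

    // Stage 0: add the spotlight texture's alpha (replicated into the
    // color channels) to the diffuse vertex colors to build the lightmap
    d3dDevice->SetTexture(0, spotlightTexture);
    d3dDevice->SetTextureStageState(0, D3DTSS_COLOROP,   D3DTOP_ADD);
    d3dDevice->SetTextureStageState(0, D3DTSS_COLORARG1, D3DTA_TEXTURE | D3DTA_ALPHAREPLICATE);
    d3dDevice->SetTextureStageState(0, D3DTSS_COLORARG2, D3DTA_DIFFUSE);
    d3dDevice->SetTextureStageState(0, D3DTSS_ALPHAOP,   D3DTOP_SELECTARG1);
    d3dDevice->SetTextureStageState(0, D3DTSS_ALPHAARG1, D3DTA_DIFFUSE);

    // Stage 1: modulate the lightmap with the wall texture
    d3dDevice->SetTexture(1, wallTexture);
    d3dDevice->SetTextureStageState(1, D3DTSS_COLOROP,   D3DTOP_MODULATE);
    d3dDevice->SetTextureStageState(1, D3DTSS_COLORARG1, D3DTA_TEXTURE);
    d3dDevice->SetTextureStageState(1, D3DTSS_COLORARG2, D3DTA_CURRENT);
    d3dDevice->SetTextureStageState(1, D3DTSS_ALPHAOP,   D3DTOP_SELECTARG1);
    d3dDevice->SetTextureStageState(1, D3DTSS_ALPHAARG1, D3DTA_CURRENT);

    // Stage 2: disabled
    d3dDevice->SetTextureStageState(2, D3DTSS_COLOROP, D3DTOP_DISABLE);
    d3dDevice->SetTextureStageState(2, D3DTSS_ALPHAOP, D3DTOP_DISABLE);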

Note the use of the D3DTA_ALPHAREPLICATE flag in stage 0 so we could access the texture alpha in a color operation.

Example 3: I have two ground textures and want to blend them together on a heightmapped terrain using the alpha component of the vertices, then modulate the result with the color of the vertices to simulate lighting.

If you are only working with two textures on a terrain, the alpha portion of the vertices is a good way to blend the textures together. The color portion of the vertices, then, is still available to modulate afterwards to perform lighting.
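
In OpenGL, a sketch of the setup might look like this (groundTexture0 and groundTexture1 are placeholder handles; pulling sources straight from units 0 and 1 relies on the crossbar ability, GL_ARB_texture_env_crossbar):

    // Stage 0: blend the two ground textures by the vertex alpha,
    // reading colors directly from texture units 0 and 1
    glActiveTextureARB(GL_TEXTURE0_ARB);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, groundTexture0);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_ARB);

    glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB_ARB, GL_INTERPOLATE_ARB);
    glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB_ARB, GL_TEXTURE0_ARB);
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB_ARB, GL_SRC_COLOR);
    glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB_ARB, GL_TEXTURE1_ARB);
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB_ARB, GL_SRC_COLOR);
    glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE2_RGB_ARB, GL_PRIMARY_COLOR_ARB);
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND2_RGB_ARB, GL_SRC_ALPHA);

    glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_ALPHA_ARB, GL_REPLACE);
    glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_ALPHA_ARB, GL_PRIMARY_COLOR_ARB);
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_ALPHA_ARB, GL_SRC_ALPHA);

    // Stage 1: modulate the blended result with the vertex colors for lighting
    glActiveTextureARB(GL_TEXTURE1_ARB);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, groundTexture1);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_ARB);

    glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB_ARB, GL_MODULATE);
    glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB_ARB, GL_PREVIOUS_ARB);
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB_ARB, GL_SRC_COLOR);
    glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB_ARB, GL_PRIMARY_COLOR_ARB);
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB_ARB, GL_SRC_COLOR);

    glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_ALPHA_ARB, GL_REPLACE);
    glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_ALPHA_ARB, GL_PREVIOUS_ARB);
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_ALPHA_ARB, GL_SRC_ALPHA);

    // Stage 2: disabled, so the pipeline stops after stage 1
    glActiveTextureARB(GL_TEXTURE2_ARB);
    glDisable(GL_TEXTURE_2D);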

Notice how in stage 0 we are actually referring directly to texture stages 0 and 1 for the sources. DirectX can't do this. OpenGL allows you to grab texture data from a stage you haven't gotten to yet in the process. DirectX can only look at the results from the previous stage, so setting it up in DirectX is a little trickier.

Important note: the documentation at opengl.org states that referring to GL_TEXTUREn_ARB returns the result of the given texture stage. In my tests, however, it returned the colors of the texture I provided at stage n, so the behavior I saw differs from that documentation.

DirectX:
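
A sketch; D3DTOP_BLENDDIFFUSEALPHA blends its two color arguments by the iterated (vertex) alpha:

    // Stage 0: just pass the first ground texture's color (and the
    // vertex alpha) along to stage 1
    d3dDevice->SetTexture(0, groundTexture0);
    d3dDevice->SetTextureStageState(0, D3DTSS_COLOROP,   D3DTOP_SELECTARG1);
    d3dDevice->SetTextureStageState(0, D3DTSS_COLORARG1, D3DTA_TEXTURE);
    d3dDevice->SetTextureStageState(0, D3DTSS_ALPHAOP,   D3DTOP_SELECTARG1);
    d3dDevice->SetTextureStageState(0, D3DTSS_ALPHAARG1, D3DTA_DIFFUSE);

    // Stage 1: blend the second ground texture with stage 0's result
    // by the diffuse vertex alpha
    d3dDevice->SetTexture(1, groundTexture1);
    d3dDevice->SetTextureStageState(1, D3DTSS_COLOROP,   D3DTOP_BLENDDIFFUSEALPHA);
    d3dDevice->SetTextureStageState(1, D3DTSS_COLORARG1, D3DTA_TEXTURE);
    d3dDevice->SetTextureStageState(1, D3DTSS_COLORARG2, D3DTA_CURRENT);
    d3dDevice->SetTextureStageState(1, D3DTSS_ALPHAOP,   D3DTOP_SELECTARG1);
    d3dDevice->SetTextureStageState(1, D3DTSS_ALPHAARG1, D3DTA_CURRENT);

    // Stage 2: no texture here, just modulate the blend with the vertex colors
    d3dDevice->SetTextureStageState(2, D3DTSS_COLOROP,   D3DTOP_MODULATE);
    d3dDevice->SetTextureStageState(2, D3DTSS_COLORARG1, D3DTA_CURRENT);
    d3dDevice->SetTextureStageState(2, D3DTSS_COLORARG2, D3DTA_DIFFUSE);
    d3dDevice->SetTextureStageState(2, D3DTSS_ALPHAOP,   D3DTOP_SELECTARG1);
    d3dDevice->SetTextureStageState(2, D3DTSS_ALPHAARG1, D3DTA_CURRENT);

    // Stage 3: disabled, ending the cascade
    d3dDevice->SetTextureStageState(3, D3DTSS_COLOROP, D3DTOP_DISABLE);
    d3dDevice->SetTextureStageState(3, D3DTSS_ALPHAOP, D3DTOP_DISABLE);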

We needed three stages to pull it off in DirectX. Stage 0 was used to pass relevant values on to stage 1; had we been able to look ahead to other stages, we could have performed the blend in stage 0. Also note that we only set textures in stages 0 and 1, but used operations in stages 0, 1 and 2. Remember, DirectX continues until it hits disabled operations in the next texture stage.

A precursor to pixel shaders

Pixel shaders can do everything that multi-texturing can do, and then some. In the terrain example above, for instance, a pixel shader could blend four textures together: a base texture plus three others layered on top, using the red, green and blue channels of an RGB mask texture as the blend weights for the other three. The multi-texturing pipeline can't make use of the individual red, green and blue components that way.
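
In plain C++ terms, the per-pixel math such a shader could perform might look like this (blendTerrain and its lerp helper are hypothetical, just to show the idea):

    struct Color { float r, g, b; };

    // blend a toward b by t (the standard linear interpolation)
    Color lerp(const Color& a, const Color& b, float t)
    {
        return { a.r + (b.r - a.r) * t,
                 a.g + (b.g - a.g) * t,
                 a.b + (b.b - a.b) * t };
    }

    Color blendTerrain(Color base, Color tex1, Color tex2, Color tex3, Color mask)
    {
        Color result = base;
        result = lerp(result, tex1, mask.r);  // red channel blends texture 1
        result = lerp(result, tex2, mask.g);  // green channel blends texture 2
        result = lerp(result, tex3, mask.b);  // blue channel blends texture 3
        return result;
    }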

Then, of course, there's all the fancy shmancy water, reflection, and ripply sci-fi volume effects.

Conclusion

Hopefully this at least gives you a start with multi-texturing. Check out the documentation for the APIs to see what the other possible operations, sources, and operands are, and for clarification on the operations I used here.