I knew that part of an on-par graphics engine would have to include the ability to lay decals on surfaces, to simulate things like bullet holes, explosion scars, and blood spatters, such as those seen in Half-Life. So I experimented with that game in an attempt to figure out how it applied decals to the walls. The decals were affected by the surrounding light and were illuminated by the flashlight as well.
I thought at first that the decals were actually modifying the base textures of the walls, since their texel sizes seemed to match the walls exactly. But modifying the wall textures would require storing a lot of extra pixel data. Then I noticed that sometimes when I shot a wall, some of the older decals were removed and replaced. So clearly some other kind of storage was taking place.
I finally realized that applying decals to a wall was no more complicated than assembling and storing a small textured polygonal surface and attaching it to the surface data, so that when it came time to render, the engine would draw the wall with whatever lighting applied to it, then add the decals afterwards one by one using the same lighting result.
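A minimal sketch of that storage idea, with names and layout of my own invention rather than anything taken from a real engine:

```cpp
#include <vector>

// Minimal stand-in math types.
struct Vec3 { float x, y, z; };
struct Vec2 { float u, v; };

// A decal is just a small textured polygon stored alongside the wall
// it was placed on, clipped and lit once at placement time.
struct Decal {
    std::vector<Vec3> verts;     // clipped polygon, in world space
    std::vector<Vec2> texCoords; // decal texture coordinates
    std::vector<Vec2> lmCoords;  // coordinates into the wall's lightmap
    int textureId = 0;
};

// Each renderable wall surface owns its decal list; at render time the
// engine draws the wall first, then iterates this list with the same
// lighting state.
struct Surface {
    std::vector<Vec3> verts;
    std::vector<Decal> decals;

    void addDecal(Decal d) {
        // A cap with eviction here would explain older decals
        // disappearing as new shots land.
        decals.push_back(std::move(d));
    }
};
```

Keeping the decals on the surface, rather than in one global list, means the renderer never has to search for them: whatever lighting it just computed for the wall is still on hand for its decals.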
The next problem I had to worry about, then, was how to render the decal so that it doesn't extend past the edge of the surface. Was some special stenciling taking place? Ultimately, I decided on a polygon intersection that would reshape the base square decal into a shape conforming to the shape of the wall. I would only need to do this once for each decal as I placed it, and it wouldn't require any other special calculations or operations during rendering, so this seemed like the best solution. A decal that stretches past one wall and onto another would just require another partial decal applied to the other wall to make it appear to be a single object.
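That one-time intersection step can be done with Sutherland-Hodgman clipping: run the decal quad against each edge of the (convex) wall polygon in turn. This is my own sketch of the technique, not the routine from the text; both polygons are assumed to already be expressed in the wall's 2D plane coordinates, with the engine projecting to and from 3D around this step.

```cpp
#include <vector>
#include <cstddef>

struct P2 { float x, y; };

// Signed area test: > 0 when c lies to the left of edge a->b
// (assumes the wall polygon is wound counter-clockwise).
static float side(const P2& a, const P2& b, const P2& c) {
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

// Intersection of segment p->q with the infinite line through a->b.
static P2 intersect(const P2& a, const P2& b, const P2& p, const P2& q) {
    float t = side(a, b, p) / (side(a, b, p) - side(a, b, q));
    return { p.x + t * (q.x - p.x), p.y + t * (q.y - p.y) };
}

// Sutherland-Hodgman: clip the decal polygon against every edge of a
// convex wall polygon. Done once, at placement time.
std::vector<P2> clipDecal(std::vector<P2> poly, const std::vector<P2>& wall) {
    for (std::size_t i = 0; i < wall.size(); ++i) {
        const P2& a = wall[i];
        const P2& b = wall[(i + 1) % wall.size()];
        std::vector<P2> out;
        for (std::size_t j = 0; j < poly.size(); ++j) {
            const P2& p = poly[j];
            const P2& q = poly[(j + 1) % poly.size()];
            bool pin = side(a, b, p) >= 0.0f;
            bool qin = side(a, b, q) >= 0.0f;
            if (pin) out.push_back(p);
            if (pin != qin) out.push_back(intersect(a, b, p, q));
        }
        poly = std::move(out);
    }
    return poly;
}
```

A decal quad hanging halfway off the wall's right edge comes back as a narrower polygon flush with that edge, which is exactly the "reshaped" decal described above.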
Next came the challenge of how to render the decals so that they don't Z-fight with the wall or with each other. I wanted a wall to be able to have an unlimited number of decals on it without any concern of Z-fighting. At first, I thought I would need some engine Z-bias alteration for every decal, made more extreme as the decals piled on. It turns out that I only needed to adjust the Z-bias once, out from the wall. After that, I could render all the decals with depth testing enabled but no depth writing. I wouldn't need the decals to write depth values, since the wall's depth values would already occlude anything behind it (alpha walls would be another story, but that's a matter of rendering order). Using this painter's method, then, I had my infinite decals solution.
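The depth setup boils down to two render states, which I'll sketch here as a hypothetical state description (the real settings would go through the graphics API's own calls, e.g. a polygon offset or depth-bias render state plus a depth-mask toggle):

```cpp
// Hypothetical render-state summary of the two passes. The key idea:
// bias depth once for the whole decal pass, test against the wall's
// depth buffer, and never write to it.
struct DepthState {
    bool  depthTest;
    bool  depthWrite;
    float zBias;   // small offset pulling geometry toward the camera
};

// Wall pass: normal depth behavior, no bias.
constexpr DepthState kWallPass  = { true, true,  0.0f };

// Decal pass: one shared bias for every decal on the wall. Because
// decals never write depth, any number of them can overlap in
// painter's order without Z-fighting each other, while the wall's
// depth values still occlude decals on surfaces behind it.
constexpr DepthState kDecalPass = { true, false, 1.0f };
```

The single shared bias is what makes the decal count unlimited: with depth writes off, later decals never fail the depth test against earlier ones, so no per-decal escalation is needed.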
Lighting the decals wasn't too difficult. I found a way to calculate a texture coordinate on a surface from any given point. Using that, I could provide a lightmap texture coordinate for every vertex on an arbitrary decal and use the wall's lightmap to illuminate it.
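One way that point-to-coordinate calculation can work, assuming the surface stores a world-space origin for its lightmap along with the (non-unit) axes spanning it, is to project the point onto those axes. The structure and names here are my own guesses, not the engine's:

```cpp
struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}
static Vec3 sub(const Vec3& a, const Vec3& b) {
    return { a.x - b.x, a.y - b.y, a.z - b.z };
}

// Hypothetical per-surface lightmap frame.
struct LightmapBasis {
    Vec3 origin; // world position of lightmap coordinate (0,0)
    Vec3 u, v;   // world-space extent of the full lightmap along U and V
};

// Lightmap coordinate of any world point lying on the surface: the
// point's projection onto the U and V axes, normalized to 0..1.
void lightmapCoord(const LightmapBasis& b, const Vec3& p,
                   float& outU, float& outV) {
    Vec3 d = sub(p, b.origin);
    outU = dot(d, b.u) / dot(b.u, b.u);
    outV = dot(d, b.v) / dot(b.v, b.v);
}
```

Running this once per decal vertex at placement time yields the lightmap coordinates the decal needs to be modulated by the wall's own lighting.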
As I goofed around with applying decals (and writing a horrendous single-pass surface intersection routine, which I hope never to have to write again), I realized that alpha walls should realistically have decals visible on both sides (imagine shooting a window and causing a crack - it should be visible on both sides of the glass, right?). Having the engine detect decals on the opposite surface during rendering seemed like a tractable problem, but I ran into some trouble. I had already coded the engine to light walls modulated with their prepared lightmaps, and was doing the same for the decals by providing them with accurate lightmap coordinates when I placed them. But the surface on the other side of the alpha wall was actually part of a different cluster of surfaces, and that cluster had a different set of decal lists to access. To add to the problem, the decals on the opposing wall had texture coordinates set up to make use of the opposing wall's lightmap, and nothing was prepared to allow them to be properly lightmapped on the front wall.

It was too much of a headache to figure out how to give a surface access to any other possible surface's decal list and provide texture coordinates to decals without over-convoluting the initialization routines, so I came up with a nifty compromise. I would flag two surfaces as opposing each other, so that whenever a decal is applied to one while evaluating game events, another decal is applied to the other, flagged as a 'backside' decal. This allowed both surfaces to access their corresponding decal lists and have everything they needed to render them, including proper lightmap texture coordinates. I would render the backside decals without any depth writing or Z-bias (facing backwards, with front-face mode switched), then render the alpha wall with depth, then Z-bias and render the frontside decals without any depth writing.
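The pairing compromise can be sketched like this; the types and the `opposing` link are hypothetical names of mine, standing in for whatever the real surface data looks like:

```cpp
#include <vector>

// A decal flagged by which side of the alpha wall it lives on.
struct PairDecal {
    int  textureId;
    bool backside; // drawn before the alpha wall, no Z-bias, facing flipped
};

// Two surfaces of a two-sided alpha wall, each linked to the other.
struct AlphaSurface {
    std::vector<PairDecal> decals;
    AlphaSurface* opposing = nullptr; // the other side of the glass
};

// Game-event code places one decal, and the flagged partner decal is
// placed on the opposing surface automatically. Each surface then
// renders only its own list, with its own lightmap coordinates.
void applyDecal(AlphaSurface& front, int textureId) {
    front.decals.push_back({ textureId, false });
    if (front.opposing)
        front.opposing->decals.push_back({ textureId, true });
}
```

To the renderer there is nothing special to detect at draw time: every surface already owns exactly the decals it needs, which is the whole point of the compromise.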
This gave me what I was looking for - decals slightly shaded on the far side of the alpha surface, then the alpha surface itself, then the apparent decals on the front side. To the user, each decal appears to be a single entity, when in the game code there are actually two, one for each surface. It works, so I kept it.
Applying decals to mobile objects, such as doors, involved transforming the surface decal I prepared into object space. Beyond that, every game object would need its own set of decal lists, one for each surface, in order to render them properly. I ended up providing the game objects with their own set of lightmaps for proper shading as well (I don't do this with smaller objects, just larger significant ones).
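The object-space transform is just the inverse of the object's world transform applied to each decal vertex; for a rigid transform (rotation plus translation) the inverse is the transposed rotation applied to the translated point. A sketch under those assumptions, with my own minimal types:

```cpp
#include <cmath>

struct V3 { float x, y, z; };

// Row-major 3x3 rotation plus translation: a rigid object transform.
struct Xform {
    float r[3][3];
    V3    t;
};

// A decal is clipped and placed in world space, so to attach it to a
// mobile object (a door, say) each vertex is moved into the object's
// local space. The renderer's usual object transform then carries the
// decal along as the object moves.
V3 worldToObject(const Xform& m, const V3& p) {
    V3 d = { p.x - m.t.x, p.y - m.t.y, p.z - m.t.z };
    return {                         // inverse rotation = transpose
        m.r[0][0] * d.x + m.r[1][0] * d.y + m.r[2][0] * d.z,
        m.r[0][1] * d.x + m.r[1][1] * d.y + m.r[2][1] * d.z,
        m.r[0][2] * d.x + m.r[1][2] * d.y + m.r[2][2] * d.z,
    };
}
```

Done once at placement, this lets the door swing open with its bullet holes riding along, with no per-frame decal work beyond the object's normal transform.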