Hi,
Here's another problem with shaders.
Vertex shader input is marked up with tags that declare the intended data usage - semantics. For the fixed function pipeline, these tags must be 100% correct, because they tell the pipeline how to interpret the data: is this the position stream, the color stream, or a texcoord stream? For shaders, however, I don't think this holds. Since the application controls the shader, it doesn't need to attach any particular meaning to the semantics.
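For illustration, here's roughly what a d3d9 vertex declaration looks like (the names are from the d3d9 headers, the layout is made up). Each element carries a usage semantic plus a usage index, and for shaders those only have to match the dcl_ instructions in the shader byte code, not the actual contents of the stream:

    static const D3DVERTEXELEMENT9 decl[] =
    {
        /* stream, offset, type, method, usage, usage index */
        {0,  0, D3DDECLTYPE_FLOAT3,   D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0},
        {0, 12, D3DDECLTYPE_D3DCOLOR, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_COLOR,    0},
        {0, 16, D3DDECLTYPE_FLOAT2,   D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 0},
        D3DDECL_END()
    };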
I've looked at two demos that demonstrate different problems.

Demo #1: Instancing: http://www.humus.ca/index.php?page=3D&ID=52
This demo has HLSL shaders that use the TEXCOORD8 semantic. By the way we currently map things, that's an invalid semantic, since only texcoords 0-7 are supported. Furthermore, the comments make it clear that this is not texture data at all - the semantic is simply meant to route the data to the vertex shader input declared as dcl_texcoord8. This means we need to support all semantic names with indices 0-15 as possible labels for shader input data. MSDN even mentions that TEXCOORD can be used for user-defined data. This presentation: http://www.ati.com/developer/gdc/D3DTutorial1_Shaders.pdf has examples that use POSITION, NORMAL, etc. with usage indices greater than 0, all of which are invalid on the fixed function pipeline.
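Concretely, user-defined data can be routed through a high texcoord index like this (stream assignment and types are my guesses, not the demo's actual code):

    static const D3DVERTEXELEMENT9 decl[] =
    {
        {0,  0, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0},
        /* Not texture coordinates at all - arbitrary per-vertex data
         * that only the shader knows how to interpret. */
        {0, 12, D3DDECLTYPE_FLOAT4, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 8},
        D3DDECL_END()
    };

On the HLSL side this shows up as an input like "float4 data : TEXCOORD8", which compiles to dcl_texcoord8 - so we have to accept the full 0-15 index range for every semantic name.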
I am currently working on a very large patch that restructures the strided data handling to address this problem. But that's not all.
Demo #2: Many Per Pixel Lights, http://www.zanir.szm.sk/dx/017_Many_Per_Pixel_Lights.zip
This is a d3d8 demo. Here the shader inputs are not marked with semantics at all - the declaration maps 1:1 to the shader inputs, so a specific register number is designated as the D3DVSDE_DIFFUSE register. Now, consider that we use the semantic to implement a shader fixup - flipping red and blue on color inputs. Previously this fixup did not work at all on d3d8 shaders (as far as I can tell), and I made it work today by storing a fake semantic for d3d8 shaders. The result is that in the demo above everything turned green, and very wrong. Why? Looking at the demo, it loads user-defined data into all the declaration registers without paying any attention to the "meaning" of those registers, since it controls the shader. As a result we end up flipping random data that may be important - not red and blue. In this particular case the register holds a relative addressing value, so it's absolutely critical that it keeps its exact value - that's why everything breaks.
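For reference, a d3d8 declaration is just a token stream - register numbers, no usage semantics. Something along these lines (my sketch, not the demo's actual declaration):

    DWORD decl[] =
    {
        D3DVSD_STREAM(0),
        D3DVSD_REG(D3DVSDE_POSITION, D3DVSDT_FLOAT3), /* register 0 */
        /* "Diffuse" in name only - the app is free to put, say,
         * relative addressing indices here. */
        D3DVSD_REG(D3DVSDE_DIFFUSE,  D3DVSDT_FLOAT4), /* register 5 */
        D3DVSD_END()
    };

There is nothing here telling us whether register 5 actually holds a color, so faking a color semantic for it is pure guesswork.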
Therefore, how can we rely on a semantic tag for shader fixups? It seems we can't. I also don't understand why we're applying fixups to shader input in the first place - can someone explain exactly why this fixup is needed? If we need to flip red and blue because the format is backwards, shouldn't that be done at the end of the pipeline, at the point where GL interprets the data? Flipping things at an intermediate point can affect all kinds of calculations in the shader, whereas at the end of the pipeline we can reliably tell what is to be interpreted as color data, instead of following semantic hints.
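Just to be concrete about what the fixup itself does - it's nothing more than a red/blue channel swap, something like this (hypothetical helper, not our actual code):

    /* Swap red and blue in a D3DCOLOR-style 0xAARRGGBB dword.
     * Applied to anything that isn't actually a color, this silently
     * corrupts the data - which is exactly what happens in demo #2. */
    static inline DWORD swap_red_blue(DWORD c)
    {
        return (c & 0xff00ff00)          /* alpha and green unchanged */
             | ((c & 0x00ff0000) >> 16)  /* red down to blue's spot */
             | ((c & 0x000000ff) << 16); /* blue up to red's spot */
    }

Whatever form it takes, this transformation is only safe at a point where we know for certain that the data really is a color.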
Any thoughts?