Shader Model 3.0 and earlier versions have different ways of passing varyings from the vertex to the pixel shader, which in the end means that the pixel shader depends on the vertex shader. If the vertex shader is a GLSL shader, the pixel shader can read either the builtin varyings or the declared ones: the GLSL vertex shader writes both, and the driver should sort it out. If the vertex shader is not a GLSL shader, we can only write to builtin varyings, so a GLSL pixel shader has to use different code. Currently the GLSL linking code sorts this out, but it creates dependencies between the vertex and pixel shaders. In the end I think that, at the very least, the linker object has to know far too much about the shaders, and the shaders have to know things about each other.
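Just to illustrate the dependency (a minimal sketch, not the actual wined3d code; emit_texcoord_read and the IN/T register naming are made up here): the generated pixel shader has to read a texcoord differently depending on what kind of vertex shader it will be linked against.

#include <stdio.h>
#include <stdbool.h>

/* Hypothetical sketch: emit the GLSL pixel-shader line that reads texture
 * coordinate 'reg', choosing between a declared varying and the builtin
 * gl_TexCoord[] depending on the vertex shader type it will be linked to. */
static void emit_texcoord_read(FILE *out, unsigned int reg, bool vshader_is_glsl)
{
    if (vshader_is_glsl)
        /* A GLSL vertex shader writes both builtin and declared varyings,
         * so the pixel shader can read the declared one directly. */
        fprintf(out, "vec4 T%u = IN%u;\n", reg, reg);
    else
        /* A fixed-function or ARB vertex shader only writes builtin
         * varyings, so the pixel shader has to fall back to gl_TexCoord[]. */
        fprintf(out, "vec4 T%u = gl_TexCoord[%u];\n", reg, reg);
}

int main(void)
{
    emit_texcoord_read(stdout, 0, true);   /* GLSL VS  -> "vec4 T0 = IN0;" */
    emit_texcoord_read(stdout, 0, false);  /* non-GLSL -> "vec4 T0 = gl_TexCoord[0];" */
    return 0;
}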
You mean the code in generate_param_reorder_function? That's exactly the kind of thing that should go into a "linker" object: it takes a vertex and a pixel shader as input and handles how they talk to each other. The "semantics" map is the interface between them that describes the varyings; of course that interface has to be available to the linker so it can read the vertex outputs and make decisions about the pixel inputs. Your "linker" could be backend-independent and try to support odd combinations like a software VS / GLSL PS or something like that, or it could be GLSL-specific and basically know how to link one GLSL vertex shader to one GLSL pixel shader (you could use a different linker for other things).
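Roughly what I have in mind, as a toy sketch only (struct semantic, link_semantics and the IN/OUT names are invented for illustration, nothing here comes from wined3d): the linker reads the vertex outputs from the semantics map, matches them against the pixel inputs, and emits the reorder glue.

#include <stdio.h>

/* Purely illustrative semantics entry: what a varying means (usage +
 * usage index) and which register it lives in. */
struct semantic
{
    unsigned int usage;        /* e.g. COLOR, TEXCOORD */
    unsigned int usage_idx;    /* TEXCOORD0, TEXCOORD1, ... */
    unsigned int reg;          /* output/input register index */
};

/* Minimal "link" step: for every pixel-shader input, find the vertex-shader
 * output with the same usage/usage_idx and emit the GLSL glue that copies
 * one into the other - i.e. the job of the param reorder function. */
static void link_semantics(const struct semantic *vs_out, unsigned int vs_count,
                           const struct semantic *ps_in, unsigned int ps_count)
{
    unsigned int i, j;

    for (i = 0; i < ps_count; ++i)
    {
        for (j = 0; j < vs_count; ++j)
        {
            if (vs_out[j].usage == ps_in[i].usage
                    && vs_out[j].usage_idx == ps_in[i].usage_idx)
            {
                printf("IN%u = OUT%u;\n", ps_in[i].reg, vs_out[j].reg);
                break;
            }
        }
    }
}

int main(void)
{
    const struct semantic vs_out[] = { {0, 0, 0}, {1, 0, 1} };  /* COLOR0 in reg 0, TEXCOORD0 in reg 1 */
    const struct semantic ps_in[]  = { {1, 0, 0} };             /* PS only reads TEXCOORD0, in reg 0 */

    link_semantics(vs_out, 2, ps_in, 1);  /* prints "IN0 = OUT1;" */
    return 0;
}

A GLSL-only linker would own this matching; a backend-independent one would hide it behind an interface so other VS/PS combinations could plug in their own.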
Why is the vertex/pixel linkage related to how you generated the vertex (or pixel) shader? Does it matter whether your shader was created from the fixed or the programmable interface on the d3d side in order to link it to the other one?
How will geometry shaders fit into the picture?
Ivan