On 12/04/2008, Stefan Dösinger <stefan@codeweavers.com> wrote:
But since D3D vertex shaders always read the numbered arrays and fixed function always reads the named arrays, the named arrays are de facto replaced as far as we're concerned.
Which is completely irrelevant for classifying the operation. Please read the ARB_vertex_program spec, issue 3 and the ARB_fragment_program spec, issue 13 to get a better idea of what shaders replace and what they don't.
You only need two: one to toggle writing 1.0 to the 4th coordinate when needed, and one to toggle copying the 3rd coordinate to the 4th when needed. It would certainly beat doing an extension check for every possible backend. Right now we always do the fixup, so in that respect it would be an improvement as well.
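Roughly something like this (just a sketch, these names don't exist anywhere, they only illustrate the two toggles):

/* Sketch only; hypothetical names, not existing wined3d fields. */
struct position_fixup_caps
{
    BOOL write_w_one;  /* backend needs 1.0 written to the 4th coordinate */
    BOOL copy_z_to_w;  /* backend needs the 3rd coordinate copied to the 4th */
};

static void apply_position_fixup(float *position, const struct position_fixup_caps *caps)
{
    if (caps->copy_z_to_w) position[3] = position[2];
    else if (caps->write_w_one) position[3] = 1.0f;
}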
I'm working on a patch that makes the atifs shader code take care of applying the texture transform matrix. That works naturally without any flags or backend check if you don't try to split vertex and fragment processing by force.
If the issue is that you've got an interest in keeping the existing structure because you've already written code on top of it, there's not much point in having this discussion in the first place. If that's not the issue, I'd like to mention that the whole point of having interfaces is that you can avoid ugliness like setting vertex processing state in the fragment processing part of the pipeline. There's also nothing forced about splitting the pipeline into vertex and fragment processing; that's how the hardware works, it's how GL works, and it's how D3D works.
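Sketch of the kind of interface I mean for the fragment side; names and signatures are illustrative only, and the vertex side would get an equivalent structure:

/* Illustrative sketch, not a final interface. */
struct fragment_pipeline
{
    void *(*alloc_private)(IWineD3DDevice *iface);
    void  (*free_private)(IWineD3DDevice *iface);
    void  (*apply_states)(void *private_data);
    void  (*mark_state_dirty)(void *private_data, DWORD state);
};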
That depends on the fog type. Vertex fog is a vertex state, fragment fog is a fragment state. Changing the type would obviously have interactions with both parts of the pipeline.
On the GL side both vertex and fragment fog are applied to the same GL state. Using an ARBFP or GLSL fragment shader replaces vertex fog as well, so you'll have to implement both types in the fragment processing replacement.
The fog blending is a fragment operation, yes. Coordinate calculation depends on the coordinate source and the fog hint, and can happen during vertex processing, during fragment processing, or not at all if fog coordinates are specified explicitly.
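Simplified sketch of the decision involved, ignoring range fog and other corner cases; the names are made up for illustration:

enum fog_coord_source
{
    FOG_COORD_SPECIFIED,  /* taken from the vertex data, no calculation */
    FOG_COORD_VERTEX,     /* computed during vertex processing */
    FOG_COORD_FRAGMENT,   /* computed per fragment from depth */
};

static enum fog_coord_source select_fog_coord_source(BOOL has_specified_coord, BOOL vertex_fog)
{
    if (has_specified_coord) return FOG_COORD_SPECIFIED;
    return vertex_fog ? FOG_COORD_VERTEX : FOG_COORD_FRAGMENT;
}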
The fog settings depend on the vertex declaration (XYZ vs. XYZRHW) and the shader properties ("foggy shader"). That means the fragment processing code (pixel shader and ffp replacement) would look at the core of the vertex processing settings. Doesn't that defeat separating them in the first place?
No, like you correctly mention below, the point is to separate the implementation, not where the implementation gets its information from on the D3D side.
(I understand that you want to split the state types on the GL side, not the D3D side. But when you split the application of one D3D state into 3 pieces, I fail to see how that is cleaner.)
This would hardly be something new. You mentioned the vertex declaration state yourself, which modifies multiple GL states, and it's hardly the only one.
You didn't answer how you plan to implement state dirtification. You have a SetRenderState call that changes render state X. Which implementation state(s) do you dirtify? I.e.:
device->fragment_state_manager->mark_state_dirty(device->fragment_private_data,
        state);
Where does "state" come from?
I don't remember you asking, but I see no reason to change the basic way dirtification is currently done.
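The "state" in your snippet is just whatever SetRenderState was called with. Purely for illustration, keeping the current dirtification scheme and simply forwarding the state to both halves could look like this (sketch only, not actual code; each half ignores states it doesn't own):

static void device_mark_render_state_dirty(IWineD3DDeviceImpl *device, WINED3DRENDERSTATETYPE state)
{
    device->vertex_state_manager->mark_state_dirty(device->vertex_private_data, state);
    device->fragment_state_manager->mark_state_dirty(device->fragment_private_data, state);
}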
if (!use_vs) { device->vertex_state_manager->apply_states(device->vertex_private_data); }
...
This would be done where?
ActivateContext, CTXUSAGE_DRAWPRIM. (Yes, it should probably be part of the context, not the device; my bad.)
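For illustration, with the state managers moved to the context that could look roughly like this (sketch only, member names are made up):

/* Sketch of what ActivateContext(..., CTXUSAGE_DRAWPRIM) could do. */
static void context_apply_draw_states(struct WineD3DContext *context, BOOL vs_active, BOOL ps_active)
{
    if (!vs_active) context->vertex_state_manager->apply_states(context->vertex_private_data);
    if (!ps_active) context->fragment_state_manager->apply_states(context->fragment_private_data);
}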
No, I mean a D3D (3.0) vertex shader that is translated and running via GLSL. Currently the pixel shader decides where the vertex shader (all versions) writes the varyings to (generate_param_reorder_function). If a fixed function vertex replacement shader and a pre-3.0 shader write to the regular fixed function output, how would you run a pixel shader like the one in vshader_version_varying_test() in visual.c together with a 1.x or 2.0 vertex shader, or with XYZRHW data from a GLSL vertex pipeline replacement shader? (With the fixed function GL vertex pipeline we're screwed, but we aren't necessarily screwed with a GLSL vertex pipeline replacement.)
The pipeline object would certainly have access to all the information required to create such a reordering function, but I fail to see how it's relevant at this point. The idea here certainly isn't to magically fix all state and shader related issues in wined3d, it's just about making fixed function replacement shaders possible in a maintainable way.
What I meant was this. Scenario 1: GLSL is used for pixel shaders and for the fragment pipeline replacement. How do I find out whether I have to link the fragment replacement GLSL shader or the pixel shader GLSL shader into the program I activate? -> Share private data between the GLSL shader backend and the GLSL fixed function object.
Followup scenario 2: We don't have a GLSL fixed function backend yet. I have an ATI X1600 card, I am using GLSL for pixel shaders, and I am using ATIFS for the pipeline replacement. The GLSL pixel shader code reads the GLSL fragment pipeline replacement private data. -> That GLSL fragment replacement private data is either nonexistent or hanging around without code maintaining it. How do we deal with that?
Not exactly. In the first place, if there's no GLSL fixed function implementation you'll never have to link to it, simple as that. Now in case there *is* a GLSL fixed function replacement but it isn't used (for whatever reason), that simply means the GLSL pipeline object's private data will tell it it doesn't have to link anything (or rather, *not* tell it it has to link to the ffp replacement).
There really is no distinction between "pixel shader private data" and "ffp replacement private data"; they're both pointers to the same block of memory, and e.g. the shader backend will only ever get passed its own private data.
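To illustrate the "same block of memory" point, a hypothetical layout (not actual wined3d code):

/* Hypothetical: one private data block shared by the GLSL shader backend
 * and a GLSL ffp replacement. With no ffp replacement in use,
 * ffp_fragment_shader simply stays 0 and the linking code skips it. */
struct glsl_private_data
{
    GLhandleARB current_vshader;      /* translated D3D vertex shader, or 0 */
    GLhandleARB current_pshader;      /* translated D3D pixel shader, or 0 */
    GLhandleARB ffp_fragment_shader;  /* GLSL ffp fragment replacement, or 0 */
};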
I'd also like to note that most of the issues you bring up are not new or specific to this design at all, and some of them wouldn't work at all with the current structure.