On Saturday, 12 April 2008 15:06:38, H. Verbeet wrote:
It doesn't get ignored, you still do the upload and the data is still available should the shader choose to use it. Still, I probably should've phrased it as "functionality that gets replaced by a vertex / fragment shader".
But since D3D vertex shaders always read the numbered arrays and fixed function always reads the named arrays, the named arrays are de facto replaced as far as we're concerned.
That means different fragment processing implementations have different vertex processing requirements. Now you could make that a flag in the fragment processing and pixel shader implementation. You'd need 4 flags (nonshader_unprojected, shader_unprojected, nonshader_count3, shader_count3). Are you sure the flags won't grow out of control?
You only need two. One to toggle writing 1.0 to the 4th coordinate when needed, and one to toggle copying the 3rd coordinate to the 4th when needed. It would certainly beat doing an extension check for every possible backend. Right now we always do the fixup, so in that respect it would be an improvement as well.
I'm working on a patch that makes the atifs shader code take care of applying the texture transform matrix. That works naturally without any flags or backend check if you don't try to split vertex and fragment processing by force.
That depends on the fog type. Vertex fog is a vertex state, fragment fog is a fragment state. Changing the type would obviously have interactions with both parts of the pipeline.
On the GL side both vertex and fragment fog are applied to the same GL state. Using an ARBFP or GLSL fragment shader replaces vertex fog as well, so you'll have to implement both types in the fragment processing replacement.
The fog settings depend on the vertex decl (XYZ vs. XYZRHW) and the shader properties ("foggy shader"). That means the fragment processing code (pixel shader and ffp replacement) would look at the core of the vertex processing settings. Doesn't that defeat separating them in the first place?
(I understand that you want to split the state types on the GL side, not the D3D side. But when you split the application of one D3D state into three pieces, I fail to see how that is cleaner.)
You didn't answer how you plan to implement state dirtification. You have a SetRenderState call that changes render state X. Which implementation state(s) do you dirtify? I.e.:
device->fragment_state_manager->mark_state_dirty(device->fragment_private_data, state);
Where does "state" come from?
if (!use_vs) { device->vertex_state_manager->apply_states(device->vertex_private_data); } ...
This would be done where?
Where would you write the TEXCOORD0-7 and D3DCOLOR0 and 1 varyings from a GLSL vertex shader, and where do you read them from in the pixel shader? Keep indirect varying addressing in the pshader in mind.
Just to be clear, with "GLSL vertex shader" you mean "GLSL vertex processing replacement", right? A vertex processing replacement shader would write to the regular fixed function output, ie gl_FrontColor, gl_FrontSecondaryColor, gl_TexCoord[], etc. The fragment shader would read them the same way as it does when paired with fixed function or pre-3.0 vertex shaders.
No, I mean a D3D (3.0) vertex shader that is translated and running via GLSL. Currently the pixel shader decides where the vertex shader (all versions) writes the varyings to (generate_param_reorder_function). If a fixed function vertex replacement shader and a pre-3.0 shader write to the regular fixed function output, how would you run a pixel shader like the one in vshader_version_varying_test() in visual.c together with a 1.x or 2.0 vertex shader, or with XYZRHW data from a GLSL vertex pipeline replacement shader? (With the fixed function GL vertex pipeline we're screwed, but we aren't necessarily screwed with a GLSL vertex pipeline replacement.)
Pixel shaders + fragment processing replacement doesn't make sense. Either a GLSL vertex processing replacement + GLSL pixel shader or a GLSL vertex shader + GLSL fragment processing replacement would work though. The "GLSL pipeline object" would know if it's being used as vertex and/or fragment replacement and link everything together. In case atifs is used no linking is required.
What I meant was this. Scenario 1: GLSL is used for pixel shaders and for the fragment pipeline replacement. How do I find out whether I have to link the fragment replacement GLSL shader or the pixel shader GLSL shader into the program I activate? -> Share private data between the GLSL shader backend and the GLSL fixed function object.
Follow-up scenario 2: We don't have a GLSL fixed function backend yet. I have an ATI X1600 card, I am using GLSL for pixel shaders, and I am using ATIFS for the pipeline replacement. The GLSL pixel shader code reads the GLSL fragment pipeline replacement private data. -> This GLSL fragment replacement private data is either nonexistent or hanging around without code maintaining it. How do we deal with that?
They are pretty widespread and are even used in the Eee PC, so I think dealing with these cards will become a priority soon, at least for me. Unfortunately the driver sucks in terms of stability and performance.
Are those cards powerful enough to support a fixed function replacement?
Afaik the older cards do not support vertex processing in hardware; it's done in software, so it depends on the driver. On the fragment side they are pretty solid and work only with programmable fragment processing internally. The newer X3100 only supports programmable vertex and fragment processing in hardware, so a pipeline replacement is done either by us or by the driver. It might be powerful enough for GLSL as well, though.