I don't quite understand why it's necessary to write ARB, GLSL, and NONE shader descriptors inside the ati_shader file. How will this infrastructure scale to a new shader backend added a year from now?
For such a large patchset, maybe there should be a high-level design diagram that explains how it works?
I very much like that backend-specific functionality ends up in the appropriate backend - as per patches 10, 11, and 14. However, it seems those patches stand on their own, independent of the rest of the patchset and this feature.
Ivan
On Sunday, 16 March 2008 15:00:49, Ivan Gyurdiev wrote:
I don't quite understand why it's necessary to write ARB, GLSL, and NONE shader descriptors inside the ati_shader file. How will this infrastructure scale to a new shader backend added a year from now?
Currently I am doing that so ATI cards which have ARB_fp or GLSL can use the fragment replacement code and d3d pshaders at the same time. As commented in the code, we can remove all but the ATIfs+ARBvp backend the day we have an ffp replacement in ARB or GLSL. Thus there's no need for that to scale; what we have here is the worst case.
I very much like that backend-specific functionality ends up in the appropriate backend - as per patches 10, 11, and 14. However, it seems those patches stand on their own, independent of the rest of the patchset and this feature.
You're right, the shader model changes do not really need the ATIfs code to work. However, separating them would be quite a pain, and I do not see any gain from that beyond the point where the patches are applied. It won't really affect regression testing.
The reason I wrote it that way was that I needed an implementation of an ffp replacement to see where the whole thing goes. I couldn't do that with the existing nvts code because nvrc+nvts are quite different from arb/glsl/atifs and, as described in the other mail, putting that code into a separate shader backend won't help us much.
Stefan Dösinger wrote:
On Sunday, 16 March 2008 15:00:49, Ivan Gyurdiev wrote:
I don't quite understand why it's necessary to write ARB, GLSL, and NONE shader descriptors inside the ati_shader file. How will this infrastructure scale to a new shader backend added a year from now?
Currently I am doing that so ATI cards which have ARB_fp or GLSL can use the fragment replacement code and d3d pshaders at the same time.
To me this suggests that there should be two separate shader backends, selected differently for "fixed pipeline replacement" purposes and for shaders. I don't like the argument that this is only an intermediate step - if no one writes the glsl/arb replacement code, it becomes permanent.
Ivan
On Sunday, 16 March 2008 17:35:25, Ivan Gyurdiev wrote:
To me this suggests that there should be two separate shader backends, selected differently for "fixed pipeline replacement" purposes and for shaders. I don't like the argument that this is only an intermediate step - if no one writes the glsl/arb replacement code, it becomes permanent.
How would that work with e.g. GLSL, where vertex and fragment shaders are linked? How would you use a D3D vertex shader together with an ffp replacement fragment shader?
One of the aims of my patches is providing the infrastructure where summer of code applicants could kick in. So I think chances that we get an ARB or GLSL replacement soon are pretty good.
On 17/03/2008, Stefan Dösinger stefan@codeweavers.com wrote:
On Sunday, 16 March 2008 17:35:25, Ivan Gyurdiev wrote:
To me this suggests that there should be two separate shader backends, selected differently for "fixed pipeline replacement" purposes and for shaders. I don't like the argument that this is only an intermediate step - if no one writes the glsl/arb replacement code, it becomes permanent.
How would that work with e.g. GLSL, where vertex and fragment shaders are linked? How would you use a D3D vertex shader together with an ffp replacement fragment shader?
One of the aims of my patches is providing the infrastructure where summer of code applicants could kick in. So I think chances that we get an ARB or GLSL replacement soon are pretty good.
Personally, I'm not quite convinced of the need for an ati fragment shader ffp replacement in the first place. The functionality seems more on the level of register combiners and texture shaders.
Also, all the cards that are powerful enough to support a shader implementation of fixed function processing support GLSL vertex and fragment shaders, so realistically all you really need is a GLSL ffp replacement.
On the subject of GSoC, I'm somewhat sceptical.
On Monday, 17 March 2008 10:01:30, H. Verbeet wrote:
Personally, I'm not quite convinced of the need for an ati fragment shader ffp replacement in the first place. The functionality seems more on the level of register combiners and texture shaders.
Using that argument we could remove the nvrc/nvts code as well. Obviously this doesn't get us more functionality, but it brings ATI cards, most importantly the r200 cards, to the same level as Nvidia cards.
On 17/03/2008, Stefan Dösinger stefan@codeweavers.com wrote:
On Monday, 17 March 2008 10:01:30, H. Verbeet wrote:
Personally, I'm not quite convinced of the need for an ati fragment shader ffp replacement in the first place. The functionality seems more on the level of register combiners and texture shaders.
Using that argument we could remove the nvrc/nvts code as well. Obviously this doesn't get us more functionality, but it brings ATI cards, most importantly the r200 cards, to the same level as Nvidia cards.
Supporting ATI fragment shader is useful, obviously. What I'm not so sure about is positioning it as an ffp replacement.
On Monday, 17 March 2008 12:18:11, H. Verbeet wrote:
Supporting ATI fragment shader is useful, obviously. What I'm not so sure about is positioning it as an ffp replacement.
Do you mean the term "ffp replacement", or what the code is doing? I for one call our nvrc code an ffp replacement as well; however, there's no sharp border between programmable and fixed function functionality. I've seen articles that called GL_ARB_texture_env_combine "programmable" as well.
I don't know if we can support 1.x pixel shaders properly using atifs because texkill and texdepth are missing. However, the visual test shows that the texdepth test is completely broken on my r200 card on Windows, and texkill basically works, but doesn't conform to the refrast and behavior of newer cards, so games should not really use those instructions.
On 17/03/2008, Stefan Dösinger stefan@codeweavers.com wrote:
Do you mean the term "ffp replacement", or what the code is doing? I for one call our nvrc code an ffp replacement as well; however, there's no sharp border between programmable and fixed function functionality. I've seen articles that called GL_ARB_texture_env_combine "programmable" as well.
It's mostly a matter of where you integrate it into the code. At this point I don't really see the advantage of making it a different shader backend compared to integrating it into the existing fixed function code.
On Monday, 17 March 2008 15:53:04, H. Verbeet wrote:
It's mostly a matter of where you integrate it into the code. At this point I don't really see the advantage of making it a different shader backend compared to integrating it into the existing fixed function code.
As I explained in the other mail, atifs uses a complete fragment shader object, as opposed to nvrc/nvts and arb combiners, which configure single stages. This requires different state linking. So to implement that efficiently, I need at least a different state table that links all fixed function states together, instead of one that links colorop settings and alphaop settings per stage.
Also, if we ever implement pixel shaders using atifs, we'll need a shader backend for it.
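To illustrate the difference, here is a rough sketch of the two entry-point styles (not wined3d code, just the public GL_ATI_fragment_shader and GL_ARB_texture_env_combine calls):

    GLuint shader_handle = glGenFragmentShadersATI(1);
    static const GLfloat factor[] = {1.0f, 0.5f, 0.5f, 1.0f};

    /* ATIfs: one monolithic program. Changing any single stage means
     * rebuilding and replacing the whole thing. */
    glBindFragmentShaderATI(shader_handle);
    glBeginFragmentShaderATI();
    glSetFragmentShaderConstantATI(GL_CON_0_ATI, factor);
    glSampleMapATI(GL_REG_0_ATI, GL_TEXTURE0_ARB, GL_SWIZZLE_STR_ATI);
    glColorFragmentOp2ATI(GL_MUL_ATI, GL_REG_0_ATI, GL_NONE, GL_NONE,
                          GL_REG_0_ATI, GL_NONE, GL_NONE,
                          GL_CON_0_ATI, GL_NONE, GL_NONE);
    glEndFragmentShaderATI();

    /* texture_env_combine / nvrc style: each stage is configured on its
     * own, so one D3D texture stage state maps to one GL call. */
    glActiveTextureARB(GL_TEXTURE0_ARB);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_ARB);
    glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB_ARB, GL_MODULATE);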
On Monday, 17 March 2008 10:01:30, H. Verbeet wrote:
Also, all the cards that are powerful enough to support a shader implementation of fixed function processing support GLSL vertex and fragment shaders, so realistically all you really need is a GLSL ffp replacement.
That's not quite true: my atifs replacement supports everything on the fragment side, and that card doesn't support GLSL (not even ARBfp). Also on the vertex side there are cards that have ARB only and not GLSL. I don't yet know if we can replace the fixed function vertex pipeline on r200 cards, as the 128 shader constants seem rather limiting, especially when it comes to indexed vertex blending. However, it should be able to do non-indexed vertex blending just fine, and we can get rid of the color conversion.
On 17/03/2008, Stefan Dösinger stefan@codeweavers.com wrote:
On Monday, 17 March 2008 10:01:30, H. Verbeet wrote:
Also, all the cards that are powerful enough to support a shader implementation of fixed function processing support GLSL vertex and fragment shaders, so realistically all you really need is a GLSL ffp replacement.
That's not quite true: my atifs replacement supports everything on the fragment side, and that card doesn't support GLSL (not even ARBfp).
Sure, but in that aspect it isn't really different from register combiners.
Also on the vertex side there are cards that have ARB only and not GLSL. I don't yet know if we can replace the fixed function vertex pipeline on r200 cards, as the 128 shader constants seem rather limiting, especially when it comes to indexed vertex blending. However, it should be able to do non-indexed vertex blending just fine, and we can get rid of the color conversion.
I think a useful ffp replacement should at least replace the vertex processing part. Extensions like ATI_fragment_shader and NV_register_combiners, etc. can already support pretty much all of fixed function fragment processing without having to integrate it in the shader backends.
In terms of replacing vertex processing, anything that still has dedicated hardware for fixed function processing isn't really powerful enough to run such an ffp replacement, either because of instruction count / number of constant limitations or simply execution speed. For nvidia hardware that basically means you'll need at least something like a GF5 or GF6.
On Monday, 17 March 2008 15:41:07, H. Verbeet wrote:
I think a useful ffp replacement should at least replace the vertex processing part. Extensions like ATI_fragment_shader and NV_register_combiners, etc. can already support pretty much all of fixed function fragment processing without having to integrate it in the shader backends.
From the atifs API's point of view, putting the code into its own shader backend really helps abstraction and efficiency in wined3d (e.g. different state linking). nvrc has a vastly different API (individual stages instead of one "atomic" program that can't be split up).
If we ever implement pixel shaders using nvrc+nvts, we'll have to move the code into a separate backend as well.
In terms of replacing vertex processing, anything that still has dedicated hardware for fixed function processing isn't really powerful enough to run such an ffp replacement, either because of instruction count / number of constant limitations or simply execution speed. For nvidia hardware that basically means you'll need at least something like a GF5 or GF6.
I don't know how the r200 card works internally, but it might have dedicated ffp vertex hardware as well. At least judging from the development history of the r200 driver it seems that way, because the card didn't have ARBvp until the reverse-engineered r300 code was backported.
Anyway, I do not intend to break the ability to use opengl fixed function + d3d shaders like we do now. It's just a matter of each shader backend configuring its state table accordingly. Essentially: glsl shader backend + its own state handlers = replacement pipeline; glsl shader backend + unmodified state table = opengl fixed function pipeline.
To start with, I think we should prefer our current fixed function code with nvrc/ati by default and make a glsl or arb replacement opt-in for now, especially glsl with its long shader linking times.
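Roughly, with made-up names just to illustrate the idea (shader_backend_t, glsl_shader_backend and struct StateEntry are the existing names as far as I remember; everything else here is invented):

    /* Hypothetical sketch: the same GLSL backend paired with two
     * different state tables. */
    struct pipeline_config
    {
        const shader_backend_t *backend;       /* e.g. &glsl_shader_backend */
        const struct StateEntry *state_table;  /* which state handlers run  */
    };

    /* GLSL backend + its own fragment state handlers = ffp replacement */
    static const struct pipeline_config glsl_replacement_pipeline =
    {
        &glsl_shader_backend, glsl_fragment_state_table
    };

    /* GLSL backend + the unmodified state table = GL fixed function */
    static const struct pipeline_config glsl_plus_gl_ffp =
    {
        &glsl_shader_backend, default_state_table
    };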
On 17/03/2008, Stefan Dösinger stefan@codeweavers.com wrote:
From the atifs API's point of view, putting the code into its own shader backend really helps abstraction and efficiency in wined3d (e.g. different state linking). nvrc has a vastly different API (individual stages instead of one "atomic" program that can't be split up).
That's more an issue with the state table design than a reason to create a new shader backend. A more hierarchical state table would probably have helped there, but the general issue you're trying to solve is that you want the state table to change depending on the available extensions and wined3d configuration. I don't think that's limited to shaders or fragment processing. If a GL3 spec ever actually gets released, and actual drivers start implementing it, we'll run into it there as well.
On Monday, 17 March 2008 16:22:30, H. Verbeet wrote:
That's more an issue with the state table design than a reason to create a new shader backend. A more hierarchical state table would probably have helped there, but the general issue you're trying to solve is that you want the state table to change depending on the available extensions and wined3d configuration. I don't think that's limited to shaders or fragment processing. If a GL3 spec ever actually gets released, and actual drivers start implementing it, we'll run into it there as well.
Couldn't we have discussed that one week ago when I sent the patches to wine-devel? ;-)
I think putting the state table into a shader backend is quite reasonable considering the existing opengl spec and extensions, even if we do not call atifs/nvrc "shaders". With my patches we set the shader backend using complex conditions based on available extensions. If we keep the state table separate we'd set both that way, and probably need shader-specific decisions in either the state handlers, or state-table-specific considerations in the shaders. I do not see what we would gain by not putting the atifs code into a shader backend, other than moving the temporary ugliness of the 3 backends to a permanent state table selection ugliness.
As far as opengl 3 is concerned, I think they wanted to get rid of the fixed function pipeline; in that case we'd be best off with the shader backend anyway. However, I think trying to find an architecture that deals with future interfaces is a bit of a lottery anyway. We don't know what Direct3D 11 will bring either.
On Monday, 17 March 2008 16:47:39, Stefan Dösinger wrote:
On Monday, 17 March 2008 16:22:30, H. Verbeet wrote:
That's more an issue with the state table design than a reason to create a new shader backend. A more hierarchical state table would probably have helped there, but the general issue you're trying to solve is that you want the state table to change depending on the available extensions and wined3d configuration. I don't think that's limited to shaders or fragment processing. If a GL3 spec ever actually gets released, and actual drivers start implementing it, we'll run into it there as well.
I think putting the state table into a shader backend is quite reasonable considering the existing opengl spec and extensions, even if we do not call atifs/nvrc "shaders". With my patches we set the shader backend using complex conditions based on available extensions. ...
Thinking about it a bit more, I think I have a better phrasing of this: The API of GL_ATI_fragment_shader is very close to the shader API of GL_ARB_fragment_program / GLSL, and as far as the global WineD3D design is concerned it is equal to the ARBfp one. Contrary to that, the nvrc API is comparable to the ARB_texture_env_combine API. We do not care how it works in the driver or the hardware; that's all abstracted away from us. API-wise it is a shader API, and I think that is reason enough to give the atifs code its own shader backend.
Hi,
Given the past discussion, do you agree with the code now? Alexandre wants your OKs before applying the patches.
Thanks, Stefan
On 18/03/2008, Stefan Dösinger stefan@codeweavers.com wrote:
Hi,
Given the past discussion, do you agree with the code now? Alexandre wants your OKs before applying the patches.
I don't think putting this stuff in its own shader backend is the right approach. On the other hand, given my current level of involvement with wined3d, I wouldn't mind too much if it got committed anyway.
Stefan Dösinger wrote:
Hi,
Given the past discussion, do you agree with the code now? Alexandre wants your OKs before applying the patches.
I am not familiar with the state table, atifs, or recent developments in the d3d codebase. That's why I suggested a diagram, so that everyone can understand the discussion. My concern is long-term maintainability of the shader API. I think the problem is with the definition of "shader backend".
The original intent, and the way it's currently used, is: "A backend for the d3d shader pipeline ("d3d shader backend"), which happens to be implemented using some kind of gl shader".
The way this patchset is heading is: "A (gl shader backend), which implements both d3d shader and ffp pipeline, depending on the circumstances, through a mixed api"
I don't mind sharing the GL shader code to implement FFP or D3D shader pipelines - whether it's ATIfs, ARBfp/ARBvp, GLSL, or some other thing providing the implementation. What I do mind is sharing that code through the same API for both the FFP and D3D pipeline codepaths. This leads to combining APIs that don't belong together, and odd multiplexing like you have in patch 004 - forcing the shader path through atifs, even though atifs currently doesn't support handling shaders properly, and has to "borrow code" from other backends, and implement routing to the right one based on what flags are set.
Why do you need to reroute the shader path through atifs to support an unrelated set of functionality (ffp replacement)? Isn't it possible to have an ffp_backend, and a shader_backend (shader being the d3d shader), and you can implement both differently, with different APIs?
Ivan
On Wednesday, 19 March 2008 07:46:07, Ivan Gyurdiev wrote:
Why do you need to reroute the shader path through atifs to support an unrelated set of functionality (ffp replacement)? Isn't it possible to have an ffp_backend, and a shader_backend (shader being the d3d shader), and you can implement both differently, with different APIs?
Sounds all great and cool, but: How do you handle a case of a d3d vertex shader + fixed function fragment processing using a GLSL shader replacement? An Uber-Shader-Backend that links the shaders together?
I can happily drop the strange routing through GLSL and the none shader backend for ATIFS, or with ARB pixel shaders enabled. In that case we just won't make use of it on ATI 9500+ cards until we have an ARB or GLSL replacement.
Stefan Dösinger wrote:
On Wednesday, 19 March 2008 07:46:07, Ivan Gyurdiev wrote:
Why do you need to reroute the shader path through atifs to support an unrelated set of functionality (ffp replacement)? Isn't it possible to have an ffp_backend, and a shader_backend (shader being the d3d shader), and you can implement both differently, with different APIs?
Sounds all great and cool, but: How do you handle a case of a d3d vertex shader + fixed function fragment processing using a GLSL shader replacement? An Uber-Shader-Backend that links the shaders together?
I can happily drop the strange routing through GLSL and the none shader backend for ATIFS, or with ARB pixel shaders enabled. In that case we just won't make use of it on ATI 9500+ cards until we have an ARB or GLSL replacement.
I'll get back to you on that later tonight, need to think about this some more - way late for work right now... (thanks to you!)
However, yes, I think there needs to be a distinction between a standalone shader concept, and a pipeline concept, which is concerned with linking several multifunctional shaders together - your "uber-shader-backend". Lack of distinction on this point is causing all this confusion.
Ivan
I'll get back to you on that later tonight, need to think about this some more - way late for work right now... (thanks to you!)
However, yes, I think there needs to be a distinction between a standalone shader concept, and a pipeline concept, which is concerned with linking several multifunctional shaders together - your "uber-shader-backend". Lack of distinction on this point is causing all this confusion.
Cool, I'm looking forward to suggestions.
Meanwhile, I've separated the ATIFS implementation and the shader backend changes in my patches. The result is attached. The patches named "1", "2", ... will be merged together to avoid regressions due to partial implementations, and they need some reordering. I've hacked that together during my train ride, so I've no idea if it really works.
I've separated the shader model changes and the atifs implementation to make Alexandre happy. I'm now also enabling ATIFS only if ARB vertex shaders are enabled, pixel shaders are not disabled, GLSL is not supported and ARB_fragment_program is not supported. Thus atifs only inherits from the arb backend, which avoids the 3 shader backend structures and makes dealing with the private data easier. So I think it partially addresses your concerns. atifs still has to route vertex processing through the ARB backend, but we will need that if we implement d3d pixel shaders using atifs, unless we split up vertex and fragment shader processing backends (so separating fixed function and programmable won't help there).
Stefan Dösinger wrote:
I'll get back to you on that later tonight, need to think about this some more - way late for work right now... (thanks to you!)
However, yes, I think there needs to be a distinction between a standalone shader concept, and a pipeline concept, which is concerned with linking several multifunctional shaders together - your "uber-shader-backend". Lack of distinction on this point is causing all this confusion.
Cool, I'm looking forward to suggestions.
It looks to me as if shader_backend is being overloaded for many different purposes that are not really related to each other. Typical object structure is to group related data and functions in one place, but what's happening in shader_backend is that it has no data of its own, and it's a vtable routing between different GL extensions with the data being scattered across multiple different places.
- some functions are related to management of a single shader and its resources [state is BaseShader]
- other code manages the link between vertex and fragment shader [glsl programs stored in the Device]
- other code manages a group of 2 shaders to handle some fixed function purpose [_blt, using Device->shader_priv_data]
- now you want to replace the main fixed function fragment processing [state is in the state table]
I think it would be worthwhile to review all of this, and see if this organization makes sense. Why aren't the functions grouped together with the data? Why is some of the data in the device object? Why are functions managing different data containers in the same vtable?
I am no longer familiar with the details, but there are way too many things called "shader" by now - d3d shader, gl shader, this made-up "shader_backend" that actually does fixed function stuff and represents neither. Maybe it makes sense to create new object types - like a 'pipeline', containing a 'vertex processor' and a 'fragment processor' (not necessarily implemented via shaders). Maybe each of these should have a "fixed"-facing and a "dynamic"-facing d3d API, but attach to or detach from the pipeline in the same way. Maybe they can have different gl extension "backends" implementing each.
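Purely as a strawman, something along these lines (all struct and function names below are invented; only the IWineD3D* interface types are existing wined3d types):

    /* Strawman only -- invented names, not a patch. */
    struct vertex_processor
    {
        void (*enable)(IWineD3DDevice *device, BOOL enable);
        void (*apply_ffp_states)(IWineD3DDevice *device);   /* fixed function path */
        void (*apply_shader)(IWineD3DDevice *device,
                             IWineD3DVertexShader *shader); /* d3d shader path */
    };

    struct fragment_processor
    {
        void (*enable)(IWineD3DDevice *device, BOOL enable);
        void (*apply_ffp_states)(IWineD3DDevice *device);
        void (*apply_shader)(IWineD3DDevice *device,
                             IWineD3DPixelShader *shader);
    };

    struct pipeline
    {
        const struct vertex_processor   *vp;  /* glsl, arbvp, gl ffp, ...         */
        const struct fragment_processor *fp;  /* glsl, arbfp, atifs, nvrc, gl ffp */
        void (*link)(struct pipeline *p);     /* whatever linking the pair needs  */
    };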
Meanwhile, I've separated the ATIFS implementation and the shader backend changes in my patches. The result is attached. The patches named "1", "2", ... will be merged together to avoid regressions due to partial implementations, and they need some reordering. I've hacked that together during my train ride, so I've no idea if it really works.
Will take a look...
On Thursday, 20 March 2008 05:51:23, Ivan Gyurdiev wrote:
It looks to me as if shader_backend is being overloaded for many different purposes that are not really related to each other. Typical object structure is to group related data and functions in one place, but what's happening in shader_backend is that it has no data of its own, and it's a vtable routing between different GL extensions with the data being scattered across multiple different places.
Yes, you're right, the shader backend does a few different things right now. You have to see it as something dealing with OpenGL settings; don't see it as something talking D3D slang. Maybe renaming it to gl_pipeline_backend would be more suitable.
I think it would be worthwhile to review all of this, and see if this organization makes sense. Why aren't the functions grouped together with the data? Why is some of the data in the device object? Why are functions managing different data containers in the same vtable?
Henri once suggested making the shader_backend_t structure a COM object, that is created and destroyed by the device. That would make the state of the data it manages clearer (it belongs to the shader object), though it would add some stuff we don't need, like the IUnknown parts and refcounting. If we did that we could call the "shader function routing" "inheritance" and be well within OOP slang again.
Keep in mind though that a D3DDevice is a conglomerate of many different things (fixed function, shaders, evaluators, resource management, ...), and many of the critiques you mentioned apply to the D3D API as a whole. And even if our shader_backend_t is opengl-centric, we can't fully ignore the D3D design.
I am no longer familiar with the details, but there are way too many things called "shader" by now - d3d shader, gl shader, this made-up "shader_backend" that actually does fixed function stuff and represents neither. Maybe it makes sense to create new object types - like a 'pipeline', containing a 'vertex processor' and a 'fragment processor' (not necessarily implemented via shaders). Maybe each of these should have a "fixed"-facing and a "dynamic"-facing d3d API, but attach to or detach from the pipeline in the same way. Maybe they can have different gl extension "backends" implementing each.
My concern is that if we break up the shader structure into multiple objects (e.g. vertex shader handler, pixel shader handler, fixed function vertex replacement, fixed function pipeline replacement, depth blit handler, pipeline linker), then we'll get simple single objects, but putting these parts together adds more complexity than we save in the first place. Often in OOP programs (and other paradigms as well) you don't know what one component does without knowing the whole program.
On 20/03/2008, Stefan Dösinger stefan@codeweavers.com wrote:
Henri once suggested making the shader_backend_t structure a COM object, that is created and destroyed by the device. That would make the state of the data
I never mentioned COM there.
On Thursday, 20 March 2008 14:17:56, H. Verbeet wrote:
On 20/03/2008, Stefan Dösinger stefan@codeweavers.com wrote:
Henri once suggested making the shader_backend_t structure a COM object, that is created and destroyed by the device. That would make the state of the data
I never mentioned COM there.
On IRC I think you once mentioned making the shader backend an object, and since we don't have a C++ compiler anyway I concluded COM. (That was a follow-up discussion on the patch that moved the shader backend resources into the device instead of using static variables). I'm sorry if I put words in your mouth that you didn't say.
On 20/03/2008, Stefan Dösinger stefan@codeweavers.com wrote:
On IRC I think you once mentioned making the shader backend an object, and since we don't have a C++ compiler anyway I concluded COM. (That was a
In essence an object is just a call table with some data; you don't really need a C++ compiler for that. (And that specific discussion was about moving the private data of the shader backend out of the device and into the backend itself). COM would add a lot of stuff that we don't really need.
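A minimal sketch of what I mean (the entry points are just examples, not the existing shader_backend_t functions):

    struct shader_backend;

    /* The call table: plain function pointers, no IUnknown, no refcounting. */
    struct shader_backend_vtbl
    {
        HRESULT (*alloc_private)(struct shader_backend *backend);
        void    (*free_private)(struct shader_backend *backend);
        void    (*select)(struct shader_backend *backend, BOOL use_ps, BOOL use_vs);
    };

    /* The "object": the call table plus the data it owns. The private data
     * lives with the backend instead of being stashed in the device. */
    struct shader_backend
    {
        const struct shader_backend_vtbl *vtbl;
        void *priv;
    };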
On Thursday, 20 March 2008 15:26:07, H. Verbeet wrote:
On 20/03/2008, Stefan Dösinger stefan@codeweavers.com wrote:
On IRC I think you once mentioned making the shader backend an object, and since we don't have a C++ compiler anyway I concluded COM. (That was a
In essence an object is just a call table with some data; you don't really need a C++ compiler for that. (And that specific discussion was about moving the private data of the shader backend out of the device and into the backend itself). COM would add a lot of stuff that we don't really need.
Agreed. However, I'd rather use COM and have a few simple, unused IUnknown methods hanging around than invent yet another way to implement objects, to keep the code more readable. COM is a well-known scheme, so it will keep the code more understandable.
If we use COM and make the state table a per-shader-backend property, I'd shadow the pointer in the device though, to avoid an extra function call in each MarkStateDirty call.
Henri once suggested making the shader_backend_t structure a COM object, that is created and destroyed by the device. That would make the state of the data it manages clearer (it belongs to the shader object), though it would add some stuff we don't need, like the IUnknown parts and refcounting. If we did that we could call the "shader function routing" "inheritance" and be well within OOP slang again.
Yes, what you are doing is exactly inheritance - whether it's done through COM or not doesn't really matter.
However, since there are 3 base things to inherit from (arb, glsl, none), you're inheriting each of them and now you have 6 "backends". It seems that the base functionality is fairly independent of the derived functionality, which is why I suggested composition over inheritance here. I understand you're concerned with linking vertex and fragment together - but that should probably go in a separate object.
My concern is that if we break up the shader structure into multiple objects (e.g. vertex shader handler, pixel shader handler, fixed function vertex replacement, fixed function pipeline replacement, depth blit handler, pipeline linker), then we'll get simple single objects, but putting these parts together adds more complexity than we save in the first place. Often in OOP programs (and other paradigms as well) you don't know what one component does without knowing the whole program.
Sure, there are always tradeoffs - it's your call, I'm just offering a semi-informed opinion :)
I'm sure any approach will be successful in the end, but it will be interesting to see how code that's written now scales to GL3, D3D10, geometry shaders, and upcoming extensions. In that respect, it seems like a good idea to try an ATIfs backend now as a test of the underlying framework.
Ivan
On Friday, 21 March 2008 03:07:48, Ivan Gyurdiev wrote:
However, since there are 3 base things to inherit from (arb, glsl, none), you're inheriting each of them and now you have 6 "backends". It seems that the base functionality is fairly independent of the derived functionality, which is why I suggested composition over inheritance here. I understand you're concerned with linking vertex and fragment together - but that should probably go in a separate object.
Thinking about it a bit more I am less convinced that having a separate linker object is going to work, thanks to the different varying handling.
Shader Model 3.0 and earlier versions have different ways to pass varyings from vertex to pixel shader. In the end this means that the pixel shader is dependent on the vertex shader. If the vertex shader is a GLSL shader, the pixel shader can read either the builtin varyings or the declared ones; the GLSL vertex shader writes both and the driver (should) sort it out. If the vertex shader is not a GLSL shader, we can only write to builtin varyings, so a GLSL pixel shader has to use different code. Currently the GLSL linking code sorts it out, but we have dependencies between vertex and pixel shaders. In the end I think at least the linker object has to know way too many things about the shaders, and the shaders have to know things about each other.
(The visual tests suggest that the ATI driver on Windows does not handle pre-3.0 <-> 3.0 shader varying passing properly, but that behavior is in violation of the refrast. While that suggests that games are unlikely to depend on this tricky behavior, I do not like the idea of justifying our design with an ATI driver bug.)
Shader Model 3.0 and earlier versions have different ways to pass varyings from vertex to pixel shader. In the end this means that the pixel shader is dependent on the vertex shader. If the vertex shader is a GLSL shader, the pixel shader can read either the builtin varyings or the declared ones; the GLSL vertex shader writes both and the driver (should) sort it out. If the vertex shader is not a GLSL shader, we can only write to builtin varyings, so a GLSL pixel shader has to use different code. Currently the GLSL linking code sorts it out, but we have dependencies between vertex and pixel shaders. In the end I think at least the linker object has to know way too many things about the shaders, and the shaders have to know things about each other.
You mean the code in generate_param_reorder_function? That's exactly the kind of thing that should go in a "linker" object - it takes as input a vertex and pixel shader, and handles how they talk to each other. The "semantics" map is the interface between them that describes varyings - of course the interface has to be available to the linker to read the vertex outputs and make decisions as to the pixel inputs. Your "linker" could be backend-independent and try to support odd combinations like software vs / glsl ps or something like that, or it could be glsl-specific, and basically know how to link one glsl vertex shader to one glsl pixel shader (you could use a different linker for other things).
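For example (invented names, just to sketch the shape of such a linker object):

    /* Invented sketch of a separate linker: it only looks at the two
     * shaders' semantics and decides how varyings are passed between them. */
    struct shader_linker
    {
        void (*link)(struct shader_linker *linker,
                     IWineD3DVertexShader *vshader,  /* NULL if the vertex side is ffp   */
                     IWineD3DPixelShader *pshader);  /* NULL if the fragment side is ffp */
    };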
Why is the vertex/pixel linkage related to how you generated the vertex (or pixel) shader? Does it matter if your shader was created from the fixed or programmable interface on the d3d side in order to link it to the other one?
How will geometry shaders fit into the picture?
Ivan
On Friday, 21 March 2008 16:22:41, Ivan Gyurdiev wrote:
Shader Model 3.0 and earlier versions have different ways to pass varyings from vertex to pixel shader. In the end this means that the pixel shader is dependent on the vertex shader. If the vertex shader is a GLSL shader, the pixel shader can read either the builtin varyings or the declared ones; the GLSL vertex shader writes both and the driver (should) sort it out. If the vertex shader is not a GLSL shader, we can only write to builtin varyings, so a GLSL pixel shader has to use different code. Currently the GLSL linking code sorts it out, but we have dependencies between vertex and pixel shaders. In the end I think at least the linker object has to know way too many things about the shaders, and the shaders have to know things about each other.
You mean the code in generate_param_reorder_function?
No, I mean pshader_glsl_input_pack, or rather its callers. As for generate_param_reorder_function, you're perfectly right, it is part of the linking code.
Why is the vertex/pixel linkage related to how you generated the vertex (or pixel) shader? Does it matter if your shader was created from the fixed or programmable interface on the d3d side in order to link it to the other one?
It matters whether or not the vertex shader writes to the implicit opengl varyings (gl_TexCoord[], gl_Color, gl_SecondaryColor, ... in GLSL terms, output.* in ARB terms) or custom varyings ("varying vec4 foo" in GLSL). If both shaders are GLSL, everything's fine: the vertex shader writes to both and lets the linker and the driver sort things out. But if the vertex processing is non-GLSL (fixed function, ARB or whatever), and the D3D pixel shader needs custom varyings (3.0 shader), then the GLSL fragment shader has to fetch them from the implicit varyings.
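To make that concrete, these are the two kinds of input the generated GLSL fragment code ends up reading; a simplified illustration with made-up names, not the code wined3d actually emits:

    /* Vertex side is a GLSL shader: it can write a declared varying, so
     * the generated fragment shader reads that. */
    static const char ps_input_from_declared_varying[] =
        "varying vec4 ps_in0;\n"
        "void main(void)\n"
        "{\n"
        "    gl_FragColor = ps_in0;\n"
        "}\n";

    /* Vertex side is fixed function or an ARB program: only the builtin
     * varyings are written, so the same D3D input has to be fetched from
     * gl_TexCoord[] / gl_Color / gl_SecondaryColor instead. */
    static const char ps_input_from_builtin_varying[] =
        "void main(void)\n"
        "{\n"
        "    gl_FragColor = gl_TexCoord[0];\n"
        "}\n";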
On Friday, 21 March 2008 03:07:48, Ivan Gyurdiev wrote:
However, since there are 3 base things to inherit from (arb, glsl, none), you're inheriting each of them and now you have 6 "backends".
That's what I got rid of in the newest patches I sent to this list.
On Wednesday, 19 March 2008 07:46:07, Ivan Gyurdiev wrote:
The way this patchset is heading is: "A (gl shader backend), which implements both d3d shader and ffp pipeline, depending on the circumstances, through a mixed api"
Yes, that's the only way that will work. The shader backend ultimately talks to opengl; we have to design it around the requirements of the opengl (extension) APIs. Trying to tie the shader backend's design to d3d will not work properly.
All in all, we can only have an ffp replacement shader OR a d3d shader active. Some component has to sort out which is active, and given that in GLSL we'll need to link vertex+fragment+geometry shaders, the shader backend is the best place to figure that out IMHO.