On Fri Oct 20 16:14:23 2023 +0000, Giovanni Mascellani wrote:
> Hi, I haven't reviewed in detail yet, because my feeling is that right now we're adding features to the shader runner without a clear idea of what it should eventually look like, so I'd like to have a discussion about that first.
>
> My personal idea is that eventually all the components in the shader runner pipeline should be as configurable as possible. I had already written something in the [vkd3d-todo](https://wiki.winehq.org/Vkd3d-todo), but let me repeat that here. We basically have the following variables:
>
> - the library used to compile the shaders: vkd3d-shader, d3dcompiler_xx.dll or dxcompiler.dll (in particular, dxcompiler.dll would no longer get the special treatment it has now);
> - the Shader Model to target;
> - the API to use to run the shader (d3d9, d3d10, d3d11, d3d12, Vulkan, or even none if we just want to test the compiler);
> - when applicable, whether to execute the shaders using the native implementation or vkd3d;
> - possibly, one day, the library to use to recompile the shaders (this will only make sense on macOS once we support the Apple shader compiler for Metal; in all the other cases the choice is forced).
>
> Ideally there would be a function in the shader runner that takes these options and runs a .shader_test file with that configuration. The shader runner executable would then accept command line options that describe the desired configuration, or run a number of sensible configurations depending on which are available (e.g., whether this is a Linux or MinGW build: clearly native libraries aren't available on Linux, with the exception of dxcompiler.dll). This latter mode is what would be run on the CI.
>
> By multiplying the number of available configurations, the language to define which outcomes are expected might become somewhat more complicated. The usual convention should remain in force: the native implementations are assumed to be the "ground truth" against which `fail` and `notimpl` (and similar ones, if needed) are gauged, while `todo` is used to measure the difference of any other configuration with respect to native. It might at some point turn out to be necessary to evaluate `todo` on something more complicated than just the Shader Model, which is just one of the variables above, but I hope this complexity will be manageable.
>
> This is more of a side thing, but if we eventually come to the point I'm describing, the MinGW tests will likely be strictly more powerful than the crosstests, so the crosstests could be dropped. At least for the shader runner, but probably similar considerations apply to the other crosstests.
>
> What's your view on that? I guess @zfigura, @Mystral and @hverbeet might have something to say too.
Hi, I think that this MR helps move in this general direction.
Here is my opinion on the components you mentioned, along with an explanation of why these patches (and the following ones) help separate and configure some of them.
> - the Shader Model to target;
I think that here we mean to target a **range** of shader models instead of a specific one.
Currently our policy is to run each test only with the minimum shader model in the intersection of the models the runner supports and the range given by the [require] directives, which I think is a good thing compared to running the tests with all the shader models in that intersection (which would be costly).
However, the current implementation doesn't give us the ability to choose to run all the shader models, or something more practical, like running the lowest SM1 shader model, the lowest SM4 shader model, and the lowest SM6 shader model (if there were a backend that supported them all).
3/4 intends to reinterpret the `minimum_shader_model` and `maximum_shader_model` arguments as "the range of shader models for which we are interested in running a test on this call" instead of "the range of shader models this runner supports". This way we can call run_shader_tests() 3 times to achieve the example above, with that hypothetical backend:
```
run_shader_tests(runner, SHADER_MODEL_2_0, SHADER_MODEL_3_0);
run_shader_tests(runner, SHADER_MODEL_4_0, SHADER_MODEL_5_1);
run_shader_tests(runner, SHADER_MODEL_6_0, SHADER_MODEL_6_0);
```
Or even running all the profiles:
```
for (m = SHADER_MODEL_2_0; m <= SHADER_MODEL_6_0; ++m)
    run_shader_tests(runner, m, m);
```
So for each call, the backend runs every test using the lowest shader model it supports, within the provided range, that matches the [require] directive requirements. If there is no such shader model, it just skips the test.
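In pseudocode, the per-test selection would be something like this (a rough sketch; `runner_supports_shader_model()` is a hypothetical helper, not actual shader runner code):
```
/* Sketch of the selection logic described above, assuming the project's
 * enum shader_model and a hypothetical capability query. */
static bool select_shader_model(const struct shader_runner *runner,
        enum shader_model range_min, enum shader_model range_max,
        enum shader_model require_min, enum shader_model require_max,
        enum shader_model *selected)
{
    enum shader_model m;

    /* Clamp the caller's range to the [require] directives. */
    if (require_min > range_min)
        range_min = require_min;
    if (require_max < range_max)
        range_max = require_max;

    /* Pick the lowest model in the resulting range that the backend supports. */
    for (m = range_min; m <= range_max; ++m)
    {
        if (runner_supports_shader_model(runner, m))
        {
            *selected = m;
            return true;
        }
    }

    /* No usable shader model: the test is skipped. */
    return false;
}
```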
> - the API to use to run the shader (d3d9, d3d10, d3d11, d3d12, Vulkan, or even none if we just want to test the compiler);
4/4 adds a `check_syntax` flag that controls whether the [shader] directives are compiled, and supports passing a NULL `runner->ops`, in which case the other directives are not run. This way compilation can be tested separately from backend execution (which also requires compilation, but for the `draw quad`s in [test] directives).
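Roughly, the intent is something like the following (a simplified sketch; `struct directive` and the helpers are stand-ins for the actual code paths, not the literal patch):
```
/* Simplified sketch of how the two switches interact. */
static void handle_directive(struct shader_runner *runner, const struct directive *d)
{
    if (d->is_shader)
    {
        /* [shader] directives are only compiled when syntax checking is on. */
        if (runner->check_syntax)
            compile_shader(runner, d->source, d->model);
    }
    else if (runner->ops)
    {
        /* With a NULL ops table there is no backend to execute on, so
         * [test] directives (draw quad, probe, ...) are not run. */
        run_test_directive(runner, d);
    }
}
```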
The reinterpretation of the `minimum_shader_model` and `maximum_shader_model` arguments also helps in this regard, because we can now test compilation starting from `minimum_shader_model`, instead of always starting from shader model 4 as we do now, which I do in this patch from part 2:
[63e404fecdea34cc6f1bb32f014b9f7343aeb3cd](https://gitlab.winehq.org/fcasas/vkd3d/-/commit/63e404fecdea34cc6f1bb32f014b...)
> - the library used to compile the shaders: vkd3d-shader, d3dcompiler_xx.dll or dxcompiler.dll (in particular, dxcompiler.dll would no longer get the special treatment it has now);
I imagine that in the future we could expect a `struct shader_compiler` (with its own `compiler->ops`) as an additional argument to specify the compilation strategy, in case several compilers are available. Unlike `runner->ops`, this one would not be optional, because we would be decoupling the compilation code from each backend.
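It could look roughly like this (entirely hypothetical at this point; `struct shader_code` stands in for whatever blob type we would end up using):
```
/* One possible shape for a decoupled compiler interface. */
struct shader_compiler;

struct shader_compiler_ops
{
    bool (*compile)(struct shader_compiler *compiler, const char *source,
            enum shader_model model, struct shader_code *out);
};

struct shader_compiler
{
    /* Unlike runner->ops, this must not be NULL. */
    const struct shader_compiler_ops *ops;
    enum shader_model minimum_shader_model;
    enum shader_model maximum_shader_model;
};
```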
> Ideally there would be a function in the shader runner that takes these options and runs a .shader_test file with that configuration. The shader runner executable would then accept command line options that describe the desired configuration, or run a number of sensible configurations depending on which are available (e.g., whether this is a Linux or MinGW build: clearly native libraries aren't available on Linux, with the exception of dxcompiler.dll). This latter mode is what would be run on the CI.
Yep, I agree that this would be ideal.
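To make it concrete, I picture a single entry point along these lines (all the names here are made up for illustration):
```
/* Hypothetical entry point taking the whole configuration at once. */
struct shader_test_config
{
    enum compile_backend compiler;  /* vkd3d-shader, d3dcompiler_xx or dxcompiler */
    enum shader_model minimum_shader_model;
    enum shader_model maximum_shader_model;
    enum execute_backend api;       /* d3d9, d3d10, d3d11, d3d12, Vulkan or none */
    bool use_native;                /* native implementation vs. vkd3d */
};

void run_shader_test_file(const char *filename, const struct shader_test_config *config);
```
The executable would then just translate its command line options (or, on the CI, the set of available configurations) into a list of such configurations.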
> By multiplying the number of available configurations, the language to define which outcomes are expected might become somewhat more complicated. The usual convention should remain in force: the native implementations are assumed to be the "ground truth" against which `fail` and `notimpl` (and similar ones, if needed) are gauged, while `todo` is used to measure the difference of any other configuration with respect to native. It might at some point turn out to be necessary to evaluate `todo` on something more complicated than just the Shader Model, which is just one of the variables above, but I hope this complexity will be manageable.
The following patches in my master6i branch (for the second part of this series) allow for more expressive qualifiers, including AND (e.g. `todo(sm>=4,sm<6)`) and OR (e.g. `todo(sm<4) todo(sm>=6)`) expressions; see the example after these commit links:
[ff960463c78be983adbcf805cfd722b40002a214](https://gitlab.winehq.org/fcasas/vkd3d/-/commit/ff960463c78be983adbcf805cfd7...)
[e767803dffb8ab565c9e246b091881f6e91c6f86](https://gitlab.winehq.org/fcasas/vkd3d/-/commit/e767803dffb8ab565c9e246b0918...)
[3a7a316fe6bf8c188c4ced0faa32ce898c222009](https://gitlab.winehq.org/fcasas/vkd3d/-/commit/3a7a316fe6bf8c188c4ced0faa32...)
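For illustration, a [test] directive could then combine qualifiers like this (a hypothetical example; a comma inside the parentheses means AND, while repeating the qualifier means OR):
```
[test]
todo(sm>=4,sm<6) draw quad
todo(sm<4) todo(sm>=6) draw quad
```
Here the first draw would be expected to fail only on shader models 4 and 5, and the second one on models below 4 or from 6 upwards.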
Maybe whether to ignore these qualifiers should be specified in the `struct shader_runner` (and in the hypothetical `struct shader_compiler`). For instance, dxcompiler.dll should ignore all todo() qualifiers, as it does now.
> This is more of a side thing, but if we eventually come to the point I'm describing, the MinGW tests will likely be strictly more powerful than the crosstests, so the crosstests could be dropped. At least for the shader runner, but probably similar considerations apply to the other crosstests.
I do like the ability to quickly compile the shader runner on my machine and pass it, together with the tests, to the minitestbot, though.