[PATCH 0/1] MR5061: d3d10/tests: Avoid implicit cast changing value.
Fixes clang warning:

```
../dlls/d3d10/tests/effect.c:9150:33: warning: implicit conversion from 'unsigned int' to 'float' changes value from 4294967295 to 4294967296 [-Wimplicit-const-int-float-conversion]
    ok(blend_factor[idx] == UINT_MAX, "Got unexpected blend_factor[%u] %.8e.\n", idx, blend_factor[idx]);
                         ~~ ^~~~~~~~
../include/msvcrt/limits.h:27:21: note: expanded from macro 'UINT_MAX'
#define UINT_MAX 0xffffffffU
                 ^~~~~~~~~~~
```

--
https://gitlab.winehq.org/wine/wine/-/merge_requests/5061
From: Jacek Caban <jacek(a)codeweavers.com>
---
 dlls/d3d10/tests/effect.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

```
diff --git a/dlls/d3d10/tests/effect.c b/dlls/d3d10/tests/effect.c
index 35f99fc7d46..d981be65e20 100644
--- a/dlls/d3d10/tests/effect.c
+++ b/dlls/d3d10/tests/effect.c
@@ -9147,7 +9147,7 @@ static void test_effect_value_expression(void)
     ID3D10Device_OMGetBlendState(device, &blend_state, blend_factor, &sample_mask);
     ok(!blend_state, "Unexpected blend state %p.\n", blend_state);
     for (idx = 0; idx < ARRAY_SIZE(blend_factor); ++idx)
-        ok(blend_factor[idx] == UINT_MAX, "Got unexpected blend_factor[%u] %.8e.\n", idx, blend_factor[idx]);
+        ok(blend_factor[idx] == 1ull << 32, "Got unexpected blend_factor[%u] %.8e.\n", idx, blend_factor[idx]);
     ok(!sample_mask, "Got unexpected sample_mask %#x.\n", sample_mask);

     /* movc */
```

--
GitLab
https://gitlab.winehq.org/wine/wine/-/merge_requests/5061
So, what's happening is that UINT_MAX can't be represented exactly as a float, and the closest representable value happens to be UINT_MAX + 1? If so, I'd expect the same to happen in the implementation (including native), and I'd rather the test lean the same way, e.g. by explicitly casting UINT_MAX to float instead.

--
https://gitlab.winehq.org/wine/wine/-/merge_requests/5061#note_60712
participants (3)
- Jacek Caban
- Jacek Caban (@jacek)
- Matteo Bruni (@Mystral)