2015-03-02 22:29 GMT+01:00 Stefan Dösinger stefan@codeweavers.com:
/* The input data was designed for D3DFMT_L6V5U5 and then transferred
* to the other formats because L6V5U5 is the lowest precision format.
* It tests the extreme values -1.0 (-16) and 1.0 (15) for U/V and
* 0.0 (0) and 1.0 (63) for L, the neutral point 0 as well as -1 and 1.
* Some other intermediate values are tested too. For the 8 bit formats
* the equivalents of -1 and 1 are -8 and 8, that's why there is no
* 0xffff input for these formats. The input value -15 (min + 1) is
* tested as well. Unlike what OpenGL 4.4 says in section 2.3.4.1, this
* value does not represent -1.0. For 8 bit singed data -127 is tested
* in the Q channel of D3DFMT_Q8W8V8U8. Here d3d seems to follow the
* rules from the GL spec. AMD's r200 is broken though and returns a
* value < -1.0 for -128. The difference between using -127 or -128 as
* the lowest possible value gets lost in the slop of 1 though. */
For the 8-bit formats you mean that 1 is not representable in the signed U/V components, correct? Also "singed" is a typo, but otherwise this looks good to me.
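(As an aside, a minimal C sketch of the GL 4.4 section 2.3.4.1 rule the comment refers to; the helper name is made up and this is only an illustration, not code from the patch. For b-bit signed data GL maps both the minimum value and minimum + 1 to -1.0, which is exactly where the comment says d3d deviates for the 5 bit formats.)

static float snorm_to_float_gl44(int c, unsigned int b)
{
    /* GL 4.4, section 2.3.4.1: f = max(c / (2^(b-1) - 1), -1.0). For b = 5
     * both -16 and -15 decode to -1.0; the patch comment above observes that
     * d3d decodes -15 to something larger. */
    float f = (float)c / (float)((1u << (b - 1)) - 1u);
    return f < -1.0f ? -1.0f : f;
}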
static const USHORT content_v8u8[4][4] =
{
{0x0000, 0x7f7f, 0x8880, 0x0000},
{0x0080, 0x8000, 0x7f00, 0x007f},
{0x193b, 0xe8c8, 0x0808, 0xf8f8},
{0x4444, 0xc0c0, 0xa066, 0x22e0},
};
static const DWORD content_v16u16[4][4] =
{
{0x00000000, 0x7fff7fff, 0x88008000, 0x00000000},
{0x00008000, 0x80000000, 0x7fff0000, 0x00007fff},
{0x19993bbb, 0xe800c800, 0x08880888, 0xf800f800},
{0x44444444, 0xc000c000, 0xa0006666, 0x2222e000},
};
static const DWORD content_q8w8v8u8[4][4] =
{
{0x00000000, 0xff7f7f7f, 0x7f008880, 0x817f0000},
{0x10000080, 0x20008000, 0x30007f00, 0x4000007f},
{0x5020193b, 0x6028e8c8, 0x70020808, 0x807ff8f8},
{0x90414444, 0xa000c0c0, 0x8261a066, 0x834922e0},
};
static const DWORD content_x8l8v8u8[4][4] =
{
{0x00000000, 0x00ff7f7f, 0x00008880, 0x00ff0000},
{0x00000080, 0x00008000, 0x00007f00, 0x0000007f},
{0x0041193b, 0x0051e8c8, 0x00040808, 0x00fff8f8},
{0x00824444, 0x0000c0c0, 0x00c2a066, 0x009222e0},
};
/* D3DFMT_L6V5U5 has poor precision on some GPUs. On a GeForce 7 the highest U and V value (15)
* results in the output color 0xfb, which is 4 steps away from the correct value 0xff. It is
* not the ~0xf0 you'd get if you blindly left-shifted the 5 bit values to form an 8 bit value
* though.
*
* There may also be an off-by-one bug involved: The value -7 should result in the output 0x47,
* but ends up as 0x4d. Likewise, -3 becomes 0x6e instead of 0x67. Those values are close to
* the proper results of -6 and -2. */
static const USHORT content_l6v5u5[4][4] =
{
{0x0000, 0xfdef, 0x0230, 0xfc00},
{0x0010, 0x0200, 0x01e0, 0x000f},
{0x4067, 0x53b9, 0x0421, 0xffff}, /* 0x4067, 0x53b9, 0x0421, 0xffff */
{0x8108, 0x0318, 0xc28c, 0x909c}, /* 0x8108, 0x0318, 0xc28c, 0x909c */
};
What did you mean to put in the comments here? Those currently there match the actual content of the array.
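(For reference, assuming the usual D3DFMT_L6V5U5 layout with U in bits 0-4, V in bits 5-9 and L in bits 10-15, a decode sketch; the helper is illustrative and not part of the patch. E.g. 0xfdef decodes to L = 63, V = 15, U = 15, the all-1.0 case from the first row, and 0x0230 to L = 0, V = -15, U = -16.)

static void unpack_l6v5u5(unsigned short t, int *u, int *v, unsigned int *l)
{
    *u = t & 0x1f;
    if (*u & 0x10)
        *u -= 32;      /* sign-extend the 5 bit U component */
    *v = (t >> 5) & 0x1f;
    if (*v & 0x10)
        *v -= 32;      /* sign-extend the 5 bit V component */
    *l = t >> 10;      /* 6 bit unsigned luminance */
}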
for (y = 0; y < 4; y++)
{
for (x = 0; x < tests[j].width; x++)
{
expected_color = expected_colors[y][x];
if (!formats[i].blue)
expected_color |= 0x000000ff;
This is clearly my brain failing right now, but why is blue expected to be 0xff when the texture component is missing?
hr = IDirect3DTexture9_GetSurfaceLevel(texture, 0, &dst_surface);
ok(SUCCEEDED(hr), "Failed to get surface, hr %#x.\n", hr);
hr = IDirect3DTexture9_GetSurfaceLevel(texture_sysmem, 0, &src_surface);
ok(SUCCEEDED(hr), "Failed to get surface, hr %#x.\n", hr);
hr = IDirect3DDevice9_UpdateSurface(device, src_surface,
&tests[j].src_rect, dst_surface, &tests[j].dst_point);
ok(SUCCEEDED(hr), "Failed to update surface, hr %#x.\n", hr);
IDirect3DSurface9_Release(dst_surface);
IDirect3DSurface9_Release(src_surface);
hr = IDirect3DDevice9_Clear(device, 0, NULL, D3DCLEAR_TARGET, 0x00003300, 0.0f, 0);
ok(SUCCEEDED(hr), "Failed to clear, hr %#x.\n", hr);
hr = IDirect3DDevice9_BeginScene(device);
ok(SUCCEEDED(hr), "Failed to begin scene, hr %#x.\n", hr);
hr = IDirect3DDevice9_DrawPrimitiveUP(device, D3DPT_TRIANGLESTRIP, 2, &quad[0], sizeof(*quad));
ok(SUCCEEDED(hr), "Failed to draw, hr %#x.\n", hr);
hr = IDirect3DDevice9_EndScene(device);
ok(SUCCEEDED(hr), "Failed to end scene, hr %#x.\n", hr);
for (y = 0; y < 4; y++)
{
for (x = 0; x < tests[j].width; x++)
{
if (tests[j].width == 4)
expected_color = expected_colors2[y][x];
else
expected_color = expected_colors3[y];
if (!formats[i].blue)
expected_color |= 0x000000ff;
color = getPixelColor(device, 80 + 160 * x, 60 + 120 * y);
ok(color_match(color, expected_color, 1)
|| color_match(color, expected_color, formats[i].slop_broken),
"Expected color 0x%08x, got 0x%08x, format %s, location %ux%u.\n",
expected_color, color, formats[i].name, x, y);
}
}
Cool additional test for UpdateSurface, thank you for that. Here and in the previous test I guess you might want to put the slop_broken check into a broken(), unless that also fails on Linux.
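Something like this, assuming the large slop is only ever needed on Windows drivers (untested, just to show what I mean):

ok(color_match(color, expected_color, 1)
        || broken(color_match(color, expected_color, formats[i].slop_broken)),
        "Expected color 0x%08x, got 0x%08x, format %s, location %ux%u.\n",
        expected_color, color, formats[i].name, x, y);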
2015-03-03 12:22 GMT+01:00 Matteo Bruni matteo.mystral@gmail.com:
For the 8-bit formats you mean that 1 is not representable in the signed U/V components, correct? Also "singed" is a typo, but otherwise this looks good to me.
1 and -1 (the signed input values, not the post-scaled -1.0 and 1.0) are missing from U and V, yeah. The reason is that this way I can reuse the expected color array. Q in D3DFMT_Q8W8V8U8 tests some 8-bit-specific values. I'll see if I can clarify that comment a bit.
{0x4067, 0x53b9, 0x0421, 0xffff}, /* 0x4067, 0x53b9, 0x0421, 0xffff */
{0x8108, 0x0318, 0xc28c, 0x909c}, /* 0x8108, 0x0318, 0xc28c, 0x909c */
};
What did you mean to put in the comments here? Those currently there match the actual content of the array.
I thought I removed those comments. They were a sort of backup of the values while I experimented with the problem GeForce 7 cards have.
This is clearly my brain failing right now, but why is blue expected to be 0xff when the texture component is missing?
All formats return 1.0 for channels that aren't defined by the format. E.g. D3DFMT_G32R32F returns R, G, 1.0, 1.0. That differs from GL, which returns 1.0 for alpha and 0.0 for color channels.
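(In GL terms, one way to get the d3d behavior for such formats would be a texture swizzle that forces the undefined channels to 1.0, assuming ARB_texture_swizzle / GL 3.3 is available. Just an illustration, not necessarily how wined3d handles it:)

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_B, GL_ONE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_A, GL_ONE);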
Cool additional test for UpdateSurface, thank you for that. Here and in the previous test I guess you might want to put the slop_broken check into a broken(), unless that also fails on Linux.
I also think I fixed that before sending. I wonder if I forgot a git commit --amend...