On Thu, Dec 7, 2017 at 4:04 PM, Henri Verbeet <hverbeet(a)gmail.com> wrote:
On 7 December 2017 at 14:44, Józef Kucia <jkucia(a)codeweavers.com> wrote:
+    static const unsigned int bias_tests[] =
+    {
+        -10000, -1000, -100, -10, -9, -8, -7, -6, -5, -4, -3, -2, -1,
+        1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 50, 100, 200, 500, 1000, 10000,
+    };

I think it would be more appropriate for these to be signed integers.
+    static const float quad_slopes[] =
+    {
+        0.0f, 0.5f, 1.0f
+    };
...
+    m = quad_slopes[i] / texture_desc.Height;
+    m = sqrtf(m * m);

sqrtf(m * m) = fabsf(m) = m
Right?
+            bias = rasterizer_desc.SlopeScaledDepthBias * m;
+            get_texture_readback(texture, 0, &rb);
+            for (y = 0; y < texture_desc.Height; ++y)
+            {
+                depth = min(max(0.0f, depth_values[y] + bias), 1.0f);
+                switch (format)
+                {
+                    case DXGI_FORMAT_D32_FLOAT:
+                        data = get_readback_float(&rb, 0, y);
+                        ok(compare_float(data, depth, 2),
+                                "Got depth %.8e, expected %.8e.\n", data, depth);

This needs a tolerance of 64 to pass here on Intel SKL.