On 5 October 2016 at 17:33, Nikolay Sivov <nsivov@codeweavers.com> wrote:
> +static inline BOOL float_eq(FLOAT left, FLOAT right)
> +{
> +    int x = *(int *)&left;
> +    int y = *(int *)&right;
> +
> +    if (x < 0)
> +        x = INT_MIN - x;
> +    if (y < 0)
> +        y = INT_MIN - y;
> +
> +    return abs(x - y) <= 8;
> +}
Not wrong per se, but in the D3D tests that's generally called "compare_float()" and takes an explicit "ulps" argument. Is "8" based on test results or a guess? It seems fairly large for something that shouldn't be a particularly complicated calculation. Also, "float" rather than "FLOAT".
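For reference, the helper the d3d tests use looks roughly like this (quoting from memory, so treat it as a sketch rather than the exact source; it needs <limits.h> for INT_MIN and <stdlib.h> for abs()):

static BOOL compare_float(float f, float g, unsigned int ulps)
{
    /* Reinterpret the floats as integers. */
    int x = *(int *)&f;
    int y = *(int *)&g;

    /* Map negative values so that adjacent representable floats differ by 1. */
    if (x < 0)
        x = INT_MIN - x;
    if (y < 0)
        y = INT_MIN - y;

    /* The values match if they are within "ulps" representable floats of each other. */
    if (abs(x - y) > ulps)
        return FALSE;

    return TRUE;
}

The point of the explicit "ulps" argument is that each caller states how much error that particular calculation is expected to accumulate; for a simple DPI scale you'd expect something close to 0, not 8.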
> +    /* Both pixel size and DIP size are specified. */
> +    set_size_u(&pixel_size, 128, 128);
> +    hr = ID2D1HwndRenderTarget_CreateCompatibleRenderTarget(hwnd_rt, &size, &pixel_size, NULL,
> +            D2D1_COMPATIBLE_RENDER_TARGET_OPTIONS_NONE, &rt);
> +    ok(SUCCEEDED(hr), "Failed to create render target, hr %#x.\n", hr);
> +
> +    /* Doubled pixel size dimensions with the same DIP size give doubled dpi. */
> +    ID2D1BitmapRenderTarget_GetDpi(rt, dpi2, dpi2 + 1);
> +    ok(dpi[0] == dpi2[0] / 2.0f && dpi[1] == dpi2[1] / 2.0f, "Got dpi mismatch.\n");
As an aside, the reason some of the existing tests use different DPI settings for the x and y axes is to avoid accidentally swapping them in the implementation.
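Concretely, something along these lines (just a sketch reusing the names from the quoted test plus the compare_float() above, not the exact code from the existing tests; the 192/96 values are only an example):

    float dpi_x, dpi_y;

    /* Deliberately different x/y DPI values, so an implementation that swaps
     * the axes fails the test. */
    ID2D1HwndRenderTarget_SetDpi(hwnd_rt, 192.0f, 96.0f);

    /* With both desired sizes NULL, the compatible target should end up with
     * the same size and DPI as the parent target. */
    hr = ID2D1HwndRenderTarget_CreateCompatibleRenderTarget(hwnd_rt, NULL, NULL, NULL,
            D2D1_COMPATIBLE_RENDER_TARGET_OPTIONS_NONE, &rt);
    ok(SUCCEEDED(hr), "Failed to create render target, hr %#x.\n", hr);

    ID2D1BitmapRenderTarget_GetDpi(rt, &dpi_x, &dpi_y);
    ok(compare_float(dpi_x, 192.0f, 0) && compare_float(dpi_y, 96.0f, 0),
            "Got unexpected dpi %.8e, %.8e.\n", dpi_x, dpi_y);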