Wine-Bug: https://bugs.winehq.org/show_bug.cgi?id=57665
---
The issue is that The Medium launcher uses a dialog window procedure, and implements its background by calling StretchBlt on WM_PAINT in the dialog procedure, which runs before the window message loop.
It then calls InvalidateRect(hwnd, NULL, 0) itself, which queues another WM_PAINT, but with only the RDW_INVALIDATE flag.
Next, when the window message loop runs, the WM_PAINT message is processed as it should be, but because we've invalidated the window with RDW_ERASE ourselves from the expose event, the WM_NCPAINT handler erases the entire window, clearing the pixels the launcher has just painted.
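For illustration, a minimal sketch of that pattern (hypothetical code, not the launcher's; `background_dc`, `bg_width` and `bg_height` are assumed to hold a pre-rendered background bitmap):

```c
#include <windows.h>

static HDC background_dc;        /* assumed: DC holding the background bitmap */
static int bg_width, bg_height;  /* assumed: bitmap dimensions */

static INT_PTR CALLBACK launcher_dlgproc( HWND hwnd, UINT msg, WPARAM wp, LPARAM lp )
{
    if (msg == WM_PAINT)
    {
        PAINTSTRUCT ps;
        RECT rc;
        HDC dc = BeginPaint( hwnd, &ps );
        GetClientRect( hwnd, &rc );
        /* paint the background by stretching the cached bitmap */
        StretchBlt( dc, 0, 0, rc.right, rc.bottom, background_dc,
                    0, 0, bg_width, bg_height, SRCCOPY );
        EndPaint( hwnd, &ps );
        /* re-invalidate without erasing: queues another WM_PAINT
         * with RDW_INVALIDATE only */
        InvalidateRect( hwnd, NULL, FALSE );
        return TRUE;
    }
    return FALSE;
}
```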
This regressed when the window surface clipping region logic changed, as the expose event handling previously did not call `NtUserRedrawWindow` on windows with a surface and without a clipping region. The clipping region was also previously not always set, or set later; we've been setting it more consistently since 51b16963f6e0e8df43118deac63f640aee4698b7, even when it matches the window rect, for compatibility with some old wineandroid logic (where I believe it was used to clip window surfaces to their proper sizes).
--
v2: winex11: Only erase the desktop window pixels on expose events.
https://gitlab.winehq.org/wine/wine/-/merge_requests/7157
This ensures that the induced non-client redraws are sent and executed
right away, before the WM_PAINT messages are sent.
Wine-Bug: https://bugs.winehq.org/show_bug.cgi?id=57665
---
The issue is that The Medium launcher uses a dialog window procedure, and implements its background by calling StretchBlt on WM_PAINT in the dialog procedure, which runs before the window message loop.
It then calls InvalidateRect(hwnd, NULL, 0) itself, which queues another WM_PAINT, but with only the RDW_INVALIDATE flag.
Next, when the window message loop runs, the WM_PAINT message is processed as it should be, but because we've invalidated the window with RDW_ERASE ourselves from the expose event, the WM_NCPAINT handler erases the entire window, clearing the pixels the launcher has just painted.
Using RDW_ERASENOW makes sure that the erase happens right away on expose, before the WM_PAINT message is processed.
This regressed when the window surface clipping region logic changed, as the expose event handling previously did not call `NtUserRedrawWindow` on windows with a surface and without a clipping region. The clipping region was also previously not always set, or set later; we've been setting it more consistently since 51b16963f6e0e8df43118deac63f640aee4698b7, even when it matches the window rect, for compatibility with some old wineandroid logic (where I believe it was used to clip window surfaces to their proper sizes).
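As a rough sketch of what the change amounts to (not the exact MR diff), the expose handling redraws with RDW_ERASENOW added to the flags:

```c
/* sketch only: the flag set is an assumption based on the description above */
static void redraw_exposed_window( HWND hwnd, const RECT *expose_rect )
{
    UINT flags = RDW_INVALIDATE | RDW_ERASE | RDW_FRAME | RDW_ALLCHILDREN
                 | RDW_ERASENOW;  /* perform the erase immediately, before
                                   * the queued WM_PAINT is processed */
    NtUserRedrawWindow( hwnd, expose_rect, 0, flags );
}
```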
--
https://gitlab.winehq.org/wine/wine/-/merge_requests/7157
The tests fail to run for me with your command line, most likely due to missing dependencies in my build. I'll work on addressing those to see if I can get the tests to run.
./wine programs/winetest/x86_64-windows/winetest.exe -o - cmd.exe
0050:err:vulkan:vulkan_init_once Wine was built without Vulkan support.
00c4:err:ntoskrnl:ZwLoadDriver failed to create driver L"\\Registry\\Machine\\System\\CurrentControlSet\\Services\\winebth": c00000e5
003c:fixme:service:scmdatabase_autostart_services Auto-start service L"winebth" failed to start: 1359
wine: failed to open "programs/winetest/x86_64-windows/winetest.exe": c0000135
--
https://gitlab.winehq.org/wine/wine/-/merge_requests/7131#note_92190
Currently the logic for syncing certificates with the host effectively assumes that all root certificates come from the host, and disregards certificates added by the app (those erroneously get deleted during the host sync).
This fixes Battle.net being unable to complete game installs / updates after the Battle.net update on 14 Jan 2025.
This is obviously not for the code freeze.
The issue is that Battle.net fails to verify a certificate chain which depends on an ephemeral certificate marked valid for DNS:localbattle.net (which resolves to 127.0.0.1) server auth. The certificate is self-signed and is added to the system root store by the Battle.net setup (and possibly again later if it is missing). Adding the certificate to the system root store works per se, but upon syncing the host root certificates (in another process, or when the prefix is started again) the certificate gets wiped from the registry.
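For context, this is roughly how a setup program can add such a certificate to the system root store (illustrative sketch only; `cert_data`/`cert_size` are assumed to hold the DER-encoded certificate):

```c
#include <windows.h>
#include <wincrypt.h>

static BOOL add_root_certificate( const BYTE *cert_data, DWORD cert_size )
{
    HCERTSTORE store;
    BOOL ret;

    store = CertOpenStore( CERT_STORE_PROV_SYSTEM_W, 0, 0,
                           CERT_SYSTEM_STORE_LOCAL_MACHINE, L"ROOT" );
    if (!store) return FALSE;

    /* this call succeeds; the bug is that the host root-certificate sync
     * later wiped such app-added certificates from the registry */
    ret = CertAddEncodedCertificateToStore( store, X509_ASN_ENCODING,
                                            cert_data, cert_size,
                                            CERT_STORE_ADD_REPLACE_EXISTING, NULL );
    CertCloseStore( store, 0 );
    return ret;
}
```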
--
v2: crypt32: Do not delete root certs which were not imported from host in sync_trusted_roots_from_known_locations().
https://gitlab.winehq.org/wine/wine/-/merge_requests/7150
This MR adds the following complex-number-related functions:
* cimag
* _FCbuild
* crealf
* cimagf
These functions are defined in dlls/msvcr120/math.c and mapped in dlls/msvcr120/msvcr120.spec and dlls/ucrtbase/ucrtbase.spec.
The related tests were added in dlls/msvcr120/tests/msvcr120.c and the results were checked.
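For reference, a minimal sketch of what these implementations typically look like, modeled on the MSVC `_Fcomplex`/`_Dcomplex` representation (simplified stand-in types here, not the exact MR code):

```c
/* simplified stand-ins for the MSVC complex types */
typedef struct { double _Val[2]; } _Dcomplex;
typedef struct { float _Val[2]; } _Fcomplex;

double cimag( _Dcomplex z ) { return z._Val[1]; }

_Fcomplex _FCbuild( float re, float im )
{
    _Fcomplex z;
    z._Val[0] = re;
    z._Val[1] = im;
    return z;
}

float crealf( _Fcomplex z ) { return z._Val[0]; }
float cimagf( _Fcomplex z ) { return z._Val[1]; }
```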
--
https://gitlab.winehq.org/wine/wine/-/merge_requests/7109
On Fri Jun 21 06:04:13 2024 +0000, Conor McCarthy wrote:
> I replaced this with a wait for the blit before returning from
> `d3d12_swapchain_present()`. After a call to Present() returns, the
> client can render to the next buffer in sequence, so we must wait for
> completion of the blit read of that buffer. The old version using
> `vkAcquireNextImageKHR()` is not strict enough. HZD with vsync has frame
> pacing issues in some parts of the benchmark even on the old
> implementation, and this makes it a little bit worse. We can't really
> avoid that though, and the best fix is to improve frame times in vkd3d.
Sorry it took me so long to react, but I don't think that's correct, at least not in general.
I did some research on how maximum frame latency and `Present()` timing are supposed to work, and this is what I've gathered so far:
* I'll assume that we're using a flip presentation model (the only one allowed for D3D12; the blt model is restricted to D3D11 and earlier) and that `Present()` is always called with sync interval 1.
* Partially repealing an earlier comment of mine, it doesn't seem that `Present()` cares at all about `BufferCount`. The swapchain images are basically treated like a ring buffer of `BufferCount` elements. Each time `Present()` is called, a presentation operation is queued; each time a presentation operation is dequeued (following the timing expressed by the sync interval), the next image is selected from the ring buffer (adding one to the read counter and wrapping it around) and presented. If you don't synchronize correctly and write to an image before it is presented (or even while it is presented), too bad for you: the presentation engine doesn't care. You'll probably miss frames or have tearing.
* So your only hope of not stepping on the presentation engine's toes is to make judicious use of frame latency commands. Here the swapchain behaves differently depending on whether it was created with `DXGI_SWAP_CHAIN_FLAG_FRAME_LATENCY_WAITABLE_OBJECT` or not.
* The legacy behavior is without `DXGI_SWAP_CHAIN_FLAG_FRAME_LATENCY_WAITABLE_OBJECT`. In this case the swapchain automatically manages the frame latency waits, so the application can just rely on `Present()` waiting appropriately. The maximum latency has to be set with `IDXGIDevice1::SetMaximumFrameLatency()`, but `IDXGIDevice` is not available for D3D12 devices, as far as I can tell; so we can only keep the default value, which is 3 according to the docs (and that value matches my timing observations). In practice not having `DXGI_SWAP_CHAIN_FLAG_FRAME_LATENCY_WAITABLE_OBJECT` isn't much different from having it: the only difference is that at the end of each `Present()` call a wait on the latency waitable is done by D3D12 on behalf of the client (other minor differences are that you can't retrieve the waitable itself or change the latency value).
* Instead, if the swapchain is created with `DXGI_SWAP_CHAIN_FLAG_FRAME_LATENCY_WAITABLE_OBJECT`, then some cooperation from the application is expected: the application has to retrieve the waitable object (which in practice behaves like a semaphore, even if the docs never seem to explicitly say so). This is already tested in `test_frame_latency_event()`, even if the waitable is not an event: the semaphore starts at the default maximum latency (which is 1) and is released each time a frame is presented (for real, not when `Present()` is called). When `IDXGISwapChain2::SetMaximumFrameLatency()` is called and the new value is larger than the previous one, the semaphore is released a number of times equal to the difference between the new and old values. If the new value is smaller, the semaphore is not touched (i.e., `SetMaximumFrameLatency()` never waits). The application is supposed to wait on the semaphore before starting rendering; if it doesn't, `Present()` will happily keep queueing presentation operations even if the ring buffer overflows and/or the set maximum latency is exceeded. A minimal sketch of this pattern follows.
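Here is that cooperative pattern sketched out, assuming a swapchain created with the waitable-object flag; `render_frame()` is a hypothetical stand-in for the application's rendering:

```c
#define COBJMACROS
#include <dxgi1_3.h>

extern void render_frame( void );  /* hypothetical application rendering */

static void present_loop( IDXGISwapChain2 *swapchain, unsigned int frame_count )
{
    /* the handle behaves like a semaphore released once per frame that is
     * actually presented, not once per Present() call */
    HANDLE latency = IDXGISwapChain2_GetFrameLatencyWaitableObject( swapchain );
    unsigned int i;

    /* raising the maximum latency releases the semaphore by the difference */
    IDXGISwapChain2_SetMaximumFrameLatency( swapchain, 2 );

    for (i = 0; i < frame_count; ++i)
    {
        /* wait before rendering, so we never run more than the maximum
         * latency ahead of the presentation engine */
        WaitForSingleObjectEx( latency, INFINITE, FALSE );
        render_frame();
        IDXGISwapChain2_Present( swapchain, 1, 0 );
    }

    CloseHandle( latency );
}
```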
I wrote an alternative implementation which should fix the same problem. It's in !7152 (it replaces only the last commit here; the first three make sense on their own).
--
https://gitlab.winehq.org/wine/wine/-/merge_requests/5830#note_92165
On Fri Jun 21 06:04:13 2024 +0000, Conor McCarthy wrote:
> I do see a small but measurable performance gain in HZD. It's only 0.7%,
> but that should be weighed against the very simple change needed to
> gain it.
That's indeed a lot for such a simple change. Good!
--
https://gitlab.winehq.org/wine/wine/-/merge_requests/5830#note_92164