This MR tests and fixes the std handles in the child process when using
CreateProcess with extended startup information and passing a pseudo-console
handle.
From the added tests, and other manual testing (see the linked bugzilla entry),
it turns out that native overrides the std handles bound to the parent console
with handles on the passed pseudo-console for the child process, but happily
inherits other kinds of handles.
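For reference, the scenario being exercised is roughly the following (a minimal sketch rather than the MR's actual test code; error handling and cleanup are omitted):

```c
#include <windows.h>

/* Spawn a child with a pseudo-console attached through the extended startup
 * information. Even though std handles are set in STARTUPINFO and handle
 * inheritance is enabled, the child's std handles end up bound to the
 * pseudo-console, while other inherited handles are left untouched. */
static BOOL spawn_with_pseudo_console(WCHAR *cmdline)
{
    HANDLE in_read, in_write, out_read, out_write;
    HPCON pseudo_console;
    STARTUPINFOEXW si = {{ sizeof(si) }};
    PROCESS_INFORMATION pi;
    SIZE_T size = 0;
    COORD console_size = { 80, 25 };

    if (!CreatePipe(&in_read, &in_write, NULL, 0)) return FALSE;
    if (!CreatePipe(&out_read, &out_write, NULL, 0)) return FALSE;
    if (FAILED(CreatePseudoConsole(console_size, in_read, out_write, 0, &pseudo_console)))
        return FALSE;

    InitializeProcThreadAttributeList(NULL, 1, 0, &size);
    si.lpAttributeList = HeapAlloc(GetProcessHeap(), 0, size);
    InitializeProcThreadAttributeList(si.lpAttributeList, 1, 0, &size);
    UpdateProcThreadAttribute(si.lpAttributeList, 0, PROC_THREAD_ATTRIBUTE_PSEUDOCONSOLE,
                              pseudo_console, sizeof(pseudo_console), NULL, NULL);

    /* std handles set here are still replaced with pseudo-console handles in the child */
    si.StartupInfo.dwFlags = STARTF_USESTDHANDLES;
    si.StartupInfo.hStdInput  = GetStdHandle(STD_INPUT_HANDLE);
    si.StartupInfo.hStdOutput = GetStdHandle(STD_OUTPUT_HANDLE);
    si.StartupInfo.hStdError  = GetStdHandle(STD_ERROR_HANDLE);

    return CreateProcessW(NULL, cmdline, NULL, NULL, TRUE,
                          EXTENDED_STARTUPINFO_PRESENT, NULL, NULL,
                          &si.StartupInfo, &pi);
}
```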
As side notes:
- resetting the std handles in RTL_USER_PROCESS_PARAMETERS when they are console
handles could be optimized: native could potentially just test whether the passed
handles are bound to the passed console, and reset them only if they are not.
- hijacked an unused bit in ConsoleFlags to discriminate when to recreate the
std handles in the child (I didn't check how native does it, though).
--
v2: kernelbase: Create std handles from passed pseudo-console.
kernel32/tests: Test std handles in CreateProcess with pseudo console.
https://gitlab.winehq.org/wine/wine/-/merge_requests/9500
Pegasus Mail posts a WM_USER + 8991 message to its own window when it
loses focus, then calls SetFocus on its window when processing this
message. Several other applications have been seen calling SetWindowPos
during WM_ACTIVATE messages, which might also attempt to reactivate the
window.
That processing happens shortly after we have changed the foreground
window to the desktop window when focus is lost, and SetFocus then tries
to change the foreground window again.
This SetFocus behavior is tested and should work like this, but it should
only activate the window if the process is allowed to do so. Windows has
various rules around this, and it seems to boil down to something like:
* Allow taking focus if the process was never foreground, i.e. when the
process is starting and gets initial focus on its windows.
* Allow taking focus if the process was foreground but lost it recently
because one of its windows was destroyed.
* Forbid taking focus back if the process had the foreground and explicitly
called SetForegroundWindow to give it to another process.
This doesn't implement all these rules, but rather keeps relying on the host
window management with some additional heuristics to avoid activating
windows which lost the foreground recently. This mostly amounts to keeping
track of the user input time, updating it on user input and on focus changes.
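Something like the following captures the idea (illustrative sketch only, with made-up names; not the MR's actual code):

```c
/* Hypothetical sketch: track when the process last saw user input and when it
 * last lost the foreground, and only honor re-activation requests (SetFocus,
 * SetWindowPos) if the user interacted with the process since then. */
static ULONGLONG last_user_input_time;  /* updated on user input and on focus changes */
static ULONGLONG foreground_lost_time;  /* set when the process loses the foreground */

static BOOL can_reactivate_window(void)
{
    /* the process never had the foreground yet, e.g. it is starting up
     * and its windows are receiving their initial focus */
    if (!foreground_lost_time) return TRUE;
    /* otherwise only allow taking the foreground back if user input
     * happened after the foreground was lost */
    return last_user_input_time > foreground_lost_time;
}
```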
Wine-Bug: https://bugs.winehq.org/show_bug.cgi?id=58167
---
I think this should be enough to prevent spurious window reactivation, either
from calls to SetWindowPos during WM_ACTIVATE messages like in https://gitlab.winehq.org/wine/wine/-/merge_requests/9398
or from calls to SetFocus from posted messages right after window deactivation
as described above.
Fwiw, we could have tests for that and show that this MR fixes them, but fvwm
doesn't implement window minimization properly and doesn't change focus when
a window is minimized, so the tests would be meaningless.
--
https://gitlab.winehq.org/wine/wine/-/merge_requests/9511
This fixes Trials Fusion often crashing when disconnecting a controller while others are still connected.
--
v23: ntoskrnl/tests: Use the 'Nt' version of the CancelIo APIs.
ntoskrnl/tests: Test the thread ID the cancellation routine runs from.
ntoskrnl/tests: Add more cancellation tests.
ntoskrnl/tests: Fix tests on current Windows 10 / 11.
ntdll/tests: Test IOSB values of the cancel operation.
ntdll/tests: Add more NtCancelIoFile[Ex]() tests.
ntdll: Wait for all asyncs to handle cancel in NtCancelIoFile().
server: Factor out a cancel_async() function.
server: Introduce a find_async_from_user helper.
ntdll: Factor out a cancel_io() function.
https://gitlab.winehq.org/wine/wine/-/merge_requests/7797
Pretty much the same kind of thing Proton does; this is only for non-shared semaphores for now.
This creates timeline events backed by host timeline semaphores, for every wait and signal operation on a client timeline semaphore. Events are host timeline semaphores with monotonically increasing values. One event is a unique `(host semaphore, value)` tuple, and its value is incremented every time it gets signaled. Events can be reused right away as a different event, with the new value. Signaled events are queued to a per-device event list for reuse, as we cannot safely destroy them [^1] and as creating them might be costly.
A thread is spawned for every device that uses timeline semaphores, monitoring the events and semaphore value changes. The semaphore wrapper keeps the current client semaphore value, which is read directly by `vkGetSemaphoreCounterValue`.
CPU and GPU waits on the client semaphore are swapped with a wait on a timeline event, and GPU signals with a signal on a timeline event. CPU signals simply update the client semaphore value on the wrapper, signaling the timeline thread to check and notify any waiter. The timeline thread waits on signal events, coming from the GPU, and on a per-device semaphore for CPU notifications (wait/signal list updates, CPU-side signal). It will then signal any pending wait for a client semaphore value that has been reached, and remove timed-out waits.
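To make the moving parts a bit more concrete, here is a rough sketch with made-up names (not the MR's actual structures):

```c
#include <stdint.h>
#include <vulkan/vulkan.h>

/* A timeline "event" is a unique (host semaphore, value) tuple: the host
 * timeline semaphore's counter is bumped each time the event is signaled,
 * so the semaphore can immediately be reused as a different event with the
 * next value. Signaled events go back on a per-device list for reuse. */
struct timeline_event
{
    VkSemaphore host_semaphore;   /* host timeline semaphore backing the event */
    uint64_t value;               /* value to signal / wait for on that semaphore */
    struct timeline_event *next;  /* per-device reuse list linkage */
};

/* A pending wait swapped in place of a CPU or GPU wait on the client semaphore:
 * the per-device timeline thread signals the event once the client value is reached. */
struct timeline_wait
{
    uint64_t client_value;        /* client semaphore value being waited for */
    struct timeline_event *event; /* event to signal when that value is reached */
    struct timeline_wait *next;
};

/* Wrapper around a client (non-shared) timeline semaphore. */
struct timeline_semaphore
{
    uint64_t client_value;        /* current client value, read directly by
                                     vkGetSemaphoreCounterValue */
    struct timeline_wait *waits;  /* pending waits watched by the timeline thread */
};
```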
---
For shared semaphores my idea is to use the client timeline semaphore host handle itself as a signal to notify other device threads of semaphore value changes, as the host semaphore is what is exported and imported in other devices. The timeline threads would wait on that semaphore too in addition to the signal events and thread notification semaphore.
The main issue with shared semaphores actually comes from sharing the client semaphore values, so that they can be read from each device's timeline thread as well as from other processes. There are two scenarios to support: in-process sharing, which could perhaps keep the information locally, and cross-process sharing, for which I think there's no other way than to involve wineserver.
My idea then is to move shared semaphore client value to wineserver, which will cost a request on every signal and wait (and reading the value for waits could later be moved to a shared memory object). This will also allow us to implement `D3DKMTWaitForSynchronizationObjectFromCpu` which might be useful for D3D12 fence implementation as it'll allow us to translate a timeline semaphore wait to an asynchronous NT event signal.
[^1]: The Vulkan spec indicates that semaphores may only be destroyed *after* every operation that uses them has fully executed, and it's unclear whether signaling or waiting on a semaphore is enough of an indicator of full execution, or whether we would need to wait on a submit fence.
--
https://gitlab.winehq.org/wine/wine/-/merge_requests/9510
On Tue Nov 18 08:06:53 2025 +0000, Martin Storsjö wrote:
> This change (in particular, commit "win32u: Use the vulkan instance
> wrappers for D3DKMT.") broke builds without Vulkan (e.g.
> `--without-vulkan`). Starting up (in a clean prefix) prints:
> ```
> 004c:err:vulkan:vulkan_init_once Wine was built without Vulkan support.
> [5 minute wait]
> 0024:err:environ:run_wineboot boot event wait timed out
> wine: could not load kernel32.dll, status c0000135
> ```
> This looks highly surprising, but seems reproducible. I'll skip digging
> further and leave it up to you to figure out why this gets broken.
Ah right, thanks, and sorry about that. It's actually not very surprising: it probably crashes and breaks everything.
--
https://gitlab.winehq.org/wine/wine/-/merge_requests/9493#note_122724