On x86_64 macOS (real Intel hardware, not under Rosetta), a signal delivered to a thread that is inside a syscall will have CS in `sigcontext` set to the kernel's selector (0x07, `SYSCALL_CS` in the xnu source) instead of the user selector (`cs64_sel`: 0x2b, `USER64_CS`).
This causes crashes when running a 32-bit CEF sample or Steam: both do lots of async I/O, so SIGUSR1 is received often, and `CS_sig` is `0x07` whenever the signal arrives while in `_thread_set_tsd_base`. `get_wow_context()` later compares `SegCs` to `cs64_sel`, sees that it differs, assumes we are in 32-bit code, and uses the 32-bit context. The top 32 bits of `RIP` and every other register are stripped off, and the thread crashes when it tries to return after handling the signal.
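The misclassification described above can be sketched as follows. `looks_like_wow_context` and `rip_after_handler` are illustrative stand-ins for Wine's actual `get_wow_context()` logic, not its real code; the selector values are copies of the constants hardcoded in xnu:

```c
#include <stdint.h>

/* Illustrative copies of xnu's hardcoded selectors:
 * SYSCALL_CS is the kernel selector seen while inside a syscall,
 * USER64_CS is the 64-bit user selector (Wine's cs64_sel). */
#define SYSCALL_CS 0x07
#define USER64_CS  0x2b

/* Sketch of the check described above: any %cs other than the 64-bit
 * user selector is taken to mean 32-bit code. A thread interrupted
 * inside a syscall (cs == SYSCALL_CS) is therefore misclassified. */
static int looks_like_wow_context(uint16_t seg_cs)
{
    return seg_cs != USER64_CS;
}

/* When misclassified, only the low 32 bits of RIP (and of every other
 * register) survive, so the thread crashes on return from the handler. */
static uint64_t rip_after_handler(uint64_t rip, uint16_t seg_cs)
{
    return looks_like_wow_context(seg_cs) ? (uint32_t)rip : rip;
}
```

With `seg_cs == SYSCALL_CS`, a 64-bit `RIP` comes back truncated to its low 32 bits, which is exactly the crash pattern described.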
To fix this, set `CS_sig` to `cs64_sel` unless it's set to `cs32_sel`. As far as I know, these are the only `%cs` values Wine expects to see.
It appears that macOS has always done this (I tested back to 10.12), but it wasn't an issue until 3a16aabbf55be5e0416c53b498ed1d085b8d410d/!6866 started doing `_thread_set_tsd_base()` outside of a Wine/NT syscall.
From: Brendan Shanks &lt;bshanks@codeweavers.com&gt;
---
 dlls/ntdll/unix/signal_x86_64.c | 9 +++++++++
 1 file changed, 9 insertions(+)
```diff
diff --git a/dlls/ntdll/unix/signal_x86_64.c b/dlls/ntdll/unix/signal_x86_64.c
index 58496685398..a340341bd1c 100644
--- a/dlls/ntdll/unix/signal_x86_64.c
+++ b/dlls/ntdll/unix/signal_x86_64.c
@@ -841,6 +841,15 @@ static inline ucontext_t *init_handler( void *sigcontext )
 #elif defined __APPLE__
     struct ntdll_thread_data *thread_data = (struct ntdll_thread_data *)&get_current_teb()->GdiTebBatch;
     _thread_set_tsd_base( (uint64_t)((struct amd64_thread_data *)thread_data->cpu_data)->pthread_teb );
+
+    /* When in a syscall, CS will be set to the kernel's selector (0x07, SYSCALL_CS in xnu source)
+     * instead of the user selector (cs64_sel: 0x2b, USER64_CS).
+     * Fix up sigcontext so later code can compare it to cs64_sel.
+     *
+     * Only applies on Intel, not under Rosetta.
+     */
+    if (CS_sig((ucontext_t *)sigcontext) != cs32_sel)
+        CS_sig((ucontext_t *)sigcontext) = cs64_sel;
 #endif
     return sigcontext;
 }
```
Is there a way to account for 16-bit protected mode here? That's not implemented yet, but it hopefully will be at some point...
On Tue Jul 29 00:08:09 2025 +0000, Elizabeth Figura wrote:
> Is there a way to account for 16-bit protected mode here? That's not implemented yet, but it hopefully will be at some point...
I suppose I could just compare against 0x07 (`SYSCALL_CS`), since these values are all hardcoded in the kernel anyway.
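That stricter variant, comparing only against the hardcoded kernel selector, would leave any other selector (including hypothetical future 16-bit protected-mode ones) untouched. A minimal sketch, with the selector constants copied from xnu and `fixup_cs` as an illustrative name:

```c
#include <stdint.h>

#define SYSCALL_CS 0x07  /* kernel selector, hardcoded in xnu */
#define USER64_CS  0x2b  /* 64-bit user selector (Wine's cs64_sel) */

/* Only rewrite %cs when it is exactly the kernel's syscall selector;
 * any other value (the 32-bit user selector, or a future 16-bit
 * protected-mode selector) passes through unchanged. */
static uint16_t fixup_cs(uint16_t cs)
{
    return cs == SYSCALL_CS ? USER64_CS : cs;
}
```

The trade-off is relying on 0x07 directly rather than on the `cs64_sel`/`cs32_sel` variables, but as noted, all of these values are hardcoded in the kernel anyway.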