On x86_64 macOS (real Intel hardware, not under Rosetta), a signal delivered to a thread inside a syscall will have CS in `sigcontext` set to the kernel's selector (0x07, `SYSCALL_CS` in the xnu source) instead of the user selector (`cs64_sel`: 0x2b, `USER64_CS`).
This causes crashes when running a 32-bit CEF sample or Steam: those do lots of async I/O, SIGUSR1 is received often, and `CS_sig` is `0x07` if the signal arrives while the thread is in `_thread_set_tsd_base`. `get_wow_context()` later compares `SegCs` to `cs64_sel`, sees that it's different, assumes the thread is in 32-bit code, and uses the 32-bit context. The top 32 bits of `RIP` and of every other register are stripped off, and the thread crashes when it tries to resume after handling the signal.
To fix this, set `CS_sig` to `cs64_sel` unless it's set to `cs32_sel`. As far as I know, these are the only `%cs` values Wine expects to see.
It appears that macOS has always behaved this way (I tested back to 10.12), but it wasn't an issue until 3a16aabbf55be5e0416c53b498ed1d085b8d410d/!6866 started calling `_thread_set_tsd_base()` outside of a Wine/NT syscall.