Jan. 23, 2026
11:44 p.m.
On Sat Jan 24 05:44:39 2026 +0000, Paul Gofman wrote:

> First of all, you are using Proton-GE, which is a downstream fork of Proton, and some things there work quite differently. I personally wouldn't mind looking at logs from official Proton Experimental, but it is a bit difficult for GE because I am not tracking what else could be in there. Mind that Proton (and Proton GE, which this log is from) has the MR I was referring to above, so these things can work a bit differently. Can you check with upstream Wine whether the issue is the same? Or with official Proton with the commit "winepulse.drv: Process streams timer updates from PA main loop." reverted? Maybe that commit changes things here in a certain way.
>
> > If you don't mind answering, what exactly happens when we trigger NtSetEvent late?
>
> That highly depends on the game, but a lot depends on the timing being accurate (to the point that games may have hardcoded 10ms buffer lengths and just crash or refuse to fill more). Besides event timing, the effect of how we drain the buffer is visible, e.g., through _GetCurrentPadding. See the description in https://gitlab.winehq.org/wine/wine/-/merge_requests/8628 for one (not the most obvious) example of how those may affect things.
>
> > Minreq is equal to period_bytes, which is equal to the quant, while a game's refill is whatever the engine decides. The tlength-to-minreq ratio is 2.5:1 to 3.5:1, and even hardcoding it higher, I couldn't change it. It probably has something to do with how pipewire-pulse sets it. Winealsa is 4:1, though.
>
> That is controllable through pulse config (if pipewire is used, pipewire-pulse.conf, either the user config under /home/<user>/... if present, or the global one under /usr/share/; or the original pulse config files if pipewire is not used). From the log, I see that the advertised minreq is actually 10666µs. I guess lowering it in the config to something below 10ms might happen to help; it is unfortunate to have anything bigger than 10ms there.
> Although maybe it doesn't explain it all in this case.
>
> From the log I also see that the game is using xaudio; those mmdevapi calls and buffer refills are most likely done by it. And it fills a maximum of 441 samples each time, which (at the 44100 samples per second I see in the log) is almost exactly 10ms. So I guess it is at least clear what's likely going on: xaudio is firmly bound to 10ms periods (it minds the _GetCurrentPadding() sample count, in the sense that it won't output more than the available buffer space, but it looks like it also won't output more than 10ms worth). Our mmdevapi period is 10.6ms (stipulated by the pulse setup), so that is doomed to underrun at some moment, as you describe. So I think making sure that our period is 10ms should help (this can be checked by lowering the pulse latency in the setup); that way the buffer will be filled in full. That is, by the way, an example of how things may depend on timing accurately matching Windows (even though in this case it is our xaudio, apps may be doing the same).
>
> So, besides adjusting the setup so we are compatible, I think there are two directions in principle which could be considered (not sure offhand about either, or whether we want to pursue them):
>
> - Working around the PA setup in winepulse.drv along the lines of what I suggested above (leaving the current loop timings alone but introducing an extra backing buffer and a separate loop to push data to PA). I think this is technically feasible, but I also don't know if we want to complicate things like that and introduce extra buffers (it probably depends on whether those PA configs with >10ms latency are really needed, or whether they can be changed instead);
> - Doing something in FAudio to fill more data if the minimum period it gets from mmdevapi suggests so. But I don't think it can reasonably be done there. Those 10ms in xaudio / FAudio are not arbitrary at all, either.
> Back then it was just following the backend period recommendation (and could end up with an app-visible buffer >10ms). That resulted in some games outright crashing because they simply hardcoded a 10ms buffer length at allocation and didn't consider what xaudio tells them about the actual lengths. So it works with 10ms quantums and probably just doesn't have more data to pass once it sees that more padding is available.

The game does not underfill. I added a log that shows held_bytes, pa_held and adv_bytes; note that I added it right after the adv_bytes calculation. The point is, held_bytes never dips to zero, and adv_bytes isn't breaking. I genuinely do not understand how that makes sense, though.

I also tested with the commit reverted; it also crackles. Changing the quant used to help (I'd only get audio issues maybe 15-20 minutes in), but it still wouldn't solve all the issues. Attaching both logs. I tested Arc Raiders and Zenless Zone Zero.

I think my suspicions about the write pointer stepping over the read pointer were correct. I attached the Zenless log; there were no underflows at the time of the buzz. Also, ignore the first 3 underflows, they happened during launch.

The overflow has to do with held_bytes dipping below pa_held_bytes, but I don't fully understand the relationship. The problem is that the server's audio consumption is slower than the adv_bytes movement. But shouldn't there be a resampler on the server side to maintain an equilibrium? Because right now it looks like a catch-22: you can get steady pointer advancement with write stepping over read, or you can throttle adv_bytes and get a mathematically guaranteed RAM buffer overfill. Or, like in my code, inconsistent NtSetEvent. Of course, none of the options sounds right.

I am not familiar with winealsa's logic, but maybe we can take inspiration from there, since it's very stable?
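As a practical note on the config route suggested above: on a PipeWire system, the advertised latencies can be tuned through the `pulse.properties` block of `pipewire-pulse.conf` (or a drop-in under `pipewire-pulse.conf.d`). A hedged sketch; the 441/44100 values (exactly 10ms) are my illustration, not taken from the logs:

```
# ~/.config/pipewire/pipewire-pulse.conf.d/10-latency.conf
# Values are samples/rate; 441/44100 is exactly 10 ms at 44100 Hz.
pulse.properties = {
    pulse.min.req     = 441/44100   # smallest minreq the server will allow
    pulse.default.req = 441/44100   # used when the client does not request one
    pulse.min.quantum = 441/44100   # keep the server quantum at 10 ms as well
}
```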
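To make the catch-22 concrete, here is a toy ring-buffer model of the "write stepping over read" case: the client's write pointer advances a full period per tick while the server drains slightly less, so the surplus accumulates until the write pointer laps the read pointer. All the numbers are illustrative; this is a sketch of the failure mode, not the winepulse.drv code:

```python
# Toy ring-buffer model of a write pointer lapping the read pointer.
# Numbers are illustrative only; this is not the winepulse.drv implementation.
CAPACITY = 4 * 441    # ring holds four 10ms periods at 44100 Hz
WRITE_STEP = 441      # samples the client writes per 10ms tick
READ_STEP = 430       # samples the server drains per tick (slightly slower)

held = 2 * 441        # start half full
for tick in range(1, 1000):
    held += WRITE_STEP - READ_STEP   # net gain of 11 samples per tick
    if held > CAPACITY:
        # Write pointer has wrapped past the read pointer: the buzz.
        print(f"write laps read at tick {tick}; unplayed audio overwritten")
        break
```

Throttling the write side to match the drain avoids the lap, but the surplus then has to accumulate somewhere on the client side instead, which matches the "mathematically guaranteed overfill" branch of the trade-off above.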
[pulse-logs.tar.gz](/uploads/35c45ffa35f3e0bc408d4dfb1cd6df80/pulse-logs.tar.gz) -- https://gitlab.winehq.org/wine/wine/-/merge_requests/9840#note_127900