- /* When is_seeking is true, the code is in GST_Seeking_SetPositions() and is trying to change
* positions. In this case, pause the streaming thread until seeking is completed. Otherwise,
* GST_Seeking_SetPositions() may take a long time acquiring all the flushing locks. */
Yeah, although I don't think this describes the situation quite accurately.
The intent of grabbing flushing_lock *is* to pause the stream thread. It's basically just a convenient, if unconventional, use of a mutex. The problem is that this only works if mutexes are fair, in the specific sense that if thread A releases a mutex and thread B is waiting, thread B will always grab that mutex.
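For concreteness, here's a minimal sketch of the pattern as I understand it; the names (struct filter, flushing_cs, push_one_sample, set_positions) are made up and the real code is structured differently:

```c
#include <windows.h>

struct filter
{
    CRITICAL_SECTION flushing_cs;
    volatile BOOL streaming;
};

static void push_one_sample(struct filter *filter)
{
    /* Stands in for decoding and delivering one sample downstream. */
    (void)filter;
}

static DWORD WINAPI streaming_thread(void *arg)
{
    struct filter *filter = arg;

    while (filter->streaming)
    {
        /* The streaming thread holds the flushing lock around each unit of work... */
        EnterCriticalSection(&filter->flushing_cs);
        push_one_sample(filter);
        LeaveCriticalSection(&filter->flushing_cs);
        /* ...so a seeking thread that grabs the lock here effectively pauses
         * the stream, but only if the waiter is guaranteed to win the lock
         * when it is released, i.e. only if the mutex is fair. */
    }
    return 0;
}

static HRESULT set_positions(struct filter *filter)
{
    EnterCriticalSection(&filter->flushing_cs);
    /* ...update positions while the streaming thread is parked... */
    LeaveCriticalSection(&filter->flushing_cs);
    return S_OK;
}
```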
As it happens, win32 critical sections *are* fair in that sense, or at least testing strongly suggests that they are. Our current implementation violates that assumption, and there's a good argument that we should fix Wine in case other applications are broken. (Then again, if we haven't heard a report of breakage... but also, this issue would be rather difficult to debug.)

On the other hand, it's not clear to me what the correct way of fixing the problem is. Putting a wait queue in the critical section works, but so does just yielding after RtlUnWaitCriticalSection, or even RtlWakeByAddressSingle or NtAlertThreadByThreadId, and I don't know how to tell which of these, if any, is the correct place to yield. (We don't want to make wakeups slower unless we're sure they should be...)
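To illustrate what "fair" means here, a rough probe along these lines shows the difference (just a sketch to describe the behaviour, not the actual test that was run): one thread enters and leaves the critical section in a tight loop while another thread makes a single acquisition. With a handoff-to-waiter implementation the second thread gets in almost immediately; with an unfair one it can be starved for a long time.

```c
#include <stdio.h>
#include <windows.h>

static CRITICAL_SECTION cs;
static volatile LONG stop;

/* Enter and immediately retake the lock in a tight loop, like the streaming
 * thread releasing and regrabbing flushing_lock for every sample. */
static DWORD WINAPI spinner(void *arg)
{
    while (!stop)
    {
        EnterCriticalSection(&cs);
        LeaveCriticalSection(&cs);
    }
    return 0;
}

int main(void)
{
    HANDLE thread;
    DWORD start;

    InitializeCriticalSection(&cs);
    thread = CreateThread(NULL, 0, spinner, NULL, 0, NULL);
    Sleep(100);                 /* let the spinner get going */

    start = GetTickCount();
    EnterCriticalSection(&cs);  /* how long does a single waiter wait? */
    printf("waited %lu ms\n", (unsigned long)(GetTickCount() - start));
    LeaveCriticalSection(&cs);

    InterlockedExchange(&stop, 1);
    WaitForSingleObject(thread, INFINITE);
    CloseHandle(thread);
    DeleteCriticalSection(&cs);
    return 0;
}
```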
Despite that, I think we should also fix winegstreamer not to depend on fairness, because it's subtle and I don't trust every mutex implementation to guarantee it (and it'd be nice if we could port winegstreamer code elsewhere, especially the high-level quartz stuff).
I'm not sure how I feel about the Sleep(1) loop. In practice the sleep can be longer than 1 ms; in fact, I think Windows rounds it up to about 16 ms (the default timer tick). A one-time cost when seeking maybe doesn't matter, though, even considering that seeking should be seamless.
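For reference, I'm assuming the wait looks roughly like the polling loop below (field names invented); each Sleep(1) can block for a whole scheduler tick, so the overshoot is inherent to polling. An event-based wait, also just sketched here, wakes as soon as the seek completes:

```c
#include <windows.h>

struct filter
{
    volatile LONG is_seeking;   /* hypothetical flag set while SetPositions() runs */
    HANDLE seek_done_event;     /* hypothetical event signalled when it finishes */
};

static void wait_for_seek_polling(struct filter *filter)
{
    /* Each iteration can actually sleep until the next timer tick (~16 ms by
     * default), not 1 ms. */
    while (InterlockedCompareExchange(&filter->is_seeking, 0, 0))
        Sleep(1);
}

static void wait_for_seek_event(struct filter *filter)
{
    /* Wakes immediately when the seeking thread sets the event. */
    WaitForSingleObject(filter->seek_done_event, INFINITE);
}
```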