Why does quartz deadlock? Read requests shouldn't block, and while
sample requests can block, they should be resolved by earlier samples getting delivered, which shouldn't deadlock.
Because it synchronizes streams together in a way that it isn't supposed to. You don't know whether read requests are blocking or not; as soon as control goes back to application code, it can be anything.
Sample allocators definitely are blocking, depending on the pool sizes, and synchronizing stream threads together is not a good idea. We should not assume anything about it; instead, we should make sure that the code is flexible and avoids unnecessary synchronization on our side.
We also have no control over GStreamer's decisions or its queueing strategy, and whether it will deliver output buffers soon enough to unblock the allocators is beyond our reach.
Sure, but there's also no known application that depends on that
detail. I don't like the idea of making the parser interface that much more complicated if it's not necessary.
I'd appreciate something more detailed than "much more complicated", because simply asserting it isn't enough to make a case.
I could for instance say the opposite, then point out that:
1) it reduces the number of necessary entry points, ultimately:
wait_request, push_data, read_data, done_alloc,
vs
get_next_read_offset, push_data, stream_get_buffer, stream_copy_buffer, stream_release_buffer,
2) it's not even required to have the allocation request / done_alloc; it only exists to support zero-copy, and without it there would be one fewer entry point.
3) the design doesn't force anything upon the clients, wait_request can be called concurrently, as long as the stream / request masks are disjoint.
4) replying to requests is done in a single call, and doesn't require the client to correctly sequence the get_buffer / copy_buffer / release_buffer calls.
5) replies can be done concurrently with other requests as they are kept in a list and the client provides their token.