foobar2000.exe's UPnP Media Renderer component (foo_out_upnp.dll) expects that, if a select() call completes successfully with a non-empty writefds set, any immediately following send() call on a socket in the writefds set never fails with WSAEWOULDBLOCK.
On Wine, the Winsock select() and send() implementations both call Unix poll(2) under the hood to test whether I/O is possible on the socket. As it turns out, it is entirely possible for Unix poll() to yield POLLOUT on the first call (for select) but *not* on the second (for send), even when no send() call has been made in the meantime.
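To make the expectation concrete, here is a minimal sketch of the select()-then-send() pattern involved (hypothetical application code, not foo_out_upnp.dll's actual source; the helper name send_all is ours, and error handling is trimmed):

#include <winsock2.h>

static int send_all(SOCKET s, const char *buf, int len)
{
    while (len > 0)
    {
        fd_set wfds;
        int ret;

        FD_ZERO(&wfds);
        FD_SET(s, &wfds);
        /* Block until the socket is reported writable. */
        if (select(0, NULL, &wfds, NULL, NULL) <= 0) return -1;

        /* The application assumes this send() cannot fail with
         * WSAEWOULDBLOCK, since select() just reported writability.
         * On Wine, both calls boil down to Unix poll(), and the
         * second poll() may no longer report POLLOUT. */
        ret = send(s, buf, len, 0);
        if (ret == SOCKET_ERROR) return -1;
        buf += ret;
        len -= ret;
    }
    return 0;
}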
On Linux (as of v5.19), a connected (ESTABLISHED) TCP socket that has not been shut down indicates the (E)POLLOUT condition only if the ratio of sk_wmem_queued (the number of bytes queued in the send buffer) to sk_sndbuf (the size of the send buffer itself, which can be retrieved via SO_SNDBUF) is below a certain threshold. Therefore, a falling edge in POLLOUT can be triggered for a number of reasons (a sketch of the writability condition follows the list):
1. TCP retransmission and control packets (e.g. MTU probing). Such packets share the same buffer with application-initiated packets, and are thus counted in sk_wmem_queued just like application data. See also: sk_wmem_queued_add() callers (Linux 5.19).
2. Memory pressure. This causes sk_sndbuf to shrink. See also: sk_stream_moderate_sndbuf() callers (Linux 5.19).
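The "certain threshold" can be restated concretely. The sketch below approximates the writability test an established TCP socket must pass to report (E)POLLOUT, paraphrasing sk_stream_wspace()/sk_stream_min_wspace() as of Linux 5.19 (simplified, not verbatim kernel code; the helper name is ours):

/* Writable while the free space is at least half of what is already
 * queued, i.e. while sk_wmem_queued stays below roughly 2/3 of
 * sk_sndbuf. */
static int tcp_reports_pollout(int sk_wmem_queued, int sk_sndbuf)
{
    return sk_sndbuf - sk_wmem_queued >= sk_wmem_queued / 2;
}

Either growth in sk_wmem_queued (reason 1 above) or shrinkage of sk_sndbuf (reason 2) can therefore flip the condition from true to false between two poll() calls with no intervening send().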
Fix this by always attempting synchronous I/O first if the nonblocking flag is set.
Note: for diagnosis, `getsockopt(fd, SOL_SOCKET, SO_MEMINFO, ...)` can be used to retrieve both sk_wmem_queued (the number of bytes queued in the send buffer) and sk_sndbuf (the size of the send buffer itself, which can also be retrieved via SO_SNDBUF).
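A minimal sketch of such a diagnostic (Linux-specific; SO_MEMINFO requires Linux 4.12 or later, the SK_MEMINFO_* indices come from <linux/sock_diag.h>, and the helper name is ours):

#include <stdio.h>
#include <sys/socket.h>
#include <linux/sock_diag.h>

static void dump_sndbuf_accounting(int fd)
{
    unsigned int mem[SK_MEMINFO_VARS];
    socklen_t len = sizeof(mem);

    /* SO_MEMINFO returns a snapshot of the socket's memory accounting. */
    if (getsockopt(fd, SOL_SOCKET, SO_MEMINFO, mem, &len) == 0)
        printf("sk_wmem_queued=%u sk_sndbuf=%u\n",
               mem[SK_MEMINFO_WMEM_QUEUED], mem[SK_MEMINFO_SNDBUF]);
    else
        perror("getsockopt(SO_MEMINFO)");
}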
--
v4: server: Always prefer synchronous I/O in nonblocking mode.
From: Jinoh Kang <jinoh.kang.kr@gmail.com>
---
 server/sock.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/server/sock.c b/server/sock.c
index 7d7e470be28..caa3724eb59 100644
--- a/server/sock.c
+++ b/server/sock.c
@@ -3470,7 +3470,7 @@ DECL_HANDLER(recv_socket)
          */
         struct pollfd pollfd;
         pollfd.fd = get_unix_fd( sock->fd );
-        pollfd.events = req->oob ? POLLPRI : POLLIN;
+        pollfd.events = req->oob && !is_oobinline( sock ) ? POLLPRI : POLLIN;
         pollfd.revents = 0;
         if (poll(&pollfd, 1, 0) >= 0 && pollfd.revents)
         {
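For context on the is_oobinline() check: when SO_OOBINLINE is set on a socket, urgent (out-of-band) data is delivered inline in the normal data stream, so its arrival is signaled by POLLIN rather than POLLPRI; polling POLLPRI for an OOB read on such a socket would never report readiness. A minimal illustration (standard BSD sockets, not Wine server code; the helper name is ours):

#include <sys/socket.h>

static void enable_oob_inline(int fd)
{
    int on = 1;
    /* After this, the urgent byte is merged into the regular stream:
     * a plain recv() (no MSG_OOB) returns it, and poll() reports
     * POLLIN rather than POLLPRI when it arrives. */
    setsockopt(fd, SOL_SOCKET, SO_OOBINLINE, &on, sizeof(on));
}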
From: Jinoh Kang <jinoh.kang.kr@gmail.com>
---
 server/sock.c | 12 ++----------
 1 file changed, 2 insertions(+), 10 deletions(-)
diff --git a/server/sock.c b/server/sock.c
index caa3724eb59..7b5bb187aa0 100644
--- a/server/sock.c
+++ b/server/sock.c
@@ -3468,11 +3468,7 @@ DECL_HANDLER(recv_socket)
          * asyncs will not consume all available data; if there's no data
          * available, the current request won't be immediately satiable.
          */
-        struct pollfd pollfd;
-        pollfd.fd = get_unix_fd( sock->fd );
-        pollfd.events = req->oob && !is_oobinline( sock ) ? POLLPRI : POLLIN;
-        pollfd.revents = 0;
-        if (poll(&pollfd, 1, 0) >= 0 && pollfd.revents)
+        if (check_fd_events( sock->fd, req->oob && !is_oobinline( sock ) ? POLLPRI : POLLIN ))
         {
             /* Give the client opportunity to complete synchronously.
              * If it turns out that the I/O request is not actually immediately satiable,
@@ -3568,11 +3564,7 @@ DECL_HANDLER(send_socket)
          * asyncs will not consume all available space; if there's no space
          * available, the current request won't be immediately satiable.
          */
-        struct pollfd pollfd;
-        pollfd.fd = get_unix_fd( sock->fd );
-        pollfd.events = POLLOUT;
-        pollfd.revents = 0;
-        if (poll(&pollfd, 1, 0) >= 0 && pollfd.revents)
+        if (check_fd_events( sock->fd, POLLOUT ))
         {
             /* Give the client opportunity to complete synchronously.
              * If it turns out that the I/O request is not actually immediately satiable,
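For reference, check_fd_events() is an existing wineserver helper that performs the same zero-timeout poll the deleted lines open-coded. Its shape is roughly the following (paraphrased from memory of server/fd.c, not a verbatim quote):

/* Return which of the requested events are currently pending on the
 * fd, without blocking (zero-timeout poll). */
int check_fd_events( struct fd *fd, int events )
{
    struct pollfd pfd;

    pfd.fd = get_unix_fd( fd );
    pfd.events = events;
    pfd.revents = 0;
    if (poll( &pfd, 1, 0 ) <= 0) return 0;
    return pfd.revents;
}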
From: Jinoh Kang <jinoh.kang.kr@gmail.com>
foobar2000.exe's UPnP Media Renderer component (foo_out_upnp.dll) expects that, if a select() call completes successfully with a non-empty writefds set, any immediately following send() call on a socket in the writefds set never fails with WSAEWOULDBLOCK.
On Wine, the Winsock select() and send() implementations both call Unix poll(2) under the hood to test whether I/O is possible on the socket. As it turns out, it is entirely possible for Linux poll() to yield POLLOUT on the first call (by select) but *not* on the second (by send), even when no send() call has been made in the meantime.
On Linux (as of v5.19), a connected (ESTABLISHED) TCP socket that has not been shut down indicates (E)POLLOUT only if the ratio of sk_wmem_queued (the number of bytes queued in the send buffer) to sk_sndbuf (the size of the send buffer itself, which can be retrieved via SO_SNDBUF) is below a certain threshold. Therefore, a falling edge in POLLOUT can be triggered for a number of reasons:
1. TCP fragmentation. Once a TCP packet is split out from a larger sk_buff, it incurs extra bookkeeping overhead (e.g. the sk_buff header) that is counted in sk_wmem_queued alongside application data. See also: tcp_fragment(), tso_fragment() (Linux 5.19).
2. Control packets (e.g. MTU probing). Such packets share the same buffer with application-initiated packets, and are thus counted in sk_wmem_queued. See also: sk_wmem_queued_add() callers (Linux 5.19).
3. Memory pressure. This causes sk_sndbuf to shrink. See also: sk_stream_moderate_sndbuf() callers (Linux 5.19).
Fix this by always attempting synchronous I/O first if req->force_async is unset and the nonblocking flag is set.
Wine-Bug: https://bugs.winehq.org/show_bug.cgi?id=53486
---
 server/sock.c | 32 ++++++++++++++++++++++++++++----
 1 file changed, 28 insertions(+), 4 deletions(-)
diff --git a/server/sock.c b/server/sock.c
index 7b5bb187aa0..4e57d6774a6 100644
--- a/server/sock.c
+++ b/server/sock.c
@@ -3468,11 +3468,19 @@ DECL_HANDLER(recv_socket)
          * asyncs will not consume all available data; if there's no data
          * available, the current request won't be immediately satiable.
          */
-        if (check_fd_events( sock->fd, req->oob && !is_oobinline( sock ) ? POLLPRI : POLLIN ))
+        if ((!req->force_async && sock->nonblocking) ||
+            check_fd_events( sock->fd, req->oob && !is_oobinline( sock ) ? POLLPRI : POLLIN ))
         {
             /* Give the client opportunity to complete synchronously.
              * If it turns out that the I/O request is not actually immediately satiable,
-             * the client may then choose to re-queue the async (with STATUS_PENDING). */
+             * the client may then choose to re-queue the async (with STATUS_PENDING).
+             *
+             * Note: If the nonblocking flag is set, we don't poll the socket
+             * here and always opt for synchronous completion first. This is
+             * because the application has probably seen POLLIN already from a
+             * preceding select()/poll() call before it requested to receive
+             * data.
+             */
             status = STATUS_ALERTED;
         }
     }
@@ -3564,11 +3572,27 @@ DECL_HANDLER(send_socket)
          * asyncs will not consume all available space; if there's no space
          * available, the current request won't be immediately satiable.
          */
-        if (check_fd_events( sock->fd, POLLOUT ))
+        if ((!req->force_async && sock->nonblocking) || check_fd_events( sock->fd, POLLOUT ))
         {
             /* Give the client opportunity to complete synchronously.
              * If it turns out that the I/O request is not actually immediately satiable,
-             * the client may then choose to re-queue the async (with STATUS_PENDING). */
+             * the client may then choose to re-queue the async (with STATUS_PENDING).
+             *
+             * Note: If the nonblocking flag is set, we don't poll the socket
+             * here and always opt for synchronous completion first. This is
+             * because the application has probably seen POLLOUT already from a
+             * preceding select()/poll() call before it requested to send data.
+             *
+             * Furthermore, some applications expect that any send() call on a
+             * socket that has indicated POLLOUT beforehand never fails with
+             * WSAEWOULDBLOCK. It's possible that Linux poll() may yield
+             * POLLOUT on the first call but not the second, even if no send()
+             * call has been made in the meanwhile. This can happen for a
+             * number of reasons; for example, TCP fragmentation may consume
+             * extra buffer space for each packet that has been split out, or
+             * the TCP/IP networking stack may decide to shrink the send buffer
+             * due to memory pressure.
+             */
             status = STATUS_ALERTED;
         }
     }
This merge request was approved by Zebediah Figura.