This allows the client to postpone the initial I/O until the server has queued the I/O request. The server should perform the postprocessing only after the initial I/O has been done.
In the case of send_socket, the event flags should ideally be manipulated *after* (not *before*) the client has attempted the initial I/O, since the socket's outbound queue status may change in the meantime. Also, the implicitly bound address becomes available only after the send* system call has been performed.
Signed-off-by: Jinoh Kang <jinoh.kang.kr@gmail.com>
---
Notes:
    v1 -> v2: - pass around total size of data to be transmitted
              - detect short write in send_socket_initial_callback
    v2 -> v3: no changes
    v3 -> v4: no changes
 dlls/ntdll/unix/socket.c | 15 ++++++++++++++
 server/protocol.def      |  1 +
 server/sock.c            | 42 +++++++++++++++++++++++++++-------------
 3 files changed, 45 insertions(+), 13 deletions(-)
diff --git a/dlls/ntdll/unix/socket.c b/dlls/ntdll/unix/socket.c
index 23059e3cff8..d04ebb7f874 100644
--- a/dlls/ntdll/unix/socket.c
+++ b/dlls/ntdll/unix/socket.c
@@ -873,6 +873,7 @@ static NTSTATUS sock_send( HANDLE handle, HANDLE event, PIO_APC_ROUTINE apc, voi
     NTSTATUS status;
     unsigned int i;
     ULONG options;
+    data_size_t data_size;
 
     async_size = offsetof( struct async_send_ioctl, iov[count] );
 
@@ -906,6 +907,18 @@ static NTSTATUS sock_send( HANDLE handle, HANDLE event, PIO_APC_ROUTINE apc, voi
     async->iov_cursor = 0;
     async->sent_len = 0;
 
+    data_size = 0;
+    for (i = 0; i < count; ++i)
+    {
+        SIZE_T len = async->iov[i].iov_len;
+        if (len > (SIZE_T)(data_size_t)-1 || (data_size_t)(data_size + len) < data_size)
+        {
+            release_fileio( &async->io );
+            return STATUS_NO_MEMORY;
+        }
+        data_size += len;
+    }
+
     status = try_send( fd, async );
 
     if (status != STATUS_SUCCESS && status != STATUS_DEVICE_NOT_READY)
@@ -919,6 +932,7 @@ static NTSTATUS sock_send( HANDLE handle, HANDLE event, PIO_APC_ROUTINE apc, voi
 
     SERVER_START_REQ( send_socket )
     {
+        req->data_size = data_size;
         req->status = status;
         req->total = async->sent_len;
         req->async = server_async( handle, &async->io, event, apc, apc_user, iosb_client_ptr(io) );
@@ -1099,6 +1113,7 @@ static NTSTATUS sock_transmit( HANDLE handle, HANDLE event, PIO_APC_ROUTINE apc,
 
     SERVER_START_REQ( send_socket )
    {
+        req->data_size = async->head_len + async->file_len + async->tail_len;
         req->status = STATUS_PENDING;
         req->total = 0;
         req->async = server_async( handle, &async->io, event, apc, apc_user, iosb_client_ptr(io) );
diff --git a/server/protocol.def b/server/protocol.def
index 9d90544fa41..2e57c91ccad 100644
--- a/server/protocol.def
+++ b/server/protocol.def
@@ -1461,6 +1461,7 @@ enum server_fd_type
 
 /* Perform a send on a socket */
 @REQ(send_socket)
+    data_size_t data_size;   /* total number of bytes to send (>= total) */
     async_data_t async;      /* async I/O parameters */
     unsigned int status;     /* status of initial call */
     unsigned int total;      /* number of bytes already sent */
diff --git a/server/sock.c b/server/sock.c
index 91f98556552..0e386e3537c 100644
--- a/server/sock.c
+++ b/server/sock.c
@@ -3454,16 +3454,17 @@ DECL_HANDLER(recv_socket)
     release_object( sock );
 }
 
-DECL_HANDLER(send_socket)
+static void send_socket_initial_callback( void *private, struct async *async, struct fd *fd, unsigned int status )
 {
-    struct sock *sock = (struct sock *)get_handle_obj( current->process, req->async.handle, 0, &sock_ops );
-    unsigned int status = req->status;
-    timeout_t timeout = 0;
-    struct async *async;
-    struct fd *fd;
+    struct sock *sock = get_fd_user( fd );
+    struct iosb *iosb;
+    int is_short_write = 0;
 
-    if (!sock) return;
-    fd = sock->fd;
+    if ((iosb = async_get_iosb( async )))
+    {
+        is_short_write = iosb->result < (unsigned long)private;
+        release_object( iosb );
+    }
 
     if (sock->type == WS_SOCK_DGRAM)
     {
@@ -3476,13 +3477,29 @@ DECL_HANDLER(send_socket)
         sock->bound = 1;
     }
 
-    if (status != STATUS_SUCCESS)
+    if (status != STATUS_SUCCESS || is_short_write)
     {
-        /* send() calls only clear and reselect events if unsuccessful. */
+        /* send() calls only clear and reselect events if unsuccessful.
+         * Also treat short writes as being unsuccessful.
+         */
         sock->pending_events &= ~AFD_POLL_WRITE;
         sock->reported_events &= ~AFD_POLL_WRITE;
     }
 
+    sock_reselect( sock );
+}
+
+DECL_HANDLER(send_socket)
+{
+    struct sock *sock = (struct sock *)get_handle_obj( current->process, req->async.handle, 0, &sock_ops );
+    unsigned int status = req->status;
+    timeout_t timeout = 0;
+    struct async *async;
+    struct fd *fd;
+
+    if (!sock) return;
+    fd = sock->fd;
+
     /* If we had a short write and the socket is nonblocking (and the client is
      * not trying to force the operation to be asynchronous), return success.
      * Windows actually refuses to send any data in this case, and returns
@@ -3518,15 +3535,14 @@ DECL_HANDLER(send_socket)
     }
     set_error( status );
 
+    async_set_initial_status_callback( async, send_socket_initial_callback, (void *)(unsigned long)req->data_size );
+
     if (timeout) async_set_timeout( async, timeout, STATUS_IO_TIMEOUT );
 
     if (status == STATUS_PENDING) queue_async( &sock->write_q, async );
 
-    /* always reselect; we changed reported_events above */
-    sock_reselect( sock );
-
     reply->wait = async_handoff( async, NULL, 0 );
     reply->options = get_fd_options( fd );
     release_object( async );
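The overflow guard added to sock_send() can be exercised in isolation. The following is a minimal sketch, assuming a 32-bit data_size_t; the typedef and sum_lengths() are illustrative stand-ins, not the wineserver definitions. It demonstrates the same test the patch uses: after an addition, a wrapped running total compares strictly less than it was before.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical stand-in for wineserver's 32-bit data_size_t. */
typedef uint32_t data_size_t;

/* Accumulate iovec-style lengths into a data_size_t, reporting failure if a
 * single length exceeds the type's range or the running total wraps. */
static int sum_lengths( const size_t *lens, size_t count, data_size_t *out )
{
    data_size_t total = 0;
    size_t i;

    for (i = 0; i < count; ++i)
    {
        size_t len = lens[i];
        /* A wrapped sum is strictly smaller than the previous total. */
        if (len > (size_t)(data_size_t)-1 || (data_size_t)(total + len) < total)
            return 0; /* would overflow data_size_t */
        total += len;
    }
    *out = total;
    return 1;
}
```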
On 3/3/22 07:30, Jinoh Kang wrote:
I still am failing to see the need to do this, especially by adding a new callback. What's preventing send_socket from working the same way as recv_socket?
On 3/5/22 02:51, Zebediah Figura wrote:
I still am failing to see the need to do this, especially by adding a new callback. What's preventing send_socket from working the same way as recv_socket?
As stated in the commit message, none of the actions in send_socket_initial_callback can be performed *before* the initial I/O.
In send_socket, "send() calls only clear and reselect events if unsuccessful." We can tell if the I/O has succeeded only after set_async_direct_result. Therefore, we can only clear and reselect events after the client tells us the initial I/O has been unsuccessful.
Also, the implicitly bound address is available only after the send* system call has been performed.
On 3/4/22 13:16, Jinoh Kang wrote:
As stated in the commit message, none of the actions in send_socket_initial_callback can be performed *before* the initial I/O.
In send_socket, "send() calls only clear and reselect events if unsuccessful." We can tell if the I/O has succeeded only after set_async_direct_result. Therefore, we can only clear and reselect events after the client tells us the initial I/O has been unsuccessful.
Right, sorry, I'm not thinking this through, or reading apparently...
I still am not thrilled about adding a new callback, though, at least if it can be avoided. In this case I wonder if we can make use of sock_reselect_async().
The exact conditions (and timing) under which we need to clear the AFD_POLL_WRITE bit are not really clear. In particular the correct condition may not be "status != STATUS_SUCCESS" but "status == STATUS_PENDING || status == STATUS_DEVICE_NOT_READY". I can't off the top of my head think of any interesting cases where send() returns an error (other than EWOULDBLOCK) but subsequent calls can succeed...
But assuming that we only really need to clear flags when the kernel send buffer is full, I think the right thing to do would be to clear events if "async_waiting( &sock->write_q )".
On 3/5/22 09:13, Zebediah Figura wrote:
Right, sorry, I'm not thinking this through, or reading apparently...
No need to be sorry; I'm sometimes guilty of this too :-/. I'll try to be more clear the next time.
I still am not thrilled about adding a new callback, though, at least if it can be avoided.
To be fair, extending structs and introducing indirect calls are a pet peeve of mine too.
In this case I wonder if we can make use of sock_reselect_async().
The exact conditions (and timing) under which we need to clear the AFD_POLL_WRITE bit are not really clear. In particular the correct condition may not be "status != STATUS_SUCCESS" but "status == STATUS_PENDING || status == STATUS_DEVICE_NOT_READY". I can't off the top of my head think of any interesting cases where send() returns an error (other than EWOULDBLOCK) but subsequent calls can succeed...
Does STATUS_NO_MEMORY count? ;-)
But assuming that we only really need to clear flags when the kernel send buffer is full, I think the right thing to do would be to clear events if "async_waiting( &sock->write_q )".
We still need to account for nonblocking I/O (STATUS_DEVICE_NOT_READY), and also is_short_write, which is another indicator for a full send buffer. Otherwise, it breaks ws2_32:sock:test_write_events. Also, note the implicitly bound address.
In any case I think the co-routine pattern is inevitable due to the client-server role split. In this case, AFD.SYS can do both pre- and post-processing inside a single function, but wineserver can't block for the client to finish the I/O.
On 3/5/22 02:24, Jinoh Kang wrote:
The exact conditions (and timing) under which we need to clear the AFD_POLL_WRITE bit are not really clear. In particular the correct condition may not be "status != STATUS_SUCCESS" but "status == STATUS_PENDING || status == STATUS_DEVICE_NOT_READY". I can't off the top of my head think of any interesting cases where send() returns an error (other than EWOULDBLOCK) but subsequent calls can succeed...
Does STATUS_NO_MEMORY count? ;-)
Not at all ;-)
But assuming that we only really need to clear flags when the kernel send buffer is full, I think the right thing to do would be to clear events if "async_waiting( &sock->write_q )".
We still need to account for nonblocking I/O (STATUS_DEVICE_NOT_READY), and also is_short_write, which is another indicator for a full send buffer. Otherwise, it breaks ws2_32:sock:test_write_events. Also, note the implicitly bound address.
In any case I think the co-routine pattern is inevitable due to the client-server role split. In this case, AFD.SYS can do both pre- and post-processing inside a single function, but wineserver can't block for the client to finish the I/O.
Right. Maybe we could make use of async_set_completion_callback() instead? I think the only reason that can't work is if the sock name needs to be updated and visible *before* the I/O is necessarily complete.
On 3/8/22 03:11, Zebediah Figura wrote:
Right. Maybe we could make use of async_set_completion_callback() instead? I think the only reason that can't work is if the sock name needs to be updated and visible *before* the I/O is necessarily complete.
I'll test if Windows does update it before I/O completion.
On 3/9/22 00:42, Jinoh Kang wrote:
I'll test if Windows does update it before I/O completion.
Ok, here's the deal: it turns out Windows _does_ update the bound address before I/O completion.
I tested it by flooding UDP packets, each 1500*20 bytes and fragmented into ~21 frames (MTU=1500), over a slow NIC. Eventually WSASendTo() returns with last error WSA_IO_PENDING, and the I/O request was apparently still pending at the moment getsockname() returned.
OK: getsockname() succeeded while I/O pending.
Waiting for I/O completion...
Completion took 23859 milliseconds.
Here's the source code:
#define WIN32_LEAN_AND_MEAN
#define WINVER 0x0602
#define _WIN32_WINNT 0x0602
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>
#include <winsock2.h>
#include <ws2tcpip.h>

void die_error_(int line, DWORD err)
{
    fprintf(stderr, "%d: %u (%08x)\n", line, err, err);
    exit(1);
}
#define die_error() die_error_(__LINE__, GetLastError())

int main(int argc, char **argv)
{
    WSADATA wsaData;
    SOCKET sock;
    struct sockaddr_in sa_in, sa_in2;
    WSAOVERLAPPED ovl;
    WSABUF vecs[1];
    int sa_len2;
    DWORD result, flags;
    static unsigned char buffer[1500*20];
    WSAEVENT event;

    if (WSAStartup(MAKEWORD(2, 2), &wsaData)) die_error();

    event = WSACreateEvent();
    if (event == WSA_INVALID_EVENT) die_error();

    memset(&sa_in, 0, sizeof(sa_in));
    sa_in.sin_family = AF_INET;
    if (argc >= 2) inet_pton(AF_INET, argv[1], &sa_in.sin_addr.s_addr);
    sa_in.sin_port = htons(1);

    for (;;)
    {
        sock = socket(AF_INET, SOCK_DGRAM, 0);
        if (sock == INVALID_SOCKET) die_error();

        vecs[0].buf = (CHAR*)buffer;
        vecs[0].len = sizeof(buffer);

        memset(&ovl, 0, sizeof(ovl));
        ovl.hEvent = event;
        if (WSASendTo(sock, vecs, 1, NULL, 0, (struct sockaddr *)&sa_in, sizeof(sa_in), &ovl, NULL) == SOCKET_ERROR)
        {
            DWORD t0 = GetTickCount();
            if (WSAGetLastError() != WSA_IO_PENDING) die_error();

            sa_len2 = sizeof(sa_in2);
            if (getsockname(sock, (struct sockaddr *)&sa_in2, &sa_len2) == SOCKET_ERROR) die_error();

            /* still not completed? */
            if (!HasOverlappedIoCompleted((LPOVERLAPPED)&ovl) &&
                WaitForSingleObject(event, 0) == WAIT_TIMEOUT &&
                WaitForSingleObject((HANDLE)sock, 0) == WAIT_TIMEOUT)
            {
                puts("OK: getsockname() succeeded while I/O pending.\nWaiting for I/O completion...");
                fflush(stdout);

                WaitForSingleObject(event, INFINITE);
                closesocket(sock);

                printf("Completion took %lu milliseconds.\n", GetTickCount() - t0);
                fflush(stdout);

                break; /* success */
            }

            WSAGetOverlappedResult(sock, &ovl, &result, TRUE, &flags);
        }
        closesocket(sock);
    }

    WSACleanup();
    return 0;
}
On 3/9/22 12:36, Jinoh Kang wrote:
Ok, here's the deal: it turns out Windows _does_ update the bound address before I/O completion.
Right, okay.
My intuition still tells me we should avoid adding a new callback, and rely on the existing completion and reselect callbacks as much as possible. This is largely because we already have the need to execute coroutines, as it were, and reselect fills that need.
I think it's possible to accomplish that in this case relatively easily, by checking if there is a write async queued and trying to get the socket name if so. Although perhaps a more idiomatic alternative would be to replace all of the instances of "sock->bound" with a helper that tries to retrieve the socket name for unbound DGRAM sockets.
I'm not as confident about this as I'd like to be, and I don't like how complex the reselect callback can get. But adding more complexity to the async infrastructure just does not sound like a good idea.
On 3/10/22 04:23, Zebediah Figura wrote:
I think it's possible to accomplish that in this case relatively easily, by checking if there is a write async queued and trying to get the socket name if so. Although perhaps a more idiomatic alternative would be to replace all of the instances of "sock->bound" with a helper that tries to retrieve the socket name for unbound DGRAM sockets.
I just came up with another approach: why not just auto-bind the socket explicitly at send_socket?
We'll eventually call try_send() only after the send_socket server call, so binding the socket beforehand in the send_socket handler makes sense.
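For what it's worth, the explicit auto-bind idea can be sketched outside wineserver with plain BSD sockets. The following is a minimal POSIX sketch, assuming an IPv4 UDP socket; autobind_udp_port() is an illustrative helper, not wineserver code. Binding to a wildcard address with port 0 up front makes the kernel-assigned port visible to getsockname() immediately, before any data is sent.

```c
#include <arpa/inet.h>
#include <assert.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Bind an as-yet-unbound UDP socket to a wildcard address and return the
 * port the kernel picked, or 0 on failure. */
static unsigned short autobind_udp_port( int fd )
{
    struct sockaddr_in addr;
    socklen_t len = sizeof(addr);

    memset( &addr, 0, sizeof(addr) );
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl( INADDR_ANY );
    addr.sin_port = 0; /* let the kernel pick an ephemeral port */

    if (bind( fd, (struct sockaddr *)&addr, sizeof(addr) ) < 0) return 0;
    if (getsockname( fd, (struct sockaddr *)&addr, &len ) < 0) return 0;
    return ntohs( addr.sin_port );
}
```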
On 3/10/22 07:08, Jinoh Kang wrote:
I just came up with another approach: why not just auto-bind the socket explicitly at send_socket?
I think that makes sense?
I don't actually know if there are any circumstances in which that can be incorrect. As before, I don't know how to trigger an interesting error from send()...
On 3/8/22 03:11, Zebediah Figura wrote:
Right. Maybe we could make use of async_set_completion_callback() instead? I think the only reason that can't work is if the sock name needs to be updated and visible *before* the I/O is necessarily complete.
Are you comfortable with allocating new memory for callback private data?
On 3/10/22 10:46, Jinoh Kang wrote:
Are you comfortable with allocating new memory for callback private data?
In general, yes; we already do that for some asyncs. In fact the purpose of async_set_completion_callback() is usually to free that data.
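The tradeoff being discussed can be shown with a minimal sketch; the callback shapes and names below are hypothetical, not the actual wineserver async API. An integer that fits in a pointer can travel through the void* argument directly, as this patch does with data_size, while larger per-request state needs a heap allocation that the completion path then frees.

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

static unsigned int short_write_seen;

/* Option 1: a small integer rides in the pointer itself -- no allocation,
 * nothing to free on completion. */
static void on_send_done_packed( void *private, unsigned int result )
{
    unsigned int data_size = (unsigned int)(uintptr_t)private;
    short_write_seen = (result < data_size);
}

/* Option 2: larger per-request state lives on the heap; the completion
 * callback takes ownership and frees it. */
struct send_ctx
{
    unsigned long data_size;
    int flags;
};

static void on_send_done_heap( void *private, unsigned int result )
{
    struct send_ctx *ctx = private;
    short_write_seen = (result < ctx->data_size);
    free( ctx );
}
```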