(cc/ Dongwan Kim: this is the second successor to the patch series I just replied to.)
This is my third attempt to tackle issue #52401 [1]: "Improper synchronization in sock_recv/sock_send leads to arbitrary reordering of completion of I/O requests", which was initially reported (outside Bugzilla) by Dongwan Kim [2].
Basically, the old process of sock_recv is:
1. Perform the I/O synchronously.
2. Report the result back to the server, or queue up the request in the server.
3. If blocking, wait for I/O to complete.
The new process of sock_recv would be:
1. Queue up the request in the server.
2. Perform the I/O synchronously.
3. Report the result back to the server.
4. If blocking, wait for I/O to complete.
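Schematically (a side-by-side sketch; "server call" refers to the single recv_socket request, and the helper names are those used in the patches below):

    /* Old:                                New:
     * 1. try_recv()                       1. recv_socket server call
     * 2. recv_socket server call          2. try_recv(), only if the server
     *    (report result or queue async)      returned STATUS_ALERTED
     * 3. wait_async() if blocking         3. notify_async() with the result
     *                                     4. wait_async() if blocking
     */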
Everything except the actual I/O requires communicating with wineserver. My goal here is to fix the issue without introducing extra calls to wineserver. (However, if it turns out that this goal is not of utmost significance, then this patch series can easily be modified so that it issues separate server calls.)
My previous approaches are listed here:
- Add a new select type called SELECT_SIGNAL_WAIT_ASYNC [3].
Zebediah has pointed out [4] that it is not very elegant to (ab-)use the synchronization machinery for communicating the async result.
- Use APC_ASYNC_IO to perform the synchronous I/O [5].
This ended up with a total of 11 patches, and turned out to be rather too complicated for a simple task.
So here comes my third approach, which hooks the "add_queue" method of async objects. Quoting [6]:
If we are not going with SIGNAL_WAIT_ASYNC, there's an alternate approach (avoiding three server round trips):
- Add a new wineserver request that calls async_set_result(), as in [1].
- Re-implement async's add_queue to switch to pending mode before calling the original add_queue.
In this approach, Unix side will handle synchronous I/O operation as follows:
- If the operation is completed, we request wineserver to call async_set_result() directly, and don't wait *at all*.
- If the operation needs to be queued (STATUS_PENDING), we do async_wait() as before. This will "magically" switch the async to STATUS_PENDING and continue polling.
Note "magically" -- we're injecting impure semantics into async's pre-wait stage. I don't think this would be a significant problem, since the "async" object type is internal to Wine anyway.
[1] https://bugs.winehq.org/show_bug.cgi?id=52401
[2] https://www.winehq.org/pipermail/wine-devel/2021-December/202546.html
[3] https://www.winehq.org/pipermail/wine-devel/2022-January/204695.html
[4] https://www.winehq.org/pipermail/wine-devel/2022-January/204710.html
[5] https://www.winehq.org/pipermail/wine-devel/2022-January/205168.html
[6] https://www.winehq.org/pipermail/wine-devel/2022-January/205193.html
Jinoh Kang (6):
  server: Allow calling async_handoff() with status code STATUS_ALERTED.
  server: Allow setting async result directly from Unix side.
  ntdll: Add a Unix-side wrapper function for "notify_async_direct_result".
  server: Attempt to complete I/O request immediately in recv_socket.
  ntdll: Don't call try_recv before server call in sock_recv.
  server: Replace redundant recv_socket status fields with force_async boolean field.

 dlls/ntdll/unix/socket.c       | 39 ++++++++++---------
 dlls/ntdll/unix/sync.c         | 21 ++++++++++
 dlls/ntdll/unix/unix_private.h |  1 +
 dlls/ws2_32/tests/sock.c       |  8 ++--
 server/async.c                 | 70 +++++++++++++++++++++++++++++-----
 server/protocol.def            | 12 +++++-
 server/sock.c                  | 40 +++++++++++++------
 7 files changed, 144 insertions(+), 47 deletions(-)
If the server detects that an I/O request could be completed immediately (e.g. the socket to read from already has incoming data), it can now return STATUS_ALERTED to allow opportunistic synchronous I/O. The Unix side will then attempt to perform the I/O in nonblocking mode and report the I/O status back to the server via notify_async_direct_result. If the operation returns e.g. EAGAIN or EWOULDBLOCK, the client can opt to either abandon the request (by specifying an error status) or poll for it in the server as usual (by waiting on the wait handle).
Without such a mechanism in place, the client cannot safely perform immediately satiable I/O operations synchronously, since they can potentially conflict with other pending I/O operations that have already been queued.
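For illustration, the client-side contract would look roughly like this (a condensed sketch built from the helpers introduced later in this series; error handling and the MSG_OOB path are omitted):

    if (status == STATUS_ALERTED)  /* server: data appears ready, try it yourself */
    {
        status = try_recv( fd, async, &information );  /* nonblocking attempt */
        if (status == STATUS_DEVICE_NOT_READY && may_restart)
            status = STATUS_PENDING;  /* re-queue: let the server poll as usual */
    }
    if (status != STATUS_PENDING)
        status = notify_async( wait_handle, status, information );  /* report result */
    else if (wait_handle)
        status = wait_async( wait_handle, options & FILE_SYNCHRONOUS_IO_ALERT );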
Signed-off-by: Jinoh Kang <jinoh.kang.kr@gmail.com>
---
 server/async.c | 28 ++++++++++++++++++++++++++--
 1 file changed, 26 insertions(+), 2 deletions(-)

diff --git a/server/async.c b/server/async.c
index 7aef28355f0..2a67892dbe2 100644
--- a/server/async.c
+++ b/server/async.c
@@ -65,6 +65,7 @@ struct async
 };
 
 static void async_dump( struct object *obj, int verbose );
+static int async_add_queue( struct object *obj, struct wait_queue_entry *entry );
 static int async_signaled( struct object *obj, struct wait_queue_entry *entry );
 static void async_satisfied( struct object * obj, struct wait_queue_entry *entry );
 static void async_destroy( struct object *obj );
@@ -74,7 +75,7 @@ static const struct object_ops async_ops =
     sizeof(struct async),      /* size */
     &no_type,                  /* type */
     async_dump,                /* dump */
-    add_queue,                 /* add_queue */
+    async_add_queue,           /* add_queue */
     remove_queue,              /* remove_queue */
     async_signaled,            /* signaled */
     async_satisfied,           /* satisfied */
@@ -105,6 +106,28 @@ static void async_dump( struct object *obj, int verbose )
     fprintf( stderr, "Async thread=%p\n", async->thread );
 }
 
+static int async_add_queue( struct object *obj, struct wait_queue_entry *entry )
+{
+    struct async *async = (struct async *)obj;
+    assert( obj->ops == &async_ops );
+
+    if (!async->pending && async->terminated && async->alerted)
+    {
+        /* The client has failed to complete synchronously (e.g. EWOULDBLOCK).
+         * Restart the async as fully fledged asynchronous I/O, where
+         * the completion port notification and APC call will be triggered
+         * appropriately. */
+        async->pending = 1;
+
+        /* Unset the signaled flag if the client wants to block on this async. */
+        if (async->blocking) async->signaled = 0;
+
+        async_set_result( obj, STATUS_PENDING, 0 ); /* kick it off */
+    }
+
+    return add_queue( obj, entry );
+}
+
 static int async_signaled( struct object *obj, struct wait_queue_entry *entry )
 {
     struct async *async = (struct async *)obj;
@@ -126,7 +149,8 @@ static void async_satisfied( struct object *obj, struct wait_queue_entry *entry
         async->direct_result = 0;
     }
 
-    if (async->initial_status == STATUS_PENDING && async->blocking)
+    if ((async->initial_status == STATUS_PENDING && async->blocking) ||
+        async->initial_status == STATUS_ALERTED)
         set_wait_status( entry, async->iosb->status );
     else
         set_wait_status( entry, async->initial_status );
On 1/23/22 11:29, Jinoh Kang wrote:
+static int async_add_queue( struct object *obj, struct wait_queue_entry *entry )
+{
+    struct async *async = (struct async *)obj;
+    assert( obj->ops == &async_ops );
+
+    if (!async->pending && async->terminated && async->alerted)
+    {
+        /* The client has failed to complete synchronously (e.g. EWOULDBLOCK).
+         * Restart the async as fully fledged asynchronous I/O, where
+         * the completion port notification and APC call will be triggered
+         * appropriately. */
+        async->pending = 1;
+
+        /* Unset the signaled flag if the client wants to block on this async. */
+        if (async->blocking) async->signaled = 0;
+
+        async_set_result( obj, STATUS_PENDING, 0 ); /* kick it off */
+    }
+
+    return add_queue( obj, entry );
+}
I'll admit, this kind of thing is why I didn't really want to have to try to optimize 3 server calls into 2. Asyncs are already really complicated, in terms of the many paths they can take, and it seems like no matter what we do they're going to get worse.
Still, I have a potential idea.
What we need to do here is similar to the infrastructure that already exists for device asyncs, namely "unknown_status" etc. It would be nice to use that instead of reinventing it, and although I haven't tried, it seems possible.
async_add_queue() as it is above is not great. I'm not sure that code actually works in every case; it definitely increases the mental burden even if it does. (Consider for instance that it would be triggered for *every* async).
Instead what I'd suggest is to use the request introduced here in every case, even if the initial status was pending. This introduces a new server call, but it only does so in cases where we already have to wait.
The end result would be not unlike get_next_device_request, which is essentially that request combined with some other things (and it doesn't deal in async object handles).
On 1/27/22 08:52, Zebediah Figura (she/her) wrote:
On 1/23/22 11:29, Jinoh Kang wrote:
+static int async_add_queue( struct object *obj, struct wait_queue_entry *entry )
+{
+    struct async *async = (struct async *)obj;
+    assert( obj->ops == &async_ops );
+
+    if (!async->pending && async->terminated && async->alerted)
+    {
+        /* The client has failed to complete synchronously (e.g. EWOULDBLOCK).
+         * Restart the async as fully fledged asynchronous I/O, where
+         * the completion port notification and APC call will be triggered
+         * appropriately. */
+        async->pending = 1;
+
+        /* Unset the signaled flag if the client wants to block on this async. */
+        if (async->blocking) async->signaled = 0;
+
+        async_set_result( obj, STATUS_PENDING, 0 ); /* kick it off */
+    }
+
+    return add_queue( obj, entry );
+}
I'll admit, this kind of thing is why I didn't really want to have to try to optimize 3 server calls into 2.
I concur. Hence,
(However, if it turns out that this goal is not of utmost significance, then this patch series can easily be modified so that it issues separate server calls.)
Asyncs are already really complicated,
To be fair, asynchronous I/O is inherently complicated in itself.
in terms of the many paths they can take,
Although that one has a lot of room for improvement, yes.
and it seems like no matter what we do they're going to get worse.
Still, I have a potential idea.
What we need to do here is similar to the infrastructure that already exists for device asyncs, namely "unknown_status" etc. It would be nice to use that instead of reinventing it, and although I haven't tried, it seems possible.
That one was on the table, too. In fact it can also help eliminate the initial_status == STATUS_ALERTED check.
One catch is that async_set_unknown_status also sets direct_result to 0, which means an APC is always fired off on completion. I wasn't entirely sure what the effects of { .unknown_status = 1, .direct_result = 1 } would be.
async_add_queue() as it is above is not great. I'm not sure that code actually works in every case;
!pending && terminated && alerted was the condition I was able to deduce for detecting this exact state. It does sound a little arbitrary though, especially since it tests three unrelated flags.
it definitely increases the mental burden even if it does. (Consider for instance that it would be triggered for *every* async).
Instead what I'd suggest is to use the request introduced here in every case, even if the initial status was pending.
You mean, along with use of unknown_status?
This introduces a new server call, but it only does so in cases where we already have to wait.
Sounds reasonable.
The end result would be not unlike get_next_device_request, which is essentially that request combined with some other things (and it doesn't deal in async object handles).
On 1/27/22 02:44, Jinoh Kang wrote:
What we need to do here is similar to the infrastructure that already exists for device asyncs, namely "unknown_status" etc. It would be nice to use that instead of reinventing it, and although I haven't tried, it seems possible.
That one was on the table, too. In fact it can also help eliminate the initial_status == STATUS_ALERTED check.
One catch is that async_set_unknown_status also sets direct_result to 0, which means an APC is always fired off on completion. I wasn't entirely sure what the effects of { .unknown_status = 1, .direct_result = 1 } would be.
I believe that !direct_result is correct here, actually. Note that we only get an APC when calling async_terminate(), but we shouldn't be introducing any extra terminate calls.
async_add_queue() as it is above is not great. I'm not sure that code actually works in every case;
!pending && terminated && alerted was the condition I was able to deduce for detecting this exact state. It does sound a little arbitrary though, especially since it tests three unrelated flags.
it definitely increases the mental burden even if it does. (Consider for instance that it would be triggered for *every* async).
Instead what I'd suggest is to use the request introduced here in every case, even if the initial status was pending.
You mean, along with use of unknown_status?
In a sense they're orthogonal, but yes, I think doing both would help.
On 1/27/22 00:52, Zebediah Figura (she/her) wrote:
On 1/23/22 11:29, Jinoh Kang wrote:
+static int async_add_queue( struct object *obj, struct wait_queue_entry *entry )
+{
+    struct async *async = (struct async *)obj;
+    assert( obj->ops == &async_ops );
+
+    if (!async->pending && async->terminated && async->alerted)
+    {
+        /* The client has failed to complete synchronously (e.g. EWOULDBLOCK).
+         * Restart the async as fully fledged asynchronous I/O, where
+         * the completion port notification and APC call will be triggered
+         * appropriately. */
+        async->pending = 1;
+
+        /* Unset the signaled flag if the client wants to block on this async. */
+        if (async->blocking) async->signaled = 0;
+
+        async_set_result( obj, STATUS_PENDING, 0 ); /* kick it off */
+    }
+
+    return add_queue( obj, entry );
+}
I'll admit, this kind of thing is why I didn't really want to have to try to optimize 3 server calls into 2. Asyncs are already really complicated, in terms of the many paths they can take, and it seems like no matter what we do they're going to get worse.
Did you consider moving the whole of sock_send() and try_send() into the server? We could then have a proper and reliable queue in the server socket object, and the client could just do a regular generic server ioctl. (I didn't really follow the conversation closely, so sorry if I missed something.)
Thanks,
Jacek
On 1/27/22 21:02, Jacek Caban wrote:
On 1/27/22 00:52, Zebediah Figura (she/her) wrote:
On 1/23/22 11:29, Jinoh Kang wrote:
+static int async_add_queue( struct object *obj, struct wait_queue_entry *entry )
+{
+    struct async *async = (struct async *)obj;
+    assert( obj->ops == &async_ops );
+
+    if (!async->pending && async->terminated && async->alerted)
+    {
+        /* The client has failed to complete synchronously (e.g. EWOULDBLOCK).
+         * Restart the async as fully fledged asynchronous I/O, where
+         * the completion port notification and APC call will be triggered
+         * appropriately. */
+        async->pending = 1;
+
+        /* Unset the signaled flag if the client wants to block on this async. */
+        if (async->blocking) async->signaled = 0;
+
+        async_set_result( obj, STATUS_PENDING, 0 ); /* kick it off */
+    }
+
+    return add_queue( obj, entry );
+}
I'll admit, this kind of thing is why I didn't really want to have to try to optimize 3 server calls into 2. Asyncs are already really complicated, in terms of the many paths they can take, and it seems like no matter what we do they're going to get worse.
Did you consider moving the whole of sock_send() and try_send() into the server?
The patchset is about sock_recv(), although I suppose sock_send() needs fixing as well.
We could then have a proper and reliable queue in the server socket object, and the client could just do a regular generic server ioctl. (I didn't really follow the conversation closely, so sorry if I missed something.)
For small packets it seems like a good idea. We still need to handle scatter/gather I/O though.
Thanks,
Jacek
On 1/27/22 06:02, Jacek Caban wrote:
On 1/27/22 00:52, Zebediah Figura (she/her) wrote:
On 1/23/22 11:29, Jinoh Kang wrote:
+static int async_add_queue( struct object *obj, struct wait_queue_entry *entry )
+{
+    struct async *async = (struct async *)obj;
+    assert( obj->ops == &async_ops );
+
+    if (!async->pending && async->terminated && async->alerted)
+    {
+        /* The client has failed to complete synchronously (e.g. EWOULDBLOCK).
+         * Restart the async as fully fledged asynchronous I/O, where
+         * the completion port notification and APC call will be triggered
+         * appropriately. */
+        async->pending = 1;
+
+        /* Unset the signaled flag if the client wants to block on this async. */
+        if (async->blocking) async->signaled = 0;
+
+        async_set_result( obj, STATUS_PENDING, 0 ); /* kick it off */
+    }
+
+    return add_queue( obj, entry );
+}
I'll admit, this kind of thing is why I didn't really want to have to try to optimize 3 server calls into 2. Asyncs are already really complicated, in terms of the many paths they can take, and it seems like no matter what we do they're going to get worse.
Did you consider moving the whole of sock_send() and try_send() into the server? We could then have a proper and reliable queue in the server socket object, and the client could just do a regular generic server ioctl. (I didn't really follow the conversation closely, so sorry if I missed something.)
That's actually a good point, thanks for making it. I think I actually did consider this when first moving things around, but decided against it on the grounds of keeping stuff out of the server...
Note that we do have to deal with scatter/gather, and the control and address in recvmsg, so it can't quite be a generic ioctl; we'd still need to use a special async I/O proc on the ntdll side. But it does obviate the need to change the async infrastructure at all.
One possible concern is that we'd end up doing a lot of large copies over the server request sockets. It's not clear to me if that's anything to worry about.
"Zebediah Figura (she/her)" zfigura@codeweavers.com writes:
On 1/27/22 06:02, Jacek Caban wrote:
On 1/27/22 00:52, Zebediah Figura (she/her) wrote:
On 1/23/22 11:29, Jinoh Kang wrote:
+static int async_add_queue( struct object *obj, struct wait_queue_entry *entry )
+{
+    struct async *async = (struct async *)obj;
+    assert( obj->ops == &async_ops );
+
+    if (!async->pending && async->terminated && async->alerted)
+    {
+        /* The client has failed to complete synchronously (e.g. EWOULDBLOCK).
+         * Restart the async as fully fledged asynchronous I/O, where
+         * the completion port notification and APC call will be triggered
+         * appropriately. */
+        async->pending = 1;
+
+        /* Unset the signaled flag if the client wants to block on this async. */
+        if (async->blocking) async->signaled = 0;
+
+        async_set_result( obj, STATUS_PENDING, 0 ); /* kick it off */
+    }
+
+    return add_queue( obj, entry );
+}
I'll admit, this kind of thing is why I didn't really want to have to try to optimize 3 server calls into 2. Asyncs are already really complicated, in terms of the many paths they can take, and it seems like no matter what we do they're going to get worse.
Did you consider moving the whole of sock_send() and try_send() into the server? We could then have a proper and reliable queue in the server socket object, and the client could just do a regular generic server ioctl. (I didn't really follow the conversation closely, so sorry if I missed something.)
That's actually a good point, thanks for making it. I think I actually did consider this when first moving things around, but decided against it on the grounds of keeping stuff out of the server...
Note that we do have to deal with scatter/gather, and the control and address in recvmsg, so it can't quite be a generic ioctl; we'd still need to use a special async I/O proc on the ntdll side. But it does obviate the need to change the async infrastructure at all.
One possible concern is that we'd end up doing a lot of large copies over the server request sockets. It's not clear to me if that's anything to worry about.
I think that's a concern, yes. You'd also need large buffers on the server side. It would of course make some things easier, but I'm not sure it's a good idea.
Some I/O operations need a way to "queue" the async to the target object first, even if the operation itself is to be completed synchronously. After synchronous completion, the client needs a way to report the IO_STATUS_BLOCK values back to the server.
Add a new wineserver request, "notify_async_direct_result", which notifies direct (i.e. synchronous) completion of async from the same thread.
Wrap the "notify_async_direct_result" request in a function named "notify_async", which is no-op if the supplied wait handle is NULL.
Signed-off-by: Jinoh Kang <jinoh.kang.kr@gmail.com>
---
 server/async.c      | 48 ++++++++++++++++++++++++++++++++++-----------
 server/protocol.def |  8 ++++++++
 2 files changed, 45 insertions(+), 11 deletions(-)

diff --git a/server/async.c b/server/async.c
index 2a67892dbe2..9b08c139929 100644
--- a/server/async.c
+++ b/server/async.c
@@ -135,11 +135,8 @@ static int async_signaled( struct object *obj, struct wait_queue_entry *entry )
     return async->signaled;
 }
 
-static void async_satisfied( struct object *obj, struct wait_queue_entry *entry )
+static void async_set_direct_result( struct async *async )
 {
-    struct async *async = (struct async *)obj;
-    assert( obj->ops == &async_ops );
-
     /* we only return an async handle for asyncs created via create_request_async() */
     assert( async->iosb );
 
@@ -149,18 +146,26 @@ static void async_satisfied( struct object *obj, struct wait_queue_entry *entry
         async->direct_result = 0;
     }
 
+    /* close wait handle here to avoid extra server round trip */
+    if (async->wait_handle)
+    {
+        close_handle( async->thread->process, async->wait_handle );
+        async->wait_handle = 0;
+    }
+}
+
+static void async_satisfied( struct object *obj, struct wait_queue_entry *entry )
+{
+    struct async *async = (struct async *)obj;
+    assert( obj->ops == &async_ops );
+
+    async_set_direct_result( async );
+
     if ((async->initial_status == STATUS_PENDING && async->blocking) ||
         async->initial_status == STATUS_ALERTED)
         set_wait_status( entry, async->iosb->status );
     else
         set_wait_status( entry, async->initial_status );
-
-    /* close wait handle here to avoid extra server round trip */
-    if (async->wait_handle)
-    {
-        close_handle( async->thread->process, async->wait_handle );
-        async->wait_handle = 0;
-    }
 }
 
 static void async_destroy( struct object *obj )
@@ -752,3 +757,24 @@ DECL_HANDLER(get_async_result)
     }
     set_error( iosb->status );
 }
+
+/* Notify direct completion of async and close the wait handle */
+DECL_HANDLER(notify_async_direct_result)
+{
+    struct async *async = (struct async *)get_handle_obj( current->process, req->handle, 0, &async_ops );
+    unsigned int status;
+
+    if (!async) return;
+
+    if (async->iosb)
+    {
+        async->iosb->status = req->status;
+        async->iosb->result = req->information;
+        async_set_direct_result( async );
+        status = req->status;
+    }
+    else status = STATUS_ACCESS_DENIED;
+
+    release_object( &async->obj );
+    set_error( status );
+}
diff --git a/server/protocol.def b/server/protocol.def
index db73f0418a9..c4596e0bd58 100644
--- a/server/protocol.def
+++ b/server/protocol.def
@@ -2147,6 +2147,14 @@ enum message_type
 @END
 
+/* Notify direct completion of async and close the wait handle */
+@REQ(notify_async_direct_result)
+    obj_handle_t handle;        /* wait handle */
+    unsigned int status;        /* completion status */
+    apc_param_t  information;   /* IO_STATUS_BLOCK Information */
+@END
+
+
 /* Perform a read on a file object */
 @REQ(read)
     async_data_t async;         /* async I/O parameters */
Wrap the "notify_async_direct_result" request in a function named "notify_async".
Signed-off-by: Jinoh Kang <jinoh.kang.kr@gmail.com>
---
 dlls/ntdll/unix/sync.c         | 21 +++++++++++++++++++++
 dlls/ntdll/unix/unix_private.h |  1 +
 2 files changed, 22 insertions(+)

diff --git a/dlls/ntdll/unix/sync.c b/dlls/ntdll/unix/sync.c
index 442243d8bcf..6b9ffd047d2 100644
--- a/dlls/ntdll/unix/sync.c
+++ b/dlls/ntdll/unix/sync.c
@@ -2510,3 +2510,24 @@ NTSTATUS WINAPI NtWaitForAlertByThreadId( const void *address, const LARGE_INTEG
 }
 
 #endif
+
+/* Notify direct completion of async and close the wait handle.
+ * This function is a no-op (returns status as-is) if the supplied handle is NULL.
+ */
+NTSTATUS notify_async( HANDLE optional_handle, NTSTATUS status, ULONG_PTR information )
+{
+    NTSTATUS ret;
+
+    if (!optional_handle) return status;
+
+    SERVER_START_REQ( notify_async_direct_result )
+    {
+        req->handle      = wine_server_obj_handle( optional_handle );
+        req->status      = status;
+        req->information = information;
+        ret = wine_server_call( req );
+    }
+    SERVER_END_REQ;
+
+    return ret;
+}
diff --git a/dlls/ntdll/unix/unix_private.h b/dlls/ntdll/unix/unix_private.h
index a79edabc37c..046818e482a 100644
--- a/dlls/ntdll/unix/unix_private.h
+++ b/dlls/ntdll/unix/unix_private.h
@@ -274,6 +274,7 @@ extern NTSTATUS get_device_info( int fd, struct _FILE_FS_DEVICE_INFORMATION *inf
 extern void init_files(void) DECLSPEC_HIDDEN;
 extern void init_cpu_info(void) DECLSPEC_HIDDEN;
 extern void add_completion( HANDLE handle, ULONG_PTR value, NTSTATUS status, ULONG info, BOOL async ) DECLSPEC_HIDDEN;
+extern NTSTATUS notify_async( HANDLE optional_handle, NTSTATUS status, ULONG_PTR information );
 
 extern void dbg_init(void) DECLSPEC_HIDDEN;
Make recv_socket alert the async immediately if a poll() call detects that there is incoming data in the socket, bypassing the wineserver's main polling loop.
Signed-off-by: Jinoh Kang <jinoh.kang.kr@gmail.com>
---
 dlls/ntdll/unix/socket.c | 24 ++++++++++++++++++------
 server/protocol.def      |  1 +
 server/sock.c            | 23 ++++++++++++++++++++++-
 3 files changed, 41 insertions(+), 7 deletions(-)

diff --git a/dlls/ntdll/unix/socket.c b/dlls/ntdll/unix/socket.c
index 71dfcdd1114..241e78888d4 100644
--- a/dlls/ntdll/unix/socket.c
+++ b/dlls/ntdll/unix/socket.c
@@ -686,6 +686,7 @@ static NTSTATUS sock_recv( HANDLE handle, HANDLE event, PIO_APC_ROUTINE apc, voi
     NTSTATUS status;
     unsigned int i;
     ULONG options;
+    BOOL may_restart;
 
     if (unix_flags & MSG_OOB)
     {
@@ -756,17 +757,28 @@ static NTSTATUS sock_recv( HANDLE handle, HANDLE event, PIO_APC_ROUTINE apc, voi
         status = wine_server_call( req );
         wait_handle = wine_server_ptr_handle( reply->wait );
         options = reply->options;
-        if ((!NT_ERROR(status) || wait_handle) && status != STATUS_PENDING)
+        may_restart = reply->may_restart;
+    }
+    SERVER_END_REQ;
+
+    if (status == STATUS_ALERTED)
+    {
+        status = try_recv( fd, async, &information );
+        if (status == STATUS_DEVICE_NOT_READY && may_restart)
+            status = STATUS_PENDING;
+    }
+
+    if (status != STATUS_PENDING)
+    {
+        if (!NT_ERROR(status) || wait_handle)
         {
             io->Status = status;
             io->Information = information;
         }
+        release_fileio( &async->io );
+        status = notify_async( wait_handle, status, information );
     }
-    SERVER_END_REQ;
-
-    if (status != STATUS_PENDING) release_fileio( &async->io );
-
-    if (wait_handle) status = wait_async( wait_handle, options & FILE_SYNCHRONOUS_IO_ALERT );
+    else if (wait_handle) status = wait_async( wait_handle, options & FILE_SYNCHRONOUS_IO_ALERT );
     return status;
 }
 
diff --git a/server/protocol.def b/server/protocol.def
index c4596e0bd58..24b964e7d02 100644
--- a/server/protocol.def
+++ b/server/protocol.def
@@ -1440,6 +1440,7 @@ enum server_fd_type
 @REPLY
     obj_handle_t wait;          /* handle to wait on for blocking recv */
     unsigned int options;       /* device open options */
+    int may_restart;            /* May restart async? */
 @END
 
diff --git a/server/sock.c b/server/sock.c
index 650e67a2e0a..947de81723a 100644
--- a/server/sock.c
+++ b/server/sock.c
@@ -3397,6 +3397,7 @@ DECL_HANDLER(recv_socket)
 {
     struct sock *sock = (struct sock *)get_handle_obj( current->process, req->async.handle, 0, &sock_ops );
     unsigned int status = req->status;
+    int pending = 0;
     timeout_t timeout = 0;
     struct async *async;
     struct fd *fd;
@@ -3422,6 +3423,25 @@
     if ((status == STATUS_PENDING || status == STATUS_DEVICE_NOT_READY) && sock->rd_shutdown)
         status = STATUS_PIPE_DISCONNECTED;
 
+    /* NOTE: If read_q is not empty, we cannot really tell if the already queued asyncs
+     * NOTE: will not consume all available data; if there's no data available,
+     * NOTE: the current request won't be immediately satiable. */
+    if ((status == STATUS_PENDING || status == STATUS_DEVICE_NOT_READY) && !async_queued( &sock->read_q ))
+    {
+        struct pollfd pollfd;
+        pollfd.fd = get_unix_fd( sock->fd );
+        pollfd.events = req->oob ? POLLPRI : POLLIN;
+        pollfd.revents = 0;
+        if (poll(&pollfd, 1, 0) >= 0 && pollfd.revents)
+        {
+            /* Give the client opportunity to complete synchronously.
+             * If it turns out that the I/O request is not actually immediately satiable,
+             * the client may then choose to re-queue the async (with STATUS_PENDING). */
+            pending = status == STATUS_PENDING;
+            status = STATUS_ALERTED;
+        }
+    }
+
     sock->pending_events &= ~(req->oob ? AFD_POLL_OOB : AFD_POLL_READ);
     sock->reported_events &= ~(req->oob ? AFD_POLL_OOB : AFD_POLL_READ);
 
@@ -3438,7 +3458,7 @@ DECL_HANDLER(recv_socket)
         if (timeout)
             async_set_timeout( async, timeout, STATUS_IO_TIMEOUT );
 
-        if (status == STATUS_PENDING)
+        if (status == STATUS_PENDING || status == STATUS_ALERTED)
             queue_async( &sock->read_q, async );
 
         /* always reselect; we changed reported_events above */
@@ -3446,6 +3466,7 @@
 
         reply->wait = async_handoff( async, NULL, 0 );
         reply->options = get_fd_options( fd );
+        reply->may_restart = pending && status == STATUS_ALERTED;
         release_object( async );
     }
     release_object( sock );
Otherwise, the try_recv() call from sock_recv() may race against the try_recv() call from async_recv_proc(), shuffling the packet order.
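To illustrate the race on a single socket (a hypothetical interleaving; T1 and T2 issue two recv requests):

    /* Old order: try_recv() runs before the server sees the request.
     *
     * T1: recv #1 -> try_recv() -> EWOULDBLOCK -> async #1 queued in server
     *     ... data arrives ...
     * T2: recv #2 -> try_recv() succeeds, consuming the data that should have
     *     completed the already-queued async #1
     * server: wakes async #1 -> async_recv_proc() -> try_recv() returns
     *     whatever arrives next
     *
     * recv #2 thus completes with data that was ahead of recv #1's data. */

Queueing the request first closes this window, since the server then sees request #2 behind the already-queued async #1 and completes them in order.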
Wine-Bug: https://bugs.winehq.org/show_bug.cgi?id=52401
Signed-off-by: Jinoh Kang <jinoh.kang.kr@gmail.com>
---
 dlls/ntdll/unix/socket.c | 12 ++----------
 dlls/ws2_32/tests/sock.c |  8 ++++----
 2 files changed, 6 insertions(+), 14 deletions(-)

diff --git a/dlls/ntdll/unix/socket.c b/dlls/ntdll/unix/socket.c
index 241e78888d4..15b805643e4 100644
--- a/dlls/ntdll/unix/socket.c
+++ b/dlls/ntdll/unix/socket.c
@@ -737,16 +737,8 @@ static NTSTATUS sock_recv( HANDLE handle, HANDLE event, PIO_APC_ROUTINE apc, voi
         }
     }
 
-    status = try_recv( fd, async, &information );
-
-    if (status != STATUS_SUCCESS && status != STATUS_BUFFER_OVERFLOW && status != STATUS_DEVICE_NOT_READY)
-    {
-        release_fileio( &async->io );
-        return status;
-    }
-
-    if (status == STATUS_DEVICE_NOT_READY && force_async)
-        status = STATUS_PENDING;
+    status = force_async ? STATUS_PENDING : STATUS_DEVICE_NOT_READY;
+    information = 0;
 
     SERVER_START_REQ( recv_socket )
     {
diff --git a/dlls/ws2_32/tests/sock.c b/dlls/ws2_32/tests/sock.c
index 054e597b719..4199676f460 100644
--- a/dlls/ws2_32/tests/sock.c
+++ b/dlls/ws2_32/tests/sock.c
@@ -7750,8 +7750,8 @@ static void test_shutdown(void)
 
     WSASetLastError(0xdeadbeef);
     ret = recv(server, buffer, sizeof(buffer), 0);
-    todo_wine ok(ret == -1, "got %d\n", ret);
-    todo_wine ok(WSAGetLastError() == WSAESHUTDOWN, "got error %u\n", WSAGetLastError());
+    ok(ret == -1, "got %d\n", ret);
+    ok(WSAGetLastError() == WSAESHUTDOWN, "got error %u\n", WSAGetLastError());
 
     ret = send(server, "test", 5, 0);
     ok(ret == 5, "got %d\n", ret);
@@ -7845,8 +7845,8 @@ static void test_shutdown(void)
 
     WSASetLastError(0xdeadbeef);
     ret = recv(server, buffer, sizeof(buffer), 0);
-    todo_wine ok(ret == -1, "got %d\n", ret);
-    todo_wine ok(WSAGetLastError() == WSAESHUTDOWN, "got error %u\n", WSAGetLastError());
+    ok(ret == -1, "got %d\n", ret);
+    ok(WSAGetLastError() == WSAESHUTDOWN, "got error %u\n", WSAGetLastError());
 
     WSASetLastError(0xdeadbeef);
     ret = send(server, "test", 5, 0);
The 'status' field of recv_socket_request is always either STATUS_PENDING or STATUS_DEVICE_NOT_READY, and the 'total' field is always zero.
Replace the 'status' field with 'force_async' boolean field, and get rid of the 'total' field entirely.
Also, clean up the recv_socket handler code a bit.
Signed-off-by: Jinoh Kang <jinoh.kang.kr@gmail.com>
---
 dlls/ntdll/unix/socket.c |  7 ++-----
 server/protocol.def      |  3 +--
 server/sock.c            | 23 +++++++++--------------
 3 files changed, 12 insertions(+), 21 deletions(-)

diff --git a/dlls/ntdll/unix/socket.c b/dlls/ntdll/unix/socket.c
index 15b805643e4..20134f607c3 100644
--- a/dlls/ntdll/unix/socket.c
+++ b/dlls/ntdll/unix/socket.c
@@ -737,13 +737,9 @@ static NTSTATUS sock_recv( HANDLE handle, HANDLE event, PIO_APC_ROUTINE apc, voi
         }
     }
 
-    status = force_async ? STATUS_PENDING : STATUS_DEVICE_NOT_READY;
-    information = 0;
-
     SERVER_START_REQ( recv_socket )
     {
-        req->status = status;
-        req->total = information;
+        req->force_async = force_async;
         req->async = server_async( handle, &async->io, event, apc, apc_user, iosb_client_ptr(io) );
         req->oob = !!(unix_flags & MSG_OOB);
         status = wine_server_call( req );
@@ -753,6 +749,7 @@ static NTSTATUS sock_recv( HANDLE handle, HANDLE event, PIO_APC_ROUTINE apc, voi
     }
     SERVER_END_REQ;
 
+    information = 0;
     if (status == STATUS_ALERTED)
     {
         status = try_recv( fd, async, &information );
diff --git a/server/protocol.def b/server/protocol.def
index 24b964e7d02..f3c9aae9e51 100644
--- a/server/protocol.def
+++ b/server/protocol.def
@@ -1435,8 +1435,7 @@ enum server_fd_type
 @REQ(recv_socket)
     int oob;                    /* are we receiving OOB data? */
     async_data_t async;         /* async I/O parameters */
-    unsigned int status;        /* status of initial call */
-    unsigned int total;         /* number of bytes already read */
+    int force_async;            /* Force asynchronous mode? */
 @REPLY
     obj_handle_t wait;          /* handle to wait on for blocking recv */
     unsigned int options;       /* device open options */
diff --git a/server/sock.c b/server/sock.c
index 947de81723a..a00728b88e5 100644
--- a/server/sock.c
+++ b/server/sock.c
@@ -3396,8 +3396,8 @@ struct object *create_socket_device( struct object *root, const struct unicode_s
 DECL_HANDLER(recv_socket)
 {
     struct sock *sock = (struct sock *)get_handle_obj( current->process, req->async.handle, 0, &sock_ops );
-    unsigned int status = req->status;
-    int pending = 0;
+    unsigned int status = STATUS_PENDING;
+    int pending = req->force_async;
     timeout_t timeout = 0;
     struct async *async;
     struct fd *fd;
@@ -3405,8 +3405,8 @@ DECL_HANDLER(recv_socket)
     if (!sock) return;
     fd = sock->fd;
 
-    /* recv() returned EWOULDBLOCK, i.e. no data available yet */
-    if (status == STATUS_DEVICE_NOT_READY && !sock->nonblocking)
+    /* Synchronous, *blocking* I/O requested? */
+    if (!req->force_async && !sock->nonblocking)
     {
         /* Set a timeout on the async if necessary.
          *
@@ -3417,16 +3417,16 @@ DECL_HANDLER(recv_socket)
         if (is_fd_overlapped( fd ))
             timeout = (timeout_t)sock->rcvtimeo * -10000;
 
-        status = STATUS_PENDING;
+        pending = 1;
     }
 
-    if ((status == STATUS_PENDING || status == STATUS_DEVICE_NOT_READY) && sock->rd_shutdown)
+    if (status == STATUS_PENDING && sock->rd_shutdown)
         status = STATUS_PIPE_DISCONNECTED;
 
     /* NOTE: If read_q is not empty, we cannot really tell if the already queued asyncs
      * NOTE: will not consume all available data; if there's no data available,
      * NOTE: the current request won't be immediately satiable. */
-    if ((status == STATUS_PENDING || status == STATUS_DEVICE_NOT_READY) && !async_queued( &sock->read_q ))
+    if (status == STATUS_PENDING && !async_queued( &sock->read_q ))
     {
         struct pollfd pollfd;
         pollfd.fd = get_unix_fd( sock->fd );
@@ -3437,22 +3437,17 @@ DECL_HANDLER(recv_socket)
             /* Give the client opportunity to complete synchronously.
             * If it turns out that the I/O request is not actually immediately satiable,
             * the client may then choose to re-queue the async (with STATUS_PENDING). */
-            pending = status == STATUS_PENDING;
             status = STATUS_ALERTED;
         }
     }
 
+    if (!pending && status == STATUS_PENDING) status = STATUS_DEVICE_NOT_READY;
+
     sock->pending_events &= ~(req->oob ? AFD_POLL_OOB : AFD_POLL_READ);
     sock->reported_events &= ~(req->oob ? AFD_POLL_OOB : AFD_POLL_READ);
 
     if ((async = create_request_async( fd, get_fd_comp_flags( fd ), &req->async )))
     {
-        if (status == STATUS_SUCCESS)
-        {
-            struct iosb *iosb = async_get_iosb( async );
-            iosb->result = req->total;
-            release_object( iosb );
-        }
         set_error( status );
 
         if (timeout)