Just a comment on the mentioned use of LPC to implement the server part of RPC: since Wine uses a server, every LPC call causes a lot of context switching. LPC through the wine server will never be as fast as on WinNT. The client would have to wait on the client side, which means a client call needs 2 round trips to the server. There are 3 processes playing a role: the client, the wineserver, and the server. Where WinNT needs only one context switch from the client to the server and one back, we need at least 6:
client -> wineserver (copies request-message to wineserver)
wineserver -> client (gives wait handle back)
wineserver -> server (server wakes up on its wait handle and gets request-message)
... server is working now ...
server -> wineserver (server copies reply-message to wineserver)
client -> wineserver (client is woken up and calls wineserver for data)
wineserver -> client (client gets reply-message from wineserver)
So I would consider a different IPC mechanism for this. Or could we implement LPC in a better way?
juergen --- juergen.schmied@debitel.net
On Sunday 27 October 2002 03:43 pm, Jürgen Schmied wrote:
client -> wineserver (copies request-message to wineserver)
wineserver -> client (gives wait handle back)
wineserver -> server (server wakes up on its wait handle and gets request-message)
... server is working now ...
server -> wineserver (server copies reply-message to wineserver)
client -> wineserver (client is woken up and calls wineserver for data)
wineserver -> client (client gets reply-message from wineserver)
So I would consider a different IPC mechanism for this.
you are right, this kind of defeats the purpose of LPC per se. Frankly, I'm not sure I understand why LPC is the recommended means for this in the first place.
What are our real (non RPC) IPC capabilities right now?
o the wineserver
o the filesystem (hey, it's fast!)
o named pipes / winsockets (seems like a terrible solution to me)
o map-file-into-RAM thingys (do they use the wineserver? how/when?)
o unix stuff like signals, sockets, shm, etc. (allowed?)
o ? tons more I'm not thinking of...
I'm kind of clueless about some of these things... and, to be honest, I'm not even sure I've fully wrapped my mind around the precise nature of the problem-space, which obviously doesn't help me to solve this problem :)
Perhaps I should be asking: what IPC can occur /without/ the wineserver?
Or could we implement LPC in a better way?
probably... doesn't winnt use some clever trick with interlocked increment/decrement between threads? couldn't we basically do the same? or is that a wineserver call too? whatever, I can see I'm not thinking clearly on the subject...
It seems to come down to: What are the absolutely most efficient ways for a wine developer to get:
A. some shared memory,
B. atomicity controls,
C. IPC signalling (can be used to implement B., given A.)
Many, many problems are solvable given these features, of course, and this endpoint mapping business ought to be in this category, not to mention an LPC implementation, if we want that.
And now for a joke:
I know the answer!!! We need a kernel module for this!<<<
It seems to come down to: What are the absolutely most efficient ways for a wine developer to get:
A. some shared memory,
mmap() of /dev/zero (or MAP_ANON) with MAP_SHARED set.
David
On Mon, 28 Oct 2002, Greg Turner wrote:
On Sunday 27 October 2002 03:43 pm, Jürgen Schmied wrote:
client -> wineserver (copies request-message to wineserver)
wineserver -> client (gives wait handle back)
wineserver -> server (server wakes up on its wait handle and gets request-message)
... server is working now ...
server -> wineserver (server copies reply-message to wineserver)
client -> wineserver (client is woken up and calls wineserver for data)
wineserver -> client (client gets reply-message from wineserver)
So I would consider a different IPC mechanism for this.
you are right, this kind of defeats the purpose of LPC per-se. Frankly, I'm not sure I understand why LPC is the recommended means for this in the first place.
Probably because it's "the Microsoft way" (i.e. the right thing for Wine), but it wasn't me who recommended it. I'm fine with named pipes.
But why is it such an imperative to implement a superfast RPC transport right now? Wouldn't the old rule "get it working first, then optimize" mean more than usual here, given that we're dealing with undocumented stuff that we don't fully know how is supposed to work, and may not get right? Why make getting it right harder by adding a lot of complexity, instead of adding infrastructure? For example, the RPC server parts need to be rewritten to serve requests asynchronously in a flexible way, and we don't know *exactly* how that's done in Windows; this is more important (and more likely to kill performance than a few context switches). Yet you risk making such a fundamental issue very difficult to fix by trying to bolt a complex transport on top, when that can always be done more easily later, with a more reliable infrastructure in place.
What are our real (non RPC) IPC capabilities right now?
o the wineserver
o the filesystem (hey, it's fast!)
o named pipes / winsockets (seems like a terrible solution to me)
o map-file-into-RAM thingys (do they use the wineserver? how/when?)
o unix stuff like signals, sockets, shm, etc. (allowed?)
o ? tons more I'm not thinking of...
I'm kind of clueless about some of these things... and, to be honest, I'm not even sure I've fully wrapped my mind around the precise nature of the problem-space, which obviously doesn't help me to solve this problem :)
Perhaps I should be asking: what IPC can occur /without/ the wineserver?
IPC is pretty much the most important reason for having the wineserver in the first place, so I don't see why we should ignore it... in most cases you only need it to set up the channel anyway. Once the channel is set up, you only need to talk to the kernel. For example, setting up shared memory is done through the wineserver, but once mapped, you can use that shared memory without bothering the wineserver. Similarly, you can tell the wineserver to set up a pipe (named or unnamed), then just acquire its Unix file descriptors, and then use these for further interprocess communication (provided you don't need to check for the other end doing anything Windowsy). Since these are mapped straight to Unix equivalents (named pipes = unix socket pair, unnamed pipes = unix pipes), the wineserver isn't involved in the actual data transfer.
It seems to come down to: What are the absolutely most efficient ways for a wine developer to get:
A. some shared memory,
B. atomicity controls,
C. IPC signalling (can be used to implement B., given A.)
What do you want to signal? It sounds like all you can implement with just this is busy-wait-loops. Surely you're not thinking cpu-wasting busy waits are more efficient than the wineserver?
Many, many problems are solvable given these features, of course, and this endpoint mapping business ought to be in this category, not to mention an LPC implementation, if we want that.
The endpoint mapper is a service that's accessed through RPC, it shouldn't be a client-side feature in shared memory. In Windows, the local endpoint mapper is hosted by rpcss.exe. (In Wine, it could be implemented by launching a Wine-rpcss whenever we need an endpoint-mapping service to hold an endpoint registration, and make it stay alive for (only) as long as the endpoint registrations it holds are still alive.)
The IDL definition of the endpoint mapper's RPC interface can probably be found in freedce or samba/samba-tng. But that'd mean marshalling would have to work first, which is why I implemented it the wineserver way to begin with...
Anyway, I'll try to help with these RPC efforts soon, but I'm still a bit sick (and in the little time I'm still able to do stuff, matters of greater urgency tend to turn up), so I haven't been able to start yet...