On Sunday 27 October 2002 03:43 pm, Jürgen Schmied wrote:
client -> wineserver     (copies request-message to wineserver)
wineserver -> client     (gives wait handle back)
wineserver -> server     (server wakes up on its wait handle and gets request-message)
... server is working now ...
server -> wineserver     (server copies reply-message to wineserver)
client -> wineserver     (client is woken up and calls wineserver for data)
wineserver -> client     (client gets reply-message from wineserver)
So I would consider a different IPC mechanism for this.
You are right, this kind of defeats the purpose of LPC per se. Frankly, I'm not sure I understand why LPC is the recommended means for this in the first place.
What are our real (non-RPC) IPC capabilities right now?
o the wineserver
o the filesystem (hey, it's fast!)
o named pipes / winsockets (seems like a terrible solution to me)
o map-file-into-ram thingies (do they use the wineserver? how/when? see the mmap sketch below)
o unix stuff like signals, sockets, shm, etc. (allowed?)
o ? tons more I'm not thinking of...
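To make the map-file-into-ram option concrete, here is a minimal sketch of what I mean: plain unix mmap, nothing Wine-specific, with a made-up path and size. As far as I can tell, the Win32 side (CreateFileMapping / MapViewOfFile) only needs the wineserver to create or open the mapping object; once the view is mapped, reads and writes don't involve any server round trip.

/* Minimal sketch, not Wine code: two processes sharing a page through a
 * plain file mapping.  Path and size are made up for illustration.
 * Run once with an argument to write, once without to read. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define MAP_SIZE 4096

int main( int argc, char **argv )
{
    int fd = open( "/tmp/wine-ipc-demo", O_RDWR | O_CREAT, 0600 );
    if (fd == -1 || ftruncate( fd, MAP_SIZE ) == -1) { perror( "setup" ); return 1; }

    char *shared = mmap( NULL, MAP_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0 );
    if (shared == MAP_FAILED) { perror( "mmap" ); return 1; }

    if (argc > 1) strcpy( shared, argv[1] );          /* writer */
    else printf( "peer wrote: %s\n", shared );        /* reader */

    munmap( shared, MAP_SIZE );
    close( fd );
    return 0;
}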
I'm kind of clueless about some of these things... and, to be honest, I'm not even sure I've fully wrapped my mind around the precise nature of the problem space, which obviously doesn't help me solve it :)
Perhaps I should be asking: what IPC can occur /without/ the wineserver?
Or could we implement LPC in a better way?
Probably... doesn't WinNT use some clever trick with interlocked increment/decrement between threads? Couldn't we basically do the same? Or is that a wineserver call too? Whatever, I can see I'm not thinking clearly on the subject...
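For what it's worth, the interlocked operations themselves are plain CPU instructions and not wineserver calls, as far as I can tell. The kind of trick I mean looks roughly like this (a sketch of how I understand NT critical sections to work, not actual NT or Wine code; the struct, the names and the server-backed event are mine):

/* Sketch only.  "count" must live in memory both parties can see and be
 * initialized to -1 (free); "wake" stands in for a hypothetical
 * server-backed auto-reset event that is only touched on contention. */
#include <windows.h>

typedef struct
{
    LONG   count;   /* -1 = free; >= 0 means that many others hold or want it */
    HANDLE wake;    /* used only when two users actually collide */
} CHEAP_LOCK;

void cheap_lock_enter( CHEAP_LOCK *cl )
{
    if (InterlockedIncrement( &cl->count ) > 0)      /* somebody beat us to it */
        WaitForSingleObject( cl->wake, INFINITE );   /* the slow, server path  */
}

void cheap_lock_leave( CHEAP_LOCK *cl )
{
    if (InterlockedDecrement( &cl->count ) >= 0)     /* someone is waiting */
        SetEvent( cl->wake );                        /* wake exactly one   */
}

So the common, uncontended case never talks to the wineserver at all; only real contention pays for a wait.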
It seems to come down to: What are the absolutely most efficient ways for a wine developer to get:
  A. some shared memory,
  B. atomicity controls,
  C. IPC signalling (can be used to implement B., given A.)
Many, many problems are solvable given these features, of course, and this endpoint mapping business ought to be in this category, not to mention an LPC implementation, if we want that.
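As a toy illustration of how far A. plus B. alone already get us, here is a single-slot request/reply mailbox living in shared memory and handed off with nothing but interlocked operations (all the names are made up, and a real version would block instead of spinning, which is where C. comes in):

/* Sketch only: one request/reply slot in memory mapped by both sides. */
#include <windows.h>

#define SLOT_FREE    0   /* nobody owns the slot                      */
#define SLOT_BUSY    1   /* a client owns it and is writing a request */
#define SLOT_REQUEST 2   /* request ready for the server              */
#define SLOT_REPLY   3   /* reply ready for the client                */

typedef struct
{
    LONG state;          /* one of the SLOT_* values above */
    char data[256];      /* request on the way in, reply on the way out */
} MAILBOX;

/* client side: post a request and (crudely) spin until the reply shows up */
void mailbox_call( MAILBOX *mb, const char *request, char *reply, int len )
{
    while (InterlockedCompareExchange( &mb->state, SLOT_BUSY, SLOT_FREE ) != SLOT_FREE)
        Sleep( 0 );                                   /* slot busy, yield    */
    lstrcpynA( mb->data, request, sizeof(mb->data) );
    InterlockedExchange( &mb->state, SLOT_REQUEST );  /* publish the request */

    while (InterlockedCompareExchange( &mb->state, SLOT_REPLY, SLOT_REPLY ) != SLOT_REPLY)
        Sleep( 0 );                                   /* wait for the reply  */
    lstrcpynA( reply, mb->data, len );
    InterlockedExchange( &mb->state, SLOT_FREE );     /* give the slot back  */
}

/* server side: wait for one request, "process" it, post the reply */
void mailbox_serve_once( MAILBOX *mb )
{
    while (InterlockedCompareExchange( &mb->state, SLOT_REQUEST, SLOT_REQUEST ) != SLOT_REQUEST)
        Sleep( 0 );
    /* look up the endpoint (or whatever); here we just echo the request back */
    InterlockedExchange( &mb->state, SLOT_REPLY );
}

A real endpoint mapper would obviously want more than one slot and proper blocking, but the point is that none of the per-call traffic has to go through the wineserver.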
And now for a joke:
I know the answer!!! We need a kernel module for this!<<<