On Monday 05 April 2004 05:21, Dimitrie O. Paun wrote:
> > Use #ifdef stuff so that when building on WINE, it uses unix sockets.
>
> We have to be smart about this; having #ifdefs all over the file is not
> acceptable. But from the look of the patch that was just posted, it
> certainly looks like we can achieve this without too much pain. But yes,
> I'm also curious how much of a hit we're taking.
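To illustrate what confining the choice to one spot could look like, here is a rough sketch; the HAVE_UNIX_SOCKETS macro and the WININET_OpenSocket helper are made-up names for illustration, not the actual wininet code:

/* Sketch only: keep the #ifdef inside a single helper so the rest of
 * the file never has to care which socket API is underneath. */
#ifdef HAVE_UNIX_SOCKETS
# include <sys/types.h>
# include <sys/socket.h>
# include <netinet/in.h>
# include <unistd.h>
typedef int netsock_t;
# define BAD_SOCKET (-1)
#else
# include <winsock2.h>
typedef SOCKET netsock_t;
# define BAD_SOCKET INVALID_SOCKET
#endif

/* Create a connected TCP socket; callers never see the #ifdef. */
static netsock_t WININET_OpenSocket(const struct sockaddr_in *addr)
{
    netsock_t s = socket(AF_INET, SOCK_STREAM, 0);
    if (s == BAD_SOCKET) return BAD_SOCKET;
    if (connect(s, (const struct sockaddr *)addr, sizeof(*addr)) != 0)
    {
#ifdef HAVE_UNIX_SOCKETS
        close(s);
#else
        closesocket(s);
#endif
        return BAD_SOCKET;
    }
    return s;
}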
Here are some timings for a loop of 100 synchronous HTTP GETs from localhost on a 777-byte file:
Unix sockets
real 0m4.099s | real 0m4.137s | real 0m4.092s | real 0m4.104s
user 0m0.480s | user 0m0.503s | user 0m0.514s | user 0m0.533s
sys  0m0.312s | sys  0m0.283s | sys  0m0.280s | sys  0m0.303s
Windows sockets
real 0m4.172s | real 0m4.255s | real 0m4.219s | real 0m4.867s
user 0m0.888s | user 0m0.858s | user 0m0.807s | user 0m0.936s
sys  0m0.910s | sys  0m0.820s | sys  0m0.839s | sys  0m0.988s
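For reference, the kind of loop behind these timings would look roughly like the sketch below; the URL, buffer size and use of InternetOpenUrl are illustrative assumptions, not the exact test program:

/* Sketch of a synchronous GET loop against a local server. */
#include <windows.h>
#include <wininet.h>

int main(void)
{
    char buf[1024];
    DWORD read;
    int i;
    HINTERNET req, ses;

    ses = InternetOpenA("bench", INTERNET_OPEN_TYPE_DIRECT, NULL, NULL, 0);
    if (!ses) return 1;

    for (i = 0; i < 100; i++)
    {
        req = InternetOpenUrlA(ses, "http://localhost/test.html",
                               NULL, 0, INTERNET_FLAG_RELOAD, 0);
        if (!req) break;
        /* Drain the response so every byte goes through wininet. */
        while (InternetReadFile(req, buf, sizeof(buf), &read) && read)
            ;
        InternetCloseHandle(req);
    }
    InternetCloseHandle(ses);
    return 0;
}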
So yes, there is a performance hit. In particular, the 'user' and 'sys' times are nearly 2 and 3 times higher, respectively, than with Unix sockets.
So, is this a problem? Depends on what's important to you, but I'd argue that it's more important for Wine to open up wininet (and consequently winsock) to more users and developers. That may eventually attract more developers to fix bugs or even the performance issues with our implementation.
I would also argue that performance in a typical scenario is probably not bounded by wininet's implementation but by the user's bandwidth or, for example, by his browser's rendering speed.
By the way, 100 *asynchronous* HTTP GETs in a tight loop will reliably crash Wine, both with Unix sockets and Windows sockets.
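For reference, "asynchronous" here means a request pattern roughly like the sketch below (INTERNET_FLAG_ASYNC plus a status callback); the callback handling is simplified and request handles are never closed, so treat it as an illustration, not the exact reproducer:

/* Sketch of the asynchronous variant mentioned above. */
#include <windows.h>
#include <wininet.h>

static HANDLE done;

static void CALLBACK status_cb(HINTERNET h, DWORD_PTR ctx, DWORD status,
                               LPVOID info, DWORD len)
{
    if (status == INTERNET_STATUS_REQUEST_COMPLETE)
        SetEvent(done);
}

int main(void)
{
    HINTERNET ses;
    int i;

    done = CreateEventA(NULL, FALSE, FALSE, NULL);
    ses = InternetOpenA("bench", INTERNET_OPEN_TYPE_DIRECT,
                        NULL, NULL, INTERNET_FLAG_ASYNC);
    if (!ses) return 1;
    InternetSetStatusCallbackA(ses, status_cb);

    for (i = 0; i < 100; i++)
    {
        /* In async mode the call usually returns NULL with
         * ERROR_IO_PENDING and completion arrives via the callback. */
        if (!InternetOpenUrlA(ses, "http://localhost/test.html",
                              NULL, 0, INTERNET_FLAG_RELOAD, 1) &&
            GetLastError() == ERROR_IO_PENDING)
            WaitForSingleObject(done, INFINITE);
    }
    InternetCloseHandle(ses);
    return 0;
}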
-Hans