https://bugs.winehq.org/show_bug.cgi?id=49113
--- Comment #5 from Rémi Bernon <rbernon@codeweavers.com> ---
Created attachment 67093 --> https://bugs.winehq.org/attachment.cgi?id=67093
Dishonored 2 loading time
(In reply to Zebediah Figura from comment #2)
> (In reply to Rémi Bernon from comment #0)
> > In general this does not translate into much of a slowdown, as memory
> > allocation is rarely done in such a highly concurrent way, but in some
> > situations the difference is clearly noticeable, particularly in many
> > games during their loading times.
> I'm not expecting leagues of difference of course, as you say, but all
> the same could you give some exact numbers for a handful of specific
> titles?
I may be overselling it a bit, and it's actually hard to measure precisely.

Here, for instance, are the individual frame times measured during the loading of Dishonored 2, with the standard heap and with the thread-local implementation.
(In reply to Dmitry Timoshkov from comment #3)
> This seems to be going in the wrong direction (is the actual problem due
> to locking primitives being inefficient?), since the whole effort has
> been driven by artificial tests, and as a result there's no visible
> improvement for real-world applications. On the contrary, Sebastian's
> patchset in the staging tree was based on research and a proper heap
> manager design, and as a result provided huge performance improvements
> for real-world applications.

Of course, optimizing locking primitives also helps, and esync and fsync have an effect there as well. I don't think the two approaches are mutually exclusive.
(In reply to Dmitry Timoshkov from comment #4)
> I'd suggest spending the effort on mainlining Sebastian's patchset
> instead.
Sure.