On 2020-08-21 00:50, Paul Gofman wrote:
On 8/21/20 01:17, Rémi Bernon wrote:
On 2020-08-21 00:03, Paul Gofman wrote:
On 8/21/20 00:47, Rémi Bernon wrote:
I was thinking we could do something like the attached patches: if mapping in the reserved areas fails during a bottom-up allocation, we reserve more memory and try again until it succeeds.
I think it's simpler and the existing code already handles the reservation with bisecting steps nicely. Wouldn't that work?
Of course this will cause some more memory to be permanently reserved for Windows, but it will then also be used as needed by subsequent allocations.
Alternatively we could release the over-reserved memory every time, but that also means we would have to reserve it again on the next allocation.
I guess something like that can work, but I don't understand how playing harder with reserved areas is simpler or less hacky. But maybe it's only me.
What will happen if there is a native mapping close to the previous reserved area's end and reserve_area() shrinks its size below the size we need? Anyway, that can probably be solved somehow by introducing a smarter search, along the lines of the existing search in try_map_free_area. And yeah, it's also not immediately clear to me whether the reserved areas need to be given back or not.
What I can't understand yet is how it is simpler or better to increase those free areas each time we run out of space in them (top-down allocs are rare, and there is no "don't care" flag). IMO it further twists the "free area" / "reserved area" allocation duality.
Well, the free ranges were mostly meant as an optimization over iterating the view tree, and they only track free addresses from the Windows point of view. The reserved areas are the addresses that Wine has reserved for its Windows usage, and which are therefore known not to be used by the system.
Yes, but it was previously implied that they could be reserved once at start. They apparently were not meant to reserve all the space for Wine VM allocations; they are probably needed to make address conflicts with non-relocatable modules less likely.
For me that works the same way an allocator reserves larger blocks from the system virtual memory allocator and then splits them into small blocks, depending on the allocation requests it serves for its clients.
In a way, yes. But it is not clear to me exactly which benefit you see in reserving extra space in advance? And in doing it in a way that now gives us distinct top-down and bottom-up areas?
FWIW, the logic could be extended to top-down allocations in a similar way, and could completely replace the non-reserved allocation case. Maybe that could even simplify the code?
Do you have concerns about performance? Would it help if I sent a test program with some test data extracted from a +virtual log of a game run that was previously hitting a problematic case? I also found DayZ Server very good for testing, as it allocates an insane number of small virtual memory chunks on startup.
BTW, I doubt anything like that, with a hard distinction between top-down and bottom-up areas, can work for x86, where we already face VM exhaustion in many cases. Again, that can probably be solved by adding some heuristics and better handling of "first chance" out-of-memory cases, or by going from the top-down / bottom-up distinction to something neater, but it won't make the thing simpler. Or it would have to be a separate path for x64 only, which also doesn't simplify things.
Yes, I added that as a quick way to prevent allocations from succeeding in the top reserved areas when they were supposed to be bottom up, but the idea isn't to prevent it if that is the only free memory region.
That could even be an issue, as the code will retry forever until it succeeds or memory is exhausted, which isn't going to end well in that case.
I thought of limiting the number of retries, or it could indeed be x64-specific -- IIRC the problem with high pointers doesn't happen on 32-bit anyway.