Currently, the free lists consist of a "small list" group for block sizes up to 256 bytes, which are linearly spaced, and a "large list" group which is manually split into a few buckets.
This patch replaces both with a single log-linear bucketing policy, while expanding the range the free lists cover.
The old implementation had issues when a lot of large allocations happened: they all went into the last catch-all bucket of the "large list", so that:
1. The linked list grew over time, causing search cost to skyrocket.
2. With the first-fit allocation policy, fragmentation made the problem worse.
The new bucketing covers the entire range up until we start allocating large blocks, which never enter the free lists. It also moves the allocation policy closer to best-fit (although not exactly), reducing fragmentation.
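As a concrete example of the mapping (using the log/linear terms from get_free_list_index() in the patch, and assuming a 64-bit build where BLOCK_ALIGN is 16): a free block of 992 bytes has block_size / BLOCK_ALIGN = 62. The highest set bit of 62 is bit 5, so log = 5 - 3 + 1 = 3 and linear = (62 >> 2) & 7 = 7, giving free list index 3 * 8 + 7 = 31. That bucket starts at (8 + 7) << 2 = 60 blocks (960 bytes) and the next one at 64 blocks (1024 bytes), so every doubling in size is split into 8 buckets.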
The increased number of free lists does incur some cost when empty lists need to be skipped over, but the improvement in allocation performance outweighs it.
For future work, these ideas (mostly from glibc) might or might not benefit performance:
- Use an exact best-fit allocation policy.
- Add a bitmap for the free lists, allowing empty lists to be skipped with a single bit scan (a rough sketch follows).
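The sketch below is only an illustration of that second idea, not part of this patch. It assumes a hypothetical free_list_bitmap kept in sync whenever a list becomes empty or non-empty, and it uses BitScanForward64, so as written it is 64-bit only:

/* hypothetical: one bit per free list, set while the list is non-empty;
 * in practice this would live in struct heap rather than at file scope */
ULONG64 free_list_bitmap[(FREE_LIST_COUNT + 63) / 64];

/* return the first non-empty free list at or above start_index, or FREE_LIST_COUNT if none */
static unsigned int find_nonempty_free_list( unsigned int start_index )
{
    unsigned int word = start_index / 64;
    ULONG64 mask = free_list_bitmap[word] & (~(ULONG64)0 << (start_index % 64));
    DWORD bit;

    for (;;)
    {
        if (BitScanForward64( &bit, mask )) return word * 64 + bit;
        if (++word >= ARRAY_SIZE(free_list_bitmap)) return FREE_LIST_COUNT;
        mask = free_list_bitmap[word];
    }
}

This would replace the linear walk over empty list heads when looking for the next larger non-empty bucket.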
As for the benchmark, this drastically improves initial shader loading performance in Overwatch 2. In this workload, 78k shaders are passed to DXVK for DXBC -> SPIRV translation, and for each shader a few allocations in the 4K – 100K range happen for the staging buffer.
Before this patch, malloc accounted for a whopping 43% of overhead. With log-linear bucketing the overhead is drastically lower, resulting in a ~2x improvement in loading time.
Overhead for each `FREE_LIST_LINEAR_BITS` value is as below:
- 0: 7.7%
- 1: 2.9%
- 2: 1.3%
- 3: 0.6%
Since performance seems to scale linearly with the number of buckets (up to the values I have tested), I've opted for 3 (8 buckets per doubling) in the current revision of the patch.
Signed-off-by: Tatsuyuki Ishi <ishitatsuyuki@gmail.com>
--
v5: ntdll: Use log-linear bucketing for free lists.
From: Tatsuyuki Ishi <ishitatsuyuki@gmail.com>
Currently, the free lists consist of a "small list" group for block sizes up to 256 bytes, which are linearly spaced, and a "large list" group which is manually split into a few buckets.
This patch replaces both with a single log-linear bucketing policy, while expanding the range the free lists cover.
The old implementation had issues when a lot of large allocations happened: they all went into the last catch-all bucket of the "large list", so that:
1. The linked list grew over time, causing search cost to skyrocket.
2. With the first-fit allocation policy, fragmentation made the problem worse.
The new bucketing covers the entire range up until we start allocating large blocks, which never enter the free lists. It also moves the allocation policy closer to best-fit (although not exactly), reducing fragmentation.
The increased number of free lists does incur some cost when empty lists need to be skipped over, but the improvement in allocation performance outweighs it.
For future work, these ideas (mostly from glibc) might or might not benefit performance:
- Use an exact best-fit allocation policy.
- Add a bitmap for the free lists, allowing empty lists to be skipped with a single bit scan.
Signed-off-by: Tatsuyuki Ishi <ishitatsuyuki@gmail.com>
---
 dlls/kernel32/tests/heap.c |   2 -
 dlls/ntdll/heap.c          | 101 ++++++++++++++++++++++++-------------
 2 files changed, 65 insertions(+), 38 deletions(-)
diff --git a/dlls/kernel32/tests/heap.c b/dlls/kernel32/tests/heap.c
index b0b56132393..d2742b55495 100644
--- a/dlls/kernel32/tests/heap.c
+++ b/dlls/kernel32/tests/heap.c
@@ -452,9 +452,7 @@ static void test_HeapCreate(void)
 
     SetLastError( 0xdeadbeef );
     ptr1 = HeapAlloc( heap, 0, alloc_size - (0x200 + 0x80 * sizeof(void *)) );
-    todo_wine
     ok( !ptr1, "HeapAlloc succeeded\n" );
-    todo_wine
     ok( GetLastError() == ERROR_NOT_ENOUGH_MEMORY, "got error %lu\n", GetLastError() );
     ret = HeapFree( heap, 0, ptr1 );
     ok( ret, "HeapFree failed, error %lu\n", GetLastError() );
diff --git a/dlls/ntdll/heap.c b/dlls/ntdll/heap.c
index 6688fab9690..5a19b5d73b1 100644
--- a/dlls/ntdll/heap.c
+++ b/dlls/ntdll/heap.c
@@ -146,7 +146,8 @@ C_ASSERT( sizeof(ARENA_LARGE) == 4 * BLOCK_ALIGN );
 
 #define ROUND_ADDR(addr, mask) ((void *)((UINT_PTR)(addr) & ~(UINT_PTR)(mask)))
 #define ROUND_SIZE(size, mask) ((((SIZE_T)(size) + (mask)) & ~(SIZE_T)(mask)))
-#define FIELD_MAX(type, field) (((SIZE_T)1 << (sizeof(((type *)0)->field) * 8)) - 1)
+#define FIELD_BITS(type, field) (sizeof(((type *)0)->field) * 8)
+#define FIELD_MAX(type, field) (((SIZE_T)1 << FIELD_BITS(type, field)) - 1)
 
 #define HEAP_MIN_BLOCK_SIZE  ROUND_SIZE(sizeof(struct entry) + BLOCK_ALIGN, BLOCK_ALIGN - 1)
 
@@ -168,17 +169,11 @@ C_ASSERT( HEAP_MAX_FREE_BLOCK_SIZE >= HEAP_MAX_BLOCK_REGION_SIZE );
 /* minimum size to start allocating large blocks */
 #define HEAP_MIN_LARGE_BLOCK_SIZE  (HEAP_MAX_USED_BLOCK_SIZE - 0x1000)
 
-/* There will be a free list bucket for every arena size up to and including this value */
-#define HEAP_MAX_SMALL_FREE_LIST 0x100
-C_ASSERT( HEAP_MAX_SMALL_FREE_LIST % BLOCK_ALIGN == 0 );
-#define HEAP_NB_SMALL_FREE_LISTS (((HEAP_MAX_SMALL_FREE_LIST - HEAP_MIN_BLOCK_SIZE) / BLOCK_ALIGN) + 1)
-
-/* Max size of the blocks on the free lists above HEAP_MAX_SMALL_FREE_LIST */
-static const SIZE_T free_list_sizes[] =
-{
-    0x200, 0x400, 0x1000, ~(SIZE_T)0
-};
-#define HEAP_NB_FREE_LISTS (ARRAY_SIZE(free_list_sizes) + HEAP_NB_SMALL_FREE_LISTS)
+#define FREE_LIST_LINEAR_BITS 3
+#define FREE_LIST_LINEAR_MASK ((1 << FREE_LIST_LINEAR_BITS) - 1)
+#define FREE_LIST_COUNT ((FIELD_BITS( struct block, block_size ) - FREE_LIST_LINEAR_BITS + 1) * (1 << FREE_LIST_LINEAR_BITS) + 1)
+/* for reference, update this when changing parameters */
+C_ASSERT( FREE_LIST_COUNT == 0x71 );
 
 typedef struct DECLSPEC_ALIGN(BLOCK_ALIGN) tagSUBHEAP
 {
@@ -304,7 +299,7 @@ struct heap
     DWORD            pending_pos;   /* Position in pending free requests ring */
     struct block   **pending_free;  /* Ring buffer for pending free requests */
     RTL_CRITICAL_SECTION cs;
-    struct entry     free_lists[HEAP_NB_FREE_LISTS];
+    struct entry     free_lists[FREE_LIST_COUNT];
     struct bin      *bins;
     SUBHEAP          subheap;
 };
@@ -567,23 +562,6 @@ static void valgrind_notify_free_all( SUBHEAP *subheap, const struct heap *heap
 #endif
 }
 
-/* locate a free list entry of the appropriate size */
-/* size is the size of the whole block including the arena header */
-static inline struct entry *find_free_list( struct heap *heap, SIZE_T block_size, BOOL last )
-{
-    struct entry *list, *end = heap->free_lists + ARRAY_SIZE(heap->free_lists);
-    unsigned int i;
-
-    if (block_size <= HEAP_MAX_SMALL_FREE_LIST)
-        i = (block_size - HEAP_MIN_BLOCK_SIZE) / BLOCK_ALIGN;
-    else for (i = HEAP_NB_SMALL_FREE_LISTS; i < HEAP_NB_FREE_LISTS - 1; i++)
-        if (block_size <= free_list_sizes[i - HEAP_NB_SMALL_FREE_LISTS]) break;
-
-    list = heap->free_lists + i;
-    if (last && ++list == end) list = heap->free_lists;
-    return list;
-}
-
 /* get the memory protection type to use for a given heap */
 static inline ULONG get_protection_type( DWORD flags )
 {
@@ -622,10 +600,61 @@ static void heap_set_status( const struct heap *heap, ULONG flags, NTSTATUS stat
     if (status) RtlSetLastWin32ErrorAndNtStatusFromNtStatus( status );
 }
 
-static size_t get_free_list_block_size( unsigned int index )
+static SIZE_T get_free_list_block_size( unsigned int index )
+{
+    DWORD log = index >> FREE_LIST_LINEAR_BITS;
+    DWORD linear = index & FREE_LIST_LINEAR_MASK;
+
+    if (log == 0) return index * BLOCK_ALIGN;
+
+    return (((1 << FREE_LIST_LINEAR_BITS) + linear) << (log - 1)) * BLOCK_ALIGN;
+}
+
+/*
+ * Given a size, return its index in the block size list for freelists.
+ *
+ * With FREE_LIST_LINEAR_BITS=3, the list looks like this
+ * (with respect to size / BLOCK_ALIGN):
+ * 0,
+ * 1, 2, 3, 4, 5, 6, 7, 8,
+ * 9, 10, 11, 12, 13, 14, 15, 16,
+ * 18, 20, 22, 24, 26, 28, 30, 32,
+ * 36, 40, 44, 48, 52, 56, 60, 64,
+ * 72, 80, 88, 96, 104, 112, 120, 128,
+ * ...
+ */
+static unsigned int get_free_list_index( SIZE_T block_size )
+{
+    DWORD bit, log, linear;
+
+    if (block_size > get_free_list_block_size( FREE_LIST_COUNT - 1 ))
+        return FREE_LIST_COUNT - 1;
+
+    block_size /= BLOCK_ALIGN;
+    /* find the highest bit */
+    if (!BitScanReverse( &bit, block_size ) || bit < FREE_LIST_LINEAR_BITS)
+    {
+        /* for small values, the index is same as block_size. */
+        log = 0;
+        linear = block_size;
+    }
+    else
+    {
+        /* the highest bit is always set, ignore it and encode the next FREE_LIST_LINEAR_BITS bits
+         * as a linear scale, combined with the shift as a log scale, in the free list index. */
+        log = bit - FREE_LIST_LINEAR_BITS + 1;
+        linear = (block_size >> (bit - FREE_LIST_LINEAR_BITS)) & FREE_LIST_LINEAR_MASK;
+    }
+
+    return (log << FREE_LIST_LINEAR_BITS) + linear;
+}
+
+/* locate a free list entry of the appropriate size */
+static inline struct entry *find_free_list( struct heap *heap, SIZE_T block_size, BOOL last )
 {
-    if (index < HEAP_NB_SMALL_FREE_LISTS) return HEAP_MIN_BLOCK_SIZE + index * BLOCK_ALIGN;
-    return free_list_sizes[index - HEAP_NB_SMALL_FREE_LISTS];
+    unsigned int index = get_free_list_index( block_size );
+    if (last && ++index == FREE_LIST_COUNT) index = 0;
+    return &heap->free_lists[index];
 }
 
 static void heap_dump( const struct heap *heap )
@@ -649,7 +678,7 @@ static void heap_dump( const struct heap *heap )
     }
 
     TRACE( "  free_lists: %p\n", heap->free_lists );
-    for (i = 0; i < HEAP_NB_FREE_LISTS; i++)
+    for (i = 0; i < FREE_LIST_COUNT; i++)
         TRACE( "    %p: size %#8Ix, prev %p, next %p\n", heap->free_lists + i, get_free_list_block_size( i ),
                LIST_ENTRY( heap->free_lists[i].entry.prev, struct entry, entry ),
                LIST_ENTRY( heap->free_lists[i].entry.next, struct entry, entry ) );
@@ -1124,7 +1153,7 @@ static BOOL is_valid_free_block( const struct heap *heap, const struct block *bl
     unsigned int i;
 
     if ((subheap = find_subheap( heap, block, FALSE ))) return TRUE;
-    for (i = 0; i < HEAP_NB_FREE_LISTS; i++) if (block == &heap->free_lists[i].block) return TRUE;
+    for (i = 0; i < FREE_LIST_COUNT; i++) if (block == &heap->free_lists[i].block) return TRUE;
     return FALSE;
 }
 
@@ -1508,7 +1537,7 @@ HANDLE WINAPI RtlCreateHeap( ULONG flags, void *addr, SIZE_T total_size, SIZE_T
     list_init( &heap->large_list );
 
     list_init( &heap->free_lists[0].entry );
-    for (i = 0, entry = heap->free_lists; i < HEAP_NB_FREE_LISTS; i++, entry++)
+    for (i = 0, entry = heap->free_lists; i < FREE_LIST_COUNT; i++, entry++)
     {
         block_set_flags( &entry->block, ~0, BLOCK_FLAG_FREE_LINK );
         block_set_size( &entry->block, 0 );
Hi,
It looks like your patch introduced the new failures shown below. Please investigate and fix them before resubmitting your patch. If they are not new, fixing them anyway would help a lot. Otherwise please ask for the known failures list to be updated.
The tests also ran into some preexisting test failures. If you know how to fix them that would be helpful. See the TestBot job for the details:
The full results can be found at: https://testbot.winehq.org/JobDetails.pl?Key=131749
Your paranoid android.
=== debian11b (64 bit WoW report) ===
kernel32:
heap.c:446: Test failed: HeapAlloc failed, error 8
Unhandled exception: page fault on read access to 0xfffffffffffffffe in 64-bit code (0x0000017002b120).
On Tue Apr 11 08:20:15 2023 +0000, Rémi Bernon wrote:
I know, after discussing this elsewhere, that this is supposed to improve shader compilation performance in Overwatch 2 even more, but if you have some actual numbers it would be nice to mention them here.
Thanks, added to MR description.
We can use `get_free_list_block_size` to reject large cases early.
Thanks, good idea.
I've reorganized the small-number path to make it clearer that log=0 and linear=block_size.
Spacing should also be fixed now.
On Tue Apr 11 07:23:05 2023 +0000, Rémi Bernon wrote:
/* locate a free list entry of the appropriate size */
static inline struct entry *find_free_list( struct heap *heap, SIZE_T block_size, BOOL last )
{
    UINT index = get_free_list_index( block_size );
    if (last && ++index == FREE_LIST_COUNT) index = 0;
    return &heap->free_lists[index];
}
I think we can remove the block_size comments at this point; `block_size` is used consistently now and implies that the size includes the block header. I'd also prefer to use "index" rather than "block", which already has a different meaning.
Thanks, done.
Something like that. I'd use a common `FREE_LIST_` prefix for all the related defines, remove `FREE_LIST_MAX_LOG`, which isn't very useful, and add a `C_ASSERT` to better show the actual number of free lists.
Done.
Note that apparently the induced heap size increase makes some tests fail. Using only 2 bits seems to be an easy fix, as it reduces the number of free lists to ~64, but I don't know if that's still good enough?
It does make a difference (malloc overhead halves from 1.3% to 0.6%), but if the test issue is annoying enough, I think using only 2 bits is also fine. Thoughts?
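For reference, assuming block_size is a 16-bit field (which is what the C_ASSERT of 0x71 corresponds to), the counts work out to:

FREE_LIST_COUNT = (FIELD_BITS(struct block, block_size) - FREE_LIST_LINEAR_BITS + 1) * 2^FREE_LIST_LINEAR_BITS + 1
  3 bits: (16 - 3 + 1) * 8 + 1 = 113 (0x71) lists
  2 bits: (16 - 2 + 1) * 4 + 1 = 61 lists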
On Tue Apr 11 08:11:00 2023 +0000, Tatsuyuki Ishi wrote:
changed this line in [version 3 of the diff](/wine/wine/-/merge_requests/2622/diffs?diff_id=41725&start_sha=0eae71c305ccc3d592d2ebda10694df97593ec8e#ce84fe3b548046500368ac49e84ae7006719c2d4_625_635)
I think that might be too clever, and we do need to handle the small-number path differently anyway. I've made the small-number path clearer about log=0 and linear=block_size.
On Tue Apr 11 08:10:59 2023 +0000, Tatsuyuki Ishi wrote:
changed this line in [version 3 of the diff](/wine/wine/-/merge_requests/2622/diffs?diff_id=41725&start_sha=0eae71c305ccc3d592d2ebda10694df97593ec8e#ce84fe3b548046500368ac49e84ae7006719c2d4_615_626)
Changed to `unsigned int` or `DWORD` or `SIZE_T`.
On Tue Apr 11 08:20:18 2023 +0000, Tatsuyuki Ishi wrote:
Something like that. I'd use a common `FREE_LIST_` prefix for all the related defines, remove `FREE_LIST_MAX_LOG`, which isn't very useful, and add a `C_ASSERT` to better show the actual number of free lists.
Done.
Note that apparently the induced heap size increase makes some tests fail. Using only 2 bits seems to be an easy fix, as it reduces the number of free lists to ~64, but I don't know if that's still good enough?
It does make a difference (malloc overhead halves from 1.3% to 0.6%), but if the test issue is annoying enough, I think using only 2 bits is also fine. Thoughts?
I don't have any strong opinion, but if you want to keep 3 bits you'll have to adjust the test.
Imho using 2 bits would be good enough according to your numbers, while keeping the struct heap overhead slightly smaller and avoiding having to argue about the test, which passes on Windows.