Jinoh Kang (@iamahuman) commented about dlls/ntdll/heap.c:
```diff
+static struct block *find_free_bin_block( struct heap *heap, ULONG flags, SIZE_T block_size, struct bin *bin )
+{
+    ULONG affinity = heap_current_thread_affinity();
+    struct block *block;
+    struct group *group;
+
+    /* acquire a group, the thread will own it and no other thread can clear free bits.
+     * some other thread might still set the free bits if they are freeing blocks.
+     */
+    if (!(group = heap_acquire_bin_group( heap, flags, block_size, bin ))) return NULL;
+    group->affinity = affinity;
+
+    /* serialize with heap_free_block_lfh: atomically set GROUP_FLAG_FREE when the free bits are all 0. */
+    if (group_find_free_block( group, block_size, &block ))
+        InterlockedAnd( &group->free_bits, ~GROUP_FLAG_FREE );
+    else
```

Here's my hopefully final suggestion, which enhances both performance (fewer Interlocked ops) and verifiability without harming readability: how about ensuring that all groups in the list or affinity array have `GROUP_FLAG_FREE` unset? This way, we don't need two `InterlockedAnd` ops for the _partially free_ case.
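For illustration, here is a simplified, self-contained model of that invariant. It is not this MR's code: the block bookkeeping is reduced to a single bitmask, the Interlocked helpers are approximated with C11 atomics, and `group_claim_block` / `group_release_block` are made-up stand-ins for `group_find_free_block` and `heap_free_block_lfh`. The point is that the flag is only touched on the full/unlisted transitions, so the partially free allocation path needs no `InterlockedAnd` at all.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define GROUP_FLAG_FREE   0x80000000u  /* set: group is fully used and unlisted */
#define GROUP_BLOCK_MASK  0x7fffffffu  /* one free bit per block in the group */

struct group
{
    _Atomic uint32_t free_bits;        /* block free bits + GROUP_FLAG_FREE */
};

/* Allocation path: by the invariant, a group taken from the bin list or an
 * affinity slot already has GROUP_FLAG_FREE clear, so the partially free case
 * is a single CAS on the free bits.  Only when no free block is left do we pay
 * for another atomic, and that one is conditional so it serializes with a
 * concurrent free. */
static uint32_t group_claim_block( struct group *group )
{
    uint32_t bits = atomic_load( &group->free_bits );

    for (;;)
    {
        if (bits & GROUP_BLOCK_MASK)
        {
            uint32_t bit = bits & (0u - bits);  /* lowest free bit; flag is clear here */
            if (atomic_compare_exchange_weak( &group->free_bits, &bits, bits & ~bit ))
                return bit;                     /* caller maps the bit to a block */
        }
        else
        {
            /* all free bits are 0: mark the group so the next free re-lists it.
             * If a concurrent free sneaks in, the CAS fails and we retry with
             * the freshly freed block instead. */
            if (atomic_compare_exchange_weak( &group->free_bits, &bits, bits | GROUP_FLAG_FREE ))
                return 0;                       /* caller falls back to a new group */
        }
        /* CAS failure reloaded 'bits'; retry */
    }
}

/* Free path (heap_free_block_lfh counterpart): publish the freed block's bit
 * and clear GROUP_FLAG_FREE in one atomic update.  Exactly one freeing thread
 * observes the flag set and re-links the group into the bin list, so listed
 * groups never carry the flag and the invariant holds. */
static bool group_release_block( struct group *group, uint32_t block_bit )
{
    uint32_t bits = atomic_load( &group->free_bits );
    uint32_t new_bits;

    do new_bits = (bits | block_bit) & ~GROUP_FLAG_FREE;
    while (!atomic_compare_exchange_weak( &group->free_bits, &bits, new_bits ));

    return (bits & GROUP_FLAG_FREE) != 0;       /* true: caller must re-list the group */
}
```

Clearing the flag in the same CAS that publishes the freed bit is what guarantees exactly one freer re-lists the group, which is also what makes the invariant easy to verify.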
Patches linked to this suggestion are annotated with `[free-unset-v7]`.

```suggestion:-2+0
    if (!group_find_free_block( group, block_size, &block ))
```

--
https://gitlab.winehq.org/wine/wine/-/merge_requests/1628#note_23418