On Mon Feb 6 16:15:26 2023 +0000, Jinoh Kang wrote:
Personally, I find the `GROUP_FLAG_FREE` flag a little unintuitive due to the following reasons:
- Spatial dependence of semantics. What the `GROUP_FLAG_FREE` flag in
`free_bits` means depends on the value of the entire `free_bits` bitfield:
- If `GROUP_FLAG_FREE` is set and all other bits in `free_bits` are
set, it means that the group is not being referenced anywhere else.
- If `GROUP_FLAG_FREE` is set and some other bits in `free_bits` are
unset, it means that the group is still being shared with other allocations and any subsequent or concurrent `heap_free_block_lfh` call may free the group.
- If `GROUP_FLAG_FREE` is unset, it means that the group is owned by
a thread (a stack variable), the affinity group array, or the group lookaside list.
- Temporal dependence of computation. Whether `GROUP_FLAG_FREE` is set
or not is a function of not only the `free_bits` bitfield, but also the history of previous `heap_allocate_block_lfh` and `heap_free_block_lfh` calls. Instead, I propose reusing the `group->entry.Next` field to indicate whether the group is contained in a affinity group array or the group lookaside list. The `Next` field will be interpreted as follows:
- `Next == (SLIST_ENTRY *)-1`: The group is currently unlinked from
`affinity_group` and any group lookaside list. Next call to `heap_free_block_lfh` will release the group.
- `Next == NULL`: The group is owned by a thread (a stack variable), is
the last element of a group lookaside list, or is referenced by `affinity_group`.
- `Next` is any other value: The group is an element of a group
lookaside list.

With the above rules in mind, the algorithm will be changed as follows:
- If `find_free_bin_block` determines that the free bits are all 0 (i.e.
the block is full), it sets `group->entry.Next = (void *)-1` instead of putting it back on the group list. Otherwise, it sets `group->entry.Next = NULL` before putting it back on the group list.
- `heap_free_block_lfh` calls `InterlockedCompareExchange(
&group->entry.Next, NULL, (void *)-1 )` to claim ownership of the group _before_ manipulating `free_bits`. If the CAS is successful, it calls `heap_release_bin_group` _after_ setting the corresponding bit in `free_bits`.

The advantages of this approach are twofold:
- We have one extra block per group.
- Groups become immediately available for reuse as soon as one of their blocks is freed.

Both contribute to a smaller memory footprint.
Hm, the approach might not be viable unless we roll our own SList, since the RTL SList operations do not perform atomic access on `entry->Next`.