I've played around with dbghelp performance. My test case was breaking
at an unknown symbol (break gaga) while WoW was loaded in the debugger
(wine winedbg WoW.exe). The time was measured by hand; memory usage was
read from the RSS column of ps -AF.
Test                        Time (s)   Memory usage (MB)
current git                      4.5                  54
pool_heap.patch                  4.5                  63
process_heap.patch               4.5                 126
insert_first.patch               4.5                  54
current git, r300                115                 146
pool_heap.patch, r300             17                 119
process_heap.patch, r300          17                 260
insert_first.patch, r300          27                 167
insert_first is the patch from Eric Pouech.
r300 means with the debug version of Mesa's r300_dri.so, which has a
total compilation unit size of around 9.2 MB (compared to Wine's second
biggest, user32, at 1.1 MB).
Conclusions:
- current git wins with small debug files (<2 MB or so); pool_heap wins
with bigger files. insert_first and process_heap are out.
- small pools have less memory overhead than small heaps.
- big pools have more memory overhead than big heaps.
- big pools are a lot slower than big heaps.
IMO the best results would come from removing the pools (as in
process_heap) and freeing unused memory manually, in the reverse order
it was allocated. But at first glance that looks like quite a bit of
work, and I'm not sure it's worth the result. I think the best approach
would be to add some destroy functions in storage.c that free the
memory allocated for the vectors, sparse_arrays and hash_tables, and
then gradually replace pool_alloc calls with HeapAlloc/HeapFree pairs.
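
To make the idea concrete, here is a minimal sketch of what such a
destroy function could look like, assuming the pool keeps a linked
list of HeapAlloc'ed chunks. The struct layout and the names
pool_chunk and pool_destroy are illustrative, not the actual storage.c
internals:

#include <windows.h>

struct pool_chunk
{
    struct pool_chunk *prev;   /* chunk allocated just before this one */
    /* payload follows the header */
};

struct pool
{
    HANDLE             heap;   /* heap the chunks were taken from */
    struct pool_chunk *last;   /* most recently allocated chunk */
};

/* Free every chunk, newest first, i.e. in the reverse order of
 * allocation. */
void pool_destroy(struct pool *p)
{
    struct pool_chunk *chunk = p->last;
    while (chunk)
    {
        struct pool_chunk *prev = chunk->prev;
        HeapFree(p->heap, 0, chunk);
        chunk = prev;
    }
    p->last = NULL;
}

Walking the chunk list backwards means each pool can be torn down in
one pass once its contents are no longer needed, without tracking
individual allocations.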
Markus