When sweeping for small objects of a known size, instead of fitting
swept regions into the largest available bucket size, eagerly break
the regions into blocks of the requested size. Throw away any
fragmented remainder; the next collection will reclaim it.
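A minimal sketch of that sweep step, assuming an intrusive freelist of
fixed-size blocks; `sweep_region_into_blocks` and its signature are
illustrative, not the collector's actual API:

```c
#include <stddef.h>

struct freelist_node { struct freelist_node *next; };

/* Chop [base, base+size) into block_size-byte blocks, pushing each
   onto *freelist.  Any tail smaller than block_size is intentionally
   dropped; the next collection will rediscover that space.  Returns
   the number of blocks produced. */
static size_t sweep_region_into_blocks(char *base, size_t size,
                                       size_t block_size,
                                       struct freelist_node **freelist) {
  size_t count = 0;
  while (size >= block_size) {
    struct freelist_node *node = (struct freelist_node *)base;
    node->next = *freelist;
    *freelist = node;
    base += block_size;
    size -= block_size;
    count++;
  }
  return count;
}
```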
When allocating small objects, just look in the size-segmented freelist;
don't grovel in other sizes on the global freelist. The thought is that
we only add to the global freelists when allocating large objects, in
which case some fragmentation is OK. Perhaps this is the wrong
dynamic.
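A sketch of that fast path, assuming granule-sized size classes; the
names (`size_to_class`, `GRANULE`, the class count) are illustrative.
On a miss we return NULL and let the caller sweep or collect, rather
than searching other size classes:

```c
#include <stddef.h>

#define NUM_SIZE_CLASSES 8
#define GRANULE 16  /* assumed allocation granule */

struct freelist_node { struct freelist_node *next; };
static struct freelist_node *size_class_freelists[NUM_SIZE_CLASSES];

/* Class 0 holds one-granule objects, class 1 two granules, etc. */
static size_t size_to_class(size_t bytes) {
  return (bytes + GRANULE - 1) / GRANULE - 1;
}

static void *allocate_small(size_t bytes) {
  size_t k = size_to_class(bytes);
  struct freelist_node *node = size_class_freelists[k];
  if (!node)
    return NULL;  /* caller falls back to sweeping or collecting */
  size_class_freelists[k] = node->next;
  return node;
}
```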
Reclaim 32 kB at a time instead of 1 kB, so each visit to the shared
sweep state is amortized over more memory. This helps remove
scalability bottlenecks.
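The batching could look like the sketch below: one atomic fetch-add
claims a 32 kB run of blocks, instead of one synchronization per 1 kB
block. The constants and `claim_sweep_batch` are assumptions for
illustration:

```c
#include <stdatomic.h>
#include <stddef.h>

#define BLOCK_SIZE 1024
#define SWEEP_BATCH (32 * 1024)
#define BLOCKS_PER_BATCH (SWEEP_BATCH / BLOCK_SIZE)  /* 32 */

static atomic_size_t next_block;  /* shared sweep cursor */
static size_t block_count;        /* total blocks in the heap */

/* Claim up to 32 blocks with a single atomic operation.  Returns how
   many blocks were claimed, with the first block index in *start. */
static size_t claim_sweep_batch(size_t *start) {
  size_t base = atomic_fetch_add(&next_block, BLOCKS_PER_BATCH);
  if (base >= block_count)
    return 0;  /* sweep finished */
  *start = base;
  size_t avail = block_count - base;
  return avail < BLOCKS_PER_BATCH ? avail : BLOCKS_PER_BATCH;
}
```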
When you have multiple mutators -- perhaps many more than marker threads
-- they can mark their roots in parallel, but they can't enqueue them on
the same mark queue concurrently, because mark queues are single-producer,
multiple-consumer queues. Therefore, mutator threads will collect grey
roots from their own root sets, and then send them to the mutator that
is controlling GC, for it to add to the mark queue (somehow).
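One possible shape for that handoff, sketched single-threaded: each
mutator fills a local buffer of grey roots and pushes the whole buffer
onto a pending list; the controlling thread, as the mark queue's sole
producer, drains that list into the queue. The handoff list stands in
for whatever channel the real collector uses, and `mark_queue_push` is
a stand-in for the real enqueue:

```c
#include <stddef.h>

#define ROOT_BUF_SIZE 64

/* A batch of grey roots collected by one mutator. */
struct root_buf {
  struct root_buf *next;
  size_t count;
  void *roots[ROOT_BUF_SIZE];
};

/* Handoff stack; concurrent pushes from multiple mutators would need
   an atomic compare-and-swap or a lock in the real collector. */
static struct root_buf *pending_roots;

static void mutator_submit_roots(struct root_buf *buf) {
  buf->next = pending_roots;
  pending_roots = buf;
}

/* Run only by the GC-controlling thread: drain the handoff stack and,
   as the single producer, enqueue each root on the mark queue. */
static size_t controller_drain(void (*mark_queue_push)(void *)) {
  size_t total = 0;
  while (pending_roots) {
    struct root_buf *buf = pending_roots;
    pending_roots = buf->next;
    for (size_t i = 0; i < buf->count; i++, total++)
      mark_queue_push(buf->roots[i]);
  }
  return total;
}

static size_t pushed;
static void count_push(void *root) { (void)root; pushed++; }
```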
Also expand the GC interface with "allocate_pointerless", for objects
that contain no heap pointers and so need not be traced. Limit lazy
sweeping to the allocation size that is causing the sweep, without
adding to fragmentation.
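A sketch of what the expanded interface might look like; the names,
the header layout, and the use of malloc as a stand-in allocator are
all assumptions for illustration:

```c
#include <stddef.h>
#include <stdlib.h>

enum alloc_kind { ALLOC_NORMAL, ALLOC_POINTERLESS };

struct header { unsigned char kind; };

static void *gc_allocate_kind(size_t bytes, enum alloc_kind kind) {
  /* malloc stands in for the collector's real allocation path. */
  struct header *h = malloc(sizeof *h + bytes);
  if (!h) return NULL;
  h->kind = (unsigned char)kind;
  return h + 1;
}

static void *gc_allocate(size_t bytes) {
  return gc_allocate_kind(bytes, ALLOC_NORMAL);
}

/* Byte buffers, strings, bignum limbs, etc.: the collector can skip
   tracing these bodies entirely. */
static void *gc_allocate_pointerless(size_t bytes) {
  return gc_allocate_kind(bytes, ALLOC_POINTERLESS);
}

static int is_pointerless(void *obj) {
  return ((struct header *)obj - 1)->kind == ALLOC_POINTERLESS;
}
```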