If the mutator finds completely empty blocks, it sets them aside. The
large object space acquires empty blocks, sweeping them if needed, and
causes them to be unmapped, possibly triggering a GC.
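A minimal sketch of the shape of this, with assumed names and without
the atomics a real implementation would need: mutators push wholly
empty blocks onto a side list, and the large object space pops from it,
sweeping a block first if necessary.

    #include <stddef.h>

    struct block { struct block *next; int needs_sweep; };
    struct empty_block_list { struct block *head; };

    /* Mutator side: stash a completely empty block for later reuse. */
    static void push_empty_block(struct empty_block_list *l, struct block *b) {
      b->next = l->head;
      l->head = b;
    }

    /* Large object space side: take an empty block, sweeping it if it
       hasn't been swept yet; the caller then unmaps it. */
    static struct block *acquire_empty_block(struct empty_block_list *l) {
      struct block *b = l->head;
      if (!b)
        return NULL;
      l->head = b->next;
      if (b->needs_sweep) {
        /* hypothetical hook: sweep_block(b); */
        b->needs_sweep = 0;
      }
      return b;
    }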
Use uint64 instead of uintptr when bulk-reading metadata bytes. Assume
that live objects come in plugs rather than each object being separated
by a hole. Always bulk-load metadata bytes when measuring holes, and be
less branchy. Lazily clear hole bytes as we allocate. Add a place to
record lost space due to fragmentation.
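As an illustration of the bulk loads (assumed layout: one metadata byte
per granule, zero meaning empty), a hole can be measured eight bytes at
a time like this; the __builtin_ctzll trick assumes GCC/Clang and a
little-endian target.

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Count leading zero metadata bytes starting at META, up to LIMIT
       granules: the size of the hole before the next live plug. */
    static size_t hole_granules(const uint8_t *meta, size_t limit) {
      size_t granule = 0;
      while (granule + 8 <= limit) {
        uint64_t word;
        memcpy(&word, meta + granule, 8);
        if (word == 0) {
          granule += 8;
          continue;
        }
        /* Some byte is nonzero; on little-endian, the lowest set bit
           identifies the first nonzero byte without a per-byte branch. */
        return granule + (size_t)(__builtin_ctzll(word) / 8);
      }
      while (granule < limit && meta[granule] == 0)
        granule++;
      return granule;
    }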
Read a word at a time from the mark byte array. If the mark word
doesn't correspond to live data, there will be no contention and we can
clear it with one write.
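A sketch of that sweep step, assuming the low bit of each metadata byte
is the mark bit: words with no live marks are cleared with a single
store, and words containing live data are left to a byte-at-a-time path
(omitted here).

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    static void sweep_metadata_words(uint8_t *meta, size_t nbytes) {
      /* Handling for a tail shorter than 8 bytes is omitted. */
      for (size_t i = 0; i + 8 <= nbytes; i += 8) {
        uint64_t word;
        memcpy(&word, meta + i, 8);
        if (word == 0)
          continue;                              /* already clear */
        if ((word & 0x0101010101010101ULL) == 0) {
          /* No granule in this word is marked live, so no other thread
             will write these bytes: clear all eight with one store. */
          uint64_t zero = 0;
          memcpy(meta + i, &zero, 8);
        }
      }
    }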
Don't require that mark bytes be cleared; instead we have rotating
colors. Beginnings of support for concurrent marking, pinning,
conservative roots, and generational collection.
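A sketch of rotating colors, with an assumed two-bit encoding: the
value meaning "live" changes each cycle, so mark bytes left over from
older cycles read as dead without ever being cleared.

    #include <stdint.h>

    #define MARK_MASK 0x3   /* assumed: low two bits hold the color */

    struct gc_state { uint8_t live_color; };

    /* Rotate through the nonzero colors 1, 2, 3; zero always means free. */
    static void start_new_cycle(struct gc_state *gc) {
      gc->live_color = (uint8_t)((gc->live_color % 3) + 1);
    }

    static int byte_is_live(const struct gc_state *gc, uint8_t mark) {
      return (mark & MARK_MASK) == gc->live_color;
    }

    static void set_live(const struct gc_state *gc, uint8_t *mark) {
      *mark = (uint8_t)((*mark & ~MARK_MASK) | gc->live_color);
    }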
The changes above let mutators run in parallel. There is currently a
bug, however: a race between mutators that have stopped to mark their
roots and other mutators that are still sweeping. Will fix in a
followup.
There are 4 MB aligned slabs, divided into 64 kB pages. (On 32-bit this
will be 2 MB and 32 kB.) Then you can get the mark byte for a granule
from the slab base plus the granule's offset within the slab. The
unused slack that would correspond to mark bytes for the blocks used
*by* the mark bytes is used for other purposes: remembered sets (not
yet used), block summaries (not used), and a slab header (likewise).
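A sketch of the lookup, assuming 4 MB slab alignment and 16-byte
granules on 64-bit (the granule size is an assumption), with the
per-granule metadata bytes at the start of the slab:

    #include <stddef.h>
    #include <stdint.h>

    #define SLAB_SIZE    (4 * 1024 * 1024)
    #define GRANULE_SIZE 16

    /* Mask the address down to its 4 MB slab, take the granule offset
       within the slab, and index the slab's metadata area with it. The
       bytes that would describe the metadata's own blocks are the slack
       repurposed for remembered sets, summaries, and the slab header. */
    static uint8_t *mark_byte_for(uintptr_t addr) {
      uintptr_t slab = addr & ~(uintptr_t)(SLAB_SIZE - 1);
      size_t granule = (size_t)((addr - slab) / GRANULE_SIZE);
      return (uint8_t *)slab + granule;
    }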
Probably the collector should use 8 byte granules on 32-bit but for now
we're working on 64-bit sizes. Since we don't (and never did) pack
pages with same-sized small objects, no need to make sure that small
object sizes fit evenly into the medium object threshold; just keep
packed freelists. This is a simplification that lets us reclaim the
tail of a region in constant time rather than looping through the size
classes.
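A sketch of the constant-time tail reclaim this buys, with assumed
names: freelists are indexed directly by size in granules ("packed"),
so a leftover tail of N granules goes straight onto freelist[N] instead
of being split across discrete size classes.

    #include <stddef.h>

    #define MEDIUM_THRESHOLD_GRANULES 64   /* assumed threshold */

    struct freelist_node { struct freelist_node *next; };

    struct small_freelists {
      /* One list per granule count up to the medium-object threshold. */
      struct freelist_node *by_granules[MEDIUM_THRESHOLD_GRANULES + 1];
    };

    static void reclaim_tail(struct small_freelists *fl, void *tail,
                             size_t granules) {
      if (granules == 0 || granules > MEDIUM_THRESHOLD_GRANULES)
        return;   /* empty, or large enough to be handled elsewhere */
      struct freelist_node *node = tail;
      node->next = fl->by_granules[granules];
      fl->by_granules[granules] = node;
    }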
This simplification will let us partition the mark space into chunks of 32 or 64 kB, as
we won't need to allocate chunk-spanning objects. This will improve
sweeping parallelism and is a step on the way to immix.
When sweeping for small objects of a known size, instead of fitting
swept regions into the largest available bucket size, eagerly break
the regions into the requested size. Throw away any fragmented space;
the next collection will get it.
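A sketch of that eager breaking, with illustrative names and the
16-byte granule assumption from above: the swept region is chopped into
objects of exactly the requested size, and any sub-object remainder is
dropped.

    #include <stddef.h>

    #define GRANULE_SIZE 16   /* assumed 64-bit granule size */

    struct freelist_node { struct freelist_node *next; };

    static void break_region_into_size(struct freelist_node **freelist,
                                       char *region, size_t region_granules,
                                       size_t object_granules) {
      size_t object_bytes = object_granules * GRANULE_SIZE;
      while (region_granules >= object_granules) {
        struct freelist_node *node = (struct freelist_node *)region;
        node->next = *freelist;
        *freelist = node;
        region += object_bytes;
        region_granules -= object_granules;
      }
      /* Whatever is left (fewer than object_granules granules) is thrown
         away; the next collection will recover it. */
    }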
When allocating small objects, just look in the size-segmented freelist;
don't grovel in other sizes on the global freelist. The thought is that
we only add to the global freelists when allocating large objects, and
in that case some fragmentation is OK. Perhaps this is the wrong
dynamic.
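Under that policy the small-object allocation path looks roughly like
this (function names are illustrative): only the freelist for the exact
requested size is consulted, and a miss goes to sweeping rather than to
the global freelists.

    #include <stddef.h>

    struct freelist_node { struct freelist_node *next; };

    /* Assumed hook: sweep until space for this size is found, or fail. */
    void *sweep_for_size(size_t granules);

    static void *allocate_small(struct freelist_node **by_granules,
                                size_t granules) {
      struct freelist_node *node = by_granules[granules];
      if (node) {
        by_granules[granules] = node->next;
        return node;
      }
      /* Miss: don't grovel in other sizes; sweep for this size instead. */
      return sweep_for_size(granules);
    }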
Reclaim 32 kB at a time instead of 1 kB. This helps remove scalability
bottlenecks.
When you have multiple mutators -- perhaps many more than marker threads
-- they can mark their roots in parallel, but they can't enqueue them on
the same mark queue concurrently: mark queues are single-producer,
multiple-consumer queues. Therefore, mutator threads will collect grey
roots from their own root sets, and then send them to the mutator that
is controlling GC, for it to add to the mark queue (somehow).
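One possible shape for that handoff, with assumed names (the "somehow"
above is the open question): each mutator fills a local buffer of grey
roots and submits it to the controlling mutator, which alone pushes
onto the single-producer mark queue.

    #include <stddef.h>
    #include <stdint.h>
    #include <pthread.h>

    struct gc_ref { uintptr_t addr; };

    struct root_buffer {
      struct gc_ref *roots;
      size_t count;
      struct root_buffer *next;
    };

    struct gc_controller {
      pthread_mutex_t lock;
      struct root_buffer *pending;   /* buffers submitted by mutators */
    };

    /* Assumed single-producer, multiple-consumer queue entry point. */
    void mark_queue_push(struct gc_ref ref);

    /* Each mutator calls this after marking its own roots locally. */
    static void submit_roots(struct gc_controller *c, struct root_buffer *buf) {
      pthread_mutex_lock(&c->lock);
      buf->next = c->pending;
      c->pending = buf;
      pthread_mutex_unlock(&c->lock);
    }

    /* The controlling mutator drains submissions onto the mark queue. */
    static void drain_submitted_roots(struct gc_controller *c) {
      pthread_mutex_lock(&c->lock);
      struct root_buffer *buf = c->pending;
      c->pending = NULL;
      pthread_mutex_unlock(&c->lock);
      for (; buf; buf = buf->next)
        for (size_t i = 0; i < buf->count; i++)
          mark_queue_push(buf->roots[i]);
    }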