mirror of https://git.savannah.gnu.org/git/guile.git
Commit graph

96 commits

Andy Wingo
7af8bb6bd0 Add machinery to disable ragged-stop marking
We'll need to disable the optimization that mutators mark their own
stacks once we support evacuation.
2022-07-20 14:40:47 +02:00
Andy Wingo
e4342f6c45 Add helper for yielding in a spinlock 2022-07-20 14:40:47 +02:00
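
For context, a minimal sketch of such a yield helper, assuming GCC or
Clang on x86; the name and spin threshold are illustrative, not the
actual API:

    #include <sched.h>
    #include <stddef.h>

    static inline void yield_for_spin(size_t spin_count) {
      if (spin_count < 32)
        __builtin_ia32_pause();   /* x86 PAUSE: relax the core while spinning */
      else
        sched_yield();            /* after a while, give the timeslice back */
    }
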
Andy Wingo
52166fe286 Add gc_edge data structure
Less casting in user programs, and it's a step on the way to evacuation
in whippet.
2022-07-20 14:40:47 +02:00
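
A hedged sketch of what such an edge abstraction can look like: a typed
handle on a field holding an object reference, which the collector can
read and, once evacuation lands, overwrite. Names are assumptions:

    struct gcobj;

    struct gc_edge { struct gcobj **loc; };

    static inline struct gc_edge gc_edge(struct gcobj **loc) {
      return (struct gc_edge){ loc };
    }
    static inline struct gcobj* gc_edge_ref(struct gc_edge e) {
      return *e.loc;
    }
    static inline void gc_edge_update(struct gc_edge e, struct gcobj *obj) {
      *e.loc = obj;   /* evacuation rewrites the slot through the edge */
    }
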
Andy Wingo
808d365f4b We identify empty blocks lazily now 2022-07-20 14:40:47 +02:00
Andy Wingo
bc73c5ad02 Whitespace fix 2022-07-20 14:40:47 +02:00
Andy Wingo
157d40466b mark_space_reacquire_memory updates pending_unavailable_bytes 2022-07-20 14:40:47 +02:00
Andy Wingo
33a3af2c73 Large object space properly acquires blocks from mark space
If a mutator finds completely empty blocks, it sets them aside.  The
large object space acquires those empty blocks, sweeping them if needed,
and has them unmapped, possibly triggering a GC.
2022-07-20 14:40:47 +02:00
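
A sketch of the release step this implies, assuming a POSIX system; the
function name is hypothetical:

    #include <stddef.h>
    #include <sys/mman.h>

    static void return_block_to_os(void *block, size_t size) {
      /* Pages stay mapped but the OS may reclaim them; touching them
         later faults in fresh zero pages. */
      madvise(block, size, MADV_DONTNEED);
    }
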
Andy Wingo
71b656bca4 When sweeping, return empty blocks to global freelist
This will facilitate managing reserve blocks for defragmentation, as
well as blocks to unmap to compensate for large object allocations.
2022-07-20 14:40:47 +02:00
Andy Wingo
8f06b914b0 Refactor to allow "next" pointer embedded in block summary 2022-07-20 14:40:47 +02:00
Andy Wingo
7d80d45c79 Rename mark-sweep.h to whippet.h 2022-07-20 14:40:44 +02:00
Andy Wingo
061d92d125 Update README 2022-05-15 22:06:41 +02:00
Andy Wingo
69d7ff83dd More wording 2022-05-11 22:29:37 +02:00
Andy Wingo
c39e26159d Some README updates 2022-05-11 22:25:09 +02:00
Andy Wingo
7ac0b5bb4b More precise heap size control
No longer clamped to 4 MB boundaries.  Not important in production but
very important for comparing against other collectors.
2022-05-11 21:19:26 +02:00
Andy Wingo
fa3b7bd1b3 Add global yield and fragmentation computation 2022-05-09 22:03:50 +02:00
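
As a sketch of the two ratios (names are illustrative): yield is the
fraction of the heap recovered by a collection, and fragmentation the
fraction lost to unusable hole tails:

    #include <stddef.h>

    static double heap_yield(size_t granules_freed, size_t heap_granules) {
      return (double)granules_freed / (double)heap_granules;
    }
    static double heap_fragmentation(size_t fragmentation_granules,
                                     size_t heap_granules) {
      return (double)fragmentation_granules / (double)heap_granules;
    }
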
Andy Wingo
3bc81b1654 Collect per-block statistics
This will let us compute fragmentation.
2022-05-09 21:46:27 +02:00
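
One plausible shape for those per-block counters; the exact fields are
assumptions:

    #include <stdint.h>

    struct block_summary {
      uint16_t hole_count;               /* holes found sweeping the block */
      uint16_t free_granules;            /* total granules recovered */
      uint16_t holes_with_fragmentation; /* holes whose tail was unusable */
      uint16_t fragmentation_granules;   /* granules lost in those tails */
    };
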
Andy Wingo
7461b2d5c3 Be more permissive with heap multiplier
Also, if there's an error, print the right argument.
2022-05-06 15:08:24 +02:00
Andy Wingo
815f206e28 Optimize sweeping
Use uint64 instead of uintptr when bulk-reading metadata bytes.  Assume
that live objects come in plugs rather than each object being separated
by a hole.  Always bulk-load metadata bytes when measuring holes, and be
less branchy.  Lazily clear hole bytes as we allocate.  Add a place to
record lost space due to fragmentation.
2022-05-06 15:07:43 +02:00
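
A sketch of the bulk-load idea, assuming little-endian byte order: load
eight metadata bytes as one uint64 and count the run of zero (dead)
bytes at the start, which is the hole size in granules:

    #include <stddef.h>
    #include <stdint.h>

    static size_t hole_granules_at_start(uint64_t metadata_word) {
      if (metadata_word == 0)
        return 8;   /* the whole word is part of the hole */
      /* Trailing zero bits / 8 = zero bytes before the first live byte. */
      return (size_t)__builtin_ctzll(metadata_word) / 8;
    }
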
Andy Wingo
0d0d684952 Mark-sweep does bump-pointer allocation into holes
Instead of freelists, have mark-sweep use the metadata byte array to
identify holes, and bump-pointer allocate into those holes.
2022-05-01 17:07:30 +02:00
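
A minimal sketch of bump-pointer allocation into a hole; the structure
and names are illustrative:

    #include <stddef.h>
    #include <stdint.h>

    struct bump_allocator {
      uintptr_t alloc;   /* next free byte in the current hole */
      uintptr_t limit;   /* one past the end of the hole */
    };

    static void* allocate_in_hole(struct bump_allocator *a, size_t bytes) {
      if (a->alloc + bytes > a->limit)
        return NULL;               /* hole exhausted; sweep to the next one */
      void *ret = (void*)a->alloc;
      a->alloc += bytes;
      return ret;
    }
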
Andy Wingo
f51e969730 Use atomics when sweeping
Otherwise, there is a race with concurrent marking, though possibly just
during the ragged stop.
2022-05-01 16:23:10 +02:00
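
A sketch of what sweeping with atomics can mean in C11; the function is
hypothetical:

    #include <stdatomic.h>
    #include <stdint.h>

    static uint8_t sweep_metadata_byte(_Atomic uint8_t *meta) {
      /* Exchange rather than load-then-store, so a mark written by a
         concurrent marker is observed instead of silently clobbered. */
      return atomic_exchange_explicit(meta, 0, memory_order_acquire);
    }
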
Andy Wingo
2a68dadf22 Accelerate sweeping
Read a word at a time from the mark byte array.  If the mark word
doesn't correspond to live data there will be no contention and we can
clear it with one write.
2022-05-01 16:09:20 +02:00
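
A sketch of the word-at-a-time fast path, assuming the low bit of each
metadata byte is the current live color (the mask is an assumption):

    #include <stdatomic.h>
    #include <stddef.h>
    #include <stdint.h>

    #define LIVE_MASK 0x0101010101010101ULL

    static size_t sweep_metadata_word(_Atomic uint64_t *meta) {
      uint64_t word = atomic_load_explicit(meta, memory_order_relaxed);
      if ((word & LIVE_MASK) == 0) {
        /* No live granule here, so no marker is writing: one plain
           store clears all eight bytes at once. */
        atomic_store_explicit(meta, 0, memory_order_relaxed);
        return 8;
      }
      return 0;   /* something is live; fall back to byte-by-byte sweep */
    }
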
Andy Wingo
ce69e9ed4c Record object sizes in metadata byte array
This will let us avoid paging in objects when sweeping.
2022-05-01 15:19:13 +02:00
Andy Wingo
3a04078044 mark-sweep uses all the metadata bits
Don't require that mark bytes be cleared; instead we have rotating
colors.  Beginnings of support for concurrent marking, pinning,
conservative roots, and generational collection.
2022-05-01 15:04:21 +02:00
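
A hedged sketch of rotating colors: two metadata bits give three nonzero
colors, and instead of clearing mark bytes between cycles the collector
reinterprets which color means live:

    #include <stdint.h>

    static uint8_t live_color = 1;  /* one of three nonzero two-bit colors */

    static inline int granule_is_live(uint8_t meta) {
      return (meta & 3) == live_color;
    }

    static void rotate_mark_color(void) {
      live_color = (uint8_t)((live_color % 3) + 1);  /* 1 -> 2 -> 3 -> 1 */
    }
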
Andy Wingo
f97906421e Sweep by block, not by slab
This lets mutators run in parallel.  There is currently a bug, however:
a race between stopped mutators marking their roots and other mutators
that are still sweeping.  Will fix in a followup.
2022-05-01 14:46:36 +02:00
Andy Wingo
83bf1d8cf3 Fix bug ensuring zeroed memory
If the granule size is bigger than a pointer, we were leaving the first
granule uncleared.
2022-05-01 14:45:25 +02:00
Andy Wingo
7fc2fdbbf7 Use block-structured heap for mark-sweep
There are 4 MB aligned slabs, divided into 64 kB pages.  (On 32-bit this
will be 2 MB and 32 kB.)  Then you can get a mark byte per granule by
slab plus granule offset.  The unused slack that would correspond to
mark bytes for the blocks used *by* the mark bytes is used for other
purposes: remembered sets (not yet used), block summaries (not used),
and a slab header (likewise).
2022-04-27 22:31:09 +02:00
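
A sketch of the address arithmetic this layout enables, assuming the
metadata byte array sits at the base of each slab; constants are per the
description above:

    #include <stddef.h>
    #include <stdint.h>

    #define SLAB_SIZE    (4 * 1024 * 1024)
    #define GRANULE_SIZE 16

    static uint8_t* metadata_byte_for(uintptr_t addr) {
      uintptr_t slab = addr & ~(uintptr_t)(SLAB_SIZE - 1); /* 4 MB aligned */
      size_t granule = (addr - slab) / GRANULE_SIZE;
      return (uint8_t*)slab + granule;  /* mark bytes at the slab's base */
    }
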
Andy Wingo
bea9ce883d mark-sweep collector uses 16 byte granules, packed small freelists
Probably the collector should use 8 byte granules on 32-bit but for now
we're working on 64-bit sizes.  Since we don't (and never did) pack
pages with same-sized small objects, no need to make sure that small
object sizes fit evenly into the medium object threshold; just keep
packed freelists.  This is a simplification that lets us reclaim the
tail of a region in constant time rather than looping through the size
classes.
2022-04-20 10:54:19 +02:00
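
A sketch of the granule rounding this implies: each small size class is
just a granule count with its own packed freelist, so reclaiming a
region tail needs no walk over size classes:

    #include <stddef.h>

    #define GRANULE_SIZE 16

    static size_t size_to_granules(size_t size) {
      return (size + GRANULE_SIZE - 1) / GRANULE_SIZE;  /* round up */
    }
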
Andy Wingo
adc4a7a269 Add large object space to mark-sweep collector
This will let us partition the mark space into chunks of 32 or 64 kB, as
we won't need to allocate chunk-spanning objects.  This will improve
sweeping parallelism and is a step on the way to immix.
2022-04-18 21:20:00 +02:00
Andy Wingo
3ee2009de9 Move a lot of mark_space state to heap 2022-04-18 20:56:48 +02:00
Andy Wingo
119e273fa4 Rename mark-sweep "markers" to "tracers"
There could be other reasons than marking to trace the heap.
2022-04-18 15:19:55 +02:00
Andy Wingo
19f7f72b68 Rename mark-sweep "large" objects to "medium" 2022-04-18 10:00:44 +02:00
Andy Wingo
3f54fb3dbf Fix semispace page stealing
Ensure number of stolen pages is even.  Avoid madvising on every
collection.  Cache the page size.
2022-04-17 21:51:20 +02:00
Andy Wingo
3315fc7477 Add large object space to semi-space collector 2022-04-14 22:20:27 +02:00
Andy Wingo
619a49ba41 Add large object space
Not wired up yet.
2022-04-13 21:43:18 +02:00
Andy Wingo
d425620d37 Add address map and set 2022-04-12 21:41:26 +02:00
Andy Wingo
b0b4c4d893 Remove unneeded files 2022-03-31 09:24:54 +02:00
Andy Wingo
54ce801c72 Update README now that we have parallel mutators 2022-03-30 23:21:45 +02:00
Andy Wingo
a1dbbfd6ae Speed up sweeping for small objects
When sweeping for small objects of a known size, instead of fitting
swept regions into the largest available bucket size, eagerly break
the regions into the requested size.  Throw away any fragmented space;
the next collection will get it.

When allocating small objects, just look in the size-segmented freelist;
don't grovel in other sizes on the global freelist.  The thought is that
we only add to the global freelists when allocating large objects, and
in that case some fragmentation is OK.  Perhaps this is the wrong
dynamic.

Reclaim 32 kB at a time instead of 1 kB.  This helps remove scalability
bottlenecks.
2022-03-30 23:15:29 +02:00
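
A sketch of the eager-breaking policy described above; names are
illustrative:

    #include <stddef.h>

    static void reclaim_region(void **freelist, char *base,
                               size_t region_bytes, size_t object_bytes) {
      for (char *p = base; p + object_bytes <= base + region_bytes;
           p += object_bytes) {
        *(void**)p = *freelist;   /* thread the object onto the freelist */
        *freelist = p;
      }
      /* Any tail smaller than object_bytes is simply dropped; the next
         collection will recover it. */
    }
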
Andy Wingo
6300203738 Add call_without_gc API
This lets us call pthread_join safely.
2022-03-29 21:58:52 +02:00
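
A sketch of the shape such an API can take; the mutator helpers here are
hypothetical:

    struct mutator;
    void mutator_mark_inactive(struct mutator *mut);  /* hypothetical */
    void mutator_mark_active(struct mutator *mut);    /* hypothetical */

    static void* call_without_gc(struct mutator *mut,
                                 void* (*f)(void*), void *data) {
      mutator_mark_inactive(mut);  /* GC can proceed without this thread */
      void *ret = f(data);
      mutator_mark_active(mut);    /* rejoin; may wait for in-progress GC */
      return ret;
    }
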
Andy Wingo
680032fa89 Minor stop-the-world optimizations. There are still bugs
Probably should switch to using a semaphore; no need to reacquire the
lock on wakeup.
2022-03-29 15:47:19 +02:00
Andy Wingo
d879a01913 Remove gcbench in favor of mt-gcbench. Update quads 2022-03-29 15:12:56 +02:00
Andy Wingo
5522d827e3 mt-gcbench: write the "j" field in the binary tree nodes. 2022-03-29 15:12:56 +02:00
Andy Wingo
ac57e01e31 BDW doesn't have mutator-local freelists for pointerless objects 2022-03-29 15:12:56 +02:00
Andy Wingo
e837d51f53 mark-sweep collector allows parallel mutators 2022-03-29 15:12:56 +02:00
Andy Wingo
ded3b3c7a3 Update parallel marker API to use struct gcobj 2022-03-29 15:12:56 +02:00
Andy Wingo
14529f11e9 mark-sweep: add global small object freelist
This will be useful to collect data when sweeping, if a mutator doesn't
need those objects.
2022-03-29 15:11:56 +02:00
Andy Wingo
2d1e76eccc mark-sweep: remote markers can send roots via mark buffers
When you have multiple mutators (perhaps many more than marker threads),
they can mark their roots in parallel, but they can't enqueue them on
the same mark queue concurrently: mark queues are single-producer,
multiple-consumer queues.  Therefore, mutator threads will collect grey
roots from their own root sets, and then send them to the mutator that
is controlling GC, for it to add to the mark queue (somehow).
2022-03-29 15:07:59 +02:00
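
A minimal sketch of a mark buffer as described: grey roots gathered
locally by one mutator, handed as a unit to the thread controlling GC,
which alone produces onto the mark queue. Fields are illustrative:

    #include <stddef.h>

    struct gcobj;

    struct mark_buf {
      struct gcobj **objs;  /* grey roots gathered by one mutator */
      size_t len, cap;
    };
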
Andy Wingo
be90f7ba49 mark-sweep: Remove context, use mark space instead
This is the end of a series of refactors before adding thread-local
allocation.
2022-03-29 15:07:32 +02:00
Andy Wingo
9b0bc6e975 mark-sweep: Update markers to deal in heap and spaces
This will let us get rid of "struct context".
2022-03-29 15:06:28 +02:00
Andy Wingo
2401732e31 mark-sweep: mutator data structure separate from heap
This will allow thread-local allocation buffers.
2022-03-29 15:05:59 +02:00
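
A sketch of a per-thread mutator owning its own allocation pointers,
which is what makes thread-local allocation buffers possible; the fields
are illustrative:

    #include <stdint.h>

    struct heap;

    struct mutator {
      uintptr_t alloc;       /* bump pointer into this thread's buffer */
      uintptr_t limit;       /* end of the buffer */
      struct heap *heap;     /* back-pointer to shared heap state */
      struct mutator *next;  /* linked into the heap's mutator list */
    };
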