mirror of https://git.savannah.gnu.org/git/guile.git synced 2025-05-15 10:10:21 +02:00
Commit graph

156 commits

Author SHA1 Message Date
Andy Wingo
053dbf0b61 Pass heap to tracer functions
This will allow conservative intra-heap edges.  Hopefully no overhead?
2022-10-25 14:25:55 +02:00
Andy Wingo
5e986e84e9 Update README 2022-10-03 16:12:06 +02:00
Andy Wingo
703bb30e19 add conservative makefile targets 2022-10-03 16:09:21 +02:00
Andy Wingo
1944b54a19 Whippet can trace conservative roots
Next up, enabling it via the makefiles.
2022-10-03 16:09:21 +02:00
Andy Wingo
deed415a06 Whippet captures stack when stopping mutators
This is part of work to enable conservative GC.
2022-10-03 16:09:21 +02:00
Andy Wingo
d2bde8319f Add conservative stack capture
This isn't really wired up yet anywhere, but add a precursor to
conservative stack scanning.
2022-10-03 16:09:21 +02:00
Andy Wingo
a5b1a66d21 Add platform abstraction
This will allow us to iterate conservative roots from stacks and static
data segments.
2022-10-03 16:09:21 +02:00
Andy Wingo
05d2c95950 mt-gcbench: Only crash when tracing holes for precise GC 2022-10-03 16:09:21 +02:00
Andy Wingo
e328346bbd Refactor alignment utilities in whippet.c
Add align_up and align_down helpers.
2022-10-03 15:37:16 +02:00
Andy Wingo
24bd94d9f7 Fix race condition in computation of mark-while-stopping
Choose the ragged stop strategy when the GC kind is determined, so that
we do so with respect to a single measurement of pending unavailable
bytes.

Also remove assert in heap_should_mark_while_stopping, as it can be
called after stopping too, when evacuation is enabled.
2022-10-03 15:37:16 +02:00
Andy Wingo
1e3122d054 trace_worker_steal first does a try_pop on its own deque
Before asking other threads for values, see if there is any pending data
that overflowed from the local mark stack.
2022-10-03 15:37:16 +02:00
Andy Wingo
8b8ddaf6e2 work-stealing optimization: stay with last-stolen worker
Previously we were always going round-robin.  Now a thief tries to
plunder its victim again directly.  Should result in less churn.
2022-10-03 15:37:16 +02:00
Andy Wingo
56aad402c9 Fix bug in try_pop on chase-lev deque
The counters are unsigned, so that they can overflow.  (Is that really
necessary though?)  In any case try_pop can decrement a counter, leading
to a situation where you can think you have (size_t)-1 elements; not
good.  Instead when computing the queue size, use a signed value.
Limits total queue size to half the unsigned space; fine.
2022-10-03 15:37:16 +02:00
Andy Wingo
f77cf923c1 Fix parallel tracer for gc_ref API change 2022-10-03 15:37:01 +02:00
Andy Wingo
1228e346fa Fix semi-space collector for refactor 2022-10-03 15:37:01 +02:00
Andy Wingo
8a51117763 Rework pinning, prepare for conservative tracing
We don't need a pin bit: we just need to mark pinned objects before
evacuation starts.  This way we can remove the stopping / marking race
so that we can always mark while stopping.
2022-08-22 21:11:30 +02:00
Andy Wingo
2199d5f48d Excise struct gcobj 2022-08-16 23:21:16 +02:00
Andy Wingo
6ecf226570 More typesafety, more gc_ref 2022-08-16 22:48:46 +02:00
Andy Wingo
92b8f1e917 Add gc_ prefix to struct heap, struct mutator 2022-08-16 21:35:16 +02:00
Andy Wingo
b082f5f50d Separate compilation!!!!! 2022-08-16 17:54:15 +02:00
Andy Wingo
fe9bdf6397 Separate out embedder API from mt-gcbench, quads 2022-08-16 16:09:36 +02:00
Andy Wingo
112f27b77b Simplify GC attributes for the inline allocator
Don't require pulling in all of gc-api.h.
2022-08-16 16:00:06 +02:00
Andy Wingo
8a111256c6 Compile with -fvisibility=hidden; will be good for separate compilation 2022-08-16 12:04:56 +02:00
Andy Wingo
9e8940e59f Get handles out of collectors 2022-08-16 11:53:32 +02:00
Andy Wingo
607585e7f0 Add whippet-inline.h 2022-08-16 10:25:23 +02:00
Andy Wingo
33aa5230da Add bdw-inline.h 2022-08-15 18:30:42 +02:00
Andy Wingo
8f2f4f7c69 API-ify gc_print_stats; add semi-inline.h 2022-08-15 18:17:18 +02:00
Andy Wingo
a00c83878e Inline post-allocation actions 2022-08-15 16:06:52 +02:00
Andy Wingo
a75842be90 Mostly implementation-independent inline allocation
This is a step towards separate compilation of the GC without losing
performance.  Only remaining task is the write barrier.
2022-08-15 11:17:15 +02:00
Andy Wingo
4d8a7169d0 Add inline to gc-api.h 2022-08-14 09:18:21 +02:00
Andy Wingo
fb71c4c363 Separate tagging from collector
The collector now has an abstract interface onto the embedder.  The
embedder has to supply some functionality, such as tracing and
forwarding.  This is a pretty big change in terms of lines but it's
supposed to have no functional or performance change.
2022-08-12 16:44:38 +02:00
Andy Wingo
cacc28b577 Always add a header onto objects
We're targeting systems that need to be able to inspect the kind of an
object, so this information has to be somewhere.  If it's out-of-line,
we might save memory, but we would lose locality.  Concretely in Guile
the tag bits are in the object itself.
2022-08-09 16:14:47 +02:00
Andy Wingo
d8bcbf2d74 More API-ification 2022-08-09 11:35:31 +02:00
Andy Wingo
4ccb489869 Set fixed heap size, parallelism via explicit options 2022-08-09 11:21:02 +02:00
Andy Wingo
2e6dde66b3 Attempt to start creating a proper API 2022-08-09 09:49:51 +02:00
Andy Wingo
c824f17bd9 Rename gc-types.h to gc-api.h 2022-08-08 11:08:36 +02:00
Andy Wingo
67f9c89f2a Use fragmentation_low_threshold for venerable_threshold
This way fragmentation from venerable blocks doesn't cause the collector
to keep evacuating.
2022-08-04 11:32:06 +02:00
Andy Wingo
0450a282dd Skip mostly-tenured blocks during sweep/allocate after minor GC 2022-08-04 09:04:27 +02:00
Andy Wingo
0fe13e1cab Accelerate scanning of remembered set 2022-08-03 21:25:18 +02:00
Andy Wingo
47c07dd0eb Fix embarrassing ctz issue 2022-08-03 16:40:34 +02:00
Andy Wingo
8f6a2692ab Update README 2022-08-03 12:13:25 +02:00
Andy Wingo
96b68095b7 Fix mark pattern updating for generational whippet
After a minor collection, we were erroneously failing to sweep dead
objects with the survivor tag.
2022-08-03 12:06:19 +02:00
Andy Wingo
0210a8caf0 Refactor out-of-memory detection
Firstly, we add a priority evacuation reserve to prioritize having a few
evacuation blocks on hand.  Otherwise if we give them all to big
allocations first and we have a fragmented heap, we won't be able to
evacuate that fragmented heap to give more blocks to the large
allocations.

Secondly, we remove `enum gc_reason`.  The issue is that with multiple
mutator threads, the precise thread triggering GC does not provide much
information.  Instead we should make choices on how to collect based on
the state of the heap.

Finally, we move detection of out-of-memory inside the collector,
instead of the allocator.

Together, these changes let mt-gcbench (with fragmentation) operate in
smaller heaps.
2022-08-03 10:10:33 +02:00
Andy Wingo
1358d99abc Fix yield calculation after evacuating collections 2022-08-02 22:15:13 +02:00
Andy Wingo
a4e1f55f37 Implement generational collection
Not really battle-tested but it seems to work.  Need to implement
heuristics for when to do generational vs full-heap GC.
2022-08-02 15:37:02 +02:00
Andy Wingo
13b3bb5b24 Update barrier functions to also have the object being written
Also remove read barriers, as they were unused, and we have no plans to
use them.
2022-08-02 15:37:02 +02:00
Andy Wingo
7f405c929e Initial live mask does not include young allocations
After rotation, the young bit wasn't being included anyway.  This just
improves the first collection.
2022-08-02 15:37:02 +02:00
Andy Wingo
1781c5aed4 Fix evacuation allocator to clear any holes 2022-08-02 15:36:59 +02:00
Andy Wingo
22a9cc87a0 Update TODO 2022-07-20 14:40:47 +02:00
Andy Wingo
279309b821 mt-gcbench allocates garbage between live data
This obviously invalidates previous benchmark results; perhaps we should
make this optional.
2022-07-20 14:40:47 +02:00