mirror of https://git.savannah.gnu.org/git/guile.git synced 2025-05-15 10:10:21 +02:00
Commit graph

68 commits

Author SHA1 Message Date
Andy Wingo
3ee2009de9 Move a lot of mark_space state to heap 2022-04-18 20:56:48 +02:00
Andy Wingo
119e273fa4 Rename mark-sweep "markers" to "tracers"
There could be reasons other than marking to trace the heap.
2022-04-18 15:19:55 +02:00
Andy Wingo
19f7f72b68 Rename mark-sweep "large" objects to "medium" 2022-04-18 10:00:44 +02:00
Andy Wingo
3f54fb3dbf Fix semispace page stealing
Ensure the number of stolen pages is even.  Avoid madvising on every
collection.  Cache the page size.
2022-04-17 21:51:20 +02:00
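
A minimal sketch in C of the two mechanical fixes named above, with illustrative names that are not taken from the repository:

    /* Hypothetical sketch: cache the page size instead of querying it on
       every collection, and round a stolen-page count down to an even
       number so the two semispaces stay the same size. */
    #include <stddef.h>
    #include <unistd.h>

    static size_t cached_page_size(void) {
      static size_t page_size = 0;        /* queried from the OS only once */
      if (!page_size)
        page_size = (size_t) sysconf(_SC_PAGESIZE);
      return page_size;
    }

    static size_t even_stolen_page_count(size_t npages) {
      return npages & ~(size_t)1;         /* round down to an even count */
    }
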
Andy Wingo
3315fc7477 Add large object space to semi-space collector 2022-04-14 22:20:27 +02:00
Andy Wingo
619a49ba41 Add large object space
Not wired up yet.
2022-04-13 21:43:18 +02:00
Andy Wingo
d425620d37 Add address map and set 2022-04-12 21:41:26 +02:00
Andy Wingo
b0b4c4d893 Remove unneeded files 2022-03-31 09:24:54 +02:00
Andy Wingo
54ce801c72 Update README now that we have parallel mutators 2022-03-30 23:21:45 +02:00
Andy Wingo
a1dbbfd6ae Speed up sweeping for small objects
When sweeping for small objects of a known size, instead of fitting
swept regions into the largest available bucket size, eagerly break
the regions into cells of the requested size.  Throw away any fragmented space;
the next collection will get it.

When allocating small objects, just look in the size-segmented freelist;
don't grovel in other sizes on the global freelist.  The thought is that
we only add to the global freelists when allocating large objects, and
in that case some fragmentation is OK.  Perhaps this is the wrong
dynamic.

Reclaim 32 kB at a time instead of 1 kB.  This helps remove scalability
bottlenecks.
2022-03-30 23:15:29 +02:00
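
A sketch of the eager-breaking idea under assumed data structures (a simple intrusive freelist, not the collector's real one): it chops a swept region into cells of one size class and discards the trailing fragment, as the message describes.

    #include <stdint.h>
    #include <stddef.h>

    struct freelist_node { struct freelist_node *next; };

    static void break_region_into_cells(struct freelist_node **freelist,
                                        uintptr_t region, size_t region_bytes,
                                        size_t cell_bytes) {
      size_t ncells = region_bytes / cell_bytes;   /* trailing fragment discarded */
      for (size_t i = 0; i < ncells; i++) {
        struct freelist_node *cell =
          (struct freelist_node *) (region + i * cell_bytes);
        cell->next = *freelist;                    /* push onto size-class freelist */
        *freelist = cell;
      }
    }
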
Andy Wingo
6300203738 Add call_without_gc API
This lets us call pthread_join safely.
2022-03-29 21:58:52 +02:00
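
The commit only names the API; the sketch below illustrates the intended usage pattern with an assumed signature and a stub body, so the shape may differ from the real call_without_gc.

    /* Assumed signature; the stub just calls f directly, whereas the real
       version would park the mutator at a safepoint first so that a
       stop-the-world collection can proceed while f blocks. */
    #include <pthread.h>
    #include <stddef.h>

    struct mutator;                        /* opaque per-thread GC state */

    static void* call_without_gc(struct mutator *mut,
                                 void* (*f)(void *), void *data) {
      (void) mut;
      return f(data);
    }

    struct join_data { pthread_t thread; };

    static void* do_join(void *data) {
      struct join_data *jd = data;
      pthread_join(jd->thread, NULL);      /* may block indefinitely */
      return NULL;
    }

    static void join_thread(struct mutator *mut, pthread_t thread) {
      struct join_data jd = { thread };
      call_without_gc(mut, do_join, &jd);  /* GC can run while we block */
    }
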
Andy Wingo
680032fa89 Minor stop-the-world optimizations. There are still bugs
Probably should switch to using a semaphore; no need to reacquire the
lock on wakeup.
2022-03-29 15:47:19 +02:00
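
A sketch of the semaphore idea floated in the message, with hypothetical names: sem_post wakes a paused mutator without forcing it to reacquire a mutex, unlike pthread_cond_wait, which returns with the lock held.

    #include <semaphore.h>

    struct pause_token { sem_t resume; };

    static void pause_token_init(struct pause_token *t) {
      sem_init(&t->resume, /*pshared=*/0, /*value=*/0);
    }

    static void mutator_wait_for_resume(struct pause_token *t) {
      sem_wait(&t->resume);     /* blocks without holding any heap lock */
    }

    static void collector_resume_mutator(struct pause_token *t) {
      sem_post(&t->resume);     /* wake the paused mutator */
    }
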
Andy Wingo
d879a01913 Remove gcbench in favor of mt-gcbench. Update quads 2022-03-29 15:12:56 +02:00
Andy Wingo
5522d827e3 mt-gcbench: write the "j" field in the binary tree nodes. 2022-03-29 15:12:56 +02:00
Andy Wingo
ac57e01e31 BDW doesn't have mutator-local freelists for pointerless objects 2022-03-29 15:12:56 +02:00
Andy Wingo
e837d51f53 mark-sweep collector allows parallel mutators 2022-03-29 15:12:56 +02:00
Andy Wingo
ded3b3c7a3 Update parallel marker API to use struct gcobj 2022-03-29 15:12:56 +02:00
Andy Wingo
14529f11e9 mark-sweep: add global small object freelist
This will be useful for collecting freed objects during sweeping, when
the sweeping mutator doesn't need them itself.
2022-03-29 15:11:56 +02:00
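
One hypothetical shape for such a global freelist, with invented names: one list head per small size class, behind a lock, that the sweeper can deposit cells into when it doesn't need them.

    #include <pthread.h>

    #define SMALL_SIZE_CLASSES 16

    struct freelist_node { struct freelist_node *next; };

    struct global_small_freelists {
      pthread_mutex_t lock;
      struct freelist_node *by_size_class[SMALL_SIZE_CLASSES];
    };

    static struct global_small_freelists small_freelists = {
      .lock = PTHREAD_MUTEX_INITIALIZER,
    };

    static void global_freelist_push(struct global_small_freelists *g,
                                     unsigned size_class,
                                     struct freelist_node *cell) {
      pthread_mutex_lock(&g->lock);
      cell->next = g->by_size_class[size_class];
      g->by_size_class[size_class] = cell;       /* deposit swept cell */
      pthread_mutex_unlock(&g->lock);
    }
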
Andy Wingo
2d1e76eccc mark-sweep: remote markers can send roots via mark buffers
When you have multiple mutators -- perhaps many more than marker threads
-- they can mark their roots in parallel, but they can't enqueue them on
the same mark queue concurrently, because mark queues are single-producer,
multiple-consumer queues.  Therefore, mutator threads collect grey
roots from their own root sets and then send them to the mutator that
is controlling GC, for it to add to the mark queue (somehow).
2022-03-29 15:07:59 +02:00
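
A sketch of the per-mutator grey-root buffer described above, with invented names and error handling omitted; only the GC-controlling mutator would later drain such buffers into the single-producer mark queue.

    #include <stdlib.h>

    struct gcobj;

    struct mark_buffer {
      struct gcobj **roots;
      size_t size, capacity;
    };

    static void mark_buffer_push(struct mark_buffer *buf, struct gcobj *root) {
      if (buf->size == buf->capacity) {
        buf->capacity = buf->capacity ? buf->capacity * 2 : 64;
        buf->roots = realloc(buf->roots,
                             buf->capacity * sizeof(struct gcobj *));
      }
      buf->roots[buf->size++] = root;     /* grey root recorded locally */
    }
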
Andy Wingo
be90f7ba49 mark-sweep: Remove context, use mark space instead
This is the end of a series of refactors before adding thread-local
allocation.
2022-03-29 15:07:32 +02:00
Andy Wingo
9b0bc6e975 mark-sweep: Update markers to deal in heap and spaces
This will let us get rid of "struct context".
2022-03-29 15:06:28 +02:00
Andy Wingo
2401732e31 mark-sweep: mutator data structure separate from heap
This will allow thread-local allocation buffers.
2022-03-29 15:05:59 +02:00
Andy Wingo
61d38e4205 Refactor mark-sweep to send mutator to collect()
This will let the mutator hold a pointer to the heap.
2022-03-29 15:05:33 +02:00
Andy Wingo
edd46d8fe2 Start to adapt mark-sweep collector for separate heap/mutator
The current hack is that the mutator contains the heap.  We'll relax
this later on.
2022-03-29 15:04:53 +02:00
Andy Wingo
5a92b43e94 Change serial marker to deal in struct gcobj* instead of uintptr
"struct gcobj*" is how we denote live objects, and the marker will only
see live objects.
2022-03-29 15:04:20 +02:00
Andy Wingo
81037fd6d2 Convert semi-space collector to new API 2022-03-28 22:18:25 +02:00
Andy Wingo
06a213d1ed Adapt GC API to have separate heap and mutator structs
Only BDW is adapted, so far.
2022-03-28 20:49:24 +02:00
Andy Wingo
883a761775 Stub out support for multiple mutator threads on semi, mark-sweep
For semi we will probably never implement support for multiple mutator
threads.  We will do local freelists for mark-sweep, though.
2022-03-20 21:03:26 +01:00
Andy Wingo
a654a790b9 Add inline allocation for small objects for bdw-gc 2022-03-18 22:57:41 +01:00
Andy Wingo
d63288048c Add mt-gcbench 2022-03-18 16:19:42 +01:00
Andy Wingo
e703568857 Remove heap-stretching phase
We should separate evaluation of the heap-stretching heuristics from the
evaluation of the GC itself; otherwise our analysis of the GC
will be too sensitive to the details of the final heap size.  In any case
this doesn't affect the results, as we already specified the heap size
precisely.
2022-03-18 14:36:48 +01:00
Andy Wingo
4b7fb84ba0 gcbench takes heap multiplier on command line 2022-03-18 14:29:59 +01:00
Andy Wingo
887bdd5441 Clean up gcbench naming, to be consistent 2022-03-18 09:42:20 +01:00
Andy Wingo
7dda5b992d Refactor pop_handle to not take the handle 2022-03-16 21:36:21 +01:00
Andy Wingo
32ddaa7624 Allocate GC context in GC-managed heap 2022-03-16 21:31:51 +01:00
Andy Wingo
f04b0bbd45 Simplify output of quads test 2022-03-16 14:28:49 +01:00
Andy Wingo
e7a3f83bcc Add quads benchmark
Also expand GC interface with "allocate_pointerless".  Limit lazy
sweeping to the allocation size that is causing the sweep, without
adding to fragmentation.
2022-03-16 14:16:22 +01:00
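
The "allocate_pointerless" entry point is only named here; the sketch below shows the kind of call it enables, with an assumed signature and a calloc stand-in rather than the collector's real allocator.

    #include <stddef.h>
    #include <stdlib.h>

    struct mutator;                      /* opaque per-thread GC state */

    /* Assumed shape: like the ordinary allocator, but the collector never
       traces the result, so it must not contain heap pointers. */
    static void* allocate_pointerless(struct mutator *mut, size_t bytes) {
      (void) mut;
      return calloc(1, bytes);           /* stand-in for the real allocator */
    }

    static double* make_sample_buffer(struct mutator *mut, size_t n) {
      /* Plain numeric data is a good candidate: nothing to trace. */
      return allocate_pointerless(mut, n * sizeof(double));
    }
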
Andy Wingo
aac0faf4cf Refactor type definitions 2022-03-16 09:05:31 +01:00
Andy Wingo
a1b4311cfc Update status 2022-03-13 21:55:58 +01:00
Andy Wingo
a693c4ea8a Bugfix to mark-sweep
Before this, the last sweep would cause a premature GC.
2022-03-13 21:45:20 +01:00
Andy Wingo
fddd4d9416 Hey parallel marking is finally an improvement?? 2022-03-13 21:38:59 +01:00
Andy Wingo
4d7041bfa9 Another attempt at parallel marking, avoiding the channel
Not great though!
2022-03-13 13:54:58 +01:00
Andy Wingo
7ce07de670 First crack at parallel marking 2022-03-12 21:09:17 +01:00
Andy Wingo
9c89672c88 Put a local mark queue in front of the work-stealing queue 2022-03-11 11:57:14 +01:00
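
The title describes a two-level structure; the sketch below is one plausible shape with invented names and a stand-in shared queue: pushes go to a small fixed-size local array first and spill to the stealable queue only when it fills, and pops prefer the local array for locality.

    #include <stddef.h>

    struct gcobj;

    /* Stand-in for the shared work-stealing queue's push; in the real
       collector this would publish the object to other marker threads. */
    static void shared_queue_push(struct gcobj *obj) { (void) obj; }

    #define LOCAL_MARK_QUEUE_SIZE 1024

    struct local_mark_queue {
      struct gcobj *objs[LOCAL_MARK_QUEUE_SIZE];
      size_t count;
    };

    static void local_mark_push(struct local_mark_queue *q, struct gcobj *obj) {
      if (q->count == LOCAL_MARK_QUEUE_SIZE)
        shared_queue_push(obj);           /* overflow to the stealable queue */
      else
        q->objs[q->count++] = obj;        /* cheap, uncontended fast path */
    }

    static struct gcobj* local_mark_pop(struct local_mark_queue *q) {
      return q->count ? q->objs[--q->count] : NULL;  /* LIFO for locality */
    }
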
Andy Wingo
df9edfdff2 Remove tiny objects from mark-sweep 2022-03-11 11:48:26 +01:00
Andy Wingo
f57a1b8a55 Refactor to separate gcbench from gc 2022-03-11 11:48:26 +01:00
Andy Wingo
77ac530360 Add beginnings of parallel marker 2022-03-11 11:48:26 +01:00
Andy Wingo
01d3f9627e Further accelerate sweeping 2022-03-11 11:48:17 +01:00
Andy Wingo
f6ac9d2571 Ability to set heap size on command line 2022-03-11 11:48:04 +01:00
Andy Wingo
5edc4fa81a More efficient sweep 2022-03-11 11:44:11 +01:00