Andy Wingo
c824f17bd9
Rename gc-types.h to gc-api.h
2022-08-08 11:08:36 +02:00
Andy Wingo
67f9c89f2a
Use fragmentation_low_threshold for venerable_threshold
...
This way, fragmentation from venerable blocks doesn't cause the
collector to keep evacuating.
2022-08-04 11:32:06 +02:00
Andy Wingo
0450a282dd
Skip mostly-tenured blocks during sweep/allocate after minor GC
2022-08-04 09:04:27 +02:00
Andy Wingo
0fe13e1cab
Accelerate scanning of remembered set
2022-08-03 21:25:18 +02:00
Andy Wingo
47c07dd0eb
Fix embarrassing ctz issue
2022-08-03 16:40:34 +02:00
Andy Wingo
8f6a2692ab
Update README
2022-08-03 12:13:25 +02:00
Andy Wingo
96b68095b7
Fix mark pattern updating for generational Whippet
...
After a minor collection, we were erroneously failing to sweep dead
objects with the survivor tag.
2022-08-03 12:06:19 +02:00
Andy Wingo
0210a8caf0
Refactor out-of-memory detection
...
First, we add a priority evacuation reserve so that a few evacuation
blocks are always on hand. Otherwise, if we give them all to big
allocations first and the heap is fragmented, we won't be able to
evacuate that fragmented heap to free up more blocks for the large
allocations.
Second, we remove `enum gc_reason`. With multiple mutator threads, the
identity of the thread that triggers GC doesn't convey much
information; instead we should decide how to collect based on the state
of the heap.
Finally, we move out-of-memory detection into the collector rather than
the allocator.
Together, these changes let mt-gcbench (with fragmentation) operate in
smaller heaps.
2022-08-03 10:10:33 +02:00
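A minimal sketch of the priority evacuation reserve idea, in C; the
`block_list`, `mark_space`, and `acquire_empty_block` names are
hypothetical, and the policy for sizing the reserve is left out:

```c
#include <stddef.h>

struct block { struct block *next; };
struct block_list { struct block *head; size_t count; };

static struct block *block_list_pop(struct block_list *list) {
  struct block *b = list->head;
  if (b) { list->head = b->next; list->count--; }
  return b;
}

struct mark_space {
  struct block_list empties;
  size_t evacuation_reserve; /* empty blocks held back for evacuation */
};

/* Large allocations may not dip into the reserve; the evacuator may. */
static struct block *acquire_empty_block(struct mark_space *space,
                                         int for_evacuation) {
  if (!for_evacuation && space->empties.count <= space->evacuation_reserve)
    return NULL; /* caller should trigger GC or grow the heap instead */
  return block_list_pop(&space->empties);
}
```

The point is only that large allocations see a smaller pool of empty
blocks than the evacuator does, so a fragmented heap can still be
defragmented.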
Andy Wingo
1358d99abc
Fix yield calculation after evacuating collections
2022-08-02 22:15:13 +02:00
Andy Wingo
a4e1f55f37
Implement generational collection
...
Not really battle-tested but it seems to work. Need to implement
heuristics for when to do generational vs full-heap GC.
2022-08-02 15:37:02 +02:00
Andy Wingo
13b3bb5b24
Update barrier functions to also take the object being written
...
Also remove read barriers, as they were unused, and we have no plans to
use them.
2022-08-02 15:37:02 +02:00
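To illustrate what passing the written-to object enables, here is a
hedged sketch of a generational barrier using a sequential store
buffer; the `remembered_set` type, the nursery check, and the
`gc_write_barrier` signature are hypothetical illustrations, not
Whippet's actual API:

```c
#include <stdint.h>
#include <stddef.h>

struct gc_ref { uintptr_t value; };
struct gc_edge { struct gc_ref *dst; };

static uintptr_t nursery_lo, nursery_hi; /* set at heap initialization */

static int ref_is_young(struct gc_ref ref) {
  return ref.value - nursery_lo < nursery_hi - nursery_lo;
}

/* A simple bounded log of written-to objects (a sequential store buffer). */
struct remembered_set { struct gc_ref log[1024]; size_t len; };

static void remembered_set_flush(struct remembered_set *set) {
  /* Hand the logged objects to the collector (omitted), then reset. */
  set->len = 0;
}

/* With the object in hand, an old->young write can remember the object
   itself, not just the mutated edge. */
static inline void gc_write_barrier(struct remembered_set *set,
                                    struct gc_ref obj, struct gc_edge edge,
                                    struct gc_ref new_val) {
  *edge.dst = new_val; /* perform the write */
  if (!ref_is_young(obj) && ref_is_young(new_val)) {
    if (set->len == sizeof set->log / sizeof set->log[0])
      remembered_set_flush(set);
    set->log[set->len++] = obj;
  }
}
```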
Andy Wingo
7f405c929e
Initial live mask does not include young allocations
...
After rotation, the young bit wasn't being included anyway. This just
improves the first collection.
2022-08-02 15:37:02 +02:00
Andy Wingo
1781c5aed4
Fix evacuation allocator to clear any holes
2022-08-02 15:36:59 +02:00
Andy Wingo
22a9cc87a0
Update TODO
2022-07-20 14:40:47 +02:00
Andy Wingo
279309b821
mt-gcbench allocates garbage between live data
...
This obviously invalidates previous benchmark results; perhaps we should
make this optional.
2022-07-20 14:40:47 +02:00
Andy Wingo
d106f3ca71
Mutator collects evacuation target blocks
2022-07-20 14:40:47 +02:00
Andy Wingo
92b05a6310
Add implementation of parallel evacuation
2022-07-20 14:40:47 +02:00
Andy Wingo
4a9908bc4d
Refactor evacuation vs pinning support
...
Marking conservative roots in place effectively prohibits them from
being moved, and we need to trace the roots anyway to discover which
objects are conservatively referenced. There is therefore no need for
a pin bit.
2022-07-20 14:40:47 +02:00
Andy Wingo
a16bb1833c
Add logic to compute evacuation candidate blocks
2022-07-20 14:40:47 +02:00
Andy Wingo
c7c8fa2d32
Refactor to add "block_list" type
2022-07-20 14:40:47 +02:00
Andy Wingo
a8214af467
Whippet reserves a bit in object kind for forwarding
...
Tags without the bit are forwarding addresses.
2022-07-20 14:40:47 +02:00
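A hypothetical encoding of that reserved bit, assuming object headers
are word-sized and forwarding addresses are aligned (so their low bit
is zero); the actual bit position and header layout are
collector-specific:

```c
#include <stdint.h>

/* One reserved bit in the object-kind header distinguishes real kind
   tags from forwarding addresses installed during evacuation. */
#define HEADER_TAG_BIT ((uintptr_t)1)

static inline int header_is_forwarded(uintptr_t header) {
  /* Kind tags always carry the reserved bit; aligned forwarding
     addresses have it clear. */
  return (header & HEADER_TAG_BIT) == 0;
}

static inline uintptr_t header_forwarding_address(uintptr_t header) {
  return header; /* the whole word is the new address */
}

static inline void header_forward(uintptr_t *header, uintptr_t new_addr) {
  *header = new_addr; /* NEW_ADDR is aligned, so the tag bit reads 0 */
}
```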
Andy Wingo
8409383ee1
Refactor post-collection for mark space
2022-07-20 14:40:47 +02:00
Andy Wingo
69caead182
Add heuristics to choose when to compact or mark in place
...
We can choose either to compact (evacuate) or to mark in place; the
choice affects how we mark, since evacuation must forward references
to moved objects.
2022-07-20 14:40:47 +02:00
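A minimal sketch of such a heuristic, assuming yield and fragmentation
are tracked as fractions of the heap (see the two entries that
follow); the thresholds are hypothetical tunables:

```c
enum gc_kind { GC_KIND_MARK_IN_PLACE, GC_KIND_COMPACT };

/* Evacuate when fragmentation is high and the last collection's yield
   was low; otherwise mark in place. */
static enum gc_kind choose_gc_kind(double fragmentation, double yield,
                                   double fragmentation_high_threshold,
                                   double yield_low_threshold) {
  if (fragmentation > fragmentation_high_threshold
      && yield < yield_low_threshold)
    return GC_KIND_COMPACT;
  return GC_KIND_MARK_IN_PLACE;
}
```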
Andy Wingo
09d2df1626
Compute GC yield as fraction of total heap size
2022-07-20 14:40:47 +02:00
Andy Wingo
c998f1cd5c
Measure fragmentation as fraction of total heap size
...
This allows the mark space to be relatively more fragmented when the
majority of the heap is taken up by the lospace.
2022-07-20 14:40:47 +02:00
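Together with the previous entry, both ratios are taken against total
heap size (mark space plus large object space). A minimal sketch, with
hypothetical names:

```c
#include <stddef.h>

/* yield = bytes freed by a collection, as a fraction of the heap. */
static double compute_yield(size_t bytes_freed, size_t heap_size) {
  return heap_size ? (double)bytes_freed / (double)heap_size : 0.0;
}

/* fragmentation = bytes lost to holes in live blocks, as a fraction
   of the heap. */
static double compute_fragmentation(size_t bytes_lost_to_holes,
                                    size_t heap_size) {
  return heap_size ? (double)bytes_lost_to_holes / (double)heap_size : 0.0;
}
```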
Andy Wingo
7af8bb6bd0
Add machinery to disable ragged-stop marking
...
Once we support evacuation, we'll need to disable the optimization
whereby mutators mark their own stacks.
2022-07-20 14:40:47 +02:00
Andy Wingo
e4342f6c45
Add helper for yielding in a spinlock
2022-07-20 14:40:47 +02:00
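A minimal sketch of such a helper, assuming POSIX `sched_yield` and an
arbitrary spin budget; a spinlock's acquire loop would call it on each
failed attempt:

```c
#include <sched.h>
#include <stddef.h>

/* Busy-wait a little, then yield the CPU so the lock holder can make
   progress.  The budget of 16 is an assumed tunable. */
static inline void yield_for_spin(size_t *spin_count) {
  if ((*spin_count)++ >= 16)
    sched_yield();
}
```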
Andy Wingo
52166fe286
Add gc_edge data structure
...
Less casting in user programs, and it's a step on the way to
evacuation in Whippet.
2022-07-20 14:40:47 +02:00
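A sketch of the shape of such a data structure: an edge names a
location holding a reference, so the collector can update that
location in place when the referent moves. The helper names follow the
commit's `gc_edge`, but the exact layout here is illustrative:

```c
#include <stdint.h>

struct gc_ref { uintptr_t value; };

/* An edge is the address of a slot containing a reference. */
struct gc_edge { struct gc_ref *dst; };

static inline struct gc_edge gc_edge(void *addr) {
  return (struct gc_edge){ (struct gc_ref *)addr };
}

static inline struct gc_ref gc_edge_ref(struct gc_edge edge) {
  return *edge.dst;
}

/* Evacuation rewrites the slot to point at the object's new home. */
static inline void gc_edge_update(struct gc_edge edge, struct gc_ref ref) {
  *edge.dst = ref;
}
```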
Andy Wingo
808d365f4b
We identify empty blocks lazily now
2022-07-20 14:40:47 +02:00
Andy Wingo
bc73c5ad02
Whitespace fix
2022-07-20 14:40:47 +02:00
Andy Wingo
157d40466b
mark_space_reacquire_memory updates pending_unavailable_bytes
2022-07-20 14:40:47 +02:00
Andy Wingo
33a3af2c73
Large object space properly acquires blocks from mark space
...
If the mutator finds completely empty blocks, it sets them aside. The
large object space acquires those empty blocks, sweeping them if
needed, and causes them to be unmapped, which may in turn trigger GC.
2022-07-20 14:40:47 +02:00
Andy Wingo
71b656bca4
When sweeping, return empty blocks to global freelist
...
This will make it easier to manage the blocks held in reserve for
defragmentation, as well as the blocks to unmap when compensating for
large object allocations.
2022-07-20 14:40:47 +02:00
Andy Wingo
8f06b914b0
Refactor to allow "next" pointer embedded in block summary
2022-07-20 14:40:47 +02:00
Andy Wingo
7d80d45c79
Rename mark-sweep.h to whippet.h
2022-07-20 14:40:44 +02:00
Andy Wingo
061d92d125
Update README
2022-05-15 22:06:41 +02:00
Andy Wingo
69d7ff83dd
More wording
2022-05-11 22:29:37 +02:00
Andy Wingo
c39e26159d
Some README updates
2022-05-11 22:25:09 +02:00
Andy Wingo
7ac0b5bb4b
More precise heap size control
...
No longer clamped to 4 MB boundaries. Not important in production but
very important for comparing against other collectors.
2022-05-11 21:19:26 +02:00
Andy Wingo
fa3b7bd1b3
Add global yield and fragmentation computation
2022-05-09 22:03:50 +02:00
Andy Wingo
3bc81b1654
Collect per-block statistics
...
This will let us compute fragmentation.
2022-05-09 21:46:27 +02:00
Andy Wingo
7461b2d5c3
Be more permissive with heap multiplier
...
Also, if there's an error, print the right argument.
2022-05-06 15:08:24 +02:00
Andy Wingo
815f206e28
Optimize sweeping
...
Use `uint64_t` instead of `uintptr_t` when bulk-reading metadata bytes. Assume
that live objects come in plugs rather than each object being separated
by a hole. Always bulk-load metadata bytes when measuring holes, and be
less branchy. Lazily clear hole bytes as we allocate. Add a place to
record lost space due to fragmentation.
2022-05-06 15:07:43 +02:00
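A sketch of the bulk-loading idea, assuming little-endian byte order
and the GCC/Clang `__builtin_ctzll`; all names are hypothetical:

```c
#include <stdint.h>
#include <string.h>
#include <stddef.h>

/* Load 8 metadata bytes at once instead of testing byte by byte. */
static inline uint64_t load_metadata_word(const uint8_t *metadata) {
  uint64_t word;
  memcpy(&word, metadata, sizeof word);
  return word;
}

/* Count leading dead granules (zero metadata bytes) in one word,
   without a per-byte branch. */
static inline size_t count_dead_granules(uint64_t word) {
  if (!word) return 8; /* the whole word is a hole */
  /* On little-endian layouts, the first set bit's index divided by 8
     is the byte index of the first live granule. */
  return (size_t)__builtin_ctzll(word) / 8;
}
```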
Andy Wingo
0d0d684952
Mark-sweep does bump-pointer allocation into holes
...
Instead of freelists, have mark-sweep use the metadata byte array to
identify holes, and bump-pointer allocate into those holes.
2022-05-01 17:07:30 +02:00
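A minimal sketch of hole identification plus bump-pointer allocation
over a metadata byte array; the granule size and all names are
assumptions:

```c
#include <stdint.h>
#include <stddef.h>

#define GRANULE_SIZE 16 /* an assumed granule size */

struct bump_allocator { uintptr_t hole_start, hole_end; };

/* Find the next hole: skip live (nonzero) metadata bytes, then measure
   the run of dead (zero) bytes.  Returns the hole's first granule. */
static size_t next_hole(const uint8_t *metadata, size_t granules,
                        size_t from, size_t *hole_granules) {
  while (from < granules && metadata[from]) from++;
  size_t start = from;
  while (from < granules && !metadata[from]) from++;
  *hole_granules = from - start;
  return start;
}

/* Bump-pointer allocate out of the current hole. */
static void *bump_allocate(struct bump_allocator *alloc, size_t bytes) {
  size_t aligned = (bytes + GRANULE_SIZE - 1) & ~(size_t)(GRANULE_SIZE - 1);
  if (alloc->hole_start + aligned > alloc->hole_end)
    return NULL; /* hole exhausted; find the next hole */
  void *ret = (void *)alloc->hole_start;
  alloc->hole_start += aligned;
  return ret;
}
```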
Andy Wingo
f51e969730
Use atomics when sweeping
...
Otherwise, there is a race with concurrent marking, though possibly just
during the ragged stop.
2022-05-01 16:23:10 +02:00
Andy Wingo
2a68dadf22
Accelerate sweeping
...
Read a word at a time from the mark byte array. If the mark word
doesn't correspond to live data, there will be no contention, and we
can clear it with one write.
2022-05-01 16:09:20 +02:00
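A sketch combining this entry with the atomics fix above: test eight
mark bytes at once and fall back to per-byte atomic stores only when
the word contains live data. The broadcast-mask trick and names are
illustrative; a real implementation would declare the bytes atomic
rather than casting:

```c
#include <stdatomic.h>
#include <stdint.h>
#include <string.h>
#include <stddef.h>

/* Sweep 8 mark bytes at a time.  If no byte carries the live mask, no
   concurrent marker will touch the word, so one plain store clears all
   8 granules. */
static void sweep_word(uint8_t *mark, uint8_t live_mask) {
  uint64_t word;
  memcpy(&word, mark, sizeof word);
  uint64_t broadcast = 0x0101010101010101ULL * live_mask;
  if (!(word & broadcast)) {
    memset(mark, 0, sizeof word); /* no live data: uncontended */
    return;
  }
  /* Live data present: clear dead bytes individually with atomics to
     avoid racing with concurrent marking. */
  for (size_t i = 0; i < sizeof word; i++)
    if (!(mark[i] & live_mask))
      atomic_store_explicit((_Atomic uint8_t *)&mark[i], 0,
                            memory_order_relaxed);
}
```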
Andy Wingo
ce69e9ed4c
Record object sizes in metadata byte array
...
This will let us avoid paging in objects when sweeping.
2022-05-01 15:19:13 +02:00
Andy Wingo
3a04078044
mark-sweep uses all the metadata bits
...
Don't require that mark bytes be cleared; instead we have rotating
colors. This is the beginning of support for concurrent marking,
pinning, conservative roots, and generational collection.
2022-05-01 15:04:21 +02:00
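A two-color sketch of rotating marks; judging by the generational
entries above, the real collector also needs a survivor color, which
this sketch omits:

```c
#include <stdint.h>

/* Two metadata bits serve as alternating mark "colors".  After a
   collection, flip which color means live instead of clearing every
   mark byte; bytes still carrying the stale color are dead. */
enum { MARK_BIT_0 = 1u << 0, MARK_BIT_1 = 1u << 1 };

static uint8_t current_live_mask = MARK_BIT_0;

static void rotate_mark_color(void) {
  current_live_mask ^= (MARK_BIT_0 | MARK_BIT_1); /* 01 <-> 10 */
}

static int metadata_byte_is_live(uint8_t byte) {
  return (byte & current_live_mask) != 0;
}
```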
Andy Wingo
f97906421e
Sweep by block, not by slab
...
This lets mutators run in parallel. There is currently a bug, however:
a race between stopping mutators marking their roots and other
mutators still sweeping. Will fix in a follow-up.
2022-05-01 14:46:36 +02:00
Andy Wingo
83bf1d8cf3
Fix bug ensuring zeroed memory
...
If the granule size is bigger than a pointer, we were leaving the first
granule uncleared.
2022-05-01 14:45:25 +02:00