If the mutator finds completely empty blocks, it sets them aside.
The large object space acquires empty blocks, sweeping if needed, and
has them unmapped, which may in turn trigger a GC.
This port is of limited use if it cannot be used reliably. Rather than
behaving as if the input has finished when it ends unexpectedly, raise
an exception.
* module/web/http.scm (make-chunked-input-port): Raise an exception on
premature termination.
(&chunked-input-ended-prematurely): New exception type.
(chunked-input-ended-prematurely-error?): New procedure.
* test-suite/tests/web-http.test (pass-if-named-exception): Rename to
pass-if-any-exception.
(pass-if-named-exception): New syntax.
("Exception on premature chunk end"): New test for this behaviour.
Signed-off-by: Ludovic Courtès <ludo@gnu.org>
The chunked transfer encoding specifies that the chunked body ends with
a CRLF. This is in addition to the CRLF at the end of the last chunk,
so there should be two CRLFs at the end of the chunked body:
https://datatracker.ietf.org/doc/html/rfc2616#section-3.6.1
* module/web/http.scm (make-chunked-input-port): Read two extra bytes at
the end of the chunked input.
(make-chunked-output-port): Write the missing \r\n when closing the
port.
* test-suite/tests/web-http.test (chunked encoding): Add missing \r\n to
test data.
Signed-off-by: Ludovic Courtès <ludo@gnu.org>
* module/web/http.scm (write-credentials): Capitalize the authorization
header scheme. The standard treats the scheme as case-insensitive,
however most libraries out there expect the scheme to be capitalized,
which is the form actually used in the RFC
docs (e.g. https://datatracker.ietf.org/doc/html/rfc7617#section-2). Some
libraries even reject a lowercase scheme, making Guile incompatible
with them.
Signed-off-by: Ludovic Courtès <ludo@gnu.org>
The code coverage function `coverage-data->lcov` has a documented
`modules` argument, however that was missing from the source. I have
added it so that, when supplied, only the coverage data for the
supplied modules is converted. If not supplied, it defaults to the old
behaviour of including all the modules currently loaded.
* module/system/vm/coverage.scm (coverage-data->lcov): Add #:modules
parameter and honor it.
Signed-off-by: Ludovic Courtès <ludo@gnu.org>
The current socket address constructors all assume that there are no
null bytes in the socket path. This assumption does not hold on Linux,
which uses an initial null byte to demarcate abstract sockets and
ignores all further null bytes [1].
[1] https://www.man7.org/linux/man-pages/man7/unix.7.html
* libguile/sockets.c (scm_fill_sockaddr)[HAVE_UNIX_DOMAIN_SOCKETS]:
Use scm_to_locale_stringn to construct c_address.
Use memcpy instead of strcpy and calculate size directly instead of
using SUN_LEN.
(_scm_from_sockaddr): Copy the entire path up to the limits imposed by
addr_size.
* test-suite/tests/00-socket.test: ("make-socket-address"): Add case for
abstract unix sockets.
("AF_UNIX/SOCK_STREAM"): Add abstract socket versions of bind, listen,
connect and accept.
Signed-off-by: Ludovic Courtès <ludo@gnu.org>
NetBSD and pkgsrc have been using an empty vendor string since the
mid-'90s, such as x86_64--netbsd. pkgsrc has been carrying around a
workaround just for the Guile build for a long time. (Before that,
NetBSD omitted the vendor altogether, so if x86_64 had existed then it
might have been `x86_64-netbsd', but that caused more problems.)
This change makes Guile accept an empty vendor string so workarounds
are no longer necessary.
* module/system/base/target.scm (validate-target): Allow empty vendor
string in GNU target triplets.
* test-suite/tests/cross-compilation.test ("cross-compilation"): Add
tests for "x86_64--netbsd".
Co-authored-by: Ludovic Courtès <ludo@gnu.org>
Guile (3.0.8) reports a compilation error when cond-expand tries to
check existence of a missing library:
scheme@(guile-user)> (define-library (test)
(cond-expand
((library (scheme sort))
(import (scheme sort)))))
While compiling expression:
no code for module (scheme sort)
It looks like bug #40252 was not fully eliminated.
Also, (library ...) cannot handle module names like (srfi 1), though
(import (srfi 1)) works fine. For example, this code fails:
scheme@(guile-user)> (define-library (test)
(cond-expand
((library (srfi 1))
(import (srfi 1)))))
While compiling expression:
In procedure symbol->string: Wrong type argument in position 1
(expecting symbol): 1
There are probably other cases where (library ...) and (import ...) do
not work identically: (library ...) uses resolve-interface while
(import ...) uses resolve-r6rs-interface.
This patch fixes both issues.
* module/ice-9/r7rs-libraries.scm (define-library): Replace
'resolve-interface' call by 'resolve-r6rs-interface', wrapped in
'cond-expand'.
Signed-off-by: Ludovic Courtès <ludo@gnu.org>
The problem with callr is that the register that contains the
function to be called can be overwritten by the logic that moves
the values into argument registers. To fix this, I added a
get_callr_temp function that should return a platform-specific
register that is not used to pass arguments. For Aarch64/Arm the
link register seems to work; for Amd64/i686, RAX. The
function/temp pair becomes an additional argument to the
parallel assignment; this way the original function register is not
accidentally overwritten.
The problem with calli is that it may not have enough temp
registers to move arguments. The windmill paper says that at most
one temporary register is needed for the parallel assignment.
However, we also need a temp register for mem-to-mem moves. So it
seems that we need a second temporary. For Amd64/i686 we have
only one temporary GPR and one temporary FPR. To fix this, I
modified the algorithm from the paper a bit: we perform the
mem-to-mem moves before the other moves. Later when we need the
temp to break cycles, there shouldn't be any mem-to-mem moves
left. So we should never need two temps at the same time.
* lightening/lightening.c (get_callr_temp): New function; needed
for each platform.
(prepare_call_args): Include the function/callr_temp pair in the
arguments for the parallel assignment.
* lightening/x86.c, lightening/arm.c, lightening/aarch64.c
(get_callr_temp): Implementation for each platform.
* lightening/arm.c (next_abi_arg): Fix the stack size for doubles.
* tests/call_10_2.c, tests/callr_10.c: New tests.
* tests/regarrays.inc: New file. Common code between the above two
tests that would be tedious to duplicate.
Use uint64 instead of uintptr when bulk-reading metadata bytes. Assume
that live objects come in plugs rather than each object being separated
by a hole. Always bulk-load metadata bytes when measuring holes, and be
less branchy. Lazily clear hole bytes as we allocate. Add a place to
record lost space due to fragmentation.
Read a word at a time from the mark byte array. If the mark word
doesn't correspond to live data there will be no contention and we can
clear it with one write.
Don't require that mark bytes be cleared; instead we have rotating
colors. Beginnings of support for concurrent marking, pinning,
conservative roots, and generational collection.
This lets mutators run in parallel. However, there is currently a bug:
a race between stopping mutators while they mark their roots and other
mutators that are still sweeping. Will fix in a followup.
There are 4 MB aligned slabs, divided into 64 kB pages. (On 32-bit this
will be 2 MB and 32 kB.) Then you can get the mark byte for a granule
from the slab base plus the granule offset. The unused slack that would correspond to
mark bytes for the blocks used *by* the mark bytes is used for other
purposes: remembered sets (not yet used), block summaries (not used),
and a slab header (likewise).
Probably the collector should use 8-byte granules on 32-bit, but for now
we're working with 64-bit sizes. Since we don't (and never did) pack
pages with same-sized small objects, there is no need to make sure that
small object sizes fit evenly into the medium object threshold; just keep
packed freelists. This is a simplification that lets us reclaim the
tail of a region in constant time rather than looping through the size
classes.
This will let us partition the mark space into chunks of 32 or 64 kB, as
we won't need to allocate chunk-spanning objects. This will improve
sweeping parallelism and is a step on the way to immix.