@c -*-texinfo-*-
@c This is part of the GNU Guile Reference Manual.
@c Copyright (C) 1996, 1997, 2000, 2001, 2002, 2003, 2004, 2007, 2009, 2010, 2012, 2013
@c Free Software Foundation, Inc.
@c See the file guile.texi for copying conditions.

@node Scheduling
@section Threads, Mutexes, Asyncs and Dynamic Roots

@menu
* Asyncs::                      Asynchronous procedure invocation.
* Atomics::                     Atomic references.
* Threads::                     Multiple threads of execution.
* Mutexes and Condition Variables:: Synchronization primitives.
* Blocking::                    How to block properly in guile mode.
* Critical Sections::           Avoiding concurrency and reentries.
* Fluids and Dynamic States::   Thread-local variables, etc.
* Parameters::                  Dynamic scoping in Scheme.
* Futures::                     Fine-grain parallelism.
* Parallel Forms::              Parallel execution of forms.
@end menu

@node Asyncs
@subsection Asyncs

@cindex asyncs

Asyncs are a means of deferring the execution of Scheme code until it is
safe to do so.

Asyncs are integrated into the core of Guile. A running Guile program
will periodically check if there are asyncs to run, invoking them as
needed. For example, it is not possible to execute Scheme code in a
POSIX signal handler, but in Guile a signal handler can enqueue an async
to be executed in the near future, when it is safe to do so.

Asyncs can also be queued for threads other than the current one. This
way, you can cause threads to asynchronously execute arbitrary code.

To cause the future asynchronous execution of a procedure in a given
thread, use @code{system-async-mark}.

Automatic invocation of asyncs can be temporarily disabled by calling
@code{call-with-blocked-asyncs}. This function works by temporarily
increasing the @emph{async blocking level} of the current thread while a
given procedure is running. The blocking level starts out at zero, and
whenever a safe point is reached, a blocking level greater than zero
will prevent the execution of queued asyncs.

Analogously, the procedure @code{call-with-unblocked-asyncs} will
temporarily decrease the blocking level of the current thread. You
can use it when you want to disable asyncs by default and only allow
them temporarily.

In addition to the C versions of @code{call-with-blocked-asyncs} and
@code{call-with-unblocked-asyncs}, C code can use
@code{scm_dynwind_block_asyncs} and @code{scm_dynwind_unblock_asyncs}
inside a @dfn{dynamic context} (@pxref{Dynamic Wind}) to block or
unblock asyncs temporarily.

@deffn {Scheme Procedure} system-async-mark proc [thread]
@deffnx {C Function} scm_system_async_mark (proc)
@deffnx {C Function} scm_system_async_mark_for_thread (proc, thread)
Mark @var{proc} (a procedure with zero arguments) for future execution
in @var{thread}. When @var{proc} has already been marked for
@var{thread} but has not been executed yet, this call has no effect.
When @var{thread} is omitted, the thread that called
@code{system-async-mark} is used.

As we mentioned above, Scheme signal handlers are already called within
an async and so can run any Scheme code. However, note that the C
functions are not safe to call from C signal handlers. Use
@code{scm_sigaction} or @code{scm_sigaction_for_thread} to install
signal handlers.
@end deffn
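
For example, here is a minimal sketch of queueing work with
@code{system-async-mark}; the worker thread and the messages are purely
illustrative:

@example
(use-modules (ice-9 threads))

;; Defer a thunk in the current thread; it runs at the next safe point.
(system-async-mark (lambda () (display "deferred\n")))

;; Queue a thunk for another thread.  It runs once that thread reaches
;; a safe point, for instance when its sleep is interrupted.
(define worker (call-with-new-thread (lambda () (sleep 10))))
(system-async-mark (lambda () (display "hello from worker\n")) worker)
@end example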

@deffn {Scheme Procedure} call-with-blocked-asyncs proc
@deffnx {C Function} scm_call_with_blocked_asyncs (proc)
Call @var{proc} and block the execution of asyncs by one level for the
current thread while it is running. Return the value returned by
@var{proc}. Both of these variants call @var{proc} with no arguments;
the C variant below takes a @var{data} argument to pass to @var{proc}.
@end deffn
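
A short sketch of protecting a piece of code from asyncs; the body shown
here is only illustrative:

@example
(define items '())

;; Asyncs marked while the thunk runs are deferred until it returns.
(call-with-blocked-asyncs
  (lambda ()
    ;; Update related state without being interrupted by an async.
    (set! items (cons 'new-item items))
    (length items)))
@end example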

@deftypefn {C Function} {void *} scm_c_call_with_blocked_asyncs (void * (*proc) (void *data), void *data)
The same but with a C function @var{proc} instead of a Scheme thunk.
@end deftypefn

@deffn {Scheme Procedure} call-with-unblocked-asyncs proc
@deffnx {C Function} scm_call_with_unblocked_asyncs (proc)
Call @var{proc} and unblock the execution of asyncs by one level for the
current thread while it is running. Return the value returned by
@var{proc}. Both of these variants call @var{proc} with no arguments;
the C variant below takes a @var{data} argument to pass to @var{proc}.
@end deffn

@deftypefn {C Function} {void *} scm_c_call_with_unblocked_asyncs (void *(*proc) (void *data), void *data)
The same but with a C function @var{proc} instead of a Scheme thunk.
@end deftypefn

@deftypefn {C Function} void scm_dynwind_block_asyncs ()
During the current dynwind context, increase the blocking of asyncs by
one level. This function must be used inside a pair of calls to
@code{scm_dynwind_begin} and @code{scm_dynwind_end} (@pxref{Dynamic
Wind}).
@end deftypefn

@deftypefn {C Function} void scm_dynwind_unblock_asyncs ()
During the current dynwind context, decrease the blocking of asyncs by
one level. This function must be used inside a pair of calls to
@code{scm_dynwind_begin} and @code{scm_dynwind_end} (@pxref{Dynamic
Wind}).
@end deftypefn

@node Atomics
@subsection Atomics

When accessing data in parallel from multiple threads, updates made by
one thread are not generally guaranteed to be visible to another thread.
It could be that your hardware requires special instructions to be
emitted to propagate a change from one CPU core to another. Or, it
could be that your hardware updates values with a sequence of
instructions, and a parallel thread could see a value that is in the
process of being updated but not fully updated.

Atomic references solve this problem. Atomics are a standard, primitive
facility to allow for concurrent access and update of mutable variables
from multiple threads with guaranteed forward-progress and well-defined
intermediate states.

Atomic references serve not only as a hardware memory barrier but also
as a compiler barrier. Normally a compiler might choose to reorder or
elide certain memory accesses due to optimizations like common
subexpression elimination. Atomic accesses however will not be
reordered relative to each other, and normal memory accesses will not be
reordered across atomic accesses.

As an implementation detail, currently all atomic accesses and updates
use the sequential consistency memory model from C11. We may relax this
in the future to acquire/release semantics, which still issue a
memory barrier so that non-atomic updates are not reordered across
atomic accesses or updates.

To use Guile's atomic operations, load the @code{(ice-9 atomic)} module:

@example
(use-modules (ice-9 atomic))
@end example

@deffn {Scheme Procedure} make-atomic-box init
Return an atomic box initialized to value @var{init}.
@end deffn

@deffn {Scheme Procedure} atomic-box? obj
Return @code{#t} if @var{obj} is an atomic-box object, else
return @code{#f}.
@end deffn

@deffn {Scheme Procedure} atomic-box-ref box
Fetch the value stored in the atomic box @var{box} and return it.
@end deffn

@deffn {Scheme Procedure} atomic-box-set! box val
Store @var{val} into the atomic box @var{box}.
@end deffn

@deffn {Scheme Procedure} atomic-box-swap! box val
Store @var{val} into the atomic box @var{box}, and return the value that
was previously stored in the box.
@end deffn

@deffn {Scheme Procedure} atomic-box-compare-and-swap! box expected desired
If the value of the atomic box @var{box} is the same as @var{expected}
(in the sense of @code{eq?}), replace the contents of the box with
@var{desired}. Otherwise the box is not updated. Return the previous
value of the box in either case, so you can know if the swap worked by
checking if the return value is @code{eq?} to @var{expected}.
@end deffn
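
As an illustration, here is a sketch of the usual compare-and-swap retry
loop, an increment operation that retries until its update wins; the
procedure name is our own:

@example
(use-modules (ice-9 atomic))

(define counter (make-atomic-box 0))

;; Retry until the compare-and-swap succeeds, i.e. until the value we
;; read is still the value in the box at the moment of the swap.
(define (atomic-box-increment! box)
  (let loop ((old (atomic-box-ref box)))
    (let ((prev (atomic-box-compare-and-swap! box old (+ old 1))))
      (unless (eq? prev old)
        (loop prev)))))

(atomic-box-increment! counter)
(atomic-box-ref counter) @result{} 1
@end example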


@node Threads
@subsection Threads
@cindex threads
@cindex Guile threads
@cindex POSIX threads

Guile supports POSIX threads, unless it was configured with
@code{--without-threads} or the host lacks POSIX thread support. When
thread support is available, the @code{threads} feature is provided
(@pxref{Feature Manipulation, @code{provided?}}).

The procedures below manipulate Guile threads, which are wrappers around
the system's POSIX threads. For application-level parallelism, using
higher-level constructs, such as futures, is recommended
(@pxref{Futures}).

@deffn {Scheme Procedure} all-threads
@deffnx {C Function} scm_all_threads ()
Return a list of all threads.
@end deffn

@deffn {Scheme Procedure} current-thread
@deffnx {C Function} scm_current_thread ()
Return the thread that called this function.
@end deffn

@c begin (texi-doc-string "guile" "call-with-new-thread")
@deffn {Scheme Procedure} call-with-new-thread thunk [handler]
Call @var{thunk} in a new thread and with a new dynamic state,
returning the new thread. The procedure @var{thunk} is called via
@code{with-continuation-barrier}.

When @var{handler} is specified, then @var{thunk} is called from
within a @code{catch} with tag @code{#t} that has @var{handler} as its
handler. This catch is established inside the continuation barrier.

Once @var{thunk} or @var{handler} returns, the return value is made
the @emph{exit value} of the thread and the thread is terminated.
@end deffn

@deftypefn {C Function} SCM scm_spawn_thread (scm_t_catch_body body, void *body_data, scm_t_catch_handler handler, void *handler_data)
Call @var{body} in a new thread, passing it @var{body_data}, returning
the new thread. The function @var{body} is called via
@code{scm_c_with_continuation_barrier}.

When @var{handler} is non-@code{NULL}, @var{body} is called via
@code{scm_internal_catch} with tag @code{SCM_BOOL_T} that has
@var{handler} and @var{handler_data} as the handler and its data. This
catch is established inside the continuation barrier.

Once @var{body} or @var{handler} returns, the return value is made the
@emph{exit value} of the thread and the thread is terminated.
@end deftypefn

@deffn {Scheme Procedure} thread? obj
@deffnx {C Function} scm_thread_p (obj)
Return @code{#t} if @var{obj} is a thread; otherwise, return
@code{#f}.
@end deffn

@c begin (texi-doc-string "guile" "join-thread")
@deffn {Scheme Procedure} join-thread thread [timeout [timeoutval]]
@deffnx {C Function} scm_join_thread (thread)
@deffnx {C Function} scm_join_thread_timed (thread, timeout, timeoutval)
Wait for @var{thread} to terminate and return its exit value. Threads
that have not been created with @code{call-with-new-thread} or
@code{scm_spawn_thread} have an exit value of @code{#f}. When
@var{timeout} is given, it specifies a point in time where the waiting
should be aborted. It can be either an integer as returned by
@code{current-time} or a pair as returned by @code{gettimeofday}.
When the waiting is aborted, @var{timeoutval} is returned (if it is
specified; @code{#f} is returned otherwise).
@end deffn
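
For example, a small sketch of creating a thread and collecting its exit
value (the computation is arbitrary):

@example
(define t
  (call-with-new-thread
    (lambda ()
      (apply + (iota 1000)))))   ; runs in its own thread

(join-thread t) @result{} 499500
@end example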

@deffn {Scheme Procedure} thread-exited? thread
@deffnx {C Function} scm_thread_exited_p (thread)
Return @code{#t} if @var{thread} has exited, or @code{#f} otherwise.
@end deffn

@c begin (texi-doc-string "guile" "yield")
@deffn {Scheme Procedure} yield
If one or more threads are waiting to execute, calling @code{yield}
forces an immediate context switch to one of them. Otherwise,
@code{yield} has no effect.
@end deffn

@deffn {Scheme Procedure} cancel-thread thread
@deffnx {C Function} scm_cancel_thread (thread)
Asynchronously notify @var{thread} to exit. Immediately after
receiving this notification, @var{thread} will call its cleanup handler
(if one has been set) and then terminate, aborting any evaluation that
is in progress.

Because Guile threads are isomorphic with POSIX threads, @var{thread}
will not receive its cancellation signal until it reaches a cancellation
point. See your operating system's POSIX threading documentation for
more information on cancellation points; note that in Guile, unlike
native POSIX threads, a thread can receive a cancellation notification
while attempting to lock a mutex.
@end deffn

@deffn {Scheme Procedure} set-thread-cleanup! thread proc
@deffnx {C Function} scm_set_thread_cleanup_x (thread, proc)
Set @var{proc} as the cleanup handler for the thread @var{thread}.
@var{proc}, which must be a thunk, will be called when @var{thread}
exits, either normally or by being canceled. Thread cleanup handlers
can be used to perform useful tasks like releasing resources, such as
locked mutexes, when thread exit cannot be predicted.

The return value of @var{proc} will be set as the @emph{exit value} of
@var{thread}.

To remove a cleanup handler, pass @code{#f} for @var{proc}.
@end deffn

@deffn {Scheme Procedure} thread-cleanup thread
@deffnx {C Function} scm_thread_cleanup (thread)
Return the cleanup handler currently installed for the thread
@var{thread}. If no cleanup handler is currently installed,
@code{thread-cleanup} returns @code{#f}.
@end deffn

Higher level thread procedures are available by loading the
@code{(ice-9 threads)} module. These provide standardized
thread creation.

@deffn macro make-thread proc arg @dots{}
Apply @var{proc} to @var{arg} @dots{} in a new thread formed by
@code{call-with-new-thread} using a default error handler that displays
the error to the current error port. The @var{arg} @dots{}
expressions are evaluated in the new thread.
@end deffn

@deffn macro begin-thread expr1 expr2 @dots{}
Evaluate forms @var{expr1} @var{expr2} @dots{} in a new thread formed by
@code{call-with-new-thread} using a default error handler that displays
the error to the current error port.
@end deffn
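
A brief sketch of these convenience macros in use; the computations and
names are arbitrary:

@example
(use-modules (ice-9 threads))

;; Evaluate the body in a new thread and keep the thread object.
(define factorial-thread
  (begin-thread
    (apply * (iota 10 1))))      ; compute 10! in another thread

(join-thread factorial-thread) @result{} 3628800

;; make-thread instead applies a procedure to arguments.
(define t (make-thread display "hello from a new thread\n"))
@end example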

@node Mutexes and Condition Variables
@subsection Mutexes and Condition Variables
@cindex mutex
@cindex condition variable

A mutex is a thread synchronization object; it can be used by threads
to control access to a shared resource. A mutex can be locked to
indicate a resource is in use, and other threads can then block on the
mutex to wait for the resource (or can just test and do something else
if not available). ``Mutex'' is short for ``mutual exclusion''.

There are two types of mutexes in Guile, ``standard'' and
``recursive''. They're created by @code{make-mutex} and
@code{make-recursive-mutex} respectively; the operation functions are
then common to both.

Note that for both types of mutex there's no protection against a
``deadly embrace''. For instance, if one thread has locked mutex A and
is waiting on mutex B, but another thread owns B and is waiting on A,
then an endless wait will occur (in the current implementation).
Acquiring requisite mutexes in a fixed order (like always A before B)
in all threads is one way to avoid such problems.

@sp 1
@deffn {Scheme Procedure} make-mutex flag @dots{}
@deffnx {C Function} scm_make_mutex ()
@deffnx {C Function} scm_make_mutex_with_flags (SCM flags)
Return a new mutex. It is initially unlocked. If @var{flag} @dots{} is
specified, it must be a list of symbols specifying configuration flags
for the newly-created mutex. The supported flags are:
@table @code
@item unchecked-unlock
Unless this flag is present, a call to @code{unlock-mutex} on the
returned mutex when it is already unlocked will cause an error to be
signalled.

@item allow-external-unlock
Allow the returned mutex to be unlocked by the calling thread even if
it was originally locked by a different thread.

@item recursive
The returned mutex will be recursive.

@end table
@end deffn

@deffn {Scheme Procedure} mutex? obj
@deffnx {C Function} scm_mutex_p (obj)
Return @code{#t} if @var{obj} is a mutex; otherwise, return
@code{#f}.
@end deffn

@deffn {Scheme Procedure} make-recursive-mutex
@deffnx {C Function} scm_make_recursive_mutex ()
Create a new recursive mutex. It is initially unlocked. Calling this
function is equivalent to calling @code{make-mutex} and specifying the
@code{recursive} flag.
@end deffn

@deffn {Scheme Procedure} lock-mutex mutex [timeout [owner]]
@deffnx {C Function} scm_lock_mutex (mutex)
@deffnx {C Function} scm_lock_mutex_timed (mutex, timeout, owner)
Lock @var{mutex}. If the mutex is already locked, then block and
return only when @var{mutex} has been acquired.

When @var{timeout} is given, it specifies a point in time where the
waiting should be aborted. It can be either an integer as returned
by @code{current-time} or a pair as returned by @code{gettimeofday}.
When the waiting is aborted, @code{#f} is returned.

When @var{owner} is given, it specifies an owner for @var{mutex} other
than the calling thread. @var{owner} may also be @code{#f},
indicating that the mutex should be locked but left unowned.

For standard mutexes (@code{make-mutex}), an error is signalled if
the thread has itself already locked @var{mutex}.

For a recursive mutex (@code{make-recursive-mutex}), if the thread has
itself already locked @var{mutex}, then a further @code{lock-mutex}
call increments the lock count. An additional @code{unlock-mutex}
will be required to finally release.

If @var{mutex} was locked by a thread that exited before unlocking it,
the next attempt to lock @var{mutex} will succeed, but
@code{abandoned-mutex-error} will be signalled.

When an async (@pxref{Asyncs}) is activated for a thread blocked in
@code{lock-mutex}, the wait is interrupted and the async is executed.
When the async returns, the wait resumes.
@end deffn

@deftypefn {C Function} void scm_dynwind_lock_mutex (SCM mutex)
Arrange for @var{mutex} to be locked whenever the current dynwind
context is entered and to be unlocked when it is exited.
@end deftypefn

@deffn {Scheme Procedure} try-mutex mutex
@deffnx {C Function} scm_try_mutex (mutex)
Try to lock @var{mutex} as per @code{lock-mutex}. If @var{mutex} can
be acquired immediately then this is done and the return is @code{#t}.
If @var{mutex} is locked by some other thread then nothing is done and
the return is @code{#f}.
@end deffn

@deffn {Scheme Procedure} unlock-mutex mutex [condvar [timeout]]
@deffnx {C Function} scm_unlock_mutex (mutex)
@deffnx {C Function} scm_unlock_mutex_timed (mutex, condvar, timeout)
Unlock @var{mutex}. An error is signalled if @var{mutex} is not locked
and was not created with the @code{unchecked-unlock} flag set, or if
@var{mutex} is locked by a thread other than the calling thread and was
not created with the @code{allow-external-unlock} flag set.

If @var{condvar} is given, it specifies a condition variable upon
which the calling thread will wait to be signalled before returning.
(This behavior is very similar to that of
@code{wait-condition-variable}, except that the mutex is left in an
unlocked state when the function returns.)

When @var{timeout} is also given and not false, it specifies a point in
time where the waiting should be aborted. It can be either an integer
as returned by @code{current-time} or a pair as returned by
@code{gettimeofday}. When the waiting is aborted, @code{#f} is
returned. Otherwise the function returns @code{#t}.
@end deffn

@deffn {Scheme Procedure} mutex-owner mutex
@deffnx {C Function} scm_mutex_owner (mutex)
Return the current owner of @var{mutex}, in the form of a thread or
@code{#f} (indicating no owner). Note that a mutex may be unowned but
still locked.
@end deffn

@deffn {Scheme Procedure} mutex-level mutex
@deffnx {C Function} scm_mutex_level (mutex)
Return the current lock level of @var{mutex}. If @var{mutex} is
currently unlocked, this value will be 0; otherwise, it will be the
number of times @var{mutex} has been recursively locked by its current
owner.
@end deffn

@deffn {Scheme Procedure} mutex-locked? mutex
@deffnx {C Function} scm_mutex_locked_p (mutex)
Return @code{#t} if @var{mutex} is locked, regardless of ownership;
otherwise, return @code{#f}.
@end deffn

@deffn {Scheme Procedure} make-condition-variable
@deffnx {C Function} scm_make_condition_variable ()
Return a new condition variable.
@end deffn

@deffn {Scheme Procedure} condition-variable? obj
@deffnx {C Function} scm_condition_variable_p (obj)
Return @code{#t} if @var{obj} is a condition variable; otherwise,
return @code{#f}.
@end deffn

@deffn {Scheme Procedure} wait-condition-variable condvar mutex [time]
@deffnx {C Function} scm_wait_condition_variable (condvar, mutex, time)
Wait until @var{condvar} has been signalled. While waiting,
@var{mutex} is atomically unlocked (as with @code{unlock-mutex}) and
is locked again when this function returns. When @var{time} is given,
it specifies a point in time where the waiting should be aborted. It
can be either an integer as returned by @code{current-time} or a pair
as returned by @code{gettimeofday}. When the waiting is aborted,
@code{#f} is returned. When the condition variable has in fact been
signalled, @code{#t} is returned. The mutex is re-locked in any case
before @code{wait-condition-variable} returns.

When an async is activated for a thread that is blocked in a call to
@code{wait-condition-variable}, the waiting is interrupted, the mutex is
locked, and the async is executed. When the async returns, the mutex is
unlocked again and the waiting is resumed. When the thread blocks while
re-acquiring the mutex, execution of asyncs is blocked.
@end deffn

@deffn {Scheme Procedure} signal-condition-variable condvar
@deffnx {C Function} scm_signal_condition_variable (condvar)
Wake up one thread that is waiting for @var{condvar}.
@end deffn

@deffn {Scheme Procedure} broadcast-condition-variable condvar
@deffnx {C Function} scm_broadcast_condition_variable (condvar)
Wake up all threads that are waiting for @var{condvar}.
@end deffn
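
As an illustration, here is a sketch of the usual pattern for waiting on
a condition: a mutex protects some shared state, and the condition
variable signals changes to it. The variable and procedure names are our
own.

@example
(define lock (make-mutex))
(define ready (make-condition-variable))
(define item #f)

;; Wait until a producer has stored a value in ITEM.  Waiting happens
;; in a loop, re-checking the condition after every wakeup.
(define (consume)
  (lock-mutex lock)
  (let loop ()
    (unless item
      (wait-condition-variable ready lock)
      (loop)))
  (let ((value item))
    (set! item #f)
    (unlock-mutex lock)
    value))

;; Store a value and wake up one waiting consumer.
(define (produce value)
  (lock-mutex lock)
  (set! item value)
  (signal-condition-variable ready)
  (unlock-mutex lock))
@end example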

@sp 1
The following are higher level operations on mutexes. These are
available from

@example
(use-modules (ice-9 threads))
@end example

@deffn macro with-mutex mutex body1 body2 @dots{}
Lock @var{mutex}, evaluate the body @var{body1} @var{body2} @dots{},
then unlock @var{mutex}. The return value is that returned by the last
body form.

The lock, body and unlock form the branches of a @code{dynamic-wind}
(@pxref{Dynamic Wind}), so @var{mutex} is automatically unlocked if an
error or new continuation exits the body, and is re-locked if
the body is re-entered by a captured continuation.
@end deffn
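
For example, a sketch of serializing access to a shared counter (the
variable names are our own):

@example
(use-modules (ice-9 threads))

(define counter 0)
(define counter-lock (make-mutex))

(define (increment-counter!)
  ;; Only one thread at a time may update COUNTER.
  (with-mutex counter-lock
    (set! counter (+ counter 1))))
@end example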

@deffn macro monitor body1 body2 @dots{}
Evaluate the body form @var{body1} @var{body2} @dots{} with a mutex
locked so only one thread can execute that code at any one time. The
return value is the return from the last body form.

Each @code{monitor} form has its own private mutex and the locking and
evaluation is as per @code{with-mutex} above. A standard mutex
(@code{make-mutex}) is used, which means the body must not
recursively re-enter the @code{monitor} form.

The term ``monitor'' comes from operating system theory, where it
means a particular bit of code managing access to some resource and
which only ever executes on behalf of one process at any one time.
@end deffn


@node Blocking
@subsection Blocking in Guile Mode

Up to Guile version 1.8, a thread blocked in guile mode would prevent
the garbage collector from running. Thus threads had to explicitly
leave guile mode with @code{scm_without_guile ()} before making a
potentially blocking call such as a mutex lock, a @code{select ()}
system call, etc. The following functions could be used to temporarily
leave guile mode or to perform some common blocking operations in a
supported way.

Starting from Guile 2.0, blocked threads no longer hinder garbage
collection. Thus, the functions below are not needed anymore. They can
still be used to inform the GC that a thread is about to block, giving
it a (small) optimization opportunity for ``stop the world'' garbage
collections, should they occur while the thread is blocked.

@deftypefn {C Function} {void *} scm_without_guile (void *(*func) (void *), void *data)
Leave guile mode, call @var{func} on @var{data}, enter guile mode and
return the result of calling @var{func}.

While a thread has left guile mode, it must not call any libguile
functions except @code{scm_with_guile} or @code{scm_without_guile} and
must not use any libguile macros. Also, local variables of type
@code{SCM} that are allocated while not in guile mode are not
protected from the garbage collector.

When used from non-guile mode, calling @code{scm_without_guile} is
still allowed: it simply calls @var{func}. In that way, you can leave
guile mode without having to know whether the current thread is in
guile mode or not.
@end deftypefn

@deftypefn {C Function} int scm_pthread_mutex_lock (pthread_mutex_t *mutex)
Like @code{pthread_mutex_lock}, but leaves guile mode while waiting for
the mutex.
@end deftypefn

@deftypefn {C Function} int scm_pthread_cond_wait (pthread_cond_t *cond, pthread_mutex_t *mutex)
@deftypefnx {C Function} int scm_pthread_cond_timedwait (pthread_cond_t *cond, pthread_mutex_t *mutex, struct timespec *abstime)
Like @code{pthread_cond_wait} and @code{pthread_cond_timedwait}, but
leaves guile mode while waiting for the condition variable.
@end deftypefn

@deftypefn {C Function} int scm_std_select (int nfds, fd_set *readfds, fd_set *writefds, fd_set *exceptfds, struct timeval *timeout)
Like @code{select} but leaves guile mode while waiting. Also, the
delivery of an async causes this function to be interrupted with error
code @code{EINTR}.
@end deftypefn

@deftypefn {C Function} {unsigned int} scm_std_sleep ({unsigned int} seconds)
Like @code{sleep}, but leaves guile mode while sleeping. Also, the
delivery of an async causes this function to be interrupted.
@end deftypefn

@deftypefn {C Function} {unsigned long} scm_std_usleep ({unsigned long} usecs)
Like @code{usleep}, but leaves guile mode while sleeping. Also, the
delivery of an async causes this function to be interrupted.
@end deftypefn


@node Critical Sections
@subsection Critical Sections

@deffn {C Macro} SCM_CRITICAL_SECTION_START
@deffnx {C Macro} SCM_CRITICAL_SECTION_END
These two macros can be used to delimit a critical section.
Syntactically, they are both statements and need to be followed
immediately by a semicolon.

Executing @code{SCM_CRITICAL_SECTION_START} will lock a recursive mutex
and block the execution of asyncs. Executing
@code{SCM_CRITICAL_SECTION_END} will unblock the execution of asyncs
and unlock the mutex. Thus, the code that executes between these
two macros can only be executed in one thread at any one time and no
asyncs will run. However, because the mutex is a recursive one, the
code might still be reentered by the same thread. You must either allow
for this or avoid it, and either way requires careful coding.

On the other hand, critical sections delimited with these macros can
be nested since the mutex is recursive.

You must make sure that for each @code{SCM_CRITICAL_SECTION_START},
the corresponding @code{SCM_CRITICAL_SECTION_END} is always executed.
This means, for example, that no non-local exit (such as a signalled
error) may happen between the two.
@end deffn

@deftypefn {C Function} void scm_dynwind_critical_section (SCM mutex)
Call @code{scm_dynwind_lock_mutex} on @var{mutex} and call
@code{scm_dynwind_block_asyncs}. When @var{mutex} is false, a recursive
mutex provided by Guile is used instead.

The effect of a call to @code{scm_dynwind_critical_section} is that
the current dynwind context (@pxref{Dynamic Wind}) turns into a
critical section. Because of the locked mutex, no second thread can
enter it concurrently, and because of the blocked asyncs, no async
can reenter it from the current thread.

When the current thread reenters the critical section anyway, the kind
of @var{mutex} determines what happens: When @var{mutex} is recursive,
the reentry is allowed. When it is a normal mutex, an error is
signalled.
@end deftypefn


@node Fluids and Dynamic States
@subsection Fluids and Dynamic States

@cindex fluids

A @emph{fluid} is an object that can store one value per @emph{dynamic
state}. Each thread has a current dynamic state, and when accessing a
fluid, this current dynamic state is used to provide the actual value.
In this way, fluids can be used for thread local storage, but they are
in fact more flexible: dynamic states are objects of their own and can
be made current for more than one thread at the same time, or only be
made current temporarily, for example.

Fluids can also be used to simulate the desirable effects of
dynamically scoped variables. Dynamically scoped variables are useful
when you want to set a variable to a value during some dynamic extent
in the execution of your program and have it revert to its
original value when the control flow is outside of this dynamic
extent. See the description of @code{with-fluids} below for details.

New fluids are created with @code{make-fluid} and @code{fluid?} is
used for testing whether an object is actually a fluid. The values
stored in a fluid can be accessed with @code{fluid-ref} and
@code{fluid-set!}.

@deffn {Scheme Procedure} make-fluid [dflt]
@deffnx {C Function} scm_make_fluid ()
@deffnx {C Function} scm_make_fluid_with_default (dflt)
Return a newly created fluid, whose initial value is @var{dflt}, or
@code{#f} if @var{dflt} is not given.
Fluids are objects that can hold one
value per dynamic state. That is, modifications to this value are
only visible to code that executes with the same dynamic state as
the modifying code. When a new dynamic state is constructed, it
inherits the values from its parent. Because each thread normally executes
with its own dynamic state, you can use fluids for thread local storage.
@end deffn

@deffn {Scheme Procedure} make-unbound-fluid
@deffnx {C Function} scm_make_unbound_fluid ()
Return a new fluid that is initially unbound (instead of being
implicitly bound to some definite value).
@end deffn

@deffn {Scheme Procedure} fluid? obj
@deffnx {C Function} scm_fluid_p (obj)
Return @code{#t} if @var{obj} is a fluid; otherwise, return
@code{#f}.
@end deffn

@deffn {Scheme Procedure} fluid-ref fluid
@deffnx {C Function} scm_fluid_ref (fluid)
Return the value associated with @var{fluid} in the current
dynamic root. If @var{fluid} has not been set, then return
its default value. Calling @code{fluid-ref} on an unbound fluid produces
a runtime error.
@end deffn

@deffn {Scheme Procedure} fluid-set! fluid value
@deffnx {C Function} scm_fluid_set_x (fluid, value)
Set the value associated with @var{fluid} in the current dynamic root.
@end deffn

@deffn {Scheme Procedure} fluid-unset! fluid
@deffnx {C Function} scm_fluid_unset_x (fluid)
Disassociate the given fluid from any value, making it unbound.
@end deffn

@deffn {Scheme Procedure} fluid-bound? fluid
@deffnx {C Function} scm_fluid_bound_p (fluid)
Return @code{#t} if the given fluid is bound to a value, otherwise
@code{#f}.
@end deffn

@code{with-fluids*} temporarily changes the values of one or more fluids,
so that the given procedure and each procedure called by it access the
given values. After the procedure returns, the old values are restored.

@deffn {Scheme Procedure} with-fluid* fluid value thunk
@deffnx {C Function} scm_with_fluid (fluid, value, thunk)
Set @var{fluid} to @var{value} temporarily, and call @var{thunk}.
@var{thunk} must be a procedure with no argument.
@end deffn

@deffn {Scheme Procedure} with-fluids* fluids values thunk
@deffnx {C Function} scm_with_fluids (fluids, values, thunk)
Set @var{fluids} to @var{values} temporarily, and call @var{thunk}.
@var{fluids} must be a list of fluids and @var{values} must be a list
of the corresponding values, of the same length. Each substitution is
done in the order given. @var{thunk} must be a procedure with no
argument. It is called inside a @code{dynamic-wind} and the fluids are
set/restored when control enters or leaves the established dynamic
extent.
@end deffn

@deffn {Scheme Macro} with-fluids ((fluid value) @dots{}) body1 body2 @dots{}
Execute body @var{body1} @var{body2} @dots{} while each @var{fluid} is
set to the corresponding @var{value}. Both @var{fluid} and @var{value}
are evaluated and @var{fluid} must yield a fluid. The body is executed
inside a @code{dynamic-wind} and the fluids are set/restored when
control enters or leaves the established dynamic extent.
@end deffn
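
A short sketch of fluids in action (the fluid name is arbitrary):

@example
(define f (make-fluid 10))

(fluid-ref f) @result{} 10

;; Within the dynamic extent of the body, F holds another value.
(with-fluids ((f 42))
  (fluid-ref f)) @result{} 42

;; Outside, the previous value is visible again.
(fluid-ref f) @result{} 10
@end example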

@deftypefn {C Function} SCM scm_c_with_fluids (SCM fluids, SCM vals, SCM (*cproc)(void *), void *data)
@deftypefnx {C Function} SCM scm_c_with_fluid (SCM fluid, SCM val, SCM (*cproc)(void *), void *data)
The function @code{scm_c_with_fluids} is like @code{scm_with_fluids}
except that it takes a C function to call instead of a Scheme thunk.

The function @code{scm_c_with_fluid} is similar but only allows one
fluid to be set instead of a list.
@end deftypefn

@deftypefn {C Function} void scm_dynwind_fluid (SCM fluid, SCM val)
This function must be used inside a pair of calls to
@code{scm_dynwind_begin} and @code{scm_dynwind_end} (@pxref{Dynamic
Wind}). During the dynwind context, the fluid @var{fluid} is set to
@var{val}.

More precisely, the value of the fluid is swapped with a `backup'
value whenever the dynwind context is entered or left. The backup
value is initialized with the @var{val} argument.
@end deftypefn

@deffn {Scheme Procedure} make-dynamic-state [parent]
@deffnx {C Function} scm_make_dynamic_state (parent)
Return a copy of the dynamic state object @var{parent}
or of the current dynamic state when @var{parent} is omitted.
@end deffn

@deffn {Scheme Procedure} dynamic-state? obj
@deffnx {C Function} scm_dynamic_state_p (obj)
Return @code{#t} if @var{obj} is a dynamic state object;
return @code{#f} otherwise.
@end deffn

@deftypefn {C Procedure} int scm_is_dynamic_state (SCM obj)
Return non-zero if @var{obj} is a dynamic state object;
return zero otherwise.
@end deftypefn

@deffn {Scheme Procedure} current-dynamic-state
@deffnx {C Function} scm_current_dynamic_state ()
Return the current dynamic state object.
@end deffn

@deffn {Scheme Procedure} set-current-dynamic-state state
@deffnx {C Function} scm_set_current_dynamic_state (state)
Set the current dynamic state object to @var{state}
and return the previous current dynamic state object.
@end deffn

@deffn {Scheme Procedure} with-dynamic-state state proc
@deffnx {C Function} scm_with_dynamic_state (state, proc)
Call @var{proc} while @var{state} is the current dynamic
state object.
@end deffn

@deftypefn {C Procedure} void scm_dynwind_current_dynamic_state (SCM state)
Set the current dynamic state to @var{state} for the current dynwind
context.
@end deftypefn

@deftypefn {C Procedure} {void *} scm_c_with_dynamic_state (SCM state, void *(*func)(void *), void *data)
Like @code{scm_with_dynamic_state}, but call @var{func} with
@var{data}.
@end deftypefn

@node Parameters
@subsection Parameters

@cindex SRFI-39
@cindex parameter object
@tindex Parameter

A parameter object is a procedure. Calling it with no arguments returns
its value. Calling it with one argument sets the value.

@example
(define my-param (make-parameter 123))
(my-param) @result{} 123
(my-param 456)
(my-param) @result{} 456
@end example

The @code{parameterize} special form establishes new locations for
parameters, those new locations having effect within the dynamic scope
of the @code{parameterize} body. Leaving restores the previous
locations. Re-entering (through a saved continuation) will again use
the new locations.

@example
(parameterize ((my-param 789))
  (my-param)) @result{} 789
(my-param) @result{} 456
@end example

Parameters are like dynamically bound variables in other Lisp dialects.
They allow an application to establish parameter settings (as the name
suggests) just for the execution of a particular bit of code, restoring
when done. Examples of such parameters might be case-sensitivity for a
search, or a prompt for user input.

Global variables are not as good as parameter objects for this sort of
thing. Changes to them are visible to all threads, but in Guile
parameter object locations are per-thread, thereby truly limiting the
effect of @code{parameterize} to just its dynamic execution.

Passing arguments to functions is thread-safe, but that soon becomes
tedious when there are more than a few or when they need to pass down
through several layers of calls before reaching the point they should
affect. And introducing a new setting to existing code is often easier
with a parameter object than adding arguments.

@deffn {Scheme Procedure} make-parameter init [converter]
Return a new parameter object, with initial value @var{init}.

If a @var{converter} is given, then a call @code{(@var{converter}
val)} is made for each value set; its return value is what gets
stored. Such a call is made for the @var{init} initial value too.

A @var{converter} allows values to be validated, or put into a
canonical form. For example,

@example
(define my-param (make-parameter 123
                   (lambda (val)
                     (if (not (number? val))
                         (error "must be a number"))
                     (inexact->exact val))))
(my-param 0.75)
(my-param) @result{} 3/4
@end example
@end deffn

@deffn {library syntax} parameterize ((param value) @dots{}) body1 body2 @dots{}
Establish a new dynamic scope with the given @var{param}s bound to new
locations and set to the given @var{value}s. @var{body1} @var{body2}
@dots{} is evaluated in that environment. The value returned is that of
the last body form.

Each @var{param} is an expression which is evaluated to get the
parameter object. Often this will just be the name of a variable
holding the object, but it can be anything that evaluates to a
parameter.

The @var{param} expressions and @var{value} expressions are all
evaluated before establishing the new dynamic bindings, and they're
evaluated in an unspecified order.

For example,

@example
(define prompt (make-parameter "Type something: "))
(define (get-input)
  (display (prompt))
  ...)

(parameterize ((prompt "Type a number: "))
  (get-input)
  ...)
@end example
@end deffn

Parameter objects are implemented using fluids (@pxref{Fluids and
Dynamic States}), so each dynamic state has its own parameter
locations. That includes the separate locations when outside any
@code{parameterize} form. When a parameter is created it gets a
separate initial location in each dynamic state, all initialized to the
given @var{init} value.

New code should probably just use parameters instead of fluids, because
the interface is better. But for migrating old code or otherwise
providing interoperability, Guile provides the @code{fluid->parameter}
procedure:

@deffn {Scheme Procedure} fluid->parameter fluid [conv]
Make a parameter that wraps a fluid.

The value of the parameter will be the same as the value of the fluid.
If the parameter is rebound in some dynamic extent, perhaps via
@code{parameterize}, the new value will be run through the optional
@var{conv} procedure, as with any parameter. Note that unlike
@code{make-parameter}, @var{conv} is not applied to the initial value.
@end deffn
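
For example, a sketch of wrapping an existing fluid so that it can be
used with @code{parameterize} (the names are arbitrary):

@example
(define f (make-fluid 3))
(define p (fluid->parameter f))

(p) @result{} 3

;; Rebinding the parameter also affects code that reads the fluid.
(parameterize ((p 4))
  (fluid-ref f)) @result{} 4
@end example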

As alluded to above, because each thread usually has a separate dynamic
state, each thread has its own locations behind parameter objects, and
changes in one thread are not visible to any other. When a new dynamic
state or thread is created, the values of parameters in the originating
context are copied into new locations.

@cindex SRFI-39
Guile's parameters conform to SRFI-39 (@pxref{SRFI-39}).


@node Futures
@subsection Futures
@cindex futures
@cindex fine-grain parallelism
@cindex parallelism

The @code{(ice-9 futures)} module provides @dfn{futures}, a construct
for fine-grain parallelism. A future is a wrapper around an expression
whose computation may occur in parallel with the code of the calling
thread, and possibly in parallel with other futures. Like promises,
futures are essentially proxies that can be queried to obtain the value
of the enclosed expression:

@lisp
(touch (future (+ 2 3)))
@result{} 5
@end lisp

However, unlike promises, the expression associated with a future may be
evaluated on another CPU core, should one be available. This supports
@dfn{fine-grain parallelism}, because even relatively small computations
can be embedded in futures. Consider this sequential code:

@lisp
(define (find-prime lst1 lst2)
  (or (find prime? lst1)
      (find prime? lst2)))
@end lisp

The two arms of @code{or} are potentially computation-intensive. They
are independent of one another, yet they are evaluated sequentially
when the first one returns @code{#f}. Using futures, one could rewrite
it like this:

@lisp
(define (find-prime lst1 lst2)
  (let ((f (future (find prime? lst2))))
    (or (find prime? lst1)
        (touch f))))
@end lisp

This preserves the semantics of @code{find-prime}. On a multi-core
machine, though, the computation of @code{(find prime? lst2)} may be
done in parallel with that of the other @code{find} call, which can
reduce the execution time of @code{find-prime}.

Futures may be nested: a future can itself spawn and then @code{touch}
other futures, leading to a directed acyclic graph of futures. Using
this facility, a parallel @code{map} procedure can be defined along
these lines:

@lisp
(use-modules (ice-9 futures) (ice-9 match))

(define (par-map proc lst)
  (match lst
    (()
     '())
    ((head tail ...)
     (let ((tail (future (par-map proc tail)))
           (head (proc head)))
       (cons head (touch tail))))))
@end lisp

Note that futures are intended for the evaluation of purely functional
expressions. Expressions that have side-effects or rely on I/O may
require additional care, such as explicit synchronization
(@pxref{Mutexes and Condition Variables}).

Guile's futures are implemented on top of POSIX threads
(@pxref{Threads}). Internally, a fixed-size pool of threads is used to
evaluate futures, such that offloading the evaluation of an expression
to another thread doesn't incur thread creation costs. By default, the
pool contains one thread per available CPU core, minus one, to account
for the main thread. The number of available CPU cores is determined
using @code{current-processor-count} (@pxref{Processes}).

When a thread touches a future that has not completed yet, it processes
any pending future while waiting for it to complete, or just waits if
there are no pending futures. When @code{touch} is called from within a
future, the execution of the calling future is suspended, allowing its
host thread to process other futures, and resumed when the touched
future has completed. This suspend/resume is achieved by capturing the
calling future's continuation, and later reinstating it (@pxref{Prompts,
delimited continuations}).

Note that @code{par-map} above is not tail-recursive. This could lead
to stack overflows when @var{lst} is large compared to
@code{(current-processor-count)}. To address that, @code{touch} uses
the suspend mechanism described above to limit the number of nested
futures executing on the same stack. Thus, the above code should never
run into stack overflows.

@deffn {Scheme Syntax} future exp
Return a future for expression @var{exp}. This is equivalent to:

@lisp
(make-future (lambda () exp))
@end lisp
@end deffn

@deffn {Scheme Procedure} make-future thunk
Return a future for @var{thunk}, a zero-argument procedure.

This procedure returns immediately. Execution of @var{thunk} may begin
in parallel with the calling thread's computations, if idle CPU cores
are available, or it may start when @code{touch} is invoked on the
returned future.

If the execution of @var{thunk} throws an exception, that exception will
be re-thrown when @code{touch} is invoked on the returned future.
@end deffn

@deffn {Scheme Procedure} future? obj
Return @code{#t} if @var{obj} is a future.
@end deffn

@deffn {Scheme Procedure} touch f
Return the result of the expression embedded in future @var{f}.

If the result was already computed in parallel, @code{touch} returns
instantaneously. Otherwise, it waits for the computation to complete,
if it already started, or initiates it. In the former case, the calling
thread may process other futures in the meantime.
@end deffn


@node Parallel Forms
@subsection Parallel forms
@cindex parallel forms

The functions described in this section are available from

@example
(use-modules (ice-9 threads))
@end example

They provide high-level parallel constructs. The following functions
are implemented in terms of futures (@pxref{Futures}). Thus they are
relatively cheap as they re-use existing threads, and portable, since
they automatically use one thread per available CPU core.

@deffn syntax parallel expr @dots{}
Evaluate each @var{expr} expression in parallel, each in its own thread.
Return the results of @var{n} expressions as a set of @var{n} multiple
values (@pxref{Multiple Values}).
@end deffn

@deffn syntax letpar ((var expr) @dots{}) body1 body2 @dots{}
Evaluate each @var{expr} in parallel, each in its own thread, then bind
the results to the corresponding @var{var} variables, and then evaluate
@var{body1} @var{body2} @enddots{}

@code{letpar} is like @code{let} (@pxref{Local Bindings}), but all the
expressions for the bindings are evaluated in parallel.
@end deffn
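
For instance, a sketch of both forms; the computations here are
arbitrary:

@example
(use-modules (ice-9 threads))

;; Evaluate the two expressions in parallel and collect both results.
(call-with-values
    (lambda () (parallel (apply + (iota 1000)) (apply * (iota 10 1))))
  list)
@result{} (499500 3628800)

;; Bind the results of parallel evaluations, then use them.
(letpar ((sum     (apply + (iota 1000)))
         (product (apply * (iota 10 1))))
  (- product sum))
@result{} 3129300
@end example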

@deffn {Scheme Procedure} par-map proc lst1 lst2 @dots{}
@deffnx {Scheme Procedure} par-for-each proc lst1 lst2 @dots{}
Call @var{proc} on the elements of the given lists. @code{par-map}
returns a list comprising the return values from @var{proc}.
@code{par-for-each} returns an unspecified value, but waits for all
calls to complete.

The @var{proc} calls are @code{(@var{proc} @var{elem1} @var{elem2}
@dots{})}, where each @var{elem} is from the corresponding @var{lst}.
Each @var{lst} must be the same length. The calls are potentially made
in parallel, depending on the number of CPU cores available.

These functions are like @code{map} and @code{for-each} (@pxref{List
Mapping}), but make their @var{proc} calls in parallel.
@end deffn
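
For example:

@example
(use-modules (ice-9 threads))

;; The squares may be computed in parallel, one result per element.
(par-map (lambda (n) (* n n)) '(1 2 3 4 5))
@result{} (1 4 9 16 25)
@end example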

Unlike those above, the functions described below take a number of
threads as an argument. This makes them inherently non-portable since
the specified number of threads may differ from the number of available
CPU cores as returned by @code{current-processor-count}
(@pxref{Processes}). In addition, these functions create the specified
number of threads when they are called and terminate them upon
completion, which makes them quite expensive.

Therefore, they should be avoided.

@deffn {Scheme Procedure} n-par-map n proc lst1 lst2 @dots{}
@deffnx {Scheme Procedure} n-par-for-each n proc lst1 lst2 @dots{}
Call @var{proc} on the elements of the given lists, in the same way as
@code{par-map} and @code{par-for-each} above, but use no more than
@var{n} threads at any one time. The order in which calls are
initiated within that limit of @var{n} threads is unspecified.

These functions are good for controlling resource consumption if
@var{proc} calls might be costly, or if there are many to be made. On
a dual-CPU system, for instance, @math{@var{n}=4} might be enough to
keep the CPUs utilized, and not consume too much memory.
@end deffn

@deffn {Scheme Procedure} n-for-each-par-map n sproc pproc lst1 lst2 @dots{}
Apply @var{pproc} to the elements of the given lists, and apply
@var{sproc} to each result returned by @var{pproc}. The final return
value is unspecified, but all calls will have been completed before
returning.

The calls made are @code{(@var{sproc} (@var{pproc} @var{elem1} @dots{}
@var{elemN}))}, where each @var{elem} is from the corresponding
@var{lst}. Each @var{lst} must have the same number of elements.

The @var{pproc} calls are made in parallel, in separate threads. No more
than @var{n} threads are used at any one time. The order in which
@var{pproc} calls are initiated within that limit is unspecified.

The @var{sproc} calls are made serially, in list element order, one at
a time. @var{pproc} calls on later elements may execute in parallel
with the @var{sproc} calls. Exactly which thread makes each
@var{sproc} call is unspecified.

This function is designed for individual calculations that can be done
in parallel, but with results needing to be handled serially, for
instance to write them to a file. The @var{n} limit on threads
controls system resource usage when there are many calculations or
when they might be costly.

It will be seen that @code{n-for-each-par-map} is like a combination
of @code{n-par-map} and @code{for-each},

@example
(for-each sproc (n-par-map n pproc lst1 ... lstN))
@end example

@noindent
but the actual implementation is more efficient since each @var{sproc}
call, in turn, can be initiated once the relevant @var{pproc} call has
completed; it doesn't need to wait for all of them to finish.
@end deffn



@c Local Variables:
@c TeX-master: "guile.texi"
@c End: