@page
@node Scheduling
@chapter Threads, Mutexes, Asyncs and Dynamic Roots

[FIXME: This is pasted in from Tom Lord's original guile.texi chapter
plus the Cygnus programmer's manual; it should be *very* carefully
reviewed and largely reorganized.]

@menu
* Arbiters::                    Synchronization primitives.
* Asyncs::                      Asynchronous procedure invocation.
* Dynamic Roots::               Root frames of execution.
* Threads::                     Multiple threads of execution.
* Fluids::                      Thread-local variables.
* Futures::                     Delayed execution in new threads.
* Parallel Forms::              Parallel execution of forms.
@end menu

@node Arbiters
@section Arbiters
@cindex arbiters

@c FIXME::martin: Review me!

Arbiters are synchronization objects.  They are created with
@code{make-arbiter}.  Two or more threads can synchronize on an arbiter
by trying to lock it using @code{try-arbiter}.  This call succeeds if
the arbiter is unlocked, and fails and returns @code{#f} otherwise.
Once an arbiter is successfully locked, it cannot be locked by another
thread until the thread holding the arbiter calls
@code{release-arbiter} to unlock it.

@deffn {Scheme Procedure} make-arbiter name
@deffnx {C Function} scm_make_arbiter (name)
Return an arbiter object named @var{name}.  Its state is initially
unlocked.  Arbiters are a way to achieve process synchronization.
@end deffn

@deffn {Scheme Procedure} try-arbiter arb
@deffnx {C Function} scm_try_arbiter (arb)
Return @code{#t} and lock the arbiter @var{arb} if the arbiter was
unlocked.  Otherwise, return @code{#f}.
@end deffn

@deffn {Scheme Procedure} release-arbiter arb
@deffnx {C Function} scm_release_arbiter (arb)
Return @code{#t} and unlock the arbiter @var{arb} if the arbiter was
locked.  Otherwise, return @code{#f}.
@end deffn

@node Asyncs
@section Asyncs
@cindex asyncs
@cindex user asyncs
@cindex system asyncs

Asyncs are a means of deferring the execution of Scheme code until it
is safe to do so.

Guile provides two kinds of asyncs that share the basic concept but are
otherwise quite different: system asyncs and user asyncs.  System
asyncs are integrated into the core of Guile and are executed
automatically when the system is in a state to allow the execution of
Scheme code.  For example, it is not possible to execute Scheme code in
a POSIX signal handler, but such a signal handler can queue a system
async to be executed in the near future, when it is safe to do so.

System asyncs can also be queued for threads other than the current
one.  This way, you can cause threads to asynchronously execute
arbitrary code.

User asyncs offer a convenient means of queueing procedures for future
execution and triggering this execution.  They will not be executed
automatically.

@menu
* System asyncs::
* User asyncs::
@end menu

@node System asyncs
@subsection System asyncs

To cause the future asynchronous execution of a procedure in a given
thread, use @code{system-async-mark}.

Automatic invocation of system asyncs can be temporarily disabled by
calling @code{call-with-blocked-asyncs}.  This function works by
temporarily increasing the @emph{async blocking level} of the current
thread while a given procedure is running.  The blocking level starts
out at zero, and whenever a safe point is reached, a blocking level
greater than zero will prevent the execution of queued asyncs.

Analogously, the procedure @code{call-with-unblocked-asyncs} will
temporarily decrease the blocking level of the current thread.  You can
use it when you want to disable asyncs by default and only allow them
temporarily.

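As a brief illustration, the following sketch marks a procedure for
execution at the next safe point in the current thread, and shows how a
section of code can keep queued asyncs from running while it executes.
Here @code{do-something-critical} is only a placeholder for whatever
work must not be interrupted:

@lisp
(define (note-async)
  (display "async ran\n"))

;; Queue NOTE-ASYNC for the current thread; it runs at the next safe
;; point at which asyncs are not blocked.
(system-async-mark note-async)

;; While this thunk runs, the async blocking level of the current
;; thread is raised by one, so queued asyncs are not executed until
;; the thunk returns.
(call-with-blocked-asyncs
 (lambda ()
   ;; DO-SOMETHING-CRITICAL is a hypothetical placeholder.
   (do-something-critical)))
@end lisp
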
@deffn {Scheme Procedure} system-async-mark proc [thread]
@deffnx {C Function} scm_system_async_mark (proc)
@deffnx {C Function} scm_system_async_mark_for_thread (proc, thread)
Mark @var{proc} (a procedure with zero arguments) for future execution
in @var{thread}.  When @var{proc} has already been marked for
@var{thread} but has not been executed yet, this call has no effect.
When @var{thread} is omitted, the thread that called
@code{system-async-mark} is used.

This procedure is not safe to call from signal handlers.  Use
@code{scm_sigaction} or @code{scm_sigaction_for_thread} to install
signal handlers.
@end deffn

@c FIXME: The use of @deffnx for scm_c_call_with_blocked_asyncs and
@c scm_c_call_with_unblocked_asyncs puts "void" into the function
@c index.  Would prefer to use @deftypefnx if makeinfo allowed that,
@c or a @deftypefn with an empty return type argument if it didn't
@c introduce an extra space.

@deffn {Scheme Procedure} call-with-blocked-asyncs proc
@deffnx {C Function} scm_call_with_blocked_asyncs (proc)
@deffnx {C Function} void *scm_c_call_with_blocked_asyncs (void * (*proc) (void *data), void *data)
@findex scm_c_call_with_blocked_asyncs
Call @var{proc} and block the execution of system asyncs by one level
for the current thread while it is running.  Return the value returned
by @var{proc}.  For the first two variants, call @var{proc} with no
arguments; for the third, call it with @var{data}.
@end deffn

@deffn {Scheme Procedure} call-with-unblocked-asyncs proc
@deffnx {C Function} scm_call_with_unblocked_asyncs (proc)
@deffnx {C Function} void *scm_c_call_with_unblocked_asyncs (void *(*p) (void *d), void *d)
@findex scm_c_call_with_unblocked_asyncs
Call @var{proc} and unblock the execution of system asyncs by one level
for the current thread while it is running.  Return the value returned
by @var{proc}.  For the first two variants, call @var{proc} with no
arguments; for the third, call it with @var{data}.
@end deffn

@node User asyncs
@subsection User asyncs

A user async is a pair of a thunk (a parameterless procedure) and a
mark.  Setting the mark on a user async will cause the thunk to be
executed when the user async is passed to @code{run-asyncs}.  Setting
the mark more than once before the async is run still results in just
one execution of the thunk.

User asyncs are created with @code{async}.  They are marked with
@code{async-mark}.

@deffn {Scheme Procedure} async thunk
@deffnx {C Function} scm_async (thunk)
Create a new user async for the procedure @var{thunk}.
@end deffn

@deffn {Scheme Procedure} async-mark a
@deffnx {C Function} scm_async_mark (a)
Mark the user async @var{a} for future execution.
@end deffn

@deffn {Scheme Procedure} run-asyncs list_of_a
@deffnx {C Function} scm_run_asyncs (list_of_a)
Execute all thunks from the marked asyncs of the list @var{list_of_a}.
@end deffn

@node Dynamic Roots
@section Dynamic Roots
@cindex dynamic roots

A @dfn{dynamic root} is a root frame of Scheme evaluation.  The
top-level REPL, for example, is an instance of a dynamic root.

Each dynamic root has its own chain of dynamic-wind information.  Each
has its own set of continuations, jump-buffers, and pending
@code{catch} expressions which are inaccessible from the dynamic scope
of any other dynamic root.

In a thread-based system, each thread has its own dynamic root.
Therefore, continuations created by one thread may not be invoked by
another.

Even in a single-threaded system, it is sometimes useful to create a
new dynamic root.
For example, if you want to apply a procedure, but do not want to allow
that procedure to capture the current continuation, calling the
procedure under a new dynamic root will do the job.

@deffn {Scheme Procedure} call-with-dynamic-root thunk handler
@deffnx {C Function} scm_call_with_dynamic_root (thunk, handler)
Evaluate @code{(thunk)} in a new dynamic context, returning its value.

If an error occurs during evaluation, apply @var{handler} to the
arguments to the throw, just as @code{throw} would.  If this happens,
@var{handler} is called outside the scope of the new root -- it is
called in the same dynamic context in which
@code{call-with-dynamic-root} was evaluated.

If @var{thunk} captures a continuation, the continuation is rooted at
the call to @var{thunk}.  In particular, the call to
@code{call-with-dynamic-root} is not captured.  Therefore,
@code{call-with-dynamic-root} always returns at most once.

Before calling @var{thunk}, the dynamic-wind chain is unwound back to
the root and a new chain started for @var{thunk}.  Therefore, this call
may not do what you expect:

@lisp
;; Almost certainly a bug:
(with-output-to-port
 some-port
 (lambda ()
   (call-with-dynamic-root
    (lambda ()
      (display 'fnord)
      (newline))
    (lambda (errcode) errcode))))
@end lisp

The problem is, on what port will @samp{fnord} be displayed?  You might
expect that, because of the @code{with-output-to-port}, it will be
displayed on the port bound to @code{some-port}.  But it probably
won't -- before evaluating the thunk, dynamic winds are unwound,
including those created by @code{with-output-to-port}.  So, the
standard output port will have been reset to its default value before
@code{display} is evaluated.

(This function was added to Guile mostly to help calls to functions in
C libraries that cannot tolerate non-local exits or calls that return
multiple times.  If such functions call back to the interpreter, the
callback should be made under a new dynamic root.)
@end deffn

@deffn {Scheme Procedure} dynamic-root
@deffnx {C Function} scm_dynamic_root ()
Return an object representing the current dynamic root.

These objects are only useful for comparison using @code{eq?}.  They
are currently represented as numbers, but your code should in no way
depend on this.
@end deffn

@c begin (scm-doc-string "boot-9.scm" "quit")
@deffn {Scheme Procedure} quit [exit_val]
Throw back to the error handler of the current dynamic root.

If the integer @var{exit_val} is specified, Guile is being used
stand-alone, and @code{quit} is called from the initial dynamic root,
then @var{exit_val} becomes the exit status of the Guile process and
the process exits.
@end deffn

When Guile is run interactively, errors are caught from within the
read-eval-print loop.  An error message will be printed and
@code{abort} called.  A default set of signal handlers is installed,
e.g., to allow user interrupt of the interpreter.

It is possible to switch to a ``batch mode'', in which the interpreter
will terminate after an error and in which all signals cause their
default actions.  Switching to batch mode causes any handlers installed
from Scheme code to be removed.  An example of where this is useful is
after forking a new process intended to run non-interactively.

@c begin (scm-doc-string "boot-9.scm" "batch-mode?")
@deffn {Scheme Procedure} batch-mode?
Returns a boolean indicating whether the interpreter is in batch mode.
@end deffn

@c begin (scm-doc-string "boot-9.scm" "set-batch-mode?!")
@deffn {Scheme Procedure} set-batch-mode?! arg
If @var{arg} is true, switches the interpreter to batch mode.
The @code{#f} case has not been implemented.
@end deffn

@node Threads
@section Threads
@cindex threads
@cindex Guile threads

@strong{[NOTE: this chapter was written for Cygnus Guile and has not
yet been updated for the Guile 1.x release.]}

Here is the reference for Guile's threads.  In this chapter I simply
quote verbatim Tom Lord's description of the low-level primitives
written in C (basically an interface to the POSIX threads library) and
Anthony Green's description of the higher-level thread procedures
written in Scheme.
@cindex posix threads
@cindex Lord, Tom
@cindex Green, Anthony

When using Guile threads, keep in mind that each Guile thread is
executed in a new dynamic root.

@menu
* Low level thread primitives::
* Higher level thread procedures::
* C level thread interface::
@end menu

@node Low level thread primitives
@subsection Low level thread primitives

@c NJFIXME no current mechanism for making sure that these docstrings
@c are in sync.

@c begin (texi-doc-string "guile" "call-with-new-thread")
@deffn {Scheme Procedure} call-with-new-thread thunk error-handler
Evaluate @code{(thunk)} in a new thread, and a new dynamic context,
returning a new thread object representing the thread.

If an error occurs during evaluation, call @var{error-handler}, passing
it an error code.  If this happens, @var{error-handler} is called
outside the scope of the new root -- it is called in the same dynamic
context in which @code{call-with-new-thread} was evaluated, but not in
the caller's thread.

All the evaluation rules for dynamic roots apply to threads.
@end deffn

@c begin (texi-doc-string "guile" "join-thread")
@deffn {Scheme Procedure} join-thread thread
Suspend execution of the calling thread until the target @var{thread}
terminates, unless the target @var{thread} has already terminated.
@end deffn

@c begin (texi-doc-string "guile" "yield")
@deffn {Scheme Procedure} yield
If one or more threads are waiting to execute, calling @code{yield}
forces an immediate context switch to one of them.  Otherwise,
@code{yield} has no effect.
@end deffn

@c begin (texi-doc-string "guile" "make-mutex")
@deffn {Scheme Procedure} make-mutex
Create a new mutex object.
@end deffn

@c begin (texi-doc-string "guile" "lock-mutex")
@deffn {Scheme Procedure} lock-mutex mutex
Lock @var{mutex}.  If the mutex is already locked, the calling thread
blocks until the mutex becomes available.  The function returns when
the calling thread owns the lock on @var{mutex}.  Locking a mutex that
a thread already owns will succeed right away and will not block the
thread.  That is, Guile's mutexes are @emph{recursive}.

When a system async is activated for a thread that is blocked in a
call to @code{lock-mutex}, the waiting is interrupted and the async is
executed.  When the async returns, the waiting is resumed.
@end deffn

@deffn {Scheme Procedure} try-mutex mutex
Try to lock @var{mutex}.  If the mutex is already locked by another
thread, return @code{#f}.  Otherwise, lock the mutex and return
@code{#t}.
@end deffn

@c begin (texi-doc-string "guile" "unlock-mutex")
@deffn {Scheme Procedure} unlock-mutex mutex
Unlocks @var{mutex} if the calling thread owns the lock on
@var{mutex}.  Calling @code{unlock-mutex} on a mutex not owned by the
current thread results in undefined behaviour.  Once a mutex has been
unlocked, one thread blocked on @var{mutex} is awakened and grabs the
mutex lock.  Every call to @code{lock-mutex} by this thread must be
matched with a call to @code{unlock-mutex}.  Only the last call to
@code{unlock-mutex} will actually unlock the mutex.

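For example, because Guile's mutexes are recursive, nested locks taken
by the same thread must be balanced by the same number of unlocks (a
minimal sketch):

@lisp
(define m (make-mutex))

(lock-mutex m)     ; the calling thread now owns M
(lock-mutex m)     ; succeeds immediately; mutexes are recursive
(unlock-mutex m)   ; M is still locked by this thread
(unlock-mutex m)   ; only now is M actually unlocked
@end lisp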
@end deffn

@c begin (texi-doc-string "guile" "make-condition-variable")
@deffn {Scheme Procedure} make-condition-variable
Make a new condition variable.
@end deffn

@c begin (texi-doc-string "guile" "wait-condition-variable")
@deffn {Scheme Procedure} wait-condition-variable cond-var mutex [time]
Wait until @var{cond-var} has been signalled.  While waiting,
@var{mutex} is atomically unlocked (as with @code{unlock-mutex}) and
is locked again when this function returns.  When @var{time} is given,
it specifies a point in time at which the waiting should be aborted.
It can be either an integer as returned by @code{current-time} or a
pair as returned by @code{gettimeofday}.  When the waiting is aborted,
@code{#f} is returned.  When the condition variable has in fact been
signalled, @code{#t} is returned.  The mutex is re-locked in any case
before @code{wait-condition-variable} returns.

When a system async is activated for a thread that is blocked in a
call to @code{wait-condition-variable}, the waiting is interrupted,
the mutex is locked, and the async is executed.  When the async
returns, the mutex is unlocked again and the waiting is resumed.
@end deffn

@c begin (texi-doc-string "guile" "signal-condition-variable")
@deffn {Scheme Procedure} signal-condition-variable cond-var
Wake up one thread that is waiting for @var{cond-var}.
@end deffn

@c begin (texi-doc-string "guile" "broadcast-condition-variable")
@deffn {Scheme Procedure} broadcast-condition-variable cond-var
Wake up all threads that are waiting for @var{cond-var}.
@end deffn

@node Higher level thread procedures
@subsection Higher level thread procedures

@c new by ttn, needs review

Higher level thread procedures are available by loading the
@code{(ice-9 threads)} module.  These provide standardized thread
creation and mutex interaction.

@deffn macro make-thread proc [args@dots{}]
Apply @var{proc} to @var{args} in a new thread formed by
@code{call-with-new-thread} using a default error handler that
displays the error to the current error port.
@end deffn

@deffn macro begin-thread first [rest@dots{}]
Evaluate forms @var{first} and @var{rest} in a new thread formed by
@code{call-with-new-thread} using a default error handler that
displays the error to the current error port.
@end deffn

@deffn macro with-mutex m [body@dots{}]
Lock mutex @var{m}, evaluate @var{body}, and then unlock @var{m}.
These sub-operations form the branches of a @code{dynamic-wind}.
@end deffn

@deffn macro monitor first [rest@dots{}]
Evaluate forms @var{first} and @var{rest} under a newly created
anonymous mutex, using @code{with-mutex}.
@end deffn

@node C level thread interface
@subsection C level thread interface

You can create and manage threads, mutexes, and condition variables
with the C versions of the primitives above.  For example, you can
create a mutex with @code{scm_make_mutex} and lock it with
@code{scm_lock_mutex}.

In addition to these primitives there is also a second set of
primitives for thread-related things.  These functions and data types
are only available from C and cannot be mixed with the first set
above.  However, they might be more efficient and can be used in
situations where Scheme data types are not allowed or are inconvenient
to use.  Furthermore, they are the primitives that Guile relies on for
its own higher level threads.  By reimplementing them, you can adapt
Guile to different low-level thread implementations.

@deftp {C Data Type} scm_t_thread
This data type represents a thread, to be used with
@code{scm_thread_create}, etc.
@end deftp

@deftypefn {C Function} int scm_thread_create (scm_t_thread *t, void (*proc)(void *), void *data)
Create a new thread that will start by calling @var{proc}, passing it
@var{data}.  A handle for the new thread is stored in @var{t}, which
must be non-@code{NULL}.  The thread terminates when @var{proc}
returns.  When the thread has not been detached, its handle remains
valid after it has terminated so that it can be used with
@code{scm_thread_join}, for example.  When it has been detached, the
handle becomes invalid as soon as the thread terminates.
@end deftypefn

@deftypefn {C Function} void scm_thread_detach (scm_t_thread t)
Detach the thread @var{t}.  See @code{scm_thread_create}.
@end deftypefn

@deftypefn {C Function} void scm_thread_join (scm_t_thread t)
Wait for thread @var{t} to terminate.  The thread must not have been
detached at the time that @code{scm_thread_join} is called, but it
might have been detached by the time it terminates.
@end deftypefn

@deftypefn {C Function} scm_t_thread scm_thread_self ()
Return the handle of the calling thread.
@end deftypefn

@deftp {C Data Type} scm_t_mutex
This data type represents a mutex, to be used with
@code{scm_mutex_init}, etc.
@end deftp

@deftypefn {C Function} void scm_mutex_init (scm_t_mutex *m)
Initialize the mutex structure pointed to by @var{m}.
@end deftypefn

@deftypefn {C Function} void scm_mutex_destroy (scm_t_mutex *m)
Deallocate all resources associated with @var{m}.
@end deftypefn

@deftypefn {C Function} void scm_mutex_lock (scm_t_mutex *m)
Lock the mutex @var{m}.  When it is already locked by a different
thread, wait until it becomes available.  Locking a mutex that is
already locked by the current thread is not allowed and results in
undefined behavior.  The mutexes are not guaranteed to be fair.  That
is, a thread that attempts to lock a mutex after you might be granted
the lock before you.
@end deftypefn

@deftypefn {C Function} int scm_mutex_trylock (scm_t_mutex *m)
Lock @var{m} as with @code{scm_mutex_lock}, but don't wait when this
does not succeed immediately.  Returns non-zero when the mutex could in
fact be locked, and zero when it is already locked by some other
thread.
@end deftypefn

@deftypefn {C Function} void scm_mutex_unlock (scm_t_mutex *m)
Unlock the mutex @var{m}.  The mutex must have been locked by the
current thread, else the behavior is undefined.
@end deftypefn

@deftp {C Data Type} scm_t_cond
This data type represents a condition variable, to be used with
@code{scm_cond_init}, etc.
@end deftp

@deftypefn {C Function} void scm_cond_init (scm_t_cond *c)
Initialize the condition variable structure pointed to by @var{c}.
@end deftypefn

@deftypefn {C Function} void scm_cond_destroy (scm_t_cond *c)
Deallocate all resources associated with @var{c}.
@end deftypefn

@deftypefn {C Function} void scm_cond_wait (scm_t_cond *c, scm_t_mutex *m)
Wait for @var{c} to be signalled.  While waiting, @var{m} is unlocked
and locked again before @code{scm_cond_wait} returns.
@end deftypefn

@deftypefn {C Function} int scm_cond_timedwait (scm_t_cond *c, scm_t_mutex *m, struct timespec *abstime)
Wait for @var{c} to be signalled as with @code{scm_cond_wait}, but
don't wait longer than the point in time specified by @var{abstime}.
When the waiting is aborted, zero is returned; non-zero otherwise.
@end deftypefn

@deftypefn {C Function} void scm_cond_signal (scm_t_cond *c)
Signal the condition variable @var{c}.  When one or more threads are
waiting for it to be signalled, select one arbitrarily and let its
wait succeed.
@end deftypefn

@deftypefn {C Function} void scm_cond_broadcast (scm_t_cond *c)
Signal the condition variable @var{c}.  When there are threads waiting
for it to be signalled, wake them all up and make all their waits
succeed.
@end deftypefn

@deftp {C Data Type} scm_t_key
This type represents a key for a thread-specific value.
@end deftp

@deftypefn {C Function} void scm_key_create (scm_t_key *keyp)
Create a new key for a thread-specific value.  Each thread has its own
value associated with such a key.  The new key is stored into
@var{keyp}, which must be non-@code{NULL}.
@end deftypefn

@deftypefn {C Function} void scm_key_delete (scm_t_key key)
This function makes @var{key} invalid as a key for thread-specific data.
@end deftypefn

@deftypefn {C Function} void scm_key_setspecific (scm_t_key key, const void *value)
Associate @var{value} with @var{key} in the calling thread.
@end deftypefn

@deftypefn {C Function} {void *} scm_key_getspecific (scm_t_key key)
Return the value currently associated with @var{key} in the calling
thread.  When @code{scm_key_setspecific} has not yet been called in
this thread with this key, @code{NULL} is returned.
@end deftypefn

@deftypefn {C Function} int scm_thread_select (...)
This function does the same thing as the system's @code{select}
function, but in a way that is friendly to the thread implementation.
You should call it in preference to the system @code{select}.
@end deftypefn

@node Fluids
@section Fluids
@cindex fluids

@c FIXME::martin: Review me!

Fluids are objects to store values in.  They have a few properties
which make them useful in certain situations: Fluids can have one
value per dynamic root (@pxref{Dynamic Roots}), so that changes to the
value in a fluid are only visible in the same dynamic root.  Since
threads are executed in separate dynamic roots, fluids can be used for
thread local storage (@pxref{Threads}).

Fluids can be used to simulate the desirable effects of dynamically
scoped variables.  Dynamically scoped variables are useful when you
want to set a variable to a value during some dynamic extent in the
execution of your program and have it revert to its original value
when the control flow is outside of this dynamic extent.  See the
description of @code{with-fluids} below for details.

New fluids are created with @code{make-fluid} and @code{fluid?} is
used for testing whether an object is actually a fluid.  The values
stored in a fluid can be accessed with @code{fluid-ref} and
@code{fluid-set!}.

@deffn {Scheme Procedure} make-fluid
@deffnx {C Function} scm_make_fluid ()
Return a newly created fluid.

Fluids are objects of a certain type (a smob) that can hold one
@code{SCM} value per dynamic root.  That is, modifications to this
value are only visible to code that executes within the same dynamic
root as the modifying code.  When a new dynamic root is constructed,
it inherits the values from its parent.  Because each thread executes
in its own dynamic root, you can use fluids for thread local storage.
@end deffn

@deffn {Scheme Procedure} fluid? obj
@deffnx {C Function} scm_fluid_p (obj)
Return @code{#t} iff @var{obj} is a fluid; otherwise, return
@code{#f}.
@end deffn

@deffn {Scheme Procedure} fluid-ref fluid
@deffnx {C Function} scm_fluid_ref (fluid)
Return the value associated with @var{fluid} in the current dynamic
root.  If @var{fluid} has not been set, then return @code{#f}.
@end deffn

@deffn {Scheme Procedure} fluid-set! fluid value
@deffnx {C Function} scm_fluid_set_x (fluid, value)
Set the value associated with @var{fluid} in the current dynamic root.

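For instance, here is a minimal sketch of creating a fluid and
accessing its value in the current dynamic root:

@lisp
(define f (make-fluid))

(fluid? f)        @result{} #t
(fluid-ref f)     @result{} #f    ; no value has been set yet
(fluid-set! f 42)
(fluid-ref f)     @result{} 42
@end lisp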
@end deffn

@code{with-fluids*} temporarily changes the values of one or more
fluids, so that the given procedure and each procedure called by it
access the given values.  After the procedure returns, the old values
are restored.

@deffn {Scheme Procedure} with-fluids* fluids values thunk
@deffnx {C Function} scm_with_fluids (fluids, values, thunk)
Set @var{fluids} to @var{values} temporarily, and call @var{thunk}.
@var{fluids} must be a list of fluids and @var{values} must be a list
of the same length giving the values to be applied.  Each substitution
is done in the order given.  @var{thunk} must be a procedure with no
arguments.  It is called inside a @code{dynamic-wind} and the fluids
are set/restored when control enters or leaves the established dynamic
extent.
@end deffn

@deffn {Scheme Macro} with-fluids ((fluid value) @dots{}) body@dots{}
Execute @var{body}@dots{} while each @var{fluid} is set to the
corresponding @var{value}.  Both @var{fluid} and @var{value} are
evaluated and @var{fluid} must yield a fluid.  @var{body}@dots{} is
executed inside a @code{dynamic-wind} and the fluids are set/restored
when control enters or leaves the established dynamic extent.
@end deffn

@node Futures
@section Futures
@cindex futures

Futures are a convenient way to run a calculation in a new thread, and
only wait for the result when it's actually needed.

Futures are similar to promises (@pxref{Delayed Evaluation}), in that
they allow mainline code to continue immediately.  But @code{delay}
doesn't evaluate at all until forced, whereas @code{future} starts
immediately in a new thread.

@deffn {syntax} future expr
Begin evaluating @var{expr} in a new thread, and return a ``future''
object representing the calculation.
@end deffn

@deffn {Scheme Procedure} make-future thunk
@deffnx {C Function} scm_make_future (thunk)
Begin evaluating the call @code{(@var{thunk})} in a new thread, and
return a ``future'' object representing the calculation.
@end deffn

@deffn {Scheme Procedure} future-ref f
@deffnx {C Function} scm_future_ref (f)
Return the value computed by the future @var{f}.  If @var{f} has not
yet finished executing then wait for it to do so.
@end deffn

@node Parallel Forms
@section Parallel forms
@cindex parallel forms

The functions described in this section are available from

@example
(use-modules (ice-9 threads))
@end example

@deffn syntax parallel expr1 @dots{} exprN
Evaluate each @var{expr} expression in parallel, each in a new thread.
Return the results as a set of @var{N} multiple values
(@pxref{Multiple Values}).
@end deffn

@deffn syntax letpar ((var1 expr1) @dots{} (varN exprN)) body@dots{}
Evaluate each @var{expr} in parallel, each in a new thread, then bind
the results to the corresponding @var{var} variables and evaluate
@var{body}.

@code{letpar} is like @code{let} (@pxref{Local Bindings}), but all the
expressions for the bindings are evaluated in parallel.
@end deffn

@deffn {Scheme Procedure} par-map proc lst1 @dots{} lstN
@deffnx {Scheme Procedure} par-for-each proc lst1 @dots{} lstN
Call @var{proc} on the elements of the given lists.  @code{par-map}
returns a list comprising the return values from @var{proc}.
@code{par-for-each} returns an unspecified value, but waits for all
calls to complete.

The @var{proc} calls are @code{(@var{proc} @var{elem1} @dots{}
@var{elemN})}, where each @var{elem} is from the corresponding
@var{lst}.  Each @var{lst} must be the same length.  The calls are
made in parallel, each in a new thread.

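For example, a minimal sketch (each addition below runs in its own
thread, and the results come back in list order):

@example
(par-map + '(1 2 3) '(10 20 30))
@result{} (11 22 33)
@end example
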
These functions are like @code{map} and @code{for-each}
(@pxref{List Mapping}), but make their @var{proc} calls in parallel.
@end deffn

@deffn {Scheme Procedure} n-par-map n proc lst1 @dots{} lstN
@deffnx {Scheme Procedure} n-par-for-each n proc lst1 @dots{} lstN
Call @var{proc} on the elements of the given lists, in the same way as
@code{par-map} and @code{par-for-each} above, but use no more than
@var{n} new threads at any one time.  The order in which calls are
initiated within that thread limit is unspecified.

These functions are good for controlling resource consumption if
@var{proc} calls might be costly, or if there are many to be made.  On
a dual-CPU system, for instance, @math{@var{n}=4} might be enough to
keep the CPUs utilized without consuming too much memory.
@end deffn

@deffn {Scheme Procedure} n-for-each-par-map n sproc pproc lst1 @dots{} lstN
Apply @var{pproc} to the elements of the given lists, and apply
@var{sproc} to each result returned by @var{pproc}.  The final return
value is unspecified, but all calls will have been completed before
returning.

The calls made are @code{(@var{sproc} (@var{pproc} @var{elem1} @dots{}
@var{elemN}))}, where each @var{elem} is from the corresponding
@var{lst}.  Each @var{lst} must have the same number of elements.

The @var{pproc} calls are made in parallel, in new threads.  No more
than @var{n} new threads are used at any one time.  The order in which
@var{pproc} calls are initiated within that limit is unspecified.

The @var{sproc} calls are made serially, in list element order, one at
a time.  @var{pproc} calls on later elements may execute in parallel
with the @var{sproc} calls.  Exactly which thread makes each
@var{sproc} call is unspecified.

This function is designed for individual calculations that can be done
in parallel, but with results needing to be handled serially, for
instance to write them to a file.  The @var{n} limit on threads
controls system resource usage when there are many calculations or
when they might be costly.

It will be seen that @code{n-for-each-par-map} is like a combination
of @code{n-par-map} and @code{for-each},

@example
(for-each sproc (n-par-map n pproc lst1 ... lstN))
@end example

@noindent
But the actual implementation is more efficient since each @var{sproc}
call, in turn, can be initiated once the relevant @var{pproc} call has
completed; it doesn't need to wait for all to finish.
@end deffn

@c Local Variables:
@c TeX-master: "guile.texi"
@c End: