The container that slirp4netns runs in should already make it quite
difficult to do anything malicious beyond basic denial of service or
sending of network traffic.  There is, however, one remaining hole in the
case where an adversary is able to run code locally: abstract unix
sockets.  Because these are governed by network namespaces, not IPC
namespaces, and slirp4netns is in the root network namespace, any process
in the root network namespace can cooperate with the slirp4netns process
to take over its user.  To close this hole, we use seccomp to block the
creation of unix-domain sockets by slirp4netns.  This requires some
finesse, since slirp4netns absolutely needs to be able to create other
types of sockets - at minimum AF_INET and AF_INET6.

Seccomp has many, many pitfalls.  To name a few:

1. Seccomp provides you with an "arch" field, but this does not uniquely
   determine the ABI being used; the actual meaning of a system call
   number depends on both the number (which is often the result of ORing
   a related system call with a flag for an alternate ABI) and the
   architecture.

2. Seccomp provides no direct way of knowing what the native value for
   the arch field should be; the user must do configure/compile-time
   testing for every architecture+ABI combination they want to support.
   Amusingly enough, the linux-internal header files have this exact
   information (SECCOMP_ARCH_NATIVE), but they aren't sharing it.

3. The only system call numbers we naturally have are the native ones in
   asm/unistd.h.  __NR_socket will always refer to the system call number
   for the target system's ABI.

4. Seccomp can only manipulate 32-bit words, but represents every system
   call argument as a uint64.

5. New system call numbers with as-yet-unknown semantics can be added to
   the kernel at any time.

6. Based on this comment in arch/x86/entry/syscalls/syscall_32.tbl:

       # 251 is available for reuse (was briefly sys_set_zone_reclaim)

   previously-invalid system call numbers may later be reused for new
   system calls.

7. Most architecture+ABI combinations have system call tables with many
   gaps in them.  arm-eabi, for example, has 35 such gaps (note: this is
   just the number of distinct gaps, not the number of system call
   numbers contained in those gaps).

8. Seccomp's BPF filters require a fully-acyclic control flow graph.  Any
   operation on a data structure must therefore first be fully unrolled
   before it can be run.

9. Seccomp cannot dereference pointers.  Only the raw bits provided to
   the system calls can be inspected.

10. Some architecture+ABI combos have multiplexer system calls.  For
    example, socketcall can perform any socket-related system call.  The
    arguments to the multiplexed system call are passed indirectly, via a
    pointer to user memory.  They therefore cannot be inspected by
    seccomp.

11. Some valid system calls are not listed in any table in the kernel
    source.  For example, __ARM_NR_cacheflush is an "ARM private" system
    call.  It does not appear in any *.tbl file.

12. Conditional branches are limited to relative jumps of at most 256
    instructions forward.

13. Prior to Linux 4.8, any process able to spawn another process and
    call ptrace could bypass seccomp restrictions.

To address (1), (2), and (3), we include preprocessor checks to identify
the native architecture value, and reject all system calls that don't use
the native architecture.
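For illustration, here is a minimal sketch of what that rejection can look
like, assuming the configure-time tests have selected the appropriate
AUDIT_ARCH_* constant from <linux/audit.h>; the names NATIVE_AUDIT_ARCH
and archCheck are hypothetical and not part of this commit:

    #include <linux/audit.h>
    #include <linux/filter.h>
    #include <linux/seccomp.h>
    #include <stddef.h>

    /* Hypothetical sketch: pick the native arch value at compile time. */
    #if defined(__x86_64__)
    # define NATIVE_AUDIT_ARCH AUDIT_ARCH_X86_64
    #elif defined(__i386__)
    # define NATIVE_AUDIT_ARCH AUDIT_ARCH_I386
    #elif defined(__aarch64__)
    # define NATIVE_AUDIT_ARCH AUDIT_ARCH_AARCH64
    #else
    # error "unsupported architecture; add a NATIVE_AUDIT_ARCH entry"
    #endif

    /* Filter prologue: kill the process on any system call made under a
     * non-native architecture; native calls fall through to later checks. */
    static const struct sock_filter archCheck[] = {
        /* Load seccomp_data.arch into the accumulator. */
        BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
                 offsetof(struct seccomp_data, arch)),
        /* Native?  Skip the kill. */
        BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, NATIVE_AUDIT_ARCH, 1, 0),
        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_KILL_PROCESS),
    };

Note that, per (1), the arch field alone does not pin the ABI - the x32
ABI, for example, also reports AUDIT_ARCH_X86_64 and is distinguished
only by a flag ORed into the system call number - which is why the system
call pinning described next is still needed.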
To address (4), we use the AC_C_BIGENDIAN autoconf check to conditionally
define WORDS_BIGENDIAN, and match up the proper portions of any uint64 we
test for with the value in the accumulator being tested against.

To address (5) and (6), we use system call pinning.  That is, we hardcode
a snapshot of all the valid system call numbers at the time of writing,
and reject any system call numbers not in the recorded set.  A set is
recorded for every architecture+ABI combo, and the native one is chosen
at compile time.  This ensures that not only are non-native architectures
rejected, but so are non-native ABIs.  For the sake of conciseness, we
represent these sets as sets of disjoint ranges.

Due to (7), checking each range in turn could add a lot of overhead to
each system call, so we instead binary search through the ranges.  Due to
(8), this binary search has to be fully unrolled, so we do that too.

It can be tedious and error-prone to produce the syscall ranges manually
by looking at linux's *.tbl files, since the gaps are often small and
uncommented.  To address this, a script, build-aux/extract-syscall-ranges.sh,
is added that will produce them given a *.tbl filename and an ABI regex
(some tables seem to abuse the ABI field with strange values like
"memfd_secret").  Note that producing the final values still requires
looking at the proper asm/unistd.h file to find any private numbers and
to identify any offsets and ABI variants used.

(10) used to have no good solution, but in the past decade most
architectures have gained dedicated system call alternatives to at least
socketcall, so we can (hopefully) just block it entirely.

To address (13), we also block ptrace.

* build-aux/extract-syscall-ranges.sh: New script.
* Makefile.am (EXTRA_DIST): Register it.
* config-daemon.ac: Use AC_C_BIGENDIAN.
* nix/libutil/spawn.cc (setNoNewPrivsAction, addSeccompFilterAction): New
  functions.
* nix/libutil/spawn.hh (setNoNewPrivsAction, addSeccompFilterAction): New
  declarations.
  (SpawnContext)[setNoNewPrivs, addSeccompFilter]: New fields.
* nix/libutil/seccomp.hh: New header file.
* nix/libutil/seccomp.cc: New file.
* nix/local.mk (libutil_a_SOURCES, libutil_headers): Register them.
* nix/libstore/build.cc (slirpSeccompFilter, writeSeccompFilterDot): New
  functions.
  (spawnSlirp4netns): Use them; set seccomp filter for slirp4netns.

Change-Id: Ic92c7f564ab12596b87ed0801b22f88fbb543b95
Signed-off-by: John Kehayias <john.kehayias@protonmail.com>
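Before the contents of the new nix/libutil/seccomp.cc below, here is a
hedged sketch of how such pinned ranges could feed into its
rangeActionsToFilter; the range bounds are made up for illustration, and
Uint32RangeAction (declared in nix/libutil/seccomp.hh) is assumed to
carry the low/high/instructions members that the implementation uses:

    /* Sketch only: allow two hypothetical pinned system-call ranges and
     * kill anything outside them. */
    std::vector<Uint32RangeAction> ranges;

    Uint32RangeAction r1;
    r1.low = 0;                 /* made-up pinned range */
    r1.high = 243;
    r1.instructions = { BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW) };
    ranges.push_back(r1);

    Uint32RangeAction r2;
    r2.low = 245;               /* 244 stands in for a table gap */
    r2.high = 260;
    r2.instructions = { BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW) };
    ranges.push_back(r2);

    /* The caller must already have loaded seccomp_data.nr into the
     * accumulator.  Numbers outside every range fall through the search. */
    std::vector<struct sock_filter> search = rangeActionsToFilter(ranges);
    search.push_back(BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_KILL_PROCESS));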
#if __linux__
#include <util.hh>
#include <seccomp.hh>
#include <algorithm>

namespace nix {

struct FilterInstruction {
    struct sock_filter instruction;
    bool fallthroughJt = false;
    bool fallthroughJf = false;
    bool fallthroughK = false;
};

/* Note: instructions in "out" should have already verified that sysno is
 * >= ranges[lowIndex].low.  The value to compare against should already be
 * in the accumulator. */
static void
rangeActionsToFilter(std::vector<Uint32RangeAction> & ranges,
                     size_t lowIndex, /* Inclusive */
                     size_t end,      /* Exclusive */
                     std::vector<FilterInstruction> & out)
{
    if(lowIndex >= end) return;

    if(end == lowIndex + 1) {
        FilterInstruction branch;
        Uint32RangeAction range = ranges.at(lowIndex);
        branch.instruction = BPF_JUMP(BPF_JMP | BPF_JGT | BPF_K,
                                      range.high,
                                      /* To be fixed up */
                                      0,
                                      0);
        branch.fallthroughJt = true;
        out.push_back(branch);
        for(auto & i : range.instructions) {
            FilterInstruction f;
            f.instruction = i;
            out.push_back(f);
        }
        FilterInstruction fallthroughBranch;
        fallthroughBranch.instruction = BPF_JUMP(BPF_JMP | BPF_JA | BPF_K,
                                                 /* To be fixed up */
                                                 0,
                                                 0,
                                                 0);
        fallthroughBranch.fallthroughK = true;
        out.push_back(fallthroughBranch);
        return;
    }

    size_t middle = lowIndex + ((end - lowIndex) / 2);
    Uint32RangeAction range = ranges.at(middle);
    FilterInstruction branch;
    size_t branchIndex = out.size();
    branch.instruction = BPF_JUMP(BPF_JMP | BPF_JGE | BPF_K,
                                  range.low,
                                  0,
                                  /* To be fixed up a little farther down */
                                  0);
    out.push_back(branch);
    rangeActionsToFilter(ranges, middle, end, out);
    size_t elseIndex = out.size();
    out[branchIndex].instruction.jf = (elseIndex - branchIndex - 1);
    rangeActionsToFilter(ranges, lowIndex, middle, out);
}
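
/* Illustration (editorial sketch, not part of the original file): for the
 * two sorted ranges [2,5] and [9,9], the wrapper below plus this helper
 * emit, roughly:
 *
 *       jge 2,  +0, fall    ; entry invariant: sysno >= lowest low
 *       jge 9,  +0, else    ; binary search on the middle range's low
 *       jgt 9, fall, +0     ; sysno <= 9, so it is in [9,9]
 *       <instructions for [9,9]>
 *       ja fall
 *   else:
 *       jgt 5, fall, +0     ; sysno <= 5, so it is in [2,5]
 *       <instructions for [2,5]>
 *       ja fall
 *   fall:                   ; control resumes after the sequence
 */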

static bool compareRanges(Uint32RangeAction a, Uint32RangeAction b)
{
    return (a.low < b.low);
}

/* Produce a loop-unrolled binary search of RANGES for the u32 currently in
 * the accumulator.  If the binary search finds a range that contains it, it
 * will execute the corresponding instructions.  If these instructions fall
 * through, or if no containing range is found, control resumes after the last
 * instruction in the returned sequence. */
std::vector<struct sock_filter>
rangeActionsToFilter(std::vector<Uint32RangeAction> & ranges)
{
    if(ranges.size() == 0) return {};
    std::sort(ranges.begin(), ranges.end(), compareRanges);
    if(ranges.size() > 1) {
        for(auto & i : ranges)
            if(i.low > i.high)
                throw Error("Invalid range in rangeActionsToFilter");
        for(size_t j = 1; j < ranges.size(); j++)
            if(ranges[j].low <= ranges[j - 1].high)
                throw Error("Overlapping ranges in rangeActionsToFilter");
    }
    std::vector<FilterInstruction> out;
    Uint32RangeAction first = ranges.at(0);
    FilterInstruction branch;
    /* Verify accumulator value is >= first.low, to satisfy initial invariant */
    branch.instruction = BPF_JUMP(BPF_JMP | BPF_JGE | BPF_K,
                                  first.low,
                                  0,
                                  /* To be fixed up */
                                  0);
    branch.fallthroughJf = true;
    out.push_back(branch);
    rangeActionsToFilter(ranges, 0, ranges.size(), out);
    size_t fallthrough = out.size();
    std::vector<struct sock_filter> out2;
    for(size_t j = 0; j < out.size(); j++) {
        if(out[j].fallthroughJt) out[j].instruction.jt = (fallthrough - j - 1);
        if(out[j].fallthroughJf) out[j].instruction.jf = (fallthrough - j - 1);
        if(out[j].fallthroughK) out[j].instruction.k = (fallthrough - j - 1);
        out2.push_back(out[j].instruction);
    }
    return out2;
}

/* If the uint64 at offset OFFSET has value VALUE, run INSTRUCTIONS.
 * Otherwise, or if INSTRUCTIONS falls through, continue past the last
 * instruction of OUT at the time seccompMatchu64 returns.  Clobbers
 * accumulator! */
std::vector<struct sock_filter> seccompMatchu64(std::vector<struct sock_filter> & out,
                                                uint64_t value,
                                                std::vector<struct sock_filter> instructions,
                                                uint32_t offset)
{
    /* Note: this only works where the order of bytes in uint64 is big or
     * little endian, and the same order holds for uint32. */
    /* Load lower-addressed 32 bits */
    out.push_back(BPF_STMT(BPF_LD | BPF_W | BPF_ABS, offset));
    size_t jmp1Index = out.size();

    out.push_back(BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K,
#ifdef WORDS_BIGENDIAN
                           (uint32_t)((value >> 32) & 0xffffffff),
#else
                           (uint32_t)(value & 0xffffffff),
#endif
                           0,
                           /* To be fixed up */
                           0));
    /* Load higher-addressed 32 bits */
    out.push_back(BPF_STMT(BPF_LD | BPF_W | BPF_ABS, offset + (uint32_t)sizeof(uint32_t)));
    size_t jmp2Index = out.size();
    out.push_back(BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K,
#ifdef WORDS_BIGENDIAN
                           (uint32_t)(value & 0xffffffff),
#else
                           (uint32_t)((value >> 32) & 0xffffffff),
#endif
                           0,
                           /* To be fixed up */
                           0));

    out.insert(out.end(), instructions.begin(), instructions.end());
    out[jmp1Index].jf = (out.size() - jmp1Index - 1);
    out[jmp2Index].jf = (out.size() - jmp2Index - 1);
    return out;
}

}

#endif
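
As a final hedged illustration (names and surrounding logic are
assumptions, not code from this commit): seccompMatchu64 can test the
domain argument of socket(2), which seccomp exposes as the uint64
seccomp_data.args[0], to reject unix-domain sockets; AF_UNIX comes from
<sys/socket.h>:

    /* Sketch: inside the handler for __NR_socket, kill the process when
     * the domain argument is AF_UNIX.  The prologue (arch check, load of
     * seccomp_data.nr, match on __NR_socket) is elided.  Note that
     * seccompMatchu64 clobbers the accumulator. */
    std::vector<struct sock_filter> filter;
    /* ...elided prologue instructions pushed onto filter... */
    seccompMatchu64(filter,
                    AF_UNIX,
                    { BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_KILL_PROCESS) },
                    offsetof(struct seccomp_data, args[0]));
    /* Execution continues here when args[0] != AF_UNIX. */
    filter.push_back(BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW));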