|
This patch adds __must_check annotations to kasan hooks that return a
pointer to make sure that a tagged pointer always gets propagated.
Link: http://lkml.kernel.org/r/03b269c5e453945f724bfca3159d4e1333a8fb1c.1544099024.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <[email protected]>
Suggested-by: Andrey Ryabinin <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Dmitry Vyukov <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Will Deacon <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Tag-based KASAN doesn't check memory accesses through pointers tagged with
0xff. When page_address is used to get a pointer to memory that corresponds
to some page, the tag of the resulting pointer gets set to 0xff, even
though the allocated memory might have been tagged differently.
For slab pages it's impossible to recover the correct tag to return from
page_address, since the page might contain multiple slab objects tagged
with different values, and we can't know in advance which one of them is
going to get accessed. For non-slab pages, however, we can recover the tag
in page_address, since the whole page was marked with the same tag.
This patch adds tagging to non-slab memory allocated with pagealloc. To
set the tag of the pointer returned from page_address, the tag gets stored
to page->flags when the memory gets allocated.
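As a rough sketch of the idea (the helper names and the bit position inside
page->flags are illustrative, not the exact code in this patch), storing and
re-applying the tag looks roughly like this:
/* Minimal sketch; KASAN_TAG_PGSHIFT and the helper names are assumptions. */
static inline u8 page_tag_get(const struct page *page)
{
        return (page->flags >> KASAN_TAG_PGSHIFT) & 0xff;
}
static inline void page_tag_set(struct page *page, u8 tag)
{
        page->flags &= ~(0xffUL << KASAN_TAG_PGSHIFT);
        page->flags |= (unsigned long)tag << KASAN_TAG_PGSHIFT;
}
/* page_address() can then re-apply the stored tag to the linear-map address: */
#define tagged_page_address(page) \
        ((void *)(((u64)lowmem_page_address(page) & ~(0xffUL << 56)) | \
                  ((u64)page_tag_get(page) << 56)))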
Link: http://lkml.kernel.org/r/d758ddcef46a5abc9970182b9137e2fbee202a2c.1544099024.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <[email protected]>
Reviewed-by: Andrey Ryabinin <[email protected]>
Reviewed-by: Dmitry Vyukov <[email protected]>
Acked-by: Will Deacon <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Mark Rutland <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Tag-based KASAN inline instrumentation mode (which embeds checks of shadow
memory into the generated code, instead of inserting a callback) generates
a brk instruction when a tag mismatch is detected.
This commit adds a tag-based KASAN-specific brk handler that decodes the
immediate value passed to the brk instruction (to extract information
about the memory access that triggered the mismatch), reads the register
values (x0 contains the guilty address) and reports the bug.
Link: http://lkml.kernel.org/r/c91fe7684070e34dc34b419e6b69498f4dcacc2d.1544099024.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <[email protected]>
Reviewed-by: Andrey Ryabinin <[email protected]>
Reviewed-by: Dmitry Vyukov <[email protected]>
Acked-by: Will Deacon <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Mark Rutland <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
This commit adds the tag-based KASAN-specific hooks implementation and
adjusts the common generic and tag-based KASAN ones.
1. When a new slab cache is created, tag-based KASAN rounds up the size of
the objects in this cache to KASAN_SHADOW_SCALE_SIZE (== 16).
2. On each kmalloc tag-based KASAN generates a random tag, sets the shadow
memory that corresponds to this object to this tag, and embeds this
tag value into the top byte of the returned pointer.
3. On each kfree tag-based KASAN poisons the shadow memory with a random
tag to allow detection of use-after-free bugs.
The rest of the logic of the hook implementation is very much similar to
the one provided by generic KASAN. Tag-based KASAN saves allocation and
free stack metadata to the slab object the same way generic KASAN does.
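The core of the kmalloc hook described above boils down to the following
sketch (simplified; the helper names are illustrative rather than the exact
ones from this patch):
static void *tag_kmalloc_hook(void *object, size_t size)
{
        /* pick a (pseudo-)random tag; the real code avoids a few reserved values */
        u8 tag = get_random_u32() & 0xff;

        /* one shadow byte covers KASAN_SHADOW_SCALE_SIZE (16) bytes of memory */
        kasan_poison_shadow(object, round_up(size, KASAN_SHADOW_SCALE_SIZE), tag);

        /* embed the same tag into the top byte of the returned pointer */
        return (void *)(((u64)object & ~(0xffUL << 56)) | ((u64)tag << 56));
}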
Link: http://lkml.kernel.org/r/bda78069e3b8422039794050ddcb2d53d053ed41.1544099024.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <[email protected]>
Reviewed-by: Andrey Ryabinin <[email protected]>
Reviewed-by: Dmitry Vyukov <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Will Deacon <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
While with SLUB we can actually preassign tags for caches with constructors
and store them in pointers in the freelist, SLAB doesn't allow that since
the freelist is stored as an array of indexes, so there are no pointers to
store the tags.
Instead we compute the tag twice, once when a slab is created before
calling the constructor and then again each time an object is
allocated with kmalloc. The tag is computed simply by taking the lowest byte
of the index that corresponds to the object. However, in kasan_kmalloc we
only have access to the object's pointer, so we need a way to find out
which index this object corresponds to.
This patch moves obj_to_index from slab.c to include/linux/slab_def.h to
be reused by KASAN.
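For reference, the moved helper is along the following lines, and the SLAB
tag is then just its lowest byte (the wrapper at the end is illustrative,
not part of the patch):
/* Recover an object's index from its address within the slab page. */
static inline unsigned int obj_to_index(const struct kmem_cache *cache,
                                        const struct page *page, void *obj)
{
        u32 offset = (obj - page->s_mem);

        return reciprocal_divide(offset, cache->reciprocal_buffer_size);
}
/* Illustrative: the tag assigned to the object is the low byte of its index. */
static inline u8 slab_obj_tag(const struct kmem_cache *cache,
                              const struct page *page, void *obj)
{
        return (u8)obj_to_index(cache, page, obj);
}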
Link: http://lkml.kernel.org/r/c02cd9e574cfd93858e43ac94b05e38f891fef64.1544099024.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <[email protected]>
Reviewed-by: Andrey Ryabinin <[email protected]>
Reviewed-by: Dmitry Vyukov <[email protected]>
Acked-by: Christoph Lameter <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Will Deacon <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
This commit adds routines that print tag-based KASAN error reports.
Those are quite similar to the generic KASAN ones; the differences are:
1. The way tag-based KASAN finds the first bad shadow cell (with a
mismatching tag). Tag-based KASAN compares memory tags from the shadow
memory to the pointer tag.
2. Tag-based KASAN reports all bugs with the "KASAN: invalid-access"
header.
Also simplify generic KASAN find_first_bad_addr.
Link: http://lkml.kernel.org/r/aee6897b1bd077732a315fd84c6b4f234dbfdfcb.1544099024.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <[email protected]>
Reviewed-by: Andrey Ryabinin <[email protected]>
Reviewed-by: Dmitry Vyukov <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Will Deacon <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Move generic KASAN specific error reporting routines to generic_report.c
without any functional changes, leaving common error reporting code in
report.c to be later reused by tag-based KASAN.
Link: http://lkml.kernel.org/r/ba48c32f8e5aefedee78998ccff0413bee9e0f5b.1544099024.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <[email protected]>
Reviewed-by: Andrey Ryabinin <[email protected]>
Reviewed-by: Dmitry Vyukov <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Will Deacon <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
The krealloc function checks whether the same buffer was reused or a new one
was allocated by comparing kernel pointers. Tag-based KASAN changes the memory
tag on the krealloc'ed chunk of memory and therefore also changes the
pointer tag of the returned pointer. Therefore we need to perform the
comparison on untagged (with tags reset) pointers to check whether it's
the same memory region or not.
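A simplified sketch of the resulting tail of krealloc(); kasan_reset_tag()
stands for whatever helper clears the top byte (the name is illustrative
here):
static void *krealloc_sketch(const void *p, size_t new_size, gfp_t flags)
{
        void *ret = __do_krealloc(p, new_size, flags); /* may return a retagged pointer */

        /* compare with tags reset: same region => do not free the old buffer */
        if (ret && kasan_reset_tag(p) != kasan_reset_tag(ret))
                kfree(p);

        return ret;
}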
Link: http://lkml.kernel.org/r/14f6190d7846186a3506cd66d82446646fe65090.1544099024.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <[email protected]>
Reviewed-by: Andrey Ryabinin <[email protected]>
Reviewed-by: Dmitry Vyukov <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Will Deacon <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Tag-based KASAN uses the Top Byte Ignore feature of arm64 CPUs to store a
pointer tag in the top byte of each pointer. This commit enables the
TCR_TBI1 bit, which enables Top Byte Ignore for the kernel, when tag-based
KASAN is used.
Link: http://lkml.kernel.org/r/f51eca084c8cdb2f3a55195fe342dc8953b7aead.1544099024.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <[email protected]>
Reviewed-by: Andrey Ryabinin <[email protected]>
Reviewed-by: Dmitry Vyukov <[email protected]>
Acked-by: Will Deacon <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Mark Rutland <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Right now arm64 fault handling code removes pointer tags from addresses
covered by TTBR0 in faults taken from both EL0 and EL1, but doesn't do
that for pointers covered by TTBR1.
This patch adds two helper functions is_ttbr0_addr() and is_ttbr1_addr(),
where the latter one accounts for the fact that TTBR1 pointers might be
tagged when tag-based KASAN is in use, and uses these helper functions to
perform pointer checks in arch/arm64/mm/fault.c.
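The two helpers amount to roughly the following (a sketch; the exact code in
arch/arm64/mm/fault.c may differ):
static inline bool is_ttbr0_addr(unsigned long addr)
{
        /* TTBR0 (user) addresses have already had their tags stripped on entry */
        return addr < TASK_SIZE;
}
static inline bool is_ttbr1_addr(unsigned long addr)
{
        /* TTBR1 (kernel) addresses may carry a KASAN tag, so untag before the check */
        return untagged_addr(addr) >= VA_START;
}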
Link: http://lkml.kernel.org/r/3f349b0e9e48b5df3298a6b4ae0634332274494a.1544099024.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <[email protected]>
Suggested-by: Mark Rutland <[email protected]>
Acked-by: Will Deacon <[email protected]>
Cc: Andrey Ryabinin <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Dmitry Vyukov <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
An object constructor can initialize pointers within this object based on
the address of the object. Since the object address might be tagged, we
need to assign a tag before calling the constructor.
The implemented approach is to assign tags to objects with constructors
when a slab is allocated and call constructors once as usual. The
downside is that such an object would always have the same tag when it is
reallocated, so we won't catch use-after-frees on it.
Also preassign tags for objects from SLAB_TYPESAFE_BY_RCU caches, since
they can be validly accessed after having been freed.
Link: http://lkml.kernel.org/r/f158a8a74a031d66f0a9398a5b0ed453c37ba09a.1544099024.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <[email protected]>
Reviewed-by: Andrey Ryabinin <[email protected]>
Reviewed-by: Dmitry Vyukov <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Will Deacon <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
virt_addr_is_linear (which is used by virt_addr_valid) assumes that the
top byte of the address is 0xff, which isn't always the case with
tag-based KASAN.
This patch resets the tag in this macro.
Link: http://lkml.kernel.org/r/df73a37dd5ed37f4deaf77bc718e9f2e590e69b1.1544099024.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <[email protected]>
Reviewed-by: Andrey Ryabinin <[email protected]>
Reviewed-by: Dmitry Vyukov <[email protected]>
Acked-by: Will Deacon <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Mark Rutland <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
This commit adds a few helper functions that are meant to be used to work
with tags embedded in the top byte of kernel pointers: to set, to get or
to reset the top byte.
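A minimal sketch of such helpers, assuming a 64-bit pointer with the tag in
bits 63:56 (the names follow the description above, not necessarily the
final patch):
#define KASAN_TAG_SHIFT 56
#define KASAN_TAG_MASK  (0xffUL << KASAN_TAG_SHIFT)
static inline void *set_tag(const void *addr, u8 tag)
{
        return (void *)(((u64)addr & ~KASAN_TAG_MASK) | ((u64)tag << KASAN_TAG_SHIFT));
}
static inline u8 get_tag(const void *addr)
{
        return (u8)((u64)addr >> KASAN_TAG_SHIFT);
}
static inline void *reset_tag(const void *addr)
{
        return set_tag(addr, 0xff);     /* 0xff is the native "untagged" value */
}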
Link: http://lkml.kernel.org/r/f6c6437bb8e143bc44f42c3c259c62e734be7935.1544099024.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <[email protected]>
Cc: Andrey Ryabinin <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Dmitry Vyukov <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Will Deacon <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Move the untagged_addr() macro from arch/arm64/include/asm/uaccess.h
to arch/arm64/include/asm/memory.h to be later reused by KASAN.
Also make the untagged_addr() macro accept all kinds of address types
(void *, unsigned long, etc.). This avoids having to specify type casts in
each place where the macro is used. This is done by using __typeof__.
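The resulting macro is along these lines (a sketch; it assumes the tag sits
in bits 63:56 and that sign-extending from bit 55 restores the native top
byte):
#include <linux/bitops.h>       /* sign_extend64() */
#define untagged_addr(addr) \
        ((__typeof__(addr))sign_extend64((u64)(addr), 55))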
Link: http://lkml.kernel.org/r/2e9ef8d2ed594106eca514b268365b5419113f6a.1544099024.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <[email protected]>
Acked-by: Mark Rutland <[email protected]>
Acked-by: Will Deacon <[email protected]>
Cc: Andrey Ryabinin <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Dmitry Vyukov <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
A tag-based KASAN shadow memory cell contains a memory tag that
corresponds to the tag in the top byte of the pointer that points to that
memory. The native top byte value of kernel pointers is 0xff, so with
tag-based KASAN we need to initialize shadow memory to 0xff.
[[email protected]: arm64: skip kmemleak for KASAN again]
Link: http://lkml.kernel.org/r/[email protected]
Link: http://lkml.kernel.org/r/5cc1b789aad7c99cf4f3ec5b328b147ad53edb40.1544099024.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <[email protected]>
Reviewed-by: Andrey Ryabinin <[email protected]>
Reviewed-by: Dmitry Vyukov <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Will Deacon <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
With the tag-based KASAN mode the early shadow value is 0xff and not 0x00, so
this patch renames kasan_zero_(page|pte|pmd|pud|p4d) to
kasan_early_shadow_(page|pte|pmd|pud|p4d) to avoid confusion.
Link: http://lkml.kernel.org/r/3fed313280ebf4f88645f5b89ccbc066d320e177.1544099024.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <[email protected]>
Suggested-by: Mark Rutland <[email protected]>
Cc: Andrey Ryabinin <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Dmitry Vyukov <[email protected]>
Cc: Will Deacon <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Tag-based KASAN uses 1 shadow byte for 16 bytes of kernel memory, so it
requires 1/16th of the kernel virtual address space for the shadow memory.
This commit sets KASAN_SHADOW_SCALE_SHIFT to 4 when the tag-based KASAN
mode is enabled.
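With that scale, the memory-to-shadow translation becomes the following
(sketch; the KASAN_SHADOW_OFFSET value is configuration dependent):
#define KASAN_SHADOW_SCALE_SHIFT        4       /* 1 shadow byte per 16 bytes of memory */
static inline void *kasan_mem_to_shadow(const void *addr)
{
        return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
                + KASAN_SHADOW_OFFSET;
}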
Link: http://lkml.kernel.org/r/308b6bd49f756bb5e533be93c6f085ba99b30339.1544099024.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <[email protected]>
Reviewed-by: Andrey Ryabinin <[email protected]>
Reviewed-by: Dmitry Vyukov <[email protected]>
Acked-by: Will Deacon <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Mark Rutland <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
This commit splits the current CONFIG_KASAN config option into two:
1. CONFIG_KASAN_GENERIC, that enables the generic KASAN mode (the one
that exists now);
2. CONFIG_KASAN_SW_TAGS, that enables the software tag-based KASAN mode.
The name CONFIG_KASAN_SW_TAGS is chosen because in the future we will have
another, hardware tag-based, KASAN mode that will rely on hardware memory
tagging support in arm64.
With CONFIG_KASAN_SW_TAGS enabled, compiler options are changed to
instrument kernel files with -fsanitize=kernel-hwaddress (except the ones
for which KASAN_SANITIZE := n is set).
Both CONFIG_KASAN_GENERIC and CONFIG_KASAN_SW_TAGS support both
CONFIG_KASAN_INLINE and CONFIG_KASAN_OUTLINE instrumentation modes.
This commit also adds an empty placeholder (for now) implementation of the
tag-based KASAN-specific hooks inserted by the compiler and adjusts the
common hooks implementation.
While this commit adds the CONFIG_KASAN_SW_TAGS config option, this option
is not selectable, as it depends on HAVE_ARCH_KASAN_SW_TAGS, which we will
enable once all the infrastructure code has been added.
Link: http://lkml.kernel.org/r/b2550106eb8a68b10fefbabce820910b115aa853.1544099024.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <[email protected]>
Reviewed-by: Andrey Ryabinin <[email protected]>
Reviewed-by: Dmitry Vyukov <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Will Deacon <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
We now have two KASAN modes: generic KASAN and tag-based KASAN. Rename
kasan.c to generic.c to reflect that. Also rename kasan_init.c to init.c
as it contains initialization code for both KASAN modes.
Link: http://lkml.kernel.org/r/88c6fd2a883e459e6242030497230e5fb0d44d44.1544099024.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <[email protected]>
Reviewed-by: Andrey Ryabinin <[email protected]>
Reviewed-by: Dmitry Vyukov <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Will Deacon <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Tag-based KASAN reuses a significant part of the generic KASAN code, so
move the common parts to common.c without any functional changes.
Link: http://lkml.kernel.org/r/114064d002356e03bb8cc91f7835e20dc61b51d9.1544099024.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <[email protected]>
Reviewed-by: Andrey Ryabinin <[email protected]>
Reviewed-by: Dmitry Vyukov <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Will Deacon <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
The previous patch updated KASAN hooks signatures and their usage in SLAB
and SLUB code, except for the early_kmem_cache_node_alloc function. This
patch handles that function separately, as it requires reordering some of
the initialization code to correctly propagate a tagged pointer in case a
tag is assigned by kasan_kmalloc.
Link: http://lkml.kernel.org/r/fc8d0fdcf733a7a52e8d0daaa650f4736a57de8c.1544099024.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <[email protected]>
Cc: Andrey Ryabinin <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Dmitry Vyukov <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Will Deacon <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Patch series "kasan: add software tag-based mode for arm64", v13.
This patchset adds a new software tag-based mode to KASAN [1]. (Initially
this mode was called KHWASAN, but it got renamed, see the naming rationale
at the end of this section).
The plan is to implement HWASan [2] for the kernel, with the expectation
that it will have performance comparable to KASAN while consuming much less
memory, trading that off for somewhat imprecise bug detection and support
only on arm64.
The underlying ideas of the approach used by software tag-based KASAN are:
1. By using the Top Byte Ignore (TBI) arm64 CPU feature, we can store
pointer tags in the top byte of each kernel pointer.
2. Using shadow memory, we can store memory tags for each chunk of kernel
memory.
3. On each memory allocation, we can generate a random tag, embed it into
the returned pointer and set the memory tags that correspond to this
chunk of memory to the same value.
4. By using compiler instrumentation, before each memory access we can add
a check that the pointer tag matches the tag of the memory that is being
accessed.
5. On a tag mismatch we report an error.
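In code, the check from point 4 boils down to roughly the following (a
simplified sketch, not the generated instrumentation; the shadow lookup and
report function shown are illustrative):
static __always_inline void check_access(const void *addr, size_t size, bool write)
{
        u8 ptr_tag = (u8)((u64)addr >> 56);
        u8 *shadow = kasan_mem_to_shadow(addr); /* shadow byte covering this address */

        /* 0xff is the "match-all" native tag; anything else must match the memory tag */
        if (unlikely(ptr_tag != 0xff && ptr_tag != *shadow))
                kasan_report((unsigned long)addr, size, write, _RET_IP_);
}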
With this patchset the existing KASAN mode gets renamed to generic KASAN,
with the word "generic" meaning that the implementation can be supported
by any architecture as it is purely software.
The new mode this patchset adds is called software tag-based KASAN. The
word "tag-based" refers to the fact that this mode uses tags embedded into
the top byte of kernel pointers and the TBI arm64 CPU feature that allows
dereferencing such pointers. The word "software" here means that shadow
memory manipulation and tag checking on pointer dereference is done in
software. As it is the only tag-based implementation right now, "software
tag-based" KASAN is sometimes referred to as simply "tag-based" in this
patchset.
A potential expansion of this mode is a hardware tag-based mode, which
would use hardware memory tagging support (announced by Arm [3]) instead
of compiler instrumentation and manual shadow memory manipulation.
Same as generic KASAN, software tag-based KASAN is strictly a debugging
feature.
[1] https://www.kernel.org/doc/html/latest/dev-tools/kasan.html
[2] http://clang.llvm.org/docs/HardwareAssistedAddressSanitizerDesign.html
[3] https://community.arm.com/processors/b/blog/posts/arm-a-profile-architecture-2018-developments-armv85a
====== Rationale
On mobile devices generic KASAN's memory usage is a significant problem.
One of the main reasons to have tag-based KASAN is to be able to perform a
similar set of checks as the generic one does, but with lower memory
requirements.
Comment from Vishwath Mohan <[email protected]>:
I don't have data on-hand, but anecdotally both ASAN and KASAN have proven
problematic to enable for environments that don't tolerate the increased
memory pressure well. This includes
(a) Low-memory form factors - Wear, TV, Things, lower-tier phones like Go,
(c) Connected components like Pixel's visual core [1].
These are both places I'd love to have a low(er) memory footprint option at
my disposal.
Comment from Evgenii Stepanov <[email protected]>:
Looking at a live Android device under load, slab (according to
/proc/meminfo) + kernel stack take 8-10% available RAM (~350MB). KASAN's
overhead of 2x - 3x on top of it is not insignificant.
Not having this overhead enables near-production use - ex. running
KASAN/KHWASAN kernel on a personal, daily-use device to catch bugs that do
not reproduce in test configuration. These are the ones that often cost
the most engineering time to track down.
CPU overhead is bad, but generally tolerable. RAM is critical, in our
experience. Once it gets low enough, OOM-killer makes your life
miserable.
[1] https://www.blog.google/products/pixel/pixel-visual-core-image-processing-and-machine-learning-pixel-2/
====== Technical details
Software tag-based KASAN mode is implemented in a very similar way to the
generic one. This patchset essentially does the following:
1. TCR_TBI1 is set to enable Top Byte Ignore.
2. Shadow memory is used (with a different scale, 1:16, so each shadow
byte corresponds to 16 bytes of kernel memory) to store memory tags.
3. All slab objects are aligned to shadow scale, which is 16 bytes.
4. All pointers returned from the slab allocator are tagged with a random
tag and the corresponding shadow memory is poisoned with the same value.
5. Compiler instrumentation is used to insert tag checks, either by
calling callbacks or by inlining them (the CONFIG_KASAN_OUTLINE and
CONFIG_KASAN_INLINE flags are reused).
6. When a tag mismatch is detected in callback instrumentation mode,
KASAN simply prints a bug report. In case of inline instrumentation,
clang inserts a brk instruction, and KASAN has its own brk handler,
which reports the bug.
7. The memory in between slab objects is marked with a reserved tag, and
acts as a redzone.
8. When a slab object is freed it's marked with a reserved tag.
Bug detection is imprecise for two reasons:
1. We won't catch some small out-of-bounds accesses that fall into the
same shadow cell as the last byte of a slab object.
2. We only have 1 byte to store tags, which means we have a 1/256
probability of a tag match for an incorrect access (actually even
slightly less due to reserved tag values).
Despite that, there's a particular type of bug that tag-based KASAN can
detect compared to generic KASAN: use-after-free after the object has been
allocated by someone else.
====== Testing
Some kernel developers voiced a concern that changing the top byte of
kernel pointers may lead to subtle bugs that are difficult to discover.
To address this concern deliberate testing has been performed.
It doesn't seem feasible to do some kind of static checking to find
potential issues with pointer tagging, so a dynamic approach was taken.
All pointer comparisons/subtractions have been instrumented in an LLVM
compiler pass, and a kernel module has been used that prints a bug report
whenever two pointers with different tags are compared/subtracted (ignoring
comparisons with NULL pointers and with pointers obtained by casting an
error code to a pointer type). Then the kernel has been
booted in QEMU and on an Odroid C2 board and syzkaller has been run.
This yielded the following results.
The two places that look interesting are:
is_vmalloc_addr in include/linux/mm.h
is_kernel_rodata in mm/util.c
Here we compare a pointer with some fixed untagged values to make sure
that the pointer lies in a particular part of the kernel address space.
Since tag-based KASAN doesn't add tags to pointers that belong to rodata
or vmalloc regions, this should work as is. To make sure, debug checks have
been added to those two functions to verify that the result doesn't change
whether we operate on pointers with or without tags.
A few other cases that don't look that interesting:
Comparing pointers to achieve unique sorting order of pointee objects
(e.g. sorting lock addresses before performing a double lock):
tty_ldisc_lock_pair_timeout in drivers/tty/tty_ldisc.c
pipe_double_lock in fs/pipe.c
unix_state_double_lock in net/unix/af_unix.c
lock_two_nondirectories in fs/inode.c
mutex_lock_double in kernel/events/core.c
ep_cmp_ffd in fs/eventpoll.c
fsnotify_compare_groups in fs/notify/mark.c
Nothing needs to be done here, since the tags embedded into pointers
don't change, so the sorting order would still be unique.
Checks that a pointer belongs to some particular allocation:
is_sibling_entry in lib/radix-tree.c
object_is_on_stack in include/linux/sched/task_stack.h
Nothing needs to be done here either, since two pointers can only belong
to the same allocation if they have the same tag.
Overall, since the kernel boots and works, there are no critical bugs.
As for the rest, the traditional kernel testing way (use until it fails) is
the only one that looks feasible.
Another point here is that tag-based KASAN is available under a separate
config option that needs to be deliberately enabled. Even though it might
be used in a "near-production" environment to find bugs that are not found
during fuzzing or running tests, it is still a debug tool.
====== Benchmarks
The following numbers were collected on Odroid C2 board. Both generic and
tag-based KASAN were used in inline instrumentation mode.
Boot time [1]:
* ~1.7 sec for clean kernel
* ~5.0 sec for generic KASAN
* ~5.0 sec for tag-based KASAN
Network performance [2]:
* 8.33 Gbits/sec for clean kernel
* 3.17 Gbits/sec for generic KASAN
* 2.85 Gbits/sec for tag-based KASAN
Slab memory usage after boot [3]:
* ~40 kb for clean kernel
* ~105 kb (~260% overhead) for generic KASAN
* ~47 kb (~20% overhead) for tag-based KASAN
KASAN memory overhead consists of three main parts:
1. Increased slab memory usage due to redzones.
2. Shadow memory (the whole of it reserved once during boot).
3. Quarantine (grows gradually up to some preset limit; the higher the limit,
the higher the chance to detect a use-after-free).
Comparing tag-based vs generic KASAN for each of these points:
1. 20% vs 260% overhead.
2. 1/16th vs 1/8th of physical memory.
3. Tag-based KASAN doesn't require quarantine.
[1] Time before the ext4 driver is initialized.
[2] Measured as `iperf -s & iperf -c 127.0.0.1 -t 30`.
[3] Measured as `cat /proc/meminfo | grep Slab`.
====== Some notes
A few notes:
1. The patchset can be found here:
https://github.com/xairy/kasan-prototype/tree/khwasan
2. Building requires a recent Clang version (7.0.0 or later).
3. Stack instrumentation is not supported yet and will be added later.
This patch (of 25):
Tag-based KASAN changes the value of the top byte of pointers returned
from the kernel allocation functions (such as kmalloc). This patch
updates KASAN hooks signatures and their usage in SLAB and SLUB code to
reflect that.
Link: http://lkml.kernel.org/r/aec2b5e3973781ff8a6bb6760f8543643202c451.1544099024.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <[email protected]>
Reviewed-by: Andrey Ryabinin <[email protected]>
Reviewed-by: Dmitry Vyukov <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Will Deacon <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
The SCU clock can be used in a similar way by the IMX8QXP and IMX8QM SoCs.
Let's make the clock ID name generic to allow other SoCs to reuse
the common part.
This patch only changes the clock id name and file name, so no
functional change.
Cc: Stephen Boyd <[email protected]>
Cc: Rob Herring <[email protected]>
Cc: Shawn Guo <[email protected]>
Cc: Sascha Hauer <[email protected]>
Cc: Fabio Estevam <[email protected]>
Cc: Michael Turquette <[email protected]>
Cc: [email protected]
Signed-off-by: Dong Aisheng <[email protected]>
Reviewed-by: Fabio Estevam <[email protected]>
Reviewed-by: Rob Herring <[email protected]>
Signed-off-by: Stephen Boyd <[email protected]>
|
|
It can be useful to inhibit all cgroup1 hierarchies, especially during
transition and for debugging. cgroup_no_v1 can block hierarchies with
controllers, which leaves out the named hierarchies. Expand it to
cover the named hierarchies so that "cgroup_no_v1=all,named" disables
all cgroup1 hierarchies.
Signed-off-by: Tejun Heo <[email protected]>
Suggested-by: Marcin Pawlowski <[email protected]>
Signed-off-by: Tejun Heo <[email protected]>
|
|
This fixes the case where all mount options specified are consumed by an
LSM and all that's left is an empty string. In this case cgroupfs should
accept the string and not fail.
How to reproduce (with SELinux enabled):
# umount /sys/fs/cgroup/unified
# mount -o context=system_u:object_r:cgroup_t:s0 -t cgroup2 cgroup2 /sys/fs/cgroup/unified
mount: /sys/fs/cgroup/unified: wrong fs type, bad option, bad superblock on cgroup2, missing codepage or helper program, or other error.
# dmesg | tail -n 1
[ 31.575952] cgroup: cgroup2: unknown option ""
Fixes: 67e9c74b8a87 ("cgroup: replace __DEVEL__sane_behavior with cgroup2 fs type")
[NOTE: should apply on top of commit 5136f6365ce3 ("cgroup: implement "nsdelegate" mount option"), older versions need manual rebase]
Suggested-by: Stephen Smalley <[email protected]>
Signed-off-by: Ondrej Mosnacek <[email protected]>
Signed-off-by: Tejun Heo <[email protected]>
|
|
Clarify the use of CONFIG_DFS_UPCALL for DNS name resolution
when server IP addresses change (e.g. on long-running mounts)
Signed-off-by: Steve French <[email protected]>
|
|
In case a hostname resolves to a different IP address (e.g. on
long-running mounts), make sure to resolve it every time prior to calling
generic_ip_connect() in reconnect.
Suggested-by: Steve French <[email protected]>
Signed-off-by: Paulo Alcantara <[email protected]>
Signed-off-by: Steve French <[email protected]>
|
|
After a successful failover, the cifs_reconnect_tcon() function will
make sure to reconnect every tcon to the new target server.
This is the same as the previous commit, but for the SMB1 codepath.
Signed-off-by: Paulo Alcantara <[email protected]>
Reviewed-by: Aurelien Aptel <[email protected]>
Signed-off-by: Steve French <[email protected]>
|
|
After a successful failover in cifs_reconnect(), the smb2_reconnect()
function will make sure to reconnect every tcon to the new target server.
This is for the SMB2+ codepath.
Signed-off-by: Paulo Alcantara <[email protected]>
Signed-off-by: Aurelien Aptel <[email protected]>
Signed-off-by: Steve French <[email protected]>
|
|
Fix potential NULL ptr deref when DFS target list is empty.
Signed-off-by: Paulo Alcantara <[email protected]>
Reviewed-by: Aurelien Aptel <[email protected]>
Signed-off-by: Steve French <[email protected]>
|
|
Start the DFS cache refresh worker per volume during cifs mount.
Signed-off-by: Paulo Alcantara <[email protected]>
Reviewed-by: Aurelien Aptel <[email protected]>
Signed-off-by: Steve French <[email protected]>
|
|
A spinlock is held before kstrndup(); it may sleep while holding
the spinlock, so we should use GFP_ATOMIC instead.
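The pattern being fixed looks roughly like this (the lock and variable names
are illustrative, not the exact cifs code):
        spin_lock(&cache_lock);
        /* GFP_KERNEL may sleep; under a spinlock the allocation must be atomic */
        name = kstrndup(target_name, strlen(target_name), GFP_ATOMIC);
        spin_unlock(&cache_lock);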
Fixes: e58c31d5e387 ("cifs: Add support for failover in cifs_reconnect()")
Signed-off-by: YueHaibing <[email protected]>
Signed-off-by: Steve French <[email protected]>
Reviewed-by: Paulo Alcantara <[email protected]>
|
|
After failing to reconnect to the original target, it will retry any
target available from the DFS cache.
Signed-off-by: Paulo Alcantara <[email protected]>
Reviewed-by: Aurelien Aptel <[email protected]>
Signed-off-by: Steve French <[email protected]>
|
|
This patch adds support for failover when failing to connect in
cifs_mount().
Signed-off-by: Paulo Alcantara <[email protected]>
Reviewed-by: Aurelien Aptel <[email protected]>
Signed-off-by: Steve French <[email protected]>
|
|
Fixes gcc '-Wunused-but-set-variable' warning:
fs/cifs/cifs_dfs_ref.c: In function 'cifs_dfs_do_automount':
fs/cifs/cifs_dfs_ref.c:309:7: warning:
variable 'sep' set but not used [-Wunused-but-set-variable]
It has never been used since its introduction in commit 0f56b277073c ("cifs: Make use
of DFS cache to get new DFS referrals")
Signed-off-by: YueHaibing <[email protected]>
Reviewed-by: Paulo Alcantara <[email protected]>
Signed-off-by: Steve French <[email protected]>
|
|
This patch will make use of DFS cache routines where appropriate and
not always request a new referral from the server.
Signed-off-by: Paulo Alcantara <[email protected]>
Reviewed-by: Aurelien Aptel <[email protected]>
Signed-off-by: Steve French <[email protected]>
|
|
Update cifs "TODO" file.
Signed-off-by: Steve French <[email protected]>
|
|
kzalloc can return NULL so an additional check is needed. While there
is a check for ret_buf, there is no check for the allocation of
ret_buf->crfid.fid - this check is thus added. Both call-sites
of tconInfoAlloc() check for NULL return of tconInfoAlloc()
so returning NULL on failure of kzalloc() here seems appropriate.
As the kzalloc() is the only thing here that can fail, it is
moved to the beginning so as not to initialize other resources
on failure of kzalloc.
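The resulting flow is roughly the following (a simplified sketch of the
description above, not the literal patch):
struct cifs_tcon *tconInfoAlloc(void)
{
        struct cifs_tcon *ret_buf;

        ret_buf = kzalloc(sizeof(*ret_buf), GFP_KERNEL);
        if (!ret_buf)
                return NULL;

        ret_buf->crfid.fid = kzalloc(sizeof(*ret_buf->crfid.fid), GFP_KERNEL);
        if (!ret_buf->crfid.fid) {
                kfree(ret_buf);
                return NULL;
        }

        /* remaining initialization happens only after both allocations succeed */
        return ret_buf;
}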
Fixes: 3d4ef9a15343 ("smb3: fix redundant opens on root")
Signed-off-by: Joe Perches <[email protected]>
Signed-off-by: Steve French <[email protected]>
|
|
Fixes gcc '-Wunused-but-set-variable' warning:
fs/cifs/smb2pdu.c: In function 'smb311_posix_mkdir':
fs/cifs/smb2pdu.c:2040:26: warning:
variable 'server' set but not used [-Wunused-but-set-variable]
fs/cifs/smb2pdu.c: In function 'build_qfs_info_req':
fs/cifs/smb2pdu.c:4067:26: warning:
variable 'server' set but not used [-Wunused-but-set-variable]
The first 'server' has been unused since commit bea851b8babe ("smb3: Fix mode on
mkdir on smb311 mounts"),
and the second since commit 1fc6ad2f10ad ("cifs: remove
header_preamble_size where it is always 0")
Signed-off-by: YueHaibing <[email protected]>
Signed-off-by: Steve French <[email protected]>
|
|
We should zero out the password before we free it.
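The fix amounts to clearing the bytes before the kfree(), e.g. (a generic
sketch with illustrative names, not the exact dfs_cache code):
        if (vol->password) {
                memzero_explicit(vol->password, strlen(vol->password));
                kfree(vol->password);
                vol->password = NULL;
        }
The kernel's kzfree() would achieve much the same in a single call.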
Fixes: 3d6cacbb5310 ("cifs: Add DFS cache routines")
Signed-off-by: Dan Carpenter <[email protected]>
Signed-off-by: Steve French <[email protected]>
Reviewed-by: Paulo Alcantara <[email protected]>
|
|
memory allocated by kmem_cache_alloc() in alloc_cache_entry()
should be freed using kmem_cache_free(), not kfree().
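An illustration of the rule the fix follows (identifier names are
illustrative, not the exact dfs_cache ones):
        entry = kmem_cache_alloc(cache_slab, GFP_KERNEL);
        if (!entry)
                return -ENOMEM;

        /* ... on the error/teardown path ... */
        kmem_cache_free(cache_slab, entry);     /* pairs with kmem_cache_alloc(), not kfree() */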
Fixes: 34a44fb160f9 ("cifs: Add DFS cache routines")
Signed-off-by: Wei Yongjun <[email protected]>
Signed-off-by: Steve French <[email protected]>
Reviewed-by: Aurelien Aptel <[email protected]>
|
|
Fixes cifs build failure after merge of the y2038 tree
After merging the y2038 tree, today's linux-next build (x86_64
allmodconfig) failed like this:
fs/cifs/dfs_cache.c: In function 'cache_entry_expired':
fs/cifs/dfs_cache.c:106:7: error: implicit declaration of function 'current_kernel_time64'; did you mean 'core_kernel_text'? [-Werror=implicit-function-declaration]
ts = current_kernel_time64();
^~~~~~~~~~~~~~~~~~~~~
core_kernel_text
fs/cifs/dfs_cache.c:106:5: error: incompatible types when assigning to type 'struct timespec64' from type 'int'
ts = current_kernel_time64();
^
fs/cifs/dfs_cache.c: In function 'get_expire_time':
fs/cifs/dfs_cache.c:342:24: error: incompatible type for argument 1 of 'timespec64_add'
return timespec64_add(current_kernel_time64(), ts);
^~~~~~~~~~~~~~~~~~~~~~~
In file included from include/linux/restart_block.h:10,
from include/linux/thread_info.h:13,
from arch/x86/include/asm/preempt.h:7,
from include/linux/preempt.h:78,
from include/linux/rcupdate.h:40,
from fs/cifs/dfs_cache.c:8:
include/linux/time64.h:66:66: note: expected 'struct timespec64' but argument is of type 'int'
static inline struct timespec64 timespec64_add(struct timespec64 lhs,
~~~~~~~~~~~~~~~~~~^~~
fs/cifs/dfs_cache.c:343:1: warning: control reaches end of non-void function [-Wreturn-type]
}
^
Caused by:
commit ccea641b6742 ("timekeeping: remove obsolete time accessors")
interacting with:
commit 34a44fb160f9 ("cifs: Add DFS cache routines")
from the cifs tree.
Signed-off-by: Stephen Rothwell <[email protected]>
Reviewed-by: Paulo Alcantara <[email protected]>
Acked-by: Arnd Bergmann <[email protected]>
Signed-off-by: Steve French <[email protected]>
|
|
* Add new dfs_cache.[ch] files
* Add new /proc/fs/cifs/dfscache file
- dump current cache when read
- clear current cache when writing "0" to it
* Add delayed_work to periodically refresh cache entries
The new interface will be used for caching DFS referrals, as well as
supporting client target failover.
The DFS cache is a hashtable that maps UNC paths to cache entries.
A cache entry contains:
- the UNC path it is mapped on
- how much of the UNC path the entry consumes
- flags
- a Time-To-Live after which the entry expires
- a list of possible targets (linked lists of UNC paths)
- a "hint target" pointing the last known working target or the first
target if none were tried. This hint lets cifs.ko remember and try
working targets first.
* Looking for an entry in the cache is done with dfs_cache_find()
- if no valid entries are found, a DFS query is made, stored in the
cache and returned
- the full target list can be copied and returned to avoid race
conditions and looped over with the help of the
dfs_cache_tgt_iterator
* Updating the target hint to the next target is done with
dfs_cache_update_tgthint()
These functions have a dfs_cache_noreq_XXX() version that doesn't
fetch referrals if no entries are found. These versions don't
require the tcp/ses/tcon/cifs_sb parameters as a result.
Expired entries cannot be used, and since they have a pretty short TTL
[1], the DFS cache adds a delayed work that is called periodically to keep
them fresh so they remain useful for failover.
Since we might not have available connections to issue the referral
request when refreshing, we need to store volume_info structs with
credentials and other info needed to be able to connect to the right
server.
[1] Windows defaults: 5 minutes for domain-based referrals, 30 minutes for
regular links
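A hedged usage sketch based on the description above (the exact signatures
may differ slightly from the patch; xid, ses, cifs_sb, unc_path and the
connected flag are assumed to come from the surrounding mount/reconnect
code):
        struct dfs_cache_tgt_list tl;
        struct dfs_cache_tgt_iterator *it;
        int rc;

        rc = dfs_cache_find(xid, ses, cifs_sb->local_nls, cifs_remap(cifs_sb),
                            unc_path, NULL, &tl);
        if (rc)
                return rc;

        for (it = dfs_cache_get_tgt_iterator(&tl); it;
             it = dfs_cache_get_next_tgt(&tl, it)) {
                /* try to connect to dfs_cache_get_tgt_name(it) ... */
                if (connected) {
                        /* remember the working target for subsequent lookups */
                        dfs_cache_noreq_update_tgthint(unc_path, it);
                        break;
                }
        }
        dfs_cache_free_tgts(&tl);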
Signed-off-by: Paulo Alcantara <[email protected]>
Signed-off-by: Aurelien Aptel <[email protected]>
Signed-off-by: Steve French <[email protected]>
|
|
Fix the following warning:
no previous prototype for ‘dbg_sym_flags’ [-Wmissing-prototypes]
Signed-off-by: Masahiro Yamada <[email protected]>
|
|
Currently, images.c is included by qconf.cc and gconf.c.
qconf.cc uses all of xpm_* arrays, but gconf.c only some of them.
Hence, lots of "... defined but not used" warnings are displayed
while compiling gconf.c.
Splitting out images.c fixes the warnings.
Signed-off-by: Masahiro Yamada <[email protected]>
|
|
Add "static" to functions that are locally used in gconf.c
This fixes some "no previous prototype for ..." warnings.
Signed-off-by: Masahiro Yamada <[email protected]>
|
|
Compile zconf.lex.c independently of the other files.
Signed-off-by: Masahiro Yamada <[email protected]>
|
|
I want to compile each C file independently instead of including all
of them from zconf.y.
Split out confdata.c, expr.c, symbol.c, and preprocess.c.
These are low-hanging fruits.
Signed-off-by: Masahiro Yamada <[email protected]>
|
|
All files in lxdialog/ are licensed under GPL-2.0+, and the rest are
under GPL-2.0. I added GPL-2.0 tags to test scripts in tests/.
Documentation/process/license-rules.rst does not suggest anything
about the flex/bison files. Because flex does not accept the C++
comment style at the very top of a file, I used the C style for
zconf.l, and so for zconf.y for consistency.
Signed-off-by: Masahiro Yamada <[email protected]>
|
|
Commit 7a88488bbc23 ("[PATCH] kconfig: use gperf for kconfig keywords")
introduced gperf for the keyword lookup.
Then, commit bb3290d91695 ("Remove gperf usage from toolchain") killed
the gperf use. As a result, the linear keyword search was left behind.
If we do not use gperf, there is no reason to keep a separate table
of keywords. Move all keywords back to the lexer.
I also refactored the lexer to remove the COMMAND and PARAM states.
Signed-off-by: Masahiro Yamada <[email protected]>
|