|
see warnings:
| lib/test_printf.c:157:52: error: format specifies type 'unsigned char'
| but the argument has type 'int' [-Werror,-Wformat]
|         test("0|1|1|128|255",
|              "%hhu|%hhu|%hhu|%hhu|%hhu", 0, 1, 257, 128, -1);
-
| lib/test_printf.c:158:55: error: format specifies type 'char' but the
| argument has type 'int' [-Werror,-Wformat]
|         test("0|1|1|-128|-1",
|              "%hhd|%hhd|%hhd|%hhd|%hhd", 0, 1, 257, 128, -1);
-
| lib/test_printf.c:159:41: error: format specifies type 'unsigned short'
| but the argument has type 'int' [-Werror,-Wformat]
|         test("2015122420151225", "%ho%ho%#ho", 1037, 5282, -11627);
There's an ongoing movement to eventually enable the -Wformat flag for
clang. Previous patches have targeted incorrect usage of
format specifiers. In this case, however, the "incorrect" format
specifiers are intrinsically part of the test cases. Hence, fixing them
would be misaligned with their intended purpose. My proposed fix is to
simply disable the warnings so that one day a clean build of the kernel
with clang (and -Wformat enabled) would be possible. It would also keep
us in the green for a lot of the CI bots.
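[Editorial aside, not part of the original patch: a minimal user-space C
sketch of why the "mismatched" arguments are intentional. Varargs are
promoted to int, and the %hh/%h length modifiers make the implementation
truncate them back down, which is exactly the behaviour the kernel test
exercises in vsnprintf():]
  #include <stdio.h>

  int main(void)
  {
      /* -Wformat flags both calls, yet the truncation is the point: */
      printf("%hhu %hhu\n", 257, -1);   /* typically prints "1 255" */
      printf("%#ho\n", -11627);         /* typically prints "0151225" */
      return 0;
  }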
Link: https://github.com/ClangBuiltLinux/linux/issues/378
Suggested-by: Nathan Chancellor <[email protected]>
Suggested-by: Nick Desaulniers <[email protected]>
Signed-off-by: Justin Stitt <[email protected]>
Reviewed-by: Petr Mladek <[email protected]>
Reviewed-by: Nick Desaulniers <[email protected]>
Signed-off-by: Petr Mladek <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
GCC 12 continues to get smarter about array accesses. The KASAN tests
are expecting to explicitly test out-of-bounds conditions at run-time,
so hide the variable from GCC, to avoid warnings like:
../lib/test_kasan.c: In function 'ksize_uaf':
../lib/test_kasan.c:790:61: warning: array subscript 120 is outside array bounds of 'void[120]' [-Warray-bounds]
790 | KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[size]);
| ~~~~~~~~~~~~~~~~~~~~~~^~~~~~
../lib/test_kasan.c:97:9: note: in definition of macro 'KUNIT_EXPECT_KASAN_FAIL'
97 | expression; \
| ^~~~~~~~~~
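A sketch of the "hide the variable" approach, assuming the kernel's
OPTIMIZER_HIDE_VAR() helper (an empty asm that makes the pointer's value
opaque to the optimizer); the ksize()/kfree() steps of ksize_uaf() are
elided:
  char *ptr = kmalloc(size, GFP_KERNEL);

  KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
  OPTIMIZER_HIDE_VAR(ptr);    /* GCC can no longer track the object's bounds */
  /* ... ksize()/kfree() steps elided ... */
  KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr)[size]);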
Cc: Andrey Ryabinin <[email protected]>
Cc: Alexander Potapenko <[email protected]>
Cc: Andrey Konovalov <[email protected]>
Cc: Dmitry Vyukov <[email protected]>
Cc: Vincenzo Frascino <[email protected]>
Cc: [email protected]
Signed-off-by: Kees Cook <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
No conflicts.
Signed-off-by: Jakub Kicinski <[email protected]>
|
|
libsha1 can be a module, so it needs a MODULE_LICENSE.
Fixes: ec8f7f4821d5 ("crypto: lib - make the sha1 library optional")
Reported-by: Randy Dunlap <[email protected]>
Signed-off-by: Eric Biggers <[email protected]>
Signed-off-by: Herbert Xu <[email protected]>
|
|
Allocate device resources from local-node memory when the NUMA locality of
the device is specified.
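Sketch of the idea (my_release is a placeholder release callback, not a
real API name): pass the device's NUMA node, which is NUMA_NO_NODE when no
locality is specified, to the devres allocator:
  void *data = devres_alloc_node(my_release, size, GFP_KERNEL, dev_to_node(dev));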
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Mark-PK Tsai <[email protected]>
Cc: Matthias Brugger <[email protected]>
Cc: YJ Chiang <[email protected]>
Cc: Hans de Goede <[email protected]>
Cc: Thomas Zimmermann <[email protected]>
Cc: Zhen Lei <[email protected]>
Cc: Jacob Keller <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
Currently instrumentation_end() won't be called if printk_ratelimit()
returns false.
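Illustrative only (a simplified sketch of check_preemption_disabled(), not
the actual diff): one way to guarantee the section is always closed is to
keep the rate-limited printout inside the begin/end pair:
  instrumentation_begin();
  if (printk_ratelimit()) {
      printk(KERN_ERR "BUG: using %s%s() in preemptible code: %s/%d\n",
             what1, what2, current->comm, current->pid);
      dump_stack();
  }
  instrumentation_end();    /* reached even when the printk is rate-limited */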
Link: https://lkml.kernel.org/r/[email protected]
Fixes: 126f21f0e8d46e2c ("lib/smp_processor_id: Move it into noinstr section")
Signed-off-by: Tetsuo Handa <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Alexandre Chartre <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
Add a basic suite of tests for cpumask, providing some tests for empty and
completely filled cpumasks.
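A hedged sketch of what such a test looks like (the function name is
illustrative; KUnit assertions and cpumask helpers as usual):
  static void test_cpumask_empty_and_full(struct kunit *test)
  {
      cpumask_t mask;

      cpumask_clear(&mask);
      KUNIT_EXPECT_TRUE(test, cpumask_empty(&mask));
      KUNIT_EXPECT_FALSE(test, cpumask_full(&mask));

      cpumask_setall(&mask);
      KUNIT_EXPECT_TRUE(test, cpumask_full(&mask));
      KUNIT_EXPECT_FALSE(test, cpumask_empty(&mask));
  }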
Link: https://lkml.kernel.org/r/c96980ec35c3bd23f17c3374bf42c22971545e85.1656777646.git.sander@svanheule.net
Signed-off-by: Sander Vanheule <[email protected]>
Reviewed-by: Andy Shevchenko <[email protected]>
Suggested-by: Yury Norov <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Marco Elver <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Valentin Schneider <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
On uniprocessor builds, any CPU mask is assumed to contain exactly one CPU
(cpu0). This assumption ignores the existence of empty masks, resulting
in incorrect behaviour.
cpumask_first_zero(), cpumask_next_zero(), and for_each_cpu_not() don't
provide behaviour matching the assumption that a UP mask is always "1",
and instead provide behaviour matching the empty mask.
Drop the incorrectly optimised code and use the generic implementations in
all cases.
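For contrast, an illustrative sketch: the dropped UP shortcut simply
returned 0 (cpu0) even for an empty mask, whereas the generic header
implementation, now used on UP builds as well, looks roughly like this:
  static inline unsigned int cpumask_first(const struct cpumask *srcp)
  {
      /* returns a value >= nr_cpu_ids when the mask is empty */
      return find_first_bit(cpumask_bits(srcp), nr_cpumask_bits);
  }
An empty mask thus yields "no CPU", which for_each_cpu() and friends
already handle correctly.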
Link: https://lkml.kernel.org/r/86bf3f005abba2d92120ddd0809235cab4f759a6.1656777646.git.sander@svanheule.net
Signed-off-by: Sander Vanheule <[email protected]>
Suggested-by: Yury Norov <[email protected]>
Cc: Andy Shevchenko <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Marco Elver <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Valentin Schneider <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
There is no need to store the result of the addition back into the
variable 'consumed' after the addition. The store is redundant, so replace
+= with just +.
Cleans up clang scan build warning: lib/ts_bm.c:83:11: warning: Although
the value stored to 'consumed' is used in the enclosing expression, the
value is never actually read from 'consumed' [deadcode.DeadStores]
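In the abstract (not the exact lib/ts_bm.c expression), the pattern the
analyzer flags and the cleanup:
  static unsigned int before(unsigned int consumed, unsigned int shift)
  {
      return consumed += shift;   /* dead store: 'consumed' is never read again */
  }

  static unsigned int after(unsigned int consumed, unsigned int shift)
  {
      return consumed + shift;    /* same value, no redundant store */
  }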
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Colin Ian King <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
commit 4635873c561a ("scsi: lib/sg_pool.c: improve APIs for allocating sg
pool") changed @(bool)skip_first_chunk of __sg_free_table() to @(unsigned
int)nents_first_chunk, so use unsigned int type instead of bool type
(false -> 0) when calling the function in sg_free_append_table() and
sg_free_table().
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: wuchi <[email protected]>
Reviewed-by: Ming Lei <[email protected]>
Cc: Maor Gottlieb <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
LZ4_decompress_safe_forceExtDict() is only used in
lib/lz4/lz4_decompress.c, make it static to fix the build warning about
"no previous prototype" [1].
[1] https://lore.kernel.org/lkml/[email protected]/
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Tiezhu Yang <[email protected]>
Reported-by: kernel test robot <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
insert_entries() doesn't use the 'bool replace' argument, and the function
is only used locally, remove the argument.
The historical context of the unused argument is as follows:
2: commit <3a08cd52c37c79> (radix tree: Remove multiorder support)
   Removed the code related to the CONFIG_RADIX_TREE_MULTIORDER macro as
   part of the conversion to the XArray. Without the macro, there is no
   need to retain the argument.
1: commit <175542f575723e> (radix-tree: add radix_tree_join)
   Added the insert_entries(..., bool replace) function; depending on
   whether CONFIG_RADIX_TREE_MULTIORDER is defined, the implementation
   differs. Notice that the implementation without the macro doesn't use
   the argument.
[Matthew Wilcox: add historical context for argument]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: wuchi <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
Traversing the list without the mutex in get_injectable_error_type() races
with the following code in module_unload_ei_list():
list_del_init(&ent->list)
kfree(ent)
Fix that by taking the mutex around the traversal, as sketched below.
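A sketch of the locked traversal (list, mutex and struct field names
follow lib/error-inject.c; the exact return convention is illustrative):
  int get_injectable_error_type(unsigned long addr)
  {
      struct ei_entry *ent;
      int ei_type = EI_ETYPE_NONE;

      mutex_lock(&ei_mutex);
      list_for_each_entry(ent, &error_injection_list, list) {
          if (addr >= ent->start_addr && addr < ent->end_addr) {
              ei_type = ent->etype;
              break;
          }
      }
      mutex_unlock(&ei_mutex);

      return ei_type;
  }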
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: wuchi <[email protected]>
Cc: Masami Hiramatsu (Google) <[email protected]>
Cc: Martin KaFai Lau <[email protected]>
Cc: Song Liu <[email protected]>
Cc: Yonghong Song <[email protected]>
Cc: John Fastabend <[email protected]>
Cc: KP Singh <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
As Linus explained [1], setting the stackdepot hash table size as a config
option is suboptimal, especially as stackdepot becomes a dependency of
less "expert" subsystems than initially (e.g. DRM, networking,
SLUB_DEBUG):
: (a) it introduces a new compile-time question that isn't sane to ask
: a regular user, but is now exposed to regular users.
: (b) this by default uses 1MB of memory for a feature that didn't in
: the past, so now if you have small machines you need to make sure you
: make a special kernel config for them.
Ideally we would employ rhashtable for fully automatic resizing, which
should be feasible for many of the new users, but problematic for the
original users with restricted context that call __stack_depot_save() with
can_alloc == false, i.e. KASAN.
However we can easily remove the config option and scale the hash table
automatically with system memory. The STACK_HASH_MASK constant becomes
stack_hash_mask variable and is used only in one mask operation, so the
overhead should be negligible to none. For early allocation we can employ
the existing alloc_large_system_hash() function and perform similar
scaling for the late allocation.
The existing limits of the config option (between 4k and 1M buckets) are
preserved, and the scaling factor is set to one bucket per 16kB of memory,
so on 64-bit the maximum of 1M buckets (8MB of table memory) is reached
with a 16GB system, while a 1GB system will use 512kB.
Because KASAN is reported to need the maximum number of buckets even with
smaller amounts of memory [2], set it as such when kasan_enabled().
If needed, the automatic scaling could be complemented with a boot-time
kernel parameter, but it feels pointless to add it without a specific use
case.
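A sketch of the early-allocation path described above (argument values are
illustrative; a scale of 14, i.e. 2^14 bytes per bucket, corresponds to
the one-bucket-per-16kB factor, and when kasan_enabled() the patch asks
for the maximum directly instead):
  stack_table = alloc_large_system_hash("stackdepot",
                                        sizeof(struct stack_record *),
                                        0,            /* derive from system memory */
                                        14,           /* log2(16kB) per bucket */
                                        HASH_EARLY,
                                        NULL,
                                        &stack_hash_mask,
                                        4 * 1024,     /* former config minimum */
                                        1024 * 1024); /* former config maximum */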
[1] https://lore.kernel.org/all/CAHk-=wjC5nS+fnf6EzRD9yQRJApAhxx7gRB87ZV+pAWo9oVrTg@mail.gmail.com/
[2] https://lore.kernel.org/all/CACT4Y+Y4GZfXOru2z5tFPzFdaSUd+GFc6KVL=bsa0+1m197cQQ@mail.gmail.com/
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Vlastimil Babka <[email protected]>
Reported-by: Linus Torvalds <[email protected]>
Acked-by: Dmitry Vyukov <[email protected]>
Cc: Marco Elver <[email protected]>
Cc: Alexander Potapenko <[email protected]>
Cc: Andrey Konovalov <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
When kmem_cache_alloc() in lc_create() returns NULL, we free the memory
already allocated. The kmem_cache_free() loop is wrong, though:
  i = 0  ==>  the loop still runs (the unsigned index wraps)
  i > 0  ==>  element[0] is never freed
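Illustratively (not the exact lib/lru_cache.c code), the broken loop
versus a corrected one, where i elements were allocated before the
failure:
  /* broken: with i == 0 the unsigned index wraps and the loop runs,
   * and with i > 0 element[0] is never freed */
  for (i--; i; i--)
      kmem_cache_free(cache, element[i]);

  /* fixed: frees element[i-1] .. element[0], does nothing for i == 0 */
  while (i)
      kmem_cache_free(cache, element[--i]);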
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: wuchi <[email protected]>
Cc: Philipp Reisner <[email protected]>
Cc: Lars Ellenberg <[email protected]>
Cc: Christoph Böhmwalder <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
In a recent change to the Arm architecture with the end goal of removing
highmem we need to convert virt_to_phys() and virt_to_pfn() to static
inline functions.
This will make them strongly typed.
However, since virt_to_*() has always been implemented as macros, they
have become polymorphic and accept both (void *) and e.g. unsigned long as
arguments.
Other functions such as virt_to_page() simply wrap virt_to_pfn() and get
affected indirectly.
To be able to proceed, patch mm to use (void *) as argument to affected
functions in all instances.
This patch (of 5):
A pointer into virtual memory is represented by a (void *), not a u32, so
the compiler warns:
lib/test_free_pages.c:20:50: warning: passing argument 1
of 'virt_to_pfn' makes pointer from integer without a
cast [-Wint-conversion]
Fix this with an explicit cast.
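The warning and its fix, sketched (the test keeps the allocation address
in an unsigned long):
  unsigned long addr = __get_free_pages(GFP_KERNEL, 0);
  struct page *page;

  page = virt_to_page(addr);          /* warns once virt_to_pfn() takes a (void *) */
  page = virt_to_page((void *)addr);  /* the explicit cast added by this patch */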
Link: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Linus Walleij <[email protected]>
Cc: Alexander Potapenko <[email protected]>
Cc: Dmitry Vyukov <[email protected]>
Cc: Jason Gunthorpe <[email protected]>
Cc: Marco Elver <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
Device coherent type uses device memory that is coherently accessible by
the CPU. This can show up as an SP (special purpose) memory range in the
BIOS-e820 memory enumeration. If the system has no SP memory, it can be
faked by setting CONFIG_EFI_FAKE_MEMMAP.
Currently, test_hmm only supports two different SP ranges of at least
256MB in size, specified via the kernel parameter efi_fake_mem. For
example, two 1GB SP ranges starting at physical addresses 0x100000000 and
0x140000000:
efi_fake_mem=1G@0x100000000:0x40000,1G@0x140000000:0x40000
Private and coherent device mirror instances can be created in the same
probe. This is done by passing the module parameters spm_addr_dev0 &
spm_addr_dev1. In this case, four instances of device_mirror are created.
The first two correspond to the private device type, the last two to the
coherent type. They can then be accessed from user space through
/dev/hmm_mirror<num_device>. Usually num_device 0 and 1 are for private,
and 2 and 3 for coherent types. If no module parameters are passed, only
two instances of the private device_mirror type are created.
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Alex Sierra <[email protected]>
Acked-by: Felix Kuehling <[email protected]>
Reviewed-by: Alistair Popple <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: Jason Gunthorpe <[email protected]>
Cc: Jerome Glisse <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Cc: Ralph Campbell <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
In order to configure device coherent in test_hmm, two module parameters
should be passed, which correspond to the SP start addresses of the two
devices: spm_addr_dev0 & spm_addr_dev1. If no parameters are passed, the
private device type is configured.
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Alex Sierra <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
Acked-by: Felix Kuehling <[email protected]>
Reviewed-by: Alistair Popple <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: Jason Gunthorpe <[email protected]>
Cc: Jerome Glisse <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Cc: Ralph Campbell <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
Add new ioctl cmd to query zone device type. This will be used once the
test_hmm adds zone device coherent type.
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Alex Sierra <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
Acked-by: Felix Kuehling <[email protected]>
Reviewed-by: Alistair Popple <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: Jason Gunthorpe <[email protected]>
Cc: Jerome Glisse <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Cc: Ralph Campbell <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
After moving gfp flags to a separate header, it's possible to move some
cpumask allocators into headers, and avoid creating real functions.
Signed-off-by: Yury Norov <[email protected]>
|
|
To avoid circular dependencies, cpumask keeps simple (almost) one-line
wrappers around find_bit() in a c-file.
Commit 47d8c15615c0a2 ("include: move find.h from asm_generic to linux")
moved find.h header out of asm_generic include path, and it helped to fix
many circular dependencies, including some in cpumask.h.
This patch moves those one-liners to header files.
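As an illustration (a sketch based on the header as it looks after such a
move, not a quote of the patch), the kind of one-line wrapper involved:
  static inline unsigned int cpumask_next(int n, const struct cpumask *srcp)
  {
      /* -1 is a legal argument here: it means "start from the beginning". */
      if (n != -1)
          cpumask_check(n);
      return find_next_bit(cpumask_bits(srcp), nr_cpumask_bits, n + 1);
  }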
Signed-off-by: Yury Norov <[email protected]>
|
|
Switch return types to unsigned int where return values cannot be negative.
Signed-off-by: Yury Norov <[email protected]>
|
|
bitmap_weight() doesn't return negative values, so change its return type
to unsigned long. It may help the compiler generate better code and catch
bugs.
Signed-off-by: Yury Norov <[email protected]>
|
|
Since the Linux RNG no longer uses sha1_transform(), the SHA-1 library
is no longer needed unconditionally. Make it possible to build the
Linux kernel without the SHA-1 library by putting it behind a kconfig
option, and selecting this new option from the kconfig options that gate
the remaining users: CRYPTO_SHA1 for crypto/sha1_generic.c, BPF for
kernel/bpf/core.c, and IPV6 for net/ipv6/addrconf.c.
Unfortunately, since BPF is selected by NET, for now this can only make
a difference for kernels built without networking support.
Signed-off-by: Eric Biggers <[email protected]>
Reviewed-by: Jason A. Donenfeld <[email protected]>
Acked-by: Jakub Kicinski <[email protected]>
Acked-by: Alexei Starovoitov <[email protected]>
Signed-off-by: Herbert Xu <[email protected]>
|
|
SHA-1 is a crypto algorithm (or at least was intended to be -- it's not
considered secure anymore), so move it out of the top-level library
directory and into lib/crypto/.
Signed-off-by: Eric Biggers <[email protected]>
Reviewed-by: Jason A. Donenfeld <[email protected]>
Signed-off-by: Herbert Xu <[email protected]>
|
|
Building with UBSAN_DIV_ZERO with clang produces numerous fallthrough
warnings from objtool.
In the case of unchecked division, UBSAN_DIV_ZERO may introduce new
control flow to check for division by zero. Because the result of the
division is undefined, LLVM may optimize the control flow such that the
code after the call to __ubsan_handle_divrem_overflow() doesn't matter;
if panic_on_warn was set, __ubsan_handle_divrem_overflow() would panic.
The problem is that panic_on_warn is runtime-configurable. If it's
disabled, then we cannot guarantee that we will be able to recover
safely. Disable this config for clang until we can come up with a
solution in LLVM.
Link: https://github.com/ClangBuiltLinux/linux/issues/1657
Link: https://github.com/llvm/llvm-project/issues/56289
Link: https://lore.kernel.org/lkml/CAHk-=wj1qhf7y3VNACEexyp5EbkNpdcu_542k-xZpzmYLOjiCg@mail.gmail.com/
Reported-by: Sudip Mukherjee <[email protected]>
Suggested-by: Linus Torvalds <[email protected]>
Signed-off-by: Nick Desaulniers <[email protected]>
Acked-by: Nathan Chancellor <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
include/net/sock.h
310731e2f161 ("net: Fix data-races around sysctl_mem.")
e70f3c701276 ("Revert "net: set SK_MEM_QUANTUM to 4096"")
https://lore.kernel.org/all/[email protected]/
net/ipv4/fib_semantics.c
747c14307214 ("ip: fix dflt addr selection for connected nexthop")
d62607c3fe45 ("net: rename reference+tracking helpers")
net/tls/tls.h
include/net/tls.h
3d8c51b25a23 ("net/tls: Check for errors in tls_device_init")
587903142308 ("tls: create an internal header")
Signed-off-by: Jakub Kicinski <[email protected]>
|
|
Some bitmap functions return boolean results in int variables. Fix it
by changing return types to bool.
Signed-off-by: Yury Norov <[email protected]>
|
|
It's possible for the memory allocation for 'filtered' to fail while the
copy of the suite succeeds. In this case, the copy could be leaked.
Properly free 'copy' in the error case where the allocation of 'filtered'
fails, as sketched below.
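A sketch of the error path as described (the variable names 'copy' and
'filtered' follow the commit text; surrounding details such as the element
count 'n' are assumed):
  copy = kmemdup(suite, sizeof(*copy), GFP_KERNEL);
  if (!copy)
      return ERR_PTR(-ENOMEM);

  filtered = kcalloc(n + 1, sizeof(*filtered), GFP_KERNEL);
  if (!filtered) {
      kfree(copy);    /* previously leaked here */
      return ERR_PTR(-ENOMEM);
  }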
Note that there may also have been a similar issue in
kunit_filter_subsuites, before it was removed in "kunit: flatten
kunit_suite*** to kunit_suite** in .kunit_test_suites".
This was reported by clang-analyzer via the kernel test robot, here:
https://lore.kernel.org/all/[email protected]/
And by smatch via Dan Carpenter and the kernel test robot:
https://lore.kernel.org/all/[email protected]/
Fixes: a02353f49162 ("kunit: bail out of test filtering logic quicker if OOM")
Reported-by: kernel test robot <[email protected]>
Reported-by: kernel test robot <[email protected]>
Reported-by: Dan Carpenter <[email protected]>
Reviewed-by: Daniel Latypov <[email protected]>
Reviewed-by: Brendan Higgins <[email protected]>
Signed-off-by: David Gow <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
|
|
Currently, test_bitmap_arr64() only tests bitmap_to_arr64()'s sanity
by comparing the result of double-conversion (bm -> arr64 -> bm2)
with the input bitmap. However, this may not be enough when one side
hides bugs in the other (e.g. tail clearing, which is performed by
both).
Expand the tests and check the tail of the actual arr64 used as
a temporary buffer for double-converting.
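A sketch of the added check (arr/bm/nbits as in the commit text): after
converting, no bit above nbits-1 may remain set in the last u64 word of
the temporary array:
  bitmap_to_arr64(arr, bm, nbits);

  if ((nbits % 64) &&
      (arr[(nbits - 1) / 64] & ~GENMASK_ULL((nbits - 1) % 64, 0)))
      pr_err("bitmap_to_arr64(nbits == %d): tail is not safely cleared\n",
             nbits);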
Signed-off-by: Alexander Lobakin <[email protected]>
Reviewed-by: Andy Shevchenko <[email protected]>
Signed-off-by: Yury Norov <[email protected]>
|
|
The GENMASK*() family takes the first and the last bits of the mask,
*inclusive*. So, with the current code, bitmap_to_arr64() doesn't clear
the tail properly:
  nbits   expression        mask                  must be
  1       GENMASK(1, 0)     0x3                   0x1
  ...
  63      GENMASK(63, 0)    0xffffffffffffffff    0x7fffffffffffffff
This was found by making the function always available instead of
32-bit BE systems only (for reuse in some new functionality).
Turn the number of bits into the index of the last bit by subtracting 1.
@nbits is already checked to be positive beforehand.
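The shape of the fix, sketched (the function packs two 32-bit longs per
u64 word on 32-bit hosts; only the tail masking changes):
  void bitmap_to_arr64(u64 *buf, const unsigned long *bitmap, unsigned int nbits)
  {
      const unsigned long *end = bitmap + BITS_TO_LONGS(nbits);

      while (bitmap < end) {
          *buf = *bitmap++;
          if (bitmap < end)
              *buf |= (u64)(*bitmap++) << 32;
          buf++;
      }

      /* Clear tail bits in the last element of the array beyond nbits;
       * before the fix this used GENMASK_ULL(nbits % 64, 0), which keeps
       * one bit too many. */
      if (nbits % 64)
          buf[-1] &= GENMASK_ULL((nbits - 1) % 64, 0);
  }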
Fixes: 0a97953fd221 ("lib: add bitmap_{from,to}_arr64")
Signed-off-by: Alexander Lobakin <[email protected]>
Reviewed-by: Andy Shevchenko <[email protected]>
Signed-off-by: Yury Norov <[email protected]>
|
|
We currently store kunit suites in the .kunit_test_suites ELF section as
a `struct kunit_suite***` (modulo some `const`s).
For every test file, we store a NULL-terminated array of struct
kunit_suite pointers (struct kunit_suite **).
This adds quite a bit of complexity to the test filtering code in the
executor.
Instead, let's just make the .kunit_test_suites section contain a single
giant array of struct kunit_suite pointers, which can then be directly
manipulated. This array is not NULL-terminated, and so none of the test
filtering code needs to NULL-terminate anything.
Tested-by: Maíra Canal <[email protected]>
Reviewed-by: Brendan Higgins <[email protected]>
Signed-off-by: Daniel Latypov <[email protected]>
Co-developed-by: David Gow <[email protected]>
Signed-off-by: David Gow <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
|
|
Currently, KUnit runs built-in tests and tests loaded from modules
differently. For built-in tests, the kunit_test_suite{,s}() macro adds a
list of suites in the .kunit_test_suites linker section. However, for
kernel modules, a module_init() function is used to run the test suites.
This causes problems if tests are included in a module which already
defines its own module_init()/module_exit() functions, as they'll
conflict with the kunit-provided ones.
This change removes the kunit-defined module inits, and instead parses
the kunit tests from their own section in the module. After module init,
we call __kunit_test_suites_init() on the contents of that section,
which prepares and runs the suite.
This essentially unifies the module- and non-module kunit init formats.
Tested-by: Maíra Canal <[email protected]>
Reviewed-by: Brendan Higgins <[email protected]>
Signed-off-by: Jeremy Kerr <[email protected]>
Signed-off-by: Daniel Latypov <[email protected]>
Signed-off-by: David Gow <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
|
|
Before each invocation of vsnprintf(), do_test() memsets the entire
allocated buffer to a sentinel value. That buffer includes leading and
trailing padding which is never included in the buffer area handed to
vsnprintf (spaces merely for clarity):
  pad  test_buffer      pad
  **** **************** ****
Then vsnprintf() is invoked with a bufsize argument <= BUF_SIZE. Suppose
bufsize=10, then we'd have e.g.
 |pad |   test_buffer    |pad |
  **** pizza0 **** ****** ****
  A    B      C    D      E
where vsnprintf() was given the area from B to D.
It is obviously a bug for vsnprintf to touch anything between A and B
or between D and E. The former is checked for as one would expect. But
for the latter, we are actually a little stricter in that we check the
area between C and E.
Split that check in two, providing a clearer error message in case it
was a genuine buffer overrun and not merely a write within the
provided buffer, but after the end of the generated string.
So far, no part of the vsnprintf() implementation has had any use for
using the whole buffer as scratch space, but it's not unreasonable to
allow that, as long as the result is properly nul-terminated and the
return value is the right one. However, it is somewhat unusual, and
most %<something> won't need this, so keep the [C,D] check, but make
it easy for a later patch to make that part opt-out for certain tests.
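A sketch of the split check (FILL_CHAR / PAD_SIZE / BUF_SIZE are the
sentinel, padding size and buffer size used by lib/test_printf.c; the
truncation handling is simplified):
  written = vsnprintf(test_buffer, bufsize, fmt, ap);

  /* [C, D]: inside the buffer given to vsnprintf(), past the terminating nul */
  if (written < bufsize &&
      memchr_inv(test_buffer + written + 1, FILL_CHAR, bufsize - (written + 1)))
      pr_warn("vsnprintf(buf, %d, \"%s\", ...) wrote beyond the nul terminator\n",
              bufsize, fmt);

  /* [D, E]: outside the buffer given to vsnprintf() -- a genuine overrun */
  if (memchr_inv(test_buffer + bufsize, FILL_CHAR, BUF_SIZE - bufsize + PAD_SIZE))
      pr_warn("vsnprintf(buf, %d, \"%s\", ...) wrote beyond the given buffer\n",
              bufsize, fmt);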
Signed-off-by: Rasmus Villemoes <[email protected]>
Tested-by: Jia He <[email protected]>
Signed-off-by: Jia He <[email protected]>
Reviewed-by: Petr Mladek <[email protected]>
Signed-off-by: Petr Mladek <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
This is another old BUG_ON() that just shouldn't exist (see also commit
a382f8fee42c: "signal handling: don't use BUG_ON() for debugging").
In fact, as Matthew Wilcox points out, this condition shouldn't really
even result in a warning, since a negative id allocation result is just
a normal allocation failure:
"I wonder if we should even warn here -- sure, the caller is trying to
free something that wasn't allocated, but we don't warn for
kfree(NULL)"
and goes on to point out how that current error check is only causing
people to unnecessarily do their own index range checking before freeing
it.
This was noted by Itay Iellin, because the bluetooth HCI socket cookie
code does *not* do that range checking, and ends up just freeing the
error case too, triggering the BUG_ON().
The HCI code requires CAP_NET_RAW, and seems to just result in an ugly
splat, but there really is no reason to BUG_ON() here, and we have
generally striven for allocation models where it's always ok to just do
free(alloc());
even if the allocation were to fail for some random reason (usually
obviously that "random" reason being some resource limit).
Fixes: 88eca0207cf1 ("ida: simplified functions for id allocation")
Reported-by: Itay Iellin <[email protected]>
Suggested-by: Matthew Wilcox <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Daniel Borkmann says:
====================
pull-request: bpf-next 2022-07-09
We've added 94 non-merge commits during the last 19 day(s) which contain
a total of 125 files changed, 5141 insertions(+), 6701 deletions(-).
The main changes are:
1) Add new way for performing BTF type queries to BPF, from Daniel Müller.
2) Add inlining of calls to bpf_loop() helper when its function callback is
statically known, from Eduard Zingerman.
3) Implement BPF TCP CC framework usability improvements, from Jörn-Thorben Hinz.
4) Add LSM flavor for attaching per-cgroup BPF programs to existing LSM
hooks, from Stanislav Fomichev.
5) Remove all deprecated libbpf APIs in prep for 1.0 release, from Andrii Nakryiko.
6) Add benchmarks around local_storage to BPF selftests, from Dave Marchevsky.
7) AF_XDP sample removal (given move to libxdp) and various improvements around AF_XDP
selftests, from Magnus Karlsson & Maciej Fijalkowski.
8) Add bpftool improvements for memcg probing and bash completion, from Quentin Monnet.
9) Add arm64 JIT support for BPF-2-BPF coupled with tail calls, from Jakub Sitnicki.
10) Sockmap optimizations around throughput of UDP transmissions which have been
improved by 61%, from Cong Wang.
11) Rework perf's BPF prologue code to remove deprecated functions, from Jiri Olsa.
12) Fix sockmap teardown path to avoid sleepable sk_psock_stop, from John Fastabend.
13) Fix libbpf's cleanup around legacy kprobe/uprobe on error case, from Chuang Wang.
14) Fix libbpf's bpf_helpers.h to work with gcc for the case of its sec/pragma
macro, from James Hilliard.
15) Fix libbpf's pt_regs macros for riscv to use a0 for RC register, from Yixun Lan.
16) Fix bpftool to show the name of type BPF_OBJ_LINK, from Yafang Shao.
* https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next: (94 commits)
selftests/bpf: Fix xdp_synproxy build failure if CONFIG_NF_CONNTRACK=m/n
bpf: Correctly propagate errors up from bpf_core_composites_match
libbpf: Disable SEC pragma macro on GCC
bpf: Check attach_func_proto more carefully in check_return_code
selftests/bpf: Add test involving restrict type qualifier
bpftool: Add support for KIND_RESTRICT to gen min_core_btf command
MAINTAINERS: Add entry for AF_XDP selftests files
selftests, xsk: Rename AF_XDP testing app
bpf, docs: Remove deprecated xsk libbpf APIs description
selftests/bpf: Add benchmark for local_storage RCU Tasks Trace usage
libbpf, riscv: Use a0 for RC register
libbpf: Remove unnecessary usdt_rel_ip assignments
selftests/bpf: Fix few more compiler warnings
selftests/bpf: Fix bogus uninitialized variable warning
bpftool: Remove zlib feature test from Makefile
libbpf: Cleanup the legacy uprobe_event on failed add/attach_event()
libbpf: Fix wrong variable used in perf_event_uprobe_open_legacy()
libbpf: Cleanup the legacy kprobe_event on failed add/attach_event()
selftests/bpf: Add type match test against kernel's task_struct
selftests/bpf: Add nested type to type based tests
...
====================
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Jakub Kicinski <[email protected]>
|
|
kmemdup() is easier than kmalloc() + memcpy(), per lkp bot.
Also make the input `suite` const since we're now always making
copies after commit a127b154a8f2 ("kunit: tool: allow filtering test
cases via glob").
Reported-by: kernel test robot <[email protected]>
Signed-off-by: Daniel Latypov <[email protected]>
Reviewed-by: David Gow <[email protected]>
Reviewed-by: Brendan Higgins <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
|
|
Changeset a8e35fece49b ("objtool: Update documentation")
renamed: tools/objtool/Documentation/stack-validation.txt
to: tools/objtool/Documentation/objtool.txt.
Update the cross-references accordingly.
Fixes: a8e35fece49b ("objtool: Update documentation")
Signed-off-by: Mauro Carvalho Chehab <[email protected]>
Link: https://lore.kernel.org/r/ec285ece6348a5be191aebe45f78d06b3319056b.1656234456.git.mchehab@kernel.org
Signed-off-by: Jonathan Corbet <[email protected]>
|
|
... and calculate the offset in the caller
Reviewed-by: Jeff Layton <[email protected]>
Signed-off-by: Al Viro <[email protected]>
|
|
Pass maxsize by reference, return length via the same.
Signed-off-by: Al Viro <[email protected]>
|
|
We return length + offset in page via *size. Don't bother - the caller
can do that arithmetics just as well; just report the length to it.
Signed-off-by: Al Viro <[email protected]>
|
|
caller can do that just as easily
Signed-off-by: Al Viro <[email protected]>
|
|
All callers can and should handle iov_iter_get_pages() returning
fewer pages than requested. All in-kernel ones do. And it makes
the arithmetical overflow analysis much simpler...
Reviewed-by: Jeff Layton <[email protected]>
Signed-off-by: Al Viro <[email protected]>
|
|
do what we do for iovec/kvec; that ends up generating better code,
AFAICS.
Reviewed-by: Jeff Layton <[email protected]>
Reviewed-by: Christian Brauner (Microsoft) <[email protected]>
Signed-off-by: Al Viro <[email protected]>
|
|
The get_random_bytes() function can cause high contention if it is called
across CPUs simultaneously, because it shares one lock for all CPUs:
<snip>
class name con-bounces contentions waittime-min waittime-max waittime-total waittime-avg acq-bounces acquisitions holdtime-min holdtime-max holdtime-total holdtime-avg
&crng->lock: 663145 665886 0.05 8.85 261966.66 0.39 7188152 13731279 0.04 11.89 2181582.30 0.16
-----------
&crng->lock 307835 [<00000000acba59cd>] _extract_crng+0x48/0x90
&crng->lock 358051 [<00000000f0075abc>] _crng_backtrack_protect+0x32/0x90
-----------
&crng->lock 234241 [<00000000f0075abc>] _crng_backtrack_protect+0x32/0x90
&crng->lock 431645 [<00000000acba59cd>] _extract_crng+0x48/0x90
<snip>
Switch from get_random_bytes() to prandom_u32(), which has no internal
contention, when a random value is needed for the tests. The reason is to
minimize the CPU cycles introduced by the test suite itself in the vmalloc
performance metrics.
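The substitution, sketched:
  unsigned int rnd;

  /* before: every call serialises on the global crng lock */
  get_random_bytes(&rnd, sizeof(rnd));

  /* after: lockless per-CPU PRNG, sufficient for picking test indices */
  rnd = prandom_u32();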
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Uladzislau Rezki (Sony) <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Cc: Nicholas Piggin <[email protected]>
Cc: Oleksiy Avramchenko <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
This commit introduces the /sys/kernel/debug/shrinker debugfs interface
which provides an ability to observe the state of individual kernel memory
shrinkers.
Because the feature adds some memory overhead (which shouldn't be large
unless there is a huge amount of registered shrinkers), it's guarded by a
config option (enabled by default).
This commit introduces the "count" interface for each shrinker registered
in the system.
The output is in the following format:
<cgroup inode id> <nr of objects on node 0> <nr of objects on node 1>...
<cgroup inode id> <nr of objects on node 0> <nr of objects on node 1>...
...
To reduce the size of output on machines with many thousands of cgroups, if
the total number of objects on all nodes is 0, the line is omitted.
If the shrinker is not memcg-aware or CONFIG_MEMCG is off, 0 is printed as
cgroup inode id. If the shrinker is not numa-aware, 0's are printed for
all nodes except the first one.
This commit gives debugfs entries simple numeric names, which are not very
convenient. The following commit in the series will provide shrinkers
with more meaningful names.
[[email protected]: remove WARN_ON_ONCE(), per Roman]
Reported-by: [email protected]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Roman Gushchin <[email protected]>
Reviewed-by: Kent Overstreet <[email protected]>
Acked-by: Muchun Song <[email protected]>
Cc: Christophe JAILLET <[email protected]>
Cc: Dave Chinner <[email protected]>
Cc: Hillf Danton <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
Looking at the conditional lock acquire functions in the kernel due to
the new sparse support (see commit 4a557a5d1a61 "sparse: introduce
conditional lock acquire function attribute"), it became obvious that
the lockref code has a couple of them, but they don't match the usual
naming convention for the other ones, and their return value logic is
also reversed.
In the other very similar places, the naming pattern is '*_and_lock()'
(eg 'atomic_dec_and_lock()' and 'refcount_dec_and_lock()'), and the
function returns true when the lock is taken.
The lockref code is superficially very similar to the refcount code,
only with the special "atomic wrt the embedded lock" semantics. But
instead of the '*_and_lock()' naming it uses '*_or_lock()'.
And instead of returning true in case it took the lock, it returns true
if it *didn't* take the lock.
Now, arguably the lockref code is quite logical: it really is an "either
decrement _or_ lock" kind of situation - and the return value is about
whether the operation succeeded without any special care needed.
So despite the similarities, the differences do make some sense, and
maybe it's not worth trying to unify the different conditional locking
primitives in this area.
But while looking at this all, it did become obvious that the
'lockref_get_or_lock()' function hasn't actually had any users for
almost a decade.
The only user it ever had was the shortlived 'd_rcu_to_refcount()'
function, and it got removed and replaced with 'lockref_get_not_dead()'
back in 2013 in commits 0d98439ea3c6 ("vfs: use lockred 'dead' flag to
mark unrecoverably dead dentries") and e5c832d55588 ("vfs: fix dentry
RCU to refcounting possibly sleeping dput()")
In fact, that single use was removed less than a week after the whole
function was introduced in commit b3abd80250c1 ("lockref: add
'lockref_get_or_lock() helper") so this function has been around for a
decade, but only had a user for six days.
Let's just put this mis-designed and unused function out of its misery.
We can think about the naming and semantic oddities of the remaining
'lockref_put_or_lock()' later, but at least that function has users.
And while the naming is different and the return value doesn't match,
that function matches the whole '{atomic,refcount}_dec_and_test()'
pattern much better (ie the magic happens when the count goes down to
zero, not when it is incremented from zero).
Signed-off-by: Linus Torvalds <[email protected]>
|
|
The 64-bit overflow tests will trigger 64-bit division on 32-bit hosts,
which is not otherwise used anywhere in the kernel and tickles bugs
in at least Clang 13 and earlier:
https://github.com/ClangBuiltLinux/linux/issues/1636
In reality, there shouldn't be a reason to not build the 64-bit test
cases on 32-bit systems, so these #ifdefs can be removed once the minimum
Clang version reaches 13.
In the meantime, silence W=1 warnings given by the current code:
../lib/overflow_kunit.c:191:19: warning: 's64_tests' defined but not used [-Wunused-const-variable=]
191 | DEFINE_TEST_ARRAY(s64) = {
| ^~~
../lib/overflow_kunit.c:24:11: note: in definition of macro 'DEFINE_TEST_ARRAY'
24 | } t ## _tests[]
| ^
../lib/overflow_kunit.c:94:19: warning: 'u64_tests' defined but not used [-Wunused-const-variable=]
94 | DEFINE_TEST_ARRAY(u64) = {
| ^~~
../lib/overflow_kunit.c:24:11: note: in definition of macro 'DEFINE_TEST_ARRAY'
24 | } t ## _tests[]
| ^
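A sketch of the guard implied above (the exact preprocessor condition in
lib/overflow_kunit.c may differ); it can go away once the minimum Clang
version reaches 13:
  #if BITS_PER_LONG == 64
  DEFINE_TEST_ARRAY(u64) = {
      /* ... 64-bit test cases ... */
  };

  DEFINE_TEST_ARRAY(s64) = {
      /* ... 64-bit test cases ... */
  };
  #endif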
Reported-by: kernel test robot <[email protected]>
Link: https://lore.kernel.org/lkml/[email protected]
Fixes: 455a35a6cdb6 ("lib: add runtime test of check_*_overflow functions")
Cc: Rasmus Villemoes <[email protected]>
Cc: Nick Desaulniers <[email protected]>
Cc: Vitor Massaru Iha <[email protected]>
Cc: "Gustavo A. R. Silva" <[email protected]>
Tested-by: Daniel Latypov <[email protected]>
Link: https://lore.kernel.org/lkml/CAGS_qxokQAjQRip2vPi80toW7hmBnXf=KMTNT51B1wuDqSZuVQ@mail.gmail.com
Signed-off-by: Kees Cook <[email protected]>
|
|
Make KUnit trigger the new TAINT_TEST taint when any KUnit test is run.
Due to KUnit tests not being intended to run on production systems, and
potentially causing problems (or security issues like leaking kernel
addresses), the kernel's state should not be considered safe for
production use after KUnit tests are run.
This both marks KUnit modules as test modules using MODULE_INFO() and
manually taints the kernel when tests are run (which catches builtin
tests).
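The two mechanisms described, sketched (macro and flag names as merged
around this time; treat the exact placement as illustrative):
  /* Module-built tests are tagged so loading them taints the kernel: */
  MODULE_INFO(test, "Y");

  /* And the runner taints the kernel when any suite actually executes,
   * which also covers builtin tests: */
  add_taint(TAINT_TEST, LOCKDEP_STILL_OK);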
Acked-by: Luis Chamberlain <[email protected]>
Tested-by: Daniel Latypov <[email protected]>
Reviewed-by: Brendan Higgins <[email protected]>
Signed-off-by: David Gow <[email protected]>
Tested-by: Maíra Canal <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
|
|
Pull block fixes from Jens Axboe:
- Fix for batch getting of tags in sbitmap (wuchi)
- NVMe pull request via Christoph:
- More quirks (Lamarque Vieira Souza, Pablo Greco)
- Fix a fabrics disconnect regression (Ruozhu Li)
- Fix a nvmet-tcp data_digest calculation regression (Sagi
Grimberg)
- Fix nvme-tcp send failure handling (Sagi Grimberg)
- Fix a regression with nvmet-loop and passthrough controllers
(Alan Adamson)
* tag 'block-5.19-2022-07-01' of git://git.kernel.dk/linux-block:
nvme-pci: add NVME_QUIRK_BOGUS_NID for ADATA IM2P33F8ABR1
nvmet: add a clear_ids attribute for passthru targets
nvme: fix regression when disconnect a recovering ctrl
nvme-pci: add NVME_QUIRK_BOGUS_NID for ADATA XPG SX6000LNP (AKA SPECTRIX S40G)
nvme-tcp: always fail a request when sending it failed
nvmet-tcp: fix regression in data_digest calculation
lib/sbitmap: Fix invalid loop in __sbitmap_queue_get_batch()
|