|
Patch series "fs/epoll: restore user-visible behavior upon event ready".
This series tries to address a change in user visible behavior, reported
in https://bugzilla.kernel.org/show_bug.cgi?id=208943.
Epoll does not report an event to all the threads running epoll_wait()
on the same epoll descriptor. Unsurprisingly, this was bisected back to
339ddb53d373 ("fs/epoll: remove unnecessary wakeups of nested epoll"), which
has had various problems in the past, beyond only nested epoll usage.
This patch (of 2):
This incorporates the testcase originally reported in:
https://bugzilla.kernel.org/show_bug.cgi?id=208943
It ensures an event is reported to all threads blocked on the same epoll
descriptor; otherwise only a single thread receives the wakeup once the
event becomes ready.
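For illustration, a minimal user-space sketch of the scenario the testcase
covers (this is not the selftest added by the patch): two threads block in
epoll_wait() on the same epoll descriptor, and with the restored behavior
both should be woken when the eventfd becomes readable.
#include <sys/epoll.h>
#include <sys/eventfd.h>
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static int epfd;

static void *waiter(void *arg)
{
	struct epoll_event ev;

	/* Both threads block on the same epoll descriptor. */
	if (epoll_wait(epfd, &ev, 1, -1) == 1)
		printf("thread %ld woken\n", (long)arg);
	return NULL;
}

int main(void)
{
	struct epoll_event ev = { .events = EPOLLIN };
	int efd = eventfd(0, 0);
	pthread_t t[2];

	epfd = epoll_create1(0);
	ev.data.fd = efd;
	epoll_ctl(epfd, EPOLL_CTL_ADD, efd, &ev);

	pthread_create(&t[0], NULL, waiter, (void *)0L);
	pthread_create(&t[1], NULL, waiter, (void *)1L);
	sleep(1);	/* let both waiters block */

	/* Level-triggered readiness: both waiters should observe it. */
	eventfd_write(efd, 1);

	pthread_join(t[0], NULL);
	pthread_join(t[1], NULL);
	return 0;
}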
Link: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Davidlohr Bueso <[email protected]>
Cc: Jason Baron <[email protected]>
Cc: Roman Penyaev <[email protected]>
Cc: Al Viro <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
The devm_ variants of 'kcalloc()' and 'kmalloc_array()' are not tested for.
Add the corresponding check.
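For illustration (the exact rule checkpatch applies here is an assumption),
the kind of pattern such a check targets is an open-coded multiply in a
devm_ allocation versus the dedicated overflow-checked helpers:
#include <linux/device.h>

/* Illustrative only: the sort of allocation the new check can flag. */
static int example_probe(struct device *dev, unsigned int n)
{
	u32 *buf;

	/* Open-coded multiply in the size; may now be reported. */
	buf = devm_kzalloc(dev, n * sizeof(*buf), GFP_KERNEL);

	/* Overflow-checked alternatives. */
	buf = devm_kcalloc(dev, n, sizeof(*buf), GFP_KERNEL);
	buf = devm_kmalloc_array(dev, n, sizeof(*buf), GFP_KERNEL);

	return buf ? 0 : -ENOMEM;
}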
Link: https://lkml.kernel.org/r/205fc4847972fb6779abcc8818f39c14d1b45af1.1618595794.git.christophe.jaillet@wanadoo.fr
Signed-off-by: Christophe JAILLET <[email protected]>
Acked-by: Joe Perches <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
__must_be_array, offsetof, sizeof_field and __stringify are all
preprocessor macros and do not evaluate their arguments. As such, it is
safe not to warn when arguments are being reused in those four
sub-expressions.
Exclude those so that they can pass checkpatch.
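For example, a hypothetical macro like the one below reuses its argument,
but only inside offsetof() and sizeof_field(), so it should no longer trip
the argument-reuse warning:
#include <linux/stddef.h>	/* offsetof(), sizeof_field() */

/* Hypothetical helper: 'member' appears twice, but neither offsetof()
 * nor sizeof_field() evaluates its arguments, so the reuse is harmless.
 */
#define SPAN_OF(type, member) \
	(offsetof(type, member) + sizeof_field(type, member))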
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Vincent Mailhol <[email protected]>
Acked-by: Joe Perches <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Uses of return sysfs_emit(...) should include a trailing newline.
Suggest adding a newline when one is missing, and add it automatically
when run with --fix.
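A hypothetical show() callback illustrating the pattern being checked
(read_temp() is an assumed helper, not an existing API):
static ssize_t temp_show(struct device *dev,
			 struct device_attribute *attr, char *buf)
{
	/* checkpatch now warns if the format string lacks the trailing
	 * newline, and --fix inserts it.
	 */
	return sysfs_emit(buf, "%d\n", read_temp(dev));	/* not "%d" */
}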
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Joe Perches <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
In COMPAT_SYSCALL_DEFINEx(), compat_sys##name is declared twice; the
second declaration sits just one line below, so drop the redundant one.
With this removal, SYSCALL_DEFINEx() (defined in <linux/syscalls.h>)
and COMPAT_SYSCALL_DEFINEx() look symmetrical.
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Masahiro Yamada <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Mark match_uint() as kernel-doc notation since it is already fully
annotated as such. Use % prefix on constants in kernel-doc comments.
Convert function return descriptions to use the "Return:" kernel-doc
notation.
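A sketch of the notation involved, using a hypothetical helper rather than
the actual match_uint() comment:
/**
 * parse_flag - hypothetical helper illustrating the kernel-doc style
 * @s: string to scan
 * @result: where the parsed value is stored
 *
 * Constants in the description carry the '%' prefix, e.g. the function
 * accepts %0 as a valid input.
 *
 * Return: 0 on success, or a negative errno on failure.
 */
int parse_flag(const char *s, unsigned int *result);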
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Randy Dunlap <[email protected]>
Cc: Alexander Viro <[email protected]>
Cc: David Howells <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Commit 52fbf1134d47 ("lib/genalloc.c: fix allocation of aligned buffer
from non-aligned chunk") added a new parameter 'start_addr' without a
description for it, which causes kernel-doc build warnings:
lib/genalloc.c:649: warning: Function parameter or member 'start_addr' not described in 'gen_pool_first_fit'
lib/genalloc.c:667: warning: Function parameter or member 'start_addr' not described in 'gen_pool_first_fit_align'
lib/genalloc.c:694: warning: Function parameter or member 'start_addr' not described in 'gen_pool_fixed_alloc'
lib/genalloc.c:729: warning: Function parameter or member 'start_addr' not described in 'gen_pool_first_fit_order_align'
lib/genalloc.c:752: warning: Function parameter or member 'start_addr' not described in 'gen_pool_best_fit'
Fix this by adding the missing parameter descriptions.
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Alex Shi <[email protected]>
Cc: Alexey Skidanov <[email protected]>
Cc: Huang Shijie <[email protected]>
Cc: Alex Shi <[email protected]>
Cc: Bhaskar Chowdhury <[email protected]>
Cc: Stephen Rothwell <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Commit 3e8f399da490 ("writeback: rework wb_[dec|inc]_stat family of
functions") added a function description for percpu_counter_add_batch,
but the double '*' marks the comment as kernel-doc, which it is not.
Since lib/percpu_counter.c has no other kernel-doc comments, remove this
incomplete one to silence the kernel-doc warnings:
lib/percpu_counter.c:83: warning: Function parameter or member 'fbc' not described in 'percpu_counter_add_batch'
lib/percpu_counter.c:83: warning: Function parameter or member 'amount' not described in 'percpu_counter_add_batch'
lib/percpu_counter.c:83: warning: Function parameter or member 'batch' not described in 'percpu_counter_add_batch'
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Alex Shi <[email protected]>
Cc: Nikolay Borisov <[email protected]>
Cc: Stephen Boyd <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
On an RT system, spin_lock is replaced by the sleepable rt_mutex lock.
In __call_rcu(), interrupts are disabled before kasan_record_aux_stack()
is called, which triggers this calltrace:
BUG: sleeping function called from invalid context at kernel/locking/rtmutex.c:951
in_atomic(): 0, irqs_disabled(): 1, non_block: 0, pid: 19, name: pgdatinit0
Call Trace:
___might_sleep.cold+0x1b2/0x1f1
rt_spin_lock+0x3b/0xb0
stack_depot_save+0x1b9/0x440
kasan_save_stack+0x32/0x40
kasan_record_aux_stack+0xa5/0xb0
__call_rcu+0x117/0x880
__exit_signal+0xafb/0x1180
release_task+0x1d6/0x480
exit_notify+0x303/0x750
do_exit+0x678/0xcf0
kthread+0x364/0x4f0
ret_from_fork+0x22/0x30
Fix this by replacing the spinlock with a raw_spinlock.
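A sketch of the resulting pattern (the lock name and helper below are
illustrative, not quoted from lib/stackdepot.c):
#include <linux/spinlock.h>

/* A raw_spinlock_t stays a real spinning lock on PREEMPT_RT, so it may
 * be taken with interrupts disabled, unlike spinlock_t, which becomes a
 * sleeping rt_mutex there.
 */
static DEFINE_RAW_SPINLOCK(depot_lock);

static void depot_update(void)
{
	unsigned long flags;

	raw_spin_lock_irqsave(&depot_lock, flags);
	/* ... modify the stack depot pool ... */
	raw_spin_unlock_irqrestore(&depot_lock, flags);
}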
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Zqiang <[email protected]>
Reported-by: Andrew Halaney <[email protected]>
Cc: Alexander Potapenko <[email protected]>
Cc: Gustavo A. R. Silva <[email protected]>
Cc: Vijayanand Jitta <[email protected]>
Cc: Vinayak Menon <[email protected]>
Cc: Yogesh Lal <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
crc8() does not change the data passed to it, so the pointer argument
should be declared const. This avoids callers that receive const data
having to cast it to a non-const pointer to call crc8().
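Roughly, the prototype change looks like this (a sketch based on the
in-tree declaration; see include/linux/crc8.h for the exact form):
/* Before (sketch): callers with const data had to cast it away. */
u8 crc8(const u8 table[CRC8_TABLE_SIZE], u8 *pdata, size_t nbytes, u8 crc);

/* After: the data pointer is const, matching the fact that crc8()
 * only reads from it.
 */
u8 crc8(const u8 table[CRC8_TABLE_SIZE], const u8 *pdata, size_t nbytes, u8 crc);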
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Richard Fitzgerald <[email protected]>
Acked-by: Randy Dunlap <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
s/macthing/matching/
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Bhaskar Chowdhury <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Replace beautiully with beautifully
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: ShihCheng Tu <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Smatch gives the warning:
lib/decompress_unlzma.c:395 process_bit1() warn: inconsistent indenting
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Wang Qing <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
s/buid/build/
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Bhaskar Chowdhury <[email protected]>
Acked-by: Randy Dunlap <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Add myself as maintainer for bitmap API and Andy and Rasmus as reviewers.
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Yury Norov <[email protected]>
Acked-by: Andy Shevchenko <[email protected]>
Acked-by: Rasmus Villemoes <[email protected]>
Cc: Alexey Klimov <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: David Sterba <[email protected]>
Cc: Dennis Zhou <[email protected]>
Cc: Geert Uytterhoeven <[email protected]>
Cc: Jianpeng Ma <[email protected]>
Cc: Joe Perches <[email protected]>
Cc: John Paul Adrian Glaubitz <[email protected]>
Cc: Josh Poimboeuf <[email protected]>
Cc: Rich Felker <[email protected]>
Cc: Stefano Brivio <[email protected]>
Cc: Wei Yang <[email protected]>
Cc: Wolfram Sang <[email protected]>
Cc: Yoshinori Sato <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Add fast paths to the find_*_bit() functions, matching the kernel
implementation.
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Yury Norov <[email protected]>
Acked-by: Rasmus Villemoes <[email protected]>
Cc: Alexey Klimov <[email protected]>
Cc: Andy Shevchenko <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: David Sterba <[email protected]>
Cc: Dennis Zhou <[email protected]>
Cc: Geert Uytterhoeven <[email protected]>
Cc: Jianpeng Ma <[email protected]>
Cc: Joe Perches <[email protected]>
Cc: John Paul Adrian Glaubitz <[email protected]>
Cc: Josh Poimboeuf <[email protected]>
Cc: Rich Felker <[email protected]>
Cc: Stefano Brivio <[email protected]>
Cc: Wei Yang <[email protected]>
Cc: Wolfram Sang <[email protected]>
Cc: Yoshinori Sato <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Similarly to the bitmap functions, users would benefit if we handle the
case of small bitmaps that fit into a single word.
While here, move the find_last_bit() declaration to bitops/find.h where
other find_*_bit() functions sit.
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Yury Norov <[email protected]>
Acked-by: Rasmus Villemoes <[email protected]>
Acked-by: Andy Shevchenko <[email protected]>
Cc: Alexey Klimov <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: David Sterba <[email protected]>
Cc: Dennis Zhou <[email protected]>
Cc: Geert Uytterhoeven <[email protected]>
Cc: Jianpeng Ma <[email protected]>
Cc: Joe Perches <[email protected]>
Cc: John Paul Adrian Glaubitz <[email protected]>
Cc: Josh Poimboeuf <[email protected]>
Cc: Rich Felker <[email protected]>
Cc: Stefano Brivio <[email protected]>
Cc: Wei Yang <[email protected]>
Cc: Wolfram Sang <[email protected]>
Cc: Yoshinori Sato <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Similarly to the bitmap functions, find_next_*_bit() users will benefit
if we handle the case of bitmaps that fit into a single word inline. In
the very best case, the compiler may replace a function call with a few
instructions.
This is a quite typical find_next_bit() user:
unsigned int cpumask_next(int n, const struct cpumask *srcp)
{
	/* -1 is a legal arg here. */
	if (n != -1)
		cpumask_check(n);
	return find_next_bit(cpumask_bits(srcp), nr_cpumask_bits, n + 1);
}
EXPORT_SYMBOL(cpumask_next);
Currently, on ARM64 the generated code looks like this:
0000000000000000 <cpumask_next>:
0: a9bf7bfd stp x29, x30, [sp, #-16]!
4: 11000402 add w2, w0, #0x1
8: aa0103e0 mov x0, x1
c: d2800401 mov x1, #0x40 // #64
10: 910003fd mov x29, sp
14: 93407c42 sxtw x2, w2
18: 94000000 bl 0 <find_next_bit>
1c: a8c17bfd ldp x29, x30, [sp], #16
20: d65f03c0 ret
24: d503201f nop
After applying this patch:
0000000000000140 <cpumask_next>:
140: 11000400 add w0, w0, #0x1
144: 93407c00 sxtw x0, w0
148: f100fc1f cmp x0, #0x3f
14c: 54000168 b.hi 178 <cpumask_next+0x38> // b.pmore
150: f9400023 ldr x3, [x1]
154: 92800001 mov x1, #0xffffffffffffffff // #-1
158: 9ac02020 lsl x0, x1, x0
15c: 52800802 mov w2, #0x40 // #64
160: 8a030001 and x1, x0, x3
164: dac00020 rbit x0, x1
168: f100003f cmp x1, #0x0
16c: dac01000 clz x0, x0
170: 1a800040 csel w0, w2, w0, eq // eq = none
174: d65f03c0 ret
178: 52800800 mov w0, #0x40 // #64
17c: d65f03c0 ret
The find_next_bit() call is replaced with 6 instructions, while
find_next_bit() itself is 41 instructions plus the function call
overhead. Despite the inlining, scripts/bloat-o-meter reports a smaller
.text size after applying the series:
add/remove: 11/9 grow/shrink: 233/176 up/down: 5780/-6768 (-988)
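A simplified sketch of the inline fast path that makes this possible (not
the exact kernel code; the out-of-line slow-path signature is abridged):
static inline unsigned long find_next_bit(const unsigned long *addr,
					  unsigned long size,
					  unsigned long offset)
{
	/* Known-small bitmap: the whole search collapses to a mask, an
	 * AND and a find-first-set, all inlinable at the call site.
	 */
	if (small_const_nbits(size)) {
		unsigned long val;

		if (unlikely(offset >= size))
			return size;

		val = *addr & GENMASK(size - 1, offset);
		return val ? __ffs(val) : size;
	}

	return _find_next_bit(addr, size, offset);	/* out-of-line slow path */
}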
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Yury Norov <[email protected]>
Acked-by: Rasmus Villemoes <[email protected]>
Acked-by: Andy Shevchenko <[email protected]>
Cc: Alexey Klimov <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: David Sterba <[email protected]>
Cc: Dennis Zhou <[email protected]>
Cc: Geert Uytterhoeven <[email protected]>
Cc: Jianpeng Ma <[email protected]>
Cc: Joe Perches <[email protected]>
Cc: John Paul Adrian Glaubitz <[email protected]>
Cc: Josh Poimboeuf <[email protected]>
Cc: Rich Felker <[email protected]>
Cc: Stefano Brivio <[email protected]>
Cc: Wei Yang <[email protected]>
Cc: Wolfram Sang <[email protected]>
Cc: Yoshinori Sato <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Sync the implementation with recent kernel changes.
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Yury Norov <[email protected]>
Acked-by: Rasmus Villemoes <[email protected]>
Cc: Alexey Klimov <[email protected]>
Cc: Andy Shevchenko <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: David Sterba <[email protected]>
Cc: Dennis Zhou <[email protected]>
Cc: Geert Uytterhoeven <[email protected]>
Cc: Jianpeng Ma <[email protected]>
Cc: Joe Perches <[email protected]>
Cc: John Paul Adrian Glaubitz <[email protected]>
Cc: Josh Poimboeuf <[email protected]>
Cc: Rich Felker <[email protected]>
Cc: Stefano Brivio <[email protected]>
Cc: Wei Yang <[email protected]>
Cc: Wolfram Sang <[email protected]>
Cc: Yoshinori Sato <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
lib/find_bit.c declares five single-line wrappers for _find_next_bit().
Turn those wrappers into inline functions; this eliminates unneeded
function calls and opens room for compile-time optimizations.
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Yury Norov <[email protected]>
Acked-by: Rasmus Villemoes <[email protected]>
Acked-by: Andy Shevchenko <[email protected]>
Cc: Alexey Klimov <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: David Sterba <[email protected]>
Cc: Dennis Zhou <[email protected]>
Cc: Geert Uytterhoeven <[email protected]>
Cc: Jianpeng Ma <[email protected]>
Cc: Joe Perches <[email protected]>
Cc: John Paul Adrian Glaubitz <[email protected]>
Cc: Josh Poimboeuf <[email protected]>
Cc: Rich Felker <[email protected]>
Cc: Stefano Brivio <[email protected]>
Cc: Wei Yang <[email protected]>
Cc: Wolfram Sang <[email protected]>
Cc: Yoshinori Sato <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Sync the implementation with the kernel and move the macro from
tools/include/linux/bitmap.h to tools/include/asm-generic/bitsperlong.h.
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Yury Norov <[email protected]>
Acked-by: Rasmus Villemoes <[email protected]>
Cc: Alexey Klimov <[email protected]>
Cc: Andy Shevchenko <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: David Sterba <[email protected]>
Cc: Dennis Zhou <[email protected]>
Cc: Geert Uytterhoeven <[email protected]>
Cc: Jianpeng Ma <[email protected]>
Cc: Joe Perches <[email protected]>
Cc: John Paul Adrian Glaubitz <[email protected]>
Cc: Josh Poimboeuf <[email protected]>
Cc: Rich Felker <[email protected]>
Cc: Stefano Brivio <[email protected]>
Cc: Wei Yang <[email protected]>
Cc: Wolfram Sang <[email protected]>
Cc: Yoshinori Sato <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
find_bit would also benefit from small_const_nbits() optimizations. The
detailed comment is provided by Rasmus Villemoes.
Link: https://lkml.kernel.org/r/[email protected]
Suggested-by: Rasmus Villemoes <[email protected]>
Signed-off-by: Yury Norov <[email protected]>
Acked-by: Rasmus Villemoes <[email protected]>
Acked-by: Andy Shevchenko <[email protected]>
Cc: Alexey Klimov <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: David Sterba <[email protected]>
Cc: Dennis Zhou <[email protected]>
Cc: Geert Uytterhoeven <[email protected]>
Cc: Jianpeng Ma <[email protected]>
Cc: Joe Perches <[email protected]>
Cc: John Paul Adrian Glaubitz <[email protected]>
Cc: Josh Poimboeuf <[email protected]>
Cc: Rich Felker <[email protected]>
Cc: Stefano Brivio <[email protected]>
Cc: Wei Yang <[email protected]>
Cc: Wolfram Sang <[email protected]>
Cc: Yoshinori Sato <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
m68k and sh include bitmap/{find,le}.h prior to the ffs/fls headers. The
new fast-path implementation in find.h requires ffs/fls, so reorder the
header inclusion sequence to prevent compile-time 'implicit function
declaration' errors.
[[email protected]: h8300: rearrange headers inclusion order in asm/bitops]
Link: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Yury Norov <[email protected]>
Acked-by: Geert Uytterhoeven <[email protected]>
Acked-by: Rasmus Villemoes <[email protected]>
Reviewed-by: Andy Shevchenko <[email protected]>
Tested-by: Guenter Roeck <[email protected]>
Cc: Alexey Klimov <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: David Sterba <[email protected]>
Cc: Dennis Zhou <[email protected]>
Cc: Jianpeng Ma <[email protected]>
Cc: Joe Perches <[email protected]>
Cc: John Paul Adrian Glaubitz <[email protected]>
Cc: Josh Poimboeuf <[email protected]>
Cc: Rich Felker <[email protected]>
Cc: Stefano Brivio <[email protected]>
Cc: Wei Yang <[email protected]>
Cc: Wolfram Sang <[email protected]>
Cc: Yoshinori Sato <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
The kernel version generates better code.
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Yury Norov <[email protected]>
Acked-by: Rasmus Villemoes <[email protected]>
Cc: Alexey Klimov <[email protected]>
Cc: Andy Shevchenko <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: David Sterba <[email protected]>
Cc: Dennis Zhou <[email protected]>
Cc: Geert Uytterhoeven <[email protected]>
Cc: Jianpeng Ma <[email protected]>
Cc: Joe Perches <[email protected]>
Cc: John Paul Adrian Glaubitz <[email protected]>
Cc: Josh Poimboeuf <[email protected]>
Cc: Rich Felker <[email protected]>
Cc: Stefano Brivio <[email protected]>
Cc: Wei Yang <[email protected]>
Cc: Wolfram Sang <[email protected]>
Cc: Yoshinori Sato <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Some functions in tools/include/linux/bitmap.h declare nbits as int. In
the kernel nbits is declared as unsigned int.
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Yury Norov <[email protected]>
Acked-by: Rasmus Villemoes <[email protected]>
Cc: Alexey Klimov <[email protected]>
Cc: Andy Shevchenko <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: David Sterba <[email protected]>
Cc: Dennis Zhou <[email protected]>
Cc: Geert Uytterhoeven <[email protected]>
Cc: Jianpeng Ma <[email protected]>
Cc: Joe Perches <[email protected]>
Cc: John Paul Adrian Glaubitz <[email protected]>
Cc: Josh Poimboeuf <[email protected]>
Cc: Rich Felker <[email protected]>
Cc: Stefano Brivio <[email protected]>
Cc: Wei Yang <[email protected]>
Cc: Wolfram Sang <[email protected]>
Cc: Yoshinori Sato <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Patch series "lib/find_bit: fast path for small bitmaps", v6.
Bitmap operations are much simpler and faster in the case of small
bitmaps that fit into a single word. In linux/bitmap.c we have machinery
that allows the compiler to replace an actual function call with a few
instructions if the bitmaps passed into the function are small and their
size is known at compile time.
The find_*_bit() API lacks this functionality, but its users would
benefit from it a lot. One important example is the cpumask subsystem
when NR_CPUS <= BITS_PER_LONG.
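The machinery referred to boils down to a compile-time size check along
these lines (a sketch, not a verbatim quote of the header):
/* True only when nbits is a nonzero compile-time constant that fits in
 * one machine word, letting single-word code paths be resolved
 * statically by the compiler.
 */
#define small_const_nbits(nbits) \
	(__builtin_constant_p(nbits) && (nbits) <= BITS_PER_LONG && (nbits) > 0)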
This patch (of 12):
GENMASK(h, l) may be passed unsigned types. In such a case, a type-limits
warning is generated, for example for GENMASK(h, 0).
Link: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Yury Norov <[email protected]>
Acked-by: Rasmus Villemoes <[email protected]>
Cc: Alexey Klimov <[email protected]>
Cc: Andy Shevchenko <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: David Sterba <[email protected]>
Cc: Dennis Zhou <[email protected]>
Cc: Geert Uytterhoeven <[email protected]>
Cc: Jianpeng Ma <[email protected]>
Cc: Joe Perches <[email protected]>
Cc: John Paul Adrian Glaubitz <[email protected]>
Cc: Josh Poimboeuf <[email protected]>
Cc: Rich Felker <[email protected]>
Cc: Stefano Brivio <[email protected]>
Cc: Wei Yang <[email protected]>
Cc: Wolfram Sang <[email protected]>
Cc: Yoshinori Sato <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
init_groups is declared in both cred.h and init_task.h, but it is not
actually referenced anywhere outside of cred.c where it is defined. So
make it static and remove the declarations.
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Rasmus Villemoes <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
An async_func_t returns void; any errors it encounters have to be stashed
somewhere for consumers to discover later.
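A sketch of the usual pattern, with a hypothetical context struct and
probe helper (do_probe() is assumed, not an existing API):
#include <linux/async.h>
#include <linux/device.h>

struct probe_ctx {
	struct device *dev;
	int error;	/* stashed outcome, read after synchronization */
};

/* async_func_t returns void, so the result is recorded in the context. */
static void probe_async(void *data, async_cookie_t cookie)
{
	struct probe_ctx *ctx = data;

	ctx->error = do_probe(ctx->dev);	/* hypothetical helper */
}

/* Caller side:
 *	async_schedule(probe_async, &ctx);
 *	async_synchronize_full();
 *	if (ctx.error) ...
 */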
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Rasmus Villemoes <[email protected]>
Cc: Tejun Heo <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Declaring struct pt_regs here is unnecessary: no function uses it, and
struct pt_regs has already been declared in linux/kernel.h. Remove the
declarations.
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Wan Jiabing <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
The bitmap.h header is used in a lot of code around the kernel. Besides
that, it includes kernel.h, which sometimes creates an include loop.
The problem here is the many unneeded loops that create header-hell
dependencies. For example, how may you move bitmap_zalloc() from a C file
to the header? Currently it's impossible. And bitmap.h here is only the
tip of the iceberg.
kernel.h is a dump of everything, much of which has nothing in common at
all. We may still have it, but in my new code I prefer to include only
the headers that I want to use, without the bulk of unneeded kernel code.
Break the loop by introducing align.h, including it in kernel.h and
bitmap.h, and then replacing kernel.h with limits.h.
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Andy Shevchenko <[email protected]>
Acked-by: Yury Norov <[email protected]>
Cc: Rasmus Villemoes <[email protected]>
Cc: Al Viro <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
My UEK-derived config has 1030 files depending on pagemap.h before this
change. Afterwards, just 326 files need to be rebuilt when I touch
pagemap.h. I think blkdev.h is probably included too widely, but
untangling that dependency is harder and this solves my problem. x86
allmodconfig builds, but there may be implicit include problems on other
architectures.
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
Acked-by: Dan Williams <[email protected]> [nvdimm]
Acked-by: Jens Axboe <[email protected]> [block]
Reviewed-by: Christoph Hellwig <[email protected]>
Acked-by: Coly Li <[email protected]> [bcache]
Acked-by: Martin K. Petersen <[email protected]> [scsi]
Reviewed-by: William Kucharski <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
The function name should be modified to register_sysctl_paths instead of
register_sysctl_table_path.
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: zhouchuangao <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Test that /proc instance mounted with
mount -t proc -o subset=pid
contains only ".", "..", "self", "thread-self" and pid directories.
Note:
Currently "subset=pid" doesn't return "." and ".." via readdir.
This must be a bug.
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Alexey Dobriyan <[email protected]>
Acked-by: Alexey Gladkov <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Two checks in lookup and readdir code should be enough to not have third
check in open code.
Can't open what can't be looked up?
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Alexey Dobriyan <[email protected]>
Acked-by: Alexey Gladkov <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Now that proc_ops are separate from file_operations and other operation
tables, it is easy to check that every instance has a ->proc_lseek hook
and to remove the check from the main code.
Note:
Files opened with nonseekable_open() naturally don't require ->proc_lseek.
Garbage-collect the pde_lseek() function.
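A sketch of what a proc_ops instance is now expected to carry (the
example_* names are hypothetical; seq_file helpers shown for the common
case):
static const struct proc_ops example_proc_ops = {
	.proc_open	= example_proc_open,
	.proc_read	= seq_read,
	.proc_lseek	= seq_lseek,	/* now expected everywhere; use
					 * default_llseek or noop_llseek
					 * where seq_file is not involved */
	.proc_release	= single_release,
};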
[[email protected]: smoke test lseek()]
Link: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/YFYX0Bzwxlc7aBa/@localhost.localdomain
Signed-off-by: Alexey Dobriyan <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Can't look at this verbosity anymore.
Link: https://lkml.kernel.org/r/YFYXAp/[email protected]
Signed-off-by: Alexey Dobriyan <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Currently the pde_is_permanent() check is being run on root multiple times
rather than on the next proc directory entry. This looks like a
copy-paste error. Fix this by replacing root with next.
Addresses-Coverity: ("Copy-paste error")
Link: https://lkml.kernel.org/r/[email protected]
Fixes: d919b33dafb3 ("proc: faster open/read/close with "permanent" files")
Signed-off-by: Colin Ian King <[email protected]>
Acked-by: Christian Brauner <[email protected]>
Reviewed-by: Alexey Dobriyan <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Fix "no previous prototype" W=1 warnings from the kernel test robot:
arch/alpha/lib/csum_partial_copy.c:349:1: error: no previous prototype for 'csum_and_copy_from_user' [-Werror=missing-prototypes]
349 | csum_and_copy_from_user(const void __user *src, void *dst, int len)
| ^~~~~~~~~~~~~~~~~~~~~~~
arch/alpha/lib/csum_partial_copy.c:358:1: error: no previous prototype for 'csum_partial_copy_nocheck' [-Werror=missing-prototypes]
358 | csum_partial_copy_nocheck(const void *src, void *dst, int len)
| ^~~~~~~~~~~~~~~~~~~~~~~~~
Link: https://lkml.kernel.org/r/[email protected]
Fixes: 808b49da54e6 ("alpha: turn csum_partial_copy_from_user() into csum_and_copy_from_user()")
Signed-off-by: Randy Dunlap <[email protected]>
Reported-by: kernel test robot <[email protected]>
Cc: Al Viro <[email protected]>
Cc: Richard Henderson <[email protected]>
Cc: Ivan Kokshaysky <[email protected]>
Cc: Matt Turner <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
'make ARCH=alpha W=1' reports a couple of old-style function definitions
with a missing parameter list, so fix those.
arch/alpha/kernel/pc873xx.c: In function 'pc873xx_get_base':
arch/alpha/kernel/pc873xx.c:16:21: warning: old-style function definition [-Wold-style-definition]
16 | unsigned int __init pc873xx_get_base()
arch/alpha/kernel/pc873xx.c: In function 'pc873xx_get_model':
arch/alpha/kernel/pc873xx.c:21:14: warning: old-style function definition [-Wold-style-definition]
21 | char *__init pc873xx_get_model()
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Randy Dunlap <[email protected]>
Cc: Richard Henderson <[email protected]>
Cc: Ivan Kokshaysky <[email protected]>
Cc: Matt Turner <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Use the power-efficient work queue, to avoid the pathological case where
we keep pinning ourselves on the same possibly idle CPU on systems that
want to be power-efficient (https://lwn.net/Articles/731052/).
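The change amounts to picking the power-efficient system workqueue for
the delayed work; roughly (the delay expression here is illustrative):
/* Before (sketch): requeues on the bound system workqueue, i.e. on the
 * same, possibly idle, CPU.
 */
schedule_delayed_work(&kfence_timer, msecs_to_jiffies(delay));

/* After: lets the workqueue core place the work where it is cheapest
 * when power-efficient workqueues are enabled.
 */
queue_delayed_work(system_power_efficient_wq, &kfence_timer,
		   msecs_to_jiffies(delay));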
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Marco Elver <[email protected]>
Cc: Alexander Potapenko <[email protected]>
Cc: Dmitry Vyukov <[email protected]>
Cc: Hillf Danton <[email protected]>
Cc: Jann Horn <[email protected]>
Cc: Mark Rutland <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
The allocation wait timeout was initially added because of warnings due to
CONFIG_DETECT_HUNG_TASK=y [1]. While the 1 sec timeout is sufficient to
resolve the warnings (given the hung task timeout must be 1 sec or larger)
it may cause unnecessary wake-ups if the system is idle:
https://lkml.kernel.org/r/CADYN=9J0DQhizAGB0-jz4HOBBh+05kMBXb4c0cXMS7Qi5NAJiw@mail.gmail.com
Fix it by computing the timeout duration in terms of the current
sysctl_hung_task_timeout_secs value.
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Marco Elver <[email protected]>
Cc: Alexander Potapenko <[email protected]>
Cc: Dmitry Vyukov <[email protected]>
Cc: Hillf Danton <[email protected]>
Cc: Jann Horn <[email protected]>
Cc: Mark Rutland <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Patch series "kfence: optimize timer scheduling", v2.
We have observed that mostly-idle systems with KFENCE enabled wake up
otherwise idle CPUs, preventing such to enter a lower power state.
Debugging revealed that KFENCE spends too much active time in
toggle_allocation_gate().
While the first version of KFENCE was using all the right bits to be
scheduling optimal, and thus power efficient, by simply using wait_event()
+ wake_up(), that code was unfortunately removed.
As KFENCE was exposed to various different configs and tests, the
scheduling optimal code slowly disappeared. First because of hung task
warnings, and finally because of deadlocks when an allocation is made by
timer code with debug objects enabled. Clearly, the "fixes" were not too
friendly for devices that want to be power efficient.
Therefore, let's try a little harder to fix the hung task and deadlock
problems that we have with wait_event() + wake_up(), while remaining as
scheduling friendly and power efficient as possible.
Crucially, we need to defer the wake_up() to an irq_work, avoiding any
potential for deadlock.
The result with this series is that on the devices where we observed a
power regression, power usage returns back to baseline levels.
This patch (of 3):
On mostly-idle systems, we have observed that toggle_allocation_gate() is
a cause of frequent wake-ups, preventing an otherwise idle CPU from going
into a lower power state.
A late change in KFENCE's development, due to a potential deadlock [1],
required changing the scheduling-friendly wait_event_timeout() and
wake_up() to an open-coded wait-loop using schedule_timeout(). [1]
https://lkml.kernel.org/r/[email protected]
To avoid unnecessary wake-ups, switch to using wait_event_timeout().
Unfortunately, we still cannot use a version with direct wake_up() in
__kfence_alloc() due to the same potential for deadlock as in [1].
Instead, add a level of indirection via an irq_work that is scheduled if
we determine that the kfence_timer requires a wake_up().
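A sketch of that indirection (simplified; the names follow the
description above but are not quoted from the patch):
#include <linux/irq_work.h>
#include <linux/wait.h>

static DECLARE_WAIT_QUEUE_HEAD(allocation_wait);

static void wake_up_kfence_timer(struct irq_work *work)
{
	wake_up(&allocation_wait);
}
static DEFINE_IRQ_WORK(wake_up_kfence_timer_work, wake_up_kfence_timer);

/* Called from __kfence_alloc(): defer the wake-up to hard-irq context
 * via irq_work so no wait-queue lock is taken from the allocation path.
 */
static void kfence_wake_timer(void)
{
	irq_work_queue(&wake_up_kfence_timer_work);
}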
Link: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/[email protected]
Fixes: 0ce20dd84089 ("mm: add Kernel Electric-Fence infrastructure")
Signed-off-by: Marco Elver <[email protected]>
Cc: Alexander Potapenko <[email protected]>
Cc: Dmitry Vyukov <[email protected]>
Cc: Jann Horn <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Hillf Danton <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
After an out-of-bounds access, zero the guard page before re-protecting
it in kfence_guarded_free(). On one hand this makes the failure mode of
subsequent out-of-bounds accesses more deterministic, but it could also
prevent certain information leaks.
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Marco Elver <[email protected]>
Acked-by: Alexander Potapenko <[email protected]>
Cc: Dmitry Vyukov <[email protected]>
Cc: Andrey Konovalov <[email protected]>
Cc: Jann Horn <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
The include of 'linux/compat.h' in 'process_vm_access.c' is duplicated.
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Zhang Yunkai <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Various coding style tweaks to various files under mm/
[[email protected]: mm/swapfile: minor coding style tweaks]
Link: https://lkml.kernel.org/r/[email protected]
[[email protected]: mm/sparse: minor coding style tweaks]
Link: https://lkml.kernel.org/r/[email protected]
[[email protected]: mm/vmscan: minor coding style tweaks]
Link: https://lkml.kernel.org/r/[email protected]
[[email protected]: mm/compaction: minor coding style tweaks]
Link: https://lkml.kernel.org/r/[email protected]
[[email protected]: mm/oom_kill: minor coding style tweaks]
Link: https://lkml.kernel.org/r/[email protected]
[[email protected]: mm/shmem: minor coding style tweaks]
Link: https://lkml.kernel.org/r/[email protected]
[[email protected]: mm/page_alloc: minor coding style tweaks]
Link: https://lkml.kernel.org/r/[email protected]
[[email protected]: mm/filemap: minor coding style tweaks]
Link: https://lkml.kernel.org/r/[email protected]
[[email protected]: mm/mlock: minor coding style tweaks]
Link: https://lkml.kernel.org/r/[email protected]
[[email protected]: mm/frontswap: minor coding style tweaks]
Link: https://lkml.kernel.org/r/[email protected]
[[email protected]: mm/vmalloc: minor coding style tweaks]
Link: https://lkml.kernel.org/r/[email protected]
[[email protected]: mm/memory_hotplug: minor coding style tweaks]
Link: https://lkml.kernel.org/r/[email protected]
[[email protected]: mm/mempolicy: minor coding style tweaks]
Link: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Zhiyuan Dai <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Delete/add some blank lines and some blank spaces
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: songqiang <[email protected]>
Reviewed-by: David Hildenbrand <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
There are many places where kmap/memset/kunmap patterns occur.
Use the newly lifted memzero_page() to eliminate direct uses of kmap and
leverage the new core functions use of kmap_local_page().
The development of this patch was aided by the following coccinelle
script:
// <smpl>
// SPDX-License-Identifier: GPL-2.0-only
// Find kmap/memset/kunmap pattern and replace with memset*page calls
//
// NOTE: Offsets and other expressions may be more complex than what the script
// will automatically generate. Therefore a catchall rule is provided to find
// the pattern which then must be evaluated by hand.
//
// Confidence: Low
// Copyright: (C) 2021 Intel Corporation
// URL: http://coccinelle.lip6.fr/
// Comments:
// Options:
//
// Then the memset pattern
//
@ memset_rule1 @
expression page, V, L, Off;
identifier ptr;
type VP;
@@
(
-VP ptr = kmap(page);
|
-ptr = kmap(page);
|
-VP ptr = kmap_atomic(page);
|
-ptr = kmap_atomic(page);
)
<+...
(
-memset(ptr, 0, L);
+memzero_page(page, 0, L);
|
-memset(ptr + Off, 0, L);
+memzero_page(page, Off, L);
|
-memset(ptr, V, L);
+memset_page(page, V, 0, L);
|
-memset(ptr + Off, V, L);
+memset_page(page, V, Off, L);
)
...+>
(
-kunmap(page);
|
-kunmap_atomic(ptr);
)
// Remove any pointers left unused
@
depends on memset_rule1
@
identifier memset_rule1.ptr;
type VP, VP1;
@@
-VP ptr;
... when != ptr;
? VP1 ptr;
//
// Catch all
//
@ memset_rule2 @
expression page;
identifier ptr;
expression GenTo, GenSize, GenValue;
type VP;
@@
(
-VP ptr = kmap(page);
|
-ptr = kmap(page);
|
-VP ptr = kmap_atomic(page);
|
-ptr = kmap_atomic(page);
)
<+...
(
//
// Some call sites have complex expressions within the memset/memcpy
// The follow are catch alls which need to be evaluated by hand.
//
-memset(GenTo, 0, GenSize);
+memzero_pageExtra(page, GenTo, GenSize);
|
-memset(GenTo, GenValue, GenSize);
+memset_pageExtra(page, GenValue, GenTo, GenSize);
)
...+>
(
-kunmap(page);
|
-kunmap_atomic(ptr);
)
// Remove any pointers left unused
@
depends on memset_rule2
@
identifier memset_rule2.ptr;
type VP, VP1;
@@
-VP ptr;
... when != ptr;
? VP1 ptr;
// </smpl>
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Ira Weiny <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Cc: Alexander Viro <[email protected]>
Cc: Chaitanya Kulkarni <[email protected]>
Cc: Chris Mason <[email protected]>
Cc: Josef Bacik <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Patch series "btrfs: Convert kmap/memset/kunmap to memzero_user()".
Lift memzero_user(), convert it to kmap_local_page() and then use it in
btrfs.
This patch (of 3):
memzero_page() can replace the kmap/memset/kunmap pattern in other
places in the code. While zero_user() has the same interface, it is not
the same call, and its use should be limited; some of those calls may be
better converted from zero_user() to memzero_page() [1], but that is not
addressed in this series.
Lift memzero_page() to highmem.
[1] https://lore.kernel.org/lkml/CAHk-=wijdojzo56FzYqE5TOYw2Vws7ik3LEMGj9SPQaJJ+Z73Q@mail.gmail.com/
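The lifted helper is essentially a kmap_local wrapper around memset; a
sketch under that assumption (see include/linux/highmem.h for the real
definition):
#include <linux/highmem.h>
#include <linux/string.h>

/* Sketch of the lifted helper: map, zero the range, unmap. */
static inline void memzero_page(struct page *page, size_t offset, size_t len)
{
	char *addr = kmap_local_page(page);

	memset(addr + offset, 0, len);
	kunmap_local(addr);
}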
Link: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Ira Weiny <[email protected]>
Cc: Alexander Viro <[email protected]>
Cc: David Sterba <[email protected]>
Cc: Chris Mason <[email protected]>
Cc: Josef Bacik <[email protected]>
Cc: Chaitanya Kulkarni <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
It can be optimized at compile time.
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: zhouchuangao <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Sergey Senozhatsky <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
strlcpy is marked as deprecated in Documentation/process/deprecated.rst,
and there is no functional difference when the caller expects truncation
(when not checking the return value). strscpy is relatively better as
it also avoids scanning the whole source string.
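The conversion itself is mechanical; with a hypothetical destination
buffer it looks like:
char name[32];	/* hypothetical destination */

/* strlcpy() walks the entire source to compute its return value;
 * strscpy() stops once the destination is full and returns -E2BIG on
 * truncation, which is simply ignored when truncation is acceptable.
 */
strscpy(name, src, sizeof(name));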
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Zhiyuan Dai <[email protected]>
Cc: Seth Jennings <[email protected]>
Cc: Dan Streetman <[email protected]>
Cc: Vitaly Wool <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|