path: root/lib
Age         Commit message  (Author; files changed, lines -/+)
2021-06-18  Merge branch 'sched/urgent' into sched/core, to resolve conflicts  (Ingo Molnar; 1 file, -1/+1)
This commit in sched/urgent moved the cfs_rq_is_decayed() function:
  a7b359fc6a37 ("sched/fair: Correctly insert cfs_rq's to list on unthrottle")
and this fresh commit in sched/core modified it in the old location:
  9e077b52d86a ("sched/pelt: Check that *_avg are null when *_sum are")
Merge the two variants.
Conflicts:
  kernel/sched/fair.c
Signed-off-by: Ingo Molnar <[email protected]>
2021-06-17  lib: add iomem emulation (logic_iomem)  (Johannes Berg; 3 files, -0/+334)
Add IO memory emulation that uses callbacks for read/write to the
allocated regions. The callbacks can be registered by the users using
logic_iomem_alloc().
To use, an architecture must 'select LOGIC_IOMEM' in Kconfig and then
include <asm-generic/logic_io.h> into asm/io.h to get the
__raw_read*/__raw_write* functions.
Optionally, an architecture may 'select LOGIC_IOMEM_FALLBACK' in which
case non-emulated regions will 'fall back' to the various real_*
functions that must then be provided.
Cc: Arnd Bergmann <[email protected]>
Signed-off-by: Johannes Berg <[email protected]>
Signed-off-by: Richard Weinberger <[email protected]>
2021-06-14  Merge tag 'v5.13-rc6' into driver-core-next  (Greg Kroah-Hartman; 1 file, -1/+1)
We need the driver core fix in here as well.
Signed-off-by: Greg Kroah-Hartman <[email protected]>
2021-06-14  Merge tag 'v5.13-rc6' into char-misc-next  (Greg Kroah-Hartman; 1 file, -1/+1)
We need the fixes in here as well.
Signed-off-by: Greg Kroah-Hartman <[email protected]>
2021-06-11  kunit: Add 'kunit_shutdown' option  (David Gow; 1 file, -0/+20)
Add a new kernel command-line option, 'kunit_shutdown', which allows
the user to specify that the kernel poweroff, halt, or reboot after
completing all KUnit tests; this is very handy for running KUnit tests
on UML or a VM so that the UML/VM process exits cleanly immediately
after running all tests without needing a special initramfs.
Signed-off-by: David Gow <[email protected]>
Signed-off-by: Brendan Higgins <[email protected]>
Reviewed-by: Stephen Boyd <[email protected]>
Tested-By: Daniel Latypov <[email protected]>
Reviewed-by: Daniel Latypov <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
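The handler such an option implies is conceptually tiny; a minimal
sketch (assumed shape, not copied from lib/kunit - the
kernel_power_off()/kernel_halt()/kernel_restart() helpers come from
<linux/reboot.h>):

	#include <linux/reboot.h>
	#include <linux/string.h>

	static char *kunit_shutdown;	/* filled from the kunit_shutdown= parameter */

	static void kunit_handle_shutdown(void)
	{
		if (!kunit_shutdown)
			return;		/* default: leave the kernel running */
		if (!strcmp(kunit_shutdown, "poweroff"))
			kernel_power_off();
		else if (!strcmp(kunit_shutdown, "halt"))
			kernel_halt();
		else if (!strcmp(kunit_shutdown, "reboot"))
			kernel_restart(NULL);
	}

On UML, booting with kunit_shutdown=poweroff makes the UML process exit
as soon as the last suite has run.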
2021-06-11  kunit: Fix result propagation for parameterised tests  (David Gow; 1 file, -4/+3)
When one parameter of a parameterised test failed, its failure would be
propagated to the overall test, but not to the suite result (unless it
was the last parameter).
This is because test_case->success was being reset to the test->success
result after each parameter was used, so a failing test's result would
be overwritten by a non-failing result. The overall test result was
handled in a third variable, test_result, but this was discarded after
the status line was printed.
Instead, just propagate the result after each parameter run.
Signed-off-by: David Gow <[email protected]>
Fixes: fadb08e7c750 ("kunit: Support for Parameterized Testing")
Reviewed-by: Marco Elver <[email protected]>
Reviewed-by: Brendan Higgins <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
2021-06-10  bootconfig: Support mixing a value and subkeys under a key  (Masami Hiramatsu; 1 file, -20/+45)
Support mixing a value and subkeys under a key. Since kernel cmdline
options will support "aaa.bbb=value1 aaa.bbb.ccc=value2", it is better
that bootconfig accepts such configuration too.
Note that this does not change the syntax itself; it just accepts a
mixed value and subkeys, e.g.
	key = value1
	key.subkey = value2
But this is not accepted:
	key {
		value1
		subkey = value2
	}
That would make value1 a subkey.
Also, the position of the value node under a key is fixed: if there are
both a value and subkeys, the value is always the first child node of
the key. Thus if the user specifies the subkeys first, e.g.
	key.subkey = value1
	key = value2
in the program (and in /proc/bootconfig) it will be shown as:
	key = value2
	key.subkey = value1
Link: https://lkml.kernel.org/r/162262194685.264090.7738574774030567419.stgit@devnote2
Signed-off-by: Masami Hiramatsu <[email protected]>
Signed-off-by: Steven Rostedt (VMware) <[email protected]>
2021-06-10  bootconfig: Change array value to use child node  (Masami Hiramatsu; 1 file, -4/+19)
It is not possible to put an array value with subkeys under a key node,
because both the subkeys and the array elements use the "next" field of
the xbc_node. Thus this changes array values to use the "child" field
in the array case. This change is split out so that it can be tested
easily.
Link: https://lkml.kernel.org/r/162262193838.264090.16044473274501498656.stgit@devnote2
Signed-off-by: Masami Hiramatsu <[email protected]>
Signed-off-by: Steven Rostedt (VMware) <[email protected]>
2021-06-10  csum_and_copy_to_pipe_iter(): leave handling of csum_state to caller  (Al Viro; 1 file, -23/+18)
... since all the logic is already there for use by the iovec/kvec/etc.
cases.
Signed-off-by: Al Viro <[email protected]>
2021-06-10  clean up copy_mc_pipe_to_iter()  (Al Viro; 1 file, -24/+9)
... and we don't need kmap_atomic() there - kmap_local_page() is fine.
Signed-off-by: Al Viro <[email protected]>
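The recurring pattern in this part of the series, as a self-contained
sketch (the helper is illustrative, not a function from lib/iov_iter.c):

	#include <linux/highmem.h>
	#include <linux/string.h>

	/* kmap_local_page() gives a thread-local mapping and, unlike
	 * kmap_atomic(), does not disable preemption or pagefaults as a
	 * side effect. */
	static void copy_out_of_page(void *dst, struct page *page,
				     size_t offset, size_t len)
	{
		char *kaddr = kmap_local_page(page);

		memcpy(dst, kaddr + offset, len);
		kunmap_local(kaddr);
	}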
2021-06-10  pipe_zero(): we don't need no stinkin' kmap_atomic()...  (Al Viro; 1 file, -1/+3)
FWIW, memcpy_to_page() itself almost certainly ought to use
kmap_local_page()...
Signed-off-by: Al Viro <[email protected]>
2021-06-10  iov_iter: clean csum_and_copy_...() primitives up a bit  (Al Viro; 1 file, -6/+4)
1) kmap_atomic() is not needed here, kmap_local_page() is enough.
2) No need to make
	sum = csum_block_add(sum, next, off);
conditional upon next != 0 - adding 0 is a no-op as far as
csum_block_add() is concerned.
Signed-off-by: Al Viro <[email protected]>
2021-06-10  copy_page_from_iter(): don't need kmap_atomic() for kvec/bvec cases  (Al Viro; 1 file, -2/+2)
kmap_local_page() is enough.
Signed-off-by: Al Viro <[email protected]>
2021-06-10  copy_page_to_iter(): don't bother with kmap_atomic() for bvec/kvec cases  (Al Viro; 1 file, -3/+3)
kmap_local_page() is enough there. Moreover, we can use _copy_to_iter()
for actual copying in those cases - no useful extra checks on the
address we are copying from in that call.
Signed-off-by: Al Viro <[email protected]>
2021-06-10  iterate_xarray(): only on the first iteration might we get offset != 0  (Al Viro; 1 file, -3/+3)
Recalculating offset on each iteration is pointless - on all subsequent
passes through the loop it will be zero anyway.
Signed-off-by: Al Viro <[email protected]>
2021-06-10  pull handling of ->iov_offset into iterate_{iovec,bvec,xarray}  (Al Viro; 1 file, -12/+14)
Fewer arguments (by one, but still...) for the iterate_...() macros.
Signed-off-by: Al Viro <[email protected]>
2021-06-10  iov_iter: make iterator callbacks use base and len instead of iovec  (Al Viro; 1 file, -91/+91)
Iterator macros used to provide the arguments for step callbacks in a
structure matching the flavour - iovec for ITER_IOVEC, kvec for
ITER_KVEC and bio_vec for ITER_BVEC. That already broke down for
ITER_XARRAY (bio_vec there); now that we are using the kvec callback
for the bvec and xarray cases, we are always passing a pointer + length
(void __user * + size_t for the ITER_IOVEC callback, void * + size_t
for everything else).
Note that the original reason for bio_vec (page + offset + len) in the
ITER_BVEC case used to be that we did *not* want to kmap a page when
all we wanted was e.g. to find the alignment of its subrange. Now all
such users are gone and the ones that are left want the page mapped
anyway for actually copying the data.
So in all cases we have pointer + length, and there's no good reason
for keeping those in struct iovec or struct kvec - we can just pass
them to the callback separately.
Again, less boilerplate in callbacks...
Signed-off-by: Al Viro <[email protected]>
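A sketch of the resulting callback shape (the function, the priv layout
and the spelled-out return convention are illustrative, not lifted from
lib/iov_iter.c):

	/* kernel-space flavours (kvec/bvec/xarray) now get base + len;
	 * the iovec flavour is identical except for void __user *base. */
	static size_t memcpy_step(void *base, size_t len, void *priv)
	{
		void **cursor = priv;	/* destination, advanced as we go */

		memcpy(*cursor, base, len);
		*cursor += len;		/* void * arithmetic: GNU C, as used in-kernel */
		return 0;		/* bytes left undone; 0 = full step consumed */
	}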
2021-06-10  iov_iter: make the amount already copied available to iterator callbacks  (Al Viro; 1 file, -70/+50)
Making iterator macros keep track of the amount of data copied is
pretty easy and it has several benefits:
1) we no longer need the mess like
	(from += v.iov_len) - v.iov_len
in the callbacks - initial value + total amount copied so far would do
just fine.
2) less obviously, we no longer need to remember the initial amount of
data we wanted to copy; the loops in iterator macros are along the
lines of
	wanted = bytes;
	while (bytes) {
		copy some
		bytes -= copied
		if short copy
			break
	}
	bytes = wanted - bytes;
Replacement is
	offs = 0;
	while (bytes) {
		copy some
		offs += copied
		bytes -= copied
		if short copy
			break
	}
	bytes = offs;
That wouldn't be a win per se, but unlike the initial value of bytes,
the amount copied so far *is* useful in callbacks.
3) in some cases (csum_and_copy_..._iter()) we already had offs
manually maintained by the callbacks. With that change we can drop
that.
Less boilerplate and more readable code...
Signed-off-by: Al Viro <[email protected]>
2021-06-10  iov_iter: get rid of separate bvec and xarray callbacks  (Al Viro; 1 file, -82/+30)
After the previous commit we have
 * xarray and bvec callbacks identical in all cases
 * both equivalent to the kvec callback wrapped into a
   kmap_local_page()/kunmap_local() pair.
So we can pass only two (iovec and kvec) callbacks to
iterate_and_advance() and let iterate_{bvec,xarray} wrap them into the
kmap_local_page()/kunmap_local() pair.
Signed-off-by: Al Viro <[email protected]>
2021-06-10  iov_iter: teach iterate_{bvec,xarray}() about possible short copies  (Al Viro; 1 file, -41/+24)
... and now we finally can sort out the mess in _copy_mc_to_iter().
Provide a variant of iterate_and_advance() that does *NOT* ignore the
return values of the bvec, xarray and kvec callbacks, and use that in
_copy_mc_to_iter(). That gets rid of the magic in those callbacks - we
used to need it so we'd get at least the right return value in case of
failure halfway through.
As a bonus, now the iterator is advanced by the amount actually copied
for all flavours. That's what the callers expect, and it used to be
done correctly in the iovec and xarray cases. However, in the kvec and
bvec cases the iterator had not been advanced on such failures,
breaking the users. Fixed now...
Signed-off-by: Al Viro <[email protected]>
2021-06-10  iterate_bvec(): expand bvec.h macro forest, massage a bit  (Al Viro; 1 file, -13/+20)
... incidentally, using a pointer instead of an index into an array
(the only change here) trims half a kilobyte of .text...
Signed-off-by: Al Viro <[email protected]>
2021-06-10  iov_iter: unify iterate_iovec and iterate_kvec  (Al Viro; 1 file, -23/+5)
The differences between iterate_iovec and iterate_kvec are minor:
 * kvec callback is treated as if it returned 0
 * initialization of __p is with i->iov and i->kvec resp.
which is trivially dealt with.
No code generation changes - the compiler is quite capable of turning
	left = ((void)(STEP), 0);
	__v.iov_len -= left;
(with no accesses to left downstream) and
	(void)(STEP);
into the same code.
Signed-off-by: Al Viro <[email protected]>
2021-06-10  iov_iter: massage iterate_iovec and iterate_kvec to logic similar to iterate_bvec  (Al Viro; 1 file, -55/+36)
Premature optimization is the root of all evil... Trying to unroll the
first pass through the loop makes it harder to follow, and not just for
readers - the compiler ends up generating worse code than it would on a
"non-optimized" loop.
Signed-off-by: Al Viro <[email protected]>
2021-06-10  iterate_and_advance(): get rid of magic in case when n is 0  (Al Viro; 1 file, -1/+1)
iov_iter_advance() needs to do some non-trivial work when it's given 0
as argument (skip all empty iovecs, mostly). We used to implement it
via iterate_and_advance(); we no longer do so, and for all other users
of iterate_and_advance() zero length is a no-op.
Signed-off-by: Al Viro <[email protected]>
2021-06-10  csum_and_copy_to_iter(): massage into form closer to csum_and_copy_from_iter()  (Al Viro; 1 file, -4/+4)
Namely, have off counted starting from 0 rather than from csstate->off.
To compensate we need to shift the initial value (csstate->sum) (rotate
by 8 bits, as usual for csum) and do the same after we are finished
adding the pieces up.
What we get out of that is a bit more redundancy in our variables -
from is always equal to addr + off, which will be useful several
commits down the road.
Signed-off-by: Al Viro <[email protected]>
2021-06-10  iov_iter: replace iov_iter_copy_from_user_atomic() with iterator-advancing variant  (Al Viro; 1 file, -26/+4)
Replacement is called copy_page_from_iter_atomic(); unlike the old
primitive, the callers do *not* need to do iov_iter_advance() after it.
In case they end up consuming less than they'd been given, they need to
do iov_iter_revert() on everything they had not consumed. That,
however, needs to be done only on slow paths.
All in-tree callers converted. And that kills the last user of
iterate_all_kinds().
Signed-off-by: Al Viro <[email protected]>
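A sketch of the new caller contract (consume() is a hypothetical
consumer standing in for, e.g., a filesystem write path):

	#include <linux/uio.h>

	static size_t fill_page(struct page *page, unsigned offset,
				size_t bytes, struct iov_iter *i)
	{
		/* the primitive advances the iterator by itself now */
		size_t copied = copy_page_from_iter_atomic(page, offset, bytes, i);
		size_t used = consume(page, offset, copied);	/* hypothetical */

		if (unlikely(used < copied))	/* slow path only */
			iov_iter_revert(i, copied - used);
		return used;
	}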
2021-06-10  [xarray] iov_iter_npages(): just use DIV_ROUND_UP()  (Al Viro; 1 file, -14/+2)
The compiler is capable of recognizing division by a power of 2 and
turning it into shifts.
Signed-off-by: Al Viro <[email protected]>
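The idiom in a self-contained sketch (the helper name is illustrative):

	#include <linux/kernel.h>	/* DIV_ROUND_UP() */

	/* Pages spanned by [offset, offset + count): PAGE_SIZE is a power
	 * of two, so the compiler lowers both the division and
	 * DIV_ROUND_UP() to shifts - no need to open-code them. */
	static unsigned long span_npages(unsigned long offset, size_t count)
	{
		return DIV_ROUND_UP(offset + count, PAGE_SIZE) - offset / PAGE_SIZE;
	}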
2021-06-10  iov_iter_npages(): don't bother with iterate_all_kinds()  (Al Viro; 1 file, -34/+54)
Note that in the bvec case pages can be compound ones - we can't just
assume that each segment is covered by one (sub)page.
Signed-off-by: Al Viro <[email protected]>
2021-06-10  get rid of iterate_all_kinds() in iov_iter_get_pages()/iov_iter_get_pages_alloc()  (Al Viro; 1 file, -56/+91)
Here iterate_all_kinds() is used just to find the first (non-empty, in
case of iovec) segment. Which can be easily done explicitly.
Note that in the bvec case we now can get more than PAGE_SIZE worth of
them, in case we have a compound page in the bvec and a range that
crosses a subpage boundary. Older behaviour had been to stop on that
boundary; we used to get the right first page (for_each_bvec() took
care of that), but that was all we'd got.
Signed-off-by: Al Viro <[email protected]>
2021-06-10  iov_iter_gap_alignment(): get rid of iterate_all_kinds()  (Al Viro; 1 file, -13/+14)
For one thing, it's only used for iovec (and makes sense only for
those). For another, here we don't care about iov_offset, since the
beginning of the first segment and the end of the last one are ignored.
So it makes a lot more sense to just walk through the iovec array...
We need to deal with the case of a truncated iov_iter, but unlike the
situation with iov_iter_alignment() we don't care where the last
segment ends - just which segment is the last one.
[fixed a braino spotted by Qian Cai <[email protected]>]
Signed-off-by: Al Viro <[email protected]>
2021-06-10  iov_iter_alignment(): don't bother with iterate_all_kinds()  (Al Viro; 1 file, -10/+53)
It's easier to go over the array manually. We need to watch out for a
truncated iov_iter, though - the iovec array might cover more than
i->count.
Signed-off-by: Al Viro <[email protected]>
2021-06-10  sanitize iov_iter_fault_in_readable()  (Al Viro; 1 file, -10/+16)
1) constify the iov_iter argument; we are not advancing it in this
primitive.
2) cap the amount requested by the amount of data in the iov_iter. All
existing callers should've been safe, but the check is really cheap and
doing it here makes for easier analysis, as well as more consistent
semantics among the primitives.
3) don't bother with iterate_iovec(). An explicit loop is not any
harder to follow, and we get rid of standalone iterate_iovec() users -
it's only used by iterate_and_advance() and (soon to be gone)
iterate_all_kinds().
Signed-off-by: Al Viro <[email protected]>
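The explicit loop, sketched from the three points above (close to, but
not guaranteed identical to, the committed code):

	int iov_iter_fault_in_readable(const struct iov_iter *i, size_t bytes)
	{
		if (iter_is_iovec(i)) {
			const struct iovec *p;
			size_t skip;

			if (bytes > i->count)
				bytes = i->count;	/* point 2: cap at available data */
			for (p = i->iov, skip = i->iov_offset; bytes; p++, skip = 0) {
				size_t len = min(bytes, p->iov_len - skip);
				int err;

				if (unlikely(!len))
					continue;	/* empty segment */
				err = fault_in_pages_readable(p->iov_base + skip, len);
				if (unlikely(err))
					return err;
				bytes -= len;
			}
		}
		return 0;
	}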
2021-06-10  iov_iter: optimize iov_iter_advance() for iovec and kvec  (Al Viro; 1 file, -14/+28)
We can do better than the generic iterate_and_advance() for this one;
inspired by bvec_iter_advance() (and massaged into that form by
equivalent transformations).
[fixed a braino caught by kernel test robot <[email protected]>]
Signed-off-by: Al Viro <[email protected]>
2021-06-10  iov_iter: separate direction from flavour  (Al Viro; 1 file, -37/+48)
Instead of having them mixed in iter->type, use separate ->iter_type
and ->data_source (u8 and bool resp.). And don't bother with a
(pseudo-)bitmap for the former - micro-optimizations from being able to
check whether the flavour is one of two values are not worth the
confusion for the optimizer. It can't prove that we never get e.g.
ITER_IOVEC | ITER_PIPE, so we end up with an extra headache.
Signed-off-by: Al Viro <[email protected]>
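The resulting layout, sketched from the description (relevant members
only; the remaining fields are elided):

	struct iov_iter {
		u8 iter_type;		/* ITER_IOVEC / ITER_KVEC / ITER_BVEC / ... */
		bool data_source;	/* false: iter is the destination (READ),
					 * true:  iter is the source (WRITE) */
		/* ... count, cursor and flavour-specific fields ... */
	};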
2021-06-10  iov_iter_advance(): don't modify ->iov_offset for ITER_DISCARD  (Al Viro; 1 file, -2/+0)
The field is not used for that flavour.
Signed-off-by: Al Viro <[email protected]>
2021-06-10  iov_iter: reorder handling of flavours in primitives  (Al Viro; 1 file, -46/+45)
iovec is the most common one; test for it first and test explicitly,
rather than "not anything else". Replace all flavour checks with use of
the iov_iter_is_...() helpers.
Signed-off-by: Al Viro <[email protected]>
2021-06-10  iov_iter: switch ..._full() variants of primitives to use of iov_iter_revert()  (Al Viro; 1 file, -104/+0)
Use the corresponding plain variants and revert on short copy. That's
the way it should've been done from the very beginning, except that we
didn't have iov_iter_revert() back then...
[fixed another braino caught by Qian Cai <[email protected]>]
Signed-off-by: Al Viro <[email protected]>
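The shared shape of the _full() variants after this change (a sketch
along the lines of include/linux/uio.h; details may differ):

	static __always_inline __must_check
	bool copy_from_iter_full(void *addr, size_t bytes, struct iov_iter *i)
	{
		size_t copied = copy_from_iter(addr, bytes, i);

		if (likely(copied == bytes))
			return true;
		iov_iter_revert(i, copied);	/* short copy: put the iterator back */
		return false;
	}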
2021-06-05  lib: crc64: fix kernel-doc warning  (YueHaibing; 1 file, -1/+1)
Fix W=1 kernel build warning:
	lib/crc64.c:40: warning: bad line:  or the previous crc64 value if computing incrementally.
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: YueHaibing <[email protected]>
Reviewed-by: Coly Li <[email protected]>
Acked-by: Randy Dunlap <[email protected]>
Tested-by: Randy Dunlap <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
2021-06-03  Merge branch 'sched/urgent' into sched/core, to pick up fixes  (Ingo Molnar; 4 files, -17/+39)
Signed-off-by: Ingo Molnar <[email protected]>
2021-06-03  iov_iter_advance(): use consistent semantics for moving past the end  (Al Viro; 1 file, -3/+2)
Asking to advance by more than we have left in the iov_iter should move
to the very end; it should *not* leave a negative i->count and it
should not spew into syslog, etc. - it's a legitimate operation.
Signed-off-by: Al Viro <[email protected]>
2021-06-03  [xarray] iov_iter_fault_in_readable() should do nothing in xarray case  (Al Viro; 1 file, -1/+1)
... and actually should just check that it's given an iovec-backed
iterator in the first place.
Cc: [email protected]
Signed-off-by: Al Viro <[email protected]>
2021-06-03  copy_page_to_iter(): fix ITER_DISCARD case  (Al Viro; 1 file, -2/+5)
We need to advance the iterator...
Cc: [email protected]
Signed-off-by: Al Viro <[email protected]>
2021-06-03  teach copy_page_to_iter() to handle compound pages  (Al Viro; 1 file, -3/+25)
In a situation when copy_page_to_iter() gets a compound page, the
current code would only work on systems with no CONFIG_HIGHMEM. That
*is* the majority of real-world setups, or we would've drowned in bug
reports by now. Still, it needs fixing.
The current variant works for a solitary page; rename it to
__copy_page_to_iter() and turn the handling of compound pages into a
loop over subpages.
Cc: [email protected]
Signed-off-by: Al Viro <[email protected]>
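The loop over subpages, sketched from the description (sanity checks
elided; not guaranteed identical to the committed code):

	size_t copy_page_to_iter(struct page *page, size_t offset, size_t bytes,
				 struct iov_iter *i)
	{
		size_t res = 0;

		page += offset / PAGE_SIZE;	/* start at the right subpage */
		offset %= PAGE_SIZE;
		while (1) {
			size_t n = __copy_page_to_iter(page, offset,
					min(bytes, (size_t)PAGE_SIZE - offset), i);
			res += n;
			bytes -= n;
			if (!bytes || !n)	/* done, or short copy */
				break;
			offset += n;
			if (offset == PAGE_SIZE) {	/* next subpage */
				page++;
				offset = 0;
			}
		}
		return res;
	}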
2021-06-03  iov_iter: Remove iov_iter_for_each_range()  (David Howells; 1 file, -27/+0)
Remove iov_iter_for_each_range() as it's no longer used with the
removal of lustre.
Signed-off-by: David Howells <[email protected]>
Signed-off-by: Al Viro <[email protected]>
2021-05-31  locking/lockdep: Reduce LOCKDEP dependency list  (Randy Dunlap; 1 file, -1/+0)
Some arches (um, sparc64, riscv, xtensa) cause a Kconfig warning for
LOCKDEP. These arches select LOCKDEP_SUPPORT but they are not listed as
one of the arches that LOCKDEP depends on.
Since (16) arches define the Kconfig symbol LOCKDEP_SUPPORT if they
intend to have LOCKDEP support, replace the awkward list of arches that
LOCKDEP depends on with the LOCKDEP_SUPPORT symbol.
But wait. LOCKDEP_SUPPORT is included in LOCK_DEBUGGING_SUPPORT, which
is already a dependency here, so LOCKDEP_SUPPORT is redundant and not
needed.
That leaves the FRAME_POINTER dependency, but it is part of an
expression like this:
	depends on (A && B) && (FRAME_POINTER || B')
where B' is a dependency of B, so if B is true then B' is true and the
value of FRAME_POINTER does not matter. Thus we can also delete the
FRAME_POINTER dependency.
Fixes this kconfig warning (for um, sparc64, riscv, xtensa):
	WARNING: unmet direct dependencies detected for LOCKDEP
	  Depends on [n]: DEBUG_KERNEL [=y] && LOCK_DEBUGGING_SUPPORT [=y] && (FRAME_POINTER [=n] || MIPS || PPC || S390 || MICROBLAZE || ARM || ARC || X86)
	  Selected by [y]:
	  - PROVE_LOCKING [=y] && DEBUG_KERNEL [=y] && LOCK_DEBUGGING_SUPPORT [=y]
	  - LOCK_STAT [=y] && DEBUG_KERNEL [=y] && LOCK_DEBUGGING_SUPPORT [=y]
	  - DEBUG_LOCK_ALLOC [=y] && DEBUG_KERNEL [=y] && LOCK_DEBUGGING_SUPPORT [=y]
Fixes: 7d37cb2c912d ("lib: fix kconfig dependency on ARCH_WANT_FRAME_POINTERS")
Signed-off-by: Randy Dunlap <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Acked-by: Waiman Long <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
2021-05-31  Merge 5.13-rc4 into driver-core-next  (Greg Kroah-Hartman; 3 files, -11/+16)
We need the driver core fixes in here as well.
Signed-off-by: Greg Kroah-Hartman <[email protected]>
2021-05-31  Merge 5.13-rc4 into char-misc-next  (Greg Kroah-Hartman; 2 files, -3/+4)
We need the char/misc fixes in here as well.
Signed-off-by: Greg Kroah-Hartman <[email protected]>
2021-05-27  Merge branch 'for-5.13-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/dennis/percpu  (Linus Torvalds; 1 file, -3/+3)
Pull percpu fixes from Dennis Zhou:
 "This contains a cleanup to lib/percpu-refcount.c and an update to the
  MAINTAINERS file to more formally take over support for lib/percpu*"
* 'for-5.13-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/dennis/percpu:
  MAINTAINERS: Add lib/percpu* as part of percpu entry
  percpu_ref: Don't opencode percpu_ref_is_dying
2021-05-27  lib: test_scanf: Remove pointless use of type_min() with unsigned types  (Richard Fitzgerald; 1 file, -7/+6)
sparse was producing warnings of the form:
	sparse: cast truncates bits from constant value (ffff0001 becomes 1)
There is no actual problem here. Using type_min() on an unsigned type
results in an (expected) truncation. However, there is no need to test
an unsigned value against type_min(). The minimum value of an unsigned
is obviously 0, and any value cast to an unsigned type is >= 0, so for
unsigneds only type_max() need be tested.
This patch also takes the opportunity to clean up the implementation of
simple_numbers_loop() to use a common pattern for the positive and
negative tests.
Reported-by: kernel test robot <[email protected]>
Signed-off-by: Richard Fitzgerald <[email protected]>
Reviewed-by: Petr Mladek <[email protected]>
Signed-off-by: Petr Mladek <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
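The reasoning in miniature (plain C; UINT16_MAX stands in for the
kernel's type_max()):

	#include <stdbool.h>
	#include <stdint.h>

	/* The minimum of an unsigned type is 0 and any value cast to it is
	 * >= 0, so only the maximum needs an explicit test. */
	static bool fits_in_u16(unsigned long v)
	{
		return v <= UINT16_MAX;	/* no v >= type_min() check needed */
	}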
2021-05-27  dyndbg: display KiB of data memory used  (Jim Cromie; 1 file, -3/+3)
If booted with verbose >= 1, dyndbg prints the memory usage, in bytes,
of builtin modules' pr_debug callsites. KiB reads better. No functional
changes.
Signed-off-by: Jim Cromie <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Greg Kroah-Hartman <[email protected]>