path: root/lib
2021-06-25  kunit: test: Add example tests which are always skipped  (David Gow, 1 file, +31/-0)
Add two new tests to the example test suite, both of which are always skipped. This is used as an example of how to write tests which are skipped, and to demonstrate the difference between kunit_skip() and kunit_mark_skipped(). Note that these tests are enabled by default, so a default run of KUnit will have two skipped tests. Signed-off-by: David Gow <[email protected]> Reviewed-by: Daniel Latypov <[email protected]> Reviewed-by: Brendan Higgins <[email protected]> Reviewed-by: Marco Elver <[email protected]> Signed-off-by: Shuah Khan <[email protected]>
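A minimal sketch of what such always-skipped tests look like (illustrative names, not necessarily the exact tests the patch adds):

    #include <kunit/test.h>

    static void example_skipped_test(struct kunit *test)
    {
            /* kunit_skip() marks the test skipped and aborts it. */
            kunit_skip(test, "this test should be skipped");
            KUNIT_FAIL(test, "never reached");
    }

    static void example_mark_skipped_test(struct kunit *test)
    {
            /* kunit_mark_skipped() records the skip but keeps running. */
            kunit_mark_skipped(test, "marked skipped, but still executes");
            KUNIT_EXPECT_TRUE(test, true);
    }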
2021-06-25  kunit: Support skipped tests  (David Gow, 3 files, +77/-21)
The kunit_mark_skipped() macro marks the current test as "skipped", with the provided reason. The kunit_skip() macro will mark the test as skipped, and abort the test. The TAP specification supports this "SKIP directive" as a comment after the "ok" / "not ok" for a test. See the "Directives" section of the TAP spec for details: https://testanything.org/tap-specification.html#directives The 'success' field for KUnit tests is replaced with a kunit_status enum, which can be SUCCESS, FAILURE, or SKIPPED, combined with a 'status_comment' containing information on why a test was skipped. A new 'kunit_status' test suite is added to test this. Signed-off-by: David Gow <[email protected]> Tested-by: Marco Elver <[email protected]> Reviewed-by: Daniel Latypov <[email protected]> Reviewed-by: Brendan Higgins <[email protected]> Signed-off-by: Shuah Khan <[email protected]>
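Per the TAP "SKIP directive" referenced above, a skipped test would then be reported roughly as (illustrative output line):

    ok 1 - example_skipped_test # SKIP this test should be skipped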
2021-06-25  lib/test: convert lib/test_list_sort.c to use KUnit  (Daniel Latypov, 2 files, +54/-80)
Functionally, this just means that the test output will be slightly changed and it'll now depend on CONFIG_KUNIT=y/m. It'll still run at boot time and can still be built as a loadable module.

There was a pre-existing patch to convert this test that I found later, here [1]. Compared to [1], this patch doesn't rename files and uses KUnit features more heavily (i.e. does more than converting pr_err() calls to KUNIT_FAIL()).

What this conversion gives us:
* a shorter test thanks to KUnit's macros
* a way to run this a bit more easily via kunit.py (and CONFIG_KUNIT_ALL_TESTS=y) [2]
* a structured way of reporting pass/fail
* uses kunit-managed allocations to avoid the risk of memory leaks
* more descriptive error messages, i.e. it prints out which fields are invalid, what the expected values are, etc.

What this conversion does not do:
* change the name of the file (and thus the name of the module)
* change the name of the config option

Leaving these as-is for now to minimize the impact to people wanting to run this test. IMO, that concern trumps following KUnit's style guide for both names, at least for now.

[1] https://lore.kernel.org/linux-kselftest/[email protected]/
[2] Can be run via:

    $ ./tools/testing/kunit/kunit.py run --kunitconfig /dev/stdin <<EOF
    CONFIG_KUNIT=y
    CONFIG_TEST_LIST_SORT=y
    EOF
    [16:55:56] Configuring KUnit Kernel ...
    [16:55:56] Building KUnit Kernel ...
    [16:56:29] Starting KUnit Kernel ...
    [16:56:32] ============================================================
    [16:56:32] ======== [PASSED] list_sort ========
    [16:56:32] [PASSED] list_sort_test
    [16:56:32] ============================================================
    [16:56:32] Testing complete. 1 tests run. 0 failed. 0 crashed.
    [16:56:32] Elapsed time: 35.668s total, 0.001s configuring, 32.725s building, 0.000s running

Note: the build time is as after a `make mrproper`.

Signed-off-by: Daniel Latypov <[email protected]> Tested-by: David Gow <[email protected]> Acked-by: Brendan Higgins <[email protected]> Signed-off-by: Shuah Khan <[email protected]>
2021-06-25  kunit: introduce kunit_kmalloc_array/kunit_kcalloc() helpers  (Daniel Latypov, 1 file, +12/-10)
Add in:
* kunit_kmalloc_array(), and wire up kunit_kmalloc() to be a special case of it
* kunit_kcalloc(), for symmetry with kunit_kzalloc()

This should make using KUnit more natural by making it more similar to the existing *alloc() APIs. And while we shouldn't necessarily be writing unit tests where overflow should be a concern, it can't hurt to be safe. Signed-off-by: Daniel Latypov <[email protected]> Reviewed-by: David Gow <[email protected]> Reviewed-by: Brendan Higgins <[email protected]> Signed-off-by: Shuah Khan <[email protected]>
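A hedged usage sketch; the signatures are assumed to mirror kmalloc_array()/kcalloc() with a struct kunit * and gfp argument, as the naming implies:

    static void example_alloc_test(struct kunit *test)
    {
            /* Test-managed allocations: freed automatically on test exit. */
            int *arr = kunit_kmalloc_array(test, 16, sizeof(*arr), GFP_KERNEL);
            int *zeroed = kunit_kcalloc(test, 16, sizeof(*zeroed), GFP_KERNEL);

            KUNIT_ASSERT_NOT_ERR_OR_NULL(test, arr);
            KUNIT_ASSERT_NOT_ERR_OR_NULL(test, zeroed);
    }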
2021-06-23  kunit: Add gnu_printf specifiers  (David Gow, 1 file, +3/-3)
Some KUnit functions use variable arguments to implement a printf-like format string. Use the __printf() attribute to let the compiler warn if invalid format strings are passed in. If the kernel is built with W=1, it complains about the lack of these specifiers, e.g.:

    ../lib/kunit/test.c:72:2: warning: function ‘kunit_log_append’ might be a candidate for ‘gnu_printf’ format attribute [-Wsuggest-attribute=format]

Signed-off-by: David Gow <[email protected]> Reviewed-by: Daniel Latypov <[email protected]> Acked-by: Brendan Higgins <[email protected]> Signed-off-by: Shuah Khan <[email protected]>
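For illustration, the attribute on a declaration looks like this (a sketch; kunit_log_append() is the function named in the warning, and the exact argument indices are an assumption):

    /* arg 2 is the format string; the variadic args start at arg 3 */
    void __printf(2, 3) kunit_log_append(char *log, const char *fmt, ...);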
2021-06-23  lib/cmdline_kunit: Remove a cast which is no longer required  (David Gow, 1 file, +1/-1)
With some of the stricter type checking in KUnit's EXPECT macros removed, a cast in cmdline_kunit is no longer required. Remove the unnecessary cast, using NULL instead of (int *) to make it clearer. Signed-off-by: David Gow <[email protected]> Acked-by: Andy Shevchenko <[email protected]> Reviewed-by: Brendan Higgins <[email protected]> Signed-off-by: Shuah Khan <[email protected]>
2021-06-22  clocksource: Provide kernel module to test clocksource watchdog  (Paul E. McKenney, 1 file, +12/-0)
When the clocksource watchdog marks a clock as unstable, this might be due to that clock being unstable or it might be due to delays that happen to occur between the reads of the two clocks. It would be good to have a way of testing the clocksource watchdog's ability to distinguish between these two causes of clock skew and instability.

Therefore, provide a new clocksource-wdtest module selected by a new TEST_CLOCKSOURCE_WATCHDOG Kconfig option. This module has a single module parameter named "holdoff" that provides the number of seconds of delay before testing should start, which defaults to zero when built as a module and to 10 seconds when built directly into the kernel. Very large systems that boot slowly may need to increase the value of this module parameter.

This module uses hand-crafted clocksource structures to do its testing, thus avoiding messing up timing for the rest of the kernel and for user applications. This module first verifies that the ->uncertainty_margin fields of the clocksource structures are set sanely. It then tests the delay-detection capability of the clocksource watchdog, increasing the number of consecutive delays injected, first provoking console messages complaining about the delays and finally forcing a clock-skew event. Unexpected test results cause at least one WARN_ON_ONCE() console splat. If there are no splats, the test has passed. Finally, it fuzzes the value returned from a clocksource to test the clocksource watchdog's ability to detect time skew.

This module checks the state of its clocksource after each test, and uses WARN_ON_ONCE() to emit a console splat if there are any failures. This should enable all types of test frameworks to detect any such failures.

This facility is intended for diagnostic use only, and should be avoided on production systems.

Reported-by: Chris Mason <[email protected]> Suggested-by: Thomas Gleixner <[email protected]> Signed-off-by: Paul E. McKenney <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Tested-by: Feng Tang <[email protected]> Link: https://lore.kernel.org/r/[email protected]
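A usage sketch, using the module and parameter names given above:

    modprobe clocksource-wdtest holdoff=5    # wait 5 seconds before testing starts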
2021-06-22  lockdep/selftest: Remove wait-type RCU_CALLBACK tests  (Peter Zijlstra, 1 file, +0/-17)
The problem is that rcu_callback_map doesn't have wait_types defined, and doing so would make it indistinguishable from SOFTIRQ in any case. Remove it. Fixes: 9271a40d2a14 ("lockdep/selftest: Add wait context selftests") Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Tested-by: Joerg Roedel <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2021-06-22  lockdep/selftests: Fix selftests vs PROVE_RAW_LOCK_NESTING  (Peter Zijlstra, 1 file, +1/-0)
When PROVE_RAW_LOCK_NESTING=y, many of the selftests FAILED because HARDIRQ context is out-of-bounds for spinlocks. Instead make the default hardirq context the threaded hardirq context, which preserves the old locking rules. The wait-type specific locking selftests will have a non-threaded HARDIRQ variant. Fixes: de8f5e4f2dc1 ("lockdep: Introduce wait-type checks") Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Tested-by: Joerg Roedel <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2021-06-22  locking/selftests: Add a selftest for check_irq_usage()  (Boqun Feng, 1 file, +65/-0)
Johannes Berg reported a lockdep problem which could be reproduced by the special test case introduced in this patch, so add it. Signed-off-by: Boqun Feng <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2021-06-22  locking/lockdep: Improve noinstr vs errors  (Peter Zijlstra, 1 file, +1/-1)
Better handle the failure paths:

    vmlinux.o: warning: objtool: debug_locks_off()+0x23: call to console_verbose() leaves .noinstr.text section
    vmlinux.o: warning: objtool: debug_locks_off()+0x19: call to __kasan_check_write() leaves .noinstr.text section
    debug_locks_off+0x19/0x40:
    instrument_atomic_write at include/linux/instrumented.h:86
    (inlined by) __debug_locks_off at include/linux/debug_locks.h:17
    (inlined by) debug_locks_off at lib/debug_locks.c:41

Fixes: 6eebad1ad303 ("lockdep: __always_inline more for noinstr") Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Signed-off-by: Ingo Molnar <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2021-06-22  lib/dump_stack: move cpu lock to printk.c  (John Ogness, 1 file, +2/-36)
dump_stack() implements its own cpu-reentrant spinning lock to best-effort serialize stack traces in the printk log. However, there are other functions (such as show_regs()) that can also benefit from this serialization.

Move the cpu-reentrant spinning lock (cpu lock) into new helper functions printk_cpu_lock_irqsave()/printk_cpu_unlock_irqrestore() so that it is available for others as well. For !CONFIG_SMP the cpu lock is a NOP.

Note that having multiple cpu locks in the system can easily lead to deadlock. Code needing a cpu lock should use the printk cpu lock, since the printk cpu lock could be acquired from any code and any context.

Also note that it is not necessary for a cpu lock to disable interrupts. However, in upcoming work this cpu lock will be used for emergency tasks (for example, atomic consoles during kernel crashes) and any interruptions while holding the cpu lock should be avoided if possible.

Signed-off-by: John Ogness <[email protected]> Reviewed-by: Sergey Senozhatsky <[email protected]> Reviewed-by: Petr Mladek <[email protected]> [[email protected]: Backported on top of 5.13-rc1.] Signed-off-by: Petr Mladek <[email protected]> Link: https://lore.kernel.org/r/[email protected]
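A sketch of the resulting API (helper names are from the message; passing flags by name, as with local_irq_save(), is an assumption):

    unsigned long flags;

    printk_cpu_lock_irqsave(flags);
    /* best-effort serialized, cpu-reentrant section */
    dump_stack();
    printk_cpu_unlock_irqrestore(flags);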
2021-06-18  sched: Change task_struct::state  (Peter Zijlstra, 1 file, +2/-2)
Change the type and name of task_struct::state. Drop the volatile and shrink it to an 'unsigned int'. Rename it in order to find all uses such that we can use READ_ONCE/WRITE_ONCE as appropriate. Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Reviewed-by: Daniel Bristot de Oliveira <[email protected]> Acked-by: Will Deacon <[email protected]> Acked-by: Daniel Thompson <[email protected]> Link: https://lore.kernel.org/r/[email protected]
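The new name is not given here; in the mainline change the field became task_struct::__state, with annotated accesses along these lines (an illustrative sketch):

    unsigned int state = READ_ONCE(p->__state);    /* was: p->state */
    WRITE_ONCE(p->__state, TASK_RUNNING);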
2021-06-18  Merge branch 'sched/urgent' into sched/core, to resolve conflicts  (Ingo Molnar, 1 file, +1/-1)
This commit in sched/urgent moved the cfs_rq_is_decayed() function:

    a7b359fc6a37: ("sched/fair: Correctly insert cfs_rq's to list on unthrottle")

and this fresh commit in sched/core modified it in the old location:

    9e077b52d86a: ("sched/pelt: Check that *_avg are null when *_sum are")

Merge the two variants.

Conflicts: kernel/sched/fair.c

Signed-off-by: Ingo Molnar <[email protected]>
2021-06-17  lib: add iomem emulation (logic_iomem)  (Johannes Berg, 3 files, +334/-0)
Add IO memory emulation that uses callbacks for read/write to the allocated regions. The callbacks can be registered by the users using logic_iomem_alloc(). To use, an architecture must 'select LOGIC_IOMEM' in Kconfig and then include <asm-generic/logic_io.h> into asm/io.h to get the __raw_read*/__raw_write* functions. Optionally, an architecture may 'select LOGIC_IOMEM_FALLBACK' in which case non-emulated regions will 'fall back' to the various real_* functions that must then be provided. Cc: Arnd Bergmann <[email protected]> Signed-off-by: Johannes Berg <[email protected]> Signed-off-by: Richard Weinberger <[email protected]>
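A sketch of the architecture-side wiring described above (the arch name is hypothetical; the Kconfig symbols and header are from the message):

    # arch/myarch/Kconfig (hypothetical)
    config MYARCH
            select LOGIC_IOMEM
            select LOGIC_IOMEM_FALLBACK   # optional: non-emulated regions use real_* helpers

    /* arch/myarch/include/asm/io.h */
    #include <asm-generic/logic_io.h>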
2021-06-14  Merge tag 'v5.13-rc6' into driver-core-next  (Greg Kroah-Hartman, 1 file, +1/-1)
We need the driver core fix in here as well. Signed-off-by: Greg Kroah-Hartman <[email protected]>
2021-06-14  Merge tag 'v5.13-rc6' into char-misc-next  (Greg Kroah-Hartman, 1 file, +1/-1)
We need the fixes in here as well. Signed-off-by: Greg Kroah-Hartman <[email protected]>
2021-06-11  kunit: Add 'kunit_shutdown' option  (David Gow, 1 file, +20/-0)
Add a new kernel command-line option, 'kunit_shutdown', which allows the user to specify that the kernel poweroff, halt, or reboot after completing all KUnit tests; this is very handy for running KUnit tests on UML or a VM so that the UML/VM process exits cleanly immediately after running all tests without needing a special initramfs. Signed-off-by: David Gow <[email protected]> Signed-off-by: Brendan Higgins <[email protected]> Reviewed-by: Stephen Boyd <[email protected]> Tested-By: Daniel Latypov <[email protected]> Reviewed-by: Daniel Latypov <[email protected]> Signed-off-by: Shuah Khan <[email protected]>
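For example, on the kernel command line (the accepted values, per the message, are poweroff, halt, or reboot):

    kunit_shutdown=poweroff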
2021-06-11  kunit: Fix result propagation for parameterised tests  (David Gow, 1 file, +3/-4)
When one parameter of a parameterised test failed, its failure would be propagated to the overall test, but not to the suite result (unless it was the last parameter). This is because test_case->success was being reset to the test->success result after each parameter was used, so a failing test's result would be overwritten by a non-failing result. The overall test result was handled in a third variable, test_result, but this was discarded after the status line was printed. Instead, just propagate the result after each parameter run. Signed-off-by: David Gow <[email protected]> Fixes: fadb08e7c750 ("kunit: Support for Parameterized Testing") Reviewed-by: Marco Elver <[email protected]> Reviewed-by: Brendan Higgins <[email protected]> Signed-off-by: Shuah Khan <[email protected]>
2021-06-10  bootconfig: Support mixing a value and subkeys under a key  (Masami Hiramatsu, 1 file, +45/-20)
Support mixing a value and subkeys under a key. Since kernel cmdline options will support "aaa.bbb=value1 aaa.bbb.ccc=value2", it is better that the bootconfig supports such configuration too.

Note that this does not change the syntax itself but just accepts mixed values and subkeys, e.g.:

    key = value1
    key.subkey = value2

But this is not accepted:

    key {
        value1
        subkey = value2
    }

That would make value1 a subkey.

Also, the order of the value node under a key is fixed. If there are a value and subkeys, the value is always the first child node of the key. Thus if the user specifies subkeys first, e.g.:

    key.subkey = value1
    key = value2

in the program (and /proc/bootconfig), it will be shown as below:

    key = value2
    key.subkey = value1

Link: https://lkml.kernel.org/r/162262194685.264090.7738574774030567419.stgit@devnote2 Signed-off-by: Masami Hiramatsu <[email protected]> Signed-off-by: Steven Rostedt (VMware) <[email protected]>
2021-06-10  bootconfig: Change array value to use child node  (Masami Hiramatsu, 1 file, +19/-4)
It is not possible to put an array value with subkeys under a key node, because both the subkeys and the array elements use the "next" field of the xbc_node. Thus this changes the array values to use the "child" field in the array case. The reason for splitting this change out is to make it easy to test. Link: https://lkml.kernel.org/r/162262193838.264090.16044473274501498656.stgit@devnote2 Signed-off-by: Masami Hiramatsu <[email protected]> Signed-off-by: Steven Rostedt (VMware) <[email protected]>
2021-06-10  csum_and_copy_to_pipe_iter(): leave handling of csum_state to caller  (Al Viro, 1 file, +18/-23)
... since all the logic is already there for use by the iovec/kvec/etc. cases. Signed-off-by: Al Viro <[email protected]>
2021-06-10  clean up copy_mc_pipe_to_iter()  (Al Viro, 1 file, +9/-24)
... and we don't need kmap_atomic() there - kmap_local_page() is fine. Signed-off-by: Al Viro <[email protected]>
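The substitution being made, as a generic sketch (both are standard highmem primitives; kmap_local_page() provides a CPU-local mapping without kmap_atomic()'s side effect of disabling preemption and pagefaults):

    void *p = kmap_local_page(page);    /* instead of kmap_atomic(page) */
    memcpy(p + offset, from, n);
    kunmap_local(p);                    /* instead of kunmap_atomic(p) */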
2021-06-10  pipe_zero(): we don't need no stinkin' kmap_atomic()...  (Al Viro, 1 file, +3/-1)
FWIW, memcpy_to_page() itself almost certainly ought to use kmap_local_page()... Signed-off-by: Al Viro <[email protected]>
2021-06-10  iov_iter: clean csum_and_copy_...() primitives up a bit  (Al Viro, 1 file, +4/-6)
1) kmap_atomic() is not needed here, kmap_local_page() is enough.
2) No need to make

    sum = csum_block_add(sum, next, off);

conditional upon next != 0 - adding 0 is a no-op as far as csum_block_add() is concerned.

Signed-off-by: Al Viro <[email protected]>
2021-06-10  copy_page_from_iter(): don't need kmap_atomic() for kvec/bvec cases  (Al Viro, 1 file, +2/-2)
kmap_local_page() is enough. Signed-off-by: Al Viro <[email protected]>
2021-06-10  copy_page_to_iter(): don't bother with kmap_atomic() for bvec/kvec cases  (Al Viro, 1 file, +3/-3)
kmap_local_page() is enough there. Moreover, we can use _copy_to_iter() for actual copying in those cases - no useful extra checks on the address we are copying from in that call. Signed-off-by: Al Viro <[email protected]>
2021-06-10  iterate_xarray(): only on the first iteration might we get offset != 0  (Al Viro, 1 file, +3/-3)
Recalculating offset on each iteration is pointless - on all subsequent passes through the loop it will be zero anyway. Signed-off-by: Al Viro <[email protected]>
2021-06-10  pull handling of ->iov_offset into iterate_{iovec,bvec,xarray}  (Al Viro, 1 file, +14/-12)
fewer arguments (by one, but still...) for iterate_...() macros Signed-off-by: Al Viro <[email protected]>
2021-06-10  iov_iter: make iterator callbacks use base and len instead of iovec  (Al Viro, 1 file, +91/-91)
Iterator macros used to provide the arguments for step callbacks in a structure matching the flavour - iovec for ITER_IOVEC, kvec for ITER_KVEC and bio_vec for ITER_BVEC. That already broke down for ITER_XARRAY (bio_vec there); now that we are using kvec callback for bvec and xarray cases, we are always passing a pointer + length (void __user * + size_t for ITER_IOVEC callback, void * + size_t for everything else).

Note that the original reason for bio_vec (page + offset + len) in case of ITER_BVEC used to be that we did *not* want to kmap a page when all we wanted was e.g. to find the alignment of its subrange. Now all such users are gone and the ones that are left want the page mapped anyway for actually copying the data.

So in all cases we have pointer + length, and there's no good reason for keeping those in struct iovec or struct kvec - we can just pass them to callback separately. Again, less boilerplate in callbacks...

Signed-off-by: Al Viro <[email protected]>
2021-06-10  iov_iter: make the amount already copied available to iterator callbacks  (Al Viro, 1 file, +50/-70)
Making iterator macros keep track of the amount of data copied is pretty easy and it has several benefits:

1) we no longer need the mess like (from += v.iov_len) - v.iov_len in the callbacks - initial value + total amount copied so far would do just fine.

2) less obviously, we no longer need to remember the initial amount of data we wanted to copy; the loops in iterator macros are along the lines of

    wanted = bytes;
    while (bytes) {
        copy some
        bytes -= copied
        if short copy
            break
    }
    bytes = wanted - bytes;

Replacement is

    offs = 0;
    while (bytes) {
        copy some
        offs += copied
        bytes -= copied
        if short copy
            break
    }
    bytes = offs;

That wouldn't be a win per se, but unlike the initial value of bytes, the amount copied so far *is* useful in callbacks.

3) in some cases (csum_and_copy_..._iter()) we already had offs manually maintained by the callbacks. With that change we can drop that.

Less boilerplate and more readable code...

Signed-off-by: Al Viro <[email protected]>
2021-06-10  iov_iter: get rid of separate bvec and xarray callbacks  (Al Viro, 1 file, +30/-82)
After the previous commit we have:
* xarray and bvec callbacks identical in all cases
* both equivalent to the kvec callback wrapped into a kmap_local_page()/kunmap_local() pair

So we can pass only two (iovec and kvec) callbacks to iterate_and_advance() and let iterate_{bvec,xarray} wrap them into kmap_local_page()/kunmap_local(). Signed-off-by: Al Viro <[email protected]>
2021-06-10  iov_iter: teach iterate_{bvec,xarray}() about possible short copies  (Al Viro, 1 file, +24/-41)
... and now we finally can sort out the mess in _copy_mc_to_iter(). Provide a variant of iterate_and_advance() that does *NOT* ignore the return values of bvec, xarray and kvec callbacks, use that in _copy_mc_to_iter(). That gets rid of magic in those callbacks - we used to need it so we'd get at least the right return value in case of failure halfway through. As a bonus, now iterator is advanced by the amount actually copied for all flavours. That's what the callers expect and it used to do that correctly in iovec and xarray cases. However, in kvec and bvec cases the iterator had not been advanced on such failures, breaking the users. Fixed now... Signed-off-by: Al Viro <[email protected]>
2021-06-10  iterate_bvec(): expand bvec.h macro forest, massage a bit  (Al Viro, 1 file, +20/-13)
... incidentally, using pointer instead of index in an array (the only change here) trims half-kilobyte of .text... Signed-off-by: Al Viro <[email protected]>
2021-06-10  iov_iter: unify iterate_iovec and iterate_kvec  (Al Viro, 1 file, +5/-23)
The differences between iterate_iovec and iterate_kvec are minor:
* kvec callback is treated as if it returned 0
* initialization of __p is with i->iov and i->kvec resp.

which is trivially dealt with.

No code generation changes - compiler is quite capable of turning

    left = ((void)(STEP), 0);
    __v.iov_len -= left;

(with no accesses to left downstream) and

    (void)(STEP);

into the same code.

Signed-off-by: Al Viro <[email protected]>
2021-06-10  iov_iter: massage iterate_iovec and iterate_kvec to logics similar to iterate_bvec  (Al Viro, 1 file, +36/-55)
Premature optimization is the root of all evil... Trying to unroll the first pass through the loop makes it harder to follow and not just for readers - compiler ends up generating worse code than it would on a "non-optimized" loop. Signed-off-by: Al Viro <[email protected]>
2021-06-10  iterate_and_advance(): get rid of magic in case when n is 0  (Al Viro, 1 file, +1/-1)
iov_iter_advance() needs to do some non-trivial work when it's given 0 as argument (skip all empty iovecs, mostly). We used to implement it via iterate_and_advance(); we no longer do so and for all other users of iterate_and_advance() zero length is a no-op. Signed-off-by: Al Viro <[email protected]>
2021-06-10  csum_and_copy_to_iter(): massage into form closer to csum_and_copy_from_iter()  (Al Viro, 1 file, +4/-4)
Namely, have off counted starting from 0 rather than from csstate->off. To compensate we need to shift the initial value (csstate->sum) (rotate by 8 bits, as usual for csum) and do the same after we are finished adding the pieces up. What we get out of that is a bit more redundancy in our variables - from is always equal to addr + off, which will be useful several commits down the road. Signed-off-by: Al Viro <[email protected]>
2021-06-10  iov_iter: replace iov_iter_copy_from_user_atomic() with iterator-advancing variant  (Al Viro, 1 file, +4/-26)
Replacement is called copy_page_from_iter_atomic(); unlike the old primitive the callers do *not* need to do iov_iter_advance() after it. In case when they end up consuming less than they'd been given they need to do iov_iter_revert() on everything they had not consumed. That, however, needs to be done only on slow paths. All in-tree callers converted. And that kills the last user of iterate_all_kinds(). Signed-off-by: Al Viro <[email protected]>
2021-06-10  [xarray] iov_iter_npages(): just use DIV_ROUND_UP()  (Al Viro, 1 file, +2/-14)
Compiler is capable of recognizing division by power of 2 and turning it into shifts. Signed-off-by: Al Viro <[email protected]>
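For reference, the identity being relied on (DIV_ROUND_UP is the stock kernel macro):

    /* DIV_ROUND_UP(n, d) == ((n) + (d) - 1) / (d); with d == PAGE_SIZE,
     * a power of two, the compiler reduces this to shifts and masks. */
    npages = DIV_ROUND_UP(offset + size, PAGE_SIZE);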
2021-06-10  iov_iter_npages(): don't bother with iterate_all_kinds()  (Al Viro, 1 file, +54/-34)
note that in bvec case pages can be compound ones - we can't just assume that each segment is covered by one (sub)page Signed-off-by: Al Viro <[email protected]>
2021-06-10  get rid of iterate_all_kinds() in iov_iter_get_pages()/iov_iter_get_pages_alloc()  (Al Viro, 1 file, +91/-56)
Here iterate_all_kinds() is used just to find the first (non-empty, in case of iovec) segment. Which can be easily done explicitly. Note that in bvec case we now can get more than PAGE_SIZE worth of them, in case when we have a compound page in bvec and a range that crosses a subpage boundary. Older behaviour had been to stop on that boundary; we used to get the right first page (for_each_bvec() took care of that), but that was all we'd got. Signed-off-by: Al Viro <[email protected]>
2021-06-10  iov_iter_gap_alignment(): get rid of iterate_all_kinds()  (Al Viro, 1 file, +14/-13)
For one thing, it's only used for iovec (and makes sense only for those). For another, here we don't care about iov_offset, since the beginning of the first segment and the end of the last one are ignored. So it makes a lot more sense to just walk through the iovec array... We need to deal with the case of truncated iov_iter, but unlike the situation with iov_iter_alignment() we don't care where the last segment ends - just which segment is the last one. [fixed a braino spotted by Qian Cai <[email protected]>] Signed-off-by: Al Viro <[email protected]>
2021-06-10  iov_iter_alignment(): don't bother with iterate_all_kinds()  (Al Viro, 1 file, +53/-10)
It's easier to go over the array manually. We need to watch out for truncated iov_iter, though - iovec array might cover more than i->count. Signed-off-by: Al Viro <[email protected]>
2021-06-10  sanitize iov_iter_fault_in_readable()  (Al Viro, 1 file, +16/-10)
1) constify iov_iter argument; we are not advancing it in this primitive.

2) cap the amount requested by the amount of data in iov_iter. All existing callers should've been safe, but the check is really cheap and doing it here makes for easier analysis, as well as more consistent semantics among the primitives.

3) don't bother with iterate_iovec(). Explicit loop is not any harder to follow, and we get rid of standalone iterate_iovec() users - it's only used by iterate_and_advance() and (soon to be gone) iterate_all_kinds().

Signed-off-by: Al Viro <[email protected]>
2021-06-10  iov_iter: optimize iov_iter_advance() for iovec and kvec  (Al Viro, 1 file, +28/-14)
We can do better than generic iterate_and_advance() for this one; inspired by bvec_iter_advance() (and massaged into that form by equivalent transformations). [fixed a braino caught by kernel test robot <[email protected]>] Signed-off-by: Al Viro <[email protected]>
2021-06-10  iov_iter: separate direction from flavour  (Al Viro, 1 file, +48/-37)
Instead of having them mixed in iter->type, use separate ->iter_type and ->data_source (u8 and bool resp.) And don't bother with (pseudo-) bitmap for the former - microoptimizations from being able to check if the flavour is one of two values are not worth the confusion for optimizer. It can't prove that we never get e.g. ITER_IOVEC | ITER_PIPE, so we end up with extra headache. Signed-off-by: Al Viro <[email protected]>
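A sketch of the resulting layout (field names and types straight from the message; the rest of the struct is elided):

    struct iov_iter {
            u8 iter_type;      /* ITER_IOVEC, ITER_KVEC, ITER_BVEC, ... */
            bool data_source;  /* direction, split out of the old ->type */
            /* ... */
    };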
2021-06-10  iov_iter_advance(): don't modify ->iov_offset for ITER_DISCARD  (Al Viro, 1 file, +0/-2)
the field is not used for that flavour Signed-off-by: Al Viro <[email protected]>
2021-06-10  iov_iter: reorder handling of flavours in primitives  (Al Viro, 1 file, +45/-46)
iovec is the most common one; test it first and test explicitly, rather than "not anything else". Replace all flavour checks with use of iov_iter_is_...() helpers. Signed-off-by: Al Viro <[email protected]>
2021-06-10  iov_iter: switch ..._full() variants of primitives to use of iov_iter_revert()  (Al Viro, 1 file, +0/-104)
Use corresponding plain variants, revert on short copy. That's the way it should've been done from the very beginning, except that we didn't have iov_iter_revert() back then... [fixed another braino caught by Qian Cai <[email protected]>] Signed-off-by: Al Viro <[email protected]>