Age | Commit message | Author | Files | Lines |
|
When a number of tests fail, it can be useful to get higher-level
statistics on how many tests are failing (or how many parameters are
failing in parameterised tests), and in which test cases or suites. This is
already done by some non-KUnit tests, so add support for automatically
generating these for KUnit tests.
This change adds a 'kunit.stats_enabled' switch which has three values:
- 0: No stats are printed (current behaviour)
- 1: Stats are printed only for tests/suites with more than one
subtest (new default)
- 2: Always print test statistics
For parameterised tests, the summary line looks as follows:
" # inode_test_xtimestamp_decoding: pass:16 fail:0 skip:0 total:16"
For test suites, there are two lines looking like this:
"# ext4_inode_test: pass:1 fail:0 skip:0 total:1"
"# Totals: pass:16 fail:0 skip:0 total:16"
The first line gives the number of direct subtests; the second, "Totals",
line is the accumulated sum of all tests and test parameters.
This format is based on the one used by kselftest[1].
[1]: https://elixir.bootlin.com/linux/latest/source/tools/testing/selftests/kselftest.h#L109
Signed-off-by: David Gow <[email protected]>
Reviewed-by: Brendan Higgins <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest
Pull KUnit update from Shuah Khan:
"Fixes and features:
- add support for skipped tests
- introduce kunit_kmalloc_array/kunit_kcalloc() helpers
- add gnu_printf specifiers
- add kunit_shutdown
- add unit test for filtering suites by names
- convert lib/test_list_sort.c to use KUnit
- code organization moving default config to tools/testing/kunit
- refactor of internal parser input handling
- cleanups and updates to documentation
- code cleanup related to casts"
* tag 'linux-kselftest-kunit-fixes-5.14-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest: (29 commits)
kunit: add unit test for filtering suites by names
kasan: test: make use of kunit_skip()
kunit: test: Add example tests which are always skipped
kunit: tool: Support skipped tests in kunit_tool
kunit: Support skipped tests
thunderbolt: test: Reinstate a few casts of bitfields
kunit: tool: internal refactor of parser input handling
lib/test: convert lib/test_list_sort.c to use KUnit
kunit: introduce kunit_kmalloc_array/kunit_kcalloc() helpers
kunit: Remove the unused all_tests.config
kunit: Move default config from arch/um -> tools/testing/kunit
kunit: arch/um/configs: Enable KUNIT_ALL_TESTS by default
kunit: Add gnu_printf specifiers
lib/cmdline_kunit: Remove a cast which are no-longer required
kernel/sysctl-test: Remove some casts which are no-longer required
thunderbolt: test: Remove some casts which are no longer required
mmc: sdhci-of-aspeed: Remove some unnecessary casts from KUnit tests
iio: Remove a cast in iio-test-format which is no longer required
device property: Remove some casts in property-entry-test
Documentation: kunit: Clean up some string casts in examples
...
|
|
The upcoming SLUB kunit test will be calling kunit_find_named_resource()
from a context with disabled interrupts. That means kunit's test->lock
needs to be IRQ safe to avoid potential deadlocks and lockdep splats.
This patch therefore changes the test->lock usage to spin_lock_irqsave()
and spin_unlock_irqrestore().
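For illustration, a minimal sketch of the resulting pattern, assuming
test->lock stays an ordinary spinlock_t guarding the resource list:

#include <kunit/test.h>
#include <linux/spinlock.h>

/* Sketch: the same critical section, now safe to enter with IRQs disabled. */
static void example_touch_resources(struct kunit *test)
{
        unsigned long flags;

        spin_lock_irqsave(&test->lock, flags);
        /* ... look up or modify the test's resource list ... */
        spin_unlock_irqrestore(&test->lock, flags);
}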
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Vlastimil Babka <[email protected]>
Signed-off-by: Oliver Glitta <[email protected]>
Reviewed-by: Brendan Higgins <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Daniel Latypov <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: Joonsoo Kim <[email protected]>
Cc: Marco Elver <[email protected]>
Cc: Pekka Enberg <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
This adds unit tests for kunit_filter_subsuite() and
kunit_filter_suites().
Note: what the executor means by "subsuite" is the array of suites
corresponding to each test file.
This patch lightly refactors executor.c to avoid the use of global
variables, making it testable.
It also includes a clever `kfree_at_end()` helper that makes this test
easier to write than it otherwise would have been.
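A sketch of what such a helper can look like, assuming the generalized
resource API (NULL init callback, free callback receiving the resource);
the in-tree implementation may differ in detail:

#include <kunit/test.h>
#include <linux/slab.h>

/* Free callback: kfree() the pointer associated with the resource. */
static void kfree_res_free(struct kunit_resource *res)
{
        kfree(res->data);
}

/* Register 'to_free' so it is automatically kfree()d when the test ends. */
static void kfree_at_end(struct kunit *test, const void *to_free)
{
        kunit_alloc_resource(test, NULL, kfree_res_free, GFP_KERNEL,
                             (void *)to_free);
}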
Tested by running just the new tests using KUnit itself:
$ ./tools/testing/kunit/kunit.py run '*exec*'
Signed-off-by: Daniel Latypov <[email protected]>
Reviewed-by: David Gow <[email protected]>
Acked-by: Brendan Higgins <[email protected]>
Tested-by: Brendan Higgins <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
|
|
Add two new tests to the example test suite, both of which are always
skipped. These serve as examples of how to write skipped tests, and
demonstrate the difference between kunit_skip() and
kunit_mark_skipped().
Note that these tests are enabled by default, so a default run of KUnit
will have two skipped tests.
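A sketch of what one of these always-skipped tests can look like (the
test name and messages here are illustrative):

#include <kunit/test.h>

static void example_skip_test(struct kunit *test)
{
        /* Mark the test as skipped and abort it immediately. */
        kunit_skip(test, "this test should be skipped");

        /* Never reached: kunit_skip() aborts the running test case. */
        KUNIT_FAIL(test, "this line should not be reached");
}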
Signed-off-by: David Gow <[email protected]>
Reviewed-by: Daniel Latypov <[email protected]>
Reviewed-by: Brendan Higgins <[email protected]>
Reviewed-by: Marco Elver <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
|
|
The kunit_mark_skipped() macro marks the current test as "skipped", with
the provided reason. The kunit_skip() macro will mark the test as
skipped, and abort the test.
The TAP specification supports this "SKIP directive" as a comment after
the "ok" / "not ok" for a test. See the "Directives" section of the TAP
spec for details:
https://testanything.org/tap-specification.html#directives
The 'success' field for KUnit tests is replaced with a kunit_status
enum, which can be SUCCESS, FAILURE, or SKIPPED, combined with a
'status_comment' containing information on why a test was skipped.
A new 'kunit_status' test suite is added to test this.
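A sketch of the new status values (the KUNIT_-prefixed enumerator names
are an assumption here):

/* Replaces the old boolean 'success' field on tests and suites. */
enum kunit_status {
        KUNIT_SUCCESS,
        KUNIT_FAILURE,
        KUNIT_SKIPPED,
};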
Signed-off-by: David Gow <[email protected]>
Tested-by: Marco Elver <[email protected]>
Reviewed-by: Daniel Latypov <[email protected]>
Reviewed-by: Brendan Higgins <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
|
|
Add in:
* kunit_kmalloc_array() and wire up kunit_kmalloc() to be a special
case of it.
* kunit_kcalloc() for symmetry with kunit_kzalloc()
This should make using KUnit more natural by making it more similar to
the existing *alloc() APIs.
And while we shouldn't necessarily be writing unit tests where overflow
should be a concern, it can't hurt to be safe.
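A usage sketch, assuming the helpers mirror kmalloc_array()/kcalloc()
with an extra struct kunit argument (the element type and counts here
are illustrative):

#include <kunit/test.h>

static void example_alloc_test(struct kunit *test)
{
        int *values, *zeroed;

        /* Test-managed, overflow-checked array allocation. */
        values = kunit_kmalloc_array(test, 16, sizeof(*values), GFP_KERNEL);
        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, values);

        /* Same, but zero-initialised, symmetric with kunit_kzalloc(). */
        zeroed = kunit_kcalloc(test, 16, sizeof(*zeroed), GFP_KERNEL);
        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, zeroed);
}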
Signed-off-by: Daniel Latypov <[email protected]>
Reviewed-by: David Gow <[email protected]>
Reviewed-by: Brendan Higgins <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
|
|
Some KUnit functions use variable arguments to implement a printf-like
format string. Use the __printf() attribute to let the compiler warn if
invalid format strings are passed in.
If the kernel is built with W=1, it complains about the lack of these
specifiers, e.g.:
../lib/kunit/test.c:72:2: warning: function ‘kunit_log_append’ might be a candidate for ‘gnu_printf’ format attribute [-Wsuggest-attribute=format]
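For example, the annotation on the logging helper named in that warning
looks roughly like this (the argument positions assume a 'log, fmt, ...'
signature):

#include <linux/compiler.h>

/* Argument 2 is the format string and the variadic arguments start at
 * position 3, so mismatched format strings now trigger compiler warnings. */
void __printf(2, 3) kunit_log_append(char *log, const char *fmt, ...);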
Signed-off-by: David Gow <[email protected]>
Reviewed-by: Daniel Latypov <[email protected]>
Acked-by: Brendan Higgins <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
|
|
Add a new kernel command-line option, 'kunit_shutdown', which allows the
user to specify that the kernel should power off, halt, or reboot after
completing all KUnit tests; this is very handy for running KUnit tests
on UML or a VM so that the UML/VM process exits cleanly immediately
after running all tests without needing a special initramfs.
Signed-off-by: David Gow <[email protected]>
Signed-off-by: Brendan Higgins <[email protected]>
Reviewed-by: Stephen Boyd <[email protected]>
Tested-By: Daniel Latypov <[email protected]>
Reviewed-by: Daniel Latypov <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
|
|
When one parameter of a parameterised test failed, its failure would be
propagated to the overall test, but not to the suite result (unless it
was the last parameter).
This is because test_case->success was being reset to the test->success
result after each parameter was used, so a failing test's result would
be overwritten by a non-failing result. The overall test result was
handled in a third variable, test_result, but this was discarded after
the status line was printed.
Instead, just propagate the result after each parameter run.
Signed-off-by: David Gow <[email protected]>
Fixes: fadb08e7c750 ("kunit: Support for Parameterized Testing")
Reviewed-by: Marco Elver <[email protected]>
Reviewed-by: Brendan Higgins <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
|
|
Add a kunit_fail_current_test() function to fail the currently running
test, if any, with an error message.
This is largely intended for dynamic analysis tools like UBSAN and for
fakes.
E.g. say I had a fake ops struct for testing and I wanted my `free`
function to complain if it was called with an invalid argument, or
caught a double-free. Most such ops return void and have no normal means
of signalling failure (e.g. super_operations, iommu_ops, etc.).
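A minimal usage sketch: a fake release hook that fails the currently
running test when it detects a double free. The struct and field names
are hypothetical; only kunit_fail_current_test() and the
<kunit/test-bug.h> header come from this change.

#include <kunit/test-bug.h>

struct fake_buffer {
        bool freed;
};

/* Fake 'free' op: it returns void, so report misuse via the running test. */
static void fake_buffer_free(struct fake_buffer *buf)
{
        if (buf->freed)
                kunit_fail_current_test("double free of fake_buffer %p", buf);
        buf->freed = true;
}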
Key points:
* Always update current->kunit_test so anyone can use it.
* commit 83c4e7a0363b ("KUnit: KASAN Integration") only updated it for
CONFIG_KASAN=y
* Create a new header <kunit/test-bug.h> so non-test code doesn't have
to include all of <kunit/test.h> (e.g. lib/ubsan.c)
* Forward the file and line number to make it easier to track down
failures
* Declare the helper function for nice __printf() warnings about mismatched
format strings even when KUnit is not enabled.
Example output from kunit_fail_current_test("message"):
[15:19:34] [FAILED] example_simple_test
[15:19:34] # example_simple_test: initializing
[15:19:34] # example_simple_test: lib/kunit/kunit-example-test.c:24: message
[15:19:34] not ok 1 - example_simple_test
Fixed a minor checkpatch warning with the checkpatch --fix option:
Shuah Khan <[email protected]>
Signed-off-by: Daniel Latypov <[email protected]>
Signed-off-by: Uriel Guajardo <[email protected]>
Reviewed-by: Alan Maguire <[email protected]>
Reviewed-by: Brendan Higgins <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
|
|
TL;DR
$ ./tools/testing/kunit/kunit.py run --kunitconfig=lib/kunit
Per suggestion from Ted [1], we can reduce the amount of typing by
assuming a convention that these files are named '.kunitconfig'.
In the case of [1], we now have
$ ./tools/testing/kunit/kunit.py run --kunitconfig=fs/ext4
Also add such a fragment for kunit itself so we can give that as an
example closer to home (and thus less likely to be accidentally
broken).
[1] https://lore.kernel.org/linux-ext4/[email protected]/
Signed-off-by: Daniel Latypov <[email protected]>
Reviewed-by: David Gow <[email protected]>
Reviewed-by: Brendan Higgins <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
|
|
Before:
> Expected str == "world", but
> str == hello
> "world" == world
After:
> Expected str == "world", but
> str == "hello"
<we don't need to tell the user that "world" == "world">
Note: like the literal elision for integers, this doesn't handle the
case of
KUNIT_EXPECT_STREQ(test, "hello", "world")
since we don't expect it to realistically happen in checked-in tests.
(If you really wanted a test to fail, KUNIT_FAIL("msg") exists)
In that case, you'd get:
> Expected "hello" == "world", but
<output for next failure>
Signed-off-by: Daniel Latypov <[email protected]>
Reviewed-by: Brendan Higgins <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
|
|
Tidy up code by fixing the following checkpatch warnings:
CHECK: Alignment should match open parenthesis
CHECK: Lines should not end with a '('
Signed-off-by: Lucas Stankus <[email protected]>
Reviewed-by: Brendan Higgins <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
|
|
E.g. specifying this would run suites with "list" in their name:
kunit.filter_glob=list*
Note: the executor prints out a TAP header that includes the number of
suites we intend to run.
So unless we want to report empty results for filtered-out suites, we
need to do the filtering here in the executor.
It's also probably better in the executor since we most likely don't
want any filtering to apply to tests built as modules.
This code does add a CONFIG_GLOB=y dependency for CONFIG_KUNIT=y.
But the code seems light enough that it shouldn't be an issue.
For now, we only filter on suite names so we don't have to create copies
of the suites themselves, just the array (of arrays) holding them.
The name is rather generic since in the future, we could consider
extending it to a syntax like:
kunit.filter_glob=<suite_glob>.<test_glob>
E.g. to run all the del list tests
kunit.filter_glob=list-kunit-test.*del*
But at the moment, it's far easier to manually comment out test cases in
test files as opposed to messing with sets of Kconfig entries to select
specific suites.
So even just doing this makes using kunit far less annoying.
Signed-off-by: Daniel Latypov <[email protected]>
Reviewed-by: Brendan Higgins <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
|
|
Currently, given something (fairly dystopian) like
> KUNIT_EXPECT_EQ(test, 2 + 2, 5)
KUnit will print a failure message like this:
> Expected 2 + 2 == 5, but
> 2 + 2 == 4
> 5 == 5
With this patch, the output just becomes
> Expected 2 + 2 == 5, but
> 2 + 2 == 4
This patch is slightly hacky, but it's quite common* to compare an
expression to a literal integer value, so this can make KUnit less
chatty in many cases. (This patch also fixes variants like
KUNIT_EXPECT_GT, LE, et al.).
It also allocates an additional string briefly, but given this only
happens on test failures, it doesn't seem too bad a tradeoff.
Also, in most cases it'll realize the lengths are unequal and bail out
before the allocation.
We could save the result of the formatted string to avoid wasting this
extra work, but it felt cleaner to leave it as-is.
Edge case: for something silly and unrealistic like
> KUNIT_EXPECT_EQ(test, 4, 5);
it'll generate this message with a trailing "but":
> Expected 4 == 5, but
> <next line of normal output>
It didn't feel worth adding a check up-front to see if both sides are
literals to handle this better.
*A quick grep suggests 100+ comparisons to an integer literal as the
right hand side.
Signed-off-by: Daniel Latypov <[email protected]>
Tested-by: David Gow <[email protected]>
Reviewed-by: Brendan Higgins <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
|
|
Implementation of support for parameterized testing in KUnit. This
approach requires the creation of a test case using the
KUNIT_CASE_PARAM() macro that accepts a generator function as input.
This generator function should return the next parameter given the
previous parameter in parameterized tests. It also provides a macro to
generate common-case generators based on arrays. Generators may also
optionally provide a human-readable description of parameters, which is
displayed where available.
Note, currently the result of each parameter run is displayed in
diagnostic lines, and only the overall test case output summarizes
TAP-compliant success or failure of all parameter runs. In future, when
supported by kunit-tool, these can be turned into subsubtest outputs.
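A minimal sketch of the resulting interface, assuming the array-based
generator macro is named KUNIT_ARRAY_PARAM() and the current parameter
is exposed via test->param_value (treat the details as illustrative):

#include <kunit/test.h>

static const int example_params[] = { 1, 2, 3 };

/* Generates example_gen_params(), which walks the array above. */
KUNIT_ARRAY_PARAM(example, example_params, NULL);

/* The current parameter is available via test->param_value. */
static void example_param_test(struct kunit *test)
{
        const int *param = test->param_value;

        KUNIT_EXPECT_GT(test, *param, 0);
}

static struct kunit_case example_param_cases[] = {
        KUNIT_CASE_PARAM(example_param_test, example_gen_params),
        {}
};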
Signed-off-by: Arpitha Raghunandan <[email protected]>
Co-developed-by: Marco Elver <[email protected]>
Signed-off-by: Marco Elver <[email protected]>
Reviewed-by: David Gow <[email protected]>
Tested-by: David Gow <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest
Pull more KUnit updates from Shuah Khan:
- add KUnit to kernel_init() and remove KUnit from init calls entirely.
This addresses the concern that KUnit would not work correctly during
the late init phase.
- add a linker section where KUnit can put references to its test
suites.
This is the first step in transitioning to dispatching all KUnit
tests from a centralized executor rather than having each as its own
separate late_initcall.
- add a centralized executor to dispatch tests rather than relying on
late_initcall to schedule each test suite separately. Centralized
execution is for built-in tests only; modules will execute tests when
loaded.
- convert bitfield test to use KUnit framework
- Documentation updates for naming guidelines and how
kunit_test_suite() works.
- add test plan to KUnit TAP format
* tag 'linux-kselftest-kunit-5.10-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest:
lib: kunit: Fix compilation test when using TEST_BIT_FIELD_COMPILE
lib: kunit: add bitfield test conversion to KUnit
Documentation: kunit: add a brief blurb about kunit_test_suite
kunit: test: add test plan to KUnit TAP format
init: main: add KUnit to kernel init
kunit: test: create a single centralized executor for all tests
vmlinux.lds.h: add linker section for KUnit test suites
Documentation: kunit: Add naming guidelines
|
|
Integrate KASAN into KUnit testing framework.
- Fail tests when KASAN reports an error that is not expected
- Use KUNIT_EXPECT_KASAN_FAIL to expect a KASAN error in KASAN
tests
- Expected KASAN reports pass tests and are still printed when run
without kunit_tool (kunit_tool still bypasses the report due to the
test passing)
- KUnit struct in current task used to keep track of the current
test from KASAN code
Make use of "[PATCH v3 kunit-next 1/2] kunit: generalize kunit_resource
API beyond allocated resources" and "[PATCH v3 kunit-next 2/2] kunit: add
support for named resources" from Alan Maguire [1]
- A named resource is added to a test when a KASAN report is
expected
- This resource contains a struct for kasan_data containing
booleans representing if a KASAN report is expected and if a
KASAN report is found
[1] (https://lore.kernel.org/linux-kselftest/[email protected]/T/#t)
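For reference, a usage sketch of the expectation macro named above
(KUNIT_EXPECT_KASAN_FAIL is provided by the KASAN test code itself; the
allocation and out-of-bounds write here are illustrative):

#include <kunit/test.h>
#include <linux/slab.h>

static void kmalloc_oob_example(struct kunit *test)
{
        size_t size = 16;
        char *area = kmalloc(size, GFP_KERNEL);

        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, area);

        /* The test fails unless this access produces a KASAN report. */
        KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)area)[size] = 'x');

        kfree(area);
}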
Signed-off-by: Patricia Alfonso <[email protected]>
Signed-off-by: David Gow <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Tested-by: Andrey Konovalov <[email protected]>
Reviewed-by: Andrey Konovalov <[email protected]>
Reviewed-by: Dmitry Vyukov <[email protected]>
Acked-by: Brendan Higgins <[email protected]>
Cc: Andrey Ryabinin <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Shuah Khan <[email protected]>
Cc: Vincent Guittot <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Linus Torvalds <[email protected]>
|
|
TAP 14 allows an optional test plan to be emitted before the start of
testing [1]; this is valuable because it makes it possible
for a test harness to detect whether the number of tests run matches the
number of tests expected to be run, ensuring that no tests silently
failed.
Link[1]: https://github.com/isaacs/testanything.github.io/blob/tap14/tap-version-14-specification.md#the-plan
Signed-off-by: Brendan Higgins <[email protected]>
Reviewed-by: Stephen Boyd <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
|
|
Although we have not seen any actual examples where KUnit doesn't work
because it runs in the late init phase of the kernel, it has been a
concern for some time that this could potentially be an issue in the
future. So, remove KUnit from init calls entirely; instead, call it
directly from kernel_init() so that KUnit runs after late init.
Co-developed-by: Alan Maguire <[email protected]>
Signed-off-by: Alan Maguire <[email protected]>
Signed-off-by: Brendan Higgins <[email protected]>
Reviewed-by: Stephen Boyd <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Reviewed-by: Luis Chamberlain <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
|
|
Add a centralized executor to dispatch tests rather than relying on
late_initcall to schedule each test suite separately. Centralized
execution is for built-in tests only; modules will execute tests when
loaded.
Signed-off-by: Alan Maguire <[email protected]>
Co-developed-by: Iurii Zaikin <[email protected]>
Signed-off-by: Iurii Zaikin <[email protected]>
Co-developed-by: Brendan Higgins <[email protected]>
Signed-off-by: Brendan Higgins <[email protected]>
Reviewed-by: Stephen Boyd <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
|
|
The kunit resources API allows for custom initialization and
cleanup code (init/fini); here a new resource add function sets
the "struct kunit_resource" "name" field, and calls the standard
add function. Having a simple way to name resources is
useful in cases such as multithreaded tests where a set of
resources are shared among threads; a pointer to the
"struct kunit *" test state then is all that is needed to
retrieve and use named resources. Support is provided to add,
find and destroy named resources; the latter two are simply
wrappers that use a "match-by-name" callback.
If an attempt is made to add a resource with a name that already
exists, kunit_add_named_resource() will return -EEXIST.
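A short sketch of the add/find flow, assuming the signatures this series
settles on (caller-provided struct kunit_resource, NULL init/free for
static data); the payload struct is hypothetical:

#include <kunit/test.h>

struct shared_ctx {
        int value;
};

static void named_resource_example(struct kunit *test)
{
        static struct kunit_resource res;
        static struct shared_ctx ctx = { .value = 42 };
        struct kunit_resource *found;

        /* Adding a second resource with the same name returns -EEXIST. */
        KUNIT_ASSERT_EQ(test, 0,
                        kunit_add_named_resource(test, NULL, NULL, &res,
                                                 "shared-ctx", &ctx));

        /* Any thread holding the struct kunit * can look it up by name... */
        found = kunit_find_named_resource(test, "shared-ctx");
        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, found);
        KUNIT_EXPECT_EQ(test, ((struct shared_ctx *)found->data)->value, 42);

        /* ...and must drop its reference when done. */
        kunit_put_resource(found);
}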
Signed-off-by: Alan Maguire <[email protected]>
Reviewed-by: Brendan Higgins <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
|
|
In its original form, the kunit resources API - consisting of the
struct kunit_resource and associated functions - was focused on
adding allocated resources during test operation that would be
automatically cleaned up on test completion.
The recent RFC patch proposing converting KASAN tests to KUnit [1]
showed another potential model - where outside of test context,
but with a pointer to the test state, we wish to access/update
test-related data, but expressly want to avoid allocations.
It turns out we can generalize the kunit_resource to support
static resources where the struct kunit_resource * is passed
in and initialized for us. As part of this work, we also
change the "allocation" field to the more general "data" name,
as instead of associating an allocation, we can associate a
pointer to static data. Static data is distinguished by a NULL
free function. A test is added to cover using kunit_add_resource()
with a static resource and data.
Finally we also make use of the kernel's krefcount interfaces
to manage reference counting of KUnit resources. The motivation
for this is simple; if we have kernel threads accessing and
using resources (say via kunit_find_resource()) we need to
ensure we do not remove said resources (or indeed free them
if they were dynamically allocated) until the reference count
reaches zero. A new function - kunit_put_resource() - is
added to handle this, and it should be called after a
thread using kunit_find_resource() is finished with the
retrieved resource.
We ensure that the functions needed to look up, use and
drop reference count are "static inline"-defined so that
they can be used by builtin code as well as modules in
the case that KUnit is built as a module.
A cosmetic change here also; I've tried moving to
kunit_[action]_resource() as the format of function names
for consistency and readability.
[1] https://lkml.org/lkml/2020/2/26/1286
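A minimal sketch of the static-resource usage described above (variable
names illustrative):

#include <kunit/test.h>

static void static_resource_example(struct kunit *test)
{
        static struct kunit_resource res;
        static int payload = 123;

        /* NULL init and free callbacks: res.data simply points at static
         * data, and nothing is allocated or freed when the test ends. */
        KUNIT_ASSERT_EQ(test, 0,
                        kunit_add_resource(test, NULL, NULL, &res, &payload));

        KUNIT_EXPECT_PTR_EQ(test, res.data, (void *)&payload);
}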
Signed-off-by: Alan Maguire <[email protected]>
Reviewed-by: Brendan Higgins <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
|
|
This makes it easier to enable all KUnit fragments.
Adding 'if !KUNIT_ALL_TESTS' means individual tests cannot be turned
off on their own: if KUNIT_ALL_TESTS is enabled, the individual prompts
are hidden in menuconfig.
Reviewed-by: David Gow <[email protected]>
Signed-off-by: Anders Roxell <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
|
|
Make it easier to enable all KUnit fragments. This is useful for kernel
devs and testers, so it's easy to get all KUnit tests enabled, and newly
added ones will be enabled as well. Fragments that have to be built in
will be missed if CONFIG_KUNIT_ALL_TESTS is set as a module.
Signed-off-by: Anders Roxell <[email protected]>
Reviewed-by: Brendan Higgins <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
|
|
Add missing newline, as otherwise flushing of the final summary message
to the console log can be delayed.
Fixes: e2219db280e3 ("kunit: add debugfs /sys/kernel/debug/kunit/<suite>/results display")
Signed-off-by: Marco Elver <[email protected]>
Tested-by: David Gow <[email protected]>
Reviewed-by: Alan Maguire <[email protected]>
Acked-by: Brendan Higgins <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
|
|
Introduce KUNIT_SUBTEST_INDENT macro which corresponds to 4-space
indentation and KUNIT_SUBSUBTEST_INDENT macro which corresponds to
8-space indentation in line with TAP spec (e.g. see "Subtests"
section of https://node-tap.org/tap-protocol/).
Use these macros in place of one or two tabs in strings to clarify
why we are indenting.
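Roughly, the macros look as follows (the in-tree definitions may compose
the strings differently):

/* One level of TAP subtest indentation (4 spaces)... */
#define KUNIT_SUBTEST_INDENT            "    "
/* ...and two levels (8 spaces) for sub-subtests. */
#define KUNIT_SUBSUBTEST_INDENT         "        "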
Suggested-by: Frank Rowand <[email protected]>
Signed-off-by: Alan Maguire <[email protected]>
Reviewed-by: Frank Rowand <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
|
|
The logging test ensures that multiple logged strings appear in the
log string associated with the test when CONFIG_KUNIT_DEBUGFS is
enabled.
Signed-off-by: Alan Maguire <[email protected]>
Reviewed-by: Brendan Higgins <[email protected]>
Reviewed-by: Frank Rowand <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
|
|
Add debugfs support for displaying kunit test suite results; this is
especially useful for module-loaded tests, allowing test result display
to be disentangled from other dmesg events. debugfs support is
provided if CONFIG_KUNIT_DEBUGFS=y.
As well as printk()ing messages, we append them to a per-test log.
Signed-off-by: Alan Maguire <[email protected]>
Reviewed-by: Brendan Higgins <[email protected]>
Reviewed-by: Frank Rowand <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
|
|
KUnit assertions and expectations will print the values being tested. If
these are pointers (e.g., KUNIT_EXPECT_PTR_EQ(test, a, b)), these
pointers are currently printed with the %pK format specifier, which -- to
prevent information leaks which may compromise, e.g., ASLR -- are often
either hashed or replaced with ____ptrval____ or similar, making debugging
tests difficult.
By replacing %pK with %px as Documentation/core-api/printk-formats.rst
suggests, we disable this security feature for KUnit assertions and
expectations, allowing the actual pointer values to be printed. Given
that KUnit is not intended for use in production kernels, and the
pointers are only printed on failing tests, this seems like a worthwhile
tradeoff.
Signed-off-by: David Gow <[email protected]>
Reviewed-by: Brendan Higgins <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
|
|
Making kunit itself buildable as a module allows for "always-on"
kunit configuration; specifying CONFIG_KUNIT=m means the module
is built but only used when loaded. Kunit test modules will load
kunit.ko as an implicit dependency, so simply running
"modprobe my-kunit-tests" will load the tests along with the kunit
module and run them.
Co-developed-by: Knut Omang <[email protected]>
Signed-off-by: Knut Omang <[email protected]>
Signed-off-by: Alan Maguire <[email protected]>
Reviewed-by: Brendan Higgins <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
|
|
In discussion of how to handle timeouts, it was noted that if
sysctl_hung_task_timeout_secs is exceeded for a kunit test,
the test task will be killed and an oops generated. This should
suffice as a means of debugging such timeout issues for now.
Hence remove use of sysctl_hung_task_timeout_secs, which has the
added benefit of avoiding the need to export that symbol from
the core kernel.
Co-developed-by: Knut Omang <[email protected]>
Signed-off-by: Knut Omang <[email protected]>
Signed-off-by: Alan Maguire <[email protected]>
Reviewed-by: Stephen Boyd <[email protected]>
Acked-by: Brendan Higgins <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
|
|
As tests are added to kunit, it will become less feasible to execute
all built tests together. By supporting modular tests we provide
a simple way to do selective execution on a running system; specifying
CONFIG_KUNIT=y
CONFIG_KUNIT_EXAMPLE_TEST=m
...means we can simply "insmod example-test.ko" to run the tests.
To achieve this we need to do the following:
o export the required symbols in kunit
o string-stream tests utilize non-exported symbols so for now we skip
building them when CONFIG_KUNIT_TEST=m.
o drivers/base/power/qos-test.c contains a few unexported interface
references, namely freq_qos_read_value() and freq_constraints_init().
Both of these could be potentially defined as static inline functions
in include/linux/pm_qos.h, but for now we simply avoid supporting
module build for that test suite.
o support a new way of declaring test suites. Because a module cannot
do multiple late_initcall()s, we provide a kunit_test_suites() macro
to declare multiple suites within the same module at once (see the
sketch after this list).
o some test module names would have been too general ("test-test"
and "example-test" for kunit tests, "inode-test" for ext4 tests);
rename these as appropriate ("kunit-test", "kunit-example-test"
and "ext4-inode-test" respectively).
Also define kunit_test_suite() via kunit_test_suites()
as callers in other trees may need the old definition.
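A minimal sketch of the declaration pattern referred to above (suite
names and the test body are illustrative; treat the exact field layout
as an assumption):

#include <kunit/test.h>

static void example_noop_test(struct kunit *test)
{
        KUNIT_EXPECT_TRUE(test, true);
}

static struct kunit_case example_cases[] = {
        KUNIT_CASE(example_noop_test),
        {}
};

static struct kunit_suite example_suite_a = {
        .name = "example-suite-a",
        .test_cases = example_cases,
};

static struct kunit_suite example_suite_b = {
        .name = "example-suite-b",
        .test_cases = example_cases,
};

/* One statement registers both suites, whether built in or as a module. */
kunit_test_suites(&example_suite_a, &example_suite_b);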
Co-developed-by: Knut Omang <[email protected]>
Signed-off-by: Knut Omang <[email protected]>
Signed-off-by: Alan Maguire <[email protected]>
Reviewed-by: Brendan Higgins <[email protected]>
Acked-by: Theodore Ts'o <[email protected]> # for ext4 bits
Acked-by: David Gow <[email protected]> # For list-test
Reported-by: kbuild test robot <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
|
|
Define the function as static inline in try-catch-impl.h to allow it to
be used in kunit itself and in tests. Also remove the unused
kunit_generic_try_catch.
Co-developed-by: Knut Omang <[email protected]>
Signed-off-by: Knut Omang <[email protected]>
Signed-off-by: Alan Maguire <[email protected]>
Reviewed-by: Brendan Higgins <[email protected]>
Tested-by: Brendan Higgins <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
|
|
string-stream interfaces are not intended for external use;
move them from include/kunit to lib/kunit accordingly.
Co-developed-by: Knut Omang <[email protected]>
Signed-off-by: Knut Omang <[email protected]>
Signed-off-by: Alan Maguire <[email protected]>
Reviewed-by: Brendan Higgins <[email protected]>
Tested-by: Brendan Higgins <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
|
|
Previously KUnit assumed that printk would always be present, which is
not a valid assumption to make. Fix that by removing the call to
vprintk_emit() and calling printk() directly.
This fixes a build error[1] reported by Randy.
For context this change comes after much discussion. My first stab[2] at
this was just to make the KUnit logging code compile out; however, it
was agreed that if we were going to use vprintk_emit, then vprintk_emit
should provide a no-op stub, which lead to my second attempt[3]. In
response to me trying to stub out vprintk_emit, Sergey Senozhatsky
suggested a way for me to remove our usage of vprintk_emit, which led to
my third attempt at solving this[4].
In my third version of this patch[4], I completely removed vprintk_emit,
as suggested by Sergey; however, there was a bit of debate over whether
Sergey's solution was the best. The debate arose due to Sergey's version
resulting in a checkpatch warning, which resulted in a debate over
correct printk usage. Joe Perches offered an alternative fix which was
somewhat less far reaching than what Sergey had suggested and
importantly relied on continuing to use %pV. Much of the debate
centered on whether %pV should be widely used, and whether Sergey's
version would result in object size bloat. Ultimately, we decided to go
with Sergey's version.
Reported-by: Randy Dunlap <[email protected]>
Link[1]: https://lore.kernel.org/linux-kselftest/[email protected]/
Link[2]: https://lore.kernel.org/linux-kselftest/[email protected]/
Link[3]: https://lore.kernel.org/linux-kselftest/[email protected]/
Link[4]: https://lore.kernel.org/linux-kselftest/[email protected]/
Cc: Stephen Rothwell <[email protected]>
Cc: Sergey Senozhatsky <[email protected]>
Cc: Joe Perches <[email protected]>
Cc: [email protected]
Signed-off-by: Brendan Higgins <[email protected]>
Acked-by: Randy Dunlap <[email protected]> # build-tested
Reviewed-by: Petr Mladek <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
|
|
Add unit tests for KUnit managed resources. KUnit managed resources
(struct kunit_resource) are resources that are automatically cleaned up
at the end of a KUnit test, similar to the concept of devm_* managed
resources.
Signed-off-by: Avinash Kondareddy <[email protected]>
Signed-off-by: Brendan Higgins <[email protected]>
Reviewed-by: Greg Kroah-Hartman <[email protected]>
Reviewed-by: Logan Gunthorpe <[email protected]>
Reviewed-by: Stephen Boyd <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
|
|
Add support for assertions which are like expectations except the test
terminates if the assertion is not satisfied.
The idea with assertions is that you use them to state all the
preconditions for your test. Logically speaking, these are the premises
of the test case, so if a premise isn't true, there is no point in
continuing the test case because there are no conclusions that can be
drawn without the premises. Whereas, the expectation is the thing you
are trying to prove. It is not used universally in x-unit style test
frameworks, but I really like it as a convention. You could still
express the idea of a premise using the above idiom, but I think
KUNIT_ASSERT_* states the intended idea perfectly.
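A small sketch of the convention described above - assert the
precondition, then expect the property under test (the buffer and
check are illustrative):

#include <kunit/test.h>

static void example_assert_test(struct kunit *test)
{
        char *buf = kunit_kzalloc(test, 16, GFP_KERNEL);

        /* Premise: without the buffer there is nothing left to check,
         * so abort the test case here rather than carry on. */
        KUNIT_ASSERT_NOT_ERR_OR_NULL(test, buf);

        /* The property we actually want to prove. */
        KUNIT_EXPECT_EQ(test, buf[0], 0);
}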
Signed-off-by: Brendan Higgins <[email protected]>
Reviewed-by: Greg Kroah-Hartman <[email protected]>
Reviewed-by: Logan Gunthorpe <[email protected]>
Reviewed-by: Stephen Boyd <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
|
|
Add KUnit tests for the KUnit test abort mechanism (see preceding
commit). Add tests both for the general try-catch mechanism and for the
non-architecture-specific mechanism.
Signed-off-by: Brendan Higgins <[email protected]>
Reviewed-by: Greg Kroah-Hartman <[email protected]>
Reviewed-by: Logan Gunthorpe <[email protected]>
Reviewed-by: Stephen Boyd <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
|
|
Add support for aborting/bailing out of test cases, which is needed for
implementing assertions.
An assertion is like an expectation, but bails out of the test case
early if the assertion is not met. The idea with assertions is that you
use them to state all the preconditions for your test. Logically
speaking, these are the premises of the test case, so if a premise isn't
true, there is no point in continuing the test case because there are no
conclusions that can be drawn without the premises. Whereas, the
expectation is the thing you are trying to prove.
Signed-off-by: Brendan Higgins <[email protected]>
Reviewed-by: Greg Kroah-Hartman <[email protected]>
Reviewed-by: Logan Gunthorpe <[email protected]>
Reviewed-by: Stephen Boyd <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
|
|
Add a test for string stream along with a simpler example.
Signed-off-by: Brendan Higgins <[email protected]>
Reviewed-by: Greg Kroah-Hartman <[email protected]>
Reviewed-by: Logan Gunthorpe <[email protected]>
Reviewed-by: Stephen Boyd <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
|
|
Add support for expectations, which allow properties to be specified and
then verified in tests.
Signed-off-by: Brendan Higgins <[email protected]>
Reviewed-by: Greg Kroah-Hartman <[email protected]>
Reviewed-by: Logan Gunthorpe <[email protected]>
Reviewed-by: Stephen Boyd <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
|
|
Add `struct kunit_assert` and friends which provide a structured way to
capture data from an expectation or an assertion (introduced later in
the series) so that it may be printed out in the event of a failure.
Signed-off-by: Brendan Higgins <[email protected]>
Reviewed-by: Stephen Boyd <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
|
|
A number of test features need to do pretty complicated string printing
where it may not be possible to rely on a single preallocated string
with parameters.
So provide a library for constructing the string as you go similar to
C++'s std::string. string_stream is really just a string builder,
nothing more.
Signed-off-by: Brendan Higgins <[email protected]>
Reviewed-by: Greg Kroah-Hartman <[email protected]>
Reviewed-by: Logan Gunthorpe <[email protected]>
Reviewed-by: Stephen Boyd <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
|
|
Create a common API for test managed resources like memory and test
objects. A lot of times a test will want to set up infrastructure to be
used in test cases; this could be anything from just wanting to allocate
some memory to setting up a driver stack; this defines facilities for
creating "test resources" which are managed by the test infrastructure
and are automatically cleaned up at the conclusion of the test.
Signed-off-by: Brendan Higgins <[email protected]>
Reviewed-by: Greg Kroah-Hartman <[email protected]>
Reviewed-by: Logan Gunthorpe <[email protected]>
Reviewed-by: Stephen Boyd <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
|
|
Add core facilities for defining unit tests; this provides a common way
to define test cases - functions that execute code under test and
determine whether it behaves as expected - and also provides a way to
group related test cases together in test suites (here we call them
test_modules).
Just define test cases and how to execute them for now; setting
expectations on code will be defined later.
Signed-off-by: Brendan Higgins <[email protected]>
Reviewed-by: Greg Kroah-Hartman <[email protected]>
Reviewed-by: Logan Gunthorpe <[email protected]>
Reviewed-by: Luis Chamberlain <[email protected]>
Reviewed-by: Stephen Boyd <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
|