path: root/tools/testing/kunit/kunit_parser.py
2021-08-13  kunit: Print test statistics on failure  (David Gow; 1 file, -1/+1 lines)
When a number of tests fail, it can be useful to get higher-level statistics of how many tests are failing (or how many parameters are failing in parameterised tests), and in what cases or suites. This is already done by some non-KUnit tests, so add support for automatically generating these for KUnit tests.

This change adds a 'kunit.stats_enabled' switch which has three values:
- 0: No stats are printed (current behaviour)
- 1: Stats are printed only for tests/suites with more than one subtest (new default)
- 2: Always print test statistics

For parameterised tests, the summary line looks as follows:
  " # inode_test_xtimestamp_decoding: pass:16 fail:0 skip:0 total:16"

For test suites, there are two lines looking like this:
  "# ext4_inode_test: pass:1 fail:0 skip:0 total:1"
  "# Totals: pass:16 fail:0 skip:0 total:16"

The first line gives the number of direct subtests, the second "Totals" line is the accumulated sum of all tests and test parameters.

This format is based on the one used by kselftest[1].

[1]: https://elixir.bootlin.com/linux/latest/source/tools/testing/selftests/kselftest.h#L109

Signed-off-by: David Gow <[email protected]>
Reviewed-by: Brendan Higgins <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
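A minimal sketch of how per-test results could be tallied into a kselftest-style summary line (the TestCounts class below is illustrative, not the actual kunit_parser.py implementation):

  # Illustrative sketch only, not the actual kunit_parser.py code:
  # tally per-test statuses and print a kselftest-style summary line.
  from dataclasses import dataclass

  @dataclass
  class TestCounts:
      passed: int = 0
      failed: int = 0
      skipped: int = 0

      def total(self) -> int:
          return self.passed + self.failed + self.skipped

      def add(self, status: str) -> None:
          # Tally one subtest result by its status.
          if status == 'pass':
              self.passed += 1
          elif status == 'skip':
              self.skipped += 1
          else:
              self.failed += 1

      def summary(self, name: str) -> str:
          return (f'# {name}: pass:{self.passed} fail:{self.failed} '
                  f'skip:{self.skipped} total:{self.total()}')

  counts = TestCounts()
  for status in ['pass'] * 16:
      counts.add(status)
  print(counts.summary('inode_test_xtimestamp_decoding'))
  # -> # inode_test_xtimestamp_decoding: pass:16 fail:0 skip:0 total:16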
2021-08-13  kunit: tool: make --raw_output support only showing kunit output  (Daniel Latypov; 1 file, -4/+0 lines)
--raw_output is nice, but it would be nicer if it could show only output after KUnit tests have started.

So change the flag to allow specifying a string ('kunit'). Make it so `--raw_output` alone will default to `--raw_output=all` and have the same original behavior.

Drop the small kunit_parser.raw_output() function since it feels wrong to put it in "kunit_parser.py" when the point of it is to not parse anything.

E.g.

$ ./tools/testing/kunit/kunit.py run --raw_output=kunit
...
[15:24:07] Starting KUnit Kernel ...
TAP version 14
1..1
# Subtest: example
1..3
# example_simple_test: initializing
ok 1 - example_simple_test
# example_skip_test: initializing
# example_skip_test: You should not see a line below.
ok 2 - example_skip_test # SKIP this test should be skipped
# example_mark_skipped_test: initializing
# example_mark_skipped_test: You should see a line below.
# example_mark_skipped_test: You should see this line.
ok 3 - example_mark_skipped_test # SKIP this test should be skipped
ok 1 - example
[15:24:10] Elapsed time: 6.487s total, 0.001s configuring, 3.510s building, 0.000s running

Signed-off-by: Daniel Latypov <[email protected]>
Reviewed-by: David Gow <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
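A minimal sketch of how a flag with an optional value can be handled with Python's argparse (an assumption about one possible approach, not the actual kunit.py argument handling):

  # Illustrative sketch, not the actual kunit.py code.
  import argparse

  parser = argparse.ArgumentParser()
  # nargs='?' plus const='all' makes a bare "--raw_output" behave like
  # "--raw_output=all", while "--raw_output=kunit" selects KUnit-only output.
  parser.add_argument('--raw_output', nargs='?', const='all',
                      choices=['all', 'kunit'], default=None)

  print(parser.parse_args(['--raw_output']).raw_output)        # all
  print(parser.parse_args(['--raw_output=kunit']).raw_output)  # kunit
  print(parser.parse_args([]).raw_output)                      # None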
2021-07-12  kunit: tool: Fix error messages for cases of no tests and wrong TAP header  (Rae Moar; 1 file, -2/+4 lines)
This patch addresses misleading error messages reported by kunit_tool in two cases.

First, in the case of TAP output having an incorrect header format or missing a header, the parser used to output an error message of 'no tests run!'. Now the parser outputs an error message of 'could not parse test results!'. As an example:

Before:
$ ./tools/testing/kunit/kunit.py parse /dev/null
[ERROR] no tests run!
...

After:
$ ./tools/testing/kunit/kunit.py parse /dev/null
[ERROR] could not parse test results!
...

Second, in the case of TAP output with the correct header but no tests, the parser used to output an error message of 'could not parse test results!'. Now the parser outputs an error message of 'no tests run!'. As an example:

Before:
$ echo -e 'TAP version 14\n1..0' | ./tools/testing/kunit/kunit.py parse
[ERROR] could not parse test results!

After:
$ echo -e 'TAP version 14\n1..0' | ./tools/testing/kunit/kunit.py parse
[ERROR] no tests run!

Additionally, this patch also corrects the tests in kunit_tool_test.py and adds a test to check the error in the case of TAP output with the correct header but no tests.

Signed-off-by: Rae Moar <[email protected]>
Reviewed-by: David Gow <[email protected]>
Reviewed-by: Daniel Latypov <[email protected]>
Reviewed-by: Brendan Higgins <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
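A minimal sketch of distinguishing the two error cases (the function and checks are assumptions, not the actual kunit_parser.py logic):

  # Illustrative sketch only, not the actual kunit_parser.py implementation.
  from typing import List

  def check_output(lines: List[str]) -> str:
      if not any(line.startswith('TAP version') for line in lines):
          return '[ERROR] could not parse test results!'
      # A valid header followed by a "1..0" plan means the output was
      # parseable but no tests actually ran.
      if any(line.strip() == '1..0' for line in lines):
          return '[ERROR] no tests run!'
      return 'ok'

  print(check_output([]))                          # could not parse test results!
  print(check_output(['TAP version 14', '1..0']))  # no tests run!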
2021-06-25  kunit: tool: Support skipped tests in kunit_tool  (David Gow; 1 file, -24/+53 lines)
Add support for the SKIP directive to kunit_tool's TAP parser. Skipped tests now show up as such in the printed summary. The number of skipped tests is counted, and if all tests in a suite are skipped, the suite is also marked as skipped. Otherwise, skipped tests do not affect the suite result.

Example output:

[00:22:34] ======== [SKIPPED] example_skip ========
[00:22:34] [SKIPPED] example_skip_test # SKIP this test should be skipped
[00:22:34] [SKIPPED] example_mark_skipped_test # SKIP this test should be skipped
[00:22:34] ============================================================
[00:22:34] Testing complete. 2 tests run. 0 failed. 0 crashed. 2 skipped.

Signed-off-by: David Gow <[email protected]>
Reviewed-by: Daniel Latypov <[email protected]>
Reviewed-by: Brendan Higgins <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
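A minimal sketch of parsing a TAP result line with a SKIP directive (the regex and names are assumptions, not the actual kunit_parser.py patterns):

  # Illustrative sketch only, not the actual kunit_parser.py code.
  import re

  RESULT = re.compile(r'^(ok|not ok) (\d+) - (.+?)(?: # SKIP ?(.*))?$')

  for line in ['ok 1 - example_simple_test',
               'ok 2 - example_skip_test # SKIP this test should be skipped',
               'not ok 3 - example_failing_test']:
      ok, number, name, skip_reason = RESULT.match(line).groups()
      if skip_reason is not None:
          status = 'SKIPPED'
      elif ok == 'ok':
          status = 'PASSED'
      else:
          status = 'FAILED'
      print(f'[{status}] {name}')
  # -> [PASSED] example_simple_test
  #    [SKIPPED] example_skip_test
  #    [FAILED] example_failing_test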
2021-06-25  kunit: tool: internal refactor of parser input handling  (Daniel Latypov; 1 file, -47/+89 lines)
Note: this does not change the parser behavior at all (except for making one error message more useful). This is just an internal refactor.

The TAP output parser currently operates over a List[str]. This works, but we only ever need to be able to "peek" at the current line and the ability to "pop" it off. Also, using a List means we need to wait for all the output before we can start parsing. While this is not an issue for most tests, which are really lightweight, we do have some longer (~5 minutes) tests.

This patch introduces a LineStream wrapper class that
* Exposes a peek()/pop() interface instead of manipulating an array
  * this allows us to more easily add debugging code [1]
* Can consume an input from a generator
  * we can now parse results as tests are running (the parser code currently doesn't print until the end, so no impact yet).
* Tracks the current line number to print better error messages
* Would allow us to add additional features more easily, e.g. storing N previous lines so we can print out invalid lines in context, etc.

[1] The parsing logic is currently quite fragile. E.g. it'll often say the kernel "CRASHED" if there's something slightly wrong with the output format. When debugging a test that had some memory corruption issues, it resulted in very misleading errors from the parser. Now we could easily add this to trace all the lines consumed and why:

+import inspect
...
 def pop(self) -> str:
   n = self._next
+  print(f'popping {n[0]}: {n[1].ljust(40, " ")}| caller={inspect.stack()[1].function}')

Example output:

popping 77: TAP version 14 | caller=parse_tap_header
popping 78: 1..1 | caller=parse_test_plan
popping 79: # Subtest: kunit_executor_test | caller=parse_subtest_header
popping 80: 1..2 | caller=parse_subtest_plan
popping 81: ok 1 - parse_filter_test | caller=parse_ok_not_ok_test_case
popping 82: ok 2 - filter_subsuite_test | caller=parse_ok_not_ok_test_case
popping 83: ok 1 - kunit_executor_test | caller=parse_ok_not_ok_test_suite

If we introduce an invalid line, we can see the parser go down the wrong path:

popping 77: TAP version 14 | caller=parse_tap_header
popping 78: 1..1 | caller=parse_test_plan
popping 79: # Subtest: kunit_executor_test | caller=parse_subtest_header
popping 80: 1..2 | caller=parse_subtest_plan
popping 81: 1..2 # this is invalid! | caller=parse_ok_not_ok_test_case
popping 82: ok 1 - parse_filter_test | caller=parse_ok_not_ok_test_case
popping 83: ok 2 - filter_subsuite_test | caller=parse_ok_not_ok_test_case
popping 84: ok 1 - kunit_executor_test | caller=parse_ok_not_ok_test_case
[ERROR] ran out of lines before end token

Signed-off-by: Daniel Latypov <[email protected]>
Reviewed-by: David Gow <[email protected]>
Acked-by: Brendan Higgins <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
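A minimal sketch of the peek()/pop() wrapper idea (the class below is illustrative and does not claim to match the actual kunit_parser.LineStream implementation):

  # Illustrative sketch only: a peek/pop line stream over a generator that
  # also tracks line numbers, not the actual kunit_parser.py code.
  from typing import Iterator, Tuple

  class LineStream:
      def __init__(self, lines: Iterator[Tuple[int, str]]) -> None:
          self._lines = lines
          self._next: Tuple[int, str] = (0, '')
          self._done = False
          self._get_next()

      def _get_next(self) -> None:
          try:
              self._next = next(self._lines)
          except StopIteration:
              self._done = True

      def peek(self) -> str:
          # Look at the current line without consuming it.
          return self._next[1]

      def pop(self) -> str:
          # Consume and return the current line.
          n = self._next
          self._get_next()
          return n[1]

      def __bool__(self) -> bool:
          return not self._done

      def line_number(self) -> int:
          return self._next[0]

  stream = LineStream(enumerate(['TAP version 14', '1..1', 'ok 1 - example'], start=1))
  while stream:
      print(stream.line_number(), stream.pop())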
2021-06-11  kunit: Add 'kunit_shutdown' option  (David Gow; 1 file, -1/+1 lines)
Add a new kernel command-line option, 'kunit_shutdown', which allows the user to specify that the kernel poweroff, halt, or reboot after completing all KUnit tests. This is very handy for running KUnit tests on UML or a VM so that the UML/VM process exits cleanly immediately after running all tests, without needing a special initramfs.

Signed-off-by: David Gow <[email protected]>
Signed-off-by: Brendan Higgins <[email protected]>
Reviewed-by: Stephen Boyd <[email protected]>
Tested-By: Daniel Latypov <[email protected]>
Reviewed-by: Daniel Latypov <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
2021-01-15  kunit: tool: fix minor typing issue with None status  (Daniel Latypov; 1 file, -9/+8 lines)
The code to handle aggregating statuses didn't check that the status actually got set to some non-None value.

Default the value to SUCCESS instead of adding a bunch of `is None` checks. This sorta follows the precedent in commit 3fc48259d525 ("kunit: Don't fail test suites if one of them is empty").

Also slightly simplify the code and add type annotations.

Signed-off-by: Daniel Latypov <[email protected]>
Reviewed-by: David Gow <[email protected]>
Reviewed-by: Brendan Higgins <[email protected]>
Tested-by: Brendan Higgins <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
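A minimal sketch of aggregating statuses with a non-None default (the enum and function names are assumptions, not the actual kunit_parser.py code):

  # Illustrative sketch only, not the actual kunit_parser.py implementation.
  from enum import Enum, auto
  from typing import List

  class TestStatus(Enum):
      SUCCESS = auto()
      FAILURE = auto()

  def aggregate_status(statuses: List[TestStatus]) -> TestStatus:
      # Starting from SUCCESS means an empty list needs no Optional/None
      # handling and no `is None` checks at the call sites.
      result = TestStatus.SUCCESS
      for status in statuses:
          if status == TestStatus.FAILURE:
              result = TestStatus.FAILURE
      return result

  print(aggregate_status([]))  # TestStatus.SUCCESS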
2021-01-15  kunit: tool: surface and address more typing issues  (Daniel Latypov; 1 file, -23/+23 lines)
The authors of this tool were more familiar with a different type-checker, https://github.com/google/pytype. That's open source, but mypy seems more prevalent (and runs faster). And unlike pytype, mypy doesn't try to infer types, so it doesn't check unannotated functions.

So annotate ~all functions in kunit tool to increase type-checking coverage.

Note: per https://www.python.org/dev/peps/pep-0484/, `__init__()` should be annotated as `-> None`. Doing so makes mypy discover a number of new violations. Exclude main() since we reuse `request` for the different types of requests, which mypy isn't happy about.

This commit fixes all but one error, where `TestSuite.status` might be None.

Signed-off-by: Daniel Latypov <[email protected]>
Reviewed-by: David Gow <[email protected]>
Tested-by: Brendan Higgins <[email protected]>
Acked-by: Brendan Higgins <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
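A minimal sketch of why annotating `__init__()` as `-> None` matters for mypy (the class is purely illustrative, not taken from the kunit tool):

  # Illustrative sketch only.
  from typing import Optional

  class TestCase:
      def __init__(self, name: str) -> None:
          # With "-> None", mypy type-checks this body; with no annotation at
          # all, it treats the function as untyped and skips it by default.
          self.name = name
          self.status: Optional[str] = None

  def describe(case: TestCase) -> str:
      # mypy flags 'return "status: " + case.status' here because status may
      # still be None; handling that case keeps the checker happy.
      return 'status: ' + (case.status or 'unknown')

  print(describe(TestCase('example')))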
2021-01-15  kunit: tool: Fix spelling of "diagnostic" in kunit_parser  (David Gow; 1 file, -12/+12 lines)
Various helper functions were misspelling "diagnostic" in their names. It finally got annoying, so fix it.

Signed-off-by: David Gow <[email protected]>
Reviewed-by: Brendan Higgins <[email protected]>
Tested-by: Brendan Higgins <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
2020-12-01  kunit: kunit_tool: Correctly parse diagnostic messages  (David Gow; 1 file, -3/+4 lines)
Currently, kunit_tool expects all diagnostic lines in test results to contain ": " somewhere, as both the subtest header and the crash report do. Fix this to accept any line starting with (minus indent) "# " as being a valid diagnostic line. This matches what the TAP spec[1] and the draft KTAP spec[2] are expecting.

[1]: http://testanything.org/tap-specification.html
[2]: https://lore.kernel.org/linux-kselftest/CY4PR13MB1175B804E31E502221BC8163FD830@CY4PR13MB1175.namprd13.prod.outlook.com/T/

Signed-off-by: David Gow <[email protected]>
Acked-by: Marco Elver <[email protected]>
Reviewed-by: Brendan Higgins <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
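A minimal sketch of the relaxed diagnostic-line check (the pattern is an assumption, not the actual kunit_parser.py regex):

  # Illustrative sketch only, not the actual kunit_parser.py pattern.
  import re

  # Any line whose content (after optional indentation) starts with "# " is
  # treated as a diagnostic, whether or not it contains ": ".
  DIAGNOSTIC = re.compile(r'^\s*# ')

  for line in ['# Subtest: example',
               '    # example_simple_test: initializing',
               '# just a note without a colon',
               'ok 1 - example_simple_test']:
      print(bool(DIAGNOSTIC.match(line)), line)
  # -> True, True, True, False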
2020-11-10  kunit: tool: fix extra trailing \n in raw + parsed test output  (Daniel Latypov; 1 file, -1/+2 lines)
For simplicity, strip all trailing whitespace from parsed output. I imagine no one is printing out meaningful trailing whitespace via KUNIT_FAIL() or similar, and that if they are, they really shouldn't.

`isolate_kunit_output()` yielded lines with trailing \n, which results in artifacty output like this:

$ ./tools/testing/kunit/kunit.py run
[16:16:46] [FAILED] example_simple_test
[16:16:46] # example_simple_test: EXPECTATION FAILED at lib/kunit/kunit-example-test.c:29
[16:16:46] Expected 1 + 1 == 3, but
[16:16:46] 1 + 1 == 2
[16:16:46] 3 == 3
[16:16:46] not ok 1 - example_simple_test
[16:16:46]

After this change:

[16:16:46] # example_simple_test: EXPECTATION FAILED at lib/kunit/kunit-example-test.c:29
[16:16:46] Expected 1 + 1 == 3, but
[16:16:46] 1 + 1 == 2
[16:16:46] 3 == 3
[16:16:46] not ok 1 - example_simple_test
[16:16:46]

We should *not* be expecting lines to end with \n in kunit_tool_test.py for this reason.

Do the same for `raw_output()` as well, which suffers from the same issue.

This is a followup to [1], but rebased onto kunit-fixes to pick up the other raw_output() fix and fixes for kunit_tool_test.py.

[1] https://lore.kernel.org/linux-kselftest/[email protected]/

Signed-off-by: Daniel Latypov <[email protected]>
Reviewed-by: David Gow <[email protected]>
Tested-by: David Gow <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
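A minimal sketch of stripping trailing whitespace as lines are yielded (the generator name is illustrative, not the actual isolate_kunit_output()):

  # Illustrative sketch only.
  from typing import Iterable, Iterator

  def strip_trailing_whitespace(lines: Iterable[str]) -> Iterator[str]:
      for line in lines:
          # rstrip() removes the trailing '\n' (and any other trailing
          # whitespace) so printing the line does not add a blank line.
          yield line.rstrip()

  for line in strip_trailing_whitespace(['ok 1 - example_simple_test\n', '1..1 \n']):
      print(repr(line))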
2020-11-10  kunit: tool: fix pre-existing python type annotation errors  (Daniel Latypov; 1 file, -7/+7 lines)
The code uses annotations, but they aren't accurate. Note that type checking in Python is a separate process: running `kunit.py run` will not check and complain about invalid types at runtime.

Fix pre-existing issues found by running a type checker:

$ mypy *.py

All but one of these were returning `None` without denoting this properly (via `Optional[Type]`).

Signed-off-by: Daniel Latypov <[email protected]>
Reviewed-by: David Gow <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
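A minimal sketch of the kind of fix mypy asks for here, using an Optional return type (the function is illustrative, not taken from kunit_parser.py):

  # Illustrative sketch only.
  import re
  from typing import Optional

  def parse_test_plan(line: str) -> Optional[int]:
      # Returns the number of planned tests, or None if this is not a plan
      # line. Annotating the return as plain "int" would be rejected by mypy
      # because of the "return None" branch.
      match = re.match(r'^1\.\.([0-9]+)$', line)
      if not match:
          return None
      return int(match.group(1))

  print(parse_test_plan('1..3'))            # 3
  print(parse_test_plan('TAP version 14'))  # None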
2020-10-26  kunit: Don't fail test suites if one of them is empty  (Andy Shevchenko; 1 file, -1/+1 lines)
An empty test suite is an okay test suite. Don't fail the rest of the test suites if one of them is empty.

Fixes: 6ebf5866f2e8 ("kunit: tool: add Python wrappers for running KUnit tests")
Signed-off-by: Andy Shevchenko <[email protected]>
Reviewed-by: Brendan Higgins <[email protected]>
Tested-by: Brendan Higgins <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
2020-10-26  kunit: Fix kunit.py --raw_output option  (David Gow; 1 file, -1/+0 lines)
Due to the raw_output() function in kunit_parser.py actually being a generator, it only runs if something reads the lines it returns. Since we no longer do that (parsing doesn't actually happen if raw_output is enabled), it was not printing anything.

Fixes: 45ba7a893ad8 ("kunit: kunit_tool: Separate out config/build/exec/parse")
Signed-off-by: David Gow <[email protected]>
Reviewed-by: Brendan Higgins <[email protected]>
Tested-by: Brendan Higgins <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
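The underlying Python gotcha can be shown with a small sketch (illustrative only, not the actual kunit code): a function containing `yield` builds a generator and runs nothing until that generator is consumed.

  # Illustrative sketch only.
  from typing import Iterable, Iterator

  def raw_output_generator(lines: Iterable[str]) -> Iterator[str]:
      for line in lines:
          print(line)          # never executes if nobody iterates the result
          yield line

  def raw_output_eager(lines: Iterable[str]) -> None:
      for line in lines:
          print(line)          # executes as soon as the function is called

  raw_output_generator(['TAP version 14'])   # prints nothing
  raw_output_eager(['TAP version 14'])       # prints the line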
2020-10-09  kunit: test: add test plan to KUnit TAP format  (Brendan Higgins; 1 file, -14/+62 lines)
TAP 14 allows an optional test plan to be emitted before the start of testing[1]; this is valuable because it makes it possible for a test harness to detect whether the number of tests run matches the number of tests expected to be run, ensuring that no tests silently failed.

Link[1]: https://github.com/isaacs/testanything.github.io/blob/tap14/tap-version-14-specification.md#the-plan

Signed-off-by: Brendan Higgins <[email protected]>
Reviewed-by: Stephen Boyd <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
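A minimal sketch of how a harness could use the "1..N" plan line to detect silently missing tests (names and logic are assumptions, not the actual kunit_parser.py code):

  # Illustrative sketch only.
  import re
  from typing import List, Optional

  def expected_test_count(lines: List[str]) -> Optional[int]:
      for line in lines:
          match = re.match(r'^1\.\.([0-9]+)$', line.strip())
          if match:
              return int(match.group(1))
      return None   # no plan emitted; the plan is optional in TAP 14

  def count_results(lines: List[str]) -> int:
      return sum(1 for line in lines
                 if re.match(r'^(ok|not ok) ', line.strip()))

  output = ['TAP version 14', '1..2', 'ok 1 - test_a']
  expected = expected_test_count(output)
  if expected is not None and count_results(output) != expected:
      print(f'[ERROR] expected {expected} tests but ran {count_results(output)}')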
2020-06-26  kunit: show error if kunit results are not present  (Uriel Guajardo; 1 file, -4/+4 lines)
Currently, if the kernel is configured incorrectly or if it crashes before any KUnit tests are run, kunit finishes without error, reporting that 0 test cases were run.

To fix this, an error is shown when the TAP header is not found, which indicates that KUnit was not able to run at all.

Signed-off-by: Uriel Guajardo <[email protected]>
Reviewed-by: Brendan Higgins <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
2020-03-26  kunit: subtests should be indented 4 spaces according to TAP  (Alan Maguire; 1 file, -5/+5 lines)
Introduce KUNIT_SUBTEST_INDENT macro which corresponds to 4-space indentation and KUNIT_SUBSUBTEST_INDENT macro which corresponds to 8-space indentation, in line with the TAP spec (e.g. see the "Subtests" section of https://node-tap.org/tap-protocol/). Use these macros in place of one or two tabs in strings to clarify why we are indenting.

Suggested-by: Frank Rowand <[email protected]>
Signed-off-by: Alan Maguire <[email protected]>
Reviewed-by: Frank Rowand <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
2020-03-20  kunit: Run all KUnit tests through allyesconfig  (Heidi Fahim; 1 file, -0/+1 lines)
Implemented the functionality to run all KUnit tests through kunit_tool by specifying an --alltests flag, which builds UML with allyesconfig enabled, and consequently runs every KUnit test.

A new function was added to kunit_kernel: make_allyesconfig. Firstly, if --alltests is specified, kunit.py triggers build_um_kernel, which calls make_allyesconfig. This function calls the make command, disables the broken configs that would otherwise prevent UML from building, then starts the kernel with all possible configurations enabled. All stdout and stderr is sent to test.log, read from there, and then fed through kunit_parser to parse the tests for the user.

Also added a signal_handler in case kunit is interrupted while running.

Tested: Run under different conditions such as testing with --raw_output, testing program interrupt then immediately running kunit again without --alltests, and making sure to clean the console.

Signed-off-by: Heidi Fahim <[email protected]>
Reviewed-by: Brendan Higgins <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
2020-03-20  kunit: kunit_parser: make parser more robust  (Heidi Fahim; 1 file, -20/+20 lines)
Previously, kunit_parser did not properly handle KUnit TAP output that
- had any prefixes (generated from different configs, e.g. CONFIG_PRINTK_TIME)
- had unrelated kernel output mixed in the middle of it, which has shown up when testing with allyesconfig

To remove prefixes, the parser looks for the first line that includes TAP output, "TAP version 14". It then determines the length of the string before this sequence, and strips that number of characters off the beginning of the following lines until the last KUnit output line is reached.

These fixes have been tested with additional tests in the KUnitParseTest and their associated logs have also been added.

Signed-off-by: Heidi Fahim <[email protected]>
Reviewed-by: Brendan Higgins <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
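A minimal sketch of the prefix-stripping approach described above (an assumption about the technique, not the actual kunit_parser.py code):

  # Illustrative sketch only, not the actual kunit_parser.py implementation.
  from typing import List

  def strip_prefixes(lines: List[str]) -> List[str]:
      # Find the first line containing the TAP header and measure everything
      # printed before it (e.g. a CONFIG_PRINTK_TIME timestamp).
      for line in lines:
          start = line.find('TAP version 14')
          if start >= 0:
              prefix_len = start
              break
      else:
          return []
      # Strip that many characters from each line from the header onwards.
      kept: List[str] = []
      for line in lines:
          if 'TAP version 14' in line or kept:
              kept.append(line[prefix_len:])
      return kept

  raw = ['[    0.060000] TAP version 14',
         '[    0.060000] 1..1',
         '[    0.065000] ok 1 - example']
  print(strip_prefixes(raw))
  # -> ['TAP version 14', '1..1', 'ok 1 - example']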
2019-09-30  kunit: tool: add Python wrappers for running KUnit tests  (Felix Guo; 1 file, -0/+310 lines)
The ultimate goal is to create minimal isolated test binaries; in the meantime we are using UML to provide the infrastructure to run tests, so define an abstract way to configure and run tests that allows us to change the context in which tests are built without affecting the user. This also makes pretty and dynamic error reporting, and a lot of other nice features, easier.

kunit_config.py:
- parse .config and Kconfig files.

kunit_kernel.py: provides helper functions to:
- configure the kernel using kunitconfig.
- build the kernel with the appropriate configuration.
- provide a function to invoke the kernel and stream the output back.

kunit_parser.py: parses raw logs returned by kunit_kernel and displays them in a user-friendly way.

test_data/*: samples of test data for testing kunit.py, kunit_config.py, etc.

Signed-off-by: Felix Guo <[email protected]>
Signed-off-by: Brendan Higgins <[email protected]>
Reviewed-by: Greg Kroah-Hartman <[email protected]>
Reviewed-by: Logan Gunthorpe <[email protected]>
Reviewed-by: Stephen Boyd <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>