path: root/lib
Age | Commit message | Author | Files | Lines
2019-05-06 | Merge branch 'core-stacktrace-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip | Linus Torvalds | 3 | -30/+40
Pull stack trace updates from Ingo Molnar:
 "So Thomas looked at the stacktrace code recently and noticed a few weirdnesses, and we all know how such stories of crummy kernel code meeting German engineering perfection end: a 45-patch series to clean it all up! :-)

  Here's the changes in Thomas's words:

   'Struct stack_trace is a sinkhole for input and output parameters which is largely pointless for most usage sites. In fact if embedded into other data structures it creates indirections and extra storage overhead for no benefit.

    Looking at all usage sites makes it clear that they just require an interface which is based on a storage array. That array is either on stack, global or embedded into some other data structure.

    Some of the stack depot usage sites are outright wrong, but fortunately the wrongness just causes more stack being used for nothing and does not have functional impact.

    Another oddity is the inconsistent termination of the stack trace with ULONG_MAX. It's pointless as the number of entries is what determines the length of the stored trace. In fact quite some call sites remove the ULONG_MAX marker afterwards with or without nasty comments about it. Not all architectures do that and those which do, do it inconsistently either conditional on nr_entries == 0 or unconditionally.

    The following series cleans that up by:

      1) Removing the ULONG_MAX termination in the architecture code
      2) Removing the ULONG_MAX fixups at the call sites
      3) Providing plain storage array based interfaces for stacktrace and stackdepot.
      4) Cleaning up the mess at the callsites including some related cleanups.
      5) Removing the struct stack_trace based interfaces

    This is not changing the struct stack_trace interfaces at the architecture level, but it removes the exposure to the generic code'"

* 'core-stacktrace-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (45 commits)
  x86/stacktrace: Use common infrastructure
  stacktrace: Provide common infrastructure
  lib/stackdepot: Remove obsolete functions
  stacktrace: Remove obsolete functions
  livepatch: Simplify stack trace retrieval
  tracing: Remove the last struct stack_trace usage
  tracing: Simplify stack trace retrieval
  tracing: Make ftrace_trace_userstack() static and conditional
  tracing: Use percpu stack trace buffer more intelligently
  tracing: Simplify stacktrace retrieval in histograms
  lockdep: Simplify stack trace handling
  lockdep: Remove save argument from check_prev_add()
  lockdep: Remove unused trace argument from print_circular_bug()
  drm: Simplify stacktrace handling
  dm persistent data: Simplify stack trace handling
  dm bufio: Simplify stack trace retrieval
  btrfs: ref-verify: Simplify stack trace retrieval
  dma/debug: Simplify stracktrace retrieval
  fault-inject: Simplify stacktrace retrieval
  mm/page_owner: Simplify stack trace handling
  ...
2019-05-06 | Merge branch 'core-objtool-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip | Linus Torvalds | 4 | -4/+10
Pull objtool updates from Ingo Molnar:
 "This is a series from Peter Zijlstra that adds x86 build-time uaccess validation of SMAP to objtool, which will detect and warn about the following uaccess API usage bugs and weirdnesses:

   - call to %s() with UACCESS enabled
   - return with UACCESS enabled
   - return with UACCESS disabled from a UACCESS-safe function
   - recursive UACCESS enable
   - redundant UACCESS disable
   - UACCESS-safe disables UACCESS

  As it turns out not leaking uaccess permissions outside the intended uaccess functionality is hard when the interfaces are complex and when such bugs are mostly dormant.

  As a bonus we now also check the DF flag. We had at least one high-profile bug in that area in the early days of Linux, and the checking is fairly simple. The checks performed and warnings emitted are:

   - call to %s() with DF set
   - return with DF set
   - return with modified stack frame
   - recursive STD
   - redundant CLD

  It's all x86-only for now, but later on this can also be used for PAN on ARM and objtool is fairly cross-platform in principle.

  While all warnings emitted by this new checking facility that got reported to us were fixed, there might be GCC version dependent warnings that were not reported yet - which we'll address, should they trigger.

  The warnings are non-fatal build warnings"

* 'core-objtool-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (27 commits)
  mm/uaccess: Use 'unsigned long' to placate UBSAN warnings on older GCC versions
  x86/uaccess: Dont leak the AC flag into __put_user() argument evaluation
  sched/x86_64: Don't save flags on context switch
  objtool: Add Direction Flag validation
  objtool: Add UACCESS validation
  objtool: Fix sibling call detection
  objtool: Rewrite alt->skip_orig
  objtool: Add --backtrace support
  objtool: Rewrite add_ignores()
  objtool: Handle function aliases
  objtool: Set insn->func for alternatives
  x86/uaccess, kcov: Disable stack protector
  x86/uaccess, ftrace: Fix ftrace_likely_update() vs. SMAP
  x86/uaccess, ubsan: Fix UBSAN vs. SMAP
  x86/uaccess, kasan: Fix KASAN vs SMAP
  x86/smap: Ditch __stringify()
  x86/uaccess: Introduce user_access_{save,restore}()
  x86/uaccess, signal: Fix AC=1 bloat
  x86/uaccess: Always inline user_access_begin()
  x86/uaccess, xen: Suppress SMAP warnings
  ...
2019-05-06 | ubsan: Remove vla bound checks. | Andrey Ryabinin | 2 | -23/+0
The kernel has been built with -Wvla for some time, so it is not supposed to contain any variable length arrays. Remove the VLA bounds checking from ubsan since it is useless now.
Signed-off-by: Andrey Ryabinin <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
2019-05-06 | ubsan: Fix nasty -Wbuiltin-declaration-mismatch GCC-9 warnings | Andrey Ryabinin | 1 | -26/+23
Building lib/ubsan.c with gcc-9 results in a ton of nasty warnings like this one:
  lib/ubsan.c warning: conflicting types for built-in function ‘__ubsan_handle_negate_overflow’; expected ‘void(void *, void *)’ [-Wbuiltin-declaration-mismatch]
The kernel's declarations of __ubsan_handle_*() often use 'unsigned long' types for parameters, while GCC declares these parameters as 'void *' types, hence the mismatch. Fix this by using 'void *' to match GCC's declarations.
Reported-by: Linus Torvalds <[email protected]>
Signed-off-by: Andrey Ryabinin <[email protected]>
Fixes: c6d308534aef ("UBSAN: run-time undefined behavior sanity checker")
Cc: <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
2019-05-06 | Merge branch 'for-5.2-pf-removal' into for-linus | Petr Mladek | 2 | -3/+3
2019-05-06 | Merge branch 'for-5.2-vsprintf-hardening' into for-linus | Petr Mladek | 2 | -163/+293
2019-05-05 | Merge branch 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip | Linus Torvalds | 1 | -0/+11
Pull x86 fix from Ingo Molnar:
 "Disable function tracing during early SME setup to fix a boot crash on SME-enabled kernels running distro kernels (some of which have function tracing enabled)"

* 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/mm/mem_encrypt: Disable all instrumentation for early SME setup
2019-05-04 | netlink: add validation of NLA_F_NESTED flag | Michal Kubecek | 1 | -0/+15
Add a new validation flag NL_VALIDATE_NESTED which adds three consistency checks of the NLA_F_NESTED flag:
  - the flag is set on attributes with NLA_NESTED{,_ARRAY} policy
  - the flag is not set on attributes with other policies except NLA_UNSPEC
  - the flag is set on the attribute passed to nla_parse_nested()
Signed-off-by: Michal Kubecek <[email protected]>
v2: change error messages to mention NLA_F_NESTED explicitly
Reviewed-by: Johannes Berg <[email protected]>
Reviewed-by: David Ahern <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
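For illustration, a minimal policy sketch of what the new check enforces. The MYDRV_* attribute names are invented for this example; NLA_NESTED, NLA_U32 and struct nla_policy are the real netlink pieces:

    enum {
            MYDRV_ATTR_UNSPEC,
            MYDRV_ATTR_PORT,        /* u32 */
            MYDRV_ATTR_CONFIG,      /* nested block of attributes */
            __MYDRV_ATTR_MAX,
    };
    #define MYDRV_ATTR_MAX (__MYDRV_ATTR_MAX - 1)

    static const struct nla_policy mydrv_policy[MYDRV_ATTR_MAX + 1] = {
            [MYDRV_ATTR_PORT]   = { .type = NLA_U32 },
            /* With NL_VALIDATE_NESTED the sender must set NLA_F_NESTED in
             * nla_type for this attribute, and must not set it on
             * MYDRV_ATTR_PORT.
             */
            [MYDRV_ATTR_CONFIG] = { .type = NLA_NESTED },
    };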
2019-05-04 | netlink: set bad attribute also on maxtype check | Michal Kubecek | 1 | -1/+2
The check that attribute type is within 0...maxtype range in __nla_validate_parse() sets only error message but not bad_attr in extack. Set also bad_attr to tell userspace which attribute failed validation. Signed-off-by: Michal Kubecek <[email protected]> Reviewed-by: Johannes Berg <[email protected]> Reviewed-by: David Ahern <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2019-05-03 | lib: Add support for generic packing operations | Vladimir Oltean | 3 | -0/+231
This provides a unified API for accessing register bit fields regardless of memory layout. The basic unit of data for these API functions is the u64. The process of transforming a u64 from native CPU encoding into the peripheral's encoding is called 'pack', and transforming it from peripheral to native CPU encoding is 'unpack'.
Signed-off-by: Vladimir Oltean <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
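A minimal usage sketch, assuming the packing() prototype and quirk flags from include/linux/packing.h at the time (PACK/UNPACK ops, QUIRK_LSW32_IS_FIRST); the field offsets and the pack_example() wrapper are arbitrary choices for illustration:

    #include <linux/packing.h>

    /* Pack a 6-bit field occupying bits 19..14 of the buffer, then read it
     * back.  A sketch only; see include/linux/packing.h for the
     * authoritative prototype and quirk definitions.
     */
    static int pack_example(void *pbuf, size_t pbuflen)
    {
            u64 val = 0x2a;
            int err;

            /* CPU-native u64 -> peripheral layout */
            err = packing(pbuf, &val, 19, 14, pbuflen, PACK, QUIRK_LSW32_IS_FIRST);
            if (err)
                    return err;

            /* peripheral layout -> CPU-native u64 */
            return packing(pbuf, &val, 19, 14, pbuflen, UNPACK, QUIRK_LSW32_IS_FIRST);
    }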
2019-05-03 | kobject: clean up the kobject add documentation a bit more | Greg Kroah-Hartman | 1 | -2/+6
Commit 1fd7c3b438a2 ("kobject: Improve doc clarity kobject_init_and_add()") tried to provide more clarity, but the reference to kobject_del() was incorrect. Fix that up by removing that line, and hopefully be more explicit as to exactly what needs to happen here once you register a kobject with the kobject core. Acked-by: Tobin C. Harding <[email protected]> Fixes: 1fd7c3b438a2 ("kobject: Improve doc clarity kobject_init_and_add()") Signed-off-by: Greg Kroah-Hartman <[email protected]>
2019-05-02 | Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net | David S. Miller | 2 | -3/+4
Three trivial overlapping conflicts. Signed-off-by: David S. Miller <[email protected]>
2019-05-02 | kobject: Fix kernel-doc comment first line | Tobin C. Harding | 1 | -21/+22
kernel-doc comments have a prescribed format. This includes parentheses on the function name. To be _particularly_ correct we should also capitalise the brief description and terminate it with a period. In preparation for adding/updating kernel-doc function comments, clean up the ones currently present.
Signed-off-by: Tobin C. Harding <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
2019-05-02 | kobject: Remove docstring reference to kset | Tobin C. Harding | 1 | -3/+2
Currently the docstring for kobject_get_path() mentions 'kset'. The kset is not used in the function callchain starting from this function. Remove docstring reference to kset from the function kobject_get_path(). Signed-off-by: Tobin C. Harding <[email protected]> Signed-off-by: Greg Kroah-Hartman <[email protected]>
2019-05-01 | RDMA/core: Introduce RDMA subsystem ibdev_* print functions | Gal Pressman | 1 | -0/+37
Similarly to dev/netdev/etc printk helpers, add standard printk helpers for the RDMA subsystem. Example output:
  efa 0000:00:06.0 efa_0: Hello World!
  efa_0: Hello World! (no parent device set)
  (NULL ib_device): Hello World! (ibdev is NULL)
Cc: Jason Baron <[email protected]>
Suggested-by: Jason Gunthorpe <[email protected]>
Suggested-by: Leon Romanovsky <[email protected]>
Signed-off-by: Gal Pressman <[email protected]>
Reviewed-by: Leon Romanovsky <[email protected]>
Reviewed-by: Shiraz Saleem <[email protected]>
Reviewed-by: Dennis Dalessandro <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>
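A usage sketch of the new helpers; the announce() wrapper is hypothetical, while ibdev_info()/ibdev_warn()/ibdev_err() are the helpers introduced here:

    #include <rdma/ib_verbs.h>

    /* Hypothetical caller.  The helpers pick the best available prefix:
     * parent device plus ibdev name, ibdev name alone, or "(NULL ib_device)"
     * as in the example output above.
     */
    static void announce(struct ib_device *ibdev)
    {
            ibdev_info(ibdev, "Hello World!\n");
            ibdev_warn(ibdev, "something looks off\n");
            ibdev_err(ibdev, "hard failure\n");
    }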
2019-05-01 | kobject: fix dereference before null check on kobj | Colin Ian King | 1 | -1/+2
The kobj pointer is being null-checked, so potentially it could be null; however, the ktype declaration before the null check dereferences kobj, hence we have a potential null pointer dereference. Fix this by moving the assignment of ktype to after kobj has been null-checked.
Addresses-Coverity: ("Dereference before null check")
Fixes: aa30f47cf666 ("kobject: Add support for default attribute groups to kobj_type")
Signed-off-by: Colin Ian King <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
2019-04-30 | x86/mm/mem_encrypt: Disable all instrumentation for early SME setup | Gary Hook | 1 | -0/+11
Enablement of AMD's Secure Memory Encryption feature is determined very early after start_kernel() is entered. Part of this procedure involves scanning the command line for the parameter 'mem_encrypt'. To determine intended state, the function sme_enable() uses library functions cmdline_find_option() and strncmp(). Their use occurs early enough such that it cannot be assumed that any instrumentation subsystem is initialized. For example, making calls to a KASAN-instrumented function before KASAN is set up will result in the use of uninitialized memory and a boot failure. When AMD's SME support is enabled, conditionally disable instrumentation of these dependent functions in lib/string.c and arch/x86/lib/cmdline.c. [ bp: Get rid of intermediary nostackp var and cleanup whitespace. ] Fixes: aca20d546214 ("x86/mm: Add support to make use of Secure Memory Encryption") Reported-by: Li RongQing <[email protected]> Signed-off-by: Gary R Hook <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Andy Shevchenko <[email protected]> Cc: Boris Brezillon <[email protected]> Cc: Coly Li <[email protected]> Cc: "[email protected]" <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Kees Cook <[email protected]> Cc: Kent Overstreet <[email protected]> Cc: "[email protected]" <[email protected]> Cc: Masahiro Yamada <[email protected]> Cc: Matthew Wilcox <[email protected]> Cc: "[email protected]" <[email protected]> Cc: "[email protected]" <[email protected]> Cc: Sebastian Andrzej Siewior <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: x86-ml <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2019-04-29 | lib/vsprintf: Make function pointer_string static | YueHaibing | 1 | -2/+3
Fix sparse warning: lib/vsprintf.c:673:6: warning: symbol 'pointer_string' was not declared. Should it be static? Link: http://lkml.kernel.org/r/[email protected] To: <[email protected]> To: <[email protected]> To: <[email protected]> To: <[email protected]> Signed-off-by: YueHaibing <[email protected]> Signed-off-by: Petr Mladek <[email protected]>
2019-04-29 | stacktrace: Provide common infrastructure | Thomas Gleixner | 1 | -0/+4
All architectures which support stacktrace carry duplicated code and do the stack storage and filtering at the architecture side. Provide a consolidated interface with a callback function for consuming the stack entries provided by the architecture specific stack walker. This removes lots of duplicated code and allows to implement better filtering than 'skip number of entries' in the future without touching any architecture specific code. Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Josh Poimboeuf <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: [email protected] Cc: Steven Rostedt <[email protected]> Cc: Alexander Potapenko <[email protected]> Cc: Alexey Dobriyan <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Christoph Lameter <[email protected]> Cc: Pekka Enberg <[email protected]> Cc: [email protected] Cc: David Rientjes <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Dmitry Vyukov <[email protected]> Cc: Andrey Ryabinin <[email protected]> Cc: [email protected] Cc: Mike Rapoport <[email protected]> Cc: Akinobu Mita <[email protected]> Cc: Christoph Hellwig <[email protected]> Cc: [email protected] Cc: Robin Murphy <[email protected]> Cc: Marek Szyprowski <[email protected]> Cc: Johannes Thumshirn <[email protected]> Cc: David Sterba <[email protected]> Cc: Chris Mason <[email protected]> Cc: Josef Bacik <[email protected]> Cc: [email protected] Cc: [email protected] Cc: Mike Snitzer <[email protected]> Cc: Alasdair Kergon <[email protected]> Cc: Daniel Vetter <[email protected]> Cc: [email protected] Cc: Joonas Lahtinen <[email protected]> Cc: Maarten Lankhorst <[email protected]> Cc: [email protected] Cc: David Airlie <[email protected]> Cc: Jani Nikula <[email protected]> Cc: Rodrigo Vivi <[email protected]> Cc: Tom Zanussi <[email protected]> Cc: Miroslav Benes <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
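A sketch of the resulting storage-array interface, assuming the stack_trace_save()/stack_trace_print() names this series introduces; the wrapper function is illustrative only:

    #include <linux/stacktrace.h>

    static void dump_current_stack(void)
    {
            unsigned long entries[16];
            unsigned int nr;

            /* Fill a plain array; skip 0 frames at the top. */
            nr = stack_trace_save(entries, ARRAY_SIZE(entries), 0);

            /* Print what was captured, indented by one space. */
            stack_trace_print(entries, nr, 1);
    }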
2019-04-29 | lib/stackdepot: Remove obsolete functions | Thomas Gleixner | 1 | -20/+0
No more users of the struct stack_trace based interfaces. Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Josh Poimboeuf <[email protected]> Acked-by: Alexander Potapenko <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Steven Rostedt <[email protected]> Cc: Alexey Dobriyan <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Christoph Lameter <[email protected]> Cc: Pekka Enberg <[email protected]> Cc: [email protected] Cc: David Rientjes <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Dmitry Vyukov <[email protected]> Cc: Andrey Ryabinin <[email protected]> Cc: [email protected] Cc: Mike Rapoport <[email protected]> Cc: Akinobu Mita <[email protected]> Cc: Christoph Hellwig <[email protected]> Cc: [email protected] Cc: Robin Murphy <[email protected]> Cc: Marek Szyprowski <[email protected]> Cc: Johannes Thumshirn <[email protected]> Cc: David Sterba <[email protected]> Cc: Chris Mason <[email protected]> Cc: Josef Bacik <[email protected]> Cc: [email protected] Cc: [email protected] Cc: Mike Snitzer <[email protected]> Cc: Alasdair Kergon <[email protected]> Cc: Daniel Vetter <[email protected]> Cc: [email protected] Cc: Joonas Lahtinen <[email protected]> Cc: Maarten Lankhorst <[email protected]> Cc: [email protected] Cc: David Airlie <[email protected]> Cc: Jani Nikula <[email protected]> Cc: Rodrigo Vivi <[email protected]> Cc: Tom Zanussi <[email protected]> Cc: Miroslav Benes <[email protected]> Cc: [email protected] Link: https://lkml.kernel.org/r/[email protected]
2019-04-29 | fault-inject: Simplify stacktrace retrieval | Thomas Gleixner | 1 | -9/+3
Replace the indirection through struct stack_trace with an invocation of the storage array based interface. Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Josh Poimboeuf <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Akinobu Mita <[email protected]> Cc: Steven Rostedt <[email protected]> Cc: Alexander Potapenko <[email protected]> Cc: Alexey Dobriyan <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Christoph Lameter <[email protected]> Cc: Pekka Enberg <[email protected]> Cc: [email protected] Cc: David Rientjes <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Dmitry Vyukov <[email protected]> Cc: Andrey Ryabinin <[email protected]> Cc: [email protected] Cc: Mike Rapoport <[email protected]> Cc: Christoph Hellwig <[email protected]> Cc: [email protected] Cc: Robin Murphy <[email protected]> Cc: Marek Szyprowski <[email protected]> Cc: Johannes Thumshirn <[email protected]> Cc: David Sterba <[email protected]> Cc: Chris Mason <[email protected]> Cc: Josef Bacik <[email protected]> Cc: [email protected] Cc: [email protected] Cc: Mike Snitzer <[email protected]> Cc: Alasdair Kergon <[email protected]> Cc: Daniel Vetter <[email protected]> Cc: [email protected] Cc: Joonas Lahtinen <[email protected]> Cc: Maarten Lankhorst <[email protected]> Cc: [email protected] Cc: David Airlie <[email protected]> Cc: Jani Nikula <[email protected]> Cc: Rodrigo Vivi <[email protected]> Cc: Tom Zanussi <[email protected]> Cc: Miroslav Benes <[email protected]> Cc: [email protected] Link: https://lkml.kernel.org/r/[email protected]
2019-04-29 | lib/stackdepot: Provide functions which operate on plain storage arrays | Thomas Gleixner | 1 | -19/+51
The struct stack_trace indirection in the stack depot functions is a truly pointless excercise which requires horrible code at the callsites. Provide interfaces based on plain storage arrays. Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Josh Poimboeuf <[email protected]> Acked-by: Alexander Potapenko <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Steven Rostedt <[email protected]> Cc: Alexey Dobriyan <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Christoph Lameter <[email protected]> Cc: Pekka Enberg <[email protected]> Cc: [email protected] Cc: David Rientjes <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Dmitry Vyukov <[email protected]> Cc: Andrey Ryabinin <[email protected]> Cc: [email protected] Cc: Mike Rapoport <[email protected]> Cc: Akinobu Mita <[email protected]> Cc: Christoph Hellwig <[email protected]> Cc: [email protected] Cc: Robin Murphy <[email protected]> Cc: Marek Szyprowski <[email protected]> Cc: Johannes Thumshirn <[email protected]> Cc: David Sterba <[email protected]> Cc: Chris Mason <[email protected]> Cc: Josef Bacik <[email protected]> Cc: [email protected] Cc: [email protected] Cc: Mike Snitzer <[email protected]> Cc: Alasdair Kergon <[email protected]> Cc: Daniel Vetter <[email protected]> Cc: [email protected] Cc: Joonas Lahtinen <[email protected]> Cc: Maarten Lankhorst <[email protected]> Cc: [email protected] Cc: David Airlie <[email protected]> Cc: Jani Nikula <[email protected]> Cc: Rodrigo Vivi <[email protected]> Cc: Tom Zanussi <[email protected]> Cc: Miroslav Benes <[email protected]> Cc: [email protected] Link: https://lkml.kernel.org/r/[email protected]
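A sketch of the plain-array stackdepot interface added here, assuming the stack_depot_save()/stack_depot_fetch() signatures from this series; the two wrapper functions are illustrative:

    #include <linux/stackdepot.h>
    #include <linux/stacktrace.h>

    static depot_stack_handle_t record_stack(void)
    {
            unsigned long entries[16];
            unsigned int nr;

            nr = stack_trace_save(entries, ARRAY_SIZE(entries), 2);
            /* Store the plain array; returns 0 if the depot allocation failed. */
            return stack_depot_save(entries, nr, GFP_NOWAIT);
    }

    static void print_recorded(depot_stack_handle_t handle)
    {
            unsigned long *entries;
            unsigned int nr;

            nr = stack_depot_fetch(handle, &entries);
            stack_trace_print(entries, nr, 0);
    }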
2019-04-28 | kobject: Improve doc clarity kobject_init_and_add() | Tobin C. Harding | 1 | -3/+6
Function kobject_init_and_add() is currently misused in a number of places in the kernel. On error return kobject_put() must be called but is at times not. Make the function documentation more explicit about calling kobject_put() in the error path. Signed-off-by: Tobin C. Harding <[email protected]> Signed-off-by: Greg Kroah-Hartman <[email protected]>
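A minimal sketch of the error handling the updated documentation asks for; struct foo, foo_ktype and foo_register() are made up, the kobject calls are real:

    struct foo {
            int id;
            struct kobject kobj;
    };

    static struct kobj_type foo_ktype;      /* .release etc. assumed elsewhere */

    static int foo_register(struct foo *foo, struct kobject *parent)
    {
            int ret;

            ret = kobject_init_and_add(&foo->kobj, &foo_ktype, parent,
                                       "foo%d", foo->id);
            if (ret)
                    /* On failure, undo with kobject_put(), not kfree(). */
                    kobject_put(&foo->kobj);

            return ret;
    }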
2019-04-28 | kobject: Improve docs for kobject_add/del | Tobin C. Harding | 1 | -5/+12
There is currently some confusion on how to wind back kobject_init_and_add() during the error paths in code that uses this function. Add documentation to kobject_add() and kobject_del() to help clarify the usage. Signed-off-by: Tobin C. Harding <[email protected]> Signed-off-by: Greg Kroah-Hartman <[email protected]>
2019-04-27 | netlink: add strict parsing for future attributes | Johannes Berg | 1 | -0/+4
Unfortunately, we cannot add strict parsing for all attributes, as that would break existing userspace. We currently warn about it, but that's about all we can do. For new attributes, however, the story is better: nobody is using them, so we can reject bad sizes. Also, for new attributes, we need not accept them when the policy doesn't declare their usage. David Ahern and I went back and forth on how to best encode this, and the best way we found was to have a "boundary type", from which point on new attributes have all possible validation applied, and NLA_UNSPEC is rejected. As we didn't want to add another argument to all functions that get a netlink policy, the workaround is to encode that boundary in the first entry of the policy array (which is for type 0 and thus probably not really valid anyway). I put it into the validation union for the rare possibility that somebody is actually using attribute 0, which would continue to work fine unless they tried to use the extended validation, which isn't likely. We also didn't find any in-tree users with type 0. The reason for setting the "start strict here" attribute is that we never really need to start strict from 0, which is invalid anyway (or in legacy families where that isn't true, it cannot be set to strict), so we can thus reserve the value 0 for "don't do this check" and don't have to add the tag to all policies right now. Thus, policies can now opt in to this validation, which we should do for all existing policies, at least when adding new attributes. Note that entirely *new* policies won't need to set it, as the use of that should be using nla_parse()/nlmsg_parse() etc. which anyway do fully strict validation now, regardless of this. So in effect, this patch only covers the "existing command with new attribute" case. Signed-off-by: Johannes Berg <[email protected]> Signed-off-by: David S. Miller <[email protected]>
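A hedged sketch of the opt-in described above. The attribute names are invented; strict_start_type in entry 0 is how the merged code spells the boundary, but treat the exact field name as an assumption if your tree differs:

    /* Attributes up to EXAMPLE_ATTR_PORT keep legacy (liberal) handling;
     * anything from EXAMPLE_ATTR_NEW_FEATURE onwards gets the full strict
     * validation, and NLA_UNSPEC entries there are rejected.
     */
    static const struct nla_policy example_policy[EXAMPLE_ATTR_MAX + 1] = {
            [0]                        = { .strict_start_type = EXAMPLE_ATTR_NEW_FEATURE },
            [EXAMPLE_ATTR_PORT]        = { .type = NLA_U32 },
            [EXAMPLE_ATTR_NEW_FEATURE] = { .type = NLA_U8 },
    };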
2019-04-27 | netlink: make validation more configurable for future strictness | Johannes Berg | 1 | -83/+88
We currently have two levels of strict validation:

 1) liberal (default)
     - undefined (type >= max) & NLA_UNSPEC attributes accepted
     - attribute length >= expected accepted
     - garbage at end of message accepted
 2) strict (opt-in)
     - NLA_UNSPEC attributes accepted
     - attribute length >= expected accepted

Split out parsing strictness into four different options:
 * TRAILING     - check that there's no trailing data after parsing attributes (in message or nested)
 * MAXTYPE      - reject attrs > max known type
 * UNSPEC       - reject attributes with NLA_UNSPEC policy entries
 * STRICT_ATTRS - strictly validate attribute size

The default for future things should be *everything*. The current *_strict() is a combination of TRAILING and MAXTYPE, and is renamed to _deprecated_strict(). The current regular parsing has none of this, and is renamed to *_parse_deprecated().

Additionally it allows us to selectively set one of the new flags even on old policies. Notably, the UNSPEC flag could be useful in this case, since it can be arranged (by filling in the policy) to not be an incompatible userspace ABI change, but would then going forward prevent forgetting attribute entries. Similar can apply to the POLICY flag.

We end up with the following renames:
 * nla_parse           -> nla_parse_deprecated
 * nla_parse_strict    -> nla_parse_deprecated_strict
 * nlmsg_parse         -> nlmsg_parse_deprecated
 * nlmsg_parse_strict  -> nlmsg_parse_deprecated_strict
 * nla_parse_nested    -> nla_parse_nested_deprecated
 * nla_validate_nested -> nla_validate_nested_deprecated

Using spatch, of course:

    @@
    expression TB, MAX, HEAD, LEN, POL, EXT;
    @@
    -nla_parse(TB, MAX, HEAD, LEN, POL, EXT)
    +nla_parse_deprecated(TB, MAX, HEAD, LEN, POL, EXT)

    @@
    expression NLH, HDRLEN, TB, MAX, POL, EXT;
    @@
    -nlmsg_parse(NLH, HDRLEN, TB, MAX, POL, EXT)
    +nlmsg_parse_deprecated(NLH, HDRLEN, TB, MAX, POL, EXT)

    @@
    expression NLH, HDRLEN, TB, MAX, POL, EXT;
    @@
    -nlmsg_parse_strict(NLH, HDRLEN, TB, MAX, POL, EXT)
    +nlmsg_parse_deprecated_strict(NLH, HDRLEN, TB, MAX, POL, EXT)

    @@
    expression TB, MAX, NLA, POL, EXT;
    @@
    -nla_parse_nested(TB, MAX, NLA, POL, EXT)
    +nla_parse_nested_deprecated(TB, MAX, NLA, POL, EXT)

    @@
    expression START, MAX, POL, EXT;
    @@
    -nla_validate_nested(START, MAX, POL, EXT)
    +nla_validate_nested_deprecated(START, MAX, POL, EXT)

    @@
    expression NLH, HDRLEN, MAX, POL, EXT;
    @@
    -nlmsg_validate(NLH, HDRLEN, MAX, POL, EXT)
    +nlmsg_validate_deprecated(NLH, HDRLEN, MAX, POL, EXT)

For this patch, don't actually add the strict, non-renamed versions yet so that it breaks compile if I get it wrong.

Also, while at it, make nla_validate and nla_parse go down to a common __nla_validate_parse() function to avoid code duplication.

Ultimately, this allows us to have very strict validation for every new caller of nla_parse()/nlmsg_parse() etc as re-introduced in the next patch, while existing things will continue to work as is. In effect then, this adds fully strict validation for any new command.
Signed-off-by: Johannes Berg <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
2019-04-27 | netlink: add NLA_MIN_LEN | Johannes Berg | 1 | -1/+8
Rather than using NLA_UNSPEC for this type of thing, use NLA_MIN_LEN so we can make NLA_UNSPEC be NLA_REJECT under certain conditions for future attributes. While at it, also use NLA_EXACT_LEN for the struct example. Signed-off-by: Johannes Berg <[email protected]> Signed-off-by: David S. Miller <[email protected]>
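An illustrative policy fragment, assuming NLA_MIN_LEN (new here) and the existing NLA_EXACT_LEN both take the expected length in .len; the attribute and struct names are made up:

    static const struct nla_policy blob_policy[EXAMPLE_ATTR_MAX + 1] = {
            /* at least sizeof(struct foo_hdr) bytes, possibly more */
            [EXAMPLE_ATTR_HDR] = { .type = NLA_MIN_LEN,   .len = sizeof(struct foo_hdr) },
            /* must be exactly sizeof(struct foo_cfg) bytes */
            [EXAMPLE_ATTR_CFG] = { .type = NLA_EXACT_LEN, .len = sizeof(struct foo_cfg) },
    };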
2019-04-26 | lib/test_vmalloc.c: do not create cpumask_t variable on stack | Uladzislau Rezki (Sony) | 1 | -3/+3
On my "Intel(R) Xeon(R) W-2135 CPU @ 3.70GHz" system (12 CPUs) I get the following warning from the compiler about the frame size:
  warning: the frame size of 1096 bytes is larger than 1024 bytes [-Wframe-larger-than=]
The size of cpumask_t depends on the number of CPUs, therefore just make use of cpumask_of() as the second argument of set_cpus_allowed_ptr().
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Uladzislau Rezki (Sony) <[email protected]>
Reviewed-by: Andrew Morton <[email protected]>
Reviewed-by: Roman Gushchin <[email protected]>
Cc: Uladzislau Rezki <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Cc: Thomas Garnier <[email protected]>
Cc: Oleksiy Avramchenko <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Joel Fernandes <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Tejun Heo <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
2019-04-26 | lib/Kconfig.debug: fix build error without CONFIG_BLOCK | YueHaibing | 1 | -0/+1
If CONFIG_TEST_KMOD is set to M while CONFIG_BLOCK is not set, XFS and BTRFS cannot be compiled successfully.
Link: http://lkml.kernel.org/r/[email protected]
Fixes: d9c6a72d6fa2 ("kmod: add test driver to stress test the module loader")
Signed-off-by: YueHaibing <[email protected]>
Reported-by: Hulk Robot <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Cc: Masahiro Yamada <[email protected]>
Cc: Petr Mladek <[email protected]>
Cc: Andy Shevchenko <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Cc: Joe Lawrence <[email protected]>
Cc: Robin Murphy <[email protected]>
Cc: Luis Chamberlain <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
2019-04-26 | vsprintf: Limit the length of inlined error messages | Petr Mladek | 1 | -12/+27
The inlined error messages must be used carefully because they need to fit into the given buffer. Handle them using a custom wrapper that makes people aware of the problem. Also define a reasonable hard limit to avoid a completely insane usage. Suggested-by: Sergey Senozhatsky <[email protected]> Link: http://lkml.kernel.org/r/[email protected] To: Rasmus Villemoes <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: "Tobin C . Harding" <[email protected]> Cc: Joe Perches <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Michal Hocko <[email protected]> Cc: Steven Rostedt <[email protected]> Cc: Sergey Senozhatsky <[email protected]> Cc: [email protected] Reviewed-by: Sergey Senozhatsky <[email protected]> Reviewed-by: Andy Shevchenko <[email protected]> Signed-off-by: Petr Mladek <[email protected]>
2019-04-26 | vsprintf: Avoid confusion between invalid address and value | Petr Mladek | 1 | -1/+1
We are able to detect invalid values handled by %p[iI] printk specifier. The current error message is "invalid address". It might cause confusion against "(efault)" reported by the generic valid_pointer_address() check. Let's unify the style and use the more appropriate error code description "(einval)". Link: http://lkml.kernel.org/r/[email protected] To: Rasmus Villemoes <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: "Tobin C . Harding" <[email protected]> Cc: Joe Perches <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Michal Hocko <[email protected]> Cc: Steven Rostedt <[email protected]> Cc: Sergey Senozhatsky <[email protected]> Cc: [email protected] Reviewed-by: Sergey Senozhatsky <[email protected]> Reviewed-by: Andy Shevchenko <[email protected]> Signed-off-by: Petr Mladek <[email protected]>
2019-04-26 | vsprintf: Prevent crash when dereferencing invalid pointers | Petr Mladek | 2 | -36/+122
We already prevent a crash when dereferencing some obviously broken pointers. But the handling is not consistent. Sometimes we print "(null)" only for a pure NULL pointer, sometimes for pointers in the first page and sometimes also for pointers in the last page (error codes).
Note that printk() calls this code under logbuf_lock. Any recursive printks are redirected to the printk_safe implementation and the messages are stored into per-CPU buffers. These buffers might be eventually flushed in printk_safe_flush_on_panic() but it is not guaranteed.
This patch adds a check using probe_kernel_read(). It is not a foolproof test. But it should help to see the error message in 99% of situations where the kernel would silently crash otherwise. Also it makes the error handling unified for "%s" and the many %p* specifiers that need to read the data from a given address. We print:
  + (null)   when accessing data on a pure NULL address
  + (efault) when accessing data on an invalid address
It does not affect the %p* specifiers that just print the given address in some form, namely %pF, %pf, %pS, %ps, %pB, %pK, %px, and plain %p.
Note that we print (efault) for security reasons. In fact, the real address can be seen only by %px or eventually %pK.
Link: http://lkml.kernel.org/r/[email protected]
To: Rasmus Villemoes <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: "Tobin C . Harding" <[email protected]>
Cc: Joe Perches <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Sergey Senozhatsky <[email protected]>
Cc: [email protected]
Reviewed-by: Sergey Senozhatsky <[email protected]>
Reviewed-by: Andy Shevchenko <[email protected]>
Signed-off-by: Petr Mladek <[email protected]>
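A sketch of the kind of check described, not necessarily the exact helper merged into lib/vsprintf.c (the helper name and error strings here may differ from the final code):

    #include <linux/uaccess.h>

    /* Returns NULL if ptr looks dereferenceable, otherwise the string that
     * should be printed instead of the pointed-to data.
     */
    static const char *check_pointer_msg(const void *ptr)
    {
            unsigned char byte;

            if (!ptr)
                    return "(null)";

            /* Read one byte without risking a fault-induced crash. */
            if (probe_kernel_read(&byte, ptr, sizeof(byte)))
                    return "(efault)";

            return NULL;
    }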
2019-04-26 | vsprintf: Consolidate handling of unknown pointer specifiers | Petr Mladek | 2 | -13/+18
There are a few printk formats that make sense only with two or more specifiers. Also some specifiers make sense only when a kernel feature is enabled.
The handling of unknown specifiers is inconsistent and not helpful. Using WARN() looks like overkill for this type of error. pr_warn() is not good either: it would be handled via the printk_safe buffer and it might be hard to match it with the problematic string.
A reasonable compromise seems to be writing the unknown format specifier into the original string with a question mark, for example (%pC?). It should be self-explanatory enough. Note that it is in brackets to follow the (null) style.
Note that this introduces a warning that the test_hashed() function is unused. It is going to be used again by a later patch.
Link: http://lkml.kernel.org/r/[email protected]
To: Rasmus Villemoes <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: "Tobin C . Harding" <[email protected]>
Cc: Joe Perches <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Sergey Senozhatsky <[email protected]>
Cc: [email protected]
Reviewed-by: Sergey Senozhatsky <[email protected]>
Reviewed-by: Andy Shevchenko <[email protected]>
Signed-off-by: Petr Mladek <[email protected]>
2019-04-26 | vsprintf: Factor out %pO handler as kobject_string() | Petr Mladek | 1 | -5/+12
Move code from the long pointer() function. We are going to improve error handling that will make it even more complicated. This patch does not change the existing behavior. Link: http://lkml.kernel.org/r/[email protected] To: Rasmus Villemoes <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: "Tobin C . Harding" <[email protected]> Cc: Joe Perches <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Michal Hocko <[email protected]> Cc: Steven Rostedt <[email protected]> Cc: Sergey Senozhatsky <[email protected]> Cc: [email protected] Cc: Kees Cook <[email protected]> Reviewed-by: Sergey Senozhatsky <[email protected]> Reviewed-by: Andy Shevchenko <[email protected]> Signed-off-by: Petr Mladek <[email protected]>
2019-04-26 | vsprintf: Factor out %pV handler as va_format() | Petr Mladek | 1 | -9/+12
Move the code from the long pointer() function. We are going to improve error handling that will make it more complicated. This patch does not change the existing behavior. Link: http://lkml.kernel.org/r/[email protected] To: Rasmus Villemoes <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: "Tobin C . Harding" <[email protected]> Cc: Joe Perches <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Michal Hocko <[email protected]> Cc: Steven Rostedt <[email protected]> Cc: Sergey Senozhatsky <[email protected]> Cc: [email protected] Reviewed-by: Sergey Senozhatsky <[email protected]> Reviewed-by: Andy Shevchenko <[email protected]> Signed-off-by: Petr Mladek <[email protected]>
2019-04-26 | vsprintf: Factor out %p[iI] handler as ip_addr_string() | Petr Mladek | 1 | -22/+30
Move the non-trivial code from the long pointer() function. We are going to improve error handling that will make it even more complicated. This patch does not change the existing behavior. Link: http://lkml.kernel.org/r/[email protected] To: Rasmus Villemoes <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: "Tobin C . Harding" <[email protected]> Cc: Joe Perches <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Michal Hocko <[email protected]> Cc: Steven Rostedt <[email protected]> Cc: Sergey Senozhatsky <[email protected]> Cc: [email protected] Reviewed-by: Sergey Senozhatsky <[email protected]> Reviewed-by: Andy Shevchenko <[email protected]> Signed-off-by: Petr Mladek <[email protected]>
2019-04-26 | vsprintf: Do not check address of well-known strings | Petr Mladek | 1 | -37/+44
We are going to check the address using probe_kernel_address(). It will be more expensive and it does not make sense for well known address. This patch splits the string() function. The variant without the check is then used on locations that handle string constants or strings defined as local variables. This patch does not change the existing behavior. Link: http://lkml.kernel.org/r/[email protected] To: Rasmus Villemoes <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: "Tobin C . Harding" <[email protected]> Cc: Joe Perches <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Michal Hocko <[email protected]> Cc: Steven Rostedt <[email protected]> Cc: Sergey Senozhatsky <[email protected]> Cc: [email protected] Reviewed-by: Andy Shevchenko <[email protected]> Signed-off-by: Petr Mladek <[email protected]> Reviewed-by: Sergey Senozhatsky <[email protected]>
2019-04-26 | vsprintf: Consistent %pK handling for kptr_restrict == 0 | Petr Mladek | 1 | -4/+2
restricted_pointer() pretends that it prints the address when kptr_restrict is set to zero. But it is never called in this situation. Instead, pointer() falls back to ptr_to_id() and hashes the pointer.
This patch removes the potential confusion: kptr_restrict is checked only in restricted_pointer().
It actually fixes a small race when the address might get printed unhashed:

  CPU0                                  CPU1

  pointer()
    if (!kptr_restrict)
       /* for example set to 2 */
    restricted_pointer()
                                        /* echo 0 >/proc/sys/kernel/kptr_restrict */
                                        proc_dointvec_minmax_sysadmin()
                                          kptr_restrict = 0;
      switch(kptr_restrict)
        case 0:
          break;
    number()

Fixes: ef0010a30935de4e0211 ("vsprintf: don't use 'restricted_pointer()' when not restricting")
Link: http://lkml.kernel.org/r/[email protected]
To: Andy Shevchenko <[email protected]>
To: Rasmus Villemoes <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: "Tobin C . Harding" <[email protected]>
Cc: Joe Perches <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Sergey Senozhatsky <[email protected]>
Cc: [email protected]
Cc: Kees Cook <[email protected]>
Reviewed-by: Andy Shevchenko <[email protected]>
Reviewed-by: Steven Rostedt (VMware) <[email protected]>
Reviewed-by: Sergey Senozhatsky <[email protected]>
Signed-off-by: Petr Mladek <[email protected]>
2019-04-26 | vsprintf: Shuffle restricted_pointer() | Petr Mladek | 1 | -49/+49
This is just a preparation step for further changes. The patch does not change the code. Link: http://lkml.kernel.org/r/[email protected] To: Rasmus Villemoes <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: "Tobin C . Harding" <[email protected]> Cc: Joe Perches <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Michal Hocko <[email protected]> Cc: Steven Rostedt <[email protected]> Cc: Sergey Senozhatsky <[email protected]> Cc: [email protected] Reviewed-by: Andy Shevchenko <[email protected]> Reviewed-by: Sergey Senozhatsky <[email protected]> Signed-off-by: Petr Mladek <[email protected]>
2019-04-25 | Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net | David S. Miller | 1 | -3/+3
Two easy cases of overlapping changes. Signed-off-by: David S. Miller <[email protected]>
2019-04-25 | kobject: Add support for default attribute groups to kobj_type | Kimberly Brown | 1 | -0/+14
kobj_type currently uses a list of individual attributes to store default attributes. Attribute groups are more flexible than a list of attributes because groups provide support for attribute visibility. So, add support for default attribute groups to kobj_type. In future patches, the existing uses of kobj_type’s attribute list will be converted to attribute groups. When that is complete, kobj_type’s attribute list, “default_attrs”, will be removed. Signed-off-by: Kimberly Brown <[email protected]> Signed-off-by: Greg Kroah-Hartman <[email protected]>
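A minimal sketch of a kobj_type using the new field; the demo_* names are invented, while ATTRIBUTE_GROUPS() and .default_groups are the real mechanism:

    #include <linux/kobject.h>
    #include <linux/sysfs.h>

    static ssize_t value_show(struct kobject *kobj, struct kobj_attribute *attr,
                              char *buf)
    {
            return sprintf(buf, "42\n");
    }
    static struct kobj_attribute value_attr = __ATTR_RO(value);

    static struct attribute *demo_attrs[] = {
            &value_attr.attr,
            NULL,
    };
    ATTRIBUTE_GROUPS(demo);         /* defines demo_groups */

    static struct kobj_type demo_ktype = {
            .sysfs_ops      = &kobj_sysfs_ops,
            /* New here: attribute groups instead of a flat default_attrs list. */
            .default_groups = demo_groups,
    };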
2019-04-25 | lib/siphash.c: mark expected switch fall-throughs | Stephen Rothwell | 1 | -18/+18
In preparation to enabling -Wimplicit-fallthrough, mark switch cases where we are expecting to fall through. This patch aims to suppress up to 18 missing-break-in-switch false positives on some architectures. Cc: Gustavo A. R. Silva <[email protected]> Cc: Kees Cook <[email protected]> Signed-off-by: Stephen Rothwell <[email protected]> Reviewed-by: Jason A. Donenfeld <[email protected]> Signed-off-by: Greg Kroah-Hartman <[email protected]>
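Not the actual lib/siphash.c hunks, just the annotation pattern the patch applies, using the comment form that -Wimplicit-fallthrough recognizes; fold_tail() is an invented example:

    #include <linux/types.h>

    static u64 fold_tail(const u8 *p, size_t left)
    {
            u64 b = 0;

            switch (left) {
            case 3:
                    b |= ((u64)p[2]) << 16;
                    /* fall through */
            case 2:
                    b |= ((u64)p[1]) << 8;
                    /* fall through */
            case 1:
                    b |= p[0];
                    break;
            default:
                    break;
            }
            return b;
    }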
2019-04-25 | crypto: shash - remove shash_desc::flags | Eric Biggers | 3 | -3/+0
The flags field in 'struct shash_desc' never actually does anything. The only ostensibly supported flag is CRYPTO_TFM_REQ_MAY_SLEEP. However, no shash algorithm ever sleeps, making this flag a no-op. With this being the case, inevitably some users who can't sleep wrongly pass MAY_SLEEP. These would all need to be fixed if any shash algorithm actually started sleeping. For example, the shash_ahash_*() functions, which wrap a shash algorithm with the ahash API, pass through MAY_SLEEP from the ahash API to the shash API. However, the shash functions are called under kmap_atomic(), so actually they're assumed to never sleep. Even if it turns out that some users do need preemption points while hashing large buffers, we could easily provide a helper function crypto_shash_update_large() which divides the data into smaller chunks and calls crypto_shash_update() and cond_resched() for each chunk. It's not necessary to have a flag in 'struct shash_desc', nor is it necessary to make individual shash algorithms aware of this at all. Therefore, remove shash_desc::flags, and document that the crypto_shash_*() functions can be called from any context. Signed-off-by: Eric Biggers <[email protected]> Signed-off-by: Herbert Xu <[email protected]>
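A minimal sketch of a shash user after this change: there is nothing to put into desc->flags, and per the new documentation the call is fine from any context. The helper name is invented; the crypto API calls are real:

    #include <crypto/hash.h>
    #include <linux/err.h>

    static int sha256_digest_buf(const u8 *data, unsigned int len, u8 *out)
    {
            struct crypto_shash *tfm;
            int err;

            tfm = crypto_alloc_shash("sha256", 0, 0);
            if (IS_ERR(tfm))
                    return PTR_ERR(tfm);

            {
                    SHASH_DESC_ON_STACK(desc, tfm);

                    desc->tfm = tfm;        /* no desc->flags assignment anymore */
                    err = crypto_shash_digest(desc, data, len, out);
                    shash_desc_zero(desc);
            }

            crypto_free_shash(tfm);
            return err;
    }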
2019-04-24 | mm/uaccess: Use 'unsigned long' to placate UBSAN warnings on older GCC versions | Peter Zijlstra | 2 | -4/+5
Randy reported objtool triggered on his (GCC-7.4) build: lib/strncpy_from_user.o: warning: objtool: strncpy_from_user()+0x315: call to __ubsan_handle_add_overflow() with UACCESS enabled lib/strnlen_user.o: warning: objtool: strnlen_user()+0x337: call to __ubsan_handle_sub_overflow() with UACCESS enabled This is due to UBSAN generating signed-overflow-UB warnings where it should not. Prior to GCC-8 UBSAN ignored -fwrapv (which the kernel uses through -fno-strict-overflow). Make the functions use 'unsigned long' throughout. Reported-by: Randy Dunlap <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Acked-by: Randy Dunlap <[email protected]> # build-tested Acked-by: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2019-04-23 | asm-generic: provide entirely generic nommu uaccess | Christoph Hellwig | 1 | -0/+4
Move the code that implements uaccess using memcpy or direct loads and stores to asm-generic/uaccess.h and make it a selectable kconfig option.
Signed-off-by: Christoph Hellwig <[email protected]>
Signed-off-by: Arnd Bergmann <[email protected]>
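Conceptually the generic nommu implementation reduces to the following; treat this as a sketch of the idea (user and kernel share one address space, so uaccess degenerates to memcpy) rather than the literal asm-generic/uaccess.h code:

    #include <linux/string.h>
    #include <linux/compiler.h>

    static inline unsigned long
    raw_copy_from_user(void *to, const void __user *from, unsigned long n)
    {
            memcpy(to, (const void __force *)from, n);
            return 0;       /* no bytes left uncopied */
    }

    static inline unsigned long
    raw_copy_to_user(void __user *to, const void *from, unsigned long n)
    {
            memcpy((void __force *)to, from, n);
            return 0;
    }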
2019-04-22 | Merge tag 'v5.1-rc6' into for-5.2/block | Jens Axboe | 6 | -41/+59
Pull in v5.1-rc6 to resolve two conflicts. One is in BFQ, in just a comment, and is trivial. The other one is a conflict due to a later fix in the bio multi-page work, and needs a bit more care. * tag 'v5.1-rc6': (770 commits) Linux 5.1-rc6 block: make sure that bvec length can't be overflow block: kill all_q_node in request_queue x86/cpu/intel: Lower the "ENERGY_PERF_BIAS: Set to normal" message's log priority coredump: fix race condition between mmget_not_zero()/get_task_mm() and core dumping mm/kmemleak.c: fix unused-function warning init: initialize jump labels before command line option parsing kernel/watchdog_hld.c: hard lockup message should end with a newline kcov: improve CONFIG_ARCH_HAS_KCOV help text mm: fix inactive list balancing between NUMA nodes and cgroups mm/hotplug: treat CMA pages as unmovable proc: fixup proc-pid-vm test proc: fix map_files test on F29 mm/vmstat.c: fix /proc/vmstat format for CONFIG_DEBUG_TLBFLUSH=y CONFIG_SMP=n mm/memory_hotplug: do not unlock after failing to take the device_hotplug_lock mm: swapoff: shmem_unuse() stop eviction without igrab() mm: swapoff: take notice of completion sooner mm: swapoff: remove too limiting SWAP_UNUSE_MAX_TRIES mm: swapoff: shmem_find_swap_entries() filter out other types slab: store tagged freelist for off-slab slabmgmt ... Signed-off-by: Jens Axboe <[email protected]>
2019-04-19 | kcov: improve CONFIG_ARCH_HAS_KCOV help text | Mark Rutland | 1 | -3/+3
The help text for CONFIG_ARCH_HAS_KCOV is stale, and describes the feature as being enabled only for x86_64, when it is now enabled for several architectures, including arm, arm64, powerpc, and s390. Let's remove that stale help text, and update it along the lines of that for ARCH_HAS_FORTIFY_SOURCE, better describing when an architecture should select CONFIG_ARCH_HAS_KCOV.
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Mark Rutland <[email protected]>
Acked-by: Dmitry Vyukov <[email protected]>
Cc: Kees Cook <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
2019-04-17 | Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net | David S. Miller | 1 | -0/+4
Conflict resolution of af_smc.c from Stephen Rothwell. Signed-off-by: David S. Miller <[email protected]>
2019-04-12 | rhashtable: use BIT(0) for locking. | NeilBrown | 1 | -1/+1
As reported by Guenter Roeck, the new bit-locking using BIT(1) doesn't work on the m68k architecture. m68k only requires 2-byte alignment for words and longwords, so there is only one unused bit in pointers to structs. We currently use two: one for the NULLS marker at the end of the linked list, and one for the bit-lock in the head of the list.
The two uses don't need to conflict as we never need the head of the list to be a NULLS marker - the marker is only needed to check if an object has moved to a different table, and the bucket head cannot move. The NULLS marker is only needed in a ->next pointer. As we already have different types for the bucket head pointer (struct rhash_lock_head) and the ->next pointers (struct rhash_head), it is fairly easy to treat the lsb differently in each.
So: Initialize bucket heads to NULL, and use the lsb for locking. When loading the pointer from the bucket head, if it is NULL (ignoring the lock bit), report it as the expected NULLS marker. When storing a value into a bucket head, if it is a NULLS marker, store NULL instead. And convert all places that used bit 1 for locking to use bit 0.
Fixes: 8f0db018006a ("rhashtable: use bit_spin_locks to protect hash bucket.")
Reported-by: Guenter Roeck <[email protected]>
Tested-by: Guenter Roeck <[email protected]>
Signed-off-by: NeilBrown <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
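For illustration, the lsb-locking idea expressed with bit_spin_lock(). These wrappers are invented for the sketch; the real helpers in include/linux/rhashtable.h additionally deal with NULLS markers and RCU:

    #include <linux/bit_spinlock.h>

    /* Take/release a lock stored in bit 0 of a pointer-sized bucket head,
     * which is the only bit guaranteed to be free on m68k's 2-byte-aligned
     * struct pointers.
     */
    static inline void bucket_lock(unsigned long *bkt)
    {
            bit_spin_lock(0, bkt);
    }

    static inline void bucket_unlock(unsigned long *bkt)
    {
            bit_spin_unlock(0, bkt);
    }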
2019-04-12 | rhashtable: replace rht_ptr_locked() with rht_assign_locked() | NeilBrown | 1 | -3/+3
The only times rht_ptr_locked() is used, it is to store a new value in a bucket-head. This is the only time it makes sense to use it too. So replace it by a function which does the whole task: Sets the lock bit and assigns to a bucket head. Signed-off-by: NeilBrown <[email protected]> Signed-off-by: David S. Miller <[email protected]>