|
The current implementation of append may cause duplicate data and/or
incorrect ranges to be returned to a reader during an update. Although
this has not been reported or seen, disable the append write operation
while the tree is in rcu mode out of an abundance of caution.
During the analysis of mas_next_slot(), the following interleavings were
artificially created by separating the writer and reader code:
Writer:                                 reader:
mas_wr_append
    set end pivot
    updates end metadata
    Detects write to last slot
    last slot write is to start of slot
    store current contents in slot
    overwrite old end pivot
                                        mas_next_slot():
                                                read end metadata
                                                read old end pivot
                                                return with incorrect range
    store new value
Alternatively:
Writer:                                 reader:
mas_wr_append
    set end pivot
    updates end metadata
    Detects write to last slot
    last slot write is to end of slot
    store value
                                        mas_next_slot():
                                                read end metadata
                                                read old end pivot
                                                read new end pivot
                                                return with incorrect range
    set old end pivot
There may be other accesses that are not safe since we are now updating
both metadata and pointers, so disabling append if there could be rcu
readers is the safest action.
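Both interleavings boil down to a reader pairing freshly written metadata
with a stale pivot (or vice versa). The standalone userspace sketch below
is purely illustrative, not the maple tree code: a writer publishes an
entry count and an end value as two separate stores, and a reader that
runs between them computes a bogus range.
  /* Hypothetical illustration of the race class; not kernel code. */
  #include <pthread.h>
  #include <stdatomic.h>
  #include <stdio.h>

  static _Atomic int end_meta = 1;                /* last used slot */
  static _Atomic unsigned long pivots[8] = { 10, 20 };

  static void *reader(void *arg)
  {
          /* Reader: read metadata first, then the pivot it names. */
          int end = atomic_load(&end_meta);
          unsigned long last = atomic_load(&pivots[end]);

          /* May print "end=2 last=0": new metadata, stale pivot. */
          printf("reader saw end=%d last=%lu\n", end, last);
          return NULL;
  }

  int main(void)
  {
          pthread_t t;

          pthread_create(&t, NULL, reader, NULL);
          /* Writer appends: metadata is visible before the new pivot. */
          atomic_store(&end_meta, 2);
          atomic_store(&pivots[2], 30);
          pthread_join(t, NULL);
          return 0;
  }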
Link: https://lkml.kernel.org/r/[email protected]
Fixes: 54a611b60590 ("Maple Tree: add new data structure")
Signed-off-by: Liam R. Howlett <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
Back-merge the 6.5-devel branch for the clean patch application for
6.6 and resolving merge conflicts.
Signed-off-by: Takashi Iwai <[email protected]>
|
|
Merge series from Charles Keepax <[email protected]>:
This patch chain adds support for the Cirrus Logic cs42l43 PC-focused
SoundWire CODEC. The chain is currently based off Lee's for-mfd-next
branch.
This series is mostly just a resend keeping pace with the kernel under
it, except for a minor fixup in the ASoC stuff.
Thanks,
Charles
Charles Keepax (4):
dt-bindings: mfd: cirrus,cs42l43: Add initial DT binding
mfd: cs42l43: Add support for cs42l43 core driver
pinctrl: cs42l43: Add support for the cs42l43
ASoC: cs42l43: Add support for the cs42l43
Lucas Tanure (2):
soundwire: bus: Allow SoundWire peripherals to register IRQ handlers
spi: cs42l43: Add SPI controller support
.../bindings/sound/cirrus,cs42l43.yaml | 313 +++
MAINTAINERS | 4 +
drivers/mfd/Kconfig | 23 +
drivers/mfd/Makefile | 3 +
drivers/mfd/cs42l43-i2c.c | 98 +
drivers/mfd/cs42l43-sdw.c | 239 ++
drivers/mfd/cs42l43.c | 1188 +++++++++
drivers/mfd/cs42l43.h | 28 +
drivers/pinctrl/cirrus/Kconfig | 11 +
drivers/pinctrl/cirrus/Makefile | 2 +
drivers/pinctrl/cirrus/pinctrl-cs42l43.c | 609 +++++
drivers/soundwire/bus.c | 32 +
drivers/soundwire/bus_type.c | 12 +
drivers/spi/Kconfig | 7 +
drivers/spi/Makefile | 1 +
drivers/spi/spi-cs42l43.c | 284 ++
include/linux/mfd/cs42l43-regs.h | 1184 +++++++++
include/linux/mfd/cs42l43.h | 102 +
include/linux/soundwire/sdw.h | 9 +
include/sound/cs42l43.h | 17 +
sound/soc/codecs/Kconfig | 16 +
sound/soc/codecs/Makefile | 4 +
sound/soc/codecs/cs42l43-jack.c | 946 +++++++
sound/soc/codecs/cs42l43-sdw.c | 74 +
sound/soc/codecs/cs42l43.c | 2278 +++++++++++++++++
sound/soc/codecs/cs42l43.h | 131 +
26 files changed, 7615 insertions(+)
create mode 100644 Documentation/devicetree/bindings/sound/cirrus,cs42l43.yaml
create mode 100644 drivers/mfd/cs42l43-i2c.c
create mode 100644 drivers/mfd/cs42l43-sdw.c
create mode 100644 drivers/mfd/cs42l43.c
create mode 100644 drivers/mfd/cs42l43.h
create mode 100644 drivers/pinctrl/cirrus/pinctrl-cs42l43.c
create mode 100644 drivers/spi/spi-cs42l43.c
create mode 100644 include/linux/mfd/cs42l43-regs.h
create mode 100644 include/linux/mfd/cs42l43.h
create mode 100644 include/sound/cs42l43.h
create mode 100644 sound/soc/codecs/cs42l43-jack.c
create mode 100644 sound/soc/codecs/cs42l43-sdw.c
create mode 100644 sound/soc/codecs/cs42l43.c
create mode 100644 sound/soc/codecs/cs42l43.h
--
2.30.2
|
|
|
|
We already use _tolower() in other places, so convert the one place that
open-codes it.
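For illustration only (the converted call site may look different), the
open-coded form and the <linux/ctype.h> helper are equivalent for ASCII
capital letters:
  #include <linux/ctype.h>

  /* Hypothetical before/after; _tolower() is implemented as (c | 0x20). */
  static char lower_open_coded(char c)
  {
          return c + ('a' - 'A');         /* open-coded conversion */
  }

  static char lower_helper(char c)
  {
          return _tolower(c);             /* the existing helper */
  }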
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Andy Shevchenko <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
Sparse is not happy to see a non-static variable without a declaration:
lib/vsprintf.c:61:6: warning: symbol 'no_hash_pointers' was not declared.
Should it be static?
Declare the respective variable in sprintf.h. While at it, add a comment
to discourage its use if there is no real need.
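A sketch of the resulting declaration (the exact comment wording in the
header may differ):
  /* include/linux/sprintf.h (sketch) */

  /*
   * When set at boot, disables %p hashing; do not reference this
   * directly unless there is a real need.
   */
  extern bool no_hash_pointers;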
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Andy Shevchenko <[email protected]>
Acked-by: Marco Elver <[email protected]>
Reviewed-by: Petr Mladek <[email protected]>
Cc: Alexander Potapenko <[email protected]>
Cc: Dmitry Vyukov <[email protected]>
Cc: Rasmus Villemoes <[email protected]>
Cc: Sergey Senozhatsky <[email protected]>
Cc: Steven Rostedt (Google) <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
Patch series "lib/vsprintf: Rework header inclusions", v3.
Some patches that reduce the mess of header inclusions related to the
vsprintf.c module. Each patch has its own description; they have no
dependencies on each other, except for collisions where they modify the
same places. Hence the series.
This patch (of 2):
kernel.h has been used as a dumping ground for all kinds of stuff for a
long time. sprintf() and friends are used in many drivers that have no
need for the full kernel.h dependency train that comes with it.
Here is an attempt at cleaning that up by splitting out sprintf() and
friends.
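As a sketch of the intended effect, a driver that only needs the string
formatting helpers can now include the dedicated header rather than
kernel.h (the function here is a hypothetical user):
  #include <linux/sprintf.h>      /* sprintf(), snprintf(), scnprintf(), ... */

  /* Format a label without pulling in all of kernel.h. */
  static void fill_label(char *buf, size_t len, int id)
  {
          snprintf(buf, len, "device%d", id);
  }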
Link: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Andy Shevchenko <[email protected]>
Reviewed-by: Petr Mladek <[email protected]>
Cc: Alexander Potapenko <[email protected]>
Cc: Dmitry Vyukov <[email protected]>
Cc: Marco Elver <[email protected]>
Cc: Rasmus Villemoes <[email protected]>
Cc: Sergey Senozhatsky <[email protected]>
Cc: Steven Rostedt (Google) <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
Reorder the operations for split and spanning stores so that new data is
placed in the tree prior to marking the old data as dead. This will limit
re-walks on dead data to just once instead of a retry loop.
The order of operations is as follows: Create the new data, put the new
data in place, mark the top node of the old data as dead.
Then repair parent links in the reused nodes through all levels of the
tree, following the new nodes downwards. Finally, walk the top dead node
looking for nodes that are no longer used, or subtrees that should be
destroyed (marked dead throughout, then freed), and follow the partially
used nodes downwards to discover other dead nodes and subtrees.
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Liam R. Howlett <[email protected]>
Cc: Matthew Wilcox (Oracle) <[email protected]>
Cc: Paul E. McKenney <[email protected]>
Cc: Suren Baghdasaryan <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
All calls to mas_adopt_children() currently pass the parent as the node in
the maple state. Allow for the parent pointer that is passed in to be
used instead.
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Liam R. Howlett <[email protected]>
Cc: Matthew Wilcox (Oracle) <[email protected]>
Cc: Paul E. McKenney <[email protected]>
Cc: Suren Baghdasaryan <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
Add a definition to shorten long code lines and clarify what the code is
doing. Use the new definition to get the maple tree parent pointer from
the maple state where possible.
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Liam R. Howlett <[email protected]>
Cc: Matthew Wilcox (Oracle) <[email protected]>
Cc: Paul E. McKenney <[email protected]>
Cc: Suren Baghdasaryan <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
mas_replace() has a single user that takes a flag which is now always
true. Replace this function with mas_put_in_tree() to better align with
mas_replace_node(). Inline the remaining logic into the only caller,
mas_wmb_replace().
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Liam R. Howlett <[email protected]>
Cc: Matthew Wilcox (Oracle) <[email protected]>
Cc: Paul E. McKenney <[email protected]>
Cc: Suren Baghdasaryan <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
Replacing nodes may cause a livelock: if CPU resources are saturated by
write operations on the tree, readers continuously retrying on dead nodes
may be starved. To avoid the continuous-retry scenario, ensure the new
node is inserted into the tree prior to marking the old data as dead.
This will define a window where old and new data are swapped.
When reusing lower level nodes, ensure the parent pointer is updated after
the parent is marked dead. This ensures that the child is still reachable
from the top of the tree, but walking up to a dead node will result in a
single retry that will start a fresh walk from the top down through the
new node.
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Liam R. Howlett <[email protected]>
Cc: Matthew Wilcox (Oracle) <[email protected]>
Cc: Paul E. McKenney <[email protected]>
Cc: Suren Baghdasaryan <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
Patch series "maple_tree: Change replacement strategy".
The maple tree marks nodes dead as soon as they are going to be replaced.
This could be problematic when used in the RCU context since the writer
may be starved of CPU time by the readers. This patch set addresses the
issue by switching the data replacement strategy to one that will only
mark data as dead once the new data is available.
This series changes the ordering of the node replacement so that the new
data is live before the old data is marked 'dead'. When readers hit
'dead' nodes, they will restart from the top of the tree and end up in the
new data.
In more complex scenarios, the replacement strategy means a subtree is
built and grafted into the tree, leaving some nodes to point to the old
parent. The view of tasks into the old data will either remain with the
old data, or see the new data once the old data is marked 'dead'.
Iterators will see the 'dead' node and restart on their own and switch to
the new data. There is no risk of the reader seeing old data in these
cases.
The 'dead' subtree of data is then fully marked dead, but reused nodes
will still point to the dead nodes until the parent pointer is updated.
Walking up to a 'dead' node will cause a re-walk from the top of the tree
and enter the new data area where old data is not reachable.
Once the parent pointers are fully up to date in the active data, the
'dead' subtree is iterated to collect entirely 'dead' subtrees, and dead
nodes (nodes that partially contained reused data).
This patch (of 6):
When dumping the tree, honour the formatting request to output hex for
the maple node type arange64.
Link: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Liam R. Howlett <[email protected]>
Cc: Matthew Wilcox (Oracle) <[email protected]>
Cc: Paul E. McKenney <[email protected]>
Cc: Suren Baghdasaryan <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
Recent versions of clang warn about an unused variable, though older
versions saw the 'slot++' as a use and did not warn:
radix-tree.c:1136:50: error: parameter 'slot' set but not used [-Werror,-Wunused-but-set-parameter]
It's clearly not needed any more, so just remove it.
Link: https://lkml.kernel.org/r/[email protected]
Fixes: 3a08cd52c37c7 ("radix tree: Remove multiorder support")
Signed-off-by: Arnd Bergmann <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Cc: Nathan Chancellor <[email protected]>
Cc: Nick Desaulniers <[email protected]>
Cc: Peng Zhang <[email protected]>
Cc: Rong Tao <[email protected]>
Cc: Tom Rix <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
Add parameter descriptions to the struct kunit_attr header for the
parameters attr_default and print.
Fixes: 39e92cb1e4a1 ("kunit: Add test attributes API structure")
Reported-by: kernel test robot <[email protected]>
Closes: https://lore.kernel.org/oe-kbuild-all/[email protected]/
Signed-off-by: Rae Moar <[email protected]>
Reviewed-by: David Gow <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
|
|
When adding a kobject or kset, we have already made sure that ktype
cannot be NULL. Therefore, after adding a kobject or kset, there is no
need to worry about ktype being NULL. Remove all the redundant
ktype-related checks.
Signed-off-by: Zhen Lei <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Greg Kroah-Hartman <[email protected]>
|
|
When I register a kset in the following way:
        static struct kset my_kset;

        kobject_set_name(&my_kset.kobj, "my_kset");
        ret = kset_register(&my_kset);
a NULL pointer dereference exception occurs:
[ 4453.568337] Unable to handle kernel NULL pointer dereference at \
virtual address 0000000000000028
... ...
[ 4453.810361] Call trace:
[ 4453.813062] kobject_get_ownership+0xc/0x34
[ 4453.817493] kobject_add_internal+0x98/0x274
[ 4453.822005] kset_register+0x5c/0xb4
[ 4453.825820] my_kobj_init+0x44/0x1000 [my_kset]
... ...
Because I didn't initialize my_kset.kobj.ktype.
According to the description in Documentation/core-api/kobject.rst:
- A ktype is the type of object that embeds a kobject. Every structure
that embeds a kobject needs a corresponding ktype.
So add a sanity check to make sure kset->kobj.ktype is not NULL.
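A simplified sketch of such a check in kset_register() (the in-tree
version differs in details such as error unwinding):
  int kset_register(struct kset *k)
  {
          int err;

          if (!k)
                  return -EINVAL;

          /* Reject a kset whose embedded kobject has no ktype before
           * kobject_add_internal() ends up dereferencing it. */
          if (!k->kobj.ktype) {
                  pr_err("must have a ktype to be initialized properly!\n");
                  return -EINVAL;
          }

          kset_init(k);
          err = kobject_add_internal(&k->kobj);
          if (err)
                  return err;
          kobject_uevent(&k->kobj, KOBJ_ADD);
          return 0;
  }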
Signed-off-by: Zhen Lei <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Greg Kroah-Hartman <[email protected]>
|
|
Cross-merge networking fixes after downstream PR.
Conflicts:
drivers/net/ethernet/sfc/tc.c
fa165e194997 ("sfc: don't unregister flow_indr if it was never registered")
3bf969e88ada ("sfc: add MAE table machinery for conntrack table")
https://lore.kernel.org/all/[email protected]/
No adjacent changes.
Signed-off-by: Jakub Kicinski <[email protected]>
|
|
The APIs that allow backtracing across CPUs have always had a way to
exclude the current CPU. This convenience means callers didn't need to
find a place to allocate a CPU mask just to handle the common case.
Let's extend the API to take a CPU ID to exclude instead of just a
boolean. This isn't any more complex for the API to handle and allows the
hardlockup detector to exclude a different CPU (the one it already did a
trace for) without needing to find space for a CPU mask.
Arguably, this new API also encourages safer behavior. Specifically if
the caller wants to avoid tracing the current CPU (maybe because they
already traced the current CPU) this makes it more obvious to the caller
that they need to make sure that the current CPU ID can't change.
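A sketch of the resulting call pattern in the hardlockup detector,
assuming the trigger_allbutcpu_cpu_backtrace() form referenced below:
  /* Dump all CPUs except the one whose trace was already printed.
   * The excluded CPU need not be the current one, so the caller does
   * not have to pin itself to keep the current CPU id stable. */
  static void dump_remaining_cpus(int already_traced_cpu)
  {
          trigger_allbutcpu_cpu_backtrace(already_traced_cpu);
  }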
[[email protected]: fix trigger_allbutcpu_cpu_backtrace() stub]
Link: https://lkml.kernel.org/r/20230804065935.v4.1.Ia35521b91fc781368945161d7b28538f9996c182@changeid
Signed-off-by: Douglas Anderson <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Cc: kernel test robot <[email protected]>
Cc: Lecopzer Chen <[email protected]>
Cc: Petr Mladek <[email protected]>
Cc: Pingfan Liu <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
Replace the internal logic with the separate bitrev library.
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: John Sanpe <[email protected]>
Cc: Bhaskar Chowdhury <[email protected]>
Cc: Randy Dunlap <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
It is expected that most callers should _ignore_ the errors returned by
debugfs_create_dir() in ei_debugfs_init().
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Wang Ming <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
It is expected that most callers should _ignore_ the errors returned by
debugfs_create_dir() in err_inject_init().
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Wang Ming <[email protected]>
Cc: Akinobu Mita <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
kmap() has been deprecated in favor of kmap_local_page() due to its high
cost, restricted mapping space, the overhead of a global lock for
synchronization, and because it can make the process sleep in the absence
of free slots.
kmap_local_page() is faster than kmap() and offers thread-local and
CPU-local mappings, takes pagefaults in a local kmap region, and preserves
preemption by saving the mappings of the outgoing task and restoring those
of the incoming one during a context switch.
The mappings are kept thread-local in the functions dmirror_do_read() and
dmirror_do_write() in test_hmm.c.
Therefore, replace kmap() with kmap_local_page() and use
memcpy_from/to_page() to avoid open-coding kmap_local_page() + memcpy() +
kunmap_local().
Remove the unused variable "tmp".
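A sketch of the conversion pattern (not the exact test_hmm.c hunks):
  #include <linux/highmem.h>

  static void copy_buf_to_page(struct page *page, const char *buf, size_t len)
  {
          /* old: ptr = kmap(page); memcpy(ptr, buf, len); kunmap(page); */
          memcpy_to_page(page, 0, buf, len);      /* kmap_local_page() inside */
  }

  static void copy_page_to_buf(char *buf, struct page *page, size_t len)
  {
          memcpy_from_page(buf, page, 0, len);    /* likewise */
  }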
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Sumitra Sharma <[email protected]>
Suggested-by: Fabio M. De Francesco <[email protected]>
Reviewed-by: Fabio M. De Francesco <[email protected]>
Reviewed-by: Ira Weiny <[email protected]>
Cc: Deepak R Varma <[email protected]>
Cc: Jérôme Glisse <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
mas_prealloc() may walk partially down the tree before finding that a
split or spanning store is needed. When the write occurs, relax the
logic on resetting the walk so that partial walks will not restart, but
walks that have gone too far (a store that affects beyond the current
node) should be restarted.
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Liam R. Howlett <[email protected]>
Cc: Peng Zhang <[email protected]>
Cc: Suren Baghdasaryan <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
Calculate the number of nodes based on the pending write action instead
of assuming the worst case.
This addresses a performance regression introduced on platforms that
have longer allocation timing.
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Liam R. Howlett <[email protected]>
Cc: Peng Zhang <[email protected]>
Cc: Suren Baghdasaryan <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
Relocate mas_wr_extend_null() and call it from within mas_wr_end_piv().
Extending the NULL may affect the end pivot value, so calling
mas_wr_extend_null() from within mas_wr_end_piv() keeps it all
together.
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Liam R. Howlett <[email protected]>
Cc: Peng Zhang <[email protected]>
Cc: Suren Baghdasaryan <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
mas_rebalance() is called to rebalance an insufficient node into a
single node or two sufficient nodes. The preallocation estimate is
always too high in this case, as the height of the tree will never grow
and there is no possibility of a three-way split, so revise the node
allocation count.
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Liam R. Howlett <[email protected]>
Cc: Peng Zhang <[email protected]>
Cc: Suren Baghdasaryan <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
The current preallocation strategy is to preallocate the absolute
worst-case allocation for a tree modification. The entry (or NULL) is
needed to know how many nodes are needed to write to the tree. Start by
adding the argument to the mas_preallocate() definition.
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Liam R. Howlett <[email protected]>
Cc: Peng Zhang <[email protected]>
Cc: Suren Baghdasaryan <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
Add some benchmarking functions in testing for mas_prev(). This is
useful to ensure there are no regressions added during modifications.
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Liam R. Howlett <[email protected]>
Cc: Peng Zhang <[email protected]>
Cc: Suren Baghdasaryan <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
Patch series "Reduce preallocations for maple tree", v3.
Initial work on preallocations showed no regression in performance during
testing, but recently some users (both on [1] and off [android] list) have
reported that preallocating the worst-case number of nodes has caused some
slow down. This patch set addresses the number of allocations in a few
ways.
Most munmap() operations will remove a single VMA, so leverage the fact
that the maple tree can place a single pointer at range 0 - 0 without
allocating. This is done by changing the index of the VMAs to be indexed
by the count, starting at 0.
Re-introduce the entry argument to mas_preallocate() so that a more
intelligent guess of the node count can be made.
Implement the more intelligent guess of the node count, although there is
more work to be done.
During development of v2 of this patch set, I also noticed that the number
of nodes being allocated for a rebalance was beyond what could possibly be
needed. This is addressed in patch 0008.
This patch (of 15):
Add a way to test the speed of mas_for_each() to the testing code.
Link: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Liam R. Howlett <[email protected]>
Cc: Peng Zhang <[email protected]>
Cc: Suren Baghdasaryan <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
Use lockdep to check that the write path in the maple tree holds the
lock in write mode.
Introduce mt_write_lock_is_held() to check if the lock is held for
writing. Update the necessary checks for rcu_dereference_protected() to
use the new write lock check.
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Liam R. Howlett <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Oliver Sang <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
Replace FGP_FLAGS with GFP_FLAGS
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Mike Rapoport (IBM) <[email protected]>
Reviewed-by: Liam R. Howlett <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
Replace "Insert and entry at a give index" with "Insert an entry at a
given index"
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Mike Rapoport (IBM) <[email protected]>
Reviewed-by: Liam R. Howlett <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
test_pages() tests the page allocator by calling alloc_pages() with
different orders up to order 10.
However, different architectures and platforms support different maximum
contiguous allocation sizes. The default maximum allocation order
(MAX_ORDER) is 10, but architectures can use CONFIG_ARCH_FORCE_MAX_ORDER
to override this. On platforms where this is less than 10, test_meminit()
will blow up with a WARN(). This is expected, so let's not do that.
Replace the hardcoded "10" with the MAX_ORDER macro so that we test
allocations up to the expected platform limit.
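A sketch of the resulting loop in test_pages(), assuming the existing
do_alloc_pages_order() helper (whether the bound is '<' or '<=' depends
on the kernel's MAX_ORDER convention at the time):
  /* Bound the test by the platform limit instead of a hardcoded 10. */
  for (i = 0; i <= MAX_ORDER; i++)
          num_tests += do_alloc_pages_order(i, &failures);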
Link: https://lkml.kernel.org/r/[email protected]
Fixes: 5015a300a522 ("lib: introduce test_meminit module")
Signed-off-by: Andrew Donnellan <[email protected]>
Reviewed-by: Alexander Potapenko <[email protected]>
Cc: Xiaoke Wang <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
The internal function mas_first_entry() is no longer used, so drop it.
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Peng Zhang <[email protected]>
Reviewed-by: Liam R. Howlett <[email protected]>
Tested-by: Geert Uytterhoeven <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
Replace mas_logical_pivot() with mas_safe_pivot() and drop
mas_logical_pivot() since it won't be used anymore. We can do this
because all nodes now have the node limit pivot (unless the node is
full).
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Peng Zhang <[email protected]>
Reviewed-by: Liam R. Howlett <[email protected]>
Tested-by: Geert Uytterhoeven <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
Instead of using mas_first_entry() to find the leftmost leaf, use a
simple loop. Remove an unneeded check for the root node. To make the
error message more accurate, check pivots first and then slots, because
checking slots depends on the node limit pivot to break the loop.
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Peng Zhang <[email protected]>
Tested-by: Geert Uytterhoeven <[email protected]>
Reviewed-by: Liam R. Howlett <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
Update mas_validate_limits() to check the root node, to check the node
limit pivot if there is enough room for it to exist, and to check
data_end. Remove the check for child existence as it is already done in
mas_validate_child_slot().
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Peng Zhang <[email protected]>
Tested-by: Geert Uytterhoeven <[email protected]>
Reviewed-by: Liam R. Howlett <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
Don't break the loop before checking the last slot. Also check here
whether non-leaf nodes are missing children.
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Peng Zhang <[email protected]>
Reviewed-by: Liam R. Howlett <[email protected]>
Tested-by: Geert Uytterhoeven <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
Make mas_validate_gaps() check whether the offset in the metadata points
to the largest gap and, while at it, simplify this function. Also add
verification that gaps beyond the node limit are zero.
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Peng Zhang <[email protected]>
Tested-by: Geert Uytterhoeven <[email protected]>
Reviewed-by: Liam R. Howlett <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
Patch series "Improve the validation for maple tree and some cleanup", v2.
This patch (of 7):
Do not use a special offset to indicate that there is no gap. When there
is no gap, the offset can point to any valid slot because its gap is 0.
Link: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Peng Zhang <[email protected]>
Reviewed-by: Liam R. Howlett <[email protected]>
Tested-by: Geert Uytterhoeven <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
When expanding a range in two directions, only partially overwriting the
previous and next ranges, the number of entries will not be increased, so
we can just update the pivots as a fast path. However, it may introduce
potential risks in RCU mode, because it updates two pivots. We only
enable it in non-RCU mode.
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Peng Zhang <[email protected]>
Reviewed-by: Liam R. Howlett <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
When the new range can be completely covered by the original last range
without touching the boundaries on both sides, two new entries can be
appended to the end as a fast path. We update the original last pivot at
the end, and the newly appended two entries will not be accessed before
this, so it is also safe in RCU mode.
This is useful for sequential insertion, which is what we do in
dup_mmap(). Enabling BENCH_FORK in test_maple_tree and just running
bench_forking() gives the following timings:
        before:            after:
        17,874.83 msec     15,738.38 msec
It shows about a 12% performance improvement for duplicating VMAs.
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Peng Zhang <[email protected]>
Reviewed-by: Liam R. Howlett <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
Patch series "Optimize the fast path of mas_store()", v4.
Add fast paths for mas_wr_append() and mas_wr_slot_store() respectively.
The newly added fast path of mas_wr_append() is used in fork() and how
much it benefits fork() depends on how many VMAs are duplicated.
Thanks Liam for the review.
This patch (of 4):
Add tests for all cases of mas_wr_append() and mas_wr_slot_store().
Link: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Peng Zhang <[email protected]>
Reviewed-by: Liam R. Howlett <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
The documentation of mt_next() claims that it starts the search at the
provided index. That's incorrect as it starts the search after the
provided index.
The documentation of mt_find() is slightly confusing. "Handles locking"
is not really helpful as it does not explain how the "locking" works.
Also the documentation of index talks about a range, while in reality the
index is updated on a successful search to the index of the found entry
plus one.
Fix similar issues for mt_find_after() and mt_prev().
Reword the confusing "Note: Will not return the zero entry." comment on
mt_for_each() and document @__index correctly.
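A sketch of the documented semantics, assuming a populated struct
maple_tree named tree:
  unsigned long index = 0;
  void *entry;

  /* mt_find() starts at 'index' and, on success, advances it past the
   * found entry, so this loop visits each entry exactly once. */
  while ((entry = mt_find(&tree, &index, ULONG_MAX)) != NULL)
          pr_info("entry %p found, next search starts at %lu\n",
                  entry, index);

  /* mt_next() searches strictly *after* the given index. */
  entry = mt_next(&tree, 5, ULONG_MAX);   /* first entry above index 5 */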
Link: https://lkml.kernel.org/r/87ttw2n556.ffs@tglx
Signed-off-by: Thomas Gleixner <[email protected]>
Reviewed-by: Liam R. Howlett <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Cc: Shanker Donthineni <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
If a testcase returns a wrong (unexpected) value, print the expected and
returned value in hex notation in addition to the decimal notation.
This is very useful in tests which bit-shift hex values left or right and
helped me a lot while developing the JIT compiler for the hppa architecture.
Additionally fix two typos: dowrd -> dword, tall calls -> tail calls.
Signed-off-by: Helge Deller <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
Link: https://lore.kernel.org/bpf/ZN6ZAAVoWZpsD1Jf@p100
|
|
Export import_ubuf() to be used in the sound subsystem for generic memory
handling, as Linus suggested. It's used for constructing a single-segment
iov_iter over a user-space buffer for copying PCM data.
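A sketch of the intended use (names here are illustrative, not taken
from the sound patches):
  #include <linux/uio.h>

  /* Build a single-segment iov_iter over a user PCM buffer to copy from. */
  static int pcm_import_user_buf(void __user *ubuf, size_t bytes,
                                 struct iov_iter *iter)
  {
          return import_ubuf(ITER_SOURCE, ubuf, bytes, iter);
  }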
Cc: Alexander Viro <[email protected]>
Link: https://lore.kernel.org/r/CAHk-=wh-mUL6mp4chAc6E_UjwpPLyCPRCJK+iB4ZMD2BqjwGHA@mail.gmail.com
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Takashi Iwai <[email protected]>
|
|
A recent change in clang allows it to consider more expressions as
compile time constants, which causes it to point out an implicit
conversion in the scanf tests:
lib/test_scanf.c:661:2: warning: implicit conversion from 'int' to 'unsigned char' changes value from -168 to 88 [-Wconstant-conversion]
661 | test_number_prefix(unsigned char, "0xA7", "%2hhx%hhx", 0, 0xa7, 2, check_uchar);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
lib/test_scanf.c:609:29: note: expanded from macro 'test_number_prefix'
609 | T result[2] = {~expect[0], ~expect[1]}; \
| ~ ^~~~~~~~~~
1 warning generated.
The result of the bitwise negation has the type of the operand after
going through the integer promotion rules, so this truncation is
expected but harmless, as the initial values in the result array get
overwritten by _test() anyway. Add an explicit cast to the expected
type in test_number_prefix() to silence the warning. There is no
functional change, as all the tests still pass with GCC 13.1.0 and clang
18.0.0.
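The promotion and truncation at issue can be reproduced in isolation;
this standalone example mirrors the values from the warning:
  #include <stdio.h>

  int main(void)
  {
          unsigned char expect = 0xa7;            /* 167 */

          int promoted = ~expect;                 /* ~ promotes to int: -168 */
          unsigned char truncated = ~expect;      /* low byte only: 0x58 == 88 */

          printf("%d %u\n", promoted, truncated); /* prints: -168 88 */
          return 0;
  }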
Cc: [email protected]
Link: https://github.com/ClangBuiltLinux/linux/issues/1899
Link: https://github.com/llvm/llvm-project/commit/610ec954e1f81c0e8fcadedcd25afe643f5a094e
Suggested-by: Nick Desaulniers <[email protected]>
Signed-off-by: Nathan Chancellor <[email protected]>
Reviewed-by: Petr Mladek <[email protected]>
Signed-off-by: Petr Mladek <[email protected]>
Link: https://lore.kernel.org/r/20230807-test_scanf-wconstant-conversion-v2-1-839ca39083e1@kernel.org
|
|
BUG_ON_DATA_CORRUPTION turns detected corruptions of list data
structures from WARNings into BUGs. This can be useful to stop further
corruptions or even exploitation attempts.
However, the option has less to do with debugging than with hardening.
With the introduction of LIST_HARDENED, it makes more sense to move it
to the hardening options, where it selects LIST_HARDENED instead.
Without this change, combining BUG_ON_DATA_CORRUPTION with LIST_HARDENED
alone wouldn't be possible, because DEBUG_LIST would always be selected
by BUG_ON_DATA_CORRUPTION.
Signed-off-by: Marco Elver <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Kees Cook <[email protected]>
|
|
Numerous production kernel configs (see [1, 2]) are choosing to enable
CONFIG_DEBUG_LIST, which is also being recommended by KSPP for hardened
configs [3]. The motivation behind this is that the option can be used
as a security hardening feature (e.g. CVE-2019-2215 and CVE-2019-2025
are mitigated by the option [4]).
The feature was never designed with performance in mind, yet common
list manipulation happens in hot paths all over the kernel.
Introduce CONFIG_LIST_HARDENED, which performs list pointer checking
inline, and only upon list corruption calls the reporting slow path.
To generate optimal machine code with CONFIG_LIST_HARDENED:
1. Elide checking for pointer values which upon dereference would
result in an immediate access fault (i.e. minimal hardening
checks). The trade-off is lower-quality error reports.
2. Use the __preserve_most function attribute (available with Clang,
but not yet with GCC) to minimize the code footprint for calling
the reporting slow path. As a result, function size of callers is
reduced by avoiding saving registers before calling the rarely
called reporting slow path.
Note that all TUs in lib/Makefile already disable function tracing,
including list_debug.c, and __preserve_most's implied notrace has
no effect in this case.
3. Because the inline checks are a subset of the full set of checks in
__list_*_valid_or_report(), always return false if the inline
checks failed. This avoids redundant compare and conditional
branch right after return from the slow path.
As a side-effect of the checks being inline, if the compiler can prove
some condition to always be true, it can completely elide some checks.
Since DEBUG_LIST is functionally a superset of LIST_HARDENED, the
Kconfig variables are changed to reflect that: DEBUG_LIST selects
LIST_HARDENED, whereas LIST_HARDENED itself has no dependency on
DEBUG_LIST.
Running netperf with CONFIG_LIST_HARDENED (using a Clang compiler with
"preserve_most") shows throughput improvements, in my case of ~7% on
average (up to 20-30% on some test cases).
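A simplified sketch of the inline-check pattern described above (the
in-tree version differs in detail, e.g. in its interaction with
DEBUG_LIST):
  static __always_inline bool __list_add_valid(struct list_head *new,
                                               struct list_head *prev,
                                               struct list_head *next)
  {
          /* Cheap subset of checks, done inline at every call site. */
          if (likely(next->prev == prev && prev->next == next &&
                     new != prev && new != next))
                  return true;

          /* Rarely taken; __preserve_most keeps this call cheap. */
          __list_add_valid_or_report(new, prev, next);

          /* Inline checks failed, so report failure unconditionally,
           * avoiding a redundant re-check after the slow path (3.). */
          return false;
  }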
Link: https://r.android.com/1266735 [1]
Link: https://gitlab.archlinux.org/archlinux/packaging/packages/linux/-/blob/main/config [2]
Link: https://kernsec.org/wiki/index.php/Kernel_Self_Protection_Project/Recommended_Settings [3]
Link: https://googleprojectzero.blogspot.com/2019/11/bad-binder-android-in-wild-exploit.html [4]
Signed-off-by: Marco Elver <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Kees Cook <[email protected]>
|