path: root/include
Age | Commit message | Author | Files | Lines
2024-06-05 | net: remove NULL-pointer net parameter in ip_metrics_convert | Jason Xing | 2 | -3/+2
While doing some experiments, I found that using the first parameter, struct net, in ip_metrics_convert() always triggers a NULL pointer crash. I then dug into this part and realized that we can remove this parameter because it is never used. Signed-off-by: Jason Xing <[email protected]> Reviewed-by: Simon Horman <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2024-06-05 | fsnotify: clear PARENT_WATCHED flags lazily | Amir Goldstein | 1 | -3/+5
In some setups directories can have many (usually negative) dentries, so the __fsnotify_update_child_dentry_flags() function can take a significant amount of time. Since the bulk of this function runs under inode->i_lock, this causes significant contention on the lock when we remove the watch from the directory, as the __fsnotify_update_child_dentry_flags() call from fsnotify_recalc_mask() races with __fsnotify_update_child_dentry_flags() calls from __fsnotify_parent() happening on children. This can lead to softlockup reports from users. Fix the problem by calling fsnotify_update_children_dentry_flags() to set PARENT_WATCHED flags only when the parent starts watching children. When the parent stops watching children, clear false-positive PARENT_WATCHED flags lazily in __fsnotify_parent() for each accessed child. Suggested-by: Jan Kara <[email protected]> Signed-off-by: Amir Goldstein <[email protected]> Signed-off-by: Stephen Brennan <[email protected]> Signed-off-by: Jan Kara <[email protected]>
2024-06-05 | mm/memblock: fix a typo in description of for_each_mem_region() | Wei Yang | 1 | -1/+1
No functional change. Signed-off-by: Wei Yang <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Mike Rapoport (IBM) <[email protected]>
2024-06-05 | platform/chrome: cros_ec_proto: Upgrade get_next_event to v3 | Daisuke Nojiri | 1 | -1/+1
Upgrade EC_CMD_GET_NEXT_EVENT to version 3. The max supported version will be v3. So, we speak v3 even if the EC says it supports v4+. Signed-off-by: Daisuke Nojiri <[email protected]> Link: https://lore.kernel.org/r/[email protected] [tzungbi: uint32_t -> u32 per suggested by checkpatch.pl] Signed-off-by: Tzung-Bi Shih <[email protected]>
2024-06-05 | platform/chrome: Add struct ec_response_get_next_event_v3 | Daisuke Nojiri | 1 | -0/+34
Add struct ec_response_get_next_event_v3 to upgrade EC_CMD_GET_NEXT_EVENT to version 3. Signed-off-by: Daisuke Nojiri <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Tzung-Bi Shih <[email protected]>
2024-06-04 | scsi: ufs: pci: Add support MCQ for QEMU-based UFS | Minwoo Im | 1 | -0/+1
Recently, the ufs-mcq feature was introduced to the QEMU hw/ufs device [1]. This patch adds MCQ support for the upstream QEMU UFS PCI controller, providing the mandatory vops callbacks to make the UFS controller work properly in MCQ mode. The Operation and Runtime Config register stride is fixed at 48 bytes, as implemented by QEMU. [1] https://lore.kernel.org/qemu-devel/[email protected]/ Signed-off-by: Minwoo Im <[email protected]> Link: https://lore.kernel.org/r/[email protected] Reviewed-by: Bart Van Assche <[email protected]> Signed-off-by: Martin K. Petersen <[email protected]>
2024-06-04 | iio: imu: adis_trigger: Allow level interrupts for FIFO readings | Ramona Gradinariu | 1 | -0/+1
Currently, the adis library allows configuring only edge interrupts, as needed for data-ready sampling. This patch removes that restriction and allows level interrupts for devices which have FIFO support. Furthermore, for devices with FIFO support, devm_request_threaded_irq() is used for interrupt allocation, to avoid flooding the processor with the FIFO watermark level interrupt, which stays active until enough data has been read from the FIFO. Reviewed-by: Nuno Sa <[email protected]> Signed-off-by: Ramona Gradinariu <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Jonathan Cameron <[email protected]>
2024-06-04 | iio: imu: adis_buffer: Add buffer setup API with buffer attributes | Ramona Gradinariu | 1 | -4/+16
Add a new API, devm_adis_setup_buffer_and_trigger_with_attrs(), which also takes buffer attributes as a parameter. Rewrite the devm_adis_setup_buffer_and_trigger() implementation so that it calls devm_adis_setup_buffer_and_trigger_with_attrs() with a NULL buffer attributes parameter. Reviewed-by: Nuno Sa <[email protected]> Signed-off-by: Ramona Gradinariu <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Jonathan Cameron <[email protected]>
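For illustration, a minimal sketch of the wrapper relationship described above; the exact parameter list of the _with_attrs() variant is an assumption, not the verbatim adis API:

    /* Sketch: the old helper becomes a thin wrapper that passes no buffer attributes. */
    static inline int
    devm_adis_setup_buffer_and_trigger(struct adis *adis, struct iio_dev *indio_dev,
                                       irq_handler_t trigger_handler)
    {
            return devm_adis_setup_buffer_and_trigger_with_attrs(adis, indio_dev,
                                                                 trigger_handler,
                                                                 NULL /* buffer_attrs */);
    }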
2024-06-04 | iio: add support for multiple scan types per channel | David Lechner | 1 | -2/+53
This adds new fields to the iio_channel structure to support multiple scan types per channel. This is useful for devices that support multiple resolution modes or other modes that require different formats of the raw data. To make use of this, drivers need to implement the new callback get_current_scan_type() to resolve the scan type for a given channel based on the current state of the driver. There is a new scan_type_ext field in the iio_channel structure that should be used to store the scan types for any channel that has more than one. There is also a new flag has_ext_scan_type that acts as a type discriminator for the scan_type/ext_scan_type union. A union is used so that we don't grow the size of the iio_channel structure, and it also makes it clear that scan_type and ext_scan_type are mutually exclusive. The buffer code is the only code in the IIO core that uses the scan_type field. This patch updates the buffer code to use the new iio_channel_validate_scan_type() function to ensure it returns the correct scan type for the current state of the device when reading the sysfs attributes. The buffer validation code is also updated to validate any additional scan types that are set in the scan_type_ext field. Part of that code is refactored into a new function to avoid duplication. Some userspace tools may need to be updated to re-read the scan type after writing any other attribute. During testing, we noticed that we had to restart iiod to get it to re-read the scan type after enabling oversampling on the ad7380 driver. Signed-off-by: David Lechner <[email protected]> Reviewed-by: Nuno Sa <[email protected]> Link: https://lore.kernel.org/r/20240530-iio-add-support-for-multiple-scan-types-v3-3-cbc4acea2cfa@baylibre.com Signed-off-by: Jonathan Cameron <[email protected]>
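For illustration, a hedged driver-side sketch of the new callback; the callback signature, its placement in iio_info, and all my_adc_* names are assumptions based on the description above:

    /* Pick the scan type matching the driver's current resolution mode. */
    enum my_adc_mode { MY_ADC_NORMAL, MY_ADC_OVERSAMPLED };

    struct my_adc_state { enum my_adc_mode mode; };

    static const struct iio_scan_type my_adc_scan_types[] = {
            [MY_ADC_NORMAL]      = { .sign = 's', .realbits = 16, .storagebits = 16 },
            [MY_ADC_OVERSAMPLED] = { .sign = 's', .realbits = 32, .storagebits = 32 },
    };

    static const struct iio_scan_type *
    my_adc_get_current_scan_type(const struct iio_dev *indio_dev,
                                 const struct iio_chan_spec *chan)
    {
            const struct my_adc_state *st = iio_priv(indio_dev);

            return &my_adc_scan_types[st->mode];
    }

    static const struct iio_info my_adc_info = {
            .get_current_scan_type = my_adc_get_current_scan_type,
    };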
2024-06-04 | iio: introduce struct iio_scan_type | David Lechner | 1 | -19/+22
This gives the channel scan_type a named type so that it can be used to simplify code in later commits. Signed-off-by: David Lechner <[email protected]> Reviewed-by: Nuno Sa <[email protected]> Link: https://lore.kernel.org/r/20240530-iio-add-support-for-multiple-scan-types-v3-1-cbc4acea2cfa@baylibre.com Signed-off-by: Jonathan Cameron <[email protected]>
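A sketch of what the named struct presumably carries, mirroring the fields the per-channel scan_type has historically had (this is not the verbatim header):

    struct iio_scan_type {
            char    sign;           /* 's' for signed, 'u' for unsigned */
            u8      realbits;       /* valid data bits per sample */
            u8      storagebits;    /* bits used to store one sample */
            u8      shift;          /* right shift applied to raw values */
            u8      repeat;         /* number of repeated elements, if any */
            enum iio_endian endianness;
    };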
2024-06-04 | PCI: Revert the cfg_access_lock lockdep mechanism | Dan Williams | 2 | -7/+0
While the experiment did reveal that there are additional places that are missing the lock during secondary bus reset, one of the places that needs to take cfg_access_lock (pci_bus_lock()) is not prepared for lockdep annotation. Specifically, pci_bus_lock() takes pci_dev_lock() recursively and is currently dependent on the fact that the device_lock() is marked lockdep_set_novalidate_class(&dev->mutex). Otherwise, without that annotation, pci_bus_lock() would need to use something like a new pci_dev_lock_nested() helper, a scheme to track a PCI device's depth in the topology, and a hope that the depth of a PCI tree never exceeds the max value for a lockdep subclass. The alternative to ripping out the lockdep coverage would be to deploy a dynamic lock key for every PCI device. Unfortunately, there is evidence that increasing the number of keys that lockdep needs to track to be per-PCI-device is prohibitively expensive for something like the cfg_access_lock. The main motivation for adding the annotation in the first place was to catch unlocked secondary bus resets, not necessarily to catch lock ordering problems between cfg_access_lock and other locks. Solve that narrower problem with follow-on patches, and just do the targeted revert for now. Link: https://lore.kernel.org/r/171711746402.1628941.14575335981264103013.stgit@dwillia2-xfh.jf.intel.com Fixes: 7e89efc6e9e4 ("PCI: Lock upstream bridge for pci_reset_function()") Reported-by: Imre Deak <[email protected]> Closes: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_134186v1/shard-dg2-1/igt@[email protected] Signed-off-by: Dan Williams <[email protected]> Signed-off-by: Bjorn Helgaas <[email protected]> Tested-by: Hans de Goede <[email protected]> Tested-by: Kalle Valo <[email protected]> Reviewed-by: Dave Jiang <[email protected]> Cc: Jani Saarinen <[email protected]>
2024-06-04 | driver core: device.h: Group of_node handling declarations and definitions | Andy Shevchenko | 1 | -8/+9
There are a few of_node related APIs defined in the driver core. Group the respective declarations and definitions in the header. There is no functional change. Signed-off-by: Andy Shevchenko <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Greg Kroah-Hartman <[email protected]>
2024-06-04 | misc: eeprom_93xx46: Hide legacy platform data in the driver | Andy Shevchenko | 1 | -32/+0
First of all, there is no user for the platform data in the kernel. Second, it needs a lot of updates to follow the modern standards of the kernel, including proper Device Tree bindings and device property handling. For now, just hide the legacy platform data in the driver's code. Signed-off-by: Andy Shevchenko <[email protected]> Reviewed-by: Linus Walleij <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Greg Kroah-Hartman <[email protected]>
2024-06-04 | function_graph: Implement fgraph_reserve_data() and fgraph_retrieve_data() | Steven Rostedt (VMware) | 1 | -0/+3
Added functions that can be called by a fgraph_ops entryfunc and retfunc to store state between the entry of the function being traced and the exit of the same function. The fgraph_ops entryfunc() may call fgraph_reserve_data() to store up to 32 words onto the task's shadow ret_stack, and this can then be retrieved by fgraph_retrieve_data() called by the corresponding retfunc(). Co-developed with Masami Hiramatsu: Link: https://lore.kernel.org/linux-trace-kernel/171509109089.162236.11372474169781184034.stgit@devnote2 Link: https://lore.kernel.org/linux-trace-kernel/[email protected] Cc: Mark Rutland <[email protected]> Cc: Mathieu Desnoyers <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Alexei Starovoitov <[email protected]> Cc: Florent Revest <[email protected]> Cc: Martin KaFai Lau <[email protected]> Cc: bpf <[email protected]> Cc: Sven Schnelle <[email protected]> Cc: Alexei Starovoitov <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Arnaldo Carvalho de Melo <[email protected]> Cc: Daniel Borkmann <[email protected]> Cc: Alan Maguire <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Guo Ren <[email protected]> Reviewed-by: Masami Hiramatsu (Google) <[email protected]> Signed-off-by: Steven Rostedt (VMware) <[email protected]> Signed-off-by: Masami Hiramatsu (Google) <[email protected]> Signed-off-by: Steven Rostedt (Google) <[email protected]>
2024-06-04 | function_graph: Move graph notrace bit to shadow stack global var | Steven Rostedt (VMware) | 1 | -7/+0
The use of the task->trace_recursion for the logic used for the function graph no-trace was a bit of an abuse of that variable. Now that there exists global vars that are per stack for registered graph traces, use that instead. Link: https://lore.kernel.org/linux-trace-kernel/171509107907.162236.6564679266777519065.stgit@devnote2 Link: https://lore.kernel.org/linux-trace-kernel/[email protected] Cc: Mark Rutland <[email protected]> Cc: Mathieu Desnoyers <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Alexei Starovoitov <[email protected]> Cc: Florent Revest <[email protected]> Cc: Martin KaFai Lau <[email protected]> Cc: bpf <[email protected]> Cc: Sven Schnelle <[email protected]> Cc: Alexei Starovoitov <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Arnaldo Carvalho de Melo <[email protected]> Cc: Daniel Borkmann <[email protected]> Cc: Alan Maguire <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Guo Ren <[email protected]> Reviewed-by: Masami Hiramatsu (Google) <[email protected]> Signed-off-by: Steven Rostedt (VMware) <[email protected]> Signed-off-by: Masami Hiramatsu (Google) <[email protected]> Signed-off-by: Steven Rostedt (Google) <[email protected]>
2024-06-04 | function_graph: Move graph depth stored data to shadow stack global var | Steven Rostedt (VMware) | 1 | -29/+0
The use of the task->trace_recursion for the logic used for the function graph depth was a bit of an abuse of that variable. Now that there exists global vars that are per stack for registered graph traces, use that instead. Link: https://lore.kernel.org/linux-trace-kernel/171509106728.162236.2398372644430125344.stgit@devnote2 Link: https://lore.kernel.org/linux-trace-kernel/[email protected] Cc: Mark Rutland <[email protected]> Cc: Mathieu Desnoyers <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Alexei Starovoitov <[email protected]> Cc: Florent Revest <[email protected]> Cc: Martin KaFai Lau <[email protected]> Cc: bpf <[email protected]> Cc: Sven Schnelle <[email protected]> Cc: Alexei Starovoitov <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Arnaldo Carvalho de Melo <[email protected]> Cc: Daniel Borkmann <[email protected]> Cc: Alan Maguire <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Guo Ren <[email protected]> Reviewed-by: Masami Hiramatsu (Google) <[email protected]> Signed-off-by: Steven Rostedt (VMware) <[email protected]> Signed-off-by: Masami Hiramatsu (Google) <[email protected]> Signed-off-by: Steven Rostedt (Google) <[email protected]>
2024-06-04 | function_graph: Move set_graph_function tests to shadow stack global var | Steven Rostedt (VMware) | 1 | -4/+1
The use of the task->trace_recursion for the logic used for the set_graph_function was a bit of an abuse of that variable. Now that there exists global vars that are per stack for registered graph traces, use that instead. Link: https://lore.kernel.org/linux-trace-kernel/171509105520.162236.10339831553995971290.stgit@devnote2 Link: https://lore.kernel.org/linux-trace-kernel/[email protected] Cc: Mark Rutland <[email protected]> Cc: Mathieu Desnoyers <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Alexei Starovoitov <[email protected]> Cc: Florent Revest <[email protected]> Cc: Martin KaFai Lau <[email protected]> Cc: bpf <[email protected]> Cc: Sven Schnelle <[email protected]> Cc: Alexei Starovoitov <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Arnaldo Carvalho de Melo <[email protected]> Cc: Daniel Borkmann <[email protected]> Cc: Alan Maguire <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Guo Ren <[email protected]> Reviewed-by: Masami Hiramatsu (Google) <[email protected]> Signed-off-by: Steven Rostedt (VMware) <[email protected]> Signed-off-by: Masami Hiramatsu (Google) <[email protected]> Signed-off-by: Steven Rostedt (Google) <[email protected]>
2024-06-04function_graph: Add "task variables" per task for fgraph_opsSteven Rostedt (VMware)1-0/+1
Add a "task variables" array on the tasks shadow ret_stack that is the size of longs for each possible registered fgraph_ops. That's a total of 16, taking up 8 * 16 = 128 bytes (out of a page size 4k). This will allow for fgraph_ops to do specific features on a per task basis having a way to maintain state for each task. Co-developed with Masami Hiramatsu: Link: https://lore.kernel.org/linux-trace-kernel/171509104383.162236.12239656156685718550.stgit@devnote2 Link: https://lore.kernel.org/linux-trace-kernel/[email protected] Cc: Mark Rutland <[email protected]> Cc: Mathieu Desnoyers <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Alexei Starovoitov <[email protected]> Cc: Florent Revest <[email protected]> Cc: Martin KaFai Lau <[email protected]> Cc: bpf <[email protected]> Cc: Sven Schnelle <[email protected]> Cc: Alexei Starovoitov <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Arnaldo Carvalho de Melo <[email protected]> Cc: Daniel Borkmann <[email protected]> Cc: Alan Maguire <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Guo Ren <[email protected]> Reviewed-by: Masami Hiramatsu (Google) <[email protected]> Signed-off-by: Steven Rostedt (VMware) <[email protected]> Signed-off-by: Masami Hiramatsu (Google) <[email protected]> Signed-off-by: Steven Rostedt (Google) <[email protected]>
2024-06-04 | function_graph: Add pid tracing back to function graph tracer | Steven Rostedt (Google) | 1 | -0/+2
Now that the function_graph has a main callback that handles the function graph subops tracing, it no longer honors the pid filtering of ftrace. Add back this logic in the function_graph code to update the gops callback for the entry function to test if it should trace the current task or not. Link: https://lore.kernel.org/linux-trace-kernel/[email protected] Cc: Masami Hiramatsu <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Mathieu Desnoyers <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Alexei Starovoitov <[email protected]> Cc: Florent Revest <[email protected]> Cc: Martin KaFai Lau <[email protected]> Cc: bpf <[email protected]> Cc: Sven Schnelle <[email protected]> Cc: Alexei Starovoitov <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Arnaldo Carvalho de Melo <[email protected]> Cc: Daniel Borkmann <[email protected]> Cc: Alan Maguire <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Guo Ren <[email protected]> Reviewed-by: Masami Hiramatsu (Google) <[email protected]> Signed-off-by: Steven Rostedt (Google) <[email protected]>
2024-06-04 | function_graph: Have the instances use their own ftrace_ops for filtering | Steven Rostedt (VMware) | 1 | -0/+1
Allow for instances to have their own ftrace_ops as part of the fgraph_ops that makes the function_graph tracer filter on the set_ftrace_filter file of the instance and not the top instance. This uses the new ftrace_startup_subops(), by using graph_ops as the "manager ops" that defines the callback function and adds the functions defined by the filters of the ops for each trace instance. The callback defined by the manager ops will call the registered fgraph ops that were added to the fgraph_array. Co-developed with Masami Hiramatsu: Link: https://lore.kernel.org/linux-trace-kernel/171509102088.162236.15758883237657317789.stgit@devnote2 Link: https://lore.kernel.org/linux-trace-kernel/[email protected] Cc: Mark Rutland <[email protected]> Cc: Mathieu Desnoyers <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Alexei Starovoitov <[email protected]> Cc: Florent Revest <[email protected]> Cc: Martin KaFai Lau <[email protected]> Cc: bpf <[email protected]> Cc: Sven Schnelle <[email protected]> Cc: Alexei Starovoitov <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Arnaldo Carvalho de Melo <[email protected]> Cc: Daniel Borkmann <[email protected]> Cc: Alan Maguire <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Guo Ren <[email protected]> Reviewed-by: Masami Hiramatsu (Google) <[email protected]> Signed-off-by: Steven Rostedt (VMware) <[email protected]> Signed-off-by: Masami Hiramatsu (Google) <[email protected]> Signed-off-by: Steven Rostedt (Google) <[email protected]>
2024-06-04 | ftrace: Allow subops filtering to be modified | Steven Rostedt (Google) | 1 | -0/+3
The subops filters use a "manager" ops to enable and disable its filters. The manager ops can handle more than one subops, and its filter is what controls what functions get set. Add a ftrace_hash_move_and_update_subops() function that will update the manager ops when the subops filters change. Link: https://lore.kernel.org/linux-trace-kernel/[email protected] Cc: Masami Hiramatsu <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Mathieu Desnoyers <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Alexei Starovoitov <[email protected]> Cc: Florent Revest <[email protected]> Cc: Martin KaFai Lau <[email protected]> Cc: bpf <[email protected]> Cc: Sven Schnelle <[email protected]> Cc: Alexei Starovoitov <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Arnaldo Carvalho de Melo <[email protected]> Cc: Daniel Borkmann <[email protected]> Cc: Alan Maguire <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Guo Ren <[email protected]> Reviewed-by: Masami Hiramatsu (Google) <[email protected]> Signed-off-by: Steven Rostedt (Google) <[email protected]>
2024-06-04 | ftrace: Add subops logic to allow one ops to manage many | Steven Rostedt (Google) | 1 | -0/+1
There are cases where a single system will use a single function callback to handle multiple users. For example, to allow the function_graph tracer to have multiple users where each can trace their own set of functions, it is useful to have only one ftrace_ops registered to ftrace that will call a function by the function_graph tracer to handle the multiplexing with the different registered function_graph tracers. Add a "subop_list" to the ftrace_ops that will hold a list of other ftrace_ops that the top ftrace_ops will manage. Add a function ftrace_startup_subops() that takes the manager ftrace_ops and a subop ftrace_ops it will manage. If there are no subops with the ftrace_ops yet, it will copy the ftrace_ops subop filters to the manager ftrace_ops, register that with ftrace_startup(), and add the subop to its subop_list. If the manager ops already has something registered, it will then merge the new subop filters with what it has and enable the new functions that cover all the subops it has. To remove a subop, ftrace_shutdown_subops() is called, which will use the subop_list of the manager ops to rebuild all the functions it needs to trace, and update the ftrace records to only call the functions it now has registered. If there are no more functions registered, it will then call ftrace_shutdown() to disable itself completely. Note, it is up to the manager ops callback to always make sure that the subops callbacks are called if its filter matches, as there are times in the update where the callback could be calling more functions than those that are currently registered. This could be extended to handle systems other than function_graph; for example, fprobes could use this (but will need an interface to call ftrace_startup_subops()). Link: https://lore.kernel.org/linux-trace-kernel/[email protected] Cc: Masami Hiramatsu <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Mathieu Desnoyers <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Alexei Starovoitov <[email protected]> Cc: Florent Revest <[email protected]> Cc: Martin KaFai Lau <[email protected]> Cc: bpf <[email protected]> Cc: Sven Schnelle <[email protected]> Cc: Alexei Starovoitov <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Arnaldo Carvalho de Melo <[email protected]> Cc: Daniel Borkmann <[email protected]> Cc: Alan Maguire <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Guo Ren <[email protected]> Reviewed-by: Masami Hiramatsu (Google) <[email protected]> Signed-off-by: Steven Rostedt (Google) <[email protected]>
2024-06-04 | ftrace: Allow ftrace startup flags to exist without dynamic ftrace | Steven Rostedt (VMware) | 1 | -9/+9
Some of the flags for ftrace_startup() may be exposed even when CONFIG_DYNAMIC_FTRACE is not configured in. This is fine as the difference between dynamic ftrace and static ftrace is done within the internals of ftrace itself. No need to have use cases fail to compile because dynamic ftrace is disabled. This change is needed to move some of the logic of what is passed to ftrace_startup() out of the parameters of ftrace_startup(). Link: https://lore.kernel.org/linux-trace-kernel/171509100890.162236.4362350342549122222.stgit@devnote2 Link: https://lore.kernel.org/linux-trace-kernel/[email protected] Cc: Mark Rutland <[email protected]> Cc: Mathieu Desnoyers <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Alexei Starovoitov <[email protected]> Cc: Florent Revest <[email protected]> Cc: Martin KaFai Lau <[email protected]> Cc: bpf <[email protected]> Cc: Sven Schnelle <[email protected]> Cc: Alexei Starovoitov <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Arnaldo Carvalho de Melo <[email protected]> Cc: Daniel Borkmann <[email protected]> Cc: Alan Maguire <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Guo Ren <[email protected]> Reviewed-by: Masami Hiramatsu (Google) <[email protected]> Signed-off-by: Steven Rostedt (VMware) <[email protected]> Signed-off-by: Masami Hiramatsu (Google) <[email protected]> Signed-off-by: Steven Rostedt (Google) <[email protected]>
2024-06-04 | ftrace: Allow function_graph tracer to be enabled in instances | Steven Rostedt (VMware) | 1 | -0/+1
Now that function graph tracing can handle more than one user, allow it to be enabled in the ftrace instances. Note, the filtering of the functions is still joined by the top level set_ftrace_filter and friends, as well as the graph and nograph files. Co-developed with Masami Hiramatsu: Link: https://lore.kernel.org/linux-trace-kernel/171509099743.162236.1699959255446248163.stgit@devnote2 Link: https://lore.kernel.org/linux-trace-kernel/[email protected] Cc: Mark Rutland <[email protected]> Cc: Mathieu Desnoyers <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Alexei Starovoitov <[email protected]> Cc: Florent Revest <[email protected]> Cc: Martin KaFai Lau <[email protected]> Cc: bpf <[email protected]> Cc: Sven Schnelle <[email protected]> Cc: Alexei Starovoitov <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Arnaldo Carvalho de Melo <[email protected]> Cc: Daniel Borkmann <[email protected]> Cc: Alan Maguire <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Guo Ren <[email protected]> Reviewed-by: Masami Hiramatsu (Google) <[email protected]> Signed-off-by: Steven Rostedt (VMware) <[email protected]> Signed-off-by: Masami Hiramatsu (Google) <[email protected]> Signed-off-by: Steven Rostedt (Google) <[email protected]>
2024-06-04 | ftrace/function_graph: Pass fgraph_ops to function graph callbacks | Steven Rostedt (VMware) | 1 | -3/+7
Pass the fgraph_ops structure to the function graph callbacks. This will allow callbacks to add a descriptor to a fgraph_ops private field that will be added in the future and use it for the callbacks. This will be useful when more than one callback can be registered to the function graph tracer. Co-developed with Masami Hiramatsu: Link: https://lore.kernel.org/linux-trace-kernel/171509098588.162236.4787930115997357578.stgit@devnote2 Link: https://lore.kernel.org/linux-trace-kernel/[email protected] Cc: Mark Rutland <[email protected]> Cc: Mathieu Desnoyers <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Alexei Starovoitov <[email protected]> Cc: Florent Revest <[email protected]> Cc: Martin KaFai Lau <[email protected]> Cc: bpf <[email protected]> Cc: Sven Schnelle <[email protected]> Cc: Alexei Starovoitov <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Arnaldo Carvalho de Melo <[email protected]> Cc: Daniel Borkmann <[email protected]> Cc: Alan Maguire <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Guo Ren <[email protected]> Reviewed-by: Masami Hiramatsu (Google) <[email protected]> Signed-off-by: Steven Rostedt (VMware) <[email protected]> Signed-off-by: Masami Hiramatsu (Google) <[email protected]> Signed-off-by: Steven Rostedt (Google) <[email protected]>
2024-06-04 | function_graph: Allow multiple users to attach to function graph | Steven Rostedt (VMware) | 1 | -1/+2
Allow for multiple users to attach to the function graph tracer at the same time. Only 16 simultaneous users can attach to the tracer. This is because there's an array that stores the pointers to the attached fgraph_ops. When a function being traced is entered, each of the ftrace_ops entryfunc is called and if it returns non-zero, its index into the array will be added to the shadow stack. On exit of the function being traced, the shadow stack will contain the indexes of the ftrace_ops on the array that want their retfunc to be called. Because a function may sleep for a long time (if a task sleeps itself), the return of the function may be literally days later. If the ftrace_ops is removed, its place on the array is replaced with a ftrace_ops that contains the stub functions and that will be called when the function finally returns. If another ftrace_ops is added that happens to get the same index into the array, its return function may be called. But that's actually the way things currently work with the old function graph tracer. If one tracer is removed and another is added, the new one will get the return calls of the function traced by the previous one, thus this is not a regression. This can be fixed by adding a counter to each time the array item is updated and save that on the shadow stack as well, such that it won't be called if the index saved does not match the index on the array. Note, being able to filter functions when both are called is not completely handled yet, but that shouldn't be too hard to manage. Co-developed with Masami Hiramatsu: Link: https://lore.kernel.org/linux-trace-kernel/171509096221.162236.8806372072523195752.stgit@devnote2 Link: https://lore.kernel.org/linux-trace-kernel/[email protected] Cc: Mark Rutland <[email protected]> Cc: Mathieu Desnoyers <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Alexei Starovoitov <[email protected]> Cc: Florent Revest <[email protected]> Cc: Martin KaFai Lau <[email protected]> Cc: bpf <[email protected]> Cc: Sven Schnelle <[email protected]> Cc: Alexei Starovoitov <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Arnaldo Carvalho de Melo <[email protected]> Cc: Daniel Borkmann <[email protected]> Cc: Alan Maguire <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Guo Ren <[email protected]> Reviewed-by: Masami Hiramatsu (Google) <[email protected]> Signed-off-by: Steven Rostedt (VMware) <[email protected]> Signed-off-by: Masami Hiramatsu (Google) <[email protected]> Signed-off-by: Steven Rostedt (Google) <[email protected]>
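A heavily simplified, conceptual sketch of the dispatch described above; all names, signatures, and sizes here are illustrative assumptions rather than the actual fgraph code:

    #define FGRAPH_ARRAY_SIZE 16            /* at most 16 simultaneous users */

    static struct fgraph_ops *fgraph_array[FGRAPH_ARRAY_SIZE];

    static int fgraph_entry_dispatch(struct ftrace_graph_ent *trace,
                                     unsigned long *shadow_stack, int *nr)
    {
            int i;

            /* Ask each attached user whether it wants the matching return
             * callback, and push the interested indexes onto the task's
             * shadow stack so the return side knows whom to call. */
            for (i = 0; i < FGRAPH_ARRAY_SIZE; i++) {
                    struct fgraph_ops *gops = fgraph_array[i];

                    if (gops && gops->entryfunc(trace, gops))
                            shadow_stack[(*nr)++] = i;
            }

            return *nr;
    }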
2024-06-04 | function_graph: Convert ret_stack to a series of longs | Steven Rostedt (VMware) | 1 | -1/+1
In order to make it possible to have multiple callbacks registered with the function_graph tracer, the retstack needs to be converted from an array of ftrace_ret_stack structures to an array of longs. This will allow to store the list of callbacks on the stack for the return side of the functions. Link: https://lore.kernel.org/linux-trace-kernel/171509092742.162236.4427737821399314856.stgit@devnote2 Link: https://lore.kernel.org/linux-trace-kernel/[email protected] Cc: Mark Rutland <[email protected]> Cc: Mathieu Desnoyers <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Alexei Starovoitov <[email protected]> Cc: Florent Revest <[email protected]> Cc: Martin KaFai Lau <[email protected]> Cc: bpf <[email protected]> Cc: Sven Schnelle <[email protected]> Cc: Alexei Starovoitov <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Arnaldo Carvalho de Melo <[email protected]> Cc: Daniel Borkmann <[email protected]> Cc: Alan Maguire <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Guo Ren <[email protected]> Reviewed-by: Masami Hiramatsu (Google) <[email protected]> Signed-off-by: Steven Rostedt (VMware) <[email protected]> Signed-off-by: Masami Hiramatsu (Google) <[email protected]> Signed-off-by: Steven Rostedt (Google) <[email protected]>
2024-06-04 | sysfs: Unbreak the build around sysfs_bin_attr_simple_read() | Lukas Wunner | 1 | -0/+9
Günter reports build breakage for m68k "m5208evb_defconfig" plus CONFIG_BLK_DEV_INITRD=y caused by commit 66bc1a173328 ("treewide: Use sysfs_bin_attr_simple_read() helper"). The defconfig disables CONFIG_SYSFS, so sysfs_bin_attr_simple_read() is not compiled into the kernel. But init/initramfs.c references that function in the initializer of a struct bin_attribute. Add an empty static inline to avoid the build breakage. Fixes: 66bc1a173328 ("treewide: Use sysfs_bin_attr_simple_read() helper") Reported-by: Guenter Roeck <[email protected]> Closes: https://lore.kernel.org/r/[email protected] Signed-off-by: Lukas Wunner <[email protected]> Tested-by: Guenter Roeck <[email protected]> Reviewed-by: Rafael J. Wysocki <[email protected]> Link: https://lore.kernel.org/r/05f4290439a58730738a15b0c99cd8576c4aa0d9.1716461752.git.lukas@wunner.de Signed-off-by: Greg Kroah-Hartman <[email protected]>
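The fix presumably follows the usual pattern of a no-op static inline for !CONFIG_SYSFS builds; a sketch, with the prototype assumed to match bin_attribute's read signature:

    #ifndef CONFIG_SYSFS
    /* Sketch: empty stub so struct bin_attribute initializers still compile
     * when sysfs is disabled. */
    static inline ssize_t sysfs_bin_attr_simple_read(struct file *file,
                                                     struct kobject *kobj,
                                                     struct bin_attribute *attr,
                                                     char *buf, loff_t off,
                                                     size_t count)
    {
            return 0;
    }
    #endif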
2024-06-04 | driver core: remove devm_device_add_groups() | Greg Kroah-Hartman | 1 | -2/+0
There is no more in-kernel users of this function, and no driver should ever be using it, so remove it from the kernel. Acked-by: Dmitry Torokhov <[email protected]> Acked-by: "Rafael J. Wysocki" <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Greg Kroah-Hartman <[email protected]>
2024-06-04 | usb: typec: Update sysfs when setting ops | Abhishek Pandit-Subedi | 1 | -0/+3
When adding altmode ops, update the sysfs group so that visibility is also recalculated. Reviewed-by: Heikki Krogerus <[email protected]> Reviewed-by: Benson Leung <[email protected]> Signed-off-by: Abhishek Pandit-Subedi <[email protected]> Signed-off-by: Jameson Thies <[email protected]> Reviewed-by: Dmitry Baryshkov <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Greg Kroah-Hartman <[email protected]>
2024-06-04 | kcov, usb: disable interrupts in kcov_remote_start_usb_softirq | Andrey Konovalov | 1 | -9/+38
After commit 8fea0c8fda30 ("usb: core: hcd: Convert from tasklet to BH workqueue"), usb_giveback_urb_bh() runs in the BH workqueue with interrupts enabled. Thus, the remote coverage collection section in usb_giveback_urb_bh()-> __usb_hcd_giveback_urb() might be interrupted, and the interrupt handler might invoke __usb_hcd_giveback_urb() again. This breaks KCOV, as it does not support nested remote coverage collection sections within the same context (neither in task nor in softirq). Update kcov_remote_start/stop_usb_softirq() to disable interrupts for the duration of the coverage collection section to avoid nested sections in the softirq context (in addition to such in the task context, which are already handled). Reported-by: Tetsuo Handa <[email protected]> Closes: https://lore.kernel.org/linux-usb/[email protected]/ Closes: https://syzkaller.appspot.com/bug?extid=0438378d6f157baae1a2 Suggested-by: Alan Stern <[email protected]> Fixes: 8fea0c8fda30 ("usb: core: hcd: Convert from tasklet to BH workqueue") Cc: [email protected] Acked-by: Dmitry Vyukov <[email protected]> Signed-off-by: Andrey Konovalov <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Greg Kroah-Hartman <[email protected]>
2024-06-04 | iommu: Return right value in iommu_sva_bind_device() | Lu Baolu | 1 | -1/+1
iommu_sva_bind_device() should return either a sva bond handle or an ERR_PTR value in error cases. Existing drivers (idxd and uacce) only check the return value with IS_ERR(). This could potentially lead to a kernel NULL pointer dereference issue if the function returns NULL instead of an error pointer. In reality, this doesn't cause any problems because iommu_sva_bind_device() only returns NULL when the kernel is not configured with CONFIG_IOMMU_SVA. In this case, iommu_dev_enable_feature(dev, IOMMU_DEV_FEAT_SVA) will return an error, and the device drivers won't call iommu_sva_bind_device() at all. Fixes: 26b25a2b98e4 ("iommu: Bind process address spaces to devices") Signed-off-by: Lu Baolu <[email protected]> Reviewed-by: Jean-Philippe Brucker <[email protected]> Reviewed-by: Kevin Tian <[email protected]> Reviewed-by: Vasant Hegde <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Joerg Roedel <[email protected]>
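In other words, the !CONFIG_IOMMU_SVA stub should hand back an error pointer rather than NULL so that IS_ERR() checks in callers keep working; a hedged sketch (the errno choice is illustrative):

    #ifndef CONFIG_IOMMU_SVA
    static inline struct iommu_sva *
    iommu_sva_bind_device(struct device *dev, struct mm_struct *mm)
    {
            return ERR_PTR(-ENODEV);        /* previously: return NULL */
    }
    #endif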
2024-06-04 | tcp: add a helper for setting EOR on tail skb | Jakub Kicinski | 1 | -0/+9
TLS (and hopefully soon PSP will) use EOR to prevent skbs with different decrypted state from getting merged, without adding new tests to the skb handling. In both cases once the connection switches to an "encrypted" state, all subsequent skbs will be encrypted, so a single "EOR fence" is sufficient to prevent mixing. Add a helper for setting the EOR bit, to make this arrangement more explicit. Signed-off-by: Jakub Kicinski <[email protected]> Reviewed-by: Eric Dumazet <[email protected]> Reviewed-by: Willem de Bruijn <[email protected]> Signed-off-by: Paolo Abeni <[email protected]>
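The commit text above does not spell out the helper's name, so the sketch below uses a hypothetical one; it only illustrates the "EOR fence" idea of marking the current tail of the write queue:

    /* Hypothetical helper name; sets the EOR bit on the tail skb, if any. */
    static inline void tcp_set_tail_eor(struct sock *sk)
    {
            struct sk_buff *skb = tcp_write_queue_tail(sk);

            if (skb)
                    TCP_SKB_CB(skb)->eor = 1;
    }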
2024-06-04 | tcp: wrap mptcp and decrypted checks into tcp_skb_can_collapse_rx() | Jakub Kicinski | 1 | -0/+7
tcp_skb_can_collapse() checks for conditions which don't make sense on input. Because of this we ended up sprinkling a few pairs of mptcp_skb_can_collapse() and skb_cmp_decrypted() calls on the input path. Group them in a new helper. This should make it less likely that someone will check mptcp and not decrypted or vice versa when adding new code. This implicitly adds a decrypted check early in tcp_collapse(). AFAIU this will very slightly increase our ability to collapse packets under memory pressure, not a real bug. Signed-off-by: Jakub Kicinski <[email protected]> Reviewed-by: Eric Dumazet <[email protected]> Reviewed-by: Matthieu Baerts (NGI0) <[email protected]> Reviewed-by: Willem de Bruijn <[email protected]> Signed-off-by: Paolo Abeni <[email protected]>
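Based on the description, the new helper presumably just groups the two existing checks; a hedged sketch of its body:

    static inline bool tcp_skb_can_collapse_rx(const struct sk_buff *to,
                                               const struct sk_buff *from)
    {
            /* Input-path checks grouped: same MPTCP mapping, same decrypted state. */
            return likely(mptcp_skb_can_collapse(to, from) &&
                          !skb_cmp_decrypted(to, from));
    }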
2024-06-04 | net/sched: cls_flower: add support for matching tunnel control flags | Davide Caratti | 1 | -0/+3
extend cls_flower to match TUNNEL_FLAGS_PRESENT bits in tunnel metadata. Suggested-by: Ilya Maximets <[email protected]> Acked-by: Jamal Hadi Salim <[email protected]> Signed-off-by: Davide Caratti <[email protected]> Reviewed-by: Simon Horman <[email protected]> Signed-off-by: Paolo Abeni <[email protected]>
2024-06-04 | flow_dissector: add support for tunnel control flags | Davide Caratti | 2 | -0/+21
Dissect [no]csum, [no]dontfrag, [no]oam, [no]crit flags from skb metadata. This is a prerequisite for matching these control flags using TC flower. Suggested-by: Ilya Maximets <[email protected]> Signed-off-by: Davide Caratti <[email protected]> Reviewed-by: Simon Horman <[email protected]> Signed-off-by: Paolo Abeni <[email protected]>
2024-06-04 | dt-bindings: clock: add Amlogic C3 peripherals clock controller | Xianwei Zhao | 1 | -0/+212
Add the peripherals clock controller dt-bindings for Amlogic C3 SoC family Reviewed-by: Rob Herring (Arm) <[email protected]> Co-developed-by: Chuan Liu <[email protected]> Signed-off-by: Chuan Liu <[email protected]> Signed-off-by: Xianwei Zhao <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Jerome Brunet <[email protected]>
2024-06-04 | dt-bindings: clock: add Amlogic C3 SCMI clock controller support | Xianwei Zhao | 1 | -0/+27
Add the SCMI clock controller dt-bindings for Amlogic C3 SoC family Acked-by: Rob Herring (Arm) <[email protected]> Co-developed-by: Chuan Liu <[email protected]> Signed-off-by: Chuan Liu <[email protected]> Signed-off-by: Xianwei Zhao <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Jerome Brunet <[email protected]>
2024-06-04 | dt-bindings: clock: add Amlogic C3 PLL clock controller | Xianwei Zhao | 1 | -0/+40
Add the PLL clock controller dt-bindings for Amlogic C3 SoC family. Reviewed-by: Krzysztof Kozlowski <[email protected]> Co-developed-by: Chuan Liu <[email protected]> Signed-off-by: Chuan Liu <[email protected]> Signed-off-by: Xianwei Zhao <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Jerome Brunet <[email protected]>
2024-06-04 | m68k: amiga: Turn off Warp1260 interrupts during boot | Paolo Pisati | 1 | -0/+3
On an Amiga 1200 equipped with a Warp1260 accelerator, an interrupt storm coming from the accelerator board causes the machine to crash in local_irq_enable() or auto_irq_enable(). Disabling interrupts for the Warp1260 in amiga_parse_bootinfo() fixes the problem. Link: https://lore.kernel.org/r/[email protected] Cc: stable <[email protected]> Signed-off-by: Paolo Pisati <[email protected]> Reviewed-by: Michael Schmitz <[email protected]> Reviewed-by: Geert Uytterhoeven <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Geert Uytterhoeven <[email protected]>
2024-06-04 | media: v4l2-subdev: Provide const-aware subdev state accessors | Laurent Pinchart | 1 | -18/+36
It would be useful to mark instances of v4l2_subdev_state structures as const when code needs to access them read-only. This isn't currently possible, as the v4l2_subdev_state_get_*() accessor functions take a non-const pointer to the state. Use _Generic() to provide two different versions of the accessors, for const and non-const states respectively. The former returns a const pointer to the requested format, rectangle or interval, implementing const-correctness. The latter returns a non-const pointer, preserving the current behaviour for drivers. Signed-off-by: Laurent Pinchart <[email protected]> Reviewed-by: Sakari Ailus <[email protected]> Reviewed-by: Tomi Valkeinen <[email protected]> [Sakari Ailus: Drop the word "below" from the text.] Signed-off-by: Sakari Ailus <[email protected]> Signed-off-by: Hans Verkuil <[email protected]>
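The underlying C11 mechanism is plain _Generic() dispatch on the qualification of the pointer type; a simplified, self-contained illustration (not the actual V4L2 macros):

    /* Toy model: one macro, two accessors, constness preserved end to end. */
    struct state { int fmt; };

    static inline int *state_fmt_rw(struct state *s) { return &s->fmt; }
    static inline const int *state_fmt_ro(const struct state *s) { return &s->fmt; }

    #define state_fmt(s)                                        \
            _Generic((s),                                       \
                     const struct state *: state_fmt_ro,        \
                     struct state *:       state_fmt_rw)(s)

    /* Usage: passing a const state pointer yields a const result pointer,
     * so read-only callers cannot modify the state through the accessor. */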
2024-06-04 | media: v4l2-subdev: Fix v4l2_subdev_state_get_format() documentation | Laurent Pinchart | 1 | -3/+3
The documentation of the v4l2_subdev_state_get_format() macro incorrectly references __v4l2_subdev_state_get_format() instead of __v4l2_subdev_state_gen_call(). Fix it, and also update the list of similar macros to add the missing v4l2_subdev_state_get_interval(). Suggested-by: Sakari Ailus <[email protected]> Signed-off-by: Laurent Pinchart <[email protected]> Reviewed-by: Sakari Ailus <[email protected]> Signed-off-by: Sakari Ailus <[email protected]> Signed-off-by: Hans Verkuil <[email protected]>
2024-06-04 | media: subdev: Improve s_stream documentation | Tomi Valkeinen | 1 | -0/+9
Now that the enable/disable_streams operations are available for single-stream subdevices too, there's no reason to use the old s_stream operation in new drivers. Extend the documentation to reflect this. Signed-off-by: Tomi Valkeinen <[email protected]> Reviewed-by: Umang Jain <[email protected]> Reviewed-by: Laurent Pinchart <[email protected]> Signed-off-by: Sakari Ailus <[email protected]> Signed-off-by: Hans Verkuil <[email protected]>
2024-06-04 | media: subdev: Add v4l2_subdev_is_streaming() | Tomi Valkeinen | 1 | -0/+13
Add a helper function which returns whether the subdevice is streaming, i.e. if .s_stream or .enable_streams has been called successfully. Reviewed-by: Umang Jain <[email protected]> Reviewed-by: Laurent Pinchart <[email protected]> Tested-by: Umang Jain <[email protected]> Signed-off-by: Tomi Valkeinen <[email protected]> Signed-off-by: Sakari Ailus <[email protected]> Signed-off-by: Hans Verkuil <[email protected]>
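A hedged usage sketch, assuming the helper takes the subdev pointer and returns a bool; a driver might use it, for example, to refuse format changes while streaming:

    static int my_sensor_set_fmt(struct v4l2_subdev *sd,
                                 struct v4l2_subdev_state *state,
                                 struct v4l2_subdev_format *fmt)
    {
            if (v4l2_subdev_is_streaming(sd))
                    return -EBUSY;          /* don't reconfigure mid-stream */

            /* ... apply the requested format ... */
            return 0;
    }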
2024-06-04 | media: subdev: Improve v4l2_subdev_enable/disable_streams_fallback | Tomi Valkeinen | 1 | -5/+4
v4l2_subdev_enable/disable_streams_fallback() supports falling back to .s_stream() for subdevs with a single source pad. It also tracks the enabled streams for that one pad in the sd->enabled_streams field. Tracking the enabled streams with sd->enabled_streams does not make sense, as with .s_stream() there can only be a single stream per pad. Thus, as the v4l2_subdev_enable/disable_streams_fallback() only supports a single source pad, all we really need is a boolean which tells whether streaming has been enabled on this pad or not. However, as we only need a true/false state for a pad (instead of tracking which streams have been enabled for a pad), we can easily extend the fallback mechanism to support multiple source pads as we only need to keep track of which pads have been enabled. Change the sd->enabled_streams field to sd->enabled_pads, which is a 64-bit bitmask tracking the enabled source pads. With this change we can remove the restriction that v4l2_subdev_enable/disable_streams_fallback() only supports a single source pad. Reviewed-by: Laurent Pinchart <[email protected]> Tested-by: Umang Jain <[email protected]> Signed-off-by: Tomi Valkeinen <[email protected]> Signed-off-by: Sakari Ailus <[email protected]> Signed-off-by: Hans Verkuil <[email protected]>
2024-06-04 | media: subdev: Fix use of sd->enabled_streams in call_s_stream() | Tomi Valkeinen | 1 | -0/+3
call_s_stream() uses sd->enabled_streams to track whether streaming has already been enabled. However, v4l2_subdev_enable/disable_streams_fallback(), which was the original user of this field, already uses it, and v4l2_subdev_enable/disable_streams_fallback() will call call_s_stream(). This leads to a conflict as both functions set the field. Afaics, both functions set the field to the same value, so it won't cause a runtime bug, but it's still wrong and if we, e.g., change how v4l2_subdev_enable/disable_streams_fallback() operates we might easily cause bugs. Fix this by adding a new field, 's_stream_enabled', for call_s_stream(). Reviewed-by: Umang Jain <[email protected]> Reviewed-by: Laurent Pinchart <[email protected]> Tested-by: Umang Jain <[email protected]> Signed-off-by: Tomi Valkeinen <[email protected]> Signed-off-by: Sakari Ailus <[email protected]> Signed-off-by: Hans Verkuil <[email protected]>
2024-06-04 | media: ipu-bridge: add mod_devicetable.h header inclusion | Bingbu Cao | 1 | -0/+1
ACPI_ID_LEN is defined in mod_devicetable.h, so inclusion of that header should be guaranteed in ipu-bridge.h itself instead of in the source files which include ipu-bridge.h. Signed-off-by: Bingbu Cao <[email protected]> Reviewed-by: Andy Shevchenko <[email protected]> Signed-off-by: Sakari Ailus <[email protected]> Signed-off-by: Hans Verkuil <[email protected]>
2024-06-03 | hwmon: Add PEC attribute support to hardware monitoring core | Guenter Roeck | 1 | -0/+2
Several hardware monitoring chips optionally support Packet Error Checking (PEC). For some chips, PEC support can be enabled simply by setting I2C_CLIENT_PEC in the i2c client data structure. Others require chip-specific code to enable or disable PEC support. Introduce hwmon_chip_pec and HWMON_C_PEC to simplify adding configurable PEC support for hardware monitoring drivers. A driver can set HWMON_C_PEC in its chip information data to indicate PEC support. If a chip requires chip-specific code to enable or disable PEC support, the driver only needs to add support for the hwmon_chip_pec attribute to its write function. Packet Error Checking is only supported for SMBus devices. HWMON_C_PEC must therefore only be set by a driver if the parent device is an I2C device. Attempts to set HWMON_C_PEC on any other device type are not supported and are rejected. The code calls i2c_check_functionality() to check if PEC is supported by the I2C/SMBus controller. This function is only available if CONFIG_I2C is enabled and reachable. For this reason, the added code needs to depend on reachability of CONFIG_I2C. Cc: Radu Sabau <[email protected]> Acked-by: Nuno Sa <[email protected]> Signed-off-by: Guenter Roeck <[email protected]>
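A hedged sketch of what an opting-in driver might look like: advertise HWMON_C_PEC in its chip info and toggle I2C_CLIENT_PEC from its write callback. The my_* names are illustrative, and where the i2c_client pointer comes from is driver specific:

    static const struct hwmon_channel_info * const my_info[] = {
            HWMON_CHANNEL_INFO(chip, HWMON_C_REGISTER_TZ | HWMON_C_PEC),
            HWMON_CHANNEL_INFO(temp, HWMON_T_INPUT),
            NULL
    };

    static int my_write(struct device *dev, enum hwmon_sensor_types type,
                        u32 attr, int channel, long val)
    {
            struct i2c_client *client = dev_get_drvdata(dev);

            if (type == hwmon_chip && attr == hwmon_chip_pec) {
                    if (val)
                            client->flags |= I2C_CLIENT_PEC;
                    else
                            client->flags &= ~I2C_CLIENT_PEC;
                    return 0;
            }

            return -EOPNOTSUPP;
    }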
2024-06-03Revert "rcu-tasks: Fix synchronize_rcu_tasks() VS zap_pid_ns_processes()"Frederic Weisbecker1-2/+0
This reverts commit 28319d6dc5e2ffefa452c2377dd0f71621b5bff0. The race it fixed was subject to conditions that don't exist anymore since: 1612160b9127 ("rcu-tasks: Eliminate deadlocks involving do_exit() and RCU tasks") This latter commit removes the use of SRCU that used to cover the RCU-tasks blind spot on exit between the tasklist's removal and the final preemption disabling. The task is now placed instead into a temporary list inside which voluntary sleeps are accounted as RCU-tasks quiescent states. This would disarm the deadlock initially reported against PID namespace exit. Signed-off-by: Frederic Weisbecker <[email protected]> Reviewed-by: Oleg Nesterov <[email protected]> Signed-off-by: Paul E. McKenney <[email protected]>
2024-06-03 | rcu/nocb: Use kthread parking instead of ad-hoc implementation | Frederic Weisbecker | 1 | -45/+36
Upon NOCB deoffloading, the rcuo kthread must be forced to sleep until the corresponding rdp is ever offloaded again. The deoffloader clears the SEGCBLIST_OFFLOADED flag, wakes up the rcuo kthread which then notices that change and clears in turn its SEGCBLIST_KTHREAD_CB flag before going to sleep, until it ever sees the SEGCBLIST_OFFLOADED flag again, should a re-offloading happen. Upon NOCB offloading, the rcuo kthread must be forced to wake up and handle callbacks until the corresponding rdp is ever deoffloaded again. The offloader sets the SEGCBLIST_OFFLOADED flag, wakes up the rcuo kthread which then notices that change and sets in turn its SEGCBLIST_KTHREAD_CB flag before going to check callbacks, until it ever sees the SEGCBLIST_OFFLOADED flag cleared again, should a de-offloading happen again. This is all a crude ad-hoc and error-prone kthread (un-)parking re-implementation. Consolidate the behaviour with the appropriate API instead. [ paulmck: Apply Qiang Zhang feedback provided in Link: below. ] Link: https://lore.kernel.org/all/[email protected]/ Signed-off-by: Frederic Weisbecker <[email protected]> Signed-off-by: Paul E. McKenney <[email protected]>
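For reference, the generic kthread parking API being consolidated on is used roughly as below; this is a simplified sketch of an offload kthread, not the actual rcuo code:

    #include <linux/kthread.h>

    static int my_offload_kthread(void *arg)
    {
            while (!kthread_should_stop()) {
                    if (kthread_should_park()) {
                            kthread_parkme();       /* sleeps here until unparked */
                            continue;
                    }
                    /* ... invoke offloaded callbacks ... */
            }
            return 0;
    }

    /* The de-offload path calls kthread_park(t); re-offloading calls kthread_unpark(t). */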