Commit graph

42402 commits

Author SHA1 Message Date
Linus Torvalds
815c24a085 linux-kselftest-kunit-6.6-rc1
This kunit update for Linux 6.6-rc1 consists of:
 
 -- Adds support for running Rust documentation tests as KUnit tests
 -- Makes init, str, sync, types doctests compilable/testable
 -- Adds support for an attributes API, which includes speed and module
    attributes and the ability to filter and report attributes.
 -- Adds support for marking tests slow using attributes API.
 -- Adds attributes API documentation
 -- Fixes to wild-memory-access bug in kunit_filter_suites() and
    a possible memory leak in kunit_filter_suites()
 -- Adds support for counting number of test suites in a module, list
    action to kunit test modules, and test filtering on module tests.
 -----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCgAdFiEEPZKym/RZuOCGeA/kCwJExA0NQxwFAmTsxL8ACgkQCwJExA0N
 Qxwt6BAA5FgF7nUeGRZCnot4MQCNGRThxsns2k3CKjM1Iokp8tstTDoNHXzk2veS
 WlRYOHFQqQOVTVRP+laXyjjMMHnlnhFxqbv93UKsen4JIUJDLFLq9x+0i+0bZh97
 N1rE5cKUnqjAOL6MIJuomW9IzEIrbMcqdljm6SOCZp90NLvq1+I4pDGLgx2bxcow
 Y/7dkx+dnlEsoACZ19CL1L2TaR21GpKdpOudpHNCShsbE0aOAlyHAVcmH64FTqCy
 Z1LtrA0odS71q0yxDVCk5X3cIkeVfGBMz6aMZBRzS9k5jU4H1EN1eG1rGdGErIe5
 YduwX3KMikYJB2stT64T1vgldIpT/emxqkBigmxQ37g3Flgopz4bI1snMBry+nKb
 ViD/WQNjsf2iL8MooCgYBzH7yjmX6lXXQTZXROogBj4lP2/0gHiQVZyXZEAjtoO3
 uNzUbfHQGnvtTphBHV4nNGaO+7kU9Y/oX8TYFcSYJQzcH5UVx16uBwevZjT1bii/
 q89bRAQLnJpzkR93SGpnmsRgoDcYJSYsEA1o/f9Eqq8j3guOS2idpJvkheXq8+A2
 MqTSOCJHENKZ3v0UGKlvZUPStaMaqN58z/VjlWug5EaB83LLfPcXJrGjz/EHk967
 hYDHcwPoamTegr1zlg3ckOLiWEhga2tv6aHPkshkcFphpnhRU/c=
 =Nsb8
 -----END PGP SIGNATURE-----

Merge tag 'linux-kselftest-kunit-6.6-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest

Pull kunit updates from Shuah Khan:

 - add support for running Rust documentation tests as KUnit tests

 - make init, str, sync, types doctests compilable/testable

 - add support for an attributes API, which includes speed and module
   attributes and the ability to filter and report attributes

 - add support for marking tests slow using attributes API

 - add attributes API documentation

 - fix a wild-memory-access bug in kunit_filter_suites() and a possible
   memory leak in kunit_filter_suites()

 - add support for counting number of test suites in a module, list
   action to kunit test modules, and test filtering on module tests

* tag 'linux-kselftest-kunit-6.6-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest: (25 commits)
  kunit: fix struct kunit_attr header
  kunit: replace KUNIT_TRIGGER_STATIC_STUB macro with KUNIT_STATIC_STUB_REDIRECT
  kunit: Allow kunit test modules to use test filtering
  kunit: Make 'list' action available to kunit test modules
  kunit: Report the count of test suites in a module
  kunit: fix uninitialized variables bug in attributes filtering
  kunit: fix possible memory leak in kunit_filter_suites()
  kunit: fix wild-memory-access bug in kunit_filter_suites()
  kunit: Add documentation of KUnit test attributes
  kunit: add tests for filtering attributes
  kunit: time: Mark test as slow using test attributes
  kunit: memcpy: Mark tests as slow using test attributes
  kunit: tool: Add command line interface to filter and report attributes
  kunit: Add ability to filter attributes
  kunit: Add module attribute
  kunit: Add speed attribute
  kunit: Add test attributes API structure
  MAINTAINERS: add Rust KUnit files to the KUnit entry
  rust: support running Rust documentation tests as KUnit ones
  rust: types: make doctests compilable/testable
  ...
2023-08-28 18:56:38 -07:00
Linus Torvalds
ccc5e98177 Power management updates for 6.6-rc1
- Rework the menu and teo cpuidle governors to avoid calling
    tick_nohz_get_sleep_length(), which is likely to become quite
    expensive going forward, too often, and to improve the decisions on
    whether or not to stop the scheduler tick in the teo governor
    (Rafael Wysocki).
 
  - Improve the performance of cpufreq_stats_create_table() in some
    cases (Liao Chang).
 
  - Fix two issues in the amd-pstate-ut cpufreq driver (Swapnil Sapkal).
 
  - Use clamp() helper macro to improve the code readability in
    cpufreq_verify_within_limits() (Liao Chang).
 
  - Set stale CPU frequency to minimum in intel_pstate (Doug Smythies).
 
  - Migrate cpufreq drivers for various platforms to use void remove
    callback (Yangtao Li).
 
  - Add online/offline/exit hooks for Tegra driver (Sumit Gupta).
 
  - Explicitly include correct DT includes in cpufreq (Rob Herring).
 
  - Frequency domain updates for qcom-hw driver (Neil Armstrong).
 
  - Modify the AMD pstate driver to return the highest_perf value (Meng Li).
 
  - Generic cleanups for cppc, mediatek and powernow drivers (Liao Chang,
    Konrad Dybcio).
 
  - Add more platforms to cpufreq-arm driver's blocklist (AngeloGioacchino
    Del Regno and Konrad Dybcio).
 
  - brcmstb-avs-cpufreq: Fix -Warray-bounds bug (Gustavo A. R. Silva).
 
  - Add device PM helpers to allow a device to remain powered-on during
    system-wide transitions (Ulf Hansson).
 
  - Rework hibernation memory snapshotting to avoid storing pages filled
    with zeros in hibernation image files (Brian Geffon).
 
  - Add check to make sure that CPU latency QoS constraints do not use
    negative values (Clive Lin).
 
  - Optimize rp->domains memory allocation in the Intel RAPL power
    capping driver (xiongxin).
 
  - Remove recursion while parsing zones in the arm_scmi power capping
    driver (Cristian Marussi).
 
  - Fix memory leak in devfreq_dev_release() (Boris Brezillon).
 
  - Rewrite devfreq_monitor_start() kerneldoc comment (Manivannan
    Sadhasivam).
 
  - Explicitly include correct DT includes in devfreq (Rob Herring).
 
  - Remove unused pm_runtime_update_max_time_suspended() extern
    declaration (YueHaibing).
 
  - Add turbo-boost support to cpupower (Wyes Karny).
 
  - Add support for amd_pstate mode change to cpupower (Wyes Karny).
 
  - Fix 'cpupower idle_set' command to accept only numeric values of
    arguments (Likhitha Korrapati).
 
  - Clean up OPP code and add new frequency related APIs to it (Viresh
    Kumar, Manivannan Sadhasivam).
 
  - Convert ti cpufreq/opp bindings to json schema (Nishanth Menon).
 -----BEGIN PGP SIGNATURE-----
 
 iQJGBAABCAAwFiEE4fcc61cGeeHD/fCwgsRv/nhiVHEFAmTslI4SHHJqd0Byand5
 c29ja2kubmV0AAoJEILEb/54YlRxLMYP/3v0DxA3HZSZ/Xg63P9ylnln084cDt+/
 qpJZ0CJUd6+MkoeuCYq/5udNwPSREsfx+pIEJy+h/iCiQlQz3NzriR7/dgPV0Ud0
 t7k95lyZo+u51MNxk4SEqRMVTyYaNgDPvGbLyWFpLnne3CsxYzfH5xr77yHf342W
 jHii1vJLXiXPnQWDlahf8tUpdQ0MQFmEwx0WkJp81NaAFyXDi0fPrB4YZaZrr6AQ
 3TNaxTxZSirVSn19m5RPPAQhEfK8Dk4jF8wVPWsuL9F6v+9wERD9zcaxUPf3CD36
 aj+SqKLCkOfkJHk45PCIYbS2wQ04fT/yWE9Rzm4iSr+fWA/q7vA0jXsaAgcv1Bm7
 k6QyAy2ffLZTUFObX5bevIPvxZTzunLh0iglHx0WZKS/nn/9Jwpt6UMrpOsjiw/J
 GLKEww+ZiKXj980GfvV2QUZG/XmsrvML/1L+qiDxNB2IPTxxuOxrWQ+cM7oxUTPM
 pdIPIdwkm5ICVRVcAfNw/fr30s2yp1K304VWgzbKdK9b1aVhUSkxZGI8KHFODOHO
 4Crii2rk0r972kxuJmenKwEfmwr/rbAAstFVSM736jH9RUANaWsIeNvkurXMOd2f
 mil9DViTAu0iY4cy5tgLiLHDH4tOQOOCntRVFJ1tSytMyCFlMvVM0dwrc0yh254Q
 zcrNj8ERJSsC
 =6BIh
 -----END PGP SIGNATURE-----

Merge tag 'pm-6.6-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm

Pull power management updates from Rafael Wysocki:
 "These rework cpuidle governors to call tick_nohz_get_sleep_length()
  less often and fix one of them, rework hibernation to avoid storing
  pages filled with zeros in hibernation images, switch over some
  cpufreq drivers to use void remove callbacks, fix and clean up
  multiple cpufreq drivers, fix the devfreq core, update the cpupower
  utility and make other assorted improvements.

  Specifics:

   - Rework the menu and teo cpuidle governors to avoid calling
     tick_nohz_get_sleep_length(), which is likely to become quite
     expensive going forward, too often, and to improve the decisions on
     whether or not to stop the scheduler tick in the teo governor
     (Rafael Wysocki)

   - Improve the performance of cpufreq_stats_create_table() in some
     cases (Liao Chang)

   - Fix two issues in the amd-pstate-ut cpufreq driver (Swapnil Sapkal)

   - Use clamp() helper macro to improve the code readability in
     cpufreq_verify_within_limits() (Liao Chang)

   - Set stale CPU frequency to minimum in intel_pstate (Doug Smythies)

   - Migrate cpufreq drivers for various platforms to use void remove
     callback (Yangtao Li)

   - Add online/offline/exit hooks for Tegra driver (Sumit Gupta)

   - Explicitly include correct DT includes in cpufreq (Rob Herring)

   - Frequency domain updates for qcom-hw driver (Neil Armstrong)

   - Modify the AMD pstate driver to return the highest_perf value (Meng Li)

   - Generic cleanups for cppc, mediatek and powernow drivers (Liao
     Chang, Konrad Dybcio)

   - Add more platforms to cpufreq-arm driver's blocklist
     (AngeloGioacchino Del Regno and Konrad Dybcio)

   - brcmstb-avs-cpufreq: Fix -Warray-bounds bug (Gustavo A. R. Silva)

   - Add device PM helpers to allow a device to remain powered-on during
     system-wide transitions (Ulf Hansson)

   - Rework hibernation memory snapshotting to avoid storing pages
     filled with zeros in hibernation image files (Brian Geffon)

   - Add check to make sure that CPU latency QoS constraints do not use
     negative values (Clive Lin)

   - Optimize rp->domains memory allocation in the Intel RAPL power
     capping driver (xiongxin)

   - Remove recursion while parsing zones in the arm_scmi power capping
     driver (Cristian Marussi)

   - Fix memory leak in devfreq_dev_release() (Boris Brezillon)

   - Rewrite devfreq_monitor_start() kerneldoc comment (Manivannan
     Sadhasivam)

   - Explicitly include correct DT includes in devfreq (Rob Herring)

   - Remove unused pm_runtime_update_max_time_suspended() extern
     declaration (YueHaibing)

   - Add turbo-boost support to cpupower (Wyes Karny)

   - Add support for amd_pstate mode change to cpupower (Wyes Karny)

   - Fix 'cpupower idle_set' command to accept only numeric values of
     arguments (Likhitha Korrapati)

   - Clean up OPP code and add new frequency related APIs to it (Viresh
     Kumar, Manivannan Sadhasivam)

   - Convert ti cpufreq/opp bindings to json schema (Nishanth Menon)"

* tag 'pm-6.6-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (74 commits)
  cpufreq: tegra194: remove opp table in exit hook
  cpufreq: powernow-k8: Use related_cpus instead of cpus in driver.exit()
  cpufreq: tegra194: add online/offline hooks
  cpuidle: teo: Avoid unnecessary variable assignments
  cpufreq: qcom-cpufreq-hw: add support for 4 freq domains
  dt-bindings: cpufreq: qcom-hw: add a 4th frequency domain
  cpufreq: amd-pstate-ut: Fix kernel panic when loading the driver
  cpufreq: amd-pstate-ut: Remove module parameter access
  cpufreq: Use clamp() helper macro to improve the code readability
  PM: sleep: Add helpers to allow a device to remain powered-on
  PM: QoS: Add check to make sure CPU latency is non-negative
  PM: runtime: Remove unused extern declaration of pm_runtime_update_max_time_suspended()
  cpufreq: intel_pstate: set stale CPU frequency to minimum
  cpufreq: stats: Improve the performance of cpufreq_stats_create_table()
  dt-bindings: cpufreq: Convert ti-cpufreq to json schema
  dt-bindings: opp: Convert ti-omap5-opp-supply to json schema
  OPP: Fix argument name in doc comment
  cpuidle: menu: Skip tick_nohz_get_sleep_length() call in some cases
  cpufreq: cppc: Set fie_disabled to FIE_DISABLED if fails to create kworker_fie
  cpufreq: cppc: cppc_cpufreq_get_rate() returns zero in all error cases.
  ...
2023-08-28 18:04:39 -07:00
Linus Torvalds
97efd28334 Misc x86 cleanups.
The following commit deserves special mention:
 
    22dc02f81c Revert "sched/fair: Move unused stub functions to header"
 
 This is in x86/cleanups, because the revert is a re-application of a
 number of cleanups that got removed inadvertently.
 
 Signed-off-by: Ingo Molnar <mingo@kernel.org>
 -----BEGIN PGP SIGNATURE-----
 
 iQJFBAABCgAvFiEEBpT5eoXrXCwVQwEKEnMQ0APhK1gFAmTtDkoRHG1pbmdvQGtl
 cm5lbC5vcmcACgkQEnMQ0APhK1jCMw//UvQGM8yxsTa57r0/ZpJHS2++P5pJxOsz
 45kBb3aBiDV6idArce4EHpthp3MvF3Pycibp9w0qg//NOtIHTKeagXv52abxsu1W
 hmS6gXJZDXZvjO1BFaUlmv97iYtzGfKnQppj32C4tMr9SaP49h3KvOHH1Z8CR3mP
 1nZaJJwYIi2qBh7msnmLGG+F0drb85O/dfHdoLX6iVJw9UP4n5nu9u8u1E0iC7J7
 2GC6AwP60A0EBRTK9EHQQEYwy9uvdS/TG5f2Qk1VP87KA9TTocs8MyapMG4DQu79
 hZKVEGuVQAlV3rYe9cJBNpDx1mTu3rmuMH0G71KEe3T6UcG5QRUiAPm8UfA9prPD
 uWjY4zm5o0W3tUio4V1MqqiLFIaBU63WmTY9RyM0QH8Ms8r8GugWKmnrTIuHfEC3
 9D+Uhyb5d8ID6qFGLTOvPm0g+v64lnH71qq83PcVJgsmZvUb2XvFA3d/A0h9JzLT
 2In/yfU10UsLUFTiNRyAgcLccjaGhliDB2oke9Kp0OyOTSQRcWmiq8kByVxCPImP
 auOWWcNXjcuOgjlnziEkMTDuRY12MgUB2If4zhELvdEFibIaaNW5sNCbY2msWaN1
 CUD7fcj0L3HZvzujUm72l5hxL2brJMuPwVNJfuOe4T8wzy569d6VJULrd1URBM1B
 vfaPs1Dz46Q=
 =kiAA
 -----END PGP SIGNATURE-----

Merge tag 'x86-cleanups-2023-08-28' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull misc x86 cleanups from Ingo Molnar:
 "The following commit deserves special mention:

   22dc02f81c Revert "sched/fair: Move unused stub functions to header"

  This is in x86/cleanups, because the revert is a re-application of a
   number of cleanups that got removed inadvertently"

[ This also effectively undoes the amd_check_microcode() microcode
  declaration change I had done in my microcode loader merge in commit
  42a7f6e3ff ("Merge tag 'x86_microcode_for_v6.6_rc1' [...]").

  I picked the declaration change by Arnd from this branch instead,
  which put it in <asm/processor.h> instead of <asm/microcode.h> like I
  had done in my merge resolution   - Linus ]

* tag 'x86-cleanups-2023-08-28' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/platform/uv: Refactor code using deprecated strncpy() interface to use strscpy()
  x86/hpet: Refactor code using deprecated strncpy() interface to use strscpy()
  x86/platform/uv: Refactor code using deprecated strcpy()/strncpy() interfaces to use strscpy()
  x86/qspinlock-paravirt: Fix missing-prototype warning
  x86/paravirt: Silence unused native_pv_lock_init() function warning
  x86/alternative: Add a __alt_reloc_selftest() prototype
  x86/purgatory: Include header for warn() declaration
  x86/asm: Avoid unneeded __div64_32 function definition
  Revert "sched/fair: Move unused stub functions to header"
  x86/apic: Hide unused safe_smp_processor_id() on 32-bit UP
  x86/cpu: Fix amd_check_microcode() declaration
2023-08-28 17:05:58 -07:00
Linus Torvalds
3ca9a836ff Scheduler changes for v6.6:
- The biggest change is introduction of a new iteration of the
   SCHED_FAIR interactivity code: the EEVDF ("Earliest Eligible Virtual
   Deadline First") scheduler.
 
   EEVDF too is a virtual-time scheduler, with two parameters (weight
   and relative deadline), compared to CFS that had weight only.
   It completely reworks the base scheduler: placement, preemption,
   picking -- everything.
 
   LWN.net, as usual, has a terrific writeup about EEVDF:
 
      https://lwn.net/Articles/925371/
 
   Preemption (both tick and wakeup) is driven by testing against
   a fresh pick. Because the tree is now effectively an interval
   tree, and the selection is no longer the 'leftmost' task,
   over-scheduling is less of a problem. A lot of the CFS
   heuristics are removed or replaced by more natural latency-space
   parameters & constructs.
 
   In terms of expected performance regressions: we can and will fix
   everything where a 'good' workload misbehaves with the new scheduler,
   but EEVDF inevitably changes workload scheduling in a binary fashion,
   hopefully for the better in the overwhelming majority of cases,
   but in some cases it won't, especially in adversarial loads that
   got lucky with the previous code, such as some variants of hackbench.
   We are trying hard to err on the side of fixing all performance
   regressions, but we expect some inevitable post-release iterations
   of that process.
 
 - Improve load-balancing on hybrid x86 systems: enable cluster
   scheduling (again).
 
 - Improve & fix bandwidth-scheduling on nohz systems.
 
 - Improve bandwidth-throttling.
 
 - Use lock guards to simplify and de-goto-ify control flow.
 
 - Misc improvements, cleanups and fixes.
 
 Signed-off-by: Ingo Molnar <mingo@kernel.org>
 -----BEGIN PGP SIGNATURE-----
 
 iQJFBAABCgAvFiEEBpT5eoXrXCwVQwEKEnMQ0APhK1gFAmTtDOgRHG1pbmdvQGtl
 cm5lbC5vcmcACgkQEnMQ0APhK1iS4g//b9yewVW9OPxetKoN8zIJA0TjFYuuOVHK
 BlCJi5dbzXeCTrtENI65BRA7kPbTQ3AjwLRQ2BallAZ4dJceK0RhlZJvcrMNsm4e
 Adcpoch/FbqPKCrtAJQY04Ln1B244n/KyVifYett9220dMgTFQGJJYxrTc2G2+Kp
 F44vdUHzRczIE+KeOgBild1CwfKv5Zn5xgaXgtuoPLZtWBE0C1fSSzbK/PTINcUx
 bS4NVxK0CpOqSiNjnugV8KsYb71/0U6IgShBVjfHsrlBYigOH2NbVTH5xyjF8f83
 WxiGstlhxj+N6Kv4L6FOJIAr2BIggH82j3FaPACmv4c8pzEoBBbvlAJkfinLEgbn
 Povg3OF2t6uZ8NoHjeu3WxOjBsphbpkFz7H5nno1ibXSIR/JyUH5MdBPSx93QITB
 QoUKQpr/L8zWauWDOEzSaJjEsZbl8rkcIVq5Bk0bR3qn2xkZsIeVte+vCEu3+tBc
 b4JOZjq7AuPDqPnsBLvuyiFZ7zwsAfm+pOD5UF3/zbLjPn1N/7wTNQZ29zjc04jl
 SifpCZGgF1KlG8m8wNTlSfVvq0ksppCzJt+C6VFuejZ191IGpirQHn4Vp0sluMhC
 WRzXhb7v37Bq5JY10GMfeKb/jAiRs68kozhzqVPsBSAPS6I6jJssONgedq+LbQdC
 tFsmE9n09do=
 =XtCD
 -----END PGP SIGNATURE-----

Merge tag 'sched-core-2023-08-28' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull scheduler updates from Ingo Molnar:

 - The biggest change is introduction of a new iteration of the
   SCHED_FAIR interactivity code: the EEVDF ("Earliest Eligible Virtual
   Deadline First") scheduler

   EEVDF too is a virtual-time scheduler, with two parameters (weight
   and relative deadline), compared to CFS that had weight only. It
   completely reworks the base scheduler: placement, preemption, picking
   -- everything

   LWN.net, as usual, has a terrific writeup about EEVDF:

      https://lwn.net/Articles/925371/

   Preemption (both tick and wakeup) is driven by testing against a
   fresh pick. Because the tree is now effectively an interval tree, and
   the selection is no longer the 'leftmost' task, over-scheduling is
   less of a problem. A lot of the CFS heuristics are removed or
   replaced by more natural latency-space parameters & constructs (a rough
   sketch of the pick rule follows this list)

   In terms of expected performance regressions: we can and will fix
   everything where a 'good' workload misbehaves with the new scheduler,
   but EEVDF inevitably changes workload scheduling in a binary fashion,
   hopefully for the better in the overwhelming majority of cases, but
   in some cases it won't, especially in adversarial loads that got
   lucky with the previous code, such as some variants of hackbench. We
   are trying hard to err on the side of fixing all performance
   regressions, but we expect some inevitable post-release iterations of
   that process

 - Improve load-balancing on hybrid x86 systems: enable cluster
   scheduling (again)

 - Improve & fix bandwidth-scheduling on nohz systems

 - Improve bandwidth-throttling

 - Use lock guards to simplify and de-goto-ify control flow

 - Misc improvements, cleanups and fixes
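
For orientation, here is a rough sketch of the EEVDF pick rule described
above. This is illustrative C only, not the kernel's code, and all names
in it are made up for the example:

  /* Illustrative sketch of the EEVDF pick rule; not kernel code. */
  struct demo_entity {
          double veligible;  /* virtual time from which the entity may run */
          double request;    /* requested service, i.e. its time slice */
          double weight;     /* load weight derived from the nice level */
  };

  /* Virtual deadline: eligible time plus the request scaled by weight. */
  static double demo_vdeadline(const struct demo_entity *e)
  {
          return e->veligible + e->request / e->weight;
  }

  /*
   * Pick rule: among entities whose veligible is not after the current
   * virtual time, run the one with the earliest demo_vdeadline().  An
   * augmented rbtree (effectively an interval tree) makes this lookup
   * cheap, which is why selection is no longer simply the leftmost task.
   */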

* tag 'sched-core-2023-08-28' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (43 commits)
  sched/eevdf/doc: Modify the documented knob to base_slice_ns as well
  sched/eevdf: Curb wakeup-preemption
  sched: Simplify sched_core_cpu_{starting,deactivate}()
  sched: Simplify try_steal_cookie()
  sched: Simplify sched_tick_remote()
  sched: Simplify sched_exec()
  sched: Simplify ttwu()
  sched: Simplify wake_up_if_idle()
  sched: Simplify: migrate_swap_stop()
  sched: Simplify sysctl_sched_uclamp_handler()
  sched: Simplify get_nohz_timer_target()
  sched/rt: sysctl_sched_rr_timeslice show default timeslice after reset
  sched/rt: Fix sysctl_sched_rr_timeslice initial value
  sched/fair: Block nohz tick_stop when cfs bandwidth in use
  sched, cgroup: Restore meaning to hierarchical_quota
  MAINTAINERS: Add Peter explicitly to the psi section
  sched/psi: Select KERNFS as needed
  sched/topology: Align group flags when removing degenerate domain
  sched/fair: remove util_est boosting
  sched/fair: Propagate enqueue flags into place_entity()
  ...
2023-08-28 16:43:39 -07:00
Linus Torvalds
1a7c611546 Perf events changes for v6.6:
- AMD IBS improvements
 - Intel PMU driver updates
 - Extend core perf facilities & the ARM PMU driver to better handle ARM big.LITTLE events
 - Micro-optimize software events and the ring-buffer code
 - Misc cleanups & fixes
 
 Signed-off-by: Ingo Molnar <mingo@kernel.org>
 -----BEGIN PGP SIGNATURE-----
 
 iQJFBAABCgAvFiEEBpT5eoXrXCwVQwEKEnMQ0APhK1gFAmTtBscRHG1pbmdvQGtl
 cm5lbC5vcmcACgkQEnMQ0APhK1hHoQ/+IBQ8Xi/rcdd40n8OqEB/VBWVuSjNT3uN
 3pHHcTl2Pio9CxBeat42NekNijlRILCKJrZ3Lt3JWBmWyWv5l3KFabelj+lDF2xa
 TVCjTnQNe1+HvrODYnF4ECIs5vaoMVjcJ9jg8+VDgAcOQr1nZs4m5TVAd6TLqPpV
 urBEQVULkkzk7ZRhfrugKhw+wrpWFefgGCx0RV8ijZB7TLMHc2wE+Q/sTxKdKceL
 wNaJaDgV33pZh0aImwR9pKUE532hF1FiBdLuehkh61PZa1L82jzAX1xjw2s1hSa4
 eIWemPHJIYfivRlENbJsDWc4N8gk6ijVHwrxGcr4Axu+NN+zPtQ3ddhaGMAyKdTo
 qUKXH3MZSMIl++jI5Fkc6xM+XLvY1rML62epSzMwu/cc7Z5MeyWdQcri0N9YFuO7
 wUUNnFpU00lwQBLbyyUQ3Zi8E0QV7NuPW4axTkmntiIjMpLagaEvVSf6nf8qLpbE
 WTT16s707t19hUZNazNZ7ONmhly4ALbHFQEH65J2KoYn99fYqy9z68Hwk+xnmykw
 bc3qvfhpw0MImQQ+DqHiBwb4n4UuvY2WlkkZI3FfNeSG63DaM2mZikfpElpXYjn6
 9iOIXvx21Wiq/n0cbLhidI2q/ZzFCzYLCk6ikZ320wb+rhvd7EoSlZil6QSzn3pH
 Qdk+NEZgWQY=
 =ZT6+
 -----END PGP SIGNATURE-----

Merge tag 'perf-core-2023-08-28' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull perf event updates from Ingo Molnar:

 - AMD IBS improvements

 - Intel PMU driver updates

 - Extend core perf facilities & the ARM PMU driver to better handle ARM big.LITTLE events

 - Micro-optimize software events and the ring-buffer code

 - Misc cleanups & fixes

* tag 'perf-core-2023-08-28' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  perf/x86/uncore: Remove unnecessary ?: operator around pcibios_err_to_errno() call
  perf/x86/intel: Add Crestmont PMU
  x86/cpu: Update Hybrids
  x86/cpu: Fix Crestmont uarch
  x86/cpu: Fix Gracemont uarch
  perf: Remove unused extern declaration arch_perf_get_page_size()
  perf: Remove unused PERF_PMU_CAP_HETEROGENEOUS_CPUS capability
  arm_pmu: Remove unused PERF_PMU_CAP_HETEROGENEOUS_CPUS capability
  perf/x86: Remove unused PERF_PMU_CAP_HETEROGENEOUS_CPUS capability
  arm_pmu: Add PERF_PMU_CAP_EXTENDED_HW_TYPE capability
  perf/x86/ibs: Set mem_lvl_num, mem_remote and mem_hops for data_src
  perf/mem: Add PERF_MEM_LVLNUM_NA to PERF_MEM_NA
  perf/mem: Introduce PERF_MEM_LVLNUM_UNC
  perf/ring_buffer: Use local_try_cmpxchg in __perf_output_begin
  locking/arch: Avoid variable shadowing in local_try_cmpxchg()
  perf/core: Use local64_try_cmpxchg in perf_swevent_set_period
  perf/x86: Use local64_try_cmpxchg
  perf/amd: Prevent grouping of IBS events
2023-08-28 16:35:01 -07:00
Linus Torvalds
6f49693a6c Updates for the CPU hotplug core:
- Support partial SMT enablement.
 
     So far the sysfs SMT control only allows toggling between SMT on and
     off. That's sufficient for x86, which usually has at most two threads,
     except for the Xeon PHI platform which has four threads per core.
 
     Though PowerPC has up to 16 threads per core and so far it's only
     possible to control the number of enabled threads per core via a
     command line option. There is some way to control this at runtime, but
     that lacks enforcement and the usability is awkward.
 
     This update expands the sysfs interface and the core infrastructure to
     accept numerical values so PowerPC can build SMT runtime control for
     partial SMT enablement on top.
 
     The core support has also been provided to the PowerPC maintainers who
     added the PowerPC related changes on top.
 
   - Minor cleanups and documentation updates.
 -----BEGIN PGP SIGNATURE-----
 
 iQJHBAABCgAxFiEEQp8+kY+LLUocC4bMphj1TA10mKEFAmTsj4wTHHRnbHhAbGlu
 dXRyb25peC5kZQAKCRCmGPVMDXSYoaszEADKMd/6m7/Bq7RU2OJ+IXw8yfMEF9nS
 6HPrFu71a4cDufb/G8UckQOvkwdTFWD7bZ0snJe2sBDFTOtzK/inYkgPZTxlm7si
 JcJmFnHKUM7OTwNZb7Tv1bd9Csz4JhggAYUw6P8CqsCmhQ+p6ECemx3bHDlYiywm
 5eW2yzI9EM4dbsHPwUOvjI0WazGvAf0esSDAS8JTnhBXbd8FAckbMV+xuRPcCUK+
 dBqbqr+3Nf4/wcXTro/gZIc7sEATAHH6m7zHlLVBSyVPnBxre8NLz6KciW4SezyJ
 GWFnDV03mmG2KxQ2ugwI8n6M3zDJQtfEJFwW/x4t2M5RK+ka2a6G6GtCLHYOXLWR
 akIuBXtTAC57BgpqzBihGej9eiC1BJ1QMa9ZK+6WDXSZtMTFOLlbwdY2/qyfxpfw
 LfepWb+UMtFy5YyW84S1O5/AqpOtKD2kPTqfDjvDxWIAigispU+qwAKxcMzMjtwz
 aAlf2Z/iX0R9DkRzGD2gaFG5AUsRich8RtVO7u+WDwYSsi8ywrvryiPlZrDDBkSQ
 sRzdoHeXNGVY/FgkbZmEyBj4udrypymkR6ivqn6C2OrysgznSiv5NC983uS6TfJX
 cVqdUv6CNYYNiNu0x0Qf0MluYT2s5c1Fa4bjCBJL+KwORwjM3+TCN9RA1KtFrW2T
 G3Ta1KqI6wRonA==
 =JQRJ
 -----END PGP SIGNATURE-----

Merge tag 'smp-core-2023-08-28' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull CPU hotplug updates from Thomas Gleixner:
 "Updates for the CPU hotplug core:

   - Support partial SMT enablement.

      So far the sysfs SMT control only allows toggling between SMT on
      and off. That's sufficient for x86, which usually has at most two
      threads, except for the Xeon PHI platform which has four threads per
      core

     Though PowerPC has up to 16 threads per core and so far it's only
     possible to control the number of enabled threads per core via a
     command line option. There is some way to control this at runtime,
     but that lacks enforcement and the usability is awkward

     This update expands the sysfs interface and the core infrastructure
     to accept numerical values so PowerPC can build SMT runtime control
      for partial SMT enablement on top (see the example after this list)

     The core support has also been provided to the PowerPC maintainers
     who added the PowerPC related changes on top

   - Minor cleanups and documentation updates"
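
As an illustration of the expanded interface (a hedged example; which
numeric values are accepted depends on the architecture, and PowerPC
builds its partial-SMT control on top of this), the SMT control file that
previously took only strings such as "on" and "off" can also be fed a
thread count:

  /* Hedged example: request 4 threads per core via the SMT control file. */
  #include <stdio.h>

  int main(void)
  {
          FILE *f = fopen("/sys/devices/system/cpu/smt/control", "w");

          if (!f)
                  return 1;
          fputs("4", f);  /* "on" and "off" continue to work as before */
          return fclose(f) ? 1 : 0;
  }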

* tag 'smp-core-2023-08-28' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  Documentation: core-api/cpuhotplug: Fix state names
  cpu/hotplug: Remove unused function declaration cpu_set_state_online()
  cpu/SMT: Fix cpu_smt_possible() comment
  cpu/SMT: Allow enabling partial SMT states via sysfs
  cpu/SMT: Create topology_smt_thread_allowed()
  cpu/SMT: Remove topology_smt_supported()
  cpu/SMT: Store the current/max number of threads
  cpu/SMT: Move smt/control simple exit cases earlier
  cpu/SMT: Move SMT prototypes into cpu_smt.h
  cpu/hotplug: Remove dependency against cpu_primary_thread_mask
2023-08-28 15:04:43 -07:00
Linus Torvalds
dd3f0fe501 Boring updates for the interrupt subsystem:
Core:
 
     - Prevent a deadlock of nested interrupt threads vs.
       synchronize_hardirq()
 
     - Removal of a stale extern declaration
 
   Drivers:
 
     - The first new driver since v6.2 for Amlogic-C3 SoCs
 
     - The usual small fixes, cleanups and improvements all over
       the place
 -----BEGIN PGP SIGNATURE-----
 
 iQJHBAABCgAxFiEEQp8+kY+LLUocC4bMphj1TA10mKEFAmTsjR0THHRnbHhAbGlu
 dXRyb25peC5kZQAKCRCmGPVMDXSYofmLEAC5anouyAUbGjl/cL//+2GkvWB2YgO2
 +D7q8tx3Tt5US/vpiYaDvFs+at+2lAyB6M/KUkvYSCne9bm80+YqaL+73iM6YQKH
 yNDrAnLR1FA4+fHIvvhmk23U1uUjWgSTL7iKufgNWf8I0aYsWLTIX3N6m0606ZLE
 eUNIf7w+aZRr/axHdadRQpib6l1fvfA3C72urPRBnZDA56ZDAgE9tS0kfk9D+3sW
 BgXRp4knvHBf6I4RdA10hHDTa1RuX9xkDeAC1a/ljWpbCEgEDPJ+5JI+TD+fU/d5
 TCVGa7GwqJc2srRFwy76/t0jQrG7DnwW56SsMomjS+vjIu4exNFwXJ6LqZSJacwa
 Z3HB0Py3awQWPfHdFqdF9LHyum+a58RHX96RenlL8Q/42qe5K6RmAIfcAaiy2OpL
 xAGy9+nplMWh+qde9q1o30WPr08GhhDEXrdHZdAAODjBeoUDGmFooH5NHAFjw2+Q
 ba15/f7Nl8KIl854OUJv4cftNEv5klpueLR/YUviivoO55vydRae/k/CSPhvt7TN
 VIQ+vgiaiOCEwAAx2kP7Au0ADeEMCYiEqH9KWBp33dvjNZMt2DbAGLDWagcy8N9y
 R8ms4c5e7Z2MvN9Z6YDihQ1XvkQsdX/dWwJq3weH3c/tP1MBFFHZYdeQhIVKTIKR
 4zFKi4jrlmn0vQ==
 =jiUK
 -----END PGP SIGNATURE-----

Merge tag 'irq-core-2023-08-28' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull irq updates from Thomas Gleixner:
 "Boring updates for the interrupt subsystem:

  Core:

   - Prevent a deadlock of nested interrupt threads vs.
      synchronize_hardirq()

   - Removal of a stale extern declaration

  Drivers:

   - The first new driver since v6.2 for Amlogic-C3 SoCs

   - The usual small fixes, cleanups and improvements all over the
     place"

* tag 'irq-core-2023-08-28' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  irqchip: Add support for Amlogic-C3 SoCs
  dt-bindings: interrupt-controller: Add support for Amlogic-C3 SoCs
  irqchip/irq-mvebu-sei: Use devm_platform_get_and_ioremap_resource()
  irqchip/ls-scfg-msi: Use devm_platform_get_and_ioremap_resource()
  irqchip: Explicitly include correct DT includes
  irqchip/orion: Use of_address_count() helper
  irqchip/irq-pruss-intc: Do not check for 0 return after calling platform_get_irq()
  irqchip/imx-mu-msi: Do not check for 0 return after calling platform_get_irq()
  irqchip/i8259: Mark i8259_of_init() static
  irqchip/mips-gic: Mark gic_irq_domain_free() static
  irqchip/xtensa-pic: Include header for xtensa_pic_init_legacy()
  irqchip/loongson-eiointc: Fix return value checking of eiointc_index
  genirq: Remove unused extern declaration
  genirq: Prevent nested thread vs synchronize_hardirq() deadlock
2023-08-28 14:33:11 -07:00
Linus Torvalds
6bfce7759c A single update to the core entry code, which removes the empty user
address limit check which is a leftover of the removed TIF_FSCHECK.
 -----BEGIN PGP SIGNATURE-----
 
 iQJHBAABCgAxFiEEQp8+kY+LLUocC4bMphj1TA10mKEFAmTsi5ETHHRnbHhAbGlu
 dXRyb25peC5kZQAKCRCmGPVMDXSYoZRGEACDGWXaadhCYNpHftScC6X3z2d3DXZX
 E5PAiVtq/sQX8FV2KJgMxywxQ4QEJjtX9lgoxGMKc7zai2FgKbWzPnlz6qz3przh
 P5Ji+sBl0DbpVnLMaAAVzikvWpjFfSbf/b8lHA3EVs/9HkPfZY7rwByONsVkxX2e
 6AS8tOv0CJP3lMaZ02tDs48PWOeF1CEpub9Eg5JfYG+CTU0gy+wFMnIUCkN/eP2E
 CYNo6wTFjBQ43S7GWrqA6eYgbHLBBvOuHLHM3RlLOm2Rexct/umf84At9K9wUJvJ
 mGSrZKsgD3UZJi7HpF5RXsY88+4uV38vhkN6LGRdHrarLz3WMvnc191WP7iwCBmo
 HGIgWWxm9+bGAxiw9wTNgmERvwKBeMNNQEDu/58An637VDucrYOlRi2Mh0CE5QiG
 i1R+KiKBUZw3Blogx+O65m0PyXpJQqHfr2WkfT+uKJCs7wRBdupmWv+ZAcSj6tys
 ILqCHRmI4n46T2qp67/M6FbYTrk0DNWsjgjtUgLBquEsj6z00favxAug5NrJV6+c
 5/kf7C97h1TmtqqNjtL4uwfWGm2bqc6AZyMpsk0KqnirywmnkgIKOWHu//TwQVJs
 jpwRvsAv3UNnUrO6qtqNzbNDQQ0MOLAAuDgardGWW7gEEhvaa+HdbwyjZSwDZvZy
 b8PLikU7gRB9rQ==
 =W9kd
 -----END PGP SIGNATURE-----

Merge tag 'core-entry-2023-08-28' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull core entry code update from Thomas Gleixner:
 "A single update to the core entry code, which removes the empty user
  address limit check which is a leftover of the removed TIF_FSCHECK"

* tag 'core-entry-2023-08-28' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  entry: Remove empty addr_limit_user_check()
2023-08-28 14:04:55 -07:00
Linus Torvalds
b98af53cb0 Clocksource watchdog commits for v6.6
This pull request contains the following:
 
 o	Handle negative skews in "skew is too large" messages.
 
 o	Extend watchdog check exemption to 4-Socket platforms
 -----BEGIN PGP SIGNATURE-----
 
 iQJHBAABCgAxFiEEbK7UrM+RBIrCoViJnr8S83LZ+4wFAmTcF94THHBhdWxtY2tA
 a2VybmVsLm9yZwAKCRCevxLzctn7jPeHEACXvYFMwno+1DPlt2PtFJglnkuYyOUQ
 SwRRiOLWXdzsikzjWIvdrSOqslbdIjhZmSSyPDjyo4GPYFSSOyKNsqczwsX8R49u
 O/2Yzkrx2OmIWcKiAkII8Iw4fFedoITtzC59wbZHoo/+upEpbZYP3u7AjJTird7y
 du3WcWcGc1eEt5+7MNwbZfwpzo2t5Rb3Wqfgs6vnKTG7Abc/23uChsCBzPavX7X/
 djNd1bA5YmEldKKxSoF5XSW/F1TWIA4fXMDkBwgRKHBx1Y7xU+nJMtam4ogAzN6a
 4zgthgy5wQ7/VnTBv2rmQQb6ae3Blm09Yg1ac8zt95RLgmGkyX73lZGnRBTdYqyh
 kb47Tfw3a+e+VPTD+W3rY1NOSOwbstdDHVckK+0bFvqNyXOoaEJL+EEOhm9rPXxv
 le+T6Ct1VPAF9lHPUz7lVCVXN91vP4Gqrxjmeq5rqWNOvRW3jBcCLnEpFzWJtu50
 JjQBi3HA0HW+Bxqov22W0llFAa0gVm8xyxXfNSSL7VoCinnS6/qyQvD1GoG0brk3
 l5orOk38m2/acvTyvw2tnvAAuqmOr+oGlcQhJOOVl7jDz+sae6RMTwWCMaVxKUDo
 YW7v5YFQePmzC9J4M5XFMdCCTb3cUCjVLMdsPgONM3kn9ALEDhhTBSw76N//bGLg
 4/OEsT7Aq6LHkA==
 =6/HI
 -----END PGP SIGNATURE-----

Merge tag 'clocksource.2023.08.15a' of git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu

Pull clocksource watchdog updates from Paul McKenney:

 - Handle negative skews in "skew is too large" messages

 - Extend watchdog check exemption to 4-Socket platforms

* tag 'clocksource.2023.08.15a' of git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu:
  x86/tsc: Extend watchdog check exemption to 4-Sockets platform
  clocksource: Handle negative skews in "skew is too large" messages
2023-08-28 13:59:46 -07:00
Linus Torvalds
b324696dce CSD lock commits for v6.5
This series reduces the number of stack traces dumped during CSD-lock
 debugging.  This helps to avoid console overrun on systems with large
 numbers of CPUs.
 -----BEGIN PGP SIGNATURE-----
 
 iQJHBAABCgAxFiEEbK7UrM+RBIrCoViJnr8S83LZ+4wFAmSxzCITHHBhdWxtY2tA
 a2VybmVsLm9yZwAKCRCevxLzctn7jK3wD/9bNte4pbnG5CmJvu+bh1idbD+ZIyaU
 13mTg1J35XurW/QlmbOKoNcdylvmE4QdzUxURXgv4FHO7wdiARBGgrz9ArKjgq+e
 NfklTr4EY9UbRb27tO1iJDUiP6ZZB3fw+gYs2zmJMkn5CqK+rkUrTdcMa8EFHvlT
 vf6OL8xeFjsrTCWfYTAYJU1Yp+0UOiO+BRwzq4u76Wzpex79EiMEE2lLeRZfXhz9
 mF704EXn7VEkfRo50GlGOjVkezghlItXlaUCV2eQ4T6/LwXgreStCTKfhrDA5Qs2
 mAQ5OMZJztlbUWcVrEPZMpQ6pXWaJx5qoMZ1uP8Obec89ocr+/DuL1Myyaau9g+H
 rCYA9Om4XfAd2JURrxOIlKQ7SmvRJNZWpv0DHizTfWpSTOumtON2RyVlC2EYwx9Q
 2ZL4Eo99VzYcAXWx8KiGpF6CtW67VXKZsHwTtJegu4Vkjk9wOt5Sa9svhiHv0Kz4
 veYE9XuOH5+tIfN6tP4eikI+4VJOVhudsOKiXCjhoscy+1/gtXRH5WgYwvSiopWo
 nEsj05V7U0hWdsPpu7niZ982vAU1eHC7EeQt+pc5f17NeNr51xG3Oj3xyy14yzFC
 TbEyOft0MEsJ8NkC93FCbNqere7dk6k+bxVqvoWQ5tDfsEhIaVn+HzVhPdsluvfP
 1JixcSuqZ42RqA==
 =t4s8
 -----END PGP SIGNATURE-----

Merge tag 'csd-lock.2023.07.15a' of git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu

Pull CSD lock updates from Paul McKenney:
 "This series reduces the number of stack traces dumped during CSD-lock
  debugging. This helps to avoid console overrun on systems with large
  numbers of CPUs"

* tag 'csd-lock.2023.07.15a' of git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu:
  smp: Reduce NMI traffic from CSD waiters to CSD destination
  smp: Reduce logging due to dump_stack of CSD waiters
2023-08-28 13:46:41 -07:00
Linus Torvalds
6ae0c15765 smp_call_function torture-test updates for v6.6
This pull request prevents some memory-exhaustion false-positive failures
 in scftorture testing.
 -----BEGIN PGP SIGNATURE-----
 
 iQJHBAABCgAxFiEEbK7UrM+RBIrCoViJnr8S83LZ+4wFAmTcFWYTHHBhdWxtY2tA
 a2VybmVsLm9yZwAKCRCevxLzctn7jBPhD/9jqgTgPuF3bmRVpkggIXHN0wCTihS9
 BNQPaUdLRmTymtZAecaPOdRvPPMUvqjOK5dS8sx7rnoyU+qr33mUkRzSFCIrsGHM
 62FowQ4grokOkQnJYUpVuLhitYwwmWi7aKi5T2Xolc4ooSIpWZe/NPoiteGkm4lc
 nuA84DcV51rRykjBjW3LIrffoi9fu3lU65FsAjQttG7OZwWmAjhhHl29loCPlG3F
 +Ui+0p+cp8WAB/2J0B/6aHTqK6JJoV0t/gzKpzYvI/Gydz/7PaYjdBhPCSxHcsXd
 LMf+OO5/LtGfw4kcYF/8O4Ir0t4F681iOXlz06op2P2OT90S0O31SGUWznKMVq3E
 V307I9LnfT5Jo2aK9xD4ad8GM9rMKb9btc284QvaYAjCUD5RBoyA/S1d6e0u9rt3
 oK7rJWIG9bzCbZ7R7xXCzpkCYw98npVeDxS9gdwWSCA0vBwmhF8BbVQODZ/u+YQ0
 TQyTSankebeaoINeieto0ZAbK9iDSbsnTmKZ146hoLGFshDFN7qPOL4PggXPqw5B
 CXILQH+SOjMO+JaIrd4iOr172REzp1/64K4szaheV4LxyEwC/QJBdxhajdpJOTOS
 LowIG+LIIElr8dPIiiEIBVAaTehadgqA1+5zIcevt8OSMb7KOoB6FkXKj/9kWOfD
 PwFfqskEYoY8xQ==
 =8rLK
 -----END PGP SIGNATURE-----

Merge tag 'scftorture.2023.08.15a' of git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu

Pull smp_call_function torture-test updates from Paul McKenney:
 "This prevents some memory-exhaustion false-positive failures in
  scftorture testing"

* tag 'scftorture.2023.08.15a' of git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu:
  scftorture: Add CONFIG_PREEMPT_DYNAMIC=n to NOPREEMPT scenario
  scftorture: Pause testing after memory-allocation failure
  scftorture: Forgive memory-allocation failure if KASAN
  torture: Scale scftorture memory based on number of CPUs
2023-08-28 13:42:29 -07:00
Linus Torvalds
68cadad11f RCU pull request for v6.6
doc.2023.07.14b: Documentation updates.
 
 fixes.2023.08.16a: Miscellaneous fixes, perhaps most notably simplifying
 	SRCU_NOTIFIER_INIT() as suggested.
 
 rcu-tasks.2023.07.24a:  RCU Tasks updates, most notably treating
 	Tasks RCU callbacks as lazy while still treating synchronous
 	grace periods as urgent.  Also fixes one bug that restores the
 	ability to apply debug-objects to RCU Tasks and another that
 	fixes a race condition that could result in false-positive
 	failures of the boot-time self-test code.
 
 rcuscale.2023.07.14b: RCU-scalability performance-test updates,
 	most notably adding the ability to measure the RCU-Tasks's
 	grace-period kthread's CPU consumption.  This proved
 	quite useful for the rcu-tasks.2023.07.24a work.
 
 refscale.2023.07.14b: Reference-acquisition/release performance-test
 	updates, including a fix for an uninitialized wait_queue_head_t.
 
 torture.2023.08.14a: Miscellaneous torture-test updates.
 
 torturescripts.2023.07.20a: Torture-test scripting updates, including
	removal of the no-longer-functional formal-verification scripts,
 	test builds of individual RCU Tasks flavors, better diagnostics
 	for loss of connectivity for distributed rcutorture tests,
 	disabling of reboot loops in qemu/KVM-based rcutorture testing,
 	and passing of init parameters to rcutorture's init program.
 -----BEGIN PGP SIGNATURE-----
 
 iQJHBAABCgAxFiEEbK7UrM+RBIrCoViJnr8S83LZ+4wFAmTjkssTHHBhdWxtY2tA
 a2VybmVsLm9yZwAKCRCevxLzctn7jITND/9zEqYNbeFrcBs/YaHdoAjsNgOt1IYN
 csfF/KArVgdvmrwlV/nEaQMLaJcw9X7DVU5+7E2JbbDaB/2FSacseNyKk6mfgSVK
 /0rnTOXpqI9/T1HiJObWZvDQFuKL12bfteXWGJg1sMt2JUGZ4nAWhdZ3xRjp2XkO
 89qB5r0fF8gyGwvQ3M29ss8T9Oy0uUNJmDY/QyVxHM6dhkpSAezFffKzD7C4zkSV
 WucRTpYJ7bs6otBGtVmwz3x60UAuLwcVfQyB+CTbnGLsps9yAYU+1DDVdm7olcr3
 ARXMeboeodMvy9jWXhtbWRVAAob4lVUDXQN27kb4sBgroRQBfQXMuByRAU6s0VtX
 frOl6rlbORuAetsC8wFL0IFVn4yTpvXKbYw7h1MXTs7gVVbl33O9FieGvWu0r79/
 VR4Xw+JbmYWtyvFV8Zaq4iIEcOe+PeNH6u0bPx+htsHYd1+DUG2UY0MVmJQ3a4sb
 ygejA6mguCk7KBzWab8wdDpgAfhNwg0T9a+LQYcaskuD5SSWjYqqg6i1ulqqqyiE
 bOfRKDX4mWmAobWKHLssqUrjiLbxfygIaHjCrt7rWJKPIs1bK/WfWa4JbrE0NRwK
 9IDd1lWc9C+zoUpjyZWSG3ahK5lWo2u4sPNoRtMQjowjobIz1cBhaEwmFe72bG7C
 FCKb7Da2oUaLOw==
 =EujZ
 -----END PGP SIGNATURE-----

Merge tag 'rcu.2023.08.21a' of git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu

Pull RCU updates from Paul McKenney:

 - Documentation updates

 - Miscellaneous fixes, perhaps most notably simplifying
   SRCU_NOTIFIER_INIT() as suggested

 - RCU Tasks updates, most notably treating Tasks RCU callbacks as lazy
   while still treating synchronous grace periods as urgent. Also fixes
   one bug that restores the ability to apply debug-objects to RCU Tasks
   and another that fixes a race condition that could result in
   false-positive failures of the boot-time self-test code

 - RCU-scalability performance-test updates, most notably adding the
   ability to measure the RCU-Tasks's grace-period kthread's CPU
   consumption. This proved quite useful for the RCU Tasks work

 - Reference-acquisition/release performance-test updates, including a
   fix for an uninitialized wait_queue_head_t

 - Miscellaneous torture-test updates

 - Torture-test scripting updates, including removal of the
   no-longer-functional formal-verification scripts, test builds of
   individual RCU Tasks flavors, better diagnostics for loss of
   connectivity for distributed rcutorture tests, disabling of reboot
   loops in qemu/KVM-based rcutorture testing, and passing of init
   parameters to rcutorture's init program

* tag 'rcu.2023.08.21a' of git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu: (64 commits)
  rcu: Use WRITE_ONCE() for assignments to ->next for rculist_nulls
  rcu: Make the rcu_nocb_poll boot parameter usable via boot config
  rcu: Mark __rcu_irq_enter_check_tick() ->rcu_urgent_qs load
  srcu,notifier: Remove #ifdefs in favor of SRCU Tiny srcu_usage
  rcutorture: Stop right-shifting torture_random() return values
  torture: Stop right-shifting torture_random() return values
  torture: Move stutter_wait() timeouts to hrtimers
  torture: Move torture_shuffle() timeouts to hrtimers
  torture: Move torture_onoff() timeouts to hrtimers
  torture: Make torture_hrtimeout_*() use TASK_IDLE
  torture: Add lock_torture writer_fifo module parameter
  torture: Add a kthread-creation callback to _torture_create_kthread()
  rcu-tasks: Fix boot-time RCU tasks debug-only deadlock
  rcu-tasks: Permit use of debug-objects with RCU Tasks flavors
  checkpatch: Complain about unexpected uses of RCU Tasks Trace
  torture: Cause mkinitrd.sh to indicate failure on compile errors
  torture: Make init program dump command-line arguments
  torture: Switch qemu from -nographic to -display none
  torture: Add init-program support for loongarch
  torture: Avoid torture-test reboot loops
  ...
2023-08-28 13:19:28 -07:00
Linus Torvalds
727dbda16b hardening updates for v6.6-rc1
- Carve out the new CONFIG_LIST_HARDENED as a more focused subset of
   CONFIG_DEBUG_LIST (Marco Elver).
 
 - Fix kallsyms lookup failure under Clang LTO (Yonghong Song).
 
 - Clarify documentation for CONFIG_UBSAN_TRAP (Jann Horn).
 
 - Flexible array member conversion not carried in other tree (Gustavo
   A. R. Silva).
 
 - Various strlcpy() and strncpy() removals not carried in other trees
   (Azeem Shaikh, Justin Stitt).
 
 - Convert nsproxy.count to refcount_t (Elena Reshetova).
 
 - Add handful of __counted_by annotations not carried in other trees,
   as well as an LKDTM test.
 
 - Fix build failure with gcc-plugins on GCC 14+.
 
 - Fix selftests to respect SKIP for signal-delivery tests.
 
 - Fix CFI warning for paravirt callback prototype.
 
 - Clarify documentation for seq_show_option_n() usage.
 -----BEGIN PGP SIGNATURE-----
 
 iQJKBAABCgA0FiEEpcP2jyKd1g9yPm4TiXL039xtwCYFAmTs6ZAWHGtlZXNjb29r
 QGNocm9taXVtLm9yZwAKCRCJcvTf3G3AJkpjD/9AeST5Imc2t0t71Qd+wPxW3jT3
 kDZPlHH8wHmuxSpRscX82m21SozvEMvybo6Cp7FSH4qr863FnBWMlo8acr7rKxUf
 0f7Y9qgY/hKADiVx5p0pbnCgcy+l4pwsxIqVCGuhjvNCbWHrdGqLM4UjIfaVz5Ws
 +55a/C3S1KVwB1s1+6to43jtKqQAx6yrqYWOaT3wEfCzHC87f9PUHhIGnFQVwPGP
 WpjQI/BQKpH7+MDCoJOPrZqXaE/4lWALxR6+5BBheGbvLoWifpJEYHX6bDUzkgBz
 liQDkgr4eAw5EXSOS7mX3EApfeMKakznJt9Mcmn0h3pPRlM3ZSVD64Xrou2Brpje
 exS2JRuh6HwIiXY9nTHc6YMGcAWG1syAR/hM2fQdujM0CWtBUk9+kkuYWsqF6nIK
 3tOxYLB/Ph4p+tShd+v5R3mEmp/6snYRKJoUk+9Fk67i54NnK4huyxaCO4zui+ML
 3vHuGp8KgFHUjJaYmYXHs3TRZnKSFUkPGc4MbpiGtmJ9zhfSwlhhF+yfBJCsvmTf
 ZajA+sPupT4OjLxU6vUD/ZNkXAEjWzktyX2v9YBA7FHh7SqPtX9ARRIxh417AjEJ
 tBPHhW/iRw9ftBIAKDmI7gPLynngd/zvjhvk6O5egHYjjgRM1/WAJZ4V26XR6+hf
 TWfQb7VRzdZIqwOEUA==
 =9ZWP
 -----END PGP SIGNATURE-----

Merge tag 'hardening-v6.6-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux

Pull hardening updates from Kees Cook:
 "As has become normal, changes are scattered around the tree (either
  explicitly maintainer Acked or for trivial stuff that went ignored):

   - Carve out the new CONFIG_LIST_HARDENED as a more focused subset of
     CONFIG_DEBUG_LIST (Marco Elver)

   - Fix kallsyms lookup failure under Clang LTO (Yonghong Song)

   - Clarify documentation for CONFIG_UBSAN_TRAP (Jann Horn)

    - Flexible array member conversion not carried in other trees (Gustavo
     A. R. Silva)

   - Various strlcpy() and strncpy() removals not carried in other trees
     (Azeem Shaikh, Justin Stitt)

   - Convert nsproxy.count to refcount_t (Elena Reshetova)

   - Add handful of __counted_by annotations not carried in other trees,
      as well as an LKDTM test (see the example after this list)

   - Fix build failure with gcc-plugins on GCC 14+

   - Fix selftests to respect SKIP for signal-delivery tests

   - Fix CFI warning for paravirt callback prototype

   - Clarify documentation for seq_show_option_n() usage"
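
As a brief illustration of the __counted_by annotation mentioned above
(the struct here is made up; only the annotation itself is the point),
the attribute ties a flexible array member to the field holding its
element count so the compiler and fortified runtime checks can bound
accesses:

  /* Hypothetical kernel-style struct using the __counted_by() attribute. */
  struct demo_digests {
          unsigned int count;                          /* number of valid entries */
          unsigned char digests[] __counted_by(count); /* bounded by 'count' */
  };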

* tag 'hardening-v6.6-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux: (23 commits)
  LoadPin: Annotate struct dm_verity_loadpin_trusted_root_digest with __counted_by
  kallsyms: Change func signature for cleanup_symbol_name()
  kallsyms: Fix kallsyms_selftest failure
  nsproxy: Convert nsproxy.count to refcount_t
  integrity: Annotate struct ima_rule_opt_list with __counted_by
  lkdtm: Add FAM_BOUNDS test for __counted_by
  Compiler Attributes: counted_by: Adjust name and identifier expansion
  um: refactor deprecated strncpy to memcpy
  um: vector: refactor deprecated strncpy
  alpha: Replace one-element array with flexible-array member
  hardening: Move BUG_ON_DATA_CORRUPTION to hardening options
  list: Introduce CONFIG_LIST_HARDENED
  list_debug: Introduce inline wrappers for debug checks
  compiler_types: Introduce the Clang __preserve_most function attribute
  gcc-plugins: Rename last_stmt() for GCC 14+
  selftests/harness: Actually report SKIP for signal tests
  x86/paravirt: Fix tlb_remove_table function callback prototype warning
  EISA: Replace all non-returning strlcpy with strscpy
  perf: Replace strlcpy with strscpy
  um: Remove strlcpy declaration
  ...
2023-08-28 12:59:45 -07:00
Linus Torvalds
b03a434214 seccomp updates for v6.6-rc1
- Provide USER_NOTIFY flag for synchronous mode (Andrei Vagin, Peter
   Oskolkov). This touches the scheduler and perf but has been Acked by
   Peter Zijlstra.
 
 - Fix regression in syscall skipping and restart tracing on arm32.
   This touches arch/arm/ but has been Acked by Arnd Bergmann.
 -----BEGIN PGP SIGNATURE-----
 
 iQJKBAABCgA0FiEEpcP2jyKd1g9yPm4TiXL039xtwCYFAmTs418WHGtlZXNjb29r
 QGNocm9taXVtLm9yZwAKCRCJcvTf3G3AJohpD/4tEfRdnb/KDgwQ7uvqBonUJXcx
 wqw17LZCGTpBV3/Tp3+aEseD1NezOxiMJL88VyUHSy7nfDJShbL6QtyoenwEOeXJ
 HmBUfcIH3cqRutHEJ3drYBzBetpeeK2G+gTYVj+JoEfPWyPf+Egj+1JE2n1xLi92
 WC1miBAyBZ59kN+D1hcDzJu24CkAwbcUYlEzGejN5lBOwxYV3/fjARBVRvefOO5m
 jljSCIVJOFgCiybKhJ7Zw1+lkFc3cIlcOgr4/ZegSc8PxFVebnuImTHHp/gvoo6F
 7d1xe5Hk+PSfNvVq41MAeRB2vK2tY5efwjXRarThUaydPTO43KiQm0dzP0EYWK9a
 LcOg8zAXZnpvuWU5O2SqUKADcxe2TjS1WuQ/Q4ixxgKz2kJKDwrNU8Frf327eLSR
 acfZgMMiUfEXyXDV9B3LzNAtwdvwyxYrzEzxgKywhThIhZmQDat0rI2IaTV5QIc5
 pkxiFEe0TPwpzyUVO9dSzE+ughTmNQOKk5uAM9e2NwRwVdhEmlZAxo0kStJ1NoaA
 yDjYIKfaNBElchL4v2931KJFJseI+uRaWdW10JEV+1M69+gEAEs6wbmAxtcYS776
 xWsYp3slXzlmeVyvQp/ah8p0y55r+qTbcnhkvIdiwLYei4Bh3KOoJUlVmW0V5dKq
 b+7qspIvBA0kKRAqPw==
 =DI8R
 -----END PGP SIGNATURE-----

Merge tag 'seccomp-v6.6-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux

Pull seccomp updates from Kees Cook:

 - Provide USER_NOTIFY flag for synchronous mode (Andrei Vagin, Peter
   Oskolkov). This touches the scheduler and perf but has been Acked by
   Peter Zijlstra.

 - Fix regression in syscall skipping and restart tracing on arm32. This
   touches arch/arm/ but has been Acked by Arnd Bergmann.

* tag 'seccomp-v6.6-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux:
  seccomp: Add missing kerndoc notations
  ARM: ptrace: Restore syscall skipping for tracers
  ARM: ptrace: Restore syscall restart tracing
  selftests/seccomp: Handle arm32 corner cases better
  perf/benchmark: add a new benchmark for seccomp_unotify
  selftest/seccomp: add a new test for the sync mode of seccomp_user_notify
  seccomp: add the synchronous mode for seccomp_unotify
  sched: add a few helpers to wake up tasks on the current cpu
  sched: add WF_CURRENT_CPU and externise ttwu
  seccomp: don't use semaphore and wait_queue together
2023-08-28 12:38:26 -07:00
Linus Torvalds
615e95831e v6.6-vfs.ctime
-----BEGIN PGP SIGNATURE-----
 
 iHUEABYKAB0WIQRAhzRXHqcMeLMyaSiRxhvAZXjcogUCZOXTKAAKCRCRxhvAZXjc
 oifJAQCzi/p+AdQu8LA/0XvR7fTwaq64ZDCibU4BISuLGT2kEgEAuGbuoFZa0rs2
 XYD/s4+gi64p9Z01MmXm2XO1pu3GPg0=
 =eJz5
 -----END PGP SIGNATURE-----

Merge tag 'v6.6-vfs.ctime' of git://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs

Pull vfs timestamp updates from Christian Brauner:
 "This adds VFS support for multi-grain timestamps and converts tmpfs,
  xfs, ext4, and btrfs to use them. This carries acks from all relevant
  filesystems.

  The VFS always uses coarse-grained timestamps when updating the ctime
  and mtime after a change. This has the benefit of allowing filesystems
  to optimize away a lot of metadata updates, down to around 1 per
  jiffy, even when a file is under heavy writes.

  Unfortunately, this has always been an issue when we're exporting via
  NFSv3, which relies on timestamps to validate caches. A lot of changes
  can happen in a jiffy, so timestamps aren't sufficient to help the
  client decide to invalidate the cache.

  Even with NFSv4, a lot of exported filesystems don't properly support
  a change attribute and are subject to the same problems with timestamp
  granularity. Other applications have similar issues with timestamps
  (e.g., backup applications).

  If we were to always use fine-grained timestamps, that would improve
  the situation, but that becomes rather expensive, as the underlying
  filesystem would have to log a lot more metadata updates.

  This introduces fine-grained timestamps that are used when they are
  actively queried.

  This uses the 31st bit of the ctime tv_nsec field to indicate that
  something has queried the inode for the mtime or ctime. When this flag
  is set, on the next mtime or ctime update, the kernel will fetch a
  fine-grained timestamp instead of the usual coarse-grained one.

  As POSIX generally mandates that when the mtime changes, the ctime
   must also change, the kernel always stores normalized ctime values, so
  only the first 30 bits of the tv_nsec field are ever used.

   Filesystems can opt into this behavior by setting the FS_MGTIME flag in
  the fstype. Filesystems that don't set this flag will continue to use
  coarse-grained timestamps.

  Various preparatory changes, fixes and cleanups are included:

   - Fixup all relevant places where POSIX requires updating ctime
      together with mtime. This is a wide range of places and all
     maintainers provided necessary Acks.

   - Add new accessors for inode->i_ctime directly and change all
     callers to rely on them. Plain accesses to inode->i_ctime are now
      gone and the field is accordingly renamed to inode->__i_ctime and commented
     as requiring accessors.

   - Extend generic_fillattr() to pass in a request mask mirroring in a
     sense the statx() uapi. This allows callers to pass in a request
     mask to only get a subset of attributes filled in.

   - Rework timestamp updates so it's possible to drop the @now
      parameter from the update_time() inode operation and associated helpers.

   - Add inode_update_timestamps() and convert all filesystems to it
     removing a bunch of open-coding"
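
A minimal sketch of the queried-bit scheme described in the pull message
above; the flag and helper names here are illustrative rather than the
kernel's identifiers, and the locking/atomicity of the real implementation
is omitted:

  #include <stdbool.h>
  #include <time.h>

  #define CTIME_QUERIED (1L << 30) /* 31st bit of tv_nsec; nanoseconds fit in 30 bits */

  struct demo_inode {
          struct timespec ctime;
  };

  /* Reader path: hand out the timestamp and remember that someone looked. */
  static struct timespec demo_get_ctime(struct demo_inode *inode)
  {
          struct timespec ts = inode->ctime;

          inode->ctime.tv_nsec |= CTIME_QUERIED; /* mark as queried */
          ts.tv_nsec &= ~CTIME_QUERIED;          /* never expose the flag bit */
          return ts;
  }

  /* Writer path: use a fine-grained stamp only if the old value was queried. */
  static void demo_update_ctime(struct demo_inode *inode,
                                struct timespec coarse, struct timespec fine)
  {
          bool queried = inode->ctime.tv_nsec & CTIME_QUERIED;

          inode->ctime = queried ? fine : coarse;
          inode->ctime.tv_nsec &= ~CTIME_QUERIED; /* store a normalized value */
  }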

* tag 'v6.6-vfs.ctime' of git://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs: (107 commits)
  btrfs: convert to multigrain timestamps
  ext4: switch to multigrain timestamps
  xfs: switch to multigrain timestamps
  tmpfs: add support for multigrain timestamps
  fs: add infrastructure for multigrain timestamps
  fs: drop the timespec64 argument from update_time
  xfs: have xfs_vn_update_time gets its own timestamp
  fat: make fat_update_time get its own timestamp
  fat: remove i_version handling from fat_update_time
  ubifs: have ubifs_update_time use inode_update_timestamps
  btrfs: have it use inode_update_timestamps
  fs: drop the timespec64 arg from generic_update_time
  fs: pass the request_mask to generic_fillattr
  fs: remove silly warning from current_time
  gfs2: fix timestamp handling on quota inodes
  fs: rename i_ctime field to __i_ctime
  selinux: convert to ctime accessor functions
  security: convert to ctime accessor functions
  apparmor: convert to ctime accessor functions
  sunrpc: convert to ctime accessor functions
  ...
2023-08-28 09:31:32 -07:00
Linus Torvalds
3b35375f19 A last minute fix for a regression introduced in the v6.5 merge window. The
conversion of the software-based interrupt resend mechanism to hlist failed
 to add a check for whether the descriptor is already enqueued and dropped the
 interrupt descriptor lookup for nested interrupts.
 
 The missing check whether the descriptor is already queued causes hlist
 corruption and can be observed in the wild. The dropped parent descriptor
 lookup has not yet caused problems, but it would result in a stale interrupt
 line in the worst case.
 
 Add the missing enqueued check and bring the descriptor lookup back to cure
 this.
 -----BEGIN PGP SIGNATURE-----
 
 iQJHBAABCgAxFiEEQp8+kY+LLUocC4bMphj1TA10mKEFAmTqNLQTHHRnbHhAbGlu
 dXRyb25peC5kZQAKCRCmGPVMDXSYoTeBD/0b8zbNhmO5TXhP6GrCPXahFM6aTmyK
 NveZMzh1c7tQZzMBNNEnRoaYvmcgPOviZ1Yi3+/Hs3oaR/b6nLt36K8+MRC7J+15
 j6cIylmpTp9eH5Na3IT1wmTNfCVAdoejoZVYq4PPHAHUrzqu7ESOTLzHbPmWS97E
 VGdvUrKnQ7J4ajOZn7bXWaia+qCuIij87CYAKH++c9JVMIc0iTs2Zd7FG2sncgLm
 OJdvjmMy/qN9a1jYdM4DrGOS8HBdvuYb9EEDuZB4NEY3nBR+svQqBHsD462LgxNe
 +OTzLBVMoP9heKbyTU9357PUq2qz6OmpC0vE1n5XgkSEdrvm9x1UjYcPQnagRm25
 JZp/pEI/ryD8oGQNWzsPe7PDyyKHV5F0Q1KPHGUvvEJxwF+USVe9Zm6damfZvGeA
 dp34zYg0mFCH0hmqdYs6+cc8sJcEy8aR8FFUgI1Uj5nr9zZ3vV7WTsOjJ12NDFo/
 L+oDKz6/sdL2X/EKddP3ffQrImPF8DdSYfEPmoukTMhihfgXewBlgvg3b9HekVVm
 9j7UhqsQw/mdPcTpkM6cd5ngxB71X64gMjAfotwsproJg/EUw978CM++9sGKmKy8
 jU7hlgZQ3DniSCyCpXB/7vZxAFej8TKTWmTc4KZYKiMfej2vqI3FjA3KLGY6GzK+
 ls/Rm57EOhKZlw==
 =Snax
 -----END PGP SIGNATURE-----

Merge tag 'irq-urgent-2023-08-26' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull irq fix from Thomas Gleixner:
 "A last minute fix for a regression introduced in the v6.5 merge
  window.

   The conversion of the software-based interrupt resend mechanism to
   hlist failed to add a check for whether the descriptor is already
   enqueued and dropped the interrupt descriptor lookup for nested
   interrupts.

  The missing check whether the descriptor is already queued causes
  hlist corruption and can be observed in the wild. The dropped parent
  descriptor lookup has not yet caused problems, but it would result in
   a stale interrupt line in the worst case.

  Add the missing enqueued check and bring the descriptor lookup back to
  cure this"

* tag 'irq-urgent-2023-08-26' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  genirq: Fix software resend lockup and nested resend
2023-08-26 10:34:29 -07:00
Johan Hovold
9f5deb5516 genirq: Fix software resend lockup and nested resend
The switch to using hlist for managing software resend of interrupts
broke resend in at least two ways:

First, unconditionally adding interrupt descriptors to the resend list can
corrupt the list when the descriptor in question has already been
added. This causes the resend tasklet to loop indefinitely with interrupts
disabled as was recently reported with the Lenovo ThinkPad X13s after
threaded NAPI was disabled in the ath11k WiFi driver.

This bug is easily fixed by restoring the old semantics of irq_sw_resend()
so that it can also be called for descriptors that have already been marked
for resend.

Second, the offending commit also broke software resend of nested
interrupts by simply discarding the code that made sure that such
interrupts are retriggered using the parent interrupt.

Add back the corresponding code that adds the parent descriptor to the
resend list.
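
A simplified sketch of the resulting logic (not the exact patch; the
identifiers follow kernel/irq/resend.c approximately and may differ):

  static void demo_irq_sw_resend(struct irq_desc *desc)
  {
          /* Nested interrupts are retriggered via the parent interrupt. */
          if (irq_settings_is_nested_thread(desc)) {
                  desc = irq_to_desc(desc->parent_irq);
                  if (!desc)
                          return;
          }

          /* Enqueueing a descriptor twice corrupts the hlist, so only add
           * it when it is not already on the resend list. */
          if (hlist_unhashed(&desc->resend_node))
                  hlist_add_head(&desc->resend_node, &irq_resend_list);

          tasklet_schedule(&resend_tasklet);
  }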

Fixes: bc06a9e087 ("genirq: Use hlist for managing resend handlers")
Signed-off-by: Johan Hovold <johan+linaro@kernel.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/lkml/20230809073432.4193-1-johan+linaro@kernel.org/
Link: https://lore.kernel.org/r/20230826154004.1417-1-johan+linaro@kernel.org
2023-08-26 19:14:31 +02:00
Yonghong Song
76903a9648 kallsyms: Change func signature for cleanup_symbol_name()
No users of cleanup_symbol_name() use its return value, so change the
return type of cleanup_symbol_name() to 'void' to reflect its usage
pattern.

Suggested-by: Nick Desaulniers <ndesaulniers@google.com>
Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Reviewed-by: Song Liu <song@kernel.org>
Link: https://lore.kernel.org/r/20230825202036.441212-1-yonghong.song@linux.dev
Signed-off-by: Kees Cook <keescook@chromium.org>
2023-08-25 15:00:36 -07:00
Rafael J. Wysocki
6a0b211f8b Merge branches 'pm-sleep', 'pm-qos' and 'powercap'
Merge system-wide power management changes and power capping updates
for 6.6-rc1:

 - Add device PM helpers to allow a device to remain powered-on during
   system-wide transitions (Ulf Hansson).

 - Rework hibernation memory snapshotting to avoid storing pages filled
   with zeros in hibernation image files (Brian Geffon).

 - Add check to make sure that CPU latency QoS constraints do not use
   negative values (Clive Lin).

 - Optimize rp->domains memory allocation in the Intel RAPL power
   capping driver (xiongxin).

 - Remove recursion while parsing zones in the arm_scmi power capping
   driver (Cristian Marussi).

* pm-sleep:
  PM: sleep: Add helpers to allow a device to remain powered-on
  PM: hibernate: don't store zero pages in the image file

* pm-qos:
  PM: QoS: Add check to make sure CPU latency is non-negative

* powercap:
  powercap: intel_rapl: Optimize rp->domains memory allocation
  powercap: arm_scmi: Remove recursion while parsing zones
2023-08-25 21:23:30 +02:00
Yonghong Song
33f0467fe0 kallsyms: Fix kallsyms_selftest failure
Kernel test robot reported a kallsyms_test failure when clang lto is
enabled (thin or full) and CONFIG_KALLSYMS_SELFTEST is also enabled.
I can reproduce in my local environment with the following error message
with thin lto:
  [    1.877897] kallsyms_selftest: Test for 1750th symbol failed: (tsc_cs_mark_unstable) addr=ffffffff81038090
  [    1.877901] kallsyms_selftest: abort

It appears that commit 8cc32a9bbf ("kallsyms: strip LTO-only suffixes
from promoted global functions") caused the failure. That commit changed
cleanup_symbol_name() to key on ".llvm." instead of '.', where ".llvm." is
the suffix appended to a pre-LTO-optimization local symbol name. We need to
propagate this knowledge to kallsyms_selftest.c as well.

Furthermore, compare_symbol_name() in kallsyms.c needs to change as well.
In scripts/kallsyms.c, kallsyms_names and kallsyms_seqs_of_names are used
to record the symbol names themselves and the indexes into those names,
respectively.
For example:
  kallsyms_names:
    ...
    __amd_smn_rw._entry       <== seq 1000
    __amd_smn_rw._entry.5     <== seq 1001
    __amd_smn_rw.llvm.<hash>  <== seq 1002
    ...

kallsyms_seqs_of_names is sorted based on cleanup_symbol_name(), though, so
the order in kallsyms_seqs_of_names actually has

  index 1000:   seq 1002   <== __amd_smn_rw.llvm.<hash> (actual symbol comparison using '__amd_smn_rw')
  index 1001:   seq 1000   <== __amd_smn_rw._entry
  index 1002:   seq 1001   <== __amd_smn_rw._entry.5

Say that at a particular point, at index 1000, symbol '__amd_smn_rw.llvm.<hash>'
is compared to '__amd_smn_rw._entry', where '__amd_smn_rw._entry' is the one
being searched for, e.g., with kallsyms_on_each_match_symbol(). The current
implementation finds that '__amd_smn_rw._entry' is less than
'__amd_smn_rw.llvm.<hash>' and then continues the search towards index 999,
never finding a match although index 1001 actually is one.

To fix this issue, apply cleanup_symbol_name() first and then do the comparison.
In the above case, '__amd_smn_rw' is compared against '__amd_smn_rw._entry';
since '__amd_smn_rw._entry' is greater than '__amd_smn_rw', the next comparison
moves above index 1000 and eventually index 1001 is hit and a match is found.

For any symbol not containing the '.llvm.' substring, there is no functional
change in compare_symbol_name().
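
A rough sketch of the fixed comparison (simplified; buffer handling and the
callers are omitted):

  static int compare_symbol_name(const char *name, char *namebuf)
  {
          /*
           * kallsyms_seqs_of_names is sorted by the cleaned-up names, so
           * strip any ".llvm.<hash>" suffix from the candidate before the
           * string comparison.
           */
          cleanup_symbol_name(namebuf);
          return strcmp(name, namebuf);
  }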

Fixes: 8cc32a9bbf ("kallsyms: strip LTO-only suffixes from promoted global functions")
Reported-by: kernel test robot <oliver.sang@intel.com>
Closes: https://lore.kernel.org/oe-lkp/202308232200.1c932a90-oliver.sang@intel.com
Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
Reviewed-by: Song Liu <song@kernel.org>
Reviewed-by: Zhen Lei <thunder.leizhen@huawei.com>
Link: https://lore.kernel.org/r/20230825034659.1037627-1-yonghong.song@linux.dev
Cc: stable@vger.kernel.org
Signed-off-by: Kees Cook <keescook@chromium.org>
2023-08-25 10:44:20 -07:00
Mark Rutland
1dfe3a5a7c entry: Remove empty addr_limit_user_check()
Back when set_fs() was a generic API for altering the address limit,
addr_limit_user_check() was a safety measure to prevent userspace being
able to issue syscalls with an unbound limit.

With the removal of set_fs() as a generic API, the last user of
addr_limit_user_check() was removed in commit:

  b5a5a01d8e ("arm64: uaccess: remove addr_limit_user_check()")

... as since that commit, no architecture defines TIF_FSCHECK, and hence
addr_limit_user_check() always expands to nothing.

Remove addr_limit_user_check(), updating the comment in
exit_to_user_mode_prepare() to no longer refer to it. At the same time,
the comment is reworded to be a little more generic so as to cover
kmap_assert_nomap() in addition to lockdep_sys_exit().

No functional change.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/20230821163526.2319443-1-mark.rutland@arm.com
2023-08-23 10:32:39 +02:00
Clive Lin
5f55836ab4 PM: QoS: Add check to make sure CPU latency is non-negative
CPU latency should never be negative; a negative value would become
incorrectly large when converted to an unsigned data type.

Commit 8d36694245 ("PM: QoS: Add check to make sure CPU freq is
non-negative") makes sure CPU frequency is non-negative to fix incorrect
behavior in frequency QoS.

Add an analogous check to make sure CPU latency is non-negative so as to
prevent this problem from happening in CPU latency QoS.
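
A minimal sketch of the added check on the request-add path (simplified; the
update path gets an analogous check):

  void cpu_latency_qos_add_request(struct pm_qos_request *req, s32 value)
  {
          /* Reject negative latency values before they wrap when the
           * constraint is handled as an unsigned quantity. */
          if (!req || value < 0)
                  return;
          ...
  }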

Signed-off-by: Clive Lin <clive.lin@mediatek.com>
[ rjw: Changelog edits ]
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2023-08-22 21:37:29 +02:00
Elena Reshetova
2ddd3cac1f nsproxy: Convert nsproxy.count to refcount_t
atomic_t variables are currently used to implement reference counters
with the following properties:
 - counter is initialized to 1 using atomic_set()
 - a resource is freed upon counter reaching zero
 - once counter reaches zero, its further
   increments aren't allowed
 - counter schema uses basic atomic operations
   (set, inc, inc_not_zero, dec_and_test, etc.)

Such atomic variables should be converted to a newly provided
refcount_t type and API that prevents accidental counter overflows and
underflows. This is important since overflows and underflows can lead
to use-after-free situation and be exploitable.

The variable nsproxy.count is used as pure reference counter. Convert it
to refcount_t and fix up the operations.
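
A sketch of the conversion pattern on the nsproxy helpers (simplified; the
initial value 1 is set with REFCOUNT_INIT()/refcount_set() instead of
atomic_set()):

  struct nsproxy {
          refcount_t count;                       /* was atomic_t */
          ...
  };

  static inline void get_nsproxy(struct nsproxy *ns)
  {
          refcount_inc(&ns->count);               /* was atomic_inc() */
  }

  static inline void put_nsproxy(struct nsproxy *ns)
  {
          if (refcount_dec_and_test(&ns->count))  /* was atomic_dec_and_test() */
                  free_nsproxy(ns);
  }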

**Important note for maintainers:

Some functions from refcount_t API defined in refcount.h have different
memory ordering guarantees than their atomic counterparts. Please check
Documentation/core-api/refcount-vs-atomic.rst for more information.

Normally the differences should not matter since refcount_t provides
enough guarantees to satisfy the refcounting use cases, but in some
rare cases it might matter. Please double check that you don't have
some undocumented memory guarantees for this variable usage.

For the nsproxy.count it might make a difference in following places:
 - put_nsproxy() and switch_task_namespaces(): decrement in
   refcount_dec_and_test() only provides RELEASE ordering and ACQUIRE
   ordering on success vs. fully ordered atomic counterpart

Suggested-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
Reviewed-by: David Windsor <dwindsor@gmail.com>
Reviewed-by: Hans Liljestrand <ishkamiel@gmail.com>
Reviewed-by: Christian Brauner <brauner@kernel.org>
Link: https://lore.kernel.org/r/20230818041327.gonna.210-kees@kernel.org
Signed-off-by: Kees Cook <keescook@chromium.org>
2023-08-21 11:29:12 -07:00
Zheng Yejian
c2489bb7e6 tracing: Introduce pipe_cpumask to avoid race on trace_pipes
There is a race issue when concurrently splice_read()ing the main trace_pipe
and the per_cpu trace_pipes, which results in the data read out being
different from what was actually written.

As suggested by Steven:
  > I believe we should add a ref count to trace_pipe and the per_cpu
  > trace_pipes, where if they are opened, nothing else can read it.
  >
  > Opening trace_pipe locks all per_cpu ref counts, if any of them are
  > open, then the trace_pipe open will fail (and releases any ref counts
  > it had taken).
  >
  > Opening a per_cpu trace_pipe will up the ref count for just that
  > CPU buffer. This will allow multiple tasks to read different per_cpu
  > trace_pipe files, but will prevent the main trace_pipe file from
  > being opened.

But because we only need to know whether a per_cpu trace_pipe is open or
not, using a cpumask instead of a ref count may be easier.

After this patch, users will find that:
 - Main trace_pipe can be opened by only one user, and if it is
   opened, all per_cpu trace_pipes cannot be opened;
 - Per_cpu trace_pipes can be opened by multiple users, but each per_cpu
   trace_pipe can only be opened by one user. And if one of them is
   opened, main trace_pipe cannot be opened.
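
A rough sketch of the intended open-time checks, assuming a new
tr->pipe_cpumask field (simplified; the appropriate lock is held by the
caller):

  /* Opening the per-cpu trace_pipe for @cpu: */
  if (cpumask_test_cpu(cpu, tr->pipe_cpumask))
          return -EBUSY;                  /* this per-cpu pipe is already open */
  cpumask_set_cpu(cpu, tr->pipe_cpumask);

  /* Opening the main trace_pipe: */
  if (!cpumask_empty(tr->pipe_cpumask))
          return -EBUSY;                  /* some pipe is already open */
  cpumask_setall(tr->pipe_cpumask);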

Link: https://lore.kernel.org/linux-trace-kernel/20230818022645.1948314-1-zhengyejian1@huawei.com

Suggested-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Signed-off-by: Zheng Yejian <zhengyejian1@huawei.com>
Reviewed-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
2023-08-21 11:17:14 -04:00
Kees Cook
46822860a5 seccomp: Add missing kerndoc notations
The kerndoc notations for some struct members and function arguments were
missing. Add them.
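
A minimal, hypothetical example of the notation being added (names chosen for
illustration, not necessarily the actual seccomp members):

  /**
   * struct seccomp_filter - a BPF filter attached to a task
   * @refs:  reference count protecting the object lifetime
   */

  /**
   * seccomp_attach_filter() - attach a prepared filter to current
   * @flags: SECCOMP_FILTER_FLAG_* modifiers for the attach operation
   */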

Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Will Drewry <wad@chromium.org>
Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202308171742.AncabIG1-lkp@intel.com/
Signed-off-by: Kees Cook <keescook@chromium.org>
2023-08-17 12:32:15 -07:00
Zheng Yejian
eecb91b9f9 tracing: Fix memleak due to race between current_tracer and trace
Kmemleak reports a leak in graph_trace_open():

  unreferenced object 0xffff0040b95f4a00 (size 128):
    comm "cat", pid 204981, jiffies 4301155872 (age 99771.964s)
    hex dump (first 32 bytes):
      e0 05 e7 b4 ab 7d 00 00 0b 00 01 00 00 00 00 00 .....}..........
      f4 00 01 10 00 a0 ff ff 00 00 00 00 65 00 10 00 ............e...
    backtrace:
      [<000000005db27c8b>] kmem_cache_alloc_trace+0x348/0x5f0
      [<000000007df90faa>] graph_trace_open+0xb0/0x344
      [<00000000737524cd>] __tracing_open+0x450/0xb10
      [<0000000098043327>] tracing_open+0x1a0/0x2a0
      [<00000000291c3876>] do_dentry_open+0x3c0/0xdc0
      [<000000004015bcd6>] vfs_open+0x98/0xd0
      [<000000002b5f60c9>] do_open+0x520/0x8d0
      [<00000000376c7820>] path_openat+0x1c0/0x3e0
      [<00000000336a54b5>] do_filp_open+0x14c/0x324
      [<000000002802df13>] do_sys_openat2+0x2c4/0x530
      [<0000000094eea458>] __arm64_sys_openat+0x130/0x1c4
      [<00000000a71d7881>] el0_svc_common.constprop.0+0xfc/0x394
      [<00000000313647bf>] do_el0_svc+0xac/0xec
      [<000000002ef1c651>] el0_svc+0x20/0x30
      [<000000002fd4692a>] el0_sync_handler+0xb0/0xb4
      [<000000000c309c35>] el0_sync+0x160/0x180

The root cause is described as follows:

  __tracing_open() {  // 1. File 'trace' is being opened;
    ...
    *iter->trace = *tr->current_trace;  // 2. Tracer 'function_graph' is
                                        //    currently set;
    ...
    iter->trace->open(iter);  // 3. Call graph_trace_open() here,
                              //    and memory are allocated in it;
    ...
  }

  s_start() {  // 4. The opened file is being read;
    ...
    *iter->trace = *tr->current_trace;  // 5. If tracer is switched to
                                        //    'nop' or others, then memory
                                        //    in step 3 are leaked!!!
    ...
  }

To fix it, in s_start(), close the tracer before switching, then reopen the
new tracer after switching. Also, some tracers like 'wakeup' may not update
'iter->private' in some cases when reopened, so it should be cleared to
avoid being mistakenly closed again.
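
A rough sketch of the fix in s_start() (simplified; in the actual change the
->private clearing lives in the close path of the affected tracers):

  if (unlikely(tr->current_trace != iter->trace)) {
          /* Close the old tracer so its ->open() allocations are freed. */
          if (iter->trace->close)
                  iter->trace->close(iter);
          *iter->trace = *tr->current_trace;
          /* Reopen the new current tracer. */
          if (iter->trace->open)
                  iter->trace->open(iter);
  }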

Link: https://lore.kernel.org/linux-trace-kernel/20230817125539.1646321-1-zhengyejian1@huawei.com

Fixes: d7350c3f45 ("tracing/core: make the read callbacks reentrants")
Signed-off-by: Zheng Yejian <zhengyejian1@huawei.com>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
2023-08-17 13:49:37 -04:00
Peter Zijlstra
63304558ba sched/eevdf: Curb wakeup-preemption
Mike and others noticed that EEVDF does like to over-schedule quite a
bit -- which does hurt performance of a number of benchmarks /
workloads.

In particular, what seems to cause over-scheduling is that when lag is of
the same order as (or larger than) the request / slice, placement will not
only cause the task to be placed left of current, but also with a smaller
deadline than current, which causes immediate preemption.

[ notably, lag bounds are relative to HZ ]

Mike suggested we stick to picking 'current' for as long as it's
eligible to run, giving it uninterrupted runtime until it reaches
parity with the pack.

Augment Mike's suggestion by only allowing it to exhaust its initial
request.

One random data point:

echo NO_RUN_TO_PARITY > /debug/sched/features
perf stat -a -e context-switches --repeat 10 -- perf bench sched messaging -g 20 -t -l 5000

	3,723,554        context-switches      ( +-  0.56% )
	9.5136 +- 0.0394 seconds time elapsed  ( +-  0.41% )

echo RUN_TO_PARITY > /debug/sched/features
perf stat -a -e context-switches --repeat 10 -- perf bench sched messaging -g 20 -t -l 5000

	2,556,535        context-switches      ( +-  0.51% )
	9.2427 +- 0.0302 seconds time elapsed  ( +-  0.33% )

Suggested-by: Mike Galbraith <umgwanakikbuti@gmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20230816134059.GC982867@hirez.programming.kicks-ass.net
2023-08-17 17:07:07 +02:00
Paul E. McKenney
fe24a0b632 Merge branches 'doc.2023.07.14b', 'fixes.2023.08.16a', 'rcu-tasks.2023.07.24a', 'rcuscale.2023.07.14b', 'refscale.2023.07.14b', 'torture.2023.08.14a' and 'torturescripts.2023.07.20a' into HEAD
doc.2023.07.14b:  Documentation updates.
fixes.2023.08.16a:  Miscellaneous fixes.
rcu-tasks.2023.07.24a:  RCU Tasks updates.
rcuscale.2023.07.14b:  RCU (updater) scalability test updates.
refscale.2023.07.14b:  Reference (reader) scalability test updates.
torture.2023.08.14a:  Other torture-test updates.
torturescripts.2023.07.20a:  Other torture-test scripting updates.
2023-08-16 14:31:08 -07:00
Paul E. McKenney
3292ba0229 rcu: Make the rcu_nocb_poll boot parameter usable via boot config
The rcu_nocb_poll kernel boot parameter is defined via early_param(),
whose parsing functions are invoked from parse_early_param() which
is in turn invoked by setup_arch(), which is very early indeed.  It
is invoked so early that the console output timestamps read 0.000000,
in other words, before time begins.

This use of early_param() means that the rcu_nocb_poll kernel boot
parameter cannot usefully be embedded into the kernel image.  Yes, you
can embed it, but setup_boot_config() is invoked from start_kernel()
too late for it to be parsed.

But it makes no sense to parse this parameter so early.  After all,
it cannot do anything until the rcuog kthreads are created, which is
long after rcu_init() time, let alone setup_boot_config() time.

This commit therefore switches the rcu_nocb_poll kernel boot parameter
from early_param() to __setup(), which allows boot-config parsing of
this parameter, in turn allowing it to be embedded into the kernel image.
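
A minimal sketch of the switch (note that __setup() handlers return 1 to
indicate the argument was consumed, whereas early_param() handlers return 0
on success):

  static int __init parse_rcu_nocb_poll(char *arg)
  {
          rcu_nocb_poll = true;
          return 1;
  }
  __setup("rcu_nocb_poll", parse_rcu_nocb_poll);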

Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Reviewed-by: Joel Fernandes (Google) <joel@joelfernandes.org>
2023-08-16 14:27:41 -07:00
Paul E. McKenney
343640cb5b rcu: Mark __rcu_irq_enter_check_tick() ->rcu_urgent_qs load
The rcu_request_urgent_qs_task() function does a cross-CPU store
to ->rcu_urgent_qs, so this commit marks the load in
__rcu_irq_enter_check_tick() with READ_ONCE().
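
A sketch of the pattern (not the exact kernel code): a flag that another CPU
updates concurrently should be loaded with READ_ONCE() so the compiler cannot
tear, fuse, or refetch the access:

  if (!READ_ONCE(rdp->rcu_urgent_qs))
          return;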

Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Reviewed-by: Joel Fernandes (Google) <joel@joelfernandes.org>
2023-08-16 14:27:41 -07:00
Sven Schnelle
c4d6b54381 tracing/synthetic: Allocate one additional element for size
While debugging another issue I noticed that the stack trace contains one
invalid entry at the end:

<idle>-0       [008] d..4.    26.484201: wake_lat: pid=0 delta=2629976084 000000009cc24024 stack=STACK:
=> __schedule+0xac6/0x1a98
=> schedule+0x126/0x2c0
=> schedule_timeout+0x150/0x2c0
=> kcompactd+0x9ca/0xc20
=> kthread+0x2f6/0x3d8
=> __ret_from_fork+0x8a/0xe8
=> 0x6b6b6b6b6b6b6b6b

This is because the code failed to add the one element containing the
number of entries to field_size.
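
Illustrative only (variable names hypothetical): because the stored stack
trace starts with one element holding the number of entries, the field must
be sized for n_entries + 1 longs rather than n_entries:

  field_size = (n_entries + 1) * sizeof(long);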

Link: https://lkml.kernel.org/r/20230816154928.4171614-4-svens@linux.ibm.com

Cc: Masami Hiramatsu <mhiramat@kernel.org>
Fixes: 00cf3d672a ("tracing: Allow synthetic events to pass around stacktraces")
Signed-off-by: Sven Schnelle <svens@linux.ibm.com>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
2023-08-16 16:37:07 -04:00
Sven Schnelle
887f92e09e tracing/synthetic: Skip first entry for stack traces
While debugging another issue I noticed that the stack trace output
contains the number of entries on top:

         <idle>-0       [000] d..4.   203.322502: wake_lat: pid=0 delta=2268270616 stack=STACK:
=> 0x10
=> __schedule+0xac6/0x1a98
=> schedule+0x126/0x2c0
=> schedule_timeout+0x242/0x2c0
=> __wait_for_common+0x434/0x680
=> __wait_rcu_gp+0x198/0x3e0
=> synchronize_rcu+0x112/0x138
=> ring_buffer_reset_online_cpus+0x140/0x2e0
=> tracing_reset_online_cpus+0x15c/0x1d0
=> tracing_set_clock+0x180/0x1d8
=> hist_register_trigger+0x486/0x670
=> event_hist_trigger_parse+0x494/0x1318
=> trigger_process_regex+0x1d4/0x258
=> event_trigger_write+0xb4/0x170
=> vfs_write+0x210/0xad0
=> ksys_write+0x122/0x208

Fix this by skipping the first element. Also replace the pointer
logic with an index variable which is easier to read.

Link: https://lkml.kernel.org/r/20230816154928.4171614-3-svens@linux.ibm.com

Cc: Masami Hiramatsu <mhiramat@kernel.org>
Fixes: 00cf3d672a ("tracing: Allow synthetic events to pass around stacktraces")
Signed-off-by: Sven Schnelle <svens@linux.ibm.com>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
2023-08-16 16:34:25 -04:00
Sven Schnelle
ddeea494a1 tracing/synthetic: Use union instead of casts
The current code uses a lot of casts of different sizes to access the fields
member in struct synth_trace_events. This makes the code hard to read, and
has already introduced an endianness bug. Use a union and a struct
instead.
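
A sketch of the idea with hypothetical member names (the actual union in
trace_events_synth.c differs):

  union synth_field_value {
          u8  as_u8;
          u16 as_u16;
          u32 as_u32;
          u64 as_u64;
  };

  /* Instead of: *(u32 *)&entry->fields[i] = (u32)val; */
  entry->fields[i].as_u32 = (u32)val;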

Link: https://lkml.kernel.org/r/20230816154928.4171614-2-svens@linux.ibm.com

Cc: stable@vger.kernel.org
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Fixes: 00cf3d672a ("tracing: Allow synthetic events to pass around stacktraces")
Signed-off-by: Sven Schnelle <svens@linux.ibm.com>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
2023-08-16 16:33:27 -04:00
Zheng Yejian
b71645d6af tracing: Fix cpu buffers unavailable due to 'record_disabled' missed
The trace ring buffer can no longer record anything after executing the
following commands at the shell prompt:

  # cd /sys/kernel/tracing
  # cat tracing_cpumask
  fff
  # echo 0 > tracing_cpumask
  # echo 1 > snapshot
  # echo fff > tracing_cpumask
  # echo 1 > tracing_on
  # echo "hello world" > trace_marker
  -bash: echo: write error: Bad file descriptor

The root cause is that:
  1. After `echo 0 > tracing_cpumask`, 'record_disabled' of cpu buffers
     in 'tr->array_buffer.buffer' became 1 (see tracing_set_cpumask());
  2. After `echo 1 > snapshot`, 'tr->array_buffer.buffer' is swapped
     with 'tr->max_buffer.buffer', then the 'record_disabled' became 0
     (see update_max_tr());
  3. After `echo fff > tracing_cpumask`, the 'record_disabled' became -1;
Then array_buffer and max_buffer are both unavailable because the value of
'record_disabled' is not 0.

To fix it, enable or disable both array_buffer and max_buffer at the same
time in tracing_set_cpumask().
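
A rough sketch of the disable path in tracing_set_cpumask() (simplified,
assuming CONFIG_TRACER_MAX_TRACE; the enable path mirrors it):

  if (cpumask_test_cpu(cpu, tr->tracing_cpumask) &&
      !cpumask_test_cpu(cpu, tracing_cpumask_new)) {
          atomic_inc(&per_cpu_ptr(tr->array_buffer.data, cpu)->disabled);
          ring_buffer_record_disable_cpu(tr->array_buffer.buffer, cpu);
          /* Keep the snapshot buffer in the same state so a later swap
           * cannot leave 'record_disabled' unbalanced. */
          ring_buffer_record_disable_cpu(tr->max_buffer.buffer, cpu);
  }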

Link: https://lkml.kernel.org/r/20230805033816.3284594-2-zhengyejian1@huawei.com

Cc: <mhiramat@kernel.org>
Cc: <vnagarnaik@google.com>
Cc: <shuah@kernel.org>
Fixes: 71babb2705 ("tracing: change CPU ring buffer state from tracing_cpumask")
Signed-off-by: Zheng Yejian <zhengyejian1@huawei.com>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
2023-08-16 15:12:42 -04:00
Paul E. McKenney
bc19e86e28 rcutorture: Stop right-shifting torture_random() return values
Now that torture_random() uses swahw32(), its callers no longer see
not-so-random low-order bits, as these are now swapped up into the upper
16 bits of the torture_random() function's return value.  This commit
therefore removes the right-shifting of torture_random() return values.
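
Illustrative before/after of a typical caller (simplified):

  /* Before: discard the not-so-random low-order bits. */
  idx = (torture_random(trsp) >> 8) % n;

  /* After: torture_random() already mixes them via swahw32(). */
  idx = torture_random(trsp) % n;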

Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
2023-08-14 15:01:08 -07:00
Paul E. McKenney
6cab60ceb1 torture: Stop right-shifting torture_random() return values
Now that torture_random() uses swahw32(), its callers no longer see
not-so-random low-order bits, as these are now swapped up into the upper
16 bits of the torture_random() function's return value.  This commit
therefore removes the right-shifting of torture_random() return values.

Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
2023-08-14 15:01:08 -07:00
Paul E. McKenney
10af43671e torture: Move stutter_wait() timeouts to hrtimers
In order to gain better race coverage, move the test start/stop
waits in stutter_wait() to torture_hrtimeout_jiffies().

Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
2023-08-14 15:01:08 -07:00
Paul E. McKenney
dea81dcfd3 torture: Move torture_shuffle() timeouts to hrtimers
In order to gain better race coverage, move the CPU-migration timed
waits in torture_shuffle() to torture_hrtimeout_jiffies().

Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
2023-08-14 15:01:08 -07:00
Paul E. McKenney
3f0c06e1cb torture: Move torture_onoff() timeouts to hrtimers
In order to gain better race coverage, move the CPU-hotplug-related
timed waits in torture_onoff() to torture_hrtimeout_jiffies().

Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
2023-08-14 15:01:08 -07:00
Paul E. McKenney
872948c665 torture: Make torture_hrtimeout_*() use TASK_IDLE
Given that it is expected that more code will use torture_hrtimeout_*(),
including for longer timeouts, make it use TASK_IDLE instead of
TASK_UNINTERRUPTIBLE.
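
A sketch of the change inside the timeout helper (simplified): TASK_IDLE
sleeps like TASK_UNINTERRUPTIBLE but does not contribute to the load average
or trigger hung-task warnings:

  set_current_state(TASK_IDLE);           /* was TASK_UNINTERRUPTIBLE */
  schedule_hrtimeout(&t, HRTIMER_MODE_REL);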

Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
2023-08-14 15:01:07 -07:00
Dietmar Eggemann
5d248bb39f torture: Add lock_torture writer_fifo module parameter
This commit adds a module parameter that causes the locktorture writer
to run at real-time priority.

To use it:
insmod /lib/modules/torture.ko random_shuffle=1
insmod /lib/modules/locktorture.ko torture_type=mutex_lock rt_boost=1 rt_boost_factor=50 nested_locks=3 writer_fifo=1
													^^^^^^^^^^^^^

A predecessor to this patch has been helpful to uncover issues with the
proxy-execution series.

[ paulmck: Remove locktorture-specific code from kernel/torture.c. ]

Cc: "Paul E. McKenney" <paulmck@kernel.org>
Cc: Josh Triplett <josh@joshtriplett.org>
Cc: Joel Fernandes <joel@joelfernandes.org>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Valentin Schneider <vschneid@redhat.com>
Cc: kernel-team@android.com
Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
[jstultz: Include header change to build, reword commit message]
Signed-off-by: John Stultz <jstultz@google.com>
Acked-by: Davidlohr Bueso <dave@stgolabs.net>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
2023-08-14 15:01:07 -07:00
Paul E. McKenney
67d5404d27 torture: Add a kthread-creation callback to _torture_create_kthread()
This commit adds a kthread-creation callback to the
_torture_create_kthread() function, which allows callers of a new
torture_create_kthread_cb() macro to specify a function to be invoked
after the kthread is created but before it is awakened for the first time.
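
A sketch of the pattern (simplified; the real macro threads this through the
existing torture bookkeeping):

  t = kthread_create(fn, arg, "%s", name);
  if (!IS_ERR(t)) {
          if (cbf)
                  cbf(t);         /* e.g. sched_set_fifo(t), before first run */
          wake_up_process(t);
  }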

Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Josh Triplett <josh@joshtriplett.org>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Valentin Schneider <vschneid@redhat.com>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: kernel-team@android.com
Reviewed-by: Joel Fernandes (Google) <joel@joelfernandes.org>
Acked-by: John Stultz <jstultz@google.com>
2023-08-14 15:00:37 -07:00
Paul E. McKenney
9d0cce2bc3 rcu-tasks: Fix boot-time RCU tasks debug-only deadlock
In kernels built with CONFIG_PROVE_RCU=y (for example, lockdep kernels),
the following sequence of events can occur:

o	rcu_init_tasks_generic() is invoked just before init is spawned.
	It invokes rcu_spawn_tasks_kthread() and friends.

o	rcu_spawn_tasks_kthread() invokes rcu_spawn_tasks_kthread_generic(),
	which uses kthread_run() to create the needed kthread.

o	Control returns to rcu_init_tasks_generic(), which, because this
	is a CONFIG_PROVE_RCU=y kernel, invokes the version of the
	rcu_tasks_initiate_self_tests() function that actually does
	something, including invoking synchronize_rcu_tasks(), which
	in turn invokes synchronize_rcu_tasks_generic().

o	synchronize_rcu_tasks_generic() sees that the ->kthread_ptr is
	still NULL, because the newly spawned kthread has not yet
	started.

o	The new kthread starts, preempting synchronize_rcu_tasks_generic()
	just after its check.  This kthread invokes rcu_tasks_one_gp(),
	which acquires ->tasks_gp_mutex, and, seeing no work, blocks
	in rcuwait_wait_event().  Note that this step requires either
	a preemptible kernel or a fault-injection-style sleep at the
	beginning of mutex_lock().

o	synchronize_rcu_tasks_generic() resumes and invokes rcu_tasks_one_gp().

o	rcu_tasks_one_gp() attempts to acquire ->tasks_gp_mutex, which
	is still held by the newly spawned kthread's rcu_tasks_one_gp()
	function.  Deadlock.

Because the only reason for ->tasks_gp_mutex is to handle pre-kthread
synchronous grace periods, this commit avoids this deadlock by having
rcu_tasks_one_gp() momentarily release ->tasks_gp_mutex while invoking
rcuwait_wait_event().  This allows the call to rcu_tasks_one_gp() from
synchronize_rcu_tasks_generic() to proceed.

Note that it is not necessary to release the mutex anywhere else in
rcu_tasks_one_gp() because rcuwait_wait_event() is the only function
that can block indefinitely.
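
A rough sketch of the resulting wait (simplified; names and surrounding
details differ in the actual change):

  /* Release the mutex across the indefinite wait so that an early boot-time
   * synchronous caller can acquire it and drive the grace period itself. */
  mutex_unlock(&rtp->tasks_gp_mutex);
  rcuwait_wait_event(&rtp->cbs_wait,
                     (needgpcb = rcu_tasks_need_gpcb(rtp)),
                     TASK_IDLE);
  mutex_lock(&rtp->tasks_gp_mutex);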

Reported-by: Guenter Roeck <linux@roeck-us.net>
Reported-by: Roy Hopkins <rhopkins@suse.de>
Reported-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Tested-by: Roy Hopkins <rhopkins@suse.de>
2023-08-14 14:58:25 -07:00
Peter Zijlstra
7170509cad sched: Simplify sched_core_cpu_{starting,deactivate}()
Use guards to reduce gotos and simplify control flow.
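
A sketch of the guard pattern from <linux/cleanup.h> that this series relies
on (illustrative lock, not the exact code being converted):

  /* Before: explicit unlock on every exit path, often via goto. */
  raw_spin_lock_irqsave(&p->pi_lock, flags);
  ...
  raw_spin_unlock_irqrestore(&p->pi_lock, flags);

  /* After: the guard drops the lock automatically when the scope ends. */
  guard(raw_spinlock_irqsave)(&p->pi_lock);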

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Valentin Schneider <vschneid@redhat.com>
Link: https://lore.kernel.org/r/20230801211812.371787909@infradead.org
2023-08-14 17:01:27 +02:00
Peter Zijlstra
b4e1fa1e14 sched: Simplify try_steal_cookie()
Use guards to reduce gotos and simplify control flow.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Valentin Schneider <vschneid@redhat.com>
Link: https://lore.kernel.org/r/20230801211812.304154828@infradead.org
2023-08-14 17:01:27 +02:00
Peter Zijlstra
6dafc713e3 sched: Simplify sched_tick_remote()
Use guards to reduce gotos and simplify control flow.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Valentin Schneider <vschneid@redhat.com>
Link: https://lore.kernel.org/r/20230801211812.236247952@infradead.org
2023-08-14 17:01:26 +02:00
Peter Zijlstra
4bdada79f3 sched: Simplify sched_exec()
Use guards to reduce gotos and simplify control flow.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Valentin Schneider <vschneid@redhat.com>
Link: https://lore.kernel.org/r/20230801211812.168490417@infradead.org
2023-08-14 17:01:26 +02:00
Peter Zijlstra
857d315f12 sched: Simplify ttwu()
Use guards to reduce gotos and simplify control flow.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Valentin Schneider <vschneid@redhat.com>
Link: https://lore.kernel.org/r/20230801211812.101069260@infradead.org
2023-08-14 17:01:25 +02:00
Peter Zijlstra
4eb054f92b sched: Simplify wake_up_if_idle()
Use guards to reduce gotos and simplify control flow.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Valentin Schneider <vschneid@redhat.com>
Link: https://lore.kernel.org/r/20230801211812.032678917@infradead.org
2023-08-14 17:01:25 +02:00
Peter Zijlstra
5bb76f1ddf sched: Simplify: migrate_swap_stop()
Use guards to reduce gotos and simplify control flow.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Valentin Schneider <vschneid@redhat.com>
Link: https://lore.kernel.org/r/20230801211811.964370836@infradead.org
2023-08-14 17:01:25 +02:00