Since commit 43a7206b0963 ("driver core: class: make class_register() take
a const *"), the driver core allows for struct class to be in read-only
memory, so move the vmlogrdr_class structure to be declared at build time,
placing it into read-only memory, instead of having to be dynamically
allocated at boot time.
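A minimal sketch of the resulting pattern (illustrative only, not copied from
the driver; the class name is assumed here):
  static const struct class vmlogrdr_class = {
          .name = "vmlogrdr",
  };
  static int __init vmlogrdr_init(void)
  {
          /* class_register() accepts a const struct class * since 43a7206b0963 */
          return class_register(&vmlogrdr_class);
  }
  static void __exit vmlogrdr_exit(void)
  {
          class_unregister(&vmlogrdr_class);
  }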
Cc: Greg Kroah-Hartman <[email protected]>
Suggested-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: "Ricardo B. Marliere" <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Heiko Carstens <[email protected]>
|
|
Since commit 43a7206b0963 ("driver core: class: make class_register() take
a const *"), the driver core allows for struct class to be in read-only
memory, so move the vmur_class structure to be declared at build time,
placing it into read-only memory, instead of having to be dynamically
allocated at boot time.
Cc: Greg Kroah-Hartman <[email protected]>
Suggested-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: "Ricardo B. Marliere" <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Heiko Carstens <[email protected]>
|
|
Since commit 43a7206b0963 ("driver core: class: make class_register() take
a const *"), the driver core allows for struct class to be in read-only
memory, so move the zcrypt_class structure to be declared at build time,
placing it into read-only memory, instead of having to be dynamically
allocated at boot time.
Cc: Greg Kroah-Hartman <[email protected]>
Suggested-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: "Ricardo B. Marliere" <[email protected]>
Acked-by: Harald Freudenberger <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Heiko Carstens <[email protected]>
|
|
Provide a very simple ARCH_HAS_DEBUG_VIRTUAL implementation.
For now errors are only reported for the following cases:
- Trying to translate a vmalloc or module address to a physical address
- Translating a virtual address that is supposed to be within ZONE_DMA into a
  physical address, where the resulting physical address is larger than 2 GiB
  (see the sketch below)
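A rough sketch of what such a check can look like; this is illustrative, and
the helper names may differ from the actual s390 implementation:
  unsigned long __phys_addr(unsigned long x, bool is_31bit)
  {
          /* vmalloc and module addresses must not be fed into __pa() */
          VIRTUAL_BUG_ON(is_vmalloc_or_module_addr((void *)x));
          x = __pa_nodebug(x);
          /* a 31-bit capable (ZONE_DMA) address must be below 2 GiB */
          VIRTUAL_BUG_ON(is_31bit && (x >> 31));
          return x;
  }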
Reviewed-by: Alexander Gordeev <[email protected]>
Signed-off-by: Heiko Carstens <[email protected]>
|
|
Use virt_to_dma64() and friends to properly convert virtual to physical and
physical to virtual addresses so that "make C=1" does not generate any
warnings anymore.
Reviewed-by: Eric Farman <[email protected]>
Signed-off-by: Heiko Carstens <[email protected]>
|
|
Use virt_to_dma32() and friends to properly convert virtual to physical and
physical to virtual addresses so that "make C=1" does not generate any
warnings anymore.
Signed-off-by: Heiko Carstens <[email protected]>
|
|
Use virt_to_dma32() and friends to properly convert virtual to physical and
physical to virtual addresses so that "make C=1" does not generate any
warnings anymore.
Signed-off-by: Heiko Carstens <[email protected]>
|
|
Use virt_to_dma32() and friends to properly convert virtual to physical and
physical to virtual addresses so that "make C=1" does not generate any
warnings anymore.
Signed-off-by: Heiko Carstens <[email protected]>
|
|
Use virt_to_dma64() and friends to properly convert virtual to physical and
physical to virtual addresses so that "make C=1" does not generate any
warnings anymore.
Signed-off-by: Heiko Carstens <[email protected]>
|
|
Use virt_to_dma64() and friends to properly convert virtual to physical and
physical to virtual addresses so that "make C=1" does not generate any
warnings anymore.
Reviewed-by: Steffen Maier <[email protected]>
Signed-off-by: Heiko Carstens <[email protected]>
|
|
Fix virtual vs physical address confusion and use new dma types and helper
functions to allow for type checking. This does not fix a bug since virtual
and physical address spaces are currently the same.
Tested-by: Jan Höppner <[email protected]>
Reviewed-by: Jan Höppner <[email protected]>
Signed-off-by: Heiko Carstens <[email protected]>
|
|
Use virt_to_dma32() and friends to properly convert virtual to physical and
physical to virtual addresses so that "make C=1" does not generate any
warnings anymore.
Signed-off-by: Heiko Carstens <[email protected]>
|
|
Use virt_to_dma32() and friends to properly convert virtual to physical and
physical to virtual addresses so that "make C=1" does not generate any
warnings anymore.
Signed-off-by: Heiko Carstens <[email protected]>
|
|
Use virt_to_dma32() and friends to properly convert virtual to physical and
physical to virtual addresses so that "make C=1" does not generate any
warnings anymore.
Signed-off-by: Heiko Carstens <[email protected]>
|
|
Use virt_to_dma64() and friends to properly convert virtual to physical and
physical to virtual addresses so that "make C=1" does not generate any
warnings anymore.
Signed-off-by: Heiko Carstens <[email protected]>
|
|
Use virt_to_dma32() and friends to properly convert virtual to physical and
physical to virtual addresses so that "make C=1" does not generate any
warnings anymore.
Signed-off-by: Heiko Carstens <[email protected]>
|
|
Fix virtual vs physical address confusion. This does not fix a bug since
virtual and physical address spaces are currently the same.
Signed-off-by: Heiko Carstens <[email protected]>
|
|
Only the last 12 bits of virtual / physical addresses are used when masking
with IDA_BLOCK_SIZE - 1. Given that the bits are the same regardless of
virtual or physical address, remove the virtual to physical address
conversion.
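Illustrative example of why the conversion is redundant (assuming a 4 KiB IDA
block, matching the 12 bits mentioned above):
  /* the low 12 bits of an address are unchanged by __pa() */
  unsigned long off_virt = (unsigned long)vaddr & (IDA_BLOCK_SIZE - 1);
  unsigned long off_phys = __pa(vaddr) & (IDA_BLOCK_SIZE - 1);
  /* off_virt == off_phys, so masking the virtual address directly is enough */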
Signed-off-by: Heiko Carstens <[email protected]>
|
|
Adjust coding style, partially refactor code, and use kcalloc()
instead of kmalloc() to allocate an idaw array.
Signed-off-by: Heiko Carstens <[email protected]>
|
|
Use virt_to_dma32() and friends to properly convert virtual to physical and
physical to virtual addresses so that "make C=1" does not generate any
warnings anymore.
Reviewed-by: Stefan Haberland <[email protected]>
Signed-off-by: Heiko Carstens <[email protected]>
|
|
Only the last 12 bits of virtual / physical addresses are used when masking
with IDA_BLOCK_SIZE - 1. Given that the bits are the same regardless of
virtual or physical address, remove the virtual to physical address
conversion.
Reviewed-by: Stefan Haberland <[email protected]>
Signed-off-by: Heiko Carstens <[email protected]>
|
|
Instead of converting virtual to physical addresses with the virt_to_dma*()
functions, use dma addresses as provided by the DMA API and only add offsets to
these addresses. This makes sure that address conversion is only done by
the DMA API.
Signed-off-by: Halil Pasic <[email protected]>
Reviewed-by: Eric Farman <[email protected]>
Signed-off-by: Heiko Carstens <[email protected]>
|
|
Change and use ccw_device_dma_zalloc() so it returns a virtual address like
before, which can be used to access data. However, also pass a new dma32_t
pointer type handle, which corresponds to the returned virtual address.
This pointer is used to directly pass/set the DMA handle as returned by the
DMA API.
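A hypothetical usage sketch of the changed interface; structure and variable
names here are made up for illustration:
  dma32_t dma_handle;
  void *cpu_addr;
  cpu_addr = ccw_device_dma_zalloc(cdev, sizeof(struct my_io_area), &dma_handle);
  if (!cpu_addr)
          return -ENOMEM;
  /* access the data through cpu_addr, program the channel with dma_handle */
  ccw->cda = dma_handle;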
Signed-off-by: Halil Pasic <[email protected]>
Reviewed-by: Eric Farman <[email protected]>
Signed-off-by: Heiko Carstens <[email protected]>
|
|
Fix virtual vs physical address confusion and use new dma types and helper
functions to allow for type checking. This does not fix a bug since virtual
and physical address spaces are currently the same.
Signed-off-by: Halil Pasic <[email protected]>
Reviewed-by: Eric Farman <[email protected]>
Signed-off-by: Heiko Carstens <[email protected]>
|
|
Change types of I/O structure members which contain physical addresses to
dma32_t and dma64_t bitwise types.
This allows making use of sparse (aka "make C=1") to find incorrect usage
of physical addresses.
Signed-off-by: Heiko Carstens <[email protected]>
|
|
Introduce dma32_t and dma64_t bitwise types, which are supposed to be used
for 31- and 64-bit DMA capable addresses. This allows the use of sparse (make
C=1) for type checking, so that incorrect usages can be easily found.
Also add a couple of helper functions which
- convert virtual to DMA addresses and vice versa
- allow for simple logical and arithmetic operations on DMA addresses
- convert DMA addresses to plain u32 and u64 values
All helper functions exist to avoid excessive casting in C code.
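A condensed sketch of what such bitwise types and helpers can look like (only a
subset of the helpers; names follow the description above):
  typedef u32 __bitwise dma32_t;
  typedef u64 __bitwise dma64_t;
  static inline dma64_t virt_to_dma64(void *ptr)
  {
          return (__force dma64_t)__pa(ptr);
  }
  static inline void *dma64_to_virt(dma64_t addr)
  {
          return __va((__force unsigned long)addr);
  }
  static inline u64 dma64_to_u64(dma64_t addr)
  {
          return (__force u64)addr;
  }
  /* sparse ("make C=1") now warns when a plain integer or pointer is mixed
   * up with a DMA-capable address */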
Signed-off-by: Halil Pasic <[email protected]>
Co-developed-by: Heiko Carstens <[email protected]>
Reviewed-by: Steffen Maier <[email protected]>
Signed-off-by: Heiko Carstens <[email protected]>
|
|
Fix virtual vs physical address confusion. This does not fix a bug
since virtual and physical address spaces are currently the same.
Reviewed-by: Eric Farman <[email protected]>
Signed-off-by: Heiko Carstens <[email protected]>
|
|
Fix virtual vs physical address confusion. This does not fix a bug
since virtual and physical address spaces are currently the same.
Signed-off-by: Heiko Carstens <[email protected]>
|
|
Fix virtual vs physical address confusion. This does not fix a bug
since virtual and physical address spaces are currently the same.
Reviewed-by: Stefan Haberland <[email protected]>
Signed-off-by: Heiko Carstens <[email protected]>
|
|
Fix virtual vs physical address confusion. This does not fix a bug
since virtual and physical address spaces are currently the same.
dax_direct_access() should receive a virtual kernel address in kaddr.
Reviewed-by: Heiko Carstens <[email protected]>
Acked-by: Alexander Gordeev <[email protected]>
Signed-off-by: Gerald Schaefer <[email protected]>
Signed-off-by: Heiko Carstens <[email protected]>
|
|
Fix IUCV_IPBUFLST-type buffers virtual vs physical address confusion.
This does not fix a bug since virtual and physical address spaces are
currently the same.
Signed-off-by: Alexander Gordeev <[email protected]>
Reviewed-by: Alexandra Winter <[email protected]>
Signed-off-by: Heiko Carstens <[email protected]>
|
|
Tests with hot-plugging crypto cards on KVM guests with a debug
kernel build revealed a use-after-free for the load field of
struct zcrypt_card. The reason was incorrect reference
handling of the zcrypt card object, which could lead to a free
of the zcrypt card object while it was still in use.
This is an example of the slab message:
kernel: 0x00000000885a7512-0x00000000885a7513 @offset=1298. First byte 0x68 instead of 0x6b
kernel: Allocated in zcrypt_card_alloc+0x36/0x70 [zcrypt] age=18046 cpu=3 pid=43
kernel: kmalloc_trace+0x3f2/0x470
kernel: zcrypt_card_alloc+0x36/0x70 [zcrypt]
kernel: zcrypt_cex4_card_probe+0x26/0x380 [zcrypt_cex4]
kernel: ap_device_probe+0x15c/0x290
kernel: really_probe+0xd2/0x468
kernel: driver_probe_device+0x40/0xf0
kernel: __device_attach_driver+0xc0/0x140
kernel: bus_for_each_drv+0x8c/0xd0
kernel: __device_attach+0x114/0x198
kernel: bus_probe_device+0xb4/0xc8
kernel: device_add+0x4d2/0x6e0
kernel: ap_scan_adapter+0x3d0/0x7c0
kernel: ap_scan_bus+0x5a/0x3b0
kernel: ap_scan_bus_wq_callback+0x40/0x60
kernel: process_one_work+0x26e/0x620
kernel: worker_thread+0x21c/0x440
kernel: Freed in zcrypt_card_put+0x54/0x80 [zcrypt] age=9024 cpu=3 pid=43
kernel: kfree+0x37e/0x418
kernel: zcrypt_card_put+0x54/0x80 [zcrypt]
kernel: ap_device_remove+0x4c/0xe0
kernel: device_release_driver_internal+0x1c4/0x270
kernel: bus_remove_device+0x100/0x188
kernel: device_del+0x164/0x3c0
kernel: device_unregister+0x30/0x90
kernel: ap_scan_adapter+0xc8/0x7c0
kernel: ap_scan_bus+0x5a/0x3b0
kernel: ap_scan_bus_wq_callback+0x40/0x60
kernel: process_one_work+0x26e/0x620
kernel: worker_thread+0x21c/0x440
kernel: kthread+0x150/0x168
kernel: __ret_from_fork+0x3c/0x58
kernel: ret_from_fork+0xa/0x30
kernel: Slab 0x00000372022169c0 objects=20 used=18 fp=0x00000000885a7c88 flags=0x3ffff00000000a00(workingset|slab|node=0|zone=1|lastcpupid=0x1ffff)
kernel: Object 0x00000000885a74b8 @offset=1208 fp=0x00000000885a7c88
kernel: Redzone 00000000885a74b0: bb bb bb bb bb bb bb bb ........
kernel: Object 00000000885a74b8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b kkkkkkkkkkkkkkkk
kernel: Object 00000000885a74c8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b kkkkkkkkkkkkkkkk
kernel: Object 00000000885a74d8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b kkkkkkkkkkkkkkkk
kernel: Object 00000000885a74e8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b kkkkkkkkkkkkkkkk
kernel: Object 00000000885a74f8: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b kkkkkkkkkkkkkkkk
kernel: Object 00000000885a7508: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 68 4b 6b 6b 6b a5 kkkkkkkkkkhKkkk.
kernel: Redzone 00000000885a7518: bb bb bb bb bb bb bb bb ........
kernel: Padding 00000000885a756c: 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a ZZZZZZZZZZZZ
kernel: CPU: 0 PID: 387 Comm: systemd-udevd Not tainted 6.8.0-HF #2
kernel: Hardware name: IBM 3931 A01 704 (KVM/Linux)
kernel: Call Trace:
kernel: [<00000000ca5ab5b8>] dump_stack_lvl+0x90/0x120
kernel: [<00000000c99d78bc>] check_bytes_and_report+0x114/0x140
kernel: [<00000000c99d53cc>] check_object+0x334/0x3f8
kernel: [<00000000c99d820c>] alloc_debug_processing+0xc4/0x1f8
kernel: [<00000000c99d852e>] get_partial_node.part.0+0x1ee/0x3e0
kernel: [<00000000c99d94ec>] ___slab_alloc+0xaf4/0x13c8
kernel: [<00000000c99d9e38>] __slab_alloc.constprop.0+0x78/0xb8
kernel: [<00000000c99dc8dc>] __kmalloc+0x434/0x590
kernel: [<00000000c9b4c0ce>] ext4_htree_store_dirent+0x4e/0x1c0
kernel: [<00000000c9b908a2>] htree_dirblock_to_tree+0x17a/0x3f0
kernel: [<00000000c9b919dc>] ext4_htree_fill_tree+0x134/0x400
kernel: [<00000000c9b4b3d0>] ext4_dx_readdir+0x160/0x2f0
kernel: [<00000000c9b4bedc>] ext4_readdir+0x5f4/0x760
kernel: [<00000000c9a7efc4>] iterate_dir+0xb4/0x280
kernel: [<00000000c9a7f1ea>] __do_sys_getdents64+0x5a/0x120
kernel: [<00000000ca5d6946>] __do_syscall+0x256/0x310
kernel: [<00000000ca5eea10>] system_call+0x70/0x98
kernel: INFO: lockdep is turned off.
kernel: FIX kmalloc-96: Restoring Poison 0x00000000885a7512-0x00000000885a7513=0x6b
kernel: FIX kmalloc-96: Marking all objects used
The fix is simple: before using the queue, not only the queue object
but also the card object needs its reference count increased
with a call to zcrypt_card_get(). Similarly, after use of the queue,
not only the queue's but also the card object's reference count is
decreased with zcrypt_card_put().
Signed-off-by: Harald Freudenberger <[email protected]>
Reviewed-by: Holger Dengler <[email protected]>
Cc: [email protected]
Signed-off-by: Heiko Carstens <[email protected]>
|
|
The current average steal timer calculation produces volatile and inflated
values. The only user of this value so far is KVM, which uses it to
decide whether or not to yield a vCPU that is seeing steal time:
KVM compares the average steal timer to a threshold and, if the threshold
is exceeded, does not allow CPU polling and yields the vCPU to the host;
otherwise it keeps the CPU by polling.
Since KVM's steal time threshold is very low by default (10%), it is most
likely not affected much by the bloated average steal timer values,
because the operating region is pretty small. However, there might be
new users in the future who rely on this number. Fix the average
steal timer calculation by changing the formula from:
avg_steal_timer = avg_steal_timer / 2 + steal_timer;
to the following:
avg_steal_timer = (avg_steal_timer + steal_timer) / 2;
This ensures that avg_steal_timer is actually a naive average of steal
timer values. It now closely follows steal timer values but of course
in a smoother manner.
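A small standalone illustration of the difference (not kernel code): with a
constant steal value the old formula converges to twice that value, while the
new one converges to the value itself.
  #include <stdio.h>
  int main(void)
  {
          double steal = 100.0, old_avg = 0.0, new_avg = 0.0;
          for (int i = 0; i < 20; i++) {
                  old_avg = old_avg / 2 + steal;   /* fixed point: 2 * steal */
                  new_avg = (new_avg + steal) / 2; /* fixed point: steal */
          }
          printf("old=%.1f new=%.1f\n", old_avg, new_avg); /* ~200.0 vs ~100.0 */
          return 0;
  }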
Fixes: 152e9b8676c6 ("s390/vtime: steal time exponential moving average")
Signed-off-by: Mete Durlu <[email protected]>
Acked-by: Heiko Carstens <[email protected]>
Acked-by: Christian Borntraeger <[email protected]>
Signed-off-by: Heiko Carstens <[email protected]>
|
|
Since commit cd4386a931b63 ("s390/cpcmd,vmcp: avoid GFP_DMA
allocations") the Diagnose Code 8 response buffer does not have to be
below 2 GB.
Reviewed-by: Heiko Carstens <[email protected]>
Signed-off-by: Alexander Gordeev <[email protected]>
Signed-off-by: Heiko Carstens <[email protected]>
|
|
The runtime_pm handling seems to have been loosely inspired by the
cs32l41 driver, but in this case the get_noresume/put sequence is not
required.
Signed-off-by: Pierre-Louis Bossart <[email protected]>
Message-ID: <[email protected]>
Signed-off-by: Takashi Iwai <[email protected]>
|
|
Some HP laptops have received revisions that altered their board IDs
and therefore the current patches/quirks do not apply to them.
Specifically, for my Probook 440 G8, I have a board ID of 8a74.
It is necessary to add a line for that specific model.
Signed-off-by: Valentine Altair <[email protected]>
Cc: <[email protected]>
Message-ID: <kOqXRBcxkKt6m5kciSDCkGqMORZi_HB3ZVPTX5sD3W1pKxt83Pf-WiQ1V1pgKKI8pYr4oGvsujt3vk2zsCE-DDtnUADFG6NGBlS5N3U4xgA=@proton.me>
Signed-off-by: Takashi Iwai <[email protected]>
|
|
Pick up a parsing fix for the CDAT SSLBIS structure for v6.9.
|
|
Pick up support for injecting errors via ACPI EINJ into the CXL protocol
for v6.9.
|
|
Pick up support for CXL "HMEM reporting" for v6.9, i.e. build an HMAT
from CXL CDAT and PCIe switch information.
|
|
There exist card implementations with a CDAT table that uses a fixed-size
buffer, but whose filled-in entries do not span the whole table
length. In that case the last entry in the CDAT table may not mark the
end of the CDAT table buffer specified by the length field in the CDAT
header; the data can be shorter, with trailing unused (zeroed) bytes. The
actual table length is determined while reading all CDAT entries of
the table with DOE.
If the table is greater than expected (containing zero'ed trailing
data), the CDAT parser fails with:
[ 48.691717] Malformed DSMAS table length: (24:0)
[ 48.702084] [CDAT:0x00] Invalid zero length
[ 48.711460] cxl_port endpoint1: Failed to parse CDAT: -22
In addition, a check of the table buffer length is missing to prevent
an out-of-bounds access when parsing the CDAT table.
Harden the code against devices returning a borked table. Fix this by
providing an optional buffer length argument to
acpi_parse_entries_array() that can be used by cdat_table_parse() to
propagate the buffer size down to its users and check the buffer
length. This also prevents the possible out-of-bounds access mentioned
above.
Add a check to warn about a malformed CDAT table length.
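A hedged sketch of the kind of bounds check this enables; the function and
type names below are made up for illustration and are not the actual
ACPI/CDAT code:
  static int parse_table_entries(void *table, size_t buf_len)
  {
          size_t offset = sizeof(struct table_header);     /* hypothetical header */
          while (offset < buf_len) {
                  struct entry_header *e = table + offset; /* hypothetical entry */
                  if (offset + sizeof(*e) > buf_len ||
                      e->length < sizeof(*e) ||
                      offset + e->length > buf_len)
                          return -EINVAL;  /* truncated or malformed entry */
                  offset += e->length;
          }
          return 0;
  }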
Cc: Rafael J. Wysocki <[email protected]>
Cc: Len Brown <[email protected]>
Reviewed-by: Dave Jiang <[email protected]>
Signed-off-by: Robert Richter <[email protected]>
Reviewed-by: Jonathan Cameron <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Dan Williams <[email protected]>
|
|
Reading the CDAT table using DOE requires a Table Access Response
Header in addition to the CDAT entry. In the current implementation this
has caused sizeof(__le32) offsets into the actual buffers, which led
to hardly readable code and even bugs. E.g., see the fix of devm_kfree()
in read_cdat_data():
commit c65efe3685f5 ("cxl/cdat: Free correct buffer on checksum error")
Rework code to avoid calculations with sizeof(__le32). Introduce
struct cdat_doe_rsp for this which contains the Table Access Response
Header and a variable payload size for various data structures
afterwards to access the CDAT table and its CDAT Data Structures
without recalculating buffer offsets.
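A minimal sketch of such a structure, assuming the layout described above (the
exact field names may differ):
  struct cdat_doe_rsp {
          __le32 doe_header; /* Table Access Response Header */
          u8 data[];         /* CDAT header or CDAT Data Structure */
  } __packed;
  /* callers can address rsp->data directly instead of adding sizeof(__le32)
   * to a raw buffer pointer */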
Cc: Lukas Wunner <[email protected]>
Cc: Fan Ni <[email protected]>
Reviewed-by: Dave Jiang <[email protected]>
Signed-off-by: Robert Richter <[email protected]>
Reviewed-by: Jonathan Cameron <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Dan Williams <[email protected]>
|
|
Trivial variable rename for the DOE mailbox handle from cdat_doe to
doe_mb. The variable name cdat_doe is too ambiguous; use doe_mb, which
is commonly used for the mailbox.
Signed-off-by: Robert Richter <[email protected]>
Reviewed-by: Dave Jiang <[email protected]>
Reviewed-by: Jonathan Cameron <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Dan Williams <[email protected]>
|
|
The 'entry' pointer in cdat_sslbis_handler() is set to header +
sizeof(common header). However, the math missed the addition of the SSLBIS
main header. It should be header + sizeof(common header) + sizeof(*sslbis).
Use a defined struct for all the SSLBIS parts in order to avoid pointer
math errors.
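Sketch of the defined-struct approach; the field names are illustrative, based
on the existing ACPI CDAT structure definitions:
  struct cdat_sslbis_table {
          struct acpi_cdat_header header;        /* common CDAT structure header */
          struct acpi_cdat_sslbis sslbis_header; /* SSLBIS main header */
          struct acpi_cdat_sslbe entries[];      /* latency/bandwidth entries */
  } __packed;
  /* entries[] now starts at the correct offset by construction,
   * without manual pointer math */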
The bug causes incorrect parsing of the SSLBIS table and introduces incorrect
performance values into the access_coordinates during the CXL access_coordinate
calculation path if there are CXL switches present in the topology.
The issue was found while testing new code that adds additional
checks for invalid CDAT values during CXL access_coordinate calculation. The
testing was done on qemu with a CXL topology including a CXL switch.
Fixes: 80aa780dda20 ("cxl: Add callback to parse the SSLBIS subtable from CDAT")
Signed-off-by: Dave Jiang <[email protected]>
Reviewed-by: Jonathan Cameron <[email protected]>
Reviewed-by: Fan Ni <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Dan Williams <[email protected]>
|
|
Export CXL helper functions in einj-cxl.c for getting/injecting
available CXL protocol error types via debugfs under kernel/debug/cxl.
The kernel/debug/cxl/einj_types file will print the available CXL
protocol errors in the same format as the available_error_types
file provided by the einj module. The
kernel/debug/cxl/$dport_dev/einj_inject file is functionally the same
as the error_type and error_inject files provided by the EINJ module,
i.e.: writing an error type into $dport_dev/einj_inject will inject
said error type into the CXL dport represented by $dport_dev.
Reviewed-by: Jonathan Cameron <[email protected]>
Signed-off-by: Ben Cheatham <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Dan Williams <[email protected]>
|
|
Update the EINJ kernel documentation to describe how to inject CXL protocol
error types, how to build the kernel to include CXL error types, and give an
example injection.
Reviewed-by: Jonathan Cameron <[email protected]>
Signed-off-by: Ben Cheatham <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Dan Williams <[email protected]>
|
|
Move CXL protocol error types from einj.c (now einj-core.c) to einj-cxl.c.
einj-cxl.c implements the necessary handling for CXL protocol error
injection and exposes an API for the CXL core to use said functionality,
while also allowing the EINJ module to be built without CXL support.
Because CXL error types targeting CXL 1.0/1.1 ports require special
handling, only allow them to be injected through the new cxl debugfs
interface (next commit) and return an error when attempting to inject
through the legacy interface.
Reviewed-by: Jonathan Cameron <[email protected]>
Signed-off-by: Ben Cheatham <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Dan Williams <[email protected]>
|
|
Change the EINJ module to install a platform device/driver on module
init and move the module init() and exit() functions to driver probe and
remove. This change allows the EINJ module to load regardless of whether
setting up EINJ succeeds, which allows dependent modules to still load
(i.e. the CXL core).
Since EINJ may no longer be initialized when the module loads, any
functions that are called from dependent/external modules should
safeguard against the case where EINJ did not initialize.
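A hedged sketch of such a safeguard; the flag and function names below are
illustrative, not necessarily the actual einj code:
  static bool einj_initialized __ro_after_init;
  /* called from the CXL core; must be safe even if EINJ never probed */
  int einj_cxl_inject_error(u32 rcrb, u64 type)
  {
          if (!einj_initialized)
                  return -ENXIO;
          /* ... perform the actual injection ... */
          return 0;
  }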
Reviewed-by: Jonathan Cameron <[email protected]>
Reviewed-by: Dan Williams <[email protected]>
Signed-off-by: Ben Cheatham <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Dan Williams <[email protected]>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/printk/linux
Pull printk updates from Petr Mladek:
"Improve the behavior during panic. The issues were found when testing
the ongoing changes introducing atomic consoles and printk kthreads:
- pr_flush() has to wait for the last reserved record instead of the
last finalized one. Note that records are finalized in random order
when generated by more CPUs in parallel.
- Ignore non-finalized records during panic(). Messages printed on
panic-CPU are always finalized. Messages printed by other CPUs
might never be finalized when the CPUs get stopped.
- Block new printk() calls on non-panic CPUs completely. Backtraces
are printed before entering panic mode. Later messages would
just mess up the information printed by the panic CPU.
- Do not take console_lock in console_flush_on_panic() at all. The
original code did try_lock()/console_unlock(). The unlock part
might cause a deadlock when panic() happened in scheduler code.
- Fix conversion of 64-bit sequence number for 32-bit atomic
operations"
* tag 'printk-for-6.9' of git://git.kernel.org/pub/scm/linux/kernel/git/printk/linux:
dump_stack: Do not get cpu_sync for panic CPU
panic: Flush kernel log buffer at the end
printk: Avoid non-panic CPUs writing to ringbuffer
printk: Disable passing console lock owner completely during panic()
printk: ringbuffer: Skip non-finalized records in panic
printk: Wait for all reserved records with pr_flush()
printk: ringbuffer: Cleanup reader terminology
printk: Add this_cpu_in_panic()
printk: For @suppress_panic_printk check for other CPU in panic
printk: ringbuffer: Clarify special lpos values
printk: ringbuffer: Do not skip non-finalized records with prb_next_seq()
printk: Use prb_first_seq() as base for 32bit seq macros
printk: Adjust mapping for 32bit seq macros
printk: nbcon: Relocate 32bit seq macros
|
|
Yes, yes, I know the slab people were planning on going slow and letting
every subsystem fight this thing on their own. But let's just rip off
the band-aid and get it over and done with. I don't want to see a
number of unnecessary pull requests just to get rid of a flag that no
longer has any meaning.
This was mainly done with a couple of 'sed' scripts and then some manual
cleanup of the end result.
Link: https://lore.kernel.org/all/CAHk-=wji0u+OOtmAOD-5JV3SXcRJF___k_+8XNKmak0yd5vW1Q@mail.gmail.com/
Signed-off-by: Linus Torvalds <[email protected]>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/vbabka/slab
Pull slab updates from Vlastimil Babka:
- Freelist loading optimization (Chengming Zhou)
When the per-cpu slab is depleted and a new one loaded from the cpu
partial list, optimize the loading to avoid an irq enable/disable
cycle. This results in a 3.5% performance improvement on the "perf
bench sched messaging" test.
- Kernel boot parameters cleanup after SLAB removal (Xiongwei Song)
Due to the two different main slab implementations we've had boot
parameters prefixed with either slab_ or slub_, with some later becoming
aliases as both implementations gained the same functionality (e.g.
slab_nomerge vs slub_nomerge). In order to eventually get rid of the
implementation-specific names, the canonical and documented
parameters are now all prefixed slab_ and the slub_ variants become
deprecated but still working aliases.
- SLAB_ kmem_cache creation flags cleanup (Vlastimil Babka)
The flags had hardcoded #define values which became tedious and
error-prone when adding new ones. Assign the values via an enum that
takes care of providing unique bit numbers. Also deprecate
SLAB_MEM_SPREAD which was only used by SLAB, so it's a no-op since
SLAB removal. Assign it an explicit zero value. The removals of the
flag usage are handled independently in the respective subsystems,
with a final removal of any leftover usage planned for the next
release.
- Misc cleanups and fixes (Chengming Zhou, Xiaolei Wang, Zheng Yejian)
Includes removal of unused code or function parameters and a fix of a
memleak.
* tag 'slab-for-6.9' of git://git.kernel.org/pub/scm/linux/kernel/git/vbabka/slab:
slab: remove PARTIAL_NODE slab_state
mm, slab: remove memcg_from_slab_obj()
mm, slab: remove the corner case of inc_slabs_node()
mm/slab: Fix a kmemleak in kmem_cache_destroy()
mm, slab, kasan: replace kasan_never_merge() with SLAB_NO_MERGE
mm, slab: use an enum to define SLAB_ cache creation flags
mm, slab: deprecate SLAB_MEM_SPREAD flag
mm, slab: fix the comment of cpu partial list
mm, slab: remove unused object_size parameter in kmem_cache_flags()
mm/slub: remove parameter 'flags' in create_kmalloc_caches()
mm/slub: remove unused parameter in next_freelist_entry()
mm/slub: remove full list manipulation for non-debug slab
mm/slub: directly load freelist from cpu partial slab in the likely case
mm/slub: make the description of slab_min_objects helpful in doc
mm/slub: replace slub_$params with slab_$params in slub.rst
mm/slub: unify all sl[au]b parameters with "slab_$param"
Documentation: kernel-parameters: remove noaliencache
|