|
Two series lived in parallel for some time, which led to this situation:
- The nvmem-layout container is used for dynamic layouts
- We now expect fixed layouts to also use the nvmem-layout container, but
this does not require any additional driver; the support is built into
the nvmem core.
Ensure we don't refuse to probe for the wrong reasons.
Fixes: 27f699e578b1 ("nvmem: core: add support for fixed cells *layout*")
Cc: [email protected]
Reported-by: Luca Ceresoli <[email protected]>
Signed-off-by: Miquel Raynal <[email protected]>
Tested-by: Rafał Miłecki <[email protected]>
Tested-by: Luca Ceresoli <[email protected]>
Reviewed-by: Luca Ceresoli <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Greg Kroah-Hartman <[email protected]>
|
|
Add support for the Intashield IX-500/IX-550, UC-146/UC-157, PX-146/PX-157,
PX-203 and PX-475 (LPT port).
Cc: [email protected]
Signed-off-by: Cameron Williams <[email protected]>
Acked-by: Sudip Mukherjee <[email protected]>
Link: https://lore.kernel.org/r/AS4PR02MB790389C130410BD864C8DCC9C4A6A@AS4PR02MB7903.eurprd02.prod.outlook.com
Signed-off-by: Greg Kroah-Hartman <[email protected]>
|
|
Granite Rapids-D has an additional UART that is enumerated via ACPI.
Add ACPI ID for it.
Signed-off-by: Andy Shevchenko <[email protected]>
Cc: stable <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Greg Kroah-Hartman <[email protected]>
|
|
The console is immediately assigned to the ma35d1 port without
checking its index. This oversight can lead to out-of-bounds
errors when the index falls outside the valid '0' to
MA35_UART_NR range. Such a scenario triggers an error like the
following:
UBSAN: array-index-out-of-bounds in drivers/tty/serial/ma35d1_serial.c:555:51
index -1 is out of range for type 'uart_ma35d1_port [17]'
Check the index before using it and bail out with a warning.
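As a minimal sketch (the array and function names are taken loosely from the
driver and may not match the source verbatim), the guard looks like:

    /* Hedged sketch: reject an out-of-range console index before it is
     * used to index the uart_ma35d1_port array. */
    static int ma35d1serial_console_setup(struct console *co, char *options)
    {
        struct uart_ma35d1_port *p;

        if (co->index < 0 || co->index >= MA35_UART_NR) {
            pr_warn("ma35d1: invalid console index %d\n", co->index);
            return -ENODEV;
        }
        p = &ma35d1serial_ports[co->index];
        /* ... parse options and configure the port ... */
        return 0;
    }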
Fixes: 930cbf92db01 ("tty: serial: Add Nuvoton ma35d1 serial driver support")
Signed-off-by: Andi Shyti <[email protected]>
Cc: Jacky Huang <[email protected]>
Cc: <[email protected]> # v6.5+
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Greg Kroah-Hartman <[email protected]>
|
|
Commit 89ff3dfac604 ("usb: gadget: f_hid: fix f_hidg lifetime vs
cdev") introduced a bug that leads to HID device corruption after a
replug operation.
Reverse the device-managed memory allocation for the report descriptor
to fix the issue.
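In sketch form (field and function names are paraphrased from f_hid, not
verbatim), the change moves the descriptor out of the devm lifetime:

    /* Before: tied to the cdev's devm lifetime, freed too early on replug. */
    hidg->report_desc = devm_kmemdup(&hidg->dev, opts->report_desc,
                                     opts->report_desc_length, GFP_KERNEL);

    /* After: plain allocation, freed explicitly in the teardown path. */
    hidg->report_desc = kmemdup(opts->report_desc,
                                opts->report_desc_length, GFP_KERNEL);

    /* ...and in the function's free/teardown path: */
    kfree(hidg->report_desc);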
Tested:
This change was tested on the AMD EthanolX CRB server with the BMC
based on the OpenBMC distribution. The BMC provides KVM functionality
via the USB gadget device:
- before: KVM page refresh results in a broken USB device,
- after: KVM page refresh works without any issues.
Fixes: 89ff3dfac604 ("usb: gadget: f_hid: fix f_hidg lifetime vs cdev")
Cc: [email protected]
Signed-off-by: Konstantin Aladyshev <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Greg Kroah-Hartman <[email protected]>
|
|
|
|
I conducted real-time testing and observed that
madvise_cold_or_pageout_pte_range() causes significant latency under
memory pressure, which can be effectively reduced by adding cond_resched()
within the loop.
I tested on the LicheePi 4A board using cyclictest for latency testing and
Ftrace for latency tracing. The board uses TH1520 processor and has a
memory size of 8GB. The kernel version is 6.5.0 with the PREEMPT_RT patch
applied.
The script I tested is as follows:
echo wakeup_rt > /sys/kernel/tracing/current_tracer
echo 1 > /sys/kernel/tracing/tracing_on
echo 0 > /sys/kernel/tracing/tracing_max_latency
stress-ng --vm 8 --vm-bytes 2G &
cyclictest --mlockall --smp --priority=99 --distance=0 --duration=30m
echo 0 > /sys/kernel/tracing/tracing_on
cat /sys/kernel/tracing/trace
The tracing results before modification are as follows:
# tracer: wakeup_rt
#
# wakeup_rt latency trace v1.1.5 on 6.5.0-rt6-r1208-00003-g999d221864bf
# --------------------------------------------------------------------
# latency: 2552 us, #6/6, CPU#3 | (M:preempt_rt VP:0, KP:0, SP:0 HP:0 #P:4)
# -----------------
# | task: cyclictest-196 (uid:0 nice:0 policy:1 rt_prio:99)
# -----------------
#
# _--------=> CPU#
# / _-------=> irqs-off/BH-disabled
# | / _------=> need-resched
# || / _-----=> need-resched-lazy
# ||| / _----=> hardirq/softirq
# |||| / _---=> preempt-depth
# ||||| / _--=> preempt-lazy-depth
# |||||| / _-=> migrate-disable
# ||||||| / delay
# cmd pid |||||||| time | caller
# \ / |||||||| \ | /
stress-n-206 3dn.h512 2us : 206:120:R + [003] 196: 0:R cyclictest
stress-n-206 3dn.h512 7us : <stack trace>
=> __ftrace_trace_stack
=> __trace_stack
=> probe_wakeup
=> ttwu_do_activate
=> try_to_wake_up
=> wake_up_process
=> hrtimer_wakeup
=> __hrtimer_run_queues
=> hrtimer_interrupt
=> riscv_timer_interrupt
=> handle_percpu_devid_irq
=> generic_handle_domain_irq
=> riscv_intc_irq
=> handle_riscv_irq
=> do_irq
stress-n-206 3dn.h512 9us#: 0
stress-n-206 3d...3.. 2544us : __schedule
stress-n-206 3d...3.. 2545us : 206:120:R ==> [003] 196: 0:R cyclictest
stress-n-206 3d...3.. 2551us : <stack trace>
=> __ftrace_trace_stack
=> __trace_stack
=> probe_wakeup_sched_switch
=> __schedule
=> preempt_schedule
=> migrate_enable
=> rt_spin_unlock
=> madvise_cold_or_pageout_pte_range
=> walk_pgd_range
=> __walk_page_range
=> walk_page_range
=> madvise_pageout
=> madvise_vma_behavior
=> do_madvise
=> sys_madvise
=> do_trap_ecall_u
=> ret_from_exception
The tracing results after modification are as follows:
# tracer: wakeup_rt
#
# wakeup_rt latency trace v1.1.5 on 6.5.0-rt6-r1208-00004-gca3876fc69a6-dirty
# --------------------------------------------------------------------
# latency: 1689 us, #6/6, CPU#0 | (M:preempt_rt VP:0, KP:0, SP:0 HP:0 #P:4)
# -----------------
# | task: cyclictest-217 (uid:0 nice:0 policy:1 rt_prio:99)
# -----------------
#
# _--------=> CPU#
# / _-------=> irqs-off/BH-disabled
# | / _------=> need-resched
# || / _-----=> need-resched-lazy
# ||| / _----=> hardirq/softirq
# |||| / _---=> preempt-depth
# ||||| / _--=> preempt-lazy-depth
# |||||| / _-=> migrate-disable
# ||||||| / delay
# cmd pid |||||||| time | caller
# \ / |||||||| \ | /
stress-n-232 0dn.h413 1us+: 232:120:R + [000] 217: 0:R cyclictest
stress-n-232 0dn.h413 12us : <stack trace>
=> __ftrace_trace_stack
=> __trace_stack
=> probe_wakeup
=> ttwu_do_activate
=> try_to_wake_up
=> wake_up_process
=> hrtimer_wakeup
=> __hrtimer_run_queues
=> hrtimer_interrupt
=> riscv_timer_interrupt
=> handle_percpu_devid_irq
=> generic_handle_domain_irq
=> riscv_intc_irq
=> handle_riscv_irq
=> do_irq
stress-n-232 0dn.h413 19us#: 0
stress-n-232 0d...3.. 1671us : __schedule
stress-n-232 0d...3.. 1676us+: 232:120:R ==> [000] 217: 0:R cyclictest
stress-n-232 0d...3.. 1687us : <stack trace>
=> __ftrace_trace_stack
=> __trace_stack
=> probe_wakeup_sched_switch
=> __schedule
=> preempt_schedule
=> migrate_enable
=> free_unref_page_list
=> release_pages
=> free_pages_and_swap_cache
=> tlb_batch_pages_flush
=> tlb_flush_mmu
=> unmap_page_range
=> unmap_vmas
=> unmap_region
=> do_vmi_align_munmap.constprop.0
=> do_vmi_munmap
=> __vm_munmap
=> sys_munmap
=> do_trap_ecall_u
=> ret_from_exception
After the modification, the cause of maximum latency is no longer
madvise_cold_or_pageout_pte_range(), so this modification can reduce the
latency caused by madvise_cold_or_pageout_pte_range().
Currently the madvise_cold_or_pageout_pte_range() function exhibits
significant latency under memory pressure, which can be effectively
reduced by adding cond_resched() within the loop.
When the batch_count reaches SWAP_CLUSTER_MAX, we reschedule
the task to ensure fairness and avoid long lock holding times.
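A condensed sketch of the batching (the lock juggling of the real pte walk
is simplified):

    /* Hedged sketch: inside the pte loop of
     * madvise_cold_or_pageout_pte_range(), give up the pte lock and
     * reschedule every SWAP_CLUSTER_MAX entries. */
    if (++batch_count == SWAP_CLUSTER_MAX) {
        batch_count = 0;
        if (need_resched()) {
            arch_leave_lazy_mmu_mode();
            pte_unmap_unlock(start_pte, ptl);  /* drop the pte lock */
            cond_resched();                    /* let the RT task run */
            goto restart;                      /* re-take locks, resume walk */
        }
    }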
Link: https://lkml.kernel.org/r/85363861af65fac66c7a98c251906afc0d9c8098.1695291046.git.wangjiexun@tinylab.org
Signed-off-by: Jiexun Wang <[email protected]>
Cc: Zhangjin Wu <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
If nilfs2 reads a disk image with corrupted segment usage metadata, and
its segment usage information is marked as an error for the segment at the
write location, nilfs_sufile_set_segment_usage() can trigger WARN_ONs
during log writing.
Segments newly allocated for writing with nilfs_sufile_alloc() will not
have this error flag set, but this unexpected situation will occur if the
segment indexed by either nilfs->ns_segnum or nilfs->ns_nextnum (active
segment) was marked in error.
Fix this issue by inserting a sanity check to treat it as a file system
corruption.
Since error returns are not allowed during the execution phase where
nilfs_sufile_set_segment_usage() is used, this inserts the sanity check
into nilfs_sufile_mark_dirty() which pre-reads the buffer containing the
segment usage record to be updated and sets it up in a dirty state for
writing.
In addition, nilfs_sufile_set_segment_usage() is also called when
canceling log writing and undoing a segment usage update, so in order to
avoid issuing the same kernel warning in that case, skip checking the
error flag in nilfs_sufile_set_segment_usage() on cancellation.
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Ryusuke Konishi <[email protected]>
Reported-by: [email protected]
Closes: https://syzkaller.appspot.com/bug?extid=14e9f834f6ddecece094
Tested-by: Ryusuke Konishi <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
After commit a08c7193e4f1 "mm/filemap: remove hugetlb special casing in
filemap.c", hugetlb pages are stored in the page cache in base page sized
indexes. This leads to multi-index stores in the xarray, which are only
supported with CONFIG_XARRAY_MULTI. The other page cache user of
multi-index stores, THP, selects XARRAY_MULTI. Have CONFIG_HUGETLB_PAGE
follow this behavior as well to avoid the BUG() with a CONFIG_HUGETLB_PAGE
&& !CONFIG_XARRAY_MULTI config.
Link: https://lkml.kernel.org/r/[email protected]
Fixes: a08c7193e4f1 ("mm/filemap: remove hugetlb special casing in filemap.c")
Signed-off-by: Sidhartha Kumar <[email protected]>
Reported-by: Al Viro <[email protected]>
Cc: Mike Kravetz <[email protected]>
Cc: Muchun Song <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
After the conversion to bus_to_subsys() and class_to_subsys(), the gdb
scripts listing the system buses and classes respectively were broken; fix
those by returning the subsys_priv pointer and having the various callers
dereference either the 'bus' or 'class' structure members accordingly.
Link: https://lkml.kernel.org/r/[email protected]
Fixes: 7b884b7f24b4 ("driver core: class.c: convert to only use class_to_subsys")
Signed-off-by: Florian Fainelli <[email protected]>
Tested-by: Kuan-Ying Lee <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: Jan Kiszka <[email protected]>
Cc: Kieran Bingham <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
He is currently inactive (last message from him is two years ago [1]).
His media tree [2] is also dormant (latest activity is 6 years ago), yet
his site is still online [3].
Drop him from MAINTAINERS and add CREDITS entry for him. We thank him
for maintaining various DVB drivers.
[1]: https://lore.kernel.org/all/[email protected]/
[2]: https://git.linuxtv.org/anttip/media_tree.git/
[3]: https://palosaari.fi/linux/
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Bagas Sanjaya <[email protected]>
Acked-by: Antti Palosaari <[email protected]>
Cc: Hans Verkuil <[email protected]>
Cc: Lukas Bulwahn <[email protected]>
Cc: Mauro Carvalho Chehab <[email protected]>
Cc: Uwe Kleine-König <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
The clang static checker complains that the value stored to 'from' is never
read, and memcpy_from_folio() only copies the last chunk of memory from the
folio to the destination. Replace 'from += chunk' with 'to += chunk' to fix
this typo.
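For reference, a paraphrased sketch of the chunked copy, showing why the
destination pointer is the one that must advance:

    /* Hedged sketch of memcpy_from_folio(): each pass maps one chunk of
     * the folio and copies it out.  Advancing 'from' instead of 'to'
     * would make every chunk land at the start of the destination. */
    do {
        const char *from = kmap_local_folio(folio, offset);
        size_t chunk = len;

        if (folio_test_highmem(folio) &&
            chunk > PAGE_SIZE - offset_in_page(offset))
            chunk = PAGE_SIZE - offset_in_page(offset);
        memcpy(to, from, chunk);
        kunmap_local(from);

        to += chunk;        /* the fix: advance the destination */
        offset += chunk;
        len -= chunk;
    } while (len > 0);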
Link: https://lkml.kernel.org/r/[email protected]
Fixes: b23d03ef7af5 ("highmem: add memcpy_to_folio() and memcpy_from_folio()")
Signed-off-by: Su Hui <[email protected]>
Reviewed-by: Matthew Wilcox (Oracle) <[email protected]>
Cc: Ira Weiny <[email protected]>
Cc: Jiaqi Yan <[email protected]>
Cc: Nathan Chancellor <[email protected]>
Cc: Nick Desaulniers <[email protected]>
Cc: Peter Collingbourne <[email protected]>
Cc: Tom Rix <[email protected]>
Cc: Tony Luck <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
When mounting a filesystem image with a block size larger than the page
size, nilfs2 repeatedly outputs long error messages with stack traces to
the kernel log, such as the following:
getblk(): invalid block size 8192 requested
logical block size: 512
...
Call Trace:
dump_stack_lvl+0x92/0xd4
dump_stack+0xd/0x10
bdev_getblk+0x33a/0x354
__breadahead+0x11/0x80
nilfs_search_super_root+0xe2/0x704 [nilfs2]
load_nilfs+0x72/0x504 [nilfs2]
nilfs_mount+0x30f/0x518 [nilfs2]
legacy_get_tree+0x1b/0x40
vfs_get_tree+0x18/0xc4
path_mount+0x786/0xa88
__ia32_sys_mount+0x147/0x1a8
__do_fast_syscall_32+0x56/0xc8
do_fast_syscall_32+0x29/0x58
do_SYSENTER_32+0x15/0x18
entry_SYSENTER_32+0x98/0xf1
...
This overloads the system logger. And to make matters worse, it sometimes
crashes the kernel with a memory access violation.
This is because the return value of the sb_set_blocksize() call, which
should be checked for errors, is not checked.
The latter issue is due to out-of-buffer memory accesses: buffers are read
with the initial minimum block size, which remains unupdated in the
super_block structure because sb_set_blocksize() failed on the large block
size, and are then accessed as if they had that larger size.
Since the nilfs2 mkfs tool does not accept block sizes larger than the system
page size, this has been overlooked. However, it is possible to create
this situation by intentionally modifying the tool or by passing a
filesystem image created on a system with a large page size to a system
with a smaller page size and mounting it.
Fix this issue by inserting the expected error handling for the call to
sb_set_blocksize().
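A sketch of the expected handling (message text and error label are
illustrative), relying on sb_set_blocksize() returning 0 on failure:

    /* Hedged sketch: bail out instead of continuing with a stale block
     * size left in the super_block structure. */
    if (!sb_set_blocksize(sb, blocksize)) {
        nilfs_err(sb, "bad blocksize %d", blocksize);
        err = -EINVAL;
        goto failed_sbh;
    }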
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Ryusuke Konishi <[email protected]>
Tested-by: Ryusuke Konishi <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
Ignat Korchagin complained that a potential config regression was
introduced by commit 89cde455915f ("kexec: consolidate kexec and crash
options into kernel/Kconfig.kexec"). Before the commit, CONFIG_CRASH_DUMP
has no dependency on CONFIG_KEXEC. After the commit, CRASH_DUMP selects
KEXEC. That forces systems to have CONFIG_KEXEC=y whenever
CONFIG_CRASH_DUMP=y, which people may not want.
In Ignat's case, he sets CONFIG_CRASH_DUMP=y, CONFIG_KEXEC_FILE=y and
CONFIG_KEXEC=n because kexec_load interface could have security issue if
kernel/initrd has no chance to be signed and verified.
CRASH_DUMP selects KEXEC because Eric, the author of the above commit, hit
an LKP report of a build failure when posting an earlier version of the
patch. See the link below for details of the LKP report:
https://lore.kernel.org/all/[email protected]/T/#u
In fact, that LKP report is triggered because arm's <asm/kexec.h> is
wrapped in CONFIG_KEXEC ifdeffery scope. That is wrong. CONFIG_KEXEC
controls the enabling/disabling of kexec_load interface, but not kexec
feature. Removing the wrongly added CONFIG_KEXEC ifdeffery scope in
<asm/kexec.h> of arm allows us to drop the select KEXEC for CRASH_DUMP.
Meanwhile, change arch/arm/kernel/Makefile to make machine_kexec.o and
relocate_kernel.o depend on KEXEC_CORE.
Link: https://lkml.kernel.org/r/[email protected]
Fixes: 89cde455915f ("kexec: consolidate kexec and crash options into kernel/Kconfig.kexec")
Signed-off-by: Baoquan He <[email protected]>
Reported-by: Ignat Korchagin <[email protected]>
Tested-by: Ignat Korchagin <[email protected]> [compile-time only]
Tested-by: Alexander Gordeev <[email protected]>
Reviewed-by: Eric DeVolder <[email protected]>
Tested-by: Eric DeVolder <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
BITS_PER_BYTE is defined in bits.h.
Link: https://lkml.kernel.org/r/[email protected]
Fixes: e8eed5f7366f ("units: Add BYTES_PER_*BIT")
Signed-off-by: Andy Shevchenko <[email protected]>
Reviewed-by: Randy Dunlap <[email protected]>
Cc: Damian Muszynski <[email protected]>
Cc: Rasmus Villemoes <[email protected]>
Cc: Herbert Xu <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
After commit 88a6f8994421 ("crash: memory and CPU hotplug sysfs
attributes"), on x86_64, if only below kernel configs related to kdump are
set, compiling error are triggered.
----
CONFIG_CRASH_CORE=y
CONFIG_KEXEC_CORE=y
CONFIG_CRASH_DUMP=y
CONFIG_CRASH_HOTPLUG=y
------
------------------------------------------------------
drivers/base/cpu.c: In function `crash_hotplug_show':
drivers/base/cpu.c:309:40: error: implicit declaration of function `crash_hotplug_cpu_support'; did you mean `crash_hotplug_show'? [-Werror=implicit-function-declaration]
309 | return sysfs_emit(buf, "%d\n", crash_hotplug_cpu_support());
| ^~~~~~~~~~~~~~~~~~~~~~~~~
| crash_hotplug_show
cc1: some warnings being treated as errors
------------------------------------------------------
CONFIG_KEXEC is used to enable the kexec_load interface, so making the
crash_notes/crash_notes_size/crash_hotplug attributes depend on
CONFIG_KEXEC is incorrect; they should depend on KEXEC_CORE instead.
Fix it now.
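Roughly, the fix is the kind of guard change sketched below for
drivers/base/cpu.c (attribute names paraphrased; only the config symbol
changes):

    /* Hedged sketch: gate the crash sysfs attributes on the core
     * symbol rather than the kexec_load interface symbol. */
    #ifdef CONFIG_KEXEC_CORE    /* was: #ifdef CONFIG_KEXEC */
    static DEVICE_ATTR_ADMIN_RO(crash_notes);
    static DEVICE_ATTR_ADMIN_RO(crash_notes_size);
    #endif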
Link: https://lkml.kernel.org/r/[email protected]
Fixes: 88a6f8994421 ("crash: memory and CPU hotplug sysfs attributes")
Signed-off-by: Baoquan He <[email protected]>
Tested-by: Ignat Korchagin <[email protected]> [compile-time only]
Tested-by: Alexander Gordeev <[email protected]>
Reviewed-by: Eric DeVolder <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
If a scheme is set to not be applied to any monitoring target region for
any reason, including the target access pattern, quota, filters, or
watermarks, writing 'update_schemes_tried_regions' to 'state' DAMON sysfs
file can indefinitely hang. Fix the case by implementing a timeout for
the operation. The time limit is two apply intervals of each scheme.
Link: https://lkml.kernel.org/r/[email protected]
Fixes: 4d4e41b68299 ("mm/damon/sysfs-schemes: do not update tried regions more than one DAMON snapshot")
Signed-off-by: SeongJae Park <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
Since commit 8e1f385104ac ("kill task_struct->thread_group") removed
thread_group, we encounter the issue below.
(gdb) lx-ps
TASK PID COMM
0xffff800086503340 0 swapper/0
Python Exception <class 'gdb.error'>: There is no member named thread_group.
Error occurred in Python: There is no member named thread_group.
Use signal->thread_head to iterate over all threads instead.
[[email protected]: v2]
Link: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/[email protected]
Fixes: 8e1f385104ac ("kill task_struct->thread_group")
Signed-off-by: Kuan-Ying Lee <[email protected]>
Acked-by: Oleg Nesterov <[email protected]>
Tested-by: Florian Fainelli <[email protected]>
Cc: AngeloGioacchino Del Regno <[email protected]>
Cc: Chinwen Chang <[email protected]>
Cc: Kuan-Ying Lee <[email protected]>
Cc: Matthias Brugger <[email protected]>
Cc: Qun-Wei Lin <[email protected]>
Cc: Andrey Konovalov <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
PTE_MARKER_UFFD_WP is a subconfig for userfaultfd. To make it clear,
switch to use menuconfig for userfaultfd.
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Peter Xu <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Axel Rasmussen <[email protected]>
Cc: Mike Rapoport (IBM) <[email protected]>
Cc: Peter Xu <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
Commit 05f1edac8009 ("selftests/mm: run all tests from run_vmtests.sh")
fixed the inconsistency caused by tests being defined as TEST_GEN_PROGS.
This issue was leading to tests not being executed via run_vmtests.sh and
furthermore some tests running twice due to the kselftests wrapper also
executing them.
Fix the definition of two tests (soft-dirty and pagemap_ioctl) that are
still incorrectly defined.
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Nico Pache <[email protected]>
Reviewed-by: David Hildenbrand <[email protected]>
Cc: Joel Savitz <[email protected]>
Cc: Shuah Khan <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
The regions split function ('damon_split_region_at()') is called at the
beginning of an aggregation interval, and when DAMOS is applying the actions
and charging quota. Because the 'nr_accesses' fields of all regions are reset
at the beginning of each aggregation interval, and DAMOS was applying the
action at the end of each aggregation interval, there was no need to copy
the 'nr_accesses' field to the split-out region.
However, commit 42f994b71404 ("mm/damon/core: implement scheme-specific
apply interval") made DAMOS applies action on its own timing interval.
Hence, 'nr_accesses' should also be copied to split-out regions, but the
commit didn't do that. Fix it by copying it.
Link: https://lkml.kernel.org/r/[email protected]
Fixes: 42f994b71404 ("mm/damon/core: implement scheme-specific apply interval")
Signed-off-by: SeongJae Park <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
group_cpus_evenly() can be part of a storage driver's error handler, such
as the nvme driver's, which may run during CPU hotplug, when a storage queue
has to drain its pending IOs because all CPUs associated with the queue are
offline and the queue is becoming inactive. Handling IO needs the error
handler to provide forward progress.
Then deadlock is caused:
1) inside CPU hotplug handler, CPU hotplug lock is held, and blk-mq's
handler is waiting for inflight IO
2) error handler is waiting for CPU hotplug lock
3) inflight IO can't be completed in blk-mq's CPU hotplug handler
because error handling can't provide forward progress.
Solve the deadlock by not holding the CPU hotplug lock in
group_cpus_evenly(), where a two-stage spread is done: 1) the first stage is
over all present CPUs; 2) the second stage is over all other CPUs.
It turns out the two-stage spread just needs a consistent 'cpu_present_mask',
so drop the CPU hotplug lock by storing the mask into a local cache. This
doesn't change correctness, because all CPUs are still covered.
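A sketch of the local-cache idea (the spread-stage helpers are elided):

    /* Hedged sketch: snapshot cpu_present_mask into a local cpumask so
     * the two-stage spread sees a consistent mask without holding the
     * CPU hotplug lock. */
    cpumask_var_t npresmsk;

    if (!zalloc_cpumask_var(&npresmsk, GFP_KERNEL))
        return NULL;

    cpumask_copy(npresmsk, data_race(cpu_present_mask));
    /* stage 1: spread groups over the cached present CPUs */
    /* stage 2: spread the remaining groups over all other CPUs */
    free_cpumask_var(npresmsk);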
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Ming Lei <[email protected]>
Reported-by: Yi Zhang <[email protected]>
Reported-by: Guangwu Zhang <[email protected]>
Tested-by: Guangwu Zhang <[email protected]>
Reviewed-by: Chengming Zhou <[email protected]>
Reviewed-by: Jens Axboe <[email protected]>
Cc: Keith Busch <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
All addresses printed by checkstack have an extra incorrect 0 appended at
the end.
This was introduced with commit 677f1410e058 ("scripts/checkstack.pl: don't
display $dre as different entity"): since then the address is taken from
the line which contains the function name, instead of the line which
contains stack consumption. E.g. on s390:
0000000000100a30 <do_one_initcall>:
...
100a44: e3 f0 ff 70 ff 71 lay %r15,-144(%r15)
So the regex used to extract an address, which matches spaces and
hexadecimal numbers, now matches a different substring. Subsequently
replacing spaces with 0 appends a zero at the end, instead of replacing
leading spaces.
Fix this by using the proper regex, and simplify the code a bit.
Link: https://lkml.kernel.org/r/[email protected]
Fixes: 677f1410e058 ("scripts/checkstack.pl: don't display $dre as different entity")
Signed-off-by: Heiko Carstens <[email protected]>
Cc: Maninder Singh <[email protected]>
Cc: Masahiro Yamada <[email protected]>
Cc: Vaneet Narang <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
In add_memory_resource(), creation of memory block devices occurs after
successful call to arch_add_memory(). However, creation of memory block
devices could fail. In that case, arch_remove_memory() is called to
perform necessary cleanup.
Currently, with or without altmap support, arch_remove_memory() is always
called with altmap set to NULL during error handling. This leads to the
struct pages being freed using free_pages(), even though the allocation
might have been performed with altmap support via
altmap_alloc_block_buf().
Fix the error handling by passing altmap in arch_remove_memory(). This
ensures the following:
* When altmap is disabled, deallocation of the struct pages array occurs
via free_pages().
* When altmap is enabled, deallocation occurs via vmem_altmap_free().
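A sketch of the corrected error path (assuming current helper signatures
that carry the altmap in the mhp params):

    /* Hedged sketch of add_memory_resource() error handling: hand the
     * same altmap used for allocation back to arch_remove_memory(). */
    ret = create_memory_block_devices(start, size, params.altmap, group);
    if (ret) {
        arch_remove_memory(start, size, params.altmap);  /* was: NULL */
        goto error;
    }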
Link: https://lkml.kernel.org/r/[email protected]
Fixes: a08a2ae34613 ("mm,memory_hotplug: allocate memmap from the added memory range")
Signed-off-by: Sumanth Korikkar <[email protected]>
Reviewed-by: Gerald Schaefer <[email protected]>
Acked-by: David Hildenbrand <[email protected]>
Cc: Alexander Gordeev <[email protected]>
Cc: Aneesh Kumar K.V <[email protected]>
Cc: Anshuman Khandual <[email protected]>
Cc: Heiko Carstens <[email protected]>
Cc: kernel test robot <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Oscar Salvador <[email protected]>
Cc: Vasily Gorbik <[email protected]>
Cc: <[email protected]> [5.15+]
Signed-off-by: Andrew Morton <[email protected]>
|
|
From Documentation/core-api/memory-hotplug.rst:
When adding/removing/onlining/offlining memory or adding/removing
heterogeneous/device memory, we should always hold the mem_hotplug_lock
in write mode to serialise memory hotplug (e.g. access to global/zone
variables).
mhp_(de)init_memmap_on_memory() functions can change zone stats and
struct page content, but they are currently called w/o the
mem_hotplug_lock.
When a memory block is being offlined and kmemleak goes through each
populated zone, the following theoretical race conditions could occur:
CPU 0: | CPU 1:
memory_offline() |
-> offline_pages() |
-> mem_hotplug_begin() |
... |
-> mem_hotplug_done() |
| kmemleak_scan()
| -> get_online_mems()
| ...
-> mhp_deinit_memmap_on_memory() |
[not protected by mem_hotplug_begin/done()]|
Marks memory section as offline, | Retrieves zone_start_pfn
poisons vmemmap struct pages and updates | and struct page members.
the zone related data |
| ...
| -> put_online_mems()
Fix this by ensuring mem_hotplug_lock is taken before performing
mhp_init_memmap_on_memory(). Also ensure that
mhp_deinit_memmap_on_memory() holds the lock.
online/offline_pages() are currently only called from
memory_block_online/offline(), so it is safe to move the locking there.
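A sketch of the locking move (local variables such as start_pfn, nr_pages,
and zone, and the helper signatures, are assumed):

    /* Hedged sketch: memory_block_online() wraps the memmap-on-memory
     * setup and the onlining itself in mem_hotplug_begin()/done(). */
    static int memory_block_online(struct memory_block *mem)
    {
        int ret = 0;

        mem_hotplug_begin();
        if (nr_vmemmap_pages)
            ret = mhp_init_memmap_on_memory(start_pfn, nr_vmemmap_pages, zone);
        if (!ret)
            ret = online_pages(start_pfn + nr_vmemmap_pages,
                               nr_pages - nr_vmemmap_pages, zone, mem->group);
        mem_hotplug_done();
        return ret;
    }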
Link: https://lkml.kernel.org/r/[email protected]
Fixes: a08a2ae34613 ("mm,memory_hotplug: allocate memmap from the added memory range")
Signed-off-by: Sumanth Korikkar <[email protected]>
Reviewed-by: Gerald Schaefer <[email protected]>
Acked-by: David Hildenbrand <[email protected]>
Cc: Alexander Gordeev <[email protected]>
Cc: Aneesh Kumar K.V <[email protected]>
Cc: Anshuman Khandual <[email protected]>
Cc: Heiko Carstens <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Oscar Salvador <[email protected]>
Cc: Vasily Gorbik <[email protected]>
Cc: kernel test robot <[email protected]>
Cc: <[email protected]> [5.15+]
Signed-off-by: Andrew Morton <[email protected]>
|
|
My company email address is going to be disabled so let's create a mapping
that links to my private/community email just in case people might still
try to reach me via the old one.
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Chester Lin <[email protected]>
Cc: Chester Lin <[email protected]>
Cc: Bjorn Andersson <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: Heiko Stuebner <[email protected]>
Cc: Jakub Kicinski <[email protected]>
Cc: Konrad Dybcio <[email protected]>
Cc: Oleksij Rempel <[email protected]>
Cc: Stephen Hemminger <[email protected]>
Cc: Conor Dooley <[email protected]>
Cc: Matthias Brugger <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
syzbot reports oops in lockdep's __lock_acquire(), called from
__pte_offset_map_lock() called from filemap_map_pages(); or when I run the
repro, the oops comes in pmd_install(), called from filemap_map_pmd()
called from filemap_map_pages(), just before the __pte_offset_map_lock().
The problem is that filemap_map_pmd() has been assuming that when it finds
pmd_none(), a page table has already been prepared in prealloc_pte; and
indeed do_fault_around() has been careful to preallocate one there, when
it finds pmd_none(): but what if *pmd became none in between?
My 6.6 mods in mm/khugepaged.c, avoiding mmap_lock for write, have made it
easy for *pmd to be cleared while servicing a page fault; but even before
those, a huge *pmd might be zapped while a fault is serviced.
The difference in symptomatic stack traces comes from the "memory model"
in use: pmd_install() uses pmd_populate() uses page_to_pfn(): in some
models that is strict, and will oops on the NULL prealloc_pte; in other
models, it will construct a bogus value to be populated into *pmd, then
__pte_offset_map_lock() oops when trying to access split ptlock pointer
(or some other symptom in normal case of ptlock embedded not pointer).
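A sketch of the guard (paraphrasing filemap_map_pmd(); surrounding
transhuge checks omitted):

    /* Hedged sketch: only install the preallocated page table when one
     * is actually there.  If *pmd became none with no prealloc_pte
     * (e.g. zapped while the fault was being serviced), fall through
     * and let the fault path retry rather than populate a NULL table. */
    if (pmd_none(*vmf->pmd) && vmf->prealloc_pte)
        pmd_install(vma->vm_mm, vmf->pmd, &vmf->prealloc_pte);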
Link: https://lore.kernel.org/linux-mm/[email protected]/
Link: https://lkml.kernel.org/r/[email protected]
Fixes: f9ce0be71d1f ("mm: Cleanup faultaround and finish_fault() codepaths")
Signed-off-by: Hugh Dickins <[email protected]>
Reported-and-tested-by: [email protected]
Closes: https://lore.kernel.org/linux-mm/[email protected]/
Reviewed-by: David Hildenbrand <[email protected]>
Cc: Jann Horn <[email protected]>,
Cc: José Pekkarinen <[email protected]>
Cc: Kirill A. Shutemov <[email protected]>
Cc: Matthew Wilcox (Oracle) <[email protected]>
Cc: <[email protected]> [5.12+]
Signed-off-by: Andrew Morton <[email protected]>
|
|
When the length passed in is 0, the pagemap_scan_test_walk() caller should
bail. This error causes at least a WARN_ON().
Link: https://lkml.kernel.org/r/[email protected]
Reported-by: [email protected]
Closes: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Lizhi Xu <[email protected]>
Reviewed-by: Phillip Lougher <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
__FILE__ is not guaranteed to exist in current dir. Replace that with
argv[0] for memory map test.
Link: https://lkml.kernel.org/r/[email protected]
Fixes: 46fd75d4a3c9 ("selftests: mm: add pagemap ioctl tests")
Signed-off-by: Peter Xu <[email protected]>
Reviewed-by: David Hildenbrand <[email protected]>
Cc: Andrei Vagin <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: Muhammad Usama Anjum <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
The new pagemap ioctl contains a fast path for wr-protections without
looking into category masks. It forgets to check PM_SCAN_WP_MATCHING
before applying the wr-protections. It can cause, e.g., pte markers to be
installed on archs that do not even support uffd wr-protect.
WARNING: CPU: 0 PID: 5059 at mm/memory.c:1520 zap_pte_range mm/memory.c:1520 [inline]
Link: https://lkml.kernel.org/r/[email protected]
Fixes: 12f6b01a0bcb ("fs/proc/task_mmu: add fast paths to get/clear PAGE_IS_WRITTEN flag")
Signed-off-by: Peter Xu <[email protected]>
Reported-by: [email protected]
Reviewed-by: David Hildenbrand <[email protected]>
Reviewed-by: Andrei Vagin <[email protected]>
Cc: Muhammad Usama Anjum <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
Patch series "mm/pagemap: A few fixes to the recent PAGEMAP_SCAN".
This series should fix two known reports from syzbot on the new
PAGEMAP_SCAN ioctl():
https://lore.kernel.org/all/[email protected]/
https://lore.kernel.org/all/[email protected]/
The 3rd patch is something I found when testing these patches.
This patch (of 3):
The new ioctl(PAGEMAP_SCAN) relies on the vma wr-protect capability provided
by userfaultfd; however, the vma test didn't explicitly require the vma to
have the wr-protect function enabled, even if the PM_SCAN_WP_MATCHING flag
is set.
It means the pagemap code can now apply uffd-wp bit to a page in the vma
even if not registered to userfaultfd at all.
Then, however it happens, as long as the pte gets written and the page
fault is resolved, we'll apply the write bit even if the uffd-wp bit is set.
We'll see a pte that has both the UFFD_WP and WRITE bits set. Anything that
later looks up the pte's uffd-wp bit will trigger the warning:
WARNING: CPU: 1 PID: 5071 at arch/x86/include/asm/pgtable.h:403 pte_uffd_wp arch/x86/include/asm/pgtable.h:403 [inline]
Fix it by doing proper check over the vma attributes when
PM_SCAN_WP_MATCHING is specified.
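A sketch of the vma-attribute check (placement is illustrative;
userfaultfd_wp() tests whether the vma is registered for uffd wr-protect):

    /* Hedged sketch: in the PAGEMAP_SCAN vma test, refuse wr-protect
     * requests on vmas without userfaultfd wr-protect enabled. */
    if ((p->arg.flags & PM_SCAN_WP_MATCHING) && !userfaultfd_wp(vma))
        return -EPERM;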
Link: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/[email protected]
Fixes: 52526ca7fdb9 ("fs/proc/task_mmu: implement IOCTL to get and optionally clear info about PTEs")
Signed-off-by: Peter Xu <[email protected]>
Reported-by: [email protected]
Reviewed-by: David Hildenbrand <[email protected]>
Reviewed-by: Andrei Vagin <[email protected]>
Reviewed-by: Muhammad Usama Anjum <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
Erhard reported that the 6.7-rc1 kernel panics on boot when built
with clang-16. The problem was not reproducible with gcc.
[ 5.975049] general protection fault, probably for non-canonical address 0xf555515555555557: 0000 [#1] SMP KASAN PTI
[ 5.976422] KASAN: maybe wild-memory-access in range [0xaaaaaaaaaaaaaab8-0xaaaaaaaaaaaaaabf]
[ 5.977475] CPU: 3 PID: 1 Comm: systemd Not tainted 6.7.0-rc1-Zen3 #77
[ 5.977860] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.13.0-1ubuntu1.1 04/01/2014
[ 5.977860] RIP: 0010:obj_cgroup_charge_pages+0x27/0x2d5
[ 5.977860] Code: 90 90 90 55 41 57 41 56 41 55 41 54 53 89 d5 41 89 f6 49 89 ff 48 b8 00 00 00 00 00 fc ff df 49 83 c7 10 4d3
[ 5.977860] RSP: 0018:ffffc9000001fb18 EFLAGS: 00010a02
[ 5.977860] RAX: dffffc0000000000 RBX: aaaaaaaaaaaaaaaa RCX: ffff8883eb9a8b08
[ 5.977860] RDX: 0000000000000005 RSI: 0000000000400cc0 RDI: aaaaaaaaaaaaaaaa
[ 5.977860] RBP: 0000000000000005 R08: 3333333333333333 R09: 0000000000000000
[ 5.977860] R10: 0000000000000000 R11: 0000000000000000 R12: ffff8883eb9a8b18
[ 5.977860] R13: 1555555555555557 R14: 0000000000400cc0 R15: aaaaaaaaaaaaaaba
[ 5.977860] FS: 00007f2976438b40(0000) GS:ffff8883eb980000(0000) knlGS:0000000000000000
[ 5.977860] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 5.977860] CR2: 00007f29769e0060 CR3: 0000000107222003 CR4: 0000000000370eb0
[ 5.977860] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 5.977860] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 5.977860] Call Trace:
[ 5.977860] <TASK>
[ 5.977860] ? __die_body+0x16/0x75
[ 5.977860] ? die_addr+0x4a/0x70
[ 5.977860] ? exc_general_protection+0x1c9/0x2d0
[ 5.977860] ? cgroup_mkdir+0x455/0x9fb
[ 5.977860] ? __x64_sys_mkdir+0x69/0x80
[ 5.977860] ? asm_exc_general_protection+0x26/0x30
[ 5.977860] ? obj_cgroup_charge_pages+0x27/0x2d5
[ 5.977860] obj_cgroup_charge+0x114/0x1ab
[ 5.977860] pcpu_alloc+0x1a6/0xa65
[ 5.977860] ? mem_cgroup_css_alloc+0x1eb/0x1140
[ 5.977860] ? cgroup_apply_control_enable+0x26b/0x7c0
[ 5.977860] mem_cgroup_css_alloc+0x23f/0x1140
[ 5.977860] cgroup_apply_control_enable+0x26b/0x7c0
[ 5.977860] ? cgroup_kn_set_ugid+0x2d/0x1a0
[ 5.977860] cgroup_mkdir+0x455/0x9fb
[ 5.977860] ? __cfi_cgroup_mkdir+0x10/0x10
[ 5.977860] kernfs_iop_mkdir+0x130/0x170
[ 5.977860] vfs_mkdir+0x405/0x530
[ 5.977860] do_mkdirat+0x188/0x1f0
[ 5.977860] __x64_sys_mkdir+0x69/0x80
[ 5.977860] do_syscall_64+0x7d/0x100
[ 5.977860] ? do_syscall_64+0x89/0x100
[ 5.977860] ? do_syscall_64+0x89/0x100
[ 5.977860] ? do_syscall_64+0x89/0x100
[ 5.977860] ? do_syscall_64+0x89/0x100
[ 5.977860] entry_SYSCALL_64_after_hwframe+0x4b/0x53
[ 5.977860] RIP: 0033:0x7f297671defb
[ 5.977860] Code: 8b 05 39 7f 0d 00 bb ff ff ff ff 64 c7 00 16 00 00 00 e9 61 ff ff ff e8 23 0c 02 00 0f 1f 00 f3 0f 1e fa b88
[ 5.977860] RSP: 002b:00007ffee6242bb8 EFLAGS: 00000246 ORIG_RAX: 0000000000000053
[ 5.977860] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f297671defb
[ 5.977860] RDX: 0000000000000000 RSI: 00000000000001ed RDI: 000055c6b449f0e0
[ 5.977860] RBP: 00007ffee6242bf0 R08: 000000000000000e R09: 0000000000000000
[ 5.977860] R10: 0000000000000000 R11: 0000000000000246 R12: 000055c6b445db80
[ 5.977860] R13: 00000000000003a0 R14: 00007f2976a68651 R15: 00000000000003a0
[ 5.977860] </TASK>
[ 5.977860] Modules linked in:
[ 6.014095] ---[ end trace 0000000000000000 ]---
[ 6.014701] RIP: 0010:obj_cgroup_charge_pages+0x27/0x2d5
[ 6.015348] Code: 90 90 90 55 41 57 41 56 41 55 41 54 53 89 d5 41 89 f6 49 89 ff 48 b8 00 00 00 00 00 fc ff df 49 83 c7 10 4d3
[ 6.017575] RSP: 0018:ffffc9000001fb18 EFLAGS: 00010a02
[ 6.018255] RAX: dffffc0000000000 RBX: aaaaaaaaaaaaaaaa RCX: ffff8883eb9a8b08
[ 6.019120] RDX: 0000000000000005 RSI: 0000000000400cc0 RDI: aaaaaaaaaaaaaaaa
[ 6.019983] RBP: 0000000000000005 R08: 3333333333333333 R09: 0000000000000000
[ 6.020849] R10: 0000000000000000 R11: 0000000000000000 R12: ffff8883eb9a8b18
[ 6.021747] R13: 1555555555555557 R14: 0000000000400cc0 R15: aaaaaaaaaaaaaaba
[ 6.022609] FS: 00007f2976438b40(0000) GS:ffff8883eb980000(0000) knlGS:0000000000000000
[ 6.023593] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 6.024296] CR2: 00007f29769e0060 CR3: 0000000107222003 CR4: 0000000000370eb0
[ 6.025279] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 6.026139] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 6.027000] Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b
Actually, the problem is caused by an uninitialized local variable in
current_obj_cgroup(). If the root memory cgroup is set as an active
memory cgroup for a charging scope (as in the trace, where systemd tries
to create the first non-root cgroup, so the parent cgroup is the root
cgroup), the "for" loop is skipped and uninitialized objcg is returned,
causing a panic down the accounting stack.
The fix is trivial: initialize the objcg variable to NULL unconditionally
before the "for" loop.
[[email protected]: remove redundant assignment]
Link: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/[email protected]
Fixes: e86828e5446d ("mm: kmem: scoped objcg protection")
Signed-off-by: Roman Gushchin (Cruise) <[email protected]>
Signed-off-by: Vlastimil Babka <[email protected]>
Reported-by: Erhard Furtner <[email protected]>
Closes: https://github.com/ClangBuiltLinux/linux/issues/1959
Tested-by: Erhard Furtner <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
Acked-by: Shakeel Butt <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: Dennis Zhou <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Muchun Song <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
set_track_prepare() calls __alloc_pages(), which attempts to acquire
zone->lock (a spinlock), so move it outside object->lock (a raw spinlock),
because it is not right to acquire spinlocks while holding raw spinlocks
in RT mode.
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Liu Shixin <[email protected]>
Acked-by: Catalin Marinas <[email protected]>
Cc: Geert Uytterhoeven <[email protected]>
Cc: Kefeng Wang <[email protected]>
Cc: Patrick Wang <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
Patch series "Fix invalid wait context of set_track_prepare()".
Geert reported an invalid wait context[1] which results from moving
set_track_prepare() inside kmemleak_lock. This is not allowed because in
RT mode, spinlocks can be preempted but raw spinlocks cannot, so it is
not allowed to acquire spinlocks while holding raw spinlocks. The
second patch fixes the same problem in kmemleak_update_trace().
This patch (of 2):
Move the initialisation of the object back to __alloc_object() because
set_track_prepare() attempts to acquire zone->lock (a spinlock) while
__link_object() is holding kmemleak_lock (a raw spinlock). This is not
right for RT mode.
This reverts commit 245245c2fffd00 ("mm/kmemleak: move the initialisation
of object to __link_object").
Link: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/[email protected]
Fixes: 245245c2fffd ("mm/kmemleak: move the initialisation of object to __link_object")
Signed-off-by: Liu Shixin <[email protected]>
Reported-by: Geert Uytterhoeven <[email protected]>
Closes: https://lore.kernel.org/linux-mm/CAMuHMdWj0UzwNaxUvcocTfh481qRJpOWwXxsJCTJfu1oCqvgdA@mail.gmail.com/ [1]
Acked-by: Catalin Marinas <[email protected]>
Cc: Kefeng Wang <[email protected]>
Cc: Patrick Wang <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
We have a report of this WARN() triggering. Let's print the offending
swp_entry_t to help diagnosis.
Link: https://lkml.kernel.org/r/[email protected]
Cc: Muhammad Usama Anjum <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
The routine __vma_private_lock tests for the existence of a reserve map
associated with a private hugetlb mapping. A pointer to the reserve map
is in vma->vm_private_data. __vma_private_lock was checking the pointer
for NULL. However, it is possible that the low bits of the pointer could
be used as flags. In such instances, vm_private_data is not NULL and not
a valid pointer. This results in the null-ptr-deref reported by syzbot:
general protection fault, probably for non-canonical address 0xdffffc000000001d: 0000 [#1] PREEMPT SMP KASAN
KASAN: null-ptr-deref in range [0x00000000000000e8-0x00000000000000ef]
CPU: 0 PID: 5048 Comm: syz-executor139 Not tainted 6.6.0-rc7-syzkaller-00142-g888cf78c29e2 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/09/2023
RIP: 0010:__lock_acquire+0x109/0x5de0 kernel/locking/lockdep.c:5004
...
Call Trace:
<TASK>
lock_acquire kernel/locking/lockdep.c:5753 [inline]
lock_acquire+0x1ae/0x510 kernel/locking/lockdep.c:5718
down_write+0x93/0x200 kernel/locking/rwsem.c:1573
hugetlb_vma_lock_write mm/hugetlb.c:300 [inline]
hugetlb_vma_lock_write+0xae/0x100 mm/hugetlb.c:291
__hugetlb_zap_begin+0x1e9/0x2b0 mm/hugetlb.c:5447
hugetlb_zap_begin include/linux/hugetlb.h:258 [inline]
unmap_vmas+0x2f4/0x470 mm/memory.c:1733
exit_mmap+0x1ad/0xa60 mm/mmap.c:3230
__mmput+0x12a/0x4d0 kernel/fork.c:1349
mmput+0x62/0x70 kernel/fork.c:1371
exit_mm kernel/exit.c:567 [inline]
do_exit+0x9ad/0x2a20 kernel/exit.c:861
__do_sys_exit kernel/exit.c:991 [inline]
__se_sys_exit kernel/exit.c:989 [inline]
__x64_sys_exit+0x42/0x50 kernel/exit.c:989
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x38/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x63/0xcd
Mask off the low bit flags before checking for a NULL pointer. In
addition, the reserve map only 'belongs' to the OWNER (the parent in
parent/child relationships), so also check for the OWNER flag.
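A sketch of the corrected check, assuming the HPAGE_RESV_* flag bits that
mm/hugetlb.c keeps in the low bits of vm_private_data:

    /* Hedged sketch: mask off the flag bits before the pointer test,
     * and only report a lockable reserve map for the reservation OWNER. */
    bool __vma_private_lock(struct vm_area_struct *vma)
    {
        return !(vma->vm_flags & VM_MAYSHARE) &&
               ((unsigned long)vma->vm_private_data & ~HPAGE_RESV_MASK) &&
               is_vma_resv_set(vma, HPAGE_RESV_OWNER);
    }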
Link: https://lkml.kernel.org/r/[email protected]
Reported-by: [email protected]
Closes: https://lore.kernel.org/linux-mm/[email protected]/
Fixes: bf4916922c60 ("hugetlbfs: extend hugetlb_vma_lock to private VMAs")
Signed-off-by: Mike Kravetz <[email protected]>
Reviewed-by: Rik van Riel <[email protected]>
Cc: Edward Adam Davis <[email protected]>
Cc: Muchun Song <[email protected]>
Cc: Nathan Chancellor <[email protected]>
Cc: Nick Desaulniers <[email protected]>
Cc: Tom Rix <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
Add myself as the fallthrough maintainer for material under lib/.
Cc: Joe Perches <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
This fixes a bug where rebalance would loop repeatedly on the same
extents.
Signed-off-by: Daniel Hill <[email protected]>
Signed-off-by: Kent Overstreet <[email protected]>
|
|
When using audio from the ADV7533 or ADV7535 and describing the audio
card via simple-audio-card, the '#sound-dai-cells' property needs to be
passed.
Document the '#sound-dai-cells' property to fix the following
dt-schema warning:
imx8mn-beacon-kit.dtb: hdmi@3d: '#sound-dai-cells' does not match any of the regexes: 'pinctrl-[0-9]+'
from schema $id: http://devicetree.org/schemas/display/bridge/adi,adv7533.yaml#
Signed-off-by: Fabio Estevam <[email protected]>
Acked-by: Adam Ford <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Rob Herring <[email protected]>
|
|
i.MX23 has two LCDIF interrupts instead of a single one like other
i.MX devices.
Take this into account to properly describe the i.MX23 LCDIF
interrupts.
This fixes the following dt-schema warning:
imx23-olinuxino.dtb: lcdif@80030000: interrupts: [[46], [45]] is too long
from schema $id: http://devicetree.org/schemas/display/fsl,lcdif.yaml#
Signed-off-by: Fabio Estevam <[email protected]>
Reviewed-by: Marek Vasut <[email protected]>
Acked-by: Conor Dooley <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Rob Herring <[email protected]>
|
|
https://git.kernel.org/pub/scm/linux/kernel/git/song/md into block-6.7
Pull MD fixes from Song:
"This set from Yu Kuai fixes issues around sync_work, which was introduced
in 6.7 kernels."
* tag 'md-fixes-20231206' of https://git.kernel.org/pub/scm/linux/kernel/git/song/md:
md: fix stopping sync thread
md: don't leave 'MD_RECOVERY_FROZEN' in error path of md_set_readonly()
md: fix missing flush of sync_work
|
|
Add a test that tries to trigger the BUG_ON during an early map update
in the prog_array_map_poke_run function.
The idea is to share a prog array map between a thread that constantly
updates it and another one loading a program that uses that prog
array.
Eventually we will hit a place where the program is ok to be updated
(the poke->tailcall_target_stable check) but the address is still not
registered in kallsyms, so bpf_arch_text_poke returns -EINVAL and causes
an imbalance for the next tail call update check, which will fail with
-EBUSY in bpf_arch_text_poke, as described in the previous fix.
Signed-off-by: Jiri Olsa <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
Acked-by: Ilya Leoshkevich <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]
|
|
Lee pointed out issue found by syscaller [0] hitting BUG in prog array
map poke update in prog_array_map_poke_run function due to error value
returned from bpf_arch_text_poke function.
There's a race window where bpf_arch_text_poke can fail due to missing
bpf program kallsyms symbols, which is accounted for with the check for
-EINVAL in that BUG_ON call.
The problem is that in such a case we won't update the tail call jump,
causing an imbalance for the next tail call update check, which will
fail with -EBUSY in bpf_arch_text_poke.
I'm hitting following race during the program load:
CPU 0 CPU 1
bpf_prog_load
bpf_check
do_misc_fixups
prog_array_map_poke_track
map_update_elem
bpf_fd_array_map_update_elem
prog_array_map_poke_run
bpf_arch_text_poke returns -EINVAL
bpf_prog_kallsyms_add
After bpf_arch_text_poke (CPU 1) fails to update the tail call jump, the next
poke update fails on expected jump instruction check in bpf_arch_text_poke
with -EBUSY and triggers the BUG_ON in prog_array_map_poke_run.
Similar race exists on the program unload.
Fix this by moving the update to the bpf_arch_poke_desc_update function,
which makes sure we call __bpf_arch_text_poke, which skips the bpf address
check.
Each architecture has a slightly different approach wrt looking up the bpf
address in bpf_arch_text_poke, so instead of splitting the function or
adding a new 'checkip' argument as in the previous version, it seems best
to move the whole map_poke_run update into arch-specific code.
[0] https://syzkaller.appspot.com/bug?extid=97a4fe20470e9bc30810
Fixes: ebf7d1f508a7 ("bpf, x64: rework pro/epilogue and tailcall handling in JIT")
Reported-by: [email protected]
Signed-off-by: Jiri Olsa <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
Acked-by: Yonghong Song <[email protected]>
Cc: Lee Jones <[email protected]>
Cc: Maciej Fijalkowski <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]
|
|
A reservation goes through a 3 step lifetime:
- generated during delalloc
- released/counted by ordered_extent allocation
- freed by running delayed ref
That third step depends on must_insert_reserved on the head ref, so the
head ref with that field set owns the reservation. Once you prepare to
run the head ref, must_insert_reserved is unset, which means that
running the ref must free the reservation, whether or not it succeeds,
or else the reservation is leaked. That results in either a risk of
spurious ENOSPC if the fs stays writeable or a warning on unmount if it
is readonly.
The existing squota code was aware of these invariants, but missed a few
cases. Improve it by adding a helper function to use in the cleanup
paths and call it from the existing early returns in running delayed
refs. This also simplifies btrfs_record_squota_delta and struct
btrfs_quota_delta.
This fixes (or at least improves the reliability of) generic/475 with
"mkfs -O squota". On my machine, that test failed ~4/10 times without
this patch and passed 100/100 times with it.
Signed-off-by: Boris Burkov <[email protected]>
Signed-off-by: David Sterba <[email protected]>
|
|
The EXTENT_QGROUP_RESERVED bit is used to "lock" regions of the file for
duplicate reservations. That is, two writes to that range in one
transaction shouldn't create two reservations, as the reservation will
only be freed once when the write finally goes down. Therefore, it is
never OK to clear that bit without freeing the associated qgroup
reserve. At this point, we don't want to be freeing the reserve, so mask
off the bit.
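In sketch form (the clearing site is generic; 'bits' stands for whatever
set the caller wants cleared):

    /* Hedged sketch: when clearing extent state without freeing the
     * qgroup reservation, keep the "lock" bit in place. */
    bits &= ~EXTENT_QGROUP_RESERVED;
    clear_extent_bit(&inode->io_tree, start, end, bits, NULL);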
CC: [email protected] # 5.15+
Reviewed-by: Qu Wenruo <[email protected]>
Signed-off-by: Boris Burkov <[email protected]>
Signed-off-by: David Sterba <[email protected]>
|
|
If we abort a transaction, we never run the code that frees the pertrans
qgroup reservation. This results in warnings on unmount as that
reservation has been leaked. The leak isn't a huge issue since the fs is
read-only, but it's better to clean it up when we know we can/should. Do
it during the cleanup_transaction step of aborting.
CC: [email protected] # 5.15+
Reviewed-by: Qu Wenruo <[email protected]>
Signed-off-by: Boris Burkov <[email protected]>
Signed-off-by: David Sterba <[email protected]>
|
|
The reserved data counter and input parameter is a u64, but we
inadvertently accumulate it in an int. Overflowing that int results in
freeing the wrong amount of data and breaking reserve accounting.
Unfortunately, this overflow rot spreads from there, as the qgroup
release/free functions rely on returning an int to take advantage of
negative values for error codes.
Therefore, the full fix is to return the "released" or "freed" amount by
a u64 argument and to return 0 or negative error code via the return
value.
Most of the call sites simply ignore the return value, though some
of them handle the error and count the returned bytes. Change all of
them accordingly.
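The shape of the bug and of the fix, sketched (names follow the qgroup
code loosely):

    /* Buggy pattern described above: a u64 byte count accumulated in an
     * int, which overflows for releases larger than INT_MAX bytes. */
    int freed = 0;
    freed += entry_bytes;    /* entry_bytes is u64 */

    /* Fixed shape: report bytes via a u64 out-parameter, and keep the
     * int return value strictly for 0 / negative error codes. */
    int btrfs_qgroup_free_data(struct btrfs_inode *inode,
                               struct extent_changeset *reserved,
                               u64 start, u64 len, u64 *freed);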
CC: [email protected] # 6.1+
Reviewed-by: Qu Wenruo <[email protected]>
Signed-off-by: Boris Burkov <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>
|
|
An ordered extent completing is a critical moment in qgroup reserve
handling, as the ownership of the reservation is handed off from the
ordered extent to the delayed ref. In the happy path we release (unlock)
but do not free (decrement counter) the reservation, and the delayed ref
drives the free. However, on an error, we don't create a delayed ref,
since there is no ref to add. Therefore, free on the error path.
CC: [email protected] # 6.1+
Reviewed-by: Qu Wenruo <[email protected]>
Signed-off-by: Boris Burkov <[email protected]>
Signed-off-by: David Sterba <[email protected]>
|
|
We need to disable this after the last eviction
call, but before we disable the SDMA IP.
Fixes: b70438004a14 ("drm/amdgpu: move buffer funcs setting up a level")
Link: https://lore.kernel.org/r/[email protected]
Reviewed-by: Luben Tuikov <[email protected]>
Tested-by: Phillip Susi <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
Cc: Phillip Susi <[email protected]>
Cc: Luben Tuikov <[email protected]>
|
|
MP0 v13.0.6 SOCs don't support DRM MGCG.
Signed-off-by: Lijo Lazar <[email protected]>
Reviewed-by: Hawking Zhang <[email protected]>
Acked-by: Alex Deucher <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
|