syzbot reports oops in lockdep's __lock_acquire(), called from
__pte_offset_map_lock() called from filemap_map_pages(); or when I run the
repro, the oops comes in pmd_install(), called from filemap_map_pmd()
called from filemap_map_pages(), just before the __pte_offset_map_lock().
The problem is that filemap_map_pmd() has been assuming that when it finds
pmd_none(), a page table has already been prepared in prealloc_pte; and
indeed do_fault_around() has been careful to preallocate one there, when
it finds pmd_none(): but what if *pmd became none in between?
My 6.6 mods in mm/khugepaged.c, avoiding mmap_lock for write, have made it
easy for *pmd to be cleared while servicing a page fault; but even before
those, a huge *pmd might be zapped while a fault is serviced.
The difference in symptomatic stack traces comes from the "memory model"
in use: pmd_install() uses pmd_populate() uses page_to_pfn(): in some
models that is strict, and will oops on the NULL prealloc_pte; in other
models, it will construct a bogus value to be populated into *pmd, and then
__pte_offset_map_lock() oopses when trying to access the split ptlock
pointer (or shows some other symptom in the normal case of ptlock being
embedded rather than a pointer).
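The shape of the guard is roughly the following, a minimal sketch of the idea rather than the exact hunk; if no page table was preallocated, the caller simply falls back to the normal fault path:

  /* only install a page table if one was actually preallocated */
  if (pmd_none(*vmf->pmd) && vmf->prealloc_pte)
          pmd_install(vmf->vma->vm_mm, vmf->pmd, &vmf->prealloc_pte);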
Link: https://lore.kernel.org/linux-mm/[email protected]/
Link: https://lkml.kernel.org/r/[email protected]
Fixes: f9ce0be71d1f ("mm: Cleanup faultaround and finish_fault() codepaths")
Signed-off-by: Hugh Dickins <[email protected]>
Reported-and-tested-by: [email protected]
Closes: https://lore.kernel.org/linux-mm/[email protected]/
Reviewed-by: David Hildenbrand <[email protected]>
Cc: Jann Horn <[email protected]>
Cc: José Pekkarinen <[email protected]>
Cc: Kirill A. Shutemov <[email protected]>
Cc: Matthew Wilcox (Oracle) <[email protected]>
Cc: <[email protected]> [5.12+]
Signed-off-by: Andrew Morton <[email protected]>
|
|
When the length passed in is 0, the pagemap_scan_test_walk() caller should
bail. This error causes at least a WARN_ON().
Link: https://lkml.kernel.org/r/[email protected]
Reported-by: [email protected]
Closes: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Lizhi Xu <[email protected]>
Reviewed-by: Phillip Lougher <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
__FILE__ is not guaranteed to exist in current dir. Replace that with
argv[0] for memory map test.
Link: https://lkml.kernel.org/r/[email protected]
Fixes: 46fd75d4a3c9 ("selftests: mm: add pagemap ioctl tests")
Signed-off-by: Peter Xu <[email protected]>
Reviewed-by: David Hildenbrand <[email protected]>
Cc: Andrei Vagin <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: Muhammad Usama Anjum <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
The new pagemap ioctl contains a fast path for wr-protections without
looking into category masks. It forgets to check PM_SCAN_WP_MATCHING
before applying the wr-protections. This can cause, e.g., pte markers to be
installed on archs that do not even support uffd wr-protect.
WARNING: CPU: 0 PID: 5059 at mm/memory.c:1520 zap_pte_range mm/memory.c:1520 [inline]
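Schematically, the missing guard in the fast path looks like this (a sketch; the surrounding variable names are illustrative):

  /* only take the wr-protect fast path when it was actually requested */
  if (p->arg.flags & PM_SCAN_WP_MATCHING) {
          /* ... fast-path wr-protection of the whole range ... */
  }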
Link: https://lkml.kernel.org/r/[email protected]
Fixes: 12f6b01a0bcb ("fs/proc/task_mmu: add fast paths to get/clear PAGE_IS_WRITTEN flag")
Signed-off-by: Peter Xu <[email protected]>
Reported-by: [email protected]
Reviewed-by: David Hildenbrand <[email protected]>
Reviewed-by: Andrei Vagin <[email protected]>
Cc: Muhammad Usama Anjum <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
Patch series "mm/pagemap: A few fixes to the recent PAGEMAP_SCAN".
This series should fix two known reports from syzbot on the new
PAGEMAP_SCAN ioctl():
https://lore.kernel.org/all/[email protected]/
https://lore.kernel.org/all/[email protected]/
The 3rd patch is something I found when testing these patches.
This patch (of 3):
The new ioctl(PAGEMAP_SCAN) relies on the vma wr-protect capability provided
by userfaultfd; however, the vma test did not explicitly require wr-protect
to be enabled on the vma, even when the PM_SCAN_WP_MATCHING flag is set.
This means the pagemap code could apply the uffd-wp bit to a page in a vma
that is not registered with userfaultfd at all.
Then, once the pte gets written and the page fault is resolved (in whatever
way), the write bit is applied even though the uffd-wp bit is set, and we
end up with a pte that has both the UFFD_WP and WRITE bits set. Anything
that later looks up the pte for the uffd-wp bit will trigger the warning:
WARNING: CPU: 1 PID: 5071 at arch/x86/include/asm/pgtable.h:403 pte_uffd_wp arch/x86/include/asm/pgtable.h:403 [inline]
Fix it by doing a proper check of the vma attributes when
PM_SCAN_WP_MATCHING is specified.
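A hedged sketch of such a vma check, assuming the userfaultfd_wp() helper and illustrative surrounding names:

  /* refuse wr-protect requests on vmas without uffd-wp enabled */
  if ((p->arg.flags & PM_SCAN_WP_MATCHING) && !userfaultfd_wp(vma))
          return -EPERM;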
Link: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/[email protected]
Fixes: 52526ca7fdb9 ("fs/proc/task_mmu: implement IOCTL to get and optionally clear info about PTEs")
Signed-off-by: Peter Xu <[email protected]>
Reported-by: [email protected]
Reviewed-by: David Hildenbrand <[email protected]>
Reviewed-by: Andrei Vagin <[email protected]>
Reviewed-by: Muhammad Usama Anjum <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
Erhard reported that the 6.7-rc1 kernel panics on boot when built with
clang-16. The problem was not reproducible with gcc.
[ 5.975049] general protection fault, probably for non-canonical address 0xf555515555555557: 0000 [#1] SMP KASAN PTI
[ 5.976422] KASAN: maybe wild-memory-access in range [0xaaaaaaaaaaaaaab8-0xaaaaaaaaaaaaaabf]
[ 5.977475] CPU: 3 PID: 1 Comm: systemd Not tainted 6.7.0-rc1-Zen3 #77
[ 5.977860] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.13.0-1ubuntu1.1 04/01/2014
[ 5.977860] RIP: 0010:obj_cgroup_charge_pages+0x27/0x2d5
[ 5.977860] Code: 90 90 90 55 41 57 41 56 41 55 41 54 53 89 d5 41 89 f6 49 89 ff 48 b8 00 00 00 00 00 fc ff df 49 83 c7 10 4d3
[ 5.977860] RSP: 0018:ffffc9000001fb18 EFLAGS: 00010a02
[ 5.977860] RAX: dffffc0000000000 RBX: aaaaaaaaaaaaaaaa RCX: ffff8883eb9a8b08
[ 5.977860] RDX: 0000000000000005 RSI: 0000000000400cc0 RDI: aaaaaaaaaaaaaaaa
[ 5.977860] RBP: 0000000000000005 R08: 3333333333333333 R09: 0000000000000000
[ 5.977860] R10: 0000000000000000 R11: 0000000000000000 R12: ffff8883eb9a8b18
[ 5.977860] R13: 1555555555555557 R14: 0000000000400cc0 R15: aaaaaaaaaaaaaaba
[ 5.977860] FS: 00007f2976438b40(0000) GS:ffff8883eb980000(0000) knlGS:0000000000000000
[ 5.977860] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 5.977860] CR2: 00007f29769e0060 CR3: 0000000107222003 CR4: 0000000000370eb0
[ 5.977860] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 5.977860] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 5.977860] Call Trace:
[ 5.977860] <TASK>
[ 5.977860] ? __die_body+0x16/0x75
[ 5.977860] ? die_addr+0x4a/0x70
[ 5.977860] ? exc_general_protection+0x1c9/0x2d0
[ 5.977860] ? cgroup_mkdir+0x455/0x9fb
[ 5.977860] ? __x64_sys_mkdir+0x69/0x80
[ 5.977860] ? asm_exc_general_protection+0x26/0x30
[ 5.977860] ? obj_cgroup_charge_pages+0x27/0x2d5
[ 5.977860] obj_cgroup_charge+0x114/0x1ab
[ 5.977860] pcpu_alloc+0x1a6/0xa65
[ 5.977860] ? mem_cgroup_css_alloc+0x1eb/0x1140
[ 5.977860] ? cgroup_apply_control_enable+0x26b/0x7c0
[ 5.977860] mem_cgroup_css_alloc+0x23f/0x1140
[ 5.977860] cgroup_apply_control_enable+0x26b/0x7c0
[ 5.977860] ? cgroup_kn_set_ugid+0x2d/0x1a0
[ 5.977860] cgroup_mkdir+0x455/0x9fb
[ 5.977860] ? __cfi_cgroup_mkdir+0x10/0x10
[ 5.977860] kernfs_iop_mkdir+0x130/0x170
[ 5.977860] vfs_mkdir+0x405/0x530
[ 5.977860] do_mkdirat+0x188/0x1f0
[ 5.977860] __x64_sys_mkdir+0x69/0x80
[ 5.977860] do_syscall_64+0x7d/0x100
[ 5.977860] ? do_syscall_64+0x89/0x100
[ 5.977860] ? do_syscall_64+0x89/0x100
[ 5.977860] ? do_syscall_64+0x89/0x100
[ 5.977860] ? do_syscall_64+0x89/0x100
[ 5.977860] entry_SYSCALL_64_after_hwframe+0x4b/0x53
[ 5.977860] RIP: 0033:0x7f297671defb
[ 5.977860] Code: 8b 05 39 7f 0d 00 bb ff ff ff ff 64 c7 00 16 00 00 00 e9 61 ff ff ff e8 23 0c 02 00 0f 1f 00 f3 0f 1e fa b88
[ 5.977860] RSP: 002b:00007ffee6242bb8 EFLAGS: 00000246 ORIG_RAX: 0000000000000053
[ 5.977860] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f297671defb
[ 5.977860] RDX: 0000000000000000 RSI: 00000000000001ed RDI: 000055c6b449f0e0
[ 5.977860] RBP: 00007ffee6242bf0 R08: 000000000000000e R09: 0000000000000000
[ 5.977860] R10: 0000000000000000 R11: 0000000000000246 R12: 000055c6b445db80
[ 5.977860] R13: 00000000000003a0 R14: 00007f2976a68651 R15: 00000000000003a0
[ 5.977860] </TASK>
[ 5.977860] Modules linked in:
[ 6.014095] ---[ end trace 0000000000000000 ]---
[ 6.014701] RIP: 0010:obj_cgroup_charge_pages+0x27/0x2d5
[ 6.015348] Code: 90 90 90 55 41 57 41 56 41 55 41 54 53 89 d5 41 89 f6 49 89 ff 48 b8 00 00 00 00 00 fc ff df 49 83 c7 10 4d3
[ 6.017575] RSP: 0018:ffffc9000001fb18 EFLAGS: 00010a02
[ 6.018255] RAX: dffffc0000000000 RBX: aaaaaaaaaaaaaaaa RCX: ffff8883eb9a8b08
[ 6.019120] RDX: 0000000000000005 RSI: 0000000000400cc0 RDI: aaaaaaaaaaaaaaaa
[ 6.019983] RBP: 0000000000000005 R08: 3333333333333333 R09: 0000000000000000
[ 6.020849] R10: 0000000000000000 R11: 0000000000000000 R12: ffff8883eb9a8b18
[ 6.021747] R13: 1555555555555557 R14: 0000000000400cc0 R15: aaaaaaaaaaaaaaba
[ 6.022609] FS: 00007f2976438b40(0000) GS:ffff8883eb980000(0000) knlGS:0000000000000000
[ 6.023593] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 6.024296] CR2: 00007f29769e0060 CR3: 0000000107222003 CR4: 0000000000370eb0
[ 6.025279] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 6.026139] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 6.027000] Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b
Actually the problem is caused by uninitialized local variable in
current_obj_cgroup(). If the root memory cgroup is set as an active
memory cgroup for a charging scope (as in the trace, where systemd tries
to create the first non-root cgroup, so the parent cgroup is the root
cgroup), the "for" loop is skipped and an uninitialized objcg is returned,
causing a panic down the accounting stack.
The fix is trivial: initialize the objcg variable to NULL unconditionally
before the "for" loop.
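Schematically, simplified from current_obj_cgroup() with elisions marked:

  struct mem_cgroup *memcg;
  struct obj_cgroup *objcg = NULL;  /* the fix: initialize unconditionally */

  /* ... memcg is resolved from the current charging scope ... */

  for (; !mem_cgroup_is_root(memcg); memcg = parent_mem_cgroup(memcg)) {
          objcg = rcu_dereference(memcg->objcg);
          if (likely(objcg))
                  break;
  }
  /* when memcg is the root cgroup, the loop never runs: objcg stays NULL */
  return objcg;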
[[email protected]: remove redundant assignment]
Link: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/[email protected]
Fixes: e86828e5446d ("mm: kmem: scoped objcg protection")
Signed-off-by: Roman Gushchin (Cruise) <[email protected]>
Signed-off-by: Vlastimil Babka <[email protected]>
Reported-by: Erhard Furtner <[email protected]>
Closes: https://github.com/ClangBuiltLinux/linux/issues/1959
Tested-by: Erhard Furtner <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
Acked-by: Shakeel Butt <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: Dennis Zhou <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Muchun Song <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
set_track_prepare() will call __alloc_pages(), which attempts to acquire
zone->lock (a spinlock), so move it outside object->lock (a raw_spinlock),
because it is not allowed to acquire spinlocks while holding raw_spinlocks
in RT mode.
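Roughly, the pattern being enforced (a sketch, not the exact diff):

  /* take the stack trace while no raw spinlock is held */
  trace_handle = set_track_prepare();

  raw_spin_lock_irqsave(&object->lock, flags);
  object->trace_handle = trace_handle;
  raw_spin_unlock_irqrestore(&object->lock, flags);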
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Liu Shixin <[email protected]>
Acked-by: Catalin Marinas <[email protected]>
Cc: Geert Uytterhoeven <[email protected]>
Cc: Kefeng Wang <[email protected]>
Cc: Patrick Wang <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
Patch series "Fix invalid wait context of set_track_prepare()".
Geert reported an invalid wait context [1] caused by moving
set_track_prepare() inside kmemleak_lock. This is not allowed because in
RT mode spinlocks can be preempted but raw_spinlocks cannot, so it is not
allowed to acquire spinlocks while holding raw_spinlocks. The second
patch fixes the same problem in kmemleak_update_trace().
This patch (of 2):
Move the initialisation of the object back to __alloc_object(), because
set_track_prepare() attempts to acquire zone->lock (a spinlock) while
__link_object() is holding kmemleak_lock (a raw_spinlock). This is not
allowed in RT mode.
This reverts commit 245245c2fffd00 ("mm/kmemleak: move the initialisation
of object to __link_object").
Link: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/[email protected]
Fixes: 245245c2fffd ("mm/kmemleak: move the initialisation of object to __link_object")
Signed-off-by: Liu Shixin <[email protected]>
Reported-by: Geert Uytterhoeven <[email protected]>
Closes: https://lore.kernel.org/linux-mm/CAMuHMdWj0UzwNaxUvcocTfh481qRJpOWwXxsJCTJfu1oCqvgdA@mail.gmail.com/ [1]
Acked-by: Catalin Marinas <[email protected]>
Cc: Kefeng Wang <[email protected]>
Cc: Patrick Wang <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
We have a report of this WARN() triggering. Let's print the offending
swp_entry_t to help diagnosis.
Link: https://lkml.kernel.org/r/[email protected]
Cc: Muhammad Usama Anjum <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
The routine __vma_private_lock tests for the existence of a reserve map
associated with a private hugetlb mapping. A pointer to the reserve map
is in vma->vm_private_data. __vma_private_lock was checking the pointer
for NULL. However, it is possible that the low bits of the pointer could
be used as flags. In such instances, vm_private_data is not NULL and not
a valid pointer. This results in the null-ptr-deref reported by syzbot:
general protection fault, probably for non-canonical address 0xdffffc000000001d:
0000 [#1] PREEMPT SMP KASAN
KASAN: null-ptr-deref in range [0x00000000000000e8-0x00000000000000ef]
CPU: 0 PID: 5048 Comm: syz-executor139 Not tainted 6.6.0-rc7-syzkaller-00142-g888cf78c29e2 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/09/2023
RIP: 0010:__lock_acquire+0x109/0x5de0 kernel/locking/lockdep.c:5004
...
Call Trace:
<TASK>
lock_acquire kernel/locking/lockdep.c:5753 [inline]
lock_acquire+0x1ae/0x510 kernel/locking/lockdep.c:5718
down_write+0x93/0x200 kernel/locking/rwsem.c:1573
hugetlb_vma_lock_write mm/hugetlb.c:300 [inline]
hugetlb_vma_lock_write+0xae/0x100 mm/hugetlb.c:291
__hugetlb_zap_begin+0x1e9/0x2b0 mm/hugetlb.c:5447
hugetlb_zap_begin include/linux/hugetlb.h:258 [inline]
unmap_vmas+0x2f4/0x470 mm/memory.c:1733
exit_mmap+0x1ad/0xa60 mm/mmap.c:3230
__mmput+0x12a/0x4d0 kernel/fork.c:1349
mmput+0x62/0x70 kernel/fork.c:1371
exit_mm kernel/exit.c:567 [inline]
do_exit+0x9ad/0x2a20 kernel/exit.c:861
__do_sys_exit kernel/exit.c:991 [inline]
__se_sys_exit kernel/exit.c:989 [inline]
__x64_sys_exit+0x42/0x50 kernel/exit.c:989
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x38/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x63/0xcd
Mask off low bit flags before checking for NULL pointer. In addition, the
reserve map only 'belongs' to the OWNER (parent in parent/child
relationships) so also check for the OWNER flag.
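A sketch of the corrected check; the mask name is an assumption, but the point is to ignore the low flag bits and require the OWNER flag before treating vm_private_data as a reserve map pointer:

  static bool __vma_private_lock(struct vm_area_struct *vma)
  {
          return !(vma->vm_flags & VM_MAYSHARE) &&
                  (get_vma_private_data(vma) & ~HPAGE_RESV_MASK) != 0 &&
                  is_vma_resv_set(vma, HPAGE_RESV_OWNER);
  }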
Link: https://lkml.kernel.org/r/[email protected]
Reported-by: [email protected]
Closes: https://lore.kernel.org/linux-mm/[email protected]/
Fixes: bf4916922c60 ("hugetlbfs: extend hugetlb_vma_lock to private VMAs")
Signed-off-by: Mike Kravetz <[email protected]>
Reviewed-by: Rik van Riel <[email protected]>
Cc: Edward Adam Davis <[email protected]>
Cc: Muchun Song <[email protected]>
Cc: Nathan Chancellor <[email protected]>
Cc: Nick Desaulniers <[email protected]>
Cc: Tom Rix <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
Add myself as the fallthrough maintainer for material under lib/.
Cc: Joe Perches <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
Mask the "DSP Virtual Mailbox 2 write" interrupt before issuing the
hibernate command to the DSP. The interrupt is unmasked when exiting
runtime suspend, as it is required for DSP operation.
Without this change the DSP fires an interrupt when hibernating, causing
the system to spin between runtime suspend and runtime resume.
Signed-off-by: Ricardo Rivera-Matos <[email protected]>
Acked-by: Charles Keepax <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Mark Brown <[email protected]>
|
|
Use the SYSTEM_SLEEP_PM_OPS handlers to prevent handling an IRQ
when the system is in the middle of suspending or resuming.
Signed-off-by: Ricardo Rivera-Matos <[email protected]>
Acked-by: Charles Keepax <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Mark Brown <[email protected]>
|
|
Make use of the recently introduced EXPORT_GPL_DEV_PM_OPS() macro, to
conditionally export the runtime/system PM functions.
Replace the old SET_{RUNTIME,SYSTEM_SLEEP,NOIRQ_SYSTEM_SLEEP}_PM_OPS()
helpers with their modern alternatives and get rid of the now
unnecessary '__maybe_unused' annotations on all PM functions.
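The overall pattern, pm_ptr() included, looks roughly like the following; the driver and function names are purely illustrative:

  static int foo_runtime_suspend(struct device *dev) { return 0; }
  static int foo_runtime_resume(struct device *dev)  { return 0; }
  static int foo_suspend(struct device *dev)         { return 0; }
  static int foo_resume(struct device *dev)          { return 0; }

  /* exported only when CONFIG_PM is enabled, no __maybe_unused needed */
  EXPORT_GPL_DEV_PM_OPS(foo_pm_ops) = {
          RUNTIME_PM_OPS(foo_runtime_suspend, foo_runtime_resume, NULL)
          SYSTEM_SLEEP_PM_OPS(foo_suspend, foo_resume)
  };

  static struct platform_driver foo_driver = {
          .driver = {
                  .name = "foo",
                  .pm   = pm_ptr(&foo_pm_ops), /* NULL if CONFIG_PM is off */
          },
  };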
Additionally, use the pm_ptr() macro to fix the following errors when
building with CONFIG_PM disabled:
Signed-off-by: Ricardo Rivera-Matos <[email protected]>
Acked-by: Charles Keepax <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Mark Brown <[email protected]>
|
|
This fixes a bug where rebalance would loop repeatedly on the same
extents.
Signed-off-by: Daniel Hill <[email protected]>
Signed-off-by: Kent Overstreet <[email protected]>
|
|
When using audio from ADV7533 or ADV7535 and describing the audio
card via simple-audio-card, the '#sound-dai-cells' needs to be passed.
Document the '#sound-dai-cells' property to fix the following
dt-schema warning:
imx8mn-beacon-kit.dtb: hdmi@3d: '#sound-dai-cells' does not match any of the regexes: 'pinctrl-[0-9]+'
from schema $id: http://devicetree.org/schemas/display/bridge/adi,adv7533.yaml#
Signed-off-by: Fabio Estevam <[email protected]>
Acked-by: Adam Ford <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Rob Herring <[email protected]>
|
|
i.MX23 has two LCDIF interrupts instead of a single one like other
i.MX devices.
Take this into account for properly describing the i.MX23 LCDIF
interrupts.
This fixes the following dt-schema warning:
imx23-olinuxino.dtb: lcdif@80030000: interrupts: [[46], [45]] is too long
from schema $id: http://devicetree.org/schemas/display/fsl,lcdif.yaml#
Signed-off-by: Fabio Estevam <[email protected]>
Reviewed-by: Marek Vasut <[email protected]>
Acked-by: Conor Dooley <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Rob Herring <[email protected]>
|
|
https://git.kernel.org/pub/scm/linux/kernel/git/song/md into block-6.7
Pull MD fixes from Song:
"This set from Yu Kuai fixes issues around sync_work, which was introduced
in 6.7 kernels."
* tag 'md-fixes-20231206' of https://git.kernel.org/pub/scm/linux/kernel/git/song/md:
md: fix stopping sync thread
md: don't leave 'MD_RECOVERY_FROZEN' in error path of md_set_readonly()
md: fix missing flush of sync_work
|
|
Add a test that tries to trigger the BUG_ON during early map update in the
prog_array_map_poke_run function.
The idea is to share a prog array map between a thread that constantly
updates it and another one loading a program that uses that prog array.
Eventually we will hit a point where the program is ok to be updated
(the poke->tailcall_target_stable check) but its address is not yet
registered in kallsyms, so bpf_arch_text_poke returns -EINVAL and causes
an imbalance for the next tail call update check, which will then fail
with -EBUSY in bpf_arch_text_poke, as described in the previous fix.
Signed-off-by: Jiri Olsa <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
Acked-by: Ilya Leoshkevich <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]
|
|
Lee pointed out an issue found by syzkaller [0], hitting a BUG in the prog
array map poke update in the prog_array_map_poke_run function due to an
error value returned from the bpf_arch_text_poke function.
There's a race window where bpf_arch_text_poke can fail due to missing
bpf program kallsym symbols, which is accounted for with a check for
-EINVAL in that BUG_ON call.
The problem is that in such a case we won't update the tail call jump,
causing an imbalance for the next tail call update check, which will
fail with -EBUSY in bpf_arch_text_poke.
I'm hitting the following race during program load:
  CPU 0                                  CPU 1

  bpf_prog_load
    bpf_check
      do_misc_fixups
        prog_array_map_poke_track
                                         map_update_elem
                                           bpf_fd_array_map_update_elem
                                             prog_array_map_poke_run
                                               bpf_arch_text_poke returns -EINVAL
        bpf_prog_kallsyms_add
After bpf_arch_text_poke (CPU 1) fails to update the tail call jump, the next
poke update fails on the expected jump instruction check in bpf_arch_text_poke
with -EBUSY and triggers the BUG_ON in prog_array_map_poke_run.
A similar race exists on program unload.
Fix this by moving the update into a bpf_arch_poke_desc_update function,
which makes sure we call __bpf_arch_text_poke that skips the bpf address
check.
Each architecture has a slightly different approach wrt looking up the bpf
address in bpf_arch_text_poke, so instead of splitting the function or
adding a new 'checkip' argument as in the previous version, it seems best
to move the whole map_poke_run update into arch-specific code.
[0] https://syzkaller.appspot.com/bug?extid=97a4fe20470e9bc30810
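A hedged sketch of that hook as a weak default, which an arch's JIT can then override with a version that reaches __bpf_arch_text_poke() directly; treat the exact signature as an assumption taken from the description above:

  /* generic weak hook; archs that JIT tail calls provide their own */
  void __weak bpf_arch_poke_desc_update(struct bpf_jit_poke_descriptor *poke,
                                        struct bpf_prog *new, struct bpf_prog *old)
  {
          WARN_ON_ONCE(1);
  }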
Fixes: ebf7d1f508a7 ("bpf, x64: rework pro/epilogue and tailcall handling in JIT")
Reported-by: [email protected]
Signed-off-by: Jiri Olsa <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
Acked-by: Yonghong Song <[email protected]>
Cc: Lee Jones <[email protected]>
Cc: Maciej Fijalkowski <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]
|
|
A reservation goes through a 3 step lifetime:
- generated during delalloc
- released/counted by ordered_extent allocation
- freed by running delayed ref
That third step depends on must_insert_reserved on the head ref, so the
head ref with that field set owns the reservation. Once you prepare to
run the head ref, must_insert_reserved is unset, which means that
running the ref must free the reservation, whether or not it succeeds,
or else the reservation is leaked. That results in either a risk of
spurious ENOSPC if the fs stays writeable or a warning on unmount if it
is readonly.
The existing squota code was aware of these invariants, but missed a few
cases. Improve it by adding a helper function to use in the cleanup
paths and call it from the existing early returns in running delayed
refs. This also simplifies btrfs_record_squota_delta and struct
btrfs_quota_delta.
This fixes (or at least improves the reliability of) generic/475 with
"mkfs -O squota". On my machine, that test failed ~4/10 times without
this patch and passed 100/100 times with it.
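A hypothetical sketch of such a cleanup helper; the helper name and the head-ref fields used here are assumptions, not the actual patch:

  static void free_head_ref_squota_rsv(struct btrfs_fs_info *fs_info,
                                       struct btrfs_delayed_ref_head *head)
  {
          /* only heads that still own a reservation need cleanup */
          if (!head->must_insert_reserved)
                  return;

          btrfs_qgroup_free_refroot(fs_info, head->owning_root,
                                    head->reserved_bytes,
                                    BTRFS_QGROUP_RSV_DATA);
  }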
Signed-off-by: Boris Burkov <[email protected]>
Signed-off-by: David Sterba <[email protected]>
|
|
The EXTENT_QGROUP_RESERVED bit is used to "lock" regions of the file for
duplicate reservations. That is, two writes to that range in one
transaction shouldn't create two reservations, as the reservation will
only be freed once when the write finally goes down. Therefore, it is
never OK to clear that bit without freeing the associated qgroup
reserve. At this point, we don't want to be freeing the reserve, so mask
off the bit.
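Schematically, with illustrative variable names:

  /* when clearing extent bits here, never drop the qgroup "lock" bit */
  clear_bits &= ~EXTENT_QGROUP_RESERVED;
  clear_extent_bit(&inode->io_tree, start, end, clear_bits, NULL);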
CC: [email protected] # 5.15+
Reviewed-by: Qu Wenruo <[email protected]>
Signed-off-by: Boris Burkov <[email protected]>
Signed-off-by: David Sterba <[email protected]>
|
|
If we abort a transaction, we never run the code that frees the pertrans
qgroup reservation. This results in warnings on unmount as that
reservation has been leaked. The leak isn't a huge issue since the fs is
read-only, but it's better to clean it up when we know we can/should. Do
it during the cleanup_transaction step of aborting.
CC: [email protected] # 5.15+
Reviewed-by: Qu Wenruo <[email protected]>
Signed-off-by: Boris Burkov <[email protected]>
Signed-off-by: David Sterba <[email protected]>
|
|
The reserved data counter and input parameter is a u64, but we
inadvertently accumulate it in an int. Overflowing that int results in
freeing the wrong amount of data and breaking reserve accounting.
Unfortunately, this overflow rot spreads from there, as the qgroup
release/free functions rely on returning an int to take advantage of
negative values for error codes.
Therefore, the full fix is to return the "released" or "freed" amount by
a u64 argument and to return 0 or negative error code via the return
value.
Most of the call sites simply ignore the return value, though some
of them handle the error and count the returned bytes. Change all of
them accordingly.
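As a hedged sketch of the shape of that interface change, using btrfs_qgroup_free_data() as the example and treating the new parameter name as an assumption:

  /* before: released bytes squeezed into the int return value */
  int btrfs_qgroup_free_data(struct btrfs_inode *inode,
                             struct extent_changeset *reserved,
                             u64 start, u64 len);

  /* after: 0 or a negative errno returned, bytes reported via a u64 */
  int btrfs_qgroup_free_data(struct btrfs_inode *inode,
                             struct extent_changeset *reserved,
                             u64 start, u64 len, u64 *freed);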
CC: [email protected] # 6.1+
Reviewed-by: Qu Wenruo <[email protected]>
Signed-off-by: Boris Burkov <[email protected]>
Reviewed-by: David Sterba <[email protected]>
Signed-off-by: David Sterba <[email protected]>
|
|
An ordered extent completing is a critical moment in qgroup reserve
handling, as the ownership of the reservation is handed off from the
ordered extent to the delayed ref. In the happy path we release (unlock)
but do not free (decrement counter) the reservation, and the delayed ref
drives the free. However, on an error, we don't create a delayed ref,
since there is no ref to add. Therefore, free on the error path.
CC: [email protected] # 6.1+
Reviewed-by: Qu Wenruo <[email protected]>
Signed-off-by: Boris Burkov <[email protected]>
Signed-off-by: David Sterba <[email protected]>
|
|
We need to disable this after the last eviction
call, but before we disable the SDMA IP.
Fixes: b70438004a14 ("drm/amdgpu: move buffer funcs setting up a level")
Link: https://lore.kernel.org/r/[email protected]
Reviewed-by: Luben Tuikov <[email protected]>
Tested-by: Phillip Susi <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
Cc: Phillip Susi <[email protected]>
Cc: Luben Tuikov <[email protected]>
|
|
MP0 v13.0.6 SOCs don't support DRM MGCG.
Signed-off-by: Lijo Lazar <[email protected]>
Reviewed-by: Hawking Zhang <[email protected]>
Acked-by: Alex Deucher <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
|
|
HDP 4.4.2 clockgating is enabled by default, update the flags
accordingly.
Signed-off-by: Lijo Lazar <[email protected]>
Reviewed-by: Hawking Zhang <[email protected]>
Acked-by: Alex Deucher <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
|
|
Check if the function is implemented before making the call.
Signed-off-by: Lijo Lazar <[email protected]>
Reviewed-by: Hawking Zhang <[email protected]>
Acked-by: Alex Deucher <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
|
|
Only PSPv13.0.6 SOCs take a longer time to reach steady state. Other
PSPv13 based SOCs don't need extended wait. Also, reduce PSPv13.0.6 wait
time.
Cc: [email protected]
Fixes: fc5988907156 ("drm/amdgpu: update retry times for psp vmbx wait")
Fixes: d8c1925ba8cd ("drm/amdgpu: update retry times for psp BL wait")
Link: https://lore.kernel.org/amd-gfx/[email protected]/
Signed-off-by: Lijo Lazar <[email protected]>
Reviewed-by: Asad Kamal <[email protected]>
Reviewed-by: Mario Limonciello <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
|
|
Does the same thing as:
commit 6740ec97bcdb ("drm/amd/display: Increase frame warning limit with KASAN or KCSAN in dml2")
Reviewed-by: Harry Wentland <[email protected]>
Reported-by: kernel test robot <[email protected]>
Closes: https://lore.kernel.org/oe-kbuild-all/[email protected]/
Fixes: 67e38874b85b ("drm/amd/display: Increase num voltage states to 40")
Signed-off-by: Alex Deucher <[email protected]>
Cc: Alvin Lee <[email protected]>
Cc: Hamza Mahfooz <[email protected]>
Cc: Samson Tam <[email protected]>
Cc: Harry Wentland <[email protected]>
|
|
Sort the error data list to optimize the printing order.
Signed-off-by: Yang Wang <[email protected]>
Reviewed-by: Hawking Zhang <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
|
|
Boot time error query is not available until fw a10109
Signed-off-by: Hawking Zhang <[email protected]>
Reviewed-by: Stanley Yang <[email protected]>
Reviewed-by: Yang Wang <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
|
|
Support new MCA SMU error code decoding from SMU 85.86.0 for smu v13.0.6.
Signed-off-by: Yang Wang <[email protected]>
Reviewed-by: Hawking Zhang <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
|
|
Increment the driver interface version and add new members to the metrics
table.
Signed-off-by: Li Ma <[email protected]>
Reviewed-by: Yifan Zhang <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
|
|
[Why]
UBSAN errors observed in dmesg.
array-index-out-of-bounds in dml2/display_mode_core.c
[How]
Fix the index.
Tested-by: Daniel Wheeler <[email protected]>
Acked-by: Rodrigo Siqueira <[email protected]>
Signed-off-by: Roman Li <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
|
|
[WHY]
Some eDP panels' ext caps are never written with an initial value, so the
value at dpcd_addr (0x317) is random. This means the eDP sometimes claims
to be OLED, miniLED, etc., which causes the wrong backlight control
interface to be used.
[HOW]
Add a new panel patch to remove the sink ext caps (HDR, OLED, etc.).
Tested-by: Daniel Wheeler <[email protected]>
Reviewed-by: Sun peng Li <[email protected]>
Acked-by: Rodrigo Siqueira <[email protected]>
Signed-off-by: Ivan Lipski <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
|
|
VBIOS has suggested using channel_width=2 for any ASIC that uses vram
info 3.0, because channel_width in the vram table no longer represents
the memory width.
Tested-by: Daniel Wheeler <[email protected]>
Reviewed-by: Samson Tam <[email protected]>
Acked-by: Rodrigo Siqueira <[email protected]>
Signed-off-by: Alvin Lee <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
|
|
Currently the sync thread is stopped from multiple contexts:
- idle_sync_thread
- frozen_sync_thread
- __md_stop_writes
- md_set_readonly
- do_md_stop
And there are some problems:
1) sync_work is flushed while reconfig_mutex is grabbed, which can
deadlock because the work function will grab reconfig_mutex as well.
2) md_reap_sync_thread() can't be called directly while md_do_sync() is
not finished yet, for example, commit 130443d60b1b ("md: refactor
idle/frozen_sync_thread() to fix deadlock").
3) If MD_RECOVERY_RUNNING is not set, there is no need to stop
sync_thread at all because sync_thread must not be registered.
Factor out a helper stop_sync_thread(), so that the above contexts all
behave the same. Fix 1) by flushing sync_work after reconfig_mutex is
released, before waiting for sync_thread to be done; fix 2) by letting
the daemon thread unregister sync_thread; fix 3) by always checking
MD_RECOVERY_RUNNING first.
Fixes: db5e653d7c9f ("md: delay choosing sync action to md_start_sync()")
Signed-off-by: Yu Kuai <[email protected]>
Signed-off-by: Song Liu <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
If md_set_readonly() failed, the array could still be read-write; however,
'MD_RECOVERY_FROZEN' could still be set, which leaves the array in an
abnormal state where sync or recovery can't continue anymore.
Hence make sure the flag is cleared after md_set_readonly() returns.
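Roughly, on the error path (a sketch):

  /* don't leak the frozen state out of a failed transition */
  if (err)
          clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);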
Fixes: 88724bfa68be ("md: wait for pending superblock updates before switching to read-only")
Signed-off-by: Yu Kuai <[email protected]>
Acked-by: Xiao Ni <[email protected]>
Signed-off-by: Song Liu <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
Commit ac619781967b ("md: use separate work_struct for md_start_sync()")
uses a new sync_work to replace del_work; however, stop_sync_thread() and
__md_stop_writes() were trying to wait for sync_thread to be done, hence
they should switch to using sync_work as well.
Note that md_start_sync() from sync_work will grab 'reconfig_mutex', hence
other contexts can't hold the same lock to flush the work; this will be
fixed in later patches.
Fixes: ac619781967b ("md: use separate work_struct for md_start_sync()")
Signed-off-by: Yu Kuai <[email protected]>
Acked-by: Xiao Ni <[email protected]>
Signed-off-by: Song Liu <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
Disable MCBP (mid command buffer preemption) by default, as old Mesa hangs
with it. We shall not enable a feature that breaks old usermode drivers.
Fixes: 50a7c8765ca6 ("drm/amdgpu: enable mcbp by default on gfx9")
Signed-off-by: Jiadong Zhu <[email protected]>
Acked-by: Alex Deucher <[email protected]>
Signed-off-by: Alex Deucher <[email protected]>
Cc: [email protected]
|
|
Since 64 bit cmpxchg() is very expensive on 32bit architectures, the
timestamp used by the ring buffer does some interesting tricks to be able
to still have an atomic 64 bit number. It originally just used 60 bits and
broke it up into two 32 bit words where the extra 2 bits were used for
synchronization. But this was not enough for all use cases, and all 64
bits were required.
The 32bit version of the ring buffer timestamp was then broken up into 3
32bit words using the same counter trick. But one update was not done. The
check to see if the read operation was done without interruption only
checked the first two words and not the last one (like it had before this
update). Fix it by making sure all three updates happen without
interruption by comparing the initial counter with the last updated
counter.
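A conceptual sketch of the read side; the field names are illustrative of the 32-bit rb_time_t layout described above:

  unsigned long top, bottom, msb, c;

  /* the three words plus the counter must be read without an update
   * slipping in between */
  do {
          c      = local_read(&t->cnt);
          top    = local_read(&t->top);
          bottom = local_read(&t->bottom);
          msb    = local_read(&t->msb);
  } while (c != local_read(&t->cnt));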
Link: https://lore.kernel.org/linux-trace-kernel/[email protected]
Cc: [email protected]
Cc: Masami Hiramatsu <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Mathieu Desnoyers <[email protected]>
Fixes: f03f2abce4f39 ("ring-buffer: Have 32 bit time stamps use all 64 bits")
Signed-off-by: Steven Rostedt (Google) <[email protected]>
|
|
There's a race where if an event is discarded from the ring buffer and an
interrupt were to happen at that time and insert an event, the time stamp
is still used from the discarded event as an offset. This can screw up the
timings.
If the event is going to be discarded, set the "before_stamp" to zero.
When a new event comes in, it compares the "before_stamp" with the
"write_stamp" and if they are not equal, it will insert an absolute
timestamp. This will prevent the timings from getting out of sync due to
the discarded event.
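The core of the described change is tiny; roughly, assuming the rb_time_set() helper:

  /* a zeroed before_stamp can never match write_stamp, so the next
   * writer falls back to an absolute timestamp */
  rb_time_set(&cpu_buffer->before_stamp, 0);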
Link: https://lore.kernel.org/linux-trace-kernel/[email protected]
Cc: [email protected]
Cc: Masami Hiramatsu <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Mathieu Desnoyers <[email protected]>
Fixes: 6f6be606e763f ("ring-buffer: Force before_stamp and write_stamp to be different on discard")
Signed-off-by: Steven Rostedt (Google) <[email protected]>
|
|
Reconnect worker currently assumes that the server struct
is alive and only takes reference on the server if it needs
to call smb2_reconnect.
With the new ability to disable channels based on whether the
server has multichannel disabled, this becomes a problem when
we need to disable established channels. While disabling the
channels and deallocating the server, there could be reconnect
work that could not be cancelled (because it started).
This change forces the reconnect worker to unconditionally
take a reference on the server when it runs.
Also, this change now allows smb2_reconnect to know if it was
called by the reconnect worker. Based on this, the cifs_put_tcp_session
can decide whether it can cancel the reconnect work synchronously or not.
Signed-off-by: Shyam Prasad N <[email protected]>
Signed-off-by: Steve French <[email protected]>
|
|
This reverts commit 19a4b9d6c372cab6a3b2c9a061a236136fe95274.
This earlier commit was making an assumption that each mod_delayed_work
called for the reconnect work would result in smb2_reconnect_server
being called twice. This assumption turns out to be untrue. So reverting
this change for now.
I will submit a follow-up patch to fix the actual problem in a different
way.
Signed-off-by: Shyam Prasad N <[email protected]>
Signed-off-by: Steve French <[email protected]>
|
|
A concurrently running sock_orphan() may NULL the sk_socket pointer in
between check and deref. Follow other users (like nft_meta.c for
instance) and acquire sk_callback_lock before dereferencing sk_socket.
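A hedged sketch of the guarded dereference, with the surrounding match logic and the unowned-socket policy elided:

  read_lock_bh(&sk->sk_callback_lock);
  if (!sk->sk_socket || !sk->sk_socket->file) {
          read_unlock_bh(&sk->sk_callback_lock);
          return false;   /* simplified; real policy for unowned sockets elided */
  }
  /* ... compare the socket file's fsuid/fsgid with the match info ... */
  read_unlock_bh(&sk->sk_callback_lock);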
Fixes: 0265ab44bacc ("[NETFILTER]: merge ipt_owner/ip6t_owner in xt_owner")
Reported-by: Jann Horn <[email protected]>
Signed-off-by: Phil Sutter <[email protected]>
Signed-off-by: Pablo Neira Ayuso <[email protected]>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/sudeep.holla/linux into arm/fixes
Arm FF-A fixes for v6.7
A bunch of fixes addressing issues around the notification support that
was added this cycle. They address an issue in partition ID handling in
ffa_notification_info_get(), the notifications cleanup path, and the size
of the allocation in ffa_partitions_cleanup().
It also adds a check for the notification enabled state so that drivers
registering callbacks can be rejected if notifications are not
enabled/supported.
It also moves the partitions setup operation after the notification
initialisation so that the driver has the correct state for notification
enabled/supported before the partitions are initialised/setup.
It also now allows FF-A initialisation to complete successfully even
when the notification initialisation fails, as notifications are optional
in the specification. The initial support allowed this only if the
firmware didn't support notifications.
Finally, it also adds a fix for smatch warning by declaring ffa_bus_type
structure in the header.
* tag 'ffa-fixes-6.7' of git://git.kernel.org/pub/scm/linux/kernel/git/sudeep.holla/linux:
firmware: arm_ffa: Fix ffa_notification_info_get() IDs handling
firmware: arm_ffa: Fix the size of the allocation in ffa_partitions_cleanup()
firmware: arm_ffa: Fix FFA notifications cleanup path
firmware: arm_ffa: Add checks for the notification enabled state
firmware: arm_ffa: Setup the partitions after the notification initialisation
firmware: arm_ffa: Allow FF-A initialisation even when notification fails
firmware: arm_ffa: Declare ffa_bus_type structure in the header
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Arnd Bergmann <[email protected]>
|
|
The commit 855a40cd8ccc ("spi: cadence: Add SPI transfer delays") adds a
delay after each transfer into the driver's transfer_one(). However,
the delay is already done in the SPI core. So this commit unnecessarily
doubles the delay amount. Revert this commit.
Signed-off-by: Nam Cao <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Mark Brown <[email protected]>
|
|
This reverts commit 505c83212da5bfca95109421b8f5d9f8c6cdfef2.
This is not an official topology from the SOF project. Topologies are
named based on the card configuration and are NOT board specific.
Signed-off-by: Curtis Malainey <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Mark Brown <[email protected]>
|