|
Move intel_modeset_all_pipes() to a central place so that we can
use it elsewhere as well. No functional changes.
Cc: Stanislav Lisovskiy <[email protected]>
Signed-off-by: Ville Syrjälä <[email protected]>
Signed-off-by: Clinton Taylor <[email protected]>
Signed-off-by: Matt Roper <[email protected]>
Reviewed-by: Anusha Srivatsa <[email protected]>
Link: https://patchwork.freedesktop.org/patch/msgid/[email protected]
|
|
Alderlake P has a modular FIA like TGL, but it is always modular in all
SKUs; unlike TGL, there is no need to read a register to check whether it
is monolithic or modular.
BSpec: 55480
BSpec: 50572
Cc: Imre Deak <[email protected]>
Signed-off-by: José Roberto de Souza <[email protected]>
Signed-off-by: Clinton Taylor <[email protected]>
Signed-off-by: Matt Roper <[email protected]>
Reviewed-by: Imre Deak <[email protected]>
Link: https://patchwork.freedesktop.org/patch/msgid/[email protected]
|
|
When DP_PHY_MODE_STATUS_NOT_SAFE is set, it means that display
has control over the TC phy.
The "not safe" naming is confusing; using "ownership" makes it easier
to read. Also, future platforms will have a new register that does the
same job as DP_PHY_MODE_STATUS_NOT_SAFE but with the ownership name.
BSpec: 49294
Cc: Imre Deak <[email protected]>
Signed-off-by: José Roberto de Souza <[email protected]>
Signed-off-by: Clinton Taylor <[email protected]>
Signed-off-by: Matt Roper <[email protected]>
Reviewed-by: Imre Deak <[email protected]>
Link: https://patchwork.freedesktop.org/patch/msgid/[email protected]
|
|
ADL-P has 3 possible refclk frequencies: 19.2MHz,
24MHz and 38.4MHz.
While we're at it, remove the drm_WARNs. They've never actually helped
us catch any problems, but it's very easy to forget to update them
properly for new platforms.
BSpec: 55409, 49208
Cc: Matt Roper <[email protected]>
Cc: Clinton Taylor <[email protected]>
Cc: José Roberto de Souza <[email protected]>
Signed-off-by: Anusha Srivatsa <[email protected]>
Signed-off-by: Clinton Taylor <[email protected]>
Signed-off-by: Matt Roper <[email protected]>
Reviewed-by: Mika Kahola <[email protected]>
Link: https://patchwork.freedesktop.org/patch/msgid/[email protected]
|
|
ADL-P further extends the bits in PLANE_WM that represent blocks and
lines; we need to extend our masks accordingly. Since these bits are
reserved and MBZ on earlier platforms, it's safe to use the larger
bitmask on all platforms.
Bspec: 50419
Cc: Matt Atwood <[email protected]>
Signed-off-by: Matt Roper <[email protected]>
Signed-off-by: Clinton Taylor <[email protected]>
Reviewed-by: Anusha Srivatsa <[email protected]>
Link: https://patchwork.freedesktop.org/patch/msgid/[email protected]
|
|
This will allow proper DDI initialization based on vbt information.
Cc: Uma Shankar <[email protected]>
Signed-off-by: José Roberto de Souza <[email protected]>
Signed-off-by: Matt Roper <[email protected]>
Reviewed-by: Anusha Srivatsa <[email protected]>
Link: https://patchwork.freedesktop.org/patch/msgid/[email protected]
|
|
We need the slice height to calculate a few RC parameters,
hence assign the slice height first.
Cc: Manasi Navare <[email protected]>
Signed-off-by: Vandita Kulkarni <[email protected]>
Signed-off-by: Matt Roper <[email protected]>
Reviewed-by: Manasi Navare <[email protected]>
Link: https://patchwork.freedesktop.org/patch/msgid/[email protected]
|
|
Support compressed BPPs from bpc to uncompressed BPP - 1.
So far we have had 8, 10 and 12 as the only valid compressed BPPs;
now the support is extended.
Cc: Manasi Navare <[email protected]>
Signed-off-by: Vandita Kulkarni <[email protected]>
Signed-off-by: Matt Roper <[email protected]>
Reviewed-by: Manasi Navare <[email protected]>
Link: https://patchwork.freedesktop.org/patch/msgid/[email protected]
|
|
Move the platform specific max bpc calculation into
intel_dp_dsc_compute_bpp function
Cc: Manasi Navare <[email protected]>
Signed-off-by: Vandita Kulkarni <[email protected]>
Signed-off-by: Matt Roper <[email protected]>
Reviewed-by: Anusha Srivatsa <[email protected]>
Link: https://patchwork.freedesktop.org/patch/msgid/[email protected]
|
|
XE_LPD continues to use the same "skylake-style" watermark
programming as other recent platforms. The only change to the watermark
calculations compared to Display12 is that XE_LPD now allows a
maximum of 255 lines vs the old limit of 31.
Due to the larger possible lines value, the corresponding bits
representing the value in PLANE_WM are also extended, so make sure we
read/write enough bits. Let's also take this opportunity to switch over
to the REG_FIELD notation.
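Illustratively, with i915's REG_* helpers the fields are decoded roughly like
the sketch below; the bit positions shown are placeholders, not the bspec
values:
  /* placeholder field definitions, for illustration only */
  #define  PLANE_WM_EN          REG_BIT(31)
  #define  PLANE_WM_LINES_MASK  REG_GENMASK(26, 14)   /* placeholder span */
  #define  PLANE_WM_BLOCKS_MASK REG_GENMASK(11, 0)    /* placeholder span */

      /* decoding a watermark level from the register value */
      level->enable = val & PLANE_WM_EN;
      level->blocks = REG_FIELD_GET(PLANE_WM_BLOCKS_MASK, val);
      level->lines  = REG_FIELD_GET(PLANE_WM_LINES_MASK, val);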
Bspec: 49325
Bspec: 50419
Cc: Ville Syrjälä <[email protected]>
Cc: Anshuman Gupta <[email protected]>
Signed-off-by: Matt Roper <[email protected]>
Reviewed-by: Anusha Srivatsa <[email protected]>
Link: https://patchwork.freedesktop.org/patch/msgid/[email protected]
|
|
The DDI naming template for display version 12 went A-C, TC1-TC6. With
XE_LPD, that naming scheme for DDI's has now changed to A-E, TC1-TC4.
The XE_LPD design keeps the register offsets and bitfields relating to
the TC outputs in the same location they were previously. The new "D"
and "E" outputs now take the locations that were previously used by TC5
and TC6 outputs, or what we would have considered to be outputs "H" and
"I" under the legacy lettering scheme.
For the most part everything will just work as long as we initialize the
output with the proper 'enum port' value. However we do need to take
care to pick the correct AUX channel when parsing the VBT (e.g., a
reference to 'AUX D' is actually asking us to use the 8th aux channel,
not the fourth). We should also make sure that our encoders and aux
channels are named appropriately so that it's easier to correlate driver
debug messages with the bspec instructions.
v2:
- Update handling of TGL_TRANS_CLK_SEL_PORT. (Jose)
v3:
- Add hpd_pin to handle outputs D and E (Jose)
- Fixed conversion of BIOS port to aux ch for TC ports (Jose)
Cc: José Roberto de Souza <[email protected]>
Cc: Ville Syrjälä <[email protected]>
Signed-off-by: José Roberto de Souza <[email protected]>
Signed-off-by: Matt Roper <[email protected]>
Reviewed-by: Imre Deak <[email protected]>
Link: https://patchwork.freedesktop.org/patch/msgid/[email protected]
|
|
iomap_max_page_shift is expected to contain a page shift, so it can't be a
'bool'; it has to be an 'unsigned int'.
Also fix the default value: P4D_SHIFT is the shift to use when huge iomap is
allowed. However, on some architectures (e.g. powerpc book3s/64), P4D_SHIFT
is not a constant, so it can't be used to initialise a static variable.
So, initialise iomap_max_page_shift with the maximum shift supported by the
architecture; it is gated by P4D_SHIFT in vmap_try_huge_p4d() anyway.
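A minimal sketch of what that initialisation can look like in mm/vmalloc.c
(illustrative only; the config guard and the "nohugeiomap" early_param
handling are assumptions about the surrounding code):
  #ifdef CONFIG_HAVE_ARCH_HUGE_VMAP
  /* Start from the widest shift the architecture can express; huge iomap
   * is still gated by P4D_SHIFT in vmap_try_huge_p4d(). */
  static unsigned int __ro_after_init iomap_max_page_shift = BITS_PER_LONG - 1;

  static int __init set_nohugeiomap(char *str)
  {
      iomap_max_page_shift = PAGE_SHIFT;  /* force base pages only */
      return 0;
  }
  early_param("nohugeiomap", set_nohugeiomap);
  #else
  static const unsigned int iomap_max_page_shift = PAGE_SHIFT;
  #endif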
Link: https://lkml.kernel.org/r/ad2d366015794a9f21320dcbdd0a8eb98979e9df.1620898113.git.christophe.leroy@csgroup.eu
Fixes: bbc180a5adb0 ("mm: HUGE_VMAP arch support cleanup")
Signed-off-by: Christophe Leroy <[email protected]>
Reviewed-by: Nicholas Piggin <[email protected]>
Reviewed-by: Anshuman Khandual <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
When I added CONFIG_MODPROBE_PATH, I neglected to update Documentation/.
It's still true that this defaults to /sbin/modprobe, but now via a level
of indirection. So document that the kernel might have been built with
something other than /sbin/modprobe as the initial value.
Link: https://lkml.kernel.org/r/[email protected]
Fixes: 17652f4240f7a ("modules: add CONFIG_MODPROBE_PATH")
Signed-off-by: Rasmus Villemoes <[email protected]>
Cc: Jonathan Corbet <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: Jessica Yu <[email protected]>
Cc: Luis Chamberlain <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
I believe there are some issues introduced by commit 31651c607151
("hfsplus: avoid deadlock on file truncation")
HFS+ has extent records which always contain 8 extents. In case the
first extent record in the catalog file gets full, new ones are allocated from
the extents overflow file.
In case a shrinking truncate happens to the middle of an extent record which
is located in the extents overflow file, the logic in hfsplus_file_truncate()
was changed so that the call to hfs_brec_remove() is not guarded any more.
The right action would be to just free the extents that exceed the new size
inside the extent record by calling hfsplus_free_extents(), and then check if
the whole extent record should be removed. However, since the guard
(blk_cnt > start) is now after the call to hfs_brec_remove(), this has the
unfortunate effect that the last matching extent record is removed
unconditionally.
To reproduce this issue, create a file which has at least 10 extents, and
then perform a shrinking truncate into the middle of the last extent record,
so that the number of remaining extents is not under or divisible by 8. This
causes the last extent record (8 extents) to be removed totally instead of
being truncated into the middle of it. Thus this causes corruption and lost
data.
The fix for this is simply to check if the new truncated end is below the
start of this extent record, making it safe to remove the full extent
record. However, the call to hfs_brec_remove() can't be moved back to its
previous place, since we're dropping ->tree_lock and that can cause a race
condition with the cached info being invalidated, possibly corrupting the
node data.
Another issue is related to this one. When entering the block
(blk_cnt > start) we are not holding the ->tree_lock. We break out of
the loop without holding the lock, but hfs_find_exit() does unlock it. It is
not clear whether it's possible for someone else to take the lock under our
feet, but it can cause hard-to-debug errors and premature unlocking. Even if
there's no real risk of it, the locking should still always be kept in
balance. Thus take the lock now, just before the check.
Link: https://lkml.kernel.org/r/[email protected]
Fixes: 31651c607151f ("hfsplus: avoid deadlock on file truncation")
Signed-off-by: Jouni Roivas <[email protected]>
Reviewed-by: Anton Altaparmakov <[email protected]>
Cc: Anatoly Trosinenko <[email protected]>
Cc: Viacheslav Dubeyko <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
A readahead request will not allocate more memory than can be represented
by a size_t, even on systems that have HIGHMEM available. Change the
length functions to return a size_t instead of a loff_t.
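For reference, a sketch of the corrected batch-length helper in
include/linux/pagemap.h (the _batch_count field name is assumed from the
surrounding readahead_control API):
  /* The number of bytes in this batch; fits in size_t even with HIGHMEM. */
  static inline size_t readahead_batch_length(struct readahead_control *rac)
  {
      return rac->_batch_count * PAGE_SIZE;
  }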
Link: https://lkml.kernel.org/r/[email protected]
Fixes: 32c0a6bcaa1f57 ("btrfs: add and use readahead_batch_length")
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
Reviewed-by: Darrick J. Wong <[email protected]>
Reported-by: Linus Torvalds <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
These tests deliberately access these arrays out of bounds, which will
cause the dynamic local bounds checks inserted by
CONFIG_UBSAN_LOCAL_BOUNDS to fail and panic the kernel. To avoid this
problem, access the arrays via volatile pointers, which will prevent the
compiler from being able to determine the array bounds.
These accesses use volatile pointers to char (char *volatile) rather than
the more conventional pointers to volatile char (volatile char *) because
we want to prevent the compiler from making inferences about the pointer
itself (i.e. its array bounds), not the data that it refers to.
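A small stand-alone illustration of the distinction (the array and accesses
below are hypothetical, not the kasan test code itself):
  static char global_array[10];

  void example(void)
  {
      /* Pointer to volatile char: the *data* is volatile, but the compiler
       * still knows the pointer targets global_array and can reason about
       * its bounds, so a local bounds check may be inserted. */
      volatile char *p1 = global_array;

      /* Volatile pointer to char: the *pointer itself* is volatile, so the
       * compiler cannot assume it still points into global_array and cannot
       * infer array bounds for the access. */
      char *volatile p2 = global_array;

      p1[12] = 0;   /* may trip CONFIG_UBSAN_LOCAL_BOUNDS instrumentation */
      p2[12] = 0;   /* bounds unknown to the compiler */
  }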
Link: https://lkml.kernel.org/r/[email protected]
Link: https://linux-review.googlesource.com/id/I90b1713fbfa1bf68ff895aef099ea77b98a7c3b9
Signed-off-by: Peter Collingbourne <[email protected]>
Tested-by: Alexander Potapenko <[email protected]>
Reviewed-by: Andrey Konovalov <[email protected]>
Cc: Peter Collingbourne <[email protected]>
Cc: George Popescu <[email protected]>
Cc: Elena Petrova <[email protected]>
Cc: Evgenii Stepanov <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
32-bit architectures which expect 8-byte alignment for 8-byte integers and
need 64-bit DMA addresses (arm, mips, ppc) had their struct page
inadvertently expanded in 2019. When the dma_addr_t was added, it forced
the alignment of the union to 8 bytes, which inserted a 4 byte gap between
'flags' and the union.
Fix this by storing the dma_addr_t in one or two adjacent unsigned longs.
This restores the alignment to that of an unsigned long. We always
store the low bits in the first word to prevent the PageTail bit from
being inadvertently set on a big endian platform. If that happened,
get_user_pages_fast() racing against a page which was freed and
reallocated to the page_pool could dereference a bogus compound_head(),
which would be hard to trace back to this cause.
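Conceptually the accessors end up looking something like the sketch below
(page_pool helper names assumed; the double 16-bit shift avoids an undefined
32-bit shift on 32-bit builds):
  static inline dma_addr_t page_pool_get_dma_addr(struct page *page)
  {
      dma_addr_t ret = page->dma_addr[0];   /* low bits always in word 0 */

      if (sizeof(dma_addr_t) > sizeof(unsigned long))
          ret |= (dma_addr_t)page->dma_addr[1] << 16 << 16;
      return ret;
  }

  static inline void page_pool_set_dma_addr(struct page *page, dma_addr_t addr)
  {
      page->dma_addr[0] = addr;
      if (sizeof(dma_addr_t) > sizeof(unsigned long))
          page->dma_addr[1] = upper_32_bits(addr);
  }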
Link: https://lkml.kernel.org/r/[email protected]
Fixes: c25fff7171be ("mm: add dma_addr_t to struct page")
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
Acked-by: Ilias Apalodimas <[email protected]>
Acked-by: Jesper Dangaard Brouer <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
Tested-by: Matteo Croce <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
remove_rmap_item_from_tree()"
This reverts commit 3e96b6a2e9ad929a3230a22f4d64a74671a0720b. General
Protection Fault in rmap_walk_ksm() under memory pressure:
remove_rmap_item_from_tree() needs to take page lock, of course.
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Hugh Dickins <[email protected]>
Cc: Miaohe Lin <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Consider the following sequence of events:
1. Userspace issues a UFFD ioctl, which ends up calling into
shmem_mfill_atomic_pte(). We successfully account the blocks, we
shmem_alloc_page(), but then the copy_from_user() fails. We return
-ENOENT. We don't release the page we allocated.
2. Our caller detects this error code, tries the copy_from_user() after
dropping the mmap_lock, and retries, calling back into
shmem_mfill_atomic_pte().
3. Meanwhile, let's say another process filled up the tmpfs being used.
4. So shmem_mfill_atomic_pte() fails to account blocks this time, and
immediately returns - without releasing the page.
This triggers a BUG_ON in our caller, which asserts that the page
should always be consumed, unless -ENOENT is returned.
To fix this, detect if we have such a "dangling" page when accounting
fails, and if so, release it before returning.
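In shmem_mfill_atomic_pte() terms the fix amounts to something like the
following sketch (variable names assumed, surrounding context omitted):
      if (!shmem_inode_acct_block(inode, 1)) {
          /*
           * We may have got a page on a previous -ENOENT round trip and
           * failed the accounting on the retry: release the dangling page
           * so the caller's BUG_ON(page) assertion holds.
           */
          if (*pagep) {
              put_page(*pagep);
              *pagep = NULL;
          }
          return -ENOMEM;
      }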
Link: https://lkml.kernel.org/r/[email protected]
Fixes: cb658a453b93 ("userfaultfd: shmem: avoid leaking blocks and used blocks in UFFDIO_COPY")
Signed-off-by: Axel Rasmussen <[email protected]>
Reported-by: Hugh Dickins <[email protected]>
Acked-by: Hugh Dickins <[email protected]>
Reviewed-by: Peter Xu <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Syzbot has reported a "divide error" which has been identified as being
caused by a corrupted file_size value within the file inode. This value
has been corrupted to a much larger value than expected.
Calculate_skip() is passed i_size_read(inode) >> msblk->block_log. Due to
the file_size value corruption this overflows the int argument/variable in
that function, leading to the divide error.
This patch changes the function to use u64. This will accommodate any
unexpectedly large values due to corruption.
The value returned from calculate_skip() is clamped to be never more than
SQUASHFS_CACHED_BLKS - 1, or 7. So file_size corruption does not lead to
an unexpectedly large return result here.
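Roughly, the function becomes the following (a sketch; the metadata constants
are the existing squashfs ones):
  static int calculate_skip(u64 blocks)
  {
      u64 skip = blocks / ((SQUASHFS_META_ENTRIES + 1)
               * SQUASHFS_META_INDEXES);

      /* Even a corrupted, huge file_size clamps to at most 7 here. */
      return min((u64) SQUASHFS_CACHED_BLKS - 1, skip + 1);
  }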
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Phillip Lougher <[email protected]>
Reported-by: <[email protected]>
Reported-by: <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Splitting an earlier version of a patch that allowed calling
__request_region() while holding the resource lock into a series of
patches required changing the return code for the newly introduced
__request_region_locked().
Unfortunately this change was not carried through to a subsequent commit
56fd94919b8b ("kernel/resource: fix locking in request_free_mem_region")
in the series. This resulted in a use-after-free due to freeing the
struct resource without properly releasing it. Fix this by correcting the
return code check so that the struct is not freed if the request to add it
was successful.
Link: https://lkml.kernel.org/r/[email protected]
Fixes: 56fd94919b8b ("kernel/resource: fix locking in request_free_mem_region")
Signed-off-by: Alistair Popple <[email protected]>
Reported-by: kernel test robot <[email protected]>
Reviewed-by: David Hildenbrand <[email protected]>
Cc: Balbir Singh <[email protected]>
Cc: Dan Williams <[email protected]>
Cc: Daniel Vetter <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: Jerome Glisse <[email protected]>
Cc: John Hubbard <[email protected]>
Cc: Muchun Song <[email protected]>
Cc: Oliver Sang <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Paul E. McKenney reported [1] that commit 1f0723a4c0df ("mm, slub: enable
slub_debug static key when creating cache with explicit debug flags")
results in the lockdep complaint:
======================================================
WARNING: possible circular locking dependency detected
5.12.0+ #15 Not tainted
------------------------------------------------------
rcu_torture_sta/109 is trying to acquire lock:
ffffffff96063cd0 (cpu_hotplug_lock){++++}-{0:0}, at: static_key_enable+0x9/0x20
but task is already holding lock:
ffffffff96173c28 (slab_mutex){+.+.}-{3:3}, at: kmem_cache_create_usercopy+0x2d/0x250
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #1 (slab_mutex){+.+.}-{3:3}:
lock_acquire+0xb9/0x3a0
__mutex_lock+0x8d/0x920
slub_cpu_dead+0x15/0xf0
cpuhp_invoke_callback+0x17a/0x7c0
cpuhp_invoke_callback_range+0x3b/0x80
_cpu_down+0xdf/0x2a0
cpu_down+0x2c/0x50
device_offline+0x82/0xb0
remove_cpu+0x1a/0x30
torture_offline+0x80/0x140
torture_onoff+0x147/0x260
kthread+0x10a/0x140
ret_from_fork+0x22/0x30
-> #0 (cpu_hotplug_lock){++++}-{0:0}:
check_prev_add+0x8f/0xbf0
__lock_acquire+0x13f0/0x1d80
lock_acquire+0xb9/0x3a0
cpus_read_lock+0x21/0xa0
static_key_enable+0x9/0x20
__kmem_cache_create+0x38d/0x430
kmem_cache_create_usercopy+0x146/0x250
kmem_cache_create+0xd/0x10
rcu_torture_stats+0x79/0x280
kthread+0x10a/0x140
ret_from_fork+0x22/0x30
other info that might help us debug this:
Possible unsafe locking scenario:
       CPU0                    CPU1
       ----                    ----
  lock(slab_mutex);
                               lock(cpu_hotplug_lock);
                               lock(slab_mutex);
  lock(cpu_hotplug_lock);
*** DEADLOCK ***
1 lock held by rcu_torture_sta/109:
#0: ffffffff96173c28 (slab_mutex){+.+.}-{3:3}, at: kmem_cache_create_usercopy+0x2d/0x250
stack backtrace:
CPU: 3 PID: 109 Comm: rcu_torture_sta Not tainted 5.12.0+ #15
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.13.0-1ubuntu1.1 04/01/2014
Call Trace:
dump_stack+0x6d/0x89
check_noncircular+0xfe/0x110
? lock_is_held_type+0x98/0x110
check_prev_add+0x8f/0xbf0
__lock_acquire+0x13f0/0x1d80
lock_acquire+0xb9/0x3a0
? static_key_enable+0x9/0x20
? mark_held_locks+0x49/0x70
cpus_read_lock+0x21/0xa0
? static_key_enable+0x9/0x20
static_key_enable+0x9/0x20
__kmem_cache_create+0x38d/0x430
kmem_cache_create_usercopy+0x146/0x250
? rcu_torture_stats_print+0xd0/0xd0
kmem_cache_create+0xd/0x10
rcu_torture_stats+0x79/0x280
? rcu_torture_stats_print+0xd0/0xd0
kthread+0x10a/0x140
? kthread_park+0x80/0x80
ret_from_fork+0x22/0x30
This is because there's one order of locking from the hotplug callbacks:
lock(cpu_hotplug_lock); // from hotplug machinery itself
lock(slab_mutex); // in e.g. slab_mem_going_offline_callback()
And commit 1f0723a4c0df made the reverse sequence possible:
lock(slab_mutex); // in kmem_cache_create_usercopy()
lock(cpu_hotplug_lock); // kmem_cache_open() -> static_key_enable()
The simplest fix is to move static_key_enable() to a place before slab_mutex is
taken. That means kmem_cache_create_usercopy() in mm/slab_common.c, which is
not ideal for SLUB-specific code, but the #ifdef CONFIG_SLUB_DEBUG makes it
at least self-contained and obvious.
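A sketch of the resulting ordering in kmem_cache_create_usercopy() (the
SLAB_DEBUG_FLAGS test is an assumption about which flags need the key):
  #ifdef CONFIG_SLUB_DEBUG
      /*
       * Enable the static key before slab_mutex is taken, so the lock
       * order stays cpu_hotplug_lock -> slab_mutex, matching the CPU
       * hotplug callbacks.
       */
      if (flags & SLAB_DEBUG_FLAGS)
          static_branch_enable(&slub_debug_enabled);
  #endif

      mutex_lock(&slab_mutex);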
[1] https://lore.kernel.org/lkml/20210502171827.GA3670492@paulmck-ThinkPad-P17-Gen-1/
Link: https://lkml.kernel.org/r/[email protected]
Fixes: 1f0723a4c0df ("mm, slub: enable slub_debug static key when creating cache with explicit debug flags")
Signed-off-by: Vlastimil Babka <[email protected]>
Reported-by: Paul E. McKenney <[email protected]>
Tested-by: Paul E. McKenney <[email protected]>
Acked-by: David Rientjes <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Pekka Enberg <[email protected]>
Cc: Joonsoo Kim <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
When reworking early cow of pinned hugetlb pages, we moved huge_ptep_get()
upwards but overlooked a side effect: huge_ptep_get() used to fetch the pte
after wr-protection. After moving it upwards, we need to explicitly
wr-protect the child pte, or we will keep the write bit set in the child
process, which could cause data corruption where the child can write to the
original page directly.
This issue can also be exposed by the "memfd_test hugetlbfs" kselftest.
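In copy_hugetlb_page_range() terms the fix boils down to wr-protecting both
the parent's pte and the copied entry once the early-cow path is not taken
(a sketch; surrounding context omitted):
      if (cow) {
          /*
           * Wr-protect the parent's pte and, since the entry was fetched
           * before that, wr-protect the copy installed in the child too.
           */
          huge_ptep_set_wrprotect(src, addr, src_pte);
          entry = huge_pte_wrprotect(entry);
      }
      set_huge_pte_at(dst, addr, dst_pte, entry);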
Link: https://lkml.kernel.org/r/[email protected]
Fixes: 4eae4efa2c299 ("hugetlb: do early cow when page pinned on src mm")
Signed-off-by: Peter Xu <[email protected]>
Reviewed-by: Mike Kravetz <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Joel Fernandes (Google) <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Patch series "mm/hugetlb: Fix issues on file sealing and fork", v2.
Hugh reported an issue with F_SEAL_FUTURE_WRITE not being applied correctly
to hugetlbfs, which I can easily verify using the memfd_test program; it
seems that the program is hardly ever run with hugetlbfs pages (it uses
shmem by default).
Meanwhile I found another, probably even more severe, issue: hugetlb fork
won't wr-protect child cow pages, so the child can potentially write to
parent private pages. Patch 2 addresses that.
After this series applied, "memfd_test hugetlbfs" should start to pass.
This patch (of 2):
F_SEAL_FUTURE_WRITE is missing for hugetlb starting from the first day.
There is a test program for that and it fails constantly.
$ ./memfd_test hugetlbfs
memfd-hugetlb: CREATE
memfd-hugetlb: BASIC
memfd-hugetlb: SEAL-WRITE
memfd-hugetlb: SEAL-FUTURE-WRITE
mmap() didn't fail as expected
Aborted (core dumped)
I think that's probably because no one is really running the hugetlbfs test.
Fix it by checking FUTURE_WRITE also in hugetlbfs_file_mmap(), as we
do in shmem_mmap(). Generalize a helper for that.
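A sketch of such a shared helper (the name and exact checks are assumptions):
  static inline int seal_check_future_write(int seals, struct vm_area_struct *vma)
  {
      if (seals & F_SEAL_FUTURE_WRITE) {
          /* Deny new writable shared mappings outright. */
          if ((vma->vm_flags & VM_SHARED) && (vma->vm_flags & VM_WRITE))
              return -EPERM;

          /*
           * Clear VM_MAYWRITE on shared mappings so a later mprotect()
           * cannot upgrade a read-only mapping to writable. Private
           * mappings stay COW-writable.
           */
          if (vma->vm_flags & VM_SHARED)
              vma->vm_flags &= ~(VM_MAYWRITE);
      }

      return 0;
  }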
Link: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/[email protected]
Fixes: ab3948f58ff84 ("mm/memfd: add an F_SEAL_FUTURE_WRITE seal to memfd")
Signed-off-by: Peter Xu <[email protected]>
Reported-by: Hugh Dickins <[email protected]>
Reviewed-by: Mike Kravetz <[email protected]>
Cc: Joel Fernandes (Google) <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
https://gitlab.freedesktop.org/drm/msm into drm-fixes
- dsi regression fix
- dma-buf pinning fix
- displayport fixes
- llc fix
Signed-off-by: Dave Airlie <[email protected]>
From: Rob Clark <[email protected]>
Link: https://patchwork.freedesktop.org/patch/msgid/CAF6AEGuqLZDAEJwUFKb6m+h3kyxgjDEKa3DPA1fHA69vxbXH=g@mail.gmail.com
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace
Pull tracing fix from Steven Rostedt:
"Fix trace_check_vprintf() for %.*s
The sanity check of all strings being read from the ring buffer to
make sure they are in safe memory space did not account for the %.*s
notation having another parameter to process (the length).
Add that to the check"
* tag 'trace-v5.13-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace:
tracing: Handle %.*s in trace_check_vprintf()
|
|
git://anongit.freedesktop.org/drm/drm-intel into drm-fixes
drm/i915 fixes for v5.13-rc2:
- Fix active callback alignment annotations and subsequent crashes
- Retract link training strategy to slow and wide, again
- Avoid division by zero on gen2
- Use correct width reads for C0DRB3/C1DRB3 registers
- Fix double free in pdp allocation failure path
- Fix HDMI 2.1 PCON downstream caps check
Signed-off-by: Dave Airlie <[email protected]>
From: Jani Nikula <[email protected]>
Link: https://patchwork.freedesktop.org/patch/msgid/[email protected]
|
|
In case of error, the function devm_ioremap() returns a NULL pointer, not ERR_PTR().
The IS_ERR() test in the return value check should be replaced with a NULL test,
and the error code -ENOMEM should then be returned.
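The pattern, as a minimal sketch with placeholder names (not the literal
vmwgfx code):
      void __iomem *base;

      base = devm_ioremap(dev, res_start, res_size);
      if (!base)        /* devm_ioremap() returns NULL, never ERR_PTR() */
          return -ENOMEM;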
Reported-by: Hulk Robot <[email protected]>
Signed-off-by: Qiheng Lin <[email protected]>
Signed-off-by: Zack Rusin <[email protected]>
Link: https://patchwork.freedesktop.org/patch/msgid/[email protected]
|
|
The allocation of fifo lacks an allocation failure check, so fix
this by adding one.
In the case where fifo->static_buffer fails to be allocated, the
error return path neglects to kfree the fifo object. Fix this by
adding the missing kfree.
Kudos to Dan Carpenter for spotting the missing kzalloc failure
check.
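Shape of the fix (a sketch; the buffer size constant and the static_buffer
allocator are assumptions):
      fifo = kzalloc(sizeof(*fifo), GFP_KERNEL);
      if (!fifo)
          return NULL;

      fifo->static_buffer_size = VMWGFX_FIFO_STATIC_SIZE;
      fifo->static_buffer = vmalloc(fifo->static_buffer_size);
      if (!fifo->static_buffer) {
          kfree(fifo);    /* don't leak the fifo object on error */
          return NULL;
      }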
Addresses-Coverity: ("Resource leak")
Fixes: 2cd80dbd3551 ("drm/vmwgfx: Add basic support for SVGA3")
Signed-off-by: Colin Ian King <[email protected]>
Signed-off-by: Zack Rusin <[email protected]>
Link: https://patchwork.freedesktop.org/patch/msgid/[email protected]
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull arm64 fixes from Catalin Marinas:
"Fixes and cpucaps.h automatic generation:
- Generate cpucaps.h at build time rather than carrying lots of
#defines. Merged at -rc1 to avoid some conflicts during the merge
window.
- Initialise RGSR_EL1.SEED in __cpu_setup() as it may be left as 0
out of reset and the IRG instruction would not function as expected
if only the architected pseudorandom number generator is
implemented.
- Fix potential race condition in __sync_icache_dcache() where the
PG_dcache_clean page flag is set before the actual cache
maintenance.
- Fix header include in BTI kselftests"
* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
arm64: Fix race condition on PG_dcache_clean in __sync_icache_dcache()
arm64: tools: Add __ASM_CPUCAPS_H to the endif in cpucaps.h
arm64: mte: initialize RGSR_EL1.SEED in __cpu_setup
kselftest/arm64: Add missing stddef.h include to BTI tests
arm64: Generate cpucaps.h
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs
Pull f2fs fixes from Jaegeuk Kim:
"This fixes some critical bugs such as memory leak in compression
flows, kernel panic when handling errors, and swapon failure due to
newly added condition check"
* tag 'f2fs-5.13-rc1-fix' of git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs:
f2fs: return EINVAL for hole cases in swap file
f2fs: avoid swapon failure by giving a warning first
f2fs: compress: fix to assign cc.cluster_idx correctly
f2fs: compress: fix race condition of overwrite vs truncate
f2fs: compress: fix to free compress page correctly
f2fs: support iflag change given the mask
f2fs: avoid null pointer access when handling IPU error
|
|
Pull drm fixes from Dave Airlie:
"Not much here, mostly amdgpu fixes, with a couple of radeon, and a
cosmetic vc4.
Two MAINTAINERS file updates also.
amdgpu:
- Fixes for flexible array conversions
- Fix sysfs attribute init
- Harvesting fixes
- VCN CG/PG fixes for Picasso
radeon:
- Fixes for flexible array conversions
- Fix for flickering on Oland with multiple 4K displays
vc4:
- drop unused function"
* tag 'drm-fixes-2021-05-14' of git://anongit.freedesktop.org/drm/drm:
drm/amdgpu: update vcn1.0 Non-DPG suspend sequence
drm/amdgpu: set vcn mgcg flag for picasso
drm/radeon/dpm: Disable sclk switching on Oland when two 4K 60Hz monitors are connected
drm/amdgpu: update the method for harvest IP for specific SKU
drm/amdgpu: add judgement when add ip blocks (v2)
drm/amd/display: Initialize attribute for hdcp_srm sysfs file
drm/amd/pm: Fix out-of-bounds bug
drm/radeon/si_dpm: Fix SMU power state load
drm/radeon/ni_dpm: Fix booting bug
MAINTAINERS: Update address for Emma Anholt
MAINTAINERS: Update my e-mail
drm/vc4: remove unused function
drm/ttm: Do not add non-system domain BO into swap list
|
|
To ensure that instructions are observable in a new mapping, the arm64
set_pte_at() implementation cleans the D-cache and invalidates the
I-cache to the PoU. As an optimisation, this is only done on executable
mappings and the PG_dcache_clean page flag is set to avoid future cache
maintenance on the same page.
When two different processes map the same page (e.g. private executable
file or shared mapping) there's a potential race on checking and setting
PG_dcache_clean via set_pte_at() -> __sync_icache_dcache(). While on the
fault paths the page is locked (PG_locked), mprotect() does not take the
page lock. The result is that one process may see the PG_dcache_clean
flag set but the I/D cache maintenance not yet performed.
Avoid test_and_set_bit(PG_dcache_clean) in favour of separate test_bit()
and set_bit(). In the rare event of a race, the cache maintenance is
done twice.
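In __sync_icache_dcache() the change is essentially the following (a sketch):
      if (!test_bit(PG_dcache_clean, &page->flags)) {
          sync_icache_aliases(page_address(page), page_size(page));
          /*
           * Plain test_bit()+set_bit() instead of test_and_set_bit():
           * a racing mprotect() may redo the maintenance, which is
           * harmless, but it can never observe the flag set before
           * the maintenance has actually been performed.
           */
          set_bit(PG_dcache_clean, &page->flags);
      }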
Signed-off-by: Catalin Marinas <[email protected]>
Cc: <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Steven Price <[email protected]>
Reviewed-by: Steven Price <[email protected]>
Acked-by: Will Deacon <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Catalin Marinas <[email protected]>
|
|
Add support for MT8183's G72 Bifrost.
Signed-off-by: Nicolas Boichat <[email protected]>
Reviewed-by: Tomeu Vizoso <[email protected]>
Reviewed-by: Steven Price <[email protected]>
Signed-off-by: Steven Price <[email protected]>
Link: https://patchwork.freedesktop.org/patch/msgid/20210421132841.v13.4.I5f6b04431828ec9c3e41e65f3337cec6a127480d@changeid
|
|
GPUs with more than a single regulator (e.g. G72 on MT8183) will
require platform-specific handling for devfreq, for 2 reasons:
1. The opp core (drivers/opp/core.c:_generic_set_opp_regulator)
does not support multiple regulators, so we'll need custom
handlers.
2. Generally, platforms with 2 regulators have platform-specific
constraints on how the voltages should be set (e.g.
minimum/maximum voltage difference between them), so we
should not just create generic handlers that simply
change the voltages without taking care of those constraints.
Disable devfreq for now on those GPUs.
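A sketch of the early-out in panfrost_devfreq_init() (the num_supplies field
and the dev variable are assumptions about the surrounding code):
      if (pfdev->comp->num_supplies > 1) {
          /*
           * GPUs with more than one supply need platform-specific
           * OPP/regulator handling; skip devfreq for now.
           */
          DRM_DEV_INFO(dev, "More than one supply is not yet supported\n");
          return 0;
      }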
Signed-off-by: Nicolas Boichat <[email protected]>
Reviewed-by: Tomeu Vizoso <[email protected]>
Reviewed-by: Steven Price <[email protected]>
Signed-off-by: Steven Price <[email protected]>
Link: https://patchwork.freedesktop.org/patch/msgid/20210421132841.v13.3.I3af068abe30c9c85cabc4486385c52e56527a509@changeid
|
|
Define a compatible string for the Mali Bifrost GPU found in
Mediatek's MT8183 SoCs.
Signed-off-by: Nicolas Boichat <[email protected]>
Reviewed-by: Rob Herring <[email protected]>
Signed-off-by: Steven Price <[email protected]>
Link: https://patchwork.freedesktop.org/patch/msgid/20210421132841.v13.1.Ie74d3355761aab202d4825ac6f66d990bba0130e@changeid
|
|
Fix the following kernel-doc warning:
block/partitions/efi.c:685: warning: wrong kernel-doc identifier on line:
* efi_partition(struct parsed_partitions *state)
Cc: Alexander Viro <[email protected]>
Signed-off-by: Bart Van Assche <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Jens Axboe <[email protected]>
|
|
If a tag set is shared across request queues (e.g. SCSI LUNs) then the
block layer core keeps track of the number of active request queues in
tags->active_queues. blk_mq_tag_busy() and blk_mq_tag_idle() update that
atomic counter if the hctx flag BLK_MQ_F_TAG_QUEUE_SHARED is set. Make
sure that blk_mq_exit_queue() calls blk_mq_tag_idle() before that flag is
cleared by blk_mq_del_queue_tag_set().
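The resulting teardown order, as a sketch of blk_mq_exit_queue():
  void blk_mq_exit_queue(struct request_queue *q)
  {
      struct blk_mq_tag_set *set = q->tag_set;

      /* Checks hctx->flags & BLK_MQ_F_TAG_QUEUE_SHARED, so run it first. */
      blk_mq_exit_hw_queues(q, set, set->nr_hw_queues);
      /* May clear BLK_MQ_F_TAG_QUEUE_SHARED in hctx->flags. */
      blk_mq_del_queue_tag_set(q);
  }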
Cc: Christoph Hellwig <[email protected]>
Cc: Ming Lei <[email protected]>
Cc: Hannes Reinecke <[email protected]>
Fixes: 0d2602ca30e4 ("blk-mq: improve support for shared tags maps")
Signed-off-by: Bart Van Assche <[email protected]>
Reviewed-by: Ming Lei <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Jens Axboe <[email protected]>
|
|
In the case of a shared sbitmap, requests won't be held in the plug list any
more since commit 32bc15afed04 ("blk-mq: Facilitate a shared sbitmap per
tagset"). This makes request merging from the flush plug list & batching
submission impossible, causing a performance regression.
Yanhui reports a performance regression when running a sequential IO
test (libaio, 16 jobs, 8 depth for each job) in a VM, where the VM disk
is emulated with an image stored on xfs/megaraid_sas.
Fix the issue by restoring the original behaviour of allowing requests to be
held in the plug list.
Cc: Yanhui Ma <[email protected]>
Cc: John Garry <[email protected]>
Cc: Bart Van Assche <[email protected]>
Cc: [email protected]
Fixes: 32bc15afed04 ("blk-mq: Facilitate a shared sbitmap per tagset")
Signed-off-by: Ming Lei <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Jens Axboe <[email protected]>
|
|
xen_swiotlb_init calls swiotlb_late_init_with_tbl, which fails with
-ENOMEM if the swiotlb has already been initialized.
Add an explicit check io_tlb_default_mem != NULL at the beginning of
xen_swiotlb_init. If the swiotlb is already initialized print a warning
and return -EEXIST.
On x86, the error propagates.
On ARM, we don't actually need a special swiotlb buffer (yet), any
buffer would do. So ignore the error and continue.
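A sketch of the guard at the top of xen_swiotlb_init() (io_tlb_default_mem
being the pointer to the default struct io_tlb_mem at this point in time):
      /* The default swiotlb was already set up earlier: nothing to do. */
      if (io_tlb_default_mem != NULL) {
          pr_warn("swiotlb buffer already initialized\n");
          return -EEXIST;   /* x86 propagates this; ARM ignores it */
      }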
CC: [email protected]
CC: [email protected]
Signed-off-by: Stefano Stabellini <[email protected]>
Reviewed-by: Boris Ostrovsky <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Juergen Gross <[email protected]>
|
|
Although SWIOTLB_NO_FORCE is meant to allow later calls to swiotlb_init,
today dma_direct_map_page returns error if SWIOTLB_NO_FORCE.
For now, without a larger overhaul of SWIOTLB_NO_FORCE, the best we can
do is to avoid setting SWIOTLB_NO_FORCE in mem_init when we know that it
is going to be required later (e.g. Xen requires it).
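On arm64 the mem_init() decision then looks roughly like this sketch
(assuming the existing swiotlb_force / xen_swiotlb_detect() plumbing):
      if (swiotlb_force == SWIOTLB_FORCE ||
          max_pfn > PFN_DOWN(arm64_dma_phys_limit))
          swiotlb_init(1);
      else if (!xen_swiotlb_detect())
          /* Only skip the bounce buffer when Xen won't need it later. */
          swiotlb_force = SWIOTLB_NO_FORCE;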
CC: [email protected]
CC: [email protected]
CC: [email protected]
CC: [email protected]
CC: [email protected]
Fixes: 2726bf3ff252 ("swiotlb: Make SWIOTLB_NO_FORCE perform no allocation")
Signed-off-by: Christoph Hellwig <[email protected]>
Signed-off-by: Stefano Stabellini <[email protected]>
Reviewed-by: Juergen Gross <[email protected]>
Acked-by: Catalin Marinas <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Juergen Gross <[email protected]>
|
|
Move xen_swiotlb_detect to a static inline function to make it available
to !CONFIG_XEN builds.
CC: [email protected]
CC: [email protected]
Signed-off-by: Stefano Stabellini <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Reviewed-by: Juergen Gross <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Juergen Gross <[email protected]>
|
|
Mohammed reports (https://bugzilla.kernel.org/show_bug.cgi?id=213029)
the commit e4ab4658f1cf ("clocksource/drivers/hyper-v: Handle vDSO
differences inline") broke vDSO on x86. The problem appears to be that
VDSO_CLOCKMODE_HVCLOCK is an enum value in 'enum vdso_clock_mode' and
'#ifdef VDSO_CLOCKMODE_HVCLOCK' branch evaluates to false (it is not
a define).
Use a dedicated HAVE_VDSO_CLOCKMODE_HVCLOCK define instead.
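The gist of the fix, sketched below: let the architecture advertise the
capability with a real preprocessor define next to its VDSO_ARCH_CLOCKMODES
list, and test that define in the Hyper-V clocksource instead of the enum
value (struct member placement is illustrative):
  /* arch/x86/include/asm/vdso/clocksource.h */
  #define VDSO_ARCH_CLOCKMODES   \
      VDSO_CLOCKMODE_TSC,        \
      VDSO_CLOCKMODE_PVCLOCK,    \
      VDSO_CLOCKMODE_HVCLOCK
  #define HAVE_VDSO_CLOCKMODE_HVCLOCK

  /* drivers/clocksource/hyperv_timer.c */
  static struct clocksource hyperv_cs_tsc = {
      .name = "hyperv_clocksource_tsc_page",
      /* ... */
  #ifdef HAVE_VDSO_CLOCKMODE_HVCLOCK
      /* '#ifdef VDSO_CLOCKMODE_HVCLOCK' on the enumerator is always false */
      .vdso_clock_mode = VDSO_CLOCKMODE_HVCLOCK,
  #endif
  };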
Fixes: e4ab4658f1cf ("clocksource/drivers/hyper-v: Handle vDSO differences inline")
Reported-by: Mohammed Gamal <[email protected]>
Suggested-by: Thomas Gleixner <[email protected]>
Signed-off-by: Vitaly Kuznetsov <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Reviewed-by: Michael Kelley <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
Signed-off-by: Stephen Rothwell <[email protected]>
Signed-off-by: Thomas Zimmermann <[email protected]>
Fixes: 92f1d09ca4ed ("drm: Switch to %p4cc format modifier")
Cc: Sakari Ailus <[email protected]>
Cc: Petr Mladek <[email protected]>
Cc: Andy Shevchenko <[email protected]>
Cc: [email protected]
Link: https://lore.kernel.org/dri-devel/[email protected]/T/#macc61d4e0b17ca0da2b26aae8fbbcbf47324da13
|
|
Since recent changes, instead of storing a large array of struct
io_mapped_ubuf, we store pointers to them, which is 4 times slimmer. We
therefore don't need to worry so much about restricting the max number of
registered buffer slots; increase the limit 4 times.
Signed-off-by: Pavel Begunkov <[email protected]>
Link: https://lore.kernel.org/r/d3dee1da37f46da416aa96a16bf9e5094e10584d.1620990371.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <[email protected]>
|
|
There are three types of requests that are left disabled for sqpoll, namely
epoll ctx, statx, and resources update. Since the SQPOLL task now closely
mimics a userspace thread, remove the restrictions.
Signed-off-by: Pavel Begunkov <[email protected]>
Link: https://lore.kernel.org/r/909b52d70c45636d8d7897582474ea5aab5eed34.1620990306.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <[email protected]>
|
|
Always remove the linked timeout in io_link_timeout_fn() from the master
request's link list, otherwise we may get a use-after-free when
io_link_timeout_fn() first puts the linked timeout in the fail path, and the
timeout is then found and put again when the master request is freed.
Cc: [email protected] # 5.10+
Fixes: 90cd7e424969d ("io_uring: track link timeout's master explicitly")
Reported-and-tested-by: [email protected]
Signed-off-by: Pavel Begunkov <[email protected]>
Link: https://lore.kernel.org/r/69c46bf6ce37fec4fdcd98f0882e18eb07ce693a.1620990121.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <[email protected]>
|
|
Some interrupt handlers have an "extra" that saves 1 or 2
registers (r14, r15) in the paca save area and makes them available for
use by the handler.
The change to always save nvgprs in exception handlers led to some
interrupt handlers saving those scratch r14 / r15 registers into the
interrupt frame's GPR saves, which get restored on interrupt exit.
Fix this by always reloading those scratch registers from the paca before
the EXCEPTION_COMMON that saves nvgprs.
Fixes: 4228b2c3d20e ("powerpc/64e/interrupt: always save nvgprs on interrupt")
Reported-by: Christian Zigotzky <[email protected]>
Signed-off-by: Nicholas Piggin <[email protected]>
Tested-by: Christian Zigotzky <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
scv support introduced the notion of code that implicitly soft-masks
irqs due to the instruction addresses. This is required because scv
enters the kernel with MSR[EE]=1.
If a NMI (including soft-NMI) interrupt hits when we are implicitly
soft-masked then its regs->softe does not reflect this because it is
derived from the explicit soft mask state (paca->irq_soft_mask). This
makes arch_irq_disabled_regs(regs) return false.
This can trigger a warning in the soft-NMI watchdog code (shown below).
Fix it by having NMI interrupts set regs->softe to disabled in case of
interrupting an implicit soft-masked region.
------------[ cut here ]------------
WARNING: CPU: 41 PID: 1103 at arch/powerpc/kernel/watchdog.c:259 soft_nmi_interrupt+0x3e4/0x5f0
CPU: 41 PID: 1103 Comm: (spawn) Not tainted
NIP: c000000000039534 LR: c000000000039234 CTR: c000000000009a00
REGS: c000007fffbcf940 TRAP: 0700 Not tainted
MSR: 9000000000021033 <SF,HV,ME,IR,DR,RI,LE> CR: 22042482 XER: 200400ad
CFAR: c000000000039260 IRQMASK: 3
GPR00: c000000000039204 c000007fffbcfbe0 c000000001d6c300 0000000000000003
GPR04: 00007ffffa45d078 0000000000000000 0000000000000008 0000000000000020
GPR08: 0000007ffd4e0000 0000000000000000 c000007ffffceb00 7265677368657265
GPR12: 9000000000009033 c000007ffffceb00 00000f7075bf4480 000000000000002a
GPR16: 00000f705745a528 00007ffffa45ddd8 00000f70574d0008 0000000000000000
GPR20: 00000f7075c58d70 00000f7057459c38 0000000000000001 0000000000000040
GPR24: 0000000000000000 0000000000000029 c000000001dae058 0000000000000029
GPR28: 0000000000000000 0000000000000800 0000000000000009 c000007fffbcfd60
NIP [c000000000039534] soft_nmi_interrupt+0x3e4/0x5f0
LR [c000000000039234] soft_nmi_interrupt+0xe4/0x5f0
Call Trace:
[c000007fffbcfbe0] [c000000000039204] soft_nmi_interrupt+0xb4/0x5f0 (unreliable)
[c000007fffbcfcf0] [c00000000000c0e8] soft_nmi_common+0x138/0x1c4
--- interrupt: 900 at end_real_trampolines+0x0/0x1000
NIP: c000000000003000 LR: 00007ca426adb03c CTR: 900000000280f033
REGS: c000007fffbcfd60 TRAP: 0900
MSR: 9000000000009033 <SF,HV,EE,ME,IR,DR,RI,LE> CR: 44042482 XER: 200400ad
CFAR: 00007ca426946020 IRQMASK: 0
GPR00: 00000000000000ad 00007ffffa45d050 00007ca426b07f00 0000000000000035
GPR04: 00007ffffa45d078 0000000000000000 0000000000000008 0000000000000020
GPR08: 0000000000000000 0000000000100000 0000000010000000 00007ffffa45d110
GPR12: 0000000000000001 00007ca426d4e680 00000f7075bf4480 000000000000002a
GPR16: 00000f705745a528 00007ffffa45ddd8 00000f70574d0008 0000000000000000
GPR20: 00000f7075c58d70 00000f7057459c38 0000000000000001 0000000000000040
GPR24: 0000000000000000 00000f7057473f68 0000000000000003 000000000000041b
GPR28: 00007ffffa45d4c4 0000000000000035 0000000000000000 00000f7057473f68
NIP [c000000000003000] end_real_trampolines+0x0/0x1000
LR [00007ca426adb03c] 0x7ca426adb03c
--- interrupt: 900
Instruction dump:
60000000 60000000 60420000 38600001 482b3ae5 60000000 e93f0138 a36d0008
7daa6b78 71290001 7f7907b4 4082fd34 <0fe00000> 4bfffd2c 60420000 ea6100a8
---[ end trace dc75f67d819779da ]---
Fixes: 118178e62e2e ("powerpc: move NMI entry/exit code into wrapper")
Reported-by: Cédric Le Goater <[email protected]>
Signed-off-by: Nicholas Piggin <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
The stf entry barrier fallback is unsafe to execute in a semi-patched
state, which can happen when enabling/disabling the mitigation with
strict kernel RWX enabled and using the hash MMU.
See the previous commit for more details.
Fix it by changing the order in which we patch the instructions.
Note the stf barrier fallback is only used on Power6 or earlier.
Fixes: bd573a81312f ("powerpc/mm/64s: Allow STRICT_KERNEL_RWX again")
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|