|
Pull cramfs fix from Al Viro:
"Regression fix, fallen through the cracks"
* 'fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
cramfs: fix usage on non-MTD device
|
|
Adjust indentation from spaces to tabs (+optional two spaces) as in
coding style, with a command like:
$ sed -e 's/^        /\t/' -i */Kconfig
Signed-off-by: Krzysztof Kozlowski <[email protected]>
Reviewed-by: Juergen Gross <[email protected]>
Signed-off-by: Juergen Gross <[email protected]>
|
|
An explicit Kconfig dependency is missing for the recent addition of
the timer support. CONFIG_SND_TIMER isn't always selected by SND_PCM.
Fixes: 26c53379f98d ("ALSA: aloop: Support selection of snd_timer instead of jiffies")
Reported-by: kbuild test robot <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Takashi Iwai <[email protected]>
|
|
The cache_strategy option is already handled carefully, so an incorrect
setting cannot get past option parsing. Meanwhile, printing
'cache_strategy=(unknown)' can cause a failure on remount.
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Chengguang Xu <[email protected]>
Reviewed-by: Gao Xiang <[email protected]>
Reviewed-by: Chao Yu <[email protected]>
Signed-off-by: Gao Xiang <[email protected]>
|
|
VLE was an old informal name for fixed-sized output compression, which
came from the published ATC '19 paper [1]. Drop those old annotations
since erofs can handle all encoded clusters on a block-aligned basis,
which is broader than fixed-sized output compression now that the
larger clustersize feature is fully implemented.
Unaligned encoding won't be considered in EROFS since it isn't friendly
to in-place I/O, and perhaps not to in-place decompression either.
a) Fixed-sized output compression with 16KB pcluster:
___________________________________
|xxxxxxxx|xxxxxxxx|xxxxxxxx|xxxxxxxx|
|___ 0___|___ 1___|___ 2___|___ 3___| physical blocks
b) Block-aligned fixed-sized input compression with
16KB pcluster:
___________________________________
|xxxxxxxx|xxxxxxxx|xxxxxxxx|xxx00000|
|___ 0___|___ 1___|___ 2___|___ 3___| physical blocks
c) Block-unaligned fixed-sized input compression with
16KB compression unit:
____________________________________________
|..xxxxxx|xxxxxxxx|xxxxxxxx|xxxxxxxx|x.......|
|___ 0___|___ 1___|___ 2___|___ 3___|___ 4___| physical blocks
Refine the names for those as well.
[1] https://www.usenix.org/conference/atc19/presentation/gao
Link: https://lore.kernel.org/r/[email protected]
Reviewed-by: Chao Yu <[email protected]>
Signed-off-by: Gao Xiang <[email protected]>
|
|
Introduce a superblock checksum feature in order to verify the
superblock at mount time.
Note that the first 1024 bytes are ignored for x86 boot sectors and
other oddities.
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Pratik Shinde <[email protected]>
Reviewed-by: Chao Yu <[email protected]>
Cc: Dan Carpenter <[email protected]>
Signed-off-by: Gao Xiang <[email protected]>
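As a hedged illustration only (not the actual erofs code; the structure
and function names here are made up), a superblock checksum that skips
the reserved first 1024 bytes could be verified with the kernel's
crc32c() like this:
  #include <linux/crc32c.h>
  #include <linux/fs.h>

  /* Hypothetical layout: the superblock starts at offset 1024 and carries
   * a 32-bit checksum field that is treated as zero while hashing. */
  struct demo_super_block {
          __le32 magic;
          __le32 checksum;
          /* ... remaining on-disk fields ... */
  } __packed;

  static bool demo_verify_sb_checksum(void *blk, unsigned int blksz)
  {
          /* skip the first 1024 bytes (x86 boot sector and other oddities) */
          struct demo_super_block *dsb = blk + 1024;
          u32 expected = le32_to_cpu(dsb->checksum);
          u32 crc;

          dsb->checksum = 0;
          crc = crc32c(~0U, dsb, blksz - 1024);
          dsb->checksum = cpu_to_le32(expected);
          return crc == expected;
  }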
|
|
Tasks waiting on I/O for sync decompression are better marked as being
in I/O wait state.
Link: https://lore.kernel.org/r/[email protected]
Reviewed-by: Chao Yu <[email protected]>
Signed-off-by: Gao Xiang <[email protected]>
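As a hedged aside (not necessarily the exact erofs change), the usual
way to have such waits accounted as I/O wait is to use the _io variants
of the wait primitives, e.g.:
  #include <linux/completion.h>

  static void demo_wait_for_decompression(struct completion *done)
  {
          /* contributes to iowait accounting, unlike wait_for_completion() */
          wait_for_completion_io(done);
  }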
|
|
Previously, both z_erofs_unzip_io and z_erofs_unzip_io_sb recorded
decompression queues for the backend to use.
The only difference is that z_erofs_unzip_io is used for on-stack sync
decompression, so it doesn't have a super_block field (since the caller
can pass it in its context), but that increases complexity while saving
only a pointer.
Rename z_erofs_unzip_io to z_erofs_decompressqueue with a fixed
super_block member, kill the other entirely, and let it fall back to
sync decompression on memory allocation failure.
Link: https://lore.kernel.org/r/[email protected]
Reviewed-by: Chao Yu <[email protected]>
Signed-off-by: Gao Xiang <[email protected]>
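A rough sketch of the resulting shape (names are illustrative, not the
real erofs symbols): one queue type that always records the
super_block, with a fallback to the caller's on-stack queue and sync
decompression when the allocation fails:
  #include <linux/slab.h>
  #include <linux/fs.h>

  struct demo_decompressqueue {
          struct super_block *sb;
          /* ... pending pclusters, completion, work struct ... */
  };

  static struct demo_decompressqueue *
  demo_get_queue(struct super_block *sb,
                 struct demo_decompressqueue *onstack_fallback, bool *sync)
  {
          struct demo_decompressqueue *q = kmalloc(sizeof(*q), GFP_NOFS);

          if (!q) {
                  /* allocation failure: decompress synchronously instead */
                  q = onstack_fallback;
                  *sync = true;
          }
          q->sb = sb;
          return q;
  }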
|
|
The open code is now much cleaner thanks to iterative development.
Link: https://lore.kernel.org/r/[email protected]
Reviewed-by: Chao Yu <[email protected]>
Signed-off-by: Gao Xiang <[email protected]>
|
|
When both CONFIG_CRAMFS_MTD and CONFIG_CRAMFS_BLOCKDEV are enabled, if
we fail to mount on MTD, we don't try on block device.
Note: this relies upon cramfs_mtd_fill_super() leaving no side
effects on fc state in case of failure; in general, failing
get_tree_...() does *not* mean "fine to try again"; e.g. parsed
options might've been consumed by fill_super callback and freed
on failure.
Fixes: 74f78fc5ef43 ("vfs: Convert cramfs to use the new mount API")
Signed-off-by: Maxime Bizon <[email protected]>
Signed-off-by: Nicolas Pitre <[email protected]>
Signed-off-by: Al Viro <[email protected]>
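The shape of the fix, as a hedged sketch (the demo_* names are
placeholders, not the actual cramfs functions):
  #include <linux/fs_context.h>
  #include <linux/mtd/super.h>

  static int demo_mtd_fill_super(struct super_block *sb, struct fs_context *fc);
  static int demo_blkdev_fill_super(struct super_block *sb, struct fs_context *fc);

  static int demo_get_tree(struct fs_context *fc)
  {
          int ret = -ENODEV;

  #ifdef CONFIG_CRAMFS_MTD
          ret = get_tree_mtd(fc, demo_mtd_fill_super);
          if (!ret)
                  return 0;
  #endif
  #ifdef CONFIG_CRAMFS_BLOCKDEV
          /* MTD failed or is not configured: try the block device path,
           * relying on the MTD fill_super leaving fc untouched on failure. */
          ret = get_tree_bdev(fc, demo_blkdev_fill_super);
  #endif
          return ret;
  }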
|
|
The recording of RSS hash should be controlled by NETIF_F_RXHASH.
Fixes: 1fac7ca4e63b ("hv_netvsc: record hardware hash in skb")
Suggested-by: Eric Dumazet <[email protected]>
Signed-off-by: Stephen Hemminger <[email protected]>
Signed-off-by: Haiyang Zhang <[email protected]>
Reviewed-by: Michael Kelley <[email protected]>
Signed-off-by: Jakub Kicinski <[email protected]>
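A minimal, hedged sketch of the intended behaviour (the real hv_netvsc
code differs in detail): record the hardware hash only when
NETIF_F_RXHASH is set on the device:
  #include <linux/netdevice.h>
  #include <linux/skbuff.h>

  static void demo_record_rss_hash(struct net_device *ndev, struct sk_buff *skb,
                                   u32 hw_hash)
  {
          /* respect the ethtool-controlled receive-hashing feature flag */
          if (ndev->features & NETIF_F_RXHASH)
                  skb_set_hash(skb, hw_hash, PKT_HASH_TYPE_L4);
  }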
|
|
This patch is to fix a data-race reported by syzbot:
BUG: KCSAN: data-race in sctp_assoc_migrate / sctp_hash_obj
write to 0xffff8880b67c0020 of 8 bytes by task 18908 on cpu 1:
sctp_assoc_migrate+0x1a6/0x290 net/sctp/associola.c:1091
sctp_sock_migrate+0x8aa/0x9b0 net/sctp/socket.c:9465
sctp_accept+0x3c8/0x470 net/sctp/socket.c:4916
inet_accept+0x7f/0x360 net/ipv4/af_inet.c:734
__sys_accept4+0x224/0x430 net/socket.c:1754
__do_sys_accept net/socket.c:1795 [inline]
__se_sys_accept net/socket.c:1792 [inline]
__x64_sys_accept+0x4e/0x60 net/socket.c:1792
do_syscall_64+0xcc/0x370 arch/x86/entry/common.c:290
entry_SYSCALL_64_after_hwframe+0x44/0xa9
read to 0xffff8880b67c0020 of 8 bytes by task 12003 on cpu 0:
sctp_hash_obj+0x4f/0x2d0 net/sctp/input.c:894
rht_key_get_hash include/linux/rhashtable.h:133 [inline]
rht_key_hashfn include/linux/rhashtable.h:159 [inline]
rht_head_hashfn include/linux/rhashtable.h:174 [inline]
head_hashfn lib/rhashtable.c:41 [inline]
rhashtable_rehash_one lib/rhashtable.c:245 [inline]
rhashtable_rehash_chain lib/rhashtable.c:276 [inline]
rhashtable_rehash_table lib/rhashtable.c:316 [inline]
rht_deferred_worker+0x468/0xab0 lib/rhashtable.c:420
process_one_work+0x3d4/0x890 kernel/workqueue.c:2269
worker_thread+0xa0/0x800 kernel/workqueue.c:2415
kthread+0x1d4/0x200 drivers/block/aoe/aoecmd.c:1253
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:352
It was caused by rhashtable accessing asoc->base.sk while
sctp_assoc_migrate is changing its value. However, what rhashtable
needs is the netns from asoc->base.sk, and for an asoc its netns won't
change once set. So we can simply fix it by caching the netns when the
asoc is created.
Fixes: d6c0256a60e6 ("sctp: add the rhashtable apis for sctp global transport hashtable")
Reported-by: [email protected]
Signed-off-by: Xin Long <[email protected]>
Acked-by: Marcelo Ricardo Leitner <[email protected]>
Signed-off-by: Jakub Kicinski <[email protected]>
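A hedged sketch of the idea (structure and names are illustrative):
cache the netns once at association creation and hash on the cached
pointer, so rhashtable never has to dereference base.sk while migration
rewrites it:
  #include <net/net_namespace.h>
  #include <net/sock.h>

  struct demo_assoc {
          struct sock *sk;        /* may be rewritten by sctp_assoc_migrate() */
          struct net *net;        /* cached at creation time, never changes   */
  };

  static void demo_assoc_init(struct demo_assoc *asoc, struct sock *sk)
  {
          asoc->sk = sk;
          asoc->net = sock_net(sk);       /* cache the netns up front */
  }

  static struct net *demo_assoc_net(const struct demo_assoc *asoc)
  {
          /* the rhashtable key/hash functions read this instead of
           * sock_net(asoc->sk) */
          return asoc->net;
  }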
|
|
In the implementation of sctp_sf_do_5_2_4_dupcook() the allocated
new_asoc is leaked if security_sctp_assoc_request() fails. Release it
via sctp_association_free().
Fixes: 2277c7cd75e3 ("sctp: Add LSM hooks")
Signed-off-by: Navid Emamdoost <[email protected]>
Acked-by: Marcelo Ricardo Leitner <[email protected]>
Signed-off-by: Jakub Kicinski <[email protected]>
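Roughly, the error path described above takes this shape (a hedged
fragment; the surrounding state-machine context is omitted):
  /* free the freshly allocated association instead of leaking it when
   * the LSM hook rejects the request */
  if (security_sctp_assoc_request(ep, chunk->skb)) {
          sctp_association_free(new_asoc);
          return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
  }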
|
|
Use the common VLAN API to access the vlan_tag info.
Signed-off-by: Tonghao Zhang <[email protected]>
Signed-off-by: Jakub Kicinski <[email protected]>
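For reference, a hedged example of the common helpers meant here:
  #include <linux/if_vlan.h>
  #include <linux/skbuff.h>

  static u16 demo_get_vlan_tci(const struct sk_buff *skb)
  {
          /* use the shared accessors rather than open-coding vlan_tci checks */
          if (skb_vlan_tag_present(skb))
                  return skb_vlan_tag_get(skb);
          return 0;
  }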
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux
Saeed Mahameed says:
====================
mlx5-updates-2019-11-22
1) Misc Cleanups
2) Software steering support for Geneve
====================
Signed-off-by: Jakub Kicinski <[email protected]>
|
|
Rename the mac_link_state() method to mac_pcs_get_state() to make it
clear that it should return the MAC's current PCS state, which is used
for in-band negotiation, rather than just reading back what the MAC has
been configured for. Update the documentation to explicitly mention
that this is for in-band use.
We drop the return value as well; most of phylink doesn't check the
return value and it is not clear what it should do on error - instead
arrange for state->link to be false.
Signed-off-by: Russell King <[email protected]>
Signed-off-by: Jakub Kicinski <[email protected]>
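A hedged sketch of what a driver's renamed callback looks like
(register access elided; this is not any particular driver's code):
  #include <linux/phylink.h>

  static void demo_mac_pcs_get_state(struct phylink_config *config,
                                     struct phylink_link_state *state)
  {
          /* report the current PCS state used for in-band negotiation;
           * on a read failure simply leave state->link false */
          state->link = false;

          /* ... read the hardware and fill in state->speed, state->duplex,
           *     state->pause, then set state->link = true ... */
  }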
|
|
These two functions have never been used since they were added.
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Christoph Hellwig <[email protected]>
Reviewed-by: John Hubbard <[email protected]>
Reviewed-by: Jason Gunthorpe <[email protected]>
Signed-off-by: Jason Gunthorpe <[email protected]>
|
|
hmm_range_fault() calls find_vma() and walk_page_range() in a loop. This
is unnecessary duplication since walk_page_range() calls find_vma() in a
loop already.
Simplify hmm_range_fault() by defining a walk_test() callback function to
filter unhandled vmas.
This also fixes a bug where hmm_range_fault() was not checking start >=
vma->vm_start before checking vma->vm_flags, so hmm_range_fault() could
return an error based on the wrong vma for the requested range.
It also fixes a bug where, if the vma has no read access and the caller
did not request a fault, an error was returned even though there
shouldn't be any error return code in that case.
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Ralph Campbell <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
Reviewed-by: Jason Gunthorpe <[email protected]>
Signed-off-by: Jason Gunthorpe <[email protected]>
|
|
gntdev simply wants to monitor a specific VMA for any notifier events,
this can be done straightforwardly using mmu_interval_notifier_insert()
over the VMA's VA range.
The notifier should be attached until the original VMA is destroyed.
It is unclear if any of this is even sane, but at least a lot of duplicate
code is removed.
Link: https://lore.kernel.org/r/[email protected]
Reviewed-by: Boris Ostrovsky <[email protected]>
Signed-off-by: Jason Gunthorpe <[email protected]>
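For context, a hedged sketch of how a driver attaches an interval
notifier over a VMA's range with the new API (demo_* names are
placeholders):
  #include <linux/mmu_notifier.h>
  #include <linux/mm.h>

  static bool demo_invalidate(struct mmu_interval_notifier *mni,
                              const struct mmu_notifier_range *range,
                              unsigned long cur_seq)
  {
          /* normally done under the driver's own lock */
          mmu_interval_set_seq(mni, cur_seq);
          /* ... stop using pages in [range->start, range->end) ... */
          return true;
  }

  static const struct mmu_interval_notifier_ops demo_mn_ops = {
          .invalidate = demo_invalidate,
  };

  static int demo_watch_vma(struct mmu_interval_notifier *mni,
                            struct vm_area_struct *vma)
  {
          /* monitor exactly the VMA's VA range for notifier events */
          return mmu_interval_notifier_insert(mni, vma->vm_mm, vma->vm_start,
                                              vma->vm_end - vma->vm_start,
                                              &demo_mn_ops);
  }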
|
|
The only two users of this are now converted to use mmu_interval_notifier,
delete all the code and update hmm.rst.
Link: https://lore.kernel.org/r/[email protected]
Reviewed-by: Jérôme Glisse <[email protected]>
Tested-by: Ralph Campbell <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Signed-off-by: Jason Gunthorpe <[email protected]>
|
|
Convert the collision-retry lock around hmm_range_fault to use the one now
provided by the mmu_interval notifier.
Although this driver does not seem to use hmm's collision-retry lock
correctly, it can still be converted over to the mmu_interval_notifier
API instead of hmm_mirror without too much trouble.
This also deletes another place where a driver is associating
additional data (struct amdgpu_mn) with an mm_struct.
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Philip Yang <[email protected]>
Reviewed-by: Philip Yang <[email protected]>
Tested-by: Philip Yang <[email protected]>
Signed-off-by: Jason Gunthorpe <[email protected]>
|
|
Remove the interval tree in the driver and rely on the tree maintained by
the mmu_notifier for delivering mmu_notifier invalidation callbacks.
For some reason amdgpu has a very complicated arrangement where it
tries to prevent duplicate entries in the interval_tree; this is not
necessary, as each amdgpu_bo can be its own stand-alone entry.
interval_tree already allows duplicates and overlaps in the tree.
Also, there is no need to remove entries upon a release callback, the
mmu_interval API safely allows objects to remain registered beyond the
lifetime of the mm. The driver only has to stop touching the pages during
release.
Link: https://lore.kernel.org/r/[email protected]
Reviewed-by: Philip Yang <[email protected]>
Tested-by: Philip Yang <[email protected]>
Signed-off-by: Jason Gunthorpe <[email protected]>
|
|
find_vma() must be called under the mmap_sem; reorganize this code to
do the vma check after entering the lock.
Further, fix the unlocked use of struct task_struct's mm; instead use
the mm from hmm_mirror, which has an active mm_grab. Also, the mm_grab
must be converted to a mm_get before acquiring mmap_sem or calling
find_vma().
Fixes: 66c45500bfdc ("drm/amdgpu: use new HMM APIs and helpers")
Fixes: 0919195f2b0d ("drm/amdgpu: Enable amdgpu_ttm_tt_get_user_pages in worker threads")
Link: https://lore.kernel.org/r/[email protected]
Acked-by: Christian König <[email protected]>
Reviewed-by: Felix Kuehling <[email protected]>
Reviewed-by: Philip Yang <[email protected]>
Tested-by: Philip Yang <[email protected]>
Signed-off-by: Jason Gunthorpe <[email protected]>
|
|
Remove the hmm_mirror object and use the mmu_interval_notifier API instead
for the range, and use the normal mmu_notifier API for the general
invalidation callback.
While here re-organize the pagefault path so the locking pattern is clear.
nouveau is the only driver that uses a temporary range object and instead
forwards nearly every invalidation range directly to the HW. While this is
not how the mmu_interval_notifier was intended to be used, the overheads on
the pagefaulting path are similar to the existing hmm_mirror version.
Particularly since the interval tree will be small.
Link: https://lore.kernel.org/r/[email protected]
Tested-by: Ralph Campbell <[email protected]>
Signed-off-by: Jason Gunthorpe <[email protected]>
|
|
There is no reason to get the invalidate_range_start() callback via an
indirection through hmm_mirror, just register a normal notifier directly.
Link: https://lore.kernel.org/r/[email protected]
Tested-by: Ralph Campbell <[email protected]>
Signed-off-by: Jason Gunthorpe <[email protected]>
|
|
The new API is an exact match for the needs of radeon.
For some reason radeon tries to remove overlapping ranges from the
interval tree, but interval trees (and mmu_interval_notifier_insert())
support overlapping ranges directly. Simply delete all this code.
Since this driver is missing an invalidate_range_end callback, but
still calls get_user_pages(), it cannot be correct against all races.
Link: https://lore.kernel.org/r/[email protected]
Reviewed-by: Christian König <[email protected]>
Signed-off-by: Jason Gunthorpe <[email protected]>
|
|
This converts one of the two users of mmu_notifiers to use the new API.
The conversion is fairly straightforward; however, the existing use of
notifiers here seems to be racy.
Link: https://lore.kernel.org/r/[email protected]
Tested-by: Dennis Dalessandro <[email protected]>
Signed-off-by: Jason Gunthorpe <[email protected]>
|
|
Replace the internal interval tree based mmu notifier with the new common
mmu_interval_notifier_insert() API. This removes a lot of code and fixes a
deadlock that can be triggered in ODP:
  zap_page_range()
    mmu_notifier_invalidate_range_start()
      [..]
      ib_umem_notifier_invalidate_range_start()
        down_read(&per_mm->umem_rwsem)
    unmap_single_vma()
      [..]
      __split_huge_page_pmd()
        mmu_notifier_invalidate_range_start()
          [..]
          ib_umem_notifier_invalidate_range_start()
            down_read(&per_mm->umem_rwsem)   // DEADLOCK
        mmu_notifier_invalidate_range_end()
          up_read(&per_mm->umem_rwsem)
    mmu_notifier_invalidate_range_end()
      up_read(&per_mm->umem_rwsem)
The umem_rwsem is held across the range_start/end as the ODP algorithm for
invalidate_range_end cannot tolerate changes to the interval
tree. However, due to the nested invalidation regions the second
down_read() can deadlock if there are competing writers. The new core code
provides an alternative scheme to solve this problem.
Fixes: ca748c39ea3f ("RDMA/umem: Get rid of per_mm->notifier_count")
Link: https://lore.kernel.org/r/[email protected]
Tested-by: Artemy Kovalyov <[email protected]>
Signed-off-by: Jason Gunthorpe <[email protected]>
|
|
Only the function calls are stubbed out with static inlines that always
fail. This is the standard way to write a header for an optional component
and makes it easier for drivers that only optionally need HMM_MIRROR.
Link: https://lore.kernel.org/r/[email protected]
Reviewed-by: Jérôme Glisse <[email protected]>
Tested-by: Ralph Campbell <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Signed-off-by: Jason Gunthorpe <[email protected]>
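The pattern referred to is the usual one for optional kernel
components; a hedged illustration with made-up names:
  struct demo_ctx;

  #if IS_ENABLED(CONFIG_DEMO_FEATURE)
  int demo_feature_register(struct demo_ctx *ctx);
  #else
  /* always-failing stub so callers compile when the feature is disabled */
  static inline int demo_feature_register(struct demo_ctx *ctx)
  {
          return -EOPNOTSUPP;
  }
  #endif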
|
|
hmm_mirror's handling of ranges does not use a sequence count which
results in this bug:
        CPU0                                CPU1
                                     hmm_range_wait_until_valid(range)
                                         valid == true
                                     hmm_range_fault(range)
  hmm_invalidate_range_start()
      range->valid = false
  hmm_invalidate_range_end()
      range->valid = true
                                     hmm_range_valid(range)
                                          valid == true
Where the hmm_range_valid() should not have succeeded.
Adding the required sequence count would make it nearly identical to the
new mmu_interval_notifier. Instead replace the hmm_mirror stuff with
mmu_interval_notifier.
Co-existence of the two APIs is the first step.
Link: https://lore.kernel.org/r/[email protected]
Reviewed-by: Jérôme Glisse <[email protected]>
Tested-by: Philip Yang <[email protected]>
Tested-by: Ralph Campbell <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Signed-off-by: Jason Gunthorpe <[email protected]>
|
|
Of the 13 users of mmu_notifiers, 8 of them use only
invalidate_range_start/end() and immediately intersect the
mmu_notifier_range with some kind of internal list of VAs. 4 use an
interval tree (i915_gem, radeon_mn, umem_odp, hfi1). 4 use a linked
list of some kind (scif_dma, vhost, gntdev, hmm). And the remaining 5
either don't use invalidate_range_start() or do something special with
it.
It turns out that building a correct scheme with an interval tree is
pretty complicated, particularly if the use case is synchronizing against
another thread doing get_user_pages(). Many of these implementations have
various subtle and difficult to fix races.
This approach puts the interval tree as common code at the top of the mmu
notifier call tree and implements a shareable locking scheme.
It includes:
- An interval tree tracking VA ranges, with per-range callbacks
- A read/write locking scheme for the interval tree that avoids
sleeping in the notifier path (for OOM killer)
- A sequence counter based collision-retry locking scheme to tell
device page fault that a VA range is being concurrently invalidated.
This is based on various ideas:
- hmm accumulates invalidated VA ranges and releases them when all
invalidates are done, via active_invalidate_ranges count.
This approach avoids having to intersect the interval tree twice (as
umem_odp does) at the potential cost of a longer device page fault.
- kvm/umem_odp use a sequence counter to drive the collision retry,
via invalidate_seq
- a deferred work todo list on unlock scheme like RTNL, via deferred_list.
This makes adding/removing interval tree members more deterministic
- seqlock, except this version makes the seqlock idea multi-holder on the
write side by protecting it with active_invalidate_ranges and a spinlock
To minimize MM overhead when only the interval tree is being used, the
entire SRCU and hlist overheads are dropped using some simple
branches. Similarly the interval tree overhead is dropped when in hlist
mode.
The overhead from the mandatory spinlock is broadly the same as that of
most existing users, which already had a lock (or two) of some sort on
the invalidation path.
Link: https://lore.kernel.org/r/[email protected]
Acked-by: Christian König <[email protected]>
Tested-by: Philip Yang <[email protected]>
Tested-by: Ralph Campbell <[email protected]>
Reviewed-by: John Hubbard <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Signed-off-by: Jason Gunthorpe <[email protected]>
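To make the collision-retry scheme concrete, here is a hedged sketch of
the driver-side pattern built on the new API (mmu_interval_read_begin()
and mmu_interval_read_retry() are the new core helpers; everything else
is illustrative):
  #include <linux/mmu_notifier.h>

  static int demo_fault_range(struct mmu_interval_notifier *mni)
  {
          unsigned long seq;

  again:
          seq = mmu_interval_read_begin(mni);

          /* ... fault/collect the pages for the range, e.g. via
           *     hmm_range_fault() or get_user_pages(), without holding the
           *     driver's page table lock ... */

          /* take the same driver lock the invalidate callback takes, then: */
          if (mmu_interval_read_retry(mni, seq)) {
                  /* a concurrent invalidation hit this range: start over */
                  goto again;
          }
          /* ... program the device page tables, drop the lock ... */
          return 0;
  }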
|
|
The PROM only enables the ethernet PHY on the first Origin 200 module,
so we must do it ourselves for the second module.
Signed-off-by: Thomas Bogendoerfer <[email protected]>
Signed-off-by: Paul Burton <[email protected]>
Cc: Jakub Kicinski <[email protected]>
Cc: Jonathan Corbet <[email protected]>
Cc: Ralf Baechle <[email protected]>
Cc: Paul Burton <[email protected]>
Cc: James Hogan <[email protected]>
Cc: Lee Jones <[email protected]>
Cc: David S. Miller <[email protected]>
Cc: Srinivas Kandagatla <[email protected]>
Cc: Alessandro Zummo <[email protected]>
Cc: Alexandre Belloni <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: Jiri Slaby <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
|
|
Generation of fake subdevice ID had vendor and device ID swapped.
Signed-off-by: Thomas Bogendoerfer <[email protected]>
Signed-off-by: Paul Burton <[email protected]>
Cc: Jakub Kicinski <[email protected]>
Cc: Jonathan Corbet <[email protected]>
Cc: Ralf Baechle <[email protected]>
Cc: Paul Burton <[email protected]>
Cc: James Hogan <[email protected]>
Cc: Lee Jones <[email protected]>
Cc: David S. Miller <[email protected]>
Cc: Srinivas Kandagatla <[email protected]>
Cc: Alessandro Zummo <[email protected]>
Cc: Alexandre Belloni <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: Jiri Slaby <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
|
|
Pull last minute virtio bugfixes from Michael Tsirkin:
"Minor bugfixes all over the place"
* tag 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost:
virtio_balloon: fix shrinker count
virtio_balloon: fix shrinker scan number of pages
virtio_console: allocate inbufs in add_port() only if it is needed
virtio_ring: fix return code on DMA mapping fails
|
|
rhashtable_lookup_fast() internally calls rcu_read_lock() and then
calls rhashtable_lookup(). So if rcu_read_lock() is already held,
rhashtable_lookup() is enough.
Signed-off-by: Taehee Yoo <[email protected]>
Signed-off-by: Jakub Kicinski <[email protected]>
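A hedged illustration (hypothetical table, key and params):
  #include <linux/rhashtable.h>
  #include <linux/rcupdate.h>

  /* rhashtable_lookup_fast() opens its own RCU read-side section; when the
   * caller already holds rcu_read_lock(), the plain variant is sufficient. */
  static void *demo_lookup_locked(struct rhashtable *ht, const void *key,
                                  const struct rhashtable_params params)
  {
          RCU_LOCKDEP_WARN(!rcu_read_lock_held(),
                           "caller must hold the RCU read lock");
          return rhashtable_lookup(ht, key, params);
  }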
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/kvalo/wireless-drivers-next
Kalle Valo says:
====================
wireless-drivers-next patches for v5.5
Last set of patches for v5.5. Major features here are 802.11ax support
for qtnfmac and airtime fairness support for mt76. And naturally
smaller fixes and improvements all over.
Major changes:
qtnfmac
* add 802.11ax support in AP mode
* enable offload bridging support
iwlwifi
* support TX/RX antennas reporting
mt76
* mt7615 smart carrier sense support
* aggregation statistics via debugfs
* airtime fairness (ATF) support
* mt76x0 OF mac address support
====================
Signed-off-by: Jakub Kicinski <[email protected]>
|
|
Robert Schwebel says:
====================
here is v2 of the series converting the NFC documentation from txt to
rst. Thanks to Jonathan and Dave for the input.
Changes since (implicit) v1:
* replace code-block by more compact :: syntax
* really add the rst file to the index
====================
Signed-off-by: Jakub Kicinski <[email protected]>
|
|
Now that the sphinx syntax has been fixed, change the document from txt
to rst and add it to the index.
Signed-off-by: Robert Schwebel <[email protected]>
Signed-off-by: Jakub Kicinski <[email protected]>
|
|
Silence this warning:
Documentation/networking/nfc.rst:113: WARNING: Definition list ends without
a blank line; unexpected unindent.
Signed-off-by: Robert Schwebel <[email protected]>
Signed-off-by: Jakub Kicinski <[email protected]>
|
|
Fix this warning:
Documentation/networking/nfc.rst:87: WARNING: Bullet list ends without
a blank line; unexpected unindent.
Signed-off-by: Robert Schwebel <[email protected]>
Signed-off-by: Jakub Kicinski <[email protected]>
|
|
Change the block diagram to match the sphinx syntax. This will make it
possible to switch this file to rst in the future.
Signed-off-by: Robert Schwebel <[email protected]>
Signed-off-by: Jakub Kicinski <[email protected]>
|
|
The headlines in this file are not in the standard kernel documentation
headline format. Change them, so this file can be switched to rst in
the future.
Signed-off-by: Robert Schwebel <[email protected]>
Signed-off-by: Jakub Kicinski <[email protected]>
|
|
When a phydev is created, the speed and duplex are set to zero and
-1 respectively, rather than using the predefined SPEED_UNKNOWN and
DUPLEX_UNKNOWN constants.
There is a window at initialisation time where we may report link
down using the 0/-1 values. Tidy this up and use the predefined
constants, so debug doesn't complain with:
"Unsupported (update phy-core.c)/Unsupported (update phy-core.c)"
when the speed and duplex settings are printed.
Signed-off-by: Russell King <[email protected]>
Signed-off-by: Jakub Kicinski <[email protected]>
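A hedged sketch of the initialisation being described (the real
assignment happens when the phy_device is allocated):
  #include <linux/phy.h>

  static void demo_phydev_init_link(struct phy_device *phydev)
  {
          /* use the named constants rather than the magic 0 / -1 values */
          phydev->speed = SPEED_UNKNOWN;
          phydev->duplex = DUPLEX_UNKNOWN;
  }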
|
|
There are no users of phy_ethtool_sset() in the kernel anymore, and
as of commit 3c1bcc8614db ("net: ethernet: Convert phydev advertize
and supported from u32 to link mode"), the implementation is slightly
buggy - it doesn't correctly check the masked advertising mask as it
used to.
Remove it, and update the phy documentation to refer to its replacement
function.
Signed-off-by: Russell King <[email protected]>
Reviewed-by: Andrew Lunn <[email protected]>
Signed-off-by: Jakub Kicinski <[email protected]>
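For reference, a hedged example of the ksettings-based interface that
the documentation now points to as the replacement:
  #include <linux/netdevice.h>
  #include <linux/phy.h>

  static int demo_set_link_ksettings(struct net_device *ndev,
                                     const struct ethtool_link_ksettings *cmd)
  {
          /* link-mode aware successor to the removed phy_ethtool_sset() */
          return phy_ethtool_ksettings_set(ndev->phydev, cmd);
  }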
|
|
This commit reverts commit 91e6015b082b ("bpf: Emit audit messages
upon successful prog load and unload") and its follow up commit
7599a896f2e4 ("audit: Move audit_log_task declaration under
CONFIG_AUDITSYSCALL") as requested by Paul Moore. The change needs
close review on linux-audit, tests etc.
Signed-off-by: Jakub Kicinski <[email protected]>
|
|
Commit 37e4c997dadf ("KVM: VMX: validate individual bits of guest
MSR_IA32_FEATURE_CONTROL") broke the KVM_SET_MSRS ABI by instituting
new constraints on the data values that kvm would accept for the guest
MSR, IA32_FEATURE_CONTROL. Perhaps these constraints should have been
opt-in via a new KVM capability, but they were applied
indiscriminately, breaking at least one existing hypervisor.
Relax the constraints to allow either or both of
FEATURE_CONTROL_VMXON_ENABLED_OUTSIDE_SMX and
FEATURE_CONTROL_VMXON_ENABLED_INSIDE_SMX to be set when nVMX is
enabled. This change is sufficient to fix the aforementioned breakage.
Fixes: 37e4c997dadf ("KVM: VMX: validate individual bits of guest MSR_IA32_FEATURE_CONTROL")
Signed-off-by: Jim Mattson <[email protected]>
Reviewed-by: Liran Alon <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
|
|
Acquire kvm->srcu for the duration of ->set_nested_state() to fix a bug
where nVMX derefences ->memslots without holding ->srcu or ->slots_lock.
The other half of nested migration, ->get_nested_state(), does not need
to acquire ->srcu as it is purely a dump of internal KVM (and CPU)
state to userspace.
Detected as an RCU lockdep splat that is 100% reproducible by running
KVM's state_test selftest with CONFIG_PROVE_LOCKING=y. Note that the
failing function, kvm_is_visible_gfn(), is only checking the validity of
a gfn, it's not actually accessing guest memory (which is more or less
unsupported during vmx_set_nested_state() due to incorrect MMU state),
i.e. vmx_set_nested_state() itself isn't fundamentally broken. In any
case, setting nested state isn't a fast path so there's no reason to go
out of our way to avoid taking ->srcu.
=============================
WARNING: suspicious RCU usage
5.4.0-rc7+ #94 Not tainted
-----------------------------
include/linux/kvm_host.h:626 suspicious rcu_dereference_check() usage!
other info that might help us debug this:
rcu_scheduler_active = 2, debug_locks = 1
1 lock held by evmcs_test/10939:
#0: ffff88826ffcb800 (&vcpu->mutex){+.+.}, at: kvm_vcpu_ioctl+0x85/0x630 [kvm]
stack backtrace:
CPU: 1 PID: 10939 Comm: evmcs_test Not tainted 5.4.0-rc7+ #94
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Call Trace:
dump_stack+0x68/0x9b
kvm_is_visible_gfn+0x179/0x180 [kvm]
mmu_check_root+0x11/0x30 [kvm]
fast_cr3_switch+0x40/0x120 [kvm]
kvm_mmu_new_cr3+0x34/0x60 [kvm]
nested_vmx_load_cr3+0xbd/0x1f0 [kvm_intel]
nested_vmx_enter_non_root_mode+0xab8/0x1d60 [kvm_intel]
vmx_set_nested_state+0x256/0x340 [kvm_intel]
kvm_arch_vcpu_ioctl+0x491/0x11a0 [kvm]
kvm_vcpu_ioctl+0xde/0x630 [kvm]
do_vfs_ioctl+0xa2/0x6c0
ksys_ioctl+0x66/0x70
__x64_sys_ioctl+0x16/0x20
do_syscall_64+0x54/0x200
entry_SYSCALL_64_after_hwframe+0x49/0xbe
RIP: 0033:0x7f59a2b95f47
Fixes: 8fcc4b5923af5 ("kvm: nVMX: Introduce KVM_CAP_NESTED_STATE")
Cc: [email protected]
Signed-off-by: Sean Christopherson <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
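The locking added is the standard kvm->srcu read-side section; as a
hedged sketch (the vendor callback invocation is elided):
  #include <linux/kvm_host.h>

  static int demo_set_nested_state(struct kvm_vcpu *vcpu)
  {
          int idx, r;

          /* hold kvm->srcu so memslot dereferences inside the callback,
           * e.g. via kvm_is_visible_gfn(), are safe */
          idx = srcu_read_lock(&vcpu->kvm->srcu);
          r = 0;  /* ... call the vendor ->set_nested_state() here ... */
          srcu_read_unlock(&vcpu->kvm->srcu, idx);
          return r;
  }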
|
|
Fold shared_msr_update() into its sole user to eliminate its pointless
bounds check, its godawful printk, its misleading comment (it's called
under a global lock), and its woefully inaccurate name.
Signed-off-by: Sean Christopherson <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
|
|
The jump labels out_free_1 and out_free_2 deal with the same stuff, so
get rid of one and rename the label out_free_0a to retain the label
name order.
Signed-off-by: Miaohe Lin <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
|
|
A recent change inadvertently exported a static function, which results
in modpost throwing a warning. Fix it.
Fixes: cbbaa2727aa3 ("KVM: x86: fix presentation of TSX feature in ARCH_CAPABILITIES")
Signed-off-by: Sean Christopherson <[email protected]>
Cc: [email protected]
Reviewed-by: Jim Mattson <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
|