|
hwprobe provides a way to report whether misaligned accesses are
emulated. In order to correctly populate that feature, we can check if
a misaligned access actually traps when performed. This can be checked
using an exception table entry, which will be used when a misaligned
access is done from kernel mode.
Signed-off-by: Clément Léger <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Palmer Dabbelt <[email protected]>
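A minimal conceptual sketch of the detection idea (not the actual
kernel code): the misaligned_trapped flag set from the trap path is an
assumption, and the real patch anchors the recovery via an exception
table entry.

    static bool misaligned_trapped;  /* assumed: set by the misaligned trap handler */

    static void __init check_misaligned_emulation(void)
    {
            u8 buf[8] = { 1, 2, 3, 4, 5, 6, 7, 8 };
            volatile u32 val;

            misaligned_trapped = false;
            /* Deliberately misaligned 32-bit load from kernel mode. */
            val = *(volatile u32 *)(buf + 1);
            (void)val;

            if (misaligned_trapped)
                    pr_info("misaligned accesses are emulated\n");
    }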
|
|
This function is solely called as an initcall, thus annotate it with
__init.
Signed-off-by: Clément Léger <[email protected]>
Reviewed-by: Evan Green <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Palmer Dabbelt <[email protected]>
|
|
This sysctl tuning option allows the user to disable misaligned access
handling globally on the system. It will also be used by the misaligned
detection code to temporarily disable misaligned access handling.
Signed-off-by: Clément Léger <[email protected]>
Reviewed-by: Björn Töpel <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Palmer Dabbelt <[email protected]>
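A hedged sketch of a sysctl knob like the one described (the knob and
variable names are illustrative, not necessarily the ones the patch
adds):

    #include <linux/sysctl.h>

    static int unaligned_enabled = 1;

    static struct ctl_table unaligned_table[] = {
            {
                    .procname     = "unaligned-trap",
                    .data         = &unaligned_enabled,
                    .maxlen       = sizeof(int),
                    .mode         = 0644,
                    .proc_handler = proc_dointvec_minmax,
                    .extra1       = SYSCTL_ZERO,
                    .extra2       = SYSCTL_ONE,
            },
            { }
    };

    static int __init unaligned_sysctl_init(void)
    {
            if (!register_sysctl("kernel", unaligned_table))
                    return -ENOMEM;
            return 0;
    }
    arch_initcall(unaligned_sysctl_init);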
|
|
This support is partially based on the openSBI misaligned
floating-point instruction emulation. It provides support for the
existing floating-point instructions (both 32/64-bit and compressed
ones). Since floating-point registers are not part of the pt_regs
struct, we need to modify them directly using some assembly. We also
dirty the pt_regs status whenever we modify them, to be sure the
context switch will save the FP state. With this support, Linux is on
par with openSBI.
Signed-off-by: Clément Léger <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Palmer Dabbelt <[email protected]>
|
|
Add missing calls to account for misaligned fault events using
perf_sw_event().
Signed-off-by: Clément Léger <[email protected]>
Reviewed-by: Björn Töpel <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Palmer Dabbelt <[email protected]>
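The accounting call in question has this shape (a hedged sketch; the
real call sites are in the misaligned load/store handlers):

    perf_sw_event(PERF_COUNT_SW_ALIGNMENT_FAULTS, 1, regs, addr);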
|
|
Misalignment trap handling is currently only supported for M-mode and
uses direct accesses to user memory. In S-mode, when handling a
usermode fault, the get_user()/put_user() accessors must be used
instead. Implement load_u8(), store_u8() and get_insn() using these
accessors for userspace and direct text access for the kernel.
Signed-off-by: Clément Léger <[email protected]>
Reviewed-by: Björn Töpel <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Palmer Dabbelt <[email protected]>
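A hedged sketch of the accessor split described (uaccess helpers for
faults taken from userspace, direct access for kernel memory; the real
patch may differ in detail):

    static int load_u8(struct pt_regs *regs, const u8 *addr, u8 *r_val)
    {
            if (user_mode(regs))
                    return __get_user(*r_val, addr);

            *r_val = *addr;
            return 0;
    }

    static int store_u8(struct pt_regs *regs, u8 *addr, u8 val)
    {
            if (user_mode(regs))
                    return __put_user(val, addr);

            *addr = val;
            return 0;
    }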
|
|
Replace the macros with the only two function calls that are made from
this file, store_u8() and load_u8().
Signed-off-by: Clément Léger <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Palmer Dabbelt <[email protected]>
|
|
* uninline simple_strntoull():
  gcc over-inlines it, and this function is not performance critical
* reorder arguments, so that appending INT_MAX as the 4th argument
  generates a very efficient tail call

Space savings:

add/remove: 1/0 grow/shrink: 0/3 up/down: 27/-179 (-152)
Function                 old    new   delta
simple_strntoll            -     27     +27
simple_strtoull           15     10      -5
simple_strtoll            41      7     -34
vsscanf                 1930   1790    -140
Signed-off-by: Alexey Dobriyan <[email protected]>
Reviewed-by: Andy Shevchenko <[email protected]>
Reviewed-by: Petr Mladek <[email protected]>
Signed-off-by: Petr Mladek <[email protected]>
Link: https://lore.kernel.org/all/82a2af6e-9b6c-4a09-89d7-ca90cc1cdad1@p183/
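A sketch of the reordering's effect (signatures abbreviated): with the
limit as the trailing parameter, the wrapper reduces to a tail call
that only has to materialize INT_MAX in the fourth argument register.

    static noinline_for_stack
    unsigned long long simple_strntoull(const char *startp, char **endp,
                                        unsigned int base, size_t max_chars);

    unsigned long long simple_strtoull(const char *cp, char **endp,
                                       unsigned int base)
    {
            /* tail call: same first three arguments, INT_MAX appended */
            return simple_strntoull(cp, endp, base, INT_MAX);
    }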
|
|
This syntax is useful for specifying libraries linked to all userspace
programs in the Makefile.
Signed-off-by: Masahiro Yamada <[email protected]>
|
|
Commit 0f71dcfb4aef ("powerpc/ftrace: Add support for
-fpatchable-function-entry") added a script to check for
-fpatchable-function-entry compiler support. The script expects the
compiler to emit the __patchable_function_entries section and a few
nops after a function entry.
If the compiler understands and emits the above,
CONFIG_ARCH_USING_PATCHABLE_FUNCTION_ENTRY is set.
So teach dummy-tools' gcc about this.
Signed-off-by: Jiri Slaby (SUSE) <[email protected]>
Reviewed-by: Nathan Chancellor <[email protected]>
Signed-off-by: Masahiro Yamada <[email protected]>
|
|
In order to reduce the excessive memory mapping cost in live migration
and VM reboot, it is desirable to decouple the vhost-vdpa IOTLB
abstraction from the virtio device life cycle, i.e. mappings can be
kept intact across virtio device reset. Leverage the .reset_map
callback, which is meant to destroy the iotlb on the given ASID and
recreate the 1:1 passthrough/identity mapping. To be consistent, the
mapping on device creation is initialized to passthrough/identity with
PA 1:1 mapped as IOVA. With this, the device .reset op doesn't have to
maintain and clean up memory mappings by itself.
Additionally, implement .compat_reset to cater for older userspace,
which may expect the mapping to be cleared during reset.
Signed-off-by: Si-Wei Liu <[email protected]>
Tested-by: Stefano Garzarella <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Michael S. Tsirkin <[email protected]>
Tested-by: Lei Yang <[email protected]>
|
|
Since commit 6f5312f80183 ("vdpa/mlx5: Add support for running with
virtio_vdpa"), mlx5_vdpa starts by preallocating a 1:1 DMA MR at device
creation time. This 1:1 DMA MR will be implicitly destroyed when the
first .set_map call is invoked, in which case callers like vhost-vdpa
will start to set up custom mappings. When the .reset callback is
invoked, the custom mappings will be cleared and the 1:1 DMA MR will be
re-created.
In order to reduce the excessive memory mapping cost in live migration,
it is desirable to decouple the vhost-vdpa IOTLB abstraction from the
virtio device life cycle, i.e. mappings can be kept intact across
virtio device reset. Leverage the .reset_map callback, which is meant
to destroy the regular MR (including the cvq mapping) on the given ASID
and recreate the initial DMA mapping. That way, the device .reset op
runs free from having to maintain and clean up memory mappings by
itself.
Additionally, implement .compat_reset to cater for older userspace,
which may expect the mapping to be cleared during reset.
Co-developed-by: Dragos Tatulea <[email protected]>
Signed-off-by: Dragos Tatulea <[email protected]>
Signed-off-by: Si-Wei Liu <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Michael S. Tsirkin <[email protected]>
Tested-by: Lei Yang <[email protected]>
|
|
Using the .compat_reset op from the previous patch, the buggy .reset
behaviour can be kept as-is for older userspace apps which don't ack
the IOTLB_PERSIST backend feature. As this compatibility quirk is
limited to those drivers that used to be buggy in the past, it won't
change the behaviour or affect the ABI on setups with an API-compliant
driver. The separation of .compat_reset from the regular .reset allows
vhost-vdpa to know which drivers had broken behaviour before, so it
can apply the corresponding compatibility quirk to the individual
driver whenever needed. Compared to overloading the existing .reset
with flags, .compat_reset won't cause any extra burden to the
implementation of every compliant driver.
[mst: squashed in two fixup commits]
Message-Id: <[email protected]>
Message-Id: <[email protected]>
Reported-by: Dragos Tatulea <[email protected]>
Tested-by: Dragos Tatulea <[email protected]>
Message-Id: <[email protected]>
Reported-by: Lei Yang <[email protected]>
Signed-off-by: Si-Wei Liu <[email protected]>
Signed-off-by: Michael S. Tsirkin <[email protected]>
Tested-by: Lei Yang <[email protected]>
|
|
Some device-specific IOMMU parent drivers have long-standing bogus
behaviour that mistakenly cleans up the maps during .reset. By
definition, this is a violation of the on-chip IOMMU ops (i.e. .set_map,
or .dma_map & .dma_unmap) in those offending drivers, as the removal of
internal maps is completely invisible to the upper layer, causing an
inconsistent view between userspace and the kernel. Some userspace
apps like QEMU get around this brokenness by proactively removing and
adding back all the maps around vdpa device reset, but such a
workaround actually penalizes other well-behaved driver setups, where
vdpa reset always comes with the associated mapping cost, especially
for kernel vDPA devices (use_va=false) that have a high pinning cost.
It's imperative to rectify this behaviour and remove the problematic
code from all those non-compliant parent drivers.
However, we cannot unconditionally remove the bogus map-cleaning code
from the buggy .reset implementations, as there might exist userspace
apps that already rely on the behaviour on some setups. Introduce a
.compat_reset driver op to keep compatibility with older userspace. New
and well-behaved parent drivers should not bother to implement such an
op; only those drivers that are doing, or used to do, non-compliant
map-cleaning reset will have to.
Signed-off-by: Si-Wei Liu <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Michael S. Tsirkin <[email protected]>
Tested-by: Lei Yang <[email protected]>
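A hedged sketch of how the core can dispatch between the two ops; the
flag and helper names here are assumptions based on this description,
not a verbatim excerpt:

    static int vdpa_reset_maybe_compat(struct vdpa_device *vdev, bool clean_map)
    {
            const struct vdpa_config_ops *ops = vdev->config;

            /* Quirked drivers expose .compat_reset; everyone else
             * takes the regular, map-preserving .reset path. */
            if (clean_map && ops->compat_reset)
                    return ops->compat_reset(vdev, VDPA_RESET_F_CLEAN_MAP);

            return ops->reset(vdev);
    }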
|
|
Userspace needs this feature flag to distinguish whether the vhost-vdpa
iotlb in the kernel can be trusted to persist IOTLB mappings across
vDPA reset. Without it, userspace has no way to tell whether it's
running on an older kernel, which could silently drop all iotlb
mappings across vDPA reset, especially with a broken parent driver
implementation of the .reset driver op. A broken driver may incorrectly
drop all mappings of its own as part of .reset, which inadvertently
ends up with corrupted mapping state between vhost-vdpa userspace and
the kernel. As a workaround, to make the mapping behaviour predictable
across reset, userspace has to proactively remove all mappings before
vDPA reset and then restore all of them afterwards. This workaround is
done unconditionally on top of all parent drivers today, due to the
parent driver implementation issue and the lack of any means to
differentiate. It has been in use in QEMU since day one, when the
corresponding vhost-vdpa userspace backend came into the world.
There are 3 cases in which the backend may claim this feature bit:
- parent device that has to work with the platform IOMMU
- parent device with on-chip IOMMU that has the expected
  .reset_map support in the driver
- parent device with a vendor-specific IOMMU implementation that
  already has persistent IOTLB mappings and has to specifically
  declare this backend feature
The reason why .reset_map is one of the preconditions for persistent
iotlb is that without it, vhost-vdpa can't switch the iotlb back to the
initial state later on, especially for the on-chip IOMMU case, which
starts with an identity mapping at device creation. virtio-vdpa
requires the on-chip IOMMU to perform 1:1 passthrough translation from
PA to IOVA as-is to begin with, and .reset_map is the only means to
turn the iotlb back to identity mapping mode after vhost-vdpa is gone.
The difference in behaviour did not matter as QEMU unmaps all the
memory when unregistering the memory listener at
vhost_vdpa_dev_start(started = false), but the backend acknowledging
this feature flag allows QEMU to make sure it is safe to skip this
unmap & map in the case of a vhost stop & start cycle.
In that sense, this feature flag is actually a signal for userspace to
know that the driver bug has been solved. Not offering it indicates
that userspace cannot trust the kernel to retain the maps.
Signed-off-by: Si-Wei Liu <[email protected]>
Acked-by: Eugenio Pérez <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Michael S. Tsirkin <[email protected]>
Tested-by: Lei Yang <[email protected]>
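A hedged userspace sketch: skip the unmap/remap workaround only when
the kernel acks the persistence feature (ioctl and flag from the vhost
uAPI; error handling elided, function name illustrative):

    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <linux/vhost.h>

    static int iotlb_persists(int vhost_fd)
    {
            uint64_t features = 0;

            if (ioctl(vhost_fd, VHOST_GET_BACKEND_FEATURES, &features))
                    return 0;
            /* mappings survive vDPA reset only when this bit is acked */
            return !!(features & (1ULL << VHOST_BACKEND_F_IOTLB_PERSIST));
    }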
|
|
Devices with an on-chip IOMMU or a vendor-specific IOTLB implementation
may need to restore the iotlb mapping to the initial or default state
using the .reset_map op, as it's desirable for some parent devices to
not work with DMA ops and instead maintain a simple IOMMU model with
.reset_map. In particular, device reset should not cause the mapping to
go away on such an IOTLB model, so persistent mapping is implied across
reset. Before the userspace process using vhost-vdpa is gone, give it a
chance to reset the iotlb back to the initial state in
vhost_vdpa_cleanup().
Signed-off-by: Si-Wei Liu <[email protected]>
Acked-by: Eugenio Pérez <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Michael S. Tsirkin <[email protected]>
Tested-by: Lei Yang <[email protected]>
|
|
Some device-specific IOMMU parent drivers have long-standing bogus
behavior that mistakenly cleans up the maps during .reset. By
definition, this is a violation of the on-chip IOMMU ops (i.e. .set_map,
or .dma_map & .dma_unmap) in those offending drivers, as the removal of
internal maps is completely invisible to the upper layer, causing an
inconsistent view between userspace and the kernel. Some userspace apps
like QEMU get around this brokenness by proactively removing and adding
back all the maps around vdpa device reset, but such a workaround
actually penalizes other well-behaved driver setups, where vdpa reset
always comes with the associated mapping cost, especially for kernel
vDPA devices (use_va=false) that have a high pinning cost. It's
imperative to rectify this behavior and remove the problematic code
from all those non-compliant parent drivers.
The reason why a separate .reset_map op is introduced is that it allows
a simple on-chip IOMMU model without exposing too much device
implementation detail to the upper vdpa layer. The .dma_map/unmap or
.set_map driver API is meant to be used to manipulate the IOTLB
mappings, and has been abstracted in a way similar to how a real IOMMU
device maps or unmaps pages for certain memory ranges. However, apart
from this there also exist other mapping needs, in which case a 1:1
passthrough mapping has to be used by other users (read: virtio-vdpa).
To ease parent/vendor driver implementation and to avoid abusing DMA
ops in an unexpected way, these on-chip IOMMU devices can start in 1:1
passthrough mapping mode initially at the time of creation. The
.reset_map op can then be used to switch the iotlb back to this initial
state without having to expose a complex two-dimensional IOMMU device
model.
The .reset_map op is not a MUST for every parent that implements the
.dma_map or .set_map API, because a device may work with DMA ops
directly by implementing its own way of manipulating system memory
mappings, and so doesn't have to use .reset_map to achieve a simple
IOMMU device model for 1:1 passthrough mapping.
Signed-off-by: Si-Wei Liu <[email protected]>
Acked-by: Eugenio Pérez <[email protected]>
Acked-by: Jason Wang <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Michael S. Tsirkin <[email protected]>
Tested-by: Lei Yang <[email protected]>
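A hedged sketch of a parent driver implementing the new op (the helper
names are hypothetical): tear down the maps for the ASID and restore
the initial 1:1 passthrough mapping.

    static int my_vdpa_reset_map(struct vdpa_device *vdev, unsigned int asid)
    {
            struct my_vdpa *mv = vdpa_to_my_vdpa(vdev);     /* hypothetical */

            /* drop all maps for this ASID, then rebuild identity 1:1 */
            my_vdpa_prune_maps(mv, asid);                   /* hypothetical */
            return my_vdpa_create_identity_map(mv, asid);   /* hypothetical */
    }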
|
|
On some buggy devices, the common cfg size may not match the features.
This patch checks the common cfg size against the features
(VIRTIO_F_NOTIF_CONFIG_DATA, VIRTIO_F_RING_RESET). When the common cfg
size does not match the corresponding feature, we fail the probe and
print an error message.
Signed-off-by: Xuan Zhuo <[email protected]>
Acked-by: Jason Wang <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Michael S. Tsirkin <[email protected]>
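A hedged sketch of the probe-time validation described (structure and
field names per the virtio uAPI headers; the surrounding variables are
illustrative, not the patch's exact code):

    if ((features & BIT_ULL(VIRTIO_F_RING_RESET)) &&
        mdev->common_len < offsetofend(struct virtio_pci_modern_common_cfg,
                                       queue_reset)) {
            dev_err(&pci_dev->dev,
                    "common cfg size does not match VIRTIO_F_RING_RESET\n");
            return -EINVAL;
    }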
|
|
The following code has an implicit conversion from size_t to u32:
(u32)max_size = (size_t)virtio_max_dma_size(vdev);
This may overflow, e.g. (size_t)4G -> (u32)0. If virtio_max_dma_size()
returns a size larger than U32_MAX, use U32_MAX instead.
Signed-off-by: zhenwei pi <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Michael S. Tsirkin <[email protected]>
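The fix's shape, as a hedged sketch (surrounding context illustrative):
clamp before the size_t -> u32 narrowing so 4G no longer truncates to 0.

    u32 max_size;
    size_t max_dma = virtio_max_dma_size(vdev);

    /* never assign a value above U32_MAX into the u32 */
    max_size = min_t(size_t, max_dma, U32_MAX);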
|
|
Add checks to check_offsets(void) for queue_notify_data and
queue_reset.
Signed-off-by: Xuan Zhuo <[email protected]>
Acked-by: Jason Wang <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Michael S. Tsirkin <[email protected]>
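A hedged sketch of the added checks, in the compile-time BUILD_BUG_ON
style that check_offsets() already uses (offsets per the virtio 1.2
common cfg layout; the kernel's constant names may differ):

    BUILD_BUG_ON(offsetof(struct virtio_pci_modern_common_cfg,
                          queue_notify_data) != 0x38);
    BUILD_BUG_ON(offsetof(struct virtio_pci_modern_common_cfg,
                          queue_reset) != 0x3a);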
|
|
This patch adds the definition of VIRTIO_F_NOTIF_CONFIG_DATA feature bit
in the relevant header file.
This feature indicates that the driver uses the data provided by the
device as a virtqueue identifier in available buffer notifications.
It comes from here:
https://github.com/oasis-tcs/virtio-spec/issues/89
Signed-off-by: Xuan Zhuo <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Michael S. Tsirkin <[email protected]>
Acked-by: Jason Wang <[email protected]>
|
|
Now that the driver core allows for struct class to be in read-only
memory, we should move all 'class' structures declared at build time
into read-only memory, instead of having them dynamically allocated at
runtime.
Cc: "Michael S. Tsirkin" <[email protected]>
Cc: Jason Wang <[email protected]>
Cc: Xuan Zhuo <[email protected]>
Cc: Xie Yongji <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Message-Id: <2023100643-tricolor-citizen-6c2d@gregkh>
Signed-off-by: Michael S. Tsirkin <[email protected]>
Reviewed-by: Xie Yongji <[email protected]>
Acked-by: Jason Wang <[email protected]>
|
|
Fix a misspelling of "preceding".
Signed-off-by: Geert Uytterhoeven <[email protected]>
Message-Id: <b57b882675809f1f9dacbf42cf6b920b2bea9cba.1695903476.git.geert+renesas@glider.be>
Signed-off-by: Michael S. Tsirkin <[email protected]>
Reviewed-by: Simon Horman <[email protected]>
Reviewed-by: Stefan Hajnoczi <[email protected]>
|
|
Finally following up on Simon's suggestion for some kdoc attention
on struct virtio_pci_modern_device.
Link: https://lore.kernel.org/netdev/ZE%[email protected]/
Cc: Simon Horman <[email protected]>
Signed-off-by: Shannon Nelson <[email protected]>
Acked-by: Eugenio Pérez <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Michael S. Tsirkin <[email protected]>
Acked-by: Jason Wang <[email protected]>
|
|
Fix the wrong drivers_autoprobe path name in the documentation.
Signed-off-by: Shawn.Shao <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Michael S. Tsirkin <[email protected]>
Acked-by: Jason Wang <[email protected]>
|
|
As Eli Cohen moved to other work, I'll be the contact point for
mlx5_vdpa.
Acked-by: Jason Wang <[email protected]>
Signed-off-by: Dragos Tatulea <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Michael S. Tsirkin <[email protected]>
|
|
After commit 68f2736a8583 ("mm: Convert all PageMovable users to
movable_operations"), the execution path has been changed to:

    move_to_new_folio
      movable_operations->migrate_page
        balloon_page_migrate
          balloon_page_migrate->balloon_page_migrate
            balloon_page_migrate

Correct the outdated comment.
Signed-off-by: Xueshi Hu <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Michael S. Tsirkin <[email protected]>
Reviewed-by: David Hildenbrand <[email protected]>
Reviewed-by: Xuan Zhuo <[email protected]>
|
|
Offer this backend feature, as mlx5 is compatible with it. This allows
live migration with CVQ, dynamically switching between passthrough
and shadow virtqueue.
Signed-off-by: Eugenio Pérez <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Michael S. Tsirkin <[email protected]>
|
|
For the following sequence:
- cvq group is in ASID 0
- .set_map(1, cvq_iotlb)
- .set_group_asid(cvq_group, 1)
... the cvq mapping from ASID 0 will be used. This is not always correct
behaviour.
This patch adds support for the above mentioned flow by saving the iotlb
on each .set_map and updating the cvq iotlb with it on a cvq group change.
Acked-by: Jason Wang <[email protected]>
Acked-by: Eugenio Pérez <[email protected]>
Signed-off-by: Dragos Tatulea <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Michael S. Tsirkin <[email protected]>
Reviewed-by: Si-Wei Liu <[email protected]>
Tested-by: Si-Wei Liu <[email protected]>
Tested-by: Lei Yang <[email protected]>
|
|
They will be used in a follow-up patch.
For dup_iotlb, avoid the src == dst case, which is an error.
Acked-by: Jason Wang <[email protected]>
Acked-by: Eugenio Pérez <[email protected]>
Signed-off-by: Dragos Tatulea <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Michael S. Tsirkin <[email protected]>
Reviewed-by: Si-Wei Liu <[email protected]>
Tested-by: Si-Wei Liu <[email protected]>
Tested-by: Lei Yang <[email protected]>
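A hedged sketch of a dup_iotlb built on the vhost_iotlb API (the mlx5
helper's actual signature and details may differ):

    static int dup_iotlb(struct vhost_iotlb *dst, struct vhost_iotlb *src)
    {
            struct vhost_iotlb_map *map;
            int err;

            /* the src == dst case is an error, per the commit message */
            if (dst == src)
                    return -EINVAL;

            /* replay every mapping of src into dst */
            for (map = vhost_iotlb_itree_first(src, 0, ULLONG_MAX); map;
                 map = vhost_iotlb_itree_next(map, 0, ULLONG_MAX)) {
                    err = vhost_iotlb_add_range(dst, map->start, map->last,
                                                map->addr, map->perm);
                    if (err)
                            return err;
            }
            return 0;
    }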
|
|
Vq descriptor mappings are supported in hardware by filling in an
additional mkey which contains the descriptor mappings to the hw vq.
A previous patch in this series added support for hw mkey (mr) creation
for ASID 1.
This patch fills in both the vq data and vq descriptor mkeys based on
group ASID mapping.
The feature is signaled to the vdpa core through the presence of the
.get_vq_desc_group op.
Acked-by: Jason Wang <[email protected]>
Acked-by: Eugenio Pérez <[email protected]>
Signed-off-by: Dragos Tatulea <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Michael S. Tsirkin <[email protected]>
Reviewed-by: Si-Wei Liu <[email protected]>
Tested-by: Si-Wei Liu <[email protected]>
Tested-by: Lei Yang <[email protected]>
|
|
Introduce the vq descriptor group and mr per ASID. Until now
.set_map on ASID 1 was only updating the cvq iotlb. From now on it also
creates a mkey for it. The current patch doesn't use it but follow-up
patches will add hardware support for mapping the vq descriptors.
Acked-by: Jason Wang <[email protected]>
Acked-by: Eugenio Pérez <[email protected]>
Signed-off-by: Dragos Tatulea <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Michael S. Tsirkin <[email protected]>
Reviewed-by: Si-Wei Liu <[email protected]>
Tested-by: Si-Wei Liu <[email protected]>
Tested-by: Lei Yang <[email protected]>
|
|
The current flow for updating an mr works directly on mvdev->mr which
makes it cumbersome to handle multiple new mr structs.
This patch makes the flow more straightforward by having
mlx5_vdpa_create_mr return a new mr which will update the old mr (if
any). The old mr will be deleted and unlinked from mvdev. For the case
when the iotlb is empty (not NULL), the old mr will be cleared.
This change paves the way for adding mrs for different ASIDs.
The initialized bool is no longer needed as mr is now a pointer in the
mlx5_vdpa_dev struct which will be NULL when not initialized.
Acked-by: Eugenio Pérez <[email protected]>
Acked-by: Jason Wang <[email protected]>
Signed-off-by: Dragos Tatulea <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Michael S. Tsirkin <[email protected]>
Reviewed-by: Si-Wei Liu <[email protected]>
Tested-by: Si-Wei Liu <[email protected]>
Tested-by: Lei Yang <[email protected]>
|
|
The mutex is named as if it protects only the mkey, but in reality it
is a global lock for all mr resources.
Shift the mutex to its rightful location (struct mlx5_vdpa_dev) and
give it a more appropriate name.
Signed-off-by: Dragos Tatulea <[email protected]>
Acked-by: Eugenio Pérez <[email protected]>
Acked-by: Jason Wang <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Michael S. Tsirkin <[email protected]>
Reviewed-by: Si-Wei Liu <[email protected]>
Tested-by: Si-Wei Liu <[email protected]>
Tested-by: Lei Yang <[email protected]>
|
|
This patch adapts the mr creation/deletion code to be able to work with
any given mr struct pointer. All the APIs are adapted to take an extra
parameter for the mr.
mlx5_vdpa_create/delete_mr doesn't need an ASID parameter anymore. The
check is done in the caller instead (mlx5_set_map).
This change is needed for a follow-up patch which will introduce an
additional mr for the vq descriptor data.
Signed-off-by: Dragos Tatulea <[email protected]>
Acked-by: Eugenio Pérez <[email protected]>
Acked-by: Jason Wang <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Michael S. Tsirkin <[email protected]>
Reviewed-by: Si-Wei Liu <[email protected]>
Tested-by: Si-Wei Liu <[email protected]>
Tested-by: Lei Yang <[email protected]>
|
|
Make mlx5_destroy_mr symmetric to mlx5_create_mr.
Acked-by: Jason Wang <[email protected]>
Acked-by: Eugenio Pérez <[email protected]>
Signed-off-by: Dragos Tatulea <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Michael S. Tsirkin <[email protected]>
Reviewed-by: Si-Wei Liu <[email protected]>
Tested-by: Si-Wei Liu <[email protected]>
Tested-by: Lei Yang <[email protected]>
|
|
Now that the cvq code is out of mlx5_vdpa_create/destroy_mr, the "dvq"
functions can be folded into their callers.
Having "dvq" in the naming will no longer be accurate in the downstream
patches.
Acked-by: Jason Wang <[email protected]>
Acked-by: Eugenio Pérez <[email protected]>
Signed-off-by: Dragos Tatulea <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Michael S. Tsirkin <[email protected]>
Reviewed-by: Si-Wei Liu <[email protected]>
Tested-by: Si-Wei Liu <[email protected]>
Tested-by: Lei Yang <[email protected]>
|
|
The reslock is taken while refresh is called but iommu_lock is more
specific to this resource. So take the iommu_lock during cvq iotlb
refresh.
Based on Eugenio's patch [0].
[0] https://lore.kernel.org/lkml/[email protected]/
Acked-by: Jason Wang <[email protected]>
Suggested-by: Eugenio Pérez <[email protected]>
Reviewed-by: Eugenio Pérez <[email protected]>
Signed-off-by: Dragos Tatulea <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Michael S. Tsirkin <[email protected]>
Reviewed-by: Si-Wei Liu <[email protected]>
Tested-by: Si-Wei Liu <[email protected]>
Tested-by: Lei Yang <[email protected]>
|
|
The handling of the cvq iotlb is currently coupled with the creation
and destruction of the hardware mkeys (mr).
This patch moves cvq iotlb handling into its own function and shifts it
to a scope that is not related to mr handling. As cvq handling is just a
prune_iotlb + dup_iotlb cycle, put it all in the same "update" function.
Finally, the destruction path is handled by directly pruning the iotlb.
After this move is done the ASID mr code can be collapsed into a single
function.
Acked-by: Jason Wang <[email protected]>
Acked-by: Eugenio Pérez <[email protected]>
Signed-off-by: Dragos Tatulea <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Michael S. Tsirkin <[email protected]>
Reviewed-by: Si-Wei Liu <[email protected]>
Tested-by: Si-Wei Liu <[email protected]>
Tested-by: Lei Yang <[email protected]>
|
|
Necessary for upcoming cvq separation from mr allocation.
Acked-by: Jason Wang <[email protected]>
Signed-off-by: Dragos Tatulea <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Michael S. Tsirkin <[email protected]>
Reviewed-by: Si-Wei Liu <[email protected]>
Tested-by: Si-Wei Liu <[email protected]>
Tested-by: Lei Yang <[email protected]>
|
|
With _F_DESC_ASID backend feature, the device can now support the
VHOST_VDPA_GET_VRING_DESC_GROUP ioctl, and it may expose the descriptor
table (including avail and used ring) in a different group than the
buffers it contains. This new uAPI will fetch the group ID of the
descriptor table.
Signed-off-by: Si-Wei Liu <[email protected]>
Acked-by: Eugenio Pérez <[email protected]>
Acked-by: Jason Wang <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Michael S. Tsirkin <[email protected]>
Reviewed-by: Si-Wei Liu <[email protected]>
Tested-by: Si-Wei Liu <[email protected]>
Tested-by: Lei Yang <[email protected]>
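A hedged userspace sketch: query the descriptor group of vq 0,
mirroring how VHOST_VDPA_GET_VRING_GROUP is used (error handling
elided):

    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <linux/vhost.h>

    /* the group ID is returned in .num, as with GET_VRING_GROUP */
    struct vhost_vring_state state = { .index = 0 };

    if (ioctl(vhost_fd, VHOST_VDPA_GET_VRING_DESC_GROUP, &state) == 0)
            printf("vq 0 descriptor table group: %u\n", state.num);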
|
|
Userspace knows whether the device has a dedicated descriptor group or
not by checking this feature bit.
It's only exposed if the vdpa driver backend implements the
.get_vq_desc_group() operation callback. Userspace trying to negotiate
this feature when it or the dependent _F_IOTLB_ASID feature hasn't
been exposed will result in an error.
Signed-off-by: Si-Wei Liu <[email protected]>
Acked-by: Eugenio Pérez <[email protected]>
Acked-by: Jason Wang <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Michael S. Tsirkin <[email protected]>
Reviewed-by: Si-Wei Liu <[email protected]>
Tested-by: Si-Wei Liu <[email protected]>
Tested-by: Lei Yang <[email protected]>
|
|
In some cases, access to the virtqueue's descriptor area, device area
and driver area (excluding the indirect descriptor table in guest
memory) may have to be confined to a different address space than the
one where its buffers reside. Without loss of simplicity and generality
with already established terminology, let's fold up these 3 areas and
call them as a whole the descriptor table group, or descriptor group
for short. Specifically, in the case of split virtqueues, the
descriptor group consists of regions for the Descriptor Table,
Available Ring and Used Ring; for the packed virtqueue layout, the
descriptor group contains the Descriptor Ring, Driver and Device Event
Suppression structures.
The group ID for a dedicated descriptor group can be obtained through a
new .get_vq_desc_group() op. If a driver implements this op, it means
that the descriptor, device and driver areas of the virtqueue may
reside in a different group than the one where its buffers reside,
a.k.a the default virtqueue group obtained through the .get_vq_group()
op.
In principle, the descriptor group may or may not have the same group
ID as the default group. Even if the descriptor group has a different
ID, meaning the vq's descriptor group areas can optionally move to a
separate address space from the one where guest memory resides, the
descriptor group may still start out in the default address space, the
same as where its buffers reside. To move the descriptor group to a
different address space, .set_group_asid() has to be called to change
the ASID binding for the group, which is no different from what needs
to be done on any other virtqueue group. On the other hand, the
.reset() semantics also apply to the descriptor table group, meaning
the device reset will clear all ASID bindings and move all virtqueue
groups, including the descriptor group, back to the default address
space, i.e. ASID 0.
QEMU's shadow virtqueue is going to utilize the dedicated descriptor
group to speed up map and unmap operations, yielding a tremendous
downtime reduction by avoiding the full and slow remap cycle in SVQ
switching.
Signed-off-by: Si-Wei Liu <[email protected]>
Acked-by: Eugenio Pérez <[email protected]>
Acked-by: Jason Wang <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Michael S. Tsirkin <[email protected]>
Reviewed-by: Si-Wei Liu <[email protected]>
Tested-by: Si-Wei Liu <[email protected]>
Tested-by: Lei Yang <[email protected]>
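A hedged sketch of a parent driver exposing a dedicated descriptor
group through the new op (the group constant is illustrative; buffers
stay in the default group reported by .get_vq_group()):

    static u32 my_vdpa_get_vq_desc_group(struct vdpa_device *vdev, u16 idx)
    {
            /* descriptor/device/driver areas live in their own group */
            return MY_VDPA_DESC_GROUP;      /* hypothetical constant */
    }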
|
|
When DA7219 is suspended, prevent the AAD IRQ handler from being
unexpectedly executed and causing an I2C driver "Transfer while
suspended" failure.
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Mark Brown <[email protected]>
|
|
The value of vsense_select should be either 32
or 0 in both cases, so modify the
AW88399_DEV_VDSEL_VSENSE macro to 32.
Signed-off-by: Weidong Wang <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Mark Brown <[email protected]>
|
|
An error code should be returned when re is greater than the maximum
value or less than the minimum value.
Signed-off-by: Weidong Wang <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Mark Brown <[email protected]>
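A hedged sketch of the added bounds check (the macro names are
illustrative, not taken from the driver):

    if (re < AW88399_RE_MIN || re > AW88399_RE_MAX)
            return -EINVAL;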
|
|
The maximum value that calib can be set to should be consistent with
the maximum value of re.
Signed-off-by: Weidong Wang <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Mark Brown <[email protected]>
|
|
https://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux.git
This merges a single commit that contains changes to mlx5_ifc.h.
It's required to support vq descriptor mappings in mlx5/vdpa.
Signed-off-by: Michael S. Tsirkin <[email protected]>
|
|
lp55xx_write() can return an error code; add a check for this.
Signed-off-by: Su Hui <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Lee Jones <[email protected]>
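The pattern applied, as a hedged sketch (the call site, register and
value are illustrative): propagate the write error instead of ignoring
it.

    ret = lp55xx_write(chip, reg, val);
    if (ret)
            return ret;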
|
|
Include the headers we are direct users of; there is no need to rely
on proxies.
Signed-off-by: Andy Shevchenko <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Lee Jones <[email protected]>
|