|
git://git.kernel.org/pub/scm/linux/kernel/git/kvms390/linux into HEAD
* More selftests
* Improved KVM_S390_MEM_OP ioctl input checking
* Add kvm_valid_regs and kvm_dirty_regs invalid bit checking
|
|
The hrtimer that is used to emulate the lapic timer is stopped during
vcpu reset; the preemption timer should do the same.
Cc: Paolo Bonzini <[email protected]>
Cc: Radim Krčmář <[email protected]>
Signed-off-by: Wanpeng Li <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
|
|
This patch optimizes the virtual IPI emulation sequence:
   write ICR2                      write ICR2
   write ICR                       read ICR2
   read ICR             ==>        send virtual IPI
   read ICR2                       write ICR
   send virtual IPI
It can reduce kvm-unit-tests/vmexit.flat IPI testing latency (from the
sender sending the IPI to the sender receiving the ACK) from 3319 cycles
to 3203 cycles on a Skylake server.
Cc: Paolo Bonzini <[email protected]>
Cc: Radim Krčmář <[email protected]>
Signed-off-by: Wanpeng Li <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
|
|
On AMD processors, in PAE 32-bit mode, nested KVM instances don't
work. The L0 host gets a kernel oops, which is related to
arch.mmu->pae_root being NULL.
The reason for this is that when setting up a nested KVM instance,
arch.mmu is set to &arch.guest_mmu (while normally it would be
&arch.root_mmu). However, the initialization and allocation of
pae_root only creates it in root_mmu. KVM code (i.e. in
mmu_alloc_shadow_roots) then accesses arch.mmu->pae_root, which is the
unallocated arch.guest_mmu->pae_root.
This fix simply allocates (and frees) pae_root in both guest_mmu and
root_mmu (and also lm_root if it was allocated). The allocation is
subject to the previous restrictions, i.e. it won't allocate anything
on 64-bit and, AFAIK, not on Intel.
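A minimal sketch of the shape of the fix (illustrative example_ names;
the real mmu code has more bookkeeping and only allocates when PAE roots
are actually needed):

  #include <linux/gfp.h>
  #include <linux/kvm_host.h>
  #include <linux/mm.h>

  /*
   * Allocate the PAE root table for one mmu context; the fix calls this
   * for both root_mmu and guest_mmu, so a nested setup that points
   * arch.mmu at &arch.guest_mmu finds pae_root allocated as well.
   */
  static int example_alloc_pae_root(struct kvm_mmu *mmu)
  {
          /* PAE roots must sit below 4G, hence __GFP_DMA32. */
          struct page *page = alloc_page(GFP_KERNEL_ACCOUNT | __GFP_DMA32);

          if (!page)
                  return -ENOMEM;
          mmu->pae_root = page_address(page);
          return 0;
  }

  static int example_mmu_create(struct kvm_vcpu *vcpu)
  {
          int ret = example_alloc_pae_root(&vcpu->arch.root_mmu);

          if (ret)
                  return ret;
          /* Previously only root_mmu was covered; guest_mmu needs it too. */
          return example_alloc_pae_root(&vcpu->arch.guest_mmu);
  }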
Fixes: https://bugzilla.kernel.org/show_bug.cgi?id=203923
Fixes: 14c07ad89f4d ("x86/kvm/mmu: introduce guest_mmu")
Signed-off-by: Jiri Palecek <[email protected]>
Tested-by: Jiri Palecek <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
|
|
x86_emulate_instruction() takes the ctxt->have_exception flag into
account during instruction decoding, but in practice this flag is never
set by x86_decode_insn().
Fixes: 6ea6e84309ca ("KVM: x86: inject exceptions produced by x86_decode_insn")
Cc: [email protected]
Cc: Denis Lunev <[email protected]>
Cc: Roman Kagan <[email protected]>
Cc: Denis Plotnikov <[email protected]>
Signed-off-by: Jan Dakinevich <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
|
|
inject_emulated_exception() returns true if and only if a nested page
fault happens. However, a page fault can come from a guest page table
walk, either nested or not nested. In both cases we should stop the
attempt to fetch the instruction at RIP and let the guest handle the
fault with its own page fault handler.
This is also visible when an emulated instruction causes a #GP fault
and the VMware backdoor is enabled. To handle the VMware backdoor,
KVM intercepts #GP faults; with only the next patch applied,
x86_emulate_instruction() injects a #GP but returns EMULATE_FAIL
instead of EMULATE_DONE. EMULATE_FAIL causes handle_exception_nmi()
(or gp_interception() for SVM) to re-inject the original #GP because it
thinks emulation failed due to a non-VMware opcode. This patch prevents
the issue as x86_emulate_instruction() will return EMULATE_DONE after
injecting the #GP.
Fixes: 6ea6e84309ca ("KVM: x86: inject exceptions produced by x86_decode_insn")
Cc: [email protected]
Cc: Denis Lunev <[email protected]>
Cc: Roman Kagan <[email protected]>
Cc: Denis Plotnikov <[email protected]>
Signed-off-by: Jan Dakinevich <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
|
|
The downside of guest-side polling is that polling is performed even
when there are other runnable tasks in the host. However, even if
polling in KVM were aware of whether there are other runnable tasks on
the same pCPU, it could still incur extra overhead in an over-subscribed
scenario. Now we can just enable guest polling when dedicated pCPUs are
available.
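A rough sketch of the guest-side condition this implies, assuming the
x86 KVM paravirt hint interface (the exact placement of the check in the
driver may differ):

  #include <linux/errno.h>
  #include <linux/kvm_para.h>

  /*
   * Only allow haltpoll-style guest polling when running on KVM and the
   * host advertises dedicated pCPUs via KVM_HINTS_REALTIME; otherwise
   * polling just burns cycles in an over-subscribed host.
   */
  static int example_haltpoll_check(void)
  {
          if (!kvm_para_available())
                  return -ENODEV;
          if (!kvm_para_has_hint(KVM_HINTS_REALTIME))
                  return -ENODEV;
          return 0;
  }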
Acked-by: Paolo Bonzini <[email protected]>
Signed-off-by: Wanpeng Li <[email protected]>
Signed-off-by: Rafael J. Wysocki <[email protected]>
|
|
cpuidle-haltpoll can be built as a module to allow optional late loading.
Given we are setting @owner to THIS_MODULE, cpuidle will attempt to grab a
module reference every time a cpuidle_device is registered -- so
essentially all online CPUs get a reference.
This prevents the module from being unloaded later, which makes the
module_exit callback entirely unused. Thus remove the @owner and allow the
module to be unloaded.
Fixes: fa86ee90eb11 ("add cpuidle-haltpoll driver")
Signed-off-by: Joao Martins <[email protected]>
Signed-off-by: Rafael J. Wysocki <[email protected]>
|
|
When a user loads cpuidle-haltpoll on a non-KVM guest, the module will
load successfully even though idle driver registration didn't take
place.
We should instead return -ENODEV, signaling to the user that the driver
can't be loaded, like other error paths in haltpoll_init(). An example of
such an error path is returning -EBUSY when attempting to register an
idle driver while another one is already registered (e.g. intel_idle
loads at boot and then we attempt to insert the cpuidle-haltpoll module).
Fixes: fa86ee90eb11 ("add cpuidle-haltpoll driver")
Signed-off-by: Joao Martins <[email protected]>
Signed-off-by: Rafael J. Wysocki <[email protected]>
|
|
Right now, the governors available in a guest have the following ratings:
* ladder -> 10
* teo -> 19
* menu -> 20
* haltpoll -> 21
* ladder + nohz=off -> 25
The haltpoll governor was recently introduced and is now the default
governor given its highest rating -- with ladder+nohz=off being the
exception -- regardless of the idle driver used in the guest. An example
of an undesirable case is x86 KVM guests with MWAIT, which have intel_idle
registered first and consequently end up with haltpoll as the governor;
it would be limited to the poll state and state 1, and the other states
wouldn't get used.
To keep the previous defaults, we decrease the governor's rating to 9
(below the current lowest rating) and thus rely on the @governor switch in
cpuidle_register_driver() to tie the haltpoll idle driver and governor
together.
Signed-off-by: Joao Martins <[email protected]>
Signed-off-by: Rafael J. Wysocki <[email protected]>
|
|
The recently introduced haltpoll driver is largely only useful with the
haltpoll governor. To allow drivers to associate with a particular idle
behaviour, add a @governor property to 'struct cpuidle_driver' and thus
allow a cpuidle driver to switch to a *preferred* governor on idle driver
registration. We save the previous governor, and when the idle driver is
unregistered we switch back to it.
The @governor can be overridden by the cpuidle.governor= boot parameter,
or is simply ignored if the named governor doesn't exist.
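A hedged sketch of how a driver would use such a property (trimmed to
the relevant field; idle states and registration error handling are
omitted):

  #include <linux/cpuidle.h>

  static struct cpuidle_driver example_haltpoll_driver = {
          .name     = "haltpoll",
          /*
           * Preferred governor: the cpuidle core switches to it in
           * cpuidle_register_driver() and restores the previous governor
           * on unregistration. cpuidle.governor= still overrides this.
           */
          .governor = "haltpoll",
          /* .states[] and .state_count omitted in this sketch */
  };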
Signed-off-by: Joao Martins <[email protected]>
Signed-off-by: Rafael J. Wysocki <[email protected]>
|
|
Use the recently added tracepoint for logging nested VM-Enter failures
instead of spamming the kernel log when hardware detects a consistency
check failure. Take the opportunity to print the name of the error code
instead of dumping the raw hex number, but limit the symbol table to
error codes that can reasonably be encountered by KVM.
Add an equivalent tracepoint in nested_vmx_check_vmentry_hw(), e.g. so
that tracing of "invalid control field" errors isn't suppressed when
nested early checks are enabled.
Signed-off-by: Sean Christopherson <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
|
|
Debugging a failed VM-Enter is often like searching for a needle in a
haystack, e.g. there are over 80 consistency checks that funnel into
the "invalid control field" error code. One way to expedite debug is
to run the buggy code as an L1 guest under KVM (and pray that the
failing check is detected by KVM). However, extracting useful debug
information out of L0 KVM requires attaching a debugger to KVM and/or
modifying the source, e.g. to log which check is failing.
Make life a little less painful for VMM developers and add a tracepoint
for failed VM-Enter consistency checks. Ideally the tracepoint would
capture both what check failed and precisely why it failed, but logging
why a check failed is difficult to do in a generic tracepoint without
resorting to invasive techniques, e.g. generating a custom string on
failure. That being said, for the vast majority of VM-Enter failures
the most difficult step is figuring out exactly what to look at, e.g.
figuring out which bit was incorrectly set in a control field is usually
not too painful once the guilty field has been identified.
To reach a happy medium between precision and ease of use, simply log
the code that detected a failed check, using a macro to execute the
check and log the trace event on failure. This approach enables tracing
arbitrary code, e.g. it's not limited to function calls or specific
formats of checks, and the changes to the existing code are minimally
invasive. A macro with a two-character name is desirable as usage of
the macro doesn't result in overly long lines or confusing alignment,
while still retaining some amount of readability. I.e. a one-character
name is a little too terse, and a three-character name results in the
contents being passed to the macro aligning with an indented line when
the macro is used in an if-statement, e.g.:
        if (VCC(nested_vmx_check_long_line_one(...) &&
                nested_vmx_check_long_line_two(...)))
                return -EINVAL;
And that is the story of how the CC(), a.k.a. Consistency Check, macro
got its name.
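A minimal sketch of such a macro, assuming a tracepoint along the lines
of trace_kvm_nested_vmenter_failed() added by the previous patch (the
exact event name and arguments may differ):

  #define CC(check)                                                     \
  ({                                                                    \
          bool failed = (check);                                        \
          if (failed)                                                   \
                  trace_kvm_nested_vmenter_failed(#check, 0);           \
          failed;                                                       \
  })

  /*
   * Typical usage inside a consistency-check helper that returns
   * -EINVAL on the first failed check, e.g.:
   *
   *      if (CC(vmcs12->guest_activity_state != GUEST_ACTIVITY_ACTIVE))
   *              return -EINVAL;
   */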
Signed-off-by: Sean Christopherson <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
|
|
We refactored this code a bit and accidentally deleted the "-" character
from "-EINVAL". The kvm_vcpu_map() function never returns positive
EINVAL.
Fixes: c8e16b78c614 ("x86: KVM: svm: eliminate hardcoded RIP advancement from vmrun_interception()")
Cc: [email protected]
Signed-off-by: Dan Carpenter <[email protected]>
Reviewed-by: Vitaly Kuznetsov <[email protected]>
Reviewed-by: Sean Christopherson <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
|
|
The BCM2835 SPI driver currently sets the SPI_CONTROLLER_MUST_TX flag.
When performing an RX-only transfer, this flag causes the SPI core to
allocate and DMA-map a dummy buffer which is copied to the TX FIFO.
The dummy buffer is necessary because the chip is not capable of
automatically clocking out null bytes.
Avoid the overhead induced by the dummy buffer by preallocating a
reusable DMA transaction which fills the TX FIFO by cyclically copying
from the zero page. The transaction requires very little CPU time to
submit and generates no interrupts while running. Specifics are
provided in kerneldoc comments.
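A rough sketch of the approach using the generic dmaengine API
(illustrative and simplified; the driver's actual setup differs in
details such as DMA mapping attributes and FIFO addressing):

  #include <linux/dma-mapping.h>
  #include <linux/dmaengine.h>
  #include <linux/mm.h>

  /*
   * Map the zero page once and prepare a reusable cyclic transaction
   * that keeps copying it to the TX FIFO. No DMA_PREP_INTERRUPT is
   * requested, so the transfer runs without raising interrupts, and the
   * descriptor is marked reusable so it is prepared only once and then
   * resubmitted for every RX-only transfer.
   */
  static struct dma_async_tx_descriptor *
  example_prepare_tx_fill(struct dma_chan *tx_chan, dma_addr_t *zero_dma)
  {
          struct device *dev = tx_chan->device->dev;
          struct dma_async_tx_descriptor *desc;

          *zero_dma = dma_map_page(dev, ZERO_PAGE(0), 0, sizeof(u32),
                                   DMA_TO_DEVICE);
          if (dma_mapping_error(dev, *zero_dma))
                  return NULL;

          desc = dmaengine_prep_dma_cyclic(tx_chan, *zero_dma, sizeof(u32),
                                           sizeof(u32), DMA_MEM_TO_DEV, 0);
          if (!desc || dmaengine_desc_set_reuse(desc)) {
                  dma_unmap_page(dev, *zero_dma, sizeof(u32), DMA_TO_DEVICE);
                  return NULL;
          }
          return desc;
  }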
[Nathan Chancellor contributed a DMA mapping fixup for an early version
of this commit, hence his Signed-off-by.]
Tested-by: Nuno Sá <[email protected]>
Tested-by: Noralf Trønnes <[email protected]>
Signed-off-by: Nathan Chancellor <[email protected]>
Signed-off-by: Lukas Wunner <[email protected]>
Acked-by: Stefan Wahren <[email protected]>
Acked-by: Martin Sperl <[email protected]>
Cc: Robert Jarzmik <[email protected]>
Link: https://lore.kernel.org/r/f45920af18dbf06e34129bbc406f53dc9c5d1075.1568187525.git.lukas@wunner.de
Signed-off-by: Mark Brown <[email protected]>
|
|
The BCM2835 SPI driver currently sets the SPI_CONTROLLER_MUST_RX flag.
When performing a TX-only transfer, this flag causes the SPI core to
allocate and DMA-map a dummy buffer into which the RX FIFO contents are
copied. The dummy buffer is necessary because the chip is not capable
of disabling the receiver or automatically throwing away received data.
Not reading the RX FIFO isn't an option either since transmission is
halted once it's full.
Avoid the overhead induced by the dummy buffer by preallocating a
reusable DMA transaction which cyclically clears the RX FIFO. The
transaction requires very little CPU time to submit and generates no
interrupts while running. Specifics are provided in kerneldoc comments.
With a ks8851 Ethernet chip attached to the SPI controller, I am seeing
a 30 us reduction in ping time with this commit (1.819 ms vs. 1.849 ms,
average of 100,000 packets) as well as a 2% reduction in CPU time
(75:08 vs. 76:39 for transmission of 5 GByte over the SPI bus).
The commit uses the TX DMA interrupt to signal completion of a transfer.
This interrupt is raised once all bytes have been written to the
TX FIFO and it is then necessary to busy-wait for the TX FIFO to become
empty before the transfer can be finalized. As an alternative approach,
I have explored using the SPI controller's DONE interrupt to detect
completion. This interrupt is signaled when the TX FIFO becomes empty,
avoiding the need to busy-wait. However, latency deteriorates compared
to the present commit and, surprisingly, CPU time is slightly higher as
well.
It turns out that in 45% of the cases, no busy-waiting is needed at all
and in 76% of the cases, less than 10 busy-wait iterations are
sufficient for the TX FIFO to drain. This was measured on an RT kernel.
On a vanilla kernel, wakeup latency is worse and thus fewer iterations
are needed. The measurements were made with an SPI clock of 20 MHz,
they may differ slightly for slower or faster clock speeds.
Previously we always used the RX DMA interrupt to signal completion of a
transfer. Using the TX DMA interrupt now introduces a race condition:
TX DMA is always started before RX DMA so that bytes are already clocked
out while RX DMA is still being set up. But if a TX-only transfer is
very short, then the TX DMA interrupt may occur before RX DMA is set up.
If the interrupt happens to occur on the same CPU, setup of RX DMA may
even be delayed until after the interrupt was handled.
I've solved this by having the TX DMA callback clear the RX FIFO while
busy-waiting for the TX FIFO to drain, thus avoiding a dependency on
setup of RX DMA. Additionally, I am using a lock-free mechanism with
two flags, tx_dma_active and rx_dma_active plus memory barriers to
terminate RX DMA either by the TX DMA callback or immediately after
setting it up, whichever wins the race. I've explored an alternative
approach which temporarily disables the TX DMA callback until RX DMA
has been set up (using tasklet_disable(), local_bh_disable() or
local_irq_save()), but the performance was minimally worse.
[Nathan Chancellor contributed a DMA mapping fixup for an early version
of this commit, hence his Signed-off-by.]
Tested-by: Nuno Sá <[email protected]>
Tested-by: Noralf Trønnes <[email protected]>
Signed-off-by: Nathan Chancellor <[email protected]>
Signed-off-by: Lukas Wunner <[email protected]>
Acked-by: Stefan Wahren <[email protected]>
Acked-by: Martin Sperl <[email protected]>
Cc: Robert Jarzmik <[email protected]>
Link: https://lore.kernel.org/r/874949385f28251e2dcaa9494e39a27b50e9f9e4.1568187525.git.lukas@wunner.de
Signed-off-by: Mark Brown <[email protected]>
|
|
The BCM2835 DMA controller is capable of synthesizing zeroes instead of
copying them from a source address. The feature is enabled by setting
the SRC_IGNORE bit in the Transfer Information field of a Control Block:
"Do not perform source reads.
In addition, destination writes will zero all the write strobes.
This is used for fast cache fill operations."
https://www.raspberrypi.org/app/uploads/2012/02/BCM2835-ARM-Peripherals.pdf
The feature is only available on 8 of the 16 channels. The others are
so-called "lite" channels with a limited feature set and performance.
Enable the feature if a cyclic transaction copies from the zero page.
This reduces traffic on the memory bus.
A forthcoming use case is the BCM2835 SPI driver, which will cyclically
copy from the zero page to the TX FIFO. The idea to use SRC_IGNORE was
taken from an ancient GitHub conversation between Martin and Noralf:
https://github.com/msperl/spi-bcm2835/issues/13#issuecomment-98180451
Tested-by: Nuno Sá <[email protected]>
Tested-by: Noralf Trønnes <[email protected]>
Signed-off-by: Lukas Wunner <[email protected]>
Acked-by: Vinod Koul <[email protected]>
Acked-by: Stefan Wahren <[email protected]>
Acked-by: Martin Sperl <[email protected]>
Cc: Florian Kauer <[email protected]>
Link: https://lore.kernel.org/r/b2286c904408745192e4beb3de3c88f73e4a7210.1568187525.git.lukas@wunner.de
Signed-off-by: Mark Brown <[email protected]>
|
|
The BCM2835 SPI driver needs to set up the clock polarity in its
->prepare_message() hook before spi_transfer_one_message() asserts chip
select to avoid a gratuitous clock signal edge (cf. commit acace73df2c1
("spi: bcm2835: set up spi-mode before asserting cs-gpio")).
Precalculate the CS register value (which selects the clock polarity)
once in ->setup() and use that cached value in ->prepare_message() and
->transfer_one(). This avoids one MMIO read per message and one per
transfer, yielding a small latency improvement. Additionally, a
forthcoming commit will use the precalculated value to derive the
register value for clearing the RX FIFO, which will eliminate the need
for an RX dummy buffer when performing TX-only DMA transfers.
Tested-by: Nuno Sá <[email protected]>
Tested-by: Noralf Trønnes <[email protected]>
Signed-off-by: Lukas Wunner <[email protected]>
Acked-by: Stefan Wahren <[email protected]>
Acked-by: Martin Sperl <[email protected]>
Link: https://lore.kernel.org/r/d17c1d7fcdc97fffa961b8737cfd80eeb14f9416.1568187525.git.lukas@wunner.de
Signed-off-by: Mark Brown <[email protected]>
|
|
While it is safe to use strncpy in this case, the advice is to move to
strscpy or strscpy_pad.
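For reference, a generic illustration of the difference (the struct and
field below are made up for the example):

  #include <linux/printk.h>
  #include <linux/string.h>

  struct example_info {
          char name[32];
  };

  /*
   * strscpy() copies at most sizeof(dst) - 1 bytes, always
   * NUL-terminates and returns -E2BIG if the source was truncated;
   * strscpy_pad() additionally zero-fills the remainder of the buffer,
   * which matters when the destination may be copied to user space.
   */
  static void example_set_name(struct example_info *info, const char *src)
  {
          if (strscpy_pad(info->name, src, sizeof(info->name)) < 0)
                  pr_warn("name '%s' was truncated\n", src);
  }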
Suggested-by: Takashi Iwai <[email protected]>
Signed-off-by: Peter Ujfalusi <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Mark Brown <[email protected]>
|
|
Document the BCM2835 DMA driver's device data structure so that upcoming
commits may add further members with proper kerneldoc.
Tested-by: Nuno Sá <[email protected]>
Tested-by: Noralf Trønnes <[email protected]>
Signed-off-by: Lukas Wunner <[email protected]>
Acked-by: Vinod Koul <[email protected]>
Acked-by: Stefan Wahren <[email protected]>
Acked-by: Martin Sperl <[email protected]>
Cc: Florian Kauer <[email protected]>
Link: https://lore.kernel.org/r/78648f80f67d97bb7beecc1b9be6b6e4a45bc1d8.1568187525.git.lukas@wunner.de
Signed-off-by: Mark Brown <[email protected]>
|
|
__spi_alloc_controller() uses a single allocation to accommodate struct
spi_controller and the driver-private data, but places the latter behind
the former. This order does not guarantee cacheline alignment of the
driver-private data. (It does guarantee cacheline alignment of struct
spi_controller but the structure doesn't make any use of that property.)
Round up struct spi_controller to cacheline size. A forthcoming commit
leverages this to grant DMA access to driver-private data of the BCM2835
SPI master.
An alternative, less economical approach would be to use two allocations.
A third approach consists of reversing the order to conserve memory.
But Mark Brown is concerned that it may result in a performance penalty
on architectures that don't like unaligned accesses.
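A hedged sketch of the resulting allocation layout (simplified from the
SPI core; device setup and error handling beyond the allocation are
omitted):

  #include <linux/dma-mapping.h>
  #include <linux/kernel.h>
  #include <linux/slab.h>
  #include <linux/spi/spi.h>

  /*
   * Round struct spi_controller up to the cacheline/DMA alignment so
   * the driver-private area that follows it in the same allocation is
   * itself cacheline aligned and safe to hand to DMA.
   */
  static struct spi_controller *example_alloc_controller(unsigned int priv_size)
  {
          size_t ctlr_size = ALIGN(sizeof(struct spi_controller),
                                   dma_get_cache_alignment());
          struct spi_controller *ctlr;

          ctlr = kzalloc(ctlr_size + priv_size, GFP_KERNEL);
          if (!ctlr)
                  return NULL;

          /* Driver-private data lives at the aligned offset. */
          spi_controller_set_devdata(ctlr, (void *)ctlr + ctlr_size);
          return ctlr;
  }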
Signed-off-by: Lukas Wunner <[email protected]>
Link: https://lore.kernel.org/r/01625b9b26b93417fb09d2c15ad02dfe9cdbbbe5.1568187525.git.lukas@wunner.de
Signed-off-by: Mark Brown <[email protected]>
|
|
The DMA engine API requires DMA drivers to explicitly allow descriptors
to be prepared once and reused multiple times. Only a
single driver makes use of this functionality so far (pxa_dma.c,
to speed up pxa_camera.c).
We're about to add another use case for reusable descriptors in
the BCM2835 SPI driver, so allow that in the BCM2835 DMA driver.
Tested-by: Nuno Sá <[email protected]>
Tested-by: Noralf Trønnes <[email protected]>
Signed-off-by: Lukas Wunner <[email protected]>
Acked-by: Vinod Koul <[email protected]>
Acked-by: Stefan Wahren <[email protected]>
Acked-by: Martin Sperl <[email protected]>
Cc: Florian Kauer <[email protected]>
Cc: Robert Jarzmik <[email protected]>
Link: https://lore.kernel.org/r/bfc98a38225bbec4158440ad06cb9eee675e3e6f.1568187525.git.lukas@wunner.de
Signed-off-by: Mark Brown <[email protected]>
|
|
The BCM2835 DMA driver currently requests an interrupt from the
controller regardless of whether or not the client has passed in the
DMA_PREP_INTERRUPT flag. This causes unnecessary overhead for cyclic
transactions which do not need an interrupt after each period.
We're about to add such a use case, namely cyclic clearing of the SPI
controller's RX FIFO, so amend the DMA driver to request an interrupt
only if DMA_PREP_INTERRUPT was passed in. Ignore the period_len for
such transactions and set it to the buffer length to make the driver's
calculations work.
Tested-by: Nuno Sá <[email protected]>
Tested-by: Noralf Trønnes <[email protected]>
Signed-off-by: Lukas Wunner <[email protected]>
Acked-by: Vinod Koul <[email protected]>
Acked-by: Stefan Wahren <[email protected]>
Acked-by: Martin Sperl <[email protected]>
Cc: Florian Kauer <[email protected]>
Link: https://lore.kernel.org/r/73cf37be56eb4cbe6f696057c719f3a38cbaf26e.1568187525.git.lukas@wunner.de
Signed-off-by: Mark Brown <[email protected]>
|
|
The BCM2835 SPI driver uses a flag to keep track of whether a DMA
transfer is in progress.
The flag is used to avoid terminating DMA channels multiple times if a
transfer finishes orderly while simultaneously the SPI core invokes the
->handle_err() callback because the transfer took too long. However,
terminating DMA channels multiple times is perfectly fine, so the flag
is unnecessary for this particular purpose.
The flag is also used to avoid invoking bcm2835_spi_undo_prologue()
multiple times under this race condition. However, multiple *concurrent*
invocations can no longer happen since commit 2527704d8411 ("spi:
bcm2835: Synchronize with callback on DMA termination") because the
->handle_err() callback now uses the _sync() variant when terminating
DMA channels.
The only raison d'être of the flag is therefore that
bcm2835_spi_undo_prologue() cannot cope with multiple *sequential*
invocations. Achieve that by setting tx_prologue to 0 at the end of
the function. Subsequent invocations thus become no-ops.
With that, the dma_pending flag becomes unnecessary, so drop it.
Tested-by: Nuno Sá <[email protected]>
Tested-by: Noralf Trønnes <[email protected]>
Signed-off-by: Lukas Wunner <[email protected]>
Acked-by: Stefan Wahren <[email protected]>
Acked-by: Martin Sperl <[email protected]>
Link: https://lore.kernel.org/r/062b03b7f86af77a13ce0ec3b22e0bdbfcfba10d.1568187525.git.lukas@wunner.de
Signed-off-by: Mark Brown <[email protected]>
|
|
This change documents the 'mac-mode' property that was introduced in the
'stmmac' driver to support passive mode converters that can sit in-between
the MAC & PHY.
Signed-off-by: Alexandru Ardelean <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
In-between the MAC & PHY there can be a mode converter, which converts one
mode to another (e.g. GMII-to-RGMII).
The converter can be passive (i.e. no driver or OS/SW information
required), so the MAC & PHY need to be configured differently.
For the `stmmac` driver, this is implemented via a `mac-mode` property in
the device tree, which configures the MAC into a certain mode, while for
the PHY a `phy_interface` field holds the mode of the PHY. The mode of the
PHY is passed to the PHY and from there on it works in a different mode.
If unspecified, the default `phy-mode` is used for both.
Signed-off-by: Alexandru Ardelean <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
There is a spelling mistake in a mlx4_err error message. Fix it.
Signed-off-by: Colin Ian King <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
There is a spelling mistake in a .msg literal string. Fix it.
Signed-off-by: Colin Ian King <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Sudarsana Reddy Kalluru says:
====================
qed* Fix series.
The patch series addresses a couple of issues in the recent commits.
Patch (1) populates the actual dump-size of config attribute instead of
providing a fixed size value.
Patch(2) updates frame format of flash config buffer as required by
management FW (MFW).
Please consider applying it to net-next.
====================
Signed-off-by: David S. Miller <[email protected]>
|
|
MFW associates an entity id with each config attribute instead of
assigning one entity id to all the config attributes.
This patch incorporates driver changes to link the entity id to a config
id attribute.
Fixes: 0dabbe1bb3a4 ("qed: Add driver API for flashing the config attributes.")
Signed-off-by: Sudarsana Reddy Kalluru <[email protected]>
Signed-off-by: Ariel Elior <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
The driver currently returns max-buf-size as the size of the config
attribute. This patch incorporates changes to read this value from MFW
(if available) and provide it to the user. Also did a trivial clean up in
this path.
Fixes: d44a3ced7023 ("qede: Add support for reading the config id attributes.")
Signed-off-by: Sudarsana Reddy Kalluru <[email protected]>
Signed-off-by: Ariel Elior <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
There is a spelling mistake in the lmc_trace message. Fix it.
Signed-off-by: Colin Ian King <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
When filtering the xattr list for reading, the presence of a trusted
xattr results in a security audit log entry. However, if there is other
content, no errno will be set, and if there isn't, the errno will be
-ENODATA and not -EPERM, which is what is usually associated with a lack
of capability. The check does not block the request to list the xattrs
present.
Switch to ns_capable_noaudit to reflect a more appropriate check.
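A simplified sketch of the check in question (close in spirit to the
overlayfs xattr-list filter, with the private-xattr handling left out):

  #include <linux/capability.h>
  #include <linux/string.h>
  #include <linux/user_namespace.h>
  #include <linux/xattr.h>

  /*
   * Decide whether a trusted.* xattr should be listed for the caller.
   * ns_capable_noaudit() performs the same capability check as
   * ns_capable() but emits no security audit record when the caller
   * lacks CAP_SYS_ADMIN, which suits a non-blocking list filter.
   */
  static bool example_can_list(const char *name)
  {
          if (strncmp(name, XATTR_TRUSTED_PREFIX, XATTR_TRUSTED_PREFIX_LEN))
                  return true;    /* non-trusted xattrs are always listed */

          return ns_capable_noaudit(&init_user_ns, CAP_SYS_ADMIN);
  }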
Signed-off-by: Mark Salyzyn <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected] # v3.18+
Fixes: a082c6f680da ("ovl: filter trusted xattr for non-admin")
Signed-off-by: Miklos Szeredi <[email protected]>
|
|
If ovl_encode_real_fh() fails, no memory was allocated
and the error in the error-valued pointer should be returned.
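Generic illustration of the pattern being fixed (example_helper() is a
stand-in for ovl_encode_real_fh()):

  #include <linux/err.h>
  #include <linux/errno.h>

  static void *example_helper(void)
  {
          return ERR_PTR(-EIO);   /* stand-in for a failing, allocation-free path */
  }

  static int example_caller(void)
  {
          void *fh = example_helper();

          /* Nothing was allocated: propagate the encoded error as-is. */
          if (IS_ERR(fh))
                  return PTR_ERR(fh);
          /* ... use and free fh ... */
          return 0;
  }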
Fixes: 9b6faee07470 ("ovl: check ERR_PTR() return value from ovl_encode_fh()")
Signed-off-by: Ding Xiang <[email protected]>
Cc: <[email protected]> # v4.16+
Signed-off-by: Miklos Szeredi <[email protected]>
|
|
There is a spelling mistake in a dbg_verbose message. Fix it.
Signed-off-by: Colin Ian King <[email protected]>
Signed-off-by: Ulf Hansson <[email protected]>
|
|
Don't populate the array degrees on the stack but instead make it
static const. Makes the object code smaller by 46 bytes.
Before:
   text    data     bss     dec     hex filename
   5356    1560       0    6916    1b04 dw_mmc-hi3798cv200.o
After:
   text    data     bss     dec     hex filename
   5214    1656       0    6870    1ad6 dw_mmc-hi3798cv200.o
(gcc version 9.2.1, amd64)
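The pattern, illustrated generically (values are placeholders rather
than the driver's table):

  /* Before: the array is rebuilt on the stack on every call. */
  static int example_get_phase_stack(unsigned int idx)
  {
          int degrees[] = { 0, 45, 90, 135, 180, 225, 270, 315 };

          return degrees[idx & 7];
  }

  /* After: a single read-only copy in .rodata, no per-call initialization. */
  static int example_get_phase_const(unsigned int idx)
  {
          static const int degrees[] = { 0, 45, 90, 135, 180, 225, 270, 315 };

          return degrees[idx & 7];
  }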
Signed-off-by: Colin Ian King <[email protected]>
Signed-off-by: Ulf Hansson <[email protected]>
|
|
Instead of keeping track of whether SDIO IRQs have been enabled via an
internal sdhci status flag, avoid the open-coding and convert into using
sdio_irq_claimed().
Reviewed-by: Matthias Kaehlcke <[email protected]>
Signed-off-by: Ulf Hansson <[email protected]>
|
|
Nowadays sdhci prevents runtime suspend when SDIO IRQs are enabled.
However, some variants, such as sdhci-esdhc-imx, try to allow runtime
suspend while having the SDIO IRQs enabled, but without supporting remote
wakeups. This support is a bit questionable, especially if the host device
has a PM domain attached that can be power gated, but more importantly,
the code has also become redundant (which was not the case when it was
introduced).
Rather than keeping the redundant code around, let's drop it and leave this
to be revisited later on.
Signed-off-by: Ulf Hansson <[email protected]>
|
|
The sdhci_ack_sdio_irq() is called only when SDIO IRQs are enabled.
Therefore, let's drop the redundant check of the internal
SDHCI_SDIO_IRQ_ENABLED flag and just re-enable the IRQs immediately.
Reviewed-by: Matthias Kaehlcke <[email protected]>
Signed-off-by: Ulf Hansson <[email protected]>
Acked-by: Adrian Hunter <[email protected]>
Signed-off-by: Ulf Hansson <[email protected]>
|
|
System suspend/resume of SDIO cards, with SDIO IRQs enabled and when using
MMC_CAP2_SDIO_IRQ_NOTHREAD, unfortunately still suffers from fragile
behaviour. Some problems have been taken care of so far, but more issues
remain.
For example, calling the ->ack_sdio_irq() callback to let host drivers
re-enable the SDIO IRQs is a bad idea, unless the IRQ has been consumed,
which may not be the case during system suspend/resume. This may lead to
a host driver re-signaling the same SDIO IRQ over and over again, causing
a storm of IRQs and a ping-pong effect towards sdio_irq_work().
Moreover, calling the ->enable_sdio_irq() callback at system resume to
re-enable already enabled SDIO IRQs for the host causes the runtime PM
count for some host drivers to become unbalanced. This then leads to the
host remaining runtime resumed, no matter if that's needed or not.
To fix these problems, let's check whether process_sdio_pending_irqs()
actually consumed the SDIO IRQ, before we continue to ack the IRQ by
invoking the ->ack_sdio_irq() callback.
Additionally, there should be no need to re-enable SDIO IRQs as the host
driver already knows whether they were enabled at system suspend, thus
also whether it needs to re-enable them at system resume. For this reason,
drop the call to ->enable_sdio_irq() during system resume.
In regard to these changes there is yet another issue: an SDIO IRQ may be
signaled by the host driver after the SDIO card has been system suspended.
Currently these IRQs are just thrown away, while we should at least try to
consume them once the SDIO card has been system resumed. Fix this by
queueing the sdio_irq_work() after we have system resumed the SDIO card.
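A hedged sketch of the first part of this change (simplified;
process_sdio_pending_irqs() and the claim helpers are mmc-core internal,
and the real code differs in detail):

  #include <linux/mmc/host.h>

  /*
   * Only ack the SDIO IRQ towards the host driver when
   * process_sdio_pending_irqs() actually consumed one; otherwise the
   * host may immediately re-signal the same IRQ, ping-ponging with
   * sdio_irq_work().
   */
  static void example_sdio_run_irqs(struct mmc_host *host)
  {
          mmc_claim_host(host);
          if (host->sdio_irqs) {
                  int consumed = process_sdio_pending_irqs(host);

                  if (consumed > 0 && host->ops->ack_sdio_irq)
                          host->ops->ack_sdio_irq(host);
          }
          mmc_release_host(host);
  }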
Tested-by: Matthias Kaehlcke <[email protected]>
Reviewed-by: Matthias Kaehlcke <[email protected]>
Signed-off-by: Ulf Hansson <[email protected]>
Reviewed-by: Douglas Anderson <[email protected]>
Signed-off-by: Ulf Hansson <[email protected]>
|
|
To make sure SDIO func drivers behave correctly during system
suspend/resume, let's add a WARN_ON for the case where the SDIO card is
non-powered and some SDIO IRQs are still being claimed.
Tested-by: Matthias Kaehlcke <[email protected]>
Reviewed-by: Matthias Kaehlcke <[email protected]>
Signed-off-by: Ulf Hansson <[email protected]>
|
|
For the MMC_CAP2_SDIO_IRQ_NOTHREAD case and when using sdio_signal_irq(),
the ->ack_sdio_irq() is already mandatory, which was not the case for those
host drivers that called sdio_run_irqs() directly.
As there are no longer any drivers calling sdio_run_irqs(), let's clarify
the code by dropping the unnecessary check and explicitly state that the
callback is mandatory in the header file.
Tested-by: Matthias Kaehlcke <[email protected]>
Reviewed-by: Matthias Kaehlcke <[email protected]>
Signed-off-by: Ulf Hansson <[email protected]>
Reviewed-by: Douglas Anderson <[email protected]>
Signed-off-by: Ulf Hansson <[email protected]>
|
|
The sdio_irq_pending flag is used to let host drivers indicate that they
have signaled an IRQ. If that is the case and we only have a single SDIO
func that has claimed an SDIO IRQ, our assumption is that we can avoid
reading the SDIO_CCCR_INTx register and just call the SDIO func irq
handler immediately. This makes sense, but the flag is set/cleared in a
somewhat messy order; let's fix that up as described below.
First, the flag is currently set in sdio_run_irqs(), which is executed as
a work that was scheduled from sdio_signal_irq(). To make it clearer that
the host has signaled an IRQ, let's instead set the flag immediately in
sdio_signal_irq(). This also makes the behavior consistent with host
drivers that use the legacy mmc_signal_sdio_irq() API. This has no
functional impact, because we don't expect host drivers to call
sdio_signal_irq() until after the work (sdio_run_irqs()) has been executed
anyway.
Second, we currently never clear the flag when using the sdio_run_irqs()
work, but only when using the sdio_irq_thread(). Let's make the behavior
consistent by moving the flag to be cleared inside the common
process_sdio_pending_irqs() function. Additionally, tweak the behavior of
the flag slightly by avoiding to clear it unless we actually processed the
SDIO IRQ. The purpose of this, at this point, is to keep the information
about whether there has been an SDIO IRQ signaled by the host, so that at
system resume we can decide to process it without reading the
SDIO_CCCR_INTx register.
Tested-by: Matthias Kaehlcke <[email protected]>
Reviewed-by: Matthias Kaehlcke <[email protected]>
Signed-off-by: Ulf Hansson <[email protected]>
Reviewed-by: Douglas Anderson <[email protected]>
Signed-off-by: Ulf Hansson <[email protected]>
|
|
To improve code quality, let's move the code that gets pending SDIO IRQs
from process_sdio_pending_irqs() into a dedicated function.
Signed-off-by: Matthias Kaehlcke <[email protected]>
[Ulf: Converted function into static]
Tested-by: Matthias Kaehlcke <[email protected]>
Signed-off-by: Ulf Hansson <[email protected]>
Reviewed-by: Douglas Anderson <[email protected]>
Signed-off-by: Ulf Hansson <[email protected]>
|
|
In cases when SDIO IRQs have been enabled, runtime suspend is prevented by
the driver. However, this still means msdc_runtime_suspend|resume() gets
called during system suspend/resume, via pm_runtime_force_suspend|resume().
This means that during system suspend/resume, the mtk-sd device most
likely loses its register context, even in cases when SDIO IRQs have been
enabled.
To re-enable the SDIO IRQs during system resume, the mtk-sd driver
currently relies on the mmc core to re-enable the SDIO IRQs when it resumes
the SDIO card, but this isn't the recommended solution. Instead, it's
better to deal with this locally in the mtk-sd driver, so let's do that.
Signed-off-by: Ulf Hansson <[email protected]>
|
|
In cases when SDIO IRQs have been enabled, runtime suspend is prevented by
the driver. However, this still means dw_mci_runtime_suspend|resume() gets
called during system suspend/resume, via pm_runtime_force_suspend|resume().
This means that during system suspend/resume, the dw_mmc device most
likely loses its register context, even in cases when SDIO IRQs have been
enabled.
To re-enable the SDIO IRQs during system resume, the dw_mmc driver
currently relies on the mmc core to re-enable the SDIO IRQs when it resumes
the SDIO card, but this isn't the recommended solution. Instead, it's
better to deal with this locally in the dw_mmc driver, so let's do that.
Tested-by: Matthias Kaehlcke <[email protected]>
Signed-off-by: Ulf Hansson <[email protected]>
Reviewed-by: Douglas Anderson <[email protected]>
Signed-off-by: Ulf Hansson <[email protected]>
|
|
To avoid each host driver that supports SDIO IRQs having to keep track
internally of whether SDIO IRQs have been claimed, let's introduce a
common helper function, sdio_irq_claimed().
The function returns true if SDIO IRQs are claimed, based on the number
of claimed irqs. This is safe, even without any locks, as long as the
helper function is called only from the runtime/system suspend callbacks
of the host driver.
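The helper is small enough to sketch in full (assuming it lives next to
struct mmc_host and that sdio_irqs tracks the number of claimed IRQs, as
described above):

  #include <linux/mmc/host.h>

  /*
   * SDIO IRQs count as claimed whenever at least one SDIO func has
   * claimed an irq. Reading the counter locklessly is fine as long as
   * callers are restricted to the host's runtime/system suspend
   * callbacks.
   */
  static inline bool sdio_irq_claimed(struct mmc_host *host)
  {
          return host->sdio_irqs > 0;
  }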
Tested-by: Matthias Kaehlcke <[email protected]>
Signed-off-by: Ulf Hansson <[email protected]>
Reviewed-by: Douglas Anderson <[email protected]>
Signed-off-by: Ulf Hansson <[email protected]>
|
|
Simon Horman says:
====================
devlink: add unknown 'fw_load_policy' value
Dirk says:
Recently we added an unknown value for the 'reset_dev_on_drv_probe' devlink
parameter. Extend the 'fw_load_policy' parameter in the same way.
The only driver that uses this right now is the nfp driver.
====================
Signed-off-by: David S. Miller <[email protected]>
|