|
Source and destination data buffers are allocated with the GFP_KERNEL flag.
This means that, if the DDR is larger than 2GB, the buffers can be allocated
above the 32-bit addressable space. In that case, and if the DMA controller
is only 32-bit capable, the swiotlb bounce buffer, located in the 32-bit
addressable space, is used and introduces an extra memcpy.
To prevent this extra memcpy, caused by the swiotlb bounce buffer being used
when the source or destination data buffer is allocated above the 32-bit
addressable space, force source and destination data buffer allocation with
GFP_DMA instead when the nobounce parameter is true.
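A minimal sketch of the allocation choice described above, with illustrative
names (not the actual dmatest code):
#include <linux/slab.h>

/* Pick the allocation flags for the test buffers: with nobounce set,
 * allocate from the DMA zone so the buffers stay in the 32-bit
 * addressable space and swiotlb bouncing is never needed.
 */
static void *example_alloc_buf(size_t len, bool nobounce)
{
        gfp_t gfp = nobounce ? GFP_DMA : GFP_KERNEL;

        return kmalloc(len, gfp);
}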
Signed-off-by: Amelie Delaunay <amelie.delaunay@foss.st.com>
Link: https://lore.kernel.org/r/20231124160235.2459326-1-amelie.delaunay@foss.st.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
|
|
The allocated channel count increases continuously because of a missing
source ID (srcid) cleanup in the fsl_edma_free_chan_resources() function
on i.MX93 eDMAv4.
Reset 'srcid' in fsl_edma_free_chan_resources().
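A hedged sketch of the shape of the fix (struct and field names are
illustrative, not taken from the driver):
struct example_chan {
        unsigned int srcid;     /* source ID bound when the channel is requested */
};

static void example_free_chan_resources(struct example_chan *chan)
{
        /* Clear the source ID on release so the channel is no longer
         * counted as allocated and can be requested again. */
        chan->srcid = 0;
}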
Cc: stable@vger.kernel.org
Fixes: 72f5801a4e2b ("dmaengine: fsl-edma: integrate v3 support")
Signed-off-by: Frank Li <Frank.Li@nxp.com>
Link: https://lore.kernel.org/r/20231127214325.2477247-1-Frank.Li@nxp.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
|
|
To allow the flexibility of reserving specific DMA channels, add support
for the dma-channel-mask property in the tegra210-adma driver.
Signed-off-by: Mohan Kumar <mkumard@nvidia.com>
Link: https://lore.kernel.org/r/20231128071615.31447-3-mkumard@nvidia.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
|
|
device_link_add() returns a NULL pointer, not PTR_ERR(), when it fails,
so replace the IS_ERR() check with a NULL pointer check.
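A sketch of the corrected check (the link flags are illustrative; only the
NULL test matters here):
#include <linux/device.h>

static int example_add_link(struct device *consumer, struct device *supplier)
{
        struct device_link *link;

        link = device_link_add(consumer, supplier,
                               DL_FLAG_STATELESS | DL_FLAG_PM_RUNTIME);
        if (!link)              /* was: if (IS_ERR(link)) */
                return -EINVAL;

        return 0;
}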
Fixes: 72f5801a4e2b ("dmaengine: fsl-edma: integrate v3 support")
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Link: https://lore.kernel.org/r/20231129090000.841440-1-yangyingliang@huaweicloud.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
|
|
The SiFive platform DMA (sf-pdma) controller supports both in-order and
out-of-order configurations, but the sf-pdma driver is configured for
in-order DMA transfers. The out-of-order configuration gives better
throughput on the PolarFire SoC platform.
Add a PolarFire SoC specific compatible and code to support out-of-order
DMA transfers.
Reviewed-by: Emil Renner Berthing <emil.renner.berthing@canonical.com>
Signed-off-by: Shravan Chippa <shravan.chippa@microchip.com>
Link: https://lore.kernel.org/r/20231208103856.3732998-4-shravan.chippa@microchip.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
|
|
Update the sf-pdma driver to adopt the generic DMA device tree bindings.
It calls of_dma_controller_register() with of_dma_xlate_by_chan_id to get
the generic DMA device tree helper support, so that DMA clients can look
up the sf-pdma controller using standard APIs.
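A minimal sketch of the registration described above (assuming a struct
dma_device already set up by the driver):
#include <linux/of_dma.h>
#include <linux/platform_device.h>

static int example_register_of_dma(struct platform_device *pdev,
                                   struct dma_device *dma_dev)
{
        /* of_dma_xlate_by_chan_id expects the struct dma_device as data;
         * clients can then use the standard "dmas"/"dma-names" properties. */
        return of_dma_controller_register(pdev->dev.of_node,
                                          of_dma_xlate_by_chan_id, dma_dev);
}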
Signed-off-by: Shravan Chippa <shravan.chippa@microchip.com>
Link: https://lore.kernel.org/r/20231208103856.3732998-2-shravan.chippa@microchip.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
|
|
Fix incorrect descriptions for the GRPCFG register, which has three
sub-registers (GRPWQCFG, GRPENGCFG and GRPFLGCFG).
No functional changes.
Signed-off-by: Guanjun <guanjun@linux.alibaba.com>
Reviewed-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Fenghua Yu <fenghua.yu@intel.com>
Acked-by: Lijun Pan <lijun.pan@intel.com>
Link: https://lore.kernel.org/r/20231211053704.2725417-3-guanjun@linux.alibaba.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
|
|
The int_handle field in the hw descriptor should also be protected
by wmb() before possibly triggering a DMA read.
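An illustrative ordering sketch (hypothetical descriptor layout and doorbell,
not the real idxd structures):
#include <linux/io.h>

struct example_hw_desc {
        u32 opcode;
        u32 int_handle;         /* interrupt handle the device reads back */
};

static void example_submit(struct example_hw_desc *desc, u32 handle,
                           void __iomem *doorbell)
{
        desc->int_handle = handle;
        /* Make every descriptor field, including int_handle, visible in
         * memory before the doorbell write that may trigger the device's
         * DMA read of the descriptor. */
        wmb();
        iowrite32(1, doorbell);
}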
Fixes: eb0cf33a91b4 (dmaengine: idxd: move interrupt handle assignment)
Signed-off-by: Guanjun <guanjun@linux.alibaba.com>
Reviewed-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Fenghua Yu <fenghua.yu@intel.com>
Reviewed-by: Lijun Pan <lijun.pan@intel.com>
Link: https://lore.kernel.org/r/20231211053704.2725417-2-guanjun@linux.alibaba.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
|
|
For RX channels, REG_BUS_WIDTH seems to default to a value of 0xf00, and
macOS preserves the upper bits when setting the configuration in the
lower ones. If we reset the upper bits to 0, this causes framing errors
on suspend/resume (the data stream "tears" and channels get swapped
around). Keeping the upper bits untouched, like the macOS driver does,
fixes this issue.
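A sketch of the preserving read-modify-write described above (register
offset and mask values are placeholders, not the real admac ones):
#include <linux/bits.h>
#include <linux/io.h>

#define EXAMPLE_REG_BUS_WIDTH   0x40            /* placeholder offset */
#define EXAMPLE_BUS_WIDTH_CFG   GENMASK(7, 0)   /* placeholder config field */

static void example_set_bus_width(void __iomem *base, u32 cfg)
{
        u32 val = readl(base + EXAMPLE_REG_BUS_WIDTH);

        /* Only touch the configuration bits; leave the upper bits (0xf00
         * by default on RX channels) exactly as the hardware set them. */
        val &= ~EXAMPLE_BUS_WIDTH_CFG;
        val |= cfg & EXAMPLE_BUS_WIDTH_CFG;
        writel(val, base + EXAMPLE_REG_BUS_WIDTH);
}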
Signed-off-by: Hector Martin <marcan@marcan.st>
Reviewed-by: Martin Povišer <povik+lin@cutebit.org>
Signed-off-by: Martin Povišer <povik+lin@cutebit.org>
Link: https://lore.kernel.org/r/20231029170704.82238-1-povik+lin@cutebit.org
Signed-off-by: Vinod Koul <vkoul@kernel.org>
|
|
The .remove() callback for a platform driver returns an int, which makes
many driver authors wrongly assume it's possible to do error handling by
returning an error code. However, the value returned is ignored (apart
from emitting a warning) and this typically results in resource leaks.
To improve here, there is a quest to make the remove callback return
void. In the first step of this quest all drivers are converted to
.remove_new(), which already returns void. Eventually, after all drivers
are converted, .remove_new() will be renamed to .remove().
There is an error path that has the above-mentioned problem. This patch
only adds a more drastic error message. To properly fix it,
dmaengine_terminate_sync() must be known to have succeeded (or it must be
safe to not call it, as other drivers seem to assume).
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Link: https://lore.kernel.org/r/20231105093415.3704633-10-u.kleine-koenig@pengutronix.de
Signed-off-by: Vinod Koul <vkoul@kernel.org>
|
|
The .remove() callback for a platform driver returns an int, which makes
many driver authors wrongly assume it's possible to do error handling by
returning an error code. However, the value returned is ignored (apart
from emitting a warning) and this typically results in resource leaks.
To improve here, there is a quest to make the remove callback return
void. In the first step of this quest all drivers are converted to
.remove_new(), which already returns void. Eventually, after all drivers
are converted, .remove_new() will be renamed to .remove().
There is an error path that has the above-mentioned problem. This patch
only adds a more drastic error message. To properly fix it,
dmaengine_terminate_sync() must be known to have succeeded (or it must be
safe to not call it, as other drivers seem to assume).
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Link: https://lore.kernel.org/r/20231105093415.3704633-9-u.kleine-koenig@pengutronix.de
Signed-off-by: Vinod Koul <vkoul@kernel.org>
|
|
The .remove() callback for a platform driver returns an int, which makes
many driver authors wrongly assume it's possible to do error handling by
returning an error code. However, the value returned is ignored (apart
from emitting a warning) and this typically results in resource leaks.
To improve here, there is a quest to make the remove callback return
void. In the first step of this quest all drivers are converted to
.remove_new(), which already returns void. Eventually, after all drivers
are converted, .remove_new() will be renamed to .remove().
There is an error path that has the above-mentioned problem. This patch
only adds a more drastic error message. To properly fix it,
dmaengine_terminate_sync() must be known to have succeeded (or it must be
safe to not call it, as other drivers seem to assume).
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Link: https://lore.kernel.org/r/20231105093415.3704633-8-u.kleine-koenig@pengutronix.de
Signed-off-by: Vinod Koul <vkoul@kernel.org>
|
|
The .remove() callback for a platform driver returns an int, which makes
many driver authors wrongly assume it's possible to do error handling by
returning an error code. However, the value returned is ignored (apart
from emitting a warning) and this typically results in resource leaks.
To improve here, there is a quest to make the remove callback return
void. In the first step of this quest all drivers are converted to
.remove_new(), which already returns void. Eventually, after all drivers
are converted, .remove_new() will be renamed to .remove().
There is an error path that has the above-mentioned problem. This patch
only adds a more drastic error message. To properly fix it,
dmaengine_terminate_sync() must be known to have succeeded (or it must be
safe to not call it, as other drivers seem to assume).
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Link: https://lore.kernel.org/r/20231105093415.3704633-7-u.kleine-koenig@pengutronix.de
Signed-off-by: Vinod Koul <vkoul@kernel.org>
|
|
stm32_dma_get_burst() returns a negative error for invalid input, which
gets turned into a large u32 value in stm32_dma_prep_dma_memcpy() that
in turn triggers an assertion because it does not fit into a two-bit field:
drivers/dma/stm32-dma.c: In function 'stm32_dma_prep_dma_memcpy':
include/linux/compiler_types.h:354:38: error: call to '__compiletime_assert_282' declared with attribute error: FIELD_PREP: value too large for the field
_compiletime_assert(condition, msg, __compiletime_assert_, __COUNTER__)
^
include/linux/compiler_types.h:335:4: note: in definition of macro '__compiletime_assert'
prefix ## suffix(); \
^~~~~~
include/linux/compiler_types.h:354:2: note: in expansion of macro '_compiletime_assert'
_compiletime_assert(condition, msg, __compiletime_assert_, __COUNTER__)
^~~~~~~~~~~~~~~~~~~
include/linux/build_bug.h:39:37: note: in expansion of macro 'compiletime_assert'
#define BUILD_BUG_ON_MSG(cond, msg) compiletime_assert(!(cond), msg)
^~~~~~~~~~~~~~~~~~
include/linux/bitfield.h:68:3: note: in expansion of macro 'BUILD_BUG_ON_MSG'
BUILD_BUG_ON_MSG(__builtin_constant_p(_val) ? \
^~~~~~~~~~~~~~~~
include/linux/bitfield.h:114:3: note: in expansion of macro '__BF_FIELD_CHECK'
__BF_FIELD_CHECK(_mask, 0ULL, _val, "FIELD_PREP: "); \
^~~~~~~~~~~~~~~~
drivers/dma/stm32-dma.c:1237:4: note: in expansion of macro 'FIELD_PREP'
FIELD_PREP(STM32_DMA_SCR_PBURST_MASK, dma_burst) |
^~~~~~~~~~
As an easy workaround, assume the error can happen, so try to handle this
by failing stm32_dma_prep_dma_memcpy() before the assertion. It replicates
what is done in stm32_dma_set_xfer_param() where stm32_dma_get_burst() is
also used.
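A self-contained sketch of the workaround's shape (mask and burst encoding
are illustrative, mirroring the two-bit PBURST field from the build log
above; this is not the driver source):
#include <linux/bitfield.h>
#include <linux/bits.h>
#include <linux/errno.h>
#include <linux/types.h>

#define EXAMPLE_PBURST_MASK     GENMASK(22, 21) /* two-bit burst field */

static int example_get_burst(u32 maxburst)
{
        switch (maxburst) {
        case 0:
        case 1:
                return 0;       /* single */
        case 4:
                return 1;       /* INCR4 */
        case 8:
                return 2;       /* INCR8 */
        case 16:
                return 3;       /* INCR16 */
        default:
                return -EINVAL;
        }
}

static int example_build_cr(u32 maxburst, u32 *cr)
{
        int burst = example_get_burst(maxburst);

        /* Fail here, before FIELD_PREP(), so a negative error code is
         * never truncated into the two-bit field. */
        if (burst < 0)
                return burst;

        *cr |= FIELD_PREP(EXAMPLE_PBURST_MASK, burst);
        return 0;
}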
Fixes: 1c32d6c37cc2 ("dmaengine: stm32-dma: use bitfield helpers")
Fixes: a2b6103b7a8a ("dmaengine: stm32-dma: Improve memory burst management")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Amelie Delaunay <amelie.delaunay@foss.st.com>
Cc: stable@vger.kernel.org
Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202311060135.Q9eMnpCL-lkp@intel.com/
Link: https://lore.kernel.org/r/20231106134832.1470305-1-amelie.delaunay@foss.st.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
|
|
Check whether round-robin arbitration is supported before enabling it, to
avoid exceptions if this feature is not supported.
Call trace:
fsl_edma_resume_early+0x1d4/0x208
dpm_run_callback+0xd4/0x304
device_resume_early+0xb0/0x208
dpm_resume_early+0x224/0x528
suspend_devices_and_enter+0x3e4/0xd00
pm_suspend+0x3c4/0x910
state_store+0x90/0x124
kobj_attr_store+0x48/0x64
sysfs_kf_write+0x84/0xb4
kernfs_fop_write_iter+0x19c/0x264
vfs_write+0x664/0x858
ksys_write+0xc8/0x180
__arm64_sys_write+0x44/0x58
invoke_syscall+0x5c/0x178
el0_svc_common.constprop.0+0x11c/0x14c
do_el0_svc+0x30/0x40
el0_svc+0x58/0xa8
el0t_64_sync_handler+0xc0/0xc4
el0t_64_sync+0x190/0x194
Fixes: 72f5801a4e2b ("dmaengine: fsl-edma: integrate v3 support")
Signed-off-by: Xiaolei Wang <xiaolei.wang@windriver.com>
Reviewed-by: Frank Li <Frank.Li@nxp.com>
Link: https://lore.kernel.org/r/20231113225713.1892643-3-xiaolei.wang@windriver.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
|
|
Some channels may be masked. When the system is suspended, if these
masked channels are not filtered out, this leads to NULL pointer
dereferences and a system crash:
Unable to handle kernel NULL pointer dereference at virtual address
Mem abort info:
ESR = 0x0000000096000004
EC = 0x25: DABT (current EL), IL = 32 bits
SET = 0, FnV = 0
EA = 0, S1PTW = 0
FSC = 0x04: level 0 translation fault
Data abort info:
ISV = 0, ISS = 0x00000004, ISS2 = 0x00000000
CM = 0, WnR = 0, TnD = 0, TagAccess = 0
GCS = 0, Overlay = 0, DirtyBit = 0, Xs = 0
user pgtable: 4k pages, 48-bit VAs, pgdp=0000000894300000
[00000000000002a0] pgd=0000000000000000, p4d=0000000000000000
Internal error: Oops: 0000000096000004 [#1] PREEMPT SMP
Modules linked in:
CPU: 1 PID: 989 Comm: sh Tainted: G B 6.6.0-16203-g557fb7a3ec4c-dirty #70
Hardware name: Freescale i.MX8QM MEK (DT)
pstate: 400000c5 (nZcv daIF -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
pc: fsl_edma_disable_request+0x3c/0x78
lr: fsl_edma_disable_request+0x3c/0x78
sp : ffff800089ae7690
x29: ffff800089ae7690 x28: ffff000807ab5440 x27: ffff000807ab5830
x26: 0000000000000008 x25: 0000000000000278 x24: 0000000000000001
x23: ffff000807ab4328 x22: 0000000000000000 x21: 0000000000000009
x20: ffff800082616940 x19: 0000000000000000 x18: 0000000000000000
x17: 3d3d3d3d3d3d3d3d x16: 3d3d3d3d3d3d3d3d x15: 3d3d3d3d3d3d3d3d
x14: 3d3d3d3d3d3d3d3d x13: 3d3d3d3d3d3d3d3d x12: 1ffff00010d45724
x11: ffff700010d45724 x10: dfff800000000000 x9: dfff800000000000
x8: 00008fffef2ba8dc x7: 0000000000000001 x6: ffff800086a2b927
x5: ffff800086a2b920 x4: ffff700010d45725 x3: ffff8000800d5bbc
x2 : 0000000000000000 x1 : ffff000800c1d880 x0 : 0000000000000001
Call trace:
fsl_edma_disable_request+0x3c/0x78
fsl_edma_suspend_late+0x128/0x12c
dpm_run_callback+0xd4/0x304
__device_suspend_late+0xd0/0x240
dpm_suspend_late+0x174/0x59c
suspend_devices_and_enter+0x194/0xd00
pm_suspend+0x3c4/0x910
Fixes: 72f5801a4e2b ("dmaengine: fsl-edma: integrate v3 support")
Signed-off-by: Xiaolei Wang <xiaolei.wang@windriver.com>
Link: https://lore.kernel.org/r/20231113225713.1892643-2-xiaolei.wang@windriver.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
|
|
AM62Ax has 3 SPI channels where each channel has 4x TX and 4x RX
threads. Also fix the thread numbers to match what the firmware expects
according to the PSI-L device description.
Link: http://downloads.ti.com/tisci/esd/latest/5_soc_doc/am62ax/psil_cfg.html [1]
Fixes: aac6db7e243a ("dmaengine: ti: k3-psil-am62a: Add AM62Ax PSIL and PDMA data")
Signed-off-by: Jai Luthra <j-luthra@ti.com>
Link: https://lore.kernel.org/r/20231123-psil_fix-v1-1-6604d80819be@ti.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
|
|
AM62x has 3 SPI channels where each channel has 4 TX and 4 RX threads.
This also fixes the thread numbers.
Signed-off-by: Ronald Wahl <ronald.wahl@raritan.com>
Fixes: 5ac6bfb58777 ("dmaengine: ti: k3-psil: Add AM62x PSIL and PDMA data")
Reviewed-by: Jai Luthra <j-luthra@ti.com>
Acked-by: Peter Ujfalusi <peter.ujfalusi@gmail.com>
Link: https://lore.kernel.org/r/20231030190113.16782-1-rwahl@gmx.de
Signed-off-by: Vinod Koul <vkoul@kernel.org>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/vkoul/dmaengine
Pull dmaengine updates from Vinod Koul:
- Big pile of __counted_by attribute annotations to several structures
for bounds checking of flexible arrays at run-time
- Another big pile of platform remove callback returning void changes
- Device tree device_get_match_data() usage and dropping
of_match_device() calls
- Minor driver updates to pxa, idxd, fsl, hisi etc drivers
* tag 'dmaengine-6.7-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/vkoul/dmaengine: (106 commits)
dmaengine: stm32-mdma: correct desc prep when channel running
dmaengine: dw-axi-dmac: Add support DMAX_NUM_CHANNELS > 16
dmaengine: xilinx: xilinx_dma: Fix kernel doc about xilinx_dma_remove()
dmaengine: mmp_tdma: drop unused variable 'of_id'
MAINTAINERS: Add entries for NXP(Freescale) eDMA drivers
dmaengine: xilinx: xdma: Support cyclic transfers
dmaengine: xilinx: xdma: Prepare the introduction of cyclic transfers
dmaengine: Drop unnecessary of_match_device() calls
dmaengine: Use device_get_match_data()
dmaengine: pxa_dma: Annotate struct pxad_desc_sw with __counted_by
dmaengine: pxa_dma: Remove an erroneous BUG_ON() in pxad_free_desc()
dmaengine: xilinx: xdma: Use resource_size() in xdma_probe()
dmaengine: fsl-dpaa2-qdma: Remove redundant initialization owner in dpaa2_qdma_driver
dmaengine: Remove unused declaration dma_chan_cleanup()
dmaengine: mmp: fix Wvoid-pointer-to-enum-cast warning
dmaengine: qcom: fix Wvoid-pointer-to-enum-cast warning
dmaengine: fsl-edma: Remove redundant dev_err() for platform_get_irq()
dmaengine: ep93xx_dma: Annotate struct ep93xx_dma_engine with __counted_by
dmaengine: idxd: add wq driver name support for accel-config user tool
dmaengine: fsl-edma: Annotate struct fsl_edma_engine with __counted_by
...
|
|
If a descriptor is prepared while the channel is already running, the
CCR register value stored in the channel could already have its EN bit
set. This would lead to a bad transfer since, at transfer start time, the
channel would be enabled while the other registers are not yet properly set.
To avoid this, ensure the CCR_EN bit is masked when storing the ccr value
in the mdma channel structure.
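A minimal sketch of the fix's shape (bit and field names are illustrative,
not the stm32-mdma register layout):
#include <linux/bits.h>
#include <linux/types.h>

#define EXAMPLE_CCR_EN  BIT(0)  /* channel enable bit */

struct example_chan {
        u32 ccr;                /* cached CCR, written back at transfer start */
};

static void example_cache_ccr(struct example_chan *chan, u32 hw_ccr)
{
        /* Never cache the enable bit: the stored value is reused when the
         * next transfer starts, before the other registers are programmed. */
        chan->ccr = hw_ccr & ~EXAMPLE_CCR_EN;
}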
Fixes: a4ffb13c8946 ("dmaengine: Add STM32 MDMA driver")
Signed-off-by: Alain Volmat <alain.volmat@foss.st.com>
Signed-off-by: Amelie Delaunay <amelie.delaunay@foss.st.com>
Cc: stable@vger.kernel.org
Tested-by: Alain Volmat <alain.volmat@foss.st.com>
Link: https://lore.kernel.org/r/20231009082450.452877-1-amelie.delaunay@foss.st.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
|
|
Add support for DMA controllers with more than 16 channels.
Signed-off-by: Sergey Khimich <serghox@gmail.com>
Link: https://lore.kernel.org/r/20231010101450.2949126-2-serghox@gmail.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
|
|
Since commit cc99582d46b4 ("dmaengine: xilinx: xilinx_dma: Convert to
platform remove callback returning void") xilinx_dma_remove() doesn't
return zero any more. As the function has no return value any more, just
drop the statement about the return value.
Reported-by: Radhey Shyam Pandey <radhey.shyam.pandey@amd.com>
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Link: https://lore.kernel.org/r/20231014211656.1512016-2-u.kleine-koenig@pengutronix.de
Signed-off-by: Vinod Koul <vkoul@kernel.org>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/vkoul/dmaengine
Pull dmaengine fixes from Vinod Koul:
"Driver fixes for:
- stm32 dma residue calculation and chaining
- stm32 mdma for setting inflight bytes, residue calculation and
resume abort
- channel request, channel enable and dma error in fsl_edma
- runtime pm imbalance in ste_dma40 driver
- deadlock fix in mediatek driver"
* tag 'dmaengine-fix-6.6' of git://git.kernel.org/pub/scm/linux/kernel/git/vkoul/dmaengine:
dmaengine: fsl-edma: fix all channels requested when call fsl_edma3_xlate()
dmaengine: stm32-dma: fix residue in case of MDMA chaining
dmaengine: stm32-dma: fix stm32_dma_prep_slave_sg in case of MDMA chaining
dmaengine: stm32-mdma: set in_flight_bytes in case CRQA flag is set
dmaengine: stm32-mdma: use Link Address Register to compute residue
dmaengine: stm32-mdma: abort resume if no ongoing transfer
dmaengine: ste_dma40: Fix PM disable depth imbalance in d40_probe
dmaengine: mediatek: Fix deadlock caused by synchronize_irq()
dmaengine: idxd: use spin_lock_irqsave before wait_event_lock_irq
dmaengine: fsl-edma: fix edma4 channel enable failure on second attempt
dt-bindings: dmaengine: zynqmp_dma: add xlnx,bus-width required property
dmaengine: fsl-dma: fix DMA error when enabling sg if 'DONE' bit is set
|
|
The recent change a67ba97dfb30 ("dmaengine: Use device_get_match_data()")
cleaned up device tree data calls but left an unused variable, so drop
it.
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Fixes: a67ba97dfb30 ("dmaengine: Use device_get_match_data()")
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Link: https://lore.kernel.org/r/20231010065729.29385-1-vkoul@kernel.org
Signed-off-by: Vinod Koul <vkoul@kernel.org>
|
|
dma_get_slave_channel() increases client_count for all channels. It should
only be called when a matched channel is found in fsl_edma3_xlate().
Move dma_get_slave_channel() after checking for a matched channel.
Cc: stable@vger.kernel.org
Fixes: 72f5801a4e2b ("dmaengine: fsl-edma: integrate v3 support")
Signed-off-by: Frank Li <Frank.Li@nxp.com>
Link: https://lore.kernel.org/r/20231004142911.838916-1-Frank.Li@nxp.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
|
|
In case of MDMA chaining, DMA is configured in Double-Buffer Mode (DBM)
with two periods, but if transfer has been prepared with _prep_slave_sg(),
the transfer is not marked cyclic (=!chan->desc->cyclic). However, as DBM
is activated for MDMA chaining, residue computation must take into account
cyclic constraints.
With only two periods in MDMA chaining, and no update due to Transfer
Complete interrupt masked, n_sg is always 0. If DMA current memory address
(depending on SxCR.CT and SxM0AR/SxM1AR) does not correspond, it means n_sg
should be increased.
Then, the residue of the current period is the one read from SxNDTR and
should not be overwritten with the full period length.
Fixes: 723795173ce1 ("dmaengine: stm32-dma: add support to trigger STM32 MDMA")
Signed-off-by: Amelie Delaunay <amelie.delaunay@foss.st.com>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20231004155024.2609531-2-amelie.delaunay@foss.st.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
|
|
Current Target (CT) have to be reset when starting an MDMA chaining use
case, as Double Buffer mode is activated. It ensures the DMA will start
processing the first memory target (pointed with SxM0AR).
Fixes: 723795173ce1 ("dmaengine: stm32-dma: add support to trigger STM32 MDMA")
Signed-off-by: Amelie Delaunay <amelie.delaunay@foss.st.com>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20231004155024.2609531-1-amelie.delaunay@foss.st.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
|
|
The CRQA flag is set by hardware when the channel request becomes active
and the channel is enabled. It is cleared by hardware when the channel
request is completed.
So when it is set, it means the MDMA is transferring bytes.
This information is useful in case of STM32 DMA and MDMA chaining,
especially when the user pauses the DMA before stopping it, to trigger one
last MDMA transfer to get the latest bytes of the SRAM buffer into the
destination buffer.
The STM32 DCMI driver can then use this to know whether the last MDMA
transfer in case of chaining is done.
Fixes: 696874322771 ("dmaengine: stm32-mdma: add support to be triggered by STM32 DMA")
Signed-off-by: Amelie Delaunay <amelie.delaunay@foss.st.com>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20231004163531.2864160-3-amelie.delaunay@foss.st.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
|
|
The current implementation relies on the curr_hwdesc index. But to keep
this index up to date, the Block Transfer interrupt (BTIE) has to be
enabled. If it is not, curr_hwdesc is not updated, and then the residue is
not reliable.
Rely on the Link Address Register instead, and disable the BTIE interrupt
in stm32_mdma_setup_xfer() because it is no longer needed in the
_prep_slave_sg() case to keep curr_hwdesc up to date.
This avoids extra interrupts and also ensures a reliable residue. These
improvements are required for the STM32 DCMI camera capture use case, which
needs STM32 DMA and MDMA chaining for good performance.
Fixes: 696874322771 ("dmaengine: stm32-mdma: add support to be triggered by STM32 DMA")
Signed-off-by: Amelie Delaunay <amelie.delaunay@foss.st.com>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20231004163531.2864160-2-amelie.delaunay@foss.st.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
|
|
chan->desc can be NULL if the transfer is terminated when resume is
called, leading to a NULL pointer dereference when retrieving the hwdesc.
To avoid this case, check that chan->desc is not NULL and that the channel
is disabled (transfer previously paused or terminated).
Fixes: a4ffb13c8946 ("dmaengine: Add STM32 MDMA driver")
Signed-off-by: Amelie Delaunay <amelie.delaunay@foss.st.com>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20231004163531.2864160-1-amelie.delaunay@foss.st.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
|
|
In order to use this dmaengine with sound devices, let's add cyclic
transfers support. Most of the code is reused from the existing
scatter-gather implementation; only the final linking between descriptors,
the control fields (to trigger interrupts more often) and the interrupt
handling are really different.
This controller supports up to 32 adjacent descriptors; we assume this is
way more than enough for the purpose of cyclic transfers and limit the
number of cycled descriptors to 32. This greatly simplifies the overall
handling of the descriptors.
Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
Link: https://lore.kernel.org/r/20231005160237.2804238-4-miquel.raynal@bootlin.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
|
|
In order to reduce and clarify the diff when introducing cyclic
transfers support, let's first prepare the driver a bit. There is no
functional change.
Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
Link: https://lore.kernel.org/r/20231005160237.2804238-3-miquel.raynal@bootlin.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
|
|
pm_runtime_enable() will increase the power disable depth. Thus a pairing
decrement is needed on the error handling path to keep it balanced
according to context.
Fix it by calling pm_runtime_disable() when an error is returned.
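A generic sketch of the balanced error path (example_setup() is a
hypothetical stand-in for the rest of probe, not the ste_dma40 code):
#include <linux/platform_device.h>
#include <linux/pm_runtime.h>

static int example_setup(struct platform_device *pdev)
{
        return -ENODEV;         /* pretend a later probe step failed */
}

static int example_probe(struct platform_device *pdev)
{
        int ret;

        pm_runtime_enable(&pdev->dev);

        ret = example_setup(pdev);
        if (ret)
                pm_runtime_disable(&pdev->dev); /* keep the disable depth balanced */

        return ret;
}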
Signed-off-by: Zhang Shurong <zhang_shurong@foxmail.com>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Link: https://lore.kernel.org/r/tencent_DD2D371DB5925B4B602B1E1D0A5FA88F1208@qq.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
|
|
If probe is reached, we've already matched the device and in the case of
DT matching, the struct device_node pointer will be set. Therefore, there
is no need to call of_match_device() in probe.
Signed-off-by: Rob Herring <robh@kernel.org>
Link: https://lore.kernel.org/r/20231006213835.332848-1-robh@kernel.org
Signed-off-by: Vinod Koul <vkoul@kernel.org>
|
|
Use preferred device_get_match_data() instead of of_match_device() to
get the driver match data. With this, adjust the includes to explicitly
include the correct headers.
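A sketch of the preferred lookup (struct example_cfg stands in for
driver-specific match data):
#include <linux/platform_device.h>
#include <linux/property.h>

struct example_cfg {
        unsigned int nr_channels;
};

static int example_probe(struct platform_device *pdev)
{
        /* No of_match_device() needed: probe only runs after a match, and
         * this works for DT and other firmware match tables alike. */
        const struct example_cfg *cfg = device_get_match_data(&pdev->dev);

        if (!cfg)
                return -ENODEV;

        return 0;
}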
Signed-off-by: Rob Herring <robh@kernel.org>
Link: https://lore.kernel.org/r/20231006213844.333027-1-robh@kernel.org
Signed-off-by: Vinod Koul <vkoul@kernel.org>
|
|
Prepare for the coming implementation by GCC and Clang of the __counted_by
attribute. Flexible array members annotated with __counted_by can have
their accesses bounds-checked at run-time via CONFIG_UBSAN_BOUNDS
(for array indexing) and CONFIG_FORTIFY_SOURCE (for strcpy/memcpy-family
functions).
To do so, the code needs a little shuffling related to how hw_desc is used
and nb_desc incremented.
The one by one increment is needed for the error handling path, calling
pxad_free_desc(), to work correctly.
So, add a new intermediate variable, desc, to store the result of the
dma_pool_alloc() call.
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Reviewed-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/1c9ef22826f449a3756bb13a83494e9fe3e0be8b.1696676782.git.christophe.jaillet@wanadoo.fr
Signed-off-by: Vinod Koul <vkoul@kernel.org>
|
|
If pxad_alloc_desc() fails on the first dma_pool_alloc() call, then
sw_desc->nb_desc is zero.
In such a case pxad_free_desc() is called and it will BUG_ON().
Remove this erroneous BUG_ON().
It is also useless, because if "sw_desc->nb_desc == 0", then, on the first
iteration of the for loop, i is -1 and the loop will not be executed.
(both i and sw_desc->nb_desc are 'int')
Fixes: a57e16cf0333 ("dmaengine: pxa: add pxa dmaengine driver")
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Link: https://lore.kernel.org/r/c8fc5563c9593c914fde41f0f7d1489a21b45a9a.1696676782.git.christophe.jaillet@wanadoo.fr
Signed-off-by: Vinod Koul <vkoul@kernel.org>
|
|
There is a warning reported by coccinelle:
./drivers/dma/xilinx/xdma.c:888:22-25: ERROR:
Missing resource_size with res
Use resource_size() on resource object instead of explicit computation.
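The substitution the report asks for, in a minimal illustrative form:
#include <linux/ioport.h>

static resource_size_t example_region_size(const struct resource *res)
{
        /* Equivalent to res->end - res->start + 1, without the off-by-one risk. */
        return resource_size(res);
}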
Signed-off-by: Li Zetao <lizetao1@huawei.com>
Link: https://lore.kernel.org/r/20230803033235.3049137-1-lizetao1@huawei.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
|
|
The fsl_mc_driver_register() helper sets "THIS_MODULE" as driver.owner when
registering a fsl_mc_driver, so initializing driver.owner in the
dpaa2_qdma_driver declaration is redundant. Remove it for clean code.
Signed-off-by: Li Zetao <lizetao1@huawei.com>
Link: https://lore.kernel.org/r/20230804100245.100068-1-lizetao1@huawei.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
|
|
The synchronize_irq(c->irq) call will not return until the IRQ handler
mtk_uart_apdma_irq_handler() has completed. If synchronize_irq() is called
while holding a spin_lock that the IRQ handler also needs, a deadlock
happens. The process is shown below:
            cpu0                   |              cpu1
mtk_uart_apdma_device_pause()      | mtk_uart_apdma_irq_handler()
  spin_lock_irqsave()              |
                                   |   spin_lock_irqsave()
  //hold the lock to wait          |
  synchronize_irq()                |
This patch moves the synchronize_irq(c->irq) call outside the spin_lock
in order to mitigate the bug.
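A sketch of the reordered pause path (struct and field names are
illustrative, not the mtk-uart-apdma ones):
#include <linux/interrupt.h>
#include <linux/spinlock.h>

struct example_chan {
        spinlock_t lock;        /* also taken by the IRQ handler */
        unsigned int irq;
};

static void example_device_pause(struct example_chan *c)
{
        unsigned long flags;

        spin_lock_irqsave(&c->lock, flags);
        /* ... mask the channel, update driver state ... */
        spin_unlock_irqrestore(&c->lock, flags);

        /* Safe only after dropping the lock: the handler may need c->lock
         * to finish, and synchronize_irq() waits for it to complete. */
        synchronize_irq(c->irq);
}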
Fixes: 9135408c3ace ("dmaengine: mediatek: Add MediaTek UART APDMA support")
Signed-off-by: Duoming Zhou <duoming@zju.edu.cn>
Reviewed-by: Eugen Hristev <eugen.hristev@collabora.com>
Link: https://lore.kernel.org/r/20230806032511.45263-1-duoming@zju.edu.cn
Signed-off-by: Vinod Koul <vkoul@kernel.org>
|
|
'type' is an enum, thus cast of pointer on 64-bit compile test with W=1
causes:
mmp_tdma.c:649:10: error: cast to smaller integer type 'enum mmp_tdma_type' from 'const void *' [-Werror,-Wvoid-pointer-to-enum-cast]
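One common shape of the fix for this warning (illustrative enum and lookup;
the point is the intermediate uintptr_t cast instead of casting the pointer
straight to the enum):
#include <linux/property.h>

enum example_type {
        EXAMPLE_TYPE_A,
        EXAMPLE_TYPE_B,
};

static enum example_type example_get_type(struct device *dev)
{
        /* Casting the 64-bit pointer via uintptr_t avoids
         * -Wvoid-pointer-to-enum-cast on 64-bit builds. */
        return (enum example_type)(uintptr_t)device_get_match_data(dev);
}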
Signed-off-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
Link: https://lore.kernel.org/r/20230810100000.123515-2-krzysztof.kozlowski@linaro.org
Signed-off-by: Vinod Koul <vkoul@kernel.org>
|
|
'cap' is an enum, thus cast of pointer on 64-bit compile test with W=1
causes:
hidma.c:748:8: error: cast to smaller integer type 'enum hidma_cap' from 'const void *' [-Werror,-Wvoid-pointer-to-enum-cast]
Signed-off-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
Link: https://lore.kernel.org/r/20230810100000.123515-1-krzysztof.kozlowski@linaro.org
Signed-off-by: Vinod Koul <vkoul@kernel.org>
|
|
Since commit 7723f4c5ecdb ("driver core: platform: Add an error message
to platform_get_irq*()") and commit 2043727c2882 ("driver core:
platform: Make use of the helper function dev_err_probe()"), there is
no need to call the dev_err() function directly to print a custom
message when handling an error from the platform_get_irq() function, as it
is going to display an appropriate error message in case of a failure.
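A sketch of the simplified error handling (illustrative request helper):
#include <linux/interrupt.h>
#include <linux/platform_device.h>

static int example_request_irq(struct platform_device *pdev,
                               irq_handler_t handler)
{
        int irq = platform_get_irq(pdev, 0);

        if (irq < 0)
                return irq;     /* platform_get_irq() already logged the error */

        return devm_request_irq(&pdev->dev, irq, handler, 0,
                                dev_name(&pdev->dev), pdev);
}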
Fixes: 72f5801a4e2b ("dmaengine: fsl-edma: integrate v3 support")
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
Link: https://lore.kernel.org/r/20230901071115.1322000-1-ruanjinjie@huawei.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
|
|
Prepare for the coming implementation by GCC and Clang of the __counted_by
attribute. Flexible array members annotated with __counted_by can have
their accesses bounds-checked at run-time via CONFIG_UBSAN_BOUNDS (for
array indexing) and CONFIG_FORTIFY_SOURCE (for strcpy/memcpy-family
functions).
As found with Coccinelle[1], add __counted_by for struct ep93xx_dma_engine.
[1] https://github.com/kees/kernel-tools/blob/trunk/coccinelle/examples/counted_by.cocci
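A minimal illustration of the annotation (not the real ep93xx structure):
#include <linux/overflow.h>
#include <linux/slab.h>

struct example_dma_engine {
        unsigned int    num_channels;
        u32             channel_ids[] __counted_by(num_channels);
};

static struct example_dma_engine *example_alloc(unsigned int n)
{
        struct example_dma_engine *edma;

        /* struct_size() sizes the flexible array; num_channels must then
         * be set to the same count so bounds checks see the right limit. */
        edma = kzalloc(struct_size(edma, channel_ids, n), GFP_KERNEL);
        if (edma)
                edma->num_channels = n;

        return edma;
}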
Cc: Vinod Koul <vkoul@kernel.org>
Cc: Nathan Chancellor <nathan@kernel.org>
Cc: Nick Desaulniers <ndesaulniers@google.com>
Cc: Tom Rix <trix@redhat.com>
Cc: dmaengine@vger.kernel.org
Cc: llvm@lists.linux.dev
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: "Gustavo A. R. Silva" <gustavoars@kernel.org>
Link: https://lore.kernel.org/r/20230928234334.work.391-kees@kernel.org
Signed-off-by: Vinod Koul <vkoul@kernel.org>
|
|
The k3_udma_glue_tx_get_irq() function currently returns negative error
codes on error, zero on error and positive values for success. This
complicates life for the callers who need to propagate the error code.
Also GCC will not warn about unsigned comparisons when you check:
if (unsigned_irq <= 0)
All the callers have been fixed now but let's just make this easy going
forward.
Signed-off-by: Dan Carpenter <dan.carpenter@linaro.org>
Reviewed-by: Roger Quadros <rogerq@kernel.org>
Acked-by: Vinod Koul <vkoul@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
In idxd_cmd_exec(), wait_event_lock_irq() explicitly calls
spin_unlock_irq()/spin_lock_irq(). If interrupts are on before entering
wait_event_lock_irq(), they will be off after it returns. Later,
wait_for_completion() may go to sleep while irqs are disabled. This
scenario triggers the warning in might_sleep().
Fix it by using spin_lock_irqsave() instead of the primitive spin_lock()
to save the irq status before entering wait_event_lock_irq() and using
spin_unlock_irqrestore() instead of the primitive spin_unlock() to restore
the irq status before entering wait_for_completion().
Before the change:
idxd_cmd_exec() {
interrupt is on
spin_lock() // interrupt is on
wait_event_lock_irq()
spin_unlock_irq() // interrupt is enabled
...
spin_lock_irq() // interrupt is disabled
spin_unlock() // interrupt is still disabled
wait_for_completion() // report "BUG: sleeping function
// called from invalid context...
// in_atomic() irqs_disabled()"
}
After applying spin_lock_irqsave():
idxd_cmd_exec() {
interrupt is on
spin_lock_irqsave() // save the on state
// interrupt is disabled
wait_event_lock_irq()
spin_unlock_irq() // interrupt is enabled
...
spin_lock_irq() // interrupt is disabled
spin_unlock_irqrestore() // interrupt is restored to on
wait_for_completion() // No Call trace
}
Fixes: f9f4082dbc56 ("dmaengine: idxd: remove interrupt disable for cmd_lock")
Signed-off-by: Rex Zhang <rex.zhang@intel.com>
Signed-off-by: Lijun Pan <lijun.pan@intel.com>
Reviewed-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Fenghua Yu <fenghua.yu@intel.com>
Link: https://lore.kernel.org/r/20230916060619.3744220-1-rex.zhang@intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
|
|
With the possibility of multiple wq drivers that can be bound to the wq,
the user config tool accel-config needs a way to know which wq driver to
bind to the wq. Introduce per wq driver_name sysfs attribute where the user
can indicate the driver to be bound to the wq. This allows accel-config to
just bind to the driver using wq->driver_name.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Reviewed-by: Fenghua Yu <fenghua.yu@intel.com>
Acked-by: Vinod Koul <vkoul@kernel.org>
Link: https://lore.kernel.org/r/20230908201045.4115614-1-fenghua.yu@intel.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
|
|
Prepare for the coming implementation by GCC and Clang of the __counted_by
attribute. Flexible array members annotated with __counted_by can have
their accesses bounds-checked at run-time via CONFIG_UBSAN_BOUNDS (for
array indexing) and CONFIG_FORTIFY_SOURCE (for strcpy/memcpy-family
functions).
As found with Coccinelle[1], add __counted_by for struct fsl_edma_engine.
Cc: Vinod Koul <vkoul@kernel.org>
Cc: dmaengine@vger.kernel.org
Link: https://github.com/kees/kernel-tools/blob/trunk/coccinelle/examples/counted_by.cocci [1]
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: "Gustavo A. R. Silva" <gustavoars@kernel.org>
Link: https://lore.kernel.org/r/20231003232704.work.596-kees@kernel.org
Signed-off-by: Vinod Koul <vkoul@kernel.org>
|
|
The parameter *sdesc of the function sprd_dma_check_trans_done() is not
used, so delete the redundant parameter.
Signed-off-by: Kaiwei Liu <kaiwei.liu@unisoc.com>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Link: https://lore.kernel.org/r/20230919014929.17037-1-kaiwei.liu@unisoc.com
Signed-off-by: Vinod Koul <vkoul@kernel.org>
|
|
Zero is not a valid IRQ for in-kernel code and the irq_of_parse_and_map()
function returns zero on error. So this check for valid IRQs should only
accept values > 0.
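A sketch of the corrected check (illustrative helper, not the edma code):
#include <linux/errno.h>
#include <linux/of_irq.h>

static int example_get_irq(struct device_node *np, int index)
{
        int irq = irq_of_parse_and_map(np, index);

        /* irq_of_parse_and_map() returns 0 on failure, and 0 is never a
         * valid in-kernel IRQ, so only values > 0 are usable. */
        if (irq <= 0)
                return -ENXIO;

        return irq;
}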
Fixes: 2b6b3b742019 ("ARM/dmaengine: edma: Merge the two drivers under drivers/dma/")
Signed-off-by: Dan Carpenter <dan.carpenter@linaro.org>
Acked-by: Peter Ujfalusi <peter.ujfalusi@gmail.com>
Link: https://lore.kernel.org/r/f15cb6a7-8449-4f79-98b6-34072f04edbc@moroto.mountain
Signed-off-by: Vinod Koul <vkoul@kernel.org>
|