author		Palmer Dabbelt <[email protected]>	2023-09-08 11:24:34 -0700
committer	Palmer Dabbelt <[email protected]>	2023-09-08 11:24:34 -0700
commit		c23be918c5d0f860971cf824de772714b4c771ea (patch)
tree		2394f1d44e748da1eede675c26d948bef54feb06 /arch/riscv/mm/dma-noncoherent.c
parent		7f215d003f31d56b458c7c3c8c7185a1697f5076 (diff)
parent		484861e09f3ed8fb2e1de290d9e33fee3611b9fc (diff)
Merge patch series "Add non-coherent DMA support for AX45MP"
Prabhakar <[email protected]> says:
From: Lad Prabhakar <[email protected]>
non-coherent DMA support for AX45MP
====================================
On the Andes AX45MP core, cache coherency is a specification option, so it
may not be supported; in that case DMA will fail. To work around this
issue, this patch series does the following:
1] The Andes alternative ports are implemented as errata which check
whether the IOCP is missing and only then apply the CMO errata. A
vendor-specific SBI extension (ANDES_SBI_EXT_IOCP_SW_WORKAROUND) is
implemented as part of the errata.
Below are the configs which the Andes port provides (and which are selected
by RZ/Five):
- ERRATA_ANDES
- ERRATA_ANDES_CMO
The OpenSBI patch supporting the ANDES_SBI_EXT_IOCP_SW_WORKAROUND SBI
extension is now part of the v1.3 release; the probe is sketched below.
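As a rough, hedged illustration of that probe, everything boils down to a
single SBI call; the extension/function ID values and the helper name below
are placeholders, not taken from this series:

#include <asm/sbi.h>

/* Illustrative IDs only -- the real values live in the Andes errata headers. */
#define ANDES_SBI_EXT_ANDES			0x0900031e
#define ANDES_SBI_EXT_IOCP_SW_WORKAROUND	1

/*
 * Returns true when the SBI firmware reports that the IOCP is missing,
 * i.e. software cache maintenance (the CMO errata) must be applied.
 */
static bool andes_iocp_sw_workaround(void)
{
	struct sbiret ret;

	ret = sbi_ecall(ANDES_SBI_EXT_ANDES, ANDES_SBI_EXT_IOCP_SW_WORKAROUND,
			0, 0, 0, 0, 0, 0);

	return !ret.error && ret.value;
}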
2] The Andes AX45MP core has a Programmable Physical Memory Attributes
(PMA) block that allows dynamic adjustment of memory attributes at runtime.
It contains a configurable number of PMA entries, implemented as CSR
registers, which control the attributes of memory regions of interest.
OpenSBI configures the PMA regions as required, creates a reserved-memory
node, and propagates it to the later boot stages.
Currently OpenSBI (upstream) configures the required PMA region and passes
it to Linux as a shared DMA pool:
reserved-memory {
	#address-cells = <2>;
	#size-cells = <2>;
	ranges;

	pma_resv0@58000000 {
		compatible = "shared-dma-pool";
		reg = <0x0 0x58000000 0x0 0x08000000>;
		no-map;
		linux,dma-default;
	};
};
The above shared DMA pool gets appended to the Linux DTB, so DMA memory
requests go through this region, as sketched below.
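As a hedged usage sketch (the example_alloc_ring() helper, the device
pointer, and the SZ_4K size are placeholders): with DMA_GLOBAL_POOL enabled
and the pool tagged linux,dma-default, drivers need nothing special, since
an ordinary coherent allocation is already satisfied from the reserved
region:

#include <linux/dma-mapping.h>
#include <linux/sizes.h>

static int example_alloc_ring(struct device *dev)
{
	dma_addr_t dma_handle;
	void *ring;

	/* Carved out of the shared-dma-pool reserved at 0x58000000 above. */
	ring = dma_alloc_coherent(dev, SZ_4K, &dma_handle, GFP_KERNEL);
	if (!ring)
		return -ENOMEM;

	/* ... hand dma_handle to the device, access ring from the CPU ... */

	dma_free_coherent(dev, SZ_4K, ring, dma_handle);
	return 0;
}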
3] We provide callbacks to synchronize specific content between memory and
cache; registration is sketched below.
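These callbacks travel through the riscv_nonstd_cache_ops structure added
in the diff below and are hooked up via
riscv_noncoherent_register_cache_ops(). A minimal registration sketch, with
the ax45mp_* names standing in for the real L2 cache driver's routines:

#include <linux/types.h>
#include <asm/dma-noncoherent.h>

/* Placeholder CMO routines; the real driver issues AX45MP L2 cache ops here. */
static void ax45mp_dma_cache_wback(phys_addr_t paddr, size_t size) { }
static void ax45mp_dma_cache_inv(phys_addr_t paddr, size_t size) { }
static void ax45mp_dma_cache_wback_inv(phys_addr_t paddr, size_t size) { }

static const struct riscv_nonstd_cache_ops ax45mp_cmo_ops = {
	.wback		= ax45mp_dma_cache_wback,
	.inv		= ax45mp_dma_cache_inv,
	.wback_inv	= ax45mp_dma_cache_wback_inv,
};

static int ax45mp_l2c_init(void)
{
	/*
	 * From here on, arch_sync_dma_for_{device,cpu}() dispatch to the
	 * callbacks above instead of the standard Zicbom CMO instructions.
	 */
	riscv_noncoherent_register_cache_ops(&ax45mp_cmo_ops);
	return 0;
}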
4] The RZ/Five SoC selects the following configs:
- AX45MP_L2_CACHE
- DMA_GLOBAL_POOL
- ERRATA_ANDES
- ERRATA_ANDES_CMO
----------x---------------------x--------------------x---------------x----
* b4-shazam-merge:
soc: renesas: Kconfig: Select the required configs for RZ/Five SoC
cache: Add L2 cache management for Andes AX45MP RISC-V core
dt-bindings: cache: andestech,ax45mp-cache: Add DT binding documentation for L2 cache controller
riscv: mm: dma-noncoherent: nonstandard cache operations support
riscv: errata: Add Andes alternative ports
riscv: asm: vendorid_list: Add Andes Technology to the vendors list
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Palmer Dabbelt <[email protected]>
Diffstat (limited to 'arch/riscv/mm/dma-noncoherent.c')
-rw-r--r--	arch/riscv/mm/dma-noncoherent.c	43
1 file changed, 43 insertions, 0 deletions
diff --git a/arch/riscv/mm/dma-noncoherent.c b/arch/riscv/mm/dma-noncoherent.c
index f269990e26c3..b76e7e192eb1 100644
--- a/arch/riscv/mm/dma-noncoherent.c
+++ b/arch/riscv/mm/dma-noncoherent.c
@@ -9,15 +9,28 @@
 #include <linux/dma-map-ops.h>
 #include <linux/mm.h>
 #include <asm/cacheflush.h>
+#include <asm/dma-noncoherent.h>
 
 static bool noncoherent_supported __ro_after_init;
 int dma_cache_alignment __ro_after_init = ARCH_DMA_MINALIGN;
 EXPORT_SYMBOL_GPL(dma_cache_alignment);
 
+struct riscv_nonstd_cache_ops noncoherent_cache_ops __ro_after_init = {
+	.wback = NULL,
+	.inv = NULL,
+	.wback_inv = NULL,
+};
+
 static inline void arch_dma_cache_wback(phys_addr_t paddr, size_t size)
 {
 	void *vaddr = phys_to_virt(paddr);
 
+#ifdef CONFIG_RISCV_NONSTANDARD_CACHE_OPS
+	if (unlikely(noncoherent_cache_ops.wback)) {
+		noncoherent_cache_ops.wback(paddr, size);
+		return;
+	}
+#endif
 	ALT_CMO_OP(clean, vaddr, size, riscv_cbom_block_size);
 }
 
@@ -25,6 +38,13 @@ static inline void arch_dma_cache_inv(phys_addr_t paddr, size_t size)
 {
 	void *vaddr = phys_to_virt(paddr);
 
+#ifdef CONFIG_RISCV_NONSTANDARD_CACHE_OPS
+	if (unlikely(noncoherent_cache_ops.inv)) {
+		noncoherent_cache_ops.inv(paddr, size);
+		return;
+	}
+#endif
+
 	ALT_CMO_OP(inval, vaddr, size, riscv_cbom_block_size);
 }
 
@@ -32,6 +52,13 @@ static inline void arch_dma_cache_wback_inv(phys_addr_t paddr, size_t size)
 {
 	void *vaddr = phys_to_virt(paddr);
 
+#ifdef CONFIG_RISCV_NONSTANDARD_CACHE_OPS
+	if (unlikely(noncoherent_cache_ops.wback_inv)) {
+		noncoherent_cache_ops.wback_inv(paddr, size);
+		return;
+	}
+#endif
+
 	ALT_CMO_OP(flush, vaddr, size, riscv_cbom_block_size);
 }
 
@@ -97,6 +124,13 @@ void arch_dma_prep_coherent(struct page *page, size_t size)
 {
 	void *flush_addr = page_address(page);
 
+#ifdef CONFIG_RISCV_NONSTANDARD_CACHE_OPS
+	if (unlikely(noncoherent_cache_ops.wback_inv)) {
+		noncoherent_cache_ops.wback_inv(page_to_phys(page), size);
+		return;
+	}
+#endif
+
 	ALT_CMO_OP(flush, flush_addr, size, riscv_cbom_block_size);
 }
 
@@ -128,3 +162,12 @@ void __init riscv_set_dma_cache_alignment(void)
 	if (!noncoherent_supported)
 		dma_cache_alignment = 1;
 }
+
+void riscv_noncoherent_register_cache_ops(const struct riscv_nonstd_cache_ops *ops)
+{
+	if (!ops)
+		return;
+
+	noncoherent_cache_ops = *ops;
+}
+EXPORT_SYMBOL_GPL(riscv_noncoherent_register_cache_ops);