* kvm-arm64/mte-map-shared:
: .
: Update the MTE support to allow the VMM to use shared mappings
: to back the memslots exposed to MTE-enabled guests.
:
: Patches courtesy of Catalin Marinas and Peter Collingbourne.
: .
: Fix a number of issues with MTE, such as races on the tags
: being initialised vs the PG_mte_tagged flag as well as the
: lack of support for VM_SHARED when KVM is involved.
:
: Patches from Catalin Marinas and Peter Collingbourne.
: .
Documentation: document the ABI changes for KVM_CAP_ARM_MTE
KVM: arm64: permit all VM_MTE_ALLOWED mappings with MTE enabled
KVM: arm64: unify the tests for VMAs in memslots when MTE is enabled
arm64: mte: Lock a page for MTE tag initialisation
mm: Add PG_arch_3 page flag
KVM: arm64: Simplify the sanitise_mte_tags() logic
arm64: mte: Fix/clarify the PG_mte_tagged semantics
mm: Do not enable PG_arch_2 for all 64-bit architectures
Signed-off-by: Marc Zyngier <[email protected]>
|
|
* kvm-arm64/pkvm-vcpu-state: (25 commits)
: .
: Large drop of pKVM patches from Will Deacon and co, adding
: a private vm/vcpu state at EL2, managed independently from
: the EL1 state. From the cover letter:
:
: "This is version six of the pKVM EL2 state series, extending the pKVM
: hypervisor code so that it can dynamically instantiate and manage VM
: data structures without the host being able to access them directly.
: These structures consist of a hyp VM, a set of hyp vCPUs and the stage-2
: page-table for the MMU. The pages used to hold the hypervisor structures
: are returned to the host when the VM is destroyed."
: .
KVM: arm64: Use the pKVM hyp vCPU structure in handle___kvm_vcpu_run()
KVM: arm64: Don't unnecessarily map host kernel sections at EL2
KVM: arm64: Explicitly map 'kvm_vgic_global_state' at EL2
KVM: arm64: Maintain a copy of 'kvm_arm_vmid_bits' at EL2
KVM: arm64: Unmap 'kvm_arm_hyp_percpu_base' from the host
KVM: arm64: Return guest memory from EL2 via dedicated teardown memcache
KVM: arm64: Instantiate guest stage-2 page-tables at EL2
KVM: arm64: Consolidate stage-2 initialisation into a single function
KVM: arm64: Add generic hyp_memcache helpers
KVM: arm64: Provide I-cache invalidation by virtual address at EL2
KVM: arm64: Initialise hypervisor copies of host symbols unconditionally
KVM: arm64: Add per-cpu fixmap infrastructure at EL2
KVM: arm64: Instantiate pKVM hypervisor VM and vCPU structures from EL1
KVM: arm64: Add infrastructure to create and track pKVM instances at EL2
KVM: arm64: Rename 'host_kvm' to 'host_mmu'
KVM: arm64: Add hyp_spinlock_t static initializer
KVM: arm64: Include asm/kvm_mmu.h in nvhe/mem_protect.h
KVM: arm64: Add helpers to pin memory shared with the hypervisor at EL2
KVM: arm64: Prevent the donation of no-map pages
KVM: arm64: Implement do_donate() helper for donating memory
...
Signed-off-by: Marc Zyngier <[email protected]>
|
|
* kvm-arm64/parallel-faults:
: .
: Parallel stage-2 fault handling, courtesy of Oliver Upton.
: From the cover letter:
:
: "Presently KVM only takes a read lock for stage 2 faults if it believes
: the fault can be fixed by relaxing permissions on a PTE (write unprotect
: for dirty logging). Otherwise, stage 2 faults grab the write lock, which
: predictably can pile up all the vCPUs in a sufficiently large VM.
:
: Like the TDP MMU for x86, this series loosens the locking around
: manipulations of the stage 2 page tables to allow parallel faults. RCU
: and atomics are exploited to safely build/destroy the stage 2 page
: tables in light of multiple software observers."
: .
KVM: arm64: Reject shared table walks in the hyp code
KVM: arm64: Don't acquire RCU read lock for exclusive table walks
KVM: arm64: Take a pointer to walker data in kvm_dereference_pteref()
KVM: arm64: Handle stage-2 faults in parallel
KVM: arm64: Make table->block changes parallel-aware
KVM: arm64: Make leaf->leaf PTE changes parallel-aware
KVM: arm64: Make block->table PTE changes parallel-aware
KVM: arm64: Split init and set for table PTE
KVM: arm64: Atomically update stage 2 leaf attributes in parallel walks
KVM: arm64: Protect stage-2 traversal with RCU
KVM: arm64: Tear down unlinked stage-2 subtree after break-before-make
KVM: arm64: Use an opaque type for pteps
KVM: arm64: Add a helper to tear down unlinked stage-2 subtrees
KVM: arm64: Don't pass kvm_pgtable through kvm_pgtable_walk_data
KVM: arm64: Pass mm_ops through the visitor context
KVM: arm64: Stash observed pte value in visitor context
KVM: arm64: Combine visitor arguments into a context structure
Signed-off-by: Marc Zyngier <[email protected]>
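A minimal stand-alone sketch of the "RCU and atomics" idea quoted in the cover letter above, under assumptions: this is plain C11, not KVM's pgtable walker, and the type and helper names are made up for illustration. Two faulting walkers race to install a leaf entry for the same slot; the compare-and-swap guarantees exactly one winner, and the loser observes the winner's mapping instead of overwriting it under a write lock.

#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

typedef _Atomic uint64_t pte_t;

static pte_t slot;                        /* one stage-2 table entry */

/* Try to install @new_pte over an empty entry; returns 1 if this walker won. */
static int install_leaf(pte_t *ptep, uint64_t new_pte)
{
        uint64_t expected = 0;            /* only replace an empty PTE */

        return atomic_compare_exchange_strong(ptep, &expected, new_pte);
}

int main(void)
{
        printf("first walker:  %d\n", install_leaf(&slot, 0x1000 | 3));
        printf("second walker: %d\n", install_leaf(&slot, 0x2000 | 3));
        printf("final pte:     0x%llx\n",
               (unsigned long long)atomic_load(&slot));
        return 0;
}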
|
|
Return DBG_HOOK_ERROR if kprobes cannot handle a BRK because it fails
to find a kprobe corresponding to the address.
Since arm64 kprobes uses stop_machine-based text patching for removing
a BRK, it ensures that any running kprobe_break_handler() has finished
by that point. And after removing the BRK, it removes the kprobe from
its hash list.
Thus, if kprobe_break_handler() fails to find a kprobe in the hash list,
there is a bug.
Signed-off-by: Masami Hiramatsu (Google) <[email protected]>
Acked-by: Mark Rutland <[email protected]>
Link: https://lore.kernel.org/r/166994753273.439920.6629626290560350760.stgit@devnote3
Signed-off-by: Will Deacon <[email protected]>
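A user-space sketch of the behaviour described above, with simplified stand-ins for the kernel's DBG_HOOK_* return values and for the kprobe hash-list lookup (the real handler lives in arch/arm64/kernel/probes/kprobes.c); it only illustrates that the BRK hook reports success when a kprobe is registered for the faulting address and DBG_HOOK_ERROR otherwise.

#include <stdint.h>
#include <stdio.h>

/* Stand-ins for the kernel's DBG_HOOK_HANDLED / DBG_HOOK_ERROR values. */
enum dbg_hook_ret { DBG_HOOK_HANDLED = 0, DBG_HOOK_ERROR = 1 };

/* Stand-in for the kprobe hash-list lookup. */
static const uintptr_t registered_probe = 0x400c30;

static int have_kprobe_at(uintptr_t addr)
{
        return addr == registered_probe;
}

static enum dbg_hook_ret brk_handler(uintptr_t fault_addr)
{
        if (have_kprobe_at(fault_addr))
                return DBG_HOOK_HANDLED;  /* run the probe's handlers */

        /* No kprobe for this BRK: report an error so the caller can escalate. */
        return DBG_HOOK_ERROR;
}

int main(void)
{
        printf("%d\n", brk_handler(0x400c30)); /* 0: handled */
        printf("%d\n", brk_handler(0x400c34)); /* 1: error   */
        return 0;
}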
|
|
Since arm64's do_page_fault() can handle the page fault more
appropriately than kprobe_fault_handler() according to the context,
let it handle the page fault instead of simply calling fixup_exception()
in kprobe_fault_handler().
Signed-off-by: Masami Hiramatsu (Google) <[email protected]>
Acked-by: Mark Rutland <[email protected]>
Link: https://lore.kernel.org/r/166994752269.439920.4801339965959400456.stgit@devnote3
Signed-off-by: Will Deacon <[email protected]>
|
|
Mark arch_stack_walk() as noinstr instead of notrace, and mark the inline
functions called from arch_stack_walk() as __always_inline, so that no
instrumentation can be attached to them, because this function can be
reached from return_address(), which is used by lockdep.
Without this, if the kernel is built with CONFIG_LOCKDEP=y, just probing
arch_stack_walk() via <tracefs>/kprobe_events will crash the kernel on
arm64.
# echo p arch_stack_walk >> ${TRACEFS}/kprobe_events
# echo 1 > ${TRACEFS}/events/kprobes/enable
kprobes: Failed to recover from reentered kprobes.
kprobes: Dump kprobe:
.symbol_name = arch_stack_walk, .offset = 0, .addr = arch_stack_walk+0x0/0x1c0
------------[ cut here ]------------
kernel BUG at arch/arm64/kernel/probes/kprobes.c:241!
kprobes: Failed to recover from reentered kprobes.
kprobes: Dump kprobe:
.symbol_name = arch_stack_walk, .offset = 0, .addr = arch_stack_walk+0x0/0x1c0
------------[ cut here ]------------
kernel BUG at arch/arm64/kernel/probes/kprobes.c:241!
PREEMPT SMP
Modules linked in:
CPU: 0 PID: 17 Comm: migration/0 Tainted: G N 6.1.0-rc5+ #6
Hardware name: linux,dummy-virt (DT)
Stopper: 0x0 <- 0x0
pstate: 600003c5 (nZCv DAIF -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
pc : kprobe_breakpoint_handler+0x178/0x17c
lr : kprobe_breakpoint_handler+0x178/0x17c
sp : ffff8000080d3090
x29: ffff8000080d3090 x28: ffff0df5845798c0 x27: ffffc4f59057a774
x26: ffff0df5ffbba770 x25: ffff0df58f420f18 x24: ffff49006f641000
x23: ffffc4f590579768 x22: ffff0df58f420f18 x21: ffff8000080d31c0
x20: ffffc4f590579768 x19: ffffc4f590579770 x18: 0000000000000006
x17: 5f6b636174735f68 x16: 637261203d207264 x15: 64612e202c30203d
x14: 2074657366666f2e x13: 30633178302f3078 x12: 302b6b6c61775f6b
x11: 636174735f686372 x10: ffffc4f590dc5bd8 x9 : ffffc4f58eb31958
x8 : 00000000ffffefff x7 : ffffc4f590dc5bd8 x6 : 80000000fffff000
x5 : 000000000000bff4 x4 : 0000000000000000 x3 : 0000000000000000
x2 : 0000000000000000 x1 : ffff0df5845798c0 x0 : 0000000000000064
Call trace:
kprobes: Failed to recover from reentered kprobes.
kprobes: Dump kprobe:
.symbol_name = arch_stack_walk, .offset = 0, .addr = arch_stack_walk+0x0/0x1c0
------------[ cut here ]------------
kernel BUG at arch/arm64/kernel/probes/kprobes.c:241!
Fixes: 39ef362d2d45 ("arm64: Make return_address() use arch_stack_walk()")
Cc: [email protected]
Signed-off-by: Masami Hiramatsu (Google) <[email protected]>
Acked-by: Mark Rutland <[email protected]>
Link: https://lore.kernel.org/r/166994751368.439920.3236636557520824664.stgit@devnote3
Signed-off-by: Will Deacon <[email protected]>
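The shape of the annotations described above, as a hedged stand-alone sketch: the macros below are simplified stand-ins for the kernel's noinstr and __always_inline (the real definitions in include/linux/compiler_types.h do more), and the function bodies are placeholders. The point is only that the unwinder helper gets no separate, probeable symbol and the entry point itself is not instrumentable.

#include <stdio.h>

/* Assumptions: simplified stand-ins for the kernel macros. */
#define force_inline inline __attribute__((__always_inline__))
#define noinstr      __attribute__((__no_instrument_function__)) __attribute__((__noinline__))

/* Helper used by the unwinder: force-inlined so no separate symbol exists
 * that a kprobe could be placed on. */
static force_inline unsigned long unwind_next(unsigned long fp)
{
        return fp + 16;                   /* placeholder body */
}

/* Unwinder entry point: not instrumentable, because it can be reached
 * from return_address(), which lockdep uses. */
static noinstr unsigned long stack_walk_sketch(unsigned long fp)
{
        return unwind_next(fp);
}

int main(void)
{
        printf("%lu\n", stack_walk_sketch(100UL));
        return 0;
}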
|
|
* kvm-arm64/dirty-ring:
: .
: Add support for the "per-vcpu dirty-ring tracking with a bitmap
: and sprinkles on top", courtesy of Gavin Shan.
:
: This branch drags the kvmarm-fixes-6.1-3 tag which was already
: merged in 6.1-rc4 so that the branch is in a working state.
: .
KVM: Push dirty information unconditionally to backup bitmap
KVM: selftests: Automate choosing dirty ring size in dirty_log_test
KVM: selftests: Clear dirty ring states between two modes in dirty_log_test
KVM: selftests: Use host page size to map ring buffer in dirty_log_test
KVM: arm64: Enable ring-based dirty memory tracking
KVM: Support dirty ring in conjunction with bitmap
KVM: Move declaration of kvm_cpu_dirty_log_size() to kvm_dirty_ring.h
KVM: x86: Introduce KVM_REQ_DIRTY_RING_SOFT_FULL
Signed-off-by: Marc Zyngier <[email protected]>
|
|
* kvm-arm64/52bit-fixes:
: .
: 52bit PA fixes, courtesy of Ryan Roberts. From the cover letter:
:
: "I've been adding support for FEAT_LPA2 to KVM and as part of that work have been
: testing various (84) configurations of HW, host and guest kernels on FVP. This
: has thrown up a couple of pre-existing bugs, for which the fixes are provided."
: .
KVM: arm64: Fix benign bug with incorrect use of VA_BITS
KVM: arm64: Fix PAR_TO_HPFAR() to work independently of PA_BITS.
KVM: arm64: Fix kvm init failure when mode!=vhe and VA_BITS=52.
Signed-off-by: Marc Zyngier <[email protected]>
|
|
get_user_mapping_size() uses kvm's pgtable library to walk a user space
page table created by the kernel, and in doing so, passes metadata
that the library needs, including ia_bits, which defines the size of the
input address.
For the case where the kernel is compiled for 52 VA bits but runs on HW
that does not support LVA, it will fall back to 48 VA bits at runtime.
Therefore we must use vabits_actual rather than VA_BITS to get the true
address size.
This is benign in the current code base because the pgtable library only
uses it for error checking.
Fixes: 6011cf68c885 ("KVM: arm64: Walk userspace page tables to compute the THP mapping size")
Signed-off-by: Ryan Roberts <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
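A stand-alone sketch of the shape of the fix, with simplified names: the walker's input-address size should come from the value discovered at boot (vabits_actual in the kernel) rather than the compile-time VA_BITS, since a 52-bit VA kernel falls back to 48 bits on hardware without LVA. Nothing here is KVM's actual pgtable code.

#include <stdio.h>

#define VA_BITS 52                       /* compile-time kernel configuration */

static unsigned int vabits_actual = 48;  /* what boot discovered on non-LVA HW */

struct walk_cfg {
        unsigned int ia_bits;            /* size of the input address, in bits */
};

int main(void)
{
        struct walk_cfg before = { .ia_bits = VA_BITS };        /* wrong: compile-time */
        struct walk_cfg after  = { .ia_bits = vabits_actual };  /* right: runtime */

        printf("before: %u bits, after: %u bits\n", before.ia_bits, after.ia_bits);
        return 0;
}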
|
|
We use uprobes on aarch64_be, where we found that the tracee task exits
due to SIGILL when the uprobe trace is enabled.
The replacement instruction written by uprobes is incorrect on big-endian
arm64. Since in Armv8-A instruction fetches are always treated as
little-endian, we should treat UPROBE_SWBP_INSN as little-endian too.
The test case is as follows.
bash-4.4# ./mqueue_test_aarchbe 1 1 2 1 10 > /dev/null &
bash-4.4# cd /sys/kernel/debug/tracing/
bash-4.4# echo 'p:test /mqueue_test_aarchbe:0xc30 %x0 %x1' > uprobe_events
bash-4.4# echo 1 > events/uprobes/enable
bash-4.4#
bash-4.4# ps
PID TTY TIME CMD
140 ? 00:00:01 bash
237 ? 00:00:00 ps
[1]+ Illegal instruction ./mqueue_test_aarchbe 1 1 2 1 100 > /dev/null
which we debugged using gdb as follows:
bash-4.4# gdb attach 155
(gdb) disassemble send
Dump of assembler code for function send:
0x0000000000400c30 <+0>: .inst 0xa00020d4 ; undefined
0x0000000000400c34 <+4>: mov x29, sp
0x0000000000400c38 <+8>: str w0, [sp, #28]
0x0000000000400c3c <+12>: strb w1, [sp, #27]
0x0000000000400c40 <+16>: str xzr, [sp, #40]
0x0000000000400c44 <+20>: str xzr, [sp, #48]
0x0000000000400c48 <+24>: add x0, sp, #0x1b
0x0000000000400c4c <+28>: mov w3, #0x0 // #0
0x0000000000400c50 <+32>: mov x2, #0x1 // #1
0x0000000000400c54 <+36>: mov x1, x0
0x0000000000400c58 <+40>: ldr w0, [sp, #28]
0x0000000000400c5c <+44>: bl 0x405e10 <mq_send>
0x0000000000400c60 <+48>: str w0, [sp, #60]
0x0000000000400c64 <+52>: ldr w0, [sp, #60]
0x0000000000400c68 <+56>: ldp x29, x30, [sp], #64
0x0000000000400c6c <+60>: ret
End of assembler dump.
(gdb) info b
No breakpoints or watchpoints.
(gdb) c
Continuing.
Program received signal SIGILL, Illegal instruction.
0x0000000000400c30 in send ()
(gdb) x/10x 0x400c30
0x400c30 <send>: 0xd42000a0 0xfd030091 0xe01f00b9 0xe16f0039
0x400c40 <send+16>: 0xff1700f9 0xff1b00f9 0xe06f0091 0x03008052
0x400c50 <send+32>: 0x220080d2 0xe10300aa
(gdb) disassemble 0x400c30
Dump of assembler code for function send:
=> 0x0000000000400c30 <+0>: .inst 0xa00020d4 ; undefined
0x0000000000400c34 <+4>: mov x29, sp
0x0000000000400c38 <+8>: str w0, [sp, #28]
0x0000000000400c3c <+12>: strb w1, [sp, #27]
0x0000000000400c40 <+16>: str xzr, [sp, #40]
Signed-off-by: junhua huang <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>
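A small stand-alone sketch of the underlying rule: A64 instruction fetches are always little-endian, so the 4-byte BRK encoding that uprobes patches into the text must be stored in little-endian byte order even on a big-endian kernel. The opcode value matches the BRK visible in the x/10x dump above (0xd42000a0); the helper name is made up for illustration.

#include <stdint.h>
#include <stdio.h>

#define UPROBE_BRK_OPCODE 0xd42000a0u    /* BRK as seen in the dump above */

/* Store a 32-bit instruction in little-endian byte order, regardless of
 * the host's data endianness. */
static void store_insn_le(uint8_t *dst, uint32_t insn)
{
        dst[0] = insn & 0xff;
        dst[1] = (insn >> 8) & 0xff;
        dst[2] = (insn >> 16) & 0xff;
        dst[3] = (insn >> 24) & 0xff;
}

int main(void)
{
        uint8_t text[4];

        store_insn_le(text, UPROBE_BRK_OPCODE);

        /* Expected bytes in memory: a0 20 00 d4, on either endianness.
         * Storing the raw 32-bit value on a big-endian host would instead
         * leave d4 20 00 a0, which the CPU fetches as the undefined
         * 0xa00020d4 shown in the disassembly above. */
        for (int i = 0; i < 4; i++)
                printf("%02x ", text[i]);
        printf("\n");
        return 0;
}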
|
|
apply_alternatives_vdso(), __apply_alternatives_multi_stop() and
kernel_alternatives are not needed after booting, so mark the two
functions as __init and the variable as __initconst.
Signed-off-by: Jisheng Zhang <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>
|
|
Fix the bogus masking when computing the period of a 64bit counter
with 32bit overflow. It really should be treated like a 32bit counter
for the purpose of the period.
Reported-by: Ricardo Koller <[email protected]>
Signed-off-by: Marc Zyngier <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
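A stand-alone sketch of the corrected arithmetic, assuming nothing about the kernel helper's name: for a 64-bit counter programmed to overflow at 32 bits, the remaining period is the two's complement of the counter masked to the low 32 bits, exactly as for a plain 32-bit counter.

#include <stdint.h>
#include <stdio.h>

/* Events left until the low 32 bits of @counter wrap. */
static uint64_t period_32bit_overflow(uint64_t counter)
{
        return (0ULL - counter) & 0xffffffffULL;
}

int main(void)
{
        /* Counter at 0xfffffff0: 16 events remain until the 32-bit overflow. */
        printf("%llu\n",
               (unsigned long long)period_32bit_overflow(0xfffffff0ULL));
        return 0;
}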
|
|
soc/dt
Apple SoC DT updates for 6.2 (v2).
This includes:
* L1/L2 cache topology for t600x
* CPUfreq nodes for t8103/t600x
* DT binding for CPUfreq
* Associated MAINTAINERS update
The CPUfreq driver was already merged for 6.2 via its tree.
* tag 'asahi-soc-dt-6.2-v2' of https://github.com/AsahiLinux/linux:
arm64: dts: apple: Add CPU topology & cpufreq nodes for t600x
arm64: dts: apple: Add CPU topology & cpufreq nodes for t8103
dt-bindings: cpufreq: apple,soc-cpufreq: Add binding for Apple SoC cpufreq
MAINTAINERS: Add entries for Apple SoC cpufreq driver
arm64: dts: apple: Add t600x L1/L2 cache properties and nodes
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Arnd Bergmann <[email protected]>
|
|
Add the missing CPU topology/capacity information and the cpufreq nodes,
so we can have CPU frequency scaling and the scheduler has the
information it needs to make the correct decisions.
As with t8103, boost states are commented out pending PSCI/etc support
for deep sleep states.
Reviewed-by: Sven Peter <[email protected]>
Signed-off-by: Hector Martin <[email protected]>
|
|
There are three UFS reference clocks on SC8280XP which are used as
follows:
- The GCC_UFS_REF_CLKREF_CLK clock is fed to any UFS device connected
to either controller.
- The GCC_UFS_1_CARD_CLKREF_CLK and GCC_UFS_CARD_CLKREF_CLK clocks
provide reference clocks to the two PHYs.
Note that this depends on first updating the clock driver to reflect
that all three clocks are sourced from CXO. Specifically, the UFS
controller driver expects the device reference clock to have a valid
frequency:
ufshcd-qcom 1d84000.ufs: invalid ref_clk setting = 0
Fixes: 152d1faf1e2f ("arm64: dts: qcom: add SC8280XP platform")
Fixes: 8d6b458ce6e9 ("arm64: dts: qcom: sc8280xp: fix ufs_card_phy ref clock")
Fixes: f3aa975e230e ("arm64: dts: qcom: sc8280xp: correct ref clock for ufs_mem_phy")
Link: https://lore.kernel.org/lkml/[email protected]/
Cc: [email protected] # 5.20
Signed-off-by: Johan Hovold <[email protected]>
Reviewed-by: Brian Masney <[email protected]>
Reviewed-by: Konrad Dybcio <[email protected]>
Signed-off-by: Bjorn Andersson <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
The devices on the SC8280XP PCIe buses are cache coherent and must be
marked as such to avoid data corruption.
A coherent device can, for example, end up snooping stale data from the
caches instead of using data written by the CPU through the
non-cacheable mapping which is used for consistent DMA buffers for
non-coherent devices.
Note that this is much more likely to happen since commit c44094eee32f
("arm64: dma: Drop cache invalidation from arch_dma_prep_coherent()")
that was added in 6.1 and which removed the cache invalidation when
setting up the non-cacheable mapping.
Marking the PCIe devices as coherent specifically fixes the intermittent
NVMe probe failures observed on the Thinkpad X13s, which was due to
corruption of the submission and completion queues. This was typically
observed as corruption of the admin submission queue (with well-formed
completion):
could not locate request for tag 0x0
nvme nvme0: invalid id 0 completed on queue 0
or corruption of the admin or I/O completion queues (malformed
completion):
could not locate request for tag 0x45f
nvme nvme0: invalid id 25695 completed on queue 25965
presumably as these queues are small enough to not be allocated using
CMA, which in turn makes them more likely to be cached (e.g. due to
accesses to nearby pages through the cacheable linear map). Increasing
the buffer sizes to two pages to force CMA allocation also appears to
make the problem go away.
Fixes: 813e83157001 ("arm64: dts: qcom: sc8280xp/sa8540p: add PCIe2-4 nodes")
Signed-off-by: Johan Hovold <[email protected]>
Reviewed-by: Manivannan Sadhasivam <[email protected]>
Reviewed-by: Konrad Dybcio <[email protected]>
Signed-off-by: Bjorn Andersson <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
Mergeback arm64-fixes-for-6.1 to avoid merge conflicts.
|
|
The helper crypto_tfm_ctx is only used by the Crypto API algorithm
code and should really be in algapi.h. However, for historical
reasons many files relied on it to be in crypto.h. This patch
changes those files to use algapi.h instead in preparation for a
move.
Signed-off-by: Herbert Xu <[email protected]>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/efi/efi
Pull EFI fix from Ard Biesheuvel:
"A single revert for some code that I added during this cycle. The code
is not wrong, but it should be a bit more careful about how to handle
the shadow call stack pointer, so it is better to revert it for now
and bring it back later in improved form.
Summary:
- Revert runtime service sync exception recovery on arm64"
* tag 'efi-fixes-for-v6.1-4' of git://git.kernel.org/pub/scm/linux/kernel/git/efi/efi:
arm64: efi: Revert "Recover from synchronous exceptions ..."
|
|
With the new-fangled generation of asm/sysreg-defs.h, some definitions
have ended up being duplicated between the two files.
Remove these duplicate definitions, and consolidate the naming for
GMID_EL1_BS_WIDTH.
Signed-off-by: Will Deacon <[email protected]>
|
|
Add the missing CPU topology/capacity information and the cpufreq nodes,
so we can have CPU frequency scaling and the scheduler has the
information it needs to make the correct decisions.
Boost states are commented out, as they are not yet available (that
requires CPU deep sleep support, to be eventually done via PSCI).
The driver supports them fine; the hardware will just refuse to ever
go into them at this time, so don't expose them to users until that's
done.
Acked-by: Marc Zyngier <[email protected]>
Acked-by: Viresh Kumar <[email protected]>
Signed-off-by: Hector Martin <[email protected]>
|
|
Convert ID_DFR1_EL1 to be automatically generated as per DDI0487I.a,
no functional changes.
Reviewed-by: Mark Brown <[email protected]>
Signed-off-by: James Morse <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>
|
|
Convert ID_DFR0_EL1 to be automatically generated as per DDI0487I.a,
no functional changes.
Signed-off-by: James Morse <[email protected]>
Reviewed-by: Mark Brown <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>
|
|
Convert ID_AFR0_EL1 to be automatically generated as per DDI0487I.a,
no functional changes.
Reviewed-by: Mark Brown <[email protected]>
Signed-off-by: James Morse <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>
|
|
Convert ID_MMFR5_EL1 to be automatically generated as per DDI0487I.a,
no functional changes.
Reviewed-by: Mark Brown <[email protected]>
Signed-off-by: James Morse <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>
|
|
Convert MVFR2_EL1 to be automatically generated as per DDI0487I.a,
no functional changes.
Reviewed-by: Mark Brown <[email protected]>
Signed-off-by: James Morse <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>
|
|
Convert MVFR1_EL1 to be automatically generated as per DDI0487I.a,
no functional changes.
Reviewed-by: Mark Brown <[email protected]>
Signed-off-by: James Morse <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>
|
|
Convert MVFR0_EL1 to be automatically generated as per DDI0487I.a,
no functional changes.
Reviewed-by: Mark Brown <[email protected]>
Signed-off-by: James Morse <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>
|
|
Convert ID_PFR2_EL1 to be automatically generated as per DDI0487I.a,
no functional changes.
Reviewed-by: Mark Brown <[email protected]>
Signed-off-by: James Morse <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>
|
|
Convert ID_PFR1_EL1 to be automatically generated as per DDI0487I.a,
no functional changes.
Reviewed-by: Mark Brown <[email protected]>
Signed-off-by: James Morse <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>
|
|
Convert ID_PFR0_EL1 to be automatically generated as per DDI0487I.a,
no functional changes.
Signed-off-by: James Morse <[email protected]>
Reviewed-by: Mark Brown <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>
|
|
Convert ID_ISAR6_EL1 to be automatically generated as per DDI0487I.a,
no functional changes.
Reviewed-by: Mark Brown <[email protected]>
Signed-off-by: James Morse <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>
|
|
Convert ID_ISAR5_EL1 to be automatically generated as per DDI0487I.a,
no functional changes.
Reviewed-by: Mark Brown <[email protected]>
Signed-off-by: James Morse <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>
|
|
Convert ID_ISAR4_EL1 to be automatically generated as per DDI0487I.a,
no functional changes.
Reviewed-by: Mark Brown <[email protected]>
Signed-off-by: James Morse <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>
|
|
Convert ID_ISAR3_EL1 to be automatically generated as per DDI0487I.a,
no functional changes.
Reviewed-by: Mark Brown <[email protected]>
Signed-off-by: James Morse <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>
|
|
Convert ID_ISAR2_EL1 to be automatically generated as per DDI0487I.a,
no functional changes.
Reviewed-by: Mark Brown <[email protected]>
Signed-off-by: James Morse <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>
|
|
Convert ID_ISAR1_EL1 to be automatically generated as per DDI0487I.a,
no functional changes.
Reviewed-by: Mark Brown <[email protected]>
Signed-off-by: James Morse <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>
|
|
Convert ID_ISAR0_EL1 to be automatically generated as per DDI0487I.a,
no functional changes.
Reviewed-by: Mark Brown <[email protected]>
Signed-off-by: James Morse <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>
|
|
Convert ID_MMFR4_EL1 to be automatically generated as per DDI0487I.a,
no functional changes.
Reviewed-by: Mark Brown <[email protected]>
Signed-off-by: James Morse <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>
|
|
Convert ID_MMFR3_EL1 to be automatically generated as per DDI0487I.a,
no functional changes.
Reviewed-by: Mark Brown <[email protected]>
Signed-off-by: James Morse <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>
|
|
Convert ID_MMFR2_EL1 to be automatically generated as per DDI0487I.a,
no functional changes.
Reviewed-by: Mark Brown <[email protected]>
Signed-off-by: James Morse <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>
|
|
Convert ID_MMFR1_EL1 to be automatically generated as per DDI0487I.a,
no functional changes.
Reviewed-by: Mark Brown <[email protected]>
Signed-off-by: James Morse <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>
|
|
Convert ID_MMFR0_EL1 to be automatically generated as per DDI0487I.a,
no functional changes.
Reviewed-by: Mark Brown <[email protected]>
Signed-off-by: James Morse <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>
|
|
32bit has multiple values for its id registers, as extra properties
were added to the CPUs. Some of these end up having long names, which
exceed the fixed 48 character column that the sysreg awk script generates.
For example, the ID_MMFR1_EL1.L1Hvd field has an encoding whose natural
name would be 'invalidate Iside only'. Using this causes compile errors
as the script generates the following:
#define ID_MMFR1_EL1_L1Hvd_INVALIDATE_ISIDE_ONLYUL(0b0001)
Add a few extra characters.
Reviewed-by: Mark Brown <[email protected]>
Signed-off-by: James Morse <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>
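A quick C illustration of the failure mode. The real generator is the sysreg awk script, and its exact formatting may differ; this only reproduces the fixed-width printf behaviour described above, and the shorter field name is purely illustrative. When the padded "#define NAME" prefix fills the whole 48-character column, no space is left before the value and the macro name fuses with UL, as in the quoted error.

#include <stdio.h>

int main(void)
{
        /* Prefix shorter than the column: padding leaves a separating space. */
        printf("%-48sUL(0b0000)\n", "#define ID_MMFR1_EL1_L1Hvd_SHORT_NAME");

        /* Prefix exactly fills the 48-character column: no padding remains,
         * so the name runs straight into UL(...). */
        printf("%-48sUL(0b0001)\n",
               "#define ID_MMFR1_EL1_L1Hvd_INVALIDATE_ISIDE_ONLY");
        return 0;
}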
|
|
To convert the 32bit id registers to use the sysreg generation, they
must first have a regular pattern, to match the symbols the script
generates.
Ensure symbols for the MVFR2_EL1 register use lower-case for feature
names where the arm-arm does the same.
No functional change.
Signed-off-by: James Morse <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>
|
|
To convert the 32bit id registers to use the sysreg generation, they
must first have a regular pattern, to match the symbols the script
generates.
Ensure symbols for the MVFR1_EL1 register use lower-case for feature
names where the arm-arm does the same.
No functional change.
Signed-off-by: James Morse <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>
|
|
To convert the 32bit id registers to use the sysreg generation, they
must first have a regular pattern, to match the symbols the script
generates.
Ensure symbols for the MVFR0_EL1 register use lower-case for feature
names where the arm-arm does the same.
No functional change.
Signed-off-by: James Morse <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>
|
|
To convert the 32bit id registers to use the sysreg generation, they
must first have a regular pattern, to match the symbols the script
generates.
Ensure symbols for the ID_DFR1_EL1 register have an _EL1 suffix.
No functional change.
Signed-off-by: James Morse <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>
|
|
To convert the 32bit id registers to use the sysreg generation, they
must first have a regular pattern, to match the symbols the script
generates.
Ensure symbols for the ID_DFR0_EL1 register have an _EL1 suffix,
and use lower-case for feature names where the arm-arm does the same.
The arm-arm has feature names for some of the ID_DFR0_EL1.PerMon encodings.
Use these feature names in preference to the '8_4' indication of the
architecture version they were introduced in.
No functional change.
Signed-off-by: James Morse <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>
|
|
To convert the 32bit id registers to use the sysreg generation, they
must first have a regular pattern, to match the symbols the script
generates.
Ensure symbols for the ID_PFR2_EL1 register have an _EL1 suffix.
No functional change.
Signed-off-by: James Morse <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>
|