Age | Commit message | Author | Files | Lines |
|
Add missing clearing of BLTZALL and BGEZALL emulation counters in
function mipsr2_stats_clear_show().
Previously, it was not possible to reset the BLTZALL and BGEZALL
emulation counters: their values remained the same even after an
explicit request via debugfs. All other related counters are already
cleared properly.
This change affects debugfs operation only; core R2 emulation
functionality is not affected.
Signed-off-by: Aleksandar Markovic <[email protected]>
Reviewed-by: Paul Burton <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Patchwork: https://patchwork.linux-mips.org/patch/15517/
Signed-off-by: Ralf Baechle <[email protected]>
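A minimal sketch of the change described above; the per-CPU struct and
field names here are assumptions modelled on the r2-to-r6 emulator, not
a copy of the actual diff:

    static int mipsr2_stats_clear_show(struct seq_file *s, void *unused)
    {
            mipsr2_stats_show(s, unused);

            /* ... the other emulation counters are already zeroed ... */

            /* The previously missing clears: */
            __this_cpu_write(mipsr2bremustats.bltzall, 0);
            __this_cpu_write(mipsr2bdemustats.bltzall, 0);
            __this_cpu_write(mipsr2bremustats.bgezall, 0);
            __this_cpu_write(mipsr2bdemustats.bgezall, 0);

            return 0;
    }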
|
|
Fix inaccurate identification of the BLEZL and BGTZL instructions in
the R2 emulation code by making sure all necessary encoding
requirements are met.
Previously, certain R6 instructions could be misidentified as BLEZL or
BGTZL. The R2 emulation routine didn't take into account that both
BLEZL and BGTZL require their rt field (bits 20 to 16 of the
instruction encoding) to be 0, and that, if the value in that field is
not 0, the encoding may instead represent a legitimate MIPS R6
instruction.
This means a problem could occur after emulation optimization, when
the emulation routine tried to pipeline emulation, picked up the next
candidate, and subsequently misrecognized an R6 instruction as BLEZL
or BGTZL.
For the single-pass strategy the problem does not occur, because the
CPU doesn't trap on compact branches that share opcode space with
BLEZL/BGTZL (but have rt != 0, of course).
Signed-off-by: Leonid Yegoshin <[email protected]>
Signed-off-by: Miodrag Dinic <[email protected]>
Signed-off-by: Aleksandar Markovic <[email protected]>
Reported-by: Douglas Leung <[email protected]>
Reviewed-by: Paul Burton <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Patchwork: https://patchwork.linux-mips.org/patch/15456/
Signed-off-by: Ralf Baechle <[email protected]>
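To make the encoding rule concrete, here is a small standalone check
(plain C, not the emulator code) showing how the rt field separates the
R2 branch-likely forms from the R6 compact branches that share the same
opcodes:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* BLEZL is opcode 0x16 and BGTZL is opcode 0x17; both require
     * rt == 0.  With rt != 0 the same opcodes encode R6 compact
     * branches and must not be emulated as BLEZL/BGTZL. */
    static bool is_blezl_or_bgtzl(uint32_t insn)
    {
            uint32_t opcode = insn >> 26;
            uint32_t rt = (insn >> 16) & 0x1f;

            return (opcode == 0x16 || opcode == 0x17) && rt == 0;
    }

    int main(void)
    {
            uint32_t blezl = (0x16u << 26) | (4u << 21) | 8; /* rt == 0 */
            uint32_t r6_cb = (0x16u << 26) | (4u << 16) | 8; /* rt != 0 */

            printf("%d %d\n", is_blezl_or_bgtzl(blezl),
                   is_blezl_or_bgtzl(r6_cb));
            return 0;
    }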
|
|
Some newer Loongson-3 CPUs have 64-byte cache lines, so select
MIPS_L1_CACHE_SHIFT_6.
Signed-off-by: Huacai Chen <[email protected]>
Cc: John Crispin <[email protected]>
Cc: Steven J. Hill <[email protected]>
Cc: Fuxin Zhang <[email protected]>
Cc: Zhangjin Wu <[email protected]>
Cc: [email protected]
Cc: [email protected]
Patchwork: https://patchwork.linux-mips.org/patch/15755/
Signed-off-by: Ralf Baechle <[email protected]>
|
|
<linux/cache.h> already defines SMP_CACHE_BYTES as L1_CACHE_BYTES.
This change results in a build error in <asm/cpu-info.h> which directly
includes <asm/cache.h>. Fix this by including <linux/cache.h> instead.
Signed-off-by: Ralf Baechle <[email protected]>
|
|
Signed-off-by: Ralf Baechle <[email protected]>
|
|
Signed-off-by: Ralf Baechle <[email protected]>
|
|
Using any value for W= will lead to a ton of warnings which are turned
into fatal errors because MIPS adds -Werror to arch/mips/*.
Signed-off-by: Florian Fainelli <[email protected]>
Cc: [email protected]
Cc: [email protected]
Patchwork: https://patchwork.linux-mips.org/patch/15785/
Signed-off-by: Ralf Baechle <[email protected]>
|
|
Remove unused headers and fix warnings from checkpatch.
Signed-off-by: Steven J. Hill <[email protected]>
Acked-by: David Daney <[email protected]>
Cc: [email protected]
Patchwork: https://patchwork.linux-mips.org/patch/15407/
Signed-off-by: Ralf Baechle <[email protected]>
|
|
Remove all unused bitfields and macros. Convert the remaining
bitfields to use __BITFIELD_FIELD instead of #ifdef.
[[email protected]: Add inclusions of <uapi/asm/bitfield.h> as necessary.]
Signed-off-by: Steven J. Hill <[email protected]>
Acked-by: David Daney <[email protected]>
Cc: [email protected]
Patchwork: https://patchwork.linux-mips.org/patch/15408/
Signed-off-by: Ralf Baechle <[email protected]>
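For illustration, the conversion looks roughly like this (the field
names are made up; __BITFIELD_FIELD from <uapi/asm/bitfield.h> emits
the fields in the order required by the build's endianness):

    /* Before: the layout is written out twice. */
    union example_reg {
            uint64_t u64;
            struct {
    #ifdef __BIG_ENDIAN_BITFIELD
                    uint64_t reserved:48;
                    uint64_t enable:1;
                    uint64_t count:15;
    #else
                    uint64_t count:15;
                    uint64_t enable:1;
                    uint64_t reserved:48;
    #endif
            } s;
    };

    /* After: one declaration, endianness handled by the macro. */
    union example_reg {
            uint64_t u64;
            struct {
                    __BITFIELD_FIELD(uint64_t reserved:48,
                    __BITFIELD_FIELD(uint64_t enable:1,
                    __BITFIELD_FIELD(uint64_t count:15,
                    ;)))
            } s;
    };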
|
|
Move all USB platform code to one place within the file.
Signed-off-by: Steven J. Hill <[email protected]>
Acked-by: David Daney <[email protected]>
Cc: [email protected]
Patchwork: https://patchwork.linux-mips.org/patch/15406/
Signed-off-by: Ralf Baechle <[email protected]>
|
|
Remove all unused bitfields and macros. Convert the remaining
bitfields to use __BITFIELD_FIELD instead of #ifdef.
[[email protected]: Add inclusions of <uapi/asm/bitfield.h> as necessary.]
Signed-off-by: Steven J. Hill <[email protected]>
Acked-by: David Daney <[email protected]>
Cc: [email protected]
Patchwork: https://patchwork.linux-mips.org/patch/15405/
Signed-off-by: Ralf Baechle <[email protected]>
|
|
Remove all unused bitfields and macros. Convert the remaining
bitfields to use __BITFIELD_FIELD instead of #ifdef.
[[email protected]: Add inclusions of <uapi/asm/bitfield.h> as necessary.]
Signed-off-by: Steven J. Hill <[email protected]>
Acked-by: David Daney <[email protected]>
Cc: [email protected]
Patchwork: https://patchwork.linux-mips.org/patch/15403/
Signed-off-by: Ralf Baechle <[email protected]>
|
|
Some users must have 4K pages while needing a 48-bit VA space. The
cleanest way to do this is to go to a 4-level page table for this
case. With 4K pages, each page table level using order-0 pages adds 9
bits to the VA size, so four levels give 9 * 4 + 12 == 48 bits.
For the 4K page size case only, we add support functions for the PUD
level of the page table tree, and the TLB exception handlers get an
extra level of tree walk.
[[email protected]: Forward port to v4.10.]
[[email protected]: Forward port to v4.11.]
Signed-off-by: Alex Belits <[email protected]>
Signed-off-by: David Daney <[email protected]>
Cc: James Hogan <[email protected]>
Cc: Alex Belits <[email protected]>
Cc: [email protected]
Cc: [email protected]
Patchwork: https://patchwork.linux-mips.org/patch/15312/
Signed-off-by: Ralf Baechle <[email protected]>
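The arithmetic above can be checked with a trivial standalone program
(not kernel code): a 4K page gives 12 offset bits, and each order-0
table level holds 512 eight-byte entries, i.e. 9 index bits per level.

    #include <stdio.h>

    int main(void)
    {
            const int page_shift = 12;      /* 4K pages */
            const int bits_per_level = 9;   /* 4K / 8-byte entry = 512 */
            int levels;

            for (levels = 2; levels <= 4; levels++)
                    printf("%d levels -> %2d-bit VA\n",
                           levels, levels * bits_per_level + page_shift);
            return 0;                       /* 4 levels -> 48-bit VA */
    }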
|
|
This config option never really worked, and has bit-rotted to the
point of being completely useless. Remove it completely.
Signed-off-by: David Daney <[email protected]>
Cc: James Hogan <[email protected]>
Cc: Steven J. Hill <[email protected]>
Cc: [email protected]
Cc: [email protected]
Patchwork: https://patchwork.linux-mips.org/patch/15314/
Signed-off-by: Ralf Baechle <[email protected]>
|
|
o Socket data is unsigned, so use unsigned accessor instructions.
o Fix path result pointer generation arithmetic.
o Fix half-word byte swapping code for unsigned semantics.
Signed-off-by: David Daney <[email protected]>
Cc: James Hogan <[email protected]>
Cc: Alexei Starovoitov <[email protected]>
Cc: Steven J. Hill <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Patchwork: https://patchwork.linux-mips.org/patch/15747/
Signed-off-by: Ralf Baechle <[email protected]>
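A standalone illustration (plain C, not JIT output) of why the loads
must be unsigned: sign-extending packet bytes or halfwords corrupts any
value with the top bit set.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            uint8_t byte = 0xff;            /* e.g. a protocol field */
            int32_t as_lb  = (int8_t)byte;  /* what a signed LB yields: -1 */
            int32_t as_lbu = byte;          /* what LBU yields: 255 */

            printf("lb -> %d, lbu -> %d\n", as_lb, as_lbu);
            return 0;
    }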
|
|
If bpf_needs_clear_a() returns true, only actually clear register A if
it is ever used. If it is not used, we don't save and restore it, so
the clearing has the nasty side effect of clobbering caller state.
Also, don't emit stack pointer adjustment instructions if the
adjustment amount is zero.
Signed-off-by: David Daney <[email protected]>
Cc: James Hogan <[email protected]>
Cc: Alexei Starovoitov <[email protected]>
Cc: Steven J. Hill <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Patchwork: https://patchwork.linux-mips.org/patch/15745/
Signed-off-by: Ralf Baechle <[email protected]>
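The two guards amount to roughly the following; this is a sketch in
the naming style of arch/mips/net/bpf_jit.c, so treat the exact
identifiers as assumptions rather than the actual diff:

            /* Clear A in the prologue only if the program uses it. */
            if (bpf_needs_clear_a(&ctx->skf->insns[0]) &&
                (ctx->flags & SEEN_A))
                    emit_jit_reg_move(r_A, r_zero, ctx);

            /* Don't emit "addiu sp, sp, 0" when no stack is needed. */
            if (stack_adjust)
                    emit_addiu(r_sp, r_sp, -stack_adjust, ctx);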
|
|
The SKB vlan_tci and queue_mapping fields are unsigned; don't
sign-extend them in the BPF JIT. In the vlan_tci case, the value gets
masked so the change is not needed for correctness, but do it anyway
for agreement with the types defined in struct sk_buff.
Signed-off-by: David Daney <[email protected]>
Cc: James Hogan <[email protected]>
Cc: Alexei Starovoitov <[email protected]>
Cc: Steven J. Hill <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Patchwork: https://patchwork.linux-mips.org/patch/15746/
Signed-off-by: Ralf Baechle <[email protected]>
|
|
This lets us pass some additional "modprobe test-bpf" tests with JIT
enabled.
Reuse the code for SKF_AD_IFINDEX, but substitute the offset and size
of the "type" field.
Signed-off-by: David Daney <[email protected]>
Cc: James Hogan <[email protected]>
Cc: Alexei Starovoitov <[email protected]>
Cc: Steven J. Hill <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Patchwork: https://patchwork.linux-mips.org/patch/15744/
Signed-off-by: Ralf Baechle <[email protected]>
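In other words, the existing SKF_AD_IFINDEX path is reused with the
offset and size of the device's "type" field; the snippet below is a
sketch in the style of the MIPS cBPF JIT (the SKF_AD_HATYPE case label
and the helper names are assumptions, not the actual diff):

            if (code == (BPF_ANC | SKF_AD_IFINDEX)) {
                    /* 32-bit load of dev->ifindex */
                    off = offsetof(struct net_device, ifindex);
                    emit_load(r_A, r_s0, off, ctx);
            } else {        /* SKF_AD_HATYPE */
                    /* 16-bit unsigned load of dev->type */
                    off = offsetof(struct net_device, type);
                    emit_half_load_unsigned(r_A, r_s0, off, ctx);
            }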
|
|
The follow-on BPF JIT patches use the LHU instruction, so add it.
Signed-off-by: David Daney <[email protected]>
Cc: James Hogan <[email protected]>
Cc: Alexei Starovoitov <[email protected]>
Cc: Steven J. Hill <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Patchwork: https://patchwork.linux-mips.org/patch/15743/
Signed-off-by: Ralf Baechle <[email protected]>
|
|
Add missing macros and methods that are required by
CONFIG_GENERIC_CPU_AUTOPROBE: MAX_CPU_FEATURES, cpu_have_feature(),
cpu_feature().
Also set a default ELF platform, as it is currently not set for most
MIPS platforms, resulting in incorrectly specified modalias values for
cpu autoprobe ("cpu:type:(null):feature:...").
Export the 'elf_hwcap' symbol so that it can be accessed from modules
that use module_cpu_feature_match().
Signed-off-by: Marcin Nowakowski <[email protected]>
Cc: [email protected]
Patchwork: https://patchwork.linux-mips.org/patch/15395/
Signed-off-by: Ralf Baechle <[email protected]>
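The required glue is small; here is a sketch of what <asm/cpufeature.h>
has to provide, modelled on other architectures' versions (the details
are assumptions, and elf_hwcap plus the HWCAP_* constants are taken to
be in scope):

    #define MAX_CPU_FEATURES        (8 * sizeof(elf_hwcap))

    #define cpu_feature(x)          ilog2(HWCAP_ ## x)

    static inline bool cpu_have_feature(unsigned int num)
    {
            return elf_hwcap & (1UL << num);
    }

A module can then match on a feature with module_cpu_feature_match(),
which is why elf_hwcap needs to be exported.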
|
|
Introduce a new getsockopt operation to retrieve the socket cookie
for a specific socket based on the socket fd. It returns a unique
non-decreasing cookie for each socket.
Tested: https://android-review.googlesource.com/#/c/358163/
Acked-by: Willem de Bruijn <[email protected]>
Signed-off-by: Chenbo Feng <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
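Userspace usage is a single getsockopt() call. Assuming the option
added here is the socket-level SO_COOKIE option (57 in the asm-generic
numbering; treat the constant as an assumption), a caller looks roughly
like this:

    #include <stdint.h>
    #include <stdio.h>
    #include <sys/socket.h>

    #ifndef SO_COOKIE
    #define SO_COOKIE 57            /* assumed value */
    #endif

    int main(void)
    {
            int fd = socket(AF_INET, SOCK_DGRAM, 0);
            uint64_t cookie = 0;
            socklen_t len = sizeof(cookie);

            if (fd >= 0 &&
                getsockopt(fd, SOL_SOCKET, SO_COOKIE, &cookie, &len) == 0)
                    printf("socket cookie: %llu\n",
                           (unsigned long long)cookie);
            return 0;
    }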
|
|
The KVM_COALESCED_MMIO_PAGE_OFFSET value has never changed; we might as well make it part of the ABI instead
of using the return value of KVM_CHECK_EXTENSION(KVM_CAP_COALESCED_MMIO).
Because PPC does not always make MMIO available, the code has to be made
dependent on CONFIG_KVM_MMIO rather than KVM_COALESCED_MMIO_PAGE_OFFSET.
Signed-off-by: Paolo Bonzini <[email protected]>
Signed-off-by: Radim Krčmář <[email protected]>
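With the offset fixed in the ABI, a VMM can locate the coalesced-MMIO
ring inside the vcpu mmap region without asking KVM_CHECK_EXTENSION for
it (the capability check is still needed to know the ring exists). A
sketch:

    #include <linux/kvm.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    static struct kvm_coalesced_mmio_ring *
    coalesced_ring(int kvm_fd, int vcpu_fd)
    {
            long sz = ioctl(kvm_fd, KVM_GET_VCPU_MMAP_SIZE, 0);
            void *run;

            if (sz < 0)
                    return NULL;
            run = mmap(NULL, sz, PROT_READ | PROT_WRITE, MAP_SHARED,
                       vcpu_fd, 0);
            if (run == MAP_FAILED)
                    return NULL;
            return (struct kvm_coalesced_mmio_ring *)
                    ((char *)run +
                     KVM_COALESCED_MMIO_PAGE_OFFSET * getpagesize());
    }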
|
|
Remove code from architecture files that can be moved to virt/kvm, since there
is already common code for coalesced MMIO.
Signed-off-by: Paolo Bonzini <[email protected]>
Reviewed-by: David Hildenbrand <[email protected]>
[Removed a pointless 'break' after 'return'.]
Signed-off-by: Radim Krčmář <[email protected]>
|
|
Pull MIPS fixes from Ralf Baechle:
"Lantiq:
- Fix adding xbar resources causing a panic
Loongson3:
- Some Loongson 3A don't identify themselves as having an FTLB so
hardwire that knowledge into CPU probing.
- Handle Loongson 3 TLB peculiarities in the fast path of the RDHWR
emulation.
- Fix invalid FTLB entries with huge page on VTLB+FTLB platforms
- Add missing calculation of S-cache and V-cache cache-way size
Ralink:
- Fix typos in rt3883 pinctrl data
Generic:
- Force o32 fp64 support on 32bit MIPS64r6 kernels
- Yet another build fix after the linux/sched.h changes
- Wire up statx system call
- Fix stack unwinding after introduction of IRQ stack
- Fix spinlock code to build even for microMIPS with recent binutils
SMP-CPS:
- Fix retrieval of VPE mask on big endian CPUs"
* 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus:
MIPS: IRQ Stack: Unwind IRQ stack onto task stack
MIPS: c-r4k: Fix Loongson-3's vcache/scache waysize calculation
MIPS: Flush wrong invalid FTLB entry for huge page
MIPS: Check TLB before handle_ri_rdhwr() for Loongson-3
MIPS: Add MIPS_CPU_FTLB for Loongson-3A R2
MIPS: Lantiq: fix missing xbar kernel panic
MIPS: smp-cps: Fix retrieval of VPE mask on big endian CPUs
MIPS: Wire up statx system call
MIPS: Include asm/ptrace.h now linux/sched.h doesn't
MIPS: ralink: Fix typos in rt3883 pinctrl
MIPS: End spinlocks with .insn
MIPS: Force o32 fp64 support on 32bit MIPS64r6 kernels
|
|
Mostly simple cases of overlapping changes (adding code nearby,
a function whose name changes, for example).
Signed-off-by: David S. Miller <[email protected]>
|
|
Signed-off-by: Al Viro <[email protected]>
|
|
Signed-off-by: Al Viro <[email protected]>
|
|
Signed-off-by: Al Viro <[email protected]>
|
|
Signed-off-by: Al Viro <[email protected]>
|
|
Signed-off-by: Al Viro <[email protected]>
|
|
For one thing, the last argument is always __access_mask and has been
since 2.4.0-test3pre8; for another, it can bloody well be a static
inline: at -O2 or -Os, __builtin_constant_p() propagates through static
inline calls.
Signed-off-by: Al Viro <[email protected]>
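A quick standalone demonstration of that last point (nothing to do
with the MIPS uaccess code itself): with -O2 or -Os, GCC still sees a
constant argument through the static inline.

    #include <stdio.h>

    static inline int seen_as_constant(int x)
    {
            /* Folds to 1 for constant arguments once the call is
             * inlined, which GCC does when optimizing. */
            return __builtin_constant_p(x);
    }

    int main(void)
    {
            printf("%d\n", seen_as_constant(42)); /* 1 at -O2/-Os */
            return 0;
    }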
|
|
backmerge of a build fix from mainline
|
|
The kbuild test robot reported this build failure on a number
of architectures:
> make.cross ARCH=arm
> lib/lib.a(bug.o): In function `find_bug':
> >> lib/bug.c:135: undefined reference to `__start___bug_table'
> >> lib/bug.c:135: undefined reference to `__stop___bug_table'
Caused by:
  19d436268dde ("debug: Add _ONCE() logic to report_bug()")
which moved the BUG_TABLE from RO_DATA_SECTION() to RW_DATA_SECTION(),
but a number of architectures don't use RW_DATA_SECTION(), so they
ended up with no __bug_table[] ...
Ideally all those would use RW_DATA_SECTION() in their linker scripts,
but that's for another day.
Signed-off-by: Peter Zijlstra <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: kbuild test robot <[email protected]>
Cc: [email protected]
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux into uaccess.parisc
|
|
Merge PTRACE_SETREGSET leakage fixes from Dave Martin:
"This series is the collection of fixes I proposed on this topic, that
have not yet appeared upstream or in the stable branches.
The issue can leak kernel stack, but doesn't appear to allow userspace
to attack the kernel directly. The affected architectures are c6x,
h8300, metag, mips and sparc.
[ Mark Salter points out that c6x has no MMU or other mechanism to
prevent userspace access to kernel code or data on c6x, but it
doesn't hurt to clean that case up too. ]
The bugs arise from use of user_regset_copyin(). Users of
user_regset_copyin() can work in one of two ways:
1) Copy directly to thread_struct or equivalent. (This seems to be
the design assumption of the regset API, and is the most common
approach.)
2) Copy to a local variable and then transfer to thread_struct. (A
significant minority of cases.)
Buggy code typically involves approach 2"
* emailed patches from Dave Martin <[email protected]>:
sparc/ptrace: Preserve previous registers for short regset write
mips/ptrace: Preserve previous registers for short regset write
metag/ptrace: Reject partial NT_METAG_RPIPE writes
metag/ptrace: Provide default TXSTATUS for short NT_PRSTATUS
metag/ptrace: Preserve previous registers for short regset write
h8300/ptrace: Fix incorrect register transfer count
c6x/ptrace: Remove useless PTRACE_SETREGSET implementation
|
|
Ensure that if userspace supplies insufficient data to PTRACE_SETREGSET
to fill all the registers, the thread's old registers are preserved.
Signed-off-by: Dave Martin <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
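The shape of the fix is the same on each affected architecture:
pre-seed the local copy with the thread's current registers before
user_regset_copyin(), so a short write leaves the remaining registers
intact. A generic sketch, not any particular architecture's code:

    static int gpr_set(struct task_struct *target,
                       const struct user_regset *regset,
                       unsigned int pos, unsigned int count,
                       const void *kbuf, const void __user *ubuf)
    {
            /* The fix: start from the old values, not stack garbage. */
            struct pt_regs newregs = *task_pt_regs(target);
            int ret;

            ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf,
                                     &newregs, 0, sizeof(newregs));
            if (!ret)
                    *task_pt_regs(target) = newregs;
            return ret;
    }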
|
|
Signed-off-by: Al Viro <[email protected]>
|
|
Properly implement emulation of the TLBR instruction for Trap & Emulate.
This instruction reads the TLB entry pointed at by the CP0_Index
register into the other TLB registers, which may have the side effect of
changing the current ASID. Therefore abstract the CP0_EntryHi and ASID
changing code into a common function in the process.
A comment indicated that Linux doesn't use TLBR, which is true during
normal use; however, dumping of the TLB does use it (for example with
the relatively recent 'x' magic sysrq key), as does a wired TLB
entries test case in my KVM tests.
Signed-off-by: James Hogan <[email protected]>
Acked-by: Ralf Baechle <[email protected]>
Cc: Paolo Bonzini <[email protected]>
Cc: "Radim Krčmář" <[email protected]>
Cc: [email protected]
Cc: [email protected]
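Architecturally, TLBR does roughly the following (a plain C model for
illustration, not the KVM emulation code), which is why a shared helper
for the CP0_EntryHi/ASID update is worth having:

    #include <stdint.h>

    #define TLB_ENTRIES 64          /* illustrative size */

    struct tlb_entry {
            uint64_t entryhi, entrylo0, entrylo1;
            uint32_t pagemask;
    };

    struct cp0_state {
            uint32_t index, pagemask;
            uint64_t entryhi, entrylo0, entrylo1;
    };

    static void emulate_tlbr(struct cp0_state *cp0,
                             const struct tlb_entry *tlb)
    {
            const struct tlb_entry *e = &tlb[cp0->index % TLB_ENTRIES];

            cp0->pagemask = e->pagemask;
            cp0->entryhi  = e->entryhi;     /* may change the live ASID */
            cp0->entrylo0 = e->entrylo0;
            cp0->entrylo1 = e->entrylo1;
    }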
|
|
Octeon III has VZ ASE support, so allow KVM to be enabled on Octeon
CPUs as it should now be functional.
Signed-off-by: James Hogan <[email protected]>
Cc: Ralf Baechle <[email protected]>
Cc: David Daney <[email protected]>
Cc: Andreas Herrmann <[email protected]>
Cc: Paolo Bonzini <[email protected]>
Cc: "Radim Krčmář" <[email protected]>
Cc: [email protected]
Cc: [email protected]
|
|
Octeon III implements a read-only guest CP0_PRid register, so add cases
to the KVM register access API for Octeon to ensure the correct value is
read and writes are ignored.
Signed-off-by: James Hogan <[email protected]>
Cc: Paolo Bonzini <[email protected]>
Cc: "Radim Krčmář" <[email protected]>
Cc: Ralf Baechle <[email protected]>
Cc: David Daney <[email protected]>
Cc: Andreas Herrmann <[email protected]>
Cc: [email protected]
Cc: [email protected]
|
|
Octeon III doesn't implement the optional GuestCtl0.CG bit to allow
guest mode to execute virtual address based CACHE instructions, so
implement emulation of a few important ones specifically for Octeon III
in response to a GPSI exception.
Currently the main reason to perform these operations is for icache
synchronisation, so they are implemented as a simple icache flush with
local_flush_icache_range().
Signed-off-by: James Hogan <[email protected]>
Cc: Paolo Bonzini <[email protected]>
Cc: "Radim Krčmář" <[email protected]>
Cc: Ralf Baechle <[email protected]>
Cc: David Daney <[email protected]>
Cc: Andreas Herrmann <[email protected]>
Cc: [email protected]
Cc: [email protected]
|
|
Set up hardware virtualisation on Octeon III cores, configuring guest
interrupt routing and carving out half of the root TLB for guest use,
and restoring it afterwards.
We need to be careful to inhibit TLB shutdown machine check exceptions
while invalidating guest TLB entries: since TLB invalidation is not
available, guest entries must be invalidated by setting them to unique
unmapped addresses, which could conflict with mappings set by the
guest or root if the TLB was recently repartitioned.
Signed-off-by: James Hogan <[email protected]>
Cc: Paolo Bonzini <[email protected]>
Cc: "Radim Krčmář" <[email protected]>
Cc: Ralf Baechle <[email protected]>
Cc: David Daney <[email protected]>
Cc: Andreas Herrmann <[email protected]>
Cc: [email protected]
Cc: [email protected]
|
|
Octeon CPUs don't report the correct dcache line size in CP0_Config1.DL,
so encode the correct value for the guest CP0_Config1.DL based on
cpu_dcache_line_size().
Signed-off-by: James Hogan <[email protected]>
Cc: Paolo Bonzini <[email protected]>
Cc: "Radim Krčmář" <[email protected]>
Cc: Ralf Baechle <[email protected]>
Cc: David Daney <[email protected]>
Cc: Andreas Herrmann <[email protected]>
Cc: [email protected]
Cc: [email protected]
|
|
When TLB entries are invalidated in the presence of a virtually tagged
icache, such as that found on Octeon CPUs, flush the icache so that we
don't get a reserved instruction exception even though the TLB mapping
is removed.
Signed-off-by: James Hogan <[email protected]>
Cc: Paolo Bonzini <[email protected]>
Cc: "Radim Krčmář" <[email protected]>
Cc: Ralf Baechle <[email protected]>
Cc: David Daney <[email protected]>
Cc: Andreas Herrmann <[email protected]>
Cc: [email protected]
Cc: [email protected]
|
|
Cache management is implemented separately for Cavium Octeon CPUs, so
r4k_blast_[id]cache aren't available. Instead for Octeon perform a local
icache flush using local_flush_icache_range(), and for other platforms
which don't use c-r4k.c use __flush_cache_all() / flush_icache_all().
Signed-off-by: James Hogan <[email protected]>
Cc: Paolo Bonzini <[email protected]>
Cc: "Radim Krčmář" <[email protected]>
Cc: Ralf Baechle <[email protected]>
Cc: David Daney <[email protected]>
Cc: Andreas Herrmann <[email protected]>
Cc: [email protected]
Cc: [email protected]
|
|
Add accessors for some VZ related Cavium Octeon III specific COP0
registers, along with field definitions. These will mostly be used by
KVM to set up interrupt routing and partition the TLB between root and
guest.
Signed-off-by: James Hogan <[email protected]>
Acked-by: Ralf Baechle <[email protected]>
Cc: David Daney <[email protected]>
Cc: Andreas Herrmann <[email protected]>
Cc: Paolo Bonzini <[email protected]>
Cc: "Radim Krčmář" <[email protected]>
Cc: [email protected]
Cc: [email protected]
|
|
Create a trace event for guest mode changes, and enable VZ's
GuestCtl0.MC bit after the trace event is enabled to trap all guest mode
changes.
The MC bit causes Guest Hardware Field Change (GHFC) exceptions whenever
a guest mode change occurs (such as an exception entry or return from
exception), so we need to handle this exception now. The MC bit is only
enabled when restoring register state, so enabling the trace event won't
take immediate effect.
Tracing guest mode changes can be particularly handy when trying to work
out what a guest OS gets up to before something goes wrong, especially
if the problem occurs as a result of some previous guest userland
exception which would otherwise be invisible in the trace.
Signed-off-by: James Hogan <[email protected]>
Cc: Paolo Bonzini <[email protected]>
Cc: "Radim Krčmář" <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Ralf Baechle <[email protected]>
Cc: [email protected]
Cc: [email protected]
|
|
Transfer timer state to the VZ guest context (CP0_GTOffset & guest
CP0_Count) when entering guest mode, enabling direct guest access to it,
and transfer back to soft timer when saving guest register state.
This usually allows guest code to directly read CP0_Count (via MFC0 and
RDHWR) and read/write CP0_Compare, without trapping to the hypervisor
for it to emulate the guest timer. Writing to CP0_Count or CP0_Cause.DC
is much less common and still triggers a hypervisor GPSI exception, in
which case the timer state is transferred back to an hrtimer before
emulating the write.
We are careful to prevent small amounts of drift from building up due
to non-deterministic time intervals between reading the ktime and
reading CP0_Count. Some drift is expected, however, since the system
clocksource may use a different timer to the local CP0_Count timer used
by VZ. This is permitted to prevent guest CP0_Count from appearing to go
backwards.
Signed-off-by: James Hogan <[email protected]>
Cc: Paolo Bonzini <[email protected]>
Cc: "Radim Krčmář" <[email protected]>
Cc: Ralf Baechle <[email protected]>
Cc: [email protected]
Cc: [email protected]
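The key relationship when handing the timer to the guest is that the
guest reads CP0_Count as root CP0_Count plus CP0_GTOffset, so the
offset is chosen to make the guest count resume where the soft timer
left off. A sketch (accessor names assumed to follow asm/mipsregs.h
conventions; the surrounding logic is simplified):

            /*
             * resume_count: the CP0_Count value the guest should see
             * now, derived from the hrtimer/ktime state (assumed
             * helper, not the actual KVM code).
             */
            u32 gtoffset = resume_count - read_c0_count();

            write_c0_gtoffset(gtoffset);
            /* The guest now reads CP0_Count and reads/writes
             * CP0_Compare directly; only writes to CP0_Count or
             * CP0_Cause.DC still trap. */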
|
|
Add emulation of Memory Accessibility Attribute Registers (MAARs) when
necessary. We can't actually do anything with whatever the guest
provides, but it may not be possible to clear Guest.Config5.MRP so we
have to emulate at least a pair of MAARs.
Signed-off-by: James Hogan <[email protected]>
Cc: Paolo Bonzini <[email protected]>
Cc: "Radim Krčmář" <[email protected]>
Cc: Ralf Baechle <[email protected]>
Cc: Jonathan Corbet <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
|
|
When restoring guest state after another VCPU has run, be sure to clear
CP0_LLAddr.LLB in order to break any interrupted atomic critical
section. Without this, SMP guest atomics don't work when LLB is
present, as one guest can complete the atomic section started by
another guest.
A MIPS VZ guest read of CP0_LLAddr causes a Guest Privileged Sensitive
Instruction (GPSI) exception because the address is root physical.
Handle this by reporting only the LLB bit, which indicates whether an
ll/sc atomic is in progress, without giving any reason for failure.
Similarly, on P5600 a guest write to CP0_LLAddr also causes a GPSI
exception. Handle this too by clearing the guest LLB bit from root
mode.
Signed-off-by: James Hogan <[email protected]>
Cc: Paolo Bonzini <[email protected]>
Cc: "Radim Krčmář" <[email protected]>
Cc: Ralf Baechle <[email protected]>
Cc: [email protected]
Cc: [email protected]
|