This function was introduced by previous commits implementing UAO.
However, it can be replaced with task_thread_info() in
uao_thread_switch() or get_fs() in do_page_fault() (the latter being
called only on the current context, so no need for using the saved
pt_regs).
Signed-off-by: Catalin Marinas <[email protected]>
|
|
If a CPU supports both Privileged Access Never (PAN) and User Access
Override (UAO), we don't need to disable/re-enable PAN around all
copy_to_user()-like calls.
UAO alternatives cause these calls to use the 'unprivileged' load/store
instructions, which are overridden to be the privileged kind when
fs==KERNEL_DS.
This patch changes the copy_to_user() calls to have their PAN toggling
depend on a new composite 'feature' ARM64_ALT_PAN_NOT_UAO.
If both features are detected, PAN will be enabled, but the copy_to_user()
alternatives will not be applied. This means PAN will be enabled all the
time for these functions. If only PAN is detected, the toggling will be
enabled as normal.
This will save the time taken to disable/re-enable PAN, and allow us to
catch copy_to_user() accesses that occur with fs==KERNEL_DS.
Futex and swp-emulation code continue to hang their PAN toggling code on
ARM64_HAS_PAN.
Signed-off-by: James Morse <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
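For illustration, a minimal sketch of how such a composite capability can be
expressed in the arm64_features[] table; the helper name cpufeature_pan_not_uao
and the use of cpus_have_cap() are assumptions based on the description above,
not a quote of the patch:

    static bool cpufeature_pan_not_uao(const struct arm64_cpu_capabilities *entry)
    {
    	/* toggle PAN around uaccess only when PAN exists but UAO does not */
    	return cpus_have_cap(ARM64_HAS_PAN) && !cpus_have_cap(ARM64_HAS_UAO);
    }

    /* entry in arm64_features[] */
    {
    	.capability	= ARM64_ALT_PAN_NOT_UAO,
    	.matches	= cpufeature_pan_not_uao,
    },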
|
|
CPU feature code uses the desc field as a test to find the end of the list,
which means every entry must have a description. This generates noise for
entries in the list that aren't really features, but combinations of them.
e.g.
> CPU features: detected feature: Privileged Access Never
> CPU features: detected feature: PAN and not UAO
These combination features are needed for corner cases with alternatives,
where cpu features interact.
Change all walkers of the arm64_features[] and arm64_hwcaps[] lists to test
'matches' not 'desc', and only print 'desc' if it is non-NULL.
Signed-off-by: James Morse <[email protected]>
Reviewed-by: Suzuki K Poulose <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
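A sketch of what a walker looks like after the change, terminating on a NULL
'matches' callback and printing 'desc' only when present (function and field
names assumed from the description above):

    static void update_cpu_capabilities(const struct arm64_cpu_capabilities *caps,
    					const char *info)
    {
    	for (; caps->matches; caps++) {		/* end of list: no .matches */
    		if (!caps->matches(caps))
    			continue;
    		if (caps->desc)			/* composite entries carry no desc */
    			pr_info("%s %s\n", info, caps->desc);
    		cpus_set_cap(caps->capability);
    	}
    }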
|
|
'User Access Override' is a new ARMv8.2 feature which allows the
unprivileged load and store instructions to be overridden to behave in
the normal way.
This patch converts {get,put}_user() and friends to use ldtr*/sttr*
instructions - so that they can only access EL0 memory, then enables
UAO when fs==KERNEL_DS so that these functions can access kernel memory.
This allows user space's read/write permissions to be checked against the
page tables, instead of testing addr<USER_DS, then using the kernel's
read/write permissions.
Signed-off-by: James Morse <[email protected]>
[[email protected]: move uao_thread_switch() above dsb()]
Signed-off-by: Catalin Marinas <[email protected]>
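A sketch of the context-switch hook implied above, flipping PSTATE.UAO
according to the next task's addr_limit; the ALTERNATIVE()/SET_PSTATE_UAO()
spelling is an assumption based on the existing PAN alternatives:

    static void uao_thread_switch(struct task_struct *next)
    {
    	if (IS_ENABLED(CONFIG_ARM64_UAO)) {
    		if (task_thread_info(next)->addr_limit == KERNEL_DS)
    			/* unprivileged ldtr/sttr behave as privileged accesses */
    			asm(ALTERNATIVE("nop", SET_PSTATE_UAO(1), ARM64_HAS_UAO));
    		else
    			asm(ALTERNATIVE("nop", SET_PSTATE_UAO(0), ARM64_HAS_UAO));
    	}
    }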
|
|
ARMv8.1 increases the PMU event number space to 16 bit so increase
the EVTYPE mask.
Signed-off-by: Jan Glauber <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
|
|
With the long cycle counter bit (LC) disabled, the cycle counter does not
work on the ThunderX SoC (ThunderX only implements AArch64).
Also, according to the documentation, LC == 0 is deprecated.
To keep the code simple the patch does not introduce 64 bit wide counter
functions. Instead writing the cycle counter always sets the upper
32 bits so overflow interrupts are generated as before.
Original patch from Andrew Pinski <[email protected]>
Signed-off-by: Jan Glauber <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
|
|
Support PMU events on Cavium's ThunderX SoC. ThunderX supports
some additional counters compared to the default ARMv8 PMUv3:
- branch instructions counter
- stall frontend & backend counters
- L1 dcache load & store counters
- L1 icache counters
- iTLB & dTLB counters
- L1 dcache & icache prefetch counters
Signed-off-by: Jan Glauber <[email protected]>
[will: capitalisation]
Signed-off-by: Will Deacon <[email protected]>
|
|
The implemented Cortex-A57 events are, strictly speaking, not
A57 specific. They are ARM recommended implementation defined events
and can be found on other ARMv8 SOCs like Cavium ThunderX too.
Therefore rename these events to allow using them in other
implementations too.
Signed-off-by: Jan Glauber <[email protected]>
[will: capitalisation and ordering]
Signed-off-by: Will Deacon <[email protected]>
|
|
ARMv8.2 adds a new feature register id_aa64mmfr2. This patch adds the
cpu feature boilerplate used by the actual features in later patches.
Signed-off-by: James Morse <[email protected]>
Reviewed-by: Suzuki K Poulose <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
|
|
Older assemblers may not have support for newer feature registers. To get
round this, sysreg.h provides a 'mrs_s' macro that takes a register
encoding and generates the raw instruction.
Change read_cpuid() to use mrs_s in all cases so that new registers
don't have to be a special case. Including sysreg.h means we need to move
the include and definition of read_cpuid() after the #ifndef __ASSEMBLY__
to avoid syntax errors in vmlinux.lds.
Signed-off-by: James Morse <[email protected]>
Acked-by: Mark Rutland <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
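Roughly, the resulting definition looks like the sketch below, where mrs_s is
the assembler macro from sysreg.h and SYS_<reg> is the numeric register
encoding (exact macro spelling assumed):

    #define read_cpuid(reg) ({						\
    	u64 __val;							\
    	/* mrs_s emits a raw MRS from the encoding, so old assemblers cope */	\
    	asm("mrs_s %0, " __stringify(SYS_ ## reg) : "=r" (__val));	\
    	__val;								\
    })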
|
|
Although the arm64 vDSO is cleanly separated by code/data with the
code being read-only in userspace mappings, the code page is still
writable from the kernel. There have been exploits (such as
http://itszn.com/blog/?p=21) that take advantage of this on x86 to go
from a bad kernel write to full root.
Prevent this specific exploit on arm64 by putting the vDSO code page
in read-only memory as well.
Before the change:
[ 3.138366] vdso: 2 pages (1 code @ ffffffc000a71000, 1 data @ ffffffc000a70000)
---[ Kernel Mapping ]---
0xffffffc000000000-0xffffffc000082000 520K RW NX SHD AF UXN MEM/NORMAL
0xffffffc000082000-0xffffffc000200000 1528K ro x SHD AF UXN MEM/NORMAL
0xffffffc000200000-0xffffffc000800000 6M ro x SHD AF BLK UXN MEM/NORMAL
0xffffffc000800000-0xffffffc0009b6000 1752K ro x SHD AF UXN MEM/NORMAL
0xffffffc0009b6000-0xffffffc000c00000 2344K RW NX SHD AF UXN MEM/NORMAL
0xffffffc000c00000-0xffffffc008000000 116M RW NX SHD AF BLK UXN MEM/NORMAL
0xffffffc00c000000-0xffffffc07f000000 1840M RW NX SHD AF BLK UXN MEM/NORMAL
0xffffffc800000000-0xffffffc840000000 1G RW NX SHD AF BLK UXN MEM/NORMAL
0xffffffc840000000-0xffffffc87ae00000 942M RW NX SHD AF BLK UXN MEM/NORMAL
0xffffffc87ae00000-0xffffffc87ae70000 448K RW NX SHD AF UXN MEM/NORMAL
0xffffffc87af80000-0xffffffc87af8a000 40K RW NX SHD AF UXN MEM/NORMAL
0xffffffc87af8b000-0xffffffc87b000000 468K RW NX SHD AF UXN MEM/NORMAL
0xffffffc87b000000-0xffffffc87fe00000 78M RW NX SHD AF BLK UXN MEM/NORMAL
0xffffffc87fe00000-0xffffffc87ff50000 1344K RW NX SHD AF UXN MEM/NORMAL
0xffffffc87ff90000-0xffffffc87ffa0000 64K RW NX SHD AF UXN MEM/NORMAL
0xffffffc87fff0000-0xffffffc880000000 64K RW NX SHD AF UXN MEM/NORMAL
After:
[ 3.138368] vdso: 2 pages (1 code @ ffffffc0006de000, 1 data @ ffffffc000a74000)
---[ Kernel Mapping ]---
0xffffffc000000000-0xffffffc000082000 520K RW NX SHD AF UXN MEM/NORMAL
0xffffffc000082000-0xffffffc000200000 1528K ro x SHD AF UXN MEM/NORMAL
0xffffffc000200000-0xffffffc000800000 6M ro x SHD AF BLK UXN MEM/NORMAL
0xffffffc000800000-0xffffffc0009b8000 1760K ro x SHD AF UXN MEM/NORMAL
0xffffffc0009b8000-0xffffffc000c00000 2336K RW NX SHD AF UXN MEM/NORMAL
0xffffffc000c00000-0xffffffc008000000 116M RW NX SHD AF BLK UXN MEM/NORMAL
0xffffffc00c000000-0xffffffc07f000000 1840M RW NX SHD AF BLK UXN MEM/NORMAL
0xffffffc800000000-0xffffffc840000000 1G RW NX SHD AF BLK UXN MEM/NORMAL
0xffffffc840000000-0xffffffc87ae00000 942M RW NX SHD AF BLK UXN MEM/NORMAL
0xffffffc87ae00000-0xffffffc87ae70000 448K RW NX SHD AF UXN MEM/NORMAL
0xffffffc87af80000-0xffffffc87af8a000 40K RW NX SHD AF UXN MEM/NORMAL
0xffffffc87af8b000-0xffffffc87b000000 468K RW NX SHD AF UXN MEM/NORMAL
0xffffffc87b000000-0xffffffc87fe00000 78M RW NX SHD AF BLK UXN MEM/NORMAL
0xffffffc87fe00000-0xffffffc87ff50000 1344K RW NX SHD AF UXN MEM/NORMAL
0xffffffc87ff90000-0xffffffc87ffa0000 64K RW NX SHD AF UXN MEM/NORMAL
0xffffffc87fff0000-0xffffffc880000000 64K RW NX SHD AF UXN MEM/NORMAL
Inspired by https://lkml.org/lkml/2016/1/19/494 based on work by the
PaX Team, Brad Spengler, and Kees Cook.
Signed-off-by: David Brown <[email protected]>
Acked-by: Will Deacon <[email protected]>
Acked-by: Ard Biesheuvel <[email protected]>
[[email protected]: removed superfluous __PAGE_ALIGNED_DATA]
Signed-off-by: Catalin Marinas <[email protected]>
|
|
BUG: sleeping function called from invalid context at kernel/locking/rtmutex.c:917
in_atomic(): 1, irqs_disabled(): 128, pid: 383, name: sh
Preemption disabled at:[<ffff800000124c18>] kgdb_cpu_enter+0x158/0x6b8
CPU: 3 PID: 383 Comm: sh Tainted: G W 4.1.13-rt13 #2
Hardware name: Freescale Layerscape 2085a RDB Board (DT)
Call trace:
[<ffff8000000885e8>] dump_backtrace+0x0/0x128
[<ffff800000088734>] show_stack+0x24/0x30
[<ffff80000079a7c4>] dump_stack+0x80/0xa0
[<ffff8000000bd324>] ___might_sleep+0x18c/0x1a0
[<ffff8000007a20ac>] __rt_spin_lock+0x2c/0x40
[<ffff8000007a2268>] rt_read_lock+0x40/0x58
[<ffff800000085328>] single_step_handler+0x38/0xd8
[<ffff800000082368>] do_debug_exception+0x58/0xb8
Exception stack(0xffff80834a1e7c80 to 0xffff80834a1e7da0)
7c80: ffffff9c ffffffff 92c23ba0 0000ffff 4a1e7e40 ffff8083 001bfcc4 ffff8000
7ca0: f2000400 00000000 00000000 00000000 4a1e7d80 ffff8083 0049501c ffff8000
7cc0: 00005402 00000000 00aaa210 ffff8000 4a1e7ea0 ffff8083 000833f4 ffff8000
7ce0: ffffff9c ffffffff 92c23ba0 0000ffff 4a1e7ea0 ffff8083 001bfcc0 ffff8000
7d00: 4a0fc400 ffff8083 00005402 00000000 4a1e7d40 ffff8083 00490324 ffff8000
7d20: ffffff9c 00000000 92c23ba0 0000ffff 000a0000 00000000 00000000 00000000
7d40: 00000008 00000000 00080000 00000000 92c23b8b 0000ffff 92c23b8e 0000ffff
7d60: 00000038 00000000 00001cb2 00000000 00000005 00000000 92d7b498 0000ffff
7d80: 01010101 01010101 92be9000 0000ffff 00000000 00000000 00000030 00000000
[<ffff8000000833f4>] el1_dbg+0x18/0x6c
This issue is similar to commit 62c6c61 ("arm64: replace read_lock to rcu lock in
call_break_hook"), but applies to single_step_handler.
This also solves a kgdbts boot-test silent hang issue on the 4.4 -rt kernel.
Signed-off-by: Yang Shi <[email protected]>
Acked-by: Will Deacon <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
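By analogy with the call_break_hook change cited above, the handler ends up
walking its hook list under RCU rather than a rwlock; a rough sketch (list and
field names assumed):

    static int call_step_hook(struct pt_regs *regs, unsigned int esr)
    {
    	struct step_hook *hook;
    	int retval = DBG_HOOK_ERROR;

    	rcu_read_lock();
    	list_for_each_entry_rcu(hook, &step_hook, node)
    		retval = hook->fn(regs, esr);	/* no rwlock, so nothing sleeps */
    	rcu_read_unlock();

    	return retval;
    }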
|
|
Most CPUs have a hardware prefetcher which generally performs better
without explicit prefetch instructions issued by software, however
some CPUs (e.g. Cavium ThunderX) rely solely on explicit prefetch
instructions.
This patch adds an alternative pattern (ARM64_HAS_NO_HW_PREFETCH) to
allow our library code to make use of explicit prefetch instructions
during things like copy routines only when the CPU does not have the
capability to perform the prefetching itself.
Signed-off-by: Will Deacon <[email protected]>
Tested-by: Andrew Pinski <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
|
|
The SBBR and ACPI specifications allow ACPI based systems that do not
implement PSCI (eg systems with no EL3) to boot through the ACPI parking
protocol specification[1].
This patch implements the ACPI parking protocol CPU operations, and adds
code that eases parsing the parking protocol data structures to the
ARM64 SMP initialization carried out at the same time as CPU enumeration.
To wake up the CPUs from the parked state, this patch implements a
wakeup IPI for ARM64 (i.e. arch_send_wakeup_ipi_mask()) that mirrors the
ARM one, so that a specific IPI is sent for wake-up purposes in order
to distinguish it from other IPI sources.
Given the current ACPI MADT parsing API, the patch implements a glue
layer that helps passing MADT GICC data structure from SMP initialization
code to the parking protocol implementation, somewhat overriding the CPU
operations interfaces. This is to avoid creating a completely transparent
DT/ACPI CPU operations layer that would require creating opaque
structure handling for CPUs data (DT represents CPU through DT nodes, ACPI
through static MADT table entries), which seems overkill given that ACPI
on ARM64 mandates only two booting protocols (PSCI and parking protocol),
so there is no need for further protocol additions.
Based on the original work by Mark Salter <[email protected]>
[1] https://acpica.org/sites/acpica/files/MP%20Startup%20for%20ARM%20platforms.docx
Signed-off-by: Lorenzo Pieralisi <[email protected]>
Tested-by: Loc Ho <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Hanjun Guo <[email protected]>
Cc: Sudeep Holla <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Mark Salter <[email protected]>
Cc: Al Stone <[email protected]>
[[email protected]: Added WARN_ONCE(!acpi_parking_protocol_valid() on the IPI]
Signed-off-by: Catalin Marinas <[email protected]>
|
|
Currently we have separate ALIGN_DEBUG_RO{,_MIN} directives to align
_etext and __init_begin. While we ensure that __init_begin is
page-aligned, we do not provide the same guarantee for _etext. This is
not problematic currently as the alignment of __init_begin is sufficient
to prevent issues when we modify permissions.
Subsequent patches will assume page alignment of segments of the kernel
we wish to map with different permissions. To ensure this, move _etext
after the ALIGN_DEBUG_RO_MIN for the init section. This renders the
prior ALIGN_DEBUG_RO irrelevant, and hence it is removed. Likewise,
upgrade to ALIGN_DEBUG_RO_MIN(PAGE_SIZE) for _stext.
Signed-off-by: Mark Rutland <[email protected]>
Reviewed-by: Catalin Marinas <[email protected]>
Tested-by: Ard Biesheuvel <[email protected]>
Reviewed-by: Ard Biesheuvel <[email protected]>
Tested-by: Jeremy Linton <[email protected]>
Cc: Laura Abbott <[email protected]>
Cc: Will Deacon <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
|
|
During boot we leave the idmap in place until paging_init, as we
previously had to wait for the zero page to become allocated and
accessible.
Now that we have a statically-allocated zero page, we can uninstall the
idmap much earlier in the boot process, making it far easier to spot
accidental use of physical addresses. This also brings the cold boot
path in line with the secondary boot path.
Signed-off-by: Mark Rutland <[email protected]>
Reviewed-by: Catalin Marinas <[email protected]>
Tested-by: Ard Biesheuvel <[email protected]>
Reviewed-by: Ard Biesheuvel <[email protected]>
Tested-by: Jeremy Linton <[email protected]>
Cc: Laura Abbott <[email protected]>
Cc: Will Deacon <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
|
|
We currently open-code the removal of the idmap and restoration of the
current task's MMU state in a few places.
Before introducing yet more copies of this sequence, unify these to call
a new helper, cpu_uninstall_idmap.
Signed-off-by: Mark Rutland <[email protected]>
Reviewed-by: Catalin Marinas <[email protected]>
Tested-by: Ard Biesheuvel <[email protected]>
Reviewed-by: Ard Biesheuvel <[email protected]>
Tested-by: Jeremy Linton <[email protected]>
Cc: Laura Abbott <[email protected]>
Cc: Lorenzo Pieralisi <[email protected]>
Cc: Will Deacon <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
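A sketch of the helper described above, restoring the current task's MMU state
after pointing TTBR0 at the reserved (zero) page; the helper names follow
existing arm64 conventions and are assumptions here:

    static inline void cpu_uninstall_idmap(void)
    {
    	struct mm_struct *mm = current->active_mm;

    	cpu_set_reserved_ttbr0();	/* no TLB entries can be allocated */
    	local_flush_tlb_all();
    	cpu_set_default_tcr_t0sz();

    	if (mm != &init_mm)
    		cpu_switch_mm(mm->pgd, mm);	/* restore the task's tables */
    }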
|
|
Currently the zero page is set up in paging_init, and thus we cannot use
the zero page earlier. We use the zero page as a reserved TTBR value
from which no TLB entries may be allocated (e.g. when uninstalling the
idmap). To enable such usage earlier (as may be required for invasive
changes to the kernel page tables), and to minimise the time that the
idmap is active, we need to be able to use the zero page before
paging_init.
This patch follows the example set by x86, by allocating the zero page
at compile time, in .bss. This means that the zero page itself is
available immediately upon entry to start_kernel (as we zero .bss before
this), and also means that the zero page takes up no space in the raw
Image binary. The associated struct page is allocated in bootmem_init,
and remains unavailable until this time.
Outside of arch code, the only users of empty_zero_page assume that the
empty_zero_page symbol refers to the zeroed memory itself, and that
ZERO_PAGE(x) must be used to acquire the associated struct page,
following the example of x86. This patch also brings arm64 inline with
these assumptions.
Signed-off-by: Mark Rutland <[email protected]>
Reviewed-by: Catalin Marinas <[email protected]>
Tested-by: Ard Biesheuvel <[email protected]>
Reviewed-by: Ard Biesheuvel <[email protected]>
Tested-by: Jeremy Linton <[email protected]>
Cc: Laura Abbott <[email protected]>
Cc: Will Deacon <[email protected]>
Signed-off-by: Catalin Marinas <[email protected]>
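In outline, following the x86 example means something like the sketch below:
the page lives in .bss and ZERO_PAGE() hands out the struct page (the exact
macro body is an assumption):

    /* statically allocated, zeroed along with the rest of .bss */
    unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)] __page_aligned_bss;
    EXPORT_SYMBOL(empty_zero_page);

    /* callers wanting the struct page go through ZERO_PAGE() */
    #define ZERO_PAGE(vaddr)	virt_to_page(empty_zero_page)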
|
|
Changes introduced in the upstream version of libfdt pulled in by commit
91feabc2e224 ("scripts/dtc: Update to upstream commit b06e55c88b9b") use
the strnlen() function, which isn't currently available to the EFI name-
space. Add it to the EFI namespace to avoid a linker error.
Cc: Catalin Marinas <[email protected]>
Cc: Rob Herring <[email protected]>
Reviewed-by: Ard Biesheuvel <[email protected]>
Signed-off-by: Thierry Reding <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
|
|
Switching between stacks is only valid if we are tracing ourselves while on the
irq_stack, so it is only valid when in current and non-preemptible context;
otherwise it is just zeroed off.
Fixes: 132cd887b5c5 ("arm64: Modify stack trace and dump for use with irq_stack")
Acked-by: James Morse <[email protected]>
Tested-by: James Morse <[email protected]>
Signed-off-by: Yang Shi <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
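The guard in the unwinder then reduces to something like this sketch
(IRQ_STACK_PTR() and the variable names are assumptions):

    /* only follow the irq_stack when unwinding our own, non-preemptible context */
    if (tsk == current && !preemptible())
    	irq_stack_ptr = IRQ_STACK_PTR(raw_smp_processor_id());
    else
    	irq_stack_ptr = 0;	/* never switch stacks for another task */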
|
|
drivers/firmware
ARM64 PSCI kernel interfaces that initialize idle states and implement
the suspend API to enter them are generic and can be shared with the
ARM architecture.
To achieve that goal, this patch moves ARM64 PSCI idle management
code to drivers/firmware, so that the interface to initialize and
enter idle states can actually be shared by ARM and ARM64 arches
back-ends.
The ARM generic CPUidle implementation also requires the definition of
a cpuidle_ops section entry for the kernel to initialize the CPUidle
operations at boot based on the enable-method (ie ARM64 has the
statically initialized cpu_ops counterparts for that purpose); therefore
this patch also adds the required section entry on CONFIG_ARM for PSCI so
that the kernel can initialize the PSCI CPUidle back-end when PSCI is
the probed enable-method.
On ARM64 this patch provides no functional change.
Signed-off-by: Lorenzo Pieralisi <[email protected]>
Acked-by: Daniel Lezcano <[email protected]>
Acked-by: Catalin Marinas <[email protected]> [arch/arm64]
Acked-by: Mark Rutland <[email protected]>
Tested-by: Jisheng Zhang <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Sudeep Holla <[email protected]>
Cc: Daniel Lezcano <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Jisheng Zhang <[email protected]>
Signed-off-by: Russell King <[email protected]>
|
|
force_sig_info can sleep under an -rt kernel, so attempting to send a
breakpoint SIGTRAP with interrupts disabled yields the following BUG:
BUG: sleeping function called from invalid context at
/kernel-source/kernel/locking/rtmutex.c:917
in_atomic(): 0, irqs_disabled(): 128, pid: 551, name: test.sh
CPU: 5 PID: 551 Comm: test.sh Not tainted 4.1.13-rt13 #7
Hardware name: Freescale Layerscape 2085a RDB Board (DT)
Call trace:
dump_backtrace+0x0/0x128
show_stack+0x24/0x30
dump_stack+0x80/0xa0
___might_sleep+0x128/0x1a0
rt_spin_lock+0x2c/0x40
force_sig_info+0xcc/0x210
brk_handler.part.2+0x6c/0x80
brk_handler+0xd8/0xe8
do_debug_exception+0x58/0xb8
This patch fixes the problem by ensuring that interrupts are enabled
prior to sending the SIGTRAP if they were already enabled in the user
context.
Reported-by: Yang Shi <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
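Conceptually the fix amounts to the sketch below; interrupts_enabled() is the
existing pt_regs helper, and the siginfo setup is elided:

    /* the user context had IRQs on, so it is safe (and required on -rt) to
     * re-enable them before the potentially sleeping force_sig_info() */
    if (interrupts_enabled(regs))
    	local_irq_enable();
    force_sig_info(SIGTRAP, &info, current);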
|
|
When booting an arm64 kernel with KASAN enabled, the below error is reported
by KASAN:
BUG: KASAN: out-of-bounds in unwind_frame+0xec/0x260 at addr ffffffc064d57ba0
Read of size 8 by task pidof/499
page:ffffffbdc39355c0 count:0 mapcount:0 mapping: (null) index:0x0
flags: 0x0()
page dumped because: kasan: bad access detected
CPU: 2 PID: 499 Comm: pidof Not tainted 4.5.0-rc1 #119
Hardware name: Freescale Layerscape 2085a RDB Board (DT)
Call trace:
[<ffffffc00008d078>] dump_backtrace+0x0/0x290
[<ffffffc00008d32c>] show_stack+0x24/0x30
[<ffffffc0006a981c>] dump_stack+0x8c/0xd8
[<ffffffc0002e4400>] kasan_report_error+0x558/0x588
[<ffffffc0002e4958>] kasan_report+0x60/0x70
[<ffffffc0002e3188>] __asan_load8+0x60/0x78
[<ffffffc00008c92c>] unwind_frame+0xec/0x260
[<ffffffc000087e60>] get_wchan+0x110/0x160
[<ffffffc0003b647c>] do_task_stat+0xb44/0xb68
[<ffffffc0003b7730>] proc_tgid_stat+0x40/0x50
[<ffffffc0003ac840>] proc_single_show+0x88/0xd8
[<ffffffc000345be8>] seq_read+0x370/0x770
[<ffffffc00030aba0>] __vfs_read+0xc8/0x1d8
[<ffffffc00030c0ec>] vfs_read+0x94/0x168
[<ffffffc00030d458>] SyS_read+0xb8/0x128
[<ffffffc000086530>] el0_svc_naked+0x24/0x28
Memory state around the buggy address:
ffffffc064d57a80: 00 00 00 00 00 00 00 00 f1 f1 f1 f1 00 00 f4 f4
ffffffc064d57b00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
>ffffffc064d57b80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
^
ffffffc064d57c00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
ffffffc064d57c80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Since the shadow byte pointed to by the report is 0, it likely means the
access merely hit an out-of-bounds read on a non-current task's stack. So,
disable the instrumentation to silence these warnings.
Acked-by: Andrey Ryabinin <[email protected]>
Signed-off-by: Yang Shi <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
|
|
arm64 generates asm/pci-bridge.h, which merely includes the now-empty
asm-generic/pci-bridge.h. Stop generating asm/pci-bridge.h, and stop
including it.
Signed-off-by: Bjorn Helgaas <[email protected]>
Acked-by: Will Deacon <[email protected]>
|
|
Set IORESOURCE_SYSTEM_RAM in flags of resource ranges with
"System RAM", "Kernel code", "Kernel data", and "Kernel bss".
Note that:
- IORESOURCE_SYSRAM (i.e. modifier bit) is set in flags when
IORESOURCE_MEM is already set. IORESOURCE_SYSTEM_RAM is defined
as (IORESOURCE_MEM|IORESOURCE_SYSRAM).
- Some archs do not set 'flags' for children nodes, such as
"Kernel code". This patch does not change 'flags' in this
case.
Signed-off-by: Toshi Kani <[email protected]>
Signed-off-by: Borislav Petkov <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Brian Gerst <[email protected]>
Cc: Denys Vlasenko <[email protected]>
Cc: H. Peter Anvin <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Luis R. Rodriguez <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Toshi Kani <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: linux-mm <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
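On the arm64 side this boils down to tagging the memblock-derived resources
along these lines (a sketch of the relevant assignments, not the full hunk):

    res->name  = "System RAM";
    res->start = start;
    res->end   = end;
    /* IORESOURCE_SYSTEM_RAM == IORESOURCE_MEM | IORESOURCE_SYSRAM */
    res->flags = IORESOURCE_SYSTEM_RAM | IORESOURCE_BUSY;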
|
|
The Performance Monitors extension is an optional feature of the
AArch64 architecture, therefore, in order to access Performance
Monitors registers safely, the kernel should detect the architected
PMU unit presence through the ID_AA64DFR0_EL1 register PMUVer field
before accessing them.
This patch implements a guard by reading the ID_AA64DFR0_EL1 register
PMUVer field to detect the architected PMU presence and prevent accessing
PMU system registers if the Performance Monitors extension is not
implemented in the core.
Cc: Peter Maydell <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: <[email protected]>
Fixes: 60792ad349f3 ("arm64: kernel: enforce pmuserenr_el0 initialization and restore")
Signed-off-by: Lorenzo Pieralisi <[email protected]>
Reported-by: Guenter Roeck <[email protected]>
Tested-by: Guenter Roeck <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
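The actual change is in the __cpu_setup assembly, but the logic is roughly the
following C sketch (field position per the ARMv8 ARM; helper names assumed):

    u64 dfr0 = read_cpuid(ID_AA64DFR0_EL1);
    unsigned int pmuver = (dfr0 >> 8) & 0xf;	/* PMUVer, bits [11:8] */

    /* only touch PMU registers when a PMU is actually implemented */
    if (pmuver)
    	write_sysreg(0, pmuserenr_el0);	/* disallow EL0 counter access */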
|
|
Commit e8f3010f7326 ("arm64/efi: isolate EFI stub from the kernel
proper") isolated the EFI stub code from the kernel proper by prefixing
all of its symbols with __efistub_, and selectively allowing access to
core kernel symbols from the stub by emitting __efistub_ aliases for
functions and variables that the stub can access legally.
As an unintended side effect, these aliases are emitted into the
kallsyms symbol table, which means they may turn up in backtraces,
e.g.,
...
PC is at __efistub_memset+0x108/0x200
LR is at fixup_init+0x3c/0x48
...
[<ffffff8008328608>] __efistub_memset+0x108/0x200
[<ffffff8008094dcc>] free_initmem+0x2c/0x40
[<ffffff8008645198>] kernel_init+0x20/0xe0
[<ffffff8008085cd0>] ret_from_fork+0x10/0x40
The backtrace in question has nothing to do with the EFI stub, but
simply returns one of the several aliases of memset() that have been
recorded in the kallsyms table. This is undesirable, since it may
suggest to people who are not aware of this that the issue they are
seeing is somehow EFI related.
So hide the __efistub_ aliases from kallsyms, by emitting them as
absolute linker symbols explicitly. The distinction between those
and section relative symbols is completely irrelevant to these
definitions, and to the final link we are performing when these
definitions are being taken into account (the distinction is only
relevant to symbols defined inside a section definition when performing
a partial link), and so the resulting values are identical to the
original ones. Since absolute symbols are ignored by kallsyms, this
will result in these values being omitted from its symbol table.
After this patch, the backtrace generated from the same address looks
like this:
...
PC is at __memset+0x108/0x200
LR is at fixup_init+0x3c/0x48
...
[<ffffff8008328608>] __memset+0x108/0x200
[<ffffff8008094dcc>] free_initmem+0x2c/0x40
[<ffffff8008645198>] kernel_init+0x20/0xe0
[<ffffff8008085cd0>] ret_from_fork+0x10/0x40
Signed-off-by: Ard Biesheuvel <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
|
|
During code generation, we used to BUG_ON unknown/unsupported encoding
or invalid parameters.
Instead, now we report these as errors and simply return the
instruction AARCH64_BREAK_FAULT. Users of these codegen helpers should
check for and handle this failure condition as appropriate.
Otherwise, unhandled codegen failure will result in trapping at
run-time due to AARCH64_BREAK_FAULT, which is arguably better than a
BUG_ON.
Signed-off-by: Zi Shen Lim <[email protected]>
Acked-by: Will Deacon <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
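Callers of the codegen helpers are then expected to do something like this
sketch (the emit() helper is hypothetical):

    u32 insn = aarch64_insn_gen_branch_imm(pc, target, AARCH64_INSN_BRANCH_NOLINK);

    if (insn == AARCH64_BREAK_FAULT)
    	return -EINVAL;		/* propagate the codegen failure */
    emit(insn, ctx);		/* hypothetical emit helper */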
|
|
Pull KVM updates from Paolo Bonzini:
"PPC changes will come next week.
- s390: Support for runtime instrumentation within guests, support of
248 VCPUs.
- ARM: rewrite of the arm64 world switch in C, support for 16-bit VM
identifiers. Performance counter virtualization missed the boat.
- x86: Support for more Hyper-V features (synthetic interrupt
controller), MMU cleanups"
* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (115 commits)
kvm: x86: Fix vmwrite to SECONDARY_VM_EXEC_CONTROL
kvm/x86: Hyper-V SynIC timers tracepoints
kvm/x86: Hyper-V SynIC tracepoints
kvm/x86: Update SynIC timers on guest entry only
kvm/x86: Skip SynIC vector check for QEMU side
kvm/x86: Hyper-V fix SynIC timer disabling condition
kvm/x86: Reorg stimer_expiration() to better control timer restart
kvm/x86: Hyper-V unify stimer_start() and stimer_restart()
kvm/x86: Drop stimer_stop() function
kvm/x86: Hyper-V timers fix incorrect logical operation
KVM: move architecture-dependent requests to arch/
KVM: renumber vcpu->request bits
KVM: document which architecture uses each request bit
KVM: Remove unused KVM_REQ_KICK to save a bit in vcpu->requests
kvm: x86: Check kvm_write_guest return value in kvm_write_wall_clock
KVM: s390: implement the RI support of guest
kvm/s390: drop unpaired smp_mb
kvm: x86: fix comment about {mmu,nested_mmu}.gva_to_gpa
KVM: x86: MMU: Use clear_page() instead of init_shadow_page_table()
arm/arm64: KVM: Detect vGIC presence at runtime
...
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip
Pull xen updates from David Vrabel:
"Xen features and fixes for 4.5-rc0:
- Stolen ticks and PV wallclock support for arm/arm64
- Add grant copy ioctl to gntdev device"
* tag 'for-linus-4.5-rc0-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip:
xen/gntdev: add ioctl for grant copy
x86/xen: don't reset vcpu_info on a cancelled suspend
xen/gntdev: constify mmu_notifier_ops structures
xen/grant-table: constify gnttab_ops structure
xen/time: use READ_ONCE
xen/x86: convert remaining timespec to timespec64 in xen_pvclock_gtod_notify
xen/x86: support XENPF_settime64
xen/arm: set the system time in Xen via the XENPF_settime64 hypercall
xen/arm: introduce xen_read_wallclock
arm: extend pvclock_wall_clock with sec_hi
xen: introduce XENPF_settime64
xen/arm: introduce HYPERVISOR_platform_op on arm and arm64
xen: rename dom0_op to platform_op
xen/arm: account for stolen ticks
arm64: introduce CONFIG_PARAVIRT, PARAVIRT_TIME_ACCOUNTING and pv_time_ops
arm: introduce CONFIG_PARAVIRT, PARAVIRT_TIME_ACCOUNTING and pv_time_ops
missing include asm/paravirt.h in cputime.c
xen: move xen_setup_runstate_info and get_runstate_snapshot to drivers/xen/time.c
|
|
Pull ARM updates from Russell King:
- UEFI boot and runtime services support for ARM from Ard Biesheuvel
and Roy Franz.
- DT compatibility with old atags booting protocol for Nokia N900
devices from Ivaylo Dimitrov.
- PSCI firmware interface using new arm-smc calling convention from
Jens Wiklander.
- Runtime patching for udiv/sdiv instructions for ARMv7 CPUs that
support these instructions from Nicolas Pitre.
- L2x0 cache updates from Dirk B and Linus Walleij.
- Randconfig fixes from Arnd Bergmann.
- ARMv7M (nommu) updates from Ezequiel Garcia
* 'for-linus' of git://ftp.arm.linux.org.uk/~rmk/linux-arm: (34 commits)
ARM: 8481/2: drivers: psci: replace psci firmware calls
ARM: 8480/2: arm64: add implementation for arm-smccc
ARM: 8479/2: add implementation for arm-smccc
ARM: 8478/2: arm/arm64: add arm-smccc
ARM: 8494/1: mm: Enable PXN when running non-LPAE kernel on LPAE processor
ARM: 8496/1: OMAP: RX51: save ATAGS data in the early boot stage
ARM: 8495/1: ATAGS: move save_atags() to arch/arm/include/asm/setup.h
ARM: 8452/3: PJ4: make coprocessor access sequences buildable in Thumb2 mode
ARM: 8482/1: l2x0: make it possible to disable outer sync from DT
ARM: 8488/1: Make IPI_CPU_BACKTRACE a "non-secure" SGI
ARM: 8487/1: Remove IPI_CALL_FUNC_SINGLE
ARM: 8485/1: cpuidle: remove cpu parameter from the cpuidle_ops suspend hook
ARM: 8484/1: Documentation: l2c2x0: Mention separate controllers explicitly
ARM: 8483/1: Documentation: l2c: Rename l2cc to l2c2x0
ARM: 8477/1: runtime patch udiv/sdiv instructions into __aeabi_{u}idiv()
ARM: 8476/1: VDSO: use PTR_ERR_OR_ZERO for vma check
ARM: 8453/2: proc-v7.S: don't locate temporary stack space in .text section
ARM: add UEFI stub support
ARM: wire up UEFI init and runtime support
ARM: only consider memblocks with NOMAP cleared for linear mapping
...
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull arm[64] perf updates from Will Deacon:
"In the past, I have funnelled perf updates through the respective
architecture trees, but now that the arm/arm64 perf driver has been
largely consolidated under drivers/perf/, it makes more sense to send
a separate pull, particularly as I'm listed as maintainer for all the
files involved. I offered the branch to arm-soc, but Arnd suggested
that I just send it to you directly.
So, here is the arm/arm64 perf queue for 4.5. The main features are
described below, but the most useful change is from Drew, which
advertises our architected event mapping in sysfs so that the perf
tool is a lot more user friendly and no longer requires the use of
magic hex constants for profiling common events.
- Support for the CPU PMU in Cortex-A72
- Add sysfs entries to describe the architected events and their
mappings for PMUv{1-3}"
* tag 'arm64-perf' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
arm64: perf: add support for Cortex-A72
arm64: perf: add format entry to describe event -> config mapping
ARM: perf: add format entry to describe event -> config mapping
arm64: kernel: enforce pmuserenr_el0 initialization and restore
arm64: perf: Correct Cortex-A53/A57 compatible values
arm64: perf: Add event descriptions
arm64: perf: Convert event enums to #defines
arm: perf: Add event descriptions
arm: perf: Convert event enums to #defines
drivers/perf: kill armpmu_register
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull arm64 updates from Will Deacon:
"Here is the core arm64 queue for 4.5. As you might expect, the
Christmas break resulted in a number of patches not making the final
cut, so 4.6 is likely to be larger than usual. There's still some
useful stuff here, however, and it's detailed below.
The EFI changes have been Reviewed-by Matt and the memblock change got
an "OK" from akpm.
Summary:
- Support for a separate IRQ stack, although we haven't reduced the
size of our thread stack just yet since we don't have enough data
to determine a safe value
- Refactoring of our EFI initialisation and runtime code into
drivers/firmware/efi/ so that it can be reused by arch/arm/.
- Ftrace improvements when unwinding in the function graph tracer
- Document our silicon errata handling process
- Cache flushing optimisation when mapping executable pages
- Support for hugetlb mappings using the contiguous hint in the pte"
* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (45 commits)
arm64: head.S: use memset to clear BSS
efi: stub: define DISABLE_BRANCH_PROFILING for all architectures
arm64: entry: remove pointless SPSR mode check
arm64: mm: move pgd_cache initialisation to pgtable_cache_init
arm64: module: avoid undefined shift behavior in reloc_data()
arm64: module: fix relocation of movz instruction with negative immediate
arm64: traps: address fallout from printk -> pr_* conversion
arm64: ftrace: fix a stack tracer's output under function graph tracer
arm64: pass a task parameter to unwind_frame()
arm64: ftrace: modify a stack frame in a safe way
arm64: remove irq_count and do_softirq_own_stack()
arm64: hugetlb: add support for PTE contiguous bit
arm64: Use PoU cache instr for I/D coherency
arm64: Defer dcache flush in __cpu_copy_user_page
arm64: reduce stack use in irq_handler
arm64: mm: ensure that the zero page is visible to the page table walker
arm64: Documentation: add list of software workarounds for errata
arm64: mm: place __cpu_setup in .text
arm64: cmpxchg: Don't incldue linux/mmdebug.h
arm64: mm: fold alternatives into .init
...
|
|
|
|
Currently we use an open-coded memzero to clear the BSS. As it is a
trivial implementation, it is sub-optimal.
Our optimised memset doesn't use the stack, is position-independent, and
for the memzero case can make use of DC ZVA to clear large blocks
efficiently. In __mmap_switched the MMU is on and there are no live
caller-saved registers, so we can safely call an uninstrumented memset.
This patch changes __mmap_switched to use memset when clearing the BSS.
We use the __pi_memset alias so as to avoid any instrumentation in all
kernel configurations.
Cc: Catalin Marinas <[email protected]>
Cc: Marc Zyngier <[email protected]>
Reviewed-by: Ard Biesheuvel <[email protected]>
Signed-off-by: Mark Rutland <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
|
|
In work_pending, we may skip work if the stacked SPSR value represents
anything other than an EL0 context. We then immediately invoke the
kernel_exit 0 macro as part of ret_to_user, assuming a return to EL0.
This is somewhat confusing.
We use work_pending as part of the ret_to_user/ret_fast_syscall state
machine. We only use ret_fast_syscall in the return from an SVC issued
from EL0. We use ret_to_user for return from EL0 exception handlers and
also for return from ret_from_fork in the case the task was not a kernel
thread (i.e. it is a user task).
Thus in all cases the stacked SPSR value must represent an EL0 context,
and the check is redundant. This patch removes it, along with the now
unused no_work_pending label.
Cc: Chris Metcalf <[email protected]>
Acked-by: Catalin Marinas <[email protected]>
Signed-off-by: Mark Rutland <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
|
|
Compilers may engage the improbability drive when encountering shifts
by a distance that is a multiple of the size of the operand type. Since
the required bounds check is very simple here, we can get rid of all the
fuzzy masking, shifting and comparing, and use the documented bounds
directly.
Reported-by: David Binderman <[email protected]>
Signed-off-by: Ard Biesheuvel <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
|
|
The test whether a movz instruction with a signed immediate should be
turned into a movn instruction (i.e., when the immediate is negative)
is flawed, since the value of imm is always positive. Also, the
subsequent bounds check is incorrect since the limit update never
executes, due to the fact that the imm_type comparison will always be
false for negative signed immediates.
Let's fix this by performing the sign test on sval directly, and
replacing the bounds check with a simple comparison against U16_MAX.
Signed-off-by: Ard Biesheuvel <[email protected]>
[will: tidied up use of sval, renamed MOVK enum value to MOVKZ]
Signed-off-by: Will Deacon <[email protected]>
|
|
|
|
Switch to use a generic interface for issuing SMC/HVC based on ARM SMC
Calling Convention. Removes the now unused psci-call.S.
Acked-by: Will Deacon <[email protected]>
Reviewed-by: Mark Rutland <[email protected]>
Tested-by: Mark Rutland <[email protected]>
Acked-by: Lorenzo Pieralisi <[email protected]>
Tested-by: Lorenzo Pieralisi <[email protected]>
Signed-off-by: Jens Wiklander <[email protected]>
Signed-off-by: Russell King <[email protected]>
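After the switch, a PSCI invocation goes through the generic helper, roughly
as in this sketch:

    static unsigned long invoke_psci_fn_smc(unsigned long function_id,
    					unsigned long arg0, unsigned long arg1,
    					unsigned long arg2)
    {
    	struct arm_smccc_res res;

    	/* SMC per the ARM SMC Calling Convention; unused args are zero */
    	arm_smccc_smc(function_id, arg0, arg1, arg2, 0, 0, 0, 0, &res);
    	return res.a0;
    }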
|
|
Adds implementation for arm-smccc and enables CONFIG_HAVE_SMCCC.
Acked-by: Will Deacon <[email protected]>
Signed-off-by: Jens Wiklander <[email protected]>
Signed-off-by: Russell King <[email protected]>
|
|
Cortex-A72 has a PMUv3 implementation that is compatible with the PMU
implemented by Cortex-A57.
This patch hooks up the new compatible string so that the Cortex-A57
event mappings are used.
Signed-off-by: Will Deacon <[email protected]>
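The hook itself is a one-line of_device_id addition along these lines (the
init callback name is assumed; it simply reuses the A57 event map):

    static const struct of_device_id armv8_pmu_of_device_ids[] = {
    	{ .compatible = "arm,cortex-a57-pmu", .data = armv8_a57_pmu_init },
    	{ .compatible = "arm,cortex-a72-pmu", .data = armv8_a72_pmu_init },
    	{ /* sentinel */ },
    };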
|
|
It's all very well providing an events directory to userspace that
details our events in terms of "event=0xNN", but if we don't define how
to encode the "event" field in the perf attr.config, then it's a waste
of time.
This patch adds a single format entry to describe that the event field
occupies the bottom 10 bits of our config field on ARMv8 (PMUv3).
Signed-off-by: Will Deacon <[email protected]>
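In sysfs terms this is a single format attribute, roughly:

    PMU_FORMAT_ATTR(event, "config:0-9");	/* event number in config[9:0] */

    static struct attribute *armv8_pmuv3_format_attrs[] = {
    	&format_attr_event.attr,
    	NULL,
    };

    static struct attribute_group armv8_pmuv3_format_attr_group = {
    	.name	= "format",
    	.attrs	= armv8_pmuv3_format_attrs,
    };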
|
|
Commit ac7b406c1a9d ("arm64: Use pr_* instead of printk") was a fairly
mindless s/printk/pr_*/ change driven by a complaint from checkpatch.
As is usual with such changes, this has led to some odd behaviour on
arm64:
* syslog now picks up the "pr_emerg" line from dump_backtrace, but not
the actual trace, which leads to a bunch of "kernel:Call trace:"
lines in the log
* __{pte,pmd,pgd}_error print at KERN_CRIT, as opposed to KERN_ERR
which is used by other architectures.
This patch restores the original printk behaviour for dump_backtrace
and downgrades the pgtable error macros to KERN_ERR.
Signed-off-by: Will Deacon <[email protected]>
|
|
Function graph tracer modifies a return address (LR) in a stack frame
to hook a function return. This will result in many useless entries
(return_to_handler) showing up in
a) a stack tracer's output
b) perf call graph (with perf record -g)
c) dump_backtrace (at panic et al.)
For example, in case of a),
$ echo function_graph > /sys/kernel/debug/tracing/current_tracer
$ echo 1 > /proc/sys/kernel/stack_trace_enabled
$ cat /sys/kernel/debug/tracing/stack_trace
Depth Size Location (54 entries)
----- ---- --------
0) 4504 16 gic_raise_softirq+0x28/0x150
1) 4488 80 smp_cross_call+0x38/0xb8
2) 4408 48 return_to_handler+0x0/0x40
3) 4360 32 return_to_handler+0x0/0x40
...
In case of b),
$ echo function_graph > /sys/kernel/debug/tracing/current_tracer
$ perf record -e mem:XXX:x -ag -- sleep 10
$ perf report
...
| | |--0.22%-- 0x550f8
| | | 0x10888
| | | el0_svc_naked
| | | sys_openat
| | | return_to_handler
| | | return_to_handler
...
In case of c),
$ echo function_graph > /sys/kernel/debug/tracing/current_tracer
$ echo c > /proc/sysrq-trigger
...
Call trace:
[<ffffffc00044d3ac>] sysrq_handle_crash+0x24/0x30
[<ffffffc000092250>] return_to_handler+0x0/0x40
[<ffffffc000092250>] return_to_handler+0x0/0x40
...
This patch replaces such entries with real addresses preserved in
current->ret_stack[] at unwind_frame(). This way, we can cover all
the cases.
Reviewed-by: Jungseok Lee <[email protected]>
Signed-off-by: AKASHI Takahiro <[email protected]>
[will: fixed minor context changes conflicting with irq stack bits]
Signed-off-by: Will Deacon <[email protected]>
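The substitution in unwind_frame() looks roughly like this sketch (the
ret_stack index bookkeeping is simplified):

    #ifdef CONFIG_FUNCTION_GRAPH_TRACER
    	if (tsk->ret_stack &&
    	    frame->pc == (unsigned long)return_to_handler)
    		/* the graph tracer hooked this LR; recover the real caller */
    		frame->pc = tsk->ret_stack[frame->graph--].ret;
    #endif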
|
|
Function graph tracer modifies a return address (LR) in a stack frame
to hook a function's return. This will result in many useless entries
(return_to_handler) showing up in a call stack list.
We will fix this problem in a later patch ("arm64: ftrace: fix a stack
tracer's output under function graph tracer"). But since real return
addresses are saved in ret_stack[] array in struct task_struct,
unwind functions need to be notified of, in addition to a stack pointer
address, which task is being traced in order to find out real return
addresses.
This patch extends unwind functions' interfaces by adding an extra
argument of a pointer to task_struct.
Signed-off-by: AKASHI Takahiro <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
|
|
Function graph tracer modifies a return address (LR) in a stack frame by
calling prepare_ftrace_return() in a traced function's function prologue.
The current code does this modification before preserving an original
address at ftrace_push_return_trace() and there is always a small window
of inconsistency when an interrupt occurs.
This doesn't matter, as far as an interrupt stack is introduced, because
stack tracer won't be invoked in an interrupt context. But it would be
better to proactively minimize such a window by moving the LR modification
after ftrace_push_return_trace().
Signed-off-by: AKASHI Takahiro <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
|
|
sysrq_handle_reboot() re-enables interrupts while on the irq stack. The
irq_stack implementation wrongly assumed this would only ever happen
via the softirq path, allowing it to update irq_count late, in
do_softirq_own_stack().
This means if an irq occurs in sysrq_handle_reboot(), during
emergency_restart() the stack will be corrupted, as irq_count wasn't
updated.
Lose the optimisation, and instead of moving the adding/subtracting of
irq_count into irq_stack_entry/irq_stack_exit, remove it, and compare
sp_el0 (struct thread_info) with sp & ~(THREAD_SIZE - 1). This tells us
if we are on a task stack, if so, we can safely switch to the irq stack.
Finally, remove do_softirq_own_stack(), we don't need it anymore.
Reported-by: Will Deacon <[email protected]>
Signed-off-by: James Morse <[email protected]>
[will: use get_thread_info macro]
Signed-off-by: Will Deacon <[email protected]>
|
|
The pmuserenr_el0 register value is architecturally UNKNOWN on reset.
Current kernel code resets that register value iff the core pmu device is
correctly probed in the kernel. On platforms with missing DT pmu nodes (or
disabled perf events in the kernel), the pmu is not probed, therefore the
pmuserenr_el0 register is not reset in the kernel, which means that its
value retains the reset value that is architecturally UNKNOWN (system
may run with e.g. pmuserenr_el0 == 0x1, which means that PMU counter access
is available at EL0, which must be disallowed).
This patch adds code that resets pmuserenr_el0 on cold boot and restores
it on core resume from shutdown, so that the pmuserenr_el0 setup is
always enforced in the kernel.
Cc: <[email protected]>
Cc: Mark Rutland <[email protected]>
Signed-off-by: Lorenzo Pieralisi <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
|
|
Introduce CONFIG_PARAVIRT and PARAVIRT_TIME_ACCOUNTING on ARM64.
Necessary duplication of paravirt.h and paravirt.c with ARM.
The only paravirt interface supported is pv_time_ops.steal_clock, so no
runtime pvops patching needed.
This allows us to make use of steal_account_process_tick for stolen
ticks accounting.
Signed-off-by: Stefano Stabellini <[email protected]>
Acked-by: Marc Zyngier <[email protected]>
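The whole interface is small; a sketch of the duplicated header:

    struct pv_time_ops {
    	unsigned long long (*steal_clock)(int cpu);
    };
    extern struct pv_time_ops pv_time_ops;

    static inline u64 paravirt_steal_clock(int cpu)
    {
    	/* the only pv op: ask the hypervisor how long we were scheduled out */
    	return pv_time_ops.steal_clock(cpu);
    }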
|