|
Now that the arm pmu code is limited to CPU PMUs the get_hw_events()
function is superfluous, as we'll always have a set of per-cpu
pmu_hw_events structures.
This patch removes the get_hw_events() function, replacing it with
a percpu hw_events pointer. Uses of get_hw_events are updated to use
this_cpu_ptr.
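For illustration, the shape of the change (variable names assumed, not
taken verbatim from the patch):

  /* struct arm_pmu now carries a percpu pointer instead of a callback */
  struct pmu_hw_events __percpu *hw_events;

  /* old: struct pmu_hw_events *events = cpu_pmu->get_hw_events(); */
  struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events);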
Signed-off-by: Mark Rutland <[email protected]>
Reviewed-by: Will Deacon <[email protected]>
Reviewed-by: Stephen Boyd <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
|
|
Commit 3fc2c83087 (ARM: perf: remove event limit from pmu_hw_events) got
rid of the upper limit on the number of events an arm_pmu could handle,
but introduced additional complexity and placed a burden on each PMU
driver to allocate accounting data somehow. So far this has not
generally been useful as the only users of arm_pmu are the CPU backend
and the CCI driver.
Now that the CCI driver plugs into the perf subsystem directly, we can
remove some of the complexities that get in the way of supporting
heterogeneous CPU PMUs.
This patch restores the original limits on pmu_hw_events fields such
that the pmu_hw_events data can be allocated as a contiguous block. This
will simplify dynamic pmu_hw_events allocation in later patches.
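A minimal sketch of the restored fixed-size layout (assuming
ARMPMU_MAX_HWEVENTS as the limit; exact fields abbreviated):

  struct pmu_hw_events {
          /* fixed-size members again, so a pmu_hw_events instance can
           * be allocated as one contiguous (e.g. per-cpu) block */
          struct perf_event *events[ARMPMU_MAX_HWEVENTS];
          DECLARE_BITMAP(used_mask, ARMPMU_MAX_HWEVENTS);
          raw_spinlock_t pmu_lock;
  };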
Signed-off-by: Mark Rutland <[email protected]>
Reviewed-by: Will Deacon <[email protected]>
Reviewed-by: Stephen Boyd <[email protected]>
Tested-by: Stephen Boyd <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
|
|
For systems with heterogeneous CPUs (e.g. big.LITTLE systems) the PMUs
can be different in each cluster, and not all events can be migrated
between clusters. To allow userspace to deal with this, it must be
possible to address each PMU independently.
This patch changes PMUs to be registered with dynamic (IDR) types,
allowing them to be targeted individually. Each PMU's type can be found
in ${SYSFS_ROOT}/bus/event_source/devices/${PMU_NAME}/type.
From userspace, raw events can be targeted at a specific PMU:
$ perf stat -e ${PMU_NAME}/config=V,config1=V1,.../
Doing this does not break existing tools which use existing perf types:
when perf core can't find a PMU of matching type (in perf_init_event)
it'll iterate over the set of all PMUs. If a compatible PMU exists,
it'll be found eventually. If more than one compatible PMU exists, the
event will be handled by whichever PMU happens to be earlier in the pmus
list (which currently will be the last compatible PMU registered).
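In terms of the perf core API, requesting a dynamic type boils down to
registering with type -1 (a sketch, not the patch itself):

  /* a negative type asks perf core to allocate a dynamic (IDR) type,
   * later visible as .../bus/event_source/devices/<name>/type */
  ret = perf_pmu_register(&cpu_pmu->pmu, cpu_pmu->name, -1);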
Signed-off-by: Mark Rutland <[email protected]>
Reviewed-by: Will Deacon <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
|
|
The current PMU probing logic consists of a single switch statement,
which means that the arm_pmu core code in perf_event_cpu.c needs to know
about every CPU PMU variant supported by a driver using the arm_pmu
framework. This makes it rather difficult to decouple the drivers from
the (otherwise generic) probing code.
This patch refactors that switch statement into a table-driven lookup,
separating the logic and knowledge (in the form of the table). Later
patches will split the table across the relevant PMU drivers, which can
pass their tables to the generic probing function.
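A sketch of the table-driven shape (compatible strings and init
functions illustrative):

  static const struct of_device_id cpu_pmu_of_device_ids[] = {
          { .compatible = "arm,cortex-a15-pmu", .data = armv7_a15_pmu_init },
          { .compatible = "arm,cortex-a9-pmu",  .data = armv7_a9_pmu_init },
          { /* sentinel */ },
  };

  /* probe: look the init function up instead of switch()ing on it */
  const struct of_device_id *of_id = of_match_node(cpu_pmu_of_device_ids,
                                                   node);
  if (of_id)
          init_fn = of_id->data;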
Signed-off-by: Mark Rutland <[email protected]>
Reviewed-by: Will Deacon <[email protected]>
Reviewed-by: Stephen Boyd <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
|
|
Most of the pr_info format strings in perf_event_cpu.c are missing
newlines. Currently we get away with this as the format strings for
subsequent calls to printk (including all pr_* calls) begin with a log
prefix, and the printk core adds the omitted newline for this case.
While this generates the output we expect, we probably should not rely
on the format of successive printk calls in order to get legible output.
This patch adds the missing newlines to pr_info format strings in
perf_event_cpu.c, making them consistent with the format strings for
other pr_info, pr_warn, and pr_err calls, and preventing potentially
illegible output if the next printk/pr_* format string doesn't begin
with a log prefix.
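For example (message text illustrative):

  /* before: record termination depends on the next printk call */
  pr_info("CPU PMU: failed to probe PMU!");
  /* after: the newline terminates the record explicitly */
  pr_info("CPU PMU: failed to probe PMU!\n");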
Signed-off-by: Mark Rutland <[email protected]>
Reviewed-by: Stephen Boyd <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
|
|
The ARM callchain handling code is currently bundled with the ARM PMU
management code, despite the two having no dependency on each other.
This bundling has the unfortunate property of making callchain handling
depend on CONFIG_HW_PERF_EVENTS, even though the callchain handling
could be applied to software events in the absence of PMU hardware
support.
This patch separates the two, placing the callchain handling in
perf_callchain.c and making it depend on CONFIG_PERF_EVENTS rather than
CONFIG_HW_PERF_EVENTS, enabling callchain recording on kernels built
without hardware perf event support.
Signed-off-by: Mark Rutland <[email protected]>
Reviewed-by: Will Deacon <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
|
|
There are a few remaining uses of printk in the ARM perf code, so move
them over to the pr_* variants instead.
Reported-by: Russell King <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
|
|
The idx sanity check was once implemented separately in each of these
counter handling functions, and the return value was then treated as a
validity judgement:
armv7_pmnc_select_counter()
armv7_pmnc_enable_counter()
armv7_pmnc_disable_counter()
armv7_pmnc_enable_intens()
armv7_pmnc_disable_intens()
But we do not need to do this now, as the idx validity check was moved
out of all these functions by commit 7279adbd9bb8ef8f (ARM: perf: check
ARMv7 counter validity on a per-pmu basis).
Let's remove the useless return of idx from these functions.
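The resulting shape, sketched for the select function (PMSELR encoding
per the ARMv7 ARM):

  /* returns void: idx has already been validated by the caller */
  static void armv7_pmnc_select_counter(int idx)
  {
          u32 counter = ARMV7_IDX_TO_COUNTER(idx);
          asm volatile("mcr p15, 0, %0, c9, c12, 5" : : "r" (counter));
          isb();
  }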
Acked-by: Mark Rutland <[email protected]>
Signed-off-by: chai wen <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
|
|
Signed-off-by: Russell King <[email protected]>
|
|
Pull ARM fixes from Russell King:
"A couple of ARM fixes.
We fix some printk formats for ptrdiff_t quantities which cause GCC
4.9 to complain, and we also blacklist known buggy GCC 4.8.x compilers
as their miscompilation is serious enough to cause filesystem
corruption, even though many distros have fixed their versions"
* 'fixes' of git://ftp.arm.linux.org.uk/~rmk/linux-arm:
ARM: fix some printk formats
ARM: Blacklist GCC 4.8.0 to GCC 4.8.2 - PR58854
|
|
Pull audit updates from Eric Paris:
"So this change across a whole bunch of arches really solves one basic
problem. We want to audit when seccomp is killing a process. seccomp
hooks in before the audit syscall entry code. audit_syscall_entry
took as an argument the arch of the given syscall. Since the arch is
part of what makes a syscall number meaningful it's an important part
of the record, but it isn't available when seccomp shoots the
syscall...
For most arches we have a better way to get the arch (syscall_get_arch).
So the solution was twofold: implement syscall_get_arch() everywhere
there is audit which didn't have it. Use syscall_get_arch() in the
seccomp audit code. Having syscall_get_arch() everywhere meant it was
a useless flag on the stack and we could get rid of it for the typical
syscall entry.
The other changes inside the audit system aren't grand, fixed some
records that had invalid spaces. Better locking around the task comm
field. Removing some dead functions and structs. Make some things
static. Really minor stuff"
* git://git.infradead.org/users/eparis/audit: (31 commits)
audit: rename audit_log_remove_rule to disambiguate for trees
audit: cull redundancy in audit_rule_change
audit: WARN if audit_rule_change called illegally
audit: put rule existence check in canonical order
next: openrisc: Fix build
audit: get comm using lock to avoid race in string printing
audit: remove open_arg() function that is never used
audit: correct AUDIT_GET_FEATURE return message type
audit: set nlmsg_len for multicast messages.
audit: use union for audit_field values since they are mutually exclusive
audit: invalid op= values for rules
audit: use atomic_t to simplify audit_serial()
kernel/audit.c: use ARRAY_SIZE instead of sizeof/sizeof[0]
audit: reduce scope of audit_log_fcaps
audit: reduce scope of audit_net_id
audit: arm64: Remove the audit arch argument to audit_syscall_entry
arm64: audit: Add audit hook in syscall_trace_enter/exit()
audit: x86: drop arch from __audit_syscall_entry() interface
sparc: implement is_32bit_task
sparc: properly conditionalize use of TIF_32BIT
...
|
|
These stock GCC versions miscompile the kernel by incorrectly optimising
the function epilogue code - by first increasing the stack pointer, and
then loading entries from below the stack. This means that an opportune
interrupt or exception will corrupt the current function's saved state,
which may result in the parent function seeing different register
values.
As this bug has been known to result in corrupted filesystems, and these
buggy compiler versions seem to be frequently used, we have little
option but to blacklist these compiler versions.
Distributions may have fixed PR58854, but as their compilers are totally
indistinguishable from the buggy versions, it is unfortunate that this
results in those fixed compilers also being blacklisted. Given the
filesystem corruption potential of the original bug, this is the lesser
evil. People
who want to build with their fixed compiler versions will need to adjust
the kernel source. (Distros need to think about the implications of
fixing such a compiler bug, and consider how to ensure that their fixed
compiler versions can be detected if they wish to avoid this.)
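A build-time check along these lines enforces such a blacklist (a
sketch; exact placement and wording may differ):

  #if GCC_VERSION >= 40800 && GCC_VERSION < 40803
  # error Your compiler is too buggy; it is known to miscompile kernels
  # error and result in filesystem corruption (GCC PR58854).
  #endif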
Signed-off-by: Russell King <[email protected]>
|
|
This introduces CONFIG_DEBUG_RODATA, making kernel text and rodata
read-only. Additionally, this splits rodata from text so that rodata can
also be NX, which may lead to wasted memory when aligning to SECTION_SIZE.
The read-only areas are made writable during ftrace updates and kexec.
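Sketch of the temporarily-writable window such updates use (the
set_kernel_text_rw()/set_kernel_text_ro() pair referenced elsewhere in
this series):

  set_kernel_text_rw();   /* drop RO so e.g. ftrace can patch call sites */
  /* ... modify kernel text ... */
  set_kernel_text_ro();   /* restore the read-only protection */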
Signed-off-by: Kees Cook <[email protected]>
Tested-by: Laura Abbott <[email protected]>
Acked-by: Nicolas Pitre <[email protected]>
|
|
Adds CONFIG_ARM_KERNMEM_PERMS to separate the kernel memory regions
into section-sized areas that can have different permissions. Performs
the NX permission changes during free_initmem, so that init memory can be
reclaimed.
This uses section size instead of PMD size to reduce memory lost to
padding on non-LPAE systems.
Based on work by Brad Spengler, Larry Bassel, and Laura Abbott.
Signed-off-by: Kees Cook <[email protected]>
Tested-by: Laura Abbott <[email protected]>
Acked-by: Nicolas Pitre <[email protected]>
|
|
Handle the case where someone has set the text segment of the kernel
as read-only by using the newly introduced "patch" mechanism.
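A heavily simplified sketch of a breakpoint write going through the
patching helper (constant name assumed; saving of the original
instruction omitted):

  static int kgdb_arch_set_breakpoint(struct kgdb_bkpt *bpt)
  {
          /* patch_text() copes with read-only .text for us */
          patch_text((void *)bpt->bpt_addr, KGDB_BREAKINST);
          return 0;
  }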
Signed-off-by: Doug Anderson <[email protected]>
[kees: switched structure size check to BUILD_BUG_ON (sboyd)]
Signed-off-by: Kees Cook <[email protected]>
Acked-by: Nicolas Pitre <[email protected]>
|
|
With the introduction of Kees Cook's patch to make the kernel .text
read-only the existing method by which kexec works got broken since it
directly pokes some values in the template code, which resides in the
.text section.
The current patch changes the way those values are inserted so that poking
the .text section occurs only in machine_kexec (e.g. when we are about
to nuke the old kernel and are beyond the point of no return). This
allows us to use set_kernel_text_rw() to directly patch the values in
the .text section.
I had already sent a patch which achieved this, but it was significantly
more complicated, so this is a cleaner, more straightforward approach.
Signed-off-by: Nikolay Borisov <[email protected]>
Acked-by: Will Deacon <[email protected]>
[kees: collapsed kexec_boot_atags (will.deacon)]
[kees: for bisectability, moved set_kernel_text_rw() to RODATA patch]
Signed-off-by: Kees Cook <[email protected]>
Acked-by: Nicolas Pitre <[email protected]>
|
|
Use fixmaps for text patching when the kernel text is read-only,
inspired by x86. This makes jump labels and kprobes work with the
currently available CONFIG_DEBUG_SET_MODULE_RONX and the upcoming
CONFIG_DEBUG_RODATA options.
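The core idea, sketched (FIX_TEXT_POKE0 as the temporary slot; locking
and cache maintenance details omitted):

  /* map the physical page backing 'addr' at a writable fixmap slot */
  set_fixmap(FIX_TEXT_POKE0, page_to_phys(page));
  waddr = (u32 *)(__fix_to_virt(FIX_TEXT_POKE0) + offset_in_page(addr));
  *waddr = insn;                          /* write through the alias */
  clear_fixmap(FIX_TEXT_POKE0);
  flush_icache_range((unsigned long)addr, (unsigned long)addr + 4);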
Signed-off-by: Rabin Vincent <[email protected]>
[kees: fixed up for merge with "arm: use generic fixmap.h"]
[kees: added sparse acquire/release annotations to pass C=1 builds]
[kees: always use stop_machine to keep TLB flushing local]
Signed-off-by: Kees Cook <[email protected]>
Acked-by: Nicolas Pitre <[email protected]>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpu
Pull percpu consistent-ops changes from Tejun Heo:
"Way back, before the current percpu allocator was implemented, static
and dynamic percpu memory areas were allocated and handled separately
and had their own accessors. The distinction has been gone for many
years now; however, the two now-duplicate sets of accessors remained,
with the pointer-based ones - this_cpu_*() - evolving various other
operations over time. During the process, we also accumulated other
inconsistent operations.
This pull request contains Christoph's patches to clean up the
duplicate accessor situation. __get_cpu_var() uses are replaced with
this_cpu_ptr() and __this_cpu_ptr() with raw_cpu_ptr().
Unfortunately, the former sometimes is tricky thanks to C being a bit
messy with the distinction between lvalues and pointers, which led to
a rather ugly solution for cpumask_var_t involving the introduction of
this_cpu_cpumask_var_ptr().
This converts most of the uses but not all. Christoph will follow up
with the remaining conversions in this merge window and hopefully
remove the obsolete accessors"
* 'for-3.18-consistent-ops' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpu: (38 commits)
irqchip: Properly fetch the per cpu offset
percpu: Resolve ambiguities in __get_cpu_var/cpumask_var_t -fix
ia64: sn_nodepda cannot be assigned to after this_cpu conversion. Use __this_cpu_write.
percpu: Resolve ambiguities in __get_cpu_var/cpumask_var_t
Revert "powerpc: Replace __get_cpu_var uses"
percpu: Remove __this_cpu_ptr
clocksource: Replace __this_cpu_ptr with raw_cpu_ptr
sparc: Replace __get_cpu_var uses
avr32: Replace __get_cpu_var with __this_cpu_write
blackfin: Replace __get_cpu_var uses
tile: Use this_cpu_ptr() for hardware counters
tile: Replace __get_cpu_var uses
powerpc: Replace __get_cpu_var uses
alpha: Replace __get_cpu_var
ia64: Replace __get_cpu_var uses
s390: cio driver &__get_cpu_var replacements
s390: Replace __get_cpu_var uses
mips: Replace __get_cpu_var uses
MIPS: Replace __get_cpu_var uses in FPU emulator.
arm: Replace __this_cpu_ptr with raw_cpu_ptr
...
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull x86 seccomp changes from Ingo Molnar:
"This tree includes x86 seccomp filter speedups and related preparatory
work, which touches core seccomp facilities as well.
The main idea is to split seccomp into two phases, to be able to enter
a simple fast path for syscalls without ptrace side effects.
There are no substantial user-visible (or ABI) effects expected from
this, except a change in how we emit a better audit record for
SECCOMP_RET_TRACE events"
* 'x86-seccomp-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
x86_64, entry: Use split-phase syscall_trace_enter for 64-bit syscalls
x86_64, entry: Treat regs->ax the same in fastpath and slowpath syscalls
x86: Split syscall_trace_enter into two phases
x86, entry: Only call user_exit if TIF_NOHZ
x86, x32, audit: Fix x32's AUDIT_ARCH wrt audit
seccomp: Document two-phase seccomp and arch-provided seccomp_data
seccomp: Allow arch code to provide seccomp_data
seccomp: Refactor the filter callback and the API
seccomp,x86,arm,mips,s390: Remove nr parameter from secure_computing
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull scheduler updates from Ingo Molnar:
"The main changes in this cycle were:
- Optimized support for Intel "Cluster-on-Die" (CoD) topologies (Dave
Hansen)
- Various sched/idle refinements for better idle handling (Nicolas
Pitre, Daniel Lezcano, Chuansheng Liu, Vincent Guittot)
- sched/numa updates and optimizations (Rik van Riel)
- sysbench speedup (Vincent Guittot)
- capacity calculation cleanups/refactoring (Vincent Guittot)
- Various cleanups to thread group iteration (Oleg Nesterov)
- Double-rq-lock removal optimization and various refactorings
(Kirill Tkhai)
- various sched/deadline fixes
... and lots of other changes"
* 'sched-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (72 commits)
sched/dl: Use dl_bw_of() under rcu_read_lock_sched()
sched/fair: Delete resched_cpu() from idle_balance()
sched, time: Fix build error with 64 bit cputime_t on 32 bit systems
sched: Improve sysbench performance by fixing spurious active migration
sched/x86: Fix up typo in topology detection
x86, sched: Add new topology for multi-NUMA-node CPUs
sched/rt: Use resched_curr() in task_tick_rt()
sched: Use rq->rd in sched_setaffinity() under RCU read lock
sched: cleanup: Rename 'out_unlock' to 'out_free_new_mask'
sched: Use dl_bw_of() under RCU read lock
sched/fair: Remove duplicate code from can_migrate_task()
sched, mips, ia64: Remove __ARCH_WANT_UNLOCKED_CTXSW
sched: print_rq(): Don't use tasklist_lock
sched: normalize_rt_tasks(): Don't use _irqsave for tasklist_lock, use task_rq_lock()
sched: Fix the task-group check in tg_has_rt_tasks()
sched/fair: Leverage the idle state info when choosing the "idlest" cpu
sched: Let the scheduler see CPU idle states
sched/deadline: Fix inter- exclusive cpusets migrations
sched/deadline: Clear dl_entity params when setscheduling to different class
sched/numa: Kill the wrong/dead TASK_DEAD check in task_numa_fault()
...
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/groeck/linux-staging
Pull restart handler infrastructure from Guenter Roeck:
"This series was supposed to be pulled through various trees using it,
and I did not plan to send a separate pull request. As it turns out,
the pinctrl tree did not merge with it, is now upstream, and uses it,
meaning there are now build failures.
Please pull this series directly to fix those build failures"
* tag 'restart-handler-for-v3.18' of git://git.kernel.org/pub/scm/linux/kernel/git/groeck/linux-staging:
arm/arm64: unexport restart handlers
watchdog: sunxi: register restart handler with kernel restart handler
watchdog: alim7101: register restart handler with kernel restart handler
watchdog: moxart: register restart handler with kernel restart handler
arm: support restart through restart handler call chain
arm64: support restart through restart handler call chain
power/restart: call machine_restart instead of arm_pm_restart
kernel: add support for kernel restart handler call chain
|
|
The different architectures used their own (and different) declarations:
extern __visible const void __nosave_begin, __nosave_end;
extern const void __nosave_begin, __nosave_end;
extern long __nosave_begin, __nosave_end;
Consolidate them using the first variant in <asm/sections.h>.
Signed-off-by: Geert Uytterhoeven <[email protected]>
Cc: Russell King <[email protected]>
Cc: Ralf Baechle <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Martin Schwidefsky <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: Guan Xuetao <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull irq updates from Thomas Gleixner:
"The irq departement delivers:
- a cleanup series to get rid of mindlessly copied code.
- another bunch of new pointlessly different interrupt chip drivers.
Adding homebrewed irq chips (and timers) to SoCs must provide a
value add which is beyond the imagination of mere mortals.
- the usual SoC irq controller updates, IOW my second cat herding
project"
* 'irq-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (44 commits)
irqchip: gic-v3: Implement CPU PM notifier
irqchip: gic-v3: Refactor gic_enable_redist to support both enabling and disabling
irqchip: renesas-intc-irqpin: Add minimal runtime PM support
irqchip: renesas-intc-irqpin: Add helper variable dev = &pdev->dev
irqchip: atmel-aic5: Add sama5d4 support
irqchip: atmel-aic5: The sama5d3 has 48 IRQs
Documentation: bcm7120-l2: Add Broadcom BCM7120-style L2 binding
irqchip: bcm7120-l2: Add Broadcom BCM7120-style Level 2 interrupt controller
irqchip: renesas-irqc: Add binding docs for new R-Car Gen2 SoCs
irqchip: renesas-irqc: Add DT binding documentation
irqchip: renesas-intc-irqpin: Document SoC-specific bindings
openrisc: Get rid of handle_IRQ
arm64: Get rid of handle_IRQ
ARM: omap2: irq: Convert to handle_domain_irq
ARM: imx: tzic: Convert to handle_domain_irq
ARM: imx: avic: Convert to handle_domain_irq
irqchip: or1k-pic: Convert to handle_domain_irq
irqchip: atmel-aic5: Convert to handle_domain_irq
irqchip: atmel-aic: Convert to handle_domain_irq
irqchip: gic-v3: Convert to handle_domain_irq
...
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull timer fixes from Ingo Molnar:
"Main changes:
- Fix the deadlock reported by Dave Jones et al
- Clean up and fix nohz_full interaction with arch abilities
- nohz init code consolidation/cleanup"
* 'timers-nohz-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
nohz: nohz full depends on irq work self IPI support
nohz: Consolidate nohz full init code
arm64: Tell irq work about self IPI support
arm: Tell irq work about self IPI support
x86: Tell irq work about self IPI support
irq_work: Force raised irq work to run on irq work interrupt
irq_work: Introduce arch_irq_work_has_interrupt()
nohz: Move nohz full init call to tick init
|
|
into for-next
|
|
This patch changes the __init_end address to a
page-aligned address, so that free_initmem() can
free the whole .init section: if the end address
is not page aligned, it is rounded down to a
page-aligned address, and the trailing unaligned
page will not be freed.
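In linker-script terms the fix amounts to (a sketch of the
vmlinux.lds.S hunk):

          . = ALIGN(PAGE_SIZE);
          __init_end = .;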
Signed-off-by: wang <[email protected]>
Acked-by: Catalin Marinas <[email protected]>
Signed-off-by: Russell King <[email protected]>
|
|
When compiling kprobes-test-arm.c the following error has been observed
/tmp/ccoT403o.s:21439: Error: bad immediate value for offset (4168)
This is caused by the compiler spilling its literal pool too far away
from the site which is trying to reference it with a PC relative load.
This arises because the compiler is underestimating the size of the
inline assembler code present, which apparently it approximates as 4
bytes per line or instruction.
We fix this problem by moving the operations which generate more than
4 bytes out of the text section. Specifically, moving the .ascii
directives to the .rodata section.
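Sketched, the move looks like this (string contents illustrative):

  __asm__ __volatile__ (
          /* keep bulky data out of .text so the compiler's literal
           * pool distance estimate stays accurate */
          ".pushsection .rodata\n"
          "9:     .ascii  \"some long test case string\"\n"
          ".popsection\n"
  );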
Signed-off-by: Jon Medhurst <[email protected]>
Signed-off-by: Russell King <[email protected]>
|
|
The warning was introduced in 2009 (commit 4bf1fa5a34aa ([ARM] 5613/1:
implement CALLER_ADDRESSx)). The only "problem" here is that
CALLER_ADDRESSx for x > 1 returns NULL, which doesn't do much harm.
The drawback of implementing a fix (i.e. using unwind tables to
implement CALLER_ADDRESSx) is that much of the unwinder code would need
to be marked as not traceable.
Signed-off-by: Uwe Kleine-König <[email protected]>
Signed-off-by: Russell King <[email protected]>
|
|
With compilers which follow the C99 standard (like modern versions of gcc and
clang), "extern inline" does the wrong thing (emits code for an externally
linkable version of the inline function). In this case using static inline
and removing the NULL version of return_address in return_address.c does
the right thing.
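Sketch of the replacement (C99 semantics):

  /* "static inline": no externally visible definition is emitted, so
   * a C99 compiler cannot create a duplicate out-of-line symbol */
  static inline void *return_address(unsigned int level)
  {
          return NULL;
  }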
Signed-off-by: Behan Webster <[email protected]>
Reviewed-by: Mark Charlebois <[email protected]>
Acked-by: Steven Rostedt <[email protected]>
Signed-off-by: Russell King <[email protected]>
|
|
The sigpage is currently placed alongside shared libraries etc in the
address space. Similar to what x86_64 does for its VDSO, place the
sigpage at a randomized offset above the stack so that learning the
base address of the sigpage doesn't help expose where shared libraries
are loaded in the address space (and vice versa).
Signed-off-by: Nathan Lynch <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Signed-off-by: Russell King <[email protected]>
|
|
_install_special_mapping allows the VMA to be identified in
/proc/pid/maps without the use of arch_vma_name, providing a
slight net reduction in object size:
text data bss dec hex filename
2996 96 144 3236 ca4 arch/arm/kernel/process.o (before)
2956 104 144 3204 c84 arch/arm/kernel/process.o (after)
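Sketch of the conversion (the .name string is what then shows up in
/proc/pid/maps; surrounding declarations assumed):

  static const struct vm_special_mapping sigpage_mapping = {
          .name  = "[sigpage]",
          .pages = &signal_page,  /* assumed: struct page *signal_page */
  };

  vma = _install_special_mapping(mm, addr, PAGE_SIZE,
                                 VM_READ | VM_EXEC | VM_MAYREAD | VM_MAYEXEC,
                                 &sigpage_mapping);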
Signed-off-by: Nathan Lynch <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Signed-off-by: Russell King <[email protected]>
|
|
If we are not changing the control register value, avoid writing to it.
Writes to the control register can be very expensive, taking around a
hundred cycles or so.
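Sketched with the standard cp15 helpers:

  static inline void set_cr_if_changed(unsigned long new_cr)
  {
          /* SCTLR writes can cost around a hundred cycles;
           * skip no-op updates entirely */
          if (get_cr() != new_cr)
                  set_cr(new_cr);
  }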
Signed-off-by: Russell King <[email protected]>
|
|
Use the more common pr_warn.
Other miscellanea:
o Coalesce formats
o Realign arguments
Signed-off-by: Joe Perches <[email protected]>
Signed-off-by: Russell King <[email protected]>
|
|
Implementing a restart handler in a module doesn't make sense as there would
be no guarantee that the module is loaded when a restart is needed.
Unexport arm_pm_restart to ensure that no one gets the idea to do it
anyway.
Signed-off-by: Guenter Roeck <[email protected]>
Acked-by: Catalin Marinas <[email protected]>
Acked-by: Heiko Stuebner <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: David Woodhouse <[email protected]>
Cc: Dmitry Eremin-Solenikov <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Jonas Jensen <[email protected]>
Cc: Maxime Ripard <[email protected]>
Cc: Randy Dunlap <[email protected]>
Cc: Russell King <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Tomasz Figa <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Wim Van Sebroeck <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
The kernel core now supports a restart handler call chain for system
restart functions.
With this change, the arm_pm_restart callback is now optional, so drop its
initialization and check if it is set before calling it. Only call the
kernel restart handler if arm_pm_restart is not set.
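The resulting logic, sketched:

  /* the legacy hook wins if a platform still sets it; otherwise
   * run the new restart handler call chain */
  if (arm_pm_restart)
          arm_pm_restart(reboot_mode, cmd);
  else
          do_kernel_restart(cmd);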
Signed-off-by: Guenter Roeck <[email protected]>
Acked-by: Catalin Marinas <[email protected]>
Acked-by: Heiko Stuebner <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: David Woodhouse <[email protected]>
Cc: Dmitry Eremin-Solenikov <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Jonas Jensen <[email protected]>
Cc: Maxime Ripard <[email protected]>
Cc: Randy Dunlap <[email protected]>
Cc: Russell King <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Tomasz Figa <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Wim Van Sebroeck <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
|
|
We have a function where the arch can be queried, syscall_get_arch().
So rather than have every single piece of arch specific code use and/or
duplicate syscall_get_arch(), just have the audit code use the
syscall_get_arch() code.
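Inside the audit core this reduces to fetching the arch directly
(simplified):

  /* __audit_syscall_entry() no longer takes an arch argument */
  context->arch  = syscall_get_arch();
  context->major = major;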
Based-on-patch-by: Richard Briggs <[email protected]>
Signed-off-by: Eric Paris <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
|
|
Use the new arch_scale_cpu_capacity() scheduler facility in order to reflect
the original capacity of a CPU instead of arch_scale_freq_capacity(),
which is more about scaling the capacity with the current frequency.
Signed-off-by: Vincent Guittot <[email protected]>
Acked-by: Nicolas Pitre <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: Grant Likely <[email protected]>
Cc: Guenter Roeck <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Mark Brown <[email protected]>
Cc: Nicolas Pitre <[email protected]>
Cc: Rob Herring <[email protected]>
Cc: Russell King <[email protected]>
Cc: Vincent Guittot <[email protected]>
Cc: [email protected]
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
|
|
do_unexp_fiq() has never been called by any code in the last 10 years,
it's about time it was removed!
Signed-off-by: Russell King <[email protected]>
|
|
Remove an unnecessary newline in show_regs().
Signed-off-by: Russell King <[email protected]>
|
|
This patch introduces a new default FIQ handler that is structured in a
similar way to the existing ARM exception handler and results in the FIQ
being handled by C code running on the SVC stack (although code run from
the FIQ handler is subject to severe limitations with respect to
locking, making normal interaction with the kernel impossible).
This default handler allows concepts that on x86 would be handled using
NMIs to be realized on ARM.
Credit:
This patch is a near complete re-write of a patch originally
provided by Anton Vorontsov. Today only a couple of small fragments
survive, however without Anton's work to build from this patch would
not exist. Thanks also to Russell King for spoonfeeding me a variety
of fixes during the review cycle.
Signed-off-by: Daniel Thompson <[email protected]>
Cc: Catalin Marinas <[email protected]>
Acked-by: Nicolas Pitre <[email protected]>
Signed-off-by: Russell King <[email protected]>
|
|
Rob Clark reports a sleeping while atomic bug when using perf.
BUG: sleeping function called from invalid context at ../kernel/locking/mutex.c:583
in_atomic(): 1, irqs_disabled(): 128, pid: 0, name: swapper/0
------------[ cut here ]------------
WARNING: CPU: 2 PID: 4828 at ../kernel/locking/mutex.c:479 mutex_lock_nested+0x3a0/0x3e8()
DEBUG_LOCKS_WARN_ON(in_interrupt())
Modules linked in:
CPU: 2 PID: 4828 Comm: Xorg.bin Tainted: G W 3.17.0-rc3-00234-gd535c45-dirty #819
[<c0216690>] (unwind_backtrace) from [<c0212174>] (show_stack+0x10/0x14)
[<c0212174>] (show_stack) from [<c0867cc0>] (dump_stack+0x98/0xb8)
[<c0867cc0>] (dump_stack) from [<c02492a4>] (warn_slowpath_common+0x70/0x8c)
[<c02492a4>] (warn_slowpath_common) from [<c02492f0>] (warn_slowpath_fmt+0x30/0x40)
[<c02492f0>] (warn_slowpath_fmt) from [<c086a3f8>] (mutex_lock_nested+0x3a0/0x3e8)
[<c086a3f8>] (mutex_lock_nested) from [<c0294d08>] (irq_find_host+0x20/0x9c)
[<c0294d08>] (irq_find_host) from [<c0769d50>] (of_irq_get+0x28/0x48)
[<c0769d50>] (of_irq_get) from [<c057d104>] (platform_get_irq+0x1c/0x8c)
[<c057d104>] (platform_get_irq) from [<c021a06c>] (cpu_pmu_enable_percpu_irq+0x14/0x38)
[<c021a06c>] (cpu_pmu_enable_percpu_irq) from [<c02b1634>] (flush_smp_call_function_queue+0x88/0x178)
[<c02b1634>] (flush_smp_call_function_queue) from [<c0214dc0>] (handle_IPI+0x88/0x160)
[<c0214dc0>] (handle_IPI) from [<c0208930>] (gic_handle_irq+0x64/0x68)
[<c0208930>] (gic_handle_irq) from [<c0212d04>] (__irq_svc+0x44/0x5c)
Exception stack(0xe63ddea0 to 0xe63ddee8)
dea0: 00000001 00000001 00000000 c2f3b200 c16db380 c032d4a0 e63ddf40 60010013
dec0: 00000000 001fbfd4 00000100 00000000 00000001 e63ddee8 c0284770 c02a2e30
dee0: 20010013 ffffffff
[<c0212d04>] (__irq_svc) from [<c02a2e30>] (ktime_get_ts64+0x1c8/0x200)
[<c02a2e30>] (ktime_get_ts64) from [<c032d4a0>] (poll_select_set_timeout+0x60/0xa8)
[<c032d4a0>] (poll_select_set_timeout) from [<c032df64>] (SyS_select+0xa8/0x118)
[<c032df64>] (SyS_select) from [<c020e8e0>] (ret_fast_syscall+0x0/0x48)
---[ end trace 0bb583b46342da6f ]---
INFO: lockdep is turned off.
We don't really need to get the platform irq again when we're
enabling or disabling the per-cpu irq. Furthermore, we don't
really need to set and clear bits in the active_irqs bitmask
because that's only used in the non-percpu irq case to figure out
when the last CPU PMU has been disabled. Just pass the irq
directly to the enable/disable functions to clean all this up.
This should be slightly more efficient and also fix the
scheduling while atomic bug.
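The fixed enable path then looks roughly like this (mirroring the
description above):

  static void cpu_pmu_enable_percpu_irq(void *data)
  {
          int irq = *(int *)data;  /* the irq is now passed in directly */

          /* no platform_get_irq() here: it can sleep, and this runs
           * from IPI (hardirq) context */
          enable_percpu_irq(irq, IRQ_TYPE_NONE);
  }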
Fixes: bbd64559376f "ARM: perf: support percpu irqs for the CPU PMU"
Reported-by: Rob Clark <[email protected]>
Acked-by: Will Deacon <[email protected]>
Cc: [email protected]
Signed-off-by: Stephen Boyd <[email protected]>
Signed-off-by: Russell King <[email protected]>
|
|
The TPIDRURO and TPIDRURW registers need to be flushed during exec;
otherwise TLS information is potentially leaked. TPIDRURO in
particular needs careful treatment. Since flush_thread basically
needs the same code used to set the TLS in arm_syscall, pull that into
a common set_tls helper in tls.h and use it in both places.
Similarly, TEEHBR needs to be cleared during exec as well. Clearing
its save slot in thread_info isn't right as there is no guarantee
that a thread switch will occur before the new program runs. Just
setting the register directly is sufficient.
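A simplified sketch of the shared helper (TPIDRURO encoding per ARMv7;
the real helper also updates TPIDRURW and copes with CPUs that lack the
TLS registers):

  static inline void set_tls(unsigned long val)
  {
          /* TPIDRURO holds the TLS pointer visible to userspace */
          asm("mcr p15, 0, %0, c13, c0, 3" : : "r" (val));
  }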
Signed-off-by: Nathan Lynch <[email protected]>
Acked-by: Will Deacon <[email protected]>
Cc: <[email protected]>
Signed-off-by: Russell King <[email protected]>
|
|
Previous commits that dealt with get_user for 64-bit types neglected to
export the proper functions, so if the get_user macro is used with those
particular target/value types by a kernel module, modpost produces an
'undefined!' error. The solution is to export all required functions.
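Concretely, something along these lines in the export list (symbol
names as used by the 64-bit get_user paths; treat them as
illustrative):

  EXPORT_SYMBOL(__get_user_8);
  EXPORT_SYMBOL(__get_user_64t_1);
  EXPORT_SYMBOL(__get_user_64t_2);
  EXPORT_SYMBOL(__get_user_64t_4);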
Signed-off-by: Victor Kamensky <[email protected]>
Signed-off-by: Russell King <[email protected]>
|
|
ARM irq work IPI support depends on SMP support. That information is
partly known at early boot time. Let's implement
arch_irq_work_has_interrupt() accordingly.
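The implementation is essentially (sketch):

  static inline bool arch_irq_work_has_interrupt(void)
  {
          /* the self-IPI only exists on SMP-capable hardware */
          return is_smp();
  }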
Acked-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Paul E. McKenney <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Russell King <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Signed-off-by: Frederic Weisbecker <[email protected]>
|
|
According to the ARM ARM (ARMv7), explicit barriers are necessary when using
synchronisation primitives such as SWP{B}. The use of these
instructions does not automatically imply a barrier and any ordering
requirements by the software must be explicitly expressed with the use
of suitable barriers.
Based on this, remove the barriers from SWP{B} emulation.
Acked-by: Will Deacon <[email protected]>
Signed-off-by: Punit Agrawal <[email protected]>
Signed-off-by: Russell King <[email protected]>
|
|
The secure_computing function took a syscall number parameter, but
it only paid any attention to that parameter if seccomp mode 1 was
enabled. Rather than coming up with a kludge to get the parameter
to work in mode 2, just remove the parameter.
To avoid churn in arches that don't have seccomp filters (and may
not even support syscall_get_nr right now), this leaves the
parameter in secure_computing_strict, which is now a real function.
For ARM, this is a bit ugly due to the fact that ARM conditionally
supports seccomp filters. Fixing that would probably only be a
couple of lines of code, but it should be coordinated with the audit
maintainers.
This will be a slight slowdown on some arches. The right fix is to
pass in all of seccomp_data instead of trying to make just the
syscall nr part be fast.
This is a prerequisite for making two-phase seccomp work cleanly.
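At an arch call site the change is mechanical (sketch):

  /* before: the syscall nr had to be threaded through */
  if (secure_computing(scno) == -1)
          return -1;

  /* after: seccomp fetches the nr (and more) itself */
  if (secure_computing() == -1)
          return -1;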
Cc: Russell King <[email protected]>
Cc: [email protected]
Cc: Ralf Baechle <[email protected]>
Cc: [email protected]
Cc: Martin Schwidefsky <[email protected]>
Cc: Heiko Carstens <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: Kees Cook <[email protected]>
Signed-off-by: Andy Lutomirski <[email protected]>
Signed-off-by: Kees Cook <[email protected]>
|
|
In order to limit code duplication, convert the architecture specific
handle_IRQ to use the generic __handle_domain_irq function.
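After the conversion the arch hook is roughly a one-line wrapper:

  void handle_IRQ(unsigned int irq, struct pt_regs *regs)
  {
          /* no domain: 'irq' is already a linux irq number */
          __handle_domain_irq(NULL, irq, false, regs);
  }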
Signed-off-by: Marc Zyngier <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Jason Cooper <[email protected]>
|
|
Since commit 1dbfa187dad ("ARM: irq migration: force migration off CPU
going down") the ARM interrupt migration code on cpu offline calls
irqchip.irq_set_affinity() with the argument force=true. At the point
of this change the argument had no effect because it was not used by
any interrupt chip driver and there was no semantics defined.
This changed with commit 01f8fa4f01d8 ("genirq: Allow forcing cpu
affinity of interrupts") which made the force argument useful to route
interrupts to not yet online cpus without checking the target cpu
against the cpu online mask. The following commit ffde1de64012
("irqchip: gic: Support forced affinity setting") implemented this for
the GIC interrupt controller.
As a consequence the ARM cpu offline irq migration fails if CPU0 is
offlined, because CPU0 is still set in the affinity mask and the
validation against the cpu online mask is skipped due to the force
argument being true. The following first_cpu(mask) selection always
selects CPU0 as the target.
Solve the issue by calling irq_set_affinity() with force=false from
the CPU offline irq migration code so the GIC driver validates the
affinity mask against CPU online mask and therefore removes CPU0 from
the possible target candidates.
Tested on TC2 hotplugging CPU0 in and out. Without this patch the system
locks up as the IRQs are not migrated away from CPU0.
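The migration path then calls into the irqchip with force=false
(sketch):

  /* force=false lets the GIC validate 'affinity' against
   * cpu_online_mask and so drop the dying CPU0 */
  ret = c->irq_set_affinity(d, affinity, false);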
Signed-off-by: Sudeep Holla <[email protected]>
Acked-by: Thomas Gleixner <[email protected]>
Acked-by: Mark Rutland <[email protected]>
Cc: <[email protected]> # 3.10.x
Signed-off-by: Russell King <[email protected]>
|
|
After becoming a mandatory function, boot_secondary() is no longer used
outside arch/arm/kernel/smp.c. Hence remove its public prototype, and,
as suggested by Arnd, let it be absorbed by its single caller.
Signed-off-by: Geert Uytterhoeven <[email protected]>
Acked-by: Arnd Bergmann <[email protected]>
Signed-off-by: Russell King <[email protected]>
|
|
On revisions of Cortex-A15 prior to r3p3, a CLREX instruction at PL1 may
falsely trigger a watchpoint exception, leading to potential data aborts
during exception return and/or livelock.
This patch resolves the issue in the following ways:
- Replacing our uses of CLREX with a dummy STREX sequence instead (as
we did for v6 CPUs).
- Removing the clrex code from v7_exit_coherency_flush and derivatives,
since this only exists as a minor performance improvement when
non-cached exclusives are in use (Linux doesn't use these).
Benchmarking on a variety of ARM cores revealed no measurable
performance difference with this change applied, so the change is
performed unconditionally and no new Kconfig entry is added.
Signed-off-by: Mark Rutland <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Cc: [email protected]
Signed-off-by: Russell King <[email protected]>
|