|
Make the entire kernel image available for secondary CPUs rather
than just the first MB of memory. This allows the startup code
to appear in the cpuinit sections.
Signed-off-by: Russell King <[email protected]>
|
|
nommu can jump directly to __mmap_switched without the absolute
address branching which the mmuful kernel does.
Acked-by: Greg Ungerer <[email protected]>
Signed-off-by: Russell King <[email protected]>
|
|
Signed-off-by: Russell King <[email protected]>
|
|
Use _sdata as the start of the data section, rather than _data.
Signed-off-by: Russell King <[email protected]>
|
|
In order for CPUidle to work on SMP systems, an implementation of
cpu_idle_wait() is needed.
This patch duplicates the x86 implementation of cpu_idle_wait() for
ARM.
Tested-by: Colin Cross <[email protected]>
Signed-off-by: Kevin Hilman <[email protected]>
Signed-off-by: Russell King <[email protected]>
|
|
The kernel does not compile for my ARM926EJ-S system U300 due to
the isb instruction inserted in a generic assembler statement by
commit 8925ec4c530094b878e7e28a1fd78e7122afd973, "ARM: 6385/1:
setup: detect aliasing I-cache when D-cache is non-aliasing".
The isb instruction is only available when assembling for v7, so
use the generic isb() macro from setup.h instead.
Acked-by: Will Deacon <[email protected]>
Signed-off-by: Linus Walleij <[email protected]>
Signed-off-by: Russell King <[email protected]>
|
|
Conflicts:
arch/x86/kernel/module.c
Merge reason: Resolve the conflict, pick up fixes.
Signed-off-by: Ingo Molnar <[email protected]>
|
|
Currently, the kernel assumes that if a CPU has a non-aliasing D-cache
then the I-cache is also non-aliasing. This may not be true on ARM cores
from v6 onwards, which may have aliasing I-caches but non-aliasing
D-caches.
This patch adds a cpu_has_aliasing_icache function, which is called from
cacheid_init and adds CACHEID_VIPT_I_ALIASING to the cacheid when
appropriate. A utility macro, icache_is_vipt_aliasing(), is also
provided.
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Russell King <[email protected]>
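Roughly, the resulting helpers look like the simplified sketch below (the real cacheid_is() also masks against the cache types possible for the build's architecture, so treat the details as illustrative):

/* Simplified sketch of the cacheid plumbing. */
#define CACHEID_VIPT_I_ALIASING         (1 << 4)

extern unsigned int cacheid;

#define cacheid_is(val)                 (cacheid & (val))
#define icache_is_vipt_aliasing()       cacheid_is(CACHEID_VIPT_I_ALIASING)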
|
|
No need to send an IPI if there's only one CPU, especially when booting
systems with CONFIG_SMP_ON_UP that may not even support IPIs.
Signed-off-by: Tony Lindgren <[email protected]>
Signed-off-by: Russell King <[email protected]>
|
|
UP systems do not implement all the instructions that SMP systems have,
so in order to boot an SMP kernel on a UP system, we need to rewrite
parts of the kernel.
Do this using an 'alternatives' scheme, where the kernel code and data
is modified prior to initialization to replace the SMP instructions,
thereby rendering the problematical code ineffectual. We use the linker
to generate a list of 32-bit word locations and their replacement values,
and run through these replacements when we detect a UP system.
Signed-off-by: Russell King <[email protected]>
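Conceptually the fixup pass walks the linker-generated table of (location, UP replacement) pairs. A hedged C sketch of that loop follows; the real implementation lives in assembler, and the section symbols and the is_smp() helper shown here are assumptions for illustration:

#include <linux/init.h>
#include <linux/types.h>
#include <asm/cacheflush.h>
#include <asm/smp_plat.h>        /* assumed home of is_smp() */

struct smp_up_alt {
    u32 *loc;        /* location of the SMP-only instruction */
    u32 up_insn;     /* replacement encoding for UP operation */
};

/* Table bounds emitted by the linker script (names are illustrative). */
extern struct smp_up_alt __smpalt_begin[], __smpalt_end[];

static void __init fixup_smp_on_up(void)
{
    struct smp_up_alt *a;

    if (is_smp())    /* genuinely SMP hardware: leave the code alone */
        return;

    for (a = __smpalt_begin; a < __smpalt_end; a++) {
        *a->loc = a->up_insn;
        /* make the rewritten word visible to the instruction side */
        flush_icache_range((unsigned long)a->loc,
                           (unsigned long)a->loc + sizeof(u32));
    }
}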
|
|
This is done to allow use of the coresight components' registers in
assembler code (like the omap sleep code). Also, there shouldn't
be any users of this structure outside the etm driver.
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Alexander Shishkin <[email protected]>
Signed-off-by: Russell King <[email protected]>
|
|
The MOVW instruction moves a 16-bit immediate into the bottom halfword
of the destination register.
This patch ensures that kprobes leaves the 16-bit immediate intact, rather
than assume a 12-bit immediate and mask out the upper 4 bits.
Acked-by: Nicolas Pitre <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Russell King <[email protected]>
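For reference, ARM MOVW splits its 16-bit immediate into imm4 (bits 19:16) and imm12 (bits 11:0); a small helper that extracts the full value looks like this (illustrative, not the kprobes decoder itself):

/* MOVW encodes its 16-bit immediate as imm4 (bits 19:16) : imm12 (bits 11:0). */
static inline unsigned int movw_imm16(unsigned int insn)
{
    return ((insn >> 4) & 0xf000) | (insn & 0x0fff);
}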
|
|
The kernel makes the high vector page visible to user space. This page
contains (amongst others) small code segments that can be executed in
user space. Make this page visible through ptrace and /proc/<pid>/mem
in order to let gdb perform code parsing needed for proper unwinding.
For example, the ERESTART_RESTARTBLOCK handler actually has a stack
frame -- it returns to a PC value stored on the user's stack. To
unwind after a "sleep" system call was interrupted twice, GDB would
have to recognize this situation and understand that stack frame
layout -- which it currently cannot do.
We could fix this by hard-coding addresses in the vector page range into
GDB, but that isn't really portable as not all of those addresses are
guaranteed to remain stable across kernel releases. And having the gdb
process make an exception for this page and get content from its own
address space for it looks strange, and it is not future proof either.
Being located above PAGE_OFFSET, this vma cannot be deleted by
user space code.
Signed-off-by: Nicolas Pitre <[email protected]>
|
|
Signed-off-by: Nicolas Pitre <[email protected]>
|
|
* master.kernel.org:/home/rmk/linux-2.6-arm: (28 commits)
ARM: 6411/1: vexpress: set RAM latencies to 1 cycle for PL310 on ct-ca9x4 tile
ARM: 6409/1: davinci: map sram using MT_MEMORY_NONCACHED instead of MT_DEVICE
ARM: 6408/1: omap: Map only available sram memory
ARM: 6407/1: mmu: Setup MT_MEMORY and MT_MEMORY_NONCACHED L1 entries
ARM: pxa: remove pr_<level> uses of KERN_<level>
ARM: pxa168fb: clear enable bit when not active
ARM: pxa: fix cpu_is_pxa*() not expanding to zero when not configured
ARM: pxa168: fix corrected reset vector
ARM: pxa: Use PIO for PI2C communication on Palm27x
ARM: pxa: Fix Vpac270 gpio_power for MMC
ARM: 6401/1: plug a race in the alignment trap handler
ARM: 6406/1: at91sam9g45: fix i2c bus speed
leds: leds-ns2: fix locking
ARM: dove: fix __io() definition to use bus based offset
dmaengine: fix interrupt clearing for mv_xor
ARM: kirkwood: Unbreak PCIe I/O port
ARM: Fix build error when using KCONFIG_CONFIG
ARM: 6383/1: Implement phys_mem_access_prot() to avoid attributes aliasing
ARM: 6400/1: at91: fix arch_gettimeoffset fallout
ARM: 6398/1: add proc info for ARM11MPCore/Cortex-A9 from ARM
...
|
|
Merge reason: Pick up the latest fixes in -rc5.
Signed-off-by: Ingo Molnar <[email protected]>
|
|
If a signal hits us outside of a syscall and another gets delivered
when we are in sigreturn (e.g. because it had been in sa_mask for
the first one and got sent to us while we'd been in the first handler),
we have a chance of returning from the second handler to location one
insn prior to where we ought to return. If r0 happens to contain -513
(-ERESTARTNOINTR), sigreturn will get confused into doing restart
syscall song and dance.
Incredible joy to debug, since it manifests as random, infrequent and
very hard to reproduce double execution of instructions in userland
code...
The fix is simple - mark it "don't bother with restarts" in wrapper,
i.e. set r8 to 0 in sys_sigreturn and sys_rt_sigreturn wrappers,
suppressing the syscall restart handling on return from these guys.
They can't legitimately return a restart-worthy error anyway.
Testcase:
#include <unistd.h>
#include <signal.h>
#include <stdlib.h>
#include <sys/time.h>
#include <errno.h>

void f(int n)
{
    __asm__ __volatile__(
        "ldr r0, [%0]\n"
        "b 1f\n"
        "b 2f\n"
        "1:b .\n"
        "2:\n" : : "r"(&n));
}

void handler1(int sig) { }
void handler2(int sig) { raise(1); }        /* signal 1 == SIGHUP */
void handler3(int sig) { exit(0); }

int main(void)
{
    struct sigaction s = {.sa_handler = handler2};
    struct itimerval t1 = { .it_value = {1} };
    struct itimerval t2 = { .it_value = {2} };

    signal(1, handler1);                    /* SIGHUP: benign handler */
    sigemptyset(&s.sa_mask);
    sigaddset(&s.sa_mask, 1);               /* block SIGHUP inside SIGALRM handler */
    sigaction(SIGALRM, &s, NULL);
    signal(SIGVTALRM, handler3);
    setitimer(ITIMER_REAL, &t1, NULL);
    setitimer(ITIMER_VIRTUAL, &t2, NULL);
    f(-513); /* -ERESTARTNOINTR */
    write(1, "buggered\n", 9);
    return 1;
}
Signed-off-by: Al Viro <[email protected]>
Acked-by: Russell King <[email protected]>
Cc: [email protected]
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Al Viro reports that calling "sys_sigsuspend(-ERESTARTNOHAND, 0, 0)"
with two signals coming and being handled in kernel space results
in the syscall restart being done twice.
Avoid this by clearing the 'why' flag when we call the signal handling
code to prevent further syscall restarts after the first.
Acked-by: Al Viro <[email protected]>
Signed-off-by: Russell King <[email protected]>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-2.6-trace into perf/core
|
|
Neither the overcommit nor the reservation sysfs parameter was
actually working; remove them, as they'll only get in the way.
Signed-off-by: Peter Zijlstra <[email protected]>
Cc: paulus <[email protected]>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <[email protected]>
|
|
Replace pmu::{enable,disable,start,stop,unthrottle} with
pmu::{add,del,start,stop}, all of which take a flags argument.
The new interface extends the capability to stop a counter while
keeping it scheduled on the PMU. We replace the throttled state with
the generic stopped state.
This also allows us to efficiently stop/start counters over certain
code paths (like IRQ handlers).
It also allows scheduling a counter without it starting, allowing for
a generic frozen state (useful for rotating stopped counters).
The stopped state is implemented in two different ways, depending on
how the architecture implemented the throttled state:
1) We disable the counter:
a) the pmu has per-counter enable bits, we flip that
b) we program a NOP event, preserving the counter state
2) We store the counter state and ignore all read/overflow events
Signed-off-by: Peter Zijlstra <[email protected]>
Cc: paulus <[email protected]>
Cc: stephane eranian <[email protected]>
Cc: Robert Richter <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Paul Mundt <[email protected]>
Cc: Frederic Weisbecker <[email protected]>
Cc: Cyrill Gorcunov <[email protected]>
Cc: Lin Ming <[email protected]>
Cc: Yanmin <[email protected]>
Cc: Deng-Cheng Zhu <[email protected]>
Cc: David Miller <[email protected]>
Cc: Michael Cree <[email protected]>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <[email protected]>
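The resulting callback shape is roughly the following (a sketch based on the description above; the flag values are illustrative):

#define PERF_EF_START   0x01    /* start the counter when adding it */
#define PERF_EF_RELOAD  0x02    /* reload the counter value when starting */
#define PERF_EF_UPDATE  0x04    /* update the counter value when stopping */

struct pmu {
    /* ... other members ... */
    int  (*add)(struct perf_event *event, int flags);   /* schedule on the PMU */
    void (*del)(struct perf_event *event, int flags);   /* unschedule */
    void (*start)(struct perf_event *event, int flags); /* start counting */
    void (*stop)(struct perf_event *event, int flags);  /* stop, stay scheduled */
    /* ... */
};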
|
|
Changes perf_disable() into perf_pmu_disable().
Signed-off-by: Peter Zijlstra <[email protected]>
Cc: paulus <[email protected]>
Cc: stephane eranian <[email protected]>
Cc: Robert Richter <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Paul Mundt <[email protected]>
Cc: Frederic Weisbecker <[email protected]>
Cc: Cyrill Gorcunov <[email protected]>
Cc: Lin Ming <[email protected]>
Cc: Yanmin <[email protected]>
Cc: Deng-Cheng Zhu <[email protected]>
Cc: David Miller <[email protected]>
Cc: Michael Cree <[email protected]>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <[email protected]>
|
|
Since the current perf_disable() usage is only an optimization,
remove it for now. This eases the removal of the __weak
hw_perf_enable() interface.
Signed-off-by: Peter Zijlstra <[email protected]>
Cc: paulus <[email protected]>
Cc: stephane eranian <[email protected]>
Cc: Robert Richter <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Paul Mundt <[email protected]>
Cc: Frederic Weisbecker <[email protected]>
Cc: Cyrill Gorcunov <[email protected]>
Cc: Lin Ming <[email protected]>
Cc: Yanmin <[email protected]>
Cc: Deng-Cheng Zhu <[email protected]>
Cc: David Miller <[email protected]>
Cc: Michael Cree <[email protected]>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <[email protected]>
|
|
Simple registration interface for struct pmu, this provides the
infrastructure for removing all the weak functions.
Signed-off-by: Peter Zijlstra <[email protected]>
Cc: paulus <[email protected]>
Cc: stephane eranian <[email protected]>
Cc: Robert Richter <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Paul Mundt <[email protected]>
Cc: Frederic Weisbecker <[email protected]>
Cc: Cyrill Gorcunov <[email protected]>
Cc: Lin Ming <[email protected]>
Cc: Yanmin <[email protected]>
Cc: Deng-Cheng Zhu <[email protected]>
Cc: David Miller <[email protected]>
Cc: Michael Cree <[email protected]>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <[email protected]>
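The registration entry points are declared roughly as follows (hedged sketch; the signature later grew additional name/type parameters):

int  perf_pmu_register(struct pmu *pmu);
void perf_pmu_unregister(struct pmu *pmu);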
|
|
sed -ie 's/const struct pmu\>/struct pmu/g' `git grep -l "const struct pmu\>"`
Signed-off-by: Peter Zijlstra <[email protected]>
Cc: paulus <[email protected]>
Cc: stephane eranian <[email protected]>
Cc: Robert Richter <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Paul Mundt <[email protected]>
Cc: Frederic Weisbecker <[email protected]>
Cc: Cyrill Gorcunov <[email protected]>
Cc: Lin Ming <[email protected]>
Cc: Yanmin <[email protected]>
Cc: Deng-Cheng Zhu <[email protected]>
Cc: David Miller <[email protected]>
Cc: Michael Cree <[email protected]>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <[email protected]>
|
|
If we're targeting a v6 or v7 core and have at least software perf events
available, then automatically add support for hardware breakpoints.
Cc: Frederic Weisbecker <[email protected]>
Cc: S. Karthikeyan <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Russell King <[email protected]>
|
|
For debuggers to take advantage of the hw-breakpoint framework in the kernel,
it is necessary to expose the API calls via a ptrace interface.
This patch exposes the hardware breakpoints framework as a collection of
virtual registers, accessible using PTRACE_SETHBPREGS and PTRACE_GETHBPREGS
requests. The breakpoints are stored in the debug_info struct of the running
thread.
Cc: Frederic Weisbecker <[email protected]>
Cc: S. Karthikeyan <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Russell King <[email protected]>
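A hedged userspace sketch of poking the new interface: virtual register 0 is the capability/info word, and the request number shown is what the ARM port uses, but verify both against your headers before relying on them.

#include <stdio.h>
#include <sys/ptrace.h>
#include <sys/types.h>

#ifndef PTRACE_GETHBPREGS
#define PTRACE_GETHBPREGS 29    /* ARM-specific request number (assumption) */
#endif

/* pid must already be ptrace-attached and stopped. Virtual register 0 is
 * the info word describing how many breakpoint/watchpoint slots exist. */
static long read_hbp_info(pid_t pid)
{
    long info = 0;

    if (ptrace(PTRACE_GETHBPREGS, pid, (void *)0, &info) < 0) {
        perror("PTRACE_GETHBPREGS");
        return -1;
    }
    return info;
}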
|
|
The hw-breakpoint framework in the kernel requires architecture-specific
support in order to install, remove, validate and manage hardware
breakpoints.
This patch adds initial support for this framework to the ARM architecture,
but restricts the number of watchpoints to a single resource to get around
the fact that the Data Fault Address Register is unknown when a watchpoint
debug exception is taken.
On cores with v7 debug, the kernel can handle breakpoint and watchpoint
exceptions occurring from userspace. Older cores require clients to handle
the exception themselves by registering an appropriate overflow handler
or, in the case of ptrace, handling the raised SIGTRAP.
The memory-mapped extended debug interface is unsupported due to its
unreliability in real implementations.
Cc: Frederic Weisbecker <[email protected]>
Cc: S. Karthikeyan <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Russell King <[email protected]>
|
|
The validate_event function in the ARM perf events backend has the
following problems:
1.) Events that are disabled count towards the cost.
2.) Events associated with other PMUs [for example, software events or
breakpoints] do not count towards the cost, but do fail validation,
causing the group to fail.
This patch changes validate_event so that it ignores events in the
PERF_EVENT_STATE_OFF state or that are scheduled for other PMUs.
Reported-by: Pawel Moll <[email protected]>
Acked-by: Jamie Iles <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Russell King <[email protected]>
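The resulting check looks approximately like this (a sketch based on the description; pmu and armpmu are the existing symbols of the ARM perf backend):

static int validate_event(struct cpu_hw_events *cpuc,
                          struct perf_event *event)
{
    struct hw_perf_event fake_event = event->hw;

    /* Off-state events and events owned by other PMUs never occupy a
     * counter here, so treat them as always schedulable. */
    if (event->pmu != &pmu || event->state <= PERF_EVENT_STATE_OFF)
        return 1;

    return armpmu->get_event_idx(cpuc, &fake_event) >= 0;
}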
|
|
With several sections per module, and dozens of modules, the
searches down the linked list of sections would dominate the
lookup time, dwarfing any savings from the binary search
within the section.
A simple move-to-front optimisation exploits the commonality
of the code paths taken, and in simple real-world tests reduces
the number of steps in the search to barely more than 1.
Signed-off-by: Phil Carmody <[email protected]>
Acked-by: Catalin Marinas <[email protected]>
Signed-off-by: Russell King <[email protected]>
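The move-to-front step itself is tiny; a hedged sketch over a kernel linked list (the structure and field names here are purely illustrative):

#include <linux/list.h>

struct section_table {
    struct list_head node;
    unsigned long begin_addr, end_addr;
};

static LIST_HEAD(section_tables);

static struct section_table *find_table(unsigned long addr)
{
    struct section_table *t;

    list_for_each_entry(t, &section_tables, node) {
        if (addr >= t->begin_addr && addr < t->end_addr) {
            /* Promote the hit: the next lookup is very likely
             * to land in the same table. */
            list_move(&t->node, &section_tables);
            return t;
        }
    }
    return NULL;
}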
|
|
Without these, exit functions cannot be stack-traced, so to speak.
This implies that module unloads that perform allocations (don't
laugh) will cause noisy warnings on the console when kmemleak is
enabled, as it presumes that all code's call chains are traceable.
Similarly, BUGs and WARN_ONs will give additional console spam.
Signed-off-by: Phil Carmody <[email protected]>
Acked-by: Catalin Marinas <[email protected]>
Signed-off-by: Russell King <[email protected]>
|
|
The various sections are all dealt with similarly, so factor out
that common behaviour. (Incorporating Peter Huewe's fix.)
Cc: Peter Huewe <[email protected]>
Signed-off-by: Phil Carmody <[email protected]>
Acked-by: Catalin Marinas <[email protected]>
Signed-off-by: Russell King <[email protected]>
|
|
Less to read.
Signed-off-by: Phil Carmody <[email protected]>
Acked-by: Catalin Marinas <[email protected]>
Signed-off-by: Russell King <[email protected]>
|
|
Handle the different nop and call instructions for Thumb-2. Also, we
need to adjust the recorded mcount_loc addresses because they have the
lsb set.
Cc: Catalin Marinas <[email protected]>
Acked-by: Steven Rostedt <[email protected]> [recordmcount.pl change]
Signed-off-by: Rabin Vincent <[email protected]>
Signed-off-by: Russell King <[email protected]>
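The address adjustment amounts to clearing the Thumb bit before comparing or patching; a hedged one-line helper (name is illustrative, not the kernel's):

/* mcount_loc entries recorded for Thumb-2 code have bit 0 set; clear it
 * before comparing against or patching the recorded call site. */
static inline unsigned long thumb2_call_site(unsigned long addr)
{
    return addr & ~1UL;
}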
|
|
This adds mcount recording and updates dynamic ftrace for ARM to work
with the new ftrace dynamic tracing implementation. It also adds support
for the mcount format used by newer ARM compilers.
With dynamic tracing, mcount() is implemented as a nop. Callsites are
patched on startup with nops, and dynamically patched to call the
ftrace_caller() routine as needed.
Acked-by: Steven Rostedt <[email protected]> [recordmcount.pl change]
Signed-off-by: Rabin Vincent <[email protected]>
Signed-off-by: Russell King <[email protected]>
|
|
Fix the mcount routines to build and run on a kernel built with the
Thumb-2 instruction set by correcting the following errors using the
fixes suggested by Catalin Marinas:
- Problem: The following assembler errors appear at the "adr r0,
ftrace_stub" instruction:
entry-common.S: Assembler messages:
entry-common.S:179: Error: invalid immediate for address calculation (value = 0x00000004)
Fix: The errors don't occur with a non-global symbol, so use one.
- Problem: The "mov lr, pc" does not set the lsb when storing the pc in
lr. The called function returns with "bx lr", and the mode changes
to ARM.
Fix: Add a label on the return address and use "adr lr, BSYM(label)".
We don't modify the old mcount because it won't be built when using
Thumb-2.
Acked-by: Catalin Marinas <[email protected]>
Signed-off-by: Rabin Vincent <[email protected]>
Signed-off-by: Russell King <[email protected]>
|
|
When building as Thumb-2, the ".type foo, %function" annotation in
ENDPROC seems to be required in order for the assembly routines to be
recognized as Thumb-2 code. If the ENDPROC annotations are not present,
calls to these routines are generated as BLX instead of BL.
Acked-by: Catalin Marinas <[email protected]>
Signed-off-by: Rabin Vincent <[email protected]>
Signed-off-by: Russell King <[email protected]>
|
|
With a new enough GCC, ARM function tracing can be supported without the
need for frame pointers. This is essential for Thumb-2 support, since
frame pointers aren't available then.
Acked-by: Catalin Marinas <[email protected]>
Acked-by: Steven Rostedt <[email protected]>
Signed-off-by: Rabin Vincent <[email protected]>
Signed-off-by: Russell King <[email protected]>
|
|
The 2.6.36-rc kernel added three new system calls:
fanotify_init, fanotify_mark, and prlimit64. This patch
wires them up on ARM.
The only non-trivial issue here is the u64 argument to
sys_fanotify_mark(), but it is the 3rd argument and thus
passed in r2/r3 in both kernel and user space, so it causes
no problems.
Tested with a 2.6.36-rc2 EABI kernel on an ixp4xx machine.
Tested-by: Anand Gadiyar <[email protected]>
Signed-off-by: Mikael Pettersson <[email protected]>
Signed-off-by: Russell King <[email protected]>
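For reference, these are the ARM EABI numbers the calls landed at, paired with matching CALL() entries in arch/arm/kernel/calls.S; worth double-checking against the unistd.h in your tree:

#define __NR_fanotify_init      (__NR_SYSCALL_BASE+367)
#define __NR_fanotify_mark      (__NR_SYSCALL_BASE+368)
#define __NR_prlimit64          (__NR_SYSCALL_BASE+369)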
|
|
This is purely a cosmetic change to the ARM perf backend: the current
comments about the relationship between NMIs, interrupt context and
perf_event_do_pending are misleading.
This patch updates the comments so that they reflect what the code
actually does (which is in line with other architectures).
Acked-by: Jamie Iles <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Signed-off-by: Russell King <[email protected]>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/dtor/input
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/dtor/input:
Input: pxa27x_keypad - remove input_free_device() in pxa27x_keypad_remove()
Input: mousedev - fix regression of inverting axes
Input: uinput - add devname alias to allow module on-demand load
Input: hil_kbd - fix compile error
USB: drop tty argument from usb_serial_handle_sysrq_char()
Input: sysrq - drop tty argument from handle_sysrq()
Input: sysrq - drop tty argument from sysrq ops handlers
|
|
Merge reason: pick up perf fixes
Signed-off-by: Ingo Molnar <[email protected]>
|
|
No one is using the tty argument, so let's get rid of it.
Acked-by: Alan Cox <[email protected]>
Acked-by: Jason Wessel <[email protected]>
Acked-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Dmitry Torokhov <[email protected]>
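A sketch of the before/after prototype for the main entry point affected (hedged; check include/linux/sysrq.h for the authoritative declaration):

/* before */ void handle_sysrq(int key, struct tty_struct *tty);
/* after  */ void handle_sysrq(int key);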
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-2.6-trace into perf/core
|
|
Store the kernel and user contexts from the generic layer instead
of archs, this gathers some repetitive code.
Signed-off-by: Frederic Weisbecker <[email protected]>
Acked-by: Paul Mackerras <[email protected]>
Tested-by: Will Deacon <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Arnaldo Carvalho de Melo <[email protected]>
Cc: Stephane Eranian <[email protected]>
Cc: David Miller <[email protected]>
Cc: Paul Mundt <[email protected]>
Cc: Borislav Petkov <[email protected]>
|
|
- Most archs use one callchain buffer per cpu, except x86, which needs
  to deal with NMIs. Provide a default perf_callchain_buffer()
implementation that x86 overrides.
- Centralize all the kernel/user regs handling and invoke new arch
handlers from there: perf_callchain_user() / perf_callchain_kernel()
  That avoids all the user_mode() and current->mm checks, and so on.
- Invert some parameters in perf_callchain_*() helpers: entry to the
left, regs to the right, following the traditional (dst, src).
Signed-off-by: Frederic Weisbecker <[email protected]>
Acked-by: Paul Mackerras <[email protected]>
Tested-by: Will Deacon <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Arnaldo Carvalho de Melo <[email protected]>
Cc: Stephane Eranian <[email protected]>
Cc: David Miller <[email protected]>
Cc: Paul Mundt <[email protected]>
Cc: Borislav Petkov <[email protected]>
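So each arch now implements just these two hooks, with the entry buffer first and the regs second (a hedged sketch of the resulting declarations):

void perf_callchain_kernel(struct perf_callchain_entry *entry, struct pt_regs *regs);
void perf_callchain_user(struct perf_callchain_entry *entry, struct pt_regs *regs);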
|
|
callchain_store() is the same on every arch, so inline it in
perf_event.h and rename it to perf_callchain_store() to avoid
any collision.
This removes repetitive code.
Signed-off-by: Frederic Weisbecker <[email protected]>
Acked-by: Paul Mackerras <[email protected]>
Tested-by: Will Deacon <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Arnaldo Carvalho de Melo <[email protected]>
Cc: Stephane Eranian <[email protected]>
Cc: David Miller <[email protected]>
Cc: Paul Mundt <[email protected]>
Cc: Borislav Petkov <[email protected]>
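The shared helper is essentially a one-line bounds-checked append (sketch of the inlined version):

static inline void
perf_callchain_store(struct perf_callchain_entry *entry, u64 ip)
{
    if (entry->nr < PERF_MAX_STACK_DEPTH)
        entry->ip[entry->nr++] = ip;
}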
|
|
Drop the TASK_RUNNING test on user tasks for callchains, as this
check doesn't seem to make any sense.
Also remove the test for !current, which is not supposed to happen,
and the current->pid test, as this should be handled at the generic
level with the exclude_idle attribute.
Signed-off-by: Frederic Weisbecker <[email protected]>
Tested-by: Will Deacon <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Arnaldo Carvalho de Melo <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Stephane Eranian <[email protected]>
Cc: David Miller <[email protected]>
Cc: Paul Mundt <[email protected]>
Cc: Borislav Petkov <[email protected]>
|
|
* master.kernel.org:/home/rmk/linux-2.6-arm:
VIDEO: amba clcd: don't disable an already disabled clock
ARM: Tighten check for allowable CPSR values
ARM: 6329/1: wire up sys_accept4() on ARM
ARM: 6328/1: Build with -fno-dwarf2-cfi-asm
ARM: 6326/1: kgdb: fix GDB_MAX_REGS no longer used
|
|
Make do_execve() take a const filename pointer so that kernel_execve() compiles
correctly on ARM:
arch/arm/kernel/sys_arm.c:88: warning: passing argument 1 of 'do_execve' discards qualifiers from pointer target type
This also requires the argv and envp arguments to be consted twice, once for
the pointer array and once for the strings the array points to. This is
because do_execve() passes a pointer to the filename (now const) to
copy_strings_kernel(). A simpler alternative would be to cast the filename
pointer in do_execve() when it's passed to copy_strings_kernel().
do_execve() may not change any of the strings it is passed as part of the argv
or envp lists, as some of them are in .rodata, so marking these strings as
const should be fine.
Further, kernel_execve() and sys_execve() need to be changed to match.
This has been test built on x86_64, frv, arm and mips.
Signed-off-by: David Howells <[email protected]>
Tested-by: Ralf Baechle <[email protected]>
Acked-by: Russell King <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
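The resulting prototype, roughly (hedged sketch; check fs/exec.c for the authoritative version):

int do_execve(const char *filename,
              const char __user *const __user *argv,
              const char __user *const __user *envp,
              struct pt_regs *regs);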
|