* 'for-linus' of git://neil.brown.name/md: (97 commits)
md: raid-1/10: fix RW bits manipulation
md: remove unnecessary memset from multipath.
md: report device as congested when suspended
md: Improve name of threads created by md_register_thread
md: remove sparse warnings about lock context.
md: remove sparse warning "symbol xxx shadows an earlier one"
async_tx/raid6: add missing dma_unmap calls to the async fail case
ioat3: fix uninitialized var warnings
drivers/dma/ioat/dma_v2.c: fix warnings
raid6test: fix stack overflow
ioat2: clarify ring size limits
md/raid6: cleanup ops_run_compute6_2
md/raid6: eliminate BUG_ON with side effect
dca: module load should not be an error message
ioat: driver version 4.0
dca: registering requesters in multiple dca domains
async_tx: remove HIGHMEM64G restriction
dmaengine: sh: Add Support SuperH DMA Engine driver
dmaengine: Move all map_sg/unmap_sg for slave channel to its client
fsldma: Add DMA_SLAVE support
...
|
|
* 'hwpoison' of git://git.kernel.org/pub/scm/linux/kernel/git/ak/linux-mce-2.6: (21 commits)
HWPOISON: Enable error_remove_page on btrfs
HWPOISON: Add simple debugfs interface to inject hwpoison on arbitrary PFNs
HWPOISON: Add madvise() based injector for hardware poisoned pages v4
HWPOISON: Enable error_remove_page for NFS
HWPOISON: Enable .error_remove_page for migration aware file systems
HWPOISON: The high level memory error handler in the VM v7
HWPOISON: Add PR_MCE_KILL prctl to control early kill behaviour per process
HWPOISON: shmem: call set_page_dirty() with locked page
HWPOISON: Define a new error_remove_page address space op for async truncation
HWPOISON: Add invalidate_inode_page
HWPOISON: Refactor truncate to allow direct truncating of page v2
HWPOISON: check and isolate corrupted free pages v2
HWPOISON: Handle hardware poisoned pages in try_to_unmap
HWPOISON: Use bitmask/action code for try_to_unmap behaviour
HWPOISON: x86: Add VM_FAULT_HWPOISON handling to x86 page fault handler v2
HWPOISON: Add poison check to page fault handling
HWPOISON: Add basic support for poisoned pages in fault handler v3
HWPOISON: Add new SIGBUS error codes for hardware poison signals
HWPOISON: Add support for poison swap entries v2
HWPOISON: Export some rmap vma locking to outside world
...
|
|
This brings Alpha AGP platforms in sync with the change to struct
agp_memory (unsigned long *memory => struct page **pages).
Only compile tested (I don't have titan/marvel hardware), but this change
looks pretty straightforward, so hopefully it's ok.
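For reference, the field change being synced to looks roughly like this
(an illustrative sketch from the commit text, other members elided):

struct agp_memory {
	/* ... */
	/* was: unsigned long *memory; */
	struct page **pages;
	/* ... */
};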
Signed-off-by: Ivan Kokshaysky <[email protected]>
Cc: Richard Henderson <[email protected]>
Cc: Dave Airlie <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
It's unused.
It isn't needed -- the read or write flag is already passed, and sysctl
shouldn't care about the rest.
It _was_ used in two places at arch/frv for some reason.
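As an illustration, the handler-signature shape this implies, using
proc_dointvec as the example (a sketch; the exact prototypes of this era
are assumed):

/* before: an unused file argument */
int proc_dointvec(struct ctl_table *table, int write, struct file *filp,
		  void __user *buffer, size_t *lenp, loff_t *ppos);

/* after: the read-or-write flag alone is enough */
int proc_dointvec(struct ctl_table *table, int write,
		  void __user *buffer, size_t *lenp, loff_t *ppos);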
Signed-off-by: Alexey Dobriyan <[email protected]>
Cc: David Howells <[email protected]>
Cc: "Eric W. Biederman" <[email protected]>
Cc: Al Viro <[email protected]>
Cc: Ralf Baechle <[email protected]>
Cc: Martin Schwidefsky <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: James Morris <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
In order to direct the SIGIO signal to a particular thread of a
multi-threaded application we cannot, as the manpage suggests, put a
TID into the regular fcntl(F_SETOWN) call; the signal will still be sent
to the whole process of which that thread is part.
Since people do want to properly direct SIGIO we introduce F_SETOWN_EX.
The need to direct SIGIO comes from self-monitoring profiling such as with
perf-counters. Perf-counters use SIGIO to notify that new sample data is
available. If the signal is delivered to the same task that generated the
new sample, it can augment that data by inspecting the task's user-space
state right after it returns from the kernel. This is especially convenient
for interpreted or virtual-machine-driven environments.
Both F_SETOWN_EX and F_GETOWN_EX take a pointer to a struct f_owner_ex
as argument:
struct f_owner_ex {
	int   type;
	pid_t pid;
};
Where type is one of F_OWNER_TID, F_OWNER_PID or F_OWNER_GID.
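A minimal usage sketch (illustrative only, not part of the patch; assumes
fd is already set up for O_ASYNC signal delivery):

#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>
#include <sys/syscall.h>

static int sigio_to_this_thread(int fd)
{
	struct f_owner_ex owner = {
		.type = F_OWNER_TID,
		.pid  = syscall(SYS_gettid),	/* this thread, not the process */
	};

	return fcntl(fd, F_SETOWN_EX, &owner);
}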
Signed-off-by: Peter Zijlstra <[email protected]>
Reviewed-by: Oleg Nesterov <[email protected]>
Tested-by: stephane eranian <[email protected]>
Cc: Michael Kerrisk <[email protected]>
Cc: Roland McGrath <[email protected]>
Cc: Al Viro <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
arch/x86/include/asm/topology.h declares inline functions cpu_to_node and
cpumask_of_node for !NUMA, even though they are then defined as
macros by asm-generic/topology.h, which is #included just below.
The macros (which are the same) end up being used; these functions
are just confusing.
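For reference, the asm-generic macros that end up being used look roughly
like this (sketched from that header, shown here for illustration):

#define cpu_to_node(cpu)	((void)(cpu), 0)
#define cpumask_of_node(node)	((void)(node), cpu_online_mask)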
Noticed-by: Linus Torvalds <[email protected]>
Signed-off-by: Rusty Russell <[email protected]>
Cc: Jesse Barnes <[email protected]>
Cc: "Greg Kroah-Hartman" <[email protected]>
Cc: Yinghai Lu <[email protected]>
Cc: Tejun Heo <[email protected]>
LKML-Reference: <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
|
|
If you use the kernel argument:
earlyprintk=serial,ttyS0,115200
This will cause a recursive hang, printing the same line
again and again:
BIOS-e820: 000000003fff3000 - 0000000040000000 (ACPI data)
BIOS-e820: 00000000e0000000 - 00000000f0000000 (reserved)
BIOS-e820: 00000000fec00000 - 0000000100000000 (reserved)
bootconsole [earlyser0] enabled
Linux version 2.6.31-07863-gb64ada6 (mingo@sirius) (gcc version 4.3.2 20081105 (Red Hat 4.3.2-7) (GCC) ) #16789 SMP Wed Sep 23 21:09:43 CEST 2009
Linux version 2.6.31-07863-gb64ada6 (mingo@sirius) (gcc version 4.3.2 20081105 (Red Hat 4.3.2-7) (GCC) ) #16789 SMP Wed Sep 23 21:09:43 CEST 2009
Linux version 2.6.31-07863-gb64ada6 (mingo@sirius) (gcc version 4.3.2 20081105 (Red Hat 4.3.2-7) (GCC) ) #16789 SMP Wed Sep 23 21:09:43 CEST 2009
Linux version 2.6.31-07863-gb64ada6 (mingo@sirius) (gcc version 4.3.2 20081105 (Red Hat 4.3.2-7) (GCC) ) #16789 SMP Wed Sep 23 21:09:43 CEST 2009
Linux version 2.6.31-07863-gb64ada6 (mingo@sirius) (gcc version 4.3.2 20081105 (Red Hat 4.3.2-7) (GCC) ) #16789 SMP Wed Sep 23 21:09:43 CEST 2009
Instead warn the end user that they specified the device
a second time, and ignore that second console.
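A sketch of the guard this describes (illustrative only; the variable
names and exact placement in setup_early_printk() are assumptions):

	if (early_console_initialized) {
		printk(KERN_WARNING
		       "earlyprintk: console specified twice, ignoring '%s'\n",
		       buf);
		return 0;
	}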
Reported-by: Ingo Molnar <[email protected]>
Signed-off-by: Jason Wessel <[email protected]>
Cc: Len Brown <[email protected]>
Cc: Greg KH <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Linus Torvalds <[email protected]>
LKML-Reference: <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
|
|
Merge reason: Queueing up dependent early-printk fix.
Signed-off-by: Ingo Molnar <[email protected]>
|
|
On modern systems, the kernel prints the message
x86 PAT enabled: cpu 0, old 0x7040600070406, new 0x7010600070106
once for every CPU.
This gets kind of ridiculous on huge systems; for example, on a
64-thread system I was lucky enough to get:
$ dmesg | grep 'PAT enabled' | wc
64 704 5174
There is already a BUG() if non-boot CPUs have PAT capabilities
that don't match the boot CPU, so just print the message on the
boot CPU. (I kept the print after the wrmsrl() that enables PAT,
so that the log output continues to mean that the system survived
enabling PAT on the boot CPU)
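A sketch of the described change (illustrative; the boot_cpu test is an
assumption about how the patch distinguishes the boot CPU):

	bool boot_cpu = smp_processor_id() == 0;

	wrmsrl(MSR_IA32_CR_PAT, pat);

	if (boot_cpu)
		printk(KERN_INFO
		       "x86 PAT enabled: cpu %d, old 0x%Lx, new 0x%Lx\n",
		       smp_processor_id(), boot_pat_state, pat);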
Signed-off-by: Roland Dreier <[email protected]>
Cc: Suresh Siddha <[email protected]>
Cc: Venkatesh Pallipadi <[email protected]>
LKML-Reference: <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
|
|
On modern systems, the kernel prints the message
Skipping synchronization checks as TSC is reliable.
once for every non-boot CPU.
This gets kind of ridiculous on huge systems; for example, on a
64-thread system I was lucky enough to get:
$ dmesg | grep 'TSC is reliable' | wc
63 567 4221
There's no point in doing this for every CPU, since the code is
just checking the boot CPU anyway, so change this to a
printk_once() to make the message appear only once.
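A minimal sketch of the printk_once() form (illustrative; the surrounding
feature check is assumed):

	if (boot_cpu_has(X86_FEATURE_TSC_RELIABLE)) {
		printk_once(KERN_INFO
			"Skipping synchronization checks as TSC is reliable.\n");
		return;
	}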
Signed-off-by: Roland Dreier <[email protected]>
LKML-Reference: <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
|
|
In the unaligned kernel exception fixup case the printk() was ordered
before the copy_from_user(), resulting in a nonsensical instruction
value. This fixes up the ordering properly.
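A sketch of the corrected ordering (illustrative only, not the literal
patch; the error label is assumed):

	u16 instruction;

	/* fetch the opcode first, then print it */
	if (copy_from_user(&instruction, (void __user *)regs->pc,
			   sizeof(instruction)))
		goto fixup_failed;

	printk(KERN_NOTICE "Unaligned fixup at 0x%08lx, insn 0x%04x\n",
	       regs->pc, instruction);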
Signed-off-by: Paul Mundt <[email protected]>
|
|
This adds some sanity checking in the unaligned instruction handler to
verify the instruction size, which enables basic support for 16-bit
fixups on SH-2A parts.
Signed-off-by: Paul Mundt <[email protected]>
|
|
I need to disable the heartbeat function because this feature
breaks testing in QEMU.
Signed-off-by: Michal Simek <[email protected]>
|
|
Instead of remembering to specify DTB= on the make command line, this commit
allows the much friendlier
make simpleImage.<dts>
where <dts>.dts is expected to be found in arch/microblaze/boot/dts/.
The resulting vmlinux, with the compiled DTS linked in, is copied to
boot/simpleImage.<dts>. This mirrors the same functionality as on PowerPC,
albeit achieved in a slightly different way.
The simpleImage file is also stripped; the size of the output file is very
similar to linux.bin:
vmlinux - full ELF without FDT blob
simpleImage.<dtb name>.unstrip - full ELF with FDT blob
simpleImage.<dtb name> - stripped ELF with FDT blob
Also add a symlink to the generic system.dts in the platform folder.
Signed-off-by: John Williams <[email protected]>
Signed-off-by: Michal Simek <[email protected]>
|
|
Signed-off-by: Kuninori Morimoto <[email protected]>
Signed-off-by: Paul Mundt <[email protected]>
|
|
Fix the following 'make includecheck' warning:
arch/sh/kernel/dwarf.c: asm/dwarf.h is included more than once.
Signed-off-by: Jaswinder Singh Rajput <[email protected]>
Signed-off-by: Paul Mundt <[email protected]>
|
|
After upgrading to the latest kernel on my mpc875 userspace started
running incredibly slow (hours to get to a shell, even!).
I tracked it down to commit 8d30c14cab30d405a05f2aaceda1e9ad57800f36,
that patch removed a work-around for the 8xx. Adding it
back makes my problem go away.
Signed-off-by: Rex Feany <[email protected]>
Signed-off-by: Benjamin Herrenschmidt <[email protected]>
|
|
The xmon code relies on MSR_RI being non-zero to indicate that an exception
is recoverable. If it is not, it prints a warning message. However, the
PowerPC 4xx cores do not have an MSR_RI bit and this warning is produced for
every xmon event.
This introduces an unrecoverable_excp function to determine if an exception
is recoverable or not. This gets rid of the erroneous warnings on 4xx.
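A sketch of the helper as described (illustrative; the exact config guard
is an assumption):

static int unrecoverable_excp(struct pt_regs *regs)
{
#ifdef CONFIG_4xx
	/* 4xx cores have no MSR_RI bit, so never flag the exception */
	return 0;
#else
	return ((regs->msr & MSR_RI) == 0);
#endif
}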
Signed-off-by: Josh Boyer <[email protected]>
Signed-off-by: Benjamin Herrenschmidt <[email protected]>
|
|
The test to check whether we have _PAGE_SPECIAL defined is broken,
since we always define it, just not always to a meaningful value :-)
That broke 8xx and 40x under some circumstances.
This fixes it by adding _PAGE_SPECIAL for both of these, since they
had a free PTE bit, and by removing the condition around advertising
it.
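The shape of the fix, sketched for illustration (the bit value is an
assumption; the point is that both platforms gain a real definition and
the #ifdef around advertising pte_special() support goes away):

/* e.g. in the 40x PTE definitions */
#define _PAGE_SPECIAL	0x020	/* assumed free software PTE bit */

/* advertised unconditionally */
#define __HAVE_ARCH_PTE_SPECIAL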
Signed-off-by: Benjamin Herrenschmidt <[email protected]>
|
|
Signed-off-by: Tim Abbott <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: [email protected]
Cc: Sam Ravnborg <[email protected]>
Acked-by: Sam Ravnborg <[email protected]>
Signed-off-by: Benjamin Herrenschmidt <[email protected]>
|
|
On machines without the ibm,client-architecture-support call we were missing a
newline. We may as well print the full name in all its glory too - it's
ibm,client-architecture-support, not ibm,client-architecture as I mistakenly
wrote (a name only an IBM architect could love).
For my penance I will write out ibm,client-architecture-support 100 times.
Before:
Calling ibm,client-architecture...command line: root=/dev/sda6 console=hvc0 quiet
After:
Calling ibm,client-architecture-support... not implemented
command line: root=/dev/sda6 console=hvc0
Signed-off-by: Anton Blanchard <[email protected]>
Signed-off-by: Benjamin Herrenschmidt <[email protected]>
|
|
Some System p configurations can already have more than 16 nodes so we
need to increase NODES_SHIFT. I chose 256 to give us some room to grow in the
future, although we can look at something smaller if the memory bloat is
considered too much.
Unless we clamp MAX_ACTIVE_REGIONS we end up with 300kB of extra bloat in
early_node_map in mm/page_alloc.c:
< 6144 early_node_map
> 307200 early_node_map
due to:
#if MAX_NUMNODES >= 32
  /* If there can be many nodes, allow up to 50 holes per node */
  #define MAX_ACTIVE_REGIONS (MAX_NUMNODES*50)
#else
  /* By default, allow up to 256 distinct regions */
  #define MAX_ACTIVE_REGIONS 256
#endif
Since our memory is mostly contiguous it seems reasonable to keep this
at 256 for now. I also set 32bit to 32 to save space (is there any chance
a 32bit system will have more than 32 discontiguous memory ranges?).
Even with that fixed we have a few data structures that grow:
< 896 bootmem_node_data
> 14336 bootmem_node_data
< 1280 node_devices
> 20480 node_devices
< 25088 kmalloc_caches
> 59648 kmalloc_caches
< 1632 hstates
> 21792 hstates
Signed-off-by: Anton Blanchard <[email protected]>
Signed-off-by: Benjamin Herrenschmidt <[email protected]>
|
|
perf_counter uses arch_vma_name() to detect a vdso region which in turn uses
current->mm->context.vdso_base. We need to initialise this before doing
the mmap or else we fail to detect the vdso.
Signed-off-by: Anton Blanchard <[email protected]>
Signed-off-by: Benjamin Herrenschmidt <[email protected]>
|
|
If we are using 1TB segments and we are allowed to randomise the heap, we can
put it above 1TB so it is backed by a 1TB segment. Otherwise the heap will be
in the bottom 1TB which always uses 256MB segments and this may result in a
performance penalty.
This functionality is disabled when heap randomisation is turned off:
echo 1 > /proc/sys/kernel/randomize_va_space
which may be useful when trying to allocate the maximum amount of 16M or 16G
pages.
On a microbenchmark that repeatedly touches 32GB of memory with a stride of
256MB + 4kB (designed to stress 256MB segments while still mapping nicely into
the L1 cache), we see the improvement:
Force malloc to use heap all the time:
# export MALLOC_MMAP_MAX_=0 MALLOC_TRIM_THRESHOLD_=-1
Disable heap randomization:
# echo 1 > /proc/sys/kernel/randomize_va_space
# time ./test
12.51s
Enable heap randomization:
# echo 2 > /proc/sys/kernel/randomize_va_space
# time ./test
1.70s
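A sketch of the placement logic described above (illustrative; brk_rnd()
and the use of a 1TB shift constant are assumptions):

unsigned long arch_randomize_brk(struct mm_struct *mm)
{
	unsigned long base = mm->brk;

	/* with 1TB segments, start the heap above 1TB so it is
	 * backed by a 1TB segment */
	if (mmu_highuser_ssize == MMU_SEGSIZE_1T)
		base = max_t(unsigned long, mm->brk, 1UL << 40);

	return PAGE_ALIGN(base + brk_rnd());
}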
Signed-off-by: Anton Blanchard <[email protected]>
Signed-off-by: Benjamin Herrenschmidt <[email protected]>
|
|
Sometimes this is used to hold a simple offset, and sometimes
it is used to hold a pointer. This patch changes it to a union containing
void * and dma_addr_t. get/set accessors are also provided, because it was
getting a bit ugly to get to the actual data.
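The resulting shape, sketched for illustration (member and accessor names
are assumptions based on the description):

struct dev_archdata {
	/* ... */
	union {
		dma_addr_t	dma_offset;
		void		*ptr;
	} dma_data;
};

static inline dma_addr_t get_dma_offset(struct device *dev)
{
	return dev->archdata.dma_data.dma_offset;
}

static inline void set_dma_offset(struct device *dev, dma_addr_t off)
{
	dev->archdata.dma_data.dma_offset = off;
}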
Signed-off-by: Becky Bruce <[email protected]>
Signed-off-by: Benjamin Herrenschmidt <[email protected]>
|
|
The former is no longer really accurate now that the swiotlb case is
a possibility. I also move it into dma-mapping.h - it no longer
needs to be in dma.c, and there are about to be some more accessors
that should all end up in the same place. A comment is added to
indicate that this function is not used in configs where there is no
simple dma offset, such as the iommu case.
Signed-off-by: Becky Bruce <[email protected]>
Signed-off-by: Benjamin Herrenschmidt <[email protected]>
|
|
Remove duplicated #includes in
arch/powerpc/mm/tlb_low_64e.S
Signed-off-by: Huang Weiyi <[email protected]>
Signed-off-by: Benjamin Herrenschmidt <[email protected]>
|
|
Remove duplicated #includes in
arch/powerpc/kernel/exceptions-64e.S
Signed-off-by: Huang Weiyi <[email protected]>
Signed-off-by: Benjamin Herrenschmidt <[email protected]>
|
|
When using CONFIG_RELOCATABLE, we build the kernel as a position
independent executable. The kernel then uses a little bit of relocation
code to relocate itself. That code only deals with R_PPC64_RELATIVE
relocations though. If for some reason you use assembly constructs
such as LOAD_REG_IMMEDIATE() to load the address of a symbol, you'll
generate different kinds of relocations that won't be processed properly
and bad things will happen. (We have 2 such bugs today).
The perl script tries to filter out "known" bad ones. It's possible
that we are missing some in the case of a weak function that nobody
implements; we'll see if we get false positives and fix them.
Signed-off-by: Tony Breeds <[email protected]>
Signed-off-by: Benjamin Herrenschmidt <[email protected]>
|
|
It doesn't exist!
Signed-off-by: Benjamin Herrenschmidt <[email protected]>
|
|
Prevent NULL dereference if kmalloc() fails.
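The generic pattern (the actual call site is not shown in this log):

	ptr = kmalloc(size, GFP_KERNEL);
	if (!ptr)
		return -ENOMEM;	/* bail out instead of dereferencing NULL */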
Signed-off-by: Roel Kluin <[email protected]>
Signed-off-by: Benjamin Herrenschmidt <[email protected]>
|
|
* git://git.kernel.org/pub/scm/linux/kernel/git/rusty/linux-2.6-for-linus: (39 commits)
cpumask: Move deprecated functions to end of header.
cpumask: remove unused deprecated functions, avoid accusations of insanity
cpumask: use new-style cpumask ops in mm/quicklist.
cpumask: use mm_cpumask() wrapper: x86
cpumask: use mm_cpumask() wrapper: um
cpumask: use mm_cpumask() wrapper: mips
cpumask: use mm_cpumask() wrapper: mn10300
cpumask: use mm_cpumask() wrapper: m32r
cpumask: use mm_cpumask() wrapper: arm
cpumask: Use accessors for cpu_*_mask: um
cpumask: Use accessors for cpu_*_mask: powerpc
cpumask: Use accessors for cpu_*_mask: mips
cpumask: Use accessors for cpu_*_mask: m32r
cpumask: remove arch_send_call_function_ipi
cpumask: arch_send_call_function_ipi_mask: s390
cpumask: arch_send_call_function_ipi_mask: powerpc
cpumask: arch_send_call_function_ipi_mask: mips
cpumask: arch_send_call_function_ipi_mask: m32r
cpumask: arch_send_call_function_ipi_mask: alpha
cpumask: remove obsolete topology_core_siblings and topology_thread_siblings: ia64
...
|
|
* remove asm/atomic.h inclusion from linux/utsname.h --
not needed after kref conversion
* remove linux/utsname.h inclusion from files which do not need it
NOTE: it looks like fs/binfmt_elf.c does not need utsname.h; however,
due to some personality stuff it _is_ needed -- so cowardly leave ELF-related
headers and files alone.
Signed-off-by: Alexey Dobriyan <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Makes code futureproof against the impending change to mm->cpu_vm_mask (to be a pointer).
It's also a chance to use the new cpumask_ ops which take a pointer
(the older ones are deprecated, but there's no hurry for arch code).
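For reference, the wrapper being adopted has roughly this shape (a sketch;
the arch patches simply replace direct &mm->cpu_vm_mask uses with it):

/* Future-safe accessor for struct mm_struct's cpu_vm_mask. */
#define mm_cpumask(mm)	(&(mm)->cpu_vm_mask)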
Signed-off-by: Rusty Russell <[email protected]>
|
|
Makes code futureproof against the impending change to mm->cpu_vm_mask.
It's also a chance to use the new cpumask_ ops which take a pointer
(the older ones are deprecated, but there's no hurry for arch code).
Signed-off-by: Rusty Russell <[email protected]>
|
|
Makes code futureproof against the impending change to mm->cpu_vm_mask.
It's also a chance to use the new cpumask_ ops which take a pointer
(the older ones are deprecated, but there's no hurry for arch code).
Signed-off-by: Rusty Russell <[email protected]>
|
|
Makes code futureproof against the impending change to mm->cpu_vm_mask
(to be a pointer).
It's also a chance to use the new cpumask_ ops which take a pointer
(the older ones are deprecated, but there's no hurry for arch code).
Also change the actual arg name here to "mm" (which it is), not "task".
Signed-off-by: Rusty Russell <[email protected]>
|
|
Makes code futureproof against the impending change to mm->cpu_vm_mask.
It's also a chance to use the new cpumask_ ops which take a pointer
(the older ones are deprecated, but there's no hurry for arch code).
Signed-off-by: Rusty Russell <[email protected]>
Acked-by: Hirokazu Takata <[email protected]> (fixes)
|
|
Makes code futureproof against the impending change to mm->cpu_vm_mask.
It's also a chance to use the new cpumask_ ops which take a pointer
(the older ones are deprecated, but there's no hurry for arch code).
Signed-off-by: Rusty Russell <[email protected]>
|
|
Use the accessors rather than frobbing bits directly (the new versions
are const).
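A sketch of the accessor style replacing the direct bit-frobbing (the
surrounding hotplug context is assumed):

	/* was: cpu_set(cpu, cpu_online_map); */
	set_cpu_online(cpu, true);

	/* was: cpu_clear(cpu, cpu_present_map); */
	set_cpu_present(cpu, false);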
Signed-off-by: Rusty Russell <[email protected]>
Signed-off-by: Mike Travis <[email protected]>
|
|
Use the accessors rather than frobbing bits directly (the new versions
are const).
Signed-off-by: Rusty Russell <[email protected]>
Signed-off-by: Mike Travis <[email protected]>
|
|
Use the accessors rather than frobbing bits directly (the new versions
are const).
Signed-off-by: Rusty Russell <[email protected]>
Signed-off-by: Mike Travis <[email protected]>
|
|
Use the accessors rather than frobbing bits directly (the new versions
are const).
Signed-off-by: Rusty Russell <[email protected]>
Signed-off-by: Mike Travis <[email protected]>
|
|
Now everyone is converted to arch_send_call_function_ipi_mask, remove
the shim and the #defines.
Signed-off-by: Rusty Russell <[email protected]>
|
|
We're weaning the core code off handing cpumasks around on-stack.
This introduces arch_send_call_function_ipi_mask().
Signed-off-by: Rusty Russell <[email protected]>
|
|
We're weaning the core code off handing cpumasks around on-stack.
This introduces arch_send_call_function_ipi_mask(), and by defining
it, the old arch_send_call_function_ipi is defined by the core code.
Signed-off-by: Rusty Russell <[email protected]>
|
|
We're weaning the core code off handing cpumasks around on-stack.
This introduces arch_send_call_function_ipi_mask(), and by defining
it, the old arch_send_call_function_ipi is defined by the core code.
We also take the chance to wean the implementations off the
obsolescent for_each_cpu_mask(): making send_ipi_mask take the pointer
seemed the most natural way to ensure all implementations used
for_each_cpu.
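A sketch of what such an implementation looks like (illustrative; the IPI
helper name is an assumption, not any particular arch's code):

void arch_send_call_function_ipi_mask(const struct cpumask *mask)
{
	unsigned int cpu;

	/* walk with for_each_cpu(), not the obsolescent for_each_cpu_mask() */
	for_each_cpu(cpu, mask)
		send_ipi_single(cpu, SMP_CALL_FUNCTION);
}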
Signed-off-by: Rusty Russell <[email protected]>
|
|
We're weaning the core code off handing cpumasks around on-stack.
This introduces arch_send_call_function_ipi_mask(), and by defining
it, the old arch_send_call_function_ipi is defined by the core code.
We also take the chance to wean the implementations off the
obsolescent for_each_cpu_mask(): making send_ipi_mask take the pointer
seemed the most natural way to ensure all implementations used
for_each_cpu.
Signed-off-by: Rusty Russell <[email protected]>
|
|
We're weaning the core code off handing cpumasks around on-stack.
This introduces arch_send_call_function_ipi_mask().
We also take the chance to wean the send_ipi_message off the
obsolescent for_each_cpu_mask(): making it take a pointer seemed the
most natural way to do this.
Signed-off-by: Rusty Russell <[email protected]>
|
|
topology_core_siblings and topology_thread_siblings were replaced by
topology_core_cpumask and topology_thread_cpumask.
Signed-off-by: Rusty Russell <[email protected]>
|