|
cpu_info is already a per_cpu variable, so we can take llc_shared_map
out of cpu_info and declare it as a per_cpu variable directly.
Later references then become simple and direct instead of first
having to dig the mask out of cpu_info.
This also makes smp_store_cpu_info() much simpler by avoiding the
save-and-restore trick.
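A minimal sketch of the resulting shape (the accessor below is
illustrative, not necessarily the exact helper added by the patch):

    /* llc_shared_map as a per_cpu variable in its own right */
    DECLARE_PER_CPU(cpumask_var_t, cpu_llc_shared_map);

    /* direct reference, no detour through cpu_info */
    static inline struct cpumask *cpu_llc_shared_mask(int cpu)
    {
            return per_cpu(cpu_llc_shared_map, cpu);
    }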
Signed-off-by: Yinghai Lu <[email protected]>
Cc: Hans Rosenfeld <[email protected]>
Cc: Alok N Kataria <[email protected]>
Cc: Stephen Hemminger <[email protected]>
Cc: Hans J. Koch <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Andreas Herrmann <[email protected]>
Cc: Robert Richter <[email protected]>
Cc: Suresh Siddha <[email protected]>
LKML-Reference: <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
|
|
x86 smp_ops now has a new op, stop_other_cpus, which takes a "wait"
parameter. This allows the caller to specify whether it wants to wait
until all the cpus have processed the stop IPI. This is required
specifically for the kexec case, where we should wait for all the
cpus to be stopped before starting the new kernel. We now wait for
the cpus to stop in all cases except for panic/kdump, where we expect
things to be broken and we are doing our best to make things work
anyway.
This patch fixes a legitimate regression introduced during 2.6.30 by
commit 4ef702c10b5df18ab04921fc252c26421d4d6c75.
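A sketch of the new hook and a waiting caller (structure layout
abbreviated):

    struct smp_ops {
            /* ... */
            void (*stop_other_cpus)(int wait);
    };

    /* kexec path: wait until every other cpu has handled the stop IPI */
    static inline void stop_other_cpus(void)
    {
            smp_ops.stop_other_cpus(1);
    }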
Signed-off-by: Alok N Kataria <[email protected]>
LKML-Reference: <[email protected]>
Cc: Eric W. Biederman <[email protected]>
Cc: Jeremy Fitzhardinge <[email protected]>
Cc: <[email protected]> v2.6.30-36
Signed-off-by: H. Peter Anvin <[email protected]>
|
|
Add wbinvd_on_cpu() for executing wbinvd on a particular CPU, and
wbinvd_on_all_cpus() for executing it on all CPUs.
[ hpa: renamed lib/smp.c to lib/cache-smp.c ]
[ hpa: wbinvd_on_all_cpus() returns int, but wbinvd() returns
void. Thus, the former cannot be a macro for the latter,
replace with an inline function. ]
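A sketch of the resulting UP stub, per the note above:

    /* an inline function, not a macro, so the int return type is
     * preserved even though wbinvd() itself returns void */
    static inline int wbinvd_on_all_cpus(void)
    {
            wbinvd();
            return 0;
    }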
Signed-off-by: Borislav Petkov <[email protected]>
LKML-Reference: <[email protected]>
Signed-off-by: H. Peter Anvin <[email protected]>
|
|
Now that everyone is converted to arch_send_call_function_ipi_mask,
remove the shim and the #defines.
Signed-off-by: Rusty Russell <[email protected]>
|
|
Ed found that on 32-bit, boot_cpu_physical_apicid is not read
correctly when the mptable is broken.
Interestingly, three paths actually use/set it:
1. acpi: already read from the hardware register by that point
2. mptable: only read from the mptable
3. no MADT and no mptable: use a default apic id of 0 for 64-bit,
   -1 for 32-bit
So we can read the apic id from the hardware register for paths 2
and 3 (sketched below). We trust the hardware register more than we
trust a BIOS data structure (the mptable).
We can also avoid the double set_fixmap() when acpi_lapic is used,
and we need to move the cpu_has_apic check earlier and call
apic_disable().
Also, when we need to update the apic id, we should read and set the
apic version as well, so that quirks are applied precisely.
v2: make path 3 use -1 as the apic id on 64-bit too, so it can be
    read later.
v3: fix whitespace problem pointed out by Ed Swierk
v5: fix boot crash
[ Impact: get correct apic id for bsp other than acpi path ]
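A minimal sketch of the register read for paths 2 and 3; read_apic_id()
here stands for whatever helper returns the id from the local APIC
register:

    /* trust the hardware register over the mptable default */
    if (boot_cpu_physical_apicid == -1U)
            boot_cpu_physical_apicid = read_apic_id();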
Reported-by: Ed Swierk <[email protected]>
Signed-off-by: Yinghai Lu <[email protected]>
Acked-by: Cyrill Gorcunov <[email protected]>
LKML-Reference: <[email protected]>
[ v4: sanity-check in the ACPI case too ]
Signed-off-by: Ingo Molnar <[email protected]>
|
|
Impact: implement new API
We define arch_send_call_function_ipi_mask, and the generic
kernel/smp.c code creates arch_send_call_function_ipi() as a wrapper.
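One plausible shape of that wrapper (a sketch, not the exact shim):

    /* generic fallback: forward the old by-value call to the new
     * mask-pointer variant */
    #define arch_send_call_function_ipi(mask) \
            arch_send_call_function_ipi_mask(&(mask))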
Signed-off-by: Rusty Russell <[email protected]>
|
|
Impact: reduce per-cpu size for CONFIG_CPUMASK_OFFSTACK=y
In most places it's cleaner to use the existing accessor wrappers
cpu_sibling_mask() and cpu_core_mask() (see the sketch below).
I couldn't avoid cleaning up the access in oprofile, either.
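A sketch of the conversion that shrinks the static per-cpu footprint,
using cpu_sibling_map as the example:

    /* before: a full cpumask embedded in the per-cpu area */
    DEFINE_PER_CPU(cpumask_t, cpu_sibling_map);

    /* after: with CONFIG_CPUMASK_OFFSTACK=y this is just a pointer,
     * and the mask itself is allocated at boot for each present cpu */
    DEFINE_PER_CPU(cpumask_var_t, cpu_sibling_map);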
Signed-off-by: Rusty Russell <[email protected]>
|
|
Spread mach_apic.h definitions into genapic.h (with some knock-on
effects on smp.h and apic.h).
Signed-off-by: Ingo Molnar <[email protected]>
|
|
Move its definitions into apic.h.
Signed-off-by: Ingo Molnar <[email protected]>
|
|
- spread out the namespace on a per driver basis
- get rid of macro wrappers
- small cleanups
Signed-off-by: Ingo Molnar <[email protected]>
|
|
Impact: cleanup
Signed-off-by: Brian Gerst <[email protected]>
|
|
tj: moved cpu_number definition out of CONFIG_HAVE_SETUP_PER_CPU_AREA
for voyager.
Signed-off-by: Brian Gerst <[email protected]>
Signed-off-by: Tejun Heo <[email protected]>
|
|
It is an optimization and a cleanup, and adds the following new
generic percpu methods:
percpu_read()
percpu_write()
percpu_add()
percpu_sub()
percpu_and()
percpu_or()
percpu_xor()
and implements support for them on x86. (Other architectures will
fall back to a default implementation.)
The advantage is that, for example, to read a local percpu variable,
instead of this sequence:
return __get_cpu_var(var);
ffffffff8102ca2b: 48 8b 14 fd 80 09 74 mov -0x7e8bf680(,%rdi,8),%rdx
ffffffff8102ca32: 81
ffffffff8102ca33: 48 c7 c0 d8 59 00 00 mov $0x59d8,%rax
ffffffff8102ca3a: 48 8b 04 10 mov (%rax,%rdx,1),%rax
We can get a single instruction by using the optimized variants:
return percpu_read(var);
ffffffff8102ca3f: 65 48 8b 05 91 8f fd mov %gs:0x7efd8f91(%rip),%rax
ffffffff8102ca46: 7e
I also cleaned up the x86-specific APIs and made the x86 code use
these new generic percpu primitives.
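A simplified sketch of the x86 fast path (symbol mangling and the
handling of different operand sizes elided):

    /* one %gs-relative mov, as in the disassembly above */
    #define percpu_read(var)                                \
    ({                                                      \
            typeof(var) __ret;                              \
            asm("mov %%gs:%1, %0"                           \
                : "=r" (__ret) : "m" (var));                \
            __ret;                                          \
    })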
tj: * fixed generic percpu_sub() definition as Roel Kluin pointed out
* added percpu_and() for completeness's sake
* made generic percpu ops atomic against preemption
Signed-off-by: Ingo Molnar <[email protected]>
Signed-off-by: Tejun Heo <[email protected]>
|
|
[ Based on original patch from Christoph Lameter and Mike Travis. ]
Currently pdas and percpu areas are allocated separately. %gs points
to the local pda, and the percpu area can be reached using
pda->data_offset. This patch folds the pda into the percpu area.
Due to a strange gcc requirement, the pda needs to be at the
beginning of the percpu area so that pda->stack_canary ends up at
%gs:40 (sketched below). To achieve this, a new percpu output
section macro - PERCPU_VADDR_PREALLOC() - is added and used to
reserve a pda-sized chunk at the start of the percpu area.
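A sketch of the layout constraint; the pad field is a stand-in for
the real pda fields occupying the first 40 bytes:

    /* gcc's stack protector reads the canary from a fixed %gs:40, so
     * pda->stack_canary must land at offset 40, which in turn forces
     * the pda to be the first object in the per-cpu area */
    struct x8664_pda {
            unsigned long pad[5];           /* offsets 0..39 */
            unsigned long stack_canary;     /* offset 40 == %gs:40 */
            /* ... */
    };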
After this change, for the boot cpu, %gs first points to the pda in
the data.init area and later, during setup_per_cpu_areas(), gets
updated to point to the actual pda. This means that
setup_per_cpu_areas() needs to reload %gs for CPU0 while clearing the
pda area only for the other cpus, as CPU0 has already modified its
pda by the time control reaches setup_per_cpu_areas().
This patch also removes the now-unnecessary get_local_pda() and its
call sites.
A lot of this patch is taken from Mike Travis' "x86_64: Fold pda into
per cpu area" patch.
Signed-off-by: Tejun Heo <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
|
|
to cpumask.h
Impact: cleanup
Signed-off-by: Jaswinder Singh Rajput <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
|
|
Impact: cleanup
Signed-off-by: Jaswinder Singh Rajput <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
|
|
Impact: cleanup
Signed-off-by: Jaswinder Singh Rajput <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
|
|
Impact: cleanup
Signed-off-by: Jaswinder Singh Rajput <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
|
|
Impact: cleanup
Signed-off-by: Jaswinder Singh Rajput <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
|
|
Impact: cleanup
Signed-off-by: Jaswinder Singh Rajput <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
|
|
Impact: cleanup
Signed-off-by: Jaswinder Singh Rajput <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
|
|
Impact: cleanup
Signed-off-by: Jaswinder Singh Rajput <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
|
|
Impact: cleanup, moving NON-SMP stuff from smp.h
Signed-off-by: Jaswinder Singh Rajput <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
|
|
Impact: cleanup, moving NON-SMP stuff from smp.h
Signed-off-by: Jaswinder Singh Rajput <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
|
|
Impact: cleanup
Signed-off-by: Jaswinder Singh Rajput <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
|
|
Impact: use new cpumask API to reduce memory and stack usage
Allocate the following local cpumasks based on the number of cpus
that are present. References use the new cpumask API. (Currently this
is only done for x86_64; x86_32 continues to use the *_map variants.)
cpu_callin_mask
cpu_callout_mask
cpu_initialized_mask
cpu_sibling_setup_mask
Provide the following accessor functions (sketched after this list):
struct cpumask *cpu_sibling_mask(int cpu)
struct cpumask *cpu_core_mask(int cpu)
Other changes: when setting or clearing the cpu online, possible or
present maps, use the accessor functions.
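A sketch of the accessors, assuming the underlying per-cpu maps are
still plain cpumask_t at this point:

    static inline struct cpumask *cpu_sibling_mask(int cpu)
    {
            return &per_cpu(cpu_sibling_map, cpu);
    }

    static inline struct cpumask *cpu_core_mask(int cpu)
    {
            return &per_cpu(cpu_core_map, cpu);
    }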
Signed-off-by: Mike Travis <[email protected]>
Acked-by: Rusty Russell <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
|
|
This patch simply changes cpumask_t to struct cpumask, plus similar
trivial modernizations.
Signed-off-by: Rusty Russell <[email protected]>
Signed-off-by: Mike Travis <[email protected]>
|
|
Impact: cleanup, change parameter passing
* Change genapic interfaces to accept cpumask_t pointers where
  possible (see the sketch below).
* Modify external callers to use cpumask_t pointers in function calls.
* Create new send_IPI_mask_allbutself which is the same as the
  send_IPI_mask functions but removes smp_processor_id() from the
  list. This removes another common need for a temporary cpumask_t
  variable.
* Functions that used a temp cpumask_t variable for:

        cpumask_t allbutme = cpu_online_map;
        cpu_clear(smp_processor_id(), allbutme);
        if (!cpus_empty(allbutme))
                ...

  become:

        if (!cpus_equal(cpu_online_map, cpumask_of_cpu(cpu)))
                ...
* Other minor code optimizations (like using cpus_clear instead of
CPU_MASK_NONE, etc.)
Applies to linux-2.6.tip/master.
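A sketch of how the genapic hooks might look after the change (the
structure layout is illustrative):

    struct genapic {
            /* ... */
            void (*send_IPI_mask)(const cpumask_t *mask, int vector);
            void (*send_IPI_mask_allbutself)(const cpumask_t *mask,
                                             int vector);
    };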
Signed-off-by: Mike Travis <[email protected]>
Signed-off-by: Rusty Russell <[email protected]>
Acked-by: Ingo Molnar <[email protected]>
|
|
Impact: build fix on x86/Voyager
Given commits like this:
| commit dc1e35c6e95e8923cf1d3510438b63c600fee1e2
| Author: Suresh Siddha <[email protected]>
| Date: Tue Jul 29 10:29:19 2008 -0700
|
| x86, xsave: enable xsave/xrstor on cpus with xsave support
which deliberately expose boot cpu dependence to pieces of the
system, I think it's time to explicitly have a variable for it to
prevent this continual misassumption that the boot CPU is zero.
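A sketch of such an explicit variable; boot_cpu_id and the helper are
illustrative names, not necessarily those used by the patch:

    extern int boot_cpu_id;         /* id of the cpu we booted on */

    /* hypothetical helper: compare against the variable, never
     * against a literal 0 */
    static inline int is_boot_cpu(int cpu)
    {
            return cpu == boot_cpu_id;
    }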
Signed-off-by: James Bottomley <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
|
|
Change header guards named "ASM_X86__*" to "_ASM_X86_*" since:
a. the double underscore is ugly and pointless.
b. no leading underscore violates namespace constraints.
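For example, applying the stated pattern to smp.h:

    /* before */
    #ifndef ASM_X86__SMP_H
    #define ASM_X86__SMP_H

    /* after */
    #ifndef _ASM_X86_SMP_H
    #define _ASM_X86_SMP_H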
Signed-off-by: H. Peter Anvin <[email protected]>
|
|
Signed-off-by: Al Viro <[email protected]>
Signed-off-by: H. Peter Anvin <[email protected]>
|