| author | Dimitri Sivanich <[email protected]> | 2010-03-01 11:48:15 -0600 |
|---|---|---|
| committer | Ingo Molnar <[email protected]> | 2010-03-02 13:36:11 +0100 |
| commit | 14be1f7454ea96ee614467a49cf018a1a383b189 (patch) | |
| tree | 803848d46600aaeb00fdd67fa0888d0c24ddfe13 | |
| parent | 13dda80e48439b446d0bc9bab34b91484bc8f533 (diff) | |
x86: Fix sched_clock_cpu for systems with unsynchronized TSC
On UV systems, the TSC is not synchronized across blades. The
sched_clock_cpu() function is returning values that can go
backwards (I've seen as much as 8 seconds) when switching
between cpus.
As each cpu comes up, early_init_intel() will currently set the
sched_clock_stable flag true. When mark_tsc_unstable() runs, it
clears the flag, but this only occurs once (the first time a cpu
comes up whose TSC is not synchronized with cpu 0). After this,
early_init_intel() will set the flag again as the next cpu comes
up.
Only set sched_clock_stable if tsc has not been marked unstable.
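To make the ordering concrete, here is a small user-space C sketch (not kernel code; the kernel function and flag names are borrowed only for readability, and mark_tsc_unstable()/check_tsc_unstable() are reduced to the assignments relevant here) that replays the sequence described above, once with the old unconditional assignment and once with the guarded assignment this patch introduces:

```c
#include <stdio.h>

static int sched_clock_stable;
static int tsc_unstable;

/* Reduced stand-in for the kernel's check_tsc_unstable(). */
static int check_tsc_unstable(void)
{
	return tsc_unstable;
}

/* Reduced stand-in for the kernel's mark_tsc_unstable(). */
static void mark_tsc_unstable(const char *reason)
{
	if (!tsc_unstable) {
		tsc_unstable = 1;
		sched_clock_stable = 0;
		printf("Marking TSC unstable due to %s\n", reason);
	}
}

/* Pre-patch behavior: each CPU's early init sets the flag unconditionally. */
static void early_init_intel_old(void)
{
	sched_clock_stable = 1;
}

/* Post-patch behavior: honor an earlier mark_tsc_unstable(). */
static void early_init_intel_new(void)
{
	if (!check_tsc_unstable())
		sched_clock_stable = 1;
}

int main(void)
{
	/* cpu0 comes up, cpu1's TSC is found unsynchronized, cpu1's early init runs last. */
	early_init_intel_old();
	mark_tsc_unstable("check_tsc_sync_source failed");
	early_init_intel_old();
	printf("old: sched_clock_stable = %d\n", sched_clock_stable);

	/* Same sequence with the guarded assignment from this patch. */
	sched_clock_stable = 0;
	tsc_unstable = 0;
	early_init_intel_new();
	mark_tsc_unstable("check_tsc_sync_source failed");
	early_init_intel_new();
	printf("new: sched_clock_stable = %d\n", sched_clock_stable);
	return 0;
}
```

Run as-is, the first sequence leaves sched_clock_stable at 1 even though the TSC was already marked unstable (the bug described above), while the guarded version leaves it at 0.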
Signed-off-by: Dimitri Sivanich <[email protected]>
Acked-by: Venkatesh Pallipadi <[email protected]>
Acked-by: Peter Zijlstra <[email protected]>
LKML-Reference: <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
-rw-r--r-- | arch/x86/kernel/cpu/intel.c | 3 |
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
index 879666f4d871..7e1cca13af35 100644
--- a/arch/x86/kernel/cpu/intel.c
+++ b/arch/x86/kernel/cpu/intel.c
@@ -70,7 +70,8 @@ static void __cpuinit early_init_intel(struct cpuinfo_x86 *c)
 	if (c->x86_power & (1 << 8)) {
 		set_cpu_cap(c, X86_FEATURE_CONSTANT_TSC);
 		set_cpu_cap(c, X86_FEATURE_NONSTOP_TSC);
-		sched_clock_stable = 1;
+		if (!check_tsc_unstable())
+			sched_clock_stable = 1;
 	}
 
 	/*