path: root/arch/x86/kernel/cpu
2015-08-31  Merge tag 'driver-core-4.3-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core  (Linus Torvalds; 1 file, -3/+2)
Pull driver core updates from Greg KH: "Here is the new patches for the driver core / sysfs for 4.3-rc1. Very small number of changes here, all the details are in the shortlog, nothing major happening at all this kernel release, which is nice to see" * tag 'driver-core-4.3-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core: bus: subsys: update return type of ->remove_dev() to void driver core: correct device's shutdown order driver core: fix docbook for device_private.device selftests: firmware: skip timeout checks for kernels without user mode helper kernel, cpu: Remove bogus __ref annotations cpu: Remove bogus __ref annotation of cpu_subsys_online() firmware: fix wrong memory deallocation in fw_add_devm_name() sysfs.txt: update show method notes about sprintf/snprintf/scnprintf usage devres: fix devres_get()
2015-08-31  Merge tag 'char-misc-4.3-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc  (Linus Torvalds; 1 file, -0/+47)
Pull char/misc driver patches from Greg KH: "Here's the "big" char/misc driver update for 4.3-rc1. Not much really interesting here, just a number of little changes all over the place, and some nice consolidation of the nvmem drivers to a common framework. As usual, the mei drivers stand out as the largest "churn" to handle new devices and features in their hardware. All have been in linux-next for a while with no issues" * tag 'char-misc-4.3-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc: (136 commits) auxdisplay: ks0108: initialize local parport variable extcon: palmas: Fix build break due to devm_gpiod_get_optional API change extcon: palmas: Support GPIO based USB ID detection extcon: Fix signedness bugs about break error handling extcon: Drop owner assignment from i2c_driver extcon: arizona: Simplify pdata symantics for micd_dbtime extcon: arizona: Declare 3-pole jack if we detect open circuit on mic extcon: Add exception handling to prevent the NULL pointer access extcon: arizona: Ensure variables are set for headphone detection extcon: arizona: Use gpiod inteface to handle micd_pol_gpio gpio extcon: arizona: Add basic microphone detection DT/ACPI bindings extcon: arizona: Update to use the new device properties API extcon: palmas: Remove the mutually_exclusive array extcon: Remove optional print_state() function pointer of struct extcon_dev extcon: Remove duplicate header file in extcon.h extcon: max77843: Clear IRQ bits state before request IRQ toshiba laptop: replace ioremap_cache with ioremap misc: eeprom: max6875: clean up max6875_read() misc: eeprom: clean up eeprom_read() misc: eeprom: 93xx46: clean up eeprom_93xx46_bin_read/write ...
2015-08-28  x86/mm/mtrr: Remove kernel internal MTRR interfaces: unexport mtrr_add() and mtrr_del()  (Luis R. Rodriguez; 1 file, -2/+0)
The effort to replace mtrr_add() with the architecture-agnostic arch_phys_wc_add() is complete; this ensures that write-combining implementations (PAT on x86) are taken advantage of instead of MTRR. With the effort done now, hide direct MTRR access from drivers. The legacy user-space /proc/mtrr ABI is not affected. Update the x86 documentation on MTRR to reflect the completion of the phasing out of direct access to MTRR, and also add a note on platform firmware code use of MTRRs based on the obituary discussion of MTRRs on Linux [0]. [0] http://lkml.kernel.org/r/[email protected] Signed-off-by: Luis R. Rodriguez <[email protected]> Cc: <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Andy Walls <[email protected]> Cc: Antonino Daplas <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Daniel Vetter <[email protected]> Cc: Dave Airlie <[email protected]> Cc: Dave Hansen <[email protected]> Cc: Davidlohr Bueso <[email protected]> Cc: Doug Ledford <[email protected]> Cc: H. Peter Anvin <[email protected]> Cc: Jean-Christophe Plagniol-Villard <[email protected]> Cc: Juergen Gross <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Mel Gorman <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Suresh Siddha <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Tomi Valkeinen <[email protected]> Cc: Toshi Kani <[email protected]> Cc: Ville Syrjälä <[email protected]> Cc: Vlastimil Babka <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2015-08-23  x86/asm/msr: Make wrmsrl() a function  (Andy Lutomirski; 1 file, -3/+3)
As of cf991de2f614 ("x86/asm/msr: Make wrmsrl_safe() a function"), wrmsrl_safe is a function, but wrmsrl is still a macro. The wrmsrl macro performs invalid shifts if the value argument is 32 bits. This makes it unnecessarily awkward to write code that puts an unsigned long into an MSR. To make this work, syscall_init needs tweaking to stop passing a function pointer to wrmsrl. Signed-off-by: Andy Lutomirski <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Brian Gerst <[email protected]> Cc: Denys Vlasenko <[email protected]> Cc: H. Peter Anvin <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Steven Rostedt <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Willy Tarreau <[email protected]> Link: http://lkml.kernel.org/r/690f0c629a1085d054e2d1ef3da073cfb3f7db92.1437678821.git.luto@kernel.org Signed-off-by: Ingo Molnar <[email protected]>
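The resulting helper is essentially a thin wrapper that splits the 64-bit value into the EDX:EAX halves that the WRMSR instruction expects. A minimal sketch of the idea (not necessarily the exact kernel code):

	/*
	 * Sketch only: because 'val' is a real u64 parameter, the high half is
	 * computed correctly even when the caller passes a 32-bit variable.
	 * The old macro shifted the caller's expression directly, so a 32-bit
	 * argument produced an invalid >> 32 shift.
	 */
	static inline void wrmsrl(unsigned int msr, u64 val)
	{
		wrmsr(msr, (u32)val, (u32)(val >> 32));	/* low 32 bits, high 32 bits */
	}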
2015-08-22  x86/asm/delay: Introduce an MWAITX-based delay with a configurable timer  (Huang Rui; 1 file, -0/+4)
MWAITX can enable a timer with a timer value specified in SW P0 clocks. The SW P0 frequency is the same as the TSC frequency. The timer provides an upper bound on how long the instruction waits before exiting. This way, a delay function in the kernel can leverage the MWAITX timer. When a CPU core executes MWAITX, it will be quiesced in a waiting phase, diminishing its power consumption. This way, we can save power in comparison to our default TSC-based delays. A simple test shows that: $ cat /sys/bus/pci/devices/0000\:00\:18.4/hwmon/hwmon0/power1_acc $ sleep 10000s $ cat /sys/bus/pci/devices/0000\:00\:18.4/hwmon/hwmon0/power1_acc Results: * TSC-based default delay: 485115 uWatts average power * MWAITX-based delay: 252738 uWatts average power Thus, that's about 240 milliWatts less power consumption. The test method relies on the support of AMD CPU accumulated power algorithm in fam15h_power for which patches are forthcoming. Suggested-by: Andy Lutomirski <[email protected]> Suggested-by: Borislav Petkov <[email protected]> Suggested-by: Peter Zijlstra <[email protected]> Signed-off-by: Huang Rui <[email protected]> [ Fix delay truncation. ] Signed-off-by: Borislav Petkov <[email protected]> Cc: Aaron Lu <[email protected]> Cc: Andreas Herrmann <[email protected]> Cc: Aravind Gopalakrishnan <[email protected]> Cc: Fengguang Wu <[email protected]> Cc: Frédéric Weisbecker <[email protected]> Cc: H. Peter Anvin <[email protected]> Cc: Hector Marco-Gisbert <[email protected]> Cc: Jacob Shin <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: John Stultz <[email protected]> Cc: Len Brown <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Paolo Bonzini <[email protected]> Cc: Rafael J. Wysocki <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Tony Li <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
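Roughly, such a delay loop arms the monitor hardware and then issues MWAITX with the timer-enable bit set in ECX and the timeout (in TSC-rate SW P0 clocks) in EBX. The sketch below only illustrates the shape of the loop; the helper and constant names (__monitorx(), __mwaitx(), MWAITX_ECX_TIMER_ENABLE, MWAITX_MAX_WAIT, rdtsc_ordered()) are taken on trust from the series and should be treated as assumptions:

	/* Illustrative sketch of an MWAITX-based delay, not the exact kernel code. */
	static void delay_mwaitx(unsigned long loops)
	{
		u64 start = rdtsc_ordered();

		for (;;) {
			u64 elapsed;
			u64 wait = min_t(u64, loops, MWAITX_MAX_WAIT);	/* assumed cap */

			__monitorx(raw_cpu_ptr(&cpu_tss), 0, 0);	/* arm the monitor */
			/* ECX bit 1 enables the timer; EBX holds the timeout in SW P0 clocks */
			__mwaitx(0, wait, MWAITX_ECX_TIMER_ENABLE);

			elapsed = rdtsc_ordered() - start;
			if (elapsed >= loops)
				break;
			loops -= elapsed;
			start += elapsed;
		}
	}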
2015-08-21  x86/hyperv: Mark the Hyper-V TSC as unstable  (Vitaly Kuznetsov; 1 file, -0/+1)
The Hyper-V top-level functional specification states that "algorithms should be resilient to sudden jumps forward or backward in the TSC value"; this means we should consider the TSC unstable. In some cases the TSC sync tests are able to detect the instability; it was detected in 543 out of 646 boots in my testing: Measured 6277 cycles TSC warp between CPUs, turning off TSC clock. tsc: Marking TSC unstable due to check_tsc_sync_source failed This is, however, just a heuristic. On the Hyper-V platform there are two good clocksources: the MSR-based hyperv_clocksource and the recently introduced TSC page. Signed-off-by: Vitaly Kuznetsov <[email protected]> Cc: Haiyang Zhang <[email protected]> Cc: K. Y. Srinivasan <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
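The change itself is conceptually a one-liner in the Hyper-V platform setup path; a hedged sketch, assuming it sits next to the existing ms_hyperv feature detection:

	/* Sketch: the spec allows the TSC to jump, so never trust it as a clocksource. */
	mark_tsc_unstable("running on Hyper-V");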
2015-08-21  perf/x86/msr: Fix the MSR driver build  (Ingo Molnar; 1 file, -1/+1)
The new MSR PMU driver made use of rdtsc() which does not exist (yet) in this tree: arch/x86/kernel/cpu/perf_event_msr.c:91:3: error: implicit declaration of function 'rdtsc' Use the old rdtscll() primitive for now. Reported-by: kbuild test robot <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Arnaldo Carvalho de Melo <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Signed-off-by: Ingo Molnar <[email protected]>
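For reference, the older primitive is a macro that writes the TSC value into its argument rather than returning it; a hedged sketch of the call-site difference:

	u64 now;

	rdtscll(now);		/* legacy style: stores the TSC into 'now' */
	/* now = rdtsc();	   newer style, not yet available in this tree */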
2015-08-20  Merge branch 'perf/urgent' into perf/core, to pick up fixes before adding more changes  (Ingo Molnar; 2 files, -6/+10)
Signed-off-by: Ingo Molnar <[email protected]>
2015-08-18  Merge branch 'x86/urgent' into x86/asm to fix up conflicts and to pick up fixes  (Ingo Molnar; 2 files, -12/+19)
Conflicts: arch/x86/entry/entry_64_compat.S arch/x86/math-emu/get_address.c Signed-off-by: Ingo Molnar <[email protected]>
2015-08-14  Merge branch 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds; 2 files, -12/+19)
Pull perf fixes from Ingo Molnar: "Misc fixes: PMU driver corner cases, tooling fixes, and an 'AUX' (Intel PT) race related core fix" * 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: perf/x86/intel/cqm: Do not access cpu_data() from CPU_UP_PREPARE handler perf/x86/intel: Fix memory leak on hot-plug allocation fail perf: Fix PERF_EVENT_IOC_PERIOD migration race perf: Fix double-free of the AUX buffer perf: Fix fasync handling on inherited events perf tools: Fix test build error when bindir contains double slash perf stat: Fix transaction lenght metrics perf: Fix running time accounting
2015-08-13  x86/mce: Add a wrapper around mce_log() for injection  (Borislav Petkov; 2 files, -0/+10)
Will be used by an injector module in a following patch. Additionally, add a missing module export reported by 0-DAY kernel test. Reported-by: kbuild test robot <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Tony Luck <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
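A minimal sketch of what such an exported wrapper might look like, assuming the chrdev read mutex (see the rename commit below) is the required serialization:

	/* Sketch, not the exact kernel code: serialize injected records like normal logging. */
	void mce_inject_log(struct mce *m)
	{
		mutex_lock(&mce_chrdev_read_mutex);
		mce_log(m);
		mutex_unlock(&mce_chrdev_read_mutex);
	}
	EXPORT_SYMBOL_GPL(mce_inject_log);	/* usable from the injector module */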
2015-08-13  x86/mce: Rename rcu_dereference_check_mce() to mce_log_get_idx_check()  (Borislav Petkov; 1 file, -4/+4)
The "rcu_" prefix misleads for it being a proper RCU interface which is not. It basically checks whether we're preemptible or holding the chrdev_read mutex. Rename it accordingly. Signed-off-by: Borislav Petkov <[email protected]> Acked-by: Paul E. McKenney <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Tony Luck <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2015-08-13  x86/mce: Reenable CMCI banks when switching back to interrupt mode  (Xie XiuQi; 1 file, -18/+23)
Zhang Liguang reported the following issue: 1) System detects a CMCI storm on the current CPU. 2) Kernel disables the CMCI interrupt on banks owned by the current CPU and switches to poll mode. 3) After the CMCI storm subsides, the kernel switches back to interrupt mode. 4) We expect the system to reenable the CMCI interrupt on banks owned by the current CPU, but cmci_reenable() goes through cmci_discover(), which skips banks that are already owned: mce_intel_adjust_timer |-> cmci_reenable |-> cmci_discover # owned banks are ignored here static void cmci_discover(int banks) ... for (i = 0; i < banks; i++) { ... if (test_bit(i, owned)) # owned banks are ignored here continue; So convert cmci_storm_disable_banks() to cmci_toggle_interrupt_mode(), which controls whether to enable or disable CMCI interrupts with its argument. NB: We cannot clear the owned bit, because otherwise the banks won't be polled. See: 27f6c573e0f7 ("x86, CMCI: Add proper detection of end of CMCI storms") for more info. Reported-by: Zhang Liguang <[email protected]> Signed-off-by: Xie XiuQi <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Cc: <[email protected]> # v3.15+ Cc: H. Peter Anvin <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Tony Luck <[email protected]> Cc: [email protected] Cc: linux-edac <[email protected]> Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
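A sketch of the reworked helper, with CMCI enable/disable controlled by the argument; the per-CPU bank-ownership bitmap and the discover-lock names are assumptions based on the driver, not verified interfaces:

	/* Illustrative sketch; see arch/x86/kernel/cpu/mcheck/mce_intel.c for the real code. */
	static void cmci_toggle_interrupt_mode(bool on)
	{
		unsigned long flags, *owned;
		int bank;
		u64 val;

		raw_spin_lock_irqsave(&cmci_discover_lock, flags);
		owned = this_cpu_ptr(mce_banks_owned);		/* do NOT clear ownership */

		for_each_set_bit(bank, owned, MAX_NR_BANKS) {
			rdmsrl(MSR_IA32_MCx_CTL2(bank), val);
			if (on)
				val |= MCI_CTL2_CMCI_EN;	/* back to interrupt mode */
			else
				val &= ~MCI_CTL2_CMCI_EN;	/* storm: fall back to polling */
			wrmsrl(MSR_IA32_MCx_CTL2(bank), val);
		}

		raw_spin_unlock_irqrestore(&cmci_discover_lock, flags);
	}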
2015-08-13  x86/mce: Clear Local MCE opt-in before kexec  (Ashok Raj; 2 files, -1/+48)
kexec could boot a legacy kernel that has no knowledge of LMCE. Hence we should make sure we clear the LMCE opt-in before a kexec reboot. Signed-off-by: Ashok Raj <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Aravind Gopalakrishnan <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Oleg Nesterov <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Tony Luck <[email protected]> Cc: linux-edac <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
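Conceptually the cleanup clears the LMCE enable bit in MCG_EXT_CTL on each CPU before the kexec reboot; a hedged sketch (the lmce_supported() gate is assumed from the existing LMCE enable path):

	/* Sketch: undo the LMCE opt-in so a legacy kexec'ed kernel sees sane state. */
	static void intel_clear_lmce(void)
	{
		u64 val;

		if (!lmce_supported())
			return;

		rdmsrl(MSR_IA32_MCG_EXT_CTL, val);
		val &= ~MCG_EXT_CTL_LMCE_EN;
		wrmsrl(MSR_IA32_MCG_EXT_CTL, val);
	}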
2015-08-13  x86/mce: Kill drain_mcelog_buffer()  (Borislav Petkov; 1 file, -42/+2)
This used to flush out MCEs logged during early boot and which were in the MCA registers from a previous system run. No need for that now, since we've moved to a genpool. Suggested-by: Tony Luck <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2015-08-13  x86/mce: Avoid potential deadlock due to printk() in MCE context  (Chen, Gong; 3 files, -4/+2)
Printing in MCE context is currently a no-no, as printk() is not NMI-safe. If some of the notifiers on the MCE chain do so, we may deadlock. In order to avoid that, delay printk() to process context, where it is safe. Reported-by: Xie XiuQi <[email protected]> Signed-off-by: Chen, Gong <[email protected]> [ Fold in subsequent patch from Boris for early boot logging. ] Signed-off-by: Tony Luck <[email protected]> [ Kick irq_work in mce_log() directly. ] Signed-off-by: Borislav Petkov <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
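The usual pattern for getting out of NMI/MCE context is to queue an irq_work and do the noisy part from its callback; a rough sketch under that assumption (mce_gen_pool_add() refers to the lockless pool introduced further down this log, and the exact division of work is simplified):

	static struct irq_work mce_irq_work;

	static void mce_irq_work_cb(struct irq_work *entry)
	{
		mce_notify_irq();		/* IRQ/process context: printing is safe here */
	}

	void mce_log(struct mce *m)
	{
		mce_gen_pool_add(m);		/* lockless record pool -- no printk() in MCE context */
		irq_work_queue(&mce_irq_work);	/* defer the reporting */
	}

	/* somewhere in MCE init: init_irq_work(&mce_irq_work, mce_irq_work_cb); */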
2015-08-13  x86/mce: Remove the MCE ring for Action Optional errors  (Chen, Gong; 1 file, -75/+60)
Use unified genpool to save Action Optional error events and put Action Optional error handling in the same notification chain as MCE error decoding. Signed-off-by: Chen, Gong <[email protected]> [ Fold in subsequent patch from Boris for early boot logging. ] Signed-off-by: Tony Luck <[email protected]> [ Correct a lot. ] Signed-off-by: Borislav Petkov <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2015-08-13  x86/mce: Don't use percpu workqueues  (Chen, Gong; 1 file, -7/+7)
An MCE is a rare event. Therefore, there's no need to have per-CPU instances of both normal and IRQ workqueues. Make them both global. Signed-off-by: Chen, Gong <[email protected]> [ Fold in subsequent patch from Rui/Boris/Tony for early boot logging. ] Signed-off-by: Tony Luck <[email protected]> [ Massage commit message. ] Signed-off-by: Borislav Petkov <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2015-08-13  x86/mce: Provide a lockless memory pool to save error records  (Chen, Gong; 4 files, -2/+119)
printk() is not safe to use in MCE context. Add a lockless memory allocator pool to save error records in MCE context. Those records will be issued later, in a printk-safe context. The idea is inspired by the APEI/GHES driver. We're very conservative and allocate only two pages for it but since we're going to use those pages throughout the system's lifetime, we allocate them statically to avoid early boot time allocation woes. Signed-off-by: Chen, Gong <[email protected]> [ Rewrite. ] Signed-off-by: Borislav Petkov <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Tony Luck <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
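A sketch of how such a pool can be set up with lib/genalloc: two statically allocated pages are handed to a gen_pool whose minimum allocation order matches one record node. The function name and struct mce_evt_llist are assumptions based on this log, not verified signatures:

	#define MCE_POOLSZ	(2 * PAGE_SIZE)

	static struct gen_pool *mce_evt_pool;
	static char gen_pool_buf[MCE_POOLSZ];

	/* Sketch: create the lockless record pool early, backed by static storage. */
	static int mce_gen_pool_create(void)
	{
		struct gen_pool *tmpp;

		tmpp = gen_pool_create(ilog2(sizeof(struct mce_evt_llist)), -1);
		if (!tmpp)
			return -ENOMEM;

		if (gen_pool_add(tmpp, (unsigned long)gen_pool_buf, MCE_POOLSZ, -1)) {
			gen_pool_destroy(tmpp);
			return -ENOMEM;
		}

		mce_evt_pool = tmpp;
		return 0;
	}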
2015-08-12  Merge branch 'for-mingo' of git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu into core/rcu  (Ingo Molnar; 1 file, -3/+3)
Pull RCU changes from Paul E. McKenney: - The combination of tree geometry-initialization simplifications and OS-jitter-reduction changes to expedited grace periods. These two are stacked due to the large number of conflicts that would otherwise result. [ With one addition, a temporary commit to silence a lockdep false positive. Additional changes to the expedited grace-period primitives (queued for 4.4) remove the cause of this false positive, and therefore include a revert of this temporary commit. ] - Documentation updates. - Torture-test updates. - Miscellaneous fixes. Signed-off-by: Ingo Molnar <[email protected]>
2015-08-12  perf/x86/intel/pt: Clean up files of Intel Processor Trace  (Takao Indoh; 2 files, -32/+11)
This patch just cleans up some files of Intel Processor Trace, does not change its behavior. This patch removes unused definitions and replaces a constant value with a macro. Signed-off-by: Takao Indoh <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: Alexander Shishkin<[email protected]> Cc: Arnaldo Carvalho de Melo <[email protected]> Cc: H.Peter Anvin <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2015-08-12  perf/x86: Fix MSR PMU driver  (Peter Zijlstra; 1 file, -84/+84)
Currently we only update the sysfs event files per available MSR; we don't actually disallow creating unlisted events. Rework things such that the detection, sysfs listing and event creation are better coordinated. Sadly it appears it's impossible to probe R/O MSRs under virt. This means we have to do the full model table to avoid listing all MSRs all the time. Tested-by: Kan Liang <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Acked-by: Andy Lutomirski <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2015-08-12  Merge branch 'perf/urgent' into perf/core, to pick up fixes before applying new changes  (Ingo Molnar; 2 files, -12/+19)
Signed-off-by: Ingo Molnar <[email protected]>
2015-08-12  perf/x86/intel/cqm: Do not access cpu_data() from CPU_UP_PREPARE handler  (Matt Fleming; 1 file, -5/+3)
Tony reports that booting his 144-cpu machine with maxcpus=10 triggers the following WARN_ON(): [ 21.045727] WARNING: CPU: 8 PID: 647 at arch/x86/kernel/cpu/perf_event_intel_cqm.c:1267 intel_cqm_cpu_prepare+0x75/0x90() [ 21.045744] CPU: 8 PID: 647 Comm: systemd-udevd Not tainted 4.2.0-rc4 #1 [ 21.045745] Hardware name: Intel Corporation BRICKLAND/BRICKLAND, BIOS BRHSXSD1.86B.0066.R00.1506021730 06/02/2015 [ 21.045747] 0000000000000000 0000000082771b09 ffff880856333ba8 ffffffff81669b67 [ 21.045748] 0000000000000000 0000000000000000 ffff880856333be8 ffffffff8107b02a [ 21.045750] ffff88085b789800 ffff88085f68a020 ffffffff819e2470 000000000000000a [ 21.045750] Call Trace: [ 21.045757] [<ffffffff81669b67>] dump_stack+0x45/0x57 [ 21.045759] [<ffffffff8107b02a>] warn_slowpath_common+0x8a/0xc0 [ 21.045761] [<ffffffff8107b15a>] warn_slowpath_null+0x1a/0x20 [ 21.045762] [<ffffffff81036725>] intel_cqm_cpu_prepare+0x75/0x90 [ 21.045764] [<ffffffff81036872>] intel_cqm_cpu_notifier+0x42/0x160 [ 21.045767] [<ffffffff8109a33d>] notifier_call_chain+0x4d/0x80 [ 21.045769] [<ffffffff8109a44e>] __raw_notifier_call_chain+0xe/0x10 [ 21.045770] [<ffffffff8107b538>] _cpu_up+0xe8/0x190 [ 21.045771] [<ffffffff8107b65a>] cpu_up+0x7a/0xa0 [ 21.045774] [<ffffffff8165e920>] cpu_subsys_online+0x40/0x90 [ 21.045777] [<ffffffff81433b37>] device_online+0x67/0x90 [ 21.045778] [<ffffffff81433bea>] online_store+0x8a/0xa0 [ 21.045782] [<ffffffff81430e78>] dev_attr_store+0x18/0x30 [ 21.045785] [<ffffffff8126b6ba>] sysfs_kf_write+0x3a/0x50 [ 21.045786] [<ffffffff8126ad40>] kernfs_fop_write+0x120/0x170 [ 21.045789] [<ffffffff811f0b77>] __vfs_write+0x37/0x100 [ 21.045791] [<ffffffff811f38b8>] ? __sb_start_write+0x58/0x110 [ 21.045795] [<ffffffff81296d2d>] ? security_file_permission+0x3d/0xc0 [ 21.045796] [<ffffffff811f1279>] vfs_write+0xa9/0x190 [ 21.045797] [<ffffffff811f2075>] SyS_write+0x55/0xc0 [ 21.045800] [<ffffffff81067300>] ? do_page_fault+0x30/0x80 [ 21.045804] [<ffffffff816709ae>] entry_SYSCALL_64_fastpath+0x12/0x71 [ 21.045805] ---[ end trace fe228b836d8af405 ]--- The root cause is that CPU_UP_PREPARE is completely the wrong notifier action from which to access cpu_data(), because smp_store_cpu_info() won't have been executed by the target CPU at that point, which in turn means that ->x86_cache_max_rmid and ->x86_cache_occ_scale haven't been filled out. Instead let's invoke our handler from CPU_STARTING and rename it appropriately. Reported-by: Tony Luck <[email protected]> Signed-off-by: Matt Fleming <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: Ashok Raj <[email protected]> Cc: Kanaka Juvva <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Vikas Shivappa <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
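In hotplug-notifier terms the fix amounts to moving the per-CPU setup from the CPU_UP_PREPARE case to CPU_STARTING, which runs on the target CPU after smp_store_cpu_info(); a sketch, with intel_cqm_cpu_starting() standing in for the renamed handler:

	/* Sketch of the notifier change: prepare the CPU from CPU_STARTING instead. */
	switch (action & ~CPU_TASKS_FROZEN) {
	case CPU_STARTING:
		/* cpu_data(cpu) is valid here: smp_store_cpu_info() has run on 'cpu' */
		intel_cqm_cpu_starting(cpu);
		break;
	/* ... other cases unchanged ... */
	}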
2015-08-12  perf/x86/intel: Fix memory leak on hot-plug allocation fail  (Peter Zijlstra; 1 file, -7/+16)
We fail to free the shared_regs allocation if the constraint_list allocation fails. Cure this and be more consistent in NULL-ing the pointers after free. Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Stephane Eranian <[email protected]> Cc: Thomas Gleixner <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
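The shape of the fix, per the changelog, is the classic error-unwind pattern, sketched here with assumed field names and an assumed return value:

	err:
		/* Sketch: free everything allocated so far and NULL the pointers consistently. */
		kfree(cpuc->constraint_list);
		cpuc->constraint_list = NULL;

		kfree(cpuc->shared_regs);
		cpuc->shared_regs = NULL;

		return -ENOMEM;		/* or the notifier-appropriate error, depending on the call site */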
2015-08-09  Merge 4.2-rc6 into char-misc-next  (Greg Kroah-Hartman; 2 files, -6/+10)
We want the fixes in Linus's tree in here as well. Signed-off-by: Greg Kroah-Hartman <[email protected]>
2015-08-05  bus: subsys: update return type of ->remove_dev() to void  (Viresh Kumar; 1 file, -3/+2)
Its return value is not used by the subsys core and nothing meaningful can be done with it, even if we wanted to use it. The subsys device is getting removed anyway. Update the prototype of ->remove_dev() to make its return type void. Fix all usage sites as well. Signed-off-by: Viresh Kumar <[email protected]> Signed-off-by: Greg Kroah-Hartman <[email protected]>
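For reference, the affected callback lives in struct subsys_interface; after the change the prototype looks roughly like this (sketch of the relevant fields only):

	struct subsys_interface {
		const char *name;
		struct bus_type *subsys;
		struct list_head node;
		int (*add_dev)(struct device *dev, struct subsys_interface *sif);
		void (*remove_dev)(struct device *dev, struct subsys_interface *sif);
					/* was: int (*remove_dev)(...) */
	};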
2015-08-04  mshyperv: fix recognition of Hyper-V guest crash MSR's  (Denis V. Lunev; 1 file, -0/+1)
The Hypervisor Top Level Functional Specification v3.1/4.0 notes that CPUID (0x40000003) EDX's 10th bit should be used to check that the Hyper-V guest crash MSR functionality is available. This patch fixes that recognition: currently the code checks the EAX register instead of EDX. Signed-off-by: Andrey Smetanin <[email protected]> Signed-off-by: Denis V. Lunev <[email protected]> Signed-off-by: K. Y. Srinivasan <[email protected]> Signed-off-by: Greg Kroah-Hartman <[email protected]>
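Since the EAX bits of CPUID 0x40000003 end up in ms_hyperv.features and the EDX bits in ms_hyperv.misc_features, the fix boils down to testing bit 10 of the latter; a sketch (the exact name of the feature constant is an assumption):

	/* Sketch: crash MSR support is CPUID 0x40000003 EDX bit 10, i.e. misc_features. */
	if (ms_hyperv.misc_features & HV_FEATURE_GUEST_CRASH_MSR_AVAILABLE)
		pr_info("Hyper-V: guest crash MSRs are available\n");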
2015-08-04  Drivers: hv: vmbus: add special crash handler  (Vitaly Kuznetsov; 1 file, -0/+22)
Full kernel hang is observed when kdump kernel starts after a crash. This hang happens in vmbus_negotiate_version() function on wait_for_completion() as Hyper-V host (Win2012R2 in my testing) never responds to CHANNELMSG_INITIATE_CONTACT as it thinks the connection is already established. We need to perform some mandatory minimalistic cleanup before we start new kernel. Reported-by: K. Y. Srinivasan <[email protected]> Signed-off-by: Vitaly Kuznetsov <[email protected]> Signed-off-by: K. Y. Srinivasan <[email protected]> Signed-off-by: Greg Kroah-Hartman <[email protected]>
2015-08-04  Drivers: hv: vmbus: add special kexec handler  (Vitaly Kuznetsov; 1 file, -0/+24)
When general-purpose kexec (not kdump) is being performed in a Hyper-V guest, the newly booted kernel fails with an MCE error coming from the host. It is the same error which was fixed in the "Drivers: hv: vmbus: Implement the protocol for tearing down vmbus state" commit - monitor pages remain special and when they're being written to (as the new kernel doesn't know these pages are special) bad things happen. We need to perform some minimalistic cleanup before booting a new kernel on kexec. To do so we need to register a special machine_ops.shutdown handler to be executed before the native_machine_shutdown(). Registering a shutdown notification handler via the register_reboot_notifier() call is not sufficient as it happens too early for our purposes. machine_ops is not being exported to modules (and I don't think we want to export it) so let's do this in mshyperv.c. The minimalistic cleanup consists of cleaning up clockevents, synic MSRs, guest os id MSR, and hypercall MSR. Kdump doesn't require all this stuff as it lives in a separate memory space. Signed-off-by: Vitaly Kuznetsov <[email protected]> Signed-off-by: K. Y. Srinivasan <[email protected]> Signed-off-by: Greg Kroah-Hartman <[email protected]>
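Structurally that means hooking machine_ops.shutdown from mshyperv.c and chaining to the native handler; a rough sketch, with hv_kexec_handler standing in for whatever cleanup callback the vmbus driver registers (names and the exact config guard are assumptions):

	static void (*hv_kexec_handler)(void);	/* set by the vmbus driver, sketch */

	#ifdef CONFIG_KEXEC
	static void hv_machine_shutdown(void)
	{
		if (kexec_in_progress && hv_kexec_handler)
			hv_kexec_handler();	/* minimal cleanup: clockevents, synic/hypercall MSRs */
		native_machine_shutdown();
	}
	#endif

	static void __init ms_hyperv_init_platform(void)
	{
		/* ... existing detection ... */
	#ifdef CONFIG_KEXEC
		machine_ops.shutdown = hv_machine_shutdown;
	#endif
	}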
2015-08-04  perf/x86/intel/pebs: Robustify PEBS buffer drain  (Peter Zijlstra; 1 file, -17/+17)
Vince Weaver and Stephane Eranian reported warnings in the PEBS code when running the perf fuzzer. Stephane wrote: > I can reproduce the problem on my HSW running the fuzzer. > > I can see why this could be happening if you are mixing PEBS and non PEBS events > in the bottom 4 counters. I suspect: > for (bit = 0; bit < x86_pmu.max_pebs_events; bit++) { > if ((counts[bit] == 0) && (error[bit] == 0)) > continue; > > This test is not correct when you have non-PEBS events mixed with > PEBS events and they overflow at the same time. They will have > counts[i] != 0 but error[i] == 0, and thus you fall thru the loop > and hit the assert. Or it is something along those lines. The only way I can make this work is if ->status only has !PEBS events set, because if it has both set we'll take that slow path which masks out the !PEBS bits. After masking there are 3 options: - there is one bit set, and its @bit, we increment counts[bit]. - there are multiple bits set, we increment error[] for each set bit, we do not increment counts[]. - there are no bits set, we do nothing. The intent was to never increment counts[] for !PEBS events. Now if we start out with only a single !PEBS event set, we'll pass the test and increment counts[] for a !PEBS and hit the warn. Reported-by: Vince Weaver <[email protected]> Reported-by: Stephane Eranian <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: Arnaldo Carvalho de Melo <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Signed-off-by: Ingo Molnar <[email protected]>
2015-08-04  perf/x86/intel/pebs: Fix event disable PEBS buffer drain  (Liang, Kan; 1 file, -6/+7)
When disabling a PEBS event, we need to drain the buffer. Doing so requires a correct cpuc->pebs_active mask. The current code clears the pebs_active bit before draining the buffer. Fix that. Signed-off-by: "Liang, Kan" <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Vince Weaver<[email protected]> Link: http://lkml.kernel.org/r/37D7C6CF3E00A74B8858931C1DB2F07701885A65@SHSMSX103.ccr.corp.intel.com [ Fixed the SOB. ] Signed-off-by: Ingo Molnar <[email protected]>
2015-08-04  perf/x86: Add an MSR PMU driver  (Andy Lutomirski; 2 files, -0/+244)
This patch adds an MSR PMU to support free-running MSR counters, such as the time- and frequency-related counters TSC, IA32_APERF, IA32_MPERF and IA32_PPERF, as well as SMI_COUNT. The events are exposed in sysfs for use by perf stat and other tools. The files are under /sys/devices/msr/events/ Signed-off-by: Andy Lutomirski <[email protected]> Signed-off-by: Kan Liang <[email protected]> [ s/freq/msr/, added SMI_COUNT, fixed bugs. ] Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2015-08-04  perf/x86/intel/uncore: Add Broadwell-DE uncore support  (Kan Liang; 3 files, -0/+172)
The uncore subsystem for Broadwell-DE is similar to Haswell-EP. There are some differences in pci device IDs, box number and constraints. Please refer to the public document: http://www.intel.com/content/www/us/en/processors/xeon/xeon-d-1500-uncore-performance-monitoring.html Signed-off-by: Kan Liang <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2015-08-04  perf/x86/intel: Use 0x11 as extra reg test value  (Andi Kleen; 1 file, -1/+1)
The next patch adds a new perf extra register where 0x1ff is not a valid value. Use 0x11 instead. Signed-off-by: Andi Kleen <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2015-08-04  perf/x86: Make merge_attr() global to use from perf_event_intel  (Andi Kleen; 2 files, -1/+4)
merge_attr() allows merging two sysfs attribute tables. Export it to be usable by other files too. The next patch is going to use that to extend the sysfs format attributes for a CPU. Signed-off-by: Andi Kleen <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2015-08-04  perf/x86/intel/lbr: Limit LBR accesses to TOS in callstack mode  (Andi Kleen; 1 file, -3/+7)
In callstack mode the LBR is not a ring buffer, but a stack that grows up and down. This means in this case we don't need to access all LBRs, only the ones up to TOS. Do this optimization for the normal LBR read, and the context switch save/restore code. For save/restore it can be done unconditionally, as it only runs when call stack mode is active. This recovers some of the cost of going to 32 LBRs on Skylake. Signed-off-by: Andi Kleen <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2015-08-04  perf/x86/intel/lbr: Use correct index to save/restore LBR_INFO with call stack  (Andi Kleen; 1 file, -2/+2)
Use the correct index to save/restore the LBR_INFO_x MSR in callstack mode. This is more a cleanup, as even with the wrong index the register was correctly saved/restored, and LBR callgraph mode in perf tools does not really need anything in LBR_INFO. But it is still better to use the right index. Signed-off-by: Andi Kleen <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2015-08-04  perf/x86/intel: Add Intel Skylake PMU support  (Andi Kleen; 4 files, -1/+279)
Add perf core PMU support for future Intel Skylake CPU cores. The code is based on Haswell/Broadwell. There is a new cache event list, based on the updated Haswell event list. Skylake has removed most counter constraints on basic events, so the basic constraints table now only has a single entry (plus the fixed counters). TSX support and various other setups are all shared with Haswell. Skylake has 32 LBR entries. Add a new LBR init function to set this up. The filters are all the same as Haswell. It also has a new LBR format with a separate LBR_INFO_* MSR, but that has been already added earlier. Signed-off-by: Andi Kleen <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2015-08-04  perf/x86/intel/lbr: Optimize v4 LBR unfreezing  (Andi Kleen; 1 file, -0/+7)
In Arch perfmon v4 the GLOBAL_STATUS reset automatically unfreezes LBRs. So no need to do it manually in the LBR code. Add a check to skip it. v2: Move test up to beginning of function. Signed-off-by: Andi Kleen <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2015-08-04  perf/x86/intel: Move PMU ACK to after LBR read  (Andi Kleen; 1 file, -1/+1)
With Arch Perfmon v4 the PMU ack unfreezes the LBRs. So we need to do the PMU ack after the LBR reading, otherwise the LBRs would be polluted by the PMI handler. This is a minimal change. In principle the ACK could be moved much later. Signed-off-by: Andi Kleen <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2015-08-04  perf/x86/intel: Handle new arch perfmon v4 status bits  (Andi Kleen; 1 file, -6/+7)
ArchPerfmon v4 has some new status bits in GLOBAL_STATUS. These need to be ignored when deciding whether a NMI was an NMI, to avoid eating all NMIs when they stay set, see: b292d7a10487 ("perf/x86/intel: ignore CondChgd bit to avoid false NMI handling") This patch ignores the new ASIF bit, which indicates that SGX interfered with the PMU, and also the new LBR freezing bits, which are set when the LBRs get frozen, plus the existing CondChange (set by JTAG debuggers and some buggy BIOSes) Signed-off-by: Andi Kleen <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
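In the PMI handler this amounts to masking the informational bits out of the overflow status before the "was this really our NMI?" decision; sketched below, with the macro names treated as assumptions rather than verified definitions:

	/* Sketch: strip status bits that do not indicate a counter overflow. */
	status &= ~(GLOBAL_STATUS_COND_CHG |	/* JTAG debuggers / buggy BIOSes */
		    GLOBAL_STATUS_ASIF |	/* SGX interfered with the PMU */
		    GLOBAL_STATUS_LBRS_FROZEN);	/* v4 LBR freeze-on-PMI */
	if (!status)
		goto done;			/* not ours: don't eat the NMI */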
2015-08-04  perf/x86/intel/lbr: Add support for LBRv5  (Andi Kleen; 2 files, -1/+21)
Add support for the new LBRv5 format used on Intel Skylake CPUs. The flags for mispredict, abort, in_tx etc. moved to a range of separate LBR_INFO_* MSRs. Teach the LBR code to read those. The original LBR registers stay the same, except they have full sign extension now. LBR_INFO also reports a cycle count to the last branch. Report the cycle information using the new "cycles" branch_info output field. In addition we have to context switch and clear the new INFO MSRs to avoid any information leaks. Signed-off-by: Andi Kleen <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2015-08-04  perf/x86/intel/lbr: Allow time stamp for free running PEBSv3  (Andi Kleen; 3 files, -1/+16)
With PEBSv3 the PEBS record contains a time stamp. That means we can allow free-running PEBS without a PMI even if the user program requested a time stamp. This avoids the need to use -T to get free running PEBS, and also avoids any problems with mis-identifying MMAPs later. Move the free_running_flags state into a variable in x86_pmu and use it. This only works when no explicit clock_id is set. Signed-off-by: Andi Kleen <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2015-08-04  perf/x86/intel: Add support for PEBSv3 profiling  (Andi Kleen; 1 file, -3/+33)
PEBSv3 is the same as the existing PEBSv2 used on Haswell, but it adds a new TSC field. Add support to the generic PEBS handler to handle the new format, and overwrite the perf time stamp using the new native_sched_clock_from_tsc(). Right now the time stamp is just slightly more accurate, as it is nearer the actual event trigger point. With the PEBS threshold > 1 patchkit it will be much more accurate, avoiding the problems with MMAP mismatches mentioned earlier. The accurate time stamping is only implemented for the default trace clock for now. v2: Use _skl prefix. Check for default clock_id. Signed-off-by: Andi Kleen <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2015-08-04  perf/x86/intel/pt: Add new timing packet enables  (Alexander Shishkin; 2 files, -1/+73)
The Intel PT chapter in the new Intel Architecture SDM adds several packets, corresponding enable bits and registers that control packet generation. Also, additional bits in the Intel PT CPUID leaf were added to enumerate the presence and parameters of these new packets and features. The packets and enables are: * CYC: cycle accurate mode, provides the number of cycles elapsed since the previous CYC packet; its presence and available threshold values are enumerated via CPUID; * MTC: mini time counter packets, used for tracking TSC time between full TSC packets; its presence and available resolution options are enumerated via CPUID; * PSB packet period is now configurable, available period values are enumerated via CPUID. This patch adds the corresponding bit and register definitions, pmu driver capabilities based on CPUID enumeration, new attribute format bits for the new features, and extends the event configuration validation function to take these into account. Signed-off-by: Alexander Shishkin <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Link: http://lkml.kernel.org/r/1438262131-12725-1-git-send-email-alexander.shishkin@linux.intel.com Signed-off-by: Ingo Molnar <[email protected]>
2015-08-04  perf/x86/intel/pt: Do not force sync packets on every schedule-in  (Alexander Shishkin; 1 file, -2/+5)
Currently, the PT driver zeroes out the status register every time before starting the event. However, all the writable bits are already taken care of in the pt_handle_status() function, except the new PacketByteCnt field, which in new versions of PT contains the number of packet bytes written since the last sync (PSB) packet. Zeroing it out before enabling PT forces a sync packet to be written. This means that, with the existing code, a sync packet (PSB and PSBEND, 18 bytes in total) will be generated every time a PT event is scheduled in. To avoid these unnecessary syncs and save a WRMSR in the fast path, this patch changes the default behavior to not clear the PacketByteCnt field, so that the sync packets will be generated with the period specified as the "psb_period" attribute config field. This has little impact on the trace data as the other packets that are normally sent within PSB+ (between PSB and PSBEND) have their own generation scenarios which do not depend on the sync packets. One exception where we do need to force PSB like this is when tracing starts, so that the decoder has a clear sync point in the trace. For this purpose we already have the hw::itrace_started flag, which we are currently using to output PERF_RECORD_ITRACE_START. This patch moves setting itrace_started from perf core to the pmu::start, where it should still be 0 on the very first run. Signed-off-by: Alexander Shishkin <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Link: http://lkml.kernel.org/r/1438264104-16189-1-git-send-email-alexander.shishkin@linux.intel.com Signed-off-by: Ingo Molnar <[email protected]>
2015-08-04  perf/x86/intel: Fix SLM MSR_OFFCORE_RSP1 valid_mask  (Kan Liang; 1 file, -6/+10)
AVG_LATENCY(bit 38) is only available on MSR_OFFCORE_RSP0. So the bit should be removed from RSP1 valid_mask. Since RSP0 and RSP1 may have different valid_mask, intel_alt_er should validate the config on the alternate offcore reg before replacing it. Signed-off-by: Kan Liang <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2015-08-04  perf/x86/intel/lbr: Kill off intel_pmu_needs_lbr_smpl for good  (Alexander Shishkin; 1 file, -14/+0)
The x86_lbr_exclusive commit (4807034248be "perf/x86: Mark Intel PT and LBR/BTS as mutually exclusive") mistakenly moved intel_pmu_needs_lbr_smpl() to perf_event.h, while another commit (a46a2300019 "perf: Simplify the branch stack check") removed it in favor of needs_branch_stack(). This patch gets rid of intel_pmu_needs_lbr_smpl() for good. Signed-off-by: Alexander Shishkin <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Link: http://lkml.kernel.org/r/1435140349-32588-3-git-send-email-alexander.shishkin@linux.intel.com Signed-off-by: Ingo Molnar <[email protected]>
2015-08-04  perf/x86/intel/bts: Drop redundant declarations  (Alexander Shishkin; 1 file, -3/+0)
Both intel_pmu_enable_bts() and intel_pmu_disable_bts() are in the perf_event.h header file; there is no need to have them declared again in the driver. Signed-off-by: Alexander Shishkin <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Link: http://lkml.kernel.org/r/1435140349-32588-2-git-send-email-alexander.shishkin@linux.intel.com Signed-off-by: Ingo Molnar <[email protected]>