path: root/arch
Age | Commit message | Author | Files | Lines
2016-03-08Merge branch 'kvm-ppc-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/paulus/powerpc into HEADPaolo Bonzini1-0/+14
2016-03-08KVM: VMX: disable PEBS before a guest entryRadim Krčmář1-0/+7
Linux guests on Haswell (and also SandyBridge and Broadwell, at least) would crash if you ran a host command that uses PEBS, like

  perf record -e 'cpu/mem-stores/pp' -a

This happens because KVM uses VMX MSR switching to disable PEBS, but SDM [2015-12] 18.4.4.4 "Re-configuring PEBS Facilities" explains why that isn't safe:

  When software needs to reconfigure PEBS facilities, it should allow a
  quiescent period between stopping the prior event counting and setting up
  a new PEBS event. The quiescent period is to allow any latent residual
  PEBS records to complete its capture at their previously specified buffer
  address (provided by IA32_DS_AREA).

There might not be a quiescent period after the MSR switch, so a CPU ends up using the host's MSR_IA32_DS_AREA to access an area in the guest's memory. (Or MSR switching is just buggy on some models.)

The guest can learn something about the host this way: if the guest doesn't map the address pointed to by MSR_IA32_DS_AREA, the access results in a #PF where we leak the host's MSR_IA32_DS_AREA through CR2. After that, a malicious guest can map and configure memory where MSR_IA32_DS_AREA is pointing and can therefore get output from the host's tracing.

This is not a critical leak, as the host must be the one to initiate PEBS tracing, and I have not been able to get a record from more than one instruction before vmentry in vmx_vcpu_run() (at that point most registers have already been overwritten with the guest's).

We could disable PEBS just a few instructions before vmentry, but disabling it earlier shouldn't affect host tracing too much. We also don't need to switch MSR_IA32_PEBS_ENABLE on VMENTRY, but that optimization isn't worth its code, IMO.

(If you are implementing PEBS for guests, be sure to handle the case where both host and guest enable PEBS, because this patch doesn't.)

Fixes: 26a4f3c08de4 ("perf/x86: disable PEBS on a guest entry.") Cc: <[email protected]> Reported-by: Jiří Olša <[email protected]> Signed-off-by: Radim Krčmář <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
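A minimal sketch of the "disable PEBS shortly before vmentry" idea mentioned above (the helper name and its placement are assumptions; the actual patch works through the VMX MSR-switching lists):

  /*
   * Hypothetical sketch: clear PEBS with a plain WRMSR shortly before
   * VM entry, so that any latent PEBS record completes against the
   * host's IA32_DS_AREA instead of a guest-controlled address.
   */
  static void vmx_disable_host_pebs(void)
  {
          u64 pebs_enable;

          rdmsrl(MSR_IA32_PEBS_ENABLE, pebs_enable);
          if (pebs_enable)
                  wrmsrl(MSR_IA32_PEBS_ENABLE, 0);
  }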
2016-03-08KVM: MMU: micro-optimize gpte_accessPaolo Bonzini1-2/+5
Avoid AND-NOT; most x86 processors lack an instruction for it. Signed-off-by: Paolo Bonzini <[email protected]>
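One common way to drop an AND-NOT in this kind of access computation is to turn it into an XOR when the bit being cleared is known to be set; this is only an illustration of the trick, not necessarily the exact transformation in the patch:

  /*
   * When bit b of x is known to be 1 and f is 0 or 1,
   * "x & ~(f << b)" and "x ^ (f << b)" give the same result,
   * and the XOR form needs no NOT (or BMI1 ANDN) instruction.
   */
  static inline unsigned int clear_known_set_bit(unsigned int x,
                                                 unsigned int f,
                                                 unsigned int b)
  {
          return x ^ (f << b);
  }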
2016-03-08KVM: MMU: simplify last_pte_bitmapPaolo Bonzini2-30/+28
Branch-free code is fun and everybody knows how much Avi loves it, but last_pte_bitmap takes it a bit to the extreme. Since the code is simply doing a range check, like

  (level == 1 || ((gpte & PT_PAGE_SIZE_MASK) && level < N))

we can make it branch-free without storing the entire truth table; it is enough to cache N. Signed-off-by: Paolo Bonzini <[email protected]>
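A rough, branch-free sketch of that range check, caching only N (the function and parameter names are placeholders, not the actual kvm_mmu layout):

  /* last-level check: level == 1 || ((gpte & PT_PAGE_SIZE_MASK) && level < N) */
  static inline bool is_last_gpte_sketch(unsigned int gpte, unsigned int level,
                                         unsigned int cached_n)
  {
          /* the comparisons compile to setcc, so no branches are needed */
          return (level == 1) |
                 (((gpte & PT_PAGE_SIZE_MASK) != 0) & (level < cached_n));
  }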
2016-03-08KVM: MMU: coalesce more page zapping in mmu_sync_childrenPaolo Bonzini1-4/+11
mmu_sync_children can only process up to 16 pages at a time. Check if we need to reschedule, and do not bother zapping the pages until that happens. Reviewed-by: Takuya Yoshikawa <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
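The batching described above presumably boils down to something like this inside mmu_sync_children(), using the kvm_mmu_flush_or_zap() helper introduced further down this log (a sketch with assumed variable names, not the verbatim patch):

          if (need_resched() || spin_needbreak(&vcpu->kvm->mmu_lock)) {
                  /* flush/zap only when we are about to drop the lock */
                  kvm_mmu_flush_or_zap(vcpu, &invalid_list, false, flush);
                  cond_resched_lock(&vcpu->kvm->mmu_lock);
                  flush = false;
          }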
2016-03-08KVM: MMU: move zap/flush to kvm_mmu_get_pagePaolo Bonzini1-20/+20
kvm_mmu_get_page is the only caller of kvm_sync_page_transient and kvm_sync_pages. Moving the handling of the invalid_list there removes the need for the underdocumented kvm_sync_page_transient function. Reviewed-by: Takuya Yoshikawa <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
2016-03-08KVM: MMU: invert return value of mmu.sync_page and *kvm_sync_page*Paolo Bonzini2-19/+16
Return true if the page was synced (and the TLB must be flushed) and false if the page was zapped. Reviewed-by: Takuya Yoshikawa <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
2016-03-08KVM: MMU: cleanup __kvm_sync_page and its callersPaolo Bonzini1-6/+4
Calling kvm_unlink_unsync_page in the middle of __kvm_sync_page makes things unnecessarily tricky. If kvm_mmu_prepare_zap_page is called, it will call kvm_unlink_unsync_page too. So kvm_unlink_unsync_page can be called just as well at the beginning or the end of __kvm_sync_page... which means that we might do it in kvm_sync_page too and remove the parameter. kvm_sync_page ends up being the same code that kvm_sync_pages used to have before the previous patch. Reviewed-by: Takuya Yoshikawa <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
2016-03-08KVM: MMU: use kvm_sync_page in kvm_sync_pagesPaolo Bonzini1-2/+1
If the last argument is true, kvm_unlink_unsync_page is called anyway in __kvm_sync_page (either by kvm_mmu_prepare_zap_page or by __kvm_sync_page itself). Therefore, kvm_sync_pages can just call kvm_sync_page, instead of going through kvm_unlink_unsync_page+__kvm_sync_page. Reviewed-by: Takuya Yoshikawa <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
2016-03-08KVM: MMU: move TLB flush out of __kvm_sync_pagePaolo Bonzini1-29/+24
By doing this, kvm_sync_pages can use __kvm_sync_page instead of reinventing it. Because of kvm_mmu_flush_or_zap, the code does not end up being more complex than before, and more cleanups to kvm_sync_pages will come in the next patches. Reviewed-by: Takuya Yoshikawa <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
2016-03-08KVM: MMU: introduce kvm_mmu_flush_or_zapPaolo Bonzini1-9/+10
This is a generalization of mmu_pte_write_flush_tlb, that also takes care of calling kvm_mmu_commit_zap_page. The next patches will introduce more uses. Reviewed-by: Takuya Yoshikawa <[email protected]> Signed-off-by: Paolo Bonzini <[email protected]>
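From the description, the helper presumably looks roughly like this (a reconstruction, not a verbatim copy of the patch):

  static void kvm_mmu_flush_or_zap(struct kvm_vcpu *vcpu,
                                   struct list_head *invalid_list,
                                   bool remote_flush, bool local_flush)
  {
          if (!list_empty(invalid_list)) {
                  /* committing the zap already forces the required TLB flushes */
                  kvm_mmu_commit_zap_page(vcpu->kvm, invalid_list);
                  return;
          }

          if (remote_flush)
                  kvm_flush_remote_tlbs(vcpu->kvm);
          else if (local_flush)
                  kvm_make_request(KVM_REQ_TLB_FLUSH, vcpu);
  }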
2016-03-08x86/apic: Deinline _flat_send_IPI_mask, save ~150 bytesDenys Vlasenko1-1/+1
_flat_send_IPI_mask: 157 bytes, 3 callsites

      text     data      bss       dec     hex filename
  96183823 20860520 36122624 153166967 9212477 vmlinux1_before
  96183699 20860520 36122624 153166843 92123fb vmlinux

Signed-off-by: Denys Vlasenko <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Daniel J Blueman <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Mike Travis <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2016-03-08x86/apic: Deinline __default_send_IPI_*, save ~200 bytesDenys Vlasenko2-56/+62
__default_send_IPI_shortcut: 49 bytes, 2 callsites
__default_send_IPI_dest_field: 108 bytes, 7 callsites

      text     data      bss       dec     hex filename
  96184086 20860488 36122624 153167198 921255e vmlinux_before
  96183823 20860520 36122624 153166967 9212477 vmlinux

Signed-off-by: Denys Vlasenko <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Daniel J Blueman <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Mike Travis <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2016-03-08Merge branch 'linus' into irq/core, to pick up fixesIngo Molnar201-671/+1305
Signed-off-by: Ingo Molnar <[email protected]>
2016-03-08perf/x86/intel: Fix PEBS data source interpretation on Nehalem/WestmereAndi Kleen3-1/+14
Jiri reported some time ago that some entries in the PEBS data source table in perf do not agree with the SDM. We investigated and the bits changed for Sandy Bridge, but the SDM was not updated. perf already implements the bits correctly for Sandy Bridge and later. This patch patches it up for Nehalem and Westmere. Signed-off-by: Andi Kleen <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2016-03-08perf/x86/pebs: Add proper PEBS constraints for BroadwellStephane Eranian3-1/+27
This patch adds a Broadwell-specific PEBS event constraint table. Broadwell has a fix for the HT corruption bug erratum HSD29 on Haswell. Therefore, there is no need to mark events 0xd0, 0xd1, 0xd2, 0xd3 as requiring exclusive mode across both sibling HT threads. This holds true for regular counting and sampling (see core.c) and PEBS (ds.c), which we fix in this patch. In doing so, we relax event scheduling for these events: they can now be programmed on any of the 4 counters without impacting what is measured on the sibling thread. Signed-off-by: Stephane Eranian <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2016-03-08perf/x86/pebs: Add workaround for broken OVFL status on HSW+Stephane Eranian1-0/+10
This patch fixes an issue with the GLOBAL_OVERFLOW_STATUS bits on Haswell, Broadwell and Skylake processors when using PEBS.

The SDM stipulates that when the PEBS interrupt threshold is crossed, an interrupt is posted and the kernel is interrupted. The kernel will find GLOBAL_OVF_STATUS bit 62 set, indicating there are PEBS records to drain. But the bits corresponding to the actual counters should NOT be set. The kernel follows the SDM and assumes that all PEBS events are processed in the drain_pebs() callback. The kernel then checks for remaining overflows on any other (non-PEBS) events and processes these in the for_each_bit_set(&status) loop.

As it turns out, under certain conditions on HSW and later processors, on a PEBS buffer interrupt, bit 62 is set but the counter bits may be set as well. In that case, the kernel drains PEBS and generates SAMPLES with the EXACT tag, then it processes the counter bits, and generates normal (non-EXACT) SAMPLES.

I ran into this problem while trying to understand why, on HSW, sampling on a PEBS event was sometimes returning SAMPLES without the EXACT tag. This should not happen on user level code because HSW has the eventing_ip, which always points to the instruction that caused the event.

The workaround in this patch simply ensures that the bits for the counters used for PEBS events are cleared after the PEBS buffer has been drained. With this fix 100% of the PEBS samples on my user code report the EXACT tag.

Before:
  $ perf record -e cpu/event=0xd0,umask=0x81/upp ./multichase
  $ perf report -D | fgrep SAMPLES
  PERF_RECORD_SAMPLE(IP, 0x2): 11775/11775: 0x406de5 period: 73469 addr: 0 exact=Y
                          \--- EXACT tag is missing

After:
  $ perf record -e cpu/event=0xd0,umask=0x81/upp ./multichase
  $ perf report -D | fgrep SAMPLES
  PERF_RECORD_SAMPLE(IP, 0x4002): 11775/11775: 0x406de5 period: 73469 addr: 0 exact=Y
                          \--- EXACT tag is set

The problem tends to appear more often when multiple PEBS events are used.

Signed-off-by: Stephane Eranian <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Arnaldo Carvalho de Melo <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Vince Weaver <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
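The workaround described above amounts to masking the PEBS counters out of the overflow status after draining, roughly like this in the PMI handler (a sketch assuming the usual cpuc/status variables):

          if (__test_and_clear_bit(62, (unsigned long *)&status)) {
                  handled++;
                  x86_pmu.drain_pebs(regs);
                  /*
                   * PEBS overflows were already consumed by drain_pebs();
                   * clear their counter bits so they are not reported a
                   * second time as non-EXACT samples.
                   */
                  status &= ~cpuc->pebs_enabled;
          }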
2016-03-08perf/x86/intel: Add definition for PT PMI bitStephane Eranian1-0/+1
This patch adds a definition for GLOBAL_OVFL_STATUS bit 55 which is used with the Processor Trace (PT) feature. Signed-off-by: Stephane Eranian <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Arnaldo Carvalho de Melo <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Vince Weaver <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2016-03-08perf/x86/intel: Fix PEBS warning by only restoring active PMU in pmiKan Liang3-3/+29
This patch tries to fix a PEBS warning found in my stress test. The following perf command can easily trigger the PEBS warning or a spurious NMI error on Skylake/Broadwell/Haswell platforms:

  sudo perf record -e 'cpu/umask=0x04,event=0xc4/pp,cycles,branches,ref-cycles,cache-misses,cache-references' --call-graph fp -b -c1000 -a

Also the NMI watchdog must be enabled.

For this case, the number of events is larger than the number of counters, so perf has to do multiplexing. In perf_mux_hrtimer_handler, it does perf_pmu_disable(), schedules out the old events, rotate_ctx, schedules in the new events and finally perf_pmu_enable(). If the old events include a precise event, MSR_IA32_PEBS_ENABLE should be cleared by perf_pmu_disable() and should stay 0 until perf_pmu_enable() is called and the new event is a precise event.

However, there is a corner case which can restore PEBS_ENABLE to a stale value during that period. In perf_pmu_disable(), GLOBAL_CTRL is set to 0 to stop overflows and the PMIs that follow them. But there may be a pending PMI from an earlier overflow, which cannot be stopped; so even with GLOBAL_CTRL cleared, the kernel can still get a PMI. At the end of the PMI handler, __intel_pmu_enable_all() is called, which restores the stale values if the old events haven't been scheduled out yet. Once the stale PEBS value is set, it cannot be corrected if the new events are non-precise, because pebs_enabled will be set to 0 and x86_pmu.enable_all() will ignore the MSR_IA32_PEBS_ENABLE setting. As a result, the following NMI with the stale PEBS_ENABLE triggers the PEBS warning.

The pending PMI after enabled=0 becomes harmless if the NMI handler does not change the state. This patch checks cpuc->enabled in the PMI handler and only restores the state when the PMU is active.

Here is the dump:

  Call Trace:
   <NMI>  [<ffffffff813c3a2e>] dump_stack+0x63/0x85
   [<ffffffff810a46f2>] warn_slowpath_common+0x82/0xc0
   [<ffffffff810a483a>] warn_slowpath_null+0x1a/0x20
   [<ffffffff8100fe2e>] intel_pmu_drain_pebs_nhm+0x2be/0x320
   [<ffffffff8100caa9>] intel_pmu_handle_irq+0x279/0x460
   [<ffffffff810639b6>] ? native_write_msr_safe+0x6/0x40
   [<ffffffff811f290d>] ? vunmap_page_range+0x20d/0x330
   [<ffffffff811f2f11>] ? unmap_kernel_range_noflush+0x11/0x20
   [<ffffffff8148379f>] ? ghes_copy_tofrom_phys+0x10f/0x2a0
   [<ffffffff814839c8>] ? ghes_read_estatus+0x98/0x170
   [<ffffffff81005a7d>] perf_event_nmi_handler+0x2d/0x50
   [<ffffffff810310b9>] nmi_handle+0x69/0x120
   [<ffffffff810316f6>] default_do_nmi+0xe6/0x100
   [<ffffffff810317f2>] do_nmi+0xe2/0x130
   [<ffffffff817aea71>] end_repeat_nmi+0x1a/0x1e
   [<ffffffff810639b6>] ? native_write_msr_safe+0x6/0x40
   [<ffffffff810639b6>] ? native_write_msr_safe+0x6/0x40
   [<ffffffff810639b6>] ? native_write_msr_safe+0x6/0x40
   <<EOE>>  <IRQ>  [<ffffffff81006df8>] ? x86_perf_event_set_period+0xd8/0x180
   [<ffffffff81006eec>] x86_pmu_start+0x4c/0x100
   [<ffffffff8100722d>] x86_pmu_enable+0x28d/0x300
   [<ffffffff811994d7>] perf_pmu_enable.part.81+0x7/0x10
   [<ffffffff8119cb70>] perf_mux_hrtimer_handler+0x200/0x280
   [<ffffffff8119c970>] ? __perf_install_in_context+0xc0/0xc0
   [<ffffffff8110f92d>] __hrtimer_run_queues+0xfd/0x280
   [<ffffffff811100d8>] hrtimer_interrupt+0xa8/0x190
   [<ffffffff81199080>] ? __perf_read_group_add.part.61+0x1a0/0x1a0
   [<ffffffff81051bd8>] local_apic_timer_interrupt+0x38/0x60
   [<ffffffff817af01d>] smp_apic_timer_interrupt+0x3d/0x50
   [<ffffffff817ad15c>] apic_timer_interrupt+0x8c/0xa0
   <EOI>  [<ffffffff81199080>] ? __perf_read_group_add.part.61+0x1a0/0x1a0
   [<ffffffff81123de5>] ? smp_call_function_single+0xd5/0x130
   [<ffffffff81123ddb>] ? smp_call_function_single+0xcb/0x130
   [<ffffffff81199080>] ? __perf_read_group_add.part.61+0x1a0/0x1a0
   [<ffffffff8119765a>] event_function_call+0x10a/0x120
   [<ffffffff8119c660>] ? ctx_resched+0x90/0x90
   [<ffffffff811971e0>] ? cpu_clock_event_read+0x30/0x30
   [<ffffffff811976d0>] ? _perf_event_disable+0x60/0x60
   [<ffffffff8119772b>] _perf_event_enable+0x5b/0x70
   [<ffffffff81197388>] perf_event_for_each_child+0x38/0xa0
   [<ffffffff811976d0>] ? _perf_event_disable+0x60/0x60
   [<ffffffff811a0ffd>] perf_ioctl+0x12d/0x3c0
   [<ffffffff8134d855>] ? selinux_file_ioctl+0x95/0x1e0
   [<ffffffff8124a3a1>] do_vfs_ioctl+0xa1/0x5a0
   [<ffffffff81036d29>] ? sched_clock+0x9/0x10
   [<ffffffff8124a919>] SyS_ioctl+0x79/0x90
   [<ffffffff817ac4b2>] entry_SYSCALL_64_fastpath+0x1a/0xa4
  ---[ end trace aef202839fe9a71d ]---
  Uhhuh. NMI received for unknown reason 2d on CPU 2.
  Do you have a strange power saving mode enabled?

Signed-off-by: Kan Liang <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Arnaldo Carvalho de Melo <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Stephane Eranian <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Vince Weaver <[email protected]> Link: http://lkml.kernel.org/r/[email protected] [ Fixed various typos and other small details. ] Signed-off-by: Ingo Molnar <[email protected]>
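The "only restore the state when the PMU is active" check presumably reduces to a guard like this at the end of the Intel PMI handler (a sketch; variable names assumed):

  done:
          /*
           * Only restore PMU state when it is active; a PMI that lands while
           * the PMU is disabled must not re-enable stale PEBS_ENABLE bits.
           */
          if (cpuc->enabled)
                  __intel_pmu_enable_all(0, true);

          return handled;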
2016-03-08perf/x86/intel: Use PAGE_SIZE for PEBS buffer size on Core2Jiri Olsa2-2/+12
PEBS buffers larger than PAGE_SIZE make the WRMSR to PERF_GLOBAL_CTRL in intel_pmu_enable_all() mysteriously hang on Core2. As a workaround, fall back to a PAGE_SIZE buffer there.

The hard lockup is easily triggered by running 'perf test attr' repeatedly. Most of the time it gets stuck on a sample session with small periods.

  # perf test attr -vv
  14: struct perf_event_attr setup                             :
  --- start ---
  ...
    'PERF_TEST_ATTR=/tmp/tmpuEKz3B /usr/bin/perf record -o /tmp/tmpuEKz3B/perf.data -c 123 kill >/dev/null 2>&1' ret 1

Reported-by: Arnaldo Carvalho de Melo <[email protected]> Signed-off-by: Jiri Olsa <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Reviewed-by: Andi Kleen <[email protected]> Cc: <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Kan Liang <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Stephane Eranian <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Vince Weaver <[email protected]> Cc: Wang Nan <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
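A sketch of the shape of the workaround (the pebs_buffer_size field and the model check are assumptions here, not the exact patch):

          /* the default PEBS buffer size stays as before ... */
          x86_pmu.pebs_buffer_size = PEBS_BUFFER_SIZE;

          /* ... but Core2 gets a single page to avoid the WRMSR hang */
          if (cpu_model_is_core2())               /* hypothetical predicate */
                  x86_pmu.pebs_buffer_size = PAGE_SIZE;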
2016-03-08x86/mce/AMD: Document some functionalityAravind Gopalakrishnan2-14/+19
In an attempt to aid in understanding of what the threshold_block structure holds, provide comments to describe the members here. Also, trim comments around threshold_restart_bank() and update copyright info. No functional change is introduced. Signed-off-by: Aravind Gopalakrishnan <[email protected]> [ Shorten comments. ] Signed-off-by: Borislav Petkov <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Tony Luck <[email protected]> Cc: linux-edac <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2016-03-08x86/mce: Clarify comments regarding deferred errorAravind Gopalakrishnan1-1/+1
Deferred errors indicate errors that hardware could not fix, but they still do not cause any interruption to program flow: they do not generate an #MC, and the UC bit in MCx_STATUS is not set. Correct the comment. Signed-off-by: Aravind Gopalakrishnan <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Tony Luck <[email protected]> Cc: linux-edac <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2016-03-08x86/mce/AMD: Fix logic to obtain block addressAravind Gopalakrishnan2-29/+59
In upcoming processors, the BLKPTR field is no longer used to indicate the MSR number of the additional register. Instead, it simply indicates the presence of additional MSRs. Fix the logic here to gather the MSR address from MSR_AMD64_SMCA_MCx_MISC() for newer processors and fall back to the existing logic for older processors. [ Drop nextaddr_out label; style cleanups. ] Signed-off-by: Aravind Gopalakrishnan <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Tony Luck <[email protected]> Cc: linux-edac <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2016-03-08x86/mce/AMD, EDAC: Enable error decoding of Scalable MCA errorsAravind Gopalakrishnan2-0/+88
For Scalable MCA enabled processors, errors are listed per IP block. And since it is not required for an IP to map to a particular bank, we need to use HWID and McaType values from the MCx_IPID register to figure out which IP a given bank represents. We also have a new bit (TCC) in the MCx_STATUS register to indicate Task context is corrupt. Add logic here to decode errors from all known IP blocks for Fam17h Model 00-0fh and to print TCC errors. [ Minor fixups. ] Signed-off-by: Aravind Gopalakrishnan <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Tony Luck <[email protected]> Cc: linux-edac <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2016-03-08x86/mce: Move MCx_CONFIG MSR definitionsAravind Gopalakrishnan2-4/+4
Those MSRs are used only by the MCE code so move them there. Signed-off-by: Aravind Gopalakrishnan <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Tony Luck <[email protected]> Cc: linux-edac <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2016-03-08Merge branch 'linus' into ras/core, to pick up fixesIngo Molnar160-514/+1052
Signed-off-by: Ingo Molnar <[email protected]>
2016-03-08arm64: dts: foundation-v8: add SBSA Generic Watchdog device nodeFu Wei1-0/+8
This can serve as an example of adding an SBSA Generic Watchdog device node to the dts files of an SoC that contains an SBSA Generic Watchdog. Acked-by: Arnd Bergmann <[email protected]> Signed-off-by: Fu Wei <[email protected]> [edited subject and moved change to dtsi file] Signed-off-by: Sudeep Holla <[email protected]>
2016-03-08s390/cpumf: Fix lpp detectionChristian Borntraeger1-1/+1
We have to check bit 40 of the facility list before issuing LPP, not bit 48. Otherwise a guest running on a system with the decimal-floating-point zoned-conversion facility and without the set-program-parameters facility might crash on an lpp instruction. Signed-off-by: Christian Borntraeger <[email protected]> Cc: [email protected] # v4.4+ Fixes: e22cf8ca6f75 ("s390/cpumf: rework program parameter setting to detect guest samples") Signed-off-by: Martin Schwidefsky <[email protected]>
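Conceptually the guard is a facility-list test like the following (illustrative only; the actual fix is a one-bit change in the facility number that gets tested):

          /* The set-program-parameters facility is bit 40, not bit 48. */
          if (test_facility(40))
                  lpp_available = true;   /* hypothetical flag guarding the LPP instruction */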
2016-03-08x86/microcode/intel: Drop orig_sum from ext signature checksumBorislav Petkov1-3/+3
It is 0 because for !0 values we would have exited already. Signed-off-by: Borislav Petkov <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Thomas Gleixner <[email protected]>
2016-03-08x86/microcode/intel: Improve microcode sanity-checking error messagesBorislav Petkov1-10/+26
Turn them into proper sentences. Add comments to microcode_sanity_check() to explain what it does. Signed-off-by: Borislav Petkov <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Thomas Gleixner <[email protected]>
2016-03-08x86/microcode/intel: Merge two consecutive if-statementsBorislav Petkov1-5/+5
Merge the two consecutive "if (ext_table_size)". No functional change. Signed-off-by: Borislav Petkov <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Thomas Gleixner <[email protected]>
2016-03-08x86/microcode/intel: Get rid of DWSIZEBorislav Petkov2-3/+2
sizeof(u32) is perfectly clear as it is. Signed-off-by: Borislav Petkov <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Thomas Gleixner <[email protected]>
2016-03-08x86/microcode/intel: Change checksum variables to u32Chris Bainbridge1-4/+4
Microcode checksum verification should be done using unsigned 32-bit values, otherwise the calculation overflow results in undefined behaviour.

This is also nicely documented in the SDM, section "Microcode Update Checksum":

  "To check for a corrupt microcode update, software must perform a unsigned
  DWORD (32-bit) checksum of the microcode update. Even though some fields
  are signed, the checksum procedure treats all DWORDs as unsigned.
  Microcode updates with a header version equal to 00000001H must sum all
  DWORDs that comprise the microcode update. A valid checksum check will
  yield a value of 00000000H."

but for some reason the code has been using ints from the very beginning.

In practice, this bug possibly manifested itself only when doing the microcode data checksum - apparently, currently shipped Intel microcode doesn't have an extended signature table for which we do checksum verification too.

  UBSAN: Undefined behaviour in arch/x86/kernel/cpu/microcode/intel_lib.c:105:12
  signed integer overflow:
  -1500151068 + -2125470173 cannot be represented in type 'int'
  CPU: 0 PID: 0 Comm: swapper Not tainted 4.5.0-rc5+ #495
  ...
  Call Trace:
   dump_stack
   ? inotify_ioctl
   ubsan_epilogue
   handle_overflow
   __ubsan_handle_add_overflow
   microcode_sanity_check
   get_matching_model_microcode.isra.2.constprop.8
   ? early_idt_handler_common
   ? strlcpy
   ? find_cpio_data
   load_ucode_intel_bsp
   load_ucode_bsp
   ? load_ucode_bsp
   x86_64_start_kernel

[ Expand and massage commit message. ] Signed-off-by: Chris Bainbridge <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Thomas Gleixner <[email protected]>
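The SDM rule quoted above is straightforward with unsigned arithmetic, where wraparound is well defined; a self-contained illustration (not the kernel's exact helper):

  #include <stdint.h>
  #include <stddef.h>

  /* Sum all DWORDs of the update as unsigned values; a valid image sums to 0. */
  static uint32_t microcode_checksum(const uint32_t *image, size_t dwords)
  {
          uint32_t sum = 0;

          while (dwords--)
                  sum += *image++;        /* unsigned overflow wraps, no UB */

          return sum;                     /* 0 means the checksum is OK */
  }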
2016-03-08KVM: PPC: Book3S HV: Sanitize special-purpose register values on guest exitPaul Mackerras1-0/+14
Thomas Huth discovered that a guest could cause a hard hang of a host CPU by setting the Instruction Authority Mask Register (IAMR) to a suitable value. It turns out that this is because when the code was added to context-switch the new special-purpose registers (SPRs) that were added in POWER8, we forgot to add code to ensure that they were restored to a sane value on guest exit. This adds code to set those registers where a bad value could compromise the execution of the host kernel to a suitable neutral value on guest exit. Cc: [email protected] # v3.14+ Fixes: b005255e12a3 Reported-by: Thomas Huth <[email protected]> Reviewed-by: David Gibson <[email protected]> Signed-off-by: Paul Mackerras <[email protected]>
2016-03-07PCI: Move pci_dma_* helpers to common codeChristoph Hellwig21-49/+0
For a long time all architectures implement the pci_dma_* functions using the generic DMA API, and they all use the same header to do so. Move this header, pci-dma-compat.h, to include/linux and include it from the generic pci.h instead of having each arch duplicate this include. Signed-off-by: Christoph Hellwig <[email protected]> Signed-off-by: Bjorn Helgaas <[email protected]>
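For reference, the compat wrappers are thin inlines over the generic DMA API, along these lines (abridged from memory; see include/linux/pci-dma-compat.h for the authoritative versions):

  static inline void *
  pci_alloc_consistent(struct pci_dev *hwdev, size_t size, dma_addr_t *dma_handle)
  {
          return dma_alloc_coherent(hwdev == NULL ? NULL : &hwdev->dev,
                                    size, dma_handle, GFP_ATOMIC);
  }

  static inline void
  pci_free_consistent(struct pci_dev *hwdev, size_t size, void *vaddr,
                      dma_addr_t dma_handle)
  {
          dma_free_coherent(hwdev == NULL ? NULL : &hwdev->dev,
                            size, vaddr, dma_handle);
  }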
2016-03-07frv/PCI: Remove stray pci_{alloc,free}_consistent() declarationChristoph Hellwig1-6/+0
FRV doesn't implement pci_{alloc,free}_consistent(), so remove the declarations for them. Signed-off-by: Christoph Hellwig <[email protected]> Signed-off-by: Bjorn Helgaas <[email protected]> Acked-by: David S. Miller <[email protected]> Acked-by: David Howells <[email protected]>
2016-03-07s390/pci: add ioctl interface for CLPMartin Schwidefsky4-39/+293
Provide a user space interface to issue call logical-processor instructions. Only selected CLP commands are allowed, enough to get the full overview of the installed PCI functions. Reviewed-by: Sebastian Ott <[email protected]> Signed-off-by: Martin Schwidefsky <[email protected]>
2016-03-07s390: Use pr_warn instead of pr_warningJoe Perches4-19/+12
Convert the uses of pr_warning to pr_warn so there are fewer uses of the old pr_warning.

Miscellanea:
 o Align arguments
 o Coalesce formats

Signed-off-by: Joe Perches <[email protected]> Signed-off-by: Heiko Carstens <[email protected]> Signed-off-by: Martin Schwidefsky <[email protected]>
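The conversion is mechanical; for example (the message string here is made up, the shape of the change is what matters):

          /* before */
          pr_warning("%s: request %d failed\n", __func__, rc);

          /* after */
          pr_warn("%s: request %d failed\n", __func__, rc);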
2016-03-07ARM64: dts: amlogic: Add Tronsmart Vega S95 configsAndreas Färber5-0/+224
Add Device Trees for Tronsmart Vega S95 Pro, Meta and Telos TV boxes. Signed-off-by: Andreas Färber <[email protected]> Signed-off-by: Carlo Caione <[email protected]>
2016-03-07ARM64: dts: Prepare configs for Amlogic Meson GXBabyAndreas Färber3-0/+187
Signed-off-by: Andreas Färber <[email protected]> Signed-off-by: Carlo Caione <[email protected]>
2016-03-07ARM64: Enable Amlogic Meson GXBaby platformAndreas Färber1-0/+5
Provide the ARCH_MESON Kconfig symbol to allow enabling existing serial and i2c drivers. Signed-off-by: Andreas Färber <[email protected]> Signed-off-by: Carlo Caione <[email protected]>
2016-03-07ARM: dts: dra7: do not gate cpsw clock due to errata i877Mugunthan V N1-0/+10
Errata id: i877

Description:
------------
The RGMII 1000 Mbps Transmit timing is based on the output clock (rgmiin_txc) being driven relative to the rising edge of an internal clock and the output control/data (rgmiin_txctl/txd) being driven relative to the falling edge of an internal clock source. If the internal clock source is allowed to be static low (i.e., disabled) for an extended period of time, then when the clock is actually enabled the timing delta between the rising edge and falling edge can change over the lifetime of the device. This can result in the device switching characteristics degrading over time, and eventually failing to meet the Data Manual Delay Time/Skew specs.

To maintain RGMII 1000 Mbps IO Timings, SW should minimize the duration that the Ethernet internal clock source is disabled. Note that the device reset state for the Ethernet clock is "disabled". Other RGMII modes (10 Mbps, 100 Mbps) are not affected.

Workaround:
-----------
If the SoC Ethernet interface(s) are used in RGMII mode at 1000 Mbps, SW should minimize the time the Ethernet internal clock source is disabled to a maximum of 200 hours in a device life cycle. This is done by enabling the clock as early as possible in IPL (QNX) or SPL/u-boot (Linux/Android) by setting the register CM_GMAC_CLKSTCTRL[1:0]CLKTRCTRL = 0x2:SW_WKUP.

So, do not allow gating of the cpsw clocks, by using the ti,no-idle property in the cpsw node, on the assumption that 1000 Mbps is being used all the time. If someone does not need 1000 Mbps and wants to gate clocks to cpsw, this property needs to be deleted in their respective board files.

Signed-off-by: Mugunthan V N <[email protected]> Signed-off-by: Grygorii Strashko <[email protected]> Signed-off-by: Lokesh Vutla <[email protected]> Cc: <[email protected]> Signed-off-by: Paul Walmsley <[email protected]>
2016-03-07ARM: OMAP2+: hwmod: Introduce ti,no-idle dt propertyLokesh Vutla2-1/+11
Introduce a dt property, ti,no-idle, that prevents an IP from idling at any point. This is to handle Errata i877, which requires that the GMAC clocks never be disabled. Acked-by: Roger Quadros <[email protected]> Tested-by: Mugunthan V N <[email protected]> Signed-off-by: Lokesh Vutla <[email protected]> Signed-off-by: Sekhar Nori <[email protected]> Signed-off-by: Dave Gerlach <[email protected]> Acked-by: Rob Herring <[email protected]> Cc: <[email protected]> Signed-off-by: Paul Walmsley <[email protected]>
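In the hwmod DT parsing this likely reduces to checking for the property and setting a no-idle flag on the hwmod (a sketch; the flag name is an assumption):

          if (of_property_read_bool(np, "ti,no-idle"))
                  oh->flags |= HWMOD_NO_IDLE;     /* assumed flag: never program this IP to idle */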
2016-03-07Merge tag 'v4.5-rc7' into x86/asm, to pick up SMAP fixIngo Molnar157-467/+938
Signed-off-by: Ingo Molnar <[email protected]>
2016-03-07powerpc/ftrace: Add Kconfig & Make glue for mprofile-kernelTorsten Duwe3-0/+57
Firstly we add logic to Kconfig to allow a user to choose whether they want mprofile-kernel. This has to be user-selectable because only some current toolchains support it; if we enabled it unconditionally we would prevent some users from building the kernel entirely.

Arguably it would be nice if we could detect whether mprofile-kernel was available and use it then. However that would violate the principle of least surprise, because a user who had chosen options such as live patching would then see them quietly disabled at build time. We also make the user-selectable option negative, i.e. it disables when selected, so that allyesconfig continues to build with old toolchains.

Once we've decided we do want to use mprofile-kernel, we then add a script which checks that it actually works, because there are versions of gcc that accept the flag but don't generate correct code. Due to the way kconfig works, we can't error out when we detect a non-working toolchain: if we did, a user would never be able to modify their config and run oldconfig, because the check would block oldconfig from running. Instead we emit a warning and add a bogus flag to CFLAGS so that the build will fail.

Signed-off-by: Torsten Duwe <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2016-03-07powerpc/ftrace: Add support for -mprofile-kernel ftrace ABITorsten Duwe5-20/+324
The gcc switch -mprofile-kernel defines a new ABI for calling _mcount() very early in the function with minimal overhead. Although mprofile-kernel has been available since GCC 3.4, there were bugs which were only fixed recently. Currently it is known to work in GCC 4.9, 5 and 6.

Additionally there are two possible code sequences generated by the flag: the first uses mflr/std/bl and the second is optimised to omit the std. Currently only gcc 6 has the optimised sequence. This patch supports both sequences.

Initial work started by Vojtech Pavlik, used with permission.

Key changes:
 - rework _mcount() to work for both the old and new ABIs.
 - implement new versions of ftrace_caller() and ftrace_graph_caller() which deal with the new ABI.
 - updates to __ftrace_make_nop() to recognise the new mcount calling sequence.
 - updates to __ftrace_make_call() to recognise the nop'ed sequence.
 - implement ftrace_modify_call().
 - updates to the module loader to suppress the toc save in the module stub when calling mcount with the new ABI.

Reviewed-by: Balbir Singh <[email protected]> Signed-off-by: Torsten Duwe <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2016-03-07powerpc/ftrace: Use $(CC_FLAGS_FTRACE) when disabling ftraceTorsten Duwe3-9/+9
Rather than open-coding -pg wherever we want to disable ftrace, use the existing $(CC_FLAGS_FTRACE) variable. This has the advantage that it will work in future when we use a different set of flags to enable ftrace. Signed-off-by: Torsten Duwe <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2016-03-07powerpc/ftrace: Use generic ftrace_modify_all_code()Torsten Duwe1-12/+5
Convert powerpc's arch_ftrace_update_code() from its own version to use the generic default functionality (without stop_machine -- our instructions are properly aligned and the replacements atomic). With this we gain error checking and the much-needed function_trace_op handling. Reviewed-by: Balbir Singh <[email protected]> Reviewed-by: Kamalesh Babulal <[email protected]> Signed-off-by: Torsten Duwe <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2016-03-07powerpc/module: Create a special stub for ftrace_caller()Michael Ellerman1-1/+68
In order to support the new -mprofile-kernel ABI, we need to be able to call from the module back to ftrace_caller() (in the kernel) without using the module's r2. That is because the function in this module which is calling ftrace_caller() may not have setup r2, if it doesn't otherwise need it (ie. it accesses no globals). To make that work we add a new stub which is used for calling ftrace_caller(), which uses the kernel toc instead of the module toc. Reviewed-by: Balbir Singh <[email protected]> Reviewed-by: Torsten Duwe <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
2016-03-07powerpc/module: Mark module stubs with a magic valueMichael Ellerman3-64/+31
When a module is loaded, calls out to the kernel go via a stub which is generated at runtime. One of these stubs is used to call _mcount(), which is the default target of tracing calls generated by the compiler with -pg. If dynamic ftrace is enabled (which it typically is), another stub is used to call ftrace_caller(), which is the target of tracing calls when ftrace is actually active.

ftrace then wants to disable the calls to _mcount() at module startup, and enable/disable the calls to ftrace_caller() when enabling/disabling tracing - all of these it does by patching the code.

As part of that code patching, the ftrace code wants to confirm that the branch it is about to modify is in fact a call to a module stub which calls _mcount() or ftrace_caller(). Currently it does that by inspecting the instructions and confirming they are what it expects. Although that works, the code to do it is pretty intricate because it requires lots of knowledge about the exact format of the stub.

We can make that process easier by marking the generated stubs with a magic value, and then looking for that magic value. Although this is not as rigorous as the current method, I believe it is sufficient in practice.

Reviewed-by: Balbir Singh <[email protected]> Reviewed-by: Torsten Duwe <[email protected]> Signed-off-by: Michael Ellerman <[email protected]>
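The scheme can be sketched as tagging each generated stub with a magic word and checking only that word before patching (all names and the struct layout here are illustrative, not the actual module_64.c definitions):

  #define STUB_MAGIC      0x73747562      /* "stub" */

  struct module_stub {
          unsigned int    jump[6];        /* generated trampoline instructions */
          unsigned int    magic;          /* written as STUB_MAGIC at creation time */
          unsigned long   funcdata;
  };

  /* Much simpler than decoding the trampoline instructions themselves. */
  static bool stub_calls_mcount(const struct module_stub *stub)
  {
          return stub->magic == STUB_MAGIC;
  }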