Add a field to the current emulation context which contains the
instruction opcode length. This will streamline handling of opcodes of
different lengths.
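A minimal sketch of the idea (the struct and field names here are
illustrative, not necessarily the emulator's actual ones):

    struct emulate_ctxt {
            u8 opcode_len;  /* 1, 2 (0F xx) or 3 (0F 38/3A xx) */
            /* ... remaining decode state ... */
    };

The decode path can then branch on opcode_len instead of special-casing
each escape byte sequence.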
Signed-off-by: Borislav Petkov <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
|
|
Add a KVM ioctl which reports which system functionality KVM emulates.
The format used is that of CPUID, and we return the corresponding CPUID
bits set for the functionality we do emulate.
Make sure ->padding is passed in clean from userspace so that we
can use it for something in the future, once the ioctl is cast in
stone.
s/kvm_dev_ioctl_get_supported_cpuid/kvm_dev_ioctl_get_cpuid/ while at
it.
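A hedged userspace sketch, assuming the ioctl is exposed as
KVM_GET_EMULATED_CPUID on /dev/kvm with the usual kvm_cpuid2 layout:

    #include <fcntl.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    static int query_emulated_cpuid(void)
    {
            struct {
                    struct kvm_cpuid2 cpuid;
                    struct kvm_cpuid_entry2 entries[64];
            } buf;
            int kvm = open("/dev/kvm", O_RDWR);

            memset(&buf, 0, sizeof(buf));   /* ->padding must be clean */
            buf.cpuid.nent = 64;
            if (kvm < 0 || ioctl(kvm, KVM_GET_EMULATED_CPUID, &buf.cpuid) < 0)
                    return -1;
            return buf.cpuid.nent;  /* entries[] holds the emulated bits */
    }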
Signed-off-by: Borislav Petkov <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
|
|
It's incredibly difficult to diagnose early EFI boot issues without
special hardware because earlyprintk=vga doesn't work on EFI systems.
Add support for writing to the EFI framebuffer, via earlyprintk=efi,
which will actually give users a chance of providing debug output.
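Usage is a single kernel command line parameter, e.g. (a usage sketch;
combine with other earlyprintk options as needed):

    earlyprintk=efi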
Cc: H. Peter Anvin <[email protected]>
Acked-by: Ingo Molnar <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Peter Jones <[email protected]>
Signed-off-by: Matt Fleming <[email protected]>
|
|
* powercap:
PowerCap: Convert class code to use dev_groups
PowerCap: Introduce Intel RAPL power capping driver
bitops: Introduce BIT_ULL
x86 / msr: add 64bit _on_cpu access functions
PowerCap: Add to drivers Kconfig and Makefile
PowerCap: Add class driver
PowerCap: Documentation
|
|
* acpi-tables:
ACPI / x86: Increase override tables number limit
|
|
* acpi-processor:
ACPI / processor: fixed a brace coding style issue
ACPI / processor: Remove outdated comments
ACPI / processor: remove unnecessary if (!pr) check
ACPI / processor: remove some dead code in acpi_processor_get_info()
x86 / ACPI: simplify _acpi_map_lsapic()
ACPI / processor: use apic_id and remove duplicated _MAT evaluation
ACPI / processor: Introduce apic_id in struct processor to save parsed APIC id
|
|
Remove the unused x86 implementation of this_cpu_xor().
Signed-off-by: Heiko Carstens <[email protected]>
Signed-off-by: Tejun Heo <[email protected]>
|
|
As with copy_from_user(), where the range check is there to
protect against kernel memory corruption, copy_to_user() can
benefit from such checking too: here it protects against kernel
information leaks.
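A sketch of the pattern, using the compiler builtin directly (the
kernel wraps it in its own helpers):

    static inline unsigned long
    checked_copy_to_user(void __user *to, const void *from, unsigned long n)
    {
            size_t sz = __builtin_object_size(from, 0); /* (size_t)-1 if unknown */

            if (sz != (size_t)-1 && sz < n)
                    copy_to_user_overflow();        /* would leak past 'from' */
            else
                    n = _copy_to_user(to, from, n);
            return n;
    }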
Signed-off-by: Jan Beulich <[email protected]>
Cc: <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
Cc: Arjan van de Ven <[email protected]>
|
|
Commits 4a3127693001c61a21d1ce680db6340623f52e93 ("x86: Turn the
copy_from_user check into an (optional) compile time warning")
and 63312b6a6faae3f2e5577f2b001e3b504f10a2aa ("x86: Add a
Kconfig option to turn the copy_from_user warnings into errors")
touched only the 32-bit variant of copy_from_user(), whereas the
original commit 9f0cf4adb6aa0bfccf675c938124e68f7f06349d ("x86:
Use __builtin_object_size() to validate the buffer size for
copy_from_user()") also added the same code to the 64-bit one.
Further the earlier conversion from an inline WARN() to the call
to copy_from_user_overflow() went a little too far: When the
number of bytes to be copied is not a constant (e.g. [looking at
3.11] in drivers/net/tun.c:__tun_chr_ioctl() or
drivers/pci/pcie/aer/aer_inject.c:aer_inject_write()), the
compiler will always have to keep the function call, and hence
there will always be a warning. By using __builtin_constant_p()
we can avoid this.
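Roughly, as a sketch of the guard (not the exact kernel code):

    if (__builtin_constant_p(n) && sz != (size_t)-1 && sz < n)
            copy_from_user_overflow();      /* provably wrong: diagnose */
    else
            n = _copy_from_user(to, from, n);

With a non-constant n the first branch folds away at compile time, so
the warning-generating call no longer survives unconditionally.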
And then this slightly extends the effect of
CONFIG_DEBUG_STRICT_USER_COPY_CHECKS in that apart from
converting warnings to errors in the constant size case, it
retains the (possibly wrong) warnings in the non-constant size
case, such that if someone is prepared to get a few false
positives, (s)he'll be able to recover the current behavior
(except that these diagnostics now will never be converted to
errors).
Since the 32-bit variant (intentionally) didn't call
might_fault(), the unification results in this being called
twice now. Adding a suitable #ifdef would be the alternative if
that's a problem.
I'd like to point out though that with
__compiletime_object_size() being restricted to gcc before 4.6,
the whole construct is going to become more and more pointless
going forward. I would question however that commit
2fb0815c9ee6b9ac50e15dd8360ec76d9fa46a2 ("gcc4: disable
__compiletime_object_size for GCC 4.6+") was really necessary,
and instead this should have been dealt with as is done here
from the beginning.
Signed-off-by: Jan Beulich <[email protected]>
Cc: Arjan van de Ven <[email protected]>
Cc: Guenter Roeck <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
|
|
Introduce xen_dma_map_page, xen_dma_unmap_page,
xen_dma_sync_single_for_cpu and xen_dma_sync_single_for_device.
They have empty implementations on x86 and ia64 but they call the
corresponding platform dma_ops function on arm and arm64.
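On x86 and ia64 the new hooks are empty inlines, roughly (a sketch;
the exact parameter lists may differ):

    static inline void xen_dma_map_page(struct device *hwdev,
            struct page *page, unsigned long offset, size_t size,
            enum dma_data_direction dir, struct dma_attrs *attrs) { }

    static inline void xen_dma_unmap_page(struct device *hwdev,
            dma_addr_t handle, size_t size,
            enum dma_data_direction dir, struct dma_attrs *attrs) { }

    static inline void xen_dma_sync_single_for_cpu(struct device *hwdev,
            dma_addr_t handle, size_t size, enum dma_data_direction dir) { }

    static inline void xen_dma_sync_single_for_device(struct device *hwdev,
            dma_addr_t handle, size_t size, enum dma_data_direction dir) { }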
Signed-off-by: Stefano Stabellini <[email protected]>
Changes in v9:
- xen_dma_map_page return void, avoid page_to_phys.
|
|
This H/W error log driver (a.k.a eMCA driver) is implemented based on
http://www.intel.com/content/www/us/en/architecture-and-technology/enhanced-mca-logging-xeon-paper.html
After errors are captured, more detailed platform-specific information
can be obtained via this new enhanced H/W error log driver. Most notably we
can track memory errors back to the DIMM slot silk screen label.
Signed-off-by: Chen, Gong <[email protected]>
Signed-off-by: Tony Luck <[email protected]>
|
|
As Intel rolls out more SoCs after Moorestown, we need to
restructure the code in a way that is backward compatible and easy to
expand. This patch implements a flexible way to support multiple boards
and devices.
This patch does not add any new functional support. It just refactors
the existing code to increase the modularity and decrease the code
duplication for supporting multiple SoCs and boards.
Currently intel-mid.c has both board and SoC related code in one file.
This patch moves the board-related code to new files and lets the
linker script create the SFI device table, as follows:
1. Move the SFI device specific code to
arch/x86/platform/intel-mid/device-libs/platform_<device>.*
A new device file is added for every supported device. This code will
get conditionally compiled in by the corresponding device driver
CONFIG option.
2. Move the device_ids location to the .x86_intel_mid_dev.init section
by using the new sfi_device() macro (see the sketch after this list).
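A sketch of the mechanism (the macro body is illustrative): each
device file drops a pointer to its devs_id entry into the dedicated
section, and the linker script gathers the table:

    #define sfi_device(i)                                               \
            static const struct devs_id *const sfi_dev_ptr_##i __used   \
            __attribute__((__section__(".x86_intel_mid_dev.init"))) = &i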
This patch was based on previous code from Sathyanarayanan Kuppuswamy.
Signed-off-by: Kuppuswamy Sathyanarayanan <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: David Cohen <[email protected]>
Signed-off-by: H. Peter Anvin <[email protected]>
|
|
Moved SFI-specific parsing/handling code to sfi.c. This will enable us
to reuse our intel-mid code for platforms that support firmware
interfaces other than SFI (like ACPI).
Signed-off-by: Kuppuswamy Sathyanarayanan <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: David Cohen <[email protected]>
Signed-off-by: H. Peter Anvin <[email protected]>
|
|
Added a custom handler for Medfield-based IPC devices and
moved the devs_id structure definition to a header file.
Signed-off-by: Kuppuswamy Sathyanarayanan <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: David Cohen <[email protected]>
Signed-off-by: H. Peter Anvin <[email protected]>
|
|
mrst is used as a common name to represent all intel_mid type
SoCs, but Moorestown is just one of the intel_mid SoCs, so
rename them to use intel_mid.
This patch mainly renames the variables and related
functions that use the *mrst* prefix to *intel_mid*.
To ensure that there are no functional changes, I have compared
the objdump of related files before and after rename and found
the only difference is symbol and name changes.
Signed-off-by: Kuppuswamy Sathyanarayanan <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: David Cohen <[email protected]>
Signed-off-by: H. Peter Anvin <[email protected]>
|
|
The following files contain code that is common to all intel-mid
SoCs, so rename them as below:
mrst/mrst.c -> intel-mid/intel-mid.c
mrst/vrtc.c -> intel-mid/intel_mid_vrtc.c
mrst/early_printk_mrst.c -> intel-mid/early_printk_intel_mid.c
pci/mrst.c -> pci/intel_mid_pci.c
Also, renamed the corresponding header files and made changes
to the driver files that included these header files.
To ensure that there are no functional changes, I have compared
the objdump of renamed files before and after rename and found
that the only difference is file name change.
Signed-off-by: Kuppuswamy Sathyanarayanan <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: David Cohen <[email protected]>
Signed-off-by: H. Peter Anvin <[email protected]>
|
|
Merging master into next to satisfy the dependencies.
Conflicts:
arch/arm/kvm/reset.c
|
|
Having 64-bit MSR access methods for a given CPU avoids shifting and
simplifies MSR content manipulation. We already have other combinations
of rdmsrl_xxx and wrmsrl_xxx but are missing the _on_cpu versions.
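The added accessors, as a sketch of their signatures:

    int rdmsrl_on_cpu(unsigned int cpu, u32 msr_no, u64 *q);
    int wrmsrl_on_cpu(unsigned int cpu, u32 msr_no, u64 q);

so a caller like the RAPL driver can do, e.g.:

    u64 units;
    rdmsrl_on_cpu(cpu, MSR_RAPL_POWER_UNIT, &units); /* no lo/hi stitching */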
Signed-off-by: Srinivas Pandruvada <[email protected]>
Signed-off-by: Jacob Pan <[email protected]>
Reviewed-by: H. Peter Anvin <[email protected]>
Signed-off-by: Rafael J. Wysocki <[email protected]>
|
|
The gfn_to_index function relies on huge page defines which either may
not make sense on systems that don't support huge pages or are defined
in an inconvenient way for other architectures. Since this is
x86-specific, move the function to arch/x86/include/asm/kvm_host.h.
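For reference, the function being moved is roughly:

    static inline gfn_t gfn_to_index(gfn_t gfn, gfn_t base_gfn, int level)
    {
            /* KVM_HPAGE_GFN_SHIFT(PT_PAGE_TABLE_LEVEL) must be 0. */
            return (gfn >> KVM_HPAGE_GFN_SHIFT(level)) -
                    (base_gfn >> KVM_HPAGE_GFN_SHIFT(level));
    }

The KVM_HPAGE_* defines it relies on are exactly the x86-specific huge
page constants mentioned above.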
Signed-off-by: Christoffer Dall <[email protected]>
Signed-off-by: Gleb Natapov <[email protected]>
|
|
Apply the asm_volatile_goto() compiler quirk to the new rmwcc.h
file as well, introduced in:
c2daa3bed53a sched, x86: Provide a per-cpu preempt_count implementation
Reported-and-tested-by: Fengguang Wu <[email protected]>
Reported-by: Oleg Nesterov <[email protected]>
Reported-by: Peter Zijlstra <[email protected]>
Suggested-by: Jakub Jelinek <[email protected]>
Reviewed-by: Richard Henderson <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Andrew Morton <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
|
|
Merge in asm goto fix, to be able to apply the asm/rmwcc.h fix.
Signed-off-by: Ingo Molnar <[email protected]>
|
|
Fengguang Wu, Oleg Nesterov and Peter Zijlstra tracked down
a kernel crash to a GCC bug: GCC miscompiles certain 'asm goto'
constructs, as outlined here:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=58670
Implement a workaround suggested by Jakub Jelinek.
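The workaround, roughly: an empty asm statement after every asm goto
defeats the miscompilation, wrapped as

    #define asm_volatile_goto(x...) do { asm goto(x); asm (""); } while (0)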
Reported-and-tested-by: Fengguang Wu <[email protected]>
Reported-by: Oleg Nesterov <[email protected]>
Reported-by: Peter Zijlstra <[email protected]>
Suggested-by: Jakub Jelinek <[email protected]>
Reviewed-by: Richard Henderson <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
|
|
Hyper-V supports a mechanism for retrieving the local APIC frequency.
Use this and bypass the calibration code in the kernel. This would
allow us to boot the Linux kernel as a "modern VM" on Hyper-V where
many of the legacy devices (such as PIT) are not emulated.
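The gist, as a sketch (assuming the Hyper-V feature bit and synthetic
MSR names):

    if (ms_hyperv.features & HV_X64_MSR_APIC_FREQUENCY_AVAILABLE) {
            u64 hv_lapic_frequency;

            rdmsrl(HV_X64_MSR_APIC_FREQUENCY, hv_lapic_frequency);
            lapic_timer_frequency = div_u64(hv_lapic_frequency, HZ);
    }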
I would like to thank Olaf Hering <[email protected]>, Jan Beulich <[email protected]> and
H. Peter Anvin <[email protected]> for their help in this effort.
In this version of the patch, I have addressed Jan's comments.
Signed-off-by: K. Y. Srinivasan <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Tested-by: Olaf Hering <[email protected]>
Signed-off-by: H. Peter Anvin <[email protected]>
|
|
This patch contains the following two changes:
1. Fix the bug in nested preemption timer support. If we vmexit from
L2 to L0 for a reason not emulated by L1, the preemption timer value
should be saved on such exits.
2. Add support for the "Save VMX-preemption timer value" VM-Exit
control to nVMX.
With this patch, nested VMX preemption timer features are fully
supported.
Signed-off-by: Arthur Chunqi Li <[email protected]>
Reviewed-by: Gleb Natapov <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
|
|
HAVE_ARCH_DEVTREE_FIXUPS appears to always be needed except for sparc,
but it is only used for /proc/device-tree and sparc does not enable
/proc/device-tree. So this option is redundant. Remove the option and
always enable it. This has the side effect of fixing /proc/device-tree
on arches such as arm64 which failed to define this option.
Signed-off-by: Rob Herring <[email protected]>
Acked-by: Vineet Gupta <[email protected]>
Acked-by: Grant Likely <[email protected]>
Cc: Russell King <[email protected]>
Cc: James Hogan <[email protected]>
Cc: Michal Simek <[email protected]>
Cc: Jonas Bonn <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: [email protected]
Cc: Chris Zankel <[email protected]>
Cc: Max Filippov <[email protected]>
|
|
Implement pci_address_to_pio as a weak function to remove the dependency
on asm/prom.h. This is in preparation for making prom.h optional.
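A sketch of the weak default (architectures with special PIO handling
keep overriding it):

    unsigned long __weak pci_address_to_pio(phys_addr_t address)
    {
            if (address > IO_SPACE_LIMIT)
                    return (unsigned long)-1;
            return (unsigned long)address;
    }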
Signed-off-by: Rob Herring <[email protected]>
Acked-by: Grant Likely <[email protected]>
Cc: Michal Simek <[email protected]>
Cc: Ralf Baechle <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: [email protected]
Cc: Grant Likely <[email protected]>
|
|
xen_swiotlb_alloc_coherent needs to allocate a coherent buffer for the
cpu and devices. On native x86 it is sufficient to call __get_free_pages
in order to get a coherent buffer, while on ARM (and potentially ARM64)
we need to call the native dma_ops->alloc implementation.
Introduce xen_alloc_coherent_pages to abstract the arch specific buffer
allocation.
Similarly, introduce xen_free_coherent_pages to free a coherent buffer:
on x86 it is simply a call to free_pages, while on ARM and ARM64 it is
arm_dma_ops.free.
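On x86 the two helpers reduce to plain page allocations, roughly:

    static inline void *
    xen_alloc_coherent_pages(struct device *hwdev, size_t size,
            dma_addr_t *dma_handle, gfp_t flags, struct dma_attrs *attrs)
    {
            return (void *)__get_free_pages(flags, get_order(size));
    }

    static inline void
    xen_free_coherent_pages(struct device *hwdev, size_t size,
            void *cpu_addr, dma_addr_t dma_handle, struct dma_attrs *attrs)
    {
            free_pages((unsigned long)cpu_addr, get_order(size));
    }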
Signed-off-by: Stefano Stabellini <[email protected]>
Changes in v7:
- rename __get_dma_ops to __generic_dma_ops;
- call __generic_dma_ops(hwdev)->alloc/free on arm64 too.
Changes in v6:
- call __get_dma_ops to get the native dma_ops pointer on arm.
|
|
Merge Linux v3.12-rc4 to fix a conflict and also to refresh the tree
before applying more scheduler patches.
Conflicts:
arch/avr32/include/asm/Kbuild
Signed-off-by: Ingo Molnar <[email protected]>
|
|
kvm_mmu initialization is mostly filling in function pointers, there is
no way for it to fail. Clean up unused return values.
Signed-off-by: Paolo Bonzini <[email protected]>
Signed-off-by: Gleb Natapov <[email protected]>
|
|
The new_cr3 MMU callback has been a wrapper for mmu_free_roots since commit
e676505 (KVM: MMU: Force cr3 reload with two dimensional paging on mov
cr3 emulation, 2012-07-08).
The commit message mentioned that "mmu_free_roots() is somewhat of an overkill,
but fixing that is more complicated and will be done after this minimal fix".
One year has passed, and no one really felt the need to do a different fix.
Wrap the call with a kvm_mmu_new_cr3 function for clarity, but remove the
callback.
Signed-off-by: Paolo Bonzini <[email protected]>
Signed-off-by: Gleb Natapov <[email protected]>
|
|
The free MMU callback has been a wrapper for mmu_free_roots since mmu_free_roots
itself was introduced (commit 17ac10a, [PATCH] KVM: MU: Special treatment
for shadow pae root pages, 2007-01-05), and has always been the same for all
MMU cases. Remove the indirection as it is useless.
Signed-off-by: Paolo Bonzini <[email protected]>
Signed-off-by: Gleb Natapov <[email protected]>
|
|
This makes the interface more deterministic for userspace, which can expect
(after configuring only the features it supports) to get exactly the same
state from the kernel, independent of the host CPU and kernel version.
Suggested-by: Gleb Natapov <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
Signed-off-by: Gleb Natapov <[email protected]>
|
|
A guest can still attempt to save and restore XSAVE states even if they
have been masked in CPUID leaf 0Dh. This usually is not visible to
the guest, but is still wrong: "Any attempt to set a reserved bit (as
determined by the contents of EAX and EDX after executing CPUID with
EAX=0DH, ECX= 0H) in XCR0 for a given processor will result in a #GP
exception".
The patch also performs the same checks as __kvm_set_xcr in KVM_SET_XSAVE.
This catches migration from newer to older kernel/processor before the
guest starts running.
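A sketch of the added validation (field name assumed): both the
XSETBV emulation and KVM_SET_XSAVE reject bits that CPUID leaf 0Dh
does not advertise to the guest:

    if (xstate_bv & ~vcpu->arch.guest_supported_xcr0)
            return -EINVAL; /* reserved per the guest's CPUID.0DH view */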
Signed-off-by: Paolo Bonzini <[email protected]>
Signed-off-by: Gleb Natapov <[email protected]>
|
|
As the new x86 CPU bootup printout format code maintainer, I am
taking immediate action to improve and clean (and thus indulge
my OCD) the reporting of the cores when coming up online.
Fix padding to a right-hand alignment, cleanup code and bind
reporting width to the max number of supported CPUs on the
system, like this:
[ 0.074509] smpboot: Booting Node 0, Processors: #1 #2 #3 #4 #5 #6 #7 OK
[ 0.644008] smpboot: Booting Node 1, Processors: #8 #9 #10 #11 #12 #13 #14 #15 OK
[ 1.245006] smpboot: Booting Node 2, Processors: #16 #17 #18 #19 #20 #21 #22 #23 OK
[ 1.864005] smpboot: Booting Node 3, Processors: #24 #25 #26 #27 #28 #29 #30 #31 OK
[ 2.489005] smpboot: Booting Node 4, Processors: #32 #33 #34 #35 #36 #37 #38 #39 OK
[ 3.093005] smpboot: Booting Node 5, Processors: #40 #41 #42 #43 #44 #45 #46 #47 OK
[ 3.698005] smpboot: Booting Node 6, Processors: #48 #49 #50 #51 #52 #53 #54 #55 OK
[ 4.304005] smpboot: Booting Node 7, Processors: #56 #57 #58 #59 #60 #61 #62 #63 OK
[ 4.961413] Brought up 64 CPUs
and this:
[ 0.072367] smpboot: Booting Node 0, Processors: #1 #2 #3 #4 #5 #6 #7 OK
[ 0.686329] Brought up 8 CPUs
Signed-off-by: Borislav Petkov <[email protected]>
Cc: Libin <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
|
|
Yuanhan reported a serious throughput regression in his pigz
benchmark. Using the ftrace patch I found that several idle
paths need more TLC before we can switch the generic
need_resched() over to preempt_need_resched.
The preemption paths benefit most from preempt_need_resched and
do indeed use it; all other need_resched() users don't really
care that much so reverting need_resched() back to
tif_need_resched() is the simple and safe solution.
Reported-by: Yuanhan Liu <[email protected]>
Signed-off-by: Peter Zijlstra <[email protected]>
Cc: Fengguang Wu <[email protected]>
Cc: Huang Ying <[email protected]>
Cc: [email protected]
Cc: Linus Torvalds <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip
Pull Xen fixes from Konrad Rzeszutek Wilk:
"Bug-fixes and one update to the kernel-paramters.txt documentation.
- Fix PV spinlocks triggering jump_label code bug
- Remove extraneous code in the tpm front driver
- Fix ballooning out of pages when non-preemptible
- Fix deadlock when using a 32-bit initial domain with large amount
of memory
- Add xen_nopvpsin parameter to the documentation"
* tag 'stable/for-linus-3.12-rc2-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip:
xen/spinlock: Document the xen_nopvspin parameter.
xen/p2m: check MFN is in range before using the m2p table
xen/balloon: don't alloc page while non-preemptible
xen: Do not enable spinlocks before jump_label_init() has executed
tpm: xen-tpmfront: Remove the locality sysfs attribute
tpm: xen-tpmfront: Fix default durations
|
|
The number of ACPI tables that can be overridden from the initrd is
currently limited to 10, which is too small. 64 should be good enough,
as we have 35 signatures and could have several SSDTs.
Two problems in the current code prevent us from increasing the limit:
1. The cpio file info array is put on the stack; as every element is 32
bytes, we could run out of stack space if we grew the array to 64
entries. We can move it off the stack, make it global and put it into
the __initdata section (see the sketch after this list).
2. early_ioremap() can only remap 256k at a time, and the current code
maps all 10 tables at once. If we increased the limit, the total size
could exceed 256k and early_ioremap() would fail. Instead, we can map
chunks one by one during copying.
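For problem 1, the fix is essentially (a sketch; the array name and
bound are illustrative):

    static struct cpio_data __initdata acpi_initrd_files[64];

i.e. the 64 * 32 bytes live in init memory instead of on the early
boot stack, and are reclaimed after init.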
Signed-off-by: Yinghai <[email protected]>
Acked-by: Tejun Heo <[email protected]>
Tested-by: Thomas Renninger <[email protected]>
Reviewed-by: Tang Chen <[email protected]>
Tested-by: Tang Chen <[email protected]>
Acked-by: Toshi Kani <[email protected]>
Signed-off-by: Rafael J. Wysocki <[email protected]>
|
|
On hosts with more than 168 GB of memory, a 32-bit guest may attempt
to grant map an MFN that it cannot look up in its mapping of the
m2p table. There is an m2p lookup as part of m2p_add_override() and
m2p_remove_override(). The lookup falls off the end of the mapped
portion of the m2p and (because the mapping is at the highest virtual
address) wraps around, so the lookup causes a fault on what appears to
be a user space address.
do_page_fault() (thinking it's a fault to a userspace address) tries
to lock mm->mmap_sem. If the gntdev device is used for the grant map,
m2p_add_override() is called from gnttab_mmap() with mm->mmap_sem
already locked. do_page_fault() then deadlocks.
The deadlock would most commonly occur when a 64-bit guest is started
and xenconsoled attempts to grant map its console ring.
Introduce mfn_to_pfn_no_overrides() which checks the MFN is within the
mapped portion of the m2p table before accessing the table and use
this in m2p_add_override(), m2p_remove_override(), and mfn_to_pfn()
(which already had the correct range check).
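Roughly, as a sketch of the range check (the real helper also handles
auto-translated guests, see the v3 note below):

    static inline unsigned long mfn_to_pfn_no_overrides(unsigned long mfn)
    {
            unsigned long pfn = ~0UL;

            if (unlikely(mfn >= machine_to_phys_nr))
                    return ~0UL;    /* off the end of the mapped m2p */

            /* a fault here is fixed up in-kernel; garbage pfn is fine */
            __get_user(pfn, &machine_to_phys_mapping[mfn]);
            return pfn;
    }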
All faults caused by accessing the non-existent parts of the m2p are
thus within the kernel address space and exception_fixup() is called
without trying to lock mm->mmap_sem.
This means that for MFNs that are outside the mapped range of the m2p
then mfn_to_pfn() will always look in the m2p overrides. This is
correct because it must be a foreign MFN (and the PFN in the m2p in
this case is only relevant for the other domain).
Signed-off-by: David Vrabel <[email protected]>
Cc: Stefano Stabellini <[email protected]>
Cc: Jan Beulich <[email protected]>
--
v3: check for auto_translated_physmap in mfn_to_pfn_no_overrides()
v2: in mfn_to_pfn() look in m2p_overrides if the MFN is out of
range as it's probably foreign.
Signed-off-by: Konrad Rzeszutek Wilk <[email protected]>
Acked-by: Stefano Stabellini <[email protected]>
|
|
Remove the bloat of the C calling convention from the
preempt_enable() sites by creating an ASM wrapper which allows us to
do an asm("call ___preempt_schedule") instead.
calling.h bits by Andi Kleen
Suggested-by: Linus Torvalds <[email protected]>
Signed-off-by: Peter Zijlstra <[email protected]>
Link: http://lkml.kernel.org/n/[email protected]
[ Fixed build error. ]
Signed-off-by: Ingo Molnar <[email protected]>
|
|
Convert x86 to use a per-cpu preemption count. The reason for doing so
is that accessing per-cpu variables is a lot cheaper than accessing
thread_info variables.
We still need to save/restore the actual preemption count due to
PREEMPT_ACTIVE so we place the per-cpu __preempt_count variable in the
same cache-line as the other hot __switch_to() variables such as
current_task.
NOTE: this save/restore is required even for !PREEMPT kernels as
cond_resched() also relies on preempt_count's PREEMPT_ACTIVE to ignore
task_struct::state.
Also rename thread_info::preempt_count to ensure nobody is
'accidentally' still poking at it.
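The accessor side, as a sketch:

    DECLARE_PER_CPU(int, __preempt_count);

    static __always_inline int preempt_count(void)
    {
            return this_cpu_read(__preempt_count); /* one gs-relative load */
    }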
Suggested-by: Linus Torvalds <[email protected]>
Signed-off-by: Peter Zijlstra <[email protected]>
Link: http://lkml.kernel.org/n/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
|
|
In order to prepare for per-arch implementations of preempt_count,
move the required bits into an asm-generic header and use this for all
archs.
Signed-off-by: Peter Zijlstra <[email protected]>
Link: http://lkml.kernel.org/n/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
|
|
Linus suggested using asm goto to get rid of the typical SETcc + TEST
instruction pair -- which also clobbers an extra register -- for our
typical modify_and_test() functions.
Because asm goto doesn't allow output fields, it has to include an
unconditional memory clobber when it changes a memory variable, to
force a reload.
Luckily all atomic ops already imply a compiler barrier to go along
with their memory barrier semantics.
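A minimal sketch of the pattern (illustrative, not the kernel's
actual GEN_*_RMWcc macros):

    static inline _Bool dec_and_test(int *v)
    {
            asm goto("decl %0; je %l[became_zero]"
                     : /* asm goto permits no outputs */
                     : "m" (*v)
                     : "memory", "cc"   /* clobber forces the reload */
                     : became_zero);
            return 0;
    became_zero:
            return 1;
    }

The conditional jump consumes the flags from decl directly: no SETcc,
no TEST, no scratch register.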
Suggested-by: Linus Torvalds <[email protected]>
Signed-off-by: Peter Zijlstra <[email protected]>
Link: http://lkml.kernel.org/n/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
|
|
This patch adds support for the uvtrace module by providing a
skeleton call to the registered trace function. It also
provides another separate 'NMI' tracer that is triggered by the
system wide 'power nmi' command.
Signed-off-by: Mike Travis <[email protected]>
Reviewed-by: Dimitri Sivanich <[email protected]>
Reviewed-by: Hedi Berriche <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Arnaldo Carvalho de Melo <[email protected]>
Cc: Jason Wessel <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
|
|
The current UV NMI handler has not been updated for the changes
in the system NMI handler and the perf operations. The UV NMI
handler reads an MMR in the UV Hub to check to see if the NMI
event was caused by the external 'system NMI' that the operator
can initiate on the System Mgmt Controller.
The problem arises when the perf tools are running, causing
millions of perf events per second on very large CPU count
systems. Previously this was okay because the perf NMI handler
ran at a higher priority on the NMI call chain and if the NMI
was a perf event, it would stop calling other NMI handlers
remaining on the NMI call chain.
Now the system NMI handler calls all the handlers on the NMI
call chain including the UV NMI handler. This causes the UV NMI
handler to read the MMRs at the same millions per second rate.
This can lead to significant performance loss and possible
system failures. It also can cause thousands of 'Dazed and
Confused' messages being sent to the system console. This
effectively makes perf tools unusable on UV systems.
To avoid this excessive overhead when perf tools are running,
this code has been optimized to minimize reading of the MMRs as
much as possible, by moving to the NMI_UNKNOWN notifier chain.
This chain is called only when all the users on the standard
NMI_LOCAL call chain have been called and none of them have
claimed this NMI.
There is an exception where the NMI_LOCAL notifier chain is
used. When the perf tools are in use, it's possible that the UV
NMI was captured by some other NMI handler and then either
ignored or mistakenly processed as a perf event. We set a
per_cpu ('ping') flag for those CPUs that ignored the initial
NMI, and then send them an IPI NMI signal. The NMI_LOCAL
handler on each cpu does not need to read the MMR, but instead
checks the in memory flag indicating it was pinged. There are
two module variables, 'ping_count' indicating how many requested
NMI events occurred, and 'ping_misses' indicating how many stray
NMI events. These most likely are perf events so it shows the
overhead of the perf NMI interrupts and how many MMR reads were avoided.
This patch also minimizes the reads of the MMRs by having the
first cpu entering the NMI handler on each node set a per HUB
in-memory atomic value. (Having a per HUB value avoids sending
lock traffic over NumaLink.) Both types of UV NMIs from the SMI
layer are supported.
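The registration then looks roughly like this (assuming the standard
register_nmi_handler() API; handler names are the module's):

    /* runs only after every NMI_LOCAL handler has declined the NMI */
    register_nmi_handler(NMI_UNKNOWN, uv_handle_nmi, 0, "uv");

    /* cheap ping-flag check, no MMR read, for the IPI'd CPUs */
    register_nmi_handler(NMI_LOCAL, uv_handle_nmi_ping, 0, "uvping");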
Signed-off-by: Mike Travis <[email protected]>
Reviewed-by: Dimitri Sivanich <[email protected]>
Reviewed-by: Hedi Berriche <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Arnaldo Carvalho de Melo <[email protected]>
Cc: Jason Wessel <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
|
|
This patch moves the UV NMI support from the x2apic file to a
new separate uv_nmi.c file in preparation for the next sequence
of patches. It prevents upcoming bloat of the x2apic file, and
has the added benefit of putting the upcoming /sys/module
parameters under the name 'uv_nmi' instead of 'x2apic_uv_x',
which was obscure.
Signed-off-by: Mike Travis <[email protected]>
Reviewed-by: Dimitri Sivanich <[email protected]>
Reviewed-by: Hedi Berriche <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Arnaldo Carvalho de Melo <[email protected]>
Cc: Jason Wessel <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
|
|
acpi_register_lapic() generates a new logical CPU number and maps it
to the local APIC id; this logical CPU number can be returned to
simplify the _acpi_map_lsapic() implementation.
Signed-off-by: Jiang Liu <[email protected]>
Signed-off-by: Hanjun Guo <[email protected]>
Signed-off-by: Rafael J. Wysocki <[email protected]>
|
|
Move all users of ablk_helper under x86/ to the generic version
and delete the x86 specific version.
Acked-by: Jussi Kivilinna <[email protected]>
Signed-off-by: Ard Biesheuvel <[email protected]>
Signed-off-by: Herbert Xu <[email protected]>
|
|
The _PAGE_SWP_SOFT_DIRTY bit should never be set on a present pte, so
add a VM_BUG_ON to catch any potential future abuse.
Also add a comment on _PAGE_SWP_SOFT_DIRTY definition explaining scope of
its usage.
Signed-off-by: Cyrill Gorcunov <[email protected]>
Acked-by: Pavel Emelyanov <[email protected]>
Acked-by: Jan Beulich <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
The previous patch doing vmstats for TLB flushes ("mm: vmstats: tlb flush
counters") effectively missed UP since arch/x86/mm/tlb.c is only compiled
for SMP.
UP systems do not do remote TLB flushes, so compile those counters out on
UP.
arch/x86/kernel/cpu/mtrr/generic.c calls __flush_tlb() directly. This is
probably an optimization since both the mtrr code and __flush_tlb() write
cr4. It would probably be safe to make that a flush_tlb_all() (and then
get these statistics), but the mtrr code is ancient and I'm hesitant to
touch it other than to just stick in the counters.
[[email protected]: tweak comments]
Signed-off-by: Dave Hansen <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull x86 jumplabel changes from Peter Anvin:
"One more x86 tree for this merge window. This tree improves the
handling of jump labels, so that most of the time we don't have to do
a massive initial patching run.
Furthermore, we will error out if the jump label is not what is
expected, e.g. if it has been corrupted or tampered with"
* 'x86/jumplabel' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
x86/jump-label: Show where and what was wrong on errors
x86/jump-label: Add safety checks to jump label conversions
x86/jump-label: Do not bother updating nops if they are correct
x86/jump-label: Use best default nops for inital jump label calls
|