path: root/arch/x86/include/asm/elf.h
Age | Commit message | Author | Files | Lines

2012-02-20 | x32: Handle process creation | H. Peter Anvin | 1 file | -4/+21

Allow an x32 process to be started.

Originally-by: H. J. Lu <[email protected]>
Signed-off-by: H. Peter Anvin <[email protected]>
Cc: Peter Zijlstra <[email protected]>

2012-02-20 | x86: Factor out TIF_IA32 from 32-bit address space | H. Peter Anvin | 1 file | -2/+2

Factor out IA32 (compatibility instruction set) from 32-bit address space in the thread_info flags; this is a precondition patch for x32 support.

Originally-by: H. J. Lu <[email protected]>
Signed-off-by: H. Peter Anvin <[email protected]>
Link: http://lkml.kernel.org/n/[email protected]

2011-08-05 | x86, amd: Avoid cache aliasing penalties on AMD family 15h | Borislav Petkov | 1 file | -0/+31

This patch provides performance tuning for the "Bulldozer" CPU. With its shared instruction cache there is a chance of generating an excessive number of cache cross-invalidates when running specific workloads on the cores of a compute module.

This excessive number of cross-invalidations can be observed if cache lines backed by shared physical memory alias in bits [14:12] of their virtual addresses, as those bits are used for index generation.

This patch addresses the issue by clearing all the bits in the [14:12] slice of the file mapping's virtual address at generation time, thus forcing those bits to be the same for all mappings of a single shared library across processes and, in doing so, avoiding instruction cache aliasing.

It also adds the command line option "align_va_addr=(32|64|on|off)", with which virtual address alignment can be enabled for 32-bit or 64-bit x86 individually, for both, or be disabled completely. This change leaves virtual region address allocation on other families and/or vendors unaffected.

Signed-off-by: Borislav Petkov <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: H. Peter Anvin <[email protected]>

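A minimal sketch of the alignment rule described above, using hypothetical names (align_file_mapping, VA_ALIGN) rather than the kernel's actual helpers; the real adjustment happens in the x86 get_unmapped_area paths:

    #define VA_ALIGN  0x8000UL  /* 32 KiB */

    /*
     * Round a page-aligned candidate address up to a 32 KiB boundary so
     * that bits [14:12] end up zero for every mapping of the same file,
     * keeping the instruction-cache index identical across processes.
     */
    static unsigned long align_file_mapping(unsigned long addr)
    {
            return (addr + VA_ALIGN - 1) & ~(VA_ALIGN - 1);
    }
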
2010-02-16 | x86: ELF_PLAT_INIT() shouldn't worry about TIF_IA32 | Oleg Nesterov | 1 file | -4/+1

The 64-bit version of ELF_PLAT_INIT() clears TIF_IA32, but at this point it has already been cleared by SET_PERSONALITY == set_personality_64bit.

Signed-off-by: Oleg Nesterov <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>

2010-01-29 | x86: get rid of the insane TIF_ABI_PENDING bit | H. Peter Anvin | 1 file | -8/+2

Now that the previous commit made it possible to do the personality setting at the point of no return, we do just that for ELF binaries. And suddenly all the reasons for that insane TIF_ABI_PENDING bit go away, and we can just make SET_PERSONALITY() do the obvious thing for a 32-bit compat process. Everything becomes much more straightforward this way.

Signed-off-by: H. Peter Anvin <[email protected]>
Cc: [email protected]
Signed-off-by: Linus Torvalds <[email protected]>

2009-12-16 | elf: kill USE_ELF_CORE_DUMP | Christoph Hellwig | 1 file | -1/+0

Currently all architectures but microblaze unconditionally define USE_ELF_CORE_DUMP. The microblaze omission seems like an error to me, so let's kill this ifdef and make sure we are the same everywhere.

Signed-off-by: Christoph Hellwig <[email protected]>
Acked-by: Hugh Dickins <[email protected]>
Cc: <[email protected]>
Cc: Michal Simek <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>

2009-10-09 | x86-64: make compat_start_thread() match start_thread() | H. Peter Anvin | 1 file | -18/+2

For no real good reason, compat_start_thread() was embedded inline in <asm/elf.h> whereas the native start_thread() lives in process_*.c. Move compat_start_thread() to process_64.c, remove gratuitous differences, and fix a few items which mostly look like bit rot.

In particular, compat_start_thread() didn't do free_thread_xstate(), which means it was hanging on to the xstate store area even when it was not needed. It was also not setting old_rsp, but it looks like that generally shouldn't matter for a 32-bit process.

Note: compat_start_thread *has* to be a macro, since it is tested with start_thread_ia32() as the out-of-line function name.

Signed-off-by: H. Peter Anvin <[email protected]>
Acked-by: Suresh Siddha <[email protected]>

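Based on the description above, the alias left behind in <asm/elf.h> would look roughly like the sketch below (the exact declaration in the header may differ):

    /* The out-of-line implementation lives in process_64.c as
     * start_thread_ia32(); compat_start_thread is only a macro alias. */
    void start_thread_ia32(struct pt_regs *regs, u32 new_ip, u32 new_sp);
    #define compat_start_thread start_thread_ia32
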
2009-09-10 | x86: Increase MIN_GAP to include randomized stack | Michal Hocko | 1 file | -0/+2

Currently we are not including the randomized stack size when calculating the mmap_base address in arch_pick_mmap_layout for the topdown case. This might cause mmap_base to start in the stack's reserved area, because the stack is randomized by 1GB for 64-bit (8MB for 32-bit) and the minimum gap is 128MB. If the stack really grows down to mmap_base then we can get a silent mmap region overwrite by the stack values.

Let's include the maximum stack randomization size in MIN_GAP, which is used as the lower bound for the gap in mmap.

Signed-off-by: Michal Hocko <[email protected]>
LKML-Reference: <[email protected]>
Acked-by: Jiri Kosina <[email protected]>
Signed-off-by: H. Peter Anvin <[email protected]>
Cc: Stable Team <[email protected]>

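A sketch of the topdown calculation the commit describes, assuming the names MIN_GAP, MAX_GAP, stack_maxrandom_size() and mmap_rnd() correspond to what the x86 mmap layout code uses (the exact expressions may differ):

    #define MIN_GAP (128UL * 1024 * 1024 + stack_maxrandom_size())
    #define MAX_GAP (TASK_SIZE / 6 * 5)

    /* Keep at least 128 MB plus the maximum stack randomization between
     * the top of the mmap area and the bottom of the stack. */
    static unsigned long mmap_base(void)
    {
            unsigned long gap = rlimit(RLIMIT_STACK);

            if (gap < MIN_GAP)
                    gap = MIN_GAP;
            else if (gap > MAX_GAP)
                    gap = MAX_GAP;

            return PAGE_ALIGN(TASK_SIZE - gap - mmap_rnd());
    }
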
2009-02-10 | x86: make lazy %gs optional on x86_32 | Tejun Heo | 1 file | -2/+13

Impact: pt_regs changed, lazy gs handling made optional, slight overhead added to SAVE_ALL, error_code path simplified a bit

On x86_32, %gs hasn't been used by the kernel and is handled lazily. pt_regs doesn't have a place for it and gs is saved/loaded only when necessary. In preparation for stack protector support, this patch makes lazy %gs handling optional by doing the following:

* Add CONFIG_X86_32_LAZY_GS and a place for gs in pt_regs.
* Save and restore %gs along with the other registers in entry_32.S unless LAZY_GS. Note that this unfortunately adds "pushl $0" to SAVE_ALL even when LAZY_GS. However, it adds no overhead to the common exit path and simplifies the entry path with error code.
* Define different user_gs accessors depending on LAZY_GS and add lazy_save_gs() and lazy_load_gs(), which are noops if !LAZY_GS. The lazy_*_gs() ops are used to save, load and clear %gs lazily.
* Define ELF_CORE_COPY_KERNEL_REGS(), which always reads %gs directly.

xen and lguest changes need to be verified.

Signed-off-by: Tejun Heo <[email protected]>
Cc: Jeremy Fitzhardinge <[email protected]>
Cc: Rusty Russell <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>

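A sketch of the lazy_*_gs() split described above, assuming the CONFIG_X86_32_LAZY_GS layout the commit introduces (the actual header may differ in detail):

    #ifdef CONFIG_X86_32_LAZY_GS
    /* %gs is not in pt_regs; save/restore it explicitly when needed. */
    #define lazy_save_gs(v)   savesegment(gs, (v))
    #define lazy_load_gs(v)   loadsegment(gs, (v))
    #else
    /* %gs is saved/restored by the entry code, so these are no-ops. */
    #define lazy_save_gs(v)   do { } while (0)
    #define lazy_load_gs(v)   do { } while (0)
    #endif
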
2009-02-10 | x86: add %gs accessors for x86_32 | Tejun Heo | 1 file | -1/+1

Impact: cleanup

On x86_32, %gs is handled lazily. It's not saved and restored on kernel entry/exit but only when necessary, which is usually during task switch, though there are a few other places. Currently, it's done by calling savesegment() and loadsegment() explicitly. Define get_user_gs(), set_user_gs() and task_user_gs() and use them instead.

While at it, clean up the register access macros in signal.c. This cleans up the code a bit and will help future changes.

Signed-off-by: Tejun Heo <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>

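A sketch of what the new accessors wrap, assuming they simply encapsulate the savesegment()/loadsegment() calls mentioned above (illustrative only; the real macros may differ):

    /* Read/write the user %gs value for an x86_32 task with lazy handling. */
    #define get_user_gs(regs)     (u16)({ unsigned long v; savesegment(gs, v); v; })
    #define set_user_gs(regs, v)  loadsegment(gs, (unsigned long)(v))
    #define task_user_gs(tsk)     ((tsk)->thread.gs)
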
2008-12-25 | [S390] arch_setup_additional_pages arguments | Martin Schwidefsky | 1 file | -1/+1

arch_setup_additional_pages currently gets two arguments: the binary format description and an indication of whether the process uses an executable stack or not. The second argument is not used by anybody; it could be removed without replacement.

What actually does make sense is to pass an indication of whether the process uses the ELF interpreter or not. The glibc code will not use anything from the vdso if the process does not use the dynamic linker, so for statically linked binaries the architecture backend can choose not to map the vdso.

Acked-by: Ingo Molnar <[email protected]>
Signed-off-by: Martin Schwidefsky <[email protected]>

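In terms of the hook's signature, the change described above amounts to something like the following sketch (the parameter name is an assumption):

    /* Second argument: non-zero if the ELF interpreter (dynamic linker) is
     * mapped, zero for a static binary, letting an architecture backend
     * skip the vdso mapping. */
    int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp);
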
2008-10-22 | x86: Fix ASM_X86__ header guards | H. Peter Anvin | 1 file | -3/+3

Change header guards named "ASM_X86__*" to "_ASM_X86_*" since:

 a. the double underscore is ugly and pointless.
 b. the lack of a leading underscore violates namespace constraints.

Signed-off-by: H. Peter Anvin <[email protected]>

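For this file, the rename amounts to the following change to the guard (shown for illustration):

    /* before */
    #ifndef ASM_X86__ELF_H
    #define ASM_X86__ELF_H

    /* after */
    #ifndef _ASM_X86_ELF_H
    #define _ASM_X86_ELF_H
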
2008-10-22 | x86, um: ... and asm-x86 move | Al Viro | 1 file | -0/+336

Signed-off-by: Al Viro <[email protected]>
Signed-off-by: H. Peter Anvin <[email protected]>