Age  Commit message  (Author; files changed, lines -deleted/+added)
2016-09-30  ARCv2: intc: Use kflag if STATUS32.IE must be reset  (Yuriy Kolerov; 1 file, -1/+1)
At the end of arc_init_IRQ() the STATUS32.IE flag is going to be affected by the "flag" instruction, but "flag" never touches the IE flag on ARCv2. So the "kflag" instruction must be used instead of "flag". Signed-off-by: Yuriy Kolerov <[email protected]> Cc: [email protected] #4.2+ Signed-off-by: Vineet Gupta <[email protected]>
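For illustration, a hedged sketch of the idea (not the exact arc_init_IRQ() code; the STATUS32 aux register number and the STATUS_IE_MASK name are assumptions):

  /* Clear STATUS32.IE on ARCv2: "flag" silently skips the IE bit here,
   * so the kernel-mode "kflag" instruction is used to write it back. */
  static inline void arcv2_status32_clear_ie(void)
  {
      unsigned int tmp = read_aux_reg(0x0a);      /* STATUS32 (assumed reg num) */

      tmp &= ~STATUS_IE_MASK;                     /* assumed mask name */
      __asm__ __volatile__("kflag %0" : : "r" (tmp));
  }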
2016-09-30  ARC: .exit.* sections can be discarded in .eh_frame regime  (Vineet Gupta; 1 file, -8/+0)
We used to keep the .exit.* sections because the linker would fail in the final link due to references from .debug_frame, which itself could not be discarded because of the forced "write,alloc" attributes set for it:

  | LD      init/built-in.o
  | `.exit.text' referenced in section `.debug_frame' of arch/arc/built-in.o: defined in discarded section `.exit.text' of arch/arc/built-in.o
  | Makefile:949: recipe for target 'vmlinux' failed

With .debug_frame now retired, this hack is no longer needed. The kernel binary is now a little bit smaller as well. Closes STAR 9000549913. Signed-off-by: Vineet Gupta <[email protected]>
2016-09-30  ARC: dw2 unwind: enable cfi pseudo ops in string lib  (Vineet Gupta; 11 files, -25/+26)
This uses a new set of annotations, viz. ENTRY_CFI/END_CFI, to enable cfi ops generation. Note that we didn't change the normal ENTRY/EXIT, as we don't actually want unwind info in the trap/exception/interrupt handlers which use these; the unwinder then gets confused (it keeps recursing vs. stopping). Semantically these are leaf routines and unwinding should stop when it hits them.

Before
------
  28.52%  1.19%   9929  hackbench  libuClibc-1.0.17.so  [.] __write_nocancel
          |
          ---__write_nocancel
             |--8.95%--EV_Trap
             |          --8.25%--sys_write
             |                     |--3.93%--sock_write_iter
             ...
             |--2.62%--memset    <==== [LEAF entry as no unwind info]
                       ^^^^^^

After
-----
  29.46%  1.24%  13622  hackbench  libuClibc-1.0.17.so  [.] __write_nocancel
          |
          ---__write_nocancel
             |--9.31%--EV_Trap
             |          --8.62%--sys_write
             |                     |--4.17%--sock_write_iter
             ...
             |--6.19%--sys_write
             |          --6.19%--sock_write_iter
             |                     unix_stream_sendmsg
             |                     |--1.62%--sock_alloc_send_pskb
             |                     |--0.89%--sock_def_readable
             |                     |--0.88%--_raw_spin_unlock_irqrestore
             |                     |--0.69%--memset
             |                     |          ^^^^^^   <==== [now in proper callframe]
             |                     |
             |                      --0.52%--skb_copy_datagram_from_iter

Signed-off-by: Vineet Gupta <[email protected]>
2016-09-30  ARC: dw2 unwind: add infrastructure for adding cfi pseudo ops to asm  (Vineet Gupta; 3 files, -1/+52)
1. Detect whether binutils supports the cfi pseudo ops.
2. Define conditional macros to generate the ops.
3. Define new ENTRY_CFI/END_CFI to annotate hand-written asm code (a sketch follows this entry).
   - Needed because we don't want to emit dwarf info in the general ENTRY/END used by the lowest level trap/exception/interrupt handlers, as the unwinder gets confused trying to unwind out of them. We want the unwinder to instead stop when it hits those routines.
   - These provide minimal start/end cfi ops, assuming the routine doesn't touch stack memory/regs.
Signed-off-by: Vineet Gupta <[email protected]>
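A minimal sketch of what such conditional annotations can look like (the HAVE_AS_CFI guard and the exact macro bodies are assumptions, not the in-tree definitions):

  /* Emit .cfi_* only when binutils supports it (detected at build time). */
  #ifdef HAVE_AS_CFI                       /* assumed config symbol */
  #define CFI_STARTPROC   .cfi_startproc
  #define CFI_ENDPROC     .cfi_endproc
  #else
  #define CFI_STARTPROC
  #define CFI_ENDPROC
  #endif

  /* Like ENTRY/END, but also brackets the routine with minimal unwind info. */
  #define ENTRY_CFI(name)      \
      .globl name;             \
      .align 4;                \
  name:                        \
      CFI_STARTPROC

  #define END_CFI(name)        \
      CFI_ENDPROC;             \
      .size name, .-name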
2016-09-30  ARC: entry: make ret_from_system_call local label  (Vineet Gupta; 1 file, -7/+5)
This essentially removes the ENTRY() assembler annotation for this symbol since it didn't have a pairing END(). This is ahead of introducing cfi pseudo ops in ENTRY/END, which expect paired cfi_startproc/cfi_endproc:

  | ../arch/arc/kernel/entry.S: Assembler messages:
  | ../arch/arc/kernel/entry.S:270: Error: previous CFI entry not closed (missing .cfi_endproc)
  | ../scripts/Makefile.build:326: recipe for target 'arch/arc/kernel/entry-arcv2.o' failed
  | make[4]: *** [arch/arc/kernel/entry-arcv2.o] Error 1

Signed-off-by: Vineet Gupta <[email protected]>
2016-09-30  ARC: dw2 unwind: don't force dwarf 2  (Vineet Gupta; 1 file, -6/+0)
In the .debug_frame based unwinding regime, we used to force -gdwarf-2 since the kernel unwinder only claimed to handle dwarf 2. This changed with commit 6d0d506012c93d ("ARC: dw2 unwind: Don't bail for CIE.version != 1") which added some support beyond dwarf 2, at least to handle CIE.version != 1.

The ill effect of -gdwarf-2 is that it forces generation of .debug_* sections, which bloats loadable module .ko files. For the curious, this doesn't affect the vmlinux binary since the linker script discards .debug_*, but the same discard is not yet implemented for modules.

So it seems we can drop the -gdwarf-2 toggle, which should not be needed anyway given that we now use .eh_frame based unwinding. I've verified using GNU 2016.09-engo10 that the actual unwind info is not different with or without this toggle - but the .debug_* sections are gone for good.

before
------
  arc-linux-readelf -S q_proc.ko-unwinding-1-eh_frame-switch | grep debug
  [15] .debug_info        PROGBITS  00000000 000300 00d08d 00      0  0  1
  [16] .rela.debug_info   RELA      00000000 0162a0 008844 0c   I 29 15  4
  [17] .debug_abbrev      PROGBITS  00000000 00d38d 0005f8 00      0  0  1
  [18] .debug_loc         PROGBITS  00000000 00d985 000070 00      0  0  1
  [19] .rela.debug_loc    RELA      00000000 01eae4 0000c0 0c   I 29 18  4
  [20] .debug_aranges     PROGBITS  00000000 00d9f5 000040 00      0  0  1
  [21] .rela.debug_arang  RELA      00000000 01eba4 000030 0c   I 29 20  4
  [22] .debug_ranges      PROGBITS  00000000 00da35 000018 00      0  0  1
  [23] .rela.debug_range  RELA      00000000 01ebd4 000030 0c   I 29 22  4
  [24] .debug_line        PROGBITS  00000000 00da4d 000b5b 00      0  0  1
  [25] .rela.debug_line   RELA      00000000 01ec04 0000cc 0c   I 29 24  4
  [26] .debug_str         PROGBITS  00000000 00e5a8 007831 01  MS  0  0  1

after
-----
  (no output - the .debug_* sections are gone)

Signed-off-by: Vineet Gupta <[email protected]>
2016-09-30  ARC: dw2 unwind: switch to .eh_frame based unwinding  (Vineet Gupta; 5 files, -34/+13)
So finally, after almost 8 years of dealing with .debug_frame, we are switching to .eh_frame. The reason: stripped kernel binaries had a non-functional unwinder, as .debug_frame was gone. Also, in general .eh_frame seems to be the more common way of doing unwinding. This also folds in a revert of f52e126cc747 ("ARC: unwind: ensure that .debug_frame is generated (vs. .eh_frame)") to ensure that we start getting .eh_frame. Reported-by: Daniel Mentz <[email protected]> Signed-off-by: Vineet Gupta <[email protected]>
2016-09-30  ARC: dw2 unwind: factor CIE specifics for .eh_frame/.debug_frame  (Vineet Gupta; 1 file, -7/+18)
This paves the way for switching to .eh_frame based unwinding. Signed-off-by: Vineet Gupta <[email protected]>
2016-09-30  ARC: module: support R_ARC_32_PCREL relocation  (Vineet Gupta; 2 files, -4/+5)
Signed-off-by: Vineet Gupta <[email protected]>
2016-09-30  arc: perf: Enable generic "cache-references" and "cache-misses" events  (Alexey Brodkin; 2 files, -2/+7)
We used to live with PERF_COUNT_HW_CACHE_REFERENCES and PERF_COUNT_HW_CACHE_MISSES not specified on ARC. Those events are actually aliases of 2 cache events that we do support, so this change sets up the "cache-references" and "cache-misses" events the same way as "L1-dcache-loads" and "L1-dcache-load-misses". While at it, add debug info for cache events, and make a subtle fix in the HW events debug info - the config value is much better represented in hex, so we can see not only the event index but also any other control bits that are set. Signed-off-by: Alexey Brodkin <[email protected]> Cc: Vineet Gupta <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Cc: [email protected] Cc: Arnaldo Carvalho de Melo <[email protected]> Cc: Peter Zijlstra <[email protected]> Signed-off-by: Vineet Gupta <[email protected]>
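A hedged sketch of what the aliasing can look like in the ARC PMU's event map (the hardware condition strings below are placeholders, not the real ARC PCT condition names):

  /* Wire the generic cache events to the same HW conditions already
   * used for L1-dcache-loads / L1-dcache-load-misses. */
  static const char * const arc_pmu_ev_hw_map[] = {
      [PERF_COUNT_HW_CACHE_REFERENCES] = "dc_access",   /* placeholder */
      [PERF_COUNT_HW_CACHE_MISSES]     = "dc_miss",     /* placeholder */
  };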
2016-09-30  ARC: [plat-eznps] add missing atomic_fetch_xxx operations  (Noam Camus; 1 file, -0/+2)
Build breakage since the last changes to the generic atomic operations. Added a couple of missing macros which are now mandatory. Signed-off-by: Noam Camus <[email protected]> Signed-off-by: Vineet Gupta <[email protected]>
2016-09-30  ARCv2: Implement atomic64 based on LLOCKD/SCONDD instructions  (Vineet Gupta; 2 files, -3/+260)
The ARCv2 ISA provides 64-bit exclusive load/stores, so use them to implement the 64-bit atomics and elide the spinlock based generic 64-bit atomics. Boot tested with the atomic64 self-test (and GOD bless the person who wrote them, I realized my inline assembly is sloppy as hell). Cc: Peter Zijlstra <[email protected]> Cc: Will Deacon <[email protected]> Cc: [email protected] Cc: [email protected] Signed-off-by: Vineet Gupta <[email protected]>
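A minimal sketch of the LLOCKD/SCONDD retry loop such an op is built on (the register constraints are an assumption; the in-tree code generates the whole op family via macros):

  /* 64-bit atomic add via 64-bit exclusive load/store. */
  static inline void arc_atomic64_add_sketch(s64 a, atomic64_t *v)
  {
      s64 val;

      __asm__ __volatile__(
      "1:                          \n"
      "    llockd  %0, [%1]        \n"   /* 64-bit exclusive load         */
      "    add.f   %L0, %L0, %L2   \n"   /* add low word, set carry       */
      "    adc     %H0, %H0, %H2   \n"   /* add high word plus carry      */
      "    scondd  %0, [%1]        \n"   /* 64-bit exclusive store        */
      "    bnz     1b              \n"   /* retry if reservation was lost */
      : "=&r"(val)
      : "r"(&v->counter), "ir"(a)
      : "cc");
  }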
2016-09-30  ARCv2: Support dynamic peripheral address space in HS38 rel 3.0 cores  (Vineet Gupta; 5 files, -18/+23)
HS release 3.0 provides for even more flexibility in specifying the volatile address space for mapping peripherals. With HS 2.1 @start was made flexible / programmable - with HS 3.0 even @end can be set up (vs. being fixed to 0xFFFF_FFFF before). So add code to reflect that, and while at it remove an unused struct definition. Signed-off-by: Vineet Gupta <[email protected]>
2016-09-30  ARCv2: identify HS38 rel 3.0 cores  (Vineet Gupta; 1 file, -0/+1)
Signed-off-by: Vineet Gupta <[email protected]>
2016-09-30  ARCv2: Add support for ZeBu Emulation platform for HS cores  (Vineet Gupta; 5 files, -0/+330)
The cool thing is that the same kernel image can run on:
  - the nsim OSCI simulation platform
  - SDPlite FPGA setups
Signed-off-by: Vineet Gupta <[email protected]>
2016-09-30  arc: Add "model" property in device tree description of all boards  (Alexey Brodkin; 13 files, -0/+13)
As was discussed quite some time ago (see https://lkml.org/lkml/2015/11/5/862), it's good practice to add a "model" property in .dts files. Moreover, as per ePAPR the "model" property is required and should look like "manufacturer,model", so that's what we do here. Signed-off-by: Alexey Brodkin <[email protected]> Cc: Vineet Gupta <[email protected]> Cc: Jonas Gorski <[email protected]> Cc: Arnd Bergmann <[email protected]> Cc: Rob Herring <[email protected]> Cc: Christian Ruppert <[email protected]> Signed-off-by: Vineet Gupta <[email protected]>
2016-09-30  MAINTAINERS: Switch to kernel.org email address for Javi Merino  (Javi Merino; 2 files, -1/+2)
Change my email address to my kernel.org account instead of the ARM one. Signed-off-by: Javi Merino <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2016-09-30  x86/entry/64: Fix context tracking state warning when load_gs_index fails  (Wanpeng Li; 1 file, -2/+2)
This warning:

  WARNING: CPU: 0 PID: 3331 at arch/x86/entry/common.c:45 enter_from_user_mode+0x32/0x50
  CPU: 0 PID: 3331 Comm: ldt_gdt_64 Not tainted 4.8.0-rc7+ #13
  Call Trace:
   dump_stack+0x99/0xd0
   __warn+0xd1/0xf0
   warn_slowpath_null+0x1d/0x20
   enter_from_user_mode+0x32/0x50
   error_entry+0x6d/0xc0
   ? general_protection+0x12/0x30
   ? native_load_gs_index+0xd/0x20
   ? do_set_thread_area+0x19c/0x1f0
   SyS_set_thread_area+0x24/0x30
   do_int80_syscall_32+0x7c/0x220
   entry_INT80_compat+0x38/0x50

... can be reproduced by running the GS testcase of the ldt_gdt test unit in the x86 selftests.

do_int80_syscall_32() calls enter_from_user_mode() to convert the context tracking state from user state to kernel state. The load_gs_index() call can fail with a user gsbase; in that case gsbase is fixed up and execution proceeds. However, enter_from_user_mode() is then called again on the fixed-up path even though context tracking is already in kernel state. This patch fixes it by just fixing up gsbase and telling lockdep that IRQs are off once load_gs_index() has failed with a user gsbase. Signed-off-by: Wanpeng Li <[email protected]> Acked-by: Andy Lutomirski <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Brian Gerst <[email protected]> Cc: Denys Vlasenko <[email protected]> Cc: H. Peter Anvin <[email protected]> Cc: Josh Poimboeuf <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2016-09-30  x86/boot: Initialize FPU and X86_FEATURE_ALWAYS even if we don't have CPUID  (Andy Lutomirski; 1 file, -12/+11)
Otherwise arch_task_struct_size == 0 and we die. While we're at it, set X86_FEATURE_ALWAYS, too. Reported-by: David Saggiorato <[email protected]> Tested-by: David Saggiorato <[email protected]> Signed-off-by: Andy Lutomirski <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Brian Gerst <[email protected]> Cc: Dave Hansen <[email protected]> Cc: Denys Vlasenko <[email protected]> Cc: H. Peter Anvin <[email protected]> Cc: Josh Poimboeuf <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Fixes: aaeb5c01c5b ("x86/fpu, sched: Introduce CONFIG_ARCH_WANTS_DYNAMIC_TASK_STRUCT and use it on x86") Link: http://lkml.kernel.org/r/8de723afbf0811071185039f9088733188b606c9.1475103911.git.luto@kernel.org Signed-off-by: Ingo Molnar <[email protected]>
2016-09-30  x86/asm: Get rid of __read_cr4_safe()  (Andy Lutomirski; 9 files, -26/+11)
We use __read_cr4() vs __read_cr4_safe() inconsistently. On CR4-less CPUs, all CR4 bits are effectively clear, so we can make the code simpler and more robust by making __read_cr4() always fix up faults on 32-bit kernels. This may fix some bugs on old 486-like CPUs, but I don't have any easy way to test that. Signed-off-by: Andy Lutomirski <[email protected]> Cc: Brian Gerst <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: [email protected] Link: http://lkml.kernel.org/r/ea647033d357d9ce2ad2bbde5a631045f5052fb6.1475178370.git.luto@kernel.org Signed-off-by: Thomas Gleixner <[email protected]>
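A hedged sketch of what "always fix up faults" means for the 32-bit read (simplified; the in-tree helper differs in detail). _ASM_EXTABLE comes from asm/asm.h:

  /* Read CR4; on 486-class CPUs without CR4 the mov faults and the
   * exception fixup leaves the preloaded 0 in place. */
  static inline unsigned long read_cr4_sketch(void)
  {
      unsigned long val;

  #ifdef CONFIG_X86_32
      asm volatile("1: mov %%cr4, %0\n"
                   "2:\n"
                   _ASM_EXTABLE(1b, 2b)
                   : "=r" (val) : "0" (0));
  #else
      asm volatile("mov %%cr4, %0" : "=r" (val));   /* CR4 always exists */
  #endif
      return val;
  }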
2016-09-30  Merge branch 'x86/urgent' into x86/asm  (Thomas Gleixner; 153 files, -743/+1276)
Get the CR4 fixes so we can apply the final cleanup.
2016-09-30  x86/vdso: Fix building on big endian host  (Segher Boessenkool; 1 file, -1/+1)
We need to call GET_LE to read hdr->e_type. Fixes: 57f90c3dfc75 ("x86/vdso: Error out if the vDSO isn't a valid DSO") Reported-by: Paul Gortmaker <[email protected]> Signed-off-by: Segher Boessenkool <[email protected]> Acked-by: Andy Lutomirski <[email protected]> Cc: Stephen Rothwell <[email protected]> Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Thomas Gleixner <[email protected]>
2016-09-30  x86/boot: Fix another __read_cr4() case on 486  (Andy Lutomirski; 1 file, -3/+1)
The condition for reading CR4 was wrong: there are some CPUs with CPUID but not CR4. Rather than trying to make the condition exact, use __read_cr4_safe(). Fixes: 18bc7bd523e0 ("x86/boot: Synchronize trampoline_cr4_features and mmu_cr4_features directly") Reported-by: [email protected] Signed-off-by: Andy Lutomirski <[email protected]> Reviewed-by: Borislav Petkov <[email protected]> Cc: Brian Gerst <[email protected]> Link: http://lkml.kernel.org/r/8c453a61c4f44ab6ff43c29780ba04835234d2e5.1475178369.git.luto@kernel.org Signed-off-by: Thomas Gleixner <[email protected]>
2016-09-30  sched/irqtime: Consolidate irqtime flushing code  (Frederic Weisbecker; 1 file, -15/+11)
The code performing irqtime nsecs stats flushing to kcpustat is roughly the same for hardirq and softirq, so let's consolidate that common code. Signed-off-by: Frederic Weisbecker <[email protected]> Reviewed-by: Rik van Riel <[email protected]> Cc: Eric Dumazet <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Mike Galbraith <[email protected]> Cc: Paolo Bonzini <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Wanpeng Li <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2016-09-30  sched/irqtime: Consolidate accounting synchronization with u64_stats API  (Frederic Weisbecker; 2 files, -55/+29)
The irqtime accounting code currently implements its own ad hoc version of the u64_stats API. Let's consolidate it with the appropriate library. Signed-off-by: Frederic Weisbecker <[email protected]> Reviewed-by: Rik van Riel <[email protected]> Cc: Eric Dumazet <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Mike Galbraith <[email protected]> Cc: Paolo Bonzini <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Wanpeng Li <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
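A hedged sketch of the resulting shape (field and function names are illustrative, close to but not guaranteed to match the kernel code); struct u64_stats_sync is from linux/u64_stats_sync.h:

  /* Per-CPU irqtime protected by a u64_stats seqcount instead of a
   * hand-rolled sequence counter. */
  struct irqtime {
      u64                     hardirq_time;
      u64                     softirq_time;
      struct u64_stats_sync   sync;
  };

  static DEFINE_PER_CPU(struct irqtime, cpu_irqtime);

  static void irqtime_account_delta(struct irqtime *irqtime, u64 delta, bool hard)
  {
      u64_stats_update_begin(&irqtime->sync);
      if (hard)
          irqtime->hardirq_time += delta;
      else
          irqtime->softirq_time += delta;
      u64_stats_update_end(&irqtime->sync);
  }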
2016-09-30  u64_stats: Introduce IRQs disabled helpers  (Frederic Weisbecker; 1 file, -21/+24)
Introduce light versions of the u64_stats helpers for contexts where either preemption or IRQs are disabled. This way we can make this library usable by the scheduler irqtime accounting, which currently implements its own ad-hoc version. Signed-off-by: Frederic Weisbecker <[email protected]> Cc: Eric Dumazet <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Mike Galbraith <[email protected]> Cc: Paolo Bonzini <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Rik van Riel <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Wanpeng Li <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
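A hedged sketch of such a lighter reader-side helper on 32-bit SMP, where the seqcount actually matters (the helper name here is illustrative, not necessarily the one the patch introduces):

  /* Fetch-begin that assumes the caller already runs with preemption or
   * IRQs disabled, so no extra protection is taken. */
  static inline unsigned int
  u64_stats_fetch_begin_nocheck(const struct u64_stats_sync *syncp)
  {
  #if BITS_PER_LONG == 32 && defined(CONFIG_SMP)
      return read_seqcount_begin(&syncp->seq);
  #else
      return 0;    /* 64-bit loads are atomic; nothing to sample */
  #endif
  }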
2016-09-30  sched/irqtime: Remove needless IRQs disablement on kcpustat update  (Frederic Weisbecker; 1 file, -6/+5)
The callers of the functions performing irqtime kcpustat updates already have IRQs disabled, so there is no need to disable them again. Signed-off-by: Frederic Weisbecker <[email protected]> Reviewed-by: Rik van Riel <[email protected]> Cc: Eric Dumazet <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Mike Galbraith <[email protected]> Cc: Paolo Bonzini <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Wanpeng Li <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2016-09-30  sched/irqtime: No need for preempt-safe accessors  (Frederic Weisbecker; 1 file, -2/+2)
We can safely use the preempt-unsafe accessors for irqtime when we flush its counters to kcpustat as IRQs are disabled at this time. Signed-off-by: Frederic Weisbecker <[email protected]> Reviewed-by: Rik van Riel <[email protected]> Cc: Eric Dumazet <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Mike Galbraith <[email protected]> Cc: Paolo Bonzini <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Wanpeng Li <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2016-09-30  sched/fair: Fix min_vruntime tracking  (Peter Zijlstra; 1 file, -7/+22)
While going through enqueue/dequeue to review the movement of set_curr_task() I noticed that the (2nd) update_min_vruntime() call in dequeue_entity() is suspect. It turns out it's actually wrong, because it will consider cfs_rq->curr, which could be the entity we just normalized. This mixes different vruntime forms and leads to failure. The purpose of the second update_min_vruntime() is to move min_vruntime forward if the entity we just removed is the one that was holding it back; _except_ for the DEQUEUE_SAVE case, because then we know it's a temporary removal and it will come back. However, since we do put_prev_task() _after_ dequeue(), cfs_rq->curr will still be set (and, per the above, can be transformed into a different unit), so update_min_vruntime() should also consider curr->on_rq. This also fixes another corner case where the enqueue (which also does update_curr()->update_min_vruntime()) happens on the rq->lock break in schedule(), between dequeue and put_prev_task. Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Mike Galbraith <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Fixes: 1e876231785d ("sched: Fix ->min_vruntime calculation in dequeue_entity()") Signed-off-by: Ingo Molnar <[email protected]>
2016-09-30  sched/debug: Add SCHED_WARN_ON()  (Peter Zijlstra; 2 files, -6/+10)
Provide SCHED_WARN_ON as a wrapper for WARN_ON_ONCE() to avoid CONFIG_SCHED_DEBUG wrappery. Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Mike Galbraith <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Signed-off-by: Ingo Molnar <[email protected]>
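A minimal sketch of such a wrapper (the exact in-tree definition may differ slightly):

  /* Only complain when scheduler debugging is built in. */
  #ifdef CONFIG_SCHED_DEBUG
  # define SCHED_WARN_ON(x)    WARN_ON_ONCE(x)
  #else
  # define SCHED_WARN_ON(x)    ((void)(x))
  #endif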
2016-09-30  sched/core: Fix set_user_nice()  (Peter Zijlstra; 1 file, -1/+7)
Almost all scheduler functions update state with the following pattern:

  if (queued)
      dequeue_task(rq, p, DEQUEUE_SAVE);
  if (running)
      put_prev_task(rq, p);

  /* update state */

  if (queued)
      enqueue_task(rq, p, ENQUEUE_RESTORE);
  if (running)
      set_curr_task(rq, p);

set_user_nice() however misses the running part; cure this. This was found by asserting we never enqueue 'current'. Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Mike Galbraith <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Signed-off-by: Ingo Molnar <[email protected]>
2016-09-30  sched/fair: Introduce set_curr_task() helper  (Peter Zijlstra; 2 files, -5/+10)
Now that the ia64-only set_curr_task() symbol is gone, provide a helper just like put_prev_task(). Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Mike Galbraith <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Signed-off-by: Ingo Molnar <[email protected]>
2016-09-30  sched/core, ia64: Rename set_curr_task()  (Peter Zijlstra; 3 files, -7/+7)
Rename the ia64 only set_curr_task() function to free up the name. Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Mike Galbraith <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Tony Luck <[email protected]> Cc: [email protected] Signed-off-by: Ingo Molnar <[email protected]>
2016-09-30  sched/core: Fix incorrect utilization accounting when switching to fair class  (Vincent Guittot; 1 file, -10/+10)
When a task switches to the fair scheduling class, the period between now and the last update of its utilization is accounted as running time, whatever happened during this period. This incorrect accounting applies to the task and also to the task group branch.

When changing the property of a running task, like its list of allowed CPUs or its scheduling class, we follow the sequence:
  - dequeue task
  - put task
  - change the property
  - set task as current task
  - enqueue task

The end of the sequence doesn't follow the normal sequence (as per __schedule()), which is:
  - enqueue a task
  - then set the task as current task

This incorrect ordering is the root cause of the incorrect utilization accounting. Update the sequence to follow the right one:
  - dequeue task
  - put task
  - change the property
  - enqueue task
  - set task as current task

Signed-off-by: Vincent Guittot <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Mike Galbraith <[email protected]> Cc: [email protected] Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2016-09-30  sched/core: Optimize SCHED_SMT  (Peter Zijlstra; 3 files, -7/+43)
Avoid pointless SCHED_SMT code when running on !SMT hardware. Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Mike Galbraith <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Signed-off-by: Ingo Molnar <[email protected]>
2016-09-30  sched/core: Rewrite and improve select_idle_siblings()  (Peter Zijlstra; 5 files, -47/+234)
select_idle_siblings() is a known pain point for a number of workloads; it either does too much or not enough, and sometimes it just gets it plain wrong.

This rewrite attempts to address a number of issues (but sadly not all). The current code does an unconditional sched_domain iteration, with the intent of finding an idle core (on SMT hardware). The problems which this patch tries to address are:

 - it is pointless to look for idle cores if the machine is really busy; at that point you're just wasting cycles.

 - its behaviour is inconsistent between SMT and !SMT hardware, in that !SMT hardware ends up doing a scan for any idle CPU in the LLC domain, while SMT hardware does a scan for idle cores and, if that fails, falls back to a scan for idle threads on the 'target' core.

The new code replaces the sched_domain scan with 3 explicit scans (see the sketch after this entry):

 1) search for an idle core in the LLC
 2) search for an idle CPU in the LLC
 3) search for an idle thread in the 'target' core

where 1 and 3 are conditional on SMT support and 1 and 2 have runtime heuristics to skip the step.

Step 1) is conditional on sd_llc_shared->has_idle_cores; when a CPU goes idle and sd_llc_shared->has_idle_cores is false, we scan all SMT siblings of the CPU going idle. Similarly, we clear sd_llc_shared->has_idle_cores when we fail to find an idle core.

Step 2) tracks the average cost of the scan and compares this to the average idle time guesstimate for the CPU doing the wakeup. There is a significant fudge factor involved to deal with the variability of the averages. Esp. hackbench was sensitive to this.

Step 3) is unconditional; we assume (also per step 1) that scanning all SMT siblings in a core is 'cheap'.

With this, SMT systems gain step 2, which cures a few benchmarks -- notably one from Facebook.

One 'feature' of the sched_domain iteration which we preserve in the new code is that it starts scanning from the 'target' CPU, instead of scanning the cpumask in cpu id order. This keeps multiple CPUs in the LLC that are scanning for idle from ganging up and finding the same CPU quite as much. The down side is that tasks can end up hopping across the LLC for no apparent reason. Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Mike Galbraith <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Signed-off-by: Ingo Molnar <[email protected]>
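A hedged sketch of the new top-level selection order (simplified; the helper names mirror the three scans described above and are assumptions about the actual implementation):

  /* Try an idle core in the LLC, then any idle CPU in the LLC, then an
   * idle SMT sibling of the target core; fall back to target. */
  static int select_idle_sibling_sketch(struct task_struct *p, int target)
  {
      struct sched_domain *sd;
      int i;

      if (idle_cpu(target))
          return target;

      sd = rcu_dereference(per_cpu(sd_llc, target));
      if (!sd)
          return target;

      i = select_idle_core(p, sd, target);          /* 1) idle core in LLC */
      if ((unsigned int)i < nr_cpumask_bits)
          return i;

      i = select_idle_cpu(p, sd, target);           /* 2) idle CPU in LLC  */
      if ((unsigned int)i < nr_cpumask_bits)
          return i;

      i = select_idle_smt(p, sd, target);           /* 3) idle SMT sibling */
      if ((unsigned int)i < nr_cpumask_bits)
          return i;

      return target;
  }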
2016-09-30  x86/cmpxchg, locking/atomics: Remove superfluous definitions  (Nikolay Borisov; 1 file, -44/+0)
cmpxchg.h contained definitions for the (x)add_* operations, dating back to the original ticket spinlock implementation. Nowadays these are unused, so remove them. Signed-off-by: Nikolay Borisov <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Paul E. McKenney <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2016-09-30  x86, locking/spinlocks: Remove ticket (spin)lock implementation  (Peter Zijlstra; 10 files, -719/+6)
We've unconditionally used the queued spinlock for many releases now. It's time to remove the old ticket lock code. Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Paul E. McKenney <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Waiman Long <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2016-09-30  Merge branch 'linus' into locking/core, to pick up fixes  (Ingo Molnar; 154 files, -715/+1244)
Signed-off-by: Ingo Molnar <[email protected]>
2016-09-30  sched/core: Replace sd_busy/nr_busy_cpus with sched_domain_shared  (Peter Zijlstra; 4 files, -20/+19)
Move the nr_busy_cpus thing from its hacky sd->parent->groups->sgc location into the much more natural sched_domain_shared location. Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Mike Galbraith <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Signed-off-by: Ingo Molnar <[email protected]>
2016-09-30  sched/core: Introduce 'struct sched_domain_shared'  (Peter Zijlstra; 2 files, -5/+45)
Since struct sched_domain is strictly per CPU, introduce a structure that is shared between all 'identical' sched_domains. Limit it to SD_SHARE_PKG_RESOURCES domains for now, as we'll only use it for shared cache state; if another use comes up later we can easily relax this. While sched_groups are normally shared between CPUs, they are not natural to use when we need some shared state on a domain level -- since that would require the domain to have a parent, which is not a given. Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Mike Galbraith <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Signed-off-by: Ingo Molnar <[email protected]>
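A hedged sketch of the shared structure (the last two fields reflect follow-up patches in this series rather than this exact commit):

  /* State shared by all 'identical' sched domains; struct sched_domain
   * gains a "struct sched_domain_shared *shared" pointer to it. */
  struct sched_domain_shared {
      atomic_t    ref;              /* lifetime of the shared object       */
      atomic_t    nr_busy_cpus;     /* replaces the sd_busy bookkeeping    */
      int         has_idle_cores;   /* hint used by select_idle_siblings() */
  };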
2016-09-30  sched/core: Restructure destroy_sched_domain()  (Peter Zijlstra; 1 file, -7/+11)
There is no point in doing a call_rcu() for each domain; only do a callback for the root sched domain and clean up the entire set in one go. Also rename the entire call chain to destroy_sched_domain*() to remove confusion with the free_sched_domains() call, which does an entirely different thing. Both cpu_attach_domain() callers of destroy_sched_domain() can live without the call_rcu() because at those points the sched_domain hasn't been published yet. Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Mike Galbraith <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Signed-off-by: Ingo Molnar <[email protected]>
2016-09-30  sched/core: Remove unused @cpu argument from destroy_sched_domain*()  (Peter Zijlstra; 1 file, -6/+6)
Small cleanup; nothing uses the @cpu argument so make it go away. Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Mike Galbraith <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Signed-off-by: Ingo Molnar <[email protected]>
2016-09-30  sched/wait: Introduce init_wait_entry()  (Oleg Nesterov; 2 files, -9/+12)
The partial initialization of wait_queue_t in prepare_to_wait_event() looks ugly. This was done to shrink .text, but we can simply add a new helper which does the full initialization and shrinks the compiled code a bit more. And this way prepare_to_wait_event() can have more users. In particular, we are ready to remove the signal_pending_state() checks from the wait_bit_action_f helpers and change __wait_on_bit_lock() to use prepare_to_wait_event(). Signed-off-by: Oleg Nesterov <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: Al Viro <[email protected]> Cc: Bart Van Assche <[email protected]> Cc: Johannes Weiner <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Mike Galbraith <[email protected]> Cc: Neil Brown <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
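A hedged sketch of such a full-initialization helper, based on the 2016-era wait_queue_t layout (the real helper may differ in detail):

  /* Initialize every field of a wait queue entry in one place. */
  void init_wait_entry(wait_queue_t *wait, int flags)
  {
      wait->flags   = flags;
      wait->private = current;
      wait->func    = autoremove_wake_function;
      INIT_LIST_HEAD(&wait->task_list);
  }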
2016-09-30  sched/wait: Avoid abort_exclusive_wait() in __wait_on_bit_lock()  (Oleg Nesterov; 2 files, -44/+21)
__wait_on_bit_lock() doesn't need abort_exclusive_wait() either. Right now it can't use prepare_to_wait_event() (see the next change), but it can do the additional finish_wait() if action() fails. abort_exclusive_wait() no longer has callers; remove it. Signed-off-by: Oleg Nesterov <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: Al Viro <[email protected]> Cc: Bart Van Assche <[email protected]> Cc: Johannes Weiner <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Mike Galbraith <[email protected]> Cc: Neil Brown <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2016-09-30  sched/wait: Avoid abort_exclusive_wait() in ___wait_event()  (Oleg Nesterov; 2 files, -16/+26)
___wait_event() doesn't really need abort_exclusive_wait(); we can simply change prepare_to_wait_event() to remove the waiter from q->task_list if it was interrupted. This simplifies the code/logic, and this way prepare_to_wait_event() can have more users, see the next change. Signed-off-by: Oleg Nesterov <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: Al Viro <[email protected]> Cc: Bart Van Assche <[email protected]> Cc: Johannes Weiner <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Mike Galbraith <[email protected]> Cc: Neil Brown <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
--
 include/linux/wait.h |  7 +------
 kernel/sched/wait.c  | 35 +++++++++++++++++++++++++----------
 2 files changed, 26 insertions(+), 16 deletions(-)
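A hedged sketch of the resulting prepare_to_wait_event() logic (simplified; the exact signal/return handling in the real function may differ):

  /* Queue the waiter, but if a signal is already pending, drop an
   * already-queued entry ourselves instead of relying on
   * abort_exclusive_wait(). */
  long prepare_to_wait_event(wait_queue_head_t *q, wait_queue_t *wait, int state)
  {
      unsigned long flags;
      long ret = 0;

      spin_lock_irqsave(&q->lock, flags);
      if (unlikely(signal_pending_state(state, current))) {
          list_del_init(&wait->task_list);   /* remove ourselves if queued */
          ret = -ERESTARTSYS;
      } else {
          if (list_empty(&wait->task_list)) {
              if (wait->flags & WQ_FLAG_EXCLUSIVE)
                  __add_wait_queue_tail(q, wait);
              else
                  __add_wait_queue(q, wait);
          }
          set_current_state(state);
      }
      spin_unlock_irqrestore(&q->lock, flags);

      return ret;
  }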
2016-09-30  sched/wait: Fix abort_exclusive_wait(), it should pass TASK_NORMAL to wake_up()  (Oleg Nesterov; 2 files, -8/+6)
Otherwise this logic only works if 'mode' is "compatible" with another exclusive waiter. If some wq has both TASK_INTERRUPTIBLE and TASK_UNINTERRUPTIBLE waiters, abort_exclusive_wait() won't wake an uninterruptible waiter. The main user is __wait_on_bit_lock() and currently it is fine, but only because TASK_KILLABLE includes TASK_UNINTERRUPTIBLE and we do not have lock_page_interruptible() yet. Just use TASK_NORMAL and remove the "mode" arg from abort_exclusive_wait(). Yes, this means that (say) wake_up_interruptible() can wake up a non-interruptible waiter, but I think this is fine. And in fact I think that abort_exclusive_wait() must die, see the next change. Signed-off-by: Oleg Nesterov <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: Al Viro <[email protected]> Cc: Bart Van Assche <[email protected]> Cc: Johannes Weiner <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Mike Galbraith <[email protected]> Cc: Neil Brown <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2016-09-30  sched/fair: Fix fixed point arithmetic width for shares and effective load  (Dietmar Eggemann; 1 file, -2/+2)
Since commit 2159197d6677 ("sched/core: Enable increased load resolution on 64-bit kernels") we now have two different fixed point units for load:
 - 'shares' in calc_cfs_shares() has a 20-bit fixed point unit on 64-bit kernels. Therefore use scale_load() on MIN_SHARES.
 - 'wl' in effective_load() has a 10-bit fixed point unit. Therefore use scale_load_down() on tg->shares, which has a 20-bit fixed point unit on 64-bit kernels.
Signed-off-by: Dietmar Eggemann <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Mike Galbraith <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
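For reference, a hedged sketch of the two unit-conversion helpers involved (names match the kernel's; the redefinition here is for illustration only):

  /* Convert between the user-visible 10-bit weight unit and the
   * 20-bit unit used internally on 64-bit kernels. */
  #define SCHED_FIXEDPOINT_SHIFT    10
  #ifdef CONFIG_64BIT
  # define scale_load(w)        ((w) << SCHED_FIXEDPOINT_SHIFT)
  # define scale_load_down(w)   ((w) >> SCHED_FIXEDPOINT_SHIFT)
  #else
  # define scale_load(w)        (w)
  # define scale_load_down(w)   (w)
  #endif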
2016-09-30  sched/core, x86/topology: Fix NUMA in package topology bug  (Tim Chen; 2 files, -16/+33)
Current code can call set_cpu_sibling_map() and invoke sched_set_topology() more than once (e.g. on CPU hotplug). When this happens after sched_init_smp() has been called, we lose the NUMA topology extension to sched_domain_topology that was set up in sched_init_numa(). This results in an incorrect topology when the sched domain is rebuilt. This patch fixes the bug and issues a warning if we call sched_set_topology() after sched_init_smp(). Signed-off-by: Tim Chen <[email protected]> Signed-off-by: Srinivas Pandruvada <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Mike Galbraith <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Link: http://lkml.kernel.org/r/1474485552-141429-2-git-send-email-srinivas.pandruvada@linux.intel.com Signed-off-by: Ingo Molnar <[email protected]>
2016-09-30  Merge branch 'linus' into sched/core, to pick up fixes  (Ingo Molnar; 155 files, -716/+1245)
Signed-off-by: Ingo Molnar <[email protected]>