path: root/kernel
Age | Commit message | Author | Files, lines changed (-/+)
2014-02-24 | smp: Iterate functions through llist_for_each_entry_safe() | Jan Kara | 1 file, -9/+3
The IPI function llist iteration is open coded. Let's simplify this by using an llist iterator. We also want to keep the iteration safe against possible csd.llist->next value reuse from the IPI handler; at least the block subsystem used to do such things, so let's stay careful and use llist_for_each_entry_safe(). Signed-off-by: Jan Kara <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Christoph Hellwig <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Jens Axboe <[email protected]> Signed-off-by: Frederic Weisbecker <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
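A minimal sketch of the iteration pattern this describes, assuming the pending call_single_data entries are chained through a field named llist as in kernel/smp.c of that era; the unlock/release step is only indicated by a comment:

    #include <linux/llist.h>
    #include <linux/smp.h>

    /* Run every queued IPI callback.  The _safe iterator is used because a
     * callback may immediately re-enqueue the csd, rewriting csd->llist.next. */
    static void run_queued_csds(struct llist_node *entry)
    {
            struct call_single_data *csd, *csd_next;

            llist_for_each_entry_safe(csd, csd_next, entry, llist) {
                    csd->func(csd->info);
                    /* release the csd here so its sender may reuse it */
            }
    }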
2014-02-24 | capability: Use current logging styles | Joe Perches | 1 file, -19/+10
Prefix logging output with "capability: " via pr_fmt. Convert printks to pr_<level>. Use pr_<level>_once instead of guard flags. Coalesce formats. Signed-off-by: Joe Perches <[email protected]> Acked-by: Serge E. Hallyn <[email protected]> Signed-off-by: James Morris <[email protected]>
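Roughly what that conversion looks like; a hedged sketch, not the exact hunks from the patch:

    /* pr_fmt() must be defined before printk.h is pulled in (e.g. via kernel.h). */
    #define pr_fmt(fmt) "capability: " fmt

    #include <linux/kernel.h>
    #include <linux/printk.h>

    static void report_obsolete_capability_use(const char *comm)
    {
            /* pr_warn_once() replaces a printk(KERN_WARNING ...) guarded by a
             * static flag; every message gets the "capability: " prefix. */
            pr_warn_once("warning: '%s' uses deprecated v2 capabilities in a way that may be insecure\n",
                         comm);
    }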
2014-02-23 | Merge branch 'timers-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip | Linus Torvalds | 1 file, -17/+29
Pull timer fix from Thomas Gleixner: "Serialize the registration of a new sched_clock in the currently ARM-only generic sched_clock facility to avoid sched_clock havoc"

* 'timers-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  sched_clock: Prevent callers from seeing half-updated data
2014-02-23 | rcutorture: Add a lock_busted to test the test | Paul E. McKenney | 1 file, -1/+32
This commit adds a maximally broken locking primitive in which lock acquisition and release are both no-ops. Signed-off-by: Paul E. McKenney <[email protected]> Reviewed-by: Josh Triplett <[email protected]>
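A sketch of what such a deliberately broken lock type can look like (function names are illustrative, not necessarily the exact locktorture interface): acquisition and release do nothing, so the torture test's checks should flag it.

    /* Intentionally broken "lock": no mutual exclusion whatsoever. */
    static int torture_lock_busted_write_lock(void)
    {
            return 0;       /* pretend the lock was taken */
    }

    static void torture_lock_busted_write_unlock(void)
    {
            /* nothing to release */
    }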
2014-02-23 | rcutorture: Gracefully handle NULL cleanup hooks | Paul E. McKenney | 1 file, -1/+4
Although most torture tests will have some cleanup hook, it is possible that one might not. This commit therefore enables graceful handling of a NULL cleanup hook during torture-test shutdown. Signed-off-by: Paul E. McKenney <[email protected]> Reviewed-by: Josh Triplett <[email protected]>
2014-02-23 | rcutorture: Add an rcu_busted to test the test | Paul E. McKenney | 1 file, -1/+43
This commit adds a deliberately buggy RCU implementation into rcutorture to allow easy checking that rcutorture correctly flags buggy RCU implementations. Signed-off-by: Paul E. McKenney <[email protected]> Reviewed-by: Josh Triplett <[email protected]>
2014-02-23 | locktorture: Add a lock-torture kernel module | Paul E. McKenney | 2 files, -0/+422
This commit adds the locking counterpart to rcutorture. Signed-off-by: Paul E. McKenney <[email protected]> [ paulmck: Make n_lock_torture_errors and torture_spinlock static as suggested by Fengguang Wu. ] Reviewed-by: Josh Triplett <[email protected]>
2014-02-23 | rcutorture: Stop generic kthreads in torture_cleanup() | Paul E. McKenney | 2 files, -25/+19
The specific torture modules (like rcutorture) need to call torture_cleanup() in any case, so this commit makes torture_cleanup() deal with torture_shutdown_cleanup() and torture_stutter_cleanup() so that the specific modules don't have to deal with these details. Signed-off-by: Paul E. McKenney <[email protected]> Reviewed-by: Josh Triplett <[email protected]>
2014-02-23 | rcutorture: Abstract torture_stop_kthread() | Paul E. McKenney | 2 files, -58/+27
Stopping of kthreads is not RCU-specific, so this commit abstracts out torture_stop_kthread(), saving a few lines of code in the process. Signed-off-by: Paul E. McKenney <[email protected]> Reviewed-by: Josh Triplett <[email protected]>
2014-02-23 | rcutorture: Abstract torture_create_kthread() | Paul E. McKenney | 2 files, -125/+53
Creation of kthreads is not RCU-specific, so this commit abstracts out torture_create_kthread(), saving a few tens of lines of code in the process. This change requires modifying VERBOSE_TOROUT_ERRSTRING() to take a non-const string, so that _torture_create_kthread() can avoid an open-coded substitute. Signed-off-by: Paul E. McKenney <[email protected]> Reviewed-by: Josh Triplett <[email protected]>
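A hedged sketch of the kind of helper-plus-macro pairing described here; the in-tree signature and message strings may differ:

    #include <linux/kthread.h>
    #include <linux/err.h>
    #include <linux/printk.h>

    /* Generic kthread creation: report errors and stash the task pointer. */
    static int _torture_create_kthread(int (*fn)(void *), void *arg, char *name,
                                       char *emsg, struct task_struct **tp)
    {
            long err;

            *tp = kthread_run(fn, arg, "%s", name);
            if (IS_ERR(*tp)) {
                    err = PTR_ERR(*tp);
                    pr_err("%s: error %ld\n", emsg, err);
                    *tp = NULL;
                    return err;
            }
            return 0;
    }

    /* Each torture test then only names the thread function and its task pointer: */
    #define torture_create_kthread(n, arg, tp) \
            _torture_create_kthread(n, (arg), #n, "Failed to create " #n, &(tp))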
2014-02-23 | rcutorture: Fix missing-return bug in rcu_torture_barrier_init() | Paul E. McKenney | 1 file, -0/+1
This commit adds a missing error return to the code path that creates the rcu_torture_barrier() kthread. Signed-off-by: Paul E. McKenney <[email protected]> Reviewed-by: Josh Triplett <[email protected]>
2014-02-23 | rcutorture: Fix rcutorture shutdown races | Paul E. McKenney | 2 files, -32/+33
Not all of the rcutorture kthreads waited for kthread_should_stop() before returning from their top-level functions, and none of them used torture_shutdown_absorb() properly. These problems can result in segfaults and hangs at shutdown time, and some recent changes perturbed timing sufficiently to make them much more probable. This commit therefore creates a torture_kthread_stopping() function that does the proper kthread shutdown dance in one centralized location. Accommodate this grouping by making VERBOSE_TOROUT_STRING() capable of taking a non-const string as its argument, which allows the new torture_kthread_stopping() to pass its "title" argument directly to the updated version of VERBOSE_TOROUT_STRING(). Signed-off-by: Paul E. McKenney <[email protected]>
2014-02-23 | rcutorture: Announce task creation | Paul E. McKenney | 2 files, -0/+8
A few "stealth-start rcutorture kthreads" have accumulated over the years, so this commit adds console-log announcements (but only if the torture tests are running verbose). Signed-off-by: Paul E. McKenney <[email protected]> Reviewed-by: Josh Triplett <[email protected]>
2014-02-23 | rcutorture: Clean up rcu_torture_init() error checking | Paul E. McKenney | 1 file, -21/+10
This commit applies some simple cleanups to rcu_torture_init() error checking. Signed-off-by: Paul E. McKenney <[email protected]> Reviewed-by: Josh Triplett <[email protected]>
2014-02-23 | rcutorture: Abstract torture_shutdown() | Paul E. McKenney | 2 files, -58/+89
Because auto-shutdown of torture testing is not specific to RCU, this commit moves the auto-shutdown function to kernel/torture.c. Signed-off-by: Paul E. McKenney <[email protected]> Reviewed-by: Josh Triplett <[email protected]>
2014-02-23 | rcutorture: Apply ACCESS_ONCE() to racy fullstop accesses | Paul E. McKenney | 1 file, -5/+5
Because the fullstop variable can be accessed while it is being updated, this commit avoids any resulting compiler mischief through use of ACCESS_ONCE() for non-initialization accesses to this shared variable. Signed-off-by: Paul E. McKenney <[email protected]> Reviewed-by: Josh Triplett <[email protected]>
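The pattern in question, sketched with illustrative helper names; the FULLSTOP_* values and ACCESS_ONCE() follow the rcutorture code of that era, but this is not the exact patch:

    #include <linux/compiler.h>
    #include <linux/types.h>

    #define FULLSTOP_DONTSTOP 0     /* normal operation */
    #define FULLSTOP_SHUTDOWN 1     /* system shutdown started */

    /* Shared shutdown state: written from the shutdown/rmmod paths, read
     * concurrently by the torture kthreads. */
    static int fullstop = FULLSTOP_DONTSTOP;

    static bool shutdown_requested(void)
    {
            /* single, untorn load the compiler may not fuse or re-issue */
            return ACCESS_ONCE(fullstop) != FULLSTOP_DONTSTOP;
    }

    static void request_shutdown(void)
    {
            ACCESS_ONCE(fullstop) = FULLSTOP_SHUTDOWN;
    }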
2014-02-23 | rcutorture: Abstract stutter_wait() | Paul E. McKenney | 2 files, -58/+98
Because stuttering the test load (stopping and restarting it) is useful for non-RCU testing, this commit moves the load-stuttering functionality to kernel/torture.c. Signed-off-by: Paul E. McKenney <[email protected]> Reviewed-by: Josh Triplett <[email protected]>
2014-02-23 | rcutorture: Add diagnostic for unscheduled system shutdown | Paul E. McKenney | 1 file, -2/+4
Currently, rcutorture can terminate via rmmod, via self-shutdown, via something else shutting the system down, or of course the usual catastrophic termination. The first two get flagged, so this commit adds a message for the third. For the fourth, your warranty is void as always. Signed-off-by: Paul E. McKenney <[email protected]> Reviewed-by: Josh Triplett <[email protected]>
2014-02-23 | rcutorture: Privatize fullstop | Paul E. McKenney | 2 files, -25/+40
This commit introduces the torture_must_stop() function in order to keep use of the fullstop variable local to kernel/torture.c. There is also a torture_must_stop_irq() counterpart for use from RCU callbacks, timeout handlers, and the like. Signed-off-by: Paul E. McKenney <[email protected]> Reviewed-by: Josh Triplett <[email protected]>
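A plausible shape for that pair, building on the fullstop sketch above (a sketch only; the real code keeps fullstop entirely inside kernel/torture.c):

    #include <linux/kthread.h>

    /* Safe from IRQ/timer context: only looks at the shared fullstop state. */
    bool torture_must_stop_irq(void)
    {
            return ACCESS_ONCE(fullstop) != FULLSTOP_DONTSTOP;
    }

    /* For the torture kthreads themselves, which must also honour kthread_stop(). */
    bool torture_must_stop(void)
    {
            return torture_must_stop_irq() || kthread_should_stop();
    }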
2014-02-23 | rcutorture: Abstract torture_shutdown_notify() | Paul E. McKenney | 2 files, -28/+24
Because handling the race between rmmod and system shutdown is not specific to RCU, this commit abstracts torture_shutdown_notify(), placing this code into kernel/torture.c. This change also allows fullstop_mutex to be private to kernel/torture.c. Signed-off-by: Paul E. McKenney <[email protected]> Reviewed-by: Josh Triplett <[email protected]>
2014-02-23 | rcutorture: Abstract torture-test cleanup | Paul E. McKenney | 2 files, -12/+29
This commit creates a torture_cleanup() that handles the generic cleanup actions local to kernel/torture.c. Signed-off-by: Paul E. McKenney <[email protected]> Reviewed-by: Josh Triplett <[email protected]>
2014-02-23 | rcutorture: Abstract torture-test initialization | Paul E. McKenney | 2 files, -11/+35
This commit creates torture_init_begin() and torture_init_end() functions to abstract locking and allow the torture_type and verbose variables in kernel/torture.o to become static. With a bit more abstraction, fullstop_mutex will also become static. Signed-off-by: Paul E. McKenney <[email protected]> Reviewed-by: Josh Triplett <[email protected]>
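One way such a begin/end pair can be shaped; the parameter list here is a guess, the point being that the mutex, torture_type, and verbose all stay static in kernel/torture.c:

    #include <linux/mutex.h>
    #include <linux/types.h>

    static DEFINE_MUTEX(fullstop_mutex);
    static char *torture_type;
    static bool verbose;

    /* Serialize initialization and record which test is running. */
    void torture_init_begin(char *ttype, bool v)
    {
            mutex_lock(&fullstop_mutex);
            torture_type = ttype;
            verbose = v;
    }

    void torture_init_end(void)
    {
            mutex_unlock(&fullstop_mutex);
    }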
2014-02-23 | rcutorture: Abstract torture_onoff() | Paul E. McKenney | 2 files, -158/+190
Because online/offline torturing is not specific to RCU, this commit abstracts it into the kernel/torture.c module to allow other torture tests to use it. Signed-off-by: Paul E. McKenney <[email protected]> Reviewed-by: Josh Triplett <[email protected]>
2014-02-23 | rcutorture: Abstract torture_shuffle() | Paul E. McKenney | 2 files, -109/+166
The torture_shuffle() function forces each CPU in turn to go idle periodically in order to check for problems interacting with per-CPU variables and with dyntick-idle mode. Because this sort of debugging is not specific to RCU, this commit abstracts that functionality. This in turn requires abstracting some additional infrastructure. Signed-off-by: Paul E. McKenney <[email protected]> Reviewed-by: Josh Triplett <[email protected]>
2014-02-23 | rcutorture: Abstract torture_shutdown_absorb() | Paul E. McKenney | 2 files, -39/+38
Because handling races between rmmod and normal shutdown is not specific to rcutorture, this commit renames rcutorture_shutdown_absorb() to torture_shutdown_absorb() and pulls it out into the kernel/torture.c module. This implies pulling the fullstop mechanism into kernel/torture.c as well. The exporting of fullstop and fullstop_mutex is ugly and must die. And it does in fact die in later commits that introduce higher-level APIs that encapsulate both of these variables. Signed-off-by: Paul E. McKenney <[email protected]> Reviewed-by: Josh Triplett <[email protected]>
2014-02-23 | rcutorture: Abstract TOROUT_STRING() and friends | Paul E. McKenney | 1 file, -8/+0
These diagnostic macros are not confined to torturing RCU, so this commit makes them available to other torture tests. Also removed the do-while from TOROUT_STRING() in response to checkpatch complaints. Signed-off-by: Paul E. McKenney <[email protected]>
2014-02-23 | rcutorture: Rename PRINTK to TOROUT | Paul E. McKenney | 1 file, -67/+67
Since it doesn't do printk()s anymore anyway, this commit renames these macros from PRINTK to TOROUT (short for torture output). Signed-off-by: Paul E. McKenney <[email protected]> Reviewed-by: Josh Triplett <[email protected]>
2014-02-23 | rcutorture: Abstract torture_param() | Paul E. McKenney | 1 file, -69/+34
Create a torture_param() macro and apply it to rcutorture in order to save a few lines of code. This same macro may be applied to other torture frameworks. Signed-off-by: Paul E. McKenney <[email protected]>
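A sketch of what such a macro can look like (the in-tree definition may differ in details such as the permission bits), plus how a test would use it:

    #include <linux/moduleparam.h>

    /* Declare a read-only module parameter plus its description in one line. */
    #define torture_param(type, name, init, msg) \
            static type name = init; \
            module_param(name, type, 0444); \
            MODULE_PARM_DESC(name, msg)

    /* Usage in a torture module, replacing three declarations per parameter: */
    torture_param(int, nreaders, -1, "Number of RCU reader threads");
    torture_param(bool, verbose, false, "Enable verbose debugging printk()s");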
2014-02-23 | rcutorture: Abstract rcu_torture_random() | Paul E. McKenney | 4 files, -46/+95
Because rcu_torture_random() will be used by the locking equivalent to rcutorture, pull it out into its own module. This new module cannot be separately configured, instead, use the Kconfig "select" statement from the Kconfig options of tests depending on it. Suggested-by: Rusty Russell <[email protected]> Signed-off-by: Paul E. McKenney <[email protected]>
2014-02-23 | rcutorture: Fix checkpatch complaint | Paul E. McKenney | 1 file, -4/+4
This commit does a code-style cleanup so that the first curly brace of an initializer does not appear at the beginning of a line. Signed-off-by: Paul E. McKenney <[email protected]>
2014-02-22 | sched, nohz: Exclude isolated cores from load balancing | Mike Galbraith | 1 file, -7/+18
The user explicitly disabled load balancing, else this core would not be disconnected. Don't add these to nohz.idle_cpus_mask. Signed-off-by: Mike Galbraith <[email protected]> Signed-off-by: Peter Zijlstra <[email protected]> Cc: Lei Wen <[email protected]> Link: http://lkml.kernel.org/n/[email protected] Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2014-02-22 | sched: Fix select_task_rq_fair() description comments | Morten Rasmussen | 1 file, -5/+6
Brings select_task_rq_fair() description comments up-to-date. Signed-off-by: Morten Rasmussen <[email protected]> Signed-off-by: Peter Zijlstra <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2014-02-22 | workqueue: Replace hardcoding of -20 and 19 with MIN_NICE and MAX_NICE | Dongsheng Yang | 1 file, -1/+1
Signed-off-by: Dongsheng Yang <[email protected]> Acked-by: Tejun Heo <[email protected]> Signed-off-by: Peter Zijlstra <[email protected]> Link: http://lkml.kernel.org/r/6d85138180c00ce86975addab6e34b24b84f00a5.1392103744.git.yangds.fnst@cn.fujitsu.com Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2014-02-22 | sys: Replace hardcoding of -20 and 19 with MIN_NICE and MAX_NICE | Dongsheng Yang | 1 file, -4/+4
Signed-off-by: Dongsheng Yang <[email protected]> Signed-off-by: Peter Zijlstra <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Oleg Nesterov <[email protected]> Cc: Robin Holt <[email protected]> Cc: Al Viro <[email protected]> Cc: Kees Cook <[email protected]> Cc: "Eric W. Biederman" <[email protected]> Cc: Stephen Rothwell <[email protected]> Link: http://lkml.kernel.org/r/0261f094b836f1acbcdf52e7166487c0c77323c8.1392103744.git.yangds.fnst@cn.fujitsu.com Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
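For illustration, the kind of nice-value clamp this series cleans up; a sketch assuming MIN_NICE (-20) and MAX_NICE (19) live in <linux/sched/prio.h> as in mainline:

    #include <linux/sched/prio.h>   /* MIN_NICE, MAX_NICE */

    static long clamp_nice_value(long nice)
    {
            /* previously open-coded as: if (nice < -20) ... if (nice > 19) ... */
            if (nice < MIN_NICE)
                    nice = MIN_NICE;
            if (nice > MAX_NICE)
                    nice = MAX_NICE;
            return nice;
    }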
2014-02-22 | sched: Replace hardcoding of -20 and 19 with MIN_NICE and MAX_NICE | Dongsheng Yang | 2 files, -7/+7
Signed-off-by: Dongsheng Yang <[email protected]> Signed-off-by: Peter Zijlstra <[email protected]> Link: http://lkml.kernel.org/r/bd80780f19b4f9b4a765acc353c8dbc130274dd6.1392103744.git.yangds.fnst@cn.fujitsu.com Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2014-02-22 | rcu: Use MAX_NICE to replace hardcoding of 19 | Dongsheng Yang | 1 file, -4/+4
Reviewed-by: Josh Triplett <[email protected]> Signed-off-by: Dongsheng Yang <[email protected]> Signed-off-by: Peter Zijlstra <[email protected]> Cc: "Paul E. McKenney" <[email protected]> Link: http://lkml.kernel.org/r/5b3bf232f41b33ab703a1595e94671b303e2d1fc.1392103744.git.yangds.fnst@cn.fujitsu.com Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2014-02-22 | sched/rt: Make init_sched_rt_class() __init | Li Zefan | 1 file, -1/+1
It's a bootstrap function. Signed-off-by: Li Zefan <[email protected]> Signed-off-by: Peter Zijlstra <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2014-02-22 | sched/rt: Remove 'leaf_rt_rq_list' from 'struct rq' | Li Zefan | 2 files, -5/+0
This is a leftover from commit e23ee74777f389369431d77390c4b09332ce026a ("sched/rt: Simplify pull_rt_task() logic and remove .leaf_rt_rq_list"). Signed-off-by: Li Zefan <[email protected]> Signed-off-by: Peter Zijlstra <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2014-02-22 | sched: Consider pi boosting in setscheduler() | Thomas Gleixner | 2 files, -11/+42
If a PI boosted task policy/priority is modified by a setscheduler() call we unconditionally dequeue and requeue the task if it is on the runqueue even if the new priority is lower than the current effective boosted priority. This can result in undesired reordering of the priority bucket list. If the new priority is less or equal than the current effective we just store the new parameters in the task struct and leave the scheduler class and the runqueue untouched. This is handled when the task deboosts itself. Only if the new priority is higher than the effective boosted priority we apply the change immediately. Signed-off-by: Thomas Gleixner <[email protected]> [ Rebase ontop of v3.14-rc1. ] Signed-off-by: Sebastian Andrzej Siewior <[email protected]> Cc: Dario Faggioli <[email protected]> Signed-off-by: Peter Zijlstra <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2014-02-22 | sched: Queue RT tasks to head when prio drops | Thomas Gleixner | 1 file, -2/+7
The following scenario does not work correctly:

Runqueue of CPUx contains two runnable and pinned tasks:
 T1: SCHED_FIFO, prio 80
 T2: SCHED_FIFO, prio 80

T1 is on the cpu and executes the following syscalls (classic priority ceiling scenario):

 sys_sched_setscheduler(pid(T1), SCHED_FIFO, .prio = 90);
 ...
 sys_sched_setscheduler(pid(T1), SCHED_FIFO, .prio = 80);
 ...

Now T1 gets preempted by T3 (SCHED_FIFO, prio 95). After T3 goes back to sleep the scheduler picks T2. Surprise! The same happens w/o actual preemption when T1 is forced into the scheduler due to a sporadic NEED_RESCHED event. The scheduler invokes pick_next_task() which returns T2. So T1 gets preempted and scheduled out.

This happens because sched_setscheduler() dequeues T1 from the prio 90 list and then enqueues it on the tail of the prio 80 list behind T2. This violates the POSIX spec and surprises user space which relies on the guarantee that SCHED_FIFO tasks are not scheduled out unless they give the CPU up voluntarily or are preempted by a higher priority task. In the latter case the preempted task must get back on the CPU after the preempting task schedules out again.

We fixed a similar issue already in commit 60db48c (sched: Queue a deboosted task to the head of the RT prio queue). The same treatment is necessary for sched_setscheduler(). So enqueue to head of the prio bucket list if the priority of the task is lowered.

It might be possible that existing user space relies on the current behaviour, but it can be considered highly unlikely due to the corner case nature of the application scenario. Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Sebastian Andrzej Siewior <[email protected]> Signed-off-by: Peter Zijlstra <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
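A hedged fragment illustrating the shape of the fix (from the middle of a __sched_setscheduler()-style path, so not standalone; the exact condition in the patch may differ slightly):

            /* When putting the task back on the runqueue after a setscheduler()
             * call, insert it at the head of its priority list if its priority
             * dropped, so it keeps the CPU as POSIX requires. */
            if (on_rq)
                    enqueue_task(rq, p, oldprio < p->prio ? ENQUEUE_HEAD : 0);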
2014-02-22 | sched: Adjust p->sched_reset_on_fork when nothing else changes | Thomas Gleixner | 1 file, -1/+3
If the policy and priority remain unchanged a possible modification of p->sched_reset_on_fork gets lost in the early exit path. Signed-off-by: Thomas Gleixner <[email protected]> [ Rebase ontop of v3.14-rc1. ] Signed-off-by: Sebastian Andrzej Siewior <[email protected]> Signed-off-by: Peter Zijlstra <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2014-02-22 | sched: Add better debug output for might_sleep() | Thomas Gleixner | 1 file, -2/+21
might_sleep() can tell us where interrupts have been disabled, but we have no idea what disabled preemption. Add some debug infrastructure. Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Sebastian Andrzej Siewior <[email protected]> Signed-off-by: Peter Zijlstra <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2014-02-22 | sched: Check for idle task in might_sleep() | Thomas Gleixner | 1 file, -1/+2
Idle is not allowed to call sleeping functions ever! Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Sebastian Andrzej Siewior <[email protected]> Signed-off-by: Peter Zijlstra <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2014-02-22 | sched: Init idle->on_rq in init_idle() | Thomas Gleixner | 1 file, -0/+1
We stumbled in RT over a SMP bringup issue on ARM where the idle->on_rq == 0 was causing try_to_wakeup() on the other cpu to run into nada land. After adding that idle->on_rq = 1; I was able to find the root cause of the lockup: the idle task on the newly woken up cpu was fiddling with a sleeping spinlock, which is a nono. I kept the init of idle->on_rq to keep the state consistent and to avoid another long lasting debug session. As a side note, the whole debug mess could have been avoided if might_sleep() would have yelled when called from the idle task. That's fixed with patch 2/6 - and that one actually has a changelog :) Signed-off-by: Thomas Gleixner <[email protected]> Signed-off-by: Sebastian Andrzej Siewior <[email protected]> Signed-off-by: Peter Zijlstra <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2014-02-21 | perf/x86: Warn to early_printk() in case irq_work is too slow | Peter Zijlstra | 2 files, -4/+11
On Mon, Feb 10, 2014 at 08:45:16AM -0800, Dave Hansen wrote: > The reason I coded this up was that NMIs were firing off so fast that > nothing else was getting a chance to run. With this patch, at least the > printk() would come out and I'd have some idea what was going on. It will start spewing to early_printk() (which is a lot nicer to use from NMI context too) when it fails to queue the IRQ-work because its already enqueued. It does have the false-positive for when two CPUs trigger the warn concurrently, but that should be rare and some extra clutter on the early printk shouldn't be a problem. Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: Dave Hansen <[email protected]> Cc: [email protected] Fixes: 6a02ad66b2c4 ("perf/x86: Push the duration-logging printk() to IRQ context") Signed-off-by: Peter Zijlstra <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Thomas Gleixner <[email protected]>
2014-02-21 | sched: Remove some #ifdeffery | Peter Zijlstra | 4 files, -21/+60
Remove a few gratuitous #ifdefs in pick_next_task*(). Cc: Ingo Molnar <[email protected]> Cc: Steven Rostedt <[email protected]> Cc: Juri Lelli <[email protected]> Signed-off-by: Peter Zijlstra <[email protected]> Link: http://lkml.kernel.org/n/[email protected] Signed-off-by: Thomas Gleixner <[email protected]>
2014-02-21 | sched: Fix hotplug task migration | Peter Zijlstra | 7 files, -12/+28
Dan Carpenter reported: > kernel/sched/rt.c:1347 pick_next_task_rt() warn: variable dereferenced before check 'prev' (see line 1338) > kernel/sched/deadline.c:1011 pick_next_task_dl() warn: variable dereferenced before check 'prev' (see line 1005) Kirill also spotted that migrate_tasks() will have an instant NULL deref because pick_next_task() will immediately deref prev. Instead of fixing all the corner cases because migrate_tasks() can pass in a NULL prev task in the unlikely case of hot-un-plug, provide a fake task such that we can remove all the NULL checks from the far more common paths. A further problem; not previously spotted; is that because we pushed pre_schedule() and idle_balance() into pick_next_task() we now need to avoid those getting called and pulling more tasks on our dying CPU. We avoid pull_{dl,rt}_task() by setting fake_task.prio to MAX_PRIO+1. We also note that since we call pick_next_task() exactly the amount of times we have runnable tasks present, we should never land in idle_balance(). Fixes: 38033c37faab ("sched: Push down pre_schedule() and idle_balance()") Cc: Juri Lelli <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Steven Rostedt <[email protected]> Reported-by: Kirill Tkhai <[email protected]> Reported-by: Dan Carpenter <[email protected]> Signed-off-by: Peter Zijlstra <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Thomas Gleixner <[email protected]>
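A sketch of the "fake task" idea described above; the real definition sits next to migrate_tasks() in kernel/sched/core.c and may differ in detail:

    static void put_prev_task_fake(struct rq *rq, struct task_struct *prev)
    {
    }

    static const struct sched_class fake_sched_class = {
            .put_prev_task = put_prev_task_fake,
    };

    /* Handed to pick_next_task() as "prev" during hot-unplug so the common
     * paths never see a NULL prev task. */
    static struct task_struct fake_task = {
            .prio           = MAX_PRIO + 1, /* per the changelog: keeps pull_{rt,dl}_task() out of the way */
            .sched_class    = &fake_sched_class,
    };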
2014-02-21 | sched/fair: Remove idle_balance() declaration in sched.h | Peter Zijlstra | 2 files, -25/+29
Remove idle_balance() from the public life; also reduce some #ifdef clutter by folding the pick_next_task_fair() idle path into idle_balance(). Cc: [email protected] Reported-by: Daniel Lezcano <[email protected]> Signed-off-by: Peter Zijlstra <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Thomas Gleixner <[email protected]>
2014-02-21 | sched/fair: Reset se-depth when task switched to FAIR | Michael wang | 1 file, -1/+9
Sasha reported:

 [ 522.645288] BUG: unable to handle kernel NULL pointer dereference at ...
 [ 522.646271] IP: [<ffffffff81186c6f>] check_preempt_wakeup+0x11f/0x210
 ...
 [ 522.650021] Call Trace:
 [ 522.650021] <IRQ>
 [ 522.650021] [<ffffffff8117361d>] check_preempt_curr+0x3d/0xb0
 [ 522.650021] [<ffffffff81175d88>] ttwu_do_wakeup+0x18/0x130
 ...

which was caused by the se-depth being changed while the task was not FAIR, so the wrong depth value is used after it switches back to FAIR. This patch resets the depth when the task switches to FAIR, making sure we always have the correct value while the task is FAIR. Cc: Ingo Molnar <[email protected]> Reported-by: Sasha Levin <[email protected]> Tested-by: Sasha Levin <[email protected]> Signed-off-by: Michael Wang <[email protected]> Signed-off-by: Peter Zijlstra <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Thomas Gleixner <[email protected]>
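The fix described above amounts to recomputing the entity's depth when the task re-enters the fair class; a sketch, assuming CONFIG_FAIR_GROUP_SCHED-style parent linkage:

    static void switched_to_fair_sketch(struct rq *rq, struct task_struct *p)
    {
            struct sched_entity *se = &p->se;

    #ifdef CONFIG_FAIR_GROUP_SCHED
            /* The depth may have gone stale while the task was RT/DL;
             * re-derive it from the parent before it is used again. */
            se->depth = se->parent ? se->parent->depth + 1 : 0;
    #endif
    }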
2014-02-21 | Merge branch 'linus' into sched/core | Thomas Gleixner | 13 files, -33/+104
Reason: Bring back upstream modifications to resolve conflicts Signed-off-by: Thomas Gleixner <[email protected]>