2014-07-16  ASoC: compress: Prevent multicodec for compressed stream  (Benoit Cousson)  [1 file changed, -0/+5]
Multiple codecs do not make sense in the case of a compressed stream, so exit with an error if that happens. Signed-off-by: Benoit Cousson <[email protected]> Tested-by: Lars-Peter Clausen <[email protected]> Reviewed-by: Lars-Peter Clausen <[email protected]> Acked-by: Vinod Koul <[email protected]> Signed-off-by: Mark Brown <[email protected]>
2014-07-16  ASoC: dapm: Add support for DAI multicodec  (Benoit Cousson)  [1 file changed, -20/+47]
Add multicodec support in soc-dapm.c Signed-off-by: Benoit Cousson <[email protected]> Signed-off-by: Misael Lopez Cruz <[email protected]> Signed-off-by: Fabien Parent <[email protected]> Tested-by: Lars-Peter Clausen <[email protected]> Reviewed-by: Lars-Peter Clausen <[email protected]> Signed-off-by: Mark Brown <[email protected]>
2014-07-16  ASoC: pcm: Add support for DAI multicodec  (Benoit Cousson)  [1 file changed, -166/+368]
Add multicodec support in soc-pcm.c Signed-off-by: Benoit Cousson <[email protected]> Signed-off-by: Misael Lopez Cruz <[email protected]> Signed-off-by: Fabien Parent <[email protected]> Tested-by: Lars-Peter Clausen <[email protected]> Reviewed-by: Lars-Peter Clausen <[email protected]> Signed-off-by: Mark Brown <[email protected]>
2014-07-16  ASoC: core: Add initial support for DAI multicodec  (Benoit Cousson)  [3 files changed, -90/+220]
A DAI link currently assumes a one-to-one mapping between CPU DAI and CODEC. In some cases the same CPU DAI can be connected to several codecs; this is the case, for example, if you connect two mono codecs to the same I2S link in order to build a stereo card. The current ASoC implementation does not allow such a setup. Add support for DAI links composed of a single CPU DAI and multiple CODECs. Sound cards have to pass the CODECs array in the corresponding DAI link through a new 'snd_soc_dai_link_component' struct. Each CODEC in this array is described in the same manner single-CODEC DAIs are (either a DT/OF node or codec_name). Multi-codec links are not supported in the case of CODEC-to-CODEC links; just print a warning if that happens. Based on original code by Misael. Signed-off-by: Benoit Cousson <[email protected]> Signed-off-by: Misael Lopez Cruz <[email protected]> Signed-off-by: Fabien Parent <[email protected]> Tested-by: Lars-Peter Clausen <[email protected]> Reviewed-by: Lars-Peter Clausen <[email protected]> Signed-off-by: Mark Brown <[email protected]>
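For illustration, a rough sketch of how a machine driver might describe such a link with the new snd_soc_dai_link_component array; the codec, DAI and platform names below are invented placeholders, not taken from the commit.

    /*
     * Sketch only: two hypothetical mono codecs wired to one I2S link.
     * All device/DAI names are made up for the example.
     */
    static struct snd_soc_dai_link_component stereo_codecs[] = {
        { .name = "mono-codec.0", .dai_name = "mono-hifi" },   /* left channel  */
        { .name = "mono-codec.1", .dai_name = "mono-hifi" },   /* right channel */
    };

    static struct snd_soc_dai_link stereo_dai_link = {
        .name           = "Stereo",
        .stream_name    = "Playback",
        .cpu_dai_name   = "cpu-i2s.0",          /* single CPU DAI */
        .platform_name  = "cpu-i2s.0",
        .codecs         = stereo_codecs,        /* CODECs array from this series */
        .num_codecs     = ARRAY_SIZE(stereo_codecs),
    };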
2014-07-16  Merge remote-tracking branch 'asoc/topic/component' into asoc-multi  (Mark Brown)  [10 files changed, -402/+424]
2014-07-16  net-gre-gro: Fix a bug that breaks the forwarding path  (Jerry Chu)  [5 files changed, -2/+10]
Fix a bug that was introduced by my GRE-GRO patch (bf5a755f5e9186406bbf50f4087100af5bd68e40 "net-gre-gro: Add GRE support to the GRO stack") and that breaks the forwarding path because various GSO-related fields were not set. On the egress path the bug causes either the GSO code to fail or GRE-TSO capable (NETIF_F_GSO_GRE) NICs to choke. The following fix has been tested for both cases. Signed-off-by: H.K. Jerry Chu <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2014-07-16  Merge tag 'for-linus-20140716' of git://git.infradead.org/linux-mtd  (Linus Torvalds)  [3 files changed, -2/+49]
Pull MTD fixes from Brian Norris:
 - Fix ELM suspend/resume
 - Reduce warnings if NAND ECC is too weak
 - Add CFI support for Sharp LH28F640BF NOR

The last fix is coming in because other commits in the 3.16 cycle depended on this support.

* tag 'for-linus-20140716' of git://git.infradead.org/linux-mtd:
  mtd: cfi_cmdset_0001.c: add support for Sharp LH28F640BF NOR
  mtd: nand: reduce the warning noise when the ECC is too weak
  mtd: devices: elm: fix elm_context_save() and elm_context_restore() functions
2014-07-16  Merge branch 'sched-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds)  [4 files changed, -8/+8]
Pull scheduler fixes from Ingo Molnar: "A cpufreq lockup fix and a compiler warning fix"

* 'sched-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  sched: Fix compiler warnings
  x86, tsc: Fix cpufreq lockup
2014-07-16  Merge branch 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds)  [4 files changed, -38/+48]
Pull perf fixes from Ingo Molnar: "Tooling fixes and an Intel PMU driver fixlet"

* 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  perf: Do not allow optimized switch for non-cloned events
  perf/x86/intel: ignore CondChgd bit to avoid false NMI handling
  perf symbols: Get kernel start address by symbol name
  perf tools: Fix segfault in cumulative.callchain report
2014-07-16  Merge tag 'sound-3.16-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/tiwai/sound  (Linus Torvalds)  [5 files changed, -8/+12]
Pull sound fixes from Takashi Iwai: "Things seem to calm down so far, just a small few HD-audio fixes (regression fixes and a new codec ID addition) popping up"

* tag 'sound-3.16-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/tiwai/sound:
  ALSA: hda - Fix broken PM due to incomplete i915 initialization
  ALSA: hda - Revert stream assignment order for Intel controllers
  ALSA: hda - Add new GPU codec ID 0x10de0070 to snd-hda
  ALSA: hda: Fix build warning
2014-07-16  cpufreq: cpu0: OPPs can be populated at runtime  (Viresh Kumar)  [2 files changed, -7/+6]
OPPs can be populated statically, via DT, or added at run time with dev_pm_opp_add(). While this driver handles the first case correctly, it fails to populate OPPs added at runtime, because the call to of_init_opp_table() fails when there are no OPPs in DT and probe returns early. To fix this, remove the error checking and call dev_pm_opp_init_cpufreq_table() unconditionally, as sketched below. Update the bindings as well. Suggested-by: Stephen Boyd <[email protected]> Tested-by: Stephen Boyd <[email protected]> Signed-off-by: Viresh Kumar <[email protected]> Acked-by: Santosh Shilimkar <[email protected]> Signed-off-by: Rafael J. Wysocki <[email protected]>
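A minimal sketch of the probe flow described above (simplified, error handling and cleanup omitted), assuming the 3.16-era of_init_opp_table()/dev_pm_opp_init_cpufreq_table() API:

    /* Static OPPs may or may not be in DT; a failure here is no longer fatal. */
    ret = of_init_opp_table(cpu_dev);
    if (ret)
        dev_dbg(cpu_dev, "no static OPPs in DT, expecting runtime OPPs\n");

    /* Build the cpufreq table from whatever OPPs exist now (DT or dev_pm_opp_add()). */
    ret = dev_pm_opp_init_cpufreq_table(cpu_dev, &freq_table);
    if (ret) {
        dev_err(cpu_dev, "failed to init cpufreq table: %d\n", ret);
        return ret;
    }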
2014-07-16  cpufreq: kirkwood: Reinstate cpufreq driver for ARCH_KIRKWOOD  (Quentin Armitage)  [1 file changed, -1/+1]
Commit ff1f0018cf66080d8e6f59791e552615648a033a ("drivers: Enable building of Kirkwood drivers for mach-mvebu") added Kirkwood into mach-mvebu, adding MACH_KIRKWOOD to ARCH_KIRKWOOD in the Kconfig files. The change for ARM_KIRKWOOD_CPUFREQ replaced ARCH_KIRKWOOD with MACH_KIRKWOOD, whereas all the other changes were ARCH_KIRKWOOD || MACH_KIRKWOOD. As a consequence of this change, the cpufreq driver is no longer enabled for ARCH_KIRKWOOD. This patch reinstates ARM_KIRKWOOD_CPUFREQ for ARCH_KIRKWOOD. Signed-off-by: Quentin Armitage <[email protected]> Acked-by: Andrew Lunn <[email protected]> Signed-off-by: Rafael J. Wysocki <[email protected]>
2014-07-16  Merge branch 'liblockdep-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/sashal/linux into locking/urgent  (Ingo Molnar)  [3 files changed, -17/+15]
Pull liblockdep fixes from Sasha Levin. Signed-off-by: Ingo Molnar <[email protected]>
2014-07-16  locking/rwsem: Add CONFIG_RWSEM_SPIN_ON_OWNER  (Davidlohr Bueso)  [4 files changed, -5/+11]
Just like with mutexes (CONFIG_MUTEX_SPIN_ON_OWNER), encapsulate the dependencies for rwsem optimistic spinning. No logical changes here as it continues to depend on both SMP and the XADD algorithm variant. Signed-off-by: Davidlohr Bueso <[email protected]> Acked-by: Jason Low <[email protected]> [ Also make it depend on ARCH_SUPPORTS_ATOMIC_RMW. ] Signed-off-by: Peter Zijlstra <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Cc: [email protected] Cc: Chris Mason <[email protected]> Cc: Davidlohr Bueso <[email protected]> Cc: Josef Bacik <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Waiman Long <[email protected]> Signed-off-by: Ingo Molnar <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2014-07-16  locking/mutex: Disable optimistic spinning on some architectures  (Peter Zijlstra)  [6 files changed, -1/+9]
The optimistic spin code assumes regular stores and cmpxchg() play nice; this is found to not be true for at least: parisc, sparc32, tile32, metag-lock1, arc-!llsc and hexagon. There is further wreckage, but this in particular seemed easy to trigger, so blacklist this. Opt in for known good archs. Signed-off-by: Peter Zijlstra <[email protected]> Reported-by: Mikulas Patocka <[email protected]> Cc: David Miller <[email protected]> Cc: Chris Metcalf <[email protected]> Cc: James Bottomley <[email protected]> Cc: Vineet Gupta <[email protected]> Cc: Jason Low <[email protected]> Cc: Waiman Long <[email protected]> Cc: "James E.J. Bottomley" <[email protected]> Cc: Paul McKenney <[email protected]> Cc: John David Anglin <[email protected]> Cc: James Hogan <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Davidlohr Bueso <[email protected]> Cc: [email protected] Cc: Benjamin Herrenschmidt <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Russell King <[email protected]> Cc: Will Deacon <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2014-07-16  locking/rwsem: Reduce the size of struct rw_semaphore  (Jason Low)  [1 file changed, -14/+11]
Recent optimistic spinning additions to rwsem provide significant performance benefits on many workloads on large machines. The cost of it was increasing the size of the rwsem structure by up to 128 bits. However, now that the previous patches in this series bring the overhead of struct optimistic_spin_queue to 32 bits, this patch reorders some fields in struct rw_semaphore such that we can reduce the overhead of the rwsem structure by 64 bits (on 64 bit systems). The extra overhead required for rwsem optimistic spinning would now be up to 8 additional bytes instead of up to 16 bytes. Additionally, the size of rwsem would now be more in line with mutexes. Signed-off-by: Jason Low <[email protected]> Signed-off-by: Peter Zijlstra <[email protected]> Cc: Scott Norton <[email protected]> Cc: "Paul E. McKenney" <[email protected]> Cc: Dave Chinner <[email protected]> Cc: Waiman Long <[email protected]> Cc: Davidlohr Bueso <[email protected]> Cc: Rik van Riel <[email protected]> Cc: Andrew Morton <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Cc: Steven Rostedt <[email protected]> Cc: Tim Chen <[email protected]> Cc: Konrad Rzeszutek Wilk <[email protected]> Cc: Aswin Chandramouleeswaran <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Chris Mason <[email protected]> Cc: Josef Bacik <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2014-07-16  locking/rwsem: Rename 'activity' to 'count'  (Peter Zijlstra)  [2 files changed, -18/+18]
There are two definitions of struct rw_semaphore, one in linux/rwsem.h and one in linux/rwsem-spinlock.h. For some reason they have different names for the initial field. This makes it impossible to use C99 named initialization for __RWSEM_INITIALIZER() -- or we have to duplicate that entire thing along with the structure definitions. The simpler patch is renaming the rwsem-spinlock variant to match the regular rwsem. This allows us to switch to C99 named initialization. Signed-off-by: Peter Zijlstra <[email protected]> Cc: Linus Torvalds <[email protected]> Link: http://lkml.kernel.org/n/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
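With both variants using the same field name, the spinlock-based initializer can be written with C99 designated initializers. A sketch under that assumption (field list abridged; not quoted from the patch):

    /* Sketch of the rwsem-spinlock initializer once the field is named 'count'. */
    #define __RWSEM_INITIALIZER(name)                                       \
    {                                                                       \
        .count     = 0,                                                     \
        .wait_lock = __RAW_SPIN_LOCK_UNLOCKED((name).wait_lock),            \
        .wait_list = LIST_HEAD_INIT((name).wait_list),                      \
    }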
2014-07-16  cpufreq: imx6q: Select PM_OPP  (Nicolas Del Piano)  [1 file changed, -0/+1]
PM_OPP is a library used by several of the existing cpufreq drivers. The ARM IMX6Q cpufreq driver uses this library for its functionality, so it should be selected in Kconfig. Reported-by: Ezequiel Garcia <[email protected]> Signed-off-by: Nicolas Del Piano <[email protected]> Acked-by: Viresh Kumar <[email protected]> Signed-off-by: Rafael J. Wysocki <[email protected]>
2014-07-16  cpufreq: sa1110: set memory type for h3600  (Linus Walleij)  [1 file changed, -1/+1]
The Compaq iPAQ h3600 also has the K4S281632b-1H memory type. Verified by prying apart a broken board. Signed-off-by: Linus Walleij <[email protected]> Acked-by: Viresh Kumar <[email protected]> Signed-off-by: Rafael J. Wysocki <[email protected]>
2014-07-16  kprobes/x86: Don't try to resolve kprobe faults from userspace  (Andy Lutomirski)  [1 file changed, -0/+3]
This commit:

    commit 6f6343f53d133bae516caf3d254bce37d8774625
    Author: Masami Hiramatsu <[email protected]>
    Date:   Thu Apr 17 17:17:33 2014 +0900

        kprobes/x86: Call exception handlers directly from do_int3/do_debug

appears to have inadvertently dropped a check that the int3 came from kernel mode. Trying to dereference addr when addr is user-controlled is completely bogus. Signed-off-by: Andy Lutomirski <[email protected]> Acked-by: Masami Hiramatsu <[email protected]> Link: http://lkml.kernel.org/r/c4e339882c121aa76254f2adde3fcbdf502faec2.1405099506.git.luto@amacapital.net Signed-off-by: Ingo Molnar <[email protected]>
2014-07-16  ACPI / video: Add use_native_backlight quirk for HP ProBook 4540s  (Hans de Goede)  [1 file changed, -0/+8]
As reported here: https://bugzilla.redhat.com/show_bug.cgi?id=1025690 This is yet another model which needs this quirk. Link: https://bugzilla.redhat.com/show_bug.cgi?id=1025690 Signed-off-by: Hans de Goede <[email protected]> Signed-off-by: Rafael J. Wysocki <[email protected]>
2014-07-16  sched: Fix possible divide by zero in avg_atom() calculation  (Mateusz Guzik)  [1 file changed, -1/+1]
proc_sched_show_task() does:

    if (nr_switches)
            do_div(avg_atom, nr_switches);

nr_switches is unsigned long and do_div truncates it to 32 bits, which means it can test non-zero on e.g. x86-64 and then be truncated to zero for the division. Fix the problem by using div64_ul() instead. As a side effect, calculations of avg_atom for big nr_switches are now correct. Signed-off-by: Mateusz Guzik <[email protected]> Signed-off-by: Peter Zijlstra <[email protected]> Cc: [email protected] Cc: Linus Torvalds <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
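For contrast, the before/after of the helper choice, sketched with placeholder variable names:

    /* Sketch of the problem and fix; the variables are placeholders. */
    u64 avg_atom = total_ns;                  /* 64-bit dividend */
    unsigned long nr_switches = switch_count; /* can exceed 32 bits on 64-bit kernels */

    /* Before: do_div() takes a 32-bit divisor, so nr_switches is truncated
     * and may become 0 even though the guard saw it as non-zero. */
    if (nr_switches)
        do_div(avg_atom, nr_switches);

    /* After: div64_ul() keeps the full unsigned long divisor. */
    if (nr_switches)
        avg_atom = div64_ul(avg_atom, nr_switches);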
2014-07-16  perf/x86/intel: Avoid spamming kernel log for BTS buffer failure  (David Rientjes)  [1 file changed, -2/+4]
It's unnecessary to excessively spam the kernel log anytime the BTS buffer cannot be allocated, so make this allocation __GFP_NOWARN. The user will probably still want at least some artifact showing that the allocation has failed in the past, likely due to fragmentation because of its large size, when it's not allocated at bootstrap. Thus, add a WARN_ONCE() so something is left behind for them to understand why perf commands that require PEBS are not working properly. Signed-off-by: David Rientjes <[email protected]> Signed-off-by: Peter Zijlstra <[email protected]> Cc: Arnaldo Carvalho de Melo <[email protected]> Cc: Linus Torvalds <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
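The shape of the change, as a hedged sketch (the size constant and message text are illustrative, not copied from the patch):

    /* Allocate quietly; fragmentation failures are expected after boot. */
    buffer = kzalloc_node(BTS_BUFFER_SIZE, GFP_KERNEL | __GFP_NOWARN, node);
    if (unlikely(!buffer)) {
        /* Leave exactly one breadcrumb so failing PEBS/BTS setups can be diagnosed. */
        WARN_ONCE(1, "%s: BTS buffer allocation failure\n", __func__);
        return -ENOMEM;
    }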
2014-07-16  locking/spinlocks/mcs: Micro-optimize osq_unlock()  (Jason Low)  [1 file changed, -2/+2]
In the unlock function of the cancellable MCS spinlock, the first thing we do is retrieve the current CPU's osq node. However, due to the changes made in the previous patch, in the common case where the lock is not contended, we no longer need to access the current CPU's osq node. This patch optimizes this by only retrieving this CPU's osq node after we attempt the initial cmpxchg to unlock the osq and find that it's contended. Signed-off-by: Jason Low <[email protected]> Signed-off-by: Peter Zijlstra <[email protected]> Cc: Scott Norton <[email protected]> Cc: "Paul E. McKenney" <[email protected]> Cc: Dave Chinner <[email protected]> Cc: Waiman Long <[email protected]> Cc: Davidlohr Bueso <[email protected]> Cc: Rik van Riel <[email protected]> Cc: Andrew Morton <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Cc: Steven Rostedt <[email protected]> Cc: Tim Chen <[email protected]> Cc: Konrad Rzeszutek Wilk <[email protected]> Cc: Aswin Chandramouleeswaran <[email protected]> Cc: Linus Torvalds <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
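Roughly, the uncontended path now avoids touching the per-CPU node at all. A simplified sketch based on the changelog (helper names as used elsewhere in this series; the contended slow-path handoff is omitted):

    void osq_unlock(struct optimistic_spin_queue *lock)
    {
        struct optimistic_spin_node *node, *next;
        int curr = encode_cpu(smp_processor_id());

        /* Fast path: nobody queued behind us, no need for our osq node. */
        if (likely(atomic_cmpxchg(&lock->tail, curr, OSQ_UNLOCKED_VAL) == curr))
            return;

        /* Contended: only now fetch this CPU's node and wake the successor. */
        node = this_cpu_ptr(&osq_node);
        next = xchg(&node->next, NULL);
        if (next)
            ACCESS_ONCE(next->locked) = 1;
    }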
2014-07-16  locking/spinlocks/mcs: Introduce and use init macro and function for osq locks  (Jason Low)  [4 files changed, -3/+11]
Currently, we initialize the osq lock by directly setting the lock's values. It would be preferable to use an init macro to do the initialization, like we do with other locks. This patch introduces and uses a macro and function for initializing the osq lock. Signed-off-by: Jason Low <[email protected]> Signed-off-by: Peter Zijlstra <[email protected]> Cc: Scott Norton <[email protected]> Cc: "Paul E. McKenney" <[email protected]> Cc: Dave Chinner <[email protected]> Cc: Waiman Long <[email protected]> Cc: Davidlohr Bueso <[email protected]> Cc: Rik van Riel <[email protected]> Cc: Andrew Morton <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Cc: Steven Rostedt <[email protected]> Cc: Tim Chen <[email protected]> Cc: Konrad Rzeszutek Wilk <[email protected]> Cc: Aswin Chandramouleeswaran <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Chris Mason <[email protected]> Cc: Josef Bacik <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
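The init helpers look roughly like this (a sketch that assumes the atomic_t-based lock introduced by the "Convert osq lock to atomic_t" patch below):

    #define OSQ_UNLOCKED_VAL    (0)
    #define OSQ_LOCK_UNLOCKED   { ATOMIC_INIT(OSQ_UNLOCKED_VAL) }

    static inline void osq_lock_init(struct optimistic_spin_queue *lock)
    {
        atomic_set(&lock->tail, OSQ_UNLOCKED_VAL);
    }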
2014-07-16  locking/spinlocks/mcs: Convert osq lock to atomic_t to reduce overhead  (Jason Low)  [7 files changed, -17/+68]
The cancellable MCS spinlock is currently used to queue threads that are doing optimistic spinning. It uses per-cpu nodes, where a thread obtaining the lock accesses and queues the local node corresponding to the CPU that it's running on. Currently, the cancellable MCS lock is implemented using pointers to these nodes. In this patch, instead of operating on pointers to the per-cpu nodes, we store in an atomic_t the CPU number that the per-cpu node corresponds to. A similar concept is used with the qspinlock. By operating on the CPU # of the nodes using atomic_t instead of pointers to those nodes, we reduce the overhead of the cancellable MCS spinlock by 32 bits (on 64-bit systems). Signed-off-by: Jason Low <[email protected]> Signed-off-by: Peter Zijlstra <[email protected]> Cc: Scott Norton <[email protected]> Cc: "Paul E. McKenney" <[email protected]> Cc: Dave Chinner <[email protected]> Cc: Waiman Long <[email protected]> Cc: Davidlohr Bueso <[email protected]> Cc: Rik van Riel <[email protected]> Cc: Andrew Morton <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Cc: Steven Rostedt <[email protected]> Cc: Tim Chen <[email protected]> Cc: Konrad Rzeszutek Wilk <[email protected]> Cc: Aswin Chandramouleeswaran <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Chris Mason <[email protected]> Cc: Heiko Carstens <[email protected]> Cc: Josef Bacik <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
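Sketched, the handle becomes a single atomic_t holding an encoded CPU number (0 meaning unlocked), decoded back to the per-CPU node on demand; a simplified illustration, not the full patch:

    struct optimistic_spin_queue {
        /* 0 = unlocked; otherwise the encoded CPU number of the queue tail. */
        atomic_t tail;
    };

    static inline int encode_cpu(int cpu_nr)
    {
        return cpu_nr + 1;      /* reserve 0 for "no CPU" */
    }

    static inline struct optimistic_spin_node *decode_cpu(int encoded_cpu_val)
    {
        return per_cpu_ptr(&osq_node, encoded_cpu_val - 1);
    }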
2014-07-16  locking/spinlocks/mcs: Rename optimistic_spin_queue() to optimistic_spin_node()  (Jason Low)  [4 files changed, -20/+20]
Currently, the per-cpu nodes structure for the cancellable MCS spinlock is named "optimistic_spin_queue". However, a follow-up patch in the series will introduce a new structure that serves as the new "handle" for the lock, and it would make more sense if that structure is named "optimistic_spin_queue". Additionally, since the current uses of the "optimistic_spin_queue" structure are as "nodes", it is better to rename them to "node" anyway. This preparatory patch renames all current "optimistic_spin_queue" to "optimistic_spin_node". Signed-off-by: Jason Low <[email protected]> Signed-off-by: Peter Zijlstra <[email protected]> Cc: Scott Norton <[email protected]> Cc: "Paul E. McKenney" <[email protected]> Cc: Dave Chinner <[email protected]> Cc: Waiman Long <[email protected]> Cc: Davidlohr Bueso <[email protected]> Cc: Rik van Riel <[email protected]> Cc: Andrew Morton <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Cc: Steven Rostedt <[email protected]> Cc: Tim Chen <[email protected]> Cc: Konrad Rzeszutek Wilk <[email protected]> Cc: Aswin Chandramouleeswaran <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Chris Mason <[email protected]> Cc: Heiko Carstens <[email protected]> Cc: Josef Bacik <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2014-07-16  locking/rwsem: Allow conservative optimistic spinning when readers have lock  (Jason Low)  [1 file changed, -5/+5]
Commit 4fc828e24cd9 ("locking/rwsem: Support optimistic spinning") introduced a major performance regression for workloads such as xfs_repair which mix read and write locking of the mmap_sem across many threads. The result was that xfs_repair ran 5x slower on 3.16-rc2 than on 3.15 and used 20x more system CPU time. Perf profiles indicate that in some workloads significant time can be spent spinning on !owner. This is because we don't set the lock owner when reader(s) obtain the rwsem. In this patch, we modify rwsem_can_spin_on_owner() so that it returns false if there is no lock owner. The rationale is that if we just entered the slowpath, yet there is no lock owner, then there is a possibility that a reader has the lock. To be conservative, we avoid spinning in these situations. This patch reduced the total run time of the xfs_repair workload from about 4 minutes 24 seconds down to approximately 1 minute 26 seconds, back to close to the same performance as on 3.15. Retesting of AIM7, which included some of the workloads used to test the original optimistic spinning code, confirmed that we still get big performance gains with optimistic spinning, even with this additional regression fix. Davidlohr found that while the 'custom' workload took a performance hit of ~-14% to throughput for >300 users with this additional patch, the overall gain with optimistic spinning is still ~+45%. The 'disk' workload even improved by ~+15% at >1000 users. Tested-by: Dave Chinner <[email protected]> Acked-by: Davidlohr Bueso <[email protected]> Signed-off-by: Jason Low <[email protected]> Signed-off-by: Peter Zijlstra <[email protected]> Cc: Tim Chen <[email protected]> Cc: Linus Torvalds <[email protected]> Link: http://lkml.kernel.org/r/1404532172.2572.30.camel@j-VirtualBox Signed-off-by: Ingo Molnar <[email protected]>
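The conservative check described above, as a hedged sketch (simplified from the changelog, not the exact kernel code):

    static bool rwsem_can_spin_on_owner(struct rw_semaphore *sem)
    {
        struct task_struct *owner;
        bool on_cpu = false;    /* default: a reader may hold the lock, don't spin */

        rcu_read_lock();
        owner = ACCESS_ONCE(sem->owner);
        if (owner)
            on_cpu = owner->on_cpu;
        rcu_read_unlock();

        /* No recorded owner: possibly reader-owned, so be conservative. */
        return on_cpu;
    }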
2014-07-16  perf/x86/intel: Protect LBR and extra_regs against KVM lying  (Kan Liang)  [3 files changed, -6/+75]
With -cpu host, KVM reports LBR and extra_regs support if the host has support. When the guest perf driver tries to access LBR or extra_regs MSRs, it #GPs on all MSR accesses, since KVM doesn't actually handle LBR and extra_regs. So check access rights for the related MSRs once at initialization time to avoid erroneous accesses at runtime. To reproduce the issue, build the kernel with CONFIG_KVM_INTEL=y (for the host kernel), and CONFIG_PARAVIRT=n and CONFIG_KVM_GUEST=n (for the guest kernel). Start the guest with -cpu host. Run perf record with --branch-any or --branch-filter in the guest to trigger the LBR #GP. Run perf stat with offcore events (e.g. LLC-loads/LLC-load-misses ...) in the guest to trigger the offcore_rsp #GP. Signed-off-by: Kan Liang <[email protected]> Signed-off-by: Peter Zijlstra <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Arnaldo Carvalho de Melo <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Maria Dimakopoulou <[email protected]> Cc: Mark Davies <[email protected]> Cc: Paul Mackerras <[email protected]> Cc: Stephane Eranian <[email protected]> Cc: Yan, Zheng <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
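The initialization-time probe can be sketched as a write/read-back test using the fault-tolerant MSR accessors. A simplified version; the helper name and exact logic are an assumption based on the changelog, and the real patch applies this to the LBR and offcore/extra_regs MSRs:

    /* Returns true if the MSR can really be toggled, false if KVM only claims so. */
    static bool check_msr(unsigned long msr, u64 mask)
    {
        u64 val_old, val_new, val_tmp;

        if (rdmsrl_safe(msr, &val_old))
            return false;

        val_tmp = val_old ^ mask;
        if (wrmsrl_safe(msr, val_tmp) || rdmsrl_safe(msr, &val_new))
            return false;

        if (val_new != val_tmp)
            return false;

        wrmsrl(msr, val_old);   /* restore the original value */
        return true;
    }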
2014-07-16  perf: Fix lockdep warning on process exit  (Peter Zijlstra)  [1 file changed, -1/+17]
Sasha Levin reported:

> While fuzzing with trinity inside a KVM tools guest running the latest -next
> kernel I've stumbled on the following spew:
>
> ======================================================
> [ INFO: possible circular locking dependency detected ]
> 3.15.0-next-20140613-sasha-00026-g6dd125d-dirty #654 Not tainted
> -------------------------------------------------------
> trinity-c578/9725 is trying to acquire lock:
> (&(&pool->lock)->rlock){-.-...}, at: __queue_work (kernel/workqueue.c:1346)
>
> but task is already holding lock:
> (&ctx->lock){-.....}, at: perf_event_exit_task (kernel/events/core.c:7471 kernel/events/core.c:7533)
>
> which lock already depends on the new lock.
> 1 lock held by trinity-c578/9725:
> #0: (&ctx->lock){-.....}, at: perf_event_exit_task (kernel/events/core.c:7471 kernel/events/core.c:7533)
>
> Call Trace:
> dump_stack (lib/dump_stack.c:52)
> print_circular_bug (kernel/locking/lockdep.c:1216)
> __lock_acquire (kernel/locking/lockdep.c:1840 kernel/locking/lockdep.c:1945 kernel/locking/lockdep.c:2131 kernel/locking/lockdep.c:3182)
> lock_acquire (./arch/x86/include/asm/current.h:14 kernel/locking/lockdep.c:3602)
> _raw_spin_lock (include/linux/spinlock_api_smp.h:143 kernel/locking/spinlock.c:151)
> __queue_work (kernel/workqueue.c:1346)
> queue_work_on (kernel/workqueue.c:1424)
> free_object (lib/debugobjects.c:209)
> __debug_check_no_obj_freed (lib/debugobjects.c:715)
> debug_check_no_obj_freed (lib/debugobjects.c:727)
> kmem_cache_free (mm/slub.c:2683 mm/slub.c:2711)
> free_task (kernel/fork.c:221)
> __put_task_struct (kernel/fork.c:250)
> put_ctx (include/linux/sched.h:1855 kernel/events/core.c:898)
> perf_event_exit_task (kernel/events/core.c:907 kernel/events/core.c:7478 kernel/events/core.c:7533)
> do_exit (kernel/exit.c:766)
> do_group_exit (kernel/exit.c:884)
> get_signal_to_deliver (kernel/signal.c:2347)
> do_signal (arch/x86/kernel/signal.c:698)
> do_notify_resume (arch/x86/kernel/signal.c:751)
> int_signal (arch/x86/kernel/entry_64.S:600)

Urgh.. so the only way I can make that happen is through:

    perf_event_exit_task_context()
      raw_spin_lock(&child_ctx->lock);
      unclone_ctx(child_ctx)
        put_ctx(ctx->parent_ctx);
      raw_spin_unlock_irqrestore(&child_ctx->lock);

And we can avoid this by doing the change below. I can't immediately see how this changed recently, but given that you say it's easy to reproduce, let's fix this. Reported-by: Sasha Levin <[email protected]> Signed-off-by: Peter Zijlstra <[email protected]> Cc: Tejun Heo <[email protected]> Cc: Dave Jones <[email protected]> Cc: Arnaldo Carvalho de Melo <[email protected]> Cc: Linus Torvalds <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2014-07-16  perf/x86/intel/uncore: Fix SNB-EP/IVT Cbox filter mappings  (Stephane Eranian)  [1 file changed, -5/+6]
This patch fixes the SNB-EP and IVT Cbox filter mapping table. The table controls which filters are supported by which events. There were several mistakes in those tables causing some filters to be ignored, such as NID on TOR_INSERTS. Signed-off-by: Stephane Eranian <[email protected]> Signed-off-by: Peter Zijlstra <[email protected]> Cc: [email protected] Cc: Arnaldo Carvalho de Melo <[email protected]> Cc: Linus Torvalds <[email protected]> Link: http://lkml.kernel.org/r/20140630144624.GA2604@quad Signed-off-by: Ingo Molnar <[email protected]>
2014-07-16  perf/x86/intel: Use proper dTLB-load-misses event on IvyBridge  (Vince Weaver)  [1 file changed, -0/+3]
This was discussed back in February: https://lkml.org/lkml/2014/2/18/956 But I never saw a patch come out of it. On IvyBridge we share the SandyBridge cache event tables, but the dTLB-load-misses event is not compatible. Patch it up after the fact to use the proper DTLB_LOAD_MISSES.DEMAND_LD_MISS_CAUSES_A_WALK event. Signed-off-by: Vince Weaver <[email protected]> Signed-off-by: Peter Zijlstra <[email protected]> Cc: Arnaldo Carvalho de Melo <[email protected]> Cc: Linus Torvalds <[email protected]> Link: http://lkml.kernel.org/r/alpine.DEB.2.11.1407141528200.17214@vincent-weaver-1.umelst.maine.edu Signed-off-by: Ingo Molnar <[email protected]>
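In effect the IvyBridge setup keeps sharing the SNB tables and overrides the single incompatible entry; a sketch, where the 0x8108 encoding (event 0x08, umask 0x81) is inferred from the event name above and should be treated as illustrative:

    /* Reuse the SandyBridge cache event tables ... */
    memcpy(hw_cache_event_ids, snb_hw_cache_event_ids,
           sizeof(hw_cache_event_ids));

    /* ... then fix up dTLB-load-misses to
     * DTLB_LOAD_MISSES.DEMAND_LD_MISS_CAUSES_A_WALK (event 0x08, umask 0x81). */
    hw_cache_event_ids[C(DTLB)][C(OP_READ)][C(RESULT_MISS)] = 0x8108;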
2014-07-16  perf: Revert ("perf: Always destroy groups on exit")  (Peter Zijlstra)  [1 file changed, -1/+13]
Vince reported that commit 15a2d4de0eab5 ("perf: Always destroy groups on exit") causes a regression with grouped events. In particular his read_group_attached.c test fails. https://github.com/deater/perf_event_tests/blob/master/tests/bugs/read_group_attached.c Because of the context switch optimization in perf_event_context_sched_out(), the 'original' event may end up in the child process and, when that exits, the change in the patch in question destroys the actual grouping. Therefore revert that change and only destroy inherited groups. Reported-by: Vince Weaver <[email protected]> Signed-off-by: Peter Zijlstra <[email protected]> Cc: Arnaldo Carvalho de Melo <[email protected]> Cc: Linus Torvalds <[email protected]> Link: http://lkml.kernel.org/n/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2014-07-16  x86: Remove unused variable "polling"  (Paul Bolle)  [1 file changed, -1/+0]
Compile tested. "polling" is unused since commit f80c5b39b80a ("sched/idle, x86: Switch from TS_POLLING to TIF_POLLING_NRFLAG"). Signed-off-by: Paul Bolle <[email protected]> Signed-off-by: Peter Zijlstra <[email protected]> Cc: Jiri Kosina <[email protected]> Link: http://lkml.kernel.org/r/1404138749.2978.6.camel@x41 Signed-off-by: Ingo Molnar <[email protected]>
2014-07-16  s390: fix restore of invalid floating-point-control  (Martin Schwidefsky)  [1 file changed, -2/+2]
The fixup of the inline assembly to restore the floating-point-control register needs to check for an instruction address *after* the lfpc instruction, as the specification and data exceptions are suppressing. Signed-off-by: Martin Schwidefsky <[email protected]>
2014-07-16  s390/zcrypt: improve device probing for zcrypt adapter cards  (Ingo Tuchscherer)  [1 file changed, -2/+7]
Improve the device probing process for zcrypt adapters so that a service request is transmitted during the registration process. Signed-off-by: Ingo Tuchscherer <[email protected]> Signed-off-by: Martin Schwidefsky <[email protected]>
2014-07-16  s390/ptrace: fix PSW mask check  (Martin Schwidefsky)  [1 file changed, -2/+10]
The PSW mask check of the PTRACE_POKEUSR_AREA command is incorrect. The PSW_MASK_USER define contains the PSW_MASK_ASC bits, the ptrace interface accepts all combinations for the address-space-control bits. To protect the kernel space the PSW mask check in ptrace needs to reject the address-space-control bit combination for home space. Fixes CVE-2014-3534 Cc: [email protected] Signed-off-by: Martin Schwidefsky <[email protected]>
2014-07-16  s390/MSI: Use standard mask and unmask functions  (Yijing Wang)  [1 file changed, -43/+6]
The MSI irqchip in s390 has its own functions to mask and unmask MSI irqs, zpci_enable_irq() and zpci_disable_irq(). They mask and unmask MSI irqs in the standard way; nothing about them is architecture-specific. The MSI driver provides two global standard functions, mask_msi_irq() and unmask_msi_irq(). The local zpci_enable_irq() and zpci_disable_irq() are almost the same as the standard two; the difference is that the local mask/unmask functions read the mask status from hardware every time before masking and unmasking, then change the value and write it back, while the standard functions save the mask status after masking and unmasking the MSI irq and use the cached status to change the mask status. When we mask or unmask an MSI irq, we always cache its mask status unless we know it need not be cached, as in pci_msi_shutdown(). So it is safe to replace the local functions with the standard ones. Signed-off-by: Yijing Wang <[email protected]> [sebott: fixed inverted function pointers] Signed-off-by: Sebastian Ott <[email protected]> Signed-off-by: Martin Schwidefsky <[email protected]>
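Conceptually the arch irq_chip just points at the generic MSI helpers instead of the local wrappers; a hedged sketch, where the chip name and field layout are assumptions rather than quotes from the patch:

    static struct irq_chip zpci_irq_chip = {
        .name       = "zPCI-MSI",
        /* Use the generic cached-mask helpers instead of local read/modify/write. */
        .irq_mask   = mask_msi_irq,
        .irq_unmask = unmask_msi_irq,
    };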
2014-07-16  s390/3270: correct size detection with the read-partition command  (Martin Schwidefsky)  [1 file changed, -1/+0]
The size detection for 3270 terminals with the read-partition command is broken. The raw3270_reset_device_cb function clears the init_data array, but if raw3270_writesf_readpart has been called the read-partition command is queued which needs the init_data array. In this case the size detection will fail and the invalid command does funny things to the terminal. Signed-off-by: Martin Schwidefsky <[email protected]>
2014-07-16  s390: require mvcos facility, not tod clock steering facility  (David Hildenbrand)  [1 file changed, -3/+3]
Inlined uaccess functions require the mvcos facility (bit 27), not the tod clock steering facility (bit 28) for z10 and newer machines. Signed-off-by: David Hildenbrand <[email protected]> Signed-off-by: Heiko Carstens <[email protected]> Signed-off-by: Martin Schwidefsky <[email protected]>
2014-07-16  UBI: fastmap: do not miss bit-flips  (Brian Norris)  [1 file changed, -1/+1]
The return value from 'ubi_io_read_ec_hdr()' was stored in 'err', not in 'ret'. This fix makes sure Fastmap-enabled UBI does not miss bit-flip events while reading EC headers, and scrubs the affected PEBs. This issue was reported by Coverity Scan. Artem: improved the commit message. Signed-off-by: Brian Norris <[email protected]> Acked-by: Richard Weinberger <[email protected]> Signed-off-by: Artem Bityutskiy <[email protected]>
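The class of bug is a copy-paste of the wrong variable name. A hedged sketch of the corrected flow (surrounding code abridged; variable names illustrative):

    ret = ubi_io_read_ec_hdr(ubi, pnum, ech, 0);
    if (ret && ret != UBI_IO_BITFLIPS)
        goto fail;

    /* Previously the result landed in 'err' while 'ret' was tested here,
     * so the bit-flip indication was never seen. */
    if (ret == UBI_IO_BITFLIPS)
        scrub = 1;      /* schedule the PEB for scrubbing */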
2014-07-15  Merge tag 'xtensa-for-next-20140715' of git://github.com/jcmvbkbc/linux-xtensa into for_next  (Chris Zankel)  [3 files changed, -25/+139]
Xtensa fixes for 3.16:
 - resolve FIXMEs in double exception handler for window overflow. This fix makes native building of linux on xtensa host possible;
 - fix sysmem region removal issue introduced in 3.15.
2014-07-15  Merge branch 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jack/linux-fs  (Linus Torvalds)  [1 file changed, -0/+2]
Pull quota fix from Jan Kara: "Fix locking of dquot shrinker"

* 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jack/linux-fs:
  quota: missing lock in dqcache_shrink_scan()
2014-07-15  Merge tag 'gpio-v3.16-3' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-gpio  (Linus Torvalds)  [1 file changed, -6/+0]
Pull GPIO fix from Linus Walleij: "Fix up some merge confusion from the merge window"

* tag 'gpio-v3.16-3' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-gpio:
  gpio: mcp23s08: Eliminates redundant checking.
2014-07-16  ipvs: avoid netns exit crash on ip_vs_conn_drop_conntrack  (Julian Anastasov)  [1 file changed, -1/+0]
Commit 8f4e0a18682d91 ("IPVS netns exit causes crash in conntrack") added a second ip_vs_conn_drop_conntrack call instead of just adding the needed check. As a result, the first call can still cause a crash on netns exit. Remove it. Signed-off-by: Julian Anastasov <[email protected]> Signed-off-by: Hans Schillstrom <[email protected]> Signed-off-by: Simon Horman <[email protected]>
2014-07-15  ring-buffer: Fix polling on trace_pipe  (Martin Lau)  [1 file changed, -4/+0]
ring_buffer_poll_wait() should always add the poll_table to its wait_queue even if there is immediate data available. Otherwise, the following epoll and read sequence will eventually hang forever:

1. Put some data to make the trace_pipe ring_buffer read-ready first
2. epoll_ctl(efd, EPOLL_CTL_ADD, trace_pipe_fd, ee)
3. epoll_wait()
4. read(trace_pipe_fd) till EAGAIN
5. Add some more data to the trace_pipe ring_buffer
6. epoll_wait() -> this epoll_wait() will block forever

~ During the epoll_ctl(efd, EPOLL_CTL_ADD, ...) call in step 2, ring_buffer_poll_wait() returns immediately without adding the poll_table, which has poll_table->_qproc pointing to ep_poll_callback(), to its wait_queue.
~ During the epoll_wait() calls in step 3 and step 6, ring_buffer_poll_wait() cannot add ep_poll_callback() to its wait_queue because the poll_table->_qproc is NULL; that is how epoll works.
~ When there is new data available in step 6, the ring_buffer does not know it has to call ep_poll_callback() because it is not in its wait queue. Hence, it blocks forever.

Other poll implementations seem to call poll_wait() unconditionally as the very first thing to do. For example, tcp_poll() in tcp.c. Link: http://lkml.kernel.org/p/[email protected] Cc: [email protected] # 2.6.27 Fixes: 2a2cc8f7c4d0 "ftrace: allow the event pipe to be polled" Reviewed-by: Chris Mason <[email protected]> Signed-off-by: Martin Lau <[email protected]> Signed-off-by: Steven Rostedt <[email protected]>
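Sketch of the fixed shape: register on the wait queue first, unconditionally, and only then report readiness. The function body is heavily abridged and the rb_get_irq_work() helper is a hypothetical placeholder:

    int ring_buffer_poll_wait(struct ring_buffer *buffer, int cpu,
                              struct file *filp, poll_table *poll_table)
    {
        struct rb_irq_work *work = rb_get_irq_work(buffer, cpu); /* illustrative helper */

        /* Always hook the poll_table, exactly like tcp_poll() does. */
        poll_wait(filp, &work->waiters, poll_table);

        if (!ring_buffer_empty_cpu(buffer, cpu))
            return POLLIN | POLLRDNORM;
        return 0;
    }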
2014-07-15  quota: missing lock in dqcache_shrink_scan()  (Niu Yawei)  [1 file changed, -0/+2]
Commit 1ab6c4997e04 (fs: convert fs shrinkers to new scan/count API) accidentally removed locking from the quota shrinker. Fix it: dqcache_shrink_scan() should use dq_list_lock to protect the scan of the free_dquots list. CC: [email protected] Fixes: 1ab6c4997e04a00c50c6d786c2f046adc0d1f5de Signed-off-by: Niu Yawei <[email protected]> Signed-off-by: Jan Kara <[email protected]>
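The fix brackets the free_dquots walk with dq_list_lock. A hedged sketch of the scan loop, with helper names recalled from fs/quota/dquot.c rather than quoted from the patch:

    static unsigned long
    dqcache_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
    {
        struct dquot *dquot;
        unsigned long freed = 0;

        spin_lock(&dq_list_lock);               /* the lock the commit restores */
        while (!list_empty(&free_dquots) && sc->nr_to_scan) {
            dquot = list_first_entry(&free_dquots, struct dquot, dq_free);
            remove_dquot_hash(dquot);
            remove_free_dquot(dquot);
            remove_inuse(dquot);
            do_destroy_dquot(dquot);
            sc->nr_to_scan--;
            freed++;
        }
        spin_unlock(&dq_list_lock);
        return freed;
    }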
2014-07-15  gpio: rcar: Add support for DT IRQ flags  (Laurent Pinchart)  [1 file changed, -0/+1]
The gpio-rcar driver has no IRQ domain OF xlate function and thus ignores IRQ flags specified in DT. Fix this by using the two-cell xlate function. Signed-off-by: Laurent Pinchart <[email protected]> Signed-off-by: Linus Walleij <[email protected]>
2014-07-15  MAINTAINERS: Add entry for the Renesas pin controller driver  (Laurent Pinchart)  [1 file changed, -0/+6]
I'm actively maintaining the driver, let's document that. Signed-off-by: Laurent Pinchart <[email protected]> Signed-off-by: Linus Walleij <[email protected]>
2014-07-15  pinctrl: st: Fix irqmux handler  (Maxime COQUELIN)  [1 file changed, -1/+1]
st_gpio_irqmux_handler() reads the status register to find out which banks inside the controller have pending IRQs, and for each bank with pending IRQs it calls the corresponding handler. The problem is that the current code restricts the number of possible banks inside the controller to ST_GPIO_PINS_PER_BANK. That define represents the number of pins inside a bank, so it shouldn't be used here. On STiH407, the PIO_FRONT0 controller has 10 banks, so IRQs pending in the last two banks (PIO18 & PIO19) aren't handled. This patch replaces ST_GPIO_PINS_PER_BANK with the number of banks inside the controller. Cc: Linus Walleij <[email protected]> Cc: <[email protected]> #v3.15+ Acked-by: Srinivas Kandagatla <[email protected]> Signed-off-by: Maxime Coquelin <[email protected]> Signed-off-by: Linus Walleij <[email protected]>
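The handler's loop bound simply changes from pins-per-bank to the controller's bank count. A sketch, simplified from the description above; the driver field names (nbanks, banks, irqmux_base) are assumptions:

    static void st_gpio_irqmux_handler(unsigned irq, struct irq_desc *desc)
    {
        struct irq_chip *chip = irq_get_chip(irq);
        struct st_pinctrl *info = irq_get_handler_data(irq);
        unsigned long status;
        int n;

        chained_irq_enter(chip, desc);

        status = readl(info->irqmux_base);

        /* Was ST_GPIO_PINS_PER_BANK, which hid the upper banks on PIO_FRONT0. */
        for_each_set_bit(n, &status, info->nbanks)
            __gpio_irq_handler(&info->banks[n]);

        chained_irq_exit(chip, desc);
    }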