2017-07-06  mm, THP, swap: unify swap slot free functions to put_swap_page  (Minchan Kim, 5 files changed, -24/+21)
Now that get_swap_page takes a struct page and allocates swap space according to the page size (i.e. normal or THP), it is cleaner to introduce put_swap_page as the counterpart of get_swap_page. It then calls the right swap slot free function depending on the page's size. [[email protected]: minor cleanup and fix] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Minchan Kim <[email protected]> Signed-off-by: "Huang, Ying" <[email protected]> Acked-by: Johannes Weiner <[email protected]> Cc: Andrea Arcangeli <[email protected]> Cc: Ebru Akagunduz <[email protected]> Cc: Hugh Dickins <[email protected]> Cc: Kirill A. Shutemov <[email protected]> Cc: Michal Hocko <[email protected]> Cc: Rik van Riel <[email protected]> Cc: Shaohua Li <[email protected]> Cc: Tejun Heo <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
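As a rough illustration of the shape such a counterpart can take (the callee names swapcache_free and swapcache_free_cluster are assumptions for illustration, not necessarily the exact helpers touched by this patch):

    /* Sketch: free the swap space backing either a normal page or a THP. */
    void put_swap_page(struct page *page, swp_entry_t entry)
    {
            if (!PageTransHuge(page))
                    swapcache_free(entry);          /* single swap slot */
            else
                    swapcache_free_cluster(entry);  /* whole swap cluster */
    }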
2017-07-06  mm, THP, swap: delay splitting THP during swap out  (Huang Ying, 12 files changed, -164/+373)
Patch series "THP swap: Delay splitting THP during swapping out", v11. This patchset is to optimize the performance of Transparent Huge Page (THP) swap. Recently, the performance of the storage devices improved so fast that we cannot saturate the disk bandwidth with single logical CPU when do page swap out even on a high-end server machine. Because the performance of the storage device improved faster than that of single logical CPU. And it seems that the trend will not change in the near future. On the other hand, the THP becomes more and more popular because of increased memory size. So it becomes necessary to optimize THP swap performance. The advantages of the THP swap support include: - Batch the swap operations for the THP to reduce lock acquiring/releasing, including allocating/freeing the swap space, adding/deleting to/from the swap cache, and writing/reading the swap space, etc. This will help improve the performance of the THP swap. - The THP swap space read/write will be 2M sequential IO. It is particularly helpful for the swap read, which are usually 4k random IO. This will improve the performance of the THP swap too. - It will help the memory fragmentation, especially when the THP is heavily used by the applications. The 2M continuous pages will be free up after THP swapping out. - It will improve the THP utilization on the system with the swap turned on. Because the speed for khugepaged to collapse the normal pages into the THP is quite slow. After the THP is split during the swapping out, it will take quite long time for the normal pages to collapse back into the THP after being swapped in. The high THP utilization helps the efficiency of the page based memory management too. There are some concerns regarding THP swap in, mainly because possible enlarged read/write IO size (for swap in/out) may put more overhead on the storage device. To deal with that, the THP swap in should be turned on only when necessary. For example, it can be selected via "always/never/madvise" logic, to be turned on globally, turned off globally, or turned on only for VMA with MADV_HUGEPAGE, etc. This patchset is the first step for the THP swap support. The plan is to delay splitting THP step by step, finally avoid splitting THP during the THP swapping out and swap out/in the THP as a whole. As the first step, in this patchset, the splitting huge page is delayed from almost the first step of swapping out to after allocating the swap space for the THP and adding the THP into the swap cache. This will reduce lock acquiring/releasing for the locks used for the swap cache management. With the patchset, the swap out throughput improves 15.5% (from about 3.73GB/s to about 4.31GB/s) in the vm-scalability swap-w-seq test case with 8 processes. The test is done on a Xeon E5 v3 system. The swap device used is a RAM simulated PMEM (persistent memory) device. To test the sequential swapping out, the test case creates 8 processes, which sequentially allocate and write to the anonymous pages until the RAM and part of the swap device is used up. This patch (of 5): In this patch, splitting huge page is delayed from almost the first step of swapping out to after allocating the swap space for the THP (Transparent Huge Page) and adding the THP into the swap cache. This will batch the corresponding operation, thus improve THP swap out throughput. This is the first step for the THP swap optimization. The plan is to delay splitting the THP step by step and avoid splitting the THP finally. 
In this patch, one swap cluster is used to hold the contents of each THP swapped out. So, the size of the swap cluster is changed to that of the THP (Transparent Huge Page) on the x86_64 architecture (512). For other architectures that want this THP swap optimization, ARCH_USES_THP_SWAP_CLUSTER needs to be selected in the architecture's Kconfig file. In effect, this doubles the swap cluster size on x86_64, which may make it harder to find a free cluster when the swap space becomes fragmented, so this may reduce continuous swap space allocation and sequential writes in theory. The performance test in 0day shows no regressions caused by this. In future THP swap optimization work, some information about the swapped-out THP (such as the compound map count) will be recorded in the swap_cluster_info data structure. The mem cgroup swap accounting functions are enhanced to support charging or uncharging a swap cluster backing a THP as a whole. Swap cluster allocate/free functions are added to allocate/free a swap cluster for a THP. A fairly simple algorithm is used for swap cluster allocation: only the first swap device in the priority list is tried when allocating the swap cluster. The function fails if the attempt is not successful, and the caller falls back to allocating a single swap slot instead. This works well enough for normal cases. If the difference in the number of free swap clusters among multiple swap devices is significant, it is possible that some THPs are split earlier than necessary; for example, this could be caused by a big size difference among multiple swap devices. The swap cache functions are enhanced to support adding/deleting a THP to/from the swap cache as a set of (HPAGE_PMD_NR) sub-pages. This may be enhanced in the future with a multi-order radix tree, but because we will split the THP soon during swapping out, that optimization doesn't make much sense for this first step. The THP splitting functions are enhanced to support splitting a THP in the swap cache during swapping out. The page lock is held while allocating the swap cluster, adding the THP into the swap cache and splitting the THP, so in code paths other than swapping out, if the THP needs to be split, PageSwapCache(THP) will always be false. The swap cluster is only available for SSD, so the THP swap optimization in this patchset has no effect for HDD. [[email protected]: fix two issues in THP optimize patch] Link: http://lkml.kernel.org/r/[email protected] [[email protected]: extensive cleanups and simplifications, reduce code size] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: "Huang, Ying" <[email protected]> Signed-off-by: Johannes Weiner <[email protected]> Suggested-by: Andrew Morton <[email protected]> [for config option] Acked-by: Kirill A. Shutemov <[email protected]> [for changes in huge_memory.c and huge_mm.h] Cc: Andrea Arcangeli <[email protected]> Cc: Ebru Akagunduz <[email protected]> Cc: Johannes Weiner <[email protected]> Cc: Michal Hocko <[email protected]> Cc: Tejun Heo <[email protected]> Cc: Hugh Dickins <[email protected]> Cc: Shaohua Li <[email protected]> Cc: Minchan Kim <[email protected]> Cc: Rik van Riel <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2017-07-06  mm/vmstat.c: standardize file operations variable names  (Anshuman Khandual, 1 file changed, -8/+8)
Standardize the file operations variable names related to all four memory management /proc interface files. Also change all symbolic permissions (S_IRUGO) to octal permissions (0444), as checkpatch.pl complains about the former. This does not create any functional change to the interface. Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Anshuman Khandual <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
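For illustration, the symbolic-to-octal permission change looks roughly like this (the file name and fops variable are placeholders, not necessarily the exact identifiers from mm/vmstat.c):

    /* Before: symbolic permission macro */
    proc_create("vmstat", S_IRUGO, NULL, &vmstat_file_operations);

    /* After: equivalent octal mode, preferred by checkpatch.pl */
    proc_create("vmstat", 0444, NULL, &vmstat_file_operations);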
2017-07-06  zram: count same page write as page_stored  (Minchan Kim, 1 file changed, -0/+2)
Regardless of whether it is the same page or not, it is surely a write and is stored to zram, so we should increase the pages_stored stat. Otherwise, the user can see a zero value via mm_stats although a lot of pages have been written to zram. Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Minchan Kim <[email protected]> Reviewed-by: Sergey Senozhatsky <[email protected]> Cc: Joonsoo Kim <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2017-07-06  ksm: optimize refile of stable_node_dup at the head of the chain  (Andrea Arcangeli, 1 file changed, -12/+23)
If a candidate stable_node_dup has been found and it can accept further merges it can be refiled to the head of the list to speedup next searches without altering which dup is found and how the dups accumulate in the chain. We already refiled it back to the head in the prune_stale_stable_nodes case, but we didn't refile it if not pruning (which is more common). And we also refiled it when it was already at the head which is unnecessary (in the prune_stale_stable_nodes case, nr > 1 means there's more than one dup in the chain, it doesn't mean it's not already at the head of the chain). The stable_node_chain list is single threaded and there's no SMP locking contention so it should be faster to refile it to the head of the list also if prune_stale_stable_nodes is false. Profiling shows the refile happens 1.9% of the time when a dup is found with a max_page_sharing limit setting of 3 (with max_page_sharing of 2 the refile never happens of course as there's never space for one more merge) which is reasonably low. At higher max_page_sharing values it should be much less frequent. This is just an optimization. Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Andrea Arcangeli <[email protected]> Cc: Evgheni Dereveanchin <[email protected]> Cc: Andrey Ryabinin <[email protected]> Cc: Petr Holasek <[email protected]> Cc: Hugh Dickins <[email protected]> Cc: Arjan van de Ven <[email protected]> Cc: Davidlohr Bueso <[email protected]> Cc: Gavin Guo <[email protected]> Cc: Jay Vosburgh <[email protected]> Cc: Mel Gorman <[email protected]> Cc: Dan Carpenter <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2017-07-06  ksm: swap the two output parameters of chain/chain_prune  (Andrea Arcangeli, 1 file changed, -26/+52)
Some static checker complains if chain/chain_prune returns a potentially stale pointer. There are two output parameters to chain/chain_prune, one is tree_page the other is stable_node_dup. Like in get_ksm_page the caller has to check tree_page is NULL before touching the stable_node. Similarly in chain/chain_prune the caller has to check tree_page before touching the stable_node_dup returned or the original stable_node passed as parameter. Because the tree_page is never returned as a stale pointer, it may be more intuitive to return tree_page and to pass stable_node_dup for reference instead of the reverse. This patch purely swaps the two output parameters of chain/chain_prune as a cleanup for the static checker and to mimic the get_ksm_page behavior more closely. There's no change to the caller at all except the swap, it's purely a cleanup and it is a noop from the caller point of view. Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Andrea Arcangeli <[email protected]> Reported-by: Dan Carpenter <[email protected]> Tested-by: Dan Carpenter <[email protected]> Cc: Evgheni Dereveanchin <[email protected]> Cc: Andrey Ryabinin <[email protected]> Cc: Petr Holasek <[email protected]> Cc: Hugh Dickins <[email protected]> Cc: Arjan van de Ven <[email protected]> Cc: Davidlohr Bueso <[email protected]> Cc: Gavin Guo <[email protected]> Cc: Jay Vosburgh <[email protected]> Cc: Mel Gorman <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2017-07-06  ksm: cleanup stable_node chain collapse case  (Andrea Arcangeli, 1 file changed, -22/+28)
Patch series "KSMscale cleanup/optimizations". There are no fixes here it's just minor cleanups and optimizations. 1/3 removes makes the "fix" for the stale stable_node fall in the standard case without introducing new cases. Setting stable_node to NULL was marginally safer, but stale pointer is still wiped from the caller, this looks cleaner. 2/3 should fix the false positive from Dan's static checker. 3/3 is a microoptimization to apply the the refile of future merge candidate dups at the head of the chain in all cases and to skip it in one case where we did it and but it was a noop (to avoid checking if it was already at the head but now we've to check it anyway so it got optimized away). This patch (of 3): When the stable_node chain is collapsed we can as well set the caller stable_node to match the returned stable_node_dup in chain_prune(). This way the collapse case becomes indistinguishable from the regular stable_node case and we can remove two branches from the KSM page migration handling slow paths. While it was all correct this looks cleaner (and faster) as the caller has to deal with fewer special cases. Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Andrea Arcangeli <[email protected]> Cc: Evgheni Dereveanchin <[email protected]> Cc: Andrey Ryabinin <[email protected]> Cc: Petr Holasek <[email protected]> Cc: Hugh Dickins <[email protected]> Cc: Arjan van de Ven <[email protected]> Cc: Davidlohr Bueso <[email protected]> Cc: Gavin Guo <[email protected]> Cc: Jay Vosburgh <[email protected]> Cc: Mel Gorman <[email protected]> Cc: Dan Carpenter <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2017-07-06  ksm: fix use after free with merge_across_nodes = 0  (Andrea Arcangeli, 1 file changed, -11/+55)
If merge_across_nodes was manually set to 0 (not the default value) by the admin or a tuned profile on NUMA systems triggering cross-NODE page migrations, a stable_node use after free could materialize. If the chain is collapsed, stable_node would point to the old chain that was already freed. stable_node_dup would be the stable_node dup now converted to a regular stable_node and indexed in the rbtree in replacement of the freed stable_node chain (no longer a dup). This special case, where the chain is collapsed in the NUMA replacement path, is now detected by the chain_prune callee setting stable_node to NULL if it decides to collapse the chain. This tells the NUMA replacement code that even if stable_node and stable_node_dup are different, this is not a chain if stable_node is NULL, as the stable_node_dup was converted to a regular stable_node and the chain was collapsed. It is generally safer for the callee to force the caller's stable_node to NULL the moment it becomes stale, so any other mistake like this would result in an instant Oops, which is easier to debug than a use after free. Otherwise the replace logic would act as if stable_node were a valid chain, when in fact it was freed. Notably stable_node_chain_add_dup(page_node, stable_node) would run on a stale stable_node. Andrey Ryabinin found the source of the use after free in chain_prune(). Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Andrea Arcangeli <[email protected]> Reported-by: Andrey Ryabinin <[email protected]> Reported-by: Evgheni Dereveanchin <[email protected]> Tested-by: Andrey Ryabinin <[email protected]> Cc: Petr Holasek <[email protected]> Cc: Hugh Dickins <[email protected]> Cc: Davidlohr Bueso <[email protected]> Cc: Arjan van de Ven <[email protected]> Cc: Gavin Guo <[email protected]> Cc: Jay Vosburgh <[email protected]> Cc: Mel Gorman <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2017-07-06  ksm: introduce ksm_max_page_sharing per page deduplication limit  (Andrea Arcangeli, 2 files changed, -66/+730)
Without a max deduplication limit for each KSM page, the list of the rmap_items associated with each stable_node can grow infinitely large. During the rmap walk each entry can take up to ~10usec to process because of IPIs for the TLB flushing (both for the primary MMU and the secondary MMUs with the MMU notifier). With only 16GB of address space shared in the same KSM page, that would amount to dozens of seconds of kernel runtime. A ~256 max deduplication factor will reduce the latencies of the rmap walks on KSM pages to the order of a few msec. Just doing the cond_resched() during the rmap walks is not enough; the list size must have a limit too, otherwise the caller could get blocked in (schedule friendly) kernel computations for seconds, unexpectedly. There's room for optimization to significantly reduce the IPI delivery cost during the page_referenced(), but at least for page_migration in the KSM case (used by hard NUMA bindings, compaction and NUMA balancing) it may be inevitable to send lots of IPIs if each rmap_item->mm is active on a different CPU and there are lots of CPUs. Even if we ignore the IPI delivery cost, we still have to walk the whole KSM rmap list, so we can't allow millions or billions (i.e. an unlimited number) of entries in the KSM stable_node rmap_item lists. The limit is enforced efficiently by adding a second dimension to the stable rbtree. So there are three types of stable_nodes: the regular ones (identical to before, living in the first flat dimension of the stable rbtree), the "chains" and the "dups". Every "chain" and all "dups" linked into a "chain" enforce the invariant that they represent the same write protected memory content, even if each "dup" will be pointed to by a different KSM page copy of that content. This way the stable rbtree lookup computational complexity is unaffected if compared to an unlimited max_sharing_limit. It is still enforced that there cannot be KSM page content duplicates in the stable rbtree itself. Adding the second dimension to the stable rbtree only after the max_page_sharing limit hits provides for a zero memory footprint increase on 64bit archs. The memory overhead of the per-KSM page stable_tree and per virtual mapping rmap_item is unchanged. Only after the max_page_sharing limit hits do we need to allocate a stable_tree "chain" and rb_replace() the "regular" stable_node with the newly allocated stable_node "chain". After that we simply add the "regular" stable_node to the chain as a stable_node "dup" by linking hlist_dup in the stable_node_chain->hlist. This way the "regular" (flat) stable_node is converted to a stable_node "dup" living in the second dimension of the stable rbtree. During stable rbtree lookups the stable_node "chain" is identified as stable_node->rmap_hlist_len == STABLE_NODE_CHAIN (aka is_stable_node_chain()). When dropping stable_nodes, the stable_node "dup" is identified as stable_node->head == STABLE_NODE_DUP_HEAD (aka is_stable_node_dup()). The STABLE_NODE_DUP_HEAD must be a unique valid pointer never used elsewhere in any stable_node->head/node to avoid clashes with the stable_node->node.rb_parent_color pointer, and different from &migrate_nodes. So the second field of &migrate_nodes is picked and verified as always safe with a BUILD_BUG_ON in case the list_head implementation changes in the future. The STABLE_NODE_DUP is picked as a random negative value in stable_node->rmap_hlist_len. rmap_hlist_len cannot become negative when it's a "regular" stable_node or a stable_node "dup". 
The stable_node_chain->nid is irrelevant. The stable_node_chain->kpfn is aliased in a union with a time field used to rate limit the stable_node_chain->hlist prunes. The garbage collection of the stable_node_chain happens lazily during stable rbtree lookups (as for all other kind of stable_nodes), or while disabling KSM with "echo 2 >/sys/kernel/mm/ksm/run" while collecting the entire stable rbtree. While the "regular" stable_nodes and the stable_node "dups" must wait for their underlying tree_page to be freed before they can be freed themselves, the stable_node "chains" can be freed immediately if the stable_node->hlist turns empty. This is because the "chains" are never pointed by any page->mapping and they're effectively stable rbtree KSM self contained metadata. [[email protected]: fix non-NUMA build] Signed-off-by: Andrea Arcangeli <[email protected]> Tested-by: Petr Holasek <[email protected]> Cc: Hugh Dickins <[email protected]> Cc: Davidlohr Bueso <[email protected]> Cc: Arjan van de Ven <[email protected]> Cc: Evgheni Dereveanchin <[email protected]> Cc: Andrey Ryabinin <[email protected]> Cc: Gavin Guo <[email protected]> Cc: Jay Vosburgh <[email protected]> Cc: Mel Gorman <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2017-07-06  mm/nobootmem.c: return 0 when start_pfn equals end_pfn  (Wei Yang, 1 file changed, -1/+1)
When start_pfn equals end_pfn, __free_pages_memory() has no effect and __free_memory_core() will finally return (end_pfn - start_pfn) = 0. This patch returns 0 directly when start_pfn equals end_pfn. Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Wei Yang <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2017-07-06  mm/vmscan.c: fix unsequenced modification and access warning  (Nick Desaulniers, 1 file changed, -7/+6)
Clang and its -Wunsequenced emits a warning mm/vmscan.c:2961:25: error: unsequenced modification and access to 'gfp_mask' [-Wunsequenced] .gfp_mask = (gfp_mask = current_gfp_context(gfp_mask)), ^ While it is not clear to me whether the initialization code violates the specification (6.7.8 par 19 (ISO/IEC 9899) looks like it disagrees) the code is quite confusing and worth cleaning up anyway. Fix this by reusing sc.gfp_mask rather than the updated input gfp_mask parameter. Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Nick Desaulniers <[email protected]> Acked-by: Michal Hocko <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
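A sketch of the before/after shape described above; only the warned-about initializer line is taken from the commit text, the abbreviated struct and surrounding code are illustrative:

    struct scan_control { gfp_t gfp_mask; /* ... other fields elided ... */ };

    /* Before: gfp_mask is both modified and read inside one initializer,
     * which clang flags with -Wunsequenced. */
    struct scan_control sc = {
            .gfp_mask = (gfp_mask = current_gfp_context(gfp_mask)),
    };

    /* After (sketch): initialize the field once, then reuse sc.gfp_mask
     * everywhere the updated mask is needed, instead of the parameter. */
    struct scan_control sc = {
            .gfp_mask = current_gfp_context(gfp_mask),
    };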
2017-07-06  mm/mmap.c: mark protection_map as __ro_after_init  (Daniel Micay, 1 file changed, -1/+1)
The protection map is only modified by per-arch init code so it can be protected from writes after the init code runs. This change was extracted from PaX where it's part of KERNEXEC. Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Daniel Micay <[email protected]> Acked-by: Kees Cook <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
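Roughly, the change amounts to adding the attribute to the array definition, so it stays writable during per-arch init and becomes read-only afterwards (sketch, initializer abbreviated):

    /* Writable during boot-time arch init, read-only once init completes. */
    pgprot_t protection_map[16] __ro_after_init = {
            __P000, __P001, __P010, __P011, __P100, __P101, __P110, __P111,
            __S000, __S001, __S010, __S011, __S100, __S101, __S110, __S111
    };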
2017-07-06  mm, sparsemem: break out of loops early  (Dave Hansen, 3 files changed, -14/+52)
There are a number of times that we loop over NR_MEM_SECTIONS, looking for section_present() on each section. But, when we have very large physical address spaces (large MAX_PHYSMEM_BITS), NR_MEM_SECTIONS becomes very large, making the loops quite long. With MAX_PHYSMEM_BITS=46 and a section size of 128MB, the current loops are 512k iterations, which we barely notice on modern hardware. But, raising MAX_PHYSMEM_BITS higher (like we will see on systems that support 5-level paging) makes this 64x longer and we start to notice, especially on slower systems like simulators. A 10-second delay for 512k iterations is annoying. But, a 640-second delay is crippling. This does not help if we have extremely sparse physical address spaces, but those are quite rare. We expect that most of the "slow" systems where this matters will also be quite small and non-sparse. To fix this, we track the highest section we've ever encountered. This lets us know when we will *never* see another section_present(), and lets us break out of the loops earlier. Doing the whole for_each_present_section_nr() macro is probably overkill, but it will ensure that any future loop iterations that we grow are more likely to be correct. Kirill said "It shaved almost 40 seconds from boot time in qemu with 5-level paging enabled for me". Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Dave Hansen <[email protected]> Tested-by: Kirill A. Shutemov <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
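A minimal sketch of the idea, with assumed variable and helper names (the actual patch introduces its own tracking variable and a for_each_present_section_nr() macro):

    /* Remember the highest section ever marked present. */
    static unsigned long highest_present_section_nr;

    static void note_section_present(unsigned long section_nr)
    {
            if (section_nr > highest_present_section_nr)
                    highest_present_section_nr = section_nr;
    }

    /* Loops can then stop early instead of walking all NR_MEM_SECTIONS. */
    for (nr = 0; nr <= highest_present_section_nr; nr++) {
            if (!present_section_nr(nr))
                    continue;
            /* ... process the present section ... */
    }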
2017-07-06  mm: allow slab_nomerge to be set at build time  (Kees Cook, 3 files changed, -5/+24)
Some hardened environments want to build kernels with slab_nomerge already set (so that they do not depend on remembering to set the kernel command line option). This is desired to reduce the risk of kernel heap overflows being able to overwrite objects from merged caches and changes the requirements for cache layout control, increasing the difficulty of these attacks. By keeping caches unmerged, these kinds of exploits can usually only damage objects in the same cache (though the risk to metadata exploitation is unchanged). Link: http://lkml.kernel.org/r/20170620230911.GA25238@beast Signed-off-by: Kees Cook <[email protected]> Cc: Daniel Micay <[email protected]> Cc: David Windsor <[email protected]> Cc: Eric Biggers <[email protected]> Cc: Christoph Lameter <[email protected]> Cc: Jonathan Corbet <[email protected]> Cc: Daniel Micay <[email protected]> Cc: David Windsor <[email protected]> Cc: Eric Biggers <[email protected]> Cc: Pekka Enberg <[email protected]> Cc: David Rientjes <[email protected]> Cc: Joonsoo Kim <[email protected]> Cc: "Rafael J. Wysocki" <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Mauro Carvalho Chehab <[email protected]> Cc: "Paul E. McKenney" <[email protected]> Cc: Arnd Bergmann <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Nicolas Pitre <[email protected]> Cc: Tejun Heo <[email protected]> Cc: Daniel Mack <[email protected]> Cc: Sebastian Andrzej Siewior <[email protected]> Cc: Sergey Senozhatsky <[email protected]> Cc: Helge Deller <[email protected]> Cc: Rik van Riel <[email protected]> Cc: Randy Dunlap <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
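One plausible way to wire a Kconfig default into the existing boot parameter is sketched below; the Kconfig symbol name CONFIG_SLAB_MERGE_DEFAULT is an assumption for illustration:

    /* Default the existing switch from the build configuration. */
    static bool slab_nomerge = !IS_ENABLED(CONFIG_SLAB_MERGE_DEFAULT);

    /* The command line option still overrides the build-time default. */
    static int __init setup_slab_nomerge(char *str)
    {
            slab_nomerge = true;
            return 1;
    }
    __setup("slab_nomerge", setup_slab_nomerge);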
2017-07-06  mm/slab.c: replace open-coded round-up code with ALIGN  (Canjiang Lu, 1 file changed, -6/+2)
Link: http://lkml.kernel.org/r/20170616072918epcms5p4ff16c24ef8472b4c3b4371823cd87856@epcms5p4 Signed-off-by: Canjiang Lu <[email protected]> Cc: Christoph Lameter <[email protected]> Cc: Pekka Enberg <[email protected]> Cc: David Rientjes <[email protected]> Cc: Joonsoo Kim <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
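The replaced pattern is the usual power-of-two round-up; a generic illustration (not the exact lines from mm/slab.c):

    /* Open-coded round-up of 'size' to a multiple of 'align' ... */
    size = (size + align - 1) & ~(align - 1);

    /* ... replaced by the existing kernel helper, which does the same thing. */
    size = ALIGN(size, align);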
2017-07-06mm/slub.c: wrap kmem_cache->cpu_partial in config CONFIG_SLUB_CPU_PARTIALWei Yang2-31/+51
kmem_cache->cpu_partial is just used when CONFIG_SLUB_CPU_PARTIAL is set, so wrap it with config CONFIG_SLUB_CPU_PARTIAL will save some space on 32bit arch. This patch wraps kmem_cache->cpu_partial in config CONFIG_SLUB_CPU_PARTIAL and wraps its sysfs too. Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Wei Yang <[email protected]> Cc: Christoph Lameter <[email protected]> Cc: Pekka Enberg <[email protected]> Cc: David Rientjes <[email protected]> Cc: Joonsoo Kim <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2017-07-06mm/slub.c: wrap cpu_slab->partial in CONFIG_SLUB_CPU_PARTIALWei Yang2-7/+30
cpu_slab's field partial is used when CONFIG_SLUB_CPU_PARTIAL is set, which means we can save a pointer's space on each cpu for every slub item. This patch wraps cpu_slab->partial in CONFIG_SLUB_CPU_PARTIAL and wraps its sysfs use too. [[email protected]: avoid strange 80-col tricks] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Wei Yang <[email protected]> Cc: Christoph Lameter <[email protected]> Cc: Pekka Enberg <[email protected]> Cc: David Rientjes <[email protected]> Cc: Joonsoo Kim <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
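Roughly, the per-CPU structure ends up looking like this (fields abbreviated; a sketch rather than the full definition from slub_def.h):

    struct kmem_cache_cpu {
            void **freelist;        /* pointer to next available object */
            unsigned long tid;      /* globally unique transaction id */
            struct page *page;      /* slab page we are allocating from */
    #ifdef CONFIG_SLUB_CPU_PARTIAL
            struct page *partial;   /* partially allocated frozen slabs */
    #endif
    };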
2017-07-06  mm/slub.c: pack red_left_pad with another int to save a word  (Wei Yang, 1 file changed, -1/+1)
Patch series "try to save some memory for kmem_cache in some cases", v2. kmem_cache is a frequently used data in kernel. During the code reading, I found maybe we could save some space in some cases. 1. On 64bit arch, type int will occupy a word if it doesn't sit well. 2. cpu_slab->partial is just used when CONFIG_SLUB_CPU_PARTIAL is set 3. cpu_partial is just used when CONFIG_SLUB_CPU_PARTIAL is set, while just save some space on 32bit arch. This patch (of 3): On 64bit arch, struct is 8-bytes aligned, so int will occupy a word if it doesn't sit well. This patch pack red_left_pad with reserved to save 8 bytes for struct kmem_cache on a 64bit arch. Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Wei Yang <[email protected]> Cc: Christoph Lameter <[email protected]> Cc: Pekka Enberg <[email protected]> Cc: David Rientjes <[email protected]> Cc: Joonsoo Kim <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2017-07-06  mm/slub: reset cpu_slab's pointer in deactivate_slab()  (Wei Yang, 1 file changed, -13/+8)
Each time a slab is deactivated, the page and freelist pointers should be reset. This patch just merges these two operations into deactivate_slab(). Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Wei Yang <[email protected]> Cc: Christoph Lameter <[email protected]> Cc: Pekka Enberg <[email protected]> Cc: David Rientjes <[email protected]> Cc: Joonsoo Kim <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2017-07-06  mm/slub.c: remove a redundant assignment in ___slab_alloc()  (Wei Yang, 1 file changed, -1/+0)
When the code comes to this point, there are two cases: 1. cpu_slab is deactivated 2. cpu_slab is empty In both cases, cpu_slab->freelist is NULL at this moment. This patch removes the redundant assignment of cpu_slab->freelist. Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Wei Yang <[email protected]> Cc: Christoph Lameter <[email protected]> Cc: Pekka Enberg <[email protected]> Cc: David Rientjes <[email protected]> Cc: Joonsoo Kim <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2017-07-06  fs/file.c: replace alloc_fdmem() with kvmalloc() alternative  (Michal Hocko, 1 file changed, -18/+4)
There is no real reason to duplicate kvmalloc* helpers so drop alloc_fdmem and replace it with the appropriate library function. Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Michal Hocko <[email protected]> Cc: Alexander Viro <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2017-07-06  ocfs2: constify attribute_group structures  (Arvind Yadav, 1 file changed, -1/+1)
attribute_groups are not supposed to change at runtime. All functions working with attribute_groups provided by <linux/sysfs.h> work with const attribute_group. So mark the non-const structs as const. File size before: text data bss dec hex filename 4402 1088 38 5528 1598 fs/ocfs2/stackglue.o File size After adding 'const': text data bss dec hex filename 4442 1024 38 5504 1580 fs/ocfs2/stackglue.o Link: http://lkml.kernel.org/r/cab4e59b4918db3ed2ec77073a4cb310c4429ef5.1498808026.git.arvind.yadav.cs@gmail.com Signed-off-by: Arvind Yadav <[email protected]> Cc: Mark Fasheh <[email protected]> Cc: Joel Becker <[email protected]> Cc: Junxiao Bi <[email protected]> Cc: Joseph Qi <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2017-07-06  ocfs2: free 'dummy_sc' in sc_fop_release() to prevent memory leak  (piaojun, 1 file changed, -0/+1)
'sd->dbg_sock' is malloced in sc_common_open(), but not freed at the end of sc_fop_release(). Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Jun Piao <[email protected]> Reviewed-by: Joseph Qi <[email protected]> Cc: Mark Fasheh <[email protected]> Cc: Joel Becker <[email protected]> Cc: Junxiao Bi <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2017-07-06  ocfs2: use magic.h  (Fabian Frederick, 2 files changed, -3/+3)
Filesystems generally use SUPER_MAGIC values from magic.h instead of a local definition. Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Fabian Frederick <[email protected]> Reviewed-by: Mark Fasheh <[email protected]> Cc: Joel Becker <[email protected]> Cc: Junxiao Bi <[email protected]> Cc: Joseph Qi <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2017-07-06  ocfs2: fix a static checker warning  (Gang He, 1 file changed, -1/+1)
Fix a static code checker warning: fs/ocfs2/inode.c:179 ocfs2_iget() warn: passing zero to 'ERR_PTR' Fixes: d56a8f32e4c6 ("ocfs2: check/fix inode block for online file check") Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Gang He <[email protected]> Reviewed-by: Joseph Qi <[email protected]> Reviewed-by: Eric Ren <[email protected]> Cc: Mark Fasheh <[email protected]> Cc: Joel Becker <[email protected]> Cc: Junxiao Bi <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2017-07-06  drivers/sh/intc/virq.c: delete an error message for a failed memory allocation in add_virq_to_pirq()  (SF Markus Elfring, 1 file changed, -3/+1)
This issue was detected by using the Coccinelle software. Link: http://events.linuxfoundation.org/sites/events/files/slides/LCJ16-Refactor_Strings-WSang_0.pdf Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Markus Elfring <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2017-07-06  include/linux/filter.h: use linux/set_memory.h  (Michael Ellerman, 1 file changed, -4/+1)
This header always exists, so doesn't require an ifdef around its inclusion. When CONFIG_ARCH_HAS_SET_MEMORY=y it includes the asm header, otherwise it provides empty versions of the set_memory_xx() routines. Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Michael Ellerman <[email protected]> Acked-by: Daniel Borkmann <[email protected]> Acked-by: Kees Cook <[email protected]> Acked-by: Laura Abbott <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2017-07-06  kernel/module.c: use linux/set_memory.h  (Michael Ellerman, 1 file changed, -3/+1)
This header always exists, so doesn't require an ifdef around its inclusion. When CONFIG_ARCH_HAS_SET_MEMORY=y it includes the asm header, otherwise it provides empty versions of the set_memory_xx() routines. The usages of set_memory_xx() are still guarded by CONFIG_STRICT_MODULE_RWX. Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Michael Ellerman <[email protected]> Acked-by: Kees Cook <[email protected]> Acked-by: Laura Abbott <[email protected]> Cc: Daniel Borkmann <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2017-07-06  kernel/power/snapshot.c: use linux/set_memory.h  (Michael Ellerman, 1 file changed, -3/+1)
This header always exists, so doesn't require an ifdef around its inclusion. When CONFIG_ARCH_HAS_SET_MEMORY=y it includes the asm header, otherwise it provides empty versions of the set_memory_xx() routines. Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Michael Ellerman <[email protected]> Acked-by: Kees Cook <[email protected]> Acked-by: Laura Abbott <[email protected]> Cc: Daniel Borkmann <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2017-07-06  provide linux/set_memory.h  (Michael Ellerman, 1 file changed, -0/+20)
Currently code that wants to use set_memory_ro() etc, needs to include asm/set_memory.h, which doesn't exist on all arches. Some code knows it only builds on arches which have the header, other code guards the inclusion with an #ifdef, neither is ideal. So create linux/set_memory.h. This always exists, so users don't need an #ifdef just to include the header. When CONFIG_ARCH_HAS_SET_MEMORY=y it includes asm/set_memory.h, otherwise it provides empty non-failing implementations. Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Michael Ellerman <[email protected]> Acked-by: Daniel Borkmann <[email protected]> Acked-by: Kees Cook <[email protected]> Acked-by: Laura Abbott <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
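The shape of such a wrapper header is straightforward; a simplified sketch based on the description above:

    #ifndef _LINUX_SET_MEMORY_H_
    #define _LINUX_SET_MEMORY_H_

    #ifdef CONFIG_ARCH_HAS_SET_MEMORY
    #include <asm/set_memory.h>
    #else
    /* Empty, non-failing stubs for arches without set_memory support. */
    static inline int set_memory_ro(unsigned long addr, int numpages) { return 0; }
    static inline int set_memory_rw(unsigned long addr, int numpages) { return 0; }
    static inline int set_memory_x(unsigned long addr, int numpages)  { return 0; }
    static inline int set_memory_nx(unsigned long addr, int numpages) { return 0; }
    #endif

    #endif /* _LINUX_SET_MEMORY_H_ */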
2017-07-06  scripts/spelling.txt: add a bunch more spelling mistakes  (Colin Ian King, 1 file changed, -0/+31)
Here are some of the more spelling mistakes and typos that I've found while fixing up spelling mistakes in kernel error message text over the past several weeks. Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Colin Ian King <[email protected]> Acked-by: Kees Cook <[email protected]> Cc: Joe Perches <[email protected]> Cc: Stephen Boyd <[email protected]> Cc: Ross Zwisler <[email protected]> Cc: Masahiro Yamada <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2017-07-06  ramfs: clarify help text that compression applies to ramfs as well as legacy ramdisk  (Rob Landley, 1 file changed, -6/+6)
Clarify help text that compression applies to ramfs as well as legacy ramdisk. Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Rob Landley <[email protected]> Cc: Jonathan Corbet <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2017-07-06  scripts/gen_initramfs_list.sh: teach INITRAMFS_ROOT_UID and INITRAMFS_ROOT_GID that -1 means "current user"  (Rob Landley, 2 files changed, -8/+6)
Teach INITRAMFS_ROOT_UID and INITRAMFS_ROOT_GID that -1 means "current user". Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Rob Landley <[email protected]> Cc: Michal Marek <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2017-07-06  tile: provide default ioremap declaration  (Logan Gunthorpe, 1 file changed, -0/+11)
Add a default ioremap function which was not provided in all circumstances. (Only when CONFIG_PCI and CONFIG_TILEGX was set). I have designs to use them in scatterlist.c where they'd likely never be called with this architecture, but it is needed to compile. Thus, if the function is ever hit it returns NULL. Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Logan Gunthorpe <[email protected]> Signed-off-by: Stephen Bates <[email protected]> Cc: Chris Metcalf <[email protected]> Cc: Mel Gorman <[email protected]> Cc: Michal Hocko <[email protected]> Cc: Johannes Weiner <[email protected]> Cc: Vlastimil Babka <[email protected]> Cc: Al Viro <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2017-07-06  mn10300: use generic fb.h  (Tobias Klauser, 2 files changed, -23/+1)
The mn10300 arch uses a verbatim copy of the asm-generic version and does not add any own implementations to the header, so use asm-generic/fb.h instead of duplicating code. Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Tobias Klauser <[email protected]> Reviewed-by: David Howells <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2017-07-06  mn10300: remove wrapper header for asm/device.h  (Tobias Klauser, 2 files changed, -1/+1)
mn10300's asm/device.h is merely including asm-generic/device.h. Thus, the arch specific header can be omitted and the generic header can be used directly. Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Tobias Klauser <[email protected]> Cc: David Howells <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2017-07-06  kernel/extable.c: mark core_kernel_text notrace  (Marcin Nowakowski, 1 file changed, -1/+1)
core_kernel_text is used by MIPS in its function graph trace processing, so having this method traced leads to an infinite set of recursive calls such as: Call Trace: ftrace_return_to_handler+0x50/0x128 core_kernel_text+0x10/0x1b8 prepare_ftrace_return+0x6c/0x114 ftrace_graph_caller+0x20/0x44 return_to_handler+0x10/0x30 return_to_handler+0x0/0x30 return_to_handler+0x0/0x30 ftrace_ops_no_ops+0x114/0x1bc core_kernel_text+0x10/0x1b8 core_kernel_text+0x10/0x1b8 core_kernel_text+0x10/0x1b8 ftrace_ops_no_ops+0x114/0x1bc core_kernel_text+0x10/0x1b8 prepare_ftrace_return+0x6c/0x114 ftrace_graph_caller+0x20/0x44 (...) Mark the function notrace to avoid it being traced. Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Marcin Nowakowski <[email protected]> Reviewed-by: Masami Hiramatsu <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Meyer <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Steven Rostedt <[email protected]> Cc: Daniel Borkmann <[email protected]> Cc: Paul Gortmaker <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
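The change is a one-word annotation on the definition; a sketch with a deliberately simplified body (the real function also covers init text and other ranges):

    /* notrace excludes the function from ftrace, so the function graph
     * tracer can no longer recurse through it. */
    int notrace core_kernel_text(unsigned long addr)
    {
            return addr >= (unsigned long)_stext &&
                   addr <  (unsigned long)_etext;
    }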
2017-07-06  thp, mm: fix crash due race in MADV_FREE handling  (Kirill A. Shutemov, 1 file changed, -1/+1)
Reinette reported the following crash: BUG: Bad page state in process log2exe pfn:57600 page:ffffea00015d8000 count:0 mapcount:0 mapping: (null) index:0x20200 flags: 0x4000000000040019(locked|uptodate|dirty|swapbacked) raw: 4000000000040019 0000000000000000 0000000000020200 00000000ffffffff raw: ffffea00015d8020 ffffea00015d8020 0000000000000000 0000000000000000 page dumped because: PAGE_FLAGS_CHECK_AT_FREE flag(s) set bad because of flags: 0x1(locked) Modules linked in: rfcomm 8021q bnep intel_rapl x86_pkg_temp_thermal coretemp efivars btusb btrtl btbcm pwm_lpss_pci snd_hda_codec_hdmi btintel pwm_lpss snd_hda_codec_realtek snd_soc_skl snd_hda_codec_generic snd_soc_skl_ipc spi_pxa2xx_platform snd_soc_sst_ipc snd_soc_sst_dsp i2c_designware_platform i2c_designware_core snd_hda_ext_core snd_soc_sst_match snd_hda_intel snd_hda_codec mei_me snd_hda_core mei snd_soc_rt286 snd_soc_rl6347a snd_soc_core efivarfs CPU: 1 PID: 354 Comm: log2exe Not tainted 4.12.0-rc7-test-test #19 Hardware name: Intel corporation NUC6CAYS/NUC6CAYB, BIOS AYAPLCEL.86A.0027.2016.1108.1529 11/08/2016 Call Trace: bad_page+0x16a/0x1f0 free_pages_check_bad+0x117/0x190 free_hot_cold_page+0x7b1/0xad0 __put_page+0x70/0xa0 madvise_free_huge_pmd+0x627/0x7b0 madvise_free_pte_range+0x6f8/0x1150 __walk_page_range+0x6b5/0xe30 walk_page_range+0x13b/0x310 madvise_free_page_range.isra.16+0xad/0xd0 madvise_free_single_vma+0x2e4/0x470 SyS_madvise+0x8ce/0x1450 If somebody frees the page under us and we hold the last reference to it, put_page() would attempt to free the page before unlocking it. The fix is trivial reorder of operations. Dave said: "I came up with the exact same patch. For posterity, here's the test case, generated by syzkaller and trimmed down by Reinette: https://www.sr71.net/~dave/intel/log2.c And the config that helps detect this: https://www.sr71.net/~dave/intel/config-log2" Fixes: b8d3c4c3009d ("mm/huge_memory.c: don't split THP page when MADV_FREE syscall is called") Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Kirill A. Shutemov <[email protected]> Reported-by: Reinette Chatre <[email protected]> Acked-by: Dave Hansen <[email protected]> Acked-by: Michal Hocko <[email protected]> Acked-by: Minchan Kim <[email protected]> Cc: Huang Ying <[email protected]> Cc: <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
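The "trivial reorder" is, in spirit, the following two-line change (a sketch of the relevant fragment, not the full madvise_free_huge_pmd()):

    /* Before: if this put_page() drops the last reference, the page is
     * freed while still locked, tripping PAGE_FLAGS_CHECK_AT_FREE. */
    put_page(page);
    unlock_page(page);

    /* After: release the page lock before dropping the potentially
     * final reference. */
    unlock_page(page);
    put_page(page);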
2017-07-06  compiler, clang: always inline when CONFIG_OPTIMIZE_INLINING is disabled  (David Rientjes, 2 files changed, -15/+11)
The motivation for commit abb2ea7dfd82 ("compiler, clang: suppress warning for unused static inline functions") was to suppress clang's warnings about unused static inline functions. For configs without CONFIG_OPTIMIZE_INLINING enabled, such as any non-x86 architecture, `inline' in the kernel implies that __attribute__((always_inline)) is used. Some code depends on that behavior, see https://lkml.org/lkml/2017/6/13/918: net/built-in.o: In function `__xchg_mb': arch/arm64/include/asm/cmpxchg.h:99: undefined reference to `__compiletime_assert_99' arch/arm64/include/asm/cmpxchg.h:99: undefined reference to `__compiletime_assert_99 The full fix would be to identify these breakages and annotate the functions with __always_inline instead of `inline'. But since we are late in the 4.12-rc cycle, simply carry forward the forced inlining behavior and work toward moving arm64, and other architectures, toward CONFIG_OPTIMIZE_INLINING behavior. Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: David Rientjes <[email protected]> Reported-by: Sodagudi Prasad <[email protected]> Tested-by: Sodagudi Prasad <[email protected]> Tested-by: Matthias Kaehlcke <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Will Deacon <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2017-07-06  Merge tag 'hwlock-v4.13' of git://github.com/andersson/remoteproc  (Linus Torvalds, 4 files changed, -11/+222)
Pull hwspinlock updates from Bjorn Andersson: "This introduces a driver for the Spreadtrum hardware spinlock device and cleans up the Kconfig file" * tag 'hwlock-v4.13' of git://github.com/andersson/remoteproc: DT: hwspinlock: Add binding documentation for Spreadtrum hwspinlock hwspinlock: sprd: Add hardware spinlock driver Make HWSPINLOCK a menuconfig to ease disabling
2017-07-06  Merge tag 'rproc-v4.13' of git://github.com/andersson/remoteproc  (Linus Torvalds, 6 files changed, -104/+787)
Pull remoteproc updates from Bjorn Andersson: "This introduces the Keystone 2 DSP driver and refactors the start/stop code in recovery. The Davinci DSP driver gets a few fixes and the Kconfig gets cleaned up" * tag 'rproc-v4.13' of git://github.com/andersson/remoteproc: remoteproc/keystone: Fix circular dependencies for ARM configs remoteproc: Drop redundant REMOTEPROC dependency from driver Kconfigs remoteproc: Drop VIRTUALIZATION dependency from REMOTEPROC remoteproc/keystone: Ensure the DSPs are in reset in probe remoteproc/keystone: Add a remoteproc driver for Keystone 2 DSPs dt-bindings: remoteproc: Add Keystone DSP remoteproc binding remoteproc/davinci: fix unbalanced reset between start and stop ops remoteproc/davinci: simplify the reset function remoteproc/davinci: Update Kconfig to depend on DMA_CMA remoteproc: fix spelling mistake: "Resouce" -> "Resource" remoteproc: Modify recovery path to use rproc_{start,stop}() remoteproc: Introduce rproc_{start,stop}() functions
2017-07-06  Merge tag 'rpmsg-v4.13' of git://github.com/andersson/remoteproc  (Linus Torvalds, 8 files changed, -17/+1352)
Pull rpmsg updates from Bjorn Andersson: "This introduces the Qualcomm GLINK protocol driver and DeviceTree-based modalias support, as well as a number of smaller fixes" * tag 'rpmsg-v4.13' of git://github.com/andersson/remoteproc: rpmsg: Make modalias work for DeviceTree based devices rpmsg: Drop VIRTUALIZATION dependency from RPMSG_VIRTIO rpmsg: Don't overwrite release op of rpdev rpmsg: virtio_rpmsg_bus: cleanup multiple assignment to ops rpmsg: virtio_rpmsg_bus: fix nameservice address rpmsg: cleanup incorrect function in dev_err message rpmsg: virtio_rpmsg_bus: fix announce for devices without endpoint rpmsg: Introduce Qualcomm RPM glink driver soc: qcom: Add device tree binding for GLINK RPM rpmsg: Release rpmsg devices in backends
2017-07-06  Merge tag 'platform-drivers-x86-v4.13-1' of git://git.infradead.org/linux-platform-drivers-x86  (Linus Torvalds, 29 files changed, -671/+1443)
git://git.infradead.org/linux-platform-drivers-x86 Pull x86 platform driver updates from Darren Hart: "Introduce new bus architecture for WMI and expose BMOF data through sysfs. Correct several assumptions about WMI instance number from 1 to 0. Further fujitsu-laptop cleanups, continuing to prepare for separation into two modules. Add support for several new ideapad laptops and silead-based tablets. Various minor fixes and const cleanups. Detail summary: sony-laptop: - constify attribute_group and input index array fujitsu-laptop: - rework debugging - do not evaluate ACPI _INI methods - do not update ACPI device power status - sanitize hotkey input device identification - use strcpy to set ACPI device names and classes - remove redundant safety checks - use device-specific data in remaining module code - use device-specific data in LED-related code - explicitly pass ACPI device to call_fext_func() - track the last instantiated FUJ02E3 ACPI device - allocate fujitsu_laptop in acpi_fujitsu_laptop_add() - use device-specific data in backlight code - allocate fujitsu_bl in acpi_fujitsu_bl_add() - distinguish current uses of device-specific data msi-laptop: - constify msipf*_attribute_group eeepc-laptop: - constify platform_attribute_group toshiba_haps: - constify haps_attr_group dell-wmi-led: - Adjust instance of wmi_evaluate_method calls to 0 alienware-wmi: - Adjust instance of wmi_evaluate_method calls to 0 intel_menlow: - Add const to thermal_cooling_device_ops structure acerhdf: - Add const to thermal_cooling_device_ops structure dell-laptop: - Fix bogus keyboard backlight sysfs interface acer-wmi: - Using zero as first WMI instance number - Detect RF Button capability ideapad-laptop: - Add Y720-15IKBN to no_hw_rfkill - Add Y520-15IKBN to no_hw_rfkill - constify rfkill_ops structure - Squelch ACPI event 1 - hide unused 'touchpad_store' - Switch touchpad attribute to be RO - Add sysfs interface for touchpad state silead_dmi: - Add touchscreen info for PoV mobii wintab p800w - Add touchscreen info for Pipo W2S tablet - Add touchscreen info for GP-electronic T701 dell-rbtn: - constify rfkill_ops structures - Improve explanation about DELLABC6 samsung-laptop: - constify rfkill_ops structures panasonic-laptop: - remove unused code samsung-laptop: - Initialize loca variable dell-wmi: - Convert to the WMI bus infrastructure - Add a better description for "stealth mode" - Add a comment explaining the 0xb2 magic number wmi-bmof: - New driver to expose embedded Binary WMI MOF metadata wmi*: - Fix printing info about WDG structure - Add recent copyright statements - Require query for data blocks, rename writable to setable - Add an interface for subdrivers to access sibling devices - Bind the platform device, not the ACPI node - Add a new interface to read block data - Incorporate acpi_install_notify_handler - Instantiate all devices before adding them - Probe data objects for read and write capabilities - Split devices into types and add basic sysfs attributes - Fix error handling when creating devices - Turn WMI into a bus driver - Track wmi devices per ACPI device - Clean up acpi_wmi_add - Pass the acpi_device through to parse_wdg - Drop "Mapper (un)loaded" messages intel_cht_int33fe: - Set supplied-from property on max17047 dev intel_pmc_ipc: - Mark ipc_data_readb() as __maybe_unused topstar-laptop: - Add new device id peaq-wmi: - Add new peaq-wmi driver thinkpad_acpi: - Add a comment about 0 in module_param_call() - Join string literals back toshiba_acpi: - use memdup_user_nul" * tag 
'platform-drivers-x86-v4.13-1' of git://git.infradead.org/linux-platform-drivers-x86: (67 commits) platform/x86: sony-laptop: constify attribute_group and input index array platform/x86: fujitsu-laptop: rework debugging platform/x86: fujitsu-laptop: do not evaluate ACPI _INI methods platform/x86: fujitsu-laptop: do not update ACPI device power status platform/x86: fujitsu-laptop: sanitize hotkey input device identification platform/x86: fujitsu-laptop: use strcpy to set ACPI device names and classes platform/x86: fujitsu-laptop: remove redundant safety checks platform/x86: msi-laptop: constify msipf*_attribute_group platform/x86: eeepc-laptop: constify platform_attribute_group platform/x86: toshiba_haps: constify haps_attr_group platform/x86: dell-wmi-led: Adjust instance of wmi_evaluate_method calls to 0 platform/x86: alienware-wmi: Adjust instance of wmi_evaluate_method calls to 0 platform/x86: intel_menlow: Add const to thermal_cooling_device_ops structure platform/x86: acerhdf: Add const to thermal_cooling_device_ops structure platform/x86: dell-laptop: Fix bogus keyboard backlight sysfs interface platform/x86: acer-wmi: Using zero as first WMI instance number platform/x86: ideapad-laptop: Add Y720-15IKBN to no_hw_rfkill platform/x86: ideapad-laptop: Add Y520-15IKBN to no_hw_rfkill platform/x86: silead_dmi: Add touchscreen info for PoV mobii wintab p800w platform/x86: silead_dmi: Add touchscreen info for Pipo W2S tablet ...
2017-07-06  genirq: Allow to pass the IRQF_TIMER flag with percpu irq request  (Daniel Lezcano, 2 files changed, -6/+20)
The irq timings infrastructure tracks when interrupts occur in order to statistically predict the next interrupt event. There is no point in tracking timer interrupts and trying to predict them because the next expiration time is already known. This can be avoided via the IRQF_TIMER flag which is passed by timer drivers in request_irq(). It marks the interrupt as timer based, which allows the timings code to ignore these interrupts. Per CPU interrupts which are requested via request_percpu_irq() have no flag argument, so marking per cpu timer interrupts is not possible and they get tracked pointlessly. Add __request_percpu_irq() as a variant of request_percpu_irq() with a flags argument and make request_percpu_irq() an inline wrapper passing flags = 0. The flag parameter is restricted to IRQF_TIMER as all other IRQF_ flags make no sense for per cpu interrupts. The next step is to convert all existing users of request_percpu_irq() and then remove the wrapper and the underscores. [ tglx: Massaged changelog ] Signed-off-by: Daniel Lezcano <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Cc: [email protected] Link: http://lkml.kernel.org/r/[email protected]
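A sketch of the wrapper arrangement described above (declarations simplified):

    extern int __request_percpu_irq(unsigned int irq, irq_handler_t handler,
                                    unsigned long flags, const char *devname,
                                    void __percpu *percpu_dev_id);

    /* Existing callers keep the old signature; flags default to 0. */
    static inline int request_percpu_irq(unsigned int irq, irq_handler_t handler,
                                         const char *devname,
                                         void __percpu *percpu_dev_id)
    {
            return __request_percpu_irq(irq, handler, 0,
                                        devname, percpu_dev_id);
    }

A per-CPU timer driver can then call __request_percpu_irq(..., IRQF_TIMER, ...) so its interrupts are excluded from the timings tracking.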
2017-07-06  Merge branch 'next' into for-linus  (Dmitry Torokhov, 34 files changed, -71/+1349)
Prepare input updates for 4.13 merge window.
2017-07-06  ext4: fix spelling mistake: "prellocated" -> "preallocated"  (Colin Ian King, 1 file changed, -1/+1)
Trivial fix to spelling mistake in mb_debug debug message Signed-off-by: Colin Ian King <[email protected]> Signed-off-by: Theodore Ts'o <[email protected]>
2017-07-06  Merge tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi  (Linus Torvalds, 164 files changed, -4384/+11447)
Pull SCSI updates from James Bottomley: "This is mostly updates of the usual suspects: lpfc, qla2xxx, bnx2fc, qedf, hpsa, hisi_sas, smartpqi, cxlflash, aacraid, csiostor along with a host of minor and miscellaneous changes" * tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi: (276 commits) qla2xxx: Fix NVMe entry_type for iocb packet on BE system scsi: qla2xxx: avoid unused-function warning scsi: snic: fix a couple of spelling mistakes/typos scsi: qla2xxx: fix a bunch of typos and spelling mistakes scsi: lpfc: don't double count abort errors scsi: lpfc: spin_lock_irq() is not nestable scsi: hisi_sas: optimise DMA slot memory scsi: ibmvfc: constify dev_pm_ops structures. scsi: ibmvscsi: constify dev_pm_ops structures. scsi: cxlflash: Update debug prints in reset handlers scsi: cxlflash: Update send_tmf() parameters scsi: cxlflash: Avoid double free of character device scsi: Add STARGET_CREATED_REMOVE state to scsi_target_state scsi: ses: do not add a device to an enclosure if enclosure_add_links() fails. scsi: ufs: flush eh_work when eh_work scheduled. scsi: qla2xxx: Protect access to qpair members with qpair->qp_lock scsi: sun_esp: fix device reference leaks scsi: fnic: changing queue command to return result DID_IMM_RETRY when rport is init scsi: fnic: correct speed display and add support for 25,40 and 100G scsi: fnic: added timestamp reporting in fnic debug stats ...
2017-07-06  Merge tag 'for-4.13/dm-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm  (Linus Torvalds, 21 files changed, -78/+4958)
Pull device mapper updates from Mike Snitzer: - Add the ability to use select or poll /dev/mapper/control to wait for events from multiple DM devices. - Convert DM's printk macros over to using pr_<level> macros. - Add a big-endian variant of plain64 IV to dm-crypt. - Add support for zoned (aka SMR) devices to DM core. DM kcopyd was also improved to provide a sequential write feature needed by zoned devices. - Introduce DM zoned target that provides support for host-managed zoned devices, the result dm-zoned device acts as a drive-managed interface to the underlying host-managed device. - A DM raid fix to avoid using BUG() for error handling. * tag 'for-4.13/dm-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm: dm zoned: fix overflow when converting zone ID to sectors dm raid: stop using BUG() in __rdev_sectors() dm zoned: drive-managed zoned block device target dm kcopyd: add sequential write feature dm linear: add support for zoned block devices dm flakey: add support for zoned block devices dm: introduce dm_remap_zone_report() dm: fix REQ_OP_ZONE_REPORT bio handling dm: fix REQ_OP_ZONE_RESET bio handling dm table: add zoned block devices validation dm: convert DM printk macros to pr_<level> macros dm crypt: add big-endian variant of plain64 IV dm bio prison: use rb_entry() rather than container_of() dm ioctl: report event number in DM_LIST_DEVICES dm ioctl: add a new DM_DEV_ARM_POLL ioctl dm: add basic support for using the select or poll function
2017-07-06  Merge tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/dledford/rdma  (Linus Torvalds, 12 files changed, -44/+58)
Pull rdma update from Doug Ledford: "This includes two bugs against the newly added opa vnic that were found by turning on the debug kernel options: - sleeping while holding a lock, so a one line fix where they switched it from GFP_KERNEL allocation to a GFP_ATOMIC allocation - a case where they had an isolated caller of their code that could call them in an atomic context so they had to switch their use of a mutex to a spinlock to be safe, so this was considerably more lines of diff because all uses of that lock had to be switched In addition, the bug that was discussed with you already about an out of bounds array access in ib_uverbs_modify_qp and ib_uverbs_create_ah and is only seven lines of diff. And finally, one fix to an earlier fix in the -rc cycle that broke hfi1 and qib in regards to IPoIB (this one is, unfortunately, larger than I would like for a -rc7 submission, but fixing the problem required that we not treat all devices as though they had allocated a netdev universally because it isn't true, and it took 70 lines of diff to resolve the issue, but the final patch has been vetted by Intel and Mellanox and they've both given their approval to the fix). Summary: - Two fixes for OPA found by debug kernel - Fix for user supplied input causing kernel problems - Fix for the IPoIB fixes submitted around -rc4" [ Doug sent this having not noticed the 4.12 release, so I guess I'll be getting another rdma pull request with the actual merge window updates and not just fixes. Oh well - it would have been nice if this small update had been the merge window one. - Linus ] * tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/dledford/rdma: IB/core, opa_vnic, hfi1, mlx5: Properly free rdma_netdev RDMA/uverbs: Check port number supplied by user verbs cmds IB/opa_vnic: Use spinlock instead of mutex for stats_lock IB/opa_vnic: Use GFP_ATOMIC while sending trap
2017-07-06  Merge tag 'pinctrl-v4.13-1' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-pinctrl  (Linus Torvalds, 123 files changed, -8629/+14159)
Pull pin control updates from Linus Walleij: "This is the big bulk of pin control changes for the v4.13 series: Core: - The documentation is moved over to RST. - We now have agreed bindings for enabling input and output buffers without actually enabling input and/or output on a pin. We are chiseling out some details of pin control electronics. New drivers: - ZTE ZX - Renesas RZA1 - MIPS Ingenic JZ47xx: also switch over existing drivers in the tree to use this pin controller and consolidate earlier spread out code. - Microschip MCP23S08: this driver is migrated from the GPIO subsystem and totally rewritten to use proper pin control. All users are switched over. New subdrivers: - Renesas R8A7743 and R8A7745. - Allwinner Sunxi A83T R_PIO. - Marvell MVEBU Armada CP110 and AP806. - Intel Cannon Lake PCH. - Qualcomm IPQ8074. Notable improvements: - IRQ support on the Marvell MVEBU Armada 37xx. - Meson driver supports HDMI CEC, AO, I2S, SPDIF and PWM. - Rockchip driver now supports iomux-route switching for RK3228, RK3328 and RK3399. - Rockchip A10 and A20 are merged into a single driver. - STM32 has improved GPIO support. - Samsung Exynos drivers are split per ARMv7 and ARMv8. - Marvell MVEBU is converted to use regmap for register access. Maintenance: - Several Renesas SH-PFC refactorings and updates. - Serious code size cut for Mediatek MT7623. - Misc janitorial and MAINTAINERS fixes" * tag 'pinctrl-v4.13-1' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-pinctrl: (137 commits) pinctrl: samsung: Remove bogus irq_[un]mask from resource management pinctrl: rza1: make structures rza1_gpiochip_template and rza1_pinmux_ops static pinctrl: rza1: Remove unneeded wrong check for wrong variable pinctrl: qcom: Add ipq8074 pinctrl driver pinctrl: freescale: imx7d: make of_device_ids const. pinctrl: DT: extend the pinmux property to support integers array pinctrl: generic: Add output-enable property pinctrl: armada-37xx: Fix number of pin in sdio_sb pinctrl: armada-37xx: Fix uart2 group selection register mask pinctrl: bcm2835: Avoid warning from __irq_do_set_handler pinctrl: sh-pfc: r8a7795: Add PWM support MAINTAINERS: Add Qualcomm pinctrl drivers section arm: dts: dt-bindings: Add Renesas RZ/A1 pinctrl header dt-bindings: pinctrl: Add RZ/A1 bindings doc pinctrl: Renesas RZ/A1 pin and gpio controller pinctrl: sh-pfc: r8a7792: Add SCIF1 and SCIF2 pin groups pinctrl.txt: move it to the driver-api book pinctrl: ingenic: checking for NULL instead of IS_ERR() pinctrl: uniphier: fix WARN_ON() of pingroups dump on LD20 pinctrl: uniphier: fix WARN_ON() of pingroups dump on LD11 ...