2015-04-15mm: use READ_ONCE() for non-scalar typesJason Low1-2/+2
Commit 38c5ce936a08 ("mm/gup: Replace ACCESS_ONCE with READ_ONCE") converted the ACCESS_ONCE usage in gup_pmd_range() to READ_ONCE, since ACCESS_ONCE doesn't work reliably on non-scalar types. This patch also converts the other ACCESS_ONCE usages, in gup_pte_range() and __get_user_pages_fast() in mm/gup.c. Signed-off-by: Jason Low <[email protected]> Acked-by: Michal Hocko <[email protected]> Acked-by: Davidlohr Bueso <[email protected]> Acked-by: Rik van Riel <[email protected]> Reviewed-by: Christian Borntraeger <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
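For context, a minimal sketch of the conversion (this mirrors the gup_pte_range() usage; the comment summarizes why READ_ONCE() is required for aggregates):

  /* ACCESS_ONCE() casts through a volatile scalar, which is unreliable
   * for aggregate types such as pte_t (e.g. 64-bit PTEs on 32-bit PAE).
   * READ_ONCE() copies the object in a size-aware way and is safe for
   * non-scalar types. */
  pte_t pte = READ_ONCE(*ptep);   /* was: pte_t pte = ACCESS_ONCE(*ptep); */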
2015-04-15mm/mremap.c: clean up goto just return ERR_PTRDerek1-17/+8
As suggested by Kirill, the "goto"s in vma_to_resize() aren't necessary; just change them to explicit returns. Signed-off-by: Derek Che <[email protected]> Suggested-by: "Kirill A. Shutemov" <[email protected]> Acked-by: David Rientjes <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
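A hedged before/after sketch of the cleanup (checks and label names follow the old vma_to_resize() in mm/mremap.c; treat the exact labels as assumptions):

  /* before: error paths funnel through labels */
          if (!vma || vma->vm_start > addr)
                  goto Efault;
          if (is_vm_hugetlb_page(vma))
                  goto Einval;
          ...
  Efault: return ERR_PTR(-EFAULT);
  Einval: return ERR_PTR(-EINVAL);

  /* after: each check returns its error directly */
          if (!vma || vma->vm_start > addr)
                  return ERR_PTR(-EFAULT);
          if (is_vm_hugetlb_page(vma))
                  return ERR_PTR(-EINVAL);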
2015-04-15mremap should return -ENOMEM when __vm_enough_memory failDerek1-1/+1
Recently I straced bash's behavior in this dd-zero-pipe-to-read test, as part of testing under vm.overcommit_memory=2 (OVERCOMMIT_NEVER mode):

  # dd if=/dev/zero | read x

The bash subshell calls mremap to reallocate more and more memory until it finally fails with -ENOMEM (as I expected), or is killed by the system OOM killer (which should not happen under OVERCOMMIT_NEVER mode). But the mremap system call actually failed with -EFAULT, which was a surprise to me; I think it's supposed to be -ENOMEM. I then wrote this piece of C test code, which confirmed it: https://gist.github.com/crquan/326bde37e1ddda8effe5

  $ ./remap
  allocated one page @0x7f686bf71000, (PAGE_SIZE: 4096)
  grabbed 7680512000 bytes of memory (1875125 pages) @ 00007f6690993000.
  mremap failed Bad address (14).

The -EFAULT comes from the branch where security_vm_enough_memory_mm fails; underneath it calls __vm_enough_memory, which returns only 0 for success or -ENOMEM. So why does vma_to_resize need to return -EFAULT in this case? This looks like a mistake. Some more digging into git history:

1) Before commit 119f657c7 ("RLIMIT_AS checking fix") in May 1 2005 (pre 2.6.12 days) it was returning -ENOMEM for this failure;

2) but commit 119f657c7 changed it accidentally, to whatever was preserved in the local ret, which happened to be -EFAULT from a previous assignment;

3) then the code refactoring in commit 54f5de709 ("untangling do_mremap(), part 1") made it explicitly return -EFAULT, which is wrong.

Signed-off-by: Derek Che <[email protected]> Acked-by: David Rientjes <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
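A minimal sketch of such a reproducer (the linked gist is the original; the sizes and doubling strategy here are illustrative, not the gist's exact code):

  #define _GNU_SOURCE
  #include <errno.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/mman.h>

  int main(void)
  {
          size_t size = 4096;
          void *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

          if (p == MAP_FAILED)
                  return 1;

          for (;;) {
                  /* keep doubling until the kernel refuses */
                  void *q = mremap(p, size, size * 2, MREMAP_MAYMOVE);
                  if (q == MAP_FAILED) {
                          /* expect ENOMEM (12), not EFAULT (14) */
                          printf("mremap failed %s (%d)\n",
                                 strerror(errno), errno);
                          return 0;
                  }
                  p = q;
                  size *= 2;
          }
  }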
2015-04-15mm/vmalloc: get rid of dirty bitmap inside vmap_block structureRoman Pen1-18/+17
In the original implementation of vm_map_ram, made by Nick Piggin, there were two bitmaps: alloc_map and dirty_map. Neither was used as it was supposed to be: for finding a suitable free hole for the next allocation in a block. vm_map_ram allocates space sequentially in a block and, on free, marks pages as dirty, so freed space can't be reused anymore. Actually it would be very interesting to know the real intent of those bitmaps; maybe the implementation was incomplete, etc. But long ago Zhang Yanfei removed alloc_map in these two commits: 3fcd76e8028e0be37b02a2002b4f56755daeda06 ("mm/vmalloc.c: remove dead code in vb_alloc") and b8e748b6c32999f221ea4786557b8e7e6c4e4e7a ("mm/vmalloc.c: remove alloc_map from vmap_block"). In this patch I replace dirty_map with two range variables: dirty min and max. These variables store the minimum and maximum position of dirty space in a block, since we only need to know the dirty range, not the exact position of dirty pages. Why was this done? Several reasons: at first glance it seems that the vm_map_ram allocator cares about fragmentation and thus uses a bitmap for finding free holes, but that is not true. To avoid complexity it seems better to use something simple, like min or max range values. Secondly, the code becomes simpler, without iteration over a bitmap - just comparing values in min() and max() macros. Thirdly, the bitmap occupies up to 1024 bits (4MB is the maximum size of a block); here I replace the whole bitmap with two longs. Finally, vm_unmap_aliases should be slightly faster and the whole vmap_block structure occupies less memory. Signed-off-by: Roman Pen <[email protected]> Cc: Zhang Yanfei <[email protected]> Cc: Eric Dumazet <[email protected]> Acked-by: Joonsoo Kim <[email protected]> Cc: David Rientjes <[email protected]> Cc: WANG Chao <[email protected]> Cc: Fabian Frederick <[email protected]> Cc: Christoph Lameter <[email protected]> Cc: Gioh Kim <[email protected]> Cc: Rob Jones <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
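A hedged sketch of the data-structure change (the field layout follows the changelog; page_off and nr_pages are hypothetical names for the freed range within the block):

  struct vmap_block {
          spinlock_t lock;
          struct vmap_area *va;
          unsigned long free, dirty;
          unsigned long dirty_min, dirty_max;  /* dirty range, in pages */
          struct list_head free_list;
          struct rcu_head rcu_head;
          struct list_head purge;
  };

  /* Marking freed space dirty becomes two comparisons instead of a
   * bitmap_set() call: */
  vb->dirty_min = min(vb->dirty_min, page_off);
  vb->dirty_max = max(vb->dirty_max, page_off + nr_pages);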
2015-04-15mm/vmalloc: occupy newly allocated vmap block just after allocationRoman Pen1-21/+37
The previous implementation allocates a new vmap block and then repeats the search for a free block from the very beginning, iterating over the CPU free list. Why can occupying the new block right away be better? 1. Allocation can happen on one CPU, but the search can be done on another CPU. In the worst case we preallocate as many vmap blocks as there are CPUs on the system. 2. In the previous patch I added newly allocated blocks to the tail of the free list, to avoid early exhaustion of virtual space and to give a chance to occupy blocks which were allocated long ago. Thus, to find a newly allocated block, the whole search sequence has to be repeated, which is not efficient. In this patch the newly allocated block is occupied right away and the virtual-space address is returned to the caller, so there is no need to repeat the search sequence: the allocation job is done. Signed-off-by: Roman Pen <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Eric Dumazet <[email protected]> Acked-by: Joonsoo Kim <[email protected]> Cc: David Rientjes <[email protected]> Cc: WANG Chao <[email protected]> Cc: Fabian Frederick <[email protected]> Cc: Christoph Lameter <[email protected]> Cc: Gioh Kim <[email protected]> Cc: Rob Jones <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2015-04-15mm/vmalloc: fix possible exhaustion of vmalloc space caused by vm_map_ram allocatorRoman Pen1-1/+1
Recently I came across high fragmentation of the vm_map_ram allocator: a vmap_block has free space, but still new blocks continue to appear. Further investigation showed that certain mapping/unmapping sequences can exhaust vmalloc space. On small 32-bit systems that's not a big problem, because purging will be called soon on the first allocation failure (alloc_vmap_area), but on 64-bit machines, e.g. x86_64, which has 45 bits of vmalloc space, it can be a disaster.

1) I came up with a simple allocation sequence which exhausts virtual space very quickly:

  while (iters) {
          /* Map/unmap big chunk */
          vaddr = vm_map_ram(pages, 16, -1, PAGE_KERNEL);
          vm_unmap_ram(vaddr, 16);

          /* Map/unmap small chunks.
           *
           * -1 for hole, which should be left at the end of each block
           * to keep it partially used, with some free space available */
          for (i = 0; i < (VMAP_BBMAP_BITS - 16) / 8 - 1; i++) {
                  vaddr = vm_map_ram(pages, 8, -1, PAGE_KERNEL);
                  vm_unmap_ram(vaddr, 8);
          }
  }

The idea behind it is simple:

1. We have to map a big chunk, e.g. 16 pages.
2. Then we have to occupy the remaining space with smaller chunks, i.e. 8 pages. At the end a small hole should remain, to keep the block on the free list but not let a big chunk occupy the remaining space.
3. Goto 1 - the allocation request for 16 pages can't be completed (only 8 slots are left free in the block after step 2), a new block will be allocated, and all further requests will land in the newly allocated block.

To have some measurement numbers for all further tests I set up ftrace and enabled 4 basic calls in a function profile:

  echo vm_map_ram      >  /sys/kernel/debug/tracing/set_ftrace_filter
  echo alloc_vmap_area >> /sys/kernel/debug/tracing/set_ftrace_filter
  echo vm_unmap_ram    >> /sys/kernel/debug/tracing/set_ftrace_filter
  echo free_vmap_block >> /sys/kernel/debug/tracing/set_ftrace_filter

So for this scenario I got these results:

BEFORE (all new blocks are put to the head of a free list)

  # cat /sys/kernel/debug/tracing/trace_stat/function0
  Function         Hit     Time         Avg       s^2
  --------         ---     ----         ---       ---
  vm_map_ram       126000  30683.30 us  0.243 us  30819.36 us
  vm_unmap_ram     126000  22003.24 us  0.174 us  340.886 us
  alloc_vmap_area  1000    4132.065 us  4.132 us  0.903 us

AFTER (all new blocks are put to the tail of a free list)

  # cat /sys/kernel/debug/tracing/trace_stat/function0
  Function         Hit     Time         Avg       s^2
  --------         ---     ----         ---       ---
  vm_map_ram       126000  28713.13 us  0.227 us  24944.70 us
  vm_unmap_ram     126000  20403.96 us  0.161 us  1429.872 us
  alloc_vmap_area  993     3916.795 us  3.944 us  29.370 us
  free_vmap_block  992     654.157 us   0.659 us  1.273 us

SUMMARY: The most interesting numbers in those tables are the counts of block allocations and deallocations, the alloc_vmap_area and free_vmap_block calls, which show that before the change blocks were not freed, and virtual space and physical memory (vmap_block structure allocations, etc.) were consumed. The average time spent in vm_map_ram/vm_unmap_ram became slightly better. That can be explained by a reasonable number of blocks in the free list, which we have to iterate to find a suitable free block.

2) Another scenario is random allocation:

  while (iters) {
          /* Randomly take number from a range [1..32/64] */
          nr = rand(1, VMAP_MAX_ALLOC);
          vaddr = vm_map_ram(pages, nr, -1, PAGE_KERNEL);
          vm_unmap_ram(vaddr, nr);
  }

I chose the Mersenne Twister PRNG to generate persistent random state, to guarantee that both runs see the same random sequence. For each vm_map_ram call a random number from [1..32/64] was taken as the number of pages to map. I did 10'000 vm_map_ram calls and got these two tables:

BEFORE (all new blocks are put to the head of a free list)

  # cat /sys/kernel/debug/tracing/trace_stat/function0
  Function         Hit    Time         Avg       s^2
  --------         ---    ----         ---       ---
  vm_map_ram       10000  10170.01 us  1.017 us  993.609 us
  vm_unmap_ram     10000  5321.823 us  0.532 us  59.789 us
  alloc_vmap_area  420    2150.239 us  5.119 us  3.307 us
  free_vmap_block  37     159.587 us   4.313 us  134.344 us

AFTER (all new blocks are put to the tail of a free list)

  # cat /sys/kernel/debug/tracing/trace_stat/function0
  Function         Hit    Time         Avg       s^2
  --------         ---    ----         ---       ---
  vm_map_ram       10000  7745.637 us  0.774 us  395.229 us
  vm_unmap_ram     10000  5460.573 us  0.546 us  67.187 us
  alloc_vmap_area  414    2201.650 us  5.317 us  5.591 us
  free_vmap_block  412    574.421 us   1.394 us  15.138 us

SUMMARY: The 'BEFORE' table shows that 420 blocks were allocated and only 37 were freed. The remaining 383 blocks are still on the free list, consuming virtual space and physical memory. The 'AFTER' table shows that 414 blocks were allocated and 412 were really freed; 2 blocks remained on the free list. So fragmentation was dramatically reduced. Why? Because when we put a newly allocated block at the head, all further requests occupy the new block regardless of the space remaining in the other blocks. In this scenario all requests come randomly; eventually the remaining free space is less than the requested size, the free list is iterated, and it is possible that nothing is found there - finally a new block is created. So exhaustion in the random scenario happens at the maximum possible allocation size: 32 pages for a 32-bit system and 64 pages for a 64-bit system. Also the average cost of vm_map_ram was reduced from 1.017 us to 0.774 us. Again this can be explained by iterating through a smaller list of free blocks.

3) The next simple scenario is sequential allocation, where the allocation order is increased for each block. This scenario forces the allocator to reach the maximum number of partially free blocks in the free list:

  while (iters) {
          /* Populate free list with blocks with remaining space */
          for (order = 0; order <= ilog2(VMAP_MAX_ALLOC); order++) {
                  nr = VMAP_BBMAP_BITS / (1 << order);

                  /* Leave a hole */
                  nr -= 1;

                  for (i = 0; i < nr; i++) {
                          vaddr = vm_map_ram(pages, (1 << order), -1,
                                             PAGE_KERNEL);
                          vm_unmap_ram(vaddr, (1 << order));
                  }
          }

          /* Completely occupy blocks from a free list */
          for (order = 0; order <= ilog2(VMAP_MAX_ALLOC); order++) {
                  vaddr = vm_map_ram(pages, (1 << order), -1, PAGE_KERNEL);
                  vm_unmap_ram(vaddr, (1 << order));
          }
  }

The results I got:

BEFORE (all new blocks are put to the head of a free list)

  # cat /sys/kernel/debug/tracing/trace_stat/function0
  Function         Hit      Time         Avg       s^2
  --------         ---      ----         ---       ---
  vm_map_ram       2032000  399545.2 us  0.196 us  467123.7 us
  vm_unmap_ram     2032000  363225.7 us  0.178 us  111405.9 us
  alloc_vmap_area  7001     30627.76 us  4.374 us  495.755 us
  free_vmap_block  6993     7011.685 us  1.002 us  159.090 us

AFTER (all new blocks are put to the tail of a free list)

  # cat /sys/kernel/debug/tracing/trace_stat/function0
  Function         Hit      Time         Avg       s^2
  --------         ---      ----         ---       ---
  vm_map_ram       2032000  394259.7 us  0.194 us  589395.9 us
  vm_unmap_ram     2032000  292500.7 us  0.143 us  94181.08 us
  alloc_vmap_area  7000     31103.11 us  4.443 us  703.225 us
  free_vmap_block  7000     6750.844 us  0.964 us  119.112 us

SUMMARY: No surprises here, almost all numbers are the same.

While fixing this fragmentation problem I also made some improvements in the allocation logic for a new vmap block: occupy the block immediately and get rid of the extra search through the free list. Also I replaced the dirty bitmap with min/max dirty range values, to make the logic simpler and slightly faster, since comparing two longs costs less than looping through a bitmap.

This patchset raised several questions:

Q: I think the problem you describe is already known - I wrote a comment about it: "it could consume lots of address space through fragmentation". Could you tell me about your situation and the reason it should be avoided? (Gioh Kim)

A: Indeed, there was a commit 364376383 which adds an explicit comment about fragmentation. But the fragmentation described in that comment is caused by mixing long-lived and short-lived objects, when a whole block is pinned in memory because some page slots are still in use. Here, however, I am talking about blocks which are free, which nobody uses, and which the allocator keeps alive forever, continuously allocating new blocks.

Q: I think that if you put a newly allocated block at the tail of a free list, the example below would result in enormous performance degradation. (Joonsoo Kim)

  new block: 1MB (256 pages)

  while (iters--) {
          vm_map_ram(3 or something else not dividable for 256) * 85
          vm_unmap_ram(3) * 85
  }

On every iteration it needs a newly allocated block, and since that block is put at the tail of the free list, finding it consumes a large amount of time.

A: The second patch in the current patchset gets rid of the extra search in the free list, so a new block is occupied immediately. Also, the scenario above is impossible, because vm_map_ram allocates virtual ranges in orders, i.e. 2^n: passing 3 to vm_map_ram allocates 4 slots in a block, and 256 slots (the capacity of a block) is of course divisible by 4, so the block will be completely occupied. But there is a worst case which we can reach: each free block has a hole equal to its order size. The maximum allocation size is 64 pages for a 64-bit system (if you try to map more, the original alloc_vmap_area is called), so the maximum order is 6. That means that the worst case, before the allocator decides to allocate a new block, is to iterate 7 blocks:

  HEAD
  1st block - has 1  page  slot  free (order 0)
  2nd block - has 2  page  slots free (order 1)
  3rd block - has 4  page  slots free (order 2)
  4th block - has 8  page  slots free (order 3)
  5th block - has 16 page  slots free (order 4)
  6th block - has 32 page  slots free (order 5)
  7th block - has 64 page  slots free (order 6)
  TAIL

So the worst scenario on a 64-bit system is that each CPU queue has 7 blocks in its free list. This can happen only if you allocate blocks with increasing order (as I did in the function quoted in the comment of the first patch). It is a weird and rare case, but still possible. Afterwards you have 7 blocks in the list; all further requests are either placed in a newly allocated block or resolved from free slots found in the free list. It does not look dramatically awful.

This patch (of 3):

If a suitable block can't be found, a new block is allocated and put at the head of the free list, so on the next iteration this new block will be found first. That's bad, because the old blocks on the free list never get a chance to be fully used, and fragmentation grows. Let's consider this simple example:

 #1 We have one block on the free list which is partially used, and where only one page is free:

    HEAD |xxxxxxxxx-| TAIL
                   ^
                   free space for 1 page, order 0

 #2 A new allocation request of order 1 (2 pages) comes in; a new block is allocated, since we do not have free space to complete this request. The new block is put at the head of the free list:

    HEAD |----------|xxxxxxxxx-| TAIL

 #3 Two pages were occupied in the newly allocated block:

    HEAD |xx--------|xxxxxxxxx-| TAIL
          ^
          two pages mapped here

 #4 A new allocation request of order 0 (1 page) comes in. The block which was created in step #2 is located at the beginning of the free list, so it is found first:

    HEAD |xxX-------|xxxxxxxxx-| TAIL
            ^                 ^
            page mapped here, but better to use this hole

It is obvious that it is better to complete the request of step #4 using the old block, where free space is left, because otherwise fragmentation is greatly increased.

But fragmentation is not the only problem. The worst thing is that I can easily create a scenario where the whole vmalloc space is exhausted by blocks which are not used, but are already dirty and have several free pages.

Let's consider this function, whose execution should be pinned to one CPU:

  static void exhaust_virtual_space(struct page *pages[16], int iters)
  {
          /* Firstly we have to map a big chunk, e.g. 16 pages.
           * Then we have to occupy the remaining space with smaller
           * chunks, i.e. 8 pages. At the end a small hole should remain.
           * So at the end of our allocation sequence the block looks like
           * this:
           *                XX  big chunk
           * |XXxxxxxxx-|    x  small chunk
           *                 -  hole, which is enough for a small chunk,
           *                    but is not enough for a big chunk
           */
          while (iters--) {
                  int i;
                  void *vaddr;

                  /* Map/unmap big chunk */
                  vaddr = vm_map_ram(pages, 16, -1, PAGE_KERNEL);
                  vm_unmap_ram(vaddr, 16);

                  /* Map/unmap small chunks.
                   *
                   * -1 for hole, which should be left at the end of each
                   * block to keep it partially used, with some free space
                   * available */
                  for (i = 0; i < (VMAP_BBMAP_BITS - 16) / 8 - 1; i++) {
                          vaddr = vm_map_ram(pages, 8, -1, PAGE_KERNEL);
                          vm_unmap_ram(vaddr, 8);
                  }
          }
  }

On every iteration a new block (1MB of vm area in my case) is allocated and then occupied, without any attempt to resolve the small allocation request using previously allocated blocks on the free list.

In the case of random allocation (sizes randomly taken from the range [1..64] in the 64-bit case or [1..32] in the 32-bit case) the situation is the same: new blocks continue to appear whenever the maximum possible allocation size (32 or 64) is passed to the allocator, because none of the remaining blocks on the free list have enough free space to complete the allocation request.

In summary, if new blocks are put at the head of the free list, virtual space is eventually exhausted. In the current patch I simply put a newly allocated block at the tail of the free list, thus reducing fragmentation and giving a chance to resolve allocation requests using older blocks with possible holes left.

Signed-off-by: Roman Pen <[email protected]> Cc: Eric Dumazet <[email protected]> Acked-by: Joonsoo Kim <[email protected]> Cc: David Rientjes <[email protected]> Cc: WANG Chao <[email protected]> Cc: Fabian Frederick <[email protected]> Cc: Christoph Lameter <[email protected]> Cc: Gioh Kim <[email protected]> Cc: Rob Jones <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
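Consistent with the 1-line diffstat, the core of this patch is a single list-insertion change in new_vmap_block(); a hedged sketch:

  vbq = &get_cpu_var(vmap_block_queue);
  spin_lock(&vbq->lock);
  /* was list_add_rcu() (prepend); append instead, so older blocks
   * with holes are tried before the fresh block */
  list_add_tail_rcu(&vb->free_list, &vbq->free);
  spin_unlock(&vbq->lock);
  put_cpu_var(vmap_block_queue);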
2015-04-15hugetlbfs: document min_size mount option and cleanupMike Kravetz1-9/+22
Add the min_size mount option to the hugetlbfs documentation. Also add the missing pagesize option and mention that size can be specified in bytes or as a percentage of the huge page pool. Signed-off-by: Mike Kravetz <[email protected]> Cc: Davidlohr Bueso <[email protected]> Cc: Aneesh Kumar <[email protected]> Cc: Joonsoo Kim <[email protected]> Cc: Andi Kleen <[email protected]> Cc: David Rientjes <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2015-04-15hugetlbfs: accept subpool min_size mount option and setup accordinglyMike Kravetz3-24/+94
Make 'min_size=<value>' an option when mounting a hugetlbfs. This option takes the same value as the 'size' option. min_size can be specified without specifying size. If both are specified, min_size must be less than or equal to size, else the mount will fail. If min_size is specified, then at mount time an attempt is made to reserve min_size pages. If the reservation fails, the mount fails. At umount time, the reserved pages are released. Signed-off-by: Mike Kravetz <[email protected]> Cc: Davidlohr Bueso <[email protected]> Cc: Aneesh Kumar <[email protected]> Cc: Joonsoo Kim <[email protected]> Cc: Andi Kleen <[email protected]> Cc: David Rientjes <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
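A hedged usage sketch (the mount point and sizes are examples; per the documentation patch above, both options accept bytes or a percentage of the huge page pool):

  mount -t hugetlbfs -o min_size=1G,size=2G none /mnt/huge

If 1G worth of huge pages cannot be reserved from the global pool at mount time, the mount fails.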
2015-04-15hugetlbfs: add minimum size accounting to subpoolsMike Kravetz1-23/+100
The routines that perform subpool maximum size accounting, hugepage_subpool_get/put_pages(), are modified to also perform minimum size accounting. When a delta value is passed to these routines, they calculate how global reservations must be adjusted to maintain the subpool minimum size and now return this global reserve count adjustment. The adjustment is then passed to the global accounting routine hugetlb_acct_memory(). Signed-off-by: Mike Kravetz <[email protected]> Cc: Davidlohr Bueso <[email protected]> Cc: Aneesh Kumar <[email protected]> Cc: Joonsoo Kim <[email protected]> Cc: Andi Kleen <[email protected]> Cc: David Rientjes <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2015-04-15hugetlbfs: add minimum size tracking fields to subpool structureMike Kravetz2-3/+8
hugetlbfs allocates huge pages from the global pool as needed. Even if the global pool contains a sufficient number of pages for the filesystem size at mount time, those global pages could be grabbed for some other use. As a result, filesystem huge page allocations may fail due to lack of pages. Applications such as a database want to use huge pages for performance reasons. hugetlbfs filesystem semantics with ownership and modes work well to manage access to a pool of huge pages. However, the application would like some reasonable assurance that allocations will not fail due to a lack of huge pages. At application startup time, the application would like to configure itself to use a specific number of huge pages. Before starting, the application can check to make sure that enough huge pages exist in the system global pools. However, there are no guarantees that those pages will be available when needed by the application. What the application wants is exclusive use of a subset of huge pages. Add a new hugetlbfs mount option 'min_size=<value>' to indicate that the specified number of pages will be available for use by the filesystem. At mount time, this number of huge pages will be reserved for exclusive use of the filesystem. If there is not a sufficient number of free pages, the mount will fail. As pages are allocated to and freed from the filesystem, the number of reserved pages is adjusted so that the specified minimum is maintained. This patch (of 4): Add a field to the subpool structure to indicate the minimum number of huge pages to always be used by this subpool. This minimum count includes allocated pages as well as reserved pages. If the minimum number of pages for the subpool has not been allocated, pages are reserved up to this minimum. An additional field (rsv_hpages) is used to track the number of pages reserved to meet this minimum size. The hstate pointer in the subpool is convenient to have when reserving and unreserving the pages. Signed-off-by: Mike Kravetz <[email protected]> Cc: Davidlohr Bueso <[email protected]> Cc: Aneesh Kumar <[email protected]> Cc: Joonsoo Kim <[email protected]> Cc: Andi Kleen <[email protected]> Cc: David Rientjes <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
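A hedged sketch of the extended subpool structure (rsv_hpages and the hstate pointer are named in the changelog; the min_hpages name and comments are assumptions):

  struct hugepage_subpool {
          spinlock_t lock;
          long count;
          long max_hpages;        /* maximum huge pages, or -1 for no max */
          long used_hpages;       /* pages used against the maximum,
                                     allocated + reserved */
          struct hstate *hstate;  /* hstate for sizing/reserving pages */
          long min_hpages;        /* minimum huge pages, or -1 for no min */
          long rsv_hpages;        /* pages reserved against the global
                                     pool to satisfy the minimum */
  };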
2015-04-15mm/compaction: reset compaction scanner positionsGioh Kim1-0/+8
When compaction is activated via /proc/sys/vm/compact_memory it should scan the whole zone. But some platforms, for instance ARM, have the start_pfn of a zone at zero, so the first attempt to compact via /proc doesn't work: the compaction scanner positions need to be reset first. Signed-off-by: Gioh Kim <[email protected]> Acked-by: Vlastimil Babka <[email protected]> Acked-by: David Rientjes <[email protected]> Cc: Joonsoo Kim <[email protected]> Cc: Mel Gorman <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
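A hedged sketch of the fix: the /proc trigger path uses order == -1, so the cached scanner positions are reset before compacting each zone (exact placement and helper are assumptions):

  /* When called via /proc/sys/vm/compact_memory, make sure the whole
   * zone is scanned regardless of cached scanner positions. */
  if (cc->order == -1)
          __reset_isolation_suitable(zone);

__reset_isolation_suitable() rewinds the cached migrate/free scanner pfns back to the zone boundaries.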
2015-04-15mm, memcg: sync allocation and memcg charge gfp flags for THPMichal Hocko1-22/+20
memcg currently uses hardcoded GFP_TRANSHUGE gfp flags for all THP charges. THP allocations, however, might be using different flags depending on /sys/kernel/mm/transparent_hugepage/{,khugepaged/}defrag and the current allocation context. The primary difference is that defrag configured to "madvise" will clear the __GFP_WAIT flag from the core gfp mask, to make the allocation lighter for all mappings which are not backed by VM_HUGEPAGE vmas. If the memcg charge path ignores this fact we will get a light allocation, but a potential memcg reclaim would kill the whole point of the configuration. Fix the mismatch by providing the same gfp mask used for the allocation to the charge functions. This is quite easy for all paths except for the khugepaged kernel thread with !CONFIG_NUMA, which is doing a pre-allocation long before the allocated page is used in collapse_huge_page via khugepaged_alloc_page. To avoid cluttering the whole code path from khugepaged_do_scan, we simply return the current flags as per the khugepaged_defrag() value, which might have changed since the preallocation. If somebody changed the value of the knob we would charge differently, but this shouldn't happen often and it is definitely not critical, because it would only lead to a reduced success rate of one-off THP promotion. [[email protected]: fix weird code layout while we're there] [[email protected]: clean up around alloc_hugepage_gfpmask()] Signed-off-by: Michal Hocko <[email protected]> Acked-by: Vlastimil Babka <[email protected]> Cc: Johannes Weiner <[email protected]> Acked-by: David Rientjes <[email protected]> Signed-off-by: David Rientjes <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
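A hedged sketch of the mask derivation that the allocation and the charge now share (the helper predates this patch; the point illustrated is passing its result to the charge function, and the usage lines are schematic):

  static inline gfp_t alloc_hugepage_gfpmask(int defrag, gfp_t extra_gfp)
  {
          /* "madvise" defrag => no __GFP_WAIT for non-VM_HUGEPAGE vmas */
          return (GFP_TRANSHUGE & ~(defrag ? 0 : __GFP_WAIT)) | extra_gfp;
  }

  gfp = alloc_hugepage_gfpmask(transparent_hugepage_defrag(vma), 0);
  /* charge with the same mask the page was (or will be) allocated with */
  err = mem_cgroup_try_charge(page, mm, gfp, &memcg);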
2015-04-15mm: rename deactivate_page to deactivate_file_pageMinchan Kim3-14/+14
"deactivate_page" was created for file invalidation so it has too specific logic for file-backed pages. So, let's change the name of the function and date to a file-specific one and yield the generic name. Signed-off-by: Minchan Kim <[email protected]> Cc: Michal Hocko <[email protected]> Cc: Johannes Weiner <[email protected]> Cc: Mel Gorman <[email protected]> Cc: Rik van Riel <[email protected]> Cc: Shaohua Li <[email protected]> Cc: Wang, Yalin <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2015-04-15Documentation/vm/unevictable-lru.txt: document interaction between compaction and the unevictable LRUEric B Munson1-0/+12
The memory compaction code uses the migration code to do most of the work in compaction. However, the compaction code interacts with the unevictable LRU differently than migration code and this difference should be noted in the documentation. [[email protected]: identify /proc/sys/vm/compact_unevictable directly] Signed-off-by: Eric B Munson <[email protected]> Cc: Michal Hocko <[email protected]> Cc: Vlastimil Babka <[email protected]> Cc: Christoph Lameter <[email protected]> Cc: David Rientjes <[email protected]> Cc: Rik van Riel <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Mel Gorman <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2015-04-15mm: allow compaction of unevictable pagesEric B Munson4-0/+28
Currently, pages which are marked as unevictable are protected from compaction, but not from other types of migration. The POSIX real time extension explicitly states that mlock() will prevent a major page fault, but the spirit of this is that mlock() should give a process the ability to control sources of latency, including minor page faults. However, the mlock manpage only explicitly says that a locked page will not be written to swap and this can cause some confusion. The compaction code today does not give a developer who wants to avoid swap but wants to have large contiguous areas available any method to achieve this state. This patch introduces a sysctl for controlling compaction behavior with respect to the unevictable lru. Users who demand no page faults after a page is present can set compact_unevictable_allowed to 0 and users who need the large contiguous areas can enable compaction on locked memory by leaving the default value of 1. To illustrate this problem I wrote a quick test program that mmaps a large number of 1MB files filled with random data. These maps are created locked and read only. Then every other mmap is unmapped and I attempt to allocate huge pages to the static huge page pool. When the compact_unevictable_allowed sysctl is 0, I cannot allocate hugepages after fragmenting memory. When the value is set to 1, allocations succeed. Signed-off-by: Eric B Munson <[email protected]> Acked-by: Michal Hocko <[email protected]> Acked-by: Vlastimil Babka <[email protected]> Acked-by: Christoph Lameter <[email protected]> Acked-by: David Rientjes <[email protected]> Acked-by: Rik van Riel <[email protected]> Cc: Vlastimil Babka <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Christoph Lameter <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Mel Gorman <[email protected]> Cc: David Rientjes <[email protected]> Cc: Michal Hocko <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
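Usage is a single sysctl toggle, with the value semantics the changelog describes:

  # forbid compaction from migrating mlocked/unevictable pages
  echo 0 > /proc/sys/vm/compact_unevictable_allowed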
2015-04-15mm/page-writeback: check-before-clear PageReclaimNaoya Horiguchi1-1/+2
With the page flag sanitization patchset, an invalid usage of ClearPageReclaim() is detected in set_page_dirty(). This can be called from __unmap_hugepage_range(), so let's check PageReclaim() before trying to clear it to avoid the misuse. Signed-off-by: Naoya Horiguchi <[email protected]> Acked-by: Kirill A. Shutemov <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
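The fix is the check-before-clear pattern the subject names; a sketch of the set_page_dirty() hunk:

  if (PageReclaim(page))
          ClearPageReclaim(page);

The following migrate patch applies the same pattern to PageSwapCache.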
2015-04-15mm/migrate: check-before-clear PageSwapCacheNaoya Horiguchi1-1/+2
With the page flag sanitization patchset, an invalid usage of ClearPageSwapCache() is detected in migrate_page_copy(). migrate_page_copy() is shared by both normal and hugepage (both thp and hugetlb) code paths, so let's check PageSwapCache() and clear it only if it's set, to avoid misuse of the invalid clear operation. Signed-off-by: Naoya Horiguchi <[email protected]> Acked-by: Kirill A. Shutemov <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2015-04-15mm: avoid tail page refcounting on non-THP compound pagesKirill A. Shutemov1-1/+1
THP uses tail page refcounting to be able to split huge pages at any time. Tail page refcounting is not needed for other users of compound pages, and it's harmful because of its overhead. We try to exclude non-THP pages from tail page refcounting using the __compound_tail_refcounted() check. It excludes the most common non-THP compound pages: SL*B and hugetlb, but it doesn't catch the rest of the __GFP_COMP users -- drivers. And it's not only about overhead. Drivers might want to use compound pages to get refcounting semantics suitable for mapping high-order pages to userspace. But tail page refcounting breaks that. Tail page refcounting uses ->_mapcount in tail pages to store GUP pins on them. That means GUP pins would affect page_mapcount() for tail pages. It's not a problem for THP, because it never maps tail pages. But unlike THP, drivers map parts of compound pages with PTEs, and that makes page_mapcount() be called for tail pages. In particular, GUP pins would shift PSS up and affect /proc/kpagecount for such pages. But I'm not aware of anything which can lead to a crash or other serious misbehaviour. Since currently all THP pages are anonymous and all driver pages are not, we can fix the __compound_tail_refcounted() check by requiring PageAnon() to enable tail page refcounting. Signed-off-by: Kirill A. Shutemov <[email protected]> Acked-by: Hugh Dickins <[email protected]> Cc: Andrea Arcangeli <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
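A hedged sketch of the adjusted check (the pre-existing SL*B/hugetlb exclusions are named in the changelog; the exact expression is an assumption):

  static inline bool __compound_tail_refcounted(struct page *page)
  {
          /* only anon compound pages (i.e. THP) need tail refcounting */
          return PageAnon(page) && !PageSlab(page) && !PageHeadHuge(page);
  }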
2015-04-15mm: consolidate all page-flags helpers in <linux/page-flags.h>Kirill A. Shutemov4-105/+96
Currently we take a naive approach to page flags on compound pages - we set the flag on the page without considering whether the flag makes sense for the tail page or for the compound page in general. This patchset tries to sort this out by defining a per-flag policy on what needs to be done when a page-flag helper operates on a compound page. The last patch in the patchset also sanitizes usage of page->mapping for tail pages. We don't define the meaning of page->mapping for tail pages. Currently it's always NULL, which can be inconsistent with the head page and potentially lead to problems. For now I caught one case of illegal usage of page flags or ->mapping: the sound subsystem allocates pages with __GFP_COMP and maps them with PTEs. It leads to setting the dirty bit on tail pages and access to the tail page's ->mapping. I don't see any bad behaviour caused by this, but it's worth fixing anyway. This patchset makes more sense if you take my THP refcounting into account: we will see more compound pages mapped with PTEs, and we need to define the behaviour of flags on compound pages to avoid bugs. This patch (of 16): We have page-flags helper function declarations/definitions spread over several header files. Let's consolidate them in <linux/page-flags.h>. Signed-off-by: Kirill A. Shutemov <[email protected]> Cc: Andrea Arcangeli <[email protected]> Acked-by: Hugh Dickins <[email protected]> Cc: Dave Hansen <[email protected]> Cc: Mel Gorman <[email protected]> Cc: Rik van Riel <[email protected]> Cc: Vlastimil Babka <[email protected]> Cc: Christoph Lameter <[email protected]> Cc: Naoya Horiguchi <[email protected]> Cc: Steve Capper <[email protected]> Cc: "Aneesh Kumar K.V" <[email protected]> Cc: Johannes Weiner <[email protected]> Cc: Michal Hocko <[email protected]> Cc: Jerome Marchand <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2015-04-15mm/memory-failure.c: define page types for action_result() in one placeNaoya Horiguchi1-31/+77
This cleanup patch moves all strings passed to action_result() into a single array, action_page_type, so that a reader can easily find which kinds of action results are possible. This patch also fixes odd lines being printed out, like "unknown page state page" or "free buddy, 2nd try page". [[email protected]: rename messages, per David] [[email protected]: s/DIRTY_UNEVICTABLE_LRU/CLEAN_UNEVICTABLE_LRU', per Andi] Signed-off-by: Naoya Horiguchi <[email protected]> Reviewed-by: Andi Kleen <[email protected]> Cc: Tony Luck <[email protected]> Cc: "Xie XiuQi" <[email protected]> Cc: Steven Rostedt <[email protected]> Cc: Chen Gong <[email protected]> Cc: David Rientjes <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
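A hedged sketch of the single-array approach (the enum and string values here are illustrative, not the exact list from the patch):

  enum action_page_type {
          MSG_KERNEL,
          MSG_SLAB,
          MSG_HUGE,
          MSG_FREE_HUGE,
          MSG_UNKNOWN,
  };

  static const char * const action_page_types[] = {
          [MSG_KERNEL]    = "reserved kernel page",
          [MSG_SLAB]      = "kernel slab page",
          [MSG_HUGE]      = "huge page",
          [MSG_FREE_HUGE] = "free huge page",
          [MSG_UNKNOWN]   = "unknown page",
  };

action_result() then indexes this table instead of taking a free-form string, so every possible message is visible in one place.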
2015-04-15memcg: remove obsolete commentVladimir Davydov1-5/+0
Low and high watermarks, as they are defined in the TODO to the mem_cgroup struct, have already been implemented by Johannes, so remove the stale comment. Signed-off-by: Vladimir Davydov <[email protected]> Cc: Johannes Weiner <[email protected]> Acked-by: Michal Hocko <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2015-04-15memcg: zap mem_cgroup_lookup()Vladimir Davydov2-17/+9
mem_cgroup_lookup() is a wrapper around mem_cgroup_from_id(), which checks that id != 0 before issuing the function call. Today, there is no point in this additional check apart from optimization, because there is no css with id <= 0, so that css_from_id, called by mem_cgroup_from_id, will return NULL for any id <= 0. Since mem_cgroup_from_id is only called from mem_cgroup_lookup, let us zap mem_cgroup_lookup, substituting calls to it with mem_cgroup_from_id and moving the check if id > 0 to css_from_id. Signed-off-by: Vladimir Davydov <[email protected]> Acked-by: Michal Hocko <[email protected]> Cc: Johannes Weiner <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
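A hedged sketch of where the check lands (css_from_id() in the cgroup core; the exact body is an assumption):

  struct cgroup_subsys_state *css_from_id(int id, struct cgroup_subsys *ss)
  {
          WARN_ON_ONCE(!rcu_read_lock_held());
          /* id <= 0 can never match a css, so answer NULL directly */
          return id > 0 ? idr_find(&ss->css_idr, id) : NULL;
  }

With this in place, callers use mem_cgroup_from_id() directly and the mem_cgroup_lookup() wrapper disappears.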
2015-04-15mm: refactor zone_movable_is_highmem()Zhang Zhen1-4/+4
All callers of zone_movable_is_highmem are under #ifdef CONFIG_HIGHMEM, so the else branch return 0 is not needed. Signed-off-by: Zhang Zhen <[email protected]> Acked-by: David Rientjes <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2015-04-15mm/oom_kill.c: fix typo in commentYaowei Bai1-1/+1
Alter 'taks' -> 'task' Signed-off-by: Yaowei Bai <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2015-04-15vfs: delete vfs_readdir function declarationZhang Zhen1-1/+0
vfs_readdir() was replaced by iterate_dir() in commit 5c0ba4e0762e ("[readdir] introduce iterate_dir() and dir_context"). Signed-off-by: Zhang Zhen <[email protected]> Cc: Al Viro <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2015-04-15Merge branch 'for-next' of git://git.kernel.org/pub/scm/linux/kernel/git/cooloney/linux-ledsLinus Torvalds18-120/+735
Pull LED subsystem updates from Bryan Wu: "In this cycle, we merged some fixes and updates for the LED Flash class driver, so the core code of the LED Flash class driver is now in the kernel. Moreover, we also got some bug fixes, code cleanups and new drivers for LED controllers" * 'for-next' of git://git.kernel.org/pub/scm/linux/kernel/git/cooloney/linux-leds: leds: Don't treat the LED name as a format string leds: Use log level warn instead of info when telling about a name clash leds/led-class: Handle LEDs with the same name leds: lp8860: Fix typo in MODULE_DESCRIPTION in leds-lp8860.c leds: lp8501: Fix typo in MODULE_DESCRIPTION in leds-lp8501.c DT: leds: Add uniqueness requirement for 'label' property. dt-binding: leds: Add common LED DT bindings macros leds: add Qualcomm PM8941 WLED driver leds: add DT binding for Qualcomm PM8941 WLED block leds: pca963x: Add missing initialiation of struct led_info.flags leds: flash: Fix the size of sysfs_groups array Documentation: leds: Add description of LED Flash class extension leds: flash: document sysfs interface leds: flash: Remove synchronized flash strobe feature leds: Introduce devres helper for led_classdev_register leds: lp8860: make use of devm_gpiod_get_optional leds: Let the binding document example for leds-gpio follow the gpio bindings leds: flash: remove stray include directive leds: leds-pwm: drop one pwm_get_period() call
2015-04-15Merge tag 'sound-4.1-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tiwai/soundLinus Torvalds315-9019/+15172
Pull sound updates from Takashi Iwai: "There has been major modernization with the standard bus in the ALSA sequencer core and HD-audio. Also, HD-audio receives regmap support, replacing the in-house register cache code. These changes shouldn't impact the existing behavior; they are rather refactoring. In addition, HD-audio got its code split into a core library part and the "legacy" driver parts. This is preliminary work for adapting the upcoming ASoC HD-audio driver, and the whole transition is still work in progress, likely finished in 4.1. Along with them, there are many updates in the ASoC area as usual, too: lots of cleanups, Intel code shuffling, etc. Here are some highlights: ALSA core: - PCM: the audio timestamp / wallclock enhancement - PCM: fixes in DPCM management - Fixes / cleanups of user-space control element management - Sequencer: modernization using the standard bus HD-audio: - Modernization using the standard bus - Regmap support - Use standard runtime PM for codec power saving - Widget-path based power-saving for IDT, VIA and Realtek codecs - Reorganized sysfs entries for each codec object - More Dell headset support ASoC: - Move of jack registration to the card level - Lots of ASoC cleanups, mainly moving things from the CODEC level to the card level - Support for DAPM routes specified by both the machine driver and DT - Continuing improvements to rcar - pcm512x enhancements - Intel platforms updates - rt5670 updates / fixes - New platforms / devices: some non-DSP Qualcomm platforms, Google's Storm platform, Maxim MAX98925 CODECs and the Ingenic JZ4780 SoC Misc: - ice1724: Improved ESI W192M support - emu10k1: Emu 1010 fixes/enhancement" * tag 'sound-4.1-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tiwai/sound: (411 commits) ALSA: hda - set GET bit when adding a vendor verb to the codec regmap ALSA: hda/realtek - Enable the ALC292 dock fixup on the Thinkpad T450 ALSA: hda - Fix another race in runtime PM refcounting ALSA: hda - Expose codec type sysfs ALSA: ctl: fix to handle several elements added by one operation for userspace element ASoC: Intel: fix array_size.cocci warnings ASoC: n810: Automatically disconnect non-connected pins ASoC: n810: Consistently pass the card DAPM context to n810_ext_control() ASoC: davinci-evm: Use card DAPM context to access widgets ASoC: mop500_ab8500: Use card DAPM context to access widgets ASoC: wm1133-ev1: Use card DAPM context to access widgets ASoC: atmel: Improve machine driver compile test coverage ASoC: atmel: Add dependency to SND_SOC_I2C_AND_SPI where necessary ALSA: control: Fix a typo of SNDRV_CTL_ELEM_ACCESS_TLV_* with SNDRV_CTL_TLV_OP_* ALSA: usb-audio: Don't attempt to get Microsoft Lifecam Cinema sample rate ASoC: rnsd: fix build regression without CONFIG_OF ALSA: emu10k1: add toggles for E-mu 1010 optical ports ALSA: ctl: fill identical information to return value when adding userspace elements ALSA: ctl: fix a bug to return no identical information in info operation for userspace controls ALSA: ctl: confirm to return all identical information in 'activate' event ...
2015-04-15Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/next-queueDavid S. Miller13-163/+304
Jeff Kirsher says: ==================== Intel Wired LAN Driver Updates 2015-04-14 This series contains updates to i40e and i40evf. Mitch provides a fix for i40e where VFs were gone and the associated VSIs had been removed but the rings were not stopped, which in some circumstances caused memory corruption or DMAR errors; so stop all the rings associated with each VF before releasing its resources. Also cleaned up a poorly indented piece of code. Fixes VF link state, where VF devices were assuming link is up unless told otherwise, which means that VFs instantiated on a PF with no link would report the wrong state. Anjali adds support to add Flow Director sideband rules for a VF from its PF. Fixes a recently discovered hardware issue, where after a VFLR the hardware might indicate reset completion a little too early, so wait another 10 msec for the cache to be cleaned up. Jesse enables the user to dump the internal hardware state for better debugging by allowing a bash script to acquire information about it; the data output to the kernel log is collected by the script and can then be sent to Intel. Also fixed a possible memory allocation failure path that was found by smatch, and cleaned up unused local variables. ==================== Signed-off-by: David S. Miller <[email protected]>
2015-04-15bnx2x: Fix busy_poll vs netpollEric Dumazet2-90/+56
Commit 9a2620c877454 ("bnx2x: prevent WARN during driver unload") switched the napi/busy_lock locking mechanism from spin_lock() to spin_lock_bh(), breaking inter-operability with netconsole, as netpoll disables interrupts prior to calling our napi mechanism. This switches the driver to using atomic assignments instead of the spinlock mechanisms previously employed. Based on an initial patch from Yuval Mintz & Ariel Elior, I basically added softirq starvation avoidance and a mixture of atomic operations, plain writes and barriers. Note this slightly reduces the overhead for this driver when no busy_poll sockets are in use. Fixes: 9a2620c877454 ("bnx2x: prevent WARN during driver unload") Signed-off-by: Eric Dumazet <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2015-04-15Merge tag 'locks-v4.1-1' of git://git.samba.org/jlayton/linuxLinus Torvalds3-39/+39
Pull file locking related changes from Jeff Layton: "This set is mostly minor cleanups to the overhaul that went in last cycle. The other noticeable items are the changes to the lm_get_owner and lm_put_owner prototypes, and the fact that we no longer need to use the i_lock to protect the i_flctx pointer" * tag 'locks-v4.1-1' of git://git.samba.org/jlayton/linux: locks: use cmpxchg to assign i_flctx pointer locks: get rid of WE_CAN_BREAK_LSLK_NOW dead code locks: change lm_get_owner and lm_put_owner prototypes locks: don't allocate a lock context for an F_UNLCK request locks: Add lockdep assertion for blocked_lock_lock locks: remove extraneous IS_POSIX and IS_FLOCK tests locks: Remove unnecessary IS_POSIX test
2015-04-15net: hip04: Make tx coalesce timer actually workThomas Gleixner1-7/+11
The code sets the expiry value of the timer to a relative value and starts it with hrtimer_start_expires. That's fine, but it only works once: the timer is started in relative mode, so the expiry value gets overwritten with the absolute expiry time (now + expiry). Once the timer has expired, a new call to hrtimer_start_expires results in an immediately expired timer, because the expiry value is already in the past. Use the proper mechanisms to (re)start the timer in the intended way. Signed-off-by: Thomas Gleixner <[email protected]> Cc: "David S. Miller" <[email protected]> Cc: dingtianhong <[email protected]> Cc: Arnd Bergmann <[email protected]> Cc: Zhangfei Gao <[email protected]> Cc: Dan Carpenter <[email protected]> Cc: [email protected] Acked-by: Ding Tianhong <[email protected]> Acked-by: Arnd Bergmann <[email protected]> Signed-off-by: David S. Miller <[email protected]>
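A hedged sketch of the intended re-arm (the timer and interval field names are assumptions, not the driver's exact ones):

  /* Re-arm with a fresh *relative* expiry on each burst, instead of
   * reusing the now-absolute expiry stored by the previous start. */
  hrtimer_start(&priv->tx_coalesce_timer,
                ktime_set(0, priv->tx_coalesce_usecs * NSEC_PER_USEC),
                HRTIMER_MODE_REL);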
2015-04-15crypto: fix mis-merge with the networking mergeLinus Torvalds1-4/+2
The networking updates from David Miller removed the iocb argument from sendmsg and recvmsg (in commit 1b784140474e: "net: Remove iocb argument from sendmsg and recvmsg"), but the crypto code had added new instances of them. When I pulled the crypto update, it was a silent semantic mis-merge, and I overlooked the new warning messages in my test-build. I try to fix those in the merge itself, but that relies on me noticing. Oh well. Signed-off-by: Linus Torvalds <[email protected]>
2015-04-15Merge branch 'exec_domain_rip_v2' of git://git.kernel.org/pub/scm/linux/kernel/git/rw/miscLinus Torvalds85-675/+86
Pull exec domain removal from Richard Weinberger: "This series removes execution domain support from Linux. The idea behind exec domains was to support different ABIs. The feature was never complete nor stable. Let's rip it out and make the kernel signal handling code less complicated" * 'exec_domain_rip_v2' of git://git.kernel.org/pub/scm/linux/kernel/git/rw/misc: (27 commits) arm64: Removed unused variable sparc: Fix execution domain removal Remove rest of exec domains. arch: Remove exec_domain from remaining archs arc: Remove signal translation and exec_domain xtensa: Remove signal translation and exec_domain xtensa: Autogenerate offsets in struct thread_info x86: Remove signal translation and exec_domain unicore32: Remove signal translation and exec_domain um: Remove signal translation and exec_domain tile: Remove signal translation and exec_domain sparc: Remove signal translation and exec_domain sh: Remove signal translation and exec_domain s390: Remove signal translation and exec_domain mn10300: Remove signal translation and exec_domain microblaze: Remove signal translation and exec_domain m68k: Remove signal translation and exec_domain m32r: Remove signal translation and exec_domain m32r: Autogenerate offsets in struct thread_info frv: Remove signal translation and exec_domain ...
2015-04-15Merge tag 'for-linus-4.1' of git://git.kernel.org/pub/scm/linux/kernel/git/rw/umlLinus Torvalds59-2067/+323
Pull UML updates from Richard Weinberger: - hostfs saw a face lifting - old/broken stuff was removed (SMP, HIGHMEM, SKAS3/4) - random cleanups and bug fixes * tag 'for-linus-4.1' of git://git.kernel.org/pub/scm/linux/kernel/git/rw/uml: (26 commits) um: Print minimum physical memory requirement um: Move uml_postsetup in the init_thread stack um: add a kmsg_dumper x86, UML: fix integer overflow in ELF_ET_DYN_BASE um: hostfs: Reduce number of syscalls in readdir um: Remove broken highmem support um: Remove broken SMP support um: Remove SKAS3/4 support um: Remove ppc cruft um: Remove ia64 cruft um: Remove dead code from stacktrace hostfs: No need to box and later unbox the file mode hostfs: Use page_offset() hostfs: Set page flags in hostfs_readpage() correctly hostfs: Remove superfluous initializations in hostfs_open() hostfs: hostfs_open: Reset open flags upon each retry hostfs: Remove superfluous test in hostfs_open() hostfs: Report append flag in ->show_options() hostfs: Use __getname() in follow_link hostfs: Remove open coded strcpy() ...
2015-04-15Merge tag 'upstream-4.1-rc1' of git://git.infradead.org/linux-ubifsLinus Torvalds38-1141/+1507
Pull UBI/UBIFS updates from Richard Weinberger: "This pull request includes the following UBI/UBIFS changes: - powercut emulation for UBI - a huge update to UBI Fastmap - cleanups and bugfixes all over UBI and UBIFS" * tag 'upstream-4.1-rc1' of git://git.infradead.org/linux-ubifs: (50 commits) UBI: power cut emulation for testing UBIFS: fix output format of INUM_WATERMARK UBI: Fastmap: Fall back to scanning mode after ECC error UBI: Fastmap: Remove is_fm_block() UBI: Fastmap: Add blank line after declarations UBI: Fastmap: Remove else after return. UBI: Fastmap: Introduce may_reserve_for_fm() UBI: Fastmap: Introduce ubi_fastmap_init() UBI: Fastmap: Wire up WL accessor functions UBI: Add accessor functions for WL data structures UBI: Move fastmap specific functions out of wl.c UBI: Fastmap: Add new module parameter fm_debug UBI: Fastmap: Make self_check_eba() depend on fastmap self checking UBI: Fastmap: Add self check to detect absent PEBs UBI: Fix stale pointers in ubi->lookuptbl UBI: Fastmap: Enhance fastmap checking UBI: Add initial support for fastmap self checks UBI: Fastmap: Rework fastmap error paths UBI: Fastmap: Prepare for variable sized fastmaps UBI: Fastmap: Locking updates ...
2015-04-15Merge branch 'for-linus-2' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfsLinus Torvalds90-1185/+598
Pull second vfs update from Al Viro: "Now that net-next went in... Here's the next big chunk - killing ->aio_read() and ->aio_write(). There'll be one more pile today (direct_IO changes and generic_write_checks() cleanups/fixes), but I'd prefer to keep that one separate" * 'for-linus-2' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs: (37 commits) ->aio_read and ->aio_write removed pcm: another weird API abuse infinibad: weird APIs switched to ->write_iter() kill do_sync_read/do_sync_write fuse: use iov_iter_get_pages() for non-splice path fuse: switch to ->read_iter/->write_iter switch drivers/char/mem.c to ->read_iter/->write_iter make new_sync_{read,write}() static coredump: accept any write method switch /dev/loop to vfs_iter_write() serial2002: switch to __vfs_read/__vfs_write ashmem: use __vfs_read() export __vfs_read() autofs: switch to __vfs_write() new helper: __vfs_write() switch hugetlbfs to ->read_iter() coda: switch to ->read_iter/->write_iter ncpfs: switch to ->read_iter/->write_iter net/9p: remove (now-)unused helpers p9_client_attach(): set fid->uid correctly ...
2015-04-15block: loop: switch to VFS ITER_BVECChristoph Hellwig1-174/+120
Signed-off-by: Christoph Hellwig <[email protected]> Signed-off-by: Al Viro <[email protected]>
2015-04-15configfs: Fix inconsistent use of file_inode() vs file->f_path.dentry->d_inodeDavid Howells1-1/+1
Fix inconsistent use of file_inode() vs file->f_path.dentry->d_inode. Reported-by: Dan Carpenter <[email protected]> Signed-off-by: David Howells <[email protected]> Signed-off-by: Al Viro <[email protected]>
2015-04-15VFS: Make pathwalk use d_is_reg() rather than S_ISREG()David Howells1-1/+1
Make pathwalk use d_is_reg() rather than S_ISREG() to determine whether to honour O_TRUNC. Since this occurs after complete_walk(), the dentry type field cannot change and the inode pointer cannot change as we hold a ref on the dentry, so this should be safe. Signed-off-by: David Howells <[email protected]> Signed-off-by: Al Viro <[email protected]>
2015-04-15VFS: Fix up debugfs to use d_is_dir() in place of S_ISDIR()David Howells1-1/+1
Fix up debugfs to use d_is_dir(dentry) in place of S_ISDIR(dentry->d_inode->i_mode). Signed-off-by: David Howells <[email protected]> Signed-off-by: Al Viro <[email protected]>
2015-04-15VFS: Combine inode checks with d_is_negative() and d_is_positive() in pathwalkDavid Howells1-3/+3
Where we have constructions of the form "if (!dentry->d_inode || d_is_negative(dentry))" in pathwalk, we should be able to eliminate the check of d_inode and rely solely on the result of d_is_negative() or d_is_positive(). What we do have to take care to do is to read d_inode after calling a d_is_xxx() typecheck function, to get the barriering right. Signed-off-by: David Howells <[email protected]> Signed-off-by: Al Viro <[email protected]>
2015-04-15NFS: Don't use d_inode as a variable nameDavid Howells1-4/+4
Don't use d_inode as a variable name as it now masks a function name. Signed-off-by: David Howells <[email protected]> Signed-off-by: Al Viro <[email protected]>
2015-04-15VFS: Impose ordering on accesses of d_inode and d_flagsDavid Howells2-26/+42
Impose ordering on accesses of d_inode and d_flags to avoid the need to do this: if (!dentry->d_inode || d_is_negative(dentry)) { when this: if (d_is_negative(dentry)) { should suffice. This check is especially problematic if a dentry can have its type field set to something other than DCACHE_MISS_TYPE when d_inode is NULL (as in unionmount). What we really need to do is stick a write barrier between setting d_inode and setting d_flags, and a read barrier between reading d_flags and reading d_inode. Signed-off-by: David Howells <[email protected]> Signed-off-by: Al Viro <[email protected]>
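A minimal sketch of the pairing the changelog describes (the actual kernel helpers may use acquire/release primitives instead of raw barriers):

  /* writer: publish d_inode before the type bits in d_flags */
  dentry->d_inode = inode;
  smp_wmb();
  dentry->d_flags = flags;            /* DCACHE_*_TYPE bits now valid */

  /* reader: fetch d_flags first, then the d_inode it describes */
  flags = READ_ONCE(dentry->d_flags);
  smp_rmb();
  inode = dentry->d_inode;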
2015-04-15VFS: Add owner-filesystem positive/negative dentry checksDavid Howells1-0/+38
Supply two functions to test whether a filesystem's own dentries are positive or negative (d_really_is_positive() and d_really_is_negative()). The problem is that the DCACHE_ENTRY_TYPE field of dentry->d_flags may be overridden by the union part of a layered filesystem and isn't thus necessarily indicative of the type of dentry. Normally, this would involve a negative dentry (ie. ->d_inode == NULL) having ->d_layer.lower pointed to a lower layer dentry, DCACHE_PINNING_LOWER set and the DCACHE_ENTRY_TYPE field set to something other than DCACHE_MISS_TYPE - but it could also involve, say, a DCACHE_SPECIAL_TYPE being overridden to DCACHE_WHITEOUT_TYPE if a 0,0 chardev is detected in the top layer. However, inside a filesystem, when that fs is looking at its own dentries, it probably wants to know if they are really negative or not - and doesn't care about the fallthrough bits used by the union. To this end, a filesystem should normally use d_really_is_positive/negative() when looking at its own dentries rather than d_is_positive/negative() and should use d_inode() to get at the inode. Anyone looking at someone else's dentries (this includes pathwalk) should use d_is_xxx() and d_backing_inode(). Signed-off-by: David Howells <[email protected]> Signed-off-by: Al Viro <[email protected]>
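The helpers themselves reduce to a raw d_inode test; a sketch matching the description above:

  static inline bool d_really_is_negative(const struct dentry *dentry)
  {
          /* ignores any union/overlay type bits in d_flags */
          return dentry->d_inode == NULL;
  }

  static inline bool d_really_is_positive(const struct dentry *dentry)
  {
          return dentry->d_inode != NULL;
  }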
2015-04-15nfs: generic_write_checks() shouldn't be done on swapout...Al Viro3-15/+11
Signed-off-by: Al Viro <[email protected]>
2015-04-15i2c: core: Export bus recovery functionsMark Brown1-0/+3
Current -next fails to link an ARM allmodconfig because drivers that use the core recovery functions can be built as modules but those functions are not exported: ERROR: "i2c_generic_gpio_recovery" [drivers/i2c/busses/i2c-davinci.ko] undefined! ERROR: "i2c_generic_scl_recovery" [drivers/i2c/busses/i2c-davinci.ko] undefined! ERROR: "i2c_recover_bus" [drivers/i2c/busses/i2c-davinci.ko] undefined! Add exports to fix this. Fixes: 5f9296ba21b3c (i2c: Add bus recovery infrastructure) Signed-off-by: Mark Brown <[email protected]> Signed-off-by: Wolfram Sang <[email protected]>
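The fix amounts to export lines next to the definitions in the i2c core, one per symbol named in the link errors (GPL-only export is an assumption consistent with the i2c core's usual practice):

  EXPORT_SYMBOL_GPL(i2c_generic_gpio_recovery);
  EXPORT_SYMBOL_GPL(i2c_generic_scl_recovery);
  EXPORT_SYMBOL_GPL(i2c_recover_bus);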
2015-04-15Merge branch 'kconfig' of git://git.kernel.org/pub/scm/linux/kernel/git/mmarek/kbuildLinus Torvalds15-198/+174
Pull kconfig updates from Michal Marek: "Here is the kconfig stuff for v4.1-rc1: - fixes for mergeconfig (used by make kvmconfig/tinyconfig) - header cleanup - make -s *config is silent now" * 'kconfig' of git://git.kernel.org/pub/scm/linux/kernel/git/mmarek/kbuild: kconfig: Do not print status messages in make -s mode kconfig: Simplify Makefile kbuild: add generic mergeconfig target, %.config merge_config.sh: rename MAKE to RUNMAKE merge_config.sh: improve indentation kbuild: mergeconfig: remove redundant $(objtree) kbuild: mergeconfig: move an error check to merge_config.sh kbuild: mergeconfig: fix "jobserver unavailable" warning kconfig: Remove unnecessary prototypes from headers kconfig: Remove dead code kconfig: Get rid of the P() macro in headers kconfig: fix a misspelling in scripts/kconfig/merge_config.sh
2015-04-15Merge branch 'kbuild' of git://git.kernel.org/pub/scm/linux/kernel/git/mmarek/kbuildLinus Torvalds11-48/+50
Pull kbuild updates from Michal Marek: "Here is the first round of kbuild changes for v4.1-rc1: - kallsyms fix for ARM and cleanup - make dep(end) removed (developers have no sense of nostalgia these days...) - include Makefiles by relative path - stop useless rebuilds of asm-offsets.h and bounds.h" * 'kbuild' of git://git.kernel.org/pub/scm/linux/kernel/git/mmarek/kbuild: Kbuild: kallsyms: drop special handling of pre-3.0 GCC symbols Kbuild: kallsyms: ignore veneers emitted by the ARM linker kbuild: ia64: use $(src)/Makefile.gate rather than particular path kbuild: include $(src)/Makefile rather than $(obj)/Makefile kbuild: use relative path more to include Makefile kbuild: use relative path to include Makefile kbuild: do not add $(bounds-file) and $(offsets-file) to targets kbuild: remove warning about "make depend" kbuild: Don't reset timestamps in include/generated if not needed
2015-04-15Merge branch 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-securityLinus Torvalds31-1156/+1953
Pull security subsystem updates from James Morris: "Highlights for this window: - improved AVC hashing for SELinux by John Brooks and Stephen Smalley - addition of an unconfined label to Smack - Smack documentation update - TPM driver updates" * 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security: (28 commits) lsm: copy comm before calling audit_log to avoid race in string printing tomoyo: Do not generate empty policy files tomoyo: Use if_changed when generating builtin-policy.h tomoyo: Use bin2c to generate builtin-policy.h selinux: increase avtab max buckets selinux: Use a better hash function for avtab selinux: convert avtab hash table to flex_array selinux: reconcile security_netlbl_secattr_to_sid() and mls_import_netlbl_cat() selinux: remove unnecessary pointer reassignment Smack: Updates for Smack documentation tpm/st33zp24/spi: Add missing device table for spi phy. tpm/st33zp24: Add proper wait for ordinal duration in case of irq mode smack: Fix gcc warning from unused smack_syslog_lock mutex in smackfs.c Smack: Allow an unconfined label in bringup mode Smack: getting the Smack security context of keys Smack: Assign smack_known_web as default smk_in label for kernel thread's socket tpm/tpm_infineon: Use struct dev_pm_ops for power management MAINTAINERS: Add Jason as designated reviewer for TPM tpm: Update KConfig text to include TPM2.0 FIFO chips tpm/st33zp24/dts/st33zp24-spi: Add dts documentation for st33zp24 spi phy ...
2015-04-15Merge git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6Linus Torvalds168-2202/+18223
Pull crypto update from Herbert Xu: "Here is the crypto update for 4.1: New interfaces: - user-space interface for AEAD - user-space interface for RNG (i.e., pseudo RNG) New hashes: - ARMv8 SHA1/256 - ARMv8 AES - ARMv8 GHASH - ARM assembler and NEON SHA256 - MIPS OCTEON SHA1/256/512 - MIPS img-hash SHA1/256 and MD5 - Power 8 VMX AES/CBC/CTR/GHASH - PPC assembler AES, SHA1/256 and MD5 - Broadcom IPROC RNG driver Cleanups/fixes: - prevent internal helper algos from being exposed to user-space - merge common code from assembly/C SHA implementations - misc fixes" * git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (169 commits) crypto: arm - workaround for building with old binutils crypto: arm/sha256 - avoid sha256 code on ARMv7-M crypto: x86/sha512_ssse3 - move SHA-384/512 SSSE3 implementation to base layer crypto: x86/sha256_ssse3 - move SHA-224/256 SSSE3 implementation to base layer crypto: x86/sha1_ssse3 - move SHA-1 SSSE3 implementation to base layer crypto: arm64/sha2-ce - move SHA-224/256 ARMv8 implementation to base layer crypto: arm64/sha1-ce - move SHA-1 ARMv8 implementation to base layer crypto: arm/sha2-ce - move SHA-224/256 ARMv8 implementation to base layer crypto: arm/sha256 - move SHA-224/256 ASM/NEON implementation to base layer crypto: arm/sha1-ce - move SHA-1 ARMv8 implementation to base layer crypto: arm/sha1_neon - move SHA-1 NEON implementation to base layer crypto: arm/sha1 - move SHA-1 ARM asm implementation to base layer crypto: sha512-generic - move to generic glue implementation crypto: sha256-generic - move to generic glue implementation crypto: sha1-generic - move to generic glue implementation crypto: sha512 - implement base layer for SHA-512 crypto: sha256 - implement base layer for SHA-256 crypto: sha1 - implement base layer for SHA-1 crypto: api - remove instance when test failed crypto: api - Move alg ref count init to crypto_check_alg ...