The changelog of commit dcda9b04713c ("mm, tree wide: replace __GFP_REPEAT by
__GFP_RETRY_MAYFAIL with more useful semantic") has a very nice description
of the GFP flags that affect the reclaim behaviour of the page allocator.
It would be a pity to keep this description buried in the log, so let's expose
it in Documentation/ as well.
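For illustration only (not part of the moved text), a minimal sketch of how
the flags described there are typically used; big_buf_alloc() and its
fallback policy are hypothetical:

  #include <linux/slab.h>
  #include <linux/vmalloc.h>

  /* Try hard for a large buffer, but accept failure and fall back to
   * vmalloc() instead of invoking the OOM killer. */
  static void *big_buf_alloc(size_t size)
  {
          void *buf;

          buf = kmalloc(size, GFP_KERNEL | __GFP_RETRY_MAYFAIL);
          if (!buf)
                  buf = vmalloc(size);
          return buf;
  }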
Cc: Michal Hocko <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Signed-off-by: Mike Rapoport <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Jonathan Corbet <[email protected]>
|
|
Mention struct_size(), array_size() and array3_size() in the same place
as kmalloc() and friends.
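As a rough sketch (struct sample, sample_alloc() and grid_alloc() are made up
for illustration), the size helpers pair with kmalloc() like this:

  #include <linux/overflow.h>
  #include <linux/slab.h>
  #include <linux/types.h>

  struct sample {
          size_t count;
          u32 items[];            /* flexible array member */
  };

  static struct sample *sample_alloc(size_t count)
  {
          struct sample *s;

          /* struct_size() saturates to SIZE_MAX on overflow, so kmalloc()
           * fails cleanly instead of returning a too-small buffer. */
          s = kmalloc(struct_size(s, items, count), GFP_KERNEL);
          if (s)
                  s->count = count;
          return s;
  }

  static u32 *grid_alloc(size_t rows, size_t cols)
  {
          /* array3_size() does the same for rows * cols * sizeof(u32) */
          return kmalloc(array3_size(rows, cols, sizeof(u32)), GFP_KERNEL);
  }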
Signed-off-by: Chris Packham <[email protected]>
Acked-by: Mike Rapoport <[email protected]>
Reviewed-by: Matthew Wilcox (Oracle) <[email protected]>
Signed-off-by: Jonathan Corbet <[email protected]>
|
|
These are no longer needed as the documentation build will automatically
add the cross references.
Signed-off-by: Chris Packham <[email protected]>
Acked-by: Mike Rapoport <[email protected]>
Reviewed-by: Matthew Wilcox (Oracle) <[email protected]>
Signed-off-by: Jonathan Corbet <[email protected]>
|
|
"on the safe size" should be "on the safe side".
Signed-off-by: Chris Packham <[email protected]>
Acked-by: Mike Rapoport <[email protected]>
Reviewed-by: Matthew Wilcox (Oracle) <[email protected]>
Signed-off-by: Jonathan Corbet <[email protected]>
|
|
In most configurations, kmalloc() happens to return naturally aligned
(i.e. aligned to the block size itself) blocks for power-of-two sizes.
That means some kmalloc() users might unknowingly rely on that
alignment, until stuff breaks when the kernel is built with e.g.
CONFIG_SLUB_DEBUG or CONFIG_SLOB and blocks stop being aligned. Then
developers have to devise workarounds such as their own kmem caches with
a specified alignment [1], which is not always practical, as recently
evidenced in [2].
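As a hedged sketch of that kind of workaround (the cache name and the
512-byte size are hypothetical), such code typically requests the alignment
explicitly instead of relying on kmalloc(512) being naturally aligned:

  #include <linux/init.h>
  #include <linux/slab.h>

  static struct kmem_cache *buf_cache;

  static int __init buf_cache_init(void)
  {
          /* size 512, explicit align 512, no flags, no constructor */
          buf_cache = kmem_cache_create("buf_cache", 512, 512, 0, NULL);
          return buf_cache ? 0 : -ENOMEM;
  }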
The topic has been discussed at LSF/MM 2019 [3]. Adding a
'kmalloc_aligned()' variant would not help with code unknowingly relying
on the implicit alignment. For slab implementations it would either
require creating more kmalloc caches, or allocating a larger size and only
giving back part of it. That would be wasteful, especially with a generic
alignment parameter (in contrast with a fixed alignment to size).
Ideally we should provide mm users with what they need without difficult
workarounds or their own reimplementations, so let's make the kmalloc()
alignment to size explicitly guaranteed for power-of-two sizes under all
configurations. What does this mean for the three available allocators?
* SLAB object layout happens to be mostly unchanged by the patch. The
implicitly provided alignment could be compromised with
CONFIG_DEBUG_SLAB due to redzoning; however, SLAB disables redzoning for
caches with an alignment larger than unsigned long long. In practice, on at
least x86, this includes kmalloc caches, as they use cache-line alignment,
which is larger than that. Still, this patch ensures alignment on all
arches and cache sizes.
* SLUB layout is also unchanged unless redzoning is enabled through
CONFIG_SLUB_DEBUG and a boot parameter for the particular kmalloc cache.
With this patch, explicit alignment is guaranteed with redzoning as
well. This will result in more memory being wasted, but that should be
acceptable in a debugging scenario.
* SLOB has no implicit alignment so this patch adds it explicitly for
kmalloc(). The potential downside is increased fragmentation. While
pathological allocation scenarios are certainly possible, in my testing,
after booting an x86_64 kernel+userspace with virtme, around 16MB of memory
was consumed by slab pages both before and after the patch, with the
difference in the noise.
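As an illustrative fragment of the resulting guarantee (not taken from the
patch itself): after this change a power-of-two kmalloc() block can be assumed
to be aligned to its size under any of the three allocators, e.g.

  void *buf = kmalloc(4096, GFP_KERNEL);

  /* size-aligned by the new guarantee, so a 4096-byte buffer
   * never straddles a page boundary */
  WARN_ON(buf && ((unsigned long)buf & (4096 - 1)));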
[1] https://lore.kernel.org/linux-btrfs/c3157c8e8e0e7588312b40c853f65c02fe6c957a.1566399731.git.christophe.leroy@c-s.fr/
[2] https://lore.kernel.org/linux-fsdevel/[email protected]/
[3] https://lwn.net/Articles/787740/
[[email protected]: documentation fixlet, per Matthew]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Vlastimil Babka <[email protected]>
Reviewed-by: Matthew Wilcox (Oracle) <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Acked-by: Kirill A. Shutemov <[email protected]>
Acked-by: Christoph Hellwig <[email protected]>
Cc: David Sterba <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Pekka Enberg <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: Ming Lei <[email protected]>
Cc: Dave Chinner <[email protected]>
Cc: "Darrick J . Wong" <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Cc: James Bottomley <[email protected]>
Cc: Vlastimil Babka <[email protected]>
Cc: Joonsoo Kim <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Sphinx emits the warning:
WARNING: undefined label: memory-allocation ...
This seems to be caused by the use of a hyphen in the label name instead
of an underscore. Using an underscore for the label name and the
reference clears the warning.
Use an underscore instead of a hyphen in the label and the reference.
Signed-off-by: Tobin C. Harding <[email protected]>
Signed-off-by: Jonathan Corbet <[email protected]>
|
|
Mention that when part of a slab cache might be exported to userspace,
the cache should be created using kmem_cache_create_usercopy().
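A hedged sketch of such a cache (struct session and its fields are
hypothetical): only the region that may be copied to or from userspace is
whitelisted for hardened usercopy:

  #include <linux/init.h>
  #include <linux/slab.h>
  #include <linux/stddef.h>
  #include <linux/types.h>

  struct session {
          u64 id;
          char name[32];          /* the only part copied to userspace */
          void *private;
  };

  static struct kmem_cache *session_cache;

  static int __init session_cache_init(void)
  {
          session_cache = kmem_cache_create_usercopy("session",
                          sizeof(struct session), 0, SLAB_HWCACHE_ALIGN,
                          offsetof(struct session, name),
                          sizeof_field(struct session, name), NULL);
          return session_cache ? 0 : -ENOMEM;
  }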
Signed-off-by: Mike Rapoport <[email protected]>
Signed-off-by: Jonathan Corbet <[email protected]>
|
|
I just went looking for the memory allocation guide in the MM docs instead
of in the core API. For the benefit of the next person who makes that
mistake, link to it from the MM docs.
Signed-off-by: Matthew Wilcox <[email protected]>
Acked-by: Mike Rapoport <[email protected]>
Signed-off-by: Jonathan Corbet <[email protected]>
|
|
Signed-off-by: Mike Rapoport <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Acked-by: Randy Dunlap <[email protected]>
Signed-off-by: Jonathan Corbet <[email protected]>
|