|
KSM pages can be shared between tasks that are not necessarily related
to each other from a NUMA perspective. This patch causes those pages to
be ignored by automatic NUMA balancing so they do not migrate and do not
cause unrelated tasks to be grouped together.
Signed-off-by: Mel Gorman <[email protected]>
Reviewed-by: Rik van Riel <[email protected]>
Cc: Alex Thorlton <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
A low local/remote numa hinting fault ratio is potentially explained by
failed migrations. This patch adds a tracepoint that fires when
migration fails due to migration rate limitation.
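A minimal sketch of where such a tracepoint would fire (the tracepoint
name, arguments and surrounding field names here are illustrative
assumptions, not necessarily the exact ones added by this patch):

	/* in numamigrate_update_ratelimit(), sketch only */
	if (pgdat->numabalancing_migrate_nr_pages > ratelimit_pages) {
		trace_mm_numa_migrate_ratelimit(current, pgdat->node_id,
						nr_pages);
		return true;	/* rate limited: caller skips the migration */
	}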
Signed-off-by: Mel Gorman <[email protected]>
Reviewed-by: Rik van Riel <[email protected]>
Cc: Alex Thorlton <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
NUMA migrate rate limiting protects a migration counter and window
using a lock, but in some cases this lock can be contended. It is not
critical that the number of pages be perfectly accurate; lost updates
are acceptable. Reduce the importance of this lock.
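A minimal sketch of the approach, assuming the pgdat fields used by the
rate limiter in the mm/migrate.c of that era (treat the names as
illustrative):

	/* only take the lock to reset the rate-limit window */
	if (time_after(jiffies, pgdat->numabalancing_migrate_next_window)) {
		spin_lock(&pgdat->numabalancing_migrate_lock);
		pgdat->numabalancing_migrate_nr_pages = 0;
		pgdat->numabalancing_migrate_next_window = jiffies +
			msecs_to_jiffies(migrate_interval_millisecs);
		spin_unlock(&pgdat->numabalancing_migrate_lock);
	}
	if (pgdat->numabalancing_migrate_nr_pages > ratelimit_pages)
		return true;
	/* racy update outside the lock; lost updates are acceptable */
	pgdat->numabalancing_migrate_nr_pages += nr_pages;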
Signed-off-by: Mel Gorman <[email protected]>
Reviewed-by: Rik van Riel <[email protected]>
Cc: Alex Thorlton <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
numamigrate_update_ratelimit and numamigrate_isolate_page only have
callers in mm/migrate.c. This patch makes them static.
Signed-off-by: Mel Gorman <[email protected]>
Reviewed-by: Rik van Riel <[email protected]>
Cc: Alex Thorlton <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Show num_poisoned_pages when OOM occurs; it is a little helpful for
finding the reason. It will also be emitted any time show_mem() is
called.
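A sketch of the change, assuming it lands in the generic show_mem()
and reuses the existing num_poisoned_pages counter:

	#ifdef CONFIG_MEMORY_FAILURE
		printk("%lu pages hwpoisoned\n",
		       atomic_long_read(&num_poisoned_pages));
	#endif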
Signed-off-by: Xishi Qiu <[email protected]>
Suggested-by: Naoya Horiguchi <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Acked-by: David Rientjes <[email protected]>
Reviewed-by: Wanpeng Li <[email protected]>
Acked-by: KOSAKI Motohiro <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Add '#' to hwpoison_inject just as done in madvise_hwpoison.
Signed-off-by: Wanpeng Li <[email protected]>
Reviewed-by: Naoya Horiguchi <[email protected]>
Reviewed-by: Vladimir Murzin <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Check the nid parameter and produce a warning if it has the deprecated
MAX_NUMNODES value. Also re-assign NUMA_NO_NODE to the nid parameter
in this case.
These will help identify wrong API usage (the caller) and make the
code simpler.
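A sketch of the check (the warning text is illustrative):

	if (WARN_ONCE(nid == MAX_NUMNODES,
		      "Usage of MAX_NUMNODES is deprecated. Use NUMA_NO_NODE instead\n"))
		nid = NUMA_NO_NODE;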
Signed-off-by: Grygorii Strashko <[email protected]>
Signed-off-by: Santosh Shilimkar <[email protected]>
Cc: Yinghai Lu <[email protected]>
Cc: Tejun Heo <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Update x86 code to use NUMA_NO_NODE instead of MAX_NUMNODES when
calling memblock APIs, because the memblock API will be changed to use
NUMA_NO_NODE and would otherwise produce warnings during boot.
See:
https://lkml.org/lkml/2013/12/9/898
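The conversion itself is mechanical; a representative (illustrative)
call site would change along these lines:

	-	addr = memblock_alloc_nid(size, align, MAX_NUMNODES);
	+	addr = memblock_alloc_nid(size, align, NUMA_NO_NODE);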
Signed-off-by: Grygorii Strashko <[email protected]>
Cc: Santosh Shilimkar <[email protected]>
Cc: Stephen Rothwell <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: Yinghai Lu <[email protected]>
Acked-by: David Rientjes <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Switch to memblock interfaces for the early memory allocator instead
of the bootmem allocator. No functional change in behavior from the
bootmem users' point of view compared to the current code.
Archs already converted to NO_BOOTMEM now directly use memblock
interfaces instead of bootmem wrappers built on top of memblock. For
the archs which still use bootmem, these new APIs simply fall back to
the existing bootmem APIs.
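As an illustration of the conversion pattern in this series (the call
sites below are hypothetical; the memblock_virt_alloc* interfaces are
the ones introduced by the API patch later in this log):

	-	ptr = alloc_bootmem(size);
	+	ptr = memblock_virt_alloc(size, 0);

	-	ptr = alloc_bootmem_node(pgdat, size);
	+	ptr = memblock_virt_alloc_node(size, pgdat->node_id);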
Signed-off-by: Santosh Shilimkar <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: Grygorii Strashko <[email protected]>
Cc: H. Peter Anvin <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: KAMEZAWA Hiroyuki <[email protected]>
Cc: Konrad Rzeszutek Wilk <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Paul Walmsley <[email protected]>
Cc: Pavel Machek <[email protected]>
Cc: Russell King <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: Tony Lindgren <[email protected]>
Cc: Yinghai Lu <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Switch to memblock interfaces for the early memory allocator instead
of the bootmem allocator. No functional change in behavior from the
bootmem users' point of view compared to the current code.
Archs already converted to NO_BOOTMEM now directly use memblock
interfaces instead of bootmem wrappers built on top of memblock. For
the archs which still use bootmem, these new APIs simply fall back to
the existing bootmem APIs.
Signed-off-by: Santosh Shilimkar <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: Grygorii Strashko <[email protected]>
Cc: H. Peter Anvin <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: KAMEZAWA Hiroyuki <[email protected]>
Cc: Konrad Rzeszutek Wilk <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Paul Walmsley <[email protected]>
Cc: Pavel Machek <[email protected]>
Cc: Russell King <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: Tony Lindgren <[email protected]>
Cc: Yinghai Lu <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Switch to memblock interfaces for the early memory allocator instead
of the bootmem allocator. No functional change in behavior from the
bootmem users' point of view compared to the current code.
Archs already converted to NO_BOOTMEM now directly use memblock
interfaces instead of bootmem wrappers built on top of memblock. For
the archs which still use bootmem, these new APIs simply fall back to
the existing bootmem APIs.
Signed-off-by: Santosh Shilimkar <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: Grygorii Strashko <[email protected]>
Cc: H. Peter Anvin <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: KAMEZAWA Hiroyuki <[email protected]>
Cc: Konrad Rzeszutek Wilk <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Paul Walmsley <[email protected]>
Cc: Pavel Machek <[email protected]>
Cc: Russell King <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: Tony Lindgren <[email protected]>
Cc: Yinghai Lu <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Switch to memblock interfaces for the early memory allocator instead
of the bootmem allocator. No functional change in behavior from the
bootmem users' point of view compared to the current code.
Archs already converted to NO_BOOTMEM now directly use memblock
interfaces instead of bootmem wrappers built on top of memblock. For
the archs which still use bootmem, these new APIs simply fall back to
the existing bootmem APIs.
Signed-off-by: Grygorii Strashko <[email protected]>
Signed-off-by: Santosh Shilimkar <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: H. Peter Anvin <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: KAMEZAWA Hiroyuki <[email protected]>
Cc: Konrad Rzeszutek Wilk <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Paul Walmsley <[email protected]>
Cc: Pavel Machek <[email protected]>
Cc: Russell King <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: Tony Lindgren <[email protected]>
Cc: Yinghai Lu <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Correct the ensure_zone_is_initialized() function description to match
the introduced memblock APIs for early memory allocation.
Signed-off-by: Grygorii Strashko <[email protected]>
Signed-off-by: Santosh Shilimkar <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: H. Peter Anvin <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: KAMEZAWA Hiroyuki <[email protected]>
Cc: Konrad Rzeszutek Wilk <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Paul Walmsley <[email protected]>
Cc: Pavel Machek <[email protected]>
Cc: Russell King <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: Tony Lindgren <[email protected]>
Cc: Yinghai Lu <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Switch to memblock interfaces for the early memory allocator instead
of the bootmem allocator. No functional change in behavior from the
bootmem users' point of view compared to the current code.
Archs already converted to NO_BOOTMEM now directly use memblock
interfaces instead of bootmem wrappers built on top of memblock. For
the archs which still use bootmem, these new APIs simply fall back to
the existing bootmem APIs.
Signed-off-by: Santosh Shilimkar <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: Grygorii Strashko <[email protected]>
Cc: H. Peter Anvin <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: KAMEZAWA Hiroyuki <[email protected]>
Cc: Konrad Rzeszutek Wilk <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Paul Walmsley <[email protected]>
Cc: Pavel Machek <[email protected]>
Cc: Russell King <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: Tony Lindgren <[email protected]>
Cc: Yinghai Lu <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Switch to memblock interfaces for the early memory allocator instead
of the bootmem allocator. No functional change in behavior from the
bootmem users' point of view compared to the current code.
Archs already converted to NO_BOOTMEM now directly use memblock
interfaces instead of bootmem wrappers built on top of memblock. For
the archs which still use bootmem, these new APIs simply fall back to
the existing bootmem APIs.
Signed-off-by: Grygorii Strashko <[email protected]>
Signed-off-by: Santosh Shilimkar <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: Grygorii Strashko <[email protected]>
Cc: H. Peter Anvin <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: KAMEZAWA Hiroyuki <[email protected]>
Cc: Konrad Rzeszutek Wilk <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Paul Walmsley <[email protected]>
Cc: Pavel Machek <[email protected]>
Cc: Russell King <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: Tony Lindgren <[email protected]>
Cc: Yinghai Lu <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Switch to memblock interfaces for the early memory allocator instead
of the bootmem allocator. No functional change in behavior from the
bootmem users' point of view compared to the current code.
Archs already converted to NO_BOOTMEM now directly use memblock
interfaces instead of bootmem wrappers built on top of memblock. For
the archs which still use bootmem, these new APIs simply fall back to
the existing bootmem APIs.
Signed-off-by: Grygorii Strashko <[email protected]>
Signed-off-by: Santosh Shilimkar <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: H. Peter Anvin <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: KAMEZAWA Hiroyuki <[email protected]>
Cc: Konrad Rzeszutek Wilk <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Paul Walmsley <[email protected]>
Cc: Pavel Machek <[email protected]>
Cc: Russell King <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: Tony Lindgren <[email protected]>
Cc: Yinghai Lu <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Switch to memblock interfaces for the early memory allocator instead
of the bootmem allocator. No functional change in behavior from the
bootmem users' point of view compared to the current code.
Archs already converted to NO_BOOTMEM now directly use memblock
interfaces instead of bootmem wrappers built on top of memblock. For
the archs which still use bootmem, these new APIs simply fall back to
the existing bootmem APIs.
Signed-off-by: Santosh Shilimkar <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: Grygorii Strashko <[email protected]>
Cc: H. Peter Anvin <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: KAMEZAWA Hiroyuki <[email protected]>
Cc: Konrad Rzeszutek Wilk <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Paul Walmsley <[email protected]>
Cc: Pavel Machek <[email protected]>
Cc: Russell King <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: Tony Lindgren <[email protected]>
Cc: Yinghai Lu <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Switch to memblock interfaces for the early memory allocator instead
of the bootmem allocator. No functional change in behavior from the
bootmem users' point of view compared to the current code.
Archs already converted to NO_BOOTMEM now directly use memblock
interfaces instead of bootmem wrappers built on top of memblock. For
the archs which still use bootmem, these new APIs simply fall back to
the existing bootmem APIs.
Signed-off-by: Santosh Shilimkar <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: Grygorii Strashko <[email protected]>
Cc: H. Peter Anvin <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: KAMEZAWA Hiroyuki <[email protected]>
Cc: Konrad Rzeszutek Wilk <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Paul Walmsley <[email protected]>
Cc: Pavel Machek <[email protected]>
Cc: Russell King <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: Tony Lindgren <[email protected]>
Cc: Yinghai Lu <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Switch to memblock interfaces for the early memory allocator instead
of the bootmem allocator. No functional change in behavior from the
bootmem users' point of view compared to the current code.
Archs already converted to NO_BOOTMEM now directly use memblock
interfaces instead of bootmem wrappers built on top of memblock. For
the archs which still use bootmem, these new APIs simply fall back to
the existing bootmem APIs.
Signed-off-by: Santosh Shilimkar <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: Grygorii Strashko <[email protected]>
Cc: H. Peter Anvin <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: KAMEZAWA Hiroyuki <[email protected]>
Cc: Konrad Rzeszutek Wilk <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Paul Walmsley <[email protected]>
Cc: Pavel Machek <[email protected]>
Cc: Russell King <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: Tony Lindgren <[email protected]>
Cc: Yinghai Lu <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Switch to memblock interfaces for the early memory allocator instead
of the bootmem allocator. No functional change in behavior from the
bootmem users' point of view compared to the current code.
Archs already converted to NO_BOOTMEM now directly use memblock
interfaces instead of bootmem wrappers built on top of memblock. For
the archs which still use bootmem, these new APIs simply fall back to
the existing bootmem APIs.
Acked-by: "Rafael J. Wysocki" <[email protected]>
Signed-off-by: Santosh Shilimkar <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: Grygorii Strashko <[email protected]>
Cc: H. Peter Anvin <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: KAMEZAWA Hiroyuki <[email protected]>
Cc: Konrad Rzeszutek Wilk <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Paul Walmsley <[email protected]>
Cc: Pavel Machek <[email protected]>
Cc: Russell King <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: Tony Lindgren <[email protected]>
Cc: Yinghai Lu <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Switch to memblock interfaces for the early memory allocator instead
of the bootmem allocator. No functional change in behavior from the
bootmem users' point of view compared to the current code.
Archs already converted to NO_BOOTMEM now directly use memblock
interfaces instead of bootmem wrappers built on top of memblock. For
the archs which still use bootmem, these new APIs simply fall back to
the existing bootmem APIs.
Signed-off-by: Grygorii Strashko <[email protected]>
Signed-off-by: Santosh Shilimkar <[email protected]>
Cc: Yinghai Lu <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: H. Peter Anvin <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: KAMEZAWA Hiroyuki <[email protected]>
Cc: Konrad Rzeszutek Wilk <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Paul Walmsley <[email protected]>
Cc: Pavel Machek <[email protected]>
Cc: Russell King <[email protected]>
Cc: Tony Lindgren <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Switch to memblock interfaces for the early memory allocator instead
of the bootmem allocator. No functional change in behavior from the
bootmem users' point of view compared to the current code.
Archs already converted to NO_BOOTMEM now directly use memblock
interfaces instead of bootmem wrappers built on top of memblock. For
the archs which still use bootmem, these new APIs simply fall back to
the existing bootmem APIs.
Signed-off-by: Grygorii Strashko <[email protected]>
Signed-off-by: Santosh Shilimkar <[email protected]>
Cc: Yinghai Lu <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: H. Peter Anvin <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: KAMEZAWA Hiroyuki <[email protected]>
Cc: Konrad Rzeszutek Wilk <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Paul Walmsley <[email protected]>
Cc: Pavel Machek <[email protected]>
Cc: Russell King <[email protected]>
Cc: Tony Lindgren <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Switch to memblock interfaces for the early memory allocator instead
of the bootmem allocator. No functional change in behavior from the
bootmem users' point of view compared to the current code.
Archs already converted to NO_BOOTMEM now directly use memblock
interfaces instead of bootmem wrappers built on top of memblock. For
the archs which still use bootmem, these new APIs simply fall back to
the existing bootmem APIs.
Signed-off-by: Santosh Shilimkar <[email protected]>
Cc: Yinghai Lu <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: Grygorii Strashko <[email protected]>
Cc: H. Peter Anvin <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: KAMEZAWA Hiroyuki <[email protected]>
Cc: Konrad Rzeszutek Wilk <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Paul Walmsley <[email protected]>
Cc: Pavel Machek <[email protected]>
Cc: Russell King <[email protected]>
Cc: Tony Lindgren <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Introduce memblock memory allocation APIs which allow supporting PAE
or LPAE extensions on 32-bit archs where the physical memory start
address can be beyond 4GB. In such cases, the existing bootmem APIs,
which operate on 32-bit addresses, won't work, and the memblock layer,
which operates on 64-bit addresses, is needed.
So we add equivalent APIs so that we can replace usage of bootmem with
memblock interfaces. Architectures already converted to NO_BOOTMEM use
these new memblock interfaces. The architectures which are still not
converted to NO_BOOTMEM continue to function as is because we still
maintain the fallback option of the bootmem back-end supporting these
new interfaces. So there is no functional change as such.
In the long run, once all the architectures move to NO_BOOTMEM, we can
get rid of the bootmem layer completely. This is one step towards
removing the core code's dependency on bootmem, and it also gives a
path for architectures to move away from bootmem.
The proposed interfaces become active if both CONFIG_HAVE_MEMBLOCK and
CONFIG_NO_BOOTMEM are specified by the arch. In the !CONFIG_NO_BOOTMEM
case, the memblock wrappers fall back to the existing bootmem APIs so
that archs not converted to NO_BOOTMEM continue to work as is.
The meaning of MEMBLOCK_ALLOC_ACCESSIBLE and MEMBLOCK_ALLOC_ANYWHERE
is kept the same.
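A simplified sketch of the fallback pattern described above (the real
header layout differs; the bootmem goal used here is the conventional
lowmem limit):

	#if defined(CONFIG_HAVE_MEMBLOCK) && defined(CONFIG_NO_BOOTMEM)
	/* memblock-backed implementation; works with 64-bit phys addresses */
	void * __init memblock_virt_alloc(phys_addr_t size, phys_addr_t align);
	#else
	/* archs still using bootmem fall back to the existing allocator */
	static inline void * __init memblock_virt_alloc(phys_addr_t size,
							phys_addr_t align)
	{
		return __alloc_bootmem(size, align, __pa(MAX_DMA_ADDRESS));
	}
	#endif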
[[email protected]: s/depricated/deprecated/]
Signed-off-by: Grygorii Strashko <[email protected]>
Signed-off-by: Santosh Shilimkar <[email protected]>
Cc: Yinghai Lu <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: H. Peter Anvin <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: KAMEZAWA Hiroyuki <[email protected]>
Cc: Konrad Rzeszutek Wilk <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Paul Walmsley <[email protected]>
Cc: Pavel Machek <[email protected]>
Cc: Russell King <[email protected]>
Cc: Tony Lindgren <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
It's recommended to use NUMA_NO_NODE everywhere to select "process any
node" behavior or to indicate that no node id was specified.
Hence, update the __next_free_mem_range*() APIs to accept both
NUMA_NO_NODE and MAX_NUMNODES, but emit a warning once on
MAX_NUMNODES, and correct the corresponding APIs' documentation to
describe the new behavior. Also, update other memblock/nobootmem APIs
where MAX_NUMNODES is used directly.
The change was suggested by Tejun Heo.
Signed-off-by: Grygorii Strashko <[email protected]>
Signed-off-by: Santosh Shilimkar <[email protected]>
Cc: Yinghai Lu <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: H. Peter Anvin <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: KAMEZAWA Hiroyuki <[email protected]>
Cc: Konrad Rzeszutek Wilk <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Paul Walmsley <[email protected]>
Cc: Pavel Machek <[email protected]>
Cc: Russell King <[email protected]>
Cc: Tony Lindgren <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Reorder parameters of memblock_find_in_range_node to be consistent with
other memblock APIs.
The change was suggested by Tejun Heo <[email protected]>.
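Sketched as a diff of the prototype (size and align move first, to
match the other memblock allocators):

	-phys_addr_t memblock_find_in_range_node(phys_addr_t start, phys_addr_t end,
	-					phys_addr_t size, phys_addr_t align,
	-					int nid);
	+phys_addr_t memblock_find_in_range_node(phys_addr_t size, phys_addr_t align,
	+					phys_addr_t start, phys_addr_t end,
	+					int nid);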
Signed-off-by: Grygorii Strashko <[email protected]>
Signed-off-by: Santosh Shilimkar <[email protected]>
Cc: Yinghai Lu <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: H. Peter Anvin <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: KAMEZAWA Hiroyuki <[email protected]>
Cc: Konrad Rzeszutek Wilk <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Paul Walmsley <[email protected]>
Cc: Pavel Machek <[email protected]>
Cc: Russell King <[email protected]>
Cc: Tony Lindgren <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Don't produce a warning; interpret align == 0 as "default align" equal
to SMP_CACHE_BYTES when the caller of memblock_alloc_base_nid()
doesn't specify an alignment for the block.
This is done in preparation for introducing a common memblock alloc
interface, to make code behavior consistent. More details are in the
thread below:
https://lkml.org/lkml/2013/10/13/117
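The change amounts to substituting a default for the warning, roughly:

	/* accept align == 0 as a request for the default alignment */
	if (!align)
		align = SMP_CACHE_BYTES;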
Signed-off-by: Grygorii Strashko <[email protected]>
Signed-off-by: Santosh Shilimkar <[email protected]>
Cc: Yinghai Lu <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: H. Peter Anvin <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: KAMEZAWA Hiroyuki <[email protected]>
Cc: Konrad Rzeszutek Wilk <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Paul Walmsley <[email protected]>
Cc: Pavel Machek <[email protected]>
Cc: Russell King <[email protected]>
Cc: Tony Lindgren <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Clean-up to remove the dependency on bootmem headers.
Signed-off-by: Grygorii Strashko <[email protected]>
Signed-off-by: Santosh Shilimkar <[email protected]>
Reviewed-by: Tejun Heo <[email protected]>
Cc: Yinghai Lu <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: H. Peter Anvin <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: KAMEZAWA Hiroyuki <[email protected]>
Cc: Konrad Rzeszutek Wilk <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Paul Walmsley <[email protected]>
Cc: Pavel Machek <[email protected]>
Cc: Russell King <[email protected]>
Cc: Tony Lindgren <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
__free_pages_bootmem() is used internally by the MM core and is
already declared in internal.h, so remove the duplicated declaration.
Signed-off-by: Grygorii Strashko <[email protected]>
Signed-off-by: Santosh Shilimkar <[email protected]>
Reviewed-by: Tejun Heo <[email protected]>
Cc: Yinghai Lu <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: H. Peter Anvin <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: KAMEZAWA Hiroyuki <[email protected]>
Cc: Konrad Rzeszutek Wilk <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Paul Walmsley <[email protected]>
Cc: Pavel Machek <[email protected]>
Cc: Russell King <[email protected]>
Cc: Tony Lindgren <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Currently the nobootmem allocator will always try to free the memory
allocated for reserved memory regions (free_low_memory_core_early())
without taking into account the current memblock debugging
configuration (CONFIG_ARCH_DISCARD_MEMBLOCK and CONFIG_DEBUG_FS
state).
As a result, if:
- CONFIG_DEBUG_FS is defined;
- CONFIG_ARCH_DISCARD_MEMBLOCK is not defined;
- the reserved memory regions array has been resized during boot
then:
- the memory allocated for the reserved memory regions array will be
  freed to the buddy allocator;
- the debugfs entry "sys/kernel/debug/memblock/reserved" will show
  garbage instead of the state of the memory reservations, like:
  0: 0x98393bc0..0x9a393bbf
  1: 0xff120000..0xff11ffff
  2: 0x00000000..0xffffffff
Hence, do not free the memory allocated for the reserved memory
regions if defined(CONFIG_DEBUG_FS) &&
!defined(CONFIG_ARCH_DISCARD_MEMBLOCK).
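A sketch of the guard, placed in the helper that hands the allocated
reserved-regions array back for freeing (the exact placement is
illustrative):

	#if defined(CONFIG_DEBUG_FS) && !defined(CONFIG_ARCH_DISCARD_MEMBLOCK)
		/* keep the array alive so the memblock debugfs files stay valid */
		return 0;
	#endif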
Signed-off-by: Grygorii Strashko <[email protected]>
Signed-off-by: Santosh Shilimkar <[email protected]>
Reviewed-by: Tejun Heo <[email protected]>
Cc: Yinghai Lu <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: H. Peter Anvin <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: KAMEZAWA Hiroyuki <[email protected]>
Cc: Konrad Rzeszutek Wilk <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Paul Walmsley <[email protected]>
Cc: Pavel Machek <[email protected]>
Cc: Russell King <[email protected]>
Cc: Tony Lindgren <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
The memblock current limit value is used to limit early boot memory
allocations to below the max low memory address by default, as the
kernel can only access low memory.
Hence, set the memblock current limit value to the max mapped low
memory address instead of the max mapped memory address.
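On x86 this is roughly a one-line change; a sketch of the idea (the
exact call site in arch/x86 may differ):

	-	memblock_set_current_limit(max_pfn_mapped << PAGE_SHIFT);
	+	memblock_set_current_limit(max_low_pfn_mapped << PAGE_SHIFT);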
Signed-off-by: Santosh Shilimkar <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: Grygorii Strashko <[email protected]>
Cc: Yinghai Lu <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: H. Peter Anvin <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: KAMEZAWA Hiroyuki <[email protected]>
Cc: Konrad Rzeszutek Wilk <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Paul Walmsley <[email protected]>
Cc: Pavel Machek <[email protected]>
Cc: Russell King <[email protected]>
Cc: Tony Lindgren <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
find_lock_task_mm() expects to be called under the rcu or tasklist
lock, but it seems that at least
oom_unkillable_task()->task_in_mem_cgroup() and
mem_cgroup_out_of_memory()->oom_badness() can call it locklessly.
Perhaps we could fix the callers, but this patch simply adds the rcu
lock into find_lock_task_mm(). This also allows us to simplify one of
its callers, oom_kill_process(), a bit.
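The resulting function looks roughly like this (a sketch;
for_each_thread() comes from the earlier patch in this series):

	struct task_struct *find_lock_task_mm(struct task_struct *p)
	{
		struct task_struct *t;

		rcu_read_lock();	/* the lock this patch adds */
		for_each_thread(p, t) {
			task_lock(t);
			if (likely(t->mm))
				goto found;
			task_unlock(t);
		}
		t = NULL;
	found:
		rcu_read_unlock();
		return t;
	}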
Signed-off-by: Oleg Nesterov <[email protected]>
Cc: Sergey Dyasly <[email protected]>
Cc: Sameer Nanda <[email protected]>
Cc: "Eric W. Biederman" <[email protected]>
Cc: Frederic Weisbecker <[email protected]>
Cc: Mandeep Singh Baines <[email protected]>
Cc: "Ma, Xindong" <[email protected]>
Reviewed-by: Michal Hocko <[email protected]>
Cc: "Tu, Xiaobing" <[email protected]>
Acked-by: David Rientjes <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
At least out_of_memory() calls has_intersects_mems_allowed() without
even rcu_read_lock(); this is obviously buggy.
Add the necessary rcu_read_lock(). This means that we can not simply
return from the loop; we need a "bool ret" and "break".
While at it, swap the names of the task_structs (the argument and the
local). This cleans up the code a little bit and avoids the
unnecessary initialization.
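Roughly the resulting shape (a sketch; the two intersection checks
follow the existing oom_kill.c logic):

	static bool has_intersects_mems_allowed(struct task_struct *start,
						const nodemask_t *mask)
	{
		struct task_struct *tsk;
		bool ret = false;

		rcu_read_lock();
		for_each_thread(start, tsk) {
			if (mask)
				ret = mempolicy_nodemask_intersects(tsk, mask);
			else
				ret = cpuset_mems_allowed_intersects(current, tsk);
			if (ret)
				break;	/* can't just return; must drop the rcu lock */
		}
		rcu_read_unlock();

		return ret;
	}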
Signed-off-by: Oleg Nesterov <[email protected]>
Reviewed-by: Sergey Dyasly <[email protected]>
Tested-by: Sergey Dyasly <[email protected]>
Reviewed-by: Sameer Nanda <[email protected]>
Cc: "Eric W. Biederman" <[email protected]>
Cc: Frederic Weisbecker <[email protected]>
Cc: Mandeep Singh Baines <[email protected]>
Cc: "Ma, Xindong" <[email protected]>
Reviewed-by: Michal Hocko <[email protected]>
Cc: "Tu, Xiaobing" <[email protected]>
Acked-by: David Rientjes <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Change oom_kill.c to use for_each_thread() rather than the racy
while_each_thread(), which can loop forever if we race with exit.
Note also that most users were buggy even where while_each_thread()
was fine: the task can exit even _before_ rcu_read_lock().
Fortunately the new for_each_thread() only requires a stable
task_struct, so this change fixes both problems.
Signed-off-by: Oleg Nesterov <[email protected]>
Reviewed-by: Sergey Dyasly <[email protected]>
Tested-by: Sergey Dyasly <[email protected]>
Reviewed-by: Sameer Nanda <[email protected]>
Cc: "Eric W. Biederman" <[email protected]>
Cc: Frederic Weisbecker <[email protected]>
Cc: Mandeep Singh Baines <[email protected]>
Cc: "Ma, Xindong" <[email protected]>
Reviewed-by: Michal Hocko <[email protected]>
Cc: "Tu, Xiaobing" <[email protected]>
Acked-by: David Rientjes <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
while_each_thread() and next_thread() should die; almost every
lockless usage is wrong.
1. Unless g == current, the lockless while_each_thread() is not safe.
   while_each_thread(g, t) can loop forever if g exits; next_thread()
   can't reach the unhashed thread in this case. Note that this can
   happen even if g is the group leader, since it can exec.
2. Even if while_each_thread() itself was correct, people often use
   it wrongly.
   It was never safe to just take rcu_read_lock() and loop unless
   you verify that pid_alive(g) == T; even the first next_thread()
   can point to already freed/reused memory.
This patch adds signal_struct->thread_head and task->thread_node to
create a normal rcu-safe list with a stable head. The new
for_each_thread(g, t) helper is always safe under rcu_read_lock() as
long as this task_struct can't go away.
Note: of course it is ugly to have both task_struct->thread_node and
the old task_struct->thread_group; we will kill the latter later,
after we change the users of while_each_thread() to use
for_each_thread().
Perhaps we can kill it even before we convert all users; we could
reimplement next_thread(t) using the new thread_head/thread_node. But
we can't do this right now because it would lead to subtle behavioural
changes. For example, do/while_each_thread() always sees at least one
task, while for_each_thread() can do nothing if the whole thread group
has died. And thread_group_empty() currently has unclear semantics
unless thread_group_leader(p), so we need to audit the callers before
we can change it.
So this patch adds the new interface, which has to coexist with the
old one for some time; hopefully the next changes will be more or less
straightforward and the old one will go away soon.
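The new helper reduces to an rcu list walk over the new stable head,
roughly (a simplified sketch):

	#define __for_each_thread(signal, t)	\
		list_for_each_entry_rcu(t, &(signal)->thread_head, thread_node)

	#define for_each_thread(p, t)		\
		__for_each_thread((p)->signal, t)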
Signed-off-by: Oleg Nesterov <[email protected]>
Reviewed-by: Sergey Dyasly <[email protected]>
Tested-by: Sergey Dyasly <[email protected]>
Reviewed-by: Sameer Nanda <[email protected]>
Acked-by: David Rientjes <[email protected]>
Cc: "Eric W. Biederman" <[email protected]>
Cc: Frederic Weisbecker <[email protected]>
Cc: Mandeep Singh Baines <[email protected]>
Cc: "Ma, Xindong" <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: "Tu, Xiaobing" <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Now we have infrastructure in rmap_walk() to handle the differences
between the variants of the rmap traversing functions.
So, just use it in page_mkclean().
In this patch, I change the following things.
1. remove some variants of the rmap traversing functions.
cf> page_mkclean_file
2. mechanical change to use rmap_walk() in page_mkclean().
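After the change, page_mkclean() drives the walk with a control
structure, roughly (a sketch, assuming the rmap_walk_control callbacks
introduced by this series; the log reads newest-first):

	struct rmap_walk_control rwc = {
		.arg = (void *)&cleaned,
		.rmap_one = page_mkclean_one,
		.invalid_vma = invalid_mkclean_vma,
	};

	rmap_walk(page, &rwc);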
Signed-off-by: Joonsoo Kim <[email protected]>
Reviewed-by: Naoya Horiguchi <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Hillf Danton <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Now we have infrastructure in rmap_walk() to handle the differences
between the variants of the rmap traversing functions.
So, just use it in page_referenced().
In this patch, I change the following things.
1. remove some variants of the rmap traversing functions.
cf> page_referenced_ksm, page_referenced_anon, page_referenced_file
2. introduce a new struct page_referenced_arg and pass it to
page_referenced_one(), the main function of rmap_walk, in order to
count references, to store vm_flags and to check the finish condition.
3. mechanical change to use rmap_walk() in page_referenced().
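A sketch of the argument structure described in point 2 (comments are
interpretive):

	struct page_referenced_arg {
		int mapcount;		/* lets the walk stop early at 0 */
		int referenced;		/* count accumulated by rmap_one */
		unsigned long vm_flags;	/* flags gathered from referencing vmas */
		struct mem_cgroup *memcg;	/* precheck: skip other memcgs */
	};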
[[email protected]: fix BUG at rmap_walk]
Signed-off-by: Joonsoo Kim <[email protected]>
Reviewed-by: Naoya Horiguchi <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Hillf Danton <[email protected]>
Signed-off-by: Wanpeng Li <[email protected]>
Cc: Sasha Levin <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Now we have infrastructure in rmap_walk() to handle the differences
between the variants of the rmap traversing functions.
So, just use it in try_to_munlock().
In this patch, I change the following things.
1. remove some variants of the rmap traversing functions.
cf> try_to_unmap_ksm, try_to_unmap_anon, try_to_unmap_file
2. mechanical change to use rmap_walk() in try_to_munlock().
3. copy and paste comments.
Signed-off-by: Joonsoo Kim <[email protected]>
Reviewed-by: Naoya Horiguchi <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Hillf Danton <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Now we have infrastructure in rmap_walk() to handle the differences
between the variants of the rmap traversing functions.
So, just use it in try_to_unmap().
In this patch, I change the following things.
1. enable rmap_walk() if !CONFIG_MIGRATION.
2. mechanical change to use rmap_walk() in try_to_unmap().
Signed-off-by: Joonsoo Kim <[email protected]>
Reviewed-by: Naoya Horiguchi <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Hillf Danton <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
There are a lot of common parts in the traversing functions, but there
are also a few uncommon parts. By assigning the proper function
pointers in each rmap_walk_control, we can handle these differences
correctly.
The following are the differences we should handle.
1. difference of lock function in the anon mapping case
2. nonlinear handling in the file mapping case
3. prechecked condition:
   checking memcg in page_referenced(),
   checking VM_SHARED in page_mkclean(),
   checking temporary vma in try_to_unmap()
4. exit condition:
   checking page_mapped() in try_to_unmap()
So, in this patch, I introduce 4 function pointers to handle the above
differences.
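A sketch of the resulting control structure with the four pointers
(names per this series; treat the exact signatures as illustrative,
with comments keyed to the list above):

	struct rmap_walk_control {
		void *arg;
		int (*rmap_one)(struct page *page, struct vm_area_struct *vma,
				unsigned long addr, void *arg);
		int (*done)(struct page *page);		/* 4: exit condition */
		int (*file_nonlinear)(struct page *, struct address_space *,
				      struct vm_area_struct *); /* 2: nonlinear */
		struct anon_vma *(*anon_lock)(struct page *page); /* 1: lock */
		bool (*invalid_vma)(struct vm_area_struct *vma, void *arg); /* 3 */
	};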
Signed-off-by: Joonsoo Kim <[email protected]>
Cc: Naoya Horiguchi <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Hillf Danton <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
In each rmap traverse case there are some differences, so we need
function pointers, and arguments to them, in order to handle these
differences.
For this purpose, struct rmap_walk_control is introduced in this
patch, and will be extended in a following patch. Introducing and
extending are kept separate because it clarifies the changes.
Signed-off-by: Joonsoo Kim <[email protected]>
Reviewed-by: Naoya Horiguchi <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Hillf Danton <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
When we traverse an anon_vma, we need to take a read-side anon_lock.
But there are subtle differences between the situations, so we can't
use the same method to take the lock in each case. Therefore, we need
to make rmap_walk_anon() take a different lock function per case.
This patch is the first step, factoring the lock function for
anon_lock out of rmap_walk_anon(). It will be used in the case of
removing migration entries and as the default for rmap_walk_anon().
Signed-off-by: Joonsoo Kim <[email protected]>
Reviewed-by: Naoya Horiguchi <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Hillf Danton <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
To merge all kinds of rmap traverse functions, try_to_unmap(),
try_to_munlock(), page_referenced() and page_mkclean(), we need to
extract the common parts and separate out the non-common parts.
Nonlinear handling is done only in try_to_unmap_file(); the other rmap
traverse functions don't care about it. Therefore it is better to
factor nonlinear handling out of try_to_unmap_file() in order to merge
all kinds of rmap traverse functions easily.
Signed-off-by: Joonsoo Kim <[email protected]>
Reviewed-by: Naoya Horiguchi <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Hillf Danton <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Rmap traversing is used in five different cases: try_to_unmap(),
try_to_munlock(), page_referenced(), page_mkclean() and
remove_migration_ptes(). Each one implements its own traversing
functions for the anon, file and ksm cases. This causes lots of
duplication and maintenance overhead, and it also makes the code hard
to understand and error-prone. One example is hugepage handling.
There is code to compute the hugepage offset correctly in
try_to_unmap_file(), but there isn't code to compute the hugepage
offset in rmap_walk_file(). These are used pairwise in the migration
context, but we missed modifying them pairwise.
To overcome these drawbacks, we should unify them through one unified
function. I chose rmap_walk() as the main function since it has
nothing unnecessary. And to control the behavior of rmap_walk(), I
introduce struct rmap_walk_control, which has some function pointers.
This makes rmap_walk() work for each caller's specific needs.
This patchset removes a lot of duplicated code, as you can see in the
short-stat below (before/after for each file), and the kernel text
size also decreases slightly.
   text    data     bss     dec     hex filename
  10640       1      16   10657    29a1 mm/rmap.o (before)
  10047       1      16   10064    2750 mm/rmap.o (after)
  13823     705    8288   22816    5920 mm/ksm.o (before)
  13199     705    8288   22192    56b0 mm/ksm.o (after)
This patch (of 9):
We have to recompute pgoff if the given page is huge, since a result
based on HPAGE_SIZE is not appropriate for scanning the vma interval
tree, as shown by commit 36e4f20af833 ("hugetlb: do not use
vma_hugecache_offset() for vma_prio_tree_foreach") and commit 369a713e
("rmap: recompute pgoff for unmapping huge page").
To handle both cases, a normal page cache page and a hugetlb page, the
same way, we can use compound_order(). It returns 0 on a non-compound
page and the proper order on a compound page.
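In code, the recomputation described above is essentially a one-liner
(sketch):

	/* order is 0 for a non-compound page, so this works for both cases */
	pgoff_t pgoff = page->index << compound_order(page);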
Signed-off-by: Joonsoo Kim <[email protected]>
Cc: Naoya Horiguchi <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Hillf Danton <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
This function is not used outside of memcontrol.c so make it static.
Signed-off-by: Vladimir Davydov <[email protected]>
Cc: Johannes Weiner <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Cc: Balbir Singh <[email protected]>
Cc: KAMEZAWA Hiroyuki <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
We should start kmem accounting for a memory cgroup only after both its
kmem limit is set (KMEM_ACCOUNTED_ACTIVE) and related call sites are
patched (KMEM_ACCOUNTED_ACTIVATED). Currently memcg_can_account_kmem()
allows kmem accounting even if only one of the conditions is true. Fix
it.
This means that a page might get charged by memcg_kmem_newpage_charge
which would see its static key patched already but
memcg_kmem_commit_charge would still see it unpatched and so the charge
won't be committed. The result would be charge inconsistency
(page_cgroup not marked as PageCgroupUsed) and the charge would leak
because __memcg_kmem_uncharge_pages would ignore it.
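A sketch of the fixed check (the flags word name is an assumption for
illustration; the two bit names come from the changelog above):

	static bool memcg_can_account_kmem(struct mem_cgroup *memcg)
	{
		/* require both bits before accounting kmem charges */
		return !mem_cgroup_disabled() && !mem_cgroup_is_root(memcg) &&
			test_bit(KMEM_ACCOUNTED_ACTIVE, &memcg->kmem_account_flags) &&
			test_bit(KMEM_ACCOUNTED_ACTIVATED, &memcg->kmem_account_flags);
	}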
[[email protected]: augment changelog]
Signed-off-by: Vladimir Davydov <[email protected]>
Cc: Johannes Weiner <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Cc: Balbir Singh <[email protected]>
Cc: KAMEZAWA Hiroyuki <[email protected]>
Cc: Glauber Costa <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
If users specify the original movablecore=nn@ss boot option, the
kernel will arrange [ss, ss+nn) as ZONE_MOVABLE. The kernelcore=nn@ss
boot option is similar except that it specifies ZONE_NORMAL ranges.
Now, if users specify "movable_node" on the kernel command line, the
kernel will arrange hotpluggable memory in SRAT as ZONE_MOVABLE. And
if users do this, all the other movablecore=nn@ss and kernelcore=nn@ss
options should be ignored.
For those who don't want this, just specify nothing. The kernel will
act as before.
Signed-off-by: Tang Chen <[email protected]>
Signed-off-by: Zhang Yanfei <[email protected]>
Reviewed-by: Wanpeng Li <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: "Rafael J . Wysocki" <[email protected]>
Cc: Chen Tang <[email protected]>
Cc: Gong Chen <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Jiang Liu <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Lai Jiangshan <[email protected]>
Cc: Larry Woodman <[email protected]>
Cc: Len Brown <[email protected]>
Cc: Liu Jiang <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Michal Nazarewicz <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Prarit Bhargava <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Taku Izumi <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Thomas Renninger <[email protected]>
Cc: Toshi Kani <[email protected]>
Cc: Vasilis Liaskovitis <[email protected]>
Cc: Wen Congyang <[email protected]>
Cc: Yasuaki Ishimatsu <[email protected]>
Cc: Yinghai Lu <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
The Linux kernel cannot migrate pages used by the kernel. As a result,
hotpluggable memory used by the kernel won't be able to be
hot-removed. To solve this problem, the basic idea is to prevent
memblock from allocating hotpluggable memory for the kernel at early
boot time, and to arrange all hotpluggable memory in the ACPI SRAT
(System Resource Affinity Table) as ZONE_MOVABLE when initializing
zones.
In the previous patches, we have marked hotpluggable memory regions
with the MEMBLOCK_HOTPLUG flag in memblock.memory.
In this patch, we make memblock skip these hotpluggable memory regions
in the default top-down allocation function if the movable_node boot
option is specified.
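Inside memblock's top-down range iterator the skip is a short check,
roughly (sketch; memblock_is_hotpluggable() and
movable_node_is_enabled() are the helpers from this series, and m is
the region being considered):

	/* skip hotpluggable memory regions if movable_node is specified */
	if (movable_node_is_enabled() && memblock_is_hotpluggable(m))
		continue;	/* leave it for ZONE_MOVABLE */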
[[email protected]: coding-style fixes]
Signed-off-by: Tang Chen <[email protected]>
Signed-off-by: Zhang Yanfei <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: "Rafael J . Wysocki" <[email protected]>
Cc: Chen Tang <[email protected]>
Cc: Gong Chen <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Jiang Liu <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Lai Jiangshan <[email protected]>
Cc: Larry Woodman <[email protected]>
Cc: Len Brown <[email protected]>
Cc: Liu Jiang <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Michal Nazarewicz <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Prarit Bhargava <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Taku Izumi <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Thomas Renninger <[email protected]>
Cc: Toshi Kani <[email protected]>
Cc: Vasilis Liaskovitis <[email protected]>
Cc: Wanpeng Li <[email protected]>
Cc: Wen Congyang <[email protected]>
Cc: Yasuaki Ishimatsu <[email protected]>
Cc: Yinghai Lu <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
At very early boot time, the kernel has to use some memory, for
example to load the kernel image. We cannot prevent this anyway. So
any node the kernel resides on should be un-hotpluggable.
Signed-off-by: Tang Chen <[email protected]>
Reviewed-by: Zhang Yanfei <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: "Rafael J . Wysocki" <[email protected]>
Cc: Chen Tang <[email protected]>
Cc: Gong Chen <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Jiang Liu <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Lai Jiangshan <[email protected]>
Cc: Larry Woodman <[email protected]>
Cc: Len Brown <[email protected]>
Cc: Liu Jiang <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Michal Nazarewicz <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Prarit Bhargava <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Taku Izumi <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Thomas Renninger <[email protected]>
Cc: Toshi Kani <[email protected]>
Cc: Vasilis Liaskovitis <[email protected]>
Cc: Wanpeng Li <[email protected]>
Cc: Wen Congyang <[email protected]>
Cc: Yasuaki Ishimatsu <[email protected]>
Cc: Yinghai Lu <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
When parsing the SRAT, we know which memory areas are hotpluggable. So
we invoke the function memblock_mark_hotplug(), introduced by the
previous patch, to mark hotpluggable memory in memblock.
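In the x86 SRAT parser this is roughly (a sketch; field names per
struct acpi_srat_mem_affinity):

	/* in acpi_numa_memory_affinity_init(), sketch only */
	if (ma->flags & ACPI_SRAT_MEM_HOT_PLUGGABLE)
		memblock_mark_hotplug(ma->base_address, ma->length);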
[[email protected]: coding-style fixes]
Signed-off-by: Tang Chen <[email protected]>
Reviewed-by: Zhang Yanfei <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: "Rafael J . Wysocki" <[email protected]>
Cc: Chen Tang <[email protected]>
Cc: Gong Chen <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Jiang Liu <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Lai Jiangshan <[email protected]>
Cc: Larry Woodman <[email protected]>
Cc: Len Brown <[email protected]>
Cc: Liu Jiang <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Michal Nazarewicz <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Prarit Bhargava <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Taku Izumi <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Thomas Renninger <[email protected]>
Cc: Toshi Kani <[email protected]>
Cc: Vasilis Liaskovitis <[email protected]>
Cc: Wanpeng Li <[email protected]>
Cc: Wen Congyang <[email protected]>
Cc: Yasuaki Ishimatsu <[email protected]>
Cc: Yinghai Lu <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|