path: root/include/linux/shrinker.h
2015-02-12  vmscan: per memory cgroup slab shrinkers  (Vladimir Davydov, 1 file, -1/+5)
This patch adds the SHRINKER_MEMCG_AWARE flag. If a shrinker has this flag set, it will be called per memory cgroup. The memory cgroup to scan objects from is passed in shrink_control->memcg. If the memory cgroup is NULL, a memcg-aware shrinker is supposed to scan objects from the global list. Unaware shrinkers are only called on global pressure with memcg=NULL.

Signed-off-by: Vladimir Davydov <[email protected]>
Cc: Dave Chinner <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Greg Thelen <[email protected]>
Cc: Glauber Costa <[email protected]>
Cc: Alexander Viro <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Pekka Enberg <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: Joonsoo Kim <[email protected]>
Cc: Tejun Heo <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
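As a rough illustration of the interface this describes, a memcg-aware shrinker might look like the sketch below. Only SHRINKER_MEMCG_AWARE and shrink_control->memcg come from this patch; the callback names and the my_cache_* helpers are hypothetical stand-ins for a real cache's bookkeeping.

#include <linux/shrinker.h>
#include <linux/memcontrol.h>

/* Hypothetical per-memcg, per-node bookkeeping, stubbed out for the sketch. */
static unsigned long my_cache_count(struct mem_cgroup *memcg, int nid)
{
	return 0;
}

static unsigned long my_cache_free(struct mem_cgroup *memcg, int nid,
				   unsigned long nr_to_scan)
{
	return 0;
}

static unsigned long my_count_objects(struct shrinker *s,
				      struct shrink_control *sc)
{
	/* sc->memcg is the cgroup under pressure; NULL means the global list. */
	return my_cache_count(sc->memcg, sc->nid);
}

static unsigned long my_scan_objects(struct shrinker *s,
				     struct shrink_control *sc)
{
	return my_cache_free(sc->memcg, sc->nid, sc->nr_to_scan);
}

static struct shrinker my_shrinker = {
	.count_objects = my_count_objects,
	.scan_objects  = my_scan_objects,
	.seeks         = DEFAULT_SEEKS,
	.flags         = SHRINKER_NUMA_AWARE | SHRINKER_MEMCG_AWARE,
};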
2014-12-13  mm: vmscan: invoke slab shrinkers from shrink_zone()  (Johannes Weiner, 1 file, -2/+0)
The slab shrinkers are currently invoked from the zonelist walkers in kswapd, direct reclaim, and zone reclaim, all of which roughly gauge the eligible LRU pages and assemble a nodemask to pass to NUMA-aware shrinkers, which then again have to walk over the nodemask. This is redundant code, extra runtime work, and fairly inaccurate when it comes to the estimation of actually scannable LRU pages. The code duplication will only get worse when making the shrinkers cgroup-aware and requiring them to have out-of-band cgroup hierarchy walks as well.

Instead, invoke the shrinkers from shrink_zone(), which is where all reclaimers end up, to avoid this duplication.

Take the count for eligible LRU pages out of get_scan_count(), which considers many more factors than just the availability of swap space, like zone_reclaimable_pages() currently does. Accumulate the number over all visited lruvecs to get the per-zone value.

Some nodes have multiple zones due to memory addressing restrictions. To avoid putting too much pressure on the shrinkers, only invoke them once for each such node, using the class zone of the allocation as the pivot zone.

For now, this integrates the slab shrinking better into the reclaim logic and gets rid of duplicative invocations from kswapd, direct reclaim, and zone reclaim. It also prepares for cgroup-awareness, allowing memcg-capable shrinkers to be added at the lruvec level without much duplication of both code and runtime work.

This changes kswapd behavior, which used to invoke the shrinkers for each zone, but with scan ratios gathered from the entire node, resulting in meaningless pressure quantities on multi-zone nodes.

Zone reclaim behavior also changes. It used to shrink slabs until the same number of pages had been shrunk as had been reclaimed from the LRUs. Now it merely invokes the shrinkers once with the zone's scan ratio, which makes the shrinkers go easier on caches that implement aging and would prefer feeding back pressure from recently used slab objects to unused LRU pages.

[[email protected]: assure class zone is populated]
Signed-off-by: Johannes Weiner <[email protected]>
Cc: Dave Chinner <[email protected]>
Signed-off-by: Vladimir Davydov <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
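Schematically, the change moves the slab-shrinking call into the common shrink_zone() path and gates it on the class zone, so each node is shrunk only once per reclaim pass. The sketch below paraphrases that idea; all names and signatures in it are illustrative, not the literal mm/vmscan.c code.

#include <linux/gfp.h>
#include <linux/types.h>

/* Hypothetical stand-in for the per-node slab shrink entry point. */
static void shrink_node_slabs_sketch(gfp_t gfp_mask, int nid,
				     unsigned long nr_scanned,
				     unsigned long lru_pages)
{
	/* would walk the registered shrinkers for this node */
}

/*
 * Conceptual sketch (not the literal mm/vmscan.c code): the common reclaim
 * path shrinks slabs once per node, pivoting on the allocation's class zone.
 */
static void shrink_zone_sketch(int nid, bool is_classzone, gfp_t gfp_mask,
			       unsigned long nr_scanned,
			       unsigned long lru_pages)
{
	/* ... LRU reclaim for this zone's lruvecs has happened above ... */

	/*
	 * Only the class zone triggers slab shrinking, so a node with
	 * multiple zones sees exactly one, correctly proportioned call.
	 */
	if (is_classzone)
		shrink_node_slabs_sketch(gfp_mask, nid, nr_scanned, lru_pages);
}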
2013-09-10  shrinker: Kill old ->shrink API.  (Dave Chinner, 1 file, -10/+5)
There are no more users of this API, so kill it dead, dead, dead and quietly bury the corpse in a shallow, unmarked grave in a dark forest deep in the hills...

[[email protected]: added flowers to the grave]
Signed-off-by: Dave Chinner <[email protected]>
Signed-off-by: Glauber Costa <[email protected]>
Reviewed-by: Greg Thelen <[email protected]>
Acked-by: Mel Gorman <[email protected]>
Cc: "Theodore Ts'o" <[email protected]>
Cc: Adrian Hunter <[email protected]>
Cc: Al Viro <[email protected]>
Cc: Artem Bityutskiy <[email protected]>
Cc: Arve Hjønnevåg <[email protected]>
Cc: Carlos Maiolino <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Cc: Chuck Lever <[email protected]>
Cc: Daniel Vetter <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: Gleb Natapov <[email protected]>
Cc: Greg Thelen <[email protected]>
Cc: J. Bruce Fields <[email protected]>
Cc: Jan Kara <[email protected]>
Cc: Jerome Glisse <[email protected]>
Cc: John Stultz <[email protected]>
Cc: KAMEZAWA Hiroyuki <[email protected]>
Cc: Kent Overstreet <[email protected]>
Cc: Kirill A. Shutemov <[email protected]>
Cc: Marcelo Tosatti <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Steven Whitehouse <[email protected]>
Cc: Thomas Hellstrom <[email protected]>
Cc: Trond Myklebust <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Al Viro <[email protected]>
2013-09-10  vmscan: per-node deferred work  (Glauber Costa, 1 file, -2/+12)
The list_lru infrastructure already keeps per-node LRU lists in its node-specific list_lru_node arrays and provides us with a per-node API, and the shrinkers are properly equipped with node information. This means that we can now focus our shrinking effort on a single node, but the work that is deferred from one run to another is kept global at nr_in_batch. Work can be deferred, for instance, during direct reclaim under a GFP_NOFS allocation, in which case all the filesystem shrinkers will be prevented from running and will accumulate in nr_in_batch the amount of work they should have done but could not.

This creates an impedance problem: upon node pressure, deferred work will accumulate and end up being flushed on other nodes. The problem we describe is particularly harmful on big machines, where many nodes can accumulate at the same time, all adding to the global counter nr_in_batch. As we accumulate more and more, we start to ask the caches to flush ever bigger numbers. The result is that the caches are depleted and do not stabilize. To achieve stable steady-state behavior, we need to tackle it differently.

In this patch we keep the deferred count per node, in the new array nr_deferred[] (the name is also a bit more descriptive), and never accumulate it onto other nodes.

Signed-off-by: Glauber Costa <[email protected]>
Cc: Dave Chinner <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: "Theodore Ts'o" <[email protected]>
Cc: Adrian Hunter <[email protected]>
Cc: Al Viro <[email protected]>
Cc: Artem Bityutskiy <[email protected]>
Cc: Arve Hjønnevåg <[email protected]>
Cc: Carlos Maiolino <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Cc: Chuck Lever <[email protected]>
Cc: Daniel Vetter <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: Gleb Natapov <[email protected]>
Cc: Greg Thelen <[email protected]>
Cc: J. Bruce Fields <[email protected]>
Cc: Jan Kara <[email protected]>
Cc: Jerome Glisse <[email protected]>
Cc: John Stultz <[email protected]>
Cc: KAMEZAWA Hiroyuki <[email protected]>
Cc: Kent Overstreet <[email protected]>
Cc: Kirill A. Shutemov <[email protected]>
Cc: Marcelo Tosatti <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Steven Whitehouse <[email protected]>
Cc: Thomas Hellstrom <[email protected]>
Cc: Trond Myklebust <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Al Viro <[email protected]>
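Schematically, the deferred count moves from one global counter into a per-node array hung off the shrinker, so pressure deferred on one node is repaid on that node. A minimal sketch of that bookkeeping, assuming a simplified shrinker structure (the real one also carries the callbacks, seeks, batch and flags):

#include <linux/atomic.h>
#include <linux/errno.h>
#include <linux/nodemask.h>
#include <linux/slab.h>

/* Simplified stand-in for struct shrinker, showing only the deferred state. */
struct shrinker_sketch {
	/* one deferred counter per node instead of a single global nr_in_batch */
	atomic_long_t *nr_deferred;
};

static int alloc_deferred(struct shrinker_sketch *s, bool numa_aware)
{
	int entries = numa_aware ? nr_node_ids : 1;

	s->nr_deferred = kcalloc(entries, sizeof(*s->nr_deferred), GFP_KERNEL);
	return s->nr_deferred ? 0 : -ENOMEM;
}

/* Work deferred while reclaiming node @nid only touches that node's counter. */
static void defer_work(struct shrinker_sketch *s, int nid, long nr)
{
	atomic_long_add(nr, &s->nr_deferred[nid]);
}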
2013-09-10  shrinker: add node awareness  (Dave Chinner, 1 file, -0/+3)
Pass the node of the current zone being reclaimed to shrink_slab(), allowing the shrinker control nodemask to be set appropriately for node-aware shrinkers.

Signed-off-by: Dave Chinner <[email protected]>
Signed-off-by: Glauber Costa <[email protected]>
Acked-by: Mel Gorman <[email protected]>
Cc: "Theodore Ts'o" <[email protected]>
Cc: Adrian Hunter <[email protected]>
Cc: Al Viro <[email protected]>
Cc: Artem Bityutskiy <[email protected]>
Cc: Arve Hjønnevåg <[email protected]>
Cc: Carlos Maiolino <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Cc: Chuck Lever <[email protected]>
Cc: Daniel Vetter <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: Gleb Natapov <[email protected]>
Cc: Greg Thelen <[email protected]>
Cc: J. Bruce Fields <[email protected]>
Cc: Jan Kara <[email protected]>
Cc: Jerome Glisse <[email protected]>
Cc: John Stultz <[email protected]>
Cc: KAMEZAWA Hiroyuki <[email protected]>
Cc: Kent Overstreet <[email protected]>
Cc: Kirill A. Shutemov <[email protected]>
Cc: Marcelo Tosatti <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Steven Whitehouse <[email protected]>
Cc: Thomas Hellstrom <[email protected]>
Cc: Trond Myklebust <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Al Viro <[email protected]>
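On the caller side this amounts to filling a nodemask in the shrink control before invoking the shrinkers. The sketch below illustrates the idea; the nodes_to_scan field matches the shrink_control of this series, but the helper itself is hypothetical.

#include <linux/nodemask.h>
#include <linux/shrinker.h>

/*
 * Hypothetical helper: reclaim tells node-aware shrinkers which node is
 * under pressure by setting it in the shrink control's nodemask.
 */
static void setup_shrink_control_for_node(struct shrink_control *sc, int nid,
					  gfp_t gfp_mask)
{
	sc->gfp_mask = gfp_mask;
	nodes_clear(sc->nodes_to_scan);
	node_set(nid, sc->nodes_to_scan);	/* node-aware shrinkers honor this */
}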
2013-09-10  mm: new shrinker API  (Dave Chinner, 1 file, -9/+29)
The current shrinker callout API uses a single shrinker call for multiple functions. To determine the function, a special magical value is passed in a parameter to change the behaviour. This complicates the implementation and return value specification for the different behaviours.

Separate the two different behaviours into separate operations: one to return a count of freeable objects in the cache, and another to scan a certain number of objects in the cache for freeing. In defining these new operations, ensure the return values and resultant behaviours are clearly defined and documented.

Modify shrink_slab() to use the new API and implement the callouts for all the existing shrinkers.

Signed-off-by: Dave Chinner <[email protected]>
Signed-off-by: Glauber Costa <[email protected]>
Acked-by: Mel Gorman <[email protected]>
Cc: "Theodore Ts'o" <[email protected]>
Cc: Adrian Hunter <[email protected]>
Cc: Al Viro <[email protected]>
Cc: Artem Bityutskiy <[email protected]>
Cc: Arve Hjønnevåg <[email protected]>
Cc: Carlos Maiolino <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Cc: Chuck Lever <[email protected]>
Cc: Daniel Vetter <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: Gleb Natapov <[email protected]>
Cc: Greg Thelen <[email protected]>
Cc: J. Bruce Fields <[email protected]>
Cc: Jan Kara <[email protected]>
Cc: Jerome Glisse <[email protected]>
Cc: John Stultz <[email protected]>
Cc: KAMEZAWA Hiroyuki <[email protected]>
Cc: Kent Overstreet <[email protected]>
Cc: Kirill A. Shutemov <[email protected]>
Cc: Marcelo Tosatti <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Steven Whitehouse <[email protected]>
Cc: Thomas Hellstrom <[email protected]>
Cc: Trond Myklebust <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Al Viro <[email protected]>
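For a shrinker implementation, the split API looks roughly like the sketch below. The count_objects/scan_objects operations, SHRINK_STOP and DEFAULT_SEEKS are the interface this series establishes; the object bookkeeping (my_nr_cached) and the GFP_FS check are made-up stand-ins for a real cache.

#include <linux/atomic.h>
#include <linux/gfp.h>
#include <linux/shrinker.h>

static atomic_long_t my_nr_cached = ATOMIC_LONG_INIT(0);	/* stand-in cache size */

/* "How many objects could you free?" Report a count; free nothing here. */
static unsigned long my_count_objects(struct shrinker *shrink,
				      struct shrink_control *sc)
{
	return atomic_long_read(&my_nr_cached);
}

/* "Try to free up to sc->nr_to_scan objects." Return the number actually freed. */
static unsigned long my_scan_objects(struct shrinker *shrink,
				     struct shrink_control *sc)
{
	unsigned long freed = 0;

	if (!(sc->gfp_mask & __GFP_FS))
		return SHRINK_STOP;	/* can't make progress now, defer the work */

	while (freed < sc->nr_to_scan &&
	       atomic_long_add_unless(&my_nr_cached, -1, 0))
		freed++;		/* stand-in for actually freeing an object */

	return freed;
}

static struct shrinker my_shrinker = {
	.count_objects = my_count_objects,
	.scan_objects  = my_scan_objects,
	.seeks         = DEFAULT_SEEKS,
};

/* register_shrinker(&my_shrinker) at init, unregister_shrinker(&my_shrinker) at teardown. */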
2012-07-31  vmscan: remove obsolete shrink_control comment  (Minchan Kim, 1 file, -1/+0)
09f363c7 ("vmscan: fix shrinker callback bug in fs/super.c") fixed a shrinker callback which was returning -1 when nr_to_scan is zero, which caused excessive slab scanning. But 635697c6 ("vmscan: fix initial shrinker size handling") fixed the problem, again so we can freely return -1 although nr_to_scan is zero. So let's revert 09f363c7 because the comment added in 09f363c7 made an unnecessary rule. Signed-off-by: Minchan Kim <[email protected]> Cc: Al Viro <[email protected]> Cc: Mikulas Patocka <[email protected]> Cc: Konstantin Khlebnikov <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2011-12-09  vmscan: use atomic-long for shrinker batching  (Konstantin Khlebnikov, 1 file, -1/+1)
Use atomic-long operations instead of looping around cmpxchg().

[[email protected]: massage atomic.h inclusions]
Signed-off-by: Konstantin Khlebnikov <[email protected]>
Cc: Dave Chinner <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
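The change boils down to replacing an open-coded cmpxchg() retry loop on the shrinker's batching counter with a single atomic-long operation. A before/after sketch of that pattern, with an illustrative counter name:

#include <linux/atomic.h>

static atomic_long_t nr_in_batch = ATOMIC_LONG_INIT(0);	/* illustrative name */

/* Before: open-coded retry loop around cmpxchg() on a plain long counter. */
static void add_deferred_cmpxchg(long *counter, long nr)
{
	long old;

	do {
		old = *counter;
	} while (cmpxchg(counter, old, old + nr) != old);
}

/* After: a single atomic-long operation does the same job. */
static void add_deferred_atomic(long nr)
{
	atomic_long_add(nr, &nr_in_batch);
}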
2011-10-31  vmscan: fix shrinker callback bug in fs/super.c  (Mikulas Patocka, 1 file, -0/+1)
The callback must not return -1 when nr_to_scan is zero. Fix the bug in fs/super.c and add this requirement to the callback specification.

Signed-off-by: Mikulas Patocka <[email protected]>
Cc: Dave Chinner <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
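Under the ->shrink() callback of this era, the rule means that a query with nr_to_scan == 0 must report the cache's object count (or 0) rather than -1; -1 stays a valid answer only when asked to actually scan but unable to make progress. A hedged sketch of a compliant callback, with a made-up cache counter:

#include <linux/atomic.h>
#include <linux/gfp.h>
#include <linux/shrinker.h>

static atomic_t my_nr_cached = ATOMIC_INIT(0);	/* made-up cache size */

/* Old-style ->shrink() callback honoring the rule described above. */
static int my_shrink(struct shrinker *shrink, struct shrink_control *sc)
{
	if (sc->nr_to_scan) {
		/* Can't do filesystem reclaim here? Ask to be called again later. */
		if (!(sc->gfp_mask & __GFP_FS))
			return -1;
		/* ... free up to sc->nr_to_scan objects, decrementing my_nr_cached ... */
	}

	/* nr_to_scan == 0 is a count query: report the count, never -1. */
	return atomic_read(&my_nr_cached);
}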
2011-07-20  superblock: introduce per-sb cache shrinker infrastructure  (Dave Chinner, 1 file, -0/+42)
With context-based shrinkers, we can implement a per-superblock shrinker that shrinks the caches attached to the superblock. We currently have global shrinkers for the inode and dentry caches that split up into per-superblock operations via a coarse proportioning method that does not batch very well. The global shrinkers also have a dependency - dentries pin inodes - so we have to be very careful about how we register the global shrinkers so that the implicit call order is always correct.

With a per-sb shrinker callout, we can encode this dependency directly into the per-sb shrinker, hence avoiding the need for strictly ordering shrinker registrations. We also have no need for any proportioning code, because the shrinker subsystem already provides this functionality across all shrinkers.

Allowing the shrinker to operate on a single superblock at a time means that we do fewer superblock list traversals and less locking, and reclaim should batch more effectively. This should result in less CPU overhead for reclaim and potentially faster reclaim of items from each filesystem.

Signed-off-by: Dave Chinner <[email protected]>
Signed-off-by: Al Viro <[email protected]>
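The shape of the infrastructure is a shrinker instance embedded in each superblock and registered when the superblock is set up, so each callout walks only that superblock's dentry and inode caches, in dependency order. The sketch below illustrates the idea; the structure and function names are stand-ins, not lifted from fs/super.c.

#include <linux/kernel.h>
#include <linux/shrinker.h>

/* Stand-in superblock carrying its own shrinker instance. */
struct my_super_block {
	/* ... the usual superblock state (dentry LRU, inode LRU, ...) ... */
	struct shrinker s_shrink;
};

/* Old-style ->shrink() callout of this era, scoped to one superblock. */
static int my_prune_super(struct shrinker *shrink, struct shrink_control *sc)
{
	struct my_super_block *sb =
		container_of(shrink, struct my_super_block, s_shrink);

	/*
	 * Encode the "dentries pin inodes" dependency here: prune this sb's
	 * dentries first, then its inodes; no global ordering needed.
	 */
	(void)sb;	/* placeholder for the actual pruning */
	return 0;	/* would report the remaining freeable object count */
}

static void my_setup_sb_shrinker(struct my_super_block *sb)
{
	sb->s_shrink.shrink = my_prune_super;
	sb->s_shrink.seeks = DEFAULT_SEEKS;
	register_shrinker(&sb->s_shrink);
}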