path: root/tools/perf/scripts/python/event_analyzing_sample.py
author	Johannes Weiner <[email protected]>	2017-02-24 14:56:20 -0800
committer	Linus Torvalds <[email protected]>	2017-02-24 17:46:54 -0800
commit	4eda48235011d6965f5229f8955ddcd355311570 (patch)
tree	dcfd0ddcd2cafa8da72be7b34f31de34b9e6612d /tools/perf/scripts/python/event_analyzing_sample.py
parent	bbef938429f5b201f9972f399a04f01af1934cc2 (diff)
mm: vmscan: only write dirty pages that the scanner has seen twice
Dirty pages can easily reach the end of the LRU while there are still
clean pages to reclaim around. Don't let kswapd write them back just
because there are a lot of them. It costs more CPU to find the clean
pages, but that's almost certainly better than to disrupt writeback
from the flushers with LRU-order single-page writes from reclaim. And
the flushers have been woken up by that point, so we spend IO capacity
on flushing and CPU capacity on finding the clean cache.

Only start writing dirty pages if they have cycled around the LRU
twice now and STILL haven't been queued on the IO device. It's
possible that the dirty pages are so sparsely distributed across
different bdis, inodes, memory cgroups, that the flushers take forever
to get to the ones we want reclaimed. Once we see them twice on the
LRU, we know that's the quicker way to find them, so do LRU writeback.

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Johannes Weiner <[email protected]>
Acked-by: Minchan Kim <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Acked-by: Mel Gorman <[email protected]>
Acked-by: Hillf Danton <[email protected]>
Cc: Rik van Riel <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
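[Editor's note] The following is a minimal sketch of the "seen twice"
policy described above, not the actual mm/vmscan.c diff. It assumes
the PG_reclaim page flag is used as the "seen once already" marker:
current_is_kswapd(), PageReclaim() and SetPageReclaim() are real
kernel helpers, but should_write_dirty_page() and the surrounding
control flow are simplified here for illustration.

    /*
     * Hypothetical helper illustrating the two-pass writeback policy:
     * defer LRU writeback of a dirty page until it has completed a
     * full LRU cycle without the flushers cleaning it.
     */
    static bool should_write_dirty_page(struct page *page)
    {
    	/* Direct reclaim never writes; only kswapd may do so. */
    	if (!current_is_kswapd())
    		return false;

    	/*
    	 * First encounter: tag the page and rotate it back onto
    	 * the LRU instead of writing it. The flushers have been
    	 * woken and should clean it before it cycles around.
    	 */
    	if (!PageReclaim(page)) {
    		SetPageReclaim(page);
    		return false;
    	}

    	/*
    	 * Second encounter: the page did a full LRU round trip
    	 * and is STILL dirty, so flusher-order writeback is not
    	 * reaching it. Fall back to LRU-order writeback.
    	 */
    	return true;
    }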
Diffstat (limited to 'tools/perf/scripts/python/event_analyzing_sample.py')
0 files changed, 0 insertions, 0 deletions