author     Yafang Shao <[email protected]>        2020-08-06 23:22:08 -0700
committer  Linus Torvalds <[email protected]>     2020-08-07 11:33:25 -0700
commit     1378b37d03e8147c67fde60caf0474ea879163d8 (patch)
tree       b019bf43196f8b66f8540dcb54274fbb2cb8dad4 /tools/perf/scripts/python/syscall-counts.py
parent     45c7f7e1ef17f09fe70bad4b705ce43772153fd7 (diff)
memcg, oom: check memcg margin for parallel oom
Memcg oom killer invocation is synchronized by the global oom_lock: tasks
sleep on the lock while somebody else is selecting a victim, and can also
race with the oom_reaper releasing that victim's memory. This can result in
a pointless oom killer invocation because a waiter might be racing with the
oom_reaper:
  P1                      oom_reaper               P2
                          oom_reap_task            mutex_lock(oom_lock)
                                                   out_of_memory
                                                     # no victim because we have one already
                          __oom_reap_task_mm       mutex_unlock(oom_lock)
  mutex_lock(oom_lock)
                          set MMF_OOM_SKIP
  select_bad_process
    # finds a new victim
The page allocator prevents this race by retrying the allocation once the
lock has been acquired (in __alloc_pages_may_oom), which acts as a last
minute check. Moreover, the page allocator doesn't block on the oom_lock at
all: if the trylock fails, it simply retries the whole reclaim process.
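For reference, the allocator-side pattern looks roughly like the sketch
below. This is a condensed illustration of __alloc_pages_may_oom() as it
appears around this kernel version; struct initialization, allocation flags
and error handling are simplified, so treat it as an illustration rather
than a verbatim excerpt.

	/*
	 * Condensed sketch of the page allocator's last minute check
	 * (see __alloc_pages_may_oom() in mm/page_alloc.c); simplified.
	 */
	static struct page *
	__alloc_pages_may_oom_sketch(gfp_t gfp_mask, unsigned int order,
				     const struct alloc_context *ac,
				     unsigned long *did_some_progress)
	{
		struct oom_control oc = {
			.gfp_mask = gfp_mask,
			.order = order,
		};
		struct page *page;

		*did_some_progress = 0;

		/*
		 * Do not sleep on oom_lock: whoever holds it is making
		 * progress for us, so report progress and let the caller
		 * retry the whole reclaim process.
		 */
		if (!mutex_trylock(&oom_lock)) {
			*did_some_progress = 1;
			return NULL;
		}

		/*
		 * Last minute check: one more pass over the freelists with a
		 * high watermark to catch memory freed by a parallel oom kill.
		 */
		page = get_page_from_freelist(gfp_mask, order,
					      ALLOC_WMARK_HIGH, ac);
		if (page)
			goto out;

		if (out_of_memory(&oc))
			*did_some_progress = 1;
	out:
		mutex_unlock(&oom_lock);
		return page;
	}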
The memcg oom killer should do the last minute check as well; call
mem_cgroup_margin to do that. A trylock on the oom_lock could be added too,
but that doesn't seem to be necessary at this stage.
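The resulting flow in mem_cgroup_out_of_memory() is sketched below. It is an
abbreviated sketch of the idea rather than the exact diff: struct
initialization and unrelated details of the function are trimmed.

	/*
	 * Abbreviated sketch of mem_cgroup_out_of_memory() in
	 * mm/memcontrol.c with the margin check added; not a verbatim diff.
	 */
	static bool mem_cgroup_out_of_memory(struct mem_cgroup *memcg,
					     gfp_t gfp_mask, int order)
	{
		struct oom_control oc = {
			.memcg = memcg,
			.gfp_mask = gfp_mask,
			.order = order,
		};
		bool ret = true;

		if (mutex_lock_killable(&oom_lock))
			return true;

		/*
		 * Last minute check: while this task was sleeping on oom_lock,
		 * a parallel oom kill or the oom_reaper may already have freed
		 * enough memory, so killing another task would be pointless.
		 */
		if (mem_cgroup_margin(memcg) >= (1 << order))
			goto unlock;

		ret = out_of_memory(&oc);
	unlock:
		mutex_unlock(&oom_lock);
		return ret;
	}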
[[email protected]: commit log]
Suggested-by: Michal Hocko <[email protected]>
Signed-off-by: Yafang Shao <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Acked-by: Chris Down <[email protected]>
Cc: Tetsuo Handa <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: Johannes Weiner <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Linus Torvalds <[email protected]>