Age | Commit message | Author | Files | Lines
2019-10-09 | perf evlist: Introduce append_tp_filter() method | Arnaldo Carvalho de Melo | 2 | -0/+22
This will be used by 'perf trace' to support 'perf trace --filter'; we need to append to any pre-existing filter. When parse_filter() gets invoked to process --filter, it'll set the filter to that specified on the command line. Later on, when we filter out 'perf trace's own pid to avoid an event feedback loop, we need to preserve the command line filter put in place by parse_filter(). Cc: Adrian Hunter <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Luis Cláudio Gonçalves <[email protected]> Cc: Namhyung Kim <[email protected]> Link: https://lkml.kernel.org/n/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
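The appending itself is plain string composition. A minimal user-space sketch of the idea (illustrative only; append_tp_filter() here is a stand-in, not the actual evlist method in tools/perf):

	#define _GNU_SOURCE
	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>

	/* Combine a pre-existing tracepoint filter with a new expression so
	 * neither is lost; parentheses preserve operator precedence. */
	static char *append_tp_filter(const char *old, const char *add)
	{
		char *combined;

		if (old == NULL)
			return strdup(add);
		if (asprintf(&combined, "(%s) && (%s)", old, add) < 0)
			return NULL;
		return combined;
	}

	int main(void)
	{
		char *f = append_tp_filter("msr == 0xc0000100", "common_pid != 1234");

		/* prints: (msr == 0xc0000100) && (common_pid != 1234) */
		printf("%s\n", f);
		free(f);
		return 0;
	}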
2019-10-09 | perf evlist: Factor out asprintf routine to build a tracepoint pid filter | Arnaldo Carvalho de Melo | 1 | -4/+15
Will be used to append such lists to existing filters. Cc: Adrian Hunter <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Luis Cláudio Gonçalves <[email protected]> Cc: Namhyung Kim <[email protected]> Link: https://lkml.kernel.org/n/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2019-10-09 | perf trace: Associate the "msr" tracepoint arg name with x86_MSR__scnprintf() | Arnaldo Carvalho de Melo | 1 | -0/+1
So that we can go from:

	# perf trace -e msr:write_msr --max-stack=16 sleep 1
	     0.000 sleep/6740 msr:write_msr(msr: 3221225728, val: 139636317451648)
	                        do_trace_write_msr ([kernel.kallsyms])
	                        do_trace_write_msr ([kernel.kallsyms])
	                        do_arch_prctl_64 ([kernel.kallsyms])
	                        __x64_sys_arch_prctl ([kernel.kallsyms])
	                        do_syscall_64 ([kernel.kallsyms])
	                        entry_SYSCALL_64 ([kernel.kallsyms])
	                        init_tls (/usr/lib64/ld-2.29.so)
	                        dl_main (/usr/lib64/ld-2.29.so)
	                        _dl_sysdep_start (/usr/lib64/ld-2.29.so)
	                        _dl_start (/usr/lib64/ld-2.29.so)
	#

To:

	# perf trace -e msr:write_msr --max-stack=16 sleep 1
	     0.000 sleep/8519 msr:write_msr(msr: FS_BASE, val: 139878031705472)
	                        do_trace_write_msr ([kernel.kallsyms])
	                        do_trace_write_msr ([kernel.kallsyms])
	                        do_arch_prctl_64 ([kernel.kallsyms])
	                        __x64_sys_arch_prctl ([kernel.kallsyms])
	                        do_syscall_64 ([kernel.kallsyms])
	                        entry_SYSCALL_64 ([kernel.kallsyms])
	                        init_tls (/usr/lib64/ld-2.29.so)
	                        dl_main (/usr/lib64/ld-2.29.so)
	                        _dl_sysdep_start (/usr/lib64/ld-2.29.so)
	                        _dl_start (/usr/lib64/ld-2.29.so)
	#

This, in reverse, will allow for symbolic system call/tracepoint filtering. Cc: Adrian Hunter <[email protected]> Cc: Brendan Gregg <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Luis Cláudio Gonçalves <[email protected]> Cc: Namhyung Kim <[email protected]> Link: https://lkml.kernel.org/n/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2019-10-09 | perf trace beauty: Add the glue for the autogenerated MSR arrays | Arnaldo Carvalho de Melo | 4 | -0/+39
We need to wrap those autogenerated string arrays with the strarrays__scnprintf() formatter, do it. Cc: Adrian Hunter <[email protected]> Cc: Brendan Gregg <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Luis Cláudio Gonçalves <[email protected]> Cc: Namhyung Kim <[email protected]> Link: https://lkml.kernel.org/n/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2019-10-09 | perf trace: Allow associating scnprintf routines with well known arg names | Arnaldo Carvalho de Melo | 1 | -0/+26
For instance 'msr' appears in several tracepoints, so we can associate it with a single scnprintf() routine auto-generated from kernel headers, as will be done in followup patches. Start with an empty array of associations. Cc: Adrian Hunter <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Namhyung Kim <[email protected]> Link: https://lkml.kernel.org/n/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2019-10-09 | perf beauty: Hook up the x86 MSR table generator | Arnaldo Carvalho de Melo | 1 | -0/+9
This way we generate the source with the table for later use by plugins, etc. I.e. after running:

	$ make -C tools/perf O=/tmp/build/perf

We end up with:

	$ head /tmp/build/perf/trace/beauty/generated/x86_arch_MSRs_array.c
	static const char *x86_MSRs[] = {
		[0x00000000] = "IA32_P5_MC_ADDR",
		[0x00000001] = "IA32_P5_MC_TYPE",
		[0x00000010] = "IA32_TSC",
		[0x00000017] = "IA32_PLATFORM_ID",
		[0x0000001b] = "IA32_APICBASE",
		[0x00000020] = "KNC_PERFCTR0",
		[0x00000021] = "KNC_PERFCTR1",
		[0x00000028] = "KNC_EVNTSEL0",
		[0x00000029] = "KNC_EVNTSEL1",
	$

Now it's just a matter of using it, first in a libtraceevent plugin. At some point we should move tools/perf/trace/beauty to tools/beauty/, so that it can be used more generally and even made available externally like libbpf, libperf, libtraceevent, etc. Cc: Adrian Hunter <[email protected]> Cc: Brendan Gregg <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Luis Cláudio Gonçalves <[email protected]> Cc: Namhyung Kim <[email protected]> Link: https://lkml.kernel.org/n/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2019-10-09 | perf trace beauty: Add a x86 MSR cmd id->str table generator | Arnaldo Carvalho de Melo | 1 | -0/+40
Without parameters it'll parse tools/arch/x86/include/asm/msr-index.h and output a table usable by tools, that will be wired up later to a libtraceevent plugin registered from perf's glue code:

	$ tools/perf/trace/beauty/tracepoints/x86_msr.sh
	static const char *x86_MSRs[] = {
	<SNIP>
		[0x00000034] = "SMI_COUNT",
		[0x0000003a] = "IA32_FEATURE_CONTROL",
		[0x0000003b] = "IA32_TSC_ADJUST",
		[0x00000040] = "LBR_CORE_FROM",
		[0x00000048] = "IA32_SPEC_CTRL",
		[0x00000049] = "IA32_PRED_CMD",
	<SNIP>
		[0x0000010b] = "IA32_FLUSH_CMD",
		[0x0000010F] = "TSX_FORCE_ABORT",
	<SNIP>
		[0x00000198] = "IA32_PERF_STATUS",
		[0x00000199] = "IA32_PERF_CTL",
	<SNIP>
		[0x00000da0] = "IA32_XSS",
		[0x00000dc0] = "LBR_INFO_0",
		[0x00000ffc] = "IA32_BNDCFGS_RSVD",
	};

	#define x86_64_specific_MSRs_offset 0xc0000080
	static const char *x86_64_specific_MSRs[] = {
		[0xc0000080 - x86_64_specific_MSRs_offset] = "EFER",
		[0xc0000081 - x86_64_specific_MSRs_offset] = "STAR",
		[0xc0000082 - x86_64_specific_MSRs_offset] = "LSTAR",
		[0xc0000083 - x86_64_specific_MSRs_offset] = "CSTAR",
		[0xc0000084 - x86_64_specific_MSRs_offset] = "SYSCALL_MASK",
	<SNIP>
		[0xc0000103 - x86_64_specific_MSRs_offset] = "TSC_AUX",
		[0xc0000104 - x86_64_specific_MSRs_offset] = "AMD64_TSC_RATIO",
	};

	#define x86_AMD_V_KVM_MSRs_offset 0xc0010000
	static const char *x86_AMD_V_KVM_MSRs[] = {
		[0xc0010000 - x86_AMD_V_KVM_MSRs_offset] = "K7_EVNTSEL0",
	<SNIP>
		[0xc0010114 - x86_AMD_V_KVM_MSRs_offset] = "VM_CR",
		[0xc0010115 - x86_AMD_V_KVM_MSRs_offset] = "VM_IGNNE",
		[0xc0010117 - x86_AMD_V_KVM_MSRs_offset] = "VM_HSAVE_PA",
	<SNIP>
		[0xc0010240 - x86_AMD_V_KVM_MSRs_offset] = "F15H_NB_PERF_CTL",
		[0xc0010241 - x86_AMD_V_KVM_MSRs_offset] = "F15H_NB_PERF_CTR",
		[0xc0010280 - x86_AMD_V_KVM_MSRs_offset] = "F15H_PTSC",
	};

Then these will in turn be hooked up in a follow up patch to be used by strarrays__scnprintf(). Cc: Adrian Hunter <[email protected]> Cc: Brendan Gregg <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Luis Cláudio Gonçalves <[email protected]> Cc: Namhyung Kim <[email protected]> Link: https://lkml.kernel.org/n/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2019-10-09 | perf beauty: Make strarray's offset be u64 | Arnaldo Carvalho de Melo | 1 | -1/+1
We need it for things like MSRs that are sparse and go over MAXINT. Cc: Adrian Hunter <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Luis Cláudio Gonçalves <[email protected]> Cc: Namhyung Kim <[email protected]> Link: https://lkml.kernel.org/n/[email protected] Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
2019-10-09 | perf/x86/amd: Change/fix NMI latency mitigation to use a timestamp | Tom Lendacky | 1 | -13/+17
It turns out that the NMI latency workaround from commit: 6d3edaae16c6 ("x86/perf/amd: Resolve NMI latency issues for active PMCs") ends up being too conservative and results in the perf NMI handler claiming NMIs too easily on AMD hardware when the NMI watchdog is active. This has an impact, for example, on the hpwdt (HPE watchdog timer) module. This module can produce an NMI that is used to reset the system. It registers an NMI handler for the NMI_UNKNOWN type and relies on the fact that nothing has claimed an NMI so that its handler will be invoked when the watchdog device produces an NMI. After the referenced commit, the hpwdt module is unable to process its generated NMI if the NMI watchdog is active, because the current NMI latency mitigation results in the NMI being claimed by the perf NMI handler. Update the AMD perf NMI latency mitigation workaround to, instead, use a window of time. Whenever a PMC is handled in the perf NMI handler, set a timestamp which will act as a perf NMI window. Any NMIs arriving within that window will be claimed by perf. Anything outside that window will not be claimed by perf. The value for the NMI window is set to 100 msecs. This is a conservative value that easily covers any NMI latency in the hardware. While this still results in a window in which the hpwdt module will not receive its NMI, the window is now much, much smaller. Signed-off-by: Tom Lendacky <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: Alexander Shishkin <[email protected]> Cc: Arnaldo Carvalho de Melo <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Jerry Hoemann <[email protected]> Cc: Jiri Olsa <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Namhyung Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Fixes: 6d3edaae16c6 ("x86/perf/amd: Resolve NMI latency issues for active PMCs") Link: https://lkml.kernel.org/r/Message-ID: Signed-off-by: Ingo Molnar <[email protected]>
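In outline, the mitigation becomes a time-window check rather than a counter. A hedged sketch of the shape of the logic (this_pmc_overflowed() is a hypothetical placeholder, not the kernel's actual helper; the real code lives in the AMD perf NMI handler):

	#define PERF_NMI_WINDOW_MS	100

	static u64 perf_nmi_window;	/* jiffies at which the window closes */

	static int amd_pmu_nmi(void)
	{
		if (this_pmc_overflowed()) {	/* hypothetical helper */
			/* a PMC was handled: open/refresh the perf NMI window */
			perf_nmi_window = jiffies + msecs_to_jiffies(PERF_NMI_WINDOW_MS);
			return NMI_HANDLED;
		}
		/* claim stray NMIs only while the window is open */
		if (time_before(jiffies, perf_nmi_window))
			return NMI_HANDLED;
		return NMI_DONE;	/* outside the window: let hpwdt etc. see it */
	}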
2019-10-09 | perf/core: Fix corner case in perf_rotate_context() | Song Liu | 1 | -5/+17
In perf_rotate_context(), when the first cpu flexible event fails to schedule, cpu_rotate is 1, while cpu_event is NULL. Since cpu_event is NULL, perf_rotate_context() will _NOT_ call cpu_ctx_sched_out(), thus cpuctx->ctx.is_active will have EVENT_FLEXIBLE set. Then, the next perf_event_sched_in() will skip all cpu flexible events because of the EVENT_FLEXIBLE bit. In the next call of perf_rotate_context(), cpu_rotate stays 1, and cpu_event stays NULL, so this process repeats. The end result is, flexible events on this cpu will not be scheduled (until another event is added to the cpuctx). Here is an easy repro of this issue. On Intel CPUs, where ref-cycles can only use one counter, run one pinned event for ref-cycles, one flexible event for ref-cycles, and one flexible event for cycles. The flexible ref-cycles event is never scheduled, which is expected. However, because of this issue, the cycles event is never scheduled either.

	$ perf stat -e ref-cycles:D,ref-cycles,cycles -C 5 -I 1000
	          time       counts unit events
	   1.000152973   15,412,480      ref-cycles:D
	   1.000152973  <not counted>    ref-cycles    (0.00%)
	   1.000152973  <not counted>    cycles        (0.00%)
	   2.000486957   18,263,120      ref-cycles:D
	   2.000486957  <not counted>    ref-cycles    (0.00%)
	   2.000486957  <not counted>    cycles        (0.00%)

To fix this, when the flexible_active list is empty, try to rotate the first event in the flexible_groups. Also, rename ctx_first_active() to ctx_event_to_rotate(), which is more accurate. Signed-off-by: Song Liu <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: <[email protected]> Cc: Arnaldo Carvalho de Melo <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Sasha Levin <[email protected]> Cc: Thomas Gleixner <[email protected]> Fixes: 8d5bce0c37fa ("perf/core: Optimize perf_rotate_context() event scheduling") Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
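A hedged sketch of what the renamed helper does after the change, the fallback to the flexible_groups tree being the actual fix (simplified; field names follow the kernel's perf_event_context, but the exact merged code may differ):

	static struct perf_event *
	ctx_event_to_rotate(struct perf_event_context *ctx)
	{
		struct perf_event *event;

		/* prefer whatever is currently active... */
		event = list_first_entry_or_null(&ctx->flexible_active,
						 struct perf_event, active_list);
		/* ...but if nothing scheduled, rotate the first flexible
		 * group anyway so the stuck state described above ends */
		if (!event)
			event = rb_entry_safe(rb_first(&ctx->flexible_groups.tree),
					      struct perf_event, group_node);
		return event;
	}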
2019-10-09 | perf/core: Rework memory accounting in perf_mmap() | Song Liu | 1 | -2/+15
perf_mmap() always increases user->locked_vm. As a result, "extra" could grow bigger than "user_extra", which doesn't make sense. Here is an example case: (Note: Assume "user_lock_limit" is very small.)

	| # of perf_mmap calls | vma->vm_mm->pinned_vm | user->locked_vm |
	|          0           |           0           |        0        |
	|          1           |      user_extra       |   user_extra    |
	|          2           |    3 * user_extra     | 2 * user_extra  |
	|          3           |    6 * user_extra     | 3 * user_extra  |
	|          4           |   10 * user_extra     | 4 * user_extra  |

Fix this by maintaining proper user_extra and extra. Reviewed-By: Hechao Li <[email protected]> Reported-by: Hechao Li <[email protected]> Signed-off-by: Song Liu <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: <[email protected]> Cc: Jie Meng <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
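The core of the fix, in hedged sketch form: when the per-user lock limit is exceeded, the overflow is charged to pinned_vm and subtracted from what gets added to user->locked_vm, keeping the two columns above consistent (variable names follow the commit message; the surrounding perf_mmap() code is elided):

	user_locked = atomic_long_read(&user->locked_vm) + user_extra;
	if (user_locked > user_lock_limit) {
		/* charge locked_vm only up to the limit ... */
		extra = user_locked - user_lock_limit;
		/* ... and move the remainder to pinned_vm instead of
		 * double-charging it (the previously missing adjustment) */
		user_extra -= extra;
	}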
2019-10-09 | sched/vtime: Fix guest/system mis-accounting on task switch | Frederic Weisbecker | 1 | -3/+3
vtime_account_system() assumes that the target task to account cputime to is always the current task. This is most often true indeed, except on task switch where we call:

	vtime_common_task_switch(prev)
	  vtime_account_system(prev)

Here prev is the scheduling-out task that we account the cputime to. It doesn't match current, which is already the scheduling-in task at this stage of the context switch. So we end up checking the wrong task's flags to determine whether we are accounting guest or system time to the previous task. As a result the wrong task is used to check if the target is running in guest mode. We may then spuriously account or leak either system or guest time on task switch. Fix this assumption and also turn vtime_guest_enter/exit() to use the task passed in parameter as well, to avoid future similar issues. Signed-off-by: Frederic Weisbecker <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Rik van Riel <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Wanpeng Li <[email protected]> Fixes: 2a42eb9594a1 ("sched/cputime: Accumulate vtime on top of nsec clocksource") Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
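Sketched out, the fix is to test the flags of the task being accounted rather than of current (a simplification; account_guest_time_to()/account_system_time_to() are illustrative stand-ins for the real accumulation helpers in kernel/sched/cputime.c):

	static void vtime_account_system(struct task_struct *tsk)
	{
		/* was: current->flags & PF_VCPU -- wrong on task switch,
		 * where tsk is the scheduling-out task, not current */
		if (tsk->flags & PF_VCPU)
			account_guest_time_to(tsk);	/* hypothetical helper */
		else
			account_system_time_to(tsk);	/* hypothetical helper */
	}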
2019-10-09 | sched/fair: Scale bandwidth quota and period without losing quota/period ratio precision | Xuewei Zhang | 1 | -14/+22
The quota/period ratio is used to ensure a child task group won't get more bandwidth than the parent task group, and is calculated as:

	normalized_cfs_quota() = [(quota_us << 20) / period_us]

If the quota/period ratio was changed during this scaling due to precision loss, it will cause inconsistency between parent and child task groups. See below example: A userspace container manager (kubelet) does three operations:

	1) Create a parent cgroup, set quota to 1,000us and period to 10,000us.
	2) Create a few children cgroups.
	3) Set quota to 1,000us and period to 10,000us on a child cgroup.

These operations are expected to succeed. However, if the scaling of 147/128 happens before step 3, quota and period of the parent cgroup will be changed:

	new_quota:  1148437ns, 1148us
	new_period: 11484375ns, 11484us

And when step 3 comes in, the ratio of the child cgroup will be 104857, which will be larger than the parent cgroup ratio (104821), and will fail. Scaling them by a factor of 2 will fix the problem. Tested-by: Phil Auld <[email protected]> Signed-off-by: Xuewei Zhang <[email protected]> Signed-off-by: Peter Zijlstra (Intel) <[email protected]> Acked-by: Phil Auld <[email protected]> Cc: Anton Blanchard <[email protected]> Cc: Ben Segall <[email protected]> Cc: Dietmar Eggemann <[email protected]> Cc: Juri Lelli <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Mel Gorman <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Steven Rostedt <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Vincent Guittot <[email protected]> Fixes: 2e8e19226398 ("sched/fair: Limit sched_cfs_period_timer() loop to avoid hard lockup") Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
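The arithmetic can be checked with a few lines of stand-alone C mirroring normalized_cfs_quota():

	#include <stdio.h>

	static long long norm(long long quota_us, long long period_us)
	{
		return (quota_us << 20) / period_us;	/* normalized_cfs_quota() */
	}

	int main(void)
	{
		printf("%lld\n", norm(1000, 10000));			/* 104857 */
		/* scaling by 147/128 with integer truncation: 1148us/11484us */
		printf("%lld\n", norm(1000 * 147 / 128, 10000 * 147 / 128));	/* 104821 */
		/* scaling by exactly 2 keeps the ratio intact */
		printf("%lld\n", norm(1000 * 2, 10000 * 2));		/* 104857 */
		return 0;
	}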
2019-10-09 | powerpc/kvm: Fix kvmppc_vcore->in_guest value in kvmhv_switch_to_host | Jordan Niethe | 1 | -0/+1
kvmhv_switch_to_host() in arch/powerpc/kvm/book3s_hv_rmhandlers.S needs to set kvmppc_vcore->in_guest to 0 to signal secondary CPUs to continue. This happens after resetting the PCR. Before commit 13c7bb3c57dc ("powerpc/64s: Set reserved PCR bits"), r0 would always be 0 before it was stored to kvmppc_vcore->in_guest. However because of this change in the commit:

		/* Reset PCR */
		ld	r0, VCORE_PCR(r5)
	-	cmpdi	r0, 0
	+	LOAD_REG_IMMEDIATE(r6, PCR_MASK)
	+	cmpld	r0, r6
		beq	18f
	-	li	r0, 0
	-	mtspr	SPRN_PCR, r0
	+	mtspr	SPRN_PCR, r6
	18:
		/* Signal secondary CPUs to continue */
		stb	r0,VCORE_IN_GUEST(r5)

we are no longer comparing r0 against 0 and loading it with 0 if it contains something else. Hence when we store r0 to kvmppc_vcore->in_guest, it might not be 0. This means that secondary CPUs will not be signalled to continue. Those CPUs get stuck and errors like the following are logged:

	KVM: CPU 1 seems to be stuck
	KVM: CPU 2 seems to be stuck
	KVM: CPU 3 seems to be stuck
	KVM: CPU 4 seems to be stuck
	KVM: CPU 5 seems to be stuck
	KVM: CPU 6 seems to be stuck
	KVM: CPU 7 seems to be stuck

This can be reproduced with:

	$ for i in `seq 1 7` ; do chcpu -d $i ; done ;
	$ taskset -c 0 qemu-system-ppc64 -smp 8,threads=8 \
	    -M pseries,accel=kvm,kvm-type=HV -m 1G -nographic -vga none \
	    -kernel vmlinux -initrd initrd.cpio.xz

Fix by making sure r0 is 0 before storing it to kvmppc_vcore->in_guest. Fixes: 13c7bb3c57dc ("powerpc/64s: Set reserved PCR bits") Reported-by: Alexey Kardashevskiy <[email protected]> Signed-off-by: Jordan Niethe <[email protected]> Reviewed-by: Alistair Popple <[email protected]> Tested-by: Alexey Kardashevskiy <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2019-10-09 | selftests/powerpc: Fix compile error on tlbie_test due to newer gcc | Desnes A. Nunes do Rosario | 1 | -1/+1
Newer versions of GCC (>= 9) demand that the size of the string to be copied must be explicitly smaller than the size of the destination. Thus, the NUL char has to be taken into account on strncpy. This avoids the following compile error:

	tlbie_test.c: In function 'main':
	tlbie_test.c:639:4: error: 'strncpy' specified bound 100 equals destination size
	    strncpy(logdir, optarg, LOGDIR_NAME_SIZE);
	    ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
	cc1: all warnings being treated as errors

Signed-off-by: Desnes A. Nunes do Rosario <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
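The fix amounts to making the bound strictly smaller than the destination and terminating explicitly, along these lines (a sketch; the actual patch may only adjust the bound):

	#include <string.h>

	#define LOGDIR_NAME_SIZE 100

	static char logdir[LOGDIR_NAME_SIZE];

	static void set_logdir(const char *optarg)
	{
		/* bound < destination size, leaving room for the NUL */
		strncpy(logdir, optarg, LOGDIR_NAME_SIZE - 1);
		logdir[LOGDIR_NAME_SIZE - 1] = '\0';
	}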
2019-10-09 | powerpc/pseries: Remove confusing warning message. | Laurent Dufour | 1 | -0/+3
Since commit 1211ee61b4a8 ("powerpc/pseries: Read TLB Block Invalidate Characteristics"), a warning message is displayed when booting a guest on top of KVM:

	lpar: arch/powerpc/platforms/pseries/lpar.c pseries_lpar_read_hblkrm_characteristics Error calling get-system-parameter (0xfffffffd)

This message is displayed because this hypervisor does not support the H_BLOCK_REMOVE hcall and thus does not expose the corresponding feature. Reading the TLB Block Invalidate Characteristics should not be done if the feature is not exposed. Fixes: 1211ee61b4a8 ("powerpc/pseries: Read TLB Block Invalidate Characteristics") Reported-by: Stephen Rothwell <[email protected]> Signed-off-by: Laurent Dufour <[email protected]> Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2019-10-09 | powerpc/64s/radix: Fix build failure with RADIX_MMU=n | Stephen Rothwell | 1 | -0/+4
After merging the powerpc tree, today's linux-next build (powerpc64 allnoconfig) failed like this:

	arch/powerpc/mm/book3s64/pgtable.c:216:3: error: implicit declaration of function 'radix__flush_all_lpid_guest'

radix__flush_all_lpid_guest() is only declared for CONFIG_PPC_RADIX_MMU, which is not set for this build. Fix it by adding an empty version for the RADIX_MMU=n case, which should never be called. Fixes: 99161de3a283 ("powerpc/64s/radix: tidy up TLB flushing code") Signed-off-by: Stephen Rothwell <[email protected]> [mpe: Munge change log] Signed-off-by: Michael Ellerman <[email protected]> Link: https://lore.kernel.org/r/[email protected]
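The usual pattern for such a fix, sketched under the assumption that the declaration lives behind CONFIG_PPC_RADIX_MMU in the TLB flush headers:

	#ifdef CONFIG_PPC_RADIX_MMU
	extern void radix__flush_all_lpid_guest(unsigned int lpid);
	#else
	/* never called with RADIX_MMU=n; exists only so common code links */
	static inline void radix__flush_all_lpid_guest(unsigned int lpid)
	{
		WARN_ON_ONCE(1);
	}
	#endif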
2019-10-09 | CIFS: Force reval dentry if LOOKUP_REVAL flag is set | Pavel Shilovsky | 1 | -1/+7
Mark inode for force revalidation if LOOKUP_REVAL flag is set. This tells the client to actually send a QueryInfo request to the server to obtain the latest metadata in case a directory or a file were changed remotely. Only do that if the client doesn't have a lease for the file to avoid unneeded round trips to the server. Cc: <[email protected]> Signed-off-by: Pavel Shilovsky <[email protected]> Signed-off-by: Steve French <[email protected]>
2019-10-09 | CIFS: Force revalidate inode when dentry is stale | Pavel Shilovsky | 1 | -0/+4
Currently the client indicates that a dentry is stale when inode numbers or types between a local inode and a remote file don't match. If this is the case, attributes are not copied from remote to local, so it is already known that the local copy has stale metadata. That's why the inode needs to be marked for revalidation, in order to tell the VFS to look up the dentry again before opening a file. This prevents unexpected stale errors from being returned to user space when opening a file. Cc: <[email protected]> Signed-off-by: Pavel Shilovsky <[email protected]> Signed-off-by: Steve French <[email protected]>
2019-10-09 | smb3: Fix regression in time handling | Steve French | 1 | -8/+16
Fixes: cb7a69e60590 ("cifs: Initialize filesystem timestamp ranges") Only very old servers (e.g. OS/2 and DOS) did not support DCE TIME (100 nanosecond granularity). Fix the checks used to set minimum and maximum times. Fixes xfstest generic/258 (on 5.4-rc1 and later) CC: Deepa Dinamani <[email protected]> Acked-by: Jeff Layton <[email protected]> Signed-off-by: Steve French <[email protected]> Reviewed-by: Pavel Shilovsky <[email protected]>
2019-10-08 | smb3: remove noisy debug message and minor cleanup | Steve French | 3 | -9/+6
The message was intended only for a temporary developer build. In addition, clean up two minor warnings noticed by Coverity and make a trivial change to work around a sparse warning. Signed-off-by: Steve French <[email protected]> Reviewed-by: Pavel Shilovsky <[email protected]>
2019-10-08 | Merge tag 'led-fixes-for-5.4-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/j.anaszewski/linux-leds | Linus Torvalds | 2 | -3/+3
Pull LED fixes from Jacek Anaszewski:

 - fix a leftover from an earlier stage of development in the documentation of the recently added led_compose_name() and fix an old mistake in the documentation of the led_set_brightness_sync() parameter name.

 - MAINTAINERS: add a pointer to Pavel Machek's linux-leds.git tree. Pavel is going to take over LED tree maintainership from myself.

* tag 'led-fixes-for-5.4-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/j.anaszewski/linux-leds:
  Add my linux-leds branch to MAINTAINERS
  leds: core: Fix leds.h structure documentation
2019-10-08 | Add my linux-leds branch to MAINTAINERS | Pavel Machek | 1 | -0/+1
Add pointer to my git tree to MAINTAINERS. I'd like to maintain linux-leds for-next branch for 5.5. Signed-off-by: Pavel Machek <[email protected]> Signed-off-by: Jacek Anaszewski <[email protected]>
2019-10-08 | leds: core: Fix leds.h structure documentation | Dan Murphy | 1 | -3/+2
Update the leds.h structure documentation to define the correct arguments. Signed-off-by: Dan Murphy <[email protected]> Signed-off-by: Jacek Anaszewski <[email protected]>
2019-10-08 | Merge tag 'gpio-v5.4-2' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-gpio | Linus Torvalds | 4 | -15/+27
Pull GPIO fixes from Linus Walleij:

 - don't clear FLAG_IS_OUT when emulating open drain/source in gpiolib
 - fix up the usage of nonexclusive GPIO descriptors from device trees
 - fix the incorrect IEC offset when toggling trigger edge in the Spreadtrum driver
 - use the correct unit for debounce settings in the MAX77620 driver

* tag 'gpio-v5.4-2' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-gpio:
  gpio: max77620: Use correct unit for debounce times
  gpio: eic: sprd: Fix the incorrect EIC offset when toggling
  gpio: fix getting nonexclusive gpiods from DT
  gpiolib: don't clear FLAG_IS_OUT when emulating open-drain/open-source
2019-10-08 | Merge tag 'selinux-pr-20191007' of git://git.kernel.org/pub/scm/linux/kernel/git/pcmoore/selinux | Linus Torvalds | 1 | -1/+8
Pull selinux fix from Paul Moore: "One patch to ensure we don't copy bad memory up into userspace"

* tag 'selinux-pr-20191007' of git://git.kernel.org/pub/scm/linux/kernel/git/pcmoore/selinux:
  selinux: fix context string corruption in convert_context()
2019-10-08 | Merge tag 'linux-kselftest-5.4-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest | Linus Torvalds | 7 | -11/+97
Pull Kselftest fixes from Shuah Khan: "Fixes for existing tests and the framework.

Cristian Marussi's patches add the ability to skip targets (tests) and exclude tests that didn't build from the run-list. These patches improve the Kselftest results. The ability to skip targets helps avoid running tests that aren't supported in certain environments. As an example, bpf tests from mainline aren't supported on stable kernels and have a dependency on bleeding edge llvm. Being able to skip bpf on systems that can't meet this llvm dependency will be helpful.

Kselftest can be built and installed from the main Makefile. This change helps simplify Kselftest use-cases, which addresses a request from users.

Kees Cook added per-test timeout support to limit individual test run-time."

* tag 'linux-kselftest-5.4-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest:
  selftests: watchdog: Add command line option to show watchdog_info
  selftests: watchdog: Validate optional file argument
  selftests/kselftest/runner.sh: Add 45 second timeout per test
  kselftest: exclude failed TARGETS from runlist
  kselftest: add capability to skip chosen TARGETS
  selftests: Add kselftest-all and kselftest-install targets
2019-10-08 | x86/cpu: Add Comet Lake to the Intel CPU models header | Kan Liang | 1 | -0/+3
Comet Lake is the new 10th Gen Intel processor. Add two new CPU model numbers to the Intel family list. The CPU model numbers are not published in the SDM yet but they come from an authoritative internal source. [ bp: Touch up commit message. ] Signed-off-by: Kan Liang <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Reviewed-by: Tony Luck <[email protected]> Cc: [email protected] Cc: "H. Peter Anvin" <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: x86-ml <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
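For reference, the two model numbers are believed to land in arch/x86/include/asm/intel-family.h as follows (hedged: values quoted from memory of the merged header; verify against the tree):

	#define INTEL_FAM6_COMETLAKE		0xA5
	#define INTEL_FAM6_COMETLAKE_L		0xA6	/* mobile/low-power parts */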
2019-10-08 | doc: move namespaces.rst from kbuild/ to core-api/ | Masahiro Yamada | 3 | -0/+2
We discussed a better location for this file, and agreed that core-api/ is a good fit. Rename it to symbol-namespaces.rst for disambiguation, and also add it to index.rst and MAINTAINERS. Signed-off-by: Masahiro Yamada <[email protected]> Acked-by: Matthias Maennich <[email protected]> Signed-off-by: Jessica Yu <[email protected]>
2019-10-08 | arm64: armv8_deprecated: Checking return value for memory allocation | Yunfeng Ye | 1 | -0/+5
There is no return value checking when using kzalloc() and kcalloc() for memory allocation, so add it. Signed-off-by: Yunfeng Ye <[email protected]> Signed-off-by: Will Deacon <[email protected]>
2019-10-08 | lib/string: Make memzero_explicit() inline instead of external | Arvind Sankar | 2 | -22/+20
With the use of the barrier implied by barrier_data(), there is no need for memzero_explicit() to be extern. Making it inline saves the overhead of a function call, and allows the code to be reused in arch/*/purgatory without having to duplicate the implementation. Tested-by: Hans de Goede <[email protected]> Signed-off-by: Arvind Sankar <[email protected]> Reviewed-by: Hans de Goede <[email protected]> Cc: Ard Biesheuvel <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: H . Peter Anvin <[email protected]> Cc: Herbert Xu <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Stephan Mueller <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Cc: [email protected] Fixes: 906a4bb97f5d ("crypto: sha256 - Use get/put_unaligned_be32 to get input, memzero_explicit") Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
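A sketch of the inline form; the barrier_data() forces the compiler to treat the zeroed memory as used, so the memset() cannot be elided as a dead store:

	static inline void memzero_explicit(void *s, size_t count)
	{
		memset(s, 0, count);
		barrier_data(s);	/* keep the stores from being optimized away */
	}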
2019-10-08 | x86/cpu/vmware: Use the full form of INL in VMWARE_PORT | Sami Tolvanen | 1 | -1/+1
LLVM's assembler doesn't accept the short form INL instruction:

	inl (%%dx)

but instead insists on the output register being explicitly specified:

	<inline asm>:1:7: error: invalid operand for instruction
	inl (%dx)
	      ^
	LLVM ERROR: Error parsing inline asm

Use the full form of the instruction to fix the build. Signed-off-by: Sami Tolvanen <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Reviewed-by: Nick Desaulniers <[email protected]> Reviewed-by: Kees Cook <[email protected]> Acked-by: Thomas Hellstrom <[email protected]> Cc: [email protected] Cc: "H. Peter Anvin" <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Cc: "VMware, Inc." <[email protected]> Cc: x86-ml <[email protected]> Link: https://github.com/ClangBuiltLinux/linux/issues/734 Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
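A hedged illustration of the difference (a sketch, not the VMWARE_PORT macro itself): spelling out the implicit %eax destination makes both GNU as and LLVM's integrated assembler happy:

	static unsigned int port_inl(unsigned short port)
	{
		unsigned int value;

		/* short form "inl (%%dx)" breaks LLVM; the full form,
		 * with the destination register explicit, works in both */
		asm volatile("inl (%%dx), %%eax" : "=a" (value) : "d" (port));
		return value;
	}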
2019-10-08 | x86/asm: Fix MWAITX C-state hint value | Janakarajan Natarajan | 2 | -3/+3
As per "AMD64 Architecture Programmer's Manual Volume 3: General-Purpose and System Instructions", MWAITX EAX[7:4]+1 specifies the optional hint of the optimized C-state. For C0 state, EAX[7:4] should be set to 0xf. Currently, a value of 0xf is set for EAX[3:0] instead of EAX[7:4]. Fix this by changing MWAITX_DISABLE_CSTATES from 0xf to 0xf0. This hasn't had any implications so far because setting reserved bits in EAX is simply ignored by the CPU. [ bp: Fixup comment in delay_mwaitx() and massage. ] Signed-off-by: Janakarajan Natarajan <[email protected]> Signed-off-by: Borislav Petkov <[email protected]> Cc: Frederic Weisbecker <[email protected]> Cc: Greg Kroah-Hartman <[email protected]> Cc: "H. Peter Anvin" <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: "[email protected]" <[email protected]> Cc: Zhenzhong Duan <[email protected]> Cc: <[email protected]> Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2019-10-08 | btrfs: silence maybe-uninitialized warning in clone_range | Austin Kim | 1 | -1/+1
GCC throws the following warning message:

	'clone_src_i_size' may be used uninitialized in this function [-Wmaybe-uninitialized]
	 #define IS_ALIGNED(x, a) (((x) & ((typeof(x))(a) - 1)) == 0)
	                          ^
	fs/btrfs/send.c:5088:6: note: 'clone_src_i_size' was declared here
	  u64 clone_src_i_size;
	      ^

The clone_src_i_size is only used as call-by-reference in a call to get_inode_info(). Silence the warning by initializing clone_src_i_size to 0. Note that the warning is a false positive and reported by older versions of GCC (e.g. 7.x) but not e.g. 9.x. As there have been numerous reports of this warning, the patch is applied. Setting clone_src_i_size to 0 does not otherwise make sense and has no effect should the code change in the future. Signed-off-by: Austin Kim <[email protected]> Reviewed-by: David Sterba <[email protected]> [ add note ] Signed-off-by: David Sterba <[email protected]>
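The whole fix is the initializer, roughly:

	u64 clone_src_i_size = 0;	/* silences -Wmaybe-uninitialized on GCC 7.x;
					 * always written by get_inode_info() before use */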
2019-10-08 | efi/tpm: Fix sanity check of unsigned tbl_size being less than zero | Colin Ian King | 1 | -1/+1
Currently the check for tbl_size being less than zero is always false because tbl_size is unsigned. Fix this by making it a signed int. Addresses-Coverity: ("Unsigned compared against 0") Signed-off-by: Colin Ian King <[email protected]> Cc: Ard Biesheuvel <[email protected]> Cc: Jerry Snitselaar <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: [email protected] Cc: [email protected] Fixes: e658c82be556 ("efi/tpm: Only set 'efi_tpm_final_log_size' after successful event log parsing") Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Ingo Molnar <[email protected]>
2019-10-08 | drm/panel: tpo-td043mtea1: Fix SPI alias | Laurent Pinchart | 1 | -1/+8
The panel-tpo-td043mtea1 driver incorrectly includes the OF vendor prefix in its SPI alias. Fix it, and move the manual alias to an SPI module device table. Fixes: dc2e1e5b2799 ("drm/panel: Add driver for the Toppoly TD043MTEA1 panel") Reported-by: H. Nikolaus Schaller <[email protected]> Signed-off-by: Laurent Pinchart <[email protected]> Signed-off-by: Tomi Valkeinen <[email protected]> Link: https://patchwork.freedesktop.org/patch/msgid/[email protected] Acked-by: Sam Ravnborg <[email protected]> Reviewed-by: Sebastian Reichel <[email protected]> Tested-by: H. Nikolaus Schaller <[email protected]>
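The pattern shared by this and the following four panel fixes, in hedged sketch form (the exact ID string comes from each driver; shown here for td043mtea1):

	#include <linux/module.h>
	#include <linux/spi/spi.h>

	/* SPI aliases must not carry the OF vendor prefix ("tpo,..."), so a
	 * device ID table replaces the manual MODULE_ALIAS() */
	static const struct spi_device_id td043mtea1_ids[] = {
		{ "td043mtea1", 0 },
		{ /* sentinel */ }
	};
	MODULE_DEVICE_TABLE(spi, td043mtea1_ids);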
2019-10-08 | drm/panel: tpo-td028ttec1: Fix SPI alias | Laurent Pinchart | 1 | -2/+1
The panel-tpo-td028ttec1 driver incorrectly includes the OF vendor prefix in its SPI alias. Fix it. Fixes: 415b8dd08711 ("drm/panel: Add driver for the Toppoly TD028TTEC1 panel") Reported-by: H. Nikolaus Schaller <[email protected]> Signed-off-by: Laurent Pinchart <[email protected]> Signed-off-by: Tomi Valkeinen <[email protected]> Link: https://patchwork.freedesktop.org/patch/msgid/[email protected] Acked-by: Sam Ravnborg <[email protected]> Reviewed-by: Sebastian Reichel <[email protected]> Tested-by: H. Nikolaus Schaller <[email protected]> Tested-by: Andreas Kemnade <[email protected]>
2019-10-08 | drm/panel: sony-acx565akm: Fix SPI alias | Laurent Pinchart | 1 | -1/+8
The panel-sony-acx565akm driver incorrectly includes the OF vendor prefix in its SPI alias. Fix it, and move the manual alias to an SPI module device table. Fixes: 1c8fc3f0c5d2 ("drm/panel: Add driver for the Sony ACX565AKM panel") Reported-by: H. Nikolaus Schaller <[email protected]> Signed-off-by: Laurent Pinchart <[email protected]> Signed-off-by: Tomi Valkeinen <[email protected]> Link: https://patchwork.freedesktop.org/patch/msgid/[email protected] Acked-by: Sam Ravnborg <[email protected]> Reviewed-by: Sebastian Reichel <[email protected]>
2019-10-08 | drm/panel: nec-nl8048hl11: Fix SPI alias | Laurent Pinchart | 1 | -1/+8
The panel-nec-nl8048hl11 driver incorrectly includes the OF vendor prefix in its SPI alias. Fix it, and move the manual alias to an SPI module device table. Fixes: df439abe6501 ("drm/panel: Add driver for the NEC NL8048HL11 panel") Reported-by: H. Nikolaus Schaller <[email protected]> Signed-off-by: Laurent Pinchart <[email protected]> Signed-off-by: Tomi Valkeinen <[email protected]> Link: https://patchwork.freedesktop.org/patch/msgid/[email protected] Acked-by: Sam Ravnborg <[email protected]> Reviewed-by: Sebastian Reichel <[email protected]>
2019-10-08 | drm/panel: lg-lb035q02: Fix SPI alias | Laurent Pinchart | 1 | -1/+8
The panel-lg-lb035q02 driver incorrectly includes the OF vendor prefix in its SPI alias. Fix it, and move the manual alias to an SPI module device table. Fixes: f5b0c6542476 ("drm/panel: Add driver for the LG Philips LB035Q02 panel") Reported-by: H. Nikolaus Schaller <[email protected]> Signed-off-by: Laurent Pinchart <[email protected]> Signed-off-by: Tomi Valkeinen <[email protected]> Link: https://patchwork.freedesktop.org/patch/msgid/[email protected] Acked-by: Sam Ravnborg <[email protected]> Reviewed-by: Sebastian Reichel <[email protected]>
2019-10-07 | io_uring: remove wait loop spurious wakeups | Pavel Begunkov | 1 | -12/+4
Any changes interesting to tasks waiting in io_cqring_wait() are committed with io_cqring_ev_posted(). However, io_ring_drop_ctx_refs() also tries to do that, but for no reason, which means spurious wakeups on every io_free_req() and io_uring_enter(). Just use percpu_ref_put() instead. Signed-off-by: Pavel Begunkov <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
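After the change, dropping references is a plain put with no wakeup, roughly (a sketch; details of the merged function may differ):

	static void io_ring_drop_ctx_refs(struct io_ring_ctx *ctx, unsigned refs)
	{
		/* no wait_queue wakeup here: io_cqring_ev_posted() already
		 * wakes io_cqring_wait() when something lands in the CQ ring */
		percpu_ref_put_many(&ctx->refs, refs);
	}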
2019-10-07 | Merge branch 'akpm' (patches from Andrew) | Linus Torvalds | 21 | -89/+281
Merge misc fixes from Andrew Morton: "The usual shower of hotfixes. Chris's memcg patches aren't actually fixes - they're mature but a few niggling review issues were late to arrive. The ocfs2 fixes are quite old - those took some time to get reviewer attention. Subsystems affected by this patch series: ocfs2, hotfixes, mm/memcg, mm/slab-generic"

* emailed patches from Andrew Morton <[email protected]>:
  mm, sl[aou]b: guarantee natural alignment for kmalloc(power-of-two)
  mm, sl[ou]b: improve memory accounting
  mm, memcg: make scan aggression always exclude protection
  mm, memcg: make memory.emin the baseline for utilisation determination
  mm, memcg: proportional memory.{low,min} reclaim
  mm/vmpressure.c: fix a signedness bug in vmpressure_register_event()
  mm/page_alloc.c: fix a crash in free_pages_prepare()
  mm/z3fold.c: claim page in the beginning of free
  kernel/sysctl.c: do not override max_threads provided by userspace
  memcg: only record foreign writebacks with dirty pages when memcg is not disabled
  mm: fix -Wmissing-prototypes warnings
  writeback: fix use-after-free in finish_writeback_work()
  mm/memremap: drop unused SECTION_SIZE and SECTION_MASK
  panic: ensure preemption is disabled during panic()
  fs: ocfs2: fix a possible null-pointer dereference in ocfs2_info_scan_inode_alloc()
  fs: ocfs2: fix a possible null-pointer dereference in ocfs2_write_end_nolock()
  fs: ocfs2: fix possible null-pointer dereferences in ocfs2_xa_prepare_entry()
  ocfs2: clear zero in unaligned direct IO
2019-10-07 | mm, sl[aou]b: guarantee natural alignment for kmalloc(power-of-two) | Vlastimil Babka | 4 | -12/+49
In most configurations, kmalloc() happens to return naturally aligned (i.e. aligned to the block size itself) blocks for power of two sizes. That means some kmalloc() users might unknowingly rely on that alignment, until stuff breaks when the kernel is built with e.g. CONFIG_SLUB_DEBUG or CONFIG_SLOB, and blocks stop being aligned. Then developers have to devise workarounds such as own kmem caches with specified alignment [1], which is not always practical, as recently evidenced in [2]. The topic has been discussed at LSF/MM 2019 [3].

Adding a 'kmalloc_aligned()' variant would not help with code unknowingly relying on the implicit alignment. For slab implementations it would either require creating more kmalloc caches, or allocating a larger size and only giving back part of it. That would be wasteful, especially with a generic alignment parameter (in contrast with a fixed alignment to size).

Ideally we should provide to mm users what they need without difficult workarounds or own reimplementations, so let's make the kmalloc() alignment to size explicitly guaranteed for power-of-two sizes under all configurations. What does this mean for the three available allocators?

* SLAB object layout happens to be mostly unchanged by the patch. The implicitly provided alignment could be compromised with CONFIG_DEBUG_SLAB due to redzoning, however SLAB disables redzoning for caches with alignment larger than unsigned long long. Practically on at least x86 this includes kmalloc caches as they use cache line alignment, which is larger than that. Still, this patch ensures alignment on all arches and cache sizes.

* SLUB layout is also unchanged unless redzoning is enabled through CONFIG_SLUB_DEBUG and a boot parameter for the particular kmalloc cache. With this patch, explicit alignment is guaranteed with redzoning as well. This will result in more memory being wasted, but that should be acceptable in a debugging scenario.

* SLOB has no implicit alignment so this patch adds it explicitly for kmalloc(). The potential downside is increased fragmentation. While pathological allocation scenarios are certainly possible, in my testing, after booting a x86_64 kernel+userspace with virtme, around 16MB memory was consumed by slab pages both before and after the patch, with the difference in the noise.

[1] https://lore.kernel.org/linux-btrfs/c3157c8e8e0e7588312b40c853f65c02fe6c957a.1566399731.git.christophe.leroy@c-s.fr/
[2] https://lore.kernel.org/linux-fsdevel/[email protected]/
[3] https://lwn.net/Articles/787740/

[[email protected]: documentation fixlet, per Matthew] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Vlastimil Babka <[email protected]> Reviewed-by: Matthew Wilcox (Oracle) <[email protected]> Acked-by: Michal Hocko <[email protected]> Acked-by: Kirill A. Shutemov <[email protected]> Acked-by: Christoph Hellwig <[email protected]> Cc: David Sterba <[email protected]> Cc: Christoph Lameter <[email protected]> Cc: Pekka Enberg <[email protected]> Cc: David Rientjes <[email protected]> Cc: Ming Lei <[email protected]> Cc: Dave Chinner <[email protected]> Cc: "Darrick J . Wong" <[email protected]> Cc: Christoph Hellwig <[email protected]> Cc: James Bottomley <[email protected]> Cc: Vlastimil Babka <[email protected]> Cc: Joonsoo Kim <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
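What the guarantee means for callers, as a hedged sanity-check sketch: with this patch, the WARN below should no longer be able to fire for any allocator/debug-option combination:

	#include <linux/slab.h>

	static int __init check_kmalloc_alignment(void)
	{
		void *buf = kmalloc(512, GFP_KERNEL);	/* power-of-two size */

		if (!buf)
			return -ENOMEM;
		/* natural alignment: a 512-byte block on a 512-byte boundary */
		WARN_ON((unsigned long)buf & (512 - 1));
		kfree(buf);
		return 0;
	}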
2019-10-07 | mm, sl[ou]b: improve memory accounting | Vlastimil Babka | 3 | -9/+33
Patch series "guarantee natural alignment for kmalloc()", v2. This patch (of 2): SLOB currently doesn't account its pages at all, so in /proc/meminfo the Slab field shows zero. Modifying a counter on page allocation and freeing should be acceptable even for the small system scenarios SLOB is intended for. Since reclaimable caches are not separated in SLOB, account everything as unreclaimable. SLUB currently doesn't account kmalloc() and kmalloc_node() allocations larger than order-1 page, that are passed directly to the page allocator. As they also don't appear in /proc/slabinfo, it might look like a memory leak. For consistency, account them as well. (SLAB doesn't actually use page allocator directly, so no change there). Ideally SLOB and SLUB would be handled in separate patches, but due to the shared kmalloc_order() function and different kfree() implementations, it's easier to patch both at once to prevent inconsistencies. Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Vlastimil Babka <[email protected]> Cc: Christoph Lameter <[email protected]> Cc: Pekka Enberg <[email protected]> Cc: David Rientjes <[email protected]> Cc: Ming Lei <[email protected]> Cc: Dave Chinner <[email protected]> Cc: Matthew Wilcox <[email protected]> Cc: "Darrick J . Wong" <[email protected]> Cc: Christoph Hellwig <[email protected]> Cc: James Bottomley <[email protected]> Cc: Vlastimil Babka <[email protected]> Cc: Joonsoo Kim <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2019-10-07 | mm, memcg: make scan aggression always exclude protection | Chris Down | 2 | -54/+32
This patch is an incremental improvement on the existing memory.{low,min} relative reclaim work to base its scan pressure calculations on how much protection is available compared to the current usage, rather than how much the current usage is over some protection threshold. This change doesn't change the experience for the user in the normal case too much. One benefit is that it replaces the (somewhat arbitrary) 100% cutoff with an indefinite slope, which makes it easier to ballpark a memory.low value. As well as this, the old methodology doesn't quite apply generically to machines with varying amounts of physical memory. Let's say we have a top level cgroup, workload.slice, and another top level cgroup, system-management.slice. We want to roughly give 12G to system-management.slice, so on a 32GB machine we set memory.low to 20GB in workload.slice, and on a 64GB machine we set memory.low to 52GB. However, because these are relative amounts to the total machine size, while the amount of memory we want to generally be willing to yield to system.slice is absolute (12G), we end up putting more pressure on system.slice just because we have a larger machine and a larger workload to fill it, which seems fairly unintuitive. With this new behaviour, we don't end up with this unintended side effect. Previously the way that memory.low protection works is that if you are 50% over a certain baseline, you get 50% of your normal scan pressure. This is certainly better than the previous cliff-edge behaviour, but it can be improved even further by always considering memory under the currently enforced protection threshold to be out of bounds. This means that we can set relatively low memory.low thresholds for variable or bursty workloads while still getting a reasonable level of protection, whereas with the previous version we may still trivially hit the 100% clamp. The previous 100% clamp is also somewhat arbitrary, whereas this one is more concretely based on the currently enforced protection threshold, which is likely easier to reason about. There is also a subtle issue with the way that proportional reclaim worked previously -- it promotes having no memory.low, since it makes pressure higher during low reclaim. This happens because we base our scan pressure modulation on how far memory.current is between memory.min and memory.low, but if memory.low is unset, we only use the overage method. In most cromulent configurations, this then means that we end up with *more* pressure than with no memory.low at all when we're in low reclaim, which is not really very usable or expected. With this patch, memory.low and memory.min affect reclaim pressure in a more understandable and composable way. For example, from a user standpoint, "protected" memory now remains untouchable from a reclaim aggression standpoint, and users can also have more confidence that bursty workloads will still receive some amount of guaranteed protection. Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Chris Down <[email protected]> Reviewed-by: Roman Gushchin <[email protected]> Acked-by: Johannes Weiner <[email protected]> Acked-by: Michal Hocko <[email protected]> Cc: Tejun Heo <[email protected]> Cc: Dennis Zhou <[email protected]> Cc: Vladimir Davydov <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2019-10-07 | mm, memcg: make memory.emin the baseline for utilisation determination | Chris Down | 2 | -28/+46
Roman points out that when when we do the low reclaim pass, we scale the reclaim pressure relative to position between 0 and the maximum protection threshold. However, if the maximum protection is based on memory.elow, and memory.emin is above zero, this means we still may get binary behaviour on second-pass low reclaim. This is because we scale starting at 0, not starting at memory.emin, and since we don't scan at all below emin, we end up with cliff behaviour. This should be a fairly uncommon case since usually we don't go into the second pass, but it makes sense to scale our low reclaim pressure starting at emin. You can test this by catting two large sparse files, one in a cgroup with emin set to some moderate size compared to physical RAM, and another cgroup without any emin. In both cgroups, set an elow larger than 50% of physical RAM. The one with emin will have less page scanning, as reclaim pressure is lower. Rebase on top of and apply the same idea as what was applied to handle cgroup_memory=disable properly for the original proportional patch http://lkml.kernel.org/r/[email protected] ("mm, memcg: Handle cgroup_disable=memory when getting memcg protection"). Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Chris Down <[email protected]> Suggested-by: Roman Gushchin <[email protected]> Acked-by: Johannes Weiner <[email protected]> Cc: Michal Hocko <[email protected]> Cc: Tejun Heo <[email protected]> Cc: Dennis Zhou <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2019-10-07 | mm, memcg: proportional memory.{low,min} reclaim | Chris Down | 4 | -12/+115
cgroup v2 introduces two memory protection thresholds: memory.low (best-effort) and memory.min (hard protection). While they generally do what they say on the tin, there is a limitation in their implementation that makes them difficult to use effectively: that cliff behaviour often manifests when they become eligible for reclaim. This patch implements more intuitive and usable behaviour, where we gradually mount more reclaim pressure as cgroups further and further exceed their protection thresholds.

This cliff edge behaviour happens because we only choose whether or not to reclaim based on whether the memcg is within its protection limits (see the use of mem_cgroup_protected in shrink_node), but we don't vary our reclaim behaviour based on this information. Imagine the following timeline, with the numbers the lruvec size in this zone:

	1. memory.low=1000000, memory.current=999999.  0 pages may be scanned.
	2. memory.low=1000000, memory.current=1000000. 0 pages may be scanned.
	3. memory.low=1000000, memory.current=1000001. 1000001* pages may be scanned. (?!)

* Of course, we won't usually scan all available pages in the zone even without this patch because of scan control priority, over-reclaim protection, etc. However, as shown by the tests at the end, these techniques don't sufficiently throttle such an extreme change in input, so cliff-like behaviour isn't really averted by their existence alone.

Here's an example of how this plays out in practice. At Facebook, we are trying to protect various workloads from "system" software, like configuration management tools, metric collectors, etc (see this[0] case study). In order to find a suitable memory.low value, we start by determining the expected memory range within which the workload will be comfortable operating. This isn't an exact science -- memory usage deemed "comfortable" will vary over time due to user behaviour, differences in composition of work, etc, etc. As such we need to ballpark memory.low, but doing this is currently problematic:

	1. If we end up setting it too low for the workload, it won't have *any* effect (see discussion above). The group will receive the full weight of reclaim and won't have any priority while competing with the less important system software, as if we had no memory.low configured at all.
	2. Because of this behaviour, we end up erring on the side of setting it too high, such that the comfort range is reliably covered. However, protected memory is completely unavailable to the rest of the system, so we might cause undue memory and IO pressure there when we *know* we have some elasticity in the workload.
	3. Even if we get the value totally right, smack in the middle of the comfort zone, we get extreme jumps between no pressure and full pressure that cause unpredictable pressure spikes in the workload due to the current binary reclaim behaviour.

With this patch, we can set it to our ballpark estimation without too much worry. Any undesirable behaviour, such as too much or too little reclaim pressure on the workload or system, will be proportional to how far our estimation is off. This means we can set memory.low much more conservatively and thus waste less resources *without* the risk of the workload falling off a cliff if we overshoot.

As a more abstract technical description, this unintuitive behaviour results in having to give high-priority workloads a large protection buffer on top of their expected usage to function reliably, as otherwise we have abrupt periods of dramatically increased memory pressure which hamper performance. Having to set these thresholds so high wastes resources and generally works against the principle of work conservation. In addition, having proportional memory reclaim behaviour has other benefits. Most notably, before this patch it's basically mandatory to set memory.low to a higher than desirable value because otherwise as soon as you exceed memory.low, all protection is lost, and all pages are eligible to scan again. By contrast, having a gradual ramp in reclaim pressure means that you now still get some protection when thresholds are exceeded, which means that one can now be more comfortable setting memory.low to lower values without worrying that all protection will be lost. This is important because workingset size is really hard to know exactly, especially with variable workloads, so at least getting *some* protection if your workingset size grows larger than you expect increases user confidence in setting memory.low without a huge buffer on top being needed.

Thanks a lot to Johannes Weiner and Tejun Heo for their advice and assistance in thinking about how to make this work better.

In testing these changes, I intended to verify that:

1. Changes in page scanning become gradual and proportional instead of binary. To test this, I experimented stepping further and further down memory.low protection on a workload that floats around 19G workingset when under memory.low protection, watching page scan rates for the workload cgroup:

	+------------+-----------------+--------------------+--------------+
	| memory.low | test (pgscan/s) | control (pgscan/s) | % of control |
	+------------+-----------------+--------------------+--------------+
	|        21G |               0 |                  0 |          N/A |
	|        17G |             867 |               3799 |          23% |
	|        12G |            1203 |               3543 |          34% |
	|         8G |            2534 |               3979 |          64% |
	|         4G |            3980 |               4147 |          96% |
	|          0 |            3799 |               3980 |          95% |
	+------------+-----------------+--------------------+--------------+

As you can see, the test kernel (with a kernel containing this patch) ramps up page scanning significantly more gradually than the control kernel (without this patch).

2. More gradual ramp up in reclaim aggression doesn't result in premature OOMs. To test this, I wrote a script that slowly increments the number of pages held by stress(1)'s --vm-keep mode until a production system entered severe overall memory contention. This script runs in a highly protected slice taking up the majority of available system memory. Watching vmstat revealed that page scanning continued essentially nominally between test and control, without causing forward reclaim progress to become arrested.

[0]: https://facebookmicrosites.github.io/cgroup2/docs/overview.html#case-study-the-fbtax2-project

[[email protected]: reflow block comments to fit in 80 cols] [[email protected]: handle cgroup_disable=memory when getting memcg protection] Link: http://lkml.kernel.org/r/[email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Chris Down <[email protected]> Acked-by: Johannes Weiner <[email protected]> Reviewed-by: Roman Gushchin <[email protected]> Cc: Michal Hocko <[email protected]> Cc: Tejun Heo <[email protected]> Cc: Dennis Zhou <[email protected]> Cc: Tetsuo Handa <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
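The proportionality at the heart of the patch can be sketched as follows (hedged: a simplification of the scan-target calculation, not the exact mm/vmscan.c code):

	/* Fraction of the lruvec that stays scannable: tends to 0 as usage
	 * falls toward the enforced protection, instead of snapping 0 -> 100%. */
	static unsigned long scannable(unsigned long lruvec_size,
				       unsigned long protection,
				       unsigned long cgroup_size)
	{
		if (!protection)
			return lruvec_size;		/* unprotected: full pressure */
		if (cgroup_size < protection)
			cgroup_size = protection;	/* avoid underflow below */
		return lruvec_size - lruvec_size * protection / cgroup_size;
	}

With memory.current at twice the protection, half the lruvec is scannable; at exactly the protection, none of it is.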
2019-10-07 | mm/vmpressure.c: fix a signedness bug in vmpressure_register_event() | Dan Carpenter | 1 | -9/+11
The "mode" and "level" variables are enums and in this context GCC will treat them as unsigned ints so the error handling is never triggered. I also removed the bogus initializer because it isn't required any more and it's sort of confusing. [[email protected]: reduce implicit and explicit typecasting] [[email protected]: fix return value, add comment, per Matthew] Link: http://lkml.kernel.org/r/20190925110449.GO3264@mwanda Fixes: 3cadfa2b9497 ("mm/vmpressure.c: convert to use match_string() helper") Signed-off-by: Dan Carpenter <[email protected]> Reviewed-by: Andy Shevchenko <[email protected]> Acked-by: David Rientjes <[email protected]> Reviewed-by: Matthew Wilcox <[email protected]> Cc: Greg Kroah-Hartman <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Enrico Weigelt <[email protected]> Cc: Kate Stewart <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2019-10-07 | mm/page_alloc.c: fix a crash in free_pages_prepare() | Qian Cai | 1 | -1/+7
On architectures like s390, arch_free_page() could mark the page unused (set_page_unused()) and any access later would trigger a kernel panic. Fix it by moving arch_free_page() after all possible accessing calls.

	Hardware name: IBM 2964 N96 400 (z/VM 6.4.0)
	Krnl PSW : 0404e00180000000 0000000026c2b96e (__free_pages_ok+0x34e/0x5d8)
	           R:0 T:1 IO:0 EX:0 Key:0 M:1 W:0 P:0 AS:3 CC:2 PM:0 RI:0 EA:3
	Krnl GPRS: 0000000088d43af7 0000000000484000 000000000000007c 000000000000000f
	           000003d080012100 000003d080013fc0 0000000000000000 0000000000100000
	           00000000275cca48 0000000000000100 0000000000000008 000003d080010000
	           00000000000001d0 000003d000000000 0000000026c2b78a 000000002717fdb0
	Krnl Code: 0000000026c2b95c: ec1100b30659 risbgn %r1,%r1,0,179,6
	           0000000026c2b962: e32014000036 pfd 2,1024(%r1)
	          #0000000026c2b968: d7ff10001000 xc 0(256,%r1),0(%r1)
	          >0000000026c2b96e: 41101100 la %r1,256(%r1)
	           0000000026c2b972: a737fff8 brctg %r3,26c2b962
	           0000000026c2b976: d7ff10001000 xc 0(256,%r1),0(%r1)
	           0000000026c2b97c: e31003400004 lg %r1,832
	           0000000026c2b982: ebff1430016a asi 5168(%r1),-1
	Call Trace:
	  __free_pages_ok+0x16a/0x5d8
	  memblock_free_all+0x206/0x290
	  mem_init+0x58/0x120
	  start_kernel+0x2b0/0x570
	  startup_continue+0x6a/0xc0
	INFO: lockdep is turned off.
	Last Breaking-Event-Address:
	  __free_pages_ok+0x372/0x5d8
	Kernel panic - not syncing: Fatal exception: panic_on_oops
	00: HCPGIR450W CP entered; disabled wait PSW 00020001 80000000 00000000 26A2379C

In the past, only kernel_poison_pages() would trigger this but it needs "page_poison=on" kernel cmdline, and I suspect nobody tested that on s390. Recently, kernel_init_free_pages() (commit 6471384af2a6 ("mm: security: introduce init_on_alloc=1 and init_on_free=1 boot options")) was added and could trigger this as well. [[email protected]: add comment] Link: http://lkml.kernel.org/r/[email protected] Fixes: 8823b1dbc05f ("mm/page_poison.c: enable PAGE_POISONING as a separate option") Fixes: 6471384af2a6 ("mm: security: introduce init_on_alloc=1 and init_on_free=1 boot options") Signed-off-by: Qian Cai <[email protected]> Reviewed-by: Heiko Carstens <[email protected]> Acked-by: Christian Borntraeger <[email protected]> Acked-by: Michal Hocko <[email protected]> Cc: "Kirill A. Shutemov" <[email protected]> Cc: Vasily Gorbik <[email protected]> Cc: Alexander Duyck <[email protected]> Cc: <[email protected]> [5.3+] Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
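The ordering change, in hedged outline (free_pages_prepare() details elided; the point is only that arch_free_page() now runs after every access to the page's memory):

	/* all accesses to the page's memory happen first ... */
	kernel_poison_pages(page, 1 << order, 0);
	kernel_init_free_pages(page, 1 << order);
	/* ... and only then may the arch (e.g. s390) mark it unused,
	 * after which any access would fault */
	arch_free_page(page, order);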
2019-10-07 | mm/z3fold.c: claim page in the beginning of free | Vitaly Wool | 1 | -2/+8
There's a really hard to reproduce race in z3fold between z3fold_free() and z3fold_reclaim_page(). z3fold_reclaim_page() can claim the page after z3fold_free() has checked if the page was claimed, and z3fold_free() will then schedule this page for compaction, which may in turn lead to random page faults (since that page would have been reclaimed by then). Fix that by claiming the page at the beginning of z3fold_free() and not forgetting to clear the claim at the end. [[email protected]: v2] Link: http://lkml.kernel.org/r/20190928113456.152742cf@bigdell Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Vitaly Wool <[email protected]> Reported-by: Markus Linnala <[email protected]> Cc: Dan Streetman <[email protected]> Cc: Vlastimil Babka <[email protected]> Cc: Henry Burns <[email protected]> Cc: Shakeel Butt <[email protected]> Cc: Markus Linnala <[email protected]> Cc: <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>