2017-07-07  cfg80211: Validate frequencies nested in NL80211_ATTR_SCAN_FREQUENCIES  (Srinivas Dasari; 1 file, -0/+4)
validate_scan_freqs() retrieves frequencies from attributes nested in the attribute NL80211_ATTR_SCAN_FREQUENCIES with nla_get_u32(), which reads 4 bytes from each attribute without validating the size of data received. Attributes nested in NL80211_ATTR_SCAN_FREQUENCIES don't have an nla policy. Validate size of each attribute before parsing to avoid potential buffer overread. Fixes: 2a519311926 ("cfg80211/nl80211: scanning (and mac80211 update to use it)") Cc: [email protected] Signed-off-by: Srinivas Dasari <[email protected]> Signed-off-by: Jouni Malinen <[email protected]> Signed-off-by: Johannes Berg <[email protected]>
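For illustration, a minimal sketch of the kind of length check the fix adds before each nla_get_u32() call; the surrounding loop and variable names are assumptions based on the description, not quotes from the patch:

  struct nlattr *attr;
  int tmp, n_channels = 0;

  nla_for_each_nested(attr, tb[NL80211_ATTR_SCAN_FREQUENCIES], tmp) {
          if (nla_len(attr) != sizeof(u32))
                  return 0;        /* undersized attribute: refuse to parse */
          n_channels++;            /* now safe to nla_get_u32(attr) */
  }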
2017-07-07  cfg80211: Define nla_policy for NL80211_ATTR_LOCAL_MESH_POWER_MODE  (Srinivas Dasari; 1 file, -0/+1)
Buffer overread may happen as nl80211_set_station() reads 4 bytes from the attribute NL80211_ATTR_LOCAL_MESH_POWER_MODE without validating the size of data received when userspace sends less than 4 bytes of data with NL80211_ATTR_LOCAL_MESH_POWER_MODE. Define nla_policy for NL80211_ATTR_LOCAL_MESH_POWER_MODE to avoid the buffer overread. Fixes: 3b1c5a5307f ("{cfg,nl}80211: mesh power mode primitives and userspace access") Cc: [email protected] Signed-off-by: Srinivas Dasari <[email protected]> Signed-off-by: Jouni Malinen <[email protected]> Signed-off-by: Johannes Berg <[email protected]>
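A minimal sketch of such a policy entry, assuming the usual nl80211_policy table layout (the entry below reflects the description, not the literal patch):

  static const struct nla_policy nl80211_policy[NUM_NL80211_ATTR] = {
          /* ... other attributes ... */
          [NL80211_ATTR_LOCAL_MESH_POWER_MODE] = { .type = NLA_U32 },
  };

With an NLA_U32 policy in place, nla_parse() rejects attributes carrying fewer than four bytes before nl80211_set_station() ever reads them.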
2017-07-07  cfg80211: Check if NAN service ID is of expected size  (Srinivas Dasari; 1 file, -1/+1)
nla policy checks only the maximum length of the attribute data when the attribute type is NLA_BINARY. If userspace sends less data than specified, cfg80211 may access illegal memory. When the type is NLA_UNSPEC, the nla policy check ensures that userspace sends at least the specified number of bytes. Remove the NLA_BINARY type assignment from the nla_policy entry for NL80211_NAN_FUNC_SERVICE_ID to make it NLA_UNSPEC and to make sure at least NL80211_NAN_FUNC_SERVICE_ID_LEN bytes are received from userspace with NL80211_NAN_FUNC_SERVICE_ID. Fixes: a442b761b24 ("cfg80211: add add_nan_func / del_nan_func") Cc: [email protected] Signed-off-by: Srinivas Dasari <[email protected]> Signed-off-by: Jouni Malinen <[email protected]> Signed-off-by: Johannes Berg <[email protected]>
2017-07-07  cfg80211: Check if PMKID attribute is of expected size  (Srinivas Dasari; 1 file, -2/+1)
nla policy checks only the maximum length of the attribute data when the attribute type is NLA_BINARY. If userspace sends less data than specified, the wireless drivers may access illegal memory. When the type is NLA_UNSPEC, the nla policy check ensures that userspace sends at least the specified number of bytes. Remove the NLA_BINARY type assignment from the nla_policy entry for NL80211_ATTR_PMKID to make it NLA_UNSPEC and to make sure at least WLAN_PMKID_LEN bytes are received from userspace with NL80211_ATTR_PMKID. Fixes: 67fbb16be69d ("nl80211: PMKSA caching support") Cc: [email protected] Signed-off-by: Srinivas Dasari <[email protected]> Signed-off-by: Jouni Malinen <[email protected]> Signed-off-by: Johannes Berg <[email protected]>
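Both this fix and the NAN service ID one boil down to the same policy pattern: drop .type = NLA_BINARY (which only caps the maximum size) and keep just .len, so the attribute becomes NLA_UNSPEC and .len is enforced as a minimum. A hedged sketch of such an entry:

  /* NLA_UNSPEC (no explicit .type): nla_parse() now requires at least .len bytes */
  [NL80211_ATTR_PMKID] = { .len = WLAN_PMKID_LEN },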
2017-07-07  iov_iter: saner checks on copyin/copyout  (Al Viro; 1 file, -16/+39)
* might_fault() is better checked in the caller (and e.g. the fault-in + kmap_atomic codepath also needs might_fault() coverage) * we have already done object size checks * we have *NOT* done access_ok() recently enough; we rely upon the iovec array having passed sanity checks back when it had been created and nothing having buggered it since. However, that's very much non-local, so we'd better recheck that. So the thing we want does not match anything in uaccess - we need access_ok + kasan checks + raw copy without any zeroing. Just define such helpers and use them here. Signed-off-by: Al Viro <[email protected]>
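The helpers described above would look roughly like the sketch below; it assumes the 4.12-era uaccess API (access_ok() still taking a VERIFY_* argument) and the obvious name copyin, so treat it as a shape rather than the literal patch:

  static int copyin(void *to, const void __user *from, size_t n)
  {
          if (access_ok(VERIFY_READ, from, n)) {          /* recheck the user range */
                  kasan_check_write(to, n);               /* KASAN object check */
                  n = raw_copy_from_user(to, from, n);    /* raw copy, no zeroing */
          }
          return n;                                       /* bytes NOT copied */
  }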
2017-07-07  arcnet: com20020-pci: Fix an error handling path in 'com20020pci_probe()'  (Christophe Jaillet; 1 file, -2/+4)
If this memory allocation fails, we should go through the error handling path as done everywhere else in this function before returning. Signed-off-by: Christophe JAILLET <[email protected]> Signed-off-by: David S. Miller <[email protected]>
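Schematically, the fix follows the usual probe-function unwind pattern; the allocation and label below are placeholders rather than the actual identifiers in com20020pci_probe():

  card = devm_kzalloc(&pdev->dev, sizeof(*card), GFP_KERNEL);
  if (!card) {
          ret = -ENOMEM;
          goto out_undo;   /* unwind earlier setup, like every other error path here */
  }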
2017-07-07  nfp: flower: add missing clean up call to avoid memory leaks  (Jakub Kicinski; 1 file, -0/+1)
nfp_flower_metadata_cleanup() is defined but never invoked; not calling it causes us to leak mask and statistics queue memory on the host. Fixes: 43f84b72c50d ("nfp: add metadata to each flow offload") Signed-off-by: Jakub Kicinski <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2017-07-07  genirq/debugfs: Remove redundant NULL pointer check  (Thomas Gleixner; 1 file, -2/+1)
debugfs_remove() can be called with a NULL pointer. Fixes: 087cdfb662ae5 ("genirq/debugfs: Add proper debugfs interface") Reported-by: Fengguang Wu <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]>
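In other words, guards like the one below are unnecessary (the pointer name is illustrative):

  /* before: redundant check */
  if (irq_dir)
          debugfs_remove(irq_dir);

  /* after: debugfs_remove() ignores NULL/IS_ERR pointers */
  debugfs_remove(irq_dir);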
2017-07-06  Merge branch 'akpm' (patches from Andrew)  (Linus Torvalds; 131 files, -1723/+3093)
Merge misc updates from Andrew Morton: - a few hotfixes - various misc updates - ocfs2 updates - most of MM * emailed patches from Andrew Morton <[email protected]>: (108 commits) mm, memory_hotplug: move movable_node to the hotplug proper mm, memory_hotplug: drop CONFIG_MOVABLE_NODE mm, memory_hotplug: drop artificial restriction on online/offline mm: memcontrol: account slab stats per lruvec mm: memcontrol: per-lruvec stats infrastructure mm: memcontrol: use generic mod_memcg_page_state for kmem pages mm: memcontrol: use the node-native slab memory counters mm: vmstat: move slab statistics from zone to node counters mm/zswap.c: delete an error message for a failed memory allocation in zswap_dstmem_prepare() mm/zswap.c: improve a size determination in zswap_frontswap_init() mm/zswap.c: delete an error message for a failed memory allocation in zswap_pool_create() mm/swapfile.c: sort swap entries before free mm/oom_kill: count global and memory cgroup oom kills mm: per-cgroup memory reclaim stats mm: kmemleak: treat vm_struct as alternative reference to vmalloc'ed objects mm: kmemleak: factor object reference updating out of scan_block() mm: kmemleak: slightly reduce the size of some structures on 64-bit architectures mm, mempolicy: don't check cpuset seqlock where it doesn't matter mm, cpuset: always use seqlock when changing task's nodemask mm, mempolicy: simplify rebinding mempolicies when updating cpusets ...
2017-07-06  Merge branch 'uaccess.strlen' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs  (Linus Torvalds; 40 files, -601/+5)
Pull user access str* updates from Al Viro: "uaccess str...() dead code removal" * 'uaccess.strlen' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs: s390 keyboard.c: don't open-code strndup_user() mips: get rid of unused __strnlen_user() get rid of unused __strncpy_from_user() instances kill strlen_user()
2017-07-06  Merge branch 'work.probe_kernel_read' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs  (Linus Torvalds; 3 files, -25/+4)
Pull probe_kernel_read() uses from Al Viro: "Several open-coded probe_kernel_read()..." * 'work.probe_kernel_read' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs: dio: use probe_kernel_read() hp_sdc: use probe_kernel_read() hpfb: use probe_kernel_read()
2017-07-06  Merge branch 'misc.alpha' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs  (Linus Torvalds; 1 file, -41/+40)
Pull alpha user access updates from Al Viro: "Several alpha osf_sys.c uaccess cleanups - getdomainname() had insane byte-by-byte copying of string to userland (instead of strnlen + copy_to_user) plus yet another compat variant of timeval/itimerval with associated copyin/copyout primitives" * 'misc.alpha' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs: osf_sigstack(): switch to put_user() osf_sys.c: switch handling of timeval32/itimerval32 to copy_{to,from}_user() osf_getdomainname(): use copy_to_user()
2017-07-06  Merge branch 'misc.compat' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs  (Linus Torvalds; 23 files, -1137/+1019)
Pull misc compat stuff updates from Al Viro: "This part is basically untangling various compat stuff. Compat syscalls moved to their native counterparts, getting rid of quite a bit of double-copying and/or set_fs() uses. A lot of field-by-field copyin/copyout killed off. - kernel/compat.c is much closer to containing just the copyin/copyout of compat structs. Not all compat syscalls are gone from it yet, but it's getting there. - ipc/compat_mq.c killed off completely. - block/compat_ioctl.c cleaned up; floppy compat ioctls moved to drivers/block/floppy.c where they belong. Yes, there are several drivers that implement some of the same ioctls. Some are m68k and one is 32bit-only pmac. drivers/block/floppy.c is the only one in that bunch that can be built on biarch" * 'misc.compat' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs: mqueue: move compat syscalls to native ones usbdevfs: get rid of field-by-field copyin compat_hdio_ioctl: get rid of set_fs() take floppy compat ioctls to sodding floppy.c ipmi: get rid of field-by-field __get_user() ipmi: get COMPAT_IPMICTL_RECEIVE_MSG in sync with the native one rt_sigtimedwait(): move compat to native select: switch compat_{get,put}_fd_set() to compat_{get,put}_bitmap() put_compat_rusage(): switch to copy_to_user() sigpending(): move compat to native getrlimit()/setrlimit(): move compat to native times(2): move compat to native compat_{get,put}_bitmap(): use unsafe_{get,put}_user() fb_get_fscreeninfo(): don't bother with do_fb_ioctl() do_sigaltstack(): lift copying to/from userland into callers take compat_sys_old_getrlimit() to native syscall trim __ARCH_WANT_SYS_OLD_GETRLIMIT
2017-07-06  Merge branch 'work.drm' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs  (Linus Torvalds; 12 files, -1045/+476)
Pull DRM compat ioctl handling updates from Al Viro: "This kills the double-copies in there and tons of field-by-field copyin/copyout. Several dead ioctls put to rest, while we are at it - the native counterparts had been gone for a decade, so we can bloody well fail early on the compat side. No point rearranging the 32bit structure into 64bit one (and back) only to be told "piss off, I don't know that ioctl" by the native code..." * 'work.drm' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs: (29 commits) Fix trivial misannotations mga: switch compat ioctls to drm_ioctl_kernel() radeon: take out dead compat ioctls drm compat: ia64 is not biarch drm_compat_ioctl(): tidy up a bit switch compat_drm_mapbufs() to drm_ioctl_kernel() switch compat_drm_rmmap() to drm_ioctl_kernel() switch compat_drm_mode_addfb2() to drm_ioctl_kernel() switch compat_drm_wait_vblank() to drm_ioctl_kernel() switch compat_drm_update_draw() compat_drm: switch sg ioctls compat_drm: switch AGP compat ioctls to drm_ioctl_kernel() switch compat_drm_dma() to drm_ioctl_kernel() switch compat_drm_resctx() to drm_ioctl_kernel() switch compat_drm_getsareactx() to drm_ioctl_kernel() switch compat_drm_setsareactx() to drm_ioctl_kernel() switch compat_drm_freebufs() to drm_ioctl_kernel() switch compat_drm_markbufs() to drm_ioctl_kernel() switch compat_drm_addmap() to drm_ioctl_kernel() switch compat_drm_getstats() to drm_ioctl_kernel() ...
2017-07-06  Merge tag 'trace-v4.13' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace  (Linus Torvalds; 17 files, -245/+797)
Pull tracing updates from Steven Rostedt: "The new features of this release: - Added TRACE_DEFINE_SIZEOF() which allows trace events that use sizeof() in the TP_printk() to be converted to the actual size such that trace-cmd and perf can parse them correctly. - Some rework of the TRACE_DEFINE_ENUM() such that the above TRACE_DEFINE_SIZEOF() could reuse the same code. - Recording of tgid (Thread Group ID). This is similar to how task COMMs are recorded (cached at sched_switch), where it is in a table and used on output of the trace and trace_pipe files. - Have ":mod:<module>" be cached when written into set_ftrace_filter. Then the functions of the module will be traced at module load. - Some random clean ups and small fixes" * tag 'trace-v4.13' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace: (26 commits) ftrace: Test for NULL iter->tr in regex for stack_trace_filter changes ftrace: Decrement count for dyn_ftrace_total_info for init functions ftrace: Unlock hash mutex on failed allocation in process_mod_list() tracing: Add support for display of tgid in trace output tracing: Add support for recording tgid of tasks ftrace: Decrement count for dyn_ftrace_total_info file ftrace: Remove unused function ftrace_arch_read_dyn_info() sh/ftrace: Remove only user of ftrace_arch_read_dyn_info() ftrace: Have cached module filters be an active filter ftrace: Implement cached modules tracing on module load ftrace: Have the cached module list show in set_ftrace_filter ftrace: Add :mod: caching infrastructure to trace_array tracing: Show address when function names are not found ftrace: Add missing comment for FTRACE_OPS_FL_RCU tracing: Rename update the enum_map file tracing: Add TRACE_DEFINE_SIZEOF() macros tracing: define TRACE_DEFINE_SIZEOF() macro to map sizeof's to their values tracing: Rename enum_replace to eval_replace trace: rename enum_map functions trace: rename trace.c enum functions ...
2017-07-06  Merge tag 'dma-mapping-4.13' of git://git.infradead.org/users/hch/dma-mapping  (Linus Torvalds; 75 files, -725/+778)
Pull dma-mapping infrastructure from Christoph Hellwig: "This is the first pull request for the new dma-mapping subsystem In this new subsystem we'll try to properly maintain all the generic code related to dma-mapping, and will further consolidate arch code into common helpers. This pull request contains: - removal of the DMA_ERROR_CODE macro, replacing it with calls to ->mapping_error so that the dma_map_ops instances are more self contained and can be shared across architectures (me) - removal of the ->set_dma_mask method, which duplicates the ->dma_capable one in terms of functionality, but requires more duplicate code. - various updates for the coherent dma pool and related arm code (Vladimir) - various smaller cleanups (me)" * tag 'dma-mapping-4.13' of git://git.infradead.org/users/hch/dma-mapping: (56 commits) ARM: dma-mapping: Remove traces of NOMMU code ARM: NOMMU: Set ARM_DMA_MEM_BUFFERABLE for M-class cpus ARM: NOMMU: Introduce dma operations for noMMU drivers: dma-mapping: allow dma_common_mmap() for NOMMU drivers: dma-coherent: Introduce default DMA pool drivers: dma-coherent: Account dma_pfn_offset when used with device tree dma: Take into account dma_pfn_offset dma-mapping: replace dmam_alloc_noncoherent with dmam_alloc_attrs dma-mapping: remove dmam_free_noncoherent crypto: qat - avoid an uninitialized variable warning au1100fb: remove a bogus dma_free_nonconsistent call MAINTAINERS: add entry for dma mapping helpers powerpc: merge __dma_set_mask into dma_set_mask dma-mapping: remove the set_dma_mask method powerpc/cell: use the dma_supported method for ops switching powerpc/cell: clean up fixed mapping dma_ops initialization tile: remove dma_supported and mapping_error methods xen-swiotlb: remove xen_swiotlb_set_dma_mask arm: implement ->dma_supported instead of ->set_dma_mask mips/loongson64: implement ->dma_supported instead of ->set_dma_mask ...
2017-07-06  Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux  (Linus Torvalds; 12 files, -228/+151)
Pull more s390 updates from Martin Schwidefsky: - The fixup for the blk-mq clash with the scm driver - An improvement for the dasd driver in regard to raw I/O - Bug fixes and cleanup * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux: Update my email address s390/syscalls: Fix out of bounds arguments access s390/vfio_ccw: remove unused variable s390/dasd: remove unneeded code s390/crash: Remove unused KEXEC_NOTE_BYTES s390/zcrypt: Fix missing newlines at some debug feature messages. s390/dasd: Make raw I/O usable without prefix support s390/dasd: Rename dasd_raw_build_cp() s390/dasd: Refactor prefix_LRE() and related functions s390: fix up for "blk-mq: switch ->queue_rq return value to blk_status_t"
2017-07-06  Merge tag 'for-linus-4.13-rc1-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip  (Linus Torvalds; 25 files, -167/+542)
Pull xen updates from Juergen Gross: "Other than fixes and cleanups it contains: - support > 32 VCPUs at domain restore - support for new sysfs nodes related to Xen - some performance tuning for Linux running as Xen guest" * tag 'for-linus-4.13-rc1-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip: x86/xen: allow userspace access during hypercalls x86: xen: remove unnecessary variable in xen_foreach_remap_area() xen: allocate page for shared info page from low memory xen: avoid deadlock in xenbus driver xen: add sysfs node for hypervisor build id xen: sync include/xen/interface/version.h xen: add sysfs node for guest type doc,xen: document hypervisor sysfs nodes for xen xen/vcpu: Handle xen_vcpu_setup() failure at boot xen/vcpu: Handle xen_vcpu_setup() failure in hotplug xen/pv: Fix OOPS on restore for a PV, !SMP domain xen/pvh*: Support > 32 VCPUs at domain restore xen/vcpu: Simplify xen_vcpu related code xen-evtchn: Bind dyn evtchn:qemu-dm interrupt to next online VCPU xen: avoid type warning in xchg_xen_ulong xen: fix HYPERVISOR_dm_op() prototype xen: don't print error message in case of missing Xenstore entry arm/xen: Adjust one function call together with a variable assignment arm/xen: Delete an error message for a failed memory allocation in __set_phys_to_machine_multi() arm/xen: Improve a size determination in __set_phys_to_machine_multi()
2017-07-06  Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm  (Linus Torvalds; 95 files, -967/+4250)
Pull KVM updates from Paolo Bonzini: "PPC: - Better machine check handling for HV KVM - Ability to support guests with threads=2, 4 or 8 on POWER9 - Fix for a race that could cause delayed recognition of signals - Fix for a bug where POWER9 guests could sleep with interrupts pending. ARM: - VCPU request overhaul - allow timer and PMU to have their interrupt number selected from userspace - workaround for Cavium erratum 30115 - handling of memory poisonning - the usual crop of fixes and cleanups s390: - initial machine check forwarding - migration support for the CMMA page hinting information - cleanups and fixes x86: - nested VMX bugfixes and improvements - more reliable NMI window detection on AMD - APIC timer optimizations Generic: - VCPU request overhaul + documentation of common code patterns - kvm_stat improvements" * tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (124 commits) Update my email address kvm: vmx: allow host to access guest MSR_IA32_BNDCFGS x86: kvm: mmu: use ept a/d in vmcs02 iff used in vmcs12 kvm: x86: mmu: allow A/D bits to be disabled in an mmu x86: kvm: mmu: make spte mmio mask more explicit x86: kvm: mmu: dead code thanks to access tracking KVM: PPC: Book3S: Fix typo in XICS-on-XIVE state saving code KVM: PPC: Book3S HV: Close race with testing for signals on guest entry KVM: PPC: Book3S HV: Simplify dynamic micro-threading code KVM: x86: remove ignored type attribute KVM: LAPIC: Fix lapic timer injection delay KVM: lapic: reorganize restart_apic_timer KVM: lapic: reorganize start_hv_timer kvm: nVMX: Check memory operand to INVVPID KVM: s390: Inject machine check into the nested guest KVM: s390: Inject machine check into the guest tools/kvm_stat: add new interactive command 'b' tools/kvm_stat: add new command line switch '-i' tools/kvm_stat: fix error on interactive command 'g' KVM: SVM: suppress unnecessary NMI singlestep on GIF=0 and nested exit ...
2017-07-07  IB/core: Fix static analysis warning in ib_policy_change_task  (Daniel Jurgens; 1 file, -1/+2)
ib_get_cached_subnet_prefix() can technically fail, but the only way it could fail is not possible given the loop conditions. Check the return value before using the variable sp to resolve a static analysis warning. -v1: - Fix check to !ret. Paul Moore Fixes: 8f408ab64be6 ("selinux lsm IB/core: Implement LSM notification system") Signed-off-by: Daniel Jurgens <[email protected]> Reported-by: Dan Carpenter <[email protected]> Signed-off-by: Paul Moore <[email protected]> Signed-off-by: James Morris <[email protected]>
2017-07-07  IB/core: Fix uninitialized variable use in check_qp_port_pkey_settings  (Daniel Jurgens; 1 file, -8/+12)
Check the return value from get_pkey_and_subnet_prefix to prevent using uninitialized variables. Fixes: d291f1a65232 ("IB/core: Enforce PKey security on QPs") Signed-off-by: Daniel Jurgens <[email protected]> Reported-by: Dan Carpenter <[email protected]> Signed-off-by: Paul Moore <[email protected]> Signed-off-by: James Morris <[email protected]>
2017-07-07  tpm: do not suspend/resume if power stays on  (Enric Balletbo i Serra; 3 files, -0/+7)
The suspend/resume behavior of the TPM can be controlled by setting "powered-while-suspended" in the DTS. This is useful for the cases when hardware does not power-off the TPM. Signed-off-by: Sonny Rao <[email protected]> Signed-off-by: Enric Balletbo i Serra <[email protected]> Reviewed-by: Jason Gunthorpe <[email protected]> Reviewed-by: Jarkko Sakkinen <[email protected]> Signed-off-by: Jarkko Sakkinen <[email protected]> Signed-off-by: James Morris <[email protected]>
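A sketch of how a driver can honour such a property; of_property_read_bool() is the standard DT helper, while the chip flag name here is an assumption for illustration:

  if (of_property_read_bool(dev->of_node, "powered-while-suspended"))
          chip->flags |= TPM_CHIP_FLAG_ALWAYS_POWERED;   /* skip save/restore across suspend */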
2017-07-07  tpm: use tpm2_pcr_read() in tpm2_do_selftest()  (Roberto Sassu; 1 file, -30/+1)
tpm2_do_selftest() performs a PCR read during the TPM initialization phase. This patch replaces the PCR read code with a call to tpm2_pcr_read(). Signed-off-by: Roberto Sassu <[email protected]> Reviewed-by: Jarkko Sakkinen <[email protected]> Tested-by: Jarkko Sakkinen <[email protected]> Signed-off-by: Jarkko Sakkinen <[email protected]> Signed-off-by: James Morris <[email protected]>
2017-07-07  tpm: use tpm_buf functions in tpm2_pcr_read()  (Roberto Sassu; 1 file, -30/+30)
tpm2_pcr_read() now builds the PCR read command buffer with tpm_buf functions. This solution is preferred to using a tpm2_cmd structure, as tpm_buf functions provide protection against buffer overflow. Signed-off-by: Roberto Sassu <[email protected]> Reviewed-by: Jarkko Sakkinen <[email protected]> Tested-by: Jarkko Sakkinen <[email protected]> Signed-off-by: Jarkko Sakkinen <[email protected]> Signed-off-by: James Morris <[email protected]>
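Roughly, building the PCR read command with the tpm_buf helpers looks like the sketch below; the TPM2 constants are ones already used by the driver, but the exact selector layout and the local pcr_select[] bitmap are assumptions:

  struct tpm_buf buf;
  int rc;

  rc = tpm_buf_init(&buf, TPM2_ST_NO_SESSIONS, TPM2_CC_PCR_READ);
  if (rc)
          return rc;
  tpm_buf_append_u32(&buf, 1);                    /* one PCR selection structure */
  tpm_buf_append_u16(&buf, TPM2_ALG_SHA1);        /* hash algorithm */
  tpm_buf_append_u8(&buf, TPM2_PCR_SELECT_MIN);   /* size of the select bitmap */
  tpm_buf_append(&buf, pcr_select, TPM2_PCR_SELECT_MIN);
  /* ... transmit the command and parse the response, then ... */
  tpm_buf_destroy(&buf);

The point is that the append helpers track the command length and bounds, instead of hand-filling a tpm2_cmd structure.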
2017-07-07  tpm_tis: make ilb_base_addr static  (Colin Ian King; 1 file, -1/+1)
The pointer ilb_base_addr does not need to be in global scope, so make it static. Cleans up sparse warning: "symbol 'ilb_base_addr' was not declared. Should it be static?" Signed-off-by: Colin Ian King <[email protected]> Reviewed-by: Jarkko Sakkinen <[email protected]> Signed-off-by: Jarkko Sakkinen <[email protected]> Signed-off-by: James Morris <[email protected]>
2017-07-07  tpm: consolidate the TPM startup code  (Jarkko Sakkinen; 3 files, -61/+44)
Consolidated all the "manual" TPM startup code to a single function in order to make code flows a bit cleaner and migrate to tpm_buf. Tested-by: Stefan Berger <[email protected]> Signed-off-by: Jarkko Sakkinen <[email protected]> Signed-off-by: James Morris <[email protected]>
2017-07-07  tpm: Enable CLKRUN protocol for Braswell systems  (Azhar Shaikh; 2 files, -0/+117)
To overcome a hardware limitation on Intel Braswell systems, disable CLKRUN protocol during TPM transactions and re-enable once the transaction is completed. Signed-off-by: Azhar Shaikh <[email protected]> Reviewed-by: Jarkko Sakkinen <[email protected]> Tested-by: Jarkko Sakkinen <[email protected]> Signed-off-by: Jarkko Sakkinen <[email protected]> Signed-off-by: James Morris <[email protected]>
2017-07-07  tpm/tpm_crb: fix priv->cmd_size initialisation  (Manuel Lauss; 1 file, -2/+3)
priv->cmd_size is never initialised if the cmd and rsp buffers reside at different addresses. Initialise it in the exit path of the function when rsp buffer has also been successfully allocated. Fixes: aa77ea0e43dc ("tpm/tpm_crb: cache cmd_size register value."). Signed-off-by: Manuel Lauss <[email protected]> Reviewed-by: Jarkko Sakkinen <[email protected]> Tested-by: Jarkko Sakkinen <[email protected]> Signed-off-by: Jarkko Sakkinen <[email protected]> Signed-off-by: James Morris <[email protected]>
2017-07-07  tpm: fix a kernel memory leak in tpm-sysfs.c  (Jarkko Sakkinen; 1 file, -1/+2)
While cleaning up the sysfs callback that prints the EK, we discovered a kernel memory leak. This commit fixes the issue by zeroing the buffer used for the TPM command/response. The leak happens when we use either tpm_vtpm_proxy, tpm_ibmvtpm or xen-tpmfront. Cc: [email protected] Fixes: 0883743825e3 ("TPM: sysfs functions consolidation") Reported-by: Jason Gunthorpe <[email protected]> Tested-by: Stefan Berger <[email protected]> Signed-off-by: Jarkko Sakkinen <[email protected]> Signed-off-by: James Morris <[email protected]>
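The shape of the fix is simply to clear the command/response buffer before use, so stale memory can never be copied back to userspace; the struct name below is illustrative:

  struct tpm_cmd_t tpm_cmd;

  memset(&tpm_cmd, 0, sizeof(tpm_cmd));   /* no stale bytes can leak via the response */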
2017-07-07  tpm: Issue a TPM2_Shutdown for TPM2 devices.  (Josh Zimmerman; 2 files, -0/+37)
If a TPM2 loses power without a TPM2_Shutdown command being issued (a "disorderly reboot"), it may lose some state that has yet to be persisted to NVRam, and will increment the DA counter. After the DA counter gets sufficiently large, the TPM will lock the user out. NOTE: This only changes behavior on TPM2 devices. Since TPM1 uses sysfs, and sysfs relies on implicit locking on chip->ops, it is not safe to allow this code to run in TPM1, or to add sysfs support to TPM2, until that locking is made explicit. Signed-off-by: Josh Zimmerman <[email protected]> Cc: [email protected] Fixes: 74d6b3ceaa17 ("tpm: fix suspend/resume paths for TPM 2.0") Reviewed-by: Jarkko Sakkinen <[email protected]> Tested-by: Jarkko Sakkinen <[email protected]> Signed-off-by: Jarkko Sakkinen <[email protected]> Signed-off-by: James Morris <[email protected]>
2017-07-07Add "shutdown" to "struct class".Josh Zimmerman2-1/+7
The TPM class has some common shutdown code that must be executed for all drivers. This adds some needed functionality for that. Signed-off-by: Josh Zimmerman <[email protected]> Acked-by: Greg Kroah-Hartman <[email protected]> Cc: [email protected] Fixes: 74d6b3ceaa17 ("tpm: fix suspend/resume paths for TPM 2.0") Reviewed-by: Jarkko Sakkinen <[email protected]> Tested-by: Jarkko Sakkinen <[email protected]> Signed-off-by: Jarkko Sakkinen <[email protected]> Signed-off-by: James Morris <[email protected]>
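A hedged sketch of how the TPM class could use the new callback; the locking the real driver needs is omitted, and the static class definition is illustrative rather than how the driver actually registers its class:

  static void tpm_class_shutdown(struct device *dev)
  {
          struct tpm_chip *chip = container_of(dev, struct tpm_chip, dev);

          if (chip->flags & TPM_CHIP_FLAG_TPM2)
                  tpm2_shutdown(chip, TPM2_SU_CLEAR);   /* orderly TPM2_Shutdown */
  }

  struct class tpm_class = {
          .name     = "tpm",
          .shutdown = tpm_class_shutdown,               /* new struct class callback */
  };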
2017-07-06  mm, memory_hotplug: move movable_node to the hotplug proper  (Michal Hocko; 4 files, -8/+16)
movable_node_is_enabled is defined in memblock proper while it is initialized from the memory hotplug proper. This is quite messy and it makes a dependency between the two so move movable_node along with the helper functions to memory_hotplug. To make it more entertaining the kernel parameter is ignored unless CONFIG_HAVE_MEMBLOCK_NODE_MAP=y because we do not have the node information for each memblock otherwise. So let's warn when the option is disabled. Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Michal Hocko <[email protected]> Acked-by: Reza Arbab <[email protected]> Acked-by: Vlastimil Babka <[email protected]> Cc: Mel Gorman <[email protected]> Cc: Andrea Arcangeli <[email protected]> Cc: Jerome Glisse <[email protected]> Cc: Yasuaki Ishimatsu <[email protected]> Cc: Xishi Qiu <[email protected]> Cc: Kani Toshimitsu <[email protected]> Cc: Chen Yucong <[email protected]> Cc: Joonsoo Kim <[email protected]> Cc: Andi Kleen <[email protected]> Cc: David Rientjes <[email protected]> Cc: Daniel Kiper <[email protected]> Cc: Igor Mammedov <[email protected]> Cc: Vitaly Kuznetsov <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2017-07-06  mm, memory_hotplug: drop CONFIG_MOVABLE_NODE  (Michal Hocko; 8 files, -62/+5)
Commit 20b2f52b73fe ("numa: add CONFIG_MOVABLE_NODE for movable-dedicated node") has introduced CONFIG_MOVABLE_NODE without a good explanation on why it is actually useful. It makes a lot of sense to make movable node semantic opt in but we already have that because the feature has to be explicitly enabled on the kernel command line. A config option on top only makes the configuration space larger without a good reason. It also adds an additional ifdefery that pollutes the code. Just drop the config option and make it de-facto always enabled. This shouldn't introduce any change to the semantic. Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Michal Hocko <[email protected]> Acked-by: Reza Arbab <[email protected]> Acked-by: Vlastimil Babka <[email protected]> Cc: Mel Gorman <[email protected]> Cc: Andrea Arcangeli <[email protected]> Cc: Jerome Glisse <[email protected]> Cc: Yasuaki Ishimatsu <[email protected]> Cc: Xishi Qiu <[email protected]> Cc: Kani Toshimitsu <[email protected]> Cc: Chen Yucong <[email protected]> Cc: Joonsoo Kim <[email protected]> Cc: Andi Kleen <[email protected]> Cc: David Rientjes <[email protected]> Cc: Daniel Kiper <[email protected]> Cc: Igor Mammedov <[email protected]> Cc: Vitaly Kuznetsov <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2017-07-06  mm, memory_hotplug: drop artificial restriction on online/offline  (Michal Hocko; 1 file, -58/+0)
Patch series "remove CONFIG_MOVABLE_NODE". I am continuing to clean up the memory hotplug code and CONFIG_MOVABLE_NODE seems dubious at best. The following two patches simply removes the flag and make it de-facto always enabled. The current semantic of the config option is twofold 1) it automatically binds hotplugable nodes to have memory in zone_movable by default when movable_node is enabled 2) forbids memory hotplug to online all the memory as movable when !CONFIG_MOVABLE_NODE. The later restriction is quite dubious because there is no clear cut of how much normal memory do we need for a reasonable system operation. A single memory block which is sufficient to allow further movable onlines is far from sufficient (e.g a node with >2GB and memblocks 128MB will fill up this zone with struct pages leaving nothing for other allocations). Removing the config option will not only reduce the configuration space it also removes quite some code. The semantic of the movable_node command line parameter is preserved. The first patch removes the restriction mentioned above and the second one simply removes all the CONFIG_MOVABLE_NODE related stuff. The last patch moves movable_node flag handling to memory_hotplug proper where it belongs. [1] http://lkml.kernel.org/r/[email protected] This patch (of 3): Commit 74d42d8fe146 ("memory_hotplug: ensure every online node has NORMAL memory") has introduced a restriction that every numa node has to have at least some memory in !movable zones before a first movable memory can be onlined if !CONFIG_MOVABLE_NODE. Likewise can_offline_normal checks the amount of normal memory in !movable zones and it disallows to offline memory if there is no normal memory left with a justification that "memory-management acts bad when we have nodes which is online but don't have any normal memory". While it is true that not having _any_ memory for kernel allocations on a NUMA node is far from great and such a node would be quite subotimal because all kernel allocations will have to fallback to another NUMA node but there is no reason to disallow such a configuration in principle. Besides that there is not really a big difference to have one memblock for ZONE_NORMAL available or none. With 128MB size memblocks the system might trash on the kernel allocations requests anyway. It is really hard to draw a line on how much normal memory is really sufficient so we have to rely on administrator to configure system sanely therefore drop the artificial restriction and remove can_offline_normal and can_online_high_movable altogether. Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Michal Hocko <[email protected]> Acked-by: Vlastimil Babka <[email protected]> Acked-by: Reza Arbab <[email protected]> Cc: Mel Gorman <[email protected]> Cc: Andrea Arcangeli <[email protected]> Cc: Jerome Glisse <[email protected]> Cc: Yasuaki Ishimatsu <[email protected]> Cc: Xishi Qiu <[email protected]> Cc: Kani Toshimitsu <[email protected]> Cc: Chen Yucong <[email protected]> Cc: Joonsoo Kim <[email protected]> Cc: Andi Kleen <[email protected]> Cc: David Rientjes <[email protected]> Cc: Daniel Kiper <[email protected]> Cc: Igor Mammedov <[email protected]> Cc: Vitaly Kuznetsov <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2017-07-06  mm: memcontrol: account slab stats per lruvec  (Johannes Weiner; 3 files, -27/+7)
Josef's redesign of the balancing between slab caches and the page cache requires slab cache statistics at the lruvec level. Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Johannes Weiner <[email protected]> Acked-by: Vladimir Davydov <[email protected]> Cc: Josef Bacik <[email protected]> Cc: Michal Hocko <[email protected]> Cc: Rik van Riel <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2017-07-06  mm: memcontrol: per-lruvec stats infrastructure  (Johannes Weiner; 9 files, -55/+238)
lruvecs are at the intersection of the NUMA node and memcg, which is the scope for most paging activity. Introduce a convenient accounting infrastructure that maintains statistics per node, per memcg, and the lruvec itself. Then convert over accounting sites for statistics that are already tracked in both nodes and memcgs and can be easily switched. [[email protected]: fix crash in the new cgroup stat keeping code] Link: http://lkml.kernel.org/r/[email protected] [[email protected]: don't track uncharged pages at all] Link: http://lkml.kernel.org/r/[email protected] [[email protected]: add missing free_percpu()] Link: http://lkml.kernel.org/r/[email protected] [[email protected]: hexagon: fix build error caused by include file order] Link: http://lkml.kernel.org/r/[email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Johannes Weiner <[email protected]> Signed-off-by: Guenter Roeck <[email protected]> Acked-by: Vladimir Davydov <[email protected]> Cc: Josef Bacik <[email protected]> Cc: Michal Hocko <[email protected]> Cc: Rik van Riel <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2017-07-06  mm: memcontrol: use generic mod_memcg_page_state for kmem pages  (Johannes Weiner; 3 files, -29/+12)
The kmem-specific functions do the same thing. Switch and drop. Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Johannes Weiner <[email protected]> Acked-by: Vladimir Davydov <[email protected]> Cc: Josef Bacik <[email protected]> Cc: Michal Hocko <[email protected]> Cc: Rik van Riel <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2017-07-06  mm: memcontrol: use the node-native slab memory counters  (Johannes Weiner; 3 files, -8/+6)
Now that the slab counters are moved from the zone to the node level we can drop the private memcg node stats and use the official ones. Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Johannes Weiner <[email protected]> Acked-by: Vladimir Davydov <[email protected]> Cc: Josef Bacik <[email protected]> Cc: Michal Hocko <[email protected]> Cc: Rik van Riel <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2017-07-06  mm: vmstat: move slab statistics from zone to node counters  (Johannes Weiner; 7 files, -20/+19)
Patch series "mm: per-lruvec slab stats" Josef is working on a new approach to balancing slab caches and the page cache. For this to work, he needs slab cache statistics on the lruvec level. These patches implement that by adding infrastructure that allows updating and reading generic VM stat items per lruvec, then switches some existing VM accounting sites, including the slab accounting ones, to this new cgroup-aware API. I'll follow up with more patches on this, because there is actually substantial simplification that can be done to the memory controller when we replace private memcg accounting with making the existing VM accounting sites cgroup-aware. But this is enough for Josef to base his slab reclaim work on, so here goes. This patch (of 5): To re-implement slab cache vs. page cache balancing, we'll need the slab counters at the lruvec level, which, ever since lru reclaim was moved from the zone to the node, is the intersection of the node, not the zone, and the memcg. We could retain the per-zone counters for when the page allocator dumps its memory information on failures, and have counters on both levels - which on all but NUMA node 0 is usually redundant. But let's keep it simple for now and just move them. If anybody complains we can restore the per-zone counters. [[email protected]: fix oops] Link: http://lkml.kernel.org/r/[email protected] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Johannes Weiner <[email protected]> Cc: Josef Bacik <[email protected]> Cc: Michal Hocko <[email protected]> Cc: Vladimir Davydov <[email protected]> Cc: Rik van Riel <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2017-07-06  mm/zswap.c: delete an error message for a failed memory allocation in zswap_dstmem_prepare()  (Markus Elfring; 1 file, -3/+2)
Omit an extra message for a memory allocation failure in this function. This issue was detected by using the Coccinelle software. Link: http://events.linuxfoundation.org/sites/events/files/slides/LCJ16-Refactor_Strings-WSang_0.pdf Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Markus Elfring <[email protected]> Cc: Dan Streetman <[email protected]> Cc: Seth Jennings <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2017-07-06  mm/zswap.c: improve a size determination in zswap_frontswap_init()  (Markus Elfring; 1 file, -1/+1)
Replace the specification of a data structure by a pointer dereference as the parameter for the operator "sizeof" to make the corresponding size determination a bit safer according to the Linux coding style convention. Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Markus Elfring <[email protected]> Cc: Dan Streetman <[email protected]> Cc: Seth Jennings <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
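That is, the preferred idiom sizes the allocation from the pointer it initialises, for example (illustrative only):

  tree = kzalloc(sizeof(*tree), GFP_KERNEL);   /* instead of sizeof(struct zswap_tree) */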
2017-07-06  mm/zswap.c: delete an error message for a failed memory allocation in zswap_pool_create()  (Markus Elfring; 1 file, -3/+1)
Omit an extra message for a memory allocation failure in this function. This issue was detected by using the Coccinelle software. Link: http://events.linuxfoundation.org/sites/events/files/slides/LCJ16-Refactor_Strings-WSang_0.pdf Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Markus Elfring <[email protected]> Cc: Dan Streetman <[email protected]> Cc: Seth Jennings <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2017-07-06  mm/swapfile.c: sort swap entries before free  (Huang Ying; 1 file, -0/+16)
Reduce the lock contention on swap_info_struct->lock when freeing swap entries. The freed swap entries are first collected in a per-CPU buffer and really freed later in batch. During the batch freeing, if the consecutive swap entries in the per-CPU buffer belong to the same swap device, swap_info_struct->lock needs to be acquired/released only once, so that the lock contention can be reduced greatly. But if there are multiple swap devices, it is possible that the lock may be unnecessarily released/acquired because swap entries belonging to the same swap device are non-consecutive in the per-CPU buffer. To solve the issue, the per-CPU buffer is sorted according to the swap device before freeing the swap entries. With the patch, the memory (some swapped out) free time reduced 11.6% (from 2.65s to 2.35s) in the vm-scalability swap-w-rand test case with 16 processes. The test is done on a Xeon E5 v3 system. The swap device used is a RAM simulated PMEM (persistent memory) device. To test swapping, the test case creates 16 processes, which allocate and write to anonymous pages until the RAM and part of the swap device are used up; finally the memory (some swapped out) is freed before exit. [[email protected]: tweak comment] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Huang Ying <[email protected]> Acked-by: Tim Chen <[email protected]> Cc: Hugh Dickins <[email protected]> Cc: Shaohua Li <[email protected]> Cc: Minchan Kim <[email protected]> Cc: Rik van Riel <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
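Schematically, the free path sorts the cached entries by swap device so that entries of one device form a run handled under a single lock acquisition; a sketch using the kernel's sort() helper, with the buffer field names assumed:

  static int swp_entry_cmp(const void *ent1, const void *ent2)
  {
          const swp_entry_t *e1 = ent1, *e2 = ent2;

          return (int)swp_type(*e1) - (int)swp_type(*e2);   /* group by swap device */
  }

  /* called on the batched free path, before walking the per-CPU buffer: */
  sort(cache->slots, cache->nr, sizeof(swp_entry_t), swp_entry_cmp, NULL);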
2017-07-06  mm/oom_kill: count global and memory cgroup oom kills  (Konstantin Khlebnikov; 6 files, -5/+29)
Show the count of oom killer invocations in /proc/vmstat and the count of processes killed in a memory cgroup in the knob "memory.events" (in memory.oom_control for v1 cgroups). Also describe the difference between "oom" and "oom_kill" in the memory cgroup documentation. Currently oom in a memory cgroup kills tasks iff the shortage has happened inside a page fault. These counters help in monitoring oom kills - for now the only way is grepping for magic words in the kernel log. [[email protected]: fix for mem_cgroup_count_vm_event() rename] [[email protected]: fix comment, per Konstantin] Link: http://lkml.kernel.org/r/149570810989.203600.9492483715840752937.stgit@buzz Signed-off-by: Konstantin Khlebnikov <[email protected]> Cc: Michal Hocko <[email protected]> Cc: Tetsuo Handa <[email protected]> Cc: Roman Guschin <[email protected]> Cc: David Rientjes <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2017-07-06  mm: per-cgroup memory reclaim stats  (Roman Gushchin; 10 files, -17/+113)
Track the following reclaim counters for every memory cgroup: PGREFILL, PGSCAN, PGSTEAL, PGACTIVATE, PGDEACTIVATE, PGLAZYFREE and PGLAZYFREED. These values are exposed using the memory.stats interface of cgroup v2. The meaning of each value is the same as for global counters, available using /proc/vmstat. Also, for consistency, rename mem_cgroup_count_vm_event() to count_memcg_event_mm(). Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Roman Gushchin <[email protected]> Suggested-by: Johannes Weiner <[email protected]> Acked-by: Michal Hocko <[email protected]> Acked-by: Vladimir Davydov <[email protected]> Acked-by: Johannes Weiner <[email protected]> Cc: Tejun Heo <[email protected]> Cc: Li Zefan <[email protected]> Cc: Balbir Singh <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2017-07-06  mm: kmemleak: treat vm_struct as alternative reference to vmalloc'ed objects  (Catalin Marinas; 4 files, -10/+98)
Kmemleak requires that vmalloc'ed objects have a minimum reference count of 2: one in the corresponding vm_struct object and the other owned by the vmalloc() caller. There are cases, however, where the original vmalloc() returned pointer is lost and, instead, a pointer to vm_struct is stored (see free_thread_stack()). Kmemleak currently reports such objects as leaks. This patch adds support for treating any surplus references to an object as additional references to a specified object. It introduces the kmemleak_vmalloc() API function which takes a vm_struct pointer and sets its surplus reference passing to the actual vmalloc() returned pointer. The __vmalloc_node_range() calling site has been modified accordingly. Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]> Reported-by: "Luis R. Rodriguez" <[email protected]> Cc: Michal Hocko <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: "Luis R. Rodriguez" <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2017-07-06  mm: kmemleak: factor object reference updating out of scan_block()  (Catalin Marinas; 1 file, -18/+25)
scan_block() updates the number of references (pointers) to objects, adding them to the gray_list when object->min_count is reached. The patch factors out this functionality into a separate update_refs() function. Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]> Cc: Michal Hocko <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: "Luis R. Rodriguez" <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2017-07-06  mm: kmemleak: slightly reduce the size of some structures on 64-bit architectures  (Catalin Marinas; 1 file, -3/+3)
Change the kmemleak_object.flags type to unsigned int and move the early_log.min_count (int) near early_log.op_type (int) to slightly reduce the size of these structures on 64-bit architectures. Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Catalin Marinas <[email protected]> Cc: Michal Hocko <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: "Luis R. Rodriguez" <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2017-07-06  mm, mempolicy: don't check cpuset seqlock where it doesn't matter  (Vlastimil Babka; 1 file, -16/+0)
Two wrappers of __alloc_pages_nodemask() are checking task->mems_allowed_seq themselves to retry allocation that has raced with a cpuset update. This has been shown to be ineffective in preventing premature OOM's which can happen in __alloc_pages_slowpath() long before it returns back to the wrappers to detect the race at that level. Previous patches have made __alloc_pages_slowpath() more robust, so we can now simply remove the seqlock checking in the wrappers to prevent further wrong impression that it can actually help. Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Vlastimil Babka <[email protected]> Acked-by: Michal Hocko <[email protected]> Cc: "Kirill A. Shutemov" <[email protected]> Cc: Andrea Arcangeli <[email protected]> Cc: Anshuman Khandual <[email protected]> Cc: Christoph Lameter <[email protected]> Cc: David Rientjes <[email protected]> Cc: Dimitri Sivanich <[email protected]> Cc: Hugh Dickins <[email protected]> Cc: Li Zefan <[email protected]> Cc: Mel Gorman <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2017-07-06  mm, cpuset: always use seqlock when changing task's nodemask  (Vlastimil Babka; 1 file, -21/+8)
When updating task's mems_allowed and rebinding its mempolicy due to cpuset's mems being changed, we currently only take the seqlock for writing when either the task has a mempolicy, or the new mems has no intersection with the old mems. This should be enough to prevent a parallel allocation seeing no available nodes, but the optimization is IMHO unnecessary (cpuset updates should not be frequent), and we still potentially risk issues if the intersection of new and old nodes has limited amount of free/reclaimable memory. Let's just use the seqlock for all tasks. Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Vlastimil Babka <[email protected]> Acked-by: Michal Hocko <[email protected]> Cc: "Kirill A. Shutemov" <[email protected]> Cc: Andrea Arcangeli <[email protected]> Cc: Anshuman Khandual <[email protected]> Cc: Christoph Lameter <[email protected]> Cc: David Rientjes <[email protected]> Cc: Dimitri Sivanich <[email protected]> Cc: Hugh Dickins <[email protected]> Cc: Li Zefan <[email protected]> Cc: Mel Gorman <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
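For reference, the (now unconditional) writer side follows the standard seqcount pattern, paired with read_mems_allowed_begin()/read_mems_allowed_retry() on the allocator side; a sketch with the mempolicy rebind elided and the surrounding cpuset locking omitted:

  task_lock(tsk);
  local_irq_disable();                            /* readers may run in irq context */
  write_seqcount_begin(&tsk->mems_allowed_seq);
  tsk->mems_allowed = *newmems;                   /* publish the new nodemask */
  /* ... rebind tsk's mempolicy to the new nodemask here ... */
  write_seqcount_end(&tsk->mems_allowed_seq);
  local_irq_enable();
  task_unlock(tsk);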