path: root/include/linux
Age | Commit message (Collapse) | Author | Files | Lines
2018-08-13 | bpf: decouple btf from seq bpf fs dump and enable more maps | Daniel Borkmann | 1 | -4/+9
Commit a26ca7c982cb ("bpf: btf: Add pretty print support to the basic arraymap") and 699c86d6ec21 ("bpf: btf: add pretty print for hash/lru_hash maps") enabled support for BTF and dumping via BPF fs for array and hash/lru map. However, both can be decoupled from each other such that regular BPF maps can be supported for attaching BTF key/value information, while not all maps necessarily need to dump via map_seq_show_elem() callback. The basic sanity check which is a prerequisite for all maps is that key/value size has to match in any case, and some maps can have extra checks via map_check_btf() callback, e.g. probing certain types or indicating no support in general. With that we can also enable retrieving BTF info for per-cpu map types and lpm. Signed-off-by: Daniel Borkmann <[email protected]> Acked-by: Alexei Starovoitov <[email protected]> Acked-by: Yonghong Song <[email protected]>
2018-08-12 | init: rename and re-order boot_cpu_state_init() | Linus Torvalds | 1 | -1/+1
This is purely a preparatory patch for upcoming changes during the 4.19 merge window. We have a function called "boot_cpu_state_init()" that isn't really about the bootup cpu state: that is done much earlier by the similarly named "boot_cpu_init()" (note lack of "state" in name). This function initializes some hotplug CPU state, and needs to run after the percpu data has been properly initialized. It even has a comment to that effect. Except it _doesn't_ actually run after the percpu data has been properly initialized. On x86 it happens to do that, but on at least arm and arm64, the percpu base pointers are initialized by the arch-specific 'smp_prepare_boot_cpu()' hook, which ran _after_ boot_cpu_state_init(). This had some unexpected results, and in particular we have a patch pending for the merge window that did the obvious cleanup of using 'this_cpu_write()' in the cpu hotplug init code: - per_cpu_ptr(&cpuhp_state, smp_processor_id())->state = CPUHP_ONLINE; + this_cpu_write(cpuhp_state.state, CPUHP_ONLINE); which is obviously the right thing to do. Except because of the ordering issue, it actually failed miserably and unexpectedly on arm64. So this just fixes the ordering, and changes the name of the function to be 'boot_cpu_hotplug_init()' to make it obvious that it's about cpu hotplug state, because the core CPU state was supposed to have already been done earlier. Marked for stable, since the (not yet merged) patch that will show this problem is marked for stable. Reported-by: Vlastimil Babka <[email protected]> Reported-by: Mian Yousaf Kaukab <[email protected]> Suggested-by: Catalin Marinas <[email protected]> Acked-by: Thomas Gleixner <[email protected]> Cc: Will Deacon <[email protected]> Cc: [email protected] Signed-off-by: Linus Torvalds <[email protected]>
2018-08-11 | blkcg: Make blkg_root_lookup() work for queues in bypass mode | Bart Van Assche | 1 | -10/+5
For legacy queues the only call of blkg_root_lookup() happens after bypass mode has been enabled. Since blkg_lookup() returns NULL for queues in bypass mode, modify the blkg_root_lookup() such that it no longer depends on bypass mode. Rename the function into blk_queue_root_blkg() as suggested by Tejun. Suggested-by: Tejun Heo <[email protected]> Fixes: 6bad9b210a22 ("blkcg: Introduce blkg_root_lookup()") Signed-off-by: Bart Van Assche <[email protected]> Cc: Tejun Heo <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
2018-08-11 | virtio: Make vp_set_vq_affinity() take a mask. | Caleb Raitto | 1 | -3/+4
Make vp_set_vq_affinity() take a cpumask instead of taking a single CPU. If there are fewer queues than cores, queue affinity should be able to map to multiple cores. Link: https://patchwork.ozlabs.org/patch/948149/ Suggested-by: Willem de Bruijn <[email protected]> Signed-off-by: Caleb Raitto <[email protected]> Acked-by: Gonglei <[email protected]> Signed-off-by: David S. Miller <[email protected]>
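A minimal caller sketch (not from the patch) showing the point of the change: several CPUs mapped to one queue. It assumes the virtqueue_set_affinity() wrapper in virtio_config.h follows the same cpumask-based signature as the reworked set_vq_affinity op.

    #include <linux/cpumask.h>
    #include <linux/gfp.h>
    #include <linux/virtio_config.h>

    /* Spread one virtqueue across CPUs 0-3 instead of a single CPU. */
    static int demo_pin_vq(struct virtqueue *vq)
    {
        cpumask_var_t mask;
        int cpu, err;

        if (!zalloc_cpumask_var(&mask, GFP_KERNEL))
            return -ENOMEM;

        for (cpu = 0; cpu < 4; cpu++)
            cpumask_set_cpu(cpu, mask);

        err = virtqueue_set_affinity(vq, mask);    /* now takes a cpumask */
        free_cpumask_var(mask);
        return err;
    }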
2018-08-11 | Merge tag 'spi-nor/for-4.19' of git://git.infradead.org/linux-mtd into mtd/next | Boris Brezillon | 1 | -0/+1
Pull SPI NOR updates from Boris Brezillon:
"
Core changes:
- Apply reset hacks only when reset is explicitly marked as broken in the DT

Driver changes:
- Minor cleanup/fixes in the m25p80 driver
- Release flash_np in the nxp-spifi driver
- Add suspend/resume hooks to the atmel-quadspi driver
- Include gpio/consumer.h instead of gpio.h in the atmel-quadspi driver
- Use %pK instead of %p in the stm32-quadspi driver
- Improve timeout handling in the cadence-quadspi driver
- Use mtd_device_register() instead of mtd_device_parse_register() in the intel-spi driver
"
2018-08-11 | Merge tag 'nand/for-4.19' of git://git.infradead.org/linux-mtd into mtd/next | Boris Brezillon | 37 | -131/+759
Pull NAND updates from Miquel Raynal:
"
NAND core changes:
- Add the SPI-NAND framework.
- Create a helper to find the best ECC configuration.
- Create NAND controller operations.
- Allocate dynamically ONFI parameters structure.
- Add defines for ONFI version bits.
- Add manufacturer fixup for ONFI parameter page.
- Add an option to specify NAND chip as a boot device.
- Add Reed-Solomon error correction algorithm.
- Better name for the controller structure.
- Remove unused caller_is_module() definition.
- Make subop helpers return unsigned values.
- Expose _notsupp() helpers for raw page accessors.
- Add default values for dynamic timings.
- Kill the chip->scan_bbt() hook.
- Rename nand_default_bbt() into nand_create_bbt().
- Start to clean the nand_chip structure.
- Remove stale prototype from rawnand.h.

Raw NAND controllers drivers changes:
- Qcom: structuring cleanup.
- Denali: use core helper to find the best ECC configuration.
- Possible build of almost all drivers by adding a dependency on COMPILE_TEST for almost all of them in Kconfig, implies various fixes, Kconfig cleanup, GPIO headers inclusion cleanup, and even changes in sparc64 and ia64 architectures.
- Clean the ->probe() functions error path of a lot of drivers.
- Migrate all drivers to use nand_scan() instead of nand_scan_ident()/nand_scan_tail() pair.
- Use mtd_device_register() where applicable to simplify the code.
- Marvell:
  * Handle on-die ECC.
  * Better clocks handling.
  * Remove bogus comment.
  * Add suspend and resume support.
- Tegra: add NAND controller driver.
- Atmel:
  * Add module param to avoid using dma.
  * Drop Wenyou Yang from MAINTAINERS.
- Denali: optimize timings handling.
- FSMC: Stop using chip->read_buf().
- FSL:
  * Switch to SPDX license tag identifiers.
  * Fix qualifiers in MXC init functions.

Raw NAND chip drivers changes:
- Micron:
  * Add fixup for ONFI revision.
  * Update ecc_stats.corrected.
  * Make ECC activation stateful.
  * Avoid enabling/disabling ECC when it can't be disabled.
  * Get the actual number of bitflips.
  * Allow forced on-die ECC.
  * Support 8/512 on-die ECC.
  * Fix on-die ECC detection logic.
- Hynix:
  * Fix decoding the OOB size on H27UCG8T2BTR.
  * Use ->exec_op() in hynix_nand_reg_write_op().
"
2018-08-11 | bpf: Enable BPF_PROG_TYPE_SK_REUSEPORT bpf prog in reuseport selection | Martin KaFai Lau | 1 | -0/+1
This patch allows a BPF_PROG_TYPE_SK_REUSEPORT bpf prog to select a SO_REUSEPORT sk from a BPF_MAP_TYPE_REUSEPORT_ARRAY introduced in the earlier patch. "bpf_run_sk_reuseport()" will return -ECONNREFUSED when the BPF_PROG_TYPE_SK_REUSEPORT prog returns SK_DROP. The callers, in inet[6]_hashtable.c and ipv[46]/udp.c, are modified to handle this case and return NULL immediately instead of continuing the sk search from its hashtable. It re-uses the existing SO_ATTACH_REUSEPORT_EBPF setsockopt to attach BPF_PROG_TYPE_SK_REUSEPORT. The "sk_reuseport_attach_bpf()" will check if the attaching bpf prog is in the new SK_REUSEPORT or the existing SOCKET_FILTER type and then check different things accordingly. One level of "__reuseport_attach_prog()" call is removed. The "sk_unhashed() && ..." and "sk->sk_reuseport_cb" tests are pushed back to "reuseport_attach_prog()" in sock_reuseport.c. sock_reuseport.c seems to have more knowledge on those test requirements than filter.c. In "reuseport_attach_prog()", after new_prog is attached to reuse->prog, the old_prog (if any) is also directly freed instead of returning the old_prog to the caller and asking the caller to free. The sysctl_optmem_max check is moved back to the "sk_reuseport_attach_filter()" and "sk_reuseport_attach_bpf()". As of other bpf prog types, the new BPF_PROG_TYPE_SK_REUSEPORT is only bounded by the usual "bpf_prog_charge_memlock()" during load time instead of bounded by both bpf_prog_charge_memlock and sysctl_optmem_max. Signed-off-by: Martin KaFai Lau <[email protected]> Acked-by: Alexei Starovoitov <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]>
2018-08-11 | bpf: Introduce BPF_PROG_TYPE_SK_REUSEPORT | Martin KaFai Lau | 2 | -0/+18
This patch adds a BPF_PROG_TYPE_SK_REUSEPORT which can select a SO_REUSEPORT sk from a BPF_MAP_TYPE_REUSEPORT_ARRAY. Like other non SK_FILTER/CGROUP_SKB program, it requires CAP_SYS_ADMIN. BPF_PROG_TYPE_SK_REUSEPORT introduces "struct sk_reuseport_kern" to store the bpf context instead of using the skb->cb[48]. At the SO_REUSEPORT sk lookup time, it is in the middle of transiting from a lower layer (ipv4/ipv6) to a upper layer (udp/tcp). At this point, it is not always clear where the bpf context can be appended in the skb->cb[48] to avoid saving-and-restoring cb[]. Even putting aside the difference between ipv4-vs-ipv6 and udp-vs-tcp. It is not clear if the lower layer is only ipv4 and ipv6 in the future and will it not touch the cb[] again before transiting to the upper layer. For example, in udp_gro_receive(), it uses the 48 byte NAPI_GRO_CB instead of IP[6]CB and it may still modify the cb[] after calling the udp[46]_lib_lookup_skb(). Because of the above reason, if sk->cb is used for the bpf ctx, saving-and-restoring is needed and likely the whole 48 bytes cb[] has to be saved and restored. Instead of saving, setting and restoring the cb[], this patch opts to create a new "struct sk_reuseport_kern" and setting the needed values in there. The new BPF_PROG_TYPE_SK_REUSEPORT and "struct sk_reuseport_(kern|md)" will serve all ipv4/ipv6 + udp/tcp combinations. There is no protocol specific usage at this point and it is also inline with the current sock_reuseport.c implementation (i.e. no protocol specific requirement). In "struct sk_reuseport_md", this patch exposes data/data_end/len with semantic similar to other existing usages. Together with "bpf_skb_load_bytes()" and "bpf_skb_load_bytes_relative()", the bpf prog can peek anywhere in the skb. The "bind_inany" tells the bpf prog that the reuseport group is bind-ed to a local INANY address which cannot be learned from skb. The new "bind_inany" is added to "struct sock_reuseport" which will be used when running the new "BPF_PROG_TYPE_SK_REUSEPORT" bpf prog in order to avoid repeating the "bind INANY" test on "sk_v6_rcv_saddr/sk->sk_rcv_saddr" every time a bpf prog is run. It can only be properly initialized when a "sk->sk_reuseport" enabled sk is adding to a hashtable (i.e. during "reuseport_alloc()" and "reuseport_add_sock()"). The new "sk_select_reuseport()" is the main helper that the bpf prog will use to select a SO_REUSEPORT sk. It is the only function that can use the new BPF_MAP_TYPE_REUSEPORT_ARRAY. As mentioned in the earlier patch, the validity of a selected sk is checked in run time in "sk_select_reuseport()". Doing the check in verification time is difficult and inflexible (consider the map-in-map use case). The runtime check is to compare the selected sk's reuseport_id with the reuseport_id that we want. This helper will return -EXXX if the selected sk cannot serve the incoming request (e.g. reuseport_id not match). The bpf prog can decide if it wants to do SK_DROP as its discretion. When the bpf prog returns SK_PASS, the kernel will check if a valid sk has been selected (i.e. "reuse_kern->selected_sk != NULL"). If it does , it will use the selected sk. If not, the kernel will select one from "reuse->socks[]" (as before this patch). The SK_DROP and SK_PASS handling logic will be in the next patch. Signed-off-by: Martin KaFai Lau <[email protected]> Acked-by: Alexei Starovoitov <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]>
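A minimal BPF C sketch of the kind of program this type enables (not part of the patch; it assumes modern libbpf-style SEC()/BTF map annotations and the bpf_sk_select_reuseport() helper wrapper from bpf_helpers.h):

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    struct {
        __uint(type, BPF_MAP_TYPE_REUSEPORT_SOCKARRAY);
        __uint(max_entries, 16);
        __type(key, __u32);
        __type(value, __u64);
    } reuseport_map SEC(".maps");

    SEC("sk_reuseport")
    int select_sock(struct sk_reuseport_md *reuse_md)
    {
        __u32 index = 0;    /* e.g. derived from the reuse_md->data payload */

        /* Pick the sk stored at 'index'; drop if it cannot serve the request. */
        if (bpf_sk_select_reuseport(reuse_md, &reuseport_map, &index, 0))
            return SK_DROP;

        return SK_PASS;
    }

    char _license[] SEC("license") = "GPL";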
2018-08-11 | bpf: Introduce BPF_MAP_TYPE_REUSEPORT_SOCKARRAY | Martin KaFai Lau | 2 | -0/+31
This patch introduces a new map type BPF_MAP_TYPE_REUSEPORT_SOCKARRAY. To unleash the full potential of a bpf prog, it is essential for the userspace to be capable of directly setting up a bpf map which can then be consumed by the bpf prog to make decision. In this case, decide which SO_REUSEPORT sk to serve the incoming request. By adding BPF_MAP_TYPE_REUSEPORT_SOCKARRAY, the userspace has total control and visibility on where a SO_REUSEPORT sk should be located in a bpf map. The later patch will introduce BPF_PROG_TYPE_SK_REUSEPORT such that the bpf prog can directly select a sk from the bpf map. That will raise the programmability of the bpf prog attached to a reuseport group (a group of sk serving the same IP:PORT). For example, in UDP, the bpf prog can peek into the payload (e.g. through the "data" pointer introduced in the later patch) to learn the application level's connection information and then decide which sk to pick from a bpf map. The userspace can tightly couple the sk's location in a bpf map with the application logic in generating the UDP payload's connection information. This connection info contact/API stays within the userspace. Also, when used with map-in-map, the userspace can switch the old-server-process's inner map to a new-server-process's inner map in one call "bpf_map_update_elem(outer_map, &index, &new_reuseport_array)". The bpf prog will then direct incoming requests to the new process instead of the old process. The old process can finish draining the pending requests (e.g. by "accept()") before closing the old-fds. [Note that deleting a fd from a bpf map does not necessary mean the fd is closed] During map_update_elem(), Only SO_REUSEPORT sk (i.e. which has already been added to a reuse->socks[]) can be used. That means a SO_REUSEPORT sk that is "bind()" for UDP or "bind()+listen()" for TCP. These conditions are ensured in "reuseport_array_update_check()". A SO_REUSEPORT sk can only be added once to a map (i.e. the same sk cannot be added twice even to the same map). SO_REUSEPORT already allows another sk to be created for the same IP:PORT. There is no need to re-create a similar usage in the BPF side. When a SO_REUSEPORT is deleted from the "reuse->socks[]" (e.g. "close()"), it will notify the bpf map to remove it from the map also. It is done through "bpf_sk_reuseport_detach()" and it will only be called if >=1 of the "reuse->sock[]" has ever been added to a bpf map. The map_update()/map_delete() has to be in-sync with the "reuse->socks[]". Hence, the same "reuseport_lock" used by "reuse->socks[]" has to be used here also. Care has been taken to ensure the lock is only acquired when the adding sk passes some strict tests. and freeing the map does not require the reuseport_lock. The reuseport_array will also support lookup from the syscall side. It will return a sock_gen_cookie(). The sock_gen_cookie() is on-demand (i.e. a sk's cookie is not generated until the very first map_lookup_elem()). The lookup cookie is 64bits but it goes against the logical userspace expectation on 32bits sizeof(fd) (and as other fd based bpf maps do also). It may catch user in surprise if we enforce value_size=8 while userspace still pass a 32bits fd during update. Supporting different value_size between lookup and update seems unintuitive also. We also need to consider what if other existing fd based maps want to return 64bits value from syscall's lookup in the future. Hence, reuseport_array supports both value_size 4 and 8, and assuming user will usually use value_size=4. 
The syscall's lookup will return ENOSPC on value_size=4. It will only return a 64-bit value from sock_gen_cookie() when the user consciously chooses value_size=8 (as a signal that lookup is desired), which then requires a 64-bit value in both lookup and update. Signed-off-by: Martin KaFai Lau <[email protected]> Acked-by: Alexei Starovoitov <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]>
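A hedged userspace sketch of the update path described above (it assumes the classic libbpf bpf_create_map()/bpf_map_update_elem() wrappers and a socket fd that already has SO_REUSEPORT set and is bound/listening):

    #include <linux/bpf.h>
    #include <bpf/bpf.h>

    /* sk_fd: a bound (UDP) or bound+listening (TCP) SO_REUSEPORT socket. */
    static int demo_add_to_reuseport_array(int sk_fd, __u32 index)
    {
        int map_fd;

        /* value_size 4 (fd) for updates; 8 would also allow cookie lookups. */
        map_fd = bpf_create_map(BPF_MAP_TYPE_REUSEPORT_SOCKARRAY,
                                sizeof(__u32), sizeof(__u32), 16, 0);
        if (map_fd < 0)
            return map_fd;

        return bpf_map_update_elem(map_fd, &index, &sk_fd, BPF_ANY);
    }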
2018-08-10 | gpio: mmio: Fix up inverted direction registers | Linus Walleij | 1 | -0/+3
The bgpio_init() takes one of two arguments to specify a register to set the direction of the GPIO line: either dirout that indicates that a 1 in the bit in that register sets the corresponding line to output, or dirin which indicates that a 1 in the bit in that register sets the corresponding line to input. Conversely setting the bit to 0 on these will turn the line into input and output respectively. One of these can be defined but not both. This means that a platform that sets a bit to 1 for output only defines dirout and a platform that sets a bit to 0 for output only defines dirin. In short this defines the polarity of the direction register. Both can also be left as NULL meaning the GPIO chip is either input only or output only. Tomer Maimon discovered that for get/set chips (those where the get and set registers are defined but no separate clear register, and specifying BGPIOF_READ_OUTPUT_REG_SET so that we say we want to read the output value from the SET register) we are unconditionally reading the value from the SET register when the direction bit is 1 and from the DAT register when the direction bit is 0, not taking the direction bit polarity into account. It would be expected that when the direction bit is inverted (dirin is defined but not dirout) we read the current value from the DAT register when the bit is 1 and from the SET register when the bit is 0. Currently only some versions of ATH79, brcmstb, some versions of CLP711x, GE, IOP and Loongson use the dirin mode (a 1 in the register means input). They are unaffected because BGPIOF_READ_OUTPUT_REG_SET is not set on any of them. (They do not read back the SET register to figure out the output value.) So this is no regression with current drivers. However the behaviour is wrong and does not work with Tomer's new driver where he needs to use the BGIOF_READ_OUTPUT_REG_SET. This fixes the above issue by: - Instead of defining separate functions for the inverted case, set up a flag in the gpio_chip that indicates that the direction is inverted. - Remove the special inverted functions for setting input/output and getting the direction, rely on the flag instead. - Respect this flag in bgpio_get_set() and bgpio_get_set_multiple() Reported-by: Tomer Maimon <[email protected]> Signed-off-by: Linus Walleij <[email protected]>
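A hedged registration sketch of the configuration this fix targets (register offsets and the helper name are invented for illustration): a controller whose direction register means 1 = input (only dirin passed) and whose output value must be read back through the SET register via BGPIOF_READ_OUTPUT_REG_SET.

    #include <linux/gpio/driver.h>

    /* Hypothetical register block: DAT, SET and a 1-means-input DIR register. */
    static int demo_bgpio_setup(struct gpio_chip *gc, struct device *dev,
                                void __iomem *base)
    {
        return bgpio_init(gc, dev, 4,
                          base + 0x00,    /* dat */
                          base + 0x04,    /* set */
                          NULL,           /* clr */
                          NULL,           /* dirout: not available */
                          base + 0x08,    /* dirin: 1 = input */
                          BGPIOF_READ_OUTPUT_REG_SET);
    }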
2018-08-10 | tracing: Partial revert of "tracing: Centralize preemptirq tracepoints and unify their usage" | Steven Rostedt (VMware) | 2 | -4/+6
Joel Fernandes created a nice patch that cleaned up the duplicate hooks used by lockdep and irqsoff latency tracer. It made both use tracepoints. But it caused lockdep to trigger several false positives. We have not figured out why yet, but removing lockdep from using the trace event hooks and just calling its helper functions directly (like it used to) makes the problem go away. This is a partial revert of the clean up patch c3bc8fd637a9 ("tracing: Centralize preemptirq tracepoints and unify their usage") that adds direct calls for lockdep, but also keeps most of the clean up done to get rid of the horrible preprocessor if statements. Link: http://lkml.kernel.org/r/[email protected] Cc: Peter Zijlstra <[email protected]> Reviewed-by: Joel Fernandes (Google) <[email protected]> Fixes: c3bc8fd637a9 ("tracing: Centralize preemptirq tracepoints and unify their usage") Signed-off-by: Steven Rostedt (VMware) <[email protected]>
2018-08-10 | Merge git://git.kernel.org/pub/scm/linux/kernel/git/pablo/nf-next | David S. Miller | 1 | -12/+0
Pablo Neira Ayuso says:
====================
Netfilter updates for net-next

The following batch contains netfilter updates for your net-next tree:

1) Expose NFT_OSF_MAXGENRELEN maximum OS name length from the new OS passive fingerprint matching extension, from Fernando Fernandez.
2) Add an extension to support fine-grain conntrack timeout policies from nf_tables. As preparation work, this patchset moves nf_ct_untimeout() to nf_conntrack_timeout and it also decouples the timeout policy from the ctnl_timeout object, most work done by Harsha Sharma.
3) Enable connection tracking when a conntrack helper is in place.
4) Missing enumeration in the uapi header when splitting the original xt_osf to nfnetlink_osf, also from Fernando.
5) Fix a sparse warning due to incorrect typing in nf_osf_find(), from Wei Yongjun.
====================
Signed-off-by: David S. Miller <[email protected]>
2018-08-10 | net: Provide stub for __netif_set_xps_queue if there is no CONFIG_XPS | Krzysztof Kozlowski | 1 | -0/+7
Building virtio_net driver without CONFIG_XPS fails with: drivers/net/virtio_net.c: In function ‘virtnet_set_affinity’: drivers/net/virtio_net.c:1910:3: error: implicit declaration of function ‘__netif_set_xps_queue’ [-Werror=implicit-function-declaration] __netif_set_xps_queue(vi->dev, mask, i, false); ^ Fixes: 4d99f6602cb5 ("net: allow to call netif_reset_xps_queues() under cpus_read_lock") Signed-off-by: Krzysztof Kozlowski <[email protected]> Signed-off-by: David S. Miller <[email protected]>
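A sketch of what such a stub typically looks like in netdevice.h when CONFIG_XPS is disabled (parameter names are assumptions, not copied from the patch):

    #ifndef CONFIG_XPS
    static inline int __netif_set_xps_queue(struct net_device *dev,
                                            const unsigned long *mask,
                                            u16 index, bool is_rxqs_map)
    {
        return 0;    /* XPS disabled: accept and ignore the request */
    }
    #endif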
2018-08-10 | Merge branch 'spi-4.19' into spi-next | Mark Brown | 3 | -257/+16
2018-08-10 | Merge branch 'regulator-4.19' into regulator-next | Mark Brown | 5 | -2/+198
2018-08-10 | Merge tag 'qcom-drivers-for-4.19' of git://git.kernel.org/pub/scm/linux/kernel/git/agross/linux into regulator-4.19 for RPMH | Mark Brown | 2 | -0/+184
Qualcomm ARM Based Driver Updates for v4.19
* Add Qualcomm LLCC driver
* Add Qualcomm RPMH controller
* Fix memleak in Qualcomm RMTFS
* Add dummy qcom_scm_assign_mem()
* Fix check for global partition in SMEM
2018-08-10 | xdp: Helpers for disabling napi_direct of xdp_return_frame | Toshiaki Makita | 1 | -0/+25
We need some mechanism to disable napi_direct on calling xdp_return_frame_rx_napi() from some context. When veth gains support for XDP_REDIRECT, it will redirect packets which are redirected from other devices. On redirection veth will reuse the xdp_mem_info of the redirection source device to make return_frame work. But in this case .ndo_xdp_xmit() called from veth redirection uses an xdp_mem_info which is not guarded by NAPI, because the .ndo_xdp_xmit() is not called directly from the rxq which owns the xdp_mem_info. This approach introduces a flag in bpf_redirect_info to indicate that napi_direct should be disabled even when the _rx_napi variant is used, as well as helper functions to use it. A NAPI handler that wants to use this flag needs to call xdp_set_return_frame_no_direct() before processing packets, and call xdp_clear_return_frame_no_direct() after xdp_do_flush_map() before exiting NAPI. v4: - Use bpf_redirect_info for storing the flag instead of xdp_mem_info to avoid per-frame copy cost. Signed-off-by: Toshiaki Makita <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]>
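A hedged sketch of that calling convention inside a driver's NAPI poll handler (driver-specific names are placeholders; the headers are assumed to be <net/xdp.h> and <linux/filter.h>):

    #include <linux/netdevice.h>
    #include <linux/filter.h>
    #include <net/xdp.h>

    static int demo_napi_poll(struct napi_struct *napi, int budget)
    {
        int done = 0;

        xdp_set_return_frame_no_direct();

        /* ... run XDP over received frames here, possibly redirecting
         * frames whose xdp_mem_info belongs to another device ...
         */

        xdp_do_flush_map();
        xdp_clear_return_frame_no_direct();

        if (done < budget)
            napi_complete_done(napi, done);
        return done;
    }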
2018-08-10 | bpf: Make redirect_info accessible from modules | Toshiaki Makita | 1 | -0/+10
We are going to add kern_flags field in redirect_info for kernel internal use. In order to avoid function call to access the flags, make redirect_info accessible from modules. Also as it is now non-static, add prefix bpf_ to redirect_info. v6: - Fix sparse warning around EXPORT_SYMBOL. Signed-off-by: Toshiaki Makita <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]>
2018-08-10 | net: Export skb_headers_offset_update | Toshiaki Makita | 1 | -0/+1
This is needed for veth XDP which does skb_copy_expand()-like operation. v2: - Drop skb_copy_header part because it has already been exported now. Signed-off-by: Toshiaki Makita <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]>
2018-08-09 | PCI: Hide ACS quirk declarations inside PCI core | Bjorn Helgaas | 1 | -11/+0
Move declarations for these functions: pci_dev_specific_acs_enabled() pci_dev_specific_enable_acs() from include/linux/pci.h to drivers/pci/pci.h because nothing outside the PCI core needs to use them. Signed-off-by: Bjorn Helgaas <[email protected]>
2018-08-09 | net: skbuff.h: fix using plain integer as NULL warning | YueHaibing | 1 | -1/+1
Fixes the following sparse warning: ./include/linux/skbuff.h:2365:58: warning: Using plain integer as NULL pointer Signed-off-by: YueHaibing <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2018-08-09 | qed/qede: Multi CoS support. | Manish Chopra | 1 | -0/+6
This patch adds support for tc mqprio offload. Using this, different traffic classes on the adapter can be utilized based on the configured priority-to-tc map. For example:
    tc qdisc add dev eth0 root mqprio num_tc 4 map 0 1 2 3
This will cause SKBs with priority 0,1,2,3 to be transmitted over the tc 0,1,2,3 hardware queues respectively. Signed-off-by: Manish Chopra <[email protected]> Signed-off-by: Ariel Elior <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2018-08-09 | PCI: Export pcie_has_flr() | Alex Williamson | 1 | -0/+1
pcie_flr() suggests pcie_has_flr() to ensure that PCIe FLR support is present prior to calling. pcie_flr() is exported while pcie_has_flr() is not. Resolve this. Signed-off-by: Alex Williamson <[email protected]> Signed-off-by: Bjorn Helgaas <[email protected]>
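A minimal sketch of the calling pattern both exports enable (assuming pcie_flr() keeps its int return):

    #include <linux/pci.h>

    static int demo_reset_function(struct pci_dev *pdev)
    {
        if (!pcie_has_flr(pdev))
            return -ENOTTY;    /* no Function Level Reset capability */

        return pcie_flr(pdev);
    }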
2018-08-09 | NFSD: Handle full-length symlinks | Chuck Lever | 1 | -1/+2
I've given up on the idea of zero-copy handling of SYMLINK on the server side. This is because the Linux VFS symlink API requires the symlink pathname to be in a NUL-terminated kmalloc'd buffer. The NUL-termination is going to be problematic (watching out for landing on a page boundary and dealing with a 4096-byte pathname). I don't believe that SYMLINK creation is on a performance path or is requested frequently enough that it will cause noticeable CPU cache pollution due to data copies. There will be two places where a transport callout will be necessary to fill in the rqstp: one will be in the svc_fill_symlink_pathname() helper that is used by NFSv2 and NFSv3, and the other will be in nfsd4_decode_create(). Signed-off-by: Chuck Lever <[email protected]> Signed-off-by: J. Bruce Fields <[email protected]>
2018-08-09 | NFSD: Refactor the generic write vector fill helper | Chuck Lever | 1 | -0/+1
fill_in_write_vector() is nearly the same logic as svc_fill_write_vector(), but there are a few differences so that the former can handle multiple WRITE payloads in a single COMPOUND. svc_fill_write_vector() can be adjusted so that it can be used in the NFSv4 WRITE code path too. Instead of assuming the pages are coming from rq_args.pages, have the caller pass in the page list. The immediate benefit is a reduction of code duplication. It also prevents the NFSv4 WRITE decoder from passing an empty vector element when the transport has provided the payload in the xdr_buf's page array. Signed-off-by: Chuck Lever <[email protected]> Signed-off-by: J. Bruce Fields <[email protected]>
2018-08-09 | nfsd: fix leaked file lock with nfs exported overlayfs | Amir Goldstein | 1 | -2/+2
nfsd and lockd call vfs_lock_file() to lock/unlock the inode returned by locks_inode(file). Many places in nfsd/lockd code use the inode returned by file_inode(file) for lock manipulation. With Overlayfs, file_inode() (the underlying inode) is not the same object as locks_inode() (the overlay inode). This can result in "Leaked POSIX lock" messages and eventually to a kernel crash as reported by Eddie Horng: https://marc.info/?l=linux-unionfs&m=153086643202072&w=2 Fix all the call sites in nfsd/lockd that should use locks_inode(). This is a correctness bug that manifested when overlayfs gained NFS export support in v4.16. Reported-by: Eddie Horng <[email protected]> Tested-by: Eddie Horng <[email protected]> Cc: Jeff Layton <[email protected]> Fixes: 8383f1748829 ("ovl: wire up NFS export operations") Cc: [email protected] Signed-off-by: Amir Goldstein <[email protected]> Signed-off-by: J. Bruce Fields <[email protected]>
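The shape of the fix at each affected call site, as a minimal sketch (the wrapper name is illustrative, not from the patch):

    #include <linux/fs.h>

    static struct inode *demo_lock_inode(struct file *file)
    {
        /* Not file_inode(file): with overlayfs that is the underlying
         * inode, while the file_lock code keys off locks_inode(file),
         * i.e. the overlay inode.
         */
        return locks_inode(file);
    }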
2018-08-09 | net: stmmac: Add XGMAC 2.10 HWIF entry | Jose Abreu | 1 | -0/+1
Add a new entry to the HWIF table for XGMAC 2.10. For now we fill it with empty callbacks, which will be added in subsequent patches. Signed-off-by: Jose Abreu <[email protected]> Cc: David S. Miller <[email protected]> Cc: Joao Pinto <[email protected]> Cc: Giuseppe Cavallaro <[email protected]> Cc: Alexandre Torgue <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2018-08-09 | phylink: add helper for configuring 2500BaseX modes | Russell King | 1 | -0/+1
Add a helper for MAC drivers to use in their validate callback to deal with 2500BaseX vs 1000BaseX modes, where the hardware supports both but it is not possible to automatically select between them. This helper defaults to 1000BaseX, as that is the 802.3 standard, and will allow users to select 2500BaseX either by forcing the speed if AN is disabled, or by changing the advertising mask if AN is enabled. Disabling AN is not recommended as it is only the speed that we're interested in controlling, not the duplex or pause mode parameters. Signed-off-by: Russell King <[email protected]> Signed-off-by: Andrew Lunn <[email protected]> Signed-off-by: David S. Miller <[email protected]>
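A hedged sketch of a MAC driver's validate() callback using such a helper, as the phylink API looked around this time; it assumes the helper is phylink_helper_basex_speed() operating on the link state.

    #include <linux/ethtool.h>
    #include <linux/bitmap.h>
    #include <linux/phylink.h>

    static void demo_mac_validate(struct net_device *ndev, unsigned long *supported,
                                  struct phylink_link_state *state)
    {
        __ETHTOOL_DECLARE_LINK_MODE_MASK(mask) = { 0, };

        phylink_set(mask, Autoneg);
        phylink_set(mask, Pause);
        phylink_set(mask, 1000baseX_Full);
        phylink_set(mask, 2500baseX_Full);

        bitmap_and(supported, supported, mask, __ETHTOOL_LINK_MODE_MASK_NBITS);
        bitmap_and(state->advertising, state->advertising, mask,
                   __ETHTOOL_LINK_MODE_MASK_NBITS);

        /* Resolve 1000baseX vs 2500baseX from the forced speed or the
         * advertising mask, defaulting to 1000baseX per 802.3.
         */
        phylink_helper_basex_speed(state);
    }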
2018-08-09 | signal: Don't restart fork when signals come in. | Eric W. Biederman | 1 | -0/+8
Wen Yang <[email protected]> and majiang <[email protected]> report that a periodic signal received during fork can cause fork to continually restart preventing an application from making progress. The code was being overly pessimistic. Fork needs to guarantee that a signal sent to multiple processes is logically delivered before the fork and just to the forking process or logically delivered after the fork to both the forking process and it's newly spawned child. For signals like periodic timers that are always delivered to a single process fork can safely complete and let them appear to logically delivered after the fork(). While examining this issue I also discovered that fork today will miss signals delivered to multiple processes during the fork and handled by another thread. Similarly the current code will also miss blocked signals that are delivered to multiple process, as those signals will not appear pending during fork. Add a list of each thread that is currently forking, and keep on that list a signal set that records all of the signals sent to multiple processes. When fork completes initialize the new processes shared_pending signal set with it. The calculate_sigpending function will see those signals and set TIF_SIGPENDING causing the new task to take the slow path to userspace to handle those signals. Making it appear as if those signals were received immediately after the fork. It is not possible to send real time signals to multiple processes and exceptions don't go to multiple processes, which means that that are no signals sent to multiple processes that require siginfo. This means it is safe to not bother collecting siginfo on signals sent during fork. The sigaction of a child of fork is initially the same as the sigaction of the parent process. So a signal the parent ignores the child will also initially ignore. Therefore it is safe to ignore signals sent to multiple processes and ignored by the forking process. Signals sent to only a single process or only a single thread and delivered during fork are treated as if they are received after the fork, and generally not dealt with. They won't cause any problems. V2: Added removal from the multiprocess list on failure. V3: Use -ERESTARTNOINTR directly V4: - Don't queue both SIGCONT and SIGSTOP - Initialize signal_struct.multiprocess in init_task - Move setting of shared_pending to before the new task is visible to signals. This prevents signals from comming in before shared_pending.signal is set to delayed.signal and being lost. V5: - rework list add and delete to account for idle threads v6: - Use sigdelsetmask when removing stop signals Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=200447 Reported-by: Wen Yang <[email protected]> and Reported-by: majiang <[email protected]> Fixes: 4a2c7a7837da ("[PATCH] make fork() atomic wrt pgrp/session signals") Signed-off-by: "Eric W. Biederman" <[email protected]>
2018-08-09 | NFS handle COPY reply CB_OFFLOAD call race | Olga Kornievskaia | 1 | -0/+1
It's possible that the server replies with the CB_OFFLOAD call and the COPY reply at the same time, such that the client will process CB_OFFLOAD before the reply to COPY. To handle that, keep a list of pending callback stateids received, and check the pending list before waiting on completion. Clean up any pending copies on client shutdown. Signed-off-by: Olga Kornievskaia <[email protected]> Signed-off-by: Anna Schumaker <[email protected]>
2018-08-09 | NFS add support for asynchronous COPY | Olga Kornievskaia | 3 | -0/+11
Change xdr to always send COPY asynchronously. Keep the copies sent in a list under a server structure. Once a copy is sent, it waits on a completion structure that will be signalled by the callback thread that receives CB_OFFLOAD. If CB_OFFLOAD returned an error, even if it returned partial bytes, ignore them (as we can't commit without a verifier to match) and return an error. Signed-off-by: Olga Kornievskaia <[email protected]> Signed-off-by: Anna Schumaker <[email protected]>
2018-08-09 | NFS COPY xdr handle async reply | Olga Kornievskaia | 1 | -0/+1
If server returns async reply, it must include a callback stateid, wr_callback_id in the write_response4. Signed-off-by: Olga Kornievskaia <[email protected]> Signed-off-by: Anna Schumaker <[email protected]>
2018-08-09 | NFS OFFLOAD_CANCEL xdr | Olga Kornievskaia | 3 | -0/+14
Signed-off-by: Olga Kornievskaia <[email protected]> Signed-off-by: Anna Schumaker <[email protected]>
2018-08-09 | ssb: Remove SSB_WARN_ON, SSB_BUG_ON and SSB_DEBUG | Michael Büsch | 1 | -2/+0
Use the standard WARN_ON instead. If a small kernel is desired, WARN_ON can be disabled globally. Also remove SSB_DEBUG. Besides WARN_ON it only adds a tiny debug check. Include this check unconditionally. Signed-off-by: Michael Buesch <[email protected]> Signed-off-by: Kalle Valo <[email protected]>
2018-08-09 | blkcg: Introduce blkg_root_lookup() | Bart Van Assche | 1 | -0/+18
This new function will be used in a later patch to verify whether a queue has been dissociated from the cgroup controller before being released. Signed-off-by: Bart Van Assche <[email protected]> Cc: Tejun Heo <[email protected]> Cc: Christoph Hellwig <[email protected]> Cc: Ming Lei <[email protected]> Cc: Omar Sandoval <[email protected]> Cc: Johannes Thumshirn <[email protected]> Cc: Alexandru Moise <[email protected]> Cc: Joseph Qi <[email protected]> Cc: <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
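A hedged sketch of the kind of check the later patch can build on in a queue teardown path (the function name here is hypothetical):

    #include <linux/blk-cgroup.h>

    static void demo_assert_blkcg_dissociated(struct request_queue *q)
    {
        /* Any remaining root blkg means the queue was not dissociated. */
        WARN_ON_ONCE(blkg_root_lookup(q));
    }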
2018-08-09 | block: Remove two superfluous #include directives | Bart Van Assche | 1 | -2/+0
Commit 12f5b9314545 ("blk-mq: Remove generation seqeunce") removed the only seqcount_t and u64_stats_sync instances from <linux/blkdev.h> but did not remove the corresponding #include directives. Since these include directives are no longer needed, remove them. Signed-off-by: Bart Van Assche <[email protected]> Cc: Christoph Hellwig <[email protected]> Cc: Keith Busch <[email protected]> Cc: Ming Lei <[email protected]> Cc: Jianchao Wang <[email protected]> Cc: Hannes Reinecke <[email protected]>, Cc: Johannes Thumshirn <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
2018-08-09 | Merge branch 'asoc-4.19' into asoc-next | Mark Brown | 5 | -2/+80
2018-08-09 | Merge branch 'regmap-4.19' into regmap-next | Mark Brown | 2 | -3/+62
2018-08-09 | Merge tag 'regmap-noinc-read' into regmap-4.19 | Mark Brown | 1 | -0/+19
regmap: Support non-incrementing registers Some devices have individual registers that don't autoincrement the register address during bulk reads but instead repeatedly read the same value, for example for monitoring GPIOs or ADCs. Add support for these.
2018-08-09 | regmap: Add regmap_noinc_read API | Crestez Dan Leonard | 1 | -0/+19
The regmap API usually assumes that bulk read operations will read a range of registers, but some I2C/SPI devices have certain registers for which such a read operation will return data from an internal FIFO instead. Add an explicit API to support bulk read without range semantics. Some Linux drivers use regmap_bulk_read or regmap_raw_read for such registers, for example mpu6050 or bmi150 from IIO. This only happens to work because when caching is disabled a single regmap read op will map to a single bus read op (as desired). This breaks if caching is enabled and reg+1 happens to be a cacheable register. Without regmap support, refactoring a driver to enable regmap caching requires separate I2C and SPI paths. This is exactly what regmap is supposed to help avoid. Suggested-by: Jonathan Cameron <[email protected]> Signed-off-by: Crestez Dan Leonard <[email protected]> Signed-off-by: Stefan Popa <[email protected]> Signed-off-by: Mark Brown <[email protected]>
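A minimal usage sketch (the FIFO register name is invented): draining a device FIFO that is exposed through a single non-incrementing register.

    #include <linux/regmap.h>

    #define DEMO_REG_FIFO_DATA    0x3f    /* hypothetical FIFO data register */

    static int demo_read_fifo(struct regmap *map, u8 *buf, size_t len)
    {
        /* Reads 'len' bytes from the same register instead of reg, reg+1, ... */
        return regmap_noinc_read(map, DEMO_REG_FIFO_DATA, buf, len);
    }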
2018-08-08 | net/mlx5: Remove unused mlx5_query_vport_admin_state | Eli Cohen | 1 | -2/+0
mlx5_query_vport_admin_state() is not used anywhere. Remove it. Signed-off-by: Eli Cohen <[email protected]> Reviewed-by: Or Gerlitz <[email protected]> Signed-off-by: Saeed Mahameed <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2018-08-08 | net/mlx5: Rename modify/query_vport state related enums | Eran Ben Elisha | 2 | -5/+5
Modify and query vport state commands share the same admin_state and op_mod values, rename the enums to fit them both. In addition, remove the esw prefix from the admin state enum as this also applied for vnic. Signed-off-by: Eran Ben Elisha <[email protected]> Signed-off-by: Saeed Mahameed <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2018-08-08 | net/mlx5: Use max_num_eqs for calculation of required MSIX vectors | Denis Drozdov | 1 | -1/+4
New firmware defines a new HCA capability field called "max_num_eqs", which is the number of available EQs after subtracting the reserved FW EQs. Before this capability the FW reported the EQ number in "log_max_eqs"; that value also contained the FW-reserved EQs, so the driver could fail to load on systems with 320 CPUs because the FW-reserved EQs were not actually available to it. Now the driver obtains the max_num_eqs value from new FW to get the real number of available EQs. Signed-off-by: Denis Drozdov <[email protected]> Reviewed-by: Alex Vesker <[email protected]> Signed-off-by: Saeed Mahameed <[email protected]> Signed-off-by: David S. Miller <[email protected]>
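A hedged sketch of the resulting capability fallback, using the mlx5 MLX5_CAP_GEN() accessor (the exact ifc field names are assumptions based on the text above):

    #include <linux/mlx5/driver.h>

    static int demo_mlx5_num_eqs(struct mlx5_core_dev *dev)
    {
        /* New FW reports the usable EQ count directly; older FW only
         * reports a log2 total that still includes FW-reserved EQs.
         */
        if (MLX5_CAP_GEN(dev, max_num_eqs))
            return MLX5_CAP_GEN(dev, max_num_eqs);

        return 1 << MLX5_CAP_GEN(dev, log_max_eq);
    }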
2018-08-08 | i2c: core: Parse SDA hold time from firmware | Andy Shevchenko | 1 | -0/+2
There are two drivers already using the SDA hold time setting, and there might be more in the future; thus, make the I2C core parse the setting for us if it is provided by firmware. Signed-off-by: Andy Shevchenko <[email protected]> Tested-by: Alexandre Belloni <[email protected]> Reviewed-by: Alexandre Belloni <[email protected]> Acked-by: Ludovic Desroches <[email protected]> Signed-off-by: Wolfram Sang <[email protected]>
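A hedged sketch of how a bus driver would consume the parsed value; it assumes the core stores it in an sda_hold_ns field of struct i2c_timings filled by i2c_parse_fw_timings():

    #include <linux/i2c.h>

    static void demo_apply_timings(struct device *dev)
    {
        struct i2c_timings t;

        /* true: fill unspecified fields with the bus-speed defaults */
        i2c_parse_fw_timings(dev, &t, true);

        if (t.sda_hold_ns)
            pr_info("SDA hold time: %u ns\n", t.sda_hold_ns);
    }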
2018-08-08 | netfilter: nfnetlink_osf: add missing enum in nfnetlink_osf uapi header | Fernando Fernandez Mancera | 1 | -12/+0
xt_osf_window_size_options was originally part of include/uapi/linux/netfilter/xt_osf.h, restore it. Fixes: bfb15f2a95cb ("netfilter: extract Passive OS fingerprint infrastructure from xt_osf") Signed-off-by: Fernando Fernandez Mancera <[email protected]> Signed-off-by: Pablo Neira Ayuso <[email protected]>
2018-08-08 | overflow.h: Add arithmetic shift helper | Jason Gunthorpe | 1 | -0/+31
Add shift_overflow() helper to assist driver authors in ensuring that shift operations don't cause overflows or other odd conditions. Signed-off-by: Jason Gunthorpe <[email protected]> Signed-off-by: Leon Romanovsky <[email protected]> [kees: tweaked comments and commit log, dropped unneeded assignment] Signed-off-by: Kees Cook <[email protected]>
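A hedged usage sketch; it assumes the helper follows the check_*_overflow() convention already used in <linux/overflow.h>, i.e. check_shl_overflow(value, shift, &dest) returning true when the shifted value does not fit:

    #include <linux/overflow.h>

    static int demo_ring_bytes(u32 nelems, u32 shift, u32 *bytes)
    {
        if (check_shl_overflow(nelems, shift, bytes))
            return -EOVERFLOW;    /* nelems << shift does not fit in *bytes */

        return 0;
    }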
2018-08-08 | Merge branches 'arm/shmobile', 'arm/renesas', 'arm/msm', 'arm/smmu', 'arm/omap', 'x86/amd', 'x86/vt-d' and 'core' into next | Joerg Roedel | 35 | -111/+286
2018-08-08 | nvme.h: add support for ns write protect definitions | Chaitanya Kulkarni | 1 | -2/+15
Add various definitions from NVMe 1.3 TP 4005. Signed-off-by: Chaitanya Kulkarni <[email protected]> Signed-off-by: Christoph Hellwig <[email protected]>
2018-08-08 | nvme.h: fixup ANA group descriptor format | Hannes Reinecke | 1 | -1/+1
ANA Phase 3 draft had the 'reserved' field in the group descriptor format set to '23:17' (so that the first namespace identifier started at byte 24), but that got moved with the approved TP to '31:17' (so that the first namespace identifier started at byte 32). Signed-off-by: Hannes Reinecke <[email protected]> Signed-off-by: Christoph Hellwig <[email protected]>
2018-08-08 | iommu: Remove the ->map_sg indirection | Christoph Hellwig | 1 | -14/+2
All iommu drivers use the default_iommu_map_sg implementation, and there is no good reason to ever override it. Just expose it as iommu_map_sg directly and remove the indirection, especially in our post-spectre world where indirect calls are horribly expensive. Signed-off-by: Christoph Hellwig <[email protected]> Signed-off-by: Joerg Roedel <[email protected]>