path: root/net
Age | Commit message | Author | Files | Lines
2016-05-17 | xprtrdma: Update comments in rpcrdma_marshal_req() | Chuck Lever | 1 | -14/+4
Update documenting comments to reflect code changes over the past year. Signed-off-by: Chuck Lever <[email protected]> Tested-by: Steve Wise <[email protected]> Reviewed-by: Sagi Grimberg <[email protected]> Signed-off-by: Anna Schumaker <[email protected]>
2016-05-17 | xprtrdma: Avoid using Write list for small NFS READ requests | Chuck Lever | 1 | -4/+5
Avoid the latency and interrupt overhead of registering a Write chunk when handling NFS READ requests of a few hundred bytes or less. This change does not interoperate with Linux NFS/RDMA servers that do not have commit 9d11b51ce7c1 ('svcrdma: Fix send_reply() scatter/gather set-up'). Commit 9d11b51ce7c1 was introduced in v4.3, and is included in 4.2.y, 4.1.y, and 3.18.y. Oracle bug 22925946 has been filed to request that the above fix be included in the Oracle Linux UEK4 NFS/RDMA server. Red Hat bugzillas 1327280 and 1327554 have been filed to request that RHEL NFS/RDMA server backports include the above fix. Workaround: Replace the "proto=rdma,port=20049" mount options with "proto=tcp" until commit 9d11b51ce7c1 is applied to your NFS server. Signed-off-by: Chuck Lever <[email protected]> Tested-by: Steve Wise <[email protected]> Reviewed-by: Sagi Grimberg <[email protected]> Signed-off-by: Anna Schumaker <[email protected]>
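The decision described above boils down to choosing no Write chunk when the expected reply fits inline. A minimal sketch, using xprtrdma-style names for illustration (not necessarily the exact patch):

  /* Sketch: pick the reply chunk type while marshaling a Call. */
  if (rpcrdma_results_inline(r_xprt, rqst))
          wtype = rpcrdma_noch;           /* small READ: reply fits inline */
  else if (rqst->rq_rcv_buf.flags & XDRBUF_READ)
          wtype = rpcrdma_writech;        /* large READ payload: Write list */
  else
          wtype = rpcrdma_replych;        /* everything else: Reply chunk */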
2016-05-17 | xprtrdma: Prevent inline overflow | Chuck Lever | 5 | -11/+90
When deciding whether to send a Call inline, rpcrdma_marshal_req doesn't take into account header bytes consumed by chunk lists. This results in Call messages on the wire that are sometimes larger than the inline threshold. Likewise, when a Write list or Reply chunk is in play, the server's reply has to emit an RDMA Send that includes a larger-than-minimal RPC-over-RDMA header. The actual size of a Call message cannot be estimated until after the chunk lists have been registered. Thus the size of each RPC-over-RDMA header can be estimated only after chunks are registered; but the decision to register chunks is based on the size of that header. Chicken, meet egg. The best a client can do is estimate header size based on the largest header that might occur, and then ensure that inline content is always smaller than that. Signed-off-by: Chuck Lever <[email protected]> Tested-by: Steve Wise <[email protected]> Signed-off-by: Anna Schumaker <[email protected]>
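The worst-case estimate can be computed once, up front. A hedged sketch under assumed helper and constant names (the per-segment XDR size shown is illustrative):

  /* Sketch: size the RPC-over-RDMA header for the largest chunk lists
   * that could be emitted, then keep inline content below what remains
   * of the inline threshold.
   */
  static unsigned int rpcrdma_max_call_header_size(unsigned int maxsegs)
  {
          unsigned int size = RPCRDMA_HDRLEN_MIN; /* fixed header fields */

          /* each read segment: position, handle, length, 8-byte offset,
           * plus a list discriminator (illustrative XDR size) */
          size += maxsegs * 24;
          return size;
  }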
2016-05-17 | xprtrdma: Limit number of RDMA segments in RPC-over-RDMA headers | Chuck Lever | 5 | -26/+23
Send buffer space is shared between the RPC-over-RDMA header and an RPC message. A large RPC-over-RDMA header means less space is available for the associated RPC message, which then has to be moved via an RDMA Read or Write. As more segments are added to the chunk lists, the header increases in size. Typical modern hardware needs only a few segments to convey the maximum payload size, but some devices and registration modes may need a lot of segments to convey data payload. Sometimes so many are needed that the remaining space in the Send buffer is not enough for the RPC message. Sending such a message usually fails. To ensure a transport can always make forward progress, cap the number of RDMA segments that are allowed in chunk lists. This prevents less-capable devices and memory registrations from consuming a large portion of the Send buffer by reducing the maximum data payload that can be conveyed with such devices. For now I choose an arbitrary maximum of 8 RDMA segments. This allows a maximum size RPC-over-RDMA header to fit nicely in the current 1024 byte inline threshold with over 700 bytes remaining for an inline RPC message. The current maximum data payload of NFS READ or WRITE requests is one megabyte. To convey that payload on a client with 4KB pages, each chunk segment would need to handle 32 or more data pages. This is well within the capabilities of FMR. For physical registration, the maximum payload size on platforms with 4KB pages is reduced to 32KB. For FRWR, a device's maximum page list depth would need to be at least 34 to support the maximum 1MB payload. A device with a smaller maximum page list depth means the maximum data payload is reduced when using that device. Signed-off-by: Chuck Lever <[email protected]> Tested-by: Steve Wise <[email protected]> Reviewed-by: Sagi Grimberg <[email protected]> Signed-off-by: Anna Schumaker <[email protected]>
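The arithmetic behind the cap, written out as a sketch (the constant name is illustrative; the in-tree define may differ):

  #define RPCRDMA_MAX_SEGS 8      /* cap on segments per chunk list */

  /* 1 MB maximum payload / 4 KB pages = 256 pages per request.
   * With 8 segments, each segment must cover 256 / 8 = 32 pages,
   * which FMR handles comfortably. Physical registration maps one
   * page per segment, so the payload ceiling there becomes
   * 8 * 4 KB = 32 KB, as noted above.
   */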
2016-05-17 | xprtrdma: Bound the inline threshold values | Chuck Lever | 1 | -0/+6
Currently the sysctls that allow setting the inline threshold allow any value to be set. Small values only make the transport run slower. The default 1KB setting is as low as is reasonable. And the logic that decides how to divide a Send buffer between RPC-over-RDMA header and RPC message assumes (but does not check) that the lower bound is not crazy (say, 57 bytes). Send and receive buffers share a page with some control information. Values larger than about 3KB can't be supported, currently. Signed-off-by: Chuck Lever <[email protected]> Tested-by: Steve Wise <[email protected]> Signed-off-by: Anna Schumaker <[email protected]>
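Bounding a sysctl is conventionally done with proc_dointvec_minmax and the extra1/extra2 fields; a sketch of how such a bound could look (variable names and exact limits assumed for illustration):

  static unsigned int min_inline_size = 1024;   /* e.g. RPCRDMA_MIN_INLINE */
  static unsigned int max_inline_size = 3068;   /* e.g. RPCRDMA_MAX_INLINE */

  static struct ctl_table xr_tunables_table[] = {
          {
                  .procname     = "rdma_max_inline_read",
                  .data         = &xprt_rdma_max_inline_read,
                  .maxlen       = sizeof(unsigned int),
                  .mode         = 0644,
                  .proc_handler = proc_dointvec_minmax,
                  .extra1       = &min_inline_size,  /* reject values < 1 KB */
                  .extra2       = &max_inline_size,  /* reject values > ~3 KB */
          },
          { },
  };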
2016-05-17 | sunrpc: Advertise maximum backchannel payload size | Chuck Lever | 5 | -0/+41
RPC-over-RDMA transports have a limit on how large a backward direction (backchannel) RPC message can be. Ensure that the NFSv4.x CREATE_SESSION operation advertises this limit to servers. Signed-off-by: Chuck Lever <[email protected]> Tested-by: Steve Wise <[email protected]> Signed-off-by: Anna Schumaker <[email protected]>
2016-05-17 | Merge tag 'net-next-qcom-soc-4.7-2-merge' of git://github.com/andersson/kernel | David S. Miller | 1 | -3/+6
Merge tag 'qcom-soc-for-4.7-2' into net-next

This merges the Qualcomm SOC tree with net-next, resolving the merge conflict in the SMD API between the two.
2016-05-17 | Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs | Linus Torvalds | 1 | -1/+1
git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs Pull parallel filesystem directory handling update from Al Viro. This is the main parallel directory work by Al that makes the vfs layer able to do lookup and readdir in parallel within a single directory. That's a big change, since this used to be all protected by the directory inode mutex. The inode mutex is replaced by an rwsem, and serialization of lookups of a single name is done by a "in-progress" dentry marker. The series begins with xattr cleanups, and then ends with switching filesystems over to actually doing the readdir in parallel (switching to the "iterate_shared()" that only takes the read lock). A more detailed explanation of the process from Al Viro: "The xattr work starts with some acl fixes, then switches ->getxattr to passing inode and dentry separately. This is the point where the things start to get tricky - that got merged into the very beginning of the -rc3-based #work.lookups, to allow untangling the security_d_instantiate() mess. The xattr work itself proceeds to switch a lot of filesystems to generic_...xattr(); no complications there. After that initial xattr work, the series then does the following: - untangle security_d_instantiate() - convert a bunch of open-coded lookup_one_len_unlocked() to calls of that thing; one such place (in overlayfs) actually yields a trivial conflict with overlayfs fixes later in the cycle - overlayfs ended up switching to a variant of lookup_one_len_unlocked() sans the permission checks. I would've dropped that commit (it gets overridden on merge from #ovl-fixes in #for-next; proper resolution is to use the variant in mainline fs/overlayfs/super.c), but I didn't want to rebase the damn thing - it was fairly late in the cycle... - some filesystems had managed to depend on lookup/lookup exclusion for *fs-internal* data structures in a way that would break if we relaxed the VFS exclusion. Fixing hadn't been hard, fortunately. - core of that series - parallel lookup machinery, replacing ->i_mutex with rwsem, making lookup_slow() take it only shared. At that point lookups happen in parallel; lookups on the same name wait for the in-progress one to be done with that dentry. Surprisingly little code, at that - almost all of it is in fs/dcache.c, with fs/namei.c changes limited to lookup_slow() - making it use the new primitive and actually switching to locking shared. - parallel readdir stuff - first of all, we provide the exclusion on per-struct file basis, same as we do for read() vs lseek() for regular files. That takes care of most of the needed exclusion in readdir/readdir; however, these guys are trickier than lookups, so I went for switching them one-by-one. To do that, a new method '->iterate_shared()' is added and filesystems are switched to it as they are either confirmed to be OK with shared lock on directory or fixed to be OK with that. I hope to kill the original method come next cycle (almost all in-tree filesystems are switched already), but it's still not quite finished. - several filesystems get switched to parallel readdir. The interesting part here is dealing with dcache preseeding by readdir; that needs minor adjustment to be safe with directory locked only shared. Most of the filesystems doing that got switched to in those commits. Important exception: NFS. Turns out that NFS folks, with their, er, insistence on VFS getting the fuck out of the way of the Smart Filesystem Code That Knows How And What To Lock(tm) have grown the locking of their own. 
They had their own homegrown rwsem, with lookup/readdir/atomic_open being *writers* (sillyunlink is the reader there). Of course, with VFS getting the fuck out of the way, as requested, the actual smarts of the smart filesystem code etc. had become exposed... - do_last/lookup_open/atomic_open cleanups. As the result, open() without O_CREAT locks the directory only shared. Including the ->atomic_open() case. Backmerge from #for-linus in the middle of that - atomic_open() fix got brought in. - then comes NFS switch to saner (VFS-based ;-) locking, killing the homegrown "lookup and readdir are writers" kinda-sorta rwsem. All exclusion for sillyunlink/lookup is done by the parallel lookups mechanism. Exclusion between sillyunlink and rmdir is a real rwsem now - rmdir being the writer. Result: NFS lookups/readdirs/O_CREAT-less opens happen in parallel now. - the rest of the series consists of switching a lot of filesystems to parallel readdir; in a lot of cases ->llseek() gets simplified as well. One backmerge in there (again, #for-linus - rockridge fix)" * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs: (74 commits) ext4: switch to ->iterate_shared() hfs: switch to ->iterate_shared() hfsplus: switch to ->iterate_shared() hostfs: switch to ->iterate_shared() hpfs: switch to ->iterate_shared() hpfs: handle allocation failures in hpfs_add_pos() gfs2: switch to ->iterate_shared() f2fs: switch to ->iterate_shared() afs: switch to ->iterate_shared() befs: switch to ->iterate_shared() befs: constify stuff a bit isofs: switch to ->iterate_shared() get_acorn_filename(): deobfuscate a bit btrfs: switch to ->iterate_shared() logfs: no need to lock directory in lseek switch ecryptfs to ->iterate_shared 9p: switch to ->iterate_shared() fat: switch to ->iterate_shared() romfs, squashfs: switch to ->iterate_shared() more trivial ->iterate_shared conversions ...
2016-05-17 | switchdev: pass pointer to fib_info instead of copy | Jiri Pirko | 1 | -4/+2
The problem is that fib_info->nh is a zero-length (flexible) array, so the struct fib_info allocation size depends on the number of nexthops. If we just copy fib_info, we do not copy the nexthop info and the driver accesses memory which is not ours. Given the fact that fib4 does not defer operations and therefore it does not need a copy, just pass the pointer down to drivers as it was done before.

Fixes: 850d0cbc91 ("switchdev: remove pointers from switchdev objects")
Signed-off-by: Jiri Pirko <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
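Why a shallow copy is unsafe here, in miniature (a simplified sketch; fields abbreviated):

  struct fib_info {
          /* ... */
          int           fib_nhs;  /* number of nexthops */
          struct fib_nh nh[0];    /* flexible array, allocated together
                                   * with the struct itself */
  };

  struct fib_info copy = *fi;     /* copies the header only */
  /* copy.nh[] is not valid memory: a driver iterating
   * copy.nh[0..fib_nhs-1] reads past the copied object.
   * Handing drivers the original pointer avoids this entirely. */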
2016-05-17 | net_sched: close another race condition in tcf_mirred_release() | WANG Cong | 1 | -2/+3
We saw the following extra refcount release on veth device:

  kernel: [7957821.463992] unregister_netdevice: waiting for mesos50284 to become free. Usage count = -1

Since we heavily use mirred action to redirect packets to veth, I think this is caused by the following race condition:

CPU0:
tcf_mirred_release(): (in RCU callback)
        struct net_device *dev = rcu_dereference_protected(m->tcfm_dev, 1);

CPU1:
mirred_device_event():
        spin_lock_bh(&mirred_list_lock);
        list_for_each_entry(m, &mirred_list, tcfm_list) {
                if (rcu_access_pointer(m->tcfm_dev) == dev) {
                        dev_put(dev);
                        /* Note : no rcu grace period necessary, as
                         * net_device are already rcu protected.
                         */
                        RCU_INIT_POINTER(m->tcfm_dev, NULL);
                }
        }
        spin_unlock_bh(&mirred_list_lock);

CPU0:
tcf_mirred_release():
        spin_lock_bh(&mirred_list_lock);
        list_del(&m->tcfm_list);
        spin_unlock_bh(&mirred_list_lock);
        if (dev)          // <======== Still refers to the old m->tcfm_dev
                dev_put(dev);  // <======== dev_put() is called on it again

The action init code path is good because it is impossible to modify an action that is being removed. So, fix this by moving everything under the spinlock.

Fixes: 2ee22a90c7af ("net_sched: act_mirred: remove spinlock in fast path")
Fixes: 6bd00b850635 ("act_mirred: fix a race condition on mirred_list")
Cc: Jamal Hadi Salim <[email protected]>
Signed-off-by: Cong Wang <[email protected]>
Acked-by: Jamal Hadi Salim <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
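The shape of the fix, sketched: take the device pointer and drop the reference while still holding mirred_list_lock, so the notifier can no longer slip in between (accessor names illustrative):

  static void tcf_mirred_release(struct tc_action *a, int bind)
  {
          struct tcf_mirred *m = a->priv;  /* sketch; accessor may differ */
          struct net_device *dev;

          spin_lock_bh(&mirred_list_lock);
          list_del(&m->tcfm_list);
          dev = rcu_dereference_protected(m->tcfm_dev, 1);
          if (dev)
                  dev_put(dev);   /* now atomic with the list removal */
          spin_unlock_bh(&mirred_list_lock);
  }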
2016-05-17 | tipc: fix nametable publication field in nl compat | Richard Alpe | 1 | -1/+1
The publication field of the old netlink API should contain the publication key and not the publication reference. Fixes: 44a8ae94fd55 (tipc: convert legacy nl name table dump to nl compat) Signed-off-by: Richard Alpe <[email protected]> Acked-by: Jon Maloy <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2016-05-17 | Merge branch 'ovl-fixes' into for-linus | Al Viro | 64 | -374/+682
Backmerge to resolve a conflict in ovl_lookup_real(); the proper resolution would have been to use "ovl_lookup_real(): use lookup_one_len_unlocked()" here instead, but it was too late in the cycle to rebase.
2016-05-16 | netlink: Fix dump skb leak/double free | Herbert Xu | 1 | -2/+5
When we free cb->skb after a dump, we do it after releasing the lock. This means that a new dump could have started in the meantime and we would end up freeing their skb instead of ours. This patch saves the skb and module before we unlock so we free the right memory.

Fixes: 16b304f3404f ("netlink: Eliminate kmalloc in netlink dump operation.")
Reported-by: Baozeng Ding <[email protected]>
Signed-off-by: Herbert Xu <[email protected]>
Acked-by: Cong Wang <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
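A sketch of the ordering fix inside netlink_dump() (control flow abbreviated): capture what belongs to this dump before dropping the mutex, then free outside the lock:

  module = cb->module;
  skb = cb->skb;                  /* ours, while cb_mutex is held */
  nlk->cb_running = false;
  mutex_unlock(nlk->cb_mutex);

  module_put(module);
  consume_skb(skb);               /* frees our skb, not a newer dump's */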
2016-05-16 | tipc: check nl sock before parsing nested attributes | Richard Alpe | 1 | -0/+3
Make sure the socket for which the user is listing publications exists before parsing the socket netlink attributes. Prior to this patch a call without any socket caused a NULL pointer dereference in tipc_nl_publ_dump().

Tested-and-reported-by: Baozeng Ding <[email protected]>
Signed-off-by: Richard Alpe <[email protected]>
Acked-by: Jon Maloy <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
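The guard itself is a one-liner ahead of the nested parse; sketched with the TIPC attribute names (surrounding code abbreviated):

  if (!attrs[TIPC_NLA_SOCK])      /* no socket given: nothing to dump */
          return -EINVAL;

  err = nla_parse_nested(sock, TIPC_NLA_SOCK_MAX,
                         attrs[TIPC_NLA_SOCK], tipc_nl_sock_policy);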
2016-05-16 | fq_codel: fix memory limitation drift | Eric Dumazet | 1 | -1/+2
memory_usage must be decreased in dequeue_func(), not in fq_codel_dequeue(), otherwise packets dropped by the Codel algorithm miss this decrease. We also need to clear memory_usage in fq_codel_reset().

Fixes: 95b58430abe7 ("fq_codel: add memory limitation per queue")
Signed-off-by: Eric Dumazet <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
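Keeping the accounting at the single point where packets actually leave a flow guarantees Codel-dropped packets are counted too; a sketch (other bookkeeping elided):

  static struct sk_buff *dequeue_func(struct codel_vars *vars, void *ctx)
  {
          struct Qdisc *sch = ctx;
          struct fq_codel_sched_data *q = qdisc_priv(sch);
          struct fq_codel_flow *flow;
          struct sk_buff *skb = NULL;

          flow = container_of(vars, struct fq_codel_flow, cvars);
          if (flow->head) {
                  skb = dequeue_head(flow);
                  q->memory_usage -= skb->truesize; /* hit on every exit
                                                     * path, including
                                                     * Codel drops */
                  sch->q.qlen--;
          }
          return skb;
  }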
2016-05-16 | net: also make sch_handle_egress() drop monitor ready | Daniel Borkmann | 1 | -3/+3
Follow-up for 8a3a4c6e7b34 ("net: make sch_handle_ingress() drop monitor ready") to also make the egress side drop monitor ready. Also here only TC_ACT_SHOT is a clear indication that something went wrong. Hence don't provide false positives to drop monitors such as 'perf record -e skb:kfree_skb ...'. Signed-off-by: Daniel Borkmann <[email protected]> Acked-by: Alexei Starovoitov <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2016-05-16 | net/hsr: Use setup_timer and mod_timer. | Muhammad Falak R Wani | 1 | -8/+3
The function setup_timer() combines the initialization of a timer with the initialization of the timer's function and data fields. The multiline code for timer initialization is now replaced with a call to setup_timer(). Also, quoting the mod_timer() function comment: mod_timer() is a more efficient way to update the expire field of an active timer (if the timer is inactive it will be activated). Use setup_timer() and mod_timer() to set up and arm a timer, making the code compact and aiding readability.

Signed-off-by: Muhammad Falak R Wani <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
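Before/after, as a sketch of the conversion (field and callback names illustrative; this is the pre-4.15 timer API that was current in 2016):

  /* before: open-coded init and arming */
  init_timer(&hsr->announce_timer);
  hsr->announce_timer.function = hsr_announce;
  hsr->announce_timer.data     = (unsigned long)hsr;
  hsr->announce_timer.expires  = jiffies + msecs_to_jiffies(INTERVAL);
  add_timer(&hsr->announce_timer);

  /* after: one-line setup; mod_timer() arms or re-arms the timer */
  setup_timer(&hsr->announce_timer, hsr_announce, (unsigned long)hsr);
  mod_timer(&hsr->announce_timer, jiffies + msecs_to_jiffies(INTERVAL));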
2016-05-16 | Merge branch 'for-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/bluetooth/bluetooth-next | David S. Miller | 1 | -2/+2
Johan Hedberg says:

====================
pull request: bluetooth-next 2016-05-14

Here are two more Bluetooth patches for the 4.7 kernel which we wanted to get into net-next before the merge window opens. Please let me know if there are any issues pulling. Thanks.
====================

Signed-off-by: David S. Miller <[email protected]>
2016-05-16 | bpf: add generic constant blinding for use in jits | Daniel Borkmann | 2 | -2/+14
This work adds a generic facility for use from eBPF JIT compilers that allows for further hardening of JIT generated images through blinding constants. In response to the original work on BPF JIT spraying published by Keegan McAllister [1], most BPF JITs were changed to make images read-only and start at a randomized offset in the page, where the rest was filled with trap instructions. We have this nowadays in x86, arm, arm64 and s390 JIT compilers. Additionally, later work also made eBPF interpreter images read only for kernels supporting DEBUG_SET_MODULE_RONX, that is, x86, arm, arm64 and s390 archs as well currently. This is done by default for the mentioned JITs when JITing is enabled. Furthermore, we have had a generic and configurable constant blinding facility on our todo list for quite some time now to make spraying harder still; a first implementation has existed since around netconf 2016.

We found that for systems where untrusted users can load cBPF/eBPF code and JIT is enabled, start offset randomization helps a bit to make jumps into crafted payload harder, but in cases where larger programs that cross a page boundary are injected, we again have some part of the program opcodes at a page start offset. With improved guessing and more reliable payload injection, chances increase to jump into such payload. Elena Reshetova recently wrote a test case for it [2, 3]. Moreover, eBPF comes with 64 bit constants, which can leave some more room for payloads. Note that for all this, additional bugs in the kernel are still required to make the jump (and of course to guess right, to not jump into a trap), and naturally the JIT must be enabled, which it is not by default.

For helping mitigation, the general idea is to provide an option bpf_jit_harden that admins can tweak along with bpf_jit_enable, so that for cases where JIT should be enabled for performance reasons, the generated image can be further hardened with blinding constants for unprivileged users (bpf_jit_harden == 1), trading off performance for these, but not for privileged ones. We also added the option of blinding for all users (bpf_jit_harden == 2), which is quite helpful for testing f.e. with test_bpf.ko. There are no further hardening levels of the bpf_jit_harden switch intended; the rationale is to have it dead simple to use as on/off.

Since this functionality would need to be duplicated over and over for JIT compilers to use, which are already complex enough, we provide a generic eBPF byte-code level based blinding implementation, which is then just transparently JITed. JIT compilers need to make only a few changes to integrate this facility and can be migrated one by one. This option is for eBPF JITs and will be used in x86, arm64, s390 without too much effort, and soon ppc64 JITs, so that native eBPF can be blinded as well as cBPF to eBPF migrations, and both can be covered with a single implementation.

The rule for JITs is that bpf_jit_blind_constants() must be called from bpf_int_jit_compile(), and in case blinding is disabled, we follow normally with JITing the passed program. In case blinding is enabled and we fail during the process of blinding itself, we must return with the interpreter. Similarly, in case the JITing process after the blinding failed, we return normally to the interpreter with the non-blinded code. Meaning, the interpreter doesn't change in any way and operates on eBPF code as usual. For doing this pre-JIT blinding step, we need to make use of a helper/auxiliary register, here BPF_REG_AX.
This is strictly internal to the JIT and not in any way part of the eBPF architecture. Just as JITs internally make use of some helper registers when emitting code, the helper register here is one abstraction level higher, in eBPF bytecode, but nevertheless in the JIT phase. That helper register is needed since f.e. a manually written program can issue loads to all registers of the eBPF architecture.

The core concept with the additional register is: blind out all 32 and 64 bit constants by converting BPF_K based instructions into a small sequence that turns K_VAL into ((RND ^ K_VAL) ^ RND). This is transformed into: BPF_REG_AX := (RND ^ K_VAL), BPF_REG_AX ^= RND, and REG <OP> BPF_REG_AX, so the actual operation on the target register is translated from a BPF_K into a BPF_X one that operates on BPF_REG_AX's content. During the rewriting phase when blinding, RND is newly generated via prandom_u32() for each processed instruction. 64 bit loads are split into two 32 bit loads to keep translation and patching simple. The only basic thing required of JITs is to call the helper pair bpf_jit_blind_constants()/bpf_jit_prog_release_other(), and to map BPF_REG_AX onto an unused register.

Small bpf_jit_disasm extract from [2] when applied to the x86 JIT:

  echo 0 > /proc/sys/net/core/bpf_jit_harden

  ffffffffa034f5e9 + <x>:
  [...]
  39:   mov    $0xa8909090,%eax
  3e:   mov    $0xa8909090,%eax
  43:   mov    $0xa8ff3148,%eax
  48:   mov    $0xa89081b4,%eax
  4d:   mov    $0xa8900bb0,%eax
  52:   mov    $0xa810e0c1,%eax
  57:   mov    $0xa8908eb4,%eax
  5c:   mov    $0xa89020b0,%eax
  [...]

  echo 1 > /proc/sys/net/core/bpf_jit_harden

  ffffffffa034f1e5 + <x>:
  [...]
  39:   mov    $0xe1192563,%r10d
  3f:   xor    $0x4989b5f3,%r10d
  46:   mov    %r10d,%eax
  49:   mov    $0xb8296d93,%r10d
  4f:   xor    $0x10b9fd03,%r10d
  56:   mov    %r10d,%eax
  59:   mov    $0x8c381146,%r10d
  5f:   xor    $0x24c7200e,%r10d
  66:   mov    %r10d,%eax
  69:   mov    $0xeb2a830e,%r10d
  6f:   xor    $0x43ba02ba,%r10d
  76:   mov    %r10d,%eax
  79:   mov    $0xd9730af,%r10d
  7f:   xor    $0xa5073b1f,%r10d
  86:   mov    %r10d,%eax
  89:   mov    $0x9a45662b,%r10d
  8f:   xor    $0x325586ea,%r10d
  96:   mov    %r10d,%eax
  [...]

As can be seen, original constants that carry payload are hidden when enabled; actual operations are transformed from constant-based to register-based ones, making jumps into constants ineffective. The above extract/example uses a single BPF load instruction over and over, but of course all instructions with constants are blinded.

Performance wise, JIT with blinding performs a bit slower than plain JIT and faster than the interpreter case. This is expected, since we still get all the performance benefits from JITing and in normal use-cases not every single instruction needs to be blinded. Summing up all 296 test cases averaged over multiple runs from the test_bpf.ko suite, the interpreter was 55% slower than JIT only, and JIT with blinding was 8% slower than JIT only. Since there are also some extremes in the test suite, I expect the performance for the JIT-with-blinding case to be even closer to the JIT-only case for ordinary workloads; f.e. the nmap test case from the suite has averaged timings in ns of 29 (JIT), 35 (+ blinding), and 151 (interpreter). The BPF test suite, seccomp test suite, eBPF sample code and various bigger networking eBPF programs have been tested with this and were running fine. For testing purposes, I also adapted the interpreter and redirected the blinded eBPF image to the interpreter; here as well all tests pass.
[1] http://mainisusuallyafunction.blogspot.com/2012/11/attacking-hardened-linux-systems-with.html [2] https://github.com/01org/jit-spray-poc-for-ksp/ [3] http://www.openwall.com/lists/kernel-hardening/2016/05/03/5 Signed-off-by: Daniel Borkmann <[email protected]> Reviewed-by: Elena Reshetova <[email protected]> Acked-by: Alexei Starovoitov <[email protected]> Signed-off-by: David S. Miller <[email protected]>
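The K -> ((RND ^ K) ^ RND) rewrite, sketched with the kernel's eBPF instruction macros; a simplified rendering of the blinding step for one 32-bit ALU instruction (the real rewriter covers many instruction classes), where insn is the BPF_K instruction being rewritten:

  u32 imm_rnd = prandom_u32();    /* fresh per rewritten instruction */

  /* "dst <op>= K" becomes a sequence that never stores K itself: */
  struct bpf_insn blinded[] = {
          BPF_ALU32_IMM(BPF_MOV, BPF_REG_AX, imm_rnd ^ insn->imm),
          BPF_ALU32_IMM(BPF_XOR, BPF_REG_AX, imm_rnd),  /* AX == K again */
          BPF_ALU32_REG(BPF_OP(insn->code), insn->dst_reg, BPF_REG_AX),
  };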
2016-05-16 | bpf: prepare bpf_int_jit_compile/bpf_prog_select_runtime apis | Daniel Borkmann | 1 | -1/+5
Since the blinding is strictly only called from inside eBPF JITs, we need to change signatures for bpf_int_jit_compile() and bpf_prog_select_runtime() first in order to prepare that the eBPF program we're dealing with can change underneath. Hence, for call sites, we need to return the latest prog. No functional change in this patch. Signed-off-by: Daniel Borkmann <[email protected]> Acked-by: Alexei Starovoitov <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2016-05-16 | bpf: split HAVE_BPF_JIT into cBPF and eBPF variant | Daniel Borkmann | 1 | -3/+11
Split the HAVE_BPF_JIT into two for distinguishing cBPF and eBPF JITs.

Current cBPF ones:

  # git grep -n HAVE_CBPF_JIT arch/
  arch/arm/Kconfig:44:      select HAVE_CBPF_JIT
  arch/mips/Kconfig:18:     select HAVE_CBPF_JIT if !CPU_MICROMIPS
  arch/powerpc/Kconfig:129: select HAVE_CBPF_JIT
  arch/sparc/Kconfig:35:    select HAVE_CBPF_JIT

Current eBPF ones:

  # git grep -n HAVE_EBPF_JIT arch/
  arch/arm64/Kconfig:61:    select HAVE_EBPF_JIT
  arch/s390/Kconfig:126:    select HAVE_EBPF_JIT if PACK_STACK && HAVE_MARCH_Z196_FEATURES
  arch/x86/Kconfig:94:      select HAVE_EBPF_JIT if X86_64

Later code also needs this facility to check for eBPF JITs.

Signed-off-by: Daniel Borkmann <[email protected]>
Acked-by: Alexei Starovoitov <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
2016-05-16 | bpf: minor cleanups in ebpf code | Daniel Borkmann | 1 | -19/+15
Besides others, remove redundant comments where the code is self documenting enough, and properly indent various bpf_verifier_ops and bpf_prog_type_list declarations. Moreover, remove two exports that actually have no module user. Signed-off-by: Daniel Borkmann <[email protected]> Acked-by: Alexei Starovoitov <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2016-05-16 | tcp: minor optimizations around tcp_hdr() usage | Eric Dumazet | 3 | -22/+20
tcp_hdr() is slightly more expensive than using skb->data in contexts where we know they point to the same byte. In receive path, tcp_v4_rcv() and tcp_v6_rcv() are in this situation, as tcp header has not been pulled yet. In output path, the same can be said when we just pushed the tcp header in the skb, in tcp_transmit_skb() and tcp_make_synack() Also factorize the two checks for tcb->tcp_flags & TCPHDR_SYN in tcp_transmit_skb() and pass tcp header pointer to tcp_ecn_send(), so that compiler can further optimize and avoid a reload. Signed-off-by: Eric Dumazet <[email protected]> Signed-off-by: David S. Miller <[email protected]>
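The substitution in miniature, as a sketch:

  /* In tcp_v4_rcv()/tcp_v6_rcv() the header has not been pulled yet,
   * so skb->data already points at the TCP header: */
  const struct tcphdr *th = (const struct tcphdr *)skb->data;

  /* ...whereas tcp_hdr() goes through the transport-header offset
   * (skb->head + skb->transport_header) to land on the same byte. */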
2016-05-16 | sock: propagate __sock_cmsg_send() error | Eric Dumazet | 2 | -4/+6
__sock_cmsg_send() might return different error codes, not only -EINVAL. Fixes: 24025c465f77 ("ipv4: process socket-level control messages in IPv4") Fixes: ad1e46a83716 ("ipv6: process socket-level control messages in IPv6") Signed-off-by: Eric Dumazet <[email protected]> Cc: Soheil Hassas Yeganeh <[email protected]> Acked-by: Soheil Hassas Yeganeh <[email protected]> Signed-off-by: David S. Miller <[email protected]>
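The pattern being fixed, sketched (argument list abbreviated):

  /* before: every failure collapsed to -EINVAL */
  if (__sock_cmsg_send(sk, msg, cmsg, &sockc))
          return -EINVAL;

  /* after: preserve the callee's error code */
  err = __sock_cmsg_send(sk, msg, cmsg, &sockc);
  if (err)
          return err;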
2016-05-16 | net: qrtr: fix build problems | Arnd Bergmann | 2 | -2/+4
Having multiple loadable modules with the same name cannot work with modprobe, and having both net/qrtr/smd.ko and drivers/soc/qcom/smd.ko results in a (somewhat cryptic) build error:

  ERROR: "qcom_smd_driver_unregister" [net/qrtr/smd.ko] undefined!
  ERROR: "qcom_smd_driver_register" [net/qrtr/smd.ko] undefined!
  ERROR: "qcom_smd_set_drvdata" [net/qrtr/smd.ko] undefined!
  ERROR: "qcom_smd_send" [net/qrtr/smd.ko] undefined!
  ERROR: "qcom_smd_get_drvdata" [net/qrtr/smd.ko] undefined!
  ERROR: "qcom_smd_driver_unregister" [drivers/soc/qcom/wcnss_ctrl.ko] undefined!
  ERROR: "qcom_smd_driver_register" [drivers/soc/qcom/wcnss_ctrl.ko] undefined!
  ERROR: "qcom_smd_set_drvdata" [drivers/soc/qcom/wcnss_ctrl.ko] undefined!
  ERROR: "qcom_smd_send" [drivers/soc/qcom/wcnss_ctrl.ko] undefined!
  ERROR: "qcom_smd_get_drvdata" [drivers/soc/qcom/wcnss_ctrl.ko] undefined!

Also, the qrtr driver uses the SMD interface and has a Kconfig dependency, but also allows for compile-testing when SMD is disabled. However, with QCOM_SMD=m and COMPILE_TEST=y we can end up with QRTR_SMD=y, and that fails with a related link error. This changes the dependency so we can still compile-test the driver but not have it built-in if SMD is a module, to avoid running into the broken configuration, and changes the Makefile to provide the driver under a different module name.

Signed-off-by: Arnd Bergmann <[email protected]>
Fixes: bdabad3e363d ("net: Add Qualcomm IPC router")
Signed-off-by: David S. Miller <[email protected]>
2016-05-16 | net/sched: cls_flower: Hardware offloaded filters statistics support | Amir Vadai | 1 | -0/+21
Introduce a new command in ndo_setup_tc() for hardware offloaded filters, to call the NIC driver, and make it update the statistics. This will be done before dumping the filter and its statistics. Signed-off-by: Amir Vadai <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2016-05-16 | net/sched: act_gact: Update statistics when offloaded to hardware | Amir Vadai | 1 | -0/+15
Implement the stats_update callback that will be called by NIC drivers for hardware offloaded filters. Signed-off-by: Amir Vadai <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2016-05-16 | net: cls_u32: Add support for skip-sw flag to tc u32 classifier. | Samudrala, Sridhar | 1 | -9/+36
On devices that support TC U32 offloads, this flag enables a filter to be added only to HW. skip-sw and skip-hw are mutually exclusive flags. By default without any flags, the filter is added to both HW and SW, but no error checks are done in case of failure to add to HW. With skip-sw, failure to add to HW is treated as an error. Here is a sample script that adds 2 filters, one with skip-sw and the other with skip-hw flag.

  # add ingress qdisc
  tc qdisc add dev p4p1 ingress

  # enable hw tc offload.
  ethtool -K p4p1 hw-tc-offload on

  # add u32 filter with skip-sw flag.
  tc filter add dev p4p1 parent ffff: protocol ip prio 99 \
      handle 800:0:1 u32 ht 800: flowid 800:1 \
      skip-sw \
      match ip src 192.168.1.0/24 \
      action drop

  # add u32 filter with skip-hw flag.
  tc filter add dev p4p1 parent ffff: protocol ip prio 99 \
      handle 800:0:2 u32 ht 800: flowid 800:2 \
      skip-hw \
      match ip src 192.168.2.0/24 \
      action drop

Signed-off-by: Sridhar Samudrala <[email protected]>
Acked-by: John Fastabend <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
2016-05-15 | Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net | David S. Miller | 15 | -39/+106
The nf_conntrack_core.c fix in 'net' is not relevant in 'net-next' because we no longer have a per-netns conntrack hash. The ip_gre.c conflict as well as the iwlwifi ones were cases of overlapping changes. Conflicts: drivers/net/wireless/intel/iwlwifi/mvm/tx.c net/ipv4/ip_gre.c net/netfilter/nf_conntrack_core.c Signed-off-by: David S. Miller <[email protected]>
2016-05-14 | net: switchdev: Drop EXPERIMENTAL from description | Florian Fainelli | 1 | -1/+1
Switchdev has been around for quite a while now, putting "EXPERIMENTAL" in the description is no longer accurate, drop it. Signed-off-by: Florian Fainelli <[email protected]> Acked-by: Jiri Pirko <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2016-05-14 | net/route: enforce hoplimit max value | Paolo Abeni | 2 | -0/+4
Currently, when creating or updating a route, no check is performed in either the ipv4 or ipv6 code on the hoplimit value. The caller can e.g. set hoplimit to 256, and when such a route is used, packets are sent with hoplimit/ttl equal to 0. This commit adds checks for the RTAX_HOPLIMIT value, in both the ipv4 and ipv6 route code, substituting any value greater than 255 with 255. This is consistent with what is currently done for ADVMSS and MTU in the ipv4 code.

Signed-off-by: Paolo Abeni <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
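The clamp itself is tiny; a sketch of how it sits in the metrics-parsing loop (surrounding code abbreviated):

  /* RTAX_HOPLIMIT arrives as a u32 from userspace; cap it so the
   * u8 ttl/hoplimit field cannot wrap around to 0: */
  if (type == RTAX_HOPLIMIT && val > 255)
          val = 255;
  mp[type - 1] = val;     /* store into the metrics array */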
2016-05-14 | nf_conntrack: avoid kernel pointer value leak in slab name | Linus Torvalds | 1 | -1/+3
The slab name ends up being visible in the directory structure under /sys, and even if you don't have access rights to the file you can see the filenames. Just use a 64-bit counter instead of the pointer to the 'net' structure to generate a unique name. This code will go away in 4.7 when the conntrack code moves to a single kmemcache, but this is the backportable simple solution to avoiding leaking kernel pointers to user space. Fixes: 5b3501faa874 ("netfilter: nf_conntrack: per netns nf_conntrack_cachep") Signed-off-by: Linus Torvalds <[email protected]> Acked-by: Eric Dumazet <[email protected]> Cc: [email protected] Signed-off-by: David S. Miller <[email protected]>
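A sketch of the substitution, with a monotonic counter standing in for the %p-formatted pointer:

  static atomic64_t unique_id;

  /* before (leaks a kernel address via the visible slab name):
   *      kasprintf(GFP_KERNEL, "nf_conntrack_%p", net);
   * after: */
  char *name = kasprintf(GFP_KERNEL, "nf_conntrack_%llu",
                         (u64)atomic64_inc_return(&unique_id));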
2016-05-13 | Merge tag 'qcom-soc-for-4.7-2' into net-next | Bjorn Andersson | 1 | -3/+6
This merges the Qualcomm SOC tree with net-next, resolving the merge conflict in the SMD API between the two.

Signed-off-by: Bjorn Andersson <[email protected]>
2016-05-13 | svcrdma: Generalize svc_rdma_xdr_decode_req() | Chuck Lever | 2 | -11/+23
Clean up: Pass in just the piece of the svc_rqst that is needed here. While we're in the area, add an informative documenting comment. Signed-off-by: Chuck Lever <[email protected]> Signed-off-by: J. Bruce Fields <[email protected]>
2016-05-13 | svcrdma: Eliminate code duplication in svc_rdma_recvfrom() | Chuck Lever | 1 | -21/+5
Clean up. Signed-off-by: Chuck Lever <[email protected]> Signed-off-by: J. Bruce Fields <[email protected]>
2016-05-13 | svcrdma: Drain QP before freeing svcrdma_xprt | Chuck Lever | 1 | -0/+3
If the server has forced a disconnect, the associated QP has not been moved to the Error state, and thus Receives are still posted. Ensure Receives (and any other outstanding WRs) are drained to release resources that can be freed during teardown of the svcrdma_xprt. Signed-off-by: Chuck Lever <[email protected]> Signed-off-by: J. Bruce Fields <[email protected]>
2016-05-13 | svcrdma: Post Receives only for forward channel requests | Chuck Lever | 1 | -1/+1
Since backward direction support was added, the rq_depth was increased to accommodate both forward and backward Receives. But only forward Receives need to be posted after a connection has been accepted. Receives for backward replies are posted as needed by svc_rdma_bc_sendto(). This doesn't break anything, but it means some resources are wasted. Fixes: 03fe9931536f ('svcrdma: Define maximum number of ...') Signed-off-by: Chuck Lever <[email protected]> Signed-off-by: J. Bruce Fields <[email protected]>
2016-05-13 | svcrdma: Remove superfluous line from rdma_read_chunks() | Chuck Lever | 1 | -3/+1
Clean up: svc_rdma_get_read_chunk() already returns a pointer to the Read list. No need to set "ch" again to the value it already contains. Signed-off-by: Chuck Lever <[email protected]> Signed-off-by: J. Bruce Fields <[email protected]>
2016-05-13 | svcrdma: svc_rdma_put_context() is invoked twice in Send error path | Chuck Lever | 1 | -15/+13
Get a fresh op_ctxt in send_reply() instead of in svc_rdma_sendto(). This ensures that svc_rdma_put_context() is invoked only once if send_reply() fails. Signed-off-by: Chuck Lever <[email protected]> Signed-off-by: J. Bruce Fields <[email protected]>
2016-05-13 | svcrdma: Do not add XDR padding to xdr_buf page vector | Chuck Lever | 1 | -1/+1
An xdr_buf has a head, a vector of pages, and a tail. Each RPC request is presented to the NFS server contained in an xdr_buf. The RDMA transport would like to supply the NFS server with only the NFS WRITE payload bytes in the page vector. In some common cases, that would allow the NFS server to swap those pages right into the target file's page cache. Have the transport's RDMA Read logic put XDR pad bytes in the tail iovec, and not in the pages that hold the data payload. The NFSv3 WRITE XDR decoder is finicky about the lengths involved, so make sure it is looking in the correct places when computing the total length of the incoming NFS WRITE request. Signed-off-by: Chuck Lever <[email protected]> Signed-off-by: J. Bruce Fields <[email protected]>
2016-05-13 | svcrdma: Support IPv6 with NFS/RDMA | Shirley Ma | 1 | -1/+11
Allow both IPv4 and IPv6 to bind the same port at the same time, restricting use of the IPv6 socket to IPv6 communication.

Changes from v1:
- Check rdma_set_afonly return value (suggested by Leon Romanovsky)

Changes from v2:
- Acked-by: Leon Romanovsky <[email protected]>

Signed-off-by: Shirley Ma <[email protected]>
Acked-by: Leon Romanovsky <[email protected]>
Reviewed-by: Chuck Lever <[email protected]>
Signed-off-by: J. Bruce Fields <[email protected]>
2016-05-13 | IB/core: Enhance ib_map_mr_sg() | Bart Van Assche | 2 | -2/+2
The SRP initiator allows setting max_sectors to a value that exceeds the largest amount of data that can be mapped at once with an mlx4 HCA using fast registration and a page size of 4 KB. Hence modify ib_map_mr_sg() such that it can map partial sg-elements. If an sg-element has been mapped partially, let the caller know which fraction has been mapped by adjusting *sg_offset.

Signed-off-by: Bart Van Assche <[email protected]>
Tested-by: Laurence Oberman <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Cc: Sagi Grimberg <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>
2016-05-13 | IB/core: Add passing an offset into the SG to ib_map_mr_sg | Christoph Hellwig | 3 | -3/+3
Signed-off-by: Christoph Hellwig <[email protected]> Tested-by: Steve Wise <[email protected]> Reviewed-by: Bart Van Assche <[email protected]> Reviewed-by: Sagi Grimberg <[email protected]> Reviewed-by: Steve Wise <[email protected]> Signed-off-by: Doug Ledford <[email protected]>
2016-05-13 | Bluetooth: fix power_on vs close race | Jiri Slaby | 1 | -2/+2
With all the latest fixes applied, I am still able to reproduce this (and other) warning(s):

  WARNING: CPU: 1 PID: 19684 at ../kernel/workqueue.c:4092 destroy_workqueue+0x70a/0x770()
  ...
  Call Trace:
   [<ffffffff819fee81>] ? dump_stack+0xb3/0x112
   [<ffffffff8117377e>] ? warn_slowpath_common+0xde/0x140
   [<ffffffff811ce68a>] ? destroy_workqueue+0x70a/0x770
   [<ffffffff811739ae>] ? warn_slowpath_null+0x2e/0x40
   [<ffffffff811ce68a>] ? destroy_workqueue+0x70a/0x770
   [<ffffffffa0c944c9>] ? hci_unregister_dev+0x2a9/0x720 [bluetooth]
   [<ffffffffa0b301db>] ? vhci_release+0x7b/0xf0 [hci_vhci]
   [<ffffffffa0b30160>] ? vhci_flush+0x50/0x50 [hci_vhci]
   [<ffffffff8117cd73>] ? do_exit+0x863/0x2b90

This is due to a race present in the hci_unregister_dev path. hdev->power_on work races with hci_dev_do_close. One tries to open, the other tries to close, leading to warnings like the above. (Another example is a warning in kobject_get or kobject_put, depending on who wins the race.) Fix this by switching those two racers to ensure hdev->power_on never triggers while hci_dev_do_close is in progress.

Signed-off-by: Jiri Slaby <[email protected]>
Signed-off-by: Marcel Holtmann <[email protected]>
2016-05-13 | udp: Resolve NULL pointer dereference over flow-based vxlan device | Alexander Duyck | 2 | -13/+5
While testing an OpenStack configuration using VXLANs I saw the following call trace:

  RIP: 0010:[<ffffffff815fad49>] udp4_lib_lookup_skb+0x49/0x80
  RSP: 0018:ffff88103867bc50 EFLAGS: 00010286
  RAX: ffff88103269bf00 RBX: ffff88103269bf00 RCX: 00000000ffffffff
  RDX: 0000000000004300 RSI: 0000000000000000 RDI: ffff880f2932e780
  RBP: ffff88103867bc60 R08: 0000000000000000 R09: 000000009001a8c0
  R10: 0000000000004400 R11: ffffffff81333a58 R12: ffff880f2932e794
  R13: 0000000000000014 R14: 0000000000000014 R15: ffffe8efbfd89ca0
  FS:  0000000000000000(0000) GS:ffff88103fd80000(0000) knlGS:0000000000000000
  CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
  CR2: 0000000000000488 CR3: 0000000001c06000 CR4: 00000000001426e0
  Stack:
   ffffffff81576515 ffffffff815733c0 ffff88103867bc98 ffffffff815fcc17
   ffff88103269bf00 ffffe8efbfd89ca0 0000000000000014 0000000000000080
   ffffe8efbfd89ca0 ffff88103867bcc8 ffffffff815fcf8b ffff880f2932e794
  Call Trace:
   [<ffffffff81576515>] ? skb_checksum+0x35/0x50
   [<ffffffff815733c0>] ? skb_push+0x40/0x40
   [<ffffffff815fcc17>] udp_gro_receive+0x57/0x130
   [<ffffffff815fcf8b>] udp4_gro_receive+0x10b/0x2c0
   [<ffffffff81605863>] inet_gro_receive+0x1d3/0x270
   [<ffffffff81589e59>] dev_gro_receive+0x269/0x3b0
   [<ffffffff8158a1b8>] napi_gro_receive+0x38/0x120
   [<ffffffffa0871297>] gro_cell_poll+0x57/0x80 [vxlan]
   [<ffffffff815899d0>] net_rx_action+0x160/0x380
   [<ffffffff816965c7>] __do_softirq+0xd7/0x2c5
   [<ffffffff8107d969>] run_ksoftirqd+0x29/0x50
   [<ffffffff8109a50f>] smpboot_thread_fn+0x10f/0x160
   [<ffffffff8109a400>] ? sort_range+0x30/0x30
   [<ffffffff81096da8>] kthread+0xd8/0xf0
   [<ffffffff81693c82>] ret_from_fork+0x22/0x40
   [<ffffffff81096cd0>] ? kthread_park+0x60/0x60

This trace is seen when receiving a DHCP request over a flow-based VXLAN tunnel. I believe this is caused by the metadata dst having a NULL dev value and, as a result, dev_net(dev) causing a NULL pointer dereference. To resolve this I am replacing the check for skb_dst(skb)->dev with just skb->dev. This makes sense as the callers of this function are usually in the receive path and as such skb->dev should always be populated. In addition, other functions in the area where these are called are already using dev_net(skb->dev) to determine the namespace the UDP packet belongs in.

Fixes: 63058308cd55 ("udp: Add udp6_lib_lookup_skb and udp4_lib_lookup_skb")
Signed-off-by: Alexander Duyck <[email protected]>
Acked-by: Eric Dumazet <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
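The one-line substance of the fix, sketched:

  /* before: on a flow-based (collect_md) tunnel the metadata dst has
   * dev == NULL, so this dereference crashes:
   *      struct net *net = dev_net(skb_dst(skb)->dev);
   * after: in the receive path skb->dev is always populated: */
  struct net *net = dev_net(skb->dev);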
2016-05-13 | sunrpc: set SOCK_FASYNC | Eric Dumazet | 1 | -0/+3
sunrpc is using SOCKWQ_ASYNC_NOSPACE without setting SOCK_FASYNC, so the recent optimizations done in sk_set_bit() and sk_clear_bit() broke it. There is still the risk that a subsequent sock_fasync() call would clear SOCK_FASYNC, but sunrpc does not use this yet. Fixes: 9317bb69824e ("net: SOCKWQ_ASYNC_NOSPACE optimizations") Signed-off-by: Eric Dumazet <[email protected]> Reported-by: Jiri Pirko <[email protected]> Reported-by: Huang, Ying <[email protected]> Tested-by: Jiri Pirko <[email protected]> Tested-by: Huang, Ying <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2016-05-12 | tipc: eliminate risk of double link_up events | Jon Paul Maloy | 1 | -1/+1
When an ACTIVATE or data packet is received in a link in state ESTABLISHING, the link does not immediately change state to ESTABLISHED, but instead returns a LINK_UP event to the caller, which will execute the state change in a different lock context. This non-atomic approach incurs a low risk that we may have two LINK_UP events pending simultaneously for the same link, resulting in the final part of the setup procedure being executed twice. The only potential harm caused by this is that we may see two LINK_UP events issued to subscribers of the topology server, something that may cause confusion. This commit eliminates this risk by checking if the link is already up before proceeding with the second half of the setup.

Signed-off-by: Jon Maloy <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
2016-05-12 | gre: Fix wrong tpi->proto in WCCP | Haishuang Yan | 3 | -6/+6
When dealing with WCCP in a gre6 tunnel, it sets the wrong tpi->proto; that is, ETH_P_IP is used instead of ETH_P_IPV6 for the encapsulated traffic.

Signed-off-by: Haishuang Yan <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
2016-05-12 | ip6_gre: Fix get_size calculation for gre6 tunnel | Haishuang Yan | 1 | -3/+0
Do not include attribute IFLA_GRE_TOS. Signed-off-by: Haishuang Yan <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2016-05-12 | Merge tag 'mac80211-next-for-davem-2016-05-12' of git://git.kernel.org/pub/scm/linux/kernel/git/jberg/mac80211-next | David S. Miller | 13 | -113/+141
Johannes Berg says:

====================
Some more work for 4.7, notably:

 * completion and fixups of nla_put_64_64bit() work
 * remove a/b/g/n from wext nickname to avoid confusion with 11ac (which wouldn't even fit fully there due to string length restrictions)

along with some other minor changes/cleanups.
====================

Signed-off-by: David S. Miller <[email protected]>