Age | Commit message | Author | Files | Lines
2019-04-14 | net: phy: shrink PHY settings array | Heiner Kallweit | 1 | -202/+45
The definition of the settings[] array has become quite lengthy by now. Add a macro to shrink the definition. v2: - Fix an indentation issue Signed-off-by: Heiner Kallweit <[email protected]> Reviewed-by: Andrew Lunn <[email protected]> Signed-off-by: David S. Miller <[email protected]>
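To illustrate the kind of macro described above, here is a minimal sketch; the macro shape and the entries shown are assumptions for illustration, not necessarily the exact code added by the patch:

    /* Hypothetical sketch: build each settings[] entry from speed, duplex and
     * link-mode bit instead of spelling out every initializer by hand. */
    #define PHY_SETTING(s, d, b)                          \
    {                                                     \
        .speed  = SPEED_ ## s,                            \
        .duplex = DUPLEX_ ## d,                           \
        .bit    = ETHTOOL_LINK_MODE_ ## b ## _BIT,        \
    }

    static const struct phy_setting settings[] = {
        PHY_SETTING(10000, FULL, 10000baseT_Full),
        PHY_SETTING(1000, FULL, 1000baseT_Full),
        PHY_SETTING(1000, HALF, 1000baseT_Half),
        PHY_SETTING(100, FULL, 100baseT_Full),
        PHY_SETTING(10, FULL, 10baseT_Full),
        /* remaining modes follow the same pattern */
    };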
2019-04-12 | Merge branch 'rhashtable-bit-locking-m68k' | David S. Miller | 3 | -110/+142
NeilBrown says: ==================== Fix rhashtable bit-locking for m68k As reported by Guenter Roeck, the new rhashtable bit-locking doesn't work on m68k as it only requires 2-byte alignment, so BIT(1) in addresses is not necessarily unused. We currently use BIT(0) to identify a NULLS marker, but that is only needed in ->next pointers. The bucket head does not need a NULLS marker, so the lsb there can be used for locking. The first 4 patches make some small improvements and re-arrange some code. The final patch converts to using only BIT(0) for these two different special purposes. I had previously suggested dropping the series until I fix it. Given that this was fairly easy, I retract that suggestion; I think it best simply to add these patches to fix the code. ==================== Tested-by: Guenter Roeck <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2019-04-12 | rhashtable: use BIT(0) for locking. | NeilBrown | 2 | -12/+25
As reported by Guenter Roeck, the new bit-locking using BIT(1) doesn't work on the m68k architecture. m68k only requires 2-byte alignment for words and longwords, so there is only one unused bit in pointers to structs - we currently use two, one for the NULLS marker at the end of the linked list, and one for the bit-lock in the head of the list. The two uses don't need to conflict as we never need the head of the list to be a NULLS marker - the marker is only needed to check if an object has moved to a different table, and the bucket head cannot move. The NULLS marker is only needed in a ->next pointer. As we already have different types for the bucket head pointer (struct rhash_lock_head) and the ->next pointers (struct rhash_head), it is fairly easy to treat the lsb differently in each. So: Initialize bucket heads to NULL, and use the lsb for locking. When loading the pointer from the bucket head, if it is NULL (ignoring the lock bit), report it as being the expected NULLS marker. When storing a value into a bucket head, if it is a NULLS marker, store NULL instead. And convert all places that used bit 1 for locking to use bit 0. Fixes: 8f0db018006a ("rhashtable: use bit_spin_locks to protect hash bucket.") Reported-by: Guenter Roeck <[email protected]> Tested-by: Guenter Roeck <[email protected]> Signed-off-by: NeilBrown <[email protected]> Signed-off-by: David S. Miller <[email protected]>
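A minimal, self-contained sketch of the load/store convention described above, in plain C with illustrative names (not the kernel's actual rht_ptr()/rht_assign_locked() helpers):

    #include <stdint.h>

    #define BUCKET_LOCK_BIT 1UL   /* BIT(0) doubles as the bucket lock */

    /* Load: mask off the lock bit; an empty (NULL) head is reported as the
     * expected NULLS marker, since a bucket head itself never needs one. */
    static inline void *bucket_head_load(uintptr_t head, void *nulls_marker)
    {
        head &= ~BUCKET_LOCK_BIT;
        return head ? (void *)head : nulls_marker;
    }

    /* Store: a NULLS marker is converted back to NULL so that only ->next
     * pointers ever carry a real NULLS marker; the lock bit is preserved. */
    static inline uintptr_t bucket_head_store(void *obj, void *nulls_marker,
                                              uintptr_t old_head)
    {
        uintptr_t lock = old_head & BUCKET_LOCK_BIT;

        if (obj == nulls_marker)
            obj = NULL;
        return (uintptr_t)obj | lock;
    }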
2019-04-12 | rhashtable: replace rht_ptr_locked() with rht_assign_locked() | NeilBrown | 2 | -6/+9
The only time rht_ptr_locked() is used is to store a new value in a bucket head, and that is the only place it makes sense to use it. So replace it by a function which does the whole task: set the lock bit and assign to a bucket head. Signed-off-by: NeilBrown <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2019-04-12 | rhashtable: move dereference inside rht_ptr() | NeilBrown | 3 | -34/+49
Rather than dereferencing a pointer to a bucket and then passing the result to rht_ptr(), we now pass in the pointer and do the dereference in rht_ptr(). This requires that we pass in the tbl and hash as well to support RCU checks, and means that the various rht_for_each functions can expect a pointer that can be dereferenced without further care. There are two places where we dereference a bucket pointer where there is no testable protection - in each case we know that we must have exclusive access without having taken a lock. The previous code used rht_dereference() to pretend that holding the mutex provided protection, but holding the mutex never provides protection for accessing buckets. So instead introduce rht_ptr_exclusive() that can be used when there is known to be exclusive access without holding any locks. Signed-off-by: NeilBrown <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2019-04-12 | rhashtable: reorder some inline functions and macros. | NeilBrown | 1 | -71/+71
This patch only moves some code around; it doesn't change the code at all. A subsequent patch will benefit from this as it needs to add calls to functions which are now defined before the call-site, but weren't before. Signed-off-by: NeilBrown <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2019-04-12 | rhashtable: fix some __rcu annotation errors | NeilBrown | 2 | -7/+8
With these annotations, the rhashtable now gets no warnings when compiled with "C=1" for sparse checking. Signed-off-by: NeilBrown <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2019-04-12 | rhashtable: use struct_size() in kvzalloc() | Gustavo A. R. Silva | 1 | -2/+1
One of the more common cases of allocation size calculations is finding the size of a structure that has a zero-sized array at the end, along with memory for some number of elements for that array. For example:

    struct foo {
        int stuff;
        struct boo entry[];
    };

    size = sizeof(struct foo) + count * sizeof(struct boo);
    instance = kvzalloc(size, GFP_KERNEL);

Instead of leaving these open-coded and prone to type mistakes, we can now use the new struct_size() helper:

    instance = kvzalloc(struct_size(instance, entry, count), GFP_KERNEL);

This code was detected with the help of Coccinelle. Signed-off-by: Gustavo A. R. Silva <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2019-04-12 | Merge branch 'nfp-update-to-control-structures' | David S. Miller | 12 | -293/+451
Jakub Kicinski says: ==================== nfp: update to control structures This series prepares NFP control structures for crypto offloads. So far we mostly dealt with configuration requests under rtnl lock. This will no longer be the case with crypto. Additionally we will try to reuse the BPF control message format, so we move common code out of BPF. ==================== Signed-off-by: David S. Miller <[email protected]>
2019-04-12 | nfp: split out common control message handling code | Jakub Kicinski | 8 | -263/+340
BPF's control message handler seems like a good base to build on for request-reply control messages. Split it out to allow for reuse. Signed-off-by: Jakub Kicinski <[email protected]> Reviewed-by: Dirk van der Merwe <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2019-04-12 | nfp: move vNIC reset before netdev init | Jakub Kicinski | 1 | -3/+3
During probe we clear vNIC configuration in case the device wasn't closed cleanly by previous driver. Move that code before netdev init, so netdev init can already try to apply its config parameters. Signed-off-by: Jakub Kicinski <[email protected]> Reviewed-by: Dirk van der Merwe <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2019-04-12 | nfp: add a mutex lock for the vNIC ctrl BAR | Jakub Kicinski | 4 | -23/+87
Soon we will try to write to the vNIC mailbox without RTNL held. Add a new mutex to protect access to specific parts of the PCI control BAR. Move the mailbox size checking to the mailbox lock() helper, where it can be more effective (happen prior to potential overwrite of other data). Signed-off-by: Jakub Kicinski <[email protected]> Reviewed-by: Dirk van der Merwe <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2019-04-12 | nfp: opportunistically poll for reconfig result | Dirk van der Merwe | 1 | -4/+21
If the reconfig was a quick update, we could have results available from firmware within 200us. Signed-off-by: Dirk van der Merwe <[email protected]> Signed-off-by: Jakub Kicinski <[email protected]> Signed-off-by: David S. Miller <[email protected]>
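A hedged sketch of the idea: busy-poll briefly before falling back to the timer-based wait. The helper names (fw_reconfig_done(), clock_ns()) are stand-ins for the driver's register read and clock, not the actual NFP functions:

    #include <stdbool.h>
    #include <stdint.h>

    bool fw_reconfig_done(void);   /* hypothetical: reads the reconfig status */
    uint64_t clock_ns(void);       /* hypothetical: monotonic clock in ns */

    /* Poll for roughly 200us in case firmware completes a quick update,
     * and only fall back to the slower timer path if it does not. */
    static bool reconfig_poll_fast_path(void)
    {
        uint64_t deadline = clock_ns() + 200 * 1000;

        while (clock_ns() < deadline)
            if (fw_reconfig_done())
                return true;    /* result already available */
        return false;           /* arm the reconfig timer instead */
    }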
2019-04-12 | ipv6: Remove flowi6_oif compare from __ip6_route_redirect | David Ahern | 1 | -2/+0
In the review of 0b34eb004347 ("ipv6: Refactor __ip6_route_redirect"), Martin noted that the flowi6_oif compare is moved to the new helper and should be removed from __ip6_route_redirect. Fix the oversight. Fixes: 0b34eb004347 ("ipv6: Refactor __ip6_route_redirect") Reported-by: Martin Lau <[email protected]> Signed-off-by: David Ahern <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2019-04-12 | Merge branch 'netdevsim-Mostly-cleanup-in-sdev-bpf-iface-area' | David S. Miller | 6 | -90/+159
Jiri Pirko says: ==================== netdevsim: Mostly cleanup in sdev/bpf iface area This patch set mainly shuffles internal netdevsim code around. Nothing serious, just small changes to help readability and preparations for future work. ==================== Signed-off-by: David S. Miller <[email protected]>
2019-04-12 | netdevsim: move sdev-specific init/uninit code into separate functions | Jiri Pirko | 1 | -25/+38
In order to improve readability and prepare for future code changes, move sdev specific init/uninit code into separate functions. Signed-off-by: Jiri Pirko <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2019-04-12 | netdevsim: make bpf_offload_dev_create() per-sdev instead of first ns | Jiri Pirko | 1 | -12/+14
The offload dev is stored in the sdev struct. However, the first netdevsim instance is used as the priv. Change this to be the sdev, as it is shared among multiple netdevsim instances. Signed-off-by: Jiri Pirko <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2019-04-12 | netdevsim: move sdev specific bpf debugfs files to sdev dir | Jiri Pirko | 3 | -13/+13
Some netdevsim bpf debugfs files are per-sdev, yet they are defined per netdevsim instance. Move them under sdev directory. Signed-off-by: Jiri Pirko <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2019-04-12 | netdevsim: move shared dev creation and destruction into separate file | Jiri Pirko | 4 | -50/+104
To make code easier to read, move shared dev bits into a separate file. Signed-off-by: Jiri Pirko <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2019-04-12 | Documentation: net: dsa: transition to the rst format | Ioana Ciornei | 5 | -154/+169
This patch also performs some minor adjustments such as numbering for the receive path sequence, conversion of keywords to inline literals and adding an index page so it looks better in the output of 'make htmldocs'. Signed-off-by: Ioana Ciornei <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2019-04-12 | net: veth: use generic helper to report timestamping info | Julian Wiedmann | 1 | -13/+1
For reporting the common set of SW timestamping capabilities, use ethtool_op_get_ts_info() instead of re-implementing it. Signed-off-by: Julian Wiedmann <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2019-04-12 | net: loopback: use generic helper to report timestamping info | Julian Wiedmann | 1 | -13/+1
For reporting the common set of SW timestamping capabilities, use ethtool_op_get_ts_info() instead of re-implementing it. Signed-off-by: Julian Wiedmann <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2019-04-12 | net: dummy: use generic helper to report timestamping info | Julian Wiedmann | 1 | -13/+2
For reporting the common set of SW timestamping capabilities, use ethtool_op_get_ts_info() instead of re-implementing it. Signed-off-by: Julian Wiedmann <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2019-04-12 | Merge branch 'smc-next' | David S. Miller | 8 | -274/+294
Ursula Braun says: ==================== net/smc: patches 2019-04-12 here are patches for SMC: * patch 1 improves behavior of non-blocking connect * patches 2, 3, 5, 7, and 8 improve connecting return codes * patches 4 and 6 are cleanups without functional change ==================== Signed-off-by: David S. Miller <[email protected]>
2019-04-12 | net/smc: improve smc_conn_create reason codes | Karsten Graul | 4 | -59/+58
Rework smc_conn_create() to always return a valid DECLINE reason code. This removes the need to translate the return codes in 4 different places and makes it easy to add more detailed return codes by changing smc_conn_create() only. Signed-off-by: Karsten Graul <[email protected]> Signed-off-by: Ursula Braun <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2019-04-12 | net/smc: improve smc_listen_work reason codes | Karsten Graul | 2 | -46/+54
Rework smc_listen_work() to provide improved reason codes when an SMC connection is declined. This allows better debugging on the user side. This also adds 3 more detailed reason codes in smc_clc.h to indicate what type of device was not found (ism or rdma or both), or whether ism cannot talk to the peer. Signed-off-by: Karsten Graul <[email protected]> Signed-off-by: Ursula Braun <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2019-04-12 | net/smc: code cleanup smc_listen_work | Karsten Graul | 1 | -15/+14
In smc_listen_work() the variables rc and reason_code are defined, and they have the same meaning. Eliminate reason_code in favor of the shorter name rc. No functional changes. Rename the functions smc_check_ism() and smc_check_rdma() into smc_find_ism_device() and smc_find_rdma_device() to make their purpose clearer. No functional changes. Signed-off-by: Karsten Graul <[email protected]> Signed-off-by: Ursula Braun <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2019-04-12 | net/smc: cleanup of get vlan id | Karsten Graul | 3 | -6/+10
The vlan_id of the underlying CLC socket was retrieved two times during processing of the listen handshaking. Change this to get the vlan id one time in connect and in listen processing, and reuse the id. And add a new CLC DECLINE return code for the case when the retrieval of the vlan id failed. Signed-off-by: Karsten Graul <[email protected]> Signed-off-by: Ursula Braun <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2019-04-12 | net/smc: consolidate function parameters | Karsten Graul | 7 | -141/+139
During initialization of an SMC socket a lot of function parameters need to get passed down the function call path. Consolidate the parameters in a helper struct so there are few enough parameters that they can all be passed in registers. Signed-off-by: Karsten Graul <[email protected]> Signed-off-by: Ursula Braun <[email protected]> Signed-off-by: David S. Miller <[email protected]>
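A generic sketch of the pattern, with illustrative names rather than the actual SMC structures: values that used to be individual arguments are collected in one struct, and only a pointer to it is passed down the call chain:

    /* Hypothetical helper struct standing in for the SMC init info. */
    struct init_info {
        unsigned short vlan_id;
        int first_contact;
        void *ib_dev;
        unsigned char ib_port;
        void *ism_dev;
    };

    /* Each handshake step now takes one pointer instead of many scalars,
     * so the call fits comfortably in argument registers. */
    static int connect_one_step(struct init_info *ini)
    {
        return (ini->ism_dev || ini->ib_dev) ? 0 : -1;
    }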
2019-04-12 | net/smc: check for ip prefix and subnet | Karsten Graul | 2 | -3/+10
The check for a matching ip prefix and subnet was only done for SMC-R in smc_listen_rdma_check() but not when an SMC-D connection was possible. Rename the function into smc_listen_prfx_check() and move its call to a place where it is called for both SMC variants. And add a new CLC DECLINE reason for the case when the IP prefix or subnet check fails, so the reason for a failing SMC connection can be determined more easily. Signed-off-by: Karsten Graul <[email protected]> Signed-off-by: Ursula Braun <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2019-04-12 | net/smc: fallback to TCP after connect problems | Karsten Graul | 1 | -4/+4
Correct the CLC decline reason codes for internal problems to not have the sign bit set; negative reason codes are interpreted as not eligible for TCP fallback. Signed-off-by: Karsten Graul <[email protected]> Signed-off-by: Ursula Braun <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2019-04-12 | net/smc: nonblocking connect rework | Ursula Braun | 2 | -42/+47
For nonblocking sockets move the kernel_connect() from the connect worker into the initial smc_connect part to return kernel_connect() errors other than -EINPROGRESS to user space. Reviewed-by: Karsten Graul <[email protected]> Signed-off-by: Ursula Braun <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2019-04-12 | xen-netback: add reference from xenvif to backend_info to facilitate coredump analysis | Dongli Zhang | 2 | -16/+19
During coredump analysis, it is not easy to obtain the address of backend_info in xen-netback. So far there are two ways to obtain backend_info: 1. Do what xenbus_device_find() does for vmcore to find the xenbus_device and then derive it from dev_get_drvdata(). 2. Extract backend_info from the callstack of xenwatch (e.g., netback_remove() or frontend_changed()). This patch adds a reference from xenvif to backend_info so that it is much easier to obtain backend_info during coredump analysis. Signed-off-by: Dongli Zhang <[email protected]> Acked-by: Wei Liu <[email protected]> Signed-off-by: David S. Miller <[email protected]>
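A minimal sketch of the kind of back-reference described; the field name and placement are assumptions, not necessarily the exact patch:

    struct backend_info;

    struct xenvif {
        /* ... existing fields ... */
        struct backend_info *be;   /* kept only so the backend_info is
                                    * reachable from the xenvif in a vmcore */
    };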
2019-04-11 | Merge branch 'sctp-skb-list' | David S. Miller | 3 | -41/+71
David Miller says: ==================== SCTP: Event skb list overhaul. This patch series eliminates the explicit reference to the skb list implementation via skb->prev dereferences. The approach used is to pass a non-empty skb list around instead of an event skb object which may or may not be on a list. I'd like to thank Marcelo Leitner, Xin Long, and Neil Horman for reviewing previous versions of this series. Testing would be very much appreciated, in addition to the review of course. v4 --> v5: Rebase to net-next v3 --> v4: Fix the logic in patch #4 so that we don't miss cases where we should add event to the on-stack temp list. ==================== Signed-off-by: David S. Miller <[email protected]>
2019-04-11 | sctp: Pass sk_buff_head explicitly to sctp_ulpq_tail_event(). | David Miller | 3 | -20/+13
Now the SKB list implementation assumption can be removed. And now that we know that the list head is always non-NULL we can remove the code blocks dealing with that as well. Signed-off-by: David S. Miller <[email protected]> Acked-by: Marcelo Ricardo Leitner <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2019-04-11 | sctp: Make sctp_enqueue_event take an skb list. | David Miller | 2 | -15/+39
Pass this, instead of an event. Then everything trickles down and we always have events on a non-empty list. We then need a list-creating stub to place into .enqueue_event for sctp_stream_interleave_1. Signed-off-by: David S. Miller <[email protected]> Acked-by: Marcelo Ricardo Leitner <[email protected]> Signed-off-by: David S. Miller <[email protected]>
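A hedged sketch of the list-creating stub idea, using the generic sk_buff_head helpers; the function name is illustrative rather than the exact stub added for sctp_stream_interleave_1:

    /* Wrap a single event's skb in an on-stack list so the list-taking
     * enqueue path always sees a non-empty sk_buff_head. */
    static int enqueue_single_event(struct sctp_ulpq *ulpq,
                                    struct sctp_ulpevent *event)
    {
        struct sk_buff_head temp;

        skb_queue_head_init(&temp);
        __skb_queue_tail(&temp, sctp_event2skb(event));
        return sctp_ulpq_tail_event(ulpq, &temp);
    }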
2019-04-11 | sctp: Use helper for sctp_ulpq_tail_event() when hooked up to ->enqueue_event | David Miller | 1 | -1/+10
This way we can make sure events sent this way to sctp_ulpq_tail_event() are on a list as well. Now all such code paths are fully covered. Signed-off-by: David S. Miller <[email protected]> Acked-by: Marcelo Ricardo Leitner <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2019-04-11 | sctp: Always pass skbs on a list to sctp_ulpq_tail_event(). | David Miller | 1 | -6/+10
This way we can simplify the logic and remove assumptions about the implementation of skb lists. Signed-off-by: David S. Miller <[email protected]> Acked-by: Marcelo Ricardo Leitner <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2019-04-11 | sctp: Remove superfluous test in sctp_ulpq_reasm_drain(). | David Miller | 1 | -1/+1
Inside the loop, we always start with event non-NULL. Signed-off-by: David S. Miller <[email protected]> Acked-by: Marcelo Ricardo Leitner <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2019-04-11 | net: sched: flower: fix filter net reference counting | Vlad Buslov | 1 | -11/+6
Fix net reference counting in fl_change() and remove redundant call to tcf_exts_get_net() from __fl_delete(). __fl_put() already tries to get net before releasing exts and deallocating a filter, so this code caused the flower classifier to obtain net twice per filter that is being deleted. Implementation of __fl_delete() called tcf_exts_get_net() to pass its result as 'async' flag to fl_mask_put(). However, the 'async' flag is redundant and only complicates the fl_mask_put() implementation. This functionality seems to have been copied from the filter cleanup code, where it was added by Cong with the following explanation: This patchset tries to fix the race between call_rcu() and cleanup_net() again. Without holding the netns refcnt the tc_action_net_exit() in netns workqueue could be called before filter destroy works in tc filter workqueue. This patchset moves the netns refcnt from tc actions to tcf_exts, without breaking per-netns tc actions. This doesn't apply to the flower mask, which doesn't call any tc action code during cleanup. Simplify fl_mask_put() by removing the flag parameter and always using tcf_queue_work() to free mask objects. Fixes: 061775583e35 ("net: sched: flower: introduce reference counting for filters") Fixes: 1f17f7742eeb ("net: sched: flower: insert filter to ht before offloading it to hw") Fixes: 05cd271fd61a ("cls_flower: Support multiple masks per priority") Reported-by: Ido Schimmel <[email protected]> Signed-off-by: Vlad Buslov <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2019-04-11 | selftests: Add debugging options to pmtu.sh | David Ahern | 1 | -89/+124
The pmtu.sh script runs a number of tests and dumps a summary of pass/fail. If a test fails, it is nearly impossible to debug why. For example: TEST: ipv6: PMTU exceptions [FAIL] There are a lot of commands run behind the scenes for this test. Which one is failing? Add a VERBOSE option to show the commands that are run and any output from those commands. Add a PAUSE_ON_FAIL option to halt the script if a test fails, allowing users to poke around with the setup in the failed state. In the process, rename tracing to TRACING and move its declaration to the top with the new variables. Signed-off-by: David Ahern <[email protected]> Reviewed-by: Stefano Brivio <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2019-04-11 | Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next | David S. Miller | 75 | -425/+4603
Daniel Borkmann says: ==================== pull-request: bpf-next 2019-04-12 The following pull-request contains BPF updates for your *net-next* tree. The main changes are: 1) Improve BPF verifier scalability for large programs through two optimizations: i) remove verifier states that are not useful in pruning, ii) stop walking parentage chain once first LIVE_READ is seen. Combined, this gives an approx 20x speedup. Increase limits for accepting large programs under root, and add various stress tests, from Alexei. 2) Implement global data support in BPF. This enables static global variables for .data, .rodata and .bss sections to be properly handled, which allows for more natural program development. This also opens up the possibility to optimize program workflow by compiling ELFs only once and later only rewriting section data before reload, from Daniel and with test cases and libbpf refactoring from Joe. 3) Add a config option to generate BTF type info for vmlinux as part of the kernel build process. DWARF debug info is converted via pahole to BTF. The latter relies on libbpf and makes use of the BTF deduplication algorithm, which results in 100x savings compared to DWARF data. The resulting .BTF section is typically about 2MB in size, from Andrii. 4) Add BPF verifier support for stack access with variable offset from helpers and add various test cases along with it, from Andrey. 5) Extend bpf_skb_adjust_room() growth BPF helper to mark inner MAC header so that L2 encapsulation can be used for tc tunnels, from Alan. 6) Add support for input __sk_buff context in BPF_PROG_TEST_RUN so that users can define a subset of allowed __sk_buff fields that get fed into the test program, from Stanislav. 7) Add bpf fs multi-dimensional array tests for the BTF test suite and fix up various UBSAN warnings in bpftool, from Yonghong. 8) Generate a pkg-config file for libbpf, from Luca. 9) Dump a program's BTF id in bpftool, from Prashant. 10) libbpf fix to use a smaller BPF log buffer size for AF_XDP's XDP program, from Magnus. 11) kallsyms-related fixes for the case when symbols are not present in BPF selftests and samples, from Daniel. ==================== Signed-off-by: David S. Miller <[email protected]>
2019-04-12 | bpf: explicitly prohibit ctx_{in, out} in non-skb BPF_PROG_TEST_RUN | Stanislav Fomichev | 1 | -0/+6
This should allow us to later extend BPF_PROG_TEST_RUN for the non-skb case and be sure that nobody is erroneously setting ctx_{in,out}. Fixes: b0b9395d865e ("bpf: support input __sk_buff context in BPF_PROG_TEST_RUN") Reported-by: Daniel Borkmann <[email protected]> Signed-off-by: Stanislav Fomichev <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]>
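A hedged sketch of the check being described, assuming the ctx_in/ctx_out attributes introduced by b0b9395d865e; the helper name and exact placement are illustrative:

    /* Reject ctx_in/ctx_out for test-run types that do not support an
     * input/output context, instead of silently ignoring them. */
    static int check_ctx_unsupported(const union bpf_attr *kattr)
    {
        if (kattr->test.ctx_in || kattr->test.ctx_out)
            return -EINVAL;
        return 0;
    }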
2019-04-11 | tools: add smp_* barrier variants to include infrastructure | Daniel Borkmann | 2 | -2/+15
Add the definition for smp_rmb(), smp_wmb(), and smp_mb() to the tools include infrastructure: this patch adds the implementation for x86-64 and arm64, and has it fall back, as it currently does, for other archs which do not have it implemented at this point. The x86-64 one uses a lock + add combination for smp_mb() with an address below the red zone. This is on top of 09d62154f613 ("tools, perf: add and use optimized ring_buffer_{read_head, write_tail} helpers"), which didn't touch smp_* barrier implementations. Magnus recently rightfully reported, however, that the latter on x86-64 still wrongly falls back to sfence, lfence and mfence respectively, thus fix that for applications under tools making use of these to avoid such ugly surprises. The main header under tools (include/asm/barrier.h) will in that case not select the fallback implementation. Reported-by: Magnus Karlsson <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]> Cc: Arnaldo Carvalho de Melo <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]>
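A sketch of the x86-64 variants described above (the locked add below the red zone); the exact offset and macro placement under tools/ are assumptions:

    #define barrier()  asm volatile("" ::: "memory")

    /* A locked read-modify-write to a dead stack slot below the 128-byte
     * red zone acts as a full barrier on x86-64, while the architecture's
     * memory model lets smp_rmb()/smp_wmb() be compiler-only barriers. */
    #define smp_mb()   asm volatile("lock; addl $0,-132(%%rsp)" ::: "memory", "cc")
    #define smp_rmb()  barrier()
    #define smp_wmb()  barrier()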
2019-04-11 | Merge branch 'ipv6-Refactor-nexthop-selection-helpers-during-a-fib-lookup' | David S. Miller | 2 | -125/+149
David Ahern says: ==================== ipv6: Refactor nexthop selection helpers during a fib lookup IPv6 has a fib6_nh embedded within each fib6_info and a separate fib6_info for each path in a multipath route. A side effect is that a fib6_info is passed all the way down the stack when selecting a path on a fib lookup. Refactor the fib lookup functions and associated helper functions to take a fib6_nh when appropriate to enable IPv6 to work with nexthop objects where the fib6_nh is not directly part of a fib entry. ==================== Signed-off-by: David S. Miller <[email protected]>
2019-04-11 | ipv6: Refactor __ip6_route_redirect | David Ahern | 1 | -23/+33
Move the nexthop evaluation of a fib entry to a helper that can be leveraged for each fib6_nh in a multipath nexthop object. In the move, 'continue' statements mean the helper returns false (loop should continue) and 'break' means return true (found the entry of interest). Signed-off-by: David Ahern <[email protected]> Signed-off-by: David S. Miller <[email protected]>
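A generic sketch of the refactoring pattern with simplified types (not the actual fib6 structures): the per-nexthop helper returns false where the old loop said 'continue' and true where it said 'break':

    #include <stdbool.h>

    struct nh {                 /* stand-in for fib6_nh */
        int ifindex;
        bool has_gateway;
        bool dead;
    };

    static bool redirect_nh_match(const struct nh *nh, int oif, bool need_gw)
    {
        if (nh->dead)
            return false;       /* was: continue */
        if (oif && nh->ifindex != oif)
            return false;       /* was: continue */
        if (need_gw && !nh->has_gateway)
            return false;       /* was: continue */
        return true;            /* was: break - entry of interest found */
    }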
2019-04-11 | ipv6: Refactor rt6_device_match | David Ahern | 1 | -13/+25
Move the device and gateway checks in the fib6_next loop to a helper that can be called per fib6_nh entry. Signed-off-by: David Ahern <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2019-04-11 | ipv6: Move fib6_multipath_select down in ip6_pol_route | David Ahern | 1 | -3/+3
Move the siblings and fib6_multipath_select after the null entry check since a null entry can not have siblings. Signed-off-by: David Ahern <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2019-04-11 | ipv6: Be smarter with null_entry handling in ip6_pol_route_lookup | David Ahern | 1 | -12/+13
Clean up the fib6_null_entry handling in ip6_pol_route_lookup. rt6_device_match can return fib6_null_entry, but fib6_multipath_select can not. Consolidate the fib6_null_entry handling and on the final null_entry check set rt and goto out - no need to defer to a second check after rt6_find_cached_rt. Signed-off-by: David Ahern <[email protected]> Signed-off-by: David S. Miller <[email protected]>
2019-04-11 | ipv6: Refactor find_rr_leaf | David Ahern | 1 | -36/+30
find_rr_leaf has 3 loops over fib_entries calling find_match. The loops are very similar with differences in start point and whether the metric is evaluated: 1. start at rr_head, no extra loop compare, check fib metric 2. start at leaf, compare rt against rr_head, check metric 3. start at cont (potential saved point from earlier loops), no extra loop compare, no metric check Create 1 loop that is called 3 different times. This will make a later change with multipath nexthop objects much simpler. Signed-off-by: David Ahern <[email protected]> Signed-off-by: David S. Miller <[email protected]>
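A simplified sketch of folding the three similar loops into one parameterized scan, with illustrative types rather than the actual fib6 code:

    #include <stdbool.h>
    #include <stddef.h>

    struct ent {                /* stand-in for a fib6 entry */
        struct ent *next;
        int metric;
    };

    bool entry_matches(const struct ent *e);   /* hypothetical find_match() */

    /* One scan routine, called three times with different start points and
     * with the metric check enabled only where the old loops checked it. */
    static struct ent *scan_entries(struct ent *start, const struct ent *stop,
                                    bool check_metric, int metric)
    {
        struct ent *e;

        for (e = start; e && e != stop; e = e->next) {
            if (check_metric && e->metric != metric)
                continue;
            if (entry_matches(e))
                return e;
        }
        return NULL;
    }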