|
Merge tag 'spi-v5.0' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi
Pull spi updates from Mark Brown:
"One new core feature here, a small collection of new drivers and a
bunch of small improvements in existing drivers:
- A new CS_WORD flag for transfers where the chip select is toggled
at every word, with both a generic implementation and the ability
for controllers to do this automatically (including a DaVinci one);
see the client-side sketch below.
- New drivers for Mediatek MT2712, Qualcomm GENI and QSPI, Spreadtrum
SPI and ST STM32 QSPI plus new IDs for several existing ones"
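As a hedged illustration (not part of this pull request), a client
driver might request the new per-word chip-select behaviour roughly
like this; the probe function and driver names are hypothetical:
  #include <linux/spi/spi.h>

  /* Hypothetical client probe: ask for chip select toggling at every
   * word. If the controller cannot do this in hardware, the core
   * splits the transfer into one-word segments to emulate it. */
  static int foo_spi_probe(struct spi_device *spi)
  {
          spi->mode |= SPI_CS_WORD;
          spi->bits_per_word = 16;
          return spi_setup(spi);  /* revalidate the new mode */
  }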
* tag 'spi-v5.0' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi: (86 commits)
spi: lpspi: add imx8qxp compatible string
spi: Allow building SPI_BCM63XX_HSSPI on ARM-based SoCs
spi: omap2-mcspi: Add slave mode support
spi: omap2-mcspi: Set FIFO DMA trigger level to word length
spi: omap2-mcspi: Switch to readl_poll_timeout()
spi: spi-mem: add stm32 qspi controller
dt-bindings: spi: add stm32 qspi controller
spi: sh-msiof: document R8A779{7|8}0 bindings
spi: pic32-sqi: don't pass GFP_DMA32 to dma_alloc_coherent
MAINTAINERS: Add entry for Broadcom SPI controller
spi: sh-msiof: fix deferred probing
spi: imx: use PIO mode if size is small
spi: imx: correct wml as the last sg length
spi: imx: move wml setting to later than setup_transfer
PCI: Provide pci_match_id() with CONFIG_PCI=n
spi: Make GPIO CSs honour the SPI_NO_CS flag
spi/spi-pxa2xx: add PXA2xx SSP SPI Controller
spi: pxa2xx: Add devicetree support
spi: pxa2xx: Use an enum for type
spi: spi-geni-qcom: Add SPI driver support for GENI based QUP
...
|
|
Daniel Borkmann says:
====================
pull-request: bpf-next 2018-10-21
The following pull-request contains BPF updates for your *net-next* tree.
The main changes are:
1) Implement two new kinds of BPF maps, that is, queue and stack
map along with new peek, push and pop operations, from Mauricio
(see the usage sketch after this message).
2) Add support for MSG_PEEK flag when redirecting into an ingress
psock sk_msg queue, and add a new helper bpf_msg_push_data() for
inserting data into the message, from John.
3) Allow BPF programs of type BPF_PROG_TYPE_CGROUP_SKB to use
direct packet access for __sk_buff, from Song.
4) Use more lightweight barriers for walking the perf ring buffer,
in libbpf and the perf tool as well. Also, various fixes and
improvements on the verifier side, from Daniel.
5) Add per-symbol visibility for DSO in libbpf and hide by default
global symbols such as netlink related functions, from Andrey.
6) Two improvements to nfp's BPF offload to check vNIC capabilities
in case prog is shared with multiple vNICs and to protect against
mis-initializing atomic counters, from Jakub.
7) Fix for bpftool to use 4 context mode for the nfp disassembler,
also from Jakub.
8) Fix a return value comparison in test_libbpf.sh and add several
bpftool improvements in bash completion, documentation of bpf fs
restrictions and batch mode summary print, from Quentin.
9) Fix a file resource leak in BPF selftest's load_kallsyms()
helper, from Peng.
10) Fix an unused variable warning in map_lookup_and_delete_elem(),
from Alexei.
11) Fix bpf_skb_adjust_room() signature in BPF UAPI helper doc,
from Nicolas.
12) Add missing executables to .gitignore in BPF selftests, from Anders.
====================
Signed-off-by: David S. Miller <[email protected]>
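As referenced in item 1, a minimal userspace sketch of the new queue
map type, using libbpf's syscall wrappers ('fd' is assumed to be an
already-created BPF_MAP_TYPE_QUEUE map; error handling omitted):
  #include <bpf/bpf.h>

  /* Queue/stack maps take no key, so NULL is passed in its place. */
  static void queue_demo(int fd)
  {
          __u32 val = 42, out;

          bpf_map_update_elem(fd, NULL, &val, 0);            /* push */
          bpf_map_lookup_elem(fd, NULL, &out);               /* peek */
          bpf_map_lookup_and_delete_elem(fd, NULL, &out);    /* pop */
  }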
|
|
David Ahern's dump indexing bug fix in 'net' overlapped the
change of the function signature of inet6_fill_ifaddr() in
'net-next'. Trivially resolved.
Signed-off-by: David S. Miller <[email protected]>
|
|
When trying to complete "bpftool map update" commands, the call to
printf would print an error message on the command line if no map was
found to complete it.
Fix it by making sure we have map ids to complete the line with before
we try to print anything.
Signed-off-by: Quentin Monnet <[email protected]>
Reviewed-by: Jakub Kicinski <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
|
|
When batch mode is used and all commands succeed, bpftool prints the
number of commands processed to stderr. There is no particular reason
to use stderr for this; we could as well use stdout. It would avoid
getting unnecessary output on stderr if the standard output is
redirected, for example.
Reported-by: David Beckett <[email protected]>
Signed-off-by: Quentin Monnet <[email protected]>
Reviewed-by: Jakub Kicinski <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
|
|
Names used to pin eBPF programs and maps under the eBPF virtual file
system cannot contain a dot character, which is reserved for future
extensions of this file system.
Document this in bpftool man pages to avoid users getting confused if
pinning fails because of a dot.
Signed-off-by: Quentin Monnet <[email protected]>
Reviewed-by: Jakub Kicinski <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
|
|
All users have now been converted to the XArray. Removing the support
reduces code size and ensures new users will use the XArray instead.
Signed-off-by: Matthew Wilcox <[email protected]>
|
|
This is the last remaining user of the multiorder functionality of the
radix tree. Test the XArray instead.
Signed-off-by: Matthew Wilcox <[email protected]>
|
|
In preparation for the removal of the multiorder radix tree code,
convert item_delete_rcu() to use the XArray so it can still be called
for XArrays containing multi-index entries.
Signed-off-by: Matthew Wilcox <[email protected]>
|
|
In preparation for the removal of the multiorder radix tree code,
convert item_kill_tree() to use the XArray so it can still be called
for XArrays containing multi-index entries.
Signed-off-by: Matthew Wilcox <[email protected]>
|
|
The remaining tests are not suitable for moving in-kernel, so move
item_insert_order() into multiorder.c, make it static and make it use
the XArray.
Signed-off-by: Matthew Wilcox <[email protected]>
|
|
The multiorder radix tree code is being removed, so remove the
benchmarking of its performance.
Signed-off-by: Matthew Wilcox <[email protected]>
|
|
Inline it into its one caller.
Signed-off-by: Matthew Wilcox <[email protected]>
|
|
This version is a little less thorough in order to be a little quicker,
but tests the important edge cases. Also test adding a multiorder entry
at a non-canonical index, and erasing it.
Signed-off-by: Matthew Wilcox <[email protected]>
|
|
Test this functionality inside the kernel as well as in userspace.
Also remove insert_bug() as there's no comparable thing to test
in the XArray code.
Signed-off-by: Matthew Wilcox <[email protected]>
|
|
Move this test to the in-kernel test suite, and enhance it to test
several different orders.
Signed-off-by: Matthew Wilcox <[email protected]>
|
|
With no code left in the kernel using the multiorder radix tree, convert
the iteration test from the radix tree to the XArray. It's unlikely to
suffer the same bug as the radix tree, but this test will prevent that
bug from ever creeping into the XArray implementation.
Signed-off-by: Matthew Wilcox <[email protected]>
|
|
The tag_tagged_items() function is supposed to test the page-writeback
tagging code. Since that has been converted to the XArray, there's
not much point in testing the radix tree's tagging code. This requires
using the pthread mutex embedded in the xarray instead of an external
lock, so remove the pthread mutexes which protect xarrays/radix trees.
Also remove radix_tree_iter_tag_set() as this was the last user.
Signed-off-by: Matthew Wilcox <[email protected]>
|
|
The page cache was the only user of this interface and it has now
been converted to the XArray. Transform the test into a test of
xas_init_marks().
Signed-off-by: Matthew Wilcox <[email protected]>
|
|
radix_tree_split and radix_tree_join were never used upstream. Remove
them; if they're needed in future they will be replaced by XArray
equivalents.
Signed-off-by: Matthew Wilcox <[email protected]>
|
|
The only user of this functionality was the workingset code, and it's
now been converted to the XArray. Remove __radix_tree_delete_node()
entirely as it was also only used by the workingset code.
Signed-off-by: Matthew Wilcox <[email protected]>
|
|
This is a 1:1 conversion. The major part of this patch is converting
the test framework from userspace to kernel space and mirroring the
algorithm now used in find_swap_entry().
Signed-off-by: Matthew Wilcox <[email protected]>
|
|
Now that the page cache lookup is using the XArray, let's convert this
regression test from the radix tree API to the XArray so it's testing
roughly the same thing it was testing before.
Signed-off-by: Matthew Wilcox <[email protected]>
|
|
There's no direct replacement for radix_tree_for_each_contig()
in the XArray API as it's an unusual thing to do. Instead,
open-code a loop using xas_next(). This removes the only user of
radix_tree_for_each_contig() so delete the iterator from the API and
the test suite code for it.
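For illustration, the open-coded replacement looks roughly like this
(a sketch: 'array', 'start' and process() are placeholders, and real
code would also handle xas_retry() entries):
  /* Iterate contiguous entries starting at 'start'; xas_next()
   * advances one index at a time, so the loop ends at the first gap. */
  XA_STATE(xas, &array, start);
  void *entry;

  rcu_read_lock();
  for (entry = xas_load(&xas); entry; entry = xas_next(&xas))
          process(entry);
  rcu_read_unlock();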
Signed-off-by: Matthew Wilcox <[email protected]>
|
|
Use the XA_TRACK_FREE ability to track which entries have a free bit,
similarly to how it uses the radix tree's IDR_FREE tag. This eliminates
the per-cpu ida_bitmap preload, and fixes the memory consumption
regression I introduced when making the IDR able to store any pointer.
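From the IDA user's point of view nothing changes; a sketch with the
existing IDA API (names hypothetical):
  #include <linux/idr.h>

  static DEFINE_IDA(foo_ida);

  static int foo_get_id(void)
  {
          /* lowest free ID; now found via the XArray's free-entry
           * tracking rather than a per-cpu preloaded ida_bitmap */
          return ida_alloc(&foo_ida, GFP_KERNEL);
  }

  static void foo_put_id(int id)
  {
          ida_free(&foo_ida, id);
  }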
Signed-off-by: Matthew Wilcox <[email protected]>
|
|
xa_store() differs from radix_tree_insert() in that it will overwrite an
existing element in the array rather than returning an error. This is
the behaviour which most users want, and those that want more complex
behaviour generally want to use the xas family of routines anyway.
For memory allocation, xa_store() will first attempt to request memory
from the slab allocator; if memory is not immediately available, it will
drop the xa_lock and allocate memory, keeping a pointer in the xa_state.
It does not use the per-CPU cache, although those will continue to exist
until all radix tree users are converted to the xarray.
This patch also includes xa_erase() and __xa_erase() for a streamlined
way to store NULL. Since there is no need to allocate memory in order
to store a NULL in the XArray, we do not need to trouble the user with
deciding what memory allocation flags to use.
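A small sketch of the overwrite semantics (the array and helpers are
illustrative, not from this patch):
  #include <linux/xarray.h>

  static DEFINE_XARRAY(things);

  static int thing_add(unsigned long index, void *thing)
  {
          /* Unlike radix_tree_insert(), this silently replaces any
           * existing entry and returns the old one (or an xa_err). */
          void *old = xa_store(&things, index, thing, GFP_KERNEL);

          return xa_is_err(old) ? xa_err(old) : 0;
  }

  static void thing_del(unsigned long index)
  {
          xa_erase(&things, index);  /* store NULL; no GFP flags needed */
  }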
Signed-off-by: Matthew Wilcox <[email protected]>
|
|
XArray marks are like the radix tree tags, only slightly more strongly
typed. They are renamed in order to distinguish them from tagged
pointers. This commit adds the basic get/set/clear operations.
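Basic usage, sketched (assuming a struct xarray xa; XA_MARK_0 is one
of the three marks available per array):
  /* Set, test and clear a mark on the entry at 'index'. */
  xa_set_mark(&xa, index, XA_MARK_0);
  if (xa_get_mark(&xa, index, XA_MARK_0))
          pr_debug("entry %lu is marked\n", index);
  xa_clear_mark(&xa, index, XA_MARK_0);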
Signed-off-by: Matthew Wilcox <[email protected]>
|
|
The xa_load function brings with it a lot of infrastructure: xa_empty(),
xa_is_err(), and large chunks of the XArray advanced API that are used
to implement xa_load().
As the test-suite demonstrates, it is possible to use the XArray functions
on a radix tree. The radix tree functions depend on the GFP flags being
stored in the root of the tree, so it's not possible to use the radix
tree functions on an XArray.
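The basic lookup, sketched (array, index and error handling are
illustrative):
  /* xa_load() takes the RCU read lock internally and returns the
   * entry at 'index', or NULL if nothing is stored there. */
  struct foo *foo = xa_load(&foo_array, index);
  if (!foo)
          return -ENOENT;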
Signed-off-by: Matthew Wilcox <[email protected]>
|
|
This is a direct replacement for struct radix_tree_node. A couple of
struct members have changed name, so convert those. Use a #define so
that radix tree users continue to work without change.
Signed-off-by: Matthew Wilcox <[email protected]>
Reviewed-by: Josef Bacik <[email protected]>
|
|
This is a direct replacement for struct radix_tree_root. Some of the
struct members have changed name; convert those, and use a #define so
that radix_tree users continue to work without change.
Signed-off-by: Matthew Wilcox <[email protected]>
Reviewed-by: Josef Bacik <[email protected]>
|
|
The return value for each test in test_libbpf.sh is compared with
if (( $? == 0 )) ; then ...
This works well with bash, but not with dash, which /bin/sh is aliased
to on some systems (such as Ubuntu).
Let's replace this comparison with something that works in both shells,
for example a POSIX test such as [ $? -eq 0 ].
Signed-off-by: Quentin Monnet <[email protected]>
Reviewed-by: Jakub Kicinski <[email protected]>
Signed-off-by: Alexei Starovoitov <[email protected]>
|
|
Simplify bpf_perf_event_read_simple() a bit and fix up some minor
things along the way: the return code in the header is not of type
int but enum bpf_perf_event_ret instead. Once the callback indicates
to break the loop walking event data, that data also needs to be
consumed in data_tail since it has been processed already.
Moreover, the bpf_perf_event_print_t callback should avoid void * as
we actually get a pointer to struct perf_event_header, and thus
applications can make use of container_of() to get type checks.
The walk also doesn't have to use the modulo op since the ring size
is required to be a power of two.
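With the stronger typing, a callback can be sketched like this (the
struct layout and the processing are illustrative; container_of()
is assumed from the tools' linux/kernel.h):
  #include <stdio.h>
  #include <linux/perf_event.h>
  #include <bpf/libbpf.h>

  struct sample {
          struct perf_event_header header;
          __u32 size;
          char data[];
  };

  static enum bpf_perf_event_ret
  print_fn(struct perf_event_header *hdr, void *private_data)
  {
          /* container_of() gives a type-checked view of the record */
          struct sample *s = container_of(hdr, struct sample, header);

          if (hdr->type != PERF_RECORD_SAMPLE)
                  return LIBBPF_PERF_EVENT_CONT;
          fwrite(s->data, 1, s->size, stdout);
          return LIBBPF_PERF_EVENT_CONT;
  }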
Signed-off-by: Daniel Borkmann <[email protected]>
Signed-off-by: Alexei Starovoitov <[email protected]>
|
|
Using reg_type_str[insn->dst_reg] is incorrect since insn->dst_reg
contains the register number but not the actual register type. Add
a small reg_state() helper and use it to get to the type. Also fix
up the test_verifier test cases that have an incorrect errstr.
Fixes: 9d2be44a7f33 ("bpf: Reuse canonical string formatter for ctx errs")
Signed-off-by: Daniel Borkmann <[email protected]>
Signed-off-by: Alexei Starovoitov <[email protected]>
|
|
Add options to run msg_push_data. This patch creates two more flags
in test_sockmap that can be used to specify the offset and length
of bytes to be added: --txmsg_start_push specifies where bytes should
be inserted and --txmsg_end_push specifies how many bytes. This is
analogous to the options used to pull data, --txmsg_start and
--txmsg_end.
In addition to adding the options, tests are added to the test suite,
similar to what was done for msg_pull_data.
Signed-off-by: John Fastabend <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
|
|
Add support for the new bpf_msg_push_data() helper in libbpf.
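A sketch of what BPF-side usage looks like (the 4-byte header, the
offsets, and the reliance on the selftests' bpf_helpers.h for SEC()
are illustrative):
  #include <linux/bpf.h>
  #include "bpf_helpers.h"

  SEC("sk_msg")
  int prepend_hdr(struct sk_msg_md *msg)
  {
          /* grow the message by 4 bytes at offset 0 */
          if (bpf_msg_push_data(msg, 0, 4, 0))
                  return SK_DROP;
          return SK_PASS;
  }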
Signed-off-by: John Fastabend <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
|
|
Merge branch 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Ingo writes:
"perf fixes:
Misc perf tooling fixes."
* 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
perf tools: Stop fallbacking to kallsyms for vdso symbols lookup
perf tools: Pass build flags to traceevent build
perf report: Don't crash on invalid inline debug information
perf cpu_map: Align cpu map synthesized events properly.
perf tools: Fix tracing_path_mount proper path
perf tools: Fix use of alternatives to find JDIR
perf evsel: Store ids for events with their own cpus perf_event__synthesize_event_update_cpus
perf vendor events intel: Fix wrong filter_band* values for uncore events
Revert "perf tools: Fix PMU term format max value calculation"
tools headers uapi: Sync kvm.h copy
tools arch uapi: Sync the x86 kvm.h copy
|
|
Merge tag 'trace-v4.19-rc8-2' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace
Steven writes:
"tracing: A few small fixes to synthetic events
Masami found some issues with the creation of synthetic events. The
first two patches fix handling of unsigned type, and handling of a
space before an ending semi-colon.
The third patch adds a selftest to test the processing of synthetic
events."
* tag 'trace-v4.19-rc8-2' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace:
selftests: ftrace: Add synthetic event syntax testcase
tracing: Fix synthetic event to allow semicolon at end
tracing: Fix synthetic event to accept unsigned modifier
|
|
This tests that a bctr (Branch to counter and link), i.e. a function
call, to a wildly out-of-bounds address is handled correctly.
Some old kernel versions didn't handle it correctly, see e.g.:
"powerpc/slb: Force a full SLB flush when we insert for a bad EA"
https://lists.ozlabs.org/pipermail/linuxppc-dev/2017-April/157397.html
Signed-off-by: Michael Ellerman <[email protected]>
|
|
Some of our Makefiles don't do the right thing when building the
selftests with O=, fix them up.
Signed-off-by: Michael Ellerman <[email protected]>
|
|
This adds a test to verify proper functioning of the rfi flush
capability implemented to mitigate meltdown. The test works by
measuring the number of L1d cache misses encountered while loading
data from memory. Across a system call, since the L1d cache is flushed
when rfi_flush is enabled, the number of cache misses is expected to
be on the order of the number of cachelines corresponding to the data
being loaded.
The current system setting is reflected via powerpc/rfi_flush under
debugfs (assumed to be /sys/kernel/debug/). This test verifies the
expected result with rfi_flush enabled as well as when it is disabled.
Signed-off-by: Anton Blanchard <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
Signed-off-by: Naveen N. Rao <[email protected]>
[mpe: Add SPDX tags, clang format, skip if the debugfs is missing, use
__u64 and SANE_USERSPACE_TYPES to avoid printf() build errors.]
Signed-off-by: Michael Ellerman <[email protected]>
|
|
... so that it can be used by others.
Signed-off-by: Naveen N. Rao <[email protected]>
Signed-off-by: Michael Ellerman <[email protected]>
|
|
Add a testcase to check the syntax and field types for the
synthetic_events interface.
Link: http://lkml.kernel.org/r/153986838264.18251.16627517536956299922.stgit@devbox
Acked-by: Shuah Khan <[email protected]>
Signed-off-by: Masami Hiramatsu <[email protected]>
Signed-off-by: Steven Rostedt (VMware) <[email protected]>
|
|
Tests are added to make sure CGROUP_SKB cannot access tc_classid,
data_meta and flow_keys; can read and write mark, priority and
cb[0-4]; and can read other fields.
To make selftest with skb->sk work, a dummy sk is added in
bpf_prog_test_run_skb().
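For illustration, direct packet access from a CGROUP_SKB program now
looks roughly like this (a sketch; the program name and the drop
policy are made up, and for cgroup_skb programs the data pointer
starts at the network header):
  #include <linux/bpf.h>
  #include <linux/ip.h>
  #include <linux/in.h>
  #include "bpf_helpers.h"

  SEC("cgroup/skb")
  int tcp_only(struct __sk_buff *skb)
  {
          void *data = (void *)(long)skb->data;
          void *data_end = (void *)(long)skb->data_end;
          struct iphdr *iph = data;

          /* the verifier requires a bounds check before the access */
          if (data + sizeof(*iph) > data_end)
                  return 1;                       /* 1 == allow */
          return iph->protocol == IPPROTO_TCP;    /* drop non-TCP */
  }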
Signed-off-by: Song Liu <[email protected]>
Signed-off-by: Alexei Starovoitov <[email protected]>
|
|
Given libbpf is a generic library and not restricted to x86-64 only,
the compiler barrier in bpf_perf_event_read_simple() after fetching
the head needs to be replaced with smp_rmb() at minimum. Also, when
writing out the tail, we should use WRITE_ONCE() to avoid store
tearing.
Now that we have the logic in place in ring_buffer_read_head() and
ring_buffer_write_tail() helper also used by perf tool which would
select the correct and best variant for a given architecture (e.g.
x86-64 can avoid CPU barriers entirely), make use of these in order
to fix bpf_perf_event_read_simple().
Fixes: d0cabbb021be ("tools: bpf: move the event reading loop to libbpf")
Fixes: 39111695b1b8 ("samples: bpf: add bpf_perf_event_output example")
Signed-off-by: Daniel Borkmann <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: "Paul E. McKenney" <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Arnaldo Carvalho de Melo <[email protected]>
Signed-off-by: Alexei Starovoitov <[email protected]>
|
|
Currently, on x86-64, perf uses LFENCE and MFENCE (rmb() and mb(),
respectively) when processing events from the perf ring buffer, which
is unnecessarily expensive as we can do something more lightweight, in
particular given this is a critical fast path in perf.
According to Peter, rmb()/mb() were added back then via a94d342b9cb0
("tools/perf: Add required memory barriers") at a time when the kernel
still supported chips that needed them, but nowadays support for these
has been ditched completely, therefore we can fix them up as well.
While for x86-64, replacing rmb() and mb() with smp_*() variants would
result in just a compiler barrier for the former and LOCK + ADD for
the latter (__sync_synchronize() uses the slower MFENCE, by the way),
Peter suggested we can use smp_{load_acquire,store_release}() instead
for architectures where its implementation doesn't resolve to a slower
smp_mb(). Thus, e.g. on x86-64 we would be able to avoid the CPU
barrier entirely due to TSO. For architectures where the latter needs
to use smp_mb(), e.g. on arm, we stick to the cheaper smp_rmb()
variant for fetching the head.
This work adds helpers ring_buffer_read_head() and
ring_buffer_write_tail() to the tools infrastructure. To fetch
data_head from the perf control page, they either switch to
smp_load_acquire() for architectures where it is cheaper, or use a
READ_ONCE() + smp_rmb() barrier for those where it's not; to write
data_tail, they use smp_store_release(). The latter is an smp_mb() +
WRITE_ONCE() combination, or a cheaper variant if the architecture
allows for it. Those that rely on smp_rmb() and smp_mb() can further
improve performance in a follow-up step by implementing the two under
tools/arch/*/include/asm/barrier.h such that they don't have to fall
back to rmb() and mb() in tools/include/asm/barrier.h.
Switch perf to use ring_buffer_read_head() and ring_buffer_write_tail()
so it can make use of the optimizations. Later, we convert libbpf as
well to use the same helpers.
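The resulting consumer pattern, sketched ('page', 'base' and 'size'
are illustrative bookkeeping, and handling of events that wrap around
the ring boundary is omitted):
  #include <linux/ring_buffer.h>  /* tools/include */

  /* 'page' is the mmap'ed struct perf_event_mmap_page, 'base' points
   * at the data pages, 'size' is their power-of-two length. */
  u64 head = ring_buffer_read_head(page);        /* acquire */
  u64 tail = page->data_tail;

  while (tail != head) {
          struct perf_event_header *hdr =
                  (void *)(base + (tail & (size - 1)));
          /* ... process the event at hdr ... */
          tail += hdr->size;
  }
  ring_buffer_write_tail(page, tail);            /* release */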
Side note [0]: the topic has been raised of whether one could simply use
the C11 gcc builtins [1] for the smp_load_acquire() and smp_store_release()
instead:
__atomic_load_n(ptr, __ATOMIC_ACQUIRE);
__atomic_store_n(ptr, val, __ATOMIC_RELEASE);
The kernel and (presumably) the tooling shipped along with it have a
minimum requirement of being able to build with gcc-4.6, and the
latter does not have C11 builtins. While generally the C11 memory
models don't
align with the kernel's, the C11 load-acquire and store-release alone
/could/ suffice, however. The issue is that how the load-acquire and
store-release are done is implementation-dependent on the compiler
side, and the mapping of supported compilers must align to be
compatible with the kernel's implementation; thus it needs to be
verified/tracked on a case-by-case basis whether they match (unless an
architecture uses them also from the kernel side). The implementations
for smp_load_acquire() and
smp_store_release() in this patch have been adapted from the kernel side
ones to have a concrete and compatible mapping in place.
[0] http://patchwork.ozlabs.org/patch/985422/
[1] https://gcc.gnu.org/onlinedocs/gcc/_005f_005fatomic-Builtins.html
Signed-off-by: Daniel Borkmann <[email protected]>
Acked-by: Peter Zijlstra (Intel) <[email protected]>
Cc: "Paul E. McKenney" <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Arnaldo Carvalho de Melo <[email protected]>
Signed-off-by: Alexei Starovoitov <[email protected]>
|
|
Fixes: 371e4fcc9d96 ("selftests/bpf: cgroup local storage-based network counters")
Fixes: 370920c47b26 ("selftests/bpf: Test libbpf_{prog,attach}_type_by_name")
Signed-off-by: Anders Roxell <[email protected]>
Acked-by: Yonghong Song <[email protected]>
Signed-off-by: Alexei Starovoitov <[email protected]>
|
|
test_maps:
Tests that queue/stack maps behave correctly even in corner cases.
test_progs:
Tests the new eBPF helpers.
Signed-off-by: Mauricio Vasquez B <[email protected]>
Acked-by: Song Liu <[email protected]>
Signed-off-by: Alexei Starovoitov <[email protected]>
|
|
Sync both files.
Signed-off-by: Mauricio Vasquez B <[email protected]>
Acked-by: Song Liu <[email protected]>
Signed-off-by: Alexei Starovoitov <[email protected]>
|
|
net/sched/cls_api.c has overlapping changes to a call to
nlmsg_parse(), one (from 'net') added rtm_tca_policy instead of NULL
to the 5th argument, and another (from 'net-next') added cb->extack
instead of NULL to the 6th argument.
net/ipv4/ipmr_base.c is a case of a bug fix in 'net' being done to
code which moved (to mr_table_dump()) in 'net-next'. Thanks to David
Ahern for the heads up.
Signed-off-by: David S. Miller <[email protected]>
|