|
Pull Microblaze fixes from Michal Simek:
- Kbuild fix
- use vma_pages
- set little endian as the default
* tag 'microblaze-4.14-rc3' of git://git.monstr.eu/linux-2.6-microblaze:
arch: change default endian for microblaze
microblaze: Cocci spatch "vma_pages"
microblaze: Add missing kvm_para.h to Kbuild
|
|
This started out as just replacing the use of crypto/rng with
get_random_bytes_wait, so that we wouldn't use bad randomness at boot
time. But, upon looking further, it appears that there were even deeper
underlying cryptographic problems, and that this seems to have been
committed with very little crypto review. So, I rewrote the whole thing,
trying to keep to the conventions introduced by the previous author, to
fix these cryptographic flaws.
It makes no sense to seed crypto/rng at boot time and then keep
using it like this, when in fact there's already get_random_bytes_wait,
which can ensure there's enough entropy and is a much more standard way
of generating keys. Since this sensitive material is being stored
untrusted, using ECB and no authentication is simply not okay at all. I
find it surprising and a bit horrifying that this code even made it past
basic crypto review, which perhaps points to some larger issues. This
patch moves from using AES-ECB to using AES-GCM. Since keys are uniquely
generated each time, we can set the nonce to zero. There was also a race
condition in which the same key would be reused at the same time in
different threads. A mutex fixes this issue now.
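A minimal sketch of the key-generation side (buffer size and names
illustrative, not the exact big_key code):

  u8 key[32];     /* AES-256 key for "gcm(aes)" */
  int err;

  /* blocks until the CRNG is seeded, so no low-entropy keys at boot */
  err = get_random_bytes_wait(key, sizeof(key));
  if (err)
          return err;     /* interrupted before entropy was available */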
So, to summarize, this commit fixes the following vulnerabilities:
* Low entropy key generation, allowing an attacker to potentially
guess or predict keys.
* Unauthenticated encryption, allowing an attacker to modify the
ciphertext in particular ways in order to manipulate the plaintext,
which is even more frightening considering the next point.
* Use of ECB mode, allowing an attacker to trivially swap blocks or
compare identical plaintext blocks.
* Key re-use.
* Faulty memory zeroing.
Signed-off-by: Jason A. Donenfeld <[email protected]>
Reviewed-by: Eric Biggers <[email protected]>
Signed-off-by: David Howells <[email protected]>
Cc: Herbert Xu <[email protected]>
Cc: Kirill Marinushkin <[email protected]>
Cc: [email protected]
Cc: [email protected]
|
|
Error paths forgot to zero out sensitive material, so this patch changes
some kfrees into kzfrees.
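A minimal sketch of the pattern (buffer name illustrative):

  /* error path: the buffer held key material, so zero it before freeing */
  kzfree(payload);        /* was: kfree(payload); */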
Signed-off-by: Jason A. Donenfeld <[email protected]>
Signed-off-by: David Howells <[email protected]>
Reviewed-by: Eric Biggers <[email protected]>
Cc: Herbert Xu <[email protected]>
Cc: Kirill Marinushkin <[email protected]>
Cc: [email protected]
Cc: [email protected]
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace
Pull tracing fixes from Steven Rostedt:
"Stack tracing and RCU has been having issues with each other and
lockdep has been pointing out constant problems.
The changes have been going into the stack tracer, but it has been
discovered that the problem isn't with the stack tracer itself, but it
is with calling save_stack_trace() from within the internals of RCU.
The stack tracer is the one that can trigger the issue the easiest,
but examining the problem further, it could also happen from a WARN()
in the wrong place, or even if an NMI happened in this area and it did
an rcu_read_lock().
The critical area is where RCU is not watching. Which can happen while
going to and from idle, or bringing up or taking down a CPU.
The final fix was to put the protection in kernel_text_address() as it
is the one that requires RCU to be watching while doing the stack
trace.
To make this work properly, Paul had to allow rcu_irq_enter() happen
after rcu_nmi_enter(). This should have been done anyway, since an NMI
can page fault (reading vmalloc area), and a page fault triggers
rcu_irq_enter().
One patch is just a consolidation of code so that the fix only needed
to be done in one location"
* tag 'trace-v4.14-rc1-2' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace:
tracing: Remove RCU work arounds from stack tracer
extable: Enable RCU if it is not watching in kernel_text_address()
extable: Consolidate *kernel_text_address() functions
rcu: Allow for page faults in NMI handlers
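A rough sketch of the kernel_text_address() protection described above
(simplified; the real patch also covers ftrace and other text ranges):

  bool kernel_text_address(unsigned long addr)
  {
          bool no_rcu = !rcu_is_watching();
          bool ret = true;

          if (no_rcu)
                  rcu_nmi_enter();        /* make RCU watch during the lookup */

          if (!core_kernel_text(addr) && !is_module_text_address(addr))
                  ret = false;

          if (no_rcu)
                  rcu_nmi_exit();
          return ret;
  }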
|
|
Guillaume Nault says:
====================
l2tp: fix some races in session deletion
L2TP provides several interfaces for deleting sessions. Using two of
them concurrently can lead to use-after-free bugs.
Patch #2 uses a flag to prevent double removal of L2TP sessions.
Patch #1 fixes a bug found along the way. Fixing this bug is also
necessary for patch #2 to handle all cases.
This issue is similar to the tunnel deletion bug being worked on by
Sabrina: https://patchwork.ozlabs.org/patch/814173/
====================
Signed-off-by: David S. Miller <[email protected]>
|
|
There are several ways to remove L2TP sessions:
* deleting a session explicitly using the netlink interface (with
L2TP_CMD_SESSION_DELETE),
* deleting the session's parent tunnel (either by closing the
tunnel's file descriptor or using the netlink interface),
* closing the PPPOL2TP file descriptor of a PPP pseudo-wire.
In some cases, when these methods are used concurrently on the same
session, the session can be removed twice, leading to use-after-free
bugs.
This patch adds a 'dead' flag, used by l2tp_session_delete() and
l2tp_tunnel_closeall() to prevent them from stepping on each other's
toes.
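A minimal sketch of the flag check (assuming 'dead' is an unsigned long
in struct l2tp_session):

  /* the first caller wins; every other deletion path backs off */
  if (test_and_set_bit(0, &session->dead))
          return;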
The session deletion path used when closing a PPPOL2TP file descriptor
doesn't need to be adapted. It already has to ensure that a session
remains valid for the lifetime of its PPPOL2TP file descriptor.
So it takes an extra reference on the session in the ->session_close()
callback (pppol2tp_session_close()), which is eventually dropped
in the ->sk_destruct() callback of the PPPOL2TP socket
(pppol2tp_session_destruct()).
Still, __l2tp_session_unhash() and l2tp_session_queue_purge() can be
called twice and even concurrently for a given session, but thanks to
proper locking and re-initialisation of list fields, this is not an
issue.
Signed-off-by: Guillaume Nault <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
If l2tp_tunnel_delete() or l2tp_tunnel_closeall() deletes a session
right after pppol2tp_release() orphaned its socket, then the 'sock'
variable of the pppol2tp_session_close() callback is NULL. Yet the
session is still used by pppol2tp_release().
Therefore we need to take an extra reference in any case, to prevent
l2tp_tunnel_delete() or l2tp_tunnel_closeall() from freeing the session.
Since the pppol2tp_session_close() callback is only set if the session
is associated with a PPPOL2TP socket, and both l2tp_tunnel_delete()
and l2tp_tunnel_closeall() hold the PPPOL2TP socket before calling
pppol2tp_session_close(), we're sure that pppol2tp_session_close() and
pppol2tp_session_destruct() are paired and called in the right order.
So the reference taken by the former will be released by the latter.
Signed-off-by: Guillaume Nault <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Add a sysfs file to one-time fail a specific state. This can be used
to test the state rollback code paths.
Something like this (hotplug-up.sh):
#!/bin/bash
echo 0 > /debug/sched_debug
echo 1 > /debug/tracing/events/cpuhp/enable
ALL_STATES=`cat /sys/devices/system/cpu/hotplug/states | cut -d':' -f1`
STATES=${1:-$ALL_STATES}
for state in $STATES
do
echo 0 > /sys/devices/system/cpu/cpu1/online
echo 0 > /debug/tracing/trace
echo Fail state: $state
echo $state > /sys/devices/system/cpu/cpu1/hotplug/fail
cat /sys/devices/system/cpu/cpu1/hotplug/fail
echo 1 > /sys/devices/system/cpu/cpu1/online
cat /debug/tracing/trace > hotfail-${state}.trace
sleep 1
done
Can be used to test for all possible rollback (barring multi-instance)
scenarios on CPU-up; CPU-down is a trivial modification of the above.
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Link: https://lkml.kernel.org/r/[email protected]
|
|
With lockdep-crossrelease we get deadlock reports that span cpu-up and
cpu-down chains. Such deadlocks cannot possibly happen because cpu-up
and cpu-down are globally serialized.
takedown_cpu()
  irq_lock_sparse()
  wait_for_completion(&st->done)
                                cpuhp_thread_fun
                                  cpuhp_up_callback
                                    cpuhp_invoke_callback
                                      irq_affinity_online_cpu
                                        irq_lock_sparse()
                                        irq_unlock_sparse()
                                complete(&st->done)
Now that we have consistent AP state, we can trivially separate the
AP completion between up and down using st->bringup.
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Acked-by: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Link: https://lkml.kernel.org/r/[email protected]
|
|
With lockdep-crossrelease we get deadlock reports that span cpu-up and
cpu-down chains. Such deadlocks cannot possibly happen because cpu-up
and cpu-down are globally serialized.
CPU0                    CPU1                    CPU2
cpuhp_up_callbacks:     takedown_cpu:           cpuhp_thread_fun:
                        cpuhp_state
                        irq_lock_sparse()
irq_lock_sparse()
                        wait_for_completion()
                                                cpuhp_state
                                                complete()
Now that we have consistent AP state, we can trivially separate the
AP-work class between up and down using st->bringup.
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Link: https://lkml.kernel.org/r/[email protected]
|
|
While the generic callback functions have an 'int' return and thus
appear to be allowed to return error, this is not true for all states.
Specifically, what used to be STARTING/DYING are run with IRQs
disabled from critical parts of CPU bringup/teardown and are not
allowed to fail. Add WARNs to enforce this rule.
But since some callbacks are indeed allowed to fail, we have the
situation where a state-machine rollback encounters a failure, in this
case we're stuck, we can't go forward and we can't go back. Also add a
WARN for that case.
AFAICT this is a fundamental 'problem' with no real obvious solution.
We want the 'prepare' callbacks to allow failure on either up or down.
Typically on prepare-up this would be things like -ENOMEM from
resource allocations, and the typical usage in prepare-down would be
something like -EBUSY to avoid CPUs being taken away.
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Link: https://lkml.kernel.org/r/[email protected]
|
|
There is currently no explicit state change on rollback. That is,
st->bringup, st->rollback and st->target are not consistent when doing
the rollback.
Rework the AP state handling to be more coherent. This does mean we
have to do a second AP kick-and-wait for rollback, but since rollback
is the slow path of a slowpath, this really should not matter.
Take this opportunity to simplify the AP thread function to only run a
single callback per invocation. This unifies the three single/up/down
modes it supports. The looping it used to do for up/down is achieved
by retaining should_run and relying on the main smpboot_thread_fn()
loop.
(I have most of a patch that does the same for the BP state handling,
but that's not critical and gets a little complicated because
CPUHP_BRINGUP_CPU does the AP handoff from a callback, which gets
recursive @st usage; I still have to de-fugly that.)
[ tglx: Move cpuhp_down_callbacks() et al. into the HOTPLUG_CPU section to
avoid gcc complaining about unused functions. Make the HOTPLUG_CPU
one piece instead of having two consecutive ifdef sections of the
same type. ]
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Link: https://lkml.kernel.org/r/[email protected]
|
|
Currently the rollback of multi-instance states is handled inside
cpuhp_invoke_callback(). The problem is that when we want to allow an
explicit state change for rollback, we need to return from the
function without doing the rollback.
Change cpuhp_invoke_callback() to optionally return the multi-instance
state, such that rollback can be done from a subsequent call.
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Link: https://lkml.kernel.org/r/[email protected]
|
|
Add a state diagram to clarify when which states are run where.
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Link: https://lkml.kernel.org/r/[email protected]
|
|
Add myself as a maintainer to support existing SoCs and to help future
MediaTek PMICs with LEDs reuse the driver.
Signed-off-by: Sean Wang <[email protected]>
Signed-off-by: Jacek Anaszewski <[email protected]>
|
|
ChunYu found a kernel crash by syzkaller:
[ 651.617875] kasan: CONFIG_KASAN_INLINE enabled
[ 651.618217] kasan: GPF could be caused by NULL-ptr deref or user memory access
[ 651.618731] general protection fault: 0000 [#1] SMP KASAN
[ 651.621543] CPU: 1 PID: 9539 Comm: scsi Not tainted 4.11.0.cov #32
[ 651.621938] Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011
[ 651.622309] task: ffff880117780000 task.stack: ffff8800a3188000
[ 651.622762] RIP: 0010:skb_release_data+0x26c/0x590
[...]
[ 651.627260] Call Trace:
[ 651.629156] skb_release_all+0x4f/0x60
[ 651.629450] consume_skb+0x1a5/0x600
[ 651.630705] netlink_unicast+0x505/0x720
[ 651.632345] netlink_sendmsg+0xab2/0xe70
[ 651.633704] sock_sendmsg+0xcf/0x110
[ 651.633942] ___sys_sendmsg+0x833/0x980
[ 651.637117] __sys_sendmsg+0xf3/0x240
[ 651.638820] SyS_sendmsg+0x32/0x50
[ 651.639048] entry_SYSCALL_64_fastpath+0x1f/0xc2
It's caused by the skb_shared_info at the end of the sk_buff being
overwritten by ISCSI_KEVENT_IF_ERROR when parsing nlmsg info from the skb
in iscsi_if_rx. During the loop, if skb->len == nlh->nlmsg_len and both
equal sizeof(*nlh), ev = nlmsg_data(nlh) will actually point at
skb_shinfo(skb) instead, and writing ev->type sets a new value for
skb_shinfo(skb)->nr_frags.
This patch fixes it by checking nlh->nlmsg_len properly there to avoid
reading past the end of the sk_buff.
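Roughly, the check the iscsi_if_rx loop needs before trusting
nlmsg_data() (a sketch, not the verbatim patch):

  nlh = nlmsg_hdr(skb);
  if (nlh->nlmsg_len < sizeof(*nlh) + sizeof(*ev) ||
      skb->len < nlh->nlmsg_len)
          break;          /* truncated: don't read past the skb data */
  ev = nlmsg_data(nlh);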
Reported-by: ChunYu Wang <[email protected]>
Signed-off-by: Xin Long <[email protected]>
Acked-by: Chris Leech <[email protected]>
Signed-off-by: Martin K. Petersen <[email protected]>
|
|
Fix various address space warnings from sparse.
kernel/irq/irqdomain.c:1463:14: warning: incorrect type in assignment (different address spaces)
kernel/irq/irqdomain.c:1463:14: expected void **slot
kernel/irq/irqdomain.c:1463:14: got void [noderef] <asn:4>**
kernel/irq/irqdomain.c:1465:66: warning: incorrect type in argument 2 (different address spaces)
kernel/irq/irqdomain.c:1465:66: expected void [noderef] <asn:4>**slot
kernel/irq/irqdomain.c:1465:66: got void **slot
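The fix amounts to annotating the slot pointer so it matches the radix
tree API (a minimal sketch):

  void __rcu **slot;      /* was: void **slot */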
Signed-off-by: Masahiro Yamada <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Cc: Marc Zyngier <[email protected]>
Cc: Jason Cooper <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
|
|
Commit 7778c4b27cbe ("irqchip: mips-gic: Use pcpu_masks to avoid reading
GIC_SH_MASK*") adjusted the way we handle masking interrupts to set &
clear the interrupt's bit in each pcpu_mask. This allows us to avoid
needing to read the GIC mask registers and perform a bitwise and of
their values with the pending & pcpu_masks.
Unfortunately this didn't quite work for IPIs, which were mapped to a
particular CPU/VP during initialisation but never set the affinity or
effective_affinity fields of their struct irq_desc. This led to them
losing their affinity when gic_unmask_irq() was called for them, and
they'd all become affine to cpu0.
Fix this by:
1) Setting the effective affinity of interrupts in
gic_shared_irq_domain_map(), which is where we actually map an
interrupt to a CPU/VP. This ensures that the effective affinity mask
is always valid, not just after explicitly setting affinity.
2) Using an interrupt's effective affinity when unmasking it, which
prevents gic_unmask_irq() from unintentionally changing which
pcpu_mask includes an interrupt.
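A minimal sketch of point 1, in gic_shared_irq_domain_map() (with 'cpu'
being the VP the interrupt was mapped to):

  /* record where the interrupt is actually routed */
  irq_data_update_effective_affinity(data, cpumask_of(cpu));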
Fixes: 7778c4b27cbe ("irqchip: mips-gic: Use pcpu_masks to avoid reading GIC_SH_MASK*")
Signed-off-by: Paul Burton <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Cc: Marc Zyngier <[email protected]>
Cc: Jason Cooper <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
|
|
The MIPS GIC driver is incorrectly using __fls to shift registers,
intending to shift to the least significant bit of a value based upon
its mask but instead shifting off all but the value's top bit. It should
actually be using __ffs to shift to the first, not last, bit of the
value.
Apparently the system I used when testing commit 3680746abd87
("irqchip: mips-gic: Convert remaining shared reg access to new
accessors") and commit b2b2e584ceab ("irqchip: mips-gic: Clean up mti,
reserved-cpu-vectors handling") managed to work correctly despite this
issue, but not all systems do...
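A concrete illustration of the difference (mask value and register read
chosen for the example):

  unsigned long raw = read_some_gic_reg();    /* hypothetical read */
  unsigned long mask = 0x3f00;                /* field occupies bits 8..13 */

  /* __ffs(mask) == 8:  index of the field's LSB (the shift we want)
   * __fls(mask) == 13: index of the field's MSB (shifts off everything
   * but the value's top bit) */
  unsigned long val = (raw & mask) >> __ffs(mask);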
Fixes: 3680746abd87 ("irqchip: mips-gic: Convert remaining shared reg access to new accessors")
Fixes: b2b2e584ceab ("irqchip: mips-gic: Clean up mti, reserved-cpu-vectors handling")
Signed-off-by: Paul Burton <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Cc: Marc Zyngier <[email protected]>
Cc: Jason Cooper <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
|
|
Now that there are potentially long delays between when a remoteport or
targetport delete call is made and when the callback occurs (dev_loss_tmo
timeout), no longer block in the delete routines and move the final nport
puts to the callbacks.
Moved the fcloop_nport_get/put/free routines to avoid forward declarations.
Ensure port_info structs used in registrations are nulled in case fields
are not set (ex: devloss_tmo values).
Signed-off-by: James Smart <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
|
|
Comments were incorrect:
- defer_rcv was in host port template. moved to target port template
- Added Mandatory statements for target port template items
Signed-off-by: James Smart <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
|
|
When searching for queue IDs, ensure they are within the expected range.
Signed-off-by: James Smart <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
|
|
Avoid calling the put routine, as it may traverse to free routines while
holding the target lock.
Signed-off-by: James Smart <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
|
|
Calling nvme_stop_ctrl() on an already failed controller will wait for
the scan work to complete (bounded only by the identify timeout, which is
60 seconds). This is unnecessary when we already know that the controller
has failed.
Reported-by: Yi Zhang <[email protected]>
Signed-off-by: Sagi Grimberg <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
|
|
If we failed to transition to state LIVE after a successful reconnect,
then controller deletion already started. In this case there is no
point moving forward with reconnect.
Reviewed-by: Johannes Thumshirn <[email protected]>
Signed-off-by: Sagi Grimberg <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
|
|
async_event_work might race as it is executed from two different
workqueues at the moment.
Reviewed-by: Johannes Thumshirn <[email protected]>
Signed-off-by: Sagi Grimberg <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
|
|
Fix bug in sqhd patch.
It wasn't the sq that was at risk. In the case where the admin queue
connect command fails, the sq->size field is not set. Therefore, this
becomes a divide by zero error.
Add a quick check to bypass under this failure condition.
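A minimal sketch of the guard (field names as described above):

  if (!sq->size)
          return;         /* connect failed: sq->size was never set */
  sq->sqhd = (sq->sqhd + 1) % sq->size;   /* now safe from divide-by-zero */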
Signed-off-by: James Smart <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
|
|
for_each_cpu() unintuitively reports CPU0 as set independent of the
actual cpumask content on UP kernels. That leads to a NULL pointer
dereference when the cleanup function is invoked and there is no event
to clean up.
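A sketch of the pitfall and the kind of guard it needs
(cleanup_cpu_event() is a hypothetical stand-in for the real cleanup):

  int cpu;

  /* on UP, for_each_cpu() yields CPU0 even when 'mask' is empty */
  if (!cpumask_empty(mask))
          for_each_cpu(cpu, mask)
                  cleanup_cpu_event(cpu);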
Reported-by: Fengguang Wu <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
|
|
The switch to rhashtables (commit 88ffbf3e03) broke the debugfs glock
dump (/sys/kernel/debug/gfs2/<device>/glocks) for dumps bigger than a
single buffer: the right function for restarting an rhashtable iteration
from the beginning of the hash table is rhashtable_walk_enter;
rhashtable_walk_stop + rhashtable_walk_start will just resume from the
current position.
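A sketch of the two patterns (iterator name illustrative):

  /* resumes from the current position: */
  rhashtable_walk_stop(&iter);
  rhashtable_walk_start(&iter);

  /* restarts from the beginning of the hash table: */
  rhashtable_walk_exit(&iter);
  rhashtable_walk_enter(&gl_hash_table, &iter);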
Signed-off-by: Andreas Gruenbacher <[email protected]>
Signed-off-by: Bob Peterson <[email protected]>
Cc: [email protected] # v4.3+
|
|
When timer_create() fails on a boottime or realtime clock, setup_timer()
returns 0 as if the timer had been set. Callers wait forever for the
timer to expire.
This hang is seen on a system that doesn't have support for:
CLOCK_REALTIME_ALARM ABSTIME missing CAP_WAKE_ALARM? : [UNSUPPORTED]
The test hangs waiting for a timer that was never set to expire. Fix
setup_timer() to return 1, and add handling in callers to detect the
unsupported case and return 0 without waiting, so the test doesn't fail.
Signed-off-by: Shuah Khan <[email protected]>
|
|
do_timer_oneshot() uses select() as a timer with FD_SETSIZE, and readfds
is cleared with FD_ZERO without any FD_SET.
When stdout and stderr are redirected, the test hangs in select() forever.
Fix the problem by calling select() with readfds empty and nfds zero. This
is sufficient for using select() as a timer.
With this fix "./set-timer-lat > /dev/null 2>&1" no longer hangs.
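The fixed call, roughly:

  fd_set readfds;
  struct timeval timeout = { .tv_sec = 5, .tv_usec = 0 };

  FD_ZERO(&readfds);
  /* nfds == 0 with an empty set: select() just sleeps until the timeout */
  select(0, &readfds, NULL, NULL, &timeout);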
Signed-off-by: Shuah Khan <[email protected]>
Acked-by: Greg Hackmann <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
|
|
to fix the following issue:
------------------
TAP version 13
selftests: run_tests.sh
========================================
selftests: Warning: file run_tests.sh is not executable, correct this.
not ok 1..1 selftests: run_tests.sh [FAIL]
------------------
Signed-off-by: Li Zhijian <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
|
|
The 2.26 release of glibc changed how siginfo_t is defined, and the
earlier work-around of using the kernel definition is no longer needed.
The old way needs to stay around for a while, though.
Reported-by: Seth Forshee <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Will Drewry <[email protected]>
Cc: Shuah Khan <[email protected]>
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Kees Cook <[email protected]>
Tested-by: Seth Forshee <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
|
|
Fix for loops in targets to run silently to avoid cluttering the test
results.
Suppresses the following from targets:
for DIR in functional; do \
BUILD_TARGET=./tools/testing/selftests/futex/$DIR; \
mkdir $BUILD_TARGET -p; \
make OUTPUT=$BUILD_TARGET -C $DIR all;\
done
./tools/testing/selftests/futex/run.sh
Signed-off-by: Shuah Khan <[email protected]>
Reviewed-by: Darren Hart (VMware) <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
|
|
Fix for loops in targets to run silently to avoid cluttering the test
results.
Suppresses the following from targets, e.g. when run from breakpoints:
for TARGET in breakpoints; do \
BUILD_TARGET=$BUILD/$TARGET; \
mkdir $BUILD_TARGET -p; \
make OUTPUT=$BUILD_TARGET -C $TARGET;\
done;
Signed-off-by: Shuah Khan <[email protected]>
|
|
Use full path including $(OUTPUT) to run tests from Makefile for
normal case when objects reside in the source tree as well as when
objects are relocated with make O=dir. In both cases $(OUTPUT) will
be set correctly by lib.mk.
Signed-off-by: Shuah Khan <[email protected]>
|
|
For make O=dir run_tests to work, test scripts from sub-directories
need to be copied over to the object directory. Running tests from the
object directory is necessary to avoid making the source tree dirty.
Signed-off-by: Shuah Khan <[email protected]>
Reviewed-by: Darren Hart (VMware) <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
|
|
If CONFIG_PCI=n and gcc (e.g. 4.1.2) decides not to inline
get_pci_function_alias_group(), the build fails with:
drivers/iommu/iommu.o: In function `get_pci_function_alias_group':
iommu.c:(.text+0xfdc): undefined reference to `pci_acs_enabled'
Due to the various dummies for PCI calls in the CONFIG_PCI=n case,
pci_acs_enabled() is never called, but not all versions of gcc are smart
enough to realize that.
While explicitly marking get_pci_function_alias_group() inline would fix
the build, this would inflate the code for the CONFIG_PCI=y case, as
get_pci_function_alias_group() is a not-so-small function called from two
places.
Hence fix the issue by introducing a dummy for pci_acs_enabled() instead.
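The dummy, roughly as it would be added next to the other CONFIG_PCI=n
stubs:

  static inline bool pci_acs_enabled(struct pci_dev *pdev, u16 acs_flags)
  {
          return false;   /* never reached; only satisfies the linker */
  }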
Fixes: 0ae349a0f33f ("iommu/qcom: Add qcom_iommu")
Signed-off-by: Geert Uytterhoeven <[email protected]>
Signed-off-by: Bjorn Helgaas <[email protected]>
Reviewed-by: Alex Williamson <[email protected]>
|
|
mlx5_ib_reg_user_mr called mlx5_ib_dereg_mr in case of MR population
failure. This resulted in a NULL dereference as ibmr->device wasn't
initialized yet.
We address this by adding an internal dereg_mr function that can handle
partially initialized MRs, and fixing clean_mr to work on partially
initialized MRs.
Fixes: ff740aefecb9 ("IB/mlx5: Decouple MR allocation and population flows")
Signed-off-by: Ilya Lesokhin <[email protected]>
Signed-off-by: Leon Romanovsky <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>
|
|
The patch simplifies mlx5_ib_cont_pages and fixes the following
issues in the original implementation:
The first issue is related to the alignment of the PFNs. After the check
base + p != PFN, the alignment of the PFN wasn't checked. So the PFN
sequence 0, 1, 1, 2 would result in a page_shift of 13 even though
the 3rd PFN is not 8KB aligned.
This wasn't actually a bug because it was supported by all the
existing mlx5 compatible devices, but we don't want to require
this support in all future devices.
Another issue is that the inner loop didn't advance the PFN, so
the test "if (base + p != pfn)" always failed for SGEs with
len > (1 << page_shift).
Fixes: e126ba97dba9 ("mlx5: Add driver for Mellanox Connect-IB adapters")
Signed-off-by: Ilya Lesokhin <[email protected]>
Reviewed-by: Eli Cohen <[email protected]>
Signed-off-by: Leon Romanovsky <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>
|
|
Call free_rdma_netdev instead of free_netdev each time we want to
release a netdevice. This call is also relevant for future freeing
of offloaded child interfaces.
This patch also adds a missing call to free the netdevice when releasing
a parent interface that has child interfaces via ipoib_remove_one.
Fixes: cd565b4b51e5 ('IB/IPoIB: Support acceleration options callbacks')
Signed-off-by: Alex Vesker <[email protected]>
Signed-off-by: Leon Romanovsky <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>
|
|
A possible ABBA lock can happen with RTNL and vlan_rwsem.
For example:
Flow A (device flush):
  __ipoib_ib_dev_flush
    down_read(vlan_rwsem)            // Lock A
    ipoib_flush_ah
      flush_workqueue(priv->wq)      // Wait for completion

A work on the shared WQ (mcast carrier):
  ipoib_mcast_carrier_on_task
    while (!rtnl_trylock())          // Wait for lock B

Flow B (sysfs PKEY delete):
  ipoib_vlan_delete
    lock(RTNL)                       // Lock B
    down_write(vlan_rwsem)           // Wait for lock A
This can happen with PKEY creates as well. The solution is to release
the RTNL lock in the sysfs functions when the VLAN RW semaphore cannot
be taken, and restart the system call.
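A sketch of the pattern in the sysfs handlers (as described above):

  if (!down_write_trylock(&ppriv->vlan_rwsem)) {
          rtnl_unlock();
          return restart_syscall();   /* retry instead of deadlocking */
  }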
Fixes: 69956d83267e ("IB/ipoib: Sync between remove_one to sysfs calls that use rtnl_lock")
Signed-off-by: Shalom Lagziel <[email protected]>
Signed-off-by: Alex Vesker <[email protected]>
Signed-off-by: Leon Romanovsky <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>
|
|
The ib_mr->length represents the length of the MR in bytes as per
the IBTA spec 1.3 section 11.2.10.3 (REGISTER PHYSICAL MEMORY REGION).
Currently the ib_mr->length field is defined as only a 32-bit field.
This might result in truncation and failed WRs for consumers who
register memory regions larger than 4GB and whose WRs access
such MRs.
This patch makes the length 64-bit to avoid such truncation.
Cc: Sagi Grimberg <[email protected]>
Cc: Chuck Lever <[email protected]>
Cc: Faisal Latif <[email protected]>
Fixes: 4c67e2bfc8b7 ("IB/core: Introduce new fast registration API")
Signed-off-by: Ilya Lesokhin <[email protected]>
Signed-off-by: Parav Pandit <[email protected]>
Signed-off-by: Leon Romanovsky <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>
|
|
When security_ib_alloc_security fails, qp->qp_sec memory is freed.
However ib_destroy_qp still tries to access this memory, which results
in a kernel crash. So it is initialized to NULL to avoid such access.
Fixes: d291f1a65232 ("IB/core: Enforce PKey security on QPs")
Signed-off-by: Parav Pandit <[email protected]>
Reviewed-by: Daniel Jurgens <[email protected]>
Signed-off-by: Leon Romanovsky <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>
|
|
The tag matching functionality is implemented by the mlx5 driver
by extending XRQ; however, this internal kernel information was
exposed to user space applications under the *xrq* name instead of *tm*.
This patch renames *xrq* to *tm* to handle that.
Fixes: 8d50505ada72 ("IB/uverbs: Expose XRQ capabilities")
Signed-off-by: Leon Romanovsky <[email protected]>
Reviewed-by: Yishai Hadas <[email protected]>
Signed-off-by: Doug Ledford <[email protected]>
|
|
The build of kernel v4.14-rc1 for i686 fails on RHEL 6 with the error
in tools/perf:
util/syscalltbl.c:157: error: expected ';', ',' or ')' before '__maybe_unused'
mv: cannot stat `util/.syscalltbl.o.tmp': No such file or directory
Fix it by placing/moving:
#include <linux/compiler.h>
outside of #ifdef HAVE_SYSCALL_TABLE block.
Signed-off-by: Akemi Yagi <[email protected]>
Cc: Alan Bartlett <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
With --call-graph option, perf report can display call chains using
type, min percent threshold, optional print limit and order. And the
default call-graph parameter is 'graph,0.5,caller,function,percent'.
Before this patch, 'perf report --call-graph' shows incorrect debug
messages as below:
# perf report --call-graph
Invalid callchain mode: 0.5
Invalid callchain order: 0.5
Invalid callchain sort key: 0.5
Invalid callchain config key: 0.5
Invalid callchain mode: caller
Invalid callchain mode: function
Invalid callchain order: function
Invalid callchain mode: percent
Invalid callchain order: percent
Invalid callchain sort key: percent
That is because in function __parse_callchain_report_opt(), each field of
the call-graph parameter is passed to parse_callchain_{mode,order,
sort_key,value} in turn until it meets the matching value.
For example, the order field "caller" is passed to
parse_callchain_mode() firstly and obviously it doesn't match any mode
field. Therefore parse_callchain_mode() will shows the debug message
"Invalid callchain mode: caller", which could confuse users.
The patch fixes this issue by moving the warning out of the functions
parse_callchain_{mode,order,sort_key,value}.
Signed-off-by: Mengting Zhang <[email protected]>
Acked-by: Jiri Olsa <[email protected]>
Tested-by: Arnaldo Carvalho de Melo <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Andi Kleen <[email protected]>
Cc: Krister Johansen <[email protected]>
Cc: Li Bin <[email protected]>
Cc: Milian Wolff <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Wang Nan <[email protected]>
Cc: Yao Jin <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
|
|
The size of struct dm_name_list is different on 32-bit and 64-bit
kernels (so "(nl + 1)" differs between 32-bit and 64-bit kernels).
This mismatch caused a harmless difference in padding when using a 32-bit
or 64-bit kernel. Commit 23d70c5e52dd ("dm ioctl: report event number in
DM_LIST_DEVICES") added reporting event number in the output of
DM_LIST_DEVICES_CMD. This difference in padding makes it impossible for
userspace to determine the location of the event number (the location
would be different when running on 32-bit and 64-bit kernels).
Fix the padding by using offsetof(struct dm_name_list, name) instead of
sizeof(struct dm_name_list) to determine the location of entries.
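A sketch of why this matters: the struct's trailing padding differs
between 32- and 64-bit ABIs, while the offset of the flexible name field
does not.

  /* stride to the next entry, identical on 32- and 64-bit kernels: */
  next = offsetof(struct dm_name_list, name) + strlen(nl->name) + 1;
  /* was (padding-dependent): sizeof(struct dm_name_list) + strlen(...) + 1 */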
Also, the ioctl version number is incremented to 37 so that userspace
can use the version number to determine that the event number is present
and correctly located.
In addition, a global event is now raised when a DM device is created,
removed, renamed or when table is swapped, so that the user can monitor
for device changes.
Reported-by: Eugene Syromiatnikov <[email protected]>
Fixes: 23d70c5e52dd ("dm ioctl: report event number in DM_LIST_DEVICES")
Cc: [email protected] # 4.13
Signed-off-by: Mikulas Patocka <[email protected]>
Signed-off-by: Mike Snitzer <[email protected]>
|
|
part_stat_show takes a part device not a disk, so we should use
part_to_disk.
Fixes: d62e26b3ffd2 ("block: pass in queue to inflight accounting")
Cc: Bart Van Assche <[email protected]>
Cc: Omar Sandoval <[email protected]>
Signed-off-by: Shaohua Li <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
|
|
Currently when mixing buffered reads and asynchronous direct writes it
is possible to end up with the situation where we have stale data in the
page cache while the new data is already written to disk. This is
permanent until the affected pages are flushed away. Despite the fact
that mixing buffered and direct IO is ill-advised, it does pose a threat
to data integrity, is unexpected, and should be fixed.
Fix this by deferring completion of asynchronous direct writes to a
process context in the case that there are mapped pages to be found in
the inode. Later before the completion in dio_complete() invalidate
the pages in question. This ensures that after the completion the pages
in the written area are either unmapped, or populated with up-to-date
data. Also do the same for the iomap case which uses
iomap_dio_complete() instead.
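A sketch of the invalidation step in dio_complete() (names per the
surrounding code; simplified from the actual patch):

  /* completion now runs in process context, so this may block */
  if (ret > 0 && op_is_write(dio->op) && inode->i_mapping->nrpages)
          invalidate_inode_pages2_range(inode->i_mapping,
                          offset >> PAGE_SHIFT,
                          (offset + ret - 1) >> PAGE_SHIFT);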
This has a side effect of deferring the completion to a process context
for every AIO DIO that happens on an inode that has pages mapped. However,
since the consensus is that this is an ill-advised practice, the
performance implication should not be a problem.
This was based on a proposal from Jeff Moyer, thanks!
Reviewed-by: Jan Kara <[email protected]>
Reviewed-by: Darrick J. Wong <[email protected]>
Reviewed-by: Jeff Moyer <[email protected]>
Signed-off-by: Lukas Czerner <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
|