It was reported that some perf event setup can make fork() fail on
ARM64. It was the case of a group of mixed hw and sw events, and it
failed in perf_event_init_task() due to armpmu_event_init().
The ARM PMU code checks that all the events in a group belong to the
same PMU, except for software events. But it did not copy the
event_caps of inherited events, so they were no longer identified as
software events. Therefore the check failed in the child process.
A simple reproducer is:
$ perf stat -e '{cycles,cs,instructions}' perf bench sched messaging
# Running 'sched/messaging' benchmark:
perf: fork(): Invalid argument
The perf stat itself was fine, but the perf bench failed in fork().
Let's inherit the event caps from the parent.
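The fix amounts to copying the capability bits when the child event is
created. A minimal sketch of the idea (simplified; the real change
lives in the inherit path of kernel/events/core.c):

  /* child events must keep e.g. PERF_EV_CAP_SOFTWARE from the parent */
  child_event->event_caps = parent_event->event_caps;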
Signed-off-by: Namhyung Kim <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
|
|
The uncore PMU of Raptor Lake is the same as that of Alder Lake.
Add the new IMC PCI IDs for Raptor Lake.
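For illustration, new IMC entries follow the existing pattern of the
uncore PCI ID table; a hedged sketch (the RPL device-ID macro name
below is a placeholder, not a quote of the patch):

  { /* IMC */
  	PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_RPL_1_IMC),
  	.driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
  },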
Signed-off-by: Kan Liang <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
Raptor Lake is Intel's successor to Alder Lake. The PPERF and
SMI_COUNT MSRs are also supported.
Signed-off-by: Kan Liang <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
Raptor Lake is Intel's successor to Alder Lake. From the perspective of
Intel cstate residency counters, nothing has changed compared with
Alder Lake.
Share adl_cstates with Alder Lake.
Update the comments for Raptor Lake.
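Reusing the Alder Lake state description amounts to one new entry in
the model match table; a sketch (the X86_CSTATES_MODEL spelling follows
the existing cstate driver and is an assumption here):

  X86_CSTATES_MODEL(INTEL_FAM6_RAPTORLAKE, adl_cstates),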
Signed-off-by: Kan Liang <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
From the PMU's perspective, Raptor Lake is the same as Alder Lake. The
only difference is the event list, which will be supported in the perf
tool later.
Signed-off-by: Kan Liang <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
The local_lock() is now using a proper static inline function, which is
enough for LLVM to accept that the variable is used.
Signed-off-by: Sebastian Andrzej Siewior <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
With volatile removed from arch_raw_cpu_ptr() the compiler no longer
creates the per-CPU reference. The usage of the macro can be reverted
now.
Signed-off-by: Sebastian Andrzej Siewior <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
The volatile attribute in the inline assembly of arch_raw_cpu_ptr()
forces the compiler to always generate the code, even if the compiler
can decide upfront that its result is not needed.
For instance, invoking __intel_pmu_disable_all(false) (like
intel_pmu_snapshot_arch_branch_stack() does) leads to loading the
address of &cpu_hw_events into a register even though the compiler
knows that it has no need for it. This ends up with code like:
| movq $cpu_hw_events, %rax #, tcp_ptr__
| add %gs:this_cpu_off(%rip), %rax # this_cpu_off, tcp_ptr__
| xorl %eax, %eax # tmp93
It also creates additional code within local_lock() with !RT &&
!LOCKDEP, which is not desired.
By removing the volatile attribute, the compiler can place the
function freely and omit it entirely if its result is not needed.
If the function is used twice, the compiler caches only the variable
offset and always reloads the per-CPU offset.
this_cpu_ptr() also remains properly placed within a preempt_disable()
section because
- the arch_raw_cpu_ptr() assembly has a memory input ("m" (this_cpu_off))
- preempt_{dis,en}able() fundamentally has a 'barrier()' in it
Therefore this_cpu_ptr() is already properly serialized and does not
rely on the 'volatile' attribute.
Remove volatile from arch_raw_cpu_ptr().
[ bigeasy: Added Linus' explanation why this_cpu_ptr() is not moved out
of a preempt_disable() section without the 'volatile' attribute. ]
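For reference, the macro after the change looks roughly like this (a
sketch, not verbatim kernel source); note the "m" (this_cpu_off) memory
input that keeps it ordered against barrier():

  #define arch_raw_cpu_ptr(ptr)					\
  ({								\
  	unsigned long tcp_ptr__;				\
  	asm ("add " __percpu_arg(1) ", %0" /* no 'volatile' */	\
  	     : "=r" (tcp_ptr__)					\
  	     : "m" (this_cpu_off), "0" (ptr));			\
  	(typeof(*(ptr)) __kernel __force *)tcp_ptr__;		\
  })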
Suggested-by: Linus Torvalds <[email protected]>
Signed-off-by: Sebastian Andrzej Siewior <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
Only DEFINE_STATIC_CALL() uses the __DEFINE_STATIC_CALL() macro now
when CONFIG_HAVE_STATIC_CALL is selected.
Only keep __DEFINE_STATIC_CALL() for the generic fallback, and
also use it to implement DEFINE_STATIC_CALL_NULL() in that case.
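A sketch of the resulting generic (!CONFIG_HAVE_STATIC_CALL) fallback,
to illustrate the shape (simplified from the description above):

  #define __DEFINE_STATIC_CALL(name, _func, _func_init)		\
  	DECLARE_STATIC_CALL(name, _func);			\
  	struct static_call_key STATIC_CALL_KEY(name) = {	\
  		.func = _func_init,				\
  	}

  #define DEFINE_STATIC_CALL_NULL(name, _func)			\
  	__DEFINE_STATIC_CALL(name, _func, NULL)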
Signed-off-by: Christophe Leroy <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Acked-by: Josh Poimboeuf <[email protected]>
Link: https://lore.kernel.org/r/329074f92d96e3220ebe15da7bbe2779beee31eb.1647253456.git.christophe.leroy@csgroup.eu
|
|
When a static call is updated with __static_call_return0() as the
target, arch_static_call_transform() sets it to use an optimised set of
instructions which are meant to lie in the same cacheline.
But when initialising a static call with DEFINE_STATIC_CALL_RET0(),
we get a branch to the real __static_call_return0() function instead
of the optimised setup:
c00d8120 <__SCT__perf_snapshot_branch_stack>:
c00d8120: 4b ff ff f4 b c00d8114 <__static_call_return0>
c00d8124: 3d 80 c0 0e lis r12,-16370
c00d8128: 81 8c 81 3c lwz r12,-32452(r12)
c00d812c: 7d 89 03 a6 mtctr r12
c00d8130: 4e 80 04 20 bctr
c00d8134: 38 60 00 00 li r3,0
c00d8138: 4e 80 00 20 blr
c00d813c: 00 00 00 00 .long 0x0
Add ARCH_DEFINE_STATIC_CALL_RET0_TRAMP(), defined by each architecture,
to set up the optimised configuration, and rework
DEFINE_STATIC_CALL_RET0() to call it:
c00d8120 <__SCT__perf_snapshot_branch_stack>:
c00d8120: 48 00 00 14 b c00d8134 <__SCT__perf_snapshot_branch_stack+0x14>
c00d8124: 3d 80 c0 0e lis r12,-16370
c00d8128: 81 8c 81 3c lwz r12,-32452(r12)
c00d812c: 7d 89 03 a6 mtctr r12
c00d8130: 4e 80 04 20 bctr
c00d8134: 38 60 00 00 li r3,0
c00d8138: 4e 80 00 20 blr
c00d813c: 00 00 00 00 .long 0x0
Signed-off-by: Christophe Leroy <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Acked-by: Josh Poimboeuf <[email protected]>
Link: https://lore.kernel.org/r/1e0a61a88f52a460f62a58ffc2a5f847d1f7d9d8.1647253456.git.christophe.leroy@csgroup.eu
|
|
System.map shows that vmlinux contains several instances of
__static_call_return0():
c0004fc0 t __static_call_return0
c0011518 t __static_call_return0
c00d8160 t __static_call_return0
arch_static_call_transform() uses the middle one to check whether we are
setting a call to __static_call_return0 or not:
c0011520 <arch_static_call_transform>:
c0011520: 3d 20 c0 01 lis r9,-16383 <== r9 = 0xc001 << 16
c0011524: 39 29 15 18 addi r9,r9,5400 <== r9 += 0x1518
c0011528: 7c 05 48 00 cmpw r5,r9 <== r9 has value 0xc0011518 here
So if static_call_update() is called with one of the other instances of
__static_call_return0(), arch_static_call_transform() won't recognise it.
In order to work properly, a single global instance of
__static_call_return0() is required.
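One way to satisfy that (a sketch of the idea, not necessarily the
exact patch) is a single out-of-line, exported definition that every
translation unit references:

  /* kernel/static_call.c */
  long __static_call_return0(void)
  {
  	return 0;
  }
  EXPORT_SYMBOL_GPL(__static_call_return0);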
Fixes: 3f2a8fc4b15d ("static_call/x86: Add __static_call_return0()")
Signed-off-by: Christophe Leroy <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Acked-by: Josh Poimboeuf <[email protected]>
Link: https://lkml.kernel.org/r/30821468a0e7d28251954b578e5051dc09300d04.1647258493.git.christophe.leroy@csgroup.eu
|
|
Paolo reported that the instruction sequence that is used to replace:
call __static_call_return0
namely:
66 66 48 31 c0 data16 data16 xor %rax,%rax
decodes to something else on i386, namely:
66 66 48 data16 dec %ax
31 c0 xor %eax,%eax
This is a nonsensical sequence that happens to have the same outcome.
However, an important distinction is that it consists of two
instructions, which is a problem when the sequence needs to be
overwritten with a regular call instruction again.
As such, replace the instruction with something that decodes the same
on both i386 and x86_64.
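One candidate (shown as an assumption about the chosen encoding, not a
quote of the patch) is a 5-byte single instruction with identical
decode in both modes:

  /* b8 00 00 00 00 -- mov $0x0,%eax; one instruction on i386 and x86_64 */
  static const u8 mov32r0[] = { 0xb8, 0x00, 0x00, 0x00, 0x00 };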
Fixes: 3f2a8fc4b15d ("static_call/x86: Add __static_call_return0()")
Reported-by: Paolo Bonzini <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
|
|
kernel/entry/common.c: In function ‘dynamic_irqentry_exit_cond_resched’:
kernel/entry/common.c:409:14: error: implicit declaration of function ‘static_key_unlikely’; did you mean ‘static_key_enable’? [-Werror=implicit-function-declaration]
409 | if (!static_key_unlikely(&sk_dynamic_irqentry_exit_cond_resched))
| ^~~~~~~~~~~~~~~~~~~
| static_key_enable
static_key_unlikely() should be static_branch_unlikely().
Fixes: 99cf983cc8bca ("sched/preempt: Add PREEMPT_DYNAMIC using static keys")
Signed-off-by: Sven Schnelle <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Mark Rutland <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
|
|
try_steal_cookie() looks at task_struct::cpus_mask to decide if the
task could be moved to `this' CPU. It ignores that the task might be in
a migration-disabled section while not on the CPU. In that case the
task must not be moved, otherwise per-CPU assumptions are broken.
Use is_cpu_allowed(), as suggested by Peter Zijlstra, to decide if a
task can be moved.
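A sketch of the shape of the change in try_steal_cookie() (simplified):

  /* was: if (!cpumask_test_cpu(this, &p->cpus_mask)) goto next; */
  if (!is_cpu_allowed(p, this))	/* also honours migration-disable */
  	goto next;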
Fixes: d2dfa17bc7de6 ("sched: Trivial forced-newidle balancer")
Signed-off-by: Sebastian Andrzej Siewior <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
|
|
Steve reported that ChromeOS encounters the forceidle balancer being
run from rt_mutex_setprio()'s balance_callback() invocation and
explodes.
Now, the forceidle balancer gets queued every time the idle task gets
selected, set_next_task(), which is strictly too often.
rt_mutex_setprio() also uses set_next_task() in the 'change' pattern:
  queued = task_on_rq_queued(p); /* p->on_rq == TASK_ON_RQ_QUEUED */
  running = task_current(rq, p); /* rq->curr == p */

  if (queued)
          dequeue_task(...);
  if (running)
          put_prev_task(...);

  /* change task properties */

  if (queued)
          enqueue_task(...);
  if (running)
          set_next_task(...);
However, rt_mutex_setprio() will explicitly not run this pattern on
the idle task (since priority boosting the idle task is quite insane).
Most other 'change' pattern users are pidhash based and would also not
apply to idle.
Also, the change pattern doesn't contain a __balance_callback()
invocation and hence we could have an out-of-band balance-callback,
which *should* trigger the WARN in rq_pin_lock() (which guards against
this exact anti-pattern).
So while none of that explains how this happens, it does indicate that
having it in set_next_task() might not be the most robust option.
Instead, explicitly queue the forceidle balancer from pick_next_task()
when it does indeed result in forceidle selection. Having it here
ensures it can only be triggered under the __schedule() rq->lock
instance, and hence must be run from that context.
This also happens to clean up the code a little, so win-win.
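A sketch of where the queueing moves to (simplified; the exact guard is
an assumption):

  /* in pick_next_task(), once a forced-idle selection has been made */
  if (rq->core->core_forceidle_count)
  	queue_core_balance(rq);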
Fixes: d2dfa17bc7de ("sched: Trivial forced-newidle balancer")
Reported-by: Steven Rostedt <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Tested-by: T.J. Alumbaugh <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
|
|
MIPI-DSI devices, if they are controlled through the bus itself, have to
be described as a child node of the controller they are attached to.
Thus, there's no requirement on the controller having an OF-Graph output
port to model the data stream: it's assumed that it would go from the
parent to the child.
However, some bridges controlled through the DSI bus still require an
input OF-Graph port, thus requiring a controller with an OF-Graph output
port. This prevents those bridges from being used with controllers
that do not have one, for no particular reason.
Let's drop that requirement.
Signed-off-by: Maxime Ripard <[email protected]>
Reviewed-by: Rob Herring <[email protected]>
Link: https://patchwork.freedesktop.org/patch/msgid/[email protected]
|
|
Singleton chunks (INIT, HEARTBEAT PMTU probes, and SHUTDOWN-COMPLETE)
are not counted in the SCTP_GET_ASSOC_STATS "sas_octrlchunks" counter
available to the assoc owner.
These are all control chunks so they should be counted as such.
Add counting of singleton chunks so they are properly accounted for.
Fixes: 196d67593439 ("sctp: Add support to per-association statistics via a new SCTP_GET_ASSOC_STATS call")
Signed-off-by: Jamie Bainbridge <[email protected]>
Acked-by: Marcelo Ricardo Leitner <[email protected]>
Link: https://lore.kernel.org/r/c9ba8785789880cf07923b8a5051e174442ea9ee.1649029663.git.jamie.bainbridge@gmail.com
Signed-off-by: Paolo Abeni <[email protected]>
|
|
Update the internal module version number to 2.36.
Signed-off-by: Steve French <[email protected]>
|
|
Do not reuse existing sessions and tcons in DFS failover as it might
connect to different servers and shares.
Signed-off-by: Paulo Alcantara (SUSE) <[email protected]>
Cc: [email protected]
Reviewed-by: Enzo Matsumiya <[email protected]>
Signed-off-by: Steve French <[email protected]>
|
|
In preparation for not necessarily having a file assigned at prep time,
defer any initialization associated with the file to when the opcode
handler is run.
Cc: [email protected] # v5.15+
Signed-off-by: Jens Axboe <[email protected]>
|
|
In preparation for not using the file at prep time, defer checking if this
file refers to a valid io_uring instance until issue time.
This also means we can get rid of the cleanup flag for splice and tee.
Cc: [email protected] # v5.15+
Signed-off-by: Jens Axboe <[email protected]>
|
|
Fixes a crash when booting with nouveau on the affected platforms.
Fixes: 4cdd2450bf73 ("drm/nouveau/pmu/gm200-: use alternate falcon reset sequence")
Cc: Ben Skeggs <[email protected]>
Cc: Karol Herbst <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: <[email protected]> # v5.17+
Signed-off-by: Karol Herbst <[email protected]>
Reviewed-by: Lyude Paul <[email protected]>
Link: https://patchwork.freedesktop.org/patch/msgid/[email protected]
|
|
FIXTURE_VARIANT data is passed to FIXTURE_SETUP and TEST_F as "variant".
In some cases, the variant will change the setup, such that expectations
also change on teardown. Also pass variant to FIXTURE_TEARDOWN.
The new FIXTURE_TEARDOWN logic is identical to that in FIXTURE_SETUP,
right above.
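For illustration, a hypothetical fixture whose teardown depends on the
variant (all names invented for the example):

  FIXTURE(conn) {
  	int fd;
  };

  FIXTURE_VARIANT(conn) {
  	bool close_in_test;
  };

  FIXTURE_TEARDOWN(conn)
  {
  	/* 'variant' is now in scope here, just as in FIXTURE_SETUP */
  	if (!variant->close_in_test)
  		close(self->fd);
  }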
Signed-off-by: Willem de Bruijn <[email protected]>
Reviewed-by: Jakub Kicinski <[email protected]>
Acked-by: Kees Cook <[email protected]>
Signed-off-by: Kees Cook <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Shuah Khan <[email protected]>
|
|
The kselftest test harness has traditionally not run the registered
TEARDOWN handler when a test encountered an ASSERT. This creates
unexpected situations and tests need to be very careful about using
ASSERT, which seems a needless hurdle for test writers.
Because of the harness's design for optional failure handlers, the
original implementation of ASSERT used an abort() to immediately
stop execution, but that meant the context for running teardown was
lost. Instead, use setjmp/longjmp so that teardown can be done.
Failed SETUP routines continue to not be followed by TEARDOWN, though.
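The mechanism, reduced to a standalone C sketch (not the harness source
itself):

  #include <setjmp.h>
  #include <stdio.h>

  static jmp_buf failure_env;

  static void run_test(void)
  {
  	longjmp(failure_env, 1);	/* simulate a failed ASSERT */
  }

  int main(void)
  {
  	if (setjmp(failure_env) == 0)
  		run_test();	/* a failed ASSERT longjmps back here */
  	printf("teardown still runs\n");	/* reachable after the failure */
  	return 0;
  }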
Cc: Andy Lutomirski <[email protected]>
Cc: Will Drewry <[email protected]>
Cc: Shuah Khan <[email protected]>
Cc: [email protected]
Signed-off-by: Kees Cook <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
|
|
I fixed a few warnings like this in commit e2aa5e650b07
("selftests: fixup build warnings in pidfd / clone3 tests"), but I
missed this one by mistake. Since this variable is unused, remove it.
Signed-off-by: Axel Rasmussen <[email protected]>
Reviewed-by: Christian Brauner <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
|
|
The way the test target was defined before, when building with clang we
get a command line like this:
clang -Wall -Werror -g -I../../../../usr/include/ \
regression_enomem.c ../pidfd/pidfd.h -o regression_enomem
This yields an error, because clang thinks we want to produce both a *.o
file, as well as a precompiled header:
clang: error: cannot specify -o when generating multiple output files
gcc, for whatever reason, doesn't exhibit the same behavior, which I
suspect is why the problem wasn't noticed before.
This can be fixed simply by using the LOCAL_HDRS infrastructure the
selftests lib.mk provides. It does the right thing and marks the target
as depending on the header (so if the header changes, we rebuild), but
it filters the header out of the compiler command line, so we don't get
the error described above.
Signed-off-by: Axel Rasmussen <[email protected]>
Reviewed-by: Christian Brauner <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
|
|
In order to successfully build all these 32-bit tests, the 32-bit gcc
and glibc packages, named gcc-32bit and glibc-devel-static-32bit on SUSE,
need to be installed.
Add this information to warn_32bit_failure.
Signed-off-by: Geliang Tang <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
|
|
Fix the following coccicheck warning:
tools/testing/selftests/proc/proc-pid-vm.c:371:26-27:
WARNING: Use ARRAY_SIZE
tools/testing/selftests/proc/proc-pid-vm.c:420:26-27:
WARNING: Use ARRAY_SIZE
It has been tested with gcc (Debian 8.3.0-6) 8.3.0 on x86_64.
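The transformation the warning asks for, as a small standalone sketch:

  #include <stddef.h>

  #define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

  static const int vals[] = { 1, 2, 3 };

  /* before: for (i = 0; i < sizeof(vals)/sizeof(vals[0]); i++) */
  static const size_t n = ARRAY_SIZE(vals);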
Signed-off-by: Guo Zhengkui <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
|
|
Fix the following coccicheck warning:
tools/testing/selftests/vDSO/vdso_test_correctness.c:309:46-47:
WARNING: Use ARRAY_SIZE
tools/testing/selftests/vDSO/vdso_test_correctness.c:373:46-47:
WARNING: Use ARRAY_SIZE
It has been tested with gcc (Debian 8.3.0-6) 8.3.0 on x86_64.
Signed-off-by: Guo Zhengkui <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
|
|
Revert commit 87ebbb8c612b ("ACPI: processor: idle: Only flush cache
on entering C3") that broke the assumptions of the acpi_idle_play_dead()
callers.
Namely, the CPU cache must always be flushed in acpi_idle_play_dead(),
regardless of the target C-state that is going to be requested, because
this is likely to be part of a CPU offline procedure or preparation for
entering a system-wide sleep state and the lack of synchronization
between the CPU cache and RAM may lead to problems going forward, for
example when the CPU is brought back online.
In particular, it breaks resume from suspend-to-RAM on Lenovo ThinkPad
C13 which fails occasionally until the problematic commit is reverted.
Signed-off-by: Akihiko Odaki <[email protected]>
[ rjw: Changelog ]
Signed-off-by: Rafael J. Wysocki <[email protected]>
|
|
Commit ddbd60c779b4 ("kunit: use --build_dir=.kunit as default") changed
the default --build_dir, which had the side effect of making
`.kunitconfig` move to `.kunit/.kunitconfig`.
However, the first few lines of kunit/start.rst never got updated, oops.
Fix this by telling people to run kunit.py first, which will
automatically generate the .kunit directory and .kunitconfig file, and
then edit the file manually as desired.
Reported-by: Yifan Yuan <[email protected]>
Signed-off-by: Daniel Latypov <[email protected]>
Reviewed-by: David Gow <[email protected]>
Reviewed-by: Brendan Higgins <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>
|
|
The documentation of the function rvt_error_qp says both r_lock and s_lock
need to be held when calling that function. It also asserts using lockdep
that both of those locks are held. However, the commit I referenced in
Fixes accidentally makes the call to rvt_error_qp in rvt_ruc_loopback no
longer covered by r_lock. This results in the lockdep assertion failing
and also possibly in a race condition.
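A sketch of the shape of the fix at the rvt_ruc_loopback() call site
(simplified, with the surrounding code elided):

  spin_lock_irqsave(&qp->r_lock, flags);	/* was missing */
  spin_lock(&qp->s_lock);
  rvt_error_qp(qp, wc.status);
  spin_unlock(&qp->s_lock);
  spin_unlock_irqrestore(&qp->r_lock, flags);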
Fixes: d757c60eca9b ("IB/rdmavt: Fix concurrency panics in QP post_send and modify to error")
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Niels Dossche <[email protected]>
Acked-by: Dennis Dalessandro <[email protected]>
Signed-off-by: Jason Gunthorpe <[email protected]>
|
|
0day reported a regression on a microbenchmark which is intended to
stress the TLB flushing path:
https://lore.kernel.org/all/20220317090415.GE735@xsang-OptiPlex-9020/
It pointed at a commit from Nadav which intended to remove retpoline
overhead in the TLB flushing path by taking the 'cond'-ition in
on_each_cpu_cond_mask(), pre-calculating it, and incorporating it into
'cpumask'. That allowed the code to use a bunch of earlier direct
calls instead of later indirect calls that need a retpoline.
But, in practice, threads can go idle (and into lazy TLB mode where
they don't need to flush their TLB) between the early and late calls.
It works in this direction and not in the other because TLB-flushing
threads tend to hold mmap_lock for write. Contention on that lock
causes threads to _go_ idle right in this early/late window.
There was no performance data in the original commit specific
to the retpoline overhead. I did a few tests on a system with
retpolines:
https://lore.kernel.org/all/[email protected]/
which showed a possible small win. But, that small win pales in
comparison with the bigger loss induced on non-retpoline systems.
Revert the patch that removed the retpolines. This was not a
clean revert, but it was self-contained enough not to be too painful.
Fixes: 6035152d8eeb ("x86/mm/tlb: Open-code on_each_cpu_cond_mask() for tlb_is_not_lazy()")
Reported-by: kernel test robot <[email protected]>
Signed-off-by: Dave Hansen <[email protected]>
Signed-off-by: Borislav Petkov <[email protected]>
Acked-by: Nadav Amit <[email protected]>
Cc: <[email protected]>
Link: https://lkml.kernel.org/r/164874672286.389.7021457716635788197.tip-bot2@tip-bot2
|
|
add_hwgenerator_randomness() tries to only use the required amount of input
for fast init, but credits all the entropy, rather than a fraction of
it. Since it's hard to determine how much entropy is left over out of a
non-uniformly random sample, either give it all to fast init or credit
it, but don't attempt to do both. In the process, we can clean up the
injection code to no longer need to return a value.
Signed-off-by: Jan Varho <[email protected]>
[Jason: expanded commit message]
Fixes: 73c7733f122e ("random: do not throw away excess input to crng_fast_load")
Cc: [email protected] # 5.17+, requires af704c856e88
Signed-off-by: Jason A. Donenfeld <[email protected]>
|
|
When list_for_each_entry() completes the iteration over the whole list
without breaking the loop, the iterator value will be a bogus pointer
computed based on the head element.
While it is safe to use the pointer to determine if it was computed
based on the head element, either with list_entry_is_head() or
&pos->member == head, using the iterator variable after the loop should
be avoided.
In preparation to limit the scope of a list iterator to the list
traversal loop, use a dedicated pointer to point to the found element [1].
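A sketch of the pattern (hypothetical list and match(); assumes
<linux/list.h>):

  struct item *found = NULL, *iter;

  list_for_each_entry(iter, &head, list) {
  	if (match(iter)) {
  		found = iter;
  		break;
  	}
  }
  if (found)
  	use(found);	/* never dereference 'iter' after the loop */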
Link: https://lore.kernel.org/all/CAHk-=wgRr_D8CB-D9Kg-c=EHreAsk5SqXPwr9Y7k9sA6cWXJ6w@mail.gmail.com/ [1]
Reviewed-by: Paulo Alcantara (SUSE) <[email protected]>
Signed-off-by: Jakob Koschel <[email protected]>
Signed-off-by: Steve French <[email protected]>
|
|
To avoid racing with demultiplex thread while it is handling data on
socket, use cifs_signal_cifsd_for_reconnect() helper for marking
current server to reconnect and let the demultiplex thread handle the
rest.
Fixes: dca65818c80c ("cifs: use a different reconnect helper for non-cifsd threads")
Reviewed-by: Enzo Matsumiya <[email protected]>
Reviewed-by: Shyam Prasad N <[email protected]>
Signed-off-by: Paulo Alcantara (SUSE) <[email protected]>
Signed-off-by: Steve French <[email protected]>
|
|
allmodconfig builds on 32-bit architectures fail with the following error.
drivers/misc/habanalabs/common/memory.c: In function 'alloc_device_memory':
drivers/misc/habanalabs/common/memory.c:153:49: error:
cast from pointer to integer of different size
Fix the typecast. While at it, drop other unnecessary typecasts associated
with the same commit.
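The general shape of such a fix (illustrative only; names simplified):

  /* cast through uintptr_t so 32-bit builds don't warn on the size change */
  u64 addr = (u64)(uintptr_t)vaddr;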
Fixes: e8458e20e0a3c ("habanalabs: make sure device mem alloc is page aligned")
Cc: Ohad Sharabi <[email protected]>
Signed-off-by: Guenter Roeck <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Greg Kroah-Hartman <[email protected]>
|
|
In __nat25_add_pppoe_tag(), the tag length is read from the tag data
structure. The value is kept in network format, but read as a raw value.
With -Warray-bounds, this results in the following gcc error/warning
when building the driver on alpha.
In function '__nat25_add_pppoe_tag',
inlined from 'nat25_db_handle' at
drivers/staging/r8188eu/core/rtw_br_ext.c:479:11:
arch/alpha/include/asm/string.h:22:16: error:
'__builtin_memcpy' forming offset [40, 2051] is out of the bounds
[0, 40] of object 'tag_buf' with type 'unsigned char[40]'
Add the missing be16_to_cpu() to fix the compile error. It should be
noted, however, that this fix means that the code did probably not work
on any little endian systems and/or that the driver has other endiannes
related issues. A build with C=1 suggests that this is indeed the case.
This patch does not attempt to fix any of those other issues.
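The gist of the fix, sketched with simplified names (the real field and
buffer names may differ):

  /* tag_len is big-endian on the wire; convert before using it as a length */
  data_len = be16_to_cpu(tag->tag_len);
  memcpy(pbuf, tag->tag_data, data_len);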
Fixes: 15865124feed ("staging: r8188eu: introduce new core dir for RTL8188eu driver")
Cc: Phillip Potter <[email protected]>
Signed-off-by: Guenter Roeck <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Greg Kroah-Hartman <[email protected]>
|
|
On the passive side, when the disconnectReq event comes, if the current
state is MRA_REP_RCVD, it needs to cancel the MAD before entering the
DREQ_RCVD and TIMEWAIT states; otherwise destroy_id may block until
this MAD reaches its timeout.
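A sketch of the idea in the DREQ handler's state switch (simplified;
assumes the current one-argument ib_cancel_mad() signature):

  case IB_CM_MRA_REP_RCVD:
  	ib_cancel_mad(cm_id_priv->msg);
  	break;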
Fixes: a977049dacde ("[PATCH] IB: Add the kernel CM implementation")
Link: https://lore.kernel.org/r/75261c00c1d82128b1d981af9ff46e994186e621.1649062436.git.leonro@nvidia.com
Signed-off-by: Mark Zhang <[email protected]>
Reviewed-by: Maor Gottlieb <[email protected]>
Signed-off-by: Leon Romanovsky <[email protected]>
Signed-off-by: Jason Gunthorpe <[email protected]>
|
|
Update cache->last_add when returning an MR to the cache so that the cache
work won't remove it.
Fixes: b9358bdbc713 ("RDMA/mlx5: Fix locking in MR cache work queue")
Link: https://lore.kernel.org/r/c99f076fce4b44829d434936bbcd3b5fc4c95020.1649062436.git.leonro@nvidia.com
Signed-off-by: Aharon Landau <[email protected]>
Reviewed-by: Shay Drory <[email protected]>
Signed-off-by: Leon Romanovsky <[email protected]>
Signed-off-by: Jason Gunthorpe <[email protected]>
|
|
Don't remove MRs from the cache if the removal needs to be delayed.
Fixes: b9358bdbc713 ("RDMA/mlx5: Fix locking in MR cache work queue")
Link: https://lore.kernel.org/r/c3087a90ff362c8796c7eaa2715128743ce36722.1649062436.git.leonro@nvidia.com
Signed-off-by: Aharon Landau <[email protected]>
Reviewed-by: Shay Drory <[email protected]>
Signed-off-by: Leon Romanovsky <[email protected]>
Signed-off-by: Jason Gunthorpe <[email protected]>
|
|
Remove Mike's contact from maintainers file.
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Dennis Dalessandro <[email protected]>
Signed-off-by: Jason Gunthorpe <[email protected]>
|
|
Welcome Leon to the maintainer list so we continue to have two people on a
medium sized subsystem.
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Jason Gunthorpe <[email protected]>
|
|
GPIO chip irq members are exposed before they are completely
initialized, and this leads to race conditions.
One such issue was observed for the gc->irq.domain variable, which
was accessed through the I2C interface in gpiochip_to_irq() before
it could be initialized by gpiochip_add_irqchip(). This resulted in a
kernel NULL pointer dereference.
The following log is included for reference:
kernel: Call Trace:
kernel: gpiod_to_irq+0x53/0x70
kernel: acpi_dev_gpio_irq_get_by+0x113/0x1f0
kernel: i2c_acpi_get_irq+0xc0/0xd0
kernel: i2c_device_probe+0x28a/0x2a0
kernel: really_probe+0xf2/0x460
kernel: RIP: 0010:gpiochip_to_irq+0x47/0xc0
To avoid such scenarios, restrict usage of GPIO chip irq members before
they are completely initialized.
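A sketch of the approach: publish an 'initialized' flag only after
setup completes, and have lookups bail out until then (the flag name is
an assumption):

  /* in gpiochip_to_irq() */
  if (!gc->irq.initialized)
  	return -EPROBE_DEFER;	/* callers retry once setup is done */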
Signed-off-by: Shreeya Patel <[email protected]>
Cc: [email protected]
Reviewed-by: Andy Shevchenko <[email protected]>
Reviewed-by: Linus Walleij <[email protected]>
Signed-off-by: Bartosz Golaszewski <[email protected]>
|
|
When the page_ring is not used, page_ptr_mask is 0.
Do not dereference page_ring[0] in this case.
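A sketch of the guard (simplified):

  /* page_ptr_mask == 0 means the recycle ring was never allocated */
  if (unlikely(!rx_queue->page_ring))
  	return NULL;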
Fixes: 2768935a4660 ("sfc: reuse pages to avoid DMA mapping/unmapping costs")
Reported-by: Taehee Yoo <[email protected]>
Signed-off-by: Martin Habets <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Smatch reports this issue:
dwmac-loongson.c:208:19: warning: symbol 'loongson_dwmac_driver' was
not declared. Should it be static?
loongson_dwmac_driver is only used in dwmac-loongson.c. File-scope
variables used in only one file should be static, so change
loongson_dwmac_driver's storage-class specifier from global to static.
Signed-off-by: Tom Rix <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Previous documentation was vague, so we included SDR104 for slow SDnH
clock settings. It turns out now that it is only needed for HS400.
Fixes: bb6d3fa98a41 ("clk: renesas: rcar-gen3: Switch to new SD clock handling")
Cc: [email protected]
Reported-by: Yoshihiro Shimoda <[email protected]>
Signed-off-by: Wolfram Sang <[email protected]>
Reviewed-by: Yoshihiro Shimoda <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Ulf Hansson <[email protected]>
|
|
Michael Chan says:
====================
bnxt_en: XDP redirect fixes
This series includes 3 fixes related to the XDP redirect code path in
the driver. The first one adds locking when the number of TX XDP rings
is less than the number of CPUs. The second one adjusts the maximum MTU
that can support XDP with enough tail room in the buffer. The 3rd one
fixes a race condition between TX ring shutdown and the XDP redirect path.
====================
Signed-off-by: David S. Miller <[email protected]>
|
|
Add checks in the XDP redirect callback to prevent XDP from running when
the TX ring is undergoing shutdown.
Also remove redundant checks in the XDP redirect callback to validate the
txr and the flag that indicates the ring supports XDP. The modulo
arithmetic on 'tx_nr_rings_xdp' already guarantees that the derived TX
ring is an XDP ring. txr is also guaranteed to be valid after checking
BNXT_STATE_OPEN and within the RCU grace period.
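A sketch of the added guard in the redirect path (simplified):

  /* in bnxt_xdp_xmit(): refuse redirects while the device is not open */
  if (!test_bit(BNXT_STATE_OPEN, &bp->state))
  	return -EINVAL;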
Fixes: f18c2b77b2e4 ("bnxt_en: optimized XDP_REDIRECT support")
Reviewed-by: Vladimir Olovyannikov <[email protected]>
Signed-off-by: Ray Jui <[email protected]>
Signed-off-by: Michael Chan <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Insufficient space was being reserved in the page used for packet
reception, so the interface MTU could be set too large to still have
room for the contents of the packet when doing XDP redirect. This
resulted in the following message when redirecting a packet between
3520 and 3822 bytes with an MTU of 3822:
[311815.561880] XDP_WARN: xdp_update_frame_from_buff(line:200): Driver BUG: missing reserved tailroom
Fixes: f18c2b77b2e4 ("bnxt_en: optimized XDP_REDIRECT support")
Reviewed-by: Somnath Kotur <[email protected]>
Reviewed-by: Pavan Chebbi <[email protected]>
Signed-off-by: Andy Gospodarek <[email protected]>
Signed-off-by: Michael Chan <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|