|
There was some ancient code which padded the result of
a clear key ME or CRT operation with a PKCS 1.2 header.
According to the comment this was only needed by crypto
cards older than the CEX2. These cards are no longer
supported, so this patch removes this obscure result
padding code.
Signed-off-by: Harald Freudenberger <[email protected]>
Reviewed-by: Juergen Christ <[email protected]>
Reviewed-by: Holger Dengler <[email protected]>
Signed-off-by: Vasily Gorbik <[email protected]>
|
|
Currently, exception tables are marked as ro_after_init. However,
since they are sorted at compile time using scripts/sorttable,
they can be moved to RO_DATA using the RO_EXCEPTION_TABLE_ALIGN macro,
which is specifically designed for this purpose.
Suggested-by: Heiko Carstens <[email protected]>
Reviewed-by: Heiko Carstens <[email protected]>
Signed-off-by: Vasily Gorbik <[email protected]>
|
|
Since commit 4efd417f298b ("s390: raise minimum supported machine
generation to z10"), the long-displacement facility is assumed and
required for the kernel. Clean up a couple of places in the entry code
where long-displacement can be used directly instead of going through a
base register.
However, there are still a few other places where a base register has
to be used to extend short-displacement for the second lowcore page
access. Notably, boot/head.S still has to be built for z900, and in
mcck_int_handler the spt and lbear instructions, which have no
long-displacement forms, need to access save areas at the second
lowcore page.
Reviewed-by: Heiko Carstens <[email protected]>
Signed-off-by: Vasily Gorbik <[email protected]>
|
|
Heiko Carstens says:
===================
There are a couple of oddities within the s390 uaccess library
functions. Therefore clean up the whole uaccess.c file.
There is no functional change, only improved readability. The output
of "objdump -Dr" was always compared before/after each patch to make
sure that the generated object file is identical, if that could be
expected. Therefore the series also includes more patches than really
required to clean up the code.
Furthermore the kunit usercopy tests also still pass.
===================
Signed-off-by: Vasily Gorbik <[email protected]>
|
|
In order to get uaccess.c (nearly) checkpatch warning free remove an
extra blank line:
CHECK: Blank lines aren't necessary before a close brace '}'
+
+}
Reviewed-by: Gerald Schaefer <[email protected]>
Signed-off-by: Heiko Carstens <[email protected]>
Signed-off-by: Vasily Gorbik <[email protected]>
|
|
Get rid of the unneeded val local variable and pass the constant
value directly as operand value. In addition this turns the val
operand into an input operand, since it is not changed within the
inline assemblies.
This in turn also requires adding the earlyclobber constraint modifier
to all output operands, since the (former) val operand is used after
all output operands have been modified.
The usercopy kunit tests still pass after this change.
Reviewed-by: Gerald Schaefer <[email protected]>
Signed-off-by: Heiko Carstens <[email protected]>
Signed-off-by: Vasily Gorbik <[email protected]>
|
|
Rename tmp1 and tmp2 variables to more meaningful val (for value) and rem
(for remainder).
Except for debug sections the output of "objdump -Dr" of the uaccess object
file is identical before/after this change.
Reviewed-by: Gerald Schaefer <[email protected]>
Signed-off-by: Heiko Carstens <[email protected]>
Signed-off-by: Vasily Gorbik <[email protected]>
|
|
Reviewed-by: Gerald Schaefer <[email protected]>
Signed-off-by: Heiko Carstens <[email protected]>
Signed-off-by: Vasily Gorbik <[email protected]>
|
|
Rename and sort labels in uaccess inline assemblies to increase
readability. In addition have only one EX_TABLE entry per line - also to
increase readability.
Except for debug sections the output of "objdump -Dr" of the uaccess object
file is identical before/after this change.
Reviewed-by: Gerald Schaefer <[email protected]>
Signed-off-by: Heiko Carstens <[email protected]>
Signed-off-by: Vasily Gorbik <[email protected]>
|
|
Remove an unused label in all three uaccess inline assemblies.
Reviewed-by: Gerald Schaefer <[email protected]>
Signed-off-by: Heiko Carstens <[email protected]>
Signed-off-by: Vasily Gorbik <[email protected]>
|
|
Improve readability of the uaccess inline assemblies by using symbolic
names for all input and output operands.
Except for debug sections the output of "objdump -Dr" of the uaccess object
file is identical before/after this change.
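For illustration only, a minimal sketch of the style (simplified operands,
not the actual uaccess assembly): symbolic operand names let the template
reference %[name] instead of positional %0/%1.
static inline unsigned long add_u64(unsigned long a, unsigned long b)
{
        /* Illustrative sketch: [sum] and [addend] are symbolic operand
         * names referenced as %[sum] and %[addend] in the template.
         */
        asm volatile(
                "       algr    %[sum],%[addend]\n"
                : [sum] "+d" (a)
                : [addend] "d" (b)
                : "cc");
        return a;
}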
Reviewed-by: Gerald Schaefer <[email protected]>
Signed-off-by: Heiko Carstens <[email protected]>
Signed-off-by: Vasily Gorbik <[email protected]>
|
|
Add missing earlyclobber annotation to size, to, and tmp2 operands of the
__clear_user() inline assembly since they are modified or written to before
the last usage of all input operands. This can lead to incorrect register
allocation for the inline assembly.
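A generic, hedged sketch of what the earlyclobber modifier expresses
(simplified, not the real __clear_user() assembly): an output marked
"=&d" is written before the inputs are consumed for the last time, so
the compiler must not give it the same register as any input.
static inline unsigned long square_diff(unsigned long a, unsigned long b)
{
        unsigned long tmp;

        /* "tmp" is written while "a" and "b" are still needed, hence
         * the earlyclobber "&" in its constraint.
         */
        asm(
                "       lgr     %[tmp],%[a]\n"
                "       sgr     %[tmp],%[b]\n"
                "       msgr    %[tmp],%[tmp]\n"
                : [tmp] "=&d" (tmp)
                : [a] "d" (a), [b] "d" (b)
                : "cc");
        return tmp;
}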
Fixes: 6c2a9e6df604 ("[S390] Use alternative user-copy operations for new hardware.")
Reported-by: Mark Rutland <[email protected]>
Link: https://lore.kernel.org/all/[email protected]/
Cc: [email protected]
Reviewed-by: Gerald Schaefer <[email protected]>
Signed-off-by: Heiko Carstens <[email protected]>
Signed-off-by: Vasily Gorbik <[email protected]>
|
|
If there is no driver match function, the driver core assumes that each
candidate pair (driver, device) matches, see driver_match_device().
Drop the matrix bus's match function, which always returned 1 and thus
implemented the same behaviour as having no match function at all.
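As a hedged illustration (bus name and layout below are made up, not the
actual matrix bus code), leaving .match unset lets driver_match_device()
fall back to its "always match" behaviour:
#include <linux/device.h>

static struct bus_type example_bus_type = {
        .name = "example_bus",
        /* .match intentionally not set: the driver core then treats
         * every (driver, device) pair on this bus as a match, which is
         * exactly what a match function returning 1 implemented.
         */
};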
Signed-off-by: Lizhe <[email protected]>
Reviewed-by: Tony Krowiak <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Heiko Carstens <[email protected]>
Signed-off-by: Vasily Gorbik <[email protected]>
|
|
s390 trivially supports the ARCH_HAS_MEMBARRIER_SYNC_CORE requirements
since the lpswe(y) instruction used to return from any kernel context to
user space performs CPU serialization. This is very similar to arm, arm64
and powerpc.
See commit 70216e18e519 ("membarrier: Provide core serializing command,
*_SYNC_CORE") for further details.
Signed-off-by: Heiko Carstens <[email protected]>
Signed-off-by: Vasily Gorbik <[email protected]>
|
|
This flag is used to process only fully populated sampling buffers
when a sampling event is stopped on a CPU. By default the last sampling
buffer is also scanned for samples even if the sampling block full
indicator is not set in the trailer entry of a sampling buffer page.
This flag can be set via perf_event_attr::config1 field. It was never
used and never documented. It is useless now.
With PERF_CPUM_SF_FULL_BLOCKS:
When a process is scheduled off the CPU, the sampling is stopped and
the samples are copied to the perf ring buffer and marked invalid.
When stopped at the last full sample buffer page (which is
achieved with the PERF_CPUM_SF_FULL_BLOCKS option), the hardware
sampling will resume at the first free sample entry in the current,
partially filled sample buffer.
Without PERF_CPUM_SF_FULL_BLOCKS (default behavior):
The partially filled last sample buffer is scanned and valid samples
are saved to the perf ring buffer. The valid samples are marked invalid.
The sampling is resumed when the process is scheduled on this CPU.
Again the hardware sampling will resume at the first free sample entry in
the current, partially filled sample buffer.
Now the next interrupt handler invocation scans the
full sample block and saves the valid samples to the ring buffer.
It omits the invalid samples at the top of the buffer.
The default behavior is fully sufficient, therefore remove this feature.
Signed-off-by: Thomas Richter <[email protected]>
Acked-by: Hendrik Brueckner <[email protected]>
Acked-by: Sumanth Korikkar <[email protected]>
Signed-off-by: Vasily Gorbik <[email protected]>
|
|
Make use of atomic_fetch_xor() instead of an atomic_cmpxchg() loop to
implement atomic_xor_bits() (aka atomic_xor_return()). This makes the C
code more readable and in addition generates better code, since for z196
and newer a single lax instruction is generated instead of a cmpxchg()
loop.
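A simplified before/after sketch of the conversion (not necessarily the
exact code from this patch):
#include <linux/atomic.h>

/* Before: open-coded cmpxchg loop (simplified). */
static unsigned int atomic_xor_bits_old(atomic_t *v, unsigned int bits)
{
        unsigned int old, new;

        do {
                old = atomic_read(v);
                new = old ^ bits;
        } while (atomic_cmpxchg(v, old, new) != old);
        return new;
}

/* After: atomic_fetch_xor() returns the old value; on z196 and newer
 * this compiles to a single lax instruction.
 */
static unsigned int atomic_xor_bits_new(atomic_t *v, unsigned int bits)
{
        return atomic_fetch_xor(bits, v) ^ bits;
}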
Signed-off-by: Heiko Carstens <[email protected]>
|
|
Review and extend the low level AP code to be able to
deal with asynchronously reported errors on APQNs.
The hypervisor and the SE guest may be confronted with
an asynchronously reported error on return of an AP
instruction. So all places where AP instructions are
called need review and may eventually need extensions.
However, not all places need rework: together with the
AP status and the enabled asynch bit there is always a
response code set, and the asynch error reporting comes
with new response codes which can simply be handled in
the default case of a switch statement.
The idea behind this patch is to report asynch errors
as -EPERM (read this as "Operation not permitted") which
reflects the fact that only a rapq (with the F bit
enabled) is a valid AP instruction when an asynch error
is flagged.
The AP queue state machine functions return
AP_SM_WAIT_NONE when an asynch error is detected, to
reflect the fact that the state machine can't do
anything with such an error until the queue has been
reset.
Unfortunately the ap bus scan function needed some
update, as ap_queue_info() now needs to return 3 states:
1 if an APQN exists and info is available, -1 if it is
assumed an APQN does not exist, and the new return value
0 without any info values filled. This 0 return code is
handled as "there is an APQN but we currently don't know
any more hw info about this, so please use your previous
info and try again later".
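A hedged sketch of the default-case handling described above (the
function name is illustrative, not the actual state machine code):
static int ap_sm_handle_response(unsigned int response_code)
{
        switch (response_code) {
        case AP_RESPONSE_NORMAL:
                return 0;
        default:
                /* New asynchronously reported error codes land here;
                 * only a RAPQ with the F bit set is valid until the
                 * queue has been reset, so report -EPERM.
                 */
                return -EPERM;
        }
}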
Signed-off-by: Harald Freudenberger <[email protected]>
Reviewed-by: Holger Dengler <[email protected]>
Signed-off-by: Heiko Carstens <[email protected]>
|
|
Implementation of the new functions for SE AP support:
bind, unbind and associate. There are two new sysfs
attributes for this:
/sys/devices/ap/cardxx/xx.yyyy/se_bind
/sys/devices/ap/cardxx/xx.yyyy/se_associate
Writing a 1 into the se_bind attribute triggers the
SE AP bind for this AP queue, writing a 0 into it does
an unbind - that's a reset (RAPQ) with the F bit enabled.
The se_associate attribute needs an integer value in the
range 0...2^16-1 written to it. This is the index into a
secrets table fed into the ultravisor. For more details
please see the Architecture documents.
Both of these new ap queue attributes are only visible
inside an SE guest with SB (Secure Binding) available.
Signed-off-by: Harald Freudenberger <[email protected]>
Reviewed-by: Holger Dengler <[email protected]>
Signed-off-by: Heiko Carstens <[email protected]>
|
|
For some events the ap bus needs to poll. For example,
when an AP queue is reset, polling is needed until the
reset is complete.
Also when no interrupt support is available (e.g. zVM)
there is a need to poll until all requests have been
processed and all replies have been delivered.
Polling is done with a high resolution timer, by default
run at a rate of 4 kHz (LPAR) or 666 Hz (zVM guest).
For some events (wait for reset complete, wait for irq
enable to complete) this is a much too high poll rate
which triggers a lot of TAPQ invocations.
This patch introduces the possibility for the state
machine functions to return a new wait enum
AP_SM_WAIT_LOW_TIMEOUT which gives a hint to the
ap_wait() function to set up the timer with a more
relaxed poll rate of 25 Hz where appropriate.
This patch also includes a slight rework of the sysfs
functions parsing the timer related stuff: Use of
kstrtobool and kstrtoul instead of sscanf.
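A hedged sketch of the sscanf to kstrtobool/kstrtoul conversion style
(helper names are illustrative, not the real ap bus code):
#include <linux/kernel.h>

static int parse_poll_timeout(const char *buf, unsigned long *hr_time)
{
        /* kstrtoul() replaces the former sscanf() based parsing */
        return kstrtoul(buf, 0, hr_time);
}

static int parse_poll_thread(const char *buf, bool *enable)
{
        /* kstrtobool() accepts "0"/"1", "y"/"n" and "on"/"off" */
        return kstrtobool(buf, enable);
}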
Signed-off-by: Harald Freudenberger <[email protected]>
Reviewed-by: Holger Dengler <[email protected]>
Signed-off-by: Heiko Carstens <[email protected]>
|
|
Introduce two new low level functions ap_bapq() (calls
PQAP(BAPQ)) and ap_aapq() (calls PQAP(AAPQ)). Both functions
are only meant to be used in an SE environment with the SE
AP binding facility available.
Signed-off-by: Harald Freudenberger <[email protected]>
Reviewed-by: Holger Dengler <[email protected]>
Signed-off-by: Heiko Carstens <[email protected]>
|
|
Extend the ap inline functions ap_rapq() (calls PQAP(RAPQ))
and ap_zapq() (calls PQAP(ZAPQ)) with a new parameter to
enable the new architectured F bit which forces an
unassociate and/or unbind on a secure execution associated
and/or bound queue.
Signed-off-by: Harald Freudenberger <[email protected]>
Reviewed-by: Tony Krowiak <[email protected]>
Reviewed-by: Holger Dengler <[email protected]>
Signed-off-by: Heiko Carstens <[email protected]>
|
|
With SE SB (Secure Binding) some currently unused and thus always
zero bits in the TAPQ GR2 result are now used to show the binding
state of a queue. So to check if a card has changed, the basis for
comparison is exactly this GR2 value, shown as 'ap_functions' in sysfs
(/sys/devices/ap/cardxx/ap_functions). Since this value now also
carries queue specific info, a new mask TAPQ_CARD_FUNC_CMP_MASK is
used to filter out only the bits relevant for the card compare.
For the same reason the function bits (including exactly this
bind/associate information) now need to be exposed to user space,
so that tools like lszcrypt can evaluate the binding/association
state on a per-queue basis. So here comes a new sysfs attribute
/sys/devices/ap/cardxx/xx.yyyy/ap_functions
This sysfs attribute is similar to the already existing
ap_functions attribute at ap card level. It shows the
upper 32 bits of GR2 from an invocation of TAPQ for this
AP queue.
Signed-off-by: Harald Freudenberger <[email protected]>
Reviewed-by: Holger Dengler <[email protected]>
Signed-off-by: Heiko Carstens <[email protected]>
|
|
This patch introduces a new struct ap_tapq_gr2 which covers
the response in GR2 on TAPQ invocation. This makes it much
easier and less error-prone for the calling functions to
access the right field without shifting and masking.
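A hedged sketch of the general idea (field names and bit widths here are
illustrative, not the actual struct ap_tapq_gr2 layout):
union ap_tapq_gr2_sketch {
        unsigned long value;                    /* raw GR2 contents */
        struct {
                unsigned long fac    : 32;      /* facility bits (illustrative) */
                unsigned long qdepth :  8;      /* queue depth (illustrative)   */
                unsigned long at     :  8;      /* AP type (illustrative)       */
                unsigned long        : 16;      /* unnamed filler               */
        };
};
Callers can then test a named field directly instead of shifting and
masking the raw register value by hand.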
Signed-off-by: Harald Freudenberger <[email protected]>
Reviewed-by: Tony Krowiak <[email protected]>
Reviewed-by: Holger Dengler <[email protected]>
Signed-off-by: Heiko Carstens <[email protected]>
|
|
Introduce a new AP bus sysfs attribute /sys/bus/ap/features
which shows the features from the QCI information.
Currently these feature bits are evaluated:
- QCI S bit is shown as 'APSC'
- QCI N bit is shown as 'APXA'
- QCI C bit is shown as 'QACT'
- QCI R bit is shown as 'RC8A'
- QCI B bit is shown as 'APSB'
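A hedged sketch of how such a show function could look with sysfs_emit()
and sysfs_emit_at(); the global QCI info pointer, its field names and the
exact show() prototype are assumptions, not the actual ap bus code:
static struct ap_config_info *ap_qci_info;      /* assumed global */

static ssize_t features_show(const struct bus_type *bus, char *buf)
{
        ssize_t n = 0;

        if (!ap_qci_info)
                return sysfs_emit(buf, "-\n");
        if (ap_qci_info->apsc)
                n += sysfs_emit_at(buf, n, "APSC ");
        if (ap_qci_info->apxa)
                n += sysfs_emit_at(buf, n, "APXA ");
        if (ap_qci_info->apsb)
                n += sysfs_emit_at(buf, n, "APSB ");
        n += sysfs_emit_at(buf, n, "\n");
        return n;
}
static BUS_ATTR_RO(features);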
Signed-off-by: Harald Freudenberger <[email protected]>
Reviewed-by: Tony Krowiak <[email protected]>
Reviewed-by: Holger Dengler <[email protected]>
Signed-off-by: Heiko Carstens <[email protected]>
|
|
This patch introduces an update to the ap_config_info
struct which is filled by the QCI subfunction. There
is a new bit apsb (short 'B') showing if the AP secure
bind facility is available. The patch also includes a
simple function ap_sb_available() wrapping this bit test.
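A hedged sketch of the wrapper (the QCI info pointer name is an
assumption):
static inline bool ap_sb_available(void)
{
        /* true if the AP secure bind facility (QCI 'B' bit) is present */
        return ap_qci_info && ap_qci_info->apsb;
}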
Signed-off-by: Harald Freudenberger <[email protected]>
Reviewed-by: Tony Krowiak <[email protected]>
Reviewed-by: Holger Dengler <[email protected]>
Signed-off-by: Heiko Carstens <[email protected]>
|
|
Replace scnprintf() with sysfs_emit() and friends
where possible.
Signed-off-by: Harald Freudenberger <[email protected]>
Reviewed-by: Holger Dengler <[email protected]>
Reviewed-by: Tony Krowiak <[email protected]>
Signed-off-by: Heiko Carstens <[email protected]>
|
|
The inline ap_dqap function does not return the number of
bytes actually written into the message buffer. The calling
code inspects the AP message header to figure out what kind
of AP message has been received and pulls the length
information from this header. This processing may not work
correctly in cases where only a fragment of the reply is
received.
With this patch the ap_dqap inline function now returns
the number of actually written bytes in the *length parameter.
So the calling function has a chance to compare the number of
received bytes against what the AP message header length
field states. This is especially useful in cases where a
message could only get partially received.
The low level reply processing functions needed some rework
to be able to catch this new length information and compare
it the right way. The rework also deals with some situations
where until now the reply length was not correctly calculated
and/or set.
All this has been heavily tested as the modifications on
the reply length information may affect crypto load.
Signed-off-by: Harald Freudenberger <[email protected]>
Reviewed-by: Holger Dengler <[email protected]>
Signed-off-by: Heiko Carstens <[email protected]>
|
|
Since the s390 kernel build does not support 32 bit builds any
more, there is no difference between long and long long.
So this patch reworks all occurrences of psmid (a 64 bit
value) to use unsigned long now.
Signed-off-by: Harald Freudenberger <[email protected]>
Acked-by: Heiko Carstens <[email protected]>
Reviewed-by: Holger Dengler <[email protected]>
Signed-off-by: Heiko Carstens <[email protected]>
|
|
Allow enforcing 64 byte function alignment, like it is possible for a
couple of other architectures. This may or may not be helpful for
debugging performance problems, as described with the Kconfig option.
Since the kernel also works with 64 byte function alignment there is
no reason not to allow enforcing this function alignment.
Signed-off-by: Heiko Carstens <[email protected]>
|
|
Use __ALIGN instead of open coded .align statement to make sure that
vdso code follows global kernel function alignment rules.
Signed-off-by: Heiko Carstens <[email protected]>
|
|
Use __ALIGN instead of open coded .align statement to make sure that
external expoline thunks follow global function alignment rules.
Signed-off-by: Heiko Carstens <[email protected]>
|
|
Move the ftrace hotpatch trampolines to mcount.S. This allows making
use of the standard SYM_CODE macros, which again makes sure that the
hotpatch trampolines follow the function alignment rules of the rest
of the kernel.
Signed-off-by: Heiko Carstens <[email protected]>
Acked-by: Ilya Leoshkevich <[email protected]>
Signed-off-by: Heiko Carstens <[email protected]>
|
|
Make use of CONFIG_FUNCTION_ALIGNMENT which was introduced with commit
d49a0626216b ("arch: Introduce CONFIG_FUNCTION_ALIGNMENT").
Select FUNCTION_ALIGNMENT_8B for gcc in order to reflect gcc's default
function alignment. For all other compilers, which is only clang, select
a function alignment of 16 bytes which reflects the default function
alignment for clang.
Also change the __ALIGN define to follow whatever the value of
CONFIG_FUNCTION_ALIGNMENT is. This makes sure that the alignment of C and
assembler functions is the same.
As a result everything still uses the default function alignment for both
compilers. However, in addition this is now also true for all assembly
functions, so that all functions have a consistent alignment.
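A hedged sketch of the linkage.h side of this; the exact directive and
padding differ per architecture, so treat the values as illustrative:
#include <linux/stringify.h>

#define __ALIGN         .balign CONFIG_FUNCTION_ALIGNMENT
#define __ALIGN_STR     __stringify(__ALIGN)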
Signed-off-by: Heiko Carstens <[email protected]>
|
|
Vasily Gorbik says:
===================
Combine and generalize all methods for finding unused memory in the
decompressor while decreasing complexity, add memory holes support, and
improve error handling (especially in low-memory conditions) and
debug-ability.
===================
Signed-off-by: Heiko Carstens <[email protected]>
|
|
Since regular paging structs are already initialized in the decompressor,
move the KASAN shadow mapping to the decompressor as well. This helps to
avoid allocating the memory KASAN requires in one large chunk, de-duplicates
paging struct creation code and starts the uncompressed kernel with KASAN
instrumentation right away. This also avoids all the pitfalls of
accidentally calling KASAN instrumented code during KASAN initialization.
Acked-by: Heiko Carstens <[email protected]>
Reviewed-by: Alexander Gordeev <[email protected]>
Signed-off-by: Vasily Gorbik <[email protected]>
Signed-off-by: Heiko Carstens <[email protected]>
|
|
Allow changing page table attributes for KASAN shadow memory ranges.
Acked-by: Heiko Carstens <[email protected]>
Reviewed-by: Alexander Gordeev <[email protected]>
Signed-off-by: Vasily Gorbik <[email protected]>
Signed-off-by: Heiko Carstens <[email protected]>
|
|
Currently several approaches for finding unused memory in the decompressor
are utilized. While "safe_addr" grows towards higher addresses, vmem
code allocates paging structures top down. The former requires careful
ordering. In addition to that, ipl report handling code verifies potential
intersections with secure boot certificates on its own. Neither of the two
approaches is memory hole aware or consistent with the other in low
memory conditions.
To solve that, the existing approaches are generalized and combined,
and online memory ranges are now taken into consideration.
physmem_info has been extended to contain reserved memory ranges. A new
set of functions allows handling reserves and finding unused memory.
All reserves and memory allocations are "typed". In case of an out of
memory condition the decompressor fails with detailed info on the current
reserved ranges and usable online memory.
Linux version 6.2.0 ...
Kernel command line: ... mem=100M
Out of memory allocating 100000 bytes 100000 aligned in range 0:5800000
Reserved memory ranges:
0000000000000000 0000000003e33000 DECOMPRESSOR
0000000003f00000 00000000057648a3 INITRD
00000000063e0000 00000000063e8000 VMEM
00000000063eb000 00000000063f4000 VMEM
00000000063f7800 0000000006400000 VMEM
0000000005800000 0000000006300000 KASAN
Usable online memory ranges (info source: sclp read info [3]):
0000000000000000 0000000006400000
Usable online memory total: 6400000 Reserved: 61b10a3 Free: 24ef5d
Call Trace:
(sp:000000000002bd58 [<0000000000012a70>] physmem_alloc_top_down+0x60/0x14c)
sp:000000000002bdc8 [<0000000000013756>] _pa+0x56/0x6a
sp:000000000002bdf0 [<0000000000013bcc>] pgtable_populate+0x45c/0x65e
sp:000000000002be90 [<00000000000140aa>] setup_vmem+0x2da/0x424
sp:000000000002bec8 [<0000000000011c20>] startup_kernel+0x428/0x8b4
sp:000000000002bf60 [<00000000000100f4>] startup_normal+0xd4/0xd4
physmem_alloc_range() allows finding free memory in a specified range. It
should be used for one-time allocations only, like finding the position for
amode31 and vmlinux.
physmem_alloc_top_down() can be used just like physmem_alloc_range(), but
it also allows multiple allocations per type and tries to merge sequential
allocations together, which is useful for paging structure allocations.
If sequential allocations cannot be merged together they are "chained",
allowing easy per-type reserved range enumeration and migration to
memblock later. Extra "struct reserved_range" entries allocated for
chaining are not tracked or reserved, but rely on the fact that both
physmem_alloc_range() and physmem_alloc_top_down() search for free memory
only below the current top down allocator position. All reserved ranges
should be transferred to memblock before memblock allocations are
enabled.
The startup code has been reordered to delay any memory allocations until
online memory ranges are detected and occupied memory ranges are marked as
reserved to be excluded from follow-up allocations.
Ipl report certificates are a special case: the ipl report certificates
list is checked together with other memory reserves until the certificates
are saved elsewhere.
The memory KASAN requires for shadow memory allocation and mapping is
reserved as one large chunk which is later passed to the KASAN early
initialization code.
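A hedged usage sketch; the enum values and function signatures below are
assumptions derived from the description above, not the exact decompressor
interfaces:
enum reserved_range_type { RR_DECOMPRESSOR, RR_INITRD, RR_VMEM, RR_KASAN };

/* assumed prototypes, for illustration only */
unsigned long physmem_alloc_range(enum reserved_range_type type,
                                  unsigned long size, unsigned long align,
                                  unsigned long min, unsigned long max);
unsigned long physmem_alloc_top_down(enum reserved_range_type type,
                                     unsigned long size, unsigned long align);

static void setup_example(unsigned long size, unsigned long align,
                          unsigned long min, unsigned long max)
{
        unsigned long one_shot, pgtable;

        /* one-shot allocation in an explicit range (amode31, vmlinux, ...) */
        one_shot = physmem_alloc_range(RR_VMEM, size, align, min, max);

        /* repeated, mergeable top-down allocations (paging structures);
         * on out-of-memory the decompressor prints the reserved ranges
         * and usable online memory, as in the excerpt above.
         */
        pgtable = physmem_alloc_top_down(RR_VMEM, size, align);
}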
Acked-by: Heiko Carstens <[email protected]>
Reviewed-by: Alexander Gordeev <[email protected]>
Signed-off-by: Vasily Gorbik <[email protected]>
Signed-off-by: Heiko Carstens <[email protected]>
|
|
In preparation for extending mem_detect with additional information like
reserved ranges, rename it to the more generic physmem_info. This new naming
also helps to avoid confusion by using more exact terms like "physmem
online ranges", etc.
Acked-by: Heiko Carstens <[email protected]>
Reviewed-by: Alexander Gordeev <[email protected]>
Signed-off-by: Vasily Gorbik <[email protected]>
Signed-off-by: Heiko Carstens <[email protected]>
|
|
check_image_bootable() was introduced with commit 627c9b62058e
("s390/boot: block uncompressed vmlinux booting attempts") to make sure
that users don't try to boot an uncompressed vmlinux ELF image in qemu,
which used to be possible quite some time ago. That commit prevented
confusion with an uncompressed vmlinux image starting to boot and even
printing kernel messages until it crashed. Users might have tried to
report the problem without realizing they were doing something which was
not intended.
Since commit f1d3c5323772 ("s390/boot: move sclp early buffer from fixed
address in asm to C") check_image_bootable() doesn't function properly
anymore, and booting an uncompressed vmlinux image in qemu doesn't really
produce any output and crashes anyway. Moving forward it doesn't make
sense to fix check_image_bootable(), so simply remove it.
Acked-by: Alexander Gordeev <[email protected]>
Acked-by: Heiko Carstens <[email protected]>
Signed-off-by: Vasily Gorbik <[email protected]>
Signed-off-by: Heiko Carstens <[email protected]>
|
|
report_user_fault() currently does not show which library last_break
points to. Call print_vma_addr() to find out; the output now looks
like this:
Last Breaking-Event-Address:
[<000003ffaa2a56e4>] libc.so.6[3ffaa180000+251000]
For kernel it's unchanged:
Last Breaking-Event-Address:
[<000000000030fd06>] trace_hardirqs_on+0x56/0xc8
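A hedged sketch of the mechanism (helper name, context and message layout
are simplified here; print_vma_addr() itself is the generic mm helper):
#include <linux/mm.h>
#include <linux/printk.h>

static void show_last_break(unsigned long last_break, bool user)
{
        pr_alert("Last Breaking-Event-Address:\n");
        pr_alert(" [<%016lx>] ", last_break);
        if (user)
                /* resolves the mapping, e.g. "libc.so.6[base+size]" */
                print_vma_addr(KERN_CONT, last_break);
        else
                pr_cont("%pS", (void *)last_break);
        pr_cont("\n");
}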
Signed-off-by: Ilya Leoshkevich <[email protected]>
Acked-by: Heiko Carstens <[email protected]>
Signed-off-by: Heiko Carstens <[email protected]>
|
|
The routine appldata_register_ops() allocates a sysctl table
with 4 entries. The first one, ops->ctl_table[0], is the parent directory
with an empty entry following it, ops->ctl_table[1]. The next entry is
for the ops->name and that is ops->ctl_table[2]. It needs an empty
entry following that, and that is ops->ctl_table[3]. Hence the
kcalloc(4, sizeof(struct ctl_table), GFP_KERNEL).
We can simplify this considerably since register_sysctl("foo", table)
can create the parent directory for us if it does not exist. So we
can just remove the first two entries, move the ops->name entry to
the first slot, and just use kcalloc(2, ...).
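A hedged sketch of the simplified allocation and registration (handler and
helper names are illustrative, not the actual appldata code):
#include <linux/slab.h>
#include <linux/sysctl.h>

static struct ctl_table_header *appldata_register_entry(const char *name,
                                                         proc_handler *handler,
                                                         void *data)
{
        struct ctl_table *tbl;

        tbl = kcalloc(2, sizeof(*tbl), GFP_KERNEL);     /* entry + terminator */
        if (!tbl)
                return NULL;
        tbl[0].procname = name;
        tbl[0].mode = 0644;
        tbl[0].proc_handler = handler;
        tbl[0].data = data;
        /* register_sysctl() creates the "appldata" directory on demand */
        return register_sysctl("appldata", tbl);
}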
[[email protected]: appldata_generic_handler fixup ctl_table index 2->0]
Signed-off-by: Luis Chamberlain <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Reviewed-by: Vasily Gorbik <[email protected]>
Signed-off-by: Vasily Gorbik <[email protected]>
Signed-off-by: Heiko Carstens <[email protected]>
|
|
There is no need to declare an extra table just to create a directory;
this can easily be done with a prefix path with register_sysctl().
Simplify this registration.
Signed-off-by: Luis Chamberlain <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Reviewed-by: Vasily Gorbik <[email protected]>
Signed-off-by: Vasily Gorbik <[email protected]>
Signed-off-by: Heiko Carstens <[email protected]>
|
|
There is no need to declare an extra table just to create a directory;
this can easily be done with a prefix path with register_sysctl().
Simplify this registration.
Signed-off-by: Luis Chamberlain <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Reviewed-by: Vasily Gorbik <[email protected]>
Signed-off-by: Vasily Gorbik <[email protected]>
Signed-off-by: Heiko Carstens <[email protected]>
|
|
There is no need to declare an extra table just to create a directory;
this can easily be done with a prefix path with register_sysctl().
Simplify this registration.
Signed-off-by: Luis Chamberlain <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Reviewed-by: Vasily Gorbik <[email protected]>
Signed-off-by: Vasily Gorbik <[email protected]>
Signed-off-by: Heiko Carstens <[email protected]>
|
|
There is no need to declare an extra table just to create a directory;
this can easily be done with a prefix path with register_sysctl().
Simplify this registration.
Signed-off-by: Luis Chamberlain <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Reviewed-by: Vasily Gorbik <[email protected]>
Signed-off-by: Vasily Gorbik <[email protected]>
Signed-off-by: Heiko Carstens <[email protected]>
|
|
There is no need to declare an extra table just to create a directory;
this can easily be done with a prefix path with register_sysctl().
Simplify this registration.
Signed-off-by: Luis Chamberlain <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Reviewed-by: Vasily Gorbik <[email protected]>
Signed-off-by: Vasily Gorbik <[email protected]>
Signed-off-by: Heiko Carstens <[email protected]>
|
|
Prior to commit 960ac3626487 ("s390/pci: allow zPCI zbus without
a function zero") enabling and scanning a PCI function had to
potentially be postponed until the function with devfn zero on that bus
was plugged. While the commit removed the waiting itself extra code to
scan all functions on the PCI bus once function zero appeared was
missed. Remove that code and the outdated comments about waiting for
function zero.
Signed-off-by: Niklas Schnelle <[email protected]>
Reviewed-by: Matthew Rosato <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Vasily Gorbik <[email protected]>
|
|
The pci_bus_add_devices() call in zpci_bus_create_pci_bus() has no
effect since at this point no device could have been added to the
freshly created PCI bus.
Suggested-by: Bjorn Helgaas <[email protected]>
Signed-off-by: Niklas Schnelle <[email protected]>
Reviewed-by: Matthew Rosato <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Vasily Gorbik <[email protected]>
|
|
As the name suggests zpci_bus_scan_device() is used to scan a specific
device and thus pci_bus_add_device() for that device is sufficient.
Furthermore move this call inside the rescan/remove locking.
Suggested-by: Bjorn Helgaas <[email protected]>
Signed-off-by: Niklas Schnelle <[email protected]>
Reviewed-by: Matthew Rosato <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Vasily Gorbik <[email protected]>
|
|
gen_lpswe() contains a BUILD_BUG_ON() statement which depends on a function
parameter. If the compiler decides not to inline the function, this
will lead to a build error, even if all call sites pass a valid parameter.
To avoid this always inline gen_lpswe().
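A hedged sketch of the pattern (the opcode constant is shown for
illustration; the point is that __always_inline keeps the parameter a
compile-time constant so BUILD_BUG_ON() can evaluate it):
#include <linux/build_bug.h>

static __always_inline unsigned long gen_lpswe(unsigned long addr)
{
        /* addr must be a compile-time constant at every call site;
         * without guaranteed inlining this check cannot be evaluated
         * and the build fails.
         */
        BUILD_BUG_ON(addr > 0xfff);
        return 0xb2b20000 | addr;
}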
Signed-off-by: Heiko Carstens <[email protected]>
Signed-off-by: Vasily Gorbik <[email protected]>
|