|
For now it always returns '0' (false), but once the iommu work is in
place to enable per-process pagetables we can update the value returned.
Userspace needs to know this to make an informed decision about exposing
KHR_robustness.
Signed-off-by: Rob Clark <[email protected]>
Reviewed-by: Jordan Crouse <[email protected]>
|
|
Merge misc fixes from Andrew Morton:
"16 fixes"
* emailed patches from Andrew Morton <[email protected]>:
coredump: fix race condition between mmget_not_zero()/get_task_mm() and core dumping
mm/kmemleak.c: fix unused-function warning
init: initialize jump labels before command line option parsing
kernel/watchdog_hld.c: hard lockup message should end with a newline
kcov: improve CONFIG_ARCH_HAS_KCOV help text
mm: fix inactive list balancing between NUMA nodes and cgroups
mm/hotplug: treat CMA pages as unmovable
proc: fixup proc-pid-vm test
proc: fix map_files test on F29
mm/vmstat.c: fix /proc/vmstat format for CONFIG_DEBUG_TLBFLUSH=y CONFIG_SMP=n
mm/memory_hotplug: do not unlock after failing to take the device_hotplug_lock
mm: swapoff: shmem_unuse() stop eviction without igrab()
mm: swapoff: take notice of completion sooner
mm: swapoff: remove too limiting SWAP_UNUSE_MAX_TRIES
mm: swapoff: shmem_find_swap_entries() filter out other types
slab: store tagged freelist for off-slab slabmgmt
|
|
The IXP4xx (arch/arm/mach-ixp4xx) is an old Intel XScale
platform that has very wide deployment and use.
As part of modernizing the platform, we need to implement a
proper irqchip in the irqchip subsystem.
The IXP4xx irqchip is tightly coupled with the GPIO
controller, and whereas in the past we would deal with this
complex logic with ad hoc custom code, we can
nowadays modernize it using a hierarchical irqchip.
The actual IXP4 irqchip is a simple active low level IRQ
controller, whereas the GPIO functionality resides in a
different memory area and adds edge trigger support for
the interrupts.
The interrupts from GPIO lines 0..12 are 1:1 mapped to
a fixed set of hardware IRQs on this IRQchip, so we
expect the child GPIO interrupt controller to go in and
allocate descriptors for these interrupts.
For the other interrupts, as we do not yet have DT
support for this platform, we create a linear irqdomain
and then go in and allocate the IRQs that the legacy
boards use. This code will be removed on the DT probe
path when we add DT support to the platform.
We add some translation code for supporting DT
translations for the fwnodes, but we leave most of that
for later.
Cc: Marc Zyngier <[email protected]>
Cc: Jason Cooper <[email protected]>
Acked-by: Marc Zyngier <[email protected]>
Signed-off-by: Linus Walleij <[email protected]>
|
|
Add cgroup:cgroup_freeze and cgroup:cgroup_unfreeze events,
which use the existing cgroup tracing infrastructure.
Add the cgroup_event event class, which is similar to the cgroup
class, but contains an additional integer field to store a new
value (the level field is dropped).
Also add two tracing events: cgroup_notify_populated and
cgroup_notify_frozen, which are raised in a generic way using
the TRACE_CGROUP_PATH() macro.
This makes it possible to trace cgroup state transitions and is generally
helpful for debugging the cgroup freezer code.
Signed-off-by: Roman Gushchin <[email protected]>
Signed-off-by: Tejun Heo <[email protected]>
|
|
Cgroup v1 implements the freezer controller, which provides an ability
to stop the workload in a cgroup and temporarily free up some
resources (cpu, io, network bandwidth and, potentially, memory)
for some other tasks. Cgroup v2 lacks this functionality.
This patch implements freezer for cgroup v2.
Cgroup v2 freezer tries to put tasks into a state similar to jobctl
stop. This means that tasks can be killed, ptraced (using
PTRACE_SEIZE*), and interrupted. It is possible to attach to
a frozen task, get some information (e.g. read registers) and detach.
It's also possible to migrate a frozen task to another cgroup.
This distinguishes the cgroup v2 freezer from the cgroup v1 freezer, which mostly
tried to imitate the system-wide freezer. However, while uninterruptible
sleep is fine when all tasks are going to be frozen (the hibernation case),
it's not an acceptable state when only a subset of the system is frozen.
The cgroup v2 freezer does not support freezing kthreads.
If a non-root cgroup contains a kthread, the cgroup can still be frozen,
but the kthread will remain running, the cgroup will be shown
as non-frozen, and the notification will not be delivered.
* PTRACE_ATTACH does not work because non-fatal signal delivery
is blocked in the frozen state.
There are some interface differences between the cgroup v1 and cgroup v2
freezers too, which are required to conform to the cgroup v2 interface
design principles:
1) There is no separate controller, which has to be turned on:
the functionality is always available and is represented by
cgroup.freeze and cgroup.events cgroup control files.
2) The desired state is defined by the cgroup.freeze control file.
Any hierarchical configuration is allowed.
3) The interface is asynchronous. The actual state is available
using cgroup.events control file ("frozen" field). There are no
dedicated transitional states.
4) It's allowed to make any changes with the cgroup hierarchy
(create new cgroups, remove old cgroups, move tasks between cgroups)
no matter if some cgroups are frozen.
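The interface described above can be exercised from user space roughly
as follows (a minimal sketch; paths and the "frozen" field are as
described in points 2) and 3), error handling is trimmed):

#include <stdio.h>

/* Ask for the frozen state by writing "1" to cgroup.freeze. */
static void freeze_cgroup(const char *cgrp_path)
{
        char path[4096];
        FILE *f;

        snprintf(path, sizeof(path), "%s/cgroup.freeze", cgrp_path);
        f = fopen(path, "w");
        if (!f)
                return;
        fputs("1\n", f);
        fclose(f);

        /* The interface is asynchronous: poll cgroup.events until its
         * "frozen" field reads 1. */
}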
Signed-off-by: Roman Gushchin <[email protected]>
Signed-off-by: Tejun Heo <[email protected]>
No-objection-from-me-by: Oleg Nesterov <[email protected]>
Cc: [email protected]
|
|
The number of descendant cgroups and the number of dying
descendant cgroups are currently synchronized using the cgroup_mutex.
The number of descendant cgroups will be required by the cgroup v2
freezer, which will use it to determine if a cgroup is frozen
(depending on total number of descendants and number of frozen
descendants). It's not always acceptable to grab the cgroup_mutex,
especially from quite hot paths (e.g. exit()).
To avoid this, let's additionally synchronize these counters using
the css_set_lock.
So, it's safe to read these counters with either cgroup_mutex or
css_set_lock held, and both locks must be acquired to change them.
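Illustratively (helper names below are hypothetical), the rule looks
like this:

/* Reader: holding either lock is enough; this path happens to hold
 * css_set_lock, cgroup_mutex would do as well. */
static int cgroup_nr_descendants(struct cgroup *cgrp)
{
        lockdep_assert_held(&css_set_lock);
        return cgrp->nr_descendants;
}

/* Writer: both locks must be held. */
static void cgroup_inc_nr_descendants(struct cgroup *cgrp)
{
        lockdep_assert_held(&cgroup_mutex);
        lockdep_assert_held(&css_set_lock);
        cgrp->nr_descendants++;
}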
Signed-off-by: Roman Gushchin <[email protected]>
Signed-off-by: Tejun Heo <[email protected]>
Cc: [email protected]
|
|
bvec->bv_offset may sometimes be bigger than PAGE_SIZE, such as
when one bio is split in the middle of one bvec via bio_split()
and bi_iter.bi_bvec_done is used to build the offset of the 1st bvec of
the remaining bio. The remaining bio's bvec may then be re-submitted to the fs
layer via ITER_BVEC, as done by loop and nvme-loop.
So we have to make sure that every bvec's offset returned from
bio_for_each_segment_all() is less than PAGE_SIZE, because some drivers (loop,
nvme-loop) pass the split bvec to the fs layer via ITER_BVEC.
This patch fixes this issue, reported by Yi Zhang when running nvme/011.
Cc: Christoph Hellwig <[email protected]>
Cc: Yi Zhang <[email protected]>
Reported-by: Yi Zhang <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Fixes: 6dc4f100c175 ("block: allow bio_for_each_segment_all() to iterate over multi-page bvec")
Signed-off-by: Ming Lei <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
|
|
all_q_node has not been used since commit 4b855ad37194 ("blk-mq: Create
hctx for each present CPU"), so remove it.
Reviewed-by: Chaitanya Kulkarni <[email protected]>
Reviewed-by: Ming Lei <[email protected]>
Signed-off-by: Hou Tao <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/dtor/input
Pull input updates from Dmitry Torokhov:
- several new key mappings for HID
- a host of new ACPI IDs used to identify Elan touchpads in Lenovo
laptops
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/dtor/input:
Input: snvs_pwrkey - initialize necessary driver data before enabling IRQ
HID: input: add mapping for "Toggle Display" key
HID: input: add mapping for "Full Screen" key
HID: input: add mapping for keyboard Brightness Up/Down/Toggle keys
HID: input: add mapping for Expose/Overview key
HID: input: fix mapping of aspect ratio key
[media] doc-rst: switch to new names for Full Screen/Aspect keys
Input: document meanings of KEY_SCREEN and KEY_ZOOM
Input: elan_i2c - add hardware ID for multiple Lenovo laptops
|
|
coredump: fix race condition between mmget_not_zero()/get_task_mm() and core dumping
The core dumping code has always run without holding the mmap_sem for
writing, despite that being the only way to ensure that the entire vma
layout will not change from under it. Only using some signal
serialization on the processes belonging to the mm is not nearly enough.
This was pointed out earlier. For example in Hugh's post from Jul 2017:
https://lkml.kernel.org/r/[email protected]
"Not strictly relevant here, but a related note: I was very surprised
to discover, only quite recently, how handle_mm_fault() may be called
without down_read(mmap_sem) - when core dumping. That seems a
misguided optimization to me, which would also be nice to correct"
In particular, because growsdown and growsup can move the
vm_start/vm_end, the various loops the core dump does around the vma will
not be consistent if page faults can happen concurrently.
Pretty much all users calling mmget_not_zero()/get_task_mm() and then
taking the mmap_sem had the potential to introduce unexpected side
effects in the core dumping code.
Adding mmap_sem for writing around the ->core_dump invocation is a
viable long term fix, but it requires removing all copy-user and page
faults and replacing them with get_dump_page() for all binary formats,
which is not suitable as a short term fix.
For the time being this solution manually covers the places that can
confuse the core dump either by altering the vma layout or the vma flags
while it runs. Once ->core_dump runs under mmap_sem for writing the
function mmget_still_valid() can be dropped.
Allowing mmap_sem protected sections to run in parallel with the
coredump provides some minor parallelism advantage to the swapoff code
(which seems to be safe enough by never mangling any vma field and can
keep doing swapins in parallel to the core dumping) and to some other
corner case.
In order to facilitate the backporting I added "Fixes: 86039bd3b4e6";
however, the side effect of this same race condition in /proc/pid/mem
should be reproducible since before 2.6.12-rc2, so I couldn't add any
other "Fixes:" because there's no hash before the git genesis commit.
Because find_extend_vma() is the only location outside of the process
context that could modify the "mm" structures under mmap_sem for
reading, by adding the mmget_still_valid() check to it, all other cases
that take the mmap_sem for reading don't need the new check after
mmget_not_zero()/get_task_mm(). The expand_stack() in page fault
context also doesn't need the new check, because all tasks under core
dumping are frozen.
Link: http://lkml.kernel.org/r/[email protected]
Fixes: 86039bd3b4e6 ("userfaultfd: add new syscall to provide memory externalization")
Signed-off-by: Andrea Arcangeli <[email protected]>
Reported-by: Jann Horn <[email protected]>
Suggested-by: Oleg Nesterov <[email protected]>
Acked-by: Peter Xu <[email protected]>
Reviewed-by: Mike Rapoport <[email protected]>
Reviewed-by: Oleg Nesterov <[email protected]>
Reviewed-by: Jann Horn <[email protected]>
Acked-by: Jason Gunthorpe <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
The igrab() in shmem_unuse() looks good, but we forgot that it gives no
protection against concurrent unmounting: a point made by Konstantin
Khlebnikov eight years ago, and then fixed in 2.6.39 by 778dd893ae78
("tmpfs: fix race between umount and swapoff"). The current 5.1-rc
swapoff is liable to hit "VFS: Busy inodes after unmount of tmpfs.
Self-destruct in 5 seconds. Have a nice day..." followed by GPF.
Once again, give up on using igrab(); but don't go back to making such
heavy-handed use of shmem_swaplist_mutex as last time: that would spoil
the new design, and I expect could deadlock inside shmem_swapin_page().
Instead, shmem_unuse() just raises a "stop_eviction" count in the shmem-
specific inode, and shmem_evict_inode() waits for that to go down to 0.
Call it "stop_eviction" rather than "swapoff_busy" because it can be put
to other uses later (the huge tmpfs patches expect to use it).
That simplifies shmem_unuse(), protecting it from both unlink and
unmount; and in practice lets it locate all the swap in its first try.
But do not rely on that: there's still a theoretical case, when
shmem_writepage() might have been preempted after its get_swap_page(),
before making the swap entry visible to swapoff.
[[email protected]: remove incorrect list_del()]
Link: http://lkml.kernel.org/r/[email protected]
Link: http://lkml.kernel.org/r/[email protected]
Fixes: b56a2d8af914 ("mm: rid swapoff of quadratic complexity")
Signed-off-by: Hugh Dickins <[email protected]>
Cc: "Alex Xu (Hello71)" <[email protected]>
Cc: Huang Ying <[email protected]>
Cc: Kelley Nielsen <[email protected]>
Cc: Konstantin Khlebnikov <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Vineeth Pillai <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
When a driver unloads without unloading TTM, we don't correctly
clear the global structures, leading to errors on re-init.
Next step should probably be to remove the global structures and
kobjs all together, but this is tricky since we need to maintain
backward compatibility.
Signed-off-by: Christian König <[email protected]>
Reviewed-by: Karol Herbst <[email protected]>
Tested-by: Karol Herbst <[email protected]>
CC: [email protected] # 5.0.x
Signed-off-by: Alex Deucher <[email protected]>
|
|
This patch breaks set_selection() into two functions so that
copy_from_user() can be avoided when it is called from the kernel. The two
functions are named set_selection_user() and set_selection_kernel() in order
to be explicit about their purposes. This also means updating any
references to set_selection() to the new names. It also
exports set_selection_kernel() and paste_selection().
These changes are used in the following patch, where speakup's selection
functionality calls into the above functions, thereby doing away with
its parallel implementation.
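The shape of the split is roughly the following (a sketch with assumed
signatures, not the exact code):

int set_selection_user(const struct tiocl_selection __user *sel,
                       struct tty_struct *tty)
{
        struct tiocl_selection v;

        /* Only the user-facing entry point copies from user space. */
        if (copy_from_user(&v, sel, sizeof(v)))
                return -EFAULT;

        /* The kernel variant works on an in-kernel struct directly. */
        return set_selection_kernel(&v, tty);
}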
Signed-off-by: Okash Khawaja <[email protected]>
Reviewed-by: Samuel Thibault <[email protected]>
Tested-by: Gregory Nowak <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
|
|
Verify the stack frame pointer in the kretprobe trampoline handler.
If the stack frame pointer does not match, skip the wrong
entry and try to find the correct one.
This can happen if the user puts a kretprobe on a function
which can be used in the path of an ftrace user-function call.
Such functions should not be probed, so this adds a warning
message that reports which function should be blacklisted.
Tested-by: Andrea Righi <[email protected]>
Signed-off-by: Masami Hiramatsu <[email protected]>
Acked-by: Steven Rostedt <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Mathieu Desnoyers <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Link: http://lkml.kernel.org/r/155094059185.6137.15527904013362842072.stgit@devbox
Signed-off-by: Ingo Molnar <[email protected]>
|
|
Some tcpc device-drivers need to explicitly be told to watch for connection
events, otherwise the tcpc will not generate any TCPM_CC_EVENTs and devices
being plugged into the Type-C port will not be noticed.
For dual-role ports tcpm_start_drp_toggling() is used to tell the tcpc to
watch for connection events. So far we lack a similar callback to the tcpc
for single-role ports. With some TCPCs, such as the fusb302, this means
no TCPM_CC_EVENTs will be generated when the port is configured as a
single-role port.
This commit renames start_drp_toggling to start_toggling and since the
device-properties are parsed by the tcpm-core, adds a port_type parameter
to the start_toggling callback so that the tcpc_dev driver knows the
port-type and can act accordingly when it starts toggling.
The new start_toggling callback now always gets called if defined, instead
of only being called for DRP ports.
To avoid this causing undesirable functional changes all existing
start_drp_toggling implementations are not only renamed to start_toggling,
but also get a port_type check added and return -EOPNOTSUPP when port_type
is not DRP.
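A converted single-purpose implementation then looks roughly like this
(driver name and internals are placeholders):

static int foo_start_toggling(struct tcpc_dev *dev,
                              enum typec_port_type port_type,
                              enum typec_cc_status cc)
{
        /* This tcpc can only toggle as a dual-role port. */
        if (port_type != TYPEC_PORT_DRP)
                return -EOPNOTSUPP;

        /* ... the former start_drp_toggling() logic continues here ... */
        return 0;
}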
Fixes: ea3b4d5523bc ("usb: typec: fusb302: Resolve fixed power role ...")
Cc: Adam Thomson <[email protected]>
Signed-off-by: Hans de Goede <[email protected]>
Reviewed-by: Guenter Roeck <[email protected]>
Acked-by: Heikki Krogerus <[email protected]>
Tested-by: Adam Thomson <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
|
|
The rseq system call, when invoked with flags of "0" or
"RSEQ_FLAG_UNREGISTER" values, expects the rseq_len parameter to
be equal to sizeof(struct rseq), which is fixed-size and fixed-layout,
specified in uapi linux/rseq.h.
Expecting a fixed size for rseq_len is a design choice that ensures
multiple libraries and application defining __rseq_abi in the same
process agree on its exact size.
Considering that this size is and will always be the same value, there
is no point in saving this value within task_struct rseq_len. Remove
this field from task_struct.
No change in functionality intended.
Signed-off-by: Mathieu Desnoyers <[email protected]>
Acked-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Ben Maurer <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Chris Lameter <[email protected]>
Cc: Dave Watson <[email protected]>
Cc: H. Peter Anvin <[email protected]>
Cc: Joel Fernandes <[email protected]>
Cc: Josh Triplett <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Michael Kerrisk <[email protected]>
Cc: Paul E. McKenney <[email protected]>
Cc: Paul Turner <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Russell King <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
|
|
This renames the GPIO reset of mdio devices from 'reset' to
'reset_gpio' to better differentiate between GPIO-driven and
reset-controller-driven reset lines.
Signed-off-by: David Bauer <[email protected]>
Reviewed-by: Andrew Lunn <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
This commit adds support for PHY reset pins handled by a reset controller.
Signed-off-by: David Bauer <[email protected]>
Reviewed-by: Andrew Lunn <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
We are discouraging the use of BUG() these days, remove the
unused ASSERT macros from skbuff.h.
Signed-off-by: Jakub Kicinski <[email protected]>
Reviewed-by: Dirk van der Merwe <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
To make ICMPv6 closer to ICMPv4, add a ratemask parameter. Since the ICMP
message types use larger numeric values, a simple bitmask doesn't fit, so
use a large bitmap. The input and output are in the form of a list of
ranges. Set the default to rate limit all error messages except Packet Too
Big. For Packet Too Big, use the ratemask instead of the hard-coded exception.
There are functions where icmpv6_xrlim_allow() and icmpv6_global_allow()
aren't called. This patch only adds them to icmpv6_echo_reply().
Rate limiting error messages is mandated by RFC 4443 but RFC 4890 says
that it is also acceptable to rate limit informational messages. Thus,
I removed the current hard-coded behavior of icmpv6_mask_allow() that
doesn't rate limit informational messages.
v2: Add dummy function proc_do_large_bitmap() if CONFIG_PROC_SYSCTL
isn't defined, expand the description in ip-sysctl.txt and remove
unnecessary conditional before kfree().
v3: Inline the bitmap instead of dynamically allocating it. A pointer to
it is still needed because of the way proc_do_large_bitmap works.
Signed-off-by: Stephen Suryaputra <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Add hlist_bl_add_before/behind helpers to add an element before/after an
existing element in a bl_list.
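For illustration, inserting a new element in front of a known one
could look like this (assuming the helpers mirror the plain hlist
variants; the list's bit spinlock is held by the caller):

struct entry {
        struct hlist_bl_node node;
        int key;
};

static void entry_insert_before(struct entry *new, struct entry *existing)
{
        /* @existing must already be on the list. */
        hlist_bl_add_before(&new->node, &existing->node);
}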
Co-developed-by: Ilias Tsitsimpis <[email protected]>
Signed-off-by: Nikos Tsironis <[email protected]>
Reviewed-by: Paul E. McKenney <[email protected]>
Signed-off-by: Mike Snitzer <[email protected]>
|
|
Commit 1c97be677f72b3 ("list: Use WRITE_ONCE() when adding to lists and
hlists") introduced the use of WRITE_ONCE() to atomically write the list
head's ->next pointer.
hlist_add_behind() doesn't touch the hlist head's ->first pointer so
there is no reason to use WRITE_ONCE() in this case.
Co-developed-by: Ilias Tsitsimpis <[email protected]>
Signed-off-by: Nikos Tsironis <[email protected]>
Reviewed-by: Paul E. McKenney <[email protected]>
Signed-off-by: Mike Snitzer <[email protected]>
|
|
Merge immutable branch containing the IIO changes required
for the new Ingenic JZ47xx battery fuel gauge driver.
Signed-off-by: Sebastian Reichel <[email protected]>
|
|
fwnode_graph_get_endpoint_by_id() is intended for obtaining local
endpoints by a given local port.
fwnode_graph_get_endpoint_by_id() is slightly different from its OF
counterpart, of_graph_get_endpoint_by_regs(): instead of using -1 as
a value to indicate that a port or an endpoint number does not matter,
it uses flags to look for equal or greater endpoint. The port number
is always fixed. It also returns only remote endpoints that belong
to an available device, a behaviour that can be turned off with a flag.
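A rough usage sketch (flag name and exact signature assumed from the
description above): find an endpoint on local port 0, accepting any
endpoint number equal to or greater than 0.

static int probe_port0_endpoint(struct device *dev)
{
        struct fwnode_handle *ep;

        ep = fwnode_graph_get_endpoint_by_id(dev_fwnode(dev), 0, 0,
                                             FWNODE_GRAPH_ENDPOINT_NEXT);
        if (!ep)
                return -ENODEV;

        /* ... use the endpoint ... */
        fwnode_handle_put(ep);
        return 0;
}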
Signed-off-by: Sakari Ailus <[email protected]>
[ rjw: Changelog ]
Signed-off-by: Rafael J. Wysocki <[email protected]>
|
|
Remove cryptd_alloc_ablkcipher() and the ability of cryptd to create
algorithms with the deprecated "ablkcipher" type.
This has been unused since commit 0e145b477dea ("crypto: ablk_helper -
remove ablk_helper"). Instead, cryptd_alloc_skcipher() is used.
Signed-off-by: Eric Biggers <[email protected]>
Signed-off-by: Herbert Xu <[email protected]>
|
|
Add the Elliptic Curve Russian Digital Signature Algorithm (GOST R
34.10-2012, RFC 7091, ISO/IEC 14888-3), one of the Russian (and since
2018 the CIS countries') cryptographic standard algorithms (called GOST
algorithms). Only signature verification is supported, with the intent
that it be used in IMA.
Summary of the changes:
* crypto/Kconfig:
- EC-RDSA is added into Public-key cryptography section.
* crypto/Makefile:
- ecrdsa objects are added.
* crypto/asymmetric_keys/x509_cert_parser.c:
- Recognize EC-RDSA and Streebog OIDs.
* include/linux/oid_registry.h:
- EC-RDSA OIDs are added to the enum. Also, two currently not
implemented curve OIDs are added for possible extension later (to
not change numbering and grouping).
* crypto/ecc.c:
- Kenneth MacKay copyright date is updated to 2014, because
vli_mmod_slow, ecc_point_add, ecc_point_mult_shamir are based on his
code from micro-ecc.
- Functions needed for ecrdsa are EXPORT_SYMBOL'ed.
- New functions:
vli_is_negative - helper to determine sign of vli;
vli_from_be64 - unpack big-endian array into vli (used for
a signature);
vli_from_le64 - unpack little-endian array into vli (used for
a public key);
vli_uadd, vli_usub - add/sub u64 value to/from vli (used for
increment/decrement);
mul_64_64 - optimized to use __int128 where appropriate, this speeds
up point multiplication (and as a consequence signature
verification) by the factor of 1.5-2;
vli_umult - multiply vli by a small value (speeds up point
multiplication by another factor of 1.5-2, depending on vli sizes);
vli_mmod_special - modular reduction for some form of Pseudo-Mersenne
primes (used for the curves A);
vli_mmod_special2 - modular reduction for another form of
Pseudo-Mersenne primes (used for the curves B);
vli_mmod_barrett - modular reduction using a pre-computed value (used
for the curve C);
vli_mmod_slow - more general modular reduction which is much slower
(used when the modulus is the subgroup order);
vli_mod_mult_slow - modular multiplication;
ecc_point_add - add two points;
ecc_point_mult_shamir - add two points multiplied by scalars in one
combined multiplication (this gives speed up by another factor 2 in
compare to two separate multiplications).
ecc_is_pubkey_valid_partial - an additional sanity check is added.
- Updated vli_mmod_fast with a non-strict heuristic to call the optimal
modular reduction function depending on the prime value;
- All computations for the previously defined (two NIST) curves should
be unaffected.
* crypto/ecc.h:
- Newly exported functions are documented.
* crypto/ecrdsa_defs.h
- Five curves are defined.
* crypto/ecrdsa.c:
- Signature verification is implemented.
* crypto/ecrdsa_params.asn1, crypto/ecrdsa_pub_key.asn1:
- Templates for BER decoder for EC-RDSA parameters and public key.
Cc: [email protected]
Signed-off-by: Vitaly Chikunov <[email protected]>
Signed-off-by: Herbert Xu <[email protected]>
|
|
Some public key algorithms (like EC-DSA) keep important data, such as
digest and curve OIDs (possibly more for different EC-DSA variants), in
the parameters field. Thus, just setting a public key (as
for RSA) is not enough.
Append parameters into the key stream for akcipher_set_{pub,priv}_key.
Appended data is: (u32) algo OID, (u32) parameters length, parameters
data.
This does not affect the current akcipher API nor the RSA ciphers (they could
ignore it). The idea of appending parameters to the key stream is by Herbert
Xu.
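Purely as an illustration, the appended blob can be viewed as:

/* Illustrative layout only; the real code packs these values manually. */
struct akcipher_key_params_hdr {
        u32 algo_oid;   /* (u32) algo OID */
        u32 params_len; /* (u32) parameters length */
        /* followed by params_len bytes of parameter data */
} __packed;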
Cc: David Howells <[email protected]>
Cc: Denis Kenzior <[email protected]>
Cc: [email protected]
Signed-off-by: Vitaly Chikunov <[email protected]>
Reviewed-by: Denis Kenzior <[email protected]>
Signed-off-by: Herbert Xu <[email protected]>
|
|
Previously, akcipher .verify() just `decrypted' (using RSA encrypt, which
uses the public key) the signature to uncover the message hash, which was
then compared in the upper-level public_key_verify_signature() with the
expected hash value, which itself was never passed into .verify().
This approach was incompatible with EC-DSA family of algorithms,
because, to verify a signature EC-DSA algorithm also needs a hash value
as input; then it's used (together with a signature divided into halves
`r||s') to produce a witness value, which is then compared with `r' to
determine if the signature is correct. Thus, for EC-DSA, neither the
requirements of .verify() itself, nor its output expectations in
public_key_verify_signature(), were sufficient.
Make the improved .verify() call take the hash value as input and perform
the complete signature check without any output besides the status.
Now, for the top-level verification, only crypto_akcipher_verify() needs
to be called and its return value inspected.
Make sure that `digest' is in kmalloc'd memory (in place of `output`) in
{public,tpm}_key_verify_signature(), as insisted on by Herbert Xu; this will
be changed in the following commit.
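The new calling convention looks roughly like this (a sketch with
assumed details; compare with the conversion in
public_key_verify_signature()):

static int verify_with_akcipher(struct akcipher_request *req,
                                const struct public_key_signature *sig)
{
        struct scatterlist src[2];

        /* Both the signature and the expected hash go in as input. */
        sg_init_table(src, 2);
        sg_set_buf(&src[0], sig->s, sig->s_size);
        sg_set_buf(&src[1], sig->digest, sig->digest_size);
        akcipher_request_set_crypt(req, src, NULL,
                                   sig->s_size, sig->digest_size);

        /* Only the return value is inspected: 0 means the signature
         * checks out. */
        return crypto_akcipher_verify(req);
}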
Cc: David Howells <[email protected]>
Cc: [email protected]
Signed-off-by: Vitaly Chikunov <[email protected]>
Reviewed-by: Denis Kenzior <[email protected]>
Signed-off-by: Herbert Xu <[email protected]>
|
|
This patch adds a requirement to the generic 3DES implementation
such that 2-key 3DES (K1 == K3) is no longer allowed in FIPS mode.
We will also provide helpers that may be used by drivers that
implement 3DES to make the same check.
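The check amounts to something like the following (illustrative only;
fips_enabled comes from the kernel's FIPS support):

static int des3_reject_2key_in_fips(const u8 *key)
{
        /* Each DES key is 8 bytes: K1 is key[0..7], K3 is key[16..23]. */
        if (fips_enabled && !memcmp(key, key + 16, 8))
                return -EINVAL;
        return 0;
}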
Signed-off-by: Herbert Xu <[email protected]>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu into core/rcu
Pull RCU and LKMM commits from Paul E. McKenney:
- An LKMM commit adding support for synchronize_srcu_expedited()
- A couple of straggling RCU flavor consolidation updates
- Documentation updates.
- Miscellaneous fixes
- SRCU updates
- RCU CPU stall-warning updates
- Torture-test updates
Signed-off-by: Ingo Molnar <[email protected]>
|
|
When lockdep is enabled, and -Wuninitialized warnings are enabled,
Clang produces a silly warning for every file we compile:
In file included from kernel/sched/fair.c:23:
kernel/sched/sched.h:1094:15: error: variable 'cookie' is uninitialized when used here [-Werror,-Wuninitialized]
rf->cookie = lockdep_pin_lock(&rq->lock);
^~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/lockdep.h:474:60: note: expanded from macro 'lockdep_pin_lock'
#define lockdep_pin_lock(l) ({ struct pin_cookie cookie; cookie; })
^~~~~~
kernel/sched/sched.h:1094:15: note: variable 'cookie' is declared here
include/linux/lockdep.h:474:34: note: expanded from macro 'lockdep_pin_lock'
#define lockdep_pin_lock(l) ({ struct pin_cookie cookie; cookie; })
^
As the 'struct pin_cookie' structure is empty in this configuration,
there is no need to initialize it for correctness, but it also
does not hurt to set it to an empty structure, so do that to
avoid the warning.
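The change thus amounts to giving the dummy cookie an empty
initializer, roughly:

/* before: ({ struct pin_cookie cookie; cookie; })       -- uninitialized */
/* after:  ({ struct pin_cookie cookie = { }; cookie; }) -- empty init    */
#define lockdep_pin_lock(l)     ({ struct pin_cookie cookie = { }; cookie; })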
Signed-off-by: Arnd Bergmann <[email protected]>
Acked-by: Will Deacon <[email protected]>
Cc: Bart Van Assche <[email protected]>
Cc: Joel Fernandes (Google) <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Nathan Chancellor <[email protected]>
Cc: Nick Desaulniers <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Steven Rostedt (VMware) <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Waiman Long <[email protected]>
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
|
|
To be able to check and set bad block markers in the first and
second page of a block independently of each other, we create
separate flags for both cases.
Previously, NAND_BBM_SECONDPAGE meant that both the first and the
second page were used. With this patch, NAND_BBM_FIRSTPAGE stands for
using the first page and NAND_BBM_SECONDPAGE for using the second
page.
This patch is only for preparation of subsequent changes and does
not implement the logic to actually handle both flags separately.
Signed-off-by: Frieder Schrempf <[email protected]>
Reviewed-by: Boris Brezillon <[email protected]>
Reviewed-by: Miquel Raynal <[email protected]>
Signed-off-by: Miquel Raynal <[email protected]>
|
|
Now that we have moved the information to the chip level, let's
remove all the unused flags and fields.
Signed-off-by: Frieder Schrempf <[email protected]>
Reviewed-by: Miquel Raynal <[email protected]>
Signed-off-by: Miquel Raynal <[email protected]>
|
|
The information about where the manufacturer puts the bad block
markers inside the bad block and in the OOB data is stored in
different places. Let's move this information to the chip struct,
as we did for rawnand.
Signed-off-by: Frieder Schrempf <[email protected]>
Reviewed-by: Miquel Raynal <[email protected]>
Signed-off-by: Miquel Raynal <[email protected]>
|
|
The information about where the manufacturer puts the bad block
markers inside the bad block and in the OOB data is stored in
different places. Let's move this information to nand_chip.options
and nand_chip.badblockpos.
As this chip-specific information is not directly related to the
bad block table (BBT), we also rename the flags to NAND_BBM_*.
Signed-off-by: Frieder Schrempf <[email protected]>
Reviewed-by: Miquel Raynal <[email protected]>
Signed-off-by: Miquel Raynal <[email protected]>
|
|
Currently, drivers are able to constify a nand_op_parser array,
but not nand_op_parser_pattern and nand_op_parser_pattern_elem,
since they are instantiated by using NAND_OP_PARSER(_PATTERN).
Add 'const' to them in order to move more driver data from the .data
to the .rodata section.
Signed-off-by: Masahiro Yamada <[email protected]>
Reviewed-by: Boris Brezillon <[email protected]>
Signed-off-by: Miquel Raynal <[email protected]>
|
|
Sphinx doesn't handle expressions in identifier references.
This fixes the following warnings:
./include/linux/mtd/rawnand.h:1184: WARNING: Inline strong start-string without end-string.
./include/linux/mtd/rawnand.h:1186: WARNING: Inline strong start-string without end-string.
Signed-off-by: Jonathan Neuschäfer <[email protected]>
Signed-off-by: Miquel Raynal <[email protected]>
|
|
There is no point in having two distinct entries, merge them and
rename the symbol for more clarity: MTD_NAND_ECC_SW_BCH
Signed-off-by: Miquel Raynal <[email protected]>
|
|
Disabling IPv6 on an interface removes existing entries but nothing prevents
new entries from being manually added. To that end, add a new neigh_table
operation, allow_add, that is called on RTM_NEWNEIGH to see if neighbor
entries are allowed on a given device. If IPv6 is disabled on the device,
allow_add returns false and passes a message back to the user via extack.
$ echo 1 > /proc/sys/net/ipv6/conf/eth1/disable_ipv6
$ ip -6 neigh add fe80::4c88:bff:fe21:2704 dev eth1 lladdr de:ad:be:ef:01:01
Error: IPv6 is disabled on this device.
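The IPv6 side of the hook can be sketched as follows (names assumed to
mirror the description above):

static bool ndisc_allow_add(const struct net_device *dev,
                            struct netlink_ext_ack *extack)
{
        struct inet6_dev *idev = __in6_dev_get(dev);

        /* Refuse new entries when IPv6 is disabled on the device and
         * report the reason back via extack. */
        if (!idev || idev->cnf.disable_ipv6) {
                NL_SET_ERR_MSG(extack, "IPv6 is disabled on this device");
                return false;
        }
        return true;
}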
Signed-off-by: David Ahern <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
When scatter to CQE is enabled on a DCT QP it corrupts the mailbox command
since it tries to treat it as a QP create mailbox command instead of a
DCT create command.
The corrupted mailbox command causes userspace to malfunction as the
device doesn't create the QP as expected.
A new mlx5 capability is exposed to user-space which ensures that it will
not enable the feature on DCT without this fix in the kernel.
Fixes: 5d6ff1babe78 ("IB/mlx5: Support scatter to CQE for DC transport type")
Signed-off-by: Guy Levi <[email protected]>
Signed-off-by: Leon Romanovsky <[email protected]>
Signed-off-by: Jason Gunthorpe <[email protected]>
|
|
Add the fib6_flags and fib6_type to fib6_result. Update the lookup helpers
to set them and update post fib lookup users to use the version from the
result.
This allows nexthop objects to have a blackhole nexthop.
Signed-off-by: David Ahern <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Change fib6_lookup and fib6_table_lookup to take a fib6_result and set
f6i and nh rather than returning a fib6_info. For now both always
return 0.
A later patch set can make these more like the IPv4 counterparts and
return EINVAL, EACCES, etc. based on fib6_type.
Signed-off-by: David Ahern <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Change fib6_table_lookup tracepoint to take the fib6_result and use
the fib6_info and fib6_nh from it.
Signed-off-by: David Ahern <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Change ip6_mtu_from_fib6 and fib6_mtu to take a fib6_result over a
fib6_info. Update both to use the fib6_nh from fib6_result.
Since the signature of ip6_mtu_from_fib6 is already changing, add const
to daddr and saddr.
Signed-off-by: David Ahern <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Add 'struct fib6_result' to hold the fib entry and fib6_nh from a fib
lookup as separate entries, similar to what IPv4 now has with fib_result.
Rename fib6_multipath_select to fib6_select_path, pass fib6_result to
it, and set f6i and nh in the result once a path selection is done.
Call fib6_select_path unconditionally for path selection which means
moving the sibling and oif check to fib6_select_path. To handle the two
different call paths (one only calls multipath_select if flowi6_oif == 0 and
the other always calls it), add a new have_oif_match that controls the
sibling walk if relevant.
Update callers of fib6_multipath_select accordingly and have them use the
fib6_info and fib6_nh from the result.
This is needed for multipath nexthop objects where a single f6i can
point to multiple fib6_nh (similar to IPv4).
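At this point in the series the new container is essentially (sketch;
later patches add more fields such as fib6_flags and fib6_type):

struct fib6_result {
        struct fib6_info        *f6i;   /* matched fib entry */
        struct fib6_nh          *nh;    /* selected nexthop */
};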
Signed-off-by: David Ahern <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
The function build_skb() also has the responsibility to allocate and clear
the SKB structure. Introduce a new function, build_skb_around(), that moves
the responsibility of allocation and clearing to the caller. This allows the
caller to use the kmem_cache (slab/slub) bulk allocation API.
The next patch uses this function combined with kmem_cache_alloc_bulk.
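The intended caller-side pattern is roughly the following (a sketch;
the SKB slab cache name and the build_skb_around() signature are
assumed here):

#define SKB_BULK 8

static void build_skbs_bulk(void **frames, unsigned int frag_size)
{
        void *skbs[SKB_BULK];
        int i, n;

        /* The caller now allocates and clears the SKB structures... */
        n = kmem_cache_alloc_bulk(skbuff_head_cache, GFP_ATOMIC,
                                  SKB_BULK, skbs);
        for (i = 0; i < n; i++) {
                struct sk_buff *skb = skbs[i];

                memset(skb, 0, offsetof(struct sk_buff, tail));
                /* ...and build_skb_around() wraps each one around an
                 * existing data buffer. */
                skb = build_skb_around(skb, frames[i], frag_size);
                if (!skb)
                        continue;
                /* hand the skb off to the stack here */
        }
}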
Signed-off-by: Jesper Dangaard Brouer <[email protected]>
Acked-by: Song Liu <[email protected]>
Acked-by: Eric Dumazet <[email protected]>
Signed-off-by: Alexei Starovoitov <[email protected]>
|
|
The Switchtec devices supports two PCIe Function Frameworks (PFFs) per
upstream port (one for the port itself and one for the management endoint),
and each PFF may have up to 255 ports. Previously the driver only
supported 48 of those ports, and the SWITCHTEC_IOCTL_EVENT_SUMMARY ioctl
only returned information about those 48.
Increase SWITCHTEC_MAX_PFF_CSR from 48 to 255 so the driver supports all
255 possible ports.
Rename SWITCHTEC_IOCTL_EVENT_SUMMARY and the associated struct
switchtec_ioctl_event_summary to SWITCHTEC_IOCTL_EVENT_SUMMARY_LEGACY and
switchtec_ioctl_event_summary_legacy so existing applications work
unchanged, supporting up to 48 ports.
Add a replacement SWITCHTEC_IOCTL_EVENT_SUMMARY and struct
switchtec_ioctl_event_summary so that new and recompiled applications
support up to 255 ports.
Signed-off-by: Wesley Sheng <[email protected]>
[bhelgaas: changelog]
Signed-off-by: Bjorn Helgaas <[email protected]>
Reviewed-by: Logan Gunthorpe <[email protected]>
|
|
pci_request_region_exclusive() was introduced with commit e8de1481fd71
("resource: allow MMIO exclusivity for device drivers") in 2.6.29 which
was released 2008.
It never had an in tree user since then, so after 11 years later let's
remove it.
Signed-off-by: Johannes Thumshirn <[email protected]>
Signed-off-by: Bjorn Helgaas <[email protected]>
|
|
The "Enhanced Allocation (EA) for Memory and I/O Resources" ECN, approved
23 October 2014, sec 6.9.1.2, specifies a second DW in the capability for
type 1 (bridge) functions to describe fixed secondary and subordinate bus
numbers. This ECN was included in the PCIe r4.0 spec, but sec 6.9.1.2 was
omitted, presumably by mistake.
Read fixed bus numbers from the EA capability for bridges.
Signed-off-by: Subbaraya Sundeep <[email protected]>
[bhelgaas: add pci_ea_fixed_busnrs() return value]
Signed-off-by: Bjorn Helgaas <[email protected]>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git
Pull in the command line updates from the tip tree so the MDS parts can be
added.
|