|
In netdevice.h we removed the structure in net-next that is being
changed in 'net'. In macsec.c and rtnetlink.c we have overlaps
between fixes in 'net' and the u64 attribute changes in 'net-next'.
The mlx5 conflicts have to do with vxlan support dependencies.
Signed-off-by: David S. Miller <[email protected]>
|
|
These sanity checks are open-coded in all users except iov_iter_advance(),
where they wouldn't be a bad idea either: as it is, iov_iter_advance(i, 0)
ends up dereferencing potentially past the end of the iovec array. It
doesn't do anything with the value it reads, and it is very unlikely to
trigger an oops on that dereference, but it is not impossible.
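For illustration only, a minimal sketch of the kind of zero-length guard
being described (not the actual patch; exactly where the check lands in the
real code may differ):

  /* Sketch: bail out before touching the iovec array when there is
   * nothing to advance. */
  void iov_iter_advance(struct iov_iter *i, size_t size)
  {
  	if (unlikely(!size))	/* don't even look at i->iov */
  		return;
  	/* existing advancing code follows */
  }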
Reported-by: Jiri Slaby <[email protected]>
Reported-by: Takashi Iwai <[email protected]>
Signed-off-by: Al Viro <[email protected]>
|
|
Recently, we allowed saving stack traces whose hashed value is 0. This
causes a problem: stackdepot can now return 0 even on success, and users
of stackdepot cannot distinguish success from failure. Solve this by
adding one bit to the handle and setting it for every valid handle. After
that, a valid handle will never be 0, and a 0 handle correctly represents
failure.
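For illustration, a sketch of the handle-layout idea, assuming a
bitfield-style handle with a spare bit (the field names and widths below
are made up for the example, not the real stackdepot layout):

  union handle_parts {
  	depot_stack_handle_t handle;
  	struct {
  		u32 slabindex : 21;
  		u32 offset    : 10;
  		u32 valid     : 1;	/* set for every stored stack */
  	};
  };

  /* On successful save: parts.valid = 1, so parts.handle != 0 even when
   * slabindex == 0 and offset == 0; a returned 0 now always means failure. */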
Fixes: 33334e25769c ("lib/stackdepot.c: allow the stack trace hash to be zero")
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Joonsoo Kim <[email protected]>
Cc: Alexander Potapenko <[email protected]>
Cc: Andrey Ryabinin <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Conflicts:
net/ipv4/ip_gre.c
Minor conflicts between tunnel bug fixes in net and
ipv6 tunnel cleanups in net-next.
Signed-off-by: David S. Miller <[email protected]>
|
|
A kernel taint results when loading the rsa_generic module:
root@(none):~# modprobe rsa_generic
asn1_decoder: module license 'unspecified' taints kernel.
Disabling lock debugging due to kernel taint
"Tainting" of the kernel is (usually) a way of indicating that
a proprietary module has been inserted, which is not the case here.
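The usual cure for this class of warning is a license tag in the offending
module source; a minimal sketch, assuming the tag belongs in
lib/asn1_decoder.c as the log above suggests (the exact license string used
by the real fix may differ):

  #include <linux/module.h>

  MODULE_DESCRIPTION("ASN.1 decoder");
  MODULE_LICENSE("GPL");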
Signed-off-by: Tudor Ambarus <[email protected]>
Signed-off-by: Herbert Xu <[email protected]>
|
|
Do not bail out from depot_save_stack() if the stack trace has zero hash.
Initially depot_save_stack() silently dropped stack traces with zero
hashes; however, there's actually no point in reserving this zero value.
Reported-by: Joonsoo Kim <[email protected]>
Signed-off-by: Alexander Potapenko <[email protected]>
Acked-by: Andrey Ryabinin <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu into core/rcu
Pull RCU updates from Paul E. McKenney:
* Documentation updates, including fixes to the design-level
requirements documentation and a fixed version of the design-level
data-structure documentation. These fixes include removing
cartoons and getting rid of the html/htmlx duplication.
* Further improvements to the new-age expedited grace periods.
* Miscellaneous fixes.
* Torture-test changes, including a new rcuperf module for measuring
RCU grace-period performance and scalability, which is useful for
the expedited-grace-period changes.
Signed-off-by: Ingo Molnar <[email protected]>
|
|
Fix typo and describe 'padattr'.
Fixes: 089bf1a6a924 ("libnl: add more helpers to align attributes on 64-bit")
Signed-off-by: Nicolas Dichtel <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Conflicts were two cases of simple overlapping changes,
nothing serious.
In the UDP case, we need to add a hlist_add_tail_rcu()
to linux/rculist.h, because we've moved UDP socket handling
away from using nulls lists.
Signed-off-by: David S. Miller <[email protected]>
|
|
Signed-off-by: Nicolas Dichtel <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Allocate a NULL-terminated file path with special characters escaped,
safe for logging.
Signed-off-by: Kees Cook <[email protected]>
Signed-off-by: James Morris <[email protected]>
|
|
Provide an escaped (but readable: no inter-argument NULLs) commandline
safe for logging.
Signed-off-by: Kees Cook <[email protected]>
Signed-off-by: James Morris <[email protected]>
|
|
Handle allocating and escaping a string safe for logging.
Signed-off-by: Kees Cook <[email protected]>
Signed-off-by: James Morris <[email protected]>
|
|
We want those fixes in here as well.
Signed-off-by: Greg Kroah-Hartman <[email protected]>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core
Pull misc fixes from Greg KH:
"Here are three small fixes for 4.6-rc4.
Two fix up some lz4 issues with big endian systems, and the remaining
one resolves a minor debugfs issue that was reported.
All have been in linux-next with no reported issues"
* tag 'driver-core-4.6-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core:
lib: lz4: cleanup unaligned access efficiency detection
lib: lz4: fixed zram with lz4 on big endian machines
debugfs: Make automount point inodes permanently empty
|
|
Now we're ready to move the mempool-based SG chained allocator code from
the SCSI driver to lib/sg_pool.c, which will be compiled only when the
Kconfig symbol CONFIG_SG_POOL is enabled.
SCSI selects CONFIG_SG_POOL.
Reviewed-by: Christoph Hellwig <[email protected]>
Signed-off-by: Ming Lin <[email protected]>
Reviewed-by: Sagi Grimberg <[email protected]>
Signed-off-by: Martin K. Petersen <[email protected]>
|
|
These identifiers are bogus. Interested architectures should define
HAVE_EFFICIENT_UNALIGNED_ACCESS whenever it is relevant to do so. If this
isn't true for some arch, it should be fixed in the arch definition.
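A sketch of what the cleanup amounts to (the macro name below is
illustrative, not the exact hunk):

  /* Let the architecture's Kconfig selection decide, instead of
   * hard-coding a list of CPU-specific preprocessor symbols. */
  #ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
  #define LZ4_ARCH_HAS_EFFICIENT_UNALIGNED_ACCESS 1
  #else
  #define LZ4_ARCH_HAS_EFFICIENT_UNALIGNED_ACCESS 0
  #endif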
Signed-off-by: Rui Salvaterra <[email protected]>
Reviewed-by: Sergey Senozhatsky <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
|
|
Based on Sergey's test patch [1], this fixes zram with lz4 compression
on big endian cpus.
Note that the 64-bit preprocessor test is not a cleanup, it's part of
the fix, since those identifiers are bogus (for example, __ppc64__
isn't defined anywhere else in the kernel, which means we'd fall into
the 32-bit definitions on ppc64).
Tested on ppc64 with no regression on x86_64.
[1] http://marc.info/?l=linux-kernel&m=145994470805853&w=4
Cc: [email protected]
Suggested-by: Sergey Senozhatsky <[email protected]>
Signed-off-by: Rui Salvaterra <[email protected]>
Reviewed-by: Sergey Senozhatsky <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs into for-linus
|
|
Nothing prevents a dentry found by path lookup before a return of
__debugfs_remove() to actually get opened after that return. Now, after
the return of __debugfs_remove(), there are no guarantees whatsoever
regarding the memory the corresponding inode's file_operations object
had been kept in.
Since __debugfs_remove() is seldom invoked, usually only from module exit
handlers, the race is hard to trigger and the impact is very low.
A discussion of the problem outlined above as well as a suggested
solution can be found in the (sub-)thread rooted at
http://lkml.kernel.org/g/[email protected]
("Yet another pipe related oops.")
Basically, Greg KH suggests to introduce an intermediate fops and
Al Viro points out that a pointer to the original ones may be stored in
->d_fsdata.
Follow this line of reasoning:
- Add SRCU as a reverse dependency of DEBUG_FS.
- Introduce a srcu_struct object for the debugfs subsystem.
- In debugfs_create_file(), store a pointer to the original
file_operations object in ->d_fsdata.
- Make debugfs_remove() and debugfs_remove_recursive() wait for a
SRCU grace period after the dentry has been d_delete()'d and before they
return to their callers.
- Introduce an intermediate file_operations object named
"debugfs_open_proxy_file_operations". Its ->open() function checks,
under the protection of a SRCU read lock, whether the dentry is still
alive, i.e. has not been d_delete()'d, and if so, tries to acquire a
reference on the owning module.
On success, it sets the file object's ->f_op to the original
file_operations and forwards the ongoing open() call to the original
->open() (a sketch of such a proxying ->open() follows below).
- For clarity, rename the former debugfs_file_operations to
debugfs_noop_file_operations -- they are in no way canonical.
The choice of SRCU over "normal" RCU is justified by the fact that the
former may also be used to protect ->i_private data from going away
during the execution of a file's readers and writers which may (and do)
sleep.
Finally, introduce the fs/debugfs/internal.h header containing some
declarations internal to the debugfs implementation.
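For illustration, a rough sketch of the proxying ->open() described in the
list above; it is an approximation, not the actual patch, and the names
debugfs_srcu and the ->d_fsdata lookup are assumptions for the example:

  static int debugfs_open_proxy_open(struct inode *inode, struct file *filp)
  {
  	struct dentry *dentry = filp->f_path.dentry;
  	const struct file_operations *real_fops;
  	int srcu_idx, r = 0;

  	srcu_idx = srcu_read_lock(&debugfs_srcu);
  	if (d_unhashed(dentry)) {		/* already debugfs_remove()'d */
  		r = -ENOENT;
  		goto out;
  	}
  	real_fops = dentry->d_fsdata;		/* stored by debugfs_create_file() */
  	if (real_fops->owner && !try_module_get(real_fops->owner)) {
  		r = -ENODEV;			/* owning module is on its way out */
  		goto out;
  	}
  	filp->f_op = real_fops;			/* hand the file over to the real fops */
  	if (real_fops->open) {
  		r = real_fops->open(inode, filp);
  		if (r)
  			module_put(real_fops->owner);
  	}
  out:
  	srcu_read_unlock(&debugfs_srcu, srcu_idx);
  	return r;
  }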
Signed-off-by: Nicolai Stange <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
|
|
|
|
Signed-off-by: Al Viro <[email protected]>
|
|
Some of these tests proved useful with the powerpc eBPF JIT port due to
sign-extended 16-bit immediate loads. Though some of these aspects get
covered in other tests, it is better to have explicit tests so as to
quickly tag the precise problem.
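As a plain C illustration (not one of the actual test vectors) of why
sign-extended 16-bit immediates deserve explicit coverage: a JIT that
materializes a 32-bit constant from a "high 16 bits" part plus a
sign-extending "low 16 bits" part must compensate whenever bit 15 of the
constant is set.

  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
  	int32_t imm = 0x00018000;		/* low half has bit 15 set */
  	int32_t hi  = imm >> 16;		/* 0x0001 */
  	int32_t lo  = (int16_t)(imm & 0xffff);	/* sign-extends to -0x8000 */

  	int32_t naive = (hi << 16) + lo;			/* 0x00008000 - wrong */
  	int32_t fixed = ((hi + ((imm >> 15) & 1)) << 16) + lo;	/* 0x00018000 */

  	printf("naive=0x%08x fixed=0x%08x\n", naive, fixed);
  	return 0;
  }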
Cc: Alexei Starovoitov <[email protected]>
Cc: Daniel Borkmann <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: Ananth N Mavinakayanahalli <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Paul Mackerras <[email protected]>
Signed-off-by: Naveen N. Rao <[email protected]>
Acked-by: Alexei Starovoitov <[email protected]>
Acked-by: Daniel Borkmann <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
BPF_ALU32 and BPF_ALU64 tests for adding two 32-bit values that result in
a 32-bit overflow.
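A plain C sketch of the behavioural difference such tests pin down
(illustrative values, not the actual test vectors): BPF_ALU32 truncates the
result to 32 bits, while BPF_ALU64 keeps the carry.

  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
  	uint32_t a = 0x80000000u, b = 0x80000000u;	/* 32-bit add overflows */

  	uint64_t alu64 = (uint64_t)a + (uint64_t)b;	/* BPF_ALU64: 0x100000000 */
  	uint64_t alu32 = (uint32_t)(a + b);		/* BPF_ALU32: 0, upper bits cleared */

  	printf("alu64=%#llx alu32=%#llx\n",
  	       (unsigned long long)alu64, (unsigned long long)alu32);
  	return 0;
  }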
Cc: Alexei Starovoitov <[email protected]>
Cc: Daniel Borkmann <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: Ananth N Mavinakayanahalli <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Paul Mackerras <[email protected]>
Signed-off-by: Naveen N. Rao <[email protected]>
Acked-by: Alexei Starovoitov <[email protected]>
Acked-by: Daniel Borkmann <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Unsigned Jump-if-Greater-Than.
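BPF_JGT compares unsigned values; a small C illustration of the case such a
test wants to catch, with made-up values:

  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
  	uint64_t dst  = (uint64_t)-1;	/* 0xffffffffffffffff */
  	int64_t  sdst = (int64_t)dst;	/* -1 if (mis)treated as signed */

  	/* An unsigned JGT against 1 must be taken; a signed compare would not be. */
  	printf("unsigned: %d, signed: %d\n", dst > 1u, sdst > 1);
  	return 0;
  }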
Cc: Alexei Starovoitov <[email protected]>
Cc: Daniel Borkmann <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: Ananth N Mavinakayanahalli <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Paul Mackerras <[email protected]>
Signed-off-by: Naveen N. Rao <[email protected]>
Acked-by: Alexei Starovoitov <[email protected]>
Acked-by: Daniel Borkmann <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
The JMP_JSET tests incorrectly used BPF_JNE. Fix them to use BPF_JSET.
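The two opcodes test different conditions, which is why the mix-up matters;
a small C illustration with made-up values:

  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
  	uint64_t dst = 0x8, imm = 0xc;

  	printf("jset=%d jne=%d\n",
  	       (dst & imm) != 0,	/* BPF_JSET: branch on any common bit */
  	       dst != imm);		/* BPF_JNE: branch on inequality */

  	dst = 0xc;			/* with dst == imm the two diverge */
  	printf("jset=%d jne=%d\n", (dst & imm) != 0, dst != imm);
  	return 0;
  }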
Cc: Alexei Starovoitov <[email protected]>
Cc: Daniel Borkmann <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: Ananth N Mavinakayanahalli <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Paul Mackerras <[email protected]>
Signed-off-by: Naveen N. Rao <[email protected]>
Acked-by: Alexei Starovoitov <[email protected]>
Acked-by: Daniel Borkmann <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
|
|
Changes since V1: fixed the description and added KASan warning.
In assoc_array_insert_into_terminal_node(), we call the
compare_object() method on all non-empty slots, even when they're
not leaves, passing a pointer to an unexpected structure to
compare_object(). Currently it causes an out-of-bounds read access
in keyring_compare_object(), detected by KASan (see below). The issue
is easily reproduced with the keyutils testsuite.
Only call compare_object() when the slot is a leaf.
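A sketch of the shape of the fix, using the existing assoc_array pointer
helpers (an approximation of the change, not the verbatim hunk):

  /* In assoc_array_insert_into_terminal_node(), while scanning the slots: */
  for (i = 0; i < ASSOC_ARRAY_FAN_OUT; i++) {
  	ptr = node->slots[i];
  	if (!ptr)
  		continue;
  	/* Only leaves hold objects that compare_object() understands. */
  	if (assoc_array_ptr_is_leaf(ptr) &&
  	    ops->compare_object(assoc_array_ptr_to_leaf(ptr), index_key)) {
  		/* matching leaf found */
  	}
  }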
KASan warning:
==================================================================
BUG: KASAN: slab-out-of-bounds in keyring_compare_object+0x213/0x240 at addr ffff880060a6f838
Read of size 8 by task keyctl/1655
=============================================================================
BUG kmalloc-192 (Not tainted): kasan: bad access detected
-----------------------------------------------------------------------------
Disabling lock debugging due to kernel taint
INFO: Allocated in assoc_array_insert+0xfd0/0x3a60 age=69 cpu=1 pid=1647
___slab_alloc+0x563/0x5c0
__slab_alloc+0x51/0x90
kmem_cache_alloc_trace+0x263/0x300
assoc_array_insert+0xfd0/0x3a60
__key_link_begin+0xfc/0x270
key_create_or_update+0x459/0xaf0
SyS_add_key+0x1ba/0x350
entry_SYSCALL_64_fastpath+0x12/0x76
INFO: Slab 0xffffea0001829b80 objects=16 used=8 fp=0xffff880060a6f550 flags=0x3fff8000004080
INFO: Object 0xffff880060a6f740 @offset=5952 fp=0xffff880060a6e5d1
Bytes b4 ffff880060a6f730: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
Object ffff880060a6f740: d1 e5 a6 60 00 88 ff ff 0e 00 00 00 00 00 00 00 ...`............
Object ffff880060a6f750: 02 cf 8e 60 00 88 ff ff 02 c0 8e 60 00 88 ff ff ...`.......`....
Object ffff880060a6f760: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
Object ffff880060a6f770: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
Object ffff880060a6f780: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
Object ffff880060a6f790: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
Object ffff880060a6f7a0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
Object ffff880060a6f7b0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
Object ffff880060a6f7c0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
Object ffff880060a6f7d0: 02 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
Object ffff880060a6f7e0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
Object ffff880060a6f7f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
CPU: 0 PID: 1655 Comm: keyctl Tainted: G B 4.5.0-rc4-kasan+ #291
Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
0000000000000000 000000001b2800b4 ffff880060a179e0 ffffffff81b60491
ffff88006c802900 ffff880060a6f740 ffff880060a17a10 ffffffff815e2969
ffff88006c802900 ffffea0001829b80 ffff880060a6f740 ffff880060a6e650
Call Trace:
[<ffffffff81b60491>] dump_stack+0x85/0xc4
[<ffffffff815e2969>] print_trailer+0xf9/0x150
[<ffffffff815e9454>] object_err+0x34/0x40
[<ffffffff815ebe50>] kasan_report_error+0x230/0x550
[<ffffffff819949be>] ? keyring_get_key_chunk+0x13e/0x210
[<ffffffff815ec62d>] __asan_report_load_n_noabort+0x5d/0x70
[<ffffffff81994cc3>] ? keyring_compare_object+0x213/0x240
[<ffffffff81994cc3>] keyring_compare_object+0x213/0x240
[<ffffffff81bc238c>] assoc_array_insert+0x86c/0x3a60
[<ffffffff81bc1b20>] ? assoc_array_cancel_edit+0x70/0x70
[<ffffffff8199797d>] ? __key_link_begin+0x20d/0x270
[<ffffffff8199786c>] __key_link_begin+0xfc/0x270
[<ffffffff81993389>] key_create_or_update+0x459/0xaf0
[<ffffffff8128ce0d>] ? trace_hardirqs_on+0xd/0x10
[<ffffffff81992f30>] ? key_type_lookup+0xc0/0xc0
[<ffffffff8199e19d>] ? lookup_user_key+0x13d/0xcd0
[<ffffffff81534763>] ? memdup_user+0x53/0x80
[<ffffffff819983ea>] SyS_add_key+0x1ba/0x350
[<ffffffff81998230>] ? key_get_type_from_user.constprop.6+0xa0/0xa0
[<ffffffff828bcf4e>] ? retint_user+0x18/0x23
[<ffffffff8128cc7e>] ? trace_hardirqs_on_caller+0x3fe/0x580
[<ffffffff81004017>] ? trace_hardirqs_on_thunk+0x17/0x19
[<ffffffff828bc432>] entry_SYSCALL_64_fastpath+0x12/0x76
Memory state around the buggy address:
ffff880060a6f700: fc fc fc fc fc fc fc fc 00 00 00 00 00 00 00 00
ffff880060a6f780: 00 00 00 00 00 00 00 00 00 00 00 fc fc fc fc fc
>ffff880060a6f800: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
^
ffff880060a6f880: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
ffff880060a6f900: fc fc fc fc fc fc 00 00 00 00 00 00 00 00 00 00
==================================================================
Signed-off-by: Jerome Marchand <[email protected]>
Signed-off-by: David Howells <[email protected]>
cc: [email protected]
|
|
Within the copying loop in mpi_read_raw_from_sgl(), the last input SGE's
byte count gets artificially extended as follows:
if (sg_is_last(sg) && (len % BYTES_PER_MPI_LIMB))
len += BYTES_PER_MPI_LIMB - (len % BYTES_PER_MPI_LIMB);
Within the following byte copying loop, this causes reads beyond that
SGE's allocated buffer:
BUG: KASAN: slab-out-of-bounds in mpi_read_raw_from_sgl+0x331/0x650
at addr ffff8801e168d4d8
Read of size 1 by task systemd-udevd/721
[...]
Call Trace:
[<ffffffff818c4d35>] dump_stack+0xbc/0x117
[<ffffffff818c4c79>] ? _atomic_dec_and_lock+0x169/0x169
[<ffffffff814af5d1>] ? print_section+0x61/0xb0
[<ffffffff814b1109>] print_trailer+0x179/0x2c0
[<ffffffff814bc524>] object_err+0x34/0x40
[<ffffffff814bfdc7>] kasan_report_error+0x307/0x8c0
[<ffffffff814bf315>] ? kasan_unpoison_shadow+0x35/0x50
[<ffffffff814bf38e>] ? kasan_kmalloc+0x5e/0x70
[<ffffffff814c0ad1>] kasan_report+0x71/0xa0
[<ffffffff81938171>] ? mpi_read_raw_from_sgl+0x331/0x650
[<ffffffff814bf1a6>] __asan_load1+0x46/0x50
[<ffffffff81938171>] mpi_read_raw_from_sgl+0x331/0x650
[<ffffffff817f41b6>] rsa_verify+0x106/0x260
[<ffffffff817f40b0>] ? rsa_set_pub_key+0xf0/0xf0
[<ffffffff818edc79>] ? sg_init_table+0x29/0x50
[<ffffffff817f4d22>] ? pkcs1pad_sg_set_buf+0xb2/0x2e0
[<ffffffff817f5b74>] pkcs1pad_verify+0x1f4/0x2b0
[<ffffffff81831057>] public_key_verify_signature+0x3a7/0x5e0
[<ffffffff81830cb0>] ? public_key_describe+0x80/0x80
[<ffffffff817830f0>] ? keyring_search_aux+0x150/0x150
[<ffffffff818334a4>] ? x509_request_asymmetric_key+0x114/0x370
[<ffffffff814b83f0>] ? kfree+0x220/0x370
[<ffffffff818312c2>] public_key_verify_signature_2+0x32/0x50
[<ffffffff81830b5c>] verify_signature+0x7c/0xb0
[<ffffffff81835d0c>] pkcs7_validate_trust+0x42c/0x5f0
[<ffffffff813c391a>] system_verify_data+0xca/0x170
[<ffffffff813c3850>] ? top_trace_array+0x9b/0x9b
[<ffffffff81510b29>] ? __vfs_read+0x279/0x3d0
[<ffffffff8129372f>] mod_verify_sig+0x1ff/0x290
[...]
The exact purpose of the len extension isn't clear to me, but due to
its form, I suspect that it's a leftover somehow accounting for leading
zero bytes within the most significant output limb.
Note however that without that len adjustment, the total number of bytes
ever processed by the inner loop equals nbytes and thus, the last output
limb gets written at this point. Thus the net effect of the len adjustment
cited above is just to keep the inner loop running for some more
iterations, namely < BYTES_PER_MPI_LIMB ones, reading some extra bytes from
beyond the last SGE's buffer and discarding them afterwards.
Fix this issue by purging the extension of len beyond the last input SGE's
buffer length.
Fixes: 2d4d1eea540b ("lib/mpi: Add mpi sgl helpers")
Signed-off-by: Nicolai Stange <[email protected]>
Signed-off-by: Herbert Xu <[email protected]>
|
|
Within the byte reading loop in mpi_read_raw_from_sgl(), there are two
housekeeping indices used, z and x.
At all times, the index z represents the number of output bytes covered
by the input SGEs for which processing has completed so far. This includes
any leading zero bytes within the most significant limb.
The index x changes its meaning after the first outer loop's iteration
though: while processing the first input SGE, it represents
"number of leading zero bytes in most significant output limb" +
"current position within current SGE"
For the remaining SGEs OTOH, x corresponds just to
"current position within current SGE"
After all, it is only the sum of z and x that has any meaning for the
output buffer and thus, the
"number of leading zero bytes in most significant output limb"
part can be moved away from x into z from the beginning, opening up the
opportunity for cleaner code.
Before the outer loop iterating over the SGEs, don't initialize z with
zero, but with the number of leading zero bytes in the most significant
output limb. For the inner loop iterating over a single SGE's bytes,
get rid of the buf_shift offset to x's bounds and let x run from zero to
sg->length - 1.
Signed-off-by: Nicolai Stange <[email protected]>
Signed-off-by: Herbert Xu <[email protected]>
|
|
The number of bits, nbits, is calculated in mpi_read_raw_from_sgl() as
follows:
nbits = nbytes * 8;
Afterwards, the number of leading zero bits of the first byte get
subtracted:
nbits -= count_leading_zeros(*(u8 *)(sg_virt(sgl) + lzeros));
However, count_leading_zeros() takes an unsigned long and thus,
the u8 gets promoted to an unsigned long.
Thus, the above doesn't subtract the number of leading zeros in the most
significant nonzero input byte from nbits, but the number of leading
zeros of the most significant nonzero input byte promoted to unsigned long,
i.e. BITS_PER_LONG - 8 too many.
Fix this by subtracting
count_leading_zeros(...) - (BITS_PER_LONG - 8)
from nbits only.
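In code form, the corrected subtraction reads roughly:

  /* Only the leading zeros within the 8-bit value are of interest, so
   * compensate for the u8 -> unsigned long promotion: */
  nbits -= count_leading_zeros(*(u8 *)(sg_virt(sgl) + lzeros))
  	 - (BITS_PER_LONG - 8);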
Fixes: 2d4d1eea540b ("lib/mpi: Add mpi sgl helpers")
Signed-off-by: Nicolai Stange <[email protected]>
Signed-off-by: Herbert Xu <[email protected]>
|
|
In mpi_read_raw_from_sgl(), unsigned nbits is calculated as follows:
nbits = nbytes * 8;
and redundantly cleared later on if nbytes == 0:
if (nbytes > 0)
...
else
nbits = 0;
Purge this redundant clearing for the sake of clarity.
Signed-off-by: Nicolai Stange <[email protected]>
Signed-off-by: Herbert Xu <[email protected]>
|
|
At the very beginning of mpi_read_raw_from_sgl(), the leading zeros of
the input scatterlist are counted:
lzeros = 0;
for_each_sg(sgl, sg, ents, i) {
...
if (/* sg contains nonzero bytes */)
break;
/* sg contains nothing but zeros here */
ents--;
lzeros = 0;
}
Later on, the total number of trailing nonzero bytes is calculated by
subtracting the number of leading zero bytes from the total number of input
bytes:
nbytes -= lzeros;
However, since lzeros gets reset to zero for each completely zero leading
sg in the loop above, it doesn't include those.
Besides wasting resources by allocating a too large output buffer,
this mistake propagates into the calculation of x, the number of
leading zeros within the most significant output limb:
x = BYTES_PER_MPI_LIMB - nbytes % BYTES_PER_MPI_LIMB;
What's more, the low order bytes of the output, equal in number to the
extra bytes in nbytes, are left uninitialized.
Fix this by adjusting nbytes for each completely zero leading scatterlist
entry.
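A sketch of the described adjustment inside the leading-zeros scan (an
approximation, not the verbatim hunk):

  lzeros = 0;
  for_each_sg(sgl, sg, ents, i) {
  	const u8 *buff = sg_virt(sg);
  	int len = sg->length;

  	while (len && !*buff) {		/* count zeros in this SGE */
  		lzeros++;
  		len--;
  		buff++;
  	}
  	if (len)			/* hit a non-zero byte */
  		break;

  	/* Completely zero SGE: account for it in nbytes as well. */
  	nbytes -= lzeros;
  	ents--;
  	lzeros = 0;
  }
  nbytes -= lzeros;			/* zeros within the first non-zero SGE */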
Fixes: 2d4d1eea540b ("lib/mpi: Add mpi sgl helpers")
Signed-off-by: Nicolai Stange <[email protected]>
Signed-off-by: Herbert Xu <[email protected]>
|
|
Currently, the nbytes local variable is calculated from the len argument
as follows:
... mpi_read_raw_from_sgl(..., unsigned int len)
{
unsigned nbytes;
...
if (!ents)
nbytes = 0;
else
nbytes = len - lzeros;
...
}
Given that nbytes is derived from len in a trivial way and that the len
argument is shadowed by a local len variable in several loops, this is just
confusing.
Rename the len argument to nbytes and get rid of the nbytes local variable.
Do the nbytes calculation in place.
Signed-off-by: Nicolai Stange <[email protected]>
Signed-off-by: Herbert Xu <[email protected]>
|
|
Currently, mpi_read_buffer() writes full limbs to the output buffer
and moves memory around to purge leading zero limbs afterwards.
However, with
commit 9cbe21d8f89d ("lib/mpi: only require buffers as big as needed for
the integer")
the caller is only required to provide a buffer large enough to hold the
result without the leading zeros.
This might result in a buffer overflow for small MP numbers with leading
zeros.
Fix this by copying the result to its final destination within the output
buffer and not copying the leading zeros at all.
Fixes: 9cbe21d8f89d ("lib/mpi: only require buffers as big as needed for
the integer")
Signed-off-by: Nicolai Stange <[email protected]>
Signed-off-by: Herbert Xu <[email protected]>
|
|
Currently, the endian conversion from CPU order to BE is open coded in
mpi_read_buffer().
Replace this by the centrally provided cpu_to_be*() macros.
Copy from the temporary storage on stack to the destination buffer
by means of memcpy().
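Roughly, the conversion described (a sketch; the limb-size selection
follows the pattern used elsewhere in lib/mpi):

  mpi_limb_t alimb;

  #if BYTES_PER_MPI_LIMB == 4
  	alimb = cpu_to_be32(a->d[i]);
  #elif BYTES_PER_MPI_LIMB == 8
  	alimb = cpu_to_be64(a->d[i]);
  #else
  #error please implement for this limb size.
  #endif
  	memcpy(p, &alimb, BYTES_PER_MPI_LIMB);
  	p += BYTES_PER_MPI_LIMB;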
Signed-off-by: Nicolai Stange <[email protected]>
Signed-off-by: Herbert Xu <[email protected]>
|
|
Currently, if the number of leading zeros is greater than what fits into a
complete limb, mpi_read_buffer() skips them by iterating over them
limb-wise.
Instead of skipping the high order zero limbs within the loop, adjust the
copying loop's bounds.
Signed-off-by: Nicolai Stange <[email protected]>
Signed-off-by: Herbert Xu <[email protected]>
|
|
Currently, the endian conversion from CPU order to BE is open coded in
mpi_write_sgl().
Replace this by the centrally provided cpu_to_be*() macros.
Signed-off-by: Nicolai Stange <[email protected]>
Signed-off-by: Herbert Xu <[email protected]>
|
|
Within the copying loop in mpi_write_sgl(), we have
if (lzeros) {
mpi_limb_t *limb1 = (void *)p - sizeof(alimb);
mpi_limb_t *limb2 = (void *)p - sizeof(alimb)
+ lzeros;
*limb1 = *limb2;
...
}
where p points past the end of alimb2 which lives on the stack and contains
the current limb in BE order.
The purpose of the above is to shift the non-zero bytes of alimb2 to its
beginning in memory, i.e. to skip its leading zero bytes.
However, limb2 points somewhere into the middle of alimb2 and thus, reading
*limb2 pulls in lzero bytes from somewhere.
Indeed, KASAN splats:
BUG: KASAN: stack-out-of-bounds in mpi_write_to_sgl+0x4e3/0x6f0
at addr ffff8800cb04f601
Read of size 8 by task systemd-udevd/391
page:ffffea00032c13c0 count:0 mapcount:0 mapping: (null) index:0x0
flags: 0x3fff8000000000()
page dumped because: kasan: bad access detected
CPU: 3 PID: 391 Comm: systemd-udevd Tainted: G B L
4.5.0-next-20160316+ #12
[...]
Call Trace:
[<ffffffff8194889e>] dump_stack+0xdc/0x15e
[<ffffffff819487c2>] ? _atomic_dec_and_lock+0xa2/0xa2
[<ffffffff814892b5>] ? __dump_page+0x185/0x330
[<ffffffff8150ffd6>] kasan_report_error+0x5e6/0x8b0
[<ffffffff814724cd>] ? kzfree+0x2d/0x40
[<ffffffff819c5bce>] ? mpi_free_limb_space+0xe/0x20
[<ffffffff819c469e>] ? mpi_powm+0x37e/0x16f0
[<ffffffff815109f1>] kasan_report+0x71/0xa0
[<ffffffff819c0353>] ? mpi_write_to_sgl+0x4e3/0x6f0
[<ffffffff8150ed34>] __asan_load8+0x64/0x70
[<ffffffff819c0353>] mpi_write_to_sgl+0x4e3/0x6f0
[<ffffffff819bfe70>] ? mpi_set_buffer+0x620/0x620
[<ffffffff819c0e6f>] ? mpi_cmp+0xbf/0x180
[<ffffffff8186e282>] rsa_verify+0x202/0x260
What's more, since lzeros can be anything from 1 to sizeof(mpi_limb_t)-1,
the above will cause unaligned accesses which is bad on non-x86 archs.
Fix the issue by preparing the starting point p for the upcoming copy
operation instead of shifting the source memory, i.e. alimb2.
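A sketch of that approach (approximate; in the real function the bytes are
emitted into the scatterlist rather than through a single memcpy(), and
sg_buf below is a made-up name for the current output position):

  /* Point the copy source past the leading zero bytes of the big-endian
   * limb instead of shifting the limb's bytes around on the stack. */
  const u8 *src = (const u8 *)&alimb2 + lzeros;
  size_t n = sizeof(alimb2) - lzeros;

  memcpy(sg_buf, src, n);
  sg_buf += n;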
Fixes: 2d4d1eea540b ("lib/mpi: Add mpi sgl helpers")
Signed-off-by: Nicolai Stange <[email protected]>
Signed-off-by: Herbert Xu <[email protected]>
|
|
Within the copying loop in mpi_write_sgl(), we have
if (lzeros) {
...
p -= lzeros;
y = lzeros;
}
p = p - (sizeof(alimb) - y);
If lzeros == 0, then y == 0 too, and the correction is a no-op; if lzeros
is non-zero, lzeros first gets subtracted from p and then added back again
via y. Either way, the net effect is simply p -= sizeof(alimb).
Purge this redundancy.
Signed-off-by: Nicolai Stange <[email protected]>
Signed-off-by: Herbert Xu <[email protected]>
|
|
Within the copying loop in mpi_write_sgl(), we have
if (lzeros > 0) {
...
lzeros -= sizeof(alimb);
}
However, at this point, lzeros < sizeof(alimb) holds. Make this fact
explicit by rewriting the above to
if (lzeros) {
...
lzeros = 0;
}
Signed-off-by: Nicolai Stange <[email protected]>
Signed-off-by: Herbert Xu <[email protected]>
|
|
Currently, if the number of leading zeros is greater than what fits into a
complete limb, mpi_write_sgl() skips them by iterating over them limb-wise.
However, it fails to adjust its internal leading zeros tracking variable,
lzeros, accordingly: it does a
p -= sizeof(alimb);
continue;
which should really have been a
lzeros -= sizeof(alimb);
continue;
Since lzeros never decreases if its initial value >= sizeof(alimb), nothing
gets copied by mpi_write_sgl() in that case.
Instead of skipping the high order zero limbs within the loop as shown
above, fix the issue by adjusting the copying loop's bounds.
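A sketch of the "adjust the bounds" approach (illustrative; i is the limb
index of the copying loop):

  /* Start below the all-zero high limbs instead of special-casing them
   * inside the loop body; keep only the intra-limb remainder in lzeros. */
  for (i = a->nlimbs - 1 - lzeros / BYTES_PER_MPI_LIMB,
       lzeros %= BYTES_PER_MPI_LIMB;
       i >= 0; i--) {
  	/* convert limb i to big-endian and copy it out, skipping the
  	 * remaining lzeros bytes on the first iteration */
  }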
Fixes: 2d4d1eea540b ("lib/mpi: Add mpi sgl helpers")
Signed-off-by: Nicolai Stange <[email protected]>
Signed-off-by: Herbert Xu <[email protected]>
|
|
In certain cases, the 802.11 mesh pathtable code wants to
iterate over all of the entries in the forwarding table from
the receive path, which is inside an RCU read-side critical
section. Enable walks inside atomic sections by allowing
GFP_ATOMIC allocations for the walker state.
Change all existing callsites to pass in GFP_KERNEL.
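The resulting calling convention, roughly (a sketch; the mesh-table field
name below is illustrative):

  /* The walker now takes the allocation flags for its internal state. */
  int rhashtable_walk_init(struct rhashtable *ht, struct rhashtable_iter *iter,
  			 gfp_t gfp);

  /* Existing callers, outside atomic context: */
  err = rhashtable_walk_init(&ht, &iter, GFP_KERNEL);

  /* A walker started from the RX path, under rcu_read_lock(): */
  err = rhashtable_walk_init(&tbl->rhashtable, &iter, GFP_ATOMIC);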
Acked-by: Thomas Graf <[email protected]>
Signed-off-by: Bob Copeland <[email protected]>
[also adjust gfs2/glock.c and rhashtable tests]
Signed-off-by: Johannes Berg <[email protected]>
|
|
By accident I stumbled across code that is no longer used. According
to git grep, the global functions in lib/proportions.c are not used
anywhere. This patch removes the old, unused code.
Peter Zijlstra further commented:
"Ah indeed, that got replaced with the flex proportion code a while back."
Signed-off-by: Richard Cochran <[email protected]>
Acked-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Link: http://lkml.kernel.org/r/4265b49bed713fbe3faaf8c05da0e1792f09c0b3.1459432020.git.rcochran@linutronix.de
Signed-off-by: Ingo Molnar <[email protected]>
|
|
This commit adds a new rcuperf module that carries out simple performance
tests of RCU grace periods.
Signed-off-by: Paul E. McKenney <[email protected]>
|
|
Signed-off-by: Alexander Potapenko <[email protected]>
Acked-by: Andrey Ryabinin <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Pekka Enberg <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: Joonsoo Kim <[email protected]>
Cc: Andrey Konovalov <[email protected]>
Cc: Dmitry Vyukov <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Konstantin Serebryany <[email protected]>
Cc: Dmitry Chernenkov <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Implement the stack depot and provide CONFIG_STACKDEPOT. Stack depot
will allow KASAN to store allocation/deallocation stack traces for memory
chunks. The stack traces are stored in a hash table and referenced by
handles which reside in the kasan_alloc_meta and kasan_free_meta
structures in the allocated memory chunks.
IRQ stack traces are cut below the IRQ entry point to avoid unnecessary
duplication.
Right now stackdepot support is only enabled in the SLAB allocator. Once
KASAN features in SLAB are on par with those in SLUB we can switch SLUB
to stackdepot as well, thus removing the dependency on SLUB stack
bookkeeping, which wastes a lot of memory.
This patch is based on the "mm: kasan: stack depots" patch originally
prepared by Dmitry Chernenkov.
Joonsoo has said that he plans to reuse the stackdepot code for the
mm/page_owner.c debugging facility.
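For orientation, a sketch of how a stackdepot user would save and fetch a
trace, as I read the description (signatures approximated from the text,
not quoted from the header):

  #include <linux/stackdepot.h>
  #include <linux/stacktrace.h>

  unsigned long entries[16];
  struct stack_trace trace = {
  	.entries	= entries,
  	.max_entries	= ARRAY_SIZE(entries),
  	.skip		= 2,
  };
  depot_stack_handle_t handle;

  save_stack_trace(&trace);
  handle = depot_save_stack(&trace, GFP_NOWAIT);	/* 0 on failure */

  /* later, e.g. while printing a KASAN report: */
  struct stack_trace report = {};
  depot_fetch_stack(handle, &report);
  print_stack_trace(&report, 0);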
[[email protected]: s/depot_stack_handle/depot_stack_handle_t]
[[email protected]: comment style fixes]
Signed-off-by: Alexander Potapenko <[email protected]>
Signed-off-by: Andrey Ryabinin <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Pekka Enberg <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: Joonsoo Kim <[email protected]>
Cc: Andrey Konovalov <[email protected]>
Cc: Dmitry Vyukov <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Konstantin Serebryany <[email protected]>
Cc: Dmitry Chernenkov <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
Add KASAN hooks to SLAB allocator.
This patch is based on the "mm: kasan: unified support for SLUB and SLAB
allocators" patch originally prepared by Dmitry Chernenkov.
Signed-off-by: Alexander Potapenko <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Pekka Enberg <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: Joonsoo Kim <[email protected]>
Cc: Andrey Konovalov <[email protected]>
Cc: Dmitry Vyukov <[email protected]>
Cc: Andrey Ryabinin <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Konstantin Serebryany <[email protected]>
Cc: Dmitry Chernenkov <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
This patchset implements SLAB support for KASAN.
Unlike SLUB, SLAB doesn't store allocation/deallocation stacks for heap
objects; therefore, we reimplement this feature in mm/kasan/stackdepot.c.
The intention is to ultimately switch SLUB to use this implementation as
well, which will save a lot of memory (right now SLUB bloats each object
by 256 bytes to store the allocation/deallocation stacks).
Also neither SLUB nor SLAB delay the reuse of freed memory chunks, which
is necessary for better detection of use-after-free errors. We
introduce memory quarantine (mm/kasan/quarantine.c), which allows
delayed reuse of deallocated memory.
This patch (of 7):
Rename kmalloc_large_oob_right() to kmalloc_pagealloc_oob_right(), as
the test only checks the page allocator functionality. Also reimplement
kmalloc_large_oob_right() so that the test allocates a large enough
chunk of memory that still does not trigger the page allocator fallback.
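A sketch of what the reworked test could look like; KMALLOC_MAX_CACHE_SIZE
marks the point at which kmalloc() falls back to the page allocator, and
the exact size chosen by the patch may differ:

  static noinline void __init kmalloc_large_oob_right(void)
  {
  	char *ptr;
  	/* Large enough to exercise the slab path, yet still below the
  	 * page-allocator fallback threshold. */
  	size_t size = KMALLOC_MAX_CACHE_SIZE - 256;

  	pr_info("kmalloc large allocation: out-of-bounds to right\n");
  	ptr = kmalloc(size, GFP_KERNEL);
  	if (!ptr) {
  		pr_err("Allocation failed\n");
  		return;
  	}

  	ptr[size] = 0;		/* one byte past the end: should be reported */
  	kfree(ptr);
  }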
Signed-off-by: Alexander Potapenko <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Pekka Enberg <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: Joonsoo Kim <[email protected]>
Cc: Andrey Konovalov <[email protected]>
Cc: Dmitry Vyukov <[email protected]>
Cc: Andrey Ryabinin <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Konstantin Serebryany <[email protected]>
Cc: Dmitry Chernenkov <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|
|
On parisc and metag the stack grows upwards, so for those we need to
scan the stack downwards in order to calculate how much stack a process
has used.
Tested on a 64bit parisc kernel.
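A sketch of the direction-aware scan, modelled on stack_not_used(); treat
it as an approximation of the actual change:

  static inline unsigned long stack_not_used(struct task_struct *p)
  {
  	unsigned long *n = end_of_stack(p);

  	do {	/* skip the end-of-stack canary */
  #ifdef CONFIG_STACK_GROWSUP
  		n--;	/* stack grows up: walk down from the high end */
  #else
  		n++;
  #endif
  	} while (!*n);

  #ifdef CONFIG_STACK_GROWSUP
  	return (unsigned long)end_of_stack(p) - (unsigned long)n;
  #else
  	return (unsigned long)n - (unsigned long)task_stack_page(p);
  #endif
  }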
Signed-off-by: Helge Deller <[email protected]>
|
|
The -fsanitize=* options make GCC less smart than usual and increase the
number of 'maybe-uninitialized' false positives. So this patch does two things:
* Add -Wno-maybe-uninitialized to CFLAGS_UBSAN which will disable all
such warnings for instrumented files.
* Remove CONFIG_UBSAN_SANITIZE_ALL from all[yes|mod]config builds. So
the all[yes|mod]config build goes without -fsanitize=* and still with
-Wmaybe-uninitialized.
Signed-off-by: Andrey Ryabinin <[email protected]>
Reported-by: Fengguang Wu <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
|