2019-09-13  padata, pcrypt: take CPU hotplug lock internally in padata_alloc_possible  (Daniel Jordan; 2 files changed, -12/+9)
With pcrypt's cpumask no longer used, take the CPU hotplug lock inside padata_alloc_possible. Useful later in the series for avoiding nested acquisition of the CPU hotplug lock in padata when padata_alloc_possible is allocating an unbound workqueue. Without this patch, this nested acquisition would happen later in the series:

    pcrypt_init_padata
      get_online_cpus
      alloc_padata_possible
        alloc_padata
          alloc_workqueue(WQ_UNBOUND)   // later in the series
            alloc_and_link_pwqs
              apply_wqattrs_lock
                get_online_cpus // recursive rwsem acquisition

Signed-off-by: Daniel Jordan <[email protected]> Acked-by: Steffen Klassert <[email protected]> Cc: Herbert Xu <[email protected]> Cc: Lai Jiangshan <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Tejun Heo <[email protected]> Cc: [email protected] Cc: [email protected] Signed-off-by: Herbert Xu <[email protected]>
2019-09-13  crypto: pcrypt - remove padata cpumask notifier  (Daniel Jordan; 1 file changed, -107/+18)
Now that padata_do_parallel takes care of finding an alternate callback CPU, there's no need for pcrypt's callback cpumask, so remove it and the notifier callback that keeps it in sync. Signed-off-by: Daniel Jordan <[email protected]> Acked-by: Steffen Klassert <[email protected]> Cc: Herbert Xu <[email protected]> Cc: Lai Jiangshan <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Tejun Heo <[email protected]> Cc: [email protected] Cc: [email protected] Signed-off-by: Herbert Xu <[email protected]>
2019-09-13  padata: make padata_do_parallel find alternate callback CPU  (Daniel Jordan; 3 files changed, -39/+23)
padata_do_parallel currently returns -EINVAL if the callback CPU isn't in the callback cpumask. pcrypt tries to prevent this situation by keeping its own callback cpumask in sync with padata's and checks that the callback CPU it passes to padata is valid. Make padata handle this instead. padata_do_parallel now takes a pointer to the callback CPU and updates it for the caller if an alternate CPU is used. Overall behavior in terms of which callback CPUs are chosen stays the same. Prepares for removal of the padata cpumask notifier in pcrypt, which will fix a lockdep complaint about nested acquisition of the CPU hotplug lock later in the series. Signed-off-by: Daniel Jordan <[email protected]> Acked-by: Steffen Klassert <[email protected]> Cc: Herbert Xu <[email protected]> Cc: Lai Jiangshan <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Tejun Heo <[email protected]> Cc: [email protected] Cc: [email protected] Signed-off-by: Herbert Xu <[email protected]>
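The convention described above can be sketched as follows. This is a hedged illustration only, assuming the post-patch prototype `int padata_do_parallel(struct padata_instance *, struct padata_priv *, int *cb_cpu)`; the helper name and the initial CPU choice are hypothetical.

```c
/* Sketch only: caller-side view of the described interface change. */
#include <linux/padata.h>
#include <linux/printk.h>

static int submit_parallel_job(struct padata_instance *pinst,
			       struct padata_priv *padata)
{
	int cb_cpu = 0;		/* caller's preferred callback CPU (illustrative) */
	int err;

	/* padata may substitute a CPU from its callback cpumask and
	 * report the one actually chosen back through cb_cpu. */
	err = padata_do_parallel(pinst, padata, &cb_cpu);
	if (err)
		return err;

	pr_debug("serial callback will run on CPU %d\n", cb_cpu);
	return 0;
}
```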
2019-09-13  workqueue: require CPU hotplug read exclusion for apply_workqueue_attrs  (Daniel Jordan; 1 file changed, -5/+14)
Change the calling convention for apply_workqueue_attrs to require CPU hotplug read exclusion. Avoids lockdep complaints about nested calls to get_online_cpus in a future patch where padata calls apply_workqueue_attrs when changing other CPU-hotplug-sensitive data structures with the CPU read lock already held. Signed-off-by: Daniel Jordan <[email protected]> Acked-by: Tejun Heo <[email protected]> Acked-by: Steffen Klassert <[email protected]> Cc: Herbert Xu <[email protected]> Cc: Lai Jiangshan <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: [email protected] Cc: [email protected] Signed-off-by: Herbert Xu <[email protected]>
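A minimal sketch of what the new calling convention implies for a caller; the surrounding function is hypothetical, and only the requirement stated above (hold the CPU hotplug read lock around the call) is taken from the commit text.

```c
/* Sketch only: caller-side pattern implied by requiring CPU hotplug
 * read exclusion for apply_workqueue_attrs(). */
#include <linux/cpu.h>
#include <linux/workqueue.h>

static int reconfigure_wq(struct workqueue_struct *wq,
			  struct workqueue_attrs *attrs)
{
	int ret;

	get_online_cpus();	/* CPU hotplug read lock, held by the caller */
	ret = apply_workqueue_attrs(wq, attrs);
	put_online_cpus();

	return ret;
}
```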
2019-09-13  workqueue: unconfine alloc/apply/free_workqueue_attrs()  (Daniel Jordan; 2 files changed, -3/+7)
padata will use these interfaces in a later patch, so unconfine them. Signed-off-by: Daniel Jordan <[email protected]> Acked-by: Tejun Heo <[email protected]> Acked-by: Steffen Klassert <[email protected]> Cc: Herbert Xu <[email protected]> Cc: Lai Jiangshan <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: [email protected] Cc: [email protected] Signed-off-by: Herbert Xu <[email protected]>
2019-09-13  padata: allocate workqueue internally  (Daniel Jordan; 4 files changed, -28/+24)
Move workqueue allocation inside of padata to prepare for further changes to how padata uses workqueues. Guarantees the workqueue is created with max_active=1, which padata relies on to work correctly. No functional change. Signed-off-by: Daniel Jordan <[email protected]> Acked-by: Steffen Klassert <[email protected]> Cc: Herbert Xu <[email protected]> Cc: Jonathan Corbet <[email protected]> Cc: Lai Jiangshan <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Tejun Heo <[email protected]> Cc: [email protected] Cc: [email protected] Cc: [email protected] Signed-off-by: Herbert Xu <[email protected]>
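As a hedged sketch of the behaviour described (the helper name and flags are assumptions, not the exact padata code): allocating the workqueue inside padata lets it pin max_active to 1.

```c
/* Sketch only: padata, not the caller, creates the workqueue, so it can
 * guarantee max_active == 1. Flags here are illustrative. */
#include <linux/workqueue.h>

static struct workqueue_struct *padata_create_wq(const char *name)
{
	/* max_active = 1: padata relies on strictly serialized execution
	 * per workqueue for its serialization stage to work correctly. */
	return alloc_workqueue("%s", WQ_MEM_RECLAIM | WQ_CPU_INTENSIVE, 1, name);
}
```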
2019-09-13  arm64: dts: imx8mq: Add CAAM node  (Andrey Smirnov; 1 file changed, -0/+30)
Add node for CAAM - Cryptographic Acceleration and Assurance Module. Signed-off-by: Horia Geantă <[email protected]> Signed-off-by: Andrey Smirnov <[email protected]> Cc: Cory Tusar <[email protected]> Cc: Chris Healy <[email protected]> Cc: Lucas Stach <[email protected]> Cc: Herbert Xu <[email protected]> Cc: Shawn Guo <[email protected]> Cc: Iuliana Prodan <[email protected]> Cc: [email protected] Cc: [email protected] Acked-by: Li Yang <[email protected]> Signed-off-by: Herbert Xu <[email protected]>
2019-09-09  random: Use wait_event_freezable() in add_hwgenerator_randomness()  (Stephen Boyd; 1 file changed, -7/+5)
Sebastian reports that after commit ff296293b353 ("random: Support freezable kthreads in add_hwgenerator_randomness()") we can call might_sleep() when the task state is TASK_INTERRUPTIBLE (state=1). This leads to the following warning:

    do not call blocking ops when !TASK_RUNNING; state=1 set at [<00000000349d1489>] prepare_to_wait_event+0x5a/0x180
    WARNING: CPU: 0 PID: 828 at kernel/sched/core.c:6741 __might_sleep+0x6f/0x80
    Modules linked in:
    CPU: 0 PID: 828 Comm: hwrng Not tainted 5.3.0-rc7-next-20190903+ #46
    RIP: 0010:__might_sleep+0x6f/0x80
    Call Trace:
     kthread_freezable_should_stop+0x1b/0x60
     add_hwgenerator_randomness+0xdd/0x130
     hwrng_fillfn+0xbf/0x120
     kthread+0x10c/0x140
     ret_from_fork+0x27/0x50

We shouldn't call kthread_freezable_should_stop() from deep within the wait_event code because the task state is still set as TASK_INTERRUPTIBLE instead of TASK_RUNNING and kthread_freezable_should_stop() will try to call into the freezer with the task in the wrong state. Use wait_event_freezable() instead so that it calls schedule() in the right place and tries to enter the freezer when the task state is TASK_RUNNING instead. Reported-by: Sebastian Andrzej Siewior <[email protected]> Tested-by: Sebastian Andrzej Siewior <[email protected]> Cc: Keerthy <[email protected]> Fixes: ff296293b353 ("random: Support freezable kthreads in add_hwgenerator_randomness()") Signed-off-by: Stephen Boyd <[email protected]> Signed-off-by: Herbert Xu <[email protected]>
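A minimal sketch of the pattern the fix moves to, as a hypothetical hwrng-style kthread loop (not the actual driver code): wait with wait_event_freezable() so that freezing is attempted after schedule() returns, with the task back in TASK_RUNNING.

```c
/* Sketch only: a kthread waiting for work via wait_event_freezable().
 * All identifiers here are illustrative. */
#include <linux/freezer.h>
#include <linux/kthread.h>
#include <linux/wait.h>

static DECLARE_WAIT_QUEUE_HEAD(rng_wait);
static bool rng_has_data;

static int rng_fill_thread(void *unused)
{
	while (!kthread_should_stop()) {
		/* Sleeps interruptibly and handles the freezer only once
		 * the task is running again, avoiding the might_sleep()
		 * warning described above. */
		wait_event_freezable(rng_wait,
				     rng_has_data || kthread_should_stop());
		/* ... push entropy to the input pool here ... */
	}
	return 0;
}
```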
2019-09-09  crypto: ux500 - Fix COMPILE_TEST warnings  (Herbert Xu; 2 files changed, -9/+11)
This patch fixes a number of warnings encountered when this driver is built on a 64-bit platform with COMPILE_TEST. Signed-off-by: Herbert Xu <[email protected]>
2019-09-09  crypto: x86/aes-ni - use AES library instead of single-use AES cipher  (Ard Biesheuvel; 1 file changed, -11/+6)
The RFC4106 key derivation code instantiates an AES cipher transform to encrypt only a single block before it is freed again. Switch to the new AES library which is more suitable for such use cases. Signed-off-by: Ard Biesheuvel <[email protected]> Signed-off-by: Herbert Xu <[email protected]>
2019-09-09  crypto: cavium/zip - Add missing single_release()  (Wei Yongjun; 1 file changed, -0/+3)
When using single_open() for opening, single_release() should be used instead of seq_release(), otherwise there is a memory leak. Fixes: 09ae5d37e093 ("crypto: zip - Add Compression/Decompression statistics") Cc: <[email protected]> Signed-off-by: Wei Yongjun <[email protected]> Signed-off-by: Herbert Xu <[email protected]>
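For context, a hedged sketch of the pairing the fix restores, using a generic seq_file example (identifiers are illustrative, not the cavium/zip driver's): files opened with single_open() must be released with single_release(), otherwise the per-open state allocated by single_open() leaks.

```c
/* Sketch only: the single_open()/single_release() pairing described above. */
#include <linux/fs.h>
#include <linux/module.h>
#include <linux/seq_file.h>

static int zip_stats_show(struct seq_file *s, void *unused)
{
	seq_puts(s, "example statistics\n");
	return 0;
}

static int zip_stats_open(struct inode *inode, struct file *file)
{
	return single_open(file, zip_stats_show, inode->i_private);
}

static const struct file_operations zip_stats_fops = {
	.owner   = THIS_MODULE,
	.open    = zip_stats_open,
	.read    = seq_read,
	.llseek  = seq_lseek,
	.release = single_release,	/* not seq_release(): frees single_open()'s state */
};
```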
2019-09-09  crypto: marvell - Use kzfree rather than its implementation  (zhong jiang; 1 file changed, -2/+1)
Use kzfree instead of memset() + kfree(). Signed-off-by: zhong jiang <[email protected]> Signed-off-by: Herbert Xu <[email protected]>
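A tiny generic illustration of the substitution described (not the marvell driver's actual lines): kzfree() zeroes the allocation before freeing it, replacing an open-coded memset() + kfree() pair.

```c
/* Sketch only: before and after the kind of change described above. */
#include <linux/slab.h>
#include <linux/string.h>
#include <linux/types.h>

static void drop_key_before(u8 *key, size_t len)
{
	memset(key, 0, len);	/* open-coded clearing ... */
	kfree(key);		/* ... followed by the free */
}

static void drop_key_after(u8 *key)
{
	kzfree(key);		/* clears the whole allocation, then frees it */
}
```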
2019-09-09  crypto: caam - dispose of IRQ mapping only after IRQ is freed  (Andrey Smirnov; 1 file changed, -4/+10)
With IRQ requesting being managed by devres we need to make sure that we dispose of IRQ mapping after and not before it is free'd (otherwise we'll end up with a warning from the kernel). To achieve that simply convert IRQ mapping to rely on devres as well. Fixes: f314f12db65c ("crypto: caam - convert caam_jr_init() to use devres") Signed-off-by: Andrey Smirnov <[email protected]> Cc: Chris Healy <[email protected]> Cc: Lucas Stach <[email protected]> Cc: Horia Geantă <[email protected]> Cc: Herbert Xu <[email protected]> Cc: Iuliana Prodan <[email protected]> Cc: [email protected] Cc: [email protected] Reviewed-by: Horia Geantă <[email protected]> Signed-off-by: Herbert Xu <[email protected]>
2019-09-09  crypto: caam - check irq_of_parse_and_map for errors  (Andrey Smirnov; 1 file changed, -0/+4)
irq_of_parse_and_map() will return zero in case of error, so add an error check for that. Signed-off-by: Andrey Smirnov <[email protected]> Cc: Chris Healy <[email protected]> Cc: Lucas Stach <[email protected]> Cc: Horia Geantă <[email protected]> Cc: Herbert Xu <[email protected]> Cc: Iuliana Prodan <[email protected]> Cc: [email protected] Cc: [email protected] Reviewed-by: Horia Geantă <[email protected]> Signed-off-by: Herbert Xu <[email protected]>
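A short sketch of the check being added, as a hypothetical probe fragment (not the exact caam code): irq_of_parse_and_map() reports failure by returning 0, not a negative errno.

```c
/* Sketch only: irq_of_parse_and_map() signals failure with a 0 return. */
#include <linux/errno.h>
#include <linux/of_irq.h>

static int jr_get_irq(struct device_node *np)
{
	int irq = irq_of_parse_and_map(np, 0);

	if (!irq)		/* 0 means "no mapping", not a valid IRQ number */
		return -EINVAL;

	return irq;
}
```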
2019-09-09  crypto: caam - use devres to unmap JR's registers  (Andrey Smirnov; 1 file changed, -4/+9)
Use devres to unmap memory and drop explicit de-initialization code. NOTE: There's no corresponding unmapping code in caam_jr_remove which seems like a resource leak. Signed-off-by: Andrey Smirnov <[email protected]> Cc: Chris Healy <[email protected]> Cc: Lucas Stach <[email protected]> Cc: Horia Geantă <[email protected]> Cc: Herbert Xu <[email protected]> Cc: Iuliana Prodan <[email protected]> Cc: [email protected] Cc: [email protected] Reviewed-by: Horia Geantă <[email protected]> Signed-off-by: Herbert Xu <[email protected]>
2019-09-09  crypto: caam - make sure clocks are enabled first  (Andrey Smirnov; 1 file changed, -15/+15)
In order to access an IP block's registers we need to enable the appropriate clocks first; otherwise we risk hanging the CPU. The problem becomes very apparent when trying to use the CAAM driver built as a kernel module. In that case caam_probe() gets called after clk_disable_unused(), which means all of the necessary clocks are guaranteed to be disabled. Coincidentally, this change also fixes an iomap leak introduced by an early return (instead of "goto iounmap_ctrl") in commit 41fc54afae70 ("crypto: caam - simplfy clock initialization"). Tested on ZII i.MX6Q+ RDU2. Fixes: 176435ad2ac7 ("crypto: caam - defer probing until QMan is available") Fixes: 41fc54afae70 ("crypto: caam - simplfy clock initialization") Signed-off-by: Andrey Smirnov <[email protected]> Cc: Chris Healy <[email protected]> Cc: Lucas Stach <[email protected]> Cc: Horia Geantă <[email protected]> Cc: Herbert Xu <[email protected]> Cc: Iuliana Prodan <[email protected]> Cc: [email protected] Cc: [email protected] Tested-by: Horia Geantă <[email protected]> Signed-off-by: Herbert Xu <[email protected]>
2019-09-09  crypto: arm/aes-ce - implement ciphertext stealing for CBC  (Ard Biesheuvel; 2 files changed, -17/+256)
Instead of relying on the CTS template to wrap the accelerated CBC skcipher, implement the ciphertext stealing part directly. Signed-off-by: Ard Biesheuvel <[email protected]> Signed-off-by: Herbert Xu <[email protected]>
2019-09-09  crypto: arm/aes-neonbs - implement ciphertext stealing for XTS  (Ard Biesheuvel; 2 files changed, -13/+72)
Update the AES-XTS implementation based on NEON instructions so that it can deal with inputs whose size is not a multiple of the cipher block size. This is part of the original XTS specification, but was never implemented before in the Linux kernel. Signed-off-by: Ard Biesheuvel <[email protected]> Signed-off-by: Herbert Xu <[email protected]>
2019-09-09  crypto: arm/aes-ce - implement ciphertext stealing for XTS  (Ard Biesheuvel; 2 files changed, -23/+208)
Update the AES-XTS implementation based on AES instructions so that it can deal with inputs whose size is not a multiple of the cipher block size. This is part of the original XTS specification, but was never implemented before in the Linux kernel. Signed-off-by: Ard Biesheuvel <[email protected]> Signed-off-by: Herbert Xu <[email protected]>
2019-09-09  crypto: arm64/aes-neonbs - implement ciphertext stealing for XTS  (Ard Biesheuvel; 5 files changed, -14/+110)
Update the AES-XTS implementation based on NEON instructions so that it can deal with inputs whose size is not a multiple of the cipher block size. This is part of the original XTS specification, but was never implemented before in the Linux kernel. Since the bit slicing driver is only faster if it can operate on at least 7 blocks of input at the same time, let's reuse the alternate path we are adding for CTS to process any data tail whose size is not a multiple of 128 bytes. Signed-off-by: Ard Biesheuvel <[email protected]> Signed-off-by: Herbert Xu <[email protected]>
2019-09-09  crypto: arm64/aes - implement support for XTS ciphertext stealing  (Ard Biesheuvel; 2 files changed, -30/+195)
Add the missing support for ciphertext stealing in the implementation of AES-XTS, which is part of the XTS specification but was omitted up until now due to lack of a need for it. The asm helpers are updated so they can deal with any input size, as long as the last full block and the final partial block are presented at the same time. The glue code is updated so that the common case of operating on a sector or page is mostly as before. When CTS is needed, the walk is split up into two pieces, unless the entire input is covered by a single step. Signed-off-by: Ard Biesheuvel <[email protected]> Signed-off-by: Herbert Xu <[email protected]>
2019-09-09  crypto: arm64/aes-cts-cbc - move request context data to the stack  (Ard Biesheuvel; 1 file changed, -35/+26)
Since the CTS-CBC code completes synchronously, there is no point in keeping part of the scratch data it uses in the request context, so move it to the stack instead. Signed-off-by: Ard Biesheuvel <[email protected]> Signed-off-by: Herbert Xu <[email protected]>
2019-09-09  crypto: arm64/aes-cts-cbc-ce - performance tweak  (Ard Biesheuvel; 1 file changed, -3/+2)
Optimize away one of the tbl instructions in the decryption path, which turns out to be unnecessary. Signed-off-by: Ard Biesheuvel <[email protected]> Signed-off-by: Herbert Xu <[email protected]>
2019-09-09  crypto: skcipher - add the ability to abort a skcipher walk  (Ard Biesheuvel; 1 file changed, -0/+5)
After starting a skcipher walk, the only way to ensure that all resources it has tied up are released is to complete it. In some cases, it will be useful to be able to abort a walk cleanly after it has started, so add this ability to the skcipher walk API. Signed-off-by: Ard Biesheuvel <[email protected]> Signed-off-by: Herbert Xu <[email protected]>
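The helper added here is small; as a hedged sketch based only on the description above, aborting a walk can be expressed as completing it with an error so that any resources it pinned are released. The name below is marked as hypothetical and may differ from the helper the patch actually adds.

```c
/* Sketch only: abort a started walk by completing it with an error. */
#include <crypto/internal/skcipher.h>
#include <linux/errno.h>

static inline void my_skcipher_walk_abort(struct skcipher_walk *walk)
{
	skcipher_walk_done(walk, -ECANCELED);
}
```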
2019-09-09  crypto: arm64/aes-neon - limit exposed routines if faster driver is enabled  (Ard Biesheuvel; 1 file changed, -53/+59)
The pure NEON AES implementation predates the bit-slicing one, and is generally slower, unless the algorithm in question can only execute sequentially. So advertising the skciphers that the bit-slicing driver implements as well serves no real purpose, and we can just disable them. Note that the bit-slicing driver also has a link time dependency on the pure NEON driver, for CBC encryption and for XTS tweak calculation, so we still need both drivers on systems that do not implement the Crypto Extensions. At the same time, expose those modaliases for the AES instruction based driver. This is necessary since otherwise, we may end up loading the wrong driver when any of the skciphers are instantiated before the CPU capability based module loading has completed. Finally, add the missing modalias for cts(cbc(aes)) so requests for this algorithm will autoload the correct module. Signed-off-by: Ard Biesheuvel <[email protected]> Signed-off-by: Herbert Xu <[email protected]>
2019-09-09  crypto: arm64/aes-neonbs - replace tweak mask literal with composition  (Ard Biesheuvel; 1 file changed, -6/+3)
Replace the vector load from memory sequence with a simple instruction sequence to compose the tweak vector directly. Signed-off-by: Ard Biesheuvel <[email protected]> Signed-off-by: Herbert Xu <[email protected]>
2019-09-09  crypto: arm/aes-neonbs - replace tweak mask literal with composition  (Ard Biesheuvel; 1 file changed, -5/+3)
Replace the vector load from memory sequence with a simple instruction sequence to compose the tweak vector directly. Signed-off-by: Ard Biesheuvel <[email protected]> Signed-off-by: Herbert Xu <[email protected]>
2019-09-09  crypto: arm/aes-ce - replace tweak mask literal with composition  (Ard Biesheuvel; 1 file changed, -6/+3)
Replace the vector load from memory sequence with a simple instruction sequence to compose the tweak vector directly. Signed-off-by: Ard Biesheuvel <[email protected]> Signed-off-by: Herbert Xu <[email protected]>
2019-09-09  crypto: arm/aes-ce - switch to 4x interleave  (Ard Biesheuvel; 1 file changed, -119/+144)
When the ARM AES instruction based crypto driver was introduced, there were no known implementations that could benefit from a 4-way interleave, and so a 3-way interleave was used instead. Since we have sufficient space in the SIMD register file, let's switch to a 4-way interleave to align with the 64-bit driver, and to ensure that we can reach optimum performance when running under emulation on high end 64-bit cores. Signed-off-by: Ard Biesheuvel <[email protected]> Signed-off-by: Herbert Xu <[email protected]>
2019-09-09  crypto: arm/aes-ce - yield the SIMD unit between scatterwalk steps  (Ard Biesheuvel; 2 files changed, -35/+34)
Reduce the scope of the kernel_neon_begin/end regions so that the SIMD unit is released (and thus preemption re-enabled) if the crypto operation cannot be completed in a single scatterwalk step. This avoids scheduling blackouts due to preemption being enabled for unbounded periods, resulting in a more responsive system. After this change, we can also permit the cipher_walk infrastructure to sleep, so set the 'atomic' parameter to skcipher_walk_virt() to false as well. Signed-off-by: Ard Biesheuvel <[email protected]> Signed-off-by: Herbert Xu <[email protected]>
2019-09-09  crypto: arm/aes - fix round key prototypes  (Ard Biesheuvel; 2 files changed, -29/+29)
The AES round keys are arrays of u32s in native endianness now, so update the function prototypes accordingly. Signed-off-by: Ard Biesheuvel <[email protected]> Signed-off-by: Herbert Xu <[email protected]>
2019-09-09  crypto: skcipher - Unmap pages after an external error  (Herbert Xu; 1 file changed, -19/+23)
skcipher_walk_done may be called with an error by internal or external callers. For those internal callers we shouldn't unmap pages but for external callers we must unmap any pages that are in use. This patch distinguishes between the two cases by checking whether walk->nbytes is zero or not. For internal callers, we now set walk->nbytes to zero prior to the call. For external callers, walk->nbytes has always been non-zero (as zero is used to indicate the termination of a walk). Reported-by: Ard Biesheuvel <[email protected]> Fixes: 5cde0af2a982 ("[CRYPTO] cipher: Added block cipher type") Cc: <[email protected]> Signed-off-by: Herbert Xu <[email protected]> Tested-by: Ard Biesheuvel <[email protected]> Signed-off-by: Herbert Xu <[email protected]>
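A hedged, heavily simplified sketch of the distinction described above (not the literal kernel diff): external callers reach skcipher_walk_done() with walk->nbytes non-zero, which is when in-use pages must still be unmapped, while internal callers now clear walk->nbytes before calling it.

```c
/* Sketch only: simplified shape of the walk->nbytes check described above. */
#include <crypto/internal/skcipher.h>

static int skcipher_walk_done_sketch(struct skcipher_walk *walk, int err)
{
	unsigned int n = walk->nbytes;

	if (!n && err < 0)	/* internal caller: nothing left mapped, just unwind */
		goto finish;

	/* external caller (n != 0): unmap and copy back any in-use pages
	 * here, even when err < 0 */

finish:
	/* release remaining walk resources and return the result */
	return err;
}
```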
2019-09-09  crypto: arm64/aes - Use PTR_ERR_OR_ZERO rather than its implementation.  (zhong jiang; 1 file changed, -3/+1)
PTR_ERR_OR_ZERO contains if(IS_ERR(...)) + PTR_ERR. It is better to use it directly. hence just replace it. Signed-off-by: zhong jiang <[email protected]> Acked-by: Will Deacon <[email protected]> Signed-off-by: Herbert Xu <[email protected]>
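For reference, a small generic example of the simplification described (not the arm64/aes code itself):

```c
/* Sketch only: PTR_ERR_OR_ZERO() collapses the IS_ERR()/PTR_ERR() idiom. */
#include <linux/err.h>

static int check_tfm_before(void *tfm)
{
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);
	return 0;
}

static int check_tfm_after(void *tfm)
{
	return PTR_ERR_OR_ZERO(tfm);	/* same result, one line */
}
```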
2019-09-05  crypto: sha256 - Remove sha256/224_init code duplication  (Hans de Goede; 3 files changed, -56/+30)
lib/crypto/sha256.c and include/crypto/sha256_base.h define 99% identical functions to init a sha256_state struct for sha224 or sha256 use. This commit moves the functions from lib/crypto/sha256.c to include/crypto/sha.h (making them static inline) and makes the sha224/256_base_init static inline functions from include/crypto/sha256_base.h wrappers around the now also static inline include/crypto/sha.h functions. Signed-off-by: Hans de Goede <[email protected]> Signed-off-by: Herbert Xu <[email protected]>
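A hedged sketch of the kind of shared static-inline init helper the commit describes; the function name below is hypothetical, while the sha256_state fields and SHA256_H* constants come from crypto/sha.h.

```c
/* Sketch only: a static-inline SHA-256 init helper of the kind that can be
 * shared between lib/crypto/sha256.c and crypto/sha256_base.h users. */
#include <crypto/sha.h>

static inline void my_sha256_init(struct sha256_state *sctx)
{
	sctx->state[0] = SHA256_H0;
	sctx->state[1] = SHA256_H1;
	sctx->state[2] = SHA256_H2;
	sctx->state[3] = SHA256_H3;
	sctx->state[4] = SHA256_H4;
	sctx->state[5] = SHA256_H5;
	sctx->state[6] = SHA256_H6;
	sctx->state[7] = SHA256_H7;
	sctx->count = 0;
}
```

The SHA-224 variant is identical apart from using the SHA224_H* initial values, which is why keeping two near-copies in separate files was pure duplication.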
2019-09-05  crypto: sha256 - Merge crypto/sha256.h into crypto/sha.h  (Hans de Goede; 6 files changed, -38/+24)
The generic sha256 implementation from lib/crypto/sha256.c uses data structs defined in crypto/sha.h, so let's move the function prototypes there too. Signed-off-by: Hans de Goede <[email protected]> Signed-off-by: Herbert Xu <[email protected]>
2019-09-05  crypto: n2 - Rename arrays to avoid conflict with crypto/sha256.h  (Hans de Goede; 1 file changed, -8/+8)
Rename the sha*_init arrays to n2_sha*_init so that they do not conflict with the functions declared in crypto/sha256.h. Also rename md5_init to n2_md5_init for consistency. This is a preparation patch for folding crypto/sha256.h into crypto/sha.h. Signed-off-by: Hans de Goede <[email protected]> Signed-off-by: Herbert Xu <[email protected]>
2019-09-05  crypto: chelsio - Rename arrays to avoid conflict with crypto/sha256.h  (Hans de Goede; 1 file changed, -10/+10)
Rename the sha*_init arrays to chcr_sha*_init so that they do not conflict with the functions declared in crypto/sha256.h. This is a preparation patch for folding crypto/sha256.h into crypto/sha.h. Signed-off-by: Hans de Goede <[email protected]> Signed-off-by: Herbert Xu <[email protected]>
2019-09-05  crypto: ccree - Rename arrays to avoid conflict with crypto/sha256.h  (Hans de Goede; 1 file changed, -76/+77)
Rename the algo_init arrays to cc_algo_init so that they do not conflict with the functions declared in crypto/sha256.h. This is a preparation patch for folding crypto/sha256.h into crypto/sha.h. Signed-off-by: Hans de Goede <[email protected]> Acked-by: Gilad Ben-Yossef <[email protected]> Signed-off-by: Herbert Xu <[email protected]>
2019-09-05  crypto: x86 - Rename functions to avoid conflict with crypto/sha256.h  (Hans de Goede; 1 file changed, -6/+6)
Rename static / file-local functions so that they do not conflict with the functions declared in crypto/sha256.h. This is a preparation patch for folding crypto/sha256.h into crypto/sha.h. Signed-off-by: Hans de Goede <[email protected]> Signed-off-by: Herbert Xu <[email protected]>
2019-09-05  crypto: s390 - Rename functions to avoid conflict with crypto/sha256.h  (Hans de Goede; 1 file changed, -4/+4)
Rename static / file-local functions so that they do not conflict with the functions declared in crypto/sha256.h. This is a preparation patch for folding crypto/sha256.h into crypto/sha.h. Signed-off-by: Hans de Goede <[email protected]> Signed-off-by: Herbert Xu <[email protected]>
2019-09-05  crypto: arm64 - Rename functions to avoid conflict with crypto/sha256.h  (Hans de Goede; 1 file changed, -12/+12)
Rename static / file-local functions so that they do not conflict with the functions declared in crypto/sha256.h. This is a preparation patch for folding crypto/sha256.h into crypto/sha.h. Signed-off-by: Hans de Goede <[email protected]> Signed-off-by: Herbert Xu <[email protected]>
2019-09-05  crypto: arm - Rename functions to avoid conflict with crypto/sha256.h  (Hans de Goede; 2 files changed, -16/+16)
Rename static / file-local functions so that they do not conflict with the functions declared in crypto/sha256.h. This is a preparation patch for folding crypto/sha256.h into crypto/sha.h. Signed-off-by: Hans de Goede <[email protected]> Signed-off-by: Herbert Xu <[email protected]>
2019-09-05  hwrng: timeriomem - relax check on memory resource size  (Daniel Mack; 2 files changed, -3/+3)
The timeriomem_rng driver only accesses the first 4 bytes of the given memory area and currently, it also forces that memory resource to be exactly 4 bytes in size. This, however, is problematic when used with device-trees that are generated from things like FPGA toolchains, where the minimum size of an exposed memory block may be something like 4k. Hence, let's only check for what's needed for the driver to operate properly; namely that we have enough memory available to read the random data from. Signed-off-by: Daniel Mack <[email protected]> Signed-off-by: Herbert Xu <[email protected]>
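A brief sketch of the relaxed check, as an illustrative probe fragment (not the driver's exact code): require at least 4 bytes of MMIO space rather than exactly 4.

```c
/* Sketch only: accept any memory region big enough for the 32-bit read. */
#include <linux/errno.h>
#include <linux/ioport.h>

static int check_rng_resource(struct resource *res)
{
	/* before: a region whose size was not exactly 4 bytes was rejected;
	 * after: only regions smaller than 4 bytes are rejected. */
	if (resource_size(res) < 4)
		return -EINVAL;

	return 0;
}
```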
2019-09-05  crypto: inside-secure - Added support for basic AES-CCM  (Pascal van Leeuwen; 3 files changed, -53/+249)
This patch adds support for the basic AES-CCM AEAD cipher suite. Signed-off-by: Pascal van Leeuwen <[email protected]> Signed-off-by: Herbert Xu <[email protected]>
2019-09-05  crypto: inside-secure - Added AES-OFB support  (Pascal van Leeuwen; 3 files changed, -0/+39)
This patch adds support for AES in output feedback mode (AES-OFB). Signed-off-by: Pascal van Leeuwen <[email protected]> Signed-off-by: Herbert Xu <[email protected]>
2019-09-05  crypto: inside-secure - Added AES-CFB support  (Pascal van Leeuwen; 3 files changed, -0/+39)
This patch adds support for AES in 128 bit cipher feedback mode (AES-CFB). Signed-off-by: Pascal van Leeuwen <[email protected]> Signed-off-by: Herbert Xu <[email protected]>
2019-09-05  crypto: inside-secure - Added support for basic AES-GCM  (Pascal van Leeuwen; 4 files changed, -41/+204)
This patch adds support for the basic AES-GCM AEAD cipher suite. Signed-off-by: Pascal van Leeuwen <[email protected]> Signed-off-by: Herbert Xu <[email protected]>
2019-09-05  crypto: inside-secure - Minor code cleanup and optimizations  (Pascal van Leeuwen; 1 file changed, -39/+47)
Some minor cleanup changing e.g. "if (!x) A else B" to "if (x) B else A", merging some back-to-back if's with the same condition, collapsing some back-to-back assignments to the same variable and replacing some weird assignments with proper symbolics. Signed-off-by: Pascal van Leeuwen <[email protected]> Signed-off-by: Herbert Xu <[email protected]>
2019-09-05  crypto: inside-secure - Minor optimization recognizing CTR is always AES  (Pascal van Leeuwen; 1 file changed, -11/+14)
Moved the counter mode handling code to the front, as it doesn't depend on the rest of the code being executed; it can just do its thing and exit. Signed-off-by: Pascal van Leeuwen <[email protected]> Signed-off-by: Herbert Xu <[email protected]>
2019-09-05  crypto: inside-secure - Made .cra_priority value a define  (Pascal van Leeuwen; 3 files changed, -31/+34)
Instead of having a fixed value (of 300) all over the place, the value for .cra_priority is now made into a define (SAFEXCEL_CRA_PRIORITY). This makes it easier to play with, e.g. during development. Signed-off-by: Pascal van Leeuwen <[email protected]> Signed-off-by: Herbert Xu <[email protected]>
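As a small illustration of the change: the define name and the value 300 come from the commit text above, while the example algorithm template around it is a generic, hypothetical stand-in for the driver's actual definitions.

```c
/* Sketch only: a single define replaces the literal 300 used throughout. */
#include <crypto/skcipher.h>

#define SAFEXCEL_CRA_PRIORITY	300

static struct skcipher_alg safexcel_example_alg = {
	.base = {
		.cra_name     = "cbc(aes)",
		.cra_priority = SAFEXCEL_CRA_PRIORITY,	/* was: .cra_priority = 300 */
	},
	/* ... remaining fields elided ... */
};
```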