Merge tag 'libcrypto-updates-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiggers/linux

Pull crypto library updates from Eric Biggers:
 "This is the main crypto library pull request for 6.17. The main focus
  this cycle is on reorganizing the SHA-1 and SHA-2 code, providing
  high-quality library APIs for SHA-1 and SHA-2 including HMAC support,
  and establishing conventions for lib/crypto/ going forward:

   - Migrate the SHA-1 and SHA-512 code (and also SHA-384 which shares
     most of the SHA-512 code) into lib/crypto/. This includes both the
     generic and architecture-optimized code. Greatly simplify how the
     architecture-optimized code is integrated. Add an easy-to-use
     library API for each SHA variant, including HMAC support (see
     the usage sketch after this list). Finally,
     reimplement the crypto_shash support on top of the library API.

   - Apply the same reorganization to the SHA-256 code (and also SHA-224
     which shares most of the SHA-256 code). This is a somewhat smaller
     change, due to my earlier work on SHA-256. But this brings in all
     the same additional improvements that I made for SHA-1 and SHA-512.
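
  As a concrete example, here is a minimal usage sketch of the new
  library API (SHA-256 shown; the ctx/init/update/final names match
  <crypto/sha2.h> from this series, but treat the exact signatures
  as illustrative rather than verbatim):

      #include <crypto/sha2.h>

      static void sha256_api_demo(const u8 *msg, size_t msg_len,
                                  const u8 *key, size_t key_len)
      {
              u8 digest[SHA256_DIGEST_SIZE], mac[SHA256_DIGEST_SIZE];
              struct sha256_ctx ctx;

              sha256(msg, msg_len, digest);       /* one-shot hash */

              sha256_init(&ctx);                  /* incremental hash */
              sha256_update(&ctx, msg, msg_len);
              sha256_final(&ctx, digest);

              /* one-shot HMAC keyed with a raw (variable-length) key */
              hmac_sha256_usingrawkey(key, key_len, msg, msg_len, mac);
      }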

  There are also some smaller changes:

   - Move the architecture-optimized ChaCha, Poly1305, and BLAKE2s code
     from arch/$(SRCARCH)/lib/crypto/ to lib/crypto/$(SRCARCH)/. For
     these algorithms it's just a move, not a full reorganization yet.

   - Fix the MIPS chacha-core.S to build with the clang assembler.

   - Fix the Poly1305 functions to work in all contexts (a rough
     usage sketch follows this list).

   - Fix a performance regression in the x86_64 Poly1305 code.

   - Clean up the x86_64 SHA-NI optimized SHA-1 assembly code.
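
  For context, the Poly1305 library functions that the fix concerns
  follow the usual init/update/final shape; a rough sketch, with the
  <crypto/poly1305.h> signatures as best understood (the fix is what
  makes calls like these safe regardless of calling context):

      #include <crypto/poly1305.h>

      static void poly1305_demo(const u8 key[POLY1305_KEY_SIZE],
                                const u8 *msg, unsigned int msg_len)
      {
              struct poly1305_desc_ctx desc;
              u8 tag[POLY1305_DIGEST_SIZE];

              poly1305_init(&desc, key);   /* key must be a one-time key */
              poly1305_update(&desc, msg, msg_len);
              poly1305_final(&desc, tag);  /* 16-byte authenticator */
      }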

  Note that since the new organization of the SHA code is much simpler,
  the diffstat of this pull request is negative, despite the addition of
  new fully-documented library APIs for multiple SHA and HMAC-SHA
  variants.

  These APIs will allow further simplifications across the kernel as
  users start using them instead of the old-school crypto API. (I've
  already written a lot of such conversion patches, removing over 1000
  more lines of code. But most of those will target 6.18 or later.)"
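
To illustrate the kind of simplification that last paragraph anticipates,
here is a hedged before/after sketch of a caller computing a one-shot
SHA-256 digest (digest_old()/digest_new() and their arguments are
invented for illustration; error handling abridged):

	/* Before: the old-school crypto_shash API */
	#include <crypto/hash.h>

	static int digest_old(const u8 *buf, unsigned int len, u8 *out)
	{
		struct crypto_shash *tfm = crypto_alloc_shash("sha256", 0, 0);
		int err;

		if (IS_ERR(tfm))
			return PTR_ERR(tfm);
		err = crypto_shash_tfm_digest(tfm, buf, len, out);
		crypto_free_shash(tfm);
		return err;
	}

	/* After: the library API - no allocation and no failure path */
	#include <crypto/sha2.h>

	static void digest_new(const u8 *buf, size_t len,
			       u8 out[SHA256_DIGEST_SIZE])
	{
		sha256(buf, len, out);
	}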

* tag 'libcrypto-updates-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiggers/linux: (67 commits)
  lib/crypto: arm64/sha512-ce: Drop compatibility macros for older binutils
  lib/crypto: x86/sha1-ni: Convert to use rounds macros
  lib/crypto: x86/sha1-ni: Minor optimizations and cleanup
  crypto: sha1 - Remove sha1_base.h
  lib/crypto: x86/sha1: Migrate optimized code into library
  lib/crypto: sparc/sha1: Migrate optimized code into library
  lib/crypto: s390/sha1: Migrate optimized code into library
  lib/crypto: powerpc/sha1: Migrate optimized code into library
  lib/crypto: mips/sha1: Migrate optimized code into library
  lib/crypto: arm64/sha1: Migrate optimized code into library
  lib/crypto: arm/sha1: Migrate optimized code into library
  crypto: sha1 - Use same state format as legacy drivers
  crypto: sha1 - Wrap library and add HMAC support
  lib/crypto: sha1: Add HMAC support
  lib/crypto: sha1: Add SHA-1 library functions
  lib/crypto: sha1: Rename sha1_init() to sha1_init_raw()
  crypto: x86/sha1 - Rename conflicting symbol
  lib/crypto: sha2: Add hmac_sha*_init_usingrawkey()
  lib/crypto: arm/poly1305: Remove unneeded empty weak function
  lib/crypto: x86/poly1305: Fix performance regression on short messages
  ...
Merged by Linus Torvalds, 2025-07-28 17:58:52 -07:00
commit 13150742b0
232 changed files with 4341 additions and 4819 deletions

@@ -6412,7 +6412,6 @@ L: linux-crypto@vger.kernel.org
S: Maintained
T: git https://git.kernel.org/pub/scm/linux/kernel/git/ebiggers/linux.git libcrypto-next
T: git https://git.kernel.org/pub/scm/linux/kernel/git/ebiggers/linux.git libcrypto-fixes
F: arch/*/lib/crypto/
F: lib/crypto/
CRYPTO SPEED TEST COMPARE

@@ -363,8 +363,6 @@ CONFIG_CRYPTO_USER_API_HASH=m
CONFIG_CRYPTO_USER_API_SKCIPHER=m
CONFIG_CRYPTO_USER_API_RNG=m
CONFIG_CRYPTO_USER_API_AEAD=m
CONFIG_CRYPTO_SHA1_ARM_NEON=m
CONFIG_CRYPTO_SHA512_ARM=m
CONFIG_CRYPTO_AES_ARM_BS=m
CONFIG_CRYPTO_CHACHA20_NEON=m
CONFIG_CRYPTO_DEV_EXYNOS_RNG=y

@@ -98,9 +98,6 @@ CONFIG_CRYPTO_SELFTESTS=y
CONFIG_CRYPTO_AES=y
CONFIG_CRYPTO_SEQIV=m
CONFIG_CRYPTO_GHASH_ARM_CE=m
CONFIG_CRYPTO_SHA1_ARM_NEON=m
CONFIG_CRYPTO_SHA1_ARM_CE=m
CONFIG_CRYPTO_SHA512_ARM=m
CONFIG_CRYPTO_AES_ARM=m
CONFIG_CRYPTO_AES_ARM_BS=m
CONFIG_CRYPTO_AES_ARM_CE=m

@@ -1280,9 +1280,6 @@ CONFIG_CRYPTO_USER_API_SKCIPHER=m
CONFIG_CRYPTO_USER_API_RNG=m
CONFIG_CRYPTO_USER_API_AEAD=m
CONFIG_CRYPTO_GHASH_ARM_CE=m
CONFIG_CRYPTO_SHA1_ARM_NEON=m
CONFIG_CRYPTO_SHA1_ARM_CE=m
CONFIG_CRYPTO_SHA512_ARM=m
CONFIG_CRYPTO_AES_ARM=m
CONFIG_CRYPTO_AES_ARM_BS=m
CONFIG_CRYPTO_AES_ARM_CE=m

@@ -704,8 +704,6 @@ CONFIG_NLS_ISO8859_1=y
CONFIG_SECURITY=y
CONFIG_CRYPTO_MICHAEL_MIC=y
CONFIG_CRYPTO_GHASH_ARM_CE=m
CONFIG_CRYPTO_SHA1_ARM_NEON=m
CONFIG_CRYPTO_SHA512_ARM=m
CONFIG_CRYPTO_AES_ARM=m
CONFIG_CRYPTO_AES_ARM_BS=m
CONFIG_CRYPTO_CHACHA20_NEON=m

@@ -658,8 +658,6 @@ CONFIG_CRYPTO_ANUBIS=m
CONFIG_CRYPTO_XCBC=m
CONFIG_CRYPTO_DEFLATE=y
CONFIG_CRYPTO_LZO=y
CONFIG_CRYPTO_SHA1_ARM=m
CONFIG_CRYPTO_SHA512_ARM=m
CONFIG_CRYPTO_AES_ARM=m
CONFIG_FONTS=y
CONFIG_FONT_8x8=y

@@ -62,47 +62,6 @@ config CRYPTO_BLAKE2B_NEON
much faster than the SHA-2 family and slightly faster than
SHA-1.
config CRYPTO_SHA1_ARM
tristate "Hash functions: SHA-1"
select CRYPTO_SHA1
select CRYPTO_HASH
help
SHA-1 secure hash algorithm (FIPS 180)
Architecture: arm
config CRYPTO_SHA1_ARM_NEON
tristate "Hash functions: SHA-1 (NEON)"
depends on KERNEL_MODE_NEON
select CRYPTO_SHA1_ARM
select CRYPTO_SHA1
select CRYPTO_HASH
help
SHA-1 secure hash algorithm (FIPS 180)
Architecture: arm using
- NEON (Advanced SIMD) extensions
config CRYPTO_SHA1_ARM_CE
tristate "Hash functions: SHA-1 (ARMv8 Crypto Extensions)"
depends on KERNEL_MODE_NEON
select CRYPTO_SHA1_ARM
select CRYPTO_HASH
help
SHA-1 secure hash algorithm (FIPS 180)
Architecture: arm using ARMv8 Crypto Extensions
config CRYPTO_SHA512_ARM
tristate "Hash functions: SHA-384 and SHA-512 (NEON)"
select CRYPTO_HASH
depends on !CPU_V7M
help
SHA-384 and SHA-512 secure hash algorithms (FIPS 180)
Architecture: arm using
- NEON (Advanced SIMD) extensions
config CRYPTO_AES_ARM
tristate "Ciphers: AES"
select CRYPTO_ALGAPI

@@ -5,38 +5,17 @@
obj-$(CONFIG_CRYPTO_AES_ARM) += aes-arm.o
obj-$(CONFIG_CRYPTO_AES_ARM_BS) += aes-arm-bs.o
obj-$(CONFIG_CRYPTO_SHA1_ARM) += sha1-arm.o
obj-$(CONFIG_CRYPTO_SHA1_ARM_NEON) += sha1-arm-neon.o
obj-$(CONFIG_CRYPTO_SHA512_ARM) += sha512-arm.o
obj-$(CONFIG_CRYPTO_BLAKE2B_NEON) += blake2b-neon.o
obj-$(CONFIG_CRYPTO_NHPOLY1305_NEON) += nhpoly1305-neon.o
obj-$(CONFIG_CRYPTO_CURVE25519_NEON) += curve25519-neon.o
obj-$(CONFIG_CRYPTO_AES_ARM_CE) += aes-arm-ce.o
obj-$(CONFIG_CRYPTO_SHA1_ARM_CE) += sha1-arm-ce.o
obj-$(CONFIG_CRYPTO_GHASH_ARM_CE) += ghash-arm-ce.o
aes-arm-y := aes-cipher-core.o aes-cipher-glue.o
aes-arm-bs-y := aes-neonbs-core.o aes-neonbs-glue.o
sha1-arm-y := sha1-armv4-large.o sha1_glue.o
sha1-arm-neon-y := sha1-armv7-neon.o sha1_neon_glue.o
sha512-arm-neon-$(CONFIG_KERNEL_MODE_NEON) := sha512-neon-glue.o
sha512-arm-y := sha512-core.o sha512-glue.o $(sha512-arm-neon-y)
blake2b-neon-y := blake2b-neon-core.o blake2b-neon-glue.o
sha1-arm-ce-y := sha1-ce-core.o sha1-ce-glue.o
aes-arm-ce-y := aes-ce-core.o aes-ce-glue.o
ghash-arm-ce-y := ghash-ce-core.o ghash-ce-glue.o
nhpoly1305-neon-y := nh-neon-core.o nhpoly1305-neon-glue.o
curve25519-neon-y := curve25519-core.o curve25519-glue.o
quiet_cmd_perl = PERL $@
cmd_perl = $(PERL) $(<) > $(@)
$(obj)/%-core.S: $(src)/%-armv4.pl
$(call cmd,perl)
clean-files += sha512-core.S
aflags-thumb2-$(CONFIG_THUMB2_KERNEL) := -U__thumb2__ -D__thumb2__=1
AFLAGS_sha512-core.o += $(aflags-thumb2-y)

@@ -1,72 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* sha1-ce-glue.c - SHA-1 secure hash using ARMv8 Crypto Extensions
*
* Copyright (C) 2015 Linaro Ltd <ard.biesheuvel@linaro.org>
*/
#include <asm/neon.h>
#include <crypto/internal/hash.h>
#include <crypto/sha1.h>
#include <crypto/sha1_base.h>
#include <linux/cpufeature.h>
#include <linux/kernel.h>
#include <linux/module.h>
MODULE_DESCRIPTION("SHA1 secure hash using ARMv8 Crypto Extensions");
MODULE_AUTHOR("Ard Biesheuvel <ard.biesheuvel@linaro.org>");
MODULE_LICENSE("GPL v2");
asmlinkage void sha1_ce_transform(struct sha1_state *sst, u8 const *src,
int blocks);
static int sha1_ce_update(struct shash_desc *desc, const u8 *data,
unsigned int len)
{
int remain;
kernel_neon_begin();
remain = sha1_base_do_update_blocks(desc, data, len, sha1_ce_transform);
kernel_neon_end();
return remain;
}
static int sha1_ce_finup(struct shash_desc *desc, const u8 *data,
unsigned int len, u8 *out)
{
kernel_neon_begin();
sha1_base_do_finup(desc, data, len, sha1_ce_transform);
kernel_neon_end();
return sha1_base_finish(desc, out);
}
static struct shash_alg alg = {
.init = sha1_base_init,
.update = sha1_ce_update,
.finup = sha1_ce_finup,
.descsize = SHA1_STATE_SIZE,
.digestsize = SHA1_DIGEST_SIZE,
.base = {
.cra_name = "sha1",
.cra_driver_name = "sha1-ce",
.cra_priority = 200,
.cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY,
.cra_blocksize = SHA1_BLOCK_SIZE,
.cra_module = THIS_MODULE,
}
};
static int __init sha1_ce_mod_init(void)
{
return crypto_register_shash(&alg);
}
static void __exit sha1_ce_mod_fini(void)
{
crypto_unregister_shash(&alg);
}
module_cpu_feature_match(SHA1, sha1_ce_mod_init);
module_exit(sha1_ce_mod_fini);

@@ -1,75 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* Cryptographic API.
* Glue code for the SHA1 Secure Hash Algorithm assembler implementation
*
* This file is based on sha1_generic.c and sha1_ssse3_glue.c
*
* Copyright (c) Alan Smithee.
* Copyright (c) Andrew McDonald <andrew@mcdonald.org.uk>
* Copyright (c) Jean-Francois Dive <jef@linuxbe.org>
* Copyright (c) Mathias Krause <minipli@googlemail.com>
*/
#include <crypto/internal/hash.h>
#include <crypto/sha1.h>
#include <crypto/sha1_base.h>
#include <linux/kernel.h>
#include <linux/module.h>
asmlinkage void sha1_block_data_order(struct sha1_state *digest,
const u8 *data, int rounds);
static int sha1_update_arm(struct shash_desc *desc, const u8 *data,
unsigned int len)
{
/* make sure signature matches sha1_block_fn() */
BUILD_BUG_ON(offsetof(struct sha1_state, state) != 0);
return sha1_base_do_update_blocks(desc, data, len,
sha1_block_data_order);
}
static int sha1_finup_arm(struct shash_desc *desc, const u8 *data,
unsigned int len, u8 *out)
{
sha1_base_do_finup(desc, data, len, sha1_block_data_order);
return sha1_base_finish(desc, out);
}
static struct shash_alg alg = {
.digestsize = SHA1_DIGEST_SIZE,
.init = sha1_base_init,
.update = sha1_update_arm,
.finup = sha1_finup_arm,
.descsize = SHA1_STATE_SIZE,
.base = {
.cra_name = "sha1",
.cra_driver_name= "sha1-asm",
.cra_priority = 150,
.cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY,
.cra_blocksize = SHA1_BLOCK_SIZE,
.cra_module = THIS_MODULE,
}
};
static int __init sha1_mod_init(void)
{
return crypto_register_shash(&alg);
}
static void __exit sha1_mod_fini(void)
{
crypto_unregister_shash(&alg);
}
module_init(sha1_mod_init);
module_exit(sha1_mod_fini);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("SHA1 Secure Hash Algorithm (ARM)");
MODULE_ALIAS_CRYPTO("sha1");
MODULE_AUTHOR("David McCullough <ucdevel@gmail.com>");

@@ -1,83 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* Glue code for the SHA1 Secure Hash Algorithm assembler implementation using
* ARM NEON instructions.
*
* Copyright © 2014 Jussi Kivilinna <jussi.kivilinna@iki.fi>
*
* This file is based on sha1_generic.c and sha1_ssse3_glue.c:
* Copyright (c) Alan Smithee.
* Copyright (c) Andrew McDonald <andrew@mcdonald.org.uk>
* Copyright (c) Jean-Francois Dive <jef@linuxbe.org>
* Copyright (c) Mathias Krause <minipli@googlemail.com>
* Copyright (c) Chandramouli Narayanan <mouli@linux.intel.com>
*/
#include <asm/neon.h>
#include <crypto/internal/hash.h>
#include <crypto/sha1.h>
#include <crypto/sha1_base.h>
#include <linux/kernel.h>
#include <linux/module.h>
asmlinkage void sha1_transform_neon(struct sha1_state *state_h,
const u8 *data, int rounds);
static int sha1_neon_update(struct shash_desc *desc, const u8 *data,
unsigned int len)
{
int remain;
kernel_neon_begin();
remain = sha1_base_do_update_blocks(desc, data, len,
sha1_transform_neon);
kernel_neon_end();
return remain;
}
static int sha1_neon_finup(struct shash_desc *desc, const u8 *data,
unsigned int len, u8 *out)
{
kernel_neon_begin();
sha1_base_do_finup(desc, data, len, sha1_transform_neon);
kernel_neon_end();
return sha1_base_finish(desc, out);
}
static struct shash_alg alg = {
.digestsize = SHA1_DIGEST_SIZE,
.init = sha1_base_init,
.update = sha1_neon_update,
.finup = sha1_neon_finup,
.descsize = SHA1_STATE_SIZE,
.base = {
.cra_name = "sha1",
.cra_driver_name = "sha1-neon",
.cra_priority = 250,
.cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY,
.cra_blocksize = SHA1_BLOCK_SIZE,
.cra_module = THIS_MODULE,
}
};
static int __init sha1_neon_mod_init(void)
{
if (!cpu_has_neon())
return -ENODEV;
return crypto_register_shash(&alg);
}
static void __exit sha1_neon_mod_fini(void)
{
crypto_unregister_shash(&alg);
}
module_init(sha1_neon_mod_init);
module_exit(sha1_neon_mod_fini);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("SHA1 Secure Hash Algorithm, NEON accelerated");
MODULE_ALIAS_CRYPTO("sha1");

@@ -1,110 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* sha512-glue.c - accelerated SHA-384/512 for ARM
*
* Copyright (C) 2015 Linaro Ltd <ard.biesheuvel@linaro.org>
*/
#include <asm/hwcap.h>
#include <asm/neon.h>
#include <crypto/internal/hash.h>
#include <crypto/sha2.h>
#include <crypto/sha512_base.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include "sha512.h"
MODULE_DESCRIPTION("Accelerated SHA-384/SHA-512 secure hash for ARM");
MODULE_AUTHOR("Ard Biesheuvel <ard.biesheuvel@linaro.org>");
MODULE_LICENSE("GPL v2");
MODULE_ALIAS_CRYPTO("sha384");
MODULE_ALIAS_CRYPTO("sha512");
MODULE_ALIAS_CRYPTO("sha384-arm");
MODULE_ALIAS_CRYPTO("sha512-arm");
asmlinkage void sha512_block_data_order(struct sha512_state *state,
u8 const *src, int blocks);
static int sha512_arm_update(struct shash_desc *desc, const u8 *data,
unsigned int len)
{
return sha512_base_do_update_blocks(desc, data, len,
sha512_block_data_order);
}
static int sha512_arm_finup(struct shash_desc *desc, const u8 *data,
unsigned int len, u8 *out)
{
sha512_base_do_finup(desc, data, len, sha512_block_data_order);
return sha512_base_finish(desc, out);
}
static struct shash_alg sha512_arm_algs[] = { {
.init = sha384_base_init,
.update = sha512_arm_update,
.finup = sha512_arm_finup,
.descsize = SHA512_STATE_SIZE,
.digestsize = SHA384_DIGEST_SIZE,
.base = {
.cra_name = "sha384",
.cra_driver_name = "sha384-arm",
.cra_priority = 250,
.cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY |
CRYPTO_AHASH_ALG_FINUP_MAX,
.cra_blocksize = SHA512_BLOCK_SIZE,
.cra_module = THIS_MODULE,
}
}, {
.init = sha512_base_init,
.update = sha512_arm_update,
.finup = sha512_arm_finup,
.descsize = SHA512_STATE_SIZE,
.digestsize = SHA512_DIGEST_SIZE,
.base = {
.cra_name = "sha512",
.cra_driver_name = "sha512-arm",
.cra_priority = 250,
.cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY |
CRYPTO_AHASH_ALG_FINUP_MAX,
.cra_blocksize = SHA512_BLOCK_SIZE,
.cra_module = THIS_MODULE,
}
} };
static int __init sha512_arm_mod_init(void)
{
int err;
err = crypto_register_shashes(sha512_arm_algs,
ARRAY_SIZE(sha512_arm_algs));
if (err)
return err;
if (IS_ENABLED(CONFIG_KERNEL_MODE_NEON) && cpu_has_neon()) {
err = crypto_register_shashes(sha512_neon_algs,
ARRAY_SIZE(sha512_neon_algs));
if (err)
goto err_unregister;
}
return 0;
err_unregister:
crypto_unregister_shashes(sha512_arm_algs,
ARRAY_SIZE(sha512_arm_algs));
return err;
}
static void __exit sha512_arm_mod_fini(void)
{
crypto_unregister_shashes(sha512_arm_algs,
ARRAY_SIZE(sha512_arm_algs));
if (IS_ENABLED(CONFIG_KERNEL_MODE_NEON) && cpu_has_neon())
crypto_unregister_shashes(sha512_neon_algs,
ARRAY_SIZE(sha512_neon_algs));
}
module_init(sha512_arm_mod_init);
module_exit(sha512_arm_mod_fini);

@@ -1,75 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* sha512-neon-glue.c - accelerated SHA-384/512 for ARM NEON
*
* Copyright (C) 2015 Linaro Ltd <ard.biesheuvel@linaro.org>
*/
#include <asm/neon.h>
#include <crypto/internal/hash.h>
#include <crypto/sha2.h>
#include <crypto/sha512_base.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include "sha512.h"
MODULE_ALIAS_CRYPTO("sha384-neon");
MODULE_ALIAS_CRYPTO("sha512-neon");
asmlinkage void sha512_block_data_order_neon(struct sha512_state *state,
const u8 *src, int blocks);
static int sha512_neon_update(struct shash_desc *desc, const u8 *data,
unsigned int len)
{
int remain;
kernel_neon_begin();
remain = sha512_base_do_update_blocks(desc, data, len,
sha512_block_data_order_neon);
kernel_neon_end();
return remain;
}
static int sha512_neon_finup(struct shash_desc *desc, const u8 *data,
unsigned int len, u8 *out)
{
kernel_neon_begin();
sha512_base_do_finup(desc, data, len, sha512_block_data_order_neon);
kernel_neon_end();
return sha512_base_finish(desc, out);
}
struct shash_alg sha512_neon_algs[] = { {
.init = sha384_base_init,
.update = sha512_neon_update,
.finup = sha512_neon_finup,
.descsize = SHA512_STATE_SIZE,
.digestsize = SHA384_DIGEST_SIZE,
.base = {
.cra_name = "sha384",
.cra_driver_name = "sha384-neon",
.cra_priority = 300,
.cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY |
CRYPTO_AHASH_ALG_FINUP_MAX,
.cra_blocksize = SHA384_BLOCK_SIZE,
.cra_module = THIS_MODULE,
}
}, {
.init = sha512_base_init,
.update = sha512_neon_update,
.finup = sha512_neon_finup,
.descsize = SHA512_STATE_SIZE,
.digestsize = SHA512_DIGEST_SIZE,
.base = {
.cra_name = "sha512",
.cra_driver_name = "sha512-neon",
.cra_priority = 300,
.cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY |
CRYPTO_AHASH_ALG_FINUP_MAX,
.cra_blocksize = SHA512_BLOCK_SIZE,
.cra_module = THIS_MODULE,
}
} };

@@ -1,3 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
extern struct shash_alg sha512_neon_algs[2];

arch/arm/lib/.gitignore (new file)

@@ -0,0 +1,4 @@
# SPDX-License-Identifier: GPL-2.0-only
# This now-removed directory used to contain generated files.
/crypto/

@@ -5,8 +5,6 @@
# Copyright (C) 1995-2000 Russell King
#
obj-y += crypto/
lib-y := changebit.o csumipv6.o csumpartial.o \
csumpartialcopy.o csumpartialcopyuser.o clearbit.o \
delay.o delay-loop.o findbit.o memchr.o memcpy.o \

@@ -1,64 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* SHA-256 optimized for ARM
*
* Copyright 2025 Google LLC
*/
#include <asm/neon.h>
#include <crypto/internal/sha2.h>
#include <linux/kernel.h>
#include <linux/module.h>
asmlinkage void sha256_blocks_arch(u32 state[SHA256_STATE_WORDS],
const u8 *data, size_t nblocks);
EXPORT_SYMBOL_GPL(sha256_blocks_arch);
asmlinkage void sha256_block_data_order_neon(u32 state[SHA256_STATE_WORDS],
const u8 *data, size_t nblocks);
asmlinkage void sha256_ce_transform(u32 state[SHA256_STATE_WORDS],
const u8 *data, size_t nblocks);
static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_neon);
static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_ce);
void sha256_blocks_simd(u32 state[SHA256_STATE_WORDS],
const u8 *data, size_t nblocks)
{
if (IS_ENABLED(CONFIG_KERNEL_MODE_NEON) &&
static_branch_likely(&have_neon)) {
kernel_neon_begin();
if (static_branch_likely(&have_ce))
sha256_ce_transform(state, data, nblocks);
else
sha256_block_data_order_neon(state, data, nblocks);
kernel_neon_end();
} else {
sha256_blocks_arch(state, data, nblocks);
}
}
EXPORT_SYMBOL_GPL(sha256_blocks_simd);
bool sha256_is_arch_optimized(void)
{
/* We always can use at least the ARM scalar implementation. */
return true;
}
EXPORT_SYMBOL_GPL(sha256_is_arch_optimized);
static int __init sha256_arm_mod_init(void)
{
if (IS_ENABLED(CONFIG_KERNEL_MODE_NEON) && (elf_hwcap & HWCAP_NEON)) {
static_branch_enable(&have_neon);
if (elf_hwcap2 & HWCAP2_SHA2)
static_branch_enable(&have_ce);
}
return 0;
}
subsys_initcall(sha256_arm_mod_init);
static void __exit sha256_arm_mod_exit(void)
{
}
module_exit(sha256_arm_mod_exit);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("SHA-256 optimized for ARM");

@@ -1744,8 +1744,6 @@ CONFIG_CRYPTO_MICHAEL_MIC=m
CONFIG_CRYPTO_ANSI_CPRNG=y
CONFIG_CRYPTO_USER_API_RNG=m
CONFIG_CRYPTO_GHASH_ARM64_CE=y
CONFIG_CRYPTO_SHA1_ARM64_CE=y
CONFIG_CRYPTO_SHA512_ARM64_CE=m
CONFIG_CRYPTO_SHA3_ARM64=m
CONFIG_CRYPTO_SM3_ARM64_CE=m
CONFIG_CRYPTO_AES_ARM64_CE_BLK=y

@@ -25,36 +25,6 @@ config CRYPTO_NHPOLY1305_NEON
Architecture: arm64 using:
- NEON (Advanced SIMD) extensions
config CRYPTO_SHA1_ARM64_CE
tristate "Hash functions: SHA-1 (ARMv8 Crypto Extensions)"
depends on KERNEL_MODE_NEON
select CRYPTO_HASH
select CRYPTO_SHA1
help
SHA-1 secure hash algorithm (FIPS 180)
Architecture: arm64 using:
- ARMv8 Crypto Extensions
config CRYPTO_SHA512_ARM64
tristate "Hash functions: SHA-384 and SHA-512"
select CRYPTO_HASH
help
SHA-384 and SHA-512 secure hash algorithms (FIPS 180)
Architecture: arm64
config CRYPTO_SHA512_ARM64_CE
tristate "Hash functions: SHA-384 and SHA-512 (ARMv8 Crypto Extensions)"
depends on KERNEL_MODE_NEON
select CRYPTO_HASH
select CRYPTO_SHA512_ARM64
help
SHA-384 and SHA-512 secure hash algorithms (FIPS 180)
Architecture: arm64 using:
- ARMv8 Crypto Extensions
config CRYPTO_SHA3_ARM64
tristate "Hash functions: SHA-3 (ARMv8.2 Crypto Extensions)"
depends on KERNEL_MODE_NEON

@@ -5,12 +5,6 @@
# Copyright (C) 2014 Linaro Ltd <ard.biesheuvel@linaro.org>
#
obj-$(CONFIG_CRYPTO_SHA1_ARM64_CE) += sha1-ce.o
sha1-ce-y := sha1-ce-glue.o sha1-ce-core.o
obj-$(CONFIG_CRYPTO_SHA512_ARM64_CE) += sha512-ce.o
sha512-ce-y := sha512-ce-glue.o sha512-ce-core.o
obj-$(CONFIG_CRYPTO_SHA3_ARM64) += sha3-ce.o
sha3-ce-y := sha3-ce-glue.o sha3-ce-core.o
@@ -53,9 +47,6 @@ aes-ce-blk-y := aes-glue-ce.o aes-ce.o
obj-$(CONFIG_CRYPTO_AES_ARM64_NEON_BLK) += aes-neon-blk.o
aes-neon-blk-y := aes-glue-neon.o aes-neon.o
obj-$(CONFIG_CRYPTO_SHA512_ARM64) += sha512-arm64.o
sha512-arm64-y := sha512-glue.o sha512-core.o
obj-$(CONFIG_CRYPTO_NHPOLY1305_NEON) += nhpoly1305-neon.o
nhpoly1305-neon-y := nh-neon-core.o nhpoly1305-neon-glue.o
@@ -64,11 +55,3 @@ aes-arm64-y := aes-cipher-core.o aes-cipher-glue.o
obj-$(CONFIG_CRYPTO_AES_ARM64_BS) += aes-neon-bs.o
aes-neon-bs-y := aes-neonbs-core.o aes-neonbs-glue.o
quiet_cmd_perlasm = PERLASM $@
cmd_perlasm = $(PERL) $(<) void $(@)
$(obj)/sha512-core.S: $(src)/../lib/crypto/sha2-armv8.pl
$(call cmd,perlasm)
clean-files += sha512-core.S

@@ -1,118 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* sha1-ce-glue.c - SHA-1 secure hash using ARMv8 Crypto Extensions
*
* Copyright (C) 2014 - 2017 Linaro Ltd <ard.biesheuvel@linaro.org>
*/
#include <asm/neon.h>
#include <asm/simd.h>
#include <crypto/internal/hash.h>
#include <crypto/internal/simd.h>
#include <crypto/sha1.h>
#include <crypto/sha1_base.h>
#include <linux/cpufeature.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/string.h>
MODULE_DESCRIPTION("SHA1 secure hash using ARMv8 Crypto Extensions");
MODULE_AUTHOR("Ard Biesheuvel <ard.biesheuvel@linaro.org>");
MODULE_LICENSE("GPL v2");
MODULE_ALIAS_CRYPTO("sha1");
struct sha1_ce_state {
struct sha1_state sst;
u32 finalize;
};
extern const u32 sha1_ce_offsetof_count;
extern const u32 sha1_ce_offsetof_finalize;
asmlinkage int __sha1_ce_transform(struct sha1_ce_state *sst, u8 const *src,
int blocks);
static void sha1_ce_transform(struct sha1_state *sst, u8 const *src,
int blocks)
{
while (blocks) {
int rem;
kernel_neon_begin();
rem = __sha1_ce_transform(container_of(sst,
struct sha1_ce_state,
sst), src, blocks);
kernel_neon_end();
src += (blocks - rem) * SHA1_BLOCK_SIZE;
blocks = rem;
}
}
const u32 sha1_ce_offsetof_count = offsetof(struct sha1_ce_state, sst.count);
const u32 sha1_ce_offsetof_finalize = offsetof(struct sha1_ce_state, finalize);
static int sha1_ce_update(struct shash_desc *desc, const u8 *data,
unsigned int len)
{
struct sha1_ce_state *sctx = shash_desc_ctx(desc);
sctx->finalize = 0;
return sha1_base_do_update_blocks(desc, data, len, sha1_ce_transform);
}
static int sha1_ce_finup(struct shash_desc *desc, const u8 *data,
unsigned int len, u8 *out)
{
struct sha1_ce_state *sctx = shash_desc_ctx(desc);
bool finalized = false;
/*
* Allow the asm code to perform the finalization if there is no
* partial data and the input is a round multiple of the block size.
*/
if (len >= SHA1_BLOCK_SIZE) {
unsigned int remain = len - round_down(len, SHA1_BLOCK_SIZE);
finalized = !remain;
sctx->finalize = finalized;
sha1_base_do_update_blocks(desc, data, len, sha1_ce_transform);
data += len - remain;
len = remain;
}
if (!finalized) {
sctx->finalize = 0;
sha1_base_do_finup(desc, data, len, sha1_ce_transform);
}
return sha1_base_finish(desc, out);
}
static struct shash_alg alg = {
.init = sha1_base_init,
.update = sha1_ce_update,
.finup = sha1_ce_finup,
.descsize = sizeof(struct sha1_ce_state),
.statesize = SHA1_STATE_SIZE,
.digestsize = SHA1_DIGEST_SIZE,
.base = {
.cra_name = "sha1",
.cra_driver_name = "sha1-ce",
.cra_priority = 200,
.cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY |
CRYPTO_AHASH_ALG_FINUP_MAX,
.cra_blocksize = SHA1_BLOCK_SIZE,
.cra_module = THIS_MODULE,
}
};
static int __init sha1_ce_mod_init(void)
{
return crypto_register_shash(&alg);
}
static void __exit sha1_ce_mod_fini(void)
{
crypto_unregister_shash(&alg);
}
module_cpu_feature_match(SHA1, sha1_ce_mod_init);
module_exit(sha1_ce_mod_fini);

@@ -1,96 +0,0 @@
// SPDX-License-Identifier: GPL-2.0
/*
* sha512-ce-glue.c - SHA-384/SHA-512 using ARMv8 Crypto Extensions
*
* Copyright (C) 2018 Linaro Ltd <ard.biesheuvel@linaro.org>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <asm/neon.h>
#include <crypto/internal/hash.h>
#include <crypto/sha2.h>
#include <crypto/sha512_base.h>
#include <linux/cpufeature.h>
#include <linux/kernel.h>
#include <linux/module.h>
MODULE_DESCRIPTION("SHA-384/SHA-512 secure hash using ARMv8 Crypto Extensions");
MODULE_AUTHOR("Ard Biesheuvel <ard.biesheuvel@linaro.org>");
MODULE_LICENSE("GPL v2");
MODULE_ALIAS_CRYPTO("sha384");
MODULE_ALIAS_CRYPTO("sha512");
asmlinkage int __sha512_ce_transform(struct sha512_state *sst, u8 const *src,
int blocks);
static void sha512_ce_transform(struct sha512_state *sst, u8 const *src,
int blocks)
{
do {
int rem;
kernel_neon_begin();
rem = __sha512_ce_transform(sst, src, blocks);
kernel_neon_end();
src += (blocks - rem) * SHA512_BLOCK_SIZE;
blocks = rem;
} while (blocks);
}
static int sha512_ce_update(struct shash_desc *desc, const u8 *data,
unsigned int len)
{
return sha512_base_do_update_blocks(desc, data, len,
sha512_ce_transform);
}
static int sha512_ce_finup(struct shash_desc *desc, const u8 *data,
unsigned int len, u8 *out)
{
sha512_base_do_finup(desc, data, len, sha512_ce_transform);
return sha512_base_finish(desc, out);
}
static struct shash_alg algs[] = { {
.init = sha384_base_init,
.update = sha512_ce_update,
.finup = sha512_ce_finup,
.descsize = SHA512_STATE_SIZE,
.digestsize = SHA384_DIGEST_SIZE,
.base.cra_name = "sha384",
.base.cra_driver_name = "sha384-ce",
.base.cra_priority = 200,
.base.cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY |
CRYPTO_AHASH_ALG_FINUP_MAX,
.base.cra_blocksize = SHA512_BLOCK_SIZE,
.base.cra_module = THIS_MODULE,
}, {
.init = sha512_base_init,
.update = sha512_ce_update,
.finup = sha512_ce_finup,
.descsize = SHA512_STATE_SIZE,
.digestsize = SHA512_DIGEST_SIZE,
.base.cra_name = "sha512",
.base.cra_driver_name = "sha512-ce",
.base.cra_priority = 200,
.base.cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY |
CRYPTO_AHASH_ALG_FINUP_MAX,
.base.cra_blocksize = SHA512_BLOCK_SIZE,
.base.cra_module = THIS_MODULE,
} };
static int __init sha512_ce_mod_init(void)
{
return crypto_register_shashes(algs, ARRAY_SIZE(algs));
}
static void __exit sha512_ce_mod_fini(void)
{
crypto_unregister_shashes(algs, ARRAY_SIZE(algs));
}
module_cpu_feature_match(SHA512, sha512_ce_mod_init);
module_exit(sha512_ce_mod_fini);

@@ -1,83 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* Linux/arm64 port of the OpenSSL SHA512 implementation for AArch64
*
* Copyright (c) 2016 Linaro Ltd. <ard.biesheuvel@linaro.org>
*/
#include <crypto/internal/hash.h>
#include <crypto/sha2.h>
#include <crypto/sha512_base.h>
#include <linux/kernel.h>
#include <linux/module.h>
MODULE_DESCRIPTION("SHA-384/SHA-512 secure hash for arm64");
MODULE_AUTHOR("Andy Polyakov <appro@openssl.org>");
MODULE_AUTHOR("Ard Biesheuvel <ard.biesheuvel@linaro.org>");
MODULE_LICENSE("GPL v2");
MODULE_ALIAS_CRYPTO("sha384");
MODULE_ALIAS_CRYPTO("sha512");
asmlinkage void sha512_blocks_arch(u64 *digest, const void *data,
unsigned int num_blks);
static void sha512_arm64_transform(struct sha512_state *sst, u8 const *src,
int blocks)
{
sha512_blocks_arch(sst->state, src, blocks);
}
static int sha512_update(struct shash_desc *desc, const u8 *data,
unsigned int len)
{
return sha512_base_do_update_blocks(desc, data, len,
sha512_arm64_transform);
}
static int sha512_finup(struct shash_desc *desc, const u8 *data,
unsigned int len, u8 *out)
{
sha512_base_do_finup(desc, data, len, sha512_arm64_transform);
return sha512_base_finish(desc, out);
}
static struct shash_alg algs[] = { {
.digestsize = SHA512_DIGEST_SIZE,
.init = sha512_base_init,
.update = sha512_update,
.finup = sha512_finup,
.descsize = SHA512_STATE_SIZE,
.base.cra_name = "sha512",
.base.cra_driver_name = "sha512-arm64",
.base.cra_priority = 150,
.base.cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY |
CRYPTO_AHASH_ALG_FINUP_MAX,
.base.cra_blocksize = SHA512_BLOCK_SIZE,
.base.cra_module = THIS_MODULE,
}, {
.digestsize = SHA384_DIGEST_SIZE,
.init = sha384_base_init,
.update = sha512_update,
.finup = sha512_finup,
.descsize = SHA512_STATE_SIZE,
.base.cra_name = "sha384",
.base.cra_driver_name = "sha384-arm64",
.base.cra_priority = 150,
.base.cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY |
CRYPTO_AHASH_ALG_FINUP_MAX,
.base.cra_blocksize = SHA384_BLOCK_SIZE,
.base.cra_module = THIS_MODULE,
} };
static int __init sha512_mod_init(void)
{
return crypto_register_shashes(algs, ARRAY_SIZE(algs));
}
static void __exit sha512_mod_fini(void)
{
crypto_unregister_shashes(algs, ARRAY_SIZE(algs));
}
module_init(sha512_mod_init);
module_exit(sha512_mod_fini);

arch/arm64/lib/.gitignore (new file)

@@ -0,0 +1,4 @@
# SPDX-License-Identifier: GPL-2.0-only
# This now-removed directory used to contain generated files.
/crypto/

@@ -1,7 +1,4 @@
# SPDX-License-Identifier: GPL-2.0
obj-y += crypto/
lib-y := clear_user.o delay.o copy_from_user.o \
copy_to_user.o copy_page.o \
clear_page.o csum.o insn.o memchr.o memcpy.o \

@@ -1,75 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* SHA-256 optimized for ARM64
*
* Copyright 2025 Google LLC
*/
#include <asm/neon.h>
#include <crypto/internal/sha2.h>
#include <linux/kernel.h>
#include <linux/module.h>
asmlinkage void sha256_blocks_arch(u32 state[SHA256_STATE_WORDS],
const u8 *data, size_t nblocks);
EXPORT_SYMBOL_GPL(sha256_blocks_arch);
asmlinkage void sha256_block_neon(u32 state[SHA256_STATE_WORDS],
const u8 *data, size_t nblocks);
asmlinkage size_t __sha256_ce_transform(u32 state[SHA256_STATE_WORDS],
const u8 *data, size_t nblocks);
static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_neon);
static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_ce);
void sha256_blocks_simd(u32 state[SHA256_STATE_WORDS],
const u8 *data, size_t nblocks)
{
if (IS_ENABLED(CONFIG_KERNEL_MODE_NEON) &&
static_branch_likely(&have_neon)) {
if (static_branch_likely(&have_ce)) {
do {
size_t rem;
kernel_neon_begin();
rem = __sha256_ce_transform(state,
data, nblocks);
kernel_neon_end();
data += (nblocks - rem) * SHA256_BLOCK_SIZE;
nblocks = rem;
} while (nblocks);
} else {
kernel_neon_begin();
sha256_block_neon(state, data, nblocks);
kernel_neon_end();
}
} else {
sha256_blocks_arch(state, data, nblocks);
}
}
EXPORT_SYMBOL_GPL(sha256_blocks_simd);
bool sha256_is_arch_optimized(void)
{
/* We always can use at least the ARM64 scalar implementation. */
return true;
}
EXPORT_SYMBOL_GPL(sha256_is_arch_optimized);
static int __init sha256_arm64_mod_init(void)
{
if (IS_ENABLED(CONFIG_KERNEL_MODE_NEON) &&
cpu_have_named_feature(ASIMD)) {
static_branch_enable(&have_neon);
if (cpu_have_named_feature(SHA2))
static_branch_enable(&have_ce);
}
return 0;
}
subsys_initcall(sha256_arm64_mod_init);
static void __exit sha256_arm64_mod_exit(void)
{
}
module_exit(sha256_arm64_mod_exit);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("SHA-256 optimized for ARM64");

@@ -23,12 +23,6 @@ config CAVIUM_OCTEON_CVMSEG_SIZE
legally range is from zero to 54 cache blocks (i.e. CVMSEG LM is
between zero and 6192 bytes).
config CRYPTO_SHA256_OCTEON
tristate
default CRYPTO_LIB_SHA256
select CRYPTO_ARCH_HAVE_LIB_SHA256
select CRYPTO_LIB_SHA256_GENERIC
endif # CPU_CAVIUM_OCTEON
if CAVIUM_OCTEON_SOC

@@ -6,6 +6,3 @@
obj-y += octeon-crypto.o
obj-$(CONFIG_CRYPTO_MD5_OCTEON) += octeon-md5.o
obj-$(CONFIG_CRYPTO_SHA1_OCTEON) += octeon-sha1.o
obj-$(CONFIG_CRYPTO_SHA256_OCTEON) += octeon-sha256.o
obj-$(CONFIG_CRYPTO_SHA512_OCTEON) += octeon-sha512.o

@@ -7,12 +7,11 @@
*/
#include <asm/cop2.h>
#include <asm/octeon/crypto.h>
#include <linux/export.h>
#include <linux/interrupt.h>
#include <linux/sched/task_stack.h>
#include "octeon-crypto.h"
/**
* Enable access to Octeon's COP2 crypto hardware for kernel use. Wrap any
* crypto operations in calls to octeon_crypto_enable/disable in order to make

@@ -19,6 +19,7 @@
* any later version.
*/
#include <asm/octeon/crypto.h>
#include <asm/octeon/octeon.h>
#include <crypto/internal/hash.h>
#include <crypto/md5.h>
@@ -27,8 +28,6 @@
#include <linux/string.h>
#include <linux/unaligned.h>
#include "octeon-crypto.h"
struct octeon_md5_state {
__le32 hash[MD5_HASH_WORDS];
u64 byte_count;

@@ -1,147 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* Cryptographic API.
*
* SHA1 Secure Hash Algorithm.
*
* Adapted for OCTEON by Aaro Koskinen <aaro.koskinen@iki.fi>.
*
* Based on crypto/sha1_generic.c, which is:
*
* Copyright (c) Alan Smithee.
* Copyright (c) Andrew McDonald <andrew@mcdonald.org.uk>
* Copyright (c) Jean-Francois Dive <jef@linuxbe.org>
*/
#include <asm/octeon/octeon.h>
#include <crypto/internal/hash.h>
#include <crypto/sha1.h>
#include <crypto/sha1_base.h>
#include <linux/errno.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include "octeon-crypto.h"
/*
* We pass everything as 64-bit. OCTEON can handle misaligned data.
*/
static void octeon_sha1_store_hash(struct sha1_state *sctx)
{
u64 *hash = (u64 *)sctx->state;
union {
u32 word[2];
u64 dword;
} hash_tail = { { sctx->state[4], } };
write_octeon_64bit_hash_dword(hash[0], 0);
write_octeon_64bit_hash_dword(hash[1], 1);
write_octeon_64bit_hash_dword(hash_tail.dword, 2);
memzero_explicit(&hash_tail.word[0], sizeof(hash_tail.word[0]));
}
static void octeon_sha1_read_hash(struct sha1_state *sctx)
{
u64 *hash = (u64 *)sctx->state;
union {
u32 word[2];
u64 dword;
} hash_tail;
hash[0] = read_octeon_64bit_hash_dword(0);
hash[1] = read_octeon_64bit_hash_dword(1);
hash_tail.dword = read_octeon_64bit_hash_dword(2);
sctx->state[4] = hash_tail.word[0];
memzero_explicit(&hash_tail.dword, sizeof(hash_tail.dword));
}
static void octeon_sha1_transform(struct sha1_state *sctx, const u8 *src,
int blocks)
{
do {
const u64 *block = (const u64 *)src;
write_octeon_64bit_block_dword(block[0], 0);
write_octeon_64bit_block_dword(block[1], 1);
write_octeon_64bit_block_dword(block[2], 2);
write_octeon_64bit_block_dword(block[3], 3);
write_octeon_64bit_block_dword(block[4], 4);
write_octeon_64bit_block_dword(block[5], 5);
write_octeon_64bit_block_dword(block[6], 6);
octeon_sha1_start(block[7]);
src += SHA1_BLOCK_SIZE;
} while (--blocks);
}
static int octeon_sha1_update(struct shash_desc *desc, const u8 *data,
unsigned int len)
{
struct sha1_state *sctx = shash_desc_ctx(desc);
struct octeon_cop2_state state;
unsigned long flags;
int remain;
flags = octeon_crypto_enable(&state);
octeon_sha1_store_hash(sctx);
remain = sha1_base_do_update_blocks(desc, data, len,
octeon_sha1_transform);
octeon_sha1_read_hash(sctx);
octeon_crypto_disable(&state, flags);
return remain;
}
static int octeon_sha1_finup(struct shash_desc *desc, const u8 *src,
unsigned int len, u8 *out)
{
struct sha1_state *sctx = shash_desc_ctx(desc);
struct octeon_cop2_state state;
unsigned long flags;
flags = octeon_crypto_enable(&state);
octeon_sha1_store_hash(sctx);
sha1_base_do_finup(desc, src, len, octeon_sha1_transform);
octeon_sha1_read_hash(sctx);
octeon_crypto_disable(&state, flags);
return sha1_base_finish(desc, out);
}
static struct shash_alg octeon_sha1_alg = {
.digestsize = SHA1_DIGEST_SIZE,
.init = sha1_base_init,
.update = octeon_sha1_update,
.finup = octeon_sha1_finup,
.descsize = SHA1_STATE_SIZE,
.base = {
.cra_name = "sha1",
.cra_driver_name= "octeon-sha1",
.cra_priority = OCTEON_CR_OPCODE_PRIORITY,
.cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY,
.cra_blocksize = SHA1_BLOCK_SIZE,
.cra_module = THIS_MODULE,
}
};
static int __init octeon_sha1_mod_init(void)
{
if (!octeon_has_crypto())
return -ENOTSUPP;
return crypto_register_shash(&octeon_sha1_alg);
}
static void __exit octeon_sha1_mod_fini(void)
{
crypto_unregister_shash(&octeon_sha1_alg);
}
module_init(octeon_sha1_mod_init);
module_exit(octeon_sha1_mod_fini);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("SHA1 Secure Hash Algorithm (OCTEON)");
MODULE_AUTHOR("Aaro Koskinen <aaro.koskinen@iki.fi>");

@@ -1,167 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* Cryptographic API.
*
* SHA-512 and SHA-384 Secure Hash Algorithm.
*
* Adapted for OCTEON by Aaro Koskinen <aaro.koskinen@iki.fi>.
*
* Based on crypto/sha512_generic.c, which is:
*
* Copyright (c) Jean-Luc Cooke <jlcooke@certainkey.com>
* Copyright (c) Andrew McDonald <andrew@mcdonald.org.uk>
* Copyright (c) 2003 Kyle McMartin <kyle@debian.org>
*/
#include <asm/octeon/octeon.h>
#include <crypto/internal/hash.h>
#include <crypto/sha2.h>
#include <crypto/sha512_base.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include "octeon-crypto.h"
/*
* We pass everything as 64-bit. OCTEON can handle misaligned data.
*/
static void octeon_sha512_store_hash(struct sha512_state *sctx)
{
write_octeon_64bit_hash_sha512(sctx->state[0], 0);
write_octeon_64bit_hash_sha512(sctx->state[1], 1);
write_octeon_64bit_hash_sha512(sctx->state[2], 2);
write_octeon_64bit_hash_sha512(sctx->state[3], 3);
write_octeon_64bit_hash_sha512(sctx->state[4], 4);
write_octeon_64bit_hash_sha512(sctx->state[5], 5);
write_octeon_64bit_hash_sha512(sctx->state[6], 6);
write_octeon_64bit_hash_sha512(sctx->state[7], 7);
}
static void octeon_sha512_read_hash(struct sha512_state *sctx)
{
sctx->state[0] = read_octeon_64bit_hash_sha512(0);
sctx->state[1] = read_octeon_64bit_hash_sha512(1);
sctx->state[2] = read_octeon_64bit_hash_sha512(2);
sctx->state[3] = read_octeon_64bit_hash_sha512(3);
sctx->state[4] = read_octeon_64bit_hash_sha512(4);
sctx->state[5] = read_octeon_64bit_hash_sha512(5);
sctx->state[6] = read_octeon_64bit_hash_sha512(6);
sctx->state[7] = read_octeon_64bit_hash_sha512(7);
}
static void octeon_sha512_transform(struct sha512_state *sctx,
const u8 *src, int blocks)
{
do {
const u64 *block = (const u64 *)src;
write_octeon_64bit_block_sha512(block[0], 0);
write_octeon_64bit_block_sha512(block[1], 1);
write_octeon_64bit_block_sha512(block[2], 2);
write_octeon_64bit_block_sha512(block[3], 3);
write_octeon_64bit_block_sha512(block[4], 4);
write_octeon_64bit_block_sha512(block[5], 5);
write_octeon_64bit_block_sha512(block[6], 6);
write_octeon_64bit_block_sha512(block[7], 7);
write_octeon_64bit_block_sha512(block[8], 8);
write_octeon_64bit_block_sha512(block[9], 9);
write_octeon_64bit_block_sha512(block[10], 10);
write_octeon_64bit_block_sha512(block[11], 11);
write_octeon_64bit_block_sha512(block[12], 12);
write_octeon_64bit_block_sha512(block[13], 13);
write_octeon_64bit_block_sha512(block[14], 14);
octeon_sha512_start(block[15]);
src += SHA512_BLOCK_SIZE;
} while (--blocks);
}
static int octeon_sha512_update(struct shash_desc *desc, const u8 *data,
unsigned int len)
{
struct sha512_state *sctx = shash_desc_ctx(desc);
struct octeon_cop2_state state;
unsigned long flags;
int remain;
flags = octeon_crypto_enable(&state);
octeon_sha512_store_hash(sctx);
remain = sha512_base_do_update_blocks(desc, data, len,
octeon_sha512_transform);
octeon_sha512_read_hash(sctx);
octeon_crypto_disable(&state, flags);
return remain;
}
static int octeon_sha512_finup(struct shash_desc *desc, const u8 *src,
unsigned int len, u8 *hash)
{
struct sha512_state *sctx = shash_desc_ctx(desc);
struct octeon_cop2_state state;
unsigned long flags;
flags = octeon_crypto_enable(&state);
octeon_sha512_store_hash(sctx);
sha512_base_do_finup(desc, src, len, octeon_sha512_transform);
octeon_sha512_read_hash(sctx);
octeon_crypto_disable(&state, flags);
return sha512_base_finish(desc, hash);
}
static struct shash_alg octeon_sha512_algs[2] = { {
.digestsize = SHA512_DIGEST_SIZE,
.init = sha512_base_init,
.update = octeon_sha512_update,
.finup = octeon_sha512_finup,
.descsize = SHA512_STATE_SIZE,
.base = {
.cra_name = "sha512",
.cra_driver_name= "octeon-sha512",
.cra_priority = OCTEON_CR_OPCODE_PRIORITY,
.cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY |
CRYPTO_AHASH_ALG_FINUP_MAX,
.cra_blocksize = SHA512_BLOCK_SIZE,
.cra_module = THIS_MODULE,
}
}, {
.digestsize = SHA384_DIGEST_SIZE,
.init = sha384_base_init,
.update = octeon_sha512_update,
.finup = octeon_sha512_finup,
.descsize = SHA512_STATE_SIZE,
.base = {
.cra_name = "sha384",
.cra_driver_name= "octeon-sha384",
.cra_priority = OCTEON_CR_OPCODE_PRIORITY,
.cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY |
CRYPTO_AHASH_ALG_FINUP_MAX,
.cra_blocksize = SHA384_BLOCK_SIZE,
.cra_module = THIS_MODULE,
}
} };
static int __init octeon_sha512_mod_init(void)
{
if (!octeon_has_crypto())
return -ENOTSUPP;
return crypto_register_shashes(octeon_sha512_algs,
ARRAY_SIZE(octeon_sha512_algs));
}
static void __exit octeon_sha512_mod_fini(void)
{
crypto_unregister_shashes(octeon_sha512_algs,
ARRAY_SIZE(octeon_sha512_algs));
}
module_init(octeon_sha512_mod_init);
module_exit(octeon_sha512_mod_fini);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("SHA-512 and SHA-384 Secure Hash Algorithms (OCTEON)");
MODULE_AUTHOR("Aaro Koskinen <aaro.koskinen@iki.fi>");

@@ -156,8 +156,6 @@ CONFIG_SECURITY_NETWORK=y
CONFIG_CRYPTO_CBC=y
CONFIG_CRYPTO_HMAC=y
CONFIG_CRYPTO_MD5_OCTEON=y
CONFIG_CRYPTO_SHA1_OCTEON=m
CONFIG_CRYPTO_SHA512_OCTEON=m
CONFIG_CRYPTO_DES=y
CONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT=y
CONFIG_DEBUG_FS=y

@@ -12,24 +12,4 @@ config CRYPTO_MD5_OCTEON
Architecture: mips OCTEON using crypto instructions, when available
config CRYPTO_SHA1_OCTEON
tristate "Hash functions: SHA-1 (OCTEON)"
depends on CPU_CAVIUM_OCTEON
select CRYPTO_SHA1
select CRYPTO_HASH
help
SHA-1 secure hash algorithm (FIPS 180)
Architecture: mips OCTEON
config CRYPTO_SHA512_OCTEON
tristate "Hash functions: SHA-384 and SHA-512 (OCTEON)"
depends on CPU_CAVIUM_OCTEON
select CRYPTO_SHA512
select CRYPTO_HASH
help
SHA-384 and SHA-512 secure hash algorithms (FIPS 180)
Architecture: mips OCTEON using crypto instructions, when available
endmenu

arch/mips/lib/.gitignore (new file)

@@ -0,0 +1,4 @@
# SPDX-License-Identifier: GPL-2.0-only
# This now-removed directory used to contain generated files.
/crypto/

@@ -3,8 +3,6 @@
# Makefile for MIPS-specific library files..
#
obj-y += crypto/
lib-y += bitops.o csum_partial.o delay.o memcpy.o memset.o \
mips-atomic.o strncpy_user.o \
strnlen_user.o uncached.o

@@ -128,6 +128,5 @@ CONFIG_PPC_EARLY_DEBUG_44x_PHYSLOW=0x00010000
CONFIG_PPC_EARLY_DEBUG_44x_PHYSHIGH=0x33f
CONFIG_CRYPTO_PCBC=y
CONFIG_CRYPTO_MD5=y
CONFIG_CRYPTO_SHA1_PPC=y
CONFIG_CRYPTO_DES=y
# CONFIG_CRYPTO_HW is not set

@@ -322,7 +322,6 @@ CONFIG_CRYPTO_PCBC=m
CONFIG_CRYPTO_HMAC=y
CONFIG_CRYPTO_MD5_PPC=m
CONFIG_CRYPTO_MICHAEL_MIC=m
CONFIG_CRYPTO_SHA1_PPC=m
CONFIG_CRYPTO_SHA256=y
CONFIG_CRYPTO_WP512=m
CONFIG_CRYPTO_ANUBIS=m

@@ -388,7 +388,6 @@ CONFIG_CRYPTO_SHA256=y
CONFIG_CRYPTO_WP512=m
CONFIG_CRYPTO_LZO=m
CONFIG_CRYPTO_MD5_PPC=m
CONFIG_CRYPTO_SHA1_PPC=m
CONFIG_CRYPTO_AES_GCM_P10=m
CONFIG_CRYPTO_DEV_NX=y
CONFIG_CRYPTO_DEV_NX_ENCRYPT=m

@@ -23,22 +23,6 @@ config CRYPTO_MD5_PPC
Architecture: powerpc
config CRYPTO_SHA1_PPC
tristate "Hash functions: SHA-1"
help
SHA-1 secure hash algorithm (FIPS 180)
Architecture: powerpc
config CRYPTO_SHA1_PPC_SPE
tristate "Hash functions: SHA-1 (SPE)"
depends on SPE
help
SHA-1 secure hash algorithm (FIPS 180)
Architecture: powerpc using
- SPE (Signal Processing Engine) extensions
config CRYPTO_AES_PPC_SPE
tristate "Ciphers: AES, modes: ECB/CBC/CTR/XTS (SPE)"
depends on SPE

@@ -7,16 +7,12 @@
obj-$(CONFIG_CRYPTO_AES_PPC_SPE) += aes-ppc-spe.o
obj-$(CONFIG_CRYPTO_MD5_PPC) += md5-ppc.o
obj-$(CONFIG_CRYPTO_SHA1_PPC) += sha1-powerpc.o
obj-$(CONFIG_CRYPTO_SHA1_PPC_SPE) += sha1-ppc-spe.o
obj-$(CONFIG_CRYPTO_AES_GCM_P10) += aes-gcm-p10-crypto.o
obj-$(CONFIG_CRYPTO_DEV_VMX_ENCRYPT) += vmx-crypto.o
obj-$(CONFIG_CRYPTO_CURVE25519_PPC64) += curve25519-ppc64le.o
aes-ppc-spe-y := aes-spe-core.o aes-spe-keys.o aes-tab-4k.o aes-spe-modes.o aes-spe-glue.o
md5-ppc-y := md5-asm.o md5-glue.o
sha1-powerpc-y := sha1-powerpc-asm.o sha1.o
sha1-ppc-spe-y := sha1-spe-asm.o sha1-spe-glue.o
aes-gcm-p10-crypto-y := aes-gcm-p10-glue.o aes-gcm-p10.o ghashp10-ppc.o aesp10-ppc.o
vmx-crypto-objs := vmx.o aesp8-ppc.o ghashp8-ppc.o aes.o aes_cbc.o aes_ctr.o aes_xts.o ghash.o
curve25519-ppc64le-y := curve25519-ppc64le-core.o curve25519-ppc64le_asm.o

@@ -1,107 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* Glue code for SHA-1 implementation for SPE instructions (PPC)
*
* Based on generic implementation.
*
* Copyright (c) 2015 Markus Stockhausen <stockhausen@collogia.de>
*/
#include <asm/switch_to.h>
#include <crypto/internal/hash.h>
#include <crypto/sha1.h>
#include <crypto/sha1_base.h>
#include <linux/kernel.h>
#include <linux/preempt.h>
#include <linux/module.h>
/*
* MAX_BYTES defines the number of bytes that are allowed to be processed
* between preempt_disable() and preempt_enable(). SHA1 takes ~1000
* operations per 64 bytes. e500 cores can issue two arithmetic instructions
* per clock cycle using one 32/64 bit unit (SU1) and one 32 bit unit (SU2).
* Thus 2KB of input data will need an estimated maximum of 18,000 cycles.
* Headroom for cache misses included. Even with the low end model clocked
* at 667 MHz this equals to a critical time window of less than 27us.
*
*/
#define MAX_BYTES 2048
asmlinkage void ppc_spe_sha1_transform(u32 *state, const u8 *src, u32 blocks);
static void spe_begin(void)
{
/* We just start SPE operations and will save SPE registers later. */
preempt_disable();
enable_kernel_spe();
}
static void spe_end(void)
{
disable_kernel_spe();
/* reenable preemption */
preempt_enable();
}
static void ppc_spe_sha1_block(struct sha1_state *sctx, const u8 *src,
int blocks)
{
do {
int unit = min(blocks, MAX_BYTES / SHA1_BLOCK_SIZE);
spe_begin();
ppc_spe_sha1_transform(sctx->state, src, unit);
spe_end();
src += unit * SHA1_BLOCK_SIZE;
blocks -= unit;
} while (blocks);
}
static int ppc_spe_sha1_update(struct shash_desc *desc, const u8 *data,
unsigned int len)
{
return sha1_base_do_update_blocks(desc, data, len, ppc_spe_sha1_block);
}
static int ppc_spe_sha1_finup(struct shash_desc *desc, const u8 *src,
unsigned int len, u8 *out)
{
sha1_base_do_finup(desc, src, len, ppc_spe_sha1_block);
return sha1_base_finish(desc, out);
}
static struct shash_alg alg = {
.digestsize = SHA1_DIGEST_SIZE,
.init = sha1_base_init,
.update = ppc_spe_sha1_update,
.finup = ppc_spe_sha1_finup,
.descsize = SHA1_STATE_SIZE,
.base = {
.cra_name = "sha1",
.cra_driver_name= "sha1-ppc-spe",
.cra_priority = 300,
.cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY,
.cra_blocksize = SHA1_BLOCK_SIZE,
.cra_module = THIS_MODULE,
}
};
static int __init ppc_spe_sha1_mod_init(void)
{
return crypto_register_shash(&alg);
}
static void __exit ppc_spe_sha1_mod_fini(void)
{
crypto_unregister_shash(&alg);
}
module_init(ppc_spe_sha1_mod_init);
module_exit(ppc_spe_sha1_mod_fini);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("SHA1 Secure Hash Algorithm, SPE optimized");
MODULE_ALIAS_CRYPTO("sha1");
MODULE_ALIAS_CRYPTO("sha1-ppc-spe");

@@ -1,78 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* Cryptographic API.
*
* powerpc implementation of the SHA1 Secure Hash Algorithm.
*
* Derived from cryptoapi implementation, adapted for in-place
* scatterlist interface.
*
* Derived from "crypto/sha1.c"
* Copyright (c) Alan Smithee.
* Copyright (c) Andrew McDonald <andrew@mcdonald.org.uk>
* Copyright (c) Jean-Francois Dive <jef@linuxbe.org>
*/
#include <crypto/internal/hash.h>
#include <crypto/sha1.h>
#include <crypto/sha1_base.h>
#include <linux/kernel.h>
#include <linux/module.h>
asmlinkage void powerpc_sha_transform(u32 *state, const u8 *src);
static void powerpc_sha_block(struct sha1_state *sctx, const u8 *data,
int blocks)
{
do {
powerpc_sha_transform(sctx->state, data);
data += 64;
} while (--blocks);
}
static int powerpc_sha1_update(struct shash_desc *desc, const u8 *data,
unsigned int len)
{
return sha1_base_do_update_blocks(desc, data, len, powerpc_sha_block);
}
/* Add padding and return the message digest. */
static int powerpc_sha1_finup(struct shash_desc *desc, const u8 *src,
unsigned int len, u8 *out)
{
sha1_base_do_finup(desc, src, len, powerpc_sha_block);
return sha1_base_finish(desc, out);
}
static struct shash_alg alg = {
.digestsize = SHA1_DIGEST_SIZE,
.init = sha1_base_init,
.update = powerpc_sha1_update,
.finup = powerpc_sha1_finup,
.descsize = SHA1_STATE_SIZE,
.base = {
.cra_name = "sha1",
.cra_driver_name= "sha1-powerpc",
.cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY,
.cra_blocksize = SHA1_BLOCK_SIZE,
.cra_module = THIS_MODULE,
}
};
static int __init sha1_powerpc_mod_init(void)
{
return crypto_register_shash(&alg);
}
static void __exit sha1_powerpc_mod_fini(void)
{
crypto_unregister_shash(&alg);
}
module_init(sha1_powerpc_mod_init);
module_exit(sha1_powerpc_mod_fini);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("SHA1 Secure Hash Algorithm");
MODULE_ALIAS_CRYPTO("sha1");
MODULE_ALIAS_CRYPTO("sha1-powerpc");

@@ -3,8 +3,6 @@
# Makefile for ppc-specific library files..
#
obj-y += crypto/
CFLAGS_code-patching.o += -fno-stack-protector
CFLAGS_feature-fixups.o += -fno-stack-protector

@@ -28,17 +28,6 @@ config CRYPTO_GHASH_RISCV64
Architecture: riscv64 using:
- Zvkg vector crypto extension
config CRYPTO_SHA512_RISCV64
tristate "Hash functions: SHA-384 and SHA-512"
depends on 64BIT && RISCV_ISA_V && TOOLCHAIN_HAS_VECTOR_CRYPTO
select CRYPTO_SHA512
help
SHA-384 and SHA-512 secure hash algorithm (FIPS 180)
Architecture: riscv64 using:
- Zvknhb vector crypto extension
- Zvkb vector crypto extension
config CRYPTO_SM3_RISCV64
tristate "Hash functions: SM3 (ShangMi 3)"
depends on 64BIT && RISCV_ISA_V && TOOLCHAIN_HAS_VECTOR_CRYPTO

@@ -7,9 +7,6 @@ aes-riscv64-y := aes-riscv64-glue.o aes-riscv64-zvkned.o \
obj-$(CONFIG_CRYPTO_GHASH_RISCV64) += ghash-riscv64.o
ghash-riscv64-y := ghash-riscv64-glue.o ghash-riscv64-zvkg.o
obj-$(CONFIG_CRYPTO_SHA512_RISCV64) += sha512-riscv64.o
sha512-riscv64-y := sha512-riscv64-glue.o sha512-riscv64-zvknhb-zvkb.o
obj-$(CONFIG_CRYPTO_SM3_RISCV64) += sm3-riscv64.o
sm3-riscv64-y := sm3-riscv64-glue.o sm3-riscv64-zvksh-zvkb.o

@@ -1,124 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* SHA-512 and SHA-384 using the RISC-V vector crypto extensions
*
* Copyright (C) 2023 VRULL GmbH
* Author: Heiko Stuebner <heiko.stuebner@vrull.eu>
*
* Copyright (C) 2023 SiFive, Inc.
* Author: Jerry Shih <jerry.shih@sifive.com>
*/
#include <asm/simd.h>
#include <asm/vector.h>
#include <crypto/internal/hash.h>
#include <crypto/internal/simd.h>
#include <crypto/sha512_base.h>
#include <linux/kernel.h>
#include <linux/module.h>
/*
* Note: the asm function only uses the 'state' field of struct sha512_state.
* It is assumed to be the first field.
*/
asmlinkage void sha512_transform_zvknhb_zvkb(
struct sha512_state *state, const u8 *data, int num_blocks);
static void sha512_block(struct sha512_state *state, const u8 *data,
int num_blocks)
{
/*
* Ensure struct sha512_state begins directly with the SHA-512
* 512-bit internal state, as this is what the asm function expects.
*/
BUILD_BUG_ON(offsetof(struct sha512_state, state) != 0);
if (crypto_simd_usable()) {
kernel_vector_begin();
sha512_transform_zvknhb_zvkb(state, data, num_blocks);
kernel_vector_end();
} else {
sha512_generic_block_fn(state, data, num_blocks);
}
}
static int riscv64_sha512_update(struct shash_desc *desc, const u8 *data,
unsigned int len)
{
return sha512_base_do_update_blocks(desc, data, len, sha512_block);
}
static int riscv64_sha512_finup(struct shash_desc *desc, const u8 *data,
unsigned int len, u8 *out)
{
sha512_base_do_finup(desc, data, len, sha512_block);
return sha512_base_finish(desc, out);
}
static int riscv64_sha512_digest(struct shash_desc *desc, const u8 *data,
unsigned int len, u8 *out)
{
return sha512_base_init(desc) ?:
riscv64_sha512_finup(desc, data, len, out);
}
static struct shash_alg riscv64_sha512_algs[] = {
{
.init = sha512_base_init,
.update = riscv64_sha512_update,
.finup = riscv64_sha512_finup,
.digest = riscv64_sha512_digest,
.descsize = SHA512_STATE_SIZE,
.digestsize = SHA512_DIGEST_SIZE,
.base = {
.cra_blocksize = SHA512_BLOCK_SIZE,
.cra_priority = 300,
.cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY |
CRYPTO_AHASH_ALG_FINUP_MAX,
.cra_name = "sha512",
.cra_driver_name = "sha512-riscv64-zvknhb-zvkb",
.cra_module = THIS_MODULE,
},
}, {
.init = sha384_base_init,
.update = riscv64_sha512_update,
.finup = riscv64_sha512_finup,
.descsize = SHA512_STATE_SIZE,
.digestsize = SHA384_DIGEST_SIZE,
.base = {
.cra_blocksize = SHA384_BLOCK_SIZE,
.cra_priority = 300,
.cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY |
CRYPTO_AHASH_ALG_FINUP_MAX,
.cra_name = "sha384",
.cra_driver_name = "sha384-riscv64-zvknhb-zvkb",
.cra_module = THIS_MODULE,
},
},
};
static int __init riscv64_sha512_mod_init(void)
{
if (riscv_isa_extension_available(NULL, ZVKNHB) &&
riscv_isa_extension_available(NULL, ZVKB) &&
riscv_vector_vlen() >= 128)
return crypto_register_shashes(riscv64_sha512_algs,
ARRAY_SIZE(riscv64_sha512_algs));
return -ENODEV;
}
static void __exit riscv64_sha512_mod_exit(void)
{
crypto_unregister_shashes(riscv64_sha512_algs,
ARRAY_SIZE(riscv64_sha512_algs));
}
module_init(riscv64_sha512_mod_init);
module_exit(riscv64_sha512_mod_exit);
MODULE_DESCRIPTION("SHA-512 (RISC-V accelerated)");
MODULE_AUTHOR("Heiko Stuebner <heiko.stuebner@vrull.eu>");
MODULE_LICENSE("GPL");
MODULE_ALIAS_CRYPTO("sha512");
MODULE_ALIAS_CRYPTO("sha384");


@ -1,5 +1,4 @@
# SPDX-License-Identifier: GPL-2.0-only
obj-y += crypto/
lib-y += delay.o
lib-y += memcpy.o
lib-y += memset.o


@ -1,16 +0,0 @@
# SPDX-License-Identifier: GPL-2.0-only
config CRYPTO_CHACHA_RISCV64
tristate
depends on 64BIT && RISCV_ISA_V && TOOLCHAIN_HAS_VECTOR_CRYPTO
default CRYPTO_LIB_CHACHA
select CRYPTO_ARCH_HAVE_LIB_CHACHA
select CRYPTO_LIB_CHACHA_GENERIC
config CRYPTO_SHA256_RISCV64
tristate
depends on 64BIT && RISCV_ISA_V && TOOLCHAIN_HAS_VECTOR_CRYPTO
default CRYPTO_LIB_SHA256
select CRYPTO_ARCH_HAVE_LIB_SHA256
select CRYPTO_ARCH_HAVE_LIB_SHA256_SIMD
select CRYPTO_LIB_SHA256_GENERIC


@ -1,67 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* SHA-256 (RISC-V accelerated)
*
* Copyright (C) 2022 VRULL GmbH
* Author: Heiko Stuebner <heiko.stuebner@vrull.eu>
*
* Copyright (C) 2023 SiFive, Inc.
* Author: Jerry Shih <jerry.shih@sifive.com>
*/
#include <asm/vector.h>
#include <crypto/internal/sha2.h>
#include <linux/kernel.h>
#include <linux/module.h>
asmlinkage void sha256_transform_zvknha_or_zvknhb_zvkb(
u32 state[SHA256_STATE_WORDS], const u8 *data, size_t nblocks);
static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_extensions);
void sha256_blocks_simd(u32 state[SHA256_STATE_WORDS],
const u8 *data, size_t nblocks)
{
if (static_branch_likely(&have_extensions)) {
kernel_vector_begin();
sha256_transform_zvknha_or_zvknhb_zvkb(state, data, nblocks);
kernel_vector_end();
} else {
sha256_blocks_generic(state, data, nblocks);
}
}
EXPORT_SYMBOL_GPL(sha256_blocks_simd);
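/*
 * sha256_blocks_simd() is reserved for callers in contexts where
 * kernel-mode vector is usable, hence the unconditional
 * kernel_vector_begin()/end() pair; the _arch() entry point below must
 * work in any context, so it simply falls back to the generic blocks.
 */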
void sha256_blocks_arch(u32 state[SHA256_STATE_WORDS],
const u8 *data, size_t nblocks)
{
sha256_blocks_generic(state, data, nblocks);
}
EXPORT_SYMBOL_GPL(sha256_blocks_arch);
bool sha256_is_arch_optimized(void)
{
return static_key_enabled(&have_extensions);
}
EXPORT_SYMBOL_GPL(sha256_is_arch_optimized);
static int __init riscv64_sha256_mod_init(void)
{
/* Both zvknha and zvknhb provide the SHA-256 instructions. */
if ((riscv_isa_extension_available(NULL, ZVKNHA) ||
riscv_isa_extension_available(NULL, ZVKNHB)) &&
riscv_isa_extension_available(NULL, ZVKB) &&
riscv_vector_vlen() >= 128)
static_branch_enable(&have_extensions);
return 0;
}
subsys_initcall(riscv64_sha256_mod_init);
static void __exit riscv64_sha256_mod_exit(void)
{
}
module_exit(riscv64_sha256_mod_exit);
MODULE_DESCRIPTION("SHA-256 (RISC-V accelerated)");
MODULE_AUTHOR("Heiko Stuebner <heiko.stuebner@vrull.eu>");
MODULE_LICENSE("GPL");


@ -20,14 +20,14 @@ struct kexec_sha_region purgatory_sha_regions[KEXEC_SEGMENT_MAX] __section(".kex
static int verify_sha256_digest(void)
{
struct kexec_sha_region *ptr, *end;
struct sha256_state ss;
struct sha256_ctx sctx;
u8 digest[SHA256_DIGEST_SIZE];
sha256_init(&ss);
sha256_init(&sctx);
end = purgatory_sha_regions + ARRAY_SIZE(purgatory_sha_regions);
for (ptr = purgatory_sha_regions; ptr < end; ptr++)
sha256_update(&ss, (uint8_t *)(ptr->start), ptr->len);
sha256_final(&ss, digest);
sha256_update(&sctx, (uint8_t *)(ptr->start), ptr->len);
sha256_final(&sctx, digest);
if (memcmp(digest, purgatory_sha256_digest, sizeof(digest)) != 0)
return 1;
return 0;
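
This purgatory hunk is typical of callers converted from struct sha256_state to the new struct sha256_ctx: only the context type changes, not the call sequence. A minimal sketch of the pattern, using only the functions visible above (region and region_len are illustrative):

    struct sha256_ctx sctx;
    u8 digest[SHA256_DIGEST_SIZE];

    sha256_init(&sctx);
    sha256_update(&sctx, region, region_len);  /* once per region */
    sha256_final(&sctx, digest);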


@ -804,8 +804,6 @@ CONFIG_CRYPTO_USER_API_HASH=m
CONFIG_CRYPTO_USER_API_SKCIPHER=m
CONFIG_CRYPTO_USER_API_RNG=m
CONFIG_CRYPTO_USER_API_AEAD=m
CONFIG_CRYPTO_SHA512_S390=m
CONFIG_CRYPTO_SHA1_S390=m
CONFIG_CRYPTO_SHA3_256_S390=m
CONFIG_CRYPTO_SHA3_512_S390=m
CONFIG_CRYPTO_GHASH_S390=m


@ -791,8 +791,6 @@ CONFIG_CRYPTO_USER_API_HASH=m
CONFIG_CRYPTO_USER_API_SKCIPHER=m
CONFIG_CRYPTO_USER_API_RNG=m
CONFIG_CRYPTO_USER_API_AEAD=m
CONFIG_CRYPTO_SHA512_S390=m
CONFIG_CRYPTO_SHA1_S390=m
CONFIG_CRYPTO_SHA3_256_S390=m
CONFIG_CRYPTO_SHA3_512_S390=m
CONFIG_CRYPTO_GHASH_S390=m


@ -2,26 +2,6 @@
menu "Accelerated Cryptographic Algorithms for CPU (s390)"
config CRYPTO_SHA512_S390
tristate "Hash functions: SHA-384 and SHA-512"
select CRYPTO_HASH
help
SHA-384 and SHA-512 secure hash algorithms (FIPS 180)
Architecture: s390
It is available as of z10.
config CRYPTO_SHA1_S390
tristate "Hash functions: SHA-1"
select CRYPTO_HASH
help
SHA-1 secure hash algorithm (FIPS 180)
Architecture: s390
It is available as of z990.
config CRYPTO_SHA3_256_S390
tristate "Hash functions: SHA3-224 and SHA3-256"
select CRYPTO_HASH


@ -3,8 +3,6 @@
# Cryptographic API
#
obj-$(CONFIG_CRYPTO_SHA1_S390) += sha1_s390.o sha_common.o
obj-$(CONFIG_CRYPTO_SHA512_S390) += sha512_s390.o sha_common.o
obj-$(CONFIG_CRYPTO_SHA3_256_S390) += sha3_256_s390.o sha_common.o
obj-$(CONFIG_CRYPTO_SHA3_512_S390) += sha3_512_s390.o sha_common.o
obj-$(CONFIG_CRYPTO_DES_S390) += des_s390.o


@ -1,105 +0,0 @@
// SPDX-License-Identifier: GPL-2.0+
/*
* Cryptographic API.
*
* s390 implementation of the SHA1 Secure Hash Algorithm.
*
* Derived from cryptoapi implementation, adapted for in-place
* scatterlist interface. Originally based on the public domain
* implementation written by Steve Reid.
*
* s390 Version:
* Copyright IBM Corp. 2003, 2007
* Author(s): Thomas Spatzier
* Jan Glauber (jan.glauber@de.ibm.com)
*
* Derived from "crypto/sha1_generic.c"
* Copyright (c) Alan Smithee.
* Copyright (c) Andrew McDonald <andrew@mcdonald.org.uk>
* Copyright (c) Jean-Francois Dive <jef@linuxbe.org>
*/
#include <asm/cpacf.h>
#include <crypto/internal/hash.h>
#include <crypto/sha1.h>
#include <linux/cpufeature.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include "sha.h"
static int s390_sha1_init(struct shash_desc *desc)
{
struct s390_sha_ctx *sctx = shash_desc_ctx(desc);
sctx->state[0] = SHA1_H0;
sctx->state[1] = SHA1_H1;
sctx->state[2] = SHA1_H2;
sctx->state[3] = SHA1_H3;
sctx->state[4] = SHA1_H4;
sctx->count = 0;
sctx->func = CPACF_KIMD_SHA_1;
sctx->first_message_part = 0;
return 0;
}
static int s390_sha1_export(struct shash_desc *desc, void *out)
{
struct s390_sha_ctx *sctx = shash_desc_ctx(desc);
struct sha1_state *octx = out;
octx->count = sctx->count;
memcpy(octx->state, sctx->state, sizeof(octx->state));
return 0;
}
static int s390_sha1_import(struct shash_desc *desc, const void *in)
{
struct s390_sha_ctx *sctx = shash_desc_ctx(desc);
const struct sha1_state *ictx = in;
sctx->count = ictx->count;
memcpy(sctx->state, ictx->state, sizeof(ictx->state));
sctx->func = CPACF_KIMD_SHA_1;
sctx->first_message_part = 0;
return 0;
}
static struct shash_alg alg = {
.digestsize = SHA1_DIGEST_SIZE,
.init = s390_sha1_init,
.update = s390_sha_update_blocks,
.finup = s390_sha_finup,
.export = s390_sha1_export,
.import = s390_sha1_import,
.descsize = S390_SHA_CTX_SIZE,
.statesize = SHA1_STATE_SIZE,
.base = {
.cra_name = "sha1",
.cra_driver_name= "sha1-s390",
.cra_priority = 300,
.cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY |
CRYPTO_AHASH_ALG_FINUP_MAX,
.cra_blocksize = SHA1_BLOCK_SIZE,
.cra_module = THIS_MODULE,
}
};
static int __init sha1_s390_init(void)
{
if (!cpacf_query_func(CPACF_KIMD, CPACF_KIMD_SHA_1))
return -ENODEV;
return crypto_register_shash(&alg);
}
static void __exit sha1_s390_fini(void)
{
crypto_unregister_shash(&alg);
}
module_cpu_feature_match(S390_CPU_FEATURE_MSA, sha1_s390_init);
module_exit(sha1_s390_fini);
MODULE_ALIAS_CRYPTO("sha1");
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("SHA1 Secure Hash Algorithm");


@ -1,154 +0,0 @@
// SPDX-License-Identifier: GPL-2.0+
/*
* Cryptographic API.
*
* s390 implementation of the SHA512 and SHA384 Secure Hash Algorithm.
*
* Copyright IBM Corp. 2007
* Author(s): Jan Glauber (jang@de.ibm.com)
*/
#include <asm/cpacf.h>
#include <crypto/internal/hash.h>
#include <crypto/sha2.h>
#include <linux/cpufeature.h>
#include <linux/errno.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include "sha.h"
static int sha512_init(struct shash_desc *desc)
{
struct s390_sha_ctx *ctx = shash_desc_ctx(desc);
ctx->sha512.state[0] = SHA512_H0;
ctx->sha512.state[1] = SHA512_H1;
ctx->sha512.state[2] = SHA512_H2;
ctx->sha512.state[3] = SHA512_H3;
ctx->sha512.state[4] = SHA512_H4;
ctx->sha512.state[5] = SHA512_H5;
ctx->sha512.state[6] = SHA512_H6;
ctx->sha512.state[7] = SHA512_H7;
ctx->count = 0;
ctx->sha512.count_hi = 0;
ctx->func = CPACF_KIMD_SHA_512;
ctx->first_message_part = 0;
return 0;
}
static int sha512_export(struct shash_desc *desc, void *out)
{
struct s390_sha_ctx *sctx = shash_desc_ctx(desc);
struct sha512_state *octx = out;
octx->count[0] = sctx->count;
octx->count[1] = sctx->sha512.count_hi;
memcpy(octx->state, sctx->state, sizeof(octx->state));
return 0;
}
static int sha512_import(struct shash_desc *desc, const void *in)
{
struct s390_sha_ctx *sctx = shash_desc_ctx(desc);
const struct sha512_state *ictx = in;
sctx->count = ictx->count[0];
sctx->sha512.count_hi = ictx->count[1];
memcpy(sctx->state, ictx->state, sizeof(ictx->state));
sctx->func = CPACF_KIMD_SHA_512;
sctx->first_message_part = 0;
return 0;
}
static struct shash_alg sha512_alg = {
.digestsize = SHA512_DIGEST_SIZE,
.init = sha512_init,
.update = s390_sha_update_blocks,
.finup = s390_sha_finup,
.export = sha512_export,
.import = sha512_import,
.descsize = sizeof(struct s390_sha_ctx),
.statesize = SHA512_STATE_SIZE,
.base = {
.cra_name = "sha512",
.cra_driver_name= "sha512-s390",
.cra_priority = 300,
.cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY |
CRYPTO_AHASH_ALG_FINUP_MAX,
.cra_blocksize = SHA512_BLOCK_SIZE,
.cra_module = THIS_MODULE,
}
};
MODULE_ALIAS_CRYPTO("sha512");
static int sha384_init(struct shash_desc *desc)
{
struct s390_sha_ctx *ctx = shash_desc_ctx(desc);
ctx->sha512.state[0] = SHA384_H0;
ctx->sha512.state[1] = SHA384_H1;
ctx->sha512.state[2] = SHA384_H2;
ctx->sha512.state[3] = SHA384_H3;
ctx->sha512.state[4] = SHA384_H4;
ctx->sha512.state[5] = SHA384_H5;
ctx->sha512.state[6] = SHA384_H6;
ctx->sha512.state[7] = SHA384_H7;
ctx->count = 0;
ctx->sha512.count_hi = 0;
ctx->func = CPACF_KIMD_SHA_512;
ctx->first_message_part = 0;
return 0;
}
static struct shash_alg sha384_alg = {
.digestsize = SHA384_DIGEST_SIZE,
.init = sha384_init,
.update = s390_sha_update_blocks,
.finup = s390_sha_finup,
.export = sha512_export,
.import = sha512_import,
.descsize = sizeof(struct s390_sha_ctx),
.statesize = SHA512_STATE_SIZE,
.base = {
.cra_name = "sha384",
.cra_driver_name= "sha384-s390",
.cra_priority = 300,
.cra_blocksize = SHA384_BLOCK_SIZE,
.cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY |
CRYPTO_AHASH_ALG_FINUP_MAX,
.cra_ctxsize = sizeof(struct s390_sha_ctx),
.cra_module = THIS_MODULE,
}
};
MODULE_ALIAS_CRYPTO("sha384");
static int __init init(void)
{
int ret;
if (!cpacf_query_func(CPACF_KIMD, CPACF_KIMD_SHA_512))
return -ENODEV;
if ((ret = crypto_register_shash(&sha512_alg)) < 0)
goto out;
if ((ret = crypto_register_shash(&sha384_alg)) < 0)
crypto_unregister_shash(&sha512_alg);
out:
return ret;
}
static void __exit fini(void)
{
crypto_unregister_shash(&sha512_alg);
crypto_unregister_shash(&sha384_alg);
}
module_cpu_feature_match(S390_CPU_FEATURE_MSA, init);
module_exit(fini);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("SHA512 and SHA-384 Secure Hash Algorithm");


@ -3,7 +3,6 @@
# Makefile for s390-specific library files.
#
obj-y += crypto/
lib-y += delay.o string.o uaccess.o find.o spinlock.o tishift.o
lib-y += csum-partial.o
obj-y += mem.o xor.o


@ -1,47 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* SHA-256 optimized using the CP Assist for Cryptographic Functions (CPACF)
*
* Copyright 2025 Google LLC
*/
#include <asm/cpacf.h>
#include <crypto/internal/sha2.h>
#include <linux/cpufeature.h>
#include <linux/kernel.h>
#include <linux/module.h>
static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_cpacf_sha256);
void sha256_blocks_arch(u32 state[SHA256_STATE_WORDS],
const u8 *data, size_t nblocks)
{
if (static_branch_likely(&have_cpacf_sha256))
cpacf_kimd(CPACF_KIMD_SHA_256, state, data,
nblocks * SHA256_BLOCK_SIZE);
else
sha256_blocks_generic(state, data, nblocks);
}
EXPORT_SYMBOL_GPL(sha256_blocks_arch);
bool sha256_is_arch_optimized(void)
{
return static_key_enabled(&have_cpacf_sha256);
}
EXPORT_SYMBOL_GPL(sha256_is_arch_optimized);
static int __init sha256_s390_mod_init(void)
{
if (cpu_have_feature(S390_CPU_FEATURE_MSA) &&
cpacf_query_func(CPACF_KIMD, CPACF_KIMD_SHA_256))
static_branch_enable(&have_cpacf_sha256);
return 0;
}
subsys_initcall(sha256_s390_mod_init);
static void __exit sha256_s390_mod_exit(void)
{
}
module_exit(sha256_s390_mod_exit);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("SHA-256 using the CP Assist for Cryptographic Functions (CPACF)");


@ -16,7 +16,7 @@ int verify_sha256_digest(void)
{
struct kexec_sha_region *ptr, *end;
u8 digest[SHA256_DIGEST_SIZE];
struct sha256_state sctx;
struct sha256_ctx sctx;
sha256_init(&sctx);
end = purgatory_sha_regions + ARRAY_SIZE(purgatory_sha_regions);


@ -26,26 +26,6 @@ config CRYPTO_MD5_SPARC64
Architecture: sparc64 using crypto instructions, when available
config CRYPTO_SHA1_SPARC64
tristate "Hash functions: SHA-1"
depends on SPARC64
select CRYPTO_SHA1
select CRYPTO_HASH
help
SHA-1 secure hash algorithm (FIPS 180)
Architecture: sparc64
config CRYPTO_SHA512_SPARC64
tristate "Hash functions: SHA-384 and SHA-512"
depends on SPARC64
select CRYPTO_SHA512
select CRYPTO_HASH
help
SHA-384 and SHA-512 secure hash algorithms (FIPS 180)
Architecture: sparc64 using crypto instructions, when available
config CRYPTO_AES_SPARC64
tristate "Ciphers: AES, modes: ECB, CBC, CTR"
depends on SPARC64


@ -3,16 +3,12 @@
# Arch-specific CryptoAPI modules.
#
obj-$(CONFIG_CRYPTO_SHA1_SPARC64) += sha1-sparc64.o
obj-$(CONFIG_CRYPTO_SHA512_SPARC64) += sha512-sparc64.o
obj-$(CONFIG_CRYPTO_MD5_SPARC64) += md5-sparc64.o
obj-$(CONFIG_CRYPTO_AES_SPARC64) += aes-sparc64.o
obj-$(CONFIG_CRYPTO_DES_SPARC64) += des-sparc64.o
obj-$(CONFIG_CRYPTO_CAMELLIA_SPARC64) += camellia-sparc64.o
sha1-sparc64-y := sha1_asm.o sha1_glue.o
sha512-sparc64-y := sha512_asm.o sha512_glue.o
md5-sparc64-y := md5_asm.o md5_glue.o
aes-sparc64-y := aes_asm.o aes_glue.o


@ -1,94 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-only
/* Glue code for SHA1 hashing optimized for sparc64 crypto opcodes.
*
* This is based largely upon arch/x86/crypto/sha1_ssse3_glue.c
*
* Copyright (c) Alan Smithee.
* Copyright (c) Andrew McDonald <andrew@mcdonald.org.uk>
* Copyright (c) Jean-Francois Dive <jef@linuxbe.org>
* Copyright (c) Mathias Krause <minipli@googlemail.com>
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <asm/elf.h>
#include <asm/opcodes.h>
#include <asm/pstate.h>
#include <crypto/internal/hash.h>
#include <crypto/sha1.h>
#include <crypto/sha1_base.h>
#include <linux/kernel.h>
#include <linux/module.h>
asmlinkage void sha1_sparc64_transform(struct sha1_state *digest,
const u8 *data, int rounds);
static int sha1_sparc64_update(struct shash_desc *desc, const u8 *data,
unsigned int len)
{
return sha1_base_do_update_blocks(desc, data, len,
sha1_sparc64_transform);
}
/* Add padding and return the message digest. */
static int sha1_sparc64_finup(struct shash_desc *desc, const u8 *src,
unsigned int len, u8 *out)
{
sha1_base_do_finup(desc, src, len, sha1_sparc64_transform);
return sha1_base_finish(desc, out);
}
static struct shash_alg alg = {
.digestsize = SHA1_DIGEST_SIZE,
.init = sha1_base_init,
.update = sha1_sparc64_update,
.finup = sha1_sparc64_finup,
.descsize = SHA1_STATE_SIZE,
.base = {
.cra_name = "sha1",
.cra_driver_name= "sha1-sparc64",
.cra_priority = SPARC_CR_OPCODE_PRIORITY,
.cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY,
.cra_blocksize = SHA1_BLOCK_SIZE,
.cra_module = THIS_MODULE,
}
};
static bool __init sparc64_has_sha1_opcode(void)
{
unsigned long cfr;
if (!(sparc64_elf_hwcap & HWCAP_SPARC_CRYPTO))
return false;
__asm__ __volatile__("rd %%asr26, %0" : "=r" (cfr));
if (!(cfr & CFR_SHA1))
return false;
return true;
}
static int __init sha1_sparc64_mod_init(void)
{
if (sparc64_has_sha1_opcode()) {
pr_info("Using sparc64 sha1 opcode optimized SHA-1 implementation\n");
return crypto_register_shash(&alg);
}
pr_info("sparc64 sha1 opcode not available.\n");
return -ENODEV;
}
static void __exit sha1_sparc64_mod_fini(void)
{
crypto_unregister_shash(&alg);
}
module_init(sha1_sparc64_mod_init);
module_exit(sha1_sparc64_mod_fini);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("SHA1 Secure Hash Algorithm, sparc64 sha1 opcode accelerated");
MODULE_ALIAS_CRYPTO("sha1");
#include "crop_devid.c"


@ -1,122 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-only
/* Glue code for SHA512 hashing optimized for sparc64 crypto opcodes.
*
* This is based largely upon crypto/sha512_generic.c
*
* Copyright (c) Jean-Luc Cooke <jlcooke@certainkey.com>
* Copyright (c) Andrew McDonald <andrew@mcdonald.org.uk>
* Copyright (c) 2003 Kyle McMartin <kyle@debian.org>
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <asm/elf.h>
#include <asm/opcodes.h>
#include <asm/pstate.h>
#include <crypto/internal/hash.h>
#include <crypto/sha2.h>
#include <crypto/sha512_base.h>
#include <linux/kernel.h>
#include <linux/module.h>
asmlinkage void sha512_sparc64_transform(u64 *digest, const char *data,
unsigned int rounds);
static void sha512_block(struct sha512_state *sctx, const u8 *src, int blocks)
{
sha512_sparc64_transform(sctx->state, src, blocks);
}
static int sha512_sparc64_update(struct shash_desc *desc, const u8 *data,
unsigned int len)
{
return sha512_base_do_update_blocks(desc, data, len, sha512_block);
}
static int sha512_sparc64_finup(struct shash_desc *desc, const u8 *src,
unsigned int len, u8 *out)
{
sha512_base_do_finup(desc, src, len, sha512_block);
return sha512_base_finish(desc, out);
}
static struct shash_alg sha512 = {
.digestsize = SHA512_DIGEST_SIZE,
.init = sha512_base_init,
.update = sha512_sparc64_update,
.finup = sha512_sparc64_finup,
.descsize = SHA512_STATE_SIZE,
.base = {
.cra_name = "sha512",
.cra_driver_name= "sha512-sparc64",
.cra_priority = SPARC_CR_OPCODE_PRIORITY,
.cra_blocksize = SHA512_BLOCK_SIZE,
.cra_module = THIS_MODULE,
}
};
static struct shash_alg sha384 = {
.digestsize = SHA384_DIGEST_SIZE,
.init = sha384_base_init,
.update = sha512_sparc64_update,
.finup = sha512_sparc64_finup,
.descsize = SHA512_STATE_SIZE,
.base = {
.cra_name = "sha384",
.cra_driver_name= "sha384-sparc64",
.cra_priority = SPARC_CR_OPCODE_PRIORITY,
.cra_blocksize = SHA384_BLOCK_SIZE,
.cra_module = THIS_MODULE,
}
};
static bool __init sparc64_has_sha512_opcode(void)
{
unsigned long cfr;
if (!(sparc64_elf_hwcap & HWCAP_SPARC_CRYPTO))
return false;
__asm__ __volatile__("rd %%asr26, %0" : "=r" (cfr));
if (!(cfr & CFR_SHA512))
return false;
return true;
}
static int __init sha512_sparc64_mod_init(void)
{
if (sparc64_has_sha512_opcode()) {
int ret = crypto_register_shash(&sha384);
if (ret < 0)
return ret;
ret = crypto_register_shash(&sha512);
if (ret < 0) {
crypto_unregister_shash(&sha384);
return ret;
}
pr_info("Using sparc64 sha512 opcode optimized SHA-512/SHA-384 implementation\n");
return 0;
}
pr_info("sparc64 sha512 opcode not available.\n");
return -ENODEV;
}
static void __exit sha512_sparc64_mod_fini(void)
{
crypto_unregister_shash(&sha384);
crypto_unregister_shash(&sha512);
}
module_init(sha512_sparc64_mod_init);
module_exit(sha512_sparc64_mod_fini);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("SHA-384 and SHA-512 Secure Hash Algorithm, sparc64 sha512 opcode accelerated");
MODULE_ALIAS_CRYPTO("sha384");
MODULE_ALIAS_CRYPTO("sha512");
#include "crop_devid.c"


@ -4,7 +4,6 @@
asflags-y := -ansi -DST_DIV0=0x02
obj-y += crypto/
lib-$(CONFIG_SPARC32) += ashrdi3.o
lib-$(CONFIG_SPARC32) += memcpy.o memset.o
lib-y += strlen.o


@ -1,8 +0,0 @@
# SPDX-License-Identifier: GPL-2.0-only
config CRYPTO_SHA256_SPARC64
tristate
depends on SPARC64
default CRYPTO_LIB_SHA256
select CRYPTO_ARCH_HAVE_LIB_SHA256
select CRYPTO_LIB_SHA256_GENERIC


@ -1,4 +0,0 @@
# SPDX-License-Identifier: GPL-2.0-only
obj-$(CONFIG_CRYPTO_SHA256_SPARC64) += sha256-sparc64.o
sha256-sparc64-y := sha256.o sha256_asm.o


@ -376,33 +376,6 @@ config CRYPTO_POLYVAL_CLMUL_NI
Architecture: x86_64 using:
- CLMUL-NI (carry-less multiplication new instructions)
config CRYPTO_SHA1_SSSE3
tristate "Hash functions: SHA-1 (SSSE3/AVX/AVX2/SHA-NI)"
depends on 64BIT
select CRYPTO_SHA1
select CRYPTO_HASH
help
SHA-1 secure hash algorithm (FIPS 180)
Architecture: x86_64 using:
- SSSE3 (Supplemental SSE3)
- AVX (Advanced Vector Extensions)
- AVX2 (Advanced Vector Extensions 2)
- SHA-NI (SHA Extensions New Instructions)
config CRYPTO_SHA512_SSSE3
tristate "Hash functions: SHA-384 and SHA-512 (SSSE3/AVX/AVX2)"
depends on 64BIT
select CRYPTO_SHA512
select CRYPTO_HASH
help
SHA-384 and SHA-512 secure hash algorithms (FIPS 180)
Architecture: x86_64 using:
- SSSE3 (Supplemental SSE3)
- AVX (Advanced Vector Extensions)
- AVX2 (Advanced Vector Extensions 2)
config CRYPTO_SM3_AVX_X86_64
tristate "Hash functions: SM3 (AVX)"
depends on 64BIT


@ -51,12 +51,6 @@ ifeq ($(CONFIG_AS_VAES)$(CONFIG_AS_VPCLMULQDQ),yy)
aesni-intel-$(CONFIG_64BIT) += aes-gcm-avx10-x86_64.o
endif
obj-$(CONFIG_CRYPTO_SHA1_SSSE3) += sha1-ssse3.o
sha1-ssse3-y := sha1_avx2_x86_64_asm.o sha1_ssse3_asm.o sha1_ni_asm.o sha1_ssse3_glue.o
obj-$(CONFIG_CRYPTO_SHA512_SSSE3) += sha512-ssse3.o
sha512-ssse3-y := sha512-ssse3-asm.o sha512-avx-asm.o sha512-avx2-asm.o sha512_ssse3_glue.o
obj-$(CONFIG_CRYPTO_GHASH_CLMUL_NI_INTEL) += ghash-clmulni-intel.o
ghash-clmulni-intel-y := ghash-clmulni-intel_asm.o ghash-clmulni-intel_glue.o


@ -1,304 +0,0 @@
/*
* Intel SHA Extensions optimized implementation of a SHA-1 update function
*
* This file is provided under a dual BSD/GPLv2 license. When using or
* redistributing this file, you may do so under either license.
*
* GPL LICENSE SUMMARY
*
* Copyright(c) 2015 Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of version 2 of the GNU General Public License as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*
* Contact Information:
* Sean Gulley <sean.m.gulley@intel.com>
* Tim Chen <tim.c.chen@linux.intel.com>
*
* BSD LICENSE
*
* Copyright(c) 2015 Intel Corporation.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
*
* * Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* * Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in
* the documentation and/or other materials provided with the
* distribution.
* * Neither the name of Intel Corporation nor the names of its
* contributors may be used to endorse or promote products derived
* from this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
* A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
* OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
* SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
* LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*
*/
#include <linux/linkage.h>
#include <linux/cfi_types.h>
#define DIGEST_PTR %rdi /* 1st arg */
#define DATA_PTR %rsi /* 2nd arg */
#define NUM_BLKS %rdx /* 3rd arg */
/* gcc conversion */
#define FRAME_SIZE 32 /* space for 2x16 bytes */
#define ABCD %xmm0
#define E0 %xmm1 /* Need two E's b/c they ping pong */
#define E1 %xmm2
#define MSG0 %xmm3
#define MSG1 %xmm4
#define MSG2 %xmm5
#define MSG3 %xmm6
#define SHUF_MASK %xmm7
/*
* Intel SHA Extensions optimized implementation of a SHA-1 update function
*
* The function takes a pointer to the current hash values, a pointer to the
* input data, and a number of 64 byte blocks to process. Once all blocks have
* been processed, the digest pointer is updated with the resulting hash value.
* The function only processes complete blocks, there is no functionality to
* store partial blocks. All message padding and hash value initialization must
* be done outside the update function.
*
* The indented lines in the loop are instructions related to rounds processing.
* The non-indented lines are instructions related to the message schedule.
*
* void sha1_ni_transform(uint32_t *digest, const void *data,
uint32_t numBlocks)
* digest : pointer to digest
* data: pointer to input data
* numBlocks: Number of blocks to process
*/
.text
SYM_TYPED_FUNC_START(sha1_ni_transform)
push %rbp
mov %rsp, %rbp
sub $FRAME_SIZE, %rsp
and $~0xF, %rsp
shl $6, NUM_BLKS /* convert to bytes */
jz .Ldone_hash
add DATA_PTR, NUM_BLKS /* pointer to end of data */
/* load initial hash values */
pinsrd $3, 1*16(DIGEST_PTR), E0
movdqu 0*16(DIGEST_PTR), ABCD
pand UPPER_WORD_MASK(%rip), E0
pshufd $0x1B, ABCD, ABCD
movdqa PSHUFFLE_BYTE_FLIP_MASK(%rip), SHUF_MASK
.Lloop0:
/* Save hash values for addition after rounds */
movdqa E0, (0*16)(%rsp)
movdqa ABCD, (1*16)(%rsp)
/* Rounds 0-3 */
movdqu 0*16(DATA_PTR), MSG0
pshufb SHUF_MASK, MSG0
paddd MSG0, E0
movdqa ABCD, E1
sha1rnds4 $0, E0, ABCD
/* Rounds 4-7 */
movdqu 1*16(DATA_PTR), MSG1
pshufb SHUF_MASK, MSG1
sha1nexte MSG1, E1
movdqa ABCD, E0
sha1rnds4 $0, E1, ABCD
sha1msg1 MSG1, MSG0
/* Rounds 8-11 */
movdqu 2*16(DATA_PTR), MSG2
pshufb SHUF_MASK, MSG2
sha1nexte MSG2, E0
movdqa ABCD, E1
sha1rnds4 $0, E0, ABCD
sha1msg1 MSG2, MSG1
pxor MSG2, MSG0
/* Rounds 12-15 */
movdqu 3*16(DATA_PTR), MSG3
pshufb SHUF_MASK, MSG3
sha1nexte MSG3, E1
movdqa ABCD, E0
sha1msg2 MSG3, MSG0
sha1rnds4 $0, E1, ABCD
sha1msg1 MSG3, MSG2
pxor MSG3, MSG1
/* Rounds 16-19 */
sha1nexte MSG0, E0
movdqa ABCD, E1
sha1msg2 MSG0, MSG1
sha1rnds4 $0, E0, ABCD
sha1msg1 MSG0, MSG3
pxor MSG0, MSG2
/* Rounds 20-23 */
sha1nexte MSG1, E1
movdqa ABCD, E0
sha1msg2 MSG1, MSG2
sha1rnds4 $1, E1, ABCD
sha1msg1 MSG1, MSG0
pxor MSG1, MSG3
/* Rounds 24-27 */
sha1nexte MSG2, E0
movdqa ABCD, E1
sha1msg2 MSG2, MSG3
sha1rnds4 $1, E0, ABCD
sha1msg1 MSG2, MSG1
pxor MSG2, MSG0
/* Rounds 28-31 */
sha1nexte MSG3, E1
movdqa ABCD, E0
sha1msg2 MSG3, MSG0
sha1rnds4 $1, E1, ABCD
sha1msg1 MSG3, MSG2
pxor MSG3, MSG1
/* Rounds 32-35 */
sha1nexte MSG0, E0
movdqa ABCD, E1
sha1msg2 MSG0, MSG1
sha1rnds4 $1, E0, ABCD
sha1msg1 MSG0, MSG3
pxor MSG0, MSG2
/* Rounds 36-39 */
sha1nexte MSG1, E1
movdqa ABCD, E0
sha1msg2 MSG1, MSG2
sha1rnds4 $1, E1, ABCD
sha1msg1 MSG1, MSG0
pxor MSG1, MSG3
/* Rounds 40-43 */
sha1nexte MSG2, E0
movdqa ABCD, E1
sha1msg2 MSG2, MSG3
sha1rnds4 $2, E0, ABCD
sha1msg1 MSG2, MSG1
pxor MSG2, MSG0
/* Rounds 44-47 */
sha1nexte MSG3, E1
movdqa ABCD, E0
sha1msg2 MSG3, MSG0
sha1rnds4 $2, E1, ABCD
sha1msg1 MSG3, MSG2
pxor MSG3, MSG1
/* Rounds 48-51 */
sha1nexte MSG0, E0
movdqa ABCD, E1
sha1msg2 MSG0, MSG1
sha1rnds4 $2, E0, ABCD
sha1msg1 MSG0, MSG3
pxor MSG0, MSG2
/* Rounds 52-55 */
sha1nexte MSG1, E1
movdqa ABCD, E0
sha1msg2 MSG1, MSG2
sha1rnds4 $2, E1, ABCD
sha1msg1 MSG1, MSG0
pxor MSG1, MSG3
/* Rounds 56-59 */
sha1nexte MSG2, E0
movdqa ABCD, E1
sha1msg2 MSG2, MSG3
sha1rnds4 $2, E0, ABCD
sha1msg1 MSG2, MSG1
pxor MSG2, MSG0
/* Rounds 60-63 */
sha1nexte MSG3, E1
movdqa ABCD, E0
sha1msg2 MSG3, MSG0
sha1rnds4 $3, E1, ABCD
sha1msg1 MSG3, MSG2
pxor MSG3, MSG1
/* Rounds 64-67 */
sha1nexte MSG0, E0
movdqa ABCD, E1
sha1msg2 MSG0, MSG1
sha1rnds4 $3, E0, ABCD
sha1msg1 MSG0, MSG3
pxor MSG0, MSG2
/* Rounds 68-71 */
sha1nexte MSG1, E1
movdqa ABCD, E0
sha1msg2 MSG1, MSG2
sha1rnds4 $3, E1, ABCD
pxor MSG1, MSG3
/* Rounds 72-75 */
sha1nexte MSG2, E0
movdqa ABCD, E1
sha1msg2 MSG2, MSG3
sha1rnds4 $3, E0, ABCD
/* Rounds 76-79 */
sha1nexte MSG3, E1
movdqa ABCD, E0
sha1rnds4 $3, E1, ABCD
/* Add current hash values with previously saved */
sha1nexte (0*16)(%rsp), E0
paddd (1*16)(%rsp), ABCD
/* Increment data pointer and loop if more to process */
add $64, DATA_PTR
cmp NUM_BLKS, DATA_PTR
jne .Lloop0
/* Write hash values back in the correct order */
pshufd $0x1B, ABCD, ABCD
movdqu ABCD, 0*16(DIGEST_PTR)
pextrd $3, E0, 1*16(DIGEST_PTR)
.Ldone_hash:
mov %rbp, %rsp
pop %rbp
RET
SYM_FUNC_END(sha1_ni_transform)
.section .rodata.cst16.PSHUFFLE_BYTE_FLIP_MASK, "aM", @progbits, 16
.align 16
PSHUFFLE_BYTE_FLIP_MASK:
.octa 0x000102030405060708090a0b0c0d0e0f
.section .rodata.cst16.UPPER_WORD_MASK, "aM", @progbits, 16
.align 16
UPPER_WORD_MASK:
.octa 0xFFFFFFFF000000000000000000000000


@ -1,324 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* Cryptographic API.
*
* Glue code for the SHA1 Secure Hash Algorithm assembler implementations
* using SSSE3, AVX, AVX2, and SHA-NI instructions.
*
* This file is based on sha1_generic.c
*
* Copyright (c) Alan Smithee.
* Copyright (c) Andrew McDonald <andrew@mcdonald.org.uk>
* Copyright (c) Jean-Francois Dive <jef@linuxbe.org>
* Copyright (c) Mathias Krause <minipli@googlemail.com>
* Copyright (c) Chandramouli Narayanan <mouli@linux.intel.com>
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <asm/cpu_device_id.h>
#include <asm/simd.h>
#include <crypto/internal/hash.h>
#include <crypto/sha1.h>
#include <crypto/sha1_base.h>
#include <linux/errno.h>
#include <linux/kernel.h>
#include <linux/module.h>
static const struct x86_cpu_id module_cpu_ids[] = {
X86_MATCH_FEATURE(X86_FEATURE_SHA_NI, NULL),
X86_MATCH_FEATURE(X86_FEATURE_AVX2, NULL),
X86_MATCH_FEATURE(X86_FEATURE_AVX, NULL),
X86_MATCH_FEATURE(X86_FEATURE_SSSE3, NULL),
{}
};
MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids);
static inline int sha1_update(struct shash_desc *desc, const u8 *data,
unsigned int len, sha1_block_fn *sha1_xform)
{
int remain;
/*
* Make sure struct sha1_state begins directly with the SHA1
* 160-bit internal state, as this is what the asm functions expect.
*/
BUILD_BUG_ON(offsetof(struct sha1_state, state) != 0);
kernel_fpu_begin();
remain = sha1_base_do_update_blocks(desc, data, len, sha1_xform);
kernel_fpu_end();
return remain;
}
static inline int sha1_finup(struct shash_desc *desc, const u8 *data,
unsigned int len, u8 *out,
sha1_block_fn *sha1_xform)
{
kernel_fpu_begin();
sha1_base_do_finup(desc, data, len, sha1_xform);
kernel_fpu_end();
return sha1_base_finish(desc, out);
}
asmlinkage void sha1_transform_ssse3(struct sha1_state *state,
const u8 *data, int blocks);
static int sha1_ssse3_update(struct shash_desc *desc, const u8 *data,
unsigned int len)
{
return sha1_update(desc, data, len, sha1_transform_ssse3);
}
static int sha1_ssse3_finup(struct shash_desc *desc, const u8 *data,
unsigned int len, u8 *out)
{
return sha1_finup(desc, data, len, out, sha1_transform_ssse3);
}
static struct shash_alg sha1_ssse3_alg = {
.digestsize = SHA1_DIGEST_SIZE,
.init = sha1_base_init,
.update = sha1_ssse3_update,
.finup = sha1_ssse3_finup,
.descsize = SHA1_STATE_SIZE,
.base = {
.cra_name = "sha1",
.cra_driver_name = "sha1-ssse3",
.cra_priority = 150,
.cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY,
.cra_blocksize = SHA1_BLOCK_SIZE,
.cra_module = THIS_MODULE,
}
};
static int register_sha1_ssse3(void)
{
if (boot_cpu_has(X86_FEATURE_SSSE3))
return crypto_register_shash(&sha1_ssse3_alg);
return 0;
}
static void unregister_sha1_ssse3(void)
{
if (boot_cpu_has(X86_FEATURE_SSSE3))
crypto_unregister_shash(&sha1_ssse3_alg);
}
asmlinkage void sha1_transform_avx(struct sha1_state *state,
const u8 *data, int blocks);
static int sha1_avx_update(struct shash_desc *desc, const u8 *data,
unsigned int len)
{
return sha1_update(desc, data, len, sha1_transform_avx);
}
static int sha1_avx_finup(struct shash_desc *desc, const u8 *data,
unsigned int len, u8 *out)
{
return sha1_finup(desc, data, len, out, sha1_transform_avx);
}
static struct shash_alg sha1_avx_alg = {
.digestsize = SHA1_DIGEST_SIZE,
.init = sha1_base_init,
.update = sha1_avx_update,
.finup = sha1_avx_finup,
.descsize = SHA1_STATE_SIZE,
.base = {
.cra_name = "sha1",
.cra_driver_name = "sha1-avx",
.cra_priority = 160,
.cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY,
.cra_blocksize = SHA1_BLOCK_SIZE,
.cra_module = THIS_MODULE,
}
};
static bool avx_usable(void)
{
if (!cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM, NULL)) {
if (boot_cpu_has(X86_FEATURE_AVX))
pr_info("AVX detected but unusable.\n");
return false;
}
return true;
}
static int register_sha1_avx(void)
{
if (avx_usable())
return crypto_register_shash(&sha1_avx_alg);
return 0;
}
static void unregister_sha1_avx(void)
{
if (avx_usable())
crypto_unregister_shash(&sha1_avx_alg);
}
#define SHA1_AVX2_BLOCK_OPTSIZE 4 /* optimal 4*64 bytes of SHA1 blocks */
asmlinkage void sha1_transform_avx2(struct sha1_state *state,
const u8 *data, int blocks);
static bool avx2_usable(void)
{
if (avx_usable() && boot_cpu_has(X86_FEATURE_AVX2)
&& boot_cpu_has(X86_FEATURE_BMI1)
&& boot_cpu_has(X86_FEATURE_BMI2))
return true;
return false;
}
static inline void sha1_apply_transform_avx2(struct sha1_state *state,
const u8 *data, int blocks)
{
/* Select the optimal transform based on data block size */
if (blocks >= SHA1_AVX2_BLOCK_OPTSIZE)
sha1_transform_avx2(state, data, blocks);
else
sha1_transform_avx(state, data, blocks);
}
static int sha1_avx2_update(struct shash_desc *desc, const u8 *data,
unsigned int len)
{
return sha1_update(desc, data, len, sha1_apply_transform_avx2);
}
static int sha1_avx2_finup(struct shash_desc *desc, const u8 *data,
unsigned int len, u8 *out)
{
return sha1_finup(desc, data, len, out, sha1_apply_transform_avx2);
}
static struct shash_alg sha1_avx2_alg = {
.digestsize = SHA1_DIGEST_SIZE,
.init = sha1_base_init,
.update = sha1_avx2_update,
.finup = sha1_avx2_finup,
.descsize = SHA1_STATE_SIZE,
.base = {
.cra_name = "sha1",
.cra_driver_name = "sha1-avx2",
.cra_priority = 170,
.cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY,
.cra_blocksize = SHA1_BLOCK_SIZE,
.cra_module = THIS_MODULE,
}
};
static int register_sha1_avx2(void)
{
if (avx2_usable())
return crypto_register_shash(&sha1_avx2_alg);
return 0;
}
static void unregister_sha1_avx2(void)
{
if (avx2_usable())
crypto_unregister_shash(&sha1_avx2_alg);
}
asmlinkage void sha1_ni_transform(struct sha1_state *digest, const u8 *data,
int rounds);
static int sha1_ni_update(struct shash_desc *desc, const u8 *data,
unsigned int len)
{
return sha1_update(desc, data, len, sha1_ni_transform);
}
static int sha1_ni_finup(struct shash_desc *desc, const u8 *data,
unsigned int len, u8 *out)
{
return sha1_finup(desc, data, len, out, sha1_ni_transform);
}
static struct shash_alg sha1_ni_alg = {
.digestsize = SHA1_DIGEST_SIZE,
.init = sha1_base_init,
.update = sha1_ni_update,
.finup = sha1_ni_finup,
.descsize = SHA1_STATE_SIZE,
.base = {
.cra_name = "sha1",
.cra_driver_name = "sha1-ni",
.cra_priority = 250,
.cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY,
.cra_blocksize = SHA1_BLOCK_SIZE,
.cra_module = THIS_MODULE,
}
};
static int register_sha1_ni(void)
{
if (boot_cpu_has(X86_FEATURE_SHA_NI))
return crypto_register_shash(&sha1_ni_alg);
return 0;
}
static void unregister_sha1_ni(void)
{
if (boot_cpu_has(X86_FEATURE_SHA_NI))
crypto_unregister_shash(&sha1_ni_alg);
}
static int __init sha1_ssse3_mod_init(void)
{
if (!x86_match_cpu(module_cpu_ids))
return -ENODEV;
if (register_sha1_ssse3())
goto fail;
if (register_sha1_avx()) {
unregister_sha1_ssse3();
goto fail;
}
if (register_sha1_avx2()) {
unregister_sha1_avx();
unregister_sha1_ssse3();
goto fail;
}
if (register_sha1_ni()) {
unregister_sha1_avx2();
unregister_sha1_avx();
unregister_sha1_ssse3();
goto fail;
}
return 0;
fail:
return -ENODEV;
}
static void __exit sha1_ssse3_mod_fini(void)
{
unregister_sha1_ni();
unregister_sha1_avx2();
unregister_sha1_avx();
unregister_sha1_ssse3();
}
module_init(sha1_ssse3_mod_init);
module_exit(sha1_ssse3_mod_fini);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("SHA1 Secure Hash Algorithm, Supplemental SSE3 accelerated");
MODULE_ALIAS_CRYPTO("sha1");
MODULE_ALIAS_CRYPTO("sha1-ssse3");
MODULE_ALIAS_CRYPTO("sha1-avx");
MODULE_ALIAS_CRYPTO("sha1-avx2");
MODULE_ALIAS_CRYPTO("sha1-ni");


@ -1,322 +0,0 @@
/*
* Cryptographic API.
*
* Glue code for the SHA512 Secure Hash Algorithm assembler
* implementation using supplemental SSE3 / AVX / AVX2 instructions.
*
* This file is based on sha512_generic.c
*
* Copyright (C) 2013 Intel Corporation
* Author: Tim Chen <tim.c.chen@linux.intel.com>
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the Free
* Software Foundation; either version 2 of the License, or (at your option)
* any later version.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <asm/cpu_device_id.h>
#include <asm/simd.h>
#include <crypto/internal/hash.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <crypto/sha2.h>
#include <crypto/sha512_base.h>
asmlinkage void sha512_transform_ssse3(struct sha512_state *state,
const u8 *data, int blocks);
static int sha512_update(struct shash_desc *desc, const u8 *data,
unsigned int len, sha512_block_fn *sha512_xform)
{
int remain;
/*
* Make sure struct sha512_state begins directly with the SHA512
* 512-bit internal state, as this is what the asm functions expect.
*/
BUILD_BUG_ON(offsetof(struct sha512_state, state) != 0);
kernel_fpu_begin();
remain = sha512_base_do_update_blocks(desc, data, len, sha512_xform);
kernel_fpu_end();
return remain;
}
static int sha512_finup(struct shash_desc *desc, const u8 *data,
unsigned int len, u8 *out, sha512_block_fn *sha512_xform)
{
kernel_fpu_begin();
sha512_base_do_finup(desc, data, len, sha512_xform);
kernel_fpu_end();
return sha512_base_finish(desc, out);
}
static int sha512_ssse3_update(struct shash_desc *desc, const u8 *data,
unsigned int len)
{
return sha512_update(desc, data, len, sha512_transform_ssse3);
}
static int sha512_ssse3_finup(struct shash_desc *desc, const u8 *data,
unsigned int len, u8 *out)
{
return sha512_finup(desc, data, len, out, sha512_transform_ssse3);
}
static struct shash_alg sha512_ssse3_algs[] = { {
.digestsize = SHA512_DIGEST_SIZE,
.init = sha512_base_init,
.update = sha512_ssse3_update,
.finup = sha512_ssse3_finup,
.descsize = SHA512_STATE_SIZE,
.base = {
.cra_name = "sha512",
.cra_driver_name = "sha512-ssse3",
.cra_priority = 150,
.cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY |
CRYPTO_AHASH_ALG_FINUP_MAX,
.cra_blocksize = SHA512_BLOCK_SIZE,
.cra_module = THIS_MODULE,
}
}, {
.digestsize = SHA384_DIGEST_SIZE,
.init = sha384_base_init,
.update = sha512_ssse3_update,
.finup = sha512_ssse3_finup,
.descsize = SHA512_STATE_SIZE,
.base = {
.cra_name = "sha384",
.cra_driver_name = "sha384-ssse3",
.cra_priority = 150,
.cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY |
CRYPTO_AHASH_ALG_FINUP_MAX,
.cra_blocksize = SHA384_BLOCK_SIZE,
.cra_module = THIS_MODULE,
}
} };
static int register_sha512_ssse3(void)
{
if (boot_cpu_has(X86_FEATURE_SSSE3))
return crypto_register_shashes(sha512_ssse3_algs,
ARRAY_SIZE(sha512_ssse3_algs));
return 0;
}
static void unregister_sha512_ssse3(void)
{
if (boot_cpu_has(X86_FEATURE_SSSE3))
crypto_unregister_shashes(sha512_ssse3_algs,
ARRAY_SIZE(sha512_ssse3_algs));
}
asmlinkage void sha512_transform_avx(struct sha512_state *state,
const u8 *data, int blocks);
static bool avx_usable(void)
{
if (!cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM, NULL)) {
if (boot_cpu_has(X86_FEATURE_AVX))
pr_info("AVX detected but unusable.\n");
return false;
}
return true;
}
static int sha512_avx_update(struct shash_desc *desc, const u8 *data,
unsigned int len)
{
return sha512_update(desc, data, len, sha512_transform_avx);
}
static int sha512_avx_finup(struct shash_desc *desc, const u8 *data,
unsigned int len, u8 *out)
{
return sha512_finup(desc, data, len, out, sha512_transform_avx);
}
static struct shash_alg sha512_avx_algs[] = { {
.digestsize = SHA512_DIGEST_SIZE,
.init = sha512_base_init,
.update = sha512_avx_update,
.finup = sha512_avx_finup,
.descsize = SHA512_STATE_SIZE,
.base = {
.cra_name = "sha512",
.cra_driver_name = "sha512-avx",
.cra_priority = 160,
.cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY |
CRYPTO_AHASH_ALG_FINUP_MAX,
.cra_blocksize = SHA512_BLOCK_SIZE,
.cra_module = THIS_MODULE,
}
}, {
.digestsize = SHA384_DIGEST_SIZE,
.init = sha384_base_init,
.update = sha512_avx_update,
.finup = sha512_avx_finup,
.descsize = SHA512_STATE_SIZE,
.base = {
.cra_name = "sha384",
.cra_driver_name = "sha384-avx",
.cra_priority = 160,
.cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY |
CRYPTO_AHASH_ALG_FINUP_MAX,
.cra_blocksize = SHA384_BLOCK_SIZE,
.cra_module = THIS_MODULE,
}
} };
static int register_sha512_avx(void)
{
if (avx_usable())
return crypto_register_shashes(sha512_avx_algs,
ARRAY_SIZE(sha512_avx_algs));
return 0;
}
static void unregister_sha512_avx(void)
{
if (avx_usable())
crypto_unregister_shashes(sha512_avx_algs,
ARRAY_SIZE(sha512_avx_algs));
}
asmlinkage void sha512_transform_rorx(struct sha512_state *state,
const u8 *data, int blocks);
static int sha512_avx2_update(struct shash_desc *desc, const u8 *data,
unsigned int len)
{
return sha512_update(desc, data, len, sha512_transform_rorx);
}
static int sha512_avx2_finup(struct shash_desc *desc, const u8 *data,
unsigned int len, u8 *out)
{
return sha512_finup(desc, data, len, out, sha512_transform_rorx);
}
static struct shash_alg sha512_avx2_algs[] = { {
.digestsize = SHA512_DIGEST_SIZE,
.init = sha512_base_init,
.update = sha512_avx2_update,
.finup = sha512_avx2_finup,
.descsize = SHA512_STATE_SIZE,
.base = {
.cra_name = "sha512",
.cra_driver_name = "sha512-avx2",
.cra_priority = 170,
.cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY |
CRYPTO_AHASH_ALG_FINUP_MAX,
.cra_blocksize = SHA512_BLOCK_SIZE,
.cra_module = THIS_MODULE,
}
}, {
.digestsize = SHA384_DIGEST_SIZE,
.init = sha384_base_init,
.update = sha512_avx2_update,
.finup = sha512_avx2_finup,
.descsize = SHA512_STATE_SIZE,
.base = {
.cra_name = "sha384",
.cra_driver_name = "sha384-avx2",
.cra_priority = 170,
.cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY |
CRYPTO_AHASH_ALG_FINUP_MAX,
.cra_blocksize = SHA384_BLOCK_SIZE,
.cra_module = THIS_MODULE,
}
} };
static bool avx2_usable(void)
{
if (avx_usable() && boot_cpu_has(X86_FEATURE_AVX2) &&
boot_cpu_has(X86_FEATURE_BMI2))
return true;
return false;
}
static int register_sha512_avx2(void)
{
if (avx2_usable())
return crypto_register_shashes(sha512_avx2_algs,
ARRAY_SIZE(sha512_avx2_algs));
return 0;
}
static const struct x86_cpu_id module_cpu_ids[] = {
X86_MATCH_FEATURE(X86_FEATURE_AVX2, NULL),
X86_MATCH_FEATURE(X86_FEATURE_AVX, NULL),
X86_MATCH_FEATURE(X86_FEATURE_SSSE3, NULL),
{}
};
MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids);
static void unregister_sha512_avx2(void)
{
if (avx2_usable())
crypto_unregister_shashes(sha512_avx2_algs,
ARRAY_SIZE(sha512_avx2_algs));
}
static int __init sha512_ssse3_mod_init(void)
{
if (!x86_match_cpu(module_cpu_ids))
return -ENODEV;
if (register_sha512_ssse3())
goto fail;
if (register_sha512_avx()) {
unregister_sha512_ssse3();
goto fail;
}
if (register_sha512_avx2()) {
unregister_sha512_avx();
unregister_sha512_ssse3();
goto fail;
}
return 0;
fail:
return -ENODEV;
}
static void __exit sha512_ssse3_mod_fini(void)
{
unregister_sha512_avx2();
unregister_sha512_avx();
unregister_sha512_ssse3();
}
module_init(sha512_ssse3_mod_init);
module_exit(sha512_ssse3_mod_fini);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("SHA512 Secure Hash Algorithm, Supplemental SSE3 accelerated");
MODULE_ALIAS_CRYPTO("sha512");
MODULE_ALIAS_CRYPTO("sha512-ssse3");
MODULE_ALIAS_CRYPTO("sha512-avx");
MODULE_ALIAS_CRYPTO("sha512-avx2");
MODULE_ALIAS_CRYPTO("sha384");
MODULE_ALIAS_CRYPTO("sha384-ssse3");
MODULE_ALIAS_CRYPTO("sha384-avx");
MODULE_ALIAS_CRYPTO("sha384-avx2");


@ -1,2 +1,6 @@
# SPDX-License-Identifier: GPL-2.0-only
# This now-removed directory used to contain generated files.
/crypto/
inat-tables.c


@ -3,8 +3,6 @@
# Makefile for x86 specific library files.
#
obj-y += crypto/
# Produces uninteresting flaky coverage.
KCOV_INSTRUMENT_delay.o := n


@ -1,80 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* SHA-256 optimized for x86_64
*
* Copyright 2025 Google LLC
*/
#include <asm/fpu/api.h>
#include <crypto/internal/sha2.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/static_call.h>
asmlinkage void sha256_transform_ssse3(u32 state[SHA256_STATE_WORDS],
const u8 *data, size_t nblocks);
asmlinkage void sha256_transform_avx(u32 state[SHA256_STATE_WORDS],
const u8 *data, size_t nblocks);
asmlinkage void sha256_transform_rorx(u32 state[SHA256_STATE_WORDS],
const u8 *data, size_t nblocks);
asmlinkage void sha256_ni_transform(u32 state[SHA256_STATE_WORDS],
const u8 *data, size_t nblocks);
static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_sha256_x86);
DEFINE_STATIC_CALL(sha256_blocks_x86, sha256_transform_ssse3);
void sha256_blocks_simd(u32 state[SHA256_STATE_WORDS],
const u8 *data, size_t nblocks)
{
if (static_branch_likely(&have_sha256_x86)) {
kernel_fpu_begin();
static_call(sha256_blocks_x86)(state, data, nblocks);
kernel_fpu_end();
} else {
sha256_blocks_generic(state, data, nblocks);
}
}
EXPORT_SYMBOL_GPL(sha256_blocks_simd);
void sha256_blocks_arch(u32 state[SHA256_STATE_WORDS],
const u8 *data, size_t nblocks)
{
sha256_blocks_generic(state, data, nblocks);
}
EXPORT_SYMBOL_GPL(sha256_blocks_arch);
bool sha256_is_arch_optimized(void)
{
return static_key_enabled(&have_sha256_x86);
}
EXPORT_SYMBOL_GPL(sha256_is_arch_optimized);
static int __init sha256_x86_mod_init(void)
{
if (boot_cpu_has(X86_FEATURE_SHA_NI)) {
static_call_update(sha256_blocks_x86, sha256_ni_transform);
} else if (cpu_has_xfeatures(XFEATURE_MASK_SSE |
XFEATURE_MASK_YMM, NULL) &&
boot_cpu_has(X86_FEATURE_AVX)) {
if (boot_cpu_has(X86_FEATURE_AVX2) &&
boot_cpu_has(X86_FEATURE_BMI2))
static_call_update(sha256_blocks_x86,
sha256_transform_rorx);
else
static_call_update(sha256_blocks_x86,
sha256_transform_avx);
} else if (!boot_cpu_has(X86_FEATURE_SSSE3)) {
return 0;
}
static_branch_enable(&have_sha256_x86);
return 0;
}
subsys_initcall(sha256_x86_mod_init);
static void __exit sha256_x86_mod_exit(void)
{
}
module_exit(sha256_x86_mod_exit);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("SHA-256 optimized for x86_64");


@ -25,7 +25,7 @@ static int verify_sha256_digest(void)
{
struct kexec_sha_region *ptr, *end;
u8 digest[SHA256_DIGEST_SIZE];
struct sha256_state sctx;
struct sha256_ctx sctx;
sha256_init(&sctx);
end = purgatory_sha_regions + ARRAY_SIZE(purgatory_sha_regions);


@ -986,15 +986,16 @@ config CRYPTO_SHA1
select CRYPTO_HASH
select CRYPTO_LIB_SHA1
help
SHA-1 secure hash algorithm (FIPS 180, ISO/IEC 10118-3)
SHA-1 secure hash algorithm (FIPS 180, ISO/IEC 10118-3), including
HMAC support.
config CRYPTO_SHA256
tristate "SHA-224 and SHA-256"
select CRYPTO_HASH
select CRYPTO_LIB_SHA256
select CRYPTO_LIB_SHA256_GENERIC
help
SHA-224 and SHA-256 secure hash algorithms (FIPS 180, ISO/IEC 10118-3)
SHA-224 and SHA-256 secure hash algorithms (FIPS 180, ISO/IEC
10118-3), including HMAC support.
This is required for IPsec AH (XFRM_AH) and IPsec ESP (XFRM_ESP).
Used by the btrfs filesystem, Ceph, NFS, and SMB.
@ -1002,8 +1003,10 @@ config CRYPTO_SHA256
config CRYPTO_SHA512
tristate "SHA-384 and SHA-512"
select CRYPTO_HASH
select CRYPTO_LIB_SHA512
help
SHA-384 and SHA-512 secure hash algorithms (FIPS 180, ISO/IEC 10118-3)
SHA-384 and SHA-512 secure hash algorithms (FIPS 180, ISO/IEC
10118-3), including HMAC support.
config CRYPTO_SHA3
tristate "SHA-3"
@ -1420,9 +1423,6 @@ config CRYPTO_USER_API_ENABLE_OBSOLETE
endmenu
config CRYPTO_HASH_INFO
bool
if !KMSAN # avoid false positives from assembly
if ARM
source "arch/arm/crypto/Kconfig"


@ -75,10 +75,9 @@ obj-$(CONFIG_CRYPTO_NULL) += crypto_null.o
obj-$(CONFIG_CRYPTO_MD4) += md4.o
obj-$(CONFIG_CRYPTO_MD5) += md5.o
obj-$(CONFIG_CRYPTO_RMD160) += rmd160.o
obj-$(CONFIG_CRYPTO_SHA1) += sha1_generic.o
obj-$(CONFIG_CRYPTO_SHA1) += sha1.o
obj-$(CONFIG_CRYPTO_SHA256) += sha256.o
CFLAGS_sha256.o += -DARCH=$(ARCH)
obj-$(CONFIG_CRYPTO_SHA512) += sha512_generic.o
obj-$(CONFIG_CRYPTO_SHA512) += sha512.o
obj-$(CONFIG_CRYPTO_SHA3) += sha3_generic.o
obj-$(CONFIG_CRYPTO_SM3_GENERIC) += sm3_generic.o
obj-$(CONFIG_CRYPTO_STREEBOG) += streebog_generic.o
@ -203,7 +202,6 @@ obj-$(CONFIG_CRYPTO_ECRDSA) += ecrdsa_generic.o
obj-$(CONFIG_XOR_BLOCKS) += xor.o
obj-$(CONFIG_ASYNC_CORE) += async_tx/
obj-$(CONFIG_ASYMMETRIC_KEY_TYPE) += asymmetric_keys/
obj-$(CONFIG_CRYPTO_HASH_INFO) += hash_info.o
crypto_simd-y := simd.o
obj-$(CONFIG_CRYPTO_SIMD) += crypto_simd.o

crypto/sha1.c (new file, 201 lines)

@ -0,0 +1,201 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* Crypto API support for SHA-1 and HMAC-SHA1
*
* Copyright (c) Alan Smithee.
* Copyright (c) Andrew McDonald <andrew@mcdonald.org.uk>
* Copyright (c) Jean-Francois Dive <jef@linuxbe.org>
* Copyright 2025 Google LLC
*/
#include <crypto/internal/hash.h>
#include <crypto/sha1.h>
#include <linux/kernel.h>
#include <linux/module.h>
/*
* Export and import functions. crypto_shash wants a particular format that
* matches that used by some legacy drivers. It currently is the same as the
* library SHA context, except the value in bytecount must be block-aligned and
* the remainder must be stored in an extra u8 appended to the struct.
*/
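/*
 * For example: a context that has absorbed 130 bytes is exported with
 * bytecount rounded down to 128 (two full 64-byte blocks) and a
 * trailing u8 of 2, the number of bytes still held in the buffer.
 */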
#define SHA1_SHASH_STATE_SIZE (sizeof(struct sha1_ctx) + 1)
static_assert(sizeof(struct sha1_ctx) == sizeof(struct sha1_state));
static_assert(offsetof(struct sha1_ctx, state) == offsetof(struct sha1_state, state));
static_assert(offsetof(struct sha1_ctx, bytecount) == offsetof(struct sha1_state, count));
static_assert(offsetof(struct sha1_ctx, buf) == offsetof(struct sha1_state, buffer));
static int __crypto_sha1_export(const struct sha1_ctx *ctx0, void *out)
{
struct sha1_ctx ctx = *ctx0;
unsigned int partial;
u8 *p = out;
partial = ctx.bytecount % SHA1_BLOCK_SIZE;
ctx.bytecount -= partial;
memcpy(p, &ctx, sizeof(ctx));
p += sizeof(ctx);
*p = partial;
return 0;
}
static int __crypto_sha1_import(struct sha1_ctx *ctx, const void *in)
{
const u8 *p = in;
memcpy(ctx, p, sizeof(*ctx));
p += sizeof(*ctx);
ctx->bytecount += *p;
return 0;
}
const u8 sha1_zero_message_hash[SHA1_DIGEST_SIZE] = {
0xda, 0x39, 0xa3, 0xee, 0x5e, 0x6b, 0x4b, 0x0d,
0x32, 0x55, 0xbf, 0xef, 0x95, 0x60, 0x18, 0x90,
0xaf, 0xd8, 0x07, 0x09
};
EXPORT_SYMBOL_GPL(sha1_zero_message_hash);
#define SHA1_CTX(desc) ((struct sha1_ctx *)shash_desc_ctx(desc))
static int crypto_sha1_init(struct shash_desc *desc)
{
sha1_init(SHA1_CTX(desc));
return 0;
}
static int crypto_sha1_update(struct shash_desc *desc,
const u8 *data, unsigned int len)
{
sha1_update(SHA1_CTX(desc), data, len);
return 0;
}
static int crypto_sha1_final(struct shash_desc *desc, u8 *out)
{
sha1_final(SHA1_CTX(desc), out);
return 0;
}
static int crypto_sha1_digest(struct shash_desc *desc,
const u8 *data, unsigned int len, u8 *out)
{
sha1(data, len, out);
return 0;
}
static int crypto_sha1_export(struct shash_desc *desc, void *out)
{
return __crypto_sha1_export(SHA1_CTX(desc), out);
}
static int crypto_sha1_import(struct shash_desc *desc, const void *in)
{
return __crypto_sha1_import(SHA1_CTX(desc), in);
}
#define HMAC_SHA1_KEY(tfm) ((struct hmac_sha1_key *)crypto_shash_ctx(tfm))
#define HMAC_SHA1_CTX(desc) ((struct hmac_sha1_ctx *)shash_desc_ctx(desc))
static int crypto_hmac_sha1_setkey(struct crypto_shash *tfm,
const u8 *raw_key, unsigned int keylen)
{
hmac_sha1_preparekey(HMAC_SHA1_KEY(tfm), raw_key, keylen);
return 0;
}
static int crypto_hmac_sha1_init(struct shash_desc *desc)
{
hmac_sha1_init(HMAC_SHA1_CTX(desc), HMAC_SHA1_KEY(desc->tfm));
return 0;
}
static int crypto_hmac_sha1_update(struct shash_desc *desc,
const u8 *data, unsigned int len)
{
hmac_sha1_update(HMAC_SHA1_CTX(desc), data, len);
return 0;
}
static int crypto_hmac_sha1_final(struct shash_desc *desc, u8 *out)
{
hmac_sha1_final(HMAC_SHA1_CTX(desc), out);
return 0;
}
static int crypto_hmac_sha1_digest(struct shash_desc *desc,
const u8 *data, unsigned int len, u8 *out)
{
hmac_sha1(HMAC_SHA1_KEY(desc->tfm), data, len, out);
return 0;
}
static int crypto_hmac_sha1_export(struct shash_desc *desc, void *out)
{
return __crypto_sha1_export(&HMAC_SHA1_CTX(desc)->sha_ctx, out);
}
static int crypto_hmac_sha1_import(struct shash_desc *desc, const void *in)
{
struct hmac_sha1_ctx *ctx = HMAC_SHA1_CTX(desc);
ctx->ostate = HMAC_SHA1_KEY(desc->tfm)->ostate;
return __crypto_sha1_import(&ctx->sha_ctx, in);
}
static struct shash_alg algs[] = {
{
.base.cra_name = "sha1",
.base.cra_driver_name = "sha1-lib",
.base.cra_priority = 300,
.base.cra_blocksize = SHA1_BLOCK_SIZE,
.base.cra_module = THIS_MODULE,
.digestsize = SHA1_DIGEST_SIZE,
.init = crypto_sha1_init,
.update = crypto_sha1_update,
.final = crypto_sha1_final,
.digest = crypto_sha1_digest,
.export = crypto_sha1_export,
.import = crypto_sha1_import,
.descsize = sizeof(struct sha1_ctx),
.statesize = SHA1_SHASH_STATE_SIZE,
},
{
.base.cra_name = "hmac(sha1)",
.base.cra_driver_name = "hmac-sha1-lib",
.base.cra_priority = 300,
.base.cra_blocksize = SHA1_BLOCK_SIZE,
.base.cra_ctxsize = sizeof(struct hmac_sha1_key),
.base.cra_module = THIS_MODULE,
.digestsize = SHA1_DIGEST_SIZE,
.setkey = crypto_hmac_sha1_setkey,
.init = crypto_hmac_sha1_init,
.update = crypto_hmac_sha1_update,
.final = crypto_hmac_sha1_final,
.digest = crypto_hmac_sha1_digest,
.export = crypto_hmac_sha1_export,
.import = crypto_hmac_sha1_import,
.descsize = sizeof(struct hmac_sha1_ctx),
.statesize = SHA1_SHASH_STATE_SIZE,
},
};
static int __init crypto_sha1_mod_init(void)
{
return crypto_register_shashes(algs, ARRAY_SIZE(algs));
}
module_init(crypto_sha1_mod_init);
static void __exit crypto_sha1_mod_exit(void)
{
crypto_unregister_shashes(algs, ARRAY_SIZE(algs));
}
module_exit(crypto_sha1_mod_exit);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Crypto API support for SHA-1 and HMAC-SHA1");
MODULE_ALIAS_CRYPTO("sha1");
MODULE_ALIAS_CRYPTO("sha1-lib");
MODULE_ALIAS_CRYPTO("hmac(sha1)");
MODULE_ALIAS_CRYPTO("hmac-sha1-lib");


@ -1,87 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* Cryptographic API.
*
* SHA1 Secure Hash Algorithm.
*
* Derived from cryptoapi implementation, adapted for in-place
* scatterlist interface.
*
* Copyright (c) Alan Smithee.
* Copyright (c) Andrew McDonald <andrew@mcdonald.org.uk>
* Copyright (c) Jean-Francois Dive <jef@linuxbe.org>
*/
#include <crypto/internal/hash.h>
#include <crypto/sha1.h>
#include <crypto/sha1_base.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/string.h>
const u8 sha1_zero_message_hash[SHA1_DIGEST_SIZE] = {
0xda, 0x39, 0xa3, 0xee, 0x5e, 0x6b, 0x4b, 0x0d,
0x32, 0x55, 0xbf, 0xef, 0x95, 0x60, 0x18, 0x90,
0xaf, 0xd8, 0x07, 0x09
};
EXPORT_SYMBOL_GPL(sha1_zero_message_hash);
static void sha1_generic_block_fn(struct sha1_state *sst, u8 const *src,
int blocks)
{
u32 temp[SHA1_WORKSPACE_WORDS];
while (blocks--) {
sha1_transform(sst->state, src, temp);
src += SHA1_BLOCK_SIZE;
}
memzero_explicit(temp, sizeof(temp));
}
static int crypto_sha1_update(struct shash_desc *desc, const u8 *data,
unsigned int len)
{
return sha1_base_do_update_blocks(desc, data, len,
sha1_generic_block_fn);
}
static int crypto_sha1_finup(struct shash_desc *desc, const u8 *data,
unsigned int len, u8 *out)
{
sha1_base_do_finup(desc, data, len, sha1_generic_block_fn);
return sha1_base_finish(desc, out);
}
static struct shash_alg alg = {
.digestsize = SHA1_DIGEST_SIZE,
.init = sha1_base_init,
.update = crypto_sha1_update,
.finup = crypto_sha1_finup,
.descsize = SHA1_STATE_SIZE,
.base = {
.cra_name = "sha1",
.cra_driver_name= "sha1-generic",
.cra_priority = 100,
.cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY,
.cra_blocksize = SHA1_BLOCK_SIZE,
.cra_module = THIS_MODULE,
}
};
static int __init sha1_generic_mod_init(void)
{
return crypto_register_shash(&alg);
}
static void __exit sha1_generic_mod_fini(void)
{
crypto_unregister_shash(&alg);
}
module_init(sha1_generic_mod_init);
module_exit(sha1_generic_mod_fini);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("SHA1 Secure Hash Algorithm");
MODULE_ALIAS_CRYPTO("sha1");
MODULE_ALIAS_CRYPTO("sha1-generic");

crypto/sha256.c

@ -1,17 +1,57 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* Crypto API wrapper for the SHA-256 and SHA-224 library functions
* Crypto API support for SHA-224, SHA-256, HMAC-SHA224, and HMAC-SHA256
*
* Copyright (c) Jean-Luc Cooke <jlcooke@certainkey.com>
* Copyright (c) Andrew McDonald <andrew@mcdonald.org.uk>
* Copyright (c) 2002 James Morris <jmorris@intercode.com.au>
* SHA224 Support Copyright 2007 Intel Corporation <jonathan.lynch@intel.com>
* Copyright 2025 Google LLC
*/
#include <crypto/internal/hash.h>
#include <crypto/internal/sha2.h>
#include <crypto/sha2.h>
#include <linux/kernel.h>
#include <linux/module.h>
/*
* Export and import functions. crypto_shash wants a particular format that
* matches that used by some legacy drivers. It currently is the same as the
* library SHA context, except the value in bytecount must be block-aligned and
* the remainder must be stored in an extra u8 appended to the struct.
*/
#define SHA256_SHASH_STATE_SIZE 105
static_assert(offsetof(struct __sha256_ctx, state) == 0);
static_assert(offsetof(struct __sha256_ctx, bytecount) == 32);
static_assert(offsetof(struct __sha256_ctx, buf) == 40);
static_assert(sizeof(struct __sha256_ctx) + 1 == SHA256_SHASH_STATE_SIZE);
static int __crypto_sha256_export(const struct __sha256_ctx *ctx0, void *out)
{
struct __sha256_ctx ctx = *ctx0;
unsigned int partial;
u8 *p = out;
partial = ctx.bytecount % SHA256_BLOCK_SIZE;
ctx.bytecount -= partial;
memcpy(p, &ctx, sizeof(ctx));
p += sizeof(ctx);
*p = partial;
return 0;
}
static int __crypto_sha256_import(struct __sha256_ctx *ctx, const void *in)
{
const u8 *p = in;
memcpy(ctx, p, sizeof(*ctx));
p += sizeof(*ctx);
ctx->bytecount += *p;
return 0;
}
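/*
 * Editorial worked example of the format described above (not part of the
 * commit): after hashing 100 bytes, the library context has bytecount == 100,
 * i.e. 100 % 64 == 36 buffered bytes. The exported blob stores the context
 * with bytecount rounded down to 64, and the trailing extra u8 holds 36;
 * import adds that byte back, restoring bytecount == 100.
 */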
/* SHA-224 */
const u8 sha224_zero_message_hash[SHA224_DIGEST_SIZE] = {
0xd1, 0x4a, 0x02, 0x8c, 0x2a, 0x3a, 0x2b, 0xc9, 0x47,
0x61, 0x02, 0xbb, 0x28, 0x82, 0x34, 0xc4, 0x15, 0xa2,
@ -20,6 +60,46 @@ const u8 sha224_zero_message_hash[SHA224_DIGEST_SIZE] = {
};
EXPORT_SYMBOL_GPL(sha224_zero_message_hash);
#define SHA224_CTX(desc) ((struct sha224_ctx *)shash_desc_ctx(desc))
static int crypto_sha224_init(struct shash_desc *desc)
{
sha224_init(SHA224_CTX(desc));
return 0;
}
static int crypto_sha224_update(struct shash_desc *desc,
const u8 *data, unsigned int len)
{
sha224_update(SHA224_CTX(desc), data, len);
return 0;
}
static int crypto_sha224_final(struct shash_desc *desc, u8 *out)
{
sha224_final(SHA224_CTX(desc), out);
return 0;
}
static int crypto_sha224_digest(struct shash_desc *desc,
const u8 *data, unsigned int len, u8 *out)
{
sha224(data, len, out);
return 0;
}
static int crypto_sha224_export(struct shash_desc *desc, void *out)
{
return __crypto_sha256_export(&SHA224_CTX(desc)->ctx, out);
}
static int crypto_sha224_import(struct shash_desc *desc, const void *in)
{
return __crypto_sha256_import(&SHA224_CTX(desc)->ctx, in);
}
/* SHA-256 */
const u8 sha256_zero_message_hash[SHA256_DIGEST_SIZE] = {
0xe3, 0xb0, 0xc4, 0x42, 0x98, 0xfc, 0x1c, 0x14,
0x9a, 0xfb, 0xf4, 0xc8, 0x99, 0x6f, 0xb9, 0x24,
@ -28,256 +108,241 @@ const u8 sha256_zero_message_hash[SHA256_DIGEST_SIZE] = {
};
EXPORT_SYMBOL_GPL(sha256_zero_message_hash);
#define SHA256_CTX(desc) ((struct sha256_ctx *)shash_desc_ctx(desc))
static int crypto_sha256_init(struct shash_desc *desc)
{
sha256_block_init(shash_desc_ctx(desc));
sha256_init(SHA256_CTX(desc));
return 0;
}
static inline int crypto_sha256_update(struct shash_desc *desc, const u8 *data,
unsigned int len, bool force_generic)
static int crypto_sha256_update(struct shash_desc *desc,
const u8 *data, unsigned int len)
{
struct crypto_sha256_state *sctx = shash_desc_ctx(desc);
int remain = len % SHA256_BLOCK_SIZE;
sctx->count += len - remain;
sha256_choose_blocks(sctx->state, data, len / SHA256_BLOCK_SIZE,
force_generic, !force_generic);
return remain;
}
static int crypto_sha256_update_generic(struct shash_desc *desc, const u8 *data,
unsigned int len)
{
return crypto_sha256_update(desc, data, len, true);
}
static int crypto_sha256_update_lib(struct shash_desc *desc, const u8 *data,
unsigned int len)
{
sha256_update(shash_desc_ctx(desc), data, len);
sha256_update(SHA256_CTX(desc), data, len);
return 0;
}
static int crypto_sha256_update_arch(struct shash_desc *desc, const u8 *data,
unsigned int len)
static int crypto_sha256_final(struct shash_desc *desc, u8 *out)
{
return crypto_sha256_update(desc, data, len, false);
}
static int crypto_sha256_final_lib(struct shash_desc *desc, u8 *out)
{
sha256_final(shash_desc_ctx(desc), out);
sha256_final(SHA256_CTX(desc), out);
return 0;
}
static __always_inline int crypto_sha256_finup(struct shash_desc *desc,
const u8 *data,
unsigned int len, u8 *out,
bool force_generic)
{
struct crypto_sha256_state *sctx = shash_desc_ctx(desc);
unsigned int remain = len;
u8 *buf;
if (len >= SHA256_BLOCK_SIZE)
remain = crypto_sha256_update(desc, data, len, force_generic);
sctx->count += remain;
buf = memcpy(sctx + 1, data + len - remain, remain);
sha256_finup(sctx, buf, remain, out,
crypto_shash_digestsize(desc->tfm), force_generic,
!force_generic);
return 0;
}
static int crypto_sha256_finup_generic(struct shash_desc *desc, const u8 *data,
unsigned int len, u8 *out)
{
return crypto_sha256_finup(desc, data, len, out, true);
}
static int crypto_sha256_finup_arch(struct shash_desc *desc, const u8 *data,
unsigned int len, u8 *out)
{
return crypto_sha256_finup(desc, data, len, out, false);
}
static int crypto_sha256_digest_generic(struct shash_desc *desc, const u8 *data,
unsigned int len, u8 *out)
{
crypto_sha256_init(desc);
return crypto_sha256_finup_generic(desc, data, len, out);
}
static int crypto_sha256_digest_lib(struct shash_desc *desc, const u8 *data,
unsigned int len, u8 *out)
static int crypto_sha256_digest(struct shash_desc *desc,
const u8 *data, unsigned int len, u8 *out)
{
sha256(data, len, out);
return 0;
}
static int crypto_sha256_digest_arch(struct shash_desc *desc, const u8 *data,
unsigned int len, u8 *out)
static int crypto_sha256_export(struct shash_desc *desc, void *out)
{
crypto_sha256_init(desc);
return crypto_sha256_finup_arch(desc, data, len, out);
return __crypto_sha256_export(&SHA256_CTX(desc)->ctx, out);
}
static int crypto_sha224_init(struct shash_desc *desc)
static int crypto_sha256_import(struct shash_desc *desc, const void *in)
{
sha224_block_init(shash_desc_ctx(desc));
return __crypto_sha256_import(&SHA256_CTX(desc)->ctx, in);
}
/* HMAC-SHA224 */
#define HMAC_SHA224_KEY(tfm) ((struct hmac_sha224_key *)crypto_shash_ctx(tfm))
#define HMAC_SHA224_CTX(desc) ((struct hmac_sha224_ctx *)shash_desc_ctx(desc))
static int crypto_hmac_sha224_setkey(struct crypto_shash *tfm,
const u8 *raw_key, unsigned int keylen)
{
hmac_sha224_preparekey(HMAC_SHA224_KEY(tfm), raw_key, keylen);
return 0;
}
static int crypto_sha224_final_lib(struct shash_desc *desc, u8 *out)
static int crypto_hmac_sha224_init(struct shash_desc *desc)
{
sha224_final(shash_desc_ctx(desc), out);
hmac_sha224_init(HMAC_SHA224_CTX(desc), HMAC_SHA224_KEY(desc->tfm));
return 0;
}
static int crypto_sha256_import_lib(struct shash_desc *desc, const void *in)
static int crypto_hmac_sha224_update(struct shash_desc *desc,
const u8 *data, unsigned int len)
{
struct sha256_state *sctx = shash_desc_ctx(desc);
const u8 *p = in;
memcpy(sctx, p, sizeof(*sctx));
p += sizeof(*sctx);
sctx->count += *p;
hmac_sha224_update(HMAC_SHA224_CTX(desc), data, len);
return 0;
}
static int crypto_sha256_export_lib(struct shash_desc *desc, void *out)
static int crypto_hmac_sha224_final(struct shash_desc *desc, u8 *out)
{
struct sha256_state *sctx0 = shash_desc_ctx(desc);
struct sha256_state sctx = *sctx0;
unsigned int partial;
u8 *p = out;
partial = sctx.count % SHA256_BLOCK_SIZE;
sctx.count -= partial;
memcpy(p, &sctx, sizeof(sctx));
p += sizeof(sctx);
*p = partial;
hmac_sha224_final(HMAC_SHA224_CTX(desc), out);
return 0;
}
static int crypto_hmac_sha224_digest(struct shash_desc *desc,
const u8 *data, unsigned int len,
u8 *out)
{
hmac_sha224(HMAC_SHA224_KEY(desc->tfm), data, len, out);
return 0;
}
static int crypto_hmac_sha224_export(struct shash_desc *desc, void *out)
{
return __crypto_sha256_export(&HMAC_SHA224_CTX(desc)->ctx.sha_ctx, out);
}
static int crypto_hmac_sha224_import(struct shash_desc *desc, const void *in)
{
struct hmac_sha224_ctx *ctx = HMAC_SHA224_CTX(desc);
ctx->ctx.ostate = HMAC_SHA224_KEY(desc->tfm)->key.ostate;
return __crypto_sha256_import(&ctx->ctx.sha_ctx, in);
}
/* HMAC-SHA256 */
#define HMAC_SHA256_KEY(tfm) ((struct hmac_sha256_key *)crypto_shash_ctx(tfm))
#define HMAC_SHA256_CTX(desc) ((struct hmac_sha256_ctx *)shash_desc_ctx(desc))
static int crypto_hmac_sha256_setkey(struct crypto_shash *tfm,
const u8 *raw_key, unsigned int keylen)
{
hmac_sha256_preparekey(HMAC_SHA256_KEY(tfm), raw_key, keylen);
return 0;
}
static int crypto_hmac_sha256_init(struct shash_desc *desc)
{
hmac_sha256_init(HMAC_SHA256_CTX(desc), HMAC_SHA256_KEY(desc->tfm));
return 0;
}
static int crypto_hmac_sha256_update(struct shash_desc *desc,
const u8 *data, unsigned int len)
{
hmac_sha256_update(HMAC_SHA256_CTX(desc), data, len);
return 0;
}
static int crypto_hmac_sha256_final(struct shash_desc *desc, u8 *out)
{
hmac_sha256_final(HMAC_SHA256_CTX(desc), out);
return 0;
}
static int crypto_hmac_sha256_digest(struct shash_desc *desc,
const u8 *data, unsigned int len,
u8 *out)
{
hmac_sha256(HMAC_SHA256_KEY(desc->tfm), data, len, out);
return 0;
}
static int crypto_hmac_sha256_export(struct shash_desc *desc, void *out)
{
return __crypto_sha256_export(&HMAC_SHA256_CTX(desc)->ctx.sha_ctx, out);
}
static int crypto_hmac_sha256_import(struct shash_desc *desc, const void *in)
{
struct hmac_sha256_ctx *ctx = HMAC_SHA256_CTX(desc);
ctx->ctx.ostate = HMAC_SHA256_KEY(desc->tfm)->key.ostate;
return __crypto_sha256_import(&ctx->ctx.sha_ctx, in);
}
/* Algorithm definitions */
static struct shash_alg algs[] = {
{
.base.cra_name = "sha256",
.base.cra_driver_name = "sha256-generic",
.base.cra_priority = 100,
.base.cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY |
CRYPTO_AHASH_ALG_FINUP_MAX,
.base.cra_blocksize = SHA256_BLOCK_SIZE,
.base.cra_module = THIS_MODULE,
.digestsize = SHA256_DIGEST_SIZE,
.init = crypto_sha256_init,
.update = crypto_sha256_update_generic,
.finup = crypto_sha256_finup_generic,
.digest = crypto_sha256_digest_generic,
.descsize = sizeof(struct crypto_sha256_state),
},
{
.base.cra_name = "sha224",
.base.cra_driver_name = "sha224-generic",
.base.cra_priority = 100,
.base.cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY |
CRYPTO_AHASH_ALG_FINUP_MAX,
.base.cra_driver_name = "sha224-lib",
.base.cra_priority = 300,
.base.cra_blocksize = SHA224_BLOCK_SIZE,
.base.cra_module = THIS_MODULE,
.digestsize = SHA224_DIGEST_SIZE,
.init = crypto_sha224_init,
.update = crypto_sha256_update_generic,
.finup = crypto_sha256_finup_generic,
.descsize = sizeof(struct crypto_sha256_state),
.update = crypto_sha224_update,
.final = crypto_sha224_final,
.digest = crypto_sha224_digest,
.export = crypto_sha224_export,
.import = crypto_sha224_import,
.descsize = sizeof(struct sha224_ctx),
.statesize = SHA256_SHASH_STATE_SIZE,
},
{
.base.cra_name = "sha256",
.base.cra_driver_name = "sha256-lib",
.base.cra_priority = 300,
.base.cra_blocksize = SHA256_BLOCK_SIZE,
.base.cra_module = THIS_MODULE,
.digestsize = SHA256_DIGEST_SIZE,
.init = crypto_sha256_init,
.update = crypto_sha256_update_lib,
.final = crypto_sha256_final_lib,
.digest = crypto_sha256_digest_lib,
.descsize = sizeof(struct sha256_state),
.statesize = sizeof(struct crypto_sha256_state) +
SHA256_BLOCK_SIZE + 1,
.import = crypto_sha256_import_lib,
.export = crypto_sha256_export_lib,
.update = crypto_sha256_update,
.final = crypto_sha256_final,
.digest = crypto_sha256_digest,
.export = crypto_sha256_export,
.import = crypto_sha256_import,
.descsize = sizeof(struct sha256_ctx),
.statesize = SHA256_SHASH_STATE_SIZE,
},
{
.base.cra_name = "sha224",
.base.cra_driver_name = "sha224-lib",
.base.cra_name = "hmac(sha224)",
.base.cra_driver_name = "hmac-sha224-lib",
.base.cra_priority = 300,
.base.cra_blocksize = SHA224_BLOCK_SIZE,
.base.cra_ctxsize = sizeof(struct hmac_sha224_key),
.base.cra_module = THIS_MODULE,
.digestsize = SHA224_DIGEST_SIZE,
.init = crypto_sha224_init,
.update = crypto_sha256_update_lib,
.final = crypto_sha224_final_lib,
.descsize = sizeof(struct sha256_state),
.statesize = sizeof(struct crypto_sha256_state) +
SHA256_BLOCK_SIZE + 1,
.import = crypto_sha256_import_lib,
.export = crypto_sha256_export_lib,
.setkey = crypto_hmac_sha224_setkey,
.init = crypto_hmac_sha224_init,
.update = crypto_hmac_sha224_update,
.final = crypto_hmac_sha224_final,
.digest = crypto_hmac_sha224_digest,
.export = crypto_hmac_sha224_export,
.import = crypto_hmac_sha224_import,
.descsize = sizeof(struct hmac_sha224_ctx),
.statesize = SHA256_SHASH_STATE_SIZE,
},
{
.base.cra_name = "sha256",
.base.cra_driver_name = "sha256-" __stringify(ARCH),
.base.cra_name = "hmac(sha256)",
.base.cra_driver_name = "hmac-sha256-lib",
.base.cra_priority = 300,
.base.cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY |
CRYPTO_AHASH_ALG_FINUP_MAX,
.base.cra_blocksize = SHA256_BLOCK_SIZE,
.base.cra_ctxsize = sizeof(struct hmac_sha256_key),
.base.cra_module = THIS_MODULE,
.digestsize = SHA256_DIGEST_SIZE,
.init = crypto_sha256_init,
.update = crypto_sha256_update_arch,
.finup = crypto_sha256_finup_arch,
.digest = crypto_sha256_digest_arch,
.descsize = sizeof(struct crypto_sha256_state),
},
{
.base.cra_name = "sha224",
.base.cra_driver_name = "sha224-" __stringify(ARCH),
.base.cra_priority = 300,
.base.cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY |
CRYPTO_AHASH_ALG_FINUP_MAX,
.base.cra_blocksize = SHA224_BLOCK_SIZE,
.base.cra_module = THIS_MODULE,
.digestsize = SHA224_DIGEST_SIZE,
.init = crypto_sha224_init,
.update = crypto_sha256_update_arch,
.finup = crypto_sha256_finup_arch,
.descsize = sizeof(struct crypto_sha256_state),
.setkey = crypto_hmac_sha256_setkey,
.init = crypto_hmac_sha256_init,
.update = crypto_hmac_sha256_update,
.final = crypto_hmac_sha256_final,
.digest = crypto_hmac_sha256_digest,
.export = crypto_hmac_sha256_export,
.import = crypto_hmac_sha256_import,
.descsize = sizeof(struct hmac_sha256_ctx),
.statesize = SHA256_SHASH_STATE_SIZE,
},
};
static unsigned int num_algs;
static int __init crypto_sha256_mod_init(void)
{
/* register the arch flavours only if they differ from generic */
num_algs = ARRAY_SIZE(algs);
BUILD_BUG_ON(ARRAY_SIZE(algs) <= 2);
if (!sha256_is_arch_optimized())
num_algs -= 2;
return crypto_register_shashes(algs, ARRAY_SIZE(algs));
}
module_init(crypto_sha256_mod_init);
static void __exit crypto_sha256_mod_exit(void)
{
crypto_unregister_shashes(algs, num_algs);
crypto_unregister_shashes(algs, ARRAY_SIZE(algs));
}
module_exit(crypto_sha256_mod_exit);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Crypto API wrapper for the SHA-256 and SHA-224 library functions");
MODULE_DESCRIPTION("Crypto API support for SHA-224, SHA-256, HMAC-SHA224, and HMAC-SHA256");
MODULE_ALIAS_CRYPTO("sha256");
MODULE_ALIAS_CRYPTO("sha256-generic");
MODULE_ALIAS_CRYPTO("sha256-" __stringify(ARCH));
MODULE_ALIAS_CRYPTO("sha224");
MODULE_ALIAS_CRYPTO("sha224-generic");
MODULE_ALIAS_CRYPTO("sha224-" __stringify(ARCH));
MODULE_ALIAS_CRYPTO("sha224-lib");
MODULE_ALIAS_CRYPTO("sha256");
MODULE_ALIAS_CRYPTO("sha256-lib");
MODULE_ALIAS_CRYPTO("hmac(sha224)");
MODULE_ALIAS_CRYPTO("hmac-sha224-lib");
MODULE_ALIAS_CRYPTO("hmac(sha256)");
MODULE_ALIAS_CRYPTO("hmac-sha256-lib");

crypto/sha512.c (new file, 354 lines)

@ -0,0 +1,354 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
* Crypto API support for SHA-384, SHA-512, HMAC-SHA384, and HMAC-SHA512
*
* Copyright (c) Jean-Luc Cooke <jlcooke@certainkey.com>
* Copyright (c) Andrew McDonald <andrew@mcdonald.org.uk>
* Copyright (c) 2003 Kyle McMartin <kyle@debian.org>
* Copyright 2025 Google LLC
*/
#include <crypto/internal/hash.h>
#include <crypto/sha2.h>
#include <linux/kernel.h>
#include <linux/module.h>
/*
* Export and import functions. crypto_shash wants a particular format that
* matches that used by some legacy drivers. It currently is the same as the
* library SHA context, except the value in bytecount_lo must be block-aligned
* and the remainder must be stored in an extra u8 appended to the struct.
*/
#define SHA512_SHASH_STATE_SIZE 209
static_assert(offsetof(struct __sha512_ctx, state) == 0);
static_assert(offsetof(struct __sha512_ctx, bytecount_lo) == 64);
static_assert(offsetof(struct __sha512_ctx, bytecount_hi) == 72);
static_assert(offsetof(struct __sha512_ctx, buf) == 80);
static_assert(sizeof(struct __sha512_ctx) + 1 == SHA512_SHASH_STATE_SIZE);
static int __crypto_sha512_export(const struct __sha512_ctx *ctx0, void *out)
{
struct __sha512_ctx ctx = *ctx0;
unsigned int partial;
u8 *p = out;
partial = ctx.bytecount_lo % SHA512_BLOCK_SIZE;
ctx.bytecount_lo -= partial;
memcpy(p, &ctx, sizeof(ctx));
p += sizeof(ctx);
*p = partial;
return 0;
}
static int __crypto_sha512_import(struct __sha512_ctx *ctx, const void *in)
{
const u8 *p = in;
memcpy(ctx, p, sizeof(*ctx));
p += sizeof(*ctx);
ctx->bytecount_lo += *p;
return 0;
}
/* SHA-384 */
const u8 sha384_zero_message_hash[SHA384_DIGEST_SIZE] = {
0x38, 0xb0, 0x60, 0xa7, 0x51, 0xac, 0x96, 0x38,
0x4c, 0xd9, 0x32, 0x7e, 0xb1, 0xb1, 0xe3, 0x6a,
0x21, 0xfd, 0xb7, 0x11, 0x14, 0xbe, 0x07, 0x43,
0x4c, 0x0c, 0xc7, 0xbf, 0x63, 0xf6, 0xe1, 0xda,
0x27, 0x4e, 0xde, 0xbf, 0xe7, 0x6f, 0x65, 0xfb,
0xd5, 0x1a, 0xd2, 0xf1, 0x48, 0x98, 0xb9, 0x5b
};
EXPORT_SYMBOL_GPL(sha384_zero_message_hash);
#define SHA384_CTX(desc) ((struct sha384_ctx *)shash_desc_ctx(desc))
static int crypto_sha384_init(struct shash_desc *desc)
{
sha384_init(SHA384_CTX(desc));
return 0;
}
static int crypto_sha384_update(struct shash_desc *desc,
const u8 *data, unsigned int len)
{
sha384_update(SHA384_CTX(desc), data, len);
return 0;
}
static int crypto_sha384_final(struct shash_desc *desc, u8 *out)
{
sha384_final(SHA384_CTX(desc), out);
return 0;
}
static int crypto_sha384_digest(struct shash_desc *desc,
const u8 *data, unsigned int len, u8 *out)
{
sha384(data, len, out);
return 0;
}
static int crypto_sha384_export(struct shash_desc *desc, void *out)
{
return __crypto_sha512_export(&SHA384_CTX(desc)->ctx, out);
}
static int crypto_sha384_import(struct shash_desc *desc, const void *in)
{
return __crypto_sha512_import(&SHA384_CTX(desc)->ctx, in);
}
/* SHA-512 */
const u8 sha512_zero_message_hash[SHA512_DIGEST_SIZE] = {
0xcf, 0x83, 0xe1, 0x35, 0x7e, 0xef, 0xb8, 0xbd,
0xf1, 0x54, 0x28, 0x50, 0xd6, 0x6d, 0x80, 0x07,
0xd6, 0x20, 0xe4, 0x05, 0x0b, 0x57, 0x15, 0xdc,
0x83, 0xf4, 0xa9, 0x21, 0xd3, 0x6c, 0xe9, 0xce,
0x47, 0xd0, 0xd1, 0x3c, 0x5d, 0x85, 0xf2, 0xb0,
0xff, 0x83, 0x18, 0xd2, 0x87, 0x7e, 0xec, 0x2f,
0x63, 0xb9, 0x31, 0xbd, 0x47, 0x41, 0x7a, 0x81,
0xa5, 0x38, 0x32, 0x7a, 0xf9, 0x27, 0xda, 0x3e
};
EXPORT_SYMBOL_GPL(sha512_zero_message_hash);
#define SHA512_CTX(desc) ((struct sha512_ctx *)shash_desc_ctx(desc))
static int crypto_sha512_init(struct shash_desc *desc)
{
sha512_init(SHA512_CTX(desc));
return 0;
}
static int crypto_sha512_update(struct shash_desc *desc,
const u8 *data, unsigned int len)
{
sha512_update(SHA512_CTX(desc), data, len);
return 0;
}
static int crypto_sha512_final(struct shash_desc *desc, u8 *out)
{
sha512_final(SHA512_CTX(desc), out);
return 0;
}
static int crypto_sha512_digest(struct shash_desc *desc,
const u8 *data, unsigned int len, u8 *out)
{
sha512(data, len, out);
return 0;
}
static int crypto_sha512_export(struct shash_desc *desc, void *out)
{
return __crypto_sha512_export(&SHA512_CTX(desc)->ctx, out);
}
static int crypto_sha512_import(struct shash_desc *desc, const void *in)
{
return __crypto_sha512_import(&SHA512_CTX(desc)->ctx, in);
}
/* HMAC-SHA384 */
#define HMAC_SHA384_KEY(tfm) ((struct hmac_sha384_key *)crypto_shash_ctx(tfm))
#define HMAC_SHA384_CTX(desc) ((struct hmac_sha384_ctx *)shash_desc_ctx(desc))
static int crypto_hmac_sha384_setkey(struct crypto_shash *tfm,
const u8 *raw_key, unsigned int keylen)
{
hmac_sha384_preparekey(HMAC_SHA384_KEY(tfm), raw_key, keylen);
return 0;
}
static int crypto_hmac_sha384_init(struct shash_desc *desc)
{
hmac_sha384_init(HMAC_SHA384_CTX(desc), HMAC_SHA384_KEY(desc->tfm));
return 0;
}
static int crypto_hmac_sha384_update(struct shash_desc *desc,
const u8 *data, unsigned int len)
{
hmac_sha384_update(HMAC_SHA384_CTX(desc), data, len);
return 0;
}
static int crypto_hmac_sha384_final(struct shash_desc *desc, u8 *out)
{
hmac_sha384_final(HMAC_SHA384_CTX(desc), out);
return 0;
}
static int crypto_hmac_sha384_digest(struct shash_desc *desc,
const u8 *data, unsigned int len,
u8 *out)
{
hmac_sha384(HMAC_SHA384_KEY(desc->tfm), data, len, out);
return 0;
}
static int crypto_hmac_sha384_export(struct shash_desc *desc, void *out)
{
return __crypto_sha512_export(&HMAC_SHA384_CTX(desc)->ctx.sha_ctx, out);
}
static int crypto_hmac_sha384_import(struct shash_desc *desc, const void *in)
{
struct hmac_sha384_ctx *ctx = HMAC_SHA384_CTX(desc);
ctx->ctx.ostate = HMAC_SHA384_KEY(desc->tfm)->key.ostate;
return __crypto_sha512_import(&ctx->ctx.sha_ctx, in);
}
/* HMAC-SHA512 */
#define HMAC_SHA512_KEY(tfm) ((struct hmac_sha512_key *)crypto_shash_ctx(tfm))
#define HMAC_SHA512_CTX(desc) ((struct hmac_sha512_ctx *)shash_desc_ctx(desc))
static int crypto_hmac_sha512_setkey(struct crypto_shash *tfm,
const u8 *raw_key, unsigned int keylen)
{
hmac_sha512_preparekey(HMAC_SHA512_KEY(tfm), raw_key, keylen);
return 0;
}
static int crypto_hmac_sha512_init(struct shash_desc *desc)
{
hmac_sha512_init(HMAC_SHA512_CTX(desc), HMAC_SHA512_KEY(desc->tfm));
return 0;
}
static int crypto_hmac_sha512_update(struct shash_desc *desc,
const u8 *data, unsigned int len)
{
hmac_sha512_update(HMAC_SHA512_CTX(desc), data, len);
return 0;
}
static int crypto_hmac_sha512_final(struct shash_desc *desc, u8 *out)
{
hmac_sha512_final(HMAC_SHA512_CTX(desc), out);
return 0;
}
static int crypto_hmac_sha512_digest(struct shash_desc *desc,
const u8 *data, unsigned int len,
u8 *out)
{
hmac_sha512(HMAC_SHA512_KEY(desc->tfm), data, len, out);
return 0;
}
static int crypto_hmac_sha512_export(struct shash_desc *desc, void *out)
{
return __crypto_sha512_export(&HMAC_SHA512_CTX(desc)->ctx.sha_ctx, out);
}
static int crypto_hmac_sha512_import(struct shash_desc *desc, const void *in)
{
struct hmac_sha512_ctx *ctx = HMAC_SHA512_CTX(desc);
ctx->ctx.ostate = HMAC_SHA512_KEY(desc->tfm)->key.ostate;
return __crypto_sha512_import(&ctx->ctx.sha_ctx, in);
}
/* Algorithm definitions */
static struct shash_alg algs[] = {
{
.base.cra_name = "sha384",
.base.cra_driver_name = "sha384-lib",
.base.cra_priority = 300,
.base.cra_blocksize = SHA384_BLOCK_SIZE,
.base.cra_module = THIS_MODULE,
.digestsize = SHA384_DIGEST_SIZE,
.init = crypto_sha384_init,
.update = crypto_sha384_update,
.final = crypto_sha384_final,
.digest = crypto_sha384_digest,
.export = crypto_sha384_export,
.import = crypto_sha384_import,
.descsize = sizeof(struct sha384_ctx),
.statesize = SHA512_SHASH_STATE_SIZE,
},
{
.base.cra_name = "sha512",
.base.cra_driver_name = "sha512-lib",
.base.cra_priority = 300,
.base.cra_blocksize = SHA512_BLOCK_SIZE,
.base.cra_module = THIS_MODULE,
.digestsize = SHA512_DIGEST_SIZE,
.init = crypto_sha512_init,
.update = crypto_sha512_update,
.final = crypto_sha512_final,
.digest = crypto_sha512_digest,
.export = crypto_sha512_export,
.import = crypto_sha512_import,
.descsize = sizeof(struct sha512_ctx),
.statesize = SHA512_SHASH_STATE_SIZE,
},
{
.base.cra_name = "hmac(sha384)",
.base.cra_driver_name = "hmac-sha384-lib",
.base.cra_priority = 300,
.base.cra_blocksize = SHA384_BLOCK_SIZE,
.base.cra_ctxsize = sizeof(struct hmac_sha384_key),
.base.cra_module = THIS_MODULE,
.digestsize = SHA384_DIGEST_SIZE,
.setkey = crypto_hmac_sha384_setkey,
.init = crypto_hmac_sha384_init,
.update = crypto_hmac_sha384_update,
.final = crypto_hmac_sha384_final,
.digest = crypto_hmac_sha384_digest,
.export = crypto_hmac_sha384_export,
.import = crypto_hmac_sha384_import,
.descsize = sizeof(struct hmac_sha384_ctx),
.statesize = SHA512_SHASH_STATE_SIZE,
},
{
.base.cra_name = "hmac(sha512)",
.base.cra_driver_name = "hmac-sha512-lib",
.base.cra_priority = 300,
.base.cra_blocksize = SHA512_BLOCK_SIZE,
.base.cra_ctxsize = sizeof(struct hmac_sha512_key),
.base.cra_module = THIS_MODULE,
.digestsize = SHA512_DIGEST_SIZE,
.setkey = crypto_hmac_sha512_setkey,
.init = crypto_hmac_sha512_init,
.update = crypto_hmac_sha512_update,
.final = crypto_hmac_sha512_final,
.digest = crypto_hmac_sha512_digest,
.export = crypto_hmac_sha512_export,
.import = crypto_hmac_sha512_import,
.descsize = sizeof(struct hmac_sha512_ctx),
.statesize = SHA512_SHASH_STATE_SIZE,
},
};
static int __init crypto_sha512_mod_init(void)
{
return crypto_register_shashes(algs, ARRAY_SIZE(algs));
}
module_init(crypto_sha512_mod_init);
static void __exit crypto_sha512_mod_exit(void)
{
crypto_unregister_shashes(algs, ARRAY_SIZE(algs));
}
module_exit(crypto_sha512_mod_exit);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Crypto API support for SHA-384, SHA-512, HMAC-SHA384, and HMAC-SHA512");
MODULE_ALIAS_CRYPTO("sha384");
MODULE_ALIAS_CRYPTO("sha384-lib");
MODULE_ALIAS_CRYPTO("sha512");
MODULE_ALIAS_CRYPTO("sha512-lib");
MODULE_ALIAS_CRYPTO("hmac(sha384)");
MODULE_ALIAS_CRYPTO("hmac-sha384-lib");
MODULE_ALIAS_CRYPTO("hmac(sha512)");
MODULE_ALIAS_CRYPTO("hmac-sha512-lib");

crypto/sha512_generic.c (deleted)

@ -1,217 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/* SHA-512 code by Jean-Luc Cooke <jlcooke@certainkey.com>
*
* Copyright (c) Jean-Luc Cooke <jlcooke@certainkey.com>
* Copyright (c) Andrew McDonald <andrew@mcdonald.org.uk>
* Copyright (c) 2003 Kyle McMartin <kyle@debian.org>
*/
#include <crypto/internal/hash.h>
#include <crypto/sha2.h>
#include <crypto/sha512_base.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/unaligned.h>
const u8 sha384_zero_message_hash[SHA384_DIGEST_SIZE] = {
0x38, 0xb0, 0x60, 0xa7, 0x51, 0xac, 0x96, 0x38,
0x4c, 0xd9, 0x32, 0x7e, 0xb1, 0xb1, 0xe3, 0x6a,
0x21, 0xfd, 0xb7, 0x11, 0x14, 0xbe, 0x07, 0x43,
0x4c, 0x0c, 0xc7, 0xbf, 0x63, 0xf6, 0xe1, 0xda,
0x27, 0x4e, 0xde, 0xbf, 0xe7, 0x6f, 0x65, 0xfb,
0xd5, 0x1a, 0xd2, 0xf1, 0x48, 0x98, 0xb9, 0x5b
};
EXPORT_SYMBOL_GPL(sha384_zero_message_hash);
const u8 sha512_zero_message_hash[SHA512_DIGEST_SIZE] = {
0xcf, 0x83, 0xe1, 0x35, 0x7e, 0xef, 0xb8, 0xbd,
0xf1, 0x54, 0x28, 0x50, 0xd6, 0x6d, 0x80, 0x07,
0xd6, 0x20, 0xe4, 0x05, 0x0b, 0x57, 0x15, 0xdc,
0x83, 0xf4, 0xa9, 0x21, 0xd3, 0x6c, 0xe9, 0xce,
0x47, 0xd0, 0xd1, 0x3c, 0x5d, 0x85, 0xf2, 0xb0,
0xff, 0x83, 0x18, 0xd2, 0x87, 0x7e, 0xec, 0x2f,
0x63, 0xb9, 0x31, 0xbd, 0x47, 0x41, 0x7a, 0x81,
0xa5, 0x38, 0x32, 0x7a, 0xf9, 0x27, 0xda, 0x3e
};
EXPORT_SYMBOL_GPL(sha512_zero_message_hash);
static inline u64 Ch(u64 x, u64 y, u64 z)
{
return z ^ (x & (y ^ z));
}
static inline u64 Maj(u64 x, u64 y, u64 z)
{
return (x & y) | (z & (x | y));
}
static const u64 sha512_K[80] = {
0x428a2f98d728ae22ULL, 0x7137449123ef65cdULL, 0xb5c0fbcfec4d3b2fULL,
0xe9b5dba58189dbbcULL, 0x3956c25bf348b538ULL, 0x59f111f1b605d019ULL,
0x923f82a4af194f9bULL, 0xab1c5ed5da6d8118ULL, 0xd807aa98a3030242ULL,
0x12835b0145706fbeULL, 0x243185be4ee4b28cULL, 0x550c7dc3d5ffb4e2ULL,
0x72be5d74f27b896fULL, 0x80deb1fe3b1696b1ULL, 0x9bdc06a725c71235ULL,
0xc19bf174cf692694ULL, 0xe49b69c19ef14ad2ULL, 0xefbe4786384f25e3ULL,
0x0fc19dc68b8cd5b5ULL, 0x240ca1cc77ac9c65ULL, 0x2de92c6f592b0275ULL,
0x4a7484aa6ea6e483ULL, 0x5cb0a9dcbd41fbd4ULL, 0x76f988da831153b5ULL,
0x983e5152ee66dfabULL, 0xa831c66d2db43210ULL, 0xb00327c898fb213fULL,
0xbf597fc7beef0ee4ULL, 0xc6e00bf33da88fc2ULL, 0xd5a79147930aa725ULL,
0x06ca6351e003826fULL, 0x142929670a0e6e70ULL, 0x27b70a8546d22ffcULL,
0x2e1b21385c26c926ULL, 0x4d2c6dfc5ac42aedULL, 0x53380d139d95b3dfULL,
0x650a73548baf63deULL, 0x766a0abb3c77b2a8ULL, 0x81c2c92e47edaee6ULL,
0x92722c851482353bULL, 0xa2bfe8a14cf10364ULL, 0xa81a664bbc423001ULL,
0xc24b8b70d0f89791ULL, 0xc76c51a30654be30ULL, 0xd192e819d6ef5218ULL,
0xd69906245565a910ULL, 0xf40e35855771202aULL, 0x106aa07032bbd1b8ULL,
0x19a4c116b8d2d0c8ULL, 0x1e376c085141ab53ULL, 0x2748774cdf8eeb99ULL,
0x34b0bcb5e19b48a8ULL, 0x391c0cb3c5c95a63ULL, 0x4ed8aa4ae3418acbULL,
0x5b9cca4f7763e373ULL, 0x682e6ff3d6b2b8a3ULL, 0x748f82ee5defb2fcULL,
0x78a5636f43172f60ULL, 0x84c87814a1f0ab72ULL, 0x8cc702081a6439ecULL,
0x90befffa23631e28ULL, 0xa4506cebde82bde9ULL, 0xbef9a3f7b2c67915ULL,
0xc67178f2e372532bULL, 0xca273eceea26619cULL, 0xd186b8c721c0c207ULL,
0xeada7dd6cde0eb1eULL, 0xf57d4f7fee6ed178ULL, 0x06f067aa72176fbaULL,
0x0a637dc5a2c898a6ULL, 0x113f9804bef90daeULL, 0x1b710b35131c471bULL,
0x28db77f523047d84ULL, 0x32caab7b40c72493ULL, 0x3c9ebe0a15c9bebcULL,
0x431d67c49c100d4cULL, 0x4cc5d4becb3e42b6ULL, 0x597f299cfc657e2aULL,
0x5fcb6fab3ad6faecULL, 0x6c44198c4a475817ULL,
};
#define e0(x) (ror64(x,28) ^ ror64(x,34) ^ ror64(x,39))
#define e1(x) (ror64(x,14) ^ ror64(x,18) ^ ror64(x,41))
#define s0(x) (ror64(x, 1) ^ ror64(x, 8) ^ (x >> 7))
#define s1(x) (ror64(x,19) ^ ror64(x,61) ^ (x >> 6))
static inline void LOAD_OP(int I, u64 *W, const u8 *input)
{
W[I] = get_unaligned_be64((__u64 *)input + I);
}
static inline void BLEND_OP(int I, u64 *W)
{
W[I & 15] += s1(W[(I-2) & 15]) + W[(I-7) & 15] + s0(W[(I-15) & 15]);
}
static void
sha512_transform(u64 *state, const u8 *input)
{
u64 a, b, c, d, e, f, g, h, t1, t2;
int i;
u64 W[16];
/* load the state into our registers */
a=state[0]; b=state[1]; c=state[2]; d=state[3];
e=state[4]; f=state[5]; g=state[6]; h=state[7];
/* now iterate */
for (i=0; i<80; i+=8) {
if (!(i & 8)) {
int j;
if (i < 16) {
/* load the input */
for (j = 0; j < 16; j++)
LOAD_OP(i + j, W, input);
} else {
for (j = 0; j < 16; j++) {
BLEND_OP(i + j, W);
}
}
}
t1 = h + e1(e) + Ch(e,f,g) + sha512_K[i ] + W[(i & 15)];
t2 = e0(a) + Maj(a,b,c); d+=t1; h=t1+t2;
t1 = g + e1(d) + Ch(d,e,f) + sha512_K[i+1] + W[(i & 15) + 1];
t2 = e0(h) + Maj(h,a,b); c+=t1; g=t1+t2;
t1 = f + e1(c) + Ch(c,d,e) + sha512_K[i+2] + W[(i & 15) + 2];
t2 = e0(g) + Maj(g,h,a); b+=t1; f=t1+t2;
t1 = e + e1(b) + Ch(b,c,d) + sha512_K[i+3] + W[(i & 15) + 3];
t2 = e0(f) + Maj(f,g,h); a+=t1; e=t1+t2;
t1 = d + e1(a) + Ch(a,b,c) + sha512_K[i+4] + W[(i & 15) + 4];
t2 = e0(e) + Maj(e,f,g); h+=t1; d=t1+t2;
t1 = c + e1(h) + Ch(h,a,b) + sha512_K[i+5] + W[(i & 15) + 5];
t2 = e0(d) + Maj(d,e,f); g+=t1; c=t1+t2;
t1 = b + e1(g) + Ch(g,h,a) + sha512_K[i+6] + W[(i & 15) + 6];
t2 = e0(c) + Maj(c,d,e); f+=t1; b=t1+t2;
t1 = a + e1(f) + Ch(f,g,h) + sha512_K[i+7] + W[(i & 15) + 7];
t2 = e0(b) + Maj(b,c,d); e+=t1; a=t1+t2;
}
state[0] += a; state[1] += b; state[2] += c; state[3] += d;
state[4] += e; state[5] += f; state[6] += g; state[7] += h;
}
void sha512_generic_block_fn(struct sha512_state *sst, u8 const *src,
int blocks)
{
do {
sha512_transform(sst->state, src);
src += SHA512_BLOCK_SIZE;
} while (--blocks);
}
EXPORT_SYMBOL_GPL(sha512_generic_block_fn);
static int crypto_sha512_update(struct shash_desc *desc, const u8 *data,
unsigned int len)
{
return sha512_base_do_update_blocks(desc, data, len,
sha512_generic_block_fn);
}
static int crypto_sha512_finup(struct shash_desc *desc, const u8 *data,
unsigned int len, u8 *hash)
{
sha512_base_do_finup(desc, data, len, sha512_generic_block_fn);
return sha512_base_finish(desc, hash);
}
static struct shash_alg sha512_algs[2] = { {
.digestsize = SHA512_DIGEST_SIZE,
.init = sha512_base_init,
.update = crypto_sha512_update,
.finup = crypto_sha512_finup,
.descsize = SHA512_STATE_SIZE,
.base = {
.cra_name = "sha512",
.cra_driver_name = "sha512-generic",
.cra_priority = 100,
.cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY |
CRYPTO_AHASH_ALG_FINUP_MAX,
.cra_blocksize = SHA512_BLOCK_SIZE,
.cra_module = THIS_MODULE,
}
}, {
.digestsize = SHA384_DIGEST_SIZE,
.init = sha384_base_init,
.update = crypto_sha512_update,
.finup = crypto_sha512_finup,
.descsize = SHA512_STATE_SIZE,
.base = {
.cra_name = "sha384",
.cra_driver_name = "sha384-generic",
.cra_priority = 100,
.cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY |
CRYPTO_AHASH_ALG_FINUP_MAX,
.cra_blocksize = SHA384_BLOCK_SIZE,
.cra_module = THIS_MODULE,
}
} };
static int __init sha512_generic_mod_init(void)
{
return crypto_register_shashes(sha512_algs, ARRAY_SIZE(sha512_algs));
}
static void __exit sha512_generic_mod_fini(void)
{
crypto_unregister_shashes(sha512_algs, ARRAY_SIZE(sha512_algs));
}
module_init(sha512_generic_mod_init);
module_exit(sha512_generic_mod_fini);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("SHA-512 and SHA-384 Secure Hash Algorithms");
MODULE_ALIAS_CRYPTO("sha384");
MODULE_ALIAS_CRYPTO("sha384-generic");
MODULE_ALIAS_CRYPTO("sha512");
MODULE_ALIAS_CRYPTO("sha512-generic");

crypto/testmgr.c

@ -4184,6 +4184,7 @@ static const struct alg_test_desc alg_test_descs[] = {
}
}, {
.alg = "authenc(hmac(sha1),cbc(aes))",
.generic_driver = "authenc(hmac-sha1-lib,cbc(aes-generic))",
.test = alg_test_aead,
.fips_allowed = 1,
.suite = {
@ -4191,12 +4192,14 @@ static const struct alg_test_desc alg_test_descs[] = {
}
}, {
.alg = "authenc(hmac(sha1),cbc(des))",
.generic_driver = "authenc(hmac-sha1-lib,cbc(des-generic))",
.test = alg_test_aead,
.suite = {
.aead = __VECS(hmac_sha1_des_cbc_tv_temp)
}
}, {
.alg = "authenc(hmac(sha1),cbc(des3_ede))",
.generic_driver = "authenc(hmac-sha1-lib,cbc(des3_ede-generic))",
.test = alg_test_aead,
.suite = {
.aead = __VECS(hmac_sha1_des3_ede_cbc_tv_temp)
@ -4207,6 +4210,7 @@ static const struct alg_test_desc alg_test_descs[] = {
.fips_allowed = 1,
}, {
.alg = "authenc(hmac(sha1),ecb(cipher_null))",
.generic_driver = "authenc(hmac-sha1-lib,ecb-cipher_null)",
.test = alg_test_aead,
.suite = {
.aead = __VECS(hmac_sha1_ecb_cipher_null_tv_temp)
@ -4217,18 +4221,21 @@ static const struct alg_test_desc alg_test_descs[] = {
.fips_allowed = 1,
}, {
.alg = "authenc(hmac(sha224),cbc(des))",
.generic_driver = "authenc(hmac-sha224-lib,cbc(des-generic))",
.test = alg_test_aead,
.suite = {
.aead = __VECS(hmac_sha224_des_cbc_tv_temp)
}
}, {
.alg = "authenc(hmac(sha224),cbc(des3_ede))",
.generic_driver = "authenc(hmac-sha224-lib,cbc(des3_ede-generic))",
.test = alg_test_aead,
.suite = {
.aead = __VECS(hmac_sha224_des3_ede_cbc_tv_temp)
}
}, {
.alg = "authenc(hmac(sha256),cbc(aes))",
.generic_driver = "authenc(hmac-sha256-lib,cbc(aes-generic))",
.test = alg_test_aead,
.fips_allowed = 1,
.suite = {
@ -4236,12 +4243,14 @@ static const struct alg_test_desc alg_test_descs[] = {
}
}, {
.alg = "authenc(hmac(sha256),cbc(des))",
.generic_driver = "authenc(hmac-sha256-lib,cbc(des-generic))",
.test = alg_test_aead,
.suite = {
.aead = __VECS(hmac_sha256_des_cbc_tv_temp)
}
}, {
.alg = "authenc(hmac(sha256),cbc(des3_ede))",
.generic_driver = "authenc(hmac-sha256-lib,cbc(des3_ede-generic))",
.test = alg_test_aead,
.suite = {
.aead = __VECS(hmac_sha256_des3_ede_cbc_tv_temp)
@ -4252,6 +4261,7 @@ static const struct alg_test_desc alg_test_descs[] = {
.fips_allowed = 1,
}, {
.alg = "authenc(hmac(sha256),cts(cbc(aes)))",
.generic_driver = "authenc(hmac-sha256-lib,cts(cbc(aes-generic)))",
.test = alg_test_aead,
.suite = {
.aead = __VECS(krb5_test_aes128_cts_hmac_sha256_128)
@ -4262,12 +4272,14 @@ static const struct alg_test_desc alg_test_descs[] = {
.fips_allowed = 1,
}, {
.alg = "authenc(hmac(sha384),cbc(des))",
.generic_driver = "authenc(hmac-sha384-lib,cbc(des-generic))",
.test = alg_test_aead,
.suite = {
.aead = __VECS(hmac_sha384_des_cbc_tv_temp)
}
}, {
.alg = "authenc(hmac(sha384),cbc(des3_ede))",
.generic_driver = "authenc(hmac-sha384-lib,cbc(des3_ede-generic))",
.test = alg_test_aead,
.suite = {
.aead = __VECS(hmac_sha384_des3_ede_cbc_tv_temp)
@ -4278,6 +4290,7 @@ static const struct alg_test_desc alg_test_descs[] = {
.fips_allowed = 1,
}, {
.alg = "authenc(hmac(sha384),cts(cbc(aes)))",
.generic_driver = "authenc(hmac-sha384-lib,cts(cbc(aes-generic)))",
.test = alg_test_aead,
.suite = {
.aead = __VECS(krb5_test_aes256_cts_hmac_sha384_192)
@ -4288,6 +4301,7 @@ static const struct alg_test_desc alg_test_descs[] = {
.fips_allowed = 1,
}, {
.alg = "authenc(hmac(sha512),cbc(aes))",
.generic_driver = "authenc(hmac-sha512-lib,cbc(aes-generic))",
.fips_allowed = 1,
.test = alg_test_aead,
.suite = {
@ -4295,12 +4309,14 @@ static const struct alg_test_desc alg_test_descs[] = {
}
}, {
.alg = "authenc(hmac(sha512),cbc(des))",
.generic_driver = "authenc(hmac-sha512-lib,cbc(des-generic))",
.test = alg_test_aead,
.suite = {
.aead = __VECS(hmac_sha512_des_cbc_tv_temp)
}
}, {
.alg = "authenc(hmac(sha512),cbc(des3_ede))",
.generic_driver = "authenc(hmac-sha512-lib,cbc(des3_ede-generic))",
.test = alg_test_aead,
.suite = {
.aead = __VECS(hmac_sha512_des3_ede_cbc_tv_temp)
@ -4958,6 +4974,7 @@ static const struct alg_test_desc alg_test_descs[] = {
}
}, {
.alg = "essiv(authenc(hmac(sha256),cbc(aes)),sha256)",
.generic_driver = "essiv(authenc(hmac-sha256-lib,cbc(aes-generic)),sha256-lib)",
.test = alg_test_aead,
.fips_allowed = 1,
.suite = {
@ -4965,6 +4982,7 @@ static const struct alg_test_desc alg_test_descs[] = {
}
}, {
.alg = "essiv(cbc(aes),sha256)",
.generic_driver = "essiv(cbc(aes-generic),sha256-lib)",
.test = alg_test_skcipher,
.fips_allowed = 1,
.suite = {
@ -5057,6 +5075,7 @@ static const struct alg_test_desc alg_test_descs[] = {
}
}, {
.alg = "hmac(sha1)",
.generic_driver = "hmac-sha1-lib",
.test = alg_test_hash,
.fips_allowed = 1,
.suite = {
@ -5064,6 +5083,7 @@ static const struct alg_test_desc alg_test_descs[] = {
}
}, {
.alg = "hmac(sha224)",
.generic_driver = "hmac-sha224-lib",
.test = alg_test_hash,
.fips_allowed = 1,
.suite = {
@ -5071,6 +5091,7 @@ static const struct alg_test_desc alg_test_descs[] = {
}
}, {
.alg = "hmac(sha256)",
.generic_driver = "hmac-sha256-lib",
.test = alg_test_hash,
.fips_allowed = 1,
.suite = {
@ -5106,6 +5127,7 @@ static const struct alg_test_desc alg_test_descs[] = {
}
}, {
.alg = "hmac(sha384)",
.generic_driver = "hmac-sha384-lib",
.test = alg_test_hash,
.fips_allowed = 1,
.suite = {
@ -5113,6 +5135,7 @@ static const struct alg_test_desc alg_test_descs[] = {
}
}, {
.alg = "hmac(sha512)",
.generic_driver = "hmac-sha512-lib",
.test = alg_test_hash,
.fips_allowed = 1,
.suite = {
@ -5393,6 +5416,7 @@ static const struct alg_test_desc alg_test_descs[] = {
}
}, {
.alg = "sha1",
.generic_driver = "sha1-lib",
.test = alg_test_hash,
.fips_allowed = 1,
.suite = {
@ -5400,6 +5424,7 @@ static const struct alg_test_desc alg_test_descs[] = {
}
}, {
.alg = "sha224",
.generic_driver = "sha224-lib",
.test = alg_test_hash,
.fips_allowed = 1,
.suite = {
@ -5407,6 +5432,7 @@ static const struct alg_test_desc alg_test_descs[] = {
}
}, {
.alg = "sha256",
.generic_driver = "sha256-lib",
.test = alg_test_hash,
.fips_allowed = 1,
.suite = {
@ -5442,6 +5468,7 @@ static const struct alg_test_desc alg_test_descs[] = {
}
}, {
.alg = "sha384",
.generic_driver = "sha384-lib",
.test = alg_test_hash,
.fips_allowed = 1,
.suite = {
@ -5449,6 +5476,7 @@ static const struct alg_test_desc alg_test_descs[] = {
}
}, {
.alg = "sha512",
.generic_driver = "sha512-lib",
.test = alg_test_hash,
.fips_allowed = 1,
.suite = {

drivers/char/tpm/tpm2-sessions.c

@ -390,7 +390,7 @@ static int tpm2_create_primary(struct tpm_chip *chip, u32 hierarchy,
* on every operation, so we weld the hmac init and final functions in
* here to give it the same usage characteristics as a regular hash
*/
static void tpm2_hmac_init(struct sha256_state *sctx, u8 *key, u32 key_len)
static void tpm2_hmac_init(struct sha256_ctx *sctx, u8 *key, u32 key_len)
{
u8 pad[SHA256_BLOCK_SIZE];
int i;
@ -406,7 +406,7 @@ static void tpm2_hmac_init(struct sha256_state *sctx, u8 *key, u32 key_len)
sha256_update(sctx, pad, sizeof(pad));
}
static void tpm2_hmac_final(struct sha256_state *sctx, u8 *key, u32 key_len,
static void tpm2_hmac_final(struct sha256_ctx *sctx, u8 *key, u32 key_len,
u8 *out)
{
u8 pad[SHA256_BLOCK_SIZE];
@ -440,7 +440,7 @@ static void tpm2_KDFa(u8 *key, u32 key_len, const char *label, u8 *u,
const __be32 bits = cpu_to_be32(bytes * 8);
while (bytes > 0) {
struct sha256_state sctx;
struct sha256_ctx sctx;
__be32 c = cpu_to_be32(counter);
tpm2_hmac_init(&sctx, key, key_len);
@ -467,7 +467,7 @@ static void tpm2_KDFa(u8 *key, u32 key_len, const char *label, u8 *u,
static void tpm2_KDFe(u8 z[EC_PT_SZ], const char *str, u8 *pt_u, u8 *pt_v,
u8 *out)
{
struct sha256_state sctx;
struct sha256_ctx sctx;
/*
* this should be an iterative counter, but because we know
* we're only taking 32 bytes for the point using a sha256
@ -592,7 +592,7 @@ void tpm_buf_fill_hmac_session(struct tpm_chip *chip, struct tpm_buf *buf)
u8 *hmac = NULL;
u32 attrs;
u8 cphash[SHA256_DIGEST_SIZE];
struct sha256_state sctx;
struct sha256_ctx sctx;
if (!auth)
return;
@ -750,7 +750,7 @@ int tpm_buf_check_hmac_response(struct tpm_chip *chip, struct tpm_buf *buf,
off_t offset_s, offset_p;
u8 rphash[SHA256_DIGEST_SIZE];
u32 attrs, cc;
struct sha256_state sctx;
struct sha256_ctx sctx;
u16 tag = be16_to_cpu(head->tag);
int parm_len, len, i, handles;

drivers/crypto/img-hash.c

@ -705,17 +705,17 @@ static int img_hash_cra_md5_init(struct crypto_tfm *tfm)
static int img_hash_cra_sha1_init(struct crypto_tfm *tfm)
{
return img_hash_cra_init(tfm, "sha1-generic");
return img_hash_cra_init(tfm, "sha1-lib");
}
static int img_hash_cra_sha224_init(struct crypto_tfm *tfm)
{
return img_hash_cra_init(tfm, "sha224-generic");
return img_hash_cra_init(tfm, "sha224-lib");
}
static int img_hash_cra_sha256_init(struct crypto_tfm *tfm)
{
return img_hash_cra_init(tfm, "sha256-generic");
return img_hash_cra_init(tfm, "sha256-lib");
}
static void img_hash_cra_exit(struct crypto_tfm *tfm)

drivers/crypto/starfive/jh7110-hash.c

@ -493,25 +493,25 @@ static int starfive_hash_setkey(struct crypto_ahash *hash,
static int starfive_sha224_init_tfm(struct crypto_ahash *hash)
{
return starfive_hash_init_tfm(hash, "sha224-generic",
return starfive_hash_init_tfm(hash, "sha224-lib",
STARFIVE_HASH_SHA224, 0);
}
static int starfive_sha256_init_tfm(struct crypto_ahash *hash)
{
return starfive_hash_init_tfm(hash, "sha256-generic",
return starfive_hash_init_tfm(hash, "sha256-lib",
STARFIVE_HASH_SHA256, 0);
}
static int starfive_sha384_init_tfm(struct crypto_ahash *hash)
{
return starfive_hash_init_tfm(hash, "sha384-generic",
return starfive_hash_init_tfm(hash, "sha384-lib",
STARFIVE_HASH_SHA384, 0);
}
static int starfive_sha512_init_tfm(struct crypto_ahash *hash)
{
return starfive_hash_init_tfm(hash, "sha512-generic",
return starfive_hash_init_tfm(hash, "sha512-lib",
STARFIVE_HASH_SHA512, 0);
}
@ -523,25 +523,25 @@ static int starfive_sm3_init_tfm(struct crypto_ahash *hash)
static int starfive_hmac_sha224_init_tfm(struct crypto_ahash *hash)
{
return starfive_hash_init_tfm(hash, "hmac(sha224-generic)",
return starfive_hash_init_tfm(hash, "hmac-sha224-lib",
STARFIVE_HASH_SHA224, 1);
}
static int starfive_hmac_sha256_init_tfm(struct crypto_ahash *hash)
{
return starfive_hash_init_tfm(hash, "hmac(sha256-generic)",
return starfive_hash_init_tfm(hash, "hmac-sha256-lib",
STARFIVE_HASH_SHA256, 1);
}
static int starfive_hmac_sha384_init_tfm(struct crypto_ahash *hash)
{
return starfive_hash_init_tfm(hash, "hmac(sha384-generic)",
return starfive_hash_init_tfm(hash, "hmac-sha384-lib",
STARFIVE_HASH_SHA384, 1);
}
static int starfive_hmac_sha512_init_tfm(struct crypto_ahash *hash)
{
return starfive_hash_init_tfm(hash, "hmac(sha512-generic)",
return starfive_hash_init_tfm(hash, "hmac-sha512-lib",
STARFIVE_HASH_SHA512, 1);
}

include/crypto/internal/sha2.h (deleted)

@ -1,66 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-only */
#ifndef _CRYPTO_INTERNAL_SHA2_H
#define _CRYPTO_INTERNAL_SHA2_H
#include <crypto/internal/simd.h>
#include <crypto/sha2.h>
#include <linux/compiler_attributes.h>
#include <linux/string.h>
#include <linux/types.h>
#include <linux/unaligned.h>
#if IS_ENABLED(CONFIG_CRYPTO_ARCH_HAVE_LIB_SHA256)
bool sha256_is_arch_optimized(void);
#else
static inline bool sha256_is_arch_optimized(void)
{
return false;
}
#endif
void sha256_blocks_generic(u32 state[SHA256_STATE_WORDS],
const u8 *data, size_t nblocks);
void sha256_blocks_arch(u32 state[SHA256_STATE_WORDS],
const u8 *data, size_t nblocks);
void sha256_blocks_simd(u32 state[SHA256_STATE_WORDS],
const u8 *data, size_t nblocks);
static __always_inline void sha256_choose_blocks(
u32 state[SHA256_STATE_WORDS], const u8 *data, size_t nblocks,
bool force_generic, bool force_simd)
{
if (!IS_ENABLED(CONFIG_CRYPTO_ARCH_HAVE_LIB_SHA256) || force_generic)
sha256_blocks_generic(state, data, nblocks);
else if (IS_ENABLED(CONFIG_CRYPTO_ARCH_HAVE_LIB_SHA256_SIMD) &&
(force_simd || crypto_simd_usable()))
sha256_blocks_simd(state, data, nblocks);
else
sha256_blocks_arch(state, data, nblocks);
}
static __always_inline void sha256_finup(
struct crypto_sha256_state *sctx, u8 buf[SHA256_BLOCK_SIZE],
size_t len, u8 out[SHA256_DIGEST_SIZE], size_t digest_size,
bool force_generic, bool force_simd)
{
const size_t bit_offset = SHA256_BLOCK_SIZE - 8;
__be64 *bits = (__be64 *)&buf[bit_offset];
int i;
buf[len++] = 0x80;
if (len > bit_offset) {
memset(&buf[len], 0, SHA256_BLOCK_SIZE - len);
sha256_choose_blocks(sctx->state, buf, 1, force_generic,
force_simd);
len = 0;
}
memset(&buf[len], 0, bit_offset - len);
*bits = cpu_to_be64(sctx->count << 3);
sha256_choose_blocks(sctx->state, buf, 1, force_generic, force_simd);
for (i = 0; i < digest_size; i += 4)
put_unaligned_be32(sctx->state[i / 4], out + i);
}
#endif /* _CRYPTO_INTERNAL_SHA2_H */

include/crypto/sha1.h

@ -33,7 +33,185 @@ struct sha1_state {
*/
#define SHA1_DIGEST_WORDS (SHA1_DIGEST_SIZE / 4)
#define SHA1_WORKSPACE_WORDS 16
void sha1_init(__u32 *buf);
void sha1_init_raw(__u32 *buf);
void sha1_transform(__u32 *digest, const char *data, __u32 *W);
/* State for the SHA-1 compression function */
struct sha1_block_state {
u32 h[SHA1_DIGEST_SIZE / 4];
};
/**
* struct sha1_ctx - Context for hashing a message with SHA-1
* @state: the compression function state
* @bytecount: number of bytes processed so far
* @buf: partial block buffer; bytecount % SHA1_BLOCK_SIZE bytes are valid
*/
struct sha1_ctx {
struct sha1_block_state state;
u64 bytecount;
u8 buf[SHA1_BLOCK_SIZE];
};
/**
* sha1_init() - Initialize a SHA-1 context for a new message
* @ctx: the context to initialize
*
* If you don't need incremental computation, consider sha1() instead.
*
* Context: Any context.
*/
void sha1_init(struct sha1_ctx *ctx);
/**
* sha1_update() - Update a SHA-1 context with message data
* @ctx: the context to update; must have been initialized
* @data: the message data
* @len: the data length in bytes
*
* This can be called any number of times.
*
* Context: Any context.
*/
void sha1_update(struct sha1_ctx *ctx, const u8 *data, size_t len);
/**
* sha1_final() - Finish computing a SHA-1 message digest
* @ctx: the context to finalize; must have been initialized
* @out: (output) the resulting SHA-1 message digest
*
* After finishing, this zeroizes @ctx. So the caller does not need to do it.
*
* Context: Any context.
*/
void sha1_final(struct sha1_ctx *ctx, u8 out[SHA1_DIGEST_SIZE]);
/**
* sha1() - Compute SHA-1 message digest in one shot
* @data: the message data
* @len: the data length in bytes
* @out: (output) the resulting SHA-1 message digest
*
* Context: Any context.
*/
void sha1(const u8 *data, size_t len, u8 out[SHA1_DIGEST_SIZE]);
/**
* struct hmac_sha1_key - Prepared key for HMAC-SHA1
* @istate: private
* @ostate: private
*/
struct hmac_sha1_key {
struct sha1_block_state istate;
struct sha1_block_state ostate;
};
/**
* struct hmac_sha1_ctx - Context for computing HMAC-SHA1 of a message
* @sha_ctx: private
* @ostate: private
*/
struct hmac_sha1_ctx {
struct sha1_ctx sha_ctx;
struct sha1_block_state ostate;
};
/**
* hmac_sha1_preparekey() - Prepare a key for HMAC-SHA1
* @key: (output) the key structure to initialize
* @raw_key: the raw HMAC-SHA1 key
* @raw_key_len: the key length in bytes. All key lengths are supported.
*
* Note: the caller is responsible for zeroizing both the struct hmac_sha1_key
* and the raw key once they are no longer needed.
*
* Context: Any context.
*/
void hmac_sha1_preparekey(struct hmac_sha1_key *key,
const u8 *raw_key, size_t raw_key_len);
/**
* hmac_sha1_init() - Initialize an HMAC-SHA1 context for a new message
* @ctx: (output) the HMAC context to initialize
* @key: the prepared HMAC key
*
* If you don't need incremental computation, consider hmac_sha1() instead.
*
* Context: Any context.
*/
void hmac_sha1_init(struct hmac_sha1_ctx *ctx, const struct hmac_sha1_key *key);
/**
* hmac_sha1_init_usingrawkey() - Initialize an HMAC-SHA1 context for a new
* message, using a raw key
* @ctx: (output) the HMAC context to initialize
* @raw_key: the raw HMAC-SHA1 key
* @raw_key_len: the key length in bytes. All key lengths are supported.
*
* If you don't need incremental computation, consider hmac_sha1_usingrawkey()
* instead.
*
* Context: Any context.
*/
void hmac_sha1_init_usingrawkey(struct hmac_sha1_ctx *ctx,
const u8 *raw_key, size_t raw_key_len);
/**
* hmac_sha1_update() - Update an HMAC-SHA1 context with message data
* @ctx: the HMAC context to update; must have been initialized
* @data: the message data
* @data_len: the data length in bytes
*
* This can be called any number of times.
*
* Context: Any context.
*/
static inline void hmac_sha1_update(struct hmac_sha1_ctx *ctx,
const u8 *data, size_t data_len)
{
sha1_update(&ctx->sha_ctx, data, data_len);
}
/**
* hmac_sha1_final() - Finish computing an HMAC-SHA1 value
* @ctx: the HMAC context to finalize; must have been initialized
* @out: (output) the resulting HMAC-SHA1 value
*
* After finishing, this zeroizes @ctx. So the caller does not need to do it.
*
* Context: Any context.
*/
void hmac_sha1_final(struct hmac_sha1_ctx *ctx, u8 out[SHA1_DIGEST_SIZE]);
/**
* hmac_sha1() - Compute HMAC-SHA1 in one shot, using a prepared key
* @key: the prepared HMAC key
* @data: the message data
* @data_len: the data length in bytes
* @out: (output) the resulting HMAC-SHA1 value
*
* If you're using the key only once, consider using hmac_sha1_usingrawkey().
*
* Context: Any context.
*/
void hmac_sha1(const struct hmac_sha1_key *key,
const u8 *data, size_t data_len, u8 out[SHA1_DIGEST_SIZE]);
/**
* hmac_sha1_usingrawkey() - Compute HMAC-SHA1 in one shot, using a raw key
* @raw_key: the raw HMAC-SHA1 key
* @raw_key_len: the key length in bytes. All key lengths are supported.
* @data: the message data
* @data_len: the data length in bytes
* @out: (output) the resulting HMAC-SHA1 value
*
* If you're using the key multiple times, prefer to use hmac_sha1_preparekey()
* followed by multiple calls to hmac_sha1() instead.
*
* Context: Any context.
*/
void hmac_sha1_usingrawkey(const u8 *raw_key, size_t raw_key_len,
const u8 *data, size_t data_len,
u8 out[SHA1_DIGEST_SIZE]);
#endif /* _CRYPTO_SHA1_H */
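
Putting the documented API together, a sketch of the prepared-key flow for authenticating a two-part message; the demo_* name is hypothetical, and every library call is taken from the declarations above:

#include <crypto/sha1.h>
#include <linux/string.h>

static void demo_hmac_sha1_prepared(const u8 *raw_key, size_t raw_key_len,
				    const u8 *hdr, size_t hdr_len,
				    const u8 *body, size_t body_len,
				    u8 mac[SHA1_DIGEST_SIZE])
{
	struct hmac_sha1_key key;
	struct hmac_sha1_ctx ctx;

	/* Prepare the key once; it can be reused for many messages. */
	hmac_sha1_preparekey(&key, raw_key, raw_key_len);

	hmac_sha1_init(&ctx, &key);
	hmac_sha1_update(&ctx, hdr, hdr_len);
	hmac_sha1_update(&ctx, body, body_len);
	hmac_sha1_final(&ctx, mac);	/* zeroizes ctx */

	/* Per the docs above, the caller zeroizes the key when done. */
	memzero_explicit(&key, sizeof(key));
}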

include/crypto/sha1_base.h (deleted)

@ -1,82 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* sha1_base.h - core logic for SHA-1 implementations
*
* Copyright (C) 2015 Linaro Ltd <ard.biesheuvel@linaro.org>
*/
#ifndef _CRYPTO_SHA1_BASE_H
#define _CRYPTO_SHA1_BASE_H
#include <crypto/internal/hash.h>
#include <crypto/sha1.h>
#include <linux/math.h>
#include <linux/string.h>
#include <linux/types.h>
#include <linux/unaligned.h>
typedef void (sha1_block_fn)(struct sha1_state *sst, u8 const *src, int blocks);
static inline int sha1_base_init(struct shash_desc *desc)
{
struct sha1_state *sctx = shash_desc_ctx(desc);
sctx->state[0] = SHA1_H0;
sctx->state[1] = SHA1_H1;
sctx->state[2] = SHA1_H2;
sctx->state[3] = SHA1_H3;
sctx->state[4] = SHA1_H4;
sctx->count = 0;
return 0;
}
static inline int sha1_base_do_update_blocks(struct shash_desc *desc,
const u8 *data,
unsigned int len,
sha1_block_fn *block_fn)
{
unsigned int remain = len - round_down(len, SHA1_BLOCK_SIZE);
struct sha1_state *sctx = shash_desc_ctx(desc);
sctx->count += len - remain;
block_fn(sctx, data, len / SHA1_BLOCK_SIZE);
return remain;
}
static inline int sha1_base_do_finup(struct shash_desc *desc,
const u8 *src, unsigned int len,
sha1_block_fn *block_fn)
{
unsigned int bit_offset = SHA1_BLOCK_SIZE / 8 - 1;
struct sha1_state *sctx = shash_desc_ctx(desc);
union {
__be64 b64[SHA1_BLOCK_SIZE / 4];
u8 u8[SHA1_BLOCK_SIZE * 2];
} block = {};
if (len >= bit_offset * 8)
bit_offset += SHA1_BLOCK_SIZE / 8;
memcpy(&block, src, len);
block.u8[len] = 0x80;
sctx->count += len;
block.b64[bit_offset] = cpu_to_be64(sctx->count << 3);
block_fn(sctx, block.u8, (bit_offset + 1) * 8 / SHA1_BLOCK_SIZE);
memzero_explicit(&block, sizeof(block));
return 0;
}
static inline int sha1_base_finish(struct shash_desc *desc, u8 *out)
{
struct sha1_state *sctx = shash_desc_ctx(desc);
__be32 *digest = (__be32 *)out;
int i;
for (i = 0; i < SHA1_DIGEST_SIZE / sizeof(__be32); i++)
put_unaligned_be32(sctx->state[i], digest++);
return 0;
}
#endif /* _CRYPTO_SHA1_BASE_H */

include/crypto/sha2.h

@ -71,6 +71,32 @@ struct crypto_sha256_state {
u64 count;
};
static inline void sha224_block_init(struct crypto_sha256_state *sctx)
{
sctx->state[0] = SHA224_H0;
sctx->state[1] = SHA224_H1;
sctx->state[2] = SHA224_H2;
sctx->state[3] = SHA224_H3;
sctx->state[4] = SHA224_H4;
sctx->state[5] = SHA224_H5;
sctx->state[6] = SHA224_H6;
sctx->state[7] = SHA224_H7;
sctx->count = 0;
}
static inline void sha256_block_init(struct crypto_sha256_state *sctx)
{
sctx->state[0] = SHA256_H0;
sctx->state[1] = SHA256_H1;
sctx->state[2] = SHA256_H2;
sctx->state[3] = SHA256_H3;
sctx->state[4] = SHA256_H4;
sctx->state[5] = SHA256_H5;
sctx->state[6] = SHA256_H6;
sctx->state[7] = SHA256_H7;
sctx->count = 0;
}
struct sha256_state {
union {
struct crypto_sha256_state ctx;
@ -88,45 +114,763 @@ struct sha512_state {
u8 buf[SHA512_BLOCK_SIZE];
};
static inline void sha256_block_init(struct crypto_sha256_state *sctx)
/* State for the SHA-256 (and SHA-224) compression function */
struct sha256_block_state {
u32 h[SHA256_STATE_WORDS];
};
/*
* Context structure, shared by SHA-224 and SHA-256. The sha224_ctx and
* sha256_ctx structs wrap this one so that the API has proper typing and
* doesn't allow mixing the SHA-224 and SHA-256 functions arbitrarily.
*/
struct __sha256_ctx {
struct sha256_block_state state;
u64 bytecount;
u8 buf[SHA256_BLOCK_SIZE] __aligned(__alignof__(__be64));
};
void __sha256_update(struct __sha256_ctx *ctx, const u8 *data, size_t len);
/*
* HMAC key and message context structs, shared by HMAC-SHA224 and HMAC-SHA256.
* The hmac_sha224_* and hmac_sha256_* structs wrap this one so that the API has
* proper typing and doesn't allow mixing the functions arbitrarily.
*/
struct __hmac_sha256_key {
struct sha256_block_state istate;
struct sha256_block_state ostate;
};
struct __hmac_sha256_ctx {
struct __sha256_ctx sha_ctx;
struct sha256_block_state ostate;
};
void __hmac_sha256_init(struct __hmac_sha256_ctx *ctx,
const struct __hmac_sha256_key *key);
/**
* struct sha224_ctx - Context for hashing a message with SHA-224
* @ctx: private
*/
struct sha224_ctx {
struct __sha256_ctx ctx;
};
/**
* sha224_init() - Initialize a SHA-224 context for a new message
* @ctx: the context to initialize
*
* If you don't need incremental computation, consider sha224() instead.
*
* Context: Any context.
*/
void sha224_init(struct sha224_ctx *ctx);
/**
* sha224_update() - Update a SHA-224 context with message data
* @ctx: the context to update; must have been initialized
* @data: the message data
* @len: the data length in bytes
*
* This can be called any number of times.
*
* Context: Any context.
*/
static inline void sha224_update(struct sha224_ctx *ctx,
const u8 *data, size_t len)
{
sctx->state[0] = SHA256_H0;
sctx->state[1] = SHA256_H1;
sctx->state[2] = SHA256_H2;
sctx->state[3] = SHA256_H3;
sctx->state[4] = SHA256_H4;
sctx->state[5] = SHA256_H5;
sctx->state[6] = SHA256_H6;
sctx->state[7] = SHA256_H7;
sctx->count = 0;
__sha256_update(&ctx->ctx, data, len);
}
static inline void sha256_init(struct sha256_state *sctx)
/**
* sha224_final() - Finish computing a SHA-224 message digest
* @ctx: the context to finalize; must have been initialized
* @out: (output) the resulting SHA-224 message digest
*
* After finishing, this zeroizes @ctx. So the caller does not need to do it.
*
* Context: Any context.
*/
void sha224_final(struct sha224_ctx *ctx, u8 out[SHA224_DIGEST_SIZE]);
/**
* sha224() - Compute SHA-224 message digest in one shot
* @data: the message data
* @len: the data length in bytes
* @out: (output) the resulting SHA-224 message digest
*
* Context: Any context.
*/
void sha224(const u8 *data, size_t len, u8 out[SHA224_DIGEST_SIZE]);
/**
* struct hmac_sha224_key - Prepared key for HMAC-SHA224
* @key: private
*/
struct hmac_sha224_key {
struct __hmac_sha256_key key;
};
/**
* struct hmac_sha224_ctx - Context for computing HMAC-SHA224 of a message
* @ctx: private
*/
struct hmac_sha224_ctx {
struct __hmac_sha256_ctx ctx;
};
/**
* hmac_sha224_preparekey() - Prepare a key for HMAC-SHA224
* @key: (output) the key structure to initialize
* @raw_key: the raw HMAC-SHA224 key
* @raw_key_len: the key length in bytes. All key lengths are supported.
*
* Note: the caller is responsible for zeroizing both the struct hmac_sha224_key
* and the raw key once they are no longer needed.
*
* Context: Any context.
*/
void hmac_sha224_preparekey(struct hmac_sha224_key *key,
const u8 *raw_key, size_t raw_key_len);
/**
* hmac_sha224_init() - Initialize an HMAC-SHA224 context for a new message
* @ctx: (output) the HMAC context to initialize
* @key: the prepared HMAC key
*
* If you don't need incremental computation, consider hmac_sha224() instead.
*
* Context: Any context.
*/
static inline void hmac_sha224_init(struct hmac_sha224_ctx *ctx,
const struct hmac_sha224_key *key)
{
__hmac_sha256_init(&ctx->ctx, &key->key);
}
/**
* hmac_sha224_init_usingrawkey() - Initialize an HMAC-SHA224 context for a new
* message, using a raw key
* @ctx: (output) the HMAC context to initialize
* @raw_key: the raw HMAC-SHA224 key
* @raw_key_len: the key length in bytes. All key lengths are supported.
*
* If you don't need incremental computation, consider hmac_sha224_usingrawkey()
* instead.
*
* Context: Any context.
*/
void hmac_sha224_init_usingrawkey(struct hmac_sha224_ctx *ctx,
const u8 *raw_key, size_t raw_key_len);
/**
* hmac_sha224_update() - Update an HMAC-SHA224 context with message data
* @ctx: the HMAC context to update; must have been initialized
* @data: the message data
* @data_len: the data length in bytes
*
* This can be called any number of times.
*
* Context: Any context.
*/
static inline void hmac_sha224_update(struct hmac_sha224_ctx *ctx,
const u8 *data, size_t data_len)
{
__sha256_update(&ctx->ctx.sha_ctx, data, data_len);
}
/**
* hmac_sha224_final() - Finish computing an HMAC-SHA224 value
* @ctx: the HMAC context to finalize; must have been initialized
* @out: (output) the resulting HMAC-SHA224 value
*
* After finishing, this zeroizes @ctx, so the caller does not need to do it.
*
* Context: Any context.
*/
void hmac_sha224_final(struct hmac_sha224_ctx *ctx, u8 out[SHA224_DIGEST_SIZE]);
/**
* hmac_sha224() - Compute HMAC-SHA224 in one shot, using a prepared key
* @key: the prepared HMAC key
* @data: the message data
* @data_len: the data length in bytes
* @out: (output) the resulting HMAC-SHA224 value
*
* If you're using the key only once, consider using hmac_sha224_usingrawkey().
*
* Context: Any context.
*/
void hmac_sha224(const struct hmac_sha224_key *key,
const u8 *data, size_t data_len, u8 out[SHA224_DIGEST_SIZE]);
/**
* hmac_sha224_usingrawkey() - Compute HMAC-SHA224 in one shot, using a raw key
* @raw_key: the raw HMAC-SHA224 key
* @raw_key_len: the key length in bytes. All key lengths are supported.
* @data: the message data
* @data_len: the data length in bytes
* @out: (output) the resulting HMAC-SHA224 value
*
* If you're using the key multiple times, prefer to use
* hmac_sha224_preparekey() followed by multiple calls to hmac_sha224() instead.
*
* Context: Any context.
*/
void hmac_sha224_usingrawkey(const u8 *raw_key, size_t raw_key_len,
const u8 *data, size_t data_len,
u8 out[SHA224_DIGEST_SIZE]);
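/*
 * Example (editor's sketch, not part of the header): preparing an
 * HMAC-SHA224 key once and reusing it for several messages, then zeroizing
 * it as the hmac_sha224_preparekey() documentation requires.
 * memzero_explicit() is from <linux/string.h>; all other names are
 * illustrative.
 *
 *	struct hmac_sha224_key key;
 *	u8 mac1[SHA224_DIGEST_SIZE], mac2[SHA224_DIGEST_SIZE];
 *
 *	hmac_sha224_preparekey(&key, raw_key, raw_key_len);
 *	hmac_sha224(&key, msg1, len1, mac1);
 *	hmac_sha224(&key, msg2, len2, mac2);
 *	memzero_explicit(&key, sizeof(key));
 */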
/**
* struct sha256_ctx - Context for hashing a message with SHA-256
* @ctx: private
*/
struct sha256_ctx {
struct __sha256_ctx ctx;
};
/**
* sha256_init() - Initialize a SHA-256 context for a new message
* @ctx: the context to initialize
*
* If you don't need incremental computation, consider sha256() instead.
*
* Context: Any context.
*/
void sha256_init(struct sha256_ctx *ctx);
/**
* sha256_update() - Update a SHA-256 context with message data
* @ctx: the context to update; must have been initialized
* @data: the message data
* @len: the data length in bytes
*
* This can be called any number of times.
*
* Context: Any context.
*/
static inline void sha256_update(struct sha256_ctx *ctx,
const u8 *data, size_t len)
{
__sha256_update(&ctx->ctx, data, len);
}
/**
* sha256_final() - Finish computing a SHA-256 message digest
* @ctx: the context to finalize; must have been initialized
* @out: (output) the resulting SHA-256 message digest
*
* After finishing, this zeroizes @ctx, so the caller does not need to do it.
*
* Context: Any context.
*/
void sha256_final(struct sha256_ctx *ctx, u8 out[SHA256_DIGEST_SIZE]);
/**
* sha256() - Compute SHA-256 message digest in one shot
* @data: the message data
* @len: the data length in bytes
* @out: (output) the resulting SHA-256 message digest
*
* Context: Any context.
*/
void sha256(const u8 *data, size_t len, u8 out[SHA256_DIGEST_SIZE]);
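/*
 * Example (editor's sketch, not part of the header): hashing a message that
 * arrives in two pieces with the incremental SHA-256 API. Names are
 * illustrative; sha256_final() also zeroizes the context.
 *
 *	struct sha256_ctx ctx;
 *	u8 digest[SHA256_DIGEST_SIZE];
 *
 *	sha256_init(&ctx);
 *	sha256_update(&ctx, part1, len1);
 *	sha256_update(&ctx, part2, len2);
 *	sha256_final(&ctx, digest);
 */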
/**
* struct hmac_sha256_key - Prepared key for HMAC-SHA256
* @key: private
*/
struct hmac_sha256_key {
struct __hmac_sha256_key key;
};
/**
* struct hmac_sha256_ctx - Context for computing HMAC-SHA256 of a message
* @ctx: private
*/
struct hmac_sha256_ctx {
struct __hmac_sha256_ctx ctx;
};
/**
* hmac_sha256_preparekey() - Prepare a key for HMAC-SHA256
* @key: (output) the key structure to initialize
* @raw_key: the raw HMAC-SHA256 key
* @raw_key_len: the key length in bytes. All key lengths are supported.
*
* Note: the caller is responsible for zeroizing both the struct hmac_sha256_key
* and the raw key once they are no longer needed.
*
* Context: Any context.
*/
void hmac_sha256_preparekey(struct hmac_sha256_key *key,
const u8 *raw_key, size_t raw_key_len);
/**
* hmac_sha256_init() - Initialize an HMAC-SHA256 context for a new message
* @ctx: (output) the HMAC context to initialize
* @key: the prepared HMAC key
*
* If you don't need incremental computation, consider hmac_sha256() instead.
*
* Context: Any context.
*/
static inline void hmac_sha256_init(struct hmac_sha256_ctx *ctx,
const struct hmac_sha256_key *key)
{
__hmac_sha256_init(&ctx->ctx, &key->key);
}
/**
* hmac_sha256_init_usingrawkey() - Initialize an HMAC-SHA256 context for a new
* message, using a raw key
* @ctx: (output) the HMAC context to initialize
* @raw_key: the raw HMAC-SHA256 key
* @raw_key_len: the key length in bytes. All key lengths are supported.
*
* If you don't need incremental computation, consider hmac_sha256_usingrawkey()
* instead.
*
* Context: Any context.
*/
void hmac_sha256_init_usingrawkey(struct hmac_sha256_ctx *ctx,
const u8 *raw_key, size_t raw_key_len);
/**
* hmac_sha256_update() - Update an HMAC-SHA256 context with message data
* @ctx: the HMAC context to update; must have been initialized
* @data: the message data
* @data_len: the data length in bytes
*
* This can be called any number of times.
*
* Context: Any context.
*/
static inline void hmac_sha256_update(struct hmac_sha256_ctx *ctx,
const u8 *data, size_t data_len)
{
__sha256_update(&ctx->ctx.sha_ctx, data, data_len);
}
/**
* hmac_sha256_final() - Finish computing an HMAC-SHA256 value
* @ctx: the HMAC context to finalize; must have been initialized
* @out: (output) the resulting HMAC-SHA256 value
*
* After finishing, this zeroizes @ctx, so the caller does not need to do it.
*
* Context: Any context.
*/
void hmac_sha256_final(struct hmac_sha256_ctx *ctx, u8 out[SHA256_DIGEST_SIZE]);
/**
* hmac_sha256() - Compute HMAC-SHA256 in one shot, using a prepared key
* @key: the prepared HMAC key
* @data: the message data
* @data_len: the data length in bytes
* @out: (output) the resulting HMAC-SHA256 value
*
* If you're using the key only once, consider using hmac_sha256_usingrawkey().
*
* Context: Any context.
*/
void hmac_sha256(const struct hmac_sha256_key *key,
const u8 *data, size_t data_len, u8 out[SHA256_DIGEST_SIZE]);
/**
* hmac_sha256_usingrawkey() - Compute HMAC-SHA256 in one shot, using a raw key
* @raw_key: the raw HMAC-SHA256 key
* @raw_key_len: the key length in bytes. All key lengths are supported.
* @data: the message data
* @data_len: the data length in bytes
* @out: (output) the resulting HMAC-SHA256 value
*
* If you're using the key multiple times, prefer to use
* hmac_sha256_preparekey() followed by multiple calls to hmac_sha256() instead.
*
* Context: Any context.
*/
void hmac_sha256_usingrawkey(const u8 *raw_key, size_t raw_key_len,
const u8 *data, size_t data_len,
u8 out[SHA256_DIGEST_SIZE]);
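/*
 * Example (editor's sketch, not part of the header): one-shot HMAC-SHA256
 * with a single-use raw key, avoiding the separate preparekey step. Names
 * are illustrative.
 *
 *	u8 mac[SHA256_DIGEST_SIZE];
 *
 *	hmac_sha256_usingrawkey(raw_key, raw_key_len, msg, msg_len, mac);
 */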
/* State for the SHA-512 (and SHA-384) compression function */
struct sha512_block_state {
u64 h[8];
};
/*
* Context structure, shared by SHA-384 and SHA-512. The sha384_ctx and
* sha512_ctx structs wrap this one so that the API has proper typing and
* doesn't allow mixing the SHA-384 and SHA-512 functions arbitrarily.
*/
struct __sha512_ctx {
struct sha512_block_state state;
u64 bytecount_lo;
u64 bytecount_hi;
u8 buf[SHA512_BLOCK_SIZE] __aligned(__alignof__(__be64));
};
void __sha512_update(struct __sha512_ctx *ctx, const u8 *data, size_t len);
/*
* HMAC key and message context structs, shared by HMAC-SHA384 and HMAC-SHA512.
* The hmac_sha384_* and hmac_sha512_* structs wrap this one so that the API has
* proper typing and doesn't allow mixing the functions arbitrarily.
*/
struct __hmac_sha512_key {
struct sha512_block_state istate;
struct sha512_block_state ostate;
};
struct __hmac_sha512_ctx {
struct __sha512_ctx sha_ctx;
struct sha512_block_state ostate;
};
void __hmac_sha512_init(struct __hmac_sha512_ctx *ctx,
const struct __hmac_sha512_key *key);
/**
* struct sha384_ctx - Context for hashing a message with SHA-384
* @ctx: private
*/
struct sha384_ctx {
struct __sha512_ctx ctx;
};
/**
* sha384_init() - Initialize a SHA-384 context for a new message
* @ctx: the context to initialize
*
* If you don't need incremental computation, consider sha384() instead.
*
* Context: Any context.
*/
void sha384_init(struct sha384_ctx *ctx);
/**
* sha384_update() - Update a SHA-384 context with message data
* @ctx: the context to update; must have been initialized
* @data: the message data
* @len: the data length in bytes
*
* This can be called any number of times.
*
* Context: Any context.
*/
static inline void sha384_update(struct sha384_ctx *ctx,
const u8 *data, size_t len)
{
__sha512_update(&ctx->ctx, data, len);
}
/**
* sha384_final() - Finish computing a SHA-384 message digest
* @ctx: the context to finalize; must have been initialized
* @out: (output) the resulting SHA-384 message digest
*
* After finishing, this zeroizes @ctx, so the caller does not need to do it.
*
* Context: Any context.
*/
void sha384_final(struct sha384_ctx *ctx, u8 out[SHA384_DIGEST_SIZE]);
/**
* sha384() - Compute SHA-384 message digest in one shot
* @data: the message data
* @len: the data length in bytes
* @out: (output) the resulting SHA-384 message digest
*
* Context: Any context.
*/
void sha384(const u8 *data, size_t len, u8 out[SHA384_DIGEST_SIZE]);
/**
* struct hmac_sha384_key - Prepared key for HMAC-SHA384
* @key: private
*/
struct hmac_sha384_key {
struct __hmac_sha512_key key;
};
/**
* struct hmac_sha384_ctx - Context for computing HMAC-SHA384 of a message
* @ctx: private
*/
struct hmac_sha384_ctx {
struct __hmac_sha512_ctx ctx;
};
/**
* hmac_sha384_preparekey() - Prepare a key for HMAC-SHA384
* @key: (output) the key structure to initialize
* @raw_key: the raw HMAC-SHA384 key
* @raw_key_len: the key length in bytes. All key lengths are supported.
*
* Note: the caller is responsible for zeroizing both the struct hmac_sha384_key
* and the raw key once they are no longer needed.
*
* Context: Any context.
*/
void hmac_sha384_preparekey(struct hmac_sha384_key *key,
const u8 *raw_key, size_t raw_key_len);
/**
* hmac_sha384_init() - Initialize an HMAC-SHA384 context for a new message
* @ctx: (output) the HMAC context to initialize
* @key: the prepared HMAC key
*
* If you don't need incremental computation, consider hmac_sha384() instead.
*
* Context: Any context.
*/
static inline void hmac_sha384_init(struct hmac_sha384_ctx *ctx,
const struct hmac_sha384_key *key)
{
__hmac_sha512_init(&ctx->ctx, &key->key);
}
/**
* hmac_sha384_init_usingrawkey() - Initialize an HMAC-SHA384 context for a new
* message, using a raw key
* @ctx: (output) the HMAC context to initialize
* @raw_key: the raw HMAC-SHA384 key
* @raw_key_len: the key length in bytes. All key lengths are supported.
*
* If you don't need incremental computation, consider hmac_sha384_usingrawkey()
* instead.
*
* Context: Any context.
*/
void hmac_sha384_init_usingrawkey(struct hmac_sha384_ctx *ctx,
const u8 *raw_key, size_t raw_key_len);
/**
* hmac_sha384_update() - Update an HMAC-SHA384 context with message data
* @ctx: the HMAC context to update; must have been initialized
* @data: the message data
* @data_len: the data length in bytes
*
* This can be called any number of times.
*
* Context: Any context.
*/
static inline void hmac_sha384_update(struct hmac_sha384_ctx *ctx,
const u8 *data, size_t data_len)
{
__sha512_update(&ctx->ctx.sha_ctx, data, data_len);
}
/**
* hmac_sha384_final() - Finish computing an HMAC-SHA384 value
* @ctx: the HMAC context to finalize; must have been initialized
* @out: (output) the resulting HMAC-SHA384 value
*
* After finishing, this zeroizes @ctx, so the caller does not need to do it.
*
* Context: Any context.
*/
void hmac_sha384_final(struct hmac_sha384_ctx *ctx, u8 out[SHA384_DIGEST_SIZE]);
/**
* hmac_sha384() - Compute HMAC-SHA384 in one shot, using a prepared key
* @key: the prepared HMAC key
* @data: the message data
* @data_len: the data length in bytes
* @out: (output) the resulting HMAC-SHA384 value
*
* If you're using the key only once, consider using hmac_sha384_usingrawkey().
*
* Context: Any context.
*/
void hmac_sha384(const struct hmac_sha384_key *key,
const u8 *data, size_t data_len, u8 out[SHA384_DIGEST_SIZE]);
/**
* hmac_sha384_usingrawkey() - Compute HMAC-SHA384 in one shot, using a raw key
* @raw_key: the raw HMAC-SHA384 key
* @raw_key_len: the key length in bytes. All key lengths are supported.
* @data: the message data
* @data_len: the data length in bytes
* @out: (output) the resulting HMAC-SHA384 value
*
* If you're using the key multiple times, prefer to use
* hmac_sha384_preparekey() followed by multiple calls to hmac_sha384() instead.
*
* Context: Any context.
*/
void hmac_sha384_usingrawkey(const u8 *raw_key, size_t raw_key_len,
const u8 *data, size_t data_len,
u8 out[SHA384_DIGEST_SIZE]);
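/*
 * Example (editor's sketch, not part of the header): incremental
 * HMAC-SHA384 over a header and a body, keyed directly with a raw key.
 * Names are illustrative; hmac_sha384_final() also zeroizes the context.
 *
 *	struct hmac_sha384_ctx ctx;
 *	u8 mac[SHA384_DIGEST_SIZE];
 *
 *	hmac_sha384_init_usingrawkey(&ctx, raw_key, raw_key_len);
 *	hmac_sha384_update(&ctx, hdr, hdr_len);
 *	hmac_sha384_update(&ctx, body, body_len);
 *	hmac_sha384_final(&ctx, mac);
 */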
/**
* struct sha512_ctx - Context for hashing a message with SHA-512
* @ctx: private
*/
struct sha512_ctx {
struct __sha512_ctx ctx;
};
/**
* sha512_init() - Initialize a SHA-512 context for a new message
* @ctx: the context to initialize
*
* If you don't need incremental computation, consider sha512() instead.
*
* Context: Any context.
*/
void sha512_init(struct sha512_ctx *ctx);
/**
* sha512_update() - Update a SHA-512 context with message data
* @ctx: the context to update; must have been initialized
* @data: the message data
* @len: the data length in bytes
*
* This can be called any number of times.
*
* Context: Any context.
*/
static inline void sha512_update(struct sha512_ctx *ctx,
const u8 *data, size_t len)
{
__sha512_update(&ctx->ctx, data, len);
}
/**
* sha512_final() - Finish computing a SHA-512 message digest
* @ctx: the context to finalize; must have been initialized
* @out: (output) the resulting SHA-512 message digest
*
* After finishing, this zeroizes @ctx, so the caller does not need to do it.
*
* Context: Any context.
*/
void sha512_final(struct sha512_ctx *ctx, u8 out[SHA512_DIGEST_SIZE]);
/**
* sha512() - Compute SHA-512 message digest in one shot
* @data: the message data
* @len: the data length in bytes
* @out: (output) the resulting SHA-512 message digest
*
* Context: Any context.
*/
void sha512(const u8 *data, size_t len, u8 out[SHA512_DIGEST_SIZE]);
/**
* struct hmac_sha512_key - Prepared key for HMAC-SHA512
* @key: private
*/
struct hmac_sha512_key {
struct __hmac_sha512_key key;
};
/**
* struct hmac_sha512_ctx - Context for computing HMAC-SHA512 of a message
* @ctx: private
*/
struct hmac_sha512_ctx {
struct __hmac_sha512_ctx ctx;
};
/**
* hmac_sha512_preparekey() - Prepare a key for HMAC-SHA512
* @key: (output) the key structure to initialize
* @raw_key: the raw HMAC-SHA512 key
* @raw_key_len: the key length in bytes. All key lengths are supported.
*
* Note: the caller is responsible for zeroizing both the struct hmac_sha512_key
* and the raw key once they are no longer needed.
*
* Context: Any context.
*/
void hmac_sha512_preparekey(struct hmac_sha512_key *key,
const u8 *raw_key, size_t raw_key_len);
/**
* hmac_sha512_init() - Initialize an HMAC-SHA512 context for a new message
* @ctx: (output) the HMAC context to initialize
* @key: the prepared HMAC key
*
* If you don't need incremental computation, consider hmac_sha512() instead.
*
* Context: Any context.
*/
static inline void hmac_sha512_init(struct hmac_sha512_ctx *ctx,
const struct hmac_sha512_key *key)
{
__hmac_sha512_init(&ctx->ctx, &key->key);
}
/**
* hmac_sha512_init_usingrawkey() - Initialize an HMAC-SHA512 context for a new
* message, using a raw key
* @ctx: (output) the HMAC context to initialize
* @raw_key: the raw HMAC-SHA512 key
* @raw_key_len: the key length in bytes. All key lengths are supported.
*
* If you don't need incremental computation, consider hmac_sha512_usingrawkey()
* instead.
*
* Context: Any context.
*/
void hmac_sha512_init_usingrawkey(struct hmac_sha512_ctx *ctx,
const u8 *raw_key, size_t raw_key_len);
/**
* hmac_sha512_update() - Update an HMAC-SHA512 context with message data
* @ctx: the HMAC context to update; must have been initialized
* @data: the message data
* @data_len: the data length in bytes
*
* This can be called any number of times.
*
* Context: Any context.
*/
static inline void hmac_sha512_update(struct hmac_sha512_ctx *ctx,
const u8 *data, size_t data_len)
{
__sha512_update(&ctx->ctx.sha_ctx, data, data_len);
}
/**
* hmac_sha512_final() - Finish computing an HMAC-SHA512 value
* @ctx: the HMAC context to finalize; must have been initialized
* @out: (output) the resulting HMAC-SHA512 value
*
* After finishing, this zeroizes @ctx, so the caller does not need to do it.
*
* Context: Any context.
*/
void hmac_sha512_final(struct hmac_sha512_ctx *ctx, u8 out[SHA512_DIGEST_SIZE]);
/**
* hmac_sha512() - Compute HMAC-SHA512 in one shot, using a prepared key
* @key: the prepared HMAC key
* @data: the message data
* @data_len: the data length in bytes
* @out: (output) the resulting HMAC-SHA512 value
*
* If you're using the key only once, consider using hmac_sha512_usingrawkey().
*
* Context: Any context.
*/
void hmac_sha512(const struct hmac_sha512_key *key,
const u8 *data, size_t data_len, u8 out[SHA512_DIGEST_SIZE]);
/**
* hmac_sha512_usingrawkey() - Compute HMAC-SHA512 in one shot, using a raw key
* @raw_key: the raw HMAC-SHA512 key
* @raw_key_len: the key length in bytes. All key lengths are supported.
* @data: the message data
* @data_len: the data length in bytes
* @out: (output) the resulting HMAC-SHA512 value
*
* If you're using the key multiple times, prefer to use
* hmac_sha512_preparekey() followed by multiple calls to hmac_sha512() instead.
*
* Context: Any context.
*/
void hmac_sha512_usingrawkey(const u8 *raw_key, size_t raw_key_len,
const u8 *data, size_t data_len,
u8 out[SHA512_DIGEST_SIZE]);
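/*
 * Example (editor's sketch, not part of the header): incremental
 * HMAC-SHA512 with a previously prepared key, e.g. when one key
 * authenticates many streamed messages. Here key is a struct
 * hmac_sha512_key set up earlier by hmac_sha512_preparekey(); the other
 * names are illustrative.
 *
 *	struct hmac_sha512_ctx ctx;
 *	u8 mac[SHA512_DIGEST_SIZE];
 *
 *	hmac_sha512_init(&ctx, &key);
 *	hmac_sha512_update(&ctx, msg, msg_len);
 *	hmac_sha512_final(&ctx, mac);
 */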
#endif /* _CRYPTO_SHA2_H */


@@ -1,120 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* sha512_base.h - core logic for SHA-512 implementations
*
* Copyright (C) 2015 Linaro Ltd <ard.biesheuvel@linaro.org>
*/
#ifndef _CRYPTO_SHA512_BASE_H
#define _CRYPTO_SHA512_BASE_H
#include <crypto/internal/hash.h>
#include <crypto/sha2.h>
#include <linux/compiler.h>
#include <linux/math.h>
#include <linux/string.h>
#include <linux/types.h>
#include <linux/unaligned.h>
typedef void (sha512_block_fn)(struct sha512_state *sst, u8 const *src,
int blocks);
static inline int sha384_base_init(struct shash_desc *desc)
{
struct sha512_state *sctx = shash_desc_ctx(desc);
sctx->state[0] = SHA384_H0;
sctx->state[1] = SHA384_H1;
sctx->state[2] = SHA384_H2;
sctx->state[3] = SHA384_H3;
sctx->state[4] = SHA384_H4;
sctx->state[5] = SHA384_H5;
sctx->state[6] = SHA384_H6;
sctx->state[7] = SHA384_H7;
sctx->count[0] = sctx->count[1] = 0;
return 0;
}
static inline int sha512_base_init(struct shash_desc *desc)
{
struct sha512_state *sctx = shash_desc_ctx(desc);
sctx->state[0] = SHA512_H0;
sctx->state[1] = SHA512_H1;
sctx->state[2] = SHA512_H2;
sctx->state[3] = SHA512_H3;
sctx->state[4] = SHA512_H4;
sctx->state[5] = SHA512_H5;
sctx->state[6] = SHA512_H6;
sctx->state[7] = SHA512_H7;
sctx->count[0] = sctx->count[1] = 0;
return 0;
}
static inline int sha512_base_do_update_blocks(struct shash_desc *desc,
const u8 *data,
unsigned int len,
sha512_block_fn *block_fn)
{
unsigned int remain = len - round_down(len, SHA512_BLOCK_SIZE);
struct sha512_state *sctx = shash_desc_ctx(desc);
len -= remain;
sctx->count[0] += len;
if (sctx->count[0] < len)
sctx->count[1]++;
block_fn(sctx, data, len / SHA512_BLOCK_SIZE);
return remain;
}
static inline int sha512_base_do_finup(struct shash_desc *desc, const u8 *src,
unsigned int len,
sha512_block_fn *block_fn)
{
unsigned int bit_offset = SHA512_BLOCK_SIZE / 8 - 2;
struct sha512_state *sctx = shash_desc_ctx(desc);
union {
__be64 b64[SHA512_BLOCK_SIZE / 4];
u8 u8[SHA512_BLOCK_SIZE * 2];
} block = {};
if (len >= SHA512_BLOCK_SIZE) {
int remain;
remain = sha512_base_do_update_blocks(desc, src, len, block_fn);
src += len - remain;
len = remain;
}
if (len >= bit_offset * 8)
bit_offset += SHA512_BLOCK_SIZE / 8;
memcpy(&block, src, len);
block.u8[len] = 0x80;
sctx->count[0] += len;
block.b64[bit_offset] = cpu_to_be64(sctx->count[1] << 3 |
sctx->count[0] >> 61);
block.b64[bit_offset + 1] = cpu_to_be64(sctx->count[0] << 3);
block_fn(sctx, block.u8, (bit_offset + 2) * 8 / SHA512_BLOCK_SIZE);
memzero_explicit(&block, sizeof(block));
return 0;
}
static inline int sha512_base_finish(struct shash_desc *desc, u8 *out)
{
unsigned int digest_size = crypto_shash_digestsize(desc->tfm);
struct sha512_state *sctx = shash_desc_ctx(desc);
__be64 *digest = (__be64 *)out;
int i;
for (i = 0; digest_size > 0; i++, digest_size -= sizeof(__be64))
put_unaligned_be64(sctx->state[i], digest++);
return 0;
}
void sha512_generic_block_fn(struct sha512_state *sst, u8 const *src,
int blocks);
#endif /* _CRYPTO_SHA512_BASE_H */


@@ -304,7 +304,7 @@ int bpf_prog_calc_tag(struct bpf_prog *fp)
 	if (!raw)
 		return -ENOMEM;
 
-	sha1_init(digest);
+	sha1_init_raw(digest);
 	memset(ws, 0, sizeof(ws));
 
 	/* We need to take out the map fd for the digest calculation


@@ -751,7 +751,7 @@ int kexec_add_buffer(struct kexec_buf *kbuf)
 /* Calculate and store the digest of segments */
 static int kexec_calculate_store_digests(struct kimage *image)
 {
-	struct sha256_state state;
+	struct sha256_ctx sctx;
 	int ret = 0, i, j, zero_buf_sz, sha_region_sz;
 	size_t nullsz;
 	u8 digest[SHA256_DIGEST_SIZE];
@@ -770,7 +770,7 @@ static int kexec_calculate_store_digests(struct kimage *image)
 	if (!sha_regions)
 		return -ENOMEM;
 
-	sha256_init(&state);
+	sha256_init(&sctx);
 
 	for (j = i = 0; i < image->nr_segments; i++) {
 		struct kexec_segment *ksegment;
@@ -796,7 +796,7 @@ static int kexec_calculate_store_digests(struct kimage *image)
 		if (check_ima_segment_index(image, i))
 			continue;
 
-		sha256_update(&state, ksegment->kbuf, ksegment->bufsz);
+		sha256_update(&sctx, ksegment->kbuf, ksegment->bufsz);
 
 		/*
 		 * Assume rest of the buffer is filled with zero and
@@ -808,7 +808,7 @@ static int kexec_calculate_store_digests(struct kimage *image)
 			if (bytes > zero_buf_sz)
 				bytes = zero_buf_sz;
-			sha256_update(&state, zero_buf, bytes);
+			sha256_update(&sctx, zero_buf, bytes);
 			nullsz -= bytes;
 		}
@@ -817,7 +817,7 @@ static int kexec_calculate_store_digests(struct kimage *image)
 		j++;
 	}
 
-	sha256_final(&state, digest);
+	sha256_final(&sctx, digest);
 
 	ret = kexec_purgatory_get_set_symbol(image, "purgatory_sha_regions",
 					     sha_regions, sha_region_sz, 0);


@@ -2,6 +2,9 @@
 
 menu "Crypto library routines"
 
+config CRYPTO_HASH_INFO
+	bool
+
 config CRYPTO_LIB_UTILS
 	tristate
@@ -136,6 +139,20 @@ config CRYPTO_LIB_CHACHA20POLY1305
 config CRYPTO_LIB_SHA1
 	tristate
+	help
+	  The SHA-1 library functions. Select this if your module uses any of
+	  the functions from <crypto/sha1.h>.
+
+config CRYPTO_LIB_SHA1_ARCH
+	bool
+	depends on CRYPTO_LIB_SHA1 && !UML
+	default y if ARM
+	default y if ARM64 && KERNEL_MODE_NEON
+	default y if MIPS && CPU_CAVIUM_OCTEON
+	default y if PPC
+	default y if S390
+	default y if SPARC64
+	default y if X86_64
 
 config CRYPTO_LIB_SHA256
 	tristate
@@ -144,56 +161,60 @@ config CRYPTO_LIB_SHA256
 	  by either the generic implementation or an arch-specific one, if one
 	  is available and enabled.
 
-config CRYPTO_ARCH_HAVE_LIB_SHA256
+config CRYPTO_LIB_SHA256_ARCH
 	bool
-	help
-	  Declares whether the architecture provides an arch-specific
-	  accelerated implementation of the SHA-256 library interface.
+	depends on CRYPTO_LIB_SHA256 && !UML
+	default y if ARM && !CPU_V7M
+	default y if ARM64
+	default y if MIPS && CPU_CAVIUM_OCTEON
+	default y if PPC && SPE
+	default y if RISCV && 64BIT && RISCV_ISA_V && TOOLCHAIN_HAS_VECTOR_CRYPTO
+	default y if S390
+	default y if SPARC64
+	default y if X86_64
 
-config CRYPTO_ARCH_HAVE_LIB_SHA256_SIMD
-	bool
-	help
-	  Declares whether the architecture provides an arch-specific
-	  accelerated implementation of the SHA-256 library interface
-	  that is SIMD-based and therefore not usable in hardirq
-	  context.
-
-config CRYPTO_LIB_SHA256_GENERIC
+config CRYPTO_LIB_SHA512
 	tristate
-	default CRYPTO_LIB_SHA256 if !CRYPTO_ARCH_HAVE_LIB_SHA256
 	help
-	  This symbol can be selected by arch implementations of the SHA-256
-	  library interface that require the generic code as a fallback, e.g.,
-	  for SIMD implementations. If no arch specific implementation is
-	  enabled, this implementation serves the users of CRYPTO_LIB_SHA256.
+	  The SHA-384, SHA-512, HMAC-SHA384, and HMAC-SHA512 library functions.
+	  Select this if your module uses any of these functions from
+	  <crypto/sha2.h>.
+
+config CRYPTO_LIB_SHA512_ARCH
+	bool
+	depends on CRYPTO_LIB_SHA512 && !UML
+	default y if ARM && !CPU_V7M
+	default y if ARM64
+	default y if MIPS && CPU_CAVIUM_OCTEON
+	default y if RISCV && 64BIT && RISCV_ISA_V && TOOLCHAIN_HAS_VECTOR_CRYPTO
+	default y if S390
+	default y if SPARC64
+	default y if X86_64
 
 config CRYPTO_LIB_SM3
 	tristate
 
 if !KMSAN # avoid false positives from assembly
 if ARM
-source "arch/arm/lib/crypto/Kconfig"
+source "lib/crypto/arm/Kconfig"
 endif
 if ARM64
-source "arch/arm64/lib/crypto/Kconfig"
+source "lib/crypto/arm64/Kconfig"
 endif
 if MIPS
-source "arch/mips/lib/crypto/Kconfig"
+source "lib/crypto/mips/Kconfig"
 endif
 if PPC
-source "arch/powerpc/lib/crypto/Kconfig"
+source "lib/crypto/powerpc/Kconfig"
 endif
 if RISCV
-source "arch/riscv/lib/crypto/Kconfig"
+source "lib/crypto/riscv/Kconfig"
 endif
 if S390
-source "arch/s390/lib/crypto/Kconfig"
-endif
-
-if SPARC
-source "arch/sparc/lib/crypto/Kconfig"
+source "lib/crypto/s390/Kconfig"
 endif
 if X86
-source "arch/x86/lib/crypto/Kconfig"
+source "lib/crypto/x86/Kconfig"
 endif
 endif


@@ -1,5 +1,15 @@
 # SPDX-License-Identifier: GPL-2.0
 
+aflags-thumb2-$(CONFIG_THUMB2_KERNEL) := -U__thumb2__ -D__thumb2__=1
+
+quiet_cmd_perlasm = PERLASM $@
+      cmd_perlasm = $(PERL) $(<) > $(@)
+
+quiet_cmd_perlasm_with_args = PERLASM $@
+      cmd_perlasm_with_args = $(PERL) $(<) void $(@)
+
+obj-$(CONFIG_CRYPTO_HASH_INFO) += hash_info.o
+
 obj-$(CONFIG_CRYPTO_LIB_UTILS) += libcryptoutils.o
 libcryptoutils-y := memneq.o utils.o
@@ -55,14 +65,91 @@ libpoly1305-generic-y := poly1305-donna32.o
 libpoly1305-generic-$(CONFIG_ARCH_SUPPORTS_INT128) := poly1305-donna64.o
 libpoly1305-generic-y += poly1305-generic.o
 
-obj-$(CONFIG_CRYPTO_LIB_SHA1) += libsha1.o
-libsha1-y := sha1.o
+################################################################################
 
-obj-$(CONFIG_CRYPTO_LIB_SHA256) += libsha256.o
-libsha256-y := sha256.o
+obj-$(CONFIG_CRYPTO_LIB_SHA1) += libsha1.o
+libsha1-y := sha1.o
+ifeq ($(CONFIG_CRYPTO_LIB_SHA1_ARCH),y)
+CFLAGS_sha1.o += -I$(src)/$(SRCARCH)
+ifeq ($(CONFIG_ARM),y)
+libsha1-y += arm/sha1-armv4-large.o
+libsha1-$(CONFIG_KERNEL_MODE_NEON) += arm/sha1-armv7-neon.o \
+				      arm/sha1-ce-core.o
+endif
+libsha1-$(CONFIG_ARM64) += arm64/sha1-ce-core.o
+ifeq ($(CONFIG_PPC),y)
+libsha1-y += powerpc/sha1-powerpc-asm.o
+libsha1-$(CONFIG_SPE) += powerpc/sha1-spe-asm.o
+endif
+libsha1-$(CONFIG_SPARC) += sparc/sha1_asm.o
+libsha1-$(CONFIG_X86) += x86/sha1-ssse3-and-avx.o \
+			 x86/sha1-avx2-asm.o \
+			 x86/sha1-ni-asm.o
+endif # CONFIG_CRYPTO_LIB_SHA1_ARCH
 
-obj-$(CONFIG_CRYPTO_LIB_SHA256_GENERIC) += libsha256-generic.o
-libsha256-generic-y := sha256-generic.o
+################################################################################
+
+obj-$(CONFIG_CRYPTO_LIB_SHA256) += libsha256.o
+libsha256-y := sha256.o
+ifeq ($(CONFIG_CRYPTO_LIB_SHA256_ARCH),y)
+CFLAGS_sha256.o += -I$(src)/$(SRCARCH)
+ifeq ($(CONFIG_ARM),y)
+libsha256-y += arm/sha256-ce.o arm/sha256-core.o
+$(obj)/arm/sha256-core.S: $(src)/arm/sha256-armv4.pl
+	$(call cmd,perlasm)
+clean-files += arm/sha256-core.S
+AFLAGS_arm/sha256-core.o += $(aflags-thumb2-y)
+endif
+ifeq ($(CONFIG_ARM64),y)
+libsha256-y += arm64/sha256-core.o
+$(obj)/arm64/sha256-core.S: $(src)/arm64/sha2-armv8.pl
+	$(call cmd,perlasm_with_args)
+clean-files += arm64/sha256-core.S
+libsha256-$(CONFIG_KERNEL_MODE_NEON) += arm64/sha256-ce.o
+endif
+libsha256-$(CONFIG_PPC) += powerpc/sha256-spe-asm.o
+libsha256-$(CONFIG_RISCV) += riscv/sha256-riscv64-zvknha_or_zvknhb-zvkb.o
+libsha256-$(CONFIG_SPARC) += sparc/sha256_asm.o
+libsha256-$(CONFIG_X86) += x86/sha256-ssse3-asm.o \
+			   x86/sha256-avx-asm.o \
+			   x86/sha256-avx2-asm.o \
+			   x86/sha256-ni-asm.o
+endif # CONFIG_CRYPTO_LIB_SHA256_ARCH
+
+################################################################################
+
+obj-$(CONFIG_CRYPTO_LIB_SHA512) += libsha512.o
+libsha512-y := sha512.o
+ifeq ($(CONFIG_CRYPTO_LIB_SHA512_ARCH),y)
+CFLAGS_sha512.o += -I$(src)/$(SRCARCH)
+ifeq ($(CONFIG_ARM),y)
+libsha512-y += arm/sha512-core.o
+$(obj)/arm/sha512-core.S: $(src)/arm/sha512-armv4.pl
+	$(call cmd,perlasm)
+clean-files += arm/sha512-core.S
+AFLAGS_arm/sha512-core.o += $(aflags-thumb2-y)
+endif
+ifeq ($(CONFIG_ARM64),y)
+libsha512-y += arm64/sha512-core.o
+$(obj)/arm64/sha512-core.S: $(src)/arm64/sha2-armv8.pl
+	$(call cmd,perlasm_with_args)
+clean-files += arm64/sha512-core.S
+libsha512-$(CONFIG_KERNEL_MODE_NEON) += arm64/sha512-ce-core.o
+endif
+libsha512-$(CONFIG_RISCV) += riscv/sha512-riscv64-zvknhb-zvkb.o
+libsha512-$(CONFIG_SPARC) += sparc/sha512_asm.o
+libsha512-$(CONFIG_X86) += x86/sha512-ssse3-asm.o \
+			   x86/sha512-avx-asm.o \
+			   x86/sha512-avx2-asm.o
+endif # CONFIG_CRYPTO_LIB_SHA512_ARCH
+
+################################################################################
 
 obj-$(CONFIG_MPILIB) += mpi/
@@ -70,3 +157,11 @@ obj-$(CONFIG_CRYPTO_SELFTESTS_FULL) += simd.o
 
 obj-$(CONFIG_CRYPTO_LIB_SM3) += libsm3.o
 libsm3-y := sm3.o
+
+obj-$(CONFIG_ARM) += arm/
+obj-$(CONFIG_ARM64) += arm64/
+obj-$(CONFIG_MIPS) += mips/
+obj-$(CONFIG_PPC) += powerpc/
+obj-$(CONFIG_RISCV) += riscv/
+obj-$(CONFIG_S390) += s390/
+obj-$(CONFIG_X86) += x86/


@@ -5,6 +5,7 @@
 #include <crypto/aes.h>
 #include <linux/crypto.h>
+#include <linux/export.h>
 #include <linux/module.h>
 #include <linux/unaligned.h>


@@ -5,11 +5,10 @@
* Copyright 2023 Google LLC
*/
#include <linux/module.h>
#include <crypto/algapi.h>
#include <crypto/aes.h>
#include <crypto/algapi.h>
#include <linux/export.h>
#include <linux/module.h>
#include <asm/irqflags.h>
static void aescfb_encrypt_block(const struct crypto_aes_ctx *ctx, void *dst,


@@ -5,12 +5,11 @@
* Copyright 2022 Google LLC
*/
#include <linux/module.h>
#include <crypto/algapi.h>
#include <crypto/gcm.h>
#include <crypto/ghash.h>
#include <linux/export.h>
#include <linux/module.h>
#include <asm/irqflags.h>
static void aesgcm_encrypt_block(const struct crypto_aes_ctx *ctx, void *dst,

Some files were not shown because too many files have changed in this diff.