
Merge tag 'v6.17-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6

Pull crypto update from Herbert Xu:
 "API:
   - Allow hash drivers without fallbacks (e.g., hardware key)

  Algorithms:
   - Add hmac hardware key support (phmac) on s390
   - Re-enable sha384 in FIPS mode
   - Disable sha1 in FIPS mode
   - Convert zstd to acomp

  Drivers:
   - Lower priority of qat skcipher and aead
   - Convert aspeed to partial block API
   - Add iMX8QXP support in caam
   - Add rate limiting support for GEN6 devices in qat
   - Enable telemetry for GEN6 devices in qat
   - Implement full backlog mode for hisilicon/sec2"

* tag 'v6.17-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (116 commits)
  crypto: keembay - Use min() to simplify ocs_create_linked_list_from_sg()
  crypto: hisilicon/hpre - fix dma unmap sequence
  crypto: qat - make adf_dev_autoreset() static
  crypto: ccp - reduce stack usage in ccp_run_aes_gcm_cmd
  crypto: qat - refactor ring-related debug functions
  crypto: qat - fix seq_file position update in adf_ring_next()
  crypto: qat - fix DMA direction for compression on GEN2 devices
  crypto: jitter - replace ARRAY_SIZE definition with header include
  crypto: engine - remove {prepare,unprepare}_crypt_hardware callbacks
  crypto: engine - remove request batching support
  crypto: qat - flush misc workqueue during device shutdown
  crypto: qat - enable rate limiting feature for GEN6 devices
  crypto: qat - add compression slice count for rate limiting
  crypto: qat - add get_svc_slice_cnt() in device data structure
  crypto: qat - add adf_rl_get_num_svc_aes() in rate limiting
  crypto: qat - relocate service related functions
  crypto: qat - consolidate service enums
  crypto: qat - add decompression service for rate limiting
  crypto: qat - validate service in rate limiting sysfs api
  crypto: hisilicon/sec2 - implement full backlog mode for sec
  ...
Linus Torvalds 2025-07-31 09:45:28 -07:00
commit 44a8c96edd
150 changed files with 4108 additions and 2064 deletions


@ -67,7 +67,7 @@ Contact: qat-linux@intel.com
Description: (RO) Read returns power management information specific to the
QAT device.
This attribute is only available for qat_4xxx devices.
This attribute is only available for qat_4xxx and qat_6xxx devices.
What: /sys/kernel/debug/qat_<device>_<BDF>/cnv_errors
Date: January 2024


@ -32,7 +32,7 @@ Description: (RW) Enables/disables the reporting of telemetry metrics.
echo 0 > /sys/kernel/debug/qat_4xxx_0000:6b:00.0/telemetry/control
This attribute is only available for qat_4xxx devices.
This attribute is only available for qat_4xxx and qat_6xxx devices.
What: /sys/kernel/debug/qat_<device>_<BDF>/telemetry/device_data
Date: March 2024
@ -67,6 +67,10 @@ Description: (RO) Reports device telemetry counters.
exec_xlt<N> execution count of Translator slice N
util_dcpr<N> utilization of Decompression slice N [%]
exec_dcpr<N> execution count of Decompression slice N
util_cnv<N> utilization of Compression and verify slice N [%]
exec_cnv<N> execution count of Compression and verify slice N
util_dcprz<N> utilization of Decompression slice N [%]
exec_dcprz<N> execution count of Decompression slice N
util_pke<N> utilization of PKE N [%]
exec_pke<N> execution count of PKE N
util_ucs<N> utilization of UCS slice N [%]
@ -100,7 +104,7 @@ Description: (RO) Reports device telemetry counters.
If a device lacks of a specific accelerator, the corresponding
attribute is not reported.
This attribute is only available for qat_4xxx devices.
This attribute is only available for qat_4xxx and qat_6xxx devices.
What: /sys/kernel/debug/qat_<device>_<BDF>/telemetry/rp_<A/B/C/D>_data
Date: March 2024
@ -225,4 +229,4 @@ Description: (RW) Selects up to 4 Ring Pairs (RP) to monitor, one per file,
``rp2srv`` from sysfs.
See Documentation/ABI/testing/sysfs-driver-qat for details.
This attribute is only available for qat_4xxx devices.
This attribute is only available for qat_4xxx and qat_6xxx devices.


@ -14,7 +14,7 @@ Description: (RW) Reports the current state of the QAT device. Write to
It is possible to transition the device from up to down only
if the device is up and vice versa.
This attribute is only available for qat_4xxx devices.
This attribute is available for qat_4xxx and qat_6xxx devices.
What: /sys/bus/pci/devices/<BDF>/qat/cfg_services
Date: June 2022
@ -23,24 +23,28 @@ Contact: qat-linux@intel.com
Description: (RW) Reports the current configuration of the QAT device.
Write to the file to change the configured services.
The values are:
One or more services can be enabled per device.
Certain configurations are restricted to specific device types;
where applicable this is explicitly indicated, for example
(qat_6xxx) denotes applicability exclusively to that device series.
* sym;asym: the device is configured for running crypto
services
* asym;sym: identical to sym;asym
* dc: the device is configured for running compression services
* dcc: identical to dc but enables the dc chaining feature,
hash then compression. If this is not required chose dc
* sym: the device is configured for running symmetric crypto
services
* asym: the device is configured for running asymmetric crypto
services
* asym;dc: the device is configured for running asymmetric
crypto services and compression services
* dc;asym: identical to asym;dc
* sym;dc: the device is configured for running symmetric crypto
services and compression services
* dc;sym: identical to sym;dc
The available services include:
* sym: Configures the device for symmetric cryptographic operations.
* asym: Configures the device for asymmetric cryptographic operations.
* dc: Configures the device for compression and decompression
operations.
* dcc: Similar to dc, but with the additional dc chaining feature
enabled, cipher then compress (qat_6xxx), hash then compression.
If this is not required choose dc.
* decomp: Configures the device for decompression operations (qat_6xxx).
Service combinations are permitted for all services except dcc.
On QAT GEN4 devices (qat_4xxx driver) a maximum of two services can be
combined and on QAT GEN6 devices (qat_6xxx driver ) a maximum of three
services can be combined.
The order of services is not significant. For instance, sym;asym is
functionally equivalent to asym;sym.
It is possible to set the configuration only if the device
is in the `down` state (see /sys/bus/pci/devices/<BDF>/qat/state)
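
Editorial illustration (not part of the commit): a minimal user-space sketch of the sequence the description above implies — take the device down, write the new service string, bring it back up. The BDF 0000:6b:00.0 is a placeholder and error handling is deliberately thin.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static int write_attr(const char *path, const char *val)
{
	int fd = open(path, O_WRONLY);

	if (fd < 0) {
		perror(path);
		return -1;
	}
	if (write(fd, val, strlen(val)) < 0) {
		perror(path);
		close(fd);
		return -1;
	}
	return close(fd);
}

int main(void)
{
	/* Hypothetical device; substitute the real <BDF>. */
	const char *qat = "/sys/bus/pci/devices/0000:6b:00.0/qat";
	char path[128];

	/* cfg_services can only be changed while the device is down. */
	snprintf(path, sizeof(path), "%s/state", qat);
	if (write_attr(path, "down"))
		return 1;

	snprintf(path, sizeof(path), "%s/cfg_services", qat);
	if (write_attr(path, "sym;asym"))
		return 1;

	snprintf(path, sizeof(path), "%s/state", qat);
	return write_attr(path, "up") ? 1 : 0;
}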
@ -59,7 +63,7 @@ Description: (RW) Reports the current configuration of the QAT device.
# cat /sys/bus/pci/devices/<BDF>/qat/cfg_services
dc
This attribute is only available for qat_4xxx devices.
This attribute is available for qat_4xxx and qat_6xxx devices.
What: /sys/bus/pci/devices/<BDF>/qat/pm_idle_enabled
Date: June 2023
@ -94,7 +98,7 @@ Description: (RW) This configuration option provides a way to force the device i
# cat /sys/bus/pci/devices/<BDF>/qat/pm_idle_enabled
0
This attribute is only available for qat_4xxx devices.
This attribute is available for qat_4xxx and qat_6xxx devices.
What: /sys/bus/pci/devices/<BDF>/qat/rp2srv
Date: January 2024
@ -126,7 +130,7 @@ Description:
# cat /sys/bus/pci/devices/<BDF>/qat/rp2srv
sym
This attribute is only available for qat_4xxx devices.
This attribute is available for qat_4xxx and qat_6xxx devices.
What: /sys/bus/pci/devices/<BDF>/qat/num_rps
Date: January 2024
@ -140,7 +144,7 @@ Description:
# cat /sys/bus/pci/devices/<BDF>/qat/num_rps
64
This attribute is only available for qat_4xxx devices.
This attribute is available for qat_4xxx and qat_6xxx devices.
What: /sys/bus/pci/devices/<BDF>/qat/auto_reset
Date: May 2024
@ -160,4 +164,4 @@ Description: (RW) Reports the current state of the autoreset feature
* 0/Nn/off: auto reset disabled. If the device encounters an
unrecoverable error, it will not be reset.
This attribute is only available for qat_4xxx devices.
This attribute is available for qat_4xxx and qat_6xxx devices.


@ -31,7 +31,7 @@ Description:
* rm_all: Removes all the configured SLAs.
* Inputs: None
This attribute is only available for qat_4xxx devices.
This attribute is only available for qat_4xxx and qat_6xxx devices.
What: /sys/bus/pci/devices/<BDF>/qat_rl/rp
Date: January 2024
@ -68,7 +68,7 @@ Description:
## Write
# echo 0x5 > /sys/bus/pci/devices/<BDF>/qat_rl/rp
This attribute is only available for qat_4xxx devices.
This attribute is only available for qat_4xxx and qat_6xxx devices.
What: /sys/bus/pci/devices/<BDF>/qat_rl/id
Date: January 2024
@ -101,7 +101,7 @@ Description:
# cat /sys/bus/pci/devices/<BDF>/qat_rl/rp
0x5 ## ring pair ID 0 and ring pair ID 2
This attribute is only available for qat_4xxx devices.
This attribute is only available for qat_4xxx and qat_6xxx devices.
What: /sys/bus/pci/devices/<BDF>/qat_rl/cir
Date: January 2024
@ -135,7 +135,7 @@ Description:
# cat /sys/bus/pci/devices/<BDF>/qat_rl/cir
500
This attribute is only available for qat_4xxx devices.
This attribute is only available for qat_4xxx and qat_6xxx devices.
What: /sys/bus/pci/devices/<BDF>/qat_rl/pir
Date: January 2024
@ -169,7 +169,7 @@ Description:
# cat /sys/bus/pci/devices/<BDF>/qat_rl/pir
750
This attribute is only available for qat_4xxx devices.
This attribute is only available for qat_4xxx and qat_6xxx devices.
What: /sys/bus/pci/devices/<BDF>/qat_rl/srv
Date: January 2024
@ -202,7 +202,7 @@ Description:
# cat /sys/bus/pci/devices/<BDF>/qat_rl/srv
dc
This attribute is only available for qat_4xxx devices.
This attribute is only available for qat_4xxx and qat_6xxx devices.
What: /sys/bus/pci/devices/<BDF>/qat_rl/cap_rem
Date: January 2024
@ -223,4 +223,4 @@ Description:
# cat /sys/bus/pci/devices/<BDF>/qat_rl/cap_rem
0
This attribute is only available for qat_4xxx devices.
This attribute is only available for qat_4xxx and qat_6xxx devices.


@ -36,12 +36,6 @@ engine using ``crypto_engine_stop()`` and destroy the engine with
Before transferring any request, you have to fill the context enginectx by
providing functions for the following:
* ``prepare_crypt_hardware``: Called once before any prepare functions are
called.
* ``unprepare_crypt_hardware``: Called once after all unprepare functions have
been called.
* ``prepare_cipher_request``/``prepare_hash_request``: Called before each
corresponding request is performed. If some processing or other preparatory
work is required, do it here.


@ -15,7 +15,9 @@ properties:
oneOf:
- const: atmel,at91sam9g46-aes
- items:
- const: microchip,sam9x7-aes
- enum:
- microchip,sam9x7-aes
- microchip,sama7d65-aes
- const: atmel,at91sam9g46-aes
reg:


@ -15,7 +15,9 @@ properties:
oneOf:
- const: atmel,at91sam9g46-sha
- items:
- const: microchip,sam9x7-sha
- enum:
- microchip,sam9x7-sha
- microchip,sama7d65-sha
- const: atmel,at91sam9g46-sha
reg:


@ -15,7 +15,9 @@ properties:
oneOf:
- const: atmel,at91sam9g46-tdes
- items:
- const: microchip,sam9x7-tdes
- enum:
- microchip,sam9x7-tdes
- microchip,sama7d65-tdes
- const: atmel,at91sam9g46-tdes
reg:


@ -46,6 +46,8 @@ properties:
- items:
- enum:
- fsl,imx6ul-caam
- fsl,imx8qm-caam
- fsl,imx8qxp-caam
- fsl,sec-v5.0
- const: fsl,sec-v4.0
- const: fsl,sec-v4.0
@ -77,6 +79,9 @@ properties:
interrupts:
maxItems: 1
power-domains:
maxItems: 1
fsl,sec-era:
description: Defines the 'ERA' of the SEC device.
$ref: /schemas/types.yaml#/definitions/uint32
@ -106,7 +111,10 @@ patternProperties:
- const: fsl,sec-v5.0-job-ring
- const: fsl,sec-v4.0-job-ring
- items:
- const: fsl,sec-v5.0-job-ring
- enum:
- fsl,imx8qm-job-ring
- fsl,imx8qxp-job-ring
- fsl,sec-v5.0-job-ring
- const: fsl,sec-v4.0-job-ring
- const: fsl,sec-v4.0-job-ring
@ -116,6 +124,9 @@ patternProperties:
interrupts:
maxItems: 1
power-domains:
maxItems: 1
fsl,liodn:
description:
Specifies the LIODN to be used in conjunction with the ppid-to-liodn
@ -125,6 +136,20 @@ patternProperties:
$ref: /schemas/types.yaml#/definitions/uint32-array
items:
- maximum: 0xfff
allOf:
- if:
properties:
compatible:
contains:
enum:
- fsl,imx8qm-job-ring
- fsl,imx8qxp-job-ring
then:
required:
- power-domains
else:
properties:
power-domains: false
'^rtic@[0-9a-f]+$':
type: object
@ -212,6 +237,20 @@ required:
- reg
- ranges
if:
properties:
compatible:
contains:
enum:
- fsl,imx8qm-caam
- fsl,imx8qxp-caam
then:
required:
- power-domains
else:
properties:
power-domains: false
additionalProperties: false
examples:


@ -1,31 +0,0 @@
OMAP SoC AES crypto Module
Required properties:
- compatible : Should contain entries for this and backward compatible
AES versions:
- "ti,omap2-aes" for OMAP2.
- "ti,omap3-aes" for OMAP3.
- "ti,omap4-aes" for OMAP4 and AM33XX.
Note that the OMAP2 and 3 versions are compatible (OMAP3 supports
more algorithms) but they are incompatible with OMAP4.
- ti,hwmods: Name of the hwmod associated with the AES module
- reg : Offset and length of the register set for the module
- interrupts : the interrupt-specifier for the AES module.
Optional properties:
- dmas: DMA specifiers for tx and rx dma. See the DMA client binding,
Documentation/devicetree/bindings/dma/dma.txt
- dma-names: DMA request names should include "tx" and "rx" if present.
Example:
/* AM335x */
aes: aes@53500000 {
compatible = "ti,omap4-aes";
ti,hwmods = "aes";
reg = <0x53500000 0xa0>;
interrupts = <102>;
dmas = <&edma 6>,
<&edma 5>;
dma-names = "tx", "rx";
};


@ -1,30 +0,0 @@
OMAP SoC DES crypto Module
Required properties:
- compatible : Should contain "ti,omap4-des"
- ti,hwmods: Name of the hwmod associated with the DES module
- reg : Offset and length of the register set for the module
- interrupts : the interrupt-specifier for the DES module
- clocks : A phandle to the functional clock node of the DES module
corresponding to each entry in clock-names
- clock-names : Name of the functional clock, should be "fck"
Optional properties:
- dmas: DMA specifiers for tx and rx dma. See the DMA client binding,
Documentation/devicetree/bindings/dma/dma.txt
Each entry corresponds to an entry in dma-names
- dma-names: DMA request names should include "tx" and "rx" if present
Example:
/* DRA7xx SoC */
des: des@480a5000 {
compatible = "ti,omap4-des";
ti,hwmods = "des";
reg = <0x480a5000 0xa0>;
interrupts = <GIC_SPI 82 IRQ_TYPE_LEVEL_HIGH>;
dmas = <&sdma 117>, <&sdma 116>;
dma-names = "tx", "rx";
clocks = <&l3_iclk_div>;
clock-names = "fck";
};


@ -0,0 +1,58 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/ti,omap2-aes.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: OMAP SoC AES crypto Module
maintainers:
- Aaro Koskinen <aaro.koskinen@iki.fi>
- Andreas Kemnade <andreas@kemnade.info>
- Kevin Hilman <khilman@baylibre.com>
- Roger Quadros <rogerq@kernel.org>
- Tony Lindgren <tony@atomide.com>
properties:
compatible:
enum:
- ti,omap2-aes
- ti,omap3-aes
- ti,omap4-aes
reg:
maxItems: 1
interrupts:
maxItems: 1
dmas:
maxItems: 2
dma-names:
items:
- const: tx
- const: rx
ti,hwmods:
description: Name of the hwmod associated with the AES module
const: aes
deprecated: true
required:
- compatible
- reg
- interrupts
additionalProperties: false
examples:
- |
aes@53500000 {
compatible = "ti,omap4-aes";
reg = <0x53500000 0xa0>;
interrupts = <102>;
dmas = <&edma 6>,
<&edma 5>;
dma-names = "tx", "rx";
};


@ -0,0 +1,65 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/ti,omap4-des.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: OMAP4 DES crypto Module
maintainers:
- Aaro Koskinen <aaro.koskinen@iki.fi>
- Andreas Kemnade <andreas@kemnade.info>
- Kevin Hilman <khilman@baylibre.com>
- Roger Quadros <rogerq@kernel.org>
- Tony Lindgren <tony@atomide.com>
properties:
compatible:
const: ti,omap4-des
reg:
maxItems: 1
interrupts:
maxItems: 1
dmas:
maxItems: 2
dma-names:
items:
- const: tx
- const: rx
clocks:
maxItems: 1
clock-names:
items:
- const: fck
dependencies:
dmas: [ dma-names ]
required:
- compatible
- reg
- interrupts
- clocks
- clock-names
additionalProperties: false
examples:
- |
#include <dt-bindings/interrupt-controller/arm-gic.h>
des@480a5000 {
compatible = "ti,omap4-des";
reg = <0x480a5000 0xa0>;
interrupts = <GIC_SPI 82 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&l3_iclk_div>;
clock-names = "fck";
dmas = <&sdma 117>, <&sdma 116>;
dma-names = "tx", "rx";
};


@ -24,6 +24,7 @@ properties:
- items:
- enum:
- microchip,sam9x7-trng
- microchip,sama7d65-trng
- const: microchip,sam9x60-trng
clocks:


@ -206,7 +206,7 @@ static int ctr_encrypt(struct skcipher_request *req)
while (walk.nbytes > 0) {
const u8 *src = walk.src.virt.addr;
u8 *dst = walk.dst.virt.addr;
int bytes = walk.nbytes;
unsigned int bytes = walk.nbytes;
if (unlikely(bytes < AES_BLOCK_SIZE))
src = dst = memcpy(buf + sizeof(buf) - bytes,


@ -816,6 +816,7 @@ CONFIG_PKEY_EP11=m
CONFIG_PKEY_PCKMO=m
CONFIG_PKEY_UV=m
CONFIG_CRYPTO_PAES_S390=m
CONFIG_CRYPTO_PHMAC_S390=m
CONFIG_CRYPTO_DEV_VIRTIO=m
CONFIG_SYSTEM_BLACKLIST_KEYRING=y
CONFIG_CRYPTO_KRB5=m


@ -803,6 +803,7 @@ CONFIG_PKEY_EP11=m
CONFIG_PKEY_PCKMO=m
CONFIG_PKEY_UV=m
CONFIG_CRYPTO_PAES_S390=m
CONFIG_CRYPTO_PHMAC_S390=m
CONFIG_CRYPTO_DEV_VIRTIO=m
CONFIG_SYSTEM_BLACKLIST_KEYRING=y
CONFIG_CRYPTO_KRB5=m


@ -11,4 +11,5 @@ obj-$(CONFIG_CRYPTO_PAES_S390) += paes_s390.o
obj-$(CONFIG_S390_PRNG) += prng.o
obj-$(CONFIG_CRYPTO_GHASH_S390) += ghash_s390.o
obj-$(CONFIG_CRYPTO_HMAC_S390) += hmac_s390.o
obj-$(CONFIG_CRYPTO_PHMAC_S390) += phmac_s390.o
obj-y += arch_random.o


@ -290,6 +290,7 @@ static int s390_hmac_export(struct shash_desc *desc, void *out)
struct s390_kmac_sha2_ctx *ctx = shash_desc_ctx(desc);
unsigned int bs = crypto_shash_blocksize(desc->tfm);
unsigned int ds = bs / 2;
u64 lo = ctx->buflen[0];
union {
u8 *u8;
u64 *u64;
@ -301,9 +302,10 @@ static int s390_hmac_export(struct shash_desc *desc, void *out)
else
memcpy(p.u8, ctx->param, ds);
p.u8 += ds;
put_unaligned(ctx->buflen[0], p.u64++);
lo += bs;
put_unaligned(lo, p.u64++);
if (ds == SHA512_DIGEST_SIZE)
put_unaligned(ctx->buflen[1], p.u64);
put_unaligned(ctx->buflen[1] + (lo < bs), p.u64);
return err;
}
@ -316,14 +318,16 @@ static int s390_hmac_import(struct shash_desc *desc, const void *in)
const u8 *u8;
const u64 *u64;
} p = { .u8 = in };
u64 lo;
int err;
err = s390_hmac_sha2_init(desc);
memcpy(ctx->param, p.u8, ds);
p.u8 += ds;
ctx->buflen[0] = get_unaligned(p.u64++);
lo = get_unaligned(p.u64++);
ctx->buflen[0] = lo - bs;
if (ds == SHA512_DIGEST_SIZE)
ctx->buflen[1] = get_unaligned(p.u64);
ctx->buflen[1] = get_unaligned(p.u64) - (lo < bs);
if (ctx->buflen[0] | ctx->buflen[1])
ctx->gr0.ikp = 1;
return err;


@ -1633,7 +1633,7 @@ static int __init paes_s390_init(void)
/* with this pseudo devie alloc and start a crypto engine */
paes_crypto_engine =
crypto_engine_alloc_init_and_set(paes_dev.this_device,
true, NULL, false, MAX_QLEN);
true, false, MAX_QLEN);
if (!paes_crypto_engine) {
rc = -ENOMEM;
goto out_err;

File diff suppressed because it is too large


@ -27,6 +27,9 @@ struct s390_sha_ctx {
u64 state[SHA512_DIGEST_SIZE / sizeof(u64)];
u64 count_hi;
} sha512;
struct {
__le64 state[SHA3_STATE_SIZE / sizeof(u64)];
} sha3;
};
int func; /* KIMD function to use */
bool first_message_part;


@ -35,23 +35,33 @@ static int sha3_256_init(struct shash_desc *desc)
static int sha3_256_export(struct shash_desc *desc, void *out)
{
struct s390_sha_ctx *sctx = shash_desc_ctx(desc);
struct sha3_state *octx = out;
union {
u8 *u8;
u64 *u64;
} p = { .u8 = out };
int i;
if (sctx->first_message_part) {
memset(sctx->state, 0, sizeof(sctx->state));
sctx->first_message_part = 0;
memset(out, 0, SHA3_STATE_SIZE);
return 0;
}
memcpy(octx->st, sctx->state, sizeof(octx->st));
for (i = 0; i < SHA3_STATE_SIZE / 8; i++)
put_unaligned(le64_to_cpu(sctx->sha3.state[i]), p.u64++);
return 0;
}
static int sha3_256_import(struct shash_desc *desc, const void *in)
{
struct s390_sha_ctx *sctx = shash_desc_ctx(desc);
const struct sha3_state *ictx = in;
union {
const u8 *u8;
const u64 *u64;
} p = { .u8 = in };
int i;
for (i = 0; i < SHA3_STATE_SIZE / 8; i++)
sctx->sha3.state[i] = cpu_to_le64(get_unaligned(p.u64++));
sctx->count = 0;
memcpy(sctx->state, ictx->st, sizeof(ictx->st));
sctx->first_message_part = 0;
sctx->func = CPACF_KIMD_SHA3_256;


@ -34,24 +34,33 @@ static int sha3_512_init(struct shash_desc *desc)
static int sha3_512_export(struct shash_desc *desc, void *out)
{
struct s390_sha_ctx *sctx = shash_desc_ctx(desc);
struct sha3_state *octx = out;
union {
u8 *u8;
u64 *u64;
} p = { .u8 = out };
int i;
if (sctx->first_message_part) {
memset(sctx->state, 0, sizeof(sctx->state));
sctx->first_message_part = 0;
memset(out, 0, SHA3_STATE_SIZE);
return 0;
}
memcpy(octx->st, sctx->state, sizeof(octx->st));
for (i = 0; i < SHA3_STATE_SIZE / 8; i++)
put_unaligned(le64_to_cpu(sctx->sha3.state[i]), p.u64++);
return 0;
}
static int sha3_512_import(struct shash_desc *desc, const void *in)
{
struct s390_sha_ctx *sctx = shash_desc_ctx(desc);
const struct sha3_state *ictx = in;
union {
const u8 *u8;
const u64 *u64;
} p = { .u8 = in };
int i;
for (i = 0; i < SHA3_STATE_SIZE / 8; i++)
sctx->sha3.state[i] = cpu_to_le64(get_unaligned(p.u64++));
sctx->count = 0;
memcpy(sctx->state, ictx->st, sizeof(ictx->st));
sctx->first_message_part = 0;
sctx->func = CPACF_KIMD_SHA3_512;


@ -129,6 +129,10 @@
#define CPACF_KMAC_HMAC_SHA_256 0x71
#define CPACF_KMAC_HMAC_SHA_384 0x72
#define CPACF_KMAC_HMAC_SHA_512 0x73
#define CPACF_KMAC_PHMAC_SHA_224 0x78
#define CPACF_KMAC_PHMAC_SHA_256 0x79
#define CPACF_KMAC_PHMAC_SHA_384 0x7a
#define CPACF_KMAC_PHMAC_SHA_512 0x7b
/*
* Function codes for the PCKMO (PERFORM CRYPTOGRAPHIC KEY MANAGEMENT)


@ -104,10 +104,12 @@ static void crypto_aegis128_aesni_process_ad(
}
}
static __always_inline void
static __always_inline int
crypto_aegis128_aesni_process_crypt(struct aegis_state *state,
struct skcipher_walk *walk, bool enc)
{
int err = 0;
while (walk->nbytes >= AEGIS128_BLOCK_SIZE) {
if (enc)
aegis128_aesni_enc(state, walk->src.virt.addr,
@ -119,7 +121,10 @@ crypto_aegis128_aesni_process_crypt(struct aegis_state *state,
walk->dst.virt.addr,
round_down(walk->nbytes,
AEGIS128_BLOCK_SIZE));
skcipher_walk_done(walk, walk->nbytes % AEGIS128_BLOCK_SIZE);
kernel_fpu_end();
err = skcipher_walk_done(walk,
walk->nbytes % AEGIS128_BLOCK_SIZE);
kernel_fpu_begin();
}
if (walk->nbytes) {
@ -131,8 +136,11 @@ crypto_aegis128_aesni_process_crypt(struct aegis_state *state,
aegis128_aesni_dec_tail(state, walk->src.virt.addr,
walk->dst.virt.addr,
walk->nbytes);
skcipher_walk_done(walk, 0);
kernel_fpu_end();
err = skcipher_walk_done(walk, 0);
kernel_fpu_begin();
}
return err;
}
static struct aegis_ctx *crypto_aegis128_aesni_ctx(struct crypto_aead *aead)
@ -165,7 +173,7 @@ static int crypto_aegis128_aesni_setauthsize(struct crypto_aead *tfm,
return 0;
}
static __always_inline void
static __always_inline int
crypto_aegis128_aesni_crypt(struct aead_request *req,
struct aegis_block *tag_xor,
unsigned int cryptlen, bool enc)
@ -174,20 +182,24 @@ crypto_aegis128_aesni_crypt(struct aead_request *req,
struct aegis_ctx *ctx = crypto_aegis128_aesni_ctx(tfm);
struct skcipher_walk walk;
struct aegis_state state;
int err;
if (enc)
skcipher_walk_aead_encrypt(&walk, req, true);
err = skcipher_walk_aead_encrypt(&walk, req, false);
else
skcipher_walk_aead_decrypt(&walk, req, true);
err = skcipher_walk_aead_decrypt(&walk, req, false);
if (err)
return err;
kernel_fpu_begin();
aegis128_aesni_init(&state, &ctx->key, req->iv);
crypto_aegis128_aesni_process_ad(&state, req->src, req->assoclen);
crypto_aegis128_aesni_process_crypt(&state, &walk, enc);
aegis128_aesni_final(&state, tag_xor, req->assoclen, cryptlen);
err = crypto_aegis128_aesni_process_crypt(&state, &walk, enc);
if (err == 0)
aegis128_aesni_final(&state, tag_xor, req->assoclen, cryptlen);
kernel_fpu_end();
return err;
}
static int crypto_aegis128_aesni_encrypt(struct aead_request *req)
@ -196,8 +208,11 @@ static int crypto_aegis128_aesni_encrypt(struct aead_request *req)
struct aegis_block tag = {};
unsigned int authsize = crypto_aead_authsize(tfm);
unsigned int cryptlen = req->cryptlen;
int err;
crypto_aegis128_aesni_crypt(req, &tag, cryptlen, true);
err = crypto_aegis128_aesni_crypt(req, &tag, cryptlen, true);
if (err)
return err;
scatterwalk_map_and_copy(tag.bytes, req->dst,
req->assoclen + cryptlen, authsize, 1);
@ -212,11 +227,14 @@ static int crypto_aegis128_aesni_decrypt(struct aead_request *req)
struct aegis_block tag;
unsigned int authsize = crypto_aead_authsize(tfm);
unsigned int cryptlen = req->cryptlen - authsize;
int err;
scatterwalk_map_and_copy(tag.bytes, req->src,
req->assoclen + cryptlen, authsize, 0);
crypto_aegis128_aesni_crypt(req, &tag, cryptlen, false);
err = crypto_aegis128_aesni_crypt(req, &tag, cryptlen, false);
if (err)
return err;
return crypto_memneq(tag.bytes, zeros.bytes, authsize) ? -EBADMSG : 0;
}


@ -9,6 +9,7 @@
#include <crypto/aria.h>
#include <linux/crypto.h>
#include <linux/err.h>
#include <linux/export.h>
#include <linux/module.h>
#include <linux/types.h>


@ -9,6 +9,7 @@
#include <crypto/aria.h>
#include <linux/crypto.h>
#include <linux/err.h>
#include <linux/export.h>
#include <linux/module.h>
#include <linux/types.h>


@ -8,6 +8,7 @@
#include <crypto/algapi.h>
#include <linux/crypto.h>
#include <linux/err.h>
#include <linux/export.h>
#include <linux/module.h>
#include <linux/types.h>


@ -10,6 +10,7 @@
#include <linux/unaligned.h>
#include <linux/crypto.h>
#include <linux/export.h>
#include <linux/init.h>
#include <linux/module.h>
#include <linux/types.h>


@ -7,6 +7,7 @@
#include <crypto/curve25519.h>
#include <crypto/internal/kpp.h>
#include <linux/export.h>
#include <linux/types.h>
#include <linux/jump_label.h>
#include <linux/kernel.h>


@ -12,6 +12,7 @@
#include <linux/types.h>
#include <linux/crypto.h>
#include <linux/err.h>
#include <linux/export.h>
#include <crypto/algapi.h>
#include <crypto/serpent.h>


@ -11,6 +11,7 @@
#include <asm/fpu/api.h>
#include <linux/module.h>
#include <linux/crypto.h>
#include <linux/export.h>
#include <linux/kernel.h>
#include <crypto/internal/skcipher.h>
#include <crypto/sm4.h>


@ -40,6 +40,7 @@
#include <crypto/algapi.h>
#include <crypto/twofish.h>
#include <linux/export.h>
#include <linux/init.h>
#include <linux/module.h>
#include <linux/types.h>


@ -9,6 +9,7 @@
#include <crypto/algapi.h>
#include <crypto/twofish.h>
#include <linux/crypto.h>
#include <linux/export.h>
#include <linux/init.h>
#include <linux/module.h>
#include <linux/types.h>


@ -29,19 +29,6 @@
#define CRYPTO_ALG_TYPE_AHASH_MASK 0x0000000e
struct crypto_hash_walk {
const char *data;
unsigned int offset;
unsigned int flags;
struct page *pg;
unsigned int entrylen;
unsigned int total;
struct scatterlist *sg;
};
static int ahash_def_finup(struct ahash_request *req);
static inline bool crypto_ahash_block_only(struct crypto_ahash *tfm)
@ -112,8 +99,8 @@ static int hash_walk_new_entry(struct crypto_hash_walk *walk)
return hash_walk_next(walk);
}
static int crypto_hash_walk_first(struct ahash_request *req,
struct crypto_hash_walk *walk)
int crypto_hash_walk_first(struct ahash_request *req,
struct crypto_hash_walk *walk)
{
walk->total = req->nbytes;
walk->entrylen = 0;
@ -133,8 +120,9 @@ static int crypto_hash_walk_first(struct ahash_request *req,
return hash_walk_new_entry(walk);
}
EXPORT_SYMBOL_GPL(crypto_hash_walk_first);
static int crypto_hash_walk_done(struct crypto_hash_walk *walk, int err)
int crypto_hash_walk_done(struct crypto_hash_walk *walk, int err)
{
if ((walk->flags & CRYPTO_AHASH_REQ_VIRT))
return err;
@ -160,11 +148,7 @@ static int crypto_hash_walk_done(struct crypto_hash_walk *walk, int err)
return hash_walk_new_entry(walk);
}
static inline int crypto_hash_walk_last(struct crypto_hash_walk *walk)
{
return !(walk->entrylen | walk->total);
}
EXPORT_SYMBOL_GPL(crypto_hash_walk_done);
/*
* For an ahash tfm that is using an shash algorithm (instead of an ahash
@ -347,6 +331,12 @@ static int ahash_do_req_chain(struct ahash_request *req,
if (crypto_ahash_statesize(tfm) > HASH_MAX_STATESIZE)
return -ENOSYS;
if (!crypto_ahash_need_fallback(tfm))
return -ENOSYS;
if (crypto_hash_no_export_core(tfm))
return -ENOSYS;
{
u8 state[HASH_MAX_STATESIZE];
@ -954,6 +944,10 @@ static int ahash_prepare_alg(struct ahash_alg *alg)
base->cra_reqsize > MAX_SYNC_HASH_REQSIZE)
return -EINVAL;
if (base->cra_flags & CRYPTO_ALG_NEED_FALLBACK &&
base->cra_flags & CRYPTO_ALG_NO_FALLBACK)
return -EINVAL;
err = hash_prepare_alg(&alg->halg);
if (err)
return err;
@ -962,7 +956,8 @@ static int ahash_prepare_alg(struct ahash_alg *alg)
base->cra_flags |= CRYPTO_ALG_TYPE_AHASH;
if ((base->cra_flags ^ CRYPTO_ALG_REQ_VIRT) &
(CRYPTO_ALG_ASYNC | CRYPTO_ALG_REQ_VIRT))
(CRYPTO_ALG_ASYNC | CRYPTO_ALG_REQ_VIRT) &&
!(base->cra_flags & CRYPTO_ALG_NO_FALLBACK))
base->cra_flags |= CRYPTO_ALG_NEED_FALLBACK;
if (!alg->setkey)
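
Editorial note: since crypto_hash_walk_first() and crypto_hash_walk_done() are now exported, an ahash driver can iterate over a request's scatterlist directly. Below is a minimal sketch of that loop, assuming the helpers and struct crypto_hash_walk end up declared in crypto/internal/hash.h after this change; the example_ names are hypothetical.

#include <crypto/internal/hash.h>
#include <linux/printk.h>

/* Hypothetical per-chunk handler; a real driver would feed the bytes
 * to its hardware queue or software state here.
 */
static int example_consume(const char *data, unsigned int len)
{
	pr_debug("hashing %u bytes\n", len);
	return 0;
}

static int example_ahash_update(struct ahash_request *req)
{
	struct crypto_hash_walk walk;
	int nbytes, err = 0;

	/* _done() unmaps the current chunk, propagates err and returns
	 * the size of the next chunk (0 when the walk is finished).
	 */
	for (nbytes = crypto_hash_walk_first(req, &walk); nbytes > 0;
	     nbytes = crypto_hash_walk_done(&walk, err))
		err = example_consume(walk.data, nbytes);

	return nbytes;
}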


@ -34,6 +34,7 @@ MODULE_PARM_DESC(cryptd_max_cpu_qlen, "Set cryptd Max queue depth");
static struct workqueue_struct *cryptd_wq;
struct cryptd_cpu_queue {
local_lock_t bh_lock;
struct crypto_queue queue;
struct work_struct work;
};
@ -110,6 +111,7 @@ static int cryptd_init_queue(struct cryptd_queue *queue,
cpu_queue = per_cpu_ptr(queue->cpu_queue, cpu);
crypto_init_queue(&cpu_queue->queue, max_cpu_qlen);
INIT_WORK(&cpu_queue->work, cryptd_queue_worker);
local_lock_init(&cpu_queue->bh_lock);
}
pr_info("cryptd: max_cpu_qlen set to %d\n", max_cpu_qlen);
return 0;
@ -135,6 +137,7 @@ static int cryptd_enqueue_request(struct cryptd_queue *queue,
refcount_t *refcnt;
local_bh_disable();
local_lock_nested_bh(&queue->cpu_queue->bh_lock);
cpu_queue = this_cpu_ptr(queue->cpu_queue);
err = crypto_enqueue_request(&cpu_queue->queue, request);
@ -151,6 +154,7 @@ static int cryptd_enqueue_request(struct cryptd_queue *queue,
refcount_inc(refcnt);
out:
local_unlock_nested_bh(&queue->cpu_queue->bh_lock);
local_bh_enable();
return err;
@ -169,8 +173,10 @@ static void cryptd_queue_worker(struct work_struct *work)
* Only handle one request at a time to avoid hogging crypto workqueue.
*/
local_bh_disable();
__local_lock_nested_bh(&cpu_queue->bh_lock);
backlog = crypto_get_backlog(&cpu_queue->queue);
req = crypto_dequeue_request(&cpu_queue->queue);
__local_unlock_nested_bh(&cpu_queue->bh_lock);
local_bh_enable();
if (!req)


@ -74,7 +74,6 @@ static void crypto_pump_requests(struct crypto_engine *engine,
struct crypto_engine_alg *alg;
struct crypto_engine_op *op;
unsigned long flags;
bool was_busy = false;
int ret;
spin_lock_irqsave(&engine->queue_lock, flags);
@ -83,12 +82,6 @@ static void crypto_pump_requests(struct crypto_engine *engine,
if (!engine->retry_support && engine->cur_req)
goto out;
/* If another context is idling then defer */
if (engine->idling) {
kthread_queue_work(engine->kworker, &engine->pump_requests);
goto out;
}
/* Check if the engine queue is idle */
if (!crypto_queue_len(&engine->queue) || !engine->running) {
if (!engine->busy)
@ -102,15 +95,6 @@ static void crypto_pump_requests(struct crypto_engine *engine,
}
engine->busy = false;
engine->idling = true;
spin_unlock_irqrestore(&engine->queue_lock, flags);
if (engine->unprepare_crypt_hardware &&
engine->unprepare_crypt_hardware(engine))
dev_err(engine->dev, "failed to unprepare crypt hardware\n");
spin_lock_irqsave(&engine->queue_lock, flags);
engine->idling = false;
goto out;
}
@ -129,22 +113,11 @@ start_request:
if (!engine->retry_support)
engine->cur_req = async_req;
if (engine->busy)
was_busy = true;
else
if (!engine->busy)
engine->busy = true;
spin_unlock_irqrestore(&engine->queue_lock, flags);
/* Until here we get the request need to be encrypted successfully */
if (!was_busy && engine->prepare_crypt_hardware) {
ret = engine->prepare_crypt_hardware(engine);
if (ret) {
dev_err(engine->dev, "failed to prepare crypt hardware\n");
goto req_err_1;
}
}
alg = container_of(async_req->tfm->__crt_alg,
struct crypto_engine_alg, base);
op = &alg->op;
@ -195,17 +168,6 @@ retry:
out:
spin_unlock_irqrestore(&engine->queue_lock, flags);
/*
* Batch requests is possible only if
* hardware can enqueue multiple requests
*/
if (engine->do_batch_requests) {
ret = engine->do_batch_requests(engine);
if (ret)
dev_err(engine->dev, "failed to do batch requests: %d\n",
ret);
}
return;
}
@ -462,12 +424,6 @@ EXPORT_SYMBOL_GPL(crypto_engine_stop);
* crypto-engine queue.
* @dev: the device attached with one hardware engine
* @retry_support: whether hardware has support for retry mechanism
* @cbk_do_batch: pointer to a callback function to be invoked when executing
* a batch of requests.
* This has the form:
* callback(struct crypto_engine *engine)
* where:
* engine: the crypto engine structure.
* @rt: whether this queue is set to run as a realtime task
* @qlen: maximum size of the crypto-engine queue
*
@ -476,7 +432,6 @@ EXPORT_SYMBOL_GPL(crypto_engine_stop);
*/
struct crypto_engine *crypto_engine_alloc_init_and_set(struct device *dev,
bool retry_support,
int (*cbk_do_batch)(struct crypto_engine *engine),
bool rt, int qlen)
{
struct crypto_engine *engine;
@ -492,14 +447,8 @@ struct crypto_engine *crypto_engine_alloc_init_and_set(struct device *dev,
engine->rt = rt;
engine->running = false;
engine->busy = false;
engine->idling = false;
engine->retry_support = retry_support;
engine->priv_data = dev;
/*
* Batch requests is possible only if
* hardware has support for retry mechanism.
*/
engine->do_batch_requests = retry_support ? cbk_do_batch : NULL;
snprintf(engine->name, sizeof(engine->name),
"%s-engine", dev_name(dev));
@ -534,7 +483,7 @@ EXPORT_SYMBOL_GPL(crypto_engine_alloc_init_and_set);
*/
struct crypto_engine *crypto_engine_alloc_init(struct device *dev, bool rt)
{
return crypto_engine_alloc_init_and_set(dev, false, NULL, rt,
return crypto_engine_alloc_init_and_set(dev, false, rt,
CRYPTO_ENGINE_MAX_QLEN);
}
EXPORT_SYMBOL_GPL(crypto_engine_alloc_init);
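
Editorial note: for driver authors, the net effect of this hunk is a smaller crypto_engine_alloc_init_and_set() signature — device, retry support, real-time flag and queue length, with no batching callback. A hedged sketch of allocating and starting an engine with the new signature (the queue length of 128 and the example_ name are arbitrary):

#include <crypto/engine.h>

static struct crypto_engine *example_engine_setup(struct device *dev)
{
	struct crypto_engine *engine;

	/* No more prepare/unprepare hooks or batching callback; only
	 * retry support, the RT flag and the queue length remain.
	 */
	engine = crypto_engine_alloc_init_and_set(dev, true, false, 128);
	if (!engine)
		return NULL;

	if (crypto_engine_start(engine)) {
		crypto_engine_exit(engine);
		return NULL;
	}

	return engine;
}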


@ -48,9 +48,14 @@ static void *deflate_alloc_stream(void)
return ctx;
}
static void deflate_free_stream(void *ctx)
{
kvfree(ctx);
}
static struct crypto_acomp_streams deflate_streams = {
.alloc_ctx = deflate_alloc_stream,
.cfree_ctx = kvfree,
.free_ctx = deflate_free_stream,
};
static int deflate_compress_one(struct acomp_req *req,


@ -144,7 +144,7 @@ int jent_hash_time(void *hash_state, __u64 time, u8 *addtl,
* Inject the data from the previous loop into the pool. This data is
* not considered to contain any entropy, but it stirs the pool a bit.
*/
ret = crypto_shash_update(desc, intermediary, sizeof(intermediary));
ret = crypto_shash_update(hash_state_desc, intermediary, sizeof(intermediary));
if (ret)
goto err;
@ -157,11 +157,12 @@ int jent_hash_time(void *hash_state, __u64 time, u8 *addtl,
* conditioning operation to have an identical amount of input data
* according to section 3.1.5.
*/
if (!stuck) {
ret = crypto_shash_update(hash_state_desc, (u8 *)&time,
sizeof(__u64));
if (stuck) {
time = 0;
}
ret = crypto_shash_update(hash_state_desc, (u8 *)&time, sizeof(__u64));
err:
shash_desc_zero(desc);
memzero_explicit(intermediary, sizeof(intermediary));


@ -145,6 +145,7 @@ struct rand_data {
*/
#define JENT_ENTROPY_SAFETY_FACTOR 64
#include <linux/array_size.h>
#include <linux/fips.h>
#include <linux/minmax.h>
#include "jitterentropy.h"
@ -178,7 +179,6 @@ static const unsigned int jent_apt_cutoff_lookup[15] = {
static const unsigned int jent_apt_cutoff_permanent_lookup[15] = {
355, 447, 479, 494, 502, 507, 510, 512,
512, 512, 512, 512, 512, 512, 512 };
#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
static void jent_apt_init(struct rand_data *ec, unsigned int osr)
{


@ -152,6 +152,7 @@ static int krb5_test_one_prf(const struct krb5_prf_test *test)
out:
clear_buf(&result);
clear_buf(&prf);
clear_buf(&octet);
clear_buf(&key);
return ret;


@ -178,7 +178,7 @@ static int pcrypt_aead_decrypt(struct aead_request *req)
static int pcrypt_aead_init_tfm(struct crypto_aead *tfm)
{
int cpu, cpu_index;
int cpu_index;
struct aead_instance *inst = aead_alg_instance(tfm);
struct pcrypt_instance_ctx *ictx = aead_instance_ctx(inst);
struct pcrypt_aead_ctx *ctx = crypto_aead_ctx(tfm);
@ -187,10 +187,7 @@ static int pcrypt_aead_init_tfm(struct crypto_aead *tfm)
cpu_index = (unsigned int)atomic_inc_return(&ictx->tfm_count) %
cpumask_weight(cpu_online_mask);
ctx->cb_cpu = cpumask_first(cpu_online_mask);
for (cpu = 0; cpu < cpu_index; cpu++)
ctx->cb_cpu = cpumask_next(ctx->cb_cpu, cpu_online_mask);
ctx->cb_cpu = cpumask_nth(cpu_index, cpu_online_mask);
cipher = crypto_spawn_aead(&ictx->spawn);
if (IS_ERR(cipher))


@ -4186,7 +4186,6 @@ static const struct alg_test_desc alg_test_descs[] = {
.alg = "authenc(hmac(sha1),cbc(aes))",
.generic_driver = "authenc(hmac-sha1-lib,cbc(aes-generic))",
.test = alg_test_aead,
.fips_allowed = 1,
.suite = {
.aead = __VECS(hmac_sha1_aes_cbc_tv_temp)
}
@ -4207,7 +4206,6 @@ static const struct alg_test_desc alg_test_descs[] = {
}, {
.alg = "authenc(hmac(sha1),ctr(aes))",
.test = alg_test_null,
.fips_allowed = 1,
}, {
.alg = "authenc(hmac(sha1),ecb(cipher_null))",
.generic_driver = "authenc(hmac-sha1-lib,ecb-cipher_null)",
@ -4218,7 +4216,6 @@ static const struct alg_test_desc alg_test_descs[] = {
}, {
.alg = "authenc(hmac(sha1),rfc3686(ctr(aes)))",
.test = alg_test_null,
.fips_allowed = 1,
}, {
.alg = "authenc(hmac(sha224),cbc(des))",
.generic_driver = "authenc(hmac-sha224-lib,cbc(des-generic))",
@ -4712,6 +4709,7 @@ static const struct alg_test_desc alg_test_descs[] = {
*/
.alg = "drbg_nopr_hmac_sha384",
.test = alg_test_null,
.fips_allowed = 1
}, {
.alg = "drbg_nopr_hmac_sha512",
.test = alg_test_drbg,
@ -4730,6 +4728,7 @@ static const struct alg_test_desc alg_test_descs[] = {
/* covered by drbg_nopr_sha256 test */
.alg = "drbg_nopr_sha384",
.test = alg_test_null,
.fips_allowed = 1
}, {
.alg = "drbg_nopr_sha512",
.fips_allowed = 1,
@ -4761,6 +4760,7 @@ static const struct alg_test_desc alg_test_descs[] = {
/* covered by drbg_pr_hmac_sha256 test */
.alg = "drbg_pr_hmac_sha384",
.test = alg_test_null,
.fips_allowed = 1
}, {
.alg = "drbg_pr_hmac_sha512",
.test = alg_test_null,
@ -4776,6 +4776,7 @@ static const struct alg_test_desc alg_test_descs[] = {
/* covered by drbg_pr_sha256 test */
.alg = "drbg_pr_sha384",
.test = alg_test_null,
.fips_allowed = 1
}, {
.alg = "drbg_pr_sha512",
.fips_allowed = 1,
@ -5077,7 +5078,6 @@ static const struct alg_test_desc alg_test_descs[] = {
.alg = "hmac(sha1)",
.generic_driver = "hmac-sha1-lib",
.test = alg_test_hash,
.fips_allowed = 1,
.suite = {
.hash = __VECS(hmac_sha1_tv_template)
}
@ -5291,6 +5291,36 @@ static const struct alg_test_desc alg_test_descs[] = {
.cipher = __VECS(fcrypt_pcbc_tv_template)
}
}, {
#if IS_ENABLED(CONFIG_CRYPTO_PHMAC_S390)
.alg = "phmac(sha224)",
.test = alg_test_hash,
.fips_allowed = 1,
.suite = {
.hash = __VECS(hmac_sha224_tv_template)
}
}, {
.alg = "phmac(sha256)",
.test = alg_test_hash,
.fips_allowed = 1,
.suite = {
.hash = __VECS(hmac_sha256_tv_template)
}
}, {
.alg = "phmac(sha384)",
.test = alg_test_hash,
.fips_allowed = 1,
.suite = {
.hash = __VECS(hmac_sha384_tv_template)
}
}, {
.alg = "phmac(sha512)",
.test = alg_test_hash,
.fips_allowed = 1,
.suite = {
.hash = __VECS(hmac_sha512_tv_template)
}
}, {
#endif
.alg = "pkcs1(rsa,none)",
.test = alg_test_sig,
.suite = {
@ -5418,7 +5448,6 @@ static const struct alg_test_desc alg_test_descs[] = {
.alg = "sha1",
.generic_driver = "sha1-lib",
.test = alg_test_hash,
.fips_allowed = 1,
.suite = {
.hash = __VECS(sha1_tv_template)
}


@ -12,188 +12,304 @@
#include <linux/net.h>
#include <linux/vmalloc.h>
#include <linux/zstd.h>
#include <crypto/internal/scompress.h>
#include <crypto/internal/acompress.h>
#include <crypto/scatterwalk.h>
#define ZSTD_DEF_LEVEL 3
#define ZSTD_DEF_LEVEL 3
#define ZSTD_MAX_WINDOWLOG 18
#define ZSTD_MAX_SIZE BIT(ZSTD_MAX_WINDOWLOG)
struct zstd_ctx {
zstd_cctx *cctx;
zstd_dctx *dctx;
void *cwksp;
void *dwksp;
size_t wksp_size;
zstd_parameters params;
u8 wksp[] __aligned(8);
};
static zstd_parameters zstd_params(void)
static DEFINE_MUTEX(zstd_stream_lock);
static void *zstd_alloc_stream(void)
{
return zstd_get_params(ZSTD_DEF_LEVEL, 0);
}
static int zstd_comp_init(struct zstd_ctx *ctx)
{
int ret = 0;
const zstd_parameters params = zstd_params();
const size_t wksp_size = zstd_cctx_workspace_bound(&params.cParams);
ctx->cwksp = vzalloc(wksp_size);
if (!ctx->cwksp) {
ret = -ENOMEM;
goto out;
}
ctx->cctx = zstd_init_cctx(ctx->cwksp, wksp_size);
if (!ctx->cctx) {
ret = -EINVAL;
goto out_free;
}
out:
return ret;
out_free:
vfree(ctx->cwksp);
goto out;
}
static int zstd_decomp_init(struct zstd_ctx *ctx)
{
int ret = 0;
const size_t wksp_size = zstd_dctx_workspace_bound();
ctx->dwksp = vzalloc(wksp_size);
if (!ctx->dwksp) {
ret = -ENOMEM;
goto out;
}
ctx->dctx = zstd_init_dctx(ctx->dwksp, wksp_size);
if (!ctx->dctx) {
ret = -EINVAL;
goto out_free;
}
out:
return ret;
out_free:
vfree(ctx->dwksp);
goto out;
}
static void zstd_comp_exit(struct zstd_ctx *ctx)
{
vfree(ctx->cwksp);
ctx->cwksp = NULL;
ctx->cctx = NULL;
}
static void zstd_decomp_exit(struct zstd_ctx *ctx)
{
vfree(ctx->dwksp);
ctx->dwksp = NULL;
ctx->dctx = NULL;
}
static int __zstd_init(void *ctx)
{
int ret;
ret = zstd_comp_init(ctx);
if (ret)
return ret;
ret = zstd_decomp_init(ctx);
if (ret)
zstd_comp_exit(ctx);
return ret;
}
static void *zstd_alloc_ctx(void)
{
int ret;
zstd_parameters params;
struct zstd_ctx *ctx;
size_t wksp_size;
ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
params = zstd_get_params(ZSTD_DEF_LEVEL, ZSTD_MAX_SIZE);
wksp_size = max_t(size_t,
zstd_cstream_workspace_bound(&params.cParams),
zstd_dstream_workspace_bound(ZSTD_MAX_SIZE));
if (!wksp_size)
return ERR_PTR(-EINVAL);
ctx = kvmalloc(sizeof(*ctx) + wksp_size, GFP_KERNEL);
if (!ctx)
return ERR_PTR(-ENOMEM);
ret = __zstd_init(ctx);
if (ret) {
kfree(ctx);
return ERR_PTR(ret);
}
ctx->params = params;
ctx->wksp_size = wksp_size;
return ctx;
}
static void __zstd_exit(void *ctx)
static void zstd_free_stream(void *ctx)
{
zstd_comp_exit(ctx);
zstd_decomp_exit(ctx);
kvfree(ctx);
}
static void zstd_free_ctx(void *ctx)
static struct crypto_acomp_streams zstd_streams = {
.alloc_ctx = zstd_alloc_stream,
.free_ctx = zstd_free_stream,
};
static int zstd_init(struct crypto_acomp *acomp_tfm)
{
__zstd_exit(ctx);
kfree_sensitive(ctx);
int ret = 0;
mutex_lock(&zstd_stream_lock);
ret = crypto_acomp_alloc_streams(&zstd_streams);
mutex_unlock(&zstd_stream_lock);
return ret;
}
static int __zstd_compress(const u8 *src, unsigned int slen,
u8 *dst, unsigned int *dlen, void *ctx)
static void zstd_exit(struct crypto_acomp *acomp_tfm)
{
size_t out_len;
struct zstd_ctx *zctx = ctx;
const zstd_parameters params = zstd_params();
crypto_acomp_free_streams(&zstd_streams);
}
out_len = zstd_compress_cctx(zctx->cctx, dst, *dlen, src, slen, &params);
static int zstd_compress_one(struct acomp_req *req, struct zstd_ctx *ctx,
const void *src, void *dst, unsigned int *dlen)
{
unsigned int out_len;
ctx->cctx = zstd_init_cctx(ctx->wksp, ctx->wksp_size);
if (!ctx->cctx)
return -EINVAL;
out_len = zstd_compress_cctx(ctx->cctx, dst, req->dlen, src, req->slen,
&ctx->params);
if (zstd_is_error(out_len))
return -EINVAL;
*dlen = out_len;
return 0;
}
static int zstd_scompress(struct crypto_scomp *tfm, const u8 *src,
unsigned int slen, u8 *dst, unsigned int *dlen,
void *ctx)
static int zstd_compress(struct acomp_req *req)
{
return __zstd_compress(src, slen, dst, dlen, ctx);
}
struct crypto_acomp_stream *s;
unsigned int pos, scur, dcur;
unsigned int total_out = 0;
bool data_available = true;
zstd_out_buffer outbuf;
struct acomp_walk walk;
zstd_in_buffer inbuf;
struct zstd_ctx *ctx;
size_t pending_bytes;
size_t num_bytes;
int ret;
static int __zstd_decompress(const u8 *src, unsigned int slen,
u8 *dst, unsigned int *dlen, void *ctx)
{
size_t out_len;
struct zstd_ctx *zctx = ctx;
s = crypto_acomp_lock_stream_bh(&zstd_streams);
ctx = s->ctx;
out_len = zstd_decompress_dctx(zctx->dctx, dst, *dlen, src, slen);
if (zstd_is_error(out_len))
return -EINVAL;
*dlen = out_len;
return 0;
}
ret = acomp_walk_virt(&walk, req, true);
if (ret)
goto out;
static int zstd_sdecompress(struct crypto_scomp *tfm, const u8 *src,
unsigned int slen, u8 *dst, unsigned int *dlen,
void *ctx)
{
return __zstd_decompress(src, slen, dst, dlen, ctx);
}
static struct scomp_alg scomp = {
.alloc_ctx = zstd_alloc_ctx,
.free_ctx = zstd_free_ctx,
.compress = zstd_scompress,
.decompress = zstd_sdecompress,
.base = {
.cra_name = "zstd",
.cra_driver_name = "zstd-scomp",
.cra_module = THIS_MODULE,
ctx->cctx = zstd_init_cstream(&ctx->params, 0, ctx->wksp, ctx->wksp_size);
if (!ctx->cctx) {
ret = -EINVAL;
goto out;
}
do {
dcur = acomp_walk_next_dst(&walk);
if (!dcur) {
ret = -ENOSPC;
goto out;
}
outbuf.pos = 0;
outbuf.dst = (u8 *)walk.dst.virt.addr;
outbuf.size = dcur;
do {
scur = acomp_walk_next_src(&walk);
if (dcur == req->dlen && scur == req->slen) {
ret = zstd_compress_one(req, ctx, walk.src.virt.addr,
walk.dst.virt.addr, &total_out);
acomp_walk_done_src(&walk, scur);
acomp_walk_done_dst(&walk, dcur);
goto out;
}
if (scur) {
inbuf.pos = 0;
inbuf.src = walk.src.virt.addr;
inbuf.size = scur;
} else {
data_available = false;
break;
}
num_bytes = zstd_compress_stream(ctx->cctx, &outbuf, &inbuf);
if (ZSTD_isError(num_bytes)) {
ret = -EIO;
goto out;
}
pending_bytes = zstd_flush_stream(ctx->cctx, &outbuf);
if (ZSTD_isError(pending_bytes)) {
ret = -EIO;
goto out;
}
acomp_walk_done_src(&walk, inbuf.pos);
} while (dcur != outbuf.pos);
total_out += outbuf.pos;
acomp_walk_done_dst(&walk, dcur);
} while (data_available);
pos = outbuf.pos;
num_bytes = zstd_end_stream(ctx->cctx, &outbuf);
if (ZSTD_isError(num_bytes))
ret = -EIO;
else
total_out += (outbuf.pos - pos);
out:
if (ret)
req->dlen = 0;
else
req->dlen = total_out;
crypto_acomp_unlock_stream_bh(s);
return ret;
}
static int zstd_decompress_one(struct acomp_req *req, struct zstd_ctx *ctx,
const void *src, void *dst, unsigned int *dlen)
{
size_t out_len;
ctx->dctx = zstd_init_dctx(ctx->wksp, ctx->wksp_size);
if (!ctx->dctx)
return -EINVAL;
out_len = zstd_decompress_dctx(ctx->dctx, dst, req->dlen, src, req->slen);
if (zstd_is_error(out_len))
return -EINVAL;
*dlen = out_len;
return 0;
}
static int zstd_decompress(struct acomp_req *req)
{
struct crypto_acomp_stream *s;
unsigned int total_out = 0;
unsigned int scur, dcur;
zstd_out_buffer outbuf;
struct acomp_walk walk;
zstd_in_buffer inbuf;
struct zstd_ctx *ctx;
size_t pending_bytes;
int ret;
s = crypto_acomp_lock_stream_bh(&zstd_streams);
ctx = s->ctx;
ret = acomp_walk_virt(&walk, req, true);
if (ret)
goto out;
ctx->dctx = zstd_init_dstream(ZSTD_MAX_SIZE, ctx->wksp, ctx->wksp_size);
if (!ctx->dctx) {
ret = -EINVAL;
goto out;
}
do {
scur = acomp_walk_next_src(&walk);
if (scur) {
inbuf.pos = 0;
inbuf.size = scur;
inbuf.src = walk.src.virt.addr;
} else {
break;
}
do {
dcur = acomp_walk_next_dst(&walk);
if (dcur == req->dlen && scur == req->slen) {
ret = zstd_decompress_one(req, ctx, walk.src.virt.addr,
walk.dst.virt.addr, &total_out);
acomp_walk_done_dst(&walk, dcur);
acomp_walk_done_src(&walk, scur);
goto out;
}
if (!dcur) {
ret = -ENOSPC;
goto out;
}
outbuf.pos = 0;
outbuf.dst = (u8 *)walk.dst.virt.addr;
outbuf.size = dcur;
pending_bytes = zstd_decompress_stream(ctx->dctx, &outbuf, &inbuf);
if (ZSTD_isError(pending_bytes)) {
ret = -EIO;
goto out;
}
total_out += outbuf.pos;
acomp_walk_done_dst(&walk, outbuf.pos);
} while (inbuf.pos != scur);
acomp_walk_done_src(&walk, scur);
} while (ret == 0);
out:
if (ret)
req->dlen = 0;
else
req->dlen = total_out;
crypto_acomp_unlock_stream_bh(s);
return ret;
}
static struct acomp_alg zstd_acomp = {
.base = {
.cra_name = "zstd",
.cra_driver_name = "zstd-generic",
.cra_flags = CRYPTO_ALG_REQ_VIRT,
.cra_module = THIS_MODULE,
},
.init = zstd_init,
.exit = zstd_exit,
.compress = zstd_compress,
.decompress = zstd_decompress,
};
static int __init zstd_mod_init(void)
{
return crypto_register_scomp(&scomp);
return crypto_register_acomp(&zstd_acomp);
}
static void __exit zstd_mod_fini(void)
{
crypto_unregister_scomp(&scomp);
crypto_unregister_acomp(&zstd_acomp);
}
module_init(zstd_mod_init);


@ -80,7 +80,6 @@ static int atmel_trng_read(struct hwrng *rng, void *buf, size_t max,
ret = 4;
out:
pm_runtime_mark_last_busy(trng->dev);
pm_runtime_put_sync_autosuspend(trng->dev);
return ret;
}


@ -98,7 +98,6 @@ static void cc_trng_pm_put_suspend(struct device *dev)
{
int rc = 0;
pm_runtime_mark_last_busy(dev);
rc = pm_runtime_put_autosuspend(dev);
if (rc)
dev_err(dev, "pm_runtime_put_autosuspend returned %x\n", rc);


@ -98,7 +98,6 @@ static int mtk_rng_read(struct hwrng *rng, void *buf, size_t max, bool wait)
max -= sizeof(u32);
}
pm_runtime_mark_last_busy(priv->dev);
pm_runtime_put_sync_autosuspend(priv->dev);
return retval || !wait ? retval : -EIO;
@ -143,7 +142,9 @@ static int mtk_rng_probe(struct platform_device *pdev)
dev_set_drvdata(&pdev->dev, priv);
pm_runtime_set_autosuspend_delay(&pdev->dev, RNG_AUTOSUSPEND_TIMEOUT);
pm_runtime_use_autosuspend(&pdev->dev);
devm_pm_runtime_enable(&pdev->dev);
ret = devm_pm_runtime_enable(&pdev->dev);
if (ret)
return ret;
dev_info(&pdev->dev, "registered RNG driver\n");


@ -80,7 +80,6 @@ static int npcm_rng_read(struct hwrng *rng, void *buf, size_t max, bool wait)
max--;
}
pm_runtime_mark_last_busy(priv->dev);
pm_runtime_put_sync_autosuspend(priv->dev);
return retval || !wait ? retval : -EIO;


@ -56,7 +56,6 @@ static int omap3_rom_rng_read(struct hwrng *rng, void *data, size_t max, bool w)
else
r = 4;
pm_runtime_mark_last_busy(ddata->dev);
pm_runtime_put_autosuspend(ddata->dev);
return r;


@ -223,7 +223,6 @@ static int rk3568_rng_read(struct hwrng *rng, void *buf, size_t max, bool wait)
/* Read random data stored in the registers */
memcpy_fromio(buf, rk_rng->base + TRNG_RNG_DOUT, to_read);
out:
pm_runtime_mark_last_busy(rk_rng->dev);
pm_runtime_put_sync_autosuspend(rk_rng->dev);
return (ret < 0) ? ret : to_read;
@ -263,7 +262,6 @@ static int rk3576_rng_read(struct hwrng *rng, void *buf, size_t max, bool wait)
memcpy_fromio(buf, rk_rng->base + RKRNG_TRNG_DATA0, to_read);
out:
pm_runtime_mark_last_busy(rk_rng->dev);
pm_runtime_put_sync_autosuspend(rk_rng->dev);
return (ret < 0) ? ret : to_read;
@ -355,7 +353,6 @@ out:
/* close the TRNG */
rk_rng_writel(rk_rng, TRNG_V1_CTRL_NOP, TRNG_V1_CTRL);
pm_runtime_mark_last_busy(rk_rng->dev);
pm_runtime_put_sync_autosuspend(rk_rng->dev);
return (ret < 0) ? ret : to_read;


@ -255,7 +255,6 @@ static int stm32_rng_read(struct hwrng *rng, void *data, size_t max, bool wait)
}
exit_rpm:
pm_runtime_mark_last_busy(priv->dev);
pm_runtime_put_sync_autosuspend(priv->dev);
return retval || !wait ? retval : -EIO;


@ -188,6 +188,19 @@ config CRYPTO_PAES_S390
Select this option if you want to use the paes cipher
for example to use protected key encrypted devices.
config CRYPTO_PHMAC_S390
tristate "PHMAC cipher algorithms"
depends on S390
depends on PKEY
select CRYPTO_HASH
select CRYPTO_ENGINE
help
This is the s390 hardware accelerated implementation of the
protected key HMAC support for SHA224, SHA256, SHA384 and SHA512.
Select this option if you want to use the phmac digests
for example to use dm-integrity with secure/protected keys.
config S390_PRNG
tristate "Pseudo random number generator device driver"
depends on S390


@ -206,15 +206,14 @@ static int sun8i_ce_cipher_prepare(struct crypto_engine *engine, void *async_req
cet->t_key = desc_addr_val_le32(ce, rctx->addr_key);
ivsize = crypto_skcipher_ivsize(tfm);
if (areq->iv && crypto_skcipher_ivsize(tfm) > 0) {
rctx->ivlen = ivsize;
if (areq->iv && ivsize > 0) {
if (rctx->op_dir & CE_DECRYPTION) {
offset = areq->cryptlen - ivsize;
scatterwalk_map_and_copy(chan->backup_iv, areq->src,
offset, ivsize, 0);
}
memcpy(chan->bounce_iv, areq->iv, ivsize);
rctx->addr_iv = dma_map_single(ce->dev, chan->bounce_iv, rctx->ivlen,
rctx->addr_iv = dma_map_single(ce->dev, chan->bounce_iv, ivsize,
DMA_TO_DEVICE);
if (dma_mapping_error(ce->dev, rctx->addr_iv)) {
dev_err(ce->dev, "Cannot DMA MAP IV\n");
@ -278,8 +277,8 @@ static int sun8i_ce_cipher_prepare(struct crypto_engine *engine, void *async_req
}
chan->timeout = areq->cryptlen;
rctx->nr_sgs = nr_sgs;
rctx->nr_sgd = nr_sgd;
rctx->nr_sgs = ns;
rctx->nr_sgd = nd;
return 0;
theend_sgs:
@ -296,7 +295,8 @@ theend_sgs:
theend_iv:
if (areq->iv && ivsize > 0) {
if (!dma_mapping_error(ce->dev, rctx->addr_iv))
dma_unmap_single(ce->dev, rctx->addr_iv, rctx->ivlen, DMA_TO_DEVICE);
dma_unmap_single(ce->dev, rctx->addr_iv, ivsize,
DMA_TO_DEVICE);
offset = areq->cryptlen - ivsize;
if (rctx->op_dir & CE_DECRYPTION) {
@ -345,7 +345,8 @@ static void sun8i_ce_cipher_unprepare(struct crypto_engine *engine,
if (areq->iv && ivsize > 0) {
if (cet->t_iv)
dma_unmap_single(ce->dev, rctx->addr_iv, rctx->ivlen, DMA_TO_DEVICE);
dma_unmap_single(ce->dev, rctx->addr_iv, ivsize,
DMA_TO_DEVICE);
offset = areq->cryptlen - ivsize;
if (rctx->op_dir & CE_DECRYPTION) {
memcpy(areq->iv, chan->backup_iv, ivsize);


@ -342,8 +342,8 @@ int sun8i_ce_hash_run(struct crypto_engine *engine, void *breq)
algt = container_of(alg, struct sun8i_ce_alg_template, alg.hash.base);
ce = algt->ce;
bs = algt->alg.hash.base.halg.base.cra_blocksize;
digestsize = algt->alg.hash.base.halg.digestsize;
bs = crypto_ahash_blocksize(tfm);
digestsize = crypto_ahash_digestsize(tfm);
if (digestsize == SHA224_DIGEST_SIZE)
digestsize = SHA256_DIGEST_SIZE;
if (digestsize == SHA384_DIGEST_SIZE)
@ -455,7 +455,7 @@ int sun8i_ce_hash_run(struct crypto_engine *engine, void *breq)
err_unmap_result:
dma_unmap_single(ce->dev, addr_res, digestsize, DMA_FROM_DEVICE);
if (!err)
memcpy(areq->result, result, algt->alg.hash.base.halg.digestsize);
memcpy(areq->result, result, crypto_ahash_digestsize(tfm));
err_unmap_src:
dma_unmap_sg(ce->dev, areq->src, ns, DMA_TO_DEVICE);


@ -260,7 +260,6 @@ static inline __le32 desc_addr_val_le32(struct sun8i_ce_dev *dev,
* struct sun8i_cipher_req_ctx - context for a skcipher request
* @op_dir: direction (encrypt vs decrypt) for this request
* @flow: the flow to use for this request
* @ivlen: size of bounce_iv
* @nr_sgs: The number of source SG (as given by dma_map_sg())
* @nr_sgd: The number of destination SG (as given by dma_map_sg())
* @addr_iv: The IV addr returned by dma_map_single, need to unmap later
@ -270,7 +269,6 @@ static inline __le32 desc_addr_val_le32(struct sun8i_ce_dev *dev,
struct sun8i_cipher_req_ctx {
u32 op_dir;
int flow;
unsigned int ivlen;
int nr_sgs;
int nr_sgd;
dma_addr_t addr_iv;

File diff suppressed because it is too large


@ -119,7 +119,6 @@
#define SHA_FLAGS_SHA512 BIT(4)
#define SHA_FLAGS_SHA512_224 BIT(5)
#define SHA_FLAGS_SHA512_256 BIT(6)
#define SHA_FLAGS_HMAC BIT(8)
#define SHA_FLAGS_FINUP BIT(9)
#define SHA_FLAGS_MASK (0xff)
@ -161,22 +160,18 @@ struct aspeed_engine_hash {
aspeed_hace_fn_t dma_prepare;
};
struct aspeed_sha_hmac_ctx {
struct crypto_shash *shash;
u8 ipad[SHA512_BLOCK_SIZE];
u8 opad[SHA512_BLOCK_SIZE];
};
struct aspeed_sham_ctx {
struct aspeed_hace_dev *hace_dev;
unsigned long flags; /* hmac flag */
struct aspeed_sha_hmac_ctx base[];
};
struct aspeed_sham_reqctx {
/* DMA buffer written by hardware */
u8 digest[SHA512_DIGEST_SIZE] __aligned(64);
/* Software state sorted by size. */
u64 digcnt[2];
unsigned long flags; /* final update flag should no use*/
unsigned long op; /* final or update */
u32 cmd; /* trigger cmd */
/* walk state */
@ -188,17 +183,12 @@ struct aspeed_sham_reqctx {
size_t digsize;
size_t block_size;
size_t ivsize;
const __be32 *sha_iv;
/* remain data buffer */
u8 buffer[SHA512_BLOCK_SIZE * 2];
dma_addr_t buffer_dma_addr;
size_t bufcnt; /* buffer counter */
/* output buffer */
u8 digest[SHA512_DIGEST_SIZE] __aligned(64);
dma_addr_t digest_dma_addr;
u64 digcnt[2];
/* This is DMA too but read-only for hardware. */
u8 buffer[SHA512_BLOCK_SIZE + 16];
};
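The reshuffled aspeed_sham_reqctx above puts the hardware-written digest buffer first with 64-byte alignment, plain software state in the middle, and the device-read staging buffer last. A minimal userspace sketch of that layout idea, with made-up sizes and field names rather than the driver's:

#include <stdalign.h>
#include <stddef.h>
#include <stdio.h>

#define BLOCK_SIZE 128   /* illustrative stand-in for SHA512_BLOCK_SIZE */
#define DIGEST_SIZE 64   /* illustrative stand-in for SHA512_DIGEST_SIZE */

/* Device-written output first, cacheline-aligned so it starts on its own
 * cacheline; CPU-only bookkeeping in the middle; the buffer the device
 * only reads comes last. */
struct hash_reqctx {
    alignas(64) unsigned char digest[DIGEST_SIZE]; /* written by device */
    unsigned long long digcnt[2];                  /* software only */
    size_t bufcnt;
    unsigned char buffer[BLOCK_SIZE + 16];         /* read by device */
};

int main(void)
{
    printf("digest offset %zu, buffer offset %zu, struct alignment %zu\n",
           offsetof(struct hash_reqctx, digest),
           offsetof(struct hash_reqctx, buffer),
           alignof(struct hash_reqctx));
    return 0;
}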
struct aspeed_engine_crypto {


@ -2297,6 +2297,7 @@ static void atmel_aes_get_cap(struct atmel_aes_dev *dd)
/* keep only major version number */
switch (dd->hw_version & 0xff0) {
case 0x800:
case 0x700:
case 0x600:
case 0x500:


@ -2534,6 +2534,7 @@ static void atmel_sha_get_cap(struct atmel_sha_dev *dd)
/* keep only major version number */
switch (dd->hw_version & 0xff0) {
case 0x800:
case 0x700:
case 0x600:
case 0x510:


@ -25,10 +25,6 @@ caam_jr-$(CONFIG_CRYPTO_DEV_FSL_CAAM_PKC_API) += caampkc.o pkc_desc.o
caam_jr-$(CONFIG_CRYPTO_DEV_FSL_CAAM_BLOB_GEN) += blob_gen.o
caam-$(CONFIG_CRYPTO_DEV_FSL_CAAM_CRYPTO_API_QI) += qi.o
ifneq ($(CONFIG_CRYPTO_DEV_FSL_CAAM_CRYPTO_API_QI),)
ccflags-y += -DCONFIG_CAAM_QI
endif
caam-$(CONFIG_DEBUG_FS) += debugfs.o
obj-$(CONFIG_CRYPTO_DEV_FSL_DPAA2_CAAM) += dpaa2_caam.o


@ -24,7 +24,7 @@
bool caam_dpaa2;
EXPORT_SYMBOL(caam_dpaa2);
#ifdef CONFIG_CAAM_QI
#ifdef CONFIG_CRYPTO_DEV_FSL_CAAM_CRYPTO_API_QI
#include "qi.h"
#endif
@ -573,7 +573,7 @@ static const struct soc_device_attribute caam_imx_soc_table[] = {
{ .soc_id = "i.MX7*", .data = &caam_imx7_data },
{ .soc_id = "i.MX8M*", .data = &caam_imx7_data },
{ .soc_id = "i.MX8ULP", .data = &caam_imx8ulp_data },
{ .soc_id = "i.MX8QM", .data = &caam_imx8ulp_data },
{ .soc_id = "i.MX8Q*", .data = &caam_imx8ulp_data },
{ .soc_id = "VF*", .data = &caam_vf610_data },
{ .family = "Freescale i.MX" },
{ /* sentinel */ }
@ -831,7 +831,7 @@ static int caam_ctrl_suspend(struct device *dev)
{
const struct caam_drv_private *ctrlpriv = dev_get_drvdata(dev);
if (ctrlpriv->caam_off_during_pm && !ctrlpriv->optee_en)
if (ctrlpriv->caam_off_during_pm && !ctrlpriv->no_page0)
caam_state_save(dev);
return 0;
@ -842,7 +842,7 @@ static int caam_ctrl_resume(struct device *dev)
struct caam_drv_private *ctrlpriv = dev_get_drvdata(dev);
int ret = 0;
if (ctrlpriv->caam_off_during_pm && !ctrlpriv->optee_en) {
if (ctrlpriv->caam_off_during_pm && !ctrlpriv->no_page0) {
caam_state_restore(dev);
/* HW and rng will be reset so deinstantiation can be removed */
@ -908,6 +908,7 @@ static int caam_probe(struct platform_device *pdev)
imx_soc_data = imx_soc_match->data;
reg_access = reg_access && imx_soc_data->page0_access;
ctrlpriv->no_page0 = !reg_access;
/*
* CAAM clocks cannot be controlled from kernel.
*/
@ -967,7 +968,7 @@ iomap_ctrl:
caam_dpaa2 = !!(comp_params & CTPR_MS_DPAA2);
ctrlpriv->qi_present = !!(comp_params & CTPR_MS_QI_MASK);
#ifdef CONFIG_CAAM_QI
#ifdef CONFIG_CRYPTO_DEV_FSL_CAAM_CRYPTO_API_QI
/* If (DPAA 1.x) QI present, check whether dependencies are available */
if (ctrlpriv->qi_present && !caam_dpaa2) {
ret = qman_is_probed();
@ -1098,7 +1099,7 @@ set_dma_mask:
wr_reg32(&ctrlpriv->qi->qi_control_lo, QICTL_DQEN);
/* If QMAN driver is present, init CAAM-QI backend */
#ifdef CONFIG_CAAM_QI
#ifdef CONFIG_CRYPTO_DEV_FSL_CAAM_CRYPTO_API_QI
ret = caam_qi_init(pdev);
if (ret)
dev_err(dev, "caam qi i/f init failed: %d\n", ret);


@ -22,7 +22,7 @@ static int caam_debugfs_u32_get(void *data, u64 *val)
DEFINE_DEBUGFS_ATTRIBUTE(caam_fops_u32_ro, caam_debugfs_u32_get, NULL, "%llu\n");
DEFINE_DEBUGFS_ATTRIBUTE(caam_fops_u64_ro, caam_debugfs_u64_get, NULL, "%llu\n");
#ifdef CONFIG_CAAM_QI
#ifdef CONFIG_CRYPTO_DEV_FSL_CAAM_CRYPTO_API_QI
/*
* This is a counter for the number of times the congestion group (where all
* the request and response queueus are) reached congestion. Incremented


@ -18,7 +18,7 @@ static inline void caam_debugfs_init(struct caam_drv_private *ctrlpriv,
{}
#endif
#if defined(CONFIG_DEBUG_FS) && defined(CONFIG_CAAM_QI)
#if defined(CONFIG_DEBUG_FS) && defined(CONFIG_CRYPTO_DEV_FSL_CAAM_CRYPTO_API_QI)
void caam_debugfs_qi_congested(void);
void caam_debugfs_qi_init(struct caam_drv_private *ctrlpriv);
#else


@ -115,6 +115,7 @@ struct caam_drv_private {
u8 blob_present; /* Nonzero if BLOB support present in device */
u8 mc_en; /* Nonzero if MC f/w is active */
u8 optee_en; /* Nonzero if OP-TEE f/w is active */
u8 no_page0; /* Nonzero if register page 0 is not controlled by Linux */
bool pr_support; /* RNG prediction resistance available */
int secvio_irq; /* Security violation interrupt number */
int virt_en; /* Virtualization enabled in CAAM */
@ -226,7 +227,7 @@ static inline int caam_prng_register(struct device *dev)
static inline void caam_prng_unregister(void *data) {}
#endif /* CONFIG_CRYPTO_DEV_FSL_CAAM_PRNG_API */
#ifdef CONFIG_CAAM_QI
#ifdef CONFIG_CRYPTO_DEV_FSL_CAAM_CRYPTO_API_QI
int caam_qi_algapi_init(struct device *dev);
void caam_qi_algapi_exit(void);
@ -242,7 +243,7 @@ static inline void caam_qi_algapi_exit(void)
{
}
#endif /* CONFIG_CAAM_QI */
#endif /* CONFIG_CRYPTO_DEV_FSL_CAAM_CRYPTO_API_QI */
static inline u64 caam_get_dma_mask(struct device *dev)
{


@ -629,8 +629,7 @@ static int caam_jr_probe(struct platform_device *pdev)
}
/* Initialize crypto engine */
jrpriv->engine = crypto_engine_alloc_init_and_set(jrdev, true, NULL,
false,
jrpriv->engine = crypto_engine_alloc_init_and_set(jrdev, true, false,
CRYPTO_ENGINE_MAX_QLEN);
if (!jrpriv->engine) {
dev_err(jrdev, "Could not init crypto-engine\n");


@ -442,11 +442,8 @@ struct caam_drv_ctx *caam_drv_ctx_init(struct device *qidev,
if (!cpumask_test_cpu(*cpu, cpus)) {
int *pcpu = &get_cpu_var(last_cpu);
*pcpu = cpumask_next(*pcpu, cpus);
if (*pcpu >= nr_cpu_ids)
*pcpu = cpumask_first(cpus);
*pcpu = cpumask_next_wrap(*pcpu, cpus);
*cpu = *pcpu;
put_cpu_var(last_cpu);
}
drv_ctx->cpu = *cpu;
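The caam hunk above replaces the open-coded "advance to the next online CPU, and wrap to the first one when we fall off the end" with a single wrapping helper. A small userspace sketch of the same wrap-around walk over a plain bitmask; next_cpu_wrap() here is an invented stand-in, not the kernel's cpumask_next_wrap():

#include <stdio.h>

/* Return the index of the next set bit strictly after 'prev', wrapping to
 * the lowest set bit when we run past the end; -1 if the mask is empty. */
static int next_cpu_wrap(unsigned int mask, int prev, int nbits)
{
    for (int i = prev + 1; i < nbits; i++)
        if (mask & (1u << i))
            return i;
    for (int i = 0; i <= prev && i < nbits; i++)
        if (mask & (1u << i))
            return i;
    return -1;
}

int main(void)
{
    unsigned int online = 0x2c;   /* CPUs 2, 3 and 5 are "online" */
    int cpu = 5;

    for (int i = 0; i < 6; i++) {
        cpu = next_cpu_wrap(online, cpu, 8);
        printf("next cpu: %d\n", cpu);   /* cycles 2, 3, 5, 2, ... */
    }
    return 0;
}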


@ -319,5 +319,8 @@ void ccp5_debugfs_setup(struct ccp_device *ccp)
void ccp5_debugfs_destroy(void)
{
mutex_lock(&ccp_debugfs_lock);
debugfs_remove_recursive(ccp_debugfs_dir);
ccp_debugfs_dir = NULL;
mutex_unlock(&ccp_debugfs_lock);
}
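The ccp5_debugfs_destroy() change above performs the teardown under the debugfs mutex and clears the shared directory pointer, so repeated or concurrent destroy calls see consistent state. A rough userspace analogue of that guarded, idempotent teardown; the log-file names and helpers are invented:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t log_lock = PTHREAD_MUTEX_INITIALIZER;
static FILE *log_file;          /* shared, guarded by log_lock */

static void log_open(const char *path)
{
    pthread_mutex_lock(&log_lock);
    if (!log_file)
        log_file = fopen(path, "a");
    pthread_mutex_unlock(&log_lock);
}

/* Safe to call from several threads and more than once: the pointer is
 * released and cleared under the same lock that publishes it. */
static void log_close(void)
{
    pthread_mutex_lock(&log_lock);
    if (log_file) {
        fclose(log_file);
        log_file = NULL;
    }
    pthread_mutex_unlock(&log_lock);
}

int main(void)
{
    log_open("/tmp/example.log");
    log_close();
    log_close();    /* second call is a harmless no-op */
    return 0;
}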


@ -633,10 +633,16 @@ static noinline_for_stack int
ccp_run_aes_gcm_cmd(struct ccp_cmd_queue *cmd_q, struct ccp_cmd *cmd)
{
struct ccp_aes_engine *aes = &cmd->u.aes;
struct ccp_dm_workarea key, ctx, final_wa, tag;
struct ccp_data src, dst;
struct ccp_data aad;
struct ccp_op op;
struct {
struct ccp_dm_workarea key;
struct ccp_dm_workarea ctx;
struct ccp_dm_workarea final;
struct ccp_dm_workarea tag;
struct ccp_data src;
struct ccp_data dst;
struct ccp_data aad;
struct ccp_op op;
} *wa __cleanup(kfree) = kzalloc(sizeof *wa, GFP_KERNEL);
unsigned int dm_offset;
unsigned int authsize;
unsigned int jobid;
@ -650,6 +656,9 @@ ccp_run_aes_gcm_cmd(struct ccp_cmd_queue *cmd_q, struct ccp_cmd *cmd)
struct scatterlist *p_outp, sg_outp[2];
struct scatterlist *p_aad;
if (!wa)
return -ENOMEM;
if (!aes->iv)
return -EINVAL;
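The hunk above gathers ccp_run_aes_gcm_cmd()'s bulky locals into a single heap-allocated workarea that is released automatically when the pointer goes out of scope, which is presumably what trims the stack frame. A minimal userspace sketch of that scoped-free pattern using the GCC/Clang cleanup attribute; struct workarea and free_workarea() are illustrative, not the driver's types:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct workarea {
    unsigned char key[32];
    unsigned char iv[16];
    unsigned char tag[16];
};

/* The cleanup handler receives a pointer to the annotated variable itself,
 * i.e. a struct workarea **, and frees whatever it points at. */
static void free_workarea(struct workarea **wa)
{
    free(*wa);
}

static int run_cmd(void)
{
    struct workarea *wa __attribute__((cleanup(free_workarea))) =
        calloc(1, sizeof(*wa));

    if (!wa)
        return -1;

    memset(wa->iv, 0xab, sizeof(wa->iv));
    printf("iv[0] = %#x\n", wa->iv[0]);
    return 0;   /* wa is freed automatically on every return path */
}

int main(void)
{
    return run_cmd() ? 1 : 0;
}

The cleanup attribute is a GCC/Clang extension rather than ISO C, so the sketch assumes one of those compilers.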
@ -696,26 +705,26 @@ ccp_run_aes_gcm_cmd(struct ccp_cmd_queue *cmd_q, struct ccp_cmd *cmd)
jobid = CCP_NEW_JOBID(cmd_q->ccp);
memset(&op, 0, sizeof(op));
op.cmd_q = cmd_q;
op.jobid = jobid;
op.sb_key = cmd_q->sb_key; /* Pre-allocated */
op.sb_ctx = cmd_q->sb_ctx; /* Pre-allocated */
op.init = 1;
op.u.aes.type = aes->type;
memset(&wa->op, 0, sizeof(wa->op));
wa->op.cmd_q = cmd_q;
wa->op.jobid = jobid;
wa->op.sb_key = cmd_q->sb_key; /* Pre-allocated */
wa->op.sb_ctx = cmd_q->sb_ctx; /* Pre-allocated */
wa->op.init = 1;
wa->op.u.aes.type = aes->type;
/* Copy the key to the LSB */
ret = ccp_init_dm_workarea(&key, cmd_q,
ret = ccp_init_dm_workarea(&wa->key, cmd_q,
CCP_AES_CTX_SB_COUNT * CCP_SB_BYTES,
DMA_TO_DEVICE);
if (ret)
return ret;
dm_offset = CCP_SB_BYTES - aes->key_len;
ret = ccp_set_dm_area(&key, dm_offset, aes->key, 0, aes->key_len);
ret = ccp_set_dm_area(&wa->key, dm_offset, aes->key, 0, aes->key_len);
if (ret)
goto e_key;
ret = ccp_copy_to_sb(cmd_q, &key, op.jobid, op.sb_key,
ret = ccp_copy_to_sb(cmd_q, &wa->key, wa->op.jobid, wa->op.sb_key,
CCP_PASSTHRU_BYTESWAP_256BIT);
if (ret) {
cmd->engine_error = cmd_q->cmd_error;
@ -726,58 +735,58 @@ ccp_run_aes_gcm_cmd(struct ccp_cmd_queue *cmd_q, struct ccp_cmd *cmd)
* There is an assumption here that the IV is 96 bits in length, plus
* a nonce of 32 bits. If no IV is present, use a zeroed buffer.
*/
ret = ccp_init_dm_workarea(&ctx, cmd_q,
ret = ccp_init_dm_workarea(&wa->ctx, cmd_q,
CCP_AES_CTX_SB_COUNT * CCP_SB_BYTES,
DMA_BIDIRECTIONAL);
if (ret)
goto e_key;
dm_offset = CCP_AES_CTX_SB_COUNT * CCP_SB_BYTES - aes->iv_len;
ret = ccp_set_dm_area(&ctx, dm_offset, aes->iv, 0, aes->iv_len);
ret = ccp_set_dm_area(&wa->ctx, dm_offset, aes->iv, 0, aes->iv_len);
if (ret)
goto e_ctx;
ret = ccp_copy_to_sb(cmd_q, &ctx, op.jobid, op.sb_ctx,
ret = ccp_copy_to_sb(cmd_q, &wa->ctx, wa->op.jobid, wa->op.sb_ctx,
CCP_PASSTHRU_BYTESWAP_256BIT);
if (ret) {
cmd->engine_error = cmd_q->cmd_error;
goto e_ctx;
}
op.init = 1;
wa->op.init = 1;
if (aes->aad_len > 0) {
/* Step 1: Run a GHASH over the Additional Authenticated Data */
ret = ccp_init_data(&aad, cmd_q, p_aad, aes->aad_len,
ret = ccp_init_data(&wa->aad, cmd_q, p_aad, aes->aad_len,
AES_BLOCK_SIZE,
DMA_TO_DEVICE);
if (ret)
goto e_ctx;
op.u.aes.mode = CCP_AES_MODE_GHASH;
op.u.aes.action = CCP_AES_GHASHAAD;
wa->op.u.aes.mode = CCP_AES_MODE_GHASH;
wa->op.u.aes.action = CCP_AES_GHASHAAD;
while (aad.sg_wa.bytes_left) {
ccp_prepare_data(&aad, NULL, &op, AES_BLOCK_SIZE, true);
while (wa->aad.sg_wa.bytes_left) {
ccp_prepare_data(&wa->aad, NULL, &wa->op, AES_BLOCK_SIZE, true);
ret = cmd_q->ccp->vdata->perform->aes(&op);
ret = cmd_q->ccp->vdata->perform->aes(&wa->op);
if (ret) {
cmd->engine_error = cmd_q->cmd_error;
goto e_aad;
}
ccp_process_data(&aad, NULL, &op);
op.init = 0;
ccp_process_data(&wa->aad, NULL, &wa->op);
wa->op.init = 0;
}
}
op.u.aes.mode = CCP_AES_MODE_GCTR;
op.u.aes.action = aes->action;
wa->op.u.aes.mode = CCP_AES_MODE_GCTR;
wa->op.u.aes.action = aes->action;
if (ilen > 0) {
/* Step 2: Run a GCTR over the plaintext */
in_place = (sg_virt(p_inp) == sg_virt(p_outp)) ? true : false;
ret = ccp_init_data(&src, cmd_q, p_inp, ilen,
ret = ccp_init_data(&wa->src, cmd_q, p_inp, ilen,
AES_BLOCK_SIZE,
in_place ? DMA_BIDIRECTIONAL
: DMA_TO_DEVICE);
@ -785,52 +794,52 @@ ccp_run_aes_gcm_cmd(struct ccp_cmd_queue *cmd_q, struct ccp_cmd *cmd)
goto e_aad;
if (in_place) {
dst = src;
wa->dst = wa->src;
} else {
ret = ccp_init_data(&dst, cmd_q, p_outp, ilen,
ret = ccp_init_data(&wa->dst, cmd_q, p_outp, ilen,
AES_BLOCK_SIZE, DMA_FROM_DEVICE);
if (ret)
goto e_src;
}
op.soc = 0;
op.eom = 0;
op.init = 1;
while (src.sg_wa.bytes_left) {
ccp_prepare_data(&src, &dst, &op, AES_BLOCK_SIZE, true);
if (!src.sg_wa.bytes_left) {
wa->op.soc = 0;
wa->op.eom = 0;
wa->op.init = 1;
while (wa->src.sg_wa.bytes_left) {
ccp_prepare_data(&wa->src, &wa->dst, &wa->op, AES_BLOCK_SIZE, true);
if (!wa->src.sg_wa.bytes_left) {
unsigned int nbytes = ilen % AES_BLOCK_SIZE;
if (nbytes) {
op.eom = 1;
op.u.aes.size = (nbytes * 8) - 1;
wa->op.eom = 1;
wa->op.u.aes.size = (nbytes * 8) - 1;
}
}
ret = cmd_q->ccp->vdata->perform->aes(&op);
ret = cmd_q->ccp->vdata->perform->aes(&wa->op);
if (ret) {
cmd->engine_error = cmd_q->cmd_error;
goto e_dst;
}
ccp_process_data(&src, &dst, &op);
op.init = 0;
ccp_process_data(&wa->src, &wa->dst, &wa->op);
wa->op.init = 0;
}
}
/* Step 3: Update the IV portion of the context with the original IV */
ret = ccp_copy_from_sb(cmd_q, &ctx, op.jobid, op.sb_ctx,
ret = ccp_copy_from_sb(cmd_q, &wa->ctx, wa->op.jobid, wa->op.sb_ctx,
CCP_PASSTHRU_BYTESWAP_256BIT);
if (ret) {
cmd->engine_error = cmd_q->cmd_error;
goto e_dst;
}
ret = ccp_set_dm_area(&ctx, dm_offset, aes->iv, 0, aes->iv_len);
ret = ccp_set_dm_area(&wa->ctx, dm_offset, aes->iv, 0, aes->iv_len);
if (ret)
goto e_dst;
ret = ccp_copy_to_sb(cmd_q, &ctx, op.jobid, op.sb_ctx,
ret = ccp_copy_to_sb(cmd_q, &wa->ctx, wa->op.jobid, wa->op.sb_ctx,
CCP_PASSTHRU_BYTESWAP_256BIT);
if (ret) {
cmd->engine_error = cmd_q->cmd_error;
@ -840,75 +849,75 @@ ccp_run_aes_gcm_cmd(struct ccp_cmd_queue *cmd_q, struct ccp_cmd *cmd)
/* Step 4: Concatenate the lengths of the AAD and source, and
* hash that 16 byte buffer.
*/
ret = ccp_init_dm_workarea(&final_wa, cmd_q, AES_BLOCK_SIZE,
ret = ccp_init_dm_workarea(&wa->final, cmd_q, AES_BLOCK_SIZE,
DMA_BIDIRECTIONAL);
if (ret)
goto e_dst;
final = (__be64 *)final_wa.address;
final = (__be64 *)wa->final.address;
final[0] = cpu_to_be64(aes->aad_len * 8);
final[1] = cpu_to_be64(ilen * 8);
memset(&op, 0, sizeof(op));
op.cmd_q = cmd_q;
op.jobid = jobid;
op.sb_key = cmd_q->sb_key; /* Pre-allocated */
op.sb_ctx = cmd_q->sb_ctx; /* Pre-allocated */
op.init = 1;
op.u.aes.type = aes->type;
op.u.aes.mode = CCP_AES_MODE_GHASH;
op.u.aes.action = CCP_AES_GHASHFINAL;
op.src.type = CCP_MEMTYPE_SYSTEM;
op.src.u.dma.address = final_wa.dma.address;
op.src.u.dma.length = AES_BLOCK_SIZE;
op.dst.type = CCP_MEMTYPE_SYSTEM;
op.dst.u.dma.address = final_wa.dma.address;
op.dst.u.dma.length = AES_BLOCK_SIZE;
op.eom = 1;
op.u.aes.size = 0;
ret = cmd_q->ccp->vdata->perform->aes(&op);
memset(&wa->op, 0, sizeof(wa->op));
wa->op.cmd_q = cmd_q;
wa->op.jobid = jobid;
wa->op.sb_key = cmd_q->sb_key; /* Pre-allocated */
wa->op.sb_ctx = cmd_q->sb_ctx; /* Pre-allocated */
wa->op.init = 1;
wa->op.u.aes.type = aes->type;
wa->op.u.aes.mode = CCP_AES_MODE_GHASH;
wa->op.u.aes.action = CCP_AES_GHASHFINAL;
wa->op.src.type = CCP_MEMTYPE_SYSTEM;
wa->op.src.u.dma.address = wa->final.dma.address;
wa->op.src.u.dma.length = AES_BLOCK_SIZE;
wa->op.dst.type = CCP_MEMTYPE_SYSTEM;
wa->op.dst.u.dma.address = wa->final.dma.address;
wa->op.dst.u.dma.length = AES_BLOCK_SIZE;
wa->op.eom = 1;
wa->op.u.aes.size = 0;
ret = cmd_q->ccp->vdata->perform->aes(&wa->op);
if (ret)
goto e_final_wa;
if (aes->action == CCP_AES_ACTION_ENCRYPT) {
/* Put the ciphered tag after the ciphertext. */
ccp_get_dm_area(&final_wa, 0, p_tag, 0, authsize);
ccp_get_dm_area(&wa->final, 0, p_tag, 0, authsize);
} else {
/* Does this ciphered tag match the input? */
ret = ccp_init_dm_workarea(&tag, cmd_q, authsize,
ret = ccp_init_dm_workarea(&wa->tag, cmd_q, authsize,
DMA_BIDIRECTIONAL);
if (ret)
goto e_final_wa;
ret = ccp_set_dm_area(&tag, 0, p_tag, 0, authsize);
ret = ccp_set_dm_area(&wa->tag, 0, p_tag, 0, authsize);
if (ret) {
ccp_dm_free(&tag);
ccp_dm_free(&wa->tag);
goto e_final_wa;
}
ret = crypto_memneq(tag.address, final_wa.address,
ret = crypto_memneq(wa->tag.address, wa->final.address,
authsize) ? -EBADMSG : 0;
ccp_dm_free(&tag);
ccp_dm_free(&wa->tag);
}
e_final_wa:
ccp_dm_free(&final_wa);
ccp_dm_free(&wa->final);
e_dst:
if (ilen > 0 && !in_place)
ccp_free_data(&dst, cmd_q);
ccp_free_data(&wa->dst, cmd_q);
e_src:
if (ilen > 0)
ccp_free_data(&src, cmd_q);
ccp_free_data(&wa->src, cmd_q);
e_aad:
if (aes->aad_len)
ccp_free_data(&aad, cmd_q);
ccp_free_data(&wa->aad, cmd_q);
e_ctx:
ccp_dm_free(&ctx);
ccp_dm_free(&wa->ctx);
e_key:
ccp_dm_free(&key);
ccp_dm_free(&wa->key);
return ret;
}


@ -434,7 +434,7 @@ cleanup:
return rc;
}
static struct page *__snp_alloc_firmware_pages(gfp_t gfp_mask, int order)
static struct page *__snp_alloc_firmware_pages(gfp_t gfp_mask, int order, bool locked)
{
unsigned long npages = 1ul << order, paddr;
struct sev_device *sev;
@ -453,7 +453,7 @@ static struct page *__snp_alloc_firmware_pages(gfp_t gfp_mask, int order)
return page;
paddr = __pa((unsigned long)page_address(page));
if (rmp_mark_pages_firmware(paddr, npages, false))
if (rmp_mark_pages_firmware(paddr, npages, locked))
return NULL;
return page;
@ -463,7 +463,7 @@ void *snp_alloc_firmware_page(gfp_t gfp_mask)
{
struct page *page;
page = __snp_alloc_firmware_pages(gfp_mask, 0);
page = __snp_alloc_firmware_pages(gfp_mask, 0, false);
return page ? page_address(page) : NULL;
}
@ -498,7 +498,7 @@ static void *sev_fw_alloc(unsigned long len)
{
struct page *page;
page = __snp_alloc_firmware_pages(GFP_KERNEL, get_order(len));
page = __snp_alloc_firmware_pages(GFP_KERNEL, get_order(len), true);
if (!page)
return NULL;
@ -1276,9 +1276,11 @@ static int __sev_platform_init_handle_init_ex_path(struct sev_device *sev)
static int __sev_platform_init_locked(int *error)
{
int rc, psp_ret = SEV_RET_NO_FW_CALL;
int rc, psp_ret, dfflush_error;
struct sev_device *sev;
psp_ret = dfflush_error = SEV_RET_NO_FW_CALL;
if (!psp_master || !psp_master->sev_data)
return -ENODEV;
@ -1320,10 +1322,10 @@ static int __sev_platform_init_locked(int *error)
/* Prepare for first SEV guest launch after INIT */
wbinvd_on_all_cpus();
rc = __sev_do_cmd_locked(SEV_CMD_DF_FLUSH, NULL, error);
rc = __sev_do_cmd_locked(SEV_CMD_DF_FLUSH, NULL, &dfflush_error);
if (rc) {
dev_err(sev->dev, "SEV: DF_FLUSH failed %#x, rc %d\n",
*error, rc);
dfflush_error, rc);
return rc;
}
@ -1785,8 +1787,14 @@ static int __sev_snp_shutdown_locked(int *error, bool panic)
sev->snp_initialized = false;
dev_dbg(sev->dev, "SEV-SNP firmware shutdown\n");
atomic_notifier_chain_unregister(&panic_notifier_list,
&snp_panic_notifier);
/*
* __sev_snp_shutdown_locked() deadlocks when it tries to unregister
* itself during panic as the panic notifier is called with RCU read
* lock held and notifier unregistration does RCU synchronization.
*/
if (!panic)
atomic_notifier_chain_unregister(&panic_notifier_list,
&snp_panic_notifier);
/* Reset TMR size back to default */
sev_es_tmr_size = SEV_TMR_SIZE;


@ -453,6 +453,7 @@ static const struct psp_vdata pspv6 = {
.cmdresp_reg = 0x10944, /* C2PMSG_17 */
.cmdbuff_addr_lo_reg = 0x10948, /* C2PMSG_18 */
.cmdbuff_addr_hi_reg = 0x1094c, /* C2PMSG_19 */
.bootloader_info_reg = 0x109ec, /* C2PMSG_59 */
.feature_reg = 0x109fc, /* C2PMSG_63 */
.inten_reg = 0x10510, /* P2CMSG_INTEN */
.intsts_reg = 0x10514, /* P2CMSG_INTSTS */


@ -224,7 +224,7 @@ static int cc_generate_mlli(struct device *dev, struct buffer_array *sg_data,
/* Set MLLI size for the bypass operation */
mlli_params->mlli_len = (total_nents * LLI_ENTRY_BYTE_SIZE);
dev_dbg(dev, "MLLI params: virt_addr=%pK dma_addr=%pad mlli_len=0x%X\n",
dev_dbg(dev, "MLLI params: virt_addr=%p dma_addr=%pad mlli_len=0x%X\n",
mlli_params->mlli_virt_addr, &mlli_params->mlli_dma_addr,
mlli_params->mlli_len);
@ -239,7 +239,7 @@ static void cc_add_sg_entry(struct device *dev, struct buffer_array *sgl_data,
{
unsigned int index = sgl_data->num_of_buffers;
dev_dbg(dev, "index=%u nents=%u sgl=%pK data_len=0x%08X is_last=%d\n",
dev_dbg(dev, "index=%u nents=%u sgl=%p data_len=0x%08X is_last=%d\n",
index, nents, sgl, data_len, is_last_table);
sgl_data->nents[index] = nents;
sgl_data->entry[index].sgl = sgl;
@ -298,7 +298,7 @@ cc_set_aead_conf_buf(struct device *dev, struct aead_req_ctx *areq_ctx,
dev_err(dev, "dma_map_sg() config buffer failed\n");
return -ENOMEM;
}
dev_dbg(dev, "Mapped curr_buff: dma_address=%pad page=%p addr=%pK offset=%u length=%u\n",
dev_dbg(dev, "Mapped curr_buff: dma_address=%pad page=%p addr=%p offset=%u length=%u\n",
&sg_dma_address(&areq_ctx->ccm_adata_sg),
sg_page(&areq_ctx->ccm_adata_sg),
sg_virt(&areq_ctx->ccm_adata_sg),
@ -323,7 +323,7 @@ static int cc_set_hash_buf(struct device *dev, struct ahash_req_ctx *areq_ctx,
dev_err(dev, "dma_map_sg() src buffer failed\n");
return -ENOMEM;
}
dev_dbg(dev, "Mapped curr_buff: dma_address=%pad page=%p addr=%pK offset=%u length=%u\n",
dev_dbg(dev, "Mapped curr_buff: dma_address=%pad page=%p addr=%p offset=%u length=%u\n",
&sg_dma_address(areq_ctx->buff_sg), sg_page(areq_ctx->buff_sg),
sg_virt(areq_ctx->buff_sg), areq_ctx->buff_sg->offset,
areq_ctx->buff_sg->length);
@ -359,11 +359,11 @@ void cc_unmap_cipher_request(struct device *dev, void *ctx,
if (src != dst) {
dma_unmap_sg(dev, src, req_ctx->in_nents, DMA_TO_DEVICE);
dma_unmap_sg(dev, dst, req_ctx->out_nents, DMA_FROM_DEVICE);
dev_dbg(dev, "Unmapped req->dst=%pK\n", sg_virt(dst));
dev_dbg(dev, "Unmapped req->src=%pK\n", sg_virt(src));
dev_dbg(dev, "Unmapped req->dst=%p\n", sg_virt(dst));
dev_dbg(dev, "Unmapped req->src=%p\n", sg_virt(src));
} else {
dma_unmap_sg(dev, src, req_ctx->in_nents, DMA_BIDIRECTIONAL);
dev_dbg(dev, "Unmapped req->src=%pK\n", sg_virt(src));
dev_dbg(dev, "Unmapped req->src=%p\n", sg_virt(src));
}
}
@ -391,11 +391,11 @@ int cc_map_cipher_request(struct cc_drvdata *drvdata, void *ctx,
req_ctx->gen_ctx.iv_dma_addr =
dma_map_single(dev, info, ivsize, DMA_BIDIRECTIONAL);
if (dma_mapping_error(dev, req_ctx->gen_ctx.iv_dma_addr)) {
dev_err(dev, "Mapping iv %u B at va=%pK for DMA failed\n",
dev_err(dev, "Mapping iv %u B at va=%p for DMA failed\n",
ivsize, info);
return -ENOMEM;
}
dev_dbg(dev, "Mapped iv %u B at va=%pK to dma=%pad\n",
dev_dbg(dev, "Mapped iv %u B at va=%p to dma=%pad\n",
ivsize, info, &req_ctx->gen_ctx.iv_dma_addr);
} else {
req_ctx->gen_ctx.iv_dma_addr = 0;
@ -506,7 +506,7 @@ void cc_unmap_aead_request(struct device *dev, struct aead_request *req)
if ((areq_ctx->assoc_buff_type == CC_DMA_BUF_MLLI ||
areq_ctx->data_buff_type == CC_DMA_BUF_MLLI) &&
(areq_ctx->mlli_params.mlli_virt_addr)) {
dev_dbg(dev, "free MLLI buffer: dma=%pad virt=%pK\n",
dev_dbg(dev, "free MLLI buffer: dma=%pad virt=%p\n",
&areq_ctx->mlli_params.mlli_dma_addr,
areq_ctx->mlli_params.mlli_virt_addr);
dma_pool_free(areq_ctx->mlli_params.curr_pool,
@ -514,13 +514,13 @@ void cc_unmap_aead_request(struct device *dev, struct aead_request *req)
areq_ctx->mlli_params.mlli_dma_addr);
}
dev_dbg(dev, "Unmapping src sgl: req->src=%pK areq_ctx->src.nents=%u areq_ctx->assoc.nents=%u assoclen:%u cryptlen=%u\n",
dev_dbg(dev, "Unmapping src sgl: req->src=%p areq_ctx->src.nents=%u areq_ctx->assoc.nents=%u assoclen:%u cryptlen=%u\n",
sg_virt(req->src), areq_ctx->src.nents, areq_ctx->assoc.nents,
areq_ctx->assoclen, req->cryptlen);
dma_unmap_sg(dev, req->src, areq_ctx->src.mapped_nents, src_direction);
if (req->src != req->dst) {
dev_dbg(dev, "Unmapping dst sgl: req->dst=%pK\n",
dev_dbg(dev, "Unmapping dst sgl: req->dst=%p\n",
sg_virt(req->dst));
dma_unmap_sg(dev, req->dst, areq_ctx->dst.mapped_nents, DMA_FROM_DEVICE);
}
@ -566,7 +566,7 @@ static int cc_aead_chain_iv(struct cc_drvdata *drvdata,
dma_map_single(dev, areq_ctx->gen_ctx.iv, hw_iv_size,
DMA_BIDIRECTIONAL);
if (dma_mapping_error(dev, areq_ctx->gen_ctx.iv_dma_addr)) {
dev_err(dev, "Mapping iv %u B at va=%pK for DMA failed\n",
dev_err(dev, "Mapping iv %u B at va=%p for DMA failed\n",
hw_iv_size, req->iv);
kfree_sensitive(areq_ctx->gen_ctx.iv);
areq_ctx->gen_ctx.iv = NULL;
@ -574,7 +574,7 @@ static int cc_aead_chain_iv(struct cc_drvdata *drvdata,
goto chain_iv_exit;
}
dev_dbg(dev, "Mapped iv %u B at va=%pK to dma=%pad\n",
dev_dbg(dev, "Mapped iv %u B at va=%p to dma=%pad\n",
hw_iv_size, req->iv, &areq_ctx->gen_ctx.iv_dma_addr);
chain_iv_exit:
@ -977,7 +977,7 @@ int cc_map_aead_request(struct cc_drvdata *drvdata, struct aead_request *req)
dma_addr = dma_map_single(dev, areq_ctx->mac_buf, MAX_MAC_SIZE,
DMA_BIDIRECTIONAL);
if (dma_mapping_error(dev, dma_addr)) {
dev_err(dev, "Mapping mac_buf %u B at va=%pK for DMA failed\n",
dev_err(dev, "Mapping mac_buf %u B at va=%p for DMA failed\n",
MAX_MAC_SIZE, areq_ctx->mac_buf);
rc = -ENOMEM;
goto aead_map_failure;
@ -991,7 +991,7 @@ int cc_map_aead_request(struct cc_drvdata *drvdata, struct aead_request *req)
DMA_TO_DEVICE);
if (dma_mapping_error(dev, dma_addr)) {
dev_err(dev, "Mapping mac_buf %u B at va=%pK for DMA failed\n",
dev_err(dev, "Mapping mac_buf %u B at va=%p for DMA failed\n",
AES_BLOCK_SIZE, addr);
areq_ctx->ccm_iv0_dma_addr = 0;
rc = -ENOMEM;
@ -1009,7 +1009,7 @@ int cc_map_aead_request(struct cc_drvdata *drvdata, struct aead_request *req)
dma_addr = dma_map_single(dev, areq_ctx->hkey, AES_BLOCK_SIZE,
DMA_BIDIRECTIONAL);
if (dma_mapping_error(dev, dma_addr)) {
dev_err(dev, "Mapping hkey %u B at va=%pK for DMA failed\n",
dev_err(dev, "Mapping hkey %u B at va=%p for DMA failed\n",
AES_BLOCK_SIZE, areq_ctx->hkey);
rc = -ENOMEM;
goto aead_map_failure;
@ -1019,7 +1019,7 @@ int cc_map_aead_request(struct cc_drvdata *drvdata, struct aead_request *req)
dma_addr = dma_map_single(dev, &areq_ctx->gcm_len_block,
AES_BLOCK_SIZE, DMA_TO_DEVICE);
if (dma_mapping_error(dev, dma_addr)) {
dev_err(dev, "Mapping gcm_len_block %u B at va=%pK for DMA failed\n",
dev_err(dev, "Mapping gcm_len_block %u B at va=%p for DMA failed\n",
AES_BLOCK_SIZE, &areq_ctx->gcm_len_block);
rc = -ENOMEM;
goto aead_map_failure;
@ -1030,7 +1030,7 @@ int cc_map_aead_request(struct cc_drvdata *drvdata, struct aead_request *req)
AES_BLOCK_SIZE, DMA_TO_DEVICE);
if (dma_mapping_error(dev, dma_addr)) {
dev_err(dev, "Mapping gcm_iv_inc1 %u B at va=%pK for DMA failed\n",
dev_err(dev, "Mapping gcm_iv_inc1 %u B at va=%p for DMA failed\n",
AES_BLOCK_SIZE, (areq_ctx->gcm_iv_inc1));
areq_ctx->gcm_iv_inc1_dma_addr = 0;
rc = -ENOMEM;
@ -1042,7 +1042,7 @@ int cc_map_aead_request(struct cc_drvdata *drvdata, struct aead_request *req)
AES_BLOCK_SIZE, DMA_TO_DEVICE);
if (dma_mapping_error(dev, dma_addr)) {
dev_err(dev, "Mapping gcm_iv_inc2 %u B at va=%pK for DMA failed\n",
dev_err(dev, "Mapping gcm_iv_inc2 %u B at va=%p for DMA failed\n",
AES_BLOCK_SIZE, (areq_ctx->gcm_iv_inc2));
areq_ctx->gcm_iv_inc2_dma_addr = 0;
rc = -ENOMEM;
@ -1152,7 +1152,7 @@ int cc_map_hash_request_final(struct cc_drvdata *drvdata, void *ctx,
u32 dummy = 0;
u32 mapped_nents = 0;
dev_dbg(dev, "final params : curr_buff=%pK curr_buff_cnt=0x%X nbytes = 0x%X src=%pK curr_index=%u\n",
dev_dbg(dev, "final params : curr_buff=%p curr_buff_cnt=0x%X nbytes = 0x%X src=%p curr_index=%u\n",
curr_buff, *curr_buff_cnt, nbytes, src, areq_ctx->buff_index);
/* Init the type of the dma buffer */
areq_ctx->data_dma_buf_type = CC_DMA_BUF_NULL;
@ -1236,7 +1236,7 @@ int cc_map_hash_request_update(struct cc_drvdata *drvdata, void *ctx,
u32 dummy = 0;
u32 mapped_nents = 0;
dev_dbg(dev, " update params : curr_buff=%pK curr_buff_cnt=0x%X nbytes=0x%X src=%pK curr_index=%u\n",
dev_dbg(dev, " update params : curr_buff=%p curr_buff_cnt=0x%X nbytes=0x%X src=%p curr_index=%u\n",
curr_buff, *curr_buff_cnt, nbytes, src, areq_ctx->buff_index);
/* Init the type of the dma buffer */
areq_ctx->data_dma_buf_type = CC_DMA_BUF_NULL;
@ -1246,7 +1246,7 @@ int cc_map_hash_request_update(struct cc_drvdata *drvdata, void *ctx,
areq_ctx->in_nents = 0;
if (total_in_len < block_size) {
dev_dbg(dev, " less than one block: curr_buff=%pK *curr_buff_cnt=0x%X copy_to=%pK\n",
dev_dbg(dev, " less than one block: curr_buff=%p *curr_buff_cnt=0x%X copy_to=%p\n",
curr_buff, *curr_buff_cnt, &curr_buff[*curr_buff_cnt]);
areq_ctx->in_nents = sg_nents_for_len(src, nbytes);
sg_copy_to_buffer(src, areq_ctx->in_nents,
@ -1265,7 +1265,7 @@ int cc_map_hash_request_update(struct cc_drvdata *drvdata, void *ctx,
/* Copy the new residue to next buffer */
if (*next_buff_cnt) {
dev_dbg(dev, " handle residue: next buff %pK skip data %u residue %u\n",
dev_dbg(dev, " handle residue: next buff %p skip data %u residue %u\n",
next_buff, (update_data_len - *curr_buff_cnt),
*next_buff_cnt);
cc_copy_sg_portion(dev, next_buff, src,
@ -1338,7 +1338,7 @@ void cc_unmap_hash_request(struct device *dev, void *ctx,
*allocated and should be released
*/
if (areq_ctx->mlli_params.curr_pool) {
dev_dbg(dev, "free MLLI buffer: dma=%pad virt=%pK\n",
dev_dbg(dev, "free MLLI buffer: dma=%pad virt=%p\n",
&areq_ctx->mlli_params.mlli_dma_addr,
areq_ctx->mlli_params.mlli_virt_addr);
dma_pool_free(areq_ctx->mlli_params.curr_pool,
@ -1347,14 +1347,14 @@ void cc_unmap_hash_request(struct device *dev, void *ctx,
}
if (src && areq_ctx->in_nents) {
dev_dbg(dev, "Unmapped sg src: virt=%pK dma=%pad len=0x%X\n",
dev_dbg(dev, "Unmapped sg src: virt=%p dma=%pad len=0x%X\n",
sg_virt(src), &sg_dma_address(src), sg_dma_len(src));
dma_unmap_sg(dev, src,
areq_ctx->in_nents, DMA_TO_DEVICE);
}
if (*prev_len) {
dev_dbg(dev, "Unmapped buffer: areq_ctx->buff_sg=%pK dma=%pad len 0x%X\n",
dev_dbg(dev, "Unmapped buffer: areq_ctx->buff_sg=%p dma=%pad len 0x%X\n",
sg_virt(areq_ctx->buff_sg),
&sg_dma_address(areq_ctx->buff_sg),
sg_dma_len(areq_ctx->buff_sg));


@ -211,11 +211,11 @@ static int cc_cipher_init(struct crypto_tfm *tfm)
max_key_buf_size,
DMA_TO_DEVICE);
if (dma_mapping_error(dev, ctx_p->user.key_dma_addr)) {
dev_err(dev, "Mapping Key %u B at va=%pK for DMA failed\n",
dev_err(dev, "Mapping Key %u B at va=%p for DMA failed\n",
max_key_buf_size, ctx_p->user.key);
goto free_key;
}
dev_dbg(dev, "Mapped key %u B at va=%pK to dma=%pad\n",
dev_dbg(dev, "Mapped key %u B at va=%p to dma=%pad\n",
max_key_buf_size, ctx_p->user.key, &ctx_p->user.key_dma_addr);
return 0;


@ -125,7 +125,7 @@ static int cc_map_result(struct device *dev, struct ahash_req_ctx *state,
digestsize);
return -ENOMEM;
}
dev_dbg(dev, "Mapped digest result buffer %u B at va=%pK to dma=%pad\n",
dev_dbg(dev, "Mapped digest result buffer %u B at va=%p to dma=%pad\n",
digestsize, state->digest_result_buff,
&state->digest_result_dma_addr);
@ -184,11 +184,11 @@ static int cc_map_req(struct device *dev, struct ahash_req_ctx *state,
dma_map_single(dev, state->digest_buff,
ctx->inter_digestsize, DMA_BIDIRECTIONAL);
if (dma_mapping_error(dev, state->digest_buff_dma_addr)) {
dev_err(dev, "Mapping digest len %d B at va=%pK for DMA failed\n",
dev_err(dev, "Mapping digest len %d B at va=%p for DMA failed\n",
ctx->inter_digestsize, state->digest_buff);
return -EINVAL;
}
dev_dbg(dev, "Mapped digest %d B at va=%pK to dma=%pad\n",
dev_dbg(dev, "Mapped digest %d B at va=%p to dma=%pad\n",
ctx->inter_digestsize, state->digest_buff,
&state->digest_buff_dma_addr);
@ -197,11 +197,11 @@ static int cc_map_req(struct device *dev, struct ahash_req_ctx *state,
dma_map_single(dev, state->digest_bytes_len,
HASH_MAX_LEN_SIZE, DMA_BIDIRECTIONAL);
if (dma_mapping_error(dev, state->digest_bytes_len_dma_addr)) {
dev_err(dev, "Mapping digest len %u B at va=%pK for DMA failed\n",
dev_err(dev, "Mapping digest len %u B at va=%p for DMA failed\n",
HASH_MAX_LEN_SIZE, state->digest_bytes_len);
goto unmap_digest_buf;
}
dev_dbg(dev, "Mapped digest len %u B at va=%pK to dma=%pad\n",
dev_dbg(dev, "Mapped digest len %u B at va=%p to dma=%pad\n",
HASH_MAX_LEN_SIZE, state->digest_bytes_len,
&state->digest_bytes_len_dma_addr);
}
@ -212,12 +212,12 @@ static int cc_map_req(struct device *dev, struct ahash_req_ctx *state,
ctx->inter_digestsize,
DMA_BIDIRECTIONAL);
if (dma_mapping_error(dev, state->opad_digest_dma_addr)) {
dev_err(dev, "Mapping opad digest %d B at va=%pK for DMA failed\n",
dev_err(dev, "Mapping opad digest %d B at va=%p for DMA failed\n",
ctx->inter_digestsize,
state->opad_digest_buff);
goto unmap_digest_len;
}
dev_dbg(dev, "Mapped opad digest %d B at va=%pK to dma=%pad\n",
dev_dbg(dev, "Mapped opad digest %d B at va=%p to dma=%pad\n",
ctx->inter_digestsize, state->opad_digest_buff,
&state->opad_digest_dma_addr);
}
@ -272,7 +272,7 @@ static void cc_unmap_result(struct device *dev, struct ahash_req_ctx *state,
if (state->digest_result_dma_addr) {
dma_unmap_single(dev, state->digest_result_dma_addr, digestsize,
DMA_BIDIRECTIONAL);
dev_dbg(dev, "unmpa digest result buffer va (%pK) pa (%pad) len %u\n",
dev_dbg(dev, "unmpa digest result buffer va (%p) pa (%pad) len %u\n",
state->digest_result_buff,
&state->digest_result_dma_addr, digestsize);
memcpy(result, state->digest_result_buff, digestsize);
@ -287,7 +287,7 @@ static void cc_update_complete(struct device *dev, void *cc_req, int err)
struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
struct cc_hash_ctx *ctx = crypto_ahash_ctx_dma(tfm);
dev_dbg(dev, "req=%pK\n", req);
dev_dbg(dev, "req=%p\n", req);
if (err != -EINPROGRESS) {
/* Not a BACKLOG notification */
@ -306,7 +306,7 @@ static void cc_digest_complete(struct device *dev, void *cc_req, int err)
struct cc_hash_ctx *ctx = crypto_ahash_ctx_dma(tfm);
u32 digestsize = crypto_ahash_digestsize(tfm);
dev_dbg(dev, "req=%pK\n", req);
dev_dbg(dev, "req=%p\n", req);
if (err != -EINPROGRESS) {
/* Not a BACKLOG notification */
@ -326,7 +326,7 @@ static void cc_hash_complete(struct device *dev, void *cc_req, int err)
struct cc_hash_ctx *ctx = crypto_ahash_ctx_dma(tfm);
u32 digestsize = crypto_ahash_digestsize(tfm);
dev_dbg(dev, "req=%pK\n", req);
dev_dbg(dev, "req=%p\n", req);
if (err != -EINPROGRESS) {
/* Not a BACKLOG notification */
@ -1077,11 +1077,11 @@ static int cc_alloc_ctx(struct cc_hash_ctx *ctx)
dma_map_single(dev, ctx->digest_buff, sizeof(ctx->digest_buff),
DMA_BIDIRECTIONAL);
if (dma_mapping_error(dev, ctx->digest_buff_dma_addr)) {
dev_err(dev, "Mapping digest len %zu B at va=%pK for DMA failed\n",
dev_err(dev, "Mapping digest len %zu B at va=%p for DMA failed\n",
sizeof(ctx->digest_buff), ctx->digest_buff);
goto fail;
}
dev_dbg(dev, "Mapped digest %zu B at va=%pK to dma=%pad\n",
dev_dbg(dev, "Mapped digest %zu B at va=%p to dma=%pad\n",
sizeof(ctx->digest_buff), ctx->digest_buff,
&ctx->digest_buff_dma_addr);
@ -1090,12 +1090,12 @@ static int cc_alloc_ctx(struct cc_hash_ctx *ctx)
sizeof(ctx->opad_tmp_keys_buff),
DMA_BIDIRECTIONAL);
if (dma_mapping_error(dev, ctx->opad_tmp_keys_dma_addr)) {
dev_err(dev, "Mapping opad digest %zu B at va=%pK for DMA failed\n",
dev_err(dev, "Mapping opad digest %zu B at va=%p for DMA failed\n",
sizeof(ctx->opad_tmp_keys_buff),
ctx->opad_tmp_keys_buff);
goto fail;
}
dev_dbg(dev, "Mapped opad_tmp_keys %zu B at va=%pK to dma=%pad\n",
dev_dbg(dev, "Mapped opad_tmp_keys %zu B at va=%p to dma=%pad\n",
sizeof(ctx->opad_tmp_keys_buff), ctx->opad_tmp_keys_buff,
&ctx->opad_tmp_keys_dma_addr);


@ -77,6 +77,5 @@ int cc_pm_get(struct device *dev)
void cc_pm_put_suspend(struct device *dev)
{
pm_runtime_mark_last_busy(dev);
pm_runtime_put_autosuspend(dev);
}


@ -1491,11 +1491,13 @@ static void hpre_ecdh_cb(struct hpre_ctx *ctx, void *resp)
if (overtime_thrhld && hpre_is_bd_timeout(req, overtime_thrhld))
atomic64_inc(&dfx[HPRE_OVER_THRHLD_CNT].value);
/* Do unmap before data processing */
hpre_ecdh_hw_data_clr_all(ctx, req, areq->dst, areq->src);
p = sg_virt(areq->dst);
memmove(p, p + ctx->key_sz - curve_sz, curve_sz);
memmove(p + curve_sz, p + areq->dst_len - curve_sz, curve_sz);
hpre_ecdh_hw_data_clr_all(ctx, req, areq->dst, areq->src);
kpp_request_complete(areq, ret);
atomic64_inc(&dfx[HPRE_RECV_CNT].value);
@ -1808,9 +1810,11 @@ static void hpre_curve25519_cb(struct hpre_ctx *ctx, void *resp)
if (overtime_thrhld && hpre_is_bd_timeout(req, overtime_thrhld))
atomic64_inc(&dfx[HPRE_OVER_THRHLD_CNT].value);
/* Do unmap before data processing */
hpre_curve25519_hw_data_clr_all(ctx, req, areq->dst, areq->src);
hpre_key_to_big_end(sg_virt(areq->dst), CURVE25519_KEY_SIZE);
hpre_curve25519_hw_data_clr_all(ctx, req, areq->dst, areq->src);
kpp_request_complete(areq, ret);
atomic64_inc(&dfx[HPRE_RECV_CNT].value);


@ -912,7 +912,6 @@ static void qm_pm_put_sync(struct hisi_qm *qm)
if (!test_bit(QM_SUPPORT_RPM, &qm->caps))
return;
pm_runtime_mark_last_busy(dev);
pm_runtime_put_autosuspend(dev);
}


@ -7,6 +7,12 @@
#include <linux/hisi_acc_qm.h>
#include "sec_crypto.h"
#define SEC_PBUF_SZ 512
#define SEC_MAX_MAC_LEN 64
#define SEC_IV_SIZE 24
#define SEC_SGE_NR_NUM 4
#define SEC_SGL_ALIGN_SIZE 64
/* Algorithm resource per hardware SEC queue */
struct sec_alg_res {
u8 *pbuf;
@ -20,6 +26,40 @@ struct sec_alg_res {
u16 depth;
};
struct sec_hw_sge {
dma_addr_t buf;
void *page_ctrl;
__le32 len;
__le32 pad;
__le32 pad0;
__le32 pad1;
};
struct sec_hw_sgl {
dma_addr_t next_dma;
__le16 entry_sum_in_chain;
__le16 entry_sum_in_sgl;
__le16 entry_length_in_sgl;
__le16 pad0;
__le64 pad1[5];
struct sec_hw_sgl *next;
struct sec_hw_sge sge_entries[SEC_SGE_NR_NUM];
} __aligned(SEC_SGL_ALIGN_SIZE);
struct sec_src_dst_buf {
struct sec_hw_sgl in;
struct sec_hw_sgl out;
};
struct sec_request_buf {
union {
struct sec_src_dst_buf data_buf;
__u8 pbuf[SEC_PBUF_SZ];
};
dma_addr_t in_dma;
dma_addr_t out_dma;
};
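The structures above give each request either an inline bounce buffer (pbuf) or a pair of fixed-size hardware scatter lists sharing the same storage, with at most SEC_SGE_NR_NUM entries per list. A simplified, runnable sketch of packing a segment list into such a fixed-entry table and rejecting anything that will not fit; the layout here is illustrative and omits the real descriptor's chaining, padding and endianness handling:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define NUM_SGES 4      /* mirrors a small fixed SGE table */

struct hw_sge {         /* illustrative descriptor, not the SEC layout */
    uint64_t addr;
    uint32_t len;
};

struct hw_sgl {
    uint16_t entry_count;
    struct hw_sge sge[NUM_SGES];
};

struct segment {
    void *buf;
    uint32_t len;
};

/* Pack a segment list into the fixed table; refuse anything that would
 * not fit, just as the driver rejects more than SEC_SGE_NR_NUM mapped
 * entries. */
static int fill_hw_sgl(struct hw_sgl *sgl, const struct segment *segs, int n)
{
    if (n > NUM_SGES)
        return -1;

    memset(sgl, 0, sizeof(*sgl));
    for (int i = 0; i < n; i++) {
        sgl->sge[i].addr = (uintptr_t)segs[i].buf;
        sgl->sge[i].len = segs[i].len;
    }
    sgl->entry_count = (uint16_t)n;
    return 0;
}

int main(void)
{
    static char a[64], b[128];
    struct segment segs[] = { { a, sizeof(a) }, { b, sizeof(b) } };
    struct hw_sgl sgl;

    if (fill_hw_sgl(&sgl, segs, 2))
        return 1;
    printf("packed %u entries, first segment %u bytes\n",
           (unsigned)sgl.entry_count, (unsigned)sgl.sge[0].len);
    return 0;
}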
/* Cipher request of SEC private */
struct sec_cipher_req {
struct hisi_acc_hw_sgl *c_out;
@ -29,6 +69,7 @@ struct sec_cipher_req {
struct skcipher_request *sk_req;
u32 c_len;
bool encrypt;
__u8 c_ivin_buf[SEC_IV_SIZE];
};
struct sec_aead_req {
@ -37,6 +78,13 @@ struct sec_aead_req {
u8 *a_ivin;
dma_addr_t a_ivin_dma;
struct aead_request *aead_req;
__u8 a_ivin_buf[SEC_IV_SIZE];
__u8 out_mac_buf[SEC_MAX_MAC_LEN];
};
struct sec_instance_backlog {
struct list_head list;
spinlock_t lock;
};
/* SEC request of Crypto */
@ -55,15 +103,17 @@ struct sec_req {
dma_addr_t in_dma;
struct sec_cipher_req c_req;
struct sec_aead_req aead_req;
struct list_head backlog_head;
struct crypto_async_request *base;
int err_type;
int req_id;
u32 flag;
/* Status of the SEC request */
bool fake_busy;
bool use_pbuf;
struct list_head list;
struct sec_instance_backlog *backlog;
struct sec_request_buf buf;
};
/**
@ -119,9 +169,11 @@ struct sec_qp_ctx {
struct sec_alg_res *res;
struct sec_ctx *ctx;
spinlock_t req_lock;
struct list_head backlog;
spinlock_t id_lock;
struct hisi_acc_sgl_pool *c_in_pool;
struct hisi_acc_sgl_pool *c_out_pool;
struct sec_instance_backlog backlog;
u16 send_head;
};
enum sec_alg_type {
@ -139,9 +191,6 @@ struct sec_ctx {
/* Half queues for encipher, and half for decipher */
u32 hlf_q_num;
/* Threshold for fake busy, trigger to return -EBUSY to user */
u32 fake_req_limit;
/* Current cyclic index to select a queue for encipher */
atomic_t enc_qcyclic;


@ -67,7 +67,6 @@
#define SEC_MAX_CCM_AAD_LEN 65279
#define SEC_TOTAL_MAC_SZ(depth) (SEC_MAX_MAC_LEN * (depth))
#define SEC_PBUF_SZ 512
#define SEC_PBUF_IV_OFFSET SEC_PBUF_SZ
#define SEC_PBUF_MAC_OFFSET (SEC_PBUF_SZ + SEC_IV_SIZE)
#define SEC_PBUF_PKG (SEC_PBUF_SZ + SEC_IV_SIZE + \
@ -102,6 +101,8 @@
#define IV_LAST_BYTE_MASK 0xFF
#define IV_CTR_INIT 0x1
#define IV_BYTE_OFFSET 0x8
#define SEC_GCM_MIN_AUTH_SZ 0x8
#define SEC_RETRY_MAX_CNT 5U
static DEFINE_MUTEX(sec_algs_lock);
static unsigned int sec_available_devs;
@ -116,40 +117,19 @@ struct sec_aead {
struct aead_alg alg;
};
/* Get an en/de-cipher queue cyclically to balance load over queues of TFM */
static inline u32 sec_alloc_queue_id(struct sec_ctx *ctx, struct sec_req *req)
{
if (req->c_req.encrypt)
return (u32)atomic_inc_return(&ctx->enc_qcyclic) %
ctx->hlf_q_num;
return (u32)atomic_inc_return(&ctx->dec_qcyclic) % ctx->hlf_q_num +
ctx->hlf_q_num;
}
static inline void sec_free_queue_id(struct sec_ctx *ctx, struct sec_req *req)
{
if (req->c_req.encrypt)
atomic_dec(&ctx->enc_qcyclic);
else
atomic_dec(&ctx->dec_qcyclic);
}
static int sec_aead_soft_crypto(struct sec_ctx *ctx,
struct aead_request *aead_req,
bool encrypt);
static int sec_skcipher_soft_crypto(struct sec_ctx *ctx,
struct skcipher_request *sreq, bool encrypt);
static int sec_alloc_req_id(struct sec_req *req, struct sec_qp_ctx *qp_ctx)
{
int req_id;
spin_lock_bh(&qp_ctx->req_lock);
spin_lock_bh(&qp_ctx->id_lock);
req_id = idr_alloc_cyclic(&qp_ctx->req_idr, NULL, 0, qp_ctx->qp->sq_depth, GFP_ATOMIC);
spin_unlock_bh(&qp_ctx->req_lock);
if (unlikely(req_id < 0)) {
dev_err(req->ctx->dev, "alloc req id fail!\n");
return req_id;
}
req->qp_ctx = qp_ctx;
qp_ctx->req_list[req_id] = req;
spin_unlock_bh(&qp_ctx->id_lock);
return req_id;
}
@ -163,12 +143,9 @@ static void sec_free_req_id(struct sec_req *req)
return;
}
qp_ctx->req_list[req_id] = NULL;
req->qp_ctx = NULL;
spin_lock_bh(&qp_ctx->req_lock);
spin_lock_bh(&qp_ctx->id_lock);
idr_remove(&qp_ctx->req_idr, req_id);
spin_unlock_bh(&qp_ctx->req_lock);
spin_unlock_bh(&qp_ctx->id_lock);
}
static u8 pre_parse_finished_bd(struct bd_status *status, void *resp)
@ -229,6 +206,90 @@ static int sec_cb_status_check(struct sec_req *req,
return 0;
}
static int qp_send_message(struct sec_req *req)
{
struct sec_qp_ctx *qp_ctx = req->qp_ctx;
int ret;
if (atomic_read(&qp_ctx->qp->qp_status.used) == qp_ctx->qp->sq_depth - 1)
return -EBUSY;
spin_lock_bh(&qp_ctx->req_lock);
if (atomic_read(&qp_ctx->qp->qp_status.used) == qp_ctx->qp->sq_depth - 1) {
spin_unlock_bh(&qp_ctx->req_lock);
return -EBUSY;
}
if (qp_ctx->ctx->type_supported == SEC_BD_TYPE2) {
req->sec_sqe.type2.tag = cpu_to_le16((u16)qp_ctx->send_head);
qp_ctx->req_list[qp_ctx->send_head] = req;
}
ret = hisi_qp_send(qp_ctx->qp, &req->sec_sqe);
if (ret) {
spin_unlock_bh(&qp_ctx->req_lock);
return ret;
}
if (qp_ctx->ctx->type_supported == SEC_BD_TYPE2)
qp_ctx->send_head = (qp_ctx->send_head + 1) % qp_ctx->qp->sq_depth;
spin_unlock_bh(&qp_ctx->req_lock);
atomic64_inc(&req->ctx->sec->debug.dfx.send_cnt);
return -EINPROGRESS;
}
static void sec_alg_send_backlog_soft(struct sec_ctx *ctx, struct sec_qp_ctx *qp_ctx)
{
struct sec_req *req, *tmp;
int ret;
list_for_each_entry_safe(req, tmp, &qp_ctx->backlog.list, list) {
list_del(&req->list);
ctx->req_op->buf_unmap(ctx, req);
if (req->req_id >= 0)
sec_free_req_id(req);
if (ctx->alg_type == SEC_AEAD)
ret = sec_aead_soft_crypto(ctx, req->aead_req.aead_req,
req->c_req.encrypt);
else
ret = sec_skcipher_soft_crypto(ctx, req->c_req.sk_req,
req->c_req.encrypt);
/* Wake up the busy thread first, then return the errno. */
crypto_request_complete(req->base, -EINPROGRESS);
crypto_request_complete(req->base, ret);
}
}
static void sec_alg_send_backlog(struct sec_ctx *ctx, struct sec_qp_ctx *qp_ctx)
{
struct sec_req *req, *tmp;
int ret;
spin_lock_bh(&qp_ctx->backlog.lock);
list_for_each_entry_safe(req, tmp, &qp_ctx->backlog.list, list) {
ret = qp_send_message(req);
switch (ret) {
case -EINPROGRESS:
list_del(&req->list);
crypto_request_complete(req->base, -EINPROGRESS);
break;
case -EBUSY:
/* Device is busy and stop send any request. */
goto unlock;
default:
/* Release memory resources and send all requests through software. */
sec_alg_send_backlog_soft(ctx, qp_ctx);
goto unlock;
}
}
unlock:
spin_unlock_bh(&qp_ctx->backlog.lock);
}
static void sec_req_cb(struct hisi_qp *qp, void *resp)
{
struct sec_qp_ctx *qp_ctx = qp->qp_ctx;
@ -273,40 +334,54 @@ static void sec_req_cb(struct hisi_qp *qp, void *resp)
ctx->req_op->callback(ctx, req, err);
}
static int sec_bd_send(struct sec_ctx *ctx, struct sec_req *req)
static int sec_alg_send_message_retry(struct sec_req *req)
{
struct sec_qp_ctx *qp_ctx = req->qp_ctx;
int ctr = 0;
int ret;
if (ctx->fake_req_limit <=
atomic_read(&qp_ctx->qp->qp_status.used) &&
!(req->flag & CRYPTO_TFM_REQ_MAY_BACKLOG))
return -EBUSY;
spin_lock_bh(&qp_ctx->req_lock);
ret = hisi_qp_send(qp_ctx->qp, &req->sec_sqe);
if (ctx->fake_req_limit <=
atomic_read(&qp_ctx->qp->qp_status.used) && !ret) {
list_add_tail(&req->backlog_head, &qp_ctx->backlog);
atomic64_inc(&ctx->sec->debug.dfx.send_cnt);
atomic64_inc(&ctx->sec->debug.dfx.send_busy_cnt);
spin_unlock_bh(&qp_ctx->req_lock);
return -EBUSY;
}
spin_unlock_bh(&qp_ctx->req_lock);
if (unlikely(ret == -EBUSY))
return -ENOBUFS;
if (likely(!ret)) {
ret = -EINPROGRESS;
atomic64_inc(&ctx->sec->debug.dfx.send_cnt);
}
do {
ret = qp_send_message(req);
} while (ret == -EBUSY && ctr++ < SEC_RETRY_MAX_CNT);
return ret;
}
/* Get DMA memory resources */
static int sec_alg_try_enqueue(struct sec_req *req)
{
/* Check if any request is already backlogged */
if (!list_empty(&req->backlog->list))
return -EBUSY;
/* Try to enqueue to HW ring */
return qp_send_message(req);
}
static int sec_alg_send_message_maybacklog(struct sec_req *req)
{
int ret;
ret = sec_alg_try_enqueue(req);
if (ret != -EBUSY)
return ret;
spin_lock_bh(&req->backlog->lock);
ret = sec_alg_try_enqueue(req);
if (ret == -EBUSY)
list_add_tail(&req->list, &req->backlog->list);
spin_unlock_bh(&req->backlog->lock);
return ret;
}
static int sec_bd_send(struct sec_ctx *ctx, struct sec_req *req)
{
if (req->flag & CRYPTO_TFM_REQ_MAY_BACKLOG)
return sec_alg_send_message_maybacklog(req);
return sec_alg_send_message_retry(req);
}
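The new send path above tries a cheap occupancy check first, re-checks under a lock before enqueueing, and for CRYPTO_TFM_REQ_MAY_BACKLOG requests parks anything that still does not fit on a per-instance backlog list (the real sec_alg_try_enqueue() additionally refuses the fast path while the backlog is non-empty, presumably to preserve ordering). A compact userspace sketch of that check / lock / re-check / park shape; the ring, the backlog array and every name in it are invented:

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define RING_SLOTS 4
#define BACKLOG_SLOTS 16

static pthread_mutex_t ring_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t backlog_lock = PTHREAD_MUTEX_INITIALIZER;
static _Atomic int ring_used;           /* peeked without the lock */
static int ring[RING_SLOTS];            /* guarded by ring_lock */
static int backlog[BACKLOG_SLOTS];      /* guarded by backlog_lock */
static int backlog_len;

/* Cheap unlocked peek first, then confirm and enqueue under ring_lock. */
static int try_enqueue(int req)
{
    if (atomic_load(&ring_used) >= RING_SLOTS)
        return -1;                      /* looks busy */

    pthread_mutex_lock(&ring_lock);
    if (atomic_load(&ring_used) >= RING_SLOTS) {
        pthread_mutex_unlock(&ring_lock);
        return -1;                      /* really busy */
    }
    ring[atomic_fetch_add(&ring_used, 1)] = req;
    pthread_mutex_unlock(&ring_lock);
    return 0;
}

/* If the ring is busy, re-check under backlog_lock and park the request
 * there so a completion handler can resubmit it later. */
static int enqueue_or_backlog(int req)
{
    int parked = 0;

    if (!try_enqueue(req))
        return 0;

    pthread_mutex_lock(&backlog_lock);
    if (try_enqueue(req)) {
        if (backlog_len < BACKLOG_SLOTS) {
            backlog[backlog_len++] = req;
            parked = 1;
        } else {
            parked = -1;                /* nowhere left to put it */
        }
    }
    pthread_mutex_unlock(&backlog_lock);
    return parked;
}

int main(void)
{
    for (int req = 0; req < 6; req++)
        printf("req %d -> %s\n", req,
               enqueue_or_backlog(req) ? "backlogged" : "sent");
    return 0;
}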
static int sec_alloc_civ_resource(struct device *dev, struct sec_alg_res *res)
{
u16 q_depth = res->depth;
@ -558,7 +633,10 @@ static int sec_create_qp_ctx(struct sec_ctx *ctx, int qp_ctx_id)
spin_lock_init(&qp_ctx->req_lock);
idr_init(&qp_ctx->req_idr);
INIT_LIST_HEAD(&qp_ctx->backlog);
spin_lock_init(&qp_ctx->backlog.lock);
spin_lock_init(&qp_ctx->id_lock);
INIT_LIST_HEAD(&qp_ctx->backlog.list);
qp_ctx->send_head = 0;
ret = sec_alloc_qp_ctx_resource(ctx, qp_ctx);
if (ret)
@ -602,9 +680,6 @@ static int sec_ctx_base_init(struct sec_ctx *ctx)
ctx->hlf_q_num = sec->ctx_q_num >> 1;
ctx->pbuf_supported = ctx->sec->iommu_used;
/* Half of queue depth is taken as fake requests limit in the queue. */
ctx->fake_req_limit = ctx->qps[0]->sq_depth >> 1;
ctx->qp_ctx = kcalloc(sec->ctx_q_num, sizeof(struct sec_qp_ctx),
GFP_KERNEL);
if (!ctx->qp_ctx) {
@ -706,7 +781,7 @@ static int sec_skcipher_init(struct crypto_skcipher *tfm)
int ret;
ctx->alg_type = SEC_SKCIPHER;
crypto_skcipher_set_reqsize(tfm, sizeof(struct sec_req));
crypto_skcipher_set_reqsize_dma(tfm, sizeof(struct sec_req));
ctx->c_ctx.ivsize = crypto_skcipher_ivsize(tfm);
if (ctx->c_ctx.ivsize > SEC_IV_SIZE) {
pr_err("get error skcipher iv size!\n");
@ -883,24 +958,25 @@ GEN_SEC_SETKEY_FUNC(sm4_ctr, SEC_CALG_SM4, SEC_CMODE_CTR)
static int sec_cipher_pbuf_map(struct sec_ctx *ctx, struct sec_req *req,
struct scatterlist *src)
{
struct sec_aead_req *a_req = &req->aead_req;
struct aead_request *aead_req = a_req->aead_req;
struct aead_request *aead_req = req->aead_req.aead_req;
struct sec_cipher_req *c_req = &req->c_req;
struct sec_qp_ctx *qp_ctx = req->qp_ctx;
struct sec_request_buf *buf = &req->buf;
struct device *dev = ctx->dev;
int copy_size, pbuf_length;
int req_id = req->req_id;
struct crypto_aead *tfm;
u8 *mac_offset, *pbuf;
size_t authsize;
u8 *mac_offset;
if (ctx->alg_type == SEC_AEAD)
copy_size = aead_req->cryptlen + aead_req->assoclen;
else
copy_size = c_req->c_len;
pbuf_length = sg_copy_to_buffer(src, sg_nents(src),
qp_ctx->res[req_id].pbuf, copy_size);
pbuf = req->req_id < 0 ? buf->pbuf : qp_ctx->res[req_id].pbuf;
pbuf_length = sg_copy_to_buffer(src, sg_nents(src), pbuf, copy_size);
if (unlikely(pbuf_length != copy_size)) {
dev_err(dev, "copy src data to pbuf error!\n");
return -EINVAL;
@ -908,8 +984,17 @@ static int sec_cipher_pbuf_map(struct sec_ctx *ctx, struct sec_req *req,
if (!c_req->encrypt && ctx->alg_type == SEC_AEAD) {
tfm = crypto_aead_reqtfm(aead_req);
authsize = crypto_aead_authsize(tfm);
mac_offset = qp_ctx->res[req_id].pbuf + copy_size - authsize;
memcpy(a_req->out_mac, mac_offset, authsize);
mac_offset = pbuf + copy_size - authsize;
memcpy(req->aead_req.out_mac, mac_offset, authsize);
}
if (req->req_id < 0) {
buf->in_dma = dma_map_single(dev, buf->pbuf, SEC_PBUF_SZ, DMA_BIDIRECTIONAL);
if (unlikely(dma_mapping_error(dev, buf->in_dma)))
return -ENOMEM;
buf->out_dma = buf->in_dma;
return 0;
}
req->in_dma = qp_ctx->res[req_id].pbuf_dma;
@ -924,6 +1009,7 @@ static void sec_cipher_pbuf_unmap(struct sec_ctx *ctx, struct sec_req *req,
struct aead_request *aead_req = req->aead_req.aead_req;
struct sec_cipher_req *c_req = &req->c_req;
struct sec_qp_ctx *qp_ctx = req->qp_ctx;
struct sec_request_buf *buf = &req->buf;
int copy_size, pbuf_length;
int req_id = req->req_id;
@ -932,10 +1018,16 @@ static void sec_cipher_pbuf_unmap(struct sec_ctx *ctx, struct sec_req *req,
else
copy_size = c_req->c_len;
pbuf_length = sg_copy_from_buffer(dst, sg_nents(dst),
qp_ctx->res[req_id].pbuf, copy_size);
if (req->req_id < 0)
pbuf_length = sg_copy_from_buffer(dst, sg_nents(dst), buf->pbuf, copy_size);
else
pbuf_length = sg_copy_from_buffer(dst, sg_nents(dst), qp_ctx->res[req_id].pbuf,
copy_size);
if (unlikely(pbuf_length != copy_size))
dev_err(ctx->dev, "copy pbuf data to dst error!\n");
if (req->req_id < 0)
dma_unmap_single(ctx->dev, buf->in_dma, SEC_PBUF_SZ, DMA_BIDIRECTIONAL);
}
static int sec_aead_mac_init(struct sec_aead_req *req)
@ -957,14 +1049,95 @@ static int sec_aead_mac_init(struct sec_aead_req *req)
return 0;
}
static int sec_cipher_map(struct sec_ctx *ctx, struct sec_req *req,
struct scatterlist *src, struct scatterlist *dst)
static void fill_sg_to_hw_sge(struct scatterlist *sgl, struct sec_hw_sge *hw_sge)
{
hw_sge->buf = sg_dma_address(sgl);
hw_sge->len = cpu_to_le32(sg_dma_len(sgl));
hw_sge->page_ctrl = sg_virt(sgl);
}
static int sec_cipher_to_hw_sgl(struct device *dev, struct scatterlist *src,
struct sec_hw_sgl *src_in, dma_addr_t *hw_sgl_dma,
int dma_dir)
{
struct sec_hw_sge *curr_hw_sge = src_in->sge_entries;
u32 i, sg_n, sg_n_mapped;
struct scatterlist *sg;
u32 sge_var = 0;
sg_n = sg_nents(src);
sg_n_mapped = dma_map_sg(dev, src, sg_n, dma_dir);
if (unlikely(!sg_n_mapped)) {
dev_err(dev, "dma mapping for SG error!\n");
return -EINVAL;
} else if (unlikely(sg_n_mapped > SEC_SGE_NR_NUM)) {
dev_err(dev, "the number of entries in input scatterlist error!\n");
dma_unmap_sg(dev, src, sg_n, dma_dir);
return -EINVAL;
}
for_each_sg(src, sg, sg_n_mapped, i) {
fill_sg_to_hw_sge(sg, curr_hw_sge);
curr_hw_sge++;
sge_var++;
}
src_in->entry_sum_in_sgl = cpu_to_le16(sge_var);
src_in->entry_sum_in_chain = cpu_to_le16(SEC_SGE_NR_NUM);
src_in->entry_length_in_sgl = cpu_to_le16(SEC_SGE_NR_NUM);
*hw_sgl_dma = dma_map_single(dev, src_in, sizeof(struct sec_hw_sgl), dma_dir);
if (unlikely(dma_mapping_error(dev, *hw_sgl_dma))) {
dma_unmap_sg(dev, src, sg_n, dma_dir);
return -ENOMEM;
}
return 0;
}
static void sec_cipher_put_hw_sgl(struct device *dev, struct scatterlist *src,
dma_addr_t src_in, int dma_dir)
{
dma_unmap_single(dev, src_in, sizeof(struct sec_hw_sgl), dma_dir);
dma_unmap_sg(dev, src, sg_nents(src), dma_dir);
}
static int sec_cipher_map_sgl(struct device *dev, struct sec_req *req,
struct scatterlist *src, struct scatterlist *dst)
{
struct sec_hw_sgl *src_in = &req->buf.data_buf.in;
struct sec_hw_sgl *dst_out = &req->buf.data_buf.out;
int ret;
if (dst == src) {
ret = sec_cipher_to_hw_sgl(dev, src, src_in, &req->buf.in_dma,
DMA_BIDIRECTIONAL);
req->buf.out_dma = req->buf.in_dma;
return ret;
}
ret = sec_cipher_to_hw_sgl(dev, src, src_in, &req->buf.in_dma, DMA_TO_DEVICE);
if (unlikely(ret))
return ret;
ret = sec_cipher_to_hw_sgl(dev, dst, dst_out, &req->buf.out_dma,
DMA_FROM_DEVICE);
if (unlikely(ret)) {
sec_cipher_put_hw_sgl(dev, src, req->buf.in_dma, DMA_TO_DEVICE);
return ret;
}
return 0;
}
static int sec_cipher_map_inner(struct sec_ctx *ctx, struct sec_req *req,
struct scatterlist *src, struct scatterlist *dst)
{
struct sec_cipher_req *c_req = &req->c_req;
struct sec_aead_req *a_req = &req->aead_req;
struct sec_qp_ctx *qp_ctx = req->qp_ctx;
struct sec_alg_res *res = &qp_ctx->res[req->req_id];
struct device *dev = ctx->dev;
enum dma_data_direction src_direction;
int ret;
if (req->use_pbuf) {
@ -977,10 +1150,9 @@ static int sec_cipher_map(struct sec_ctx *ctx, struct sec_req *req,
a_req->out_mac_dma = res->pbuf_dma +
SEC_PBUF_MAC_OFFSET;
}
ret = sec_cipher_pbuf_map(ctx, req, src);
return ret;
return sec_cipher_pbuf_map(ctx, req, src);
}
c_req->c_ivin = res->c_ivin;
c_req->c_ivin_dma = res->c_ivin_dma;
if (ctx->alg_type == SEC_AEAD) {
@ -990,10 +1162,11 @@ static int sec_cipher_map(struct sec_ctx *ctx, struct sec_req *req,
a_req->out_mac_dma = res->out_mac_dma;
}
src_direction = dst == src ? DMA_BIDIRECTIONAL : DMA_TO_DEVICE;
req->in = hisi_acc_sg_buf_map_to_hw_sgl(dev, src,
qp_ctx->c_in_pool,
req->req_id,
&req->in_dma);
&req->in_dma, src_direction);
if (IS_ERR(req->in)) {
dev_err(dev, "fail to dma map input sgl buffers!\n");
return PTR_ERR(req->in);
@ -1003,7 +1176,7 @@ static int sec_cipher_map(struct sec_ctx *ctx, struct sec_req *req,
ret = sec_aead_mac_init(a_req);
if (unlikely(ret)) {
dev_err(dev, "fail to init mac data for ICV!\n");
hisi_acc_sg_buf_unmap(dev, src, req->in);
hisi_acc_sg_buf_unmap(dev, src, req->in, src_direction);
return ret;
}
}
@ -1015,11 +1188,12 @@ static int sec_cipher_map(struct sec_ctx *ctx, struct sec_req *req,
c_req->c_out = hisi_acc_sg_buf_map_to_hw_sgl(dev, dst,
qp_ctx->c_out_pool,
req->req_id,
&c_req->c_out_dma);
&c_req->c_out_dma,
DMA_FROM_DEVICE);
if (IS_ERR(c_req->c_out)) {
dev_err(dev, "fail to dma map output sgl buffers!\n");
hisi_acc_sg_buf_unmap(dev, src, req->in);
hisi_acc_sg_buf_unmap(dev, src, req->in, src_direction);
return PTR_ERR(c_req->c_out);
}
}
@ -1027,19 +1201,108 @@ static int sec_cipher_map(struct sec_ctx *ctx, struct sec_req *req,
return 0;
}
static int sec_cipher_map(struct sec_ctx *ctx, struct sec_req *req,
struct scatterlist *src, struct scatterlist *dst)
{
struct sec_aead_req *a_req = &req->aead_req;
struct sec_cipher_req *c_req = &req->c_req;
bool is_aead = (ctx->alg_type == SEC_AEAD);
struct device *dev = ctx->dev;
int ret = -ENOMEM;
if (req->req_id >= 0)
return sec_cipher_map_inner(ctx, req, src, dst);
c_req->c_ivin = c_req->c_ivin_buf;
c_req->c_ivin_dma = dma_map_single(dev, c_req->c_ivin,
SEC_IV_SIZE, DMA_TO_DEVICE);
if (unlikely(dma_mapping_error(dev, c_req->c_ivin_dma)))
return -ENOMEM;
if (is_aead) {
a_req->a_ivin = a_req->a_ivin_buf;
a_req->out_mac = a_req->out_mac_buf;
a_req->a_ivin_dma = dma_map_single(dev, a_req->a_ivin,
SEC_IV_SIZE, DMA_TO_DEVICE);
if (unlikely(dma_mapping_error(dev, a_req->a_ivin_dma)))
goto free_c_ivin_dma;
a_req->out_mac_dma = dma_map_single(dev, a_req->out_mac,
SEC_MAX_MAC_LEN, DMA_BIDIRECTIONAL);
if (unlikely(dma_mapping_error(dev, a_req->out_mac_dma)))
goto free_a_ivin_dma;
}
if (req->use_pbuf) {
ret = sec_cipher_pbuf_map(ctx, req, src);
if (unlikely(ret))
goto free_out_mac_dma;
return 0;
}
if (!c_req->encrypt && is_aead) {
ret = sec_aead_mac_init(a_req);
if (unlikely(ret)) {
dev_err(dev, "fail to init mac data for ICV!\n");
goto free_out_mac_dma;
}
}
ret = sec_cipher_map_sgl(dev, req, src, dst);
if (unlikely(ret)) {
dev_err(dev, "fail to dma map input sgl buffers!\n");
goto free_out_mac_dma;
}
return 0;
free_out_mac_dma:
if (is_aead)
dma_unmap_single(dev, a_req->out_mac_dma, SEC_MAX_MAC_LEN, DMA_BIDIRECTIONAL);
free_a_ivin_dma:
if (is_aead)
dma_unmap_single(dev, a_req->a_ivin_dma, SEC_IV_SIZE, DMA_TO_DEVICE);
free_c_ivin_dma:
dma_unmap_single(dev, c_req->c_ivin_dma, SEC_IV_SIZE, DMA_TO_DEVICE);
return ret;
}
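sec_cipher_map() above acquires its DMA mappings in stages and unwinds them in reverse order through the free_out_mac_dma / free_a_ivin_dma / free_c_ivin_dma labels when a later stage fails. A tiny standalone illustration of that goto-unwind idiom, with plain allocations standing in for the DMA mappings:

#include <stdio.h>
#include <stdlib.h>

/* Each failure releases only what was acquired before it, in reverse
 * order, mirroring the cascading error labels above. */
static int setup(void)
{
    char *iv, *aad_iv, *mac;

    iv = malloc(24);
    if (!iv)
        return -1;

    aad_iv = malloc(24);
    if (!aad_iv)
        goto free_iv;

    mac = malloc(64);
    if (!mac)
        goto free_aad_iv;

    /* Success: in the driver the buffers stay mapped for the request;
     * here we simply release them again before returning. */
    puts("all three buffers allocated");
    free(mac);
    free(aad_iv);
    free(iv);
    return 0;

free_aad_iv:
    free(aad_iv);
free_iv:
    free(iv);
    return -1;
}

int main(void)
{
    return setup() ? 1 : 0;
}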
static void sec_cipher_unmap(struct sec_ctx *ctx, struct sec_req *req,
struct scatterlist *src, struct scatterlist *dst)
{
struct sec_aead_req *a_req = &req->aead_req;
struct sec_cipher_req *c_req = &req->c_req;
struct device *dev = ctx->dev;
if (req->req_id >= 0) {
if (req->use_pbuf) {
sec_cipher_pbuf_unmap(ctx, req, dst);
} else {
if (dst != src) {
hisi_acc_sg_buf_unmap(dev, dst, c_req->c_out, DMA_FROM_DEVICE);
hisi_acc_sg_buf_unmap(dev, src, req->in, DMA_TO_DEVICE);
} else {
hisi_acc_sg_buf_unmap(dev, src, req->in, DMA_BIDIRECTIONAL);
}
}
return;
}
if (req->use_pbuf) {
sec_cipher_pbuf_unmap(ctx, req, dst);
} else {
if (dst != src)
hisi_acc_sg_buf_unmap(dev, src, req->in);
if (dst != src) {
sec_cipher_put_hw_sgl(dev, dst, req->buf.out_dma, DMA_FROM_DEVICE);
sec_cipher_put_hw_sgl(dev, src, req->buf.in_dma, DMA_TO_DEVICE);
} else {
sec_cipher_put_hw_sgl(dev, src, req->buf.in_dma, DMA_BIDIRECTIONAL);
}
}
hisi_acc_sg_buf_unmap(dev, dst, c_req->c_out);
dma_unmap_single(dev, c_req->c_ivin_dma, SEC_IV_SIZE, DMA_TO_DEVICE);
if (ctx->alg_type == SEC_AEAD) {
dma_unmap_single(dev, a_req->a_ivin_dma, SEC_IV_SIZE, DMA_TO_DEVICE);
dma_unmap_single(dev, a_req->out_mac_dma, SEC_MAX_MAC_LEN, DMA_BIDIRECTIONAL);
}
}
@ -1257,8 +1520,15 @@ static int sec_skcipher_bd_fill(struct sec_ctx *ctx, struct sec_req *req)
sec_sqe->type2.c_key_addr = cpu_to_le64(c_ctx->c_key_dma);
sec_sqe->type2.c_ivin_addr = cpu_to_le64(c_req->c_ivin_dma);
sec_sqe->type2.data_src_addr = cpu_to_le64(req->in_dma);
sec_sqe->type2.data_dst_addr = cpu_to_le64(c_req->c_out_dma);
if (req->req_id < 0) {
sec_sqe->type2.data_src_addr = cpu_to_le64(req->buf.in_dma);
sec_sqe->type2.data_dst_addr = cpu_to_le64(req->buf.out_dma);
} else {
sec_sqe->type2.data_src_addr = cpu_to_le64(req->in_dma);
sec_sqe->type2.data_dst_addr = cpu_to_le64(c_req->c_out_dma);
}
if (sec_sqe->type2.data_src_addr != sec_sqe->type2.data_dst_addr)
de = 0x1 << SEC_DE_OFFSET;
sec_sqe->type2.icvw_kmode |= cpu_to_le16(((u16)c_ctx->c_mode) <<
SEC_CMODE_OFFSET);
@ -1284,13 +1554,10 @@ static int sec_skcipher_bd_fill(struct sec_ctx *ctx, struct sec_req *req)
sec_sqe->sdm_addr_type |= da_type;
scene = SEC_COMM_SCENE << SEC_SCENE_OFFSET;
if (req->in_dma != c_req->c_out_dma)
de = 0x1 << SEC_DE_OFFSET;
sec_sqe->sds_sa_type = (de | scene | sa_type);
sec_sqe->type2.clen_ivhlen |= cpu_to_le32(c_req->c_len);
sec_sqe->type2.tag = cpu_to_le16((u16)req->req_id);
return 0;
}
@ -1307,8 +1574,15 @@ static int sec_skcipher_bd_fill_v3(struct sec_ctx *ctx, struct sec_req *req)
sec_sqe3->c_key_addr = cpu_to_le64(c_ctx->c_key_dma);
sec_sqe3->no_scene.c_ivin_addr = cpu_to_le64(c_req->c_ivin_dma);
sec_sqe3->data_src_addr = cpu_to_le64(req->in_dma);
sec_sqe3->data_dst_addr = cpu_to_le64(c_req->c_out_dma);
if (req->req_id < 0) {
sec_sqe3->data_src_addr = cpu_to_le64(req->buf.in_dma);
sec_sqe3->data_dst_addr = cpu_to_le64(req->buf.out_dma);
} else {
sec_sqe3->data_src_addr = cpu_to_le64(req->in_dma);
sec_sqe3->data_dst_addr = cpu_to_le64(c_req->c_out_dma);
}
if (sec_sqe3->data_src_addr != sec_sqe3->data_dst_addr)
bd_param |= 0x1 << SEC_DE_OFFSET_V3;
sec_sqe3->c_mode_alg = ((u8)c_ctx->c_alg << SEC_CALG_OFFSET_V3) |
c_ctx->c_mode;
@ -1334,8 +1608,6 @@ static int sec_skcipher_bd_fill_v3(struct sec_ctx *ctx, struct sec_req *req)
}
bd_param |= SEC_COMM_SCENE << SEC_SCENE_OFFSET_V3;
if (req->in_dma != c_req->c_out_dma)
bd_param |= 0x1 << SEC_DE_OFFSET_V3;
bd_param |= SEC_BD_TYPE3;
sec_sqe3->bd_param = cpu_to_le32(bd_param);
@ -1367,15 +1639,12 @@ static void sec_update_iv(struct sec_req *req, enum sec_alg_type alg_type)
size_t sz;
u8 *iv;
if (req->c_req.encrypt)
sgl = alg_type == SEC_SKCIPHER ? sk_req->dst : aead_req->dst;
else
sgl = alg_type == SEC_SKCIPHER ? sk_req->src : aead_req->src;
if (alg_type == SEC_SKCIPHER) {
sgl = req->c_req.encrypt ? sk_req->dst : sk_req->src;
iv = sk_req->iv;
cryptlen = sk_req->cryptlen;
} else {
sgl = req->c_req.encrypt ? aead_req->dst : aead_req->src;
iv = aead_req->iv;
cryptlen = aead_req->cryptlen;
}
@ -1386,57 +1655,26 @@ static void sec_update_iv(struct sec_req *req, enum sec_alg_type alg_type)
if (unlikely(sz != iv_size))
dev_err(req->ctx->dev, "copy output iv error!\n");
} else {
sz = cryptlen / iv_size;
if (cryptlen % iv_size)
sz += 1;
sz = (cryptlen + iv_size - 1) / iv_size;
ctr_iv_inc(iv, iv_size, sz);
}
}
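
The CTR branch above needs the number of iv_size blocks spanned by cryptlen, rounded up; the single rounding expression is equivalent to the older divide-then-adjust form. A minimal standalone check of that arithmetic (plain C, invented helper name, not driver code):

    #include <assert.h>
    #include <stddef.h>

    /* Blocks of iv_size bytes needed to cover cryptlen, rounded up. */
    static size_t blocks_round_up(size_t cryptlen, size_t iv_size)
    {
        size_t a = (cryptlen + iv_size - 1) / iv_size;              /* single expression */
        size_t b = cryptlen / iv_size + (cryptlen % iv_size ? 1 : 0); /* divide then adjust */

        assert(a == b);
        return a;
    }

    int main(void)
    {
        assert(blocks_round_up(33, 16) == 3);  /* partial last block counts */
        assert(blocks_round_up(32, 16) == 2);
        assert(blocks_round_up(1, 16) == 1);
        return 0;
    }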
static struct sec_req *sec_back_req_clear(struct sec_ctx *ctx,
struct sec_qp_ctx *qp_ctx)
{
struct sec_req *backlog_req = NULL;
spin_lock_bh(&qp_ctx->req_lock);
if (ctx->fake_req_limit >=
atomic_read(&qp_ctx->qp->qp_status.used) &&
!list_empty(&qp_ctx->backlog)) {
backlog_req = list_first_entry(&qp_ctx->backlog,
typeof(*backlog_req), backlog_head);
list_del(&backlog_req->backlog_head);
}
spin_unlock_bh(&qp_ctx->req_lock);
return backlog_req;
}
static void sec_skcipher_callback(struct sec_ctx *ctx, struct sec_req *req,
int err)
{
struct skcipher_request *sk_req = req->c_req.sk_req;
struct sec_qp_ctx *qp_ctx = req->qp_ctx;
struct skcipher_request *backlog_sk_req;
struct sec_req *backlog_req;
sec_free_req_id(req);
if (req->req_id >= 0)
sec_free_req_id(req);
/* IV output at encrypto of CBC/CTR mode */
if (!err && (ctx->c_ctx.c_mode == SEC_CMODE_CBC ||
ctx->c_ctx.c_mode == SEC_CMODE_CTR) && req->c_req.encrypt)
sec_update_iv(req, SEC_SKCIPHER);
while (1) {
backlog_req = sec_back_req_clear(ctx, qp_ctx);
if (!backlog_req)
break;
backlog_sk_req = backlog_req->c_req.sk_req;
skcipher_request_complete(backlog_sk_req, -EINPROGRESS);
atomic64_inc(&ctx->sec->debug.dfx.recv_busy_cnt);
}
skcipher_request_complete(sk_req, err);
crypto_request_complete(req->base, err);
sec_alg_send_backlog(ctx, qp_ctx);
}
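
The open-coded drain loop is gone in favour of sec_alg_send_backlog(). The general shape of full backlog mode — requests that do not fit in the hardware queue are parked and re-submitted as completions free slots — can be sketched in standalone C (toy model with invented names, not the driver's implementation):

    #include <stdio.h>

    #define TOY_HW_DEPTH 2

    static int hw_in_flight, backlogged;

    /* Submit: queue to "hardware" if there is room, otherwise park it. */
    static void toy_submit(int id)
    {
        if (hw_in_flight < TOY_HW_DEPTH) {
            hw_in_flight++;
            printf("req %d queued to hw\n", id);
        } else {
            backlogged++;
            printf("req %d backlogged\n", id);
        }
    }

    /* Complete: free a slot, then move one backlogged request onto it. */
    static void toy_complete(int id)
    {
        hw_in_flight--;
        printf("req %d completed\n", id);
        if (backlogged) {
            backlogged--;
            hw_in_flight++;
            printf("one backlogged req re-submitted\n");
        }
    }

    int main(void)
    {
        toy_submit(1);
        toy_submit(2);
        toy_submit(3);   /* no room: parked on the backlog */
        toy_complete(1); /* completion drains one backlogged request */
        return 0;
    }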
static void set_aead_auth_iv(struct sec_ctx *ctx, struct sec_req *req)
@ -1675,21 +1913,14 @@ static void sec_aead_callback(struct sec_ctx *c, struct sec_req *req, int err)
struct aead_request *a_req = req->aead_req.aead_req;
struct crypto_aead *tfm = crypto_aead_reqtfm(a_req);
size_t authsize = crypto_aead_authsize(tfm);
struct sec_aead_req *aead_req = &req->aead_req;
struct sec_cipher_req *c_req = &req->c_req;
struct sec_qp_ctx *qp_ctx = req->qp_ctx;
struct aead_request *backlog_aead_req;
struct sec_req *backlog_req;
size_t sz;
if (!err && c->c_ctx.c_mode == SEC_CMODE_CBC && c_req->encrypt)
sec_update_iv(req, SEC_AEAD);
if (!err && req->c_req.encrypt) {
if (c->c_ctx.c_mode == SEC_CMODE_CBC)
sec_update_iv(req, SEC_AEAD);
/* Copy output mac */
if (!err && c_req->encrypt) {
struct scatterlist *sgl = a_req->dst;
sz = sg_pcopy_from_buffer(sgl, sg_nents(sgl), aead_req->out_mac,
sz = sg_pcopy_from_buffer(a_req->dst, sg_nents(a_req->dst), req->aead_req.out_mac,
authsize, a_req->cryptlen + a_req->assoclen);
if (unlikely(sz != authsize)) {
dev_err(c->dev, "copy out mac err!\n");
@ -1697,48 +1928,39 @@ static void sec_aead_callback(struct sec_ctx *c, struct sec_req *req, int err)
}
}
sec_free_req_id(req);
if (req->req_id >= 0)
sec_free_req_id(req);
while (1) {
backlog_req = sec_back_req_clear(c, qp_ctx);
if (!backlog_req)
break;
backlog_aead_req = backlog_req->aead_req.aead_req;
aead_request_complete(backlog_aead_req, -EINPROGRESS);
atomic64_inc(&c->sec->debug.dfx.recv_busy_cnt);
}
aead_request_complete(a_req, err);
crypto_request_complete(req->base, err);
sec_alg_send_backlog(c, qp_ctx);
}
static void sec_request_uninit(struct sec_ctx *ctx, struct sec_req *req)
static void sec_request_uninit(struct sec_req *req)
{
sec_free_req_id(req);
sec_free_queue_id(ctx, req);
if (req->req_id >= 0)
sec_free_req_id(req);
}
static int sec_request_init(struct sec_ctx *ctx, struct sec_req *req)
{
struct sec_qp_ctx *qp_ctx;
int queue_id;
int i;
/* To load balance */
queue_id = sec_alloc_queue_id(ctx, req);
qp_ctx = &ctx->qp_ctx[queue_id];
req->req_id = sec_alloc_req_id(req, qp_ctx);
if (unlikely(req->req_id < 0)) {
sec_free_queue_id(ctx, req);
return req->req_id;
for (i = 0; i < ctx->sec->ctx_q_num; i++) {
qp_ctx = &ctx->qp_ctx[i];
req->req_id = sec_alloc_req_id(req, qp_ctx);
if (req->req_id >= 0)
break;
}
req->qp_ctx = qp_ctx;
req->backlog = &qp_ctx->backlog;
return 0;
}
static int sec_process(struct sec_ctx *ctx, struct sec_req *req)
{
struct sec_cipher_req *c_req = &req->c_req;
int ret;
ret = sec_request_init(ctx, req);
@ -1755,8 +1977,7 @@ static int sec_process(struct sec_ctx *ctx, struct sec_req *req)
sec_update_iv(req, ctx->alg_type);
ret = ctx->req_op->bd_send(ctx, req);
if (unlikely((ret != -EBUSY && ret != -EINPROGRESS) ||
(ret == -EBUSY && !(req->flag & CRYPTO_TFM_REQ_MAY_BACKLOG)))) {
if (unlikely((ret != -EBUSY && ret != -EINPROGRESS))) {
dev_err_ratelimited(ctx->dev, "send sec request failed!\n");
goto err_send_req;
}
@ -1767,16 +1988,23 @@ err_send_req:
/* As failing, restore the IV from user */
if (ctx->c_ctx.c_mode == SEC_CMODE_CBC && !req->c_req.encrypt) {
if (ctx->alg_type == SEC_SKCIPHER)
memcpy(req->c_req.sk_req->iv, c_req->c_ivin,
memcpy(req->c_req.sk_req->iv, req->c_req.c_ivin,
ctx->c_ctx.ivsize);
else
memcpy(req->aead_req.aead_req->iv, c_req->c_ivin,
memcpy(req->aead_req.aead_req->iv, req->c_req.c_ivin,
ctx->c_ctx.ivsize);
}
sec_request_untransfer(ctx, req);
err_uninit_req:
sec_request_uninit(ctx, req);
sec_request_uninit(req);
if (ctx->alg_type == SEC_AEAD)
ret = sec_aead_soft_crypto(ctx, req->aead_req.aead_req,
req->c_req.encrypt);
else
ret = sec_skcipher_soft_crypto(ctx, req->c_req.sk_req,
req->c_req.encrypt);
return ret;
}
@ -1850,7 +2078,7 @@ static int sec_aead_init(struct crypto_aead *tfm)
struct sec_ctx *ctx = crypto_aead_ctx(tfm);
int ret;
crypto_aead_set_reqsize(tfm, sizeof(struct sec_req));
crypto_aead_set_reqsize_dma(tfm, sizeof(struct sec_req));
ctx->alg_type = SEC_AEAD;
ctx->c_ctx.ivsize = crypto_aead_ivsize(tfm);
if (ctx->c_ctx.ivsize < SEC_AIV_SIZE ||
@ -2087,7 +2315,7 @@ static int sec_skcipher_soft_crypto(struct sec_ctx *ctx,
static int sec_skcipher_crypto(struct skcipher_request *sk_req, bool encrypt)
{
struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(sk_req);
struct sec_req *req = skcipher_request_ctx(sk_req);
struct sec_req *req = skcipher_request_ctx_dma(sk_req);
struct sec_ctx *ctx = crypto_skcipher_ctx(tfm);
bool need_fallback = false;
int ret;
@ -2102,6 +2330,7 @@ static int sec_skcipher_crypto(struct skcipher_request *sk_req, bool encrypt)
req->c_req.sk_req = sk_req;
req->c_req.encrypt = encrypt;
req->ctx = ctx;
req->base = &sk_req->base;
ret = sec_skcipher_param_check(ctx, req, &need_fallback);
if (unlikely(ret))
@ -2236,6 +2465,9 @@ static int sec_aead_spec_check(struct sec_ctx *ctx, struct sec_req *sreq)
return -EINVAL;
if (unlikely(ctx->a_ctx.a_key_len & WORD_MASK))
return -EINVAL;
} else if (c_mode == SEC_CMODE_GCM) {
if (unlikely(sz < SEC_GCM_MIN_AUTH_SZ))
return -EINVAL;
}
return 0;
@ -2309,7 +2541,7 @@ static int sec_aead_soft_crypto(struct sec_ctx *ctx,
static int sec_aead_crypto(struct aead_request *a_req, bool encrypt)
{
struct crypto_aead *tfm = crypto_aead_reqtfm(a_req);
struct sec_req *req = aead_request_ctx(a_req);
struct sec_req *req = aead_request_ctx_dma(a_req);
struct sec_ctx *ctx = crypto_aead_ctx(tfm);
size_t sz = crypto_aead_authsize(tfm);
bool need_fallback = false;
@ -2319,6 +2551,7 @@ static int sec_aead_crypto(struct aead_request *a_req, bool encrypt)
req->aead_req.aead_req = a_req;
req->c_req.encrypt = encrypt;
req->ctx = ctx;
req->base = &a_req->base;
req->c_req.c_len = a_req->cryptlen - (req->c_req.encrypt ? 0 : sz);
ret = sec_aead_param_check(ctx, req, &need_fallback);


@ -210,15 +210,15 @@ static void clear_hw_sgl_sge(struct hisi_acc_hw_sgl *hw_sgl)
* @pool: Pool which hw sgl memory will be allocated in.
* @index: Index of hisi_acc_hw_sgl in pool.
* @hw_sgl_dma: The dma address of allocated hw sgl.
* @dir: DMA direction.
*
* This function builds hw sgl according input sgl, user can use hw_sgl_dma
* as src/dst in its BD. Only support single hw sgl currently.
*/
struct hisi_acc_hw_sgl *
hisi_acc_sg_buf_map_to_hw_sgl(struct device *dev,
struct scatterlist *sgl,
struct hisi_acc_sgl_pool *pool,
u32 index, dma_addr_t *hw_sgl_dma)
hisi_acc_sg_buf_map_to_hw_sgl(struct device *dev, struct scatterlist *sgl,
struct hisi_acc_sgl_pool *pool, u32 index,
dma_addr_t *hw_sgl_dma, enum dma_data_direction dir)
{
struct hisi_acc_hw_sgl *curr_hw_sgl;
unsigned int i, sg_n_mapped;
@ -232,7 +232,7 @@ hisi_acc_sg_buf_map_to_hw_sgl(struct device *dev,
sg_n = sg_nents(sgl);
sg_n_mapped = dma_map_sg(dev, sgl, sg_n, DMA_BIDIRECTIONAL);
sg_n_mapped = dma_map_sg(dev, sgl, sg_n, dir);
if (!sg_n_mapped) {
dev_err(dev, "DMA mapping for SG error!\n");
return ERR_PTR(-EINVAL);
@ -276,16 +276,17 @@ EXPORT_SYMBOL_GPL(hisi_acc_sg_buf_map_to_hw_sgl);
* @dev: The device which hw sgl belongs to.
* @sgl: Related scatterlist.
* @hw_sgl: Virtual address of hw sgl.
* @dir: DMA direction.
*
* This function unmaps allocated hw sgl.
*/
void hisi_acc_sg_buf_unmap(struct device *dev, struct scatterlist *sgl,
struct hisi_acc_hw_sgl *hw_sgl)
struct hisi_acc_hw_sgl *hw_sgl, enum dma_data_direction dir)
{
if (!dev || !sgl || !hw_sgl)
return;
dma_unmap_sg(dev, sgl, sg_nents(sgl), DMA_BIDIRECTIONAL);
dma_unmap_sg(dev, sgl, sg_nents(sgl), dir);
clear_hw_sgl_sge(hw_sgl);
hw_sgl->entry_sum_in_chain = 0;
hw_sgl->entry_sum_in_sgl = 0;


@ -224,7 +224,8 @@ static int hisi_zip_do_work(struct hisi_zip_qp_ctx *qp_ctx,
return -EINVAL;
req->hw_src = hisi_acc_sg_buf_map_to_hw_sgl(dev, a_req->src, pool,
req->req_id << 1, &req->dma_src);
req->req_id << 1, &req->dma_src,
DMA_TO_DEVICE);
if (IS_ERR(req->hw_src)) {
dev_err(dev, "failed to map the src buffer to hw sgl (%ld)!\n",
PTR_ERR(req->hw_src));
@ -233,7 +234,7 @@ static int hisi_zip_do_work(struct hisi_zip_qp_ctx *qp_ctx,
req->hw_dst = hisi_acc_sg_buf_map_to_hw_sgl(dev, a_req->dst, pool,
(req->req_id << 1) + 1,
&req->dma_dst);
&req->dma_dst, DMA_FROM_DEVICE);
if (IS_ERR(req->hw_dst)) {
ret = PTR_ERR(req->hw_dst);
dev_err(dev, "failed to map the dst buffer to hw slg (%d)!\n",
@ -258,9 +259,9 @@ static int hisi_zip_do_work(struct hisi_zip_qp_ctx *qp_ctx,
return -EINPROGRESS;
err_unmap_output:
hisi_acc_sg_buf_unmap(dev, a_req->dst, req->hw_dst);
hisi_acc_sg_buf_unmap(dev, a_req->dst, req->hw_dst, DMA_FROM_DEVICE);
err_unmap_input:
hisi_acc_sg_buf_unmap(dev, a_req->src, req->hw_src);
hisi_acc_sg_buf_unmap(dev, a_req->src, req->hw_src, DMA_TO_DEVICE);
return ret;
}
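
hisi_acc_sg_buf_map_to_hw_sgl() and hisi_acc_sg_buf_unmap() now take an explicit DMA direction, and the callers above pass the same direction at unmap time as at map time (DMA_TO_DEVICE for the source list, DMA_FROM_DEVICE for the destination). A standalone toy of that pairing rule (invented types, not the real DMA API):

    #include <assert.h>

    enum toy_dir { TOY_TO_DEVICE, TOY_FROM_DEVICE, TOY_BIDIRECTIONAL };

    struct toy_mapping {
        enum toy_dir dir;
        int mapped;
    };

    static void toy_map(struct toy_mapping *m, enum toy_dir dir)
    {
        m->dir = dir;
        m->mapped = 1;
    }

    /* Unmapping with a direction other than the one used at map time is a
     * misuse; the toy model simply asserts on it. */
    static void toy_unmap(struct toy_mapping *m, enum toy_dir dir)
    {
        assert(m->mapped && m->dir == dir);
        m->mapped = 0;
    }

    int main(void)
    {
        struct toy_mapping src = { 0 }, dst = { 0 };

        toy_map(&src, TOY_TO_DEVICE);
        toy_map(&dst, TOY_FROM_DEVICE);
        toy_unmap(&dst, TOY_FROM_DEVICE);
        toy_unmap(&src, TOY_TO_DEVICE);
        return 0;
    }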
@ -303,8 +304,8 @@ static void hisi_zip_acomp_cb(struct hisi_qp *qp, void *data)
err = -EIO;
}
hisi_acc_sg_buf_unmap(dev, acomp_req->src, req->hw_src);
hisi_acc_sg_buf_unmap(dev, acomp_req->dst, req->hw_dst);
hisi_acc_sg_buf_unmap(dev, acomp_req->dst, req->hw_dst, DMA_FROM_DEVICE);
hisi_acc_sg_buf_unmap(dev, acomp_req->src, req->hw_src, DMA_TO_DEVICE);
acomp_req->dlen = ops->get_dstlen(sqe);


@ -436,7 +436,7 @@ static int img_hash_write_via_dma_stop(struct img_hash_dev *hdev)
struct img_hash_request_ctx *ctx = ahash_request_ctx(hdev->req);
if (ctx->flags & DRIVER_FLAGS_SG)
dma_unmap_sg(hdev->dev, ctx->sg, ctx->dma_ct, DMA_TO_DEVICE);
dma_unmap_sg(hdev->dev, ctx->sg, 1, DMA_TO_DEVICE);
return 0;
}


@ -249,7 +249,9 @@ static int safexcel_handle_req_result(struct safexcel_crypto_priv *priv,
safexcel_complete(priv, ring);
if (sreq->nents) {
dma_unmap_sg(priv->dev, areq->src, sreq->nents, DMA_TO_DEVICE);
dma_unmap_sg(priv->dev, areq->src,
sg_nents_for_len(areq->src, areq->nbytes),
DMA_TO_DEVICE);
sreq->nents = 0;
}
@ -491,7 +493,9 @@ unmap_result:
DMA_FROM_DEVICE);
unmap_sg:
if (req->nents) {
dma_unmap_sg(priv->dev, areq->src, req->nents, DMA_TO_DEVICE);
dma_unmap_sg(priv->dev, areq->src,
sg_nents_for_len(areq->src, areq->nbytes),
DMA_TO_DEVICE);
req->nents = 0;
}
cdesc_rollback:


@ -68,6 +68,7 @@ struct ocs_hcu_ctx {
* @sg_data_total: Total data in the SG list at any time.
* @sg_data_offset: Offset into the data of the current individual SG node.
* @sg_dma_nents: Number of sg entries mapped in dma_list.
* @nents: Number of entries in the scatterlist.
*/
struct ocs_hcu_rctx {
struct ocs_hcu_dev *hcu_dev;
@ -91,6 +92,7 @@ struct ocs_hcu_rctx {
unsigned int sg_data_total;
unsigned int sg_data_offset;
unsigned int sg_dma_nents;
unsigned int nents;
};
/**
@ -199,7 +201,7 @@ static void kmb_ocs_hcu_dma_cleanup(struct ahash_request *req,
/* Unmap req->src (if mapped). */
if (rctx->sg_dma_nents) {
dma_unmap_sg(dev, req->src, rctx->sg_dma_nents, DMA_TO_DEVICE);
dma_unmap_sg(dev, req->src, rctx->nents, DMA_TO_DEVICE);
rctx->sg_dma_nents = 0;
}
@ -260,6 +262,10 @@ static int kmb_ocs_dma_prepare(struct ahash_request *req)
rc = -ENOMEM;
goto cleanup;
}
/* Save the value of nents to pass to dma_unmap_sg. */
rctx->nents = nents;
/*
* The value returned by dma_map_sg() can be < nents; so update
* nents accordingly.


@ -7,6 +7,7 @@
#include <linux/dma-mapping.h>
#include <linux/interrupt.h>
#include <linux/kernel.h>
#include <linux/platform_device.h>
#include <linux/slab.h>
#include <linux/swab.h>
@ -1473,8 +1474,7 @@ int ocs_create_linked_list_from_sg(const struct ocs_aes_dev *aes_dev,
ll = dll_desc->vaddr;
for (i = 0; i < dma_nents; i++, sg = sg_next(sg)) {
ll[i].src_addr = sg_dma_address(sg) + data_offset;
ll[i].src_len = (sg_dma_len(sg) - data_offset) < data_size ?
(sg_dma_len(sg) - data_offset) : data_size;
ll[i].src_len = min(sg_dma_len(sg) - data_offset, data_size);
data_offset = 0;
data_size -= ll[i].src_len;
/* Current element points to the DMA address of the next one. */
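
The min() form above computes the same per-entry length as the ternary it replaces: whatever remains in the current SG entry, capped at the remaining data size. A standalone check (plain C, invented helper name):

    #include <assert.h>
    #include <stddef.h>

    static size_t chunk_len(size_t sg_len, size_t offset, size_t data_size)
    {
        size_t left = sg_len - offset;

        /* Equivalent to min(sg_len - offset, data_size). */
        return left < data_size ? left : data_size;
    }

    int main(void)
    {
        assert(chunk_len(4096, 512, 1024) == 1024);  /* capped by data_size */
        assert(chunk_len(4096, 3584, 1024) == 512);  /* capped by what is left */
        return 0;
    }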


@ -191,7 +191,6 @@ static u32 get_accel_cap(struct adf_accel_dev *accel_dev)
ICP_ACCEL_CAPABILITIES_SM4 |
ICP_ACCEL_CAPABILITIES_AES_V2 |
ICP_ACCEL_CAPABILITIES_ZUC |
ICP_ACCEL_CAPABILITIES_ZUC_256 |
ICP_ACCEL_CAPABILITIES_WIRELESS_CRYPTO_EXT |
ICP_ACCEL_CAPABILITIES_EXT_ALGCHAIN;
@ -223,17 +222,11 @@ static u32 get_accel_cap(struct adf_accel_dev *accel_dev)
if (fusectl1 & ICP_ACCEL_GEN4_MASK_WCP_WAT_SLICE) {
capabilities_sym &= ~ICP_ACCEL_CAPABILITIES_ZUC;
capabilities_sym &= ~ICP_ACCEL_CAPABILITIES_ZUC_256;
capabilities_sym &= ~ICP_ACCEL_CAPABILITIES_WIRELESS_CRYPTO_EXT;
}
if (fusectl1 & ICP_ACCEL_GEN4_MASK_EIA3_SLICE) {
if (fusectl1 & ICP_ACCEL_GEN4_MASK_EIA3_SLICE)
capabilities_sym &= ~ICP_ACCEL_CAPABILITIES_ZUC;
capabilities_sym &= ~ICP_ACCEL_CAPABILITIES_ZUC_256;
}
if (fusectl1 & ICP_ACCEL_GEN4_MASK_ZUC_256_SLICE)
capabilities_sym &= ~ICP_ACCEL_CAPABILITIES_ZUC_256;
capabilities_asym = ICP_ACCEL_CAPABILITIES_CRYPTO_ASYMMETRIC |
ICP_ACCEL_CAPABILITIES_SM2 |
@ -303,11 +296,13 @@ static void adf_init_rl_data(struct adf_rl_hw_data *rl_data)
rl_data->pcie_scale_div = ADF_420XX_RL_PCIE_SCALE_FACTOR_DIV;
rl_data->pcie_scale_mul = ADF_420XX_RL_PCIE_SCALE_FACTOR_MUL;
rl_data->dcpr_correction = ADF_420XX_RL_DCPR_CORRECTION;
rl_data->max_tp[ADF_SVC_ASYM] = ADF_420XX_RL_MAX_TP_ASYM;
rl_data->max_tp[ADF_SVC_SYM] = ADF_420XX_RL_MAX_TP_SYM;
rl_data->max_tp[ADF_SVC_DC] = ADF_420XX_RL_MAX_TP_DC;
rl_data->max_tp[SVC_ASYM] = ADF_420XX_RL_MAX_TP_ASYM;
rl_data->max_tp[SVC_SYM] = ADF_420XX_RL_MAX_TP_SYM;
rl_data->max_tp[SVC_DC] = ADF_420XX_RL_MAX_TP_DC;
rl_data->scan_interval = ADF_420XX_RL_SCANS_PER_SEC;
rl_data->scale_ref = ADF_420XX_RL_SLICE_REF;
adf_gen4_init_num_svc_aes(rl_data);
}
static int get_rp_group(struct adf_accel_dev *accel_dev, u32 ae_mask)
@ -473,6 +468,7 @@ void adf_init_hw_data_420xx(struct adf_hw_device_data *hw_data, u32 dev_id)
hw_data->num_hb_ctrs = ADF_NUM_HB_CNT_PER_AE;
hw_data->clock_frequency = ADF_420XX_AE_FREQ;
hw_data->services_supported = adf_gen4_services_supported;
hw_data->get_svc_slice_cnt = adf_gen4_get_svc_slice_cnt;
adf_gen4_set_err_mask(&hw_data->dev_err_mask);
adf_gen4_init_hw_csr_ops(&hw_data->csr_ops);


@ -3,6 +3,7 @@
#include <linux/iopoll.h>
#include <adf_accel_devices.h>
#include <adf_admin.h>
#include <adf_bank_state.h>
#include <adf_cfg.h>
#include <adf_cfg_services.h>
#include <adf_clock.h>
@ -221,11 +222,13 @@ static void adf_init_rl_data(struct adf_rl_hw_data *rl_data)
rl_data->pcie_scale_div = ADF_4XXX_RL_PCIE_SCALE_FACTOR_DIV;
rl_data->pcie_scale_mul = ADF_4XXX_RL_PCIE_SCALE_FACTOR_MUL;
rl_data->dcpr_correction = ADF_4XXX_RL_DCPR_CORRECTION;
rl_data->max_tp[ADF_SVC_ASYM] = ADF_4XXX_RL_MAX_TP_ASYM;
rl_data->max_tp[ADF_SVC_SYM] = ADF_4XXX_RL_MAX_TP_SYM;
rl_data->max_tp[ADF_SVC_DC] = ADF_4XXX_RL_MAX_TP_DC;
rl_data->max_tp[SVC_ASYM] = ADF_4XXX_RL_MAX_TP_ASYM;
rl_data->max_tp[SVC_SYM] = ADF_4XXX_RL_MAX_TP_SYM;
rl_data->max_tp[SVC_DC] = ADF_4XXX_RL_MAX_TP_DC;
rl_data->scan_interval = ADF_4XXX_RL_SCANS_PER_SEC;
rl_data->scale_ref = ADF_4XXX_RL_SLICE_REF;
adf_gen4_init_num_svc_aes(rl_data);
}
static u32 uof_get_num_objs(struct adf_accel_dev *accel_dev)
@ -448,8 +451,8 @@ void adf_init_hw_data_4xxx(struct adf_hw_device_data *hw_data, u32 dev_id)
hw_data->get_ring_to_svc_map = adf_gen4_get_ring_to_svc_map;
hw_data->disable_iov = adf_disable_sriov;
hw_data->ring_pair_reset = adf_gen4_ring_pair_reset;
hw_data->bank_state_save = adf_gen4_bank_state_save;
hw_data->bank_state_restore = adf_gen4_bank_state_restore;
hw_data->bank_state_save = adf_bank_state_save;
hw_data->bank_state_restore = adf_bank_state_restore;
hw_data->enable_pm = adf_gen4_enable_pm;
hw_data->handle_pm_interrupt = adf_gen4_handle_pm_interrupt;
hw_data->dev_config = adf_gen4_dev_config;
@ -459,6 +462,7 @@ void adf_init_hw_data_4xxx(struct adf_hw_device_data *hw_data, u32 dev_id)
hw_data->num_hb_ctrs = ADF_NUM_HB_CNT_PER_AE;
hw_data->clock_frequency = ADF_4XXX_AE_FREQ;
hw_data->services_supported = adf_gen4_services_supported;
hw_data->get_svc_slice_cnt = adf_gen4_get_svc_slice_cnt;
adf_gen4_set_err_mask(&hw_data->dev_err_mask);
adf_gen4_init_hw_csr_ops(&hw_data->csr_ops);


@ -10,6 +10,7 @@
#include <adf_accel_devices.h>
#include <adf_admin.h>
#include <adf_bank_state.h>
#include <adf_cfg.h>
#include <adf_cfg_services.h>
#include <adf_clock.h>
@ -18,6 +19,7 @@
#include <adf_gen6_pm.h>
#include <adf_gen6_ras.h>
#include <adf_gen6_shared.h>
#include <adf_gen6_tl.h>
#include <adf_timer.h>
#include "adf_6xxx_hw_data.h"
#include "icp_qat_fw_comp.h"
@ -76,6 +78,10 @@ static const unsigned long thrd_mask_dcc[ADF_6XXX_MAX_ACCELENGINES] = {
0x00, 0x00, 0x00, 0x00, 0x07, 0x07, 0x03, 0x03, 0x00
};
static const unsigned long thrd_mask_dcpr[ADF_6XXX_MAX_ACCELENGINES] = {
0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x00
};
static const char *const adf_6xxx_fw_objs[] = {
[ADF_FW_CY_OBJ] = ADF_6XXX_CY_OBJ,
[ADF_FW_DC_OBJ] = ADF_6XXX_DC_OBJ,
@ -97,7 +103,7 @@ static bool services_supported(unsigned long mask)
{
int num_svc;
if (mask >= BIT(SVC_BASE_COUNT))
if (mask >= BIT(SVC_COUNT))
return false;
num_svc = hweight_long(mask);
@ -126,10 +132,13 @@ static int get_service(unsigned long *mask)
if (test_and_clear_bit(SVC_DCC, mask))
return SVC_DCC;
if (test_and_clear_bit(SVC_DECOMP, mask))
return SVC_DECOMP;
return -EINVAL;
}
static enum adf_cfg_service_type get_ring_type(enum adf_services service)
static enum adf_cfg_service_type get_ring_type(unsigned int service)
{
switch (service) {
case SVC_SYM:
@ -139,12 +148,14 @@ static enum adf_cfg_service_type get_ring_type(enum adf_services service)
case SVC_DC:
case SVC_DCC:
return COMP;
case SVC_DECOMP:
return DECOMP;
default:
return UNUSED;
}
}
static const unsigned long *get_thrd_mask(enum adf_services service)
static const unsigned long *get_thrd_mask(unsigned int service)
{
switch (service) {
case SVC_SYM:
@ -155,6 +166,8 @@ static const unsigned long *get_thrd_mask(enum adf_services service)
return thrd_mask_cpr;
case SVC_DCC:
return thrd_mask_dcc;
case SVC_DECOMP:
return thrd_mask_dcpr;
default:
return NULL;
}
@ -511,6 +524,55 @@ static int adf_gen6_init_thd2arb_map(struct adf_accel_dev *accel_dev)
return 0;
}
static void init_num_svc_aes(struct adf_rl_hw_data *device_data)
{
enum adf_fw_objs obj_type, obj_iter;
unsigned int svc, i, num_grp;
u32 ae_mask;
for (svc = 0; svc < SVC_BASE_COUNT; svc++) {
switch (svc) {
case SVC_SYM:
case SVC_ASYM:
obj_type = ADF_FW_CY_OBJ;
break;
case SVC_DC:
case SVC_DECOMP:
obj_type = ADF_FW_DC_OBJ;
break;
}
num_grp = ARRAY_SIZE(adf_default_fw_config);
for (i = 0; i < num_grp; i++) {
obj_iter = adf_default_fw_config[i].obj;
if (obj_iter == obj_type) {
ae_mask = adf_default_fw_config[i].ae_mask;
device_data->svc_ae_mask[svc] = hweight32(ae_mask);
break;
}
}
}
}
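
init_num_svc_aes() records, for each base service, the number of engines enabled in the matching firmware object's ae_mask, which is simply a population count. A standalone sketch of that counting (plain C, not the kernel's hweight32() implementation):

    #include <assert.h>

    static unsigned int toy_popcount32(unsigned int mask)
    {
        unsigned int n = 0;

        while (mask) {
            mask &= mask - 1;  /* clear the lowest set bit */
            n++;
        }
        return n;
    }

    int main(void)
    {
        assert(toy_popcount32(0x1ff) == 9);  /* nine engines enabled */
        assert(toy_popcount32(0) == 0);
        return 0;
    }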
static u32 adf_gen6_get_svc_slice_cnt(struct adf_accel_dev *accel_dev,
enum adf_base_services svc)
{
struct adf_rl_hw_data *device_data = &accel_dev->hw_device->rl_data;
switch (svc) {
case SVC_SYM:
return device_data->slices.cph_cnt;
case SVC_ASYM:
return device_data->slices.pke_cnt;
case SVC_DC:
return device_data->slices.cpr_cnt + device_data->slices.dcpr_cnt;
case SVC_DECOMP:
return device_data->slices.dcpr_cnt;
default:
return 0;
}
}
static void set_vc_csr_for_bank(void __iomem *csr, u32 bank_number)
{
u32 value;
@ -520,8 +582,8 @@ static void set_vc_csr_for_bank(void __iomem *csr, u32 bank_number)
* driver must program the ringmodectl CSRs.
*/
value = ADF_CSR_RD(csr, ADF_GEN6_CSR_RINGMODECTL(bank_number));
value |= FIELD_PREP(ADF_GEN6_RINGMODECTL_TC_MASK, ADF_GEN6_RINGMODECTL_TC_DEFAULT);
value |= FIELD_PREP(ADF_GEN6_RINGMODECTL_TC_EN_MASK, ADF_GEN6_RINGMODECTL_TC_EN_OP1);
FIELD_MODIFY(ADF_GEN6_RINGMODECTL_TC_MASK, &value, ADF_GEN6_RINGMODECTL_TC_DEFAULT);
FIELD_MODIFY(ADF_GEN6_RINGMODECTL_TC_EN_MASK, &value, ADF_GEN6_RINGMODECTL_TC_EN_OP1);
ADF_CSR_WR(csr, ADF_GEN6_CSR_RINGMODECTL(bank_number), value);
}
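
The switch from value |= FIELD_PREP(...) to FIELD_MODIFY(mask, &value, ...) updates the field in place instead of OR-ing prepared bits over whatever the field already held. A standalone sketch of that difference with a toy 8-bit field (plain C, not the bitfield.h macros themselves):

    #include <assert.h>
    #include <stdint.h>

    #define TOY_FIELD_MASK 0x0000ff00u

    /* Old pattern: OR a prepared field in; stale field bits survive. */
    static uint32_t or_field(uint32_t reg, uint32_t val)
    {
        return reg | ((val << 8) & TOY_FIELD_MASK);
    }

    /* Modify-in-place pattern: clear the field, then insert the value. */
    static uint32_t modify_field(uint32_t reg, uint32_t val)
    {
        return (reg & ~TOY_FIELD_MASK) | ((val << 8) & TOY_FIELD_MASK);
    }

    int main(void)
    {
        uint32_t reg = 0x0000ff00;  /* field currently all ones */

        assert(or_field(reg, 0x3f) == 0x0000ff00);      /* stale bits remain */
        assert(modify_field(reg, 0x3f) == 0x00003f00);  /* field replaced */
        return 0;
    }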
@ -537,7 +599,7 @@ static int set_vc_config(struct adf_accel_dev *accel_dev)
* Read PVC0CTL then write the masked values.
*/
pci_read_config_dword(pdev, ADF_GEN6_PVC0CTL_OFFSET, &value);
value |= FIELD_PREP(ADF_GEN6_PVC0CTL_TCVCMAP_MASK, ADF_GEN6_PVC0CTL_TCVCMAP_DEFAULT);
FIELD_MODIFY(ADF_GEN6_PVC0CTL_TCVCMAP_MASK, &value, ADF_GEN6_PVC0CTL_TCVCMAP_DEFAULT);
err = pci_write_config_dword(pdev, ADF_GEN6_PVC0CTL_OFFSET, value);
if (err) {
dev_err(&GET_DEV(accel_dev), "pci write to PVC0CTL failed\n");
@ -546,8 +608,8 @@ static int set_vc_config(struct adf_accel_dev *accel_dev)
/* Read PVC1CTL then write masked values */
pci_read_config_dword(pdev, ADF_GEN6_PVC1CTL_OFFSET, &value);
value |= FIELD_PREP(ADF_GEN6_PVC1CTL_TCVCMAP_MASK, ADF_GEN6_PVC1CTL_TCVCMAP_DEFAULT);
value |= FIELD_PREP(ADF_GEN6_PVC1CTL_VCEN_MASK, ADF_GEN6_PVC1CTL_VCEN_ON);
FIELD_MODIFY(ADF_GEN6_PVC1CTL_TCVCMAP_MASK, &value, ADF_GEN6_PVC1CTL_TCVCMAP_DEFAULT);
FIELD_MODIFY(ADF_GEN6_PVC1CTL_VCEN_MASK, &value, ADF_GEN6_PVC1CTL_VCEN_ON);
err = pci_write_config_dword(pdev, ADF_GEN6_PVC1CTL_OFFSET, value);
if (err)
dev_err(&GET_DEV(accel_dev), "pci write to PVC1CTL failed\n");
@ -618,7 +680,6 @@ static u32 get_accel_cap(struct adf_accel_dev *accel_dev)
capabilities_sym &= ~ICP_ACCEL_CAPABILITIES_CHACHA_POLY;
capabilities_sym &= ~ICP_ACCEL_CAPABILITIES_AESGCM_SPC;
capabilities_sym &= ~ICP_ACCEL_CAPABILITIES_AES_V2;
capabilities_sym &= ~ICP_ACCEL_CAPABILITIES_CIPHER;
}
if (fusectl1 & ICP_ACCEL_GEN6_MASK_AUTH_SLICE) {
capabilities_sym &= ~ICP_ACCEL_CAPABILITIES_AUTHENTICATION;
@ -627,7 +688,15 @@ static u32 get_accel_cap(struct adf_accel_dev *accel_dev)
capabilities_sym &= ~ICP_ACCEL_CAPABILITIES_CIPHER;
}
capabilities_asym = 0;
capabilities_asym = ICP_ACCEL_CAPABILITIES_CRYPTO_ASYMMETRIC |
ICP_ACCEL_CAPABILITIES_SM2 |
ICP_ACCEL_CAPABILITIES_ECEDMONT;
if (fusectl1 & ICP_ACCEL_GEN6_MASK_PKE_SLICE) {
capabilities_asym &= ~ICP_ACCEL_CAPABILITIES_CRYPTO_ASYMMETRIC;
capabilities_asym &= ~ICP_ACCEL_CAPABILITIES_SM2;
capabilities_asym &= ~ICP_ACCEL_CAPABILITIES_ECEDMONT;
}
capabilities_dc = ICP_ACCEL_CAPABILITIES_COMPRESSION |
ICP_ACCEL_CAPABILITIES_LZ4_COMPRESSION |
@ -648,7 +717,7 @@ static u32 get_accel_cap(struct adf_accel_dev *accel_dev)
caps |= capabilities_asym;
if (test_bit(SVC_SYM, &mask))
caps |= capabilities_sym;
if (test_bit(SVC_DC, &mask))
if (test_bit(SVC_DC, &mask) || test_bit(SVC_DECOMP, &mask))
caps |= capabilities_dc;
if (test_bit(SVC_DCC, &mask)) {
/*
@ -744,7 +813,16 @@ static int adf_init_device(struct adf_accel_dev *accel_dev)
static int enable_pm(struct adf_accel_dev *accel_dev)
{
return adf_init_admin_pm(accel_dev, ADF_GEN6_PM_DEFAULT_IDLE_FILTER);
int ret;
ret = adf_init_admin_pm(accel_dev, ADF_GEN6_PM_DEFAULT_IDLE_FILTER);
if (ret)
return ret;
/* Initialize PM internal data */
adf_gen6_init_dev_pm_data(accel_dev);
return 0;
}
static int dev_config(struct adf_accel_dev *accel_dev)
@ -776,6 +854,25 @@ static int dev_config(struct adf_accel_dev *accel_dev)
return ret;
}
static void adf_gen6_init_rl_data(struct adf_rl_hw_data *rl_data)
{
rl_data->pciout_tb_offset = ADF_GEN6_RL_TOKEN_PCIEOUT_BUCKET_OFFSET;
rl_data->pciin_tb_offset = ADF_GEN6_RL_TOKEN_PCIEIN_BUCKET_OFFSET;
rl_data->r2l_offset = ADF_GEN6_RL_R2L_OFFSET;
rl_data->l2c_offset = ADF_GEN6_RL_L2C_OFFSET;
rl_data->c2s_offset = ADF_GEN6_RL_C2S_OFFSET;
rl_data->pcie_scale_div = ADF_6XXX_RL_PCIE_SCALE_FACTOR_DIV;
rl_data->pcie_scale_mul = ADF_6XXX_RL_PCIE_SCALE_FACTOR_MUL;
rl_data->max_tp[SVC_ASYM] = ADF_6XXX_RL_MAX_TP_ASYM;
rl_data->max_tp[SVC_SYM] = ADF_6XXX_RL_MAX_TP_SYM;
rl_data->max_tp[SVC_DC] = ADF_6XXX_RL_MAX_TP_DC;
rl_data->max_tp[SVC_DECOMP] = ADF_6XXX_RL_MAX_TP_DECOMP;
rl_data->scan_interval = ADF_6XXX_RL_SCANS_PER_SEC;
rl_data->scale_ref = ADF_6XXX_RL_SLICE_REF;
init_num_svc_aes(rl_data);
}
void adf_init_hw_data_6xxx(struct adf_hw_device_data *hw_data)
{
hw_data->dev_class = &adf_6xxx_class;
@ -824,6 +921,8 @@ void adf_init_hw_data_6xxx(struct adf_hw_device_data *hw_data)
hw_data->disable_iov = adf_disable_sriov;
hw_data->ring_pair_reset = ring_pair_reset;
hw_data->dev_config = dev_config;
hw_data->bank_state_save = adf_bank_state_save;
hw_data->bank_state_restore = adf_bank_state_restore;
hw_data->get_hb_clock = get_heartbeat_clock;
hw_data->num_hb_ctrs = ADF_NUM_HB_CNT_PER_AE;
hw_data->start_timer = adf_timer_start;
@ -831,11 +930,17 @@ void adf_init_hw_data_6xxx(struct adf_hw_device_data *hw_data)
hw_data->init_device = adf_init_device;
hw_data->enable_pm = enable_pm;
hw_data->services_supported = services_supported;
hw_data->num_rps = ADF_GEN6_ETR_MAX_BANKS;
hw_data->clock_frequency = ADF_6XXX_AE_FREQ;
hw_data->get_svc_slice_cnt = adf_gen6_get_svc_slice_cnt;
adf_gen6_init_hw_csr_ops(&hw_data->csr_ops);
adf_gen6_init_pf_pfvf_ops(&hw_data->pfvf_ops);
adf_gen6_init_dc_ops(&hw_data->dc_ops);
adf_gen6_init_vf_mig_ops(&hw_data->vfmig_ops);
adf_gen6_init_ras_ops(&hw_data->ras_ops);
adf_gen6_init_tl_data(&hw_data->tl_data);
adf_gen6_init_rl_data(&hw_data->rl_data);
}
void adf_clean_hw_data_6xxx(struct adf_hw_device_data *hw_data)


@ -99,7 +99,7 @@
#define ADF_GEN6_PVC0CTL_OFFSET 0x204
#define ADF_GEN6_PVC0CTL_TCVCMAP_OFFSET 1
#define ADF_GEN6_PVC0CTL_TCVCMAP_MASK GENMASK(7, 1)
#define ADF_GEN6_PVC0CTL_TCVCMAP_DEFAULT 0x7F
#define ADF_GEN6_PVC0CTL_TCVCMAP_DEFAULT 0x3F
/* VC1 Resource Control Register */
#define ADF_GEN6_PVC1CTL_OFFSET 0x210
@ -122,6 +122,13 @@
/* Number of heartbeat counter pairs */
#define ADF_NUM_HB_CNT_PER_AE ADF_NUM_THREADS_PER_AE
/* Rate Limiting */
#define ADF_GEN6_RL_R2L_OFFSET 0x508000
#define ADF_GEN6_RL_L2C_OFFSET 0x509000
#define ADF_GEN6_RL_C2S_OFFSET 0x508818
#define ADF_GEN6_RL_TOKEN_PCIEIN_BUCKET_OFFSET 0x508800
#define ADF_GEN6_RL_TOKEN_PCIEOUT_BUCKET_OFFSET 0x508804
/* Physical function fuses */
#define ADF_6XXX_ACCELENGINES_MASK GENMASK(8, 0)
#define ADF_6XXX_ADMIN_AE_MASK GENMASK(8, 8)
@ -133,6 +140,19 @@
#define ADF_6XXX_DC_OBJ "qat_6xxx_dc.bin"
#define ADF_6XXX_ADMIN_OBJ "qat_6xxx_admin.bin"
/* RL constants */
#define ADF_6XXX_RL_PCIE_SCALE_FACTOR_DIV 100
#define ADF_6XXX_RL_PCIE_SCALE_FACTOR_MUL 102
#define ADF_6XXX_RL_SCANS_PER_SEC 954
#define ADF_6XXX_RL_MAX_TP_ASYM 173750UL
#define ADF_6XXX_RL_MAX_TP_SYM 95000UL
#define ADF_6XXX_RL_MAX_TP_DC 40000UL
#define ADF_6XXX_RL_MAX_TP_DECOMP 40000UL
#define ADF_6XXX_RL_SLICE_REF 1000UL
/* Clock frequency */
#define ADF_6XXX_AE_FREQ (1000 * HZ_PER_MHZ)
enum icp_qat_gen6_slice_mask {
ICP_ACCEL_GEN6_MASK_UCS_SLICE = BIT(0),
ICP_ACCEL_GEN6_MASK_AUTH_SLICE = BIT(1),


@ -4,6 +4,7 @@ ccflags-y += -DDEFAULT_SYMBOL_NAMESPACE='"CRYPTO_QAT"'
intel_qat-y := adf_accel_engine.o \
adf_admin.o \
adf_aer.o \
adf_bank_state.o \
adf_cfg.o \
adf_cfg_services.o \
adf_clock.o \
@ -48,9 +49,12 @@ intel_qat-$(CONFIG_DEBUG_FS) += adf_cnv_dbgfs.o \
adf_fw_counters.o \
adf_gen4_pm_debugfs.o \
adf_gen4_tl.o \
adf_gen6_pm_dbgfs.o \
adf_gen6_tl.o \
adf_heartbeat_dbgfs.o \
adf_heartbeat.o \
adf_pm_dbgfs.o \
adf_pm_dbgfs_utils.o \
adf_telemetry.o \
adf_tl_debugfs.o \
adf_transport_debug.o


@ -157,39 +157,7 @@ struct admin_info {
u32 mailbox_offset;
};
struct ring_config {
u64 base;
u32 config;
u32 head;
u32 tail;
u32 reserved0;
};
struct bank_state {
u32 ringstat0;
u32 ringstat1;
u32 ringuostat;
u32 ringestat;
u32 ringnestat;
u32 ringnfstat;
u32 ringfstat;
u32 ringcstat0;
u32 ringcstat1;
u32 ringcstat2;
u32 ringcstat3;
u32 iaintflagen;
u32 iaintflagreg;
u32 iaintflagsrcsel0;
u32 iaintflagsrcsel1;
u32 iaintcolen;
u32 iaintcolctl;
u32 iaintflagandcolen;
u32 ringexpstat;
u32 ringexpintenable;
u32 ringsrvarben;
u32 reserved0;
struct ring_config rings[ADF_ETR_MAX_RINGS_PER_BANK];
};
struct adf_bank_state;
struct adf_hw_csr_ops {
u64 (*build_csr_ring_base_addr)(dma_addr_t addr, u32 size);
@ -338,9 +306,9 @@ struct adf_hw_device_data {
void (*set_ssm_wdtimer)(struct adf_accel_dev *accel_dev);
int (*ring_pair_reset)(struct adf_accel_dev *accel_dev, u32 bank_nr);
int (*bank_state_save)(struct adf_accel_dev *accel_dev, u32 bank_number,
struct bank_state *state);
struct adf_bank_state *state);
int (*bank_state_restore)(struct adf_accel_dev *accel_dev,
u32 bank_number, struct bank_state *state);
u32 bank_number, struct adf_bank_state *state);
void (*reset_device)(struct adf_accel_dev *accel_dev);
void (*set_msix_rttable)(struct adf_accel_dev *accel_dev);
const char *(*uof_get_name)(struct adf_accel_dev *accel_dev, u32 obj_num);
@ -351,6 +319,8 @@ struct adf_hw_device_data {
u32 (*get_ena_thd_mask)(struct adf_accel_dev *accel_dev, u32 obj_num);
int (*dev_config)(struct adf_accel_dev *accel_dev);
bool (*services_supported)(unsigned long mask);
u32 (*get_svc_slice_cnt)(struct adf_accel_dev *accel_dev,
enum adf_base_services svc);
struct adf_pfvf_ops pfvf_ops;
struct adf_hw_csr_ops csr_ops;
struct adf_dc_ops dc_ops;


@ -229,7 +229,7 @@ const struct pci_error_handlers adf_err_handler = {
};
EXPORT_SYMBOL_GPL(adf_err_handler);
int adf_dev_autoreset(struct adf_accel_dev *accel_dev)
static int adf_dev_autoreset(struct adf_accel_dev *accel_dev)
{
if (accel_dev->autoreset_on_error)
return adf_dev_aer_schedule_reset(accel_dev, ADF_DEV_RESET_ASYNC);


@ -0,0 +1,238 @@
// SPDX-License-Identifier: GPL-2.0-only
/* Copyright(c) 2025 Intel Corporation */
#define pr_fmt(fmt) "QAT: " fmt
#include <linux/bits.h>
#include <linux/dev_printk.h>
#include <linux/printk.h>
#include "adf_accel_devices.h"
#include "adf_bank_state.h"
#include "adf_common_drv.h"
/* Ring interrupt masks */
#define ADF_RP_INT_SRC_SEL_F_RISE_MASK GENMASK(1, 0)
#define ADF_RP_INT_SRC_SEL_F_FALL_MASK GENMASK(2, 0)
#define ADF_RP_INT_SRC_SEL_RANGE_WIDTH 4
static inline int check_stat(u32 (*op)(void __iomem *, u32), u32 expect_val,
const char *name, void __iomem *base, u32 bank)
{
u32 actual_val = op(base, bank);
if (expect_val == actual_val)
return 0;
pr_err("Fail to restore %s register. Expected %#x, actual %#x\n",
name, expect_val, actual_val);
return -EINVAL;
}
static void bank_state_save(struct adf_hw_csr_ops *ops, void __iomem *base,
u32 bank, struct adf_bank_state *state, u32 num_rings)
{
u32 i;
state->ringstat0 = ops->read_csr_stat(base, bank);
state->ringuostat = ops->read_csr_uo_stat(base, bank);
state->ringestat = ops->read_csr_e_stat(base, bank);
state->ringnestat = ops->read_csr_ne_stat(base, bank);
state->ringnfstat = ops->read_csr_nf_stat(base, bank);
state->ringfstat = ops->read_csr_f_stat(base, bank);
state->ringcstat0 = ops->read_csr_c_stat(base, bank);
state->iaintflagen = ops->read_csr_int_en(base, bank);
state->iaintflagreg = ops->read_csr_int_flag(base, bank);
state->iaintflagsrcsel0 = ops->read_csr_int_srcsel(base, bank);
state->iaintcolen = ops->read_csr_int_col_en(base, bank);
state->iaintcolctl = ops->read_csr_int_col_ctl(base, bank);
state->iaintflagandcolen = ops->read_csr_int_flag_and_col(base, bank);
state->ringexpstat = ops->read_csr_exp_stat(base, bank);
state->ringexpintenable = ops->read_csr_exp_int_en(base, bank);
state->ringsrvarben = ops->read_csr_ring_srv_arb_en(base, bank);
for (i = 0; i < num_rings; i++) {
state->rings[i].head = ops->read_csr_ring_head(base, bank, i);
state->rings[i].tail = ops->read_csr_ring_tail(base, bank, i);
state->rings[i].config = ops->read_csr_ring_config(base, bank, i);
state->rings[i].base = ops->read_csr_ring_base(base, bank, i);
}
}
static int bank_state_restore(struct adf_hw_csr_ops *ops, void __iomem *base,
u32 bank, struct adf_bank_state *state, u32 num_rings,
int tx_rx_gap)
{
u32 val, tmp_val, i;
int ret;
for (i = 0; i < num_rings; i++)
ops->write_csr_ring_base(base, bank, i, state->rings[i].base);
for (i = 0; i < num_rings; i++)
ops->write_csr_ring_config(base, bank, i, state->rings[i].config);
for (i = 0; i < num_rings / 2; i++) {
int tx = i * (tx_rx_gap + 1);
int rx = tx + tx_rx_gap;
ops->write_csr_ring_head(base, bank, tx, state->rings[tx].head);
ops->write_csr_ring_tail(base, bank, tx, state->rings[tx].tail);
/*
* The TX ring head needs to be updated again to make sure that
* the HW will not consider the ring as full when it is empty
* and the correct state flags are set to match the recovered state.
*/
if (state->ringestat & BIT(tx)) {
val = ops->read_csr_int_srcsel(base, bank);
val |= ADF_RP_INT_SRC_SEL_F_RISE_MASK;
ops->write_csr_int_srcsel_w_val(base, bank, val);
ops->write_csr_ring_head(base, bank, tx, state->rings[tx].head);
}
ops->write_csr_ring_tail(base, bank, rx, state->rings[rx].tail);
val = ops->read_csr_int_srcsel(base, bank);
val |= ADF_RP_INT_SRC_SEL_F_RISE_MASK << ADF_RP_INT_SRC_SEL_RANGE_WIDTH;
ops->write_csr_int_srcsel_w_val(base, bank, val);
ops->write_csr_ring_head(base, bank, rx, state->rings[rx].head);
val = ops->read_csr_int_srcsel(base, bank);
val |= ADF_RP_INT_SRC_SEL_F_FALL_MASK << ADF_RP_INT_SRC_SEL_RANGE_WIDTH;
ops->write_csr_int_srcsel_w_val(base, bank, val);
/*
* The RX ring tail needs to be updated again to make sure that
* the HW will not consider the ring as empty when it is full
* and the correct state flags are set to match the recovered state.
*/
if (state->ringfstat & BIT(rx))
ops->write_csr_ring_tail(base, bank, rx, state->rings[rx].tail);
}
ops->write_csr_int_flag_and_col(base, bank, state->iaintflagandcolen);
ops->write_csr_int_en(base, bank, state->iaintflagen);
ops->write_csr_int_col_en(base, bank, state->iaintcolen);
ops->write_csr_int_srcsel_w_val(base, bank, state->iaintflagsrcsel0);
ops->write_csr_exp_int_en(base, bank, state->ringexpintenable);
ops->write_csr_int_col_ctl(base, bank, state->iaintcolctl);
/*
* Verify whether any exceptions were raised during the bank save process.
* If exceptions occurred, the status and exception registers cannot
* be directly restored. Consequently, further restoration is not
* feasible, and the current state of the ring should be maintained.
*/
val = state->ringexpstat;
if (val) {
pr_info("Bank %u state not fully restored due to exception in saved state (%#x)\n",
bank, val);
return 0;
}
/* Ensure that the restoration process completed without exceptions */
tmp_val = ops->read_csr_exp_stat(base, bank);
if (tmp_val) {
pr_err("Bank %u restored with exception: %#x\n", bank, tmp_val);
return -EFAULT;
}
ops->write_csr_ring_srv_arb_en(base, bank, state->ringsrvarben);
/* Check that all ring statuses match the saved state. */
ret = check_stat(ops->read_csr_stat, state->ringstat0, "ringstat",
base, bank);
if (ret)
return ret;
ret = check_stat(ops->read_csr_e_stat, state->ringestat, "ringestat",
base, bank);
if (ret)
return ret;
ret = check_stat(ops->read_csr_ne_stat, state->ringnestat, "ringnestat",
base, bank);
if (ret)
return ret;
ret = check_stat(ops->read_csr_nf_stat, state->ringnfstat, "ringnfstat",
base, bank);
if (ret)
return ret;
ret = check_stat(ops->read_csr_f_stat, state->ringfstat, "ringfstat",
base, bank);
if (ret)
return ret;
ret = check_stat(ops->read_csr_c_stat, state->ringcstat0, "ringcstat",
base, bank);
if (ret)
return ret;
return 0;
}
/**
* adf_bank_state_save() - save state of bank-related registers
* @accel_dev: Pointer to the device structure
* @bank_number: Bank number
* @state: Pointer to bank state structure
*
* This function saves the state of a bank by reading the bank CSRs and
* writing them in the @state structure.
*
* Returns 0 on success, error code otherwise
*/
int adf_bank_state_save(struct adf_accel_dev *accel_dev, u32 bank_number,
struct adf_bank_state *state)
{
struct adf_hw_device_data *hw_data = GET_HW_DATA(accel_dev);
struct adf_hw_csr_ops *csr_ops = GET_CSR_OPS(accel_dev);
void __iomem *csr_base = adf_get_etr_base(accel_dev);
if (bank_number >= hw_data->num_banks || !state)
return -EINVAL;
dev_dbg(&GET_DEV(accel_dev), "Saving state of bank %d\n", bank_number);
bank_state_save(csr_ops, csr_base, bank_number, state,
hw_data->num_rings_per_bank);
return 0;
}
EXPORT_SYMBOL_GPL(adf_bank_state_save);
/**
* adf_bank_state_restore() - restore state of bank-related registers
* @accel_dev: Pointer to the device structure
* @bank_number: Bank number
* @state: Pointer to bank state structure
*
* This function attempts to restore the state of a bank by writing the
* bank CSRs to the values in the state structure.
*
* Returns 0 on success, error code otherwise
*/
int adf_bank_state_restore(struct adf_accel_dev *accel_dev, u32 bank_number,
struct adf_bank_state *state)
{
struct adf_hw_device_data *hw_data = GET_HW_DATA(accel_dev);
struct adf_hw_csr_ops *csr_ops = GET_CSR_OPS(accel_dev);
void __iomem *csr_base = adf_get_etr_base(accel_dev);
int ret;
if (bank_number >= hw_data->num_banks || !state)
return -EINVAL;
dev_dbg(&GET_DEV(accel_dev), "Restoring state of bank %d\n", bank_number);
ret = bank_state_restore(csr_ops, csr_base, bank_number, state,
hw_data->num_rings_per_bank, hw_data->tx_rx_gap);
if (ret)
dev_err(&GET_DEV(accel_dev),
"Unable to restore state of bank %d\n", bank_number);
return ret;
}
EXPORT_SYMBOL_GPL(adf_bank_state_restore);
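
After writing the saved CSRs back, the restore path re-reads each ring status register and compares it against the saved copy (check_stat() above). A standalone toy of that verify-after-restore idea (invented names, not the driver API):

    #include <stdio.h>

    struct toy_bank_state {
        unsigned int ringestat;
    };

    /* Re-read a register after restore and fail loudly on a mismatch. */
    static int toy_check(unsigned int actual, unsigned int expected, const char *name)
    {
        if (actual == expected)
            return 0;
        fprintf(stderr, "restore of %s mismatched: got %#x, want %#x\n",
                name, actual, expected);
        return -1;
    }

    int main(void)
    {
        struct toy_bank_state saved = { .ringestat = 0x3 };
        unsigned int readback = 0x3;  /* pretend CSR read-back after restore */

        return toy_check(readback, saved.ringestat, "ringestat") ? 1 : 0;
    }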


@ -0,0 +1,49 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/* Copyright(c) 2025 Intel Corporation */
#ifndef ADF_BANK_STATE_H_
#define ADF_BANK_STATE_H_
#include <linux/types.h>
struct adf_accel_dev;
struct ring_config {
u64 base;
u32 config;
u32 head;
u32 tail;
u32 reserved0;
};
struct adf_bank_state {
u32 ringstat0;
u32 ringstat1;
u32 ringuostat;
u32 ringestat;
u32 ringnestat;
u32 ringnfstat;
u32 ringfstat;
u32 ringcstat0;
u32 ringcstat1;
u32 ringcstat2;
u32 ringcstat3;
u32 iaintflagen;
u32 iaintflagreg;
u32 iaintflagsrcsel0;
u32 iaintflagsrcsel1;
u32 iaintcolen;
u32 iaintcolctl;
u32 iaintflagandcolen;
u32 ringexpstat;
u32 ringexpintenable;
u32 ringsrvarben;
u32 reserved0;
struct ring_config rings[ADF_ETR_MAX_RINGS_PER_BANK];
};
int adf_bank_state_restore(struct adf_accel_dev *accel_dev, u32 bank_number,
struct adf_bank_state *state);
int adf_bank_state_save(struct adf_accel_dev *accel_dev, u32 bank_number,
struct adf_bank_state *state);
#endif


@ -29,6 +29,7 @@ enum adf_cfg_service_type {
COMP,
SYM,
ASYM,
DECOMP,
USED
};


@ -7,6 +7,7 @@
#include <linux/pci.h>
#include <linux/string.h>
#include "adf_cfg.h"
#include "adf_cfg_common.h"
#include "adf_cfg_services.h"
#include "adf_cfg_strings.h"
@ -15,13 +16,14 @@ static const char *const adf_cfg_services[] = {
[SVC_SYM] = ADF_CFG_SYM,
[SVC_DC] = ADF_CFG_DC,
[SVC_DCC] = ADF_CFG_DCC,
[SVC_DECOMP] = ADF_CFG_DECOMP,
};
/*
* Ensure that the size of the array matches the number of services,
* SVC_BASE_COUNT, that is used to size the bitmap.
* SVC_COUNT, that is used to size the bitmap.
*/
static_assert(ARRAY_SIZE(adf_cfg_services) == SVC_BASE_COUNT);
static_assert(ARRAY_SIZE(adf_cfg_services) == SVC_COUNT);
/*
* Ensure that the maximum number of concurrent services that can be
@ -34,7 +36,7 @@ static_assert(ARRAY_SIZE(adf_cfg_services) >= MAX_NUM_CONCURR_SVC);
* Ensure that the number of services fit a single unsigned long, as each
* service is represented by a bit in the mask.
*/
static_assert(BITS_PER_LONG >= SVC_BASE_COUNT);
static_assert(BITS_PER_LONG >= SVC_COUNT);
/*
* Ensure that size of the concatenation of all service strings is smaller
@ -43,6 +45,7 @@ static_assert(BITS_PER_LONG >= SVC_BASE_COUNT);
static_assert(sizeof(ADF_CFG_SYM ADF_SERVICES_DELIMITER
ADF_CFG_ASYM ADF_SERVICES_DELIMITER
ADF_CFG_DC ADF_SERVICES_DELIMITER
ADF_CFG_DECOMP ADF_SERVICES_DELIMITER
ADF_CFG_DCC) < ADF_CFG_MAX_VAL_LEN_IN_BYTES);
static int adf_service_string_to_mask(struct adf_accel_dev *accel_dev, const char *buf,
@ -88,7 +91,7 @@ static int adf_service_mask_to_string(unsigned long mask, char *buf, size_t len)
if (len < ADF_CFG_MAX_VAL_LEN_IN_BYTES)
return -ENOSPC;
for_each_set_bit(bit, &mask, SVC_BASE_COUNT) {
for_each_set_bit(bit, &mask, SVC_COUNT) {
if (offset)
offset += scnprintf(buf + offset, len - offset,
ADF_SERVICES_DELIMITER);
@ -167,9 +170,43 @@ int adf_get_service_enabled(struct adf_accel_dev *accel_dev)
if (test_bit(SVC_DC, &mask))
return SVC_DC;
if (test_bit(SVC_DECOMP, &mask))
return SVC_DECOMP;
if (test_bit(SVC_DCC, &mask))
return SVC_DCC;
return -EINVAL;
}
EXPORT_SYMBOL_GPL(adf_get_service_enabled);
enum adf_cfg_service_type adf_srv_to_cfg_svc_type(enum adf_base_services svc)
{
switch (svc) {
case SVC_ASYM:
return ASYM;
case SVC_SYM:
return SYM;
case SVC_DC:
return COMP;
case SVC_DECOMP:
return DECOMP;
default:
return UNUSED;
}
}
bool adf_is_service_enabled(struct adf_accel_dev *accel_dev, enum adf_base_services svc)
{
enum adf_cfg_service_type arb_srv = adf_srv_to_cfg_svc_type(svc);
struct adf_hw_device_data *hw_data = GET_HW_DATA(accel_dev);
u8 rps_per_bundle = hw_data->num_banks_per_vf;
int i;
for (i = 0; i < rps_per_bundle; i++) {
if (GET_SRV_TYPE(accel_dev, i) == arb_srv)
return true;
}
return false;
}


@ -7,16 +7,21 @@
struct adf_accel_dev;
enum adf_services {
enum adf_base_services {
SVC_ASYM = 0,
SVC_SYM,
SVC_DC,
SVC_DCC,
SVC_DECOMP,
SVC_BASE_COUNT
};
enum adf_extended_services {
SVC_DCC = SVC_BASE_COUNT,
SVC_COUNT
};
enum adf_composed_services {
SVC_SYM_ASYM = SVC_BASE_COUNT,
SVC_SYM_ASYM = SVC_COUNT,
SVC_SYM_DC,
SVC_ASYM_DC,
};
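
In the resulting layout SVC_DECOMP sits among the base services, SVC_DCC is defined as an extended service starting at SVC_BASE_COUNT, and service bitmaps are sized and validated against SVC_COUNT (see the mask >= BIT(SVC_COUNT) checks in services_supported()). A standalone mirror of that split (toy names):

    #include <assert.h>

    enum toy_base_services { T_ASYM, T_SYM, T_DC, T_DECOMP, T_BASE_COUNT };
    enum toy_extended_services { T_DCC = T_BASE_COUNT, T_COUNT };

    /* A service mask is valid only if it uses bits below T_COUNT. */
    static int toy_mask_valid(unsigned long mask)
    {
        return mask < (1UL << T_COUNT);
    }

    int main(void)
    {
        assert(toy_mask_valid((1UL << T_SYM) | (1UL << T_DECOMP)));
        assert(toy_mask_valid(1UL << T_DCC));   /* extended service still fits */
        assert(!toy_mask_valid(1UL << T_COUNT));
        return 0;
    }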
@ -33,5 +38,7 @@ int adf_parse_service_string(struct adf_accel_dev *accel_dev, const char *in,
size_t in_len, char *out, size_t out_len);
int adf_get_service_enabled(struct adf_accel_dev *accel_dev);
int adf_get_service_mask(struct adf_accel_dev *accel_dev, unsigned long *mask);
enum adf_cfg_service_type adf_srv_to_cfg_svc_type(enum adf_base_services svc);
bool adf_is_service_enabled(struct adf_accel_dev *accel_dev, enum adf_base_services svc);
#endif


@ -24,6 +24,7 @@
#define ADF_CY "Cy"
#define ADF_DC "Dc"
#define ADF_CFG_DC "dc"
#define ADF_CFG_DECOMP "decomp"
#define ADF_CFG_CY "sym;asym"
#define ADF_CFG_SYM "sym"
#define ADF_CFG_ASYM "asym"


@ -86,7 +86,6 @@ int adf_ae_stop(struct adf_accel_dev *accel_dev);
extern const struct pci_error_handlers adf_err_handler;
void adf_reset_sbr(struct adf_accel_dev *accel_dev);
void adf_reset_flr(struct adf_accel_dev *accel_dev);
int adf_dev_autoreset(struct adf_accel_dev *accel_dev);
void adf_dev_restore(struct adf_accel_dev *accel_dev);
int adf_init_aer(void);
void adf_exit_aer(void);
@ -189,6 +188,7 @@ void adf_exit_misc_wq(void);
bool adf_misc_wq_queue_work(struct work_struct *work);
bool adf_misc_wq_queue_delayed_work(struct delayed_work *work,
unsigned long delay);
void adf_misc_wq_flush(void);
#if defined(CONFIG_PCI_IOV)
int adf_sriov_configure(struct pci_dev *pdev, int numvfs);
void adf_disable_sriov(struct adf_accel_dev *accel_dev);


@ -1,5 +1,8 @@
// SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0-only)
/* Copyright(c) 2020 Intel Corporation */
#define pr_fmt(fmt) "QAT: " fmt
#include <linux/bitops.h>
#include <linux/iopoll.h>
#include <asm/div64.h>
@ -259,7 +262,10 @@ bool adf_gen4_services_supported(unsigned long mask)
{
unsigned long num_svc = hweight_long(mask);
if (mask >= BIT(SVC_BASE_COUNT))
if (mask >= BIT(SVC_COUNT))
return false;
if (test_bit(SVC_DECOMP, &mask))
return false;
switch (num_svc) {
@ -485,187 +491,6 @@ int adf_gen4_bank_drain_start(struct adf_accel_dev *accel_dev,
return ret;
}
static void bank_state_save(struct adf_hw_csr_ops *ops, void __iomem *base,
u32 bank, struct bank_state *state, u32 num_rings)
{
u32 i;
state->ringstat0 = ops->read_csr_stat(base, bank);
state->ringuostat = ops->read_csr_uo_stat(base, bank);
state->ringestat = ops->read_csr_e_stat(base, bank);
state->ringnestat = ops->read_csr_ne_stat(base, bank);
state->ringnfstat = ops->read_csr_nf_stat(base, bank);
state->ringfstat = ops->read_csr_f_stat(base, bank);
state->ringcstat0 = ops->read_csr_c_stat(base, bank);
state->iaintflagen = ops->read_csr_int_en(base, bank);
state->iaintflagreg = ops->read_csr_int_flag(base, bank);
state->iaintflagsrcsel0 = ops->read_csr_int_srcsel(base, bank);
state->iaintcolen = ops->read_csr_int_col_en(base, bank);
state->iaintcolctl = ops->read_csr_int_col_ctl(base, bank);
state->iaintflagandcolen = ops->read_csr_int_flag_and_col(base, bank);
state->ringexpstat = ops->read_csr_exp_stat(base, bank);
state->ringexpintenable = ops->read_csr_exp_int_en(base, bank);
state->ringsrvarben = ops->read_csr_ring_srv_arb_en(base, bank);
for (i = 0; i < num_rings; i++) {
state->rings[i].head = ops->read_csr_ring_head(base, bank, i);
state->rings[i].tail = ops->read_csr_ring_tail(base, bank, i);
state->rings[i].config = ops->read_csr_ring_config(base, bank, i);
state->rings[i].base = ops->read_csr_ring_base(base, bank, i);
}
}
#define CHECK_STAT(op, expect_val, name, args...) \
({ \
u32 __expect_val = (expect_val); \
u32 actual_val = op(args); \
(__expect_val == actual_val) ? 0 : \
(pr_err("QAT: Fail to restore %s register. Expected 0x%x, actual 0x%x\n", \
name, __expect_val, actual_val), -EINVAL); \
})
static int bank_state_restore(struct adf_hw_csr_ops *ops, void __iomem *base,
u32 bank, struct bank_state *state, u32 num_rings,
int tx_rx_gap)
{
u32 val, tmp_val, i;
int ret;
for (i = 0; i < num_rings; i++)
ops->write_csr_ring_base(base, bank, i, state->rings[i].base);
for (i = 0; i < num_rings; i++)
ops->write_csr_ring_config(base, bank, i, state->rings[i].config);
for (i = 0; i < num_rings / 2; i++) {
int tx = i * (tx_rx_gap + 1);
int rx = tx + tx_rx_gap;
ops->write_csr_ring_head(base, bank, tx, state->rings[tx].head);
ops->write_csr_ring_tail(base, bank, tx, state->rings[tx].tail);
/*
* The TX ring head needs to be updated again to make sure that
* the HW will not consider the ring as full when it is empty
* and the correct state flags are set to match the recovered state.
*/
if (state->ringestat & BIT(tx)) {
val = ops->read_csr_int_srcsel(base, bank);
val |= ADF_RP_INT_SRC_SEL_F_RISE_MASK;
ops->write_csr_int_srcsel_w_val(base, bank, val);
ops->write_csr_ring_head(base, bank, tx, state->rings[tx].head);
}
ops->write_csr_ring_tail(base, bank, rx, state->rings[rx].tail);
val = ops->read_csr_int_srcsel(base, bank);
val |= ADF_RP_INT_SRC_SEL_F_RISE_MASK << ADF_RP_INT_SRC_SEL_RANGE_WIDTH;
ops->write_csr_int_srcsel_w_val(base, bank, val);
ops->write_csr_ring_head(base, bank, rx, state->rings[rx].head);
val = ops->read_csr_int_srcsel(base, bank);
val |= ADF_RP_INT_SRC_SEL_F_FALL_MASK << ADF_RP_INT_SRC_SEL_RANGE_WIDTH;
ops->write_csr_int_srcsel_w_val(base, bank, val);
/*
* The RX ring tail needs to be updated again to make sure that
* the HW will not consider the ring as empty when it is full
* and the correct state flags are set to match the recovered state.
*/
if (state->ringfstat & BIT(rx))
ops->write_csr_ring_tail(base, bank, rx, state->rings[rx].tail);
}
ops->write_csr_int_flag_and_col(base, bank, state->iaintflagandcolen);
ops->write_csr_int_en(base, bank, state->iaintflagen);
ops->write_csr_int_col_en(base, bank, state->iaintcolen);
ops->write_csr_int_srcsel_w_val(base, bank, state->iaintflagsrcsel0);
ops->write_csr_exp_int_en(base, bank, state->ringexpintenable);
ops->write_csr_int_col_ctl(base, bank, state->iaintcolctl);
ops->write_csr_ring_srv_arb_en(base, bank, state->ringsrvarben);
/* Check that all ring statuses match the saved state. */
ret = CHECK_STAT(ops->read_csr_stat, state->ringstat0, "ringstat",
base, bank);
if (ret)
return ret;
ret = CHECK_STAT(ops->read_csr_e_stat, state->ringestat, "ringestat",
base, bank);
if (ret)
return ret;
ret = CHECK_STAT(ops->read_csr_ne_stat, state->ringnestat, "ringnestat",
base, bank);
if (ret)
return ret;
ret = CHECK_STAT(ops->read_csr_nf_stat, state->ringnfstat, "ringnfstat",
base, bank);
if (ret)
return ret;
ret = CHECK_STAT(ops->read_csr_f_stat, state->ringfstat, "ringfstat",
base, bank);
if (ret)
return ret;
ret = CHECK_STAT(ops->read_csr_c_stat, state->ringcstat0, "ringcstat",
base, bank);
if (ret)
return ret;
tmp_val = ops->read_csr_exp_stat(base, bank);
val = state->ringexpstat;
if (tmp_val && !val) {
pr_err("QAT: Bank was restored with exception: 0x%x\n", val);
return -EINVAL;
}
return 0;
}
int adf_gen4_bank_state_save(struct adf_accel_dev *accel_dev, u32 bank_number,
struct bank_state *state)
{
struct adf_hw_device_data *hw_data = GET_HW_DATA(accel_dev);
struct adf_hw_csr_ops *csr_ops = GET_CSR_OPS(accel_dev);
void __iomem *csr_base = adf_get_etr_base(accel_dev);
if (bank_number >= hw_data->num_banks || !state)
return -EINVAL;
dev_dbg(&GET_DEV(accel_dev), "Saving state of bank %d\n", bank_number);
bank_state_save(csr_ops, csr_base, bank_number, state,
hw_data->num_rings_per_bank);
return 0;
}
EXPORT_SYMBOL_GPL(adf_gen4_bank_state_save);
int adf_gen4_bank_state_restore(struct adf_accel_dev *accel_dev, u32 bank_number,
struct bank_state *state)
{
struct adf_hw_device_data *hw_data = GET_HW_DATA(accel_dev);
struct adf_hw_csr_ops *csr_ops = GET_CSR_OPS(accel_dev);
void __iomem *csr_base = adf_get_etr_base(accel_dev);
int ret;
if (bank_number >= hw_data->num_banks || !state)
return -EINVAL;
dev_dbg(&GET_DEV(accel_dev), "Restoring state of bank %d\n", bank_number);
ret = bank_state_restore(csr_ops, csr_base, bank_number, state,
hw_data->num_rings_per_bank, hw_data->tx_rx_gap);
if (ret)
dev_err(&GET_DEV(accel_dev),
"Unable to restore state of bank %d\n", bank_number);
return ret;
}
EXPORT_SYMBOL_GPL(adf_gen4_bank_state_restore);
static int adf_gen4_build_comp_block(void *ctx, enum adf_dc_algo algo)
{
struct icp_qat_fw_comp_req *req_tmpl = ctx;
@ -733,3 +558,43 @@ void adf_gen4_init_dc_ops(struct adf_dc_ops *dc_ops)
dc_ops->build_decomp_block = adf_gen4_build_decomp_block;
}
EXPORT_SYMBOL_GPL(adf_gen4_init_dc_ops);
void adf_gen4_init_num_svc_aes(struct adf_rl_hw_data *device_data)
{
struct adf_hw_device_data *hw_data;
unsigned int i;
u32 ae_cnt;
hw_data = container_of(device_data, struct adf_hw_device_data, rl_data);
ae_cnt = hweight32(hw_data->get_ae_mask(hw_data));
if (!ae_cnt)
return;
for (i = 0; i < SVC_BASE_COUNT; i++)
device_data->svc_ae_mask[i] = ae_cnt - 1;
/*
* The decompression service is not supported on QAT GEN4 devices.
* Therefore, set svc_ae_mask to 0.
*/
device_data->svc_ae_mask[SVC_DECOMP] = 0;
}
EXPORT_SYMBOL_GPL(adf_gen4_init_num_svc_aes);
u32 adf_gen4_get_svc_slice_cnt(struct adf_accel_dev *accel_dev,
enum adf_base_services svc)
{
struct adf_rl_hw_data *device_data = &accel_dev->hw_device->rl_data;
switch (svc) {
case SVC_SYM:
return device_data->slices.cph_cnt;
case SVC_ASYM:
return device_data->slices.pke_cnt;
case SVC_DC:
return device_data->slices.dcpr_cnt;
default:
return 0;
}
}
EXPORT_SYMBOL_GPL(adf_gen4_get_svc_slice_cnt);
