Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Cross-merge networking fixes after downstream PR (net-6.16-rc8).

Conflicts:

drivers/net/ethernet/microsoft/mana/gdma_main.c
  9669ddda18 ("net: mana: Fix warnings for missing export.h header inclusion")
  7553911210 ("net: mana: Allocate MSI-X vectors dynamically")
https://lore.kernel.org/20250711130752.23023d98@canb.auug.org.au

Adjacent changes:

drivers/net/ethernet/ti/icssg/icssg_prueth.h
  6e86fb73de ("net: ti: icssg-prueth: Fix buffer allocation for ICSSG")
  ffe8a49091 ("net: ti: icssg-prueth: Read firmware-names from device tree")

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Commit: 8b5a19b4ff (Jakub Kicinski, 2025-07-17 10:56:56 -07:00)
293 files changed, 2497 insertions(+), 1273 deletions(-)


@ -223,12 +223,6 @@ allOf:
- required:
- pwms
- oneOf:
- required:
- interrupts
- required:
- io-backends
- if:
properties:
compatible:


@ -21,7 +21,7 @@ properties:
vlogic-supply: true
interrupts:
minItems: 1
maxItems: 1
description:
Interrupt mapping for the trigger interrupt from the internal oscillator.


@ -65,7 +65,7 @@ Additional sysfs entries for sq52206
------------------------------------
======================= =======================================================
energy1_input Energy measurement (mJ)
energy1_input Energy measurement (uJ)
power1_input_highest Peak Power (uW)
======================= =======================================================


@ -2008,6 +2008,13 @@ If the KVM_CAP_VM_TSC_CONTROL capability is advertised, this can also
be used as a vm ioctl to set the initial tsc frequency of subsequently
created vCPUs.
For TSC protected Confidential Computing (CoCo) VMs where TSC frequency
is configured once at VM scope and remains unchanged during VM's
lifetime, the vm ioctl should be used to configure the TSC frequency
and the vcpu ioctl is not supported.
Example of such CoCo VMs: TDX guests.
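A minimal userspace sketch, assuming ``kvm_fd`` is an open ``/dev/kvm`` descriptor
and ``vm_fd`` an already-created VM (the helper name and the 2 GHz value are only
illustrative)::

  #include <sys/ioctl.h>
  #include <linux/kvm.h>

  /* Configure the TSC frequency once, at VM scope, before creating vCPUs. */
  static int set_vm_tsc_khz(int vm_fd, unsigned long tsc_khz)
  {
          if (ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_VM_TSC_CONTROL) <= 0)
                  return -1;      /* VM-scope TSC control not advertised */
          return ioctl(vm_fd, KVM_SET_TSC_KHZ, tsc_khz);
  }

  /* e.g. set_vm_tsc_khz(vm_fd, 2000000) for a 2 GHz guest TSC */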
4.56 KVM_GET_TSC_KHZ
--------------------


@ -7,7 +7,7 @@ Review checklist for kvm patches
1. The patch must follow Documentation/process/coding-style.rst and
Documentation/process/submitting-patches.rst.
2. Patches should be against kvm.git master branch.
2. Patches should be against kvm.git master or next branches.
3. If the patch introduces or modifies a new userspace API:
- the API must be documented in Documentation/virt/kvm/api.rst
@ -18,10 +18,10 @@ Review checklist for kvm patches
5. New features must default to off (userspace should explicitly request them).
Performance improvements can and should default to on.
6. New cpu features should be exposed via KVM_GET_SUPPORTED_CPUID2
6. New cpu features should be exposed via KVM_GET_SUPPORTED_CPUID2,
or its equivalent for non-x86 architectures
7. Emulator changes should be accompanied by unit tests for qemu-kvm.git
kvm/test directory.
7. The feature should be testable (see below).
8. Changes should be vendor neutral when possible. Changes to common code
are better than duplicating changes to vendor code.
@ -36,6 +36,87 @@ Review checklist for kvm patches
11. New guest visible features must either be documented in a hardware manual
or be accompanied by documentation.
12. Features must be robust against reset and kexec - for example, shared
host/guest memory must be unshared to prevent the host from writing to
guest memory that the guest has not reserved for this purpose.
Testing of KVM code
-------------------
All features contributed to KVM, and in many cases bugfixes too, should be
accompanied by some kind of tests and/or enablement in open source guests
and VMMs. KVM is covered by multiple test suites:
*Selftests*
These are low level tests that allow granular testing of kernel APIs.
This includes API failure scenarios, invoking APIs after specific
guest instructions, and testing multiple calls to ``KVM_CREATE_VM``
within a single test. They are included in the kernel tree at
``tools/testing/selftests/kvm``.
``kvm-unit-tests``
A collection of small guests that test CPU and emulated device features
from a guest's perspective. They run under QEMU or ``kvmtool``, and
are generally not KVM-specific: they can be run with any accelerator
that QEMU supports or even on bare metal, making it possible to compare
behavior across hypervisors and processor families.
Functional test suites
Various sets of functional tests exist, such as QEMU's ``tests/functional``
suite and `avocado-vt <https://avocado-vt.readthedocs.io/en/latest/>`__.
These typically involve running a full operating system in a virtual
machine.
The best testing approach depends on the feature's complexity and
operation. Here are some examples and guidelines:
New instructions (no new registers or APIs)
The corresponding CPU features (if applicable) should be made available
in QEMU. If the instructions require emulation support or other code in
KVM, it is worth adding coverage to ``kvm-unit-tests`` or selftests;
the latter can be a better choice if the instructions relate to an API
that already has good selftest coverage.
New hardware features (new registers, no new APIs)
These should be tested via ``kvm-unit-tests``; this more or less implies
supporting them in QEMU and/or ``kvmtool``. In some cases selftests
can be used instead, similar to the previous case, or specifically to
test corner cases in guest state save/restore.
Bug fixes and performance improvements
These usually do not introduce new APIs, but it's worth sharing
any benchmarks and tests that will validate your contribution,
ideally in the form of regression tests. Tests and benchmarks
can be included in either ``kvm-unit-tests`` or selftests, depending
on the specifics of your change. Selftests are especially useful for
regression tests because they are included directly in Linux's tree.
Large scale internal changes
While it's difficult to provide a single policy, you should ensure that
the changed code is covered by either ``kvm-unit-tests`` or selftests.
In some cases the affected code is run for any guest and functional
tests suffice. Explain your testing process in the cover letter,
as that can help identify gaps in existing test suites.
New APIs
It is important to demonstrate your use case. This can be as simple as
explaining that the feature is already in use on bare metal, or it can be
a proof-of-concept implementation in userspace. The latter need not be
open source, though that is of course preferable for easier testing.
Selftests should test corner cases of the APIs, and should also cover
basic host and guest operation if no open source VMM uses the feature.
Bigger features, usually spanning host and guest
These should be supported by Linux guests, with limited exceptions for
Hyper-V features that are testable on Windows guests. It is strongly
suggested that the feature be usable with an open source host VMM, such
as at least one of QEMU or crosvm, and guest firmware. Selftests should
test at least API error cases. Guest operation can be covered by
either selftests or ``kvm-unit-tests`` (this is especially important for
paravirtualized and Windows-only features). Strong selftest coverage
can also be a replacement for implementation in an open source VMM,
but this is generally not recommended.
Following the above suggestions for testing in selftests and
``kvm-unit-tests`` will make it easier for the maintainers to review
and accept your code. In fact, even before you contribute your changes
upstream it will make it easier for you to develop for KVM.
Of course, the KVM maintainers reserve the right to require more tests,
though they may also waive the requirement from time to time.


@ -5581,6 +5581,7 @@ T: git git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc.git
F: drivers/char/
F: drivers/misc/
F: include/linux/miscdevice.h
F: rust/kernel/miscdevice.rs
F: samples/rust/rust_misc_device.rs
X: drivers/char/agp/
X: drivers/char/hw_random/
@ -12200,9 +12201,8 @@ F: drivers/dma/idxd/*
F: include/uapi/linux/idxd.h
INTEL IN FIELD SCAN (IFS) DEVICE
M: Jithu Joseph <jithu.joseph@intel.com>
M: Tony Luck <tony.luck@intel.com>
R: Ashok Raj <ashok.raj.linux@gmail.com>
R: Tony Luck <tony.luck@intel.com>
S: Maintained
F: drivers/platform/x86/intel/ifs
F: include/trace/events/intel_ifs.h
@ -12542,8 +12542,7 @@ T: git https://git.kernel.org/pub/scm/linux/kernel/git/iwlwifi/iwlwifi-next.git/
F: drivers/net/wireless/intel/iwlwifi/
INTEL WMI SLIM BOOTLOADER (SBL) FIRMWARE UPDATE DRIVER
M: Jithu Joseph <jithu.joseph@intel.com>
S: Maintained
S: Orphan
W: https://slimbootloader.github.io/security/firmware-update.html
F: drivers/platform/x86/intel/wmi/sbl-fw-update.c
@ -17405,6 +17404,7 @@ F: include/linux/ethtool.h
F: include/linux/framer/framer-provider.h
F: include/linux/framer/framer.h
F: include/linux/in.h
F: include/linux/in6.h
F: include/linux/indirect_call_wrapper.h
F: include/linux/inet.h
F: include/linux/inet_diag.h
@ -25923,6 +25923,8 @@ F: fs/hostfs/
USERSPACE COPYIN/COPYOUT (UIOVEC)
M: Alexander Viro <viro@zeniv.linux.org.uk>
L: linux-block@vger.kernel.org
L: linux-fsdevel@vger.kernel.org
S: Maintained
F: include/linux/uio.h
F: lib/iov_iter.c


@ -2,7 +2,7 @@
VERSION = 6
PATCHLEVEL = 16
SUBLEVEL = 0
EXTRAVERSION = -rc6
EXTRAVERSION = -rc7
NAME = Baby Opossum Posse
# *DOCUMENTATION*


@ -2624,7 +2624,7 @@ static bool access_mdcr(struct kvm_vcpu *vcpu,
*/
if (hpmn > vcpu->kvm->arch.nr_pmu_counters) {
hpmn = vcpu->kvm->arch.nr_pmu_counters;
u64_replace_bits(val, hpmn, MDCR_EL2_HPMN);
u64p_replace_bits(&val, hpmn, MDCR_EL2_HPMN);
}
__vcpu_assign_sys_reg(vcpu, MDCR_EL2, val);


@ -98,6 +98,7 @@ config RISCV
select CLONE_BACKWARDS
select COMMON_CLK
select CPU_PM if CPU_IDLE || HIBERNATION || SUSPEND
select DYNAMIC_FTRACE if FUNCTION_TRACER
select EDAC_SUPPORT
select FRAME_POINTER if PERF_EVENTS || (FUNCTION_TRACER && !DYNAMIC_FTRACE)
select FTRACE_MCOUNT_USE_PATCHABLE_FUNCTION_ENTRY if DYNAMIC_FTRACE
@ -162,7 +163,7 @@ config RISCV
select HAVE_FTRACE_MCOUNT_RECORD if !XIP_KERNEL
select HAVE_FUNCTION_GRAPH_TRACER if HAVE_DYNAMIC_FTRACE_WITH_ARGS
select HAVE_FUNCTION_GRAPH_FREGS
select HAVE_FUNCTION_TRACER if !XIP_KERNEL
select HAVE_FUNCTION_TRACER if !XIP_KERNEL && HAVE_DYNAMIC_FTRACE
select HAVE_EBPF_JIT if MMU
select HAVE_GUP_FAST if MMU
select HAVE_FUNCTION_ARG_ACCESS_API


@ -87,6 +87,9 @@ DECLARE_STATIC_KEY_FALSE(kvm_riscv_aia_available);
extern struct kvm_device_ops kvm_riscv_aia_device_ops;
bool kvm_riscv_vcpu_aia_imsic_has_interrupt(struct kvm_vcpu *vcpu);
void kvm_riscv_vcpu_aia_imsic_load(struct kvm_vcpu *vcpu, int cpu);
void kvm_riscv_vcpu_aia_imsic_put(struct kvm_vcpu *vcpu);
void kvm_riscv_vcpu_aia_imsic_release(struct kvm_vcpu *vcpu);
int kvm_riscv_vcpu_aia_imsic_update(struct kvm_vcpu *vcpu);
@ -161,7 +164,6 @@ void kvm_riscv_aia_destroy_vm(struct kvm *kvm);
int kvm_riscv_aia_alloc_hgei(int cpu, struct kvm_vcpu *owner,
void __iomem **hgei_va, phys_addr_t *hgei_pa);
void kvm_riscv_aia_free_hgei(int cpu, int hgei);
void kvm_riscv_aia_wakeon_hgei(struct kvm_vcpu *owner, bool enable);
void kvm_riscv_aia_enable(void);
void kvm_riscv_aia_disable(void);


@ -306,6 +306,9 @@ static inline bool kvm_arch_pmi_in_guest(struct kvm_vcpu *vcpu)
return IS_ENABLED(CONFIG_GUEST_PERF_EVENTS) && !!vcpu;
}
static inline void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu) {}
static inline void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu) {}
#define KVM_RISCV_GSTAGE_TLB_MIN_ORDER 12
void kvm_riscv_local_hfence_gvma_vmid_gpa(unsigned long vmid,


@ -311,8 +311,8 @@ do { \
do { \
if (!IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) && \
!IS_ALIGNED((uintptr_t)__gu_ptr, sizeof(*__gu_ptr))) { \
__inttype(x) val = (__inttype(x))x; \
if (__asm_copy_to_user_sum_enabled(__gu_ptr, &(val), sizeof(*__gu_ptr))) \
__inttype(x) ___val = (__inttype(x))x; \
if (__asm_copy_to_user_sum_enabled(__gu_ptr, &(___val), sizeof(*__gu_ptr))) \
goto label; \
break; \
} \


@ -14,6 +14,18 @@
#include <asm/text-patching.h>
#ifdef CONFIG_DYNAMIC_FTRACE
void ftrace_arch_code_modify_prepare(void)
__acquires(&text_mutex)
{
mutex_lock(&text_mutex);
}
void ftrace_arch_code_modify_post_process(void)
__releases(&text_mutex)
{
mutex_unlock(&text_mutex);
}
unsigned long ftrace_call_adjust(unsigned long addr)
{
if (IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_CALL_OPS))
@ -29,10 +41,8 @@ unsigned long arch_ftrace_get_symaddr(unsigned long fentry_ip)
void arch_ftrace_update_code(int command)
{
mutex_lock(&text_mutex);
command |= FTRACE_MAY_SLEEP;
ftrace_modify_all_code(command);
mutex_unlock(&text_mutex);
flush_icache_all();
}
@ -149,6 +159,8 @@ int ftrace_init_nop(struct module *mod, struct dyn_ftrace *rec)
unsigned int nops[2], offset;
int ret;
guard(mutex)(&text_mutex);
ret = ftrace_rec_set_nop_ops(rec);
if (ret)
return ret;
@ -157,9 +169,7 @@ int ftrace_init_nop(struct module *mod, struct dyn_ftrace *rec)
nops[0] = to_auipc_t0(offset);
nops[1] = RISCV_INSN_NOP4;
mutex_lock(&text_mutex);
ret = patch_insn_write((void *)pc, nops, 2 * MCOUNT_INSN_SIZE);
mutex_unlock(&text_mutex);
return ret;
}


@ -6,6 +6,7 @@
#include <linux/cpu.h>
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/irqflags.h>
#include <linux/randomize_kstack.h>
#include <linux/sched.h>
#include <linux/sched/debug.h>
@ -151,7 +152,9 @@ asmlinkage __visible __trap_section void name(struct pt_regs *regs) \
{ \
if (user_mode(regs)) { \
irqentry_enter_from_user_mode(regs); \
local_irq_enable(); \
do_trap_error(regs, signo, code, regs->epc, "Oops - " str); \
local_irq_disable(); \
irqentry_exit_to_user_mode(regs); \
} else { \
irqentry_state_t state = irqentry_nmi_enter(regs); \
@ -173,17 +176,14 @@ asmlinkage __visible __trap_section void do_trap_insn_illegal(struct pt_regs *re
if (user_mode(regs)) {
irqentry_enter_from_user_mode(regs);
local_irq_enable();
handled = riscv_v_first_use_handler(regs);
local_irq_disable();
if (!handled)
do_trap_error(regs, SIGILL, ILL_ILLOPC, regs->epc,
"Oops - illegal instruction");
local_irq_disable();
irqentry_exit_to_user_mode(regs);
} else {
irqentry_state_t state = irqentry_nmi_enter(regs);
@ -308,9 +308,11 @@ asmlinkage __visible __trap_section void do_trap_break(struct pt_regs *regs)
{
if (user_mode(regs)) {
irqentry_enter_from_user_mode(regs);
local_irq_enable();
handle_break(regs);
local_irq_disable();
irqentry_exit_to_user_mode(regs);
} else {
irqentry_state_t state = irqentry_nmi_enter(regs);


@ -461,7 +461,7 @@ static int handle_scalar_misaligned_load(struct pt_regs *regs)
}
if (!fp)
SET_RD(insn, regs, val.data_ulong << shift >> shift);
SET_RD(insn, regs, (long)(val.data_ulong << shift) >> shift);
else if (len == 8)
set_f64_rd(insn, regs, val.data_u64);
else


@ -30,28 +30,6 @@ unsigned int kvm_riscv_aia_nr_hgei;
unsigned int kvm_riscv_aia_max_ids;
DEFINE_STATIC_KEY_FALSE(kvm_riscv_aia_available);
static int aia_find_hgei(struct kvm_vcpu *owner)
{
int i, hgei;
unsigned long flags;
struct aia_hgei_control *hgctrl = get_cpu_ptr(&aia_hgei);
raw_spin_lock_irqsave(&hgctrl->lock, flags);
hgei = -1;
for (i = 1; i <= kvm_riscv_aia_nr_hgei; i++) {
if (hgctrl->owners[i] == owner) {
hgei = i;
break;
}
}
raw_spin_unlock_irqrestore(&hgctrl->lock, flags);
put_cpu_ptr(&aia_hgei);
return hgei;
}
static inline unsigned long aia_hvictl_value(bool ext_irq_pending)
{
unsigned long hvictl;
@ -95,7 +73,6 @@ void kvm_riscv_vcpu_aia_sync_interrupts(struct kvm_vcpu *vcpu)
bool kvm_riscv_vcpu_aia_has_interrupts(struct kvm_vcpu *vcpu, u64 mask)
{
int hgei;
unsigned long seip;
if (!kvm_riscv_aia_available())
@ -114,11 +91,7 @@ bool kvm_riscv_vcpu_aia_has_interrupts(struct kvm_vcpu *vcpu, u64 mask)
if (!kvm_riscv_aia_initialized(vcpu->kvm) || !seip)
return false;
hgei = aia_find_hgei(vcpu);
if (hgei > 0)
return !!(ncsr_read(CSR_HGEIP) & BIT(hgei));
return false;
return kvm_riscv_vcpu_aia_imsic_has_interrupt(vcpu);
}
void kvm_riscv_vcpu_aia_update_hvip(struct kvm_vcpu *vcpu)
@ -164,6 +137,9 @@ void kvm_riscv_vcpu_aia_load(struct kvm_vcpu *vcpu, int cpu)
csr_write(CSR_HVIPRIO2H, csr->hviprio2h);
#endif
}
if (kvm_riscv_aia_initialized(vcpu->kvm))
kvm_riscv_vcpu_aia_imsic_load(vcpu, cpu);
}
void kvm_riscv_vcpu_aia_put(struct kvm_vcpu *vcpu)
@ -174,6 +150,9 @@ void kvm_riscv_vcpu_aia_put(struct kvm_vcpu *vcpu)
if (!kvm_riscv_aia_available())
return;
if (kvm_riscv_aia_initialized(vcpu->kvm))
kvm_riscv_vcpu_aia_imsic_put(vcpu);
if (kvm_riscv_nacl_available()) {
nsh = nacl_shmem();
csr->vsiselect = nacl_csr_read(nsh, CSR_VSISELECT);
@ -472,22 +451,6 @@ void kvm_riscv_aia_free_hgei(int cpu, int hgei)
raw_spin_unlock_irqrestore(&hgctrl->lock, flags);
}
void kvm_riscv_aia_wakeon_hgei(struct kvm_vcpu *owner, bool enable)
{
int hgei;
if (!kvm_riscv_aia_available())
return;
hgei = aia_find_hgei(owner);
if (hgei > 0) {
if (enable)
csr_set(CSR_HGEIE, BIT(hgei));
else
csr_clear(CSR_HGEIE, BIT(hgei));
}
}
static irqreturn_t hgei_interrupt(int irq, void *dev_id)
{
int i;


@ -676,6 +676,48 @@ static void imsic_swfile_update(struct kvm_vcpu *vcpu,
imsic_swfile_extirq_update(vcpu);
}
bool kvm_riscv_vcpu_aia_imsic_has_interrupt(struct kvm_vcpu *vcpu)
{
struct imsic *imsic = vcpu->arch.aia_context.imsic_state;
unsigned long flags;
bool ret = false;
/*
* The IMSIC SW-file directly injects interrupt via hvip so
* only check for interrupt when IMSIC VS-file is being used.
*/
read_lock_irqsave(&imsic->vsfile_lock, flags);
if (imsic->vsfile_cpu > -1)
ret = !!(csr_read(CSR_HGEIP) & BIT(imsic->vsfile_hgei));
read_unlock_irqrestore(&imsic->vsfile_lock, flags);
return ret;
}
void kvm_riscv_vcpu_aia_imsic_load(struct kvm_vcpu *vcpu, int cpu)
{
/*
* No need to explicitly clear HGEIE CSR bits because the
* hgei interrupt handler (aka hgei_interrupt()) will always
* clear it for us.
*/
}
void kvm_riscv_vcpu_aia_imsic_put(struct kvm_vcpu *vcpu)
{
struct imsic *imsic = vcpu->arch.aia_context.imsic_state;
unsigned long flags;
if (!kvm_vcpu_is_blocking(vcpu))
return;
read_lock_irqsave(&imsic->vsfile_lock, flags);
if (imsic->vsfile_cpu > -1)
csr_set(CSR_HGEIE, BIT(imsic->vsfile_hgei));
read_unlock_irqrestore(&imsic->vsfile_lock, flags);
}
void kvm_riscv_vcpu_aia_imsic_release(struct kvm_vcpu *vcpu)
{
unsigned long flags;
@ -781,6 +823,9 @@ int kvm_riscv_vcpu_aia_imsic_update(struct kvm_vcpu *vcpu)
* producers to the new IMSIC VS-file.
*/
/* Ensure HGEIE CSR bit is zero before using the new IMSIC VS-file */
csr_clear(CSR_HGEIE, BIT(new_vsfile_hgei));
/* Zero-out new IMSIC VS-file */
imsic_vsfile_local_clear(new_vsfile_hgei, imsic->nr_hw_eix);


@ -207,16 +207,6 @@ int kvm_cpu_has_pending_timer(struct kvm_vcpu *vcpu)
return kvm_riscv_vcpu_timer_pending(vcpu);
}
void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu)
{
kvm_riscv_aia_wakeon_hgei(vcpu, true);
}
void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu)
{
kvm_riscv_aia_wakeon_hgei(vcpu, false);
}
int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu)
{
return (kvm_riscv_vcpu_has_interrupts(vcpu, -1UL) &&


@ -345,8 +345,24 @@ void kvm_riscv_vcpu_timer_save(struct kvm_vcpu *vcpu)
/*
* The vstimecmp CSRs are saved by kvm_riscv_vcpu_timer_sync()
* upon every VM exit so no need to save here.
*
* If the VS-timer expires when no VCPU is running on a host CPU, then
* a WFI executed by that host CPU will be an effective NOP, resulting
* in no power savings. This is because, as per the RISC-V Privileged
* specification: "WFI is also required to resume execution for
* locally enabled interrupts pending at any privilege level,
* regardless of the global interrupt enable at each privilege
* level."
*
* To address the above issue, the vstimecmp CSR must be set to -1UL
* here when the VCPU is scheduled out or exits to user space.
*/
csr_write(CSR_VSTIMECMP, -1UL);
#if defined(CONFIG_32BIT)
csr_write(CSR_VSTIMECMPH, -1UL);
#endif
/* timer should be enabled for the remaining operations */
if (unlikely(!t->init_done))
return;


@ -14,7 +14,9 @@ bad_relocs=$(
${srctree}/scripts/relocs_check.sh "$@" |
# These relocations are okay
# R_RISCV_RELATIVE
grep -F -w -v 'R_RISCV_RELATIVE'
# R_RISCV_NONE
grep -F -w -v 'R_RISCV_RELATIVE
R_RISCV_NONE'
)
if [ -z "$bad_relocs" ]; then


@ -566,7 +566,15 @@ static void bpf_jit_plt(struct bpf_plt *plt, void *ret, void *target)
{
memcpy(plt, &bpf_plt, sizeof(*plt));
plt->ret = ret;
plt->target = target;
/*
* (target == NULL) implies that the branch to this PLT entry was
* patched and became a no-op. However, some CPU could have jumped
* to this PLT entry before patching and may be still executing it.
*
* Since the intention in this case is to make the PLT entry a no-op,
* make the target point to the return label instead of NULL.
*/
plt->target = target ?: ret;
}
/*


@ -5,5 +5,6 @@ obj-y += core.o sev-nmi.o vc-handle.o
# Clang 14 and older may fail to respect __no_sanitize_undefined when inlining
UBSAN_SANITIZE_sev-nmi.o := n
# GCC may fail to respect __no_sanitize_address when inlining
# GCC may fail to respect __no_sanitize_address or __no_kcsan when inlining
KASAN_SANITIZE_sev-nmi.o := n
KCSAN_SANITIZE_sev-nmi.o := n


@ -34,6 +34,7 @@
#include <linux/syscore_ops.h>
#include <clocksource/hyperv_timer.h>
#include <linux/highmem.h>
#include <linux/export.h>
void *hv_hypercall_pg;
EXPORT_SYMBOL_GPL(hv_hypercall_pg);


@ -10,6 +10,7 @@
#include <linux/pci.h>
#include <linux/irq.h>
#include <linux/export.h>
#include <asm/mshyperv.h>
static int hv_map_interrupt(union hv_device_id device_id, bool level,
@ -46,7 +47,7 @@ static int hv_map_interrupt(union hv_device_id device_id, bool level,
if (nr_bank < 0) {
local_irq_restore(flags);
pr_err("%s: unable to generate VP set\n", __func__);
return EINVAL;
return -EINVAL;
}
intr_desc->target.flags = HV_DEVICE_INTERRUPT_TARGET_PROCESSOR_SET;
@ -66,7 +67,7 @@ static int hv_map_interrupt(union hv_device_id device_id, bool level,
if (!hv_result_success(status))
hv_status_err(status, "\n");
return hv_result(status);
return hv_result_to_errno(status);
}
static int hv_unmap_interrupt(u64 id, struct hv_interrupt_entry *old_entry)
@ -88,7 +89,10 @@ static int hv_unmap_interrupt(u64 id, struct hv_interrupt_entry *old_entry)
status = hv_do_hypercall(HVCALL_UNMAP_DEVICE_INTERRUPT, input, NULL);
local_irq_restore(flags);
return hv_result(status);
if (!hv_result_success(status))
hv_status_err(status, "\n");
return hv_result_to_errno(status);
}
#ifdef CONFIG_PCI_MSI
@ -169,13 +173,34 @@ static union hv_device_id hv_build_pci_dev_id(struct pci_dev *dev)
return dev_id;
}
static int hv_map_msi_interrupt(struct pci_dev *dev, int cpu, int vector,
struct hv_interrupt_entry *entry)
/**
* hv_map_msi_interrupt() - "Map" the MSI IRQ in the hypervisor.
* @data: Describes the IRQ
* @out_entry: Hypervisor (MSI) interrupt entry (can be NULL)
*
* Map the IRQ in the hypervisor by issuing a MAP_DEVICE_INTERRUPT hypercall.
*
* Return: 0 on success, -errno on failure
*/
int hv_map_msi_interrupt(struct irq_data *data,
struct hv_interrupt_entry *out_entry)
{
union hv_device_id device_id = hv_build_pci_dev_id(dev);
struct irq_cfg *cfg = irqd_cfg(data);
struct hv_interrupt_entry dummy;
union hv_device_id device_id;
struct msi_desc *msidesc;
struct pci_dev *dev;
int cpu;
return hv_map_interrupt(device_id, false, cpu, vector, entry);
msidesc = irq_data_get_msi_desc(data);
dev = msi_desc_to_pci_dev(msidesc);
device_id = hv_build_pci_dev_id(dev);
cpu = cpumask_first(irq_data_get_effective_affinity_mask(data));
return hv_map_interrupt(device_id, false, cpu, cfg->vector,
out_entry ? out_entry : &dummy);
}
EXPORT_SYMBOL_GPL(hv_map_msi_interrupt);
static inline void entry_to_msi_msg(struct hv_interrupt_entry *entry, struct msi_msg *msg)
{
@ -188,13 +213,11 @@ static inline void entry_to_msi_msg(struct hv_interrupt_entry *entry, struct msi
static int hv_unmap_msi_interrupt(struct pci_dev *dev, struct hv_interrupt_entry *old_entry);
static void hv_irq_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
{
struct hv_interrupt_entry *stored_entry;
struct irq_cfg *cfg = irqd_cfg(data);
struct msi_desc *msidesc;
struct pci_dev *dev;
struct hv_interrupt_entry out_entry, *stored_entry;
struct irq_cfg *cfg = irqd_cfg(data);
const cpumask_t *affinity;
int cpu;
u64 status;
int ret;
msidesc = irq_data_get_msi_desc(data);
dev = msi_desc_to_pci_dev(msidesc);
@ -204,9 +227,6 @@ static void hv_irq_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
return;
}
affinity = irq_data_get_effective_affinity_mask(data);
cpu = cpumask_first_and(affinity, cpu_online_mask);
if (data->chip_data) {
/*
* This interrupt is already mapped. Let's unmap first.
@ -219,15 +239,13 @@ static void hv_irq_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
stored_entry = data->chip_data;
data->chip_data = NULL;
status = hv_unmap_msi_interrupt(dev, stored_entry);
ret = hv_unmap_msi_interrupt(dev, stored_entry);
kfree(stored_entry);
if (status != HV_STATUS_SUCCESS) {
hv_status_debug(status, "failed to unmap\n");
if (ret)
return;
}
}
stored_entry = kzalloc(sizeof(*stored_entry), GFP_ATOMIC);
if (!stored_entry) {
@ -235,15 +253,14 @@ static void hv_irq_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
return;
}
status = hv_map_msi_interrupt(dev, cpu, cfg->vector, &out_entry);
if (status != HV_STATUS_SUCCESS) {
ret = hv_map_msi_interrupt(data, stored_entry);
if (ret) {
kfree(stored_entry);
return;
}
*stored_entry = out_entry;
data->chip_data = stored_entry;
entry_to_msi_msg(&out_entry, msg);
entry_to_msi_msg(data->chip_data, msg);
return;
}
@ -257,7 +274,6 @@ static void hv_teardown_msi_irq(struct pci_dev *dev, struct irq_data *irqd)
{
struct hv_interrupt_entry old_entry;
struct msi_msg msg;
u64 status;
if (!irqd->chip_data) {
pr_debug("%s: no chip data\n!", __func__);
@ -270,10 +286,7 @@ static void hv_teardown_msi_irq(struct pci_dev *dev, struct irq_data *irqd)
kfree(irqd->chip_data);
irqd->chip_data = NULL;
status = hv_unmap_msi_interrupt(dev, &old_entry);
if (status != HV_STATUS_SUCCESS)
hv_status_err(status, "\n");
(void)hv_unmap_msi_interrupt(dev, &old_entry);
}
static void hv_msi_free_irq(struct irq_domain *domain,


@ -10,6 +10,7 @@
#include <linux/types.h>
#include <linux/slab.h>
#include <linux/cpu.h>
#include <linux/export.h>
#include <asm/svm.h>
#include <asm/sev.h>
#include <asm/io.h>


@ -11,6 +11,7 @@
#include <linux/types.h>
#include <linux/export.h>
#include <hyperv/hvhdk.h>
#include <asm/mshyperv.h>
#include <asm/tlbflush.h>


@ -112,12 +112,6 @@ static inline u64 hv_do_hypercall(u64 control, void *input, void *output)
return hv_status;
}
/* Hypercall to the L0 hypervisor */
static inline u64 hv_do_nested_hypercall(u64 control, void *input, void *output)
{
return hv_do_hypercall(control | HV_HYPERCALL_NESTED, input, output);
}
/* Fast hypercall with 8 bytes of input and no output */
static inline u64 _hv_do_fast_hypercall8(u64 control, u64 input1)
{
@ -165,13 +159,6 @@ static inline u64 hv_do_fast_hypercall8(u16 code, u64 input1)
return _hv_do_fast_hypercall8(control, input1);
}
static inline u64 hv_do_fast_nested_hypercall8(u16 code, u64 input1)
{
u64 control = (u64)code | HV_HYPERCALL_FAST_BIT | HV_HYPERCALL_NESTED;
return _hv_do_fast_hypercall8(control, input1);
}
/* Fast hypercall with 16 bytes of input */
static inline u64 _hv_do_fast_hypercall16(u64 control, u64 input1, u64 input2)
{
@ -223,13 +210,6 @@ static inline u64 hv_do_fast_hypercall16(u16 code, u64 input1, u64 input2)
return _hv_do_fast_hypercall16(control, input1, input2);
}
static inline u64 hv_do_fast_nested_hypercall16(u16 code, u64 input1, u64 input2)
{
u64 control = (u64)code | HV_HYPERCALL_FAST_BIT | HV_HYPERCALL_NESTED;
return _hv_do_fast_hypercall16(control, input1, input2);
}
extern struct hv_vp_assist_page **hv_vp_assist_page;
static inline struct hv_vp_assist_page *hv_get_vp_assist_page(unsigned int cpu)
@ -262,6 +242,8 @@ static inline void hv_apic_init(void) {}
struct irq_domain *hv_create_pci_msi_domain(void);
int hv_map_msi_interrupt(struct irq_data *data,
struct hv_interrupt_entry *out_entry);
int hv_map_ioapic_interrupt(int ioapic_id, bool level, int vcpu, int vector,
struct hv_interrupt_entry *entry);
int hv_unmap_ioapic_interrupt(int ioapic_id, struct hv_interrupt_entry *entry);


@ -173,7 +173,6 @@ static void td_init_cpuid_entry2(struct kvm_cpuid_entry2 *entry, unsigned char i
tdx_clear_unsupported_cpuid(entry);
}
#define TDVMCALLINFO_GET_QUOTE BIT(0)
#define TDVMCALLINFO_SETUP_EVENT_NOTIFY_INTERRUPT BIT(1)
static int init_kvm_tdx_caps(const struct tdx_sys_info_td_conf *td_conf,
@ -192,7 +191,6 @@ static int init_kvm_tdx_caps(const struct tdx_sys_info_td_conf *td_conf,
caps->cpuid.nent = td_conf->num_cpuid_config;
caps->user_tdvmcallinfo_1_r11 =
TDVMCALLINFO_GET_QUOTE |
TDVMCALLINFO_SETUP_EVENT_NOTIFY_INTERRUPT;
for (i = 0; i < td_conf->num_cpuid_config; i++)
@ -2271,25 +2269,26 @@ static int tdx_get_capabilities(struct kvm_tdx_cmd *cmd)
const struct tdx_sys_info_td_conf *td_conf = &tdx_sysinfo->td_conf;
struct kvm_tdx_capabilities __user *user_caps;
struct kvm_tdx_capabilities *caps = NULL;
u32 nr_user_entries;
int ret = 0;
/* flags is reserved for future use */
if (cmd->flags)
return -EINVAL;
caps = kmalloc(sizeof(*caps) +
caps = kzalloc(sizeof(*caps) +
sizeof(struct kvm_cpuid_entry2) * td_conf->num_cpuid_config,
GFP_KERNEL);
if (!caps)
return -ENOMEM;
user_caps = u64_to_user_ptr(cmd->data);
if (copy_from_user(caps, user_caps, sizeof(*caps))) {
if (get_user(nr_user_entries, &user_caps->cpuid.nent)) {
ret = -EFAULT;
goto out;
}
if (caps->cpuid.nent < td_conf->num_cpuid_config) {
if (nr_user_entries < td_conf->num_cpuid_config) {
ret = -E2BIG;
goto out;
}


@ -6188,6 +6188,10 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
u32 user_tsc_khz;
r = -EINVAL;
if (vcpu->arch.guest_tsc_protected)
goto out;
user_tsc_khz = (u32)arg;
if (kvm_caps.has_tsc_control &&


@ -1526,7 +1526,7 @@ static bool kvm_xen_schedop_poll(struct kvm_vcpu *vcpu, bool longmode,
if (kvm_read_guest_virt(vcpu, (gva_t)sched_poll.ports, ports,
sched_poll.nr_ports * sizeof(*ports), &e)) {
*r = -EFAULT;
return true;
goto out;
}
for (i = 0; i < sched_poll.nr_ports; i++) {


@ -960,4 +960,5 @@ void blk_unregister_queue(struct gendisk *disk)
elevator_set_none(q);
blk_debugfs_remove(disk);
kobject_put(&disk->queue_kobj);
}


@ -37,10 +37,8 @@ static int __init sbi_cppc_init(void)
{
if (sbi_spec_version >= sbi_mk_version(2, 0) &&
sbi_probe_extension(SBI_EXT_CPPC) > 0) {
pr_info("SBI CPPC extension detected\n");
cppc_ext_present = true;
} else {
pr_info("SBI CPPC extension NOT detected!!\n");
cppc_ext_present = false;
}


@ -1173,6 +1173,8 @@ err_name:
err_map:
kfree(map);
err:
if (bus && bus->free_on_exit)
kfree(bus);
return ERR_PTR(ret);
}
EXPORT_SYMBOL_GPL(__regmap_init);


@ -308,14 +308,13 @@ end_io:
static void lo_rw_aio_do_completion(struct loop_cmd *cmd)
{
struct request *rq = blk_mq_rq_from_pdu(cmd);
struct loop_device *lo = rq->q->queuedata;
if (!atomic_dec_and_test(&cmd->ref))
return;
kfree(cmd->bvec);
cmd->bvec = NULL;
if (req_op(rq) == REQ_OP_WRITE)
file_end_write(lo->lo_backing_file);
kiocb_end_write(&cmd->iocb);
if (likely(!blk_should_fake_timeout(rq->q)))
blk_mq_complete_request(rq);
}
@ -391,7 +390,7 @@ static int lo_rw_aio(struct loop_device *lo, struct loop_cmd *cmd,
}
if (rw == ITER_SOURCE) {
file_start_write(lo->lo_backing_file);
kiocb_start_write(&cmd->iocb);
ret = file->f_op->write_iter(&cmd->iocb, &iter);
} else
ret = file->f_op->read_iter(&cmd->iocb, &iter);


@ -943,6 +943,7 @@ struct fsl_mc_device *fsl_mc_get_endpoint(struct fsl_mc_device *mc_dev,
struct fsl_mc_obj_desc endpoint_desc = {{ 0 }};
struct dprc_endpoint endpoint1 = {{ 0 }};
struct dprc_endpoint endpoint2 = {{ 0 }};
struct fsl_mc_bus *mc_bus;
int state, err;
mc_bus_dev = to_fsl_mc_device(mc_dev->dev.parent);
@ -966,6 +967,8 @@ struct fsl_mc_device *fsl_mc_get_endpoint(struct fsl_mc_device *mc_dev,
strcpy(endpoint_desc.type, endpoint2.type);
endpoint_desc.id = endpoint2.id;
endpoint = fsl_mc_device_lookup(&endpoint_desc, mc_bus_dev);
if (endpoint)
return endpoint;
/*
* We know that the device has an endpoint because we verified by
@ -973,17 +976,13 @@ struct fsl_mc_device *fsl_mc_get_endpoint(struct fsl_mc_device *mc_dev,
* yet discovered by the fsl-mc bus, thus the lookup returned NULL.
* Force a rescan of the devices in this container and retry the lookup.
*/
if (!endpoint) {
struct fsl_mc_bus *mc_bus = to_fsl_mc_bus(mc_bus_dev);
mc_bus = to_fsl_mc_bus(mc_bus_dev);
if (mutex_trylock(&mc_bus->scan_mutex)) {
err = dprc_scan_objects(mc_bus_dev, true);
mutex_unlock(&mc_bus->scan_mutex);
}
if (err < 0)
return ERR_PTR(err);
}
endpoint = fsl_mc_device_lookup(&endpoint_desc, mc_bus_dev);
/*


@ -22,6 +22,7 @@
#include <linux/irq.h>
#include <linux/acpi.h>
#include <linux/hyperv.h>
#include <linux/export.h>
#include <clocksource/hyperv_timer.h>
#include <hyperv/hvhdk.h>
#include <asm/mshyperv.h>


@ -1556,21 +1556,27 @@ static int do_insnlist_ioctl(struct comedi_device *dev,
}
for (i = 0; i < n_insns; ++i) {
unsigned int n = insns[i].n;
if (insns[i].insn & INSN_MASK_WRITE) {
if (copy_from_user(data, insns[i].data,
insns[i].n * sizeof(unsigned int))) {
n * sizeof(unsigned int))) {
dev_dbg(dev->class_dev,
"copy_from_user failed\n");
ret = -EFAULT;
goto error;
}
if (n < MIN_SAMPLES) {
memset(&data[n], 0, (MIN_SAMPLES - n) *
sizeof(unsigned int));
}
}
ret = parse_insn(dev, insns + i, data, file);
if (ret < 0)
goto error;
if (insns[i].insn & INSN_MASK_READ) {
if (copy_to_user(insns[i].data, data,
insns[i].n * sizeof(unsigned int))) {
n * sizeof(unsigned int))) {
dev_dbg(dev->class_dev,
"copy_to_user failed\n");
ret = -EFAULT;
@ -1589,6 +1595,16 @@ error:
return i;
}
#define MAX_INSNS MAX_SAMPLES
static int check_insnlist_len(struct comedi_device *dev, unsigned int n_insns)
{
if (n_insns > MAX_INSNS) {
dev_dbg(dev->class_dev, "insnlist length too large\n");
return -EINVAL;
}
return 0;
}
/*
* COMEDI_INSN ioctl
* synchronous instruction
@ -1633,6 +1649,10 @@ static int do_insn_ioctl(struct comedi_device *dev,
ret = -EFAULT;
goto error;
}
if (insn->n < MIN_SAMPLES) {
memset(&data[insn->n], 0,
(MIN_SAMPLES - insn->n) * sizeof(unsigned int));
}
}
ret = parse_insn(dev, insn, data, file);
if (ret < 0)
@ -2239,6 +2259,9 @@ static long comedi_unlocked_ioctl(struct file *file, unsigned int cmd,
rc = -EFAULT;
break;
}
rc = check_insnlist_len(dev, insnlist.n_insns);
if (rc)
break;
insns = kcalloc(insnlist.n_insns, sizeof(*insns), GFP_KERNEL);
if (!insns) {
rc = -ENOMEM;
@ -3142,6 +3165,9 @@ static int compat_insnlist(struct file *file, unsigned long arg)
if (copy_from_user(&insnlist32, compat_ptr(arg), sizeof(insnlist32)))
return -EFAULT;
rc = check_insnlist_len(dev, insnlist32.n_insns);
if (rc)
return rc;
insns = kcalloc(insnlist32.n_insns, sizeof(*insns), GFP_KERNEL);
if (!insns)
return -ENOMEM;


@ -339,10 +339,10 @@ int comedi_dio_insn_config(struct comedi_device *dev,
unsigned int *data,
unsigned int mask)
{
unsigned int chan_mask = 1 << CR_CHAN(insn->chanspec);
unsigned int chan = CR_CHAN(insn->chanspec);
if (!mask)
mask = chan_mask;
if (!mask && chan < 32)
mask = 1U << chan;
switch (data[0]) {
case INSN_CONFIG_DIO_INPUT:
@ -382,7 +382,7 @@ EXPORT_SYMBOL_GPL(comedi_dio_insn_config);
unsigned int comedi_dio_update_state(struct comedi_subdevice *s,
unsigned int *data)
{
unsigned int chanmask = (s->n_chan < 32) ? ((1 << s->n_chan) - 1)
unsigned int chanmask = (s->n_chan < 32) ? ((1U << s->n_chan) - 1)
: 0xffffffff;
unsigned int mask = data[0] & chanmask;
unsigned int bits = data[1];
@ -615,6 +615,9 @@ static int insn_rw_emulate_bits(struct comedi_device *dev,
unsigned int _data[2];
int ret;
if (insn->n == 0)
return 0;
memset(_data, 0, sizeof(_data));
memset(&_insn, 0, sizeof(_insn));
_insn.insn = INSN_BITS;
@ -625,8 +628,8 @@ static int insn_rw_emulate_bits(struct comedi_device *dev,
if (insn->insn == INSN_WRITE) {
if (!(s->subdev_flags & SDF_WRITABLE))
return -EINVAL;
_data[0] = 1 << (chan - base_chan); /* mask */
_data[1] = data[0] ? (1 << (chan - base_chan)) : 0; /* bits */
_data[0] = 1U << (chan - base_chan); /* mask */
_data[1] = data[0] ? (1U << (chan - base_chan)) : 0; /* bits */
}
ret = s->insn_bits(dev, s, &_insn, _data);
@ -709,7 +712,7 @@ static int __comedi_device_postconfig(struct comedi_device *dev)
if (s->type == COMEDI_SUBD_DO) {
if (s->n_chan < 32)
s->io_bits = (1 << s->n_chan) - 1;
s->io_bits = (1U << s->n_chan) - 1;
else
s->io_bits = 0xffffffff;
}


@ -177,7 +177,8 @@ static int aio_iiro_16_attach(struct comedi_device *dev,
* Digital input change of state interrupts are optionally supported
* using IRQ 2-7, 10-12, 14, or 15.
*/
if ((1 << it->options[1]) & 0xdcfc) {
if (it->options[1] > 0 && it->options[1] < 16 &&
(1 << it->options[1]) & 0xdcfc) {
ret = request_irq(it->options[1], aio_iiro_16_cos, 0,
dev->board_name, dev);
if (ret == 0)


@ -792,7 +792,7 @@ static void waveform_detach(struct comedi_device *dev)
{
struct waveform_private *devpriv = dev->private;
if (devpriv) {
if (devpriv && dev->n_subdevices) {
timer_delete_sync(&devpriv->ai_timer);
timer_delete_sync(&devpriv->ao_timer);
}


@ -522,7 +522,8 @@ static int das16m1_attach(struct comedi_device *dev,
devpriv->extra_iobase = dev->iobase + DAS16M1_8255_IOBASE;
/* only irqs 2, 3, 4, 5, 6, 7, 10, 11, 12, 14, and 15 are valid */
if ((1 << it->options[1]) & 0xdcfc) {
if (it->options[1] >= 2 && it->options[1] <= 15 &&
(1 << it->options[1]) & 0xdcfc) {
ret = request_irq(it->options[1], das16m1_interrupt, 0,
dev->board_name, dev);
if (ret == 0)


@ -567,7 +567,8 @@ static int das6402_attach(struct comedi_device *dev,
das6402_reset(dev);
/* IRQs 2,3,5,6,7, 10,11,15 are valid for "enhanced" mode */
if ((1 << it->options[1]) & 0x8cec) {
if (it->options[1] > 0 && it->options[1] < 16 &&
(1 << it->options[1]) & 0x8cec) {
ret = request_irq(it->options[1], das6402_interrupt, 0,
dev->board_name, dev);
if (ret == 0) {


@ -1149,7 +1149,8 @@ static int pcl812_attach(struct comedi_device *dev, struct comedi_devconfig *it)
if (IS_ERR(dev->pacer))
return PTR_ERR(dev->pacer);
if ((1 << it->options[1]) & board->irq_bits) {
if (it->options[1] > 0 && it->options[1] < 16 &&
(1 << it->options[1]) & board->irq_bits) {
ret = request_irq(it->options[1], pcl812_interrupt, 0,
dev->board_name, dev);
if (ret == 0)


@ -45,7 +45,6 @@ struct psci_cpuidle_domain_state {
static DEFINE_PER_CPU_READ_MOSTLY(struct psci_cpuidle_data, psci_cpuidle_data);
static DEFINE_PER_CPU(struct psci_cpuidle_domain_state, psci_domain_state);
static bool psci_cpuidle_use_syscore;
static bool psci_cpuidle_use_cpuhp;
void psci_set_domain_state(struct generic_pm_domain *pd, unsigned int state_idx,
u32 state)
@ -124,8 +123,12 @@ static int psci_idle_cpuhp_up(unsigned int cpu)
{
struct device *pd_dev = __this_cpu_read(psci_cpuidle_data.dev);
if (pd_dev)
if (pd_dev) {
if (!IS_ENABLED(CONFIG_PREEMPT_RT))
pm_runtime_get_sync(pd_dev);
else
dev_pm_genpd_resume(pd_dev);
}
return 0;
}
@ -135,7 +138,11 @@ static int psci_idle_cpuhp_down(unsigned int cpu)
struct device *pd_dev = __this_cpu_read(psci_cpuidle_data.dev);
if (pd_dev) {
if (!IS_ENABLED(CONFIG_PREEMPT_RT))
pm_runtime_put_sync(pd_dev);
else
dev_pm_genpd_suspend(pd_dev);
/* Clear domain state to start fresh at next online. */
psci_clear_domain_state();
}
@ -196,9 +203,6 @@ static void psci_idle_init_cpuhp(void)
{
int err;
if (!psci_cpuidle_use_cpuhp)
return;
err = cpuhp_setup_state_nocalls(CPUHP_AP_CPU_PM_STARTING,
"cpuidle/psci:online",
psci_idle_cpuhp_up,
@ -259,10 +263,8 @@ static int psci_dt_cpu_init_topology(struct cpuidle_driver *drv,
* s2ram and s2idle.
*/
drv->states[state_count - 1].enter_s2idle = psci_enter_s2idle_domain_idle_state;
if (!IS_ENABLED(CONFIG_PREEMPT_RT)) {
if (!IS_ENABLED(CONFIG_PREEMPT_RT))
drv->states[state_count - 1].enter = psci_enter_domain_idle_state;
psci_cpuidle_use_cpuhp = true;
}
return 0;
}
@ -339,7 +341,6 @@ static void psci_cpu_deinit_idle(int cpu)
dt_idle_detach_cpu(data->dev);
psci_cpuidle_use_syscore = false;
psci_cpuidle_use_cpuhp = false;
}
static int psci_idle_init_cpu(struct device *dev, int cpu)


@ -314,30 +314,30 @@ static int chcr_compute_partial_hash(struct shash_desc *desc,
if (digest_size == SHA1_DIGEST_SIZE) {
error = crypto_shash_init(desc) ?:
crypto_shash_update(desc, iopad, SHA1_BLOCK_SIZE) ?:
crypto_shash_export(desc, (void *)&sha1_st);
crypto_shash_export_core(desc, &sha1_st);
memcpy(result_hash, sha1_st.state, SHA1_DIGEST_SIZE);
} else if (digest_size == SHA224_DIGEST_SIZE) {
error = crypto_shash_init(desc) ?:
crypto_shash_update(desc, iopad, SHA256_BLOCK_SIZE) ?:
crypto_shash_export(desc, (void *)&sha256_st);
crypto_shash_export_core(desc, &sha256_st);
memcpy(result_hash, sha256_st.state, SHA256_DIGEST_SIZE);
} else if (digest_size == SHA256_DIGEST_SIZE) {
error = crypto_shash_init(desc) ?:
crypto_shash_update(desc, iopad, SHA256_BLOCK_SIZE) ?:
crypto_shash_export(desc, (void *)&sha256_st);
crypto_shash_export_core(desc, &sha256_st);
memcpy(result_hash, sha256_st.state, SHA256_DIGEST_SIZE);
} else if (digest_size == SHA384_DIGEST_SIZE) {
error = crypto_shash_init(desc) ?:
crypto_shash_update(desc, iopad, SHA512_BLOCK_SIZE) ?:
crypto_shash_export(desc, (void *)&sha512_st);
crypto_shash_export_core(desc, &sha512_st);
memcpy(result_hash, sha512_st.state, SHA512_DIGEST_SIZE);
} else if (digest_size == SHA512_DIGEST_SIZE) {
error = crypto_shash_init(desc) ?:
crypto_shash_update(desc, iopad, SHA512_BLOCK_SIZE) ?:
crypto_shash_export(desc, (void *)&sha512_st);
crypto_shash_export_core(desc, &sha512_st);
memcpy(result_hash, sha512_st.state, SHA512_DIGEST_SIZE);
} else {
error = -EINVAL;


@ -5,11 +5,11 @@
#include <linux/crypto.h>
#include <crypto/internal/aead.h>
#include <crypto/internal/cipher.h>
#include <crypto/internal/hash.h>
#include <crypto/internal/skcipher.h>
#include <crypto/aes.h>
#include <crypto/sha1.h>
#include <crypto/sha2.h>
#include <crypto/hash.h>
#include <crypto/hmac.h>
#include <crypto/algapi.h>
#include <crypto/authenc.h>
@ -154,19 +154,19 @@ static int qat_alg_do_precomputes(struct icp_qat_hw_auth_algo_blk *hash,
switch (ctx->qat_hash_alg) {
case ICP_QAT_HW_AUTH_ALGO_SHA1:
if (crypto_shash_export(shash, &ctx->sha1))
if (crypto_shash_export_core(shash, &ctx->sha1))
return -EFAULT;
for (i = 0; i < digest_size >> 2; i++, hash_state_out++)
*hash_state_out = cpu_to_be32(ctx->sha1.state[i]);
break;
case ICP_QAT_HW_AUTH_ALGO_SHA256:
if (crypto_shash_export(shash, &ctx->sha256))
if (crypto_shash_export_core(shash, &ctx->sha256))
return -EFAULT;
for (i = 0; i < digest_size >> 2; i++, hash_state_out++)
*hash_state_out = cpu_to_be32(ctx->sha256.state[i]);
break;
case ICP_QAT_HW_AUTH_ALGO_SHA512:
if (crypto_shash_export(shash, &ctx->sha512))
if (crypto_shash_export_core(shash, &ctx->sha512))
return -EFAULT;
for (i = 0; i < digest_size >> 3; i++, hash512_state_out++)
*hash512_state_out = cpu_to_be64(ctx->sha512.state[i]);
@ -190,19 +190,19 @@ static int qat_alg_do_precomputes(struct icp_qat_hw_auth_algo_blk *hash,
switch (ctx->qat_hash_alg) {
case ICP_QAT_HW_AUTH_ALGO_SHA1:
if (crypto_shash_export(shash, &ctx->sha1))
if (crypto_shash_export_core(shash, &ctx->sha1))
return -EFAULT;
for (i = 0; i < digest_size >> 2; i++, hash_state_out++)
*hash_state_out = cpu_to_be32(ctx->sha1.state[i]);
break;
case ICP_QAT_HW_AUTH_ALGO_SHA256:
if (crypto_shash_export(shash, &ctx->sha256))
if (crypto_shash_export_core(shash, &ctx->sha256))
return -EFAULT;
for (i = 0; i < digest_size >> 2; i++, hash_state_out++)
*hash_state_out = cpu_to_be32(ctx->sha256.state[i]);
break;
case ICP_QAT_HW_AUTH_ALGO_SHA512:
if (crypto_shash_export(shash, &ctx->sha512))
if (crypto_shash_export_core(shash, &ctx->sha512))
return -EFAULT;
for (i = 0; i < digest_size >> 3; i++, hash512_state_out++)
*hash512_state_out = cpu_to_be64(ctx->sha512.state[i]);


@ -161,12 +161,16 @@ static int dw_edma_pcie_probe(struct pci_dev *pdev,
const struct pci_device_id *pid)
{
struct dw_edma_pcie_data *pdata = (void *)pid->driver_data;
struct dw_edma_pcie_data vsec_data;
struct dw_edma_pcie_data *vsec_data __free(kfree) = NULL;
struct device *dev = &pdev->dev;
struct dw_edma_chip *chip;
int err, nr_irqs;
int i, mask;
vsec_data = kmalloc(sizeof(*vsec_data), GFP_KERNEL);
if (!vsec_data)
return -ENOMEM;
/* Enable PCI device */
err = pcim_enable_device(pdev);
if (err) {
@ -174,23 +178,23 @@ static int dw_edma_pcie_probe(struct pci_dev *pdev,
return err;
}
memcpy(&vsec_data, pdata, sizeof(struct dw_edma_pcie_data));
memcpy(vsec_data, pdata, sizeof(struct dw_edma_pcie_data));
/*
* Tries to find if exists a PCIe Vendor-Specific Extended Capability
* for the DMA, if one exists, then reconfigures it.
*/
dw_edma_pcie_get_vsec_dma_data(pdev, &vsec_data);
dw_edma_pcie_get_vsec_dma_data(pdev, vsec_data);
/* Mapping PCI BAR regions */
mask = BIT(vsec_data.rg.bar);
for (i = 0; i < vsec_data.wr_ch_cnt; i++) {
mask |= BIT(vsec_data.ll_wr[i].bar);
mask |= BIT(vsec_data.dt_wr[i].bar);
mask = BIT(vsec_data->rg.bar);
for (i = 0; i < vsec_data->wr_ch_cnt; i++) {
mask |= BIT(vsec_data->ll_wr[i].bar);
mask |= BIT(vsec_data->dt_wr[i].bar);
}
for (i = 0; i < vsec_data.rd_ch_cnt; i++) {
mask |= BIT(vsec_data.ll_rd[i].bar);
mask |= BIT(vsec_data.dt_rd[i].bar);
for (i = 0; i < vsec_data->rd_ch_cnt; i++) {
mask |= BIT(vsec_data->ll_rd[i].bar);
mask |= BIT(vsec_data->dt_rd[i].bar);
}
err = pcim_iomap_regions(pdev, mask, pci_name(pdev));
if (err) {
@ -213,7 +217,7 @@ static int dw_edma_pcie_probe(struct pci_dev *pdev,
return -ENOMEM;
/* IRQs allocation */
nr_irqs = pci_alloc_irq_vectors(pdev, 1, vsec_data.irqs,
nr_irqs = pci_alloc_irq_vectors(pdev, 1, vsec_data->irqs,
PCI_IRQ_MSI | PCI_IRQ_MSIX);
if (nr_irqs < 1) {
pci_err(pdev, "fail to alloc IRQ vector (number of IRQs=%u)\n",
@ -224,22 +228,22 @@ static int dw_edma_pcie_probe(struct pci_dev *pdev,
/* Data structure initialization */
chip->dev = dev;
chip->mf = vsec_data.mf;
chip->mf = vsec_data->mf;
chip->nr_irqs = nr_irqs;
chip->ops = &dw_edma_pcie_plat_ops;
chip->ll_wr_cnt = vsec_data.wr_ch_cnt;
chip->ll_rd_cnt = vsec_data.rd_ch_cnt;
chip->ll_wr_cnt = vsec_data->wr_ch_cnt;
chip->ll_rd_cnt = vsec_data->rd_ch_cnt;
chip->reg_base = pcim_iomap_table(pdev)[vsec_data.rg.bar];
chip->reg_base = pcim_iomap_table(pdev)[vsec_data->rg.bar];
if (!chip->reg_base)
return -ENOMEM;
for (i = 0; i < chip->ll_wr_cnt; i++) {
struct dw_edma_region *ll_region = &chip->ll_region_wr[i];
struct dw_edma_region *dt_region = &chip->dt_region_wr[i];
struct dw_edma_block *ll_block = &vsec_data.ll_wr[i];
struct dw_edma_block *dt_block = &vsec_data.dt_wr[i];
struct dw_edma_block *ll_block = &vsec_data->ll_wr[i];
struct dw_edma_block *dt_block = &vsec_data->dt_wr[i];
ll_region->vaddr.io = pcim_iomap_table(pdev)[ll_block->bar];
if (!ll_region->vaddr.io)
@ -263,8 +267,8 @@ static int dw_edma_pcie_probe(struct pci_dev *pdev,
for (i = 0; i < chip->ll_rd_cnt; i++) {
struct dw_edma_region *ll_region = &chip->ll_region_rd[i];
struct dw_edma_region *dt_region = &chip->dt_region_rd[i];
struct dw_edma_block *ll_block = &vsec_data.ll_rd[i];
struct dw_edma_block *dt_block = &vsec_data.dt_rd[i];
struct dw_edma_block *ll_block = &vsec_data->ll_rd[i];
struct dw_edma_block *dt_block = &vsec_data->dt_rd[i];
ll_region->vaddr.io = pcim_iomap_table(pdev)[ll_block->bar];
if (!ll_region->vaddr.io)
@ -298,31 +302,31 @@ static int dw_edma_pcie_probe(struct pci_dev *pdev,
pci_dbg(pdev, "Version:\tUnknown (0x%x)\n", chip->mf);
pci_dbg(pdev, "Registers:\tBAR=%u, off=0x%.8lx, sz=0x%zx bytes, addr(v=%p)\n",
vsec_data.rg.bar, vsec_data.rg.off, vsec_data.rg.sz,
vsec_data->rg.bar, vsec_data->rg.off, vsec_data->rg.sz,
chip->reg_base);
for (i = 0; i < chip->ll_wr_cnt; i++) {
pci_dbg(pdev, "L. List:\tWRITE CH%.2u, BAR=%u, off=0x%.8lx, sz=0x%zx bytes, addr(v=%p, p=%pa)\n",
i, vsec_data.ll_wr[i].bar,
vsec_data.ll_wr[i].off, chip->ll_region_wr[i].sz,
i, vsec_data->ll_wr[i].bar,
vsec_data->ll_wr[i].off, chip->ll_region_wr[i].sz,
chip->ll_region_wr[i].vaddr.io, &chip->ll_region_wr[i].paddr);
pci_dbg(pdev, "Data:\tWRITE CH%.2u, BAR=%u, off=0x%.8lx, sz=0x%zx bytes, addr(v=%p, p=%pa)\n",
i, vsec_data.dt_wr[i].bar,
vsec_data.dt_wr[i].off, chip->dt_region_wr[i].sz,
i, vsec_data->dt_wr[i].bar,
vsec_data->dt_wr[i].off, chip->dt_region_wr[i].sz,
chip->dt_region_wr[i].vaddr.io, &chip->dt_region_wr[i].paddr);
}
for (i = 0; i < chip->ll_rd_cnt; i++) {
pci_dbg(pdev, "L. List:\tREAD CH%.2u, BAR=%u, off=0x%.8lx, sz=0x%zx bytes, addr(v=%p, p=%pa)\n",
i, vsec_data.ll_rd[i].bar,
vsec_data.ll_rd[i].off, chip->ll_region_rd[i].sz,
i, vsec_data->ll_rd[i].bar,
vsec_data->ll_rd[i].off, chip->ll_region_rd[i].sz,
chip->ll_region_rd[i].vaddr.io, &chip->ll_region_rd[i].paddr);
pci_dbg(pdev, "Data:\tREAD CH%.2u, BAR=%u, off=0x%.8lx, sz=0x%zx bytes, addr(v=%p, p=%pa)\n",
i, vsec_data.dt_rd[i].bar,
vsec_data.dt_rd[i].off, chip->dt_region_rd[i].sz,
i, vsec_data->dt_rd[i].bar,
vsec_data->dt_rd[i].off, chip->dt_region_rd[i].sz,
chip->dt_region_rd[i].vaddr.io, &chip->dt_region_rd[i].paddr);
}


@ -449,9 +449,9 @@ static enum dma_status mtk_cqdma_tx_status(struct dma_chan *c,
return ret;
spin_lock_irqsave(&cvc->pc->lock, flags);
spin_lock_irqsave(&cvc->vc.lock, flags);
spin_lock(&cvc->vc.lock);
vd = mtk_cqdma_find_active_desc(c, cookie);
spin_unlock_irqrestore(&cvc->vc.lock, flags);
spin_unlock(&cvc->vc.lock);
spin_unlock_irqrestore(&cvc->pc->lock, flags);
if (vd) {


@ -1351,7 +1351,7 @@ static int nbpf_probe(struct platform_device *pdev)
if (irqs == 1) {
eirq = irqbuf[0];
for (i = 0; i <= num_channels; i++)
for (i = 0; i < num_channels; i++)
nbpf->chan[i].irq = irqbuf[0];
} else {
eirq = platform_get_irq_byname(pdev, "error");
@ -1361,16 +1361,15 @@ static int nbpf_probe(struct platform_device *pdev)
if (irqs == num_channels + 1) {
struct nbpf_channel *chan;
for (i = 0, chan = nbpf->chan; i <= num_channels;
for (i = 0, chan = nbpf->chan; i < num_channels;
i++, chan++) {
/* Skip the error IRQ */
if (irqbuf[i] == eirq)
i++;
if (i >= ARRAY_SIZE(irqbuf))
return -EINVAL;
chan->irq = irqbuf[i];
}
if (chan != nbpf->chan + num_channels)
return -EINVAL;
} else {
/* 2 IRQs and more than one channel */
if (irqbuf[0] == eirq)
@ -1378,7 +1377,7 @@ static int nbpf_probe(struct platform_device *pdev)
else
irq = irqbuf[0];
for (i = 0; i <= num_channels; i++)
for (i = 0; i < num_channels; i++)
nbpf->chan[i].irq = irq;
}
}


@ -331,6 +331,19 @@ static const struct dmi_system_id gpiolib_acpi_quirks[] __initconst = {
.ignore_interrupt = "AMDI0030:00@11",
},
},
{
/*
* Wakeup only works when keyboard backlight is turned off
* https://gitlab.freedesktop.org/drm/amd/-/issues/4169
*/
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
DMI_MATCH(DMI_PRODUCT_FAMILY, "Acer Nitro V 15"),
},
.driver_data = &(struct acpi_gpiolib_dmi_quirk) {
.ignore_interrupt = "AMDI0030:00@8",
},
},
{} /* Terminating entry */
};


@ -319,7 +319,7 @@ EXPORT_SYMBOL_GPL(devm_gpiod_unhinge);
*/
void devm_gpiod_put_array(struct device *dev, struct gpio_descs *descs)
{
devm_remove_action(dev, devm_gpiod_release_array, descs);
devm_release_action(dev, devm_gpiod_release_array, descs);
}
EXPORT_SYMBOL_GPL(devm_gpiod_put_array);


@ -5193,6 +5193,8 @@ exit:
dev->dev->power.disable_depth--;
#endif
}
amdgpu_vram_mgr_clear_reset_blocks(adev);
adev->in_suspend = false;
if (amdgpu_acpi_smart_shift_update(dev, AMDGPU_SS_DEV_D0))


@ -427,6 +427,7 @@ bool amdgpu_ring_soft_recovery(struct amdgpu_ring *ring, unsigned int vmid,
{
unsigned long flags;
ktime_t deadline;
bool ret;
if (unlikely(ring->adev->debug_disable_soft_recovery))
return false;
@ -441,12 +442,16 @@ bool amdgpu_ring_soft_recovery(struct amdgpu_ring *ring, unsigned int vmid,
dma_fence_set_error(fence, -ENODATA);
spin_unlock_irqrestore(fence->lock, flags);
atomic_inc(&ring->adev->gpu_reset_counter);
while (!dma_fence_is_signaled(fence) &&
ktime_to_ns(ktime_sub(deadline, ktime_get())) > 0)
ring->funcs->soft_recovery(ring, vmid);
return dma_fence_is_signaled(fence);
ret = dma_fence_is_signaled(fence);
/* increment the counter only if soft reset worked */
if (ret)
atomic_inc(&ring->adev->gpu_reset_counter);
return ret;
}
/*

View file

@ -154,6 +154,7 @@ int amdgpu_vram_mgr_reserve_range(struct amdgpu_vram_mgr *mgr,
uint64_t start, uint64_t size);
int amdgpu_vram_mgr_query_page_status(struct amdgpu_vram_mgr *mgr,
uint64_t start);
void amdgpu_vram_mgr_clear_reset_blocks(struct amdgpu_device *adev);
bool amdgpu_res_cpu_visible(struct amdgpu_device *adev,
struct ttm_resource *res);


@ -782,6 +782,23 @@ uint64_t amdgpu_vram_mgr_vis_usage(struct amdgpu_vram_mgr *mgr)
return atomic64_read(&mgr->vis_usage);
}
/**
* amdgpu_vram_mgr_clear_reset_blocks - reset clear blocks
*
* @adev: amdgpu device pointer
*
* Reset the cleared drm buddy blocks.
*/
void amdgpu_vram_mgr_clear_reset_blocks(struct amdgpu_device *adev)
{
struct amdgpu_vram_mgr *mgr = &adev->mman.vram_mgr;
struct drm_buddy *mm = &mgr->mm;
mutex_lock(&mgr->lock);
drm_buddy_reset_clear(mm, false);
mutex_unlock(&mgr->lock);
}
/**
* amdgpu_vram_mgr_intersects - test each drm buddy block for intersection
*

View file

@ -4640,6 +4640,7 @@ static int gfx_v8_0_kcq_init_queue(struct amdgpu_ring *ring)
memcpy(mqd, adev->gfx.mec.mqd_backup[mqd_idx], sizeof(struct vi_mqd_allocation));
/* reset ring buffer */
ring->wptr = 0;
atomic64_set((atomic64_t *)ring->wptr_cpu_addr, 0);
amdgpu_ring_clear_ring(ring);
}
return 0;


@ -728,7 +728,16 @@ int amdgpu_dm_crtc_init(struct amdgpu_display_manager *dm,
* support programmable degamma anywhere.
*/
is_dcn = dm->adev->dm.dc->caps.color.dpp.dcn_arch;
drm_crtc_enable_color_mgmt(&acrtc->base, is_dcn ? MAX_COLOR_LUT_ENTRIES : 0,
/* Don't enable DRM CRTC degamma property for DCN401 since the
* pre-blending degamma LUT doesn't apply to cursor, and therefore
* can't work similar to a post-blending degamma LUT as in other hw
* versions.
* TODO: revisit it once KMS plane color API is merged.
*/
drm_crtc_enable_color_mgmt(&acrtc->base,
(is_dcn &&
dm->adev->dm.dc->ctx->dce_version != DCN_VERSION_4_01) ?
MAX_COLOR_LUT_ENTRIES : 0,
true, MAX_COLOR_LUT_ENTRIES);
drm_mode_crtc_set_gamma_size(&acrtc->base, MAX_COLOR_LEGACY_LUT_ENTRIES);


@ -1565,7 +1565,7 @@ struct clk_mgr_internal *dcn401_clk_mgr_construct(
clk_mgr->base.bw_params = kzalloc(sizeof(*clk_mgr->base.bw_params), GFP_KERNEL);
if (!clk_mgr->base.bw_params) {
BREAK_TO_DEBUGGER();
kfree(clk_mgr);
kfree(clk_mgr401);
return NULL;
}
@ -1576,6 +1576,7 @@ struct clk_mgr_internal *dcn401_clk_mgr_construct(
if (!clk_mgr->wm_range_table) {
BREAK_TO_DEBUGGER();
kfree(clk_mgr->base.bw_params);
kfree(clk_mgr401);
return NULL;
}


@ -1373,7 +1373,7 @@ static int ti_sn_bridge_probe(struct auxiliary_device *adev,
regmap_update_bits(pdata->regmap, SN_HPD_DISABLE_REG,
HPD_DISABLE, 0);
mutex_unlock(&pdata->comms_mutex);
};
}
drm_bridge_add(&pdata->bridge);


@ -725,7 +725,7 @@ ssize_t drm_dp_dpcd_read(struct drm_dp_aux *aux, unsigned int offset,
* monitor doesn't power down exactly after the throw away read.
*/
if (!aux->is_remote) {
ret = drm_dp_dpcd_probe(aux, DP_LANE0_1_STATUS);
ret = drm_dp_dpcd_probe(aux, DP_TRAINING_PATTERN_SET);
if (ret < 0)
return ret;
}


@ -404,6 +404,49 @@ drm_get_buddy(struct drm_buddy_block *block)
}
EXPORT_SYMBOL(drm_get_buddy);
/**
* drm_buddy_reset_clear - reset blocks clear state
*
* @mm: DRM buddy manager
* @is_clear: blocks clear state
*
* Reset the clear state based on @is_clear value for each block
* in the freelist.
*/
void drm_buddy_reset_clear(struct drm_buddy *mm, bool is_clear)
{
u64 root_size, size, start;
unsigned int order;
int i;
size = mm->size;
for (i = 0; i < mm->n_roots; ++i) {
order = ilog2(size) - ilog2(mm->chunk_size);
start = drm_buddy_block_offset(mm->roots[i]);
__force_merge(mm, start, start + size, order);
root_size = mm->chunk_size << order;
size -= root_size;
}
for (i = 0; i <= mm->max_order; ++i) {
struct drm_buddy_block *block;
list_for_each_entry_reverse(block, &mm->free_list[i], link) {
if (is_clear != drm_buddy_block_is_clear(block)) {
if (is_clear) {
mark_cleared(block);
mm->clear_avail += drm_buddy_block_size(mm, block);
} else {
clear_reset(block);
mm->clear_avail -= drm_buddy_block_size(mm, block);
}
}
}
}
}
EXPORT_SYMBOL(drm_buddy_reset_clear);
/**
* drm_buddy_free_block - free a block
*


@ -230,7 +230,7 @@ void drm_gem_dma_free(struct drm_gem_dma_object *dma_obj)
if (drm_gem_is_imported(gem_obj)) {
if (dma_obj->vaddr)
dma_buf_vunmap_unlocked(gem_obj->dma_buf, &map);
dma_buf_vunmap_unlocked(gem_obj->import_attach->dmabuf, &map);
drm_prime_gem_destroy(gem_obj, dma_obj->sgt);
} else if (dma_obj->vaddr) {
if (dma_obj->map_noncoherent)


@ -419,6 +419,7 @@ EXPORT_SYMBOL(drm_gem_fb_vunmap);
static void __drm_gem_fb_end_cpu_access(struct drm_framebuffer *fb, enum dma_data_direction dir,
unsigned int num_planes)
{
struct dma_buf_attachment *import_attach;
struct drm_gem_object *obj;
int ret;
@@ -427,9 +428,10 @@ static void __drm_gem_fb_end_cpu_access(struct drm_framebuffer *fb, enum dma_dat
obj = drm_gem_fb_get_obj(fb, num_planes);
if (!obj)
continue;
import_attach = obj->import_attach;
if (!drm_gem_is_imported(obj))
continue;
ret = dma_buf_end_cpu_access(obj->dma_buf, dir);
ret = dma_buf_end_cpu_access(import_attach->dmabuf, dir);
if (ret)
drm_err(fb->dev, "dma_buf_end_cpu_access(%u, %d) failed: %d\n",
ret, num_planes, dir);
@@ -452,6 +454,7 @@ static void __drm_gem_fb_end_cpu_access(struct drm_framebuffer *fb, enum dma_dat
*/
int drm_gem_fb_begin_cpu_access(struct drm_framebuffer *fb, enum dma_data_direction dir)
{
struct dma_buf_attachment *import_attach;
struct drm_gem_object *obj;
unsigned int i;
int ret;
@@ -462,9 +465,10 @@ int drm_gem_fb_begin_cpu_access(struct drm_framebuffer *fb, enum dma_data_direct
ret = -EINVAL;
goto err___drm_gem_fb_end_cpu_access;
}
import_attach = obj->import_attach;
if (!drm_gem_is_imported(obj))
continue;
ret = dma_buf_begin_cpu_access(obj->dma_buf, dir);
ret = dma_buf_begin_cpu_access(import_attach->dmabuf, dir);
if (ret)
goto err___drm_gem_fb_end_cpu_access;
}

View file

@@ -349,7 +349,7 @@ int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
int ret = 0;
if (drm_gem_is_imported(obj)) {
ret = dma_buf_vmap(obj->dma_buf, map);
ret = dma_buf_vmap(obj->import_attach->dmabuf, map);
} else {
pgprot_t prot = PAGE_KERNEL;
@@ -409,7 +409,7 @@ void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
struct drm_gem_object *obj = &shmem->base;
if (drm_gem_is_imported(obj)) {
dma_buf_vunmap(obj->dma_buf, map);
dma_buf_vunmap(obj->import_attach->dmabuf, map);
} else {
dma_resv_assert_held(shmem->base.resv);

View file

@@ -453,7 +453,13 @@ struct dma_buf *drm_gem_prime_handle_to_dmabuf(struct drm_device *dev,
}
mutex_lock(&dev->object_name_lock);
/* re-export the original imported/exported object */
/* re-export the original imported object */
if (obj->import_attach) {
dmabuf = obj->import_attach->dmabuf;
get_dma_buf(dmabuf);
goto out_have_obj;
}
if (obj->dma_buf) {
get_dma_buf(obj->dma_buf);
dmabuf = obj->dma_buf;

View file

@@ -65,7 +65,7 @@ static void etnaviv_gem_prime_release(struct etnaviv_gem_object *etnaviv_obj)
struct iosys_map map = IOSYS_MAP_INIT_VADDR(etnaviv_obj->vaddr);
if (etnaviv_obj->vaddr)
dma_buf_vunmap_unlocked(etnaviv_obj->base.dma_buf, &map);
dma_buf_vunmap_unlocked(etnaviv_obj->base.import_attach->dmabuf, &map);
/* Don't drop the pages for imported dmabuf, as they are not
* ours, just free the array we allocated:
@@ -82,7 +82,7 @@ static void *etnaviv_gem_prime_vmap_impl(struct etnaviv_gem_object *etnaviv_obj)
lockdep_assert_held(&etnaviv_obj->lock);
ret = dma_buf_vmap(etnaviv_obj->base.dma_buf, &map);
ret = dma_buf_vmap(etnaviv_obj->base.import_attach->dmabuf, &map);
if (ret)
return NULL;
return map.vaddr;

View file

@@ -719,6 +719,39 @@ int mtk_crtc_plane_check(struct drm_crtc *crtc, struct drm_plane *plane,
return 0;
}
void mtk_crtc_plane_disable(struct drm_crtc *crtc, struct drm_plane *plane)
{
#if IS_REACHABLE(CONFIG_MTK_CMDQ)
struct mtk_crtc *mtk_crtc = to_mtk_crtc(crtc);
struct mtk_plane_state *plane_state = to_mtk_plane_state(plane->state);
int i;
/* no need to wait for disabling the plane by CPU */
if (!mtk_crtc->cmdq_client.chan)
return;
if (!mtk_crtc->enabled)
return;
/* set pending plane state to disabled */
for (i = 0; i < mtk_crtc->layer_nr; i++) {
struct drm_plane *mtk_plane = &mtk_crtc->planes[i];
struct mtk_plane_state *mtk_plane_state = to_mtk_plane_state(mtk_plane->state);
if (mtk_plane->index == plane->index) {
memcpy(mtk_plane_state, plane_state, sizeof(*plane_state));
break;
}
}
mtk_crtc_update_config(mtk_crtc, false);
/* wait for planes to be disabled by CMDQ */
wait_event_timeout(mtk_crtc->cb_blocking_queue,
mtk_crtc->cmdq_vblank_cnt == 0,
msecs_to_jiffies(500));
#endif
}
void mtk_crtc_async_update(struct drm_crtc *crtc, struct drm_plane *plane,
struct drm_atomic_state *state)
{
@@ -930,7 +963,8 @@ static int mtk_crtc_init_comp_planes(struct drm_device *drm_dev,
mtk_ddp_comp_supported_rotations(comp),
mtk_ddp_comp_get_blend_modes(comp),
mtk_ddp_comp_get_formats(comp),
mtk_ddp_comp_get_num_formats(comp), i);
mtk_ddp_comp_get_num_formats(comp),
mtk_ddp_comp_is_afbc_supported(comp), i);
if (ret)
return ret;

View file

@@ -21,6 +21,7 @@ int mtk_crtc_create(struct drm_device *drm_dev, const unsigned int *path,
unsigned int num_conn_routes);
int mtk_crtc_plane_check(struct drm_crtc *crtc, struct drm_plane *plane,
struct mtk_plane_state *state);
void mtk_crtc_plane_disable(struct drm_crtc *crtc, struct drm_plane *plane);
void mtk_crtc_async_update(struct drm_crtc *crtc, struct drm_plane *plane,
struct drm_atomic_state *plane_state);
struct device *mtk_crtc_dma_dev_get(struct drm_crtc *crtc);

View file

@@ -366,6 +366,7 @@ static const struct mtk_ddp_comp_funcs ddp_ovl = {
.get_blend_modes = mtk_ovl_get_blend_modes,
.get_formats = mtk_ovl_get_formats,
.get_num_formats = mtk_ovl_get_num_formats,
.is_afbc_supported = mtk_ovl_is_afbc_supported,
};
static const struct mtk_ddp_comp_funcs ddp_postmask = {

View file

@@ -83,6 +83,7 @@ struct mtk_ddp_comp_funcs {
u32 (*get_blend_modes)(struct device *dev);
const u32 *(*get_formats)(struct device *dev);
size_t (*get_num_formats)(struct device *dev);
bool (*is_afbc_supported)(struct device *dev);
void (*connect)(struct device *dev, struct device *mmsys_dev, unsigned int next);
void (*disconnect)(struct device *dev, struct device *mmsys_dev, unsigned int next);
void (*add)(struct device *dev, struct mtk_mutex *mutex);
@@ -294,6 +295,14 @@ size_t mtk_ddp_comp_get_num_formats(struct mtk_ddp_comp *comp)
return 0;
}
static inline bool mtk_ddp_comp_is_afbc_supported(struct mtk_ddp_comp *comp)
{
if (comp->funcs && comp->funcs->is_afbc_supported)
return comp->funcs->is_afbc_supported(comp->dev);
return false;
}
static inline bool mtk_ddp_comp_add(struct mtk_ddp_comp *comp, struct mtk_mutex *mutex)
{
if (comp->funcs && comp->funcs->add) {

View file

@@ -106,6 +106,7 @@ void mtk_ovl_disable_vblank(struct device *dev);
u32 mtk_ovl_get_blend_modes(struct device *dev);
const u32 *mtk_ovl_get_formats(struct device *dev);
size_t mtk_ovl_get_num_formats(struct device *dev);
bool mtk_ovl_is_afbc_supported(struct device *dev);
void mtk_ovl_adaptor_add_comp(struct device *dev, struct mtk_mutex *mutex);
void mtk_ovl_adaptor_remove_comp(struct device *dev, struct mtk_mutex *mutex);

View file

@@ -236,6 +236,13 @@ size_t mtk_ovl_get_num_formats(struct device *dev)
return ovl->data->num_formats;
}
bool mtk_ovl_is_afbc_supported(struct device *dev)
{
struct mtk_disp_ovl *ovl = dev_get_drvdata(dev);
return ovl->data->supports_afbc;
}
int mtk_ovl_clk_enable(struct device *dev)
{
struct mtk_disp_ovl *ovl = dev_get_drvdata(dev);

View file

@@ -1095,7 +1095,6 @@ static const u32 mt8183_output_fmts[] = {
};
static const u32 mt8195_dpi_output_fmts[] = {
MEDIA_BUS_FMT_BGR888_1X24,
MEDIA_BUS_FMT_RGB888_1X24,
MEDIA_BUS_FMT_RGB888_2X12_LE,
MEDIA_BUS_FMT_RGB888_2X12_BE,
@@ -1103,18 +1102,19 @@ static const u32 mt8195_dpi_output_fmts[] = {
MEDIA_BUS_FMT_YUYV8_1X16,
MEDIA_BUS_FMT_YUYV10_1X20,
MEDIA_BUS_FMT_YUYV12_1X24,
MEDIA_BUS_FMT_BGR888_1X24,
MEDIA_BUS_FMT_YUV8_1X24,
MEDIA_BUS_FMT_YUV10_1X30,
};
static const u32 mt8195_dp_intf_output_fmts[] = {
MEDIA_BUS_FMT_BGR888_1X24,
MEDIA_BUS_FMT_RGB888_1X24,
MEDIA_BUS_FMT_RGB888_2X12_LE,
MEDIA_BUS_FMT_RGB888_2X12_BE,
MEDIA_BUS_FMT_RGB101010_1X30,
MEDIA_BUS_FMT_YUYV8_1X16,
MEDIA_BUS_FMT_YUYV10_1X20,
MEDIA_BUS_FMT_BGR888_1X24,
MEDIA_BUS_FMT_YUV8_1X24,
MEDIA_BUS_FMT_YUV10_1X30,
};

View file

@@ -285,9 +285,14 @@ static void mtk_plane_atomic_disable(struct drm_plane *plane,
struct drm_plane_state *new_state = drm_atomic_get_new_plane_state(state,
plane);
struct mtk_plane_state *mtk_plane_state = to_mtk_plane_state(new_state);
struct drm_plane_state *old_state = drm_atomic_get_old_plane_state(state,
plane);
mtk_plane_state->pending.enable = false;
wmb(); /* Make sure the above parameter is set before update */
mtk_plane_state->pending.dirty = true;
mtk_crtc_plane_disable(old_state->crtc, plane);
}
static void mtk_plane_atomic_update(struct drm_plane *plane,
@@ -321,7 +326,8 @@ static const struct drm_plane_helper_funcs mtk_plane_helper_funcs = {
int mtk_plane_init(struct drm_device *dev, struct drm_plane *plane,
unsigned long possible_crtcs, enum drm_plane_type type,
unsigned int supported_rotations, const u32 blend_modes,
const u32 *formats, size_t num_formats, unsigned int plane_idx)
const u32 *formats, size_t num_formats,
bool supports_afbc, unsigned int plane_idx)
{
int err;
@@ -332,7 +338,9 @@ int mtk_plane_init(struct drm_device *dev, struct drm_plane *plane,
err = drm_universal_plane_init(dev, plane, possible_crtcs,
&mtk_plane_funcs, formats,
num_formats, modifiers, type, NULL);
num_formats,
supports_afbc ? modifiers : NULL,
type, NULL);
if (err) {
DRM_ERROR("failed to initialize plane\n");
return err;

View file

@@ -49,5 +49,6 @@ to_mtk_plane_state(struct drm_plane_state *state)
int mtk_plane_init(struct drm_device *dev, struct drm_plane *plane,
unsigned long possible_crtcs, enum drm_plane_type type,
unsigned int supported_rotations, const u32 blend_modes,
const u32 *formats, size_t num_formats, unsigned int plane_idx);
const u32 *formats, size_t num_formats,
bool supports_afbc, unsigned int plane_idx);
#endif

View file

@@ -39,6 +39,9 @@ nvif_chan_gpfifo_post(struct nvif_chan *chan)
const u32 pbptr = (chan->push.cur - map) + chan->func->gpfifo.post_size;
const u32 gpptr = (chan->gpfifo.cur + 1) & chan->gpfifo.max;
if (!chan->func->gpfifo.post)
return 0;
return chan->func->gpfifo.post(chan, gpptr, pbptr);
}

View file

@@ -841,7 +841,6 @@ int panfrost_job_init(struct panfrost_device *pfdev)
.num_rqs = DRM_SCHED_PRIORITY_COUNT,
.credit_limit = 2,
.timeout = msecs_to_jiffies(JOB_TIMEOUT_MS),
.timeout_wq = pfdev->reset.wq,
.name = "pan_js",
.dev = pfdev->dev,
};
@@ -879,6 +878,7 @@ int panfrost_job_init(struct panfrost_device *pfdev)
pfdev->reset.wq = alloc_ordered_workqueue("panfrost-reset", 0);
if (!pfdev->reset.wq)
return -ENOMEM;
args.timeout_wq = pfdev->reset.wq;
for (j = 0; j < NUM_JOB_SLOTS; j++) {
js->queue[j].fence_context = dma_fence_context_alloc(1);

View file

@@ -26,7 +26,6 @@
* Jerome Glisse
*/
#include <linux/console.h>
#include <linux/efi.h>
#include <linux/pci.h>
#include <linux/pm_runtime.h>
@@ -1635,11 +1634,9 @@ int radeon_suspend_kms(struct drm_device *dev, bool suspend,
pci_set_power_state(pdev, PCI_D3hot);
}
if (notify_clients) {
console_lock();
drm_client_dev_suspend(dev, true);
console_unlock();
}
if (notify_clients)
drm_client_dev_suspend(dev, false);
return 0;
}
@@ -1661,18 +1658,12 @@ int radeon_resume_kms(struct drm_device *dev, bool resume, bool notify_clients)
if (dev->switch_power_state == DRM_SWITCH_POWER_OFF)
return 0;
if (notify_clients) {
console_lock();
}
if (resume) {
pci_set_power_state(pdev, PCI_D0);
pci_restore_state(pdev);
if (pci_enable_device(pdev)) {
if (notify_clients)
console_unlock();
if (pci_enable_device(pdev))
return -1;
}
}
/* resume AGP if in use */
radeon_agp_resume(rdev);
radeon_resume(rdev);
@@ -1747,10 +1738,8 @@ int radeon_resume_kms(struct drm_device *dev, bool resume, bool notify_clients)
if ((rdev->pm.pm_method == PM_METHOD_DPM) && rdev->pm.dpm_enabled)
radeon_pm_compute_clocks(rdev);
if (notify_clients) {
drm_client_dev_resume(dev, true);
console_unlock();
}
if (notify_clients)
drm_client_dev_resume(dev, false);
return 0;
}

View file

@@ -355,17 +355,6 @@ void drm_sched_entity_destroy(struct drm_sched_entity *entity)
}
EXPORT_SYMBOL(drm_sched_entity_destroy);
/* drm_sched_entity_clear_dep - callback to clear the entities dependency */
static void drm_sched_entity_clear_dep(struct dma_fence *f,
struct dma_fence_cb *cb)
{
struct drm_sched_entity *entity =
container_of(cb, struct drm_sched_entity, cb);
entity->dependency = NULL;
dma_fence_put(f);
}
/*
* drm_sched_entity_wakeup - callback to clear the entity's dependency and
* wake up the scheduler
@@ -376,7 +365,8 @@ static void drm_sched_entity_wakeup(struct dma_fence *f,
struct drm_sched_entity *entity =
container_of(cb, struct drm_sched_entity, cb);
drm_sched_entity_clear_dep(f, cb);
entity->dependency = NULL;
dma_fence_put(f);
drm_sched_wakeup(entity->rq->sched);
}
@@ -429,13 +419,6 @@ static bool drm_sched_entity_add_dependency_cb(struct drm_sched_entity *entity)
fence = dma_fence_get(&s_fence->scheduled);
dma_fence_put(entity->dependency);
entity->dependency = fence;
if (!dma_fence_add_callback(fence, &entity->cb,
drm_sched_entity_clear_dep))
return true;
/* Ignore it when it is already scheduled */
dma_fence_put(fence);
return false;
}
if (!dma_fence_add_callback(entity->dependency, &entity->cb,

View file

@@ -204,15 +204,16 @@ static void virtgpu_dma_buf_free_obj(struct drm_gem_object *obj)
{
struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(obj);
struct virtio_gpu_device *vgdev = obj->dev->dev_private;
struct dma_buf_attachment *attach = obj->import_attach;
if (drm_gem_is_imported(obj)) {
struct dma_buf *dmabuf = obj->dma_buf;
struct dma_buf *dmabuf = attach->dmabuf;
dma_resv_lock(dmabuf->resv, NULL);
virtgpu_dma_buf_unmap(bo);
dma_resv_unlock(dmabuf->resv);
dma_buf_detach(dmabuf, obj->import_attach);
dma_buf_detach(dmabuf, attach);
dma_buf_put(dmabuf);
}

View file

@@ -85,10 +85,10 @@ static int vmw_gem_vmap(struct drm_gem_object *obj, struct iosys_map *map)
int ret;
if (drm_gem_is_imported(obj)) {
ret = dma_buf_vmap(obj->dma_buf, map);
ret = dma_buf_vmap(obj->import_attach->dmabuf, map);
if (!ret) {
if (drm_WARN_ON(obj->dev, map->is_iomem)) {
dma_buf_vunmap(obj->dma_buf, map);
dma_buf_vunmap(obj->import_attach->dmabuf, map);
return -EIO;
}
}
@@ -102,7 +102,7 @@ static int vmw_gem_vmap(struct drm_gem_object *obj, struct iosys_map *map)
static void vmw_gem_vunmap(struct drm_gem_object *obj, struct iosys_map *map)
{
if (drm_gem_is_imported(obj))
dma_buf_vunmap(obj->dma_buf, map);
dma_buf_vunmap(obj->import_attach->dmabuf, map);
else
drm_gem_ttm_vunmap(obj, map);
}

View file

@@ -417,6 +417,8 @@ int xe_gt_init_early(struct xe_gt *gt)
if (err)
return err;
xe_mocs_init_early(gt);
return 0;
}
@@ -630,12 +632,6 @@ int xe_gt_init(struct xe_gt *gt)
if (err)
return err;
err = xe_gt_pagefault_init(gt);
if (err)
return err;
xe_mocs_init_early(gt);
err = xe_gt_sysfs_init(gt);
if (err)
return err;
@@ -644,6 +640,10 @@ int xe_gt_init(struct xe_gt *gt)
if (err)
return err;
err = xe_gt_pagefault_init(gt);
if (err)
return err;
err = xe_gt_idle_init(&gt->gtidle);
if (err)
return err;
@@ -839,6 +839,9 @@ static int gt_reset(struct xe_gt *gt)
goto err_out;
}
if (IS_SRIOV_PF(gt_to_xe(gt)))
xe_gt_sriov_pf_stop_prepare(gt);
xe_uc_gucrc_disable(&gt->uc);
xe_uc_stop_prepare(&gt->uc);
xe_gt_pagefault_reset(gt);

View file

@@ -172,6 +172,25 @@ void xe_gt_sriov_pf_sanitize_hw(struct xe_gt *gt, unsigned int vfid)
pf_clear_vf_scratch_regs(gt, vfid);
}
static void pf_cancel_restart(struct xe_gt *gt)
{
xe_gt_assert(gt, IS_SRIOV_PF(gt_to_xe(gt)));
if (cancel_work_sync(&gt->sriov.pf.workers.restart))
xe_gt_sriov_dbg_verbose(gt, "pending restart canceled!\n");
}
/**
* xe_gt_sriov_pf_stop_prepare() - Prepare to stop SR-IOV support.
* @gt: the &xe_gt
*
* This function can only be called on the PF.
*/
void xe_gt_sriov_pf_stop_prepare(struct xe_gt *gt)
{
pf_cancel_restart(gt);
}
static void pf_restart(struct xe_gt *gt)
{
struct xe_device *xe = gt_to_xe(gt);

View file

@@ -13,6 +13,7 @@ int xe_gt_sriov_pf_init_early(struct xe_gt *gt);
int xe_gt_sriov_pf_init(struct xe_gt *gt);
void xe_gt_sriov_pf_init_hw(struct xe_gt *gt);
void xe_gt_sriov_pf_sanitize_hw(struct xe_gt *gt, unsigned int vfid);
void xe_gt_sriov_pf_stop_prepare(struct xe_gt *gt);
void xe_gt_sriov_pf_restart(struct xe_gt *gt);
#else
static inline int xe_gt_sriov_pf_init_early(struct xe_gt *gt)
@@ -29,6 +30,10 @@ static inline void xe_gt_sriov_pf_init_hw(struct xe_gt *gt)
{
}
static inline void xe_gt_sriov_pf_stop_prepare(struct xe_gt *gt)
{
}
static inline void xe_gt_sriov_pf_restart(struct xe_gt *gt)
{
}

View file

@@ -2364,6 +2364,21 @@ int xe_gt_sriov_pf_config_restore(struct xe_gt *gt, unsigned int vfid,
return err;
}
static int pf_push_self_config(struct xe_gt *gt)
{
int err;
err = pf_push_full_vf_config(gt, PFID);
if (err) {
xe_gt_sriov_err(gt, "Failed to push self configuration (%pe)\n",
ERR_PTR(err));
return err;
}
xe_gt_sriov_dbg_verbose(gt, "self configuration completed\n");
return 0;
}
static void fini_config(void *arg)
{
struct xe_gt *gt = arg;
@@ -2387,9 +2402,17 @@ static void fini_config(void *arg)
int xe_gt_sriov_pf_config_init(struct xe_gt *gt)
{
struct xe_device *xe = gt_to_xe(gt);
int err;
xe_gt_assert(gt, IS_SRIOV_PF(xe));
mutex_lock(xe_gt_sriov_pf_master_mutex(gt));
err = pf_push_self_config(gt);
mutex_unlock(xe_gt_sriov_pf_master_mutex(gt));
if (err)
return err;
return devm_add_action_or_reset(xe->drm.dev, fini_config, gt);
}
@@ -2407,6 +2430,10 @@ void xe_gt_sriov_pf_config_restart(struct xe_gt *gt)
unsigned int n, total_vfs = xe_sriov_pf_get_totalvfs(gt_to_xe(gt));
unsigned int fail = 0, skip = 0;
mutex_lock(xe_gt_sriov_pf_master_mutex(gt));
pf_push_self_config(gt);
mutex_unlock(xe_gt_sriov_pf_master_mutex(gt));
for (n = 1; n <= total_vfs; n++) {
if (xe_gt_sriov_pf_config_is_empty(gt, n))
skip++;

View file

@@ -1817,8 +1817,8 @@ int xe_migrate_access_memory(struct xe_migrate *m, struct xe_bo *bo,
xe_bo_assert_held(bo);
/* Use bounce buffer for small access and unaligned access */
if (len & XE_CACHELINE_MASK ||
((uintptr_t)buf | offset) & XE_CACHELINE_MASK) {
if (!IS_ALIGNED(len, XE_CACHELINE_BYTES) ||
!IS_ALIGNED((unsigned long)buf + offset, XE_CACHELINE_BYTES)) {
int buf_offset = 0;
/*
@@ -1848,7 +1848,7 @@ int xe_migrate_access_memory(struct xe_migrate *m, struct xe_bo *bo,
err = xe_migrate_access_memory(m, bo,
offset & ~XE_CACHELINE_MASK,
(void *)ptr,
sizeof(bounce), 0);
sizeof(bounce), write);
if (err)
return err;
} else {

View file

@@ -110,13 +110,14 @@ static int emit_bb_start(u64 batch_addr, u32 ppgtt_flag, u32 *dw, int i)
return i;
}
static int emit_flush_invalidate(u32 *dw, int i)
static int emit_flush_invalidate(u32 addr, u32 val, u32 *dw, int i)
{
dw[i++] = MI_FLUSH_DW | MI_INVALIDATE_TLB | MI_FLUSH_DW_OP_STOREDW |
MI_FLUSH_IMM_DW | MI_FLUSH_DW_STORE_INDEX;
dw[i++] = LRC_PPHWSP_FLUSH_INVAL_SCRATCH_ADDR;
dw[i++] = 0;
MI_FLUSH_IMM_DW;
dw[i++] = addr | MI_FLUSH_DW_USE_GTT;
dw[i++] = 0;
dw[i++] = val;
return i;
}
@@ -397,23 +398,20 @@ static void __emit_job_gen12_render_compute(struct xe_sched_job *job,
static void emit_migration_job_gen12(struct xe_sched_job *job,
struct xe_lrc *lrc, u32 seqno)
{
u32 saddr = xe_lrc_start_seqno_ggtt_addr(lrc);
u32 dw[MAX_JOB_SIZE_DW], i = 0;
i = emit_copy_timestamp(lrc, dw, i);
i = emit_store_imm_ggtt(xe_lrc_start_seqno_ggtt_addr(lrc),
seqno, dw, i);
i = emit_store_imm_ggtt(saddr, seqno, dw, i);
dw[i++] = MI_ARB_ON_OFF | MI_ARB_DISABLE; /* Enabled again below */
i = emit_bb_start(job->ptrs[0].batch_addr, BIT(8), dw, i);
if (!IS_SRIOV_VF(gt_to_xe(job->q->gt))) {
/* XXX: Do we need this? Leaving for now. */
dw[i++] = preparser_disable(true);
i = emit_flush_invalidate(dw, i);
i = emit_flush_invalidate(saddr, seqno, dw, i);
dw[i++] = preparser_disable(false);
}
i = emit_bb_start(job->ptrs[1].batch_addr, BIT(8), dw, i);

View file

@@ -9,7 +9,7 @@ config HYPERV
select PARAVIRT
select X86_HV_CALLBACK_VECTOR if X86
select OF_EARLY_FLATTREE if OF
select SYSFB if !HYPERV_VTL_MODE
select SYSFB if EFI && !HYPERV_VTL_MODE
help
Select this option to run Linux as a Hyper-V client operating
system.

View file

@@ -18,6 +18,7 @@
#include <linux/uio.h>
#include <linux/interrupt.h>
#include <linux/set_memory.h>
#include <linux/export.h>
#include <asm/page.h>
#include <asm/mshyperv.h>

View file

@@ -20,6 +20,7 @@
#include <linux/delay.h>
#include <linux/cpu.h>
#include <linux/hyperv.h>
#include <linux/export.h>
#include <asm/mshyperv.h>
#include <linux/sched/isolation.h>

View file

@@ -519,7 +519,10 @@ void vmbus_set_event(struct vmbus_channel *channel)
else
WARN_ON_ONCE(1);
} else {
hv_do_fast_hypercall8(HVCALL_SIGNAL_EVENT, channel->sig_event);
u64 control = HVCALL_SIGNAL_EVENT;
control |= hv_nested ? HV_HYPERCALL_NESTED : 0;
hv_do_fast_hypercall8(control, channel->sig_event);
}
}
EXPORT_SYMBOL_GPL(vmbus_set_event);

View file

@@ -85,8 +85,10 @@ int hv_post_message(union hv_connection_id connection_id,
else
status = HV_STATUS_INVALID_PARAMETER;
} else {
status = hv_do_hypercall(HVCALL_POST_MESSAGE,
aligned_msg, NULL);
u64 control = HVCALL_POST_MESSAGE;
control |= hv_nested ? HV_HYPERCALL_NESTED : 0;
status = hv_do_hypercall(control, aligned_msg, NULL);
}
local_irq_restore(flags);

View file

@@ -6,6 +6,7 @@
#include <linux/slab.h>
#include <linux/cpuhotplug.h>
#include <linux/minmax.h>
#include <linux/export.h>
#include <asm/mshyperv.h>
/*

View file

@@ -13,6 +13,7 @@
#include <linux/mm.h>
#include <asm/mshyperv.h>
#include <linux/resume_user_mode.h>
#include <linux/export.h>
#include "mshv.h"

View file

@@ -9,6 +9,7 @@
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/export.h>
#include <asm/mshyperv.h>
#include "mshv_root.h"

View file

@@ -18,6 +18,7 @@
#include <linux/slab.h>
#include <linux/prefetch.h>
#include <linux/io.h>
#include <linux/export.h>
#include <asm/mshyperv.h>
#include "hyperv_vmbus.h"

View file

@@ -2509,7 +2509,7 @@ static int vmbus_acpi_add(struct platform_device *pdev)
return 0;
}
#endif
#ifndef HYPERVISOR_CALLBACK_VECTOR
static int vmbus_set_irq(struct platform_device *pdev)
{
struct irq_data *data;
@@ -2534,6 +2534,7 @@ static int vmbus_set_irq(struct platform_device *pdev)
return 0;
}
#endif
static int vmbus_device_add(struct platform_device *pdev)
{
@@ -2549,11 +2550,11 @@ static int vmbus_device_add(struct platform_device *pdev)
if (ret)
return ret;
if (!__is_defined(HYPERVISOR_CALLBACK_VECTOR))
#ifndef HYPERVISOR_CALLBACK_VECTOR
ret = vmbus_set_irq(pdev);
if (ret)
return ret;
#endif
for_each_of_range(&parser, &range) {
struct resource *res;

View file

@@ -89,6 +89,7 @@ struct ccp_device {
struct mutex mutex; /* whenever buffer is used, lock before send_usb_cmd */
u8 *cmd_buffer;
u8 *buffer;
int buffer_recv_size; /* number of received bytes in buffer */
int target[6];
DECLARE_BITMAP(temp_cnct, NUM_TEMP_SENSORS);
DECLARE_BITMAP(fan_cnct, NUM_FANS);
@@ -146,6 +147,9 @@ static int send_usb_cmd(struct ccp_device *ccp, u8 command, u8 byte1, u8 byte2,
if (!t)
return -ETIMEDOUT;
if (ccp->buffer_recv_size != IN_BUFFER_SIZE)
return -EPROTO;
return ccp_get_errno(ccp);
}
@@ -157,6 +161,7 @@ static int ccp_raw_event(struct hid_device *hdev, struct hid_report *report, u8
spin_lock(&ccp->wait_input_report_lock);
if (!completion_done(&ccp->wait_input_report)) {
memcpy(ccp->buffer, data, min(IN_BUFFER_SIZE, size));
ccp->buffer_recv_size = size;
complete_all(&ccp->wait_input_report);
}
spin_unlock(&ccp->wait_input_report_lock);

View file

@@ -97,7 +97,7 @@
* Power (mW) = 0.2 * register value * 20000 / rshunt / 4 * gain
* (Specific for SQ52206)
* Power (mW) = 0.24 * register value * 20000 / rshunt / 4 * gain
* Energy (mJ) = 16 * 0.24 * register value * 20000 / rshunt / 4 * gain
* Energy (uJ) = 16 * 0.24 * register value * 20000 / rshunt / 4 * gain * 1000
*/
#define INA238_CALIBRATION_VALUE 16384
#define INA238_FIXED_SHUNT 20000
@@ -500,9 +500,9 @@ static ssize_t energy1_input_show(struct device *dev,
if (ret)
return ret;
/* result in mJ */
energy = div_u64(regval * INA238_FIXED_SHUNT * data->gain * 16 *
data->config->power_calculate_factor, 4 * 100 * data->rshunt);
/* result in uJ */
energy = div_u64(regval * INA238_FIXED_SHUNT * data->gain * 16 * 10 *
data->config->power_calculate_factor, 4 * data->rshunt);
return sysfs_emit(buf, "%llu\n", energy);
}

View file

@@ -226,15 +226,15 @@ static int ucd9000_gpio_set(struct gpio_chip *gc, unsigned int offset,
}
if (value) {
if (ret & UCD9000_GPIO_CONFIG_STATUS)
if (ret & UCD9000_GPIO_CONFIG_OUT_VALUE)
return 0;
ret |= UCD9000_GPIO_CONFIG_STATUS;
ret |= UCD9000_GPIO_CONFIG_OUT_VALUE;
} else {
if (!(ret & UCD9000_GPIO_CONFIG_STATUS))
if (!(ret & UCD9000_GPIO_CONFIG_OUT_VALUE))
return 0;
ret &= ~UCD9000_GPIO_CONFIG_STATUS;
ret &= ~UCD9000_GPIO_CONFIG_OUT_VALUE;
}
ret |= UCD9000_GPIO_CONFIG_ENABLE;

Some files were not shown because too many files have changed in this diff