From patchwork Fri Jun 1 15:27:16 2018
From: Marc Zyngier
To: Paolo Bonzini, Radim Krčmář
Subject: [PATCH 02/33] arm64: fpsimd: Fix TIF_FOREIGN_FPSTATE after invalidating cpu regs
Date: Fri, 1 Jun 2018 16:27:16 +0100
Message-Id: <20180601152747.23613-3-marc.zyngier@arm.com>
In-Reply-To: <20180601152747.23613-1-marc.zyngier@arm.com>
From: Dave Martin

fpsimd_last_state.st is set to NULL as a way of indicating that current's
FPSIMD registers are no longer loaded in the cpu.  In particular, this is
done when the kernel temporarily uses or clobbers the FPSIMD registers for
its own purposes, as in CPU PM or kernel-mode NEON, resulting in them being
populated with garbage data not belonging to a task.

Commit 17eed27b02da ("arm64/sve: KVM: Prevent guests from using SVE")
factors this operation out as a new helper fpsimd_flush_cpu_state() to make
it clearer what is being done here, and on SVE systems this helper is now
used, via kvm_fpsimd_flush_cpu_state(), to invalidate the registers after
KVM has run a vcpu.  The reason for this is that KVM does not yet
understand how to restore the full host SVE registers itself after loading
the guest FPSIMD context into them.

This exposes a particular problem: if fpsimd_last_state.st is set to NULL
without also setting TIF_FOREIGN_FPSTATE, the kernel may continue to think
that current's FPSIMD registers are live even though they have actually
been clobbered.

Prior to the aforementioned commit, the only path where fpsimd_last_state.st
is set to NULL without setting TIF_FOREIGN_FPSTATE is when
kernel_neon_begin() is called by a kernel thread (where current->mm can be
NULL).  This does not matter, because the only harm is that at
context-switch time fpsimd_thread_switch() may unnecessarily save the
FPSIMD registers back to current's thread_struct (even though kernel
threads are not considered to have any FPSIMD context of their own and the
registers will never be reloaded).

Note that although CPU_PM_ENTER lacks the TIF_FOREIGN_FPSTATE setting,
every CPU passing through that path must subsequently pass through
CPU_PM_EXIT before it can re-enter the kernel proper.  CPU_PM_EXIT sets the
flag.

The sve_flush_cpu_state() function added by commit 17eed27b02da also lacks
the proper maintenance of TIF_FOREIGN_FPSTATE.  This may cause the bits of
a host task's SVE registers that do not alias the FPSIMD register file to
spontaneously appear zeroed if a KVM vcpu runs in the same task in the
meantime.  Although this effect is hidden by the fact that the non-FPSIMD
bits of the SVE registers are zeroed by a syscall anyway, it is doubtless a
bad idea to rely on these different code paths interacting correctly under
future maintenance.

This patch makes TIF_FOREIGN_FPSTATE an unconditional side-effect of
fpsimd_flush_cpu_state(), and removes the set_thread_flag() calls that
become redundant as a result.  This ensures that TIF_FOREIGN_FPSTATE cannot
remain clear if the FPSIMD state in the FPSIMD registers is invalid.
Signed-off-by: Dave Martin
Reviewed-by: Christoffer Dall
Reviewed-by: Alex Bennée
Reviewed-by: Catalin Marinas
Cc: Will Deacon
Cc: Ard Biesheuvel
Signed-off-by: Marc Zyngier
---
 arch/arm64/kernel/fpsimd.c | 7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
index 87a35364e750..12e1c967c7b5 100644
--- a/arch/arm64/kernel/fpsimd.c
+++ b/arch/arm64/kernel/fpsimd.c
@@ -1067,6 +1067,7 @@ void fpsimd_flush_task_state(struct task_struct *t)
 static inline void fpsimd_flush_cpu_state(void)
 {
 	__this_cpu_write(fpsimd_last_state.st, NULL);
+	set_thread_flag(TIF_FOREIGN_FPSTATE);
 }
 
 /*
@@ -1121,10 +1122,8 @@ void kernel_neon_begin(void)
 	__this_cpu_write(kernel_neon_busy, true);
 
 	/* Save unsaved task fpsimd state, if any: */
-	if (current->mm) {
+	if (current->mm)
 		task_fpsimd_save();
-		set_thread_flag(TIF_FOREIGN_FPSTATE);
-	}
 
 	/* Invalidate any task state remaining in the fpsimd regs: */
 	fpsimd_flush_cpu_state();
@@ -1251,8 +1250,6 @@ static int fpsimd_cpu_pm_notifier(struct notifier_block *self,
 		fpsimd_flush_cpu_state();
 		break;
 	case CPU_PM_EXIT:
-		if (current->mm)
-			set_thread_flag(TIF_FOREIGN_FPSTATE);
 		break;
 	case CPU_PM_ENTER_FAILED:
 	default:
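
[Not part of the patch.] The invariant the change enforces is that the
TIF_FOREIGN_FPSTATE flag, not the per-cpu fpsimd_last_state pointer, is what
the fast paths trust before reusing the registers.  A small stand-alone C
model of that invariant (plain userspace C with simplified names and a
single-cpu setup, not the kernel code itself):

    #include <stdbool.h>
    #include <stdio.h>

    static const char *fpsimd_last_state;  /* which task's regs are loaded */
    static bool tif_foreign_fpstate;       /* "regs do not belong to current" */

    /* model of fpsimd_flush_cpu_state(); 'set_flag' contrasts old vs. new */
    static void flush_cpu_state(bool set_flag)
    {
            fpsimd_last_state = NULL;
            if (set_flag)
                    tif_foreign_fpstate = true;
    }

    int main(void)
    {
            fpsimd_last_state = "current";      /* current's state is loaded */
            tif_foreign_fpstate = false;

            flush_cpu_state(false);             /* old behaviour after a vcpu run */
            if (!tif_foreign_fpstate)
                    printf("bug: clobbered registers would still be trusted\n");

            flush_cpu_state(true);              /* behaviour with this patch */
            if (tif_foreign_fpstate)
                    printf("ok: state is reloaded before returning to user\n");
            return 0;
    }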
From patchwork Fri Jun 1 15:27:17 2018
From: Marc Zyngier
To: Paolo Bonzini, Radim Krčmář
Subject: [PATCH 03/33] thread_info: Add update_thread_flag() helpers
Date: Fri, 1 Jun 2018 16:27:17 +0100
Message-Id: <20180601152747.23613-4-marc.zyngier@arm.com>
In-Reply-To: <20180601152747.23613-1-marc.zyngier@arm.com>

From: Dave Martin

There are a number of bits of code sprinkled around the kernel to set a
thread flag if a certain condition is true, and clear it otherwise.
To help make those call sites terser and less cumbersome, this patch adds a
new family of thread flag manipulators

	update*_thread_flag([...,] flag, cond)

which do the equivalent of:

	if (cond)
		set*_thread_flag([...,] flag);
	else
		clear*_thread_flag([...,] flag);

Signed-off-by: Dave Martin
Reviewed-by: Alex Bennée
Acked-by: Steven Rostedt (VMware)
Acked-by: Marc Zyngier
Acked-by: Catalin Marinas
Acked-by: Peter Zijlstra (Intel)
Cc: Ingo Molnar
Cc: Oleg Nesterov
Signed-off-by: Marc Zyngier
---
 include/linux/sched.h       |  6 ++++++
 include/linux/thread_info.h | 11 +++++++++++
 2 files changed, 17 insertions(+)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index b3d697f3b573..c2c305199721 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1578,6 +1578,12 @@ static inline void clear_tsk_thread_flag(struct task_struct *tsk, int flag)
 	clear_ti_thread_flag(task_thread_info(tsk), flag);
 }
 
+static inline void update_tsk_thread_flag(struct task_struct *tsk, int flag,
+					  bool value)
+{
+	update_ti_thread_flag(task_thread_info(tsk), flag, value);
+}
+
 static inline int test_and_set_tsk_thread_flag(struct task_struct *tsk, int flag)
 {
 	return test_and_set_ti_thread_flag(task_thread_info(tsk), flag);
diff --git a/include/linux/thread_info.h b/include/linux/thread_info.h
index cf2862bd134a..8d8821b3689a 100644
--- a/include/linux/thread_info.h
+++ b/include/linux/thread_info.h
@@ -60,6 +60,15 @@ static inline void clear_ti_thread_flag(struct thread_info *ti, int flag)
 	clear_bit(flag, (unsigned long *)&ti->flags);
 }
 
+static inline void update_ti_thread_flag(struct thread_info *ti, int flag,
+					 bool value)
+{
+	if (value)
+		set_ti_thread_flag(ti, flag);
+	else
+		clear_ti_thread_flag(ti, flag);
+}
+
 static inline int test_and_set_ti_thread_flag(struct thread_info *ti, int flag)
 {
 	return test_and_set_bit(flag, (unsigned long *)&ti->flags);
@@ -79,6 +88,8 @@ static inline int test_ti_thread_flag(struct thread_info *ti, int flag)
 	set_ti_thread_flag(current_thread_info(), flag)
#define clear_thread_flag(flag) \
 	clear_ti_thread_flag(current_thread_info(), flag)
+#define update_thread_flag(flag, value) \
+	update_ti_thread_flag(current_thread_info(), flag, value)
 #define test_and_set_thread_flag(flag) \
 	test_and_set_ti_thread_flag(current_thread_info(), flag)
 #define test_and_clear_thread_flag(flag) \
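
[Not part of the patch.] A stand-alone illustration of the helper's
semantics (ordinary userspace C; the real helpers operate on thread_info
flag bits with set_bit()/clear_bit(), and TIF_EXAMPLE below is a made-up
flag number):

    #include <stdbool.h>
    #include <stdio.h>

    static unsigned long flags;

    static void update_flag(int flag, bool value)  /* mirrors update_thread_flag() */
    {
            if (value)
                    flags |= 1UL << flag;
            else
                    flags &= ~(1UL << flag);
    }

    int main(void)
    {
            const int TIF_EXAMPLE = 3;          /* hypothetical flag number */
            bool cond = true;

            /* replaces: if (cond) set_thread_flag(f); else clear_thread_flag(f); */
            update_flag(TIF_EXAMPLE, cond);
            printf("flag set: %lu\n", (flags >> TIF_EXAMPLE) & 1);

            update_flag(TIF_EXAMPLE, false);
            printf("flag set: %lu\n", (flags >> TIF_EXAMPLE) & 1);
            return 0;
    }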
From patchwork Fri Jun 1 15:27:18 2018
From: Marc Zyngier
To: Paolo Bonzini, Radim Krčmář
Subject: [PATCH 04/33] arm64: Use update{,_tsk}_thread_flag()
Date: Fri, 1 Jun 2018 16:27:18 +0100
Message-Id: <20180601152747.23613-5-marc.zyngier@arm.com>
In-Reply-To: <20180601152747.23613-1-marc.zyngier@arm.com>

From: Dave Martin

This patch uses the new update_thread_flag() helpers to simplify a couple
of if () set; else clear; constructs.

No functional change.
Signed-off-by: Dave Martin
Reviewed-by: Alex Bennée
Acked-by: Marc Zyngier
Acked-by: Catalin Marinas
Cc: Will Deacon
Signed-off-by: Marc Zyngier
---
 arch/arm64/kernel/fpsimd.c | 18 ++++++++----------
 1 file changed, 8 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
index 12e1c967c7b5..9d853732f9f4 100644
--- a/arch/arm64/kernel/fpsimd.c
+++ b/arch/arm64/kernel/fpsimd.c
@@ -618,10 +618,8 @@ int sve_set_vector_length(struct task_struct *task,
 	task->thread.sve_vl = vl;
 
 out:
-	if (flags & PR_SVE_VL_INHERIT)
-		set_tsk_thread_flag(task, TIF_SVE_VL_INHERIT);
-	else
-		clear_tsk_thread_flag(task, TIF_SVE_VL_INHERIT);
+	update_tsk_thread_flag(task, TIF_SVE_VL_INHERIT,
+			       flags & PR_SVE_VL_INHERIT);
 
 	return 0;
 }
@@ -910,12 +908,12 @@ void fpsimd_thread_switch(struct task_struct *next)
 		 * the TIF_FOREIGN_FPSTATE flag so the state will be loaded
 		 * upon the next return to userland.
 		 */
-		if (__this_cpu_read(fpsimd_last_state.st) ==
-			&next->thread.uw.fpsimd_state
-		    && next->thread.fpsimd_cpu == smp_processor_id())
-			clear_tsk_thread_flag(next, TIF_FOREIGN_FPSTATE);
-		else
-			set_tsk_thread_flag(next, TIF_FOREIGN_FPSTATE);
+		bool wrong_task = __this_cpu_read(fpsimd_last_state.st) !=
+					&next->thread.uw.fpsimd_state;
+		bool wrong_cpu = next->thread.fpsimd_cpu != smp_processor_id();
+
+		update_tsk_thread_flag(next, TIF_FOREIGN_FPSTATE,
+				       wrong_task || wrong_cpu);
 	}
 }
From patchwork Fri Jun 1 15:27:19 2018
From: Marc Zyngier
To: Paolo Bonzini, Radim Krčmář
Subject: [PATCH 05/33] KVM: arm/arm64: Introduce kvm_arch_vcpu_run_pid_change
Date: Fri, 1 Jun 2018 16:27:19 +0100
Message-Id: <20180601152747.23613-6-marc.zyngier@arm.com>
In-Reply-To: <20180601152747.23613-1-marc.zyngier@arm.com>

From: Christoffer Dall

KVM/ARM differs from other architectures in having to maintain an
additional virtual address space from that of the host and the guest,
because we split the execution of KVM across both EL1 and EL2.

This results in a need to explicitly map data structures into EL2 (hyp)
which are accessed from the hyp code.  As we are about to be more clever
with our FPSIMD handling on arm64, which stores data in the task struct
and uses thread_info flags, we will have to map parts of the currently
executing task struct into the EL2 virtual address space.

However, we don't want to do this on every KVM_RUN, because it is a fairly
expensive operation to walk the page tables, and the common execution mode
is to map a single thread to a VCPU.  By introducing a hook that
architectures can select with HAVE_KVM_VCPU_RUN_PID_CHANGE, we do not
introduce overhead for other architectures, but have a simple way to only
map the data we need when required for arm64.

This patch introduces the framework only, and wires it up in the arm/arm64
KVM common code.

No functional change.
Signed-off-by: Christoffer Dall
Signed-off-by: Dave Martin
Reviewed-by: Marc Zyngier
Reviewed-by: Alex Bennée
Signed-off-by: Marc Zyngier
---
 include/linux/kvm_host.h | 9 +++++++++
 virt/kvm/Kconfig         | 3 +++
 virt/kvm/kvm_main.c      | 7 ++++++-
 3 files changed, 18 insertions(+), 1 deletion(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 6930c63126c7..4268ace60bf1 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1276,4 +1276,13 @@ static inline long kvm_arch_vcpu_async_ioctl(struct file *filp,
 void kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm,
 		unsigned long start, unsigned long end);
 
+#ifdef CONFIG_HAVE_KVM_VCPU_RUN_PID_CHANGE
+int kvm_arch_vcpu_run_pid_change(struct kvm_vcpu *vcpu);
+#else
+static inline int kvm_arch_vcpu_run_pid_change(struct kvm_vcpu *vcpu)
+{
+	return 0;
+}
+#endif /* CONFIG_HAVE_KVM_VCPU_RUN_PID_CHANGE */
+
 #endif
diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig
index cca7e065a075..72143cfaf6ec 100644
--- a/virt/kvm/Kconfig
+++ b/virt/kvm/Kconfig
@@ -54,3 +54,6 @@ config HAVE_KVM_IRQ_BYPASS
 
 config HAVE_KVM_VCPU_ASYNC_IOCTL
        bool
+
+config HAVE_KVM_VCPU_RUN_PID_CHANGE
+       bool
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index c7b2e927f699..c32e2407713d 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2550,8 +2550,13 @@ static long kvm_vcpu_ioctl(struct file *filp,
 	oldpid = rcu_access_pointer(vcpu->pid);
 	if (unlikely(oldpid != current->pids[PIDTYPE_PID].pid)) {
 		/* The thread running this VCPU changed. */
-		struct pid *newpid = get_task_pid(current, PIDTYPE_PID);
+		struct pid *newpid;
 
+		r = kvm_arch_vcpu_run_pid_change(vcpu);
+		if (r)
+			break;
+
+		newpid = get_task_pid(current, PIDTYPE_PID);
 		rcu_assign_pointer(vcpu->pid, newpid);
 		if (oldpid)
 			synchronize_rcu();
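
[Not part of the patch.] A rough sketch of how an architecture would opt in
to the new hook.  The arm64 wiring arrives later in this series; the
Kconfig fragment and the objects mapped below are illustrative assumptions,
not the series' final code:

    In the architecture's KVM Kconfig:

        config KVM
                ...
                select HAVE_KVM_VCPU_RUN_PID_CHANGE

    And a hook along these lines, mapping whatever per-thread data hyp will
    need into the EL2 address space:

        int kvm_arch_vcpu_run_pid_change(struct kvm_vcpu *vcpu)
        {
                /* illustrative: make current's thread_info visible at EL2 */
                return create_hyp_mappings(current_thread_info(),
                                           current_thread_info() + 1,
                                           PAGE_HYP);
        }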
From patchwork Fri Jun 1 15:27:20 2018
From: Marc Zyngier
To: Paolo Bonzini, Radim Krčmář
Subject: [PATCH 06/33] KVM: arm64: Convert lazy FPSIMD context switch trap to C
Date: Fri, 1 Jun 2018 16:27:20 +0100
Message-Id: <20180601152747.23613-7-marc.zyngier@arm.com>
In-Reply-To: <20180601152747.23613-1-marc.zyngier@arm.com>

From: Dave Martin

To make the lazy FPSIMD context switch trap code easier to hack on, this
patch converts it to C.

This is not amazingly efficient, but the trap should typically only be
taken once per host context switch.

Signed-off-by: Dave Martin
Reviewed-by: Marc Zyngier
Reviewed-by: Alex Bennée
Signed-off-by: Marc Zyngier
---
 arch/arm64/kvm/hyp/entry.S  | 57 ++++++++++++++-----------------------
 arch/arm64/kvm/hyp/switch.c | 24 ++++++++++++++++
 2 files changed, 46 insertions(+), 35 deletions(-)

diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
index e41a161d313a..40f349bc1079 100644
--- a/arch/arm64/kvm/hyp/entry.S
+++ b/arch/arm64/kvm/hyp/entry.S
@@ -172,40 +172,27 @@ ENTRY(__fpsimd_guest_restore)
 	// x1: vcpu
 	// x2-x29,lr: vcpu regs
 	// vcpu x0-x1 on the stack
-	stp	x2, x3, [sp, #-16]!
-	stp	x4, lr, [sp, #-16]!
-
-alternative_if_not ARM64_HAS_VIRT_HOST_EXTN
-	mrs	x2, cptr_el2
-	bic	x2, x2, #CPTR_EL2_TFP
-	msr	cptr_el2, x2
-alternative_else
-	mrs	x2, cpacr_el1
-	orr	x2, x2, #CPACR_EL1_FPEN
-	msr	cpacr_el1, x2
-alternative_endif
-	isb
-
-	mov	x3, x1
-
-	ldr	x0, [x3, #VCPU_HOST_CONTEXT]
-	kern_hyp_va x0
-	add	x0, x0, #CPU_GP_REG_OFFSET(CPU_FP_REGS)
-	bl	__fpsimd_save_state
-
-	add	x2, x3, #VCPU_CONTEXT
-	add	x0, x2, #CPU_GP_REG_OFFSET(CPU_FP_REGS)
-	bl	__fpsimd_restore_state
-
-	// Skip restoring fpexc32 for AArch64 guests
-	mrs	x1, hcr_el2
-	tbnz	x1, #HCR_RW_SHIFT, 1f
-	ldr	x4, [x3, #VCPU_FPEXC32_EL2]
-	msr	fpexc32_el2, x4
-1:
-	ldp	x4, lr, [sp], #16
-	ldp	x2, x3, [sp], #16
-	ldp	x0, x1, [sp], #16
-
+	stp	x2, x3, [sp, #-144]!
+	stp	x4, x5, [sp, #16]
+	stp	x6, x7, [sp, #32]
+	stp	x8, x9, [sp, #48]
+	stp	x10, x11, [sp, #64]
+	stp	x12, x13, [sp, #80]
+	stp	x14, x15, [sp, #96]
+	stp	x16, x17, [sp, #112]
+	stp	x18, lr, [sp, #128]
+
+	bl	__hyp_switch_fpsimd
+
+	ldp	x4, x5, [sp, #16]
+	ldp	x6, x7, [sp, #32]
+	ldp	x8, x9, [sp, #48]
+	ldp	x10, x11, [sp, #64]
+	ldp	x12, x13, [sp, #80]
+	ldp	x14, x15, [sp, #96]
+	ldp	x16, x17, [sp, #112]
+	ldp	x18, lr, [sp, #128]
+	ldp	x0, x1, [sp, #144]
+	ldp	x2, x3, [sp], #160
 	eret
 ENDPROC(__fpsimd_guest_restore)
diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index d9645236e474..c0796c4d93a5 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -318,6 +318,30 @@ static bool __hyp_text __skip_instr(struct kvm_vcpu *vcpu)
 	}
 }
 
+void __hyp_text __hyp_switch_fpsimd(u64 esr __always_unused,
+				    struct kvm_vcpu *vcpu)
+{
+	kvm_cpu_context_t *host_ctxt;
+
+	if (has_vhe())
+		write_sysreg(read_sysreg(cpacr_el1) | CPACR_EL1_FPEN,
+			     cpacr_el1);
+	else
+		write_sysreg(read_sysreg(cptr_el2) & ~(u64)CPTR_EL2_TFP,
+			     cptr_el2);
+
+	isb();
+
+	host_ctxt = kern_hyp_va(vcpu->arch.host_cpu_context);
+	__fpsimd_save_state(&host_ctxt->gp_regs.fp_regs);
+	__fpsimd_restore_state(&vcpu->arch.ctxt.gp_regs.fp_regs);
+
+	/* Skip restoring fpexc32 for AArch64 guests */
+	if (!(read_sysreg(hcr_el2) & HCR_RW))
+		write_sysreg(vcpu->arch.ctxt.sys_regs[FPEXC32_EL2],
+			     fpexc32_el2);
+}
+
 /*
  * Return true when we were able to fixup the guest exit and should return to
  * the guest, false when we should restore the host state and return to the
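
[Not part of the patch.] The register spills around the new bl are dictated
by the procedure call standard rather than by anything FPSIMD-specific; a
summary of the calling-convention assumption, not text from the patch:

        /*
         * __hyp_switch_fpsimd() is ordinary C, so under AAPCS64 it may
         * clobber the caller-saved registers (roughly x0-x18 and lr) but
         * must preserve x19-x28, fp and sp.  The guest's GPRs are still
         * live in the CPU registers when this trap is taken, so everything
         * the call may trash is parked on the stack around the call and
         * restored before the eret back into the guest.
         */
        void __hyp_switch_fpsimd(u64 esr, struct kvm_vcpu *vcpu);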
From patchwork Fri Jun 1 15:27:21 2018
From: Marc Zyngier
To: Paolo Bonzini, Radim Krčmář
Subject: [PATCH 07/33] arm64: fpsimd: Generalise context saving for non-task contexts
Date: Fri, 1 Jun 2018 16:27:21 +0100
Message-Id: <20180601152747.23613-8-marc.zyngier@arm.com>
In-Reply-To: <20180601152747.23613-1-marc.zyngier@arm.com>

From: Dave Martin

In preparation for allowing non-task (i.e., KVM vcpu) FPSIMD contexts to be
handled by the fpsimd common code, this patch adapts task_fpsimd_save() to
save back the currently loaded context, removing the explicit dependency on
current.

The relevant storage to write back to in memory is now found by examining
the fpsimd_last_state percpu struct.
fpsimd_save() does nothing unless TIF_FOREIGN_FPSTATE is clear, and
fpsimd_last_state is updated under local_bh_disable() or
local_irq_disable() everywhere that TIF_FOREIGN_FPSTATE is cleared: thus,
fpsimd_save() will write back to the correct storage for the loaded
context.

No functional change.

Signed-off-by: Dave Martin
Reviewed-by: Alex Bennée
Acked-by: Marc Zyngier
Acked-by: Catalin Marinas
Signed-off-by: Marc Zyngier
---
 arch/arm64/kernel/fpsimd.c | 26 ++++++++++++++------------
 1 file changed, 14 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
index 9d853732f9f4..2d9a9e8ed826 100644
--- a/arch/arm64/kernel/fpsimd.c
+++ b/arch/arm64/kernel/fpsimd.c
@@ -270,13 +270,16 @@ static void task_fpsimd_load(void)
 }
 
 /*
- * Ensure current's FPSIMD/SVE storage in thread_struct is up to date
- * with respect to the CPU registers.
+ * Ensure FPSIMD/SVE storage in memory for the loaded context is up to
+ * date with respect to the CPU registers.
  *
  * Softirqs (and preemption) must be disabled.
 */
-static void task_fpsimd_save(void)
+static void fpsimd_save(void)
 {
+	struct user_fpsimd_state *st = __this_cpu_read(fpsimd_last_state.st);
+	/* set by fpsimd_bind_to_cpu() */
+
 	WARN_ON(!in_softirq() && !irqs_disabled());
 
 	if (!test_thread_flag(TIF_FOREIGN_FPSTATE)) {
@@ -291,10 +294,9 @@ static void task_fpsimd_save(void)
 			return;
 		}
 
-		sve_save_state(sve_pffr(current),
-			       &current->thread.uw.fpsimd_state.fpsr);
+		sve_save_state(sve_pffr(current), &st->fpsr);
 	} else
-		fpsimd_save_state(&current->thread.uw.fpsimd_state);
+		fpsimd_save_state(st);
 	}
 }
 
@@ -598,7 +600,7 @@ int sve_set_vector_length(struct task_struct *task,
 	if (task == current) {
 		local_bh_disable();
 
-		task_fpsimd_save();
+		fpsimd_save();
 		set_thread_flag(TIF_FOREIGN_FPSTATE);
 	}
 
@@ -837,7 +839,7 @@ asmlinkage void do_sve_acc(unsigned int esr, struct pt_regs *regs)
 
 	local_bh_disable();
 
-	task_fpsimd_save();
+	fpsimd_save();
 	fpsimd_to_sve(current);
 
 	/* Force ret_to_user to reload the registers: */
@@ -898,7 +900,7 @@ void fpsimd_thread_switch(struct task_struct *next)
 	 * 'current'.
 	 */
 	if (current->mm)
-		task_fpsimd_save();
+		fpsimd_save();
 
 	if (next->mm) {
 		/*
@@ -980,7 +982,7 @@ void fpsimd_preserve_current_state(void)
 		return;
 
 	local_bh_disable();
-	task_fpsimd_save();
+	fpsimd_save();
 	local_bh_enable();
 }
 
@@ -1121,7 +1123,7 @@ void kernel_neon_begin(void)
 
 	/* Save unsaved task fpsimd state, if any: */
 	if (current->mm)
-		task_fpsimd_save();
+		fpsimd_save();
 
 	/* Invalidate any task state remaining in the fpsimd regs: */
 	fpsimd_flush_cpu_state();
@@ -1244,7 +1246,7 @@ static int fpsimd_cpu_pm_notifier(struct notifier_block *self,
 	switch (cmd) {
 	case CPU_PM_ENTER:
 		if (current->mm)
-			task_fpsimd_save();
+			fpsimd_save();
 		fpsimd_flush_cpu_state();
 		break;
 	case CPU_PM_EXIT:
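
[Not part of the patch.] The point of the change is that the save path no
longer assumes the loaded context belongs to current; a minimal stand-alone
model of the new contract (plain C, with an int standing in for the
register file):

    #include <stdbool.h>
    #include <stdio.h>

    struct ctx { const char *owner; int regs; };   /* stand-in for FPSIMD state */

    static struct ctx *last_state;                 /* models fpsimd_last_state.st */
    static int cpu_regs = 42;                      /* stand-in for V0-V31 */

    static void fpsimd_save_model(bool tif_foreign)
    {
            if (tif_foreign)
                    return;                 /* registers belong to no one */
            last_state->regs = cpu_regs;    /* write back to the bound context */
    }

    int main(void)
    {
            struct ctx vcpu = { "vcpu", 0 };   /* with KVM, need not be current */

            last_state = &vcpu;
            fpsimd_save_model(false);
            printf("%s context saved: %d\n", vcpu.owner, vcpu.regs);
            return 0;
    }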
From patchwork Fri Jun 1 15:27:22 2018
From: Marc Zyngier
To: Paolo Bonzini, Radim Krčmář
Subject: [PATCH 08/33] arm64: fpsimd: Avoid FPSIMD context leakage for the init task
Date: Fri, 1 Jun 2018 16:27:22 +0100
Message-Id: <20180601152747.23613-9-marc.zyngier@arm.com>
In-Reply-To: <20180601152747.23613-1-marc.zyngier@arm.com>

From: Dave Martin

The init task is started with thread_flags equal to 0, which means that
TIF_FOREIGN_FPSTATE is initially clear.

It is theoretically possible (if unlikely) that the init task could reach
userspace without ever being scheduled out.  If this occurs, data left in
the FPSIMD registers by the kernel could be exposed.

This patch fixes this anomaly by ensuring that the init task's initial
TIF_FOREIGN_FPSTATE is set.
Signed-off-by: Dave Martin
Fixes: 005f78cd8849 ("arm64: defer reloading a task's FPSIMD state to userland resume")
Reviewed-by: Catalin Marinas
Reviewed-by: Alex Bennée
Cc: Will Deacon
Cc: Ard Biesheuvel
Signed-off-by: Marc Zyngier
---
 arch/arm64/include/asm/thread_info.h | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/include/asm/thread_info.h b/arch/arm64/include/asm/thread_info.h
index 740aa03c5f0d..af271f9a6c9f 100644
--- a/arch/arm64/include/asm/thread_info.h
+++ b/arch/arm64/include/asm/thread_info.h
@@ -45,12 +45,6 @@ struct thread_info {
 	int		preempt_count;	/* 0 => preemptable, <0 => bug */
 };
 
-#define INIT_THREAD_INFO(tsk)						\
-{									\
-	.preempt_count	= INIT_PREEMPT_COUNT,				\
-	.addr_limit	= KERNEL_DS,					\
-}
-
 #define thread_saved_pc(tsk)	\
 	((unsigned long)(tsk->thread.cpu_context.pc))
 #define thread_saved_sp(tsk)	\
@@ -117,5 +111,12 @@ void arch_release_task_struct(struct task_struct *tsk);
 				 _TIF_SYSCALL_TRACEPOINT | _TIF_SECCOMP | \
 				 _TIF_NOHZ)
 
+#define INIT_THREAD_INFO(tsk)						\
+{									\
+	.flags		= _TIF_FOREIGN_FPSTATE,				\
+	.preempt_count	= INIT_PREEMPT_COUNT,				\
+	.addr_limit	= KERNEL_DS,					\
+}
+
 #endif /* __KERNEL__ */
 #endif /* __ASM_THREAD_INFO_H */
From patchwork Fri Jun 1 15:27:23 2018
From: Marc Zyngier
To: Paolo Bonzini, Radim Krčmář
Subject: [PATCH 09/33] arm64: fpsimd: Eliminate task->mm checks
Date: Fri, 1 Jun 2018 16:27:23 +0100
Message-Id: <20180601152747.23613-10-marc.zyngier@arm.com>
In-Reply-To: <20180601152747.23613-1-marc.zyngier@arm.com>

From: Dave Martin

Currently the FPSIMD handling code uses the condition task->mm == NULL as a
hint that task has no FPSIMD register context.

The ->mm check is only there to filter out tasks that cannot possibly have
FPSIMD context loaded, for optimisation purposes.  Also,
TIF_FOREIGN_FPSTATE must always be checked anyway before saving FPSIMD
context back to memory.  For these reasons, the ->mm checks are not useful,
providing that TIF_FOREIGN_FPSTATE is maintained in a consistent way for
all threads.

The context switch logic is already deliberately optimised to defer reloads
of the regs until ret_to_user (or sigreturn as a special case), and save
them only if they have been previously loaded.  These paths are the only
places where the wrong_task and wrong_cpu conditions can be made false, by
calling fpsimd_bind_task_to_cpu().  Kernel threads by definition never
reach these paths.  As a result, the wrong_task and wrong_cpu tests in
fpsimd_thread_switch() will always yield true for kernel threads.

This patch removes the redundant checks and special-case code, ensuring
that TIF_FOREIGN_FPSTATE is set whenever a kernel thread is scheduled in,
and ensures that this flag is set for the init task.  The
fpsimd_flush_task_state() call already present in copy_thread() ensures the
same for any new task.

With TIF_FOREIGN_FPSTATE always set for kernel threads, this patch ensures
that no extra context save work is added for kernel threads, and eliminates
the redundant context saving that may currently occur for kernel threads
that have acquired an mm via use_mm().
Signed-off-by: Dave Martin
Reviewed-by: Catalin Marinas
Reviewed-by: Alex Bennée
Reviewed-by: Christoffer Dall
Cc: Catalin Marinas
Cc: Will Deacon
Cc: Ard Biesheuvel
Signed-off-by: Marc Zyngier
---
 arch/arm64/include/asm/processor.h |  4 ++-
 arch/arm64/kernel/fpsimd.c         | 40 ++++++++++++------------------
 2 files changed, 19 insertions(+), 25 deletions(-)

diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
index 767598932549..36d64f83cdfb 100644
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -156,7 +156,9 @@ static inline void arch_thread_struct_whitelist(unsigned long *offset,
 /* Sync TPIDR_EL0 back to thread_struct for current */
 void tls_preserve_current_state(void);
 
-#define INIT_THREAD { }
+#define INIT_THREAD {				\
+	.fpsimd_cpu = NR_CPUS,			\
+}
 
 static inline void start_thread_common(struct pt_regs *regs, unsigned long pc)
 {
diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
index 2d9a9e8ed826..d736b6c412ef 100644
--- a/arch/arm64/kernel/fpsimd.c
+++ b/arch/arm64/kernel/fpsimd.c
@@ -892,31 +892,25 @@ asmlinkage void do_fpsimd_exc(unsigned int esr, struct pt_regs *regs)
 
 void fpsimd_thread_switch(struct task_struct *next)
 {
+	bool wrong_task, wrong_cpu;
+
 	if (!system_supports_fpsimd())
 		return;
+
+	/* Save unsaved fpsimd state, if any: */
+	fpsimd_save();
+
 	/*
-	 * Save the current FPSIMD state to memory, but only if whatever is in
-	 * the registers is in fact the most recent userland FPSIMD state of
-	 * 'current'.
+	 * Fix up TIF_FOREIGN_FPSTATE to correctly describe next's
+	 * state. For kernel threads, FPSIMD registers are never loaded
+	 * and wrong_task and wrong_cpu will always be true.
 	 */
-	if (current->mm)
-		fpsimd_save();
-
-	if (next->mm) {
-		/*
-		 * If we are switching to a task whose most recent userland
-		 * FPSIMD state is already in the registers of *this* cpu,
-		 * we can skip loading the state from memory. Otherwise, set
-		 * the TIF_FOREIGN_FPSTATE flag so the state will be loaded
-		 * upon the next return to userland.
-		 */
-		bool wrong_task = __this_cpu_read(fpsimd_last_state.st) !=
+	wrong_task = __this_cpu_read(fpsimd_last_state.st) !=
 					&next->thread.uw.fpsimd_state;
-		bool wrong_cpu = next->thread.fpsimd_cpu != smp_processor_id();
+	wrong_cpu = next->thread.fpsimd_cpu != smp_processor_id();
 
-		update_tsk_thread_flag(next, TIF_FOREIGN_FPSTATE,
-				       wrong_task || wrong_cpu);
-	}
+	update_tsk_thread_flag(next, TIF_FOREIGN_FPSTATE,
+			       wrong_task || wrong_cpu);
 }
 
 void fpsimd_flush_thread(void)
@@ -1121,9 +1115,8 @@ void kernel_neon_begin(void)
 
 	__this_cpu_write(kernel_neon_busy, true);
 
-	/* Save unsaved task fpsimd state, if any: */
-	if (current->mm)
-		fpsimd_save();
+	/* Save unsaved fpsimd state, if any: */
+	fpsimd_save();
 
 	/* Invalidate any task state remaining in the fpsimd regs: */
 	fpsimd_flush_cpu_state();
@@ -1245,8 +1238,7 @@ static int fpsimd_cpu_pm_notifier(struct notifier_block *self,
 {
 	switch (cmd) {
 	case CPU_PM_ENTER:
-		if (current->mm)
-			fpsimd_save();
+		fpsimd_save();
 		fpsimd_flush_cpu_state();
 		break;
 	case CPU_PM_EXIT:
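
[Not part of the patch.] The core claim, that for a kernel thread
wrong_task and wrong_cpu can never both be false, can be seen in a small
stand-alone model (plain C; NR_CPUS and the binding rules are simplified
from the kernel's):

    #include <stdbool.h>
    #include <stdio.h>

    #define NR_CPUS 4

    struct task { const void *st; int fpsimd_cpu; bool tif_foreign; };

    static const void *last_state_st;          /* models fpsimd_last_state.st */

    static void thread_switch(struct task *next, int this_cpu)
    {
            bool wrong_task = last_state_st != next->st;
            bool wrong_cpu  = next->fpsimd_cpu != this_cpu;

            next->tif_foreign = wrong_task || wrong_cpu;  /* update_tsk_thread_flag() */
    }

    int main(void)
    {
            /* kernel threads never bind their state to a cpu, so they keep the
             * INIT_THREAD value fpsimd_cpu = NR_CPUS and are never "last" */
            struct task kthread = { .st = &kthread, .fpsimd_cpu = NR_CPUS };

            thread_switch(&kthread, 0);
            printf("kernel thread gets TIF_FOREIGN_FPSTATE: %d\n",
                   kthread.tif_foreign);
            return 0;
    }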
Fri, 1 Jun 2018 08:28:32 -0700 (PDT) Received: from approximate.cambridge.arm.com (approximate.cambridge.arm.com [10.1.206.75]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id C760F3F557; Fri, 1 Jun 2018 08:28:30 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Subject: [PATCH 10/33] arm64/sve: Refactor user SVE trap maintenance for external use Date: Fri, 1 Jun 2018 16:27:24 +0100 Message-Id: <20180601152747.23613-11-marc.zyngier@arm.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20180601152747.23613-1-marc.zyngier@arm.com> References: <20180601152747.23613-1-marc.zyngier@arm.com> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20180601_082833_762978_FC8A507F X-CRM114-Status: GOOD ( 19.73 ) X-Spam-Score: -5.0 (-----) X-Spam-Report: SpamAssassin version 3.4.1 on bombadil.infradead.org summary: Content analysis details: (-5.0 points) pts rule name description ---- ---------------------- -------------------------------------------------- -5.0 RCVD_IN_DNSWL_HI RBL: Sender listed at http://www.dnswl.org/, high trust [217.140.101.70 listed in list.dnswl.org] -0.0 SPF_PASS SPF: sender matches SPF record X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.21 Precedence: list List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Mark Rutland , Peter Maydell , kvm@vger.kernel.org, Catalin Marinas , Christoffer Dall , kvmarm@lists.cs.columbia.edu, Eric Auger , =?utf-8?q?Alex_Benn=C3=A9e?= , Dave Martin , linux-arm-kernel@lists.infradead.org Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+incoming-imx=patchwork.ozlabs.org@lists.infradead.org List-Id: linux-imx-kernel.lists.patchwork.ozlabs.org From: Dave Martin In preparation for optimising the way KVM manages switching the guest and host FPSIMD state, it is necessary to provide a means for code outside arch/arm64/kernel/fpsimd.c to restore the user trap configuration for SVE correctly for the current task. Rather than requiring external code to duplicate the maintenance explicitly, this patch moves the trap maintenenace to fpsimd_bind_to_cpu(), since it is logically part of the work of associating the current task with the cpu. Because fpsimd_bind_to_cpu() is rather a cryptic name to publish alongside fpsimd_bind_state_to_cpu(), the former function is renamed to fpsimd_bind_task_to_cpu() to make its purpose more explicit. This patch makes appropriate changes to ensure that fpsimd_bind_task_to_cpu() is always called alongside task_fpsimd_load(), so that the trap maintenance continues to be done in every situation where it was done prior to this patch. As a side-effect, the metadata updates done by fpsimd_bind_task_to_cpu() now change from conditional to unconditional in the "already bound" case of sigreturn. This is harmless, and a couple of extra stores on this slow path will not impact performance. I consider this a reasonable price to pay for a slightly cleaner interface. 
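
To make the resulting division of labour concrete, the two functions end up shaped roughly as below. This is a condensed sketch of the hunks in the diff that follows (bodies abbreviated, locking omitted), not a substitute for it: any path that re-binds current's FPSIMD context to the cpu now also gets the user SVE trap configuration corrected as a side effect.

/* Condensed sketch; see the full diff below for the real hunks. */
void fpsimd_bind_task_to_cpu(void)
{
	/* ... record current as the owner of this cpu's FPSIMD registers ... */

	if (system_supports_sve()) {
		/* Toggle SVE trapping for userspace if needed */
		if (test_thread_flag(TIF_SVE))
			sve_user_enable();
		else
			sve_user_disable();

		/* Serialised by exception return to user */
	}
}

void fpsimd_restore_current_state(void)
{
	if (test_and_clear_thread_flag(TIF_FOREIGN_FPSTATE)) {
		task_fpsimd_load();		/* registers only */
		fpsimd_bind_task_to_cpu();	/* metadata + SVE trap state */
	}
}
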
Signed-off-by: Dave Martin Reviewed-by: Alex Bennée Acked-by: Marc Zyngier Acked-by: Catalin Marinas Signed-off-by: Marc Zyngier --- arch/arm64/kernel/fpsimd.c | 30 +++++++++++++++--------------- 1 file changed, 15 insertions(+), 15 deletions(-) diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c index d736b6c412ef..d5f659f476a8 100644 --- a/arch/arm64/kernel/fpsimd.c +++ b/arch/arm64/kernel/fpsimd.c @@ -257,16 +257,6 @@ static void task_fpsimd_load(void) sve_vq_from_vl(current->thread.sve_vl) - 1); else fpsimd_load_state(¤t->thread.uw.fpsimd_state); - - if (system_supports_sve()) { - /* Toggle SVE trapping for userspace if needed */ - if (test_thread_flag(TIF_SVE)) - sve_user_enable(); - else - sve_user_disable(); - - /* Serialised by exception return to user */ - } } /* @@ -278,7 +268,7 @@ static void task_fpsimd_load(void) static void fpsimd_save(void) { struct user_fpsimd_state *st = __this_cpu_read(fpsimd_last_state.st); - /* set by fpsimd_bind_to_cpu() */ + /* set by fpsimd_bind_task_to_cpu() */ WARN_ON(!in_softirq() && !irqs_disabled()); @@ -996,7 +986,7 @@ void fpsimd_signal_preserve_current_state(void) * Associate current's FPSIMD context with this cpu * Preemption must be disabled when calling this function. */ -static void fpsimd_bind_to_cpu(void) +static void fpsimd_bind_task_to_cpu(void) { struct fpsimd_last_state_struct *last = this_cpu_ptr(&fpsimd_last_state); @@ -1004,6 +994,16 @@ static void fpsimd_bind_to_cpu(void) last->st = ¤t->thread.uw.fpsimd_state; last->sve_in_use = test_thread_flag(TIF_SVE); current->thread.fpsimd_cpu = smp_processor_id(); + + if (system_supports_sve()) { + /* Toggle SVE trapping for userspace if needed */ + if (test_thread_flag(TIF_SVE)) + sve_user_enable(); + else + sve_user_disable(); + + /* Serialised by exception return to user */ + } } /* @@ -1020,7 +1020,7 @@ void fpsimd_restore_current_state(void) if (test_and_clear_thread_flag(TIF_FOREIGN_FPSTATE)) { task_fpsimd_load(); - fpsimd_bind_to_cpu(); + fpsimd_bind_task_to_cpu(); } local_bh_enable(); @@ -1043,9 +1043,9 @@ void fpsimd_update_current_state(struct user_fpsimd_state const *state) fpsimd_to_sve(current); task_fpsimd_load(); + fpsimd_bind_task_to_cpu(); - if (test_and_clear_thread_flag(TIF_FOREIGN_FPSTATE)) - fpsimd_bind_to_cpu(); + clear_thread_flag(TIF_FOREIGN_FPSTATE); local_bh_enable(); } From patchwork Fri Jun 1 15:27:25 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 924149 Return-Path: X-Original-To: incoming-imx@patchwork.ozlabs.org Delivered-To: patchwork-incoming-imx@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=none (mailfrom) smtp.mailfrom=lists.infradead.org (client-ip=2607:7c80:54:e::133; helo=bombadil.infradead.org; envelope-from=linux-arm-kernel-bounces+incoming-imx=patchwork.ozlabs.org@lists.infradead.org; receiver=) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=arm.com Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=lists.infradead.org header.i=@lists.infradead.org header.b="K+GlaqkI"; dkim-atps=neutral Received: from bombadil.infradead.org (bombadil.infradead.org [IPv6:2607:7c80:54:e::133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 40y8P833yqz9s1w for ; Sat, 2 Jun 2018 02:07:40 +1000 (AEST) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; 
s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=ZEUemmy6ACpYXAJBjw5ULUl5M96EGs48ZzqUeTlsGEM=; b=K+GlaqkI/vhcgj 1zjb5gdkdmFOb+h6P0pLDpGB2tYMEqeKQq09n6T3yrsHAzhiGKLTIUXaM7mCgEdXcYUA26MgYqaIt TgneN8c9QJj0eUWT+65nxDUDxRlFHUmX/YGMwD1f1llgyhj/0x9p463I+gb1Ys4Lxi0jDlAKkthdQ iGUok8lAPRHqrb7vuOihYTUOzLJMQjsIiW2KLxue7YaDu5RnK1htrbYa4dM/UA7INoi6/vprLfBPH SQrxLxRS5fBg1v+9/eFS/bPmmvRcTexs9AreeoknG2xXx+eSJ+L7pvwvfZ5OkXwkvnF3H849rN+ve 3QBGXGG3A8B9MgUek/2Q==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.90_1 #2 (Red Hat Linux)) id 1fOmaS-000293-BF; Fri, 01 Jun 2018 16:07:32 +0000 Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70] helo=foss.arm.com) by bombadil.infradead.org with esmtp (Exim 4.90_1 #2 (Red Hat Linux)) id 1fOlyv-0002kw-Oz for linux-arm-kernel@lists.infradead.org; Fri, 01 Jun 2018 15:29:32 +0000 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 573C61650; Fri, 1 Jun 2018 08:28:35 -0700 (PDT) Received: from approximate.cambridge.arm.com (approximate.cambridge.arm.com [10.1.206.75]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 233813F557; Fri, 1 Jun 2018 08:28:32 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Subject: [PATCH 11/33] KVM: arm64: Repurpose vcpu_arch.debug_flags for general-purpose flags Date: Fri, 1 Jun 2018 16:27:25 +0100 Message-Id: <20180601152747.23613-12-marc.zyngier@arm.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20180601152747.23613-1-marc.zyngier@arm.com> References: <20180601152747.23613-1-marc.zyngier@arm.com> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20180601_082845_880405_14995981 X-CRM114-Status: GOOD ( 22.70 ) X-Spam-Score: -5.0 (-----) X-Spam-Report: SpamAssassin version 3.4.1 on bombadil.infradead.org summary: Content analysis details: (-5.0 points) pts rule name description ---- ---------------------- -------------------------------------------------- -5.0 RCVD_IN_DNSWL_HI RBL: Sender listed at http://www.dnswl.org/, high trust [217.140.101.70 listed in list.dnswl.org] -0.0 SPF_PASS SPF: sender matches SPF record X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.21 Precedence: list List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Mark Rutland , Peter Maydell , kvm@vger.kernel.org, Catalin Marinas , Christoffer Dall , kvmarm@lists.cs.columbia.edu, Eric Auger , =?utf-8?q?Alex_Benn=C3=A9e?= , Dave Martin , linux-arm-kernel@lists.infradead.org Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+incoming-imx=patchwork.ozlabs.org@lists.infradead.org List-Id: linux-imx-kernel.lists.patchwork.ozlabs.org From: Dave Martin In struct vcpu_arch, the debug_flags field is used to store debug-related flags about the vcpu state. Since we are about to add some more flags related to FPSIMD and SVE, it makes sense to add them to the existing flags field rather than adding new fields. Since there is only one debug_flags flag defined so far, there is plenty of free space for expansion. 
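
For orientation, the idiom this field supports looks roughly as sketched here. The FP-related flag names are taken from a later patch in this series and are shown only to illustrate how the single field is meant to grow; this is not part of the present patch.

/* vcpu_arch flags field values (bit 0 today, more added later in the series): */
#define KVM_ARM64_DEBUG_DIRTY		(1 << 0)
#define KVM_ARM64_FP_ENABLED		(1 << 1)	/* guest FP regs loaded */
#define KVM_ARM64_FP_HOST		(1 << 2)	/* host FP regs loaded */
#define KVM_ARM64_HOST_SVE_IN_USE	(1 << 3)	/* backup for host TIF_SVE */

/* Typical accesses, all through the one field: */
vcpu->arch.flags |= KVM_ARM64_DEBUG_DIRTY;		/* set */
vcpu->arch.flags &= ~KVM_ARM64_DEBUG_DIRTY;		/* clear */
if (vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY)		/* test */
	/* ... */;
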
In preparation for adding more flags, this patch renames the debug_flags field to simply "flags", and updates comments appropriately. The flag definitions are also moved to , since their presence in was for purely historical reasons: these definitions are not used from asm any more, and not very likely to be as more Hyp asm is migrated to C. KVM_ARM64_DEBUG_DIRTY_SHIFT has not been used since commit 1ea66d27e7b0 ("arm64: KVM: Move away from the assembly version of the world switch"), so this patch gets rid of that too. No functional change. Signed-off-by: Dave Martin Reviewed-by: Marc Zyngier Reviewed-by: Alex Bennée Acked-by: Christoffer Dall [maz: fixed minor conflict] Signed-off-by: Marc Zyngier --- arch/arm64/include/asm/kvm_asm.h | 3 --- arch/arm64/include/asm/kvm_host.h | 7 +++++-- arch/arm64/kvm/debug.c | 8 ++++---- arch/arm64/kvm/hyp/debug-sr.c | 6 +++--- arch/arm64/kvm/hyp/sysreg-sr.c | 4 ++-- arch/arm64/kvm/sys_regs.c | 9 ++++----- 6 files changed, 18 insertions(+), 19 deletions(-) diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h index a9ceeec5a76f..821a7032c0f7 100644 --- a/arch/arm64/include/asm/kvm_asm.h +++ b/arch/arm64/include/asm/kvm_asm.h @@ -30,9 +30,6 @@ /* The hyp-stub will return this for any kvm_call_hyp() call */ #define ARM_EXCEPTION_HYP_GONE HVC_STUB_ERR -#define KVM_ARM64_DEBUG_DIRTY_SHIFT 0 -#define KVM_ARM64_DEBUG_DIRTY (1 << KVM_ARM64_DEBUG_DIRTY_SHIFT) - #ifndef __ASSEMBLY__ #include diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 469de8acd06f..146c16794d32 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -216,8 +216,8 @@ struct kvm_vcpu_arch { /* Exception Information */ struct kvm_vcpu_fault_info fault; - /* Guest debug state */ - u64 debug_flags; + /* Miscellaneous vcpu state flags */ + u64 flags; /* * We maintain more than a single set of debug registers to support @@ -293,6 +293,9 @@ struct kvm_vcpu_arch { bool sysregs_loaded_on_cpu; }; +/* vcpu_arch flags field values: */ +#define KVM_ARM64_DEBUG_DIRTY (1 << 0) + #define vcpu_gp_regs(v) (&(v)->arch.ctxt.gp_regs) /* diff --git a/arch/arm64/kvm/debug.c b/arch/arm64/kvm/debug.c index a1f4ebdfe6d3..00d422336a45 100644 --- a/arch/arm64/kvm/debug.c +++ b/arch/arm64/kvm/debug.c @@ -103,7 +103,7 @@ void kvm_arm_reset_debug_ptr(struct kvm_vcpu *vcpu) * * Additionally, KVM only traps guest accesses to the debug registers if * the guest is not actively using them (see the KVM_ARM64_DEBUG_DIRTY - * flag on vcpu->arch.debug_flags). Since the guest must not interfere + * flag on vcpu->arch.flags). Since the guest must not interfere * with the hardware state when debugging the guest, we must ensure that * trapping is enabled whenever we are debugging the guest using the * debug registers. 
@@ -111,7 +111,7 @@ void kvm_arm_reset_debug_ptr(struct kvm_vcpu *vcpu) void kvm_arm_setup_debug(struct kvm_vcpu *vcpu) { - bool trap_debug = !(vcpu->arch.debug_flags & KVM_ARM64_DEBUG_DIRTY); + bool trap_debug = !(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY); unsigned long mdscr; trace_kvm_arm_setup_debug(vcpu, vcpu->guest_debug); @@ -184,7 +184,7 @@ void kvm_arm_setup_debug(struct kvm_vcpu *vcpu) vcpu_write_sys_reg(vcpu, mdscr, MDSCR_EL1); vcpu->arch.debug_ptr = &vcpu->arch.external_debug_state; - vcpu->arch.debug_flags |= KVM_ARM64_DEBUG_DIRTY; + vcpu->arch.flags |= KVM_ARM64_DEBUG_DIRTY; trap_debug = true; trace_kvm_arm_set_regset("BKPTS", get_num_brps(), @@ -206,7 +206,7 @@ void kvm_arm_setup_debug(struct kvm_vcpu *vcpu) /* If KDE or MDE are set, perform a full save/restore cycle. */ if (vcpu_read_sys_reg(vcpu, MDSCR_EL1) & (DBG_MDSCR_KDE | DBG_MDSCR_MDE)) - vcpu->arch.debug_flags |= KVM_ARM64_DEBUG_DIRTY; + vcpu->arch.flags |= KVM_ARM64_DEBUG_DIRTY; trace_kvm_arm_set_dreg32("MDCR_EL2", vcpu->arch.mdcr_el2); trace_kvm_arm_set_dreg32("MDSCR_EL1", vcpu_read_sys_reg(vcpu, MDSCR_EL1)); diff --git a/arch/arm64/kvm/hyp/debug-sr.c b/arch/arm64/kvm/hyp/debug-sr.c index 3e717f66f011..50009766e5e5 100644 --- a/arch/arm64/kvm/hyp/debug-sr.c +++ b/arch/arm64/kvm/hyp/debug-sr.c @@ -163,7 +163,7 @@ void __hyp_text __debug_switch_to_guest(struct kvm_vcpu *vcpu) if (!has_vhe()) __debug_save_spe_nvhe(&vcpu->arch.host_debug_state.pmscr_el1); - if (!(vcpu->arch.debug_flags & KVM_ARM64_DEBUG_DIRTY)) + if (!(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY)) return; host_ctxt = kern_hyp_va(vcpu->arch.host_cpu_context); @@ -185,7 +185,7 @@ void __hyp_text __debug_switch_to_host(struct kvm_vcpu *vcpu) if (!has_vhe()) __debug_restore_spe_nvhe(vcpu->arch.host_debug_state.pmscr_el1); - if (!(vcpu->arch.debug_flags & KVM_ARM64_DEBUG_DIRTY)) + if (!(vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY)) return; host_ctxt = kern_hyp_va(vcpu->arch.host_cpu_context); @@ -196,7 +196,7 @@ void __hyp_text __debug_switch_to_host(struct kvm_vcpu *vcpu) __debug_save_state(vcpu, guest_dbg, guest_ctxt); __debug_restore_state(vcpu, host_dbg, host_ctxt); - vcpu->arch.debug_flags &= ~KVM_ARM64_DEBUG_DIRTY; + vcpu->arch.flags &= ~KVM_ARM64_DEBUG_DIRTY; } u32 __hyp_text __kvm_get_mdcr_el2(void) diff --git a/arch/arm64/kvm/hyp/sysreg-sr.c b/arch/arm64/kvm/hyp/sysreg-sr.c index b3894df6bf1a..35bc16832efe 100644 --- a/arch/arm64/kvm/hyp/sysreg-sr.c +++ b/arch/arm64/kvm/hyp/sysreg-sr.c @@ -196,7 +196,7 @@ void __hyp_text __sysreg32_save_state(struct kvm_vcpu *vcpu) sysreg[DACR32_EL2] = read_sysreg(dacr32_el2); sysreg[IFSR32_EL2] = read_sysreg(ifsr32_el2); - if (has_vhe() || vcpu->arch.debug_flags & KVM_ARM64_DEBUG_DIRTY) + if (has_vhe() || vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY) sysreg[DBGVCR32_EL2] = read_sysreg(dbgvcr32_el2); } @@ -218,7 +218,7 @@ void __hyp_text __sysreg32_restore_state(struct kvm_vcpu *vcpu) write_sysreg(sysreg[DACR32_EL2], dacr32_el2); write_sysreg(sysreg[IFSR32_EL2], ifsr32_el2); - if (has_vhe() || vcpu->arch.debug_flags & KVM_ARM64_DEBUG_DIRTY) + if (has_vhe() || vcpu->arch.flags & KVM_ARM64_DEBUG_DIRTY) write_sysreg(sysreg[DBGVCR32_EL2], dbgvcr32_el2); } diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index 6e3b969391fd..a4363735d3f8 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -31,7 +31,6 @@ #include #include #include -#include #include #include #include @@ -338,7 +337,7 @@ static bool trap_debug_regs(struct kvm_vcpu *vcpu, { if (p->is_write) { vcpu_write_sys_reg(vcpu, p->regval, 
r->reg); - vcpu->arch.debug_flags |= KVM_ARM64_DEBUG_DIRTY; + vcpu->arch.flags |= KVM_ARM64_DEBUG_DIRTY; } else { p->regval = vcpu_read_sys_reg(vcpu, r->reg); } @@ -369,7 +368,7 @@ static void reg_to_dbg(struct kvm_vcpu *vcpu, } *dbg_reg = val; - vcpu->arch.debug_flags |= KVM_ARM64_DEBUG_DIRTY; + vcpu->arch.flags |= KVM_ARM64_DEBUG_DIRTY; } static void dbg_to_reg(struct kvm_vcpu *vcpu, @@ -1441,7 +1440,7 @@ static bool trap_debug32(struct kvm_vcpu *vcpu, { if (p->is_write) { vcpu_cp14(vcpu, r->reg) = p->regval; - vcpu->arch.debug_flags |= KVM_ARM64_DEBUG_DIRTY; + vcpu->arch.flags |= KVM_ARM64_DEBUG_DIRTY; } else { p->regval = vcpu_cp14(vcpu, r->reg); } @@ -1473,7 +1472,7 @@ static bool trap_xvr(struct kvm_vcpu *vcpu, val |= p->regval << 32; *dbg_reg = val; - vcpu->arch.debug_flags |= KVM_ARM64_DEBUG_DIRTY; + vcpu->arch.flags |= KVM_ARM64_DEBUG_DIRTY; } else { p->regval = *dbg_reg >> 32; } From patchwork Fri Jun 1 15:27:26 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 924136 Return-Path: X-Original-To: incoming-imx@patchwork.ozlabs.org Delivered-To: patchwork-incoming-imx@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=none (mailfrom) smtp.mailfrom=lists.infradead.org (client-ip=2607:7c80:54:e::133; helo=bombadil.infradead.org; envelope-from=linux-arm-kernel-bounces+incoming-imx=patchwork.ozlabs.org@lists.infradead.org; receiver=) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=arm.com Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=lists.infradead.org header.i=@lists.infradead.org header.b="YQhj82QN"; dkim-atps=neutral Received: from bombadil.infradead.org (bombadil.infradead.org [IPv6:2607:7c80:54:e::133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 40y84s6pC9z9s1b for ; Sat, 2 Jun 2018 01:53:33 +1000 (AEST) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=8wquXlm6e96HVs/iQwimRTsoPZRQmn0di0uGXCRIsUY=; b=YQhj82QN4uVnyv 7EMqSdGfpaefRfshvhDI24WaRNcrrAfkaWgVUQ/IVGcbYAKxObpFdsJChMMmRK7kdcS9mTnRxZLtP ElwfwdVFtmf9qJAB4BXq1GDr5gTuBUkmvmju4j74QWiToWJAtSn4PXgMgiWNQ/3+HgpBx8uAa+x0r XVx7meI4FryOZRvwlJDBSSQ1HzsuzT5/LcUxHPzU/drYzAEc3TlB/STIGHS3znr4LV5A5GaI4zE9N Xn4hrH7Ukvzl5qi8EyvGGQf8qRNMykvNW9SCLPEFss5J28lkCgORJ4mz9z0AQvqJNcTCj1mlitNYA SeRJD437V5GbDAIK2aHQ==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.90_1 #2 (Red Hat Linux)) id 1fOmMp-0002fg-DZ; Fri, 01 Jun 2018 15:53:27 +0000 Received: from foss.arm.com ([217.140.101.70]) by bombadil.infradead.org with esmtp (Exim 4.90_1 #2 (Red Hat Linux)) id 1fOlyv-0002m5-P0 for linux-arm-kernel@lists.infradead.org; Fri, 01 Jun 2018 15:29:41 +0000 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id CA577169F; Fri, 1 Jun 2018 08:28:37 -0700 (PDT) Received: from approximate.cambridge.arm.com (approximate.cambridge.arm.com [10.1.206.75]) by usa-sjc-imap-foss1.foss.arm.com 
(Postfix) with ESMTPSA id 967A23F557; Fri, 1 Jun 2018 08:28:35 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Subject: [PATCH 12/33] KVM: arm64: Optimise FPSIMD handling to reduce guest/host thrashing Date: Fri, 1 Jun 2018 16:27:26 +0100 Message-Id: <20180601152747.23613-13-marc.zyngier@arm.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20180601152747.23613-1-marc.zyngier@arm.com> References: <20180601152747.23613-1-marc.zyngier@arm.com> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20180601_082845_868867_929D9D75 X-CRM114-Status: GOOD ( 28.75 ) X-Spam-Score: -5.0 (-----) X-Spam-Report: SpamAssassin version 3.4.1 on bombadil.infradead.org summary: Content analysis details: (-5.0 points) pts rule name description ---- ---------------------- -------------------------------------------------- -5.0 RCVD_IN_DNSWL_HI RBL: Sender listed at http://www.dnswl.org/, high trust [217.140.101.70 listed in list.dnswl.org] -0.0 SPF_PASS SPF: sender matches SPF record X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.21 Precedence: list List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Mark Rutland , Peter Maydell , kvm@vger.kernel.org, Catalin Marinas , Christoffer Dall , kvmarm@lists.cs.columbia.edu, Eric Auger , =?utf-8?q?Alex_Benn=C3=A9e?= , Dave Martin , linux-arm-kernel@lists.infradead.org Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+incoming-imx=patchwork.ozlabs.org@lists.infradead.org List-Id: linux-imx-kernel.lists.patchwork.ozlabs.org From: Dave Martin This patch refactors KVM to align the host and guest FPSIMD save/restore logic with each other for arm64. This reduces the number of redundant save/restore operations that must occur, and reduces the common-case IRQ blackout time during guest exit storms by saving the host state lazily and optimising away the need to restore the host state before returning to the run loop. Four hooks are defined in order to enable this: * kvm_arch_vcpu_run_map_fp(): Called on PID change to map necessary bits of current to Hyp. * kvm_arch_vcpu_load_fp(): Set up FP/SIMD for entering the KVM run loop (parse as "vcpu_load fp"). * kvm_arch_vcpu_ctxsync_fp(): Get FP/SIMD into a safe state for re-enabling interrupts after a guest exit back to the run loop. For arm64 specifically, this involves updating the host kernel's FPSIMD context tracking metadata so that kernel-mode NEON use will cause the vcpu's FPSIMD state to be saved back correctly into the vcpu struct. This must be done before re-enabling interrupts because kernel-mode NEON may be used by softirqs. * kvm_arch_vcpu_put_fp(): Save guest FP/SIMD state back to memory and dissociate from the CPU ("vcpu_put fp"). Also, the arm64 FPSIMD context switch code is updated to enable it to save back FPSIMD state for a vcpu, not just current. A few helpers drive this: * fpsimd_bind_state_to_cpu(struct user_fpsimd_state *fp): mark this CPU as having context fp (which may belong to a vcpu) currently loaded in its registers. This is the non-task equivalent of the static function fpsimd_bind_to_cpu() in fpsimd.c. * task_fpsimd_save(): exported to allow KVM to save the guest's FPSIMD state back to memory on exit from the run loop. * fpsimd_flush_state(): invalidate any context's FPSIMD state that is currently loaded. Used to disassociate the vcpu from the CPU regs on run loop exit. 
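
Putting the four hooks together, they land in the generic KVM/arm run path roughly as sketched here, condensed from the virt/kvm/arm/arm.c and kvm_host.h changes in the diff below; error handling, locking and unrelated calls are omitted.

void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
{
	/* ... vgic, timer, sysregs load ... */
	kvm_arch_vcpu_load_fp(vcpu);		/* "vcpu_load fp" */
}

int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
{
	int ret = 1;

	/*
	 * On a PID change, kvm_arch_vcpu_run_pid_change() has already
	 * called kvm_arch_vcpu_run_map_fp(vcpu) so that current's
	 * thread_info and fpsimd state are mapped at Hyp.
	 */

	while (ret > 0) {
		/* ... enter the guest with interrupts disabled ... */

		/* before re-enabling interrupts after the guest exit: */
		kvm_arch_vcpu_ctxsync_fp(vcpu);

		/* ... local_irq_enable(), exit handling, update ret ... */
	}

	return ret;
}

void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
{
	kvm_arch_vcpu_put_fp(vcpu);		/* "vcpu_put fp" */
	/* ... sysregs, timer, vgic put ... */
}
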
These changes allow the run loop to enable interrupts (and thus softirqs that may use kernel-mode NEON) without having to save the guest's FPSIMD state eagerly. Some new vcpu_arch fields are added to make all this work. Because host FPSIMD state can now be saved back directly into current's thread_struct as appropriate, host_cpu_context is no longer used for preserving the FPSIMD state. However, it is still needed for preserving other things such as the host's system registers. To avoid ABI churn, the redundant storage space in host_cpu_context is not removed for now. arch/arm is not addressed by this patch and continues to use its current save/restore logic. It could provide implementations of the helpers later if desired. Signed-off-by: Dave Martin Reviewed-by: Marc Zyngier Reviewed-by: Christoffer Dall Reviewed-by: Alex Bennée Acked-by: Catalin Marinas Signed-off-by: Marc Zyngier --- arch/arm/include/asm/kvm_host.h | 8 +++ arch/arm64/include/asm/fpsimd.h | 6 ++ arch/arm64/include/asm/kvm_host.h | 21 ++++++ arch/arm64/kernel/fpsimd.c | 19 +++-- arch/arm64/kvm/Kconfig | 1 + arch/arm64/kvm/Makefile | 2 +- arch/arm64/kvm/fpsimd.c | 111 ++++++++++++++++++++++++++++++ arch/arm64/kvm/hyp/switch.c | 51 +++++++------- virt/kvm/arm/arm.c | 4 ++ 9 files changed, 192 insertions(+), 31 deletions(-) create mode 100644 arch/arm64/kvm/fpsimd.c diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h index c7c28c885a19..ac870b2cd5d1 100644 --- a/arch/arm/include/asm/kvm_host.h +++ b/arch/arm/include/asm/kvm_host.h @@ -303,6 +303,14 @@ int kvm_arm_vcpu_arch_get_attr(struct kvm_vcpu *vcpu, int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu, struct kvm_device_attr *attr); +/* + * VFP/NEON switching is all done by the hyp switch code, so no need to + * coordinate with host context handling for this state: + */ +static inline void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu) {} +static inline void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu) {} +static inline void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu) {} + /* All host FP/SIMD state is restored on guest exit, so nothing to save: */ static inline void kvm_fpsimd_flush_cpu_state(void) {} diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h index aa7162ae93e3..3e00f701cb9c 100644 --- a/arch/arm64/include/asm/fpsimd.h +++ b/arch/arm64/include/asm/fpsimd.h @@ -41,6 +41,8 @@ struct task_struct; extern void fpsimd_save_state(struct user_fpsimd_state *state); extern void fpsimd_load_state(struct user_fpsimd_state *state); +extern void fpsimd_save(void); + extern void fpsimd_thread_switch(struct task_struct *next); extern void fpsimd_flush_thread(void); @@ -49,7 +51,11 @@ extern void fpsimd_preserve_current_state(void); extern void fpsimd_restore_current_state(void); extern void fpsimd_update_current_state(struct user_fpsimd_state const *state); +extern void fpsimd_bind_task_to_cpu(void); +extern void fpsimd_bind_state_to_cpu(struct user_fpsimd_state *state); + extern void fpsimd_flush_task_state(struct task_struct *target); +extern void fpsimd_flush_cpu_state(void); extern void sve_flush_cpu_state(void); /* Maximum VL that SVE VL-agnostic software can transparently support */ diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 146c16794d32..b3fe7301bdbe 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -30,6 +30,7 @@ #include #include #include +#include #define __KVM_HAVE_ARCH_INTC_INITIALIZED @@ -238,6 +239,10 @@ struct 
kvm_vcpu_arch { /* Pointer to host CPU context */ kvm_cpu_context_t *host_cpu_context; + + struct thread_info *host_thread_info; /* hyp VA */ + struct user_fpsimd_state *host_fpsimd_state; /* hyp VA */ + struct { /* {Break,watch}point registers */ struct kvm_guest_debug_arch regs; @@ -295,6 +300,9 @@ struct kvm_vcpu_arch { /* vcpu_arch flags field values: */ #define KVM_ARM64_DEBUG_DIRTY (1 << 0) +#define KVM_ARM64_FP_ENABLED (1 << 1) /* guest FP regs loaded */ +#define KVM_ARM64_FP_HOST (1 << 2) /* host FP regs loaded */ +#define KVM_ARM64_HOST_SVE_IN_USE (1 << 3) /* backup for host TIF_SVE */ #define vcpu_gp_regs(v) (&(v)->arch.ctxt.gp_regs) @@ -423,6 +431,19 @@ static inline void __cpu_init_stage2(void) "PARange is %d bits, unsupported configuration!", parange); } +/* Guest/host FPSIMD coordination helpers */ +int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu); +void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu); +void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu); +void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu); + +#ifdef CONFIG_KVM /* Avoid conflicts with core headers if CONFIG_KVM=n */ +static inline int kvm_arch_vcpu_run_pid_change(struct kvm_vcpu *vcpu) +{ + return kvm_arch_vcpu_run_map_fp(vcpu); +} +#endif + /* * All host FP/SIMD state is restored on guest exit, so nothing needs * doing here except in the SVE case: diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c index d5f659f476a8..794dd990da82 100644 --- a/arch/arm64/kernel/fpsimd.c +++ b/arch/arm64/kernel/fpsimd.c @@ -265,10 +265,10 @@ static void task_fpsimd_load(void) * * Softirqs (and preemption) must be disabled. */ -static void fpsimd_save(void) +void fpsimd_save(void) { struct user_fpsimd_state *st = __this_cpu_read(fpsimd_last_state.st); - /* set by fpsimd_bind_task_to_cpu() */ + /* set by fpsimd_bind_task_to_cpu() or fpsimd_bind_state_to_cpu() */ WARN_ON(!in_softirq() && !irqs_disabled()); @@ -986,7 +986,7 @@ void fpsimd_signal_preserve_current_state(void) * Associate current's FPSIMD context with this cpu * Preemption must be disabled when calling this function. */ -static void fpsimd_bind_task_to_cpu(void) +void fpsimd_bind_task_to_cpu(void) { struct fpsimd_last_state_struct *last = this_cpu_ptr(&fpsimd_last_state); @@ -1006,6 +1006,17 @@ static void fpsimd_bind_task_to_cpu(void) } } +void fpsimd_bind_state_to_cpu(struct user_fpsimd_state *st) +{ + struct fpsimd_last_state_struct *last = + this_cpu_ptr(&fpsimd_last_state); + + WARN_ON(!in_softirq() && !irqs_disabled()); + + last->st = st; + last->sve_in_use = false; +} + /* * Load the userland FPSIMD state of 'current' from memory, but only if the * FPSIMD state already held in the registers is /not/ the most recent FPSIMD @@ -1058,7 +1069,7 @@ void fpsimd_flush_task_state(struct task_struct *t) t->thread.fpsimd_cpu = NR_CPUS; } -static inline void fpsimd_flush_cpu_state(void) +void fpsimd_flush_cpu_state(void) { __this_cpu_write(fpsimd_last_state.st, NULL); set_thread_flag(TIF_FOREIGN_FPSTATE); diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig index a2e3a5af1113..47b23bf617c7 100644 --- a/arch/arm64/kvm/Kconfig +++ b/arch/arm64/kvm/Kconfig @@ -39,6 +39,7 @@ config KVM select HAVE_KVM_IRQ_ROUTING select IRQ_BYPASS_MANAGER select HAVE_KVM_IRQ_BYPASS + select HAVE_KVM_VCPU_RUN_PID_CHANGE ---help--- Support hosting virtualized guest machines. 
We don't support KVM with 16K page tables yet, due to the multiple diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile index 93afff91cb7c..0f2a135ba15b 100644 --- a/arch/arm64/kvm/Makefile +++ b/arch/arm64/kvm/Makefile @@ -19,7 +19,7 @@ kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/psci.o $(KVM)/arm/perf.o kvm-$(CONFIG_KVM_ARM_HOST) += inject_fault.o regmap.o va_layout.o kvm-$(CONFIG_KVM_ARM_HOST) += hyp.o hyp-init.o handle_exit.o kvm-$(CONFIG_KVM_ARM_HOST) += guest.o debug.o reset.o sys_regs.o sys_regs_generic_v8.o -kvm-$(CONFIG_KVM_ARM_HOST) += vgic-sys-reg-v3.o +kvm-$(CONFIG_KVM_ARM_HOST) += vgic-sys-reg-v3.o fpsimd.o kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/aarch32.o kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic/vgic.o diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c new file mode 100644 index 000000000000..365933a98a7c --- /dev/null +++ b/arch/arm64/kvm/fpsimd.c @@ -0,0 +1,111 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * arch/arm64/kvm/fpsimd.c: Guest/host FPSIMD context coordination helpers + * + * Copyright 2018 Arm Limited + * Author: Dave Martin + */ +#include +#include +#include +#include +#include +#include +#include + +/* + * Called on entry to KVM_RUN unless this vcpu previously ran at least + * once and the most recent prior KVM_RUN for this vcpu was called from + * the same task as current (highly likely). + * + * This is guaranteed to execute before kvm_arch_vcpu_load_fp(vcpu), + * such that on entering hyp the relevant parts of current are already + * mapped. + */ +int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu) +{ + int ret; + + struct thread_info *ti = ¤t->thread_info; + struct user_fpsimd_state *fpsimd = ¤t->thread.uw.fpsimd_state; + + /* + * Make sure the host task thread flags and fpsimd state are + * visible to hyp: + */ + ret = create_hyp_mappings(ti, ti + 1, PAGE_HYP); + if (ret) + goto error; + + ret = create_hyp_mappings(fpsimd, fpsimd + 1, PAGE_HYP); + if (ret) + goto error; + + vcpu->arch.host_thread_info = kern_hyp_va(ti); + vcpu->arch.host_fpsimd_state = kern_hyp_va(fpsimd); +error: + return ret; +} + +/* + * Prepare vcpu for saving the host's FPSIMD state and loading the guest's. + * The actual loading is done by the FPSIMD access trap taken to hyp. + * + * Here, we just set the correct metadata to indicate that the FPSIMD + * state in the cpu regs (if any) belongs to current on the host. + * + * TIF_SVE is backed up here, since it may get clobbered with guest state. + * This flag is restored by kvm_arch_vcpu_put_fp(vcpu). + */ +void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu) +{ + BUG_ON(system_supports_sve()); + BUG_ON(!current->mm); + + vcpu->arch.flags &= ~(KVM_ARM64_FP_ENABLED | KVM_ARM64_HOST_SVE_IN_USE); + vcpu->arch.flags |= KVM_ARM64_FP_HOST; + if (test_thread_flag(TIF_SVE)) + vcpu->arch.flags |= KVM_ARM64_HOST_SVE_IN_USE; +} + +/* + * If the guest FPSIMD state was loaded, update the host's context + * tracking data mark the CPU FPSIMD regs as dirty and belonging to vcpu + * so that they will be written back if the kernel clobbers them due to + * kernel-mode NEON before re-entry into the guest. 
+ */ +void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu) +{ + WARN_ON_ONCE(!irqs_disabled()); + + if (vcpu->arch.flags & KVM_ARM64_FP_ENABLED) { + fpsimd_bind_state_to_cpu(&vcpu->arch.ctxt.gp_regs.fp_regs); + clear_thread_flag(TIF_FOREIGN_FPSTATE); + clear_thread_flag(TIF_SVE); + } +} + +/* + * Write back the vcpu FPSIMD regs if they are dirty, and invalidate the + * cpu FPSIMD regs so that they can't be spuriously reused if this vcpu + * disappears and another task or vcpu appears that recycles the same + * struct fpsimd_state. + */ +void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu) +{ + local_bh_disable(); + + update_thread_flag(TIF_SVE, + vcpu->arch.flags & KVM_ARM64_HOST_SVE_IN_USE); + + if (vcpu->arch.flags & KVM_ARM64_FP_ENABLED) { + /* Clean guest FP state to memory and invalidate cpu view */ + fpsimd_save(); + fpsimd_flush_cpu_state(); + } else if (!test_thread_flag(TIF_FOREIGN_FPSTATE)) { + /* Ensure user trap controls are correctly restored */ + fpsimd_bind_task_to_cpu(); + } + + local_bh_enable(); +} diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c index c0796c4d93a5..118f3002b9ce 100644 --- a/arch/arm64/kvm/hyp/switch.c +++ b/arch/arm64/kvm/hyp/switch.c @@ -23,19 +23,21 @@ #include #include +#include #include #include #include #include +#include -static bool __hyp_text __fpsimd_enabled_nvhe(void) +/* Check whether the FP regs were dirtied while in the host-side run loop: */ +static bool __hyp_text update_fp_enabled(struct kvm_vcpu *vcpu) { - return !(read_sysreg(cptr_el2) & CPTR_EL2_TFP); -} + if (vcpu->arch.host_thread_info->flags & _TIF_FOREIGN_FPSTATE) + vcpu->arch.flags &= ~(KVM_ARM64_FP_ENABLED | + KVM_ARM64_FP_HOST); -static bool fpsimd_enabled_vhe(void) -{ - return !!(read_sysreg(cpacr_el1) & CPACR_EL1_FPEN); + return !!(vcpu->arch.flags & KVM_ARM64_FP_ENABLED); } /* Save the 32-bit only FPSIMD system register state */ @@ -92,7 +94,10 @@ static void activate_traps_vhe(struct kvm_vcpu *vcpu) val = read_sysreg(cpacr_el1); val |= CPACR_EL1_TTA; - val &= ~(CPACR_EL1_FPEN | CPACR_EL1_ZEN); + val &= ~CPACR_EL1_ZEN; + if (!update_fp_enabled(vcpu)) + val &= ~CPACR_EL1_FPEN; + write_sysreg(val, cpacr_el1); write_sysreg(kvm_get_hyp_vector(), vbar_el1); @@ -105,7 +110,10 @@ static void __hyp_text __activate_traps_nvhe(struct kvm_vcpu *vcpu) __activate_traps_common(vcpu); val = CPTR_EL2_DEFAULT; - val |= CPTR_EL2_TTA | CPTR_EL2_TFP | CPTR_EL2_TZ; + val |= CPTR_EL2_TTA | CPTR_EL2_TZ; + if (!update_fp_enabled(vcpu)) + val |= CPTR_EL2_TFP; + write_sysreg(val, cptr_el2); } @@ -321,8 +329,6 @@ static bool __hyp_text __skip_instr(struct kvm_vcpu *vcpu) void __hyp_text __hyp_switch_fpsimd(u64 esr __always_unused, struct kvm_vcpu *vcpu) { - kvm_cpu_context_t *host_ctxt; - if (has_vhe()) write_sysreg(read_sysreg(cpacr_el1) | CPACR_EL1_FPEN, cpacr_el1); @@ -332,14 +338,19 @@ void __hyp_text __hyp_switch_fpsimd(u64 esr __always_unused, isb(); - host_ctxt = kern_hyp_va(vcpu->arch.host_cpu_context); - __fpsimd_save_state(&host_ctxt->gp_regs.fp_regs); + if (vcpu->arch.flags & KVM_ARM64_FP_HOST) { + __fpsimd_save_state(vcpu->arch.host_fpsimd_state); + vcpu->arch.flags &= ~KVM_ARM64_FP_HOST; + } + __fpsimd_restore_state(&vcpu->arch.ctxt.gp_regs.fp_regs); /* Skip restoring fpexc32 for AArch64 guests */ if (!(read_sysreg(hcr_el2) & HCR_RW)) write_sysreg(vcpu->arch.ctxt.sys_regs[FPEXC32_EL2], fpexc32_el2); + + vcpu->arch.flags |= KVM_ARM64_FP_ENABLED; } /* @@ -418,7 +429,6 @@ int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu) { struct kvm_cpu_context *host_ctxt; struct kvm_cpu_context 
*guest_ctxt; - bool fp_enabled; u64 exit_code; host_ctxt = vcpu->arch.host_cpu_context; @@ -440,19 +450,14 @@ int kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu) /* And we're baaack! */ } while (fixup_guest_exit(vcpu, &exit_code)); - fp_enabled = fpsimd_enabled_vhe(); - sysreg_save_guest_state_vhe(guest_ctxt); __deactivate_traps(vcpu); sysreg_restore_host_state_vhe(host_ctxt); - if (fp_enabled) { - __fpsimd_save_state(&guest_ctxt->gp_regs.fp_regs); - __fpsimd_restore_state(&host_ctxt->gp_regs.fp_regs); + if (vcpu->arch.flags & KVM_ARM64_FP_ENABLED) __fpsimd_save_fpexc32(vcpu); - } __debug_switch_to_host(vcpu); @@ -464,7 +469,6 @@ int __hyp_text __kvm_vcpu_run_nvhe(struct kvm_vcpu *vcpu) { struct kvm_cpu_context *host_ctxt; struct kvm_cpu_context *guest_ctxt; - bool fp_enabled; u64 exit_code; vcpu = kern_hyp_va(vcpu); @@ -496,8 +500,6 @@ int __hyp_text __kvm_vcpu_run_nvhe(struct kvm_vcpu *vcpu) /* And we're baaack! */ } while (fixup_guest_exit(vcpu, &exit_code)); - fp_enabled = __fpsimd_enabled_nvhe(); - __sysreg_save_state_nvhe(guest_ctxt); __sysreg32_save_state(vcpu); __timer_disable_traps(vcpu); @@ -508,11 +510,8 @@ int __hyp_text __kvm_vcpu_run_nvhe(struct kvm_vcpu *vcpu) __sysreg_restore_state_nvhe(host_ctxt); - if (fp_enabled) { - __fpsimd_save_state(&guest_ctxt->gp_regs.fp_regs); - __fpsimd_restore_state(&host_ctxt->gp_regs.fp_regs); + if (vcpu->arch.flags & KVM_ARM64_FP_ENABLED) __fpsimd_save_fpexc32(vcpu); - } /* * This must come after restoring the host sysregs, since a non-VHE diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c index a4c1b76240df..bee226cec40b 100644 --- a/virt/kvm/arm/arm.c +++ b/virt/kvm/arm/arm.c @@ -363,10 +363,12 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu) kvm_vgic_load(vcpu); kvm_timer_vcpu_load(vcpu); kvm_vcpu_load_sysregs(vcpu); + kvm_arch_vcpu_load_fp(vcpu); } void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu) { + kvm_arch_vcpu_put_fp(vcpu); kvm_vcpu_put_sysregs(vcpu); kvm_timer_vcpu_put(vcpu); kvm_vgic_put(vcpu); @@ -778,6 +780,8 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run) if (static_branch_unlikely(&userspace_irqchip_in_use)) kvm_timer_sync_hwstate(vcpu); + kvm_arch_vcpu_ctxsync_fp(vcpu); + /* * We may have taken a host interrupt in HYP mode (ie * while executing the guest). 
This interrupt is still From patchwork Fri Jun 1 15:27:27 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 924101 Return-Path: X-Original-To: incoming-imx@patchwork.ozlabs.org Delivered-To: patchwork-incoming-imx@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=none (mailfrom) smtp.mailfrom=lists.infradead.org (client-ip=2607:7c80:54:e::133; helo=bombadil.infradead.org; envelope-from=linux-arm-kernel-bounces+incoming-imx=patchwork.ozlabs.org@lists.infradead.org; receiver=) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=arm.com Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=lists.infradead.org header.i=@lists.infradead.org header.b="FqCcXbl7"; dkim-atps=neutral Received: from bombadil.infradead.org (bombadil.infradead.org [IPv6:2607:7c80:54:e::133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 40y7f53KB1z9ry1 for ; Sat, 2 Jun 2018 01:33:49 +1000 (AEST) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=o/OMnlvgDhgNTwf5PWBjAivIC6egROBw/K5TAj03siM=; b=FqCcXbl7EnfTix jqRbOgYI5owVZUYf4H0ZHFkfPCTTGYfsReHLmYNAXt47mSW1Nh76mMiHtLn/aikPaSmWFqGR2tE1d AoofxzQmttnpi7Lhsmi9dEn0G2fKAEyWIC2+e/xIRBUxvylf3TKfIQZDFc6/+wiy1J4C5Tv7aCAos ufUtK9NvqiJUHHdMIGgL8KbPPfe2E3DJ18AAFHDlcel8ELHVfduLW/od3Rc5Omzga3N9+mqlhXcky /ItQ/er/fJFx6rcHNOBHNQMGC2DzGrGjFrHlTSByGlHFOxiRp4fV+CeF4j5lu/DnxCrl9bmPxLvci PPioQuPGq7SPAIDrObQQ==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.90_1 #2 (Red Hat Linux)) id 1fOm3i-0006rp-L1; Fri, 01 Jun 2018 15:33:42 +0000 Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70] helo=foss.arm.com) by bombadil.infradead.org with esmtp (Exim 4.90_1 #2 (Red Hat Linux)) id 1fOlyv-0002n0-Ow for linux-arm-kernel@lists.infradead.org; Fri, 01 Jun 2018 15:29:25 +0000 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 25F4E16A0; Fri, 1 Jun 2018 08:28:40 -0700 (PDT) Received: from approximate.cambridge.arm.com (approximate.cambridge.arm.com [10.1.206.75]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 158F93F557; Fri, 1 Jun 2018 08:28:37 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Subject: [PATCH 13/33] arm64/sve: Move read_zcr_features() out of cpufeature.h Date: Fri, 1 Jun 2018 16:27:27 +0100 Message-Id: <20180601152747.23613-14-marc.zyngier@arm.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20180601152747.23613-1-marc.zyngier@arm.com> References: <20180601152747.23613-1-marc.zyngier@arm.com> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20180601_082845_875917_BE81ECA6 X-CRM114-Status: GOOD ( 20.14 ) X-Spam-Score: -5.0 (-----) X-Spam-Report: SpamAssassin version 3.4.1 on bombadil.infradead.org summary: Content analysis details: (-5.0 points) pts rule name description ---- 
---------------------- -------------------------------------------------- -5.0 RCVD_IN_DNSWL_HI RBL: Sender listed at http://www.dnswl.org/, high trust [217.140.101.70 listed in list.dnswl.org] -0.0 SPF_PASS SPF: sender matches SPF record X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.21 Precedence: list List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Mark Rutland , Peter Maydell , kvm@vger.kernel.org, Catalin Marinas , Christoffer Dall , kvmarm@lists.cs.columbia.edu, Eric Auger , =?utf-8?q?Alex_Benn=C3=A9e?= , Dave Martin , linux-arm-kernel@lists.infradead.org Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+incoming-imx=patchwork.ozlabs.org@lists.infradead.org List-Id: linux-imx-kernel.lists.patchwork.ozlabs.org From: Dave Martin Having read_zcr_features() inline in cpufeature.h results in that header requiring #includes which make it hard to include elsewhere without triggering header inclusion cycles. This is not a hot-path function and arguably should not be in cpufeature.h in the first place, so this patch moves it to fpsimd.c, compiled conditionally if CONFIG_ARM64_SVE=y. This allows some SVE-related #includes to be dropped from cpufeature.h, which will ease future maintenance. A couple of missing #includes of are exposed by this change under arch/arm64/. This patch adds the missing #includes as necessary. No functional change. Signed-off-by: Dave Martin Reviewed-by: Alex Bennée Acked-by: Catalin Marinas Acked-by: Marc Zyngier Signed-off-by: Marc Zyngier --- arch/arm64/include/asm/cpufeature.h | 29 ----------------------------- arch/arm64/include/asm/fpsimd.h | 2 ++ arch/arm64/include/asm/processor.h | 1 + arch/arm64/kernel/fpsimd.c | 28 ++++++++++++++++++++++++++++ arch/arm64/kernel/ptrace.c | 1 + 5 files changed, 32 insertions(+), 29 deletions(-) diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h index 09b0f2a80c8f..0a6b7133195e 100644 --- a/arch/arm64/include/asm/cpufeature.h +++ b/arch/arm64/include/asm/cpufeature.h @@ -11,9 +11,7 @@ #include #include -#include #include -#include #include /* @@ -510,33 +508,6 @@ static inline bool system_supports_sve(void) cpus_have_const_cap(ARM64_SVE); } -/* - * Read the pseudo-ZCR used by cpufeatures to identify the supported SVE - * vector length. - * - * Use only if SVE is present. - * This function clobbers the SVE vector length. - */ -static inline u64 read_zcr_features(void) -{ - u64 zcr; - unsigned int vq_max; - - /* - * Set the maximum possible VL, and write zeroes to all other - * bits to see if they stick. 
- */ - sve_kernel_enable(NULL); - write_sysreg_s(ZCR_ELx_LEN_MASK, SYS_ZCR_EL1); - - zcr = read_sysreg_s(SYS_ZCR_EL1); - zcr &= ~(u64)ZCR_ELx_LEN_MASK; /* find sticky 1s outside LEN field */ - vq_max = sve_vq_from_vl(sve_get_vl()); - zcr |= vq_max - 1; /* set LEN field to maximum effective value */ - - return zcr; -} - #endif /* __ASSEMBLY__ */ #endif diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h index 3e00f701cb9c..fb60b22b8bbf 100644 --- a/arch/arm64/include/asm/fpsimd.h +++ b/arch/arm64/include/asm/fpsimd.h @@ -69,6 +69,8 @@ extern unsigned int sve_get_vl(void); struct arm64_cpu_capabilities; extern void sve_kernel_enable(const struct arm64_cpu_capabilities *__unused); +extern u64 read_zcr_features(void); + extern int __ro_after_init sve_max_vl; #ifdef CONFIG_ARM64_SVE diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h index 36d64f83cdfb..9231b8762ca6 100644 --- a/arch/arm64/include/asm/processor.h +++ b/arch/arm64/include/asm/processor.h @@ -40,6 +40,7 @@ #include #include +#include #include #include #include diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c index 794dd990da82..6c01ee2062c4 100644 --- a/arch/arm64/kernel/fpsimd.c +++ b/arch/arm64/kernel/fpsimd.c @@ -37,6 +37,7 @@ #include #include #include +#include #include #include @@ -755,6 +756,33 @@ void sve_kernel_enable(const struct arm64_cpu_capabilities *__always_unused p) isb(); } +/* + * Read the pseudo-ZCR used by cpufeatures to identify the supported SVE + * vector length. + * + * Use only if SVE is present. + * This function clobbers the SVE vector length. + */ +u64 read_zcr_features(void) +{ + u64 zcr; + unsigned int vq_max; + + /* + * Set the maximum possible VL, and write zeroes to all other + * bits to see if they stick. 
+ */ + sve_kernel_enable(NULL); + write_sysreg_s(ZCR_ELx_LEN_MASK, SYS_ZCR_EL1); + + zcr = read_sysreg_s(SYS_ZCR_EL1); + zcr &= ~(u64)ZCR_ELx_LEN_MASK; /* find sticky 1s outside LEN field */ + vq_max = sve_vq_from_vl(sve_get_vl()); + zcr |= vq_max - 1; /* set LEN field to maximum effective value */ + + return zcr; +} + void __init sve_setup(void) { u64 zcr; diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c index 7ff81fed46e1..78889c4546d7 100644 --- a/arch/arm64/kernel/ptrace.c +++ b/arch/arm64/kernel/ptrace.c @@ -44,6 +44,7 @@ #include #include #include +#include #include #include #include From patchwork Fri Jun 1 15:27:28 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 924168 Return-Path: X-Original-To: incoming-imx@patchwork.ozlabs.org Delivered-To: patchwork-incoming-imx@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=none (mailfrom) smtp.mailfrom=lists.infradead.org (client-ip=2607:7c80:54:e::133; helo=bombadil.infradead.org; envelope-from=linux-arm-kernel-bounces+incoming-imx=patchwork.ozlabs.org@lists.infradead.org; receiver=) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=arm.com Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=lists.infradead.org header.i=@lists.infradead.org header.b="WsTR3REE"; dkim-atps=neutral Received: from bombadil.infradead.org (bombadil.infradead.org [IPv6:2607:7c80:54:e::133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 40y8jm6s6Jz9rxs for ; Sat, 2 Jun 2018 02:22:04 +1000 (AEST) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=KRiioaXShGAUrvHx765XlL7fjVWE2OBPqSeuJ6SiXmk=; b=WsTR3REE+ajFWB ACrvpJMb+O6QjSt2aT4OC4GWsdOFT+65TNinBuDUKJQIIxVUy5kCghVEUFxtAePmYaiFEUjtpkqfg ImhjsaNh8Ol6fpgrY4FSuq1OT9oVGvmDtO9jDsL7mhWBEJJ3dJTxrWuPfTxnyhz8crmPWmD3zI6H9 RQK+x6mFkqts3gC616DeWA2ZQMoTQuKvsfBoJ5H+6/rcatzhmikkEow9M+N+0kVuZzJMv+c1zY7Rd 5Xa+0H7CHD0TNVPVLOW2Af8SPQkc79DWDDJWeGY1v7a1FL/QiyBMk9tcmG7D91Q2RgXa9367tAb93 +1nlnG24mp6xISi0bwBw==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.90_1 #2 (Red Hat Linux)) id 1fOmoQ-0005IS-IZ; Fri, 01 Jun 2018 16:21:58 +0000 Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70] helo=foss.arm.com) by bombadil.infradead.org with esmtp (Exim 4.90_1 #2 (Red Hat Linux)) id 1fOlys-0002g2-Gf for linux-arm-kernel@lists.infradead.org; Fri, 01 Jun 2018 15:29:06 +0000 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 74D4F15AB; Fri, 1 Jun 2018 08:28:42 -0700 (PDT) Received: from approximate.cambridge.arm.com (approximate.cambridge.arm.com [10.1.206.75]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 643B83F557; Fri, 1 Jun 2018 08:28:40 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Subject: [PATCH 14/33] arm64/sve: Switch sve_pffr() argument from task to thread Date: Fri, 1 Jun 2018 
16:27:28 +0100 Message-Id: <20180601152747.23613-15-marc.zyngier@arm.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20180601152747.23613-1-marc.zyngier@arm.com> References: <20180601152747.23613-1-marc.zyngier@arm.com> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20180601_082842_593807_1031E44B X-CRM114-Status: GOOD ( 15.41 ) X-Spam-Score: -5.0 (-----) X-Spam-Report: SpamAssassin version 3.4.1 on bombadil.infradead.org summary: Content analysis details: (-5.0 points) pts rule name description ---- ---------------------- -------------------------------------------------- -5.0 RCVD_IN_DNSWL_HI RBL: Sender listed at http://www.dnswl.org/, high trust [217.140.101.70 listed in list.dnswl.org] -0.0 SPF_PASS SPF: sender matches SPF record X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.21 Precedence: list List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Mark Rutland , Peter Maydell , kvm@vger.kernel.org, Catalin Marinas , Christoffer Dall , kvmarm@lists.cs.columbia.edu, Eric Auger , =?utf-8?q?Alex_Benn=C3=A9e?= , Dave Martin , linux-arm-kernel@lists.infradead.org Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+incoming-imx=patchwork.ozlabs.org@lists.infradead.org List-Id: linux-imx-kernel.lists.patchwork.ozlabs.org From: Dave Martin sve_pffr(), which is used to derive the base address used for low-level SVE save/restore routines, currently takes the relevant task_struct as an argument. The only accessed fields are actually part of thread_struct, so this patch changes the argument type accordingly. This is done in preparation for moving this function to a header, where we do not want to have to include due to the consequent circular #include problems. No functional change. 
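
The change itself is small; condensed from the diff below, the helper goes from taking the whole task to taking just the thread_struct it actually reads.

/* before: the task_struct is passed, but only ->thread is used */
static void *sve_pffr(struct task_struct *task)
{
	return (char *)task->thread.sve_state +
		sve_ffr_offset(task->thread.sve_vl);
}

/* after: callers pass &task->thread (e.g. &current->thread) directly */
static void *sve_pffr(struct thread_struct *thread)
{
	return (char *)thread->sve_state + sve_ffr_offset(thread->sve_vl);
}
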
Signed-off-by: Dave Martin Reviewed-by: Alex Bennée Acked-by: Catalin Marinas Acked-by: Marc Zyngier Signed-off-by: Marc Zyngier --- arch/arm64/kernel/fpsimd.c | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c index 6c01ee2062c4..842b2ad08bec 100644 --- a/arch/arm64/kernel/fpsimd.c +++ b/arch/arm64/kernel/fpsimd.c @@ -44,6 +44,7 @@ #include #include #include +#include #include #include #include @@ -167,10 +168,9 @@ static size_t sve_ffr_offset(int vl) return SVE_SIG_FFR_OFFSET(sve_vq_from_vl(vl)) - SVE_SIG_REGS_OFFSET; } -static void *sve_pffr(struct task_struct *task) +static void *sve_pffr(struct thread_struct *thread) { - return (char *)task->thread.sve_state + - sve_ffr_offset(task->thread.sve_vl); + return (char *)thread->sve_state + sve_ffr_offset(thread->sve_vl); } static void change_cpacr(u64 val, u64 mask) @@ -253,7 +253,7 @@ static void task_fpsimd_load(void) WARN_ON(!in_softirq() && !irqs_disabled()); if (system_supports_sve() && test_thread_flag(TIF_SVE)) - sve_load_state(sve_pffr(current), + sve_load_state(sve_pffr(¤t->thread), ¤t->thread.uw.fpsimd_state.fpsr, sve_vq_from_vl(current->thread.sve_vl) - 1); else @@ -285,7 +285,7 @@ void fpsimd_save(void) return; } - sve_save_state(sve_pffr(current), &st->fpsr); + sve_save_state(sve_pffr(¤t->thread), &st->fpsr); } else fpsimd_save_state(st); } From patchwork Fri Jun 1 15:27:29 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 924103 Return-Path: X-Original-To: incoming-imx@patchwork.ozlabs.org Delivered-To: patchwork-incoming-imx@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=none (mailfrom) smtp.mailfrom=lists.infradead.org (client-ip=2607:7c80:54:e::133; helo=bombadil.infradead.org; envelope-from=linux-arm-kernel-bounces+incoming-imx=patchwork.ozlabs.org@lists.infradead.org; receiver=) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=arm.com Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=lists.infradead.org header.i=@lists.infradead.org header.b="eg6syrjd"; dkim-atps=neutral Received: from bombadil.infradead.org (bombadil.infradead.org [IPv6:2607:7c80:54:e::133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 40y7kr3tynz9rvt for ; Sat, 2 Jun 2018 01:37:56 +1000 (AEST) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=pgFOxx7i6rSQbeLxJhweYZHypo+48+A/5ak9tcL+WkA=; b=eg6syrjd+As80B 4Sjk9uyIiYtUK6pOJgeeiAhj3RxQdXWf6fp0K8L9zv5MIEdgCYRy67xiv9olgHfP4PreAhgQcHxJZ rryDxS2LmxJDkVZ2/lOfiej3Yi+Vfjtrq2Syslm/kw7GCWvjek63uNK6VPL4wxaCvcYYbDWRf9yzd B9Hkgt5d7D280er+XjSO/Hh5R5K8WJK5JkNi5KvlEmOvsA5UPRomgWItPRgPEtZP/TInRH47dRhtt ftcwqx4T33UbAoPzFJs4/AgEKhdA5JVWs7cvnh8Pe3ohPKkamNsfE17Mss+hTvVfIbti0efe7jRgG yP4DjdIkefhBDm9yLv9Q==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.90_1 #2 (Red Hat Linux)) id 1fOm7X-0001kS-1f; Fri, 01 Jun 2018 15:37:39 +0000 
Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70] helo=foss.arm.com) by bombadil.infradead.org with esmtp (Exim 4.90_1 #2 (Red Hat Linux)) id 1fOlyu-0002h2-SG for linux-arm-kernel@lists.infradead.org; Fri, 01 Jun 2018 15:29:09 +0000 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id C4DCA1529; Fri, 1 Jun 2018 08:28:44 -0700 (PDT) Received: from approximate.cambridge.arm.com (approximate.cambridge.arm.com [10.1.206.75]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id B4AE93F557; Fri, 1 Jun 2018 08:28:42 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Subject: [PATCH 15/33] arm64/sve: Move sve_pffr() to fpsimd.h and make inline Date: Fri, 1 Jun 2018 16:27:29 +0100 Message-Id: <20180601152747.23613-16-marc.zyngier@arm.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20180601152747.23613-1-marc.zyngier@arm.com> References: <20180601152747.23613-1-marc.zyngier@arm.com> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20180601_082844_992772_C786827C X-CRM114-Status: GOOD ( 15.76 ) X-Spam-Score: -5.0 (-----) X-Spam-Report: SpamAssassin version 3.4.1 on bombadil.infradead.org summary: Content analysis details: (-5.0 points) pts rule name description ---- ---------------------- -------------------------------------------------- -5.0 RCVD_IN_DNSWL_HI RBL: Sender listed at http://www.dnswl.org/, high trust [217.140.101.70 listed in list.dnswl.org] -0.0 SPF_PASS SPF: sender matches SPF record X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.21 Precedence: list List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Mark Rutland , Peter Maydell , kvm@vger.kernel.org, Catalin Marinas , Christoffer Dall , kvmarm@lists.cs.columbia.edu, Eric Auger , =?utf-8?q?Alex_Benn=C3=A9e?= , Dave Martin , linux-arm-kernel@lists.infradead.org Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+incoming-imx=patchwork.ozlabs.org@lists.infradead.org List-Id: linux-imx-kernel.lists.patchwork.ozlabs.org From: Dave Martin In order to make sve_save_state()/sve_load_state() more easily reusable and to get rid of a potential branch on context switch critical paths, this patch makes sve_pffr() inline and moves it to fpsimd.h. must be included in fpsimd.h in order to make this work, and this creates an #include cycle that is tricky to avoid without modifying core code, due to the way the PR_SVE_*() prctl helpers are included in the core prctl implementation. Instead of breaking the cycle, this patch defers inclusion of in until the point where it is actually needed: i.e., immediately before the prctl definitions. No functional change. 
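
The resulting layout can be pictured roughly as below, condensed from the diff that follows. The bracketed header names have not survived in this archive, so the #include target shown for processor.h is an inference from the files the diff touches, not a quotation.

/* arch/arm64/include/asm/fpsimd.h: now holds the inline helpers */
static inline size_t sve_ffr_offset(int vl)
{
	return SVE_SIG_FFR_OFFSET(sve_vq_from_vl(vl)) - SVE_SIG_REGS_OFFSET;
}

static inline void *sve_pffr(struct thread_struct *thread)
{
	return (char *)thread->sve_state + sve_ffr_offset(thread->sve_vl);
}

/*
 * arch/arm64/include/asm/processor.h: the fpsimd header is pulled in
 * near the bottom of the file, immediately before the prctl helpers,
 * rather than at the top, to avoid the direct #include cycle described
 * above.
 */
#include <asm/fpsimd.h>		/* inferred name, see note above */

/* Userspace interface for PR_SVE_{SET,GET}_VL prctl()s: */
#define SVE_SET_VL(arg)	sve_set_current_vl(arg)
#define SVE_GET_VL()	sve_get_current_vl()
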
Signed-off-by: Dave Martin Reviewed-by: Alex Bennée Acked-by: Catalin Marinas Acked-by: Marc Zyngier Signed-off-by: Marc Zyngier --- arch/arm64/include/asm/fpsimd.h | 13 +++++++++++++ arch/arm64/include/asm/processor.h | 12 +++++++++++- arch/arm64/kernel/fpsimd.c | 12 ------------ 3 files changed, 24 insertions(+), 13 deletions(-) diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h index fb60b22b8bbf..fa92747a49c8 100644 --- a/arch/arm64/include/asm/fpsimd.h +++ b/arch/arm64/include/asm/fpsimd.h @@ -18,6 +18,8 @@ #include #include +#include +#include #ifndef __ASSEMBLY__ @@ -61,6 +63,17 @@ extern void sve_flush_cpu_state(void); /* Maximum VL that SVE VL-agnostic software can transparently support */ #define SVE_VL_ARCH_MAX 0x100 +/* Offset of FFR in the SVE register dump */ +static inline size_t sve_ffr_offset(int vl) +{ + return SVE_SIG_FFR_OFFSET(sve_vq_from_vl(vl)) - SVE_SIG_REGS_OFFSET; +} + +static inline void *sve_pffr(struct thread_struct *thread) +{ + return (char *)thread->sve_state + sve_ffr_offset(thread->sve_vl); +} + extern void sve_save_state(void *state, u32 *pfpsr); extern void sve_load_state(void const *state, u32 const *pfpsr, unsigned long vq_minus_1); diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h index 9231b8762ca6..c99e657fdd57 100644 --- a/arch/arm64/include/asm/processor.h +++ b/arch/arm64/include/asm/processor.h @@ -40,7 +40,6 @@ #include #include -#include #include #include #include @@ -247,6 +246,17 @@ void cpu_enable_pan(const struct arm64_cpu_capabilities *__unused); void cpu_enable_cache_maint_trap(const struct arm64_cpu_capabilities *__unused); void cpu_clear_disr(const struct arm64_cpu_capabilities *__unused); +/* + * Not at the top of the file due to a direct #include cycle between + * and . Deferring this #include + * ensures that contents of processor.h are visible to fpsimd.h even if + * processor.h is included first. + * + * These prctl helpers are the only things in this file that require + * fpsimd.h. The core code expects them to be in this header. 
+ */ +#include + /* Userspace interface for PR_SVE_{SET,GET}_VL prctl()s: */ #define SVE_SET_VL(arg) sve_set_current_vl(arg) #define SVE_GET_VL() sve_get_current_vl() diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c index 842b2ad08bec..e60c3a28380f 100644 --- a/arch/arm64/kernel/fpsimd.c +++ b/arch/arm64/kernel/fpsimd.c @@ -161,18 +161,6 @@ static void sve_free(struct task_struct *task) __sve_free(task); } - -/* Offset of FFR in the SVE register dump */ -static size_t sve_ffr_offset(int vl) -{ - return SVE_SIG_FFR_OFFSET(sve_vq_from_vl(vl)) - SVE_SIG_REGS_OFFSET; -} - -static void *sve_pffr(struct thread_struct *thread) -{ - return (char *)thread->sve_state + sve_ffr_offset(thread->sve_vl); -} - static void change_cpacr(u64 val, u64 mask) { u64 cpacr = read_sysreg(CPACR_EL1); From patchwork Fri Jun 1 15:27:30 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 924160 Return-Path: X-Original-To: incoming-imx@patchwork.ozlabs.org Delivered-To: patchwork-incoming-imx@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=none (mailfrom) smtp.mailfrom=lists.infradead.org (client-ip=2607:7c80:54:e::133; helo=bombadil.infradead.org; envelope-from=linux-arm-kernel-bounces+incoming-imx=patchwork.ozlabs.org@lists.infradead.org; receiver=) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=arm.com Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=lists.infradead.org header.i=@lists.infradead.org header.b="sSlNT67f"; dkim=fail reason="signature verification failed" (2048-bit key; unprotected) header.d=infradead.org header.i=@infradead.org header.b="dymwd+wI"; dkim-atps=neutral Received: from bombadil.infradead.org (bombadil.infradead.org [IPv6:2607:7c80:54:e::133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 40y8bB2499z9s1b for ; Sat, 2 Jun 2018 02:16:22 +1000 (AEST) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=mRMybBG9hr+oe1tBHASP2rjnbpTkRELnm8hZeCi/yvw=; b=sSlNT67fXa+GBX TW2WIwxABj3T2/idXsbOisIgVo2zbxDaq+hifOZ1LOQ6nZKZq5HemcIXIc17T0EKliotuIhYCFSWu 4fld7ucGSd7qFmieMi2sUyF1/q44pHipSYINMEf88jFbY0aJ92XEhgoqp8rk4GdQNzqP8wAXeo339 66DTBQ+yVoIZF8mBPbWCPwAC997HIjugp9y5sdWk/BxT32vhmODisj9FjBfkUyhyjOaQf6mK0LpIK gbAchPiIv4Io6jDNsVI5DryuonrZu2HBDwKN7YBh4HRwE2khjd6I7NKkrgU8dpYzixJr2ZBq4FH8w xZnzQ7nLe4rKXJZSD13w==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.90_1 #2 (Red Hat Linux)) id 1fOmis-0001QS-EN; Fri, 01 Jun 2018 16:16:14 +0000 Received: from casper.infradead.org ([2001:8b0:10b:1236::1]) by bombadil.infradead.org with esmtps (Exim 4.90_1 #2 (Red Hat Linux)) id 1fOmda-00031U-Lb for linux-arm-kernel@bombadil.infradead.org; Fri, 01 Jun 2018 16:10:46 +0000 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:Content-Type: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender 
:Reply-To:Content-ID:Content-Description:Resent-Date:Resent-From: Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help: List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive; bh=w5qlAwy8N1ewmSCdMeeSTB901kBF5o/+4yVFAFBYKFc=; b=dymwd+wIR0+PGXMaLKuBLTHpwT e16xlVhBGmr8KuTdb/rykPVnYigTLgv7CPC7CRber6iHr625ZEp4pW9Nv+neQ165+po1v9jGheoPL F9Ln5N+Sfi+p8xyvL7nIFCphQphXfjw6o20Qya6g8FPr0SpTxJPjZSwMnab1jebzHkoxtN+z0BTJt 3TgXctF4j8X+vG5KWKi5DfOPI5SnkUiQ0QVyXYlxFX1FAQU4k/EJUxdqL2QLXwk+BxLJsduYMPI4x xrPeWAvmd3wpgZKZohjP7qOV0fmP3UzucAkHtKH2zypAE8hlcsaP7tmNk/2VrUcHVAY6Me9mkgOJW s6IT5UHw==; Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70] helo=foss.arm.com) by casper.infradead.org with esmtp (Exim 4.90_1 #2 (Red Hat Linux)) id 1fOlz8-0007pk-01 for linux-arm-kernel@lists.infradead.org; Fri, 01 Jun 2018 15:29:00 +0000 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 1F87E16A3; Fri, 1 Jun 2018 08:28:47 -0700 (PDT) Received: from approximate.cambridge.arm.com (approximate.cambridge.arm.com [10.1.206.75]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 0F2453F557; Fri, 1 Jun 2018 08:28:44 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Subject: [PATCH 16/33] KVM: arm64: Save host SVE context as appropriate Date: Fri, 1 Jun 2018 16:27:30 +0100 Message-Id: <20180601152747.23613-17-marc.zyngier@arm.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20180601152747.23613-1-marc.zyngier@arm.com> References: <20180601152747.23613-1-marc.zyngier@arm.com> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20180601_162858_316191_7859BF90 X-CRM114-Status: GOOD ( 23.02 ) X-Spam-Score: -5.0 (-----) X-Spam-Report: SpamAssassin version 3.4.1 on casper.infradead.org summary: Content analysis details: (-5.0 points, 5.0 required) pts rule name description ---- ---------------------- -------------------------------------------------- -5.0 RCVD_IN_DNSWL_HI RBL: Sender listed at http://www.dnswl.org/, high trust [217.140.101.70 listed in list.dnswl.org] -0.0 SPF_PASS SPF: sender matches SPF record X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.21 Precedence: list List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Mark Rutland , Peter Maydell , kvm@vger.kernel.org, Catalin Marinas , Christoffer Dall , kvmarm@lists.cs.columbia.edu, Eric Auger , =?utf-8?q?Alex_Benn=C3=A9e?= , Dave Martin , linux-arm-kernel@lists.infradead.org Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+incoming-imx=patchwork.ozlabs.org@lists.infradead.org List-Id: linux-imx-kernel.lists.patchwork.ozlabs.org From: Dave Martin This patch adds SVE context saving to the hyp FPSIMD context switch path. This means that it is no longer necessary to save the host SVE state in advance of entering the guest, when in use. In order to avoid adding pointless complexity to the code, VHE is assumed if SVE is in use. VHE is an architectural prerequisite for SVE, so there is no good reason to turn CONFIG_ARM64_VHE off in kernels that support both SVE and KVM. Historically, software models exist that can expose the architecturally invalid configuration of SVE without VHE, so if this situation is detected at kvm_init() time then KVM will be disabled. 
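One detail worth noting before the diff: at the point of the lazy switch, hyp only holds a pointer to the host's user_fpsimd_state, so the new code uses container_of() to walk back to the enclosing thread_struct and find the SVE buffer for sve_save_state(). A self-contained sketch of that pattern follows, with illustrative types and a stand-in macro (the kernel of course provides its own container_of()):

#include <stddef.h>

struct fp_state_sketch {
	unsigned int fpsr;
};

struct thread_sketch {
	void *sve_state;			/* SVE backing store */
	struct fp_state_sketch fpsimd_state;	/* the member hyp keeps a pointer to */
};

/* Minimal stand-in for the kernel's container_of() helper. */
#define container_of_sketch(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

static struct thread_sketch *thread_of(struct fp_state_sketch *fp)
{
	/* Recover the structure that embeds *fp from the member pointer. */
	return container_of_sketch(fp, struct thread_sketch, fpsimd_state);
}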
Signed-off-by: Dave Martin Reviewed-by: Alex Bennée Acked-by: Catalin Marinas Signed-off-by: Marc Zyngier --- arch/arm/include/asm/kvm_host.h | 1 + arch/arm64/Kconfig | 7 +++++++ arch/arm64/include/asm/kvm_host.h | 13 +++++++++++++ arch/arm64/kvm/fpsimd.c | 1 - arch/arm64/kvm/hyp/switch.c | 20 +++++++++++++++++++- virt/kvm/arm/arm.c | 7 +++++++ 6 files changed, 47 insertions(+), 2 deletions(-) diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h index ac870b2cd5d1..3b85bbb4b23e 100644 --- a/arch/arm/include/asm/kvm_host.h +++ b/arch/arm/include/asm/kvm_host.h @@ -280,6 +280,7 @@ void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot); struct kvm_vcpu *kvm_mpidr_to_vcpu(struct kvm *kvm, unsigned long mpidr); +static inline bool kvm_arch_check_sve_has_vhe(void) { return true; } static inline void kvm_arch_hardware_unsetup(void) {} static inline void kvm_arch_sync_events(struct kvm *kvm) {} static inline void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu) {} diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index eb2cf4938f6d..b0d3820081c8 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -1130,6 +1130,7 @@ endmenu config ARM64_SVE bool "ARM Scalable Vector Extension support" default y + depends on !KVM || ARM64_VHE help The Scalable Vector Extension (SVE) is an extension to the AArch64 execution state which complements and extends the SIMD functionality @@ -1155,6 +1156,12 @@ config ARM64_SVE booting the kernel. If unsure and you are not observing these symptoms, you should assume that it is safe to say Y. + CPUs that support SVE are architecturally required to support the + Virtualization Host Extensions (VHE), so the kernel makes no + provision for supporting SVE alongside KVM without VHE enabled. + Thus, you will need to enable CONFIG_ARM64_VHE if you want to support + KVM in the same kernel image. + config ARM64_MODULE_PLTS bool select HAVE_MOD_ARCH_SPECIFIC diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index b3fe7301bdbe..fda9289f3b9c 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -405,6 +405,19 @@ static inline void __cpu_init_hyp_mode(phys_addr_t pgd_ptr, kvm_call_hyp(__kvm_set_tpidr_el2, tpidr_el2); } +static inline bool kvm_arch_check_sve_has_vhe(void) +{ + /* + * The Arm architecture specifies that implementation of SVE + * requires VHE also to be implemented. 
The KVM code for arm64 + * relies on this when SVE is present: + */ + if (system_supports_sve()) + return has_vhe(); + else + return true; +} + static inline void kvm_arch_hardware_unsetup(void) {} static inline void kvm_arch_sync_events(struct kvm *kvm) {} static inline void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu) {} diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c index 365933a98a7c..dc6ecfa5a2d2 100644 --- a/arch/arm64/kvm/fpsimd.c +++ b/arch/arm64/kvm/fpsimd.c @@ -59,7 +59,6 @@ int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu) */ void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu) { - BUG_ON(system_supports_sve()); BUG_ON(!current->mm); vcpu->arch.flags &= ~(KVM_ARM64_FP_ENABLED | KVM_ARM64_HOST_SVE_IN_USE); diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c index 118f3002b9ce..a6a8c7d9157d 100644 --- a/arch/arm64/kvm/hyp/switch.c +++ b/arch/arm64/kvm/hyp/switch.c @@ -21,6 +21,7 @@ #include +#include #include #include #include @@ -28,6 +29,7 @@ #include #include #include +#include #include /* Check whether the FP regs were dirtied while in the host-side run loop: */ @@ -329,6 +331,8 @@ static bool __hyp_text __skip_instr(struct kvm_vcpu *vcpu) void __hyp_text __hyp_switch_fpsimd(u64 esr __always_unused, struct kvm_vcpu *vcpu) { + struct user_fpsimd_state *host_fpsimd = vcpu->arch.host_fpsimd_state; + if (has_vhe()) write_sysreg(read_sysreg(cpacr_el1) | CPACR_EL1_FPEN, cpacr_el1); @@ -339,7 +343,21 @@ void __hyp_text __hyp_switch_fpsimd(u64 esr __always_unused, isb(); if (vcpu->arch.flags & KVM_ARM64_FP_HOST) { - __fpsimd_save_state(vcpu->arch.host_fpsimd_state); + /* + * In the SVE case, VHE is assumed: it is enforced by + * Kconfig and kvm_arch_init(). + */ + if (system_supports_sve() && + (vcpu->arch.flags & KVM_ARM64_HOST_SVE_IN_USE)) { + struct thread_struct *thread = container_of( + host_fpsimd, + struct thread_struct, uw.fpsimd_state); + + sve_save_state(sve_pffr(thread), &host_fpsimd->fpsr); + } else { + __fpsimd_save_state(host_fpsimd); + } + vcpu->arch.flags &= ~KVM_ARM64_FP_HOST; } diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c index bee226cec40b..ce7c6f36471b 100644 --- a/virt/kvm/arm/arm.c +++ b/virt/kvm/arm/arm.c @@ -16,6 +16,7 @@ * Foundation, 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. */ +#include #include #include #include @@ -41,6 +42,7 @@ #include #include #include +#include #include #include #include @@ -1574,6 +1576,11 @@ int kvm_arch_init(void *opaque) return -ENODEV; } + if (!kvm_arch_check_sve_has_vhe()) { + kvm_pr_unimpl("SVE system without VHE unsupported. 
Broken cpu?"); + return -ENODEV; + } + for_each_online_cpu(cpu) { smp_call_function_single(cpu, check_kvm_target_cpu, &ret, 1); if (ret < 0) { From patchwork Fri Jun 1 15:27:31 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 924171 Return-Path: X-Original-To: incoming-imx@patchwork.ozlabs.org Delivered-To: patchwork-incoming-imx@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=none (mailfrom) smtp.mailfrom=lists.infradead.org (client-ip=2607:7c80:54:e::133; helo=bombadil.infradead.org; envelope-from=linux-arm-kernel-bounces+incoming-imx=patchwork.ozlabs.org@lists.infradead.org; receiver=) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=arm.com Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=lists.infradead.org header.i=@lists.infradead.org header.b="GF4iULy+"; dkim=fail reason="signature verification failed" (2048-bit key; unprotected) header.d=infradead.org header.i=@infradead.org header.b="m9bAEJdg"; dkim-atps=neutral Received: from bombadil.infradead.org (bombadil.infradead.org [IPv6:2607:7c80:54:e::133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 40y8mH5Ycrz9s1b for ; Sat, 2 Jun 2018 02:24:15 +1000 (AEST) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=M22tin4pCpYu0nsMzqxhJhn1UKN1loPgqykH9VjwfxE=; b=GF4iULy+c5lOSK 8VquLiC8xYjoJC4E6Z6Jk1iLqrj1wTUu+sA7OM+Fd/Xsgl8Gc2++gQa5h93RsXEdluU9wbA2uwruF wmfddb4kTElkrjTW/ila6wRSJVanvKryqmcfa0+p8bh9b9B4+WYVZjSesVNIRQ4UuFKxrxfIeMz4v dBwhI2uN7DwTaCZy+aR00tplf2nOtsnZa/wvHHwIEuDjeuJ4/UGZbjV10+ry5V4GT+HoNzJXsdb9Q s+BvT0DZ1V2reRckTOxLPxyLpi3uAzevgGahN3haEWZEhWuQ+Rpcp4xB2u4q21/hxBO+MkXc+K9Y2 l1pJ66bsL4Nj6QDErv+A==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.90_1 #2 (Red Hat Linux)) id 1fOmqO-0006Up-Ks; Fri, 01 Jun 2018 16:24:00 +0000 Received: from merlin.infradead.org ([2001:8b0:10b:1231::1]) by bombadil.infradead.org with esmtps (Exim 4.90_1 #2 (Red Hat Linux)) id 1fOmYX-0000rX-Lx for linux-arm-kernel@bombadil.infradead.org; Fri, 01 Jun 2018 16:05:33 +0000 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=merlin.20170209; h=Content-Transfer-Encoding:Content-Type: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender :Reply-To:Content-ID:Content-Description:Resent-Date:Resent-From: Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help: List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive; bh=saz3VDWQC5tfGPULgBHblXv2A1W4abtZCe7YbQmEe7Q=; b=m9bAEJdgyDB6Q+UVN5JBewH8IH 3LsZPYk/hwR8c4fJjeJIX0xAEL437NTnitLDSgsFtaa6NCUPVJ0/uLhLnt3gI/hIRXPUcDyWlTPl9 F5oFt499HL08U0ae8bbslwu8Rno3B8yh5lPlmHJPHF8NIylRBUAkmQD6sfhm69vDAOLzog8PCBjYk 5OxOpvy/EaQ/P951AewRdEsr+ZG9fYguOpd6Gi6zCqNsNlLLZIG5MOJKdy1QyTCukf2x0VEYm61HK +0G9o3R1fmbVIGU7Ozliz7/iXwuYKMssiRNs6XcT+H+3iaRYW89Sxp6vGRSeQiQR6VwLVJPAfirec BFIW0lwA==; Received: from foss.arm.com ([217.140.101.70]) by merlin.infradead.org 
with esmtp (Exim 4.90_1 #2 (Red Hat Linux)) id 1fOlzC-0005J7-56 for linux-arm-kernel@lists.infradead.org; Fri, 01 Jun 2018 15:29:04 +0000 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 6DB3A16EA; Fri, 1 Jun 2018 08:28:49 -0700 (PDT) Received: from approximate.cambridge.arm.com (approximate.cambridge.arm.com [10.1.206.75]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 5D4903F557; Fri, 1 Jun 2018 08:28:47 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Subject: [PATCH 17/33] KVM: arm64: Remove eager host SVE state saving Date: Fri, 1 Jun 2018 16:27:31 +0100 Message-Id: <20180601152747.23613-18-marc.zyngier@arm.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20180601152747.23613-1-marc.zyngier@arm.com> References: <20180601152747.23613-1-marc.zyngier@arm.com> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20180601_112902_370270_44A880C2 X-CRM114-Status: GOOD ( 17.97 ) X-Spam-Score: -5.0 (-----) X-Spam-Report: SpamAssassin version 3.4.1 on merlin.infradead.org summary: Content analysis details: (-5.0 points) pts rule name description ---- ---------------------- -------------------------------------------------- -5.0 RCVD_IN_DNSWL_HI RBL: Sender listed at http://www.dnswl.org/, high trust [217.140.101.70 listed in list.dnswl.org] -0.0 SPF_PASS SPF: sender matches SPF record X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.21 Precedence: list List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Mark Rutland , Peter Maydell , kvm@vger.kernel.org, Catalin Marinas , Christoffer Dall , kvmarm@lists.cs.columbia.edu, Eric Auger , =?utf-8?q?Alex_Benn=C3=A9e?= , Dave Martin , linux-arm-kernel@lists.infradead.org Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+incoming-imx=patchwork.ozlabs.org@lists.infradead.org List-Id: linux-imx-kernel.lists.patchwork.ozlabs.org From: Dave Martin Now that the host SVE context can be saved on demand from Hyp, there is no longer any need to save this state in advance before entering the guest. This patch removes the relevant call to kvm_fpsimd_flush_cpu_state(). Since the problem that function was intended to solve now no longer exists, the function and its dependencies are also deleted. 
Signed-off-by: Dave Martin Reviewed-by: Alex Bennée Acked-by: Christoffer Dall Acked-by: Marc Zyngier Acked-by: Catalin Marinas Signed-off-by: Marc Zyngier --- arch/arm/include/asm/kvm_host.h | 3 --- arch/arm64/include/asm/kvm_host.h | 10 ---------- arch/arm64/kernel/fpsimd.c | 21 --------------------- virt/kvm/arm/arm.c | 3 --- 4 files changed, 37 deletions(-) diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h index 3b85bbb4b23e..f079a2039c8a 100644 --- a/arch/arm/include/asm/kvm_host.h +++ b/arch/arm/include/asm/kvm_host.h @@ -312,9 +312,6 @@ static inline void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu) {} static inline void kvm_arch_vcpu_ctxsync_fp(struct kvm_vcpu *vcpu) {} static inline void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu) {} -/* All host FP/SIMD state is restored on guest exit, so nothing to save: */ -static inline void kvm_fpsimd_flush_cpu_state(void) {} - static inline void kvm_arm_vhe_guest_enter(void) {} static inline void kvm_arm_vhe_guest_exit(void) {} diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index fda9289f3b9c..a4ca202ff3f2 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -457,16 +457,6 @@ static inline int kvm_arch_vcpu_run_pid_change(struct kvm_vcpu *vcpu) } #endif -/* - * All host FP/SIMD state is restored on guest exit, so nothing needs - * doing here except in the SVE case: -*/ -static inline void kvm_fpsimd_flush_cpu_state(void) -{ - if (system_supports_sve()) - sve_flush_cpu_state(); -} - static inline void kvm_arm_vhe_guest_enter(void) { local_daif_mask(); diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c index e60c3a28380f..7074c4cd0e0e 100644 --- a/arch/arm64/kernel/fpsimd.c +++ b/arch/arm64/kernel/fpsimd.c @@ -120,7 +120,6 @@ */ struct fpsimd_last_state_struct { struct user_fpsimd_state *st; - bool sve_in_use; }; static DEFINE_PER_CPU(struct fpsimd_last_state_struct, fpsimd_last_state); @@ -1008,7 +1007,6 @@ void fpsimd_bind_task_to_cpu(void) this_cpu_ptr(&fpsimd_last_state); last->st = ¤t->thread.uw.fpsimd_state; - last->sve_in_use = test_thread_flag(TIF_SVE); current->thread.fpsimd_cpu = smp_processor_id(); if (system_supports_sve()) { @@ -1030,7 +1028,6 @@ void fpsimd_bind_state_to_cpu(struct user_fpsimd_state *st) WARN_ON(!in_softirq() && !irqs_disabled()); last->st = st; - last->sve_in_use = false; } /* @@ -1091,24 +1088,6 @@ void fpsimd_flush_cpu_state(void) set_thread_flag(TIF_FOREIGN_FPSTATE); } -/* - * Invalidate any task SVE state currently held in this CPU's regs. - * - * This is used to prevent the kernel from trying to reuse SVE register data - * that is detroyed by KVM guest enter/exit. This function should go away when - * KVM SVE support is implemented. Don't use it for anything else. 
- */ -#ifdef CONFIG_ARM64_SVE -void sve_flush_cpu_state(void) -{ - struct fpsimd_last_state_struct const *last = - this_cpu_ptr(&fpsimd_last_state); - - if (last->st && last->sve_in_use) - fpsimd_flush_cpu_state(); -} -#endif /* CONFIG_ARM64_SVE */ - #ifdef CONFIG_KERNEL_MODE_NEON DEFINE_PER_CPU(bool, kernel_neon_busy); diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c index ce7c6f36471b..39e777155e7c 100644 --- a/virt/kvm/arm/arm.c +++ b/virt/kvm/arm/arm.c @@ -682,9 +682,6 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run) */ preempt_disable(); - /* Flush FP/SIMD state that can't survive guest entry/exit */ - kvm_fpsimd_flush_cpu_state(); - kvm_pmu_flush_hwstate(vcpu); local_irq_disable(); From patchwork Fri Jun 1 15:27:32 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 924138 Return-Path: X-Original-To: incoming-imx@patchwork.ozlabs.org Delivered-To: patchwork-incoming-imx@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=none (mailfrom) smtp.mailfrom=lists.infradead.org (client-ip=2607:7c80:54:e::133; helo=bombadil.infradead.org; envelope-from=linux-arm-kernel-bounces+incoming-imx=patchwork.ozlabs.org@lists.infradead.org; receiver=) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=arm.com Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=lists.infradead.org header.i=@lists.infradead.org header.b="QTMAhVbZ"; dkim-atps=neutral Received: from bombadil.infradead.org (bombadil.infradead.org [IPv6:2607:7c80:54:e::133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 40y85W5SzBz9rxs for ; Sat, 2 Jun 2018 01:54:07 +1000 (AEST) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=ha2IRXGlxJajc0KNFJISO0PkKGc+Sh+BoITI274Z0u0=; b=QTMAhVbZrsbyvH ZiurV7lhCSIsDpYonmonw5wx7lDoagZjm6lHJHhAT99CgMV7le4arC78Y8HUnF0O2ManFs0BZ3DAk vgyUguA7k7gxcAk6eH4qCrdk/5EdO6QMLpiJOHEnG3RJdxJ9j9Yu5X5DYBpYaAXNa/kWO7WDZa3Ng sTfhAXHad62EKv2l7NNzcGz06k9+aiwaC8dgx+9oCfSx5NyFaAb04zIe+BChulGYj4b1p/IYJ1SXb /olX02t3V3fFFGzWb79nxqW+1gtocR8F2myW9ikgLmed0q90dlWSl2v/hDtsOUS1CK7357OfppMJk rPtyD6AVCf+N/ClQSjwg==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.90_1 #2 (Red Hat Linux)) id 1fOmNH-0002w3-Pr; Fri, 01 Jun 2018 15:53:55 +0000 Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70] helo=foss.arm.com) by bombadil.infradead.org with esmtp (Exim 4.90_1 #2 (Red Hat Linux)) id 1fOlz1-0002iI-Q6 for linux-arm-kernel@lists.infradead.org; Fri, 01 Jun 2018 15:29:45 +0000 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id BD47615BE; Fri, 1 Jun 2018 08:28:51 -0700 (PDT) Received: from approximate.cambridge.arm.com (approximate.cambridge.arm.com [10.1.206.75]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id ACDE43F557; Fri, 1 Jun 2018 08:28:49 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , 
=?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Subject: [PATCH 18/33] KVM: arm64: Remove redundant *exit_code changes in fpsimd_guest_exit() Date: Fri, 1 Jun 2018 16:27:32 +0100 Message-Id: <20180601152747.23613-19-marc.zyngier@arm.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20180601152747.23613-1-marc.zyngier@arm.com> References: <20180601152747.23613-1-marc.zyngier@arm.com> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20180601_082851_901382_845D4631 X-CRM114-Status: GOOD ( 16.52 ) X-Spam-Score: -5.0 (-----) X-Spam-Report: SpamAssassin version 3.4.1 on bombadil.infradead.org summary: Content analysis details: (-5.0 points) pts rule name description ---- ---------------------- -------------------------------------------------- -5.0 RCVD_IN_DNSWL_HI RBL: Sender listed at http://www.dnswl.org/, high trust [217.140.101.70 listed in list.dnswl.org] -0.0 SPF_PASS SPF: sender matches SPF record X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.21 Precedence: list List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Mark Rutland , Peter Maydell , kvm@vger.kernel.org, Catalin Marinas , Christoffer Dall , kvmarm@lists.cs.columbia.edu, Eric Auger , =?utf-8?q?Alex_Benn=C3=A9e?= , Dave Martin , linux-arm-kernel@lists.infradead.org Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+incoming-imx=patchwork.ozlabs.org@lists.infradead.org List-Id: linux-imx-kernel.lists.patchwork.ozlabs.org From: Dave Martin In fixup_guest_exit(), there are a couple of cases where after checking what the exit code was, we assign it explicitly with the value it already had. Assuming this is not indicative of a bug, these assignments are not needed. This patch removes the redundant assignments, and simplifies some if-nesting that becomes trivial as a result. No functional change. 
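The shape of the clean-up, reduced to a before/after fragment; the identifiers are the ones in the diff below, and the surrounding declarations are assumed from the kernel sources:

/* Before: the else arm stores the value *exit_code is already known to
 * hold, since this block is only reached when it equals ARM_EXCEPTION_TRAP. */
if (ret == 1) {
	if (__skip_instr(vcpu))
		return true;
	else
		*exit_code = ARM_EXCEPTION_TRAP;	/* redundant store */
}

/* After: drop the store and collapse the nesting. */
if (ret == 1 && __skip_instr(vcpu))
	return true;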
Signed-off-by: Dave Martin Reviewed-by: Alex Bennée Acked-by: Marc Zyngier Acked-by: Christoffer Dall Signed-off-by: Marc Zyngier --- arch/arm64/kvm/hyp/switch.c | 16 ++++------------ 1 file changed, 4 insertions(+), 12 deletions(-) diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c index a6a8c7d9157d..18d0faa8c806 100644 --- a/arch/arm64/kvm/hyp/switch.c +++ b/arch/arm64/kvm/hyp/switch.c @@ -403,12 +403,8 @@ static bool __hyp_text fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code) if (valid) { int ret = __vgic_v2_perform_cpuif_access(vcpu); - if (ret == 1) { - if (__skip_instr(vcpu)) - return true; - else - *exit_code = ARM_EXCEPTION_TRAP; - } + if (ret == 1 && __skip_instr(vcpu)) + return true; if (ret == -1) { /* Promote an illegal access to an @@ -430,12 +426,8 @@ static bool __hyp_text fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code) kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_CP15_32)) { int ret = __vgic_v3_perform_cpuif_access(vcpu); - if (ret == 1) { - if (__skip_instr(vcpu)) - return true; - else - *exit_code = ARM_EXCEPTION_TRAP; - } + if (ret == 1 && __skip_instr(vcpu)) + return true; } /* Return to the host kernel and handle the exit */ From patchwork Fri Jun 1 15:27:33 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 924161 Return-Path: X-Original-To: incoming-imx@patchwork.ozlabs.org Delivered-To: patchwork-incoming-imx@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=none (mailfrom) smtp.mailfrom=lists.infradead.org (client-ip=2607:7c80:54:e::133; helo=bombadil.infradead.org; envelope-from=linux-arm-kernel-bounces+incoming-imx=patchwork.ozlabs.org@lists.infradead.org; receiver=) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=arm.com Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=lists.infradead.org header.i=@lists.infradead.org header.b="G58z5rrn"; dkim=fail reason="signature verification failed" (2048-bit key; unprotected) header.d=infradead.org header.i=@infradead.org header.b="jyvA+QJv"; dkim-atps=neutral Received: from bombadil.infradead.org (bombadil.infradead.org [IPv6:2607:7c80:54:e::133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 40y8cW5fJVz9rxs for ; Sat, 2 Jun 2018 02:17:31 +1000 (AEST) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: Message-Id:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=2xWRisAKbUjAFY3pYta36yu3JxA0pTHvfuv4vJCb/Zs=; b=G58z5rrnjepFUc 1Y6YOn+krUovFzaTaQxglHQ6m0uU6URn3LAr6qXlDKjSoC0XecFDl7DuvzMVjKLwFJA183oHW538N ykYYv+ufKBwzCRgldxLU+abmnHlXzOpYd+swVzl3ijensflEW/67iyJOhBi45JtkvAQrbGelX1ml3 hYMuEghbDU41CvSbODpfvwzz95+f49OrBJuQJjB2AAFbvOF8lzjFJU2UbYMrIlWugJlc3A2+0TEpU xXmbK+7ENMWcxA/jDPzcS1xU5FFs25E94JAsY/sFEy4ssh360HTG2Hm0xsseWTBI/EZtU9HnW95wp t1gJSl+rDmbMVx2DM3Hw==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.90_1 #2 (Red Hat Linux)) id 1fOmk0-00025W-UH; Fri, 01 Jun 2018 16:17:25 +0000 Received: from casper.infradead.org ([2001:8b0:10b:1236::1]) by 
bombadil.infradead.org with esmtps (Exim 4.90_1 #2 (Red Hat Linux)) id 1fOmda-00037E-LD for linux-arm-kernel@bombadil.infradead.org; Fri, 01 Jun 2018 16:10:46 +0000 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:Content-Type: MIME-Version:References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender :Reply-To:Content-ID:Content-Description:Resent-Date:Resent-From: Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help: List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive; bh=SYLkbdMCdjwEKt1+YhVGYClMYGdqMQocSJDKr4DsOOw=; b=jyvA+QJvRVbdHDuDb+HWH5kaVs awAie09zfsnXNTz2nB7aAlkMYORBLM+/2M19JmtW7cQ8ubR2nCy1lAfqJGAHnzUOUEr6YpXuuDNPM CVCYSivdHHMVDcK/hF23oxXjVxRh6u+ib+aKxRXx5Yzn+eNlLqjzGzqtuoGzUhpIxBDPsSaFuJCfi j7aJUII/TPHO3QofEiNQ8udqLnpPjkVVJGG5+5i4eLH0goXA2Jdmbj9+MNCeV2eeg7BQM2ecylCUw xYTpU9g4BiHs4tqkKmuAqdDUb5V1+/0l8pSITHzbS+qsdXpFv4TlZvMUUBGb4NCcdpn8n1TP0aoup ECcNtXqQ==; Received: from foss.arm.com ([217.140.101.70]) by casper.infradead.org with esmtp (Exim 4.90_1 #2 (Red Hat Linux)) id 1fOlz8-0007pv-00 for linux-arm-kernel@lists.infradead.org; Fri, 01 Jun 2018 15:28:59 +0000 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 17A451688; Fri, 1 Jun 2018 08:28:54 -0700 (PDT) Received: from approximate.cambridge.arm.com (approximate.cambridge.arm.com [10.1.206.75]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 07B5C3F557; Fri, 1 Jun 2018 08:28:51 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Subject: [PATCH 19/33] KVM: arm64: Fold redundant exit code checks out of fixup_guest_exit() Date: Fri, 1 Jun 2018 16:27:33 +0100 Message-Id: <20180601152747.23613-20-marc.zyngier@arm.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20180601152747.23613-1-marc.zyngier@arm.com> References: <20180601152747.23613-1-marc.zyngier@arm.com> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20180601_162858_314774_BF614D05 X-CRM114-Status: GOOD ( 17.15 ) X-Spam-Score: -5.0 (-----) X-Spam-Report: SpamAssassin version 3.4.1 on casper.infradead.org summary: Content analysis details: (-5.0 points, 5.0 required) pts rule name description ---- ---------------------- -------------------------------------------------- -5.0 RCVD_IN_DNSWL_HI RBL: Sender listed at http://www.dnswl.org/, high trust [217.140.101.70 listed in list.dnswl.org] -0.0 SPF_PASS SPF: sender matches SPF record X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.21 Precedence: list List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Mark Rutland , Peter Maydell , kvm@vger.kernel.org, Catalin Marinas , Christoffer Dall , kvmarm@lists.cs.columbia.edu, Eric Auger , =?utf-8?q?Alex_Benn=C3=A9e?= , Dave Martin , linux-arm-kernel@lists.infradead.org Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+incoming-imx=patchwork.ozlabs.org@lists.infradead.org List-Id: linux-imx-kernel.lists.patchwork.ozlabs.org From: Dave Martin The entire tail of fixup_guest_exit() is contained in if statements of the form if (x && *exit_code == ARM_EXCEPTION_TRAP). As a result, we can check just once and bail out of the function early, allowing the remaining if conditions to be simplified. 
The only awkward case is where *exit_code is changed to ARM_EXCEPTION_EL1_SERROR in the case of an illegal GICv2 CPU interface access: in that case, the GICv3 trap handling code is skipped using a goto. This avoids pointlessly evaluating the static branch check for the GICv3 case, even though we can't have vgic_v2_cpuif_trap and vgic_v3_cpuif_trap true simultaneously unless we have a GICv3 and GICv2 on the host: that sounds stupid, but I haven't satisfied myself that it can't happen. No functional change. Signed-off-by: Dave Martin Reviewed-by: Marc Zyngier Reviewed-by: Alex Bennée Acked-by: Christoffer Dall Signed-off-by: Marc Zyngier --- arch/arm64/kvm/hyp/switch.c | 12 ++++++++---- 1 file changed, 8 insertions(+), 4 deletions(-) diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c index 18d0faa8c806..4fbee9502162 100644 --- a/arch/arm64/kvm/hyp/switch.c +++ b/arch/arm64/kvm/hyp/switch.c @@ -387,11 +387,13 @@ static bool __hyp_text fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code) * same PC once the SError has been injected, and replay the * trapping instruction. */ - if (*exit_code == ARM_EXCEPTION_TRAP && !__populate_fault_info(vcpu)) + if (*exit_code != ARM_EXCEPTION_TRAP) + goto exit; + + if (!__populate_fault_info(vcpu)) return true; - if (static_branch_unlikely(&vgic_v2_cpuif_trap) && - *exit_code == ARM_EXCEPTION_TRAP) { + if (static_branch_unlikely(&vgic_v2_cpuif_trap)) { bool valid; valid = kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_DABT_LOW && @@ -417,11 +419,12 @@ static bool __hyp_text fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code) *vcpu_cpsr(vcpu) &= ~DBG_SPSR_SS; *exit_code = ARM_EXCEPTION_EL1_SERROR; } + + goto exit; } } if (static_branch_unlikely(&vgic_v3_cpuif_trap) && - *exit_code == ARM_EXCEPTION_TRAP && (kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_SYS64 || kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_CP15_32)) { int ret = __vgic_v3_perform_cpuif_access(vcpu); @@ -430,6 +433,7 @@ static bool __hyp_text fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code) return true; } +exit: /* Return to the host kernel and handle the exit */ return false; } From patchwork Fri Jun 1 15:27:34 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 924102 Return-Path: X-Original-To: incoming-imx@patchwork.ozlabs.org Delivered-To: patchwork-incoming-imx@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=none (mailfrom) smtp.mailfrom=lists.infradead.org (client-ip=2607:7c80:54:e::133; helo=bombadil.infradead.org; envelope-from=linux-arm-kernel-bounces+incoming-imx=patchwork.ozlabs.org@lists.infradead.org; receiver=) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=arm.com Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=lists.infradead.org header.i=@lists.infradead.org header.b="kwUzkUBx"; dkim-atps=neutral Received: from bombadil.infradead.org (bombadil.infradead.org [IPv6:2607:7c80:54:e::133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 40y7g31wNkz9ry1 for ; Sat, 2 Jun 2018 01:34:39 +1000 (AEST) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20170209; h=Sender: Content-Transfer-Encoding:Content-Type:Cc:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:MIME-Version:References:In-Reply-To: 
Message-Id:Date:Subject:To:From:Reply-To:Content-ID:Content-Description: Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID: List-Owner; bh=pYzf6j1xDQw2ECf6r9+RC/D1Gos7rXZliivZmIxBfoI=; b=kwUzkUBx6fLr0w 5ArCb98CP/s2Y7vZsbMBMaQEkj2AfShDK/FfAZO4emENQRTpo5FgesbOxSy0782uLX7GTkSF5XFpg 74pkCOITre1fN7tKjgNMQgdWvIhooSWAiYSwvu7J9erN/5THn6ZuTAdlNL6WNwGaj2+uJKS32+2dH 1hHevaweLYu9h/4p+Y1EMu7snYwK66Lti/SEPYFctzeSFMCTQo04B148jVqtLtp7mvVz4koybUyh1 ni+PXzUaad3ybqN9+ZBJUCj8veqsERVTgJZu0/ncXemHS/rzTBdno7BfbY3ltp25mUTVBnHpJ6CQk 319OPrqsaVrfEHCXzn4w==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.90_1 #2 (Red Hat Linux)) id 1fOm4X-0007J2-EP; Fri, 01 Jun 2018 15:34:33 +0000 Received: from foss.arm.com ([217.140.101.70]) by bombadil.infradead.org with esmtp (Exim 4.90_1 #2 (Red Hat Linux)) id 1fOlz6-0002ej-FL for linux-arm-kernel@lists.infradead.org; Fri, 01 Jun 2018 15:29:59 +0000 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 6563D165D; Fri, 1 Jun 2018 08:28:56 -0700 (PDT) Received: from approximate.cambridge.arm.com (approximate.cambridge.arm.com [10.1.206.75]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 556D93F557; Fri, 1 Jun 2018 08:28:54 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Subject: [PATCH 20/33] KVM: arm64: Invoke FPSIMD context switch trap from C Date: Fri, 1 Jun 2018 16:27:34 +0100 Message-Id: <20180601152747.23613-21-marc.zyngier@arm.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20180601152747.23613-1-marc.zyngier@arm.com> References: <20180601152747.23613-1-marc.zyngier@arm.com> MIME-Version: 1.0 X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20180601_082856_537293_B5D69A72 X-CRM114-Status: GOOD ( 14.93 ) X-Spam-Score: -5.0 (-----) X-Spam-Report: SpamAssassin version 3.4.1 on bombadil.infradead.org summary: Content analysis details: (-5.0 points) pts rule name description ---- ---------------------- -------------------------------------------------- -5.0 RCVD_IN_DNSWL_HI RBL: Sender listed at http://www.dnswl.org/, high trust [217.140.101.70 listed in list.dnswl.org] -0.0 SPF_PASS SPF: sender matches SPF record X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.21 Precedence: list List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Mark Rutland , Peter Maydell , kvm@vger.kernel.org, Catalin Marinas , Christoffer Dall , kvmarm@lists.cs.columbia.edu, Eric Auger , =?utf-8?q?Alex_Benn=C3=A9e?= , Dave Martin , linux-arm-kernel@lists.infradead.org Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+incoming-imx=patchwork.ozlabs.org@lists.infradead.org List-Id: linux-imx-kernel.lists.patchwork.ozlabs.org From: Dave Martin The conversion of the FPSIMD context switch trap code to C has added some overhead to calling it, due to the need to save registers that the procedure call standard defines as caller-saved. So, perhaps it is no longer worth invoking this trap handler quite so early. Instead, we can invoke it from fixup_guest_exit(), with little likelihood of increasing the overhead much further. As a convenience, this patch gives __hyp_switch_fpsimd() the same return semantics fixup_guest_exit(). For now there is no possibility of a spurious FPSIMD trap, so the function always returns true, but this allows it to be tail-called with a single return statement. 
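The resulting control flow, condensed from the diff below into a sketch (identifiers are those used or introduced by the patch; function bodies are elided):

static bool __hyp_text __hyp_switch_fpsimd(struct kvm_vcpu *vcpu)
{
	/* ... save host FPSIMD/SVE state, load the guest's ... */
	return true;	/* no spurious traps yet, so always resume the guest */
}

static bool __hyp_text fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
{
	/* ... */
	if (*exit_code != ARM_EXCEPTION_TRAP)
		goto exit;

	/* Lazy FP/SIMD switch: handle the first access trap and resume. */
	if (system_supports_fpsimd() &&
	    kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_FP_ASIMD)
		return __hyp_switch_fpsimd(vcpu);	/* single-return tail call */

	/* ... remaining trap handling from earlier patches ... */
exit:
	return false;	/* return to the host and handle the exit there */
}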
Signed-off-by: Dave Martin Reviewed-by: Marc Zyngier Reviewed-by: Christoffer Dall Reviewed-by: Alex Bennée Signed-off-by: Marc Zyngier --- arch/arm64/kvm/hyp/entry.S | 30 ------------------------------ arch/arm64/kvm/hyp/hyp-entry.S | 19 ------------------- arch/arm64/kvm/hyp/switch.c | 15 +++++++++++++-- 3 files changed, 13 insertions(+), 51 deletions(-) diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S index 40f349bc1079..fad1e164fe48 100644 --- a/arch/arm64/kvm/hyp/entry.S +++ b/arch/arm64/kvm/hyp/entry.S @@ -166,33 +166,3 @@ abort_guest_exit_end: orr x0, x0, x5 1: ret ENDPROC(__guest_exit) - -ENTRY(__fpsimd_guest_restore) - // x0: esr - // x1: vcpu - // x2-x29,lr: vcpu regs - // vcpu x0-x1 on the stack - stp x2, x3, [sp, #-144]! - stp x4, x5, [sp, #16] - stp x6, x7, [sp, #32] - stp x8, x9, [sp, #48] - stp x10, x11, [sp, #64] - stp x12, x13, [sp, #80] - stp x14, x15, [sp, #96] - stp x16, x17, [sp, #112] - stp x18, lr, [sp, #128] - - bl __hyp_switch_fpsimd - - ldp x4, x5, [sp, #16] - ldp x6, x7, [sp, #32] - ldp x8, x9, [sp, #48] - ldp x10, x11, [sp, #64] - ldp x12, x13, [sp, #80] - ldp x14, x15, [sp, #96] - ldp x16, x17, [sp, #112] - ldp x18, lr, [sp, #128] - ldp x0, x1, [sp, #144] - ldp x2, x3, [sp], #160 - eret -ENDPROC(__fpsimd_guest_restore) diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S index bffece27b5c1..753b9d213651 100644 --- a/arch/arm64/kvm/hyp/hyp-entry.S +++ b/arch/arm64/kvm/hyp/hyp-entry.S @@ -113,25 +113,6 @@ el1_hvc_guest: el1_trap: get_vcpu_ptr x1, x0 - - mrs x0, esr_el2 - lsr x0, x0, #ESR_ELx_EC_SHIFT - /* - * x0: ESR_EC - * x1: vcpu pointer - */ - - /* - * We trap the first access to the FP/SIMD to save the host context - * and restore the guest context lazily. - * If FP/SIMD is not implemented, handle the trap and inject an - * undefined instruction exception to the guest. - */ -alternative_if_not ARM64_HAS_NO_FPSIMD - cmp x0, #ESR_ELx_EC_FP_ASIMD - b.eq __fpsimd_guest_restore -alternative_else_nop_endif - mov x0, #ARM_EXCEPTION_TRAP b __guest_exit diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c index 4fbee9502162..2d45bd719a5d 100644 --- a/arch/arm64/kvm/hyp/switch.c +++ b/arch/arm64/kvm/hyp/switch.c @@ -328,8 +328,7 @@ static bool __hyp_text __skip_instr(struct kvm_vcpu *vcpu) } } -void __hyp_text __hyp_switch_fpsimd(u64 esr __always_unused, - struct kvm_vcpu *vcpu) +static bool __hyp_text __hyp_switch_fpsimd(struct kvm_vcpu *vcpu) { struct user_fpsimd_state *host_fpsimd = vcpu->arch.host_fpsimd_state; @@ -369,6 +368,8 @@ void __hyp_text __hyp_switch_fpsimd(u64 esr __always_unused, fpexc32_el2); vcpu->arch.flags |= KVM_ARM64_FP_ENABLED; + + return true; } /* @@ -390,6 +391,16 @@ static bool __hyp_text fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code) if (*exit_code != ARM_EXCEPTION_TRAP) goto exit; + /* + * We trap the first access to the FP/SIMD to save the host context + * and restore the guest context lazily. + * If FP/SIMD is not implemented, handle the trap and inject an + * undefined instruction exception to the guest. + */ + if (system_supports_fpsimd() && + kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_FP_ASIMD) + return __hyp_switch_fpsimd(vcpu); + if (!__populate_fault_info(vcpu)) return true;