From patchwork Thu Aug 29 23:34:17 2024
X-Patchwork-Submitter: Deepak Gupta
X-Patchwork-Id: 1978653
From: Deepak Gupta
To: qemu-riscv@nongnu.org, qemu-devel@nongnu.org
Cc: palmer@dabbelt.com, Alistair.Francis@wdc.com, bmeng.cn@gmail.com,
	liwei1518@gmail.com, dbarboza@ventanamicro.com,
	zhiwei_liu@linux.alibaba.com, jim.shu@sifive.com, andy.chiu@sifive.com,
	kito.cheng@sifive.com, Deepak Gupta, Richard Henderson
Subject: [PATCH v12 13/20] target/riscv: mmu changes for zicfiss shadow stack protection
Date: Thu, 29 Aug 2024 16:34:17 -0700
Message-ID: <20240829233425.1005029-14-debug@rivosinc.com>
X-Mailer: git-send-email 2.45.0
In-Reply-To: <20240829233425.1005029-1-debug@rivosinc.com>
References: <20240829233425.1005029-1-debug@rivosinc.com>

zicfiss protects the shadow stack using a new page table encoding:
PTE.W=1, PTE.R=0 and PTE.X=0. This encoding is reserved if zicfiss is
not implemented or if shadow stacks are not enabled. Loads from shadow
stack memory are allowed, while stores to shadow stack memory lead to
access faults. Shadow stack accesses to read-only memory lead to store
page faults.

To implement the special nature of shadow stack memory, where only
selected stores (shadow stack stores from sspush) are allowed and all
other stores are disallowed, a new MMU TLB index is created for shadow
stack accesses.

Signed-off-by: Deepak Gupta
Suggested-by: Richard Henderson
Reviewed-by: Richard Henderson
---
 target/riscv/cpu_helper.c | 46 +++++++++++++++++++++++++++++++++------
 target/riscv/internals.h  |  3 +++
 2 files changed, 42 insertions(+), 7 deletions(-)

diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
index be4ac3d54e..f0c75d8b0a 100644
--- a/target/riscv/cpu_helper.c
+++ b/target/riscv/cpu_helper.c
@@ -893,6 +893,8 @@ static int get_physical_address(CPURISCVState *env, hwaddr *physical,
     hwaddr ppn;
     int napot_bits = 0;
     target_ulong napot_mask;
+    bool is_sstack_idx = ((mmu_idx & MMU_IDX_SS_WRITE) == MMU_IDX_SS_WRITE);
+    bool sstack_page = false;
 
     /*
      * Check if we should use the background registers for the two
@@ -1101,21 +1103,40 @@ restart:
         return TRANSLATE_FAIL;
     }
 
+    target_ulong rwx = pte & (PTE_R | PTE_W | PTE_X);
     /* Check for reserved combinations of RWX flags. */
-    switch (pte & (PTE_R | PTE_W | PTE_X)) {
-    case PTE_W:
+    switch (rwx) {
     case PTE_W | PTE_X:
         return TRANSLATE_FAIL;
+    case PTE_W:
+        /* if bcfi enabled, PTE_W is not reserved and shadow stack page */
+        if (cpu_get_bcfien(env) && first_stage) {
+            sstack_page = true;
+            /* if ss index, read and write allowed. else only read allowed */
+            rwx = is_sstack_idx ? PTE_R | PTE_W : PTE_R;
+            break;
+        }
+        return TRANSLATE_FAIL;
+    case PTE_R:
+        /*
+         * no matter what's the `access_type`, shadow stack access to readonly
+         * memory are always store page faults. During unwind, loads will be
+         * promoted as store fault.
+         */
+        if (is_sstack_idx) {
+            return TRANSLATE_FAIL;
+        }
+        break;
     }
 
     int prot = 0;
-    if (pte & PTE_R) {
+    if (rwx & PTE_R) {
         prot |= PAGE_READ;
     }
-    if (pte & PTE_W) {
+    if (rwx & PTE_W) {
         prot |= PAGE_WRITE;
     }
-    if (pte & PTE_X) {
+    if (rwx & PTE_X) {
         bool mxr = false;
 
         /*
@@ -1159,8 +1180,11 @@ restart:
     }
 
     if (!((prot >> access_type) & 1)) {
-        /* Access check failed */
-        return TRANSLATE_FAIL;
+        /*
+         * Access check failed, access check failures for shadow stack are
+         * access faults.
+         */
+        return sstack_page ? TRANSLATE_PMP_FAIL : TRANSLATE_FAIL;
     }
 
     target_ulong updated_pte = pte;
@@ -1347,9 +1371,17 @@ void riscv_cpu_do_unaligned_access(CPUState *cs, vaddr addr,
         break;
     case MMU_DATA_LOAD:
         cs->exception_index = RISCV_EXCP_LOAD_ADDR_MIS;
+        /* shadow stack mis aligned accesses are access faults */
+        if (mmu_idx & MMU_IDX_SS_WRITE) {
+            cs->exception_index = RISCV_EXCP_LOAD_ACCESS_FAULT;
+        }
         break;
     case MMU_DATA_STORE:
         cs->exception_index = RISCV_EXCP_STORE_AMO_ADDR_MIS;
+        /* shadow stack mis aligned accesses are access faults */
+        if (mmu_idx & MMU_IDX_SS_WRITE) {
+            cs->exception_index = RISCV_EXCP_STORE_AMO_ACCESS_FAULT;
+        }
         break;
     default:
         g_assert_not_reached();
diff --git a/target/riscv/internals.h b/target/riscv/internals.h
index 0ac17bc5ad..ddbdee885b 100644
--- a/target/riscv/internals.h
+++ b/target/riscv/internals.h
@@ -30,12 +30,15 @@
  *  - U+2STAGE          0b100
  *  - S+2STAGE          0b101
  *  - S+SUM+2STAGE      0b110
+ *  - Shadow stack+U   0b1000
+ *  - Shadow stack+S   0b1001
  */
 #define MMUIdx_U            0
 #define MMUIdx_S            1
 #define MMUIdx_S_SUM        2
 #define MMUIdx_M            3
 #define MMU_2STAGE_BIT      (1 << 2)
+#define MMU_IDX_SS_WRITE    (1 << 3)
 
 static inline int mmuidx_priv(int mmu_idx)
 {
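
Editor's note (not part of the patch): the fragment below is a minimal,
standalone sketch of the permission rewrite that the new switch in
get_physical_address() performs, for readers who want to follow the logic
outside the diff context. The names effective_pte_rwx and XLATE_* are
invented for illustration, the PTE_* values are placeholders rather than
QEMU's definitions, and the cpu_get_bcfien(env) && first_stage check from
the real code is collapsed into a single bcfi_enabled flag.

#include <stdbool.h>

/* Placeholder permission bits; QEMU's real PTE_* values differ. */
#define PTE_R (1u << 1)
#define PTE_W (1u << 2)
#define PTE_X (1u << 3)

enum xlate_result { XLATE_OK, XLATE_FAIL };

/*
 * Mirror of the reserved-encoding handling added by the patch:
 *  - W=1,R=0,X=0 is a shadow stack page when bcfi is enabled; it is
 *    readable everywhere but writable only through the shadow-stack
 *    MMU index (MMU_IDX_SS_WRITE in the patch).
 *  - The same encoding stays reserved when bcfi is disabled.
 *  - A shadow-stack access to a read-only page always fails.
 */
static enum xlate_result effective_pte_rwx(unsigned pte_rwx, bool bcfi_enabled,
                                           bool is_sstack_idx,
                                           unsigned *rwx, bool *sstack_page)
{
    *rwx = pte_rwx & (PTE_R | PTE_W | PTE_X);
    *sstack_page = false;

    switch (*rwx) {
    case PTE_W | PTE_X:
        return XLATE_FAIL;                 /* reserved in all cases */
    case PTE_W:
        if (bcfi_enabled) {
            *sstack_page = true;
            *rwx = is_sstack_idx ? (PTE_R | PTE_W) : PTE_R;
            return XLATE_OK;
        }
        return XLATE_FAIL;                 /* reserved without zicfiss */
    case PTE_R:
        if (is_sstack_idx) {
            return XLATE_FAIL;             /* shadow-stack access to RO page */
        }
        return XLATE_OK;
    default:
        return XLATE_OK;                   /* other encodings unchanged */
    }
}

In the actual patch the failing cases above surface as TRANSLATE_FAIL (a
page fault), while a later access-check failure on a shadow stack page is
reported as TRANSLATE_PMP_FAIL (an access fault), matching the commit
message. A user-mode shadow stack access, for example, uses MMU index
MMUIdx_U | MMU_IDX_SS_WRITE, i.e. 0b1000, per the table added to
internals.h.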