From patchwork Tue Apr 19 13:39:27 2016
X-Patchwork-Submitter: Alvise Rigo
X-Patchwork-Id: 612155
From: Alvise Rigo
To: qemu-devel@nongnu.org, mttcg@listserver.greensocs.com
Cc: Peter Crosthwaite, claudio.fontana@huawei.com, Alvise Rigo,
    serge.fdrv@gmail.com, pbonzini@redhat.com, jani.kokkonen@huawei.com,
    tech@virtualopensystems.com, alex.bennee@linaro.org, rth@twiddle.net
Date: Tue, 19 Apr 2016 15:39:27 +0200
Message-Id: <1461073171-22953-11-git-send-email-a.rigo@virtualopensystems.com>
In-Reply-To: <1461073171-22953-1-git-send-email-a.rigo@virtualopensystems.com>
References: <1461073171-22953-1-git-send-email-a.rigo@virtualopensystems.com>
Subject: [Qemu-devel] [RFC v8 10/14] softmmu: Support MMIO exclusive accesses

Enable exclusive accesses when the MMIO flag is set in the TLB entry.

When an LL access is made to MMIO memory, we treat it differently from a
RAM access in that we do not rely on the EXCL bitmap to flag the page as
exclusive. In fact, we do not even need the TLB_EXCL flag to force the
slow path, since for MMIO it is always forced anyway.

As in the RAM case, MMIO exclusive ranges also have to be protected from
other CPUs' accesses. To do that, we flag the accessed MemoryRegion to
mark that an exclusive access has been performed and has not concluded
yet. This flag forces the other CPUs to invalidate the exclusive range in
case of collision: essentially, it serves the same purpose as TLB_EXCL
does for TLB entries referring to exclusive memory.
Suggested-by: Jani Kokkonen
Suggested-by: Claudio Fontana
Signed-off-by: Alvise Rigo
---
 cputlb.c                |  7 +++++--
 include/exec/memory.h   |  1 +
 softmmu_llsc_template.h | 11 +++++++----
 softmmu_template.h      | 22 ++++++++++++++++++++++
 4 files changed, 35 insertions(+), 6 deletions(-)

diff --git a/cputlb.c b/cputlb.c
index e5df3a5..3cf40a3 100644
--- a/cputlb.c
+++ b/cputlb.c
@@ -29,7 +29,6 @@
 #include "exec/memory-internal.h"
 #include "exec/ram_addr.h"
 #include "tcg/tcg.h"
-#include "hw/hw.h"
 
 //#define DEBUG_TLB
 //#define DEBUG_TLB_CHECK
@@ -508,9 +507,10 @@ static inline void excl_history_put_addr(hwaddr addr)
 /* For every vCPU compare the exclusive address and reset it in case of a
  * match. Since only one vCPU is running at once, no lock has to be held to
  * guard this operation. */
-static inline void reset_other_cpus_colliding_ll_addr(hwaddr addr, hwaddr size)
+static inline bool reset_other_cpus_colliding_ll_addr(hwaddr addr, hwaddr size)
 {
     CPUState *cpu;
+    bool ret = false;
 
     CPU_FOREACH(cpu) {
         if (current_cpu != cpu &&
@@ -520,8 +520,11 @@ static inline void reset_other_cpus_colliding_ll_addr(hwaddr addr, hwaddr size)
                             cpu->excl_protected_range.begin,
                             addr, size)) {
             cpu->excl_protected_range.begin = EXCLUSIVE_RESET_ADDR;
+            ret = true;
         }
     }
+
+    return ret;
 }
 
 #define MMUSUFFIX _mmu
diff --git a/include/exec/memory.h b/include/exec/memory.h
index 71e0480..bacb3ad 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -171,6 +171,7 @@ struct MemoryRegion {
     bool rom_device;
     bool flush_coalesced_mmio;
     bool global_locking;
+    bool pending_excl_access; /* A vCPU issued an exclusive access */
     uint8_t dirty_log_mask;
     ram_addr_t ram_addr;
     Object *owner;
diff --git a/softmmu_llsc_template.h b/softmmu_llsc_template.h
index 1e24fec..ca55502 100644
--- a/softmmu_llsc_template.h
+++ b/softmmu_llsc_template.h
@@ -84,15 +84,18 @@ WORD_TYPE helper_ldlink_name(CPUArchState *env, target_ulong addr,
                 }
             }
         }
+        /* For this vCPU, just update the TLB entry, no need to flush. */
+        env->tlb_table[mmu_idx][index].addr_write |= TLB_EXCL;
     } else {
-        hw_error("EXCL accesses to MMIO regions not supported yet.");
+        /* Set a pending exclusive access in the MemoryRegion */
+        MemoryRegion *mr = iotlb_to_region(this_cpu,
+                                           env->iotlb[mmu_idx][index].addr,
+                                           env->iotlb[mmu_idx][index].attrs);
+        mr->pending_excl_access = true;
     }
 
     cc->cpu_set_excl_protected_range(this_cpu, hw_addr, DATA_SIZE);
 
-    /* For this vCPU, just update the TLB entry, no need to flush. */
-    env->tlb_table[mmu_idx][index].addr_write |= TLB_EXCL;
-
     /* From now on we are in LL/SC context */
     this_cpu->ll_sc_context = true;
diff --git a/softmmu_template.h b/softmmu_template.h
index 2934a0c..2dc5e01 100644
--- a/softmmu_template.h
+++ b/softmmu_template.h
@@ -360,6 +360,28 @@ static inline void glue(io_write, SUFFIX)(CPUArchState *env,
     MemoryRegion *mr = iotlb_to_region(cpu, physaddr, iotlbentry->attrs);
 
     physaddr = (physaddr & TARGET_PAGE_MASK) + addr;
+
+    /* While for normal RAM accesses we define exclusive memory at TLBEntry
+     * granularity, for MMIO memory we use a MemoryRegion granularity.
+     * The pending_excl_access flag is the analogous of TLB_EXCL. */
+    if (unlikely(mr->pending_excl_access)) {
+        if (cpu->excl_succeeded) {
+            /* This SC access finalizes the LL/SC pair, thus the MemoryRegion
+             * has no pending exclusive access anymore.
+             * N.B.: Here excl_succeeded == true means that this access
+             * comes from an exclusive instruction. */
+            MemoryRegion *mr = iotlb_to_region(cpu, iotlbentry->addr,
+                                               iotlbentry->attrs);
+            mr->pending_excl_access = false;
+        } else {
+            /* This is a normal MMIO write access. Check if it collides
+             * with an existing exclusive range. */
+            if (reset_other_cpus_colliding_ll_addr(physaddr, 1 << SHIFT)) {
+                mr->pending_excl_access = false;
+            }
+        }
+    }
+
     if (mr != &io_mem_rom && mr != &io_mem_notdirty && !cpu->can_do_io) {
         cpu_io_recompile(cpu, retaddr);
     }