From patchwork Mon Jun 15 11:51:25 2015
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Alvise Rigo
X-Patchwork-Id: 484226
From: Alvise Rigo
To: qemu-devel@nongnu.org
Date: Mon, 15 Jun 2015 13:51:25 +0200
Message-Id: <1434369088-15076-5-git-send-email-a.rigo@virtualopensystems.com>
X-Mailer: git-send-email 2.4.3
In-Reply-To: <1434369088-15076-1-git-send-email-a.rigo@virtualopensystems.com>
References: <1434369088-15076-1-git-send-email-a.rigo@virtualopensystems.com>
Cc: mttcg@listserver.greensocs.com, claudio.fontana@huawei.com,
    cota@braap.org, jani.kokkonen@huawei.com, tech@virtualopensystems.com,
    alex.bennee@linaro.org, rth@twiddle.net
Subject: [Qemu-devel] [RFC v2 4/7] softmmu: Add helpers for a new slow-path

The new helpers rely on the legacy ones to perform the actual read/write.

The StoreConditional helper (helper_le_stcond_name) returns 1 if the store
has to fail due to a concurrent access to the same page by another vCPU.
A 'concurrent access' can be a store made by *any* vCPU (although some
implementations allow stores made by the vCPU that issued the LoadLink).

These helpers also update the TLB entry of the page involved in the LL/SC,
so that all subsequent accesses made by any vCPU will follow the slow path.
In true multi-threaded execution, these helpers will need to temporarily
pause the other vCPUs in order to update (flush) the TLB caches accordingly.

Suggested-by: Jani Kokkonen
Suggested-by: Claudio Fontana
Signed-off-by: Alvise Rigo
---
 cputlb.c                |   3 +
 softmmu_llsc_template.h | 155 ++++++++++++++++++++++++++++++++++++++++++++++++
 softmmu_template.h      |   4 ++
 tcg/tcg.h               |  18 ++++++
 4 files changed, 180 insertions(+)
 create mode 100644 softmmu_llsc_template.h

diff --git a/cputlb.c b/cputlb.c
index 630c11c..1145a33 100644
--- a/cputlb.c
+++ b/cputlb.c
@@ -394,6 +394,8 @@ tb_page_addr_t get_page_addr_code(CPUArchState *env1, target_ulong addr)

 #define MMUSUFFIX _mmu

+/* Generates LoadLink/StoreConditional helpers in softmmu_template.h */
+#define GEN_EXCLUSIVE_HELPERS
 #define SHIFT 0
 #include "softmmu_template.h"

@@ -406,6 +408,7 @@ tb_page_addr_t get_page_addr_code(CPUArchState *env1, target_ulong addr)
 #define SHIFT 3
 #include "softmmu_template.h"
 #undef MMUSUFFIX
+#undef GEN_EXCLUSIVE_HELPERS

 #define MMUSUFFIX _cmmu
 #undef GETPC_ADJ
diff --git a/softmmu_llsc_template.h b/softmmu_llsc_template.h
new file mode 100644
index 0000000..81e9d8e
--- /dev/null
+++ b/softmmu_llsc_template.h
@@ -0,0 +1,155 @@
+/*
+ * Software MMU support (exclusive load/store operations)
+ *
+ * Generate helpers used by TCG for qemu_ldlink/stcond ops.
+ *
+ * Included from softmmu_template.h only.
+ *
+ * Copyright (c) 2015 Virtual Open Systems
+ *
+ * Authors:
+ *  Alvise Rigo
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, see .
+ */
+
+/*
+ * TODO configurations not implemented:
+ *     - Signed/Unsigned Big-endian
+ *     - Signed Little-endian
+ */
+
+#if DATA_SIZE > 1
+#define helper_le_ldlink_name glue(glue(helper_le_ldlink, USUFFIX), MMUSUFFIX)
+#define helper_le_stcond_name glue(glue(helper_le_stcond, SUFFIX), MMUSUFFIX)
+#else
+#define helper_le_ldlink_name glue(glue(helper_ret_ldlink, USUFFIX), MMUSUFFIX)
+#define helper_le_stcond_name glue(glue(helper_ret_stcond, SUFFIX), MMUSUFFIX)
+#endif
+
+/* helpers from cpu_ldst.h, byte-order independent versions */
+#if DATA_SIZE > 1
+#define helper_ld_legacy glue(glue(helper_le_ld, USUFFIX), MMUSUFFIX)
+#define helper_st_legacy glue(glue(helper_le_st, SUFFIX), MMUSUFFIX)
+#else
+#define helper_ld_legacy glue(glue(helper_ret_ld, USUFFIX), MMUSUFFIX)
+#define helper_st_legacy glue(glue(helper_ret_st, SUFFIX), MMUSUFFIX)
+#endif
+
+#define is_write_tlb_entry_set(env, page, index)                             \
+({                                                                           \
+    (addr & TARGET_PAGE_MASK)                                                \
+         == ((env->tlb_table[mmu_idx][index].addr_write) &                   \
+             (TARGET_PAGE_MASK | TLB_INVALID_MASK));                         \
+})
+
+#define EXCLUSIVE_RESET_ADDR ULLONG_MAX
+
+WORD_TYPE helper_le_ldlink_name(CPUArchState *env, target_ulong addr,
+                                TCGMemOpIdx oi, uintptr_t retaddr)
+{
+    WORD_TYPE ret;
+    int index;
+    CPUState *cpu;
+    hwaddr hw_addr;
+    unsigned mmu_idx = get_mmuidx(oi);
+
+    /* Use the proper load helper from cpu_ldst.h */
+    ret = helper_ld_legacy(env, addr, mmu_idx, retaddr);
+
+    /* The last legacy access ensures that the TLB and IOTLB entry for 'addr'
+     * have been created. */
+    index = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
+
+    /* hw_addr = hwaddr of the page (i.e. section->mr->ram_addr + xlat)
+     * plus the offset (i.e. addr & ~TARGET_PAGE_MASK) */
+    hw_addr = (env->iotlb[mmu_idx][index].addr & TARGET_PAGE_MASK) + addr;
+
+    /* Set the exclusive-protected hwaddr. */
+    env->excl_protected_hwaddr = hw_addr;
+    env->ll_sc_context = true;
+
+    /* No need to mask hw_addr with TARGET_PAGE_MASK since
+     * cpu_physical_memory_excl_is_dirty() will take care of that. */
+    if (cpu_physical_memory_excl_is_dirty(hw_addr)) {
+        cpu_physical_memory_clear_excl_dirty(hw_addr);
+
+        /* Invalidate the TLB entry for the other processors. The next TLB
+         * entries for this page will have the TLB_EXCL flag set. */
+        CPU_FOREACH(cpu) {
+            if (cpu != current_cpu) {
+                tlb_flush(cpu, 1);
+            }
+        }
+    }
+
+    /* For this vCPU, just update the TLB entry; no need to flush. */
+    env->tlb_table[mmu_idx][index].addr_write |= TLB_EXCL;
+
+    return ret;
+}
+
+WORD_TYPE helper_le_stcond_name(CPUArchState *env, target_ulong addr,
+                                DATA_TYPE val, TCGMemOpIdx oi,
+                                uintptr_t retaddr)
+{
+    WORD_TYPE ret;
+    int index;
+    hwaddr hw_addr;
+    unsigned mmu_idx = get_mmuidx(oi);
+
+    /* If the TLB entry is not the right one, create it. */
+    index = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
+    if (!is_write_tlb_entry_set(env, addr, index)) {
+        tlb_fill(ENV_GET_CPU(env), addr, MMU_DATA_STORE, mmu_idx, retaddr);
+    }
+
+    /* hw_addr = hwaddr of the page (i.e. section->mr->ram_addr + xlat)
+     * plus the offset (i.e. addr & ~TARGET_PAGE_MASK) */
+    hw_addr = (env->iotlb[mmu_idx][index].addr & TARGET_PAGE_MASK) + addr;
+
+    if (!env->ll_sc_context) {
+        /* No LoadLink has been set, the StoreCond has to fail. */
+        return 1;
+    }
+
+    env->ll_sc_context = 0;
+
+    if (cpu_physical_memory_excl_is_dirty(hw_addr)) {
+        /* Another vCPU has accessed the memory after the LoadLink. */
+        ret = 1;
+    } else {
+        helper_st_legacy(env, addr, val, mmu_idx, retaddr);
+
+        /* The StoreConditional succeeded. */
+        ret = 0;
+    }
+
+    env->tlb_table[mmu_idx][index].addr_write &= ~TLB_EXCL;
+    env->excl_protected_hwaddr = EXCLUSIVE_RESET_ADDR;
+    /* It's likely that the page will be used again for exclusive accesses;
+     * for this reason we don't flush any TLB cache, at the price of some
+     * additional slow paths, and we don't set the page bit as dirty.
+     * The EXCL TLB entries will not remain there forever, since they will
+     * eventually be evicted to serve another guest page; when this happens
+     * we also remove the dirty bit (see cputlb.c). */
+
+    return ret;
+}
+
+#undef helper_le_ldlink_name
+#undef helper_le_stcond_name
+#undef helper_ld_legacy
+#undef helper_st_legacy
diff --git a/softmmu_template.h b/softmmu_template.h
index f1782f6..2a61284 100644
--- a/softmmu_template.h
+++ b/softmmu_template.h
@@ -601,6 +601,10 @@ glue(glue(helper_st, SUFFIX), MMUSUFFIX)(CPUArchState *env, target_ulong addr,

 #endif /* !defined(SOFTMMU_CODE_ACCESS) */

+#ifdef GEN_EXCLUSIVE_HELPERS
+#include "softmmu_llsc_template.h"
+#endif
+
 #undef READ_ACCESS_TYPE
 #undef SHIFT
 #undef DATA_TYPE
diff --git a/tcg/tcg.h b/tcg/tcg.h
index 41e4869..d0180fe 100644
--- a/tcg/tcg.h
+++ b/tcg/tcg.h
@@ -955,6 +955,15 @@ tcg_target_ulong helper_be_ldul_mmu(CPUArchState *env, target_ulong addr,
                                     TCGMemOpIdx oi, uintptr_t retaddr);
 uint64_t helper_be_ldq_mmu(CPUArchState *env, target_ulong addr,
                            TCGMemOpIdx oi, uintptr_t retaddr);
+/* Exclusive variants */
+tcg_target_ulong helper_ret_ldlinkub_mmu(CPUArchState *env, target_ulong addr,
+                                         int mmu_idx, uintptr_t retaddr);
+tcg_target_ulong helper_le_ldlinkuw_mmu(CPUArchState *env, target_ulong addr,
+                                        int mmu_idx, uintptr_t retaddr);
+tcg_target_ulong helper_le_ldlinkul_mmu(CPUArchState *env, target_ulong addr,
+                                        int mmu_idx, uintptr_t retaddr);
+uint64_t helper_le_ldlinkq_mmu(CPUArchState *env, target_ulong addr,
+                               int mmu_idx, uintptr_t retaddr);

 /* Value sign-extended to tcg register size. */
 tcg_target_ulong helper_ret_ldsb_mmu(CPUArchState *env, target_ulong addr,
@@ -982,6 +991,15 @@ void helper_be_stl_mmu(CPUArchState *env, target_ulong addr, uint32_t val,
                        TCGMemOpIdx oi, uintptr_t retaddr);
 void helper_be_stq_mmu(CPUArchState *env, target_ulong addr, uint64_t val,
                        TCGMemOpIdx oi, uintptr_t retaddr);
+/* Exclusive variants */
+tcg_target_ulong helper_ret_stcondb_mmu(CPUArchState *env, target_ulong addr,
+                                        uint8_t val, int mmu_idx,
+                                        uintptr_t retaddr);
+tcg_target_ulong helper_le_stcondw_mmu(CPUArchState *env, target_ulong addr,
+                                       uint16_t val, int mmu_idx,
+                                       uintptr_t retaddr);
+tcg_target_ulong helper_le_stcondl_mmu(CPUArchState *env, target_ulong addr,
+                                       uint32_t val, int mmu_idx,
+                                       uintptr_t retaddr);
+uint64_t helper_le_stcondq_mmu(CPUArchState *env, target_ulong addr,
+                               uint64_t val, int mmu_idx, uintptr_t retaddr);

 /* Temporary aliases until backends are converted. */
 #ifdef TARGET_WORDS_BIGENDIAN