From patchwork Wed May 6 15:38:07 2015
X-Patchwork-Submitter: Alvise Rigo
X-Patchwork-Id: 469000
From: Alvise Rigo
To: qemu-devel@nongnu.org
Cc: mttcg@listserver.greensocs.com, jani.kokkonen@huawei.com,
 tech@virtualopensystems.com, claudio.fontana@huawei.com
Date: Wed, 6 May 2015 17:38:07 +0200
Message-Id: <1430926687-25875-6-git-send-email-a.rigo@virtualopensystems.com>
In-Reply-To: <1430926687-25875-1-git-send-email-a.rigo@virtualopensystems.com>
References: <1430926687-25875-1-git-send-email-a.rigo@virtualopensystems.com>
X-Mailer: git-send-email 2.4.0
Subject: [Qemu-devel] [RFC 5/5] target-arm: translate: implement qemu_ldlink and qemu_stcond ops

Implement the ldrex and strex instructions on top of TCG's qemu_ldlink
and qemu_stcond operations. For the time being, only the 32-bit variants
of the instructions are supported.
Suggested-by: Jani Kokkonen
Suggested-by: Claudio Fontana
Signed-off-by: Alvise Rigo
---
 target-arm/translate.c |  94 ++++++++++++++++++++++++++++++++++++++++++-
 tcg/arm/tcg-target.c   | 105 ++++++++++++++++++++++++++++++++++++-------------
 2 files changed, 170 insertions(+), 29 deletions(-)

diff --git a/target-arm/translate.c b/target-arm/translate.c
index 9116529..3b209a5 100644
--- a/target-arm/translate.c
+++ b/target-arm/translate.c
@@ -956,6 +956,18 @@ DO_GEN_ST(8, MO_UB)
 DO_GEN_ST(16, MO_TEUW)
 DO_GEN_ST(32, MO_TEUL)
 
+/* Load/Store exclusive generators (always unsigned) */
+static inline void gen_aa32_ldex32(TCGv_i32 val, TCGv_i32 addr, int index)
+{
+    tcg_gen_qemu_ldlink_i32(val, addr, index, MO_TEUL);
+}
+
+static inline void gen_aa32_stex32(TCGv_i32 is_excl, TCGv_i32 val,
+                                   TCGv_i32 addr, int index)
+{
+    tcg_gen_qemu_stcond_i32(is_excl, val, addr, index, MO_TEUL);
+}
+
 static inline void gen_set_pc_im(DisasContext *s, target_ulong val)
 {
     tcg_gen_movi_i32(cpu_R[15], val);
@@ -7420,6 +7432,26 @@ static void gen_load_exclusive(DisasContext *s, int rt, int rt2,
     tcg_gen_extu_i32_i64(cpu_exclusive_addr, addr);
 }
 
+static void gen_load_exclusive_multi(DisasContext *s, int rt, int rt2,
+                                     TCGv_i32 addr, int size)
+{
+    TCGv_i32 tmp = tcg_temp_new_i32();
+
+    switch (size) {
+    case 0:
+    case 1:
+        abort();
+    case 2:
+        gen_aa32_ldex32(tmp, addr, get_mem_index(s));
+        break;
+    case 3:
+    default:
+        abort();
+    }
+
+    store_reg(s, rt, tmp);
+}
+
 static void gen_clrex(DisasContext *s)
 {
     tcg_gen_movi_i64(cpu_exclusive_addr, -1);
@@ -7518,6 +7550,63 @@ static void gen_store_exclusive(DisasContext *s, int rd, int rt, int rt2,
     gen_set_label(done_label);
     tcg_gen_movi_i64(cpu_exclusive_addr, -1);
 }
+
+static void gen_store_exclusive_multi(DisasContext *s, int rd, int rt, int rt2,
+                                      TCGv_i32 addr, int size)
+{
+    TCGv_i32 tmp;
+    TCGv_i32 is_dirty;
+    TCGv_i32 success;
+    TCGLabel *done_label;
+    TCGLabel *fail_label;
+
+    fail_label = gen_new_label();
+    done_label = gen_new_label();
+
+    tmp = tcg_temp_new_i32();
+    is_dirty = tcg_temp_new_i32();
+    success = tcg_temp_new_i32();
+    switch (size) {
+    case 0:
+    case 1:
+        abort();
+        break;
+    case 2:
+        gen_aa32_stex32(is_dirty, tmp, addr, get_mem_index(s));
+        break;
+    case 3:
+    default:
+        abort();
+    }
+
+    tcg_temp_free_i32(tmp);
+
+    /* Check if the store conditional has to fail */
+    tcg_gen_movi_i32(success, 1);
+    tcg_gen_brcond_i32(TCG_COND_EQ, is_dirty, success, fail_label);
+    tcg_temp_free_i32(success);
+    tcg_temp_free_i32(is_dirty);
+
+    tmp = load_reg(s, rt);
+    switch (size) {
+    case 0:
+    case 1:
+        abort();
+    case 2:
+        gen_aa32_st32(tmp, addr, get_mem_index(s));
+        break;
+    case 3:
+    default:
+        abort();
+    }
+    tcg_temp_free_i32(tmp);
+
+    tcg_gen_movi_i32(cpu_R[rd], 0); /* is_dirty = 0 */
+    tcg_gen_br(done_label);
+    gen_set_label(fail_label);
+    tcg_gen_movi_i32(cpu_R[rd], 1); /* is_dirty = 1 */
+    gen_set_label(done_label);
+}
 #endif
 
 /* gen_srs:
@@ -8365,7 +8454,7 @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
             } else if (insn & (1 << 20)) {
                 switch (op1) {
                 case 0: /* ldrex */
-                    gen_load_exclusive(s, rd, 15, addr, 2);
+                    gen_load_exclusive_multi(s, rd, 15, addr, 2);
                     break;
                 case 1: /* ldrexd */
                     gen_load_exclusive(s, rd, rd + 1, addr, 3);
@@ -8383,7 +8472,8 @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
                 rm = insn & 0xf;
                 switch (op1) {
                 case 0: /* strex */
-                    gen_store_exclusive(s, rd, rm, 15, addr, 2);
+                    gen_store_exclusive_multi(s, rd, rm, 15,
+                                              addr, 2);
                     break;
                 case 1: /* strexd */
                     gen_store_exclusive(s, rd, rm, rm + 1, addr, 3);
diff --git a/tcg/arm/tcg-target.c b/tcg/arm/tcg-target.c
index 01e6fbf..1a301be 100644
--- a/tcg/arm/tcg-target.c
+++ b/tcg/arm/tcg-target.c
@@ -1069,6 +1069,11 @@ static void * const qemu_ld_helpers[16] = {
     [MO_BESL] = helper_be_ldul_mmu,
 };
 
+/* LoadLink helpers, only unsigned */
+static void * const qemu_ldex_helpers[16] = {
+    [MO_LEUL] = helper_le_ldlinkul_mmu,
+};
+
 /* helper signature: helper_ret_st_mmu(CPUState *env, target_ulong addr,
  *                                     uintxx_t val, int mmu_idx, uintptr_t ra)
  */
@@ -1082,6 +1087,11 @@ static void * const qemu_st_helpers[16] = {
     [MO_BEQ] = helper_be_stq_mmu,
 };
 
+/* StoreConditional helpers, only unsigned */
+static void * const qemu_stex_helpers[16] = {
+    [MO_LEUL] = helper_le_stcondl_mmu,
+};
+
 /* Helper routines for marshalling helper function arguments into
  * the correct registers and stack.
  * argreg is where we want to put this argument, arg is the argument itself.
@@ -1222,6 +1232,7 @@ static TCGReg tcg_out_tlb_read(TCGContext *s, TCGReg addrlo, TCGReg addrhi,
    path for a load or store, so that we can later generate the correct
    helper code.  */
 static void add_qemu_ldst_label(TCGContext *s, bool is_ld, TCGMemOp opc,
+                                bool is_excl, TCGReg llsc_success,
                                 TCGReg datalo, TCGReg datahi, TCGReg addrlo,
                                 TCGReg addrhi, int mem_index,
                                 tcg_insn_unit *raddr, tcg_insn_unit *label_ptr)
@@ -1229,6 +1240,8 @@ static void add_qemu_ldst_label(TCGContext *s, bool is_ld, TCGMemOp opc,
     TCGLabelQemuLdst *label = new_ldst_label(s);
 
     label->is_ld = is_ld;
+    label->is_llsc = is_excl;
+    label->llsc_success = llsc_success;
     label->opc = opc;
     label->datalo_reg = datalo;
     label->datahi_reg = datahi;
@@ -1259,12 +1272,16 @@ static void tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
     /* For armv6 we can use the canonical unsigned helpers and minimize
        icache usage.  For pre-armv6, use the signed helpers since we do
        not have a single insn sign-extend.  */
-    if (use_armv6_instructions) {
-        func = qemu_ld_helpers[opc & ~MO_SIGN];
+    if (lb->is_llsc) {
+        func = qemu_ldex_helpers[opc];
     } else {
-        func = qemu_ld_helpers[opc];
-        if (opc & MO_SIGN) {
-            opc = MO_UL;
+        if (use_armv6_instructions) {
+            func = qemu_ld_helpers[opc & ~MO_SIGN];
+        } else {
+            func = qemu_ld_helpers[opc];
+            if (opc & MO_SIGN) {
+                opc = MO_UL;
+            }
         }
     }
     tcg_out_call(s, func);
@@ -1336,7 +1353,14 @@ static void tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
     argreg = tcg_out_arg_reg32(s, argreg, TCG_REG_R14);
 
     /* Tail-call to the helper, which will return to the fast path.  */
-    tcg_out_goto(s, COND_AL, qemu_st_helpers[opc]);
+    if (lb->is_llsc) {
+        tcg_out_call(s, qemu_stex_helpers[opc]);
+        /* Save the output of the StoreConditional */
+        tcg_out_mov_reg(s, COND_AL, lb->llsc_success, TCG_REG_R0);
+        tcg_out_goto(s, COND_AL, lb->raddr);
+    } else {
+        tcg_out_goto(s, COND_AL, qemu_st_helpers[opc]);
+    }
 }
 #endif /* SOFTMMU */
 
@@ -1460,7 +1484,8 @@ static inline void tcg_out_qemu_ld_direct(TCGContext *s, TCGMemOp opc,
     }
 }
 
-static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is64)
+static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is64,
+                            bool isLoadLink)
 {
     TCGReg addrlo, datalo, datahi, addrhi __attribute__((unused));
     TCGMemOp opc;
@@ -1478,17 +1503,23 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is64)
 
 #ifdef CONFIG_SOFTMMU
     mem_index = *args;
-    addend = tcg_out_tlb_read(s, addrlo, addrhi, opc & MO_SIZE, mem_index, 1);
 
     /* This a conditional BL only to load a pointer within this opcode into LR
-       for the slow path.  We will not be using the value for a tail call.  */
-    label_ptr = s->code_ptr;
-    tcg_out_bl_noaddr(s, COND_NE);
-
-    tcg_out_qemu_ld_index(s, opc, datalo, datahi, addrlo, addend);
+       for the slow path.  We will not be using the value for a tail call.
+       In the context of a LoadLink instruction, we don't check the TLB but we
+       always follow the slow path.  */
+    if (isLoadLink) {
+        label_ptr = s->code_ptr;
+        tcg_out_bl_noaddr(s, COND_AL);
+    } else {
+        addend = tcg_out_tlb_read(s, addrlo, addrhi, opc & MO_SIZE, mem_index, 1);
+        label_ptr = s->code_ptr;
+        tcg_out_bl_noaddr(s, COND_NE);
+        tcg_out_qemu_ld_index(s, opc, datalo, datahi, addrlo, addend);
+    }
 
-    add_qemu_ldst_label(s, true, opc, datalo, datahi, addrlo, addrhi,
-                        mem_index, s->code_ptr, label_ptr);
+    add_qemu_ldst_label(s, true, opc, isLoadLink, 0, datalo, datahi, addrlo,
+                        addrhi, mem_index, s->code_ptr, label_ptr);
 #else /* !CONFIG_SOFTMMU */
     if (GUEST_BASE) {
         tcg_out_movi(s, TCG_TYPE_PTR, TCG_REG_TMP, GUEST_BASE);
@@ -1589,9 +1620,11 @@ static inline void tcg_out_qemu_st_direct(TCGContext *s, TCGMemOp opc,
     }
 }
 
-static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, bool is64)
+static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, bool is64,
+                            bool isStoreCond)
 {
     TCGReg addrlo, datalo, datahi, addrhi __attribute__((unused));
+    TCGReg llsc_success;
     TCGMemOp opc;
 #ifdef CONFIG_SOFTMMU
     int mem_index;
@@ -1599,6 +1632,8 @@ static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, bool is64)
     tcg_insn_unit *label_ptr;
 #endif
 
+    llsc_success = (isStoreCond ? *args++ : 0);
+
     datalo = *args++;
     datahi = (is64 ? *args++ : 0);
     addrlo = *args++;
@@ -1607,16 +1642,24 @@ static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, bool is64)
 
 #ifdef CONFIG_SOFTMMU
     mem_index = *args;
-    addend = tcg_out_tlb_read(s, addrlo, addrhi, opc & MO_SIZE, mem_index, 0);
 
-    tcg_out_qemu_st_index(s, COND_EQ, opc, datalo, datahi, addrlo, addend);
+    if (isStoreCond) {
+        /* Always follow the slow-path for an exclusive access */
+        label_ptr = s->code_ptr;
+        tcg_out_bl_noaddr(s, COND_AL);
+    } else {
+        addend = tcg_out_tlb_read(s, addrlo, addrhi, opc & MO_SIZE, mem_index, 0);
 
-    /* The conditional call must come last, as we're going to return here.  */
-    label_ptr = s->code_ptr;
-    tcg_out_bl_noaddr(s, COND_NE);
+        tcg_out_qemu_st_index(s, COND_EQ, opc, datalo, datahi, addrlo, addend);
 
-    add_qemu_ldst_label(s, false, opc, datalo, datahi, addrlo, addrhi,
-                        mem_index, s->code_ptr, label_ptr);
+        /* The conditional call must come last, as we're going to return here.  */
+        label_ptr = s->code_ptr;
+        tcg_out_bl_noaddr(s, COND_NE);
+    }
+
+    add_qemu_ldst_label(s, false, opc, isStoreCond, llsc_success, datalo,
+                        datahi, addrlo, addrhi, mem_index, s->code_ptr,
+                        label_ptr);
 #else /* !CONFIG_SOFTMMU */
     if (GUEST_BASE) {
         tcg_out_movi(s, TCG_TYPE_PTR, TCG_REG_TMP, GUEST_BASE);
@@ -1859,16 +1902,22 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc,
         break;
 
     case INDEX_op_qemu_ld_i32:
-        tcg_out_qemu_ld(s, args, 0);
+        tcg_out_qemu_ld(s, args, 0, 0);
+        break;
+    case INDEX_op_qemu_ldlink_i32:
+        tcg_out_qemu_ld(s, args, 0, 1); /* LoadLink */
         break;
     case INDEX_op_qemu_ld_i64:
-        tcg_out_qemu_ld(s, args, 1);
+        tcg_out_qemu_ld(s, args, 1, 0);
         break;
     case INDEX_op_qemu_st_i32:
-        tcg_out_qemu_st(s, args, 0);
+        tcg_out_qemu_st(s, args, 0, 0);
+        break;
+    case INDEX_op_qemu_stcond_i32:
+        tcg_out_qemu_st(s, args, 0, 1); /* StoreConditional */
         break;
     case INDEX_op_qemu_st_i64:
-        tcg_out_qemu_st(s, args, 1);
+        tcg_out_qemu_st(s, args, 1, 0);
         break;
 
     case INDEX_op_bswap16_i32:
@@ -1952,8 +2001,10 @@ static const TCGTargetOpDef arm_op_defs[] = {
 #if TARGET_LONG_BITS == 32
     { INDEX_op_qemu_ld_i32, { "r", "l" } },
+    { INDEX_op_qemu_ldlink_i32, { "r", "l" } },
     { INDEX_op_qemu_ld_i64, { "r", "r", "l" } },
     { INDEX_op_qemu_st_i32, { "s", "s" } },
+    { INDEX_op_qemu_stcond_i32, { "r", "s", "s" } },
     { INDEX_op_qemu_st_i64, { "s", "s", "s" } },
 #else
     { INDEX_op_qemu_ld_i32, { "r", "l", "l" } },