From patchwork Wed Apr 7 17:51:19 2010
X-Patchwork-Submitter: Aurelien Jarno
X-Patchwork-Id: 49633
From: Aurelien Jarno <aurelien@aurel32.net>
To: qemu-devel@nongnu.org
Date: Wed, 7 Apr 2010 19:51:19 +0200
Message-Id: <1270662685-7379-13-git-send-email-aurelien@aurel32.net>
X-Mailer: git-send-email 1.7.0.4
In-Reply-To:
 <1270662685-7379-1-git-send-email-aurelien@aurel32.net>
References: <1270662685-7379-1-git-send-email-aurelien@aurel32.net>
Cc: Andrzej Zaborowski, Aurelien Jarno
Subject: [Qemu-devel] [PATCH 12/18] tcg/arm: remove conditional argument for qemu_ld/st
List-Id: qemu-devel.nongnu.org

While it makes sense to pass a condition argument to the tcg_out_*()
functions, since the ARM architecture allows conditional execution, it
doesn't make sense for the qemu_ld/st functions: they already use
comparison instructions and conditional execution internally, so a
second level of conditional execution is not possible.

Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
---
 tcg/arm/tcg-target.c |  100 ++++++++++++++++++++++++-------------------------
 1 files changed, 49 insertions(+), 51 deletions(-)

diff --git a/tcg/arm/tcg-target.c b/tcg/arm/tcg-target.c
index aec1183..d24a245 100644
--- a/tcg/arm/tcg-target.c
+++ b/tcg/arm/tcg-target.c
@@ -886,8 +886,7 @@ static void *qemu_st_helpers[4] = {
 
 #define TLB_SHIFT (CPU_TLB_ENTRY_BITS + CPU_TLB_BITS)
 
-static inline void tcg_out_qemu_ld(TCGContext *s, int cond,
-                const TCGArg *args, int opc)
+static inline void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, int opc)
 {
     int addr_reg, data_reg, data_reg2;
 #ifdef CONFIG_SOFTMMU
@@ -983,32 +982,32 @@ static inline void tcg_out_qemu_ld(TCGContext *s, int cond,
 
     /* TODO: move this code to where the constants pool will be */
     if (addr_reg != TCG_REG_R0) {
-        tcg_out_dat_reg(s, cond, ARITH_MOV,
+        tcg_out_dat_reg(s, COND_AL, ARITH_MOV,
                         TCG_REG_R0, 0, addr_reg, SHIFT_IMM_LSL(0));
     }
 # if TARGET_LONG_BITS == 32
-    tcg_out_dat_imm(s, cond, ARITH_MOV, TCG_REG_R1, 0, mem_index);
+    tcg_out_dat_imm(s, COND_AL, ARITH_MOV, TCG_REG_R1, 0, mem_index);
 # else
     if (addr_reg2 != TCG_REG_R1) {
-        tcg_out_dat_reg(s, cond, ARITH_MOV,
+        tcg_out_dat_reg(s, COND_AL, ARITH_MOV,
                         TCG_REG_R1, 0, addr_reg2, SHIFT_IMM_LSL(0));
     }
-    tcg_out_dat_imm(s, cond, ARITH_MOV, TCG_REG_R2, 0, mem_index);
+    tcg_out_dat_imm(s, COND_AL, ARITH_MOV, TCG_REG_R2, 0, mem_index);
 # endif
-    tcg_out_bl(s, cond, (tcg_target_long) qemu_ld_helpers[s_bits] -
+    tcg_out_bl(s, COND_AL, (tcg_target_long) qemu_ld_helpers[s_bits] -
                     (tcg_target_long) s->code_ptr);
 
     switch (opc) {
     case 0 | 4:
-        tcg_out_dat_reg(s, cond, ARITH_MOV,
+        tcg_out_dat_reg(s, COND_AL, ARITH_MOV,
                         TCG_REG_R0, 0, TCG_REG_R0, SHIFT_IMM_LSL(24));
-        tcg_out_dat_reg(s, cond, ARITH_MOV,
+        tcg_out_dat_reg(s, COND_AL, ARITH_MOV,
                         data_reg, 0, TCG_REG_R0, SHIFT_IMM_ASR(24));
         break;
     case 1 | 4:
-        tcg_out_dat_reg(s, cond, ARITH_MOV,
+        tcg_out_dat_reg(s, COND_AL, ARITH_MOV,
                         TCG_REG_R0, 0, TCG_REG_R0, SHIFT_IMM_LSL(16));
-        tcg_out_dat_reg(s, cond, ARITH_MOV,
+        tcg_out_dat_reg(s, COND_AL, ARITH_MOV,
                         data_reg, 0, TCG_REG_R0, SHIFT_IMM_ASR(16));
         break;
     case 0:
@@ -1016,17 +1015,17 @@ static inline void tcg_out_qemu_ld(TCGContext *s, int cond,
     case 2:
     default:
         if (data_reg != TCG_REG_R0) {
-            tcg_out_dat_reg(s, cond, ARITH_MOV,
+            tcg_out_dat_reg(s, COND_AL, ARITH_MOV,
                             data_reg, 0, TCG_REG_R0, SHIFT_IMM_LSL(0));
         }
         break;
     case 3:
         if (data_reg != TCG_REG_R0) {
-            tcg_out_dat_reg(s, cond, ARITH_MOV,
+            tcg_out_dat_reg(s, COND_AL, ARITH_MOV,
                             data_reg, 0, TCG_REG_R0, SHIFT_IMM_LSL(0));
         }
         if (data_reg2 != TCG_REG_R1) {
-            tcg_out_dat_reg(s, cond, ARITH_MOV,
+            tcg_out_dat_reg(s, COND_AL, ARITH_MOV,
                             data_reg2, 0, TCG_REG_R1, SHIFT_IMM_LSL(0));
         }
         break;
@@ -1081,8 +1080,7 @@ static inline void tcg_out_qemu_ld(TCGContext *s, int cond,
 #endif
 }
 
-static inline void tcg_out_qemu_st(TCGContext *s, int cond,
-                const TCGArg *args, int opc)
+static inline void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, int opc)
 {
     int addr_reg, data_reg, data_reg2;
 #ifdef CONFIG_SOFTMMU
@@ -1169,85 +1167,85 @@ static inline void tcg_out_qemu_st(TCGContext *s, int cond,
 
     /* TODO: move this code to where the constants pool will be */
     if (addr_reg != TCG_REG_R0) {
-        tcg_out_dat_reg(s, cond, ARITH_MOV,
+        tcg_out_dat_reg(s, COND_AL, ARITH_MOV,
                         TCG_REG_R0, 0, addr_reg, SHIFT_IMM_LSL(0));
     }
 # if TARGET_LONG_BITS == 32
     switch (opc) {
     case 0:
-        tcg_out_dat_imm(s, cond, ARITH_AND, TCG_REG_R1, data_reg, 0xff);
-        tcg_out_dat_imm(s, cond, ARITH_MOV, TCG_REG_R2, 0, mem_index);
+        tcg_out_dat_imm(s, COND_AL, ARITH_AND, TCG_REG_R1, data_reg, 0xff);
+        tcg_out_dat_imm(s, COND_AL, ARITH_MOV, TCG_REG_R2, 0, mem_index);
         break;
     case 1:
-        tcg_out_dat_reg(s, cond, ARITH_MOV,
+        tcg_out_dat_reg(s, COND_AL, ARITH_MOV,
                         TCG_REG_R1, 0, data_reg, SHIFT_IMM_LSL(16));
-        tcg_out_dat_reg(s, cond, ARITH_MOV,
+        tcg_out_dat_reg(s, COND_AL, ARITH_MOV,
                         TCG_REG_R1, 0, TCG_REG_R1, SHIFT_IMM_LSR(16));
-        tcg_out_dat_imm(s, cond, ARITH_MOV, TCG_REG_R2, 0, mem_index);
+        tcg_out_dat_imm(s, COND_AL, ARITH_MOV, TCG_REG_R2, 0, mem_index);
         break;
     case 2:
         if (data_reg != TCG_REG_R1) {
-            tcg_out_dat_reg(s, cond, ARITH_MOV,
+            tcg_out_dat_reg(s, COND_AL, ARITH_MOV,
                             TCG_REG_R1, 0, data_reg, SHIFT_IMM_LSL(0));
         }
-        tcg_out_dat_imm(s, cond, ARITH_MOV, TCG_REG_R2, 0, mem_index);
+        tcg_out_dat_imm(s, COND_AL, ARITH_MOV, TCG_REG_R2, 0, mem_index);
         break;
     case 3:
         if (data_reg != TCG_REG_R1) {
-            tcg_out_dat_reg(s, cond, ARITH_MOV,
+            tcg_out_dat_reg(s, COND_AL, ARITH_MOV,
                             TCG_REG_R1, 0, data_reg, SHIFT_IMM_LSL(0));
         }
         if (data_reg2 != TCG_REG_R2) {
-            tcg_out_dat_reg(s, cond, ARITH_MOV,
+            tcg_out_dat_reg(s, COND_AL, ARITH_MOV,
                             TCG_REG_R2, 0, data_reg2, SHIFT_IMM_LSL(0));
         }
-        tcg_out_dat_imm(s, cond, ARITH_MOV, TCG_REG_R3, 0, mem_index);
+        tcg_out_dat_imm(s, COND_AL, ARITH_MOV, TCG_REG_R3, 0, mem_index);
         break;
     }
 # else
     if (addr_reg2 != TCG_REG_R1) {
-        tcg_out_dat_reg(s, cond, ARITH_MOV,
+        tcg_out_dat_reg(s, COND_AL, ARITH_MOV,
                         TCG_REG_R1, 0, addr_reg2, SHIFT_IMM_LSL(0));
     }
     switch (opc) {
     case 0:
-        tcg_out_dat_imm(s, cond, ARITH_AND, TCG_REG_R2, data_reg, 0xff);
-        tcg_out_dat_imm(s, cond, ARITH_MOV, TCG_REG_R3, 0, mem_index);
+        tcg_out_dat_imm(s, COND_AL, ARITH_AND, TCG_REG_R2, data_reg, 0xff);
+        tcg_out_dat_imm(s, COND_AL, ARITH_MOV, TCG_REG_R3, 0, mem_index);
         break;
     case 1:
-        tcg_out_dat_reg(s, cond, ARITH_MOV,
+        tcg_out_dat_reg(s, COND_AL, ARITH_MOV,
                         TCG_REG_R2, 0, data_reg, SHIFT_IMM_LSL(16));
-        tcg_out_dat_reg(s, cond, ARITH_MOV,
+        tcg_out_dat_reg(s, COND_AL, ARITH_MOV,
                         TCG_REG_R2, 0, TCG_REG_R2, SHIFT_IMM_LSR(16));
-        tcg_out_dat_imm(s, cond, ARITH_MOV, TCG_REG_R3, 0, mem_index);
+        tcg_out_dat_imm(s, COND_AL, ARITH_MOV, TCG_REG_R3, 0, mem_index);
         break;
     case 2:
         if (data_reg != TCG_REG_R2) {
-            tcg_out_dat_reg(s, cond, ARITH_MOV,
+            tcg_out_dat_reg(s, COND_AL, ARITH_MOV,
                             TCG_REG_R2, 0, data_reg, SHIFT_IMM_LSL(0));
         }
-        tcg_out_dat_imm(s, cond, ARITH_MOV, TCG_REG_R3, 0, mem_index);
+        tcg_out_dat_imm(s, COND_AL, ARITH_MOV, TCG_REG_R3, 0, mem_index);
         break;
     case 3:
-        tcg_out_dat_imm(s, cond, ARITH_MOV, TCG_REG_R8, 0, mem_index);
-        tcg_out32(s, (cond << 28) | 0x052d8010); /* str r8, [sp, #-0x10]! */
+        tcg_out_dat_imm(s, COND_AL, ARITH_MOV, TCG_REG_R8, 0, mem_index);
+        tcg_out32(s, (COND_AL << 28) | 0x052d8010); /* str r8, [sp, #-0x10]! */
         if (data_reg != TCG_REG_R2) {
-            tcg_out_dat_reg(s, cond, ARITH_MOV,
+            tcg_out_dat_reg(s, COND_AL, ARITH_MOV,
                             TCG_REG_R2, 0, data_reg, SHIFT_IMM_LSL(0));
         }
         if (data_reg2 != TCG_REG_R3) {
-            tcg_out_dat_reg(s, cond, ARITH_MOV,
+            tcg_out_dat_reg(s, COND_AL, ARITH_MOV,
                             TCG_REG_R3, 0, data_reg2, SHIFT_IMM_LSL(0));
         }
         break;
     }
 # endif
-    tcg_out_bl(s, cond, (tcg_target_long) qemu_st_helpers[s_bits] -
+    tcg_out_bl(s, COND_AL, (tcg_target_long) qemu_st_helpers[s_bits] -
                     (tcg_target_long) s->code_ptr);
 # if TARGET_LONG_BITS == 64
     if (opc == 3)
-        tcg_out_dat_imm(s, cond, ARITH_ADD, TCG_REG_R13, TCG_REG_R13, 0x10);
+        tcg_out_dat_imm(s, COND_AL, ARITH_ADD, TCG_REG_R13, TCG_REG_R13, 0x10);
 # endif
 
     *label_ptr += ((void *) s->code_ptr - (void *) label_ptr - 8) >> 2;
@@ -1527,35 +1525,35 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc,
         break;
 
     case INDEX_op_qemu_ld8u:
-        tcg_out_qemu_ld(s, COND_AL, args, 0);
+        tcg_out_qemu_ld(s, args, 0);
         break;
     case INDEX_op_qemu_ld8s:
-        tcg_out_qemu_ld(s, COND_AL, args, 0 | 4);
+        tcg_out_qemu_ld(s, args, 0 | 4);
        break;
     case INDEX_op_qemu_ld16u:
-        tcg_out_qemu_ld(s, COND_AL, args, 1);
+        tcg_out_qemu_ld(s, args, 1);
         break;
     case INDEX_op_qemu_ld16s:
-        tcg_out_qemu_ld(s, COND_AL, args, 1 | 4);
+        tcg_out_qemu_ld(s, args, 1 | 4);
         break;
     case INDEX_op_qemu_ld32:
-        tcg_out_qemu_ld(s, COND_AL, args, 2);
+        tcg_out_qemu_ld(s, args, 2);
         break;
     case INDEX_op_qemu_ld64:
-        tcg_out_qemu_ld(s, COND_AL, args, 3);
+        tcg_out_qemu_ld(s, args, 3);
         break;
 
     case INDEX_op_qemu_st8:
-        tcg_out_qemu_st(s, COND_AL, args, 0);
+        tcg_out_qemu_st(s, args, 0);
         break;
     case INDEX_op_qemu_st16:
-        tcg_out_qemu_st(s, COND_AL, args, 1);
+        tcg_out_qemu_st(s, args, 1);
         break;
     case INDEX_op_qemu_st32:
-        tcg_out_qemu_st(s, COND_AL, args, 2);
+        tcg_out_qemu_st(s, args, 2);
         break;
     case INDEX_op_qemu_st64:
-        tcg_out_qemu_st(s, COND_AL, args, 3);
+        tcg_out_qemu_st(s, args, 3);
         break;
 
     case INDEX_op_bswap16_i32: