From patchwork Fri Jan 8 15:53:14 2016
X-Patchwork-Submitter: Alex Bennée
X-Patchwork-Id: 564925
From: Alex Bennée
To: qemu-devel@nongnu.org
Date: Fri, 8 Jan 2016 15:53:14 +0000
Message-Id: <1452268394-31252-3-git-send-email-alex.bennee@linaro.org>
X-Mailer: git-send-email 2.6.4
In-Reply-To: <1452268394-31252-1-git-send-email-alex.bennee@linaro.org>
References: <1452268394-31252-1-git-send-email-alex.bennee@linaro.org>
Cc: Peter Crosthwaite, claudio.fontana@huawei.com, a.rigo@virtualopensystems.com, Paolo Bonzini, jani.kokkonen@huawei.com, Alex Bennée, Richard Henderson
Subject: [Qemu-devel] [RFC PATCH 2/2] softmmu: simplify helper_*_st_name with smmu_helper(do_unl_store)

From: Alvise Rigo

In an attempt to simplify the helper_*_st_name helpers, wrap the
do_unaligned_access code into a shared inline function. As this also
removes the goto statement, the inline code is expanded twice in each
helper.

Suggested-by: Jani Kokkonen
Suggested-by: Claudio Fontana
CC: Alvise Rigo
Signed-off-by: Alex Bennée

---
v2
  - based on original patch from Alvise
  - uses a single shared inline function to reduce duplication
---
 softmmu_template.h | 75 ++++++++++++++++++++++++++++--------------------
 1 file changed, 39 insertions(+), 36 deletions(-)

diff --git a/softmmu_template.h b/softmmu_template.h
index 0074bd7..ac0b4ac 100644
--- a/softmmu_template.h
+++ b/softmmu_template.h
@@ -159,6 +159,39 @@ static inline int smmu_helper(victim_tlb_hit) (const bool is_read, CPUArchState
 }
 
 #ifndef SOFTMMU_CODE_ACCESS
+
+static inline void smmu_helper(do_unl_store)(CPUArchState *env,
+                                             bool little_endian,
+                                             DATA_TYPE val,
+                                             target_ulong addr,
+                                             TCGMemOpIdx oi,
+                                             unsigned mmu_idx,
+                                             uintptr_t retaddr)
+{
+    int i;
+
+    if ((get_memop(oi) & MO_AMASK) == MO_ALIGN) {
+        cpu_unaligned_access(ENV_GET_CPU(env), addr, MMU_DATA_STORE,
+                             mmu_idx, retaddr);
+    }
+    /* Note: relies on the fact that tlb_fill() does not remove the
+     * previous page from the TLB cache. */
+    for (i = DATA_SIZE - 1; i >= 0; i--) {
+        uint8_t val8;
+        if (little_endian) {
+            /* Little-endian extract. */
+            val8 = val >> (i * 8);
+        } else {
+            /* Big-endian extract. */
+            val8 = val >> (((DATA_SIZE - 1) * 8) - (i * 8));
+        }
+        /* Note the adjustment at the beginning of the function.
+           Undo that for the recursion. */
+        glue(helper_ret_stb, MMUSUFFIX)(env, addr + i, val8,
+                                        oi, retaddr + GETPC_ADJ);
+    }
+}
+
 static inline DATA_TYPE glue(io_read, SUFFIX)(CPUArchState *env,
                                               CPUIOTLBEntry *iotlbentry,
                                               target_ulong addr,
@@ -416,7 +449,8 @@ void helper_le_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
     if (unlikely(tlb_addr & ~TARGET_PAGE_MASK)) {
         CPUIOTLBEntry *iotlbentry;
         if ((addr & (DATA_SIZE - 1)) != 0) {
-            goto do_unaligned_access;
+            smmu_helper(do_unl_store)(env, true, val, addr, oi, mmu_idx, retaddr);
+            return;
         }
         iotlbentry = &env->iotlb[mmu_idx][index];
 
@@ -431,23 +465,7 @@ void helper_le_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
     if (DATA_SIZE > 1
         && unlikely((addr & ~TARGET_PAGE_MASK) + DATA_SIZE - 1
                      >= TARGET_PAGE_SIZE)) {
-        int i;
-    do_unaligned_access:
-        if ((get_memop(oi) & MO_AMASK) == MO_ALIGN) {
-            cpu_unaligned_access(ENV_GET_CPU(env), addr, MMU_DATA_STORE,
-                                 mmu_idx, retaddr);
-        }
-        /* XXX: not efficient, but simple */
-        /* Note: relies on the fact that tlb_fill() does not remove the
-         * previous page from the TLB cache. */
-        for (i = DATA_SIZE - 1; i >= 0; i--) {
-            /* Little-endian extract. */
-            uint8_t val8 = val >> (i * 8);
-            /* Note the adjustment at the beginning of the function.
-               Undo that for the recursion. */
-            glue(helper_ret_stb, MMUSUFFIX)(env, addr + i, val8,
-                                            oi, retaddr + GETPC_ADJ);
-        }
+        smmu_helper(do_unl_store)(env, true, val, addr, oi, mmu_idx, retaddr);
         return;
     }
 
@@ -496,7 +514,8 @@ void helper_be_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
     if (unlikely(tlb_addr & ~TARGET_PAGE_MASK)) {
         CPUIOTLBEntry *iotlbentry;
         if ((addr & (DATA_SIZE - 1)) != 0) {
-            goto do_unaligned_access;
+            smmu_helper(do_unl_store)(env, false, val, addr, oi, mmu_idx, retaddr);
+            return;
         }
         iotlbentry = &env->iotlb[mmu_idx][index];
 
@@ -511,23 +530,7 @@ void helper_be_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
     if (DATA_SIZE > 1
         && unlikely((addr & ~TARGET_PAGE_MASK) + DATA_SIZE - 1
                      >= TARGET_PAGE_SIZE)) {
-        int i;
-    do_unaligned_access:
-        if ((get_memop(oi) & MO_AMASK) == MO_ALIGN) {
-            cpu_unaligned_access(ENV_GET_CPU(env), addr, MMU_DATA_STORE,
-                                 mmu_idx, retaddr);
-        }
-        /* XXX: not efficient, but simple */
-        /* Note: relies on the fact that tlb_fill() does not remove the
-         * previous page from the TLB cache. */
-        for (i = DATA_SIZE - 1; i >= 0; i--) {
-            /* Big-endian extract. */
-            uint8_t val8 = val >> (((DATA_SIZE - 1) * 8) - (i * 8));
-            /* Note the adjustment at the beginning of the function.
-               Undo that for the recursion. */
-            glue(helper_ret_stb, MMUSUFFIX)(env, addr + i, val8,
-                                            oi, retaddr + GETPC_ADJ);
-        }
+        smmu_helper(do_unl_store)(env, false, val, addr, oi, mmu_idx, retaddr);
         return;
     }
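
[Editor's note, not part of the patch: for readers following the endianness
handling, below is a minimal standalone C sketch of the byte-extraction order
used by the shared smmu_helper(do_unl_store) loop. The names store_byte and
unaligned_store are hypothetical stand-ins for the real per-byte store helper
glue(helper_ret_stb, MMUSUFFIX)(), and DATA_SIZE is fixed at 4 here purely for
illustration.]

/* Standalone sketch: splits a multi-byte value into single-byte stores,
 * choosing the shift by the requested endianness, as do_unl_store does. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define DATA_SIZE 4

/* Stand-in for the per-byte store helper; just writes one byte to "memory". */
static void store_byte(uint8_t *mem, uintptr_t addr, uint8_t val8)
{
    mem[addr] = val8;
}

/* Mirrors the loop body of do_unl_store: one byte store per iteration. */
static void unaligned_store(uint8_t *mem, uintptr_t addr, uint32_t val,
                            bool little_endian)
{
    for (int i = DATA_SIZE - 1; i >= 0; i--) {
        uint8_t val8;
        if (little_endian) {
            /* Little-endian extract: lowest byte goes to the lowest address. */
            val8 = val >> (i * 8);
        } else {
            /* Big-endian extract: highest byte goes to the lowest address. */
            val8 = val >> (((DATA_SIZE - 1) * 8) - (i * 8));
        }
        store_byte(mem, addr + i, val8);
    }
}

int main(void)
{
    uint8_t mem[8] = { 0 };

    unaligned_store(mem, 1, 0x11223344, true);   /* prints LE: 44 33 22 11 */
    printf("LE: %02x %02x %02x %02x\n", mem[1], mem[2], mem[3], mem[4]);

    unaligned_store(mem, 1, 0x11223344, false);  /* prints BE: 11 22 33 44 */
    printf("BE: %02x %02x %02x %02x\n", mem[1], mem[2], mem[3], mem[4]);
    return 0;
}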