From patchwork Tue Jun 2 11:26:49 2015
X-Patchwork-Submitter: Aurelien Jarno
X-Patchwork-Id: 479423
From: Aurelien Jarno
To: qemu-devel@nongnu.org
Date: Tue, 2 Jun 2015 13:26:49 +0200
Message-Id: <1433244411-9693-4-git-send-email-aurelien@aurel32.net>
In-Reply-To: <1433244411-9693-1-git-send-email-aurelien@aurel32.net>
References: <1433244411-9693-1-git-send-email-aurelien@aurel32.net>
Subject: [Qemu-devel] [PATCH RFC 3/5] softmmu: add a tlb_vaddr_to_host_fill function

The softmmu code already provides a tlb_vaddr_to_host function, which
returns the host address corresponding to a guest virtual address, *if
it is already in the QEMU MMU TLB*.

This patch is an attempt at a function which tries to fill the TLB
entry when it is not already in the QEMU MMU TLB, possibly triggering a
guest fault. It can be used directly in helpers. For that it factors
the lookup into a common function taking a boolean which says whether a
missing TLB entry should be filled. If so, tlb_fill is called; it might
trigger an exception, or succeed, in which case the tlbentry pointer
needs to be reloaded.

I also had to change the MMIO test. It seems that in write mode some
TLB entries are filled with TLB_NOTDIRTY set; they were caught by the
MMIO test and a NULL pointer was returned instead. I am not sure of my
change, but I guess the current softmmu code has the same issue.

At the same time, it defines the same function for user mode, so that
helpers can be written using the same code for softmmu and user mode,
just as the cpu_ldxx_data() functions work for both.

It also replaces hard-coded values for the access type by the
corresponding constants.
Cc: Richard Henderson
Cc: Alexander Graf
Cc: Paolo Bonzini
Cc: Yongbok Kim
Cc: Leon Alrae
Cc: Andreas Färber
Cc: Peter Maydell
Signed-off-by: Aurelien Jarno
---
 include/exec/cpu_ldst.h | 100 +++++++++++++++++++++++++++++++++++++-----------
 1 file changed, 77 insertions(+), 23 deletions(-)

diff --git a/include/exec/cpu_ldst.h b/include/exec/cpu_ldst.h
index 1673287..64fe806 100644
--- a/include/exec/cpu_ldst.h
+++ b/include/exec/cpu_ldst.h
@@ -307,37 +307,40 @@ uint64_t helper_ldq_cmmu(CPUArchState *env, target_ulong addr, int mmu_idx);
 
 #undef MEMSUFFIX
 #undef SOFTMMU_CODE_ACCESS
 
-/**
- * tlb_vaddr_to_host:
- * @env: CPUArchState
- * @addr: guest virtual address to look up
- * @access_type: 0 for read, 1 for write, 2 for execute
- * @mmu_idx: MMU index to use for lookup
- *
- * Look up the specified guest virtual index in the TCG softmmu TLB.
- * If the TLB contains a host virtual address suitable for direct RAM
- * access, then return it. Otherwise (TLB miss, TLB entry is for an
- * I/O access, etc) return NULL.
- *
- * This is the equivalent of the initial fast-path code used by
- * TCG backends for guest load and store accesses.
- */
-static inline void *tlb_vaddr_to_host(CPUArchState *env, target_ulong addr,
-                                      int access_type, int mmu_idx)
+#endif /* defined(CONFIG_USER_ONLY) */
+
+
+
+#if defined(CONFIG_USER_ONLY)
+static inline void *tlb_vaddr_to_host_common(CPUArchState *env,
+                                             target_ulong addr,
+                                             int access_type, int mmu_idx,
+                                             uintptr_t retaddr, bool fill)
+{
+    return g2h(addr);
+}
+#else
+static inline void *tlb_vaddr_to_host_common(CPUArchState *env,
+                                             target_ulong addr,
+                                             int access_type, int mmu_idx,
+                                             uintptr_t retaddr, bool fill)
 {
     int index = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
-    CPUTLBEntry *tlbentry = &env->tlb_table[mmu_idx][index];
+    CPUTLBEntry *tlbentry;
     target_ulong tlb_addr;
     uintptr_t haddr;
 
+again:
+    tlbentry = &env->tlb_table[mmu_idx][index];
+
     switch (access_type) {
-    case 0:
+    case MMU_DATA_LOAD:
         tlb_addr = tlbentry->addr_read;
         break;
-    case 1:
+    case MMU_DATA_STORE:
         tlb_addr = tlbentry->addr_write;
         break;
-    case 2:
+    case MMU_INST_FETCH:
         tlb_addr = tlbentry->addr_code;
         break;
     default:
@@ -347,10 +350,14 @@ static inline void *tlb_vaddr_to_host(CPUArchState *env, target_ulong addr,
     if ((addr & TARGET_PAGE_MASK)
         != (tlb_addr & (TARGET_PAGE_MASK | TLB_INVALID_MASK))) {
         /* TLB entry is for a different page */
+        if (fill) {
+            tlb_fill(ENV_GET_CPU(env), addr, access_type, mmu_idx, retaddr);
+            goto again;
+        }
         return NULL;
     }
 
-    if (tlb_addr & ~TARGET_PAGE_MASK) {
+    if (tlb_addr & TLB_MMIO) {
         /* IO access */
         return NULL;
     }
@@ -358,7 +365,54 @@ static inline void *tlb_vaddr_to_host(CPUArchState *env, target_ulong addr,
     haddr = addr + env->tlb_table[mmu_idx][index].addend;
     return (void *)haddr;
 }
+#endif
 
-#endif /* defined(CONFIG_USER_ONLY) */
+/**
+ * tlb_vaddr_to_host:
+ * @env: CPUArchState
+ * @addr: guest virtual address to look up
+ * @access_type: MMU_DATA_LOAD for read, MMU_DATA_STORE for write,
+ *               MMU_INST_FETCH for execute
+ * @mmu_idx: MMU index to use for lookup
+ *
+ * Look up the specified guest virtual index in the TCG softmmu TLB.
+ * If the TLB contains a host virtual address suitable for direct RAM
+ * access, then return it. Otherwise (TLB miss, TLB entry is for an
+ * I/O access, etc) return NULL.
+ *
+ * This is the equivalent of the initial fast-path code used by
+ * TCG backends for guest load and store accesses.
+ */
+static inline void *tlb_vaddr_to_host(CPUArchState *env, target_ulong addr,
+                                      int access_type, int mmu_idx)
+{
+    return tlb_vaddr_to_host_common(env, addr, access_type,
+                                    mmu_idx, 0, false);
+}
+
+/**
+ * tlb_vaddr_to_host_fill:
+ * @env: CPUArchState
+ * @addr: guest virtual address to look up
+ * @access_type: MMU_DATA_LOAD for read, MMU_DATA_STORE for write,
+ *               MMU_INST_FETCH for execute
+ * @mmu_idx: MMU index to use for lookup
+ * @retaddr: address returned by GETPC() when called from a helper, or 0
+ *
+ * Look up the specified guest virtual index in the TCG softmmu TLB.
+ * If the TLB contains a host virtual address suitable for direct RAM
+ * access, then return it. In case of a TLB miss, it triggers an exception.
+ * Otherwise (TLB entry is for an I/O access, etc), it returns NULL.
+ *
+ * It is the responsibility of the caller to ensure endian conversion, that
+ * page boundaries are not crossed and that access alignment is correct.
+ */
+static inline void *tlb_vaddr_to_host_fill(CPUArchState *env, target_ulong addr,
+                                           int access_type, int mmu_idx,
+                                           uintptr_t retaddr)
+{
+    return tlb_vaddr_to_host_common(env, addr, access_type,
+                                    mmu_idx, retaddr, true);
+}
 
 #endif /* CPU_LDST_H */