From patchwork Sun Aug 6 03:36:54 2023
X-Patchwork-Submitter: Richard Henderson
X-Patchwork-Id: 1817421
From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Cc: Philippe Mathieu-Daudé
Subject: [PULL 03/24] accel/tcg: Do not issue misaligned i/o
Date: Sat, 5 Aug 2023 20:36:54 -0700
Message-Id: <20230806033715.244648-4-richard.henderson@linaro.org>
In-Reply-To: <20230806033715.244648-1-richard.henderson@linaro.org>
References: <20230806033715.244648-1-richard.henderson@linaro.org>

In the single-page case we were issuing misaligned i/o to the memory
subsystem, which does not handle it properly.  Split such accesses
via do_{ld,st}_mmio_*.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1800
Reviewed-by: Philippe Mathieu-Daudé
Signed-off-by: Richard Henderson
---
 accel/tcg/cputlb.c | 118 +++++++++++++++++++++++++++------------------
 1 file changed, 72 insertions(+), 46 deletions(-)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index a308cb7534..4b1bfaa53d 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -2370,16 +2370,20 @@ static uint8_t do_ld_1(CPUArchState *env, MMULookupPageData *p, int mmu_idx,
 static uint16_t do_ld_2(CPUArchState *env, MMULookupPageData *p, int mmu_idx,
                         MMUAccessType type, MemOp memop, uintptr_t ra)
 {
-    uint64_t ret;
+    uint16_t ret;
 
     if (unlikely(p->flags & TLB_MMIO)) {
-        return io_readx(env, p->full, mmu_idx, p->addr, ra, type, memop);
-    }
-
-    /* Perform the load host endian, then swap if necessary. */
-    ret = load_atom_2(env, ra, p->haddr, memop);
-    if (memop & MO_BSWAP) {
-        ret = bswap16(ret);
+        QEMU_IOTHREAD_LOCK_GUARD();
+        ret = do_ld_mmio_beN(env, p->full, 0, p->addr, 2, mmu_idx, type, ra);
+        if ((memop & MO_BSWAP) == MO_LE) {
+            ret = bswap16(ret);
+        }
+    } else {
+        /* Perform the load host endian, then swap if necessary. */
+        ret = load_atom_2(env, ra, p->haddr, memop);
+        if (memop & MO_BSWAP) {
+            ret = bswap16(ret);
+        }
     }
     return ret;
 }
@@ -2390,13 +2394,17 @@ static uint32_t do_ld_4(CPUArchState *env, MMULookupPageData *p, int mmu_idx,
     uint32_t ret;
 
     if (unlikely(p->flags & TLB_MMIO)) {
-        return io_readx(env, p->full, mmu_idx, p->addr, ra, type, memop);
-    }
-
-    /* Perform the load host endian. */
-    ret = load_atom_4(env, ra, p->haddr, memop);
-    if (memop & MO_BSWAP) {
-        ret = bswap32(ret);
+        QEMU_IOTHREAD_LOCK_GUARD();
+        ret = do_ld_mmio_beN(env, p->full, 0, p->addr, 4, mmu_idx, type, ra);
+        if ((memop & MO_BSWAP) == MO_LE) {
+            ret = bswap32(ret);
+        }
+    } else {
+        /* Perform the load host endian. */
+        ret = load_atom_4(env, ra, p->haddr, memop);
+        if (memop & MO_BSWAP) {
+            ret = bswap32(ret);
+        }
     }
     return ret;
 }
@@ -2407,13 +2415,17 @@ static uint64_t do_ld_8(CPUArchState *env, MMULookupPageData *p, int mmu_idx,
     uint64_t ret;
 
     if (unlikely(p->flags & TLB_MMIO)) {
-        return io_readx(env, p->full, mmu_idx, p->addr, ra, type, memop);
-    }
-
-    /* Perform the load host endian. */
-    ret = load_atom_8(env, ra, p->haddr, memop);
-    if (memop & MO_BSWAP) {
-        ret = bswap64(ret);
+        QEMU_IOTHREAD_LOCK_GUARD();
+        ret = do_ld_mmio_beN(env, p->full, 0, p->addr, 8, mmu_idx, type, ra);
+        if ((memop & MO_BSWAP) == MO_LE) {
+            ret = bswap64(ret);
+        }
+    } else {
+        /* Perform the load host endian. */
+        ret = load_atom_8(env, ra, p->haddr, memop);
+        if (memop & MO_BSWAP) {
+            ret = bswap64(ret);
+        }
     }
     return ret;
 }
@@ -2561,20 +2573,22 @@ static Int128 do_ld16_mmu(CPUArchState *env, vaddr addr,
     cpu_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD);
     crosspage = mmu_lookup(env, addr, oi, ra, MMU_DATA_LOAD, &l);
     if (likely(!crosspage)) {
-        /* Perform the load host endian. */
         if (unlikely(l.page[0].flags & TLB_MMIO)) {
             QEMU_IOTHREAD_LOCK_GUARD();
-            a = io_readx(env, l.page[0].full, l.mmu_idx, addr,
-                         ra, MMU_DATA_LOAD, MO_64);
-            b = io_readx(env, l.page[0].full, l.mmu_idx, addr + 8,
-                         ra, MMU_DATA_LOAD, MO_64);
-            ret = int128_make128(HOST_BIG_ENDIAN ? b : a,
-                                 HOST_BIG_ENDIAN ? a : b);
+            a = do_ld_mmio_beN(env, l.page[0].full, 0, addr, 8,
+                               l.mmu_idx, MMU_DATA_LOAD, ra);
+            b = do_ld_mmio_beN(env, l.page[0].full, 0, addr + 8, 8,
+                               l.mmu_idx, MMU_DATA_LOAD, ra);
+            ret = int128_make128(b, a);
+            if ((l.memop & MO_BSWAP) == MO_LE) {
+                ret = bswap128(ret);
+            }
         } else {
+            /* Perform the load host endian. */
             ret = load_atom_16(env, ra, l.page[0].haddr, l.memop);
-        }
-        if (l.memop & MO_BSWAP) {
-            ret = bswap128(ret);
+            if (l.memop & MO_BSWAP) {
+                ret = bswap128(ret);
+            }
         }
         return ret;
     }
@@ -2874,7 +2888,11 @@ static void do_st_2(CPUArchState *env, MMULookupPageData *p, uint16_t val,
                     int mmu_idx, MemOp memop, uintptr_t ra)
 {
     if (unlikely(p->flags & TLB_MMIO)) {
-        io_writex(env, p->full, mmu_idx, val, p->addr, ra, memop);
+        if ((memop & MO_BSWAP) != MO_LE) {
+            val = bswap16(val);
+        }
+        QEMU_IOTHREAD_LOCK_GUARD();
+        do_st_mmio_leN(env, p->full, val, p->addr, 2, mmu_idx, ra);
     } else if (unlikely(p->flags & TLB_DISCARD_WRITE)) {
         /* nothing */
     } else {
@@ -2890,7 +2908,11 @@ static void do_st_4(CPUArchState *env, MMULookupPageData *p, uint32_t val,
                     int mmu_idx, MemOp memop, uintptr_t ra)
 {
     if (unlikely(p->flags & TLB_MMIO)) {
-        io_writex(env, p->full, mmu_idx, val, p->addr, ra, memop);
+        if ((memop & MO_BSWAP) != MO_LE) {
+            val = bswap32(val);
+        }
+        QEMU_IOTHREAD_LOCK_GUARD();
+        do_st_mmio_leN(env, p->full, val, p->addr, 4, mmu_idx, ra);
     } else if (unlikely(p->flags & TLB_DISCARD_WRITE)) {
         /* nothing */
     } else {
@@ -2906,7 +2928,11 @@ static void do_st_8(CPUArchState *env, MMULookupPageData *p, uint64_t val,
                     int mmu_idx, MemOp memop, uintptr_t ra)
 {
     if (unlikely(p->flags & TLB_MMIO)) {
-        io_writex(env, p->full, mmu_idx, val, p->addr, ra, memop);
+        if ((memop & MO_BSWAP) != MO_LE) {
+            val = bswap64(val);
+        }
+        QEMU_IOTHREAD_LOCK_GUARD();
+        do_st_mmio_leN(env, p->full, val, p->addr, 8, mmu_idx, ra);
     } else if (unlikely(p->flags & TLB_DISCARD_WRITE)) {
         /* nothing */
     } else {
@@ -3029,22 +3055,22 @@ static void do_st16_mmu(CPUArchState *env, vaddr addr, Int128 val,
     cpu_req_mo(TCG_MO_LD_ST | TCG_MO_ST_ST);
     crosspage = mmu_lookup(env, addr, oi, ra, MMU_DATA_STORE, &l);
     if (likely(!crosspage)) {
-        /* Swap to host endian if necessary, then store. */
-        if (l.memop & MO_BSWAP) {
-            val = bswap128(val);
-        }
         if (unlikely(l.page[0].flags & TLB_MMIO)) {
-            QEMU_IOTHREAD_LOCK_GUARD();
-            if (HOST_BIG_ENDIAN) {
-                b = int128_getlo(val), a = int128_gethi(val);
-            } else {
-                a = int128_getlo(val), b = int128_gethi(val);
+            if ((l.memop & MO_BSWAP) != MO_LE) {
+                val = bswap128(val);
             }
-            io_writex(env, l.page[0].full, l.mmu_idx, a, addr, ra, MO_64);
-            io_writex(env, l.page[0].full, l.mmu_idx, b, addr + 8, ra, MO_64);
+            a = int128_getlo(val);
+            b = int128_gethi(val);
+            QEMU_IOTHREAD_LOCK_GUARD();
+            do_st_mmio_leN(env, l.page[0].full, a, addr, 8, l.mmu_idx, ra);
+            do_st_mmio_leN(env, l.page[0].full, b, addr + 8, 8, l.mmu_idx, ra);
         } else if (unlikely(l.page[0].flags & TLB_DISCARD_WRITE)) {
             /* nothing */
        } else {
+            /* Swap to host endian if necessary, then store. */
+            if (l.memop & MO_BSWAP) {
+                val = bswap128(val);
+            }
             store_atom_16(env, ra, l.page[0].haddr, l.memop, val);
         }
         return;
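
For readers not familiar with the helpers used above: the MMIO paths now
funnel every access size through do_ld_mmio_beN()/do_st_mmio_leN(), which
break a possibly misaligned access into pieces the memory subsystem can
accept, assembling loads big-endian and emitting stores little-endian,
with the caller byte-swapping afterwards when the guest access used the
other byte order.  The following is only a rough standalone sketch of that
idea, not the QEMU implementation; the array-backed "device" and the names
fake_mmio, mmio_read_byte, mmio_write_byte, ld_mmio_beN and st_mmio_leN are
invented for illustration (the real helpers also carry the CPU/TLB context
-- env, full, mmu_idx, ra -- seen in the patch).

/*
 * Illustrative sketch only -- not QEMU code.  A (possibly misaligned)
 * MMIO access of `size` bytes is split into single-byte device accesses:
 * loads are assembled big-endian, stores are emitted little-endian, and
 * the caller swaps afterwards if the guest access used the other byte
 * order.  The "device" here is just a byte array standing in for a
 * memory region.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

static uint8_t fake_mmio[16];   /* hypothetical device register file */

static uint8_t mmio_read_byte(uint64_t addr)
{
    return fake_mmio[addr];
}

static void mmio_write_byte(uint64_t addr, uint8_t val)
{
    fake_mmio[addr] = val;
}

/* Assemble `size` (1..8) bytes big-endian, one device access per byte. */
static uint64_t ld_mmio_beN(uint64_t addr, unsigned size)
{
    uint64_t ret = 0;

    for (unsigned i = 0; i < size; i++) {
        ret = (ret << 8) | mmio_read_byte(addr + i);
    }
    return ret;
}

/* Emit `size` (1..8) bytes little-endian, one device access per byte. */
static void st_mmio_leN(uint64_t addr, uint64_t val, unsigned size)
{
    for (unsigned i = 0; i < size; i++) {
        mmio_write_byte(addr + i, val & 0xff);
        val >>= 8;
    }
}

int main(void)
{
    /* A 4-byte access at offset 1 is misaligned; split it into bytes. */
    st_mmio_leN(1, 0x11223344, 4);
    printf("read back big-endian: 0x%08" PRIx64 "\n", ld_mmio_beN(1, 4));
    return 0;
}

With that shape in mind the hunks read naturally: the slow MMIO branches
swap (if needed) and defer to the byte-splitting helpers under the iothread
lock, while the fast host-memory branches keep their existing
load_atom_*/store_atom_* calls.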