From patchwork Sun Sep 10 07:36:03 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Benjamin Herrenschmidt <benh@kernel.crashing.org>
X-Patchwork-Id: 812081
From: Benjamin Herrenschmidt <benh@kernel.crashing.org>
To: skiboot@lists.ozlabs.org
Date: Sun, 10 Sep 2017 17:36:03 +1000
Message-Id: <20170910073607.27555-9-benh@kernel.crashing.org>
X-Mailer: git-send-email 2.13.5
In-Reply-To: <20170910073607.27555-1-benh@kernel.crashing.org>
References: <20170910073607.27555-1-benh@kernel.crashing.org>
Subject: [Skiboot] [PATCH v3 09/13] xive: Fix locking around cache scrub & watch
List-Id: Mailing list for skiboot development
Thankfully the missing locking only affects debug code and init code
that doesn't run concurrently. Also adds a DEBUG option that checks
the lock is properly held.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
---
 hw/xive.c | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

diff --git a/hw/xive.c b/hw/xive.c
index 1d306b93..c2360550 100644
--- a/hw/xive.c
+++ b/hw/xive.c
@@ -45,11 +45,13 @@
 #define XIVE_PERCPU_LOG
 #define XIVE_DEBUG_INIT_CACHE_UPDATES
 #define XIVE_EXTRA_CHECK_INIT_CACHE
+#define XIVE_CHECK_LOCKS
 #else
 #undef XIVE_DEBUG_DUPLICATES
 #undef XIVE_PERCPU_LOG
 #undef XIVE_DEBUG_INIT_CACHE_UPDATES
 #undef XIVE_EXTRA_CHECK_INIT_CACHE
+#undef XIVE_CHECK_LOCKS
 #endif
 
 /*
@@ -1245,6 +1247,10 @@ static int64_t __xive_cache_scrub(struct xive *x, enum xive_cache_type ctype,
 	uint64_t sreg, sregx, mreg, mregx;
 	uint64_t mval, sval;
 
+#ifdef XIVE_CHECK_LOCKS
+	assert(lock_held_by_me(&x->lock));
+#endif
+
 	/* Workaround a HW bug in XIVE where the scrub completion
 	 * isn't ordered by loads, thus the data might still be
 	 * in a queue and may not have reached coherency.
@@ -1341,6 +1347,9 @@ static int64_t __xive_cache_watch(struct xive *x, enum xive_cache_type ctype,
 	uint64_t dval0, sval, status;
 	int64_t i;
 
+#ifdef XIVE_CHECK_LOCKS
+	assert(lock_held_by_me(&x->lock));
+#endif
 	switch (ctype) {
 	case xive_cache_eqc:
 		sreg = VC_EQC_CWATCH_SPEC;
@@ -3016,6 +3025,7 @@ static void xive_setup_hw_for_emu(struct xive_cpu_state *xs)
 		      xs->eq_page, XIVE_EMULATION_PRIO);
 
 	/* Use the cache watch to write it out */
+	lock(&x_eq->lock);
 	xive_eqc_cache_update(x_eq, xs->eq_blk,
 			      xs->eq_idx + XIVE_EMULATION_PRIO,
 			      0, 4, &eq, false, true);
@@ -3023,14 +3033,17 @@
 
 	/* Extra testing of cache watch & scrub facilities */
 	xive_special_cache_check(x_vp, xs->vp_blk, xs->vp_idx);
+	unlock(&x_eq->lock);
 
 	/* Initialize/enable the VP */
 	xive_init_default_vp(&vp, xs->eq_blk, xs->eq_idx);
 
 	/* Use the cache watch to write it out */
+	lock(&x_vp->lock);
 	xive_vpc_cache_update(x_vp, xs->vp_blk, xs->vp_idx,
 			      0, 8, &vp, false, true);
 	xive_check_vpc_update(x_vp, xs->vp_idx, &vp);
+	unlock(&x_vp->lock);
 }
 
 static void xive_init_cpu_emulation(struct xive_cpu_state *xs,
@@ -3075,8 +3088,10 @@ static void xive_init_cpu_exploitation(struct xive_cpu_state *xs)
 	xive_init_default_vp(&vp, xs->eq_blk, xs->eq_idx);
 
 	/* Use the cache watch to write it out */
+	lock(&x_vp->lock);
 	xive_vpc_cache_update(x_vp, xs->vp_blk, xs->vp_idx,
 			      0, 8, &vp, false, true);
+	unlock(&x_vp->lock);
 
 	/* Clenaup remaining state */
 	xs->cppr = 0;
@@ -3263,9 +3278,11 @@ static uint32_t xive_read_eq(struct xive_cpu_state *xs, bool just_peek)
 			xs->eqbuf[(xs->eqptr + 2) & xs->eqmsk],
 			xs->eqbuf[(xs->eqptr + 3) & xs->eqmsk],
 			xs->eqgen, xs->eqptr, just_peek);
+		lock(&xs->xive->lock);
 		__xive_cache_scrub(xs->xive, xive_cache_eqc, xs->eq_blk,
 				   xs->eq_idx + XIVE_EMULATION_PRIO,
 				   false, false);
+		unlock(&xs->xive->lock);
 		eq = xive_get_eq(xs->xive, xs->eq_idx + XIVE_EMULATION_PRIO);
 		prerror("EQ @%p W0=%08x W1=%08x qbuf @%p\n",
 			eq, eq->w0, eq->w1, xs->eqbuf);
@@ -3503,9 +3520,11 @@ static int64_t opal_xive_get_xirr(uint32_t *out_xirr, bool just_poll)
 #ifdef XIVE_PERCPU_LOG
 		{
 			struct xive_eq *eq;
+			lock(&xs->xive->lock);
 			__xive_cache_scrub(xs->xive, xive_cache_eqc, xs->eq_blk,
 					   xs->eq_idx + XIVE_EMULATION_PRIO,
 					   false, false);
+			unlock(&xs->xive->lock);
 			eq = xive_get_eq(xs->xive, xs->eq_idx + XIVE_EMULATION_PRIO);
 			log_add(xs, LOG_TYPE_EQD, 2, eq->w0, eq->w1);
 		}