From patchwork Wed May 30 09:35:39 2018
X-Patchwork-Submitter: Juerg Haefliger
X-Patchwork-Id: 922663
From: Juerg Haefliger
To: kernel-team@lists.ubuntu.com
Subject: [SRU][Xenial][PATCH 2/2] UBUNTU: SAUCE: rfi-flush: Make it possible to call setup_rfi_flush() again
Date: Wed, 30 May 2018 11:35:39 +0200
Message-Id: <20180530093539.11917-3-juergh@canonical.com>
In-Reply-To: <20180530093539.11917-1-juergh@canonical.com>
References: <20180530093539.11917-1-juergh@canonical.com>

From: Michael Ellerman

BugLink: https://bugs.launchpad.net/bugs/1744173

For PowerVM migration we want to be able to call setup_rfi_flush() again
after we've migrated the partition.
To support that we need to check that we're not trying to allocate the
fallback flush area after memblock has gone away. If so we just fail; we
don't support migrating from a patched to an unpatched machine. Or we do
support it, but there will be no RFI flush enabled on the destination.

Signed-off-by: Michael Ellerman
Signed-off-by: Breno Leitao
Signed-off-by: Juerg Haefliger
---
 arch/powerpc/kernel/setup_64.c | 22 ++++++++++++++++++----
 1 file changed, 18 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
index 70dfe49868e1..efc6371d62b3 100644
--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -961,14 +961,22 @@ void setup_stf_barrier(void)
 	stf_barrier_enable(enable);
 }
 
-static void init_fallback_flush(void)
+static bool init_fallback_flush(void)
 {
 	u64 l1d_size, limit;
 	int cpu;
 
 	/* Only allocate the fallback flush area once (at boot time). */
 	if (l1d_flush_fallback_area)
-		return;
+		return true;
+
+	/*
+	 * Once the slab allocator is up it's too late to allocate the
+	 * fallback flush area, so return an error. This could happen if we
+	 * migrated from a patched machine to an unpatched machine.
+	 */
+	if (slab_is_available())
+		return false;
 
 	l1d_size = ppc64_caches.dsize;
 	limit = min(safe_stack_limit(), ppc64_rma_size);
@@ -985,13 +993,19 @@ static void init_fallback_flush(void)
 		paca[cpu].rfi_flush_fallback_area = l1d_flush_fallback_area;
 		paca[cpu].l1d_flush_size = l1d_size;
 	}
+
+	return true;
 }
 
 void setup_rfi_flush(enum l1d_flush_type types, bool enable)
 {
 	if (types & L1D_FLUSH_FALLBACK) {
-		pr_info("rfi-flush: fallback displacement flush available\n");
-		init_fallback_flush();
+		if (init_fallback_flush())
+			pr_info("rfi-flush: Using fallback displacement flush\n");
+		else {
+			pr_warn("rfi-flush: Error unable to use fallback displacement flush!\n");
+			types &= ~L1D_FLUSH_FALLBACK;
+		}
 	}
 
 	if (types & L1D_FLUSH_ORI)