From patchwork Sun Jun 23 13:47:39 2013

From: "Srivatsa S. Bhat"
Subject: [PATCH 43/45] sh: Use get/put_online_cpus_atomic() to prevent CPU offline
To: tglx@linutronix.de, peterz@infradead.org, tj@kernel.org, oleg@redhat.com,
    paulmck@linux.vnet.ibm.com, rusty@rustcorp.com.au, mingo@kernel.org,
    akpm@linux-foundation.org, namhyung@kernel.org, walken@google.com,
    vincent.guittot@linaro.org, laijs@cn.fujitsu.com
Cc: rostedt@goodmis.org, wangyun@linux.vnet.ibm.com,
    xiaoguangrong@linux.vnet.ibm.com, sbw@mit.edu, fweisbec@gmail.com,
    zhong@linux.vnet.ibm.com, nikunj@linux.vnet.ibm.com,
    srivatsa.bhat@linux.vnet.ibm.com, linux-pm@vger.kernel.org,
    linux-arch@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
    netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
    Paul Mundt, Thomas Gleixner, linux-sh@vger.kernel.org
Date: Sun, 23 Jun 2013 19:17:39 +0530
Message-ID: <20130623134735.19094.47323.stgit@srivatsabhat.in.ibm.com>
In-Reply-To: <20130623133642.19094.16038.stgit@srivatsabhat.in.ibm.com>
References: <20130623133642.19094.16038.stgit@srivatsabhat.in.ibm.com>
User-Agent: StGIT/0.14.3

Once stop_machine() is gone from the CPU offline path, we won't be able
to depend on disabling preemption to prevent CPUs from going offline
from under us.

Use the get/put_online_cpus_atomic() APIs to prevent CPUs from going
offline while we invoke these TLB flush operations from atomic context.

Cc: Paul Mundt
Cc: Thomas Gleixner
Cc: linux-sh@vger.kernel.org
Signed-off-by: Srivatsa S.
Bhat
---

 arch/sh/kernel/smp.c |   12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/sh/kernel/smp.c b/arch/sh/kernel/smp.c
index 4569645..42ec182 100644
--- a/arch/sh/kernel/smp.c
+++ b/arch/sh/kernel/smp.c
@@ -357,7 +357,7 @@ static void flush_tlb_mm_ipi(void *mm)
  */
 void flush_tlb_mm(struct mm_struct *mm)
 {
-	preempt_disable();
+	get_online_cpus_atomic();
 
 	if ((atomic_read(&mm->mm_users) != 1) || (current->mm != mm)) {
 		smp_call_function(flush_tlb_mm_ipi, (void *)mm, 1);
@@ -369,7 +369,7 @@ void flush_tlb_mm(struct mm_struct *mm)
 	}
 	local_flush_tlb_mm(mm);
 
-	preempt_enable();
+	put_online_cpus_atomic();
 }
 
 struct flush_tlb_data {
@@ -390,7 +390,7 @@ void flush_tlb_range(struct vm_area_struct *vma,
 {
 	struct mm_struct *mm = vma->vm_mm;
 
-	preempt_disable();
+	get_online_cpus_atomic();
 	if ((atomic_read(&mm->mm_users) != 1) || (current->mm != mm)) {
 		struct flush_tlb_data fd;
 
@@ -405,7 +405,7 @@ void flush_tlb_range(struct vm_area_struct *vma,
 			cpu_context(i, mm) = 0;
 	}
 	local_flush_tlb_range(vma, start, end);
-	preempt_enable();
+	put_online_cpus_atomic();
 }
 
 static void flush_tlb_kernel_range_ipi(void *info)
@@ -433,7 +433,7 @@ static void flush_tlb_page_ipi(void *info)
 
 void flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
 {
-	preempt_disable();
+	get_online_cpus_atomic();
 	if ((atomic_read(&vma->vm_mm->mm_users) != 1) ||
 	    (current->mm != vma->vm_mm)) {
 		struct flush_tlb_data fd;
@@ -448,7 +448,7 @@ void flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
 			cpu_context(i, vma->vm_mm) = 0;
 	}
 	local_flush_tlb_page(vma, page);
-	preempt_enable();
+	put_online_cpus_atomic();
 }
 
 static void flush_tlb_one_ipi(void *info)
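
For reference, the conversion this series performs is mechanical. Below is a
minimal before/after sketch of the pattern, assuming the
get/put_online_cpus_atomic() API proposed earlier in this series;
some_ipi_handler() is a hypothetical placeholder, not a function in this
patch.

/*
 * Illustrative sketch only, not part of the patch.  some_ipi_handler()
 * is a hypothetical stand-in for a cross-CPU callback such as
 * flush_tlb_mm_ipi() above.
 */
static void ipi_other_cpus(void *info)
{
	/* Old scheme: preemption off blocks stop_machine()-based offline. */
	preempt_disable();
	smp_call_function(some_ipi_handler, info, 1);
	preempt_enable();
}

static void ipi_other_cpus_hotplug_safe(void *info)
{
	/*
	 * New scheme: take the hotplug read-side synchronization that is
	 * safe to use in atomic context.  In this series it also disables
	 * preemption internally, so the this-CPU assumptions in the TLB
	 * flush paths (e.g. the local_flush_tlb_*() calls) still hold.
	 */
	get_online_cpus_atomic();
	smp_call_function(some_ipi_handler, info, 1);
	put_online_cpus_atomic();
}

Without the new API, once stop_machine() is removed from the offline path a
CPU could go offline between the moment the online mask is sampled and the
moment the IPI is delivered, which is exactly the window this patch closes.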