From patchwork Mon Apr  8 20:29:46 2013
X-Patchwork-Submitter: Kirill Tkhai
X-Patchwork-Id: 234888
X-Patchwork-Delegate: davem@davemloft.net
From: Kirill Tkhai
To: sparclinux@vger.kernel.org
Cc: David Miller
Subject: [PATCH] sparc64: Do not save/restore interrupts in get_new_mmu_context()
Date: Tue, 09 Apr 2013 00:29:46 +0400
Message-Id: <523971365452986@web28h.yandex.ru>
X-Mailing-List: sparclinux@vger.kernel.org

get_new_mmu_context() is always called with interrupts disabled, so
saving and restoring the interrupt flags around ctx_alloc_lock is
unnecessary; plain spin_lock()/spin_unlock() is enough. This is a
micro-optimization.

(Also fix the comment above switch_mm(), which is called both with
interrupts enabled and with them disabled.)

Signed-off-by: Kirill Tkhai
CC: David Miller
---
 arch/sparc/include/asm/mmu_context_64.h | 2 +-
 arch/sparc/mm/init_64.c                 | 5 ++---
 2 files changed, 3 insertions(+), 4 deletions(-)

diff --git a/arch/sparc/include/asm/mmu_context_64.h b/arch/sparc/include/asm/mmu_context_64.h
index 9191ca6..3d528f0 100644
--- a/arch/sparc/include/asm/mmu_context_64.h
+++ b/arch/sparc/include/asm/mmu_context_64.h
@@ -68,7 +68,7 @@ extern void smp_tsb_sync(struct mm_struct *mm);
 extern void __flush_tlb_mm(unsigned long, unsigned long);
 
-/* Switch the current MM context. Interrupts are disabled. */
+/* Switch the current MM context. */
 static inline void switch_mm(struct mm_struct *old_mm, struct mm_struct *mm,
 			     struct task_struct *tsk)
 {
 	unsigned long ctx_valid, flags;
diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c
index 076068f..4ccaa1b 100644
--- a/arch/sparc/mm/init_64.c
+++ b/arch/sparc/mm/init_64.c
@@ -681,10 +681,9 @@ void get_new_mmu_context(struct mm_struct *mm)
 {
 	unsigned long ctx, new_ctx;
 	unsigned long orig_pgsz_bits;
-	unsigned long flags;
 	int new_version;
 
-	spin_lock_irqsave(&ctx_alloc_lock, flags);
+	spin_lock(&ctx_alloc_lock);
 	orig_pgsz_bits = (mm->context.sparc64_ctx_val & CTX_PGSZ_MASK);
 	ctx = (tlb_context_cache + 1) & CTX_NR_MASK;
 	new_ctx = find_next_zero_bit(mmu_context_bmap, 1 << CTX_NR_BITS, ctx);
@@ -720,7 +719,7 @@ void get_new_mmu_context(struct mm_struct *mm)
 out:
 	tlb_context_cache = new_ctx;
 	mm->context.sparc64_ctx_val = new_ctx | orig_pgsz_bits;
-	spin_unlock_irqrestore(&ctx_alloc_lock, flags);
+	spin_unlock(&ctx_alloc_lock);
 
 	if (unlikely(new_version))
 		smp_new_mmu_context_version();