From patchwork Wed Feb 17 16:41:56 2016
X-Patchwork-Submitter: Michal Hocko
X-Patchwork-Id: 584222
X-Patchwork-Delegate: davem@davemloft.net
Date: Wed, 17 Feb 2016 17:41:56 +0100
From: Michal Hocko
To: LKML
Cc: Peter Zijlstra, Ingo Molnar, Thomas Gleixner, "H. Peter Anvin",
 "David S. Miller", Tony Luck, Andrew Morton, Chris Zankel, Max Filippov,
 x86@kernel.org, linux-alpha@vger.kernel.org, linux-ia64@vger.kernel.org,
 linux-s390@vger.kernel.org, linux-sh@vger.kernel.org,
 sparclinux@vger.kernel.org, linux-xtensa@linux-xtensa.org,
 linux-arch@vger.kernel.org
Subject: [RFC 11/12 v1] x86, rwsem: provide __down_write_killable
Message-ID: <20160217164156.GU29196@dhcp22.suse.cz>
In-Reply-To: <1454444369-2146-12-git-send-email-mhocko@kernel.org>
References: <1454444369-2146-1-git-send-email-mhocko@kernel.org>
 <1454444369-2146-12-git-send-email-mhocko@kernel.org>

OK, so I have dropped patch 10 and reworked the x86 part to use the same
asm as __down_write uses. Does this look any better? I am not an expert
in inline asm, so the way I am doing it might not be optimal, but at
least the generated code looks sane (no changes for the regular
__down_write).
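As an aside, for readers less familiar with the inline asm: below is a plain C
model of what the shared fast path does. It is illustrative only - userspace
C11 atomics, made-up toy_* names, and the 32-bit count layout implied by the
"adds 0xffff0001" comment - not part of the patch itself.

	#include <stdatomic.h>

	/* Toy model of the rwsem write fast path shared by __down_write and
	 * __down_write_killable. 32-bit layout: low 16 bits = active lockers. */
	#define TOY_ACTIVE_MASK		0x0000ffffL
	#define TOY_ACTIVE_WRITE_BIAS	0xffff0001L

	struct toy_rwsem {
		atomic_long count;
	};

	/* slow_path blocks until the lock is taken; a killable variant may
	 * instead return an error cookie when a fatal signal arrives. */
	static struct toy_rwsem *toy_down_write(struct toy_rwsem *sem,
			struct toy_rwsem *(*slow_path)(struct toy_rwsem *))
	{
		/* the LOCK XADD: atomically add the write bias, fetch the old count */
		long old = atomic_fetch_add(&sem->count, TOY_ACTIVE_WRITE_BIAS);

		/* the TEST/JZ: nobody was active before us, fast path succeeded */
		if ((old & TOY_ACTIVE_MASK) == 0)
			return sem;

		/* contended: fall back to whichever slow path the caller passed in */
		return slow_path(sem);
	}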
---
From d9a24cd6d6eb48602b11df56ecc3ea4e223ac18d Mon Sep 17 00:00:00 2001
From: Michal Hocko
Date: Mon, 1 Feb 2016 18:21:51 +0100
Subject: [PATCH] x86, rwsem: provide __down_write_killable

__down_write_killable uses the same fast path as __down_write, except
that it falls back to the call_rwsem_down_write_failed_killable slow
path and returns -EINTR if killed. To prevent code duplication, extract
the skeleton of __down_write into a helper macro which just takes the
semaphore and the slow path function to be called.

Signed-off-by: Michal Hocko
---
 arch/x86/include/asm/rwsem.h | 41 ++++++++++++++++++++++++++++-------------
 arch/x86/lib/rwsem.S         |  8 ++++++++
 2 files changed, 36 insertions(+), 13 deletions(-)

diff --git a/arch/x86/include/asm/rwsem.h b/arch/x86/include/asm/rwsem.h
index d79a218675bc..4c3d90dbe89a 100644
--- a/arch/x86/include/asm/rwsem.h
+++ b/arch/x86/include/asm/rwsem.h
@@ -99,21 +99,36 @@ static inline int __down_read_trylock(struct rw_semaphore *sem)
 /*
  * lock for writing
  */
+#define ____down_write(sem, slow_path)				\
+({								\
+	long tmp;						\
+	struct rw_semaphore* ret = sem;				\
+	asm volatile("# beginning down_write\n\t"		\
+		     LOCK_PREFIX "  xadd      %1,(%2)\n\t"	\
+		     /* adds 0xffff0001, returns the old value */ \
+		     "  test " __ASM_SEL(%w1,%k1) "," __ASM_SEL(%w1,%k1) "\n\t" \
+		     /* was the active mask 0 before? */\
+		     "  jz        1f\n"				\
+		     "  call " slow_path "\n"			\
+		     "1:\n"					\
+		     "# ending down_write"			\
+		     : "+m" (sem->count), "=d" (tmp), "+a" (ret) \
+		     : "a" (sem), "1" (RWSEM_ACTIVE_WRITE_BIAS) \
+		     : "memory", "cc");				\
+	ret;							\
+})
+
 static inline void __down_write(struct rw_semaphore *sem)
 {
-	long tmp;
-	asm volatile("# beginning down_write\n\t"
-		     LOCK_PREFIX "  xadd      %1,(%2)\n\t"
-		     /* adds 0xffff0001, returns the old value */
-		     "  test " __ASM_SEL(%w1,%k1) "," __ASM_SEL(%w1,%k1) "\n\t"
-		     /* was the active mask 0 before? */
-		     "  jz        1f\n"
-		     "  call call_rwsem_down_write_failed\n"
-		     "1:\n"
-		     "# ending down_write"
-		     : "+m" (sem->count), "=d" (tmp)
-		     : "a" (sem), "1" (RWSEM_ACTIVE_WRITE_BIAS)
-		     : "memory", "cc");
+	____down_write(sem, "call_rwsem_down_write_failed");
+}
+
+static inline int __down_write_killable(struct rw_semaphore *sem)
+{
+	if (IS_ERR(____down_write(sem, "call_rwsem_down_write_failed_killable")))
+		return -EINTR;
+
+	return 0;
 }
 
 /*
diff --git a/arch/x86/lib/rwsem.S b/arch/x86/lib/rwsem.S
index 40027db99140..d1a1397e1fb3 100644
--- a/arch/x86/lib/rwsem.S
+++ b/arch/x86/lib/rwsem.S
@@ -101,6 +101,14 @@ ENTRY(call_rwsem_down_write_failed)
 	ret
 ENDPROC(call_rwsem_down_write_failed)
 
+ENTRY(call_rwsem_down_write_failed_killable)
+	save_common_regs
+	movq %rax,%rdi
+	call rwsem_down_write_failed_killable
+	restore_common_regs
+	ret
+ENDPROC(call_rwsem_down_write_failed_killable)
+
 ENTRY(call_rwsem_wake)
 	/* do nothing if still outstanding active readers */
 	__ASM_HALF_SIZE(dec) %__ASM_HALF_REG(dx)
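
For context, and not part of this patch: the point of returning -EINTR from
__down_write_killable is so that the generic down_write_killable() wrapper
added earlier in the series can propagate the error to its callers. A hedged
sketch of such a caller (the function name and its semaphore are made up):

	#include <linux/rwsem.h>
	#include <linux/errno.h>

	/* Sketch of the kind of caller this enables: the write-lock attempt
	 * can now be aborted by a fatal signal instead of blocking
	 * uninterruptibly. */
	static int example_write_op(struct rw_semaphore *sem)
	{
		/* down_write_killable() ends up in __down_write_killable();
		 * a fatal signal in the slow path surfaces here as -EINTR. */
		if (down_write_killable(sem))
			return -EINTR;

		/* ... writer-exclusive critical section ... */

		up_write(sem);
		return 0;
	}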