From patchwork Tue Aug 22 08:30:50 2017
X-Patchwork-Submitter: Lukas Wunner
X-Patchwork-Id: 804328
Date: Tue, 22 Aug 2017 10:30:50 +0200
From: Lukas Wunner <lukas@wunner.de>
To: Bart Van Assche, Andrew Morton, Neil Brown, Peter Zijlstra,
	Ingo Molnar, Theodore Ts'o, Borislav Petkov, "H. Peter Anvin",
	Denys Vlasenko
Cc: linus.walleij@linaro.org, agk@redhat.com, phil@raspberrypi.org,
	linux-gpio@vger.kernel.org, m.duckeck@kunbus.de, snitzer@redhat.com,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH 1/4] bitops: Introduce assign_bit()
Message-ID: <20170822083050.GA12241@wunner.de>
References: <5487a5f7d1a4be1bb13e7d1f392281d18c0e935e.1503319573.git.lukas@wunner.de>
 <1503332323.2571.5.camel@wdc.com>
In-Reply-To: <1503332323.2571.5.camel@wdc.com>
User-Agent: Mutt/1.6.1 (2016-04-27)
List-ID: linux-gpio@vger.kernel.org

On Mon, Aug 21, 2017 at 04:18:44PM +0000, Bart Van Assche wrote:
> On Mon, 2017-08-21 at 15:12 +0200, Lukas Wunner wrote:
> > Cc: Bart Van Assche
> > Cc: Alasdair Kergon
> > Cc: Mike Snitzer
> > Signed-off-by: Lukas Wunner
>
> This Cc-list is incomplete.  Previous patches went in through Andrew
> Morton's tree so I think an ack from Andrew Morton is needed before
> this patch can be sent to Linus Torvalds.  Please also Cc other
> frequent contributors to this header file, e.g. Ingo Molnar and
> Peter Zijlstra.  Please also consider to Cc the LKML for this patch
> or even for the entire series.

Fair enough, adding more folks to cc.

Does anyone have objections or comments to the below patch and to
merging it through linux-gpio?  It's part of this series:
https://www.spinics.net/lists/linux-gpio/msg25067.html

Looking at the mnemonics of x86 and arm, I couldn't find one which
would avoid the jump and be faster/shorter than the inline functions
below, so putting this in include/linux/bitops.h (rather than
arch/*/include/asm/) seemed appropriate.  Can anyone imagine doing
the same quicker with inline assembly?

> > +static __always_inline void assign_bit(bool value, long nr,
> > +				       volatile unsigned long *addr)
>
> Why has __always_inline been specified?
> What makes you think that you know better than the compiler whether
> or not these functions should be inlined?

I carried this over from existing functions, see e.g. commit
1a1d48a4a8fd ("linux/bitmap: Force inlining of bitmap weight
functions").

Thanks,

Lukas

-- >8 --
Subject: [PATCH 1/4] bitops: Introduce assign_bit()

A common idiom is to assign a value to a bit with:

    if (value)
        set_bit(nr, addr);
    else
        clear_bit(nr, addr);

Likewise common is the one-line expression variant:

    value ? set_bit(nr, addr) : clear_bit(nr, addr);

Commit 9a8ac3ae682e ("dm mpath: cleanup QUEUE_IF_NO_PATH bit
manipulation by introducing assign_bit()") introduced assign_bit()
to the md subsystem for brevity.

Make it available to others, in particular gpiolib and the upcoming
driver for Maxim MAX3191x industrial serializer chips.

Cc: Bart Van Assche
Cc: Alasdair Kergon
Cc: Mike Snitzer
Signed-off-by: Lukas Wunner
---
 drivers/md/dm-mpath.c  |  8 --------
 include/linux/bitops.h | 24 ++++++++++++++++++++++++
 2 files changed, 24 insertions(+), 8 deletions(-)

diff --git a/drivers/md/dm-mpath.c b/drivers/md/dm-mpath.c
index 0e8ab5bb3575..c79c113b7e7d 100644
--- a/drivers/md/dm-mpath.c
+++ b/drivers/md/dm-mpath.c
@@ -638,14 +638,6 @@ static void process_queued_bios(struct work_struct *work)
 	blk_finish_plug(&plug);
 }
 
-static void assign_bit(bool value, long nr, unsigned long *addr)
-{
-	if (value)
-		set_bit(nr, addr);
-	else
-		clear_bit(nr, addr);
-}
-
 /*
  * If we run out of usable paths, should we queue I/O or error it?
  */
diff --git a/include/linux/bitops.h b/include/linux/bitops.h
index a83c822c35c2..097af36887c0 100644
--- a/include/linux/bitops.h
+++ b/include/linux/bitops.h
@@ -226,6 +226,30 @@ static inline unsigned long __ffs64(u64 word)
 	return __ffs((unsigned long)word);
 }
 
+/**
+ * assign_bit - Assign value to a bit in memory
+ * @value: the value to assign
+ * @nr: the bit to set
+ * @addr: the address to start counting from
+ */
+static __always_inline void assign_bit(bool value, long nr,
+				       volatile unsigned long *addr)
+{
+	if (value)
+		set_bit(nr, addr);
+	else
+		clear_bit(nr, addr);
+}
+
+static __always_inline void __assign_bit(bool value, long nr,
+					 volatile unsigned long *addr)
+{
+	if (value)
+		__set_bit(nr, addr);
+	else
+		__clear_bit(nr, addr);
+}
+
 #ifdef __KERNEL__
 
 #ifndef set_mask_bits