From patchwork Tue Dec 15 11:21:00 2015
X-Patchwork-Submitter: Ming Lei
X-Patchwork-Id: 556929
X-Patchwork-Delegate: davem@davemloft.net
From: Ming Lei <tom.leiming@gmail.com>
To: linux-kernel@vger.kernel.org, Alexei Starovoitov
Cc: "David S. Miller", netdev@vger.kernel.org, Ming Lei
Subject: [PATCH 2/6] hlist: prepare for supporting bit spinlock
Date: Tue, 15 Dec 2015 19:21:00 +0800
Message-Id: <1450178464-27721-3-git-send-email-tom.leiming@gmail.com>
In-Reply-To: <1450178464-27721-1-git-send-email-tom.leiming@gmail.com>
References: <1450178464-27721-1-git-send-email-tom.leiming@gmail.com>
List-ID: X-Mailing-List: netdev@vger.kernel.org

hlist_head is often used to implement the bucket header of a hash
table, and a lock is usually needed for adding and deleting nodes in
the bucket list. Per-bucket spinlocks can consume a lot of memory, so
this patch tries to use the first bit of hlist_head->first as a bit
lock for this purpose.

A bit spinlock is not as efficient as a spinlock, but contention
should not be high when operating on a per-bucket hlist, so a bit
spinlock should be good enough here.

Signed-off-by: Ming Lei <tom.leiming@gmail.com>
---
 include/linux/rculist.h | 55 +++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 55 insertions(+)

diff --git a/include/linux/rculist.h b/include/linux/rculist.h
index 14ec165..9fc394a 100644
--- a/include/linux/rculist.h
+++ b/include/linux/rculist.h
@@ -435,6 +435,61 @@ static inline void hlist_replace_rcu(struct hlist_node *old,
 #define hlist_next_rcu(node)	(*((struct hlist_node __rcu **)(&(node)->next)))
 #define hlist_pprev_rcu(node)	(*((struct hlist_node __rcu **)((node)->pprev)))
 
+/*
+ * This bit of hlist_head->first can be used as a per-bucket
+ * bit spinlock in the hash table use case, and the lock bit
+ * must always be preserved when an hlist_node is added,
+ * deleted or otherwise updated.
+ *
+ * When the lock bit is used, only the _lock versions of the
+ * hlist helpers may be used to operate on the hlist.
+ */
+#define HLIST_LOCK_BIT		0
+#define HLIST_LOCK_MASK		(1UL << HLIST_LOCK_BIT)
+#define hlist_get_first(h)	\
+	(struct hlist_node *)((unsigned long)(h)->first & ~HLIST_LOCK_MASK)
+#define hlist_get_lock(h)	((unsigned long)(h)->first & HLIST_LOCK_MASK)
+#define hlist_make_1st_node(h, n) \
+	(struct hlist_node *)((unsigned long)(n) | hlist_get_lock(h))
+
+static inline struct hlist_head *hlist_get_head_lock(
+		struct hlist_head *head, struct hlist_head *new_head)
+{
+	new_head->first = hlist_get_first(head);
+	return new_head;
+}
+
+static inline void __hlist_del_lock(struct hlist_node *n)
+{
+	struct hlist_node *next = n->next;
+	struct hlist_node **pprev = n->pprev;
+
+	WRITE_ONCE(*pprev, (unsigned long)next |
+			((unsigned long)*pprev & HLIST_LOCK_MASK));
+	if (next)
+		next->pprev = pprev;
+}
+
+/* The bit lock should be held before calling this helper */
+static inline void hlist_del_rcu_lock(struct hlist_node *n)
+{
+	__hlist_del_lock(n);
+	n->pprev = LIST_POISON2;
+}
+
+/* The bit lock should be held before calling this helper */
+static inline void hlist_add_head_rcu_lock(struct hlist_node *n,
+					   struct hlist_head *h)
+{
+	struct hlist_node *first = hlist_get_first(h);
+
+	n->next = first;
+	n->pprev = &h->first;
+	rcu_assign_pointer(hlist_first_rcu(h), hlist_make_1st_node(h, n));
+	if (first)
+		first->pprev = &n->next;
+}
+
 /**
  * hlist_add_head_rcu
  * @n: the element to add to the hash list.