From patchwork Wed Mar 27 19:29:38 2013
X-Patchwork-Submitter: Jeff Layton
X-Patchwork-Id: 231815
X-Patchwork-Delegate: davem@davemloft.net
From: Jeff Layton
To: akpm@linux-foundation.org
Cc: linux-kernel@vger.kernel.org, tj@kernel.org, Vlad Yasevich,
	Sridhar Samudrala, Neil Horman, "David S. Miller",
	linux-sctp@vger.kernel.org, netdev@vger.kernel.org
Subject: [PATCH v2 6/6] sctp: convert sctp_assoc_set_id to use idr_alloc_cyclic
Date: Wed, 27 Mar 2013 15:29:38 -0400
Message-Id: <1364412578-7462-7-git-send-email-jlayton@redhat.com>
In-Reply-To: <1364412578-7462-1-git-send-email-jlayton@redhat.com>
References: <1364412578-7462-1-git-send-email-jlayton@redhat.com>
X-Mailing-List: netdev@vger.kernel.org

(Note: compile-tested only)

Signed-off-by: Jeff Layton
Cc: Vlad Yasevich
Cc: Sridhar Samudrala
Cc: Neil Horman
Cc: "David S. Miller"
Cc: linux-sctp@vger.kernel.org
Cc: netdev@vger.kernel.org
Acked-by: Neil Horman
---
 net/sctp/associola.c | 16 ++--------------
 1 file changed, 2 insertions(+), 14 deletions(-)

diff --git a/net/sctp/associola.c b/net/sctp/associola.c
index d2709e2..fa261a3 100644
--- a/net/sctp/associola.c
+++ b/net/sctp/associola.c
@@ -66,13 +66,6 @@ static void sctp_assoc_bh_rcv(struct work_struct *work);
 static void sctp_assoc_free_asconf_acks(struct sctp_association *asoc);
 static void sctp_assoc_free_asconf_queue(struct sctp_association *asoc);
-/* Keep track of the new idr low so that we don't re-use association id
- * numbers too fast. It is protected by they idr spin lock is in the
- * range of 1 - INT_MAX.
- */
-static u32 idr_low = 1;
-
-
 /* 1st Level Abstractions. */
 
 /* Initialize a new association from provided memory.
  */
@@ -1601,13 +1594,8 @@ int sctp_assoc_set_id(struct sctp_association *asoc, gfp_t gfp)
 	if (preload)
 		idr_preload(gfp);
 	spin_lock_bh(&sctp_assocs_id_lock);
-	/* 0 is not a valid id, idr_low is always >= 1 */
-	ret = idr_alloc(&sctp_assocs_id, asoc, idr_low, 0, GFP_NOWAIT);
-	if (ret >= 0) {
-		idr_low = ret + 1;
-		if (idr_low == INT_MAX)
-			idr_low = 1;
-	}
+	/* 0 is not a valid assoc_id, must be >= 1 */
+	ret = idr_alloc_cyclic(&sctp_assocs_id, asoc, 1, 0, GFP_NOWAIT);
 	spin_unlock_bh(&sctp_assocs_id_lock);
 	if (preload)
 		idr_preload_end();
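For reviewers unfamiliar with the helper: idr_alloc_cyclic() searches for a free id starting from an internal cursor rather than from the range minimum, so a just-released id is not handed back out immediately -- exactly the behavior the open-coded idr_low logic above was emulating. Below is a minimal userspace sketch of that cyclic strategy; the alloc_cyclic()/release() helpers and the fixed-size table are hypothetical illustrations, not the kernel idr API (which grows dynamically and handles locking separately):

```c
#include <stddef.h>

#define MAX_IDS 8	/* tiny fixed table for illustration only */

static void *table[MAX_IDS];
static int next_id = 1;	/* cursor: where the next search begins */

/* Allocate the lowest free id at or after the cursor, wrapping back to
 * 'start' when the top of the range is reached.  Returns the id, or -1
 * if no id is free.
 */
static int alloc_cyclic(void *ptr, int start)
{
	int i, id;
	int span = MAX_IDS - start;	/* usable ids: start .. MAX_IDS-1 */

	for (i = 0; i < span; i++) {
		id = start + (next_id - start + i) % span;
		if (table[id] == NULL) {
			table[id] = ptr;
			next_id = id + 1;	/* advance cursor past this id */
			if (next_id >= MAX_IDS)
				next_id = start;
			return id;
		}
	}
	return -1;
}

static void release(int id)
{
	table[id] = NULL;
}
```

Because the cursor keeps advancing, releasing id 1 and then allocating again yields id 4, not 1 -- which is why the idr_low bookkeeping (and its INT_MAX wrap check) can be dropped once the shared helper is used.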