From patchwork Sat Feb 19 20:27:37 2011
X-Patchwork-Submitter: Nicolas de Pesloüan
X-Patchwork-Id: 83712
X-Patchwork-Delegate: davem@davemloft.net
Message-ID: <4D6027B9.6050108@gmail.com>
Date: Sat, 19 Feb 2011 21:27:37 +0100
From: Nicolas de Pesloüan
To: Jiri Pirko
CC: Jay Vosburgh, David Miller, kaber@trash.net, eric.dumazet@gmail.com,
    netdev@vger.kernel.org, shemminger@linux-foundation.org, andy@greyhouse.net
Subject: Re: [patch net-next-2.6 V3] net: convert bonding to use rx_handler
In-Reply-To: <20110219134645.GF2782@psychotron.redhat.com>
List-ID: netdev@vger.kernel.org

On 19/02/2011 14:46, Jiri Pirko wrote:
> Sat, Feb 19, 2011 at 02:18:00PM CET, nicolas.2p.debian@gmail.com wrote:
>> On 19/02/2011 12:28, Jiri Pirko wrote:
>>> Sat, Feb 19, 2011 at 12:08:31PM CET, jpirko@redhat.com wrote:
>>>> Sat, Feb 19, 2011 at 11:56:23AM CET, nicolas.2p.debian@gmail.com wrote:
>>>>> On 19/02/2011 09:05, Jiri Pirko wrote:
>>>>>> This patch converts bonding to use rx_handler. Results in a cleaner
>>>>>> __netif_receive_skb() with far fewer exceptions needed. Also,
>>>>>> bond-specific work is moved into the bond code.
>>>>>>
>>>>>> Signed-off-by: Jiri Pirko
>>>>>>
>>>>>> v1->v2:
>>>>>>         using skb_iif instead of new input_dev to remember original
>>>>>>         device
>>>>>> v2->v3:
>>>>>>         set orig_dev = skb->dev if skb_iif is set
>>>>>>
>>>>>
>>>>> Why do we need to let the rx_handlers call netif_rx() or __netif_receive_skb()?
>>>>>
>>>>> Bonding used to be handled with very little overhead, simply replacing
>>>>> skb->dev with skb->dev->master. Time has passed and we eventually
>>>>> added a lot of special processing for bonding into __netif_receive_skb(),
>>>>> but the overhead remained very light.
>>>>>
>>>>> Calling netif_rx() (or __netif_receive_skb()) to allow nesting would
>>>>> probably lead to some overhead.
>>>>>
>>>>> Can't we, instead, loop inside __netif_receive_skb(), and deliver
>>>>> whatever needs to be delivered, to whomever needs it, inside the loop?
>>>>>
>>>>> 	rx_handler = rcu_dereference(skb->dev->rx_handler);
>>>>> 	while (rx_handler) {
>>>>> 		/* ... */
>>>>> 		orig_dev = skb->dev;
>>>>> 		skb = rx_handler(skb);
>>>>> 		/* ... */
>>>>> 		rx_handler = (skb->dev != orig_dev) ?
>>>>> 			rcu_dereference(skb->dev->rx_handler) : NULL;
>>>>> 	}
>>>>>
>>>>> This would reduce the overhead, while still allowing nesting: vlan on
>>>>> top of bonding, bridge on top of bonding, ...
>>>>
>>>> I see your point. Makes sense to me. But the loop would have to include
>>>> at least the processing of ptype_all too. I'm going to cook a follow-up
>>>> patch.
>>>>
>>>
>>> DRAFT (doesn't modify rx_handlers):
>>>
>>> diff --git a/net/core/dev.c b/net/core/dev.c
>>> index 4ebf7fe..e5dba47 100644
>>> --- a/net/core/dev.c
>>> +++ b/net/core/dev.c
>>> @@ -3115,6 +3115,7 @@ static int __netif_receive_skb(struct sk_buff *skb)
>>>  {
>>>  	struct packet_type *ptype, *pt_prev;
>>>  	rx_handler_func_t *rx_handler;
>>> +	struct net_device *dev;
>>>  	struct net_device *orig_dev;
>>>  	struct net_device *null_or_dev;
>>>  	int ret = NET_RX_DROP;
>>> @@ -3129,7 +3130,9 @@ static int __netif_receive_skb(struct sk_buff *skb)
>>>  	if (netpoll_receive_skb(skb))
>>>  		return NET_RX_DROP;
>>>
>>> -	__this_cpu_inc(softnet_data.processed);
>>> +	skb->skb_iif = skb->dev->ifindex;
>>> +	orig_dev = skb->dev;
>>
>> orig_dev should be set inside the loop, to reflect the "previously
>> crossed device", while following the path:
>>
>> eth0 -> bond0 -> br0.
>>
>> First step inside the loop:
>>
>> orig_dev = eth0
>> skb->dev = bond0 (at the end of the loop).
>>
>> Second step inside the loop:
>>
>> orig_dev = bond0
>> skb->dev = br0 (at the end of the loop).
>>
>> This would allow for exact match delivery to bond0 if someone binds there.
>>
>>> +
>>>  	skb_reset_network_header(skb);
>>>  	skb_reset_transport_header(skb);
>>>  	skb->mac_len = skb->network_header - skb->mac_header;
>>> @@ -3138,12 +3141,9 @@ static int __netif_receive_skb(struct sk_buff *skb)
>>>
>>>  	rcu_read_lock();
>>>
>>> -	if (!skb->skb_iif) {
>>> -		skb->skb_iif = skb->dev->ifindex;
>>> -		orig_dev = skb->dev;
>>> -	} else {
>>> -		orig_dev = dev_get_by_index_rcu(dev_net(skb->dev), skb->skb_iif);
>>> -	}
>>
>> I like the fact that it removes the above part.
>>
>>> +another_round:
>>> +	__this_cpu_inc(softnet_data.processed);
>>> +	dev = skb->dev;
>>>
>>>  #ifdef CONFIG_NET_CLS_ACT
>>>  	if (skb->tc_verd & TC_NCLS) {
>>> @@ -3153,7 +3153,7 @@ static int __netif_receive_skb(struct sk_buff *skb)
>>>  #endif
>>>
>>>  	list_for_each_entry_rcu(ptype, &ptype_all, list) {
>>> -		if (!ptype->dev || ptype->dev == skb->dev) {
>>> +		if (!ptype->dev || ptype->dev == dev) {
>>>  			if (pt_prev)
>>>  				ret = deliver_skb(skb, pt_prev, orig_dev);
>>>  			pt_prev = ptype;
>>
>> Inside the loop, we should only do exact match delivery, for
>> &ptype_all and for &ptype_base[ntohs(type) & PTYPE_HASH_MASK]:
>>
>> 	list_for_each_entry_rcu(ptype, &ptype_all, list) {
>> -		if (!ptype->dev || ptype->dev == dev) {
>> +		if (ptype->dev == dev) {
>> 			if (pt_prev)
>> 				ret = deliver_skb(skb, pt_prev, orig_dev);
>> 			pt_prev = ptype;
>> 		}
>> 	}
>>
>> 	list_for_each_entry_rcu(ptype,
>> 			&ptype_base[ntohs(type) & PTYPE_HASH_MASK], list) {
>> 		if (ptype->type == type &&
>> -		    (ptype->dev == null_or_dev || ptype->dev == skb->dev)) {
>> +		    (ptype->dev == skb->dev)) {
>> 			if (pt_prev)
>> 				ret = deliver_skb(skb, pt_prev, orig_dev);
>> 			pt_prev = ptype;
>> 		}
>> 	}
>>
>> After leaving the loop, we can do wildcard delivery, if skb is not NULL.
>>
>> 	list_for_each_entry_rcu(ptype, &ptype_all, list) {
>> -		if (!ptype->dev || ptype->dev == dev) {
>> +		if (!ptype->dev) {
>> 			if (pt_prev)
>> 				ret = deliver_skb(skb, pt_prev, orig_dev);
>> 			pt_prev = ptype;
>> 		}
>> 	}
>>
>> 	list_for_each_entry_rcu(ptype,
>> 			&ptype_base[ntohs(type) & PTYPE_HASH_MASK], list) {
>> -		if (ptype->type == type &&
>> -		    (ptype->dev == null_or_dev || ptype->dev == skb->dev)) {
>> +		if (ptype->type == type && !ptype->dev) {
>> 			if (pt_prev)
>> 				ret = deliver_skb(skb, pt_prev, orig_dev);
>> 			pt_prev = ptype;
>> 		}
>> 	}
>>
>> This would reduce the number of tests inside the
>> list_for_each_entry_rcu() loops. And because we match only
>> ptype->dev == dev inside the loop and !ptype->dev outside the loop,
>> this should avoid duplicate delivery.
>
> Would you care to put this into a patch so I can see the whole picture?
> Thanks.

Here is what I have in mind. It is based on your previous DRAFT patch and
doesn't modify the rx_handlers yet.

Only compile tested!!

I don't know whether every piece is in the right place. I wonder what to do
with the CONFIG_NET_CLS_ACT part, which currently sits between the ptype_all
and ptype_base processing.

Anyway, the general idea is there.

	Nicolas.

 net/core/dev.c |   70 ++++++++++++++++++++++++++++++++++++++++++++++++--------
 1 files changed, 60 insertions(+), 10 deletions(-)

diff --git a/net/core/dev.c b/net/core/dev.c
index e5dba47..7e007a9 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -3117,7 +3117,6 @@ static int __netif_receive_skb(struct sk_buff *skb)
 	rx_handler_func_t *rx_handler;
 	struct net_device *dev;
 	struct net_device *orig_dev;
-	struct net_device *null_or_dev;
 	int ret = NET_RX_DROP;
 	__be16 type;

@@ -3130,9 +3129,6 @@ static int __netif_receive_skb(struct sk_buff *skb)
 	if (netpoll_receive_skb(skb))
 		return NET_RX_DROP;

-	skb->skb_iif = skb->dev->ifindex;
-	orig_dev = skb->dev;
-
 	skb_reset_network_header(skb);
 	skb_reset_transport_header(skb);
 	skb->mac_len = skb->network_header - skb->mac_header;
@@ -3143,6 +3139,8 @@ static int __netif_receive_skb(struct sk_buff *skb)

 another_round:
 	__this_cpu_inc(softnet_data.processed);
+	skb->skb_iif = skb->dev->ifindex;
+	orig_dev = skb->dev;
 	dev = skb->dev;

 #ifdef CONFIG_NET_CLS_ACT
@@ -3152,8 +3150,13 @@ another_round:
 	}
 #endif

+	/*
+	 * Deliver to ptype_all protocol handlers that match current dev.
+	 * This happens before rx_handler is given a chance to change skb->dev.
+	 */
+
 	list_for_each_entry_rcu(ptype, &ptype_all, list) {
-		if (!ptype->dev || ptype->dev == dev) {
+		if (ptype->dev == dev) {
 			if (pt_prev)
 				ret = deliver_skb(skb, pt_prev, orig_dev);
 			pt_prev = ptype;
@@ -3167,6 +3170,31 @@ another_round:
 ncls:
 #endif

+	/*
+	 * Deliver to ptype_base protocol handlers that match current dev.
+	 * This happens before rx_handler is given a chance to change skb->dev.
+	 */
+
+	type = skb->protocol;
+	list_for_each_entry_rcu(ptype,
+			&ptype_base[ntohs(type) & PTYPE_HASH_MASK], list) {
+		if (ptype->type == type && ptype->dev == skb->dev) {
+			if (pt_prev)
+				ret = deliver_skb(skb, pt_prev, orig_dev);
+			pt_prev = ptype;
+		}
+	}
+
+	/*
+	 * Call rx_handler for the current device.
+	 * If rx_handler returns NULL, skip wildcard protocol handler delivery.
+	 * Else, if skb->dev changed, restart the whole delivery process, to
+	 * allow for device nesting.
+	 *
+	 * Warning:
+	 * rx_handlers must kfree_skb(skb) if they return NULL.
+	 */
+
 	rx_handler = rcu_dereference(dev->rx_handler);
 	if (rx_handler) {
 		if (pt_prev) {
@@ -3176,10 +3204,15 @@ ncls:
 		skb = rx_handler(skb);
 		if (!skb)
 			goto out;
-		if (dev != skb->dev)
+		if (skb->dev != dev)
 			goto another_round;
 	}

+	/*
+	 * FIXME: The part below should use rx_handler instead of being
+	 * hard-coded here.
+	 */
+
 	if (vlan_tx_tag_present(skb)) {
 		if (pt_prev) {
 			ret = deliver_skb(skb, pt_prev, orig_dev);
@@ -3192,16 +3225,33 @@ ncls:
 		goto out;
 	}

+	/*
+	 * FIXME: Can't this be moved into the rx_handler for bonding,
+	 * or into a future rx_handler for vlan?
+	 */
+
 	vlan_on_bond_hook(skb);

-	/* deliver only exact match when indicated */
-	null_or_dev = skb->deliver_no_wcard ? skb->dev : NULL;
+	/*
+	 * Deliver to wildcard ptype_all protocol handlers.
+	 */
+
+	list_for_each_entry_rcu(ptype, &ptype_all, list) {
+		if (!ptype->dev) {
+			if (pt_prev)
+				ret = deliver_skb(skb, pt_prev, orig_dev);
+			pt_prev = ptype;
+		}
+	}
+
+	/*
+	 * Deliver to wildcard ptype_base protocol handlers.
+	 */

 	type = skb->protocol;
 	list_for_each_entry_rcu(ptype,
 			&ptype_base[ntohs(type) & PTYPE_HASH_MASK], list) {
-		if (ptype->type == type &&
-		    (ptype->dev == null_or_dev || ptype->dev == skb->dev)) {
+		if (ptype->type == type && !ptype->dev) {
 			if (pt_prev)
 				ret = deliver_skb(skb, pt_prev, orig_dev);
 			pt_prev = ptype;
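
[For illustration only, not part of the thread: a minimal sketch of an
rx_handler obeying the contract described in the comments above, assuming the
2.6.38-era API in which a handler has the signature
"struct sk_buff *handler(struct sk_buff *skb)" and is attached to a device
with netdev_rx_handler_register(). The handler name, and using dev->master as
the upper device, are hypothetical choices made for the example.]

	#include <linux/netdevice.h>
	#include <linux/skbuff.h>

	/*
	 * Hypothetical handler; it would be registered on a slave device with:
	 *
	 *	err = netdev_rx_handler_register(slave_dev,
	 *					 example_rx_handler, NULL);
	 */
	static struct sk_buff *example_rx_handler(struct sk_buff *skb)
	{
		struct net_device *master = skb->dev->master;

		/*
		 * Consume the frame: per the contract above, a handler that
		 * returns NULL must kfree_skb() the skb itself, and
		 * __netif_receive_skb() then skips wildcard delivery.
		 */
		if (!master) {
			kfree_skb(skb);
			return NULL;
		}

		/*
		 * Re-parent the skb onto the upper device and return it.
		 * Because skb->dev != dev afterwards, __netif_receive_skb()
		 * jumps back to another_round: and re-runs exact-match
		 * delivery for the new device, which is what allows nesting
		 * (vlan on top of bonding, bridge on top of bonding, ...).
		 */
		skb->dev = master;
		return skb;
	}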