From patchwork Thu Jun 4 06:05:59 2015
From: Scott Feldman <sfeldma@gmail.com>
Date: Wed, 3 Jun 2015 23:05:59 -0700 (PDT)
To: Toshiaki Makita
Cc: roopa, Jamal Hadi Salim, David Miller, Toshiaki Makita, Netdev,
 Jiří Pírko, simon.horman@netronome.com
Subject: Re: [PATCH net-next 5/5] rocker: remove support for legacy VLAN ndo ops
In-Reply-To: <556F209A.6090304@gmail.com>
List-ID: netdev@vger.kernel.org

On Thu, 4 Jun 2015, Toshiaki Makita wrote:

> Bridge's vlan_filtering is handled in master->op->foo(), not in
> port->op->foo().
> Can't we introduce another switchdev handler that performs MASTER operation
> instead of calling SELF operation?
>
> br_afspec()
>   nbp_vlan_add()
>     netdev_switch_port_vlan_add()
>       rocker->ndo_switch_port_vlan_add() <- only used for MASTER operation
>
> I'm wondering why SELF operation (rocker->ndo_bridge_setlink()) does what
> should be done in MASTER operation.
Something like this:

---
diff --git a/net/bridge/br_vlan.c b/net/bridge/br_vlan.c
index 13013fe..df57ede 100644
--- a/net/bridge/br_vlan.c
+++ b/net/bridge/br_vlan.c
@@ -2,6 +2,7 @@
 #include <linux/netdevice.h>
 #include <linux/rtnetlink.h>
 #include <linux/slab.h>
+#include <net/switchdev.h>
 
 #include "br_private.h"
 
@@ -36,6 +37,36 @@ static void __vlan_add_flags(struct net_port_vlans *v, u16 vid, u16 flags)
 		clear_bit(vid, v->untagged_bitmap);
 }
 
+static int __vlan_vid_add(struct net_device *dev, struct net_bridge *br,
+			  u16 vid, u16 flags)
+{
+	const struct net_device_ops *ops = dev->netdev_ops;
+	struct switchdev_obj vlan_obj = {
+		.id = SWITCHDEV_OBJ_PORT_VLAN,
+		.u.vlan = {
+			.flags = flags,
+			.vid_start = vid,
+			.vid_end = vid,
+		},
+	};
+	int err;
+
+	/* If driver uses VLAN ndo ops, use 8021q to install vid
+	 * on device, otherwise try switchdev ops to install vid.
+	 */
+
+	if (ops->ndo_vlan_rx_add_vid) {
+		err = vlan_vid_add(dev, br->vlan_proto, vid);
+	} else {
+		err = switchdev_port_obj_add(dev, &vlan_obj);
+		if (err == -EOPNOTSUPP)
+			err = 0;
+	}
+
+	return err;
+}
+
 static int __vlan_add(struct net_port_vlans *v, u16 vid, u16 flags)
 {
 	struct net_bridge_port *p = NULL;
@@ -62,7 +93,7 @@ static int __vlan_add(struct net_port_vlans *v, u16 vid, u16 flags)
 		 * This ensures tagged traffic enters the bridge when
 		 * promiscuous mode is disabled by br_manage_promisc().
 		 */
-		err = vlan_vid_add(dev, br->vlan_proto, vid);
+		err = __vlan_vid_add(dev, br, vid, flags);
 		if (err)
 			return err;
 	}
@@ -86,6 +117,28 @@ out_filt:
 	return err;
 }
 
+static void __vlan_vid_del(struct net_device *dev, struct net_bridge *br,
+			   u16 vid)
+{
+	const struct net_device_ops *ops = dev->netdev_ops;
+	struct switchdev_obj vlan_obj = {
+		.id = SWITCHDEV_OBJ_PORT_VLAN,
+		.u.vlan = {
+			.vid_start = vid,
+			.vid_end = vid,
+		},
+	};
+
+	/* If driver uses VLAN ndo ops, use 8021q to delete vid
+	 * on device, otherwise try switchdev ops to delete vid.
+	 */
+
+	if (ops->ndo_vlan_rx_kill_vid)
+		vlan_vid_del(dev, br->vlan_proto, vid);
+	else
+		switchdev_port_obj_del(dev, &vlan_obj);
+}
+
 static int __vlan_del(struct net_port_vlans *v, u16 vid)
 {
 	if (!test_bit(vid, v->vlan_bitmap))
@@ -96,7 +149,7 @@ static int __vlan_del(struct net_port_vlans *v, u16 vid)
 
 	if (v->port_idx) {
 		struct net_bridge_port *p = v->parent.port;
-		vlan_vid_del(p->dev, p->br->vlan_proto, vid);
+		__vlan_vid_del(p->dev, p->br, vid);
 	}
 
 	clear_bit(vid, v->vlan_bitmap);