From patchwork Sat Sep 1 11:45:29 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Hauke Mehrtens
X-Patchwork-Id: 964860
X-Patchwork-Delegate: davem@davemloft.net
From: Hauke Mehrtens
To: davem@davemloft.net
Cc: netdev@vger.kernel.org, andrew@lunn.ch, vivien.didelot@savoirfairelinux.com,
    f.fainelli@gmail.com, john@phrozen.org, linux-mips@linux-mips.org,
    dev@kresin.me, hauke.mehrtens@intel.com, devicetree@vger.kernel.org,
    Hauke Mehrtens
Subject: [PATCH v2 net-next 1/7] MIPS: lantiq: dma: add dev pointer
Date: Sat, 1 Sep 2018 13:45:29 +0200
Message-Id: <20180901114535.9070-2-hauke@hauke-m.de>
In-Reply-To: <20180901114535.9070-1-hauke@hauke-m.de>
References: <20180901114535.9070-1-hauke@hauke-m.de>
X-Mailing-List: netdev@vger.kernel.org

dma_zalloc_coherent() now crashes if no dev pointer is given. Add a dev
pointer to the ltq_dma_channel structure and set it in the drivers that
use the channel. This fixes a bug introduced in kernel 4.19.
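[Editor's note: the change above makes the dev field a hard requirement of the
lantiq DMA API. Below is a minimal sketch of the resulting usage, assuming the
same include path as the existing lantiq_etop driver; the "example_" names and
the pdev variable are illustrative only and not part of the patch.]

/* Hedged example: a platform driver filling in the new dev pointer before
 * allocating a DMA channel. ltq_dma_alloc_rx()/ltq_dma_alloc_tx() pass
 * ch->dev on to dma_zalloc_coherent(), so the field must be valid first.
 */
#include <linux/platform_device.h>

#include <xway_dma.h>	/* assumed include path, as used by lantiq_etop.c */

static void example_setup_rx_channel(struct platform_device *pdev,
				     struct ltq_dma_channel *ch)
{
	ch->nr = 0;			/* DMA channel number */
	ch->dev = &pdev->dev;		/* new field added by this patch */
	ltq_dma_alloc_rx(ch);		/* coherent alloc now has a device */
}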
Signed-off-by: Hauke Mehrtens --- arch/mips/include/asm/mach-lantiq/xway/xway_dma.h | 1 + arch/mips/lantiq/xway/dma.c | 4 ++-- drivers/net/ethernet/lantiq_etop.c | 1 + 3 files changed, 4 insertions(+), 2 deletions(-) diff --git a/arch/mips/include/asm/mach-lantiq/xway/xway_dma.h b/arch/mips/include/asm/mach-lantiq/xway/xway_dma.h index 4901833498f7..8441b2698e64 100644 --- a/arch/mips/include/asm/mach-lantiq/xway/xway_dma.h +++ b/arch/mips/include/asm/mach-lantiq/xway/xway_dma.h @@ -40,6 +40,7 @@ struct ltq_dma_channel { int desc; /* the current descriptor */ struct ltq_dma_desc *desc_base; /* the descriptor base */ int phys; /* physical addr */ + struct device *dev; }; enum { diff --git a/arch/mips/lantiq/xway/dma.c b/arch/mips/lantiq/xway/dma.c index 4b9fbb6744ad..664f2f7f55c1 100644 --- a/arch/mips/lantiq/xway/dma.c +++ b/arch/mips/lantiq/xway/dma.c @@ -130,7 +130,7 @@ ltq_dma_alloc(struct ltq_dma_channel *ch) unsigned long flags; ch->desc = 0; - ch->desc_base = dma_zalloc_coherent(NULL, + ch->desc_base = dma_zalloc_coherent(ch->dev, LTQ_DESC_NUM * LTQ_DESC_SIZE, &ch->phys, GFP_ATOMIC); @@ -182,7 +182,7 @@ ltq_dma_free(struct ltq_dma_channel *ch) if (!ch->desc_base) return; ltq_dma_close(ch); - dma_free_coherent(NULL, LTQ_DESC_NUM * LTQ_DESC_SIZE, + dma_free_coherent(ch->dev, LTQ_DESC_NUM * LTQ_DESC_SIZE, ch->desc_base, ch->phys); } EXPORT_SYMBOL_GPL(ltq_dma_free); diff --git a/drivers/net/ethernet/lantiq_etop.c b/drivers/net/ethernet/lantiq_etop.c index 7a637b51c7d2..e08301d833e2 100644 --- a/drivers/net/ethernet/lantiq_etop.c +++ b/drivers/net/ethernet/lantiq_etop.c @@ -274,6 +274,7 @@ ltq_etop_hw_init(struct net_device *dev) struct ltq_etop_chan *ch = &priv->ch[i]; ch->idx = ch->dma.nr = i; + ch->dma.dev = &priv->pdev->dev; if (IS_TX(i)) { ltq_dma_alloc_tx(&ch->dma); From patchwork Sat Sep 1 11:45:30 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Hauke Mehrtens X-Patchwork-Id: 964863 X-Patchwork-Delegate: davem@davemloft.net Return-Path: X-Original-To: patchwork-incoming-netdev@ozlabs.org Delivered-To: patchwork-incoming-netdev@ozlabs.org Authentication-Results: ozlabs.org; spf=none (mailfrom) smtp.mailfrom=vger.kernel.org (client-ip=209.132.180.67; helo=vger.kernel.org; envelope-from=netdev-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=hauke-m.de Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by ozlabs.org (Postfix) with ESMTP id 422ZF874nlz9sBn for ; Sat, 1 Sep 2018 21:46:20 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727408AbeIAP6B (ORCPT ); Sat, 1 Sep 2018 11:58:01 -0400 Received: from mx2.mailbox.org ([80.241.60.215]:49036 "EHLO mx2.mailbox.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726430AbeIAP6B (ORCPT ); Sat, 1 Sep 2018 11:58:01 -0400 Received: from smtp1.mailbox.org (smtp1.mailbox.org [80.241.60.240]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mx2.mailbox.org (Postfix) with ESMTPS id 8C4C742744; Sat, 1 Sep 2018 13:46:14 +0200 (CEST) X-Virus-Scanned: amavisd-new at heinlein-support.de Received: from smtp1.mailbox.org ([80.241.60.240]) by spamfilter01.heinlein-hosting.de (spamfilter01.heinlein-hosting.de [80.241.56.115]) (amavisd-new, port 10030) with ESMTP id 3QHk4kPrWLK0; Sat, 1 Sep 2018 13:46:13 +0200 (CEST) From: Hauke Mehrtens To: davem@davemloft.net Cc: netdev@vger.kernel.org, 
andrew@lunn.ch, vivien.didelot@savoirfairelinux.com, f.fainelli@gmail.com, john@phrozen.org, linux-mips@linux-mips.org, dev@kresin.me, hauke.mehrtens@intel.com, devicetree@vger.kernel.org, Hauke Mehrtens Subject: [PATCH v2 net-next 2/7] MIPS: lantiq: Do not enable IRQs in dma open Date: Sat, 1 Sep 2018 13:45:30 +0200 Message-Id: <20180901114535.9070-3-hauke@hauke-m.de> In-Reply-To: <20180901114535.9070-1-hauke@hauke-m.de> References: <20180901114535.9070-1-hauke@hauke-m.de> Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org When a DMA channel is opened the IRQ should not get activated automatically, this allows it to pull data out manually without the help of interrupts. This is needed for a workaround in the vrx200 Ethernet driver. Signed-off-by: Hauke Mehrtens Acked-by: Paul Burton --- arch/mips/lantiq/xway/dma.c | 1 - drivers/net/ethernet/lantiq_etop.c | 1 + 2 files changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/mips/lantiq/xway/dma.c b/arch/mips/lantiq/xway/dma.c index 664f2f7f55c1..982859f2b2a3 100644 --- a/arch/mips/lantiq/xway/dma.c +++ b/arch/mips/lantiq/xway/dma.c @@ -106,7 +106,6 @@ ltq_dma_open(struct ltq_dma_channel *ch) spin_lock_irqsave(<q_dma_lock, flag); ltq_dma_w32(ch->nr, LTQ_DMA_CS); ltq_dma_w32_mask(0, DMA_CHAN_ON, LTQ_DMA_CCTRL); - ltq_dma_w32_mask(0, 1 << ch->nr, LTQ_DMA_IRNEN); spin_unlock_irqrestore(<q_dma_lock, flag); } EXPORT_SYMBOL_GPL(ltq_dma_open); diff --git a/drivers/net/ethernet/lantiq_etop.c b/drivers/net/ethernet/lantiq_etop.c index e08301d833e2..379db19a303c 100644 --- a/drivers/net/ethernet/lantiq_etop.c +++ b/drivers/net/ethernet/lantiq_etop.c @@ -439,6 +439,7 @@ ltq_etop_open(struct net_device *dev) if (!IS_TX(i) && (!IS_RX(i))) continue; ltq_dma_open(&ch->dma); + ltq_dma_enable_irq(&ch->dma); napi_enable(&ch->napi); } phy_start(dev->phydev); From patchwork Sat Sep 1 12:03:32 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Hauke Mehrtens X-Patchwork-Id: 964868 X-Patchwork-Delegate: davem@davemloft.net Return-Path: X-Original-To: patchwork-incoming-netdev@ozlabs.org Delivered-To: patchwork-incoming-netdev@ozlabs.org Authentication-Results: ozlabs.org; spf=none (mailfrom) smtp.mailfrom=vger.kernel.org (client-ip=209.132.180.67; helo=vger.kernel.org; envelope-from=netdev-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=hauke-m.de Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by ozlabs.org (Postfix) with ESMTP id 422ZdF1kKKz9sBv for ; Sat, 1 Sep 2018 22:03:45 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727271AbeIAQPb (ORCPT ); Sat, 1 Sep 2018 12:15:31 -0400 Received: from mx2.mailbox.org ([80.241.60.215]:16822 "EHLO mx2.mailbox.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726827AbeIAQPb (ORCPT ); Sat, 1 Sep 2018 12:15:31 -0400 Received: from smtp1.mailbox.org (smtp1.mailbox.org [80.241.60.240]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mx2.mailbox.org (Postfix) with ESMTPS id DF65A42716; Sat, 1 Sep 2018 14:03:39 +0200 (CEST) X-Virus-Scanned: amavisd-new at heinlein-support.de Received: from smtp1.mailbox.org ([80.241.60.240]) by hefe.heinlein-support.de (hefe.heinlein-support.de [91.198.250.172]) (amavisd-new, port 10030) with ESMTP id ot4qhyfaIS0V; Sat, 1 Sep 2018 14:03:38 +0200 (CEST) From: Hauke Mehrtens To: 
davem@davemloft.net Cc: netdev@vger.kernel.org, andrew@lunn.ch, vivien.didelot@savoirfairelinux.com, f.fainelli@gmail.com, john@phrozen.org, linux-mips@linux-mips.org, dev@kresin.me, hauke.mehrtens@intel.com, devicetree@vger.kernel.org Subject: [PATCH v2 net-next 3/7] net: dsa: Add Lantiq / Intel GSWIP tag support Date: Sat, 1 Sep 2018 14:03:32 +0200 Message-Id: <20180901120332.9792-1-hauke@hauke-m.de> In-Reply-To: <20180901114535.9070-1-hauke@hauke-m.de> References: <20180901114535.9070-1-hauke@hauke-m.de> Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org This handles the tag added by the PMAC on the VRX200 SoC line. The GSWIP uses internally a GSWIP special tag which is located after the Ethernet header. The PMAC which connects the GSWIP to the CPU converts this special tag used by the GSWIP into the PMAC special tag which is added in front of the Ethernet header. This was tested with GSWIP 2.1 found in the VRX200 SoCs, other GSWIP versions use slightly different PMAC special tags. Signed-off-by: Hauke Mehrtens Reviewed-by: Andrew Lunn Reviewed-by: Florian Fainelli --- MAINTAINERS | 6 +++ include/net/dsa.h | 1 + net/dsa/Kconfig | 3 ++ net/dsa/Makefile | 1 + net/dsa/dsa.c | 3 ++ net/dsa/dsa_priv.h | 3 ++ net/dsa/tag_gswip.c | 107 ++++++++++++++++++++++++++++++++++++++++++++++++++++ 7 files changed, 124 insertions(+) create mode 100644 net/dsa/tag_gswip.c diff --git a/MAINTAINERS b/MAINTAINERS index a5b256b25905..4b2ee65f6086 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -8166,6 +8166,12 @@ S: Maintained F: net/l3mdev F: include/net/l3mdev.h +LANTIQ / INTEL Ethernet drivers +M: Hauke Mehrtens +L: netdev@vger.kernel.org +S: Maintained +F: net/dsa/tag_gswip.c + LANTIQ MIPS ARCHITECTURE M: John Crispin L: linux-mips@linux-mips.org diff --git a/include/net/dsa.h b/include/net/dsa.h index 461e8a7661b7..23690c44e167 100644 --- a/include/net/dsa.h +++ b/include/net/dsa.h @@ -35,6 +35,7 @@ enum dsa_tag_protocol { DSA_TAG_PROTO_BRCM_PREPEND, DSA_TAG_PROTO_DSA, DSA_TAG_PROTO_EDSA, + DSA_TAG_PROTO_GSWIP, DSA_TAG_PROTO_KSZ, DSA_TAG_PROTO_LAN9303, DSA_TAG_PROTO_MTK, diff --git a/net/dsa/Kconfig b/net/dsa/Kconfig index 4183e4ba27a5..48c41918fb35 100644 --- a/net/dsa/Kconfig +++ b/net/dsa/Kconfig @@ -38,6 +38,9 @@ config NET_DSA_TAG_DSA config NET_DSA_TAG_EDSA bool +config NET_DSA_TAG_GSWIP + bool + config NET_DSA_TAG_KSZ bool diff --git a/net/dsa/Makefile b/net/dsa/Makefile index 9e4d3536f977..6e721f7a2947 100644 --- a/net/dsa/Makefile +++ b/net/dsa/Makefile @@ -9,6 +9,7 @@ dsa_core-$(CONFIG_NET_DSA_TAG_BRCM) += tag_brcm.o dsa_core-$(CONFIG_NET_DSA_TAG_BRCM_PREPEND) += tag_brcm.o dsa_core-$(CONFIG_NET_DSA_TAG_DSA) += tag_dsa.o dsa_core-$(CONFIG_NET_DSA_TAG_EDSA) += tag_edsa.o +dsa_core-$(CONFIG_NET_DSA_TAG_GSWIP) += tag_gswip.o dsa_core-$(CONFIG_NET_DSA_TAG_KSZ) += tag_ksz.o dsa_core-$(CONFIG_NET_DSA_TAG_LAN9303) += tag_lan9303.o dsa_core-$(CONFIG_NET_DSA_TAG_MTK) += tag_mtk.o diff --git a/net/dsa/dsa.c b/net/dsa/dsa.c index e63c554e0623..81212109c507 100644 --- a/net/dsa/dsa.c +++ b/net/dsa/dsa.c @@ -54,6 +54,9 @@ const struct dsa_device_ops *dsa_device_ops[DSA_TAG_LAST] = { #ifdef CONFIG_NET_DSA_TAG_EDSA [DSA_TAG_PROTO_EDSA] = &edsa_netdev_ops, #endif +#ifdef CONFIG_NET_DSA_TAG_GSWIP + [DSA_TAG_PROTO_GSWIP] = &gswip_netdev_ops, +#endif #ifdef CONFIG_NET_DSA_TAG_KSZ [DSA_TAG_PROTO_KSZ] = &ksz_netdev_ops, #endif diff --git a/net/dsa/dsa_priv.h b/net/dsa/dsa_priv.h index 3964c6f7a7c0..824ca07a30aa 100644 --- a/net/dsa/dsa_priv.h +++ b/net/dsa/dsa_priv.h 
@@ -205,6 +205,9 @@ extern const struct dsa_device_ops dsa_netdev_ops; /* tag_edsa.c */ extern const struct dsa_device_ops edsa_netdev_ops; +/* tag_gswip.c */ +extern const struct dsa_device_ops gswip_netdev_ops; + /* tag_ksz.c */ extern const struct dsa_device_ops ksz_netdev_ops; diff --git a/net/dsa/tag_gswip.c b/net/dsa/tag_gswip.c new file mode 100644 index 000000000000..50c115fc924a --- /dev/null +++ b/net/dsa/tag_gswip.c @@ -0,0 +1,107 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Intel / Lantiq GSWIP V2.0 PMAC tag support + * + * Copyright (C) 2017 - 2018 Hauke Mehrtens + */ + +#include +#include +#include +#include + +#include "dsa_priv.h" + +#define GSWIP_TX_HEADER_LEN 4 + +/* special tag in TX path header */ +/* Byte 0 */ +#define GSWIP_TX_SLPID_SHIFT 0 /* source port ID */ +#define GSWIP_TX_SLPID_CPU 2 +#define GSWIP_TX_SLPID_APP1 3 +#define GSWIP_TX_SLPID_APP2 4 +#define GSWIP_TX_SLPID_APP3 5 +#define GSWIP_TX_SLPID_APP4 6 +#define GSWIP_TX_SLPID_APP5 7 + +/* Byte 1 */ +#define GSWIP_TX_CRCGEN_DIS BIT(7) +#define GSWIP_TX_DPID_SHIFT 0 /* destination group ID */ +#define GSWIP_TX_DPID_ELAN 0 +#define GSWIP_TX_DPID_EWAN 1 +#define GSWIP_TX_DPID_CPU 2 +#define GSWIP_TX_DPID_APP1 3 +#define GSWIP_TX_DPID_APP2 4 +#define GSWIP_TX_DPID_APP3 5 +#define GSWIP_TX_DPID_APP4 6 +#define GSWIP_TX_DPID_APP5 7 + +/* Byte 2 */ +#define GSWIP_TX_PORT_MAP_EN BIT(7) +#define GSWIP_TX_PORT_MAP_SEL BIT(6) +#define GSWIP_TX_LRN_DIS BIT(5) +#define GSWIP_TX_CLASS_EN BIT(4) +#define GSWIP_TX_CLASS_SHIFT 0 +#define GSWIP_TX_CLASS_MASK GENMASK(3, 0) + +/* Byte 3 */ +#define GSWIP_TX_DPID_EN BIT(0) +#define GSWIP_TX_PORT_MAP_SHIFT 1 +#define GSWIP_TX_PORT_MAP_MASK GENMASK(6, 1) + +#define GSWIP_RX_HEADER_LEN 8 + +/* special tag in RX path header */ +/* Byte 7 */ +#define GSWIP_RX_SPPID_SHIFT 4 +#define GSWIP_RX_SPPID_MASK GENMASK(6, 4) + +static struct sk_buff *gswip_tag_xmit(struct sk_buff *skb, + struct net_device *dev) +{ + struct dsa_port *dp = dsa_slave_to_port(dev); + int err; + u8 *gswip_tag; + + err = skb_cow_head(skb, GSWIP_TX_HEADER_LEN); + if (err) + return NULL; + + skb_push(skb, GSWIP_TX_HEADER_LEN); + + gswip_tag = skb->data; + gswip_tag[0] = GSWIP_TX_SLPID_CPU; + gswip_tag[1] = GSWIP_TX_DPID_ELAN; + gswip_tag[2] = GSWIP_TX_PORT_MAP_EN | GSWIP_TX_PORT_MAP_SEL; + gswip_tag[3] = BIT(dp->index + GSWIP_TX_PORT_MAP_SHIFT) & GSWIP_TX_PORT_MAP_MASK; + gswip_tag[3] |= GSWIP_TX_DPID_EN; + + return skb; +} + +static struct sk_buff *gswip_tag_rcv(struct sk_buff *skb, + struct net_device *dev, + struct packet_type *pt) +{ + int port; + u8 *gswip_tag; + + if (unlikely(!pskb_may_pull(skb, GSWIP_RX_HEADER_LEN))) + return NULL; + + gswip_tag = skb->data - ETH_HLEN; + skb_pull_rcsum(skb, GSWIP_RX_HEADER_LEN); + + /* Get source port information */ + port = (gswip_tag[7] & GSWIP_RX_SPPID_MASK) >> GSWIP_RX_SPPID_SHIFT; + skb->dev = dsa_master_find_slave(dev, 0, port); + if (!skb->dev) + return NULL; + + return skb; +} + +const struct dsa_device_ops gswip_netdev_ops = { + .xmit = gswip_tag_xmit, + .rcv = gswip_tag_rcv, +}; From patchwork Sat Sep 1 12:04:07 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Hauke Mehrtens X-Patchwork-Id: 964870 X-Patchwork-Delegate: davem@davemloft.net Return-Path: X-Original-To: patchwork-incoming-netdev@ozlabs.org Delivered-To: patchwork-incoming-netdev@ozlabs.org Authentication-Results: ozlabs.org; spf=none (mailfrom) smtp.mailfrom=vger.kernel.org (client-ip=209.132.180.67; helo=vger.kernel.org; 
envelope-from=netdev-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=hauke-m.de Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by ozlabs.org (Postfix) with ESMTP id 422Zdx2Mslz9sBJ for ; Sat, 1 Sep 2018 22:04:21 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727333AbeIAQQI (ORCPT ); Sat, 1 Sep 2018 12:16:08 -0400 Received: from mx1.mailbox.org ([80.241.60.212]:38350 "EHLO mx1.mailbox.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726827AbeIAQQH (ORCPT ); Sat, 1 Sep 2018 12:16:07 -0400 Received: from smtp2.mailbox.org (smtp2.mailbox.org [80.241.60.241]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mx1.mailbox.org (Postfix) with ESMTPS id 6F87A48FF1; Sat, 1 Sep 2018 14:04:17 +0200 (CEST) X-Virus-Scanned: amavisd-new at heinlein-support.de Received: from smtp2.mailbox.org ([80.241.60.241]) by spamfilter01.heinlein-hosting.de (spamfilter01.heinlein-hosting.de [80.241.56.115]) (amavisd-new, port 10030) with ESMTP id sN6X9ho0V_0n; Sat, 1 Sep 2018 14:04:16 +0200 (CEST) From: Hauke Mehrtens To: davem@davemloft.net Cc: netdev@vger.kernel.org, andrew@lunn.ch, vivien.didelot@savoirfairelinux.com, f.fainelli@gmail.com, john@phrozen.org, linux-mips@linux-mips.org, dev@kresin.me, hauke.mehrtens@intel.com, devicetree@vger.kernel.org Subject: [PATCH v2 net-next 4/7] dt-bindings: net: Add lantiq, xrx200-net DT bindings Date: Sat, 1 Sep 2018 14:04:07 +0200 Message-Id: <20180901120407.9912-1-hauke@hauke-m.de> In-Reply-To: <20180901114535.9070-1-hauke@hauke-m.de> References: <20180901114535.9070-1-hauke@hauke-m.de> Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org This adds the binding for the PMAC core between the CPU and the GSWIP switch found on the xrx200 / VR9 Lantiq / Intel SoC. Signed-off-by: Hauke Mehrtens Cc: devicetree@vger.kernel.org Reviewed-by: Florian Fainelli --- .../devicetree/bindings/net/lantiq,xrx200-net.txt | 21 +++++++++++++++++++++ 1 file changed, 21 insertions(+) create mode 100644 Documentation/devicetree/bindings/net/lantiq,xrx200-net.txt diff --git a/Documentation/devicetree/bindings/net/lantiq,xrx200-net.txt b/Documentation/devicetree/bindings/net/lantiq,xrx200-net.txt new file mode 100644 index 000000000000..8a2fe5200cdc --- /dev/null +++ b/Documentation/devicetree/bindings/net/lantiq,xrx200-net.txt @@ -0,0 +1,21 @@ +Lantiq xRX200 GSWIP PMAC Ethernet driver +================================== + +Required properties: + +- compatible : "lantiq,xrx200-net" for the PMAC of the embedded + : GSWIP in the xXR200 +- reg : memory range of the PMAC core inside of the GSWIP core +- interrupts : TX and RX DMA interrupts. Use interrupt-names "tx" for + : the TX interrupt and "rx" for the RX interrupt. 
+ +Example: + +eth0: eth@E10B308 { + #address-cells = <1>; + #size-cells = <0>; + compatible = "lantiq,xrx200-net"; + reg = <0xE10B308 0x30>; + interrupts = <73>, <72>; + interrupt-names = "tx", "rx"; +}; From patchwork Sat Sep 1 12:04:27 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Hauke Mehrtens X-Patchwork-Id: 964872 X-Patchwork-Delegate: davem@davemloft.net Return-Path: X-Original-To: patchwork-incoming-netdev@ozlabs.org Delivered-To: patchwork-incoming-netdev@ozlabs.org Authentication-Results: ozlabs.org; spf=none (mailfrom) smtp.mailfrom=vger.kernel.org (client-ip=209.132.180.67; helo=vger.kernel.org; envelope-from=netdev-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=hauke-m.de Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by ozlabs.org (Postfix) with ESMTP id 422ZfJ6XN5z9sBn for ; Sat, 1 Sep 2018 22:04:40 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727390AbeIAQQ2 (ORCPT ); Sat, 1 Sep 2018 12:16:28 -0400 Received: from mx2.mailbox.org ([80.241.60.215]:18408 "EHLO mx2.mailbox.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726827AbeIAQQ1 (ORCPT ); Sat, 1 Sep 2018 12:16:27 -0400 Received: from smtp1.mailbox.org (smtp1.mailbox.org [80.241.60.240]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by mx2.mailbox.org (Postfix) with ESMTPS id E938941680; Sat, 1 Sep 2018 14:04:35 +0200 (CEST) X-Virus-Scanned: amavisd-new at heinlein-support.de Received: from smtp1.mailbox.org ([80.241.60.240]) by spamfilter02.heinlein-hosting.de (spamfilter02.heinlein-hosting.de [80.241.56.116]) (amavisd-new, port 10030) with ESMTP id jUefShEYE2Ch; Sat, 1 Sep 2018 14:04:33 +0200 (CEST) From: Hauke Mehrtens To: davem@davemloft.net Cc: netdev@vger.kernel.org, andrew@lunn.ch, vivien.didelot@savoirfairelinux.com, f.fainelli@gmail.com, john@phrozen.org, linux-mips@linux-mips.org, dev@kresin.me, hauke.mehrtens@intel.com, devicetree@vger.kernel.org Subject: [PATCH v2 net-next 5/7] net: lantiq: Add Lantiq / Intel VRX200 Ethernet driver Date: Sat, 1 Sep 2018 14:04:27 +0200 Message-Id: <20180901120427.9983-1-hauke@hauke-m.de> In-Reply-To: <20180901114535.9070-1-hauke@hauke-m.de> References: <20180901114535.9070-1-hauke@hauke-m.de> Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org This drives the PMAC between the GSWIP Switch and the CPU in the VRX200 SoC. This is currently only the very basic version of the Ethernet driver. When the DMA channel is activated we receive some packets which were send to the SoC while it was still in U-Boot, these packets have the wrong header. Resetting the IP cores did not work so we read out the extra packets at the beginning and discard them. This also adapts the clock code in sysctrl.c to use the default name of the device node so that the driver gets the correct clock. sysctrl.c should be replaced with a proper common clock driver later. 
Signed-off-by: Hauke Mehrtens --- MAINTAINERS | 1 + arch/mips/lantiq/xway/sysctrl.c | 6 +- drivers/net/ethernet/Kconfig | 7 + drivers/net/ethernet/Makefile | 1 + drivers/net/ethernet/lantiq_xrx200.c | 591 +++++++++++++++++++++++++++++++++++ 5 files changed, 603 insertions(+), 3 deletions(-) create mode 100644 drivers/net/ethernet/lantiq_xrx200.c diff --git a/MAINTAINERS b/MAINTAINERS index 4b2ee65f6086..ffff912d31b5 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -8171,6 +8171,7 @@ M: Hauke Mehrtens L: netdev@vger.kernel.org S: Maintained F: net/dsa/tag_gswip.c +F: drivers/net/ethernet/lantiq_xrx200.c LANTIQ MIPS ARCHITECTURE M: John Crispin diff --git a/arch/mips/lantiq/xway/sysctrl.c b/arch/mips/lantiq/xway/sysctrl.c index e0af39b33e28..eeb89a37e27e 100644 --- a/arch/mips/lantiq/xway/sysctrl.c +++ b/arch/mips/lantiq/xway/sysctrl.c @@ -505,7 +505,7 @@ void __init ltq_soc_init(void) clkdev_add_pmu("1a800000.pcie", "msi", 1, 1, PMU1_PCIE2_MSI); clkdev_add_pmu("1a800000.pcie", "pdi", 1, 1, PMU1_PCIE2_PDI); clkdev_add_pmu("1a800000.pcie", "ctl", 1, 1, PMU1_PCIE2_CTL); - clkdev_add_pmu("1e108000.eth", NULL, 0, 0, PMU_SWITCH | PMU_PPE_DP); + clkdev_add_pmu("1e10b308.eth", NULL, 0, 0, PMU_SWITCH | PMU_PPE_DP); clkdev_add_pmu("1da00000.usif", "NULL", 1, 0, PMU_USIF); clkdev_add_pmu("1e103100.deu", NULL, 1, 0, PMU_DEU); } else if (of_machine_is_compatible("lantiq,ar10")) { @@ -513,7 +513,7 @@ void __init ltq_soc_init(void) ltq_ar10_fpi_hz(), ltq_ar10_pp32_hz()); clkdev_add_pmu("1e101000.usb", "otg", 1, 0, PMU_USB0); clkdev_add_pmu("1e106000.usb", "otg", 1, 0, PMU_USB1); - clkdev_add_pmu("1e108000.eth", NULL, 0, 0, PMU_SWITCH | + clkdev_add_pmu("1e10b308.eth", NULL, 0, 0, PMU_SWITCH | PMU_PPE_DP | PMU_PPE_TC); clkdev_add_pmu("1da00000.usif", "NULL", 1, 0, PMU_USIF); clkdev_add_pmu("1f203020.gphy", NULL, 1, 0, PMU_GPHY); @@ -536,7 +536,7 @@ void __init ltq_soc_init(void) clkdev_add_pmu(NULL, "ahb", 1, 0, PMU_AHBM | PMU_AHBS); clkdev_add_pmu("1da00000.usif", "NULL", 1, 0, PMU_USIF); - clkdev_add_pmu("1e108000.eth", NULL, 0, 0, + clkdev_add_pmu("1e10b308.eth", NULL, 0, 0, PMU_SWITCH | PMU_PPE_DPLUS | PMU_PPE_DPLUM | PMU_PPE_EMA | PMU_PPE_TC | PMU_PPE_SLL01 | PMU_PPE_QSB | PMU_PPE_TOP); diff --git a/drivers/net/ethernet/Kconfig b/drivers/net/ethernet/Kconfig index 6fde68aa13a4..885e00d17807 100644 --- a/drivers/net/ethernet/Kconfig +++ b/drivers/net/ethernet/Kconfig @@ -108,6 +108,13 @@ config LANTIQ_ETOP ---help--- Support for the MII0 inside the Lantiq SoC +config LANTIQ_XRX200 + tristate "Lantiq / Intel xRX200 PMAC network driver" + depends on SOC_TYPE_XWAY + ---help--- + Support for the PMAC of the Gigabit switch (GSWIP) inside the + Lantiq / Intel VRX200 VDSL SoC + source "drivers/net/ethernet/marvell/Kconfig" source "drivers/net/ethernet/mediatek/Kconfig" source "drivers/net/ethernet/mellanox/Kconfig" diff --git a/drivers/net/ethernet/Makefile b/drivers/net/ethernet/Makefile index b45d5f626b59..7b5bf9682066 100644 --- a/drivers/net/ethernet/Makefile +++ b/drivers/net/ethernet/Makefile @@ -49,6 +49,7 @@ obj-$(CONFIG_NET_VENDOR_XSCALE) += xscale/ obj-$(CONFIG_JME) += jme.o obj-$(CONFIG_KORINA) += korina.o obj-$(CONFIG_LANTIQ_ETOP) += lantiq_etop.o +obj-$(CONFIG_LANTIQ_XRX200) += lantiq_xrx200.o obj-$(CONFIG_NET_VENDOR_MARVELL) += marvell/ obj-$(CONFIG_NET_VENDOR_MEDIATEK) += mediatek/ obj-$(CONFIG_NET_VENDOR_MELLANOX) += mellanox/ diff --git a/drivers/net/ethernet/lantiq_xrx200.c b/drivers/net/ethernet/lantiq_xrx200.c new file mode 100644 index 000000000000..328f0253fb4a --- /dev/null +++ 
b/drivers/net/ethernet/lantiq_xrx200.c @@ -0,0 +1,591 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Lantiq / Intel PMAC driver for XRX200 SoCs + * + * Copyright (C) 2010 Lantiq Deutschland + * Copyright (C) 2012 John Crispin + * Copyright (C) 2017 - 2018 Hauke Mehrtens + */ + +#include +#include +#include +#include +#include +#include + +#include +#include + +#include + +/* DMA */ +#define XRX200_DMA_DATA_LEN 0x600 +#define XRX200_DMA_RX 0 +#define XRX200_DMA_TX 1 + +/* cpu port mac */ +#define PMAC_RX_IPG 0x0024 +#define PMAC_RX_IPG_MASK 0xf + +#define PMAC_HD_CTL 0x0000 +/* Add Ethernet header to packets from DMA to PMAC */ +#define PMAC_HD_CTL_ADD BIT(0) +/* Add VLAN tag to Packets from DMA to PMAC */ +#define PMAC_HD_CTL_TAG BIT(1) +/* Add CRC to packets from DMA to PMAC */ +#define PMAC_HD_CTL_AC BIT(2) +/* Add status header to packets from PMAC to DMA */ +#define PMAC_HD_CTL_AS BIT(3) +/* Remove CRC from packets from PMAC to DMA */ +#define PMAC_HD_CTL_RC BIT(4) +/* Remove Layer-2 header from packets from PMAC to DMA */ +#define PMAC_HD_CTL_RL2 BIT(5) +/* Status header is present from DMA to PMAC */ +#define PMAC_HD_CTL_RXSH BIT(6) +/* Add special tag from PMAC to switch */ +#define PMAC_HD_CTL_AST BIT(7) +/* Remove specail Tag from PMAC to DMA */ +#define PMAC_HD_CTL_RST BIT(8) +/* Check CRC from DMA to PMAC */ +#define PMAC_HD_CTL_CCRC BIT(9) +/* Enable reaction to Pause frames in the PMAC */ +#define PMAC_HD_CTL_FC BIT(10) + +struct xrx200_chan { + int tx_free; + + struct tasklet_struct tasklet; + struct napi_struct napi; + struct ltq_dma_channel dma; + struct sk_buff *skb[LTQ_DESC_NUM]; + + struct xrx200_priv *priv; +}; + +struct xrx200_priv { + struct net_device_stats stats; + + struct clk *clk; + + struct xrx200_chan chan_tx; + struct xrx200_chan chan_rx; + + struct net_device *net_dev; + struct device *dev; + + __iomem void *pmac_reg; +}; + +static u32 xrx200_pmac_r32(struct xrx200_priv *priv, u32 offset) +{ + return __raw_readl(priv->pmac_reg + offset); +} + +static void xrx200_pmac_w32(struct xrx200_priv *priv, u32 val, u32 offset) +{ + return __raw_writel(val, priv->pmac_reg + offset); +} + +static void xrx200_pmac_mask(struct xrx200_priv *priv, u32 clear, u32 set, + u32 offset) +{ + u32 val = xrx200_pmac_r32(priv, offset); + + val &= ~(clear); + val |= set; + xrx200_pmac_w32(priv, val, offset); +} + +/* drop all the packets from the DMA ring */ +static void xrx200_flush_dma(struct xrx200_chan *ch) +{ + int i; + + for (i = 0; i < LTQ_DESC_NUM; i++) { + struct ltq_dma_desc *desc = &ch->dma.desc_base[ch->dma.desc]; + + if ((desc->ctl & (LTQ_DMA_OWN | LTQ_DMA_C)) != LTQ_DMA_C) + break; + + desc->ctl = LTQ_DMA_OWN | LTQ_DMA_RX_OFFSET(NET_IP_ALIGN) | + XRX200_DMA_DATA_LEN; + ch->dma.desc++; + ch->dma.desc %= LTQ_DESC_NUM; + } +} + +static int xrx200_open(struct net_device *dev) +{ + struct xrx200_priv *priv = netdev_priv(dev); + + ltq_dma_open(&priv->chan_tx.dma); + ltq_dma_enable_irq(&priv->chan_tx.dma); + + napi_enable(&priv->chan_rx.napi); + ltq_dma_open(&priv->chan_rx.dma); + /* The boot loader does not always deactivate the receiving of frames + * on the ports and then some packets queue up in the PPE buffers. + * They already passed the PMAC so they do not have the tags + * configured here. Read the these packets here and drop them. 
+ * The HW should have written them into memory after 10us + */ + udelay(10); + xrx200_flush_dma(&priv->chan_rx); + ltq_dma_enable_irq(&priv->chan_rx.dma); + + netif_wake_queue(dev); + + return 0; +} + +static int xrx200_close(struct net_device *dev) +{ + struct xrx200_priv *priv = netdev_priv(dev); + + netif_stop_queue(dev); + + napi_disable(&priv->chan_rx.napi); + ltq_dma_close(&priv->chan_rx.dma); + + ltq_dma_close(&priv->chan_tx.dma); + + return 0; +} + +static int xrx200_alloc_skb(struct xrx200_chan *ch) +{ + int ret = 0; + +#define DMA_PAD (NET_IP_ALIGN + NET_SKB_PAD) + ch->skb[ch->dma.desc] = dev_alloc_skb(XRX200_DMA_DATA_LEN + DMA_PAD); + if (!ch->skb[ch->dma.desc]) { + ret = -ENOMEM; + goto skip; + } + + skb_reserve(ch->skb[ch->dma.desc], NET_SKB_PAD); + ch->dma.desc_base[ch->dma.desc].addr = dma_map_single(ch->priv->dev, + ch->skb[ch->dma.desc]->data, XRX200_DMA_DATA_LEN, + DMA_FROM_DEVICE); + if (unlikely(dma_mapping_error(ch->priv->dev, + ch->dma.desc_base[ch->dma.desc].addr))) { + dev_kfree_skb_any(ch->skb[ch->dma.desc]); + ret = -ENOMEM; + goto skip; + } + + ch->dma.desc_base[ch->dma.desc].addr = + CPHYSADDR(ch->skb[ch->dma.desc]->data); + skb_reserve(ch->skb[ch->dma.desc], NET_IP_ALIGN); + +skip: + ch->dma.desc_base[ch->dma.desc].ctl = + LTQ_DMA_OWN | LTQ_DMA_RX_OFFSET(NET_IP_ALIGN) | + XRX200_DMA_DATA_LEN; + + return ret; +} + +static int xrx200_hw_receive(struct xrx200_chan *ch) +{ + struct xrx200_priv *priv = ch->priv; + struct ltq_dma_desc *desc = &ch->dma.desc_base[ch->dma.desc]; + struct sk_buff *skb = ch->skb[ch->dma.desc]; + int len = (desc->ctl & LTQ_DMA_SIZE_MASK); + int ret; + + ret = xrx200_alloc_skb(ch); + + ch->dma.desc++; + ch->dma.desc %= LTQ_DESC_NUM; + + if (ret) { + netdev_err(priv->net_dev, + "failed to allocate new rx buffer\n"); + return ret; + } + + skb_put(skb, len); + skb->dev = priv->net_dev; + skb->protocol = eth_type_trans(skb, priv->net_dev); + netif_receive_skb(skb); + priv->stats.rx_packets++; + priv->stats.rx_bytes += len; + + return 0; +} + +static int xrx200_poll_rx(struct napi_struct *napi, int budget) +{ + struct xrx200_chan *ch = container_of(napi, + struct xrx200_chan, napi); + int rx = 0; + int ret; + + while (rx < budget) { + struct ltq_dma_desc *desc = &ch->dma.desc_base[ch->dma.desc]; + + if ((desc->ctl & (LTQ_DMA_OWN | LTQ_DMA_C)) == LTQ_DMA_C) { + ret = xrx200_hw_receive(ch); + if (ret) + return ret; + rx++; + } else { + break; + } + } + + if (rx < budget) { + napi_complete(&ch->napi); + ltq_dma_enable_irq(&ch->dma); + } + + return rx; +} + +static void xrx200_tx_housekeeping(unsigned long ptr) +{ + struct xrx200_chan *ch = (struct xrx200_chan *)ptr; + int pkts = 0; + int bytes = 0; + + ltq_dma_ack_irq(&ch->dma); + while ((ch->dma.desc_base[ch->tx_free].ctl & + (LTQ_DMA_OWN | LTQ_DMA_C)) == LTQ_DMA_C) { + struct sk_buff *skb = ch->skb[ch->tx_free]; + + pkts++; + bytes += skb->len; + ch->skb[ch->tx_free] = NULL; + dev_kfree_skb(skb); + memset(&ch->dma.desc_base[ch->tx_free], 0, + sizeof(struct ltq_dma_desc)); + ch->tx_free++; + ch->tx_free %= LTQ_DESC_NUM; + } + ltq_dma_enable_irq(&ch->dma); + + netdev_completed_queue(ch->priv->net_dev, pkts, bytes); + + if (!pkts) + return; + + netif_wake_queue(ch->priv->net_dev); +} + +static struct net_device_stats *xrx200_get_stats(struct net_device *dev) +{ + struct xrx200_priv *priv = netdev_priv(dev); + + return &priv->stats; +} + +static int xrx200_start_xmit(struct sk_buff *skb, struct net_device *dev) +{ + struct xrx200_priv *priv = netdev_priv(dev); + struct xrx200_chan *ch; + struct 
ltq_dma_desc *desc; + u32 byte_offset; + dma_addr_t mapping; + int len; + + ch = &priv->chan_tx; + + desc = &ch->dma.desc_base[ch->dma.desc]; + + skb->dev = dev; + len = skb->len < ETH_ZLEN ? ETH_ZLEN : skb->len; + + /* dma needs to start on a 16 byte aligned address */ + byte_offset = CPHYSADDR(skb->data) % 16; + + if ((desc->ctl & (LTQ_DMA_OWN | LTQ_DMA_C)) || ch->skb[ch->dma.desc]) { + netdev_err(dev, "tx ring full\n"); + netif_stop_queue(dev); + return NETDEV_TX_BUSY; + } + + ch->skb[ch->dma.desc] = skb; + + netif_trans_update(dev); + + mapping = dma_map_single(priv->dev, skb->data, len, DMA_TO_DEVICE); + if (unlikely(dma_mapping_error(priv->dev, mapping))) + goto err_drop; + + desc->addr = mapping - byte_offset; + /* Make sure the address is written before we give it to HW */ + wmb(); + desc->ctl = LTQ_DMA_OWN | LTQ_DMA_SOP | LTQ_DMA_EOP | + LTQ_DMA_TX_OFFSET(byte_offset) | (len & LTQ_DMA_SIZE_MASK); + ch->dma.desc++; + ch->dma.desc %= LTQ_DESC_NUM; + if (ch->dma.desc == ch->tx_free) + netif_stop_queue(dev); + + netdev_sent_queue(dev, skb->len); + priv->stats.tx_packets++; + priv->stats.tx_bytes += len; + + return NETDEV_TX_OK; + +err_drop: + dev_kfree_skb(skb); + priv->stats.tx_dropped++; + priv->stats.tx_errors++; + return NETDEV_TX_OK; +} + +static const struct net_device_ops xrx200_netdev_ops = { + .ndo_open = xrx200_open, + .ndo_stop = xrx200_close, + .ndo_start_xmit = xrx200_start_xmit, + .ndo_set_mac_address = eth_mac_addr, + .ndo_validate_addr = eth_validate_addr, + .ndo_change_mtu = eth_change_mtu, + .ndo_get_stats = xrx200_get_stats, +}; + +static irqreturn_t xrx200_dma_irq_tx(int irq, void *ptr) +{ + struct xrx200_priv *priv = ptr; + struct xrx200_chan *ch = &priv->chan_tx; + + ltq_dma_disable_irq(&ch->dma); + ltq_dma_ack_irq(&ch->dma); + + tasklet_schedule(&ch->tasklet); + + return IRQ_HANDLED; +} + +static irqreturn_t xrx200_dma_irq_rx(int irq, void *ptr) +{ + struct xrx200_priv *priv = ptr; + struct xrx200_chan *ch = &priv->chan_rx; + + ltq_dma_disable_irq(&ch->dma); + ltq_dma_ack_irq(&ch->dma); + + napi_schedule(&ch->napi); + + return IRQ_HANDLED; +} + +static int xrx200_dma_init(struct xrx200_priv *priv) +{ + struct xrx200_chan *ch_rx = &priv->chan_rx; + struct xrx200_chan *ch_tx = &priv->chan_tx; + int ret = 0; + int i; + + ltq_dma_init_port(DMA_PORT_ETOP); + + ch_rx->dma.nr = XRX200_DMA_RX; + ch_rx->dma.dev = priv->dev; + ch_rx->priv = priv; + + ltq_dma_alloc_rx(&ch_rx->dma); + for (ch_rx->dma.desc = 0; ch_rx->dma.desc < LTQ_DESC_NUM; + ch_rx->dma.desc++) { + ret = xrx200_alloc_skb(ch_rx); + if (ret) + goto rx_free; + } + ch_rx->dma.desc = 0; + ret = devm_request_irq(priv->dev, ch_rx->dma.irq, xrx200_dma_irq_rx, 0, + "xrx200_net_rx", priv); + if (ret) { + dev_err(priv->dev, "failed to request RX irq %d\n", + ch_rx->dma.irq); + goto rx_ring_free; + } + + ch_tx->dma.nr = XRX200_DMA_TX; + ch_tx->dma.dev = priv->dev; + ch_tx->priv = priv; + + ltq_dma_alloc_tx(&ch_tx->dma); + ret = devm_request_irq(priv->dev, ch_tx->dma.irq, xrx200_dma_irq_tx, 0, + "xrx200_net_tx", priv); + if (ret) { + dev_err(priv->dev, "failed to request TX irq %d\n", + ch_tx->dma.irq); + goto tx_free; + } + + return ret; + +tx_free: + ltq_dma_free(&ch_tx->dma); + +rx_ring_free: + /* free the allocated RX ring */ + for (i = 0; i < LTQ_DESC_NUM; i++) { + if (priv->chan_rx.skb[i]) + dev_kfree_skb_any(priv->chan_rx.skb[i]); + } + +rx_free: + ltq_dma_free(&ch_rx->dma); + return ret; +} + +static void xrx200_hw_cleanup(struct xrx200_priv *priv) +{ + int i; + + ltq_dma_free(&priv->chan_tx.dma); + 
ltq_dma_free(&priv->chan_rx.dma); + + /* free the allocated RX ring */ + for (i = 0; i < LTQ_DESC_NUM; i++) + dev_kfree_skb_any(priv->chan_rx.skb[i]); +} + +static int xrx200_probe(struct platform_device *pdev) +{ + struct device *dev = &pdev->dev; + struct device_node *np = dev->of_node; + struct resource *res; + struct xrx200_priv *priv; + struct net_device *net_dev; + const u8 *mac; + int err; + + /* alloc the network device */ + net_dev = devm_alloc_etherdev(dev, sizeof(struct xrx200_priv)); + if (!net_dev) + return -ENOMEM; + + priv = netdev_priv(net_dev); + priv->net_dev = net_dev; + priv->dev = dev; + + net_dev->netdev_ops = &xrx200_netdev_ops; + SET_NETDEV_DEV(net_dev, dev); + net_dev->min_mtu = ETH_ZLEN; + net_dev->max_mtu = XRX200_DMA_DATA_LEN; + + /* load the memory ranges */ + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); + if (!res) { + dev_err(dev, "failed to get resources\n"); + return -ENOENT; + } + + priv->pmac_reg = devm_ioremap_resource(dev, res); + if (!priv->pmac_reg) { + dev_err(dev, "failed to request and remap io ranges\n"); + return -ENOMEM; + } + + priv->chan_rx.dma.irq = platform_get_irq_byname(pdev, "rx"); + if (priv->chan_rx.dma.irq < 0) { + dev_err(dev, "failed to get RX IRQ, %i\n", + priv->chan_rx.dma.irq); + return -ENOENT; + } + priv->chan_tx.dma.irq = platform_get_irq_byname(pdev, "tx"); + if (priv->chan_tx.dma.irq < 0) { + dev_err(dev, "failed to get TX IRQ, %i\n", + priv->chan_tx.dma.irq); + return -ENOENT; + } + + /* get the clock */ + priv->clk = devm_clk_get(dev, NULL); + if (IS_ERR(priv->clk)) { + dev_err(dev, "failed to get clock\n"); + return PTR_ERR(priv->clk); + } + + mac = of_get_mac_address(np); + if (mac && is_valid_ether_addr(mac)) + ether_addr_copy(net_dev->dev_addr, mac); + else + eth_hw_addr_random(net_dev); + + /* bring up the dma engine and IP core */ + err = xrx200_dma_init(priv); + if (err) + return err; + + /* enable clock gate */ + err = clk_prepare_enable(priv->clk); + if (err) + goto err_uninit_dma; + + /* set IPG to 12 */ + xrx200_pmac_mask(priv, PMAC_RX_IPG_MASK, 0xb, PMAC_RX_IPG); + + /* enable status header, enable CRC */ + xrx200_pmac_mask(priv, 0, + PMAC_HD_CTL_RST | PMAC_HD_CTL_AST | PMAC_HD_CTL_RXSH | + PMAC_HD_CTL_AS | PMAC_HD_CTL_AC | PMAC_HD_CTL_RC, + PMAC_HD_CTL); + + tasklet_init(&priv->chan_tx.tasklet, xrx200_tx_housekeeping, + (u32)&priv->chan_tx); + + /* setup NAPI */ + netif_napi_add(net_dev, &priv->chan_rx.napi, xrx200_poll_rx, 32); + + platform_set_drvdata(pdev, priv); + + err = register_netdev(net_dev); + if (err) + goto err_unprepare_clk; + return err; + +err_unprepare_clk: + clk_disable_unprepare(priv->clk); + +err_uninit_dma: + xrx200_hw_cleanup(priv); + + return 0; +} + +static int xrx200_remove(struct platform_device *pdev) +{ + struct xrx200_priv *priv = platform_get_drvdata(pdev); + struct net_device *net_dev = priv->net_dev; + + /* free stack related instances */ + netif_stop_queue(net_dev); + netif_napi_del(&priv->chan_rx.napi); + + /* remove the actual device */ + unregister_netdev(net_dev); + + /* release the clock */ + clk_disable_unprepare(priv->clk); + + /* shut down hardware */ + xrx200_hw_cleanup(priv); + + return 0; +} + +static const struct of_device_id xrx200_match[] = { + { .compatible = "lantiq,xrx200-net" }, + {}, +}; +MODULE_DEVICE_TABLE(of, xrx200_match); + +static struct platform_driver xrx200_driver = { + .probe = xrx200_probe, + .remove = xrx200_remove, + .driver = { + .name = "lantiq,xrx200-net", + .of_match_table = xrx200_match, + }, +}; + 
+module_platform_driver(xrx200_driver);
+
+MODULE_AUTHOR("John Crispin ");
+MODULE_DESCRIPTION("Lantiq SoC XRX200 ethernet");
+MODULE_LICENSE("GPL");

From patchwork Sat Sep 1 12:04:56 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Hauke Mehrtens
X-Patchwork-Id: 964874
X-Patchwork-Delegate: davem@davemloft.net
From: Hauke Mehrtens
To: davem@davemloft.net
Cc: netdev@vger.kernel.org, andrew@lunn.ch, vivien.didelot@savoirfairelinux.com,
    f.fainelli@gmail.com, john@phrozen.org, linux-mips@linux-mips.org,
    dev@kresin.me, hauke.mehrtens@intel.com, devicetree@vger.kernel.org
Subject: [PATCH v2 net-next 6/7] dt-bindings: net: dsa: Add lantiq, xrx200-gswip DT bindings
Date: Sat, 1 Sep 2018 14:04:56 +0200
Message-Id: <20180901120456.10056-1-hauke@hauke-m.de>
In-Reply-To: <20180901114535.9070-1-hauke@hauke-m.de>
References: <20180901114535.9070-1-hauke@hauke-m.de>
X-Mailing-List: netdev@vger.kernel.org

This adds the binding for the GSWIP (Gigabit switch) core found in the
xrx200 / VR9 Lantiq / Intel SoC. This part takes care of the switch, the
MDIO bus, and loading the firmware into the embedded GPHYs.
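[Editor's note: the binding below points the GPHY firmware loader at the RCU
syscon and at a per-GPHY firmware register offset. As a rough illustration of
how a consumer could use those two properties, here is a hedged sketch with
made-up function names; it is not code from this series (the real driver
arrives in patch 7).]

/* Hedged sketch of consuming this binding: look up the "lantiq,rcu" syscon
 * from the gphy-fw node and write the firmware load address into the
 * per-GPHY register given by its "reg" offset.
 */
#include <linux/err.h>
#include <linux/mfd/syscon.h>
#include <linux/of.h>
#include <linux/regmap.h>

static int example_boot_gphy(struct device_node *fw_np,
			     struct device_node *gphy_np, u32 fw_addr)
{
	struct regmap *rcu;
	u32 offset;
	int err;

	rcu = syscon_regmap_lookup_by_phandle(fw_np, "lantiq,rcu");
	if (IS_ERR(rcu))
		return PTR_ERR(rcu);

	err = of_property_read_u32(gphy_np, "reg", &offset);
	if (err)
		return err;

	/* Tell the RCU where the GPHY firmware blob was placed in memory. */
	return regmap_write(rcu, offset, fw_addr);
}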
Signed-off-by: Hauke Mehrtens Cc: devicetree@vger.kernel.org --- .../devicetree/bindings/net/dsa/lantiq-gswip.txt | 141 +++++++++++++++++++++ 1 file changed, 141 insertions(+) create mode 100644 Documentation/devicetree/bindings/net/dsa/lantiq-gswip.txt diff --git a/Documentation/devicetree/bindings/net/dsa/lantiq-gswip.txt b/Documentation/devicetree/bindings/net/dsa/lantiq-gswip.txt new file mode 100644 index 000000000000..4921625e1a7d --- /dev/null +++ b/Documentation/devicetree/bindings/net/dsa/lantiq-gswip.txt @@ -0,0 +1,141 @@ +Lantiq GSWIP Ethernet switches +================================== + +Required properties for GSWIP core: + +- compatible : "lantiq,xrx200-gswip" for the embedded GSWIP in the + xRX200 SoC +- reg : memory range of the GSWIP core registers + : memory range of the GSWIP MDIO registers + : memory range of the GSWIP MII registers + +See Documentation/devicetree/bindings/net/dsa/dsa.txt for a list of +additional required and optional properties. + + +Required properties for MDIO bus: +- compatible : "lantiq,xrx200-mdio" for the MDIO bus inside the GSWIP + core of the xRX200 SoC and the PHYs connected to it. + +See Documentation/devicetree/bindings/net/mdio.txt for a list of additional +required and optional properties. + + +Required properties for GPHY firmware loading: +- compatible : "lantiq,gphy-fw" and "lantiq,xrx200-gphy-fw", + "lantiq,xrx200a1x-gphy-fw", "lantiq,xrx200a2x-gphy-fw", + "lantiq,xrx300-gphy-fw", or "lantiq,xrx330-gphy-fw" + for the loading of the firmware into the embedded + GPHY core of the SoC. +- lantiq,rcu : reference to the rcu syscon + +The GPHY firmware loader has a list of GPHY entries, one for each +embedded GPHY + +- reg : Offset of the GPHY firmware register in the RCU + register range +- resets : list of resets of the embedded GPHY +- reset-names : list of names of the resets + +Example: + +Ethernet switch on the VRX200 SoC: + +gswip: gswip@E108000 { + #address-cells = <1>; + #size-cells = <0>; + compatible = "lantiq,xrx200-gswip"; + reg = < 0xE108000 0x3000 /* switch */ + 0xE10B100 0x70 /* mdio */ + 0xE10B1D8 0x30 /* mii */ + >; + dsa,member = <0 0>; + + ports { + #address-cells = <1>; + #size-cells = <0>; + + port@0 { + reg = <0>; + label = "lan3"; + phy-mode = "rgmii"; + phy-handle = <&phy0>; + }; + + port@1 { + reg = <1>; + label = "lan4"; + phy-mode = "rgmii"; + phy-handle = <&phy1>; + }; + + port@2 { + reg = <2>; + label = "lan2"; + phy-mode = "gmii"; + phy-handle = <&phy11>; + }; + + port@4 { + reg = <4>; + label = "lan1"; + phy-mode = "gmii"; + phy-handle = <&phy13>; + }; + + port@5 { + reg = <5>; + label = "wan"; + phy-mode = "rgmii"; + phy-handle = <&phy5>; + }; + + port@6 { + reg = <0x6>; + label = "cpu"; + ethernet = <ð0>; + }; + }; + + mdio@0 { + #address-cells = <1>; + #size-cells = <0>; + compatible = "lantiq,xrx200-mdio"; + reg = <0>; + + phy0: ethernet-phy@0 { + reg = <0x0>; + }; + phy1: ethernet-phy@1 { + reg = <0x1>; + }; + phy5: ethernet-phy@5 { + reg = <0x5>; + }; + phy11: ethernet-phy@11 { + reg = <0x11>; + }; + phy13: ethernet-phy@13 { + reg = <0x13>; + }; + }; + + gphy-fw { + compatible = "lantiq,xrx200a2x-gphy-fw", "lantiq,xrx200-gphy-fw", "lantiq,gphy-fw"; + lantiq,rcu = <&rcu0>; + + gphy@20 { + reg = <0x20>; + + resets = <&reset0 31 30>; + reset-names = "gphy"; + }; + + gphy@68 { + reg = <0x68>; + + resets = <&reset0 29 28>; + reset-names = "gphy"; + }; + }; +}; From patchwork Sat Sep 1 12:05:11 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit 
X-Patchwork-Submitter: Hauke Mehrtens
X-Patchwork-Id: 964875
X-Patchwork-Delegate: davem@davemloft.net
From: Hauke Mehrtens
To: davem@davemloft.net
Cc: netdev@vger.kernel.org, andrew@lunn.ch, vivien.didelot@savoirfairelinux.com,
    f.fainelli@gmail.com, john@phrozen.org, linux-mips@linux-mips.org,
    dev@kresin.me, hauke.mehrtens@intel.com, devicetree@vger.kernel.org
Subject: [PATCH v2 net-next 7/7] net: dsa: Add Lantiq / Intel DSA driver for vrx200
Date: Sat, 1 Sep 2018 14:05:11 +0200
Message-Id: <20180901120511.10112-1-hauke@hauke-m.de>
In-Reply-To: <20180901114535.9070-1-hauke@hauke-m.de>
References: <20180901114535.9070-1-hauke@hauke-m.de>
X-Mailing-List: netdev@vger.kernel.org

This adds the DSA driver for the GSWIP switch found in the VRX200 SoC.
This DSL SoC integrates a GSWIP version 2.1; other SoCs use different
versions of this IP block, but this driver was only tested with the
version found in the VRX200.

Currently only the basic features are implemented: all packets are
forwarded to the CPU, which does the forwarding itself. The hardware also
supports Layer 2 offloading, which is not yet implemented in this driver.

The GPHY firmware loading is now done by this driver and no longer by the
separate driver in drivers/soc/lantiq/gphy.c; that driver will be removed
in a separate patch. This switch driver is needed anyway to make use of
the GPHY. Other SoCs have more embedded GPHYs, so this driver should
support a variable number of GPHYs. After the firmware is loaded the GPHY
can be probed on the MDIO bus and behaves like an external GPHY; without
the firmware it cannot be probed on the MDIO bus.

Currently this depends on SOC_TYPE_XWAY because the SoC revision detection
function ltq_soc_type() is used to load the correct GPHY firmware on the
VRX200 SoCs.

The clock names in sysctrl.c have to be changed because the clocks are now
used by a different driver. This should be cleaned up later; a real common
clock driver should provide the clocks instead.
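[Editor's note: the driver below defines XRX200_GPHY_FW_ALIGN (16 KiB) for
placing the GPHY firmware blob. As a hedged illustration of that step, here
is a sketch of requesting a firmware image and copying it to an aligned DMA
address; the "example_" function name is illustrative and this is not the
exact code from the patch.]

/* Hedged sketch: load a GPHY firmware blob into a coherent buffer and
 * return an address aligned to XRX200_GPHY_FW_ALIGN, which the RCU can
 * then be pointed at before the GPHY reset is released.
 */
#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/firmware.h>
#include <linux/kernel.h>
#include <linux/string.h>

#define XRX200_GPHY_FW_ALIGN	(16 * 1024)

static int example_load_gphy_fw(struct device *dev, const char *name,
				dma_addr_t *dev_addr)
{
	const struct firmware *fw;
	dma_addr_t dma_addr;
	void *virt;
	size_t size;
	int err;

	err = request_firmware(&fw, name, dev);
	if (err)
		return err;

	/* Over-allocate so the blob can be placed on an aligned address. */
	size = fw->size + XRX200_GPHY_FW_ALIGN;
	virt = dmam_alloc_coherent(dev, size, &dma_addr, GFP_KERNEL);
	if (!virt) {
		release_firmware(fw);
		return -ENOMEM;
	}

	*dev_addr = ALIGN(dma_addr, XRX200_GPHY_FW_ALIGN);
	memcpy(virt + (*dev_addr - dma_addr), fw->data, fw->size);

	release_firmware(fw);
	return 0;
}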
Signed-off-by: Hauke Mehrtens --- MAINTAINERS | 2 + arch/mips/lantiq/xway/sysctrl.c | 8 +- drivers/net/dsa/Kconfig | 8 + drivers/net/dsa/Makefile | 1 + drivers/net/dsa/lantiq_gswip.c | 1018 +++++++++++++++++++++++++++++++++++++++ drivers/net/dsa/lantiq_pce.h | 153 ++++++ 6 files changed, 1186 insertions(+), 4 deletions(-) create mode 100644 drivers/net/dsa/lantiq_gswip.c create mode 100644 drivers/net/dsa/lantiq_pce.h diff --git a/MAINTAINERS b/MAINTAINERS index ffff912d31b5..5ae7385840e3 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -8172,6 +8172,8 @@ L: netdev@vger.kernel.org S: Maintained F: net/dsa/tag_gswip.c F: drivers/net/ethernet/lantiq_xrx200.c +F: drivers/net/dsa/lantiq_pce.h +F: drivers/net/dsa/intel_gswip.c LANTIQ MIPS ARCHITECTURE M: John Crispin diff --git a/arch/mips/lantiq/xway/sysctrl.c b/arch/mips/lantiq/xway/sysctrl.c index eeb89a37e27e..fe25c99089b7 100644 --- a/arch/mips/lantiq/xway/sysctrl.c +++ b/arch/mips/lantiq/xway/sysctrl.c @@ -516,8 +516,8 @@ void __init ltq_soc_init(void) clkdev_add_pmu("1e10b308.eth", NULL, 0, 0, PMU_SWITCH | PMU_PPE_DP | PMU_PPE_TC); clkdev_add_pmu("1da00000.usif", "NULL", 1, 0, PMU_USIF); - clkdev_add_pmu("1f203020.gphy", NULL, 1, 0, PMU_GPHY); - clkdev_add_pmu("1f203068.gphy", NULL, 1, 0, PMU_GPHY); + clkdev_add_pmu("1e108000.gswip", "gphy0", 0, 0, PMU_GPHY); + clkdev_add_pmu("1e108000.gswip", "gphy1", 0, 0, PMU_GPHY); clkdev_add_pmu("1e103100.deu", NULL, 1, 0, PMU_DEU); clkdev_add_pmu("1e116000.mei", "afe", 1, 2, PMU_ANALOG_DSL_AFE); clkdev_add_pmu("1e116000.mei", "dfe", 1, 0, PMU_DFE); @@ -540,8 +540,8 @@ void __init ltq_soc_init(void) PMU_SWITCH | PMU_PPE_DPLUS | PMU_PPE_DPLUM | PMU_PPE_EMA | PMU_PPE_TC | PMU_PPE_SLL01 | PMU_PPE_QSB | PMU_PPE_TOP); - clkdev_add_pmu("1f203020.gphy", NULL, 0, 0, PMU_GPHY); - clkdev_add_pmu("1f203068.gphy", NULL, 0, 0, PMU_GPHY); + clkdev_add_pmu("1e108000.gswip", "gphy0", 0, 0, PMU_GPHY); + clkdev_add_pmu("1e108000.gswip", "gphy1", 0, 0, PMU_GPHY); clkdev_add_pmu("1e103000.sdio", NULL, 1, 0, PMU_SDIO); clkdev_add_pmu("1e103100.deu", NULL, 1, 0, PMU_DEU); clkdev_add_pmu("1e116000.mei", "dfe", 1, 0, PMU_DFE); diff --git a/drivers/net/dsa/Kconfig b/drivers/net/dsa/Kconfig index d3ce1e4cb4d3..785e1657046c 100644 --- a/drivers/net/dsa/Kconfig +++ b/drivers/net/dsa/Kconfig @@ -23,6 +23,14 @@ config NET_DSA_LOOP This enables support for a fake mock-up switch chip which exercises the DSA APIs. +config NET_DSA_LANTIQ_GSWIP + tristate "Lantiq / Intel GSWIP" + depends on NET_DSA && SOC_TYPE_XWAY + select NET_DSA_TAG_GSWIP + ---help--- + This enables support for the Lantiq / Intel GSWIP 2.1 found in + the xrx200 / VR9 SoC. 
+ config NET_DSA_MT7530 tristate "Mediatek MT7530 Ethernet switch support" depends on NET_DSA diff --git a/drivers/net/dsa/Makefile b/drivers/net/dsa/Makefile index 46c1cba91ffe..82e5d794c41f 100644 --- a/drivers/net/dsa/Makefile +++ b/drivers/net/dsa/Makefile @@ -5,6 +5,7 @@ obj-$(CONFIG_NET_DSA_LOOP) += dsa_loop.o ifdef CONFIG_NET_DSA_LOOP obj-$(CONFIG_FIXED_PHY) += dsa_loop_bdinfo.o endif +obj-$(CONFIG_NET_DSA_LANTIQ_GSWIP) += lantiq_gswip.o obj-$(CONFIG_NET_DSA_MT7530) += mt7530.o obj-$(CONFIG_NET_DSA_MV88E6060) += mv88e6060.o obj-$(CONFIG_NET_DSA_QCA8K) += qca8k.o diff --git a/drivers/net/dsa/lantiq_gswip.c b/drivers/net/dsa/lantiq_gswip.c new file mode 100644 index 000000000000..7f0c308a3530 --- /dev/null +++ b/drivers/net/dsa/lantiq_gswip.c @@ -0,0 +1,1018 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Lantiq / Intel GSWIP switch driver for VRX200 SoCs + * + * Copyright (C) 2010 Lantiq Deutschland + * Copyright (C) 2012 John Crispin + * Copyright (C) 2017 - 2018 Hauke Mehrtens + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "lantiq_pce.h" + +/* GSWIP MDIO Registers */ +#define GSWIP_MDIO_GLOB 0x00 +#define GSWIP_MDIO_GLOB_ENABLE BIT(15) +#define GSWIP_MDIO_CTRL 0x08 +#define GSWIP_MDIO_CTRL_BUSY BIT(12) +#define GSWIP_MDIO_CTRL_RD BIT(11) +#define GSWIP_MDIO_CTRL_WR BIT(10) +#define GSWIP_MDIO_CTRL_PHYAD_MASK 0x1f +#define GSWIP_MDIO_CTRL_PHYAD_SHIFT 5 +#define GSWIP_MDIO_CTRL_REGAD_MASK 0x1f +#define GSWIP_MDIO_READ 0x09 +#define GSWIP_MDIO_WRITE 0x0A +#define GSWIP_MDIO_MDC_CFG0 0x0B +#define GSWIP_MDIO_MDC_CFG1 0x0C +#define GSWIP_MDIO_PHYp(p) (0x15 - (p)) +#define GSWIP_MDIO_PHY_LINK_DOWN 0x4000 +#define GSWIP_MDIO_PHY_LINK_UP 0x2000 +#define GSWIP_MDIO_PHY_SPEED_M10 0x0000 +#define GSWIP_MDIO_PHY_SPEED_M100 0x0800 +#define GSWIP_MDIO_PHY_SPEED_G1 0x1000 +#define GSWIP_MDIO_PHY_FDUP_EN 0x0200 +#define GSWIP_MDIO_PHY_FDUP_DIS 0x0600 +#define GSWIP_MDIO_PHY_FCONTX_EN 0x0100 +#define GSWIP_MDIO_PHY_FCONTX_DIS 0x0180 +#define GSWIP_MDIO_PHY_FCONRX_EN 0x0020 +#define GSWIP_MDIO_PHY_FCONRX_DIS 0x0060 +#define GSWIP_MDIO_PHY_LINK_MASK 0x6000 +#define GSWIP_MDIO_PHY_SPEED_MASK 0x1800 +#define GSWIP_MDIO_PHY_FDUP_MASK 0x0600 +#define GSWIP_MDIO_PHY_FCONTX_MASK 0x0180 +#define GSWIP_MDIO_PHY_FCONRX_MASK 0x0060 +#define GSWIP_MDIO_PHY_ADDR_MASK 0x001f +#define GSWIP_MDIO_PHY_MASK (GSWIP_MDIO_PHY_ADDR_MASK | \ + GSWIP_MDIO_PHY_FCONRX_MASK | \ + GSWIP_MDIO_PHY_FCONTX_MASK | \ + GSWIP_MDIO_PHY_LINK_MASK | \ + GSWIP_MDIO_PHY_SPEED_MASK | \ + GSWIP_MDIO_PHY_FDUP_MASK) + +/* GSWIP MII Registers */ +#define GSWIP_MII_CFGp(p) ((p) * 2) +#define GSWIP_MII_CFG_EN BIT(14) +#define GSWIP_MII_CFG_MODE_MIIP 0x0 +#define GSWIP_MII_CFG_MODE_MIIM 0x1 +#define GSWIP_MII_CFG_MODE_RMIIP 0x2 +#define GSWIP_MII_CFG_MODE_RMIIM 0x3 +#define GSWIP_MII_CFG_MODE_RGMII 0x4 +#define GSWIP_MII_CFG_MODE_MASK 0xf +#define GSWIP_MII_CFG_RATE_M2P5 0x00 +#define GSWIP_MII_CFG_RATE_M25 0x10 +#define GSWIP_MII_CFG_RATE_M125 0x20 +#define GSWIP_MII_CFG_RATE_M50 0x30 +#define GSWIP_MII_CFG_RATE_AUTO 0x40 +#define GSWIP_MII_CFG_RATE_MASK 0x70 + +/* GSWIP Core Registers */ +#define GSWIP_SWRES 0x000 +#define GSWIP_SWRES_R1 BIT(1) /* GSWIP Software reset */ +#define GSWIP_SWRES_R0 BIT(0) /* GSWIP Hardware reset */ +#define GSWIP_VERSION 0x013 +#define GSWIP_VERSION_REV_SHIFT 0 +#define GSWIP_VERSION_REV_MASK GENMASK(7, 0) +#define GSWIP_VERSION_MOD_SHIFT 8 +#define GSWIP_VERSION_MOD_MASK 
GENMASK(15, 8) + +#define GSWIP_BM_RAM_VAL(x) (0x043 - (x)) +#define GSWIP_BM_RAM_ADDR 0x044 +#define GSWIP_BM_RAM_CTRL 0x045 +#define GSWIP_BM_RAM_CTRL_BAS BIT(15) +#define GSWIP_BM_RAM_CTRL_OPMOD BIT(5) +#define GSWIP_BM_RAM_CTRL_ADDR_MASK GENMASK(4, 0) +#define GSWIP_BM_QUEUE_GCTRL 0x04A +#define GSWIP_BM_QUEUE_GCTRL_GL_MOD BIT(10) +/* buffer management Port Configuration Register */ +#define GSWIP_BM_PCFGp(p) (0x080 + ((p) * 2)) +#define GSWIP_BM_PCFG_CNTEN BIT(0) /* RMON Counter Enable */ +#define GSWIP_BM_PCFG_IGCNT BIT(1) /* Ingres Special Tag RMON count */ +/* buffer management Port Control Register */ +#define GSWIP_BM_RMON_CTRLp(p) (0x81 + ((p) * 2)) +#define GSWIP_BM_CTRL_RMON_RAM1_RES BIT(0) /* Software Reset for RMON RAM 1 */ +#define GSWIP_BM_CTRL_RMON_RAM2_RES BIT(1) /* Software Reset for RMON RAM 2 */ + +/* PCE */ +#define GSWIP_PCE_TBL_KEY(x) (0x447 - (x)) +#define GSWIP_PCE_TBL_MASK 0x448 +#define GSWIP_PCE_TBL_VAL(x) (0x44D - (x)) +#define GSWIP_PCE_TBL_ADDR 0x44E +#define GSWIP_PCE_TBL_CTRL 0x44F +#define GSWIP_PCE_TBL_CTRL_BAS BIT(15) +#define GSWIP_PCE_TBL_CTRL_TYPE BIT(13) +#define GSWIP_PCE_TBL_CTRL_VLD BIT(12) +#define GSWIP_PCE_TBL_CTRL_KEYFORM BIT(11) +#define GSWIP_PCE_TBL_CTRL_GMAP_MASK GENMASK(10, 7) +#define GSWIP_PCE_TBL_CTRL_OPMOD_MASK GENMASK(6, 5) +#define GSWIP_PCE_TBL_CTRL_OPMOD_ADRD 0x00 +#define GSWIP_PCE_TBL_CTRL_OPMOD_ADWR 0x20 +#define GSWIP_PCE_TBL_CTRL_OPMOD_KSRD 0x40 +#define GSWIP_PCE_TBL_CTRL_OPMOD_KSWR 0x60 +#define GSWIP_PCE_TBL_CTRL_ADDR_MASK GENMASK(4, 0) +#define GSWIP_PCE_PMAP1 0x453 /* Monitoring port map */ +#define GSWIP_PCE_PMAP2 0x454 /* Default Multicast port map */ +#define GSWIP_PCE_PMAP3 0x455 /* Default Unknown Unicast port map */ +#define GSWIP_PCE_GCTRL_0 0x456 +#define GSWIP_PCE_GCTRL_0_MC_VALID BIT(3) +#define GSWIP_PCE_GCTRL_0_VLAN BIT(14) /* VLAN aware Switching */ +#define GSWIP_PCE_GCTRL_1 0x457 +#define GSWIP_PCE_GCTRL_1_MAC_GLOCK BIT(2) /* MAC Address table lock */ +#define GSWIP_PCE_GCTRL_1_MAC_GLOCK_MOD BIT(3) /* Mac address table lock forwarding mode */ +#define GSWIP_PCE_PCTRL_0p(p) (0x480 + ((p) * 0xA)) +#define GSWIP_PCE_PCTRL_0_INGRESS BIT(11) +#define GSWIP_PCE_PCTRL_0_PSTATE_LISTEN 0x0 +#define GSWIP_PCE_PCTRL_0_PSTATE_RX 0x1 +#define GSWIP_PCE_PCTRL_0_PSTATE_TX 0x2 +#define GSWIP_PCE_PCTRL_0_PSTATE_LEARNING 0x3 +#define GSWIP_PCE_PCTRL_0_PSTATE_FORWARDING 0x7 +#define GSWIP_PCE_PCTRL_0_PSTATE_MASK GENMASK(2, 0) + +#define GSWIP_MAC_FLEN 0x8C5 +#define GSWIP_MAC_CTRL_2p(p) (0x905 + ((p) * 0xC)) +#define GSWIP_MAC_CTRL_2_MLEN BIT(3) /* Maximum Untagged Frame Lnegth */ + +/* Ethernet Switch Fetch DMA Port Control Register */ +#define GSWIP_FDMA_PCTRLp(p) (0xA80 + ((p) * 0x6)) +#define GSWIP_FDMA_PCTRL_EN BIT(0) /* FDMA Port Enable */ +#define GSWIP_FDMA_PCTRL_STEN BIT(1) /* Special Tag Insertion Enable */ +#define GSWIP_FDMA_PCTRL_VLANMOD_MASK GENMASK(4, 3) /* VLAN Modification Control */ +#define GSWIP_FDMA_PCTRL_VLANMOD_SHIFT 3 /* VLAN Modification Control */ +#define GSWIP_FDMA_PCTRL_VLANMOD_DIS (0x0 << GSWIP_FDMA_PCTRL_VLANMOD_SHIFT) +#define GSWIP_FDMA_PCTRL_VLANMOD_PRIO (0x1 << GSWIP_FDMA_PCTRL_VLANMOD_SHIFT) +#define GSWIP_FDMA_PCTRL_VLANMOD_ID (0x2 << GSWIP_FDMA_PCTRL_VLANMOD_SHIFT) +#define GSWIP_FDMA_PCTRL_VLANMOD_BOTH (0x3 << GSWIP_FDMA_PCTRL_VLANMOD_SHIFT) + +/* Ethernet Switch Store DMA Port Control Register */ +#define GSWIP_SDMA_PCTRLp(p) (0xBC0 + ((p) * 0x6)) +#define GSWIP_SDMA_PCTRL_EN BIT(0) /* SDMA Port Enable */ +#define GSWIP_SDMA_PCTRL_FCEN BIT(1) /* Flow Control Enable */ +#define 
GSWIP_SDMA_PCTRL_PAUFWD BIT(1) /* Pause Frame Forwarding */ + +#define XRX200_GPHY_FW_ALIGN (16 * 1024) + +struct gswip_hw_info { + int max_ports; + int cpu_port; +}; + +struct xway_gphy_match_data { + char *fe_firmware_name; + char *ge_firmware_name; +}; + +struct gswip_gphy_fw { + struct clk *clk_gate; + struct reset_control *reset; + u32 fw_addr_offset; + char *fw_name; +}; + +struct gswip_priv { + __iomem void *gswip; + __iomem void *mdio; + __iomem void *mii; + const struct gswip_hw_info *hw_info; + const struct xway_gphy_match_data *gphy_fw_name_cfg; + struct dsa_switch *ds; + struct device *dev; + struct regmap *rcu_regmap; + int num_gphy_fw; + struct gswip_gphy_fw *gphy_fw; +}; + +struct gswip_rmon_cnt_desc { + unsigned int size; + unsigned int offset; + const char *name; +}; + +#define MIB_DESC(_size, _offset, _name) {.size = _size, .offset = _offset, .name = _name} + +static const struct gswip_rmon_cnt_desc gswip_rmon_cnt[] = { + /** Receive Packet Count (only packets that are accepted and not discarded). */ + MIB_DESC(1, 0x1F, "RxGoodPkts"), + MIB_DESC(1, 0x23, "RxUnicastPkts"), + MIB_DESC(1, 0x22, "RxMulticastPkts"), + MIB_DESC(1, 0x21, "RxFCSErrorPkts"), + MIB_DESC(1, 0x1D, "RxUnderSizeGoodPkts"), + MIB_DESC(1, 0x1E, "RxUnderSizeErrorPkts"), + MIB_DESC(1, 0x1B, "RxOversizeGoodPkts"), + MIB_DESC(1, 0x1C, "RxOversizeErrorPkts"), + MIB_DESC(1, 0x20, "RxGoodPausePkts"), + MIB_DESC(1, 0x1A, "RxAlignErrorPkts"), + MIB_DESC(1, 0x12, "Rx64BytePkts"), + MIB_DESC(1, 0x13, "Rx127BytePkts"), + MIB_DESC(1, 0x14, "Rx255BytePkts"), + MIB_DESC(1, 0x15, "Rx511BytePkts"), + MIB_DESC(1, 0x16, "Rx1023BytePkts"), + /** Receive Size 1024-1522 (or more, if configured) Packet Count. */ + MIB_DESC(1, 0x17, "RxMaxBytePkts"), + MIB_DESC(1, 0x18, "RxDroppedPkts"), + MIB_DESC(1, 0x19, "RxFilteredPkts"), + MIB_DESC(2, 0x24, "RxGoodBytes"), + MIB_DESC(2, 0x26, "RxBadBytes"), + MIB_DESC(1, 0x11, "TxAcmDroppedPkts"), + MIB_DESC(1, 0x0C, "TxGoodPkts"), + MIB_DESC(1, 0x06, "TxUnicastPkts"), + MIB_DESC(1, 0x07, "TxMulticastPkts"), + MIB_DESC(1, 0x00, "Tx64BytePkts"), + MIB_DESC(1, 0x01, "Tx127BytePkts"), + MIB_DESC(1, 0x02, "Tx255BytePkts"), + MIB_DESC(1, 0x03, "Tx511BytePkts"), + MIB_DESC(1, 0x04, "Tx1023BytePkts"), + /** Transmit Size 1024-1522 (or more, if configured) Packet Count. 
*/ + MIB_DESC(1, 0x05, "TxMaxBytePkts"), + MIB_DESC(1, 0x08, "TxSingleCollCount"), + MIB_DESC(1, 0x09, "TxMultCollCount"), + MIB_DESC(1, 0x0A, "TxLateCollCount"), + MIB_DESC(1, 0x0B, "TxExcessCollCount"), + MIB_DESC(1, 0x0D, "TxPauseCount"), + MIB_DESC(1, 0x10, "TxDroppedPkts"), + MIB_DESC(2, 0x0E, "TxGoodBytes"), +}; + +static u32 gswip_switch_r(struct gswip_priv *priv, u32 offset) +{ + return __raw_readl(priv->gswip + (offset * 4)); +} + +static void gswip_switch_w(struct gswip_priv *priv, u32 val, u32 offset) +{ + return __raw_writel(val, priv->gswip + (offset * 4)); +} + +static void gswip_switch_mask(struct gswip_priv *priv, u32 clear, u32 set, + u32 offset) +{ + u32 val = gswip_switch_r(priv, offset); + + val &= ~(clear); + val |= set; + gswip_switch_w(priv, val, offset); +} + +static u32 gswip_switch_r_timeout(struct gswip_priv *priv, u32 offset, + u32 cleared) +{ + u32 val; + + return readx_poll_timeout(__raw_readl, priv->gswip + (offset * 4), val, + (val & cleared) == 0, 20, 50000); +} + +static u32 gswip_mdio_r(struct gswip_priv *priv, u32 offset) +{ + return __raw_readl(priv->mdio + (offset * 4)); +} + +static void gswip_mdio_w(struct gswip_priv *priv, u32 val, u32 offset) +{ + return __raw_writel(val, priv->mdio + (offset * 4)); +} + +static void gswip_mdio_mask(struct gswip_priv *priv, u32 clear, u32 set, + u32 offset) +{ + u32 val = gswip_mdio_r(priv, offset); + + val &= ~(clear); + val |= set; + gswip_mdio_w(priv, val, offset); +} + +static u32 gswip_mii_r(struct gswip_priv *priv, u32 offset) +{ + return __raw_readl(priv->mii + (offset * 4)); +} + +static void gswip_mii_w(struct gswip_priv *priv, u32 val, u32 offset) +{ + return __raw_writel(val, priv->mii + (offset * 4)); +} + +static void gswip_mii_mask(struct gswip_priv *priv, u32 clear, u32 set, + u32 offset) +{ + u32 val = gswip_mii_r(priv, offset); + + val &= ~(clear); + val |= set; + gswip_mii_w(priv, val, offset); +} + +static int gswip_mdio_poll(struct gswip_priv *priv) +{ + int cnt = 10000; + + while (likely(cnt--)) { + u32 ctrl = gswip_mdio_r(priv, GSWIP_MDIO_CTRL); + + if ((ctrl & GSWIP_MDIO_CTRL_BUSY) == 0) + return 0; + cpu_relax(); + } + + return -ETIMEDOUT; +} + +static int gswip_mdio_wr(struct mii_bus *bus, int addr, int reg, u16 val) +{ + struct gswip_priv *priv = bus->priv; + int err; + + err = gswip_mdio_poll(priv); + if (err) + return err; + + gswip_mdio_w(priv, val, GSWIP_MDIO_WRITE); + gswip_mdio_w(priv, GSWIP_MDIO_CTRL_BUSY | GSWIP_MDIO_CTRL_WR | + ((addr & GSWIP_MDIO_CTRL_PHYAD_MASK) << GSWIP_MDIO_CTRL_PHYAD_SHIFT) | + (reg & GSWIP_MDIO_CTRL_REGAD_MASK), + GSWIP_MDIO_CTRL); + + return 0; +} + +static int gswip_mdio_rd(struct mii_bus *bus, int addr, int reg) +{ + struct gswip_priv *priv = bus->priv; + int err; + + err = gswip_mdio_poll(priv); + if (err) + return err; + + gswip_mdio_w(priv, GSWIP_MDIO_CTRL_BUSY | GSWIP_MDIO_CTRL_RD | + ((addr & GSWIP_MDIO_CTRL_PHYAD_MASK) << GSWIP_MDIO_CTRL_PHYAD_SHIFT) | + (reg & GSWIP_MDIO_CTRL_REGAD_MASK), + GSWIP_MDIO_CTRL); + + err = gswip_mdio_poll(priv); + if (err) + return err; + + return gswip_mdio_r(priv, GSWIP_MDIO_READ); +} + +static int gswip_mdio(struct gswip_priv *priv, struct device_node *mdio_np) +{ + struct dsa_switch *ds = priv->ds; + + ds->slave_mii_bus = devm_mdiobus_alloc(priv->dev); + if (!ds->slave_mii_bus) + return -ENOMEM; + + ds->slave_mii_bus->priv = priv; + ds->slave_mii_bus->read = gswip_mdio_rd; + ds->slave_mii_bus->write = gswip_mdio_wr; + ds->slave_mii_bus->name = "lantiq,xrx200-mdio"; + snprintf(ds->slave_mii_bus->id, MII_BUS_ID_SIZE, 
"%s-mii", + dev_name(priv->dev)); + ds->slave_mii_bus->parent = priv->dev; + ds->slave_mii_bus->phy_mask = ~ds->phys_mii_mask; + + return of_mdiobus_register(ds->slave_mii_bus, mdio_np); +} + +static int gswip_port_enable(struct dsa_switch *ds, int port, + struct phy_device *phy) +{ + struct gswip_priv *priv = ds->priv; + + /* RMON Counter Enable for port */ + gswip_switch_w(priv, GSWIP_BM_PCFG_CNTEN, GSWIP_BM_PCFGp(port)); + + /* enable port fetch/store dma & VLAN Modification */ + gswip_switch_mask(priv, 0, GSWIP_FDMA_PCTRL_EN | + GSWIP_FDMA_PCTRL_VLANMOD_BOTH, + GSWIP_FDMA_PCTRLp(port)); + gswip_switch_mask(priv, 0, GSWIP_SDMA_PCTRL_EN, + GSWIP_SDMA_PCTRLp(port)); + gswip_switch_mask(priv, 0, GSWIP_PCE_PCTRL_0_INGRESS, + GSWIP_PCE_PCTRL_0p(port)); + + return 0; +} + +static void gswip_port_disable(struct dsa_switch *ds, int port, + struct phy_device *phy) +{ + struct gswip_priv *priv = ds->priv; + + gswip_switch_mask(priv, GSWIP_FDMA_PCTRL_EN, 0, + GSWIP_FDMA_PCTRLp(port)); + gswip_switch_mask(priv, GSWIP_SDMA_PCTRL_EN, 0, + GSWIP_SDMA_PCTRLp(port)); +} + +static int gswip_pce_load_microcode(struct gswip_priv *priv) +{ + int i; + int err; + + gswip_switch_mask(priv, GSWIP_PCE_TBL_CTRL_ADDR_MASK | + GSWIP_PCE_TBL_CTRL_OPMOD_MASK, + GSWIP_PCE_TBL_CTRL_OPMOD_ADWR, GSWIP_PCE_TBL_CTRL); + gswip_switch_w(priv, 0, GSWIP_PCE_TBL_MASK); + + for (i = 0; i < ARRAY_SIZE(gswip_pce_microcode); i++) { + gswip_switch_w(priv, i, GSWIP_PCE_TBL_ADDR); + gswip_switch_w(priv, gswip_pce_microcode[i].val_0, + GSWIP_PCE_TBL_VAL(0)); + gswip_switch_w(priv, gswip_pce_microcode[i].val_1, + GSWIP_PCE_TBL_VAL(1)); + gswip_switch_w(priv, gswip_pce_microcode[i].val_2, + GSWIP_PCE_TBL_VAL(2)); + gswip_switch_w(priv, gswip_pce_microcode[i].val_3, + GSWIP_PCE_TBL_VAL(3)); + + /* start the table access: */ + gswip_switch_mask(priv, 0, GSWIP_PCE_TBL_CTRL_BAS, + GSWIP_PCE_TBL_CTRL); + err = gswip_switch_r_timeout(priv, GSWIP_PCE_TBL_CTRL, + GSWIP_PCE_TBL_CTRL_BAS); + if (err) + return err; + } + + /* tell the switch that the microcode is loaded */ + gswip_switch_mask(priv, 0, GSWIP_PCE_GCTRL_0_MC_VALID, + GSWIP_PCE_GCTRL_0); + + return 0; +} + +static int gswip_setup(struct dsa_switch *ds) +{ + struct gswip_priv *priv = ds->priv; + unsigned int cpu_port = priv->hw_info->cpu_port; + int i; + int err; + + gswip_switch_w(priv, GSWIP_SWRES_R0, GSWIP_SWRES); + usleep_range(5000, 10000); + gswip_switch_w(priv, 0, GSWIP_SWRES); + + /* disable port fetch/store dma on all ports */ + for (i = 0; i < priv->hw_info->max_ports; i++) + gswip_port_disable(ds, i, NULL); + + /* enable Switch */ + gswip_mdio_mask(priv, 0, GSWIP_MDIO_GLOB_ENABLE, GSWIP_MDIO_GLOB); + + err = gswip_pce_load_microcode(priv); + if (err) { + dev_err(priv->dev, "writing PCE microcode failed, %i", err); + return err; + } + + /* Default unknown Broadcast/Multicast/Unicast port maps */ + gswip_switch_w(priv, BIT(cpu_port), GSWIP_PCE_PMAP1); + gswip_switch_w(priv, BIT(cpu_port), GSWIP_PCE_PMAP2); + gswip_switch_w(priv, BIT(cpu_port), GSWIP_PCE_PMAP3); + + /* disable auto polling */ + gswip_mdio_w(priv, 0x0, GSWIP_MDIO_MDC_CFG0); + + /* enable special tag insertion on cpu port */ + gswip_switch_mask(priv, 0, GSWIP_FDMA_PCTRL_STEN, + GSWIP_FDMA_PCTRLp(cpu_port)); + + gswip_switch_mask(priv, 0, GSWIP_MAC_CTRL_2_MLEN, + GSWIP_MAC_CTRL_2p(cpu_port)); + gswip_switch_w(priv, VLAN_ETH_FRAME_LEN + 8, GSWIP_MAC_FLEN); + gswip_switch_mask(priv, 0, GSWIP_BM_QUEUE_GCTRL_GL_MOD, + GSWIP_BM_QUEUE_GCTRL); + + /* VLAN aware Switching */ + gswip_switch_mask(priv, 0, 
GSWIP_PCE_GCTRL_0_VLAN, GSWIP_PCE_GCTRL_0); + + /* Mac Address Table Lock */ + gswip_switch_mask(priv, 0, GSWIP_PCE_GCTRL_1_MAC_GLOCK | + GSWIP_PCE_GCTRL_1_MAC_GLOCK_MOD, + GSWIP_PCE_GCTRL_1); + + gswip_port_enable(ds, cpu_port, NULL); + return 0; +} + +static void gswip_adjust_link(struct dsa_switch *ds, int port, + struct phy_device *phydev) +{ + struct gswip_priv *priv = ds->priv; + u16 macconf = phydev->mdio.addr & GSWIP_MDIO_PHY_ADDR_MASK; + u16 miirate = 0; + u16 miimode; + u16 lcl_adv = 0, rmt_adv = 0; + u8 flowctrl; + + /* do not run this for the CPU port */ + if (dsa_is_cpu_port(ds, port)) + return; + + miimode = gswip_mdio_r(priv, GSWIP_MII_CFGp(port)); + miimode &= GSWIP_MII_CFG_MODE_MASK; + + switch (phydev->speed) { + case SPEED_1000: + macconf |= GSWIP_MDIO_PHY_SPEED_G1; + miirate = GSWIP_MII_CFG_RATE_M125; + break; + + case SPEED_100: + macconf |= GSWIP_MDIO_PHY_SPEED_M100; + switch (miimode) { + case GSWIP_MII_CFG_MODE_RMIIM: + case GSWIP_MII_CFG_MODE_RMIIP: + miirate = GSWIP_MII_CFG_RATE_M50; + break; + default: + miirate = GSWIP_MII_CFG_RATE_M25; + break; + } + break; + + default: + macconf |= GSWIP_MDIO_PHY_SPEED_M10; + miirate = GSWIP_MII_CFG_RATE_M2P5; + break; + } + + if (phydev->link) + macconf |= GSWIP_MDIO_PHY_LINK_UP; + else + macconf |= GSWIP_MDIO_PHY_LINK_DOWN; + + if (phydev->duplex == DUPLEX_FULL) + macconf |= GSWIP_MDIO_PHY_FDUP_EN; + else + macconf |= GSWIP_MDIO_PHY_FDUP_DIS; + + if (phydev->pause) + rmt_adv = LPA_PAUSE_CAP; + if (phydev->asym_pause) + rmt_adv |= LPA_PAUSE_ASYM; + + if (phydev->advertising & ADVERTISED_Pause) + lcl_adv |= ADVERTISE_PAUSE_CAP; + if (phydev->advertising & ADVERTISED_Asym_Pause) + lcl_adv |= ADVERTISE_PAUSE_ASYM; + + flowctrl = mii_resolve_flowctrl_fdx(lcl_adv, rmt_adv); + + if (flowctrl & FLOW_CTRL_TX) + macconf |= GSWIP_MDIO_PHY_FCONTX_EN; + else + macconf |= GSWIP_MDIO_PHY_FCONTX_DIS; + if (flowctrl & FLOW_CTRL_RX) + macconf |= GSWIP_MDIO_PHY_FCONRX_EN; + else + macconf |= GSWIP_MDIO_PHY_FCONRX_DIS; + + gswip_mdio_mask(priv, GSWIP_MDIO_PHY_MASK, macconf, + GSWIP_MDIO_PHYp(port)); + gswip_mii_mask(priv, GSWIP_MII_CFG_RATE_MASK, miirate, + GSWIP_MII_CFGp(port)); +} + +static enum dsa_tag_protocol gswip_get_tag_protocol(struct dsa_switch *ds, + int port) +{ + return DSA_TAG_PROTO_GSWIP; +} + +static void gswip_get_strings(struct dsa_switch *ds, int port, u32 stringset, + uint8_t *data) +{ + int i; + + if (stringset != ETH_SS_STATS) + return; + + for (i = 0; i < ARRAY_SIZE(gswip_rmon_cnt); i++) + strncpy(data + i * ETH_GSTRING_LEN, gswip_rmon_cnt[i].name, + ETH_GSTRING_LEN); +} + +static u32 gswip_bcm_ram_entry_read(struct gswip_priv *priv, u32 table, + u32 index) +{ + u32 result; + int err; + + gswip_switch_w(priv, index, GSWIP_BM_RAM_ADDR); + gswip_switch_mask(priv, GSWIP_BM_RAM_CTRL_ADDR_MASK | + GSWIP_BM_RAM_CTRL_OPMOD, + table | GSWIP_BM_RAM_CTRL_BAS, + GSWIP_BM_RAM_CTRL); + + err = gswip_switch_r_timeout(priv, GSWIP_BM_RAM_CTRL, + GSWIP_BM_RAM_CTRL_BAS); + if (err) { + dev_err(priv->dev, "timeout while reading table: %u, index: %u", + table, index); + return 0; + } + + result = gswip_switch_r(priv, GSWIP_BM_RAM_VAL(0)); + result |= gswip_switch_r(priv, GSWIP_BM_RAM_VAL(1)) << 16; + + return result; +} + +static void gswip_get_ethtool_stats(struct dsa_switch *ds, int port, + uint64_t *data) +{ + struct gswip_priv *priv = ds->priv; + const struct gswip_rmon_cnt_desc *rmon_cnt; + int i; + u64 high; + + for (i = 0; i < ARRAY_SIZE(gswip_rmon_cnt); i++) { + rmon_cnt = &gswip_rmon_cnt[i]; + + data[i] = 
gswip_bcm_ram_entry_read(priv, port, + rmon_cnt->offset); + if (rmon_cnt->size == 2) { + high = gswip_bcm_ram_entry_read(priv, port, + rmon_cnt->offset + 1); + data[i] |= high << 32; + } + } +} + +static int gswip_get_sset_count(struct dsa_switch *ds, int port, int sset) +{ + if (sset != ETH_SS_STATS) + return 0; + + return ARRAY_SIZE(gswip_rmon_cnt); +} + +static const struct dsa_switch_ops gswip_switch_ops = { + .get_tag_protocol = gswip_get_tag_protocol, + .setup = gswip_setup, + .adjust_link = gswip_adjust_link, + .port_enable = gswip_port_enable, + .port_disable = gswip_port_disable, + .get_strings = gswip_get_strings, + .get_ethtool_stats = gswip_get_ethtool_stats, + .get_sset_count = gswip_get_sset_count, +}; + +static const struct xway_gphy_match_data xrx200a1x_gphy_data = { + .fe_firmware_name = "lantiq/xrx200_phy22f_a14.bin", + .ge_firmware_name = "lantiq/xrx200_phy11g_a14.bin", +}; + +static const struct xway_gphy_match_data xrx200a2x_gphy_data = { + .fe_firmware_name = "lantiq/xrx200_phy22f_a22.bin", + .ge_firmware_name = "lantiq/xrx200_phy11g_a22.bin", +}; + +static const struct xway_gphy_match_data xrx300_gphy_data = { + .fe_firmware_name = "lantiq/xrx300_phy22f_a21.bin", + .ge_firmware_name = "lantiq/xrx300_phy11g_a21.bin", +}; + +static const struct of_device_id xway_gphy_match[] = { + { .compatible = "lantiq,xrx200-gphy-fw", .data = NULL }, + { .compatible = "lantiq,xrx200a1x-gphy-fw", .data = &xrx200a1x_gphy_data }, + { .compatible = "lantiq,xrx200a2x-gphy-fw", .data = &xrx200a2x_gphy_data }, + { .compatible = "lantiq,xrx300-gphy-fw", .data = &xrx300_gphy_data }, + { .compatible = "lantiq,xrx330-gphy-fw", .data = &xrx300_gphy_data }, + {}, +}; + +static int gswip_gphy_fw_load(struct gswip_priv *priv, struct gswip_gphy_fw *gphy_fw) +{ + struct device *dev = priv->dev; + const struct firmware *fw; + void *fw_addr; + dma_addr_t dma_addr; + dma_addr_t dev_addr; + size_t size; + int ret; + + ret = clk_prepare_enable(gphy_fw->clk_gate); + if (ret) + return ret; + + reset_control_assert(gphy_fw->reset); + + ret = request_firmware(&fw, gphy_fw->fw_name, dev); + if (ret) { + dev_err(dev, "failed to load firmware: %s, error: %i\n", + gphy_fw->fw_name, ret); + return ret; + } + + /* GPHY cores need the firmware code in a persistent and contiguous + * memory area with a 16 kB boundary aligned start address. 
+ */ + size = fw->size + XRX200_GPHY_FW_ALIGN; + + fw_addr = dmam_alloc_coherent(dev, size, &dma_addr, GFP_KERNEL); + if (fw_addr) { + fw_addr = PTR_ALIGN(fw_addr, XRX200_GPHY_FW_ALIGN); + dev_addr = ALIGN(dma_addr, XRX200_GPHY_FW_ALIGN); + memcpy(fw_addr, fw->data, fw->size); + } else { + dev_err(dev, "failed to alloc firmware memory\n"); + release_firmware(fw); + return -ENOMEM; + } + + release_firmware(fw); + + ret = regmap_write(priv->rcu_regmap, gphy_fw->fw_addr_offset, dev_addr); + if (ret) + return ret; + + reset_control_deassert(gphy_fw->reset); + + return ret; +} + +static int gswip_gphy_fw_probe(struct gswip_priv *priv, + struct gswip_gphy_fw *gphy_fw, + struct device_node *gphy_fw_np, int i) +{ + struct device *dev = priv->dev; + u32 gphy_mode; + int ret; + char gphyname[10]; + + snprintf(gphyname, sizeof(gphyname), "gphy%d", i); + + gphy_fw->clk_gate = devm_clk_get(dev, gphyname); + if (IS_ERR(gphy_fw->clk_gate)) { + dev_err(dev, "Failed to lookup gate clock\n"); + return PTR_ERR(gphy_fw->clk_gate); + } + + ret = of_property_read_u32(gphy_fw_np, "reg", &gphy_fw->fw_addr_offset); + if (ret) + return ret; + + ret = of_property_read_u32(gphy_fw_np, "lantiq,gphy-mode", &gphy_mode); + /* Default to GE mode */ + if (ret) + gphy_mode = GPHY_MODE_GE; + + switch (gphy_mode) { + case GPHY_MODE_FE: + gphy_fw->fw_name = priv->gphy_fw_name_cfg->fe_firmware_name; + break; + case GPHY_MODE_GE: + gphy_fw->fw_name = priv->gphy_fw_name_cfg->ge_firmware_name; + break; + default: + dev_err(dev, "Unknown GPHY mode %d\n", gphy_mode); + return -EINVAL; + } + + gphy_fw->reset = of_reset_control_array_get_exclusive(gphy_fw_np); + if (IS_ERR(priv->gphy_fw)) { + if (PTR_ERR(priv->gphy_fw) != -EPROBE_DEFER) + dev_err(dev, "Failed to lookup gphy reset\n"); + return PTR_ERR(priv->gphy_fw); + } + + return gswip_gphy_fw_load(priv, gphy_fw); +} + +static void gswip_gphy_fw_remove(struct gswip_priv *priv, + struct gswip_gphy_fw *gphy_fw) +{ + int ret; + + /* check if the device was fully probed */ + if (!gphy_fw->fw_name) + return; + + ret = regmap_write(priv->rcu_regmap, gphy_fw->fw_addr_offset, 0); + if (ret) + dev_err(priv->dev, "can not reset GPHY FW pointer"); + + clk_disable_unprepare(gphy_fw->clk_gate); + + reset_control_put(gphy_fw->reset); +} + +static int gswip_gphy_fw_list(struct gswip_priv *priv, + struct device_node *gphy_fw_list_np) +{ + struct device *dev = priv->dev; + struct device_node *gphy_fw_np; + const struct of_device_id *match; + int err; + int i = 0; + + if (of_device_is_compatible(dev->of_node, "lantiq,xrx200-gphy-fw")) { + switch (ltq_soc_type()) { + case SOC_TYPE_VR9: + priv->gphy_fw_name_cfg = &xrx200a1x_gphy_data; + break; + case SOC_TYPE_VR9_2: + priv->gphy_fw_name_cfg = &xrx200a2x_gphy_data; + break; + } + } + + match = of_match_node(xway_gphy_match, gphy_fw_list_np); + if (match && match->data) + priv->gphy_fw_name_cfg = match->data; + + if (!priv->gphy_fw_name_cfg) { + dev_err(dev, "GPHY compatible type not supported"); + return -ENOENT; + } + + priv->num_gphy_fw = of_get_available_child_count(gphy_fw_list_np); + if (!priv->num_gphy_fw) + return -ENOENT; + + priv->rcu_regmap = syscon_regmap_lookup_by_phandle(gphy_fw_list_np, + "lantiq,rcu"); + if (IS_ERR(priv->rcu_regmap)) + return PTR_ERR(priv->rcu_regmap); + + priv->gphy_fw = devm_kmalloc_array(dev, priv->num_gphy_fw, + sizeof(*priv->gphy_fw), + GFP_KERNEL | __GFP_ZERO); + if (!priv->gphy_fw) + return -ENOMEM; + + for_each_available_child_of_node(gphy_fw_list_np, gphy_fw_np) { + err = gswip_gphy_fw_probe(priv, 
&priv->gphy_fw[i], + gphy_fw_np, i); + if (err) + goto remove_gphy; + i++; + } + + return 0; + +remove_gphy: + for (i = 0; i < priv->num_gphy_fw; i++) + gswip_gphy_fw_remove(priv, &priv->gphy_fw[i]); + return err; +} + +static int gswip_probe(struct platform_device *pdev) +{ + struct gswip_priv *priv; + struct resource *gswip_res, *mdio_res, *mii_res; + struct device_node *mdio_np, *gphy_fw_np; + struct device *dev = &pdev->dev; + int err; + int i; + u32 version; + + priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL); + if (!priv) + return -ENOMEM; + + gswip_res = platform_get_resource(pdev, IORESOURCE_MEM, 0); + priv->gswip = devm_ioremap_resource(dev, gswip_res); + if (!priv->gswip) + return -ENOMEM; + + mdio_res = platform_get_resource(pdev, IORESOURCE_MEM, 1); + priv->mdio = devm_ioremap_resource(dev, mdio_res); + if (!priv->mdio) + return -ENOMEM; + + mii_res = platform_get_resource(pdev, IORESOURCE_MEM, 2); + priv->mii = devm_ioremap_resource(dev, mii_res); + if (!priv->mii) + return -ENOMEM; + + priv->hw_info = of_device_get_match_data(dev); + if (!priv->hw_info) + return -EINVAL; + + priv->ds = dsa_switch_alloc(dev, priv->hw_info->max_ports); + if (!priv->ds) + return -ENOMEM; + + priv->ds->priv = priv; + priv->ds->ops = &gswip_switch_ops; + priv->dev = dev; + + /* bring up the mdio bus */ + gphy_fw_np = of_find_compatible_node(pdev->dev.of_node, NULL, + "lantiq,gphy-fw"); + if (gphy_fw_np) { + err = gswip_gphy_fw_list(priv, gphy_fw_np); + if (err) { + dev_err(dev, "gphy fw probe failed\n"); + return err; + } + } + + /* bring up the mdio bus */ + mdio_np = of_find_compatible_node(pdev->dev.of_node, NULL, + "lantiq,xrx200-mdio"); + if (mdio_np) { + err = gswip_mdio(priv, mdio_np); + if (err) { + dev_err(dev, "mdio probe failed\n"); + goto gphy_fw; + } + } + + err = dsa_register_switch(priv->ds); + if (err) { + dev_err(dev, "dsa switch register failed: %i\n", err); + goto mdio_bus; + } + if (priv->ds->dst->cpu_dp->index != priv->hw_info->cpu_port) { + dev_err(dev, "wrong CPU port defined, HW only supports port: %i", + priv->hw_info->cpu_port); + err = -EINVAL; + goto mdio_bus; + } + + platform_set_drvdata(pdev, priv); + + version = gswip_switch_r(priv, GSWIP_VERSION); + dev_info(dev, "probed GSWIP version %lx mod %lx\n", + (version & GSWIP_VERSION_REV_MASK) >> GSWIP_VERSION_REV_SHIFT, + (version & GSWIP_VERSION_MOD_MASK) >> GSWIP_VERSION_MOD_SHIFT); + return 0; + +mdio_bus: + if (mdio_np) + mdiobus_unregister(priv->ds->slave_mii_bus); +gphy_fw: + for (i = 0; i < priv->num_gphy_fw; i++) + gswip_gphy_fw_remove(priv, &priv->gphy_fw[i]); + return err; +} + +static int gswip_remove(struct platform_device *pdev) +{ + struct gswip_priv *priv = platform_get_drvdata(pdev); + int i; + + if (!priv) + return 0; + + /* disable the switch */ + gswip_mdio_mask(priv, GSWIP_MDIO_GLOB_ENABLE, 0, GSWIP_MDIO_GLOB); + + dsa_unregister_switch(priv->ds); + + if (priv->ds->slave_mii_bus) + mdiobus_unregister(priv->ds->slave_mii_bus); + + for (i = 0; i < priv->num_gphy_fw; i++) + gswip_gphy_fw_remove(priv, &priv->gphy_fw[i]); + + return 0; +} + +static const struct gswip_hw_info gswip_xrx200 = { + .max_ports = 7, + .cpu_port = 6, +}; + +static const struct of_device_id gswip_of_match[] = { + { .compatible = "lantiq,xrx200-gswip", .data = &gswip_xrx200 }, + {}, +}; +MODULE_DEVICE_TABLE(of, gswip_of_match); + +static struct platform_driver gswip_driver = { + .probe = gswip_probe, + .remove = gswip_remove, + .driver = { + .name = "gswip", + .of_match_table = gswip_of_match, + }, +}; + 
+module_platform_driver(gswip_driver); + +MODULE_AUTHOR("Hauke Mehrtens "); +MODULE_DESCRIPTION("Lantiq / Intel GSWIP driver"); +MODULE_LICENSE("GPL v2"); diff --git a/drivers/net/dsa/lantiq_pce.h b/drivers/net/dsa/lantiq_pce.h new file mode 100644 index 000000000000..180663138e75 --- /dev/null +++ b/drivers/net/dsa/lantiq_pce.h @@ -0,0 +1,153 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * PCE microcode extracted from UGW 7.1.1 switch api + * + * Copyright (c) 2012, 2014, 2015 Lantiq Deutschland GmbH + * Copyright (C) 2012 John Crispin + * Copyright (C) 2017 - 2018 Hauke Mehrtens + */ + +enum { + OUT_MAC0 = 0, + OUT_MAC1, + OUT_MAC2, + OUT_MAC3, + OUT_MAC4, + OUT_MAC5, + OUT_ETHTYP, + OUT_VTAG0, + OUT_VTAG1, + OUT_ITAG0, + OUT_ITAG1, /*10 */ + OUT_ITAG2, + OUT_ITAG3, + OUT_IP0, + OUT_IP1, + OUT_IP2, + OUT_IP3, + OUT_SIP0, + OUT_SIP1, + OUT_SIP2, + OUT_SIP3, /*20*/ + OUT_SIP4, + OUT_SIP5, + OUT_SIP6, + OUT_SIP7, + OUT_DIP0, + OUT_DIP1, + OUT_DIP2, + OUT_DIP3, + OUT_DIP4, + OUT_DIP5, /*30*/ + OUT_DIP6, + OUT_DIP7, + OUT_SESID, + OUT_PROT, + OUT_APP0, + OUT_APP1, + OUT_IGMP0, + OUT_IGMP1, + OUT_IPOFF, /*39*/ + OUT_NONE = 63, +}; + +/* parser's microcode length type */ +#define INSTR 0 +#define IPV6 1 +#define LENACCU 2 + +/* parser's microcode flag type */ +enum { + FLAG_ITAG = 0, + FLAG_VLAN, + FLAG_SNAP, + FLAG_PPPOE, + FLAG_IPV6, + FLAG_IPV6FL, + FLAG_IPV4, + FLAG_IGMP, + FLAG_TU, + FLAG_HOP, + FLAG_NN1, /*10 */ + FLAG_NN2, + FLAG_END, + FLAG_NO, /*13*/ +}; + +struct gswip_pce_microcode { + u16 val_3; + u16 val_2; + u16 val_1; + u16 val_0; +}; + +#define MC_ENTRY(val, msk, ns, out, len, type, flags, ipv4_len) \ + { val, msk, ((ns) << 10 | (out) << 4 | (len) >> 1),\ + ((len) & 1) << 15 | (type) << 13 | (flags) << 9 | (ipv4_len) << 8 } +static const struct gswip_pce_microcode gswip_pce_microcode[] = { + /* value mask ns fields L type flags ipv4_len */ + MC_ENTRY(0x88c3, 0xFFFF, 1, OUT_ITAG0, 4, INSTR, FLAG_ITAG, 0), + MC_ENTRY(0x8100, 0xFFFF, 2, OUT_VTAG0, 2, INSTR, FLAG_VLAN, 0), + MC_ENTRY(0x88A8, 0xFFFF, 1, OUT_VTAG0, 2, INSTR, FLAG_VLAN, 0), + MC_ENTRY(0x8100, 0xFFFF, 1, OUT_VTAG0, 2, INSTR, FLAG_VLAN, 0), + MC_ENTRY(0x8864, 0xFFFF, 17, OUT_ETHTYP, 1, INSTR, FLAG_NO, 0), + MC_ENTRY(0x0800, 0xFFFF, 21, OUT_ETHTYP, 1, INSTR, FLAG_NO, 0), + MC_ENTRY(0x86DD, 0xFFFF, 22, OUT_ETHTYP, 1, INSTR, FLAG_NO, 0), + MC_ENTRY(0x8863, 0xFFFF, 16, OUT_ETHTYP, 1, INSTR, FLAG_NO, 0), + MC_ENTRY(0x0000, 0xF800, 10, OUT_NONE, 0, INSTR, FLAG_NO, 0), + MC_ENTRY(0x0000, 0x0000, 40, OUT_ETHTYP, 1, INSTR, FLAG_NO, 0), + MC_ENTRY(0x0600, 0x0600, 40, OUT_ETHTYP, 1, INSTR, FLAG_NO, 0), + MC_ENTRY(0x0000, 0x0000, 12, OUT_NONE, 1, INSTR, FLAG_NO, 0), + MC_ENTRY(0xAAAA, 0xFFFF, 14, OUT_NONE, 1, INSTR, FLAG_NO, 0), + MC_ENTRY(0x0000, 0x0000, 41, OUT_NONE, 0, INSTR, FLAG_NO, 0), + MC_ENTRY(0x0300, 0xFF00, 41, OUT_NONE, 0, INSTR, FLAG_SNAP, 0), + MC_ENTRY(0x0000, 0x0000, 41, OUT_NONE, 0, INSTR, FLAG_NO, 0), + MC_ENTRY(0x0000, 0x0000, 41, OUT_DIP7, 3, INSTR, FLAG_NO, 0), + MC_ENTRY(0x0000, 0x0000, 18, OUT_DIP7, 3, INSTR, FLAG_PPPOE, 0), + MC_ENTRY(0x0021, 0xFFFF, 21, OUT_NONE, 1, INSTR, FLAG_NO, 0), + MC_ENTRY(0x0057, 0xFFFF, 22, OUT_NONE, 1, INSTR, FLAG_NO, 0), + MC_ENTRY(0x0000, 0x0000, 40, OUT_NONE, 0, INSTR, FLAG_NO, 0), + MC_ENTRY(0x4000, 0xF000, 24, OUT_IP0, 4, INSTR, FLAG_IPV4, 1), + MC_ENTRY(0x6000, 0xF000, 27, OUT_IP0, 3, INSTR, FLAG_IPV6, 0), + MC_ENTRY(0x0000, 0x0000, 41, OUT_NONE, 0, INSTR, FLAG_NO, 0), + MC_ENTRY(0x0000, 0x0000, 25, OUT_IP3, 2, INSTR, FLAG_NO, 0), + MC_ENTRY(0x0000, 0x0000, 26, OUT_SIP0, 4, 
INSTR, FLAG_NO, 0), + MC_ENTRY(0x0000, 0x0000, 40, OUT_NONE, 0, LENACCU, FLAG_NO, 0), + MC_ENTRY(0x1100, 0xFF00, 39, OUT_PROT, 1, INSTR, FLAG_NO, 0), + MC_ENTRY(0x0600, 0xFF00, 39, OUT_PROT, 1, INSTR, FLAG_NO, 0), + MC_ENTRY(0x0000, 0xFF00, 33, OUT_IP3, 17, INSTR, FLAG_HOP, 0), + MC_ENTRY(0x2B00, 0xFF00, 33, OUT_IP3, 17, INSTR, FLAG_NN1, 0), + MC_ENTRY(0x3C00, 0xFF00, 33, OUT_IP3, 17, INSTR, FLAG_NN2, 0), + MC_ENTRY(0x0000, 0x0000, 39, OUT_PROT, 1, INSTR, FLAG_NO, 0), + MC_ENTRY(0x0000, 0x00E0, 35, OUT_NONE, 0, INSTR, FLAG_NO, 0), + MC_ENTRY(0x0000, 0x0000, 40, OUT_NONE, 0, INSTR, FLAG_NO, 0), + MC_ENTRY(0x0000, 0xFF00, 33, OUT_NONE, 0, IPV6, FLAG_HOP, 0), + MC_ENTRY(0x2B00, 0xFF00, 33, OUT_NONE, 0, IPV6, FLAG_NN1, 0), + MC_ENTRY(0x3C00, 0xFF00, 33, OUT_NONE, 0, IPV6, FLAG_NN2, 0), + MC_ENTRY(0x0000, 0x0000, 40, OUT_PROT, 1, IPV6, FLAG_NO, 0), + MC_ENTRY(0x0000, 0x0000, 40, OUT_SIP0, 16, INSTR, FLAG_NO, 0), + MC_ENTRY(0x0000, 0x0000, 41, OUT_APP0, 4, INSTR, FLAG_IGMP, 0), + MC_ENTRY(0x0000, 0x0000, 41, OUT_NONE, 0, INSTR, FLAG_END, 0), + MC_ENTRY(0x0000, 0x0000, 41, OUT_NONE, 0, INSTR, FLAG_END, 0), + MC_ENTRY(0x0000, 0x0000, 41, OUT_NONE, 0, INSTR, FLAG_END, 0), + MC_ENTRY(0x0000, 0x0000, 41, OUT_NONE, 0, INSTR, FLAG_END, 0), + MC_ENTRY(0x0000, 0x0000, 41, OUT_NONE, 0, INSTR, FLAG_END, 0), + MC_ENTRY(0x0000, 0x0000, 41, OUT_NONE, 0, INSTR, FLAG_END, 0), + MC_ENTRY(0x0000, 0x0000, 41, OUT_NONE, 0, INSTR, FLAG_END, 0), + MC_ENTRY(0x0000, 0x0000, 41, OUT_NONE, 0, INSTR, FLAG_END, 0), + MC_ENTRY(0x0000, 0x0000, 41, OUT_NONE, 0, INSTR, FLAG_END, 0), + MC_ENTRY(0x0000, 0x0000, 41, OUT_NONE, 0, INSTR, FLAG_END, 0), + MC_ENTRY(0x0000, 0x0000, 41, OUT_NONE, 0, INSTR, FLAG_END, 0), + MC_ENTRY(0x0000, 0x0000, 41, OUT_NONE, 0, INSTR, FLAG_END, 0), + MC_ENTRY(0x0000, 0x0000, 41, OUT_NONE, 0, INSTR, FLAG_END, 0), + MC_ENTRY(0x0000, 0x0000, 41, OUT_NONE, 0, INSTR, FLAG_END, 0), + MC_ENTRY(0x0000, 0x0000, 41, OUT_NONE, 0, INSTR, FLAG_END, 0), + MC_ENTRY(0x0000, 0x0000, 41, OUT_NONE, 0, INSTR, FLAG_END, 0), + MC_ENTRY(0x0000, 0x0000, 41, OUT_NONE, 0, INSTR, FLAG_END, 0), + MC_ENTRY(0x0000, 0x0000, 41, OUT_NONE, 0, INSTR, FLAG_END, 0), + MC_ENTRY(0x0000, 0x0000, 41, OUT_NONE, 0, INSTR, FLAG_END, 0), + MC_ENTRY(0x0000, 0x0000, 41, OUT_NONE, 0, INSTR, FLAG_END, 0), + MC_ENTRY(0x0000, 0x0000, 41, OUT_NONE, 0, INSTR, FLAG_END, 0), + MC_ENTRY(0x0000, 0x0000, 41, OUT_NONE, 0, INSTR, FLAG_END, 0), + MC_ENTRY(0x0000, 0x0000, 41, OUT_NONE, 0, INSTR, FLAG_END, 0), +};
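
The MC_ENTRY() packing above is hard to read from the macro alone. The following stand-alone sketch is illustrative only and not part of the patch: it expands the first table entry on the host and prints the four 16-bit words that gswip_pce_load_microcode() writes via GSWIP_PCE_TBL_VAL(0)..VAL(3); the numeric field values (OUT_ITAG0 = 9, INSTR = 0, FLAG_ITAG = 0) are taken from the enums in lantiq_pce.h.

/*
 * Host-side illustration only: expand one MC_ENTRY() from lantiq_pce.h and
 * print the four 16-bit PCE table words.
 * val_3 = match value, val_2 = mask, val_1 = next state / output field /
 * length MSBs, val_0 = length LSB, type, flags and ipv4_len.
 */
#include <stdint.h>
#include <stdio.h>

struct gswip_pce_microcode {
	uint16_t val_3;
	uint16_t val_2;
	uint16_t val_1;
	uint16_t val_0;
};

/* same packing as in lantiq_pce.h */
#define MC_ENTRY(val, msk, ns, out, len, type, flags, ipv4_len) \
	{ val, msk, ((ns) << 10 | (out) << 4 | (len) >> 1), \
	  ((len) & 1) << 15 | (type) << 13 | (flags) << 9 | (ipv4_len) << 8 }

int main(void)
{
	/* first table entry: EtherType 0x88c3, OUT_ITAG0 = 9, INSTR, FLAG_ITAG */
	struct gswip_pce_microcode e = MC_ENTRY(0x88c3, 0xFFFF, 1, 9, 4, 0, 0, 0);

	printf("val_0=0x%04x val_1=0x%04x val_2=0x%04x val_3=0x%04x\n",
	       e.val_0, e.val_1, e.val_2, e.val_3);
	return 0;
}

Running it prints val_1 = 0x0492 (next state 1, output field 9, length 4) and val_0 = 0x0000, which is what the loader pokes into the PCE table for the first entry.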
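
gswip_gphy_fw_load() uses the usual over-allocate-and-align idiom: it requests fw->size + XRX200_GPHY_FW_ALIGN bytes from dmam_alloc_coherent() and then rounds both the CPU pointer (PTR_ALIGN) and the bus address (ALIGN) up to the next 16 kB boundary, so the firmware image still fits behind the aligned start. A minimal user-space sketch of the same arithmetic, with malloc() standing in for the DMA allocator and a made-up firmware size:

/*
 * User-space sketch of the over-allocate-and-align idiom used in
 * gswip_gphy_fw_load(). malloc() stands in for dmam_alloc_coherent();
 * only the arithmetic matters here.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define XRX200_GPHY_FW_ALIGN	(16 * 1024)
/* round x up to the next multiple of the power-of-two a */
#define ALIGN_UP(x, a)		(((x) + (a) - 1) & ~((uintptr_t)(a) - 1))

int main(void)
{
	size_t fw_size = 60000;				/* hypothetical image size */
	size_t size = fw_size + XRX200_GPHY_FW_ALIGN;	/* headroom for alignment */
	unsigned char *buf = malloc(size);
	uintptr_t aligned;

	if (!buf)
		return 1;

	aligned = ALIGN_UP((uintptr_t)buf, XRX200_GPHY_FW_ALIGN);
	/* rounding up consumes less than XRX200_GPHY_FW_ALIGN bytes, so
	 * fw_size bytes starting at 'aligned' still lie inside 'buf'
	 */
	printf("raw=%p aligned=%#jx headroom=%zu\n",
	       (void *)buf, (uintmax_t)aligned,
	       (size_t)(aligned - (uintptr_t)buf));
	free(buf);
	return 0;
}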