
[net-next,6/6] net: mvneta: enable jumbo frames for XDP

Message ID 3e0d98fafaf955868205272354e36f0eccc80430.1597842004.git.lorenzo@kernel.org
State: Changes Requested
Delegated to: David Miller
Series: mvneta: introduce XDP multi-buffer support

Commit Message

Lorenzo Bianconi Aug. 19, 2020, 1:13 p.m. UTC
Enable the capability to receive jumbo frames even if the interface is
running in XDP mode

Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
---
 drivers/net/ethernet/marvell/mvneta.c | 10 ----------
 1 file changed, 10 deletions(-)

Comments

Jakub Kicinski Aug. 19, 2020, 7:23 p.m. UTC | #1
On Wed, 19 Aug 2020 15:13:51 +0200 Lorenzo Bianconi wrote:
> Enable the capability to receive jumbo frames even if the interface is
> running in XDP mode
> 
> Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>

Hm, already? Is all the infra in place? Or does it not imply
multi-buffer.
Lorenzo Bianconi Aug. 19, 2020, 8:22 p.m. UTC | #2
> On Wed, 19 Aug 2020 15:13:51 +0200 Lorenzo Bianconi wrote:
> > Enable the capability to receive jumbo frames even if the interface is
> > running in XDP mode
> > 
> > Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
> 
> Hm, already? Is all the infra in place? Or does it not imply
> multi-buffer.
> 

Hi Jakub,

with this series mvneta supports xdp multi-buff on both rx and tx sides (XDP_TX
and ndo_xdp_xmit()), so we can remove the MTU limitation.

Regards,
Lorenzo
Jakub Kicinski Aug. 19, 2020, 9:14 p.m. UTC | #3
On Wed, 19 Aug 2020 22:22:23 +0200 Lorenzo Bianconi wrote:
> > On Wed, 19 Aug 2020 15:13:51 +0200 Lorenzo Bianconi wrote:  
> > > Enable the capability to receive jumbo frames even if the interface is
> > > running in XDP mode
> > > 
> > > Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>  
> > 
> > Hm, already? Is all the infra in place? Or does it not imply
> > multi-buffer.
> 
> with this series mvneta supports xdp multi-buff on both rx and tx sides (XDP_TX
> and ndo_xdp_xmit()), so we can remove the MTU limitation.

Is there an API for programs to access the multi-buf frames?
John Fastabend Aug. 19, 2020, 9:58 p.m. UTC | #4
Jakub Kicinski wrote:
> On Wed, 19 Aug 2020 22:22:23 +0200 Lorenzo Bianconi wrote:
> > > On Wed, 19 Aug 2020 15:13:51 +0200 Lorenzo Bianconi wrote:  
> > > > Enable the capability to receive jumbo frames even if the interface is
> > > > running in XDP mode
> > > > 
> > > > Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>  
> > > 
> > > Hm, already? Is all the infra in place? Or does it not imply
> > > multi-buffer.
> > 
> > with this series mvneta supports xdp multi-buff on both rx and tx sides (XDP_TX
> > and ndo_xdp_xmit()), so we can remove the MTU limitation.
> 
> Is there an API for programs to access the multi-buf frames?

Hi Lorenzo,

This is not enough to support multi-buffer in my opinion. I have the
same comment as Jakub. We need an API to pull in the multiple
buffers; otherwise we break the ability to parse the packets, and that
is a hard requirement for me. I don't want to lose visibility to get
jumbo frames.

At minimum we need a bpf_xdp_pull_data() to adjust pointers. In the
skmsg case we use this,

  bpf_msg_pull_data(u32 start, u32 end, u64 flags)

Where start is the offset into the packet and end is the last byte we
want to adjust the start/end pointers to. This way we can walk pages if
we want and avoid having to linearize the data unless the user actually
asks us for a block that crosses a page range. Smart users then never
do a start/end that crosses a page boundary if possible. I think the
same would apply here.

XDP by default gives you the first page start/end to use freely. If
you need to parse deeper into the payload then you call bpf_msg_pull_data
with the byte offsets needed.
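
To make this concrete, here is a rough sketch of the kind of program I
have in mind. bpf_xdp_pull_data() is of course hypothetical at this
point; the name and semantics simply mirror bpf_msg_pull_data():

  #include <linux/bpf.h>
  #include <linux/if_ether.h>
  #include <linux/ip.h>
  #include <linux/tcp.h>
  #include <bpf/bpf_helpers.h>

  SEC("xdp")
  int xdp_parse_deep(struct xdp_md *ctx)
  {
          void *data = (void *)(long)ctx->data;
          void *data_end = (void *)(long)ctx->data_end;
          __u32 need = sizeof(struct ethhdr) + sizeof(struct iphdr) +
                       sizeof(struct tcphdr);

          /* Headers may sit beyond the first buffer of a multi-buff
           * frame, so ask the kernel to pull the range we need into
           * the linear area. bpf_xdp_pull_data() is a proposed helper,
           * not an existing kernel API.
           */
          if (data + need > data_end) {
                  if (bpf_xdp_pull_data(ctx, 0, need, 0) < 0)
                          return XDP_PASS;
                  data = (void *)(long)ctx->data;
                  data_end = (void *)(long)ctx->data_end;
                  if (data + need > data_end)
                          return XDP_PASS;
          }
          /* eth/ip/tcp headers are now linear and verifier-checkable */
          return XDP_PASS;
  }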

Also we would want performance numbers to see how good/bad this is
compared to the base case.

Thanks,
John
Jesper Dangaard Brouer Aug. 20, 2020, 7:47 a.m. UTC | #5
On Wed, 19 Aug 2020 14:58:05 -0700
John Fastabend <john.fastabend@gmail.com> wrote:

> Jakub Kicinski wrote:
> > On Wed, 19 Aug 2020 22:22:23 +0200 Lorenzo Bianconi wrote:  
> > > > On Wed, 19 Aug 2020 15:13:51 +0200 Lorenzo Bianconi wrote:    
> > > > > Enable the capability to receive jumbo frames even if the interface is
> > > > > running in XDP mode
> > > > > 
> > > > > Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>    
> > > > 
> > > > Hm, already? Is all the infra in place? Or does it not imply
> > > > multi-buffer.  
> > > 
> > > with this series mvneta supports xdp multi-buff on both rx and tx sides (XDP_TX
> > > and ndo_xdp_xmit()), so we can remove the MTU limitation.
> > 
> > Is there an API for programs to access the multi-buf frames?  
> 
> Hi Lorenzo,
> 
> This is not enough to support multi-buffer in my opinion. I have the
> same comment as Jakub. We need an API to pull in the multiple
> buffers; otherwise we break the ability to parse the packets, and that
> is a hard requirement for me. I don't want to lose visibility to get
> jumbo frames.
> 
> At minimum we need a bpf_xdp_pull_data() to adjust pointers. In the
> skmsg case we use this,
> 
>   bpf_msg_pull_data(u32 start, u32 end, u64 flags)
> 
> Where start is the offset into the packet and end is the last byte we
> want to adjust the start/end pointers to. This way we can walk pages if
> we want and avoid having to linearize the data unless the user actually
> asks us for a block that crosses a page range. Smart users then never
> do a start/end that crosses a page boundary if possible. I think the
> same would apply here.
> 
> XDP by default gives you the first page start/end to use freely. If
> you need to parse deeper into the payload then you call bpf_msg_pull_data
> with the byte offsets needed.

I agree that we need a helper like this. (I also think Daniel has
proposed this before.) This would also be useful for Eric Dumazet /
Google's header-split use-case[1]. As I understood from his talk[1],
the NIC HW might not always split the packet correctly (due to HW
limits); this helper could solve part of that challenge.


[1] https://netdevconf.info/0x14/session.html?talk-the-path-to-tcp-4k-mtu-and-rx-zerocopy
Lorenzo Bianconi Aug. 20, 2020, 7:54 a.m. UTC | #6
> Jakub Kicinski wrote:
> > On Wed, 19 Aug 2020 22:22:23 +0200 Lorenzo Bianconi wrote:
> > > > On Wed, 19 Aug 2020 15:13:51 +0200 Lorenzo Bianconi wrote:  
> > > > > Enable the capability to receive jumbo frames even if the interface is
> > > > > running in XDP mode
> > > > > 
> > > > > Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>  
> > > > 
> > > > Hm, already? Is all the infra in place? Or does it not imply
> > > > multi-buffer.
> > > 
> > > with this series mvneta supports xdp multi-buff on both rx and tx sides (XDP_TX
> > > and ndo_xdp_xmit()), so we can remove the MTU limitation.
> > 
> > Is there an API for programs to access the multi-buf frames?
> 
> Hi Lorenzo,

Hi Jakub and John,

> 
> This is not enough to support multi-buffer in my opinion. I have the
> same comment as Jakub. We need an API to pull in the multiple
> buffers; otherwise we break the ability to parse the packets, and that
> is a hard requirement for me. I don't want to lose visibility to get
> jumbo frames.

I was not very clear in the commit message, sorry for that.
This series aims to finalize xdp multi-buff support for the mvneta driver only.
Our plan is to work on the helpers/metadata in a subsequent series, since
the driver support is quite orthogonal. If you think we need the helpers
in place before removing the MTU constraint, we could just drop the last
patch (6/6) and apply patches 1/6 to 5/6, which are the preliminary work
for removing the MTU constraint. Do you agree?

> 
> At minimum we need a bpf_xdp_pull_data() to adjust pointers. In the
> skmsg case we use this,
> 
>   bpf_msg_pull_data(u32 start, u32 end, u64 flags)
> 
> Where start is the offset into the packet and end is the last byte we
> want to adjust the start/end pointers to. This way we can walk pages if
> we want and avoid having to linearize the data unless the user actually
> asks us for a block that crosses a page range. Smart users then never
> do a start/end that crosses a page boundary if possible. I think the
> same would apply here.
> 
> XDP by default gives you the first page start/end to use freely. If
> you need to parse deeper into the payload then you call bpf_msg_pull_data
> with the byte offsets needed.

Our first proposal is described here [0][1]. In particular, we are assuming the
eBPF layer can access just the first fragment of a non-linear XDP buff, and
we will provide some non-linear xdp metadata (e.g. the number of segments in
the xdp_buff or the total buffer length) to the eBPF program attached to the
interface. Anyway, IMHO this mvneta series is not strictly related to that approach.
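
To illustrate the model (the struct and field names below are purely
hypothetical; they are not the actual layout proposed in [0]):

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  /* Hypothetical multi-buffer metadata, for illustration only */
  struct xdp_mb_info {
          __u32 total_len;        /* length of the whole multi-buff frame */
          __u16 nr_segs;          /* number of buffers backing the frame  */
  };

  SEC("xdp")
  int xdp_first_frag(struct xdp_md *ctx)
  {
          void *data = (void *)(long)ctx->data;
          void *data_end = (void *)(long)ctx->data_end;
          /* assume the driver exposes the info via the data_meta area */
          struct xdp_mb_info *mb = (void *)(long)ctx->data_meta;

          if ((void *)(mb + 1) > data)
                  return XDP_PASS;        /* no multi-buff metadata */

          /* data/data_end still cover just the first fragment; the
           * metadata tells the program how much frame lies beyond it
           */
          if (mb->nr_segs > 1 && mb->total_len > (__u32)(data_end - data))
                  bpf_printk("frame continues past the first fragment");

          return XDP_PASS;
  }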

Regards,
Lorenzo

[0] https://github.com/xdp-project/xdp-project/blob/master/areas/core/xdp-multi-buffer01-design.org
[1] http://people.redhat.com/lbiancon/conference/NetDevConf2020-0x14/add-xdp-on-driver.html (XDP multi-buffers section)

> 
> Also we would want performance numbers to see how good/bad this is
> compared to the base case.
> 
> Thanks,
> John

Patch

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index efac4a5e0439..42f7614cd67c 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -3735,11 +3735,6 @@ static int mvneta_change_mtu(struct net_device *dev, int mtu)
 		mtu = ALIGN(MVNETA_RX_PKT_SIZE(mtu), 8);
 	}
 
-	if (pp->xdp_prog && mtu > MVNETA_MAX_RX_BUF_SIZE) {
-		netdev_info(dev, "Illegal MTU value %d for XDP mode\n", mtu);
-		return -EINVAL;
-	}
-
 	dev->mtu = mtu;
 
 	if (!netif_running(dev)) {
@@ -4437,11 +4432,6 @@ static int mvneta_xdp_setup(struct net_device *dev, struct bpf_prog *prog,
 	struct mvneta_port *pp = netdev_priv(dev);
 	struct bpf_prog *old_prog;
 
-	if (prog && dev->mtu > MVNETA_MAX_RX_BUF_SIZE) {
-		NL_SET_ERR_MSG_MOD(extack, "Jumbo frames not supported on XDP");
-		return -EOPNOTSUPP;
-	}
-
 	if (pp->bm_priv) {
 		NL_SET_ERR_MSG_MOD(extack,
 				   "Hardware Buffer Management not supported on XDP");