[bpf-next,RFCv2,0/3] AF_XDP support for veth.

Message ID 1545181493-8743-1-git-send-email-u9012063@gmail.com

William Tu Dec. 19, 2018, 1:04 a.m. UTC
This patch series adds AF_XDP async xmit support for the veth device.
The first patch adds a new API that lets non-physical NIC devices get
a packet's virtual address.  The second patch implements the async
xmit path, and the last patch adds example use cases.

I tested with two namespaces, one as sender and the other as receiver.
The packet rate is measured at the receiver side.
  ip netns add at_ns0
  ip link add p0 type veth peer name p1
  ip link set p0 netns at_ns0
  ip link set dev p1 up
  ip netns exec at_ns0 ip link set dev p0 up
  
  # receiver
  ip netns exec at_ns0 xdp_rxq_info --dev p0 --action XDP_DROP
  
  # sender with AF_XDP
  xdpsock -i p1 -t -N -z
  
  # or sender without AF_XDP
  xdpsock -i p1 -t -S

Without AF_XDP: 724 Kpps
RXQ stats       RXQ:CPU pps         issue-pps  
rx_queue_index    0:1   724339      0          
rx_queue_index    0:sum 724339     

With AF_XDP: 1.1 Mpps (with ksoftirqd 100% cpu)
RXQ stats       RXQ:CPU pps         issue-pps  
rx_queue_index    0:3   1188181     0          
rx_queue_index    0:sum 1188181    
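As a quick sanity check (not part of the patch series), the relative
improvement can be computed directly from the two pps numbers above:

```shell
# Speedup of the AF_XDP sender (xdpsock -t -N -z) over the
# non-AF_XDP sender (xdpsock -t -S), from the reported pps values.
awk 'BEGIN { printf "%.2fx\n", 1188181 / 724339 }'
```

That is roughly a 1.64x improvement in receive-side packet rate.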

v1->v2:
- refactor xsk_umem_consume_tx_virtual
- use the umems provided by the netdev
- fix a bug in locating the peer side rq with qid

William Tu (3):
  xsk: add xsk_umem_consume_tx_virtual.
  veth: support AF_XDP.
  samples: bpf: add veth AF_XDP example.

 drivers/net/veth.c             | 199 ++++++++++++++++++++++++++++++++++++++++-
 include/net/xdp_sock.h         |   7 ++
 net/xdp/xdp_umem.c             |   1 +
 net/xdp/xsk.c                  |  21 ++++-
 samples/bpf/test_veth_afxdp.sh |  67 ++++++++++++++
 5 files changed, 291 insertions(+), 4 deletions(-)
 create mode 100755 samples/bpf/test_veth_afxdp.sh