Message ID:   1431382788-6051-1-git-send-email-ast@plumgrid.com
State:        Accepted, archived
Delegated to: David Miller
On Mon, 11 May 2015 15:19:48 -0700 Alexei Starovoitov <ast@plumgrid.com> wrote:

> pkt_dev->last_ok was not set properly, so after the first burst
> pktgen, instead of allocating a new packet, will reuse the old one and
> advance eth_type_trans further, which means the stack will be seeing
> very short bogus packets.
>
> Fixes: 62f64aed622b ("pktgen: introduce xmit_mode '<start_xmit|netif_receive>'")
> Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
> ---
> This bug slipped through due to all the code refactoring and can be seen
> after a clean reboot. If taps, rps or tx mode was used at least once,
> the bug will be hidden.
>
> Note to users: if you don't see ip_rcv() in your perf profile, it means
> you were hitting this.
> As the commit log of 62f64aed622b says, the baseline perf profile
> should look like:
>  37.69%  kpktgend_0  [kernel.vmlinux]  [k] __netif_receive_skb_core
>  25.81%  kpktgend_0  [kernel.vmlinux]  [k] kfree_skb
>   7.22%  kpktgend_0  [kernel.vmlinux]  [k] ip_rcv
>   5.68%  kpktgend_0  [pktgen]          [k] pktgen_thread_worker
>
> Jesper, that explains why you were seeing hot:
>  atomic_long_inc(&skb->dev->rx_dropped);

Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>

Yes, just confirmed that this problem is gone. E.g. the multiqueue
script now scales without hitting this "skb->dev->rx_dropped".

Device: eth4@0
  21127682pps 10141Mb/sec (10141287360bps) errors: 10000000
Device: eth4@1
  22170175pps 10641Mb/sec (10641684000bps) errors: 10000000
Device: eth4@2
  22230977pps 10670Mb/sec (10670868960bps) errors: 10000000
Device: eth4@3
  22269033pps 10689Mb/sec (10689135840bps) errors: 10000000

I was also a little puzzled that I was not seeing kmem_cache_{alloc/free}
in my previous perf reports, but I just assumed burst was very efficient.
Good this got fixed, as my plan is to use this to profile the memory
allocator's fast path for SKB alloc/free.
Setting "burst = 0" (and flag NO_TIMESTAMP):

Device: eth4@0
  3938513pps 1890Mb/sec (1890486240bps) errors: 10000000

Thus, performance drops from 22.1Mpps to 3.9Mpps, i.e. each packet becomes
roughly 209 nanoseconds more expensive. About 20% of that is the cost of
pktgen itself; still, I'm surprised the hit is this big, as this should hit
the most optimal cache-hot case of SKB alloc/free.
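[Editor's note: the 209 ns figure above can be sanity-checked from the two pps numbers, using the eth4@1 burst result and the burst=0 result:]

```shell
# Per-packet cost is 1e9 / pps nanoseconds; the burst penalty is the difference.
# 22170175 pps (burst mode, eth4@1) vs 3938513 pps (burst = 0).
awk 'BEGIN {
    with_burst = 1e9 / 22170175;   # ~45.1 ns per packet
    no_burst   = 1e9 / 3938513;    # ~253.9 ns per packet
    printf "%.0f ns extra per packet\n", no_burst - with_burst
}'
# prints "209 ns extra per packet"
```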
On 5/12/15 1:19 AM, Jesper Dangaard Brouer wrote:
> On Mon, 11 May 2015 15:19:48 -0700
> Alexei Starovoitov <ast@plumgrid.com> wrote:
>
>> pkt_dev->last_ok was not set properly, so after the first burst
>> pktgen, instead of allocating a new packet, will reuse the old one and
>> advance eth_type_trans further, which means the stack will be seeing
>> very short bogus packets.
>>
>> Fixes: 62f64aed622b ("pktgen: introduce xmit_mode '<start_xmit|netif_receive>'")
>> Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
>> ---
>> This bug slipped through due to all the code refactoring and can be seen
>> after a clean reboot. If taps, rps or tx mode was used at least once,
>> the bug will be hidden.
>>
>> Note to users: if you don't see ip_rcv() in your perf profile, it means
>> you were hitting this.
>> As the commit log of 62f64aed622b says, the baseline perf profile
>> should look like:
>>  37.69%  kpktgend_0  [kernel.vmlinux]  [k] __netif_receive_skb_core
>>  25.81%  kpktgend_0  [kernel.vmlinux]  [k] kfree_skb
>>   7.22%  kpktgend_0  [kernel.vmlinux]  [k] ip_rcv
>>   5.68%  kpktgend_0  [pktgen]          [k] pktgen_thread_worker
>>
>> Jesper, that explains why you were seeing hot:
>>  atomic_long_inc(&skb->dev->rx_dropped);
>
> Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>

thanks!

> Yes, just confirmed that this problem is gone. E.g. the multiqueue
> script now scales without hitting this "skb->dev->rx_dropped".

great.

> Good this got fixed, as my plan is to use this to profile the memory
> allocator's fast path for SKB alloc/free.
>
> Setting "burst = 0" (and flag NO_TIMESTAMP):
> Device: eth4@0
>   3938513pps 1890Mb/sec (1890486240bps) errors: 10000000
>
> Thus, performance drops from 22.1Mpps to 3.9Mpps, i.e. each packet becomes
> roughly 209 nanoseconds more expensive. About 20% of that is the cost of
> pktgen itself; still, I'm surprised the hit is this big, as this should hit
> the most optimal cache-hot case of SKB alloc/free.
I tried similar, and I think I've seen ip_send_check(iph), called by
pktgen's fill_packet_ipv4(), be quite hot, so burst=1 will be measuring
quite a bit more than just skb alloc/free. I think your skb allocator
micro benchmark was more accurate.

btw, this multi-core pktgen into netif_receive_skb exposed all the
spin_locks in tc actions. We need to convert them to rcu.
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
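[Editor's note: for readers wanting to reproduce this kind of measurement, the setup discussed above maps onto pktgen's procfs interface roughly as follows. This is a sketch, not the script the participants used; the device name, count, burst size, and destination address are illustrative.]

```shell
#!/bin/sh
# Sketch: single-thread pktgen setup injecting into the local stack
# via xmit_mode netif_receive (introduced by 62f64aed622b).
modprobe pktgen

DEV=eth4                               # illustrative device name
THREAD=/proc/net/pktgen/kpktgend_0     # pktgen kernel thread on CPU 0
PGDEV=/proc/net/pktgen/$DEV

echo "rem_device_all"          > $THREAD
echo "add_device $DEV"         > $THREAD

echo "xmit_mode netif_receive" > $PGDEV   # feed packets to netif_receive_skb
echo "flag NO_TIMESTAMP"       > $PGDEV   # skip per-packet timestamping
echo "burst 32"                > $PGDEV   # set to 0 to disable skb reuse
echo "count 10000000"          > $PGDEV
echo "dst 198.18.0.42"         > $PGDEV   # benchmarking range, illustrative

echo "start" > /proc/net/pktgen/pgctrl    # blocks until the run completes
cat $PGDEV                                # result line contains the pps figure
```

With the fix applied, the `burst 0` variant forces a fresh skb alloc/free per packet, which is what makes it usable for profiling the allocator fast path as Jesper intends.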
From: Alexei Starovoitov <ast@plumgrid.com>
Date: Mon, 11 May 2015 15:19:48 -0700

> pkt_dev->last_ok was not set properly, so after the first burst
> pktgen, instead of allocating a new packet, will reuse the old one and
> advance eth_type_trans further, which means the stack will be seeing
> very short bogus packets.
>
> Fixes: 62f64aed622b ("pktgen: introduce xmit_mode '<start_xmit|netif_receive>'")
> Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>

Applied, thanks for this fix.
diff --git a/net/core/pktgen.c b/net/core/pktgen.c
index 8f2687da058e..62f979984a23 100644
--- a/net/core/pktgen.c
+++ b/net/core/pktgen.c
@@ -1189,6 +1189,16 @@ static ssize_t pktgen_if_write(struct file *file,
 			return -ENOTSUPP;
 		pkt_dev->xmit_mode = M_NETIF_RECEIVE;
+
+		/* make sure new packet is allocated every time
+		 * pktgen_xmit() is called
+		 */
+		pkt_dev->last_ok = 1;
+
+		/* override clone_skb if user passed default value
+		 * at module loading time
+		 */
+		pkt_dev->clone_skb = 0;
 	} else {
 		sprintf(pg_result,
 			"xmit_mode -:%s:- unknown\nAvailable modes: %s",
@@ -3415,7 +3425,6 @@ static void pktgen_xmit(struct pktgen_dev *pkt_dev)
 			/* get out of the loop and wait
 			 * until skb is consumed
 			 */
-			pkt_dev->last_ok = 1;
 			break;
 		}
 		/* skb was 'freed' by stack, so clean few
pkt_dev->last_ok was not set properly, so after the first burst
pktgen, instead of allocating a new packet, will reuse the old one and
advance eth_type_trans further, which means the stack will be seeing
very short bogus packets.

Fixes: 62f64aed622b ("pktgen: introduce xmit_mode '<start_xmit|netif_receive>'")
Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
---
This bug slipped through due to all the code refactoring and can be seen
after a clean reboot. If taps, rps or tx mode was used at least once,
the bug will be hidden.

Note to users: if you don't see ip_rcv() in your perf profile, it means
you were hitting this.
As the commit log of 62f64aed622b says, the baseline perf profile
should look like:
 37.69%  kpktgend_0  [kernel.vmlinux]  [k] __netif_receive_skb_core
 25.81%  kpktgend_0  [kernel.vmlinux]  [k] kfree_skb
  7.22%  kpktgend_0  [kernel.vmlinux]  [k] ip_rcv
  5.68%  kpktgend_0  [pktgen]          [k] pktgen_thread_worker

Jesper, that explains why you were seeing hot:
 atomic_long_inc(&skb->dev->rx_dropped);

Only my V1 version was correct in this regard. V2 and above had this
latent bug. Sorry about that.

 net/core/pktgen.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)