Message ID | 1337621793-12486-1-git-send-email-konrad.wilk@oracle.com |
---|---|
State | Not Applicable, archived |
Delegated to: | David Miller |
On Mon, 2012-05-21 at 13:36 -0400, Konrad Rzeszutek Wilk wrote:
> From: Adnan Misherfi <adnan.misherfi@oracle.com>
>
> A programming error caused the calculation of receive SKB slots to be
> wrong, which caused the RX ring to be erroneously declared full and
> the receive queue to be stopped. The problem shows up when two guests
> running on the same server try to communicate using large MTUs. Each
> guest is connected to a bridge with a VLAN over a bond interface, so
> traffic from one guest leaves the server on one bridge and comes back
> to the second guest on the second bridge. This can be reproduced
> using ping and one guest as follows:
>
> - Create an active-backup bond (bond0)
> - Set up VLAN 5 on bond0 (bond0.5)
> - Create a bridge (br1)
> - Add bond0.5 to the bridge (br1)
> - Start a guest and connect it to br1
> - Set an MTU of 9000 across the link
>
> Ping the guest from an external host using packet sizes of 3991 and
> 4054; ping -s 3991 -c 128 "Guest-IP-Address"
>
> At the beginning ping works fine, but after a while ping packets do
> not reach the guest because the RX ring becomes full and the queue
> gets stopped. Once the problem occurs, the only way to get out of it
> is to reboot the guest, or use xm network-detach/network-attach.
>
> ping works for packet sizes 3990, 3992, and many other sizes
> including 4000, 5000, 9000, and 1500, etc. Packet sizes of 3991 and
> 4054 quickly reproduce this problem.
>
> Signed-off-by: Adnan Misherfi <adnan.misherfi@oracle.com>
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> ---
>  drivers/net/xen-netback/netback.c |    2 +-
>  1 files changed, 1 insertions(+), 1 deletions(-)
>
> diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
> index 957cf9d..e382e5b 100644
> --- a/drivers/net/xen-netback/netback.c
> +++ b/drivers/net/xen-netback/netback.c
> @@ -212,7 +212,7 @@ unsigned int xenvif_count_skb_slots(struct xenvif *vif, struct sk_buff *skb)

The function name is xen_netbk_count_skb_slots() in net-next. This
appears to depend on the series in
<http://lists.xen.org/archives/html/xen-devel/2012-01/msg00982.html>.

>  	int i, copy_off;
>
>  	count = DIV_ROUND_UP(
> -			offset_in_page(skb->data)+skb_headlen(skb), PAGE_SIZE);
> +			offset_in_page(skb->data + skb_headlen(skb)), PAGE_SIZE);

The new version would be equivalent to:

	count = offset_in_page(skb->data + skb_headlen(skb)) != 0;

which is not right, as netbk_gop_skb() will use one slot per page.

The real problem is likely that you're not using the same condition to
stop and wake the queue. Though it appears you're also missing an
smp_mb() at the top of xenvif_notify_tx_completion().

Ben.

>  	copy_off = skb_headlen(skb) % PAGE_SIZE;
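[Editor's note: a minimal user-space sketch of Ben's equivalence argument, with the kernel macros re-created locally for illustration — this is not the netback source. Because offset_in_page() is always strictly less than PAGE_SIZE, rounding it up against PAGE_SIZE collapses to 0 or 1, whereas the original expression counts every page the linear head touches:]

```c
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

/* Local stand-ins for the kernel macros, for demonstration only. */
#define PAGE_SIZE 4096UL
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))
#define offset_in_page(p) ((uintptr_t)(p) & (PAGE_SIZE - 1))

int main(void)
{
	uintptr_t data = 0x10000 + 100;  /* head starts 100 bytes into a page */
	size_t headlen = 4033;           /* e.g. a 3991-byte ping plus headers */

	/* Pre-patch: pages spanned by [data, data + headlen) -- can exceed 1. */
	unsigned long old_count =
		DIV_ROUND_UP(offset_in_page(data) + headlen, PAGE_SIZE);

	/* Patched: offset_in_page() < PAGE_SIZE, so this is always 0 or 1. */
	unsigned long new_count =
		DIV_ROUND_UP(offset_in_page(data + headlen), PAGE_SIZE);

	printf("old=%lu new=%lu\n", old_count, new_count);  /* old=2 new=1 */
	return 0;
}
```

Here the head genuinely crosses a page boundary and occupies two pages, yet the patched expression reports one slot — an undercount, matching Ben's objection.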
On Mon, 2012-05-21 at 20:14 +0100, Ben Hutchings wrote:
> On Mon, 2012-05-21 at 13:36 -0400, Konrad Rzeszutek Wilk wrote:
> > From: Adnan Misherfi <adnan.misherfi@oracle.com>
> >
> > A programming error caused the calculation of receive SKB slots to
> > be wrong, which caused the RX ring to be erroneously declared full
> > and the receive queue to be stopped.
.. snip ..
> The function name is xen_netbk_count_skb_slots() in net-next. This
> appears to depend on the series in
> <http://lists.xen.org/archives/html/xen-devel/2012-01/msg00982.html>.

Yes, I don't think that patchset was intended for prime time just yet.
Can this issue be reproduced without it?

> >  	int i, copy_off;
> >
> >  	count = DIV_ROUND_UP(
> > -			offset_in_page(skb->data)+skb_headlen(skb), PAGE_SIZE);
> > +			offset_in_page(skb->data + skb_headlen(skb)), PAGE_SIZE);
>
> The new version would be equivalent to:
> 	count = offset_in_page(skb->data + skb_headlen(skb)) != 0;
> which is not right, as netbk_gop_skb() will use one slot per page.

Just outside the context of this patch we separately count the frag
pages.

However I think you are right if skb->data covers > 1 page, since the
new version can only ever return 0 or 1. I expect this patch papers
over the underlying issue by not stopping often enough, rather than
actually fixing the underlying issue.

> The real problem is likely that you're not using the same condition to
> stop and wake the queue.

Agreed, it would be useful to see the argument for this patch presented
in that light. In particular the relationship between
xenvif_rx_schedulable() (used to wake the queue) and
xen_netbk_must_stop_queue() (used to stop the queue).

As it stands the description describes a setup which can reproduce the
problem but doesn't really analyse what actually happens, nor justify
the correctness of the fix.

> Though it appears you're also missing an
> smp_mb() at the top of xenvif_notify_tx_completion().

I think the necessary barrier is in RING_PUSH_RESPONSES_AND_CHECK_NOTIFY
which is just prior to the single callsite of this function.

Ian.
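[Editor's note: the stop/wake discipline discussed above is a standard network-driver pattern. The sketch below is illustrative only — struct ring, ring_has_room() and ring_release_slots() are invented stand-ins for the netback ring state, while netif_stop_queue()/netif_wake_queue()/netif_queue_stopped() and smp_mb() are the real kernel APIs. It shows why both paths must test the same predicate, with a barrier between the state change and the re-check:]

```c
#include <linux/netdevice.h>

struct ring;                                /* hypothetical ring state     */
bool ring_has_room(const struct ring *r);   /* hypothetical room predicate */
void ring_release_slots(struct ring *r);    /* hypothetical slot release   */

/* Transmit path: stop only under the predicate the wake path also uses. */
void tx_maybe_stop(struct net_device *dev, struct ring *r)
{
	if (!ring_has_room(r)) {
		netif_stop_queue(dev);
		/* Make the stop visible before re-checking, in case a
		 * completion freed slots between our test and the stop. */
		smp_mb();
		if (ring_has_room(r))
			netif_wake_queue(dev);
	}
}

/* Completion path: free slots, then wake under the same predicate. */
void completion_wake(struct net_device *dev, struct ring *r)
{
	ring_release_slots(r);
	/* Order the slot release against the queue-stopped test. */
	smp_mb();
	if (netif_queue_stopped(dev) && ring_has_room(r))
		netif_wake_queue(dev);
}
```

If either side omits the barrier or tests a different predicate, a completion can slip between the transmit path's room check and its netif_stop_queue(), and the queue then stays stopped forever — consistent with the stuck-queue symptom in the commit message.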
On Mon, May 21, 2012 at 08:14:00PM +0100, Ben Hutchings wrote:
> On Mon, 2012-05-21 at 13:36 -0400, Konrad Rzeszutek Wilk wrote:
> > From: Adnan Misherfi <adnan.misherfi@oracle.com>
> >
> > A programming error caused the calculation of receive SKB slots to
> > be wrong, which caused the RX ring to be erroneously declared full
> > and the receive queue to be stopped.
.. snip ..
> The function name is xen_netbk_count_skb_slots() in net-next. This
> appears to depend on the series in
> <http://lists.xen.org/archives/html/xen-devel/2012-01/msg00982.html>.

Ah, this was based off 3.4.

> >  	int i, copy_off;
> >
> >  	count = DIV_ROUND_UP(
> > -			offset_in_page(skb->data)+skb_headlen(skb), PAGE_SIZE);
> > +			offset_in_page(skb->data + skb_headlen(skb)), PAGE_SIZE);
>
> The new version would be equivalent to:
> 	count = offset_in_page(skb->data + skb_headlen(skb)) != 0;
> which is not right, as netbk_gop_skb() will use one slot per page.
>
> The real problem is likely that you're not using the same condition to
> stop and wake the queue. Though it appears you're also missing an

Hmm..

> smp_mb() at the top of xenvif_notify_tx_completion().
>
> Ben.
>
> >  	copy_off = skb_headlen(skb) % PAGE_SIZE;
>
> --
> Ben Hutchings, Staff Engineer, Solarflare
> Not speaking for my employer; that's the marketing department's job.
> They asked us to note that Solarflare product names are trademarked.
> > > wrong, which caused the RX ring to be erroneously declared full,
> > > and the receive queue to be stopped. The problem shows up when two
> > > guests running on the same server try to communicate using large
.. snip..
> > The function name is xen_netbk_count_skb_slots() in net-next. This
> > appears to depend on the series in
> > <http://lists.xen.org/archives/html/xen-devel/2012-01/msg00982.html>.
>
> Yes, I don't think that patchset was intended for prime time just yet.
> Can this issue be reproduced without it?

It was based on 3.4, but the bug and the work to fix it were done on
top of a 3.4 version of netback backported into a 3.0 kernel. Let me
double check whether there were some missing patches.

> > >  	int i, copy_off;
> > >
> > >  	count = DIV_ROUND_UP(
> > > -			offset_in_page(skb->data)+skb_headlen(skb), PAGE_SIZE);
> > > +			offset_in_page(skb->data + skb_headlen(skb)), PAGE_SIZE);
> >
> > The new version would be equivalent to:
> > 	count = offset_in_page(skb->data + skb_headlen(skb)) != 0;
> > which is not right, as netbk_gop_skb() will use one slot per page.
>
> Just outside the context of this patch we separately count the frag
> pages.
>
> However I think you are right if skb->data covers > 1 page, since the
> new version can only ever return 0 or 1. I expect this patch papers
> over the underlying issue by not stopping often enough, rather than
> actually fixing the underlying issue.

Ah, any thoughts? Have you guys seen this behavior as well?

> > The real problem is likely that you're not using the same condition
> > to stop and wake the queue.
>
> Agreed, it would be useful to see the argument for this patch
> presented in that light. In particular the relationship between
> xenvif_rx_schedulable() (used to wake the queue) and
> xen_netbk_must_stop_queue() (used to stop the queue).

Do you have any debug patches to ... do open-heart surgery on the
rings of netback as it's hitting the issues Adnan has found?

> As it stands the description describes a setup which can reproduce
> the problem but doesn't really analyse what actually happens, nor
> justify the correctness of the fix.

Hm, Adnan - you dug into this and you have tons of notes. Could you
describe what you saw that caused this?
Konrad Rzeszutek Wilk wrote:
> > > > wrong, which caused the RX ring to be erroneously declared full,
> > > > and the receive queue to be stopped. The problem shows up when
> > > > two guests running on the same server try to communicate using
> > > > large
.. snip ..
> > Agreed, it would be useful to see the argument for this patch
> > presented in that light. In particular the relationship between
> > xenvif_rx_schedulable() (used to wake the queue) and
> > xen_netbk_must_stop_queue() (used to stop the queue).
>
> Do you have any debug patches to ... do open-heart surgery on the
> rings of netback as it's hitting the issues Adnan has found?
>
> Hm, Adnan - you dug into this and you have tons of notes. Could you
> describe what you saw that caused this?

The problem is that the function xen_netbk_count_skb_slots() returns
two different counts for packets of the same type and size (ICMP, 3991).
At the start of the test the count is one; later on the count changes
to two. Soon after the count becomes two, the ring-full condition
becomes true, the queue gets stopped, and it never gets started again.
There are a few points to make here:

1- It takes fewer than 128 ping packets to reproduce this.
2- What is interesting here is that it works correctly for many packet
   sizes including 1500, 400, 500, 9000 (3990, but not 3991).
3- The count is inconsistent for the same packet size and type.
4- I do not believe the ring was actually full when it was declared
   full; I think the consumer pointer was wrong
   (vif->rx_req_cons_peek in function xenvif_start_xmit()).
5- After changing the code, the count returned from
   xen_netbk_count_skb_slots() was always consistent and worked just
   fine; I let it run for at least 12 hours.
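[Editor's note: point 3 above may not be an inconsistency. Under the pre-patch formula, the slot count for an identical packet legitimately depends on where skb->data lands within its page. A worked example, assuming roughly 42 bytes of Ethernet/IP/ICMP headers on a 3991-byte ping:]

```c
#define PAGE_SIZE 4096
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* headlen for a 3991-byte ping, assuming 3991 + 42 = 4033 bytes of
 * linear data (14 Ethernet + 20 IP + 8 ICMP header bytes). */
int aligned = DIV_ROUND_UP(0   + 4033, PAGE_SIZE);  /* head fits one page: 1 slot   */
int shifted = DIV_ROUND_UP(100 + 4033, PAGE_SIZE);  /* head crosses a page: 2 slots */
```

Since netbk_gop_skb() consumes one ring slot per page the head touches (per Ben's note above), the occasional count of two is correct, and the queue staying stopped would then point at the stop/wake logic rather than at the counting.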
On Tue, 2012-05-22 at 20:24 +0100, Adnan Misherfi wrote:
> Konrad Rzeszutek Wilk wrote:
> > Hm, Adnan - you dug into this and you have tons of notes. Could you
> > describe what you saw that caused this?
>
> The problem is that the function xen_netbk_count_skb_slots() returns
> two different counts for packets of the same type and size (ICMP,
> 3991). At the start of the test the count is one; later on the count
> changes to two. Soon after the count becomes two, the ring-full
> condition becomes true, the queue gets stopped, and it never gets
> started again.
.. snip ..
> 5- After changing the code, the count returned from
>    xen_netbk_count_skb_slots() was always consistent and worked just
>    fine; I let it run for at least 12 hours.

That doesn't really explain why you think your fix is correct though,
which is what I was asking for.

In any case, does Simon's patch also fix things for you? As far as I
can tell that is the right fix.

Ian.
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index 957cf9d..e382e5b 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -212,7 +212,7 @@ unsigned int xenvif_count_skb_slots(struct xenvif *vif, struct sk_buff *skb)
 	int i, copy_off;
 
 	count = DIV_ROUND_UP(
-			offset_in_page(skb->data)+skb_headlen(skb), PAGE_SIZE);
+			offset_in_page(skb->data + skb_headlen(skb)), PAGE_SIZE);
 
 	copy_off = skb_headlen(skb) % PAGE_SIZE;