From patchwork Tue Jun 3 13:32:16 2014
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Zoltan Kiss
X-Patchwork-Id: 355548
X-Patchwork-Delegate: davem@davemloft.net
Return-Path:
X-Original-To: patchwork-incoming@ozlabs.org
Delivered-To: patchwork-incoming@ozlabs.org
Received: from vger.kernel.org (vger.kernel.org [209.132.180.67])
	by ozlabs.org (Postfix) with ESMTP id B79EB140094
	for ; Tue, 3 Jun 2014 23:33:04 +1000 (EST)
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1753903AbaFCNc7 (ORCPT );
	Tue, 3 Jun 2014 09:32:59 -0400
Received: from smtp02.citrix.com ([66.165.176.63]:41724 "EHLO SMTP02.CITRIX.COM"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1753637AbaFCNc6 (ORCPT ); Tue, 3 Jun 2014 09:32:58 -0400
X-IronPort-AV: E=Sophos;i="4.98,965,1392163200"; d="scan'208";a="138598195"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net) ([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 03 Jun 2014 13:32:33 +0000
Received: from imagesandwords.uk.xensource.com (10.80.2.133) by
	FTLPEX01CL01.citrite.net (10.13.107.78) with Microsoft SMTP Server
	id 14.3.181.6; Tue, 3 Jun 2014 09:32:32 -0400
From: Zoltan Kiss
To: , , , ,
CC: , , , Zoltan Kiss
Subject: [PATCH net] xen-netback: Fix slot estimation
Date: Tue, 3 Jun 2014 14:32:16 +0100
Message-ID: <1401802336-25182-1-git-send-email-zoltan.kiss@citrix.com>
X-Mailer: git-send-email 1.7.9.5
MIME-Version: 1.0
X-Originating-IP: [10.80.2.133]
X-DLP: MIA1
Sender: netdev-owner@vger.kernel.org
Precedence: bulk
List-ID:
X-Mailing-List: netdev@vger.kernel.org

A recent commit (a02eb4 "xen-netback: worse-case estimate in
xenvif_rx_action is underestimating") capped the slot estimation to
MAX_SKB_FRAGS, but that triggers the next BUG_ON a few lines down, as
the packet consumes more slots than estimated.
This patch removes that cap; when the frontend doesn't provide enough
slots, the skb is put back to the top of the queue and
rx_last_skb_slots is capped. If the next try fails as well, the packet
is dropped. Capping rx_last_skb_slots is necessary because if the
frontend never posts enough slots, the ring would stall.

Signed-off-by: Zoltan Kiss
Cc: Paul Durrant
Cc: Wei Liu
Cc: Ian Campbell
Cc: David Vrabel
---
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index da85ffb..7164157 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -600,13 +600,6 @@ static void xenvif_rx_action(struct xenvif *vif)
 			       PAGE_SIZE);
 	}
 
-	/* To avoid the estimate becoming too pessimal for some
-	 * frontends that limit posted rx requests, cap the estimate
-	 * at MAX_SKB_FRAGS.
-	 */
-	if (max_slots_needed > MAX_SKB_FRAGS)
-		max_slots_needed = MAX_SKB_FRAGS;
-
 	/* We may need one more slot for GSO metadata */
 	if (skb_is_gso(skb) &&
 	    (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV4 ||
@@ -615,9 +608,27 @@
 	/* If the skb may not fit then bail out now */
 	if (!xenvif_rx_ring_slots_available(vif, max_slots_needed)) {
+		/* If the skb needs more than MAX_SKB_FRAGS slots, it
+		 * can happen that the frontend never gives us enough.
+		 * To avoid spinning on that packet, first we put it back
+		 * to the top of the queue, but if the next try fails,
+		 * we drop it.
+		 */
+		if (max_slots_needed > MAX_SKB_FRAGS &&
+		    vif->rx_last_skb_slots == MAX_SKB_FRAGS) {
+			kfree_skb(skb);
+			vif->rx_last_skb_slots = 0;
+			continue;
+		}
 		skb_queue_head(&vif->rx_queue, skb);
 		need_to_notify = true;
-		vif->rx_last_skb_slots = max_slots_needed;
+		/* Cap this, otherwise if the guest never gives us
+		 * enough slots, rx_work_todo will spin
+		 */
+		vif->rx_last_skb_slots =
+			max_slots_needed > MAX_SKB_FRAGS ?
+			MAX_SKB_FRAGS :
+			max_slots_needed;
 		break;
 	} else
 		vif->rx_last_skb_slots = 0;