From patchwork Fri Sep  5 10:40:50 2014
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 386292
X-Patchwork-Delegate: davem@davemloft.net
Message-ID: <54099332.7060909@redhat.com>
Date: Fri, 05 Sep 2014 12:40:50 +0200
From: Paolo Bonzini
To: Rusty Russell, netdev
Cc: "Michael S. Tsirkin", virtualization@lists.linux-foundation.org,
 Andy Lutomirski
Subject: Re: [PATCH 1/3] virtio_net: pass well-formed sgs to virtqueue_add_*()
References: <1409718556-3041-1-git-send-email-rusty@rustcorp.com.au>
 <1409718556-3041-2-git-send-email-rusty__16380.2289790057$1409718628$gmane$org@rustcorp.com.au>
In-Reply-To: <1409718556-3041-2-git-send-email-rusty__16380.2289790057$1409718628$gmane$org@rustcorp.com.au>
X-Mailing-List: netdev@vger.kernel.org

On 03/09/2014 06:29, Rusty Russell wrote:
> +	sg_init_table(rq->sg, MAX_SKB_FRAGS + 2);

I think 2 is enough here.  That said...

>  	sg_set_buf(rq->sg, &hdr->hdr, sizeof hdr->hdr);
> -
>  	skb_to_sgvec(skb, rq->sg + 1, 0, skb->len);
>
>  	err = virtqueue_add_inbuf(rq->vq, rq->sg, 2, skb, gfp);

... skb_to_sgvec will already make the sg well formed, so the
sg_init_table is _almost_ redundant; it is only there to remove
intermediate end marks.  The block layer takes care to remove them, but
skb_to_sgvec doesn't.

If the following patch can be accepted into net/core/skbuff.c, the
sg_init_table in virtnet_alloc_queues will suffice.

Paolo

-------------------- 8< -------------------

From: Paolo Bonzini
Subject: [PATCH] net: skb_to_sgvec: do not leave intermediate marks in the sgvec

sg_set_buf/sg_set_page will leave the end mark in place in their
argument, which may be in the middle of a scatterlist.  If we remove the
mark before calling them, we can avoid calls to sg_init_table before
skb_to_sgvec.
However, users of skb_to_sgvec_nomark now need to be careful and
possibly restore the mark.

Signed-off-by: Paolo Bonzini
---
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 163b673f9e62..a3108ef1f1c0 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -3265,6 +3265,7 @@ __skb_to_sgvec(struct sk_buff *skb, struct scatterlist *sg, int offset, int len)
 	if (copy > 0) {
 		if (copy > len)
 			copy = len;
+		sg_unmark_end(sg);
 		sg_set_buf(sg, skb->data + offset, copy);
 		elt++;
 		if ((len -= copy) == 0)
@@ -3283,6 +3284,7 @@ __skb_to_sgvec(struct sk_buff *skb, struct scatterlist *sg, int offset, int len)
 			if (copy > len)
 				copy = len;
+			sg_unmark_end(&sg[elt]);
 			sg_set_page(&sg[elt], skb_frag_page(frag), copy,
 				    frag->page_offset+offset-start);
 			elt++;
@@ -3322,7 +3324,7 @@ __skb_to_sgvec(struct sk_buff *skb, struct scatterlist *sg, int offset, int len)
  *	Scenario to use skb_to_sgvec_nomark:
  *	1. sg_init_table
  *	2. skb_to_sgvec_nomark(payload1)
- *	3. skb_to_sgvec_nomark(payload2)
+ *	3. skb_to_sgvec(payload2)
  *
  *	This is equivalent to:
  *	1. sg_init_table
diff --git a/net/ipv4/ah4.c b/net/ipv4/ah4.c
index a2afa89513a0..9ae5756d9e5f 100644
--- a/net/ipv4/ah4.c
+++ b/net/ipv4/ah4.c
@@ -227,6 +227,7 @@ static int ah_output(struct xfrm_state *x, struct sk_buff *skb)
 		*seqhi = htonl(XFRM_SKB_CB(skb)->seq.output.hi);
 		sg_set_buf(seqhisg, seqhi, seqhi_len);
 	}
+	sg_mark_end(&sg[nfrags + sglists]);
 	ahash_request_set_crypt(req, sg, icv, skb->len + seqhi_len);
 	ahash_request_set_callback(req, 0, ah_output_done, skb);
@@ -395,6 +396,7 @@ static int ah_input(struct xfrm_state *x, struct sk_buff *skb)
 		*seqhi = XFRM_SKB_CB(skb)->seq.input.hi;
 		sg_set_buf(seqhisg, seqhi, seqhi_len);
 	}
+	sg_mark_end(&sg[nfrags + sglists]);
 	ahash_request_set_crypt(req, sg, icv, skb->len + seqhi_len);
 	ahash_request_set_callback(req, 0, ah_input_done, skb);
diff --git a/net/ipv6/ah6.c b/net/ipv6/ah6.c
index 72a4930bdc0a..c680d82e43de 100644
--- a/net/ipv6/ah6.c
+++ b/net/ipv6/ah6.c
@@ -430,6 +430,8 @@ static int ah6_output(struct xfrm_state *x, struct sk_buff *skb)
 		*seqhi = htonl(XFRM_SKB_CB(skb)->seq.output.hi);
 		sg_set_buf(seqhisg, seqhi, seqhi_len);
 	}
+	sg_mark_end(&sg[nfrags + sglists]);
+
 	ahash_request_set_crypt(req, sg, icv, skb->len + seqhi_len);
 	ahash_request_set_callback(req, 0, ah6_output_done, skb);
@@ -608,6 +610,7 @@ static int ah6_input(struct xfrm_state *x, struct sk_buff *skb)
 		*seqhi = XFRM_SKB_CB(skb)->seq.input.hi;
 		sg_set_buf(seqhisg, seqhi, seqhi_len);
 	}
+	sg_mark_end(&sg[nfrags + sglists]);
 	ahash_request_set_crypt(req, sg, icv, skb->len + seqhi_len);
 	ahash_request_set_callback(req, 0, ah6_input_done, skb);