From patchwork Wed Dec 11 19:04:01 2013
X-Patchwork-Submitter: Jerry Chu
X-Patchwork-Id: 300288
X-Patchwork-Delegate: davem@davemloft.net
From: "H.K. Jerry Chu"
To: edumazet@google.com, herbert@gondor.apana.org.au, ogerlitz@mellanox.com,
	bhutchings@solarflare.com, davem@davemloft.net, netdev@vger.kernel.org
Cc: Jerry Chu
Subject: [PATCH v2 net-next] net-gro: Prepare GRO stack for the upcoming
	tunneling support
Date: Wed, 11 Dec 2013 11:04:01 -0800
Message-Id: <1386788641-16200-1-git-send-email-hkchu@google.com>
X-Mailer: git-send-email 1.8.5.1
X-Mailing-List: netdev@vger.kernel.org

From: Jerry Chu

This patch modifies the GRO stack to avoid the use of "network_header"
and associated macros like ip_hdr() and ipv6_hdr() in order to allow an
arbitrary number of IP hdrs (v4 or v6) to be used in the encapsulation
chain. This lays the foundation for various IP tunneling support
(IP-in-IP, GRE, VXLAN, SIT,...) to be added later.
With this patch, GRO stack traversal is now based mostly on
skb_gro_offset rather than on special hdr offsets saved in the skb
(e.g., skb->network_header). As a result, all but the top layer (i.e.,
the transport layer) must have hdrs of the same length in order for a
pkt to be considered for aggregation. Therefore, when adding a new encap
layer (e.g., for tunneling), one must check and skip flows (e.g., by
setting NAPI_GRO_CB(p)->same_flow to 0) that have a different hdr
length.

Note that unlike the network header, the transport header can and will
continue to be set by the GRO code since there will be at most one
"transport layer" in the encap chain.

Signed-off-by: H.K. Jerry Chu
Suggested-by: Eric Dumazet
Reviewed-by: Eric Dumazet
---
 include/linux/netdevice.h |  2 +-
 include/net/tcp.h         |  1 +
 net/core/dev.c            | 76 ++++++++++++++++-------------------------
 net/ipv4/af_inet.c        | 26 +++++++++++-----
 net/ipv4/tcp_offload.c    | 28 ++++++++++-------
 net/ipv6/ip6_offload.c    | 64 +++++++++++++++++++++++++++----------
 net/ipv6/tcpv6_offload.c  | 10 +++----
 7 files changed, 117 insertions(+), 90 deletions(-)

diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index 9d55e51..ba321f1b 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -1673,7 +1673,7 @@ struct offload_callbacks {
 	int			(*gso_send_check)(struct sk_buff *skb);
 	struct sk_buff		**(*gro_receive)(struct sk_buff **head,
						 struct sk_buff *skb);
-	int			(*gro_complete)(struct sk_buff *skb);
+	int			(*gro_complete)(struct sk_buff *skb, int nhoff);
 };
 
 struct packet_offload {
diff --git a/include/net/tcp.h b/include/net/tcp.h
index f7e1ab2..bcd73ff 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -1549,6 +1549,7 @@ void tcp_v4_destroy_sock(struct sock *sk);
 struct sk_buff *tcp_gso_segment(struct sk_buff *skb,
				netdev_features_t features);
 struct sk_buff **tcp_gro_receive(struct sk_buff **head, struct sk_buff *skb);
+int __tcp_gro_complete(struct sk_buff *skb, struct tcphdr *th);
 int tcp_gro_complete(struct sk_buff *skb);
 void __tcp_v4_send_check(struct sk_buff *skb, __be32 saddr, __be32 daddr);
diff --git a/net/core/dev.c b/net/core/dev.c
index 6cc98dd..bd71bec 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -3767,7 +3767,7 @@ static int napi_gro_complete(struct sk_buff *skb)
 		if (ptype->type != type || !ptype->callbacks.gro_complete)
 			continue;
 
-		err = ptype->callbacks.gro_complete(skb);
+		err = ptype->callbacks.gro_complete(skb, 0);
 		break;
 	}
 	rcu_read_unlock();
@@ -3833,6 +3833,23 @@ static void gro_list_prepare(struct napi_struct *napi, struct sk_buff *skb)
 	}
 }
 
+static void skb_gro_reset_offset(struct sk_buff *skb)
+{
+	const struct skb_shared_info *pinfo = skb_shinfo(skb);
+	const skb_frag_t *frag0 = &pinfo->frags[0];
+
+	NAPI_GRO_CB(skb)->data_offset = 0;
+	NAPI_GRO_CB(skb)->frag0 = NULL;
+	NAPI_GRO_CB(skb)->frag0_len = 0;
+
+	if (skb_mac_header(skb) == skb_tail_pointer(skb) &&
+	    pinfo->nr_frags &&
+	    !PageHighMem(skb_frag_page(frag0))) {
+		NAPI_GRO_CB(skb)->frag0 = skb_frag_address(frag0);
+		NAPI_GRO_CB(skb)->frag0_len = skb_frag_size(frag0);
+	}
+}
+
 static enum gro_result dev_gro_receive(struct napi_struct *napi, struct sk_buff *skb)
 {
 	struct sk_buff **pp = NULL;
@@ -3848,6 +3865,7 @@ static enum gro_result dev_gro_receive(struct napi_struct *napi, struct sk_buff
 	if (skb_is_gso(skb) || skb_has_frag_list(skb))
 		goto normal;
 
+	skb_gro_reset_offset(skb);
 	gro_list_prepare(napi, skb);
 
 	rcu_read_lock();
@@ -3953,27 +3971,8 @@ static gro_result_t napi_skb_finish(gro_result_t ret, struct sk_buff *skb)
 	return ret;
 }
 
-static void skb_gro_reset_offset(struct sk_buff *skb)
-{
-	const struct skb_shared_info *pinfo = skb_shinfo(skb);
-	const skb_frag_t *frag0 = &pinfo->frags[0];
-
-	NAPI_GRO_CB(skb)->data_offset = 0;
-	NAPI_GRO_CB(skb)->frag0 = NULL;
-	NAPI_GRO_CB(skb)->frag0_len = 0;
-
-	if (skb_mac_header(skb) == skb_tail_pointer(skb) &&
-	    pinfo->nr_frags &&
-	    !PageHighMem(skb_frag_page(frag0))) {
-		NAPI_GRO_CB(skb)->frag0 = skb_frag_address(frag0);
-		NAPI_GRO_CB(skb)->frag0_len = skb_frag_size(frag0);
-	}
-}
-
 gro_result_t napi_gro_receive(struct napi_struct *napi, struct sk_buff *skb)
 {
-	skb_gro_reset_offset(skb);
-
 	return napi_skb_finish(dev_gro_receive(napi, skb), skb);
 }
 EXPORT_SYMBOL(napi_gro_receive);
@@ -4007,12 +4006,7 @@ static gro_result_t napi_frags_finish(struct napi_struct *napi, struct sk_buff *
 {
 	switch (ret) {
 	case GRO_NORMAL:
-	case GRO_HELD:
-		skb->protocol = eth_type_trans(skb, skb->dev);
-
-		if (ret == GRO_HELD)
-			skb_gro_pull(skb, -ETH_HLEN);
-		else if (netif_receive_skb(skb))
+		if (netif_receive_skb(skb))
 			ret = GRO_DROP;
 		break;
 
@@ -4021,6 +4015,7 @@ static gro_result_t napi_frags_finish(struct napi_struct *napi, struct sk_buff *
 		napi_reuse_skb(napi, skb);
 		break;
 
+	case GRO_HELD:
 	case GRO_MERGED:
 		break;
 	}
@@ -4031,36 +4026,15 @@ static gro_result_t napi_frags_finish(struct napi_struct *napi, struct sk_buff *
 static struct sk_buff *napi_frags_skb(struct napi_struct *napi)
 {
 	struct sk_buff *skb = napi->skb;
-	struct ethhdr *eth;
-	unsigned int hlen;
-	unsigned int off;
 
 	napi->skb = NULL;
 
-	skb_reset_mac_header(skb);
-	skb_gro_reset_offset(skb);
-
-	off = skb_gro_offset(skb);
-	hlen = off + sizeof(*eth);
-	eth = skb_gro_header_fast(skb, off);
-	if (skb_gro_header_hard(skb, hlen)) {
-		eth = skb_gro_header_slow(skb, hlen, off);
-		if (unlikely(!eth)) {
-			napi_reuse_skb(napi, skb);
-			skb = NULL;
-			goto out;
-		}
+	if (unlikely(!pskb_may_pull(skb, sizeof(struct ethhdr)))) {
+		napi_reuse_skb(napi, skb);
+		return NULL;
 	}
+	skb->protocol = eth_type_trans(skb, skb->dev);
 
-	skb_gro_pull(skb, sizeof(*eth));
-
-	/*
-	 * This works because the only protocols we care about don't require
-	 * special handling.  We'll fix it up properly at the end.
-	 */
-	skb->protocol = eth->h_proto;
-
-out:
 	return skb;
 }
 
diff --git a/net/ipv4/af_inet.c b/net/ipv4/af_inet.c
index 70011e0..cb406e9 100644
--- a/net/ipv4/af_inet.c
+++ b/net/ipv4/af_inet.c
@@ -1377,8 +1377,12 @@ static struct sk_buff **inet_gro_receive(struct sk_buff **head,
 		if (!NAPI_GRO_CB(p)->same_flow)
 			continue;
 
-		iph2 = ip_hdr(p);
-
+		iph2 = (struct iphdr *)(p->data + off);
+		/* The above works because, with the exception of the top
+		 * (innermost) layer, we only aggregate pkts with the same
+		 * hdr length so all the hdrs we'll need to verify will start
+		 * at the same offset.
+		 */
 		if ((iph->protocol ^ iph2->protocol) |
 		    ((__force u32)iph->saddr ^ (__force u32)iph2->saddr) |
 		    ((__force u32)iph->daddr ^ (__force u32)iph2->daddr)) {
@@ -1397,8 +1401,12 @@ static struct sk_buff **inet_gro_receive(struct sk_buff **head,
 	}
 
 	NAPI_GRO_CB(skb)->flush |= flush;
+	skb_set_network_header(skb, off);
+	/* The above will be needed by the transport layer if there is one
+	 * immediately following this IP hdr.
+	 */
+
 	skb_gro_pull(skb, sizeof(*iph));
-	skb_set_transport_header(skb, skb_gro_offset(skb));
 
 	pp = ops->callbacks.gro_receive(head, skb);
 
@@ -1411,10 +1419,10 @@ out:
 	return pp;
 }
 
-static int inet_gro_complete(struct sk_buff *skb)
+static int inet_gro_complete(struct sk_buff *skb, int nhoff)
 {
-	__be16 newlen = htons(skb->len - skb_network_offset(skb));
-	struct iphdr *iph = ip_hdr(skb);
+	__be16 newlen = htons(skb->len - nhoff);
+	struct iphdr *iph = (struct iphdr *)(skb->data + nhoff);
 	const struct net_offload *ops;
 	int proto = iph->protocol;
 	int err = -ENOSYS;
@@ -1427,7 +1435,11 @@ static int inet_gro_complete(struct sk_buff *skb)
 	if (WARN_ON(!ops || !ops->callbacks.gro_complete))
 		goto out_unlock;
 
-	err = ops->callbacks.gro_complete(skb);
+	/* Only need to add sizeof(*iph) to get to the next hdr below
+	 * because any hdr with options will have been flushed in
+	 * inet_gro_receive().
+	 */
+	err = ops->callbacks.gro_complete(skb, nhoff + sizeof(*iph));
 
 out_unlock:
 	rcu_read_unlock();
diff --git a/net/ipv4/tcp_offload.c b/net/ipv4/tcp_offload.c
index 0560635..c67bb45 100644
--- a/net/ipv4/tcp_offload.c
+++ b/net/ipv4/tcp_offload.c
@@ -175,6 +175,7 @@ struct sk_buff **tcp_gro_receive(struct sk_buff **head, struct sk_buff *skb)
 		goto out;
 	}
 
+	skb_set_transport_header(skb, off);
 	skb_gro_pull(skb, thlen);
 
 	len = skb_gro_len(skb);
@@ -184,7 +185,7 @@ struct sk_buff **tcp_gro_receive(struct sk_buff **head, struct sk_buff *skb)
 		if (!NAPI_GRO_CB(p)->same_flow)
 			continue;
 
-		th2 = tcp_hdr(p);
+		th2 = (struct tcphdr *)(p->data + off);
 
 		if (*(u32 *)&th->source ^ *(u32 *)&th2->source) {
 			NAPI_GRO_CB(p)->same_flow = 0;
@@ -217,7 +218,7 @@ found:
 	}
 
 	p = *head;
-	th2 = tcp_hdr(p);
+	th2 = (struct tcphdr *)(p->data + off);
 	tcp_flag_word(th2) |= flags & (TCP_FLAG_FIN | TCP_FLAG_PSH);
 
 out_check_final:
@@ -236,11 +237,9 @@ out:
 }
 EXPORT_SYMBOL(tcp_gro_receive);
 
-int tcp_gro_complete(struct sk_buff *skb)
+int __tcp_gro_complete(struct sk_buff *skb, struct tcphdr *th)
 {
-	struct tcphdr *th = tcp_hdr(skb);
-
-	skb->csum_start = skb_transport_header(skb) - skb->head;
+	skb->csum_start = (unsigned char *)th - skb->head;
 	skb->csum_offset = offsetof(struct tcphdr, check);
 	skb->ip_summed = CHECKSUM_PARTIAL;
 
@@ -251,6 +250,12 @@ int tcp_gro_complete(struct sk_buff *skb)
 	return 0;
 }
+EXPORT_SYMBOL(__tcp_gro_complete);
+
+int tcp_gro_complete(struct sk_buff *skb)
+{
+	return __tcp_gro_complete(skb, tcp_hdr(skb));
+}
 EXPORT_SYMBOL(tcp_gro_complete);
 
 static int tcp_v4_gso_send_check(struct sk_buff *skb)
@@ -272,6 +277,7 @@ static int tcp_v4_gso_send_check(struct sk_buff *skb)
 static struct sk_buff **tcp4_gro_receive(struct sk_buff **head, struct sk_buff *skb)
 {
+	/* Use the IP hdr immediately preceding this transport hdr */
 	const struct iphdr *iph = skb_gro_network_header(skb);
 	__wsum wsum;
 
@@ -303,16 +309,16 @@ skip_csum:
 	return tcp_gro_receive(head, skb);
 }
 
-static int tcp4_gro_complete(struct sk_buff *skb)
+static int tcp4_gro_complete(struct sk_buff *skb, int thoff)
 {
 	const struct iphdr *iph = ip_hdr(skb);
-	struct tcphdr *th = tcp_hdr(skb);
+	struct tcphdr *th = (struct tcphdr *)(skb->data + thoff);
 
-	th->check = ~tcp_v4_check(skb->len - skb_transport_offset(skb),
-				  iph->saddr, iph->daddr, 0);
+	th->check = ~tcp_v4_check(skb->len - thoff, iph->saddr,
+				  iph->daddr, 0);
 	skb_shinfo(skb)->gso_type = SKB_GSO_TCPV4;
 
-	return tcp_gro_complete(skb);
+	return __tcp_gro_complete(skb, th);
 }
 
 static const struct net_offload tcpv4_offload = {
diff --git a/net/ipv6/ip6_offload.c b/net/ipv6/ip6_offload.c
index 4b85169..448d238 100644
--- a/net/ipv6/ip6_offload.c
+++ b/net/ipv6/ip6_offload.c
@@ -154,6 +154,35 @@ out:
 	return segs;
 }
 
+/* Return the total length of all the extension hdrs, following the same
+ * logic as ipv6_gso_pull_exthdrs() when parsing ext-hdrs.
+ */
+static int ipv6_exthdrs_len(struct ipv6hdr *iph,
+			    const struct net_offload **opps)
+{
+	struct ipv6_opt_hdr *opth = NULL;
+	int len = 0, proto, optlen;
+
+	proto = iph->nexthdr;
+	for (;;) {
+		if (proto != NEXTHDR_HOP) {
+			*opps = rcu_dereference(inet6_offloads[proto]);
+			if (unlikely(!(*opps)))
+				break;
+			if (!((*opps)->flags & INET6_PROTO_GSO_EXTHDR))
+				break;
+		}
+		if (opth == NULL)
+			opth = (void *)(iph+1);
+		else
+			opth = (void *)opth + optlen;
+		optlen = ipv6_optlen(opth);
+		len += optlen;
+		proto = opth->nexthdr;
+	}
+	return len;
+}
+
 static struct sk_buff **ipv6_gro_receive(struct sk_buff **head,
					 struct sk_buff *skb)
 {
@@ -167,6 +196,7 @@ static struct sk_buff **ipv6_gro_receive(struct sk_buff **head,
 	int flush = 1;
 	int proto;
 	__wsum csum;
+	__u16 next_header;
 
 	off = skb_gro_offset(skb);
 	hlen = off + sizeof(*iph);
@@ -176,9 +206,9 @@ static struct sk_buff **ipv6_gro_receive(struct sk_buff **head,
 		if (unlikely(!iph))
 			goto out;
 	}
-
+	skb_set_network_header(skb, off);
 	skb_gro_pull(skb, sizeof(*iph));
-	skb_set_transport_header(skb, skb_gro_offset(skb));
+	next_header = skb->data - skb->head + skb_gro_offset(skb);
 
 	flush += ntohs(iph->payload_len) != skb_gro_len(skb);
 
@@ -188,8 +218,8 @@ static struct sk_buff **ipv6_gro_receive(struct sk_buff **head,
 	if (!ops || !ops->callbacks.gro_receive) {
 		__pskb_pull(skb, skb_gro_offset(skb));
 		proto = ipv6_gso_pull_exthdrs(skb, proto);
-		skb_gro_pull(skb, -skb_transport_offset(skb));
-		skb_reset_transport_header(skb);
+		skb_gro_pull(skb, -(int)(skb->head + next_header - skb->data));
+		next_header = skb->data - skb->head;
 		__skb_push(skb, skb_gro_offset(skb));
 
 		ops = rcu_dereference(inet6_offloads[proto]);
@@ -202,7 +232,7 @@ static struct sk_buff **ipv6_gro_receive(struct sk_buff **head,
 	NAPI_GRO_CB(skb)->proto = proto;
 
 	flush--;
-	nlen = skb_network_header_len(skb);
+	nlen = next_header - skb->network_header;
 
 	for (p = *head; p; p = p->next) {
 		const struct ipv6hdr *iph2;
@@ -211,12 +241,16 @@ static struct sk_buff **ipv6_gro_receive(struct sk_buff **head,
 		if (!NAPI_GRO_CB(p)->same_flow)
 			continue;
 
-		iph2 = ipv6_hdr(p);
+		iph2 = (struct ipv6hdr *)(p->data + off);
 		first_word = *(__be32 *)iph ^ *(__be32 *)iph2;
 
-		/* All fields must match except length and Traffic Class. */
-		if (nlen != skb_network_header_len(p) ||
-		    (first_word & htonl(0xF00FFFFF)) ||
+		/* All fields must match except length and Traffic Class.
+		 * XXX skbs on the gro_list have all been parsed and pulled
+		 * already so we don't need to compare nlen
+		 * (nlen != (sizeof(*iph2) + ipv6_exthdrs_len(iph2, &ops)))
+		 * memcmp() alone below is sufficient, right?
+		 */
+		if ((first_word & htonl(0xF00FFFFF)) ||
 		    memcmp(&iph->nexthdr, &iph2->nexthdr,
			   nlen - offsetof(struct ipv6hdr, nexthdr))) {
 			NAPI_GRO_CB(p)->same_flow = 0;
@@ -245,21 +279,21 @@ out:
 	return pp;
 }
 
-static int ipv6_gro_complete(struct sk_buff *skb)
+static int ipv6_gro_complete(struct sk_buff *skb, int nhoff)
 {
 	const struct net_offload *ops;
-	struct ipv6hdr *iph = ipv6_hdr(skb);
+	struct ipv6hdr *iph = (struct ipv6hdr *)(skb->data + nhoff);
 	int err = -ENOSYS;
 
-	iph->payload_len = htons(skb->len - skb_network_offset(skb) -
-				 sizeof(*iph));
+	iph->payload_len = htons(skb->len - nhoff - sizeof(*iph));
 
 	rcu_read_lock();
-	ops = rcu_dereference(inet6_offloads[NAPI_GRO_CB(skb)->proto]);
+
+	nhoff += sizeof(*iph) + ipv6_exthdrs_len(iph, &ops);
 	if (WARN_ON(!ops || !ops->callbacks.gro_complete))
 		goto out_unlock;
 
-	err = ops->callbacks.gro_complete(skb);
+	err = ops->callbacks.gro_complete(skb, nhoff);
 
 out_unlock:
 	rcu_read_unlock();
diff --git a/net/ipv6/tcpv6_offload.c b/net/ipv6/tcpv6_offload.c
index 6d18157..7878ca2 100644
--- a/net/ipv6/tcpv6_offload.c
+++ b/net/ipv6/tcpv6_offload.c
@@ -66,16 +66,16 @@ skip_csum:
 	return tcp_gro_receive(head, skb);
 }
 
-static int tcp6_gro_complete(struct sk_buff *skb)
+static int tcp6_gro_complete(struct sk_buff *skb, int thoff)
 {
 	const struct ipv6hdr *iph = ipv6_hdr(skb);
-	struct tcphdr *th = tcp_hdr(skb);
+	struct tcphdr *th = (struct tcphdr *)(skb->data + thoff);
 
-	th->check = ~tcp_v6_check(skb->len - skb_transport_offset(skb),
-				  &iph->saddr, &iph->daddr, 0);
+	th->check = ~tcp_v6_check(skb->len - thoff, &iph->saddr,
+				  &iph->daddr, 0);
 	skb_shinfo(skb)->gso_type = SKB_GSO_TCPV6;
 
-	return tcp_gro_complete(skb);
+	return __tcp_gro_complete(skb, th);
 }
 
 static const struct net_offload tcpv6_offload = {