
[net-next] xen-netfront: convert to GRO API and advertise this feature

Message ID 1379779543-27122-1-git-send-email-wei.liu2@citrix.com
State Changes Requested, archived
Delegated to: David Miller

Commit Message

Wei Liu Sept. 21, 2013, 4:05 p.m. UTC
Anirban was seeing netfront receive MTU-sized packets, which downgraded
throughput. The following patch makes netfront use the GRO API, which
improves throughput for that case.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: Anirban Chakraborty <abchak@juniper.net>
Cc: Ian Campbell <ian.campbell@citrix.com>
---
 drivers/net/xen-netfront.c |    7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)
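
For context, this is the standard NAPI GRO conversion pattern: the poll routine hands each received skb to napi_gro_receive() instead of netif_receive_skb(), and flushes any packets still held by GRO before completing the poll. A minimal sketch of that pattern follows, using hypothetical names (example_poll, example_priv, example_dequeue_rx) rather than the actual netfront code; see the full diff at the bottom of this page for what the patch really changes.

/* Sketch of a NAPI poll routine using the GRO API. napi_gro_receive()
 * may coalesce consecutive packets of the same flow into one large skb
 * before passing it up the stack.
 */
static int example_poll(struct napi_struct *napi, int budget)
{
	struct example_priv *priv = container_of(napi, struct example_priv, napi);
	struct sk_buff *skb;
	int work_done = 0;

	while (work_done < budget && (skb = example_dequeue_rx(priv)) != NULL) {
		napi_gro_receive(napi, skb);	/* was: netif_receive_skb(skb) */
		work_done++;
	}

	if (work_done < budget) {
		/* Flush packets still held by GRO before ending the poll. */
		napi_gro_flush(napi, false);
		napi_complete(napi);
	}

	return work_done;
}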

Comments

Jason Wang Sept. 22, 2013, 6:29 a.m. UTC | #1
On 09/22/2013 12:05 AM, Wei Liu wrote:
> Anirban was seeing netfront received MTU size packets, which downgraded
> throughput. The following patch makes netfront use GRO API which
> improves throughput for that case.
>
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> Signed-off-by: Anirban Chakraborty <abchak@juniper.net>
> Cc: Ian Campbell <ian.campbell@citrix.com>

Maybe a dumb question: doesn't Xen depend on the driver of the host card
to do GRO and pass it to netfront? What is the case where netfront can
receive an MTU-sized packet, for a card that does not support GRO in the
host? Doing GRO twice may introduce extra overhead.

Thanks
Wei Liu Sept. 22, 2013, 12:09 p.m. UTC | #2
On Sun, Sep 22, 2013 at 02:29:15PM +0800, Jason Wang wrote:
> On 09/22/2013 12:05 AM, Wei Liu wrote:
> > Anirban was seeing netfront received MTU size packets, which downgraded
> > throughput. The following patch makes netfront use GRO API which
> > improves throughput for that case.
> >
> > Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> > Signed-off-by: Anirban Chakraborty <abchak@juniper.net>
> > Cc: Ian Campbell <ian.campbell@citrix.com>
> 
> Maybe a dumb question: doesn't Xen depends on the driver of host card to
> do GRO and pass it to netfront? What the case that netfront can receive

That would be the ideal situation: netback pushes large packets to
netfront and netfront sees large packets.

> a MTU size packet, for a card that does not support GRO in host? Doing

However, Anirban saw a case where the backend interface receives large
packets but netfront sees MTU-sized packets, so my thought is that some
configuration leads to this issue. As we cannot tell users what to
enable and what not to enable, I would like to solve this within our
driver.

> GRO twice may introduce extra overheads.
> 

AIUI, if the packet the frontend sees is already large then the GRO path
is quite short and will not introduce a heavy penalty, while on the
other hand, if the packet is segmented, doing GRO improves throughput.

Wei.

> Thanks
Eric Dumazet Sept. 22, 2013, 2:55 p.m. UTC | #3
On Sat, 2013-09-21 at 17:05 +0100, Wei Liu wrote:
> Anirban was seeing netfront received MTU size packets, which downgraded
> throughput. The following patch makes netfront use GRO API which
> improves throughput for that case.

> -	netdev->hw_features	= NETIF_F_IP_CSUM | NETIF_F_SG | NETIF_F_TSO;
> +	netdev->hw_features	= NETIF_F_IP_CSUM | NETIF_F_SG | NETIF_F_TSO |
> +				  NETIF_F_GRO;


This part is not needed.


Anirban Chakraborty Sept. 22, 2013, 11:04 p.m. UTC | #4
On Sep 22, 2013, at 5:09 AM, Wei Liu <wei.liu2@citrix.com> wrote:

> On Sun, Sep 22, 2013 at 02:29:15PM +0800, Jason Wang wrote:
>> On 09/22/2013 12:05 AM, Wei Liu wrote:
>>> Anirban was seeing netfront received MTU size packets, which downgraded
>>> throughput. The following patch makes netfront use GRO API which
>>> improves throughput for that case.
>>> 
>>> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
>>> Signed-off-by: Anirban Chakraborty <abchak@juniper.net>
>>> Cc: Ian Campbell <ian.campbell@citrix.com>
>> 
>> Maybe a dumb question: doesn't Xen depends on the driver of host card to
>> do GRO and pass it to netfront? What the case that netfront can receive
> 
> The would be the ideal situation. Netback pushes large packets to
> netfront and netfront sees large packets.
> 
>> a MTU size packet, for a card that does not support GRO in host? Doing
> 
> However Anirban saw the case when backend interface receives large
> packets but netfront sees MTU size packets, so my thought is there is
> certain configuration that leads to this issue. As we cannot tell
> users what to enable and what not to enable so I would like to solve
> this within our driver.
> 
>> GRO twice may introduce extra overheads.
>> 
> 
> AIUI if the packet that frontend sees is large already then the GRO path
> is quite short which will not introduce heavy penalty, while on the
> other hand if packet is segmented doing GRO improves throughput.
> 

Thanks, Wei, for explaining and submitting the patch. I would like to add the following to what you have already mentioned.
In my configuration, I was seeing netback push large packets to the guest (CentOS 6.4) but netfront receive MTU-sized packets. With this patch applied, I do see large packets received on the guest interface. As a result there was a substantial throughput improvement on the guest side (2.8 Gbps to 3.8 Gbps). Also, note that the host NIC driver already had GRO enabled.

-Anirban
Anirban Chakraborty Sept. 22, 2013, 11:09 p.m. UTC | #5
On Sep 22, 2013, at 7:55 AM, Eric Dumazet <eric.dumazet@gmail.com> wrote:

> On Sat, 2013-09-21 at 17:05 +0100, Wei Liu wrote:
>> Anirban was seeing netfront received MTU size packets, which downgraded
>> throughput. The following patch makes netfront use GRO API which
>> improves throughput for that case.
> 
>> -	netdev->hw_features	= NETIF_F_IP_CSUM | NETIF_F_SG | NETIF_F_TSO;
>> +	netdev->hw_features	= NETIF_F_IP_CSUM | NETIF_F_SG | NETIF_F_TSO |
>> +				  NETIF_F_GRO;
> 
> 
> This part is not needed.

Shouldn't the flag be set? In dev_gro_receive() we check whether this flag is set:

        if (!(skb->dev->features & NETIF_F_GRO) || netpoll_rx_on(skb))
               goto normal;

-Anirban



Jason Wang Sept. 23, 2013, 5:02 a.m. UTC | #6
On 09/23/2013 07:04 AM, Anirban Chakraborty wrote:
> On Sep 22, 2013, at 5:09 AM, Wei Liu <wei.liu2@citrix.com> wrote:
>
>> On Sun, Sep 22, 2013 at 02:29:15PM +0800, Jason Wang wrote:
>>> On 09/22/2013 12:05 AM, Wei Liu wrote:
>>>> Anirban was seeing netfront received MTU size packets, which downgraded
>>>> throughput. The following patch makes netfront use GRO API which
>>>> improves throughput for that case.
>>>>
>>>> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
>>>> Signed-off-by: Anirban Chakraborty <abchak@juniper.net>
>>>> Cc: Ian Campbell <ian.campbell@citrix.com>
>>> Maybe a dumb question: doesn't Xen depends on the driver of host card to
>>> do GRO and pass it to netfront? What the case that netfront can receive
>> The would be the ideal situation. Netback pushes large packets to
>> netfront and netfront sees large packets.
>>
>>> a MTU size packet, for a card that does not support GRO in host? Doing
>> However Anirban saw the case when backend interface receives large
>> packets but netfront sees MTU size packets, so my thought is there is
>> certain configuration that leads to this issue. As we cannot tell
>> users what to enable and what not to enable so I would like to solve
>> this within our driver.
>>
>>> GRO twice may introduce extra overheads.
>>>
>> AIUI if the packet that frontend sees is large already then the GRO path
>> is quite short which will not introduce heavy penalty, while on the
>> other hand if packet is segmented doing GRO improves throughput.
>>
> Thanks Wei, for explaining and submitting the patch. I would like add following to what you have already mentioned.
> In my configuration, I was seeing netback was pushing large packets to the guest (Centos 6.4) but the netfront was receiving MTU sized packets. With this patch on, I do see large packets received on the guest interface. As a result there was substantial throughput improvement in the guest side (2.8 Gbps to 3.8 Gbps). Also, note that the host NIC driver was enabled for GRO already. 
>
> -Anirban

In this case, even if you still want to do GRO, it's better to find the
root cause of why the GSO packets were segmented (maybe GSO was not
enabled for netback?), since it introduces extra overhead.
Eric Dumazet Sept. 23, 2013, 5:58 a.m. UTC | #7
On Sun, 2013-09-22 at 23:09 +0000, Anirban Chakraborty wrote:
> On Sep 22, 2013, at 7:55 AM, Eric Dumazet <eric.dumazet@gmail.com> wrote:
> 
> > On Sat, 2013-09-21 at 17:05 +0100, Wei Liu wrote:
> >> Anirban was seeing netfront received MTU size packets, which downgraded
> >> throughput. The following patch makes netfront use GRO API which
> >> improves throughput for that case.
> > 
> >> -	netdev->hw_features	= NETIF_F_IP_CSUM | NETIF_F_SG | NETIF_F_TSO;
> >> +	netdev->hw_features	= NETIF_F_IP_CSUM | NETIF_F_SG | NETIF_F_TSO |
> >> +				  NETIF_F_GRO;
> > 
> > 
> > This part is not needed.
> 
> Shouldn't the flag be set? In dev_gro_receive() we do check if this flag is set or not:
> 
>         if (!(skb->dev->features & NETIF_F_GRO) || netpoll_rx_on(skb))
>                goto normal;

Drivers do not set NETIF_F_GRO themselves; they do not need to.

Look at other drivers which are GRO-ready: NETIF_F_GRO is enabled by
default by the core networking stack, in register_netdevice():


dev->hw_features |= NETIF_F_SOFT_FEATURES;
dev->features |= NETIF_F_SOFT_FEATURES;
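
For reference, NETIF_F_SOFT_FEATURES covers the features the core stack implements in software, which is why GRO ends up enabled without the driver requesting it. In kernels of this era it is defined in include/linux/netdev_features.h along these lines:

/* Features the core networking stack provides in software; turned on
 * for every device by register_netdevice(). */
#define NETIF_F_SOFT_FEATURES	(NETIF_F_GSO | NETIF_F_GRO)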



Annie.li Sept. 23, 2013, 6:22 a.m. UTC | #8
On 2013-9-23 13:02, Jason Wang wrote:
> On 09/23/2013 07:04 AM, Anirban Chakraborty wrote:
>> On Sep 22, 2013, at 5:09 AM, Wei Liu <wei.liu2@citrix.com> wrote:
>>
>>> On Sun, Sep 22, 2013 at 02:29:15PM +0800, Jason Wang wrote:
>>>> On 09/22/2013 12:05 AM, Wei Liu wrote:
>>>>> Anirban was seeing netfront received MTU size packets, which downgraded
>>>>> throughput. The following patch makes netfront use GRO API which
>>>>> improves throughput for that case.
>>>>>
>>>>> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
>>>>> Signed-off-by: Anirban Chakraborty <abchak@juniper.net>
>>>>> Cc: Ian Campbell <ian.campbell@citrix.com>
>>>> Maybe a dumb question: doesn't Xen depends on the driver of host card to
>>>> do GRO and pass it to netfront? What the case that netfront can receive
>>> The would be the ideal situation. Netback pushes large packets to
>>> netfront and netfront sees large packets.
>>>
>>>> a MTU size packet, for a card that does not support GRO in host? Doing
>>> However Anirban saw the case when backend interface receives large
>>> packets but netfront sees MTU size packets, so my thought is there is
>>> certain configuration that leads to this issue. As we cannot tell
>>> users what to enable and what not to enable so I would like to solve
>>> this within our driver.
>>>
>>>> GRO twice may introduce extra overheads.
>>>>
>>> AIUI if the packet that frontend sees is large already then the GRO path
>>> is quite short which will not introduce heavy penalty, while on the
>>> other hand if packet is segmented doing GRO improves throughput.
>>>
>> Thanks Wei, for explaining and submitting the patch. I would like add following to what you have already mentioned.
>> In my configuration, I was seeing netback was pushing large packets to the guest (Centos 6.4) but the netfront was receiving MTU sized packets. With this patch on, I do see large packets received on the guest interface. As a result there was substantial throughput improvement in the guest side (2.8 Gbps to 3.8 Gbps). Also, note that the host NIC driver was enabled for GRO already.
>>
>> -Anirban
> In this case, even if you still want to do GRO. It's better to find the
> root cause of why the GSO packet were segmented

Totally agree, we need to find out why large packets are segmented
only in the different-host case.

> (maybe GSO were not
> enabled for netback?), since it introduces extra overheads.

From Anirban's feedback, large packets can be seen on the vif interface,
and even on guests running on the same host.

Thanks
Annie
Anirban Chakraborty Sept. 23, 2013, 8:27 p.m. UTC | #9
On Sep 22, 2013, at 10:58 PM, Eric Dumazet <eric.dumazet@gmail.com> wrote:

> On Sun, 2013-09-22 at 23:09 +0000, Anirban Chakraborty wrote:
>> On Sep 22, 2013, at 7:55 AM, Eric Dumazet <eric.dumazet@gmail.com> wrote:
>> 
>>> On Sat, 2013-09-21 at 17:05 +0100, Wei Liu wrote:
>>>> Anirban was seeing netfront received MTU size packets, which downgraded
>>>> throughput. The following patch makes netfront use GRO API which
>>>> improves throughput for that case.
>>> 
>>>> -	netdev->hw_features	= NETIF_F_IP_CSUM | NETIF_F_SG | NETIF_F_TSO;
>>>> +	netdev->hw_features	= NETIF_F_IP_CSUM | NETIF_F_SG | NETIF_F_TSO |
>>>> +				  NETIF_F_GRO;
>>> 
>>> 
>>> This part is not needed.
>> 
>> Shouldn't the flag be set? In dev_gro_receive() we do check if this flag is set or not:
>> 
>>        if (!(skb->dev->features & NETIF_F_GRO) || netpoll_rx_on(skb))
>>               goto normal;
> 
> Drivers do not set NETIF_F_GRO themselves, they do not need to.
> 
> Look at other drivers which are GRO ready : NETIF_F_GRO is enabled by
> default by core networking stack, in register_netdevice()
> 
> 
> dev->hw_features |= NETIF_F_SOFT_FEATURES;
> dev->features |= NETIF_F_SOFT_FEATURES;

I didn't realize that drivers no longer need to set the GRO flag explicitly. It looks like this has changed since 3.2. I was looking at kernel version 2.6.32.43 (which corresponds to the dom0 kernel), where the problem is happening.

-Anirban
Anirban Chakraborty Sept. 23, 2013, 8:32 p.m. UTC | #10
On Sep 22, 2013, at 11:22 PM, annie li <annie.li@oracle.com> wrote:

> 
> On 2013-9-23 13:02, Jason Wang wrote:
>> On 09/23/2013 07:04 AM, Anirban Chakraborty wrote:
>>> On Sep 22, 2013, at 5:09 AM, Wei Liu <wei.liu2@citrix.com> wrote:
>>> 
>>>> On Sun, Sep 22, 2013 at 02:29:15PM +0800, Jason Wang wrote:
>>>>> On 09/22/2013 12:05 AM, Wei Liu wrote:
>>>>>> Anirban was seeing netfront received MTU size packets, which downgraded
>>>>>> throughput. The following patch makes netfront use GRO API which
>>>>>> improves throughput for that case.
>>>>>> 
>>>>>> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
>>>>>> Signed-off-by: Anirban Chakraborty <abchak@juniper.net>
>>>>>> Cc: Ian Campbell <ian.campbell@citrix.com>
>>>>> Maybe a dumb question: doesn't Xen depends on the driver of host card to
>>>>> do GRO and pass it to netfront? What the case that netfront can receive
>>>> The would be the ideal situation. Netback pushes large packets to
>>>> netfront and netfront sees large packets.
>>>> 
>>>>> a MTU size packet, for a card that does not support GRO in host? Doing
>>>> However Anirban saw the case when backend interface receives large
>>>> packets but netfront sees MTU size packets, so my thought is there is
>>>> certain configuration that leads to this issue. As we cannot tell
>>>> users what to enable and what not to enable so I would like to solve
>>>> this within our driver.
>>>> 
>>>>> GRO twice may introduce extra overheads.
>>>>> 
>>>> AIUI if the packet that frontend sees is large already then the GRO path
>>>> is quite short which will not introduce heavy penalty, while on the
>>>> other hand if packet is segmented doing GRO improves throughput.
>>>> 
>>> Thanks Wei, for explaining and submitting the patch. I would like add following to what you have already mentioned.
>>> In my configuration, I was seeing netback was pushing large packets to the guest (Centos 6.4) but the netfront was receiving MTU sized packets. With this patch on, I do see large packets received on the guest interface. As a result there was substantial throughput improvement in the guest side (2.8 Gbps to 3.8 Gbps). Also, note that the host NIC driver was enabled for GRO already.
>>> 
>>> -Anirban
>> In this case, even if you still want to do GRO. It's better to find the
>> root cause of why the GSO packet were segmented
> 
> Totally agree, we need to find the cause why large packets is segmented only in different host case.

It appears (from looking at the netback code) that although GSO is turned on at netback, the guest receives large packets:
1. if it is a local packet (VM to VM on the same host), in which case netfront does LRO, or
2. by turning on GRO explicitly (with this patch).

-Anirban
Konrad Rzeszutek Wilk Sept. 24, 2013, 4:30 p.m. UTC | #11
On Sat, Sep 21, 2013 at 05:05:43PM +0100, Wei Liu wrote:
> Anirban was seeing netfront received MTU size packets, which downgraded
> throughput. The following patch makes netfront use GRO API which
> improves throughput for that case.
> 
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> Signed-off-by: Anirban Chakraborty <abchak@juniper.net>
> Cc: Ian Campbell <ian.campbell@citrix.com>

Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

> ---
>  drivers/net/xen-netfront.c |    7 +++++--
>  1 file changed, 5 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
> index 36808bf..5664165 100644
> --- a/drivers/net/xen-netfront.c
> +++ b/drivers/net/xen-netfront.c
> @@ -952,7 +952,7 @@ static int handle_incoming_queue(struct net_device *dev,
>  		u64_stats_update_end(&stats->syncp);
>  
>  		/* Pass it up. */
> -		netif_receive_skb(skb);
> +		napi_gro_receive(&np->napi, skb);
>  	}
>  
>  	return packets_dropped;
> @@ -1051,6 +1051,8 @@ err:
>  	if (work_done < budget) {
>  		int more_to_do = 0;
>  
> +		napi_gro_flush(napi, false);
> +
>  		local_irq_save(flags);
>  
>  		RING_FINAL_CHECK_FOR_RESPONSES(&np->rx, more_to_do);
> @@ -1371,7 +1373,8 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
>  	netif_napi_add(netdev, &np->napi, xennet_poll, 64);
>  	netdev->features        = NETIF_F_IP_CSUM | NETIF_F_RXCSUM |
>  				  NETIF_F_GSO_ROBUST;
> -	netdev->hw_features	= NETIF_F_IP_CSUM | NETIF_F_SG | NETIF_F_TSO;
> +	netdev->hw_features	= NETIF_F_IP_CSUM | NETIF_F_SG | NETIF_F_TSO |
> +				  NETIF_F_GRO;
>  
>  	/*
>           * Assume that all hw features are available for now. This set
> -- 
> 1.7.10.4
> 
> 
David Miller Sept. 28, 2013, 7:38 p.m. UTC | #12
From: Wei Liu <wei.liu2@citrix.com>
Date: Sat, 21 Sep 2013 17:05:43 +0100

> @@ -1371,7 +1373,8 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
>  	netif_napi_add(netdev, &np->napi, xennet_poll, 64);
>  	netdev->features        = NETIF_F_IP_CSUM | NETIF_F_RXCSUM |
>  				  NETIF_F_GSO_ROBUST;
> -	netdev->hw_features	= NETIF_F_IP_CSUM | NETIF_F_SG | NETIF_F_TSO;
> +	netdev->hw_features	= NETIF_F_IP_CSUM | NETIF_F_SG | NETIF_F_TSO |
> +				  NETIF_F_GRO;

Please post a new version of this patch with the feedback you've been
given integrated, in particular with this part removed because it is
not necessary.

Ian, please review the patch when Wei posts it.

Thanks.
Ian Campbell Sept. 30, 2013, 9:12 a.m. UTC | #13
On Sat, 2013-09-28 at 15:38 -0400, David Miller wrote:
> From: Wei Liu <wei.liu2@citrix.com>
> Date: Sat, 21 Sep 2013 17:05:43 +0100
> 
> > @@ -1371,7 +1373,8 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
> >  	netif_napi_add(netdev, &np->napi, xennet_poll, 64);
> >  	netdev->features        = NETIF_F_IP_CSUM | NETIF_F_RXCSUM |
> >  				  NETIF_F_GSO_ROBUST;
> > -	netdev->hw_features	= NETIF_F_IP_CSUM | NETIF_F_SG | NETIF_F_TSO;
> > +	netdev->hw_features	= NETIF_F_IP_CSUM | NETIF_F_SG | NETIF_F_TSO |
> > +				  NETIF_F_GRO;
> 
> Please post a new version of this patch with the feedback you've been
> given integrated, in particular with this part removed because it is
> not necessary.
> 
> Ian, please review the patch when Wei posts it.

I will, but note:
        $ ./scripts/get_maintainer.pl -f drivers/net/xen-netfront.c 
        Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> (supporter:XEN HYPERVISOR IN...)
        Jeremy Fitzhardinge <jeremy@goop.org> (supporter:XEN HYPERVISOR IN...)
        xen-devel@lists.xensource.com (moderated list:XEN HYPERVISOR IN...)
        virtualization@lists.linux-foundation.org (open list:XEN HYPERVISOR IN...)
        netdev@vger.kernel.org (open list:NETWORKING DRIVERS)
        linux-kernel@vger.kernel.org (open list)
        
Strictly speaking I maintain netback, not netfront, so Wei, please remember
to CC the right people (mainly Konrad) as well as me.

BTW, I think this separation is a good thing since it keeps changes to
the protocol "honest". It doesn't matter so much for this particular patch,
since I don't think it actually touches the protocol.

Ian.

Konrad Rzeszutek Wilk Sept. 30, 2013, 2:43 p.m. UTC | #14
On Mon, Sep 30, 2013 at 10:12:11AM +0100, Ian Campbell wrote:
> On Sat, 2013-09-28 at 15:38 -0400, David Miller wrote:
> > From: Wei Liu <wei.liu2@citrix.com>
> > Date: Sat, 21 Sep 2013 17:05:43 +0100
> > 
> > > @@ -1371,7 +1373,8 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
> > >  	netif_napi_add(netdev, &np->napi, xennet_poll, 64);
> > >  	netdev->features        = NETIF_F_IP_CSUM | NETIF_F_RXCSUM |
> > >  				  NETIF_F_GSO_ROBUST;
> > > -	netdev->hw_features	= NETIF_F_IP_CSUM | NETIF_F_SG | NETIF_F_TSO;
> > > +	netdev->hw_features	= NETIF_F_IP_CSUM | NETIF_F_SG | NETIF_F_TSO |
> > > +				  NETIF_F_GRO;
> > 
> > Please post a new version of this patch with the feedback you've been
> > given integrated, in particular with this part removed because it is
> > not necessary.
> > 
> > Ian, please review the patch when Wei posts it.
> 
> I will, but note:
>         $ ./scripts/get_maintainer.pl -f drivers/net/xen-netfront.c 
>         Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> (supporter:XEN HYPERVISOR IN...)
>         Jeremy Fitzhardinge <jeremy@goop.org> (supporter:XEN HYPERVISOR IN...)
>         xen-devel@lists.xensource.com (moderated list:XEN HYPERVISOR IN...)
>         virtualization@lists.linux-foundation.org (open list:XEN HYPERVISOR IN...)
>         netdev@vger.kernel.org (open list:NETWORKING DRIVERS)
>         linux-kernel@vger.kernel.org (open list)
>         
> Strictly speaking I maintain netback not front, so Wei please remember
> to CC the right people (mainly Konrad) as well as me.

Which was Acked:

http://mid.gmane.org/20130924163036.GB13979@phenom.dumpdata.com

> 
> BTW I think this separation is a good thing since it keeps changes to
> the protocol "honest". Doesn't matter so much for this particular patch
> since don't think it actually touches the protocol.
> 
> Ian.
> 
> 

Patch

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index 36808bf..5664165 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -952,7 +952,7 @@  static int handle_incoming_queue(struct net_device *dev,
 		u64_stats_update_end(&stats->syncp);
 
 		/* Pass it up. */
-		netif_receive_skb(skb);
+		napi_gro_receive(&np->napi, skb);
 	}
 
 	return packets_dropped;
@@ -1051,6 +1051,8 @@  err:
 	if (work_done < budget) {
 		int more_to_do = 0;
 
+		napi_gro_flush(napi, false);
+
 		local_irq_save(flags);
 
 		RING_FINAL_CHECK_FOR_RESPONSES(&np->rx, more_to_do);
@@ -1371,7 +1373,8 @@  static struct net_device *xennet_create_dev(struct xenbus_device *dev)
 	netif_napi_add(netdev, &np->napi, xennet_poll, 64);
 	netdev->features        = NETIF_F_IP_CSUM | NETIF_F_RXCSUM |
 				  NETIF_F_GSO_ROBUST;
-	netdev->hw_features	= NETIF_F_IP_CSUM | NETIF_F_SG | NETIF_F_TSO;
+	netdev->hw_features	= NETIF_F_IP_CSUM | NETIF_F_SG | NETIF_F_TSO |
+				  NETIF_F_GRO;
 
 	/*
          * Assume that all hw features are available for now. This set