
[net,v2,1/3] xen-netback: move NAPI add/remove calls

Message ID 1407515833-2550-2-git-send-email-wei.liu2@citrix.com
State Changes Requested, archived
Delegated to: David Miller

Commit Message

Wei Liu Aug. 8, 2014, 4:37 p.m. UTC
Originally napi_add was in init_queue and napi_del was in deinit_queue,
while kthreads were handled in _connect and _disconnect. Move napi_add
and napi_remove to _connect and _disconnect so that they reside togother
with kthread operations.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Cc: Ian Campbell <ian.campbell@citrix.com>
Cc: Zoltan Kiss <zoltan.kiss@citrix.com>
---
 drivers/net/xen-netback/interface.c |   12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

Comments

Sergei Shtylyov Aug. 8, 2014, 4:49 p.m. UTC | #1
Hello.

On 08/08/2014 08:37 PM, Wei Liu wrote:

> Originally napi_add was in init_queue and napi_del was in deinit_queue,
> while kthreads were handled in _connect and _disconnect. Move napi_add
> and napi_remove

    netif_napi_{add|del}()?

 > to _connect and _disconnect so that they reside togother

    Together.

> with kthread operations.

> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> Cc: Ian Campbell <ian.campbell@citrix.com>
> Cc: Zoltan Kiss <zoltan.kiss@citrix.com>

WBR, Sergei

--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Wei Liu Aug. 8, 2014, 4:52 p.m. UTC | #2
On Fri, Aug 08, 2014 at 08:49:10PM +0400, Sergei Shtylyov wrote:
> Hello.
> 
> On 08/08/2014 08:37 PM, Wei Liu wrote:
> 
> >Originally napi_add was in init_queue and napi_del was in deinit_queue,
> >while kthreads were handled in _connect and _disconnect. Move napi_add
> >and napi_remove
> 
>    netif_napi_{add|del}()?
> 
> > to _connect and _disconnect so that they reside togother
> 
>    Together.
> 

Thanks, I will fix these issues.

Wei.
David Vrabel Aug. 11, 2014, 12:35 p.m. UTC | #3
On 08/08/14 17:37, Wei Liu wrote:
> Originally napi_add was in init_queue and napi_del was in deinit_queue,
> while kthreads were handled in _connect and _disconnect. Move napi_add
> and napi_remove to _connect and _disconnect so that they reside togother
> with kthread operations.
> 
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> Cc: Ian Campbell <ian.campbell@citrix.com>
> Cc: Zoltan Kiss <zoltan.kiss@citrix.com>
> ---
>  drivers/net/xen-netback/interface.c |   12 ++++++++----
>  1 file changed, 8 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
> index 48a55cd..fdb4fca 100644
> --- a/drivers/net/xen-netback/interface.c
> +++ b/drivers/net/xen-netback/interface.c
> @@ -528,9 +528,6 @@ int xenvif_init_queue(struct xenvif_queue *queue)
>  
>  	init_timer(&queue->rx_stalled);
>  
> -	netif_napi_add(queue->vif->dev, &queue->napi, xenvif_poll,
> -			XENVIF_NAPI_WEIGHT);
> -
>  	return 0;
>  }
>  
> @@ -618,6 +615,9 @@ int xenvif_connect(struct xenvif_queue *queue, unsigned long tx_ring_ref,
>  	wake_up_process(queue->task);
>  	wake_up_process(queue->dealloc_task);
>  
> +	netif_napi_add(queue->vif->dev, &queue->napi, xenvif_poll,
> +			XENVIF_NAPI_WEIGHT);
> +
>  	return 0;
>  
>  err_rx_unbind:
> @@ -675,6 +675,11 @@ void xenvif_disconnect(struct xenvif *vif)
>  
>  	for (queue_index = 0; queue_index < num_queues; ++queue_index) {
>  		queue = &vif->queues[queue_index];
> +		netif_napi_del(&queue->napi);
> +	}

Why have you added an additional loop over all the queues?  The ordering
looks wrong as well.  I think you want

  1. unbind from irqhandler
  2. napi del
  3. stop task
  4. stop dealloc task
  5. unmap frontend rings.

David
Zoltan Kiss Aug. 11, 2014, 12:49 p.m. UTC | #4
On 11/08/14 13:35, David Vrabel wrote:
> On 08/08/14 17:37, Wei Liu wrote:
>> Originally napi_add was in init_queue and napi_del was in deinit_queue,
>> while kthreads were handled in _connect and _disconnect. Move napi_add
>> and napi_remove to _connect and _disconnect so that they reside togother
>> with kthread operations.
>>
>> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
>> Cc: Ian Campbell <ian.campbell@citrix.com>
>> Cc: Zoltan Kiss <zoltan.kiss@citrix.com>
>> ---
>>   drivers/net/xen-netback/interface.c |   12 ++++++++----
>>   1 file changed, 8 insertions(+), 4 deletions(-)
>>
>> diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
>> index 48a55cd..fdb4fca 100644
>> --- a/drivers/net/xen-netback/interface.c
>> +++ b/drivers/net/xen-netback/interface.c
>> @@ -528,9 +528,6 @@ int xenvif_init_queue(struct xenvif_queue *queue)
>>
>>   	init_timer(&queue->rx_stalled);
>>
>> -	netif_napi_add(queue->vif->dev, &queue->napi, xenvif_poll,
>> -			XENVIF_NAPI_WEIGHT);
>> -
>>   	return 0;
>>   }
>>
>> @@ -618,6 +615,9 @@ int xenvif_connect(struct xenvif_queue *queue, unsigned long tx_ring_ref,
>>   	wake_up_process(queue->task);
>>   	wake_up_process(queue->dealloc_task);
>>
>> +	netif_napi_add(queue->vif->dev, &queue->napi, xenvif_poll,
>> +			XENVIF_NAPI_WEIGHT);
>> +
>>   	return 0;
>>
>>   err_rx_unbind:
>> @@ -675,6 +675,11 @@ void xenvif_disconnect(struct xenvif *vif)
>>
>>   	for (queue_index = 0; queue_index < num_queues; ++queue_index) {
>>   		queue = &vif->queues[queue_index];
>> +		netif_napi_del(&queue->napi);
>> +	}
>
> Why have you added an additional loop over all the queues?  The ordering
> looks wrong as well.  I think you want
>
>    1. unbind from irqhandler
>    2. napi del
>    3. stop task
>    4. stop dealloc task
>    5. unmap frontend rings.
And that's how they are ordered. The idea of having the netif_napi_del 
in a separate loop came from me: it could be more efficient to start 
tearing down all the NAPI instances first, so that by the time we stop 
the dealloc thread, it has likely already done most of the work.
But now I realize that netif_napi_del just deletes the instance from a 
list; the real work happens in xenvif_carrier_off: xenvif_down calls 
napi_disable on all queues and waits until all the work is done. So it 
doesn't make sense to have the netif_napi_del in a separate loop any 
more.
>
> David
>

David Vrabel Aug. 11, 2014, 1:01 p.m. UTC | #5
On 11/08/14 13:49, Zoltan Kiss wrote:
> On 11/08/14 13:35, David Vrabel wrote:
>> On 08/08/14 17:37, Wei Liu wrote:
>>> Originally napi_add was in init_queue and napi_del was in deinit_queue,
>>> while kthreads were handled in _connect and _disconnect. Move napi_add
>>> and napi_remove to _connect and _disconnect so that they reside togother
>>> with kthread operations.
>>>
>>> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
>>> Cc: Ian Campbell <ian.campbell@citrix.com>
>>> Cc: Zoltan Kiss <zoltan.kiss@citrix.com>
>>> ---
>>>   drivers/net/xen-netback/interface.c |   12 ++++++++----
>>>   1 file changed, 8 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/drivers/net/xen-netback/interface.c
>>> b/drivers/net/xen-netback/interface.c
>>> index 48a55cd..fdb4fca 100644
>>> --- a/drivers/net/xen-netback/interface.c
>>> +++ b/drivers/net/xen-netback/interface.c
>>> @@ -528,9 +528,6 @@ int xenvif_init_queue(struct xenvif_queue *queue)
>>>
>>>       init_timer(&queue->rx_stalled);
>>>
>>> -    netif_napi_add(queue->vif->dev, &queue->napi, xenvif_poll,
>>> -            XENVIF_NAPI_WEIGHT);
>>> -
>>>       return 0;
>>>   }
>>>
>>> @@ -618,6 +615,9 @@ int xenvif_connect(struct xenvif_queue *queue,
>>> unsigned long tx_ring_ref,
>>>       wake_up_process(queue->task);
>>>       wake_up_process(queue->dealloc_task);
>>>
>>> +    netif_napi_add(queue->vif->dev, &queue->napi, xenvif_poll,
>>> +            XENVIF_NAPI_WEIGHT);
>>> +
>>>       return 0;
>>>
>>>   err_rx_unbind:
>>> @@ -675,6 +675,11 @@ void xenvif_disconnect(struct xenvif *vif)
>>>
>>>       for (queue_index = 0; queue_index < num_queues; ++queue_index) {
>>>           queue = &vif->queues[queue_index];
>>> +        netif_napi_del(&queue->napi);
>>> +    }
>>
>> Why have you added an additional loop over all the queues?  The ordering
>> looks wrong as well.  I think you want
>>
>>    1. unbind from irqhandler
>>    2. napi del
>>    3. stop task
>>    4. stop dealloc task
>>    5. unmap frontend rings.
> And that's how they are ordered.

No, it isn't.  Did you mistakenly look at netfront, which is correctly
ordered already?

You must unbind the irq handler before calling netif_napi_del() or an
interrupt may occur and the handler may call napi_schedule() with a
deleted instance.

David
Zoltan Kiss Aug. 11, 2014, 1:14 p.m. UTC | #6
On 11/08/14 14:01, David Vrabel wrote:
> On 11/08/14 13:49, Zoltan Kiss wrote:
>> On 11/08/14 13:35, David Vrabel wrote:
>>> On 08/08/14 17:37, Wei Liu wrote:
>>>> Originally napi_add was in init_queue and napi_del was in deinit_queue,
>>>> while kthreads were handled in _connect and _disconnect. Move napi_add
>>>> and napi_remove to _connect and _disconnect so that they reside togother
>>>> with kthread operations.
>>>>
>>>> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
>>>> Cc: Ian Campbell <ian.campbell@citrix.com>
>>>> Cc: Zoltan Kiss <zoltan.kiss@citrix.com>
>>>> ---
>>>>    drivers/net/xen-netback/interface.c |   12 ++++++++----
>>>>    1 file changed, 8 insertions(+), 4 deletions(-)
>>>>
>>>> diff --git a/drivers/net/xen-netback/interface.c
>>>> b/drivers/net/xen-netback/interface.c
>>>> index 48a55cd..fdb4fca 100644
>>>> --- a/drivers/net/xen-netback/interface.c
>>>> +++ b/drivers/net/xen-netback/interface.c
>>>> @@ -528,9 +528,6 @@ int xenvif_init_queue(struct xenvif_queue *queue)
>>>>
>>>>        init_timer(&queue->rx_stalled);
>>>>
>>>> -    netif_napi_add(queue->vif->dev, &queue->napi, xenvif_poll,
>>>> -            XENVIF_NAPI_WEIGHT);
>>>> -
>>>>        return 0;
>>>>    }
>>>>
>>>> @@ -618,6 +615,9 @@ int xenvif_connect(struct xenvif_queue *queue,
>>>> unsigned long tx_ring_ref,
>>>>        wake_up_process(queue->task);
>>>>        wake_up_process(queue->dealloc_task);
>>>>
>>>> +    netif_napi_add(queue->vif->dev, &queue->napi, xenvif_poll,
>>>> +            XENVIF_NAPI_WEIGHT);
>>>> +
>>>>        return 0;
>>>>
>>>>    err_rx_unbind:
>>>> @@ -675,6 +675,11 @@ void xenvif_disconnect(struct xenvif *vif)
>>>>
>>>>        for (queue_index = 0; queue_index < num_queues; ++queue_index) {
>>>>            queue = &vif->queues[queue_index];
>>>> +        netif_napi_del(&queue->napi);
>>>> +    }
>>>
>>> Why have you added an additional loop over all the queues?  The ordering
>>> looks wrong as well.  I think you want
>>>
>>>     1. unbind from irqhandler
>>>     2. napi del
>>>     3. stop task
>>>     4. stop dealloc task
>>>     5. unmap frontend rings.
>> And that's how they are ordered.
>
> No, it isn't.  Did you mistakenly look at netfront which is correctly
> ordered already?
>
> You must unbind the irq handler before calling netif_napi_del() or an
> interrupt may occur and the handler may call napi_schedule() with a
> deleted instance.
I think xenvif_carrier_off (which calls xenvif_down) does that. It is 
right before this new napi_del in xenvif_disconnect.

Zoli
Wei Liu Aug. 11, 2014, 1:43 p.m. UTC | #7
In short, there's no need to reorder the disconnect logic and no need to
have a dedicated loop for netif_napi_del.

Wei.
David Vrabel Aug. 11, 2014, 1:59 p.m. UTC | #8
On 11/08/14 14:43, Wei Liu wrote:
> In short, there's no need to reorder the disconnect logic and no need to
> have a dedicated loop for netif_napi_del.

Not for now.  But I would prefer it if it was re-ordered.  And similarly
in xenvif_down(), irq_disable() should be before napi_disable().

David
Wei Liu Aug. 11, 2014, 2:08 p.m. UTC | #9
On Mon, Aug 11, 2014 at 02:59:52PM +0100, David Vrabel wrote:
> On 11/08/14 14:43, Wei Liu wrote:
> > In short, there's no need to reorder disconnect logic and no need have a
> > dedicated loop for netif_napi_del.
> 
> Not for now.  But I would prefer it if it was re-ordered.  And similarly
> in xenvif_down(), irq_disable() should be before napi_disable().
> 

Something for another day then. In any case, even if napi_disable goes
before irq_disable, it's still safe if a tx interrupt takes place in
between. napi_schedule has no effect on a disabled instance.

Wei.

> David
Patch

diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
index 48a55cd..fdb4fca 100644
--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@ -528,9 +528,6 @@  int xenvif_init_queue(struct xenvif_queue *queue)
 
 	init_timer(&queue->rx_stalled);
 
-	netif_napi_add(queue->vif->dev, &queue->napi, xenvif_poll,
-			XENVIF_NAPI_WEIGHT);
-
 	return 0;
 }
 
@@ -618,6 +615,9 @@  int xenvif_connect(struct xenvif_queue *queue, unsigned long tx_ring_ref,
 	wake_up_process(queue->task);
 	wake_up_process(queue->dealloc_task);
 
+	netif_napi_add(queue->vif->dev, &queue->napi, xenvif_poll,
+			XENVIF_NAPI_WEIGHT);
+
 	return 0;
 
 err_rx_unbind:
@@ -675,6 +675,11 @@  void xenvif_disconnect(struct xenvif *vif)
 
 	for (queue_index = 0; queue_index < num_queues; ++queue_index) {
 		queue = &vif->queues[queue_index];
+		netif_napi_del(&queue->napi);
+	}
+
+	for (queue_index = 0; queue_index < num_queues; ++queue_index) {
+		queue = &vif->queues[queue_index];
 
 		if (queue->task) {
 			del_timer_sync(&queue->rx_stalled);
@@ -708,7 +713,6 @@  void xenvif_disconnect(struct xenvif *vif)
 void xenvif_deinit_queue(struct xenvif_queue *queue)
 {
 	free_xenballooned_pages(MAX_PENDING_REQS, queue->mmap_pages);
-	netif_napi_del(&queue->napi);
 }
 
 void xenvif_free(struct xenvif *vif)