
[net,-v2,BUGFIX] bonding: use flush_delayed_work_sync in bond_close

Message ID 20111019081757.12455.24788.stgit@ltc219.sdl.hitachi.co.jp
State Rejected, archived
Delegated to: David Miller

Commit Message

Mitsuo Hayasaka Oct. 19, 2011, 8:17 a.m. UTC
The bond_close() calls cancel_delayed_work() to cancel delayed works.
However, it cannot cancel works that were already queued in the workqueue.
The bond_open() reinitializes work->data, and process_one_work() dereferences
get_work_cwq(work)->wq->flags. The get_work_cwq() returns NULL once
work->data has been reinitialized. Thus, a panic occurs.

This patch uses flush_delayed_work_sync() instead of cancel_delayed_work()
in bond_close(). It cancels the delayed timer and waits for the work to
finish executing. Thus, it avoids the NULL pointer dereference caused by
process_one_work() running in parallel with the reinitialization done in
bond_open().

Signed-off-by: Mitsuo Hayasaka <mitsuo.hayasaka.hu@hitachi.com>
Reviewed-by: WANG Cong <xiyou.wangcong@gmail.com>
Cc: Jay Vosburgh <fubar@us.ibm.com>
Cc: Andy Gospodarek <andy@greyhouse.net>
Cc: WANG Cong <xiyou.wangcong@gmail.com>
---

 drivers/net/bonding/bond_main.c |   10 +++++-----
 1 files changed, 5 insertions(+), 5 deletions(-)
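
[Editor's note: to make the cancel-vs-flush distinction concrete, here is a
minimal sketch of the teardown pattern at issue, assuming a generic driver
with one delayed work item. struct my_dev and my_dev_close() are illustrative,
not bonding code; the workqueue calls are the real kernel API of that era.]

#include <linux/workqueue.h>

struct my_dev {
	struct delayed_work mon_work;
};

static void my_dev_close(struct my_dev *dev)
{
	/*
	 * cancel_delayed_work() only dequeues the item if its timer has
	 * not fired yet; an item already transferred onto the workqueue
	 * keeps running.  flush_delayed_work_sync() additionally waits
	 * for a queued or running item to complete, so a subsequent
	 * open() cannot reinitialize work->data while process_one_work()
	 * is still dereferencing it.
	 */
	flush_delayed_work_sync(&dev->mon_work);
}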



Comments

Jay Vosburgh Oct. 19, 2011, 6:01 p.m. UTC | #1
Mitsuo Hayasaka <mitsuo.hayasaka.hu@hitachi.com> wrote:

>The bond_close() calls cancel_delayed_work() to cancel delayed works.
>However, it cannot cancel works that were already queued in the workqueue.
>The bond_open() reinitializes work->data, and process_one_work() dereferences
>get_work_cwq(work)->wq->flags. The get_work_cwq() returns NULL once
>work->data has been reinitialized. Thus, a panic occurs.
>
>This patch uses flush_delayed_work_sync() instead of cancel_delayed_work()
>in bond_close(). It cancels the delayed timer and waits for the work to
>finish executing. Thus, it avoids the NULL pointer dereference caused by
>process_one_work() running in parallel with the reinitialization done in
>bond_open().

	I'm setting up to test this.  I have a dim recollection that we
tried this some years ago, and there was a different deadlock that
manifested through the flush path.  Perhaps changes since then have
removed that problem.

	-J

>Signed-off-by: Mitsuo Hayasaka <mitsuo.hayasaka.hu@hitachi.com>
>Reviewed-by: WANG Cong <xiyou.wangcong@gmail.com>
>Cc: Jay Vosburgh <fubar@us.ibm.com>
>Cc: Andy Gospodarek <andy@greyhouse.net>
>Cc: WANG Cong <xiyou.wangcong@gmail.com>
>---
>
> drivers/net/bonding/bond_main.c |   10 +++++-----
> 1 files changed, 5 insertions(+), 5 deletions(-)
>
>diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
>index de3d351..a4353f9 100644
>--- a/drivers/net/bonding/bond_main.c
>+++ b/drivers/net/bonding/bond_main.c
>@@ -3504,27 +3504,27 @@ static int bond_close(struct net_device *bond_dev)
> 	write_unlock_bh(&bond->lock);
>
> 	if (bond->params.miimon) {  /* link check interval, in milliseconds. */
>-		cancel_delayed_work(&bond->mii_work);
>+		flush_delayed_work_sync(&bond->mii_work);
> 	}
>
> 	if (bond->params.arp_interval) {  /* arp interval, in milliseconds. */
>-		cancel_delayed_work(&bond->arp_work);
>+		flush_delayed_work_sync(&bond->arp_work);
> 	}
>
> 	switch (bond->params.mode) {
> 	case BOND_MODE_8023AD:
>-		cancel_delayed_work(&bond->ad_work);
>+		flush_delayed_work_sync(&bond->ad_work);
> 		break;
> 	case BOND_MODE_TLB:
> 	case BOND_MODE_ALB:
>-		cancel_delayed_work(&bond->alb_work);
>+		flush_delayed_work_sync(&bond->alb_work);
> 		break;
> 	default:
> 		break;
> 	}
>
> 	if (delayed_work_pending(&bond->mcast_work))
>-		cancel_delayed_work(&bond->mcast_work);
>+		flush_delayed_work_sync(&bond->mcast_work);
>
> 	if (bond_is_lb(bond)) {
> 		/* Must be called only after all
>

---
	-Jay Vosburgh, IBM Linux Technology Center, fubar@us.ibm.com

stephen hemminger Oct. 19, 2011, 6:41 p.m. UTC | #2
On Wed, 19 Oct 2011 11:01:02 -0700
Jay Vosburgh <fubar@us.ibm.com> wrote:

> Mitsuo Hayasaka <mitsuo.hayasaka.hu@hitachi.com> wrote:
> 
> >The bond_close() calls cancel_delayed_work() to cancel delayed works.
> >However, it cannot cancel works that were already queued in the workqueue.
> >The bond_open() reinitializes work->data, and process_one_work() dereferences
> >get_work_cwq(work)->wq->flags. The get_work_cwq() returns NULL once
> >work->data has been reinitialized. Thus, a panic occurs.
> >
> >This patch uses flush_delayed_work_sync() instead of cancel_delayed_work()
> >in bond_close(). It cancels the delayed timer and waits for the work to
> >finish executing. Thus, it avoids the NULL pointer dereference caused by
> >process_one_work() running in parallel with the reinitialization done in
> >bond_open().
> 
> 	I'm setting up to test this.  I have a dim recollection that we
> tried this some years ago, and there was a different deadlock that
> manifested through the flush path.  Perhaps changes since then have
> removed that problem.
> 
> 	-J

Won't this deadlock on RTNL.  The problem is that:

   CPU0                            CPU1
  rtnl_lock
      bond_close
                                 delayed_work
                                   mii_work
                                     read_lock(bond->lock);
                                     read_unlock(bond->lock);
                                     rtnl_lock... waiting for CPU0
      flush_delayed_work_sync
          waiting for delayed_work to finish...
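
[Editor's note: in code form, the two paths above look roughly like this.
This is a heavily condensed sketch, not the actual driver code; the real
bond_mii_monitor() and bond_close() do much more, and the struct bonding
fields (lock, mii_work) are taken from drivers/net/bonding/bonding.h.]

/* CPU1: the monitor work function takes RTNL for failover actions. */
static void bond_mii_monitor(struct work_struct *work)
{
	struct bonding *bond = container_of(work, struct bonding,
					    mii_work.work);

	read_lock(&bond->lock);
	/* ... inspect slave link state ... */
	read_unlock(&bond->lock);

	rtnl_lock();		/* blocks: CPU0 already holds RTNL */
	/* ... perform failover ... */
	rtnl_unlock();
}

/* CPU0: bond_close() runs in the dev_close() path, under RTNL. */
static int bond_close(struct net_device *bond_dev)
{
	struct bonding *bond = netdev_priv(bond_dev);

	/* Waits for the worker, which waits for our RTNL: deadlock. */
	flush_delayed_work_sync(&bond->mii_work);
	return 0;
}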
              
                                    
Jay Vosburgh Oct. 19, 2011, 7:09 p.m. UTC | #3
Stephen Hemminger <shemminger@vyatta.com> wrote:

>On Wed, 19 Oct 2011 11:01:02 -0700
>Jay Vosburgh <fubar@us.ibm.com> wrote:
>
>> Mitsuo Hayasaka <mitsuo.hayasaka.hu@hitachi.com> wrote:
>> 
>> >The bond_close() calls cancel_delayed_work() to cancel delayed works.
>> >However, it cannot cancel works that were already queued in the workqueue.
>> >The bond_open() reinitializes work->data, and process_one_work() dereferences
>> >get_work_cwq(work)->wq->flags. The get_work_cwq() returns NULL once
>> >work->data has been reinitialized. Thus, a panic occurs.
>> >
>> >This patch uses flush_delayed_work_sync() instead of cancel_delayed_work()
>> >in bond_close(). It cancels the delayed timer and waits for the work to
>> >finish executing. Thus, it avoids the NULL pointer dereference caused by
>> >process_one_work() running in parallel with the reinitialization done in
>> >bond_open().
>> 
>> 	I'm setting up to test this.  I have a dim recollection that we
>> tried this some years ago, and there was a different deadlock that
>> manifested through the flush path.  Perhaps changes since then have
>> removed that problem.
>> 
>> 	-J
>
>Won't this deadlock on RTNL.  The problem is that:
>
>   CPU0                            CPU1
>  rtnl_lock
>      bond_close
>                                 delayed_work
>                                   mii_work
>                                     read_lock(bond->lock);
>                                     read_unlock(bond->lock);
>                                     rtnl_lock... waiting for CPU0
>      flush_delayed_work_sync
>          waiting for delayed_work to finish...

	Yah, that was it.  We discussed this a couple of years ago in
regards to a similar patch:

http://lists.openwall.net/netdev/2009/12/17/3

	The short version is that we could rework the rtnl_lock inside
the monitors to be conditional and retry on failure (where "retry" means
"reschedule the work and try again later," not "spin retrying on rtnl").
That should permit the use of flush or cancel to terminate the work
items.
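
[Editor's note: a sketch of that conditional shape, assuming the monitor can
postpone its RTNL-protected work to a rescheduled run. have_pending_failover
is a hypothetical stand-in for the driver's real checks; bond->wq and
bond->params.miimon are real struct bonding members.]

static void bond_mii_monitor(struct work_struct *work)
{
	struct bonding *bond = container_of(work, struct bonding,
					    mii_work.work);
	bool have_pending_failover = false;

	read_lock(&bond->lock);
	/* ... inspect link state; set have_pending_failover ... */
	read_unlock(&bond->lock);

	if (have_pending_failover) {
		if (!rtnl_trylock()) {
			/*
			 * RTNL is busy, possibly held by the bond_close()
			 * path that is flushing us: reschedule and retry
			 * shortly rather than blocking on rtnl_lock().
			 */
			queue_delayed_work(bond->wq, &bond->mii_work, 1);
			return;
		}
		/* ... perform failover under RTNL ... */
		rtnl_unlock();
	}

	queue_delayed_work(bond->wq, &bond->mii_work,
			   msecs_to_jiffies(bond->params.miimon));
}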

	I'll fiddle with it some later today and see if that seems
viable.

	-J

---
	-Jay Vosburgh, IBM Linux Technology Center, fubar@us.ibm.com

Cong Wang Oct. 21, 2011, 5:45 a.m. UTC | #4
On Thu, Oct 20, 2011 at 3:09 AM, Jay Vosburgh <fubar@us.ibm.com> wrote:
> Stephen Hemminger <shemminger@vyatta.com> wrote:
>
>>On Wed, 19 Oct 2011 11:01:02 -0700
>>Jay Vosburgh <fubar@us.ibm.com> wrote:
>>
>>> Mitsuo Hayasaka <mitsuo.hayasaka.hu@hitachi.com> wrote:
>>>
>>> >The bond_close() calls cancel_delayed_work() to cancel delayed works.
>>> >However, it cannot cancel works that were already queued in the workqueue.
>>> >The bond_open() reinitializes work->data, and process_one_work() dereferences
>>> >get_work_cwq(work)->wq->flags. The get_work_cwq() returns NULL once
>>> >work->data has been reinitialized. Thus, a panic occurs.
>>> >
>>> >This patch uses flush_delayed_work_sync() instead of cancel_delayed_work()
>>> >in bond_close(). It cancels the delayed timer and waits for the work to
>>> >finish executing. Thus, it avoids the NULL pointer dereference caused by
>>> >process_one_work() running in parallel with the reinitialization done in
>>> >bond_open().
>>>
>>>      I'm setting up to test this.  I have a dim recollection that we
>>> tried this some years ago, and there was a different deadlock that
>>> manifested through the flush path.  Perhaps changes since then have
>>> removed that problem.
>>>
>>>      -J
>>
>>Won't this deadlock on RTNL.  The problem is that:
>>
>>   CPU0                            CPU1
>>  rtnl_lock
>>      bond_close
>>                                 delayed_work
>>                                   mii_work
>>                                     read_lock(bond->lock);
>>                                     read_unlock(bond->lock);
>>                                     rtnl_lock... waiting for CPU0
>>      flush_delayed_work_sync
>>          waiting for delayed_work to finish...
>
>        Yah, that was it.  We discussed this a couple of years ago in
> regards to a similar patch:
>
> http://lists.openwall.net/netdev/2009/12/17/3
>
>        The short version is that we could rework the rtnl_lock inside
> the monitors to be conditional and retry on failure (where "retry" means
> "reschedule the work and try again later," not "spin retrying on rtnl").
> That should permit the use of flush or cancel to terminate the work
> items.

Yes? Even if we use rtnl_trylock(), doesn't flush_delayed_work_sync()
still queue the pending delayed work and wait for it to be finished?

Maybe I am too blind, why do we need rtnl_lock for cancel_delayed_work()
inside bond_close()?

Thanks.
Jay Vosburgh Oct. 21, 2011, 6:26 a.m. UTC | #5
Américo Wang <xiyou.wangcong@gmail.com> wrote:

>On Thu, Oct 20, 2011 at 3:09 AM, Jay Vosburgh <fubar@us.ibm.com> wrote:
>> Stephen Hemminger <shemminger@vyatta.com> wrote:
>>
>>>On Wed, 19 Oct 2011 11:01:02 -0700
>>>Jay Vosburgh <fubar@us.ibm.com> wrote:
>>>
>>>> Mitsuo Hayasaka <mitsuo.hayasaka.hu@hitachi.com> wrote:
>>>>
>>>> >The bond_close() calls cancel_delayed_work() to cancel delayed works.
>>>> >However, it cannot cancel works that were already queued in the workqueue.
>>>> >The bond_open() reinitializes work->data, and process_one_work() dereferences
>>>> >get_work_cwq(work)->wq->flags. The get_work_cwq() returns NULL once
>>>> >work->data has been reinitialized. Thus, a panic occurs.
>>>> >
>>>> >This patch uses flush_delayed_work_sync() instead of cancel_delayed_work()
>>>> >in bond_close(). It cancels the delayed timer and waits for the work to
>>>> >finish executing. Thus, it avoids the NULL pointer dereference caused by
>>>> >process_one_work() running in parallel with the reinitialization done in
>>>> >bond_open().
>>>>
>>>>      I'm setting up to test this.  I have a dim recollection that we
>>>> tried this some years ago, and there was a different deadlock that
>>>> manifested through the flush path.  Perhaps changes since then have
>>>> removed that problem.
>>>>
>>>>      -J
>>>
>>>Won't this deadlock on RTNL.  The problem is that:
>>>
>>>   CPU0                            CPU1
>>>  rtnl_lock
>>>      bond_close
>>>                                 delayed_work
>>>                                   mii_work
>>>                                     read_lock(bond->lock);
>>>                                     read_unlock(bond->lock);
>>>                                     rtnl_lock... waiting for CPU0
>>>      flush_delayed_work_sync
>>>          waiting for delayed_work to finish...
>>
>>        Yah, that was it.  We discussed this a couple of years ago in
>> regards to a similar patch:
>>
>> http://lists.openwall.net/netdev/2009/12/17/3
>>
>>        The short version is that we could rework the rtnl_lock inside
>> the monitors to be conditional and retry on failure (where "retry" means
>> "reschedule the work and try again later," not "spin retrying on rtnl").
>> That should permit the use of flush or cancel to terminate the work
>> items.
>
>Yes? Even if we use rtnl_trylock(), doesn't flush_delayed_work_sync()
>still queue the pending delayed work and wait for it to be finished?

	Yes, it does.  The original patch wants to use flush instead of
cancel to wait for the work to finish, because there's evidently a
possibility of getting back into bond_open before the work item
executes, and bond_open would reinitialize the work queue and corrupt
the queued work item.

	The original patch series, and recipe for destruction, is here:

	http://www.spinics.net/lists/netdev/msg176382.html

	I've been unable to reproduce the work queue panic locally,
although it sounds plausible.

	Mitsuo: can you provide the precise bonding configuration you're
using to induce the problem?  Driver options, number and type of slaves,
etc.

>Maybe I am too blind, why do we need rtnl_lock for cancel_delayed_work()
>inside bond_close()?

	We don't need RTNL for cancel/flush.  However, bond_close is an
ndo_stop operation, and is called in the dev_close path, which always
occurs under RTNL.  The mii / arp monitor work functions separately
acquire RTNL if they need to perform various failover related
operations.
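
[Editor's note: roughly, the locking relationship is as below.
bring_bond_down() is an illustrative wrapper, not a kernel function;
dev_close() and rtnl_lock()/rtnl_unlock() are the real APIs.]

/* Every path into bond_close() already holds RTNL, e.g.: */
static void bring_bond_down(struct net_device *bond_dev)
{
	rtnl_lock();
	dev_close(bond_dev);	/* invokes ops->ndo_stop == bond_close() */
	rtnl_unlock();
}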

	I'm working on a patch that should resolve the mii / arp monitor
RTNL problem as I described above (if rtnl_trylock fails, punt and
reschedule the work).  I need to rearrange the netdev_bonding_change
stuff a bit as well, since it acquires RTNL separately.

	Once these changes are made to mii / arp monitor, then
bond_close can call flush instead of cancel, which should eliminate the
original problem described at the top.

	-J

---
	-Jay Vosburgh, IBM Linux Technology Center, fubar@us.ibm.com

Jay Vosburgh Oct. 22, 2011, 12:59 a.m. UTC | #6
Jay Vosburgh <fubar@us.ibm.com> wrote:

>Américo Wang <xiyou.wangcong@gmail.com> wrote:
>
>>On Thu, Oct 20, 2011 at 3:09 AM, Jay Vosburgh <fubar@us.ibm.com> wrote:
>>> Stephen Hemminger <shemminger@vyatta.com> wrote:
>>>
>>>>On Wed, 19 Oct 2011 11:01:02 -0700
>>>>Jay Vosburgh <fubar@us.ibm.com> wrote:
>>>>
>>>>> Mitsuo Hayasaka <mitsuo.hayasaka.hu@hitachi.com> wrote:
>>>>>
>>>>> >The bond_close() calls cancel_delayed_work() to cancel delayed works.
>>>>> >However, it cannot cancel works that were already queued in the workqueue.
>>>>> >The bond_open() reinitializes work->data, and process_one_work() dereferences
>>>>> >get_work_cwq(work)->wq->flags. The get_work_cwq() returns NULL once
>>>>> >work->data has been reinitialized. Thus, a panic occurs.
>>>>> >
>>>>> >This patch uses flush_delayed_work_sync() instead of cancel_delayed_work()
>>>>> >in bond_close(). It cancels the delayed timer and waits for the work to
>>>>> >finish executing. Thus, it avoids the NULL pointer dereference caused by
>>>>> >process_one_work() running in parallel with the reinitialization done in
>>>>> >bond_open().
>>>>>
>>>>>      I'm setting up to test this.  I have a dim recollection that we
>>>>> tried this some years ago, and there was a different deadlock that
>>>>> manifested through the flush path.  Perhaps changes since then have
>>>>> removed that problem.
>>>>>
>>>>>      -J
>>>>
>>>>Won't this deadlock on RTNL.  The problem is that:
>>>>
>>>>   CPU0                            CPU1
>>>>  rtnl_lock
>>>>      bond_close
>>>>                                 delayed_work
>>>>                                   mii_work
>>>>                                     read_lock(bond->lock);
>>>>                                     read_unlock(bond->lock);
>>>>                                     rtnl_lock... waiting for CPU0
>>>>      flush_delayed_work_sync
>>>>          waiting for delayed_work to finish...
>>>
>>>        Yah, that was it.  We discussed this a couple of years ago in
>>> regards to a similar patch:
>>>
>>> http://lists.openwall.net/netdev/2009/12/17/3
>>>
>>>        The short version is that we could rework the rtnl_lock inside
>>> the monitors to be conditional and retry on failure (where "retry" means
>>> "reschedule the work and try again later," not "spin retrying on rtnl").
>>> That should permit the use of flush or cancel to terminate the work
>>> items.
>>
>>Yes? Even if we use rtnl_trylock(), doesn't flush_delayed_work_sync()
>>still queue the pending delayed work and wait for it to be finished?
>
>	Yes, it does.  The original patch wants to use flush instead of
>cancel to wait for the work to finish, because there's evidently a
>possibility of getting back into bond_open before the work item
>executes, and bond_open would reinitialize the work queue and corrupt
>the queued work item.
>
>	The original patch series, and recipe for destruction, is here:
>
>	http://www.spinics.net/lists/netdev/msg176382.html
>
>	I've been unable to reproduce the work queue panic locally,
>although it sounds plausible.
>
>	Mitsuo: can you provide the precise bonding configuration you're
>using to induce the problem?  Driver options, number and type of slaves,
>etc.
>
>>Maybe I am too blind, why do we need rtnl_lock for cancel_delayed_work()
>>inside bond_close()?
>
>	We don't need RTNL for cancel/flush.  However, bond_close is an
>ndo_stop operation, and is called in the dev_close path, which always
>occurs under RTNL.  The mii / arp monitor work functions separately
>acquire RTNL if they need to perform various failover related
>operations.
>
>	I'm working on a patch that should resolve the mii / arp monitor
>RTNL problem as I described above (if rtnl_trylock fails, punt and
>reschedule the work).  I need to rearrange the netdev_bonding_change
>stuff a bit as well, since it acquires RTNL separately.
>
>	Once these changes are made to mii / arp monitor, then
>bond_close can call flush instead of cancel, which should eliminate the
>original problem described at the top.

	Just an update: there are three functions that may deadlock if
the cancel work calls are changed to flush_sync.  There are two
rtnl_lock calls in each of the bond_mii_monitor and
bond_activebackup_arp_mon functions, and one more in the
bond_alb_monitor.

	Still testing to make sure I haven't missed anything, and I
still haven't been able to reproduce Mitsuo's original failure.

	-J

---
	-Jay Vosburgh, IBM Linux Technology Center, fubar@us.ibm.com

Mitsuo Hayasaka Oct. 24, 2011, 4 a.m. UTC | #7
(2011/10/22 9:59), Jay Vosburgh wrote:
> Jay Vosburgh <fubar@us.ibm.com> wrote:
> 
>> Américo Wang <xiyou.wangcong@gmail.com> wrote:
>>
>>> On Thu, Oct 20, 2011 at 3:09 AM, Jay Vosburgh <fubar@us.ibm.com> wrote:
>>>> Stephen Hemminger <shemminger@vyatta.com> wrote:
>>>>
>>>>> On Wed, 19 Oct 2011 11:01:02 -0700
>>>>> Jay Vosburgh <fubar@us.ibm.com> wrote:
>>>>>
>>>>>> Mitsuo Hayasaka <mitsuo.hayasaka.hu@hitachi.com> wrote:
>>>>>>
>>>>>>> The bond_close() calls cancel_delayed_work() to cancel delayed works.
>>>>>>> However, it cannot cancel works that were already queued in the workqueue.
>>>>>>> The bond_open() reinitializes work->data, and process_one_work() dereferences
>>>>>>> get_work_cwq(work)->wq->flags. The get_work_cwq() returns NULL once
>>>>>>> work->data has been reinitialized. Thus, a panic occurs.
>>>>>>>
>>>>>>> This patch uses flush_delayed_work_sync() instead of cancel_delayed_work()
>>>>>>> in bond_close(). It cancels the delayed timer and waits for the work to
>>>>>>> finish executing. Thus, it avoids the NULL pointer dereference caused by
>>>>>>> process_one_work() running in parallel with the reinitialization done in
>>>>>>> bond_open().
>>>>>>
>>>>>>      I'm setting up to test this.  I have a dim recollection that we
>>>>>> tried this some years ago, and there was a different deadlock that
>>>>>> manifested through the flush path.  Perhaps changes since then have
>>>>>> removed that problem.
>>>>>>
>>>>>>      -J
>>>>>
>>>>> Won't this deadlock on RTNL.  The problem is that:
>>>>>
>>>>>   CPU0                            CPU1
>>>>>  rtnl_lock
>>>>>      bond_close
>>>>>                                 delayed_work
>>>>>                                   mii_work
>>>>>                                     read_lock(bond->lock);
>>>>>                                     read_unlock(bond->lock);
>>>>>                                     rtnl_lock... waiting for CPU0
>>>>>      flush_delayed_work_sync
>>>>>          waiting for delayed_work to finish...
>>>>
>>>>        Yah, that was it.  We discussed this a couple of years ago in
>>>> regards to a similar patch:
>>>>
>>>> http://lists.openwall.net/netdev/2009/12/17/3
>>>>
>>>>        The short version is that we could rework the rtnl_lock inside
>>>> the monitors to be conditional and retry on failure (where "retry" means
>>>> "reschedule the work and try again later," not "spin retrying on rtnl").
>>>> That should permit the use of flush or cancel to terminate the work
>>>> items.
>>>
>>> Yes? Even if we use rtnl_trylock(), doesn't flush_delayed_work_sync()
>>> still queue the pending delayed work and wait for it to be finished?
>>
>> 	Yes, it does.  The original patch wants to use flush instead of
>> cancel to wait for the work to finish, because there's evidently a
>> possibility of getting back into bond_open before the work item
>> executes, and bond_open would reinitialize the work queue and corrupt
>> the queued work item.
>>
>> 	The original patch series, and recipe for destruction, is here:
>>
>> 	http://www.spinics.net/lists/netdev/msg176382.html
>>
>> 	I've been unable to reproduce the work queue panic locally,
>> although it sounds plausible.
>>
>> 	Mitsuo: can you provide the precise bonding configuration you're
>> using to induce the problem?  Driver options, number and type of slaves,
>> etc.
>>
>>> Maybe I am too blind, why do we need rtnl_lock for cancel_delayed_work()
>>> inside bond_close()?
>>
>> 	We don't need RTNL for cancel/flush.  However, bond_close is an
>> ndo_stop operation, and is called in the dev_close path, which always
>> occurs under RTNL.  The mii / arp monitor work functions separately
>> acquire RTNL if they need to perform various failover related
>> operations.
>>
>> 	I'm working on a patch that should resolve the mii / arp monitor
>> RTNL problem as I described above (if rtnl_trylock fails, punt and
>> reschedule the work).  I need to rearrange the netdev_bonding_change
>> stuff a bit as well, since it acquires RTNL separately.
>>
>> 	Once these changes are made to mii / arp monitor, then
>> bond_close can call flush instead of cancel, which should eliminate the
>> original problem described at the top.
> 
> 	Just an update: there are three functions that may deadlock if
> the cancel work calls are changed to flush_sync.  There are two
> rtnl_lock calls in each of the bond_mii_monitor and
> bond_activebackup_arp_mon functions, and one more in the
> bond_alb_monitor.
> 
> 	Still testing to make sure I haven't missed anything, and I
> still haven't been able to reproduce Mitsuo's original failure.


The interval of mii_mon was set to 1 to make this bug easier to reproduce,
and the 802.3ad mode was used. Then, I executed the following commands.

# while true; do ifconfig bond0 down; done &
# while true; do ifconfig bond0 up; done &

This bug occurs only rarely since it is a severe timing problem.
I found that it is easier to reproduce when using a guest OS.

For example, it took one to three days for me to reproduce it on a host OS,
but only some hours on a guest OS.

Thanks.


> 
> 	-J
> 
> ---
> 	-Jay Vosburgh, IBM Linux Technology Center, fubar@us.ibm.com
> 
> 


Patch

diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
index de3d351..a4353f9 100644
--- a/drivers/net/bonding/bond_main.c
+++ b/drivers/net/bonding/bond_main.c
@@ -3504,27 +3504,27 @@  static int bond_close(struct net_device *bond_dev)
 	write_unlock_bh(&bond->lock);
 
 	if (bond->params.miimon) {  /* link check interval, in milliseconds. */
-		cancel_delayed_work(&bond->mii_work);
+		flush_delayed_work_sync(&bond->mii_work);
 	}
 
 	if (bond->params.arp_interval) {  /* arp interval, in milliseconds. */
-		cancel_delayed_work(&bond->arp_work);
+		flush_delayed_work_sync(&bond->arp_work);
 	}
 
 	switch (bond->params.mode) {
 	case BOND_MODE_8023AD:
-		cancel_delayed_work(&bond->ad_work);
+		flush_delayed_work_sync(&bond->ad_work);
 		break;
 	case BOND_MODE_TLB:
 	case BOND_MODE_ALB:
-		cancel_delayed_work(&bond->alb_work);
+		flush_delayed_work_sync(&bond->alb_work);
 		break;
 	default:
 		break;
 	}
 
 	if (delayed_work_pending(&bond->mcast_work))
-		cancel_delayed_work(&bond->mcast_work);
+		flush_delayed_work_sync(&bond->mcast_work);
 
 	if (bond_is_lb(bond)) {
 		/* Must be called only after all