
bond: take rcu lock in bond_poll_controller

Message ID 20180928161631.3pab3m2k6zvoemnz@codemonkey.org.uk
State: Changes Requested, archived
Delegated to: David Miller
Series: bond: take rcu lock in bond_poll_controller

Commit Message

Dave Jones Sept. 28, 2018, 4:16 p.m. UTC
Callers of bond_for_each_slave_rcu are expected to hold the rcu lock,
otherwise a trace like below is shown

WARNING: CPU: 2 PID: 179 at net/core/dev.c:6567 netdev_lower_get_next_private_rcu+0x34/0x40
CPU: 2 PID: 179 Comm: kworker/u16:15 Not tainted 4.19.0-rc5-backup+ #1
Workqueue: bond0 bond_mii_monitor
RIP: 0010:netdev_lower_get_next_private_rcu+0x34/0x40
Code: 48 89 fb e8 fe 29 63 ff 85 c0 74 1e 48 8b 45 00 48 81 c3 c0 00 00 00 48 8b 00 48 39 d8 74 0f 48 89 45 00 48 8b 40 f8 5b 5d c3 <0f> 0b eb de 31 c0 eb f5 0f 1f 40 00 0f 1f 44 00 00 48 8>
RSP: 0018:ffffc9000087fa68 EFLAGS: 00010046
RAX: 0000000000000000 RBX: ffff880429614560 RCX: 0000000000000000
RDX: 0000000000000001 RSI: 00000000ffffffff RDI: ffffffffa184ada0
RBP: ffffc9000087fa80 R08: 0000000000000001 R09: 0000000000000000
R10: ffffc9000087f9f0 R11: ffff880429798040 R12: ffff8804289d5980
R13: ffffffffa1511f60 R14: 00000000000000c8 R15: 00000000ffffffff
FS:  0000000000000000(0000) GS:ffff88042f880000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f4b78fce180 CR3: 000000018180f006 CR4: 00000000001606e0
Call Trace:
 bond_poll_controller+0x52/0x170
 netpoll_poll_dev+0x79/0x290
 netpoll_send_skb_on_dev+0x158/0x2c0
 netpoll_send_udp+0x2d5/0x430
 write_ext_msg+0x1e0/0x210
 console_unlock+0x3c4/0x630
 vprintk_emit+0xfa/0x2f0
 printk+0x52/0x6e
 ? __netdev_printk+0x12b/0x220
 netdev_info+0x64/0x80
 ? bond_3ad_set_carrier+0xe9/0x180
 bond_select_active_slave+0x1fc/0x310
 bond_mii_monitor+0x709/0x9b0
 process_one_work+0x221/0x5e0
 worker_thread+0x4f/0x3b0
 kthread+0x100/0x140
 ? process_one_work+0x5e0/0x5e0
 ? kthread_delayed_work_timer_fn+0x90/0x90
 ret_from_fork+0x24/0x30

Signed-off-by: Dave Jones <davej@codemonkey.org.uk>
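
For context, the WARNING above comes from netdev_lower_get_next_private_rcu(),
the helper bond_for_each_slave_rcu() uses to walk the slave list. A sketch of
that helper (based on the upstream net/core/dev.c of this era; exact body and
line numbers vary by tree):

void *netdev_lower_get_next_private_rcu(struct net_device *dev,
					struct list_head **iter)
{
	struct netdev_adjacent *lower;

	/* This assertion is what fires when the caller is not inside an
	 * rcu_read_lock()/rcu_read_unlock() read-side critical section. */
	WARN_ON_ONCE(!rcu_read_lock_held());

	lower = list_entry_rcu((*iter)->next, struct netdev_adjacent, list);
	if (&lower->list == &dev->adj_list.lower)
		return NULL;

	*iter = &lower->list;
	return lower->private;
}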

Comments

Cong Wang Sept. 28, 2018, 4:55 p.m. UTC | #1
On Fri, Sep 28, 2018 at 9:18 AM Dave Jones <davej@codemonkey.org.uk> wrote:
>
> Callers of bond_for_each_slave_rcu are expected to hold the rcu lock,
> otherwise a trace like below is shown

So why not take rcu read lock in netpoll_send_skb_on_dev() where
RCU is also assumed?

As I said, I can't explain why you didn't trigger the RCU warning in
netpoll_send_skb_on_dev()...
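
A minimal sketch of that alternative (illustrative only, not a tested patch):
netpoll_send_skb_on_dev() dereferences the device's npinfo with
rcu_dereference_bh(), so the _bh flavour of the read lock is assumed here, and
the field access is approximated from memory of that function:

	/* Sketch: let netpoll_send_skb_on_dev() satisfy its own RCU
	 * requirement instead of relying on every caller to provide it. */
	rcu_read_lock_bh();
	npinfo = rcu_dereference_bh(np->dev->npinfo);
	/* ... existing transmit path ... */
	rcu_read_unlock_bh();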
Dave Jones Sept. 28, 2018, 5:25 p.m. UTC | #2
On Fri, Sep 28, 2018 at 09:55:52AM -0700, Cong Wang wrote:
 > On Fri, Sep 28, 2018 at 9:18 AM Dave Jones <davej@codemonkey.org.uk> wrote:
 > >
 > > Callers of bond_for_each_slave_rcu are expected to hold the rcu lock,
 > > otherwise a trace like below is shown
 > 
 > So why not take rcu read lock in netpoll_send_skb_on_dev() where
 > RCU is also assumed?

that does seem to solve the backtrace spew I saw too, so if that's
preferable I can respin the patch.

 > As I said, I can't explain why you didn't trigger the RCU warning in
 > netpoll_send_skb_on_dev()...

netpoll_send_skb_on_dev takes the rcu lock itself.

	Dave
Cong Wang Sept. 28, 2018, 5:31 p.m. UTC | #3
On Fri, Sep 28, 2018 at 10:25 AM Dave Jones <davej@codemonkey.org.uk> wrote:
>
> On Fri, Sep 28, 2018 at 09:55:52AM -0700, Cong Wang wrote:
>  > On Fri, Sep 28, 2018 at 9:18 AM Dave Jones <davej@codemonkey.org.uk> wrote:
>  > >
>  > > Callers of bond_for_each_slave_rcu are expected to hold the rcu lock,
>  > > otherwise a trace like below is shown
>  >
>  > So why not take rcu read lock in netpoll_send_skb_on_dev() where
>  > RCU is also assumed?
>
> that does seem to solve the backtrace spew I saw too, so if that's
> preferable I can respin the patch.


From my observations, netpoll_send_skb_on_dev() does not take the
RCU read lock _and yet_ it relies on the RCU read lock because it calls
rcu_dereference_bh().

If my observation is correct, you should catch an RCU warning like
this one, but within netpoll_send_skb_on_dev().


>
>  > As I said, I can't explain why you didn't trigger the RCU warning in
>  > netpoll_send_skb_on_dev()...
>
> netpoll_send_skb_on_dev takes the rcu lock itself.

Could you please point me where exactly is the rcu lock here?

I am too stupid to see it. :)
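
For reference, the expectation of a second warning comes from the lockdep
check built into rcu_dereference_bh(); roughly (a sketch of the rcupdate.h
macros, details vary by kernel version):

/* If neither the explicit condition nor rcu_read_lock_bh_held() is satisfied,
 * lockdep emits a "suspicious rcu_dereference_bh() usage" splat. */
#define rcu_dereference_bh_check(p, c) \
	__rcu_dereference_check((p), (c) || rcu_read_lock_bh_held(), __rcu)

#define rcu_dereference_bh(p) rcu_dereference_bh_check(p, 0)

One possible reason no such splat appeared is that rcu_read_lock_bh_held()
also returns true when softirqs or interrupts are disabled, and the netpoll
transmit path runs with IRQs off; that is a guess from the generic RCU code,
not something established in this thread.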
Dave Jones Sept. 28, 2018, 6:21 p.m. UTC | #4
On Fri, Sep 28, 2018 at 10:31:39AM -0700, Cong Wang wrote:
 > On Fri, Sep 28, 2018 at 10:25 AM Dave Jones <davej@codemonkey.org.uk> wrote:
 > >
 > > On Fri, Sep 28, 2018 at 09:55:52AM -0700, Cong Wang wrote:
 > >  > On Fri, Sep 28, 2018 at 9:18 AM Dave Jones <davej@codemonkey.org.uk> wrote:
 > >  > >
 > >  > > Callers of bond_for_each_slave_rcu are expected to hold the rcu lock,
 > >  > > otherwise a trace like below is shown
 > >  >
 > >  > So why not take rcu read lock in netpoll_send_skb_on_dev() where
 > >  > RCU is also assumed?
 > >
 > > that does seem to solve the backtrace spew I saw too, so if that's
 > > preferable I can respin the patch.
 > 
 > 
 > From my observations, netpoll_send_skb_on_dev() does not take the
 > RCU read lock _and yet_ it relies on the RCU read lock because it calls
 > rcu_dereference_bh().
 > 
 > If my observation is correct, you should catch an RCU warning like
 > this one, but within netpoll_send_skb_on_dev().
 >
 > >  > As I said, I can't explain why you didn't trigger the RCU warning in
 > >  > netpoll_send_skb_on_dev()...
 > >
 > > netpoll_send_skb_on_dev takes the rcu lock itself.
 > 
 > Could you please point me where exactly is the rcu lock here?
 > 
 > I am too stupid to see it. :)

No, I'm the stupid one. I looked at the tree I had just edited to try your
proposed change. 

Now that I've untangled myself, I'll repost with your suggested change.

	Dave

Patch

diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
index c05c01a00755..77a3607a7099 100644
--- a/drivers/net/bonding/bond_main.c
+++ b/drivers/net/bonding/bond_main.c
@@ -977,6 +977,7 @@  static void bond_poll_controller(struct net_device *bond_dev)
 		if (bond_3ad_get_active_agg_info(bond, &ad_info))
 			return;
 
+	rcu_read_lock();
 	bond_for_each_slave_rcu(bond, slave, iter) {
 		if (!bond_slave_is_up(slave))
 			continue;
@@ -992,6 +993,7 @@  static void bond_poll_controller(struct net_device *bond_dev)
 
 		netpoll_poll_dev(slave->dev);
 	}
+	rcu_read_unlock();
 }
 
 static void bond_netpoll_cleanup(struct net_device *bond_dev)
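
For completeness, the connection between this hunk and the stack trace above:
bond_for_each_slave_rcu() is, roughly, a wrapper around the lower-device
iterator whose RCU check fired, so holding rcu_read_lock() across the loop is
exactly what that check asks for. A sketch of the macros involved (from
include/net/bonding.h and include/linux/netdevice.h; exact definitions vary
by tree):

/* Caller must hold the RCU read lock. */
#define bond_for_each_slave_rcu(bond, pos, iter) \
	netdev_for_each_lower_private_rcu((bond)->dev, pos, iter)

#define netdev_for_each_lower_private_rcu(dev, priv, iter) \
	for (iter = &(dev)->adj_list.lower, \
	     priv = netdev_lower_get_next_private_rcu(dev, &(iter)); \
	     priv; \
	     priv = netdev_lower_get_next_private_rcu(dev, &(iter)))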