
[net-next] Phonet: use rwlock for sockets list

Message ID 1257514891-18917-1-git-send-email-remi@remlab.net
State Rejected, archived
Delegated to: David Miller

Commit Message

Rémi Denis-Courmont Nov. 6, 2009, 1:41 p.m. UTC
From: Rémi Denis-Courmont <remi.denis-courmont@nokia.com>

Signed-off-by: Rémi Denis-Courmont <remi.denis-courmont@nokia.com>
---
 net/phonet/socket.c |   24 ++++++++++++------------
 1 files changed, 12 insertions(+), 12 deletions(-)

Comments

Eric Dumazet Nov. 6, 2009, 1:44 p.m. UTC | #1
Rémi Denis-Courmont wrote:
> From: Rémi Denis-Courmont <remi.denis-courmont@nokia.com>
> 
> Signed-off-by: Rémi Denis-Courmont <remi.denis-courmont@nokia.com>
> ---
>  net/phonet/socket.c |   24 ++++++++++++------------
>  1 files changed, 12 insertions(+), 12 deletions(-)

Hmm... rwlocks are bad... 

Would you care to explain why you are introducing an rwlock?

Thanks
Rémi Denis-Courmont Nov. 6, 2009, 1:54 p.m. UTC | #2
On Fri, 06 Nov 2009 14:44:20 +0100, Eric Dumazet <eric.dumazet@gmail.com>
wrote:
> Rémi Denis-Courmont wrote:
>> From: Rémi Denis-Courmont <remi.denis-courmont@nokia.com>
>>
>> Signed-off-by: Rémi Denis-Courmont <remi.denis-courmont@nokia.com>
>> ---
>>  net/phonet/socket.c |   24 ++++++++++++------------
>>  1 files changed, 12 insertions(+), 12 deletions(-)
> 
> Hmm... rwlocks are bad...

They're used in many places throughout the non-IP families...

> Would you care to explain why you are introducing an rwlock?

It seems better than a spinlock, assuming that sockets are created and
destroyed far less often than they receive packets. Also, sk_for_each_rcu
does not exist; I am sure there is a good reason for that, though I don't
know it. I guess I should try RCU hlist_nulls then?
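
For context, a minimal sketch of what such an RCU hlist_nulls lookup could
look like, modeled loosely on the UDP hash lookup, where an entry can migrate
to another chain while a reader walks it. pn_sk_match(), PN_NULLS_VAL and the
nulls_hlist field are hypothetical names; none of this is part of the patch:

/* Hypothetical sketch: an hlist_nulls lookup in the style of the
 * UDP/TCP hashes. pn_sk_match(), PN_NULLS_VAL and pnsocks.nulls_hlist
 * are made-up names for illustration.
 */
static struct sock *pn_find_sock_rcu(struct net *net,
				     const struct sockaddr_pn *spn)
{
	struct sock *sknode, *rval = NULL;
	struct hlist_nulls_node *node;

	rcu_read_lock();
begin:
	sk_nulls_for_each_rcu(sknode, node, &pnsocks.nulls_hlist) {
		if (!pn_sk_match(sknode, net, spn))
			continue;
		/* The socket may be freed and reused concurrently
		 * (SLAB_DESTROY_BY_RCU), so grab a reference only if
		 * it is still live, then re-check the match. */
		if (unlikely(!atomic_inc_not_zero(&sknode->sk_refcnt)))
			continue;
		if (unlikely(!pn_sk_match(sknode, net, spn))) {
			sock_put(sknode);
			goto begin;
		}
		rval = sknode;
		break;
	}
	/* Ending on an unexpected nulls marker means the entry we were
	 * following moved to another chain: restart the walk. */
	if (!rval && get_nulls_value(node) != PN_NULLS_VAL)
		goto begin;
	rcu_read_unlock();
	return rval;
}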
Eric Dumazet Nov. 6, 2009, 2 p.m. UTC | #3
Rémi Denis-Courmont wrote:
> It seems better than a spinlock, assuming that sockets are created and
> destroyed far less often than they receive packets. Also, sk_for_each_rcu
> does not exist; I am sure there is a good reason for that, though I don't
> know it. I guess I should try RCU hlist_nulls then?
> 

spin_lock()/spin_unlock() is faster than read_lock()/read_unlock() unless
there is contention: the rwlock pair costs two atomic operations instead of
one.

So, unless you have a particular performance problem, it's actually
better to use a spinlock.

If you do have a performance problem, an RCU conversion is better than an
rwlock. I can do the RCU conversion if you ask me...
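
For reference, a sketch of the reader/writer split such a conversion would
give, drawn against the existing pnsocks structure. The hard part, making the
freeing of struct sock itself RCU-safe (e.g. SLAB_DESTROY_BY_RCU plus
refcount checks), is deliberately glossed over here:

/* Sketch only: writers keep the existing spinlock, readers go
 * lockless under rcu_read_lock(). Safe reclamation of struct sock
 * is not shown.
 */
void pn_sock_hash(struct sock *sk)
{
	spin_lock_bh(&pnsocks.lock);
	hlist_add_head_rcu(&sk->sk_node, &pnsocks.hlist);
	spin_unlock_bh(&pnsocks.lock);
}

void pn_sock_unhash(struct sock *sk)
{
	spin_lock_bh(&pnsocks.lock);
	hlist_del_init_rcu(&sk->sk_node);
	spin_unlock_bh(&pnsocks.lock);
}

/* Reader side, e.g. in pn_deliver_sock_broadcast(), reusing the
 * sknode/node variables that function already declares: */
	rcu_read_lock();
	hlist_for_each_entry_rcu(sknode, node, &pnsocks.hlist, sk_node) {
		/* ... clone and deliver the skb as before ... */
	}
	rcu_read_unlock();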
Rémi Denis-Courmont Nov. 6, 2009, 2:29 p.m. UTC | #4
On Friday 06 November 2009 16:00:39 ext Eric Dumazet, you wrote:
> Rémi Denis-Courmont wrote:
> > It seems better than a spinlock, assuming that sockets are created and
> > destroyed far less often than they receive packets. Also, sk_for_each_rcu
> > does not exist; I am sure there is a good reason for that, though I don't
> > know it. I guess I should try RCU hlist_nulls then?
> 
> spin_lock()/spin_unlock() is faster than read_lock()/read_unlock() unless
> there is contention: the rwlock pair costs two atomic operations instead
> of one.
> 
> So, unless you have a particular performance problem, it's actually
> better to use a spinlock.
> 
> If you do have a performance problem, an RCU conversion is better than an
> rwlock. I can do the RCU conversion if you ask me...

The most obvious current bottleneck is the sockets linked list, more so than 
the locking scheme (RCU vs. spinlock vs. rwlock). I'll fix that soonish, 
regardless of the locking.
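
A plausible shape for that fix, with made-up names, is to split the single
list into hashed buckets keyed on the Phonet object, so lookup chains stay
short:

/* Hypothetical sketch: PN_HASHSIZE is a made-up name and must be a
 * power of two for the mask below to work.
 */
#define PN_HASHSIZE	16

static struct hlist_head pn_hash[PN_HASHSIZE];

static struct hlist_head *pn_hash_list(u16 obj)
{
	return pn_hash + (obj & (PN_HASHSIZE - 1));
}

pn_find_sock_by_sa() would then walk only
pn_hash_list(pn_sockaddr_get_object(spn)) instead of every socket.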

Then there are the device lists. I did not want to clutter every net_device 
with a Phonet-specific pointer, since most devices aren't Phonet-capable, so 
we have a linked list of per-device data. There, I guess RCU would make 
sense, but it did not seem like a trivial change at all.
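
For context, that per-device bookkeeping in net/phonet/pn_dev.c looks roughly
like this; each Phonet-capable device gets one entry on a global list instead
of a pointer in every struct net_device:

struct phonet_device {
	struct list_head list;
	struct net_device *netdev;
	DECLARE_BITMAP(addrs, 64);
};

An RCU conversion would have to cover both the list walk and the mutable
addrs bitmap, which is presumably part of what makes it non-trivial.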

That said, there are no real "problems" yet - I was just trying to improve 
things. The N900 ships with the 2.6.30 Phonet stack, and it works fine :) The 
3G access network is typically the biggest bottleneck anyway...

So if you say R/W locks suck, screw this patch.

Thanks!
David Miller Nov. 7, 2009, 2:03 a.m. UTC | #5
From: "Rémi Denis-Courmont" <remi.denis-courmont@nokia.com>
Date: Fri, 6 Nov 2009 16:29:42 +0200

> So if you say R/W locks suck, screw this patch.

They do, so I'm tossing it :-)

Patch

diff --git a/net/phonet/socket.c b/net/phonet/socket.c
index 0412beb..ecb5f20 100644
--- a/net/phonet/socket.c
+++ b/net/phonet/socket.c
@@ -47,10 +47,10 @@  static int pn_socket_release(struct socket *sock)
 
 static struct  {
 	struct hlist_head hlist;
-	spinlock_t lock;
+	rwlock_t lock;
 } pnsocks = {
 	.hlist = HLIST_HEAD_INIT,
-	.lock = __SPIN_LOCK_UNLOCKED(pnsocks.lock),
+	.lock = __RW_LOCK_UNLOCKED(pnsocks.lock),
 };
 
 /*
@@ -65,7 +65,7 @@  struct sock *pn_find_sock_by_sa(struct net *net, const struct sockaddr_pn *spn)
 	u16 obj = pn_sockaddr_get_object(spn);
 	u8 res = spn->spn_resource;
 
-	spin_lock_bh(&pnsocks.lock);
+	read_lock_bh(&pnsocks.lock);
 
 	sk_for_each(sknode, node, &pnsocks.hlist) {
 		struct pn_sock *pn = pn_sk(sknode);
@@ -91,7 +91,7 @@  struct sock *pn_find_sock_by_sa(struct net *net, const struct sockaddr_pn *spn)
 		break;
 	}
 
-	spin_unlock_bh(&pnsocks.lock);
+	read_unlock_bh(&pnsocks.lock);
 
 	return rval;
 }
@@ -102,7 +102,7 @@  void pn_deliver_sock_broadcast(struct net *net, struct sk_buff *skb)
 	struct hlist_node *node;
 	struct sock *sknode;
 
-	spin_lock(&pnsocks.lock);
+	read_lock(&pnsocks.lock);
 	sk_for_each(sknode, node, &pnsocks.hlist) {
 		struct sk_buff *clone;
 
@@ -117,22 +117,22 @@  void pn_deliver_sock_broadcast(struct net *net, struct sk_buff *skb)
 			sk_receive_skb(sknode, clone, 0);
 		}
 	}
-	spin_unlock(&pnsocks.lock);
+	read_unlock(&pnsocks.lock);
 }
 
 void pn_sock_hash(struct sock *sk)
 {
-	spin_lock_bh(&pnsocks.lock);
+	write_lock_bh(&pnsocks.lock);
 	sk_add_node(sk, &pnsocks.hlist);
-	spin_unlock_bh(&pnsocks.lock);
+	write_unlock_bh(&pnsocks.lock);
 }
 EXPORT_SYMBOL(pn_sock_hash);
 
 void pn_sock_unhash(struct sock *sk)
 {
-	spin_lock_bh(&pnsocks.lock);
+	write_lock_bh(&pnsocks.lock);
 	sk_del_node_init(sk);
-	spin_unlock_bh(&pnsocks.lock);
+	write_unlock_bh(&pnsocks.lock);
 }
 EXPORT_SYMBOL(pn_sock_unhash);
 
@@ -466,7 +466,7 @@  static struct sock *pn_sock_get_next(struct seq_file *seq, struct sock *sk)
 static void *pn_sock_seq_start(struct seq_file *seq, loff_t *pos)
 	__acquires(pnsocks.lock)
 {
-	spin_lock_bh(&pnsocks.lock);
+	read_lock_bh(&pnsocks.lock);
 	return *pos ? pn_sock_get_idx(seq, *pos - 1) : SEQ_START_TOKEN;
 }
 
@@ -485,7 +485,7 @@  static void *pn_sock_seq_next(struct seq_file *seq, void *v, loff_t *pos)
 static void pn_sock_seq_stop(struct seq_file *seq, void *v)
 	__releases(pnsocks.lock)
 {
-	spin_unlock_bh(&pnsocks.lock);
+	read_unlock_bh(&pnsocks.lock);
 }
 
 static int pn_sock_seq_show(struct seq_file *seq, void *v)