
[v2,2/4] NFS: release per-net clients lock before calling PipeFS dentries creation

Message ID 20120227155103.7941.17006.stgit@localhost6.localdomain6
State Not Applicable, archived
Delegated to: David Miller
Headers show

Commit Message

Stanislav Kinsbursky Feb. 27, 2012, 3:51 p.m. UTC
Lockdep is sad otherwise, because the inode mutex is taken during PipeFS dentry
creation, which can be called from a mount notification, where this per-net
client lock is taken while walking the clients list.

Signed-off-by: Stanislav Kinsbursky <skinsbursky@parallels.com>

---
 fs/nfs/client.c |    2 +-
 fs/nfs/idmap.c  |    8 ++++++--
 2 files changed, 7 insertions(+), 3 deletions(-)


--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

Comments

David Laight Feb. 27, 2012, 3:59 p.m. UTC | #1
>  	spin_lock(&nn->nfs_client_lock);
> -	list_for_each_entry(clp, &nn->nfs_client_list, cl_share_link) {
> +	list_for_each_entry_safe(clp, tmp, &nn->nfs_client_list, cl_share_link) {
>  		if (clp->rpc_ops != &nfs_v4_clientops)
>  			continue;
> +		atomic_inc(&clp->cl_count);
> +		spin_unlock(&nn->nfs_client_lock);
>  		error = __rpc_pipefs_event(clp, event, sb);
> +		nfs_put_client(clp);
>  		if (error)
>  			break;
> +		spin_lock(&nn->nfs_client_lock);
>  	}
>  	spin_unlock(&nn->nfs_client_lock);
>  	return error;

The locking doesn't look right if the loop breaks on error.
(The same applies to patch v2 1/4.)

Although list_for_each_entry_safe() allows the current entry
to be freed, I don't believe it allows the 'next' entry to be freed,
and I doubt there is protection against that happening.

Do you need to use an atomic_inc() for cl_count?
I'd guess the nfs_client_lock is usually held?

	David


Stanislav Kinsbursky Feb. 27, 2012, 4:20 p.m. UTC | #2
On 27.02.2012 19:59, David Laight wrote:
>
>>   	spin_lock(&nn->nfs_client_lock);
>> -	list_for_each_entry(clp, &nn->nfs_client_list, cl_share_link) {
>> +	list_for_each_entry_safe(clp, tmp, &nn->nfs_client_list, cl_share_link) {
>>   		if (clp->rpc_ops != &nfs_v4_clientops)
>>   			continue;
>> +		atomic_inc(&clp->cl_count);
>> +		spin_unlock(&nn->nfs_client_lock);
>>   		error = __rpc_pipefs_event(clp, event, sb);
>> +		nfs_put_client(clp);
>>   		if (error)
>>   			break;
>> +		spin_lock(&nn->nfs_client_lock);
>>   	}
>>   	spin_unlock(&nn->nfs_client_lock);
>>   	return error;
>
> The locking doesn't look right if the loop breaks on error.
> (The same applies to patch v2 1/4.)
>

Thanks for the catch. I'll fix this.

> Although list_for_each_entry_safe() allows the current entry
> to be freed, I don't believe it allows the 'next' entry to be freed,
> and I doubt there is protection against that happening.
>

We need to use the safe macro because the client can be destroyed by the
nfs_put_client() call.
As for protection against the 'next' entry being freed: I don't think we need
any. Removal is done under nfs_client_lock, and the current entry's list
pointers will be updated properly.

> Do you need to use an atomic_inc() for cl_count?
> I'd guess the nfs_client_lock is usually held?
>

Sorry, I don't understand this question.

Patch

diff --git a/fs/nfs/client.c b/fs/nfs/client.c
index 8563585..6aeb6b3 100644
--- a/fs/nfs/client.c
+++ b/fs/nfs/client.c
@@ -538,7 +538,7 @@  nfs_get_client(const struct nfs_client_initdata *cl_init,
 	/* install a new client and return with it unready */
 install_client:
 	clp = new;
-	list_add(&clp->cl_share_link, &nn->nfs_client_list);
+	list_add_tail(&clp->cl_share_link, &nn->nfs_client_list);
 	spin_unlock(&nn->nfs_client_lock);
 
 	error = cl_init->rpc_ops->init_client(clp, timeparms, ip_addr,
diff --git a/fs/nfs/idmap.c b/fs/nfs/idmap.c
index b5c6d8e..8a9e7a4 100644
--- a/fs/nfs/idmap.c
+++ b/fs/nfs/idmap.c
@@ -558,16 +558,20 @@  static int rpc_pipefs_event(struct notifier_block *nb, unsigned long event,
 {
 	struct super_block *sb = ptr;
 	struct nfs_net *nn = net_generic(sb->s_fs_info, nfs_net_id);
-	struct nfs_client *clp;
+	struct nfs_client *clp, *tmp;
 	int error = 0;
 
 	spin_lock(&nn->nfs_client_lock);
-	list_for_each_entry(clp, &nn->nfs_client_list, cl_share_link) {
+	list_for_each_entry_safe(clp, tmp, &nn->nfs_client_list, cl_share_link) {
 		if (clp->rpc_ops != &nfs_v4_clientops)
 			continue;
+		atomic_inc(&clp->cl_count);
+		spin_unlock(&nn->nfs_client_lock);
 		error = __rpc_pipefs_event(clp, event, sb);
+		nfs_put_client(clp);
 		if (error)
 			break;
+		spin_lock(&nn->nfs_client_lock);
 	}
 	spin_unlock(&nn->nfs_client_lock);
 	return error;