| Message ID | 1274815488-29173-1-git-send-email-jlayton@redhat.com |
|---|---|
| State | New |
Any rough idea of performance or memory savings (even in something
artificial like a dbench run)?

On Tue, May 25, 2010 at 2:24 PM, Jeff Layton <jlayton@redhat.com> wrote:
> The standard behavior for drop_inode is to delete the inode when the
> last reference to it is put and the nlink count goes to 0. This helps
> keep inodes that are still considered "not deleted" in cache as long as
> possible even when there aren't dentries attached to them.
>
> When server inode numbers are disabled, it's not possible for cifs_iget
> to ever match an existing inode (since inode numbers are generated via
> iunique). In this situation, cifs can keep a lot of inodes in cache that
> will never be used again.
>
> Implement a drop_inode routine that deletes the inode if server inode
> numbers are disabled on the mount. This helps keep the cifs inode
> caches down to a more manageable size when server inode numbers are
> disabled.
>
> Signed-off-by: Jeff Layton <jlayton@redhat.com>
> ---
> [snip]
On Tue, 25 May 2010 17:14:10 -0500
Steve French <smfrench@gmail.com> wrote:

> Any rough idea of performance or memory savings (even in something
> artificial like a dbench run)?

It's more of a memory savings thing. When I mount with -o noserverino
and run fsstress on the mount, I'd regularly see the size of the
cifs_inode_cache hit 60M or more (on a client with 1G RAM). With this
patch in place, it rarely goes over 2M in size.

Eventually, memory pressure will force the size to go down, but if we
know that the inodes will never be used again (which is the case with
noserverino), it's better to go ahead and just free them.

[snip]
Hi,

Jeff Layton <jlayton@redhat.com> writes:

[snip]

>  static const struct super_operations cifs_super_ops = {
>  	.put_super = cifs_put_super,
>  	.statfs = cifs_statfs,
>  	.alloc_inode = cifs_alloc_inode,
>  	.destroy_inode = cifs_destroy_inode,
> -/*	.drop_inode = generic_delete_inode,
> -	.delete_inode = cifs_delete_inode, */	/* Do not need above two
> +	.drop_inode = cifs_drop_inode,
> +/*	.delete_inode = cifs_delete_inode, */	/* Do not need above two
>  	functions unless later we add lazy close of inodes or unless the
>  	kernel forgets to call us with the same number of releases (closes)
>  	as opens */

I think the comment needs to be updated, at least to decrement the
"two functions".

- Hari
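[Editor's note: for illustration, here is one way the table could read once only .delete_inode remains commented out. The exact wording below is a sketch along the lines Hari suggests, not text that was posted in the thread; the trailing members of the struct are elided.]

/*
 * Editor's sketch; illustrative wording only, not from the thread.
 */
static const struct super_operations cifs_super_ops = {
	.put_super = cifs_put_super,
	.statfs = cifs_statfs,
	.alloc_inode = cifs_alloc_inode,
	.destroy_inode = cifs_destroy_inode,
	.drop_inode = cifs_drop_inode,
/*	.delete_inode = cifs_delete_inode, */	/* Do not need this function
	unless later we add lazy close of inodes or unless the kernel
	forgets to call us with the same number of releases (closes) as
	opens */
	/* ... remaining operations unchanged ... */
};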
>> Any rough idea of performance or memory savings (even in something
>> artificial like a dbench run)?
>>
> It's more of a memory savings thing. When I mount with -o noserverino
> and run fsstress on the mount, I'd regularly see the size of the
> cifs_inode_cache hit 60M or more (on a client with 1G RAM). With this
> patch in place, it rarely goes over 2M in size.
>
> Eventually, memory pressure will force the size to go down, but if we
> know that the inodes will never be used again (which is the case with
> noserverino), it's better to go ahead and just free them.
>
I take it this overrides the behavior of vfs_cache_pressure before
memory pressure makes reclaiming the caches necessary?
On Wed, 26 May 2010 19:19:11 -0400
Scott Lovenberg <scott.lovenberg@gmail.com> wrote:

[snip]

> I take it this overrides the behavior of vfs_cache_pressure before
> memory pressure makes reclaiming the caches necessary?

Not exactly. vfs_cache_pressure just governs the way in which the VM
subsystem will attempt to free memory when it needs it by changing the
preference for flushing inode and dentry caches.

This patch just aims to delete inodes that we know will never be used
again as soon as their refcount drops to 0.
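[Editor's note: for readers without the source at hand, this is roughly the dispatch the patch hooks into. The snippet below is a simplified, from-memory sketch of the ~2.6.34-era logic in fs/inode.c, not verbatim kernel source.]

/* Simplified, from-memory sketch of the ~2.6.34-era code in fs/inode.c;
 * not verbatim kernel source. */
static void iput_final(struct inode *inode)
{
	const struct super_operations *op = inode->i_sb->s_op;
	void (*drop)(struct inode *) = generic_drop_inode;

	/* A filesystem's ->drop_inode (cifs_drop_inode here) overrides
	 * the default policy. */
	if (op && op->drop_inode)
		drop = op->drop_inode;
	drop(inode);
}

void generic_drop_inode(struct inode *inode)
{
	if (!inode->i_nlink)
		generic_delete_inode(inode);	/* unlinked: evict now */
	else
		generic_forget_inode(inode);	/* keep it cached, unused */
}

On noserverino mounts, cifs_drop_inode() skips the nlink check and calls generic_delete_inode() directly, so the inode is evicted immediately even though the file still exists on the server.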
Cached metadata will still be valid for 1 second - do these still have
dentries pointing to them?

On Thu, May 27, 2010 at 8:38 AM, Jeff Layton <jlayton@redhat.com> wrote:

[snip]

> Not exactly. vfs_cache_pressure just governs the way in which the VM
> subsystem will attempt to free memory when it needs it by changing the
> preference for flushing inode and dentry caches.
>
> This patch just aims to delete inodes that we know will never be used
> again as soon as their refcount drops to 0.
>
> --
> Jeff Layton <jlayton@redhat.com>
On Thu, 27 May 2010 09:51:40 -0500
Steve French <smfrench@gmail.com> wrote:

> Cached metadata will still be valid for 1 second - do these still have
> dentries pointing to them?

No. If the i_count is 0, then any dentries attached to the inode would
have gone away. These are inodes that are subject to get flushed out of
the cache at any time. This patch just makes that happen sooner.

[snip]
diff --git a/fs/cifs/cifsfs.c b/fs/cifs/cifsfs.c
index 78c02eb..8f647db 100644
--- a/fs/cifs/cifsfs.c
+++ b/fs/cifs/cifsfs.c
@@ -473,13 +473,23 @@ static int cifs_remount(struct super_block *sb, int *flags, char *data)
 	return 0;
 }
 
+void cifs_drop_inode(struct inode *inode)
+{
+	struct cifs_sb_info *cifs_sb = CIFS_SB(inode->i_sb);
+
+	if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_SERVER_INUM)
+		return generic_drop_inode(inode);
+
+	return generic_delete_inode(inode);
+}
+
 static const struct super_operations cifs_super_ops = {
 	.put_super = cifs_put_super,
 	.statfs = cifs_statfs,
 	.alloc_inode = cifs_alloc_inode,
 	.destroy_inode = cifs_destroy_inode,
-/*	.drop_inode = generic_delete_inode,
-	.delete_inode = cifs_delete_inode, */	/* Do not need above two
+	.drop_inode = cifs_drop_inode,
+/*	.delete_inode = cifs_delete_inode, */	/* Do not need above two
 	functions unless later we add lazy close of inodes or unless the
 	kernel forgets to call us with the same number of releases (closes)
 	as opens */
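[Editor's note, not raised in the thread: cifs_drop_inode is referenced only through cifs_super_ops in this file, so the usual kernel convention would declare it static, and sparse would otherwise warn about the missing declaration. A sketch with the same body as the patch:]

/* Editor's sketch, not from the thread: identical to the patch except
 * for the static qualifier, per kernel convention for file-local
 * functions. */
static void cifs_drop_inode(struct inode *inode)
{
	struct cifs_sb_info *cifs_sb = CIFS_SB(inode->i_sb);

	/* Server-supplied inode numbers: keep the default caching policy. */
	if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_SERVER_INUM)
		return generic_drop_inode(inode);

	/* noserverino: this inode can never be re-found, so evict it now. */
	return generic_delete_inode(inode);
}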
The standard behavior for drop_inode is to delete the inode when the
last reference to it is put and the nlink count goes to 0. This helps
keep inodes that are still considered "not deleted" in cache as long as
possible even when there aren't dentries attached to them.

When server inode numbers are disabled, it's not possible for cifs_iget
to ever match an existing inode (since inode numbers are generated via
iunique). In this situation, cifs can keep a lot of inodes in cache that
will never be used again.

Implement a drop_inode routine that deletes the inode if server inode
numbers are disabled on the mount. This helps keep the cifs inode
caches down to a more manageable size when server inode numbers are
disabled.

Signed-off-by: Jeff Layton <jlayton@redhat.com>
---
 fs/cifs/cifsfs.c |   14 ++++++++++++--
 1 files changed, 12 insertions(+), 2 deletions(-)
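[Editor's note: to make the iunique() point concrete, below is a hedged illustration of why a noserverino inode can never be matched again. The helper cifs_pick_ino and its parameters are the editor's invention and do not exist in the cifs source; only iunique() itself is real kernel API, returning an inode number not currently in use on the given superblock.]

#include <linux/fs.h>

/* Hypothetical helper (editor's sketch, not in the cifs source). */
static ino_t cifs_pick_ino(struct super_block *sb, bool server_inum,
			   u64 server_unique_id)
{
	if (server_inum)
		/* Stable across lookups, so cifs_iget can match an
		 * existing in-core inode for the same file. */
		return (ino_t)server_unique_id;

	/*
	 * Fresh number on every lookup: two lookups of the same file get
	 * different inode numbers, so the inode-cache lookup can never
	 * hit. Without this patch, those one-shot inodes linger in
	 * cifs_inode_cache until memory pressure evicts them.
	 */
	return iunique(sb, 2 /* keep low numbers reserved, e.g. the root */);
}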