From patchwork Mon Aug  5 19:44:00 2013
X-Patchwork-Submitter: Andy Lutomirski
X-Patchwork-Id: 264769
From: Andy Lutomirski
To: linux-mm@kvack.org, linux-ext4@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, Andy Lutomirski
Subject: [RFC 2/3] fs: Add block_willwrite
Date: Mon, 5 Aug 2013 12:44:00 -0700
Message-Id: <50d921ff5d6fcb1a5da59b4bdb755e886cecab1f.1375729665.git.luto@amacapital.net>
X-Mailer: git-send-email 1.8.3.1
X-Mailing-List: linux-ext4@vger.kernel.org

This provides generic support for MADV_WILLWRITE.  It creates and maps
buffer heads, but it should not result in anything being marked dirty.

Signed-off-by: Andy Lutomirski
---

As described in the 0/3 summary, this may have issues.

 fs/buffer.c                 | 58 +++++++++++++++++++++++++++++++++++++++++++++
 include/linux/buffer_head.h |  3 +++
 2 files changed, 61 insertions(+)

diff --git a/fs/buffer.c b/fs/buffer.c
index 4d74335..017e822 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -2444,6 +2444,64 @@ int block_page_mkwrite(struct vm_area_struct *vma, struct vm_fault *vmf,
 }
 EXPORT_SYMBOL(block_page_mkwrite);
 
+long block_willwrite(struct vm_area_struct *vma,
+		     unsigned long start, unsigned long end,
+		     get_block_t get_block)
+{
+	long ret = 0;
+	loff_t size;
+	struct inode *inode = file_inode(vma->vm_file);
+	struct super_block *sb = inode->i_sb;
+
+	for (; start < end; start += PAGE_CACHE_SIZE) {
+		struct page *p;
+		int size_in_page;
+		int tmp = get_user_pages_fast(start, 1, 0, &p);
+		if (tmp == 0)
+			tmp = -EFAULT;
+		if (tmp != 1) {
+			ret = tmp;
+			break;
+		}
+
+		sb_start_pagefault(sb);
+
+		lock_page(p);
+		size = i_size_read(inode);
+		if (WARN_ON_ONCE(p->mapping != inode->i_mapping) ||
+		    (page_offset(p) > size)) {
+			ret = -EFAULT;	/* A real write would have failed. */
+			goto pagedone_unlock;
+		}
+
+		/* page is partially inside EOF? */
+		if ((((loff_t)p->index + 1) << PAGE_CACHE_SHIFT) > size)
+			size_in_page = size & ~PAGE_CACHE_MASK;
+		else
+			size_in_page = PAGE_CACHE_SIZE;
+
+		tmp = __block_write_begin(p, 0, size_in_page, get_block);
+		if (tmp) {
+			ret = tmp;
+			goto pagedone_unlock;
+		}
+
+		ret += PAGE_CACHE_SIZE;
+
+		/* No need to commit -- we're not writing anything yet. */
+
+	pagedone_unlock:
+		unlock_page(p);
+		sb_end_pagefault(sb);
+		page_cache_release(p);	/* drop the get_user_pages_fast() ref */
+		if (ret < 0)
+			break;
+	}
+
+	return ret;
+}
+EXPORT_SYMBOL(block_willwrite);
+
 /*
  * nobh_write_begin()'s prereads are special: the buffer_heads are freed
  * immediately, while under the page lock.  So it needs a special end_io
diff --git a/include/linux/buffer_head.h b/include/linux/buffer_head.h
index 91fa9a9..c84639d 100644
--- a/include/linux/buffer_head.h
+++ b/include/linux/buffer_head.h
@@ -230,6 +230,9 @@ int __block_page_mkwrite(struct vm_area_struct *vma, struct vm_fault *vmf,
 			 get_block_t get_block);
 int block_page_mkwrite(struct vm_area_struct *vma, struct vm_fault *vmf,
 		       get_block_t get_block);
+long block_willwrite(struct vm_area_struct *vma,
+		     unsigned long start, unsigned long end,
+		     get_block_t get_block);
 /* Convert errno to return value from ->page_mkwrite() call */
 static inline int block_page_mkwrite_return(int err)
 {