From patchwork Wed Feb 13 20:41:57 2019
X-Patchwork-Submitter: Jann Horn
X-Patchwork-Id: 1041573
X-Patchwork-Delegate: davem@davemloft.net
Date: Wed, 13 Feb 2019 21:41:57 +0100
Message-Id: <20190213204157.12570-1-jannh@google.com>
Subject: [PATCH] mm: page_alloc: fix ref bias in page_frag_alloc() for 1-byte allocs
From: Jann Horn
To: linux-mm@kvack.org, Andrew Morton, jannh@google.com
Cc: linux-kernel@vger.kernel.org, Michal Hocko, Vlastimil Babka,
    Pavel Tatashin, Oscar Salvador, Mel Gorman, Aaron Lu,
    netdev@vger.kernel.org, Alexander Duyck
X-Mailing-List: netdev@vger.kernel.org

The basic idea behind ->pagecnt_bias is: if we pre-allocate the maximum
number of references that we might need to create in the fastpath later,
the bump-allocation fastpath only has to modify the non-atomic bias value
that tracks the number of extra references we hold, instead of the atomic
refcount. The maximum number of allocations we can serve (under the
assumption that no allocation is made with size 0) is nc->size, so that's
the bias used.
However, even when all memory in the allocation has been given away, a
reference to the page is still held; and in the `offset < 0` slowpath, the
page may be reused if everyone else has dropped their references. This
means that the necessary number of references is actually `nc->size+1`.

Luckily, from a quick grep, it looks like the only path that can call
page_frag_alloc(fragsz=1) is TAP with the IFF_NAPI_FRAGS flag, which
requires CAP_NET_ADMIN in the init namespace and is only intended to be
used for kernel testing and fuzzing.

To test for this issue, put a `WARN_ON(page_ref_count(page) == 0)` in the
`offset < 0` path, below the virt_to_page() call, and then repeatedly call
writev() on a TAP device with IFF_TAP|IFF_NO_PI|IFF_NAPI_FRAGS|IFF_NAPI,
with a vector consisting of 15 elements containing 1 byte each.

Cc: stable@vger.kernel.org
Signed-off-by: Jann Horn
---
 mm/page_alloc.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 35fdde041f5c..46285d28e43b 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4675,11 +4675,11 @@ void *page_frag_alloc(struct page_frag_cache *nc,
 		/* Even if we own the page, we do not use atomic_set().
 		 * This would break get_page_unless_zero() users.
 		 */
-		page_ref_add(page, size - 1);
+		page_ref_add(page, size);
 
 		/* reset page count bias and offset to start of new frag */
 		nc->pfmemalloc = page_is_pfmemalloc(page);
-		nc->pagecnt_bias = size;
+		nc->pagecnt_bias = size + 1;
 		nc->offset = size;
 	}
 
@@ -4695,10 +4695,10 @@ void *page_frag_alloc(struct page_frag_cache *nc,
 		size = nc->size;
 #endif
 		/* OK, page count is 0, we can safely set it */
-		set_page_count(page, size);
+		set_page_count(page, size + 1);
 
 		/* reset page count bias and offset to start of new frag */
-		nc->pagecnt_bias = size;
+		nc->pagecnt_bias = size + 1;
 		offset = size - fragsz;
 	}
 