From patchwork Fri Apr 20 13:54:00 2018
X-Patchwork-Submitter: Kleber Sacilotto de Souza
X-Patchwork-Id: 901939
From: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
To: kernel-team@lists.ubuntu.com
Subject: [SRU][Trusty][CVE-2017-18221][PATCH] mlock: fix mlock count can not decrease in race condition
Date: Fri, 20 Apr 2018 15:54:00 +0200
Message-Id: <20180420135400.16358-1-kleber.souza@canonical.com>

From: Yisheng Xie <xieyisheng1@huawei.com>

CVE-2017-18221

Kefeng reported that when running the following test, the mlock count in
meminfo will increase permanently:

[1] testcase
linux:~ # cat test_mlockall.sh
grep Mlocked /proc/meminfo
for j in `seq 0 10`
do
	for i in `seq 4 15`
	do
		./p_mlockall >> log &
	done
	sleep 0.2
done
# wait some time to let the mlock counter decrease; 5s may not be enough
sleep 5
grep Mlocked /proc/meminfo

linux:~ # cat p_mlockall.c
#include <sys/mman.h>
#include <stdlib.h>
#include <stdio.h>

#define SPACE_LEN	4096

int main(int argc, char **argv)
{
	int ret;
	void *adr = malloc(SPACE_LEN);

	if (!adr)
		return -1;

	ret = mlockall(MCL_CURRENT | MCL_FUTURE);
	printf("mlockall ret = %d\n", ret);

	ret = munlockall();
	printf("munlockall ret = %d\n", ret);

	free(adr);
	return 0;
}
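Each p_mlockall run briefly bumps the counter, and the two greps bracket the
runs: a second reading that never returns to the baseline demonstrates the
leak. For completeness, the same observation the script makes with grep can
be made from C; the helper below is hypothetical (not part of this
submission) and only reads the Mlocked: line of /proc/meminfo:

/* read_mlocked.c - hypothetical helper, not part of this patch:
 * prints the "Mlocked:" line from /proc/meminfo, i.e. the value
 * the testcase greps before and after the runs.
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
	char line[256];
	FILE *f = fopen("/proc/meminfo", "r");

	if (!f) {
		perror("/proc/meminfo");
		return 1;
	}
	while (fgets(line, sizeof(line), f)) {
		if (strncmp(line, "Mlocked:", 8) == 0)
			fputs(line, stdout);
	}
	fclose(f);
	return 0;
}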
In __munlock_pagevec() we should decrement NR_MLOCK for each page where
we clear the PageMlocked flag. Commit 1ebb7cc6a583 ("mm: munlock: batch
NR_MLOCK zone state updates") introduced a bug where we don't decrement
NR_MLOCK for pages where we clear the flag but fail to isolate them from
the LRU list (e.g. when the pages are on some other CPU's per-cpu
pagevec). Since PageMlocked stays cleared, the NR_MLOCK accounting gets
permanently disrupted by this.

Fix it by counting the number of pages whose PageMlocked flag is
cleared.

Fixes: 1ebb7cc6a583 ("mm: munlock: batch NR_MLOCK zone state updates")
Link: http://lkml.kernel.org/r/1495678405-54569-1-git-send-email-xieyisheng1@huawei.com
Signed-off-by: Yisheng Xie <xieyisheng1@huawei.com>
Reported-by: Kefeng Wang
Tested-by: Kefeng Wang
Cc: Vlastimil Babka
Cc: Joern Engel
Cc: Mel Gorman
Cc: Michel Lespinasse
Cc: Hugh Dickins
Cc: Rik van Riel
Cc: Johannes Weiner
Cc: Michal Hocko
Cc: Xishi Qiu
Cc: zhongjiang
Cc: Hanjun Guo
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
(backported from commit 70feee0e1ef331b22cc51f383d532a0d043fbdcc)
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
Acked-by: Khalid Elmously
Acked-by: Colin Ian King
---
 mm/mlock.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/mlock.c b/mm/mlock.c
index 1b12dfad0794..a3569727baab 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -300,7 +300,7 @@ static void __munlock_pagevec(struct pagevec *pvec, struct zone *zone)
 {
 	int i;
 	int nr = pagevec_count(pvec);
-	int delta_munlocked;
+	int delta_munlocked = -nr;
 	struct pagevec pvec_putback;
 	int pgrescued = 0;
 
@@ -330,6 +330,7 @@
 			}
 
 		} else {
+			delta_munlocked++;
 skip_munlock:
 			/*
 			 * We won't be munlocking this page in the next phase
@@ -341,7 +342,6 @@ skip_munlock:
 			pvec->pages[i] = NULL;
 		}
 	}
-	delta_munlocked = -nr + pagevec_count(&pvec_putback);
 	__mod_zone_page_state(zone, NR_MLOCK, delta_munlocked);
 	spin_unlock_irq(&zone->lru_lock);
 
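For reviewers who want to check the arithmetic outside the kernel, here is a
small userspace model of the two accounting schemes. It is a sketch built
only from the changelog above: struct page, old_delta and new_delta are
stand-ins rather than kernel code, and "not on the LRU" models a page stuck
on another CPU's pagevec.

/* nr_mlock_model.c - hypothetical model of the NR_MLOCK delta computation */
#include <stdio.h>
#include <stdbool.h>

struct page {
	bool mlocked;	/* stands in for PageMlocked */
	bool on_lru;	/* stands in for PageLRU */
};

/* Old scheme: derive the delta from the putback count afterwards. */
static int old_delta(struct page *pages, int nr)
{
	int putback = 0;
	int i;

	for (i = 0; i < nr; i++) {
		if (pages[i].mlocked) {
			pages[i].mlocked = false;	/* TestClearPageMlocked */
			if (pages[i].on_lru)
				continue;		/* isolated for phase 2 */
			/* isolation failed: the flag is already cleared,
			 * but the page still goes to the putback list... */
		}
		putback++;
	}
	return -nr + putback;	/* ...so its decrement is cancelled here */
}

/* Fixed scheme: count exactly the pages whose flag was not cleared. */
static int new_delta(struct page *pages, int nr)
{
	int delta = -nr;
	int i;

	for (i = 0; i < nr; i++) {
		if (pages[i].mlocked)
			pages[i].mlocked = false;	/* decrement stays */
		else
			delta++;			/* nothing was cleared */
	}
	return delta;
}

int main(void)
{
	/* two mlocked pages; the second is not on the LRU */
	struct page a[2] = { { true, true }, { true, false } };
	struct page b[2] = { { true, true }, { true, false } };

	/* both pages lose PageMlocked, so NR_MLOCK must drop by 2 */
	printf("old delta: %d (leaks one page)\n", old_delta(a, 2));	/* -1 */
	printf("new delta: %d (correct)\n", new_delta(b, 2));		/* -2 */
	return 0;
}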