From patchwork Fri Apr 20 12:50:15 2018
X-Patchwork-Submitter: Kleber Sacilotto de Souza
X-Patchwork-Id: 901892
From: Kleber Sacilotto de Souza
To: kernel-team@lists.ubuntu.com
Subject: [SRU][Trusty][PATCH 1/1] mm/madvise.c: fix madvise() infinite loop under special circumstances
Date: Fri, 20 Apr 2018 14:50:15 +0200
Message-Id: <20180420125016.10300-2-kleber.souza@canonical.com>
X-Mailer: git-send-email 2.14.1
In-Reply-To: <20180420125016.10300-1-kleber.souza@canonical.com>
References: <20180420125016.10300-1-kleber.souza@canonical.com>
List-Id: Kernel team discussions
From: chenjie

CVE-2017-18208

MADVISE_WILLNEED has always been a noop for DAX (formerly XIP) mappings.
Unfortunately madvise_willneed() doesn't communicate this information
properly to the generic madvise syscall implementation.  The calling
convention is quite subtle there.  madvise_vma() is supposed to either
return an error or update &prev otherwise the main loop will never
advance to the next vma and it will keep looping for ever without a way
to get out of the kernel.

It seems this has been broken since introduction.  Nobody has noticed
because nobody seems to be using MADVISE_WILLNEED on these DAX mappings.

[mhocko@suse.com: rewrite changelog]
Link: http://lkml.kernel.org/r/20171127115318.911-1-guoxuenan@huawei.com
Fixes: fe77ba6f4f97 ("[PATCH] xip: madvice/fadvice: execute in place")
Signed-off-by: chenjie
Signed-off-by: guoxuenan
Acked-by: Michal Hocko
Cc: Minchan Kim
Cc: zhangyi (F)
Cc: Miao Xie
Cc: Mike Rapoport
Cc: Shaohua Li
Cc: Andrea Arcangeli
Cc: Mel Gorman
Cc: Kirill A. Shutemov
Cc: David Rientjes
Cc: Anshuman Khandual
Cc: Rik van Riel
Cc: Carsten Otte
Cc: Dan Williams
Cc:
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
(backported from commit 6ea8d958a2c95a1d514015d4e29ba21a8c0a1a91)
Signed-off-by: Kleber Sacilotto de Souza
Acked-by: Colin Ian King
---
 mm/madvise.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/mm/madvise.c b/mm/madvise.c
index 539eeb96b323..08f7501b57d0 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -221,9 +221,9 @@ static long madvise_willneed(struct vm_area_struct *vma,
 {
 	struct file *file = vma->vm_file;
 
+	*prev = vma;
 #ifdef CONFIG_SWAP
 	if (!file || mapping_cap_swap_backed(file->f_mapping)) {
-		*prev = vma;
 		if (!file)
 			force_swapin_readahead(vma, start, end);
 		else
@@ -241,7 +241,6 @@ static long madvise_willneed(struct vm_area_struct *vma,
 		return 0;
 	}
 
-	*prev = vma;
 	start = ((start - vma->vm_start) >> PAGE_SHIFT) + vma->vm_pgoff;
 	if (end > vma->vm_end)
 		end = vma->vm_end;
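
For reference, a minimal user-space sketch of the calling convention described
in the changelog (illustrative only -- the struct vma model, walk() and the
handler_* names are invented for demonstration, not the kernel's actual code):

/*
 * Illustrative user-space model only, not kernel code.  It mimics the
 * contract described above: the madvise() main loop picks the next VMA
 * via the prev pointer, so a handler that returns 0 without doing
 * "*prev = vma" never lets the walk advance, while a handler that
 * updates *prev unconditionally does.
 */
#include <stdio.h>

struct vma {
	unsigned long vm_start, vm_end;
	struct vma *vm_next;
};

/* Models the pre-patch DAX/XIP path: success, but *prev never updated. */
static int handler_broken(struct vma *vma, struct vma **prev)
{
	(void)vma;
	(void)prev;
	return 0;
}

/* Models the patched handler: *prev is set unconditionally. */
static int handler_fixed(struct vma *vma, struct vma **prev)
{
	*prev = vma;
	return 0;
}

static void walk(struct vma *vma, unsigned long end,
		 int (*handler)(struct vma *, struct vma **))
{
	struct vma *prev = NULL;
	unsigned long start = vma->vm_start;
	int iterations = 0;

	while (vma && start < end) {
		if (handler(vma, &prev))
			return;		/* an error would end the loop */
		start = vma->vm_end;
		/* A stale prev re-selects the same VMA, as in the real loop. */
		vma = prev ? prev->vm_next : vma;
		if (++iterations > 5) {	/* stand-in for "stuck in the kernel" */
			printf("stuck: no progress after %d iterations\n",
			       iterations);
			return;
		}
	}
	printf("done after %d iterations\n", iterations);
}

int main(void)
{
	struct vma b = { 0x2000, 0x3000, NULL };
	struct vma a = { 0x1000, 0x2000, &b };

	walk(&a, 0x3000, handler_broken);	/* never gets past 'a' */
	walk(&a, 0x3000, handler_fixed);	/* visits 'a', then 'b' */
	return 0;
}

Compiled with a plain C compiler, the first walk() call spins until the guard
trips, while the second finishes after visiting both mappings -- the same
difference the one-line "*prev = vma;" hoist makes in madvise_willneed().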