From patchwork Fri Aug 12 13:36:54 2016
X-Patchwork-Submitter: Stefan Bader
X-Patchwork-Id: 658643
Subject: Re: Trusty CVE-2016-5696
To: Tim Gardner, kernel-team@lists.ubuntu.com
From: Stefan Bader
Message-ID: <945ecdd2-4d9a-c520-48cf-895f6a4b551d@canonical.com>
Date: Fri, 12 Aug 2016 15:36:54 +0200
In-Reply-To: <1471008654-28755-1-git-send-email-tim.gardner@canonical.com>
List-Id: Kernel team discussions

On 12.08.2016 15:30, Tim Gardner wrote:
> In fact, how about this 4-patch set.
> The first 3 are clean cherry-picks, but required to provide the inline
> macros upon which the real fix depends.
>
> [PATCH 1/4] kernel: Provide READ_ONCE and ASSIGN_ONCE
> [PATCH 2/4] kernel: Change ASSIGN_ONCE(val, x) to WRITE_ONCE(x, val)
> [PATCH 3/4] random32: add prandom_u32_max and convert open coded
> [PATCH 4/4] tcp: make challenge acks less predictable

Actually, with the backport from Ben Hutchings that Seth mentioned...

From ffdc560eac34cd5968c2d73ca1d724f4818d890b Mon Sep 17 00:00:00 2001
From: Eric Dumazet
Date: Sun, 10 Jul 2016 10:04:02 +0200
Subject: [PATCH] tcp: make challenge acks less predictable

commit 75ff39ccc1bd5d3c455b6822ab09e533c551f758 upstream.

Yue Cao claims that current host rate limiting of challenge ACKS
(RFC 5961) could leak enough information to allow a patient attacker
to hijack TCP sessions. He will soon provide details in an academic
paper.

This patch increases the default limit from 100 to 1000, and adds
some randomization so that the attacker can no longer hijack
sessions without spending a considerable amount of probes.

Based on initial analysis and patch from Linus.

Note that we also have per socket rate limiting, so it is tempting
to remove the host limit in the future.

v2: randomize the count of challenge acks per second, not the period.

Fixes: 282f23c6ee34 ("tcp: implement RFC 5961 3.2")
Reported-by: Yue Cao
Signed-off-by: Eric Dumazet
Suggested-by: Linus Torvalds
Cc: Yuchung Cheng
Cc: Neal Cardwell
Acked-by: Neal Cardwell
Acked-by: Yuchung Cheng
Signed-off-by: David S. Miller
[bwh: Backported to 3.2:
 - Adjust context
 - Use ACCESS_ONCE() instead of {READ,WRITE}_ONCE()]
Signed-off-by: Ben Hutchings

CVE-2016-5696

[smb: Backported from f516fa21b9275c7fef99a1075f39394bcd677dfc bwh
 - Replaced prandom_u32_max by later implementation]
Signed-off-by: Stefan Bader
---
 net/ipv4/tcp_input.c | 17 ++++++++++++-----
 1 file changed, 12 insertions(+), 5 deletions(-)

diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index 2cc1313..5380c00 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -86,7 +86,7 @@ int sysctl_tcp_adv_win_scale __read_mostly = 1;
 EXPORT_SYMBOL(sysctl_tcp_adv_win_scale);
 
 /* rfc5961 challenge ack rate limiting */
-int sysctl_tcp_challenge_ack_limit = 100;
+int sysctl_tcp_challenge_ack_limit = 1000;
 
 int sysctl_tcp_stdurg __read_mostly;
 int sysctl_tcp_rfc1337 __read_mostly;
@@ -3288,13 +3288,20 @@ static void tcp_send_challenge_ack(struct sock *sk)
 	/* unprotected vars, we dont care of overwrites */
 	static u32 challenge_timestamp;
 	static unsigned int challenge_count;
-	u32 now = jiffies / HZ;
+	u32 count, now = jiffies / HZ;
 
 	if (now != challenge_timestamp) {
+		u32 half = (sysctl_tcp_challenge_ack_limit + 1) >> 1;
+
 		challenge_timestamp = now;
-		challenge_count = 0;
-	}
-	if (++challenge_count <= sysctl_tcp_challenge_ack_limit) {
+		ACCESS_ONCE(challenge_count) =
+			half + (u32)(
+			((u64) prandom_u32() * sysctl_tcp_challenge_ack_limit)
+			>> 32);
+	}
+	count = ACCESS_ONCE(challenge_count);
+	if (count > 0) {
+		ACCESS_ONCE(challenge_count) = count - 1;
 		NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPCHALLENGEACK);
 		tcp_send_ack(sk);
 	}
-- 
1.9.1