From patchwork Wed Apr 5 00:08:21 2023
X-Patchwork-Submitter: Thadeu Lima de Souza Cascardo
X-Patchwork-Id: 1765246
From: Thadeu Lima de Souza Cascardo
To: kernel-team@lists.ubuntu.com
Subject: [UBUNTU OEM-5.17 4/5] io_uring: make poll refs more robust
Date: Tue, 4 Apr 2023 21:08:21 -0300
Message-Id: <20230405000827.2250965-5-cascardo@canonical.com>
In-Reply-To: <20230405000827.2250965-1-cascardo@canonical.com>
References: <20230405000827.2250965-1-cascardo@canonical.com>

From: Pavel Begunkov

[ upstream commit a26a35e9019fd70bf3cf647dcfdae87abc7bacea ]

poll_refs serves two functions: the first is ownership over the request.
The second is notifying io_poll_check_events() that there was an event,
but a wake-up couldn't grab the ownership, so io_poll_check_events()
should retry.
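[ Editor's note: below is a minimal userspace model of that dual role, not
  part of the patch. C11 atomics stand in for the kernel's atomic_t, and
  all names in it are hypothetical, chosen only for illustration. ]

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define REF_MASK ((1u << 31) - 1)	/* models GENMASK(30, 0) */

static atomic_uint poll_refs;		/* models req->poll_refs */

/*
 * Whoever bumps the count from zero owns the request; any later bump only
 * records that something happened while the owner was busy.
 */
static bool try_get_ownership(void)
{
	return (atomic_fetch_add(&poll_refs, 1) & REF_MASK) == 0;
}

int main(void)
{
	printf("first caller owns:  %d\n", try_get_ownership());	/* 1 */
	printf("second caller owns: %d\n", try_get_ownership());	/* 0 */
	return 0;
}

[ The weakness this patch addresses follows from that model: every failed
  ownership attempt still increments the counter, so with enough wake-ups
  it can overflow into the flag bits. ]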
We want to make poll_refs more robust against overflows. Instead of
always incrementing it, which covers both purposes with one atomic, check
whether poll_refs is already elevated enough and, if so, set a retry flag
without attempting to grab ownership.

The gap between the bias check and the following atomics may seem racy,
but we don't need it to be strict. Moreover, there can be at most four
parallel updates: by the first and the second poll entries,
__io_arm_poll_handler() and cancellation. Of those four, only poll
wake-ups may be executed multiple times, but they're protected by a
spinlock.

Cc: stable@vger.kernel.org
Reported-by: Lin Ma
Fixes: aa43477b04025 ("io_uring: poll rework")
Signed-off-by: Pavel Begunkov
Link: https://lore.kernel.org/r/c762bc31f8683b3270f3587691348a7119ef9c9d.1668963050.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe
Signed-off-by: Greg Kroah-Hartman
(cherry picked from commit 4b702b7d11ce1b9d26fc6d7c5a7ef4ac1d455048 linux-5.15.y)
CVE-2023-0468
Signed-off-by: Thadeu Lima de Souza Cascardo
---
 fs/io_uring.c | 36 +++++++++++++++++++++++++++++++++++-
 1 file changed, 35 insertions(+), 1 deletion(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 001328aa4f5f..14a12f8ccd97 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -5403,7 +5403,29 @@ struct io_poll_table {
 };
 
 #define IO_POLL_CANCEL_FLAG	BIT(31)
-#define IO_POLL_REF_MASK	GENMASK(30, 0)
+#define IO_POLL_RETRY_FLAG	BIT(30)
+#define IO_POLL_REF_MASK	GENMASK(29, 0)
+
+/*
+ * We usually have 1-2 refs taken, 128 is more than enough and we want to
+ * maximise the margin between this amount and the moment when it overflows.
+ */
+#define IO_POLL_REF_BIAS	128
+
+static bool io_poll_get_ownership_slowpath(struct io_kiocb *req)
+{
+	int v;
+
+	/*
+	 * poll_refs are already elevated and we don't have much hope for
+	 * grabbing the ownership. Instead of incrementing set a retry flag
+	 * to notify the loop that there might have been some change.
+	 */
+	v = atomic_fetch_or(IO_POLL_RETRY_FLAG, &req->poll_refs);
+	if (v & IO_POLL_REF_MASK)
+		return false;
+	return !(atomic_fetch_inc(&req->poll_refs) & IO_POLL_REF_MASK);
+}
 
 /*
  * If refs part of ->poll_refs (see IO_POLL_REF_MASK) is 0, it's free. We can
@@ -5413,6 +5435,8 @@ struct io_poll_table {
  */
 static inline bool io_poll_get_ownership(struct io_kiocb *req)
 {
+	if (unlikely(atomic_read(&req->poll_refs) >= IO_POLL_REF_BIAS))
+		return io_poll_get_ownership_slowpath(req);
 	return !(atomic_fetch_inc(&req->poll_refs) & IO_POLL_REF_MASK);
 }
 
@@ -5533,6 +5557,16 @@ static int io_poll_check_events(struct io_kiocb *req, bool locked)
 		 */
 		if ((v & IO_POLL_REF_MASK) != 1)
 			req->result = 0;
+		if (v & IO_POLL_RETRY_FLAG) {
+			req->result = 0;
+			/*
+			 * We won't find new events that came in between
+			 * vfs_poll and the ref put unless we clear the
+			 * flag in advance.
+			 */
+			atomic_andnot(IO_POLL_RETRY_FLAG, &req->poll_refs);
+			v &= ~IO_POLL_RETRY_FLAG;
+		}
 		if (!req->result) {
 			struct poll_table_struct pt = { ._key = poll->events };
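[ Editor's note: below is a compilable userspace sketch of the scheme the
  hunks above add, again with C11 atomics standing in for the kernel's
  atomic_t; all names are hypothetical. It shows the point of the bias:
  once the count reaches IO_POLL_REF_BIAS, ownership attempts set
  IO_POLL_RETRY_FLAG with a single fetch_or instead of incrementing, so
  poll_refs can never creep up into the flag bits. ]

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define RETRY_FLAG	(1u << 30)		/* models IO_POLL_RETRY_FLAG */
#define REF_MASK	((1u << 30) - 1)	/* models GENMASK(29, 0) */
#define REF_BIAS	128u			/* models IO_POLL_REF_BIAS */

static atomic_uint refs;			/* models req->poll_refs */

static bool get_ownership_slowpath(void)
{
	/* Set the retry flag instead of bumping an already-elevated count. */
	unsigned int v = atomic_fetch_or(&refs, RETRY_FLAG);

	if (v & REF_MASK)
		return false;
	return !(atomic_fetch_add(&refs, 1) & REF_MASK);
}

static bool get_ownership(void)
{
	/* The read/increment gap is racy but, as the message above argues,
	 * harmless: at most four updaters race here. */
	if (atomic_load(&refs) >= REF_BIAS)
		return get_ownership_slowpath();
	return !(atomic_fetch_add(&refs, 1) & REF_MASK);
}

int main(void)
{
	atomic_store(&refs, REF_BIAS);		/* pretend many pending wake-ups */
	printf("owner: %d\n", get_ownership());	/* 0: slowpath, flag set instead */
	printf("retry flag: %d\n", !!(atomic_load(&refs) & RETRY_FLAG));	/* 1 */
	/* The event-checking loop clears the flag before re-polling, as in
	 * the io_poll_check_events() hunk above, so events that arrived in
	 * between aren't lost. */
	atomic_fetch_and(&refs, ~RETRY_FLAG);
	return 0;
}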