From patchwork Fri Jul 1 02:16:20 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Cengiz Can
X-Patchwork-Id: 1650965
From: Cengiz Can
To: kernel-team@lists.ubuntu.com
Subject: [SRU][Focal][PATCH 1/1] SUNRPC: Ensure we flush any closed sockets before xs_xprt_free()
Date: Fri, 1 Jul 2022 05:16:20 +0300
Message-Id: <20220701021620.232797-2-cengiz.can@canonical.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20220701021620.232797-1-cengiz.can@canonical.com>
References: <20220701021620.232797-1-cengiz.can@canonical.com>
MIME-Version: 1.0
List-Id: Kernel team discussions

From: Meena Shanmugam

From: Trond Myklebust

commit f00432063db1a0db484e85193eccc6845435b80e upstream.

We must ensure that all sockets are closed before we call xprt_free()
and release the reference to the net namespace. The problem is that
calling fput() will defer closing the socket until delayed_fput() gets
called.
Let's fix the situation by allowing rpciod and the transport teardown
code (which runs on the system wq) to call __fput_sync(), and directly
close the socket.
Reported-by: Felix Fu
Acked-by: Al Viro
Fixes: a73881c96d73 ("SUNRPC: Fix an Oops in udp_poll()")
Cc: stable@vger.kernel.org # 5.1.x: 3be232f11a3c: SUNRPC: Prevent immediate close+reconnect
Cc: stable@vger.kernel.org # 5.1.x: 89f42494f92f: SUNRPC: Don't call connect() more than once on a TCP socket
Cc: stable@vger.kernel.org # 5.1.x
Signed-off-by: Trond Myklebust
[meenashanmugam: Fix merge conflict in xprt_connect]
Signed-off-by: Meena Shanmugam
Signed-off-by: Greg Kroah-Hartman
CVE-2022-28893

(backported from commit 2f8f6c393b11b5da059b1fc10a69fc2f2b6c446a linux-5.4.y)
[cengizcan: we already have __fput_sync exported as GPL only so do not
 EXPORT_SYMBOL it again]
Signed-off-by: Cengiz Can
---
 net/sunrpc/xprt.c     |  5 +----
 net/sunrpc/xprtsock.c | 16 +++++++++++++---
 2 files changed, 14 insertions(+), 7 deletions(-)

diff --git a/net/sunrpc/xprt.c b/net/sunrpc/xprt.c
index 8ac579778e487..6454a122416e6 100644
--- a/net/sunrpc/xprt.c
+++ b/net/sunrpc/xprt.c
@@ -863,10 +863,7 @@ void xprt_connect(struct rpc_task *task)
 	if (!xprt_lock_write(xprt, task))
 		return;
 
-	if (test_and_clear_bit(XPRT_CLOSE_WAIT, &xprt->state))
-		xprt->ops->close(xprt);
-
-	if (!xprt_connected(xprt)) {
+	if (!xprt_connected(xprt) && !test_bit(XPRT_CLOSE_WAIT, &xprt->state)) {
 		task->tk_rqstp->rq_connect_cookie = xprt->connect_cookie;
 		rpc_sleep_on_timeout(&xprt->pending, task, NULL,
 				xprt_request_timeout(task->tk_rqstp));
diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
index 480e879e74ae5..af4b12ff1d6f6 100644
--- a/net/sunrpc/xprtsock.c
+++ b/net/sunrpc/xprtsock.c
@@ -989,7 +989,7 @@ static int xs_local_send_request(struct rpc_rqst *req)
 
 	/* Close the stream if the previous transmission was incomplete */
 	if (xs_send_request_was_aborted(transport, req)) {
-		xs_close(xprt);
+		xprt_force_disconnect(xprt);
 		return -ENOTCONN;
 	}
 
@@ -1027,7 +1027,7 @@ static int xs_local_send_request(struct rpc_rqst *req)
 			-status);
 		/* fall through */
 	case -EPIPE:
-		xs_close(xprt);
+		xprt_force_disconnect(xprt);
 		status = -ENOTCONN;
 	}
 
@@ -1303,6 +1303,16 @@ static void xs_reset_transport(struct sock_xprt *transport)
 
 	if (sk == NULL)
 		return;
+	/*
+	 * Make sure we're calling this in a context from which it is safe
+	 * to call __fput_sync(). In practice that means rpciod and the
+	 * system workqueue.
+	 */
+	if (!(current->flags & PF_WQ_WORKER)) {
+		WARN_ON_ONCE(1);
+		set_bit(XPRT_CLOSE_WAIT, &xprt->state);
+		return;
+	}
 
 	if (atomic_read(&transport->xprt.swapper))
 		sk_clear_memalloc(sk);
@@ -1326,7 +1336,7 @@ static void xs_reset_transport(struct sock_xprt *transport)
 	mutex_unlock(&transport->recv_mutex);
 
 	trace_rpc_socket_close(xprt, sock);
-	fput(filp);
+	__fput_sync(filp);
 
 	xprt_disconnect_done(xprt);
 }
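
For readers unfamiliar with the two file-release paths involved, the following
is a minimal illustrative sketch, not part of the patch: the helper name
example_release_file() is hypothetical, while fput(), __fput_sync() and
PF_WQ_WORKER are the kernel symbols used by the hunks above. It contrasts the
deferred close described in the commit message with the synchronous close that
the new xs_reset_transport() guard makes safe.

#include <linux/file.h>
#include <linux/sched.h>

/* Illustrative only: contrast the deferred and the synchronous release path. */
static void example_release_file(struct file *filp)
{
	if (current->flags & PF_WQ_WORKER)
		/* On rpciod or the system wq: if this is the last reference,
		 * the file (and thus the socket) is released before we return,
		 * so a subsequent xprt_free() cannot see a still-open socket.
		 */
		__fput_sync(filp);
	else
		/* Anywhere else: the final close is deferred to delayed_fput(),
		 * which is exactly the behaviour the patch avoids on teardown.
		 */
		fput(filp);
}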