From patchwork Fri Jul 1 02:16:23 2022
X-Patchwork-Submitter: Cengiz Can
X-Patchwork-Id: 1650967
From: Cengiz Can
To: kernel-team@lists.ubuntu.com
Subject: [SRU][OEM-5.17][PATCH 1/1] SUNRPC: Ensure we flush any closed sockets before xs_xprt_free()
Date: Fri, 1 Jul 2022 05:16:23 +0300
Message-Id: <20220701021620.232797-4-cengiz.can@canonical.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20220701021620.232797-1-cengiz.can@canonical.com>
References: <20220701021620.232797-1-cengiz.can@canonical.com>

From: Trond Myklebust

commit f00432063db1a0db484e85193eccc6845435b80e upstream.

We must ensure that all sockets are closed before we call xprt_free()
and release the reference to the net namespace. The problem is that
calling fput() will defer closing the socket until delayed_fput() gets
called.

Let's fix the situation by allowing rpciod and the transport teardown
code (which runs on the system wq) to call __fput_sync(), and directly
close the socket.
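[Editor's note: a minimal sketch of the ordering problem described
above, assuming teardown drops the net namespace reference via
xprt_free(). The helper names teardown_racy() and teardown_safe() are
hypothetical; fput(), __fput_sync() and xprt_free() are the real
kernel interfaces involved.]

#include <linux/file.h>		/* fput(), __fput_sync() */
#include <linux/sunrpc/xprt.h>	/* struct rpc_xprt, xprt_free() */

/* Hypothetical helper: the shape of the bug before this patch. */
static void teardown_racy(struct file *filp, struct rpc_xprt *xprt)
{
	fput(filp);		/* close deferred until delayed_fput() runs */
	xprt_free(xprt);	/* may drop the last net namespace reference
				 * while the socket underneath is still open */
}

/* Hypothetical helper: the fixed shape. Only safe from rpciod or the
 * system workqueue, which is what the patch enforces below. */
static void teardown_safe(struct file *filp, struct rpc_xprt *xprt)
{
	__fput_sync(filp);	/* socket is fully closed on return */
	xprt_free(xprt);	/* net namespace reference is dropped only
				 * after the socket is gone */
}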
Reported-by: Felix Fu
Acked-by: Al Viro
Fixes: a73881c96d73 ("SUNRPC: Fix an Oops in udp_poll()")
Cc: stable@vger.kernel.org # 5.1.x: 3be232f11a3c: SUNRPC: Prevent immediate close+reconnect
Cc: stable@vger.kernel.org # 5.1.x: 89f42494f92f: SUNRPC: Don't call connect() more than once on a TCP socket
Cc: stable@vger.kernel.org # 5.1.x
Signed-off-by: Trond Myklebust
Signed-off-by: Greg Kroah-Hartman
CVE-2022-28893
(cherry picked from commit d21287d8a4589dd8513038f887ece980fbc399cf linux-5.17.y)
Signed-off-by: Cengiz Can
---
 fs/file_table.c               |  1 +
 include/trace/events/sunrpc.h |  1 -
 net/sunrpc/xprt.c             |  7 +------
 net/sunrpc/xprtsock.c         | 16 +++++++++++++---
 4 files changed, 15 insertions(+), 10 deletions(-)

diff --git a/fs/file_table.c b/fs/file_table.c
index 7d2e692b66a94..ada8fe814db97 100644
--- a/fs/file_table.c
+++ b/fs/file_table.c
@@ -412,6 +412,7 @@ void __fput_sync(struct file *file)
 }
 
 EXPORT_SYMBOL(fput);
+EXPORT_SYMBOL(__fput_sync);
 
 void __init files_init(void)
 {
diff --git a/include/trace/events/sunrpc.h b/include/trace/events/sunrpc.h
index 29982d60b68ab..5be3faf88c1a1 100644
--- a/include/trace/events/sunrpc.h
+++ b/include/trace/events/sunrpc.h
@@ -1005,7 +1005,6 @@ DEFINE_RPC_XPRT_LIFETIME_EVENT(connect);
 DEFINE_RPC_XPRT_LIFETIME_EVENT(disconnect_auto);
 DEFINE_RPC_XPRT_LIFETIME_EVENT(disconnect_done);
 DEFINE_RPC_XPRT_LIFETIME_EVENT(disconnect_force);
-DEFINE_RPC_XPRT_LIFETIME_EVENT(disconnect_cleanup);
 DEFINE_RPC_XPRT_LIFETIME_EVENT(destroy);
 
 DECLARE_EVENT_CLASS(rpc_xprt_event,
diff --git a/net/sunrpc/xprt.c b/net/sunrpc/xprt.c
index a02de2bddb28b..07481b05577df 100644
--- a/net/sunrpc/xprt.c
+++ b/net/sunrpc/xprt.c
@@ -929,12 +929,7 @@ void xprt_connect(struct rpc_task *task)
 	if (!xprt_lock_write(xprt, task))
 		return;
 
-	if (test_and_clear_bit(XPRT_CLOSE_WAIT, &xprt->state)) {
-		trace_xprt_disconnect_cleanup(xprt);
-		xprt->ops->close(xprt);
-	}
-
-	if (!xprt_connected(xprt)) {
+	if (!xprt_connected(xprt) && !test_bit(XPRT_CLOSE_WAIT, &xprt->state)) {
 		task->tk_rqstp->rq_connect_cookie = xprt->connect_cookie;
 		rpc_sleep_on_timeout(&xprt->pending, task, NULL,
 				xprt_request_timeout(task->tk_rqstp));
diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
index 0f39e08ee580e..ca562443debd8 100644
--- a/net/sunrpc/xprtsock.c
+++ b/net/sunrpc/xprtsock.c
@@ -856,7 +856,7 @@ static int xs_local_send_request(struct rpc_rqst *req)
 
 	/* Close the stream if the previous transmission was incomplete */
 	if (xs_send_request_was_aborted(transport, req)) {
-		xs_close(xprt);
+		xprt_force_disconnect(xprt);
 		return -ENOTCONN;
 	}
 
@@ -894,7 +894,7 @@ static int xs_local_send_request(struct rpc_rqst *req)
 			-status);
 		fallthrough;
 	case -EPIPE:
-		xs_close(xprt);
+		xprt_force_disconnect(xprt);
 		status = -ENOTCONN;
 	}
 
@@ -1179,6 +1179,16 @@ static void xs_reset_transport(struct sock_xprt *transport)
 	if (sk == NULL)
 		return;
 
+	/*
+	 * Make sure we're calling this in a context from which it is safe
+	 * to call __fput_sync(). In practice that means rpciod and the
+	 * system workqueue.
+	 */
+	if (!(current->flags & PF_WQ_WORKER)) {
+		WARN_ON_ONCE(1);
+		set_bit(XPRT_CLOSE_WAIT, &xprt->state);
+		return;
+	}
 
 	if (atomic_read(&transport->xprt.swapper))
 		sk_clear_memalloc(sk);
@@ -1202,7 +1212,7 @@ static void xs_reset_transport(struct sock_xprt *transport)
 	mutex_unlock(&transport->recv_mutex);
 
 	trace_rpc_socket_close(xprt, sock);
-	fput(filp);
+	__fput_sync(filp);
 
 	xprt_disconnect_done(xprt);
 }
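[Editor's note: the fs/file_table.c hunk matters because sunrpc can be
built as a module; without the new EXPORT_SYMBOL, a modular
net/sunrpc/xprtsock.c could not link against __fput_sync() at all. A
sketch of the caller-side pattern this enables, condensed from the
xs_reset_transport() hunk above; example_release_file() is a
hypothetical name, not part of the patch.]

#include <linux/file.h>		/* fput(), __fput_sync() */
#include <linux/sched.h>	/* current, PF_WQ_WORKER */

/* Hypothetical illustration: close synchronously only where it is
 * safe, i.e. from a workqueue worker such as rpciod or the system wq;
 * otherwise fall back to the deferred path. */
static void example_release_file(struct file *filp)
{
	if (current->flags & PF_WQ_WORKER)
		__fput_sync(filp);	/* closed before we return */
	else
		fput(filp);		/* deferred to delayed_fput() */
}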