From patchwork Fri Feb 17 17:13:48 2023
X-Patchwork-Submitter: Yuxuan Luo
X-Patchwork-Id: 1744376
From: Yuxuan Luo
To: kernel-team@lists.ubuntu.com
Subject: [SRU][Bionic][PATCH] ipc: replace costly bailout check in sysvipc_find_ipc()
Date: Fri, 17 Feb 2023 12:13:48 -0500
Message-Id: <20230217171349.55737-2-yuxuan.luo@canonical.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230217171349.55737-1-yuxuan.luo@canonical.com>
References: <20230217171349.55737-1-yuxuan.luo@canonical.com>
List-Id: Kernel team discussions

From: Rafael Aquini

sysvipc_find_ipc() was left with a costly way to check if the offset
position fed to it is bigger than the total number of IPC IDs in use. So
much so that the time it takes to iterate over /proc/sysvipc/* files grows
exponentially for a custom benchmark that creates "N" SYSV shm segments
and then times the read of /proc/sysvipc/shm (milliseconds):

   12 msecs to read 1024 segs from /proc/sysvipc/shm
   18 msecs to read 2048 segs from /proc/sysvipc/shm
   65 msecs to read 4096 segs from /proc/sysvipc/shm
  325 msecs to read 8192 segs from /proc/sysvipc/shm
 1303 msecs to read 16384 segs from /proc/sysvipc/shm
 5182 msecs to read 32768 segs from /proc/sysvipc/shm

The root problem lies with the loop that computes the total amount of ids
in use in order to check whether the "pos" fed to sysvipc_find_ipc() grew
bigger than "ids->in_use". That is a quite inefficient way to get to the
maximum index in the id lookup table, especially when that value is
already provided by struct ipc_ids.max_idx.

This patch follows up on the optimization introduced via commit
15df03c879836 ("sysvipc: make get_maxid O(1) again") and gets rid of the
aforementioned costly loop, replacing it with a simpler check based on the
value returned by ipc_get_maxidx(), which allows for a smooth linear
increase in time complexity for the same custom benchmark:

    2 msecs to read 1024 segs from /proc/sysvipc/shm
    2 msecs to read 2048 segs from /proc/sysvipc/shm
    4 msecs to read 4096 segs from /proc/sysvipc/shm
    9 msecs to read 8192 segs from /proc/sysvipc/shm
   19 msecs to read 16384 segs from /proc/sysvipc/shm
   39 msecs to read 32768 segs from /proc/sysvipc/shm
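The benchmark itself does not ship with the patch; a minimal C sketch of
an equivalent test might look like the following. Segment size, the use of
IPC_PRIVATE keys, and the lack of cleanup are assumptions for illustration,
not the author's actual tool:

/*
 * bench_shm.c -- hypothetical reconstruction of the benchmark described
 * above: create N SysV shm segments, then time one full read of
 * /proc/sysvipc/shm. The created segments are intentionally left behind
 * (remove them with ipcrm afterwards), and kernel.shmmni must be raised
 * above its default (4096) before testing the larger segment counts.
 */
#include <stdio.h>
#include <stdlib.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/time.h>

int main(int argc, char **argv)
{
	int n = (argc > 1) ? atoi(argv[1]) : 1024;
	struct timeval t0, t1;
	char buf[4096];
	FILE *f;
	long ms;
	int i;

	for (i = 0; i < n; i++) {
		/* IPC_PRIVATE avoids key collisions; 4 KiB size is arbitrary */
		if (shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600) < 0) {
			perror("shmget");
			return 1;
		}
	}

	gettimeofday(&t0, NULL);
	f = fopen("/proc/sysvipc/shm", "r");
	if (!f) {
		perror("fopen");
		return 1;
	}
	while (fread(buf, 1, sizeof(buf), f) > 0)
		;	/* discard the data; only traversal time matters */
	fclose(f);
	gettimeofday(&t1, NULL);

	ms = (t1.tv_sec - t0.tv_sec) * 1000 +
	     (t1.tv_usec - t0.tv_usec) / 1000;
	printf("%ld msecs to read %d segs from /proc/sysvipc/shm\n", ms, n);
	return 0;
}

Running it once per segment count (e.g. ./bench_shm 32768) on an otherwise
idle machine should reproduce the shape of the numbers above.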
Link: https://lkml.kernel.org/r/20210809203554.1562989-1-aquini@redhat.com
Signed-off-by: Rafael Aquini
Acked-by: Davidlohr Bueso
Acked-by: Manfred Spraul
Cc: Waiman Long
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
(backported from commit 20401d1058f3f841f35a594ac2fc1293710e55b9)
[yuxuan.luo: accepted the incoming change for the conflict at the for
 loop, since max_idx is used regardless. Also changed `ipc_get_maxidx()`
 to `ipc_get_maxid()`, since the commit refactoring this function is hard
 to backport and not necessary.]
CVE-2021-3669
Signed-off-by: Yuxuan Luo
---
 ipc/util.c | 16 ++++------------
 1 file changed, 4 insertions(+), 12 deletions(-)

diff --git a/ipc/util.c b/ipc/util.c
index 7af476b6dcdde..c58e670bdbc86 100644
--- a/ipc/util.c
+++ b/ipc/util.c
@@ -746,21 +746,13 @@ struct ipc_proc_iter {
 static struct kern_ipc_perm *sysvipc_find_ipc(struct ipc_ids *ids, loff_t pos,
					      loff_t *new_pos)
 {
-	struct kern_ipc_perm *ipc;
-	int total, id;
-
-	total = 0;
-	for (id = 0; id < pos && total < ids->in_use; id++) {
-		ipc = idr_find(&ids->ipcs_idr, id);
-		if (ipc != NULL)
-			total++;
-	}
+	struct kern_ipc_perm *ipc = NULL;
+	int max_idx = ipc_get_maxid(ids);

-	ipc = NULL;
-	if (total >= ids->in_use)
+	if (max_idx == -1 || pos > max_idx)
		goto out;

-	for (; pos < IPCMNI; pos++) {
+	for (; pos <= max_idx; pos++) {
		ipc = idr_find(&ids->ipcs_idr, pos);
		if (ipc != NULL) {
			rcu_read_lock();
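
For context on why the replacement check is cheap: ipc_get_maxid() (the
Bionic-era name for what upstream later renamed ipc_get_maxidx()) only
reads bookkeeping that commit 15df03c879836 made struct ipc_ids maintain.
A simplified sketch of that helper, not the verbatim ipc/util.h code:

/*
 * Simplified sketch of the O(1) helper the new bailout relies on;
 * details may differ from the actual ipc/util.h in Bionic's tree.
 */
static inline int ipc_get_maxid(struct ipc_ids *ids)
{
	if (ids->in_use == 0)
		return -1;	/* no ids allocated; caller bails out */

	return ids->max_id;	/* highest index currently in use */
}

With this, each sysvipc_find_ipc() call does constant bailout work instead
of an O(pos) counting loop; since seq_file invokes it once per entry, a
full read of /proc/sysvipc/shm drops from roughly quadratic to linear in
the number of segments, matching the before/after numbers above.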