From patchwork Thu Jul 27 23:22:18 2023
X-Patchwork-Submitter: Cengiz Can
X-Patchwork-Id: 1814003
From: Cengiz Can
To: kernel-team@lists.ubuntu.com
Subject: [SRU Focal 1/1] net/sched: sch_qfq: account for stab overhead in qfq_enqueue
Date: Fri, 28 Jul 2023 02:22:18 +0300
Message-Id: <20230727232220.972472-2-cengiz.can@canonical.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230727232220.972472-1-cengiz.can@canonical.com>
References: <20230727232220.972472-1-cengiz.can@canonical.com>

From: Pedro Tammela

Lion says:
-------
In the QFQ scheduler a similar issue to CVE-2023-31436 persists.

Consider the following code in net/sched/sch_qfq.c:

static int qfq_enqueue(struct sk_buff *skb, struct Qdisc *sch,
		       struct sk_buff **to_free)
{
	unsigned int len = qdisc_pkt_len(skb), gso_segs;

	// ...

	if (unlikely(cl->agg->lmax < len)) {
		pr_debug("qfq: increasing maxpkt from %u to %u for class %u",
			 cl->agg->lmax, len, cl->common.classid);
		err = qfq_change_agg(sch, cl, cl->agg->class_weight, len);
		if (err) {
			cl->qstats.drops++;
			return qdisc_drop(skb, sch, to_free);
		}

		// ...

	}

Similarly to CVE-2023-31436, "lmax" is increased without any bounds
checks according to the packet length "len". Usually this would not
impose a problem because packet sizes are naturally limited.

This is however not the actual packet length, rather the
"qdisc_pkt_len(skb)", which might apply size transformations according
to "struct qdisc_size_table" as created by "qdisc_get_stab()" in
net/sched/sch_api.c if the TCA_STAB option was set when modifying the
qdisc.

A user may choose virtually any size using such a table.

As a result the same issue as in CVE-2023-31436 can occur, allowing heap
out-of-bounds read / writes in the kmalloc-8192 cache.
-------

We can create the issue with the following commands:

tc qdisc add dev $DEV root handle 1: stab mtu 2048 tsize 512 mpu 0 \
	overhead 999999999 linklayer ethernet qfq
tc class add dev $DEV parent 1: classid 1:1 htb rate 6mbit burst 15k
tc filter add dev $DEV parent 1: matchall classid 1:1
ping -I $DEV 1.1.1.2

This is caused by incorrectly assuming that qdisc_pkt_len() returns a
length within the QFQ_MIN_LMAX < len < QFQ_MAX_LMAX range.
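
To make the stab angle above concrete, here is a rough userspace sketch
of the length inflation. It is only an illustration: the real
__qdisc_calculate_pkt_len() in net/sched/sch_api.c also performs cell and
table lookups, and the struct below is an assumption modelled on the
reproducer, not the kernel's tc_sizespec.

#include <stdio.h>
#include <stdint.h>

/*
 * Toy model of a qdisc size table (TCA_STAB).  Once a stab is attached
 * the scheduler no longer sees skb->len directly; a user-supplied
 * overhead is added before anything else, which is what lets
 * qdisc_pkt_len() be made arbitrarily large.  Fields and values mirror
 * the reproducer above, not the kernel structs.
 */
struct toy_stab {
	int overhead;		/* user controlled; 999999999 in the PoC */
	unsigned int mtu;	/* 2048 in the PoC; irrelevant here       */
};

static uint64_t toy_adjusted_len(unsigned int wire_len, const struct toy_stab *s)
{
	/* the overhead is applied up front -- this is the inflation */
	return (uint64_t)wire_len + (uint64_t)s->overhead;
}

int main(void)
{
	struct toy_stab poc = { .overhead = 999999999, .mtu = 2048 };
	unsigned int ping_len = 98;	/* an ordinary ICMP echo */

	printf("adjusted len = %llu, QFQ_MAX_LMAX = %lu\n",
	       (unsigned long long)toy_adjusted_len(ping_len, &poc),
	       1UL << 16);
	return 0;
}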
Fixes: 462dbc9101ac ("pkt_sched: QFQ Plus: fair-queueing service at DRR cost")
Reported-by: Lion
Reviewed-by: Eric Dumazet
Signed-off-by: Jamal Hadi Salim
Signed-off-by: Pedro Tammela
Reviewed-by: Simon Horman
Signed-off-by: Paolo Abeni
CVE-2023-3611
(backported from commit 3e337087c3b5805fe0b8a46ba622a962880b5d64)
[cengizcan: commit 25369891fcef ("net/sched: sch_qfq: refactor parsing of
netlink parameters") does not exist in this tree and it is very hard to
backport its prerequisite commit 8cb081746c03 ("netlink: make validation
more configurable for future strictness"), so simply add QFQ_MAX_LMAX in
this patch.]
Signed-off-by: Cengiz Can
---
 net/sched/sch_qfq.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/net/sched/sch_qfq.c b/net/sched/sch_qfq.c
index 603bd3097bd8..d8248eb25ee4 100644
--- a/net/sched/sch_qfq.c
+++ b/net/sched/sch_qfq.c
@@ -113,6 +113,7 @@
 
 #define QFQ_MTU_SHIFT		16	/* to support TSO/GSO */
 #define QFQ_MIN_LMAX		512	/* see qfq_slot_insert */
+#define QFQ_MAX_LMAX		(1UL << QFQ_MTU_SHIFT)
 
 #define QFQ_MAX_AGG_CLASSES	8 /* max num classes per aggregate allowed */
 
@@ -375,8 +376,13 @@ static int qfq_change_agg(struct Qdisc *sch, struct qfq_class *cl, u32 weight,
 			   u32 lmax)
 {
 	struct qfq_sched *q = qdisc_priv(sch);
-	struct qfq_aggregate *new_agg = qfq_find_agg(q, lmax, weight);
+	struct qfq_aggregate *new_agg;
 
+	/* 'lmax' can range from [QFQ_MIN_LMAX, pktlen + stab overhead] */
+	if (lmax > QFQ_MAX_LMAX)
+		return -EINVAL;
+
+	new_agg = qfq_find_agg(q, lmax, weight);
 	if (new_agg == NULL) { /* create new aggregate */
 		new_agg = kzalloc(sizeof(*new_agg), GFP_ATOMIC);
 		if (new_agg == NULL)
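
A brief note on why (1UL << QFQ_MTU_SHIFT) is a safe ceiling here: QFQ
turns lmax (together with the class weight) into a group index that
selects an entry in a fixed-size array of groups, and that index grows
roughly with log2(lmax). The sketch below is only a back-of-the-envelope
model; the array size, shift and weight values are assumptions in the
spirit of the scheduler, not constants copied from this tree. It shows
how a stab-inflated lmax pushes the index well past the array, which is
the CVE-2023-31436-style out-of-bounds access the new check closes off.

#include <stdio.h>
#include <stdint.h>

/*
 * Toy model of QFQ group selection.  Assumptions (not taken from this
 * tree): roughly TOY_MAX_INDEX + 1 group slots, a minimum-slot shift of
 * TOY_MIN_SLOT_SHIFT, and an inverse weight in 30-bit fixed point.
 * The real qfq_calc_index() additionally rounds exact powers of two
 * down by one; the toy skips that, the logarithmic growth is the point.
 */
#define TOY_MAX_INDEX		24	/* assumed number of valid group indexes */
#define TOY_MIN_SLOT_SHIFT	22	/* assumed minimum slot shift            */

static int toy_calc_index(uint64_t inv_w, uint64_t lmax)
{
	uint64_t size_map = (lmax * inv_w) >> TOY_MIN_SLOT_SHIFT;
	int index = 0;

	while (size_map) {		/* floor(log2) + 1 */
		size_map >>= 1;
		index++;
	}
	return index;
}

int main(void)
{
	uint64_t inv_w = 1ULL << 26;	/* assumed weight of 16 in 30-bit fixed point */
	uint64_t sane  = 1ULL << 16;	/* QFQ_MAX_LMAX                               */
	uint64_t huge  = 1000000097;	/* roughly the stab-inflated length above     */

	printf("index(sane)=%d index(huge)=%d groups array has %d slots\n",
	       toy_calc_index(inv_w, sane), toy_calc_index(inv_w, huge),
	       TOY_MAX_INDEX + 1);
	return 0;
}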
From patchwork Thu Jul 27 23:22:22 2023
X-Patchwork-Submitter: Cengiz Can
X-Patchwork-Id: 1814005
From: Cengiz Can
To: kernel-team@lists.ubuntu.com
Subject: [SRU Jammy/Kinetic/Lunar 2/2] net/sched: sch_qfq: account for stab overhead in qfq_enqueue
Date: Fri, 28 Jul 2023 02:22:22 +0300
Message-Id: <20230727232220.972472-4-cengiz.can@canonical.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230727232220.972472-1-cengiz.can@canonical.com>
References: <20230727232220.972472-1-cengiz.can@canonical.com>

From: Pedro Tammela

Lion says:
-------
In the QFQ scheduler a similar issue to CVE-2023-31436 persists.

Consider the following code in net/sched/sch_qfq.c:

static int qfq_enqueue(struct sk_buff *skb, struct Qdisc *sch,
		       struct sk_buff **to_free)
{
	unsigned int len = qdisc_pkt_len(skb), gso_segs;

	// ...
	if (unlikely(cl->agg->lmax < len)) {
		pr_debug("qfq: increasing maxpkt from %u to %u for class %u",
			 cl->agg->lmax, len, cl->common.classid);
		err = qfq_change_agg(sch, cl, cl->agg->class_weight, len);
		if (err) {
			cl->qstats.drops++;
			return qdisc_drop(skb, sch, to_free);
		}

		// ...

	}

Similarly to CVE-2023-31436, "lmax" is increased without any bounds
checks according to the packet length "len". Usually this would not
impose a problem because packet sizes are naturally limited.

This is however not the actual packet length, rather the
"qdisc_pkt_len(skb)", which might apply size transformations according
to "struct qdisc_size_table" as created by "qdisc_get_stab()" in
net/sched/sch_api.c if the TCA_STAB option was set when modifying the
qdisc.

A user may choose virtually any size using such a table.

As a result the same issue as in CVE-2023-31436 can occur, allowing heap
out-of-bounds read / writes in the kmalloc-8192 cache.
-------

We can create the issue with the following commands:

tc qdisc add dev $DEV root handle 1: stab mtu 2048 tsize 512 mpu 0 \
	overhead 999999999 linklayer ethernet qfq
tc class add dev $DEV parent 1: classid 1:1 htb rate 6mbit burst 15k
tc filter add dev $DEV parent 1: matchall classid 1:1
ping -I $DEV 1.1.1.2

This is caused by incorrectly assuming that qdisc_pkt_len() returns a
length within the QFQ_MIN_LMAX < len < QFQ_MAX_LMAX range.

Fixes: 462dbc9101ac ("pkt_sched: QFQ Plus: fair-queueing service at DRR cost")
Reported-by: Lion
Reviewed-by: Eric Dumazet
Signed-off-by: Jamal Hadi Salim
Signed-off-by: Pedro Tammela
Reviewed-by: Simon Horman
Signed-off-by: Paolo Abeni
CVE-2023-3611
(cherry picked from commit 3e337087c3b5805fe0b8a46ba622a962880b5d64)
Signed-off-by: Cengiz Can
---
 net/sched/sch_qfq.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/net/sched/sch_qfq.c b/net/sched/sch_qfq.c
index 8fb30b20425f..3ba6a766601c 100644
--- a/net/sched/sch_qfq.c
+++ b/net/sched/sch_qfq.c
@@ -381,8 +381,13 @@ static int qfq_change_agg(struct Qdisc *sch, struct qfq_class *cl, u32 weight,
 			   u32 lmax)
 {
 	struct qfq_sched *q = qdisc_priv(sch);
-	struct qfq_aggregate *new_agg = qfq_find_agg(q, lmax, weight);
+	struct qfq_aggregate *new_agg;
 
+	/* 'lmax' can range from [QFQ_MIN_LMAX, pktlen + stab overhead] */
+	if (lmax > QFQ_MAX_LMAX)
+		return -EINVAL;
+
+	new_agg = qfq_find_agg(q, lmax, weight);
 	if (new_agg == NULL) { /* create new aggregate */
 		new_agg = kzalloc(sizeof(*new_agg), GFP_ATOMIC);
 		if (new_agg == NULL)
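
For completeness, a toy model of the behavioural change both backports
make (the helper below is a stand-in, not the kernel function): once
lmax, which after a stab can be the packet length plus an arbitrary
overhead, exceeds QFQ_MAX_LMAX, qfq_change_agg() now fails and
qfq_enqueue() drops the packet instead of resizing the aggregate.

#include <stdio.h>
#include <errno.h>

#define QFQ_MAX_LMAX	(1UL << 16)	/* mirrors the ceiling used by the patch */

/* Toy stand-in for the patched qfq_change_agg(): reject oversized lmax. */
static int toy_change_agg(unsigned long lmax)
{
	if (lmax > QFQ_MAX_LMAX)
		return -EINVAL;
	return 0;
}

int main(void)
{
	unsigned long stab_len = 98UL + 999999999UL;	/* PoC ping + stab overhead */

	if (toy_change_agg(stab_len))
		puts("patched: oversized lmax rejected, skb dropped in qfq_enqueue()");
	else
		puts("unpatched behaviour: aggregate lmax silently grows");
	return 0;
}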