From patchwork Wed Aug 7 18:17:17 2024
X-Patchwork-Submitter: Gregor Haas
X-Patchwork-Id: 1970195
From: Gregor Haas <gregorhaas1997@gmail.com>
To: opensbi@lists.infradead.org
Cc: atishp@rivosinc.com, anup@brainfault.org, Gregor Haas <gregorhaas1997@gmail.com>
Subject: [PATCH v3 1/3] lib: sbi: Support multiple heaps
Date: Wed, 7 Aug 2024 11:17:17 -0700
Message-ID: <20240807181719.244099-2-gregorhaas1997@gmail.com>
X-Mailer: git-send-email 2.45.2
In-Reply-To: <20240807181719.244099-1-gregorhaas1997@gmail.com>
References: <20240807181719.244099-1-gregorhaas1997@gmail.com>
The upcoming SMMTT implementation will require some larger contiguous
memory regions for the memory tracking tables. We plan to specify the
memory region for these tables as a reserved-memory node in the device
tree, and then dynamically allocate individual tables out of this
region. These changes to the SBI heap allocator will allow us to
explicitly create, and allocate from, a dedicated heap tied to the
table memory region.

Signed-off-by: Gregor Haas <gregorhaas1997@gmail.com>
Reviewed-by: Anup Patel <anup@brainfault.org>
---
 include/sbi/sbi_heap.h |  57 +++++++++++++++++--
 lib/sbi/sbi_heap.c     | 122 +++++++++++++++++++++++------------------
 2 files changed, 119 insertions(+), 60 deletions(-)

diff --git a/include/sbi/sbi_heap.h b/include/sbi/sbi_heap.h
index 16755ec..9a67090 100644
--- a/include/sbi/sbi_heap.h
+++ b/include/sbi/sbi_heap.h
@@ -12,16 +12,32 @@
 
 #include <sbi/sbi_types.h>
 
+/* Opaque declaration of heap control struct */
+struct sbi_heap_control;
+
+/* Global heap control structure */
+extern struct sbi_heap_control global_hpctrl;
+
 /* Alignment of heap base address and size */
 #define HEAP_BASE_ALIGN		1024
 
 struct sbi_scratch;
 
 /** Allocate from heap area */
-void *sbi_malloc(size_t size);
+void *sbi_malloc_from(struct sbi_heap_control *hpctrl, size_t size);
+
+static inline void *sbi_malloc(size_t size)
+{
+	return sbi_malloc_from(&global_hpctrl, size);
+}
 
 /** Zero allocate from heap area */
-void *sbi_zalloc(size_t size);
+void *sbi_zalloc_from(struct sbi_heap_control *hpctrl, size_t size);
+
+static inline void *sbi_zalloc(size_t size)
+{
+	return sbi_zalloc_from(&global_hpctrl, size);
+}
 
 /** Allocate array from heap area */
 static inline void *sbi_calloc(size_t nitems, size_t size)
@@ -29,19 +45,48 @@ static inline void *sbi_calloc(size_t nitems, size_t size)
 	return sbi_zalloc(nitems * size);
 }
 
+static inline void *sbi_calloc_from(struct sbi_heap_control *hpctrl,
+				    size_t nitems, size_t size)
+{
+	return sbi_zalloc_from(hpctrl, nitems * size);
+}
+
 /** Free-up to heap area */
-void sbi_free(void *ptr);
+void sbi_free_from(struct sbi_heap_control *hpctrl, void *ptr);
+
+static inline void sbi_free(void *ptr)
+{
+	return sbi_free_from(&global_hpctrl, ptr);
+}
 
 /** Amount (in bytes) of free space in the heap area */
-unsigned long sbi_heap_free_space(void);
+unsigned long sbi_heap_free_space_from(struct sbi_heap_control *hpctrl);
+
+static inline unsigned long sbi_heap_free_space(void)
+{
+	return sbi_heap_free_space_from(&global_hpctrl);
+}
 
 /** Amount (in bytes) of used space in the heap area */
-unsigned long sbi_heap_used_space(void);
+unsigned long sbi_heap_used_space_from(struct sbi_heap_control *hpctrl);
+
+static inline unsigned long sbi_heap_used_space(void)
+{
+	return sbi_heap_used_space_from(&global_hpctrl);
+}
 
 /** Amount (in bytes) of reserved space in the heap area */
-unsigned long sbi_heap_reserved_space(void);
+unsigned long sbi_heap_reserved_space_from(struct sbi_heap_control *hpctrl);
+
+static inline unsigned long sbi_heap_reserved_space(void)
+{
+	return sbi_heap_reserved_space_from(&global_hpctrl);
+}
 
 /** Initialize heap area */
 int sbi_heap_init(struct sbi_scratch *scratch);
+int sbi_heap_init_new(struct sbi_heap_control *hpctrl, unsigned long base,
+		      unsigned long size);
+int sbi_heap_alloc_new(struct sbi_heap_control **hpctrl);
 
 #endif
diff --git a/lib/sbi/sbi_heap.c b/lib/sbi/sbi_heap.c
index bcd404b..e43d77c 100644
--- a/lib/sbi/sbi_heap.c
+++ b/lib/sbi/sbi_heap.c
@@ -24,7 +24,7 @@ struct heap_node {
 	unsigned long size;
 };
 
-struct heap_control {
+struct sbi_heap_control {
 	spinlock_t lock;
 	unsigned long base;
 	unsigned long size;
@@ -35,9 +35,9 @@ struct heap_control {
 	struct sbi_dlist used_space_list;
 };
 
-static struct heap_control hpctrl;
+struct sbi_heap_control global_hpctrl;
 
-void *sbi_malloc(size_t size)
+void *sbi_malloc_from(struct sbi_heap_control *hpctrl, size_t size)
 {
 	void *ret = NULL;
 	struct heap_node *n, *np;
@@ -48,10 +48,10 @@ void *sbi_malloc(size_t size)
 	size += HEAP_ALLOC_ALIGN - 1;
 	size &= ~((unsigned long)HEAP_ALLOC_ALIGN - 1);
 
-	spin_lock(&hpctrl.lock);
+	spin_lock(&hpctrl->lock);
 
 	np = NULL;
-	sbi_list_for_each_entry(n, &hpctrl.free_space_list, head) {
+	sbi_list_for_each_entry(n, &hpctrl->free_space_list, head) {
 		if (size <= n->size) {
 			np = n;
 			break;
@@ -59,47 +59,47 @@
 	}
 	if (np) {
 		if ((size < np->size) &&
-		    !sbi_list_empty(&hpctrl.free_node_list)) {
-			n = sbi_list_first_entry(&hpctrl.free_node_list,
+		    !sbi_list_empty(&hpctrl->free_node_list)) {
+			n = sbi_list_first_entry(&hpctrl->free_node_list,
						 struct heap_node, head);
 			sbi_list_del(&n->head);
 			n->addr = np->addr + np->size - size;
 			n->size = size;
 			np->size -= size;
-			sbi_list_add_tail(&n->head, &hpctrl.used_space_list);
+			sbi_list_add_tail(&n->head, &hpctrl->used_space_list);
 			ret = (void *)n->addr;
 		} else if (size == np->size) {
 			sbi_list_del(&np->head);
-			sbi_list_add_tail(&np->head, &hpctrl.used_space_list);
+			sbi_list_add_tail(&np->head, &hpctrl->used_space_list);
 			ret = (void *)np->addr;
 		}
 	}
 
-	spin_unlock(&hpctrl.lock);
+	spin_unlock(&hpctrl->lock);
 
 	return ret;
 }
 
-void *sbi_zalloc(size_t size)
+void *sbi_zalloc_from(struct sbi_heap_control *hpctrl, size_t size)
 {
-	void *ret = sbi_malloc(size);
+	void *ret = sbi_malloc_from(hpctrl, size);
 
 	if (ret)
 		sbi_memset(ret, 0, size);
 	return ret;
 }
 
-void sbi_free(void *ptr)
+void sbi_free_from(struct sbi_heap_control *hpctrl, void *ptr)
 {
 	struct heap_node *n, *np;
 
 	if (!ptr)
 		return;
 
-	spin_lock(&hpctrl.lock);
+	spin_lock(&hpctrl->lock);
 
 	np = NULL;
-	sbi_list_for_each_entry(n, &hpctrl.used_space_list, head) {
+	sbi_list_for_each_entry(n, &hpctrl->used_space_list, head) {
 		if ((n->addr <= (unsigned long)ptr) &&
 		    ((unsigned long)ptr < (n->addr + n->size))) {
 			np = n;
@@ -107,22 +107,22 @@ void sbi_free(void *ptr)
 		}
 	}
 	if (!np) {
-		spin_unlock(&hpctrl.lock);
+		spin_unlock(&hpctrl->lock);
 		return;
 	}
 
 	sbi_list_del(&np->head);
 
-	sbi_list_for_each_entry(n, &hpctrl.free_space_list, head) {
+	sbi_list_for_each_entry(n, &hpctrl->free_space_list, head) {
 		if ((np->addr + np->size) == n->addr) {
 			n->addr = np->addr;
 			n->size += np->size;
-			sbi_list_add_tail(&np->head, &hpctrl.free_node_list);
+			sbi_list_add_tail(&np->head, &hpctrl->free_node_list);
 			np = NULL;
 			break;
 		} else if (np->addr == (n->addr + n->size)) {
 			n->size += np->size;
-			sbi_list_add_tail(&np->head, &hpctrl.free_node_list);
+			sbi_list_add_tail(&np->head, &hpctrl->free_node_list);
 			np = NULL;
 			break;
 		} else if ((n->addr + n->size) < np->addr) {
@@ -132,73 +132,87 @@
 		}
 	}
 	if (np)
-		sbi_list_add_tail(&np->head, &hpctrl.free_space_list);
+		sbi_list_add_tail(&np->head, &hpctrl->free_space_list);
 
-	spin_unlock(&hpctrl.lock);
+	spin_unlock(&hpctrl->lock);
 }
 
-unsigned long sbi_heap_free_space(void)
+unsigned long sbi_heap_free_space_from(struct sbi_heap_control *hpctrl)
 {
 	struct heap_node *n;
 	unsigned long ret = 0;
 
-	spin_lock(&hpctrl.lock);
-	sbi_list_for_each_entry(n, &hpctrl.free_space_list, head)
+	spin_lock(&hpctrl->lock);
+	sbi_list_for_each_entry(n, &hpctrl->free_space_list, head)
 		ret += n->size;
-	spin_unlock(&hpctrl.lock);
+	spin_unlock(&hpctrl->lock);
 
 	return ret;
 }
 
-unsigned long sbi_heap_used_space(void)
+unsigned long sbi_heap_used_space_from(struct sbi_heap_control *hpctrl)
 {
-	return hpctrl.size - hpctrl.hksize - sbi_heap_free_space();
+	return hpctrl->size - hpctrl->hksize - sbi_heap_free_space();
 }
 
-unsigned long sbi_heap_reserved_space(void)
+unsigned long sbi_heap_reserved_space_from(struct sbi_heap_control *hpctrl)
 {
-	return hpctrl.hksize;
+	return hpctrl->hksize;
 }
 
-int sbi_heap_init(struct sbi_scratch *scratch)
+int sbi_heap_init_new(struct sbi_heap_control *hpctrl, unsigned long base,
+		      unsigned long size)
 {
 	unsigned long i;
 	struct heap_node *n;
 
-	/* Sanity checks on heap offset and size */
-	if (!scratch->fw_heap_size ||
-	    (scratch->fw_heap_size & (HEAP_BASE_ALIGN - 1)) ||
-	    (scratch->fw_heap_offset < scratch->fw_rw_offset) ||
-	    (scratch->fw_size < (scratch->fw_heap_offset + scratch->fw_heap_size)) ||
-	    (scratch->fw_heap_offset & (HEAP_BASE_ALIGN - 1)))
-		return SBI_EINVAL;
-
 	/* Initialize heap control */
-	SPIN_LOCK_INIT(hpctrl.lock);
-	hpctrl.base = scratch->fw_start + scratch->fw_heap_offset;
-	hpctrl.size = scratch->fw_heap_size;
-	hpctrl.hkbase = hpctrl.base;
-	hpctrl.hksize = hpctrl.size / HEAP_HOUSEKEEPING_FACTOR;
-	hpctrl.hksize &= ~((unsigned long)HEAP_BASE_ALIGN - 1);
-	SBI_INIT_LIST_HEAD(&hpctrl.free_node_list);
-	SBI_INIT_LIST_HEAD(&hpctrl.free_space_list);
-	SBI_INIT_LIST_HEAD(&hpctrl.used_space_list);
+	SPIN_LOCK_INIT(hpctrl->lock);
+	hpctrl->base = base;
+	hpctrl->size = size;
+	hpctrl->hkbase = hpctrl->base;
+	hpctrl->hksize = hpctrl->size / HEAP_HOUSEKEEPING_FACTOR;
+	hpctrl->hksize &= ~((unsigned long)HEAP_BASE_ALIGN - 1);
+	SBI_INIT_LIST_HEAD(&hpctrl->free_node_list);
+	SBI_INIT_LIST_HEAD(&hpctrl->free_space_list);
+	SBI_INIT_LIST_HEAD(&hpctrl->used_space_list);
 
 	/* Prepare free node list */
-	for (i = 0; i < (hpctrl.hksize / sizeof(*n)); i++) {
-		n = (struct heap_node *)(hpctrl.hkbase + (sizeof(*n) * i));
+	for (i = 0; i < (hpctrl->hksize / sizeof(*n)); i++) {
+		n = (struct heap_node *)(hpctrl->hkbase + (sizeof(*n) * i));
 		SBI_INIT_LIST_HEAD(&n->head);
 		n->addr = n->size = 0;
-		sbi_list_add_tail(&n->head, &hpctrl.free_node_list);
+		sbi_list_add_tail(&n->head, &hpctrl->free_node_list);
 	}
 
 	/* Prepare free space list */
-	n = sbi_list_first_entry(&hpctrl.free_node_list,
+	n = sbi_list_first_entry(&hpctrl->free_node_list,
				 struct heap_node, head);
 	sbi_list_del(&n->head);
-	n->addr = hpctrl.hkbase + hpctrl.hksize;
-	n->size = hpctrl.size - hpctrl.hksize;
-	sbi_list_add_tail(&n->head, &hpctrl.free_space_list);
+	n->addr = hpctrl->hkbase + hpctrl->hksize;
+	n->size = hpctrl->size - hpctrl->hksize;
+	sbi_list_add_tail(&n->head, &hpctrl->free_space_list);
+
+	return 0;
+}
+int sbi_heap_init(struct sbi_scratch *scratch)
+{
+	/* Sanity checks on heap offset and size */
+	if (!scratch->fw_heap_size ||
+	    (scratch->fw_heap_size & (HEAP_BASE_ALIGN - 1)) ||
+	    (scratch->fw_heap_offset < scratch->fw_rw_offset) ||
+	    (scratch->fw_size < (scratch->fw_heap_offset + scratch->fw_heap_size)) ||
+	    (scratch->fw_heap_offset & (HEAP_BASE_ALIGN - 1)))
+		return SBI_EINVAL;
+
+	return sbi_heap_init_new(&global_hpctrl,
				 scratch->fw_start + scratch->fw_heap_offset,
				 scratch->fw_heap_size);
+}
+
+int sbi_heap_alloc_new(struct sbi_heap_control **hpctrl)
+{
+	*hpctrl = sbi_calloc(1, sizeof(struct sbi_heap_control));
 
 	return 0;
 }
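
For context (not part of the patch): a rough sketch of how a dedicated heap
could be created and used with the new interface, in the spirit of the SMMTT
reserved-memory use case described above. Only the sbi_heap_*/sbi_*_from
calls come from this series; the mtt_* names and the idea that base/size
arrive from a reserved-memory node are illustrative assumptions.

#include <sbi/sbi_heap.h>
#include <sbi/sbi_error.h>

/* Hypothetical consumer of the new multi-heap API. */
static struct sbi_heap_control *mtt_hpctrl;

static int mtt_heap_setup(unsigned long resv_base, unsigned long resv_size)
{
	int rc;

	/* Allocate a fresh heap control structure (backed by the global heap) */
	rc = sbi_heap_alloc_new(&mtt_hpctrl);
	if (rc)
		return rc;
	if (!mtt_hpctrl)
		return SBI_ENOMEM;

	/*
	 * Tie the new heap to the reserved-memory region; base and size
	 * should respect HEAP_BASE_ALIGN, mirroring the checks that
	 * sbi_heap_init() performs for the global heap.
	 */
	return sbi_heap_init_new(mtt_hpctrl, resv_base, resv_size);
}

static void *mtt_table_alloc(size_t table_size)
{
	/* Zero-initialized allocation from the dedicated heap */
	return sbi_zalloc_from(mtt_hpctrl, table_size);
}

static void mtt_table_free(void *table)
{
	sbi_free_from(mtt_hpctrl, table);
}

Allocations made through the plain sbi_malloc()/sbi_free() wrappers keep
going to global_hpctrl, so existing callers are unaffected.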
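
A second standalone note on sizing the reserved-memory node: because
sbi_heap_init_new() sets aside size / HEAP_HOUSEKEEPING_FACTOR bytes (rounded
down to HEAP_BASE_ALIGN) at the start of the region for its heap_node pool,
only the remainder is available for tables. The snippet below just restates
that arithmetic outside of OpenSBI; HEAP_HOUSEKEEPING_FACTOR is defined in
sbi_heap.c and the value 16 is only an assumption for illustration.

#include <stdio.h>

#define HEAP_BASE_ALIGN			1024UL
#define HEAP_HOUSEKEEPING_FACTOR	16UL	/* assumed; see sbi_heap.c */

/* Bytes left for allocations after the housekeeping carve-out */
static unsigned long allocatable_bytes(unsigned long size)
{
	unsigned long hksize = size / HEAP_HOUSEKEEPING_FACTOR;

	hksize &= ~(HEAP_BASE_ALIGN - 1);
	return size - hksize;
}

int main(void)
{
	/* e.g. a 2 MiB reserved-memory carve-out for tracking tables */
	unsigned long size = 2UL * 1024 * 1024;

	printf("%lu of %lu bytes usable for tables\n",
	       allocatable_bytes(size), size);
	return 0;
}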