From patchwork Wed Jul 10 21:09:23 2024
X-Patchwork-Submitter: Gregor Haas
X-Patchwork-Id: 1959019
From: Gregor Haas
To: opensbi@lists.infradead.org
Cc: atishp@rivosinc.com, Gregor Haas
Subject: [PATCH v2 1/2] lib: sbi: Support multiple heaps
Date: Wed, 10 Jul 2024 14:09:23 -0700
Message-ID: <20240710210924.817753-2-gregorhaas1997@gmail.com>
In-Reply-To: <20240710210924.817753-1-gregorhaas1997@gmail.com>
References: <20240710210924.817753-1-gregorhaas1997@gmail.com>
MIME-Version: 1.0
The upcoming SMMTT implementation will require some larger contiguous
memory regions for the memory tracking tables. We plan to specify the
memory region for these tables as a reserved-memory node in the device
tree, and then dynamically allocate individual tables out of this
region. These changes to the SBI heap allocator will allow us to
explicitly create, and allocate from, a dedicated heap tied to the
table memory region.
---
 include/sbi/sbi_heap.h |  18 +++++
 lib/sbi/sbi_heap.c     | 146 +++++++++++++++++++++++++++--------------
 2 files changed, 113 insertions(+), 51 deletions(-)

diff --git a/include/sbi/sbi_heap.h b/include/sbi/sbi_heap.h
index 16755ec..a3f5a0c 100644
--- a/include/sbi/sbi_heap.h
+++ b/include/sbi/sbi_heap.h
@@ -12,6 +12,9 @@
 
 #include <sbi/sbi_types.h>
 
+/* Opaque declaration of heap control struct */
+struct heap_control;
+
 /* Alignment of heap base address and size */
 #define HEAP_BASE_ALIGN		1024
 
@@ -19,9 +22,11 @@ struct sbi_scratch;
 
 /** Allocate from heap area */
 void *sbi_malloc(size_t size);
+void *sbi_malloc_from(struct heap_control *hpctrl, size_t size);
 
 /** Zero allocate from heap area */
 void *sbi_zalloc(size_t size);
+void *sbi_zalloc_from(struct heap_control *hpctrl, size_t size);
 
 /** Allocate array from heap area */
 static inline void *sbi_calloc(size_t nitems, size_t size)
@@ -29,19 +34,32 @@ static inline void *sbi_calloc(size_t nitems, size_t size)
 	return sbi_zalloc(nitems * size);
 }
 
+static inline void *sbi_calloc_from(struct heap_control *hpctrl,
+				    size_t nitems, size_t size)
+{
+	return sbi_zalloc_from(hpctrl, nitems * size);
+}
+
 /** Free-up to heap area */
 void sbi_free(void *ptr);
+void sbi_free_from(struct heap_control *hpctrl, void *ptr);
 
 /** Amount (in bytes) of free space in the heap area */
 unsigned long sbi_heap_free_space(void);
+unsigned long sbi_heap_free_space_from(struct heap_control *hpctrl);
 
 /** Amount (in bytes) of used space in the heap area */
 unsigned long sbi_heap_used_space(void);
+unsigned long sbi_heap_used_space_from(struct heap_control *hpctrl);
 
 /** Amount (in bytes) of reserved space in the heap area */
 unsigned long sbi_heap_reserved_space(void);
+unsigned long sbi_heap_reserved_space_from(struct heap_control *hpctrl);
 
 /** Initialize heap area */
 int sbi_heap_init(struct sbi_scratch *scratch);
+int sbi_heap_init_new(struct heap_control *hpctrl, unsigned long base,
+		      unsigned long size);
+int sbi_heap_alloc_new(struct heap_control **hpctrl);
 
 #endif
diff --git a/lib/sbi/sbi_heap.c b/lib/sbi/sbi_heap.c
index bcd404b..3be56f3 100644
--- a/lib/sbi/sbi_heap.c
+++ b/lib/sbi/sbi_heap.c
@@ -35,9 +35,9 @@ struct heap_control {
 	struct sbi_dlist used_space_list;
 };
 
-static struct heap_control hpctrl;
+static struct heap_control global_hpctrl;
 
-void *sbi_malloc(size_t size)
+void *sbi_malloc_from(struct heap_control *hpctrl, size_t size)
 {
 	void *ret = NULL;
 	struct heap_node *n, *np;
@@ -48,10 +48,10 @@ void *sbi_malloc(size_t size)
 	size += HEAP_ALLOC_ALIGN - 1;
 	size &= ~((unsigned long)HEAP_ALLOC_ALIGN - 1);
 
-	spin_lock(&hpctrl.lock);
+	spin_lock(&hpctrl->lock);
 
 	np = NULL;
-	sbi_list_for_each_entry(n, &hpctrl.free_space_list, head) {
+	sbi_list_for_each_entry(n, &hpctrl->free_space_list, head) {
 		if (size <= n->size) {
 			np = n;
 			break;
@@ -59,47 +59,57 @@ void *sbi_malloc(size_t size)
 	}
 	if (np) {
 		if ((size < np->size) &&
-		    !sbi_list_empty(&hpctrl.free_node_list)) {
-			n = sbi_list_first_entry(&hpctrl.free_node_list,
+		    !sbi_list_empty(&hpctrl->free_node_list)) {
+			n = sbi_list_first_entry(&hpctrl->free_node_list,
 						 struct heap_node, head);
 			sbi_list_del(&n->head);
 			n->addr = np->addr + np->size - size;
 			n->size = size;
 			np->size -= size;
-			sbi_list_add_tail(&n->head, &hpctrl.used_space_list);
+			sbi_list_add_tail(&n->head, &hpctrl->used_space_list);
 			ret = (void *)n->addr;
 		} else if (size == np->size) {
 			sbi_list_del(&np->head);
-			sbi_list_add_tail(&np->head, &hpctrl.used_space_list);
+			sbi_list_add_tail(&np->head, &hpctrl->used_space_list);
 			ret = (void *)np->addr;
 		}
 	}
 
-	spin_unlock(&hpctrl.lock);
+	spin_unlock(&hpctrl->lock);
 
 	return ret;
 }
 
-void *sbi_zalloc(size_t size)
+void *sbi_malloc(size_t size)
 {
-	void *ret = sbi_malloc(size);
+	return sbi_malloc_from(&global_hpctrl, size);
+}
+
+void *sbi_zalloc_from(struct heap_control *hpctrl, size_t size)
+{
+	void *ret = sbi_malloc_from(hpctrl, size);
 
 	if (ret)
 		sbi_memset(ret, 0, size);
 	return ret;
 }
 
-void sbi_free(void *ptr)
+void *sbi_zalloc(size_t size)
+{
+	return sbi_zalloc_from(&global_hpctrl, size);
+}
+
+void sbi_free_from(struct heap_control *hpctrl, void *ptr)
 {
 	struct heap_node *n, *np;
 
 	if (!ptr)
 		return;
 
-	spin_lock(&hpctrl.lock);
+	spin_lock(&hpctrl->lock);
 
 	np = NULL;
-	sbi_list_for_each_entry(n, &hpctrl.used_space_list, head) {
+	sbi_list_for_each_entry(n, &hpctrl->used_space_list, head) {
 		if ((n->addr <= (unsigned long)ptr) &&
 		    ((unsigned long)ptr < (n->addr + n->size))) {
 			np = n;
@@ -107,22 +117,22 @@ void sbi_free(void *ptr)
 		}
 	}
 	if (!np) {
-		spin_unlock(&hpctrl.lock);
+		spin_unlock(&hpctrl->lock);
 		return;
 	}
 
 	sbi_list_del(&np->head);
 
-	sbi_list_for_each_entry(n, &hpctrl.free_space_list, head) {
+	sbi_list_for_each_entry(n, &hpctrl->free_space_list, head) {
 		if ((np->addr + np->size) == n->addr) {
 			n->addr = np->addr;
 			n->size += np->size;
-			sbi_list_add_tail(&np->head, &hpctrl.free_node_list);
+			sbi_list_add_tail(&np->head, &hpctrl->free_node_list);
 			np = NULL;
 			break;
 		} else if (np->addr == (n->addr + n->size)) {
 			n->size += np->size;
-			sbi_list_add_tail(&np->head, &hpctrl.free_node_list);
+			sbi_list_add_tail(&np->head, &hpctrl->free_node_list);
 			np = NULL;
 			break;
 		} else if ((n->addr + n->size) < np->addr) {
@@ -132,73 +142,107 @@ void sbi_free(void *ptr)
 		}
 	}
 	if (np)
-		sbi_list_add_tail(&np->head, &hpctrl.free_space_list);
+		sbi_list_add_tail(&np->head, &hpctrl->free_space_list);
 
-	spin_unlock(&hpctrl.lock);
+	spin_unlock(&hpctrl->lock);
 }
 
-unsigned long sbi_heap_free_space(void)
+void sbi_free(void *ptr)
+{
+	return sbi_free_from(&global_hpctrl, ptr);
+}
+
+unsigned long sbi_heap_free_space_from(struct heap_control *hpctrl)
 {
 	struct heap_node *n;
 	unsigned long ret = 0;
 
-	spin_lock(&hpctrl.lock);
-	sbi_list_for_each_entry(n, &hpctrl.free_space_list, head)
+	spin_lock(&hpctrl->lock);
+	sbi_list_for_each_entry(n, &hpctrl->free_space_list, head)
 		ret += n->size;
-	spin_unlock(&hpctrl.lock);
+	spin_unlock(&hpctrl->lock);
 
 	return ret;
 }
 
+unsigned long sbi_heap_free_space(void)
+{
+	return sbi_heap_free_space_from(&global_hpctrl);
+}
+
+unsigned long sbi_heap_used_space_from(struct heap_control *hpctrl)
+{
+	return hpctrl->size - hpctrl->hksize - sbi_heap_free_space_from(hpctrl);
+}
+
 unsigned long sbi_heap_used_space(void)
 {
-	return hpctrl.size - hpctrl.hksize - sbi_heap_free_space();
+	return sbi_heap_used_space_from(&global_hpctrl);
+}
+
+unsigned long sbi_heap_reserved_space_from(struct heap_control *hpctrl)
+{
+	return hpctrl->hksize;
 }
 
 unsigned long sbi_heap_reserved_space(void)
 {
-	return hpctrl.hksize;
+	return sbi_heap_reserved_space_from(&global_hpctrl);
 }
 
-int sbi_heap_init(struct sbi_scratch *scratch)
+int sbi_heap_init_new(struct heap_control *hpctrl, unsigned long base,
+		      unsigned long size)
 {
 	unsigned long i;
 	struct heap_node *n;
 
-	/* Sanity checks on heap offset and size */
-	if (!scratch->fw_heap_size ||
-	    (scratch->fw_heap_size & (HEAP_BASE_ALIGN - 1)) ||
-	    (scratch->fw_heap_offset < scratch->fw_rw_offset) ||
-	    (scratch->fw_size < (scratch->fw_heap_offset + scratch->fw_heap_size)) ||
-	    (scratch->fw_heap_offset & (HEAP_BASE_ALIGN - 1)))
-		return SBI_EINVAL;
-
 	/* Initialize heap control */
-	SPIN_LOCK_INIT(hpctrl.lock);
-	hpctrl.base = scratch->fw_start + scratch->fw_heap_offset;
-	hpctrl.size = scratch->fw_heap_size;
-	hpctrl.hkbase = hpctrl.base;
-	hpctrl.hksize = hpctrl.size / HEAP_HOUSEKEEPING_FACTOR;
-	hpctrl.hksize &= ~((unsigned long)HEAP_BASE_ALIGN - 1);
-	SBI_INIT_LIST_HEAD(&hpctrl.free_node_list);
-	SBI_INIT_LIST_HEAD(&hpctrl.free_space_list);
-	SBI_INIT_LIST_HEAD(&hpctrl.used_space_list);
+	SPIN_LOCK_INIT(hpctrl->lock);
+	hpctrl->base = base;
+	hpctrl->size = size;
+	hpctrl->hkbase = hpctrl->base;
+	hpctrl->hksize = hpctrl->size / HEAP_HOUSEKEEPING_FACTOR;
+	hpctrl->hksize &= ~((unsigned long)HEAP_BASE_ALIGN - 1);
+	SBI_INIT_LIST_HEAD(&hpctrl->free_node_list);
+	SBI_INIT_LIST_HEAD(&hpctrl->free_space_list);
+	SBI_INIT_LIST_HEAD(&hpctrl->used_space_list);
 
 	/* Prepare free node list */
-	for (i = 0; i < (hpctrl.hksize / sizeof(*n)); i++) {
-		n = (struct heap_node *)(hpctrl.hkbase + (sizeof(*n) * i));
+	for (i = 0; i < (hpctrl->hksize / sizeof(*n)); i++) {
+		n = (struct heap_node *)(hpctrl->hkbase + (sizeof(*n) * i));
 		SBI_INIT_LIST_HEAD(&n->head);
 		n->addr = n->size = 0;
-		sbi_list_add_tail(&n->head, &hpctrl.free_node_list);
+		sbi_list_add_tail(&n->head, &hpctrl->free_node_list);
 	}
 
 	/* Prepare free space list */
-	n = sbi_list_first_entry(&hpctrl.free_node_list,
+	n = sbi_list_first_entry(&hpctrl->free_node_list,
 				 struct heap_node, head);
 	sbi_list_del(&n->head);
-	n->addr = hpctrl.hkbase + hpctrl.hksize;
-	n->size = hpctrl.size - hpctrl.hksize;
-	sbi_list_add_tail(&n->head, &hpctrl.free_space_list);
+	n->addr = hpctrl->hkbase + hpctrl->hksize;
+	n->size = hpctrl->size - hpctrl->hksize;
+	sbi_list_add_tail(&n->head, &hpctrl->free_space_list);
 
 	return 0;
 }
+
+int sbi_heap_init(struct sbi_scratch *scratch)
+{
+	/* Sanity checks on heap offset and size */
+	if (!scratch->fw_heap_size ||
+	    (scratch->fw_heap_size & (HEAP_BASE_ALIGN - 1)) ||
+	    (scratch->fw_heap_offset < scratch->fw_rw_offset) ||
+	    (scratch->fw_size < (scratch->fw_heap_offset + scratch->fw_heap_size)) ||
+	    (scratch->fw_heap_offset & (HEAP_BASE_ALIGN - 1)))
+		return SBI_EINVAL;
+
+	return sbi_heap_init_new(&global_hpctrl,
+				 scratch->fw_start + scratch->fw_heap_offset,
+				 scratch->fw_heap_size);
+}
+
+int sbi_heap_alloc_new(struct heap_control **hpctrl)
+{
+	*hpctrl = sbi_calloc(1, sizeof(struct heap_control));
+	return 0;
+}
\ No newline at end of file