From patchwork Fri Aug 9 03:16:36 2024
X-Patchwork-Submitter: Gregor Haas
X-Patchwork-Id: 1970789
From: Gregor Haas
To: opensbi@lists.infradead.org
Cc: atishp@rivosinc.com, anup@brainfault.org, jrtc27@jrtc27.com, Gregor Haas
Subject: [PATCH v4 1/3] lib: sbi: Support multiple heaps
Date: Thu, 8 Aug 2024 20:16:36 -0700
Message-ID: <20240809031638.89146-2-gregorhaas1997@gmail.com>
In-Reply-To: <20240809031638.89146-1-gregorhaas1997@gmail.com>
References: <20240809031638.89146-1-gregorhaas1997@gmail.com>

The upcoming SMMTT implementation will require some larger contiguous
memory regions for the memory tracking tables. We plan to specify the
memory region for these tables as a reserved-memory node in the device
tree, and then dynamically allocate individual tables out of this
region. These changes to the SBI heap allocator will allow us to
explicitly create and allocate from a dedicated heap tied to the table
memory region.

Signed-off-by: Gregor Haas
Reviewed-by: Anup Patel
---
 include/sbi/sbi_heap.h |  57 +++++++++++++++++--
 lib/sbi/sbi_heap.c     | 122 +++++++++++++++++++++++------------
 2 files changed, 119 insertions(+), 60 deletions(-)

diff --git a/include/sbi/sbi_heap.h b/include/sbi/sbi_heap.h
index 16755ec..9a67090 100644
--- a/include/sbi/sbi_heap.h
+++ b/include/sbi/sbi_heap.h
@@ -12,16 +12,32 @@

 #include <sbi/sbi_types.h>

+/* Opaque declaration of heap control struct */
+struct sbi_heap_control;
+
+/* Global heap control structure */
+extern struct sbi_heap_control global_hpctrl;
+
 /* Alignment of heap base address and size */
 #define HEAP_BASE_ALIGN 1024

 struct sbi_scratch;

 /** Allocate from heap area */
-void *sbi_malloc(size_t size);
+void *sbi_malloc_from(struct sbi_heap_control *hpctrl, size_t size);
+
+static inline void *sbi_malloc(size_t size)
+{
+    return sbi_malloc_from(&global_hpctrl, size);
+}

 /** Zero allocate from heap area */
-void *sbi_zalloc(size_t size);
+void *sbi_zalloc_from(struct sbi_heap_control *hpctrl, size_t size);
+
+static inline void *sbi_zalloc(size_t size)
+{
+    return sbi_zalloc_from(&global_hpctrl, size);
+}

 /** Allocate array from heap area */
 static inline void *sbi_calloc(size_t nitems, size_t size)
@@ -29,19 +45,48 @@ static inline void *sbi_calloc(size_t nitems, size_t size)
     return sbi_zalloc(nitems * size);
 }

+static inline void *sbi_calloc_from(struct sbi_heap_control *hpctrl,
+                                    size_t nitems, size_t size)
+{
+    return sbi_zalloc_from(hpctrl, nitems * size);
+}
+
 /** Free-up to heap area */
-void sbi_free(void *ptr);
+void sbi_free_from(struct sbi_heap_control *hpctrl, void *ptr);
+
+static inline void sbi_free(void *ptr)
+{
+    return sbi_free_from(&global_hpctrl, ptr);
+}

 /** Amount (in bytes) of free space in the heap area */
-unsigned long sbi_heap_free_space(void);
+unsigned long sbi_heap_free_space_from(struct sbi_heap_control *hpctrl);
+
+static inline unsigned long sbi_heap_free_space(void)
+{
+    return sbi_heap_free_space_from(&global_hpctrl);
+}

 /** Amount (in bytes) of used space in the heap area */
-unsigned long sbi_heap_used_space(void);
+unsigned long sbi_heap_used_space_from(struct sbi_heap_control *hpctrl);
+
+static inline unsigned long sbi_heap_used_space(void)
+{
+    return sbi_heap_used_space_from(&global_hpctrl);
+}

 /** Amount (in bytes) of reserved space in the heap area */
-unsigned long sbi_heap_reserved_space(void);
+unsigned long sbi_heap_reserved_space_from(struct sbi_heap_control *hpctrl);
+
+static inline unsigned long sbi_heap_reserved_space(void)
+{
+    return sbi_heap_reserved_space_from(&global_hpctrl);
+}

 /** Initialize heap area */
 int sbi_heap_init(struct sbi_scratch *scratch);
+int sbi_heap_init_new(struct sbi_heap_control *hpctrl, unsigned long base,
+                      unsigned long size);
+int sbi_heap_alloc_new(struct sbi_heap_control **hpctrl);

 #endif

diff --git a/lib/sbi/sbi_heap.c b/lib/sbi/sbi_heap.c
index bcd404b..e43d77c 100644
--- a/lib/sbi/sbi_heap.c
+++ b/lib/sbi/sbi_heap.c
@@ -24,7 +24,7 @@ struct heap_node {
     unsigned long size;
 };

-struct heap_control {
+struct sbi_heap_control {
     spinlock_t lock;
     unsigned long base;
     unsigned long size;
@@ -35,9 +35,9 @@ struct heap_control {
     struct sbi_dlist used_space_list;
 };

-static struct heap_control hpctrl;
+struct sbi_heap_control global_hpctrl;

-void *sbi_malloc(size_t size)
+void *sbi_malloc_from(struct sbi_heap_control *hpctrl, size_t size)
 {
     void *ret = NULL;
     struct heap_node *n, *np;
@@ -48,10 +48,10 @@ void *sbi_malloc(size_t size)
     size += HEAP_ALLOC_ALIGN - 1;
     size &= ~((unsigned long)HEAP_ALLOC_ALIGN - 1);

-    spin_lock(&hpctrl.lock);
+    spin_lock(&hpctrl->lock);

     np = NULL;
-    sbi_list_for_each_entry(n, &hpctrl.free_space_list, head) {
+    sbi_list_for_each_entry(n, &hpctrl->free_space_list, head) {
         if (size <= n->size) {
             np = n;
             break;
@@ -59,47 +59,47 @@
     }
     if (np) {
         if ((size < np->size) &&
-            !sbi_list_empty(&hpctrl.free_node_list)) {
-            n = sbi_list_first_entry(&hpctrl.free_node_list,
+            !sbi_list_empty(&hpctrl->free_node_list)) {
+            n = sbi_list_first_entry(&hpctrl->free_node_list,
                                      struct heap_node, head);
             sbi_list_del(&n->head);
             n->addr = np->addr + np->size - size;
             n->size = size;
             np->size -= size;
-            sbi_list_add_tail(&n->head, &hpctrl.used_space_list);
+            sbi_list_add_tail(&n->head, &hpctrl->used_space_list);
             ret = (void *)n->addr;
         } else if (size == np->size) {
             sbi_list_del(&np->head);
-            sbi_list_add_tail(&np->head, &hpctrl.used_space_list);
+            sbi_list_add_tail(&np->head, &hpctrl->used_space_list);
             ret = (void *)np->addr;
         }
     }

-    spin_unlock(&hpctrl.lock);
+    spin_unlock(&hpctrl->lock);

     return ret;
 }

-void *sbi_zalloc(size_t size)
+void *sbi_zalloc_from(struct sbi_heap_control *hpctrl, size_t size)
 {
-    void *ret = sbi_malloc(size);
+    void *ret = sbi_malloc_from(hpctrl, size);

     if (ret)
         sbi_memset(ret, 0, size);
     return ret;
 }

-void sbi_free(void *ptr)
+void sbi_free_from(struct sbi_heap_control *hpctrl, void *ptr)
 {
     struct heap_node *n, *np;

     if (!ptr)
         return;

-    spin_lock(&hpctrl.lock);
+    spin_lock(&hpctrl->lock);

     np = NULL;
-    sbi_list_for_each_entry(n, &hpctrl.used_space_list, head) {
+    sbi_list_for_each_entry(n, &hpctrl->used_space_list, head) {
         if ((n->addr <= (unsigned long)ptr) &&
             ((unsigned long)ptr < (n->addr + n->size))) {
             np = n;
@@ -107,22 +107,22 @@
         }
     }
     if (!np) {
-        spin_unlock(&hpctrl.lock);
+        spin_unlock(&hpctrl->lock);
         return;
     }

     sbi_list_del(&np->head);

-    sbi_list_for_each_entry(n, &hpctrl.free_space_list, head) {
+    sbi_list_for_each_entry(n, &hpctrl->free_space_list, head) {
         if ((np->addr + np->size) == n->addr) {
             n->addr = np->addr;
             n->size += np->size;
-            sbi_list_add_tail(&np->head, &hpctrl.free_node_list);
+            sbi_list_add_tail(&np->head, &hpctrl->free_node_list);
             np = NULL;
             break;
         } else if (np->addr == (n->addr + n->size)) {
             n->size += np->size;
-            sbi_list_add_tail(&np->head, &hpctrl.free_node_list);
+            sbi_list_add_tail(&np->head, &hpctrl->free_node_list);
             np = NULL;
             break;
         } else if ((n->addr + n->size) < np->addr) {
@@ -132,73 +132,87 @@ void sbi_free(void *ptr)
         }
     }
     if (np)
-        sbi_list_add_tail(&np->head, &hpctrl.free_space_list);
+        sbi_list_add_tail(&np->head, &hpctrl->free_space_list);

-    spin_unlock(&hpctrl.lock);
+    spin_unlock(&hpctrl->lock);
 }

-unsigned long sbi_heap_free_space(void)
+unsigned long sbi_heap_free_space_from(struct sbi_heap_control *hpctrl)
 {
     struct heap_node *n;
     unsigned long ret = 0;

-    spin_lock(&hpctrl.lock);
-    sbi_list_for_each_entry(n, &hpctrl.free_space_list, head)
+    spin_lock(&hpctrl->lock);
+    sbi_list_for_each_entry(n, &hpctrl->free_space_list, head)
         ret += n->size;
-    spin_unlock(&hpctrl.lock);
+    spin_unlock(&hpctrl->lock);

     return ret;
 }

-unsigned long sbi_heap_used_space(void)
+unsigned long sbi_heap_used_space_from(struct sbi_heap_control *hpctrl)
 {
-    return hpctrl.size - hpctrl.hksize - sbi_heap_free_space();
+    return hpctrl->size - hpctrl->hksize - sbi_heap_free_space();
 }

-unsigned long sbi_heap_reserved_space(void)
+unsigned long sbi_heap_reserved_space_from(struct sbi_heap_control *hpctrl)
 {
-    return hpctrl.hksize;
+    return hpctrl->hksize;
 }

-int sbi_heap_init(struct sbi_scratch *scratch)
+int sbi_heap_init_new(struct sbi_heap_control *hpctrl, unsigned long base,
+                      unsigned long size)
 {
     unsigned long i;
     struct heap_node *n;

-    /* Sanity checks on heap offset and size */
-    if (!scratch->fw_heap_size ||
-        (scratch->fw_heap_size & (HEAP_BASE_ALIGN - 1)) ||
-        (scratch->fw_heap_offset < scratch->fw_rw_offset) ||
-        (scratch->fw_size < (scratch->fw_heap_offset + scratch->fw_heap_size)) ||
-        (scratch->fw_heap_offset & (HEAP_BASE_ALIGN - 1)))
-        return SBI_EINVAL;
-
     /* Initialize heap control */
-    SPIN_LOCK_INIT(hpctrl.lock);
-    hpctrl.base = scratch->fw_start + scratch->fw_heap_offset;
-    hpctrl.size = scratch->fw_heap_size;
-    hpctrl.hkbase = hpctrl.base;
-    hpctrl.hksize = hpctrl.size / HEAP_HOUSEKEEPING_FACTOR;
-    hpctrl.hksize &= ~((unsigned long)HEAP_BASE_ALIGN - 1);
-    SBI_INIT_LIST_HEAD(&hpctrl.free_node_list);
-    SBI_INIT_LIST_HEAD(&hpctrl.free_space_list);
-    SBI_INIT_LIST_HEAD(&hpctrl.used_space_list);
+    SPIN_LOCK_INIT(hpctrl->lock);
+    hpctrl->base = base;
+    hpctrl->size = size;
+    hpctrl->hkbase = hpctrl->base;
+    hpctrl->hksize = hpctrl->size / HEAP_HOUSEKEEPING_FACTOR;
+    hpctrl->hksize &= ~((unsigned long)HEAP_BASE_ALIGN - 1);
+    SBI_INIT_LIST_HEAD(&hpctrl->free_node_list);
+    SBI_INIT_LIST_HEAD(&hpctrl->free_space_list);
+    SBI_INIT_LIST_HEAD(&hpctrl->used_space_list);

     /* Prepare free node list */
-    for (i = 0; i < (hpctrl.hksize / sizeof(*n)); i++) {
-        n = (struct heap_node *)(hpctrl.hkbase + (sizeof(*n) * i));
+    for (i = 0; i < (hpctrl->hksize / sizeof(*n)); i++) {
+        n = (struct heap_node *)(hpctrl->hkbase + (sizeof(*n) * i));
         SBI_INIT_LIST_HEAD(&n->head);
         n->addr = n->size = 0;
-        sbi_list_add_tail(&n->head, &hpctrl.free_node_list);
+        sbi_list_add_tail(&n->head, &hpctrl->free_node_list);
     }

     /* Prepare free space list */
-    n = sbi_list_first_entry(&hpctrl.free_node_list,
+    n = sbi_list_first_entry(&hpctrl->free_node_list,
                              struct heap_node, head);
     sbi_list_del(&n->head);
-    n->addr = hpctrl.hkbase + hpctrl.hksize;
-    n->size = hpctrl.size - hpctrl.hksize;
-    sbi_list_add_tail(&n->head, &hpctrl.free_space_list);
+    n->addr = hpctrl->hkbase + hpctrl->hksize;
+    n->size = hpctrl->size - hpctrl->hksize;
+    sbi_list_add_tail(&n->head, &hpctrl->free_space_list);
+
+    return 0;
+}
+int sbi_heap_init(struct sbi_scratch *scratch)
+{
+    /* Sanity checks on heap offset and size */
+    if (!scratch->fw_heap_size ||
+        (scratch->fw_heap_size & (HEAP_BASE_ALIGN - 1)) ||
+        (scratch->fw_heap_offset < scratch->fw_rw_offset) ||
+        (scratch->fw_size < (scratch->fw_heap_offset + scratch->fw_heap_size)) ||
+        (scratch->fw_heap_offset & (HEAP_BASE_ALIGN - 1)))
+        return SBI_EINVAL;
+
+    return sbi_heap_init_new(&global_hpctrl,
+                             scratch->fw_start + scratch->fw_heap_offset,
+                             scratch->fw_heap_size);
+}
+
+int sbi_heap_alloc_new(struct sbi_heap_control **hpctrl)
+{
+    *hpctrl = sbi_calloc(1, sizeof(struct sbi_heap_control));

     return 0;
 }
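To show how the pieces fit together, here is a hedged usage sketch (not part of the patch): a hypothetical table_heap_setup() binds a dedicated heap to a reserved-memory region whose base and size would come from the device tree, then allocates one table from it. The function name, variable names, and the 4 KiB allocation size below are illustrative assumptions only.

#include <sbi/sbi_error.h>
#include <sbi/sbi_heap.h>

/* Hypothetical handle for the dedicated table heap */
static struct sbi_heap_control *table_hpctrl;

static int table_heap_setup(unsigned long base, unsigned long size)
{
    void *table;
    int rc;

    /* Carve the control structure itself out of the global heap */
    rc = sbi_heap_alloc_new(&table_hpctrl);
    if (rc)
        return rc;
    if (!table_hpctrl)
        return SBI_ENOMEM;

    /* Bind the new heap to the reserved region (base/size from the DT node) */
    rc = sbi_heap_init_new(table_hpctrl, base, size);
    if (rc)
        return rc;

    /* Allocations now come from the table region, not the firmware heap */
    table = sbi_zalloc_from(table_hpctrl, 4096);

    return table ? 0 : SBI_ENOMEM;
}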
From patchwork Fri Aug 9 03:16:37 2024
X-Patchwork-Submitter: Gregor Haas
X-Patchwork-Id: 1970788
From: Gregor Haas
To: opensbi@lists.infradead.org
Cc: atishp@rivosinc.com, anup@brainfault.org, jrtc27@jrtc27.com, Gregor Haas
Subject: [PATCH v4 2/3] lib: sbi: Allocate from beginning of heap blocks
Date: Thu, 8 Aug 2024 20:16:37 -0700
Message-ID: <20240809031638.89146-3-gregorhaas1997@gmail.com>
In-Reply-To: <20240809031638.89146-1-gregorhaas1997@gmail.com>
References: <20240809031638.89146-1-gregorhaas1997@gmail.com>

In the next commit, we'll add a new sbi_aligned_alloc() function. In
order to allocate aligned memory, we'll sometimes need to allocate from
the middle of a heap block, effectively splitting it in two. Allocating
from the beginning of a heap block in the non-aligned case more closely
matches this behavior, which keeps the heap implementation easier to
follow.

Signed-off-by: Gregor Haas
Reviewed-by: Anup Patel
---
 lib/sbi/sbi_heap.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/lib/sbi/sbi_heap.c b/lib/sbi/sbi_heap.c
index e43d77c..cc4893d 100644
--- a/lib/sbi/sbi_heap.c
+++ b/lib/sbi/sbi_heap.c
@@ -63,8 +63,9 @@ void *sbi_malloc_from(struct sbi_heap_control *hpctrl, size_t size)
             n = sbi_list_first_entry(&hpctrl->free_node_list,
                                      struct heap_node, head);
             sbi_list_del(&n->head);
-            n->addr = np->addr + np->size - size;
+            n->addr = np->addr;
             n->size = size;
+            np->addr += size;
             np->size -= size;
             sbi_list_add_tail(&n->head, &hpctrl->used_space_list);
             ret = (void *)n->addr;
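For illustration only (not part of the patch, and using made-up numbers), the standalone snippet below shows how a free block is split for a non-aligned request before and after this change: the allocation now comes from the front of the block and the remaining free space starts at a higher address, which is the same shape an aligned allocation later produces when it splits a block.

#include <stdio.h>

int main(void)
{
    /* Example free block and request size; the values are arbitrary */
    unsigned long blk_addr = 0x80040000UL, blk_size = 0x4000UL;
    unsigned long req = 0x1000UL;

    /* Before this change: the allocation was carved from the end of the block */
    unsigned long old_alloc = blk_addr + blk_size - req;

    /* After this change: the allocation is carved from the front, and the
     * free block shrinks by advancing its base address (np->addr += size) */
    unsigned long new_alloc = blk_addr;
    unsigned long new_free_addr = blk_addr + req;
    unsigned long new_free_size = blk_size - req;

    printf("old: alloc @ %#lx, free space [%#lx, %#lx)\n",
           old_alloc, blk_addr, blk_addr + blk_size - req);
    printf("new: alloc @ %#lx, free space [%#lx, %#lx)\n",
           new_alloc, new_free_addr, new_free_addr + new_free_size);
    return 0;
}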
From patchwork Fri Aug 9 03:16:38 2024
X-Patchwork-Submitter: Gregor Haas
X-Patchwork-Id: 1970790
From: Gregor Haas
To: opensbi@lists.infradead.org
Cc: atishp@rivosinc.com, anup@brainfault.org, jrtc27@jrtc27.com, Gregor Haas
Subject: [PATCH v4 3/3] lib: sbi: Implement aligned memory allocators
Date: Thu, 8 Aug 2024 20:16:38 -0700
Message-ID: <20240809031638.89146-4-gregorhaas1997@gmail.com>
In-Reply-To: <20240809031638.89146-1-gregorhaas1997@gmail.com>
References: <20240809031638.89146-1-gregorhaas1997@gmail.com>

This change adds a simple implementation of sbi_aligned_alloc(), for
future use in allocating aligned memory for SMMTT tables.
Signed-off-by: Gregor Haas
Reviewed-by: Anup Patel
---
 include/sbi/sbi_heap.h |  9 +++++
 lib/sbi/sbi_heap.c     | 75 ++++++++++++++++++++++++++++++++++++++----
 2 files changed, 78 insertions(+), 6 deletions(-)

diff --git a/include/sbi/sbi_heap.h b/include/sbi/sbi_heap.h
index 9a67090..a4b3f0c 100644
--- a/include/sbi/sbi_heap.h
+++ b/include/sbi/sbi_heap.h
@@ -31,6 +31,15 @@ static inline void *sbi_malloc(size_t size)
     return sbi_malloc_from(&global_hpctrl, size);
 }

+/** Allocate aligned from heap area */
+void *sbi_aligned_alloc_from(struct sbi_heap_control *hpctrl,
+                             size_t alignment, size_t size);
+
+static inline void *sbi_aligned_alloc(size_t alignment, size_t size)
+{
+    return sbi_aligned_alloc_from(&global_hpctrl, alignment, size);
+}
+
 /** Zero allocate from heap area */
 void *sbi_zalloc_from(struct sbi_heap_control *hpctrl, size_t size);

diff --git a/lib/sbi/sbi_heap.c b/lib/sbi/sbi_heap.c
index cc4893d..6d08e44 100644
--- a/lib/sbi/sbi_heap.c
+++ b/lib/sbi/sbi_heap.c
@@ -37,27 +37,67 @@ struct sbi_heap_control {

 struct sbi_heap_control global_hpctrl;

-void *sbi_malloc_from(struct sbi_heap_control *hpctrl, size_t size)
+static void *alloc_with_align(struct sbi_heap_control *hpctrl,
+                              size_t align, size_t size)
 {
     void *ret = NULL;
-    struct heap_node *n, *np;
+    struct heap_node *n, *np, *rem;
+    unsigned long lowest_aligned;
+    size_t pad;

     if (!size)
         return NULL;

-    size += HEAP_ALLOC_ALIGN - 1;
-    size &= ~((unsigned long)HEAP_ALLOC_ALIGN - 1);
+    size += align - 1;
+    size &= ~((unsigned long)align - 1);

     spin_lock(&hpctrl->lock);

     np = NULL;
     sbi_list_for_each_entry(n, &hpctrl->free_space_list, head) {
-        if (size <= n->size) {
+        lowest_aligned = ROUNDUP(n->addr, align);
+        pad = lowest_aligned - n->addr;
+
+        if (size + pad <= n->size) {
             np = n;
             break;
         }
     }
-    if (np) {
+    if (!np)
+        goto out;
+
+    if (pad) {
+        if (sbi_list_empty(&hpctrl->free_node_list)) {
+            goto out;
+        }
+
+        n = sbi_list_first_entry(&hpctrl->free_node_list,
+                                 struct heap_node, head);
+        sbi_list_del(&n->head);
+
+        if ((size + pad < np->size) &&
+            !sbi_list_empty(&hpctrl->free_node_list)) {
+            rem = sbi_list_first_entry(&hpctrl->free_node_list,
+                                       struct heap_node, head);
+            sbi_list_del(&rem->head);
+            rem->addr = np->addr + (size + pad);
+            rem->size = np->size - (size + pad);
+            sbi_list_add_tail(&rem->head,
+                              &hpctrl->free_space_list);
+        } else if (size + pad != np->size) {
+            /* Can't allocate, return n */
+            sbi_list_add(&n->head, &hpctrl->free_node_list);
+            ret = NULL;
+            goto out;
+        }
+
+        n->addr = lowest_aligned;
+        n->size = size;
+        sbi_list_add_tail(&n->head, &hpctrl->used_space_list);
+
+        np->size = pad;
+        ret = (void *)n->addr;
+    } else {
         if ((size < np->size) &&
             !sbi_list_empty(&hpctrl->free_node_list)) {
             n = sbi_list_first_entry(&hpctrl->free_node_list,
@@ -76,11 +116,34 @@ void *sbi_malloc_from(struct sbi_heap_control *hpctrl, size_t size)
         }
     }

+out:
     spin_unlock(&hpctrl->lock);

     return ret;
 }

+void *sbi_malloc_from(struct sbi_heap_control *hpctrl, size_t size)
+{
+    return alloc_with_align(hpctrl, HEAP_ALLOC_ALIGN, size);
+}
+
+void *sbi_aligned_alloc_from(struct sbi_heap_control *hpctrl,
+                             size_t alignment, size_t size)
+{
+    if (alignment < HEAP_ALLOC_ALIGN)
+        alignment = HEAP_ALLOC_ALIGN;
+
+    /* Make sure alignment is power of two */
+    if ((alignment & (alignment - 1)) != 0)
+        return NULL;
+
+    /* Make sure size is multiple of alignment */
+    if (size % alignment != 0)
+        return NULL;
+
+    return alloc_with_align(hpctrl, alignment, size);
+}
+
 void *sbi_zalloc_from(struct sbi_heap_control *hpctrl, size_t size)
 {
     void *ret = sbi_malloc_from(hpctrl, size);
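A hedged usage sketch of the new interface (not part of this series) follows; the 64 KiB size and alignment are arbitrary example values, not an SMMTT requirement.

#include <sbi/sbi_heap.h>

/* Illustrative helper: allocate one 64 KiB table at 64 KiB alignment */
static void *alloc_example_table(struct sbi_heap_control *hpctrl)
{
    /*
     * The alignment must be a power of two (values smaller than
     * HEAP_ALLOC_ALIGN are raised to it), and the size must be a multiple
     * of the alignment; otherwise the allocator returns NULL instead of
     * rounding up.
     */
    return sbi_aligned_alloc_from(hpctrl, 0x10000, 0x10000);
}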