From patchwork Mon Sep  5 09:28:23 2016
From: Thomas Huth
To: slof@lists.ozlabs.org
Date: Mon,  5 Sep 2016 11:28:23 +0200
Message-Id: <1473067703-10746-1-git-send-email-thuth@redhat.com>
Subject: [SLOF] [PATCH] Improve SLOF_alloc_mem_aligned()
When loading a file via the spapr-vlan network interface, SLOF currently
leaks quite a lot of memory, about 12 kB each time. This happens mainly
because veth_init() obtains its buffers with calls like
SLOF_alloc_mem_aligned(8192, 4096), and the extra space allocated for
the alignment is never returned later.

An easy way to ease this situation is to improve
SLOF_alloc_mem_aligned() a little: memory from SLOF_alloc_mem() is
normally aligned quite well already, thanks to the buddy allocator in
SLOF. So first try to get a memory block of the exact size, and only do
the alignment dance if that first allocation was not sufficiently
aligned.

Signed-off-by: Thomas Huth
---
 slof/helpers.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/slof/helpers.c b/slof/helpers.c
index 48c34a6..b049141 100644
--- a/slof/helpers.c
+++ b/slof/helpers.c
@@ -69,9 +69,14 @@ void *SLOF_alloc_mem(long size)
 
 void *SLOF_alloc_mem_aligned(long size, long align)
 {
-	unsigned long addr = (unsigned long)SLOF_alloc_mem(size + align - 1);
-	addr = addr + align - 1;
-	addr = addr & ~(align - 1);
+	unsigned long addr = (unsigned long)SLOF_alloc_mem(size);
+
+	if (addr % align) {
+		SLOF_free_mem((void *)addr, size);
+		addr = (unsigned long)SLOF_alloc_mem(size + align - 1);
+		addr = addr + align - 1;
+		addr = addr & ~(align - 1);
+	}
 	return (void *)addr;
 }