From patchwork Mon Jun 15 14:40:23 2020
X-Patchwork-Submitter: Richard Earnshaw
X-Patchwork-Id: 1309558
From: Richard Earnshaw
To: libc-alpha@sourceware.org
Cc: Richard Earnshaw
Subject: [PATCH 1/7] config: Allow memory tagging to be enabled when configuring glibc
Date: Mon, 15 Jun 2020 15:40:23 +0100
Message-Id: <20200615144029.19771-2-rearnsha@arm.com>
In-Reply-To: <20200615144029.19771-1-rearnsha@arm.com>
References: <20200615144029.19771-1-rearnsha@arm.com>

This patch adds the configuration machinery to allow memory tagging to
be enabled from the command line via the configure option
--enable-memory-tagging.

The current default is off, though in time we may change that once the
API is more stable.
---
 config.h.in         |  3 +++
 config.make.in      |  2 ++
 configure           | 17 +++++++++++++++++
 configure.ac        | 10 ++++++++++
 manual/install.texi | 13 +++++++++++++
 5 files changed, 45 insertions(+)

diff --git a/config.h.in b/config.h.in
index dea43df438..b7f37e8e2a 100644
--- a/config.h.in
+++ b/config.h.in
@@ -148,6 +148,9 @@
 /* Define if __stack_chk_guard canary should be randomized at program startup.
*/ #undef ENABLE_STACKGUARD_RANDOMIZE +/* Define if memory tagging support should be enabled. */ +#undef _LIBC_MTAG + /* Package description. */ #undef PKGVERSION diff --git a/config.make.in b/config.make.in index 2fed3da773..b3be8e0381 100644 --- a/config.make.in +++ b/config.make.in @@ -84,6 +84,8 @@ mach-interface-list = @mach_interface_list@ experimental-malloc = @experimental_malloc@ +memory-tagging = @memory_tagging@ + nss-crypt = @libc_cv_nss_crypt@ static-nss-crypt = @libc_cv_static_nss_crypt@ diff --git a/configure b/configure index 8df47d61f8..92854715e0 100755 --- a/configure +++ b/configure @@ -678,6 +678,7 @@ link_obsolete_rpc libc_cv_static_nss_crypt libc_cv_nss_crypt build_crypt +memory_tagging experimental_malloc enable_werror all_warnings @@ -783,6 +784,7 @@ enable_all_warnings enable_werror enable_multi_arch enable_experimental_malloc +enable_memory_tagging enable_crypt enable_nss_crypt enable_obsolete_rpc @@ -1454,6 +1456,8 @@ Optional Features: architectures --disable-experimental-malloc disable experimental malloc features + --enable-memory-tagging enable memory tagging if supported by the + architecture [default=no] --disable-crypt do not build nor install the passphrase hashing library, libcrypt --enable-nss-crypt enable libcrypt to use nss @@ -3527,6 +3531,19 @@ fi +# Check whether --enable-memory-tagging was given. +if test "${enable_memory_tagging+set}" = set; then : + enableval=$enable_memory_tagging; memory_tagging=$enableval +else + memory_tagging=no +fi + +if test "$memory_tagging" = yes; then + $as_echo "#define _LIBC_MTAG 1" >>confdefs.h + +fi + + # Check whether --enable-crypt was given. if test "${enable_crypt+set}" = set; then : enableval=$enable_crypt; build_crypt=$enableval diff --git a/configure.ac b/configure.ac index 5f229679a9..307a32d94b 100644 --- a/configure.ac +++ b/configure.ac @@ -311,6 +311,16 @@ AC_ARG_ENABLE([experimental-malloc], [experimental_malloc=yes]) AC_SUBST(experimental_malloc) +AC_ARG_ENABLE([memory-tagging], + AC_HELP_STRING([--enable-memory-tagging], + [enable memory tagging if supported by the architecture @<:@default=no@:>@]), + [memory_tagging=$enableval], + [memory_tagging=no]) +if test "$memory_tagging" = yes; then + AC_DEFINE(_LIBC_MTAG) +fi +AC_SUBST(memory_tagging) + AC_ARG_ENABLE([crypt], AC_HELP_STRING([--disable-crypt], [do not build nor install the passphrase hashing library, libcrypt]), diff --git a/manual/install.texi b/manual/install.texi index 71bf47cac6..abf7197090 100644 --- a/manual/install.texi +++ b/manual/install.texi @@ -167,6 +167,19 @@ on non-CET processors. @option{--enable-cet} has been tested for x86_64 and x32 on CET SDVs, but Intel CET support hasn't been validated for i686. +@item --enable-memory-tagging +Enable memory tagging support on architectures that support it. When +@theglibc{} is built with this option then the resulting library will +be able to control the use of tagged memory when hardware support is +present by use of the tunable @samp{glibc.memtag.enable}. This includes +the generation of tagged memory when using the @code{malloc} APIs. + +At present only AArch64 platforms with MTE provide this functionality, +although the library will still operate (without memory tagging) on +older versions of the architecture. + +The default is to disable support for memory tagging. + @item --disable-profile Don't build libraries with profiling information. You may want to use this option if you don't plan to do profiling. 
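As a usage sketch (the build paths here are illustrative, not part of
the patch), the new option is passed to configure in the usual way:

    $ mkdir build && cd build
    $ ../glibc/configure --prefix=/usr --enable-memory-tagging
    $ make && make check

With the option omitted, or with --disable-memory-tagging (the form
autoconf generates automatically), the behaviour is unchanged, matching
the default described above.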
From patchwork Mon Jun 15 14:40:24 2020
X-Patchwork-Submitter: Richard Earnshaw
X-Patchwork-Id: 1309554
From: Richard Earnshaw
To: libc-alpha@sourceware.org
Cc: Richard Earnshaw
Subject: [PATCH 2/7] elf: Add a tunable to control use of tagged memory
Date: Mon, 15 Jun 2020 15:40:24 +0100
Message-Id: <20200615144029.19771-3-rearnsha@arm.com>
In-Reply-To: <20200615144029.19771-1-rearnsha@arm.com>
References: <20200615144029.19771-1-rearnsha@arm.com>

Add a new glibc tunable: mtag.enable, bound to the environment variable
_MTAG_ENABLE.  This is a decimal constant in the range 0-255 but used
as a bit-field.

Bit 0 enables use of tagged memory in the malloc family of functions.
Bit 1 enables precise faulting of tag failure on platforms where this
can be controlled.  Other bits are currently unused, but if set will
cause memory tag checking for the current process to be enabled in the
kernel.
--- elf/dl-tunables.list | 9 +++++++++ manual/tunables.texi | 31 +++++++++++++++++++++++++++++++ 2 files changed, 40 insertions(+) diff --git a/elf/dl-tunables.list b/elf/dl-tunables.list index 0d398dd251..92b6f21b44 100644 --- a/elf/dl-tunables.list +++ b/elf/dl-tunables.list @@ -126,4 +126,13 @@ glibc { default: 3 } } + + memtag { + enable { + type: INT_32 + minval: 0 + maxval: 255 + env_alias: _MTAG_ENABLE + } + } } diff --git a/manual/tunables.texi b/manual/tunables.texi index ec18b10834..428a0918e6 100644 --- a/manual/tunables.texi +++ b/manual/tunables.texi @@ -35,6 +35,8 @@ their own namespace. * POSIX Thread Tunables:: Tunables in the POSIX thread subsystem * Hardware Capability Tunables:: Tunables that modify the hardware capabilities seen by @theglibc{} +* Memory Tagging Tunables:: Tunables that control the use of hardware + memory tagging @end menu @node Tunable names @@ -423,3 +425,32 @@ instead. This tunable is specific to i386 and x86-64. @end deftp + +@node Memory Tagging Tunables +@section Memory Tagging Tunables +@cindex memory tagging tunables + +@deftp {Tunable namespace} glibc.memtag +If the hardware supports memory tagging, these tunables can be used to +control the way @theglibc{} uses this feature. Currently, only AArch64 +supports this feature. +@end deftp + +@deftp Tunable glibc.memtag.enable +This tunable takes a value between 0 and 255 and acts as a bitmask +that enables various capabilities. + +Bit 0 (the least significant bit) causes the malloc subsystem to allocate +tagged memory, with each allocation being assigned a random tag. + +Bit 1 enables precise faulting mode for tag violations on systems that +support deferred tag violation reporting. This may cause programs +to run more slowly. + +Other bits are currently reserved. + +@Theglibc{} startup code will automatically enable memory tagging +support in the kernel if this tunable has any non-zero value. + +The default value is @samp{0}, which disables all memory tagging. 
+@end deftp

From patchwork Mon Jun 15 14:40:25 2020
X-Patchwork-Submitter: Richard Earnshaw
X-Patchwork-Id: 1309556
From: Richard Earnshaw
To: libc-alpha@sourceware.org
Cc: Richard Earnshaw
Subject: [PATCH 3/7] malloc: Basic support for memory tagging in the malloc() family
Date: Mon, 15 Jun 2020 15:40:25 +0100
Message-Id: <20200615144029.19771-4-rearnsha@arm.com>
In-Reply-To: <20200615144029.19771-1-rearnsha@arm.com>
References: <20200615144029.19771-1-rearnsha@arm.com>

This patch adds the basic support for memory tagging.  Various flavours
are supported, in particular the ability to turn tagged memory on at
run time: this allows the same code to be used on systems where memory
tagging support is not present, without needing a separate build of
glibc.  Also, depending on whether the kernel supports it, the code
will use mmap for the default arena if morecore cannot supply tagged
memory (on AArch64, memory obtained via sbrk cannot be tagged).

All the hooks use function pointers to allow this to work without
needing ifuncs.
malloc: allow memory tagging to be controlled from a tunable --- malloc/arena.c | 42 ++++++++- malloc/malloc.c | 171 ++++++++++++++++++++++++++++-------- malloc/malloc.h | 7 ++ sysdeps/generic/libc-mtag.h | 52 +++++++++++ 4 files changed, 233 insertions(+), 39 deletions(-) create mode 100644 sysdeps/generic/libc-mtag.h diff --git a/malloc/arena.c b/malloc/arena.c index cecdb7f4c4..e306028c9a 100644 --- a/malloc/arena.c +++ b/malloc/arena.c @@ -274,17 +274,36 @@ next_env_entry (char ***position) #endif -#ifdef SHARED +#if defined(SHARED) || defined(_LIBC_MTAG) static void * __failing_morecore (ptrdiff_t d) { return (void *) MORECORE_FAILURE; } +#endif +#ifdef SHARED extern struct dl_open_hook *_dl_open_hook; libc_hidden_proto (_dl_open_hook); #endif +#ifdef _LIBC_MTAG +static void * +__mtag_tag_new_usable (void *ptr) +{ + if (ptr) + ptr = __libc_mtag_tag_region (__libc_mtag_new_tag (ptr), + __malloc_usable_size (ptr)); + return ptr; +} + +static void * +__mtag_tag_new_memset (void *ptr, int val, size_t size) +{ + return __libc_mtag_memset_with_tag (__libc_mtag_new_tag (ptr), val, size); +} +#endif + static void ptmalloc_init (void) { @@ -293,6 +312,23 @@ ptmalloc_init (void) __malloc_initialized = 0; +#ifdef _LIBC_MTAG + if ((TUNABLE_GET_FULL (glibc, memtag, enable, int32_t, NULL) & 1) != 0) + { + /* If the environment says that we should be using tagged memory + and that morecore does not support tagged regions, then + disable it. */ + if (__MTAG_SBRK_UNTAGGED) + __morecore = __failing_morecore; + + __mtag_mmap_flags = __MTAG_MMAP_FLAGS; + __tag_new_memset = __mtag_tag_new_memset; + __tag_region = __libc_mtag_tag_region; + __tag_new_usable = __mtag_tag_new_usable; + __tag_at = __libc_mtag_address_get_tag; + } +#endif + #ifdef SHARED /* In case this libc copy is in a non-default namespace, never use brk. Likewise if dlopened from statically linked program. */ @@ -512,7 +548,7 @@ new_heap (size_t size, size_t top_pad) } } } - if (__mprotect (p2, size, PROT_READ | PROT_WRITE) != 0) + if (__mprotect (p2, size, MTAG_MMAP_FLAGS | PROT_READ | PROT_WRITE) != 0) { __munmap (p2, HEAP_MAX_SIZE); return 0; @@ -542,7 +578,7 @@ grow_heap (heap_info *h, long diff) { if (__mprotect ((char *) h + h->mprotect_size, (unsigned long) new_size - h->mprotect_size, - PROT_READ | PROT_WRITE) != 0) + MTAG_MMAP_FLAGS | PROT_READ | PROT_WRITE) != 0) return -2; h->mprotect_size = new_size; diff --git a/malloc/malloc.c b/malloc/malloc.c index 1282863681..2e6668f9bd 100644 --- a/malloc/malloc.c +++ b/malloc/malloc.c @@ -242,6 +242,9 @@ /* For DIAG_PUSH/POP_NEEDS_COMMENT et al. */ #include +/* For memory tagging. */ +#include + #include /* For SINGLE_THREAD_P. */ @@ -279,6 +282,18 @@ #define MALLOC_DEBUG 0 #endif +/* When using tagged memory, we cannot share the end of the user block + with the header for the next chunk, so ensure that we allocate + blocks that are rounded up to the granule size. Take care not to + overflow from close to MAX_SIZE_T to a small number. 
*/ +static inline size_t +ROUND_UP_ALLOCATION_SIZE(size_t bytes) +{ + if ((bytes & (__MTAG_GRANULE_SIZE - 1)) != 0) + return bytes | (__MTAG_GRANULE_SIZE - 1); + return bytes; +} + #ifndef NDEBUG # define __assert_fail(assertion, file, line, function) \ __malloc_assert(assertion, file, line, function) @@ -378,6 +393,36 @@ __malloc_assert (const char *assertion, const char *file, unsigned int line, void * __default_morecore (ptrdiff_t); void *(*__morecore)(ptrdiff_t) = __default_morecore; +#ifdef _LIBC_MTAG +static void * +__default_tag_region (void *ptr, size_t size) +{ + return ptr; +} + +static void * +__default_tag_nop (void *ptr) +{ + return ptr; +} + +int __mtag_mmap_flags = 0; + +void *(*__tag_new_memset)(void *, int, size_t) = memset; +void *(*__tag_region)(void *, size_t) = __default_tag_region; +void *(*__tag_new_usable)(void *) = __default_tag_nop; +void *(*__tag_at)(void *) = __default_tag_nop; + +# define TAG_NEW_MEMSET(ptr, val, size) __tag_new_memset (ptr, val, size) +# define TAG_REGION(ptr, size) __tag_region (ptr, size) +# define TAG_NEW_USABLE(ptr) __tag_new_usable (ptr) +# define TAG_AT(ptr) __tag_at (ptr) +#else +# define TAG_NEW_MEMSET(ptr, val, size) memset (ptr, val, size) +# define TAG_REGION(ptr, size) (ptr) +# define TAG_NEW_USABLE(ptr) (ptr) +# define TAG_AT(ptr) (ptr) +#endif #include @@ -1184,8 +1229,9 @@ nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ /* conversion from malloc headers to user pointers, and back */ -#define chunk2mem(p) ((void*)((char*)(p) + 2*SIZE_SZ)) -#define mem2chunk(mem) ((mchunkptr)((char*)(mem) - 2*SIZE_SZ)) +#define chunk2mem(p) ((void*)TAG_AT (((char*)(p) + 2*SIZE_SZ))) +#define chunk2rawmem(p) ((void*)((char*)(p) + 2*SIZE_SZ)) +#define mem2chunk(mem) ((mchunkptr)TAG_AT (((char*)(mem) - 2*SIZE_SZ))) /* The smallest possible chunk */ #define MIN_CHUNK_SIZE (offsetof(struct malloc_chunk, fd_nextsize)) @@ -1964,7 +2010,7 @@ do_check_chunk (mstate av, mchunkptr p) /* chunk is page-aligned */ assert (((prev_size (p) + sz) & (GLRO (dl_pagesize) - 1)) == 0); /* mem is aligned */ - assert (aligned_OK (chunk2mem (p))); + assert (aligned_OK (chunk2rawmem (p))); } } @@ -1988,7 +2034,7 @@ do_check_free_chunk (mstate av, mchunkptr p) if ((unsigned long) (sz) >= MINSIZE) { assert ((sz & MALLOC_ALIGN_MASK) == 0); - assert (aligned_OK (chunk2mem (p))); + assert (aligned_OK (chunk2rawmem (p))); /* ... matching footer field */ assert (prev_size (next_chunk (p)) == sz); /* ... and is fully consolidated */ @@ -2067,7 +2113,7 @@ do_check_remalloced_chunk (mstate av, mchunkptr p, INTERNAL_SIZE_T s) assert ((sz & MALLOC_ALIGN_MASK) == 0); assert ((unsigned long) (sz) >= MINSIZE); /* ... and alignment */ - assert (aligned_OK (chunk2mem (p))); + assert (aligned_OK (chunk2rawmem (p))); /* chunk is less than MINSIZE more than request */ assert ((long) (sz) - (long) (s) >= 0); assert ((long) (sz) - (long) (s + MINSIZE) < 0); @@ -2322,7 +2368,8 @@ sysmalloc (INTERNAL_SIZE_T nb, mstate av) /* Don't try if size wraps around 0 */ if ((unsigned long) (size) > (unsigned long) (nb)) { - mm = (char *) (MMAP (0, size, PROT_READ | PROT_WRITE, 0)); + mm = (char *) (MMAP (0, size, + MTAG_MMAP_FLAGS | PROT_READ | PROT_WRITE, 0)); if (mm != MAP_FAILED) { @@ -2336,14 +2383,14 @@ sysmalloc (INTERNAL_SIZE_T nb, mstate av) if (MALLOC_ALIGNMENT == 2 * SIZE_SZ) { - /* For glibc, chunk2mem increases the address by 2*SIZE_SZ and + /* For glibc, chunk2rawmem increases the address by 2*SIZE_SZ and MALLOC_ALIGN_MASK is 2*SIZE_SZ-1. 
Each mmap'ed area is page aligned and therefore definitely MALLOC_ALIGN_MASK-aligned. */ - assert (((INTERNAL_SIZE_T) chunk2mem (mm) & MALLOC_ALIGN_MASK) == 0); + assert (((INTERNAL_SIZE_T) chunk2rawmem (mm) & MALLOC_ALIGN_MASK) == 0); front_misalign = 0; } else - front_misalign = (INTERNAL_SIZE_T) chunk2mem (mm) & MALLOC_ALIGN_MASK; + front_misalign = (INTERNAL_SIZE_T) chunk2rawmem (mm) & MALLOC_ALIGN_MASK; if (front_misalign > 0) { correction = MALLOC_ALIGNMENT - front_misalign; @@ -2515,7 +2562,9 @@ sysmalloc (INTERNAL_SIZE_T nb, mstate av) /* Don't try if size wraps around 0 */ if ((unsigned long) (size) > (unsigned long) (nb)) { - char *mbrk = (char *) (MMAP (0, size, PROT_READ | PROT_WRITE, 0)); + char *mbrk = (char *) (MMAP (0, size, + MTAG_MMAP_FLAGS | PROT_READ | PROT_WRITE, + 0)); if (mbrk != MAP_FAILED) { @@ -2586,7 +2635,7 @@ sysmalloc (INTERNAL_SIZE_T nb, mstate av) /* Guarantee alignment of first new chunk made from this space */ - front_misalign = (INTERNAL_SIZE_T) chunk2mem (brk) & MALLOC_ALIGN_MASK; + front_misalign = (INTERNAL_SIZE_T) chunk2rawmem (brk) & MALLOC_ALIGN_MASK; if (front_misalign > 0) { /* @@ -2644,10 +2693,10 @@ sysmalloc (INTERNAL_SIZE_T nb, mstate av) { if (MALLOC_ALIGNMENT == 2 * SIZE_SZ) /* MORECORE/mmap must correctly align */ - assert (((unsigned long) chunk2mem (brk) & MALLOC_ALIGN_MASK) == 0); + assert (((unsigned long) chunk2rawmem (brk) & MALLOC_ALIGN_MASK) == 0); else { - front_misalign = (INTERNAL_SIZE_T) chunk2mem (brk) & MALLOC_ALIGN_MASK; + front_misalign = (INTERNAL_SIZE_T) chunk2rawmem (brk) & MALLOC_ALIGN_MASK; if (front_misalign > 0) { /* @@ -2832,7 +2881,7 @@ munmap_chunk (mchunkptr p) if (DUMPED_MAIN_ARENA_CHUNK (p)) return; - uintptr_t mem = (uintptr_t) chunk2mem (p); + uintptr_t mem = (uintptr_t) chunk2rawmem (p); uintptr_t block = (uintptr_t) p - prev_size (p); size_t total_size = prev_size (p) + size; /* Unfortunately we have to do the compilers job by hand here. Normally @@ -2887,7 +2936,7 @@ mremap_chunk (mchunkptr p, size_t new_size) p = (mchunkptr) (cp + offset); - assert (aligned_OK (chunk2mem (p))); + assert (aligned_OK (chunk2rawmem (p))); assert (prev_size (p) == offset); set_head (p, (new_size - offset) | IS_MMAPPED); @@ -3051,6 +3100,7 @@ __libc_malloc (size_t bytes) = atomic_forced_read (__malloc_hook); if (__builtin_expect (hook != NULL, 0)) return (*hook)(bytes, RETURN_ADDRESS (0)); + bytes = ROUND_UP_ALLOCATION_SIZE (bytes); #if USE_TCACHE /* int_free also calls request2size, be careful to not pad twice. */ size_t tbytes; @@ -3068,14 +3118,15 @@ __libc_malloc (size_t bytes) && tcache && tcache->counts[tc_idx] > 0) { - return tcache_get (tc_idx); + victim = tcache_get (tc_idx); + return TAG_NEW_USABLE (victim); } DIAG_POP_NEEDS_COMMENT; #endif if (SINGLE_THREAD_P) { - victim = _int_malloc (&main_arena, bytes); + victim = TAG_NEW_USABLE (_int_malloc (&main_arena, bytes)); assert (!victim || chunk_is_mmapped (mem2chunk (victim)) || &main_arena == arena_for_chunk (mem2chunk (victim))); return victim; @@ -3096,6 +3147,8 @@ __libc_malloc (size_t bytes) if (ar_ptr != NULL) __libc_lock_unlock (ar_ptr->mutex); + victim = TAG_NEW_USABLE (victim); + assert (!victim || chunk_is_mmapped (mem2chunk (victim)) || ar_ptr == arena_for_chunk (mem2chunk (victim))); return victim; @@ -3119,8 +3172,17 @@ __libc_free (void *mem) if (mem == 0) /* free(0) has no effect */ return; +#ifdef _LIBC_MTAG + /* Quickly check that the freed pointer matches the tag for the memory. + This gives a useful double-free detection. 
*/ + *(volatile char *)mem; +#endif + p = mem2chunk (mem); + /* Mark the chunk as belonging to the library again. */ + (void)TAG_REGION (chunk2rawmem (p), __malloc_usable_size (mem)); + if (chunk_is_mmapped (p)) /* release mmapped memory. */ { /* See if the dynamic brk/mmap threshold needs adjusting. @@ -3170,6 +3232,13 @@ __libc_realloc (void *oldmem, size_t bytes) if (oldmem == 0) return __libc_malloc (bytes); + bytes = ROUND_UP_ALLOCATION_SIZE (bytes); +#ifdef _LIBC_MTAG + /* Perform a quick check to ensure that the pointer's tag matches the + memory's tag. */ + *(volatile char*) oldmem; +#endif + /* chunk corresponding to oldmem */ const mchunkptr oldp = mem2chunk (oldmem); /* its size */ @@ -3225,7 +3294,15 @@ __libc_realloc (void *oldmem, size_t bytes) #if HAVE_MREMAP newp = mremap_chunk (oldp, nb); if (newp) - return chunk2mem (newp); + { + void *newmem = chunk2rawmem (newp); + /* Give the new block a different tag. This helps to ensure + that stale handles to the previous mapping are not + reused. There's a performance hit for both us and the + caller for doing this, so we might want to + reconsider. */ + return TAG_NEW_USABLE (newmem); + } #endif /* Note the extra SIZE_SZ overhead. */ if (oldsize - SIZE_SZ >= nb) @@ -3308,7 +3385,7 @@ _mid_memalign (size_t alignment, size_t bytes, void *address) return 0; } - + bytes = ROUND_UP_ALLOCATION_SIZE (bytes); /* Make sure alignment is power of 2. */ if (!powerof2 (alignment)) { @@ -3323,8 +3400,7 @@ _mid_memalign (size_t alignment, size_t bytes, void *address) p = _int_memalign (&main_arena, alignment, bytes); assert (!p || chunk_is_mmapped (mem2chunk (p)) || &main_arena == arena_for_chunk (mem2chunk (p))); - - return p; + return TAG_NEW_USABLE (p); } arena_get (ar_ptr, bytes + alignment + MINSIZE); @@ -3342,7 +3418,7 @@ _mid_memalign (size_t alignment, size_t bytes, void *address) assert (!p || chunk_is_mmapped (mem2chunk (p)) || ar_ptr == arena_for_chunk (mem2chunk (p))); - return p; + return TAG_NEW_USABLE (p); } /* For ISO C11. 
*/ weak_alias (__libc_memalign, aligned_alloc) @@ -3351,20 +3427,27 @@ libc_hidden_def (__libc_memalign) void * __libc_valloc (size_t bytes) { + void *p; + if (__malloc_initialized < 0) ptmalloc_init (); + bytes = ROUND_UP_ALLOCATION_SIZE (bytes); void *address = RETURN_ADDRESS (0); size_t pagesize = GLRO (dl_pagesize); - return _mid_memalign (pagesize, bytes, address); + p = _mid_memalign (pagesize, bytes, address); + return TAG_NEW_USABLE (p); } void * __libc_pvalloc (size_t bytes) { + void *p; + if (__malloc_initialized < 0) ptmalloc_init (); + bytes = ROUND_UP_ALLOCATION_SIZE (bytes); void *address = RETURN_ADDRESS (0); size_t pagesize = GLRO (dl_pagesize); size_t rounded_bytes; @@ -3378,19 +3461,22 @@ __libc_pvalloc (size_t bytes) } rounded_bytes = rounded_bytes & -(pagesize - 1); - return _mid_memalign (pagesize, rounded_bytes, address); + p = _mid_memalign (pagesize, rounded_bytes, address); + return TAG_NEW_USABLE (p); } void * __libc_calloc (size_t n, size_t elem_size) { mstate av; - mchunkptr oldtop, p; - INTERNAL_SIZE_T sz, csz, oldtopsize; + mchunkptr oldtop; + INTERNAL_SIZE_T sz, oldtopsize; void *mem; +#ifndef _LIBC_MTAG unsigned long clearsize; unsigned long nclears; INTERNAL_SIZE_T *d; +#endif ptrdiff_t bytes; if (__glibc_unlikely (__builtin_mul_overflow (n, elem_size, &bytes))) @@ -3398,6 +3484,7 @@ __libc_calloc (size_t n, size_t elem_size) __set_errno (ENOMEM); return NULL; } + sz = bytes; void *(*hook) (size_t, const void *) = @@ -3413,6 +3500,7 @@ __libc_calloc (size_t n, size_t elem_size) MAYBE_INIT_TCACHE (); + sz = ROUND_UP_ALLOCATION_SIZE (sz); if (SINGLE_THREAD_P) av = &main_arena; else @@ -3467,7 +3555,14 @@ __libc_calloc (size_t n, size_t elem_size) if (mem == 0) return 0; - p = mem2chunk (mem); + /* If we are using memory tagging, then we need to set the tags + regardless of MORECORE_CLEARS, so we zero the whole block while + doing so. */ +#ifdef _LIBC_MTAG + return TAG_NEW_MEMSET (mem, 0, __malloc_usable_size (mem)); +#else + mchunkptr p = mem2chunk (mem); + INTERNAL_SIZE_T csz = chunksize (p); /* Two optional cases in which clearing not necessary */ if (chunk_is_mmapped (p)) @@ -3478,8 +3573,6 @@ __libc_calloc (size_t n, size_t elem_size) return mem; } - csz = chunksize (p); - #if MORECORE_CLEARS if (perturb_byte == 0 && (p == oldtop && csz > oldtopsize)) { @@ -3522,6 +3615,7 @@ __libc_calloc (size_t n, size_t elem_size) } return mem; +#endif } /* @@ -4618,7 +4712,7 @@ _int_realloc(mstate av, mchunkptr oldp, INTERNAL_SIZE_T oldsize, av->top = chunk_at_offset (oldp, nb); set_head (av->top, (newsize - nb) | PREV_INUSE); check_inuse_chunk (av, oldp); - return chunk2mem (oldp); + return TAG_NEW_USABLE (chunk2rawmem (oldp)); } /* Try to expand forward into next chunk; split off remainder below */ @@ -4651,7 +4745,10 @@ _int_realloc(mstate av, mchunkptr oldp, INTERNAL_SIZE_T oldsize, } else { - memcpy (newmem, chunk2mem (oldp), oldsize - SIZE_SZ); + void *oldmem = chunk2mem (oldp); + newmem = TAG_NEW_USABLE (newmem); + memcpy (newmem, oldmem, __malloc_usable_size (oldmem)); + (void) TAG_REGION (chunk2rawmem (oldp), oldsize); _int_free (av, oldp, 1); check_inuse_chunk (av, newp); return chunk2mem (newp); @@ -4673,6 +4770,8 @@ _int_realloc(mstate av, mchunkptr oldp, INTERNAL_SIZE_T oldsize, else /* split remainder */ { remainder = chunk_at_offset (newp, nb); + /* Clear any user-space tags before writing the header. */ + remainder = TAG_REGION (remainder, remainder_size); set_head_size (newp, nb | (av != &main_arena ? 
NON_MAIN_ARENA : 0)); set_head (remainder, remainder_size | PREV_INUSE | (av != &main_arena ? NON_MAIN_ARENA : 0)); @@ -4682,8 +4781,8 @@ _int_realloc(mstate av, mchunkptr oldp, INTERNAL_SIZE_T oldsize, } check_inuse_chunk (av, newp); - return chunk2mem (newp); -} + return TAG_NEW_USABLE (chunk2rawmem (newp)); + } /* ------------------------------ memalign ------------------------------ @@ -4760,7 +4859,7 @@ _int_memalign (mstate av, size_t alignment, size_t bytes) p = newp; assert (newsize >= nb && - (((unsigned long) (chunk2mem (p))) % alignment) == 0); + (((unsigned long) (chunk2rawmem (p))) % alignment) == 0); } /* Also give back spare room at the end */ @@ -4814,7 +4913,7 @@ mtrim (mstate av, size_t pad) + sizeof (struct malloc_chunk) + psm1) & ~psm1); - assert ((char *) chunk2mem (p) + 4 * SIZE_SZ <= paligned_mem); + assert ((char *) chunk2rawmem (p) + 4 * SIZE_SZ <= paligned_mem); assert ((char *) p + size > paligned_mem); /* This is the size we could potentially free. */ @@ -4902,7 +5001,7 @@ __malloc_usable_size (void *m) size_t result; result = musable (m); - return result; + return (size_t) (((INTERNAL_SIZE_T)result) & ~(__MTAG_GRANULE_SIZE - 1)); } /* diff --git a/malloc/malloc.h b/malloc/malloc.h index a6903fdd54..d012da9a9f 100644 --- a/malloc/malloc.h +++ b/malloc/malloc.h @@ -77,6 +77,13 @@ extern void *pvalloc (size_t __size) __THROW __attribute_malloc__ __wur; contiguous pieces of memory. */ extern void *(*__morecore) (ptrdiff_t __size); +#ifdef _LIBC_MTAG +extern int __mtag_mmap_flags; +#define MTAG_MMAP_FLAGS __mtag_mmap_flags +#else +#define MTAG_MMAP_FLAGS 0 +#endif + /* Default value of `__morecore'. */ extern void *__default_morecore (ptrdiff_t __size) __THROW __attribute_malloc__; diff --git a/sysdeps/generic/libc-mtag.h b/sysdeps/generic/libc-mtag.h new file mode 100644 index 0000000000..3e9885451c --- /dev/null +++ b/sysdeps/generic/libc-mtag.h @@ -0,0 +1,52 @@ +/* libc-internal interface for tagged (colored) memory support. + Copyright (C) 2019 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + . */ + +#ifndef _GENERIC_LIBC_MTAG_H +#define _GENERIC_LIBC_MTAG_H 1 + +/* Generic bindings for systems that do not support memory tagging. */ + +/* Used to ensure additional alignment when objects need to have distinct + tags. */ +#define __MTAG_GRANULE_SIZE 1 + +/* Non-zero if memory obtained via morecore (sbrk) is not tagged. */ +#define __MTAG_SBRK_UNTAGGED 0 + +/* Extra flags to pass to mmap() to request a tagged region of memory. */ +#define __MTAG_MMAP_FLAGS 0 + +/* Set the tags for a region of memory, which must have size and alignment + that are multiples of __MTAG_GRANULE_SIZE. Size cannot be zero. + void *__libc_mtag_tag_region (const void *, size_t) */ +#define __libc_mtag_tag_region(p, s) (p) + +/* Optimized equivalent to __libc_mtag_tag_region followed by memset. 
*/
+#define __libc_mtag_memset_with_tag memset
+
+/* Convert address P to a pointer that is tagged correctly for that
+   location.
+   void *__libc_mtag_address_get_tag (void*) */
+#define __libc_mtag_address_get_tag(p) (p)
+
+/* Assign a new (random) tag to a pointer P (does not adjust the tag on
+   the memory addressed).
+   void *__libc_mtag_new_tag (void*) */
+#define __libc_mtag_new_tag(p) (p)
+
+#endif /* _GENERIC_LIBC_MTAG_H */

From patchwork Mon Jun 15 14:40:26 2020
X-Patchwork-Submitter: Richard Earnshaw
X-Patchwork-Id: 1309555
From: Richard Earnshaw
To: libc-alpha@sourceware.org
Cc: Richard Earnshaw
Subject: [PATCH 4/7] linux: Add compatibility definitions to sys/prctl.h for MTE
Date: Mon, 15 Jun 2020 15:40:26 +0100
Message-Id: <20200615144029.19771-5-rearnsha@arm.com>
In-Reply-To: <20200615144029.19771-1-rearnsha@arm.com>
References: <20200615144029.19771-1-rearnsha@arm.com>

Older versions of the Linux kernel headers lack support for memory
tagging, but we still want to be able to build in support when using
those headers (obviously it can't be enabled on such systems); a sketch
of how the new definitions are used follows.
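As an illustrative sketch only (not part of the patch), once these
definitions are in place they are used with prctl() like this; the call
simply fails on kernels without MTE support:

    #include <stdio.h>
    #include <sys/prctl.h>

    int
    main (void)
    {
      /* Ask the kernel for tagged addresses with synchronous (precise)
         reporting of tag-check faults.  */
      if (prctl (PR_SET_TAGGED_ADDR_CTRL,
                 PR_TAGGED_ADDR_ENABLE | PR_MTE_TCF_SYNC, 0, 0, 0) != 0)
        puts ("MTE not supported here");
      return 0;
    }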
The Linux kernel extensions are made to the platform-independent header
(linux/prctl.h), so this patch takes a similar approach.
---
 sysdeps/unix/sysv/linux/sys/prctl.h | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/sysdeps/unix/sysv/linux/sys/prctl.h b/sysdeps/unix/sysv/linux/sys/prctl.h
index 7f748ebeeb..4d01379c23 100644
--- a/sysdeps/unix/sysv/linux/sys/prctl.h
+++ b/sysdeps/unix/sysv/linux/sys/prctl.h
@@ -21,6 +21,24 @@
 #include <features.h>
 #include <linux/prctl.h>  /* The magic values come from here */
 
+/* Recent extensions to linux which may post-date the kernel headers
+   we're picking up...  */
+
+/* Memory tagging control operations (for AArch64).  */
+#ifndef PR_TAGGED_ADDR_ENABLE
+# define PR_TAGGED_ADDR_ENABLE	(1UL << 8)
+#endif
+
+#ifndef PR_MTE_TCF_SHIFT
+# define PR_MTE_TCF_SHIFT	1
+# define PR_MTE_TCF_NONE	(0UL << PR_MTE_TCF_SHIFT)
+# define PR_MTE_TCF_SYNC	(1UL << PR_MTE_TCF_SHIFT)
+# define PR_MTE_TCF_ASYNC	(2UL << PR_MTE_TCF_SHIFT)
+# define PR_MTE_TCF_MASK	(3UL << PR_MTE_TCF_SHIFT)
+# define PR_MTE_TAG_SHIFT	3
+# define PR_MTE_TAG_MASK	(0xffffUL << PR_MTE_TAG_SHIFT)
+#endif
+
 __BEGIN_DECLS
 
 /* Control process execution.  */

From patchwork Mon Jun 15 14:40:27 2020
X-Patchwork-Submitter: Richard Earnshaw
X-Patchwork-Id: 1309559
From: Richard Earnshaw
To: libc-alpha@sourceware.org
Subject: [PATCH 5/7] aarch64: Mitigations for string functions when MTE is enabled.
Date: Mon, 15 Jun 2020 15:40:27 +0100
Message-Id: <20200615144029.19771-6-rearnsha@arm.com>
In-Reply-To: <20200615144029.19771-1-rearnsha@arm.com>
References: <20200615144029.19771-1-rearnsha@arm.com>
Cc: Richard Earnshaw

This is a place-holder patch for the changes needed to the string
functions to make them safe when using memory tagging.  It is expected
that this patch will be replaced before the final series is committed.

When memory tagging is enabled, functions must not fetch data beyond a
granule boundary.  Unfortunately, this affects a number of the
optimized string operations for aarch64, which assume that, provided a
page boundary is not crossed, any amount of data within the page may be
accessed.

This patch replaces the existing string functions with variants that do
not violate the granule size limitations that now exist.

This patch has not been tuned for performance.
---
 sysdeps/aarch64/memchr.S                 | 21 ++++++++++++++++++++-
 sysdeps/aarch64/multiarch/strlen_asimd.S |  2 +-
 sysdeps/aarch64/strchr.S                 | 15 +++++++++++++++
 sysdeps/aarch64/strchrnul.S              | 14 +++++++++++++-
 sysdeps/aarch64/strcmp.S                 | 12 +++++++++---
 sysdeps/aarch64/strcpy.S                 |  2 +-
 sysdeps/aarch64/strlen.S                 |  2 +-
 sysdeps/aarch64/strncmp.S                | 10 ++++++++--
 sysdeps/aarch64/strrchr.S                | 15 ++++++++++++++-
 9 files changed, 82 insertions(+), 11 deletions(-)

diff --git a/sysdeps/aarch64/memchr.S b/sysdeps/aarch64/memchr.S
index 85c65cbfca..6e01a0a0a9 100644
--- a/sysdeps/aarch64/memchr.S
+++ b/sysdeps/aarch64/memchr.S
@@ -64,6 +64,25 @@
  */
 ENTRY (MEMCHR)
+#ifdef _LIBC_MTAG
+	/* Quick-and-dirty implementation for MTE.  Needs a rewrite as
+	   granules are only 16 bytes in size.  */
+	/* Do not dereference srcin if no bytes to compare.  */
+	cbz	cntin, L(zero_length)
+	and	chrin, chrin, #255
+L(next_byte):
+	ldrb	wtmp2, [srcin], #1
+	cmp	wtmp2, chrin
+	b.eq	L(found)
+	subs	cntin, cntin, #1
+	b.ne	L(next_byte)
+L(zero_length):
+	mov	result, #0
+	ret
+L(found):
+	sub	result, srcin, #1
+	ret
+#else
	/* Do not dereference srcin if no bytes to compare.
*/ cbz cntin, L(zero_length) /* @@ -152,10 +171,10 @@ L(tail): /* Select result or NULL */ csel result, xzr, result, eq ret - L(zero_length): mov result, #0 ret +#endif /* _LIBC_MTAG */ END (MEMCHR) weak_alias (MEMCHR, memchr) libc_hidden_builtin_def (memchr) diff --git a/sysdeps/aarch64/multiarch/strlen_asimd.S b/sysdeps/aarch64/multiarch/strlen_asimd.S index 236a2c96a6..c2c718e493 100644 --- a/sysdeps/aarch64/multiarch/strlen_asimd.S +++ b/sysdeps/aarch64/multiarch/strlen_asimd.S @@ -51,7 +51,7 @@ #define REP8_01 0x0101010101010101 #define REP8_7f 0x7f7f7f7f7f7f7f7f -#ifdef TEST_PAGE_CROSS +#if defined _LIBC_MTAG || defined TEST_PAGE_CROSS # define MIN_PAGE_SIZE 16 #else # define MIN_PAGE_SIZE 4096 diff --git a/sysdeps/aarch64/strchr.S b/sysdeps/aarch64/strchr.S index 4a75e73945..32c500609e 100644 --- a/sysdeps/aarch64/strchr.S +++ b/sysdeps/aarch64/strchr.S @@ -63,6 +63,20 @@ ENTRY (strchr) DELOUSE (0) +#ifdef _LIBC_MTAG + /* Quick and dirty implementation for MTE */ + and chrin, chrin, #255 +L(next_byte): + ldrb wtmp2, [srcin], #1 + cbz wtmp2, L(end) + cmp wtmp2, chrin + b.ne L(next_byte) + sub result, srcin, #1 + ret +L(end): + mov result, #0 + ret +#else mov wtmp2, #0x0401 movk wtmp2, #0x4010, lsl #16 dup vrepchr.16b, chrin @@ -134,6 +148,7 @@ L(tail): add result, src, tmp1, lsr #1 csel result, result, xzr, eq ret +#endif END (strchr) libc_hidden_builtin_def (strchr) weak_alias (strchr, index) diff --git a/sysdeps/aarch64/strchrnul.S b/sysdeps/aarch64/strchrnul.S index a65be6cba8..78a9252eb8 100644 --- a/sysdeps/aarch64/strchrnul.S +++ b/sysdeps/aarch64/strchrnul.S @@ -61,6 +61,18 @@ ENTRY (__strchrnul) DELOUSE (0) +#ifdef _LIBC_MTAG + /* Quick and dirty implementation for MTE */ + and chrin, chrin, #255 +L(next_byte): + ldrb wtmp2, [srcin], #1 + cmp wtmp2, #0 + ccmp wtmp2, chrin, #4, ne /* NZCV = 0x0100 */ + b.ne L(next_byte) + + sub result, srcin, #1 + ret +#else /* Magic constant 0x40100401 to allow us to identify which lane matches the termination condition. */ mov wtmp2, #0x0401 @@ -126,6 +138,6 @@ L(tail): /* tmp1 is twice the offset into the fragment. */ add result, src, tmp1, lsr #1 ret - +#endif /* _LIBC_MTAG */ END(__strchrnul) weak_alias (__strchrnul, strchrnul) diff --git a/sysdeps/aarch64/strcmp.S b/sysdeps/aarch64/strcmp.S index d044c29e9b..d01b199ab3 100644 --- a/sysdeps/aarch64/strcmp.S +++ b/sysdeps/aarch64/strcmp.S @@ -46,6 +46,12 @@ #define zeroones x10 #define pos x11 +#if defined _LIBC_MTAG || defined TEST_PAGE_CROSS +# define MIN_PAGE_SIZE 16 +#else +# define MIN_PAGE_SIZE 4096 +#endif + /* Start of performance-critical section -- one 64B cache line. */ ENTRY_ALIGN(strcmp, 6) @@ -161,10 +167,10 @@ L(do_misaligned): b.ne L(do_misaligned) L(loop_misaligned): - /* Test if we are within the last dword of the end of a 4K page. If + /* Test if we are within the last dword of the end of a page. If yes then jump back to the misaligned loop to copy a byte at a time. */ - and tmp1, src2, #0xff8 - eor tmp1, tmp1, #0xff8 + and tmp1, src2, #(MIN_PAGE_SIZE - 8) + eor tmp1, tmp1, #(MIN_PAGE_SIZE - 8) cbz tmp1, L(do_misaligned) ldr data1, [src1], #8 ldr data2, [src2], #8 diff --git a/sysdeps/aarch64/strcpy.S b/sysdeps/aarch64/strcpy.S index 548130e413..82548f3d53 100644 --- a/sysdeps/aarch64/strcpy.S +++ b/sysdeps/aarch64/strcpy.S @@ -87,7 +87,7 @@ misaligned, crosses a page boundary - after that we move to aligned fetches for the remainder of the string. 
*/ -#ifdef STRCPY_TEST_PAGE_CROSS +#if defined _LIBC_MTAG || defined STRCPY_TEST_PAGE_CROSS /* Make everything that isn't Qword aligned look like a page cross. */ #define MIN_PAGE_P2 4 #else diff --git a/sysdeps/aarch64/strlen.S b/sysdeps/aarch64/strlen.S index e01fab7c2a..7455a668bb 100644 --- a/sysdeps/aarch64/strlen.S +++ b/sysdeps/aarch64/strlen.S @@ -57,7 +57,7 @@ #define REP8_7f 0x7f7f7f7f7f7f7f7f #define REP8_80 0x8080808080808080 -#ifdef TEST_PAGE_CROSS +#if defined _LIBC_MTAG || defined TEST_PAGE_CROSS # define MIN_PAGE_SIZE 16 #else # define MIN_PAGE_SIZE 4096 diff --git a/sysdeps/aarch64/strncmp.S b/sysdeps/aarch64/strncmp.S index c5141fab8a..40c805f609 100644 --- a/sysdeps/aarch64/strncmp.S +++ b/sysdeps/aarch64/strncmp.S @@ -51,6 +51,12 @@ #define endloop x15 #define count mask +#if defined _LIBC_MTAG || defined TEST_PAGE_CROSS +# define MIN_PAGE_SIZE 16 +#else +# define MIN_PAGE_SIZE 4096 +#endif + ENTRY_ALIGN_AND_PAD (strncmp, 6, 7) DELOUSE (0) DELOUSE (1) @@ -233,8 +239,8 @@ L(do_misaligned): subs limit_wd, limit_wd, #1 b.lo L(done_loop) L(loop_misaligned): - and tmp2, src2, #0xff8 - eor tmp2, tmp2, #0xff8 + and tmp2, src2, #(MIN_PAGE_SIZE - 8) + eor tmp2, tmp2, #(MIN_PAGE_SIZE - 8) cbz tmp2, L(page_end_loop) ldr data1, [src1], #8 diff --git a/sysdeps/aarch64/strrchr.S b/sysdeps/aarch64/strrchr.S index 94da08d351..ef00e969d9 100644 --- a/sysdeps/aarch64/strrchr.S +++ b/sysdeps/aarch64/strrchr.S @@ -70,6 +70,19 @@ ENTRY(strrchr) DELOUSE (0) cbz x1, L(null_search) +#ifdef _LIBC_MTAG + /* Quick and dirty version for MTE. */ + and chrin, chrin, #255 + mov src_match, #0 +L(next_byte): + ldrb wtmp2, [srcin] + cmp wtmp2, chrin + csel src_match, src_match, srcin, ne + add srcin, srcin, #1 + cbnz wtmp2, L(next_byte) + mov result, src_match + ret +#else /* Magic constant 0x40100401 to allow us to identify which lane matches the requested byte. Magic constant 0x80200802 used similarly for NUL termination. 
*/
@@ -158,9 +171,9 @@ L(tail):
 	csel	result, result, xzr, ne
 	ret
 
+#endif
 L(null_search):
 	b	__strchrnul
-
 END(strrchr)
 weak_alias (strrchr, rindex)
 libc_hidden_builtin_def (strrchr)

From patchwork Mon Jun 15 14:40:28 2020
X-Patchwork-Submitter: Richard Earnshaw
X-Patchwork-Id: 1309560
From: Richard Earnshaw
To: libc-alpha@sourceware.org
Cc: Richard Earnshaw
Subject: [PATCH 6/7] aarch64: Add sysv specific enabling code for memory tagging
Date: Mon, 15 Jun 2020 15:40:28 +0100
Message-Id: <20200615144029.19771-7-rearnsha@arm.com>
In-Reply-To: <20200615144029.19771-1-rearnsha@arm.com>
References: <20200615144029.19771-1-rearnsha@arm.com>

Add various defines and stubs for enabling MTE on AArch64 sysv-like
systems such as Linux.  The HWCAP feature bit is copied over in the
same way as other feature bits.  Similarly, we add a new wrapper header
for mman.h to define the PROT_MTE flag that can be used with mmap and
related functions; a sketch of its intended use follows.

We add a new field to struct cpu_features that can be used, for
example, to check whether or not certain ifunc'd routines should be
bound to MTE-safe versions.
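As an illustrative sketch only (not part of the patch), PROT_MTE is
expected to be used with mmap like this to obtain a tagged mapping:

    #include <stdio.h>
    #include <sys/mman.h>

    int
    main (void)
    {
      /* PROT_MTE requests a tag-checked mapping; this fails with
         MAP_FAILED on kernels or CPUs without MTE support.  */
      void *p = mmap (NULL, 4096, PROT_READ | PROT_WRITE | PROT_MTE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
      if (p == MAP_FAILED)
        puts ("no MTE mapping available");
      return 0;
    }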
Finally, if we detect that MTE should be enabled (i.e. via the glibc
tunable), we enable MTE during startup as required.
---
 sysdeps/unix/sysv/linux/aarch64/bits/hwcap.h |  2 ++
 sysdeps/unix/sysv/linux/aarch64/bits/mman.h  | 32 +++++++++++++++++++
 .../unix/sysv/linux/aarch64/cpu-features.c   | 22 +++++++++++++
 .../unix/sysv/linux/aarch64/cpu-features.h   |  1 +
 4 files changed, 57 insertions(+)
 create mode 100644 sysdeps/unix/sysv/linux/aarch64/bits/mman.h

diff --git a/sysdeps/unix/sysv/linux/aarch64/bits/hwcap.h b/sysdeps/unix/sysv/linux/aarch64/bits/hwcap.h
index f52840c2c4..4092603fd7 100644
--- a/sysdeps/unix/sysv/linux/aarch64/bits/hwcap.h
+++ b/sysdeps/unix/sysv/linux/aarch64/bits/hwcap.h
@@ -54,3 +54,5 @@
 #define HWCAP_SB		(1 << 29)
 #define HWCAP_PACA		(1 << 30)
 #define HWCAP_PACG		(1UL << 31)
+
+#define HWCAP2_MTE		(1 << 18)
diff --git a/sysdeps/unix/sysv/linux/aarch64/bits/mman.h b/sysdeps/unix/sysv/linux/aarch64/bits/mman.h
new file mode 100644
index 0000000000..fa3f3a31f4
--- /dev/null
+++ b/sysdeps/unix/sysv/linux/aarch64/bits/mman.h
@@ -0,0 +1,32 @@
+/* Definitions for POSIX memory map interface.  Linux/aarch64 version.
+   Copyright (C) 2020 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <https://www.gnu.org/licenses/>.  */
+
+#ifndef _SYS_MMAN_H
+# error "Never use <bits/mman.h> directly; include <sys/mman.h> instead."
+#endif
+
+/* The following definitions basically come from the kernel headers.
+   But the kernel header is not namespace clean.  */
+
+/* Other flags.  */
+#define PROT_MTE	0x20		/* Normal Tagged mapping.  */
+
+#include
+
+/* Include generic Linux declarations.  */
+#include
diff --git a/sysdeps/unix/sysv/linux/aarch64/cpu-features.c b/sysdeps/unix/sysv/linux/aarch64/cpu-features.c
index 896c588fee..a8554f3e5d 100644
--- a/sysdeps/unix/sysv/linux/aarch64/cpu-features.c
+++ b/sysdeps/unix/sysv/linux/aarch64/cpu-features.c
@@ -19,6 +19,7 @@
 #include
 #include
 #include
+#include <sys/prctl.h>
 
 #define DCZID_DZP_MASK (1 << 4)
 #define DCZID_BS_MASK (0xf)
@@ -83,4 +84,25 @@ init_cpu_features (struct cpu_features *cpu_features)
 
   if ((dczid & DCZID_DZP_MASK) == 0)
     cpu_features->zva_size = 4 << (dczid & DCZID_BS_MASK);
+
+  /* Setup memory tagging support if the HW and kernel support it, and if
+     the user has requested it.  */
+#if HAVE_TUNABLES
+  int mte_state = TUNABLE_GET (glibc, memtag, enable, unsigned, 0);
+  cpu_features->mte_state = (GLRO (dl_hwcap2) & HWCAP2_MTE) ? mte_state : 0;
+#else
+  cpu_features->mte_state = 0;
+#endif
+  /* For now, disallow tag 0, so that we can clearly see when tagged
+     addresses are being allocated.
*/
+  if (cpu_features->mte_state & 2)
+    __prctl (PR_SET_TAGGED_ADDR_CTRL,
+	     (PR_TAGGED_ADDR_ENABLE | PR_MTE_TCF_SYNC
+	      | (0xfffe << PR_MTE_TAG_SHIFT)),
+	     0, 0, 0);
+  else if (cpu_features->mte_state)
+    __prctl (PR_SET_TAGGED_ADDR_CTRL,
+	     (PR_TAGGED_ADDR_ENABLE | PR_MTE_TCF_ASYNC
+	      | (0xfffe << PR_MTE_TAG_SHIFT)),
+	     0, 0, 0);
 }
diff --git a/sysdeps/unix/sysv/linux/aarch64/cpu-features.h b/sysdeps/unix/sysv/linux/aarch64/cpu-features.h
index 1389cea1b3..604de27c88 100644
--- a/sysdeps/unix/sysv/linux/aarch64/cpu-features.h
+++ b/sysdeps/unix/sysv/linux/aarch64/cpu-features.h
@@ -64,6 +64,7 @@ struct cpu_features
 {
   uint64_t midr_el1;
   unsigned zva_size;
+  unsigned mte_state;
 };
 
 #endif /* _CPU_FEATURES_AARCH64_H */

From patchwork Mon Jun 15 14:40:29 2020
X-Patchwork-Submitter: Richard Earnshaw
X-Patchwork-Id: 1309561
From: Richard Earnshaw
To: libc-alpha@sourceware.org
Cc: Richard Earnshaw
Subject: [PATCH 7/7] aarch64: Add aarch64-specific files for memory tagging support
Date: Mon, 15 Jun 2020 15:40:29 +0100
Message-Id: <20200615144029.19771-8-rearnsha@arm.com>
In-Reply-To: <20200615144029.19771-1-rearnsha@arm.com>
References: <20200615144029.19771-1-rearnsha@arm.com>
This final patch provides the architecture-specific implementation of
the memory-tagging support hooks for aarch64.
---
 sysdeps/aarch64/Makefile                 |  5 +++
 sysdeps/aarch64/__mtag_address_get_tag.S | 31 +++++++++++++
 sysdeps/aarch64/__mtag_memset_tag.S      | 46 +++++++++++++++++++
 sysdeps/aarch64/__mtag_new_tag.S         | 38 ++++++++++++++++
 sysdeps/aarch64/__mtag_tag_region.S      | 44 ++++++++++++++++++
 sysdeps/aarch64/libc-mtag.h              | 57 ++++++++++++++++++++++++
 6 files changed, 221 insertions(+)
 create mode 100644 sysdeps/aarch64/__mtag_address_get_tag.S
 create mode 100644 sysdeps/aarch64/__mtag_memset_tag.S
 create mode 100644 sysdeps/aarch64/__mtag_new_tag.S
 create mode 100644 sysdeps/aarch64/__mtag_tag_region.S
 create mode 100644 sysdeps/aarch64/libc-mtag.h

diff --git a/sysdeps/aarch64/Makefile b/sysdeps/aarch64/Makefile
index 9cb141004d..34b5aa7f6e 100644
--- a/sysdeps/aarch64/Makefile
+++ b/sysdeps/aarch64/Makefile
@@ -21,4 +21,9 @@ endif
 
 ifeq ($(subdir),misc)
 sysdep_headers += sys/ifunc.h
+sysdep_routines += __mtag_tag_region __mtag_new_tag __mtag_address_get_tag
+endif
+
+ifeq ($(subdir),string)
+sysdep_routines += __mtag_memset_tag
 endif
diff --git a/sysdeps/aarch64/__mtag_address_get_tag.S b/sysdeps/aarch64/__mtag_address_get_tag.S
new file mode 100644
index 0000000000..654c9d660c
--- /dev/null
+++ b/sysdeps/aarch64/__mtag_address_get_tag.S
@@ -0,0 +1,31 @@
+/* Copyright (C) 2020 Free Software Foundation, Inc.
+
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library.  If not, see
+   <https://www.gnu.org/licenses/>.  */
+
+#include <sysdep.h>
+
+#define ptr x0
+
+	.arch armv8.5-a
+	.arch_extension memtag
+
+ENTRY (__libc_mtag_address_get_tag)
+
+	ldg	ptr, [ptr]
+	ret
+END (__libc_mtag_address_get_tag)
+libc_hidden_builtin_def (__libc_mtag_address_get_tag)
diff --git a/sysdeps/aarch64/__mtag_memset_tag.S b/sysdeps/aarch64/__mtag_memset_tag.S
new file mode 100644
index 0000000000..bc98dc49d2
--- /dev/null
+++ b/sysdeps/aarch64/__mtag_memset_tag.S
@@ -0,0 +1,46 @@
+/* Copyright (C) 2020 Free Software Foundation, Inc.
+
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library.  If not, see
+   <https://www.gnu.org/licenses/>.  */
+
+#include <sysdep.h>
+/* Use the same register names and assignments as memset.  */
+#include "memset-reg.h"
+
+	.arch armv8.5-a
+	.arch_extension memtag
+
+/* NB, only supported on variants with 64-bit pointers.
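+   The MTE allocation tag is carried in bits 56-59 of the pointer,
+   which are only available under the LP64 ABI.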
+   */
+
+/* FIXME: This is a minimal implementation.  We could do much better
+   than this for large values of COUNT.  */
+
+ENTRY_ALIGN (__libc_mtag_memset_with_tag, 6)
+
+	and	valw, valw, 255
+	orr	valw, valw, valw, lsl 8
+	orr	valw, valw, valw, lsl 16
+	orr	val, val, val, lsl 32
+	mov	dst, dstin
+
+L(loop):
+	stgp	val, val, [dst], #16
+	subs	count, count, 16
+	bne	L(loop)
+	ldg	dstin, [dstin]	// Recover the tag created (might be untagged).
+	ret
+END (__libc_mtag_memset_with_tag)
+libc_hidden_builtin_def (__libc_mtag_memset_with_tag)
diff --git a/sysdeps/aarch64/__mtag_new_tag.S b/sysdeps/aarch64/__mtag_new_tag.S
new file mode 100644
index 0000000000..3a22995e9f
--- /dev/null
+++ b/sysdeps/aarch64/__mtag_new_tag.S
@@ -0,0 +1,35 @@
+/* Copyright (C) 2020 Free Software Foundation, Inc.
+
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library.  If not, see
+   <https://www.gnu.org/licenses/>.  */
+
+#include <sysdep.h>
+
+	.arch armv8.5-a
+	.arch_extension memtag
+
+/* NB, only supported on variants with 64-bit pointers.  */
+
+#define ptr x0
+#define xset x1
+
+ENTRY (__libc_mtag_new_tag)
+	// Guarantee that the new tag is not the same as the current tag.
+	gmi	xset, ptr, xzr
+	irg	ptr, ptr, xset
+	ret
+END (__libc_mtag_new_tag)
+libc_hidden_builtin_def (__libc_mtag_new_tag)
diff --git a/sysdeps/aarch64/__mtag_tag_region.S b/sysdeps/aarch64/__mtag_tag_region.S
new file mode 100644
index 0000000000..41019781d0
--- /dev/null
+++ b/sysdeps/aarch64/__mtag_tag_region.S
@@ -0,0 +1,44 @@
+/* Copyright (C) 2020 Free Software Foundation, Inc.
+
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library.  If not, see
+   <https://www.gnu.org/licenses/>.  */
+
+#include <sysdep.h>
+/* Register names follow the conventions used by memset.  */
+
+	.arch armv8.5-a
+	.arch_extension memtag
+
+/* NB, only supported on variants with 64-bit pointers.  */
+
+/* FIXME: This is a minimal implementation.  We could do better than
+   this for larger values of COUNT.  */
+
+#define dstin x0
+#define count x1
+#define dst   x2
+
+ENTRY_ALIGN (__libc_mtag_tag_region, 6)
+
+	mov	dst, dstin
+L(loop):
+	stg	dst, [dst], #16
+	subs	count, count, 16
+	bne	L(loop)
+	ldg	dstin, [dstin]	// Recover the tag created (might be untagged).
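+	// If the memory is untagged (mapped without PROT_MTE), LDG
+	// yields tag zero, so the caller gets back an untagged pointer.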
+	ret
+END (__libc_mtag_tag_region)
+libc_hidden_builtin_def (__libc_mtag_tag_region)
diff --git a/sysdeps/aarch64/libc-mtag.h b/sysdeps/aarch64/libc-mtag.h
new file mode 100644
index 0000000000..9c7d00c541
--- /dev/null
+++ b/sysdeps/aarch64/libc-mtag.h
@@ -0,0 +1,57 @@
+/* libc-internal interface for tagged (colored) memory support.
+   Copyright (C) 2020 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <https://www.gnu.org/licenses/>.  */
+
+#ifndef _AARCH64_LIBC_MTAG_H
+#define _AARCH64_LIBC_MTAG_H 1
+
+#ifndef _LIBC_MTAG
+/* Generic bindings for systems that do not support memory tagging.  */
+#include_next "libc-mtag.h"
+#else
+
+/* Used to ensure additional alignment when objects need to have distinct
+   tags.  */
+#define __MTAG_GRANULE_SIZE 16
+
+/* Non-zero if memory obtained via morecore (sbrk) is not tagged.  */
+#define __MTAG_SBRK_UNTAGGED 1
+
+/* Extra flags to pass to mmap to get tagged pages.  */
+#define __MTAG_MMAP_FLAGS PROT_MTE
+
+/* Set the tags for a region of memory, which must have size and alignment
+   that are multiples of __MTAG_GRANULE_SIZE.  Size cannot be zero.
+   void *__libc_mtag_tag_region (void *, size_t)  */
+void *__libc_mtag_tag_region (void *, size_t);
+
+/* Optimized equivalent to __libc_mtag_tag_region followed by memset.  */
+void *__libc_mtag_memset_with_tag (void *, int, size_t);
+
+/* Convert address P to a pointer that is tagged correctly for that
+   location.
+   void *__libc_mtag_address_get_tag (void *)  */
+void *__libc_mtag_address_get_tag (void *);
+
+/* Assign a new (random) tag to a pointer P (does not adjust the tag on
+   the memory addressed).
+   void *__libc_mtag_new_tag (void *)  */
+void *__libc_mtag_new_tag (void *);
+
+#endif /* _LIBC_MTAG */
+
+#endif /* _AARCH64_LIBC_MTAG_H */
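
As a usage illustration (this fragment is hypothetical and not part of
the patch; the function name alloc_tagged_page and the fixed page size
are invented for the example), the new hooks are intended to combine
with the PROT_MTE mapping flag roughly as follows:

    #include <stddef.h>
    #include <sys/mman.h>

    /* Declarations as in <libc-mtag.h> above.  */
    void *__libc_mtag_new_tag (void *);
    void *__libc_mtag_tag_region (void *, size_t);

    /* Map one page with tag access enabled, then colour its first SIZE
       bytes.  SIZE must be a non-zero multiple of __MTAG_GRANULE_SIZE
       (16), and the region must be granule-aligned, which a fresh
       mapping always is.  */
    static void *
    alloc_tagged_page (size_t size)
    {
      void *page = mmap (0, 4096, PROT_READ | PROT_WRITE | PROT_MTE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
      if (page == MAP_FAILED)
        return 0;
      /* Pick a fresh, non-zero tag for the pointer, then store that
         tag into the tag granules covering the region.  */
      void *p = __libc_mtag_new_tag (page);
      return __libc_mtag_tag_region (p, size);
    }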