From patchwork Mon Aug 7 13:03:04 2017
X-Patchwork-Submitter: Joel Fernandes
X-Patchwork-Id: 798628
X-Patchwork-Delegate: davem@davemloft.net
From: Joel Fernandes
To: linux-kernel@vger.kernel.org
Cc: Chenbo Feng, Alison Chaiken, Juri Lelli, Joel Fernandes,
    Alexei Starovoitov, Daniel Borkmann,
    netdev@vger.kernel.org (open list:BPF (Safe dynamic programs and tools))
Subject: [PATCH RFC 3/5] samples/bpf: Fix inline asm issues building samples on arm64
Date: Mon, 7 Aug 2017 06:03:04 -0700
Message-Id: <20170807130306.31530-4-joelaf@google.com>
X-Mailer: git-send-email 2.14.0.rc1.383.gd1ce394fe2-goog
In-Reply-To: <20170807130306.31530-1-joelaf@google.com>
References: <20170807130306.31530-1-joelaf@google.com>
Sender: netdev-owner@vger.kernel.org
X-Mailing-List: netdev@vger.kernel.org

Inline assembly has haunted building the samples on arm64 for quite some time. This patch uses the preprocessor to no-op all occurrences of inline asm when compiling the BPF samples for the BPF target.

This patch reintroduces the inclusion of asm/sysreg.h, which now needs to be included to avoid compiler errors; see [1].
Previously, a hack prevented this inclusion [2] (to work around the exact problem this patch fixes: clang tripping over the inline assembler), but that hack now causes other errors and no longer works.

By using the preprocessor to no-op the inline asm occurrences, we also avoid any future fragile hackery (such as hacks that skip asm headers entirely), and we keep whatever useful definitions the asm headers provide, which the inline asm issues previously made inaccessible. This is the least messy of the available hacks, in my opinion.

[1] https://lkml.org/lkml/2017/8/5/143
[2] https://lists.linaro.org/pipermail/linaro-kernel/2015-November/024036.html

Signed-off-by: Joel Fernandes
---
 samples/bpf/Makefile           | 40 +++++++++++++++++++++++++++++++++-------
 samples/bpf/arm64_asmstubs.h   |  3 +++
 samples/bpf/bpf_helpers.h      | 12 ++++++------
 samples/bpf/generic_asmstubs.h |  4 ++++
 4 files changed, 46 insertions(+), 13 deletions(-)
 create mode 100644 samples/bpf/arm64_asmstubs.h
 create mode 100644 samples/bpf/generic_asmstubs.h

diff --git a/samples/bpf/Makefile b/samples/bpf/Makefile
index e5642c8c144d..7591cdd7fe69 100644
--- a/samples/bpf/Makefile
+++ b/samples/bpf/Makefile
@@ -151,6 +151,8 @@ HOSTLOADLIBES_test_map_in_map += -lelf
 # make samples/bpf/ LLC=~/git/llvm/build/bin/llc CLANG=~/git/llvm/build/bin/clang
 LLC ?= llc
 CLANG ?= clang
+PERL ?= perl
+RM ?= rm
 
 # Detect that we're cross compiling and use the right compilers and flags
 ifdef CROSS_COMPILE
@@ -186,14 +188,38 @@ verify_target_bpf: verify_cmds
 
 $(src)/*.c: verify_target_bpf
 
-# asm/sysreg.h - inline assembly used by it is incompatible with llvm.
-# But, there is no easy way to fix it, so just exclude it since it is
-# useless for BPF samples.
-$(obj)/%.o: $(src)/%.c
-	$(CLANG) $(NOSTDINC_FLAGS) $(LINUXINCLUDE) $(EXTRA_CFLAGS) \
-		-D__KERNEL__ -D__ASM_SYSREG_H -Wno-unused-value -Wno-pointer-sign \
+curdir := $(shell dirname $(realpath $(lastword $(MAKEFILE_LIST))))
+ifeq ($(wildcard $(curdir)/${ARCH}_asmstubs.h),)
+  ARCH_ASM_STUBS :=
+else
+  ARCH_ASM_STUBS := -include $(src)/${ARCH}_asmstubs.h
+endif
+
+ASM_STUBS := ${ARCH_ASM_STUBS} -include $(src)/generic_asmstubs.h
+
+CLANG_ARGS = $(NOSTDINC_FLAGS) $(LINUXINCLUDE) $(EXTRA_CFLAGS) \
+	-D__KERNEL__ -Wno-unused-value -Wno-pointer-sign \
+	$(ASM_STUBS) \
 	-Wno-compare-distinct-pointer-types \
 	-Wno-gnu-variable-sized-type-not-at-end \
 	-Wno-address-of-packed-member -Wno-tautological-compare \
 	-Wno-unknown-warning-option \
-	-O2 -emit-llvm -c $< -o -| $(LLC) -march=bpf -filetype=obj -o $@
+	-O2 -emit-llvm
+
+$(obj)/%.o: $(src)/%.c
+	# Steps to compile a BPF sample while getting rid of inline asm.
+	# This has the advantage of not having to skip important asm headers.
+	# Step 1. Use the clang preprocessor to stub out asm() calls
+	# Step 2. Replace all "asm volatile" with the single keyword "asmvolatile"
+	# Step 3. Use the clang preprocessor to no-op all asm volatile() calls
+	#         and restore asm_bpf to asm for BPF's asm directives
+	# Step 4. Compile and link
+
+	$(CLANG) -E $(CLANG_ARGS) -c $< -o - | \
+		$(PERL) -pe "s/[_\s]*asm[_\s]*volatile[_\s]*/asmvolatile/g" | \
+		$(CLANG) -E $(ASM_STUBS) - -o - | \
+		$(CLANG) -E -Dasm_bpf=asm - -o $@.tmp.c
+
+	$(CLANG) $(CLANG_ARGS) -c $@.tmp.c \
+		-o - | $(LLC) -march=bpf -filetype=obj -o $@
+	$(RM) $@.tmp.c
diff --git a/samples/bpf/arm64_asmstubs.h b/samples/bpf/arm64_asmstubs.h
new file mode 100644
index 000000000000..23d47dbe61b1
--- /dev/null
+++ b/samples/bpf/arm64_asmstubs.h
@@ -0,0 +1,3 @@
+/* Special handling for current_stack_pointer */
+#define __ASM_STACK_POINTER_H
+#define current_stack_pointer 0
diff --git a/samples/bpf/bpf_helpers.h b/samples/bpf/bpf_helpers.h
index 9a9c95f2c9fb..67c9c4438e4b 100644
--- a/samples/bpf/bpf_helpers.h
+++ b/samples/bpf/bpf_helpers.h
@@ -64,12 +64,12 @@ static int (*bpf_xdp_adjust_head)(void *ctx, int offset) =
  * emit BPF_LD_ABS and BPF_LD_IND instructions
  */
 struct sk_buff;
-unsigned long long load_byte(void *skb,
-			     unsigned long long off) asm("llvm.bpf.load.byte");
-unsigned long long load_half(void *skb,
-			     unsigned long long off) asm("llvm.bpf.load.half");
-unsigned long long load_word(void *skb,
-			     unsigned long long off) asm("llvm.bpf.load.word");
+unsigned long long load_byte(void *skb, unsigned long long off)
+	asm_bpf("llvm.bpf.load.byte");
+unsigned long long load_half(void *skb, unsigned long long off)
+	asm_bpf("llvm.bpf.load.half");
+unsigned long long load_word(void *skb, unsigned long long off)
+	asm_bpf("llvm.bpf.load.word");
diff --git a/samples/bpf/generic_asmstubs.h b/samples/bpf/generic_asmstubs.h
new file mode 100644
index 000000000000..1b9e9f5094d8
--- /dev/null
+++ b/samples/bpf/generic_asmstubs.h
@@ -0,0 +1,4 @@
+#define bpf_noop_stub
+#define asm(...) bpf_noop_stub
+#define __asm__(...) bpf_noop_stub
+#define asmvolatile(...) bpf_noop_stub