From patchwork Mon Aug 7 13:06:00 2017
X-Patchwork-Submitter: Joel Fernandes
X-Patchwork-Id: 798631
X-Patchwork-Delegate: davem@davemloft.net
From: Joel Fernandes
To: linux-kernel@vger.kernel.org
Cc: Chenbo Feng, Alison Chaiken, Juri.Lelli@arm.com, Joel Fernandes,
	Alexei Starovoitov, Daniel Borkmann,
	netdev@vger.kernel.org (open list:BPF (Safe dynamic programs and tools))
Subject: [PATCH RFC v2 3/5] samples/bpf: Fix inline asm issues building
 samples on arm64
Date: Mon, 7 Aug 2017 06:06:00 -0700
Message-Id: <20170807130602.31785-4-joelaf@google.com>
In-Reply-To: <20170807130602.31785-1-joelaf@google.com>
References: <20170807130602.31785-1-joelaf@google.com>
X-Mailing-List: netdev@vger.kernel.org

Inline assembly has haunted building the samples on arm64 for quite
some time. This patch uses the preprocessor to no-op all occurrences
of inline asm when compiling the BPF samples for the BPF target.

This patch reintroduces the inclusion of asm/sysreg.h, which now needs
to be included to avoid compiler errors, see [1].
Previously, a hack prevented this inclusion [2] (to avoid the exact
problem this patch fixes: skipping the inline assembler), but that hack
causes other errors now and no longer works. By using the preprocessor
to no-op the inline asm occurrences, we also avoid any future fragile
hackery (such as skipping asm headers entirely) and keep whatever other
information the asm headers provide, which the inline asm issues
previously made unusable. This is the least messy of all the hacks, in
my opinion.

[1] https://lkml.org/lkml/2017/8/5/143
[2] https://lists.linaro.org/pipermail/linaro-kernel/2015-November/024036.html

Signed-off-by: Joel Fernandes
---
 samples/bpf/Makefile           | 40 +++++++++++++++++++++++++++++++++-------
 samples/bpf/arm64_asmstubs.h   |  3 +++
 samples/bpf/bpf_helpers.h      | 12 ++++++------
 samples/bpf/generic_asmstubs.h |  4 ++++
 4 files changed, 46 insertions(+), 13 deletions(-)
 create mode 100644 samples/bpf/arm64_asmstubs.h
 create mode 100644 samples/bpf/generic_asmstubs.h

diff --git a/samples/bpf/Makefile b/samples/bpf/Makefile
index e5642c8c144d..7591cdd7fe69 100644
--- a/samples/bpf/Makefile
+++ b/samples/bpf/Makefile
@@ -151,6 +151,8 @@ HOSTLOADLIBES_test_map_in_map += -lelf
 # make samples/bpf/ LLC=~/git/llvm/build/bin/llc CLANG=~/git/llvm/build/bin/clang
 LLC ?= llc
 CLANG ?= clang
+PERL ?= perl
+RM ?= rm
 
 # Detect that we're cross compiling and use the right compilers and flags
 ifdef CROSS_COMPILE
@@ -186,14 +188,38 @@ verify_target_bpf: verify_cmds
 
 $(src)/*.c: verify_target_bpf
 
-# asm/sysreg.h - inline assembly used by it is incompatible with llvm.
-# But, there is no easy way to fix it, so just exclude it since it is
-# useless for BPF samples.
-$(obj)/%.o: $(src)/%.c
-	$(CLANG) $(NOSTDINC_FLAGS) $(LINUXINCLUDE) $(EXTRA_CFLAGS) \
-		-D__KERNEL__ -D__ASM_SYSREG_H -Wno-unused-value -Wno-pointer-sign \
+curdir := $(shell dirname $(realpath $(lastword $(MAKEFILE_LIST))))
+ifeq ($(wildcard $(curdir)/${ARCH}_asmstubs.h),)
+  ARCH_ASM_STUBS :=
+else
+  ARCH_ASM_STUBS := -include $(src)/${ARCH}_asmstubs.h
+endif
+
+ASM_STUBS := ${ARCH_ASM_STUBS} -include $(src)/generic_asmstubs.h
+
+CLANG_ARGS = $(NOSTDINC_FLAGS) $(LINUXINCLUDE) $(EXTRA_CFLAGS) \
+	-D__KERNEL__ -Wno-unused-value -Wno-pointer-sign \
+	$(ASM_STUBS) \
 	-Wno-compare-distinct-pointer-types \
 	-Wno-gnu-variable-sized-type-not-at-end \
 	-Wno-address-of-packed-member -Wno-tautological-compare \
 	-Wno-unknown-warning-option \
-	-O2 -emit-llvm -c $< -o -| $(LLC) -march=bpf -filetype=obj -o $@
+	-O2 -emit-llvm
+
+$(obj)/%.o: $(src)/%.c
+	# Steps to compile BPF sample while getting rid of inline asm
+	# This has the advantage of not having to skip important asm headers
+	# Step 1. Use clang preprocessor to stub out asm() calls
+	# Step 2. Replace all "asm volatile" with single keyword "asmvolatile"
+	# Step 3. Use clang preprocessor to noop all asm volatile() calls
+	#         and restore asm_bpf to asm for BPF's asm directives
+	# Step 4. Compile and link
+	$(CLANG) -E $(CLANG_ARGS) -c $< -o - | \
+		$(PERL) -pe "s/[_\s]*asm[_\s]*volatile[_\s]*/asmvolatile/g" | \
+		$(CLANG) -E $(ASM_STUBS) - -o - | \
+		$(CLANG) -E -Dasm_bpf=asm - -o $@.tmp.c
+
+	$(CLANG) $(CLANG_ARGS) -c $@.tmp.c \
+		-o - | $(LLC) -march=bpf -filetype=obj -o $@
+	$(RM) $@.tmp.c
diff --git a/samples/bpf/arm64_asmstubs.h b/samples/bpf/arm64_asmstubs.h
new file mode 100644
index 000000000000..23d47dbe61b1
--- /dev/null
+++ b/samples/bpf/arm64_asmstubs.h
@@ -0,0 +1,3 @@
+/* Special handling for current_stack_pointer */
+#define __ASM_STACK_POINTER_H
+#define current_stack_pointer 0
diff --git a/samples/bpf/bpf_helpers.h b/samples/bpf/bpf_helpers.h
index 9a9c95f2c9fb..67c9c4438e4b 100644
--- a/samples/bpf/bpf_helpers.h
+++ b/samples/bpf/bpf_helpers.h
@@ -64,12 +64,12 @@ static int (*bpf_xdp_adjust_head)(void *ctx, int offset) =
  * emit BPF_LD_ABS and BPF_LD_IND instructions
  */
 struct sk_buff;
-unsigned long long load_byte(void *skb,
-			     unsigned long long off) asm("llvm.bpf.load.byte");
-unsigned long long load_half(void *skb,
-			     unsigned long long off) asm("llvm.bpf.load.half");
-unsigned long long load_word(void *skb,
-			     unsigned long long off) asm("llvm.bpf.load.word");
+unsigned long long load_byte(void *skb, unsigned long long off)
+	asm_bpf("llvm.bpf.load.byte");
+unsigned long long load_half(void *skb, unsigned long long off)
+	asm_bpf("llvm.bpf.load.half");
+unsigned long long load_word(void *skb, unsigned long long off)
+	asm_bpf("llvm.bpf.load.word");
 
 /* a helper structure used by eBPF C program
  * to describe map attributes to elf_bpf loader
diff --git a/samples/bpf/generic_asmstubs.h b/samples/bpf/generic_asmstubs.h
new file mode 100644
index 000000000000..1b9e9f5094d8
--- /dev/null
+++ b/samples/bpf/generic_asmstubs.h
@@ -0,0 +1,4 @@
+#define bpf_noop_stub
+#define asm(...) bpf_noop_stub
+#define __asm__(...) bpf_noop_stub
+#define asmvolatile(...) bpf_noop_stub