| Message ID | 20190704085224.65223-1-iii@linux.ibm.com |
|---|---|
| State | Changes Requested |
| Delegated to: | BPF Maintainers |
| Series | [v2,bpf-next] selftests/bpf: fix "alu with different scalars 1" on s390 |
On Thu, Jul 4, 2019 at 1:52 AM Ilya Leoshkevich <iii@linux.ibm.com> wrote:
>
> BPF_LDX_MEM is used to load the least significant byte of the retrieved
> test_val.index, however, on big-endian machines it ends up retrieving
> the most significant byte.
>
> Use the correct least significant byte offset on big-endian machines.
>
> Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>

Acked-by: Yonghong Song <yhs@fb.com>

> ---
>
> v1->v2:
> - use __BYTE_ORDER instead of __BYTE_ORDER__.
>
> tools/testing/selftests/bpf/verifier/value_ptr_arith.c | 4 ++++
> 1 file changed, 4 insertions(+)
>
> diff --git a/tools/testing/selftests/bpf/verifier/value_ptr_arith.c b/tools/testing/selftests/bpf/verifier/value_ptr_arith.c
> index c3de1a2c9dc5..e5940c4e8b8f 100644
> --- a/tools/testing/selftests/bpf/verifier/value_ptr_arith.c
> +++ b/tools/testing/selftests/bpf/verifier/value_ptr_arith.c
> @@ -183,7 +183,11 @@
> 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
> 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
> 	BPF_EXIT_INSN(),
> +#if __BYTE_ORDER == __LITTLE_ENDIAN
> 	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
> +#else
> +	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, sizeof(int) - 1),
> +#endif
> 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 3),
> 	BPF_MOV64_IMM(BPF_REG_2, 0),
> 	BPF_MOV64_IMM(BPF_REG_3, 0x100000),
> --
> 2.21.0
On Thu, Jul 4, 2019 at 1:53 AM Ilya Leoshkevich <iii@linux.ibm.com> wrote:
>
> BPF_LDX_MEM is used to load the least significant byte of the retrieved
> test_val.index, however, on big-endian machines it ends up retrieving
> the most significant byte.
>
> Use the correct least significant byte offset on big-endian machines.
>
> Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
> ---
>
> v1->v2:
> - use __BYTE_ORDER instead of __BYTE_ORDER__.
>
> tools/testing/selftests/bpf/verifier/value_ptr_arith.c | 4 ++++
> 1 file changed, 4 insertions(+)
>
> diff --git a/tools/testing/selftests/bpf/verifier/value_ptr_arith.c b/tools/testing/selftests/bpf/verifier/value_ptr_arith.c
> index c3de1a2c9dc5..e5940c4e8b8f 100644
> --- a/tools/testing/selftests/bpf/verifier/value_ptr_arith.c
> +++ b/tools/testing/selftests/bpf/verifier/value_ptr_arith.c
> @@ -183,7 +183,11 @@
> 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
> 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
> 	BPF_EXIT_INSN(),
> +#if __BYTE_ORDER == __LITTLE_ENDIAN
> 	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
> +#else
> +	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, sizeof(int) - 1),
> +#endif

I think tests should be arch and endian independent where possible.
In this case test_val.index is 4 bytes, and a 4-byte load should work
just as well.
On 7/16/19 12:13 AM, Alexei Starovoitov wrote:
> On Thu, Jul 4, 2019 at 1:53 AM Ilya Leoshkevich <iii@linux.ibm.com> wrote:
>>
>> BPF_LDX_MEM is used to load the least significant byte of the retrieved
>> test_val.index, however, on big-endian machines it ends up retrieving
>> the most significant byte.
>>
>> Use the correct least significant byte offset on big-endian machines.
>>
>> Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
>> ---
>>
>> v1->v2:
>> - use __BYTE_ORDER instead of __BYTE_ORDER__.
>>
>> tools/testing/selftests/bpf/verifier/value_ptr_arith.c | 4 ++++
>> 1 file changed, 4 insertions(+)
>>
>> diff --git a/tools/testing/selftests/bpf/verifier/value_ptr_arith.c b/tools/testing/selftests/bpf/verifier/value_ptr_arith.c
>> index c3de1a2c9dc5..e5940c4e8b8f 100644
>> --- a/tools/testing/selftests/bpf/verifier/value_ptr_arith.c
>> +++ b/tools/testing/selftests/bpf/verifier/value_ptr_arith.c
>> @@ -183,7 +183,11 @@
>> 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
>> 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
>> 	BPF_EXIT_INSN(),
>> +#if __BYTE_ORDER == __LITTLE_ENDIAN
>> 	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
>> +#else
>> +	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, sizeof(int) - 1),
>> +#endif
>
> I think tests should be arch and endian independent where possible.
> In this case test_val.index is 4 bytes, and a 4-byte load should work
> just as well.

Yes, agree, this should be fixed with BPF_W as the load.

Thanks,
Daniel
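The endian-independent variant Alexei and Daniel are suggesting would drop the byte-order conditional entirely and load all four bytes of test_val.index with a BPF_W load. A sketch of that change against the v2 patch follows; this is my reading of the suggestion, not a committed fix:

```diff
-#if __BYTE_ORDER == __LITTLE_ENDIAN
-	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
-#else
-	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, sizeof(int) - 1),
-#endif
+	BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0),
```

A BPF_W load places the full 32-bit value in the destination register, so the subsequent BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 3) comparison behaves identically on either byte order, with no preprocessor conditional needed.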
```diff
diff --git a/tools/testing/selftests/bpf/verifier/value_ptr_arith.c b/tools/testing/selftests/bpf/verifier/value_ptr_arith.c
index c3de1a2c9dc5..e5940c4e8b8f 100644
--- a/tools/testing/selftests/bpf/verifier/value_ptr_arith.c
+++ b/tools/testing/selftests/bpf/verifier/value_ptr_arith.c
@@ -183,7 +183,11 @@
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
+#if __BYTE_ORDER == __LITTLE_ENDIAN
 	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
+#else
+	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, sizeof(int) - 1),
+#endif
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 3),
 	BPF_MOV64_IMM(BPF_REG_2, 0),
 	BPF_MOV64_IMM(BPF_REG_3, 0x100000),
```
BPF_LDX_MEM is used to load the least significant byte of the retrieved
test_val.index, however, on big-endian machines it ends up retrieving
the most significant byte.

Use the correct least significant byte offset on big-endian machines.

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
---
v1->v2:
- use __BYTE_ORDER instead of __BYTE_ORDER__.

tools/testing/selftests/bpf/verifier/value_ptr_arith.c | 4 ++++
1 file changed, 4 insertions(+)