[bpf-next,3/3] libbpf: Use bpf_probe_read_kernel

Message ID: 20200728120059.132256-4-iii@linux.ibm.com
State: Changes Requested
Delegated to: BPF Maintainers
Series: samples/bpf: A couple s390 fixes

Commit Message

Ilya Leoshkevich July 28, 2020, noon UTC
Yet another adaptation to commit 0ebeea8ca8a4 ("bpf: Restrict
bpf_probe_read{, str}() only to archs where they work") that makes more
samples compile on s390.

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
---
 tools/lib/bpf/bpf_core_read.h | 51 ++++++++++++++++++-----------------
 tools/lib/bpf/bpf_tracing.h   | 15 +++++++----
 2 files changed, 37 insertions(+), 29 deletions(-)

Comments

Andrii Nakryiko July 28, 2020, 7:11 p.m. UTC | #1
On Tue, Jul 28, 2020 at 5:15 AM Ilya Leoshkevich <iii@linux.ibm.com> wrote:
>
> Yet another adaptation to commit 0ebeea8ca8a4 ("bpf: Restrict
> bpf_probe_read{, str}() only to archs where they work") that makes more
> samples compile on s390.
>
> Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
> ---

Sorry, we can't do this yet. This will break on older kernels that
don't yet have bpf_probe_read_kernel() implemented. Yonghong and I are
working on extending the set of CO-RE relocations in a way that would
allow bpf_probe_read_kernel() detection on the BPF side, transparently
for an application, picking either bpf_probe_read() or
bpf_probe_read_kernel(). It should be ready soon (this or next week,
most probably), though it will depend on the latest Clang. But for
now, please don't change this.

>  tools/lib/bpf/bpf_core_read.h | 51 ++++++++++++++++++-----------------
>  tools/lib/bpf/bpf_tracing.h   | 15 +++++++----
>  2 files changed, 37 insertions(+), 29 deletions(-)
>

[...]
Daniel Borkmann July 28, 2020, 9:16 p.m. UTC | #2
On 7/28/20 9:11 PM, Andrii Nakryiko wrote:
> On Tue, Jul 28, 2020 at 5:15 AM Ilya Leoshkevich <iii@linux.ibm.com> wrote:
>>
>> Yet another adaptation to commit 0ebeea8ca8a4 ("bpf: Restrict
>> bpf_probe_read{, str}() only to archs where they work") that makes more
>> samples compile on s390.
>>
>> Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
> 
> Sorry, we can't do this yet. This will break on older kernels that
> don't yet have bpf_probe_read_kernel() implemented. Yonghong and I are
> working on extending the set of CO-RE relocations in a way that would
> allow bpf_probe_read_kernel() detection on the BPF side, transparently
> for an application, picking either bpf_probe_read() or
> bpf_probe_read_kernel(). It should be ready soon (this or next week,
> most probably), though it will depend on the latest Clang. But for
> now, please don't change this.

Could you elaborate on what this means wrt the dependency on latest clang? Given
clang releases have a rather long cadence, what about existing users with current
clang releases?

>>   tools/lib/bpf/bpf_core_read.h | 51 ++++++++++++++++++-----------------
>>   tools/lib/bpf/bpf_tracing.h   | 15 +++++++----
>>   2 files changed, 37 insertions(+), 29 deletions(-)
>>
> 
> [...]
>
Andrii Nakryiko July 29, 2020, 4:06 a.m. UTC | #3
On Tue, Jul 28, 2020 at 2:16 PM Daniel Borkmann <daniel@iogearbox.net> wrote:
>
> On 7/28/20 9:11 PM, Andrii Nakryiko wrote:
> > On Tue, Jul 28, 2020 at 5:15 AM Ilya Leoshkevich <iii@linux.ibm.com> wrote:
> >>
> >> Yet another adaptation to commit 0ebeea8ca8a4 ("bpf: Restrict
> >> bpf_probe_read{, str}() only to archs where they work") that makes more
> >> samples compile on s390.
> >>
> >> Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
> >
> > Sorry, we can't do this yet. This will break on older kernels that
> > don't yet have bpf_probe_read_kernel() implemented. Yonghong and I are
> > working on extending the set of CO-RE relocations in a way that would
> > allow bpf_probe_read_kernel() detection on the BPF side, transparently
> > for an application, picking either bpf_probe_read() or
> > bpf_probe_read_kernel(). It should be ready soon (this or next week,
> > most probably), though it will depend on the latest Clang. But for
> > now, please don't change this.
>
> Could you elaborate on what this means wrt the dependency on latest clang? Given
> clang releases have a rather long cadence, what about existing users with current
> clang releases?

So the overall idea is to use something like this to do kernel reads:

static __always_inline int bpf_probe_read_universal(void *dst, u32 sz,
const void *src)
{
    if (bpf_core_type_exists(btf_bpf_probe_read_kernel))
        return bpf_probe_read_kernel(dst, sz, src);
    else
        return bpf_probe_read(dst, sz, src);
}

And then use bpf_probe_read_universal() in BPF_CORE_READ and family.

This approach relies on a few things:

1. each BPF helper has a corresponding btf_<helper-name> type defined for it
2. bpf_core_type_exists(some_type) returns 0 or 1, depending on whether
the specified type is found in kernel BTF (so it needs kernel BTF, of
course). This is the part Yonghong and I are working on at the
moment.
3. the verifier's dead code elimination, which will leave only the
bpf_probe_read() or bpf_probe_read_kernel() call and remove the
other one. So on older kernels, there will never be an unsupported
call to bpf_probe_read_kernel().


The new type existence relocation requires the latest Clang. So the
way to deal with older Clangs would be to just fall back to
bpf_probe_read() if we detect that Clang is too old and can't emit the
necessary relocation.

If that's not an acceptable plan, then one can "parameterize" the
BPF_CORE_READ macro family by re-defining the bpf_core_read() macro. Right
now it's defined as:

#define bpf_core_read(dst, sz, src) \
    bpf_probe_read(dst, sz, (const void *)__builtin_preserve_access_index(src))

Re-defining it in terms of bpf_probe_read_kernel is trivial, but I
can't do it for BPF_CORE_READ, because it will break all the users of
bpf_core_read.h that run on older kernels.
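
For reference, a minimal sketch of that trivial redefinition (assuming
the target kernel already provides bpf_probe_read_kernel(), which is
exactly what the generic header can't assume) would be:

#define bpf_core_read(dst, sz, src) \
    bpf_probe_read_kernel(dst, sz, (const void *)__builtin_preserve_access_index(src))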


>
> >>   tools/lib/bpf/bpf_core_read.h | 51 ++++++++++++++++++-----------------
> >>   tools/lib/bpf/bpf_tracing.h   | 15 +++++++----
> >>   2 files changed, 37 insertions(+), 29 deletions(-)
> >>
> >
> > [...]
> >
>
Daniel Borkmann July 29, 2020, 9:01 p.m. UTC | #4
On 7/29/20 6:06 AM, Andrii Nakryiko wrote:
> On Tue, Jul 28, 2020 at 2:16 PM Daniel Borkmann <daniel@iogearbox.net> wrote:
>> On 7/28/20 9:11 PM, Andrii Nakryiko wrote:
>>> On Tue, Jul 28, 2020 at 5:15 AM Ilya Leoshkevich <iii@linux.ibm.com> wrote:
>>>>
>>>> Yet another adaptation to commit 0ebeea8ca8a4 ("bpf: Restrict
>>>> bpf_probe_read{, str}() only to archs where they work") that makes more
>>>> samples compile on s390.
>>>>
>>>> Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
>>>
>>> Sorry, we can't do this yet. This will break on older kernels that
>>> don't yet have bpf_probe_read_kernel() implemented. Yonghong and I are
>>> working on extending the set of CO-RE relocations in a way that would
>>> allow bpf_probe_read_kernel() detection on the BPF side, transparently
>>> for an application, picking either bpf_probe_read() or
>>> bpf_probe_read_kernel(). It should be ready soon (this or next week,
>>> most probably), though it will depend on the latest Clang. But for
>>> now, please don't change this.
>>
>> Could you elaborate on what this means wrt the dependency on latest clang? Given
>> clang releases have a rather long cadence, what about existing users with current
>> clang releases?
> 
> So the overall idea is to use something like this to do kernel reads:
> 
> static __always_inline int bpf_probe_read_universal(void *dst, u32 sz,
> const void *src)
> {
>      if (bpf_core_type_exists(btf_bpf_probe_read_kernel))
>          return bpf_probe_read_kernel(dst, sz, src);
>      else
>          return bpf_probe_read(dst, sz, src);
> }
> 
> And then use bpf_probe_read_universal() in BPF_CORE_READ and family.
> 
> This approach relies on a few things:
>
> 1. each BPF helper has a corresponding btf_<helper-name> type defined for it
> 2. bpf_core_type_exists(some_type) returns 0 or 1, depending on whether
> the specified type is found in kernel BTF (so it needs kernel BTF, of
> course). This is the part Yonghong and I are working on at the
> moment.
> 3. the verifier's dead code elimination, which will leave only the
> bpf_probe_read() or bpf_probe_read_kernel() call and remove the
> other one. So on older kernels, there will never be an unsupported
> call to bpf_probe_read_kernel().
>
> The new type existence relocation requires the latest Clang. So the
> way to deal with older Clangs would be to just fall back to
> bpf_probe_read() if we detect that Clang is too old and can't emit the
> necessary relocation.

Okay, seems reasonable overall. One question though: couldn't libbpf transparently
fix up the selection of bpf_probe_read() vs bpf_probe_read_kernel()? E.g. it would
probe the kernel for whether bpf_probe_read_kernel() is available, and if it is, it
would rewrite the raw call number in the instruction from bpf_probe_read() to
the one for bpf_probe_read_kernel()? I guess the question then becomes whether the
original use of bpf_probe_read() was related to CO-RE. But I think this could also
be overcome by adding a fake helper signature in libbpf with an unreasonably high
number that is dedicated to probing mem via CO-RE, and then libbpf picks the right
underlying helper call number for the insn. That avoids fiddling with macros and
the need for a new clang version, no (unless I'm missing something)?

> If that's not an acceptable plan, then one can "parameterize" the
> BPF_CORE_READ macro family by re-defining the bpf_core_read() macro. Right
> now it's defined as:
> 
> #define bpf_core_read(dst, sz, src) \
>      bpf_probe_read(dst, sz, (const void *)__builtin_preserve_access_index(src))
> 
> Re-defining it in terms of bpf_probe_read_kernel is trivial, but I
> can't do it for BPF_CORE_READ, because it will break all the users of
> bpf_core_read.h that run on older kernels.
> 
> 
>>
>>>>    tools/lib/bpf/bpf_core_read.h | 51 ++++++++++++++++++-----------------
>>>>    tools/lib/bpf/bpf_tracing.h   | 15 +++++++----
>>>>    2 files changed, 37 insertions(+), 29 deletions(-)
>>>>
>>>
>>> [...]
>>>
>>
Andrii Nakryiko July 29, 2020, 9:36 p.m. UTC | #5
On Wed, Jul 29, 2020 at 2:01 PM Daniel Borkmann <daniel@iogearbox.net> wrote:
>
> On 7/29/20 6:06 AM, Andrii Nakryiko wrote:
> > On Tue, Jul 28, 2020 at 2:16 PM Daniel Borkmann <daniel@iogearbox.net> wrote:
> >> On 7/28/20 9:11 PM, Andrii Nakryiko wrote:
> >>> On Tue, Jul 28, 2020 at 5:15 AM Ilya Leoshkevich <iii@linux.ibm.com> wrote:
> >>>>
> >>>> Yet another adaptation to commit 0ebeea8ca8a4 ("bpf: Restrict
> >>>> bpf_probe_read{, str}() only to archs where they work") that makes more
> >>>> samples compile on s390.
> >>>>
> >>>> Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
> >>>
> >>> Sorry, we can't do this yet. This will break on older kernels that
> >>> don't yet have bpf_probe_read_kernel() implemented. Yonghong and I are
> >>> working on extending the set of CO-RE relocations in a way that would
> >>> allow bpf_probe_read_kernel() detection on the BPF side, transparently
> >>> for an application, picking either bpf_probe_read() or
> >>> bpf_probe_read_kernel(). It should be ready soon (this or next week,
> >>> most probably), though it will depend on the latest Clang. But for
> >>> now, please don't change this.
> >>
> >> Could you elaborate on what this means wrt the dependency on latest clang? Given
> >> clang releases have a rather long cadence, what about existing users with current
> >> clang releases?
> >
> > So the overall idea is to use something like this to do kernel reads:
> >
> > static __always_inline int bpf_probe_read_universal(void *dst, u32 sz,
> > const void *src)
> > {
> >      if (bpf_core_type_exists(btf_bpf_probe_read_kernel))
> >          return bpf_probe_read_kernel(dst, sz, src);
> >      else
> >          return bpf_probe_read(dst, sz, src);
> > }
> >
> > And then use bpf_probe_read_universal() in BPF_CORE_READ and family.
> >
> > This approach relies on a few things:
> >
> > 1. each BPF helper has a corresponding btf_<helper-name> type defined for it
> > 2. bpf_core_type_exists(some_type) returns 0 or 1, depending on whether
> > the specified type is found in kernel BTF (so it needs kernel BTF, of
> > course). This is the part Yonghong and I are working on at the
> > moment.
> > 3. the verifier's dead code elimination, which will leave only the
> > bpf_probe_read() or bpf_probe_read_kernel() call and remove the
> > other one. So on older kernels, there will never be an unsupported
> > call to bpf_probe_read_kernel().
> >
> > The new type existence relocation requires the latest Clang. So the
> > way to deal with older Clangs would be to just fall back to
> > bpf_probe_read() if we detect that Clang is too old and can't emit the
> > necessary relocation.
>
> Okay, seems reasonable overall. One question though: couldn't libbpf transparently
> fix up the selection of bpf_probe_read() vs bpf_probe_read_kernel()? E.g. it would
> probe the kernel for whether bpf_probe_read_kernel() is available, and if it is, it
> would rewrite the raw call number in the instruction from bpf_probe_read() to
> the one for bpf_probe_read_kernel()? I guess the question then becomes whether the
> original use of bpf_probe_read() was related to CO-RE. But I think this could also
> be overcome by adding a fake helper signature in libbpf with an unreasonably high
> number that is dedicated to probing mem via CO-RE, and then libbpf picks the right
> underlying helper call number for the insn. That avoids fiddling with macros and
> the need for a new clang version, no (unless I'm missing something)?

Libbpf could do it, but I'm a bit worried that unconditionally
changing bpf_probe_read() into bpf_probe_read_kernel() is going to be
wrong in some cases. If that weren't the case, why wouldn't we just
re-purpose bpf_probe_read() into bpf_probe_read_kernel() in the kernel
itself, right?

But fear not about old Clang support. The bpf_core_type_exists() will
use a new built-in, and I'll be able to detect its presence with a
__has_builtin(X) check in Clang. So it will be completely transparent
to users in the end.
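
As a rough sketch (the built-in and flag names below are hypothetical,
since none of this has landed yet):

#if __has_builtin(__builtin_preserve_type_info)
/* new Clang: emits a CO-RE type-existence relocation for libbpf */
#define bpf_core_type_exists(type) \
    __builtin_preserve_type_info(*(typeof(type) *)0, 0 /* TYPE_EXISTS */)
#else
/* old Clang: no such built-in, so callers fall back to bpf_probe_read() */
#define bpf_core_type_exists(type) 0
#endif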

>
> > If that's not an acceptable plan, then one can "parameterize" the
> > BPF_CORE_READ macro family by re-defining the bpf_core_read() macro. Right
> > now it's defined as:
> >
> > #define bpf_core_read(dst, sz, src) \
> >      bpf_probe_read(dst, sz, (const void *)__builtin_preserve_access_index(src))
> >
> > Re-defining it in terms of bpf_probe_read_kernel is trivial, but I
> > can't do it for BPF_CORE_READ, because it will break all the users of
> > bpf_core_read.h that run on older kernels.
> >
> >
> >>
> >>>>    tools/lib/bpf/bpf_core_read.h | 51 ++++++++++++++++++-----------------
> >>>>    tools/lib/bpf/bpf_tracing.h   | 15 +++++++----
> >>>>    2 files changed, 37 insertions(+), 29 deletions(-)
> >>>>
> >>>
> >>> [...]
> >>>
> >>
>
Daniel Borkmann July 29, 2020, 9:54 p.m. UTC | #6
On 7/29/20 11:36 PM, Andrii Nakryiko wrote:
> On Wed, Jul 29, 2020 at 2:01 PM Daniel Borkmann <daniel@iogearbox.net> wrote:
>> On 7/29/20 6:06 AM, Andrii Nakryiko wrote:
>>> On Tue, Jul 28, 2020 at 2:16 PM Daniel Borkmann <daniel@iogearbox.net> wrote:
>>>> On 7/28/20 9:11 PM, Andrii Nakryiko wrote:
>>>>> On Tue, Jul 28, 2020 at 5:15 AM Ilya Leoshkevich <iii@linux.ibm.com> wrote:
>>>>>>
>>>>>> Yet another adaptation to commit 0ebeea8ca8a4 ("bpf: Restrict
>>>>>> bpf_probe_read{, str}() only to archs where they work") that makes more
>>>>>> samples compile on s390.
>>>>>>
>>>>>> Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
>>>>>
>>>>> Sorry, we can't do this yet. This will break on older kernels that
>>>>> don't yet have bpf_probe_read_kernel() implemented. Yonghong and I are
>>>>> working on extending the set of CO-RE relocations in a way that would
>>>>> allow bpf_probe_read_kernel() detection on the BPF side, transparently
>>>>> for an application, picking either bpf_probe_read() or
>>>>> bpf_probe_read_kernel(). It should be ready soon (this or next week,
>>>>> most probably), though it will depend on the latest Clang. But for
>>>>> now, please don't change this.
>>>>
>>>> Could you elaborate on what this means wrt the dependency on latest clang? Given
>>>> clang releases have a rather long cadence, what about existing users with current
>>>> clang releases?
>>>
>>> So the overall idea is to use something like this to do kernel reads:
>>>
>>> static __always_inline int bpf_probe_read_universal(void *dst, u32 sz,
>>> const void *src)
>>> {
>>>       if (bpf_core_type_exists(btf_bpf_probe_read_kernel))
>>>           return bpf_probe_read_kernel(dst, sz, src);
>>>       else
>>>           return bpf_probe_read(dst, sz, src);
>>> }
>>>
>>> And then use bpf_probe_read_universal() in BPF_CORE_READ and family.
>>>
>>> This approach relies on a few things:
>>>
>>> 1. each BPF helper has a corresponding btf_<helper-name> type defined for it
>>> 2. bpf_core_type_exists(some_type) returns 0 or 1, depending on whether
>>> the specified type is found in kernel BTF (so it needs kernel BTF, of
>>> course). This is the part Yonghong and I are working on at the
>>> moment.
>>> 3. the verifier's dead code elimination, which will leave only the
>>> bpf_probe_read() or bpf_probe_read_kernel() call and remove the
>>> other one. So on older kernels, there will never be an unsupported
>>> call to bpf_probe_read_kernel().
>>>
>>> The new type existence relocation requires the latest Clang. So the
>>> way to deal with older Clangs would be to just fall back to
>>> bpf_probe_read() if we detect that Clang is too old and can't emit the
>>> necessary relocation.
>>
>> Okay, seems reasonable overall. One question though: couldn't libbpf transparently
>> fix up the selection of bpf_probe_read() vs bpf_probe_read_kernel()? E.g. it would
>> probe the kernel for whether bpf_probe_read_kernel() is available, and if it is, it
>> would rewrite the raw call number in the instruction from bpf_probe_read() to
>> the one for bpf_probe_read_kernel()? I guess the question then becomes whether the
>> original use of bpf_probe_read() was related to CO-RE. But I think this could also
>> be overcome by adding a fake helper signature in libbpf with an unreasonably high
>> number that is dedicated to probing mem via CO-RE, and then libbpf picks the right
>> underlying helper call number for the insn. That avoids fiddling with macros and
>> the need for a new clang version, no (unless I'm missing something)?
> 
> Libbpf could do it, but I'm a bit worried that unconditionally
> changing bpf_probe_read() into bpf_probe_read_kernel() is going to be
> wrong in some cases. If that weren't the case, why wouldn't we just
> re-purpose bpf_probe_read() into bpf_probe_read_kernel() in the kernel
> itself, right?

Yes, that is correct, but I mentioned above that this new 'fake' helper call number
that libbpf would be fixing up would only be used for bpf_probe_read{,str}() inside
bpf_core_read.h.

Small example, bpf_core_read.h would be changed to (just an extract):

diff --git a/tools/lib/bpf/bpf_core_read.h b/tools/lib/bpf/bpf_core_read.h
index eae5cccff761..4bddb2ddf3f0 100644
--- a/tools/lib/bpf/bpf_core_read.h
+++ b/tools/lib/bpf/bpf_core_read.h
@@ -115,7 +115,7 @@ enum bpf_field_info_kind {
   * (local) BTF, used to record relocation.
   */
  #define bpf_core_read(dst, sz, src)                                        \
-       bpf_probe_read(dst, sz,                                             \
+       bpf_probe_read_selector(dst, sz,                                                    \
                        (const void *)__builtin_preserve_access_index(src))

  /*
@@ -124,7 +124,7 @@ enum bpf_field_info_kind {
   * argument.
   */
  #define bpf_core_read_str(dst, sz, src)                                            \
-       bpf_probe_read_str(dst, sz,                                         \
+       bpf_probe_read_str_selector(dst, sz,                                        \
                            (const void *)__builtin_preserve_access_index(src))

  #define ___concat(a, b) a ## b

And bpf_probe_read_{,str_}selector would be defined as e.g. ...

static long (*bpf_probe_read_selector)(void *dst, __u32 size, const void *unsafe_ptr) = (void *) -1;
static long (*bpf_probe_read_str_selector)(void *dst, __u32 size, const void *unsafe_ptr) = (void *) -2;

... where libbpf would do the fix up to either 4 or 45 for insn->imm. But it's still
confined to usage in bpf_core_read.h when the CO-RE macros are used.
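
A rough sketch of the libbpf side of this (the function name and the
feature probe are hypothetical; struct bpf_insn and the BPF_FUNC_* IDs
come from <linux/bpf.h>):

/* rewrite the fake selector IDs to real helper IDs before prog load */
static void fixup_selector_calls(struct bpf_insn *insns, size_t insn_cnt,
				 bool has_probe_read_kernel)
{
	size_t i;

	for (i = 0; i < insn_cnt; i++) {
		struct bpf_insn *insn = &insns[i];

		/* helper calls only, not BPF-to-BPF subprog calls */
		if (insn->code != (BPF_JMP | BPF_CALL) || insn->src_reg)
			continue;

		if (insn->imm == -1)		/* bpf_probe_read_selector */
			insn->imm = has_probe_read_kernel ?
				    BPF_FUNC_probe_read_kernel :
				    BPF_FUNC_probe_read;
		else if (insn->imm == -2)	/* bpf_probe_read_str_selector */
			insn->imm = has_probe_read_kernel ?
				    BPF_FUNC_probe_read_kernel_str :
				    BPF_FUNC_probe_read_str;
	}
}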

> But fear not about old Clang support. The bpf_core_type_exists() will
> use a new built-in, and I'll be able to detect its presence with a
> __has_builtin(X) check in Clang. So it will be completely transparent
> to users in the end.

Ok.

>>> If that's not an acceptable plan, then one can "parameterize" the
>>> BPF_CORE_READ macro family by re-defining the bpf_core_read() macro. Right
>>> now it's defined as:
>>>
>>> #define bpf_core_read(dst, sz, src) \
>>>       bpf_probe_read(dst, sz, (const void *)__builtin_preserve_access_index(src))
>>>
>>> Re-defining it in terms of bpf_probe_read_kernel is trivial, but I
>>> can't do it for BPF_CORE_READ, because it will break all the users of
>>> bpf_core_read.h that run on older kernels.
>>>
>>>
>>>>
>>>>>>     tools/lib/bpf/bpf_core_read.h | 51 ++++++++++++++++++-----------------
>>>>>>     tools/lib/bpf/bpf_tracing.h   | 15 +++++++----
>>>>>>     2 files changed, 37 insertions(+), 29 deletions(-)
>>>>>>
>>>>>
>>>>> [...]
>>>>>
>>>>
>>
Andrii Nakryiko July 29, 2020, 10:05 p.m. UTC | #7
On Wed, Jul 29, 2020 at 2:54 PM Daniel Borkmann <daniel@iogearbox.net> wrote:
>
> On 7/29/20 11:36 PM, Andrii Nakryiko wrote:
> > On Wed, Jul 29, 2020 at 2:01 PM Daniel Borkmann <daniel@iogearbox.net> wrote:
> >> On 7/29/20 6:06 AM, Andrii Nakryiko wrote:
> >>> On Tue, Jul 28, 2020 at 2:16 PM Daniel Borkmann <daniel@iogearbox.net> wrote:
> >>>> On 7/28/20 9:11 PM, Andrii Nakryiko wrote:
> >>>>> On Tue, Jul 28, 2020 at 5:15 AM Ilya Leoshkevich <iii@linux.ibm.com> wrote:
> >>>>>>
> >>>>>> Yet another adaptation to commit 0ebeea8ca8a4 ("bpf: Restrict
> >>>>>> bpf_probe_read{, str}() only to archs where they work") that makes more
> >>>>>> samples compile on s390.
> >>>>>>
> >>>>>> Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
> >>>>>
> >>>>> Sorry, we can't do this yet. This will break on older kernels that
> >>>>> don't yet have bpf_probe_read_kernel() implemented. Yonghong and I are
> >>>>> working on extending the set of CO-RE relocations in a way that would
> >>>>> allow bpf_probe_read_kernel() detection on the BPF side, transparently
> >>>>> for an application, picking either bpf_probe_read() or
> >>>>> bpf_probe_read_kernel(). It should be ready soon (this or next week,
> >>>>> most probably), though it will depend on the latest Clang. But for
> >>>>> now, please don't change this.
> >>>>
> >>>> Could you elaborate on what this means wrt the dependency on latest clang? Given
> >>>> clang releases have a rather long cadence, what about existing users with current
> >>>> clang releases?
> >>>
> >>> So the overall idea is to use something like this to do kernel reads:
> >>>
> >>> static __always_inline int bpf_probe_read_universal(void *dst, u32 sz,
> >>> const void *src)
> >>> {
> >>>       if (bpf_core_type_exists(btf_bpf_probe_read_kernel))
> >>>           return bpf_probe_read_kernel(dst, sz, src);
> >>>       else
> >>>           return bpf_probe_read(dst, sz, src);
> >>> }
> >>>
> >>> And then use bpf_probe_read_universal() in BPF_CORE_READ and family.
> >>>
> >>> This approach relies on a few things:
> >>>
> >>> 1. each BPF helper has a corresponding btf_<helper-name> type defined for it
> >>> 2. bpf_core_type_exists(some_type) returns 0 or 1, depending on whether
> >>> the specified type is found in kernel BTF (so it needs kernel BTF, of
> >>> course). This is the part Yonghong and I are working on at the
> >>> moment.
> >>> 3. the verifier's dead code elimination, which will leave only the
> >>> bpf_probe_read() or bpf_probe_read_kernel() call and remove the
> >>> other one. So on older kernels, there will never be an unsupported
> >>> call to bpf_probe_read_kernel().
> >>>
> >>> The new type existence relocation requires the latest Clang. So the
> >>> way to deal with older Clangs would be to just fall back to
> >>> bpf_probe_read() if we detect that Clang is too old and can't emit the
> >>> necessary relocation.
> >>
> >> Okay, seems reasonable overall. One question though: couldn't libbpf transparently
> >> fix up the selection of bpf_probe_read() vs bpf_probe_read_kernel()? E.g. it would
> >> probe the kernel for whether bpf_probe_read_kernel() is available, and if it is, it
> >> would rewrite the raw call number in the instruction from bpf_probe_read() to
> >> the one for bpf_probe_read_kernel()? I guess the question then becomes whether the
> >> original use of bpf_probe_read() was related to CO-RE. But I think this could also
> >> be overcome by adding a fake helper signature in libbpf with an unreasonably high
> >> number that is dedicated to probing mem via CO-RE, and then libbpf picks the right
> >> underlying helper call number for the insn. That avoids fiddling with macros and
> >> the need for a new clang version, no (unless I'm missing something)?
> >
> > Libbpf could do it, but I'm a bit worried that unconditionally
> > changing bpf_probe_read() into bpf_probe_read_kernel() is going to be
> > wrong in some cases. If that weren't the case, why wouldn't we just
> > re-purpose bpf_probe_read() into bpf_probe_read_kernel() in the kernel
> > itself, right?
>
> Yes, that is correct, but I mentioned above that this new 'fake' helper call number
> that libbpf would be fixing up would only be used for bpf_probe_read{,str}() inside
> bpf_core_read.h.
>
> Small example, bpf_core_read.h would be changed to (just an extract):
>
> diff --git a/tools/lib/bpf/bpf_core_read.h b/tools/lib/bpf/bpf_core_read.h
> index eae5cccff761..4bddb2ddf3f0 100644
> --- a/tools/lib/bpf/bpf_core_read.h
> +++ b/tools/lib/bpf/bpf_core_read.h
> @@ -115,7 +115,7 @@ enum bpf_field_info_kind {
>    * (local) BTF, used to record relocation.
>    */
>   #define bpf_core_read(dst, sz, src)                                        \
> -       bpf_probe_read(dst, sz,                                             \
> +       bpf_probe_read_selector(dst, sz,                                                    \
>                         (const void *)__builtin_preserve_access_index(src))
>
>   /*
> @@ -124,7 +124,7 @@ enum bpf_field_info_kind {
>    * argument.
>    */
>   #define bpf_core_read_str(dst, sz, src)                                            \
> -       bpf_probe_read_str(dst, sz,                                         \
> +       bpf_probe_read_str_selector(dst, sz,                                        \
>                             (const void *)__builtin_preserve_access_index(src))
>
>   #define ___concat(a, b) a ## b
>
> And bpf_probe_read_{,str_}selector would be defined as e.g. ...
>
> static long (*bpf_probe_read_selector)(void *dst, __u32 size, const void *unsafe_ptr) = (void *) -1;
> static long (*bpf_probe_read_str_selector)(void *dst, __u32 size, const void *unsafe_ptr) = (void *) -2;
>
> ... where libbpf would do the fix up to either 4 or 45 for insn->imm. But it's still
> confined to usage in bpf_core_read.h when the CO-RE macros are used.

Ah, I see. Yeah, I suppose that would work as well. Do you prefer me
to go this way?

>
> > But fear not about old Clang support. The bpf_core_type_exists() will
> > use a new built-in, and I'll be able to detect its presence with a
> > __has_builtin(X) check in Clang. So it will be completely transparent
> > to users in the end.
>
> Ok.
>
> >>> If that's not an acceptable plan, then one can "parameterize" the
> >>> BPF_CORE_READ macro family by re-defining the bpf_core_read() macro. Right
> >>> now it's defined as:
> >>>
> >>> #define bpf_core_read(dst, sz, src) \
> >>>       bpf_probe_read(dst, sz, (const void *)__builtin_preserve_access_index(src))
> >>>
> >>> Re-defining it in terms of bpf_probe_read_kernel is trivial, but I
> >>> can't do it for BPF_CORE_READ, because it will break all the users of
> >>> bpf_core_read.h that run on older kernels.
> >>>
> >>>
> >>>>
> >>>>>>     tools/lib/bpf/bpf_core_read.h | 51 ++++++++++++++++++-----------------
> >>>>>>     tools/lib/bpf/bpf_tracing.h   | 15 +++++++----
> >>>>>>     2 files changed, 37 insertions(+), 29 deletions(-)
> >>>>>>
> >>>>>
> >>>>> [...]
> >>>>>
> >>>>
> >>
>
Daniel Borkmann July 29, 2020, 10:12 p.m. UTC | #8
On 7/30/20 12:05 AM, Andrii Nakryiko wrote:
> On Wed, Jul 29, 2020 at 2:54 PM Daniel Borkmann <daniel@iogearbox.net> wrote:
>> On 7/29/20 11:36 PM, Andrii Nakryiko wrote:
>>> On Wed, Jul 29, 2020 at 2:01 PM Daniel Borkmann <daniel@iogearbox.net> wrote:
>>>> On 7/29/20 6:06 AM, Andrii Nakryiko wrote:
>>>>> On Tue, Jul 28, 2020 at 2:16 PM Daniel Borkmann <daniel@iogearbox.net> wrote:
>>>>>> On 7/28/20 9:11 PM, Andrii Nakryiko wrote:
>>>>>>> On Tue, Jul 28, 2020 at 5:15 AM Ilya Leoshkevich <iii@linux.ibm.com> wrote:
>>>>>>>>
>>>>>>>> Yet another adaptation to commit 0ebeea8ca8a4 ("bpf: Restrict
>>>>>>>> bpf_probe_read{, str}() only to archs where they work") that makes more
>>>>>>>> samples compile on s390.
>>>>>>>>
>>>>>>>> Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
>>>>>>>
>>>>>>> Sorry, we can't do this yet. This will break on older kernels that
>>>>>>> don't yet have bpf_probe_read_kernel() implemented. Yonghong and I are
>>>>>>> working on extending the set of CO-RE relocations in a way that would
>>>>>>> allow bpf_probe_read_kernel() detection on the BPF side, transparently
>>>>>>> for an application, picking either bpf_probe_read() or
>>>>>>> bpf_probe_read_kernel(). It should be ready soon (this or next week,
>>>>>>> most probably), though it will depend on the latest Clang. But for
>>>>>>> now, please don't change this.
>>>>>>
>>>>>> Could you elaborate on what this means wrt the dependency on latest clang? Given
>>>>>> clang releases have a rather long cadence, what about existing users with current
>>>>>> clang releases?
>>>>>
>>>>> So the overall idea is to use something like this to do kernel reads:
>>>>>
>>>>> static __always_inline int bpf_probe_read_universal(void *dst, u32 sz,
>>>>> const void *src)
>>>>> {
>>>>>        if (bpf_core_type_exists(btf_bpf_probe_read_kernel))
>>>>>            return bpf_probe_read_kernel(dst, sz, src);
>>>>>        else
>>>>>            return bpf_probe_read(dst, sz, src);
>>>>> }
>>>>>
>>>>> And then use bpf_probe_read_universal() in BPF_CORE_READ and family.
>>>>>
>>>>> This approach relies on a few things:
>>>>>
>>>>> 1. each BPF helper has a corresponding btf_<helper-name> type defined for it
>>>>> 2. bpf_core_type_exists(some_type) returns 0 or 1, depending on whether
>>>>> the specified type is found in kernel BTF (so it needs kernel BTF, of
>>>>> course). This is the part Yonghong and I are working on at the
>>>>> moment.
>>>>> 3. the verifier's dead code elimination, which will leave only the
>>>>> bpf_probe_read() or bpf_probe_read_kernel() call and remove the
>>>>> other one. So on older kernels, there will never be an unsupported
>>>>> call to bpf_probe_read_kernel().
>>>>>
>>>>> The new type existence relocation requires the latest Clang. So the
>>>>> way to deal with older Clangs would be to just fall back to
>>>>> bpf_probe_read() if we detect that Clang is too old and can't emit the
>>>>> necessary relocation.
>>>>
>>>> Okay, seems reasonable overall. One question though: couldn't libbpf transparently
>>>> fix up the selection of bpf_probe_read() vs bpf_probe_read_kernel()? E.g. it would
>>>> probe the kernel for whether bpf_probe_read_kernel() is available, and if it is, it
>>>> would rewrite the raw call number in the instruction from bpf_probe_read() to
>>>> the one for bpf_probe_read_kernel()? I guess the question then becomes whether the
>>>> original use of bpf_probe_read() was related to CO-RE. But I think this could also
>>>> be overcome by adding a fake helper signature in libbpf with an unreasonably high
>>>> number that is dedicated to probing mem via CO-RE, and then libbpf picks the right
>>>> underlying helper call number for the insn. That avoids fiddling with macros and
>>>> the need for a new clang version, no (unless I'm missing something)?
>>>
>>> Libbpf could do it, but I'm a bit worried that unconditionally
>>> changing bpf_probe_read() into bpf_probe_read_kernel() is going to be
>>> wrong in some cases. If that weren't the case, why wouldn't we just
>>> re-purpose bpf_probe_read() into bpf_probe_read_kernel() in the kernel
>>> itself, right?
>>
>> Yes, that is correct, but I mentioned above that this new 'fake' helper call number
>> that libbpf would be fixing up would only be used for bpf_probe_read{,str}() inside
>> bpf_core_read.h.
>>
>> Small example, bpf_core_read.h would be changed to (just an extract):
>>
>> diff --git a/tools/lib/bpf/bpf_core_read.h b/tools/lib/bpf/bpf_core_read.h
>> index eae5cccff761..4bddb2ddf3f0 100644
>> --- a/tools/lib/bpf/bpf_core_read.h
>> +++ b/tools/lib/bpf/bpf_core_read.h
>> @@ -115,7 +115,7 @@ enum bpf_field_info_kind {
>>     * (local) BTF, used to record relocation.
>>     */
>>    #define bpf_core_read(dst, sz, src)                                        \
>> -       bpf_probe_read(dst, sz,                                             \
>> +       bpf_probe_read_selector(dst, sz,                                                    \
>>                          (const void *)__builtin_preserve_access_index(src))
>>
>>    /*
>> @@ -124,7 +124,7 @@ enum bpf_field_info_kind {
>>     * argument.
>>     */
>>    #define bpf_core_read_str(dst, sz, src)                                            \
>> -       bpf_probe_read_str(dst, sz,                                         \
>> +       bpf_probe_read_str_selector(dst, sz,                                        \
>>                              (const void *)__builtin_preserve_access_index(src))
>>
>>    #define ___concat(a, b) a ## b
>>
>> And bpf_probe_read_{,str_}selector would be defined as e.g. ...
>>
>> static long (*bpf_probe_read_selector)(void *dst, __u32 size, const void *unsafe_ptr) = (void *) -1;
>> static long (*bpf_probe_read_str_selector)(void *dst, __u32 size, const void *unsafe_ptr) = (void *) -2;
>>
>> ... where libbpf would do the fix up to either 4 or 45 for insn->imm. But it's still
>> confined to usage in bpf_core_read.h when the CO-RE macros are used.
> 
> Ah, I see. Yeah, I suppose that would work as well. Do you prefer me
> to go this way?

I would suggest we should try this path given this can be used with any clang version
that has the BPF backend, not just latest upstream git.

Thanks,
Daniel
Andrii Nakryiko July 29, 2020, 10:17 p.m. UTC | #9
On Wed, Jul 29, 2020 at 3:12 PM Daniel Borkmann <daniel@iogearbox.net> wrote:
>
> On 7/30/20 12:05 AM, Andrii Nakryiko wrote:
> > On Wed, Jul 29, 2020 at 2:54 PM Daniel Borkmann <daniel@iogearbox.net> wrote:
> >> On 7/29/20 11:36 PM, Andrii Nakryiko wrote:
> >>> On Wed, Jul 29, 2020 at 2:01 PM Daniel Borkmann <daniel@iogearbox.net> wrote:
> >>>> On 7/29/20 6:06 AM, Andrii Nakryiko wrote:
> >>>>> On Tue, Jul 28, 2020 at 2:16 PM Daniel Borkmann <daniel@iogearbox.net> wrote:
> >>>>>> On 7/28/20 9:11 PM, Andrii Nakryiko wrote:
> >>>>>>> On Tue, Jul 28, 2020 at 5:15 AM Ilya Leoshkevich <iii@linux.ibm.com> wrote:
> >>>>>>>>
> >>>>>>>> Yet another adaptation to commit 0ebeea8ca8a4 ("bpf: Restrict
> >>>>>>>> bpf_probe_read{, str}() only to archs where they work") that makes more
> >>>>>>>> samples compile on s390.
> >>>>>>>>
> >>>>>>>> Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
> >>>>>>>
> >>>>>>> Sorry, we can't do this yet. This will break on older kernels that
> >>>>>>> don't yet have bpf_probe_read_kernel() implemented. Yonghong and I are
> >>>>>>> working on extending the set of CO-RE relocations in a way that would
> >>>>>>> allow bpf_probe_read_kernel() detection on the BPF side, transparently
> >>>>>>> for an application, picking either bpf_probe_read() or
> >>>>>>> bpf_probe_read_kernel(). It should be ready soon (this or next week,
> >>>>>>> most probably), though it will depend on the latest Clang. But for
> >>>>>>> now, please don't change this.
> >>>>>>
> >>>>>> Could you elaborate on what this means wrt the dependency on latest clang? Given
> >>>>>> clang releases have a rather long cadence, what about existing users with current
> >>>>>> clang releases?
> >>>>>
> >>>>> So the overall idea is to use something like this to do kernel reads:
> >>>>>
> >>>>> static __always_inline int bpf_probe_read_universal(void *dst, u32 sz,
> >>>>> const void *src)
> >>>>> {
> >>>>>        if (bpf_core_type_exists(btf_bpf_probe_read_kernel))
> >>>>>            return bpf_probe_read_kernel(dst, sz, src);
> >>>>>        else
> >>>>>            return bpf_probe_read(dst, sz, src);
> >>>>> }
> >>>>>
> >>>>> And then use bpf_probe_read_universal() in BPF_CORE_READ and family.
> >>>>>
> >>>>> This approach relies on a few things:
> >>>>>
> >>>>> 1. each BPF helper has a corresponding btf_<helper-name> type defined for it
> >>>>> 2. bpf_core_type_exists(some_type) returns 0 or 1, depending on whether
> >>>>> the specified type is found in kernel BTF (so it needs kernel BTF, of
> >>>>> course). This is the part Yonghong and I are working on at the
> >>>>> moment.
> >>>>> 3. the verifier's dead code elimination, which will leave only the
> >>>>> bpf_probe_read() or bpf_probe_read_kernel() call and remove the
> >>>>> other one. So on older kernels, there will never be an unsupported
> >>>>> call to bpf_probe_read_kernel().
> >>>>>
> >>>>> The new type existence relocation requires the latest Clang. So the
> >>>>> way to deal with older Clangs would be to just fall back to
> >>>>> bpf_probe_read() if we detect that Clang is too old and can't emit the
> >>>>> necessary relocation.
> >>>>
> >>>> Okay, seems reasonable overall. One question though: couldn't libbpf transparently
> >>>> fix up the selection of bpf_probe_read() vs bpf_probe_read_kernel()? E.g. it would
> >>>> probe the kernel for whether bpf_probe_read_kernel() is available, and if it is, it
> >>>> would rewrite the raw call number in the instruction from bpf_probe_read() to
> >>>> the one for bpf_probe_read_kernel()? I guess the question then becomes whether the
> >>>> original use of bpf_probe_read() was related to CO-RE. But I think this could also
> >>>> be overcome by adding a fake helper signature in libbpf with an unreasonably high
> >>>> number that is dedicated to probing mem via CO-RE, and then libbpf picks the right
> >>>> underlying helper call number for the insn. That avoids fiddling with macros and
> >>>> the need for a new clang version, no (unless I'm missing something)?
> >>>
> >>> Libbpf could do it, but I'm a bit worried that unconditionally
> >>> changing bpf_probe_read() into bpf_probe_read_kernel() is going to be
> >>> wrong in some cases. If that weren't the case, why wouldn't we just
> >>> re-purpose bpf_probe_read() into bpf_probe_read_kernel() in the kernel
> >>> itself, right?
> >>
> >> Yes, that is correct, but I mentioned above that this new 'fake' helper call number
> >> that libbpf would be fixing up would only be used for bpf_probe_read{,str}() inside
> >> bpf_core_read.h.
> >>
> >> Small example, bpf_core_read.h would be changed to (just an extract):
> >>
> >> diff --git a/tools/lib/bpf/bpf_core_read.h b/tools/lib/bpf/bpf_core_read.h
> >> index eae5cccff761..4bddb2ddf3f0 100644
> >> --- a/tools/lib/bpf/bpf_core_read.h
> >> +++ b/tools/lib/bpf/bpf_core_read.h
> >> @@ -115,7 +115,7 @@ enum bpf_field_info_kind {
> >>     * (local) BTF, used to record relocation.
> >>     */
> >>    #define bpf_core_read(dst, sz, src)                                        \
> >> -       bpf_probe_read(dst, sz,                                             \
> >> +       bpf_probe_read_selector(dst, sz,                                                    \
> >>                          (const void *)__builtin_preserve_access_index(src))
> >>
> >>    /*
> >> @@ -124,7 +124,7 @@ enum bpf_field_info_kind {
> >>     * argument.
> >>     */
> >>    #define bpf_core_read_str(dst, sz, src)                                            \
> >> -       bpf_probe_read_str(dst, sz,                                         \
> >> +       bpf_probe_read_str_selector(dst, sz,                                        \
> >>                              (const void *)__builtin_preserve_access_index(src))
> >>
> >>    #define ___concat(a, b) a ## b
> >>
> >> And bpf_probe_read_{,str_}selector would be defined as e.g. ...
> >>
> >> static long (*bpf_probe_read_selector)(void *dst, __u32 size, const void *unsafe_ptr) = (void *) -1;
> >> static long (*bpf_probe_read_str_selector)(void *dst, __u32 size, const void *unsafe_ptr) = (void *) -2;
> >>
> >> ... where libbpf would do the fix up to either 4 or 45 for insn->imm. But it's still
> >> confined to usage in bpf_core_read.h when the CO-RE macros are used.
> >
> > Ah, I see. Yeah, I suppose that would work as well. Do you prefer me
> > to go this way?
>
> I would suggest we should try this path given this can be used with any clang version
> that has the BPF backend, not just latest upstream git.

Sure, sounds good.

>
> Thanks,
> Daniel
Andrii Nakryiko July 31, 2020, 5:41 p.m. UTC | #10
On Wed, Jul 29, 2020 at 3:12 PM Daniel Borkmann <daniel@iogearbox.net> wrote:
>
> On 7/30/20 12:05 AM, Andrii Nakryiko wrote:
> > On Wed, Jul 29, 2020 at 2:54 PM Daniel Borkmann <daniel@iogearbox.net> wrote:
> >> On 7/29/20 11:36 PM, Andrii Nakryiko wrote:
> >>> On Wed, Jul 29, 2020 at 2:01 PM Daniel Borkmann <daniel@iogearbox.net> wrote:
> >>>> On 7/29/20 6:06 AM, Andrii Nakryiko wrote:
> >>>>> On Tue, Jul 28, 2020 at 2:16 PM Daniel Borkmann <daniel@iogearbox.net> wrote:
> >>>>>> On 7/28/20 9:11 PM, Andrii Nakryiko wrote:
> >>>>>>> On Tue, Jul 28, 2020 at 5:15 AM Ilya Leoshkevich <iii@linux.ibm.com> wrote:
> >>>>>>>>
> >>>>>>>> Yet another adaptation to commit 0ebeea8ca8a4 ("bpf: Restrict
> >>>>>>>> bpf_probe_read{, str}() only to archs where they work") that makes more
> >>>>>>>> samples compile on s390.
> >>>>>>>>
> >>>>>>>> Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
> >>>>>>>
> >>>>>>> Sorry, we can't do this yet. This will break on older kernels that
> >>>>>>> don't yet have bpf_probe_read_kernel() implemented. Yonghong and I are
> >>>>>>> working on extending the set of CO-RE relocations in a way that would
> >>>>>>> allow bpf_probe_read_kernel() detection on the BPF side, transparently
> >>>>>>> for an application, picking either bpf_probe_read() or
> >>>>>>> bpf_probe_read_kernel(). It should be ready soon (this or next week,
> >>>>>>> most probably), though it will depend on the latest Clang. But for
> >>>>>>> now, please don't change this.
> >>>>>>
> >>>>>> Could you elaborate on what this means wrt the dependency on latest clang? Given
> >>>>>> clang releases have a rather long cadence, what about existing users with current
> >>>>>> clang releases?
> >>>>>
> >>>>> So the overall idea is to use something like this to do kernel reads:
> >>>>>
> >>>>> static __always_inline int bpf_probe_read_universal(void *dst, u32 sz,
> >>>>> const void *src)
> >>>>> {
> >>>>>        if (bpf_core_type_exists(btf_bpf_probe_read_kernel))
> >>>>>            return bpf_probe_read_kernel(dst, sz, src);
> >>>>>        else
> >>>>>            return bpf_probe_read(dst, sz, src);
> >>>>> }
> >>>>>
> >>>>> And then use bpf_probe_read_universal() in BPF_CORE_READ and family.
> >>>>>
> >>>>> This approach relies on a few things:
> >>>>>
> >>>>> 1. each BPF helper has a corresponding btf_<helper-name> type defined for it
> >>>>> 2. bpf_core_type_exists(some_type) returns 0 or 1, depending on whether
> >>>>> the specified type is found in kernel BTF (so it needs kernel BTF, of
> >>>>> course). This is the part Yonghong and I are working on at the
> >>>>> moment.
> >>>>> 3. the verifier's dead code elimination, which will leave only the
> >>>>> bpf_probe_read() or bpf_probe_read_kernel() call and remove the
> >>>>> other one. So on older kernels, there will never be an unsupported
> >>>>> call to bpf_probe_read_kernel().
> >>>>>
> >>>>> The new type existence relocation requires the latest Clang. So the
> >>>>> way to deal with older Clangs would be to just fall back to
> >>>>> bpf_probe_read() if we detect that Clang is too old and can't emit the
> >>>>> necessary relocation.
> >>>>
> >>>> Okay, seems reasonable overall. One question though: couldn't libbpf transparently
> >>>> fix up the selection of bpf_probe_read() vs bpf_probe_read_kernel()? E.g. it would
> >>>> probe the kernel for whether bpf_probe_read_kernel() is available, and if it is, it
> >>>> would rewrite the raw call number in the instruction from bpf_probe_read() to
> >>>> the one for bpf_probe_read_kernel()? I guess the question then becomes whether the
> >>>> original use of bpf_probe_read() was related to CO-RE. But I think this could also
> >>>> be overcome by adding a fake helper signature in libbpf with an unreasonably high
> >>>> number that is dedicated to probing mem via CO-RE, and then libbpf picks the right
> >>>> underlying helper call number for the insn. That avoids fiddling with macros and
> >>>> the need for a new clang version, no (unless I'm missing something)?
> >>>
> >>> Libbpf could do it, but I'm a bit worried that unconditionally
> >>> changing bpf_probe_read() into bpf_probe_read_kernel() is going to be
> >>> wrong in some cases. If that weren't the case, why wouldn't we just
> >>> re-purpose bpf_probe_read() into bpf_probe_read_kernel() in the kernel
> >>> itself, right?
> >>
> >> Yes, that is correct, but I mentioned above that this new 'fake' helper call number
> >> that libbpf would be fixing up would only be used for bpf_probe_read{,str}() inside
> >> bpf_core_read.h.
> >>
> >> Small example, bpf_core_read.h would be changed to (just an extract):
> >>
> >> diff --git a/tools/lib/bpf/bpf_core_read.h b/tools/lib/bpf/bpf_core_read.h
> >> index eae5cccff761..4bddb2ddf3f0 100644
> >> --- a/tools/lib/bpf/bpf_core_read.h
> >> +++ b/tools/lib/bpf/bpf_core_read.h
> >> @@ -115,7 +115,7 @@ enum bpf_field_info_kind {
> >>     * (local) BTF, used to record relocation.
> >>     */
> >>    #define bpf_core_read(dst, sz, src)                                        \
> >> -       bpf_probe_read(dst, sz,                                             \
> >> +       bpf_probe_read_selector(dst, sz,                                                    \
> >>                          (const void *)__builtin_preserve_access_index(src))
> >>
> >>    /*
> >> @@ -124,7 +124,7 @@ enum bpf_field_info_kind {
> >>     * argument.
> >>     */
> >>    #define bpf_core_read_str(dst, sz, src)                                            \
> >> -       bpf_probe_read_str(dst, sz,                                         \
> >> +       bpf_probe_read_str_selector(dst, sz,                                        \
> >>                              (const void *)__builtin_preserve_access_index(src))
> >>
> >>    #define ___concat(a, b) a ## b
> >>
> >> And bpf_probe_read_{,str_}selector would be defined as e.g. ...
> >>
> >> static long (*bpf_probe_read_selector)(void *dst, __u32 size, const void *unsafe_ptr) = (void *) -1;
> >> static long (*bpf_probe_read_str_selector)(void *dst, __u32 size, const void *unsafe_ptr) = (void *) -2;
> >>
> >> ... where libbpf would do the fix up to either 4 or 45 for insn->imm. But it's still
> >> confined to usage in bpf_core_read.h when the CO-RE macros are used.
> >
> > Ah, I see. Yeah, I suppose that would work as well. Do you prefer me
> > to go this way?
>
> I would suggest we should try this path given this can be used with any clang version
> that has the BPF backend, not just latest upstream git.

I have an even better solution, I think. Convert everything to
bpf_probe_read_kernel() or bpf_probe_read_user() unconditionally, but
let libbpf switch those two to bpf_probe_read() if the _kernel()/_user()
variants are not yet in the kernel. That should handle both the CO-RE
helpers and pretty much any other use case that was converted.
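
Roughly, as a sketch (the function name is made up; the real probing
would reuse libbpf's existing feature-detection machinery, and struct
bpf_insn / BPF_FUNC_* come from <linux/bpf.h>):

/* on kernels without the _kernel()/_user() helper variants, rewrite
 * such calls back to plain bpf_probe_read{,_str}()
 */
static void sanitize_probe_read_calls(struct bpf_insn *insns, size_t insn_cnt)
{
	size_t i;

	for (i = 0; i < insn_cnt; i++) {
		struct bpf_insn *insn = &insns[i];

		if (insn->code != (BPF_JMP | BPF_CALL) || insn->src_reg)
			continue;	/* not a helper call */

		switch (insn->imm) {
		case BPF_FUNC_probe_read_kernel:
		case BPF_FUNC_probe_read_user:
			insn->imm = BPF_FUNC_probe_read;
			break;
		case BPF_FUNC_probe_read_kernel_str:
		case BPF_FUNC_probe_read_user_str:
			insn->imm = BPF_FUNC_probe_read_str;
			break;
		}
	}
}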


>
> Thanks,
> Daniel
Daniel Borkmann July 31, 2020, 8:34 p.m. UTC | #11
On 7/31/20 7:41 PM, Andrii Nakryiko wrote:
> On Wed, Jul 29, 2020 at 3:12 PM Daniel Borkmann <daniel@iogearbox.net> wrote:
>> On 7/30/20 12:05 AM, Andrii Nakryiko wrote:
>>> On Wed, Jul 29, 2020 at 2:54 PM Daniel Borkmann <daniel@iogearbox.net> wrote:
>>>> On 7/29/20 11:36 PM, Andrii Nakryiko wrote:
>>>>> On Wed, Jul 29, 2020 at 2:01 PM Daniel Borkmann <daniel@iogearbox.net> wrote:
>>>>>> On 7/29/20 6:06 AM, Andrii Nakryiko wrote:
>>>>>>> On Tue, Jul 28, 2020 at 2:16 PM Daniel Borkmann <daniel@iogearbox.net> wrote:
>>>>>>>> On 7/28/20 9:11 PM, Andrii Nakryiko wrote:
>>>>>>>>> On Tue, Jul 28, 2020 at 5:15 AM Ilya Leoshkevich <iii@linux.ibm.com> wrote:
>>>>>>>>>>
>>>>>>>>>> Yet another adaptation to commit 0ebeea8ca8a4 ("bpf: Restrict
>>>>>>>>>> bpf_probe_read{, str}() only to archs where they work") that makes more
>>>>>>>>>> samples compile on s390.
>>>>>>>>>>
>>>>>>>>>> Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
>>>>>>>>>
>>>>>>>>> Sorry, we can't do this yet. This will break on older kernels that
>>>>>>>>> don't yet have bpf_probe_read_kernel() implemented. Yonghong and I are
>>>>>>>>> working on extending the set of CO-RE relocations in a way that would
>>>>>>>>> allow bpf_probe_read_kernel() detection on the BPF side, transparently
>>>>>>>>> for an application, picking either bpf_probe_read() or
>>>>>>>>> bpf_probe_read_kernel(). It should be ready soon (this or next week,
>>>>>>>>> most probably), though it will depend on the latest Clang. But for
>>>>>>>>> now, please don't change this.
>>>>>>>>
>>>>>>>> Could you elaborate on what this means wrt the dependency on latest clang? Given
>>>>>>>> clang releases have a rather long cadence, what about existing users with current
>>>>>>>> clang releases?
>>>>>>>
>>>>>>> So the overall idea is to use something like this to do kernel reads:
>>>>>>>
>>>>>>> static __always_inline int bpf_probe_read_universal(void *dst, u32 sz,
>>>>>>> const void *src)
>>>>>>> {
>>>>>>>         if (bpf_core_type_exists(btf_bpf_probe_read_kernel))
>>>>>>>             return bpf_probe_read_kernel(dst, sz, src);
>>>>>>>         else
>>>>>>>             return bpf_probe_read(dst, sz, src);
>>>>>>> }
>>>>>>>
>>>>>>> And then use bpf_probe_read_universal() in BPF_CORE_READ and family.
>>>>>>>
>>>>>>> This approach relies on a few things:
>>>>>>>
>>>>>>> 1. each BPF helper has a corresponding btf_<helper-name> type defined for it
>>>>>>> 2. bpf_core_type_exists(some_type) returns 0 or 1, depending on whether
>>>>>>> the specified type is found in kernel BTF (so it needs kernel BTF, of
>>>>>>> course). This is the part Yonghong and I are working on at the
>>>>>>> moment.
>>>>>>> 3. the verifier's dead code elimination, which will leave only the
>>>>>>> bpf_probe_read() or bpf_probe_read_kernel() call and remove the
>>>>>>> other one. So on older kernels, there will never be an unsupported
>>>>>>> call to bpf_probe_read_kernel().
>>>>>>>
>>>>>>> The new type existence relocation requires the latest Clang. So the
>>>>>>> way to deal with older Clangs would be to just fall back to
>>>>>>> bpf_probe_read() if we detect that Clang is too old and can't emit the
>>>>>>> necessary relocation.
>>>>>>
>>>>>> Okay, seems reasonable overall. One question though: couldn't libbpf transparently
>>>>>> fix up the selection of bpf_probe_read() vs bpf_probe_read_kernel()? E.g. it would
>>>>>> probe the kernel for whether bpf_probe_read_kernel() is available, and if it is, it
>>>>>> would rewrite the raw call number in the instruction from bpf_probe_read() to
>>>>>> the one for bpf_probe_read_kernel()? I guess the question then becomes whether the
>>>>>> original use of bpf_probe_read() was related to CO-RE. But I think this could also
>>>>>> be overcome by adding a fake helper signature in libbpf with an unreasonably high
>>>>>> number that is dedicated to probing mem via CO-RE, and then libbpf picks the right
>>>>>> underlying helper call number for the insn. That avoids fiddling with macros and
>>>>>> the need for a new clang version, no (unless I'm missing something)?
>>>>>
>>>>> Libbpf could do it, but I'm a bit worried that unconditionally
>>>>> changing bpf_probe_read() into bpf_probe_read_kernel() is going to be
>>>>> wrong in some cases. If that weren't the case, why wouldn't we just
>>>>> re-purpose bpf_probe_read() into bpf_probe_read_kernel() in the kernel
>>>>> itself, right?
>>>>
>>>> Yes, that is correct, but I mentioned above that this new 'fake' helper call number
>>>> that libbpf would be fixing up would only be used for bpf_probe_read{,str}() inside
>>>> bpf_core_read.h.
>>>>
>>>> Small example, bpf_core_read.h would be changed to (just an extract):
>>>>
>>>> diff --git a/tools/lib/bpf/bpf_core_read.h b/tools/lib/bpf/bpf_core_read.h
>>>> index eae5cccff761..4bddb2ddf3f0 100644
>>>> --- a/tools/lib/bpf/bpf_core_read.h
>>>> +++ b/tools/lib/bpf/bpf_core_read.h
>>>> @@ -115,7 +115,7 @@ enum bpf_field_info_kind {
>>>>      * (local) BTF, used to record relocation.
>>>>      */
>>>>     #define bpf_core_read(dst, sz, src)                                        \
>>>> -       bpf_probe_read(dst, sz,                                             \
>>>> +       bpf_probe_read_selector(dst, sz,                                                    \
>>>>                           (const void *)__builtin_preserve_access_index(src))
>>>>
>>>>     /*
>>>> @@ -124,7 +124,7 @@ enum bpf_field_info_kind {
>>>>      * argument.
>>>>      */
>>>>     #define bpf_core_read_str(dst, sz, src)                                            \
>>>> -       bpf_probe_read_str(dst, sz,                                         \
>>>> +       bpf_probe_read_str_selector(dst, sz,                                        \
>>>>                               (const void *)__builtin_preserve_access_index(src))
>>>>
>>>>     #define ___concat(a, b) a ## b
>>>>
>>>> And bpf_probe_read_{,str_}selector would be defined as e.g. ...
>>>>
>>>> static long (*bpf_probe_read_selector)(void *dst, __u32 size, const void *unsafe_ptr) = (void *) -1;
>>>> static long (*bpf_probe_read_str_selector)(void *dst, __u32 size, const void *unsafe_ptr) = (void *) -2;
>>>>
>>>> ... where libbpf would do the fix up to either 4 or 45 for insn->imm. But it's still
>>>> confined to usage in bpf_core_read.h when the CO-RE macros are used.
>>>
>>> Ah, I see. Yeah, I suppose that would work as well. Do you prefer me
>>> to go this way?
>>
>> I would suggest we should try this path given this can be used with any clang version
>> that has the BPF backend, not just latest upstream git.
> 
> I have an even better solution, I think. Convert everything to
> bpf_probe_read_kernel() or bpf_probe_read_user() unconditionally, but
> let libbpf switch those two to bpf_probe_read() if the _kernel()/_user()
> variants are not yet in the kernel. That should handle both the CO-RE
> helpers and pretty much any other use case that was converted.

Yes, agreed, that is an even cleaner solution and avoids 'polluting' the
helper ID space with such remapping. The user intent with bpf_probe_read_kernel()
or bpf_probe_read_user() is rather clear, so bpf_probe_read() can be a fallback
for this direction. Let's go with that.

Thanks,
Daniel
Andrii Nakryiko Aug. 5, 2020, 6:32 p.m. UTC | #12
On Fri, Jul 31, 2020 at 1:34 PM Daniel Borkmann <daniel@iogearbox.net> wrote:
>
> On 7/31/20 7:41 PM, Andrii Nakryiko wrote:
> > On Wed, Jul 29, 2020 at 3:12 PM Daniel Borkmann <daniel@iogearbox.net> wrote:
> >> On 7/30/20 12:05 AM, Andrii Nakryiko wrote:
> >>> On Wed, Jul 29, 2020 at 2:54 PM Daniel Borkmann <daniel@iogearbox.net> wrote:
> >>>> On 7/29/20 11:36 PM, Andrii Nakryiko wrote:
> >>>>> On Wed, Jul 29, 2020 at 2:01 PM Daniel Borkmann <daniel@iogearbox.net> wrote:
> >>>>>> On 7/29/20 6:06 AM, Andrii Nakryiko wrote:
> >>>>>>> On Tue, Jul 28, 2020 at 2:16 PM Daniel Borkmann <daniel@iogearbox.net> wrote:
> >>>>>>>> On 7/28/20 9:11 PM, Andrii Nakryiko wrote:
> >>>>>>>>> On Tue, Jul 28, 2020 at 5:15 AM Ilya Leoshkevich <iii@linux.ibm.com> wrote:
> >>>>>>>>>>
> >>>>>>>>>> Yet another adaptation to commit 0ebeea8ca8a4 ("bpf: Restrict
> >>>>>>>>>> bpf_probe_read{, str}() only to archs where they work") that makes more
> >>>>>>>>>> samples compile on s390.
> >>>>>>>>>>
> >>>>>>>>>> Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
> >>>>>>>>>
> >>>>>>>>> Sorry, we can't do this yet. This will break on older kernels that
> >>>>>>>>> don't yet have bpf_probe_read_kernel() implemented. Yonghong and I are
> >>>>>>>>> working on extending the set of CO-RE relocations in a way that would
> >>>>>>>>> allow bpf_probe_read_kernel() detection on the BPF side, transparently
> >>>>>>>>> for an application, picking either bpf_probe_read() or
> >>>>>>>>> bpf_probe_read_kernel(). It should be ready soon (this or next week,
> >>>>>>>>> most probably), though it will depend on the latest Clang. But for
> >>>>>>>>> now, please don't change this.
> >>>>>>>>
> >>>>>>>> Could you elaborate what this means wrt dependency on latest clang? Given clang
> >>>>>>>> releases have a rather long cadence, what about existing users with current clang
> >>>>>>>> releases?
> >>>>>>>
> >>>>>>> So the overall idea is to use something like this to do kernel reads:
> >>>>>>>
> >>>>>>> static __always_inline int bpf_probe_read_universal(void *dst, u32 sz,
> >>>>>>> const void *src)
> >>>>>>> {
> >>>>>>>         if (bpf_core_type_exists(btf_bpf_probe_read_kernel))
> >>>>>>>             return bpf_probe_read_kernel(dst, sz, src);
> >>>>>>>         else
> >>>>>>>             return bpf_probe_read(dst, sz, src);
> >>>>>>> }
> >>>>>>>
> >>>>>>> And then use bpf_probe_read_universal() in BPF_CORE_READ and family.
> >>>>>>>
> >>>>>>> This approach relies on a few things:
> >>>>>>>
> >>>>>>> 1. each BPF helper has a corresponding btf_<helper-name> type defined for it
> >>>>>>> 2. bpf_core_type_exists(some_type) returns 0 or 1, depending on whether
> >>>>>>> the specified type is found in kernel BTF (so needs kernel BTF, of
> >>>>>>> course). This is the part me and Yonghong are working on at the
> >>>>>>> moment.
> >>>>>>> 3. the verifier's dead code elimination, which will leave only the
> >>>>>>> bpf_probe_read() or bpf_probe_read_kernel() call and will remove the
> >>>>>>> other one. So on older kernels, there will never be an unsupported call
> >>>>>>> to bpf_probe_read_kernel().
> >>>>>>>
> >>>>>>> The new type existence relocation requires the latest Clang. So the
> >>>>>>> way to deal with older Clangs would be to just fall back to
> >>>>>>> bpf_probe_read(), if we detect that Clang is too old and can't emit
> >>>>>>> the necessary relocation.
> >>>>>>
> >>>>>> Okay, seems reasonable overall. One question though: couldn't libbpf transparently
> >>>>>> fix up the selection of bpf_probe_read() vs bpf_probe_read_kernel()? E.g. it would
> >>>>>> probe the kernel for whether bpf_probe_read_kernel() is available, and if it is, it
> >>>>>> would rewrite the raw call number in the instruction from bpf_probe_read() into
> >>>>>> the one for bpf_probe_read_kernel()? I guess the question then becomes whether the
> >>>>>> original use of bpf_probe_read() was related to CO-RE. But I think this could also
> >>>>>> be overcome by adding a fake helper signature in libbpf with an unreasonably high
> >>>>>> number that is dedicated to probing mem via CO-RE, and then libbpf picks the right
> >>>>>> underlying helper call number for the insn. That avoids fiddling with macros and
> >>>>>> the need for a new clang version, no (unless I'm missing something)?
> >>>>>
> >>>>> Libbpf could do it, but I'm a bit worried that unconditionally
> >>>>> changing bpf_probe_read() into bpf_probe_read_kernel() is going to be
> >>>>> wrong in some cases. If that wasn't the case, why wouldn't we just
> >>>>> re-purpose bpf_probe_read() into bpf_probe_read_kernel() in the kernel
> >>>>> itself, right?
> >>>>
> >>>> Yes, that is correct, but I mentioned above that this new 'fake' helper call number
> >>>> that libbpf would be fixing up would only be used for bpf_probe_read{,str}() inside
> >>>> bpf_core_read.h.
> >>>>
> >>>> Small example, bpf_core_read.h would be changed to (just an extract):
> >>>>
> >>>> diff --git a/tools/lib/bpf/bpf_core_read.h b/tools/lib/bpf/bpf_core_read.h
> >>>> index eae5cccff761..4bddb2ddf3f0 100644
> >>>> --- a/tools/lib/bpf/bpf_core_read.h
> >>>> +++ b/tools/lib/bpf/bpf_core_read.h
> >>>> @@ -115,7 +115,7 @@ enum bpf_field_info_kind {
> >>>>      * (local) BTF, used to record relocation.
> >>>>      */
> >>>>     #define bpf_core_read(dst, sz, src)                                        \
> >>>> -       bpf_probe_read(dst, sz,                                             \
> >>>> +       bpf_probe_read_selector(dst, sz,                                                    \
> >>>>                           (const void *)__builtin_preserve_access_index(src))
> >>>>
> >>>>     /*
> >>>> @@ -124,7 +124,7 @@ enum bpf_field_info_kind {
> >>>>      * argument.
> >>>>      */
> >>>>     #define bpf_core_read_str(dst, sz, src)                                            \
> >>>> -       bpf_probe_read_str(dst, sz,                                         \
> >>>> +       bpf_probe_read_str_selector(dst, sz,                                        \
> >>>>                               (const void *)__builtin_preserve_access_index(src))
> >>>>
> >>>>     #define ___concat(a, b) a ## b
> >>>>
> >>>> And bpf_probe_read_{,str_}selector would be defined as e.g. ...
> >>>>
> >>>> static long (*bpf_probe_read_selector)(void *dst, __u32 size, const void *unsafe_ptr) = (void *) -1;
> >>>> static long (*bpf_probe_read_str_selector)(void *dst, __u32 size, const void *unsafe_ptr) = (void *) -2;
> >>>>
> >>>> ... where libbpf would do the fix up to either 4 or 45 for insn->imm. But it's still
> >>>> confined to usage in bpf_core_read.h when the CO-RE macros are used.
> >>>
> >>> Ah, I see. Yeah, I suppose that would work as well. Do you prefer me
> >>> to go this way?
> >>
> >> I would suggest we should try this path given this can be used with any clang version
> >> that has the BPF backend, not just the latest upstream git.
> >
> > I have an even better solution, I think. Convert everything to
> > bpf_probe_read_kernel() or bpf_probe_read_user() unconditionally, but
> > let libbpf switch those two to bpf_probe_read() if the _kernel()/_user()
> > variants are not yet in the kernel. That should handle both the CO-RE
> > helpers and pretty much any other use case that was converted.
>
> Yes, agree, that is an even cleaner solution and it avoids 'polluting' the
> helper ID space with such remapping. The user intent with bpf_probe_read_kernel()
> or bpf_probe_read_user() is rather clear, so bpf_probe_read() can be a fallback
> in this direction. Let's go with that.
>

Ok, I have all this working locally. I'll post patches once bpf-next re-opens.

> Thanks,
> Daniel
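
To make the effect of the patch below concrete: after this change, every
pointer hop in a BPF_CORE_READ() chain is emitted as a bpf_probe_read_kernel()
call instead of bpf_probe_read(). A minimal, hypothetical example (the kprobe
target and field chain are illustrative only):

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_core_read.h>

char LICENSE[] SEC("license") = "GPL";

SEC("kprobe/do_exit")
int trace_exit(struct pt_regs *ctx)
{
	struct task_struct *task = (struct task_struct *)bpf_get_current_task();
	int ppid;

	/* two CO-RE-relocated kernel reads; with this patch they compile
	 * to bpf_probe_read_kernel() calls rather than bpf_probe_read()
	 */
	ppid = BPF_CORE_READ(task, real_parent, tgid);
	bpf_printk("exit: parent tgid=%d", ppid);
	return 0;
}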
diff mbox series

Patch

diff --git a/tools/lib/bpf/bpf_core_read.h b/tools/lib/bpf/bpf_core_read.h
index eae5cccff761..b108381fbec4 100644
--- a/tools/lib/bpf/bpf_core_read.h
+++ b/tools/lib/bpf/bpf_core_read.h
@@ -23,28 +23,29 @@  enum bpf_field_info_kind {
 	__builtin_preserve_field_info((src)->field, BPF_FIELD_##info)
 
 #if __BYTE_ORDER == __LITTLE_ENDIAN
-#define __CORE_BITFIELD_PROBE_READ(dst, src, fld)			      \
-	bpf_probe_read((void *)dst,					      \
-		       __CORE_RELO(src, fld, BYTE_SIZE),		      \
-		       (const void *)src + __CORE_RELO(src, fld, BYTE_OFFSET))
+#define __CORE_BITFIELD_PROBE_READ(dst, src, fld)                              \
+	bpf_probe_read_kernel((void *)dst, __CORE_RELO(src, fld, BYTE_SIZE),   \
+			      (const void *)src +                              \
+				      __CORE_RELO(src, fld, BYTE_OFFSET))
 #else
 /* semantics of LSHIFT_64 assumes loading values into low-ordered bytes, so
  * for big-endian we need to adjust destination pointer accordingly, based on
  * field byte size
  */
-#define __CORE_BITFIELD_PROBE_READ(dst, src, fld)			      \
-	bpf_probe_read((void *)dst + (8 - __CORE_RELO(src, fld, BYTE_SIZE)),  \
-		       __CORE_RELO(src, fld, BYTE_SIZE),		      \
-		       (const void *)src + __CORE_RELO(src, fld, BYTE_OFFSET))
+#define __CORE_BITFIELD_PROBE_READ(dst, src, fld)                              \
+	bpf_probe_read_kernel(                                                 \
+		(void *)dst + (8 - __CORE_RELO(src, fld, BYTE_SIZE)),          \
+		__CORE_RELO(src, fld, BYTE_SIZE),                              \
+		(const void *)src + __CORE_RELO(src, fld, BYTE_OFFSET))
 #endif
 
 /*
  * Extract bitfield, identified by s->field, and return its value as u64.
  * All this is done in relocatable manner, so bitfield changes such as
  * signedness, bit size, offset changes, this will be handled automatically.
- * This version of macro is using bpf_probe_read() to read underlying integer
- * storage. Macro functions as an expression and its return type is
- * bpf_probe_read()'s return value: 0, on success, <0 on error.
+ * This version of macro is using bpf_probe_read_kernel() to read underlying
+ * integer storage. Macro functions as an expression and its return type is
+ * bpf_probe_read_kernel()'s return value: 0, on success, <0 on error.
  */
 #define BPF_CORE_READ_BITFIELD_PROBED(s, field) ({			      \
 	unsigned long long val = 0;					      \
@@ -99,9 +100,9 @@  enum bpf_field_info_kind {
 	__builtin_preserve_field_info(field, BPF_FIELD_BYTE_SIZE)
 
 /*
- * bpf_core_read() abstracts away bpf_probe_read() call and captures offset
- * relocation for source address using __builtin_preserve_access_index()
- * built-in, provided by Clang.
+ * bpf_core_read() abstracts away bpf_probe_read_kernel() call and captures
+ * offset relocation for source address using
+ * __builtin_preserve_access_index() built-in, provided by Clang.
  *
  * __builtin_preserve_access_index() takes as an argument an expression of
  * taking an address of a field within struct/union. It makes compiler emit
@@ -114,18 +115,18 @@  enum bpf_field_info_kind {
  * actual field offset, based on target kernel BTF type that matches original
  * (local) BTF, used to record relocation.
  */
-#define bpf_core_read(dst, sz, src)					    \
-	bpf_probe_read(dst, sz,						    \
-		       (const void *)__builtin_preserve_access_index(src))
+#define bpf_core_read(dst, sz, src)                                            \
+	bpf_probe_read_kernel(                                                 \
+		dst, sz, (const void *)__builtin_preserve_access_index(src))
 
 /*
- * bpf_core_read_str() is a thin wrapper around bpf_probe_read_str()
+ * bpf_core_read_str() is a thin wrapper around bpf_probe_read_kernel_str()
  * additionally emitting BPF CO-RE field relocation for specified source
  * argument.
  */
-#define bpf_core_read_str(dst, sz, src)					    \
-	bpf_probe_read_str(dst, sz,					    \
-			   (const void *)__builtin_preserve_access_index(src))
+#define bpf_core_read_str(dst, sz, src)                                        \
+	bpf_probe_read_kernel_str(                                             \
+		dst, sz, (const void *)__builtin_preserve_access_index(src))
 
 #define ___concat(a, b) a ## b
 #define ___apply(fn, n) ___concat(fn, n)
@@ -239,15 +240,17 @@  enum bpf_field_info_kind {
  *	int x = BPF_CORE_READ(s, a.b.c, d.e, f, g);
  *
  * BPF_CORE_READ will decompose above statement into 4 bpf_core_read (BPF
- * CO-RE relocatable bpf_probe_read() wrapper) calls, logically equivalent to:
+ * CO-RE relocatable bpf_probe_read_kernel() wrapper) calls, logically
+ * equivalent to:
  * 1. const void *__t = s->a.b.c;
  * 2. __t = __t->d.e;
  * 3. __t = __t->f;
  * 4. return __t->g;
  *
  * Equivalence is logical, because there is a heavy type casting/preservation
- * involved, as well as all the reads are happening through bpf_probe_read()
- * calls using __builtin_preserve_access_index() to emit CO-RE relocations.
+ * involved, as well as all the reads are happening through
+ * bpf_probe_read_kernel() calls using __builtin_preserve_access_index() to
+ * emit CO-RE relocations.
  *
  * N.B. Only up to 9 "field accessors" are supported, which should be more
  * than enough for any practical purpose.
diff --git a/tools/lib/bpf/bpf_tracing.h b/tools/lib/bpf/bpf_tracing.h
index 58eceb884df3..4eb3be4130f0 100644
--- a/tools/lib/bpf/bpf_tracing.h
+++ b/tools/lib/bpf/bpf_tracing.h
@@ -288,11 +288,16 @@  struct pt_regs;
 #define BPF_KPROBE_READ_RET_IP(ip, ctx)		({ (ip) = PT_REGS_RET(ctx); })
 #define BPF_KRETPROBE_READ_RET_IP		BPF_KPROBE_READ_RET_IP
 #else
-#define BPF_KPROBE_READ_RET_IP(ip, ctx)					    \
-	({ bpf_probe_read(&(ip), sizeof(ip), (void *)PT_REGS_RET(ctx)); })
-#define BPF_KRETPROBE_READ_RET_IP(ip, ctx)				    \
-	({ bpf_probe_read(&(ip), sizeof(ip),				    \
-			  (void *)(PT_REGS_FP(ctx) + sizeof(ip))); })
+#define BPF_KPROBE_READ_RET_IP(ip, ctx)                                        \
+	({                                                                     \
+		bpf_probe_read_kernel(&(ip), sizeof(ip),                       \
+				      (void *)PT_REGS_RET(ctx));               \
+	})
+#define BPF_KRETPROBE_READ_RET_IP(ip, ctx)                                     \
+	({                                                                     \
+		bpf_probe_read_kernel(&(ip), sizeof(ip),                       \
+				      (void *)(PT_REGS_FP(ctx) + sizeof(ip))); \
+	})
 #endif
 
 #define ___bpf_concat(a, b) a ## b
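
One closing remark on the big-endian branch of __CORE_BITFIELD_PROBE_READ
above: the LSHIFT_64-based extraction assumes the bytes just read sit in the
low-order end of the 64-bit scratch value, which is why the destination
pointer is advanced by 8 - BYTE_SIZE on big-endian. A standalone userspace
sketch of that invariant (plain C, purely illustrative, not part of the
patch):

#include <stdio.h>
#include <string.h>

int main(void)
{
	unsigned int field = 0x12345678;	/* 4-byte bitfield storage */
	unsigned long long val = 0;
	size_t sz = sizeof(field);

	/* Copy the raw storage bytes so that they occupy val's low-order
	 * bits: at offset 0 on little-endian, at offset 8 - sz on
	 * big-endian, mirroring __CORE_BITFIELD_PROBE_READ.
	 */
#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
	memcpy((unsigned char *)&val + (8 - sz), &field, sz);
#else
	memcpy(&val, &field, sz);
#endif
	printf("val = 0x%llx\n", val);	/* 0x12345678 on either endianness */
	return 0;
}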