Message ID | 20240528134846.148890-2-aleksander.lobakin@intel.com
---|---
State | Changes Requested
Delegated to: | Anthony Nguyen
Series | idpf: XDP chapter I: convert Rx to libeth
On Tue, 28 May 2024 15:48:35 +0200 Alexander Lobakin wrote:
> diff --git a/scripts/kernel-doc b/scripts/kernel-doc
> index 95a59ac78f82..d0cf9a2d82de 100755
> --- a/scripts/kernel-doc
> +++ b/scripts/kernel-doc
> @@ -1155,6 +1155,7 @@ sub dump_struct($$) {
> 	$members =~ s/\bstruct_group_attr\s*\(([^,]*,){2}/STRUCT_GROUP(/gos;
> 	$members =~ s/\bstruct_group_tagged\s*\(([^,]*),([^,]*),/struct $1 $2; STRUCT_GROUP(/gos;
> 	$members =~ s/\b__struct_group\s*\(([^,]*,){3}/STRUCT_GROUP(/gos;
> +	$members =~ s/\blibeth_cacheline_group\s*\(([^,]*,)/struct { } $1; STRUCT_GROUP(/gos;
> 	$members =~ s/\bSTRUCT_GROUP(\(((?:(?>[^)(]+)|(?1))*)\))[^;]*;/$2/gos;
>
> 	my $args = qr{([^,)]+)};

Having per-driver grouping defines is a no-go.
Do you need the defines in the first place?

Are you sure the asserts you're adding are not going to explode
on some weird arch? Honestly, patch 5 feels like a little too
much for a driver..
On 5/30/24 03:34, Jakub Kicinski wrote:
> On Tue, 28 May 2024 15:48:35 +0200 Alexander Lobakin wrote:
>> diff --git a/scripts/kernel-doc b/scripts/kernel-doc
>> index 95a59ac78f82..d0cf9a2d82de 100755
>> --- a/scripts/kernel-doc
>> +++ b/scripts/kernel-doc
>> @@ -1155,6 +1155,7 @@ sub dump_struct($$) {
>> 	$members =~ s/\bstruct_group_attr\s*\(([^,]*,){2}/STRUCT_GROUP(/gos;
>> 	$members =~ s/\bstruct_group_tagged\s*\(([^,]*),([^,]*),/struct $1 $2; STRUCT_GROUP(/gos;
>> 	$members =~ s/\b__struct_group\s*\(([^,]*,){3}/STRUCT_GROUP(/gos;
>> +	$members =~ s/\blibeth_cacheline_group\s*\(([^,]*,)/struct { } $1; STRUCT_GROUP(/gos;
>> 	$members =~ s/\bSTRUCT_GROUP(\(((?:(?>[^)(]+)|(?1))*)\))[^;]*;/$2/gos;
>>
>> 	my $args = qr{([^,)]+)};
>
> Having per-driver grouping defines is a no-go. [1]
> Do you need the defines in the first place?

This patch was a tough one for me too, but I see the idea as promising.

> Are you sure the assert you're adding are not going to explode
> on some weird arch? Honestly, patch 5 feels like a little too
> much for a driver..

Definitely some of patch 5 should be added here as doc/example, but it
would be even better to simplify a bit -- I think that "mark this struct
as explicitly split into cachelines" is a hard C problem in general,
especially in the kernel context, *but* I think it could be simplified
for your use case: split into exactly 3 (possibly empty) sections:
mostly-read, RW, COLD?

Given that, it would be a generic solution (it would fix the [1] above)
and also be easier to use, like:

CACHELINE_STRUCT_GROUP(idpf_q_vector,
	CACHELINE_STRUCT_GROUP_RD(/* read mostly */
		struct idpf_vport *vport;
		u16 num_rxq;
		u16 num_txq;
		u16 num_bufq;
		u16 num_complq;
		struct idpf_rx_queue **rx;
		struct idpf_tx_queue **tx;
		struct idpf_buf_queue **bufq;
		struct idpf_compl_queue **complq;
		struct idpf_intr_reg intr_reg;
	),
	CACHELINE_STRUCT_GROUP_RW(
		struct napi_struct napi;
		u16 total_events;
		struct dim tx_dim;
		u16 tx_itr_value;
		bool tx_intr_mode;
		u32 tx_itr_idx;
		struct dim rx_dim;
		u16 rx_itr_value;
		bool rx_intr_mode;
		u32 rx_itr_idx;
	),
	CACHELINE_STRUCT_GROUP_COLD(
		u16 v_idx;
		cpumask_var_t affinity_mask;
	)
);

Note that those three inner macros have distinct, meaningful names not
to make this work, but to aid the human reader, and then
checkpatch/check-kdoc. Technically they could all be the same
CACHELINE_GROUP().

I'm not sure whether (at most) 3 cacheline groups are fine for the
general case, but it would be best to have just one variant of
CACHELINE_STRUCT_GROUP(), perhaps as a vararg.
On Wed, 12 Jun 2024 12:07:05 +0200 Przemek Kitszel wrote:
> Given that it will be a generic solution (would fix the [1] above),
> and be also easier to use, like:
>
> CACHELINE_STRUCT_GROUP(idpf_q_vector,
> 	CACHELINE_STRUCT_GROUP_RD(/* read mostly */
> 		struct idpf_vport *vport;
> 		u16 num_rxq;
> 		u16 num_txq;
> 		u16 num_bufq;
> 		u16 num_complq;
> 		struct idpf_rx_queue **rx;
> 		struct idpf_tx_queue **tx;
> 		struct idpf_buf_queue **bufq;
> 		struct idpf_compl_queue **complq;
> 		struct idpf_intr_reg intr_reg;
> 	),
> 	CACHELINE_STRUCT_GROUP_RW(
> 		struct napi_struct napi;
> 		u16 total_events;
> 		struct dim tx_dim;
> 		u16 tx_itr_value;
> 		bool tx_intr_mode;
> 		u32 tx_itr_idx;
> 		struct dim rx_dim;
> 		u16 rx_itr_value;
> 		bool rx_intr_mode;
> 		u32 rx_itr_idx;
> 	),
> 	CACHELINE_STRUCT_GROUP_COLD(
> 		u16 v_idx;
> 		cpumask_var_t affinity_mask;
> 	)
> );
>
> Note that those three inner macros have distinct meaningful names not to
> have this working, but to aid human reader, then checkpatch/check-kdoc.
> Technically could be all the same CACHELINE_GROUP().
>
> I'm not sure if (at most) 3 cacheline groups are fine for the general
> case, but it would be best to have just one variant of the
> CACHELINE_STRUCT_GROUP(), perhaps as a vararg.

I almost want to CC Linus on this because I think it's mostly about
personal preferences. I dislike the struct_group()-style macros. They
don't scale (imagine having to define two partially overlapping groups),
and they don't look like C to my eyes. Kees really had to do this for
his memory safety work because we need to communicate a "real struct"
type to the compiler, but if you're just doing this to fail the build
and make the developer stop to think - it's not worth the ugliness.

Can we not extend the __cacheline_group_begin() and
__cacheline_group_end() -style markings?
From: Jakub Kicinski <kuba@kernel.org>
Date: Wed, 29 May 2024 18:34:09 -0700

> On Tue, 28 May 2024 15:48:35 +0200 Alexander Lobakin wrote:
>> diff --git a/scripts/kernel-doc b/scripts/kernel-doc
>> index 95a59ac78f82..d0cf9a2d82de 100755
>> --- a/scripts/kernel-doc
>> +++ b/scripts/kernel-doc
>> @@ -1155,6 +1155,7 @@ sub dump_struct($$) {
>> 	$members =~ s/\bstruct_group_attr\s*\(([^,]*,){2}/STRUCT_GROUP(/gos;
>> 	$members =~ s/\bstruct_group_tagged\s*\(([^,]*),([^,]*),/struct $1 $2; STRUCT_GROUP(/gos;
>> 	$members =~ s/\b__struct_group\s*\(([^,]*,){3}/STRUCT_GROUP(/gos;
>> +	$members =~ s/\blibeth_cacheline_group\s*\(([^,]*,)/struct { } $1; STRUCT_GROUP(/gos;
>> 	$members =~ s/\bSTRUCT_GROUP(\(((?:(?>[^)(]+)|(?1))*)\))[^;]*;/$2/gos;
>>
>> 	my $args = qr{([^,)]+)};
>
> Having per-driver grouping defines is a no-go.

Without it, kdoc warns when I want to describe group fields =\

> Do you need the defines in the first place?

They allow describing CLs w/o repeating boilerplate like

	cacheline_group_begin(blah) __aligned(blah)
	fields
	cacheline_group_end(blah)

> Are you sure the assert you're adding are not going to explode
> on some weird arch? Honestly, patch 5 feels like a little too

I was adjusting and testing it a lot, and the CI finally started
building every arch with no issues some time ago, so yes, I'm sure.
A 64-byte CL on a 64-bit arch behaves the same everywhere, so the
assertions for it can be stricter. On other arches, the behaviour is
the same as how Eric asserts netdev cachelines in the core code.

> much for a driver..

We had multiple situations where our team was optimizing a structure
layout and then someone added a new field and messed up the layout
again. So I ended up with strict assertions.

Why is it too much if we have the same stuff for the netdev core?

Thanks,
Olek
On Thu, 13 Jun 2024 12:47:33 +0200 Alexander Lobakin wrote:
>> Having per-driver grouping defines is a no-go.
>
> Without it, kdoc warns when I want to describe group fields =\
>
>> Do you need the defines in the first place?
>
> They allow to describe CLs w/o repeating boilerplates like
>
> cacheline_group_begin(blah) __aligned(blah)
> fields
> cacheline_group_end(blah)

And you assert that your boilerplate is somehow nicer than this?
See my reply to Przemek, I don't think so, and neither do other
maintainers, judging by how the socket grouping was done. You can add
new markers to include the align automatically too, etc.

>> Are you sure the assert you're adding are not going to explode
>> on some weird arch? Honestly, patch 5 feels like a little too
>
> I was adjusting and testing it a lot and CI finally started building
> every arch with no issues some time ago, so yes, I'm sure.
> 64-byte CL on 64-bit arch behaves the same everywhere, so the assertions
> for it can be more strict. On other arches, the behaviour is the same as
> how Eric asserts netdev cachelines in the core code.
>
>> much for a driver..
>
> We had multiple situations when our team were optimizing the structure
> layout and then someone added a new field and messed up the layout
> again. So I ended up with strict assertions.

I understand. Not 100% sure I agree, but it depends on the team, so okay.

> Why is it too much if we have the same stuff for the netdev core?

But we didn't add tcp_* macros and sock_* macros etc. Improve the stuff
in cache.h if you think it's worth it. And no struct_group()s, please.
diff --git a/scripts/kernel-doc b/scripts/kernel-doc
index 95a59ac78f82..d0cf9a2d82de 100755
--- a/scripts/kernel-doc
+++ b/scripts/kernel-doc
@@ -1155,6 +1155,7 @@ sub dump_struct($$) {
 	$members =~ s/\bstruct_group_attr\s*\(([^,]*,){2}/STRUCT_GROUP(/gos;
 	$members =~ s/\bstruct_group_tagged\s*\(([^,]*),([^,]*),/struct $1 $2; STRUCT_GROUP(/gos;
 	$members =~ s/\b__struct_group\s*\(([^,]*,){3}/STRUCT_GROUP(/gos;
+	$members =~ s/\blibeth_cacheline_group\s*\(([^,]*,)/struct { } $1; STRUCT_GROUP(/gos;
 	$members =~ s/\bSTRUCT_GROUP(\(((?:(?>[^)(]+)|(?1))*)\))[^;]*;/$2/gos;

 	my $args = qr{([^,)]+)};
diff --git a/include/net/libeth/cache.h b/include/net/libeth/cache.h
new file mode 100644
index 000000000000..5579240913d2
--- /dev/null
+++ b/include/net/libeth/cache.h
@@ -0,0 +1,100 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright (C) 2024 Intel Corporation */
+
+#ifndef __LIBETH_CACHE_H
+#define __LIBETH_CACHE_H
+
+#include <linux/cache.h>
+
+/* ``__aligned_largest`` is architecture-dependent. Get the actual alignment */
+#define ___LIBETH_LARGEST_ALIGN					\
+	sizeof(struct { long __UNIQUE_ID(long_); } __aligned_largest)
+#define __LIBETH_LARGEST_ALIGN					\
+	(___LIBETH_LARGEST_ALIGN > SMP_CACHE_BYTES ?		\
+	 ___LIBETH_LARGEST_ALIGN : SMP_CACHE_BYTES)
+#define __LIBETH_LARGEST_ALIGNED(sz)				\
+	ALIGN(sz, __LIBETH_LARGEST_ALIGN)
+
+#define __libeth_cacheline_group_begin(grp)			\
+	__cacheline_group_begin(grp) __aligned(__LIBETH_LARGEST_ALIGN)
+#define __libeth_cacheline_group_end(grp)			\
+	__cacheline_group_end(grp) __aligned(sizeof(long))
+
+/**
+ * libeth_cacheline_group - declare a cacheline-aligned field group
+ * @grp: name of the group (usually 'read_mostly', 'read_write', or 'cold')
+ * @...: struct fields inside the group
+ *
+ * Note that the whole group is cacheline-aligned, but the end marker is
+ * aligned to long, so that you pass the (almost) actual field size sum to
+ * the assertion macros below instead of CL-aligned values.
+ * Each cacheline group must be described in the struct's kernel-doc.
+ */
+#define libeth_cacheline_group(grp, ...)			\
+	struct_group(grp,					\
+		__libeth_cacheline_group_begin(grp);		\
+		__VA_ARGS__					\
+		__libeth_cacheline_group_end(grp);		\
+	)
+
+/**
+ * libeth_cacheline_group_assert - make sure cacheline group size is expected
+ * @type: type of the structure containing the group
+ * @grp: group name inside the struct
+ * @sz: expected group size
+ */
+#if defined(CONFIG_64BIT) && SMP_CACHE_BYTES == 64
+#define libeth_cacheline_group_assert(type, grp, sz)		\
+	static_assert(offsetof(type, __cacheline_group_end__##grp) - \
+		      offsetofend(type, __cacheline_group_begin__##grp) == \
+		      (sz))
+#define __libeth_cacheline_struct_assert(type, sz)		\
+	static_assert(sizeof(type) == (sz))
+#else /* !CONFIG_64BIT || SMP_CACHE_BYTES != 64 */
+#define libeth_cacheline_group_assert(type, grp, sz)		\
+	static_assert(offsetof(type, __cacheline_group_end__##grp) - \
+		      offsetofend(type, __cacheline_group_begin__##grp) <= \
+		      (sz))
+#define __libeth_cacheline_struct_assert(type, sz)		\
+	static_assert(sizeof(type) <= (sz))
+#endif /* !CONFIG_64BIT || SMP_CACHE_BYTES != 64 */
+
+#define __libeth_cls1(sz1)					\
+	__LIBETH_LARGEST_ALIGNED(sz1)
+#define __libeth_cls2(sz1, sz2)					\
+	(__LIBETH_LARGEST_ALIGNED(sz1) + __LIBETH_LARGEST_ALIGNED(sz2))
+#define __libeth_cls3(sz1, sz2, sz3)				\
+	(__LIBETH_LARGEST_ALIGNED(sz1) + __LIBETH_LARGEST_ALIGNED(sz2) + \
+	 __LIBETH_LARGEST_ALIGNED(sz3))
+#define __libeth_cls(...)					\
+	CONCATENATE(__libeth_cls, COUNT_ARGS(__VA_ARGS__))(__VA_ARGS__)
+
+/**
+ * libeth_cacheline_struct_assert - make sure CL-based struct size is expected
+ * @type: type of the struct
+ * @...: from 1 to 3 CL group sizes (read-mostly, read-write, cold)
+ *
+ * When a struct contains several CL groups, it's difficult to predict its
+ * size on different architectures. The macro instead takes the sizes of all
+ * of the groups the structure contains and generates the final struct size.
+ */
+#define libeth_cacheline_struct_assert(type, ...)		\
+	__libeth_cacheline_struct_assert(type, __libeth_cls(__VA_ARGS__)); \
+	static_assert(__alignof(type) >= __LIBETH_LARGEST_ALIGN)
+
+/**
+ * libeth_cacheline_set_assert - make sure CL-based struct layout is expected
+ * @type: type of the struct
+ * @ro: expected size of the read-mostly group
+ * @rw: expected size of the read-write group
+ * @c: expected size of the cold group
+ *
+ * Check that each group size is expected and then do the final struct size
+ * check.
+ */
+#define libeth_cacheline_set_assert(type, ro, rw, c)		\
+	libeth_cacheline_group_assert(type, read_mostly, ro);	\
+	libeth_cacheline_group_assert(type, read_write, rw);	\
+	libeth_cacheline_group_assert(type, cold, c);		\
+	libeth_cacheline_struct_assert(type, ro, rw, c)
+
+#endif /* __LIBETH_CACHE_H */
Following the latest netdev trend, i.e. effective and usage-based field
cacheline placement, add helpers to group and then assert struct fields
by cachelines.
For 64-bit with 64-byte cachelines, the assertions are more strict, as
the size can then be easily predicted. For the rest, just make sure
they don't cross the specified bound.

Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
---
 scripts/kernel-doc         |   1 +
 include/net/libeth/cache.h | 100 +++++++++++++++++++++++++++++++++++++
 2 files changed, 101 insertions(+)
 create mode 100644 include/net/libeth/cache.h