
[1/3] xsk: replace datagram_poll by sock_poll_wait

Message ID dfa43bcf7083edd0823e276c0cf8e21f3a226da6.1605686678.git.xuanzhuo@linux.alibaba.com
State Superseded
Series xsk: fix for xsk_poll writeable

Commit Message

Xuan Zhuo Nov. 18, 2020, 8:25 a.m. UTC
datagram_poll judges the current socket status (EPOLLIN, EPOLLOUT)
based on traditional socket state (e.g. sk_wmem_alloc), but none of
that state applies to xsk, whose readiness lives in its rx/tx rings.
So this patch uses sock_poll_wait instead of datagram_poll, and the
mask is computed by xsk_poll itself.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 net/xdp/xsk.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

Comments

Magnus Karlsson Nov. 23, 2020, 2:11 p.m. UTC | #1
On Wed, Nov 18, 2020 at 9:26 AM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
>
> datagram_poll judges the current socket status (EPOLLIN, EPOLLOUT)
> based on traditional socket state (e.g. sk_wmem_alloc), but none of
> that state applies to xsk, whose readiness lives in its rx/tx rings.
> So this patch uses sock_poll_wait instead of datagram_poll, and the
> mask is computed by xsk_poll itself.
>
> Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> ---
>  net/xdp/xsk.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
> index cfbec39..7f0353e 100644
> --- a/net/xdp/xsk.c
> +++ b/net/xdp/xsk.c
> @@ -477,11 +477,13 @@ static int xsk_sendmsg(struct socket *sock, struct msghdr *m, size_t total_len)
>  static __poll_t xsk_poll(struct file *file, struct socket *sock,
>                              struct poll_table_struct *wait)
>  {
> -       __poll_t mask = datagram_poll(file, sock, wait);
> +       __poll_t mask = 0;

It would indeed be nice not to execute a number of tests in
datagram_poll that can never be triggered for xsk. It will speed
things up for sure. But we need to make sure that dropping the flags
datagram_poll sets does not have any bad effects on the code above
this. Let us tentatively keep this patch for the next version of the
patch set; we just need to figure out how to solve your problem in a
nice way first. See the discussion in patch 0/3.

>         struct sock *sk = sock->sk;
>         struct xdp_sock *xs = xdp_sk(sk);
>         struct xsk_buff_pool *pool;
>
> +       sock_poll_wait(file, sock, wait);
> +
>         if (unlikely(!xsk_is_bound(xs)))
>                 return mask;
>
> --
> 1.8.3.1
>
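
For reference, an abridged sketch of the checks in datagram_poll (net/core/datagram.c, condensed here for illustration from kernels of this era) that xsk_poll stops running after this patch; the EPOLLERR/EPOLLRDHUP/EPOLLHUP bits below are the flags being discussed:

__poll_t datagram_poll(struct file *file, struct socket *sock,
		       poll_table *wait)
{
	struct sock *sk = sock->sk;
	__poll_t mask = 0;

	sock_poll_wait(file, sock, wait);

	/* Exceptional events? */
	if (sk->sk_err || !skb_queue_empty_lockless(&sk->sk_error_queue))
		mask |= EPOLLERR;
	if (sk->sk_shutdown & RCV_SHUTDOWN)
		mask |= EPOLLRDHUP | EPOLLIN | EPOLLRDNORM;
	if (sk->sk_shutdown == SHUTDOWN_MASK)
		mask |= EPOLLHUP;

	/* Readable? Keyed off sk_receive_queue, which xsk does not use. */
	if (!skb_queue_empty_lockless(&sk->sk_receive_queue))
		mask |= EPOLLIN | EPOLLRDNORM;

	/* Writable? Keyed off sk_wmem_alloc via sock_writeable(). */
	if (sock_writeable(sk))
		mask |= EPOLLOUT | EPOLLWRNORM | EPOLLWRBAND;

	return mask;
}
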
Magnus Karlsson Nov. 24, 2020, 11:36 a.m. UTC | #2
On Mon, Nov 23, 2020 at 3:11 PM Magnus Karlsson
<magnus.karlsson@gmail.com> wrote:
>
> On Wed, Nov 18, 2020 at 9:26 AM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> >
> > datagram_poll judges the current socket status (EPOLLIN, EPOLLOUT)
> > based on traditional socket state (e.g. sk_wmem_alloc), but none of
> > that state applies to xsk, whose readiness lives in its rx/tx rings.
> > So this patch uses sock_poll_wait instead of datagram_poll, and the
> > mask is computed by xsk_poll itself.
> >
> > Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> > ---
> >  net/xdp/xsk.c | 4 +++-
> >  1 file changed, 3 insertions(+), 1 deletion(-)
> >
> > diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
> > index cfbec39..7f0353e 100644
> > --- a/net/xdp/xsk.c
> > +++ b/net/xdp/xsk.c
> > @@ -477,11 +477,13 @@ static int xsk_sendmsg(struct socket *sock, struct msghdr *m, size_t total_len)
> >  static __poll_t xsk_poll(struct file *file, struct socket *sock,
> >                              struct poll_table_struct *wait)
> >  {
> > -       __poll_t mask = datagram_poll(file, sock, wait);
> > +       __poll_t mask = 0;
>
> It would indeed be nice not to execute a number of tests in
> datagram_poll that can never be triggered for xsk. It will speed
> things up for sure. But we need to make sure that dropping the flags
> datagram_poll sets does not have any bad effects on the code above
> this. Let us tentatively keep this patch for the next version of the
> patch set; we just need to figure out how to solve your problem in a
> nice way first. See the discussion in patch 0/3.
>
> >         struct sock *sk = sock->sk;
> >         struct xdp_sock *xs = xdp_sk(sk);
> >         struct xsk_buff_pool *pool;
> >
> > +       sock_poll_wait(file, sock, wait);
> > +
> >         if (unlikely(!xsk_is_bound(xs)))
> >                 return mask;
> >
> > --
> > 1.8.3.1
> >

The fix looks correct, and as a bonus it will speed things up too.
Please include this patch in v2 as outlined in my answer to 0/3.

Thanks!
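
To illustrate what the mask feeds, a minimal userspace sketch of an AF_XDP event loop built on poll(2); xsk_fd and wait_for_rings are hypothetical names, and the descriptor is assumed to be an AF_XDP socket already created and bound to a device queue:

#include <poll.h>

/* xsk_fd: an already created and bound AF_XDP socket (hypothetical). */
static void wait_for_rings(int xsk_fd)
{
	struct pollfd pfd = {
		.fd     = xsk_fd,
		.events = POLLIN | POLLOUT,
	};

	if (poll(&pfd, 1, -1) <= 0)
		return;

	if (pfd.revents & POLLIN) {
		/* xsk_poll saw a non-empty rx ring: consume descriptors. */
	}
	if (pfd.revents & POLLOUT) {
		/* xsk_poll saw room in the tx ring: produce descriptors. */
	}
}

The point of the series (see 0/3) is that with datagram_poll in the path, POLLOUT could be reported from sk_wmem_alloc even though the tx ring had no room; computing the mask entirely in xsk_poll avoids that mismatch.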

Patch

diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
index cfbec39..7f0353e 100644
--- a/net/xdp/xsk.c
+++ b/net/xdp/xsk.c
@@ -477,11 +477,13 @@ static int xsk_sendmsg(struct socket *sock, struct msghdr *m, size_t total_len)
 static __poll_t xsk_poll(struct file *file, struct socket *sock,
 			     struct poll_table_struct *wait)
 {
-	__poll_t mask = datagram_poll(file, sock, wait);
+	__poll_t mask = 0;
 	struct sock *sk = sock->sk;
 	struct xdp_sock *xs = xdp_sk(sk);
 	struct xsk_buff_pool *pool;
 
+	sock_poll_wait(file, sock, wait);
+
 	if (unlikely(!xsk_is_bound(xs)))
 		return mask;
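
The hunk is truncated here. For context only (not part of the patch text), the remainder of xsk_poll in kernels of this era derives the mask from ring state roughly as follows, using helpers from net/xdp/xsk_queue.h:

	pool = xs->pool;

	if (pool->cached_need_wakeup) {
		if (xs->zc)
			xsk_wakeup(xs, pool->cached_need_wakeup);
		else
			/* Poll needs to drive Tx also in copy mode. */
			__xsk_sendmsg(sk);
	}

	if (xs->rx && !xskq_prod_is_empty(xs->rx))
		mask |= EPOLLIN | EPOLLRDNORM;
	if (xs->tx && !xskq_cons_is_full(xs->tx))
		mask |= EPOLLOUT | EPOLLWRNORM;

	return mask;
}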