[v2,net-next,0/2] net: preserve sock reference when scrubbing the skb.

Message ID 20180627133426.3858-1-fbl@redhat.com

Message

Flavio Leitner June 27, 2018, 1:34 p.m. UTC
The sock reference is lost when scrubbing the packet and that breaks
TSQ (TCP Small Queues) and XPS (Transmit Packet Steering) causing
performance impacts of about 50% in a single TCP stream when crossing
network namespaces.

XPS breaks because the queue mapping stored in the socket is not
available, so another random queue might be selected when the stack
needs to transmit something like a TCP ACK, or TCP Retransmissions.
That causes packet re-ordering and/or performance issues.

TSQ breaks because it orphans the packet while it is still in the
host, so packets are queued contributing to the buffer bloat problem.

Preserving the sock reference fixes both issues. The socket is
orphaned anyway in the receive path before any relevant action,
but the transmit side needs the extra checking added in the
first patch.

The first patch updates netfilter to check that the socket
netns is local before using it.

The second patch removes the skb_orphan() call from skb_scrub_packet()
and improves the documentation.

ChangeLog:
- split into two (Eric)
- addressed Paolo's offline feedback to swap the checks in xt_socket.c
  to preserve original behavior.
- improved ip-sysctl.txt (reported by Cong)

Flavio Leitner (2):
  netfilter: check if the socket netns is correct.
  skbuff: preserve sock reference when scrubbing the skb.

 Documentation/networking/ip-sysctl.txt | 10 +++++-----
 include/net/netfilter/nf_log.h         |  3 ++-
 net/core/skbuff.c                      |  1 -
 net/ipv4/netfilter/nf_log_ipv4.c       |  8 ++++----
 net/ipv6/netfilter/nf_log_ipv6.c       |  8 ++++----
 net/netfilter/nf_conntrack_broadcast.c |  2 +-
 net/netfilter/nf_log_common.c          |  5 +++--
 net/netfilter/nf_nat_core.c            |  6 +++++-
 net/netfilter/nft_meta.c               |  9 ++++++---
 net/netfilter/nft_socket.c             |  5 ++++-
 net/netfilter/xt_cgroup.c              |  6 ++++--
 net/netfilter/xt_owner.c               |  2 +-
 net/netfilter/xt_recent.c              |  3 ++-
 net/netfilter/xt_socket.c              |  8 ++++++++
 14 files changed, 49 insertions(+), 27 deletions(-)

Comments

Cong Wang June 27, 2018, 7:39 p.m. UTC | #1
Let me rephrase why I don't like this patchset:

1. Let's forget about TSQ for a moment: skb_orphan() before leaving
the stack is not just reasonable but also aligned with the network
isolation design. You can't claim skb_orphan() has been broken from the
beginning; it was designed this way and it is intentional.

2. Now, let's consider the current TSQ behavior (without any patch):

2a. For packets leaving the host or just leaving the stack to another
netns, there is no difference, and this should be expected from the
user's point of view, because I don't need to think about a packet's
destination to decide how I should configure tcp_limit_output_bytes.

2b. The hidden pipeline behind TSQ is well defined, that is, any
queues in between L4 and L2, most importantly qdisc. I can easily
predict the number of queues my packets will go through with a
given configuration. This also aligns with 2a.

2c. Isolation is respected as it should be. TCP sockets in this netns
won't be influenced by any factor in another netns.

Now with your patchset:

2a. There is an apparent difference for packets leaving the host
and for packets just leaving this stack.

2b. You extend the pipeline to another netns's L3, which means
the number of queues is now unpredictable.

2c. Isolation is now slightly broken: the other netns can influence
the source netns.

I don't see that you have any good argument on any of these three points.
David Miller June 28, 2018, 1:20 p.m. UTC | #2
From: Cong Wang <xiyou.wangcong@gmail.com>
Date: Wed, 27 Jun 2018 12:39:01 -0700

> Let me rephrase why I don't like this patchset:

Cong, I don't think you are seeing the situation clearly and
I am certainly going to apply this patch series even in the
face of your objections.

Suggesting that solving the lack of back pressure on a UDP
socket caused by this problem by using cgroups or cpu
usage controllers is just complete and utter madness.
David Miller June 28, 2018, 1:21 p.m. UTC | #3
From: Flavio Leitner <fbl@redhat.com>
Date: Wed, 27 Jun 2018 10:34:24 -0300

> The sock reference is lost when scrubbing the packet and that breaks
> TSQ (TCP Small Queues) and XPS (Transmit Packet Steering) causing
> performance impacts of about 50% in a single TCP stream when crossing
> network namespaces.
> 
> XPS breaks because the queue mapping stored in the socket is not
> available, so another random queue might be selected when the stack
> needs to transmit something like a TCP ACK, or TCP Retransmissions.
> That causes packet re-ordering and/or performance issues.
> 
> TSQ breaks because it orphans the packet while it is still in the
> host, so packets are queued contributing to the buffer bloat problem.
> 
> Preserving the sock reference fixes both issues. The socket is
> orphaned anyways in the receiving path before any relevant action,
> but the transmit side needs some extra checking included in the
> first patch.
> 
> The first patch will update netfilter to check if the socket
> netns is local before use it.
> 
> The second patch removes the skb_orphan() from the skb_scrub_packet()
> and improve the documentation.
> 
> ChangeLog:
> - split into two (Eric)
> - addressed Paolo's offline feedback to swap the checks in xt_socket.c
>   to preserve original behavior.
> - improved ip-sysctl.txt (reported by Cong)

Series applied, thanks Flavio.
Cong Wang June 28, 2018, 9:41 p.m. UTC | #4
On Thu, Jun 28, 2018 at 6:20 AM David Miller <davem@davemloft.net> wrote:
>
> From: Cong Wang <xiyou.wangcong@gmail.com>
> Date: Wed, 27 Jun 2018 12:39:01 -0700
>
> > Let me rephrase why I don't like this patchset:
>
> Cong, I don't think you are seeing the situation clearly and
> I am certainly going to apply this patch series even in the
> face of your objections.
>
> Suggesting that solving the lack of back pressure on a UDP
> socket caused by this problem by using cgroups or cpu
> usage controllers is just complete and utter madness.

Pretty sure you didn't even read the rest of my reply;
I can't help you if you just stop where you quoted.
Cong Wang June 28, 2018, 9:53 p.m. UTC | #5
On Wed, Jun 27, 2018 at 12:39 PM Cong Wang <xiyou.wangcong@gmail.com> wrote:
>
> Let me rephrase why I don't like this patchset:
>
> 1. Let's forget about TSQ for a moment, skb_orphan() before leaving
> the stack is not just reasonable but also aligning to network isolation
> design. You can't claim skb_orphan() is broken from beginning, it is
> designed in this way and it is intentional.
>
> 2. Now, let's consider the current TSQ behavior (without any patch):
>
> 2a. For packets leaving the host or just leaving the stack to another
> netns, there is no difference, and this should be expected from user's
> point of view, because I don't need to think about its destination to
> decide how I should configure tcp_limit_output_bytes.
>
> 2b. The hidden pipeline behind TSQ is well defined, that is, any
> queues in between L4 and L2, most importantly qdisc. I can easily
> predict the number of queues my packets will go through with a
> given configuration. This also aligns with 2a.
>
> 2c. Isolation is respected as it should. TCP sockets in this netns
> won't be influenced by any factor in another netns.
>
> Now with your patchset:
>
> 2a. There is an apparent difference for packets leaving the host
> and for packets just leaving this stack.
>
> 2b. You extend the pipeline to another netns's L3, which means
> the number of queues is now unpredictable.
>
> 2c. Isolation is now slightly broken, the other netns could influence
> the source netns.
>
> I don't see you have any good argument on any of these 3 points.

No one finishes reading this.
I will send a revert with a quote of the above.
David Miller June 29, 2018, 2:20 a.m. UTC | #6
From: Cong Wang <xiyou.wangcong@gmail.com>
Date: Thu, 28 Jun 2018 14:41:16 -0700

> Pretty sure you didn't even read the rest of my reply,
> I can't help you if you just stop at where you quoted.

I read your entire email, please do not accuse me of things
you are not sure of.
David Miller June 29, 2018, 2:22 a.m. UTC | #7
From: Cong Wang <xiyou.wangcong@gmail.com>
Date: Thu, 28 Jun 2018 14:53:09 -0700

> I will send a revert with quote of the above.

And it will go to /dev/null as far as I am concerned.  I read it the
first time, so posting it again will not change my opinion of what you
have to say.

Cong, you really need to calm down and understand that people perhaps
simply fundamentally disagree with you.

And I am one of those people.
Cong Wang June 30, 2018, 12:15 a.m. UTC | #8
On Thu, Jun 28, 2018 at 7:22 PM David Miller <davem@davemloft.net> wrote:
>
> From: Cong Wang <xiyou.wangcong@gmail.com>
> Date: Thu, 28 Jun 2018 14:53:09 -0700
>
> > I will send a revert with quote of the above.
>
> And it will go to /dev/null as far as I am concerned.  I read it the
> first time, so posting it again will not change my opinion of what you
> have to say.

David, you claim you read it; now tell me, where did "cgroups" or "cpu"
come from?

This is the link of my reply you quoted:
https://marc.info/?l=linux-netdev&m=153013948711582&w=2

I did mention cgroups to Eric because of isolation: softnet_data
is per-CPU, and CPUs are apparently not isolated by netns, therefore
sd->input_pkt_queue can't be totally isolated per netns without cpuset.

But this was never the reason why I dislike it, which is why I never
even mentioned it in the link above.

>
> Cong, you really need to calm down and understand that people perhaps
> simply fundamentally disagree with you.

1. Eric's "forwarding to eth0" scenario was missing, never brought up
until his private reply. Without this information, XPS makes no sense at
all in this patchset. For the record, I provided a different solution to Eric.

2. No one responded to:
https://marc.info/?l=linux-netdev&m=153013948711582&w=2
I never expected you to agree with me on all of them, but no one
has given me any response to my concerns.

3. I will write a blog post to draw you some pictures, since it
is so hard to understand the isolation...