Message ID | CAM_3v9+-5V3UD-6g==ahE6kH82xEoofnhBrHcm+n-1CNfdo5+Q@mail.gmail.com
---|---
State | Not Applicable |
On Sun, Jul 03, 2016 at 08:17:32PM -0700, Guru Shetty wrote:
> On 3 July 2016 at 10:24, Ben Pfaff <blp@ovn.org> wrote:
>
> > On Wed, Jun 29, 2016 at 01:17:11AM -0700, Gurucharan Shetty wrote:
> > > This commit adds a 'pre_lb' table that sits before the 'pre_stateful'
> > > table. For packets that need to be load balanced, this table sets
> > > reg0[0] to act as a hint for the pre-stateful table to send the
> > > packet to the conntrack table for defragmentation.
...
> > Is there a way to test this?
>
> The load balancing itself via group actions is tested here:
> https://github.com/openvswitch/ovs/blob/master/tests/system-traffic.at#L2529
>
> But OVN tests do not exist end to end, as the unit test framework does
> not integrate with conntrack NAT. One option is to wait for Daniele's
> userspace NAT work to get merged and then add the tests.
>
> One way that we could test it is to look for the specific group flows
> that get generated with 'ovs-ofctl dump-groups' without any traffic
> sent. I can create a unit test for that as a separate patch.

That might be useful; I'll leave it to you to decide.

> Thank you, I applied it with the following incremental around
> documentation (which I had completely forgotten about, sorry about
> that). I am happy to send any incremental documentation fixes if you
> have any comments. I thought about sending another version, but I would
> likely again have conflicts with Numan's DHCP patches.

Thanks for adding the documentation.
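[As a concrete illustration of the traffic-free check Guru proposes, here
is a minimal sketch. The bridge name br-int, switch name ls1, and all
addresses are illustrative, and the exact ovn-nbctl syntax for creating a
load balancer has varied between versions:]

    # Configure a VIP with two backends in the OVN_Northbound database.
    # All names and addresses here are illustrative.
    lb=$(ovn-nbctl create Load_Balancer vips:\"10.0.0.10\"=\"10.0.0.4,10.0.0.5\")
    ovn-nbctl set Logical_Switch ls1 load_balancer=$lb

    # Without sending any traffic, inspect the generated group flows.
    # Groups need OpenFlow 1.1+, hence the protocol flag.  A select-type
    # group with one bucket per backend is expected.
    ovs-ofctl -O OpenFlow13 dump-groups br-int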
Hi, Ben and Guru. I tried to test the lb feature on my OpenStack env, but
failed.

The simplest topology: three VMs (cirros) and the VIP are on the same
switch. VM2 and VM3 are endpoints for the VIP. I tried to use ping and
ssh to test the VIP, but things don't work.

I think it should be an ARP issue. First, in table ls_in_arp_rsp, there
is no flow entry to respond for the VIP. Second, in table ls_in_l2_lkup,
it determines which port to output to based on the packet's eth.dst. I'm
not familiar with conntrack, but when I run "conntrack -L" it seems to
have nothing that processes the packet's L2 address. So I suppose this is
another place that will cause load balancing to fail.

Thanks.
Zong Kai, LI
On 5 July 2016 at 07:31, Zong Kai LI <zealokii@gmail.com> wrote:
> Hi, Ben and Guru. I tried to test the lb feature on my OpenStack env,
> but failed.
> The simplest topology: three VMs (cirros) and the VIP are on the same
> switch. VM2 and VM3 are endpoints for the VIP.
> I tried to use ping and ssh to test the VIP, but things don't work.
>

Yeah, the current feature works when the destination endpoints (servers)
are in a different subnet than the client, i.e. there has to be a router
in between. The documentation should have clarified that, sorry!

I have a patch in my tree here that works for your use case too:
https://github.com/shettyg/ovs/commit/f961026fc0dd4645e5bcf1e819b8de99a7c3f95a

I will clean it up (i.e. add documentation) and send it out for review.

> I think it should be an ARP issue.
> First, in table ls_in_arp_rsp, there is no flow entry to respond for
> the VIP.
> Second, in table ls_in_l2_lkup, it determines which port to output to
> based on the packet's eth.dst. I'm not familiar with conntrack, but
> when I run "conntrack -L" it seems to have nothing that processes the
> packet's L2 address. So I suppose this is another place that will cause
> load balancing to fail.
>
> Thanks.
> Zong Kai, LI
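[For readers following along, a hedged sketch of the kind of routed
topology Guru describes. All names and addresses are illustrative, and
ovn-nbctl command names have changed across releases (older versions used
lswitch-add/lport-add):]

    # Two logical switches joined by a logical router: the client lives
    # on ls1 (192.168.1.0/24), the VIP backends on ls2 (192.168.2.0/24).
    ovn-nbctl lr-add lr0
    ovn-nbctl ls-add ls1
    ovn-nbctl ls-add ls2

    # Router port for the client subnet and its switch-side peer.
    ovn-nbctl lrp-add lr0 lrp1 00:00:00:00:01:01 192.168.1.1/24
    ovn-nbctl lsp-add ls1 ls1-lr0
    ovn-nbctl lsp-set-type ls1-lr0 router
    ovn-nbctl lsp-set-addresses ls1-lr0 00:00:00:00:01:01
    ovn-nbctl lsp-set-options ls1-lr0 router-port=lrp1

    # Likewise for the backend subnet.
    ovn-nbctl lrp-add lr0 lrp2 00:00:00:00:01:02 192.168.2.1/24
    ovn-nbctl lsp-add ls2 ls2-lr0
    ovn-nbctl lsp-set-type ls2-lr0 router
    ovn-nbctl lsp-set-addresses ls2-lr0 00:00:00:00:01:02
    ovn-nbctl lsp-set-options ls2-lr0 router-port=lrp2

    # The VIP sits outside the client's subnet, so client traffic goes to
    # the gateway MAC and the switch pipeline can load-balance it.
    lb=$(ovn-nbctl create Load_Balancer vips:\"192.168.2.100\"=\"192.168.2.10,192.168.2.11\")
    ovn-nbctl set Logical_Switch ls1 load_balancer=$lb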
On 5 July 2016 at 07:31, Zong Kai LI <zealokii@gmail.com> wrote:
> Hi, Ben and Guru. I tried to test the lb feature on my OpenStack env,
> but failed.
> The simplest topology: three VMs (cirros) and the VIP are on the same
> switch. VM2 and VM3 are endpoints for the VIP.
> I tried to use ping and ssh to test the VIP, but things don't work.
>
> I think it should be an ARP issue.
> First, in table ls_in_arp_rsp, there is no flow entry to respond for
> the VIP.
> Second, in table ls_in_l2_lkup, it determines which port to output to
> based on the packet's eth.dst. I'm not familiar with conntrack, but
> when I run "conntrack -L" it seems to have nothing that processes the
> packet's L2 address. So I suppose this is another place that will cause
> load balancing to fail.
>

On second thought, your use case would still not work without a router
connected to your switch, because the VIP itself does not have a MAC
address associated with it. When a VIP is in a different subnet, the
logical port has to send the packet to the router port. I think that
would be the case in a non-virtualized world too.

> Thanks.
> Zong Kai, LI
diff --git a/ovn/northd/ovn-northd.8.xml b/ovn/northd/ovn-northd.8.xml
index b8ee106..6bc83ea 100644
--- a/ovn/northd/ovn-northd.8.xml
+++ b/ovn/northd/ovn-northd.8.xml
@@ -252,7 +252,23 @@
     before eventually advancing to ingress table <code>ACLs</code>.
   </p>

-  <h3>Ingress Table 4: Pre-stateful</h3>
+  <h3>Ingress Table 4: Pre-LB</h3>
+
+  <p>
+    This table prepares flows for possible stateful load balancing
+    processing in ingress tables <code>LB</code> and <code>Stateful</code>.
+    It contains a priority-0 flow that simply moves traffic to the next
+    table. If load balancing rules with virtual IP addresses (and ports)
+    are configured in the <code>OVN_Northbound</code> database for a
+    logical datapath, a priority-100 flow is added for each configured
+    virtual IP address <var>VIP</var> with a match
+    <code>ip && ip4.dst == <var>VIP</var></code> that sets an action
+    <code>reg0[0] = 1; next;</code> to act as a hint for table
+    <code>Pre-stateful</code> to send IP packets to the connection
+    tracker for packet de-fragmentation before eventually advancing to
+    ingress table <code>LB</code>.
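[To make the documented behaviour concrete, a sketch of roughly what the
resulting logical flows look like for a hypothetical VIP of 10.0.0.10 on
a switch ls1; the exact table numbers and output formatting depend on the
OVN version:]

    $ ovn-sbctl lflow-list ls1
      ...
      table=4 (ls_in_pre_lb), priority=100,
              match=(ip && ip4.dst == 10.0.0.10),
              action=(reg0[0] = 1; next;)
      table=4 (ls_in_pre_lb), priority=0, match=(1), action=(next;)
      ...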