Message ID: <20200526122741.1603118-1-numans@ovn.org>
From: numans@ovn.org
To: dev@openvswitch.org
Date: Tue, 26 May 2020 17:57:41 +0530
Subject: [ovs-dev] [PATCH ovn v8 0/8] Incremental processing improvements.
From: Numan Siddique <numans@ovn.org>

This patch series handles port binding, datapath binding, OVS interface changes, runtime data changes, and SB chassis changes incrementally. (These patches can also be found here - https://github.com/numansiddique/ovn/tree/IP_improvements_v5)

Below are the results of some testing I did with an ovn-fake-multinode setup.

Test setup
----------
1. One ovn-central fake node running the OVN dbs and 2 compute nodes running ovn-controller.
2. Before running the tests, used an existing OVN db with the below resources:
     No of logical switches       - 53
     No of logical ports          - 1256
     No of logical routers        - 9
     No of logical router ports   - 56
     No of port groups            - 152
     No of logical flows          - 45447
     Port bindings on compute-1   - 19
     Port bindings on compute-2   - 18
     No of OF flows on compute-1  - 84996
     No of OF flows on compute-2  - 84901
3. The test does the following:
   - Creates 2 logical switches (one for each compute node) and connects each to a logical router (one per compute node).
   - Creates 100 logical ports (50 per lswitch), adds a simple ACL, and creates an address set for each port.
   - Binds each port on the respective compute node, then pings the IP of the port (from another port belonging to the same lswitch created earlier).
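For concreteness, step 3 above can be sketched with plain ovn-nbctl/ovs-vsctl commands. This is only an illustrative sketch: the switch/router/port names and the MAC and IP addresses below are invented, and the actual test drives these operations through a Rally-based harness rather than a shell script.

```shell
# Hypothetical names and addresses; one switch + router pair per compute node.
ovn-nbctl lr-add lr0
ovn-nbctl ls-add sw0
ovn-nbctl lrp-add lr0 lr0-sw0 00:00:00:00:01:01 10.0.0.1/24
ovn-nbctl lsp-add sw0 sw0-lr0
ovn-nbctl lsp-set-type sw0-lr0 router
ovn-nbctl lsp-set-addresses sw0-lr0 router
ovn-nbctl lsp-set-options sw0-lr0 router-port=lr0-sw0

# 50 logical ports on the switch.
for i in $(seq 1 50); do
    ovn-nbctl lsp-add sw0 "sw0-port$i"
    ovn-nbctl lsp-set-addresses "sw0-port$i" \
        "00:00:00:00:02:$(printf %02x "$i") 10.0.0.$((i + 10))"
done

# A simple ACL on the switch.
ovn-nbctl acl-add sw0 to-lport 1000 'ip4' allow-related

# On the compute node, bind a port by creating an OVS interface whose
# iface-id matches the logical port name; ovn-controller then claims
# the corresponding SB port binding.
ovs-vsctl add-port br-int vif1 -- \
    set Interface vif1 external_ids:iface-id=sw0-port1
```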
Below are the results with OVN master:

+----------------------------------------------------------------------------------------------------------+
|                                           Response Times (sec)                                            |
+----------------------------------+-------+--------+--------+--------+--------+--------+---------+--------+
| action                           | min   | median | 90%ile | 95%ile | max    | avg    | success | count  |
+----------------------------------+-------+--------+--------+--------+--------+--------+---------+--------+
| ovn.create_or_update_address_set | 0.491 | 0.519  | 0.542  | 0.548  | 0.558  | 0.521  | 100.0%  | 100    |
| ovn.create_or_update_port_group  | 0.0   | 0.0    | 0.0    | 0.0    | 0.001  | 0.0    | 100.0%  | 100    |
| ovn.create_port_group_acls       | 0.966 | 1.037  | 1.065  | 1.069  | 1.07   | 1.037  | 50.0%   | 100    |
| ovn_network.bind_port            | 1.242 | 1.341  | 1.397  | 1.409  | 1.443  | 1.348  | 100.0%  | 100    |
| ovn.bind_ovs_vm                  | 0.413 | 0.469  | 0.49   | 0.494  | 0.523  | 0.469  | 100.0%  | 100    |
| ovn.bind_internal_vm             | 0.804 | 0.875  | 0.921  | 0.935  | 0.95   | 0.88   | 100.0%  | 100    |
| ovn_network.wait_port_ping       | 6.695 | 7.788  | 7.903  | 11.63  | 16.124 | 7.997  | 100.0%  | 100    |
| total                            | 9.271 | 10.318 | 11.269 | 14.047 | 18.509 | 10.871 | 100.0%  | 100    |
+----------------------------------+-------+--------+--------+--------+--------+--------+---------+--------+

Load duration: 1087.5742933750153
Full duration: 1089.151035308838

Below are the results with these patches:

+---------------------------------------------------------------------------------------------------------+
|                                           Response Times (sec)                                           |
+----------------------------------+-------+--------+--------+--------+-------+-------+---------+---------+
| action                           | min   | median | 90%ile | 95%ile | max   | avg   | success | count   |
+----------------------------------+-------+--------+--------+--------+-------+-------+---------+---------+
| ovn.create_or_update_address_set | 0.484 | 0.506  | 0.53   | 0.536  | 0.551 | 0.509 | 100.0%  | 100     |
| ovn.create_or_update_port_group  | 0.0   | 0.0    | 0.0    | 0.0    | 0.0   | 0.0   | 100.0%  | 100     |
| ovn.create_port_group_acls       | 0.966 | 1.006  | 1.032  | 1.036  | 1.059 | 1.006 | 50.0%   | 100     |
| ovn_network.bind_port            | 1.255 | 1.352  | 1.421  | 1.444  | 1.516 | 1.352 | 100.0%  | 100     |
| ovn.bind_ovs_vm                  | 0.411 | 0.455  | 0.472  | 0.476  | 0.5   | 0.456 | 100.0%  | 100     |
| ovn.bind_internal_vm             | 0.806 | 0.893  | 0.968  | 0.989  | 1.043 | 0.896 | 100.0%  | 100     |
| ovn_network.wait_port_ping       | 0.226 | 0.253  | 0.325  | 0.329  | 0.347 | 0.267 | 100.0%  | 100     |
| total                            | 2.517 | 3.137  | 3.718  | 3.749  | 3.797 | 3.135 | 100.0%  | 100     |
+----------------------------------+-------+--------+--------+--------+-------+-------+---------+---------+

Load duration: 313.99292826652527
Full duration: 315.29931354522705

I ran the same tests but with 1000 lports; below are the results with these patches:

+---------------------------------------------------------------------------------------------------------+
|                                           Response Times (sec)                                           |
+----------------------------------+-------+--------+--------+--------+-------+-------+---------+---------+
| action                           | min   | median | 90%ile | 95%ile | max   | avg   | success | count   |
+----------------------------------+-------+--------+--------+--------+-------+-------+---------+---------+
| ovn.create_or_update_address_set | 0.483 | 0.555  | 0.6    | 0.615  | 0.661 | 0.555 | 100.0%  | 1000    |
| ovn.create_or_update_port_group  | 0.0   | 0.0    | 0.0    | 0.0    | 0.002 | 0.0   | 100.0%  | 1000    |
| ovn.create_port_group_acls       | 0.973 | 1.101  | 1.176  | 1.195  | 1.271 | 1.097 | 50.0%   | 1000    |
| ovn_network.bind_port            | 1.239 | 1.371  | 1.444  | 1.47   | 1.557 | 1.373 | 100.0%  | 1000    |
| ovn.bind_ovs_vm                  | 0.409 | 0.482  | 0.522  | 0.541  | 0.597 | 0.486 | 100.0%  | 1000    |
| ovn.bind_internal_vm             | 0.784 | 0.882  | 0.945  | 0.968  | 1.063 | 0.887 | 100.0%  | 1000    |
| ovn_network.wait_port_ping       | 0.218 | 0.251  | 0.313  | 0.324  | 0.395 | 0.262 | 100.0%  | 1000    |
| total                            | 2.465 | 3.251  | 3.956  | 4.016  | 4.226 | 3.274 | 100.0%  | 1000    |
+----------------------------------+-------+--------+--------+--------+-------+-------+---------+---------+

Load duration: 3279.292845249176
Full duration: 3280.6857619285583

v7 -> v8
--------
 * Dropped patch 4 as it is not needed, thanks to Han.
 * Swapped patches 5 and 6, which are now patches 4 and 5. The v8 patch 5
   now makes use of the tracked data support of the engine to clear the
   physical_flow changes.
 * Addressed comments from Han. Added comments in patch 5 and patch 6.

v6 -> v7
--------
 * Addressed the review comments from Han in patch 1 and patch 2.

v5 -> v6
--------
 * Addressed the review comments from Dumitru.
 * Patch 1 and patch 2 are significantly changed due to further refactoring.

v4 -> v5
--------
 * Applied patch 1 of v4 to master.
 * Addressed the review comments from Han for patch 2.
 * Rebased to latest master.

v3 -> v4
--------
 * A small fix in patch 3 when binding the port for an ovs interface change.
 * Rebased to latest master.

v2 -> v3
--------
 * Added back patches 5 and 6 and added 4 more patches, so the series now
   has 10 patches in total.
 * Handling the runtime data changes in flow computation.
 * Handling sbrec_chassis changes.

v1 -> v2
--------
 * Addressed the review comments from Han in patches 1, 2 and 3.
 * Removed patches 5 and 6 from the series. As per the comments from Han,
   we should handle runtime data changes in the flow output engine, but
   patch 6 of the series had added a no-op handler. So removed these 2
   patches until those are addressed.

RFC v2 -> v1
------------
 * Fixed the 2 failing test cases.
 * Updated the commit messages.

RFC v1 -> RFC v2
----------------
 * Added 2 new patches.
 * Patch 5 (ofctrl_check_and_add_flow) was submitted earlier too and the
   previous discussion is here - https://patchwork.ozlabs.org/patch/1202417/
 * Patch 6 handles I-P for ct_zone and OVS interface changes in the
   flow_output_run stage.

Numan Siddique (6):
  Refactor binding_run() to take two context arguments - binding_ctx_in
    and binding_ctx_out.
  ovn-controller: Refactor binding.c
  ovn-controller: I-P for port binding in runtime_data stage
  ovn-controller: I-P for datapath binding
  ofctrl_check_and_add_flow: Replace the actions of an existing flow if
    actions have changed.
  ovn-controller: I-P for ct zone and OVN

Numan Siddique (7):
  ovn-controller: Store the local port bindings in the runtime data I-P
    state.
  ovn-controller: I-P for SB port binding and OVS interface in
    runtime_data.
  ovn-controller: I-P for datapath binding
  I-P engine: Provide the option for an engine to store changed data
  ovn-controller: I-P for ct zone and OVS interface changes in flow
    output stage.
  ovn-controller: Handle runtime data changes in flow output engine
  ovn-controller: Handle sbrec_chassis changes incrementally.

Venkata Anil (1):
  ovn-controller: Use the tracked runtime data changes for flow
    calculation.

 controller/binding.c        | 2040 ++++++++++++++++++++++++++++-------
 controller/binding.h        |   62 +-
 controller/lflow.c          |   82 +-
 controller/lflow.h          |   14 +-
 controller/ovn-controller.c |  555 ++++++++--
 controller/physical.c       |   51 +
 controller/physical.h       |    8 +-
 lib/inc-proc-eng.c          |   30 +-
 lib/inc-proc-eng.h          |   20 +
 tests/ovn-performance.at    |   51 +-
 tests/ovn.at                |   36 +
 11 files changed, 2426 insertions(+), 523 deletions(-)
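For a rough sense of the improvement, the median response times in the 100-port tables above work out to about a 3.3x overall speedup, dominated by ovn_network.wait_port_ping at roughly 30x. A small Python sketch of that arithmetic (numbers copied from the tables; the dict keys are just the relevant Rally action names):

```python
# Median response times in seconds from the 100-port Rally runs above:
# OVN master vs. this series.
master = {"ovn_network.wait_port_ping": 7.788, "total": 10.318}
patched = {"ovn_network.wait_port_ping": 0.253, "total": 3.137}

for action in master:
    speedup = master[action] / patched[action]
    print(f"{action}: {speedup:.1f}x faster")
# wait_port_ping comes out around 30.8x, the total around 3.3x.
```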