From patchwork Sun Oct 30 13:30:07 2016
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Stephen Finucane
X-Patchwork-Id: 688945
X-Patchwork-Delegate: rbryant@redhat.com
From: Stephen Finucane <stephen@that.guru>
To: dev@openvswitch.org
Date: Sun, 30 Oct 2016 13:30:07 +0000
Message-Id: <1477834209-11414-22-git-send-email-stephen@that.guru>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1477834209-11414-1-git-send-email-stephen@that.guru>
References: <1477834209-11414-1-git-send-email-stephen@that.guru>
Subject: [ovs-dev] [PATCH 21/23] doc: Convert CONTAINERS.OpenStack to rST
Errors-To: dev-bounces@openvswitch.org
Sender: "dev"

Signed-off-by: Stephen Finucane <stephen@that.guru>
---
 ovn/CONTAINERS.OpenStack.md  | 122 --------------------------------------
 ovn/CONTAINERS.OpenStack.rst | 135 +++++++++++++++++++++++++++++++++++++++++++
 ovn/automake.mk              |   2 +-
 3 files changed, 136 insertions(+), 123 deletions(-)
 delete mode 100644 ovn/CONTAINERS.OpenStack.md
 create mode 100644 ovn/CONTAINERS.OpenStack.rst

diff --git a/ovn/CONTAINERS.OpenStack.md b/ovn/CONTAINERS.OpenStack.md
deleted file mode 100644
index 8f74e6f..0000000
--- a/ovn/CONTAINERS.OpenStack.md
+++ /dev/null
@@ -1,122 +0,0 @@
-Integration of Containers with OVN and OpenStack
-------------------------------------------------
-
-Isolation between containers is weaker than isolation between VMs, so
-some environments deploy containers for different tenants in separate
-VMs as an additional security measure. This document describes creation of
-containers inside VMs and how they can be made part of the logical networks
-securely. The created logical network can include VMs, containers and
-physical machines as endpoints. To better understand the proposed integration
-of containers with OVN and OpenStack, this document describes the end to end
-workflow with an example.
-
-* A OpenStack tenant creates a VM (say VM-A) with a single network interface
-that belongs to a management logical network. The VM is meant to host
-containers. OpenStack Nova chooses the hypervisor on which VM-A is created.
-
-* A Neutron port may have been created in advance and passed in to Nova
-with the request to create a new VM. If not, Nova will issue a request
-to Neutron to create a new port. The ID of the logical port from
-Neutron will also be used as the vif-id for the virtual network
-interface (VIF) of VM-A.
-
-* When VM-A is created on a hypervisor, its VIF gets added to the
-Open vSwitch integration bridge. This creates a row in the Interface table
-of the Open_vSwitch database. As explained in the [IntegrationGuide.rst],
-the vif-id associated with the VM network interface gets added in the
-external_ids:iface-id column of the newly created row in the Interface table.
-
-* Since VM-A belongs to a logical network, it gets an IP address. This IP
-address is used to spawn containers (either manually or through container
-orchestration systems) inside that VM and to monitor the health of the
-created containers.
-
-* The vif-id associated with the VM's network interface can be obtained by
-making a call to Neutron using tenant credentials.
-
-* This flow assumes a component called a "container network plugin".
-If you take Docker as an example for containers, you could envision
-the plugin to be either a wrapper around Docker or a feature of Docker itself
-that understands how to perform part of this workflow to get a container
-connected to a logical network managed by Neutron. The rest of the flow
-refers to this logical component that does not yet exist as the
-"container network plugin".
-
-* All the calls to Neutron will need tenant credentials. These calls can
-either be made from inside the tenant VM as part of a container network plugin
-or from outside the tenant VM (if the tenant is not comfortable using temporary
-Keystone tokens from inside the tenant VMs). For simplicity, this document
-explains the work flow using the former method.
-
-* The container hosting VM will need Open vSwitch installed in it. The only
-work for Open vSwitch inside the VM is to tag network traffic coming from
-containers.
-
-* When a container needs to be created inside the VM with a container network
-interface that is expected to be attached to a particular logical switch, the
-network plugin in that VM chooses any unused VLAN (This VLAN tag only needs to
-be unique inside that VM. This limits the number of container interfaces to
-4096 inside a single VM). This VLAN tag is stripped out in the hypervisor
-by OVN and is only useful as a context (or metadata) for OVN.
-
-* The container network plugin then makes a call to Neutron to create a
-logical port. In addition to all the inputs that a call to create a port in
-Neutron that are currently needed, it sends the vif-id and the VLAN tag as
-inputs.
-
-* Neutron in turn will verify that the vif-id belongs to the tenant in question
-and then uses the OVN specific plugin to create a new row in the
-Logical_Switch_Port table of the OVN Northbound Database. Neutron
-responds back with an IP address and MAC address for that network
-interface. So Neutron becomes the IPAM system and provides unique IP
-and MAC addresses across VMs and containers in the same logical network.
-
-* The Neutron API call above to create a logical port for the container
-could add a relatively significant amount of time to container creation.
-However, an optimization is possible here. Logical ports could be
-created in advance and reused by the container system doing container
-orchestration. Additional Neutron API calls would only be needed if the
-port needs to be attached to a different logical network.
-
-* When a container is eventually deleted, the network plugin in that VM
-may make a call to Neutron to delete that port. Neutron in turn will
-delete the entry in the Logical_Switch_Port table of the OVN Northbound
-Database.
-
-As an example, consider Docker containers. Since Docker currently does not
-have a network plugin feature, this example uses a hypothetical wrapper
-around Docker to make calls to Neutron.
-
-* Create a Logical switch, e.g.:
-
-```
-% ovn-docker --cred=cca86bd13a564ac2a63ddf14bf45d37f create network LS1
-```
-
-The above command will make a call to Neutron with the credentials to create
-a logical switch. The above is optional if the logical switch has already
-been created from outside the VM.
-
-* List networks available to the tenant.
-
-```
-% ovn-docker --cred=cca86bd13a564ac2a63ddf14bf45d37f list networks
-```
-
-* Create a container and attach a interface to the previously created switch
-as a logical port.
-
-```
-% ovn-docker --cred=cca86bd13a564ac2a63ddf14bf45d37f --vif-id=$VIF_ID \
---network=LS1 run -d --net=none ubuntu:14.04 /bin/sh -c \
-"while true; do echo hello world; sleep 1; done"
-```
-
-The above command will make a call to Neutron with all the inputs it currently
-needs to create a logical port. In addition, it passes the $VIF_ID and a
-unused VLAN. Neutron will add that information in OVN and return back
-a MAC address and IP address for that interface. ovn-docker will then create
-a veth pair, insert one end inside the container as 'eth0' and the other end
-as a port of a local OVS bridge as an access port of the chosen VLAN.
-
-[IntegrationGuide.rst]:IntegrationGuide.rst
diff --git a/ovn/CONTAINERS.OpenStack.rst b/ovn/CONTAINERS.OpenStack.rst
new file mode 100644
index 0000000..2934d1c
--- /dev/null
+++ b/ovn/CONTAINERS.OpenStack.rst
@@ -0,0 +1,135 @@
+..
+      Licensed under the Apache License, Version 2.0 (the "License"); you may
+      not use this file except in compliance with the License. You may obtain
+      a copy of the License at
+
+          http://www.apache.org/licenses/LICENSE-2.0
+
+      Unless required by applicable law or agreed to in writing, software
+      distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+      WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+      License for the specific language governing permissions and limitations
+      under the License.
+
+      Convention for heading levels in Open vSwitch documentation:
+
+      =======  Heading 0 (reserved for the title in a document)
+      -------  Heading 1
+      ~~~~~~~  Heading 2
+      +++++++  Heading 3
+      '''''''  Heading 4
+
+      Avoid deeper levels because they do not render well.
+
+================================================
+Integration of Containers with OVN and OpenStack
+================================================
+
+Isolation between containers is weaker than isolation between VMs, so some
+environments deploy containers for different tenants in separate VMs as an
+additional security measure. This document describes creation of containers
+inside VMs and how they can be made part of the logical networks securely. The
+created logical network can include VMs, containers and physical machines as
+endpoints. To better understand the proposed integration of containers with
+OVN and OpenStack, this document describes the end-to-end workflow with an
+example.
+
+* An OpenStack tenant creates a VM (say VM-A) with a single network interface
+  that belongs to a management logical network. The VM is meant to host
+  containers. OpenStack Nova chooses the hypervisor on which VM-A is created.
+
+* A Neutron port may have been created in advance and passed to Nova with the
+  request to create a new VM. If not, Nova will issue a request to Neutron
+  to create a new port. The ID of the logical port from Neutron will also be
+  used as the vif-id for the virtual network interface (VIF) of VM-A.
+
+* When VM-A is created on a hypervisor, its VIF gets added to the Open vSwitch
+  integration bridge. This creates a row in the Interface table of the
+  ``Open_vSwitch`` database. As explained in the `integration guide
+  <IntegrationGuide.rst>`__, the vif-id associated with the VM network
+  interface gets added in the ``external_ids:iface-id`` column of the newly
+  created row in the Interface table.
+
+* Since VM-A belongs to a logical network, it gets an IP address. This IP
+  address is used to spawn containers (either manually or through container
+  orchestration systems) inside that VM and to monitor the health of the
+  created containers.
+
+* The vif-id associated with the VM's network interface can be obtained by
+  making a call to Neutron using tenant credentials.
+
+* This flow assumes a component called a "container network plugin". If you
+  take Docker as an example for containers, you could envision the plugin to be
+  either a wrapper around Docker or a feature of Docker itself that understands
+  how to perform part of this workflow to get a container connected to a
+  logical network managed by Neutron. The rest of the flow refers to this
+  logical component that does not yet exist as the "container network plugin".
+
+* All the calls to Neutron will need tenant credentials. These calls can
+  either be made from inside the tenant VM as part of a container network
+  plugin or from outside the tenant VM (if the tenant is not comfortable using
+  temporary Keystone tokens from inside the tenant VMs). For simplicity, this
+  document explains the workflow using the former method.
+
+* The container hosting VM will need Open vSwitch installed in it. The only
+  work for Open vSwitch inside the VM is to tag network traffic coming from
+  containers.
+
+* When a container needs to be created inside the VM with a container network
+  interface that is expected to be attached to a particular logical switch, the
+  network plugin in that VM chooses any unused VLAN. (This VLAN tag only needs
+  to be unique inside that VM, which limits the number of container interfaces
+  to 4096 inside a single VM.) This VLAN tag is stripped out in the hypervisor
+  by OVN and is only useful as a context (or metadata) for OVN.
+
+* The container network plugin then makes a call to Neutron to create a logical
+  port. In addition to all the inputs currently needed by a call to create a
+  port in Neutron, it sends the vif-id and the VLAN tag as inputs.
+
+* Neutron in turn will verify that the vif-id belongs to the tenant in question
+  and then use the OVN-specific plugin to create a new row in the
+  Logical_Switch_Port table of the OVN Northbound Database. Neutron responds
+  with an IP address and MAC address for that network interface. Neutron thus
+  becomes the IPAM system and provides unique IP and MAC addresses across VMs
+  and containers in the same logical network.
+
+* The Neutron API call above to create a logical port for the container could
+  add a relatively significant amount of time to container creation. However,
+  an optimization is possible here. Logical ports could be created in advance
+  and reused by the container system doing container orchestration. Additional
+  Neutron API calls would only be needed if the port needs to be attached to a
+  different logical network.
+
+* When a container is eventually deleted, the network plugin in that VM may
+  make a call to Neutron to delete that port. Neutron in turn will delete the
+  entry in the Logical_Switch_Port table of the OVN Northbound Database.
+
+As an example, consider Docker containers. Since Docker currently does not
+have a network plugin feature, this example uses a hypothetical wrapper around
+Docker to make calls to Neutron.
+
+* Create a logical switch::
+
+      $ ovn-docker --cred=cca86bd13a564ac2a63ddf14bf45d37f create network LS1
+
+  The above command will make a call to Neutron with the credentials to create
+  a logical switch. This step is optional if the logical switch has already
+  been created from outside the VM.
+
+* List networks available to the tenant::
+
+      $ ovn-docker --cred=cca86bd13a564ac2a63ddf14bf45d37f list networks
+
+* Create a container and attach an interface to the previously created switch
+  as a logical port::
+
+      $ ovn-docker --cred=cca86bd13a564ac2a63ddf14bf45d37f --vif-id=$VIF_ID \
+          --network=LS1 run -d --net=none ubuntu:14.04 /bin/sh -c \
+          "while true; do echo hello world; sleep 1; done"
+
+  The above command will make a call to Neutron with all the inputs it
+  currently needs to create a logical port. In addition, it passes the $VIF_ID
+  and an unused VLAN. Neutron will add that information in OVN and return
+  a MAC address and IP address for that interface. ovn-docker will then create
+  a veth pair, insert one end inside the container as 'eth0', and add the
+  other end to a local OVS bridge as an access port of the chosen VLAN.
diff --git a/ovn/automake.mk b/ovn/automake.mk
index f3f40e5..5cc86cd 100644
--- a/ovn/automake.mk
+++ b/ovn/automake.mk
@@ -72,7 +72,7 @@ DISTCLEANFILES += ovn/ovn-architecture.7
 EXTRA_DIST += \
 	ovn/TODO \
-	ovn/CONTAINERS.OpenStack.md \
+	ovn/CONTAINERS.OpenStack.rst \
 	ovn/OVN-GW-HA.md
 
 # Version checking for ovn-nb.ovsschema.
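[Editor's illustration, not part of the patch.] The per-VM VLAN bookkeeping the converted document describes (each container interface gets a tag that only needs to be unused inside that one VM, which bounds the number of container interfaces a single VM can host) can be sketched roughly as follows. The class name and the exact tag range are assumptions for illustration; the real plugin would also program the OVS access port with the chosen tag.

```python
# Minimal sketch of per-VM VLAN tag allocation, as described in the workflow:
# tags are only locally significant (the hypervisor strips them), so each VM
# keeps its own pool. The document rounds the resulting per-VM limit to 4096;
# here we use the usable 802.1Q range 1-4094 (an assumption).

class VlanAllocator:
    """Tracks locally-significant VLAN tags inside one container-hosting VM."""

    MIN_TAG = 1
    MAX_TAG = 4094

    def __init__(self):
        self._in_use = set()

    def allocate(self):
        """Return any unused tag; raise once the VM has no tags left."""
        for tag in range(self.MIN_TAG, self.MAX_TAG + 1):
            if tag not in self._in_use:
                self._in_use.add(tag)
                return tag
        raise RuntimeError("no free VLAN tags left in this VM")

    def release(self, tag):
        """Free a tag when its container interface is deleted."""
        self._in_use.discard(tag)


alloc = VlanAllocator()
first = alloc.allocate()
second = alloc.allocate()
assert first != second      # tags are unique within this VM only
alloc.release(first)
reused = alloc.allocate()   # freed tags can be handed out again
```

Because OVN strips the tag at the hypervisor and uses it purely as context to map traffic to the right logical port, two different VMs may safely pick the same tag; only intra-VM uniqueness matters.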