From patchwork Tue Jun 11 15:22:34 2024
X-Patchwork-Submitter: Felix Huettner
X-Patchwork-Id: 1946397
Date: Tue, 11 Jun 2024 17:22:34 +0200
To: dev@openvswitch.org
Message-ID: <8e1cc6cc5987a992bbd8bc08cb99c3c8e7edbf7e.1718090635.git.felix.huettner@mail.schwarz>
Subject: [ovs-dev] [PATCH ovn v4 1/2] northd: Handle routing for other address families.
From: Felix Huettner
Reply-To: Felix Huettner

In most cases IPv4 packets are routed only over other IPv4 networks, and
IPv6 packets are routed only over IPv6 networks. However, there is no
inherent reason for this limitation. Routing IPv4 packets over an IPv6
network just requires the router to contain a route for an IPv4 network
with an IPv6 nexthop.

This was previously prevented in OVN by checks in ovn-nbctl and northd.
With these filters removed, forwarding works as long as the nexthop MAC
addresses are prepopulated. If the MAC addresses are not prepopulated,
we attempt to resolve them using the original address family of the
packet and not the address family of the nexthop; this fails and the
packet is not forwarded.
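As a minimal illustration (a hypothetical router lr0; the same pattern is
exercised by the lr-route-add and end-to-end tests added below), such
cross-family routes are configured like any other static route:

  # Reach an IPv4 prefix via an IPv6 nexthop.
  ovn-nbctl lr-route-add lr0 11.0.0.0/24 2001:db8::10
  # Reach an IPv6 prefix via an IPv4 nexthop.
  ovn-nbctl lr-route-add lr0 2001:db8:1::/64 192.168.0.20

Since dynamic neighbor resolution does not work across address families,
the nexthop MAC addresses must already be known, i.e.
dynamic_neigh_routers must be false (see the NEWS entry below).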
This feature can for example be used by service providers to
interconnect multiple IPv4 networks of a customer without needing to
negotiate free IPv4 addresses by just using any IPv6 address.

Signed-off-by: Felix Huettner
---
v3->v4:
  - additional tests
  - add additional regbit for nexthop address family
v2->v3: fix uninitialized variable
v1->v2:
  - move ipv4 info to parsed_route
  - add tests for lr-route-add
  - switch tests to use fmt_pkt
  - some minor test cleanups

 NEWS                  |   4 +
 northd/northd.c       |  83 +++---
 tests/ovn-nbctl.at    |  26 +-
 tests/ovn-northd.at   | 128 ++++++---
 tests/ovn.at          | 645 ++++++++++++++++++++++++++++++++++++++++++
 utilities/ovn-nbctl.c |  12 +-
 6 files changed, 818 insertions(+), 80 deletions(-)

base-commit: 7abae8142841ba3083e08678f5e5d01039dcc428

diff --git a/NEWS b/NEWS
index 81c958f9a..7b53b8aa7 100644
--- a/NEWS
+++ b/NEWS
@@ -21,6 +21,10 @@ Post v24.03.0
     MAC addresses configured on the LSP with "unknown", are learnt via the
     OVN native FDB.
   - Add support for ovsdb-server `--config-file` option in ovn-ctl.
+  - Allow Static Routes where the address families of ip_prefix and nexthop
+    diverge (e.g. IPv4 packets over IPv6 links). This is currently limited to
+    nexthops that have their mac addresses prepopulated (so
+    dynamic_neigh_routers must be false).
 
 OVN v24.03.0 - 01 Mar 2024
 --------------------------
diff --git a/northd/northd.c b/northd/northd.c
index c8a5efa01..60fe2a3d3 100644
--- a/northd/northd.c
+++ b/northd/northd.c
@@ -155,6 +155,7 @@ static bool default_acl_drop;
 #define REGBIT_KNOWN_LB_SESSION "reg9[6]"
 #define REGBIT_DHCP_RELAY_REQ_CHK "reg9[7]"
 #define REGBIT_DHCP_RELAY_RESP_CHK "reg9[8]"
+#define REGBIT_NEXTHOP_IS_IPV4 "reg9[9]"
 
 /* Register to store the eth address associated to a router port for packets
  * received in S_ROUTER_IN_ADMISSION.
@@ -264,7 +265,8 @@ static bool default_acl_drop;
 * | | LOOKUP_NEIGHBOR_RESULT/ | | |
 * | | SKIP_LOOKUP_NEIGHBOR/ | | |
 * | |REGBIT_DHCP_RELAY_REQ_CHK/ | | |
- * | |REGBIT_DHCP_RELAY_RESP_CHK}| | |
+ * | |REGBIT_DHCP_RELAY_RESP_CHK | | |
+ * | |REGBIT_NEXTHOP_IS_IPV4} | | |
 * | | | | |
 * | | REG_ORIG_TP_DPORT_ROUTER | | |
 * | | | | |
@@ -10100,13 +10102,15 @@ build_routing_policy_flow(struct lflow_table *lflows, struct ovn_datapath *od,
                       "outport = %s; "
                       "flags.loopback = 1; "
                       REG_ECMP_GROUP_ID" = 0; "
+                      REGBIT_NEXTHOP_IS_IPV4" = %d; "
                       "next;",
                       is_ipv4 ? REG_NEXT_HOP_IPV4 : REG_NEXT_HOP_IPV6,
                       nexthop,
                       is_ipv4 ? REG_SRC_IPV4 : REG_SRC_IPV6,
                       lrp_addr_s,
                       out_port->lrp_networks.ea_s,
-                      out_port->json_key);
+                      out_port->json_key,
+                      is_ipv4);
 
     } else if (!strcmp(rule->action, "drop")) {
         ds_put_cstr(&actions, debug_drop_action());
@@ -10201,13 +10205,15 @@ build_ecmp_routing_policy_flows(struct lflow_table *lflows,
                       "eth.src = %s; "
                       "outport = %s; "
                       "flags.loopback = 1; "
+                      REGBIT_NEXTHOP_IS_IPV4" = %d; "
                       "next;",
                       is_ipv4 ? REG_NEXT_HOP_IPV4 : REG_NEXT_HOP_IPV6,
                       rule->nexthops[i],
                       is_ipv4 ? REG_SRC_IPV4 : REG_SRC_IPV6,
                       lrp_addr_s,
                       out_port->lrp_networks.ea_s,
-                      out_port->json_key);
+                      out_port->json_key,
+                      is_ipv4);
 
         ds_clear(&match);
         ds_put_format(&match, REG_ECMP_GROUP_ID" == %"PRIu16" && "
@@ -10325,6 +10331,8 @@ struct parsed_route {
     const struct nbrec_logical_router_static_route *route;
     bool ecmp_symmetric_reply;
     bool is_discard_route;
+    bool is_ipv4_prefix;
+    bool is_ipv4_nexthop;
 };
 
 static uint32_t
@@ -10350,6 +10358,8 @@ parsed_routes_add(struct ovn_datapath *od, const struct hmap *lr_ports,
     /* Verify that the next hop is an IP address with an all-ones mask.
*/ struct in6_addr nexthop; unsigned int plen; + bool is_ipv4_nexthop = true; + bool is_ipv4_prefix; bool is_discard_route = !strcmp(route->nexthop, "discard"); bool valid_nexthop = route->nexthop[0] && !is_discard_route; if (valid_nexthop) { @@ -10368,6 +10378,7 @@ parsed_routes_add(struct ovn_datapath *od, const struct hmap *lr_ports, UUID_ARGS(&route->header_.uuid)); return NULL; } + is_ipv4_nexthop = IN6_IS_ADDR_V4MAPPED(&nexthop); } /* Parse ip_prefix */ @@ -10379,18 +10390,7 @@ parsed_routes_add(struct ovn_datapath *od, const struct hmap *lr_ports, UUID_ARGS(&route->header_.uuid)); return NULL; } - - /* Verify that ip_prefix and nexthop have same address familiy. */ - if (valid_nexthop) { - if (IN6_IS_ADDR_V4MAPPED(&prefix) != IN6_IS_ADDR_V4MAPPED(&nexthop)) { - static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(5, 1); - VLOG_WARN_RL(&rl, "Address family doesn't match between 'ip_prefix'" - " %s and 'nexthop' %s in static route "UUID_FMT, - route->ip_prefix, route->nexthop, - UUID_ARGS(&route->header_.uuid)); - return NULL; - } - } + is_ipv4_prefix = IN6_IS_ADDR_V4MAPPED(&prefix); /* Verify that ip_prefix and nexthop are on the same network. */ if (!is_discard_route && @@ -10433,6 +10433,8 @@ parsed_routes_add(struct ovn_datapath *od, const struct hmap *lr_ports, pr->ecmp_symmetric_reply = smap_get_bool(&route->options, "ecmp_symmetric_reply", false); pr->is_discard_route = is_discard_route; + pr->is_ipv4_prefix = is_ipv4_prefix; + pr->is_ipv4_nexthop = is_ipv4_nexthop; ovs_list_insert(routes, &pr->list_node); return pr; } @@ -10808,7 +10810,7 @@ build_ecmp_route_flow(struct lflow_table *lflows, struct ovn_datapath *od, struct lflow_ref *lflow_ref) { - bool is_ipv4 = IN6_IS_ADDR_V4MAPPED(&eg->prefix); + bool is_ipv4_prefix = IN6_IS_ADDR_V4MAPPED(&eg->prefix); uint16_t priority; struct ecmp_route_list_node *er; struct ds route_match = DS_EMPTY_INITIALIZER; @@ -10817,7 +10819,8 @@ build_ecmp_route_flow(struct lflow_table *lflows, struct ovn_datapath *od, int ofs = !strcmp(eg->origin, ROUTE_ORIGIN_CONNECTED) ? ROUTE_PRIO_OFFSET_CONNECTED: ROUTE_PRIO_OFFSET_STATIC; build_route_match(NULL, eg->route_table_id, prefix_s, eg->plen, - eg->is_src_route, is_ipv4, &route_match, &priority, ofs); + eg->is_src_route, is_ipv4_prefix, &route_match, + &priority, ofs); free(prefix_s); struct ds actions = DS_EMPTY_INITIALIZER; @@ -10850,7 +10853,8 @@ build_ecmp_route_flow(struct lflow_table *lflows, struct ovn_datapath *od, /* Find the outgoing port. */ const char *lrp_addr_s = NULL; struct ovn_port *out_port = NULL; - if (!find_static_route_outport(od, lr_ports, route, is_ipv4, + if (!find_static_route_outport(od, lr_ports, route, + route_->is_ipv4_nexthop, &lrp_addr_s, &out_port)) { continue; } @@ -10874,13 +10878,16 @@ build_ecmp_route_flow(struct lflow_table *lflows, struct ovn_datapath *od, "%s = %s; " "eth.src = %s; " "outport = %s; " + REGBIT_NEXTHOP_IS_IPV4" = %d; " "next;", - is_ipv4 ? REG_NEXT_HOP_IPV4 : REG_NEXT_HOP_IPV6, + route_->is_ipv4_nexthop ? + REG_NEXT_HOP_IPV4 : REG_NEXT_HOP_IPV6, route->nexthop, - is_ipv4 ? REG_SRC_IPV4 : REG_SRC_IPV6, + route_->is_ipv4_nexthop ? 
REG_SRC_IPV4 : REG_SRC_IPV6, lrp_addr_s, out_port->lrp_networks.ea_s, - out_port->json_key); + out_port->json_key, + route_->is_ipv4_nexthop); ovn_lflow_add_with_hint(lflows, od, S_ROUTER_IN_IP_ROUTING_ECMP, 100, ds_cstr(&match), ds_cstr(&actions), &route->header_, lflow_ref); @@ -10897,15 +10904,15 @@ add_route(struct lflow_table *lflows, struct ovn_datapath *od, const char *network_s, int plen, const char *gateway, bool is_src_route, const uint32_t rtb_id, const struct ovsdb_idl_row *stage_hint, bool is_discard_route, - int ofs, struct lflow_ref *lflow_ref) + int ofs, struct lflow_ref *lflow_ref, + bool is_ipv4_prefix, bool is_ipv4_nexthop) { - bool is_ipv4 = strchr(network_s, '.') ? true : false; struct ds match = DS_EMPTY_INITIALIZER; uint16_t priority; const struct ovn_port *op_inport = NULL; /* IPv6 link-local addresses must be scoped to the local router port. */ - if (!is_ipv4) { + if (!is_ipv4_prefix) { struct in6_addr network; ovs_assert(ipv6_parse(network_s, &network)); if (in6_is_lla(&network)) { @@ -10913,7 +10920,7 @@ add_route(struct lflow_table *lflows, struct ovn_datapath *od, } } build_route_match(op_inport, rtb_id, network_s, plen, is_src_route, - is_ipv4, &match, &priority, ofs); + is_ipv4_prefix, &match, &priority, ofs); struct ds common_actions = DS_EMPTY_INITIALIZER; struct ds actions = DS_EMPTY_INITIALIZER; @@ -10921,22 +10928,25 @@ add_route(struct lflow_table *lflows, struct ovn_datapath *od, ds_put_cstr(&actions, debug_drop_action()); } else { ds_put_format(&common_actions, REG_ECMP_GROUP_ID" = 0; %s = ", - is_ipv4 ? REG_NEXT_HOP_IPV4 : REG_NEXT_HOP_IPV6); + is_ipv4_nexthop ? REG_NEXT_HOP_IPV4 : REG_NEXT_HOP_IPV6); if (gateway && gateway[0]) { ds_put_cstr(&common_actions, gateway); } else { - ds_put_format(&common_actions, "ip%s.dst", is_ipv4 ? "4" : "6"); + ds_put_format(&common_actions, "ip%s.dst", + is_ipv4_prefix ? "4" : "6"); } ds_put_format(&common_actions, "; " "%s = %s; " "eth.src = %s; " "outport = %s; " "flags.loopback = 1; " + REGBIT_NEXTHOP_IS_IPV4" = %d; " "next;", - is_ipv4 ? REG_SRC_IPV4 : REG_SRC_IPV6, + is_ipv4_nexthop ? REG_SRC_IPV4 : REG_SRC_IPV6, lrp_addr_s, op->lrp_networks.ea_s, - op->json_key); + op->json_key, + is_ipv4_nexthop); ds_put_format(&actions, "ip.ttl--; %s", ds_cstr(&common_actions)); } @@ -10985,7 +10995,8 @@ build_static_route_flow(struct lflow_table *lflows, struct ovn_datapath *od, add_route(lflows, route_->is_discard_route ? 
od : out_port->od, out_port, lrp_addr_s, prefix_s, route_->plen, route->nexthop, route_->is_src_route, route_->route_table_id, &route->header_, - route_->is_discard_route, ofs, lflow_ref); + route_->is_discard_route, ofs, lflow_ref, + route_->is_ipv4_prefix, route_->is_ipv4_nexthop); free(prefix_s); } @@ -12707,7 +12718,7 @@ build_ip_routing_flows_for_lrp( op->lrp_networks.ipv4_addrs[i].network_s, op->lrp_networks.ipv4_addrs[i].plen, NULL, false, 0, &op->nbrp->header_, false, ROUTE_PRIO_OFFSET_CONNECTED, - lflow_ref); + lflow_ref, true, true); } for (int i = 0; i < op->lrp_networks.n_ipv6_addrs; i++) { @@ -12715,7 +12726,7 @@ build_ip_routing_flows_for_lrp( op->lrp_networks.ipv6_addrs[i].network_s, op->lrp_networks.ipv6_addrs[i].plen, NULL, false, 0, &op->nbrp->header_, false, ROUTE_PRIO_OFFSET_CONNECTED, - lflow_ref); + lflow_ref, false, false); } } @@ -12951,11 +12962,13 @@ build_arp_resolve_flows_for_lrouter( "ip4.mcast || ip6.mcast", "next;", lflow_ref); - ovn_lflow_add(lflows, od, S_ROUTER_IN_ARP_RESOLVE, 1, "ip4", + ovn_lflow_add(lflows, od, S_ROUTER_IN_ARP_RESOLVE, 1, + REGBIT_NEXTHOP_IS_IPV4 " == 1", "get_arp(outport, " REG_NEXT_HOP_IPV4 "); next;", lflow_ref); - ovn_lflow_add(lflows, od, S_ROUTER_IN_ARP_RESOLVE, 1, "ip6", + ovn_lflow_add(lflows, od, S_ROUTER_IN_ARP_RESOLVE, 1, + REGBIT_NEXTHOP_IS_IPV4 " == 0", "get_nd(outport, " REG_NEXT_HOP_IPV6 "); next;", lflow_ref); @@ -15695,7 +15708,7 @@ build_routable_flows_for_router_port( laddrs->ipv4_addrs[k].plen, NULL, false, 0, &router_port->nbrp->header_, false, ROUTE_PRIO_OFFSET_CONNECTED, - lrp->stateful_lflow_ref); + lrp->stateful_lflow_ref, true, true); } } } diff --git a/tests/ovn-nbctl.at b/tests/ovn-nbctl.at index 5248e6c76..4a219ab61 100644 --- a/tests/ovn-nbctl.at +++ b/tests/ovn-nbctl.at @@ -1757,7 +1757,7 @@ AT_CHECK([ovn-nbctl lr-route-add lr0 10.0.0.1/24 11.0.0.2]) AT_CHECK([ovn-nbctl lr-route-add lr0 10.0.10.0/24 lp0]) AT_CHECK([ovn-nbctl --bfd lr-route-add lr0 10.0.20.0/24 11.0.2.1 lp0]) AT_CHECK([ovn-nbctl lr-route-add lr0 10.0.10.0/24 lp1], [1], [], - [ovn-nbctl: bad IPv4 nexthop argument: lp1 + [ovn-nbctl: bad nexthop argument: lp1 ]) dnl Add overlapping route with 10.0.0.1/24 @@ -1771,13 +1771,13 @@ AT_CHECK([ovn-nbctl lr-route-add lr0 10.0.0.111/24a 11.0.0.1], [1], [], [ovn-nbctl: bad prefix argument: 10.0.0.111/24a ]) AT_CHECK([ovn-nbctl lr-route-add lr0 10.0.0.111/24 11.0.0.1a], [1], [], - [ovn-nbctl: bad IPv4 nexthop argument: 11.0.0.1a + [ovn-nbctl: bad nexthop argument: 11.0.0.1a ]) AT_CHECK([ovn-nbctl lr-route-add lr0 10.0.0.111/24 11.0.0.1/24], [1], [], - [ovn-nbctl: bad IPv4 nexthop argument: 11.0.0.1/24 + [ovn-nbctl: bad nexthop argument: 11.0.0.1/24 ]) AT_CHECK([ovn-nbctl lr-route-add lr0 2001:0db8:1::/64 2001:0db8:0:f103::1/64], [1], [], - [ovn-nbctl: bad IPv6 nexthop argument: 2001:0db8:0:f103::1/64 + [ovn-nbctl: bad nexthop argument: 2001:0db8:0:f103::1/64 ]) AT_CHECK([ovn-nbctl --ecmp lr-route-add lr0 20.0.0.0/24 discard], [1], [], [ovn-nbctl: ecmp is not valid for discard routes. @@ -2005,6 +2005,24 @@ check ovn-nbctl lr-route-del lr0 AT_CHECK([ovn-nbctl lr-route-list lr0], [0], [dnl ]) +dnl Check IPv4 over v6 and IPv6 over v4 routes +AT_CHECK([ovn-nbctl lr-route-add lr0 10.0.0.1/24 2001:0db8:0:f103::10]) +AT_CHECK([ovn-nbctl lr-route-add lr0 2001:0db8:0::/64 11.0.1.10]) + +AT_CHECK([ovn-nbctl lr-route-list lr0], [0], [dnl +IPv4 Routes +Route Table
: + 10.0.0.0/24 2001:db8:0:f103::10 dst-ip + +IPv6 Routes +Route Table
: + 2001:db8::/64 11.0.1.10 dst-ip +]) + +check ovn-nbctl lr-route-del lr0 +AT_CHECK([ovn-nbctl lr-route-list lr0], [0], [dnl +]) + dnl Check IPv4 routes in route table check ovn-nbctl --route-table=rtb-1 lr-route-add lr0 0.0.0.0/0 192.168.0.1 check ovn-nbctl --route-table=rtb-1 lr-route-add lr0 10.0.1.1/24 11.0.1.1 lp0 diff --git a/tests/ovn-northd.at b/tests/ovn-northd.at index 47416ad49..64da386bf 100644 --- a/tests/ovn-northd.at +++ b/tests/ovn-northd.at @@ -3377,8 +3377,8 @@ AT_CHECK([grep "lr_in_policy" lr0flows3 | ovn_strip_lflows], [0], [dnl table=??(lr_in_policy ), priority=0 , match=(1), action=(reg8[[0..15]] = 0; next;) table=??(lr_in_policy ), priority=10 , match=(ip4.src == 10.0.0.3), action=(reg8[[0..15]] = 1; reg8[[16..31]] = select(1, 2);) table=??(lr_in_policy_ecmp ), priority=0 , match=(1), action=(drop;) - table=??(lr_in_policy_ecmp ), priority=100 , match=(reg8[[0..15]] == 1 && reg8[[16..31]] == 1), action=(reg0 = 172.168.0.101; reg1 = 172.168.0.100; eth.src = 00:00:20:20:12:13; outport = "lr0-public"; flags.loopback = 1; next;) - table=??(lr_in_policy_ecmp ), priority=100 , match=(reg8[[0..15]] == 1 && reg8[[16..31]] == 2), action=(reg0 = 172.168.0.102; reg1 = 172.168.0.100; eth.src = 00:00:20:20:12:13; outport = "lr0-public"; flags.loopback = 1; next;) + table=??(lr_in_policy_ecmp ), priority=100 , match=(reg8[[0..15]] == 1 && reg8[[16..31]] == 1), action=(reg0 = 172.168.0.101; reg1 = 172.168.0.100; eth.src = 00:00:20:20:12:13; outport = "lr0-public"; flags.loopback = 1; reg9[[9]] = 1; next;) + table=??(lr_in_policy_ecmp ), priority=100 , match=(reg8[[0..15]] == 1 && reg8[[16..31]] == 2), action=(reg0 = 172.168.0.102; reg1 = 172.168.0.100; eth.src = 00:00:20:20:12:13; outport = "lr0-public"; flags.loopback = 1; reg9[[9]] = 1; next;) table=??(lr_in_policy_ecmp ), priority=150 , match=(reg8[[0..15]] == 0), action=(next;) ]) @@ -3393,11 +3393,11 @@ sed 's/reg8\[[0..15\]] == [[0-9]]*/reg8\[[0..15\]] == /' | ovn_strip_lf table=??(lr_in_policy ), priority=10 , match=(ip4.src == 10.0.0.3), action=(reg8[[0..15]] = ; reg8[[16..31]] = select(1, 2);) table=??(lr_in_policy ), priority=10 , match=(ip4.src == 10.0.0.4), action=(reg8[[0..15]] = ; reg8[[16..31]] = select(1, 2, 3);) table=??(lr_in_policy_ecmp ), priority=0 , match=(1), action=(drop;) - table=??(lr_in_policy_ecmp ), priority=100 , match=(reg8[[0..15]] == && reg8[[16..31]] == 1), action=(reg0 = 172.168.0.101; reg1 = 172.168.0.100; eth.src = 00:00:20:20:12:13; outport = "lr0-public"; flags.loopback = 1; next;) - table=??(lr_in_policy_ecmp ), priority=100 , match=(reg8[[0..15]] == && reg8[[16..31]] == 1), action=(reg0 = 172.168.0.101; reg1 = 172.168.0.100; eth.src = 00:00:20:20:12:13; outport = "lr0-public"; flags.loopback = 1; next;) - table=??(lr_in_policy_ecmp ), priority=100 , match=(reg8[[0..15]] == && reg8[[16..31]] == 2), action=(reg0 = 172.168.0.102; reg1 = 172.168.0.100; eth.src = 00:00:20:20:12:13; outport = "lr0-public"; flags.loopback = 1; next;) - table=??(lr_in_policy_ecmp ), priority=100 , match=(reg8[[0..15]] == && reg8[[16..31]] == 2), action=(reg0 = 172.168.0.102; reg1 = 172.168.0.100; eth.src = 00:00:20:20:12:13; outport = "lr0-public"; flags.loopback = 1; next;) - table=??(lr_in_policy_ecmp ), priority=100 , match=(reg8[[0..15]] == && reg8[[16..31]] == 3), action=(reg0 = 172.168.0.103; reg1 = 172.168.0.100; eth.src = 00:00:20:20:12:13; outport = "lr0-public"; flags.loopback = 1; next;) + table=??(lr_in_policy_ecmp ), priority=100 , match=(reg8[[0..15]] == && reg8[[16..31]] == 1), action=(reg0 = 
172.168.0.101; reg1 = 172.168.0.100; eth.src = 00:00:20:20:12:13; outport = "lr0-public"; flags.loopback = 1; reg9[[9]] = 1; next;) + table=??(lr_in_policy_ecmp ), priority=100 , match=(reg8[[0..15]] == && reg8[[16..31]] == 1), action=(reg0 = 172.168.0.101; reg1 = 172.168.0.100; eth.src = 00:00:20:20:12:13; outport = "lr0-public"; flags.loopback = 1; reg9[[9]] = 1; next;) + table=??(lr_in_policy_ecmp ), priority=100 , match=(reg8[[0..15]] == && reg8[[16..31]] == 2), action=(reg0 = 172.168.0.102; reg1 = 172.168.0.100; eth.src = 00:00:20:20:12:13; outport = "lr0-public"; flags.loopback = 1; reg9[[9]] = 1; next;) + table=??(lr_in_policy_ecmp ), priority=100 , match=(reg8[[0..15]] == && reg8[[16..31]] == 2), action=(reg0 = 172.168.0.102; reg1 = 172.168.0.100; eth.src = 00:00:20:20:12:13; outport = "lr0-public"; flags.loopback = 1; reg9[[9]] = 1; next;) + table=??(lr_in_policy_ecmp ), priority=100 , match=(reg8[[0..15]] == && reg8[[16..31]] == 3), action=(reg0 = 172.168.0.103; reg1 = 172.168.0.100; eth.src = 00:00:20:20:12:13; outport = "lr0-public"; flags.loopback = 1; reg9[[9]] = 1; next;) table=??(lr_in_policy_ecmp ), priority=150 , match=(reg8[[0..15]] == ), action=(next;) ]) @@ -3411,13 +3411,13 @@ sed 's/reg8\[[0..15\]] == [[0-9]]*/reg8\[[0..15\]] == /' | ovn_strip_lf table=??(lr_in_policy ), priority=0 , match=(1), action=(reg8[[0..15]] = ; next;) table=??(lr_in_policy ), priority=10 , match=(ip4.src == 10.0.0.3), action=(reg8[[0..15]] = ; reg8[[16..31]] = select(1, 2);) table=??(lr_in_policy ), priority=10 , match=(ip4.src == 10.0.0.4), action=(reg8[[0..15]] = ; reg8[[16..31]] = select(1, 2, 3);) - table=??(lr_in_policy ), priority=10 , match=(ip4.src == 10.0.0.5), action=(reg0 = 172.168.0.110; reg1 = 172.168.0.100; eth.src = 00:00:20:20:12:13; outport = "lr0-public"; flags.loopback = 1; reg8[[0..15]] = ; next;) + table=??(lr_in_policy ), priority=10 , match=(ip4.src == 10.0.0.5), action=(reg0 = 172.168.0.110; reg1 = 172.168.0.100; eth.src = 00:00:20:20:12:13; outport = "lr0-public"; flags.loopback = 1; reg8[[0..15]] = ; reg9[[9]] = 1; next;) table=??(lr_in_policy_ecmp ), priority=0 , match=(1), action=(drop;) - table=??(lr_in_policy_ecmp ), priority=100 , match=(reg8[[0..15]] == && reg8[[16..31]] == 1), action=(reg0 = 172.168.0.101; reg1 = 172.168.0.100; eth.src = 00:00:20:20:12:13; outport = "lr0-public"; flags.loopback = 1; next;) - table=??(lr_in_policy_ecmp ), priority=100 , match=(reg8[[0..15]] == && reg8[[16..31]] == 1), action=(reg0 = 172.168.0.101; reg1 = 172.168.0.100; eth.src = 00:00:20:20:12:13; outport = "lr0-public"; flags.loopback = 1; next;) - table=??(lr_in_policy_ecmp ), priority=100 , match=(reg8[[0..15]] == && reg8[[16..31]] == 2), action=(reg0 = 172.168.0.102; reg1 = 172.168.0.100; eth.src = 00:00:20:20:12:13; outport = "lr0-public"; flags.loopback = 1; next;) - table=??(lr_in_policy_ecmp ), priority=100 , match=(reg8[[0..15]] == && reg8[[16..31]] == 2), action=(reg0 = 172.168.0.102; reg1 = 172.168.0.100; eth.src = 00:00:20:20:12:13; outport = "lr0-public"; flags.loopback = 1; next;) - table=??(lr_in_policy_ecmp ), priority=100 , match=(reg8[[0..15]] == && reg8[[16..31]] == 3), action=(reg0 = 172.168.0.103; reg1 = 172.168.0.100; eth.src = 00:00:20:20:12:13; outport = "lr0-public"; flags.loopback = 1; next;) + table=??(lr_in_policy_ecmp ), priority=100 , match=(reg8[[0..15]] == && reg8[[16..31]] == 1), action=(reg0 = 172.168.0.101; reg1 = 172.168.0.100; eth.src = 00:00:20:20:12:13; outport = "lr0-public"; flags.loopback = 1; reg9[[9]] = 1; next;) + 
table=??(lr_in_policy_ecmp ), priority=100 , match=(reg8[[0..15]] == && reg8[[16..31]] == 1), action=(reg0 = 172.168.0.101; reg1 = 172.168.0.100; eth.src = 00:00:20:20:12:13; outport = "lr0-public"; flags.loopback = 1; reg9[[9]] = 1; next;) + table=??(lr_in_policy_ecmp ), priority=100 , match=(reg8[[0..15]] == && reg8[[16..31]] == 2), action=(reg0 = 172.168.0.102; reg1 = 172.168.0.100; eth.src = 00:00:20:20:12:13; outport = "lr0-public"; flags.loopback = 1; reg9[[9]] = 1; next;) + table=??(lr_in_policy_ecmp ), priority=100 , match=(reg8[[0..15]] == && reg8[[16..31]] == 2), action=(reg0 = 172.168.0.102; reg1 = 172.168.0.100; eth.src = 00:00:20:20:12:13; outport = "lr0-public"; flags.loopback = 1; reg9[[9]] = 1; next;) + table=??(lr_in_policy_ecmp ), priority=100 , match=(reg8[[0..15]] == && reg8[[16..31]] == 3), action=(reg0 = 172.168.0.103; reg1 = 172.168.0.100; eth.src = 00:00:20:20:12:13; outport = "lr0-public"; flags.loopback = 1; reg9[[9]] = 1; next;) table=??(lr_in_policy_ecmp ), priority=150 , match=(reg8[[0..15]] == ), action=(next;) ]) @@ -3430,11 +3430,11 @@ sed 's/reg8\[[0..15\]] = [[0-9]]*/reg8\[[0..15\]] = /' | \ sed 's/reg8\[[0..15\]] == [[0-9]]*/reg8\[[0..15\]] == /' | ovn_strip_lflows], [0], [dnl table=??(lr_in_policy ), priority=0 , match=(1), action=(reg8[[0..15]] = ; next;) table=??(lr_in_policy ), priority=10 , match=(ip4.src == 10.0.0.4), action=(reg8[[0..15]] = ; reg8[[16..31]] = select(1, 2, 3);) - table=??(lr_in_policy ), priority=10 , match=(ip4.src == 10.0.0.5), action=(reg0 = 172.168.0.110; reg1 = 172.168.0.100; eth.src = 00:00:20:20:12:13; outport = "lr0-public"; flags.loopback = 1; reg8[[0..15]] = ; next;) + table=??(lr_in_policy ), priority=10 , match=(ip4.src == 10.0.0.5), action=(reg0 = 172.168.0.110; reg1 = 172.168.0.100; eth.src = 00:00:20:20:12:13; outport = "lr0-public"; flags.loopback = 1; reg8[[0..15]] = ; reg9[[9]] = 1; next;) table=??(lr_in_policy_ecmp ), priority=0 , match=(1), action=(drop;) - table=??(lr_in_policy_ecmp ), priority=100 , match=(reg8[[0..15]] == && reg8[[16..31]] == 1), action=(reg0 = 172.168.0.101; reg1 = 172.168.0.100; eth.src = 00:00:20:20:12:13; outport = "lr0-public"; flags.loopback = 1; next;) - table=??(lr_in_policy_ecmp ), priority=100 , match=(reg8[[0..15]] == && reg8[[16..31]] == 2), action=(reg0 = 172.168.0.102; reg1 = 172.168.0.100; eth.src = 00:00:20:20:12:13; outport = "lr0-public"; flags.loopback = 1; next;) - table=??(lr_in_policy_ecmp ), priority=100 , match=(reg8[[0..15]] == && reg8[[16..31]] == 3), action=(reg0 = 172.168.0.103; reg1 = 172.168.0.100; eth.src = 00:00:20:20:12:13; outport = "lr0-public"; flags.loopback = 1; next;) + table=??(lr_in_policy_ecmp ), priority=100 , match=(reg8[[0..15]] == && reg8[[16..31]] == 1), action=(reg0 = 172.168.0.101; reg1 = 172.168.0.100; eth.src = 00:00:20:20:12:13; outport = "lr0-public"; flags.loopback = 1; reg9[[9]] = 1; next;) + table=??(lr_in_policy_ecmp ), priority=100 , match=(reg8[[0..15]] == && reg8[[16..31]] == 2), action=(reg0 = 172.168.0.102; reg1 = 172.168.0.100; eth.src = 00:00:20:20:12:13; outport = "lr0-public"; flags.loopback = 1; reg9[[9]] = 1; next;) + table=??(lr_in_policy_ecmp ), priority=100 , match=(reg8[[0..15]] == && reg8[[16..31]] == 3), action=(reg0 = 172.168.0.103; reg1 = 172.168.0.100; eth.src = 00:00:20:20:12:13; outport = "lr0-public"; flags.loopback = 1; reg9[[9]] = 1; next;) table=??(lr_in_policy_ecmp ), priority=150 , match=(reg8[[0..15]] == ), action=(next;) ]) @@ -3446,7 +3446,7 @@ AT_CHECK([grep "lr_in_policy" lr0flows3 | \ sed 
's/reg8\[[0..15\]] = [[0-9]]*/reg8\[[0..15\]] = /' | \ sed 's/reg8\[[0..15\]] == [[0-9]]*/reg8\[[0..15\]] == /' | ovn_strip_lflows], [0], [dnl table=??(lr_in_policy ), priority=0 , match=(1), action=(reg8[[0..15]] = ; next;) - table=??(lr_in_policy ), priority=10 , match=(ip4.src == 10.0.0.5), action=(reg0 = 172.168.0.110; reg1 = 172.168.0.100; eth.src = 00:00:20:20:12:13; outport = "lr0-public"; flags.loopback = 1; reg8[[0..15]] = ; next;) + table=??(lr_in_policy ), priority=10 , match=(ip4.src == 10.0.0.5), action=(reg0 = 172.168.0.110; reg1 = 172.168.0.100; eth.src = 00:00:20:20:12:13; outport = "lr0-public"; flags.loopback = 1; reg8[[0..15]] = ; reg9[[9]] = 1; next;) table=??(lr_in_policy_ecmp ), priority=0 , match=(1), action=(drop;) table=??(lr_in_policy_ecmp ), priority=150 , match=(reg8[[0..15]] == ), action=(next;) ]) @@ -6684,8 +6684,8 @@ AT_CHECK([grep -e "lr_in_ip_routing.*select" lr0flows | ovn_strip_lflows], [0], ]) AT_CHECK([grep -e "lr_in_ip_routing_ecmp" lr0flows | sed 's/192\.168\.0\..0/192.168.0.??/' | ovn_strip_lflows], [0], [dnl table=??(lr_in_ip_routing_ecmp), priority=0 , match=(1), action=(drop;) - table=??(lr_in_ip_routing_ecmp), priority=100 , match=(reg8[[0..15]] == 1 && reg8[[16..31]] == 1), action=(reg0 = 192.168.0.??; reg1 = 192.168.0.1; eth.src = 00:00:20:20:12:13; outport = "lr0-public"; next;) - table=??(lr_in_ip_routing_ecmp), priority=100 , match=(reg8[[0..15]] == 1 && reg8[[16..31]] == 2), action=(reg0 = 192.168.0.??; reg1 = 192.168.0.1; eth.src = 00:00:20:20:12:13; outport = "lr0-public"; next;) + table=??(lr_in_ip_routing_ecmp), priority=100 , match=(reg8[[0..15]] == 1 && reg8[[16..31]] == 1), action=(reg0 = 192.168.0.??; reg1 = 192.168.0.1; eth.src = 00:00:20:20:12:13; outport = "lr0-public"; reg9[[9]] = 1; next;) + table=??(lr_in_ip_routing_ecmp), priority=100 , match=(reg8[[0..15]] == 1 && reg8[[16..31]] == 2), action=(reg0 = 192.168.0.??; reg1 = 192.168.0.1; eth.src = 00:00:20:20:12:13; outport = "lr0-public"; reg9[[9]] = 1; next;) table=??(lr_in_ip_routing_ecmp), priority=150 , match=(reg8[[0..15]] == 0), action=(next;) ]) @@ -6720,8 +6720,8 @@ AT_CHECK([grep -e "lr_in_ip_routing.*select" lr0flows | ovn_strip_lflows], [0], ]) AT_CHECK([grep -e "lr_in_ip_routing_ecmp" lr0flows | sed 's/192\.168\.0\..0/192.168.0.??/' | ovn_strip_lflows], [0], [dnl table=??(lr_in_ip_routing_ecmp), priority=0 , match=(1), action=(drop;) - table=??(lr_in_ip_routing_ecmp), priority=100 , match=(reg8[[0..15]] == 1 && reg8[[16..31]] == 1), action=(reg0 = 192.168.0.??; reg1 = 192.168.0.1; eth.src = 00:00:20:20:12:13; outport = "lr0-public"; next;) - table=??(lr_in_ip_routing_ecmp), priority=100 , match=(reg8[[0..15]] == 1 && reg8[[16..31]] == 2), action=(reg0 = 192.168.0.??; reg1 = 192.168.0.1; eth.src = 00:00:20:20:12:13; outport = "lr0-public"; next;) + table=??(lr_in_ip_routing_ecmp), priority=100 , match=(reg8[[0..15]] == 1 && reg8[[16..31]] == 1), action=(reg0 = 192.168.0.??; reg1 = 192.168.0.1; eth.src = 00:00:20:20:12:13; outport = "lr0-public"; reg9[[9]] = 1; next;) + table=??(lr_in_ip_routing_ecmp), priority=100 , match=(reg8[[0..15]] == 1 && reg8[[16..31]] == 2), action=(reg0 = 192.168.0.??; reg1 = 192.168.0.1; eth.src = 00:00:20:20:12:13; outport = "lr0-public"; reg9[[9]] = 1; next;) table=??(lr_in_ip_routing_ecmp), priority=150 , match=(reg8[[0..15]] == 0), action=(next;) ]) @@ -6732,14 +6732,74 @@ check ovn-nbctl --wait=sb lr-route-add lr0 1.0.0.0/24 192.168.0.10 ovn-sbctl dump-flows lr0 > lr0flows AT_CHECK([grep -e "lr_in_ip_routing.*192.168.0.10" lr0flows | 
ovn_strip_lflows], [0], [dnl - table=??(lr_in_ip_routing ), priority=73 , match=(reg7 == 0 && ip4.dst == 1.0.0.0/24), action=(ip.ttl--; reg8[[0..15]] = 0; reg0 = 192.168.0.10; reg1 = 192.168.0.1; eth.src = 00:00:20:20:12:13; outport = "lr0-public"; flags.loopback = 1; next;) + table=??(lr_in_ip_routing ), priority=73 , match=(reg7 == 0 && ip4.dst == 1.0.0.0/24), action=(ip.ttl--; reg8[[0..15]] = 0; reg0 = 192.168.0.10; reg1 = 192.168.0.1; eth.src = 00:00:20:20:12:13; outport = "lr0-public"; flags.loopback = 1; reg9[[9]] = 1; next;) ]) check ovn-nbctl --wait=sb lr-route-add lr0 2.0.0.0/24 lr0-public ovn-sbctl dump-flows lr0 > lr0flows AT_CHECK([grep -e "lr_in_ip_routing.*2.0.0.0" lr0flows | ovn_strip_lflows], [0], [dnl - table=??(lr_in_ip_routing ), priority=73 , match=(reg7 == 0 && ip4.dst == 2.0.0.0/24), action=(ip.ttl--; reg8[[0..15]] = 0; reg0 = ip4.dst; reg1 = 192.168.0.1; eth.src = 00:00:20:20:12:13; outport = "lr0-public"; flags.loopback = 1; next;) + table=??(lr_in_ip_routing ), priority=73 , match=(reg7 == 0 && ip4.dst == 2.0.0.0/24), action=(ip.ttl--; reg8[[0..15]] = 0; reg0 = ip4.dst; reg1 = 192.168.0.1; eth.src = 00:00:20:20:12:13; outport = "lr0-public"; flags.loopback = 1; reg9[[9]] = 1; next;) +]) + +AT_CLEANUP +]) + +OVN_FOR_EACH_NORTHD_NO_HV([ +AT_SETUP([ovn -- static routes multiple address families flows]) +AT_KEYWORDS([static-routes-flows]) +ovn_start + +check ovn-sbctl chassis-add ch1 geneve 127.0.0.1 + +check ovn-nbctl lr-add lr0 +check ovn-nbctl set logical_router lr0 options:chassis=ch1 +check ovn-nbctl ls-add public +check ovn-nbctl ls-add private +check ovn-nbctl lrp-add lr0 lr0-public 00:00:20:20:12:13 192.168.0.1/24 +check ovn-nbctl lsp-add public public-lr0 +check ovn-nbctl lsp-set-type public-lr0 router +check ovn-nbctl lsp-set-addresses public-lr0 router +check ovn-nbctl lsp-set-options public-lr0 router-port=lr0-public + +check ovn-nbctl lrp-add lr0 lr0-private 00:00:20:20:12:14 2001:db8::1/64 +check ovn-nbctl lsp-add private private-lr0 +check ovn-nbctl lsp-set-type private-lr0 router +check ovn-nbctl lsp-set-addresses private-lr0 router +check ovn-nbctl lsp-set-options private-lr0 router-port=lr0-private + +check ovn-nbctl --wait=sb lr-route-add lr0 10.0.0.0/24 192.168.0.10 +check ovn-nbctl --wait=sb lr-route-add lr0 11.0.0.0/24 2001:db8::10 +check ovn-nbctl --wait=sb lr-route-add lr0 2001:db8:1::/64 192.168.0.20 +check ovn-nbctl --wait=sb lr-route-add lr0 2001:db8:2::/64 2001:db8::20 + +ovn-sbctl dump-flows lr0 > lr0flows +AT_CHECK([grep -e "lr_in_ip_routing " lr0flows | ovn_strip_lflows], [0], [dnl + table=??(lr_in_ip_routing ), priority=0 , match=(1), action=(drop;) + table=??(lr_in_ip_routing ), priority=10550, match=(nd_rs || nd_ra), action=(drop;) + table=??(lr_in_ip_routing ), priority=193 , match=(reg7 == 0 && ip6.dst == 2001:db8:1::/64), action=(ip.ttl--; reg8[[0..15]] = 0; reg0 = 192.168.0.20; reg1 = 192.168.0.1; eth.src = 00:00:20:20:12:13; outport = "lr0-public"; flags.loopback = 1; reg9[[9]] = 1; next;) + table=??(lr_in_ip_routing ), priority=193 , match=(reg7 == 0 && ip6.dst == 2001:db8:2::/64), action=(ip.ttl--; reg8[[0..15]] = 0; xxreg0 = 2001:db8::20; xxreg1 = 2001:db8::1; eth.src = 00:00:20:20:12:14; outport = "lr0-private"; flags.loopback = 1; reg9[[9]] = 0; next;) + table=??(lr_in_ip_routing ), priority=194 , match=(inport == "lr0-private" && ip6.dst == fe80::/64), action=(ip.ttl--; reg8[[0..15]] = 0; xxreg0 = ip6.dst; xxreg1 = fe80::200:20ff:fe20:1214; eth.src = 00:00:20:20:12:14; outport = "lr0-private"; flags.loopback = 1; reg9[[9]] = 
0; next;) + table=??(lr_in_ip_routing ), priority=194 , match=(inport == "lr0-public" && ip6.dst == fe80::/64), action=(ip.ttl--; reg8[[0..15]] = 0; xxreg0 = ip6.dst; xxreg1 = fe80::200:20ff:fe20:1213; eth.src = 00:00:20:20:12:13; outport = "lr0-public"; flags.loopback = 1; reg9[[9]] = 0; next;) + table=??(lr_in_ip_routing ), priority=194 , match=(ip6.dst == 2001:db8::/64), action=(ip.ttl--; reg8[[0..15]] = 0; xxreg0 = ip6.dst; xxreg1 = 2001:db8::1; eth.src = 00:00:20:20:12:14; outport = "lr0-private"; flags.loopback = 1; reg9[[9]] = 0; next;) + table=??(lr_in_ip_routing ), priority=73 , match=(reg7 == 0 && ip4.dst == 10.0.0.0/24), action=(ip.ttl--; reg8[[0..15]] = 0; reg0 = 192.168.0.10; reg1 = 192.168.0.1; eth.src = 00:00:20:20:12:13; outport = "lr0-public"; flags.loopback = 1; reg9[[9]] = 1; next;) + table=??(lr_in_ip_routing ), priority=73 , match=(reg7 == 0 && ip4.dst == 11.0.0.0/24), action=(ip.ttl--; reg8[[0..15]] = 0; xxreg0 = 2001:db8::10; xxreg1 = 2001:db8::1; eth.src = 00:00:20:20:12:14; outport = "lr0-private"; flags.loopback = 1; reg9[[9]] = 0; next;) + table=??(lr_in_ip_routing ), priority=74 , match=(ip4.dst == 192.168.0.0/24), action=(ip.ttl--; reg8[[0..15]] = 0; reg0 = ip4.dst; reg1 = 192.168.0.1; eth.src = 00:00:20:20:12:13; outport = "lr0-public"; flags.loopback = 1; reg9[[9]] = 1; next;) +]) + +AT_CHECK([grep -e "lr_in_arp_resolve" lr0flows | ovn_strip_lflows], [0], [dnl + table=??(lr_in_arp_resolve ), priority=0 , match=(1), action=(drop;) + table=??(lr_in_arp_resolve ), priority=1 , match=(reg9[[9]] == 0), action=(get_nd(outport, xxreg0); next;) + table=??(lr_in_arp_resolve ), priority=1 , match=(reg9[[9]] == 1), action=(get_arp(outport, reg0); next;) + table=??(lr_in_arp_resolve ), priority=500 , match=(ip4.mcast || ip6.mcast), action=(next;) +]) + +AT_CHECK([grep -e "lr_in_arp_request" lr0flows | ovn_strip_lflows], [0], [dnl + table=??(lr_in_arp_request ), priority=0 , match=(1), action=(output;) + table=??(lr_in_arp_request ), priority=100 , match=(eth.dst == 00:00:00:00:00:00 && ip4), action=(arp { eth.dst = ff:ff:ff:ff:ff:ff; arp.spa = reg1; arp.tpa = reg0; arp.op = 1; output; }; output;) + table=??(lr_in_arp_request ), priority=100 , match=(eth.dst == 00:00:00:00:00:00 && ip6), action=(nd_ns { nd.target = xxreg0; output; }; output;) + table=??(lr_in_arp_request ), priority=200 , match=(eth.dst == 00:00:00:00:00:00 && ip6 && xxreg0 == 2001:db8::10), action=(nd_ns { eth.dst = 33:33:ff:00:00:10; ip6.dst = ff02::1:ff00:10; nd.target = 2001:db8::10; output; }; output;) + table=??(lr_in_arp_request ), priority=200 , match=(eth.dst == 00:00:00:00:00:00 && ip6 && xxreg0 == 2001:db8::20), action=(nd_ns { eth.dst = 33:33:ff:00:00:20; ip6.dst = ff02::1:ff00:20; nd.target = 2001:db8::20; output; }; output;) ]) AT_CLEANUP @@ -7163,16 +7223,16 @@ AT_CHECK([grep "lr_in_ip_routing_pre" lr0flows | ovn_strip_lflows], [0], [dnl grep -e "(lr_in_ip_routing ).*outport" lr0flows AT_CHECK([grep -e "(lr_in_ip_routing ).*outport" lr0flows | ovn_strip_lflows], [0], [dnl - table=??(lr_in_ip_routing ), priority=1 , match=(reg7 == 0 && ip4.dst == 0.0.0.0/0), action=(ip.ttl--; reg8[[0..15]] = 0; reg0 = 192.168.0.10; reg1 = 192.168.0.1; eth.src = 00:00:00:00:00:01; outport = "lrp0"; flags.loopback = 1; next;) - table=??(lr_in_ip_routing ), priority=1 , match=(reg7 == 2 && ip4.dst == 0.0.0.0/0), action=(ip.ttl--; reg8[[0..15]] = 0; reg0 = 192.168.0.10; reg1 = 192.168.0.1; eth.src = 00:00:00:00:00:01; outport = "lrp0"; flags.loopback = 1; next;) - table=??(lr_in_ip_routing ), priority=194 , 
match=(inport == "lrp0" && ip6.dst == fe80::/64), action=(ip.ttl--; reg8[[0..15]] = 0; xxreg0 = ip6.dst; xxreg1 = fe80::200:ff:fe00:1; eth.src = 00:00:00:00:00:01; outport = "lrp0"; flags.loopback = 1; next;) - table=??(lr_in_ip_routing ), priority=194 , match=(inport == "lrp1" && ip6.dst == fe80::/64), action=(ip.ttl--; reg8[[0..15]] = 0; xxreg0 = ip6.dst; xxreg1 = fe80::200:ff:fe00:101; eth.src = 00:00:00:00:01:01; outport = "lrp1"; flags.loopback = 1; next;) - table=??(lr_in_ip_routing ), priority=194 , match=(inport == "lrp2" && ip6.dst == fe80::/64), action=(ip.ttl--; reg8[[0..15]] = 0; xxreg0 = ip6.dst; xxreg1 = fe80::200:ff:fe00:201; eth.src = 00:00:00:00:02:01; outport = "lrp2"; flags.loopback = 1; next;) - table=??(lr_in_ip_routing ), priority=73 , match=(reg7 == 1 && ip4.dst == 192.168.0.0/24), action=(ip.ttl--; reg8[[0..15]] = 0; reg0 = 192.168.1.10; reg1 = 192.168.1.1; eth.src = 00:00:00:00:01:01; outport = "lrp1"; flags.loopback = 1; next;) - table=??(lr_in_ip_routing ), priority=74 , match=(ip4.dst == 192.168.0.0/24), action=(ip.ttl--; reg8[[0..15]] = 0; reg0 = ip4.dst; reg1 = 192.168.0.1; eth.src = 00:00:00:00:00:01; outport = "lrp0"; flags.loopback = 1; next;) - table=??(lr_in_ip_routing ), priority=74 , match=(ip4.dst == 192.168.1.0/24), action=(ip.ttl--; reg8[[0..15]] = 0; reg0 = ip4.dst; reg1 = 192.168.1.1; eth.src = 00:00:00:00:01:01; outport = "lrp1"; flags.loopback = 1; next;) - table=??(lr_in_ip_routing ), priority=74 , match=(ip4.dst == 192.168.2.0/24), action=(ip.ttl--; reg8[[0..15]] = 0; reg0 = ip4.dst; reg1 = 192.168.2.1; eth.src = 00:00:00:00:02:01; outport = "lrp2"; flags.loopback = 1; next;) - table=??(lr_in_ip_routing ), priority=97 , match=(reg7 == 2 && ip4.dst == 1.1.1.1/32), action=(ip.ttl--; reg8[[0..15]] = 0; reg0 = 192.168.0.20; reg1 = 192.168.0.1; eth.src = 00:00:00:00:00:01; outport = "lrp0"; flags.loopback = 1; next;) + table=??(lr_in_ip_routing ), priority=1 , match=(reg7 == 0 && ip4.dst == 0.0.0.0/0), action=(ip.ttl--; reg8[[0..15]] = 0; reg0 = 192.168.0.10; reg1 = 192.168.0.1; eth.src = 00:00:00:00:00:01; outport = "lrp0"; flags.loopback = 1; reg9[[9]] = 1; next;) + table=??(lr_in_ip_routing ), priority=1 , match=(reg7 == 2 && ip4.dst == 0.0.0.0/0), action=(ip.ttl--; reg8[[0..15]] = 0; reg0 = 192.168.0.10; reg1 = 192.168.0.1; eth.src = 00:00:00:00:00:01; outport = "lrp0"; flags.loopback = 1; reg9[[9]] = 1; next;) + table=??(lr_in_ip_routing ), priority=194 , match=(inport == "lrp0" && ip6.dst == fe80::/64), action=(ip.ttl--; reg8[[0..15]] = 0; xxreg0 = ip6.dst; xxreg1 = fe80::200:ff:fe00:1; eth.src = 00:00:00:00:00:01; outport = "lrp0"; flags.loopback = 1; reg9[[9]] = 0; next;) + table=??(lr_in_ip_routing ), priority=194 , match=(inport == "lrp1" && ip6.dst == fe80::/64), action=(ip.ttl--; reg8[[0..15]] = 0; xxreg0 = ip6.dst; xxreg1 = fe80::200:ff:fe00:101; eth.src = 00:00:00:00:01:01; outport = "lrp1"; flags.loopback = 1; reg9[[9]] = 0; next;) + table=??(lr_in_ip_routing ), priority=194 , match=(inport == "lrp2" && ip6.dst == fe80::/64), action=(ip.ttl--; reg8[[0..15]] = 0; xxreg0 = ip6.dst; xxreg1 = fe80::200:ff:fe00:201; eth.src = 00:00:00:00:02:01; outport = "lrp2"; flags.loopback = 1; reg9[[9]] = 0; next;) + table=??(lr_in_ip_routing ), priority=73 , match=(reg7 == 1 && ip4.dst == 192.168.0.0/24), action=(ip.ttl--; reg8[[0..15]] = 0; reg0 = 192.168.1.10; reg1 = 192.168.1.1; eth.src = 00:00:00:00:01:01; outport = "lrp1"; flags.loopback = 1; reg9[[9]] = 1; next;) + table=??(lr_in_ip_routing ), priority=74 , match=(ip4.dst == 192.168.0.0/24), 
action=(ip.ttl--; reg8[[0..15]] = 0; reg0 = ip4.dst; reg1 = 192.168.0.1; eth.src = 00:00:00:00:00:01; outport = "lrp0"; flags.loopback = 1; reg9[[9]] = 1; next;) + table=??(lr_in_ip_routing ), priority=74 , match=(ip4.dst == 192.168.1.0/24), action=(ip.ttl--; reg8[[0..15]] = 0; reg0 = ip4.dst; reg1 = 192.168.1.1; eth.src = 00:00:00:00:01:01; outport = "lrp1"; flags.loopback = 1; reg9[[9]] = 1; next;) + table=??(lr_in_ip_routing ), priority=74 , match=(ip4.dst == 192.168.2.0/24), action=(ip.ttl--; reg8[[0..15]] = 0; reg0 = ip4.dst; reg1 = 192.168.2.1; eth.src = 00:00:00:00:02:01; outport = "lrp2"; flags.loopback = 1; reg9[[9]] = 1; next;) + table=??(lr_in_ip_routing ), priority=97 , match=(reg7 == 2 && ip4.dst == 1.1.1.1/32), action=(ip.ttl--; reg8[[0..15]] = 0; reg0 = 192.168.0.20; reg1 = 192.168.0.1; eth.src = 00:00:00:00:00:01; outport = "lrp0"; flags.loopback = 1; reg9[[9]] = 1; next;) ]) AT_CLEANUP diff --git a/tests/ovn.at b/tests/ovn.at index 96e43d80c..cae27fa82 100644 --- a/tests/ovn.at +++ b/tests/ovn.at @@ -38273,3 +38273,648 @@ OVN_CLEANUP([hv1 ]) AT_CLEANUP ]) + +OVN_FOR_EACH_NORTHD([ +AT_SETUP([2 HVs, 2 LS, 1 lport/LS, 2 peer LRs, IPv4 over IPv6]) +AT_SKIP_IF([test $HAVE_SCAPY = no]) +ovn_start + +# Logical network: +# Two LRs - R1 and R2 that are connected to each other as peers in 2001:db8::/64 +# network. R1 has a switchs ls1 (192.168.1.0/24) connected to it. +# R2 has ls2 (172.16.1.0/24) connected to it. + +ls1_lp1_mac="f0:00:00:01:02:03" +rp_ls1_mac="00:00:00:01:02:03" +rp_ls2_mac="00:00:00:01:02:04" +ls2_lp1_mac="f0:00:00:01:02:04" + +ls1_lp1_ip="192.168.1.2" +ls2_lp1_ip="172.16.1.2" + +check ovn-nbctl lr-add R1 +check ovn-nbctl lr-add R2 + +check ovn-nbctl ls-add ls1 +check ovn-nbctl ls-add ls2 + +# Connect ls1 to R1 +check ovn-nbctl lrp-add R1 ls1 $rp_ls1_mac 192.168.1.1/24 + +check ovn-nbctl lsp-add ls1 rp-ls1 -- set Logical_Switch_Port rp-ls1 type=router \ + options:router-port=ls1 addresses=\"$rp_ls1_mac\" + +# Connect ls2 to R2 +check ovn-nbctl lrp-add R2 ls2 $rp_ls2_mac 172.16.1.1/24 + +check ovn-nbctl lsp-add ls2 rp-ls2 -- set Logical_Switch_Port rp-ls2 type=router \ + options:router-port=ls2 addresses=\"$rp_ls2_mac\" + +# Connect R1 to R2 +check ovn-nbctl lrp-add R1 R1_R2 00:00:00:02:03:04 2001:db8::1/64 peer=R2_R1 +check ovn-nbctl lrp-add R2 R2_R1 00:00:00:02:03:05 2001:db8::2/64 peer=R1_R2 + +AT_CHECK([ovn-nbctl lr-route-add R1 "0.0.0.0/0" 2001:db8::2]) +AT_CHECK([ovn-nbctl lr-route-add R2 "0.0.0.0/0" 2001:db8::1]) + +# Create logical port ls1-lp1 in ls1 +check ovn-nbctl lsp-add ls1 ls1-lp1 \ +-- lsp-set-addresses ls1-lp1 "$ls1_lp1_mac $ls1_lp1_ip" + +# Create logical port ls2-lp1 in ls2 +check ovn-nbctl lsp-add ls2 ls2-lp1 \ +-- lsp-set-addresses ls2-lp1 "$ls2_lp1_mac $ls2_lp1_ip" + +# Create two hypervisor and create OVS ports corresponding to logical ports. 
+net_add n1
+
+sim_add hv1
+as hv1
+check ovs-vsctl add-br br-phys
+ovn_attach n1 br-phys 192.168.0.1
+check ovs-vsctl -- add-port br-int hv1-vif1 -- \
+ set interface hv1-vif1 external-ids:iface-id=ls1-lp1 \
+ options:tx_pcap=hv1/vif1-tx.pcap \
+ options:rxq_pcap=hv1/vif1-rx.pcap \
+ ofport-request=1
+
+sim_add hv2
+as hv2
+check ovs-vsctl add-br br-phys
+ovn_attach n1 br-phys 192.168.0.2
+check ovs-vsctl -- add-port br-int hv2-vif1 -- \
+ set interface hv2-vif1 external-ids:iface-id=ls2-lp1 \
+ options:tx_pcap=hv2/vif1-tx.pcap \
+ options:rxq_pcap=hv2/vif1-rx.pcap \
+ ofport-request=1
+
+
+# Pre-populate the hypervisors' ARP tables so that we don't lose any
+# packets for ARP resolution (native tunneling doesn't queue packets
+# for ARP resolution).
+OVN_POPULATE_ARP
+
+# Allow some time for ovn-northd and ovn-controller to catch up.
+wait_for_ports_up
+check ovn-nbctl --wait=hv sync
+
+# Packet to send.
+packet=$(fmt_pkt "Ether(dst='${rp_ls1_mac}', src='${ls1_lp1_mac}')/ \
+ IP(src='${ls1_lp1_ip}', dst='${ls2_lp1_ip}', ttl=64)/ \
+ UDP(sport=53, dport=4369)")
+check as hv1 ovs-appctl netdev-dummy/receive hv1-vif1 "$packet"
+
+# Packet to Expect
+# The TTL should be decremented by 2.
+expected=$(fmt_pkt "Ether(dst='${ls2_lp1_mac}', src='${rp_ls2_mac}')/ \
+ IP(src='${ls1_lp1_ip}', dst='${ls2_lp1_ip}', ttl=62)/ \
+ UDP(sport=53, dport=4369)")
+echo ${expected} > expected
+OVN_CHECK_PACKETS([hv2/vif1-tx.pcap], [expected])
+
+AT_CHECK([ovn-sbctl dump-flows | grep lr_in_arp_resolve | \
+grep "reg0 == 172.16.1.2" | wc -l], [0], [1
+])
+
+# Disable the ls2-lp1 port.
+check ovn-nbctl --wait=hv set logical_switch_port ls2-lp1 enabled=false
+
+AT_CHECK([ovn-sbctl dump-flows | grep lr_in_arp_resolve | \
+grep "reg0 == 172.16.1.2" | wc -l], [0], [0
+])
+
+# Send the same packet again and it should not be delivered
+check as hv1 ovs-appctl netdev-dummy/receive hv1-vif1 "$packet"
+
+# The 2nd packet sent should not be received.
+OVN_CHECK_PACKETS([hv2/vif1-tx.pcap], [expected])
+
+OVN_CLEANUP([hv1],[hv2])
+
+AT_CLEANUP
+])
+
+OVN_FOR_EACH_NORTHD([
+AT_SETUP([2 HVs, 2 LS, 1 lport/LS, LRs connected via LS, IPv4 over IPv6])
+AT_SKIP_IF([test $HAVE_SCAPY = no])
+ovn_start
+
+# Logical network:
+# Two LRs - R1 and R2 that are connected to ls-transfer in 2001:db8::/64
+# network. R1 has a switch ls1 (192.168.1.0/24) connected to it.
+# R2 has ls2 (172.16.1.0/24) connected to it.
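+# Unlike the previous test, R1 and R2 are not peered directly; they reach
+# each other through the extra logical switch ls-transfer.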
+
+ls1_lp1_mac="f0:00:00:01:02:03"
+rp_ls1_mac="00:00:00:01:02:03"
+rp_ls2_mac="00:00:00:01:02:04"
+ls2_lp1_mac="f0:00:00:01:02:04"
+
+ls1_lp1_ip="192.168.1.2"
+ls2_lp1_ip="172.16.1.2"
+
+check ovn-nbctl lr-add R1
+check ovn-nbctl lr-add R2
+
+check ovn-nbctl ls-add ls1
+check ovn-nbctl ls-add ls2
+check ovn-nbctl ls-add ls-transfer
+
+# Connect ls1 to R1
+check ovn-nbctl lrp-add R1 ls1 $rp_ls1_mac 192.168.1.1/24
+
+check ovn-nbctl lsp-add ls1 rp-ls1 -- set Logical_Switch_Port rp-ls1 type=router \
+ options:router-port=ls1 addresses=\"$rp_ls1_mac\"
+
+# Connect ls2 to R2
+check ovn-nbctl lrp-add R2 ls2 $rp_ls2_mac 172.16.1.1/24
+
+check ovn-nbctl lsp-add ls2 rp-ls2 -- set Logical_Switch_Port rp-ls2 type=router \
+ options:router-port=ls2 addresses=\"$rp_ls2_mac\"
+
+# Connect R1 to R2
+check ovn-nbctl lrp-add R1 R1_ls-transfer 00:00:00:02:03:04 2001:db8::1/64
+check ovn-nbctl lrp-add R2 R2_ls-transfer 00:00:00:02:03:05 2001:db8::2/64
+
+check ovn-nbctl lsp-add ls-transfer ls-transfer_r1 -- \
+ set Logical_Switch_Port ls-transfer_r1 type=router \
+ options:router-port=R1_ls-transfer addresses=\"router\"
+check ovn-nbctl lsp-add ls-transfer ls-transfer_r2 -- \
+ set Logical_Switch_Port ls-transfer_r2 type=router \
+ options:router-port=R2_ls-transfer addresses=\"router\"
+
+AT_CHECK([ovn-nbctl lr-route-add R1 "0.0.0.0/0" 2001:db8::2])
+AT_CHECK([ovn-nbctl lr-route-add R2 "0.0.0.0/0" 2001:db8::1])
+
+# Create logical port ls1-lp1 in ls1
+check ovn-nbctl lsp-add ls1 ls1-lp1 \
+-- lsp-set-addresses ls1-lp1 "$ls1_lp1_mac $ls1_lp1_ip"
+
+# Create logical port ls2-lp1 in ls2
+check ovn-nbctl lsp-add ls2 ls2-lp1 \
+-- lsp-set-addresses ls2-lp1 "$ls2_lp1_mac $ls2_lp1_ip"
+
+# Create two hypervisors and create OVS ports corresponding to logical ports.
+net_add n1
+
+sim_add hv1
+as hv1
+check ovs-vsctl add-br br-phys
+ovn_attach n1 br-phys 192.168.0.1
+check ovs-vsctl -- add-port br-int hv1-vif1 -- \
+ set interface hv1-vif1 external-ids:iface-id=ls1-lp1 \
+ options:tx_pcap=hv1/vif1-tx.pcap \
+ options:rxq_pcap=hv1/vif1-rx.pcap \
+ ofport-request=1
+
+sim_add hv2
+as hv2
+check ovs-vsctl add-br br-phys
+ovn_attach n1 br-phys 192.168.0.2
+check ovs-vsctl -- add-port br-int hv2-vif1 -- \
+ set interface hv2-vif1 external-ids:iface-id=ls2-lp1 \
+ options:tx_pcap=hv2/vif1-tx.pcap \
+ options:rxq_pcap=hv2/vif1-rx.pcap \
+ ofport-request=1
+
+
+# Pre-populate the hypervisors' ARP tables so that we don't lose any
+# packets for ARP resolution (native tunneling doesn't queue packets
+# for ARP resolution).
+OVN_POPULATE_ARP
+
+# Allow some time for ovn-northd and ovn-controller to catch up.
+wait_for_ports_up
+check ovn-nbctl --wait=hv sync
+
+# Packet to send.
+packet=$(fmt_pkt "Ether(dst='${rp_ls1_mac}', src='${ls1_lp1_mac}')/ \
+ IP(src='${ls1_lp1_ip}', dst='${ls2_lp1_ip}', ttl=64)/ \
+ UDP(sport=53, dport=4369)")
+check as hv1 ovs-appctl netdev-dummy/receive hv1-vif1 "$packet"
+
+# Packet to Expect
+# The TTL should be decremented by 2.
+expected=$(fmt_pkt "Ether(dst='${ls2_lp1_mac}', src='${rp_ls2_mac}')/ \
+ IP(src='${ls1_lp1_ip}', dst='${ls2_lp1_ip}', ttl=62)/ \
+ UDP(sport=53, dport=4369)")
+echo ${expected} > expected
+OVN_CHECK_PACKETS([hv2/vif1-tx.pcap], [expected])
+
+AT_CHECK([ovn-sbctl dump-flows | grep lr_in_arp_resolve | \
+grep "reg0 == 172.16.1.2" | wc -l], [0], [1
+])
+
+# Disable the ls2-lp1 port.
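+# With the port disabled, ovn-northd should drop the 172.16.1.2 entry from the
+# lr_in_arp_resolve stage, and the packet should no longer be delivered.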
+check ovn-nbctl --wait=hv set logical_switch_port ls2-lp1 enabled=false
+
+AT_CHECK([ovn-sbctl dump-flows | grep lr_in_arp_resolve | \
+grep "reg0 == 172.16.1.2" | wc -l], [0], [0
+])
+
+# Send the same packet again and it should not be delivered
+check as hv1 ovs-appctl netdev-dummy/receive hv1-vif1 "$packet"
+
+# The 2nd packet sent should not be received.
+OVN_CHECK_PACKETS([hv2/vif1-tx.pcap], [expected])
+
+OVN_CLEANUP([hv1],[hv2])
+
+AT_CLEANUP
+])
+
+OVN_FOR_EACH_NORTHD([
+AT_SETUP([2 HVs, 2 LS, 1 lport/LS, LRs connected via LS, IPv4 over IPv6, static mac])
+AT_SKIP_IF([test $HAVE_SCAPY = no])
+ovn_start
+
+# Logical network:
+# Two LRs - R1 and R2 that are connected to ls-transfer in 2001:db8::/64
+# network. R1 has a switch ls1 (192.168.1.0/24) connected to it.
+# R2 has ls2 (172.16.1.0/24) connected to it.
+
+ls1_lp1_mac="f0:00:00:01:02:03"
+rp_ls1_mac="00:00:00:01:02:03"
+rp_ls2_mac="00:00:00:01:02:04"
+ls2_lp1_mac="f0:00:00:01:02:04"
+
+ls1_lp1_ip="192.168.1.2"
+ls2_lp1_ip="172.16.1.2"
+
+check ovn-nbctl lr-add R1
+check ovn-nbctl lr-add R2
+
+check ovn-nbctl ls-add ls1
+check ovn-nbctl ls-add ls2
+check ovn-nbctl ls-add ls-transfer
+
+# Connect ls1 to R1
+check ovn-nbctl lrp-add R1 ls1 $rp_ls1_mac 192.168.1.1/24
+check ovn-nbctl set Logical_Router R1 options:dynamic_neigh_routers=true
+
+check ovn-nbctl lsp-add ls1 rp-ls1 -- set Logical_Switch_Port rp-ls1 type=router \
+ options:router-port=ls1 addresses=\"$rp_ls1_mac\"
+
+# Connect ls2 to R2
+check ovn-nbctl lrp-add R2 ls2 $rp_ls2_mac 172.16.1.1/24
+check ovn-nbctl set Logical_Router R2 options:dynamic_neigh_routers=true
+
+check ovn-nbctl lsp-add ls2 rp-ls2 -- set Logical_Switch_Port rp-ls2 type=router \
+ options:router-port=ls2 addresses=\"$rp_ls2_mac\"
+
+# Connect R1 to R2
+check ovn-nbctl lrp-add R1 R1_ls-transfer 00:00:00:02:03:04 2001:db8::1/64
+check ovn-nbctl lrp-add R2 R2_ls-transfer 00:00:00:02:03:05 2001:db8::2/64
+
+check ovn-nbctl lsp-add ls-transfer ls-transfer_r1 -- \
+ set Logical_Switch_Port ls-transfer_r1 type=router \
+ options:router-port=R1_ls-transfer addresses=\"router\"
+check ovn-nbctl lsp-add ls-transfer ls-transfer_r2 -- \
+ set Logical_Switch_Port ls-transfer_r2 type=router \
+ options:router-port=R2_ls-transfer addresses=\"router\"
+
+# Static mac binding entries
+check ovn-nbctl static-mac-binding-add R1_ls-transfer 2001:db8::2 00:00:00:02:03:05
+check ovn-nbctl static-mac-binding-add R2_ls-transfer 2001:db8::1 00:00:00:02:03:04
+
+AT_CHECK([ovn-nbctl lr-route-add R1 "0.0.0.0/0" 2001:db8::2])
+AT_CHECK([ovn-nbctl lr-route-add R2 "0.0.0.0/0" 2001:db8::1])
+
+# Create logical port ls1-lp1 in ls1
+check ovn-nbctl lsp-add ls1 ls1-lp1 \
+-- lsp-set-addresses ls1-lp1 "$ls1_lp1_mac $ls1_lp1_ip"
+
+# Create logical port ls2-lp1 in ls2
+check ovn-nbctl lsp-add ls2 ls2-lp1 \
+-- lsp-set-addresses ls2-lp1 "$ls2_lp1_mac $ls2_lp1_ip"
+
+# Create two hypervisors and create OVS ports corresponding to logical ports.
+net_add n1
+
+sim_add hv1
+as hv1
+check ovs-vsctl add-br br-phys
+ovn_attach n1 br-phys 192.168.0.1
+check ovs-vsctl -- add-port br-int hv1-vif1 -- \
+ set interface hv1-vif1 external-ids:iface-id=ls1-lp1 \
+ options:tx_pcap=hv1/vif1-tx.pcap \
+ options:rxq_pcap=hv1/vif1-rx.pcap \
+ ofport-request=1
+
+sim_add hv2
+as hv2
+check ovs-vsctl add-br br-phys
+ovn_attach n1 br-phys 192.168.0.2
+check ovs-vsctl -- add-port br-int hv2-vif1 -- \
+ set interface hv2-vif1 external-ids:iface-id=ls2-lp1 \
+ options:tx_pcap=hv2/vif1-tx.pcap \
+ options:rxq_pcap=hv2/vif1-rx.pcap \
+ ofport-request=1
+
+
+# Pre-populate the hypervisors' ARP tables so that we don't lose any
+# packets for ARP resolution (native tunneling doesn't queue packets
+# for ARP resolution).
+OVN_POPULATE_ARP
+
+# Allow some time for ovn-northd and ovn-controller to catch up.
+wait_for_ports_up
+check ovn-nbctl --wait=hv sync
+
+# Packet to send.
+packet=$(fmt_pkt "Ether(dst='${rp_ls1_mac}', src='${ls1_lp1_mac}')/ \
+ IP(src='${ls1_lp1_ip}', dst='${ls2_lp1_ip}', ttl=64)/ \
+ UDP(sport=53, dport=4369)")
+check as hv1 ovs-appctl netdev-dummy/receive hv1-vif1 "$packet"
+
+# Packet to Expect
+# The TTL should be decremented by 2.
+expected=$(fmt_pkt "Ether(dst='${ls2_lp1_mac}', src='${rp_ls2_mac}')/ \
+ IP(src='${ls1_lp1_ip}', dst='${ls2_lp1_ip}', ttl=62)/ \
+ UDP(sport=53, dport=4369)")
+echo ${expected} > expected
+OVN_CHECK_PACKETS([hv2/vif1-tx.pcap], [expected])
+
+AT_CHECK([ovn-sbctl dump-flows | grep lr_in_arp_resolve | \
+grep "reg0 == 172.16.1.2" | wc -l], [0], [1
+])
+
+# Disable the ls2-lp1 port.
+check ovn-nbctl --wait=hv set logical_switch_port ls2-lp1 enabled=false
+
+AT_CHECK([ovn-sbctl dump-flows | grep lr_in_arp_resolve | \
+grep "reg0 == 172.16.1.2" | wc -l], [0], [0
+])
+
+# Send the same packet again and it should not be delivered
+check as hv1 ovs-appctl netdev-dummy/receive hv1-vif1 "$packet"
+
+# The 2nd packet sent should not be received.
+OVN_CHECK_PACKETS([hv2/vif1-tx.pcap], [expected])
+
+OVN_CLEANUP([hv1],[hv2])
+
+AT_CLEANUP
+])
+
+OVN_FOR_EACH_NORTHD([
+AT_SETUP([2 HVs, 2 LS, 1 lport/LS, LRs connected via LS, IPv4 over IPv6, ECMP])
+AT_SKIP_IF([test $HAVE_SCAPY = no])
+ovn_start
+
+# Logical network:
+# Two LRs - R1 and R2 that are connected to ls-transfer1 and ls-transfer2 in
+# 2001:db8:1::/64 and 2001:db8:2::/64
+# networks. R1 has a switch ls1 (192.168.1.0/24) connected to it.
+# R2 has ls2 (172.16.1.0/24) connected to it.
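+# Both default routes are configured as ECMP routes below, so router-to-router
+# traffic may use either transfer network.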
+
+ls1_lp1_mac="f0:00:00:01:02:03"
+rp_ls1_mac="00:00:00:01:02:03"
+rp_ls2_mac="00:00:00:01:02:04"
+ls2_lp1_mac="f0:00:00:01:02:04"
+
+ls1_lp1_ip="192.168.1.2"
+ls2_lp1_ip="172.16.1.2"
+
+check ovn-nbctl lr-add R1
+check ovn-nbctl lr-add R2
+
+check ovn-nbctl ls-add ls1
+check ovn-nbctl ls-add ls2
+check ovn-nbctl ls-add ls-transfer1
+check ovn-nbctl ls-add ls-transfer2
+
+# Connect ls1 to R1
+check ovn-nbctl lrp-add R1 ls1 $rp_ls1_mac 192.168.1.1/24
+
+check ovn-nbctl lsp-add ls1 rp-ls1 -- set Logical_Switch_Port rp-ls1 type=router \
+ options:router-port=ls1 addresses=\"$rp_ls1_mac\"
+
+# Connect ls2 to R2
+check ovn-nbctl lrp-add R2 ls2 $rp_ls2_mac 172.16.1.1/24
+
+check ovn-nbctl lsp-add ls2 rp-ls2 -- set Logical_Switch_Port rp-ls2 type=router \
+ options:router-port=ls2 addresses=\"$rp_ls2_mac\"
+
+# Connect R1 to R2 (ls-transfer1)
+check ovn-nbctl lrp-add R1 R1_ls-transfer1 00:00:00:02:03:04 2001:db8:1::1/64
+check ovn-nbctl lrp-add R2 R2_ls-transfer1 00:00:00:02:03:05 2001:db8:1::2/64
+
+check ovn-nbctl lsp-add ls-transfer1 ls-transfer1_r1 -- \
+ set Logical_Switch_Port ls-transfer1_r1 type=router \
+ options:router-port=R1_ls-transfer1 addresses=\"router\"
+check ovn-nbctl lsp-add ls-transfer1 ls-transfer1_r2 -- \
+ set Logical_Switch_Port ls-transfer1_r2 type=router \
+ options:router-port=R2_ls-transfer1 addresses=\"router\"
+
+# Connect R1 to R2 (ls-transfer2)
+check ovn-nbctl lrp-add R1 R1_ls-transfer2 00:00:00:02:03:14 2001:db8:2::1/64
+check ovn-nbctl lrp-add R2 R2_ls-transfer2 00:00:00:02:03:15 2001:db8:2::2/64
+
+check ovn-nbctl lsp-add ls-transfer2 ls-transfer2_r1 -- \
+ set Logical_Switch_Port ls-transfer2_r1 type=router \
+ options:router-port=R1_ls-transfer2 addresses=\"router\"
+check ovn-nbctl lsp-add ls-transfer2 ls-transfer2_r2 -- \
+ set Logical_Switch_Port ls-transfer2_r2 type=router \
+ options:router-port=R2_ls-transfer2 addresses=\"router\"
+
+AT_CHECK([ovn-nbctl lr-route-add R1 "0.0.0.0/0" 2001:db8:1::2])
+AT_CHECK([ovn-nbctl --ecmp lr-route-add R1 "0.0.0.0/0" 2001:db8:2::2])
+AT_CHECK([ovn-nbctl lr-route-add R2 "0.0.0.0/0" 2001:db8:1::1])
+AT_CHECK([ovn-nbctl --ecmp lr-route-add R2 "0.0.0.0/0" 2001:db8:2::1])
+
+# Create logical port ls1-lp1 in ls1
+check ovn-nbctl lsp-add ls1 ls1-lp1 \
+-- lsp-set-addresses ls1-lp1 "$ls1_lp1_mac $ls1_lp1_ip"
+
+# Create logical port ls2-lp1 in ls2
+check ovn-nbctl lsp-add ls2 ls2-lp1 \
+-- lsp-set-addresses ls2-lp1 "$ls2_lp1_mac $ls2_lp1_ip"
+
+# Create two hypervisors and create OVS ports corresponding to logical ports.
+net_add n1
+
+sim_add hv1
+as hv1
+check ovs-vsctl add-br br-phys
+ovn_attach n1 br-phys 192.168.0.1
+check ovs-vsctl -- add-port br-int hv1-vif1 -- \
+ set interface hv1-vif1 external-ids:iface-id=ls1-lp1 \
+ options:tx_pcap=hv1/vif1-tx.pcap \
+ options:rxq_pcap=hv1/vif1-rx.pcap \
+ ofport-request=1
+
+sim_add hv2
+as hv2
+check ovs-vsctl add-br br-phys
+ovn_attach n1 br-phys 192.168.0.2
+check ovs-vsctl -- add-port br-int hv2-vif1 -- \
+ set interface hv2-vif1 external-ids:iface-id=ls2-lp1 \
+ options:tx_pcap=hv2/vif1-tx.pcap \
+ options:rxq_pcap=hv2/vif1-rx.pcap \
+ ofport-request=1
+
+
+# Pre-populate the hypervisors' ARP tables so that we don't lose any
+# packets for ARP resolution (native tunneling doesn't queue packets
+# for ARP resolution).
+OVN_POPULATE_ARP
+
+# Allow some time for ovn-northd and ovn-controller to catch up.
+wait_for_ports_up
+check ovn-nbctl --wait=hv sync
+
+# Packet to send.
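+# (fmt_pkt uses scapy to turn the packet definition below into the raw frame
+# handed to netdev-dummy/receive.)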
+packet=$(fmt_pkt "Ether(dst='${rp_ls1_mac}', src='${ls1_lp1_mac}')/ \
+ IP(src='${ls1_lp1_ip}', dst='${ls2_lp1_ip}', ttl=64)/ \
+ UDP(sport=53, dport=4369)")
+check as hv1 ovs-appctl netdev-dummy/receive hv1-vif1 "$packet"
+
+# Packet to Expect
+# The TTL should be decremented by 2.
+expected=$(fmt_pkt "Ether(dst='${ls2_lp1_mac}', src='${rp_ls2_mac}')/ \
+ IP(src='${ls1_lp1_ip}', dst='${ls2_lp1_ip}', ttl=62)/ \
+ UDP(sport=53, dport=4369)")
+echo ${expected} > expected
+OVN_CHECK_PACKETS([hv2/vif1-tx.pcap], [expected])
+
+AT_CHECK([ovn-sbctl dump-flows | grep lr_in_arp_resolve | \
+grep "reg0 == 172.16.1.2" | wc -l], [0], [1
+])
+
+# Disable the ls2-lp1 port.
+check ovn-nbctl --wait=hv set logical_switch_port ls2-lp1 enabled=false
+
+AT_CHECK([ovn-sbctl dump-flows | grep lr_in_arp_resolve | \
+grep "reg0 == 172.16.1.2" | wc -l], [0], [0
+])
+
+# Send the same packet again and it should not be delivered
+check as hv1 ovs-appctl netdev-dummy/receive hv1-vif1 "$packet"
+
+# The 2nd packet sent should not be received.
+OVN_CHECK_PACKETS([hv2/vif1-tx.pcap], [expected])
+
+OVN_CLEANUP([hv1],[hv2])
+
+AT_CLEANUP
+])
+
+OVN_FOR_EACH_NORTHD([
+AT_SETUP([2 HVs, 2 LS, 1 lport/LS, 2 peer LRs, IPv6 over IPv4])
+AT_SKIP_IF([test $HAVE_SCAPY = no])
+ovn_start
+
+# Logical network:
+# Two LRs - R1 and R2 that are connected to each other as peers in 10.0.0.0/24
+# network. R1 has a switch ls1 (2001:db8:1::/64) connected to it.
+# R2 has ls2 (2001:db8:2::/64) connected to it.
+
+ls1_lp1_mac="f0:00:00:01:02:03"
+rp_ls1_mac="00:00:00:01:02:03"
+rp_ls2_mac="00:00:00:01:02:04"
+ls2_lp1_mac="f0:00:00:01:02:04"
+
+ls1_lp1_ip="2001:db8:1::2"
+ls2_lp1_ip="2001:db8:2::2"
+
+check ovn-nbctl lr-add R1
+check ovn-nbctl lr-add R2
+
+check ovn-nbctl ls-add ls1
+check ovn-nbctl ls-add ls2
+
+# Connect ls1 to R1
+check ovn-nbctl lrp-add R1 ls1 $rp_ls1_mac 2001:db8:1::1/64
+
+check ovn-nbctl lsp-add ls1 rp-ls1 -- set Logical_Switch_Port rp-ls1 type=router \
+ options:router-port=ls1 addresses=\"$rp_ls1_mac\"
+
+# Connect ls2 to R2
+check ovn-nbctl lrp-add R2 ls2 $rp_ls2_mac 2001:db8:2::1/64
+
+check ovn-nbctl lsp-add ls2 rp-ls2 -- set Logical_Switch_Port rp-ls2 type=router \
+ options:router-port=ls2 addresses=\"$rp_ls2_mac\"
+
+# Connect R1 to R2
+check ovn-nbctl lrp-add R1 R1_R2 00:00:00:02:03:04 10.0.0.1/24 peer=R2_R1
+check ovn-nbctl lrp-add R2 R2_R1 00:00:00:02:03:05 10.0.0.2/24 peer=R1_R2
+
+AT_CHECK([ovn-nbctl lr-route-add R1 "::/0" 10.0.0.2])
+AT_CHECK([ovn-nbctl lr-route-add R2 "::/0" 10.0.0.1])
+
+# Create logical port ls1-lp1 in ls1
+check ovn-nbctl lsp-add ls1 ls1-lp1 \
+-- lsp-set-addresses ls1-lp1 "$ls1_lp1_mac $ls1_lp1_ip"
+
+# Create logical port ls2-lp1 in ls2
+check ovn-nbctl lsp-add ls2 ls2-lp1 \
+-- lsp-set-addresses ls2-lp1 "$ls2_lp1_mac $ls2_lp1_ip"
+
+# Create two hypervisors and create OVS ports corresponding to logical ports.
+net_add n1
+
+sim_add hv1
+as hv1
+check ovs-vsctl add-br br-phys
+ovn_attach n1 br-phys 192.168.0.1
+check ovs-vsctl -- add-port br-int hv1-vif1 -- \
+ set interface hv1-vif1 external-ids:iface-id=ls1-lp1 \
+ options:tx_pcap=hv1/vif1-tx.pcap \
+ options:rxq_pcap=hv1/vif1-rx.pcap \
+ ofport-request=1
+
+sim_add hv2
+as hv2
+check ovs-vsctl add-br br-phys
+ovn_attach n1 br-phys 192.168.0.2
+check ovs-vsctl -- add-port br-int hv2-vif1 -- \
+ set interface hv2-vif1 external-ids:iface-id=ls2-lp1 \
+ options:tx_pcap=hv2/vif1-tx.pcap \
+ options:rxq_pcap=hv2/vif1-rx.pcap \
+ ofport-request=1
+
+
+# Pre-populate the hypervisors' ARP tables so that we don't lose any
+# packets for ARP resolution (native tunneling doesn't queue packets
+# for ARP resolution).
+OVN_POPULATE_ARP
+
+# Allow some time for ovn-northd and ovn-controller to catch up.
+wait_for_ports_up
+check ovn-nbctl --wait=hv sync
+
+# Packet to send.
+packet=$(fmt_pkt "Ether(dst='${rp_ls1_mac}', src='${ls1_lp1_mac}')/ \
+ IPv6(src='${ls1_lp1_ip}', dst='${ls2_lp1_ip}', hlim=64)/ \
+ UDP(sport=53, dport=4369)")
+check as hv1 ovs-appctl netdev-dummy/receive hv1-vif1 "$packet"
+
+# Packet to Expect
+# The hop limit should be decremented by 2.
+expected=$(fmt_pkt "Ether(dst='${ls2_lp1_mac}', src='${rp_ls2_mac}')/ \
+ IPv6(src='${ls1_lp1_ip}', dst='${ls2_lp1_ip}', hlim=62)/ \
+ UDP(sport=53, dport=4369)")
+echo ${expected} > expected
+OVN_CHECK_PACKETS([hv2/vif1-tx.pcap], [expected])
+
+AT_CHECK([ovn-sbctl dump-flows | grep lr_in_arp_resolve | \
+grep "xxreg0 == 2001:db8:2::2" | wc -l], [0], [1
+])
+
+# Disable the ls2-lp1 port.
+check ovn-nbctl --wait=hv set logical_switch_port ls2-lp1 enabled=false
+
+AT_CHECK([ovn-sbctl dump-flows | grep lr_in_arp_resolve | \
+grep "xxreg0 == 2001:db8:2::2" | wc -l], [0], [0
+])
+
+# Send the same packet again and it should not be delivered
+check as hv1 ovs-appctl netdev-dummy/receive hv1-vif1 "$packet"
+
+# The 2nd packet sent should not be received.
+OVN_CHECK_PACKETS([hv2/vif1-tx.pcap], [expected])
+
+OVN_CLEANUP([hv1],[hv2])
+
+AT_CLEANUP
+])
diff --git a/utilities/ovn-nbctl.c b/utilities/ovn-nbctl.c
index 618f3a18b..8e59b5fcb 100644
--- a/utilities/ovn-nbctl.c
+++ b/utilities/ovn-nbctl.c
@@ -4548,11 +4548,9 @@ nbctl_lr_route_add(struct ctl_context *ctx)
     }
 
     char *route_table = shash_find_data(&ctx->options, "--route-table");
-    bool v6_prefix = false;
     prefix = normalize_ipv4_prefix_str(ctx->argv[2]);
     if (!prefix) {
         prefix = normalize_ipv6_prefix_str(ctx->argv[2]);
-        v6_prefix = true;
     }
     if (!prefix) {
         ctl_error(ctx, "bad prefix argument: %s", ctx->argv[2]);
@@ -4563,15 +4561,15 @@ nbctl_lr_route_add(struct ctl_context *ctx)
     if (is_discard_route) {
         next_hop = xasprintf("discard");
     } else {
-        next_hop = v6_prefix
-                   ? normalize_ipv6_addr_str(ctx->argv[3])
-                   : normalize_ipv4_addr_str(ctx->argv[3]);
+        next_hop = normalize_ipv4_addr_str(ctx->argv[3]);
+        if (!next_hop) {
+            next_hop = normalize_ipv6_addr_str(ctx->argv[3]);
+        }
         if (!next_hop) {
             /* check if it is a output port. */
             error = lrp_by_name_or_uuid(ctx, ctx->argv[3], true, &out_lrp);
             if (error) {
-                ctl_error(ctx, "bad %s nexthop argument: %s",
-                          v6_prefix ? "IPv6" : "IPv4", ctx->argv[3]);
+                ctl_error(ctx, "bad nexthop argument: %s", ctx->argv[3]);
                 free(error);
                 goto cleanup;
             }