From patchwork Thu Aug 4 09:49:12 2016
X-Patchwork-Submitter: Mark Kavanagh
X-Patchwork-Id: 655718
X-Patchwork-Delegate: diproiettod@vmware.com
From: Mark Kavanagh <mark.b.kavanagh@intel.com>
To: dev@openvswitch.org
Date: Thu, 4 Aug 2016 10:49:12 +0100
Message-Id: <1470304152-226557-1-git-send-email-mark.b.kavanagh@intel.com>
Subject: [ovs-dev] [PATCH V2] netdev-dpdk: fix memory leak

DPDK v16.07 introduces the ability to free memzones. Until now, DPDK memory pools created in OVS could not be destroyed, thus incurring a memory leak.

Leverage the DPDK v16.07 rte_mempool API to free DPDK mempools when their associated reference count reaches 0 (this indicates that the memory pool is no longer in use).
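The shape of the fix is a classic reference-count pattern: the last `dpdk_mp_put()` both unlinks the pool from the list and destroys it. A minimal, self-contained sketch of that pattern follows; the `mp_wrapper` type and `mp_get`/`mp_ref`/`mp_put` names are illustrative stand-ins (in OVS the wrapper is `struct dpdk_mp` and the final free is `rte_mempool_free()`), not the actual patched code.

```c
#include <stdlib.h>

/* Illustrative stand-in for OVS's struct dpdk_mp. */
struct mp_wrapper {
    void *pool;      /* would be a struct rte_mempool * in OVS */
    int refcount;
};

/* Create a pool wrapper with one reference held.
 * (Stands in for dpdk_mp_get() / rte_mempool creation.) */
static struct mp_wrapper *
mp_get(void)
{
    struct mp_wrapper *mp = malloc(sizeof *mp);
    mp->pool = malloc(64);   /* stands in for the real mempool */
    mp->refcount = 1;
    return mp;
}

static void
mp_ref(struct mp_wrapper *mp)
{
    mp->refcount++;
}

/* Mirrors the patch's logic: dropping the last reference destroys the
 * pool itself (rte_mempool_free() in the real code). Returns 1 if the
 * pool was destroyed, 0 if it is still referenced. */
static int
mp_put(struct mp_wrapper *mp)
{
    if (!mp) {
        return 0;
    }
    if (!--mp->refcount) {
        free(mp->pool);      /* rte_mempool_free(dmp->mp) in OVS */
        free(mp);
        return 1;
    }
    return 0;
}
```

With two holders, the first put leaves the pool alive and only the second actually frees it, which is exactly why the pre-16.07 inability to free mempools leaked memory at that final put.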
Signed-off-by: Mark Kavanagh
---
v2->v1: rebase to head of master, and remove 'RFC' tag

 lib/netdev-dpdk.c | 29 +++++++++++++++--------------
 1 file changed, 15 insertions(+), 14 deletions(-)

diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c
index aaac0d1..ffcd35c 100644
--- a/lib/netdev-dpdk.c
+++ b/lib/netdev-dpdk.c
@@ -506,7 +506,7 @@ dpdk_mp_get(int socket_id, int mtu) OVS_REQUIRES(dpdk_mutex)
 }
 
 static void
-dpdk_mp_put(struct dpdk_mp *dmp)
+dpdk_mp_put(struct dpdk_mp *dmp) OVS_REQUIRES(dpdk_mutex)
 {
     if (!dmp) {
@@ -514,15 +514,12 @@ dpdk_mp_put(struct dpdk_mp *dmp)
     }
 
     dmp->refcount--;
-    ovs_assert(dmp->refcount >= 0);
 
-#if 0
-    /* I could not find any API to destroy mp. */
-    if (dmp->refcount == 0) {
-        list_delete(dmp->list_node);
-        /* destroy mp-pool. */
-    }
-#endif
+    if (OVS_UNLIKELY(!dmp->refcount)) {
+        ovs_list_remove(&dmp->list_node);
+        rte_mempool_free(dmp->mp);
+    }
 }
 
 static void
@@ -928,16 +925,18 @@ netdev_dpdk_destruct(struct netdev *netdev)
 {
     struct netdev_dpdk *dev = netdev_dpdk_cast(netdev);
 
+    ovs_mutex_lock(&dpdk_mutex);
     ovs_mutex_lock(&dev->mutex);
+
     rte_eth_dev_stop(dev->port_id);
     free(ovsrcu_get_protected(struct ingress_policer *,
                               &dev->ingress_policer));
-    ovs_mutex_unlock(&dev->mutex);
 
-    ovs_mutex_lock(&dpdk_mutex);
     rte_free(dev->tx_q);
     ovs_list_remove(&dev->list_node);
     dpdk_mp_put(dev->dpdk_mp);
+
+    ovs_mutex_unlock(&dev->mutex);
     ovs_mutex_unlock(&dpdk_mutex);
 }
 
@@ -946,6 +945,9 @@ netdev_dpdk_vhost_destruct(struct netdev *netdev)
 {
     struct netdev_dpdk *dev = netdev_dpdk_cast(netdev);
 
+    ovs_mutex_lock(&dpdk_mutex);
+    ovs_mutex_lock(&dev->mutex);
+
     /* Guest becomes an orphan if still attached. */
     if (netdev_dpdk_get_vid(dev) >= 0) {
         VLOG_ERR("Removing port '%s' while vhost device still attached.",
@@ -961,15 +963,14 @@ netdev_dpdk_vhost_destruct(struct netdev *netdev)
         fatal_signal_remove_file_to_unlink(dev->vhost_id);
     }
 
-    ovs_mutex_lock(&dev->mutex);
     free(ovsrcu_get_protected(struct ingress_policer *,
                               &dev->ingress_policer));
-    ovs_mutex_unlock(&dev->mutex);
 
-    ovs_mutex_lock(&dpdk_mutex);
     rte_free(dev->tx_q);
     ovs_list_remove(&dev->list_node);
     dpdk_mp_put(dev->dpdk_mp);
+
+    ovs_mutex_unlock(&dev->mutex);
     ovs_mutex_unlock(&dpdk_mutex);
 }
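A side effect of the patch worth noting: both destruct paths now acquire `dpdk_mutex` before `dev->mutex` and release them in reverse order, so `dpdk_mp_put()` (which touches the shared mempool list) always runs with both locks held and in a single, consistent lock order. A minimal sketch of that discipline, using plain pthreads and illustrative names rather than the OVS code:

```c
#include <pthread.h>

/* Illustrative only: a global list lock (dpdk_mutex in the patch) and a
 * per-device lock (dev->mutex), taken global-first and released in
 * reverse order, so all teardown paths agree on lock order. */
static pthread_mutex_t global_mutex = PTHREAD_MUTEX_INITIALIZER;

struct device {
    pthread_mutex_t mutex;
    int destroyed;
};

static void
device_destruct(struct device *dev)
{
    pthread_mutex_lock(&global_mutex);   /* global lock first */
    pthread_mutex_lock(&dev->mutex);     /* then per-device lock */

    /* Teardown that touches both per-device state and the global list
     * (rte_free, ovs_list_remove, dpdk_mp_put in the real code). */
    dev->destroyed = 1;

    pthread_mutex_unlock(&dev->mutex);   /* release in reverse order */
    pthread_mutex_unlock(&global_mutex);
}
```

Taking every lock pair in the same order in all code paths is what rules out the classic AB/BA deadlock; the original code's lock/unlock/relock sequence also left a window where the device was unlocked mid-teardown.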