From patchwork Wed Oct 21 16:37:44 2015
From: Lan Tianyu <tianyu.lan@intel.com>
To: bhelgaas@google.com, carolyn.wyborny@intel.com, donald.c.skidmore@intel.com, eddie.dong@intel.com, nrupal.jani@intel.com, yang.z.zhang@intel.com, agraf@suse.de, kvm@vger.kernel.org, pbonzini@redhat.com, qemu-devel@nongnu.org, emil.s.tantilov@intel.com, intel-wired-lan@lists.osuosl.org, jeffrey.t.kirsher@intel.com, jesse.brandeburg@intel.com, john.ronciak@intel.com, linux-kernel@vger.kernel.org, linux-pci@vger.kernel.org, matthew.vick@intel.com, mitch.a.williams@intel.com, netdev@vger.kernel.org, shannon.nelson@intel.com
Cc: Lan Tianyu <tianyu.lan@intel.com>
Date: Thu, 22 Oct 2015 00:37:44 +0800
Message-Id: <1445445464-5056-13-git-send-email-tianyu.lan@intel.com>
In-Reply-To: <1445445464-5056-1-git-send-email-tianyu.lan@intel.com>
References: <1445445464-5056-1-git-send-email-tianyu.lan@intel.com>
Subject: [Qemu-devel] [RFC Patch 12/12] IXGBEVF: Track dma dirty pages

Migration relies on tracking dirty pages in order to migrate memory. Hardware cannot mark a page dirty automatically after a DMA write, yet the VF's descriptor rings and data buffers are modified by hardware while receiving and transmitting data. To track such memory manually, perform dummy writes (read a byte and write the same value back) while processing received and transmitted data.
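As an illustration (not part of the patch itself), the dummy-write trick described above reduces to a one-byte read followed by a store of the same value to each page the device has written. A minimal standalone sketch, with a hypothetical helper name:

#include <linux/types.h>

/* Sketch only: mark_dma_page_dirty() is a hypothetical helper, not part
 * of this patch or of any kernel API.  Reading one byte and storing the
 * same value back produces a CPU write to the page, so hypervisor
 * dirty-page logging (which observes CPU stores but not device DMA)
 * records the page as dirty.  The volatile qualifier keeps the compiler
 * from discarding the self-assignment as a no-op.
 */
static inline void mark_dma_page_dirty(void *addr)
{
	volatile u8 *p = addr;

	*p = *p;
}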
Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
---
 drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
index d22160f..ce7bd7a 100644
--- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
+++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
@@ -414,6 +414,9 @@ static bool ixgbevf_clean_tx_irq(struct ixgbevf_q_vector *q_vector,
 		if (!(eop_desc->wb.status & cpu_to_le32(IXGBE_TXD_STAT_DD)))
 			break;
 
+		/* write back status to mark page dirty */
+		eop_desc->wb.status = eop_desc->wb.status;
+
 		/* clear next_to_watch to prevent false hangs */
 		tx_buffer->next_to_watch = NULL;
 		tx_buffer->desc_num = 0;
@@ -946,15 +949,17 @@ static struct sk_buff *ixgbevf_fetch_rx_buffer(struct ixgbevf_ring *rx_ring,
 {
 	struct ixgbevf_rx_buffer *rx_buffer;
 	struct page *page;
+	u8 *page_addr;
 
 	rx_buffer = &rx_ring->rx_buffer_info[rx_ring->next_to_clean];
 	page = rx_buffer->page;
 	prefetchw(page);
 
-	if (likely(!skb)) {
-		void *page_addr = page_address(page) +
-				  rx_buffer->page_offset;
+	/* Mark page dirty */
+	page_addr = page_address(page) + rx_buffer->page_offset;
+	*page_addr = *page_addr;
 
+	if (likely(!skb)) {
 		/* prefetch first cache line of first page */
 		prefetch(page_addr);
 #if L1_CACHE_BYTES < 128
@@ -1032,6 +1037,9 @@ static int ixgbevf_clean_rx_irq(struct ixgbevf_q_vector *q_vector,
 		if (!ixgbevf_test_staterr(rx_desc, IXGBE_RXD_STAT_DD))
 			break;
 
+		/* Write back status to mark page dirty */
+		rx_desc->wb.upper.status_error = rx_desc->wb.upper.status_error;
+
 		/* This memory barrier is needed to keep us from reading
 		 * any other fields out of the rx_desc until we know the
 		 * RXD_STAT_DD bit is set
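One caveat worth noting about the pattern above (an observation, not something this patch changes): the descriptor fields are not volatile, so a compiler is in principle free to elide a plain self-assignment such as eop_desc->wb.status = eop_desc->wb.status. The kernel's READ_ONCE()/WRITE_ONCE() accessors from <linux/compiler.h> force both the load and the store to be emitted; a hypothetical variant of the RX status write-back:

	/* Hypothetical variant, not part of this patch: force the dummy
	 * load and store so the write-back cannot be optimized away.
	 */
	WRITE_ONCE(rx_desc->wb.upper.status_error,
		   READ_ONCE(rx_desc->wb.upper.status_error));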