From patchwork Fri Jan 18 10:06:11 2013
X-Patchwork-Submitter: Sebastian Andrzej Siewior
X-Patchwork-Id: 213530
X-Patchwork-Delegate: davem@davemloft.net
From: Sebastian Andrzej Siewior
To: netdev@vger.kernel.org
Cc: "David S. Miller", Thomas Gleixner, Rakesh Ranjan, Bruno Bittner,
	Holger Dengler, Jan Altenberg, Sebastian Andrzej Siewior
Subject: [PATCH 3/4] net: ethernet: ti cpsw: Split up DMA descriptor pool
Date: Fri, 18 Jan 2013 11:06:11 +0100
Message-Id: <1358503572-5057-3-git-send-email-sebastian@breakpoint.cc>
In-Reply-To: <1358503572-5057-1-git-send-email-sebastian@breakpoint.cc>
References: <1358503572-5057-1-git-send-email-sebastian@breakpoint.cc>

From: Thomas Gleixner

Split the buffer pool into an RX and a TX block so that neither channel
can influence the other. It is possible to fill up the pool by sending a
lot of large packets on a slow half-duplex link.

Cc: Rakesh Ranjan
Cc: Bruno Bittner
Signed-off-by: Thomas Gleixner
[dengler: patch description]
Signed-off-by: Holger Dengler
[jan: forward ported]
Signed-off-by: Jan Altenberg
Signed-off-by: Sebastian Andrzej Siewior
---
 drivers/net/ethernet/ti/davinci_cpdma.c |   35 +++++++++++++++++++++++++++---
 1 files changed, 31 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/ti/davinci_cpdma.c b/drivers/net/ethernet/ti/davinci_cpdma.c
index 709c437..70325cd 100644
--- a/drivers/net/ethernet/ti/davinci_cpdma.c
+++ b/drivers/net/ethernet/ti/davinci_cpdma.c
@@ -217,16 +217,41 @@ desc_from_phys(struct cpdma_desc_pool *pool, dma_addr_t dma)
 }
 
 static struct cpdma_desc __iomem *
-cpdma_desc_alloc(struct cpdma_desc_pool *pool, int num_desc)
+cpdma_desc_alloc(struct cpdma_desc_pool *pool, int num_desc, bool is_rx)
 {
 	unsigned long flags;
 	int index;
 	struct cpdma_desc __iomem *desc = NULL;
+	static int last_index = 4096;
 
 	spin_lock_irqsave(&pool->lock, flags);
 
-	index = bitmap_find_next_zero_area(pool->bitmap, pool->num_desc, 0,
-					   num_desc, 0);
+	/*
+	 * The pool is split into two areas rx and tx. So we make sure
+	 * that we can't run out of pool buffers for RX when TX has
+	 * tons of stuff queued.
+	 */
+	if (is_rx) {
+		index = bitmap_find_next_zero_area(pool->bitmap,
+				pool->num_desc/2, 0, num_desc, 0);
+	} else {
+		if (last_index >= pool->num_desc)
+			last_index = pool->num_desc / 2;
+
+		index = bitmap_find_next_zero_area(pool->bitmap,
+				pool->num_desc, last_index, num_desc, 0);
+
+		if (!(index < pool->num_desc)) {
+			index = bitmap_find_next_zero_area(pool->bitmap,
+					pool->num_desc, pool->num_desc/2, num_desc, 0);
+		}
+
+		if (index < pool->num_desc)
+			last_index = index + 1;
+		else
+			last_index = pool->num_desc / 2;
+	}
+
 	if (index < pool->num_desc) {
 		bitmap_set(pool->bitmap, index, num_desc);
 		desc = pool->iomap + pool->desc_size * index;
@@ -660,6 +685,7 @@ int cpdma_chan_submit(struct cpdma_chan *chan, void *token, void *data,
 	unsigned long flags;
 	u32 mode;
 	int ret = 0;
+	bool is_rx;
 
 	spin_lock_irqsave(&chan->lock, flags);
 
@@ -668,7 +694,8 @@ int cpdma_chan_submit(struct cpdma_chan *chan, void *token, void *data,
 		goto unlock_ret;
 	}
 
-	desc = cpdma_desc_alloc(ctlr->pool, 1);
+	is_rx = (chan->rxfree != 0);
+	desc = cpdma_desc_alloc(ctlr->pool, 1, is_rx);
 	if (!desc) {
 		chan->stats.desc_alloc_fail++;
 		ret = -ENOMEM;
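
The allocation policy of the patch can be tried out in isolation with a small
user-space sketch. This is not the driver code: the kernel's
bitmap_find_next_zero_area() is replaced by a plain array scan, and the pool
size and the names demo_pool, find_free and desc_alloc are made up for the
illustration. It only demonstrates the idea: RX allocations are confined to
the lower half of the pool, TX allocations rotate through the upper half, so
a TX backlog cannot exhaust the descriptors RX depends on.

/*
 * Standalone sketch of the split-pool allocation policy from the patch
 * above.  User-space only; sizes and names are invented for the example.
 */
#include <stdbool.h>
#include <stdio.h>

#define NUM_DESC	64	/* hypothetical pool size */

struct demo_pool {
	bool used[NUM_DESC];	/* stands in for pool->bitmap */
	int last_index;		/* rotating TX start (a static in the patch) */
};

/* Find 'count' consecutive free slots in [start, end); return index or -1. */
static int find_free(struct demo_pool *pool, int start, int end, int count)
{
	for (int i = start; i + count <= end; i++) {
		int j;

		for (j = 0; j < count; j++)
			if (pool->used[i + j])
				break;
		if (j == count)
			return i;
	}
	return -1;
}

/*
 * RX is limited to the lower half of the pool; TX prefers the upper half,
 * continuing after its last allocation and falling back to a fresh scan of
 * the upper half before giving up.
 */
static int desc_alloc(struct demo_pool *pool, int count, bool is_rx)
{
	int index;

	if (is_rx) {
		index = find_free(pool, 0, NUM_DESC / 2, count);
	} else {
		if (pool->last_index >= NUM_DESC)
			pool->last_index = NUM_DESC / 2;

		index = find_free(pool, pool->last_index, NUM_DESC, count);
		if (index < 0)
			index = find_free(pool, NUM_DESC / 2, NUM_DESC, count);

		pool->last_index = (index >= 0) ? index + count : NUM_DESC / 2;
	}

	if (index >= 0)
		for (int i = 0; i < count; i++)
			pool->used[index + i] = true;

	return index;
}

int main(void)
{
	struct demo_pool pool = { .last_index = NUM_DESC };
	int n_tx = 0;

	/* Queue TX descriptors until the TX half of the pool is exhausted. */
	while (desc_alloc(&pool, 1, false) >= 0)
		n_tx++;

	/* RX still succeeds because its half of the pool is untouched. */
	printf("TX allocations before exhaustion: %d\n", n_tx);
	printf("RX allocation after TX exhaustion: index %d\n",
	       desc_alloc(&pool, 1, true));
	return 0;
}

With NUM_DESC set to 64 the sketch performs 32 TX allocations before the
upper half is full, and the following RX allocation still succeeds at
index 0, which is the behaviour the split is meant to guarantee.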