
[net-next-2.6] pcnet32: Remove pointless memory barriers

Message ID 1240945659.8819.9.camel@Maple
State Accepted, archived
Delegated to: David Miller

Commit Message

John Dykstra April 28, 2009, 7:07 p.m. UTC
These two memory barriers in performance-critical paths are not needed
on x86.  Even if some other architecture does buffer PCI I/O space
writes, the existing memory-mapped I/O barriers are unlikely to be what
is needed.

Signed-off-by: John Dykstra <john.dykstra1@gmail.com>
---
 drivers/net/pcnet32.c |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)
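
For context, the two families of device access at issue look roughly like
the sketch below.  This is illustrative only -- the function names and
register offsets are made up, not taken from drivers/net/pcnet32.c.
pcnet32 uses the first style (PCI I/O "port" space) for its CSR accesses;
mmiowb() was introduced for the second style (memory-mapped I/O), to keep
posted writes ordered against a subsequent spin_unlock() on machines that
can reorder them.

#include <linux/io.h>

/*
 * Style 1: PCI I/O ("port") space.  On x86 the in/out instructions are
 * non-posted and strongly ordered, so no barrier is paired with them.
 * (Offsets below are hypothetical.)
 */
static void sketch_pio_write_csr(unsigned long ioaddr, u16 index, u16 val)
{
	outw(index, ioaddr + 0x12);	/* hypothetical address-port offset */
	outw(val, ioaddr + 0x10);	/* hypothetical data-port offset */
}

/*
 * Style 2: memory-mapped I/O through an ioremap()ed BAR.  These writes
 * are posted and, on some architectures, may be reordered relative to
 * the store that releases a spinlock; mmiowb() exists to close that
 * window.
 */
static void sketch_mmio_write_reg(void __iomem *base, u32 val)
{
	writel(val, base + 0x10);	/* hypothetical register offset */
	mmiowb();			/* order before the spin_unlock() that follows */
}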

Comments

David Miller April 29, 2009, 5:16 a.m. UTC | #1
From: John Dykstra <john.dykstra1@gmail.com>
Date: Tue, 28 Apr 2009 19:07:39 +0000

> These two memory barriers in performance-critical paths are not needed
> on x86.  Even if some other architecture does buffer PCI I/O space
> writes, the existing memory-mapped I/O barriers are unlikely to be what
> is needed.
> 
> Signed-off-by: John Dykstra <john.dykstra1@gmail.com>

Any driver where these things are present usually has them
there for a reason.  Usually it's because the SGI guys really
did run into real problems without them on their huge
machines which can reorder PCI MMIO wrt. real memory operations.

I don't feel good applying this at all, given that I see no
evidence that there has been any investigation into how these
barriers got there in the first place.
John Dykstra April 29, 2009, 1:48 p.m. UTC | #2
On Tue, 2009-04-28 at 22:16 -0700, David Miller wrote:
> Any driver where these things are present usually has them
> there for a reason.  Usually it's because the SGI guys really
> did run into real problems without them on their huge
> machines which can reorder PCI MMIO wrt. real memory operations.

Is that relevant when the driver doesn't do any MMIO operations?  All
pcnet32 register accesses are via PCI I/O space, as implied by the
commit description.

Descriptors and buffers are, of course, in consistent physical memory.
This patch doesn't touch the barriers associated with those structures.
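
For reference, lp->a.write_csr() resolves to one of the driver's port-I/O
accessors.  Paraphrased (not quoted verbatim from the driver), the word-I/O
variant is essentially:

static void pcnet32_wio_write_csr(unsigned long addr, int index, u16 val)
{
	outw(index, addr + PCNET32_WIO_RAP);	/* select the CSR */
	outw(val, addr + PCNET32_WIO_RDATA);	/* write its value */
}

Both accesses are in/out port operations, so there is no posted MMIO write
for mmiowb() to order.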

> I don't feel good applying this at all, given that I see no
> evidence that there has been any investigation into how these
> barriers got there in the first place.

Before sending out this patch, I determined that the MMIO barriers were
added as part of NAPI support.  I CCed one of the authors of that patch,
so he could NAK if appropriate.  I've added the other author on this
reply.

I knew there'd be questions about whether removing these two barriers
was safe.  Submitting the patch seemed the best way to understand why
they were needed, if they are.

  --  John

Don Fry April 29, 2009, 5:10 p.m. UTC | #3
My original NAPI implementation did not have the mmiowb(), but it was
added because of some comments from Francois Romieu (2006-06-29/30).  I
do not know whether they are required.  My feeling was/is that they are
not, as the writes are flushed by the unlocking primitives.  However,
that is not based on knowledge, just "feeling".
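
As I understand it, the pattern mmiowb() is meant for looks roughly like
the sketch below; the lock, base pointer, and register offset here are
placeholders, not pcnet32 code:

#include <linux/io.h>
#include <linux/spinlock.h>

static void sketch_locked_mmio_write(spinlock_t *lock, void __iomem *base, u32 val)
{
	unsigned long flags;

	spin_lock_irqsave(lock, flags);
	writel(val, base + 0x10);	/* posted MMIO write, hypothetical offset */
	mmiowb();			/* keep the write ordered before the unlock, so
					 * the next CPU to take the lock cannot have its
					 * own MMIO reach the device first */
	spin_unlock_irqrestore(lock, flags);
}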

How would this be tested on all architectures to find out?

Don

John Dykstra April 29, 2009, 7:07 p.m. UTC | #4
On Wed, 2009-04-29 at 10:10 -0700, Don Fry wrote:
> How would this be tested on all architectures to find out?

Testing could never prove that the barriers _aren't_ needed, because
you'd never be sure that you'd hit the right conditions. 

  --  John

Francois Romieu April 29, 2009, 10:29 p.m. UTC | #5
Don Fry <pcnet32@verizon.net> :
> My original NAPI implementation did not have the mmiowb(), but it was
> added because of some comments from Francois Romieu (2006-06-29/30).  I
> do not know whether they are required.  My feeling was/is that they are
> not, as the writes are flushed by the unlocking primitives.  However,
> that is not based on knowledge, just "feeling".

(not sure if anyone should care about the loss of performance on x86
for a memory barrier near a couple of PCI I/O accesses but...)

I take back my comment from 2006: I had not noticed that the driver only
makes PCI I/O space accesses.

Patch

diff --git a/drivers/net/pcnet32.c b/drivers/net/pcnet32.c
index e5e8c59..1c35e1d 100644
--- a/drivers/net/pcnet32.c
+++ b/drivers/net/pcnet32.c
@@ -1405,7 +1405,7 @@  static int pcnet32_poll(struct napi_struct *napi, int budget)
 
 		/* Set interrupt enable. */
 		lp->a.write_csr(ioaddr, CSR0, CSR0_INTEN);
-		mmiowb();
+
 		spin_unlock_irqrestore(&lp->lock, flags);
 	}
 	return work_done;
@@ -2597,7 +2597,7 @@  pcnet32_interrupt(int irq, void *dev_id)
 			val = lp->a.read_csr(ioaddr, CSR3);
 			val |= 0x5f00;
 			lp->a.write_csr(ioaddr, CSR3, val);
-			mmiowb();
+
 			__napi_schedule(&lp->napi);
 			break;
 		}