
[v1,1/1] tmfifo: Fix a memory barrier issue

Message ID 7c45c0a237c02b8ba34eca49f29845a0341ecff5.1620233587.git.limings@nvidia.com
State New
Series UBUNTU: SAUCE: tmfifo: Fix a memory barrier issue

Commit Message

Liming Sun May 5, 2021, 4:58 p.m. UTC
From: Liming Sun <lsun@mellanox.com>

The virtio framework uses wmb() when updating avail->idx. It
guarantees the write order, but not necessarily the load order
for the code accessing the memory. This commit adds a load barrier
after reading the avail->idx to make sure all the data in the
descriptor is visible. It also adds a barrier when returning the
packet to the virtio framework to make sure reads/writes are
visible to the virtio code.
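
For reference, a minimal sketch of the read-side pairing described above,
using the in-kernel virtio_rmb()/virtio_wmb() helpers (illustrative only,
not the driver's code: fetch_next_head() is a made-up helper and the
producer pseudocode in the comment is simplified):

#include <linux/virtio_config.h>	/* virtio16_to_cpu() */
#include <linux/virtio_ring.h>		/* struct vring, virtio_rmb() */

/*
 * Producer side (virtio core), simplified:
 *
 *	vr->avail->ring[avail % vr->num] = head;
 *	virtio_wmb(weak_barriers);	<- orders the ring[] store before idx
 *	vr->avail->idx = avail + 1;
 *
 * The wmb() only orders the producer's stores; the consumer still needs a
 * matching read barrier, or its CPU may load the ring entry (and the
 * descriptor it points to) before it observes the new 'avail->idx'.
 */
static int fetch_next_head(struct virtio_device *vdev, struct vring *vr,
			   u16 next_avail)
{
	u16 head;

	if (next_avail == virtio16_to_cpu(vdev, vr->avail->idx))
		return -EAGAIN;		/* nothing new published yet */

	/* Pairs with the producer's virtio_wmb() above. */
	virtio_rmb(false);

	head = virtio16_to_cpu(vdev, vr->avail->ring[next_avail % vr->num]);
	return head < vr->num ? head : -EINVAL;
}

The first hunk below adds the matching virtio_rmb(false) at exactly this
point, right after the 'avail->idx' check.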

RM #2269638

Change-Id: I0759e75a8863b3cee9eb0b3794cd87c113019f72
---
 drivers/platform/mellanox/mlxbf-tmfifo.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

Comments

Liming Sun May 5, 2021, 7:03 p.m. UTC | #1
BugLink: https://bugs.launchpad.net/bugs/1927262

SRU Justification:

[Impact]

* The virtio framework uses wmb() when updating avail->idx. It guarantees
  the write order, but not necessarily the load order for the code accessing
  the memory. This could potentially cause traffic to get stuck, which has
  been observed in the field.

[Fix]
* This commit adds a load barrier after reading the avail->idx to make sure
  all the data in the descriptor is visible. It also adds a barrier when
  returning the packet to the virtio framework to make sure reads/writes are
  visible to the virtio code (see the completion-side sketch below).
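
A corresponding sketch of the completion side, using the same virtio
helpers (again illustrative; release_used_desc() is a made-up name that
mirrors the pattern the patch touches):

#include <linux/virtio_config.h>	/* virtio16_to_cpu(), cpu_to_virtio16/32() */
#include <linux/virtio_ring.h>		/* struct vring, virtio_mb() */

static void release_used_desc(struct virtio_device *vdev, struct vring *vr,
			      u16 head, u32 len)
{
	u16 idx = virtio16_to_cpu(vdev, vr->used->idx);
	struct vring_used_elem *elem = &vr->used->ring[idx % vr->num];

	elem->id = cpu_to_virtio32(vdev, head);
	elem->len = cpu_to_virtio32(vdev, len);

	/*
	 * Full barrier: the used-ring entry above must be visible before
	 * the index update below, and before the virtio core (woken up
	 * via vring_interrupt()) looks at either of them.
	 */
	virtio_mb(false);
	vr->used->idx = cpu_to_virtio16(vdev, idx + 1);
}

In the patch, the second hunk switches the plain mb() at this spot to
virtio_mb(false), and the third hunk adds the same full barrier before the
completed packet is handed back through vring_interrupt().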

[Test Case]
* Normal testing only; this change doesn't affect any functionality.

[Regression Potential]

* This version of the driver has been tested by QA/verification for a while,
  so there is no known regression at the moment.

Patch

diff --git a/drivers/platform/mellanox/mlxbf-tmfifo.c b/drivers/platform/mellanox/mlxbf-tmfifo.c
index 5739a966..92bda873 100644
--- a/drivers/platform/mellanox/mlxbf-tmfifo.c
+++ b/drivers/platform/mellanox/mlxbf-tmfifo.c
@@ -294,6 +294,9 @@  static irqreturn_t mlxbf_tmfifo_irq_handler(int irq, void *arg)
 	if (vring->next_avail == virtio16_to_cpu(vdev, vr->avail->idx))
 		return NULL;
 
+	/* Make sure 'avail->idx' is visible already. */
+	virtio_rmb(false);
+
 	idx = vring->next_avail % vr->num;
 	head = virtio16_to_cpu(vdev, vr->avail->ring[idx]);
 	if (WARN_ON(head >= vr->num))
@@ -322,7 +325,7 @@  static void mlxbf_tmfifo_release_desc(struct mlxbf_tmfifo_vring *vring,
 	 * done or not. Add a memory barrier here to make sure the update above
 	 * completes before updating the idx.
 	 */
-	mb();
+	virtio_mb(false);
 	vr->used->idx = cpu_to_virtio16(vdev, vr_idx + 1);
 }
 
@@ -730,6 +733,12 @@  static bool mlxbf_tmfifo_rxtx_one_desc(struct mlxbf_tmfifo_vring *vring,
 		desc = NULL;
 		fifo->vring[is_rx] = NULL;
 
+		/*
+		 * Make sure the loads/stores are in order before
+		 * returning the packet to virtio.
+		 */
+		virtio_mb(false);
+
 		/* Notify upper layer that packet is done. */
 		spin_lock_irqsave(&fifo->spin_lock[is_rx], flags);
 		vring_interrupt(0, vring->vq);