
[v2] rds: rds-stress show all zeros after few minutes

Message ID 1459405762-29778-1-git-send-email-shamir.rabinovitch@oracle.com
State Changes Requested, archived
Delegated to: David Miller
Headers show

Commit Message

Shamir Rabinovitch March 31, 2016, 6:29 a.m. UTC
The issue can be seen on platforms that use a page size of 8K or
larger while the RDS fragment size is 4K. On those platforms a single
page is shared between two or more RDS fragments, and each fragment
has its own offset within the page. The RDS congestion map code needs
to take this offset into account; not doing so leads to reading a data
fragment as a congestion map fragment, corrupting the remote peer's
congestion map and hanging the RDS transmit path.

Signed-off-by: shamir rabinovitch <shamir.rabinovitch@oracle.com>

Reviewed-by: Wengang Wang <wen.gang.wang@oracle.com>
Reviewed-by: Ajaykumar Hotchandani <ajaykumar.hotchandani@oracle.com>
Acked-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
Tested-by: Anand Bibhuti <anand.bibhuti@oracle.com>
---
 net/rds/ib_recv.c |    2 +-
 net/rds/iw_recv.c |    2 +-
 net/rds/page.c    |    5 +++--
 3 files changed, 5 insertions(+), 4 deletions(-)

Comments

David Miller March 31, 2016, 8:02 p.m. UTC | #1
From: shamir rabinovitch <shamir.rabinovitch@oracle.com>
Date: Thu, 31 Mar 2016 02:29:22 -0400

> The issue can be seen on platforms that use a page size of 8K or
> larger while the RDS fragment size is 4K. On those platforms a single
> page is shared between two or more RDS fragments, and each fragment
> has its own offset within the page. The RDS congestion map code needs
> to take this offset into account; not doing so leads to reading a data
> fragment as a congestion map fragment, corrupting the remote peer's
> congestion map and hanging the RDS transmit path.
> 
> Signed-off-by: shamir rabinovitch <shamir.rabinovitch@oracle.com>
> 
> Reviewed-by: Wengang Wang <wen.gang.wang@oracle.com>
> Reviewed-by: Ajaykumar Hotchandani <ajaykumar.hotchandani@oracle.com>
> Acked-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
> Tested-by: Anand Bibhuti <anand.bibhuti@oracle.com>

This doesn't apply cleanly to my current tree, please respin.
Shamir Rabinovitch April 3, 2016, 12:29 p.m. UTC | #2
On Thu, Mar 31, 2016 at 04:02:46PM -0400, David Miller wrote:
> From: shamir rabinovitch <shamir.rabinovitch@oracle.com>
> Date: Thu, 31 Mar 2016 02:29:22 -0400
> 
> > The issue can be seen on platforms that use a page size of 8K or
> > larger while the RDS fragment size is 4K. On those platforms a single
> > page is shared between two or more RDS fragments, and each fragment
> > has its own offset within the page. The RDS congestion map code needs
> > to take this offset into account; not doing so leads to reading a data
> > fragment as a congestion map fragment, corrupting the remote peer's
> > congestion map and hanging the RDS transmit path.
> > 
> > Signed-off-by: shamir rabinovitch <shamir.rabinovitch@oracle.com>
> > 
> > Reviewed-by: Wengang Wang <wen.gang.wang@oracle.com>
> > Reviewed-by: Ajaykumar Hotchandani <ajaykumar.hotchandani@oracle.com>
> > Acked-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
> > Tested-by: Anand Bibhuti <anand.bibhuti@oracle.com>
> 
> This doesn't apply cleanly to my current tree, please respin.

Sorry for the trouble.

Re-sent the patch, rebased on net-next master.
Split the patch per comments from Santosh Shilimkar.

BR, Shamir
Santosh Shilimkar April 3, 2016, 5:11 p.m. UTC | #3
On 4/3/16 5:29 AM, Shamir Rabinovitch wrote:
> On Thu, Mar 31, 2016 at 04:02:46PM -0400, David Miller wrote:
>> From: shamir rabinovitch <shamir.rabinovitch@oracle.com>
>> Date: Thu, 31 Mar 2016 02:29:22 -0400
>>
>>> The issue can be seen on platforms that use a page size of 8K or
>>> larger while the RDS fragment size is 4K. On those platforms a single
>>> page is shared between two or more RDS fragments, and each fragment
>>> has its own offset within the page. The RDS congestion map code needs
>>> to take this offset into account; not doing so leads to reading a data
>>> fragment as a congestion map fragment, corrupting the remote peer's
>>> congestion map and hanging the RDS transmit path.
>>>
>>> Signed-off-by: shamir rabinovitch <shamir.rabinovitch@oracle.com>
>>>
>>> Reviewed-by: Wengang Wang <wen.gang.wang@oracle.com>
>>> Reviewed-by: Ajaykumar Hotchandani <ajaykumar.hotchandani@oracle.com>
>>> Acked-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
>>> Tested-by: Anand Bibhuti <anand.bibhuti@oracle.com>
>>
>> This doesn't apply cleanly to my current tree, please respin.
>
> Sorry for the trouble.
>
> Re-sent the patch based on net-next master.
> Broke the patch according to comments from Santosh Shilimkar.
>
Thanks Shamir. The updated version looks fine. You already
have my ack included.


Regards,
Santosh

Patch

diff --git a/net/rds/ib_recv.c b/net/rds/ib_recv.c
index 977fb86..abc8cc8 100644
--- a/net/rds/ib_recv.c
+++ b/net/rds/ib_recv.c
@@ -796,7 +796,7 @@  static void rds_ib_cong_recv(struct rds_connection *conn,
 
 		addr = kmap_atomic(sg_page(&frag->f_sg));
 
-		src = addr + frag_off;
+		src = addr + frag->f_sg.offset + frag_off;
 		dst = (void *)map->m_page_addrs[map_page] + map_off;
 		for (k = 0; k < to_copy; k += 8) {
 			/* Record ports that became uncongested, ie
diff --git a/net/rds/iw_recv.c b/net/rds/iw_recv.c
index a66d179..62a1738 100644
--- a/net/rds/iw_recv.c
+++ b/net/rds/iw_recv.c
@@ -585,7 +585,7 @@  static void rds_iw_cong_recv(struct rds_connection *conn,
 
 		addr = kmap_atomic(frag->f_page);
 
-		src = addr + frag_off;
+		src = addr + frag->f_offset + frag_off;
 		dst = (void *)map->m_page_addrs[map_page] + map_off;
 		for (k = 0; k < to_copy; k += 8) {
 			/* Record ports that became uncongested, ie
diff --git a/net/rds/page.c b/net/rds/page.c
index 5a14e6d..715cbaa 100644
--- a/net/rds/page.c
+++ b/net/rds/page.c
@@ -135,8 +135,9 @@  int rds_page_remainder_alloc(struct scatterlist *scat, unsigned long bytes,
 			if (rem->r_offset != 0)
 				rds_stats_inc(s_page_remainder_hit);
 
-			rem->r_offset += bytes;
-			if (rem->r_offset == PAGE_SIZE) {
+			/* some hw (e.g. sparc) requires aligned memory */
+			rem->r_offset += ALIGN(bytes, 8);
+			if (rem->r_offset >= PAGE_SIZE) {
 				__free_page(rem->r_page);
 				rem->r_page = NULL;
 			}