Message ID: f5fab678eedae83e79f25e4385bc1381ed554599.1663961449.git.tom@talpey.com
State: New
Series: Reduce SMBDirect RDMA SGE counts and sizes

2022-09-24 6:53 GMT+09:00, Tom Talpey <tom@talpey.com>:
> Reduce ksmbd smbdirect max segment send and receive size to 1364
> to match protocol norms. Larger buffers are unnecessary and add
> significant memory overhead.
>
> Signed-off-by: Tom Talpey <tom@talpey.com>
> ---
>  fs/ksmbd/transport_rdma.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/fs/ksmbd/transport_rdma.c b/fs/ksmbd/transport_rdma.c
> index 494b8e5af4b3..0315bca3d53b 100644
> --- a/fs/ksmbd/transport_rdma.c
> +++ b/fs/ksmbd/transport_rdma.c
> @@ -62,13 +62,13 @@ static int smb_direct_receive_credit_max = 255;
>  static int smb_direct_send_credit_target = 255;
>
>  /* The maximum single message size can be sent to remote peer */
> -static int smb_direct_max_send_size = 8192;
> +static int smb_direct_max_send_size = 1364;
>
>  /* The maximum fragmented upper-layer payload receive size supported */
>  static int smb_direct_max_fragmented_recv_size = 1024 * 1024;
>
>  /* The maximum single-message size which can be received */
> -static int smb_direct_max_receive_size = 8192;
> +static int smb_direct_max_receive_size = 1364;
Can I know what value the Windows server sets these to?

I can see the following settings for them in MS-SMBD.pdf:
Connection.MaxSendSize is set to 1364.
Connection.MaxReceiveSize is set to 8192.

Why does the specification describe setting it to 8192?

>
>  static int smb_direct_max_read_write_size = SMBD_DEFAULT_IOSIZE;
>
> --
> 2.34.1

On 9/24/2022 11:40 PM, Namjae Jeon wrote:
> 2022-09-24 6:53 GMT+09:00, Tom Talpey <tom@talpey.com>:
>> Reduce ksmbd smbdirect max segment send and receive size to 1364
>> to match protocol norms. Larger buffers are unnecessary and add
>> significant memory overhead.
>>
>> -static int smb_direct_max_send_size = 8192;
>> +static int smb_direct_max_send_size = 1364;
>> [...]
>> -static int smb_direct_max_receive_size = 8192;
>> +static int smb_direct_max_receive_size = 1364;
> Can I know what value the Windows server sets these to?
>
> I can see the following settings for them in MS-SMBD.pdf:
> Connection.MaxSendSize is set to 1364.
> Connection.MaxReceiveSize is set to 8192.

Glad you asked, it's an interesting situation IMO.

In MS-SMBD, the following are documented as behavior notes:

Client-side (active connect):
  Connection.MaxSendSize is set to 1364.
  Connection.MaxReceiveSize is set to 8192.

Server-side (passive listen):
  Connection.MaxSendSize is set to 1364.
  Connection.MaxReceiveSize is set to 8192.

However, these are only the initial values. During SMBD
negotiation, the two sides adjust downward to the other's
maximum. Therefore, Windows connecting to Windows will use
1364 on both sides.

In cifs and ksmbd, the choices were messier:

Client-side smbdirect.c:
  int smbd_max_send_size = 1364;
  int smbd_max_receive_size = 8192;

Server-side transport_rdma.c:
  static int smb_direct_max_send_size = 8192;
  static int smb_direct_max_receive_size = 8192;

Therefore, peers connecting to ksmbd would typically end up
negotiating 1364 for send and 8192 for receive.

There is almost no good reason to use larger buffers. Because
RDMA is highly efficient, and the smbdirect protocol trivially
fragments longer messages, there is no significant performance
penalty.

And, because few SMB3 messages require 8192 bytes over
smbdirect, it's a colossal waste of virtually contiguous kernel
memory to allocate 8192 bytes to every receive.

By setting all four to the practical reality of 1364, we get a
consistent and efficient default, and align Linux smbdirect
with Windows.

Tom.

> Why does the specification describe setting it to 8192?

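To make the downward adjustment concrete, here is a minimal sketch in C of the rule Tom describes: each side's effective send size ends up capped by the peer's advertised receive size. The type, field, and function names below are illustrative, not the actual cifs/ksmbd symbols.

/* Illustrative sketch only -- not the kernel implementation. */
#include <stdint.h>

struct smbd_params {
	uint32_t max_send_size;     /* largest single message we will send */
	uint32_t max_receive_size;  /* largest single message we will accept */
};

static uint32_t min_u32(uint32_t a, uint32_t b)
{
	return a < b ? a : b;
}

/*
 * After the SMBDirect negotiate exchange, a peer never sends a message
 * larger than what the other side said it can receive.  With Windows
 * advertising MaxSendSize = 1364, an 8192-byte ksmbd receive buffer is
 * therefore never more than about 17% filled.
 */
static void smbd_adjust_send_size(struct smbd_params *local,
				  uint32_t peer_max_receive_size)
{
	local->max_send_size = min_u32(local->max_send_size,
				       peer_max_receive_size);
}

With the old defaults, ksmbd advertised max_receive_size = 8192, but a Windows peer (MaxSendSize = 1364) still capped its own sends at 1364, so those 8192-byte receive buffers were mostly unused: exactly the waste this patch removes.
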
2022-09-26 0:41 GMT+09:00, Tom Talpey <tom@talpey.com>:
> There is almost no good reason to use larger buffers. Because
> RDMA is highly efficient, and the smbdirect protocol trivially
> fragments longer messages, there is no significant performance
> penalty.
>
> By setting all four to the practical reality of 1364, we get a
> consistent and efficient default, and align Linux smbdirect
> with Windows.
Thanks for your detailed explanation! I agree to set both to 1364 by
default. Is there any use case for increasing them? I wonder whether
users need a configuration parameter to adjust them.

On 9/25/2022 9:13 PM, Namjae Jeon wrote:
> Thanks for your detailed explanation! I agree to set both to 1364 by
> default. Is there any use case for increasing them? I wonder whether
> users need a configuration parameter to adjust them.

In my opinion, probably not. I give some reasons above why large
fragments aren't always helpful. It's the same number of packets either
way; the only question is whether SMBDirect or Ethernet does the
fragmentation, and who manages the buffers.

There might conceivably be a case for *smaller*, for example on IB when
it's cranked down to the minimum (256B) MTU. But it will work with this
default.

I'd say let's not over-engineer it until we address the many other
issues in this code. Merging the two smbdirect implementations is much
more important than adding tweaky little knobs to both. MHO.

Tom.

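As a rough illustration of the fragmentation Tom refers to, the sketch below reuses the illustrative smbd_params and min_u32 from the earlier sketch; post_send() is a hypothetical stand-in for building one SMBDirect data-transfer message and posting the RDMA send, and the data-transfer header size is ignored for brevity.

/* Illustrative sketch only -- not the ksmbd code. */
static void post_send(struct smbd_params *conn, const uint8_t *data,
		      uint32_t len, uint32_t remaining)
{
	/* hypothetical: build one data-transfer message and post the send */
	(void)conn; (void)data; (void)len; (void)remaining;
}

static void smbd_send_fragmented(struct smbd_params *conn,
				 const uint8_t *buf, uint32_t len)
{
	uint32_t off = 0;

	while (off < len) {
		uint32_t chunk = min_u32(len - off, conn->max_send_size);

		/* a non-zero remaining-data-length marks every fragment
		 * except the last; the receiver reassembles up to its
		 * (much larger) max_fragmented_recv_size */
		post_send(conn, buf + off, chunk, len - off - chunk);
		off += chunk;
	}
}

Whether the split happens here or at the NIC's wire MTU, the same number of packets goes on the wire, which is Tom's point.
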
2022-09-24 6:53 GMT+09:00, Tom Talpey <tom@talpey.com>:
> Reduce ksmbd smbdirect max segment send and receive size to 1364
> to match protocol norms. Larger buffers are unnecessary and add
> significant memory overhead.
>
> Signed-off-by: Tom Talpey <tom@talpey.com>
Acked-by: Namjae Jeon <linkinjeon@kernel.org>

Thanks!

> -----Original Message-----
> From: Tom Talpey <tom@talpey.com>
> Sent: Monday, 26 September 2022 19:25
> To: Namjae Jeon <linkinjeon@kernel.org>
> Cc: smfrench@gmail.com; linux-cifs@vger.kernel.org;
> senozhatsky@chromium.org; Bernard Metzler <BMT@zurich.ibm.com>;
> longli@microsoft.com; dhowells@redhat.com
> Subject: [EXTERNAL] Re: [PATCH v2 4/6] Reduce server smbdirect max
> send/receive segment sizes
>
> In my opinion, probably not. I give some reasons above why large
> fragments aren't always helpful. It's the same number of packets either
> way; the only question is whether SMBDirect or Ethernet does the
> fragmentation, and who manages the buffers.

One simple reason for larger buffers I am aware of is running
efficiently on software-only RDMA providers like siw or rxe. For siw,
I'd guess we cut performance to less than half with 1364-byte buffers.
But maybe that is no concern for the setups you have in mind.

Best,
Bernard.

> There might conceivably be a case for *smaller*, for example on IB when
> it's cranked down to the minimum (256B) MTU. But it will work with this
> default.
>
> I'd say let's not over-engineer it until we address the many other
> issues in this code. Merging the two smbdirect implementations is much
> more important than adding tweaky little knobs to both. MHO.
>
> Tom.

On 9/27/2022 10:59 AM, Bernard Metzler wrote:
> One simple reason for larger buffers I am aware of is running
> efficiently on software-only RDMA providers like siw or rxe. For siw,
> I'd guess we cut performance to less than half with 1364-byte buffers.
> But maybe that is no concern for the setups you have in mind.

I'm skeptical of "less than half" the performance, and wonder why
that might be, but...

Again, it's rather uncommon for these inline messages to be large. Bulk
data (reads and writes >= 4KB) is always carried by RDMA and does not
appear in these datagrams at all.

The code currently has a single system-wide default, which is not
tunable per connection, and tuning it would require both sides of the
connection to do so. It's not reasonable to depend on Windows, cifs.ko
and ksmbd.ko to all somehow magically do the same thing. So the best
default is the most conservative, least wasteful setting.

Tom.

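For a back-of-envelope sense of the waste (assuming, for illustration only, one receive buffer per receive credit, with smb_direct_receive_credit_max = 255 as in the file above): 255 x 8192 bytes is roughly 2.0 MiB of receive buffers per connection, versus roughly 340 KiB at 255 x 1364, even though a Windows peer never puts more than 1364 bytes into any of them.
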
> -----Original Message-----
> From: Tom Talpey <tom@talpey.com>
> Sent: Wednesday, 28 September 2022 16:54
> To: Bernard Metzler <BMT@zurich.ibm.com>; Namjae Jeon
> <linkinjeon@kernel.org>
> Cc: smfrench@gmail.com; linux-cifs@vger.kernel.org;
> senozhatsky@chromium.org; longli@microsoft.com; dhowells@redhat.com
> Subject: [EXTERNAL] Re: [PATCH v2 4/6] Reduce server smbdirect max
> send/receive segment sizes
>
> I'm skeptical of "less than half" the performance, and wonder why
> that might be, but...
>
> Again, it's rather uncommon for these inline messages to be large. Bulk
> data (reads and writes >= 4KB) is always carried by RDMA and does not
> appear in these datagrams at all.
>
> The code currently has a single system-wide default, which is not
> tunable per connection, and tuning it would require both sides of the
> connection to do so. It's not reasonable to depend on Windows, cifs.ko
> and ksmbd.ko to all somehow magically do the same thing. So the best
> default is the most conservative, least wasteful setting.

Oh, sorry, my bad. I was under the impression we were talking about bulk
data, since 8k buffers were the default. So I completely agree with your
point.

Best,
Bernard.

diff --git a/fs/ksmbd/transport_rdma.c b/fs/ksmbd/transport_rdma.c
index 494b8e5af4b3..0315bca3d53b 100644
--- a/fs/ksmbd/transport_rdma.c
+++ b/fs/ksmbd/transport_rdma.c
@@ -62,13 +62,13 @@ static int smb_direct_receive_credit_max = 255;
 static int smb_direct_send_credit_target = 255;
 
 /* The maximum single message size can be sent to remote peer */
-static int smb_direct_max_send_size = 8192;
+static int smb_direct_max_send_size = 1364;
 
 /* The maximum fragmented upper-layer payload receive size supported */
 static int smb_direct_max_fragmented_recv_size = 1024 * 1024;
 
 /* The maximum single-message size which can be received */
-static int smb_direct_max_receive_size = 8192;
+static int smb_direct_max_receive_size = 1364;
 
 static int smb_direct_max_read_write_size = SMBD_DEFAULT_IOSIZE;

Reduce ksmbd smbdirect max segment send and receive size to 1364
to match protocol norms. Larger buffers are unnecessary and add
significant memory overhead.

Signed-off-by: Tom Talpey <tom@talpey.com>
---
 fs/ksmbd/transport_rdma.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)