
[v2] Add test for data integrity over NFS

Message ID 20241129133421.24349-1-mdoucha@suse.cz
State Superseded

Commit Message

Martin Doucha Nov. 29, 2024, 1:34 p.m. UTC
Add NFS test which checks data integrity of random writes into a file,
with both buffered and direct I/O.

Signed-off-by: Martin Doucha <mdoucha@suse.cz>
---

Changes since v1: Added TST_TIMEOUT

The lower loop count is necessary because NFS uses a very large block size,
up to 256 KB on x86_64. The new tests take ~50 minutes in total on my laptop.
With the default loop count, the TCP tests would all time out.

The subtest timeout is fully determined by NFS block size and hardcoded
command line arguments. For TCP tests, it's exactly 8 minutes 32 seconds,
so the whole test script will take no more than 34 minutes 8 seconds.
The TST_TIMEOUT value is rounded up to 40 minutes to add some margin
for setup and cleanup.

 runtest/net.nfs                           | 11 +++++++
 testcases/network/nfs/nfs_stress/nfs10.sh | 37 +++++++++++++++++++++++
 2 files changed, 48 insertions(+)
 create mode 100755 testcases/network/nfs/nfs_stress/nfs10.sh

Comments

Petr Vorel Dec. 2, 2024, 2:36 p.m. UTC | #1
Hi Martin,

...
> +TST_CNT=4
> +TST_TESTFUNC="do_test"
> +TST_DEVICE_SIZE=1024
> +TST_TIMEOUT=2400

It looks like the block size (the f_bsize member of struct statvfs) differs
depending on the amount of RAM. Testing on an s390x VM:

* 1 GB => block size: 131072
* 2 GB => block size: 262144
* 4 GB => block size: 524288

This can cause timeouts depending on the machine setup.
How about changing it to TST_TIMEOUT=-1? I don't like an unlimited timeout,
but in this case I would keep it. The timeout is handled by fsplough.c anyway.
If you agree, I can change it during merge.

Reviewed-by: Petr Vorel <pvorel@suse.cz>

Kind regards,
Petr

> +
> +do_test1()
> +{
> +	tst_res TINFO "Testing buffered write, buffered read"
> +	EXPECT_PASS fsplough -c 512 -d "$PWD"
> +}
> +
> +do_test2()
> +{
> +	tst_res TINFO "Testing buffered write, direct read"
> +	EXPECT_PASS fsplough -c 512 -R -d "$PWD"
> +}
> +
> +do_test3()
> +{
> +	tst_res TINFO "Testing direct write, buffered read"
> +	EXPECT_PASS fsplough -c 512 -W -d "$PWD"
> +}
> +
> +do_test4()
> +{
> +	tst_res TINFO "Testing direct write, direct read"
> +	EXPECT_PASS fsplough -c 512 -RW -d "$PWD"
> +}
> +
> +. nfs_lib.sh
> +tst_run
Cyril Hrubis Dec. 6, 2024, 11:43 a.m. UTC | #2
Hi!
> Add NFS test which checks data integrity of random writes into a file,
> with both buffered and direct I/O.
> 
> Signed-off-by: Martin Doucha <mdoucha@suse.cz>
> ---
> 
> Changes since v1: Added TST_TIMEOUT
> 
> The lower loop count is necessary because NFS has very large block size,
> up to 256KB on x86_64. The new tests take ~50 minutes to complete in total
> on my laptop. With the default loop count, the TCP tests would all time out.
> 
> The subtest timeout is fully determined by NFS block size and hardcoded
> command line arguments. For TCP tests, it's exactly 8 minutes 32 seconds
> so the whole test script will take no more than 34 minutes 8 seconds.
> The TST_TIMEOUT value is rounded up to 40 minutes to add some margin
> for setup and cleanup.

This is only one particular setup though. I bet it would be a few hours
on hardware with slower I/O.

What about changing fsplough so that it loops over tst_remaining_runtime()
and passing it a runtime instead of a number of iterations?

Patch

diff --git a/runtest/net.nfs b/runtest/net.nfs
index 7f84457bc..fef993da8 100644
--- a/runtest/net.nfs
+++ b/runtest/net.nfs
@@ -94,6 +94,17 @@  nfs09_v40_ip6t nfs09.sh -6 -v 4 -t tcp
 nfs09_v41_ip6t nfs09.sh -6 -v 4.1 -t tcp
 nfs09_v42_ip6t nfs09.sh -6 -v 4.2 -t tcp
 
+nfs10_v30_ip4u nfs10.sh -v 3 -t udp
+nfs10_v30_ip4t nfs10.sh -v 3 -t tcp
+nfs10_v40_ip4t nfs10.sh -v 4 -t tcp
+nfs10_v41_ip4t nfs10.sh -v 4.1 -t tcp
+nfs10_v42_ip4t nfs10.sh -v 4.2 -t tcp
+nfs10_v30_ip6u nfs10.sh -6 -v 3 -t udp
+nfs10_v30_ip6t nfs10.sh -6 -v 3 -t tcp
+nfs10_v40_ip6t nfs10.sh -6 -v 4 -t tcp
+nfs10_v41_ip6t nfs10.sh -6 -v 4.1 -t tcp
+nfs10_v42_ip6t nfs10.sh -6 -v 4.2 -t tcp
+
 nfslock01_v30_ip4u nfslock01.sh -v 3 -t udp
 nfslock01_v30_ip4t nfslock01.sh -v 3 -t tcp
 nfslock01_v40_ip4t nfslock01.sh -v 4 -t tcp
diff --git a/testcases/network/nfs/nfs_stress/nfs10.sh b/testcases/network/nfs/nfs_stress/nfs10.sh
new file mode 100755
index 000000000..79a5ddb17
--- /dev/null
+++ b/testcases/network/nfs/nfs_stress/nfs10.sh
@@ -0,0 +1,37 @@ 
+#!/bin/sh
+# SPDX-License-Identifier: GPL-2.0-or-later
+# Copyright (C) 2024 SUSE LLC <mdoucha@suse.cz>
+#
+# DESCRIPTION: Verify data integrity over NFS, with and without O_DIRECT
+
+TST_CNT=4
+TST_TESTFUNC="do_test"
+TST_DEVICE_SIZE=1024
+TST_TIMEOUT=2400
+
+do_test1()
+{
+	tst_res TINFO "Testing buffered write, buffered read"
+	EXPECT_PASS fsplough -c 512 -d "$PWD"
+}
+
+do_test2()
+{
+	tst_res TINFO "Testing buffered write, direct read"
+	EXPECT_PASS fsplough -c 512 -R -d "$PWD"
+}
+
+do_test3()
+{
+	tst_res TINFO "Testing direct write, buffered read"
+	EXPECT_PASS fsplough -c 512 -W -d "$PWD"
+}
+
+do_test4()
+{
+	tst_res TINFO "Testing direct write, direct read"
+	EXPECT_PASS fsplough -c 512 -RW -d "$PWD"
+}
+
+. nfs_lib.sh
+tst_run