
[RFC] Add hw tests

Message ID 20180814045655.20519-1-alistair@popple.id.au
State Superseded
Series [RFC] Add hw tests

Checks

Context                            Check    Description
snowpatch_ozlabs/apply_patch       success  master/apply_patch Successfully applied
snowpatch_ozlabs/build-multiarch   success  Test build-multiarch on branch master

Commit Message

Alistair Popple Aug. 14, 2018, 4:56 a.m. UTC
---

Amitay,

I was wondering if you had any thoughts on how we could better integrate the
below prototype of some HW testing? What is here currently works, but it has a
few issues:

1. It only does a basic check to see if the host is powered on and skips the
   rest of the tests if it isn't, rather than attempting to get the host into
   the right state.

2. You have to manually copy pdbg to the BMC. It would be good if we could
   automate deployment of pdbg to the BMC (e.g. rsync <pdbg> <bmc>).

3. It creates a new connection for each command, which is slow, so it would be
   nice if we could avoid that.

4. Environment setup (such as BMC host/user/pass) is hardcoded in the
   test_hw_bmc.sh script.

5. Testing of stdout and stderr. Using "2>&1" doesn't give consistent results
   when there is output on both stdout and stderr (for example progress bars),
   so it would be nice if we could add a "required_stderr" check separate from
   the "required_output" check.

6. bmc_driver.sh is basically a copy of the existing driver.sh so there is a lot
   of code duplication.

I had to update the driver to use regexes for matching command output (which
depends on bash) as different platforms (e.g. P8 vs. P9) will return different
outputs for specific values. In general I like the simple approach here (run
command, match output), so I would like to extend what we already have. Thanks.
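
For reference, the output check in bmc_driver.sh below comes down to an
unanchored bash [[ =~ ]] test, so a single expected pattern can cover
platform-specific values. A minimal standalone sketch (the sample outputs are
invented):

#!/bin/bash
# Sketch of the regex-based output matching used by the driver below.
# The expected output is treated as an extended regular expression.
required_output='p0:0xc09 = 0x[[:xdigit:]]{8}'

for output in "p0:0xc09 = 0x80040004" "p0:0xc09 = 0x120d0049" ; do
	if [[ "$output" =~ $required_output ]] ; then
		echo "match:    $output"
	else
		echo "mismatch: $output"
	fi
done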

- Alistair

Makefile.am          |   3 +-
 tests/bmc_driver.sh  | 247 +++++++++++++++++++++++++++++++++++++++++++++++++++
 tests/test_hw_bmc.sh |  59 ++++++++++++
 3 files changed, 308 insertions(+), 1 deletion(-)
 create mode 100644 tests/bmc_driver.sh
 create mode 100755 tests/test_hw_bmc.sh

Comments

Amitay Isaacs Aug. 15, 2018, 5:48 a.m. UTC | #1
On Tue, 2018-08-14 at 14:56 +1000, Alistair Popple wrote:
> ---
> 
> Amitay,
> 
> I was wondering if you had any thoughts of how we could better
> integrate the below prototype of some HW testing? What is here
> currently works, but has a few issues:

The following answers are with respect to the updated test driver and the
updated HW BMC test posted to the list.

> 
> 1. It only does a basic check to see if the host is powered on and
>    skips the rest of the tests if it isn't rather than attempt to get
>    the host into the right state.

This can now be done as part of a setup hook.
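
A rough sketch of what such a setup hook could look like, assuming the updated
driver runs a user-supplied test_setup function before the first test (the
hook name and the "obmcutil poweron" step are assumptions, not taken from this
patch):

# Hypothetical setup hook: get the host into the required state up front
# instead of skipping the tests when it is powered off.
test_setup ()
{
	state=$(sshpass -p $BMC_PASS ssh $BMC_USER@$BMC_HOST /usr/sbin/obmcutil state)
	if [[ ! "$state" =~ HostState\.Running ]] ; then
		sshpass -p $BMC_PASS ssh $BMC_USER@$BMC_HOST /usr/sbin/obmcutil poweron
	fi
}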

> 
> 2. You have to manually copy pdbg to the BMC. It would be good if we
>    could automate deployment of pdbg to the BMC (eg. rsync <pdbg> <bmc>)

Another setup hook.
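
That hook could be as simple as the rsync suggested above, reusing the BMC_*
variables from test_hw_bmc.sh (the build-output paths below are assumptions
and depend on how pdbg was cross-compiled):

# Hypothetical deployment hook: push the freshly built pdbg binary and
# shared library to $PDBG_PATH on the BMC before any test runs.
deploy_pdbg ()
{
	sshpass -p $BMC_PASS rsync -az \
		"$SRCDIR"/pdbg "$SRCDIR"/.libs/libpdbg*.so* \
		$BMC_USER@$BMC_HOST:$PDBG_PATH/
}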

> 
> 3. It creates a new connection for each command which is slow, so it
>    would be nice if we could avoid that.

That's hard!
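
One option worth noting (not part of this series) is OpenSSH connection
multiplexing, so only the first command pays the login cost; whether the BMC's
SSH server allows multiple sessions over one connection would need checking:

# Possible approach (assumption): have test_run reuse a single master
# connection via OpenSSH ControlMaster/ControlPersist instead of paying
# for a fresh login on every command.
SSH_OPTS="-o ControlMaster=auto -o ControlPath=/tmp/pdbg-test-%r@%h:%p -o ControlPersist=60"

run_on_bmc ()
{
	sshpass -p $BMC_PASS ssh $SSH_OPTS $BMC_USER@$BMC_HOST \
		LD_LIBRARY_PATH=$PDBG_PATH "$@"
}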

> 
> 4. Environment setup (such as BMC host/user/pass) is hardcoded in the
>    test_hw_bmc.sh script.

The test_hw_bmc.sh test checks for the existence of a .test.bmc file where the
environment can be defined.
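
So the hardcoded values from the script below would move into something like
this instead (the file name comes from the reply; its exact contents and how
it is loaded are assumptions, and the values are placeholders):

# .test.bmc -- local, uncommitted environment for the HW tests
PDBG_PATH=/tmp/pdbg
BMC_USER=root
BMC_PASS=secret
BMC_HOST=bmc.example.com

# and in the test, something along the lines of:
[ -f "$TESTDIR"/.test.bmc ] && . "$TESTDIR"/.test.bmc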

> 
> 5. Testing of stdout and stderr. Using "2>&1" doesn't give consistent
>    results when there is output on both stdout and stderr (for example
>    progress bars) so it would be nice if we could add a "required_stderr"
>    check seperate from the "required_output" check.

test_result now only matches output on stdout.
test_result_stderr has been added to optionally match output on stderr.
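
With that, a test producing output on both streams might look something like
this (the test_result_stderr signature is assumed to mirror test_result; the
expected values are taken from the existing putmem test):

# Assumed usage: expected stdout via test_result, expected stderr via
# test_result_stderr, no more "2>&1" merging of the two streams.
test_result 0 <<EOF
Wrote 8 bytes starting at 0x0000000031000000
EOF
test_result_stderr ".*"

echo -n "DEADBEEF" | test_run $PDBG_PATH/pdbg -p0 putmem 0x31000000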

> 
> 6. bmc_driver.sh is basically a copy of the existing driver.sh so
>    there is a lot of code duplication.
> 
> I had to update the driver to use regexes for matching command output
> (which depends on bash) as different platforms (eg. P8 vs. P9) will
> return different outputs for specific values. In general I like the
> simple approach here (run command, match output), so I would like to
> extend what we already have. Thanks.

Different outputs can now be matched using result_filter, as long as you can
represent the pattern as a regular expression.
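
result_filter itself isn't shown in this thread, so purely as an illustration
of the idea: a filter that rewrites the platform-specific part of the output
so a single expected value matches everywhere (the function name comes from
the reply, its semantics are assumed):

# Assumed semantics: the driver pipes the command output through
# result_filter (when defined) before comparing it to the expected output.
result_filter ()
{
	sed -E 's/= 0x[[:xdigit:]]+/= HEXVALUE/'
}

test_result 0 <<EOF
p0:0xc09 = HEXVALUE
EOF

test_run $PDBG_PATH/pdbg -p0 getcfam 0xc09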



Amitay.

Patch

diff --git a/Makefile.am b/Makefile.am
index 571660d..245e593 100644
--- a/Makefile.am
+++ b/Makefile.am
@@ -7,7 +7,8 @@  bin_PROGRAMS = pdbg
 check_PROGRAMS = optcmd_test
 
 PDBG_TESTS = \
-	tests/test_selection.sh
+	tests/test_selection.sh 	\
+	tests/test_hw_bmc.sh
 
 TESTS = optcmd_test $(PDBG_TESTS)
 
diff --git a/tests/bmc_driver.sh b/tests/bmc_driver.sh
new file mode 100644
index 0000000..f7566f2
--- /dev/null
+++ b/tests/bmc_driver.sh
@@ -0,0 +1,247 @@ 
+#!/bin/bash
+
+#
+# Simple test driver for shell based testsuite
+#
+# Each testsuite is a shell script, which sources this driver file.
+# Following functions can be used to define testsuite and tests.
+#
+# test_group <testsuite-name>
+#
+#    Define a test suite.  This function should be called before calling any
+#    other functions
+#
+# test_result <rc> <--|output>
+#
+#    Define the exit code and the output for the test.  If there is no output
+#    from the commands, then '--' can be specified to denote empty output.
+#    Multi-line output can be added using a here document.
+#
+# test_run <command> [<arguments>]
+#
+#    Define the command to execute for the test.
+#
+# test_skip
+#
+#    This can be called before test_run, to skip the test.  This is useful to
+#    write tests which are dependent on the environment (e.g. architecture).
+#
+#
+# Example:
+#
+#  test_group "my group of tests"
+#
+#  test_result 0 output1
+#  test_run command1 arguments1
+#
+#  test_result 1 --
+#  test_run command2 arguments2
+#
+#  test_result 0 output3
+#  if [ $condition ] ; then
+#       test_skip
+#  fi
+#  test_run command3 arguments3
+#
+
+set -u
+
+TESTDIR=$(dirname "$0")
+if [ "$TESTDIR" = "." ] ; then
+	TESTDIR=$(cd "$TESTDIR"; pwd)
+fi
+SRCDIR=$(dirname "$TESTDIR")
+PATH="$SRCDIR":$PATH
+
+test_name=${TEST_NAME:-$0}
+test_logfile=${TEST_LOG:-}
+test_trsfile=${TEST_TRS:-}
+test_color=${TEST_COLOR:-yes}
+
+red= grn= lgn= blu= mgn= std=
+if [ $test_color = yes ] ; then
+	red='' # Red.
+	grn='' # Green.
+	lgn='' # Light green.
+	blu='' # Blue.
+	mgn='' # Magenta.
+	std=''     # No color.
+fi
+
+test_started=0
+test_skipped=0
+test_defined=0
+
+count_total=0
+count_skipped=0
+count_failed=0
+
+trap 'test_end' 0
+
+test_group ()
+{
+	test_name=${1:-$test_name}
+	test_started=1
+
+	echo "-- $test_name"
+}
+
+test_end ()
+{
+	trap 0
+	if [ $count_total -eq 0 ] ; then
+		status=99
+	elif [ $count_failed -gt 0 ] ; then
+		status=1
+	elif [ $count_skipped -eq $count_total ] ; then
+		status=77
+	else
+		status=0
+	fi
+
+	exit $status
+}
+
+test_error ()
+{
+	echo "$@"
+	exit 99
+}
+
+test_log ()
+{
+	if [ -z "$test_logfile" ] ; then
+		echo "$@"
+	else
+		echo "$@" >> "$test_logfile"
+	fi
+}
+
+test_trs ()
+{
+	if [ -n "$test_trsfile" ] ; then
+		echo "$@" >> "$test_trsfile"
+	fi
+}
+
+test_output ()
+{
+	rc=${1:-1}
+	required_rc=${2:-0}
+	output_mismatch=${3:-0}
+
+	if [ $required_rc -eq 0 ] ; then
+		expect_failure=no
+	else
+		expect_failure=yes
+	fi
+
+	case $rc:$expect_failure:$output_mismatch in
+	0:*:1)   col=$red res=FAIL ;;
+	0:yes:*) col=$red res=XPASS ;;
+	0:*:*)   col=$grn res=PASS ;;
+	77:*:*)  col=$blu res=SKIP ;;
+	*:*:1)   col=$red res=FAIL ;;
+	*:yes:*) col=$lgn res=XFAIL ;;
+	*:*:*)   col=$red res=FAIL ;;
+	esac
+
+	if [ -n "$test_logfile" ] ; then
+		test_log "${res} ${test_cmd} (exit status: ${rc})"
+	fi
+	test_trs ":test-result: ${res}"
+
+	if [ -n "$test_trsfile" ] ; then
+		indent="  "
+	else
+		indent=""
+	fi
+
+	echo "${indent}${col}${res}${std}: ${test_cmd}"
+
+	count_total=$(( count_total + 1 ))
+	if [ $res = "SKIP" ] ; then
+		count_skipped=$(( count_skipped + 1 ))
+	elif [ $res = "XPASS" -o $res = "FAIL" ] ; then
+		count_failed=$(( count_failed + 1 ))
+	fi
+}
+
+#---------------------------------------------------------------------
+# Public functions
+#---------------------------------------------------------------------
+
+test_result ()
+{
+	if [ $test_started -eq 0 ] ; then
+		test_error "ERROR: missing call to test_group"
+	fi
+
+	required_rc="${1:-0}"
+	if [ $# -eq 2 ] ; then
+		if [ "$2" = "--" ] ; then
+			required_output=""
+		else
+			required_output="$2"
+		fi
+	else
+		if ! tty -s ; then
+			required_output=$(cat)
+		else
+			required_output=""
+		fi
+	fi
+
+	test_defined=1
+}
+
+test_skip ()
+{
+	if [ $test_started -eq 0 ] ; then
+		test_error "ERROR: missing call to test_group"
+	fi
+
+	test_skipped=1
+}
+
+test_run ()
+{
+	output_mismatch=0
+
+	if [ $test_started -eq 0 ] ; then
+		test_error "ERROR: missing call to test_group"
+	fi
+
+	if [ $test_defined -eq 0 ] ; then
+		test_error "ERROR: missing call to test_result"
+	fi
+
+	test_cmd="$@"
+
+	if [ $test_skipped -eq 1 ] ; then
+		test_output 77
+		test_skipped=0
+		test_defined=0
+		return
+	fi
+
+	output=$(sshpass -p $BMC_PASS ssh $BMC_USER@$BMC_HOST LD_LIBRARY_PATH=$PDBG_PATH "$@")
+	rc=$?
+
+	if [ $rc -ne $required_rc ] ; then
+		test_log "expected rc: $required_rc"
+		test_log "output rc: $rc"
+	fi
+
+	if [[ !("$output" =~ $required_output) ]] ; then
+		test_log "expected:"
+		test_log "$required_output"
+		test_log "output:"
+		test_log "$output"
+		output_mismatch=1
+	fi
+
+	test_output $rc $required_rc $output_mismatch
+	test_skipped=0
+	test_defined=0
+}
diff --git a/tests/test_hw_bmc.sh b/tests/test_hw_bmc.sh
new file mode 100755
index 0000000..127cba2
--- /dev/null
+++ b/tests/test_hw_bmc.sh
@@ -0,0 +1,59 @@ 
+#!/bin/bash
+
+PDBG_PATH=/tmp/pdbg
+BMC_USER=<bmc user>
+BMC_PASS=<bmc pass>
+BMC_HOST=<bmc host>
+
+. $(dirname "$0")/bmc_driver.sh
+
+test_group "BMC HW tests"
+
+hw_state=0
+
+do_skip ()
+{
+    if [ "$hw_state" -ne 1 ] ; then
+	test_skip
+    fi
+}
+
+test_result 0 <<EOF
+CurrentBMCState     : xyz.openbmc_project.State.BMC.BMCState.Ready
+CurrentPowerState   : xyz.openbmc_project.State.Chassis.PowerState.On
+CurrentHostState    : xyz.openbmc_project.State.Host.HostState.Running
+EOF
+
+test_run /usr/sbin/obmcutil state
+
+if [ "$res" = "PASS" ] ; then
+    hw_state=1
+fi
+
+test_result 0 <<EOF
+p0:0xc09 = 0x[[:xdigit:]]{8}
+EOF
+
+do_skip
+test_run $PDBG_PATH/pdbg -p0 getcfam 0xc09 2>&1
+
+test_result 0 <<EOF
+p0:0xf000f = 0x[[:xdigit:]]{16}
+EOF
+
+do_skip
+test_run $PDBG_PATH/pdbg -p0 getscom 0xf000f 2>&1
+
+test_result 0 <<EOF
+Wrote 8 bytes starting at 0x0000000031000000
+EOF
+
+do_skip
+echo -n "DEADBEEF" | test_run $PDBG_PATH/pdbg -p0 putmem 0x31000000 2>/dev/null
+
+test_result 0 <<EOF
+DEADBEEF
+EOF
+
+do_skip
+test_run $PDBG_PATH/pdbg -p0 getmem 0x31000000 0x8 2>/dev/null