Mirror of https://git.proxmox.com/git/mirror_zfs.git (synced 2026-03-10 20:36:21 +03:00)
ZTS: fail test run if test runner crashes unexpectedly
zfs-tests.sh executes test-runner.py to do the actual test work. Any exit code < 4 is interpreted as success, with the actual value describing the outcome of the tests inside. If a Python program crashes in some way (eg an uncaught exception), the process exit code is 1. Taken together, this means that test-runner.py can crash during setup, but return a "success" error code to zfs-tests.sh, which will report and exit 0. This in turn causes the CI runner to believe the test run completed successfully.

This commit addresses this by making zfs-tests.sh interpret an exit code of 255 as a failure in the runner itself. Then, in test-runner.py, the fail() function defaults to a 255 return, and the main function gets wrapped in a generic exception handler, which prints the exception and calls fail().

All together, this should mean that any unexpected failure in the test runner itself will be propagated out of zfs-tests.sh for CI or any other calling program to deal with.

Sponsored-by: Klara, Inc.
Sponsored-by: Wasabi Technology, Inc.
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Reviewed-by: Tony Hutter <hutter2@llnl.gov>
Signed-off-by: Rob Norris <rob.norris@klarasystems.com>
Closes #17858
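The wrapper pattern the commit describes can be sketched as follows. This is an illustrative standalone sketch, not the actual test-runner.py code: `run_guarded` and `body` are hypothetical names introduced here to show how an uncaught exception is converted into a distinct 255 exit.

```python
import sys
import traceback


def fail(retstr, ret=255):
    # 255 is reserved for failures of the runner itself; exit codes
    # 0-3 describe test outcomes and are treated as success by the caller.
    print('%s: %s' % (sys.argv[0], retstr))
    sys.exit(ret)


def run_guarded(body):
    # Run body(); any uncaught exception becomes an explicit 255 exit,
    # so a crash in the runner can never be mistaken for a clean run.
    try:
        return body()
    except SystemExit:
        raise  # deliberate exits (e.g. exit(0) for templates) pass through
    except Exception:
        fail("Uncaught exception in test runner:\n" + traceback.format_exc())
```

A caller that sees SystemExit with code 255 knows the runner itself broke, as opposed to the 0-3 range that reports test results.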
This commit is contained in: commit 72f41454a6 (parent 3a55e76b84)
@@ -797,6 +797,10 @@ msg "${TEST_RUNNER}" \
 2>&1; echo $? >"$REPORT_FILE"; } | tee "$RESULTS_FILE"
 read -r RUNRESULT <"$REPORT_FILE"
 
+if [[ "$RUNRESULT" -eq "255" ]] ; then
+	fail "$TEST_RUNNER failed, test aborted."
+fi
+
 #
 # Analyze the results.
 #
@@ -25,6 +25,7 @@ import sys
 import ctypes
 import re
 import configparser
+import traceback
 
 from datetime import datetime
 from optparse import OptionParser
@@ -1138,7 +1139,7 @@ def filter_tests(testrun, options):
     testrun.filter(failed)
 
 
-def fail(retstr, ret=1):
+def fail(retstr, ret=255):
     print('%s: %s' % (sys.argv[0], retstr))
     exit(ret)
 
@@ -1247,23 +1248,27 @@ def parse_args():
 def main():
     options = parse_args()
 
-    testrun = TestRun(options)
+    try:
+        testrun = TestRun(options)
 
-    if options.runfiles:
-        testrun.read(options)
-    else:
-        find_tests(testrun, options)
+        if options.runfiles:
+            testrun.read(options)
+        else:
+            find_tests(testrun, options)
 
-    if options.logfile:
-        filter_tests(testrun, options)
+        if options.logfile:
+            filter_tests(testrun, options)
 
-    if options.template:
-        testrun.write(options)
-        exit(0)
+        if options.template:
+            testrun.write(options)
+            exit(0)
 
-    testrun.complete_outputdirs()
-    testrun.run(options)
-    exit(testrun.summary())
+        testrun.complete_outputdirs()
+        testrun.run(options)
+        exit(testrun.summary())
+
+    except Exception:
+        fail("Uncaught exception in test runner:\n" + traceback.format_exc())
 
 
 if __name__ == '__main__':
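The calling side of the new convention can be sketched in Python as a stand-in for the shell logic in zfs-tests.sh. Here `runner_cmd` is a placeholder that simulates a runner crash by exiting 255, not the real test-runner.py invocation:

```python
import subprocess
import sys

# Placeholder command simulating a crashed runner: it exits 255, just as
# the patched test-runner.py does on an uncaught exception.
runner_cmd = [sys.executable, "-c", "raise SystemExit(255)"]

result = subprocess.run(runner_cmd)
if result.returncode == 255:
    print("test runner failed, test aborted")  # runner-level failure
elif result.returncode < 4:
    # 0-3 describe the outcome of the tests themselves
    print("tests completed, outcome code", result.returncode)
else:
    print("unexpected exit code", result.returncode)
```

Before this change, an exit code of 1 from a Python crash fell into the "< 4" success branch, which is exactly the ambiguity the patch removes.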