README

This directory contains tests for various functionality areas of MicroPython.
To run all stable tests, run the "run-tests.py" script in this directory.

Tests of capabilities not supported on all platforms should be written
to check whether the capability is present. If it is not, the test
should merely output 'SKIP' followed by the line terminator, and call
sys.exit() to raise SystemExit, instead of attempting to test the
missing capability. The testing framework (run-tests.py in this
directory, test_main.c in qemu-arm) recognizes this as a skipped test.
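
For example, such a test typically begins with a guard like the following
minimal sketch (the imported module is only an illustration of a capability
that may be absent on a given port):

    import sys

    try:
        import ffi  # capability this test depends on; absent on some ports
    except ImportError:
        print("SKIP")
        sys.exit()  # raises SystemExit, which run-tests.py treats as a skip

    # ... the actual test code follows here ...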

There are a few features for which this mechanism cannot be used to
condition a test. The run-tests.py script uses small scripts in the
feature_check directory to check whether each such feature is present,
and skips the relevant tests if not.
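
A minimal sketch of that arrangement (the probe's file name and output
below are hypothetical, not a copy of an actual feature_check script):

    # feature_check/float_probe.py (hypothetical): run-tests.py runs a
    # probe like this on the target and inspects what it prints to decide
    # whether the float tests should be skipped.
    try:
        float
    except NameError:
        print("no float")
    else:
        print("float ok")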

Tests are generally verified by running the test both in MicroPython and
in CPython and comparing the outputs. If the outputs differ, the test fails
and the outputs are saved in a .out file (MicroPython) and a .exp file
(CPython) respectively. For tests that cannot be run in CPython, for
example because they use the machine module, a .exp file can be provided
next to the test's .py file. A convenient way to generate that file is to
run the test, let it fail (because CPython cannot run it), check the
resulting .out file manually, and then copy it to the .exp file.
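
For instance (the file names and output below are hypothetical), a test
that needs the MicroPython-only machine module could be paired with a
hand-checked .exp file like this:

    # machine_example.py: cannot run under CPython, so the expected output
    # is provided by hand in machine_example.py.exp next to this file.
    import machine

    print("machine module imported:", "machine" in dir())

    # machine_example.py.exp would then contain the single line:
    # machine module imported: True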

When creating new tests, anything that relies on float support should go in the
float/ subdirectory.  Anything that relies on import x, where x is not a built-in
module, should go in the import/ subdirectory.