Commit Graph

18 Commits (master)

Author SHA1 Message Date
Jim Mussared 339f02a594 tests/run-perfbench.py: Don't allow imports from the cwd.
Make tests run in an isolated environment (i.e. `import io` would
otherwise get the `tests/io` directory).

This work was funded through GitHub Sponsors.

Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
2023-06-08 17:54:24 +10:00
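
A minimal sketch of one way such isolation can be arranged (an assumption for illustration; the actual commit may implement this differently) is to drop the current working directory from the module search path before the benchmark runs:

    import sys

    # Remove the empty entry that makes imports search the current
    # working directory first, so `import io` resolves to the
    # built-in module rather than a local `tests/io` directory.
    if "" in sys.path:
        sys.path.remove("")
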
stijn 9c7ff87643 all: Keep msvc build output in build/ directories.
This follows the change made for Makefile-based projects in b2e82402.
2022-12-13 17:18:53 +11:00
Damien George 329f8252b9 tests/run-perfbench: Support --heapsize argument and pass to executable.
Signed-off-by: Damien George <damien@micropython.org>
2022-11-03 17:33:25 +11:00
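
For example (the heap-size value and its format shown here are illustrative assumptions):

    $ ./run-perfbench.py --heapsize 2000000 1000 1000
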
Damien George cbc9f944c4 tests,tools: Update path to unix micropython executable.
These were missed by 47c84286e8.

Signed-off-by: Damien George <damien@micropython.org>
2022-08-18 11:47:58 +10:00
Angus Gratton e024a4c59c tests: Fix run-perfbench parsing "no matching params" case.
Signed-off-by: Angus Gratton <gus@projectgus.com>
2022-06-28 14:22:06 +10:00
Angus Gratton 5568c324ba tests/perf_bench: Add some configurations for N=32, M=10.
For STM32L072 and similar, very low end targets.

The other perf_bench tests run out of memory, crash, or fail on
prerequisite features.

Signed-off-by: Angus Gratton <gus@projectgus.com>
2022-06-28 10:32:18 +10:00
Damien George 7d3204783a tests/run-tests.py: Handle case where mpy-cross fails to compile script.
Signed-off-by: Damien George <damien@micropython.org>
2022-05-23 15:45:21 +10:00
Damien George 54ab9d23e9 tests/run-perfbench.py: Allow running tests via mpy and native emitter.
The performance benchmark tests now support `--via-mpy` and `--emit native`
on remote targets.  For example:

    $ ./run-perfbench.py -p --via-mpy --emit native 100 100

Signed-off-by: Damien George <damien@micropython.org>
2022-05-19 17:31:56 +10:00
Damien George 6f68a8c240 tests/run-perfbench.py: Return error code if any test fails on target.
Signed-off-by: Damien George <damien@micropython.org>
2022-05-17 14:06:41 +10:00
Damien George e8bc4a3a5b tests/run-perfbench.py: Use SKIP consistently, and increase print width.
A script will print "SKIP" if it wants to be skipped, so the test runner
must also use uppercase SKIP.

Signed-off-by: Damien George <damien@micropython.org>
2022-02-11 22:19:38 +11:00
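
As an illustration of the convention, a test script can opt out at run time by printing SKIP and exiting (the particular feature check shown here is a hypothetical example):

    # Hypothetical test prologue: request a skip when a required
    # module is unavailable on the target.
    try:
        import array
    except ImportError:
        print("SKIP")
        raise SystemExit
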
Damien George b33fdbe535 tests/run-perfbench.py: Allow a test to SKIP, and to have a .exp file.
Signed-off-by: Damien George <damien@micropython.org>
2022-02-10 15:25:33 +11:00
Damien George ab2923dfa1 all: Update Python formatting to latest Black version 22.1.0.
Signed-off-by: Damien George <damien@micropython.org>
2022-02-02 16:49:55 +11:00
Damien George 18d984c8b2 tests/run-perfbench.py: Fix native feature check.
This was broken by 8459f538eb.

Signed-off-by: Damien George <damien@micropython.org>
2021-05-11 23:45:36 +10:00
David Lechner 3dc324d3f1 tests: Format all Python code with black, except tests in basics subdir.
This adds the Python files in the tests/ directory to the set of files
formatted with ./tools/codeformat.py.  The basics/ subdirectory is excluded
for now so we aren't changing too much at once.

In a few places `# fmt: off`/`# fmt: on` was used where the code had
special formatting for readability or where the test was actually testing
the specific formatting.
2020-03-30 13:21:58 +11:00
Damien George 858e992d2e tests/run-perfbench.py: Skip complex tests if target doesn't enable it. 2019-10-15 16:46:06 +11:00
Jim Mussared f1882636c0 tests/run-perfbench.py: Show error when truth check fails. 2019-10-15 16:38:11 +11:00
Damien George 3967dd68e8 tests/run-perfbench.py: Add --emit option to select emitter for tests. 2019-07-19 14:07:41 +10:00
Damien George e92c9aa9c9 tests: Add performance benchmarking test-suite framework.
This benchmarking test suite is intended to be run on any MicroPython
target.  As such all tests are parameterised with N and M: N is the
approximate CPU frequency (in MHz) of the target and M is the approximate
amount of heap memory (in kbytes) available on the target.  When running
the benchmark suite these parameters must be specified and then each test
is tuned to run on that target in a reasonable time (<1 second).

The test scripts are not standalone: they require adding some extra code at
the end to run the test with the appropriate parameters.  This is done
automatically by the run-perfbench.py script, in such a way that imports
are minimised (so the tests can be run on targets without filesystem
support).

To interface with the benchmarking framework, each test provides a
bm_params dict and a bm_setup function, with the latter taking a set of
parameters (chosen based on N, M) and returning a pair of functions, one to
run the test and one to get the results.
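
A minimal sketch of a test written against this interface (the benchmark body and parameter values are illustrative; only the bm_params and bm_setup names come from the framework):

    # bm_params maps (N, M) capability levels to concrete parameters.
    bm_params = {
        (50, 25): (500,),
        (100, 100): (2000,),
        (1000, 1000): (20000,),
    }

    def bm_setup(params):
        (nloop,) = params
        state = {"output": 0}

        def run():
            total = 0
            for i in range(nloop):
                total += i
            state["output"] = total

        def get_result():
            # Assumed shape: a normalisation factor and the output to
            # check against the CPython "truth" value.
            return nloop, state["output"]

        return run, get_result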

When running the test, the number of microseconds taken by the test is
recorded.  This is then converted into a benchmark score by inverting it
(so a higher number is faster) and normalising it with an appropriate
factor (based roughly on the amount of work done by the test, e.g. number
of iterations).
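
Schematically (the per-test normalisation factor is assumed here):

    # Hypothetical score computation: invert the elapsed time so that
    # a higher score is faster, scaled by a per-test factor that
    # reflects the amount of work done (e.g. the iteration count).
    def score(elapsed_us, norm):
        return norm / elapsed_us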

Test outputs are also compared against a "truth" value, computed by running
the test with CPython.  This provides a basic way of making sure the test
actually ran correctly.

Each test is run multiple times, and the average and standard deviation of
the results are computed and output as a summary of the test.

To compare performance across different runs, the run-perfbench.py script
also includes a diff mode that reads in the output of two previous runs and
computes the difference in performance.  Results are reported as a
percentage change in performance with a combined standard deviation, giving
an indication of whether the benchmarking noise is smaller than the effect
being measured.
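
A sketch of how such a comparison might be computed (the exact formulas used by the script are an assumption):

    # Hypothetical diff for one test, given the mean score and standard
    # deviation from two runs A and B.
    def diff(avg_a, sd_a, avg_b, sd_b):
        change = (avg_b - avg_a) / avg_a * 100.0  # % change in performance
        # Combine the relative errors in quadrature to estimate the
        # noise floor of the comparison.
        sd = 100.0 * ((sd_a / avg_a) ** 2 + (sd_b / avg_b) ** 2) ** 0.5
        return change, sd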

Example invocations for PC, pyboard and esp8266 targets respectively:

    $ ./run-perfbench.py 1000 1000
    $ ./run-perfbench.py --pyboard 100 100
    $ ./run-perfbench.py --pyboard --device /dev/ttyUSB0 50 25
2019-06-28 16:29:23 +10:00