micropython/tests/perf_bench/benchrun.py

tests: Add performance benchmarking test-suite framework.

This benchmarking test suite is intended to be run on any MicroPython target. As such, all tests are parameterised with N and M: N is the approximate CPU frequency (in MHz) of the target and M is the approximate amount of heap memory (in kbytes) available on the target. When running the benchmark suite these parameters must be specified, and each test is then tuned to run on that target in a reasonable time (<1 second).

The test scripts are not standalone: they require some extra code added at the end to run the test with the appropriate parameters. This is done automatically by the run-perfbench.py script, in such a way that imports are minimised (so the tests can be run on targets without filesystem support).

To interface with the benchmarking framework, each test provides a bm_params dict and a bm_setup function, with the latter taking a set of parameters (chosen based on N, M) and returning a pair of functions: one to run the test and one to get the results.

When running the test, the number of microseconds taken by the test is recorded. This is then converted into a benchmark score by inverting it (so a higher number is faster) and normalising it with an appropriate factor (based roughly on the amount of work done by the test, e.g. number of iterations). Test outputs are also compared against a "truth" value, computed by running the test with CPython. This provides a basic way of making sure the test actually ran correctly.

Each test is run multiple times, the results are averaged and the standard deviation is computed. This is output as a summary of the test.

To make comparisons of performance across different runs, the run-perfbench.py script also includes a diff mode that reads in the output of two previous runs and computes the difference in performance. Reports are given as a percentage change in performance, with a combined standard deviation to give an indication of whether the noise in the benchmarking is smaller than the effect being measured.

Example invocations for PC, pyboard and esp8266 targets respectively:

$ ./run-perfbench.py 1000 1000
$ ./run-perfbench.py --pyboard 100 100
$ ./run-perfbench.py --pyboard --device /dev/ttyUSB0 50 25
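For illustration, a minimal test script following this contract might look like the sketch below. The file name, the bm_params entries and the loop body are hypothetical, invented only to show the bm_params/bm_setup interface the framework expects; the real tests in tests/perf_bench/ have the same shape.

# Hypothetical bm_example.py -- not a real file in the suite.
# Keys of bm_params are (N, M) rating tuples; values are the tuning
# parameters passed to bm_setup (all values here are made up).
bm_params = {
    (50, 25): (500,),
    (100, 100): (2500,),
    (1000, 1000): (50000,),
}


def bm_setup(params):
    (niter,) = params
    state = {"out": 0}

    def run():
        # The code being timed: a simple integer-arithmetic loop.
        s = 0
        for i in range(niter):
            s += i * i
        state["out"] = s

    def result():
        # Return (normalisation factor, output); the output is compared
        # against the value produced by running the same test on CPython.
        return niter, state["out"]

    return run, result

The extra code added by run-perfbench.py then amounts, in effect, to a call to bm_run(N, M). The code below is benchrun.py itself: it selects a parameter set from bm_params, times the run() function returned by bm_setup, and prints the elapsed time together with the result.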
def bm_run(N, M):
    try:
        from utime import ticks_us, ticks_diff
    except ImportError:
        import time
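
        # Fallback when running under CPython (e.g. when computing the "truth"
        # value): emulate utime.ticks_us/ticks_diff using time.perf_counter.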
        ticks_us = lambda: int(time.perf_counter() * 1000000)
        ticks_diff = lambda a, b: a - b

    # Pick sensible parameters given N, M
    cur_nm = (0, 0)
    param = None
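    # Select the parameter set whose (N, M) key is lexicographically largest among
    # those rated for at most ~1.2x the target's CPU frequency (10 * nm[0] <= 12 * N)
    # and for no more heap than M.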
    for nm, p in bm_params.items():
        if 10 * nm[0] <= 12 * N and nm[1] <= M and nm > cur_nm:
            cur_nm = nm
            param = p
    if param is None:
        print(-1, -1, "SKIP: no matching params")
        return

    # Run and time benchmark
    run, result = bm_setup(param)
    t0 = ticks_us()
    run()
    t1 = ticks_us()
    norm, out = result()
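    # Output: elapsed time in microseconds, the normalisation factor and the
    # test output (compared by run-perfbench.py against the CPython "truth" value).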
    print(ticks_diff(t1, t0), norm, out)