Commit graph

107 Commits (b326edf68c5edb648fac4dc2a3403ee33510e179)

Author SHA1 Message Date
Jim Mussared b326edf68c all: Remove MICROPY_OPT_CACHE_MAP_LOOKUP_IN_BYTECODE.
This commit removes all parts of code associated with the existing
MICROPY_OPT_CACHE_MAP_LOOKUP_IN_BYTECODE optimisation option, including the
-mcache-lookup-bc option to mpy-cross.

This feature originally provided a significant performance boost for Unix,
but wasn't able to be enabled for MCU targets (due to frozen bytecode), and
added significant extra complexity to generating and distributing .mpy
files.

The equivalent performance gain is now provided by the combination of
MICROPY_OPT_LOAD_ATTR_FAST_PATH and MICROPY_OPT_MAP_LOOKUP_CACHE (which has
been enabled on the unix port in the previous commit).
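
For reference, enabling the two new options on a port means setting the
corresponding macros, e.g. in the port's mpconfigport.h (a minimal sketch;
the macro names are the ones introduced by these commits):

    #define MICROPY_OPT_LOAD_ATTR_FAST_PATH (1)
    #define MICROPY_OPT_MAP_LOOKUP_CACHE    (1)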

It's hard to provide precise performance numbers, but tests have been run
on a wide variety of architectures (x86-64, ARM Cortex, Aarch64, RISC-V,
xtensa) and they all generally agree on the qualitative improvements seen
by the combination of MICROPY_OPT_LOAD_ATTR_FAST_PATH and
MICROPY_OPT_MAP_LOOKUP_CACHE.

For example, on a "quiet" Linux x64 environment (i3-5010U @ 2.10GHz) the
change from CACHE_MAP_LOOKUP_IN_BYTECODE, to LOAD_ATTR_FAST_PATH combined
with MAP_LOOKUP_CACHE is:

diff of scores (higher is better)
N=2000 M=2000       bccache -> attrmapcache      diff      diff% (error%)
bm_chaos.py        13742.56 ->   13905.67 :   +163.11 =  +1.187% (+/-3.75%)
bm_fannkuch.py        60.13 ->      61.34 :     +1.21 =  +2.012% (+/-2.11%)
bm_fft.py         113083.20 ->  114793.68 :  +1710.48 =  +1.513% (+/-1.57%)
bm_float.py       256552.80 ->  243908.29 : -12644.51 =  -4.929% (+/-1.90%)
bm_hexiom.py         521.93 ->     625.41 :   +103.48 = +19.826% (+/-0.40%)
bm_nqueens.py     197544.25 ->  217713.12 : +20168.87 = +10.210% (+/-3.01%)
bm_pidigits.py      8072.98 ->    8198.75 :   +125.77 =  +1.558% (+/-3.22%)
misc_aes.py        17283.45 ->   16480.52 :   -802.93 =  -4.646% (+/-0.82%)
misc_mandel.py     99083.99 ->  128939.84 : +29855.85 = +30.132% (+/-5.88%)
misc_pystone.py    83860.10 ->   82592.56 :  -1267.54 =  -1.511% (+/-2.27%)
misc_raytrace.py   21490.40 ->   22227.23 :   +736.83 =  +3.429% (+/-1.88%)

This shows that the new optimisations are at least as good as the existing
inline-bytecode-caching, and are sometimes much better (because the new
ones apply caching to a wider variety of map lookups).
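
To illustrate the idea behind a map lookup cache, here is a minimal sketch
(hypothetical names and layout, not the actual py/map.c implementation): a
small, fixed-size, direct-mapped cache keyed on the (map, qstr) pair, which
is why the RAM overhead is small and constant.

    #include <stdint.h>
    #include <stddef.h>

    #define MAP_CACHE_SIZE 256  // fixed size: small, constant RAM overhead

    typedef struct {
        const void *map;   // map this entry refers to
        uint16_t key;      // qstr index that was looked up
        uint16_t slot;     // where it was found in the map
    } map_cache_entry_t;

    static map_cache_entry_t map_cache[MAP_CACHE_SIZE];

    // On a hit, return the cached slot so it can be probed first, skipping
    // the full lookup; on a miss, return -1 and let the caller do the
    // normal lookup and then store the result.
    static int map_cache_lookup(const void *map, uint16_t key) {
        size_t idx = ((uintptr_t)map ^ key) & (MAP_CACHE_SIZE - 1);
        map_cache_entry_t *e = &map_cache[idx];
        if (e->map == map && e->key == key) {
            return e->slot;
        }
        return -1;
    }

    static void map_cache_store(const void *map, uint16_t key, uint16_t slot) {
        size_t idx = ((uintptr_t)map ^ key) & (MAP_CACHE_SIZE - 1);
        map_cache[idx] = (map_cache_entry_t){map, key, slot};
    }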

The new optimisations can also benefit code generated by the native
emitter, because they apply to the runtime rather than the generated code.
The improvement for the native emitter when LOAD_ATTR_FAST_PATH and
MAP_LOOKUP_CACHE are enabled is (same Linux environment as above):

diff of scores (higher is better)
N=2000 M=2000        native -> nat-attrmapcache  diff      diff% (error%)
bm_chaos.py        14130.62 ->   15464.68 :  +1334.06 =  +9.441% (+/-7.11%)
bm_fannkuch.py        74.96 ->      76.16 :     +1.20 =  +1.601% (+/-1.80%)
bm_fft.py         166682.99 ->  168221.86 :  +1538.87 =  +0.923% (+/-4.20%)
bm_float.py       233415.23 ->  265524.90 : +32109.67 = +13.756% (+/-2.57%)
bm_hexiom.py         628.59 ->     734.17 :   +105.58 = +16.796% (+/-1.39%)
bm_nqueens.py     225418.44 ->  232926.45 :  +7508.01 =  +3.331% (+/-3.10%)
bm_pidigits.py      6322.00 ->    6379.52 :    +57.52 =  +0.910% (+/-5.62%)
misc_aes.py        20670.10 ->   27223.18 :  +6553.08 = +31.703% (+/-1.56%)
misc_mandel.py    138221.11 ->  152014.01 : +13792.90 =  +9.979% (+/-2.46%)
misc_pystone.py    85032.14 ->  105681.44 : +20649.30 = +24.284% (+/-2.25%)
misc_raytrace.py   19800.01 ->   23350.73 :  +3550.72 = +17.933% (+/-2.79%)

In summary, compared to MICROPY_OPT_CACHE_MAP_LOOKUP_IN_BYTECODE, the new
MICROPY_OPT_LOAD_ATTR_FAST_PATH and MICROPY_OPT_MAP_LOOKUP_CACHE options:
- are simpler;
- take less code size;
- are faster (generally);
- work with code generated by the native emitter;
- can be used on embedded targets with a small and constant RAM overhead;
- allow the same .mpy bytecode to run on all targets.

See #7680 for further discussion.  And see also #7653 for a discussion
about simplifying mpy-cross options.

Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
2021-09-16 16:04:03 +10:00
Jeff Epler 413f34cd8f all: Fix signed shifts and NULL access errors from -fsanitize=undefined.
Fixes the following (the line numbers match commit 0e87459e2b):

../../extmod/crypto-algorithms/sha256.c:49:19: runtime error: left shif...
../../extmod/moduasyncio.c:106:35: runtime error: member access within ...
../../py/binary.c:210:13: runtime error: left shift of negative value -...
../../py/mpz.c:744:16: runtime error: negation of -9223372036854775808 ...
../../py/objint.c:109:22: runtime error: left shift of 1 by 31 places c...
../../py/objint_mpz.c:374:9: runtime error: left shift of 4611686018427...
../../py/objint_mpz.c:374:9: runtime error: left shift of negative valu...
../../py/parsenum.c:106:14: runtime error: left shift of 46116860184273...
../../py/runtime.c:395:33: runtime error: left shift of negative value ...
../../py/showbc.c:177:28: runtime error: left shift of negative value -...
../../py/vm.c:321:36: runtime error: left shift of negative value -1

Testing was done on an amd64 Debian Buster system using gcc-8.3 and these
settings:

    CFLAGS += -g3 -Og -fsanitize=undefined
    LDFLAGS += -fsanitize=undefined

The introduced TASK_PAIRHEAP macro's conditional (x ? &x->i : NULL)
assembles (under amd64 gcc 8.3 -Os) to the same code as &x->i, since i is the
initial field of the struct.  However, for the purposes of undefined
behavior analysis the conditional is needed.
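
A representative pattern for the signed-shift fixes (illustrative, not the
exact diff): do the shift in the unsigned domain and convert back, which is
well defined and compiles to the same instructions on two's-complement
targets.

    #include <stdint.h>

    // Before: "x << n" is undefined behaviour when x is negative.
    // After: shift the value reinterpreted as unsigned, then convert back.
    static int32_t shift_left(int32_t x, unsigned n) {
        return (int32_t)((uint32_t)x << n);
    }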

Signed-off-by: Jeff Epler <jepler@gmail.com>
2021-06-24 23:01:04 +10:00
Damien George 85f2b239d8 py/showbc: Pass in an mp_print_t struct to all bytecode-print functions.
So the output can be redirected if needed.

Signed-off-by: Damien George <damien@micropython.org>
2020-09-11 17:22:28 +10:00
Damien George 69661f3343 all: Reformat C and Python source code with tools/codeformat.py.
This is run with uncrustify 0.70.1, and black 19.10b0.
2020-02-28 10:33:03 +11:00
Damien George c8c0fd4ca3 py: Rework and compress second part of bytecode prelude.
This patch compresses the second part of the bytecode prelude which
contains the source file name, function name, source-line-number mapping
and cell closure information.  This part of the prelude now begins with a
single variable-length unsigned integer which encodes 2 numbers, being the
byte-size of the following 2 sections in the header: the "source info
section" and the "closure section".  After decoding this variable unsigned
integer it's possible to skip over one or both of these sections very
easily.

This scheme saves about 2 bytes for most functions compared to the original
format: one in the case that there are no closure cells, and one because
padding was eliminated.
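
The variable-length unsigned integer here is the usual 7-bits-per-byte
scheme with a continuation bit; a sketch of the decoder (in the spirit of
the bytecode-reading helpers, not their exact code):

    #include <stdint.h>

    // Decode a variable-length unsigned integer: 7 payload bits per byte,
    // most significant group first, high bit set on all but the last byte.
    static uint32_t decode_uint(const uint8_t **p) {
        uint32_t v = 0;
        uint8_t b;
        do {
            b = *(*p)++;
            v = (v << 7) | (b & 0x7f);
        } while (b & 0x80);
        return v;
    }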
2019-10-01 12:26:22 +10:00
Damien George b5ebfadbd6 py: Compress first part of bytecode prelude.
The start of the bytecode prelude contains 6 numbers telling the amount of
stack needed for the Python values and exceptions, and the signature of the
function.  Prior to this patch these numbers were all encoded one after the
other (2x variable unsigned integers, then 4x bytes), but using so many
bytes is unnecessary.

An entropy analysis of around 150,000 bytecode functions from the CPython
standard library showed that the optimal Shannon coding would need about
7.1 bits on average to encode these 6 numbers, compared to the existing 48
bits.

This patch attempts to get close to this optimal value by packing the 6
numbers into a single, variable-length unsigned integer via bit-wise
interleaving.  The interleaving scheme is chosen to minimise the average
number of bytes needed, and at the same time keep the scheme simple enough
so it can be implemented without too much overhead in code size or speed.
The scheme requires about 10.5 bits on average to store the 6 numbers.

As a result most functions which originally took 6 bytes to encode these 6
numbers now need only 1 byte (in 80% of cases).
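
A toy version of the interleaving (illustrative only; the real scheme uses
a hand-tuned bit layout per byte): taking one bit from each of the 6
numbers in turn keeps the combined value small whenever all the inputs are
small, which is exactly what makes a variable-length encoding of it short.

    #include <stdint.h>

    // Interleave the low `nbits` bits of 6 small numbers into one value.
    // When every input fits in 1-2 bits (the common case for stack sizes
    // and argument counts), the result fits in a single byte.
    static uint64_t interleave6(const unsigned v[6], unsigned nbits) {
        uint64_t out = 0;
        for (unsigned b = 0; b < nbits; ++b) {
            for (unsigned i = 0; i < 6; ++i) {
                out |= (uint64_t)((v[i] >> b) & 1) << (b * 6 + i);
            }
        }
        return out;  // then emit as a variable-length unsigned integer
    }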
2019-10-01 12:26:22 +10:00
Damien George 02db91a7a3 py: Split RAISE_VARARGS opcode into 3 separate ones.
From the beginning of this project the RAISE_VARARGS opcode was named and
implemented following CPython, where it has an argument (to the opcode)
counting how many args the raise takes:

    raise # 0 args (re-raise previous exception)
    raise exc # 1 arg
    raise exc from exc2 # 2 args (chained raise)

In the bytecode this operation therefore takes 2 bytes, one for
RAISE_VARARGS and one for the number of args.

This patch splits this opcode into 3, where each is now a single byte.
This reduces bytecode size by 1 byte for each use of raise.  Every byte
counts!  It also has the benefit of reducing code size (on all ports except
nanbox).
2019-09-26 15:39:50 +10:00
Milan Rossa efdcd6baa7 py/showbc: Fix off-by-one when showing address of unknown opcode. 2019-08-06 16:08:39 +10:00
Damien George 5a2599d962 py: Replace POP_BLOCK and POP_EXCEPT opcodes with POP_EXCEPT_JUMP.
POP_BLOCK and POP_EXCEPT are now the same, and are always followed by a
JUMP.  So this optimisation reduces code size, and RAM usage of bytecode by
two bytes for each try-except handler.
2019-03-05 16:09:58 +11:00
Damien George 0864a6957f py: Clean up unary and binary enum list to keep groups together.
Two non-bytecode binary ops (NOT_IN and IN_NOT) are moved out of the
bytecode group, so this change alters the bytecode format.
2017-10-05 10:49:44 +11:00
Paul Sokolovsky 9d836fedbd py: Clarify which mp_unary_op_t's may appear in the bytecode.
Not all can, so we don't need to reserve bytecodes for them, and can
use free slots for something else later.
2017-09-25 16:35:19 -07:00
Alexander Steffen 55f33240f3 all: Use the name MicroPython consistently in comments
There were several different spellings of MicroPython present in comments,
when there should be only one.
2017-07-31 18:35:40 +10:00
Damien George dd11af209d py: Add LOAD_SUPER_METHOD bytecode to allow heap-free super meth calls.
This patch allows the following code to run without allocating on the heap:

    super().foo(...)

Before this patch such a call would allocate a super object on the heap and
then load the foo method and call it right away.  The super object is only
needed to perform the lookup of the method and not needed after that.  This
patch makes an optimisation to allocate the super object on the C stack and
discard it right after use.

Changes in code size due to this patch are:

   bare-arm: +128
    minimal: +232
   unix x64: +416
unix nanbox: +364
     stmhal: +184
    esp8266: +340
     cc3200: +128
2017-04-22 23:39:20 +10:00
Damien George f4df3aaa72 py: Allow bytecode/native to put iter_buf on stack for simple for loops.
So that the "for x in it: ..." statement can now work without using the
heap (so long as the iterator argument fits in an iter_buf structure).
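
The runtime side looks roughly like this (mp_getiter/mp_iternext and
mp_obj_iter_buf_t are the real names; the loop itself is a schematic
sketch):

    #include "py/runtime.h"

    // Iterate without heap allocation: mp_getiter uses iter_buf for the
    // iterator object whenever it fits, instead of allocating one.
    static void iterate_no_heap(mp_obj_t obj) {
        mp_obj_iter_buf_t iter_buf;
        mp_obj_t iterable = mp_getiter(obj, &iter_buf);
        mp_obj_t item;
        while ((item = mp_iternext(iterable)) != MP_OBJ_STOP_ITERATION) {
            // ... use item ...
        }
    }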
2017-02-16 18:38:06 +11:00
Damien George cc4c1adf6e py/showbc: Make sure to set the const_table before printing bytecode. 2017-01-27 12:34:09 +11:00
Damien George fbddea929d py/showbc: Make printf's go to the platform print stream.
The system printf is no longer used by the core uPy code.  Instead, the
platform print stream or DEBUG_printf is used.  Using DEBUG_printf in the
showbc functions would mean that the code can't be tested by the test
suite, so use the normal output instead.

This patch also fixes parsing of bytecode-line-number mappings.
2016-09-20 11:30:54 +10:00
Damien George adaf0d865c py: Combine 3 comprehension opcodes (list/dict/set) into 1.
With the previous patch combining 3 emit functions into 1, it now makes
sense to also combine the corresponding VM opcodes, which is what this
patch does.  This eliminates 2 opcodes which simplifies the VM and reduces
code size, in bytes: bare-arm:44, minimal:64, unix(NDEBUG,x86-64):272,
stmhal:92, esp8266:200.  Profiling (with a simple script that creates many
list/dict/set comprehensions) shows no measurable change in performance.
2016-09-19 12:28:03 +10:00
Damien George bdbe8c9ae2 py: Make UNARY_OP_NOT a first-class op, to agree with Py not semantics.
Fixes #1684 and makes "not" match Python semantics.  The code is also
simplified (the separate MP_BC_NOT opcode is removed) and the patch saves
68 bytes for bare-arm/ and 52 bytes for minimal/.

Previously "not x" was implemented as !mp_unary_op(x, MP_UNARY_OP_BOOL),
so any given object only needs to implement MP_UNARY_OP_BOOL (and the VM
had a special opcode to do the ! bit).

With this patch "not x" is implemented as mp_unary_op(x, MP_UNARY_OP_NOT),
but this operation is caught at the start of mp_unary_op and dispatched as
!mp_obj_is_true(x).  mp_obj_is_true has special logic to test for
truthness, and is the correct way to handle the not operation.
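
A simplified sketch of that dispatch (argument order as in the current
py/runtime.h is assumed):

    #include "py/runtime.h"

    // MP_UNARY_OP_NOT is caught up front and answered via the truthiness
    // test, so objects only ever need to implement MP_UNARY_OP_BOOL.
    static mp_obj_t unary_op_sketch(mp_unary_op_t op, mp_obj_t arg) {
        if (op == MP_UNARY_OP_NOT) {
            return mp_obj_new_bool(!mp_obj_is_true(arg));
        }
        return mp_unary_op(op, arg);  // normal per-type dispatch
    }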
2015-12-10 22:19:48 +00:00
Damien George 999cedb90f py: Wrap all obj-ptr conversions in MP_OBJ_TO_PTR/MP_OBJ_FROM_PTR.
This allows the mp_obj_t type to be configured to something other than a
pointer-sized primitive type.

This patch also includes additional changes to allow the code to compile
when sizeof(mp_uint_t) != sizeof(void*), such as using size_t instead of
mp_uint_t, and various casts.
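
Usage of the two macros (the macro names are real; the type is a schematic
example):

    #include "py/obj.h"

    typedef struct _my_type_t {
        mp_obj_base_t base;
        int value;
    } my_type_t;

    // With the default object representation these are plain casts, but
    // routing all conversions through the macros is what lets mp_obj_t be
    // redefined as something other than a pointer-sized primitive.
    static mp_obj_t wrap(my_type_t *p) {
        return MP_OBJ_FROM_PTR(p);
    }

    static my_type_t *unwrap(mp_obj_t o) {
        return MP_OBJ_TO_PTR(o);
    }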
2015-11-29 14:25:35 +00:00
Damien George c8e9c0d89a py: Add MICROPY_PERSISTENT_CODE so code can persist beyond the runtime.
Main changes when MICROPY_PERSISTENT_CODE is enabled are:

- qstrs are encoded as 2-byte fixed width in the bytecode
- all pointers are removed from bytecode and put in const_table (this
  includes const objects and raw code pointers)

Ultimately this option will enable persistence for not just bytecode but
also native code.
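
Reading a fixed-width qstr from the bytecode stream then becomes trivial
(sketch; the byte order shown is an assumption for illustration):

    #include <stdint.h>

    // With MICROPY_PERSISTENT_CODE, a qstr in bytecode is a fixed-width
    // 2-byte index rather than a variable-length integer.
    static uint16_t decode_qstr16(const uint8_t **ip) {
        uint16_t q = (uint16_t)((*ip)[0] | ((*ip)[1] << 8));
        *ip += 2;
        return q;
    }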
2015-11-13 12:49:18 +00:00
Damien George 713ea1800d py: Add constant table to bytecode.
Contains just argument names at the moment but makes it easy to add
arbitrary constants.
2015-11-13 12:49:18 +00:00
Damien George 3a3db4dcf0 py: Put all bytecode state (arg count, etc) in bytecode. 2015-11-13 12:49:18 +00:00
Damien George 9b7f583b0c py: Reorganise bytecode layout so it's more structured, easier to edit. 2015-11-13 12:49:18 +00:00
Damien George 59fba2d6ea py: Remove mp_load_const_bytes and instead load precreated bytes object.
Previous to this patch each time a bytes object was referenced a new
instance (with the same data) was created.  With this patch a single
bytes object is created in the compiler and is loaded directly at execute
time as a true constant (similar to loading bignum and float objects).
This saves on allocating RAM and means that bytes objects can now be
used when the memory manager is locked (eg in interrupts).

The MP_BC_LOAD_CONST_BYTES bytecode was removed as part of this.

Generated bytecode is slightly larger due to storing a pointer to the
bytes object instead of the qstr identifier.

Code size is reduced by about 60 bytes on Thumb2 architectures.
2015-06-25 14:42:13 +00:00
Damien George c8870b7c69 py: Make showbc decode UNPACK_EX, and use correct range for unop/binop. 2015-06-18 15:12:17 +00:00
Damien George 8872abcbc4 py: Remove LOAD_CONST_ELLIPSIS bytecode, use LOAD_CONST_OBJ instead.
Ellipsis constant is rarely used so no point having an extra bytecode
for it.
2015-05-05 22:15:42 +01:00
Damien George c9aa1883ed py: Simplify bytecode prelude when encoding closed over variables. 2015-04-07 00:08:17 +01:00
Damien George 8e9a71257d py: Implement DELETE_GLOBAL in showbc.c. 2015-03-20 17:12:09 +00:00
Damien George 7d414a1b52 py: Parse big-int/float/imag constants directly in parser.
Previous to this patch, a big-int, float or imag constant was interned
(made into a qstr) and then parsed at runtime to create an object each
time it was needed.  This is wasteful in RAM and not efficient.  Now,
these constants are parsed straight away in the parser and turned into
objects.  This allows constants with large numbers of digits (so
addresses issue #1103) and takes us a step closer to #722.
2015-02-08 01:57:40 +00:00
Damien George a5efcd4745 py: Specify unary/binary op name in TypeError error message.
Eg, "() + 1" now tells you that __add__ is not supported for tuple and
int types (before it just said the generic "binary operator").  We reuse
the table of names for slot lookup because it would be a waste of code
space to store the pretty name for each operator.
2015-01-27 18:02:25 +00:00
Damien George 50912e7f5d py, unix, stmhal: Allow compiling with -Wshadow.
See issue #699.
2015-01-20 11:55:10 +00:00
Damien George 963a5a3e82 py, unix: Allow compiling with -Wsign-compare.
See issue #699.
2015-01-16 17:47:07 +00:00
Damien George d6ed6702f7 py/showbc.c: Handle new LOAD_CONST_OBJ opcode, and opcodes with cache. 2015-01-13 23:08:47 +00:00
Paul Sokolovsky d8bfd77ad5 showbc: Show conditional jump destination as unsigned value.
This is consistent with how BC_JUMP was handled before. We never show jump
destinations relative to the jump instruction itself, only relative to the
beginning of the function. Another useful way is to show them as absolute
(real memory) addresses, and this change makes the result expected and
consistent with how BC_JUMP is shown.
2015-01-07 00:29:15 +02:00
Damien George 51dfcb4bb7 py: Move to guarded includes, everywhere in py/ core.
Addresses issue #1022.
2015-01-01 20:32:09 +00:00
Paul Sokolovsky 1ee1785bed showbc: Print operation mnemonic in BINARY_OP. 2014-12-28 21:43:44 +02:00
Paul Sokolovsky df103462dc showbc: Make code object start pointer semi-public.
This allows printing either absolute addresses or relative offsets in jumps
and code references.
2014-12-28 21:37:17 +02:00
Paul Sokolovsky 343266ea51 showbc: Refactor to allow inline instruction printing. 2014-12-27 05:01:21 +02:00
Damien George 7764f163fa py: Fix label printing in showbc; print sp in vm trace. 2014-12-12 17:18:56 +00:00
Damien George 8456cc017b py: Compress load-int, load-fast, store-fast, unop, binop bytecodes.
There is a lot of potential in compressing bytecodes and making more use of
the coding space.  This patch introduces "multi" bytecodes which have their
argument included in the bytecode (by addition).

UNARY_OP and BINARY_OP now no longer take a 1 byte argument for the
opcode.  Rather, the opcode is included in the first byte itself.

LOAD_FAST_[0,1,2] and STORE_FAST_[0,1,2] are removed in favour of their
multi versions, which can take an argument between 0 and 15 inclusive.
The majority of LOAD_FAST/STORE_FAST codes fit in this range and so this
saves a byte for each of these.

LOAD_CONST_SMALL_INT_MULTI is used to load small ints between -16 and 47
inclusive.  Such ints are quite common and now only need 1 byte to
store, and now have much faster decoding.

In all, this patch saves about 2% RAM for typical bytecode (1.8% on
64-bit test, 2.5% on pyboard test).  It also reduces the binary size
(because bytecodes are simplified) and doesn't harm performance.
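
The trick is that the argument is folded into the opcode byte by addition;
a sketch of both directions (opcode name and value are illustrative):

    #include <stdint.h>

    #define BC_LOAD_FAST_MULTI 0xb0  // illustrative base; spans 16 codes

    // Encode: add a small argument (0..15) directly into the opcode.
    static uint8_t encode_load_fast(unsigned local_num) {
        return (uint8_t)(BC_LOAD_FAST_MULTI + local_num);
    }

    // Decode: recover the argument by subtraction; no extra byte needed.
    static unsigned decode_load_fast(uint8_t op) {
        return (unsigned)(op - BC_LOAD_FAST_MULTI);
    }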
2014-10-25 20:23:13 +01:00
Damien George 1084b0f9c2 py: Store bytecode arg names in bytecode (were in own array).
This saves a lot of RAM for 2 reasons:

1. For functions that don't have default values, var args or var kw
args (which is a large number of functions in the general case), the
mp_obj_fun_bc_t type now fits in 1 GC block (previously needed 2 because
of the extra pointer to point to the arg_names array).  So this saves 16
bytes per function (32 bytes on 64-bit machines).

2. Combining separate memory regions generally saves RAM because the
unused bytes at the end of the GC block are saved for 1 of the blocks
(since that block doesn't exist on its own anymore).  So generally this
saves 8 bytes per function.

Tested by importing lots of modules:

- 64-bit Linux gave about an 8% RAM saving for 86k of used RAM.
- pyboard gave about a 6% RAM saving for 31k of used RAM.
2014-10-25 20:23:13 +01:00
Damien George 564963a170 py: Fix debug-printing of bytecode line numbers.
Also move the raw bytecode printing code from emitglue to mp_bytecode_print.
2014-10-24 14:42:50 +00:00
Damien George 3eaa0c3833 py: Use UINT_FMT instead of %d. 2014-10-03 17:54:25 +00:00
Damien George 42f3de924b py: Convert [u]int to mp_[u]int_t where appropriate.
Addressing issue #50.
2014-10-03 17:44:14 +00:00
Damien George b534e1b9f1 py: Use variable length encoded uints in more places in bytecode.
Code-info size, block name, source name, n_state and n_exc_stack now use
variable length encoded uints.  This saves 7-9 bytes per bytecode
function for most functions.
2014-09-04 14:44:01 +01:00
Damien George 4747becc64 py: Improve encoding scheme for line-number to bytecode map.
Reduces by about a factor of 10 on average the amount of RAM needed to
store the line-number to bytecode map in the bytecode prelude.

Using CPython3.4's stdlib for statistics: previously, an average of
13 bytes were used per (bytecode offset, line-number offset) pair, and
now with this improvement, that's down to 1.3 bytes on average.

Large RAM usage before was due to some very large steps in line numbers,
both from the start of the first line in a function way down in the
file, and also functions that have big comments and/or big strings in
them (both cases were significant).

Although the savings are large on average for the CPython stdlib, it
won't have such a big effect for small scripts used in embedded
programming.

Addresses issue #648.
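
The gain comes from delta encoding: each entry stores only the increments
since the previous (bytecode offset, line number) pair, with a short form
for the common small steps. A sketch of the encoder (the exact packing in
the real emitter differs):

    #include <stdint.h>
    #include <stddef.h>

    // Emit one byte (0b0LLBBBBB) when the bytecode delta fits in 5 bits
    // and the line delta in 2 bits, else fall back to a wider escape form.
    static size_t encode_line_entry(uint8_t *out, unsigned bc_delta,
                                    unsigned line_delta) {
        if (bc_delta < 32 && line_delta < 4) {
            out[0] = (uint8_t)((line_delta << 5) | bc_delta);
            return 1;
        }
        out[0] = 0x80;  // escape marker (sketch; real format is tighter)
        out[1] = (uint8_t)bc_delta;
        out[2] = (uint8_t)line_delta;
        return 3;
    }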
2014-07-31 16:12:01 +00:00
Damien George 40f3c02682 Rename machine_(u)int_t to mp_(u)int_t.
See discussion in issue #50.
2014-07-03 13:25:24 +01:00
Paul Sokolovsky a4ac5b9f05 showbc: Make sure it's possible to trace MAKE_FUNCTION arg to actual bytecode. 2014-06-03 01:26:51 +03:00
Paul Sokolovsky 8bf8404c15 showbc: Print code block header at the beginning, not in the middle of dump.
Also, dump code block in bytes.
2014-06-02 16:35:57 +03:00
Damien George fb510b3bf9 Rename builtins config variables to MICROPY_PY_BUILTINS_*.
This renames:
MICROPY_PY_FROZENSET -> MICROPY_PY_BUILTINS_FROZENSET
MICROPY_PY_PROPERTY -> MICROPY_PY_BUILTINS_PROPERTY
MICROPY_PY_SLICE -> MICROPY_PY_BUILTINS_SLICE
MICROPY_ENABLE_FLOAT -> MICROPY_PY_BUILTINS_FLOAT

See issue #35 for discussion.
2014-06-01 13:32:54 +01:00