rp2/mpthreadport: Fix race with IRQ when entering atomic section.

Prior to this commit there is a potential deadlock in
mp_thread_begin_atomic_section(), when obtaining the atomic_mutex, in the
following situation:
- main thread calls mp_thread_begin_atomic_section() (for whatever reason,
  doesn't matter)
- the second core is running so the main thread grabs the mutex via the
  call mp_thread_mutex_lock(&atomic_mutex, 1), and this succeeds
- before the main thread has a chance to run save_and_disable_interrupts()
  a USB IRQ comes in and the main thread jumps off to process this IRQ
- that USB processing triggers a call to the dcd_event_handler() wrapper
  from commit bcbdee2357
- that then calls mp_sched_schedule_node()
- that then attempts to obtain the atomic section, calling
  mp_thread_begin_atomic_section()
- that call then blocks trying to obtain atomic_mutex
- core0 is now deadlocked on itself, because the main thread has the mutex
  but the IRQ handler (which preempted the main thread) is blocked waiting
  for the mutex, which will never be free

The solution in this commit is to use mutex enter/exit functions that also
atomically disable/restore interrupts.

Fixes issues #12980 and #13288.

Signed-off-by: Damien George <damien@micropython.org>
pull/13312/head
Damien George 2024-01-02 01:04:03 +11:00
parent 8438c8790c
commit dc2a4e3cbd
1 changed file with 10 additions and 9 deletions

ports/rp2/mpthreadport.c

@@ -30,6 +30,7 @@
 #include "py/mpthread.h"
 #include "pico/stdlib.h"
 #include "pico/multicore.h"
+#include "mutex_extra.h"
 
 #if MICROPY_PY_THREAD
@@ -45,23 +46,23 @@ STATIC uint32_t *core1_stack = NULL;
 STATIC size_t core1_stack_num_words = 0;
 
 // Thread mutex.
-STATIC mp_thread_mutex_t atomic_mutex;
+STATIC mutex_t atomic_mutex;
 
 uint32_t mp_thread_begin_atomic_section(void) {
     if (core1_entry) {
         // When both cores are executing, we also need to provide
         // full mutual exclusion.
-        mp_thread_mutex_lock(&atomic_mutex, 1);
+        return mutex_enter_blocking_and_disable_interrupts(&atomic_mutex);
+    } else {
+        return save_and_disable_interrupts();
     }
-    return save_and_disable_interrupts();
 }
 
 void mp_thread_end_atomic_section(uint32_t state) {
-    restore_interrupts(state);
-    if (core1_entry) {
-        mp_thread_mutex_unlock(&atomic_mutex);
+    if (atomic_mutex.owner != LOCK_INVALID_OWNER_ID) {
+        mutex_exit_and_restore_interrupts(&atomic_mutex, state);
+    } else {
+        restore_interrupts(state);
     }
 }
@@ -69,7 +70,7 @@ void mp_thread_end_atomic_section(uint32_t state) {
 void mp_thread_init(void) {
     assert(get_core_num() == 0);
-    mp_thread_mutex_init(&atomic_mutex);
+    mutex_init(&atomic_mutex);
 
     // Allow MICROPY_BEGIN_ATOMIC_SECTION to be invoked from core1.
     multicore_lockout_victim_init();