| Message ID | 20220706201533.289775-14-eesposit@redhat.com |
|---|---|
| State | New |
| Series | job: replace AioContext lock with job_mutex |
On 7/6/22 23:15, Emanuele Giuseppe Esposito wrote:
> From: Paolo Bonzini <pbonzini@redhat.com>
>
> We want to make sure that access to job->aio_context is always done
> under either the BQL or job_mutex. The problem is that using
> aio_co_enter(job->aiocontext, job->co) in job_start and job_enter_cond
> makes the coroutine immediately resume, so we can't hold the job lock.
> And caching it is not safe either, as it might change.
>
> job_start runs under the BQL, so it can freely read job->aiocontext, but
> job_enter_cond does not. In order to fix this, use aio_co_wake():
> the advantage is that it won't use job->aiocontext, but the
> main disadvantage is that it won't be able to detect a change of
> job AioContext.
>
> Calling bdrv_try_set_aio_context() will issue the following calls
> (simplified):
> * in terms of bdrv callbacks:
>   .drained_begin -> .set_aio_context -> .drained_end
> * in terms of child_job functions:
>   child_job_drained_begin -> child_job_set_aio_context -> child_job_drained_end
> * in terms of job functions:
>   job_pause_locked -> job_set_aio_context -> job_resume_locked
>
> We can see that after setting the new aio_context, job_resume_locked
> calls job_enter_cond again, which then invokes aio_co_wake(). But
> while job->aiocontext has been set in job_set_aio_context,
> job->co->ctx has not changed, so the coroutine would be entering
> the wrong aiocontext.
>
> Using aio_co_schedule in job_resume_locked() might seem like a valid
> alternative, but the problem is that the bh resuming the coroutine
> is not scheduled immediately, and if in the meantime another
> bdrv_try_set_aio_context() is run (see test_propagate_mirror() in
> test-block-iothread.c), we would have the first schedule in the
> wrong aiocontext, and the second set of drains won't even manage
> to schedule the coroutine, as job->busy would still be true from
> the previous job_resume_locked().
>
> The solution is to stick with aio_co_wake(), but then detect, every time
> the coroutine resumes from yielding, whether job->aio_context has
> changed. If so, we can reschedule it to the new context.
>
> Check for the aiocontext change in job_do_yield_locked because:
> 1) aio_co_reschedule_self must be called from the running coroutine
> 2) since child_job_set_aio_context allows changing the aiocontext only
>    while the job is paused, this is the exact place where the coroutine
>    resumes, before running JobDriver's code.
>
> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

Seems reasonable to me, though I still don't have the whole picture in mind. Weak:

Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>

> ---
>  job.c | 22 +++++++++++++++++++---
>  1 file changed, 19 insertions(+), 3 deletions(-)
>
> diff --git a/job.c b/job.c
> index 9c8792c9e8..405f39566b 100644
> --- a/job.c
> +++ b/job.c
> @@ -581,11 +581,12 @@ void job_enter_cond_locked(Job *job, bool(*fn)(Job *job))
>          return;
>      }
>
> -    assert(!job->deferred_to_main_loop);

Why do we remove this assertion?

>      timer_del(&job->sleep_timer);
>      job->busy = true;
>      real_job_unlock();
> -    aio_co_enter(job->aio_context, job->co);
> +    job_unlock();
> +    aio_co_wake(job->co);
> +    job_lock();
>  }
>
>  void job_enter_cond(Job *job, bool(*fn)(Job *job))
> @@ -611,6 +612,8 @@ void job_enter(Job *job)
>   */
>  static void coroutine_fn job_do_yield_locked(Job *job, uint64_t ns)
>  {
> +    AioContext *next_aio_context;
> +
>      real_job_lock();
>      if (ns != -1) {
>          timer_mod(&job->sleep_timer, ns);
> @@ -622,7 +625,20 @@ static void coroutine_fn job_do_yield_locked(Job *job, uint64_t ns)
>      qemu_coroutine_yield();
>      job_lock();
>
> -    /* Set by job_enter_cond() before re-entering the coroutine. */
> +    next_aio_context = job->aio_context;
> +    /*
> +     * Coroutine has resumed, but in the meanwhile the job AioContext
> +     * might have changed via bdrv_try_set_aio_context(), so we need to move
> +     * the coroutine too in the new aiocontext.
> +     */
> +    while (qemu_get_current_aio_context() != next_aio_context) {
> +        job_unlock();
> +        aio_co_reschedule_self(next_aio_context);
> +        job_lock();
> +        next_aio_context = job->aio_context;
> +    }
> +
> +    /* Set by job_enter_cond_locked() before re-entering the coroutine. */
>      assert(job->busy);
>  }
>
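For readers without the rest of the series at hand, here is a minimal sketch of the wake-up side touched by the first hunk. It is not the actual QEMU code: the helper name is invented for illustration, and job_lock()/job_unlock() are assumed to be the job_mutex wrappers introduced earlier in the series. The point it illustrates is that aio_co_wake() resumes the coroutine in the AioContext it last ran in (job->co->ctx) without reading job->aio_context, and that the lock must be dropped first because the wake may run the coroutine immediately.

```c
#include "qemu/osdep.h"
#include "block/aio.h"
#include "qemu/job.h"

/* Hypothetical helper, sketching the first hunk: wake the job coroutine
 * without reading job->aio_context.  Called with job_mutex held. */
static void job_wake_locked(Job *job)
{
    /* Mark the job as re-entered, as job_enter_cond_locked() does. */
    job->busy = true;

    /*
     * aio_co_wake() may resume job->co right away, and the coroutine
     * itself takes job_mutex (see job_do_yield_locked), so the lock
     * cannot be held across the call.
     */
    job_unlock();
    aio_co_wake(job->co);   /* resumes in job->co->ctx, not job->aio_context */
    job_lock();
}
```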
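And a matching sketch of the resume side that the second hunk adds to job_do_yield_locked(): after qemu_coroutine_yield() returns, the job's AioContext may already have been changed by bdrv_try_set_aio_context() while the job was paused, because aio_co_wake() brought the coroutine back in the old context. The coroutine therefore moves itself before running any more JobDriver code. Again, the helper name is invented and the job_mutex wrappers are assumed from the series.

```c
#include "qemu/osdep.h"
#include "block/aio.h"
#include "qemu/coroutine.h"
#include "qemu/job.h"

/* Hypothetical helper, sketching the loop added to job_do_yield_locked().
 * Called with job_mutex held, from coroutine context, right after the
 * coroutine has been woken up. */
static void coroutine_fn job_follow_aio_context_locked(Job *job)
{
    AioContext *next_aio_context = job->aio_context;

    while (qemu_get_current_aio_context() != next_aio_context) {
        /* aio_co_reschedule_self() yields, so don't hold job_mutex across it */
        job_unlock();
        aio_co_reschedule_self(next_aio_context);
        job_lock();
        /* the AioContext may have changed again while we were being moved */
        next_aio_context = job->aio_context;
    }
}
```

The while loop, rather than a single check, covers the case where job->aio_context changes yet again while the coroutine is being moved and before it retakes job_mutex.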