Message ID | 20220118162738.1366281-10-eesposit@redhat.com |
---|---|
State | New |
Series | Removal of AioContext lock through drains: protect bdrv_replace_child_noperm. |
On Tue, Jan 18, 2022 at 11:27:35AM -0500, Emanuele Giuseppe Esposito wrote:
> If a drain happens while a job is sleeping, the timeout
> gets cancelled and the job continues once the drain ends.
> This is especially bad for the sleep performed in commit and stream
> jobs, since that is dictated by ratelimit to maintain a certain speed.
>
> Basically the execution path is the followig:

s/followig/following/

> 1. job calls job_sleep_ns, and yield with a timer in @ns ns.
> 2. meanwhile, a drain is executed, and
> child_job_drained_{begin/end} could be executed as ->drained_begin()
> and ->drained_end() callbacks.
> Therefore child_job_drained_begin() enters the job, that continues
> execution in job_sleep_ns() and calls job_pause_point_locked().
> 3. job_pause_point_locked() detects that we are in the middle of a
> drain, and firstly deletes any existing timer and then yields again,
> waiting for ->drained_end().
> 4. Once draining is finished, child_job_drained_end() runs and resumes
> the job. At this point, the timer has been lost and we just resume
> without checking if enough time has passed.
>
> This fix implies that from now onwards, job_sleep_ns will force the job
> to sleep @ns, even if it is wake up (purposefully or not) in the middle
> of the sleep. Therefore qemu-iotests test might run a little bit slower,
> depending on the speed of the job. Setting a job speed to values like "1"
> is not allowed anymore (unless you want to wait forever).
>
> Because of this fix, test_stream_parallel() in tests/qemu-iotests/030
> takes too long, since speed of stream job is just 1024 and before
> it was skipping all the wait thanks to the drains. Increase the
> speed to 256 * 1024. Exactly the same happens for test 151.
>
> Instead we need to sleep less in test_cancel_ready() test-blockjob.c,
> so that the job will be able to exit the sleep and transition to ready
> before the main loop asserts.

I remember seeing Hanna and Kevin use carefully rate-limited jobs in
qemu-iotests. They might have thoughts on whether this patch is
acceptable or not.
On 26/01/2022 12:21, Stefan Hajnoczi wrote:
> On Tue, Jan 18, 2022 at 11:27:35AM -0500, Emanuele Giuseppe Esposito wrote:
>> If a drain happens while a job is sleeping, the timeout
>> gets cancelled and the job continues once the drain ends.
>> This is especially bad for the sleep performed in commit and stream
>> jobs, since that is dictated by ratelimit to maintain a certain speed.
>>
>> Basically the execution path is the followig:
>
> s/followig/following/
>
>> 1. job calls job_sleep_ns, and yield with a timer in @ns ns.
>> 2. meanwhile, a drain is executed, and
>> child_job_drained_{begin/end} could be executed as ->drained_begin()
>> and ->drained_end() callbacks.
>> Therefore child_job_drained_begin() enters the job, that continues
>> execution in job_sleep_ns() and calls job_pause_point_locked().
>> 3. job_pause_point_locked() detects that we are in the middle of a
>> drain, and firstly deletes any existing timer and then yields again,
>> waiting for ->drained_end().
>> 4. Once draining is finished, child_job_drained_end() runs and resumes
>> the job. At this point, the timer has been lost and we just resume
>> without checking if enough time has passed.
>>
>> This fix implies that from now onwards, job_sleep_ns will force the job
>> to sleep @ns, even if it is wake up (purposefully or not) in the middle
>> of the sleep. Therefore qemu-iotests test might run a little bit slower,
>> depending on the speed of the job. Setting a job speed to values like "1"
>> is not allowed anymore (unless you want to wait forever).
>>
>> Because of this fix, test_stream_parallel() in tests/qemu-iotests/030
>> takes too long, since speed of stream job is just 1024 and before
>> it was skipping all the wait thanks to the drains. Increase the
>> speed to 256 * 1024. Exactly the same happens for test 151.
>>
>> Instead we need to sleep less in test_cancel_ready() test-blockjob.c,
>> so that the job will be able to exit the sleep and transition to ready
>> before the main loop asserts.
>
> I remember seeing Hanna and Kevin use carefully rate-limited jobs in
> qemu-iotests. They might have thoughts on whether this patch is
> acceptable or not.

I think the speed was carefully set as "slow enough" just to give the
operation time to happen while the job was running.

Anyway, all the tests I ran work as intended; I just increased their
speed slightly. Having speed=1 would make the job really, really slow.

Emanuele
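A back-of-the-envelope check of the "speed=1 waits forever" point: the rate limiter turns each copied chunk into a mandatory stall of roughly chunk_size / speed seconds, and with this patch job_sleep_ns() really does sleep that long. The sketch below is illustrative arithmetic only; the 512 KiB chunk size and the helper name are assumptions, not QEMU's actual ratelimit code:

```c
#include <stdint.h>
#include <stdio.h>

/*
 * Illustrative helper (not QEMU's ratelimit API): how long must a job
 * stall after copying `chunk_bytes` to keep its long-run throughput at
 * `speed` bytes per second?
 */
static int64_t delay_ns_for_chunk(uint64_t chunk_bytes, uint64_t speed)
{
    return (int64_t)(chunk_bytes * 1000000000ULL / speed);
}

int main(void)
{
    uint64_t chunk = 512 * 1024; /* assumed per-iteration copy size */

    /* speed=1 byte/s: ~145 hours of enforced sleep per 512 KiB chunk */
    printf("speed=1:        %.1f hours/chunk\n",
           delay_ns_for_chunk(chunk, 1) / 3.6e12);

    /* speed=256*1024, the new value in test 030: 2 s per chunk */
    printf("speed=256*1024: %.1f s/chunk\n",
           delay_ns_for_chunk(chunk, 256 * 1024) / 1e9);
    return 0;
}
```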
```diff
diff --git a/job.c b/job.c
index 83921dd79b..6ef2adead4 100644
--- a/job.c
+++ b/job.c
@@ -584,17 +584,15 @@ static void coroutine_fn job_do_yield_locked(Job *job, uint64_t ns)
     assert(job->busy);
 }
 
-void coroutine_fn job_pause_point(Job *job)
+/* Called with job_mutex held, but releases it temporarly. */
+static void coroutine_fn job_pause_point_locked(Job *job)
 {
     assert(job && job_started(job));
 
-    job_lock();
     if (!job_should_pause_locked(job)) {
-        job_unlock();
         return;
     }
     if (job_is_cancelled_locked(job)) {
-        job_unlock();
         return;
     }
 
@@ -614,13 +612,20 @@ void coroutine_fn job_pause_point(Job *job)
         job->paused = false;
         job_state_transition_locked(job, status);
     }
-    job_unlock();
 
     if (job->driver->resume) {
+        job_unlock();
         job->driver->resume(job);
+        job_lock();
     }
 }
 
+void coroutine_fn job_pause_point(Job *job)
+{
+    JOB_LOCK_GUARD();
+    job_pause_point_locked(job);
+}
+
 void job_yield(Job *job)
 {
     WITH_JOB_LOCK_GUARD() {
@@ -641,21 +646,22 @@ void job_yield(Job *job)
 
 void coroutine_fn job_sleep_ns(Job *job, int64_t ns)
 {
-    WITH_JOB_LOCK_GUARD() {
-        assert(job->busy);
+    int64_t end_ns = qemu_clock_get_ns(QEMU_CLOCK_REALTIME) + ns;
 
+    JOB_LOCK_GUARD();
+    assert(job->busy);
+
+    do {
         /* Check cancellation *before* setting busy = false, too! */
         if (job_is_cancelled_locked(job)) {
             return;
         }
 
         if (!job_should_pause_locked(job)) {
-            job_do_yield_locked(job,
-                                qemu_clock_get_ns(QEMU_CLOCK_REALTIME) + ns);
+            job_do_yield_locked(job, end_ns);
        }
-    }
 
-    job_pause_point(job);
+        job_pause_point_locked(job);
+    } while (qemu_clock_get_ns(QEMU_CLOCK_REALTIME) < end_ns);
 }
 
 /* Assumes the job_mutex is held */
diff --git a/tests/qemu-iotests/030 b/tests/qemu-iotests/030
index 567bf1da67..969b246d0f 100755
--- a/tests/qemu-iotests/030
+++ b/tests/qemu-iotests/030
@@ -248,7 +248,7 @@ class TestParallelOps(iotests.QMPTestCase):
             pending_jobs.append(job_id)
             result = self.vm.qmp('block-stream', device=node_name,
                                  job_id=job_id, bottom=f'node{i-1}',
-                                 speed=1024)
+                                 speed=256*1024)
             self.assert_qmp(result, 'return', {})
 
         # Do this in reverse: After unthrottling them, some jobs may finish
diff --git a/tests/qemu-iotests/151 b/tests/qemu-iotests/151
index 93d14193d0..5998beb5c4 100755
--- a/tests/qemu-iotests/151
+++ b/tests/qemu-iotests/151
@@ -129,7 +129,7 @@ class TestActiveMirror(iotests.QMPTestCase):
                              sync='full',
                              copy_mode='write-blocking',
                              buf_size=(1048576 // 4),
-                             speed=1)
+                             speed=1024*1024)
         self.assert_qmp(result, 'return', {})
 
         # Start an unaligned request to a dirty area
@@ -154,7 +154,7 @@ class TestActiveMirror(iotests.QMPTestCase):
                              target='target-node',
                              sync='full',
                              copy_mode='write-blocking',
-                             speed=1)
+                             speed=1024*1024)
 
         self.vm.hmp_qemu_io('source', 'break write_aio A')
         self.vm.hmp_qemu_io('source', 'aio_write 0 1M')  # 1
diff --git a/tests/unit/test-blockjob.c b/tests/unit/test-blockjob.c
index c926db7b5d..0b3010b94d 100644
--- a/tests/unit/test-blockjob.c
+++ b/tests/unit/test-blockjob.c
@@ -184,7 +184,7 @@ static int coroutine_fn cancel_job_run(Job *job, Error **errp)
             job_transition_to_ready(&s->common.job);
         }
 
-        job_sleep_ns(&s->common.job, 100000);
+        job_sleep_ns(&s->common.job, 100);
     }
 
     return 0;
```
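The heart of the job.c change above is the do/while around the yield: the absolute deadline end_ns is computed once, and any early wakeup (a drain re-entering the job, a pause point, anything else) loops back into another wait until the deadline has genuinely passed. Here is a minimal standalone sketch of that pattern using plain POSIX primitives; now_ns() and interruptible_wait() are stand-ins for QEMU's clock and coroutine machinery, not real QEMU APIs:

```c
#include <stdint.h>
#include <stdio.h>
#include <time.h>

/* Stand-in for qemu_clock_get_ns(QEMU_CLOCK_REALTIME). */
static int64_t now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (int64_t)ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

/*
 * Stand-in for one yield-with-timer that can be cut short: here we
 * simulate an external wakeup (e.g. a drain) after at most 10 ms.
 */
static void interruptible_wait(int64_t until_ns)
{
    int64_t slice = until_ns - now_ns();
    if (slice > 10000000LL) {
        slice = 10000000LL; /* the "drain" wakes us early */
    }
    if (slice > 0) {
        struct timespec ts = { (time_t)(slice / 1000000000LL),
                               (long)(slice % 1000000000LL) };
        nanosleep(&ts, NULL);
    }
}

/*
 * The pattern the patch introduces in job_sleep_ns(): compute the
 * absolute deadline once, then keep re-waiting until it has really
 * passed, however many early wakeups happen in between.
 */
static void sleep_full_duration(int64_t ns)
{
    int64_t end_ns = now_ns() + ns;

    do {
        interruptible_wait(end_ns); /* may return early, like a drain */
    } while (now_ns() < end_ns);
}

int main(void)
{
    int64_t start = now_ns();
    sleep_full_duration(100000000LL); /* 100 ms despite 10 ms wakeups */
    printf("slept %.0f ms\n", (now_ns() - start) / 1e6);
    return 0;
}
```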
If a drain happens while a job is sleeping, the timeout
gets cancelled and the job continues once the drain ends.
This is especially bad for the sleep performed in commit and stream
jobs, since that is dictated by ratelimit to maintain a certain speed.

Basically the execution path is the following:
1. the job calls job_sleep_ns() and yields, with a timer armed to fire
   in @ns ns.
2. meanwhile, a drain is executed, and child_job_drained_{begin/end}
   could be executed as ->drained_begin() and ->drained_end() callbacks.
   Therefore child_job_drained_begin() enters the job, which continues
   execution in job_sleep_ns() and calls job_pause_point_locked().
3. job_pause_point_locked() detects that we are in the middle of a
   drain, so it first deletes any existing timer and then yields again,
   waiting for ->drained_end().
4. Once draining is finished, child_job_drained_end() runs and resumes
   the job. At this point the timer has been lost, and we simply resume
   without checking whether enough time has passed.

This fix implies that from now on, job_sleep_ns() will force the job to
sleep for @ns, even if it is woken up (purposefully or not) in the
middle of the sleep. Therefore qemu-iotests tests might run a little
slower, depending on the speed of the job. Setting a job speed to
values like "1" is not allowed anymore (unless you want to wait
forever).

Because of this fix, test_stream_parallel() in tests/qemu-iotests/030
takes too long, since the speed of the stream job is just 1024 and it
previously skipped all the waiting thanks to the drains. Increase the
speed to 256 * 1024. Exactly the same happens for test 151.

Instead, we need to sleep less in test_cancel_ready() in
tests/unit/test-blockjob.c, so that the job is able to exit the sleep
and transition to ready before the main loop asserts.

Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
---
 job.c                      | 28 +++++++++++++++++-----------
 tests/qemu-iotests/030     |  2 +-
 tests/qemu-iotests/151     |  4 ++--
 tests/unit/test-blockjob.c |  2 +-
 4 files changed, 21 insertions(+), 15 deletions(-)
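For completeness, this is roughly how the new contract looks from a caller's point of view. The loop below is a simplified, hypothetical sketch in the spirit of the commit/stream jobs; apart from job_sleep_ns() itself, every type and helper here (copy_one_chunk, ratelimit_delay_ns, the bare prototypes) is an assumption for illustration, not QEMU's real API:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical stand-ins; only job_sleep_ns()'s behavior is from the patch. */
typedef struct Job Job;
bool job_is_cancelled(Job *job);
void job_sleep_ns(Job *job, int64_t ns);
int copy_one_chunk(Job *job);         /* copy some data; <0 on error */
int64_t ratelimit_delay_ns(Job *job); /* stall needed to honor the speed */

/*
 * Simplified rate-limited main loop. After the patch, job_sleep_ns()
 * blocks for the full delay even if a drain wakes the job early, so the
 * configured speed is a real bound on throughput, which is exactly why
 * the iotests above had to raise their job speeds.
 */
static int example_job_run(Job *job)
{
    int64_t delay_ns = 0;

    while (!job_is_cancelled(job)) {
        job_sleep_ns(job, delay_ns); /* now sleeps all of delay_ns */
        if (copy_one_chunk(job) < 0) {
            return -1;
        }
        delay_ns = ratelimit_delay_ns(job);
    }
    return 0;
}
```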