| Message ID | 4FD5C89D.2010408@siemens.com |
|---|---|
| State | New |
On 06/11/2012 01:29 PM, Jan Kiszka wrote:
> On 2012-06-11 12:07, Avi Kivity wrote:
>> On 06/06/2012 05:28 PM, Jan Kiszka wrote:
>>> Due to an offset between the clock used to generate the in-kernel
>>> count_load_time (CLOCK_MONOTONIC) and the clock used for processing this
>>> in userspace (vm_clock), reading back the output of PIT channel 2 via
>>> port 0x61 was broken. One use case that suffered from it was the CPU
>>> frequency calibration of SeaBIOS, which also affected IDE/AHCI timeouts.
>>>
>>> This fixes it by calibrating the offset between both clocks on
>>> kvm_pit_get and adjusting the kernel value before saving it in the
>>> userspace state. As the calibration only works while the vm_clock is
>>> running, we cache the in-kernel state across stopped phases.
>>
>> Applied, thanks.
>>
>>> +    clock_offset = LLONG_MAX;
>>
>> INT64_MAX would be more strictly correct, but in practice it makes no
>> difference.
>
> Was looking for this, just not long enough. Need to print some cheat
> sheet. However, let's clean this up immediately:
>
> ---8<---
>
> clock_offset is int64_t.

Thanks, folded.
diff --git a/hw/kvm/i8254.c b/hw/kvm/i8254.c
index 04333d8..c5d3711 100644
--- a/hw/kvm/i8254.c
+++ b/hw/kvm/i8254.c
@@ -62,7 +62,7 @@ static void kvm_pit_get(PITCommonState *pit)
      * kvm_pit_channel_state::count_load_time, and vm_clock. Take the
      * minimum of several samples to filter out scheduling noise.
      */
-    clock_offset = LLONG_MAX;
+    clock_offset = INT64_MAX;
     for (i = 0; i < CALIBRATION_ROUNDS; i++) {
         offset = qemu_get_clock_ns(vm_clock);
         clock_gettime(CLOCK_MONOTONIC, &ts);