Message ID: 4FEDD9E7.9040702@siemens.com
State: New
On 06/29/2012 07:37 PM, Jan Kiszka wrote:
> Instead of flushing pending coalesced MMIO requests on every vmexit,
> this provides a mechanism to selectively flush when memory regions
> related to the coalesced one are accessed. This first of all includes
> the coalesced region itself but can also be applied to other regions,
> e.g. of the same device, by calling memory_region_set_flush_coalesced.

Looks fine.

I have a hard time deciding whether this should go through the kvm tree
or memory tree. Anthony, perhaps you can commit it directly to avoid
the livelock?

Reviewed-by: Avi Kivity <avi@redhat.com>
On 07/02/2012 12:07 PM, Avi Kivity wrote:
>
> Reviewed-by: Avi Kivity <avi@redhat.com>

(for the entire patchset)
On 2012-07-02 11:07, Avi Kivity wrote:
> On 06/29/2012 07:37 PM, Jan Kiszka wrote:
>> Instead of flushing pending coalesced MMIO requests on every vmexit,
>> this provides a mechanism to selectively flush when memory regions
>> related to the coalesced one are accessed. This first of all includes
>> the coalesced region itself but can also be applied to other regions,
>> e.g. of the same device, by calling memory_region_set_flush_coalesced.
>
> Looks fine.
>
> I have a hard time deciding whether this should go through the kvm tree
> or memory tree. Anthony, perhaps you can commit it directly to avoid
> the livelock?
>
> Reviewed-by: Avi Kivity <avi@redhat.com>
>

Anthony, ping?

Jan
On 2012-07-10 12:41, Jan Kiszka wrote:
> On 2012-07-02 11:07, Avi Kivity wrote:
>> On 06/29/2012 07:37 PM, Jan Kiszka wrote:
>>> Instead of flushing pending coalesced MMIO requests on every vmexit,
>>> this provides a mechanism to selectively flush when memory regions
>>> related to the coalesced one are accessed. This first of all includes
>>> the coalesced region itself but can also be applied to other regions,
>>> e.g. of the same device, by calling memory_region_set_flush_coalesced.
>>
>> Looks fine.
>>
>> I have a hard time deciding whether this should go through the kvm tree
>> or memory tree. Anthony, perhaps you can commit it directly to avoid
>> the livelock?
>>
>> Reviewed-by: Avi Kivity <avi@redhat.com>
>>
>
> Anthony, ping?

Argh, missed that this series was forgotten. Patch 1 is a bug fix, will
resend it separately so that it can make it into 1.2. Will repost the
rest once master reopens.

Jan
On 08/17/2012 01:55 PM, Jan Kiszka wrote:
> On 2012-07-10 12:41, Jan Kiszka wrote:
>> On 2012-07-02 11:07, Avi Kivity wrote:
>>> On 06/29/2012 07:37 PM, Jan Kiszka wrote:
>>>> Instead of flushing pending coalesced MMIO requests on every vmexit,
>>>> this provides a mechanism to selectively flush when memory regions
>>>> related to the coalesced one are accessed. This first of all includes
>>>> the coalesced region itself but can also be applied to other regions,
>>>> e.g. of the same device, by calling memory_region_set_flush_coalesced.
>>>
>>> Looks fine.
>>>
>>> I have a hard time deciding whether this should go through the kvm tree
>>> or memory tree. Anthony, perhaps you can commit it directly to avoid
>>> the livelock?
>>>
>>> Reviewed-by: Avi Kivity <avi@redhat.com>
>>>
>>
>> Anthony, ping?
>
> Argh, missed that this series was forgotten. Patch 1 is a bug fix, will
> resend it separately so that it can make it into 1.2. Will repost the
> rest once master reopens.

My fault, I should have just taken it into memory/core and sent a pull
request. Sorry about that.
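To make the intended usage concrete, here is a minimal sketch of how a
device might opt a related region into the new flushing behaviour. The
device name, region names, sizes, and stub callbacks are illustrative
only, not taken from this series; only memory_region_add_coalescing()
and memory_region_set_flush_coalesced() are real API, with QEMU 1.1-era
signatures (target_phys_addr_t, as in the patch below):

#include "memory.h"

typedef struct MyDeviceState {
    MemoryRegion fb;    /* framebuffer: writes are coalesced */
    MemoryRegion ctrl;  /* control registers: must see flushed state */
} MyDeviceState;

/* Stub accessors; a real device would implement these. */
static uint64_t my_mmio_read(void *opaque, target_phys_addr_t addr,
                             unsigned size)
{
    return 0;
}

static void my_mmio_write(void *opaque, target_phys_addr_t addr,
                          uint64_t val, unsigned size)
{
}

static const MemoryRegionOps my_mmio_ops = {
    .read = my_mmio_read,
    .write = my_mmio_write,
    .endianness = DEVICE_NATIVE_ENDIAN,
};

static void my_device_init_mmio(MyDeviceState *s)
{
    memory_region_init_io(&s->fb, &my_mmio_ops, s, "mydev-fb", 0x10000);
    memory_region_init_io(&s->ctrl, &my_mmio_ops, s, "mydev-ctrl", 0x100);

    /* Coalesce framebuffer writes; with this series, the call also
     * sets flush_coalesced_mmio on the region itself. */
    memory_region_add_coalescing(&s->fb, 0, 0x10000);

    /* Control-register accesses must observe all buffered framebuffer
     * writes, so flush the coalesced MMIO buffer before they run. */
    memory_region_set_flush_coalesced(&s->ctrl);
}

With this setup, the coalesced MMIO buffer is drained only on accesses
to these two regions, instead of on every vmexit.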
diff --git a/memory.c b/memory.c
index aab4a31..7221c3c 100644
--- a/memory.c
+++ b/memory.c
@@ -311,6 +311,9 @@ static void memory_region_read_accessor(void *opaque,
     MemoryRegion *mr = opaque;
     uint64_t tmp;
 
+    if (mr->flush_coalesced_mmio) {
+        qemu_flush_coalesced_mmio_buffer();
+    }
     tmp = mr->ops->read(mr->opaque, addr, size);
     *value |= (tmp & mask) << shift;
 }
@@ -325,6 +328,9 @@ static void memory_region_write_accessor(void *opaque,
     MemoryRegion *mr = opaque;
     uint64_t tmp;
 
+    if (mr->flush_coalesced_mmio) {
+        qemu_flush_coalesced_mmio_buffer();
+    }
     tmp = (*value >> shift) & mask;
     mr->ops->write(mr->opaque, addr, tmp, size);
 }
@@ -826,6 +832,7 @@ void memory_region_init(MemoryRegion *mr,
     mr->dirty_log_mask = 0;
     mr->ioeventfd_nb = 0;
     mr->ioeventfds = NULL;
+    mr->flush_coalesced_mmio = false;
 }
 
 static bool memory_region_access_valid(MemoryRegion *mr,
@@ -1176,12 +1183,16 @@ void memory_region_add_coalescing(MemoryRegion *mr,
     cmr->addr = addrrange_make(int128_make64(offset), int128_make64(size));
     QTAILQ_INSERT_TAIL(&mr->coalesced, cmr, link);
     memory_region_update_coalesced_range(mr);
+    memory_region_set_flush_coalesced(mr);
 }
 
 void memory_region_clear_coalescing(MemoryRegion *mr)
 {
     CoalescedMemoryRange *cmr;
 
+    qemu_flush_coalesced_mmio_buffer();
+    mr->flush_coalesced_mmio = false;
+
     while (!QTAILQ_EMPTY(&mr->coalesced)) {
         cmr = QTAILQ_FIRST(&mr->coalesced);
         QTAILQ_REMOVE(&mr->coalesced, cmr, link);
@@ -1190,6 +1201,19 @@ void memory_region_clear_coalescing(MemoryRegion *mr)
     memory_region_update_coalesced_range(mr);
 }
 
+void memory_region_set_flush_coalesced(MemoryRegion *mr)
+{
+    mr->flush_coalesced_mmio = true;
+}
+
+void memory_region_clear_flush_coalesced(MemoryRegion *mr)
+{
+    qemu_flush_coalesced_mmio_buffer();
+    if (QTAILQ_EMPTY(&mr->coalesced)) {
+        mr->flush_coalesced_mmio = false;
+    }
+}
+
 void memory_region_add_eventfd(MemoryRegion *mr,
                                target_phys_addr_t addr,
                                unsigned size,
diff --git a/memory.h b/memory.h
index 740c48e..77167d8 100644
--- a/memory.h
+++ b/memory.h
@@ -133,6 +133,7 @@ struct MemoryRegion {
     bool enabled;
     bool rom_device;
     bool warning_printed; /* For reservations */
+    bool flush_coalesced_mmio;
    MemoryRegion *alias;
     target_phys_addr_t alias_offset;
     unsigned priority;
@@ -521,6 +522,31 @@ void memory_region_add_coalescing(MemoryRegion *mr,
 void memory_region_clear_coalescing(MemoryRegion *mr);
 
 /**
+ * memory_region_set_flush_coalesced: Enforce memory coalescing flush before
+ *                                    accesses.
+ *
+ * Ensure that pending coalesced MMIO requests are flushed before the memory
+ * region is accessed. This property is automatically enabled for all regions
+ * passed to memory_region_set_coalescing() and memory_region_add_coalescing().
+ *
+ * @mr: the memory region to be updated.
+ */
+void memory_region_set_flush_coalesced(MemoryRegion *mr);
+
+/**
+ * memory_region_clear_flush_coalesced: Disable memory coalescing flush before
+ *                                      accesses.
+ *
+ * Clear the automatic coalesced MMIO flushing enabled via
+ * memory_region_set_flush_coalesced. Note that this service has no effect on
+ * memory regions that have MMIO coalescing enabled for themselves. For them,
+ * automatic flushing will stop once coalescing is disabled.
+ *
+ * @mr: the memory region to be updated.
+ */
+void memory_region_clear_flush_coalesced(MemoryRegion *mr);
+
+/**
  * memory_region_add_eventfd: Request an eventfd to be triggered when a word
  * is written to a location.
  *
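A note on the clear-side semantics implemented above, shown as a
hypothetical sequence continuing the earlier sketch (s->fb has
coalescing of its own, s->ctrl only carries the flush flag):

/* fb has coalesced ranges, so the flush flag is pinned: */
memory_region_clear_flush_coalesced(&s->fb);   /* flushes once; flag stays set */
memory_region_clear_coalescing(&s->fb);        /* flushes and drops the flag */

/* ctrl has no coalesced ranges, so the flag toggles freely: */
memory_region_clear_flush_coalesced(&s->ctrl); /* flushes; flag cleared */

This matches the v3 changelog entry below: clearing the flush property
is refused while the region still has coalescing enabled.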
Instead of flushing pending coalesced MMIO requests on every vmexit,
this provides a mechanism to selectively flush when memory regions
related to the coalesced one are accessed. This first of all includes
the coalesced region itself but can also be applied to other regions,
e.g. of the same device, by calling memory_region_set_flush_coalesced.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
---
Changes in v3:
 - refuse to clear flush_coalesced_mmio for regions that have
   coalescing enabled

 memory.c |   24 ++++++++++++++++++++++++
 memory.h |   26 ++++++++++++++++++++++++++
 2 files changed, 50 insertions(+), 0 deletions(-)