Message ID: 1378247351-8446-6-git-send-email-hpoussin@reactos.org
State: New
On 04/09/2013 00:29, Hervé Poussineau wrote:
> PCI I/O region is 0x3f800000 bytes starting at 0x80000000.
> Do not use global QEMU I/O region, which is only 64KB.

You can make the global QEMU I/O region larger, that's not a problem.

Not using address_space_io is fine as well, but it's a separate change
and I doubt it is a good idea to do it for a single target; if you do it
for all non-x86 PCI bridges, and move the initialization of
address_space_io to target-i386, that's a different story of course.

Paolo

> Signed-off-by: Hervé Poussineau <hpoussin@reactos.org>
> ---
>  hw/pci-host/prep.c | 15 +++++++++------
>  1 file changed, 9 insertions(+), 6 deletions(-)
>
> diff --git a/hw/pci-host/prep.c b/hw/pci-host/prep.c
> index 95fa2ea..af0bf2b 100644
> --- a/hw/pci-host/prep.c
> +++ b/hw/pci-host/prep.c
> @@ -53,6 +53,7 @@ typedef struct PRePPCIState {
>
>      qemu_irq irq[PCI_NUM_PINS];
>      PCIBus pci_bus;
> +    MemoryRegion pci_io;
>      MemoryRegion pci_intack;
>      RavenPCIState pci_dev;
>  } PREPPCIState;
> @@ -136,13 +137,11 @@ static void raven_pcihost_realizefn(DeviceState *d, Error **errp)
>
>      memory_region_init_io(&h->conf_mem, OBJECT(h), &pci_host_conf_be_ops, s,
>                            "pci-conf-idx", 1);
> -    sysbus_add_io(dev, 0xcf8, &h->conf_mem);
> -    sysbus_init_ioports(&h->busdev, 0xcf8, 1);
> +    memory_region_add_subregion(&s->pci_io, 0xcf8, &h->conf_mem);
>
>      memory_region_init_io(&h->data_mem, OBJECT(h), &pci_host_data_be_ops, s,
>                            "pci-conf-data", 1);
> -    sysbus_add_io(dev, 0xcfc, &h->data_mem);
> -    sysbus_init_ioports(&h->busdev, 0xcfc, 1);
> +    memory_region_add_subregion(&s->pci_io, 0xcfc, &h->data_mem);
>
>      memory_region_init_io(&h->mmcfg, OBJECT(s), &PPC_PCIIO_ops, s, "pciio", 0x00400000);
>      memory_region_add_subregion(address_space_mem, 0x80800000, &h->mmcfg);
> @@ -160,11 +159,15 @@ static void raven_pcihost_initfn(Object *obj)
>      PCIHostState *h = PCI_HOST_BRIDGE(obj);
>      PREPPCIState *s = RAVEN_PCI_HOST_BRIDGE(obj);
>      MemoryRegion *address_space_mem = get_system_memory();
> -    MemoryRegion *address_space_io = get_system_io();
>      DeviceState *pci_dev;
>
> +    memory_region_init(&s->pci_io, obj, "pci-io", 0x3f800000);
> +
> +    /* CPU address space */
> +    memory_region_add_subregion(address_space_mem, 0x80000000, &s->pci_io);
>      pci_bus_new_inplace(&s->pci_bus, DEVICE(obj), NULL,
> -                        address_space_mem, address_space_io, 0, TYPE_PCI_BUS);
> +                        address_space_mem, &s->pci_io, 0, TYPE_PCI_BUS);
> +
>      h->bus = &s->pci_bus;
>
>      object_initialize(&s->pci_dev, TYPE_RAVEN_PCI_DEVICE);
On 4 September 2013 07:01, Paolo Bonzini <pbonzini@redhat.com> wrote:
> Not using address_space_io is fine as well, but it's a separate change
> and I doubt it is a good idea to do it for a single target; if you do it
> for all non-x86 PCI bridges, and move the initialization of
> address_space_io to target-i386, that's a different story of course.

What's the problem with doing it for a single target? That's what I did
for versatilepb. I think any CPU that doesn't have inb/outb type
instructions (x86 does, and I think also alpha but I forget) should not
be using address_space_io; but the easiest way to get there is to
convert the PCI bridges one at a time as we have maintenance effort to
do so.

-- PMM
On 04/09/2013 09:22, Peter Maydell wrote:
> > Not using address_space_io is fine as well, but it's a separate change
> > and I doubt it is a good idea to do it for a single target; if you do it
> > for all non-x86 PCI bridges, and move the initialization of
> > address_space_io to target-i386, that's a different story of course.
> What's the problem with doing it for a single target? That's
> what I did for versatilepb. I think any CPU that doesn't have
> inb/outb type instructions (x86 does, and I think also alpha
> but I forget)

Actually, Alpha is also not using address_space_io at all as far as I
can see.

> should not be using address_space_io; but the
> easiest way to get there is to convert the PCI bridges one at
> a time as we have maintenance effort to do so.

I'm not against the patch, but there are fewer than ten host bridges and
most of them should be tested by "make check", so I would prefer to have
a plan for making things consistent.

Paolo
On 4 September 2013 09:11, Paolo Bonzini <pbonzini@redhat.com> wrote:
> On 04/09/2013 09:22, Peter Maydell wrote:
>> should not be using address_space_io; but the
>> easiest way to get there is to convert the PCI bridges one at
>> a time as we have maintenance effort to do so.
>
> I'm not against the patch, but there are fewer than ten host bridges and
> most of them should be tested by "make check", so I would prefer to have
> a plan for making things consistent.

My plan for that goes:
1. where people are overhauling a host bridge (ie in a patchset like
   this one) allow them to make the changes that move in the right
   direction
2. look at how many other bridges remain after that
3. fix the other bridges if anybody has time and effort

(Does 'make check' really test all the host bridges? This doesn't
seem very likely to me.)

-- PMM
On 04/09/2013 10:25, Peter Maydell wrote:
> My plan for that goes:
> 1. where people are overhauling a host bridge (ie in a patchset like
>    this one) allow them to make the changes that move in the right
>    direction
> 2. look at how many other bridges remain after that
> 3. fix the other bridges if anybody has time and effort
>
> (Does 'make check' really test all the host bridges? This doesn't
> seem very likely to me.)

The endianness test does reads and writes to the I/O address space of
most bridges.

Paolo
On 4 September 2013 09:31, Paolo Bonzini <pbonzini@redhat.com> wrote:
> The endianness test does reads and writes to the I/O address space of
> most bridges.

Nice. It looks like it's using an ISA device though, which is a bit of
a roundabout way of testing PCI I/O (means it can't test the versatile
PCI bridge's handling of PCI I/O access, for instance).

-- PMM
On 04.09.2013 10:25, Peter Maydell wrote:
> (Does 'make check' really test all the host bridges? This doesn't
> seem very likely to me.)

Not sure if all yet. Now that the ppc pull is merged, I'll respin and
push forward my qom-test, which covers all targets and thereby all PHBs
instantiated by default. I'm not aware of any instantiated from the
command line (apart from the mac99 DEC ugliness).

Andreas
Peter Maydell wrote:
> On 4 September 2013 09:11, Paolo Bonzini <pbonzini@redhat.com> wrote:
>> On 04/09/2013 09:22, Peter Maydell wrote:
>>> should not be using address_space_io; but the
>>> easiest way to get there is to convert the PCI bridges one at
>>> a time as we have maintenance effort to do so.
>> I'm not against the patch, but there are fewer than ten host bridges and
>> most of them should be tested by "make check", so I would prefer to have
>> a plan for making things consistent.
>
> My plan for that goes:
> 1. where people are overhauling a host bridge (ie in a patchset like
>    this one) allow them to make the changes that move in the right
>    direction
> 2. look at how many other bridges remain after that
> 3. fix the other bridges if anybody has time and effort
>
> (Does 'make check' really test all the host bridges? This doesn't
> seem very likely to me.)

Paolo, Peter, so, did we reach a consensus? Should I reuse
get_system_io(), or is having a separate MemoryRegion acceptable?
I think that creating an independent MemoryRegion is better, as I see
no reason why QEMU should provide a global I/O region, which makes
sense mostly on x86 architectures only. However, I can rework the
patches to use get_system_io() if that's what you prefer...

Hervé
On 9 September 2013 21:57, Hervé Poussineau <hpoussin@reactos.org> wrote:
> Paolo, Peter, so, did we reach a consensus? Should I reuse
> get_system_io(), or is having a separate MemoryRegion acceptable?
> I think that creating an independent MemoryRegion is better, as I see
> no reason why QEMU should provide a global I/O region, which makes
> sense mostly on x86 architectures only. However, I can rework the
> patches to use get_system_io() if that's what you prefer...

I don't think there's any reason to use get_system_io() here, and I
don't think we need to block cleaning up this host bridge on some
cleanup of every host bridge at once -- that seems unrealistic to me
since nobody really has a complete understanding of every platform that
would touch.

-- PMM
On 09/09/2013 22:57, Hervé Poussineau wrote:
> Paolo, Peter, so, did we reach a consensus? Should I reuse
> get_system_io(), or is having a separate MemoryRegion acceptable?
> I think that creating an independent MemoryRegion is better, as I see
> no reason why QEMU should provide a global I/O region, which makes
> sense mostly on x86 architectures only. However, I can rework the
> patches to use get_system_io() if that's what you prefer...

Since alpha-softmmu and versatile have established a precedent, your
patch is fine. Thanks!

Paolo
diff --git a/hw/pci-host/prep.c b/hw/pci-host/prep.c
index 95fa2ea..af0bf2b 100644
--- a/hw/pci-host/prep.c
+++ b/hw/pci-host/prep.c
@@ -53,6 +53,7 @@ typedef struct PRePPCIState {
 
     qemu_irq irq[PCI_NUM_PINS];
     PCIBus pci_bus;
+    MemoryRegion pci_io;
     MemoryRegion pci_intack;
     RavenPCIState pci_dev;
 } PREPPCIState;
@@ -136,13 +137,11 @@ static void raven_pcihost_realizefn(DeviceState *d, Error **errp)
 
     memory_region_init_io(&h->conf_mem, OBJECT(h), &pci_host_conf_be_ops, s,
                           "pci-conf-idx", 1);
-    sysbus_add_io(dev, 0xcf8, &h->conf_mem);
-    sysbus_init_ioports(&h->busdev, 0xcf8, 1);
+    memory_region_add_subregion(&s->pci_io, 0xcf8, &h->conf_mem);
 
     memory_region_init_io(&h->data_mem, OBJECT(h), &pci_host_data_be_ops, s,
                           "pci-conf-data", 1);
-    sysbus_add_io(dev, 0xcfc, &h->data_mem);
-    sysbus_init_ioports(&h->busdev, 0xcfc, 1);
+    memory_region_add_subregion(&s->pci_io, 0xcfc, &h->data_mem);
 
     memory_region_init_io(&h->mmcfg, OBJECT(s), &PPC_PCIIO_ops, s, "pciio", 0x00400000);
     memory_region_add_subregion(address_space_mem, 0x80800000, &h->mmcfg);
@@ -160,11 +159,15 @@ static void raven_pcihost_initfn(Object *obj)
     PCIHostState *h = PCI_HOST_BRIDGE(obj);
     PREPPCIState *s = RAVEN_PCI_HOST_BRIDGE(obj);
     MemoryRegion *address_space_mem = get_system_memory();
-    MemoryRegion *address_space_io = get_system_io();
     DeviceState *pci_dev;
 
+    memory_region_init(&s->pci_io, obj, "pci-io", 0x3f800000);
+
+    /* CPU address space */
+    memory_region_add_subregion(address_space_mem, 0x80000000, &s->pci_io);
     pci_bus_new_inplace(&s->pci_bus, DEVICE(obj), NULL,
-                        address_space_mem, address_space_io, 0, TYPE_PCI_BUS);
+                        address_space_mem, &s->pci_io, 0, TYPE_PCI_BUS);
+
    h->bus = &s->pci_bus;
 
     object_initialize(&s->pci_dev, TYPE_RAVEN_PCI_DEVICE);
PCI I/O region is 0x3f800000 bytes starting at 0x80000000.
Do not use global QEMU I/O region, which is only 64KB.

Signed-off-by: Hervé Poussineau <hpoussin@reactos.org>
---
 hw/pci-host/prep.c | 15 +++++++++------
 1 file changed, 9 insertions(+), 6 deletions(-)