Message ID: 53430D4A.7070308@web.de
State: New
On 7 April 2014 21:40, Andreas Färber <andreas.faerber@web.de> wrote:
> On 07.04.2014 21:32, Andreas Färber wrote:
>> I tested .bswap = false - that fixes ppc64 host but breaks x86_64 host.
>
> Same results for the following patch (x86_64 broken, ppc64 fixed):
>
> diff --git a/hw/pci-host/prep.c b/hw/pci-host/prep.c
> index d3e746c..fd3956f 100644
> --- a/hw/pci-host/prep.c
> +++ b/hw/pci-host/prep.c
> @@ -177,7 +177,7 @@ static void raven_io_write(void *opaque, hwaddr addr,
>  static const MemoryRegionOps raven_io_ops = {
>      .read = raven_io_read,
>      .write = raven_io_write,
> -    .endianness = DEVICE_LITTLE_ENDIAN,
> +    .endianness = DEVICE_NATIVE_ENDIAN,
>      .impl.max_access_size = 4,
>      .valid.unaligned = true,
>  };

Unsurprisingly, since both of those changes add or remove an extra endianness swap on every host. What you're looking for is the point in the chain where we do something that differs depending on the endianness of the host. You could stick in debug printfs, or just look around in gdb, to find out whether, for instance, the values being passed into the raven_io_read/write functions are different on the two hosts: if so, the problem is somewhere further up the call stack...

thanks
-- PMM