hvf: arm: Ignore cache operations on MMIO

Message ID 20211025191349.52992-1-agraf@csgraf.de
State New
Series hvf: arm: Ignore cache operations on MMIO

Commit Message

Alexander Graf Oct. 25, 2021, 7:13 p.m. UTC
Apple's Hypervisor.framework forwards cache maintenance operations as MMIO
traps into user space. For MMIO regions, however, these operations are
meaningless: there is no cache attached to them.

So let's filter out SYS instructions on data exits and treat them as nops.

This fixes booting OpenBSD as a guest.

Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reported-by: AJ Barris <AwlsomeAlex@github.com>
---
 target/arm/hvf/hvf.c | 32 ++++++++++++++++++++++++++++++++
 1 file changed, 32 insertions(+)

Comments

Philippe Mathieu-Daudé Oct. 25, 2021, 8:57 p.m. UTC | #1
On 10/25/21 21:13, Alexander Graf wrote:
> Apple's Hypervisor.framework forwards cache maintenance operations as MMIO
> traps into user space. For MMIO regions, however, these operations are
> meaningless: there is no cache attached to them.
> 
> So let's filter out SYS instructions on data exits and treat them as nops.
> 
> This fixes booting OpenBSD as a guest.
> 
> Signed-off-by: Alexander Graf <agraf@csgraf.de>
> Reported-by: AJ Barris <AwlsomeAlex@github.com>
> ---
>  target/arm/hvf/hvf.c | 32 ++++++++++++++++++++++++++++++++
>  1 file changed, 32 insertions(+)
> 
> diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
> index bff3e0cde7..46ff4892a7 100644
> --- a/target/arm/hvf/hvf.c
> +++ b/target/arm/hvf/hvf.c
> @@ -1098,6 +1098,33 @@ static void hvf_sync_vtimer(CPUState *cpu)
>      }
>  }
>  
> +static bool hvf_emulate_insn(CPUState *cpu)
> +{
> +    ARMCPU *arm_cpu = ARM_CPU(cpu);
> +    CPUARMState *env = &arm_cpu->env;
> +    uint32_t insn;
> +
> +    /*
> +     * We ran into an instruction that traps for data, but is not
> +     * hardware predecoded. This should not ever happen for well
> +     * behaved guests. Let's try to see if we can somehow rescue
> +     * the situation.
> +     */
> +
> +    cpu_synchronize_state(cpu);
> +    if (cpu_memory_rw_debug(cpu, env->pc, &insn, 4, 0)) {

What about using cpu_ldl_data()?

> +        /* Could not read the instruction */
> +        return false;
> +    }
> +
> +    if ((insn & 0xffc00000) == 0xd5000000) {

Could there be an endianness issue here?

Otherwise,
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

> +        /* MSR/MRS/SYS/SYSL - happens for cache ops which are nops on data */
> +        return true;
> +    }
> +
> +    return false;
> +}
> +
>  int hvf_vcpu_exec(CPUState *cpu)
>  {
>      ARMCPU *arm_cpu = ARM_CPU(cpu);
> @@ -1156,6 +1183,11 @@ int hvf_vcpu_exec(CPUState *cpu)
>                               hvf_exit->exception.physical_address, isv,
>                               iswrite, s1ptw, len, srt);
>  
> +        if (!isv) {
> +            g_assert(hvf_emulate_insn(cpu));
> +            advance_pc = true;
> +            break;
> +        }
>          assert(isv);
>  
>          if (iswrite) {
>
Richard Henderson Oct. 26, 2021, 12:14 a.m. UTC | #2
On 10/25/21 12:13 PM, Alexander Graf wrote:
> +    /*
> +     * We ran into an instruction that traps for data, but is not
> +     * hardware predecoded. This should not ever happen for well
> +     * behaved guests. Let's try to see if we can somehow rescue
> +     * the situation.
> +     */
> +
> +    cpu_synchronize_state(cpu);
> +    if (cpu_memory_rw_debug(cpu, env->pc, &insn, 4, 0)) {

This isn't correct, since this would be a physical address access, and env->pc is virtual.

Phil's idea of cpu_ldl_data may be correct, and cpu_ldl_code may be slightly more so, 
because we got EC_DATAABORT not EC_INSNABORT, which means that the virtual address at 
env->pc is mapped and executable.

However, in the event that there's some sort of race condition in between this data abort 
and hvf stopping all threads for the vm exit, by which the page tables could have been 
modified between here and there, then cpu_ldl_code *could* produce another exception.

In which case the interface that gdbstub uses, cc->memory_rw_debug, will be most correct.


> @@ -1156,6 +1183,11 @@ int hvf_vcpu_exec(CPUState *cpu)
>                                hvf_exit->exception.physical_address, isv,
>                                iswrite, s1ptw, len, srt);
>   
> +        if (!isv) {
> +            g_assert(hvf_emulate_insn(cpu));
> +            advance_pc = true;
> +            break;
> +        }
>           assert(isv);

Ouch.  HVF really passes along an invalid syndrome?  I was expecting that you'd be able to 
avoid all of the instruction parsing and check syndrome.cm (bit 8) for a cache management 
instruction.


r~
Alexander Graf Oct. 26, 2021, 7:09 a.m. UTC | #3
On 26.10.21 02:14, Richard Henderson wrote:
> On 10/25/21 12:13 PM, Alexander Graf wrote:
>> +    /*
>> +     * We ran into an instruction that traps for data, but is not
>> +     * hardware predecoded. This should not ever happen for well
>> +     * behaved guests. Let's try to see if we can somehow rescue
>> +     * the situation.
>> +     */
>> +
>> +    cpu_synchronize_state(cpu);
>> +    if (cpu_memory_rw_debug(cpu, env->pc, &insn, 4, 0)) {
>
> This isn't correct, since this would be a physical address access, and 
> env->pc is virtual.


Yes, hence cpu_memory_rw_debug which accesses virtual memory:

https://git.qemu.org/?p=qemu.git;a=blob;f=softmmu/physmem.c#l3418


>
> Phil's idea of cpu_ldl_data may be correct, and cpu_ldl_code may be 
> slightly more so, because we got EC_DATAABORT not EC_INSNABORT, which 
> means that the virtual address at env->pc is mapped and executable.
>
> However, in the event that there's some sort of race condition in 
> between this data abort and hvf stopping all threads for the vm exit, 
> by which the page tables could have been modified between here and 
> there, then cpu_ldl_code *could* produce another exception.
>
> In which case the interface that gdbstub uses, cc->memory_rw_debug, 
> will be most correct.


I don't believe that one is implemented for arm, correct?


>
>
>> @@ -1156,6 +1183,11 @@ int hvf_vcpu_exec(CPUState *cpu)
>>                               hvf_exit->exception.physical_address, isv,
>>                               iswrite, s1ptw, len, srt);
>>  
>> +        if (!isv) {
>> +            g_assert(hvf_emulate_insn(cpu));
>> +            advance_pc = true;
>> +            break;
>> +        }
>>           assert(isv);
>
> Ouch.  HVF really passes along an invalid syndrome?  I was expecting 
> that you'd be able to avoid all of the instruction parsing and check 
> syndrome.cm (bit 8) for a cache management instruction.


That's a very subtle way of telling me I'm stupid :). Thanks for the 
catch! Using the CM bit is obviously way better. Let me build v2.


Alex
Philippe Mathieu-Daudé Oct. 26, 2021, 8:56 a.m. UTC | #4
On 10/26/21 09:09, Alexander Graf wrote:
> On 26.10.21 02:14, Richard Henderson wrote:
>> On 10/25/21 12:13 PM, Alexander Graf wrote:

>>> @@ -1156,6 +1183,11 @@ int hvf_vcpu_exec(CPUState *cpu)
>>>                               hvf_exit->exception.physical_address, isv,
>>>                               iswrite, s1ptw, len, srt);
>>>  
>>> +        if (!isv) {
>>> +            g_assert(hvf_emulate_insn(cpu));
>>> +            advance_pc = true;
>>> +            break;
>>> +        }
>>>           assert(isv);
>>
>> Ouch.  HVF really passes along an invalid syndrome?  I was expecting
>> that you'd be able to avoid all of the instruction parsing and check
>> syndrome.cm (bit 8) for a cache management instruction.
> 
> 
> That's a very subtle way of telling me I'm stupid :). Thanks for the
> catch! Using the CM bit is obviously way better. Let me build v2.

Having given my R-b I take half of the blame.

Patch

diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
index bff3e0cde7..46ff4892a7 100644
--- a/target/arm/hvf/hvf.c
+++ b/target/arm/hvf/hvf.c
@@ -1098,6 +1098,33 @@ static void hvf_sync_vtimer(CPUState *cpu)
     }
 }
 
+static bool hvf_emulate_insn(CPUState *cpu)
+{
+    ARMCPU *arm_cpu = ARM_CPU(cpu);
+    CPUARMState *env = &arm_cpu->env;
+    uint32_t insn;
+
+    /*
+     * We ran into an instruction that traps for data, but is not
+     * hardware predecoded. This should not ever happen for well
+     * behaved guests. Let's try to see if we can somehow rescue
+     * the situation.
+     */
+
+    cpu_synchronize_state(cpu);
+    if (cpu_memory_rw_debug(cpu, env->pc, &insn, 4, 0)) {
+        /* Could not read the instruction */
+        return false;
+    }
+
+    if ((insn & 0xffc00000) == 0xd5000000) {
+        /* MSR/MRS/SYS/SYSL - happens for cache ops which are nops on data */
+        return true;
+    }
+
+    return false;
+}
+
 int hvf_vcpu_exec(CPUState *cpu)
 {
     ARMCPU *arm_cpu = ARM_CPU(cpu);
@@ -1156,6 +1183,11 @@ int hvf_vcpu_exec(CPUState *cpu)
                              hvf_exit->exception.physical_address, isv,
                              iswrite, s1ptw, len, srt);
 
+        if (!isv) {
+            g_assert(hvf_emulate_insn(cpu));
+            advance_pc = true;
+            break;
+        }
         assert(isv);
 
         if (iswrite) {