
[2/4] target/ppc: Ensure stcx size matches larx

Message ID 20230604102858.148584-2-npiggin@gmail.com
State New
Series [1/4] target/ppc: Fix lqarx to set cpu_reserve

Commit Message

Nicholas Piggin June 4, 2023, 10:28 a.m. UTC
Differently-sized larx/stcx. pairs can succeed if the starting address
matches. Add a size check to require stcx. exactly match the larx that
established the reservation.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 target/ppc/cpu.h       | 1 +
 target/ppc/cpu_init.c  | 4 ++--
 target/ppc/translate.c | 8 ++++++++
 3 files changed, 11 insertions(+), 2 deletions(-)

Comments

Richard Henderson June 4, 2023, 4:58 p.m. UTC | #1
> @@ -3584,6 +3588,7 @@ static void gen_load_locked(DisasContext *ctx, MemOp memop)
>       gen_set_access_type(ctx, ACCESS_RES);
>       gen_addr_reg_index(ctx, t0);
>       tcg_gen_mov_tl(cpu_reserve, t0);
> +    tcg_gen_movi_tl(cpu_reserve_size, memop_size(memop));

Not that it really matters, but this produces a byte value...

> @@ -3873,6 +3879,7 @@ static void gen_lqarx(DisasContext *ctx)
>       EA = tcg_temp_new();
>       gen_addr_reg_index(ctx, EA);
>       tcg_gen_mov_tl(cpu_reserve, EA);
> +    tcg_gen_movi_tl(cpu_reserve_size, 128);

... so perhaps ideally this would be 16.

Perhaps name it reserve_length to exactly match the manual.
Otherwise,

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~
Nicholas Piggin June 5, 2023, 6:27 a.m. UTC | #2
On Sun Jun 4, 2023 at 8:28 PM AEST, Nicholas Piggin wrote:
> Differently-sized larx/stcx. pairs can succeed if the starting address
> matches. Add a size check to require stcx. exactly match the larx that
> established the reservation.

Hmm, question: reserve_addr is a VMSTATE field, but reserve_val is not
(nor reserve_size after this patch).

Blue Swirl added that with commit a456d59c20f ("VM load/save support for
PPC CPU"), and when reserve_val was added in commit 18b21a2f83a
("target-ppc: retain l{w,d}arx loaded value") it did not get migrated.

Could we end up with reserve_addr != -1, but with a bogus reserve_val,
which could then permit a stcx. incorrectly? Not entirely outlandish if
reserve_val starts out initialised to zero.

Could we just clear the reserve in cpu_post_load? It is permitted to be
lost for an implementation-specific reason. Doesn't seem necessary to
try keep it alive over a migration.

Thanks,
Nick
Richard Henderson June 19, 2023, 3:48 p.m. UTC | #3
On 6/5/23 08:27, Nicholas Piggin wrote:
> On Sun Jun 4, 2023 at 8:28 PM AEST, Nicholas Piggin wrote:
>> Differently-sized larx/stcx. pairs can succeed if the starting address
>> matches. Add a size check to require stcx. exactly match the larx that
>> established the reservation.
> 
> Hmm, question: reserve_addr is a VMSTATE field, but reserve_val is not
> (nor reserve_size after this patch).
> 
> Blue Swirl added that with commit a456d59c20f ("VM load/save support for
> PPC CPU"), and when reserve_val was added in commit 18b21a2f83a
> ("target-ppc: retain l{w,d}arx loaded value") it did not get migrated.
> 
> Could we end up with reserve_addr != -1, but with a bogus reserve_val,
> which could then permit a stcx. incorrectly? Not entirely outlandish if
> reserve_val starts out initialised to zero.
> 
> Could we just clear the reserve in cpu_post_load? It is permitted to be
> lost for an implementation-specific reason. Doesn't seem necessary to
> try keep it alive over a migration.

It's not a bad idea to flush the reservation over migrate.
You can do

-       VMSTATE_UINTTL(env.reserve_addr, PowerPCCPU),
+       VMSTATE_UNUSED(sizeof(target_long))

when making that change.

Peter, any thoughts on this?  If we're going to do one guest, we ought to
do the same for all load-locked/store-conditional guests.


r~
Peter Maydell June 19, 2023, 3:55 p.m. UTC | #4
On Mon, 19 Jun 2023 at 16:49, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> On 6/5/23 08:27, Nicholas Piggin wrote:
> > On Sun Jun 4, 2023 at 8:28 PM AEST, Nicholas Piggin wrote:
> >> Differently-sized larx/stcx. pairs can succeed if the starting address
> >> matches. Add a size check to require stcx. exactly match the larx that
> >> established the reservation.
> >
> > Hmm, question: reserve_addr is a VMSTATE field, but reserve_val is not
> > (nor reserve_size after this patch).
> >
> > Blue Swirl added that with commit a456d59c20f ("VM load/save support for
> > PPC CPU"), and when reserve_val was added in commit 18b21a2f83a
> > ("target-ppc: retain l{w,d}arx loaded value") it did not get migrated.
> >
> > Could we end up with reserve_addr != -1, but with a bogus reserve_val,
> > which could then permit a stcx. incorrectly? Not entirely outlandish if
> > reserve_val starts out initialised to zero.
> >
> > Could we just clear the reserve in cpu_post_load? It is permitted to be
> > lost for an implementation-specific reason. Doesn't seem necessary to
> > try keep it alive over a migration.
>
> It's not a bad idea to flush the reservation over migrate.

Is there any particular reason to do so? The default simple
thing is "if this is state that persists across instructions
then migrate it"; we usually reserve "do something special in
post-load" for oddball cases where "just copy the data" doesn't
work.

target/arm migrates both the exclusive addr and value.

target/mips migrates lladdr but has forgotten llval
(and perhaps llval_wp and llnewval_wp, depending on what
those fields do).

thanks
-- PMM
Richard Henderson June 19, 2023, 5:02 p.m. UTC | #5
On 6/19/23 17:55, Peter Maydell wrote:
> On Mon, 19 Jun 2023 at 16:49, Richard Henderson
> <richard.henderson@linaro.org> wrote:
>>
>> On 6/5/23 08:27, Nicholas Piggin wrote:
>>> On Sun Jun 4, 2023 at 8:28 PM AEST, Nicholas Piggin wrote:
>>>> Differently-sized larx/stcx. pairs can succeed if the starting address
>>>> matches. Add a size check to require stcx. exactly match the larx that
>>>> established the reservation.
>>>
>>> Hmm, question: reserve_addr is a VMSTATE field, but reserve_val is not
>>> (nor reserve_size after this patch).
>>>
>>> Blue Swirl added that with commit a456d59c20f ("VM load/save support for
>>> PPC CPU"), and when reserve_val was added in commit 18b21a2f83a
>>> ("target-ppc: retain l{w,d}arx loaded value") it did not get migrated.
>>>
>>> Could we end up with reserve_addr != -1, but with a bogus reserve_val,
>>> which could then permit a stcx. incorrectly? Not entirely outlandish if
>>> reserve_val starts out initialised to zero.
>>>
>>> Could we just clear the reserve in cpu_post_load? It is permitted to be
>>> lost for an implementation-specific reason. Doesn't seem necessary to
>>> try keep it alive over a migration.
>>
>> It's not a bad idea to flush the reservation over migrate.
> 
> Is there any particular reason to do so? The default simple
> thing is "if this is state that persists across instructions
> then migrate it"; we usually reserve "do something special in
> post-load" for oddball cases where "just copy the data" doesn't
> work.
> 
> target/arm migrates both the exclusive addr and value.

ppc is adding "size", which arm technically should have as well.

> target/mips migrates lladdr but has forgotten llval
> (and perhaps llval_wp and llnewval_wp, depending on what
> those fields do).

So, similarly, mips would need to handle migration in which not all of the
required data is present.

The thought is, rather than migrate this new data also, and handle compatibility, simply 
discard all reservations.


r~
Peter Maydell June 19, 2023, 5:14 p.m. UTC | #6
On Mon, 19 Jun 2023 at 18:03, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> On 6/19/23 17:55, Peter Maydell wrote:
> > On Mon, 19 Jun 2023 at 16:49, Richard Henderson
> > <richard.henderson@linaro.org> wrote:
> >>
> >> On 6/5/23 08:27, Nicholas Piggin wrote:
> >>> On Sun Jun 4, 2023 at 8:28 PM AEST, Nicholas Piggin wrote:
> >>>> Differently-sized larx/stcx. pairs can succeed if the starting address
> >>>> matches. Add a size check to require stcx. exactly match the larx that
> >>>> established the reservation.
> >>>
> >>> Hmm, question: reserve_addr is a VMSTATE field, but reserve_val is not
> >>> (nor reserve_size after this patch).
> >>>
> >>> Blue Swirl added that with commit a456d59c20f ("VM load/save support for
> >>> PPC CPU"), and when reserve_val was added in commit 18b21a2f83a
> >>> ("target-ppc: retain l{w,d}arx loaded value") it did not get migrated.
> >>>
> >>> Could we end up with reserve_addr != -1, but with a bogus reserve_val,
> >>> which could then permit a stcx. incorrectly? Not entirely outlandish if
> >>> reserve_val starts out initialised to zero.
> >>>
> >>> Could we just clear the reserve in cpu_post_load? It is permitted to be
> >>> lost for an implementation-specific reason. Doesn't seem necessary to
> >>> try keep it alive over a migration.
> >>
> >> It's not a bad idea to flush the reservation over migrate.
> >
> > Is there any particular reason to do so? The default simple
> > thing is "if this is state that persists across instructions
> > then migrate it"; we usually reserve "do something special in
> > post-load" for oddball cases where "just copy the data" doesn't
> > work.
> >
> > target/arm migrates both the exclusive addr and value.
>
> ppc is adding "size", which arm technically should have as well.

Arm allows an implementation to require the transaction size
to match on loadexcl and storexcl, but doesn't mandate it, fwiw.
(Also, our implementation is miles away from the architectural
requirements anyway because we operate on virtual addresses,
not physical addresses.)

> > target/mips migrates lladdr but has forgotten llval
> > (and perhaps llval_wp and llnewval_wp, depending on what
> > those fields do).
>
> So, similarly, would need to handle migration for which all of the required data is not
> present.
>
> The thought is, rather than migrate this new data also, and handle compatibility, simply
> discard all reservations.

I don't see a problem for normal migration and snapshotting.
I do wonder whether this would have a bad interaction
with record-and-replay's use of snapshots. Does that
expect "execution from the loaded snapshot" to match
"execution continues from point of snapshot save" ?

-- PMM
Nicholas Piggin June 20, 2023, 3:39 a.m. UTC | #7
On Tue Jun 20, 2023 at 3:14 AM AEST, Peter Maydell wrote:
> On Mon, 19 Jun 2023 at 18:03, Richard Henderson
> <richard.henderson@linaro.org> wrote:
> >
> > On 6/19/23 17:55, Peter Maydell wrote:
> > > On Mon, 19 Jun 2023 at 16:49, Richard Henderson
> > > <richard.henderson@linaro.org> wrote:
> > >>
> > >> On 6/5/23 08:27, Nicholas Piggin wrote:
> > >>> On Sun Jun 4, 2023 at 8:28 PM AEST, Nicholas Piggin wrote:
> > >>>> Differently-sized larx/stcx. pairs can succeed if the starting address
> > >>>> matches. Add a size check to require stcx. exactly match the larx that
> > >>>> established the reservation.
> > >>>
> > >>> Hmm, question: reserve_addr is a VMSTATE field, but reserve_val is not
> > >>> (nor reserve_size after this patch).
> > >>>
> > >>> Blue Swirl added that with commit a456d59c20f ("VM load/save support for
> > >>> PPC CPU"), and when reserve_val was added in commit 18b21a2f83a
> > >>> ("target-ppc: retain l{w,d}arx loaded value") it did not get migrated.
> > >>>
> > >>> Could we end up with reserve_addr != -1, but with a bogus reserve_val,
> > >>> which could then permit a stcx. incorrectly? Not entirely outlandish if
> > >>> reserve_val starts out initialised to zero.
> > >>>
> > >>> Could we just clear the reserve in cpu_post_load? It is permitted to be
> > >>> lost for an implementation-specific reason. Doesn't seem necessary to
> > >>> try keep it alive over a migration.
> > >>
> > >> It's not a bad idea to flush the reservation over migrate.
> > >
> > > Is there any particular reason to do so? The default simple
> > > thing is "if this is state that persists across instructions
> > > then migrate it"; we usually reserve "do something special in
> > > post-load" for oddball cases where "just copy the data" doesn't
> > > work.
> > >
> > > target/arm migrates both the exclusive addr and value.
> >
> > ppc is adding "size", which arm technically should have as well.
>
> Arm allows an implementation to require the transaction size
> to match on loadexcl and storexcl, but doesn't mandate it, fwiw.
> (Also, our implementation is miles away from the architectural
> requirements anyway because we operate on virtual addresses,
> not physical addresses.)

The same is true of powerpc. Neither the size nor the exact address within
the reservation granule has to match the larx that established the
reservation, but the address match is something we have always enforced,
and in practice no open source software (nor AIX) seems to hit it.

My thinking is that it is good to tighten it, because software that gets
it wrong is very likely deviating from the ISA unintentionally. Linux
provides no HWCAP bit to let code test for such implementation details,
for example.

> > > target/mips migrates lladdr but has forgotten llval
> > > (and perhaps llval_wp and llnewval_wp, depending on what
> > > those fields do).
> >
> > So, similarly, would need to handle migration for which all of the required data is not
> > present.
> >
> > The thought is, rather than migrate this new data also, and handle compatibility, simply
> > discard all reservations.
>
> I don't see a problem for normal migration and snapshotting.
> I do wonder whether this would have a bad interaction
> with record-and-replay's use of snapshots. Does that
> expect "execution from the loaded snapshot" to match
> "execution continues from point of snapshot save" ?

I don't mind the idea of moving the new state across; I wondered whether
clearing the reservation would be easier for compatibility and
backporting.

I don't know the rr code, but if the snapshots use this vmstate and the
replay from it is expected to exactly match the recording, then I think
you must be right.

Thanks,
Nick

Patch

diff --git a/target/ppc/cpu.h b/target/ppc/cpu.h
index 7959bfed0a..1d71f325d8 100644
--- a/target/ppc/cpu.h
+++ b/target/ppc/cpu.h
@@ -1124,6 +1124,7 @@  struct CPUArchState {
     target_ulong ca32;
 
     target_ulong reserve_addr; /* Reservation address */
+    target_ulong reserve_size; /* Reservation size */
     target_ulong reserve_val;  /* Reservation value */
     target_ulong reserve_val2;
 
diff --git a/target/ppc/cpu_init.c b/target/ppc/cpu_init.c
index 944a74befe..082981b148 100644
--- a/target/ppc/cpu_init.c
+++ b/target/ppc/cpu_init.c
@@ -7421,8 +7421,8 @@  void ppc_cpu_dump_state(CPUState *cs, FILE *f, int flags)
         }
         qemu_fprintf(f, " %c%c", a, env->crf[i] & 0x01 ? 'O' : ' ');
     }
-    qemu_fprintf(f, " ]             RES " TARGET_FMT_lx "\n",
-                 env->reserve_addr);
+    qemu_fprintf(f, " ]     RES %03x@" TARGET_FMT_lx "\n",
+                 (int)env->reserve_size, env->reserve_addr);
 
     if (flags & CPU_DUMP_FPU) {
         for (i = 0; i < 32; i++) {
diff --git a/target/ppc/translate.c b/target/ppc/translate.c
index e129cdcb8f..5195047146 100644
--- a/target/ppc/translate.c
+++ b/target/ppc/translate.c
@@ -71,6 +71,7 @@  static TCGv cpu_cfar;
 #endif
 static TCGv cpu_xer, cpu_so, cpu_ov, cpu_ca, cpu_ov32, cpu_ca32;
 static TCGv cpu_reserve;
+static TCGv cpu_reserve_size;
 static TCGv cpu_reserve_val;
 static TCGv cpu_reserve_val2;
 static TCGv cpu_fpscr;
@@ -141,6 +142,9 @@  void ppc_translate_init(void)
     cpu_reserve = tcg_global_mem_new(cpu_env,
                                      offsetof(CPUPPCState, reserve_addr),
                                      "reserve_addr");
+    cpu_reserve_size = tcg_global_mem_new(cpu_env,
+                                          offsetof(CPUPPCState, reserve_size),
+                                          "reserve_size");
     cpu_reserve_val = tcg_global_mem_new(cpu_env,
                                          offsetof(CPUPPCState, reserve_val),
                                          "reserve_val");
@@ -3584,6 +3588,7 @@  static void gen_load_locked(DisasContext *ctx, MemOp memop)
     gen_set_access_type(ctx, ACCESS_RES);
     gen_addr_reg_index(ctx, t0);
     tcg_gen_mov_tl(cpu_reserve, t0);
+    tcg_gen_movi_tl(cpu_reserve_size, memop_size(memop));
     tcg_gen_qemu_ld_tl(gpr, t0, ctx->mem_idx, memop | MO_ALIGN);
     tcg_gen_mov_tl(cpu_reserve_val, gpr);
     tcg_gen_mb(TCG_MO_ALL | TCG_BAR_LDAQ);
@@ -3816,6 +3821,7 @@  static void gen_conditional_store(DisasContext *ctx, MemOp memop)
     gen_set_access_type(ctx, ACCESS_RES);
     gen_addr_reg_index(ctx, t0);
     tcg_gen_brcond_tl(TCG_COND_NE, t0, cpu_reserve, l1);
+    tcg_gen_brcondi_tl(TCG_COND_NE, cpu_reserve_size, memop_size(memop), l1);
 
     t0 = tcg_temp_new();
     tcg_gen_atomic_cmpxchg_tl(t0, cpu_reserve, cpu_reserve_val,
@@ -3873,6 +3879,7 @@  static void gen_lqarx(DisasContext *ctx)
     EA = tcg_temp_new();
     gen_addr_reg_index(ctx, EA);
     tcg_gen_mov_tl(cpu_reserve, EA);
+    tcg_gen_movi_tl(cpu_reserve_size, 128);
 
     /* Note that the low part is always in RD+1, even in LE mode.  */
     lo = cpu_gpr[rd + 1];
@@ -3907,6 +3914,7 @@  static void gen_stqcx_(DisasContext *ctx)
     gen_addr_reg_index(ctx, EA);
 
     tcg_gen_brcond_tl(TCG_COND_NE, EA, cpu_reserve, lab_fail);
+    tcg_gen_brcondi_tl(TCG_COND_NE, cpu_reserve_size, 128, lab_fail);
 
     cmp = tcg_temp_new_i128();
     val = tcg_temp_new_i128();