
PR ipa/103601: ICE compiling CSiBE in ipa-modref's insert_kill

Message ID 001e01d7edb0$d89c2de0$89d489a0$@nextmovesoftware.com
State New
Series PR ipa/103601: ICE compiling CSiBE in ipa-modref's insert_kill

Commit Message

Roger Sayle Dec. 10, 2021, 10:29 a.m. UTC
This patch fixes PR ipa/103601, a P1 regression that shows up as
an ICE in ipa-modref-tree.c's insert_kill when compiling the CSiBE
benchmark.  I believe the underlying cause is that the new kill tracking
functionality wasn't anticipating memory accesses that are zero bits
wide(!).  The failing source code (test case) contains the unusual lines:
typedef struct { } spinlock_t;
and
q->lock = (spinlock_t) { };
Making spinlock_t larger, or removing the assignment, works around the issue.
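
For reference, the construct boils down to storing an empty compound
literal into a zero-sized struct member.  A minimal distillation of the
pattern (hypothetical; it compiles cleanly on its own, but may well need
the surrounding IPA context of the full test case below to actually
trigger the ICE) looks like:

  typedef struct { } spinlock_t;      /* zero-sized type (GNU C extension) */
  struct queue { spinlock_t lock; };

  void init (struct queue *q)
  {
    q->lock = (spinlock_t) { };       /* a store that is zero bits wide */
  }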

The one-line patch below to useful_for_kill_p teaches IPA that a memory
write is only useful as a "kill" if it is more than zero bits wide.
In theory, the existing known_size_p (size) test is now redundant, as
poly_int64 currently uses the value -1 to represent an unknown size,
but the proposed change makes the semantics clear, and defends against
possible future changes in representation [but I'm happy to change this].
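
To make the reasoning concrete, here is a simplified, self-contained
model of the check (plain C, ignoring the parm_index tests, with a signed
64-bit size and -1 as the "unknown" sentinel standing in for poly_int64;
an illustration only, not the actual GCC API):

  #include <stdint.h>
  #include <stdbool.h>

  /* SIZE and MAX_SIZE are access widths in bits; -1 means "unknown".  */
  static bool
  useful_for_kill_p (int64_t size, int64_t max_size)
  {
    return size != -1           /* known_size_p (size) */
           && max_size == size  /* known_eq (max_size, size) */
           && size > 0;         /* new: reject zero-width accesses */
  }

With size == 0 (the empty spinlock_t store) the old condition accepted
the access and insert_kill presumably then had to cope with a zero-width
range; the new test rejects it, and as a side effect it also rejects the
-1 sentinel, which is why known_size_p is redundant in theory.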

This patch has been tested on x86_64-pc-linux-gnu with a make bootstrap
and make -k check with no new failures.  Ok for mainline?


2021-12-10  Roger Sayle  <roger@nextmovesoftware.com>

gcc/ChangeLog
	PR ipa/103601
	* ipa-modref-tree.h (useful_for_kill_p): Zero width accesses aren't
	useful for kill tracking.

gcc/testsuite/ChangeLog
	PR ipa/103601
	* gcc.dg/ipa/pr103601.c: New test case.

Thanks in advance,
Roger
--

Comments

Andrew Pinski Dec. 10, 2021, 10:33 a.m. UTC | #1
On Fri, Dec 10, 2021 at 2:30 AM Roger Sayle <roger@nextmovesoftware.com> wrote:
>
>
> This patch fixes PR ipa/103601, a P1 regression that shows up as
> an ICE in ipa-modref-tree.c's insert_kill when compiling the CSiBE
> benchmark.  I believe the underlying cause is that the new kill tracking
> functionality wasn't anticipating memory accesses that are zero bits
> wide(!).  The failing source code (test case) contains the unusual lines:
> typedef struct { } spinlock_t;
> and
> q->lock = (spinlock_t) { };
> Making spinlock_t larger, or removing the assignment, works around the issue.

zero sized accesses (load and stores) should have been removed during
gimplification. Why was it not?

Thanks,
Andrew

>
> The one line patch below to useful_for_kill_p teaches IPA that a memory
> write is only useful as a "kill" if it is more than zero bits wide.
> In theory, the existing known_size_p (size) test is now redundant, as
> poly_int64 currently uses the value -1 for unknown size values,
> but the proposed change makes the semantics clear, and defends against
> possible future changes in representation [but I'm happy to change this].
>
> This patch has been tested on x86_64-pc-linux-gnu with a make bootstrap
> and make -k check with no new failures.  Ok for mainline?
>
>
> 2021-12-10  Roger Sayle  <roger@nextmovesoftware.com>
>
> gcc/ChangeLog
>         PR ipa/103601
>         * ipa-modref-tree.h (useful_for_kill_p): Zero width accesses aren't
>         useful for kill tracking.
>
> gcc/testsuite/ChangeLog
>         PR ipa/103601
>         * gcc.dg/ipa/pr103601.c: New test case.
>
> Thanks in advance,
> Roger
> --
>
Jan Hubicka Dec. 10, 2021, 10:38 a.m. UTC | #2
> On Fri, Dec 10, 2021 at 2:30 AM Roger Sayle <roger@nextmovesoftware.com> wrote:
> >
> >
> > This patch fixes PR ipa/103601, a P1 regression that shows up as
> > an ICE in ipa-modref-tree.c's insert_kill when compiling the CSiBE
> > benchmark.  I believe the underlying cause is that the new kill tracking
> > functionality wasn't anticipating memory accesses that are zero bits
> > wide(!).  The failing source code (test case) contains the unusual lines:
> > typedef struct { } spinlock_t;
> > and
> > q->lock = (spinlock_t) { };
> > Making spinlock_t larger, or removing the assignment, works around the issue.
> 
> zero sized accesses (load and stores) should have been removed during
> gimplification. Why was it not?

Sadly this does not happen systematically... I already had to fix a
similar issue with the load/store analysis in modref :(

> >
> > 2021-12-10  Roger Sayle  <roger@nextmovesoftware.com>
> >
> > gcc/ChangeLog
> >         PR ipa/103601
> >         * ipa-modref-tree.h (useful_for_kill_p): Zero width accesses aren't
> >         useful for kill tracking.

The patch is OK.  Even if we make the gimplifier smarter about zero-width
accesses, I guess we want to be safe against them being synthesized from,
e.g., variable-sized arrays.
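
A hypothetical illustration (not from the thread, and not necessarily the
variable-sized-array case Honza has in mind) of how a zero-width access
could only become apparent after gimplification, once sizes are
propagated, might look like:

  /* The store width here is only known to be zero after the call is
     inlined and N is propagated as the constant 0.  */
  static void clear (char *p, int n)
  {
    __builtin_memset (p, 0, n);   /* an N-byte store */
  }

  void caller (char *p)
  {
    clear (p, 0);                 /* ...which becomes a zero-byte store */
  }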

Thanks for fixing this!
Honza
> >
> > gcc/testsuite/ChangeLog
> >         PR ipa/103601
> >         * gcc.dg/ipa/pr103601.c: New test case.
> >
> > Thanks in advance,
> > Roger
> > --
> >

Patch

diff --git a/gcc/ipa-modref-tree.h b/gcc/ipa-modref-tree.h
index 35190c2..4ad556f 100644
--- a/gcc/ipa-modref-tree.h
+++ b/gcc/ipa-modref-tree.h
@@ -87,7 +87,8 @@  struct GTY(()) modref_access_node
     {
       return parm_offset_known && parm_index != MODREF_UNKNOWN_PARM
 	     && parm_index != MODREF_RETSLOT_PARM && known_size_p (size)
-	     && known_eq (max_size, size);
+	     && known_eq (max_size, size)
+	     && known_gt (size, 0);
     }
   /* Dump range to debug OUT.  */
   void dump (FILE *out);
diff --git a/gcc/testsuite/gcc.dg/ipa/pr103601.c b/gcc/testsuite/gcc.dg/ipa/pr103601.c
new file mode 100644
index 0000000..7bdb5e5
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/ipa/pr103601.c
@@ -0,0 +1,35 @@ 
+/* { dg-do compile } */
+/* { dg-options "-O2 -fgnu89-inline" } */
+
+typedef struct { } spinlock_t;
+struct list_head {
+ struct list_head *next, *prev;
+};
+struct __wait_queue_head {
+ spinlock_t lock;
+ struct list_head task_list;
+};
+typedef struct __wait_queue_head wait_queue_head_t;
+static inline void init_waitqueue_head(wait_queue_head_t *q)
+{
+ q->lock = (spinlock_t) { };
+ do { (&q->task_list)->next = (&q->task_list); (&q->task_list)->prev = (&q->task_list); } while (0);
+}
+struct timer_list {
+ void (*function)(unsigned long);
+};
+struct rpc_task {
+ struct timer_list tk_timer;
+ wait_queue_head_t tk_wait;
+};
+static void
+rpc_run_timer(struct rpc_task *task)
+{
+}
+inline void
+rpc_init_task(struct rpc_task *task)
+{
+ task->tk_timer.function = (void (*)(unsigned long)) rpc_run_timer;
+ init_waitqueue_head(&task->tk_wait);
+}
+
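
Assuming the usual DejaGnu setup in a configured build tree, the new test
can be exercised in isolation with something like:

  make check-gcc RUNTESTFLAGS="ipa.exp=pr103601.c"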