
malloc: Preserve arena free list for attached threads [BZ #20370]

Message ID 20160714152848.50714402408BD@oldenburg.str.redhat.com
State New

Commit Message

Florian Weimer July 14, 2016, 3:28 p.m. UTC
It is necessary to preserve the invariant that if an arena is
on the free list, it has thread attach count zero.  Otherwise,
when arena_thread_freeres sees the zero attach count, it will
add the arena to the free list; without the invariant, an arena
could get pushed to the list twice, resulting in a cycle.

One possible execution trace looks like this:

Thread 1 examines free list and observes it as empty.
Thread 2 exits and adds its arena to the free list,
  with attached_threads == 0.
Thread 1 selects this arena in reused_arena (not from the free list).
Thread 1 increments attached_threads and attaches itself.
  (The arena remains on the free list.)
Thread 1 exits, decrements attached_threads,
  and adds the arena to the free list.

The final step creates a cycle in the usual way (by overwriting the
next_free member with the former list head, while there is another
list item pointing to the arena structure).
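
A minimal, self-contained illustration (a hypothetical two-field
struct, not the actual glibc code) of how the unconditional push
produces the cycle:

#include <assert.h>
#include <stddef.h>

/* Hypothetical minimal arena carrying only the free-list link.  */
struct arena
{
  struct arena *next_free;
};

static struct arena *free_list;

/* Push unconditionally, overwriting next_free -- this is what the
   exit path effectively does once the invariant is broken.  */
static void
push_free_list (struct arena *a)
{
  a->next_free = free_list;
  free_list = a;
}

int
main (void)
{
  struct arena a, b;

  push_free_list (&a);          /* list: a */
  push_free_list (&b);          /* list: b -> a */
  push_free_list (&a);          /* second push of a */

  assert (free_list == &a);
  assert (a.next_free == &b);
  assert (b.next_free == &a);   /* cycle: any list walk loops forever */
  return 0;
}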

tst-malloc-thread-exit exhibits this issue, but it was only visible
with a debugger because the incorrect fix in bug 19243 removed
the assert from get_free_list.

2016-07-14  Florian Weimer  <fweimer@redhat.com>

	* malloc/arena.c (get_free_list): Update comment.  Assert that
	arenas on the free list have no attached threads.
	(remove_from_free_list): New function.
	(reused_arena): Call it.

Comments

Florian Weimer July 14, 2016, 3:29 p.m. UTC | #1
On 07/14/2016 05:28 PM, Florian Weimer wrote:

> 2016-07-14  Florian Weimer  <fweimer@redhat.com>
>
> 	* malloc/arena.c (get_free_list): Update comment.  Assert that
> 	arenas on the free list have no attached threads.
> 	(remove_from_free_list): New function.
> 	(reused_arena): Call it.

I should say that this is for after the release.

I also added the missing [BZ #20370] line to the ChangeLog locally.

Thanks,
Florian
Carlos O'Donell July 19, 2016, 6:35 p.m. UTC | #2
On 07/14/2016 11:28 AM, Florian Weimer wrote:
> It is necessary to preserve the invariant that if an arena is
> on the free list, it has thread attach count zero.  Otherwise,
> when arena_thread_freeres sees the zero attach count, it will
> add the arena to the free list; without the invariant, an arena
> could get pushed to the list twice, resulting in a cycle.
> 
> One possible execution trace looks like this:
> 
> Thread 1 examines free list and observes it as empty.
> Thread 2 exits and adds its arena to the free list,
>   with attached_threads == 0.
> Thread 1 selects this arena in reused_arena (not from the free list).
> Thread 1 increments attached_threads and attaches itself.
>   (The arena remains on the free list.)
> Thread 1 exits, decrements attached_threads,
>   and adds the arena to the free list.

So the problem we're fixing is adding the same arena to the free 
list twice and creating a cycle.

We thought we had fixed this by using the attached_threads count,
but because an arena can be picked up concurrently via the ->next
list, it can be in use while still sitting on the free list, and
so end up on the free list twice.
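
For illustration, a minimal self-contained sketch (hypothetical
struct fields, not the actual glibc data structures) of the two
independent list memberships that make this possible:

#include <assert.h>
#include <stddef.h>

/* An arena can be threaded onto two lists at once: the circular
   ->next list of all arenas (which reused_arena scans) and the
   singly linked ->next_free free list.  */
struct arena
{
  struct arena *next;           /* circular list of all arenas */
  struct arena *next_free;      /* free list */
};

int
main (void)
{
  struct arena main_a, a;

  /* Both arenas are always on the circular ->next list.  */
  main_a.next = &a;
  a.next = &main_a;

  /* An exiting thread pushes A onto the free list as well.  */
  struct arena *free_list = &a;
  a.next_free = NULL;

  /* A scan of the ->next list can now hand out A even though it
     is still reachable through free_list; the two memberships are
     completely independent.  */
  assert (main_a.next == &a && free_list == &a);
  return 0;
}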

I agree that removing the arena from the free list, if it's there,
is the best option right now.

> The final step creates a cycle in the usual way (by overwriting the
> next_free member with the former list head, while there is another
> list item pointing to the arena structure).
> 
> tst-malloc-thread-exit exhibits this issue, but it was only visible
> with a debugger because the incorrect fix in bug 19243 removed
> the assert from get_free_list.

OK.

> 2016-07-14  Florian Weimer  <fweimer@redhat.com>
> 
> 	* malloc/arena.c (get_free_list): Update comment.  Assert that
> 	arenas on the free list have no attached threads.
> 	(remove_from_free_list): New function.
> 	(reused_arena): Call it.
> 
> diff --git a/malloc/arena.c b/malloc/arena.c
> index 229783f..4e16593 100644
> --- a/malloc/arena.c
> +++ b/malloc/arena.c
> @@ -702,8 +702,7 @@ _int_new_arena (size_t size)
>  }
>  
>  
> -/* Remove an arena from free_list.  The arena may be in use because it
> -   was attached concurrently to a thread by reused_arena below.  */
> +/* Remove an arena from free_list.  */

OK.

>  static mstate
>  get_free_list (void)
>  {
> @@ -718,7 +717,8 @@ get_free_list (void)
>  	  free_list = result->next_free;
>  
>  	  /* The arena will be attached to this thread.  */
> -	  ++result->attached_threads;
> +	  assert (result->attached_threads == 0);
> +	  result->attached_threads = 1;

OK. Adding back the assert is good here.

>  
>  	  detach_arena (replaced_arena);
>  	}
> @@ -735,6 +735,26 @@ get_free_list (void)
>    return result;
>  }
>  
> +/* Remove the arena from the free list (if it is present).
> +   free_list_lock must have been acquired by the caller.  */
> +static void
> +remove_from_free_list (mstate arena)
> +{
> +  mstate *previous = &free_list;

Start at the head of the free list. This is consistent because
we took the free_list_lock.

> +  for (mstate p = free_list; p != NULL; p = p->next_free)
> +    {
> +      assert (p->attached_threads == 0);
> +      if (p == arena)
> +	{
> +	  /* Remove the requested arena from the list.  */
> +	  *previous = p->next_free;
> +	  break;
> +	}
> +      else
> +	previous = &p->next_free;
> +    }
> +}

OK. This should really be an infrequent operation, triggered only by
failing allocations in your main arena (e.g. arena_get_retry), or by a
thread starting up and looking for a free arena. I would imagine that the
cost of this linear list walk (which grows with CPU count) would be nothing
compared to the cost of starting up and tearing down threads.

> +
>  /* Lock and return an arena that can be reused for memory allocation.
>     Avoid AVOID_ARENA as we have already failed to allocate memory in
>     it and it is currently locked.  */
> @@ -782,14 +802,25 @@ reused_arena (mstate avoid_arena)
>    (void) mutex_lock (&result->mutex);
>  
>  out:
> -  /* Attach the arena to the current thread.  Note that we may have
> -     selected an arena which was on free_list.  */
> +  /* Attach the arena to the current thread.  */

OK.

>    {
>      /* Update the arena thread attachment counters.   */
>      mstate replaced_arena = thread_arena;
>      (void) mutex_lock (&free_list_lock);
>      detach_arena (replaced_arena);
> +
> +    /* We may have picked up an arena on the free list.  We need to
> +       preserve the invariant that no arena on the free list has a
> +       positive attached_threads counter (otherwise,
> +       arena_thread_freeres cannot use the counter to determine if the
> +       arena needs to be put on the free list).  We unconditionally
> +       remove the selected arena from the free list.  The caller of
> +       reused_arena checked the free list and observed it to be empty,
> +       so the list is very short.  */
> +    remove_from_free_list (result);

OK.

This is the best solution IMO, which makes it clear that free_list
arenas have their attached_threads count _at_ zero.

> +
>      ++result->attached_threads;
> +
>      (void) mutex_unlock (&free_list_lock);
>    }
>  
> 

Patch looks good to me.
Florian Weimer July 19, 2016, 7:22 p.m. UTC | #3
On 07/19/2016 08:35 PM, Carlos O'Donell wrote:
> OK. This should really be an infrequent operation, triggered only by
> failing allocations in your main arena (e.g. arena_get_retry), or by a
> thread starting up and looking for a free arena. I would imagine that the
> cost of this linear list walk (which grows with CPU count) would be nothing
> compared to the cost of starting up and tearing down threads.

NB: This only happens after the current thread has lost a race on the
free list: some other thread must have exited between the point where
the current thread observed the free list as empty (otherwise it would
have picked an arena from the non-empty free list) and the point where
it attaches itself to the selected arena.
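
To make the window and the fix concrete, here is a single-threaded
replay (hypothetical struct, simplified copy of the new helper)
showing that unlinking the arena before attaching it prevents the
double push:

#include <assert.h>
#include <stddef.h>

struct arena
{
  struct arena *next_free;
  int attached_threads;
};

static struct arena *free_list;

/* Simplified version of the new remove_from_free_list helper.  */
static void
remove_from_free_list (struct arena *arena)
{
  struct arena **previous = &free_list;
  for (struct arena *p = free_list; p != NULL; p = p->next_free)
    if (p == arena)
      {
        /* Unlink the requested arena.  */
        *previous = p->next_free;
        break;
      }
    else
      previous = &p->next_free;
}

int
main (void)
{
  struct arena a = { NULL, 0 };

  /* "Thread 2" exits: its arena goes on the free list, count 0.  */
  a.next_free = free_list;
  free_list = &a;

  /* "Thread 1" selects A in reused_arena; with the fix, A is
     unlinked before attached_threads is incremented.  */
  remove_from_free_list (&a);
  ++a.attached_threads;

  /* "Thread 1" exits: the count drops to zero and A is pushed
     exactly once -- no second list entry, no cycle.  */
  if (--a.attached_threads == 0)
    {
      a.next_free = free_list;
      free_list = &a;
    }

  assert (free_list == &a && a.next_free == NULL);
  return 0;
}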

Florian

Patch

diff --git a/malloc/arena.c b/malloc/arena.c
index 229783f..4e16593 100644
--- a/malloc/arena.c
+++ b/malloc/arena.c
@@ -702,8 +702,7 @@ _int_new_arena (size_t size)
 }
 
 
-/* Remove an arena from free_list.  The arena may be in use because it
-   was attached concurrently to a thread by reused_arena below.  */
+/* Remove an arena from free_list.  */
 static mstate
 get_free_list (void)
 {
@@ -718,7 +717,8 @@ get_free_list (void)
 	  free_list = result->next_free;
 
 	  /* The arena will be attached to this thread.  */
-	  ++result->attached_threads;
+	  assert (result->attached_threads == 0);
+	  result->attached_threads = 1;
 
 	  detach_arena (replaced_arena);
 	}
@@ -735,6 +735,26 @@ get_free_list (void)
   return result;
 }
 
+/* Remove the arena from the free list (if it is present).
+   free_list_lock must have been acquired by the caller.  */
+static void
+remove_from_free_list (mstate arena)
+{
+  mstate *previous = &free_list;
+  for (mstate p = free_list; p != NULL; p = p->next_free)
+    {
+      assert (p->attached_threads == 0);
+      if (p == arena)
+	{
+	  /* Remove the requested arena from the list.  */
+	  *previous = p->next_free;
+	  break;
+	}
+      else
+	previous = &p->next_free;
+    }
+}
+
 /* Lock and return an arena that can be reused for memory allocation.
    Avoid AVOID_ARENA as we have already failed to allocate memory in
    it and it is currently locked.  */
@@ -782,14 +802,25 @@ reused_arena (mstate avoid_arena)
   (void) mutex_lock (&result->mutex);
 
 out:
-  /* Attach the arena to the current thread.  Note that we may have
-     selected an arena which was on free_list.  */
+  /* Attach the arena to the current thread.  */
   {
     /* Update the arena thread attachment counters.   */
     mstate replaced_arena = thread_arena;
     (void) mutex_lock (&free_list_lock);
     detach_arena (replaced_arena);
+
+    /* We may have picked up an arena on the free list.  We need to
+       preserve the invariant that no arena on the free list has a
+       positive attached_threads counter (otherwise,
+       arena_thread_freeres cannot use the counter to determine if the
+       arena needs to be put on the free list).  We unconditionally
+       remove the selected arena from the free list.  The caller of
+       reused_arena checked the free list and observed it to be empty,
+       so the list is very short.  */
+    remove_from_free_list (result);
+
     ++result->attached_threads;
+
     (void) mutex_unlock (&free_list_lock);
   }