
[v2,4/8] powerpc/mm: protect linear mapping modifications by a mutex

Message ID 20201111145322.15793-5-david@redhat.com (mailing list archive)
State Accepted
Commit e5b2af044f31bf18defa557a8cd11c23caefa34c
Series powernv/memtrace: don't abuse memory hot(un)plug infrastructure for memory allocations

Checks

Context                        Check    Description
snowpatch_ozlabs/apply_patch   success  Successfully applied on branch powerpc/merge (80ecbe16c827714ce3741ed1f1d34488b903e717)
snowpatch_ozlabs/checkpatch    success  total: 0 errors, 0 warnings, 0 checks, 26 lines checked
snowpatch_ozlabs/needsstable   success  Patch has no Fixes tags

Commit Message

David Hildenbrand Nov. 11, 2020, 2:53 p.m. UTC
This code currently relies on mem_hotplug_begin()/mem_hotplug_done() -
the create_section_mapping()/remove_section_mapping() implementations
cannot tolerate being called concurrently.

Let's prepare for callers (memtrace) that do not hold any such locks
(and avoid forcing them to mess with memory hotplug locks).

Other parts in these functions don't seem to rely on external locking.
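
As a hypothetical illustration (the caller below is made up for this
description, not part of the patch): once the mutex is in place, a
caller like memtrace can modify the linear mapping without taking the
memory hotplug locks, because serialization happens inside the helper:

	/*
	 * Hypothetical caller sketch: no mem_hotplug_begin()/done()
	 * needed; arch_remove_linear_mapping() serializes internally
	 * via linear_mapping_mutex.
	 */
	static void memtrace_unmap_range(u64 start, u64 size)
	{
		arch_remove_linear_mapping(start, size);
	}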

Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Rashmica Gupta <rashmica.g@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 arch/powerpc/mm/mem.c | 5 +++++
 1 file changed, 5 insertions(+)

Comments

Oscar Salvador Nov. 17, 2020, 3:37 p.m. UTC | #1
On Wed, Nov 11, 2020 at 03:53:18PM +0100, David Hildenbrand wrote:
> @@ -144,7 +147,9 @@ void __ref arch_remove_linear_mapping(u64 start, u64 size)
>  	start = (unsigned long)__va(start);
>  	flush_dcache_range_chunked(start, start + size, FLUSH_CHUNK_SIZE);
>  
> +	mutex_lock(&linear_mapping_mutex);
>  	ret = remove_section_mapping(start, start + size);
> +	mutex_unlock(&linear_mapping_mutex);
>  	WARN_ON_ONCE(ret);

My expertise in this area is low, so bear with me.

Why do we not need to protect flush_dcache_range_chunked() and
vm_unmap_aliases()?
David Hildenbrand Nov. 17, 2020, 3:46 p.m. UTC | #2
On 17.11.20 16:37, Oscar Salvador wrote:
> On Wed, Nov 11, 2020 at 03:53:18PM +0100, David Hildenbrand wrote:
>> @@ -144,7 +147,9 @@ void __ref arch_remove_linear_mapping(u64 start, u64 size)
>>   	start = (unsigned long)__va(start);
>>   	flush_dcache_range_chunked(start, start + size, FLUSH_CHUNK_SIZE);
>>   
>> +	mutex_lock(&linear_mapping_mutex);
>>   	ret = remove_section_mapping(start, start + size);
>> +	mutex_unlock(&linear_mapping_mutex);
>>   	WARN_ON_ONCE(ret);
> 
> My expertise in this area is low, so bear with me.
> 
> Why do we not need to protect flush_dcache_range_chunked() and
> vm_unmap_aliases()?
> 

vm_unmap_aliases() does its own locking and can handle concurrent calls.
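
A rough sketch of why that is (paraphrased from mm/vmalloc.c of that
era; the exact structure is an assumption):

	/* All callers funnel into one internal helper ... */
	void vm_unmap_aliases(void)
	{
		unsigned long start = ULONG_MAX, end = 0;
		int flush = 0;

		_vm_unmap_aliases(start, end, flush);
	}

	/* ... which serializes on its own mutex. */
	static void _vm_unmap_aliases(unsigned long start,
				      unsigned long end, int flush)
	{
		mutex_lock(&vmap_purge_lock);
		/* walk per-CPU vmap blocks, flush lazily unmapped ranges */
		mutex_unlock(&vmap_purge_lock);
	}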


flush_dcache_range_chunked()->flush_dcache_range() ends up as a sequence 
of memory barriers paired with dcbf instructions.

dcbf: Copies modified cache blocks to main storage and invalidates the 
copy in the data cache.

It's called from various places and no global variables seem to be 
involved, so it looks like it doesn't need any kind of locking.
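
For reference, a simplified sketch of what that sequence looks like
(paraphrased from the powerpc implementation; helper names and details
are assumptions):

	static inline void flush_dcache_range(unsigned long start,
					      unsigned long stop)
	{
		unsigned long bytes = l1_dcache_bytes();	/* L1 line size */
		void *addr = (void *)(start & ~(bytes - 1));	/* align down */
		unsigned long size = stop - (unsigned long)addr + (bytes - 1);
		unsigned long i;

		mb();	/* sync: order prior stores before flushing */
		for (i = 0; i < size / bytes; i++, addr += bytes)
			dcbf(addr);	/* write back + invalidate one block */
		mb();	/* sync: ensure all flushes completed */
	}

Only local variables and the caller-supplied range are touched, so
concurrent invocations cannot interfere with each other.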

Patch

diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
index 8a86d81f8df0..ca5c4b54c366 100644
--- a/arch/powerpc/mm/mem.c
+++ b/arch/powerpc/mm/mem.c
@@ -58,6 +58,7 @@ 
 #define CPU_FTR_NOEXECUTE	0
 #endif
 
+static DEFINE_MUTEX(linear_mapping_mutex);
 unsigned long long memory_limit;
 bool init_mem_is_free;
 
@@ -126,8 +127,10 @@  int __ref arch_create_linear_mapping(int nid, u64 start, u64 size,
 	int rc;
 
 	start = (unsigned long)__va(start);
+	mutex_lock(&linear_mapping_mutex);
 	rc = create_section_mapping(start, start + size, nid,
 				    params->pgprot);
+	mutex_unlock(&linear_mapping_mutex);
 	if (rc) {
 		pr_warn("Unable to create linear mapping for 0x%llx..0x%llx: %d\n",
 			start, start + size, rc);
@@ -144,7 +147,9 @@  void __ref arch_remove_linear_mapping(u64 start, u64 size)
 	start = (unsigned long)__va(start);
 	flush_dcache_range_chunked(start, start + size, FLUSH_CHUNK_SIZE);
 
+	mutex_lock(&linear_mapping_mutex);
 	ret = remove_section_mapping(start, start + size);
+	mutex_unlock(&linear_mapping_mutex);
 	WARN_ON_ONCE(ret);
 
 	/* Ensure all vmalloc mappings are flushed in case they also