Message ID | 20220117075246.36072-2-hbathini@linux.ibm.com
---|---
State | Handled Elsewhere, archived
Series | powerpc/fadump: handle CMA activation failure appropriately
Hi Andrew,

Could you please pick these patches via -mm tree.

On 17/01/22 1:22 pm, Hari Bathini wrote:
> Commit 072355c1cf2d ("mm/cma: expose all pages to the buddy if
> activation of an area fails") started exposing all pages to buddy
> allocator on CMA activation failure. But there can be CMA users that
> want to handle the reserved memory differently on CMA allocation
> failure. Provide an option to opt out from exposing pages to buddy
> for such cases.
>
> Signed-off-by: Hari Bathini <hbathini@linux.ibm.com>
> Reviewed-by: David Hildenbrand <david@redhat.com>
> ---
>
> Changes in v3:
> * Dropped NULL check in cma_reserve_pages_on_error().
> * Dropped explicit initialization of cma->reserve_pages_on_error to
>   'false' in cma_init_reserved_mem().
> * Added Reviewed-by tag from David.
>
> Changes in v2:
> * Changed cma->free_pages_on_error to cma->reserve_pages_on_error and
>   cma_dont_free_pages_on_error() to cma_reserve_pages_on_error() to
>   avoid confusion.
>
>
>  include/linux/cma.h |  2 ++
>  mm/cma.c            | 11 +++++++++--
>  mm/cma.h            |  1 +
>  3 files changed, 12 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/cma.h b/include/linux/cma.h
> index bd801023504b..51d540eee18a 100644
> --- a/include/linux/cma.h
> +++ b/include/linux/cma.h
> @@ -50,4 +50,6 @@ extern bool cma_pages_valid(struct cma *cma, const struct page *pages, unsigned
>  extern bool cma_release(struct cma *cma, const struct page *pages, unsigned long count);
>
>  extern int cma_for_each_area(int (*it)(struct cma *cma, void *data), void *data);
> +
> +extern void cma_reserve_pages_on_error(struct cma *cma);
>  #endif
> diff --git a/mm/cma.c b/mm/cma.c
> index bc9ca8f3c487..766f1b82b532 100644
> --- a/mm/cma.c
> +++ b/mm/cma.c
> @@ -131,8 +131,10 @@ static void __init cma_activate_area(struct cma *cma)
>  	bitmap_free(cma->bitmap);
>  out_error:
>  	/* Expose all pages to the buddy, they are useless for CMA. */
> -	for (pfn = base_pfn; pfn < base_pfn + cma->count; pfn++)
> -		free_reserved_page(pfn_to_page(pfn));
> +	if (!cma->reserve_pages_on_error) {
> +		for (pfn = base_pfn; pfn < base_pfn + cma->count; pfn++)
> +			free_reserved_page(pfn_to_page(pfn));
> +	}
>  	totalcma_pages -= cma->count;
>  	cma->count = 0;
>  	pr_err("CMA area %s could not be activated\n", cma->name);
> @@ -150,6 +152,11 @@ static int __init cma_init_reserved_areas(void)
>  }
>  core_initcall(cma_init_reserved_areas);
>
> +void __init cma_reserve_pages_on_error(struct cma *cma)
> +{
> +	cma->reserve_pages_on_error = true;
> +}
> +
>  /**
>   * cma_init_reserved_mem() - create custom contiguous area from reserved memory
>   * @base: Base address of the reserved area
> diff --git a/mm/cma.h b/mm/cma.h
> index 2c775877eae2..88a0595670b7 100644
> --- a/mm/cma.h
> +++ b/mm/cma.h
> @@ -30,6 +30,7 @@ struct cma {
>  	/* kobject requires dynamic object */
>  	struct cma_kobject *cma_kobj;
>  #endif
> +	bool reserve_pages_on_error;
>  };
>
>  extern struct cma cma_areas[MAX_CMA_AREAS];

Thanks
Hari
diff --git a/include/linux/cma.h b/include/linux/cma.h
index bd801023504b..51d540eee18a 100644
--- a/include/linux/cma.h
+++ b/include/linux/cma.h
@@ -50,4 +50,6 @@ extern bool cma_pages_valid(struct cma *cma, const struct page *pages, unsigned
 extern bool cma_release(struct cma *cma, const struct page *pages, unsigned long count);
 
 extern int cma_for_each_area(int (*it)(struct cma *cma, void *data), void *data);
+
+extern void cma_reserve_pages_on_error(struct cma *cma);
 #endif
diff --git a/mm/cma.c b/mm/cma.c
index bc9ca8f3c487..766f1b82b532 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -131,8 +131,10 @@ static void __init cma_activate_area(struct cma *cma)
 	bitmap_free(cma->bitmap);
 out_error:
 	/* Expose all pages to the buddy, they are useless for CMA. */
-	for (pfn = base_pfn; pfn < base_pfn + cma->count; pfn++)
-		free_reserved_page(pfn_to_page(pfn));
+	if (!cma->reserve_pages_on_error) {
+		for (pfn = base_pfn; pfn < base_pfn + cma->count; pfn++)
+			free_reserved_page(pfn_to_page(pfn));
+	}
 	totalcma_pages -= cma->count;
 	cma->count = 0;
 	pr_err("CMA area %s could not be activated\n", cma->name);
@@ -150,6 +152,11 @@ static int __init cma_init_reserved_areas(void)
 }
 core_initcall(cma_init_reserved_areas);
 
+void __init cma_reserve_pages_on_error(struct cma *cma)
+{
+	cma->reserve_pages_on_error = true;
+}
+
 /**
  * cma_init_reserved_mem() - create custom contiguous area from reserved memory
  * @base: Base address of the reserved area
diff --git a/mm/cma.h b/mm/cma.h
index 2c775877eae2..88a0595670b7 100644
--- a/mm/cma.h
+++ b/mm/cma.h
@@ -30,6 +30,7 @@ struct cma {
	/* kobject requires dynamic object */
	struct cma_kobject *cma_kobj;
 #endif
+	bool reserve_pages_on_error;
 };
 
 extern struct cma cma_areas[MAX_CMA_AREAS];
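For context, a rough sketch of how a CMA user such as fadump (the consumer named in this series title) might opt in to the new behaviour. This is an illustration only, not taken from this series: the names my_cma and my_cma_init are hypothetical; the only calls assumed to exist are cma_init_reserved_mem() (existing API) and cma_reserve_pages_on_error() (added by this patch).

    /*
     * Hypothetical caller (illustration only, not part of this patch).
     * A CMA user that can still make use of its reserved range even if
     * CMA activation fails opts out of handing the pages to the buddy
     * allocator.
     */
    #include <linux/cma.h>
    #include <linux/init.h>
    #include <linux/printk.h>

    static struct cma *my_cma;	/* hypothetical CMA area */

    static int __init my_cma_init(phys_addr_t base, phys_addr_t size)
    {
    	int rc;

    	/* Register an already memblock-reserved range as a CMA area. */
    	rc = cma_init_reserved_mem(base, size, 0, "my_cma", &my_cma);
    	if (rc) {
    		pr_err("my_cma: CMA registration failed (%d)\n", rc);
    		return rc;
    	}

    	/*
    	 * If cma_activate_area() fails later, keep the pages reserved
    	 * instead of exposing them to the buddy allocator, so the raw
    	 * memory remains usable by this caller.
    	 */
    	cma_reserve_pages_on_error(my_cma);

    	return 0;
    }

Note that the flag only changes the error path in cma_activate_area(); on successful activation, behaviour through cma_alloc()/cma_release() is unchanged.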