Message ID | 153358817264.15150.1450644377959627061.stgit@hbathini.in.ibm.com (mailing list archive) |
---|---|
State | Accepted |
Commit | ced1bf52f47783135b985d2aacf53fa77fd72e2e |
Series | [v2,1/2] powerpc/fadump: handle crash memory ranges array index overflow |
Context | Check | Description |
---|---|---|
snowpatch_ozlabs/apply_patch | success | next/apply_patch Successfully applied |
snowpatch_ozlabs/checkpatch | success | Test checkpatch on branch next |
snowpatch_ozlabs/build-ppc64le | success | Test build-ppc64le on branch next |
snowpatch_ozlabs/build-ppc64be | success | Test build-ppc64be on branch next |
snowpatch_ozlabs/build-ppc64e | success | Test build-ppc64e on branch next |
snowpatch_ozlabs/build-ppc32 | success | Test build-ppc32 on branch next |
On 08/07/2018 02:12 AM, Hari Bathini wrote:
> With dynamic memory allocation support for crash memory ranges array,
> there is no hard limit on the no. of crash memory ranges kernel could
> export, but program headers count could overflow in the /proc/vmcore
> ELF file while exporting each memory range as PT_LOAD segment. Reduce
> the likelihood of a such scenario, by folding adjacent crash memory
> ranges which minimizes the total number of PT_LOAD segments.
>
> Signed-off-by: Hari Bathini <hbathini@linux.ibm.com>
> ---
>  arch/powerpc/kernel/fadump.c | 45 ++++++++++++++++++++++++++++++++++--------
>  1 file changed, 36 insertions(+), 9 deletions(-)
>
> diff --git a/arch/powerpc/kernel/fadump.c b/arch/powerpc/kernel/fadump.c
> index 2ec5704..cd0c555 100644
> --- a/arch/powerpc/kernel/fadump.c
> +++ b/arch/powerpc/kernel/fadump.c
> @@ -908,22 +908,41 @@ static int allocate_crash_memory_ranges(void)
>  static inline int fadump_add_crash_memory(unsigned long long base,
>  					unsigned long long end)
>  {
> +	u64 start, size;
> +	bool is_adjacent = false;
> +
>  	if (base == end)
>  		return 0;
>
> -	if (crash_mem_ranges == max_crash_mem_ranges) {
> -		int ret;
> +	/*
> +	 * Fold adjacent memory ranges to bring down the memory ranges/
> +	 * PT_LOAD segments count.
> +	 */
> +	if (crash_mem_ranges) {
> +		start = crash_memory_ranges[crash_mem_ranges-1].base;
> +		size = crash_memory_ranges[crash_mem_ranges-1].size;
>
> -		ret = allocate_crash_memory_ranges();
> -		if (ret)
> -			return ret;
> +		if ((start + size) == base)
> +			is_adjacent = true;
> +	}
> +	if (!is_adjacent) {
> +		/* resize the array on reaching the limit */
> +		if (crash_mem_ranges == max_crash_mem_ranges) {
> +			int ret;
> +
> +			ret = allocate_crash_memory_ranges();
> +			if (ret)
> +				return ret;
> +		}
> +
> +		start = base;
> +		crash_memory_ranges[crash_mem_ranges].base = start;
> +		crash_mem_ranges++;
>  	}
>
> +	crash_memory_ranges[crash_mem_ranges-1].size = (end - start);
>  	pr_debug("crash_memory_range[%d] [%#016llx-%#016llx], %#llx bytes\n",
> -		crash_mem_ranges, base, end - 1, (end - base));
> -	crash_memory_ranges[crash_mem_ranges].base = base;
> -	crash_memory_ranges[crash_mem_ranges].size = end - base;
> -	crash_mem_ranges++;
> +		(crash_mem_ranges - 1), start, end - 1, (end - start));
>  	return 0;
>  }
>
> @@ -999,6 +1018,14 @@ static int fadump_setup_crash_memory_ranges(void)
>
>  	pr_debug("Setup crash memory ranges.\n");
>  	crash_mem_ranges = 0;
> +
> +	/* allocate memory for crash memory ranges for the first time */
> +	if (!max_crash_mem_ranges) {
> +		ret = allocate_crash_memory_ranges();
> +		if (ret)
> +			return ret;
> +	}
> +

I see that the check for (!is_adjacent) in first hunk already handles
the first time allocation. Do we need this ?

Rest looks fine to me.

Reviewed-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>

Thanks,
-Mahesh.

>  	/*
>  	 * add the first memory chunk (RMA_START through boot_memory_size) as
>  	 * a separate memory chunk. The reason is, at the time crash firmware
>
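A quick way to see the point Mahesh raises is to model the branch logic in user space (a hypothetical toy with names borrowed from the patch, not the kernel code): on the very first call crash_mem_ranges is 0, so is_adjacent stays false and the count already equals max_crash_mem_ranges (both 0), which makes the !is_adjacent path perform the first-time allocation on its own.

```c
#include <stdbool.h>
#include <stdio.h>

/* Toy stand-ins for the kernel state; names mirror the patch, but
 * nothing here is the real fadump code. */
static int crash_mem_ranges;
static int max_crash_mem_ranges;

static int allocate_crash_memory_ranges(void)
{
	printf("allocate_crash_memory_ranges() called\n");
	max_crash_mem_ranges += 16;	/* pretend the array grew */
	return 0;
}

static void add_first_range(void)
{
	bool is_adjacent = false;

	if (crash_mem_ranges)		/* 0 on the first call: skipped */
		is_adjacent = true;	/* (real adjacency test elided) */

	if (!is_adjacent) {
		if (crash_mem_ranges == max_crash_mem_ranges)	/* 0 == 0 */
			allocate_crash_memory_ranges();	/* first-time allocation */
		crash_mem_ranges++;
	}
}

int main(void)
{
	add_first_range();	/* prints: allocate_crash_memory_ranges() called */
	printf("crash_mem_ranges=%d max_crash_mem_ranges=%d\n",
	       crash_mem_ranges, max_crash_mem_ranges);
	return 0;
}
```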
On Wednesday 08 August 2018 02:38 PM, Mahesh Jagannath Salgaonkar wrote:
> On 08/07/2018 02:12 AM, Hari Bathini wrote:
>> With dynamic memory allocation support for crash memory ranges array,
>> there is no hard limit on the no. of crash memory ranges kernel could
>> export, but program headers count could overflow in the /proc/vmcore
>> ELF file while exporting each memory range as PT_LOAD segment. Reduce
>> the likelihood of a such scenario, by folding adjacent crash memory
>> ranges which minimizes the total number of PT_LOAD segments.
>>
>> Signed-off-by: Hari Bathini <hbathini@linux.ibm.com>
>> ---
>>  arch/powerpc/kernel/fadump.c | 45 ++++++++++++++++++++++++++++++++++--------
>>  1 file changed, 36 insertions(+), 9 deletions(-)
>>
>> diff --git a/arch/powerpc/kernel/fadump.c b/arch/powerpc/kernel/fadump.c
>> index 2ec5704..cd0c555 100644
>> --- a/arch/powerpc/kernel/fadump.c
>> +++ b/arch/powerpc/kernel/fadump.c
>> @@ -908,22 +908,41 @@ static int allocate_crash_memory_ranges(void)
>>  static inline int fadump_add_crash_memory(unsigned long long base,
>>  					unsigned long long end)
>>  {
>> +	u64 start, size;
>> +	bool is_adjacent = false;
>> +
>>  	if (base == end)
>>  		return 0;
>>
>> -	if (crash_mem_ranges == max_crash_mem_ranges) {
>> -		int ret;
>> +	/*
>> +	 * Fold adjacent memory ranges to bring down the memory ranges/
>> +	 * PT_LOAD segments count.
>> +	 */
>> +	if (crash_mem_ranges) {
>> +		start = crash_memory_ranges[crash_mem_ranges-1].base;
>> +		size = crash_memory_ranges[crash_mem_ranges-1].size;
>>
>> -		ret = allocate_crash_memory_ranges();
>> -		if (ret)
>> -			return ret;
>> +		if ((start + size) == base)
>> +			is_adjacent = true;
>> +	}
>> +	if (!is_adjacent) {
>> +		/* resize the array on reaching the limit */
>> +		if (crash_mem_ranges == max_crash_mem_ranges) {
>> +			int ret;
>> +
>> +			ret = allocate_crash_memory_ranges();
>> +			if (ret)
>> +				return ret;
>> +		}
>> +
>> +		start = base;
>> +		crash_memory_ranges[crash_mem_ranges].base = start;
>> +		crash_mem_ranges++;
>>  	}
>>
>> +	crash_memory_ranges[crash_mem_ranges-1].size = (end - start);
>>  	pr_debug("crash_memory_range[%d] [%#016llx-%#016llx], %#llx bytes\n",
>> -		crash_mem_ranges, base, end - 1, (end - base));
>> -	crash_memory_ranges[crash_mem_ranges].base = base;
>> -	crash_memory_ranges[crash_mem_ranges].size = end - base;
>> -	crash_mem_ranges++;
>> +		(crash_mem_ranges - 1), start, end - 1, (end - start));
>>  	return 0;
>>  }
>>
>> @@ -999,6 +1018,14 @@ static int fadump_setup_crash_memory_ranges(void)
>>
>>  	pr_debug("Setup crash memory ranges.\n");
>>  	crash_mem_ranges = 0;
>> +
>> +	/* allocate memory for crash memory ranges for the first time */
>> +	if (!max_crash_mem_ranges) {
>> +		ret = allocate_crash_memory_ranges();
>> +		if (ret)
>> +			return ret;
>> +	}
>> +
> I see that the check for (!is_adjacent) in first hunk already handles
> the first time allocation. Do we need this ?

Right. This hunk in fadump_setup_crash_memory_ranges() is unnecessary.
Can be dropped. Also, I missed out on adding "#include <linux/slab.h>".
Though it compiles fine with upstream kernel, will add and post v3 just
to be safe..

> Rest looks fine to me.
>
> Reviewed-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>

Thanks for the review
- Hari
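The "#include <linux/slab.h>" remark refers to the allocator used by allocate_crash_memory_ranges(), which was introduced in patch 1/2 and is not shown on this page. A hedged sketch of what such a grow-on-demand helper could look like with krealloc() follows; the struct name, growth policy, and bookkeeping variables are assumptions and may differ from the real implementation:

```c
#include <linux/errno.h>
#include <linux/mm.h>		/* PAGE_SIZE */
#include <linux/slab.h>		/* krealloc(), GFP_KERNEL: the missing include */
#include <linux/types.h>

/* Assumed range descriptor; the patch indexes crash_memory_ranges[]
 * through .base and .size members of this shape. */
struct fad_crash_memory_ranges {
	unsigned long long	base;
	unsigned long long	size;
};

static struct fad_crash_memory_ranges *crash_memory_ranges;
static size_t crash_memory_ranges_size;
static int max_crash_mem_ranges;

/* Grow the array by one page worth of entries whenever the limit is hit. */
static int allocate_crash_memory_ranges(void)
{
	struct fad_crash_memory_ranges *new_array;
	size_t new_size = crash_memory_ranges_size + PAGE_SIZE;

	new_array = krealloc(crash_memory_ranges, new_size, GFP_KERNEL);
	if (!new_array)
		return -ENOMEM;

	crash_memory_ranges = new_array;
	crash_memory_ranges_size = new_size;
	max_crash_mem_ranges = new_size / sizeof(*new_array);
	return 0;
}
```

Without <linux/slab.h>, krealloc() and friends are only visible through whatever headers happen to be pulled in indirectly, which is why it compiles today but is fragile; adding the explicit include in v3 is the safe choice.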
diff --git a/arch/powerpc/kernel/fadump.c b/arch/powerpc/kernel/fadump.c
index 2ec5704..cd0c555 100644
--- a/arch/powerpc/kernel/fadump.c
+++ b/arch/powerpc/kernel/fadump.c
@@ -908,22 +908,41 @@ static int allocate_crash_memory_ranges(void)
 static inline int fadump_add_crash_memory(unsigned long long base,
 					unsigned long long end)
 {
+	u64 start, size;
+	bool is_adjacent = false;
+
 	if (base == end)
 		return 0;
 
-	if (crash_mem_ranges == max_crash_mem_ranges) {
-		int ret;
+	/*
+	 * Fold adjacent memory ranges to bring down the memory ranges/
+	 * PT_LOAD segments count.
+	 */
+	if (crash_mem_ranges) {
+		start = crash_memory_ranges[crash_mem_ranges-1].base;
+		size = crash_memory_ranges[crash_mem_ranges-1].size;
 
-		ret = allocate_crash_memory_ranges();
-		if (ret)
-			return ret;
+		if ((start + size) == base)
+			is_adjacent = true;
+	}
+	if (!is_adjacent) {
+		/* resize the array on reaching the limit */
+		if (crash_mem_ranges == max_crash_mem_ranges) {
+			int ret;
+
+			ret = allocate_crash_memory_ranges();
+			if (ret)
+				return ret;
+		}
+
+		start = base;
+		crash_memory_ranges[crash_mem_ranges].base = start;
+		crash_mem_ranges++;
 	}
 
+	crash_memory_ranges[crash_mem_ranges-1].size = (end - start);
 	pr_debug("crash_memory_range[%d] [%#016llx-%#016llx], %#llx bytes\n",
-		crash_mem_ranges, base, end - 1, (end - base));
-	crash_memory_ranges[crash_mem_ranges].base = base;
-	crash_memory_ranges[crash_mem_ranges].size = end - base;
-	crash_mem_ranges++;
+		(crash_mem_ranges - 1), start, end - 1, (end - start));
 	return 0;
 }
 
@@ -999,6 +1018,14 @@ static int fadump_setup_crash_memory_ranges(void)
 
 	pr_debug("Setup crash memory ranges.\n");
 	crash_mem_ranges = 0;
+
+	/* allocate memory for crash memory ranges for the first time */
+	if (!max_crash_mem_ranges) {
+		ret = allocate_crash_memory_ranges();
+		if (ret)
+			return ret;
+	}
+
 	/*
 	 * add the first memory chunk (RMA_START through boot_memory_size) as
 	 * a separate memory chunk. The reason is, at the time crash firmware
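To see the effect of the fold, here is a self-contained user-space re-creation of the new fadump_add_crash_memory() flow (toy fixed-size array, no reallocation, hypothetical names): three contiguous chunks collapse into a single entry, so only two ranges, and hence two PT_LOAD segments, remain instead of four.

```c
#include <stdbool.h>
#include <stdio.h>

struct range { unsigned long long base, size; };

static struct range ranges[16];
static int nr_ranges;

/* Mirrors the folding logic from the patch: extend the previous entry
 * when the new chunk starts exactly where it ends, else open a new one. */
static void add_crash_memory(unsigned long long base, unsigned long long end)
{
	unsigned long long start;
	bool is_adjacent = false;

	if (base == end)
		return;

	if (nr_ranges) {
		start = ranges[nr_ranges - 1].base;
		if (start + ranges[nr_ranges - 1].size == base)
			is_adjacent = true;
	}
	if (!is_adjacent) {
		start = base;
		ranges[nr_ranges++].base = start;
	}
	ranges[nr_ranges - 1].size = end - start;
}

int main(void)
{
	/* Three contiguous chunks and one disjoint chunk. */
	add_crash_memory(0x00000000, 0x10000000);
	add_crash_memory(0x10000000, 0x20000000);
	add_crash_memory(0x20000000, 0x30000000);
	add_crash_memory(0x40000000, 0x50000000);

	for (int i = 0; i < nr_ranges; i++)
		printf("range[%d]: base=%#llx size=%#llx\n",
		       i, ranges[i].base, ranges[i].size);
	/* Prints two folded ranges rather than four separate entries. */
	return 0;
}
```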
With dynamic memory allocation support for the crash memory ranges array,
there is no hard limit on the number of crash memory ranges the kernel can
export, but the program header count could overflow in the /proc/vmcore
ELF file when each memory range is exported as a PT_LOAD segment. Reduce
the likelihood of such a scenario by folding adjacent crash memory ranges,
which minimizes the total number of PT_LOAD segments.

Signed-off-by: Hari Bathini <hbathini@linux.ibm.com>
---
 arch/powerpc/kernel/fadump.c | 45 ++++++++++++++++++++++++++++++++++--------
 1 file changed, 36 insertions(+), 9 deletions(-)
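The "program header count could overflow" concern comes from the ELF format itself: e_phnum in the ELF header is a 16-bit field, so the /proc/vmcore header can describe at most 65534 program headers directly (PN_XNUM is reserved as the escape value). A quick user-space check of the field width, using the standard <elf.h> and not part of the patch:

```c
#include <elf.h>
#include <stdio.h>

int main(void)
{
	/* e_phnum is an Elf64_Half (16 bits); fewer, larger PT_LOAD
	 * segments keep the vmcore header well clear of this limit. */
	printf("sizeof(e_phnum) = %zu bytes, max direct count = %u\n",
	       sizeof(((Elf64_Ehdr *)0)->e_phnum), (unsigned)(PN_XNUM - 1));
	return 0;
}
```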