Message ID: 1345215429-18570-1-git-send-email-stefan.bader@canonical.com
State: New
On 17.08.2012 16:57, Stefan Bader wrote:
> This will also fix some issues with crash-kexec not working anymore
> on 64bit systems. And I hope to get hold of some real humans at
> Plumbers the week after next. But meanwhile...
>
> -Stefan
>
> ---
>
> From d370223977f1e0b4218cb978f480ff4220344eb0 Mon Sep 17 00:00:00 2001
> From: Stefan Bader <stefan.bader@canonical.com>
> Date: Fri, 17 Aug 2012 15:07:27 +0200
> Subject: [PATCH] UBUNTU: SAUCE: (no-up) x86/mm: Fix 64bit size of mapping
>  tables
>
> commit 722bc6b16771ed80871e1fd81c86d3627dda2ac8
>   x86/mm: Fix the size calculation of mapping tables
>
> introduced a change to adapt the size reserved for the initial mapping
> tables to the first 2/4M range still being mapped with 4K pages. This
> looks necessary because that range is also covered by MTRRs and it
> would be bad (performance issues at least) to have MTRR ranges
> not aligned with the page boundaries. However there does not seem
> to be a reason to handle 32bit and 64bit differently, which is exactly
> what the upstream code (even now) does.
>
> While trying to get into discussion with upstream about the right
> way to fix this, we should at least make the calculations consistent.
> IOW, for 64bit the first range _is_ 2M pages (or at least not 4K), so
> the additional PTEs are not needed.
>
> With this applied the size of the initial page tables went from
> 4M to 16K on a VM that had 2G of memory.
>
> BugLink: http://bugs.launchpad.net/bugs/1022561
> Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
> ---
>  arch/x86/mm/init.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
> index bc4e9d8..6219612 100644
> --- a/arch/x86/mm/init.c
> +++ b/arch/x86/mm/init.c
> @@ -60,10 +60,10 @@ static void __init find_early_table_space(struct map_range *mr, unsigned long en
>  		extra = end - ((end>>PMD_SHIFT) << PMD_SHIFT);
>  #ifdef CONFIG_X86_32
>  		extra += PMD_SIZE;
> -#endif
>  		/* The first 2/4M doesn't use large pages. */
>  		if (mr->start < PMD_SIZE)
>  			extra += mr->end - mr->start;
> +#endif
>
>  		ptes = (extra + PAGE_SIZE - 1) >> PAGE_SHIFT;
>  	} else
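For readers following along, the reservation arithmetic the patch moves under `#ifdef CONFIG_X86_32` can be sketched as below. This is a minimal illustrative model, not kernel code: the function name and flags are hypothetical, and the constants assume 4K pages and 2M PMDs (x86-64 defaults).

```python
# Sketch of the PTE-reservation arithmetic in find_early_table_space(),
# assuming PAGE_SIZE = 4K and PMD_SIZE = 2M. The helper name and the
# first_range_uses_4k flag are illustrative only.
PAGE_SHIFT = 12
PAGE_SIZE = 1 << PAGE_SHIFT          # 4 KiB
PMD_SHIFT = 21
PMD_SIZE = 1 << PMD_SHIFT            # 2 MiB

def ptes_reserved(start, end, x86_32=False, first_range_uses_4k=True):
    # Bytes past the last PMD-aligned boundary must be mapped with 4K PTEs.
    extra = end - ((end >> PMD_SHIFT) << PMD_SHIFT)
    if x86_32:
        extra += PMD_SIZE
    if first_range_uses_4k and start < PMD_SIZE:
        # Pre-patch behaviour on 64bit: the whole first 2/4M range is
        # assumed to need 4K PTEs. The patch restricts this to 32bit.
        extra += end - start
    # Round up to whole pages worth of PTE entries.
    return (extra + PAGE_SIZE - 1) >> PAGE_SHIFT

# First map_range of a 64bit boot: 0 .. 2 MiB.
before = ptes_reserved(0, PMD_SIZE, first_range_uses_4k=True)
after = ptes_reserved(0, PMD_SIZE, first_range_uses_4k=False)
print(before, after)   # 512 PTEs reserved before the change, 0 after
```

With the special case active, a PMD-aligned first range still reserves a full range's worth of PTEs; without it (the 64bit case after the patch, where that range really is 2M pages), nothing extra is reserved, which is consistent with the reported drop in initial page-table size.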