| Message ID | 20190607144703.GB8604@ram.ibm.com (mailing list archive) |
|---|---|
| State | Not Applicable |
| Series | [RFC,1/1] powerpc/pseries/svm: Unshare all pages before kexecing a new kernel |
| Context | Check | Description |
|---|---|---|
| snowpatch_ozlabs/apply_patch | warning | Failed to apply on branch next (a3bf9fbdad600b1e4335dd90979f8d6072e4f602) |
| snowpatch_ozlabs/apply_patch | fail | Failed to apply to any branch |
```diff
diff --git a/arch/powerpc/include/asm/ultravisor-api.h b/arch/powerpc/include/asm/ultravisor-api.h
index 8a6c5b4d..c8dd470 100644
--- a/arch/powerpc/include/asm/ultravisor-api.h
+++ b/arch/powerpc/include/asm/ultravisor-api.h
@@ -31,5 +31,6 @@
 #define UV_UNSHARE_PAGE 0xF134
 #define UV_PAGE_INVAL 0xF138
 #define UV_SVM_TERMINATE 0xF13C
+#define UV_UNSHARE_ALL_PAGES 0xF140
 
 #endif /* _ASM_POWERPC_ULTRAVISOR_API_H */
diff --git a/arch/powerpc/include/asm/ultravisor.h b/arch/powerpc/include/asm/ultravisor.h
index bf5ac05..73c44ff 100644
--- a/arch/powerpc/include/asm/ultravisor.h
+++ b/arch/powerpc/include/asm/ultravisor.h
@@ -120,6 +120,12 @@ static inline int uv_unshare_page(u64 pfn, u64 npages)
 	return ucall(UV_UNSHARE_PAGE, retbuf, pfn, npages);
 }
 
+static inline int uv_unshare_all_pages(void)
+{
+	unsigned long retbuf[UCALL_BUFSIZE];
+
+	return ucall(UV_UNSHARE_ALL_PAGES, retbuf);
+}
 #endif /* !__ASSEMBLY__ */
 
 #endif /* _ASM_POWERPC_ULTRAVISOR_H */
diff --git a/arch/powerpc/kernel/machine_kexec_64.c b/arch/powerpc/kernel/machine_kexec_64.c
index 75692c3..a93e3ab 100644
--- a/arch/powerpc/kernel/machine_kexec_64.c
+++ b/arch/powerpc/kernel/machine_kexec_64.c
@@ -329,6 +329,13 @@ void default_machine_kexec(struct kimage *image)
 #ifdef CONFIG_PPC_PSERIES
 	kexec_paca.lppaca_ptr = NULL;
 #endif
+
+	if (is_svm_platform() && !(image->preserve_context ||
+			image->type == KEXEC_TYPE_CRASH)) {
+		uv_unshare_all_pages();
+		printk("kexec: Unshared all shared pages.\n");
+	}
+
 	paca_ptrs[kexec_paca.paca_index] = &kexec_paca;
 
 	setup_paca(&kexec_paca);
```
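For readability, the unshare condition added to default_machine_kexec() could also be expressed as a small predicate. The sketch below is illustrative only and is not part of the patch; the helper name svm_should_unshare() is hypothetical, while is_svm_platform(), KEXEC_TYPE_CRASH and the struct kimage fields are the same symbols the hunk above already uses.

```c
/*
 * Illustrative sketch, not part of the patch: a possible way to factor the
 * "should we unshare?" test out of default_machine_kexec(). Only symbols
 * already visible in machine_kexec_64.c per the hunk above are used; the
 * helper name itself is hypothetical.
 */
static bool svm_should_unshare(struct kimage *image)
{
	/* Crash dumps and context-preserving kexec need the shared pages intact. */
	if (image->preserve_context || image->type == KEXEC_TYPE_CRASH)
		return false;

	/* Only secure (SVM) guests have pages shared with the hypervisor. */
	return is_svm_platform();
}
```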
powerpc/pseries/svm: Unshare all pages before kexecing a new kernel.

A new kernel deserves a clean slate. Any pages shared with the hypervisor are unshared before the new kernel is invoked. There are exceptions, however: if the new kernel is invoked to dump the current kernel, or if there is an explicit request to preserve the state of the current kernel, unsharing of pages is skipped.

NOTE: Reserve at least 256M for the crashkernel. Otherwise the SWIOTLB allocation fails and the crash kernel fails to boot.

Signed-off-by: Ram Pai <linuxram@us.ibm.com>
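As a concrete example of the NOTE above, the crash kernel reservation is typically made with the `crashkernel=` kernel command-line parameter; the size shown is the value from the note, and actual requirements may vary by system:

```
crashkernel=256M
```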