From patchwork Mon May 6 07:25:54 2013
X-Patchwork-Submitter: Alexey Kardashevskiy
X-Patchwork-Id: 241586
From: Alexey Kardashevskiy
To: linuxppc-dev@lists.ozlabs.org
Cc: Alexey Kardashevskiy, David Gibson, Benjamin Herrenschmidt,
    Alexander Graf, Paul Mackerras, Joerg Roedel, Alex Williamson,
    kvm@vger.kernel.org, kvm-ppc@vger.kernel.org
Subject: [PATCH 3/6] powerpc: Prepare to support kernel handling of IOMMU map/unmap
Date: Mon, 6 May 2013 17:25:54 +1000
Message-Id: <1367825157-27231-4-git-send-email-aik@ozlabs.ru>
In-Reply-To: <1367825157-27231-1-git-send-email-aik@ozlabs.ru>
References: <1367825157-27231-1-git-send-email-aik@ozlabs.ru>

The current VFIO-on-POWER implementation supports only user-mode-driven
mapping, i.e. QEMU sends requests to map/unmap pages. However, this
approach is really slow, so we want to move that work into KVM.

Since H_PUT_TCE can be extremely performance sensitive (especially with
network adapters, where each packet needs to be mapped/unmapped), we chose
to implement it as a "fast" hypercall handled directly in "real mode"
(the processor is still in the guest context, but with the MMU off).

To be able to do that, we need some facilities to access the struct page
reference count from that real-mode environment, because things like the
sparsemem vmemmap mappings are not accessible there.

This adds an API to increment/decrement the page counter, as the
get_user_pages() API used for user-mode mapping does not work in real
mode.

CONFIG_SPARSEMEM_VMEMMAP and CONFIG_FLATMEM are supported.
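For illustration only (this sketch is not part of the patch): a real-mode
H_PUT_TCE handler would be expected to consume the new helpers roughly as
below. Only realmode_pfn_to_page() and realmode_get_page() come from this
series; the function name, the address translation and the surrounding TCE
handling are assumptions, with H_TOO_HARD used in its conventional role of
"retry this hcall in virtual mode".

#include <linux/mm.h>
#include <asm/hvcall.h>	/* H_SUCCESS */
#include <asm/kvm_ppc.h>	/* H_TOO_HARD */

/*
 * Hypothetical real-mode mapping step; names other than the
 * realmode_*() helpers are illustrative assumptions.
 */
static long example_rm_map_tce(unsigned long hpa)
{
	struct page *page;

	/* Look up the struct page without touching the vmemmap virtual mapping */
	page = realmode_pfn_to_page(hpa >> PAGE_SHIFT);
	if (!page)
		return H_TOO_HARD;	/* bounce to virtual mode and retry there */

	/* Take a reference; refuses compound pages, which real mode cannot handle */
	if (realmode_get_page(page))
		return H_TOO_HARD;

	/* ... write the TCE entry here; realmode_put_page() on the unmap path ... */
	return H_SUCCESS;
}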
Signed-off-by: Alexey Kardashevskiy
Reviewed-by: Paul Mackerras
Cc: David Gibson
Signed-off-by: Paul Mackerras
---
 arch/powerpc/include/asm/pgtable-ppc64.h |  4 ++
 arch/powerpc/mm/init_64.c                | 77 +++++++++++++++++++++++++++++-
 2 files changed, 80 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/include/asm/pgtable-ppc64.h b/arch/powerpc/include/asm/pgtable-ppc64.h
index 0182c20..4c56ede 100644
--- a/arch/powerpc/include/asm/pgtable-ppc64.h
+++ b/arch/powerpc/include/asm/pgtable-ppc64.h
@@ -377,6 +377,10 @@ static inline pte_t *find_linux_pte_or_hugepte(pgd_t *pgdir, unsigned long ea,
 }
 #endif /* !CONFIG_HUGETLB_PAGE */
 
+struct page *realmode_pfn_to_page(unsigned long pfn);
+int realmode_get_page(struct page *page);
+int realmode_put_page(struct page *page);
+
 #endif /* __ASSEMBLY__ */
 
 #endif /* _ASM_POWERPC_PGTABLE_PPC64_H_ */
diff --git a/arch/powerpc/mm/init_64.c b/arch/powerpc/mm/init_64.c
index 95a4529..838b8ae 100644
--- a/arch/powerpc/mm/init_64.c
+++ b/arch/powerpc/mm/init_64.c
@@ -297,5 +297,80 @@ int __meminit vmemmap_populate(struct page *start_page,
 
 	return 0;
 }
-#endif /* CONFIG_SPARSEMEM_VMEMMAP */
 
+/*
+ * We do not have access to the sparsemem vmemmap, so we fall back to
+ * walking the list of sparsemem blocks which we already maintain for
+ * the sake of crashdump. In the long run, we might want to maintain
+ * a tree if performance of that linear walk becomes a problem.
+ *
+ * Any of the realmode_XXXX functions can fail due to:
+ * 1) As real sparsemem blocks do not lie in RAM contiguously (they
+ * are contiguous only in virtual address space, which is not available
+ * in real mode), the requested struct page can be split between blocks,
+ * so get_page/put_page may fail.
+ * 2) When huge pages are used, the get_page/put_page API will fail
+ * in real mode as the linked addresses in the page struct are virtual
+ * too.
+ * When 1) or 2) takes place, the API returns an error code to cause
+ * an exit to kernel virtual mode where the operation will be completed.
+ */
+struct page *realmode_pfn_to_page(unsigned long pfn)
+{
+	struct vmemmap_backing *vmem_back;
+	struct page *page;
+	unsigned long page_size = 1 << mmu_psize_defs[mmu_vmemmap_psize].shift;
+	unsigned long pg_va = (unsigned long) pfn_to_page(pfn);
+
+	for (vmem_back = vmemmap_list; vmem_back; vmem_back = vmem_back->list) {
+		if (pg_va < vmem_back->virt_addr)
+			continue;
+
+		/* Check that page struct is not split between real pages */
+		if ((pg_va + sizeof(struct page)) >
+				(vmem_back->virt_addr + page_size))
+			return NULL;
+
+		page = (struct page *) (vmem_back->phys + pg_va -
+				vmem_back->virt_addr);
+		return page;
+	}
+
+	return NULL;
+}
+EXPORT_SYMBOL_GPL(realmode_pfn_to_page);
+
+#elif defined(CONFIG_FLATMEM)
+
+struct page *realmode_pfn_to_page(unsigned long pfn)
+{
+	struct page *page = pfn_to_page(pfn);
+	return page;
+}
+EXPORT_SYMBOL_GPL(realmode_pfn_to_page);
+
+#endif /* CONFIG_SPARSEMEM_VMEMMAP/CONFIG_FLATMEM */
+
+#if defined(CONFIG_SPARSEMEM_VMEMMAP) || defined(CONFIG_FLATMEM)
+int realmode_get_page(struct page *page)
+{
+	if (PageTail(page))
+		return -EAGAIN;
+
+	get_page(page);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(realmode_get_page);
+
+int realmode_put_page(struct page *page)
+{
+	if (PageCompound(page))
+		return -EAGAIN;
+
+	put_page(page);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(realmode_put_page);
+#endif
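For completeness, a hedged sketch of the virtual-mode fallback that the
comment block in the hunk above implies: when realmode_pfn_to_page()
returns NULL, or realmode_get_page()/realmode_put_page() return -EAGAIN,
the hypercall is expected to be retried in virtual mode, where the
ordinary refcounting APIs work. The function name and the H_PARAMETER
error choice below are illustrative assumptions, not code from this series.

#include <linux/mm.h>
#include <asm/hvcall.h>	/* H_SUCCESS, H_PARAMETER */

/*
 * Hypothetical virtual-mode mapping step: with the MMU on, the regular
 * APIs can be used instead of the realmode_*() helpers.
 */
static long example_virtmode_map_tce(unsigned long uaddr)
{
	struct page *page;

	/* get_user_pages_fast() works here; it cannot be called in real mode */
	if (get_user_pages_fast(uaddr, 1, 1 /* write */, &page) != 1)
		return H_PARAMETER;

	/* ... write the TCE entry; put_page(page) later on the unmap path ... */
	return H_SUCCESS;
}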