Message ID: 1481913337-9331-3-git-send-email-mike.kravetz@oracle.com
State: RFC
Delegated to: David Miller
Hi Mike.

On Fri, Dec 16, 2016 at 10:35:25AM -0800, Mike Kravetz wrote:
> Add new fields to the mm_context structure to support shared context.
> Instead of a simple context ID, add a pointer to a structure with a
> reference count. This is needed as multiple tasks will share the
> context ID.

What are the benefits of the shared_mmu_ctx struct?
It does not save any space in mm_context_t, and the CPU only
supports one extra context.
So it looks like over-engineering, with all the extra administration
required to handle it with refcounts, pointers etc.

What do I miss?

	Sam
--
To unsubscribe from this list: send the line "unsubscribe sparclinux" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Hi Mike

> diff --git a/arch/sparc/include/asm/mmu_context_64.h b/arch/sparc/include/asm/mmu_context_64.h
> index b84be67..d031799 100644
> --- a/arch/sparc/include/asm/mmu_context_64.h
> +++ b/arch/sparc/include/asm/mmu_context_64.h
> @@ -35,15 +35,15 @@ void __tsb_context_switch(unsigned long pgd_pa,
>  static inline void tsb_context_switch(struct mm_struct *mm)
>  {
>  	__tsb_context_switch(__pa(mm->pgd),
> -			     &mm->context.tsb_block[0],
> +			     &mm->context.tsb_block[MM_TSB_BASE],
>  #if defined(CONFIG_HUGETLB_PAGE) || defined(CONFIG_TRANSPARENT_HUGEPAGE)
> -			     (mm->context.tsb_block[1].tsb ?
> -			      &mm->context.tsb_block[1] :
> +			     (mm->context.tsb_block[MM_TSB_HUGE].tsb ?
> +			      &mm->context.tsb_block[MM_TSB_HUGE] :
>  			      NULL)
>  #else
>  			     NULL
>  #endif
> -			     , __pa(&mm->context.tsb_descr[0]));
> +			     , __pa(&mm->context.tsb_descr[MM_TSB_BASE]));
>  }

This is a nice cleanup that has nothing to do with your series.
Could you submit this as a separate patch so we can get it applied.

This is the only place left where the array index for tsb_block
and tsb_descr uses hardcoded values, and it would be good to get
rid of these.

	Sam
On 12/16/2016 11:34 PM, Sam Ravnborg wrote:
> Hi Mike.
>
> On Fri, Dec 16, 2016 at 10:35:25AM -0800, Mike Kravetz wrote:
>> Add new fields to the mm_context structure to support shared context.
>> Instead of a simple context ID, add a pointer to a structure with a
>> reference count. This is needed as multiple tasks will share the
>> context ID.
>
> What are the benefits of the shared_mmu_ctx struct?
> It does not save any space in mm_context_t, and the CPU only
> supports one extra context.
> So it looks like over-engineering with all the extra administration
> required to handle it with refcounts, pointers etc.
>
> What do I miss?

Multiple tasks will share this same context ID. The first task to need
a new shared context will allocate the structure, increment the ref count
and point to it. As other tasks join the sharing, they will increment
the ref count and point to the same structure. Similarly, when tasks no
longer use the shared context ID, they will decrement the reference count.

The reference count is important so that we will know when the last
reference to the shared context ID is dropped. When the last reference
is dropped, the ID can be recycled/given back to the global pool of
context IDs.

This seemed to be the most straightforward way to implement this.
On 12/16/2016 11:38 PM, Sam Ravnborg wrote:
> Hi Mike
>
>> diff --git a/arch/sparc/include/asm/mmu_context_64.h b/arch/sparc/include/asm/mmu_context_64.h
>> index b84be67..d031799 100644
>> --- a/arch/sparc/include/asm/mmu_context_64.h
>> +++ b/arch/sparc/include/asm/mmu_context_64.h
>> @@ -35,15 +35,15 @@ void __tsb_context_switch(unsigned long pgd_pa,
>>  static inline void tsb_context_switch(struct mm_struct *mm)
>>  {
>>  	__tsb_context_switch(__pa(mm->pgd),
>> -			     &mm->context.tsb_block[0],
>> +			     &mm->context.tsb_block[MM_TSB_BASE],
>>  #if defined(CONFIG_HUGETLB_PAGE) || defined(CONFIG_TRANSPARENT_HUGEPAGE)
>> -			     (mm->context.tsb_block[1].tsb ?
>> -			      &mm->context.tsb_block[1] :
>> +			     (mm->context.tsb_block[MM_TSB_HUGE].tsb ?
>> +			      &mm->context.tsb_block[MM_TSB_HUGE] :
>>  			      NULL)
>>  #else
>>  			     NULL
>>  #endif
>> -			     , __pa(&mm->context.tsb_descr[0]));
>> +			     , __pa(&mm->context.tsb_descr[MM_TSB_BASE]));
>>  }
>>
> This is a nice cleanup that has nothing to do with your series.
> Could you submit this as a separate patch so we can get it applied.
>
> This is the only place left where the array index for tsb_block
> and tsb_descr uses hardcoded values. And it would be good to get
> rid of these.

Sure, I will submit a separate cleanup patch for this.

However, do note that in my series, if CONFIG_SHARED_MMU_CTX is defined,
then MM_TSB_HUGE_SHARED is index 0, instead of MM_TSB_BASE being 0 as in
the case where CONFIG_SHARED_MMU_CTX is not defined. This may seem
'strange', and the obvious question would be 'why not put the shared TSB
at the end of the existing array (index 2)?'. The reason is that the
tsb_descr array cannot have any 'holes' when passed to the hypervisor.
Since there will always be a MM_TSB_BASE tsb, with MM_TSB_HUGE_SHARED
before and MM_TSB_HUGE after MM_TSB_BASE, a few tricks are necessary to
ensure no holes are in the array passed to the hypervisor.
Hi Mike.

On Sun, Dec 18, 2016 at 03:33:59PM -0800, Mike Kravetz wrote:
> On 12/16/2016 11:34 PM, Sam Ravnborg wrote:
> > Hi Mike.
> >
> > On Fri, Dec 16, 2016 at 10:35:25AM -0800, Mike Kravetz wrote:
> >> Add new fields to the mm_context structure to support shared context.
> >> Instead of a simple context ID, add a pointer to a structure with a
> >> reference count. This is needed as multiple tasks will share the
> >> context ID.
> >
> > What are the benefits of the shared_mmu_ctx struct?
> > It does not save any space in mm_context_t, and the CPU only
> > supports one extra context.
> > So it looks like over-engineering with all the extra administration
> > required to handle it with refcounts, pointers etc.
> >
> > What do I miss?
>
> Multiple tasks will share this same context ID. The first task to need
> a new shared context will allocate the structure, increment the ref count
> and point to it. As other tasks join the sharing, they will increment
> the ref count and point to the same structure. Similarly, when tasks no
> longer use the shared context ID, they will decrement the reference count.
>
> The reference count is important so that we will know when the last
> reference to the shared context ID is dropped. When the last reference
> is dropped, the ID can be recycled/given back to the global pool of
> context IDs.
>
> This seemed to be the most straightforward way to implement this.

This nice explanation clarified it - thanks.

Could you try to include this info in the description of the struct,
so it is obvious what the intention with the reference counter is?

	Sam
On Sun, Dec 18, 2016 at 03:45:31PM -0800, Mike Kravetz wrote:
> On 12/16/2016 11:38 PM, Sam Ravnborg wrote:
> > Hi Mike
> >
> >> diff --git a/arch/sparc/include/asm/mmu_context_64.h b/arch/sparc/include/asm/mmu_context_64.h
> >> index b84be67..d031799 100644
> >> --- a/arch/sparc/include/asm/mmu_context_64.h
> >> +++ b/arch/sparc/include/asm/mmu_context_64.h
> >> @@ -35,15 +35,15 @@ void __tsb_context_switch(unsigned long pgd_pa,
> >>  static inline void tsb_context_switch(struct mm_struct *mm)
> >>  {
> >>  	__tsb_context_switch(__pa(mm->pgd),
> >> -			     &mm->context.tsb_block[0],
> >> +			     &mm->context.tsb_block[MM_TSB_BASE],
> >>  #if defined(CONFIG_HUGETLB_PAGE) || defined(CONFIG_TRANSPARENT_HUGEPAGE)
> >> -			     (mm->context.tsb_block[1].tsb ?
> >> -			      &mm->context.tsb_block[1] :
> >> +			     (mm->context.tsb_block[MM_TSB_HUGE].tsb ?
> >> +			      &mm->context.tsb_block[MM_TSB_HUGE] :
> >>  			      NULL)
> >>  #else
> >>  			     NULL
> >>  #endif
> >> -			     , __pa(&mm->context.tsb_descr[0]));
> >> +			     , __pa(&mm->context.tsb_descr[MM_TSB_BASE]));
> >>  }
> >>
> > This is a nice cleanup that has nothing to do with your series.
> > Could you submit this as a separate patch so we can get it applied.
> >
> > This is the only place left where the array index for tsb_block
> > and tsb_descr uses hardcoded values. And it would be good to get
> > rid of these.
>
> Sure, I will submit a separate cleanup patch for this.
>
> However, do note that in my series, if CONFIG_SHARED_MMU_CTX is defined,
> then MM_TSB_HUGE_SHARED is index 0, instead of MM_TSB_BASE being 0 as in
> the case where CONFIG_SHARED_MMU_CTX is not defined. This may seem
> 'strange', and the obvious question would be 'why not put the shared TSB
> at the end of the existing array (index 2)?'. The reason is that the
> tsb_descr array cannot have any 'holes' when passed to the hypervisor.
> Since there will always be a MM_TSB_BASE tsb, with MM_TSB_HUGE_SHARED
> before and MM_TSB_HUGE after MM_TSB_BASE, a few tricks are necessary to
> ensure no holes are in the array passed to the hypervisor.

So this is the explanation for the strange changes of the constants.
Please add a similar explanation to the code to help the next reader.

	Sam
diff --git a/arch/sparc/include/asm/mmu_64.h b/arch/sparc/include/asm/mmu_64.h
index f7de0db..edf8663 100644
--- a/arch/sparc/include/asm/mmu_64.h
+++ b/arch/sparc/include/asm/mmu_64.h
@@ -57,6 +57,13 @@
 	(!(((__ctx.sparc64_ctx_val) ^ tlb_context_cache) & CTX_VERSION_MASK))
 #define CTX_HWBITS(__ctx)	((__ctx.sparc64_ctx_val) & CTX_HW_MASK)
 #define CTX_NRBITS(__ctx)	((__ctx.sparc64_ctx_val) & CTX_NR_MASK)
+#define SHARED_CTX_VALID(__ctx)	(__ctx.shared_ctx && \
+	(!(((__ctx.shared_ctx->shared_ctx_val) ^ tlb_context_cache) & \
+	CTX_VERSION_MASK)))
+#define SHARED_CTX_HWBITS(__ctx) \
+	((__ctx.shared_ctx->shared_ctx_val) & CTX_HW_MASK)
+#define SHARED_CTX_NRBITS(__ctx) \
+	((__ctx.shared_ctx->shared_ctx_val) & CTX_NR_MASK)
 
 #ifndef __ASSEMBLY__
 
@@ -80,24 +87,45 @@ struct tsb_config {
 	unsigned long		tsb_map_pte;
 };
 
-#define MM_TSB_BASE	0
+#if defined(CONFIG_SHARED_MMU_CTX)
+struct shared_mmu_ctx {
+	atomic_t		refcount;
+	unsigned long		shared_ctx_val;
+};
+
+#define MM_TSB_HUGE_SHARED	0
+#define MM_TSB_BASE		1
+#define MM_TSB_HUGE		2
+#define MM_NUM_TSBS		3
+#else
+#define MM_TSB_BASE		0
 #if defined(CONFIG_HUGETLB_PAGE) || defined(CONFIG_TRANSPARENT_HUGEPAGE)
-#define MM_TSB_HUGE	1
-#define MM_NUM_TSBS	2
+#define MM_TSB_HUGE		1
+#define MM_TSB_HUGE_SHARED	1	/* Simplifies conditions in code */
+#define MM_NUM_TSBS		2
 #else
-#define MM_NUM_TSBS	1
+#define MM_NUM_TSBS		1
+#endif
 #endif
 
 typedef struct {
 	spinlock_t		lock;
 	unsigned long		sparc64_ctx_val;
+#if defined(CONFIG_SHARED_MMU_CTX)
+	struct shared_mmu_ctx	*shared_ctx;
+	unsigned long		shared_hugetlb_pte_count;
+#endif
 	unsigned long		hugetlb_pte_count;
 	unsigned long		thp_pte_count;
 	struct tsb_config	tsb_block[MM_NUM_TSBS];
 	struct hv_tsb_descr	tsb_descr[MM_NUM_TSBS];
 } mm_context_t;
 
+#define mm_shared_ctx_val(mm)					\
+	((mm)->context.shared_ctx ?				\
+	 (mm)->context.shared_ctx->shared_ctx_val : 0UL)
+
 #endif /* !__ASSEMBLY__ */
 
 #define TSB_CONFIG_TSB		0x00
diff --git a/arch/sparc/include/asm/mmu_context_64.h b/arch/sparc/include/asm/mmu_context_64.h
index b84be67..d031799 100644
--- a/arch/sparc/include/asm/mmu_context_64.h
+++ b/arch/sparc/include/asm/mmu_context_64.h
@@ -35,15 +35,15 @@ void __tsb_context_switch(unsigned long pgd_pa,
 static inline void tsb_context_switch(struct mm_struct *mm)
 {
 	__tsb_context_switch(__pa(mm->pgd),
-			     &mm->context.tsb_block[0],
+			     &mm->context.tsb_block[MM_TSB_BASE],
 #if defined(CONFIG_HUGETLB_PAGE) || defined(CONFIG_TRANSPARENT_HUGEPAGE)
-			     (mm->context.tsb_block[1].tsb ?
-			      &mm->context.tsb_block[1] :
+			     (mm->context.tsb_block[MM_TSB_HUGE].tsb ?
+			      &mm->context.tsb_block[MM_TSB_HUGE] :
 			      NULL)
 #else
 			     NULL
 #endif
-			     , __pa(&mm->context.tsb_descr[0]));
+			     , __pa(&mm->context.tsb_descr[MM_TSB_BASE]));
 }
 
 void tsb_grow(struct mm_struct *mm,
Add new fields to the mm_context structure to support shared context.
Instead of a simple context ID, add a pointer to a structure with a
reference count. This is needed as multiple tasks will share the
context ID.

Pages using the shared context ID will reside in a separate TSB, so
changes are made to increase the number of TSBs as well. Note that only
support for context sharing of huge pages is provided; therefore, there
is no base page size shared context TSB.

Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
---
 arch/sparc/include/asm/mmu_64.h         | 36 +++++++++++++++++++++++++++++----
 arch/sparc/include/asm/mmu_context_64.h |  8 ++++----
 2 files changed, 36 insertions(+), 8 deletions(-)