From patchwork Wed Apr 3 20:42:33 2013
X-Patchwork-Submitter: "Christoph Lameter (Ampere)"
X-Patchwork-Id: 233584
X-Patchwork-Delegate: davem@davemloft.net
Date: Wed, 3 Apr 2013 20:42:33 +0000
From: Christoph Lameter
X-X-Sender: cl@gentwo.org
To: Tejun Heo
Cc: RongQing Li, Shan Wei, netdev@vger.kernel.org, Eric Dumazet
Subject: [PERCPU] Remove & in front of this_cpu_ptr
In-Reply-To:
Message-ID: <0000013dd1a300fb-1fbb26a9-77a7-4c24-95ff-f088309206d9-000000@email.amazonses.com>
References: <1364463761-32510-1-git-send-email-roy.qing.li@gmail.com>
 <1364475933.15753.36.camel@edumazet-glaptop>
 <0000013db16f1e1d-abcb7d9e-1c9d-4ef9-b4de-767bc0282ccf-000000@email.amazonses.com>
 <0000013dc6307f44-940f2bf1-7556-4d9e-92ab-1a84d2a47ca8-000000@email.amazonses.com>
 <1364833887.5113.161.camel@edumazet-glaptop>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
List-ID: X-Mailing-List: netdev@vger.kernel.org

Subject: percpu: Remove & in front of this_cpu_ptr

Both forms are correct:

	this_cpu_ptr(&percpu_pointer->field)
	[add the offset of the field within the struct to the percpu
	pointer, then add the result to the local percpu base]

	&this_cpu_ptr(percpu_pointer)->field
	[add the percpu pointer offset to the local percpu base to form
	an address, then add the field offset to that address]

However, the latter is more complicated: the first form is easier to
understand, and the second may also be more difficult for the compiler
to optimize. Convert all instances to this_cpu_ptr(&percpu_pointer->field).

Signed-off-by: Christoph Lameter

---

Index: linux/fs/gfs2/rgrp.c
===================================================================
--- linux.orig/fs/gfs2/rgrp.c	2013-04-03 15:25:22.576562629 -0500
+++ linux/fs/gfs2/rgrp.c	2013-04-03 15:26:43.045773676 -0500
@@ -1726,7 +1726,7 @@ static bool gfs2_rgrp_congested(const st
 	s64 var;
 
 	preempt_disable();
-	st = &this_cpu_ptr(sdp->sd_lkstats)->lkstats[LM_TYPE_RGRP];
+	st = this_cpu_ptr(&sdp->sd_lkstats->lkstats[LM_TYPE_RGRP]);
 	r_srttb = st->stats[GFS2_LKS_SRTTB];
 	r_dcount = st->stats[GFS2_LKS_DCOUNT];
 	var = st->stats[GFS2_LKS_SRTTVARB] +
Index: linux/mm/page_alloc.c
===================================================================
--- linux.orig/mm/page_alloc.c	2013-04-03 15:25:22.576562629 -0500
+++ linux/mm/page_alloc.c	2013-04-03 15:30:02.124769119 -0500
@@ -1342,7 +1342,7 @@ void free_hot_cold_page(struct page *pag
 		migratetype = MIGRATE_MOVABLE;
 	}
 
-	pcp = &this_cpu_ptr(zone->pageset)->pcp;
+	pcp = this_cpu_ptr(&zone->pageset->pcp);
 	if (cold)
 		list_add_tail(&page->lru, &pcp->lists[migratetype]);
 	else
@@ -1484,7 +1484,7 @@ again:
 		struct list_head *list;
 
 		local_irq_save(flags);
-		pcp = &this_cpu_ptr(zone->pageset)->pcp;
+		pcp = this_cpu_ptr(&zone->pageset->pcp);
 		list = &pcp->lists[migratetype];
 		if (list_empty(list)) {
 			pcp->count += rmqueue_bulk(zone, 0,
Index: linux/net/core/flow.c
===================================================================
--- linux.orig/net/core/flow.c	2013-04-03 15:25:22.576562629 -0500
+++ linux/net/core/flow.c	2013-04-03 15:26:43.045773676 -0500
@@ -328,7 +328,7 @@ static void flow_cache_flush_per_cpu(voi
 	struct flow_flush_info *info = data;
 	struct tasklet_struct *tasklet;
 
-	tasklet = &this_cpu_ptr(info->cache->percpu)->flush_tasklet;
+	tasklet = this_cpu_ptr(&info->cache->percpu->flush_tasklet);
 	tasklet->data = (unsigned long)info;
 	tasklet_schedule(tasklet);
 }