From patchwork Wed Dec 1 07:36:13 2010
X-Patchwork-Submitter: Changli Gao
X-Patchwork-Id: 73735
X-Patchwork-Delegate: davem@davemloft.net
From: Changli Gao
Date: Wed, 1 Dec 2010 15:36:13 +0800
Subject: Re: multi bpf filter will impact performance?
To: Rui
Cc: Eric Dumazet, netdev@vger.kernel.org
References: <1291109699.2904.11.camel@edumazet-laptop>
 <1291127670.2904.96.camel@edumazet-laptop>
X-Mailing-List: netdev@vger.kernel.org

On Wed, Dec 1, 2010 at 11:48 AM, Rui wrote:
> One more question:
>
> If RPS can spread the load onto 4 separate CPUs, what about
> packet_rcv() (or tpacket_rcv())? Will they run in parallel?
>

You mentioned RPS, but the current BPF has no instruction for reading the
current CPU number. You can try the patch below. Maybe we can combine BPF
with SO_REUSEPORT to direct traffic to the socket instance on the local CPU.

diff --git a/include/linux/filter.h b/include/linux/filter.h
index 447a775..35db44a 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -124,7 +124,8 @@ struct sock_fprog {	/* Required for SO_ATTACH_FILTER. */
 #define SKF_AD_MARK	20
 #define SKF_AD_QUEUE	24
 #define SKF_AD_HATYPE	28
-#define SKF_AD_MAX	32
+#define SKF_AD_CPU	32
+#define SKF_AD_MAX	36
 #define SKF_NET_OFF   (-0x100000)
 #define SKF_LL_OFF    (-0x200000)
 
diff --git a/net/core/filter.c b/net/core/filter.c
index a44d27f..3baa3f7 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -410,6 +410,9 @@ load_b:
 				A = 0;
 			continue;
 		}
+		case SKF_AD_CPU:
+			A = raw_smp_processor_id();
+			continue;
 		default:
 			return 0;
 		}
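
For example, once the patch is applied, user space could attach a classic BPF
filter like the one below to a PF_PACKET socket so that each per-CPU socket
only accepts packets handled on its own CPU. This is only a minimal, untested
sketch: it assumes SKF_AD_CPU is 32 as defined by this patch, and
open_cpu_socket() and wanted_cpu are illustrative names, not part of the patch.

/*
 * Minimal sketch (assumes the SKF_AD_CPU patch above, with SKF_AD_CPU == 32):
 * open a PF_PACKET socket whose classic BPF filter accepts only packets that
 * were processed on wanted_cpu.  Opening one such socket per CPU, with the
 * reading thread pinned to that CPU, lets packet_rcv() run in parallel
 * without the sockets seeing each other's traffic.  Needs CAP_NET_RAW.
 */
#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <linux/if_ether.h>
#include <linux/filter.h>

#ifndef SKF_AD_CPU
#define SKF_AD_CPU 32	/* value defined by the patch in this mail */
#endif

static int open_cpu_socket(unsigned int wanted_cpu)
{
	struct sock_filter code[] = {
		/* A = CPU the packet is being processed on (ancillary load) */
		BPF_STMT(BPF_LD | BPF_W | BPF_ABS, SKF_AD_OFF + SKF_AD_CPU),
		/* if (A == wanted_cpu) fall through to accept, else skip to drop */
		BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, wanted_cpu, 0, 1),
		BPF_STMT(BPF_RET | BPF_K, 0xffffffff),	/* accept whole packet */
		BPF_STMT(BPF_RET | BPF_K, 0),		/* drop */
	};
	struct sock_fprog prog = {
		.len	= sizeof(code) / sizeof(code[0]),
		.filter	= code,
	};
	int fd = socket(PF_PACKET, SOCK_RAW, htons(ETH_P_ALL));

	if (fd < 0) {
		perror("socket");
		return -1;
	}
	if (setsockopt(fd, SOL_SOCKET, SO_ATTACH_FILTER, &prog, sizeof(prog)) < 0) {
		perror("SO_ATTACH_FILTER");
		close(fd);
		return -1;
	}
	return fd;
}

Returning 0 from the filter means packet_rcv()/tpacket_rcv() never queues the
skb to that socket, so each per-CPU socket only pays for its own traffic.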