From patchwork Sun Jun 26 18:13:54 2011
X-Patchwork-Submitter: jamal
X-Patchwork-Id: 102066
X-Patchwork-Delegate: davem@davemloft.net
Subject: Re: [PATCH] net_sched: fix dequeuer fairness
From: jamal
Reply-To: jhs@mojatatu.com
To: Eric Dumazet
Cc: David Miller, Herbert Xu, netdev@vger.kernel.org, adi, Joe Perches
In-Reply-To: <1309106317.5134.56.camel@mojatatu>
References: <1309097254.5134.24.camel@mojatatu>
 <1309100994.2532.35.camel@edumazet-laptop>
 <1309102334.5134.31.camel@mojatatu>
 <1309104215.5134.37.camel@mojatatu>
 <1309105599.2532.50.camel@edumazet-laptop>
 <1309106317.5134.56.camel@mojatatu>
Date: Sun, 26 Jun 2011 14:13:54 -0400
Message-ID: <1309112034.5134.66.camel@mojatatu>
X-Mailer: Evolution 2.28.3
X-Mailing-List: netdev@vger.kernel.org

Sigh. One more change (should have heeded the compile warning); thanks
to adi for pointing it out. Thanks to Joe Perches for the tip on
inlining with Evolution (it seems to work when I send to myself).

----
commit a4e964941428bdf58741702e2808d2f813dac1fd
Author: Jamal Hadi Salim
Date:   Sun Jun 26 14:06:29 2011 -0400

    [PATCH] net_sched: fix dequeuer fairness

    Results on the dummy device can be seen in my netconf 2011 slides.
    The results below are for a 10GigE Intel IXGBE NIC, on another i5
    machine with very similar specs to the one used for the netconf
    2011 results. It turns out this is a lot worse than dummy, so this
    patch is even more beneficial for 10G.

    Test setup:
    -----------
    System under test sending packets out; an additional box connected
    directly, dropping the packets. A prio qdisc was installed on the
    eth device, with the default netdev tx queue length of 1000 used
    as-is. The three prio bands were each set to 100 (this did not
    factor into the results). Five packet runs were made and the
    middle three picked.

    Results
    -------
    The "cpu" column indicates which cpu the sample was taken on, and
    the "Pkt runX" columns carry the number of packets a cpu dequeued
    when forced into the "dequeuer" role.
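For reference, the qdisc setup described above might be reproduced with
something like the following. The device name "eth0" and the reading of
"set to 100" as a 100-packet pfifo limit per band are my assumptions,
not details from the original mail:

```shell
# Assumed reproduction of the test setup; eth0 is a placeholder.
ip link set dev eth0 txqueuelen 1000           # default netdev tx queue length
tc qdisc add dev eth0 root handle 1: prio      # prio qdisc, 3 bands by default

# Cap each band (interpreting "bands set to 100" as a pfifo limit)
tc qdisc add dev eth0 parent 1:1 pfifo limit 100
tc qdisc add dev eth0 parent 1:2 pfifo limit 100
tc qdisc add dev eth0 parent 1:3 pfifo limit 100
```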
    The "avg" row for each run is the number of times each cpu would
    have been the "dequeuer" if the system were perfectly fair.

    3.0-rc4 (plain)
    cpu        Pkt run1     Pkt run2     Pkt run3
    ================================================
    cpu0       21853354     21598183     22199900
    cpu1         431058       473476       393159
    cpu2         481975       477529       458466
    cpu3       23261406     23412299     22894315
    avg        11506948     11490372     11486460

    3.0-rc4 with patch and default weight 64
    cpu        Pkt run1     Pkt run2     Pkt run3
    ================================================
    cpu0       13205312     13109359     13132333
    cpu1       10189914     10159127     10122270
    cpu2       10213871     10124367     10168722
    cpu3       13165760     13164767     13096705
    avg        11693714     11639405     11630008

    As you can see, the system is still not perfect, but it is a lot
    better than it was before. At the moment we reuse the old backlog
    weight, weight_p, which is 64 packets. It seems to work reasonably
    well at that value. The system could be made fairer by reducing
    weight_p (as per my presentation), but that would also affect the
    shared backlog weight. Unless deemed necessary, I think the
    default value is fine; if not, we could add yet another knob.

    Signed-off-by: Jamal Hadi Salim
---

diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
index b4c6809..d253c16 100644
--- a/net/sched/sch_generic.c
+++ b/net/sched/sch_generic.c
@@ -189,15 +189,17 @@ static inline int qdisc_restart(struct Qdisc *q)
 
 void __qdisc_run(struct Qdisc *q)
 {
-	unsigned long start_time = jiffies;
+	int quota = weight_p;
+	int work = 0;
 
 	while (qdisc_restart(q)) {
+		work++;
 		/*
-		 * Postpone processing if
-		 * 1. another process needs the CPU;
-		 * 2. we've been doing it for too long.
+		 * Ordered by possible occurrence: Postpone processing if
+		 * 1. we've exceeded packet quota
+		 * 2. another process needs the CPU;
 		 */
-		if (need_resched() || jiffies != start_time) {
+		if (work >= quota || need_resched()) {
 			__netif_schedule(q);
 			break;
 		}