From patchwork Wed Jan 18 08:18:14 2017
X-Patchwork-Submitter: Timo Aaltonen
X-Patchwork-Id: 716541
From: Timo Aaltonen
To: kernel-team@lists.ubuntu.com
Subject: [PATCH 1/2] UBUNTU: SAUCE: i915_bpo: Fix DP link rate math
Date: Wed, 18 Jan 2017 10:18:14 +0200
Message-Id: <1484727495-21891-2-git-send-email-tjaalton@ubuntu.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1484727495-21891-1-git-send-email-tjaalton@ubuntu.com>
References: <1484727495-21891-1-git-send-email-tjaalton@ubuntu.com>

From: Timo Aaltonen

BugLink: http://bugs.launchpad.net/bugs/1657353

We store DP link rates as link clock frequencies in kHz, just like all
other clock values. But DP link rates in the DP spec are expressed in
Gbps/lane, which seems to have led to some confusion. E.g., for HBR2:

    Max. data rate = 5.4 Gbps/lane x 4 lanes x 8/10 x 1/8 = 2160000 kBps

where 8/10 accounts for the channel encoding and 1/8 for the bit to
Byte conversion.

Using the link clock frequency, as we do:

    Max. data rate = 540000 kHz * 4 lanes = 2160000 kSymbols/s

Because each symbol carries 8 bits of data, this is 2160000 kBps and
there is no need to account for channel encoding here. But currently we
compute:

    540000 kHz * 4 lanes * (8/10) = 1728000 kBps

Similarly, while computing the required link bandwidth for a mode there
is a mysterious 1/10 term. This should simply be:

    pixel_clock kHz * (bpp/8)

to give the final result in kBps.
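Not part of this patch, but for illustration: a minimal userspace
sketch of the corrected math. The function names are hypothetical
stand-ins for intel_dp_max_data_rate()/intel_dp_link_required(), and
DIV_ROUND_UP is open-coded here to match the kernel macro:

  #include <stdio.h>

  #define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

  /* kBps: one byte of data per symbol per lane, so just LS_Clk * lanes */
  static int max_data_rate(int link_clock_khz, int lanes)
  {
          return link_clock_khz * lanes;
  }

  /* kBps needed by a mode: pixel clock in kHz times bytes per pixel */
  static int link_required(int pixel_clock_khz, int bpp)
  {
          return DIV_ROUND_UP(pixel_clock_khz * bpp, 8);
  }

  int main(void)
  {
          /* HBR2 x4: 540000 kHz * 4 = 2160000 kBps, matching the
           * 5.4 Gbps/lane x 4 lanes x 8/10 x 1/8 spec figure above */
          printf("%d kBps\n", max_data_rate(540000, 4));

          /* e.g. a 533250 kHz pixel clock at 24 bpp needs 1599750 kBps */
          printf("%d kBps\n", link_required(533250, 24));
          return 0;
  }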
v2: Changed to DIV_ROUND_UP() and comment changes (Ville)

Signed-off-by: Dhinakaran Pandiyan
Link: http://patchwork.freedesktop.org/patch/msgid/1479160220-17794-1-git-send-email-dhinakaran.pandiyan@intel.com
Signed-off-by: Ville Syrjälä
(backported from drm-intel-next commit fd81c44eba9ca1e78d0601f37b5d7819df522aa7)
Signed-off-by: Timo Aaltonen
---
 ubuntu/i915/intel_dp.c | 35 +++++++++++++++--------------------
 1 file changed, 15 insertions(+), 20 deletions(-)

diff --git a/ubuntu/i915/intel_dp.c b/ubuntu/i915/intel_dp.c
index 94b3571..53772f2 100644
--- a/ubuntu/i915/intel_dp.c
+++ b/ubuntu/i915/intel_dp.c
@@ -166,33 +166,23 @@ static u8 intel_dp_max_lane_count(struct intel_dp *intel_dp)
 	return min(source_max, sink_max);
 }
 
-/*
- * The units on the numbers in the next two are... bizarre. Examples will
- * make it clearer; this one parallels an example in the eDP spec.
- *
- * intel_dp_max_data_rate for one lane of 2.7GHz evaluates as:
- *
- *     270000 * 1 * 8 / 10 == 216000
- *
- * The actual data capacity of that configuration is 2.16Gbit/s, so the
- * units are decakilobits. ->clock in a drm_display_mode is in kilohertz -
- * or equivalently, kilopixels per second - so for 1680x1050R it'd be
- * 119000. At 18bpp that's 2142000 kilobits per second.
- *
- * Thus the strange-looking division by 10 in intel_dp_link_required, to
- * get the result in decakilobits instead of kilobits.
- */
-
 static int
 intel_dp_link_required(int pixel_clock, int bpp)
 {
-	return (pixel_clock * bpp + 9) / 10;
+	/* pixel_clock is in kHz, divide bpp by 8 for bit to Byte conversion */
+	return DIV_ROUND_UP(pixel_clock * bpp, 8);
 }
 
 static int
 intel_dp_max_data_rate(int max_link_clock, int max_lanes)
 {
-	return (max_link_clock * max_lanes * 8) / 10;
+	/* max_link_clock is the link symbol clock (LS_Clk) in kHz and not the
+	 * link rate that is generally expressed in Gbps. Since, 8 bits of data
+	 * is transmitted every LS_Clk per lane, there is no need to account for
+	 * the channel encoding that is done in the PHY layer here.
+	 */
+
+	return max_link_clock * max_lanes;
 }
 
 static enum drm_mode_status
@@ -3794,7 +3784,12 @@ intel_dp_get_dpcd(struct intel_dp *intel_dp)
 		if (val == 0)
 			break;
 
-		/* Value read is in kHz while drm clock is saved in deca-kHz */
+		/* Value read multiplied by 200kHz gives the per-lane
+		 * link rate in kHz. The source rates are, however,
+		 * stored in terms of LS_Clk kHz. The full conversion
+		 * back to symbols is
+		 * (val * 200kHz)*(8/10 ch. encoding)*(1/8 bit to Byte)
+		 */
 		intel_dp->sink_rates[i] = (val * 200) / 10;
 	}
 	intel_dp->num_sink_rates = i;
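Again for illustration only, not part of the patch: the sink rate
conversion in the last hunk can be sanity-checked standalone. eDP 1.4
DP_SUPPORTED_LINK_RATES entries are in units of 200 kHz, so the stored
LS_Clk value works out to (val * 200) / 10, as the new comment derives:

  #include <stdio.h>

  int main(void)
  {
          /* DPCD supported-link-rates entry for HBR2:
           * 27000 * 200 kHz = 5.4 GHz link rate */
          unsigned int val = 27000;

          /* (val * 200 kHz) * (8/10 ch. encoding) * (1/8 bit to Byte),
           * i.e. the LS_Clk kHz form the driver stores */
          unsigned int ls_clk_khz = (val * 200) / 10;

          printf("%u kHz\n", ls_clk_khz); /* prints 540000 */
          return 0;
  }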