From patchwork Mon Jan 4 16:36:51 2021
From: Harry van Haaren <harry.van.haaren@intel.com>
To: ovs-dev@openvswitch.org
Cc: i.maximets@ovn.org
Date: Mon, 4 Jan 2021 16:36:51 +0000
Message-Id: <20210104163653.2218575-15-harry.van.haaren@intel.com>
In-Reply-To: <20210104163653.2218575-1-harry.van.haaren@intel.com>
Subject: [ovs-dev] [PATCH v8 14/16] dpcls-avx512: enabling avx512 vector
 popcount instruction.

This commit enables the AVX512-VPOPCNTDQ Vector Popcount instruction.
This instruction is not available on every CPU that supports the
AVX512-F Foundation ISA, hence it is enabled only when the additional
VPOPCNTDQ ISA check is passed.

The vector popcount instruction is used instead of the AVX512 popcount
emulation code present in the avx512 optimized DPCLS today. It provides
higher performance in the SIMD miniflow processing as that requires the
popcount to calculate the miniflow block indexes.

Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>

---
v8: Add NEWS entry.
---
 NEWS                                   |  3 +
 lib/dpdk.c                             |  1 +
 lib/dpif-netdev-lookup-avx512-gather.c | 84 ++++++++++++++++++++------
 3 files changed, 70 insertions(+), 18 deletions(-)

diff --git a/NEWS b/NEWS
index 345cd2696..a75ea900c 100644
--- a/NEWS
+++ b/NEWS
@@ -28,6 +28,9 @@ Post-v2.14.0
      * Enable AVX512 optimized DPCLS to search subtables with larger miniflows.
      * Add more specialized DPCLS subtables to cover common rules, enhancing
        the lookup performance.
+     * Enable the AVX512 DPCLS implementation to use VPOPCNT instruction if the
+       CPU supports it. This enhances performance by using the native vpopcount
+       instructions, instead of the emulated version of vpopcount.
    - The environment variable OVS_UNBOUND_CONF, if set, is now used as the DNS
      resolver's (unbound) configuration file.
    - Linux datapath:
diff --git a/lib/dpdk.c b/lib/dpdk.c
index c883a4b8b..a9494a40f 100644
--- a/lib/dpdk.c
+++ b/lib/dpdk.c
@@ -655,6 +655,7 @@ dpdk_get_cpu_has_isa(const char *arch, const char *feature)
 #if __x86_64__
     /* CPU flags only defined for the architecture that support it. */
     CHECK_CPU_FEATURE(feature, "avx512f", RTE_CPUFLAG_AVX512F);
+    CHECK_CPU_FEATURE(feature, "avx512vpopcntdq", RTE_CPUFLAG_AVX512VPOPCNTDQ);
     CHECK_CPU_FEATURE(feature, "bmi2", RTE_CPUFLAG_BMI2);
 #endif
diff --git a/lib/dpif-netdev-lookup-avx512-gather.c b/lib/dpif-netdev-lookup-avx512-gather.c
index 3a684fadf..9a3273dc6 100644
--- a/lib/dpif-netdev-lookup-avx512-gather.c
+++ b/lib/dpif-netdev-lookup-avx512-gather.c
@@ -53,6 +53,15 @@ VLOG_DEFINE_THIS_MODULE(dpif_lookup_avx512_gather);
 
+
+/* Wrapper function required to enable ISA. */
+static inline __m512i
+__attribute__((__target__("avx512vpopcntdq")))
+_mm512_popcnt_epi64_wrapper(__m512i v_in)
+{
+    return _mm512_popcnt_epi64(v_in);
+}
+
 static inline __m512i
 _mm512_popcnt_epi64_manual(__m512i v_in)
 {
@@ -126,7 +135,8 @@ avx512_blocks_gather(__m512i v_u0, /* reg of u64 of all u0 bits */
                      __mmask64 u1_bcast_msk,       /* mask of u1 lanes */
                      const uint64_t pkt_mf_u0_pop, /* num bits in u0 of pkt */
                      __mmask64 zero_mask, /* maskz if pkt not have mf bit */
-                     __mmask64 u64_lanes_mask) /* total lane count to use */
+                     __mmask64 u64_lanes_mask, /* total lane count to use */
+                     const uint32_t use_vpop)  /* use AVX512 vpopcntdq */
 {
         /* Suggest to compiler to load tbl blocks ahead of gather() */
         __m512i v_tbl_blocks = _mm512_maskz_loadu_epi64(u64_lanes_mask,
@@ -140,8 +150,15 @@ avx512_blocks_gather(__m512i v_u0, /* reg of u64 of all u0 bits */
                                                      tbl_mf_masks);
         __m512i v_masks = _mm512_and_si512(v_pkt_bits, v_tbl_masks);
 
-        /* Manual AVX512 popcount for u64 lanes. */
-        __m512i v_popcnts = _mm512_popcnt_epi64_manual(v_masks);
+        /* Calculate AVX512 popcount for u64 lanes using the native instruction
+         * if available, or using emulation if not available.
+         */
+        __m512i v_popcnts;
+        if (use_vpop) {
+            v_popcnts = _mm512_popcnt_epi64_wrapper(v_masks);
+        } else {
+            v_popcnts = _mm512_popcnt_epi64_manual(v_masks);
+        }
 
         /* Add popcounts and offset for u1 bits. */
         __m512i v_idx_u0_offset = _mm512_maskz_set1_epi64(u1_bcast_msk,
@@ -166,7 +183,8 @@ avx512_lookup_impl(struct dpcls_subtable *subtable,
                    const struct netdev_flow_key *keys[],
                    struct dpcls_rule **rules,
                    const uint32_t bit_count_u0,
-                   const uint32_t bit_count_u1)
+                   const uint32_t bit_count_u1,
+                   const uint32_t use_vpop)
 {
     OVS_ALIGNED_VAR(CACHE_LINE_SIZE)uint64_t block_cache[BLOCKS_CACHE_SIZE];
     uint32_t hashes[NETDEV_MAX_BURST];
@@ -218,7 +236,8 @@ avx512_lookup_impl(struct dpcls_subtable *subtable,
                                                  u1_bcast_mask,
                                                  pkt_mf_u0_pop,
                                                  zero_mask,
-                                                 bit_count_total_mask);
+                                                 bit_count_total_mask,
+                                                 use_vpop);
         _mm512_storeu_si512(&block_cache[i * MF_BLOCKS_PER_PACKET], v_blocks);
 
         if (bit_count_total > 8) {
@@ -239,7 +258,8 @@ avx512_lookup_impl(struct dpcls_subtable *subtable,
                                                      u1_bcast_mask_gt8,
                                                      pkt_mf_u0_pop,
                                                      zero_mask_gt8,
-                                                     bit_count_gt8_mask);
+                                                     bit_count_gt8_mask,
+                                                     use_vpop);
             _mm512_storeu_si512(&block_cache[(i * MF_BLOCKS_PER_PACKET) + 8],
                                 v_blocks_gt8);
         }
@@ -288,7 +308,11 @@ avx512_lookup_impl(struct dpcls_subtable *subtable,
     return found_map;
 }
 
-/* Expand out specialized functions with U0 and U1 bit attributes. */
+/* Expand out specialized functions with U0 and U1 bit attributes. As the
+ * AVX512 vpopcnt instruction is not supported on all AVX512 capable CPUs,
+ * create two functions for each miniflow signature. This allows the runtime
+ * CPU detection in probe() to select the ideal implementation.
+ */
 #define DECLARE_OPTIMIZED_LOOKUP_FUNCTION(U0, U1)                             \
     static uint32_t                                                           \
     dpcls_avx512_gather_mf_##U0##_##U1(struct dpcls_subtable *subtable,       \
@@ -296,7 +320,20 @@ avx512_lookup_impl(struct dpcls_subtable *subtable,
                                        const struct netdev_flow_key *keys[],  \
                                        struct dpcls_rule **rules)             \
     {                                                                         \
-        return avx512_lookup_impl(subtable, keys_map, keys, rules, U0, U1);   \
+        const uint32_t use_vpop = 0;                                          \
+        return avx512_lookup_impl(subtable, keys_map, keys, rules,            \
+                                  U0, U1, use_vpop);                          \
+    }                                                                         \
+                                                                              \
+    static uint32_t __attribute__((__target__("avx512vpopcntdq")))            \
+    dpcls_avx512_gather_mf_##U0##_##U1##_vpop(struct dpcls_subtable *subtable,\
+                                       uint32_t keys_map,                     \
+                                       const struct netdev_flow_key *keys[],  \
+                                       struct dpcls_rule **rules)             \
+    {                                                                         \
+        const uint32_t use_vpop = 1;                                          \
+        return avx512_lookup_impl(subtable, keys_map, keys, rules,            \
+                                  U0, U1, use_vpop);                          \
     }                                                                         \
 
 DECLARE_OPTIMIZED_LOOKUP_FUNCTION(9, 4)
@@ -306,11 +343,18 @@ DECLARE_OPTIMIZED_LOOKUP_FUNCTION(5, 1)
 DECLARE_OPTIMIZED_LOOKUP_FUNCTION(4, 1)
 DECLARE_OPTIMIZED_LOOKUP_FUNCTION(4, 0)
 
-/* Check if a specialized function is valid for the required subtable. */
-#define CHECK_LOOKUP_FUNCTION(U0, U1)                                         \
+/* Check if a specialized function is valid for the required subtable.
+ * The use_vpop variable is used to decide if the VPOPCNT instruction can be
+ * used or not.
+ */
+#define CHECK_LOOKUP_FUNCTION(U0, U1, use_vpop)                               \
     ovs_assert((U0 + U1) <= (NUM_U64_IN_ZMM_REG * 2));                        \
     if (!f && u0_bits == U0 && u1_bits == U1) {                               \
-        f = dpcls_avx512_gather_mf_##U0##_##U1;                               \
+        if (use_vpop) {                                                       \
+            f = dpcls_avx512_gather_mf_##U0##_##U1##_vpop;                    \
+        } else {                                                              \
+            f = dpcls_avx512_gather_mf_##U0##_##U1;                           \
+        }                                                                     \
     }
 
 static uint32_t
@@ -318,9 +362,11 @@ dpcls_avx512_gather_mf_any(struct dpcls_subtable *subtable, uint32_t keys_map,
                            const struct netdev_flow_key *keys[],
                            struct dpcls_rule **rules)
 {
+    const uint32_t use_vpop = 0;
     return avx512_lookup_impl(subtable, keys_map, keys, rules,
                               subtable->mf_bits_set_unit0,
-                              subtable->mf_bits_set_unit1);
+                              subtable->mf_bits_set_unit1,
+                              use_vpop);
 }
 
 dpcls_subtable_lookup_func
@@ -334,12 +380,14 @@ dpcls_subtable_avx512_gather_probe(uint32_t u0_bits, uint32_t u1_bits)
         return NULL;
     }
 
-    CHECK_LOOKUP_FUNCTION(9, 4);
-    CHECK_LOOKUP_FUNCTION(9, 1);
-    CHECK_LOOKUP_FUNCTION(5, 3);
-    CHECK_LOOKUP_FUNCTION(5, 1);
-    CHECK_LOOKUP_FUNCTION(4, 1);
-    CHECK_LOOKUP_FUNCTION(4, 0);
+    int use_vpop = dpdk_get_cpu_has_isa("x86_64", "avx512vpopcntdq");
+
+    CHECK_LOOKUP_FUNCTION(9, 4, use_vpop);
+    CHECK_LOOKUP_FUNCTION(9, 1, use_vpop);
+    CHECK_LOOKUP_FUNCTION(5, 3, use_vpop);
+    CHECK_LOOKUP_FUNCTION(5, 1, use_vpop);
+    CHECK_LOOKUP_FUNCTION(4, 1, use_vpop);
+    CHECK_LOOKUP_FUNCTION(4, 0, use_vpop);
 
     /* Check if the _any looping version of the code can perform this miniflow
      * lookup. Performance gain may be less pronounced due to non-specialized
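[Editor's note] The emulated path kept by this patch, `_mm512_popcnt_epi64_manual`, exists because AVX512-F has no per-lane popcount; the usual emulation splits each byte into nibbles and sums entries of a 16-entry lookup table applied via a byte shuffle. A scalar sketch of that same technique, for readers without the OVS source at hand (`popcnt_u64_lut` and `nibble_popcnt` are illustrative names, not the OVS code):

```c
#include <stdint.h>

/* Popcount of every possible 4-bit value. The vector emulation broadcasts
 * this same table into a register and applies it with a byte shuffle;
 * here we index it one nibble at a time. */
static const uint8_t nibble_popcnt[16] = {
    0, 1, 1, 2, 1, 2, 2, 3, 1, 2, 2, 3, 2, 3, 3, 4,
};

/* Scalar equivalent of one u64 lane of the emulated vector popcount:
 * split the value into 16 nibbles and sum their table entries. */
static uint64_t
popcnt_u64_lut(uint64_t v)
{
    uint64_t count = 0;
    for (int i = 0; i < 64; i += 4) {
        count += nibble_popcnt[(v >> i) & 0xF];
    }
    return count;
}
```

In DPCLS these popcounts feed the miniflow block index calculation, which is why replacing the multi-instruction emulation with the single `vpopcntq` instruction helps the lookup inner loop.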
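[Editor's note] The probe()-time dispatch pattern used by the patch, compiling a `__target__`-attributed variant alongside a portable one and choosing between them at runtime, can be sketched in isolation. This stand-alone example keeps both variants portable so it runs anywhere; `lookup_func_t`, `probe_lookup`, and the two lookup functions are hypothetical names, and on real hardware the specialized variant would carry `__attribute__((__target__("avx512vpopcntdq")))` and use the intrinsic:

```c
#include <stdint.h>

/* Hypothetical lookup signature, much simplified from the dpcls one. */
typedef uint32_t (*lookup_func_t)(uint64_t mask);

/* Portable fallback, standing in for the emulated-popcount path. */
static uint32_t
lookup_emulated(uint64_t mask)
{
    uint32_t count = 0;
    while (mask) {
        mask &= mask - 1;   /* Clear the lowest set bit. */
        count++;
    }
    return count;
}

/* Stand-in for the ISA-specialized path; here just the compiler builtin. */
static uint32_t
lookup_native(uint64_t mask)
{
    return (uint32_t) __builtin_popcountll(mask);
}

/* Mirrors the probe() pattern: select the specialized function only when
 * the runtime ISA check passed; callers only ever see the pointer. */
static lookup_func_t
probe_lookup(int cpu_has_vpopcnt)
{
    return cpu_has_vpopcnt ? lookup_native : lookup_emulated;
}
```

The key property, as in the patch, is that the specialized code is never executed unless the feature check passes, so the binary stays safe on CPUs lacking the instruction even though both variants are compiled in.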