Date: Mon, 24 Jun 2024 17:44:23 -0600
From: Jeff Law
To: gcc-patches@gcc.gnu.org
Subject: [to-be-committed][V3][RISC-V] cmpmem for RISCV with V extension

So this is the cmpmem patch from Sergei, updated for the trunk.

The updates include adjusting the existing cmpmemsi expander to
conditionally try expansion via vector, plus a minor testsuite adjustment
to turn off vector expansion in one test that is primarily focused on
vsetvl optimization and on ensuring we don't emit extra vsetvls.

I've spun this in my tester successfully and just want to see a clean run
through precommit CI before moving forward.

Jeff

gcc/ChangeLog:

	* config/riscv/riscv-protos.h (riscv_vector::expand_vec_cmpmem):
	New function declaration.
	* config/riscv/riscv-string.cc (riscv_vector::expand_vec_cmpmem):
	New function.
	* config/riscv/riscv.md (cmpmemsi): Try
	riscv_vector::expand_vec_cmpmem for constant lengths.

gcc/testsuite/ChangeLog:

	* gcc.target/riscv/rvv/base/cmpmem-1.c: New codegen tests.
	* gcc.target/riscv/rvv/base/cmpmem-2.c: New execution tests.
	* gcc.target/riscv/rvv/base/cmpmem-3.c: New codegen tests.
	* gcc.target/riscv/rvv/base/cmpmem-4.c: New codegen tests.
	* gcc.target/riscv/rvv/autovec/vls/misalign-1.c: Turn off vector
	mem* and str* handling.
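As context for reviewers: the expander only fires for constant-length
memcmp/__builtin_memcmp calls.  A minimal illustration of the kind of call
that can now expand inline (the function name and the 16-byte length are
illustrative, not from the patch; whether a given length is inlined
depends on __riscv_v_min_vlen and -mrvv-max-lmul):

  #include <string.h>

  /* With -march=rv64gcv -O2 and a constant length that fits in a single
     vector operation, this can now become an inline vector load/compare
     sequence instead of a call to memcmp.  */
  int
  compare_blocks (const void *a, const void *b)
  {
    return memcmp (a, b, 16);
  }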
diff --git a/gcc/config/riscv/riscv-protos.h b/gcc/config/riscv/riscv-protos.h
index a3380d4250d..a8b76173fa0 100644
--- a/gcc/config/riscv/riscv-protos.h
+++ b/gcc/config/riscv/riscv-protos.h
@@ -679,6 +679,7 @@ void expand_rawmemchr (machine_mode, rtx, rtx, rtx, bool = false);
 bool expand_strcmp (rtx, rtx, rtx, rtx, unsigned HOST_WIDE_INT, bool);
 void emit_vec_extract (rtx, rtx, rtx);
 bool expand_vec_setmem (rtx, rtx, rtx);
+bool expand_vec_cmpmem (rtx, rtx, rtx, rtx);
 
 /* Rounding mode bitfield for fixed point VXRM.  */
 enum fixed_point_rounding_mode
diff --git a/gcc/config/riscv/riscv-string.cc b/gcc/config/riscv/riscv-string.cc
index 1ddebdcee3f..257a514d290 100644
--- a/gcc/config/riscv/riscv-string.cc
+++ b/gcc/config/riscv/riscv-string.cc
@@ -1605,4 +1605,104 @@ expand_vec_setmem (rtx dst_in, rtx length_in, rtx fill_value_in)
   return true;
 }
 
+/* Used by cmpmemsi in riscv.md.  */
+
+bool
+expand_vec_cmpmem (rtx result_out, rtx blk_a_in, rtx blk_b_in, rtx length_in)
+{
+  HOST_WIDE_INT lmul;
+  /* Check we are able and allowed to vectorise this operation;
+     bail if not.  */
+  if (!check_vectorise_memory_operation (length_in, lmul))
+    return false;
+
+  /* Strategy:
+     load entire blocks at a and b into vector regs
+     generate mask of bytes that differ
+     find first set bit in mask
+     find offset of first set bit in mask, use 0 if none set
+     result is ((char *) a)[offset] - ((char *) b)[offset]
+   */
+
+  machine_mode vmode
+    = riscv_vector::get_vector_mode (QImode, BYTES_PER_RISCV_VECTOR * lmul)
+	.require ();
+  rtx blk_a_addr = copy_addr_to_reg (XEXP (blk_a_in, 0));
+  rtx blk_a = change_address (blk_a_in, vmode, blk_a_addr);
+  rtx blk_b_addr = copy_addr_to_reg (XEXP (blk_b_in, 0));
+  rtx blk_b = change_address (blk_b_in, vmode, blk_b_addr);
+
+  rtx vec_a = gen_reg_rtx (vmode);
+  rtx vec_b = gen_reg_rtx (vmode);
+
+  machine_mode mask_mode = get_mask_mode (vmode);
+  rtx mask = gen_reg_rtx (mask_mode);
+  rtx mismatch_ofs = gen_reg_rtx (Pmode);
+
+  rtx ne = gen_rtx_NE (mask_mode, vec_a, vec_b);
+  rtx vmsops[] = { mask, ne, vec_a, vec_b };
+  rtx vfops[] = { mismatch_ofs, mask };
+
+  /* If the length is exactly vlmax for the selected mode, do that.
+     Otherwise, use predicated loads and a predicated compare.  */
+
+  if (known_eq (GET_MODE_SIZE (vmode), INTVAL (length_in)))
+    {
+      emit_move_insn (vec_a, blk_a);
+      emit_move_insn (vec_b, blk_b);
+      emit_vlmax_insn (code_for_pred_cmp (vmode), riscv_vector::COMPARE_OP,
+		       vmsops);
+
+      emit_vlmax_insn (code_for_pred_ffs (mask_mode, Pmode),
+		       riscv_vector::CPOP_OP, vfops);
+    }
+  else
+    {
+      if (!satisfies_constraint_K (length_in))
+	length_in = force_reg (Pmode, length_in);
+
+      rtx memmask = CONSTM1_RTX (mask_mode);
+
+      rtx m_ops_a[] = { vec_a, memmask, blk_a };
+      rtx m_ops_b[] = { vec_b, memmask, blk_b };
+
+      emit_nonvlmax_insn (code_for_pred_mov (vmode),
+			  riscv_vector::UNARY_OP_TAMA, m_ops_a, length_in);
+      emit_nonvlmax_insn (code_for_pred_mov (vmode),
+			  riscv_vector::UNARY_OP_TAMA, m_ops_b, length_in);
+
+      emit_nonvlmax_insn (code_for_pred_cmp (vmode), riscv_vector::COMPARE_OP,
+			  vmsops, length_in);
+
+      emit_nonvlmax_insn (code_for_pred_ffs (mask_mode, Pmode),
+			  riscv_vector::CPOP_OP, vfops, length_in);
+    }
+
+  /* Mismatch_ofs is -1 if the blocks match, or the offset of
+     the first mismatch otherwise.  */
+  rtx ltz = gen_reg_rtx (Xmode);
+  emit_insn (gen_slt_3 (LT, Xmode, Xmode, ltz, mismatch_ofs, const0_rtx));
+  /* mismatch_ofs += (mismatch_ofs < 0) ? 1 : 0.  */
+  emit_insn (
+    gen_rtx_SET (mismatch_ofs, gen_rtx_PLUS (Pmode, mismatch_ofs, ltz)));
+
+  /* Unconditionally load the bytes at mismatch_ofs and subtract them
+     to get our result.  */
+  emit_insn (gen_rtx_SET (blk_a_addr,
+			  gen_rtx_PLUS (Pmode, mismatch_ofs, blk_a_addr)));
+  emit_insn (gen_rtx_SET (blk_b_addr,
+			  gen_rtx_PLUS (Pmode, mismatch_ofs, blk_b_addr)));
+
+  blk_a = change_address (blk_a, QImode, blk_a_addr);
+  blk_b = change_address (blk_b, QImode, blk_b_addr);
+
+  rtx byte_a = gen_reg_rtx (SImode);
+  rtx byte_b = gen_reg_rtx (SImode);
+  do_zero_extendqi2 (byte_a, blk_a);
+  do_zero_extendqi2 (byte_b, blk_b);
+
+  emit_insn (gen_rtx_SET (result_out, gen_rtx_MINUS (SImode, byte_a, byte_b)));
+
+  return true;
+}
 }
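For readers who don't want to trace the RTL, here is a scalar C model of
what the strategy comment above describes -- purely illustrative, not the
emitted code:

  #include <stddef.h>

  /* vmsne.vv builds the byte-mismatch mask; vfirst.m yields the index of
     the first set bit, or -1 if none.  The +1 fixup maps "no mismatch" to
     offset 0, where both bytes are equal and the subtraction yields 0.  */
  static int
  cmpmem_semantics (const unsigned char *a, const unsigned char *b,
		    size_t len)
  {
    size_t offset = 0;		/* Stays 0 when the blocks match.  */
    for (size_t i = 0; i < len; i++)
      if (a[i] != b[i])
	{
	  offset = i;
	  break;
	}
    return (int) a[offset] - (int) b[offset];
  }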
diff --git a/gcc/config/riscv/riscv.md b/gcc/config/riscv/riscv.md
index 78cf83c9252..ff37125e3f2 100644
--- a/gcc/config/riscv/riscv.md
+++ b/gcc/config/riscv/riscv.md
@@ -2669,6 +2669,12 @@ (define_expand "cmpmemsi"
	      (use (match_operand:SI 4))])]
   "!optimize_size"
 {
+  /* If TARGET_VECTOR is false, this routine will return false and we will
+     try scalar expansion.  */
+  if (riscv_vector::expand_vec_cmpmem (operands[0], operands[1],
+				       operands[2], operands[3]))
+    DONE;
+
   if (riscv_expand_block_compare (operands[0], operands[1], operands[2],
				   operands[3]))
     DONE;
@@ -2717,7 +2723,6 @@ (define_expand "setmem"
   FAIL;
 })
 
-
 ;; Expand in-line code to clear the instruction cache between operand[0] and
 ;; operand[1].
 (define_expand "clear_cache"
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/autovec/vls/misalign-1.c b/gcc/testsuite/gcc.target/riscv/rvv/autovec/vls/misalign-1.c
index 5184a295e16..9d698b421d6 100644
--- a/gcc/testsuite/gcc.target/riscv/rvv/autovec/vls/misalign-1.c
+++ b/gcc/testsuite/gcc.target/riscv/rvv/autovec/vls/misalign-1.c
@@ -1,5 +1,5 @@
 /* { dg-do compile } */
-/* { dg-options "-march=rv64gcv -mabi=lp64d -O3 -fno-schedule-insns -fno-schedule-insns2 -mrvv-max-lmul=m4 -fno-tree-loop-distribute-patterns -mno-vector-strict-align" } */
+/* { dg-options "-march=rv64gcv -mabi=lp64d -O3 -fno-schedule-insns -fno-schedule-insns2 -mrvv-max-lmul=m4 -fno-tree-loop-distribute-patterns -mno-vector-strict-align -mstringop-strategy=libcall" } */
 
 #include 
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/cmpmem-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/cmpmem-1.c
new file mode 100644
index 00000000000..6bc8b07bc2c
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/cmpmem-1.c
@@ -0,0 +1,88 @@
+/* { dg-do compile } */
+/* { dg-add-options riscv_v } */
+/* { dg-additional-options "-O3 -mrvv-max-lmul=dynamic" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+
+#define MIN_VECTOR_BYTES (__riscv_v_min_vlen / 8)
+
+/* Trivial memcmp should use inline scalar ops.
+** f1:
+**	lbu\s+a\d+,0\(a0\)
+**	lbu\s+a\d+,0\(a1\)
+**	subw?\s+a0,a\d+,a\d+
+**	ret
+*/
+int
+f1 (void *a, void *b)
+{
+  return __builtin_memcmp (a, b, 1);
+}
+
+/* Tiny __builtin_memcmp should use libc.
+** f2:
+**	li\s+a\d,\d+
+**	tail\s+memcmp
+*/
+int
+f2 (void *a, void *b)
+{
+  return __builtin_memcmp (a, b, MIN_VECTOR_BYTES - 1);
+}
+
+/* Vectorise+inline minimum vector register width with LMUL=1
+** f3:
+** (
+**	vsetivli\s+zero,\d+,e8,m1,ta,ma
+** |
+**	li\s+a\d+,\d+
+**	vsetvli\s+zero,a\d+,e8,m1,ta,ma
+** )
+** ...
+**	ret
+*/
+int
+f3 (void *a, void *b)
+{
+  return __builtin_memcmp (a, b, MIN_VECTOR_BYTES);
+}
+
+/* Vectorised code should use smallest lmul known to fit length
+** f4:
+** (
+**	vsetivli\s+zero,\d+,e8,m2,ta,ma
+** |
+**	li\s+a\d+,\d+
+**	vsetvli\s+zero,a\d+,e8,m2,ta,ma
+** )
+** ...
+**	ret
+*/
+int
+f4 (void *a, void *b)
+{
+  return __builtin_memcmp (a, b, MIN_VECTOR_BYTES + 1);
+}
+
+/* Vectorise+inline up to LMUL=8
+** f5:
+**	li\s+a\d+,\d+
+**	vsetvli\s+zero,a\d+,e8,m8,ta,ma
+** ...
+**	ret
+*/
+int
+f5 (void *a, void *b)
+{
+  return __builtin_memcmp (a, b, MIN_VECTOR_BYTES * 8);
+}
+
+/* Don't inline if the length is too large for one operation.
+** f6:
+**	li\s+a2,\d+
+**	tail\s+memcmp
+*/
+int
+f6 (void *a, void *b)
+{
+  return __builtin_memcmp (a, b, MIN_VECTOR_BYTES * 8 + 1);
+}
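The thresholds in this test encode a simple policy for
-mrvv-max-lmul=dynamic.  A sketch in C of that policy (select_dynamic_lmul
and its parameters are illustrative names, not GCC internals; the real
check is check_vectorise_memory_operation):

  #include <stdbool.h>

  /* Pick the smallest LMUL in {1, 2, 4, 8} whose vector register group
     covers LEN bytes; beyond LMUL=8 nothing fits in one operation, so
     fall back to a memcmp libcall.  min_vector_bytes corresponds to the
     MIN_VECTOR_BYTES test macro, i.e. __riscv_v_min_vlen / 8.  Lengths
     small enough for scalar expansion are handled before this policy
     applies.  */
  static bool
  select_dynamic_lmul (int len, int min_vector_bytes, int *lmul_out)
  {
    for (int lmul = 1; lmul <= 8; lmul *= 2)
      if (len <= min_vector_bytes * lmul)
	{
	  *lmul_out = lmul;
	  return true;	/* Inline with this LMUL.  */
	}
    return false;	/* Too large for one operation.  */
  }

With a fixed -mrvv-max-lmul (m1 or m8 in cmpmem-3.c and cmpmem-4.c below),
the LMUL is pinned rather than chosen, which is why cmpmem-4.c expects m8
even for the minimum-length case.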
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/cmpmem-2.c b/gcc/testsuite/gcc.target/riscv/rvv/base/cmpmem-2.c
new file mode 100644
index 00000000000..c782cc6c6e6
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/cmpmem-2.c
@@ -0,0 +1,74 @@
+/* { dg-do run { target { riscv_v } } } */
+/* { dg-add-options riscv_v } */
+/* { dg-options "-O2 -mrvv-max-lmul=dynamic" } */
+
+#include 
+
+#define MIN_VECTOR_BYTES (__riscv_v_min_vlen / 8)
+
+static inline __attribute__ ((always_inline)) void
+do_one_test (int const size, int const diff_offset, int const diff_dir)
+{
+  unsigned char A[size];
+  unsigned char B[size];
+  unsigned char const fill_value = 0x55;
+  __builtin_memset (A, fill_value, size);
+  __builtin_memset (B, fill_value, size);
+
+  if (diff_dir != 0)
+    {
+      if (diff_dir < 0)
+	{
+	  A[diff_offset] = fill_value - 1;
+	}
+      else
+	{
+	  A[diff_offset] = fill_value + 1;
+	}
+    }
+
+  if (__builtin_memcmp (A, B, size) != diff_dir)
+    {
+      abort ();
+    }
+}
+
+int
+main ()
+{
+  do_one_test (0, 0, 0);
+
+  do_one_test (1, 0, -1);
+  do_one_test (1, 0, 0);
+  do_one_test (1, 0, 1);
+
+  do_one_test (MIN_VECTOR_BYTES - 1, 0, -1);
+  do_one_test (MIN_VECTOR_BYTES - 1, 0, 0);
+  do_one_test (MIN_VECTOR_BYTES - 1, 0, 1);
+  do_one_test (MIN_VECTOR_BYTES - 1, 1, -1);
+  do_one_test (MIN_VECTOR_BYTES - 1, 1, 0);
+  do_one_test (MIN_VECTOR_BYTES - 1, 1, 1);
+
+  do_one_test (MIN_VECTOR_BYTES, 0, -1);
+  do_one_test (MIN_VECTOR_BYTES, 0, 0);
+  do_one_test (MIN_VECTOR_BYTES, 0, 1);
+  do_one_test (MIN_VECTOR_BYTES, MIN_VECTOR_BYTES - 1, -1);
+  do_one_test (MIN_VECTOR_BYTES, MIN_VECTOR_BYTES - 1, 0);
+  do_one_test (MIN_VECTOR_BYTES, MIN_VECTOR_BYTES - 1, 1);
+
+  do_one_test (MIN_VECTOR_BYTES + 1, 0, -1);
+  do_one_test (MIN_VECTOR_BYTES + 1, 0, 0);
+  do_one_test (MIN_VECTOR_BYTES + 1, 0, 1);
+  do_one_test (MIN_VECTOR_BYTES + 1, MIN_VECTOR_BYTES, -1);
+  do_one_test (MIN_VECTOR_BYTES + 1, MIN_VECTOR_BYTES, 0);
+  do_one_test (MIN_VECTOR_BYTES + 1, MIN_VECTOR_BYTES, 1);
+
+  do_one_test (MIN_VECTOR_BYTES * 8, 0, -1);
+  do_one_test (MIN_VECTOR_BYTES * 8, 0, 0);
+  do_one_test (MIN_VECTOR_BYTES * 8, 0, 1);
+  do_one_test (MIN_VECTOR_BYTES * 8, MIN_VECTOR_BYTES * 8 - 1, -1);
+  do_one_test (MIN_VECTOR_BYTES * 8, MIN_VECTOR_BYTES * 8 - 1, 0);
+  do_one_test (MIN_VECTOR_BYTES * 8, MIN_VECTOR_BYTES * 8 - 1, 1);
+
+  return 0;
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/cmpmem-3.c b/gcc/testsuite/gcc.target/riscv/rvv/base/cmpmem-3.c
new file mode 100644
index 00000000000..5ca31af90fb
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/base/cmpmem-3.c
@@ -0,0 +1,45 @@
+/* { dg-do compile } */
+/* { dg-add-options riscv_v } */
+/* { dg-additional-options "-O3 -mrvv-max-lmul=m1" } */
+/* { dg-final { check-function-bodies "**" "" } } */
"" } } */ + +#define MIN_VECTOR_BYTES (__riscv_v_min_vlen / 8) + +/* Tiny __builtin_memcmp should use libc. +** f1: +** li\s+a\d,\d+ +** tail\s+memcmp +*/ +int +f1 (void *a, void *b) +{ + return __builtin_memcmp (a, b, MIN_VECTOR_BYTES - 1); +} + +/* Vectorise+inline minimum vector register width with LMUL=1 +** f2: +** ( +** vsetivli\s+zero,\d+,e8,m1,ta,ma +** | +** li\s+a\d+,\d+ +** vsetvli\s+zero,a\d+,e8,m1,ta,ma +** ) +** ... +** ret +*/ +int +f2 (void *a, void *b) +{ + return __builtin_memcmp (a, b, MIN_VECTOR_BYTES); +} + +/* Don't inline if the length is too large for one operation. +** f3: +** li\s+a2,\d+ +** tail\s+memcmp +*/ +int +f3 (void *a, void *b) +{ + return __builtin_memcmp (a, b, MIN_VECTOR_BYTES + 1); +} diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/cmpmem-4.c b/gcc/testsuite/gcc.target/riscv/rvv/base/cmpmem-4.c new file mode 100644 index 00000000000..5860b27a233 --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/cmpmem-4.c @@ -0,0 +1,62 @@ +/* { dg-do compile } */ +/* { dg-add-options riscv_v } */ +/* { dg-additional-options "-O3 -mrvv-max-lmul=m8" } */ +/* { dg-final { check-function-bodies "**" "" } } */ + +#define MIN_VECTOR_BYTES (__riscv_v_min_vlen / 8) + +/* Tiny __builtin_memcmp should use libc. +** f1: +** li\s+a\d,\d+ +** tail\s+memcmp +*/ +int +f1 (void *a, void *b) +{ + return __builtin_memcmp (a, b, MIN_VECTOR_BYTES - 1); +} + +/* Vectorise+inline minimum vector register width with LMUL=8 as requested +** f2: +** ( +** vsetivli\s+zero,\d+,e8,m8,ta,ma +** | +** li\s+a\d+,\d+ +** vsetvli\s+zero,a\d+,e8,m8,ta,ma +** ) +** ... +** ret +*/ +int +f2 (void *a, void *b) +{ + return __builtin_memcmp (a, b, MIN_VECTOR_BYTES); +} + +/* Vectorise+inline anything that fits +** f3: +** ( +** vsetivli\s+zero,\d+,e8,m8,ta,ma +** | +** li\s+a\d+,\d+ +** vsetvli\s+zero,a\d+,e8,m8,ta,ma +** ) +** ... +** ret +*/ +int +f3 (void *a, void *b) +{ + return __builtin_memcmp (a, b, MIN_VECTOR_BYTES * 8); +} + +/* Don't inline if the length is too large for one operation. +** f4: +** li\s+a2,\d+ +** tail\s+memcmp +*/ +int +f4 (void *a, void *b) +{ + return __builtin_memcmp (a, b, MIN_VECTOR_BYTES * 8 + 1); +}