From patchwork Fri Oct 11 08:08:26 2024
X-Patchwork-Submitter: Jennifer Schmitz
X-Patchwork-Id: 1995931
From: Jennifer Schmitz
To: "gcc-patches@gcc.gnu.org"
CC: Richard Sandiford, Kyrylo Tkachov
Subject: [PATCH] SVE intrinsics: Fold svmul with constant power-of-2 operand to svlsl
Date: Fri, 11 Oct 2024 08:08:26 +0000
Message-ID: <0CE9B8D5-56C7-4401-AE4C-04440371BE7E@nvidia.com>
Previously submitted in
https://gcc.gnu.org/pipermail/gcc-patches/2024-September/663435.html

For svmul, if one of the operands is a constant vector with a uniform
power of 2, this patch folds the multiplication to a left shift by
immediate (svlsl). Because the shift amount in svlsl is the second
operand, the operands are swapped if the first operand contains the
power of 2. However, this swap is not valid for some predications: if
the predication is _m and the predicate is not ptrue, the result of
svlsl might not be the same as for svmul. Therefore, we do not apply
the fold in this case. The transform is also not applied to INT_MIN
for signed integers or to constant vectors of 1 (the latter case is
partially covered by constant folding already, and the missing cases
will be addressed by the follow-up patch suggested in
https://gcc.gnu.org/pipermail/gcc-patches/2024-September/663275.html).

Tests were added to the existing test harness to check the produced
assembly
- when the first or second operand contains the power of 2,
- when the second operand is a vector or scalar (_n),
- for _m, _z, _x predication,
- for _m with a ptrue or non-ptrue predicate,
- for INT_MIN for signed integer types,
- for the maximum power of 2 for signed and unsigned integer types.

Note that we used 4 as the power of 2 instead of 2, because a recent
patch optimizes left shifts by 1 to an add instruction; since we wanted
to highlight the change to an lsl instruction, we used a higher power
of 2. To also check correctness, runtime tests were added. A short
sketch of the intended intrinsic-level effect is given after the
ChangeLog below.

The patch was bootstrapped and regtested on aarch64-linux-gnu, no
regression.
OK for mainline?

Signed-off-by: Jennifer Schmitz

gcc/
	* config/aarch64/aarch64-sve-builtins-base.cc (svmul_impl::fold):
	Implement fold to svlsl for power-of-2 operands.

gcc/testsuite/
	* gcc.target/aarch64/sve/acle/asm/mul_s8.c: New test.
	* gcc.target/aarch64/sve/acle/asm/mul_s16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mul_s32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mul_s64.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mul_u8.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mul_u16.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mul_u32.c: Likewise.
	* gcc.target/aarch64/sve/acle/asm/mul_u64.c: Likewise.
	* gcc.target/aarch64/sve/mul_const_run.c: Likewise.
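For illustration only (not part of the patch), here is a minimal sketch
of the intrinsic-level effect the new tests check. The helper names are
made up; the expected assembly in the comments mirrors the mul_s32.c
test cases in the diff, assuming something like -O2 -march=armv8.2-a+sve:

#include <arm_sve.h>

/* Power-of-2 constant as second operand, "don't care" predication:
   with this patch the multiplication is expected to fold to
   "lsl z0.s, z0.s, #2".  */
svint32_t
mul_by_4 (svbool_t pg, svint32_t x)
{
  return svmul_n_s32_x (pg, x, 4);
}

/* Non-power-of-2 constant: expected to stay "mul z0.s, z0.s, #3".  */
svint32_t
mul_by_3 (svbool_t pg, svint32_t x)
{
  return svmul_n_s32_x (pg, x, 3);
}

/* _m predication with a general predicate and the constant as the
   first operand: inactive lanes must keep the value of the first
   operand (here 4), so the operands cannot be swapped and the fold
   is not applied; a mul is still emitted.  */
svint32_t
mul_4_first_m (svbool_t pg, svint32_t x)
{
  return svmul_s32_m (pg, svdup_s32 (4), x);
}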
Signed-off-by: Jennifer Schmitz Signed-off-by: Jennifer Schmitz --- .../aarch64/aarch64-sve-builtins-base.cc | 36 +- .../gcc.target/aarch64/sve/acle/asm/mul_s16.c | 353 +++++++++++++++-- .../gcc.target/aarch64/sve/acle/asm/mul_s32.c | 353 +++++++++++++++-- .../gcc.target/aarch64/sve/acle/asm/mul_s64.c | 361 ++++++++++++++++-- .../gcc.target/aarch64/sve/acle/asm/mul_s8.c | 353 +++++++++++++++-- .../gcc.target/aarch64/sve/acle/asm/mul_u16.c | 322 ++++++++++++++-- .../gcc.target/aarch64/sve/acle/asm/mul_u32.c | 322 ++++++++++++++-- .../gcc.target/aarch64/sve/acle/asm/mul_u64.c | 332 ++++++++++++++-- .../gcc.target/aarch64/sve/acle/asm/mul_u8.c | 327 ++++++++++++++-- .../gcc.target/aarch64/sve/mul_const_run.c | 101 +++++ 10 files changed, 2620 insertions(+), 240 deletions(-) create mode 100644 gcc/testsuite/gcc.target/aarch64/sve/mul_const_run.c diff --git a/gcc/config/aarch64/aarch64-sve-builtins-base.cc b/gcc/config/aarch64/aarch64-sve-builtins-base.cc index afce52a7e8d..0ba350edfe5 100644 --- a/gcc/config/aarch64/aarch64-sve-builtins-base.cc +++ b/gcc/config/aarch64/aarch64-sve-builtins-base.cc @@ -2035,7 +2035,41 @@ public: || is_ptrue (pg, f.type_suffix (0).element_bytes))) return gimple_build_assign (f.lhs, build_zero_cst (TREE_TYPE (f.lhs))); - return NULL; + /* If one of the operands is a uniform power of 2, fold to a left shift + by immediate. */ + tree op1_cst = uniform_integer_cst_p (op1); + tree op2_cst = uniform_integer_cst_p (op2); + tree shift_op1, shift_op2; + if (op1_cst && integer_pow2p (op1_cst) + && (f.pred != PRED_m + || is_ptrue (pg, f.type_suffix (0).element_bytes))) + { + shift_op1 = op2; + shift_op2 = op1_cst; + } + else if (op2_cst && integer_pow2p (op2_cst)) + { + shift_op1 = op1; + shift_op2 = op2_cst; + } + else + return NULL; + + if ((f.type_suffix (0).unsigned_p && tree_to_uhwi (shift_op2) == 1) + || (!f.type_suffix (0).unsigned_p + && (tree_int_cst_sign_bit (shift_op2) + || tree_to_shwi (shift_op2) == 1))) + return NULL; + + shift_op2 = wide_int_to_tree (unsigned_type_for (TREE_TYPE (shift_op2)), + tree_log2 (shift_op2)); + function_instance instance ("svlsl", functions::svlsl, + shapes::binary_uint_opt_n, MODE_n, + f.type_suffix_ids, GROUP_none, f.pred); + gcall *call = f.redirect_call (instance); + gimple_call_set_arg (call, 1, shift_op1); + gimple_call_set_arg (call, 2, shift_op2); + return call; } }; diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_s16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_s16.c index 80295f7bec3..3f2246856ff 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_s16.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_s16.c @@ -2,6 +2,8 @@ #include "test_sve_acle.h" +#define MAXPOW 1ULL<<14 + /* ** mul_s16_m_tied1: ** mul z0\.h, p0/m, z0\.h, z1\.h @@ -54,25 +56,122 @@ TEST_UNIFORM_ZX (mul_w0_s16_m_untied, svint16_t, int16_t, z0 = svmul_m (p0, z1, x0)) /* -** mul_2_s16_m_tied1: -** mov (z[0-9]+\.h), #2 +** mul_4dupop1_s16_m_tied1: +** mov (z[0-9]+)\.h, #4 +** mov (z[0-9]+)\.d, z0\.d +** movprfx z0, \1 +** mul z0\.h, p0/m, z0\.h, \2\.h +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop1_s16_m_tied1, svint16_t, + z0 = svmul_m (p0, svdup_s16 (4), z0), + z0 = svmul_m (p0, svdup_s16 (4), z0)) + +/* +** mul_4dupop1ptrue_s16_m_tied1: +** lsl z0\.h, z0\.h, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop1ptrue_s16_m_tied1, svint16_t, + z0 = svmul_m (svptrue_b16 (), svdup_s16 (4), z0), + z0 = svmul_m (svptrue_b16 (), svdup_s16 (4), z0)) + +/* +** mul_4dupop2_s16_m_tied1: +** lsl z0\.h, p0/m, z0\.h, #2 +** ret +*/ 
+TEST_UNIFORM_Z (mul_4dupop2_s16_m_tied1, svint16_t, + z0 = svmul_m (p0, z0, svdup_s16 (4)), + z0 = svmul_m (p0, z0, svdup_s16 (4))) + +/* +** mul_4nop2_s16_m_tied1: +** lsl z0\.h, p0/m, z0\.h, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4nop2_s16_m_tied1, svint16_t, + z0 = svmul_n_s16_m (p0, z0, 4), + z0 = svmul_m (p0, z0, 4)) + +/* +** mul_maxpownop2_s16_m_tied1: +** lsl z0\.h, p0/m, z0\.h, #14 +** ret +*/ +TEST_UNIFORM_Z (mul_maxpownop2_s16_m_tied1, svint16_t, + z0 = svmul_n_s16_m (p0, z0, MAXPOW), + z0 = svmul_m (p0, z0, MAXPOW)) + +/* +** mul_intminnop2_s16_m_tied1: +** mov (z[0-9]+\.h), #-32768 ** mul z0\.h, p0/m, z0\.h, \1 ** ret */ -TEST_UNIFORM_Z (mul_2_s16_m_tied1, svint16_t, - z0 = svmul_n_s16_m (p0, z0, 2), - z0 = svmul_m (p0, z0, 2)) +TEST_UNIFORM_Z (mul_intminnop2_s16_m_tied1, svint16_t, + z0 = svmul_n_s16_m (p0, z0, INT16_MIN), + z0 = svmul_m (p0, z0, INT16_MIN)) /* -** mul_2_s16_m_untied: -** mov (z[0-9]+\.h), #2 +** mul_1_s16_m_tied1: +** sel z0\.h, p0, z0\.h, z0\.h +** ret +*/ +TEST_UNIFORM_Z (mul_1_s16_m_tied1, svint16_t, + z0 = svmul_n_s16_m (p0, z0, 1), + z0 = svmul_m (p0, z0, 1)) + +/* +** mul_3_s16_m_tied1: +** mov (z[0-9]+\.h), #3 +** mul z0\.h, p0/m, z0\.h, \1 +** ret +*/ +TEST_UNIFORM_Z (mul_3_s16_m_tied1, svint16_t, + z0 = svmul_n_s16_m (p0, z0, 3), + z0 = svmul_m (p0, z0, 3)) + +/* +** mul_4dupop2_s16_m_untied: +** movprfx z0, z1 +** lsl z0\.h, p0/m, z0\.h, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop2_s16_m_untied, svint16_t, + z0 = svmul_m (p0, z1, svdup_s16 (4)), + z0 = svmul_m (p0, z1, svdup_s16 (4))) + +/* +** mul_4nop2_s16_m_untied: +** movprfx z0, z1 +** lsl z0\.h, p0/m, z0\.h, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4nop2_s16_m_untied, svint16_t, + z0 = svmul_n_s16_m (p0, z1, 4), + z0 = svmul_m (p0, z1, 4)) + +/* +** mul_maxpownop2_s16_m_untied: +** movprfx z0, z1 +** lsl z0\.h, p0/m, z0\.h, #14 +** ret +*/ +TEST_UNIFORM_Z (mul_maxpownop2_s16_m_untied, svint16_t, + z0 = svmul_n_s16_m (p0, z1, MAXPOW), + z0 = svmul_m (p0, z1, MAXPOW)) + +/* +** mul_3_s16_m_untied: +** mov (z[0-9]+\.h), #3 ** movprfx z0, z1 ** mul z0\.h, p0/m, z0\.h, \1 ** ret */ -TEST_UNIFORM_Z (mul_2_s16_m_untied, svint16_t, - z0 = svmul_n_s16_m (p0, z1, 2), - z0 = svmul_m (p0, z1, 2)) +TEST_UNIFORM_Z (mul_3_s16_m_untied, svint16_t, + z0 = svmul_n_s16_m (p0, z1, 3), + z0 = svmul_m (p0, z1, 3)) /* ** mul_m1_s16_m: @@ -147,19 +246,120 @@ TEST_UNIFORM_ZX (mul_w0_s16_z_untied, svint16_t, int16_t, z0 = svmul_z (p0, z1, x0)) /* -** mul_2_s16_z_tied1: -** mov (z[0-9]+\.h), #2 +** mul_4dupop1_s16_z_tied1: +** movprfx z0\.h, p0/z, z0\.h +** lsl z0\.h, p0/m, z0\.h, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop1_s16_z_tied1, svint16_t, + z0 = svmul_z (p0, svdup_s16 (4), z0), + z0 = svmul_z (p0, svdup_s16 (4), z0)) + +/* +** mul_4dupop1ptrue_s16_z_tied1: +** lsl z0\.h, z0\.h, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop1ptrue_s16_z_tied1, svint16_t, + z0 = svmul_z (svptrue_b16 (), svdup_s16 (4), z0), + z0 = svmul_z (svptrue_b16 (), svdup_s16 (4), z0)) + +/* +** mul_4dupop2_s16_z_tied1: +** movprfx z0\.h, p0/z, z0\.h +** lsl z0\.h, p0/m, z0\.h, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop2_s16_z_tied1, svint16_t, + z0 = svmul_z (p0, z0, svdup_s16 (4)), + z0 = svmul_z (p0, z0, svdup_s16 (4))) + +/* +** mul_4nop2_s16_z_tied1: +** movprfx z0\.h, p0/z, z0\.h +** lsl z0\.h, p0/m, z0\.h, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4nop2_s16_z_tied1, svint16_t, + z0 = svmul_n_s16_z (p0, z0, 4), + z0 = svmul_z (p0, z0, 4)) + +/* +** mul_maxpownop2_s16_z_tied1: +** movprfx z0\.h, p0/z, z0\.h +** lsl z0\.h, p0/m, z0\.h, #14 +** ret +*/ 
+TEST_UNIFORM_Z (mul_maxpownop2_s16_z_tied1, svint16_t, + z0 = svmul_n_s16_z (p0, z0, MAXPOW), + z0 = svmul_z (p0, z0, MAXPOW)) + +/* +** mul_intminnop2_s16_z_tied1: +** mov (z[0-9]+\.h), #-32768 ** movprfx z0\.h, p0/z, z0\.h ** mul z0\.h, p0/m, z0\.h, \1 ** ret */ -TEST_UNIFORM_Z (mul_2_s16_z_tied1, svint16_t, - z0 = svmul_n_s16_z (p0, z0, 2), - z0 = svmul_z (p0, z0, 2)) +TEST_UNIFORM_Z (mul_intminnop2_s16_z_tied1, svint16_t, + z0 = svmul_n_s16_z (p0, z0, INT16_MIN), + z0 = svmul_z (p0, z0, INT16_MIN)) + +/* +** mul_1_s16_z_tied1: +** mov z31.h, #1 +** movprfx z0.h, p0/z, z0.h +** mul z0.h, p0/m, z0.h, z31.h +** ret +*/ +TEST_UNIFORM_Z (mul_1_s16_z_tied1, svint16_t, + z0 = svmul_n_s16_z (p0, z0, 1), + z0 = svmul_z (p0, z0, 1)) + +/* +** mul_3_s16_z_tied1: +** mov (z[0-9]+\.h), #3 +** movprfx z0\.h, p0/z, z0\.h +** mul z0\.h, p0/m, z0\.h, \1 +** ret +*/ +TEST_UNIFORM_Z (mul_3_s16_z_tied1, svint16_t, + z0 = svmul_n_s16_z (p0, z0, 3), + z0 = svmul_z (p0, z0, 3)) + +/* +** mul_4dupop2_s16_z_untied: +** movprfx z0\.h, p0/z, z1\.h +** lsl z0\.h, p0/m, z0\.h, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop2_s16_z_untied, svint16_t, + z0 = svmul_z (p0, z1, svdup_s16 (4)), + z0 = svmul_z (p0, z1, svdup_s16 (4))) + +/* +** mul_4nop2_s16_z_untied: +** movprfx z0\.h, p0/z, z1\.h +** lsl z0\.h, p0/m, z0\.h, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4nop2_s16_z_untied, svint16_t, + z0 = svmul_n_s16_z (p0, z1, 4), + z0 = svmul_z (p0, z1, 4)) + +/* +** mul_maxpownop2_s16_z_untied: +** movprfx z0\.h, p0/z, z1\.h +** lsl z0\.h, p0/m, z0\.h, #14 +** ret +*/ +TEST_UNIFORM_Z (mul_maxpownop2_s16_z_untied, svint16_t, + z0 = svmul_n_s16_z (p0, z1, MAXPOW), + z0 = svmul_z (p0, z1, MAXPOW)) /* -** mul_2_s16_z_untied: -** mov (z[0-9]+\.h), #2 +** mul_3_s16_z_untied: +** mov (z[0-9]+\.h), #3 ** ( ** movprfx z0\.h, p0/z, z1\.h ** mul z0\.h, p0/m, z0\.h, \1 @@ -169,9 +369,9 @@ TEST_UNIFORM_Z (mul_2_s16_z_tied1, svint16_t, ** ) ** ret */ -TEST_UNIFORM_Z (mul_2_s16_z_untied, svint16_t, - z0 = svmul_n_s16_z (p0, z1, 2), - z0 = svmul_z (p0, z1, 2)) +TEST_UNIFORM_Z (mul_3_s16_z_untied, svint16_t, + z0 = svmul_n_s16_z (p0, z1, 3), + z0 = svmul_z (p0, z1, 3)) /* ** mul_s16_x_tied1: @@ -227,23 +427,113 @@ TEST_UNIFORM_ZX (mul_w0_s16_x_untied, svint16_t, int16_t, z0 = svmul_x (p0, z1, x0)) /* -** mul_2_s16_x_tied1: -** mul z0\.h, z0\.h, #2 +** mul_4dupop1_s16_x_tied1: +** lsl z0\.h, z0\.h, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop1_s16_x_tied1, svint16_t, + z0 = svmul_x (p0, svdup_s16 (4), z0), + z0 = svmul_x (p0, svdup_s16 (4), z0)) + +/* +** mul_4dupop1ptrue_s16_x_tied1: +** lsl z0\.h, z0\.h, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop1ptrue_s16_x_tied1, svint16_t, + z0 = svmul_x (svptrue_b16 (), svdup_s16 (4), z0), + z0 = svmul_x (svptrue_b16 (), svdup_s16 (4), z0)) + +/* +** mul_4dupop2_s16_x_tied1: +** lsl z0\.h, z0\.h, #2 ** ret */ -TEST_UNIFORM_Z (mul_2_s16_x_tied1, svint16_t, - z0 = svmul_n_s16_x (p0, z0, 2), - z0 = svmul_x (p0, z0, 2)) +TEST_UNIFORM_Z (mul_4dupop2_s16_x_tied1, svint16_t, + z0 = svmul_x (p0, z0, svdup_s16 (4)), + z0 = svmul_x (p0, z0, svdup_s16 (4))) /* -** mul_2_s16_x_untied: +** mul_4nop2_s16_x_tied1: +** lsl z0\.h, z0\.h, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4nop2_s16_x_tied1, svint16_t, + z0 = svmul_n_s16_x (p0, z0, 4), + z0 = svmul_x (p0, z0, 4)) + +/* +** mul_maxpownop2_s16_x_tied1: +** lsl z0\.h, z0\.h, #14 +** ret +*/ +TEST_UNIFORM_Z (mul_maxpownop2_s16_x_tied1, svint16_t, + z0 = svmul_n_s16_x (p0, z0, MAXPOW), + z0 = svmul_x (p0, z0, MAXPOW)) + +/* +** mul_intminnop2_s16_x_tied1: +** mov (z[0-9]+\.h), 
#-32768 +** mul z0\.h, p0/m, z0\.h, \1 +** ret +*/ +TEST_UNIFORM_Z (mul_intminnop2_s16_x_tied1, svint16_t, + z0 = svmul_n_s16_x (p0, z0, INT16_MIN), + z0 = svmul_x (p0, z0, INT16_MIN)) + +/* +** mul_1_s16_x_tied1: +** ret +*/ +TEST_UNIFORM_Z (mul_1_s16_x_tied1, svint16_t, + z0 = svmul_n_s16_x (p0, z0, 1), + z0 = svmul_x (p0, z0, 1)) + +/* +** mul_3_s16_x_tied1: +** mul z0\.h, z0\.h, #3 +** ret +*/ +TEST_UNIFORM_Z (mul_3_s16_x_tied1, svint16_t, + z0 = svmul_n_s16_x (p0, z0, 3), + z0 = svmul_x (p0, z0, 3)) + +/* +** mul_4dupop2_s16_x_untied: +** lsl z0\.h, z1\.h, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop2_s16_x_untied, svint16_t, + z0 = svmul_x (p0, z1, svdup_s16 (4)), + z0 = svmul_x (p0, z1, svdup_s16 (4))) + +/* +** mul_4nop2_s16_x_untied: +** lsl z0\.h, z1\.h, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4nop2_s16_x_untied, svint16_t, + z0 = svmul_n_s16_x (p0, z1, 4), + z0 = svmul_x (p0, z1, 4)) + +/* +** mul_maxpownop2_s16_x_untied: +** lsl z0\.h, z1\.h, #14 +** ret +*/ +TEST_UNIFORM_Z (mul_maxpownop2_s16_x_untied, svint16_t, + z0 = svmul_n_s16_x (p0, z1, MAXPOW), + z0 = svmul_x (p0, z1, MAXPOW)) + +/* +** mul_3_s16_x_untied: ** movprfx z0, z1 -** mul z0\.h, z0\.h, #2 +** mul z0\.h, z0\.h, #3 ** ret */ -TEST_UNIFORM_Z (mul_2_s16_x_untied, svint16_t, - z0 = svmul_n_s16_x (p0, z1, 2), - z0 = svmul_x (p0, z1, 2)) +TEST_UNIFORM_Z (mul_3_s16_x_untied, svint16_t, + z0 = svmul_n_s16_x (p0, z1, 3), + z0 = svmul_x (p0, z1, 3)) /* ** mul_127_s16_x: @@ -256,8 +546,7 @@ TEST_UNIFORM_Z (mul_127_s16_x, svint16_t, /* ** mul_128_s16_x: -** mov (z[0-9]+\.h), #128 -** mul z0\.h, p0/m, z0\.h, \1 +** lsl z0\.h, z0\.h, #7 ** ret */ TEST_UNIFORM_Z (mul_128_s16_x, svint16_t, diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_s32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_s32.c index 01c224932d9..5d1f66689b2 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_s32.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_s32.c @@ -2,6 +2,8 @@ #include "test_sve_acle.h" +#define MAXPOW 1ULL<<30 + /* ** mul_s32_m_tied1: ** mul z0\.s, p0/m, z0\.s, z1\.s @@ -54,25 +56,122 @@ TEST_UNIFORM_ZX (mul_w0_s32_m_untied, svint32_t, int32_t, z0 = svmul_m (p0, z1, x0)) /* -** mul_2_s32_m_tied1: -** mov (z[0-9]+\.s), #2 +** mul_4dupop1_s32_m_tied1: +** mov (z[0-9]+)\.s, #4 +** mov (z[0-9]+)\.d, z0\.d +** movprfx z0, \1 +** mul z0\.s, p0/m, z0\.s, \2\.s +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop1_s32_m_tied1, svint32_t, + z0 = svmul_m (p0, svdup_s32 (4), z0), + z0 = svmul_m (p0, svdup_s32 (4), z0)) + +/* +** mul_4dupop1ptrue_s32_m_tied1: +** lsl z0\.s, z0\.s, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop1ptrue_s32_m_tied1, svint32_t, + z0 = svmul_m (svptrue_b32 (), svdup_s32 (4), z0), + z0 = svmul_m (svptrue_b32 (), svdup_s32 (4), z0)) + +/* +** mul_4dupop2_s32_m_tied1: +** lsl z0\.s, p0/m, z0\.s, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop2_s32_m_tied1, svint32_t, + z0 = svmul_m (p0, z0, svdup_s32 (4)), + z0 = svmul_m (p0, z0, svdup_s32 (4))) + +/* +** mul_4nop2_s32_m_tied1: +** lsl z0\.s, p0/m, z0\.s, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4nop2_s32_m_tied1, svint32_t, + z0 = svmul_n_s32_m (p0, z0, 4), + z0 = svmul_m (p0, z0, 4)) + +/* +** mul_maxpownop2_s32_m_tied1: +** lsl z0\.s, p0/m, z0\.s, #30 +** ret +*/ +TEST_UNIFORM_Z (mul_maxpownop2_s32_m_tied1, svint32_t, + z0 = svmul_n_s32_m (p0, z0, MAXPOW), + z0 = svmul_m (p0, z0, MAXPOW)) + +/* +** mul_intminnop2_s32_m_tied1: +** mov (z[0-9]+\.s), #-2147483648 ** mul z0\.s, p0/m, z0\.s, \1 ** ret */ -TEST_UNIFORM_Z (mul_2_s32_m_tied1, svint32_t, - z0 = 
svmul_n_s32_m (p0, z0, 2), - z0 = svmul_m (p0, z0, 2)) +TEST_UNIFORM_Z (mul_intminnop2_s32_m_tied1, svint32_t, + z0 = svmul_n_s32_m (p0, z0, INT32_MIN), + z0 = svmul_m (p0, z0, INT32_MIN)) /* -** mul_2_s32_m_untied: -** mov (z[0-9]+\.s), #2 +** mul_1_s32_m_tied1: +** sel z0\.s, p0, z0\.s, z0\.s +** ret +*/ +TEST_UNIFORM_Z (mul_1_s32_m_tied1, svint32_t, + z0 = svmul_n_s32_m (p0, z0, 1), + z0 = svmul_m (p0, z0, 1)) + +/* +** mul_3_s32_m_tied1: +** mov (z[0-9]+\.s), #3 +** mul z0\.s, p0/m, z0\.s, \1 +** ret +*/ +TEST_UNIFORM_Z (mul_3_s32_m_tied1, svint32_t, + z0 = svmul_n_s32_m (p0, z0, 3), + z0 = svmul_m (p0, z0, 3)) + +/* +** mul_4dupop2_s32_m_untied: +** movprfx z0, z1 +** lsl z0\.s, p0/m, z0\.s, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop2_s32_m_untied, svint32_t, + z0 = svmul_m (p0, z1, svdup_s32 (4)), + z0 = svmul_m (p0, z1, svdup_s32 (4))) + +/* +** mul_4nop2_s32_m_untied: +** movprfx z0, z1 +** lsl z0\.s, p0/m, z0\.s, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4nop2_s32_m_untied, svint32_t, + z0 = svmul_n_s32_m (p0, z1, 4), + z0 = svmul_m (p0, z1, 4)) + +/* +** mul_maxpownop2_s32_m_untied: +** movprfx z0, z1 +** lsl z0\.s, p0/m, z0\.s, #30 +** ret +*/ +TEST_UNIFORM_Z (mul_maxpownop2_s32_m_untied, svint32_t, + z0 = svmul_n_s32_m (p0, z1, MAXPOW), + z0 = svmul_m (p0, z1, MAXPOW)) + +/* +** mul_3_s32_m_untied: +** mov (z[0-9]+\.s), #3 ** movprfx z0, z1 ** mul z0\.s, p0/m, z0\.s, \1 ** ret */ -TEST_UNIFORM_Z (mul_2_s32_m_untied, svint32_t, - z0 = svmul_n_s32_m (p0, z1, 2), - z0 = svmul_m (p0, z1, 2)) +TEST_UNIFORM_Z (mul_3_s32_m_untied, svint32_t, + z0 = svmul_n_s32_m (p0, z1, 3), + z0 = svmul_m (p0, z1, 3)) /* ** mul_m1_s32_m: @@ -147,19 +246,120 @@ TEST_UNIFORM_ZX (mul_w0_s32_z_untied, svint32_t, int32_t, z0 = svmul_z (p0, z1, x0)) /* -** mul_2_s32_z_tied1: -** mov (z[0-9]+\.s), #2 +** mul_4dupop1_s32_z_tied1: +** movprfx z0\.s, p0/z, z0\.s +** lsl z0\.s, p0/m, z0\.s, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop1_s32_z_tied1, svint32_t, + z0 = svmul_z (p0, svdup_s32 (4), z0), + z0 = svmul_z (p0, svdup_s32 (4), z0)) + +/* +** mul_4dupop1ptrue_s32_z_tied1: +** lsl z0\.s, z0\.s, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop1ptrue_s32_z_tied1, svint32_t, + z0 = svmul_z (svptrue_b32 (), svdup_s32 (4), z0), + z0 = svmul_z (svptrue_b32 (), svdup_s32 (4), z0)) + +/* +** mul_4dupop2_s32_z_tied1: +** movprfx z0\.s, p0/z, z0\.s +** lsl z0\.s, p0/m, z0\.s, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop2_s32_z_tied1, svint32_t, + z0 = svmul_z (p0, z0, svdup_s32 (4)), + z0 = svmul_z (p0, z0, svdup_s32 (4))) + +/* +** mul_4nop2_s32_z_tied1: +** movprfx z0\.s, p0/z, z0\.s +** lsl z0\.s, p0/m, z0\.s, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4nop2_s32_z_tied1, svint32_t, + z0 = svmul_n_s32_z (p0, z0, 4), + z0 = svmul_z (p0, z0, 4)) + +/* +** mul_maxpownop2_s32_z_tied1: +** movprfx z0\.s, p0/z, z0\.s +** lsl z0\.s, p0/m, z0\.s, #30 +** ret +*/ +TEST_UNIFORM_Z (mul_maxpownop2_s32_z_tied1, svint32_t, + z0 = svmul_n_s32_z (p0, z0, MAXPOW), + z0 = svmul_z (p0, z0, MAXPOW)) + +/* +** mul_intminnop2_s32_z_tied1: +** mov (z[0-9]+\.s), #-2147483648 ** movprfx z0\.s, p0/z, z0\.s ** mul z0\.s, p0/m, z0\.s, \1 ** ret */ -TEST_UNIFORM_Z (mul_2_s32_z_tied1, svint32_t, - z0 = svmul_n_s32_z (p0, z0, 2), - z0 = svmul_z (p0, z0, 2)) +TEST_UNIFORM_Z (mul_intminnop2_s32_z_tied1, svint32_t, + z0 = svmul_n_s32_z (p0, z0, INT32_MIN), + z0 = svmul_z (p0, z0, INT32_MIN)) + +/* +** mul_1_s32_z_tied1: +** mov z31.s, #1 +** movprfx z0.s, p0/z, z0.s +** mul z0.s, p0/m, z0.s, z31.s +** ret +*/ +TEST_UNIFORM_Z (mul_1_s32_z_tied1, svint32_t, + z0 = 
svmul_n_s32_z (p0, z0, 1), + z0 = svmul_z (p0, z0, 1)) + +/* +** mul_3_s32_z_tied1: +** mov (z[0-9]+\.s), #3 +** movprfx z0\.s, p0/z, z0\.s +** mul z0\.s, p0/m, z0\.s, \1 +** ret +*/ +TEST_UNIFORM_Z (mul_3_s32_z_tied1, svint32_t, + z0 = svmul_n_s32_z (p0, z0, 3), + z0 = svmul_z (p0, z0, 3)) + +/* +** mul_4dupop2_s32_z_untied: +** movprfx z0\.s, p0/z, z1\.s +** lsl z0\.s, p0/m, z0\.s, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop2_s32_z_untied, svint32_t, + z0 = svmul_z (p0, z1, svdup_s32 (4)), + z0 = svmul_z (p0, z1, svdup_s32 (4))) + +/* +** mul_4nop2_s32_z_untied: +** movprfx z0\.s, p0/z, z1\.s +** lsl z0\.s, p0/m, z0\.s, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4nop2_s32_z_untied, svint32_t, + z0 = svmul_n_s32_z (p0, z1, 4), + z0 = svmul_z (p0, z1, 4)) + +/* +** mul_maxpownop2_s32_z_untied: +** movprfx z0\.s, p0/z, z1\.s +** lsl z0\.s, p0/m, z0\.s, #30 +** ret +*/ +TEST_UNIFORM_Z (mul_maxpownop2_s32_z_untied, svint32_t, + z0 = svmul_n_s32_z (p0, z1, MAXPOW), + z0 = svmul_z (p0, z1, MAXPOW)) /* -** mul_2_s32_z_untied: -** mov (z[0-9]+\.s), #2 +** mul_3_s32_z_untied: +** mov (z[0-9]+\.s), #3 ** ( ** movprfx z0\.s, p0/z, z1\.s ** mul z0\.s, p0/m, z0\.s, \1 @@ -169,9 +369,9 @@ TEST_UNIFORM_Z (mul_2_s32_z_tied1, svint32_t, ** ) ** ret */ -TEST_UNIFORM_Z (mul_2_s32_z_untied, svint32_t, - z0 = svmul_n_s32_z (p0, z1, 2), - z0 = svmul_z (p0, z1, 2)) +TEST_UNIFORM_Z (mul_3_s32_z_untied, svint32_t, + z0 = svmul_n_s32_z (p0, z1, 3), + z0 = svmul_z (p0, z1, 3)) /* ** mul_s32_x_tied1: @@ -227,23 +427,113 @@ TEST_UNIFORM_ZX (mul_w0_s32_x_untied, svint32_t, int32_t, z0 = svmul_x (p0, z1, x0)) /* -** mul_2_s32_x_tied1: -** mul z0\.s, z0\.s, #2 +** mul_4dupop1_s32_x_tied1: +** lsl z0\.s, z0\.s, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop1_s32_x_tied1, svint32_t, + z0 = svmul_x (p0, svdup_s32 (4), z0), + z0 = svmul_x (p0, svdup_s32 (4), z0)) + +/* +** mul_4dupop1ptrue_s32_x_tied1: +** lsl z0\.s, z0\.s, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop1ptrue_s32_x_tied1, svint32_t, + z0 = svmul_x (svptrue_b32 (), svdup_s32 (4), z0), + z0 = svmul_x (svptrue_b32 (), svdup_s32 (4), z0)) + +/* +** mul_4dupop2_s32_x_tied1: +** lsl z0\.s, z0\.s, #2 ** ret */ -TEST_UNIFORM_Z (mul_2_s32_x_tied1, svint32_t, - z0 = svmul_n_s32_x (p0, z0, 2), - z0 = svmul_x (p0, z0, 2)) +TEST_UNIFORM_Z (mul_4dupop2_s32_x_tied1, svint32_t, + z0 = svmul_x (p0, z0, svdup_s32 (4)), + z0 = svmul_x (p0, z0, svdup_s32 (4))) /* -** mul_2_s32_x_untied: +** mul_4nop2_s32_x_tied1: +** lsl z0\.s, z0\.s, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4nop2_s32_x_tied1, svint32_t, + z0 = svmul_n_s32_x (p0, z0, 4), + z0 = svmul_x (p0, z0, 4)) + +/* +** mul_maxpownop2_s32_x_tied1: +** lsl z0\.s, z0\.s, #30 +** ret +*/ +TEST_UNIFORM_Z (mul_maxpownop2_s32_x_tied1, svint32_t, + z0 = svmul_n_s32_x (p0, z0, MAXPOW), + z0 = svmul_x (p0, z0, MAXPOW)) + +/* +** mul_intminnop2_s32_x_tied1: +** mov (z[0-9]+\.s), #-2147483648 +** mul z0\.s, p0/m, z0\.s, \1 +** ret +*/ +TEST_UNIFORM_Z (mul_intminnop2_s32_x_tied1, svint32_t, + z0 = svmul_n_s32_x (p0, z0, INT32_MIN), + z0 = svmul_x (p0, z0, INT32_MIN)) + +/* +** mul_1_s32_x_tied1: +** ret +*/ +TEST_UNIFORM_Z (mul_1_s32_x_tied1, svint32_t, + z0 = svmul_n_s32_x (p0, z0, 1), + z0 = svmul_x (p0, z0, 1)) + +/* +** mul_3_s32_x_tied1: +** mul z0\.s, z0\.s, #3 +** ret +*/ +TEST_UNIFORM_Z (mul_3_s32_x_tied1, svint32_t, + z0 = svmul_n_s32_x (p0, z0, 3), + z0 = svmul_x (p0, z0, 3)) + +/* +** mul_4dupop2_s32_x_untied: +** lsl z0\.s, z1\.s, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop2_s32_x_untied, svint32_t, + z0 = svmul_x (p0, z1, svdup_s32 (4)), + z0 
= svmul_x (p0, z1, svdup_s32 (4))) + +/* +** mul_4nop2_s32_x_untied: +** lsl z0\.s, z1\.s, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4nop2_s32_x_untied, svint32_t, + z0 = svmul_n_s32_x (p0, z1, 4), + z0 = svmul_x (p0, z1, 4)) + +/* +** mul_maxpownop2_s32_x_untied: +** lsl z0\.s, z1\.s, #30 +** ret +*/ +TEST_UNIFORM_Z (mul_maxpownop2_s32_x_untied, svint32_t, + z0 = svmul_n_s32_x (p0, z1, MAXPOW), + z0 = svmul_x (p0, z1, MAXPOW)) + +/* +** mul_3_s32_x_untied: ** movprfx z0, z1 -** mul z0\.s, z0\.s, #2 +** mul z0\.s, z0\.s, #3 ** ret */ -TEST_UNIFORM_Z (mul_2_s32_x_untied, svint32_t, - z0 = svmul_n_s32_x (p0, z1, 2), - z0 = svmul_x (p0, z1, 2)) +TEST_UNIFORM_Z (mul_3_s32_x_untied, svint32_t, + z0 = svmul_n_s32_x (p0, z1, 3), + z0 = svmul_x (p0, z1, 3)) /* ** mul_127_s32_x: @@ -256,8 +546,7 @@ TEST_UNIFORM_Z (mul_127_s32_x, svint32_t, /* ** mul_128_s32_x: -** mov (z[0-9]+\.s), #128 -** mul z0\.s, p0/m, z0\.s, \1 +** lsl z0\.s, z0\.s, #7 ** ret */ TEST_UNIFORM_Z (mul_128_s32_x, svint32_t, diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_s64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_s64.c index c3cf581a0a4..52f0911a6df 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_s64.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_s64.c @@ -2,6 +2,8 @@ #include "test_sve_acle.h" +#define MAXPOW 1ULL<<62 + /* ** mul_s64_m_tied1: ** mul z0\.d, p0/m, z0\.d, z1\.d @@ -54,25 +56,131 @@ TEST_UNIFORM_ZX (mul_x0_s64_m_untied, svint64_t, int64_t, z0 = svmul_m (p0, z1, x0)) /* -** mul_2_s64_m_tied1: -** mov (z[0-9]+\.d), #2 +** mul_4dupop1_s64_m_tied1: +** mov (z[0-9]+)\.d, #4 +** mov (z[0-9]+\.d), z0\.d +** movprfx z0, \1 +** mul z0\.d, p0/m, z0\.d, \2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop1_s64_m_tied1, svint64_t, + z0 = svmul_m (p0, svdup_s64 (4), z0), + z0 = svmul_m (p0, svdup_s64 (4), z0)) + +/* +** mul_4dupop1ptrue_s64_m_tied1: +** lsl z0\.d, z0\.d, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop1ptrue_s64_m_tied1, svint64_t, + z0 = svmul_m (svptrue_b64 (), svdup_s64 (4), z0), + z0 = svmul_m (svptrue_b64 (), svdup_s64 (4), z0)) + +/* +** mul_4dupop2_s64_m_tied1: +** lsl z0\.d, p0/m, z0\.d, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop2_s64_m_tied1, svint64_t, + z0 = svmul_m (p0, z0, svdup_s64 (4)), + z0 = svmul_m (p0, z0, svdup_s64 (4))) + +/* +** mul_4nop2_s64_m_tied1: +** lsl z0\.d, p0/m, z0\.d, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4nop2_s64_m_tied1, svint64_t, + z0 = svmul_n_s64_m (p0, z0, 4), + z0 = svmul_m (p0, z0, 4)) + +/* +** mul_maxpownop2_s64_m_tied1: +** lsl z0\.d, p0/m, z0\.d, #62 +** ret +*/ +TEST_UNIFORM_Z (mul_maxpownop2_s64_m_tied1, svint64_t, + z0 = svmul_n_s64_m (p0, z0, MAXPOW), + z0 = svmul_m (p0, z0, MAXPOW)) + +/* +** mul_intminnop2_s64_m_tied1: +** mov (z[0-9]+\.d), #-9223372036854775808 ** mul z0\.d, p0/m, z0\.d, \1 ** ret */ +TEST_UNIFORM_Z (mul_intminnop2_s64_m_tied1, svint64_t, + z0 = svmul_n_s64_m (p0, z0, INT64_MIN), + z0 = svmul_m (p0, z0, INT64_MIN)) + +/* +** mul_1_s64_m_tied1: +** sel z0\.d, p0, z0\.d, z0\.d +** ret +*/ +TEST_UNIFORM_Z (mul_1_s64_m_tied1, svint64_t, + z0 = svmul_n_s64_m (p0, z0, 1), + z0 = svmul_m (p0, z0, 1)) + +/* +** mul_2_s64_m_tied1: +** lsl z0\.d, p0/m, z0\.d, #1 +** ret +*/ TEST_UNIFORM_Z (mul_2_s64_m_tied1, svint64_t, z0 = svmul_n_s64_m (p0, z0, 2), z0 = svmul_m (p0, z0, 2)) /* -** mul_2_s64_m_untied: -** mov (z[0-9]+\.d), #2 +** mul_3_s64_m_tied1: +** mov (z[0-9]+\.d), #3 +** mul z0\.d, p0/m, z0\.d, \1 +** ret +*/ +TEST_UNIFORM_Z (mul_3_s64_m_tied1, svint64_t, + z0 = svmul_n_s64_m (p0, z0, 3), + z0 = svmul_m (p0, z0, 
3)) + +/* +** mul_4dupop2_s64_m_untied: +** movprfx z0, z1 +** lsl z0\.d, p0/m, z0\.d, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop2_s64_m_untied, svint64_t, + z0 = svmul_m (p0, z1, svdup_s64 (4)), + z0 = svmul_m (p0, z1, svdup_s64 (4))) + +/* +** mul_4nop2_s64_m_untied: +** movprfx z0, z1 +** lsl z0\.d, p0/m, z0\.d, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4nop2_s64_m_untied, svint64_t, + z0 = svmul_n_s64_m (p0, z1, 4), + z0 = svmul_m (p0, z1, 4)) + +/* +** mul_maxpownop2_s64_m_untied: +** movprfx z0, z1 +** lsl z0\.d, p0/m, z0\.d, #62 +** ret +*/ +TEST_UNIFORM_Z (mul_maxpownop2_s64_m_untied, svint64_t, + z0 = svmul_n_s64_m (p0, z1, MAXPOW), + z0 = svmul_m (p0, z1, MAXPOW)) + +/* +** mul_3_s64_m_untied: +** mov (z[0-9]+\.d), #3 ** movprfx z0, z1 ** mul z0\.d, p0/m, z0\.d, \1 ** ret */ -TEST_UNIFORM_Z (mul_2_s64_m_untied, svint64_t, - z0 = svmul_n_s64_m (p0, z1, 2), - z0 = svmul_m (p0, z1, 2)) +TEST_UNIFORM_Z (mul_3_s64_m_untied, svint64_t, + z0 = svmul_n_s64_m (p0, z1, 3), + z0 = svmul_m (p0, z1, 3)) /* ** mul_m1_s64_m: @@ -147,19 +255,130 @@ TEST_UNIFORM_ZX (mul_x0_s64_z_untied, svint64_t, int64_t, z0 = svmul_z (p0, z1, x0)) /* -** mul_2_s64_z_tied1: -** mov (z[0-9]+\.d), #2 +** mul_4dupop1_s64_z_tied1: +** movprfx z0\.d, p0/z, z0\.d +** lsl z0\.d, p0/m, z0\.d, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop1_s64_z_tied1, svint64_t, + z0 = svmul_z (p0, svdup_s64 (4), z0), + z0 = svmul_z (p0, svdup_s64 (4), z0)) + +/* +** mul_4dupop1ptrue_s64_z_tied1: +** lsl z0\.d, z0\.d, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop1ptrue_s64_z_tied1, svint64_t, + z0 = svmul_z (svptrue_b64 (), svdup_s64 (4), z0), + z0 = svmul_z (svptrue_b64 (), svdup_s64 (4), z0)) + +/* +** mul_4dupop2_s64_z_tied1: +** movprfx z0\.d, p0/z, z0\.d +** lsl z0\.d, p0/m, z0\.d, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop2_s64_z_tied1, svint64_t, + z0 = svmul_z (p0, z0, svdup_s64 (4)), + z0 = svmul_z (p0, z0, svdup_s64 (4))) + +/* +** mul_4nop2_s64_z_tied1: +** movprfx z0\.d, p0/z, z0\.d +** lsl z0\.d, p0/m, z0\.d, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4nop2_s64_z_tied1, svint64_t, + z0 = svmul_n_s64_z (p0, z0, 4), + z0 = svmul_z (p0, z0, 4)) + +/* +** mul_maxpownop2_s64_z_tied1: +** movprfx z0\.d, p0/z, z0\.d +** lsl z0\.d, p0/m, z0\.d, #62 +** ret +*/ +TEST_UNIFORM_Z (mul_maxpownop2_s64_z_tied1, svint64_t, + z0 = svmul_n_s64_z (p0, z0, MAXPOW), + z0 = svmul_z (p0, z0, MAXPOW)) + +/* +** mul_intminnop2_s64_z_tied1: +** mov (z[0-9]+\.d), #-9223372036854775808 ** movprfx z0\.d, p0/z, z0\.d ** mul z0\.d, p0/m, z0\.d, \1 ** ret */ +TEST_UNIFORM_Z (mul_intminnop2_s64_z_tied1, svint64_t, + z0 = svmul_n_s64_z (p0, z0, INT64_MIN), + z0 = svmul_z (p0, z0, INT64_MIN)) + +/* +** mul_1_s64_z_tied1: +** mov z31.d, #1 +** movprfx z0.d, p0/z, z0.d +** mul z0.d, p0/m, z0.d, z31.d +** ret +*/ +TEST_UNIFORM_Z (mul_1_s64_z_tied1, svint64_t, + z0 = svmul_n_s64_z (p0, z0, 1), + z0 = svmul_z (p0, z0, 1)) + +/* +** mul_2_s64_z_tied1: +** movprfx z0.d, p0/z, z0.d +** lsl z0.d, p0/m, z0.d, #1 +** ret +*/ TEST_UNIFORM_Z (mul_2_s64_z_tied1, svint64_t, z0 = svmul_n_s64_z (p0, z0, 2), z0 = svmul_z (p0, z0, 2)) /* -** mul_2_s64_z_untied: -** mov (z[0-9]+\.d), #2 +** mul_3_s64_z_tied1: +** mov (z[0-9]+\.d), #3 +** movprfx z0\.d, p0/z, z0\.d +** mul z0\.d, p0/m, z0\.d, \1 +** ret +*/ +TEST_UNIFORM_Z (mul_3_s64_z_tied1, svint64_t, + z0 = svmul_n_s64_z (p0, z0, 3), + z0 = svmul_z (p0, z0, 3)) + +/* +** mul_4dupop2_s64_z_untied: +** movprfx z0\.d, p0/z, z1\.d +** lsl z0\.d, p0/m, z0\.d, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop2_s64_z_untied, svint64_t, + z0 = svmul_z (p0, 
z1, svdup_s64 (4)), + z0 = svmul_z (p0, z1, svdup_s64 (4))) + +/* +** mul_4nop2_s64_z_untied: +** movprfx z0\.d, p0/z, z1\.d +** lsl z0\.d, p0/m, z0\.d, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4nop2_s64_z_untied, svint64_t, + z0 = svmul_n_s64_z (p0, z1, 4), + z0 = svmul_z (p0, z1, 4)) + +/* +** mul_maxpownop2_s64_z_untied: +** movprfx z0\.d, p0/z, z1\.d +** lsl z0\.d, p0/m, z0\.d, #62 +** ret +*/ +TEST_UNIFORM_Z (mul_maxpownop2_s64_z_untied, svint64_t, + z0 = svmul_n_s64_z (p0, z1, MAXPOW), + z0 = svmul_z (p0, z1, MAXPOW)) + +/* +** mul_3_s64_z_untied: +** mov (z[0-9]+\.d), #3 ** ( ** movprfx z0\.d, p0/z, z1\.d ** mul z0\.d, p0/m, z0\.d, \1 @@ -169,9 +388,9 @@ TEST_UNIFORM_Z (mul_2_s64_z_tied1, svint64_t, ** ) ** ret */ -TEST_UNIFORM_Z (mul_2_s64_z_untied, svint64_t, - z0 = svmul_n_s64_z (p0, z1, 2), - z0 = svmul_z (p0, z1, 2)) +TEST_UNIFORM_Z (mul_3_s64_z_untied, svint64_t, + z0 = svmul_n_s64_z (p0, z1, 3), + z0 = svmul_z (p0, z1, 3)) /* ** mul_s64_x_tied1: @@ -226,9 +445,72 @@ TEST_UNIFORM_ZX (mul_x0_s64_x_untied, svint64_t, int64_t, z0 = svmul_n_s64_x (p0, z1, x0), z0 = svmul_x (p0, z1, x0)) +/* +** mul_4dupop1_s64_x_tied1: +** lsl z0\.d, z0\.d, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop1_s64_x_tied1, svint64_t, + z0 = svmul_x (p0, svdup_s64 (4), z0), + z0 = svmul_x (p0, svdup_s64 (4), z0)) + +/* +** mul_4dupop1ptrue_s64_x_tied1: +** lsl z0\.d, z0\.d, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop1ptrue_s64_x_tied1, svint64_t, + z0 = svmul_x (svptrue_b64 (), svdup_s64 (4), z0), + z0 = svmul_x (svptrue_b64 (), svdup_s64 (4), z0)) + +/* +** mul_4dupop2_s64_x_tied1: +** lsl z0\.d, z0\.d, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop2_s64_x_tied1, svint64_t, + z0 = svmul_x (p0, z0, svdup_s64 (4)), + z0 = svmul_x (p0, z0, svdup_s64 (4))) + +/* +** mul_4nop2_s64_x_tied1: +** lsl z0\.d, z0\.d, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4nop2_s64_x_tied1, svint64_t, + z0 = svmul_n_s64_x (p0, z0, 4), + z0 = svmul_x (p0, z0, 4)) + +/* +** mul_maxpownop2_s64_x_tied1: +** lsl z0\.d, z0\.d, #62 +** ret +*/ +TEST_UNIFORM_Z (mul_maxpownop2_s64_x_tied1, svint64_t, + z0 = svmul_n_s64_x (p0, z0, MAXPOW), + z0 = svmul_x (p0, z0, MAXPOW)) + +/* +** mul_intminnop2_s64_x_tied1: +** mov (z[0-9]+\.d), #-9223372036854775808 +** mul z0\.d, p0/m, z0\.d, \1 +** ret +*/ +TEST_UNIFORM_Z (mul_intminnop2_s64_x_tied1, svint64_t, + z0 = svmul_n_s64_x (p0, z0, INT64_MIN), + z0 = svmul_x (p0, z0, INT64_MIN)) + +/* +** mul_1_s64_x_tied1: +** ret +*/ +TEST_UNIFORM_Z (mul_1_s64_x_tied1, svint64_t, + z0 = svmul_n_s64_x (p0, z0, 1), + z0 = svmul_x (p0, z0, 1)) + /* ** mul_2_s64_x_tied1: -** mul z0\.d, z0\.d, #2 +** add z0\.d, z0\.d, z0\.d ** ret */ TEST_UNIFORM_Z (mul_2_s64_x_tied1, svint64_t, @@ -236,14 +518,50 @@ TEST_UNIFORM_Z (mul_2_s64_x_tied1, svint64_t, z0 = svmul_x (p0, z0, 2)) /* -** mul_2_s64_x_untied: +** mul_3_s64_x_tied1: +** mul z0\.d, z0\.d, #3 +** ret +*/ +TEST_UNIFORM_Z (mul_3_s64_x_tied1, svint64_t, + z0 = svmul_n_s64_x (p0, z0, 3), + z0 = svmul_x (p0, z0, 3)) + +/* +** mul_4dupop2_s64_x_untied: +** lsl z0\.d, z1\.d, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop2_s64_x_untied, svint64_t, + z0 = svmul_x (p0, z1, svdup_s64 (4)), + z0 = svmul_x (p0, z1, svdup_s64 (4))) + +/* +** mul_4nop2_s64_x_untied: +** lsl z0\.d, z1\.d, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4nop2_s64_x_untied, svint64_t, + z0 = svmul_n_s64_x (p0, z1, 4), + z0 = svmul_x (p0, z1, 4)) + +/* +** mul_maxpownop2_s64_x_untied: +** lsl z0\.d, z1\.d, #62 +** ret +*/ +TEST_UNIFORM_Z (mul_maxpownop2_s64_x_untied, svint64_t, + z0 = svmul_n_s64_x (p0, z1, MAXPOW), + z0 = 
svmul_x (p0, z1, MAXPOW)) + +/* +** mul_3_s64_x_untied: ** movprfx z0, z1 -** mul z0\.d, z0\.d, #2 +** mul z0\.d, z0\.d, #3 ** ret */ -TEST_UNIFORM_Z (mul_2_s64_x_untied, svint64_t, - z0 = svmul_n_s64_x (p0, z1, 2), - z0 = svmul_x (p0, z1, 2)) +TEST_UNIFORM_Z (mul_3_s64_x_untied, svint64_t, + z0 = svmul_n_s64_x (p0, z1, 3), + z0 = svmul_x (p0, z1, 3)) /* ** mul_127_s64_x: @@ -256,8 +574,7 @@ TEST_UNIFORM_Z (mul_127_s64_x, svint64_t, /* ** mul_128_s64_x: -** mov (z[0-9]+\.d), #128 -** mul z0\.d, p0/m, z0\.d, \1 +** lsl z0\.d, z0\.d, #7 ** ret */ TEST_UNIFORM_Z (mul_128_s64_x, svint64_t, diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_s8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_s8.c index 4ac4c8eeb2a..0e2a0033480 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_s8.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_s8.c @@ -2,6 +2,8 @@ #include "test_sve_acle.h" +#define MAXPOW 1<<6 + /* ** mul_s8_m_tied1: ** mul z0\.b, p0/m, z0\.b, z1\.b @@ -54,30 +56,127 @@ TEST_UNIFORM_ZX (mul_w0_s8_m_untied, svint8_t, int8_t, z0 = svmul_m (p0, z1, x0)) /* -** mul_2_s8_m_tied1: -** mov (z[0-9]+\.b), #2 +** mul_4dupop1_s8_m_tied1: +** mov (z[0-9]+)\.b, #4 +** mov (z[0-9]+)\.d, z0\.d +** movprfx z0, \1 +** mul z0\.b, p0/m, z0\.b, \2\.b +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop1_s8_m_tied1, svint8_t, + z0 = svmul_m (p0, svdup_s8 (4), z0), + z0 = svmul_m (p0, svdup_s8 (4), z0)) + +/* +** mul_4dupop1ptrue_s8_m_tied1: +** lsl z0\.b, z0\.b, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop1ptrue_s8_m_tied1, svint8_t, + z0 = svmul_m (svptrue_b8 (), svdup_s8 (4), z0), + z0 = svmul_m (svptrue_b8 (), svdup_s8 (4), z0)) + +/* +** mul_4dupop2_s8_m_tied1: +** lsl z0\.b, p0/m, z0\.b, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop2_s8_m_tied1, svint8_t, + z0 = svmul_m (p0, z0, svdup_s8 (4)), + z0 = svmul_m (p0, z0, svdup_s8 (4))) + +/* +** mul_4nop2_s8_m_tied1: +** lsl z0\.b, p0/m, z0\.b, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4nop2_s8_m_tied1, svint8_t, + z0 = svmul_n_s8_m (p0, z0, 4), + z0 = svmul_m (p0, z0, 4)) + +/* +** mul_maxpownop2_s8_m_tied1: +** lsl z0\.b, p0/m, z0\.b, #6 +** ret +*/ +TEST_UNIFORM_Z (mul_maxpownop2_s8_m_tied1, svint8_t, + z0 = svmul_n_s8_m (p0, z0, MAXPOW), + z0 = svmul_m (p0, z0, MAXPOW)) + +/* +** mul_intminnop2_s8_m_tied1: +** mov (z[0-9]+\.b), #-128 ** mul z0\.b, p0/m, z0\.b, \1 ** ret */ -TEST_UNIFORM_Z (mul_2_s8_m_tied1, svint8_t, - z0 = svmul_n_s8_m (p0, z0, 2), - z0 = svmul_m (p0, z0, 2)) +TEST_UNIFORM_Z (mul_intminnop2_s8_m_tied1, svint8_t, + z0 = svmul_n_s8_m (p0, z0, INT8_MIN), + z0 = svmul_m (p0, z0, INT8_MIN)) /* -** mul_2_s8_m_untied: -** mov (z[0-9]+\.b), #2 +** mul_1_s8_m_tied1: +** sel z0\.b, p0, z0\.b, z0\.b +** ret +*/ +TEST_UNIFORM_Z (mul_1_s8_m_tied1, svint8_t, + z0 = svmul_n_s8_m (p0, z0, 1), + z0 = svmul_m (p0, z0, 1)) + +/* +** mul_3_s8_m_tied1: +** mov (z[0-9]+\.b), #3 +** mul z0\.b, p0/m, z0\.b, \1 +** ret +*/ +TEST_UNIFORM_Z (mul_3_s8_m_tied1, svint8_t, + z0 = svmul_n_s8_m (p0, z0, 3), + z0 = svmul_m (p0, z0, 3)) + +/* +** mul_4dupop2_s8_m_untied: +** movprfx z0, z1 +** lsl z0\.b, p0/m, z0\.b, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop2_s8_m_untied, svint8_t, + z0 = svmul_m (p0, z1, svdup_s8 (4)), + z0 = svmul_m (p0, z1, svdup_s8 (4))) + +/* +** mul_4nop2_s8_m_untied: +** movprfx z0, z1 +** lsl z0\.b, p0/m, z0\.b, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4nop2_s8_m_untied, svint8_t, + z0 = svmul_n_s8_m (p0, z1, 4), + z0 = svmul_m (p0, z1, 4)) + +/* +** mul_maxpownop2_s8_m_untied: +** movprfx z0, z1 +** lsl z0\.b, p0/m, z0\.b, #6 +** ret +*/ 
+TEST_UNIFORM_Z (mul_maxpownop2_s8_m_untied, svint8_t, + z0 = svmul_n_s8_m (p0, z1, MAXPOW), + z0 = svmul_m (p0, z1, MAXPOW)) + +/* +** mul_3_s8_m_untied: +** mov (z[0-9]+\.b), #3 ** movprfx z0, z1 ** mul z0\.b, p0/m, z0\.b, \1 ** ret */ -TEST_UNIFORM_Z (mul_2_s8_m_untied, svint8_t, - z0 = svmul_n_s8_m (p0, z1, 2), - z0 = svmul_m (p0, z1, 2)) +TEST_UNIFORM_Z (mul_3_s8_m_untied, svint8_t, + z0 = svmul_n_s8_m (p0, z1, 3), + z0 = svmul_m (p0, z1, 3)) /* ** mul_m1_s8_m: -** mov (z[0-9]+\.b), #-1 -** mul z0\.b, p0/m, z0\.b, \1 +** mov (z[0-9]+)\.b, #-1 +** mul z0\.b, p0/m, z0\.b, \1\.b ** ret */ TEST_UNIFORM_Z (mul_m1_s8_m, svint8_t, @@ -147,19 +246,120 @@ TEST_UNIFORM_ZX (mul_w0_s8_z_untied, svint8_t, int8_t, z0 = svmul_z (p0, z1, x0)) /* -** mul_2_s8_z_tied1: -** mov (z[0-9]+\.b), #2 +** mul_4dupop1_s8_z_tied1: +** movprfx z0\.b, p0/z, z0\.b +** lsl z0\.b, p0/m, z0\.b, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop1_s8_z_tied1, svint8_t, + z0 = svmul_z (p0, svdup_s8 (4), z0), + z0 = svmul_z (p0, svdup_s8 (4), z0)) + +/* +** mul_4dupop1ptrue_s8_z_tied1: +** lsl z0\.b, z0\.b, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop1ptrue_s8_z_tied1, svint8_t, + z0 = svmul_z (svptrue_b8 (), svdup_s8 (4), z0), + z0 = svmul_z (svptrue_b8 (), svdup_s8 (4), z0)) + +/* +** mul_4dupop2_s8_z_tied1: +** movprfx z0\.b, p0/z, z0\.b +** lsl z0\.b, p0/m, z0\.b, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop2_s8_z_tied1, svint8_t, + z0 = svmul_z (p0, z0, svdup_s8 (4)), + z0 = svmul_z (p0, z0, svdup_s8 (4))) + +/* +** mul_4nop2_s8_z_tied1: +** movprfx z0\.b, p0/z, z0\.b +** lsl z0\.b, p0/m, z0\.b, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4nop2_s8_z_tied1, svint8_t, + z0 = svmul_n_s8_z (p0, z0, 4), + z0 = svmul_z (p0, z0, 4)) + +/* +** mul_maxpownop2_s8_z_tied1: +** movprfx z0\.b, p0/z, z0\.b +** lsl z0\.b, p0/m, z0\.b, #6 +** ret +*/ +TEST_UNIFORM_Z (mul_maxpownop2_s8_z_tied1, svint8_t, + z0 = svmul_n_s8_z (p0, z0, MAXPOW), + z0 = svmul_z (p0, z0, MAXPOW)) + +/* +** mul_intminnop2_s8_z_tied1: +** mov (z[0-9]+\.b), #-128 +** movprfx z0\.b, p0/z, z0\.b +** mul z0\.b, p0/m, z0\.b, \1 +** ret +*/ +TEST_UNIFORM_Z (mul_intminnop2_s8_z_tied1, svint8_t, + z0 = svmul_n_s8_z (p0, z0, INT8_MIN), + z0 = svmul_z (p0, z0, INT8_MIN)) + +/* +** mul_1_s8_z_tied1: +** mov z31.b, #1 +** movprfx z0.b, p0/z, z0.b +** mul z0.b, p0/m, z0.b, z31.b +** ret +*/ +TEST_UNIFORM_Z (mul_1_s8_z_tied1, svint8_t, + z0 = svmul_n_s8_z (p0, z0, 1), + z0 = svmul_z (p0, z0, 1)) + +/* +** mul_3_s8_z_tied1: +** mov (z[0-9]+\.b), #3 ** movprfx z0\.b, p0/z, z0\.b ** mul z0\.b, p0/m, z0\.b, \1 ** ret */ -TEST_UNIFORM_Z (mul_2_s8_z_tied1, svint8_t, - z0 = svmul_n_s8_z (p0, z0, 2), - z0 = svmul_z (p0, z0, 2)) +TEST_UNIFORM_Z (mul_3_s8_z_tied1, svint8_t, + z0 = svmul_n_s8_z (p0, z0, 3), + z0 = svmul_z (p0, z0, 3)) + +/* +** mul_4dupop2_s8_z_untied: +** movprfx z0\.b, p0/z, z1\.b +** lsl z0\.b, p0/m, z0\.b, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop2_s8_z_untied, svint8_t, + z0 = svmul_z (p0, z1, svdup_s8 (4)), + z0 = svmul_z (p0, z1, svdup_s8 (4))) /* -** mul_2_s8_z_untied: -** mov (z[0-9]+\.b), #2 +** mul_4nop2_s8_z_untied: +** movprfx z0\.b, p0/z, z1\.b +** lsl z0\.b, p0/m, z0\.b, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4nop2_s8_z_untied, svint8_t, + z0 = svmul_n_s8_z (p0, z1, 4), + z0 = svmul_z (p0, z1, 4)) + +/* +** mul_maxpownop2_s8_z_untied: +** movprfx z0\.b, p0/z, z1\.b +** lsl z0\.b, p0/m, z0\.b, #6 +** ret +*/ +TEST_UNIFORM_Z (mul_maxpownop2_s8_z_untied, svint8_t, + z0 = svmul_n_s8_z (p0, z1, MAXPOW), + z0 = svmul_z (p0, z1, MAXPOW)) + +/* +** mul_3_s8_z_untied: +** mov 
(z[0-9]+\.b), #3 ** ( ** movprfx z0\.b, p0/z, z1\.b ** mul z0\.b, p0/m, z0\.b, \1 @@ -169,9 +369,9 @@ TEST_UNIFORM_Z (mul_2_s8_z_tied1, svint8_t, ** ) ** ret */ -TEST_UNIFORM_Z (mul_2_s8_z_untied, svint8_t, - z0 = svmul_n_s8_z (p0, z1, 2), - z0 = svmul_z (p0, z1, 2)) +TEST_UNIFORM_Z (mul_3_s8_z_untied, svint8_t, + z0 = svmul_n_s8_z (p0, z1, 3), + z0 = svmul_z (p0, z1, 3)) /* ** mul_s8_x_tied1: @@ -227,23 +427,112 @@ TEST_UNIFORM_ZX (mul_w0_s8_x_untied, svint8_t, int8_t, z0 = svmul_x (p0, z1, x0)) /* -** mul_2_s8_x_tied1: -** mul z0\.b, z0\.b, #2 +** mul_4dupop1_s8_x_tied1: +** lsl z0\.b, z0\.b, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop1_s8_x_tied1, svint8_t, + z0 = svmul_x (p0, svdup_s8 (4), z0), + z0 = svmul_x (p0, svdup_s8 (4), z0)) + +/* +** mul_4dupop1ptrue_s8_x_tied1: +** lsl z0\.b, z0\.b, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop1ptrue_s8_x_tied1, svint8_t, + z0 = svmul_x (svptrue_b8 (), svdup_s8 (4), z0), + z0 = svmul_x (svptrue_b8 (), svdup_s8 (4), z0)) + +/* +** mul_4dupop2_s8_x_tied1: +** lsl z0\.b, z0\.b, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop2_s8_x_tied1, svint8_t, + z0 = svmul_x (p0, z0, svdup_s8 (4)), + z0 = svmul_x (p0, z0, svdup_s8 (4))) + +/* +** mul_4nop2_s8_x_tied1: +** lsl z0\.b, z0\.b, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4nop2_s8_x_tied1, svint8_t, + z0 = svmul_n_s8_x (p0, z0, 4), + z0 = svmul_x (p0, z0, 4)) + +/* +** mul_maxpownop2_s8_x_tied1: +** lsl z0\.b, z0\.b, #6 +** ret +*/ +TEST_UNIFORM_Z (mul_maxpownop2_s8_x_tied1, svint8_t, + z0 = svmul_n_s8_x (p0, z0, MAXPOW), + z0 = svmul_x (p0, z0, MAXPOW)) + +/* +** mul_intminnop2_s8_x_tied1: +** mul z0\.b, z0\.b, #-128 +** ret +*/ +TEST_UNIFORM_Z (mul_intminnop2_s8_x_tied1, svint8_t, + z0 = svmul_n_s8_x (p0, z0, INT8_MIN), + z0 = svmul_x (p0, z0, INT8_MIN)) + +/* +** mul_1_s8_x_tied1: +** ret +*/ +TEST_UNIFORM_Z (mul_1_s8_x_tied1, svint8_t, + z0 = svmul_n_s8_x (p0, z0, 1), + z0 = svmul_x (p0, z0, 1)) + +/* +** mul_3_s8_x_tied1: +** mul z0\.b, z0\.b, #3 +** ret +*/ +TEST_UNIFORM_Z (mul_3_s8_x_tied1, svint8_t, + z0 = svmul_n_s8_x (p0, z0, 3), + z0 = svmul_x (p0, z0, 3)) + +/* +** mul_4dupop2_s8_x_untied: +** lsl z0\.b, z1\.b, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop2_s8_x_untied, svint8_t, + z0 = svmul_x (p0, z1, svdup_s8 (4)), + z0 = svmul_x (p0, z1, svdup_s8 (4))) + +/* +** mul_4nop2_s8_x_untied: +** lsl z0\.b, z1\.b, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4nop2_s8_x_untied, svint8_t, + z0 = svmul_n_s8_x (p0, z1, 4), + z0 = svmul_x (p0, z1, 4)) + +/* +** mul_maxpownop2_s8_x_untied: +** lsl z0\.b, z1\.b, #6 ** ret */ -TEST_UNIFORM_Z (mul_2_s8_x_tied1, svint8_t, - z0 = svmul_n_s8_x (p0, z0, 2), - z0 = svmul_x (p0, z0, 2)) +TEST_UNIFORM_Z (mul_maxpownop2_s8_x_untied, svint8_t, + z0 = svmul_n_s8_x (p0, z1, MAXPOW), + z0 = svmul_x (p0, z1, MAXPOW)) /* -** mul_2_s8_x_untied: +** mul_3_s8_x_untied: ** movprfx z0, z1 -** mul z0\.b, z0\.b, #2 +** mul z0\.b, z0\.b, #3 ** ret */ -TEST_UNIFORM_Z (mul_2_s8_x_untied, svint8_t, - z0 = svmul_n_s8_x (p0, z1, 2), - z0 = svmul_x (p0, z1, 2)) +TEST_UNIFORM_Z (mul_3_s8_x_untied, svint8_t, + z0 = svmul_n_s8_x (p0, z1, 3), + z0 = svmul_x (p0, z1, 3)) /* ** mul_127_s8_x: diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_u16.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_u16.c index affee965005..39e1afc83f9 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_u16.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_u16.c @@ -2,6 +2,8 @@ #include "test_sve_acle.h" +#define MAXPOW 1ULL<<15 + /* ** mul_u16_m_tied1: ** mul z0\.h, p0/m, z0\.h, z1\.h @@ -54,25 
+56,112 @@ TEST_UNIFORM_ZX (mul_w0_u16_m_untied, svuint16_t, uint16_t, z0 = svmul_m (p0, z1, x0)) /* -** mul_2_u16_m_tied1: -** mov (z[0-9]+\.h), #2 +** mul_4dupop1_u16_m_tied1: +** mov (z[0-9]+)\.h, #4 +** mov (z[0-9]+)\.d, z0\.d +** movprfx z0, \1 +** mul z0\.h, p0/m, z0\.h, \2\.h +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop1_u16_m_tied1, svuint16_t, + z0 = svmul_m (p0, svdup_u16 (4), z0), + z0 = svmul_m (p0, svdup_u16 (4), z0)) + +/* +** mul_4dupop1ptrue_u16_m_tied1: +** lsl z0\.h, z0\.h, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop1ptrue_u16_m_tied1, svuint16_t, + z0 = svmul_m (svptrue_b16 (), svdup_u16 (4), z0), + z0 = svmul_m (svptrue_b16 (), svdup_u16 (4), z0)) + +/* +** mul_4dupop2_u16_m_tied1: +** lsl z0\.h, p0/m, z0\.h, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop2_u16_m_tied1, svuint16_t, + z0 = svmul_m (p0, z0, svdup_u16 (4)), + z0 = svmul_m (p0, z0, svdup_u16 (4))) + +/* +** mul_4nop2_u16_m_tied1: +** lsl z0\.h, p0/m, z0\.h, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4nop2_u16_m_tied1, svuint16_t, + z0 = svmul_n_u16_m (p0, z0, 4), + z0 = svmul_m (p0, z0, 4)) + +/* +** mul_maxpownop2_u16_m_tied1: +** lsl z0\.h, p0/m, z0\.h, #15 +** ret +*/ +TEST_UNIFORM_Z (mul_maxpownop2_u16_m_tied1, svuint16_t, + z0 = svmul_n_u16_m (p0, z0, MAXPOW), + z0 = svmul_m (p0, z0, MAXPOW)) + +/* +** mul_1_u16_m_tied1: +** sel z0\.h, p0, z0\.h, z0\.h +** ret +*/ +TEST_UNIFORM_Z (mul_1_u16_m_tied1, svuint16_t, + z0 = svmul_n_u16_m (p0, z0, 1), + z0 = svmul_m (p0, z0, 1)) + +/* +** mul_3_u16_m_tied1: +** mov (z[0-9]+\.h), #3 ** mul z0\.h, p0/m, z0\.h, \1 ** ret */ -TEST_UNIFORM_Z (mul_2_u16_m_tied1, svuint16_t, - z0 = svmul_n_u16_m (p0, z0, 2), - z0 = svmul_m (p0, z0, 2)) +TEST_UNIFORM_Z (mul_3_u16_m_tied1, svuint16_t, + z0 = svmul_n_u16_m (p0, z0, 3), + z0 = svmul_m (p0, z0, 3)) + +/* +** mul_4dupop2_u16_m_untied: +** movprfx z0, z1 +** lsl z0\.h, p0/m, z0\.h, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop2_u16_m_untied, svuint16_t, + z0 = svmul_m (p0, z1, svdup_u16 (4)), + z0 = svmul_m (p0, z1, svdup_u16 (4))) /* -** mul_2_u16_m_untied: -** mov (z[0-9]+\.h), #2 +** mul_4nop2_u16_m_untied: +** movprfx z0, z1 +** lsl z0\.h, p0/m, z0\.h, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4nop2_u16_m_untied, svuint16_t, + z0 = svmul_n_u16_m (p0, z1, 4), + z0 = svmul_m (p0, z1, 4)) + +/* +** mul_maxpownop2_u16_m_untied: +** movprfx z0, z1 +** lsl z0\.h, p0/m, z0\.h, #15 +** ret +*/ +TEST_UNIFORM_Z (mul_maxpownop2_u16_m_untied, svuint16_t, + z0 = svmul_n_u16_m (p0, z1, MAXPOW), + z0 = svmul_m (p0, z1, MAXPOW)) + +/* +** mul_3_u16_m_untied: +** mov (z[0-9]+\.h), #3 ** movprfx z0, z1 ** mul z0\.h, p0/m, z0\.h, \1 ** ret */ -TEST_UNIFORM_Z (mul_2_u16_m_untied, svuint16_t, - z0 = svmul_n_u16_m (p0, z1, 2), - z0 = svmul_m (p0, z1, 2)) +TEST_UNIFORM_Z (mul_3_u16_m_untied, svuint16_t, + z0 = svmul_n_u16_m (p0, z1, 3), + z0 = svmul_m (p0, z1, 3)) /* ** mul_m1_u16_m: @@ -147,19 +236,109 @@ TEST_UNIFORM_ZX (mul_w0_u16_z_untied, svuint16_t, uint16_t, z0 = svmul_z (p0, z1, x0)) /* -** mul_2_u16_z_tied1: -** mov (z[0-9]+\.h), #2 +** mul_4dupop1_u16_z_tied1: +** movprfx z0\.h, p0/z, z0\.h +** lsl z0\.h, p0/m, z0\.h, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop1_u16_z_tied1, svuint16_t, + z0 = svmul_z (p0, svdup_u16 (4), z0), + z0 = svmul_z (p0, svdup_u16 (4), z0)) + +/* +** mul_4dupop1ptrue_u16_z_tied1: +** lsl z0\.h, z0\.h, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop1ptrue_u16_z_tied1, svuint16_t, + z0 = svmul_z (svptrue_b16 (), svdup_u16 (4), z0), + z0 = svmul_z (svptrue_b16 (), svdup_u16 (4), z0)) + +/* +** mul_4dupop2_u16_z_tied1: +** movprfx z0\.h, p0/z, 
z0\.h +** lsl z0\.h, p0/m, z0\.h, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop2_u16_z_tied1, svuint16_t, + z0 = svmul_z (p0, z0, svdup_u16 (4)), + z0 = svmul_z (p0, z0, svdup_u16 (4))) + +/* +** mul_4nop2_u16_z_tied1: +** movprfx z0\.h, p0/z, z0\.h +** lsl z0\.h, p0/m, z0\.h, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4nop2_u16_z_tied1, svuint16_t, + z0 = svmul_n_u16_z (p0, z0, 4), + z0 = svmul_z (p0, z0, 4)) + +/* +** mul_maxpownop2_u16_z_tied1: +** movprfx z0\.h, p0/z, z0\.h +** lsl z0\.h, p0/m, z0\.h, #15 +** ret +*/ +TEST_UNIFORM_Z (mul_maxpownop2_u16_z_tied1, svuint16_t, + z0 = svmul_n_u16_z (p0, z0, MAXPOW), + z0 = svmul_z (p0, z0, MAXPOW)) + +/* +** mul_1_u16_z_tied1: +** mov z31.h, #1 +** movprfx z0.h, p0/z, z0.h +** mul z0.h, p0/m, z0.h, z31.h +** ret +*/ +TEST_UNIFORM_Z (mul_1_u16_z_tied1, svuint16_t, + z0 = svmul_n_u16_z (p0, z0, 1), + z0 = svmul_z (p0, z0, 1)) + +/* +** mul_3_u16_z_tied1: +** mov (z[0-9]+\.h), #3 ** movprfx z0\.h, p0/z, z0\.h ** mul z0\.h, p0/m, z0\.h, \1 ** ret */ -TEST_UNIFORM_Z (mul_2_u16_z_tied1, svuint16_t, - z0 = svmul_n_u16_z (p0, z0, 2), - z0 = svmul_z (p0, z0, 2)) +TEST_UNIFORM_Z (mul_3_u16_z_tied1, svuint16_t, + z0 = svmul_n_u16_z (p0, z0, 3), + z0 = svmul_z (p0, z0, 3)) + +/* +** mul_4dupop2_u16_z_untied: +** movprfx z0\.h, p0/z, z1\.h +** lsl z0\.h, p0/m, z0\.h, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop2_u16_z_untied, svuint16_t, + z0 = svmul_z (p0, z1, svdup_u16 (4)), + z0 = svmul_z (p0, z1, svdup_u16 (4))) + +/* +** mul_4nop2_u16_z_untied: +** movprfx z0\.h, p0/z, z1\.h +** lsl z0\.h, p0/m, z0\.h, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4nop2_u16_z_untied, svuint16_t, + z0 = svmul_n_u16_z (p0, z1, 4), + z0 = svmul_z (p0, z1, 4)) + +/* +** mul_maxpownop2_u16_z_untied: +** movprfx z0\.h, p0/z, z1\.h +** lsl z0\.h, p0/m, z0\.h, #15 +** ret +*/ +TEST_UNIFORM_Z (mul_maxpownop2_u16_z_untied, svuint16_t, + z0 = svmul_n_u16_z (p0, z1, MAXPOW), + z0 = svmul_z (p0, z1, MAXPOW)) /* -** mul_2_u16_z_untied: -** mov (z[0-9]+\.h), #2 +** mul_3_u16_z_untied: +** mov (z[0-9]+\.h), #3 ** ( ** movprfx z0\.h, p0/z, z1\.h ** mul z0\.h, p0/m, z0\.h, \1 @@ -169,9 +348,9 @@ TEST_UNIFORM_Z (mul_2_u16_z_tied1, svuint16_t, ** ) ** ret */ -TEST_UNIFORM_Z (mul_2_u16_z_untied, svuint16_t, - z0 = svmul_n_u16_z (p0, z1, 2), - z0 = svmul_z (p0, z1, 2)) +TEST_UNIFORM_Z (mul_3_u16_z_untied, svuint16_t, + z0 = svmul_n_u16_z (p0, z1, 3), + z0 = svmul_z (p0, z1, 3)) /* ** mul_u16_x_tied1: @@ -227,23 +406,103 @@ TEST_UNIFORM_ZX (mul_w0_u16_x_untied, svuint16_t, uint16_t, z0 = svmul_x (p0, z1, x0)) /* -** mul_2_u16_x_tied1: -** mul z0\.h, z0\.h, #2 +** mul_4dupop1_u16_x_tied1: +** lsl z0\.h, z0\.h, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop1_u16_x_tied1, svuint16_t, + z0 = svmul_x (p0, svdup_u16 (4), z0), + z0 = svmul_x (p0, svdup_u16 (4), z0)) + +/* +** mul_4dupop1ptrue_u16_x_tied1: +** lsl z0\.h, z0\.h, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop1ptrue_u16_x_tied1, svuint16_t, + z0 = svmul_x (svptrue_b16 (), svdup_u16 (4), z0), + z0 = svmul_x (svptrue_b16 (), svdup_u16 (4), z0)) + +/* +** mul_4dupop2_u16_x_tied1: +** lsl z0\.h, z0\.h, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop2_u16_x_tied1, svuint16_t, + z0 = svmul_x (p0, z0, svdup_u16 (4)), + z0 = svmul_x (p0, z0, svdup_u16 (4))) + +/* +** mul_4nop2_u16_x_tied1: +** lsl z0\.h, z0\.h, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4nop2_u16_x_tied1, svuint16_t, + z0 = svmul_n_u16_x (p0, z0, 4), + z0 = svmul_x (p0, z0, 4)) + +/* +** mul_maxpownop2_u16_x_tied1: +** lsl z0\.h, z0\.h, #15 +** ret +*/ +TEST_UNIFORM_Z (mul_maxpownop2_u16_x_tied1, svuint16_t, + z0 
= svmul_n_u16_x (p0, z0, MAXPOW), + z0 = svmul_x (p0, z0, MAXPOW)) + +/* +** mul_1_u16_x_tied1: +** ret +*/ +TEST_UNIFORM_Z (mul_1_u16_x_tied1, svuint16_t, + z0 = svmul_n_u16_x (p0, z0, 1), + z0 = svmul_x (p0, z0, 1)) + +/* +** mul_3_u16_x_tied1: +** mul z0\.h, z0\.h, #3 +** ret +*/ +TEST_UNIFORM_Z (mul_3_u16_x_tied1, svuint16_t, + z0 = svmul_n_u16_x (p0, z0, 3), + z0 = svmul_x (p0, z0, 3)) + +/* +** mul_4dupop2_u16_x_untied: +** lsl z0\.h, z1\.h, #2 ** ret */ -TEST_UNIFORM_Z (mul_2_u16_x_tied1, svuint16_t, - z0 = svmul_n_u16_x (p0, z0, 2), - z0 = svmul_x (p0, z0, 2)) +TEST_UNIFORM_Z (mul_4dupop2_u16_x_untied, svuint16_t, + z0 = svmul_x (p0, z1, svdup_u16 (4)), + z0 = svmul_x (p0, z1, svdup_u16 (4))) /* -** mul_2_u16_x_untied: +** mul_4nop2_u16_x_untied: +** lsl z0\.h, z1\.h, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4nop2_u16_x_untied, svuint16_t, + z0 = svmul_n_u16_x (p0, z1, 4), + z0 = svmul_x (p0, z1, 4)) + +/* +** mul_maxpownop2_u16_x_untied: +** lsl z0\.h, z1\.h, #15 +** ret +*/ +TEST_UNIFORM_Z (mul_maxpownop2_u16_x_untied, svuint16_t, + z0 = svmul_n_u16_x (p0, z1, MAXPOW), + z0 = svmul_x (p0, z1, MAXPOW)) + +/* +** mul_3_u16_x_untied: ** movprfx z0, z1 -** mul z0\.h, z0\.h, #2 +** mul z0\.h, z0\.h, #3 ** ret */ -TEST_UNIFORM_Z (mul_2_u16_x_untied, svuint16_t, - z0 = svmul_n_u16_x (p0, z1, 2), - z0 = svmul_x (p0, z1, 2)) +TEST_UNIFORM_Z (mul_3_u16_x_untied, svuint16_t, + z0 = svmul_n_u16_x (p0, z1, 3), + z0 = svmul_x (p0, z1, 3)) /* ** mul_127_u16_x: @@ -256,8 +515,7 @@ TEST_UNIFORM_Z (mul_127_u16_x, svuint16_t, /* ** mul_128_u16_x: -** mov (z[0-9]+\.h), #128 -** mul z0\.h, p0/m, z0\.h, \1 +** lsl z0\.h, z0\.h, #7 ** ret */ TEST_UNIFORM_Z (mul_128_u16_x, svuint16_t, diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_u32.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_u32.c index 38b4bc71b40..5f685c07d11 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_u32.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_u32.c @@ -2,6 +2,8 @@ #include "test_sve_acle.h" +#define MAXPOW 1ULL<<31 + /* ** mul_u32_m_tied1: ** mul z0\.s, p0/m, z0\.s, z1\.s @@ -54,25 +56,112 @@ TEST_UNIFORM_ZX (mul_w0_u32_m_untied, svuint32_t, uint32_t, z0 = svmul_m (p0, z1, x0)) /* -** mul_2_u32_m_tied1: -** mov (z[0-9]+\.s), #2 +** mul_4dupop1_u32_m_tied1: +** mov (z[0-9]+)\.s, #4 +** mov (z[0-9]+)\.d, z0\.d +** movprfx z0, \1 +** mul z0\.s, p0/m, z0\.s, \2\.s +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop1_u32_m_tied1, svuint32_t, + z0 = svmul_m (p0, svdup_u32 (4), z0), + z0 = svmul_m (p0, svdup_u32 (4), z0)) + +/* +** mul_4dupop1ptrue_u32_m_tied1: +** lsl z0\.s, z0\.s, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop1ptrue_u32_m_tied1, svuint32_t, + z0 = svmul_m (svptrue_b32 (), svdup_u32 (4), z0), + z0 = svmul_m (svptrue_b32 (), svdup_u32 (4), z0)) + +/* +** mul_4dupop2_u32_m_tied1: +** lsl z0\.s, p0/m, z0\.s, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop2_u32_m_tied1, svuint32_t, + z0 = svmul_m (p0, z0, svdup_u32 (4)), + z0 = svmul_m (p0, z0, svdup_u32 (4))) + +/* +** mul_4nop2_u32_m_tied1: +** lsl z0\.s, p0/m, z0\.s, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4nop2_u32_m_tied1, svuint32_t, + z0 = svmul_n_u32_m (p0, z0, 4), + z0 = svmul_m (p0, z0, 4)) + +/* +** mul_maxpownop2_u32_m_tied1: +** lsl z0\.s, p0/m, z0\.s, #31 +** ret +*/ +TEST_UNIFORM_Z (mul_maxpownop2_u32_m_tied1, svuint32_t, + z0 = svmul_n_u32_m (p0, z0, MAXPOW), + z0 = svmul_m (p0, z0, MAXPOW)) + +/* +** mul_1_u32_m_tied1: +** sel z0\.s, p0, z0\.s, z0\.s +** ret +*/ +TEST_UNIFORM_Z (mul_1_u32_m_tied1, svuint32_t, + z0 = svmul_n_u32_m (p0, z0, 
1), + z0 = svmul_m (p0, z0, 1)) + +/* +** mul_3_u32_m_tied1: +** mov (z[0-9]+\.s), #3 ** mul z0\.s, p0/m, z0\.s, \1 ** ret */ -TEST_UNIFORM_Z (mul_2_u32_m_tied1, svuint32_t, - z0 = svmul_n_u32_m (p0, z0, 2), - z0 = svmul_m (p0, z0, 2)) +TEST_UNIFORM_Z (mul_3_u32_m_tied1, svuint32_t, + z0 = svmul_n_u32_m (p0, z0, 3), + z0 = svmul_m (p0, z0, 3)) + +/* +** mul_4dupop2_u32_m_untied: +** movprfx z0, z1 +** lsl z0\.s, p0/m, z0\.s, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop2_u32_m_untied, svuint32_t, + z0 = svmul_m (p0, z1, svdup_u32 (4)), + z0 = svmul_m (p0, z1, svdup_u32 (4))) /* -** mul_2_u32_m_untied: -** mov (z[0-9]+\.s), #2 +** mul_4nop2_u32_m_untied: +** movprfx z0, z1 +** lsl z0\.s, p0/m, z0\.s, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4nop2_u32_m_untied, svuint32_t, + z0 = svmul_n_u32_m (p0, z1, 4), + z0 = svmul_m (p0, z1, 4)) + +/* +** mul_maxpownop2_u32_m_untied: +** movprfx z0, z1 +** lsl z0\.s, p0/m, z0\.s, #31 +** ret +*/ +TEST_UNIFORM_Z (mul_maxpownop2_u32_m_untied, svuint32_t, + z0 = svmul_n_u32_m (p0, z1, MAXPOW), + z0 = svmul_m (p0, z1, MAXPOW)) + +/* +** mul_3_u32_m_untied: +** mov (z[0-9]+\.s), #3 ** movprfx z0, z1 ** mul z0\.s, p0/m, z0\.s, \1 ** ret */ -TEST_UNIFORM_Z (mul_2_u32_m_untied, svuint32_t, - z0 = svmul_n_u32_m (p0, z1, 2), - z0 = svmul_m (p0, z1, 2)) +TEST_UNIFORM_Z (mul_3_u32_m_untied, svuint32_t, + z0 = svmul_n_u32_m (p0, z1, 3), + z0 = svmul_m (p0, z1, 3)) /* ** mul_m1_u32_m: @@ -147,19 +236,109 @@ TEST_UNIFORM_ZX (mul_w0_u32_z_untied, svuint32_t, uint32_t, z0 = svmul_z (p0, z1, x0)) /* -** mul_2_u32_z_tied1: -** mov (z[0-9]+\.s), #2 +** mul_4dupop1_u32_z_tied1: +** movprfx z0\.s, p0/z, z0\.s +** lsl z0\.s, p0/m, z0\.s, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop1_u32_z_tied1, svuint32_t, + z0 = svmul_z (p0, svdup_u32 (4), z0), + z0 = svmul_z (p0, svdup_u32 (4), z0)) + +/* +** mul_4dupop1ptrue_u32_z_tied1: +** lsl z0\.s, z0\.s, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop1ptrue_u32_z_tied1, svuint32_t, + z0 = svmul_z (svptrue_b32 (), svdup_u32 (4), z0), + z0 = svmul_z (svptrue_b32 (), svdup_u32 (4), z0)) + +/* +** mul_4dupop2_u32_z_tied1: +** movprfx z0\.s, p0/z, z0\.s +** lsl z0\.s, p0/m, z0\.s, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop2_u32_z_tied1, svuint32_t, + z0 = svmul_z (p0, z0, svdup_u32 (4)), + z0 = svmul_z (p0, z0, svdup_u32 (4))) + +/* +** mul_4nop2_u32_z_tied1: +** movprfx z0\.s, p0/z, z0\.s +** lsl z0\.s, p0/m, z0\.s, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4nop2_u32_z_tied1, svuint32_t, + z0 = svmul_n_u32_z (p0, z0, 4), + z0 = svmul_z (p0, z0, 4)) + +/* +** mul_maxpownop2_u32_z_tied1: +** movprfx z0\.s, p0/z, z0\.s +** lsl z0\.s, p0/m, z0\.s, #31 +** ret +*/ +TEST_UNIFORM_Z (mul_maxpownop2_u32_z_tied1, svuint32_t, + z0 = svmul_n_u32_z (p0, z0, MAXPOW), + z0 = svmul_z (p0, z0, MAXPOW)) + +/* +** mul_1_u32_z_tied1: +** mov z31.s, #1 +** movprfx z0.s, p0/z, z0.s +** mul z0.s, p0/m, z0.s, z31.s +** ret +*/ +TEST_UNIFORM_Z (mul_1_u32_z_tied1, svuint32_t, + z0 = svmul_n_u32_z (p0, z0, 1), + z0 = svmul_z (p0, z0, 1)) + +/* +** mul_3_u32_z_tied1: +** mov (z[0-9]+\.s), #3 ** movprfx z0\.s, p0/z, z0\.s ** mul z0\.s, p0/m, z0\.s, \1 ** ret */ -TEST_UNIFORM_Z (mul_2_u32_z_tied1, svuint32_t, - z0 = svmul_n_u32_z (p0, z0, 2), - z0 = svmul_z (p0, z0, 2)) +TEST_UNIFORM_Z (mul_3_u32_z_tied1, svuint32_t, + z0 = svmul_n_u32_z (p0, z0, 3), + z0 = svmul_z (p0, z0, 3)) + +/* +** mul_4dupop2_u32_z_untied: +** movprfx z0\.s, p0/z, z1\.s +** lsl z0\.s, p0/m, z0\.s, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop2_u32_z_untied, svuint32_t, + z0 = svmul_z (p0, z1, svdup_u32 (4)), + z0 = 
svmul_z (p0, z1, svdup_u32 (4))) + +/* +** mul_4nop2_u32_z_untied: +** movprfx z0\.s, p0/z, z1\.s +** lsl z0\.s, p0/m, z0\.s, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4nop2_u32_z_untied, svuint32_t, + z0 = svmul_n_u32_z (p0, z1, 4), + z0 = svmul_z (p0, z1, 4)) + +/* +** mul_maxpownop2_u32_z_untied: +** movprfx z0\.s, p0/z, z1\.s +** lsl z0\.s, p0/m, z0\.s, #31 +** ret +*/ +TEST_UNIFORM_Z (mul_maxpownop2_u32_z_untied, svuint32_t, + z0 = svmul_n_u32_z (p0, z1, MAXPOW), + z0 = svmul_z (p0, z1, MAXPOW)) /* -** mul_2_u32_z_untied: -** mov (z[0-9]+\.s), #2 +** mul_3_u32_z_untied: +** mov (z[0-9]+\.s), #3 ** ( ** movprfx z0\.s, p0/z, z1\.s ** mul z0\.s, p0/m, z0\.s, \1 @@ -169,9 +348,9 @@ TEST_UNIFORM_Z (mul_2_u32_z_tied1, svuint32_t, ** ) ** ret */ -TEST_UNIFORM_Z (mul_2_u32_z_untied, svuint32_t, - z0 = svmul_n_u32_z (p0, z1, 2), - z0 = svmul_z (p0, z1, 2)) +TEST_UNIFORM_Z (mul_3_u32_z_untied, svuint32_t, + z0 = svmul_n_u32_z (p0, z1, 3), + z0 = svmul_z (p0, z1, 3)) /* ** mul_u32_x_tied1: @@ -227,23 +406,103 @@ TEST_UNIFORM_ZX (mul_w0_u32_x_untied, svuint32_t, uint32_t, z0 = svmul_x (p0, z1, x0)) /* -** mul_2_u32_x_tied1: -** mul z0\.s, z0\.s, #2 +** mul_4dupop1_u32_x_tied1: +** lsl z0\.s, z0\.s, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop1_u32_x_tied1, svuint32_t, + z0 = svmul_x (p0, svdup_u32 (4), z0), + z0 = svmul_x (p0, svdup_u32 (4), z0)) + +/* +** mul_4dupop1ptrue_u32_x_tied1: +** lsl z0\.s, z0\.s, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop1ptrue_u32_x_tied1, svuint32_t, + z0 = svmul_x (svptrue_b32 (), svdup_u32 (4), z0), + z0 = svmul_x (svptrue_b32 (), svdup_u32 (4), z0)) + +/* +** mul_4dupop2_u32_x_tied1: +** lsl z0\.s, z0\.s, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop2_u32_x_tied1, svuint32_t, + z0 = svmul_x (p0, z0, svdup_u32 (4)), + z0 = svmul_x (p0, z0, svdup_u32 (4))) + +/* +** mul_4nop2_u32_x_tied1: +** lsl z0\.s, z0\.s, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4nop2_u32_x_tied1, svuint32_t, + z0 = svmul_n_u32_x (p0, z0, 4), + z0 = svmul_x (p0, z0, 4)) + +/* +** mul_maxpownop2_u32_x_tied1: +** lsl z0\.s, z0\.s, #31 +** ret +*/ +TEST_UNIFORM_Z (mul_maxpownop2_u32_x_tied1, svuint32_t, + z0 = svmul_n_u32_x (p0, z0, MAXPOW), + z0 = svmul_x (p0, z0, MAXPOW)) + +/* +** mul_1_u32_x_tied1: +** ret +*/ +TEST_UNIFORM_Z (mul_1_u32_x_tied1, svuint32_t, + z0 = svmul_n_u32_x (p0, z0, 1), + z0 = svmul_x (p0, z0, 1)) + +/* +** mul_3_u32_x_tied1: +** mul z0\.s, z0\.s, #3 +** ret +*/ +TEST_UNIFORM_Z (mul_3_u32_x_tied1, svuint32_t, + z0 = svmul_n_u32_x (p0, z0, 3), + z0 = svmul_x (p0, z0, 3)) + +/* +** mul_4dupop2_u32_x_untied: +** lsl z0\.s, z1\.s, #2 ** ret */ -TEST_UNIFORM_Z (mul_2_u32_x_tied1, svuint32_t, - z0 = svmul_n_u32_x (p0, z0, 2), - z0 = svmul_x (p0, z0, 2)) +TEST_UNIFORM_Z (mul_4dupop2_u32_x_untied, svuint32_t, + z0 = svmul_x (p0, z1, svdup_u32 (4)), + z0 = svmul_x (p0, z1, svdup_u32 (4))) /* -** mul_2_u32_x_untied: +** mul_4nop2_u32_x_untied: +** lsl z0\.s, z1\.s, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4nop2_u32_x_untied, svuint32_t, + z0 = svmul_n_u32_x (p0, z1, 4), + z0 = svmul_x (p0, z1, 4)) + +/* +** mul_maxpownop2_u32_x_untied: +** lsl z0\.s, z1\.s, #31 +** ret +*/ +TEST_UNIFORM_Z (mul_maxpownop2_u32_x_untied, svuint32_t, + z0 = svmul_n_u32_x (p0, z1, MAXPOW), + z0 = svmul_x (p0, z1, MAXPOW)) + +/* +** mul_3_u32_x_untied: ** movprfx z0, z1 -** mul z0\.s, z0\.s, #2 +** mul z0\.s, z0\.s, #3 ** ret */ -TEST_UNIFORM_Z (mul_2_u32_x_untied, svuint32_t, - z0 = svmul_n_u32_x (p0, z1, 2), - z0 = svmul_x (p0, z1, 2)) +TEST_UNIFORM_Z (mul_3_u32_x_untied, svuint32_t, + z0 = svmul_n_u32_x (p0, z1, 3), + z0 = 
svmul_x (p0, z1, 3)) /* ** mul_127_u32_x: @@ -256,8 +515,7 @@ TEST_UNIFORM_Z (mul_127_u32_x, svuint32_t, /* ** mul_128_u32_x: -** mov (z[0-9]+\.s), #128 -** mul z0\.s, p0/m, z0\.s, \1 +** lsl z0\.s, z0\.s, #7 ** ret */ TEST_UNIFORM_Z (mul_128_u32_x, svuint32_t, diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_u64.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_u64.c index ab655554db7..1302975ef43 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_u64.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_u64.c @@ -2,6 +2,8 @@ #include "test_sve_acle.h" +#define MAXPOW 1ULL<<63 + /* ** mul_u64_m_tied1: ** mul z0\.d, p0/m, z0\.d, z1\.d @@ -53,10 +55,66 @@ TEST_UNIFORM_ZX (mul_x0_u64_m_untied, svuint64_t, uint64_t, z0 = svmul_n_u64_m (p0, z1, x0), z0 = svmul_m (p0, z1, x0)) +/* +** mul_4dupop1_u64_m_tied1: +** mov (z[0-9]+)\.d, #4 +** mov (z[0-9]+\.d), z0\.d +** movprfx z0, \1 +** mul z0\.d, p0/m, z0\.d, \2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop1_u64_m_tied1, svuint64_t, + z0 = svmul_m (p0, svdup_u64 (4), z0), + z0 = svmul_m (p0, svdup_u64 (4), z0)) + +/* +** mul_4dupop1ptrue_u64_m_tied1: +** lsl z0\.d, z0\.d, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop1ptrue_u64_m_tied1, svuint64_t, + z0 = svmul_m (svptrue_b64 (), svdup_u64 (4), z0), + z0 = svmul_m (svptrue_b64 (), svdup_u64 (4), z0)) + +/* +** mul_4dupop2_u64_m_tied1: +** lsl z0\.d, p0/m, z0\.d, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop2_u64_m_tied1, svuint64_t, + z0 = svmul_m (p0, z0, svdup_u64 (4)), + z0 = svmul_m (p0, z0, svdup_u64 (4))) + +/* +** mul_4nop2_u64_m_tied1: +** lsl z0\.d, p0/m, z0\.d, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4nop2_u64_m_tied1, svuint64_t, + z0 = svmul_n_u64_m (p0, z0, 4), + z0 = svmul_m (p0, z0, 4)) + +/* +** mul_maxpownop2_u64_m_tied1: +** lsl z0\.d, p0/m, z0\.d, #63 +** ret +*/ +TEST_UNIFORM_Z (mul_maxpownop2_u64_m_tied1, svuint64_t, + z0 = svmul_n_u64_m (p0, z0, MAXPOW), + z0 = svmul_m (p0, z0, MAXPOW)) + +/* +** mul_1_u64_m_tied1: +** sel z0\.d, p0, z0\.d, z0\.d +** ret +*/ +TEST_UNIFORM_Z (mul_1_u64_m_tied1, svuint64_t, + z0 = svmul_n_u64_m (p0, z0, 1), + z0 = svmul_m (p0, z0, 1)) + /* ** mul_2_u64_m_tied1: -** mov (z[0-9]+\.d), #2 -** mul z0\.d, p0/m, z0\.d, \1 +** lsl z0\.d, p0/m, z0\.d, #1 ** ret */ TEST_UNIFORM_Z (mul_2_u64_m_tied1, svuint64_t, @@ -64,15 +122,55 @@ TEST_UNIFORM_Z (mul_2_u64_m_tied1, svuint64_t, z0 = svmul_m (p0, z0, 2)) /* -** mul_2_u64_m_untied: -** mov (z[0-9]+\.d), #2 +** mul_3_u64_m_tied1: +** mov (z[0-9]+\.d), #3 +** mul z0\.d, p0/m, z0\.d, \1 +** ret +*/ +TEST_UNIFORM_Z (mul_3_u64_m_tied1, svuint64_t, + z0 = svmul_n_u64_m (p0, z0, 3), + z0 = svmul_m (p0, z0, 3)) + +/* +** mul_4dupop2_u64_m_untied: +** movprfx z0, z1 +** lsl z0\.d, p0/m, z0\.d, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop2_u64_m_untied, svuint64_t, + z0 = svmul_m (p0, z1, svdup_u64 (4)), + z0 = svmul_m (p0, z1, svdup_u64 (4))) + +/* +** mul_4nop2_u64_m_untied: +** movprfx z0, z1 +** lsl z0\.d, p0/m, z0\.d, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4nop2_u64_m_untied, svuint64_t, + z0 = svmul_n_u64_m (p0, z1, 4), + z0 = svmul_m (p0, z1, 4)) + +/* +** mul_maxpownop2_u64_m_untied: +** movprfx z0, z1 +** lsl z0\.d, p0/m, z0\.d, #63 +** ret +*/ +TEST_UNIFORM_Z (mul_maxpownop2_u64_m_untied, svuint64_t, + z0 = svmul_n_u64_m (p0, z1, MAXPOW), + z0 = svmul_m (p0, z1, MAXPOW)) + +/* +** mul_3_u64_m_untied: +** mov (z[0-9]+\.d), #3 ** movprfx z0, z1 ** mul z0\.d, p0/m, z0\.d, \1 ** ret */ -TEST_UNIFORM_Z (mul_2_u64_m_untied, svuint64_t, - z0 = svmul_n_u64_m (p0, z1, 2), - z0 = svmul_m (p0, z1, 2)) 
+TEST_UNIFORM_Z (mul_3_u64_m_untied, svuint64_t, + z0 = svmul_n_u64_m (p0, z1, 3), + z0 = svmul_m (p0, z1, 3)) /* ** mul_m1_u64_m: @@ -147,10 +245,69 @@ TEST_UNIFORM_ZX (mul_x0_u64_z_untied, svuint64_t, uint64_t, z0 = svmul_z (p0, z1, x0)) /* -** mul_2_u64_z_tied1: -** mov (z[0-9]+\.d), #2 +** mul_4dupop1_u64_z_tied1: ** movprfx z0\.d, p0/z, z0\.d -** mul z0\.d, p0/m, z0\.d, \1 +** lsl z0\.d, p0/m, z0\.d, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop1_u64_z_tied1, svuint64_t, + z0 = svmul_z (p0, svdup_u64 (4), z0), + z0 = svmul_z (p0, svdup_u64 (4), z0)) + +/* +** mul_4dupop1ptrue_u64_z_tied1: +** lsl z0\.d, z0\.d, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop1ptrue_u64_z_tied1, svuint64_t, + z0 = svmul_z (svptrue_b64 (), svdup_u64 (4), z0), + z0 = svmul_z (svptrue_b64 (), svdup_u64 (4), z0)) + +/* +** mul_4dupop2_u64_z_tied1: +** movprfx z0\.d, p0/z, z0\.d +** lsl z0\.d, p0/m, z0\.d, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop2_u64_z_tied1, svuint64_t, + z0 = svmul_z (p0, z0, svdup_u64 (4)), + z0 = svmul_z (p0, z0, svdup_u64 (4))) + +/* +** mul_4nop2_u64_z_tied1: +** movprfx z0\.d, p0/z, z0\.d +** lsl z0\.d, p0/m, z0\.d, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4nop2_u64_z_tied1, svuint64_t, + z0 = svmul_n_u64_z (p0, z0, 4), + z0 = svmul_z (p0, z0, 4)) + +/* +** mul_maxpownop2_u64_z_tied1: +** movprfx z0\.d, p0/z, z0\.d +** lsl z0\.d, p0/m, z0\.d, #63 +** ret +*/ +TEST_UNIFORM_Z (mul_maxpownop2_u64_z_tied1, svuint64_t, + z0 = svmul_n_u64_z (p0, z0, MAXPOW), + z0 = svmul_z (p0, z0, MAXPOW)) + +/* +** mul_1_u64_z_tied1: +** mov z31.d, #1 +** movprfx z0.d, p0/z, z0.d +** mul z0.d, p0/m, z0.d, z31.d +** ret +*/ +TEST_UNIFORM_Z (mul_1_u64_z_tied1, svuint64_t, + z0 = svmul_n_u64_z (p0, z0, 1), + z0 = svmul_z (p0, z0, 1)) + +/* +** mul_2_u64_z_tied1: +** movprfx z0.d, p0/z, z0.d +** lsl z0.d, p0/m, z0.d, #1 ** ret */ TEST_UNIFORM_Z (mul_2_u64_z_tied1, svuint64_t, @@ -158,8 +315,49 @@ TEST_UNIFORM_Z (mul_2_u64_z_tied1, svuint64_t, z0 = svmul_z (p0, z0, 2)) /* -** mul_2_u64_z_untied: -** mov (z[0-9]+\.d), #2 +** mul_3_u64_z_tied1: +** mov (z[0-9]+\.d), #3 +** movprfx z0\.d, p0/z, z0\.d +** mul z0\.d, p0/m, z0\.d, \1 +** ret +*/ +TEST_UNIFORM_Z (mul_3_u64_z_tied1, svuint64_t, + z0 = svmul_n_u64_z (p0, z0, 3), + z0 = svmul_z (p0, z0, 3)) + +/* +** mul_4dupop2_u64_z_untied: +** movprfx z0\.d, p0/z, z1\.d +** lsl z0\.d, p0/m, z0\.d, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop2_u64_z_untied, svuint64_t, + z0 = svmul_z (p0, z1, svdup_u64 (4)), + z0 = svmul_z (p0, z1, svdup_u64 (4))) + +/* +** mul_4nop2_u64_z_untied: +** movprfx z0\.d, p0/z, z1\.d +** lsl z0\.d, p0/m, z0\.d, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4nop2_u64_z_untied, svuint64_t, + z0 = svmul_n_u64_z (p0, z1, 4), + z0 = svmul_z (p0, z1, 4)) + +/* +** mul_maxpownop2_u64_z_untied: +** movprfx z0\.d, p0/z, z1\.d +** lsl z0\.d, p0/m, z0\.d, #63 +** ret +*/ +TEST_UNIFORM_Z (mul_maxpownop2_u64_z_untied, svuint64_t, + z0 = svmul_n_u64_z (p0, z1, MAXPOW), + z0 = svmul_z (p0, z1, MAXPOW)) + +/* +** mul_3_u64_z_untied: +** mov (z[0-9]+\.d), #3 ** ( ** movprfx z0\.d, p0/z, z1\.d ** mul z0\.d, p0/m, z0\.d, \1 @@ -169,9 +367,9 @@ TEST_UNIFORM_Z (mul_2_u64_z_tied1, svuint64_t, ** ) ** ret */ -TEST_UNIFORM_Z (mul_2_u64_z_untied, svuint64_t, - z0 = svmul_n_u64_z (p0, z1, 2), - z0 = svmul_z (p0, z1, 2)) +TEST_UNIFORM_Z (mul_3_u64_z_untied, svuint64_t, + z0 = svmul_n_u64_z (p0, z1, 3), + z0 = svmul_z (p0, z1, 3)) /* ** mul_u64_x_tied1: @@ -226,9 +424,62 @@ TEST_UNIFORM_ZX (mul_x0_u64_x_untied, svuint64_t, uint64_t, z0 = svmul_n_u64_x (p0, z1, x0), z0 = svmul_x (p0, z1, 
x0)) +/* +** mul_4dupop1_u64_x_tied1: +** lsl z0\.d, z0\.d, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop1_u64_x_tied1, svuint64_t, + z0 = svmul_x (p0, svdup_u64 (4), z0), + z0 = svmul_x (p0, svdup_u64 (4), z0)) + +/* +** mul_4dupop1ptrue_u64_x_tied1: +** lsl z0\.d, z0\.d, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop1ptrue_u64_x_tied1, svuint64_t, + z0 = svmul_x (svptrue_b64 (), svdup_u64 (4), z0), + z0 = svmul_x (svptrue_b64 (), svdup_u64 (4), z0)) + +/* +** mul_4dupop2_u64_x_tied1: +** lsl z0\.d, z0\.d, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop2_u64_x_tied1, svuint64_t, + z0 = svmul_x (p0, z0, svdup_u64 (4)), + z0 = svmul_x (p0, z0, svdup_u64 (4))) + +/* +** mul_4nop2_u64_x_tied1: +** lsl z0\.d, z0\.d, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4nop2_u64_x_tied1, svuint64_t, + z0 = svmul_n_u64_x (p0, z0, 4), + z0 = svmul_x (p0, z0, 4)) + +/* +** mul_maxpownop2_u64_x_tied1: +** lsl z0\.d, z0\.d, #63 +** ret +*/ +TEST_UNIFORM_Z (mul_maxpownop2_u64_x_tied1, svuint64_t, + z0 = svmul_n_u64_x (p0, z0, MAXPOW), + z0 = svmul_x (p0, z0, MAXPOW)) + +/* +** mul_1_u64_x_tied1: +** ret +*/ +TEST_UNIFORM_Z (mul_1_u64_x_tied1, svuint64_t, + z0 = svmul_n_u64_x (p0, z0, 1), + z0 = svmul_x (p0, z0, 1)) + /* ** mul_2_u64_x_tied1: -** mul z0\.d, z0\.d, #2 +** add z0\.d, z0\.d, z0\.d ** ret */ TEST_UNIFORM_Z (mul_2_u64_x_tied1, svuint64_t, @@ -236,14 +487,50 @@ TEST_UNIFORM_Z (mul_2_u64_x_tied1, svuint64_t, z0 = svmul_x (p0, z0, 2)) /* -** mul_2_u64_x_untied: +** mul_3_u64_x_tied1: +** mul z0\.d, z0\.d, #3 +** ret +*/ +TEST_UNIFORM_Z (mul_3_u64_x_tied1, svuint64_t, + z0 = svmul_n_u64_x (p0, z0, 3), + z0 = svmul_x (p0, z0, 3)) + +/* +** mul_4dupop2_u64_x_untied: +** lsl z0\.d, z1\.d, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop2_u64_x_untied, svuint64_t, + z0 = svmul_x (p0, z1, svdup_u64 (4)), + z0 = svmul_x (p0, z1, svdup_u64 (4))) + +/* +** mul_4nop2_u64_x_untied: +** lsl z0\.d, z1\.d, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4nop2_u64_x_untied, svuint64_t, + z0 = svmul_n_u64_x (p0, z1, 4), + z0 = svmul_x (p0, z1, 4)) + +/* +** mul_maxpownop2_u64_x_untied: +** lsl z0\.d, z1\.d, #63 +** ret +*/ +TEST_UNIFORM_Z (mul_maxpownop2_u64_x_untied, svuint64_t, + z0 = svmul_n_u64_x (p0, z1, MAXPOW), + z0 = svmul_x (p0, z1, MAXPOW)) + +/* +** mul_3_u64_x_untied: ** movprfx z0, z1 -** mul z0\.d, z0\.d, #2 +** mul z0\.d, z0\.d, #3 ** ret */ -TEST_UNIFORM_Z (mul_2_u64_x_untied, svuint64_t, - z0 = svmul_n_u64_x (p0, z1, 2), - z0 = svmul_x (p0, z1, 2)) +TEST_UNIFORM_Z (mul_3_u64_x_untied, svuint64_t, + z0 = svmul_n_u64_x (p0, z1, 3), + z0 = svmul_x (p0, z1, 3)) /* ** mul_127_u64_x: @@ -256,8 +543,7 @@ TEST_UNIFORM_Z (mul_127_u64_x, svuint64_t, /* ** mul_128_u64_x: -** mov (z[0-9]+\.d), #128 -** mul z0\.d, p0/m, z0\.d, \1 +** lsl z0\.d, z0\.d, #7 ** ret */ TEST_UNIFORM_Z (mul_128_u64_x, svuint64_t, diff --git a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_u8.c b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_u8.c index ef0a5220dc0..ed74742f36d 100644 --- a/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_u8.c +++ b/gcc/testsuite/gcc.target/aarch64/sve/acle/asm/mul_u8.c @@ -2,6 +2,8 @@ #include "test_sve_acle.h" +#define MAXPOW 1<<7 + /* ** mul_u8_m_tied1: ** mul z0\.b, p0/m, z0\.b, z1\.b @@ -54,30 +56,117 @@ TEST_UNIFORM_ZX (mul_w0_u8_m_untied, svuint8_t, uint8_t, z0 = svmul_m (p0, z1, x0)) /* -** mul_2_u8_m_tied1: -** mov (z[0-9]+\.b), #2 +** mul_4dupop1_u8_m_tied1: +** mov (z[0-9]+)\.b, #4 +** mov (z[0-9]+)\.d, z0\.d +** movprfx z0, \1 +** mul z0\.b, p0/m, z0\.b, \2\.b +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop1_u8_m_tied1, svuint8_t, 
+ z0 = svmul_m (p0, svdup_u8 (4), z0), + z0 = svmul_m (p0, svdup_u8 (4), z0)) + +/* +** mul_4dupop1ptrue_u8_m_tied1: +** lsl z0\.b, z0\.b, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop1ptrue_u8_m_tied1, svuint8_t, + z0 = svmul_m (svptrue_b8 (), svdup_u8 (4), z0), + z0 = svmul_m (svptrue_b8 (), svdup_u8 (4), z0)) + +/* +** mul_4dupop2_u8_m_tied1: +** lsl z0\.b, p0/m, z0\.b, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop2_u8_m_tied1, svuint8_t, + z0 = svmul_m (p0, z0, svdup_u8 (4)), + z0 = svmul_m (p0, z0, svdup_u8 (4))) + +/* +** mul_4nop2_u8_m_tied1: +** lsl z0\.b, p0/m, z0\.b, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4nop2_u8_m_tied1, svuint8_t, + z0 = svmul_n_u8_m (p0, z0, 4), + z0 = svmul_m (p0, z0, 4)) + +/* +** mul_maxpownop2_u8_m_tied1: +** lsl z0\.b, p0/m, z0\.b, #7 +** ret +*/ +TEST_UNIFORM_Z (mul_maxpownop2_u8_m_tied1, svuint8_t, + z0 = svmul_n_u8_m (p0, z0, MAXPOW), + z0 = svmul_m (p0, z0, MAXPOW)) + +/* +** mul_1_u8_m_tied1: +** sel z0\.b, p0, z0\.b, z0\.b +** ret +*/ +TEST_UNIFORM_Z (mul_1_u8_m_tied1, svuint8_t, + z0 = svmul_n_u8_m (p0, z0, 1), + z0 = svmul_m (p0, z0, 1)) + +/* +** mul_3_u8_m_tied1: +** mov (z[0-9]+\.b), #3 ** mul z0\.b, p0/m, z0\.b, \1 ** ret */ -TEST_UNIFORM_Z (mul_2_u8_m_tied1, svuint8_t, - z0 = svmul_n_u8_m (p0, z0, 2), - z0 = svmul_m (p0, z0, 2)) +TEST_UNIFORM_Z (mul_3_u8_m_tied1, svuint8_t, + z0 = svmul_n_u8_m (p0, z0, 3), + z0 = svmul_m (p0, z0, 3)) + +/* +** mul_4dupop2_u8_m_untied: +** movprfx z0, z1 +** lsl z0\.b, p0/m, z0\.b, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop2_u8_m_untied, svuint8_t, + z0 = svmul_m (p0, z1, svdup_u8 (4)), + z0 = svmul_m (p0, z1, svdup_u8 (4))) /* -** mul_2_u8_m_untied: -** mov (z[0-9]+\.b), #2 +** mul_4nop2_u8_m_untied: +** movprfx z0, z1 +** lsl z0\.b, p0/m, z0\.b, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4nop2_u8_m_untied, svuint8_t, + z0 = svmul_n_u8_m (p0, z1, 4), + z0 = svmul_m (p0, z1, 4)) + +/* +** mul_maxpownop2_u8_m_untied: +** movprfx z0, z1 +** lsl z0\.b, p0/m, z0\.b, #7 +** ret +*/ +TEST_UNIFORM_Z (mul_maxpownop2_u8_m_untied, svuint8_t, + z0 = svmul_n_u8_m (p0, z1, MAXPOW), + z0 = svmul_m (p0, z1, MAXPOW)) + +/* +** mul_3_u8_m_untied: +** mov (z[0-9]+\.b), #3 ** movprfx z0, z1 ** mul z0\.b, p0/m, z0\.b, \1 ** ret */ -TEST_UNIFORM_Z (mul_2_u8_m_untied, svuint8_t, - z0 = svmul_n_u8_m (p0, z1, 2), - z0 = svmul_m (p0, z1, 2)) +TEST_UNIFORM_Z (mul_3_u8_m_untied, svuint8_t, + z0 = svmul_n_u8_m (p0, z1, 3), + z0 = svmul_m (p0, z1, 3)) /* ** mul_m1_u8_m: -** mov (z[0-9]+\.b), #-1 -** mul z0\.b, p0/m, z0\.b, \1 +** mov (z[0-9]+)\.b, #-1 +** mul z0\.b, p0/m, z0\.b, \1\.b ** ret */ TEST_UNIFORM_Z (mul_m1_u8_m, svuint8_t, @@ -147,19 +236,109 @@ TEST_UNIFORM_ZX (mul_w0_u8_z_untied, svuint8_t, uint8_t, z0 = svmul_z (p0, z1, x0)) /* -** mul_2_u8_z_tied1: -** mov (z[0-9]+\.b), #2 +** mul_4dupop1_u8_z_tied1: +** movprfx z0\.b, p0/z, z0\.b +** lsl z0\.b, p0/m, z0\.b, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop1_u8_z_tied1, svuint8_t, + z0 = svmul_z (p0, svdup_u8 (4), z0), + z0 = svmul_z (p0, svdup_u8 (4), z0)) + +/* +** mul_4dupop1ptrue_u8_z_tied1: +** lsl z0\.b, z0\.b, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop1ptrue_u8_z_tied1, svuint8_t, + z0 = svmul_z (svptrue_b8 (), svdup_u8 (4), z0), + z0 = svmul_z (svptrue_b8 (), svdup_u8 (4), z0)) + +/* +** mul_4dupop2_u8_z_tied1: +** movprfx z0\.b, p0/z, z0\.b +** lsl z0\.b, p0/m, z0\.b, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop2_u8_z_tied1, svuint8_t, + z0 = svmul_z (p0, z0, svdup_u8 (4)), + z0 = svmul_z (p0, z0, svdup_u8 (4))) + +/* +** mul_4nop2_u8_z_tied1: +** movprfx z0\.b, p0/z, z0\.b +** lsl z0\.b, 
p0/m, z0\.b, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4nop2_u8_z_tied1, svuint8_t, + z0 = svmul_n_u8_z (p0, z0, 4), + z0 = svmul_z (p0, z0, 4)) + +/* +** mul_maxpownop2_u8_z_tied1: +** movprfx z0\.b, p0/z, z0\.b +** lsl z0\.b, p0/m, z0\.b, #7 +** ret +*/ +TEST_UNIFORM_Z (mul_maxpownop2_u8_z_tied1, svuint8_t, + z0 = svmul_n_u8_z (p0, z0, MAXPOW), + z0 = svmul_z (p0, z0, MAXPOW)) + +/* +** mul_1_u8_z_tied1: +** mov z31.b, #1 +** movprfx z0.b, p0/z, z0.b +** mul z0.b, p0/m, z0.b, z31.b +** ret +*/ +TEST_UNIFORM_Z (mul_1_u8_z_tied1, svuint8_t, + z0 = svmul_n_u8_z (p0, z0, 1), + z0 = svmul_z (p0, z0, 1)) + +/* +** mul_3_u8_z_tied1: +** mov (z[0-9]+\.b), #3 ** movprfx z0\.b, p0/z, z0\.b ** mul z0\.b, p0/m, z0\.b, \1 ** ret */ -TEST_UNIFORM_Z (mul_2_u8_z_tied1, svuint8_t, - z0 = svmul_n_u8_z (p0, z0, 2), - z0 = svmul_z (p0, z0, 2)) +TEST_UNIFORM_Z (mul_3_u8_z_tied1, svuint8_t, + z0 = svmul_n_u8_z (p0, z0, 3), + z0 = svmul_z (p0, z0, 3)) + +/* +** mul_4dupop2_u8_z_untied: +** movprfx z0\.b, p0/z, z1\.b +** lsl z0\.b, p0/m, z0\.b, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop2_u8_z_untied, svuint8_t, + z0 = svmul_z (p0, z1, svdup_u8 (4)), + z0 = svmul_z (p0, z1, svdup_u8 (4))) /* -** mul_2_u8_z_untied: -** mov (z[0-9]+\.b), #2 +** mul_4nop2_u8_z_untied: +** movprfx z0\.b, p0/z, z1\.b +** lsl z0\.b, p0/m, z0\.b, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4nop2_u8_z_untied, svuint8_t, + z0 = svmul_n_u8_z (p0, z1, 4), + z0 = svmul_z (p0, z1, 4)) + +/* +** mul_maxpownop2_u8_z_untied: +** movprfx z0\.b, p0/z, z1\.b +** lsl z0\.b, p0/m, z0\.b, #7 +** ret +*/ +TEST_UNIFORM_Z (mul_maxpownop2_u8_z_untied, svuint8_t, + z0 = svmul_n_u8_z (p0, z1, MAXPOW), + z0 = svmul_z (p0, z1, MAXPOW)) + +/* +** mul_3_u8_z_untied: +** mov (z[0-9]+\.b), #3 ** ( ** movprfx z0\.b, p0/z, z1\.b ** mul z0\.b, p0/m, z0\.b, \1 @@ -169,9 +348,9 @@ TEST_UNIFORM_Z (mul_2_u8_z_tied1, svuint8_t, ** ) ** ret */ -TEST_UNIFORM_Z (mul_2_u8_z_untied, svuint8_t, - z0 = svmul_n_u8_z (p0, z1, 2), - z0 = svmul_z (p0, z1, 2)) +TEST_UNIFORM_Z (mul_3_u8_z_untied, svuint8_t, + z0 = svmul_n_u8_z (p0, z1, 3), + z0 = svmul_z (p0, z1, 3)) /* ** mul_u8_x_tied1: @@ -227,23 +406,103 @@ TEST_UNIFORM_ZX (mul_w0_u8_x_untied, svuint8_t, uint8_t, z0 = svmul_x (p0, z1, x0)) /* -** mul_2_u8_x_tied1: -** mul z0\.b, z0\.b, #2 +** mul_4dupop1_u8_x_tied1: +** lsl z0\.b, z0\.b, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop1_u8_x_tied1, svuint8_t, + z0 = svmul_x (p0, svdup_u8 (4), z0), + z0 = svmul_x (p0, svdup_u8 (4), z0)) + +/* +** mul_4dupop1ptrue_u8_x_tied1: +** lsl z0\.b, z0\.b, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop1ptrue_u8_x_tied1, svuint8_t, + z0 = svmul_x (svptrue_b8 (), svdup_u8 (4), z0), + z0 = svmul_x (svptrue_b8 (), svdup_u8 (4), z0)) + +/* +** mul_4dupop2_u8_x_tied1: +** lsl z0\.b, z0\.b, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop2_u8_x_tied1, svuint8_t, + z0 = svmul_x (p0, z0, svdup_u8 (4)), + z0 = svmul_x (p0, z0, svdup_u8 (4))) + +/* +** mul_4nop2_u8_x_tied1: +** lsl z0\.b, z0\.b, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4nop2_u8_x_tied1, svuint8_t, + z0 = svmul_n_u8_x (p0, z0, 4), + z0 = svmul_x (p0, z0, 4)) + +/* +** mul_maxpownop2_u8_x_tied1: +** lsl z0\.b, z0\.b, #7 +** ret +*/ +TEST_UNIFORM_Z (mul_maxpownop2_u8_x_tied1, svuint8_t, + z0 = svmul_n_u8_x (p0, z0, MAXPOW), + z0 = svmul_x (p0, z0, MAXPOW)) + +/* +** mul_1_u8_x_tied1: +** ret +*/ +TEST_UNIFORM_Z (mul_1_u8_x_tied1, svuint8_t, + z0 = svmul_n_u8_x (p0, z0, 1), + z0 = svmul_x (p0, z0, 1)) + +/* +** mul_3_u8_x_tied1: +** mul z0\.b, z0\.b, #3 +** ret +*/ +TEST_UNIFORM_Z (mul_3_u8_x_tied1, svuint8_t, + z0 = 
svmul_n_u8_x (p0, z0, 3), + z0 = svmul_x (p0, z0, 3)) + +/* +** mul_4dupop2_u8_x_untied: +** lsl z0\.b, z1\.b, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4dupop2_u8_x_untied, svuint8_t, + z0 = svmul_x (p0, z1, svdup_u8 (4)), + z0 = svmul_x (p0, z1, svdup_u8 (4))) + +/* +** mul_4nop2_u8_x_untied: +** lsl z0\.b, z1\.b, #2 +** ret +*/ +TEST_UNIFORM_Z (mul_4nop2_u8_x_untied, svuint8_t, + z0 = svmul_n_u8_x (p0, z1, 4), + z0 = svmul_x (p0, z1, 4)) + +/* +** mul_maxpownop2_u8_x_untied: +** lsl z0\.b, z1\.b, #7 ** ret */ -TEST_UNIFORM_Z (mul_2_u8_x_tied1, svuint8_t, - z0 = svmul_n_u8_x (p0, z0, 2), - z0 = svmul_x (p0, z0, 2)) +TEST_UNIFORM_Z (mul_maxpownop2_u8_x_untied, svuint8_t, + z0 = svmul_n_u8_x (p0, z1, MAXPOW), + z0 = svmul_x (p0, z1, MAXPOW)) /* -** mul_2_u8_x_untied: +** mul_3_u8_x_untied: ** movprfx z0, z1 -** mul z0\.b, z0\.b, #2 +** mul z0\.b, z0\.b, #3 ** ret */ -TEST_UNIFORM_Z (mul_2_u8_x_untied, svuint8_t, - z0 = svmul_n_u8_x (p0, z1, 2), - z0 = svmul_x (p0, z1, 2)) +TEST_UNIFORM_Z (mul_3_u8_x_untied, svuint8_t, + z0 = svmul_n_u8_x (p0, z1, 3), + z0 = svmul_x (p0, z1, 3)) /* ** mul_127_u8_x: @@ -256,7 +515,7 @@ TEST_UNIFORM_Z (mul_127_u8_x, svuint8_t, /* ** mul_128_u8_x: -** mul z0\.b, z0\.b, #-128 +** lsl z0\.b, z0\.b, #7 ** ret */ TEST_UNIFORM_Z (mul_128_u8_x, svuint8_t, @@ -292,7 +551,7 @@ TEST_UNIFORM_Z (mul_m127_u8_x, svuint8_t, /* ** mul_m128_u8_x: -** mul z0\.b, z0\.b, #-128 +** lsl z0\.b, z0\.b, #7 ** ret */ TEST_UNIFORM_Z (mul_m128_u8_x, svuint8_t, diff --git a/gcc/testsuite/gcc.target/aarch64/sve/mul_const_run.c b/gcc/testsuite/gcc.target/aarch64/sve/mul_const_run.c new file mode 100644 index 00000000000..6af00439e39 --- /dev/null +++ b/gcc/testsuite/gcc.target/aarch64/sve/mul_const_run.c @@ -0,0 +1,101 @@ +/* { dg-do run { target aarch64_sve128_hw } } */ +/* { dg-options "-O2 -msve-vector-bits=128" } */ + +#include <arm_sve.h> +#include <stdint.h> + +typedef svbool_t pred __attribute__((arm_sve_vector_bits(128))); +typedef svfloat16_t svfloat16_ __attribute__((arm_sve_vector_bits(128))); +typedef svfloat32_t svfloat32_ __attribute__((arm_sve_vector_bits(128))); +typedef svfloat64_t svfloat64_ __attribute__((arm_sve_vector_bits(128))); +typedef svint32_t svint32_ __attribute__((arm_sve_vector_bits(128))); +typedef svint64_t svint64_ __attribute__((arm_sve_vector_bits(128))); +typedef svuint32_t svuint32_ __attribute__((arm_sve_vector_bits(128))); +typedef svuint64_t svuint64_ __attribute__((arm_sve_vector_bits(128))); + +#define F(T, TS, P, OP1, OP2) \ +{ \ + T##_t op1 = (T##_t) OP1; \ + T##_t op2 = (T##_t) OP2; \ + sv##T##_ res = svmul_##P (pg, svdup_##TS (op1), svdup_##TS (op2)); \ + sv##T##_ exp = svdup_##TS (op1 * op2); \ + if (svptest_any (pg, svcmpne (pg, exp, res))) \ + __builtin_abort (); \ + \ + sv##T##_ res_n = svmul_##P (pg, svdup_##TS (op1), op2); \ + if (svptest_any (pg, svcmpne (pg, exp, res_n))) \ + __builtin_abort (); \ +} + +#define TEST_TYPES_1(T, TS) \ + F (T, TS, m, 79, 16) \ + F (T, TS, z, 79, 16) \ + F (T, TS, x, 79, 16) + +#define TEST_TYPES \ + TEST_TYPES_1 (float16, f16) \ + TEST_TYPES_1 (float32, f32) \ + TEST_TYPES_1 (float64, f64) \ + TEST_TYPES_1 (int32, s32) \ + TEST_TYPES_1 (int64, s64) \ + TEST_TYPES_1 (uint32, u32) \ + TEST_TYPES_1 (uint64, u64) + +#define TEST_VALUES_S_1(B, OP1, OP2) \ + F (int##B, s##B, x, OP1, OP2) + +#define TEST_VALUES_S \ + TEST_VALUES_S_1 (32, INT32_MIN, INT32_MIN) \ + TEST_VALUES_S_1 (64, INT64_MIN, INT64_MIN) \ + TEST_VALUES_S_1 (32, 4, 4) \ + TEST_VALUES_S_1 (32, -7, 4) \ + TEST_VALUES_S_1 (32, 4, -7) \ + TEST_VALUES_S_1 (64, 4, 4) \ +
TEST_VALUES_S_1 (64, -7, 4) \ + TEST_VALUES_S_1 (64, 4, -7) \ + TEST_VALUES_S_1 (32, INT32_MAX, (1 << 30)) \ + TEST_VALUES_S_1 (32, (1 << 30), INT32_MAX) \ + TEST_VALUES_S_1 (64, INT64_MAX, (1ULL << 62)) \ + TEST_VALUES_S_1 (64, (1ULL << 62), INT64_MAX) \ + TEST_VALUES_S_1 (32, INT32_MIN, (1 << 30)) \ + TEST_VALUES_S_1 (64, INT64_MIN, (1ULL << 62)) \ + TEST_VALUES_S_1 (32, INT32_MAX, 1) \ + TEST_VALUES_S_1 (32, INT32_MAX, 1) \ + TEST_VALUES_S_1 (64, 1, INT64_MAX) \ + TEST_VALUES_S_1 (64, 1, INT64_MAX) \ + TEST_VALUES_S_1 (32, INT32_MIN, 16) \ + TEST_VALUES_S_1 (64, INT64_MIN, 16) \ + TEST_VALUES_S_1 (32, INT32_MAX, -5) \ + TEST_VALUES_S_1 (64, INT64_MAX, -5) \ + TEST_VALUES_S_1 (32, INT32_MIN, -4) \ + TEST_VALUES_S_1 (64, INT64_MIN, -4) + +#define TEST_VALUES_U_1(B, OP1, OP2) \ + F (uint##B, u##B, x, OP1, OP2) + +#define TEST_VALUES_U \ + TEST_VALUES_U_1 (32, UINT32_MAX, UINT32_MAX) \ + TEST_VALUES_U_1 (64, UINT64_MAX, UINT64_MAX) \ + TEST_VALUES_U_1 (32, UINT32_MAX, (1 << 31)) \ + TEST_VALUES_U_1 (64, UINT64_MAX, (1ULL << 63)) \ + TEST_VALUES_U_1 (32, 7, 4) \ + TEST_VALUES_U_1 (32, 4, 7) \ + TEST_VALUES_U_1 (64, 7, 4) \ + TEST_VALUES_U_1 (64, 4, 7) \ + TEST_VALUES_U_1 (32, 7, 3) \ + TEST_VALUES_U_1 (64, 7, 3) \ + TEST_VALUES_U_1 (32, 11, 1) \ + TEST_VALUES_U_1 (64, 11, 1) + +#define TEST_VALUES \ + TEST_VALUES_S \ + TEST_VALUES_U + +int +main (void) +{ + const pred pg = svptrue_b8 (); + TEST_TYPES + TEST_VALUES + return 0; +}