From patchwork Mon Jul 8 20:41:27 2024
X-Patchwork-Submitter: Uros Bizjak
X-Patchwork-Id: 1958117
From: Uros Bizjak
Date: Mon, 8 Jul 2024 22:41:27 +0200
Subject: [committed] i386: Promote {QI,HI}mode x86_movcc_0_m1_neg to SImode
To: gcc-patches@gcc.gnu.org

Promote the HImode x86_movcc_0_m1_neg insn to SImode to avoid a
redundant operand-size prefix.  Also promote the QImode insn when
TARGET_PROMOTE_QImode is set.  This is similar to the
promotable_binary_operator splitter, where we promote the result to
SImode.

Also correct the insn condition of the splitters that promote NEG and
NOT instructions to SImode.  The QImode and SImode forms of these
instructions always have the same size, so there is no need for the
optimize_insn_for_size_p bypass.

gcc/ChangeLog:

	* config/i386/i386.md (x86_movcc_0_m1_neg splitter to SImode):
	New splitter.
	(NEG and NOT splitter to SImode): Remove optimize_insn_for_size_p
	predicate from insn condition.

Bootstrapped and regression tested on x86_64-linux-gnu {,-m32}.

Uros.
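As an illustration (my own example, not part of the patch; whether this
exact source ends up using the HImode pattern depends on target flags
and the surrounding RTL), the kind of code that x86_movcc_0_m1_neg
implements is a -1/0 mask derived from the carry flag of an unsigned
compare, i.e. "cmp; sbb %reg,%reg":

/* Hypothetical example: *dst becomes all-ones when a < b (carry set
   by the compare), zero otherwise.  A HImode sbb (sbbw %ax,%ax,
   encoded 66 19 c0) carries the 66h operand-size prefix; after the
   new split the SImode sbbl %eax,%eax (19 c0) is used instead, with
   identical low 16 bits.  */
void
borrow_mask (unsigned int a, unsigned int b, unsigned short *dst)
{
  *dst = a < b ? (unsigned short) -1 : 0;
}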
diff --git a/gcc/config/i386/i386.md b/gcc/config/i386/i386.md
index b24c4fe5875..214cb2e239a 100644
--- a/gcc/config/i386/i386.md
+++ b/gcc/config/i386/i386.md
@@ -26576,9 +26576,7 @@ (define_split
    (clobber (reg:CC FLAGS_REG))]
   "! TARGET_PARTIAL_REG_STALL && reload_completed
    && (GET_MODE (operands[0]) == HImode
-       || (GET_MODE (operands[0]) == QImode
-           && (TARGET_PROMOTE_QImode
-               || optimize_insn_for_size_p ())))"
+       || (GET_MODE (operands[0]) == QImode && TARGET_PROMOTE_QImode))"
   [(parallel [(set (match_dup 0)
                    (neg:SI (match_dup 1)))
               (clobber (reg:CC FLAGS_REG))])]
@@ -26593,15 +26591,30 @@ (define_split
         (not (match_operand 1 "general_reg_operand")))]
   "! TARGET_PARTIAL_REG_STALL && reload_completed
    && (GET_MODE (operands[0]) == HImode
-       || (GET_MODE (operands[0]) == QImode
-           && (TARGET_PROMOTE_QImode
-               || optimize_insn_for_size_p ())))"
+       || (GET_MODE (operands[0]) == QImode && TARGET_PROMOTE_QImode))"
   [(set (match_dup 0)
         (not:SI (match_dup 1)))]
 {
   operands[0] = gen_lowpart (SImode, operands[0]);
   operands[1] = gen_lowpart (SImode, operands[1]);
 })
+
+(define_split
+  [(set (match_operand 0 "general_reg_operand")
+        (neg (match_operator 1 "ix86_carry_flag_operator"
+              [(reg FLAGS_REG) (const_int 0)])))
+   (clobber (reg:CC FLAGS_REG))]
+  "! TARGET_PARTIAL_REG_STALL && reload_completed
+   && (GET_MODE (operands[0]) == HImode
+       || (GET_MODE (operands[0]) == QImode && TARGET_PROMOTE_QImode))"
+  [(parallel [(set (match_dup 0)
+                   (neg:SI (match_dup 1)))
+              (clobber (reg:CC FLAGS_REG))])]
+{
+  operands[0] = gen_lowpart (SImode, operands[0]);
+  operands[1] = shallow_copy_rtx (operands[1]);
+  PUT_MODE (operands[1], SImode);
+})
 
 ;; RTL Peephole optimizations, run before sched2.  These primarily look to
 ;; transform a complex memory operation into two memory to register operations.
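As a footnote on the NEG/NOT splitter change (again my own sketch, not
part of the patch): the QImode and promoted SImode encodings of these
instructions have the same length, so promoting them never saves size,
and the decision can be left to TARGET_PROMOTE_QImode alone.

/* Hypothetical example: negating a char value.  The QImode form
   (negb %al, f6 d8) and the promoted SImode form (negl %eax, f7 d8)
   are both two bytes, so optimize_insn_for_size_p gains nothing by
   forcing the promotion.  */
unsigned char
neg_char (unsigned char x)
{
  return (unsigned char) -x;
}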