From patchwork Mon Nov 18 14:01:19 2013
X-Patchwork-Submitter: Ilya Enkovich
X-Patchwork-Id: 292102
Date: Mon, 18 Nov 2013 18:01:19 +0400
From: Ilya Enkovich
To: gcc-patches@gcc.gnu.org
Subject: [PATCH, i386, MPX, 2/X] Pointers Checker [22/25] Target builtins
Message-ID: <20131118140119.GP21297@msticlxl57.ims.intel.com>

Hi,

Here is a patch introducing i386 target versions of Pointer Bounds Checker
builtins.

Thanks,
Ilya
---
2013-11-15  Ilya Enkovich

	* config/i386/i386-builtin-types.def (BND): New.
	(ULONG): New.
	(BND_FTYPE_PCVOID_ULONG): New.
	(VOID_FTYPE_BND_PCVOID): New.
	(VOID_FTYPE_PCVOID_PCVOID_BND): New.
	(BND_FTYPE_PCVOID_PCVOID): New.
	(BND_FTYPE_PCVOID): New.
	(BND_FTYPE_BND_BND): New.
	(PVOID_FTYPE_PVOID_PVOID_ULONG): New.
	(PVOID_FTYPE_PCVOID_BND_ULONG): New.
	(ULONG_FTYPE_VOID): New.
	(PVOID_FTYPE_BND): New.
	* config/i386/i386.c: Include tree-chkp.h, rtl-chkp.h.
	(ix86_builtins): Add IX86_BUILTIN_BNDMK, IX86_BUILTIN_BNDSTX,
	IX86_BUILTIN_BNDLDX, IX86_BUILTIN_BNDCL, IX86_BUILTIN_BNDCU,
	IX86_BUILTIN_BNDRET, IX86_BUILTIN_BNDSET, IX86_BUILTIN_BNDNARROW,
	IX86_BUILTIN_BNDINT, IX86_BUILTIN_ARG_BND, IX86_BUILTIN_SIZEOF,
	IX86_BUILTIN_BNDLOWER, IX86_BUILTIN_BNDUPPER.
	(builtin_isa): Add leaf_p and nothrow_p fields.
	(def_builtin): Initialize leaf_p and nothrow_p.
	(ix86_add_new_builtins): Handle leaf_p and nothrow_p flags.
	(bdesc_mpx): New.
	(bdesc_mpx_const): New.
	(ix86_init_mpx_builtins): New.
	(ix86_init_builtins): Call ix86_init_mpx_builtins.
	(ix86_expand_builtin): Expand IX86_BUILTIN_BNDMK,
	IX86_BUILTIN_BNDSTX, IX86_BUILTIN_BNDLDX, IX86_BUILTIN_BNDCL,
	IX86_BUILTIN_BNDCU, IX86_BUILTIN_BNDRET, IX86_BUILTIN_BNDSET,
	IX86_BUILTIN_BNDNARROW, IX86_BUILTIN_BNDINT, IX86_BUILTIN_ARG_BND,
	IX86_BUILTIN_SIZEOF, IX86_BUILTIN_BNDLOWER, IX86_BUILTIN_BNDUPPER.

diff --git a/gcc/config/i386/i386-builtin-types.def b/gcc/config/i386/i386-builtin-types.def
index c866170..f82ac9b 100644
--- a/gcc/config/i386/i386-builtin-types.def
+++ b/gcc/config/i386/i386-builtin-types.def
@@ -47,6 +47,7 @@ DEF_PRIMITIVE_TYPE (UCHAR, unsigned_char_type_node)
 DEF_PRIMITIVE_TYPE (QI, char_type_node)
 DEF_PRIMITIVE_TYPE (HI, intHI_type_node)
 DEF_PRIMITIVE_TYPE (SI, intSI_type_node)
+DEF_PRIMITIVE_TYPE (BND, pointer_bounds_type_node)
 # ??? Logically this should be intDI_type_node, but that maps to "long"
 # with 64-bit, and that's not how the emmintrin.h is written.  Again,
 # changing this would change name mangling.
@@ -60,6 +61,7 @@ DEF_PRIMITIVE_TYPE (USHORT, short_unsigned_type_node)
 DEF_PRIMITIVE_TYPE (INT, integer_type_node)
 DEF_PRIMITIVE_TYPE (UINT, unsigned_type_node)
 DEF_PRIMITIVE_TYPE (UNSIGNED, unsigned_type_node)
+DEF_PRIMITIVE_TYPE (ULONG, long_unsigned_type_node)
 DEF_PRIMITIVE_TYPE (LONGLONG, long_long_integer_type_node)
 DEF_PRIMITIVE_TYPE (ULONGLONG, long_long_unsigned_type_node)
 DEF_PRIMITIVE_TYPE (UINT8, unsigned_char_type_node)
@@ -239,6 +241,7 @@ DEF_FUNCTION_TYPE (V4DI, V8HI)
 DEF_FUNCTION_TYPE (V4DI, V4SI)
 DEF_FUNCTION_TYPE (V4DI, PV4DI)
 DEF_FUNCTION_TYPE (V4DI, V2DI)
+DEF_FUNCTION_TYPE (BND, PCVOID, ULONG)
 
 DEF_FUNCTION_TYPE (DI, V2DI, INT)
 DEF_FUNCTION_TYPE (DOUBLE, V2DF, INT)
@@ -374,6 +377,7 @@ DEF_FUNCTION_TYPE (VOID, PV4DI, V4DI)
 DEF_FUNCTION_TYPE (VOID, PV4SF, V4SF)
 DEF_FUNCTION_TYPE (VOID, PV8SF, V8SF)
 DEF_FUNCTION_TYPE (VOID, UNSIGNED, UNSIGNED)
+DEF_FUNCTION_TYPE (VOID, BND, PCVOID)
 
 DEF_FUNCTION_TYPE (INT, V16QI, V16QI, INT)
 DEF_FUNCTION_TYPE (UCHAR, UINT, UINT, UINT)
@@ -439,6 +443,14 @@ DEF_FUNCTION_TYPE (V8UHI, V8UHI, V8UHI, V8UHI)
 DEF_FUNCTION_TYPE (V16UQI, V16UQI, V16UQI, V16UQI)
 DEF_FUNCTION_TYPE (V4DF, V4DF, V4DF, V4DI)
 DEF_FUNCTION_TYPE (V8SF, V8SF, V8SF, V8SI)
+DEF_FUNCTION_TYPE (VOID, PCVOID, PCVOID, BND)
+DEF_FUNCTION_TYPE (BND, PCVOID, PCVOID)
+DEF_FUNCTION_TYPE (BND, PCVOID)
+DEF_FUNCTION_TYPE (BND, BND, BND)
+DEF_FUNCTION_TYPE (PVOID, PVOID, PVOID, ULONG)
+DEF_FUNCTION_TYPE (PVOID, PCVOID, BND, ULONG)
+DEF_FUNCTION_TYPE (ULONG, VOID)
+DEF_FUNCTION_TYPE (PVOID, BND)
 
 DEF_FUNCTION_TYPE (V2DI, V2DI, V2DI, UINT, UINT)
 DEF_FUNCTION_TYPE (V4HI, HI, HI, HI, HI)
diff --git a/gcc/config/i386/i386.c b/gcc/config/i386/i386.c
index a427c15..6ddd37a 100644
--- a/gcc/config/i386/i386.c
+++ b/gcc/config/i386/i386.c
@@ -64,6 +64,8 @@ along with GCC; see the file COPYING3.  If not see
 #include "tree-pass.h"
 #include "context.h"
 #include "pass_manager.h"
+#include "tree-chkp.h"
+#include "rtl-chkp.h"
 
 static rtx legitimize_dllimport_symbol (rtx, bool);
 static rtx legitimize_pe_coff_extern_decl (rtx, bool);
@@ -27742,6 +27744,21 @@ enum ix86_builtins
   IX86_BUILTIN_XABORT,
   IX86_BUILTIN_XTEST,
 
+  /* MPX */
+  IX86_BUILTIN_BNDMK,
+  IX86_BUILTIN_BNDSTX,
+  IX86_BUILTIN_BNDLDX,
+  IX86_BUILTIN_BNDCL,
+  IX86_BUILTIN_BNDCU,
+  IX86_BUILTIN_BNDRET,
+  IX86_BUILTIN_BNDSET,
+  IX86_BUILTIN_BNDNARROW,
+  IX86_BUILTIN_BNDINT,
+  IX86_BUILTIN_ARG_BND,
+  IX86_BUILTIN_SIZEOF,
+  IX86_BUILTIN_BNDLOWER,
+  IX86_BUILTIN_BNDUPPER,
+
   /* BMI instructions.  */
   IX86_BUILTIN_BEXTR32,
   IX86_BUILTIN_BEXTR64,
@@ -27811,6 +27828,8 @@ struct builtin_isa {
   enum ix86_builtin_func_type tcode; /* type to use in the declaration */
   HOST_WIDE_INT isa;                 /* isa_flags this builtin is defined for */
   bool const_p;                      /* true if the declaration is constant */
+  bool leaf_p;                       /* true if the declaration has leaf attribute */
+  bool nothrow_p;                    /* true if the declaration has nothrow attribute */
   bool set_and_not_built_p;
 };
@@ -27862,6 +27881,8 @@ def_builtin (HOST_WIDE_INT mask, const char *name,
       ix86_builtins[(int) code] = NULL_TREE;
       ix86_builtins_isa[(int) code].tcode = tcode;
       ix86_builtins_isa[(int) code].name = name;
+      ix86_builtins_isa[(int) code].leaf_p = false;
+      ix86_builtins_isa[(int) code].nothrow_p = false;
       ix86_builtins_isa[(int) code].const_p = false;
       ix86_builtins_isa[(int) code].set_and_not_built_p = true;
     }
@@ -27912,6 +27933,11 @@ ix86_add_new_builtins (HOST_WIDE_INT isa)
 	  ix86_builtins[i] = decl;
 	  if (ix86_builtins_isa[i].const_p)
 	    TREE_READONLY (decl) = 1;
+	  if (ix86_builtins_isa[i].leaf_p)
+	    DECL_ATTRIBUTES (decl) = build_tree_list (get_identifier ("leaf"),
+						      NULL_TREE);
+	  if (ix86_builtins_isa[i].nothrow_p)
+	    TREE_NOTHROW (decl) = 1;
 	}
     }
 }
@@ -28950,6 +28976,29 @@ static const struct builtin_description bdesc_args[] =
   { OPTION_MASK_ISA_BMI2, CODE_FOR_bmi2_pext_di3, "__builtin_ia32_pext_di", IX86_BUILTIN_PEXT64, UNKNOWN, (int) UINT64_FTYPE_UINT64_UINT64 },
 };
 
+/* Builtins for MPX.  */
+static const struct builtin_description bdesc_mpx[] =
+{
+  { OPTION_MASK_ISA_MPX, (enum insn_code)0, "__builtin_ia32_bndstx", IX86_BUILTIN_BNDSTX, UNKNOWN, (int) VOID_FTYPE_PCVOID_PCVOID_BND },
+  { OPTION_MASK_ISA_MPX, (enum insn_code)0, "__builtin_ia32_bndcl", IX86_BUILTIN_BNDCL, UNKNOWN, (int) VOID_FTYPE_BND_PCVOID },
+  { OPTION_MASK_ISA_MPX, (enum insn_code)0, "__builtin_ia32_bndcu", IX86_BUILTIN_BNDCU, UNKNOWN, (int) VOID_FTYPE_BND_PCVOID },
+};
+
+/* Const builtins for MPX.  */
+static const struct builtin_description bdesc_mpx_const[] =
+{
+  { OPTION_MASK_ISA_MPX, (enum insn_code)0, "__builtin_ia32_bndmk", IX86_BUILTIN_BNDMK, UNKNOWN, (int) BND_FTYPE_PCVOID_ULONG },
+  { OPTION_MASK_ISA_MPX, (enum insn_code)0, "__builtin_ia32_bndldx", IX86_BUILTIN_BNDLDX, UNKNOWN, (int) BND_FTYPE_PCVOID_PCVOID },
+  { OPTION_MASK_ISA_MPX, (enum insn_code)0, "__builtin_ia32_set_bounds", IX86_BUILTIN_BNDSET, UNKNOWN, (int) PVOID_FTYPE_PVOID_PVOID_ULONG },
+  { OPTION_MASK_ISA_MPX, (enum insn_code)0, "__builtin_ia32_narrow_bounds", IX86_BUILTIN_BNDNARROW, UNKNOWN, (int) PVOID_FTYPE_PCVOID_BND_ULONG },
+  { OPTION_MASK_ISA_MPX, (enum insn_code)0, "__builtin_ia32_bndint", IX86_BUILTIN_BNDINT, UNKNOWN, (int) BND_FTYPE_BND_BND },
+  { OPTION_MASK_ISA_MPX, (enum insn_code)0, "__builtin_ia32_arg_bnd", IX86_BUILTIN_ARG_BND, UNKNOWN, (int) BND_FTYPE_PCVOID },
+  { OPTION_MASK_ISA_MPX, (enum insn_code)0, "__builtin_ia32_sizeof", IX86_BUILTIN_SIZEOF, UNKNOWN, (int) ULONG_FTYPE_VOID },
+  { OPTION_MASK_ISA_MPX, (enum insn_code)0, "__builtin_ia32_bndlower", IX86_BUILTIN_BNDLOWER, UNKNOWN, (int) PVOID_FTYPE_BND },
+  { OPTION_MASK_ISA_MPX, (enum insn_code)0, "__builtin_ia32_bndupper", IX86_BUILTIN_BNDUPPER, UNKNOWN, (int) PVOID_FTYPE_BND },
+  { OPTION_MASK_ISA_MPX, (enum insn_code)0, "__builtin_ia32_bndret", IX86_BUILTIN_BNDRET, UNKNOWN, (int) BND_FTYPE_PCVOID },
+};
+
 /* FMA4 and XOP.  */
 #define MULTI_ARG_4_DF2_DI_I	V2DF_FTYPE_V2DF_V2DF_V2DI_INT
 #define MULTI_ARG_4_DF2_DI_I1	V4DF_FTYPE_V4DF_V4DF_V4DI_INT
@@ -29649,6 +29698,61 @@ ix86_init_mmx_sse_builtins (void)
     }
 }
 
+static void
+ix86_init_mpx_builtins (void)
+{
+  const struct builtin_description *d;
+  enum ix86_builtin_func_type ftype;
+  tree decl;
+  size_t i;
+
+  for (i = 0, d = bdesc_mpx;
+       i < ARRAY_SIZE (bdesc_mpx);
+       i++, d++)
+    {
+      if (d->name == 0)
+	continue;
+
+      ftype = (enum ix86_builtin_func_type) d->flag;
+      decl = def_builtin (d->mask, d->name, ftype, d->code);
+
+      if (decl)
+	{
+	  DECL_ATTRIBUTES (decl) = build_tree_list (get_identifier ("leaf"),
+						    NULL_TREE);
+	  TREE_NOTHROW (decl) = 1;
+	}
+      else
+	{
+	  ix86_builtins_isa[(int) d->code].leaf_p = true;
+	  ix86_builtins_isa[(int) d->code].nothrow_p = true;
+	}
+    }
+
+  for (i = 0, d = bdesc_mpx_const;
+       i < ARRAY_SIZE (bdesc_mpx_const);
+       i++, d++)
+    {
+      if (d->name == 0)
+	continue;
+
+      ftype = (enum ix86_builtin_func_type) d->flag;
+      decl = def_builtin_const (d->mask, d->name, ftype, d->code);
+
+      if (decl)
+	{
+	  DECL_ATTRIBUTES (decl) = build_tree_list (get_identifier ("leaf"),
+						    NULL_TREE);
+	  TREE_NOTHROW (decl) = 1;
+	}
+      else
+	{
+	  ix86_builtins_isa[(int) d->code].leaf_p = true;
+	  ix86_builtins_isa[(int) d->code].nothrow_p = true;
+	}
+    }
+}
+
 /* This adds a condition to the basic_block NEW_BB in function FUNCTION_DECL
    to return a pointer to VERSION_DECL if the outcome of the expression
    formed by PREDICATE_CHAIN is true.  This function will be called during
@@ -31124,6 +31228,7 @@ ix86_init_builtins (void)
   ix86_init_tm_builtins ();
   ix86_init_mmx_sse_builtins ();
+  ix86_init_mpx_builtins ();
 
   if (TARGET_LP64)
     ix86_init_builtins_va_builtins_abi ();
@@ -32767,6 +32872,401 @@ ix86_expand_builtin (tree exp, rtx target, rtx subtarget,
 
   switch (fcode)
     {
+    case IX86_BUILTIN_BNDMK:
+      arg0 = CALL_EXPR_ARG (exp, 0);
+      arg1 = CALL_EXPR_ARG (exp, 1);
+
+      /* Builtin arg1 is the size of the block, but instruction op1
+	 should be (size - 1).  */
+      op0 = expand_normal (arg0);
+      op1 = expand_normal (fold_build2 (PLUS_EXPR, TREE_TYPE (arg1),
+					arg1, integer_minus_one_node));
+      op0 = force_reg (Pmode, op0);
+      op1 = force_reg (Pmode, op1);
+
+      emit_insn (TARGET_64BIT
+		 ? gen_bnd64_mk (target, op0, op1)
+		 : gen_bnd32_mk (target, op0, op1));
+      return target;
+
+    case IX86_BUILTIN_BNDSTX:
+      arg0 = CALL_EXPR_ARG (exp, 0);
+      arg1 = CALL_EXPR_ARG (exp, 1);
+      arg2 = CALL_EXPR_ARG (exp, 2);
+
+      op0 = expand_normal (arg0);
+      op1 = expand_normal (arg1);
+      op2 = expand_normal (arg2);
+
+      op0 = force_reg (Pmode, op0);
+      op1 = force_reg (Pmode, op1);
+      op2 = force_reg (BNDmode, op2);
+
+      emit_insn (TARGET_64BIT
+		 ? gen_bnd64_stx (op0, op1, op2)
+		 : gen_bnd32_stx (op0, op1, op2));
+      return 0;
+
+    case IX86_BUILTIN_BNDLDX:
+      arg0 = CALL_EXPR_ARG (exp, 0);
+      arg1 = CALL_EXPR_ARG (exp, 1);
+
+      op0 = expand_normal (arg0);
+      op1 = expand_normal (arg1);
+
+      op0 = force_reg (Pmode, op0);
+      op1 = force_reg (Pmode, op1);
+
+      /* Avoid registers which cannot be used as an index.  */
+      if (REGNO (op1) == VIRTUAL_INCOMING_ARGS_REGNUM
+	  || REGNO (op1) == VIRTUAL_STACK_VARS_REGNUM
+	  || REGNO (op1) == VIRTUAL_OUTGOING_ARGS_REGNUM)
+	{
+	  rtx temp = gen_reg_rtx (Pmode);
+	  emit_move_insn (temp, op1);
+	  op1 = temp;
+	}
+
+      /* If op1 was a register originally then it may have a mode
+	 other than Pmode.  We need to extend in such a case because
+	 bndldx may work only with Pmode regs.  */
+      if (GET_MODE (op1) != Pmode)
+	{
+	  rtx ext = gen_rtx_ZERO_EXTEND (Pmode, op1);
+	  op1 = gen_reg_rtx (Pmode);
+	  emit_move_insn (op1, ext);
+	}
+
+      emit_insn (TARGET_64BIT
+		 ? gen_bnd64_ldx (target, op0, op1)
+		 : gen_bnd32_ldx (target, op0, op1));
+      return target;
+
+    case IX86_BUILTIN_BNDCL:
+      arg0 = CALL_EXPR_ARG (exp, 0);
+      arg1 = CALL_EXPR_ARG (exp, 1);
+
+      op0 = expand_normal (arg0);
+      op1 = expand_normal (arg1);
+
+      op0 = force_reg (BNDmode, op0);
+      op1 = force_reg (Pmode, op1);
+
+      emit_insn (TARGET_64BIT
+		 ? gen_bnd64_cl (op0, op1)
+		 : gen_bnd32_cl (op0, op1));
+      return 0;
+
+    case IX86_BUILTIN_BNDCU:
+      arg0 = CALL_EXPR_ARG (exp, 0);
+      arg1 = CALL_EXPR_ARG (exp, 1);
+
+      op0 = expand_normal (arg0);
+      op1 = expand_normal (arg1);
+
+      op0 = force_reg (BNDmode, op0);
+      op1 = force_reg (Pmode, op1);
+
+      emit_insn (TARGET_64BIT
+		 ? gen_bnd64_cu (op0, op1)
+		 : gen_bnd32_cu (op0, op1));
+      return 0;
+
+    case IX86_BUILTIN_BNDRET:
+      arg0 = CALL_EXPR_ARG (exp, 0);
+      gcc_assert (TREE_CODE (arg0) == SSA_NAME);
+      target = chkp_get_rtl_bounds (arg0);
+      /* If no bounds were specified for the returned value,
+	 then use INIT bounds.  It usually happens when some
+	 built-in function is expanded.  */
+      if (!target)
+	{
+	  rtx t1 = gen_reg_rtx (Pmode);
+	  rtx t2 = gen_reg_rtx (Pmode);
+	  target = gen_reg_rtx (BNDmode);
+	  emit_move_insn (t1, const0_rtx);
+	  emit_move_insn (t2, constm1_rtx);
+	  emit_insn (TARGET_64BIT
+		     ? gen_bnd64_mk (target, t1, t2)
+		     : gen_bnd32_mk (target, t1, t2));
+	}
+      gcc_assert (target && REG_P (target));
+      return target;
+
+    case IX86_BUILTIN_BNDSET:
+      arg0 = CALL_EXPR_ARG (exp, 0);
+      arg1 = CALL_EXPR_ARG (exp, 1);
+
+      /* Size was passed but we need to use (size - 1) in bndmk.  */
+      arg1 = fold_build2 (PLUS_EXPR, TREE_TYPE (arg1), arg1,
+			  integer_minus_one_node);
+
+      op0 = expand_normal (arg0);
+      op1 = expand_normal (arg1);
+
+      op0 = force_reg (Pmode, op0);
+      op1 = force_reg (Pmode, op1);
+
+      /* Bounds are bound to the return value, so put them into b0.  */
+      op2 = gen_reg_rtx (BNDmode);
+      emit_insn (TARGET_64BIT
+		 ? gen_bnd64_mk (op2, op0, op1)
+		 : gen_bnd32_mk (op2, op0, op1));
+      return chkp_join_splitted_slot (op0, op2);
+
+    case IX86_BUILTIN_BNDNARROW:
+      {
+	enum machine_mode mode = BNDmode;
+	enum machine_mode hmode = Pmode;
+	rtx m1, m1h1, m1h2, lb, ub, t1, t2;
+
+	/* Return value and lb.  */
+	arg0 = CALL_EXPR_ARG (exp, 0);
+	/* Bounds.  */
+	arg1 = CALL_EXPR_ARG (exp, 1);
+	/* Size.  */
+	arg2 = CALL_EXPR_ARG (exp, 2);
+
+	/* Size was passed but we need to use (size - 1) as for bndmk.  */
+	arg2 = fold_build2 (PLUS_EXPR, TREE_TYPE (arg2), arg2,
+			    integer_minus_one_node);
+
+	/* Add LB to size and invert to get UB.  */
+	arg2 = fold_build2 (PLUS_EXPR, TREE_TYPE (arg2), arg2, arg0);
+	arg2 = fold_build1 (BIT_NOT_EXPR, TREE_TYPE (arg2), arg2);
+
+	op0 = expand_normal (arg0);
+	op1 = expand_normal (arg1);
+	op2 = expand_normal (arg2);
+
+	lb = force_reg (hmode, op0);
+	ub = force_reg (hmode, op2);
+
+	/* We need to move bounds to memory before any computations.  */
+	if (!MEM_P (op1))
+	  {
+	    m1 = assign_stack_local (mode, GET_MODE_SIZE (mode), 0);
+	    emit_insn (gen_move_insn (m1, op1));
+	  }
+	else
+	  m1 = op1;
+
+	/* Generate mem expressions to be used for access to LB and UB.  */
+	m1h1 = gen_rtx_MEM (hmode, XEXP (m1, 0));
+	m1h2 = gen_rtx_MEM (hmode,
+			    gen_rtx_PLUS (Pmode, XEXP (m1, 0),
+					  GEN_INT (GET_MODE_SIZE (hmode))));
+
+	t1 = gen_reg_rtx (hmode);
+
+	/* Compute LB.  */
+	emit_move_insn (t1, m1h1);
+	if (TARGET_CMOVE)
+	  {
+	    t2 = ix86_expand_compare (LTU, t1, lb);
+	    emit_insn (gen_rtx_SET (VOIDmode, t1,
+				    gen_rtx_IF_THEN_ELSE (hmode, t2, lb, t1)));
+	  }
+	else
+	  {
+	    rtx nomove = gen_label_rtx ();
+	    emit_cmp_and_jump_insns (t1, lb, GEU, const0_rtx, hmode, 1, nomove);
+	    emit_insn (gen_rtx_SET (VOIDmode, t1, lb));
+	    emit_label (nomove);
+	  }
+	emit_move_insn (m1h1, t1);
+
+	/* Compute UB.  UB is stored in 1's complement form.  Therefore
+	   we also use LTU here.  */
+	emit_move_insn (t1, m1h2);
+	if (TARGET_CMOVE)
+	  {
+	    t2 = ix86_expand_compare (LTU, t1, ub);
+	    emit_insn (gen_rtx_SET (VOIDmode, t1,
+				    gen_rtx_IF_THEN_ELSE (hmode, t2, ub, t1)));
+	  }
+	else
+	  {
+	    rtx nomove = gen_label_rtx ();
+	    emit_cmp_and_jump_insns (t1, ub, GEU, const0_rtx, hmode, 1, nomove);
+	    emit_insn (gen_rtx_SET (VOIDmode, t1, ub));
+	    emit_label (nomove);
+	  }
+	emit_move_insn (m1h2, t1);
+
+	op2 = gen_reg_rtx (BNDmode);
+	emit_move_insn (op2, m1);
+
+	return chkp_join_splitted_slot (op0, op2);
+      }
+
+    case IX86_BUILTIN_BNDINT:
+      {
+	enum machine_mode mode = BNDmode;
+	enum machine_mode hmode = Pmode;
+	rtx res = assign_stack_local (mode, GET_MODE_SIZE (mode), 0);
+	rtx m1, m2, m1h1, m1h2, m2h1, m2h2, t1, t2, t3, rh1, rh2;
+
+	arg0 = CALL_EXPR_ARG (exp, 0);
+	arg1 = CALL_EXPR_ARG (exp, 1);
+
+	op0 = expand_normal (arg0);
+	op1 = expand_normal (arg1);
+
+	/* We need to move bounds to memory before any computations.  */
+	if (!MEM_P (op0))
+	  {
+	    m1 = assign_stack_local (mode, GET_MODE_SIZE (mode), 0);
+	    emit_insn (gen_move_insn (m1, op0));
+	  }
+	else
+	  m1 = op0;
+
+	if (!MEM_P (op1))
+	  {
+	    m2 = assign_stack_local (mode, GET_MODE_SIZE (mode), 0);
+	    emit_move_insn (m2, op1);
+	  }
+	else
+	  m2 = op1;
+
+	/* Generate mem expressions to be used for access to LB and UB.  */
+	m1h1 = gen_rtx_MEM (hmode, XEXP (m1, 0));
+	m1h2 = gen_rtx_MEM (hmode,
+			    gen_rtx_PLUS (Pmode, XEXP (m1, 0),
+					  GEN_INT (GET_MODE_SIZE (hmode))));
+	m2h1 = gen_rtx_MEM (hmode, XEXP (m2, 0));
+	m2h2 = gen_rtx_MEM (hmode,
+			    gen_rtx_PLUS (Pmode, XEXP (m2, 0),
+					  GEN_INT (GET_MODE_SIZE (hmode))));
+	rh1 = gen_rtx_MEM (hmode, XEXP (res, 0));
+	rh2 = gen_rtx_MEM (hmode,
+			   gen_rtx_PLUS (Pmode, XEXP (res, 0),
+					 GEN_INT (GET_MODE_SIZE (hmode))));
+
+	/* Allocate temporaries.  */
+	t1 = gen_reg_rtx (hmode);
+	t2 = gen_reg_rtx (hmode);
+
+	/* Compute LB.  */
+	emit_move_insn (t1, m1h1);
+	emit_move_insn (t2, m2h1);
+	if (TARGET_CMOVE)
+	  {
+	    t3 = ix86_expand_compare (LTU, t1, t2);
+	    emit_insn (gen_rtx_SET (VOIDmode, t1,
+				    gen_rtx_IF_THEN_ELSE (hmode, t3, t2, t1)));
+	  }
+	else
+	  {
+	    rtx nomove = gen_label_rtx ();
+	    emit_cmp_and_jump_insns (t1, t2, GEU, const0_rtx, hmode, 1, nomove);
+	    emit_insn (gen_rtx_SET (VOIDmode, t1, t2));
+	    emit_label (nomove);
+	  }
+	emit_move_insn (rh1, t1);
+
+	/* Compute UB.  UB is stored in 1's complement form.  Therefore
+	   we also use LTU here.  */
+	emit_move_insn (t1, m1h2);
+	emit_move_insn (t2, m2h2);
+	if (TARGET_CMOVE)
+	  {
+	    t3 = ix86_expand_compare (LTU, t1, t2);
+	    emit_insn (gen_rtx_SET (VOIDmode, t1,
+				    gen_rtx_IF_THEN_ELSE (hmode, t3, t2, t1)));
+	  }
+	else
+	  {
+	    rtx nomove = gen_label_rtx ();
+	    emit_cmp_and_jump_insns (t1, t2, GEU, const0_rtx, hmode, 1, nomove);
+	    emit_insn (gen_rtx_SET (VOIDmode, t1, t2));
+	    emit_label (nomove);
+	  }
+	emit_move_insn (rh2, t1);
+
+	return res;
+      }
+
+    case IX86_BUILTIN_ARG_BND:
+      arg0 = CALL_EXPR_ARG (exp, 0);
+      arg0 = chkp_parm_for_arg_bnd_arg (arg0);
+      target = DECL_BOUNDS_RTL (arg0);
+      gcc_assert (target);
+      return target;
+
+    case IX86_BUILTIN_SIZEOF:
+      {
+	enum machine_mode mode = Pmode;
+	rtx t1, t2;
+
+	arg0 = CALL_EXPR_ARG (exp, 0);
+	gcc_assert (TREE_CODE (arg0) == VAR_DECL);
+
+	t1 = gen_reg_rtx (mode);
+	t2 = gen_rtx_SYMBOL_REF (Pmode,
+				 IDENTIFIER_POINTER (DECL_ASSEMBLER_NAME (arg0)));
+	t2 = gen_rtx_UNSPEC (mode, gen_rtvec (1, t2), UNSPEC_SIZEOF);
+
+	emit_insn (gen_rtx_SET (VOIDmode, t1, gen_rtx_CONST (Pmode, t2)));
+
+	return t1;
+      }
+
+    case IX86_BUILTIN_BNDLOWER:
+      {
+	rtx mem, hmem;
+
+	arg0 = CALL_EXPR_ARG (exp, 0);
+	op0 = expand_normal (arg0);
+
+	/* We need to move bounds to memory first.  */
+	if (!MEM_P (op0))
+	  {
+	    mem = assign_stack_local (BNDmode, GET_MODE_SIZE (BNDmode), 0);
+	    emit_insn (gen_move_insn (mem, op0));
+	  }
+	else
+	  mem = op0;
+
+	/* Generate mem expression to access LB and load it.  */
+	hmem = gen_rtx_MEM (Pmode, XEXP (mem, 0));
+	target = gen_reg_rtx (Pmode);
+	emit_move_insn (target, hmem);
+
+	return target;
+      }
+
+    case IX86_BUILTIN_BNDUPPER:
+      {
+	rtx mem, hmem;
+
+	arg0 = CALL_EXPR_ARG (exp, 0);
+	op0 = expand_normal (arg0);
+
+	/* We need to move bounds to memory first.  */
+	if (!MEM_P (op0))
+	  {
+	    mem = assign_stack_local (BNDmode, GET_MODE_SIZE (BNDmode), 0);
+	    emit_insn (gen_move_insn (mem, op0));
+	  }
+	else
+	  mem = op0;
+
+	/* Generate mem expression to access UB and load it.  */
+	hmem = gen_rtx_MEM (Pmode,
+			    gen_rtx_PLUS (Pmode, XEXP (mem, 0),
+					  GEN_INT (GET_MODE_SIZE (Pmode))));
+	target = gen_reg_rtx (Pmode);
+	emit_move_insn (target, hmem);
+
+	/* We need to invert all bits of UB.  */
+	emit_insn (gen_rtx_SET (Pmode, target, gen_rtx_NOT (Pmode, target)));
+
+	return target;
+      }
+
     case IX86_BUILTIN_MASKMOVQ:
     case IX86_BUILTIN_MASKMOVDQU:
       icode = (fcode == IX86_BUILTIN_MASKMOVQ