From patchwork Mon Jul 22 11:43:31 2024
X-Patchwork-Submitter: Manos Pitsidianakis
X-Patchwork-Id: 1963201
From: Manos Pitsidianakis
To: qemu-devel@nongnu.org
Cc: Stefan Hajnoczi, Mads Ynddal, Peter Maydell, Alex Bennée,
    Daniel P. Berrangé, Marc-André Lureau, Thomas Huth, Markus Armbruster,
    Philippe Mathieu-Daudé, Zhao Liu, Gustavo Romero, Pierrick Bouvier,
    rowan.hart@intel.com, Richard Henderson, Paolo Bonzini
Subject: [RFC PATCH v5 1/8] build-sys: Add rust feature option
Date: Mon, 22 Jul 2024 14:43:31 +0300

Add a rust feature option in meson.build and configure, to prepare for
adding Rust code in the follow-up commits.

Signed-off-by: Manos Pitsidianakis
Reviewed-by: Zhao Liu
---
 MAINTAINERS                   |  5 +++++
 configure                     | 12 ++++++++++++
 meson.build                   | 17 ++++++++++++++++-
 Kconfig                       |  1 +
 Kconfig.host                  |  3 +++
 meson_options.txt             |  5 +++++
 rust/Kconfig                  |  0
 scripts/meson-buildoptions.sh |  3 +++
 8 files changed, 45 insertions(+), 1 deletion(-)
 create mode 100644 rust/Kconfig

diff --git a/MAINTAINERS b/MAINTAINERS
index 7d9811458c..d427f13b79 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -4228,6 +4228,11 @@ F: docs/sphinx/
 F: docs/_templates/
 F: docs/devel/docs.rst
 
+Rust build system integration
+M: Manos Pitsidianakis
+S: Maintained
+F: rust/Kconfig
+
 Miscellaneous
 -------------
 Performance Tools and Tests

diff --git a/configure b/configure
index 019fcbd0ef..8476968b96 100755
--- a/configure
+++ b/configure
@@ -173,6 +173,7 @@ fi
 
 # default parameters
 container_engine="auto"
+have_rust="no"
 cpu=""
 cross_compile="no"
 cross_prefix=""
@@ -757,6 +758,15 @@ for opt do
   ;;
   --gdb=*) gdb_bin="$optarg"
   ;;
+  --enable-rust) have_rust="yes" ;
+    meson_option_add -Dhave_rust=true
+  ;;
+  --disable-rust) have_rust="no" ;
+    meson_option_add -Dhave_rust=false
+    meson_option_add -Ddisable_rust=true
+  ;;
   # everything else has the same name in configure and meson
   --*) meson_option_parse "$opt" "$optarg"
   ;;
@@ -874,6 +884,8 @@ Advanced options (experts only):
                            start the emulator (only use if you are including
                            desired devices in configs/devices/)
   --with-devices-ARCH=NAME override default configs/devices
+  --enable-rust            enable compilation of Rust code
+  --disable-rust           disable compilation of Rust code
   --enable-debug           enable common debug build options
   --cpu=CPU                Build for host CPU [$cpu]
   --disable-containers     don't use containers for cross-building

diff --git a/meson.build b/meson.build
index a1e51277b0..a3f346ab3c 100644
--- a/meson.build
+++ b/meson.build
@@ -70,6 +70,14 @@ if host_os == 'darwin' and \
   all_languages += ['objc']
   objc = meson.get_compiler('objc')
 endif
+if get_option('have_rust') and meson.version().version_compare('<1.0.0')
+  error('Rust support requires Meson version >=1.0.0')
+endif
+have_rust = false
+if not get_option('disable_rust') and add_languages('rust', required: get_option('have_rust'), native: false)
+  rustc = meson.get_compiler('rust')
+  have_rust = true
+endif
 
 dtrace = not_found
 stap = not_found
@@ -2119,6 +2127,7 @@ endif
 
 config_host_data = configuration_data()
 
+config_host_data.set('CONFIG_HAVE_RUST', have_rust)
 audio_drivers_selected = []
 if have_system
   audio_drivers_available = {
@@ -3038,7 +3047,8 @@ host_kconfig = \
   (host_os == 'linux' ? ['CONFIG_LINUX=y'] : []) + \
   (multiprocess_allowed ? ['CONFIG_MULTIPROCESS_ALLOWED=y'] : []) + \
   (vfio_user_server_allowed ? ['CONFIG_VFIO_USER_SERVER_ALLOWED=y'] : []) + \
-  (hv_balloon ? ['CONFIG_HV_BALLOON_POSSIBLE=y'] : [])
+  (hv_balloon ? ['CONFIG_HV_BALLOON_POSSIBLE=y'] : []) + \
+  (have_rust ? ['CONFIG_HAVE_RUST=y'] : [])
 
 ignored = [ 'TARGET_XML_FILES', 'TARGET_ABI_DIR', 'TARGET_ARCH' ]
@@ -4242,6 +4252,11 @@ if 'objc' in all_languages
 else
   summary_info += {'Objective-C compiler': false}
 endif
+summary_info += {'Rust support': have_rust}
+if have_rust
+  summary_info += {'rustc version': rustc.version()}
+  summary_info += {'rustc': ' '.join(rustc.cmd_array())}
+endif
 option_cflags = (get_option('debug') ? ['-g'] : [])
 if get_option('optimization') != 'plain'
   option_cflags += ['-O' + get_option('optimization')]

diff --git a/Kconfig b/Kconfig
index fb6a24a2de..63ca7f46df 100644
--- a/Kconfig
+++ b/Kconfig
@@ -4,3 +4,4 @@ source accel/Kconfig
 source target/Kconfig
 source hw/Kconfig
 source semihosting/Kconfig
+source rust/Kconfig

diff --git a/Kconfig.host b/Kconfig.host
index 17f405004b..4ade7899d6 100644
--- a/Kconfig.host
+++ b/Kconfig.host
@@ -52,3 +52,6 @@ config VFIO_USER_SERVER_ALLOWED
 
 config HV_BALLOON_POSSIBLE
     bool
+
+config HAVE_RUST
+    bool

diff --git a/meson_options.txt b/meson_options.txt
index 0269fa0f16..9eef45f85b 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -371,3 +371,8 @@ option('hexagon_idef_parser', type : 'boolean', value : true,
 option('x86_version', type : 'combo', choices : ['0', '1', '2', '3', '4'],
        value: '1',
        description: 'tweak required x86_64 architecture version beyond compiler default')
+
+option('have_rust', type: 'boolean', value: false,
+       description: 'Have Rust support')
+option('disable_rust', type: 'boolean', value: false,
+       description: 'Disable Rust support')

diff --git a/rust/Kconfig b/rust/Kconfig
new file mode 100644
index 0000000000..e69de29bb2

diff --git a/scripts/meson-buildoptions.sh b/scripts/meson-buildoptions.sh
index c97079a38c..bb5854115a 100644
--- a/scripts/meson-buildoptions.sh
+++ b/scripts/meson-buildoptions.sh
@@ -170,6 +170,7 @@ meson_options_help() {
   printf "%s\n" '  rbd             Ceph block device driver'
   printf "%s\n" '  rdma            Enable RDMA-based migration'
   printf "%s\n" '  replication     replication support'
+  printf "%s\n" '  rust            Rust support'
   printf "%s\n" '  rutabaga-gfx    rutabaga_gfx support'
   printf "%s\n" '  sdl             SDL user interface'
   printf "%s\n" '  sdl-image       SDL Image support for icons'
@@ -556,6 +557,8 @@ _meson_option_parse() {
     --enable-whpx) printf "%s" -Dwhpx=enabled ;;
     --disable-whpx) printf "%s" -Dwhpx=disabled ;;
     --x86-version=*) quote_sh "-Dx86_version=$2" ;;
+    --enable-with-rust) printf "%s" -Dhave_rust=true ;;
+    --disable-with-rust) printf "%s" -Dhave_rust=false ;;
    --enable-xen) printf "%s" -Dxen=enabled ;;
    --disable-xen) printf "%s" -Dxen=disabled ;;
    --enable-xen-pci-passthrough) printf "%s" -Dxen_pci_passthrough=enabled ;;
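With only this patch applied the option mostly toggles detection and the
configure/meson summary. A minimal sketch of trying it out (the build
directory name and target list below are illustrative, not part of the
patch, and the summary formatting is approximate):

    $ mkdir build && cd build
    $ ../configure --enable-rust --target-list=aarch64-softmmu
    ...
    Rust support        : YES
    rustc version       : <whatever the host rustc reports>
    $ ../configure --disable-rust    # force it off again

Note the three-way behaviour encoded in the meson.build hunk above: with no
flag, Rust is probed automatically (add_languages() with required: false);
--enable-rust turns a missing rustc into a hard configure error; and
--disable-rust skips the probe entirely via the disable_rust option.
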
From patchwork Mon Jul 22 11:43:32 2024
X-Patchwork-Submitter: Manos Pitsidianakis
X-Patchwork-Id: 1963206
From: Manos Pitsidianakis
To: qemu-devel@nongnu.org
Cc: Stefan Hajnoczi, Mads Ynddal, Peter Maydell, Alex Bennée,
    Daniel P. Berrangé, Marc-André Lureau, Thomas Huth, Markus Armbruster,
    Philippe Mathieu-Daudé, Zhao Liu, Gustavo Romero, Pierrick Bouvier,
    rowan.hart@intel.com, Richard Henderson, Ed Maste, Li-Wen Hsu,
    Wainer dos Santos Moschetta, Beraldo Leal
Subject: [RFC PATCH v5 2/8] build deps: update lcitool to include rust bits
Date: Mon, 22 Jul 2024 14:43:32 +0300
Message-ID: <49e451adf4a3203760fb671e7509b24a7e31976f.1721648163.git.manos.pitsidianakis@linaro.org>

From: Alex Bennée

For Rust development we need cargo, rustc and bindgen in our various
development environments. Update the libvirt-ci project to (!495) and
regenerate the containers and other dependency lists.

Signed-off-by: Alex Bennée
Reviewed-by: Daniel P.
Berrangé Reviewed-by: Philippe Mathieu-Daudé Signed-off-by: Manos Pitsidianakis --- .gitlab-ci.d/cirrus/freebsd-13.vars | 2 +- .gitlab-ci.d/cirrus/macos-13.vars | 2 +- .gitlab-ci.d/cirrus/macos-14.vars | 2 +- scripts/ci/setup/ubuntu/ubuntu-2204-aarch64.yaml | 3 +++ scripts/ci/setup/ubuntu/ubuntu-2204-s390x.yaml | 3 +++ tests/docker/dockerfiles/alpine.docker | 3 +++ tests/docker/dockerfiles/centos9.docker | 3 +++ tests/docker/dockerfiles/debian-amd64-cross.docker | 4 ++++ tests/docker/dockerfiles/debian-arm64-cross.docker | 4 ++++ tests/docker/dockerfiles/debian-armel-cross.docker | 4 ++++ tests/docker/dockerfiles/debian-armhf-cross.docker | 4 ++++ tests/docker/dockerfiles/debian-i686-cross.docker | 4 ++++ tests/docker/dockerfiles/debian-mips64el-cross.docker | 4 ++++ tests/docker/dockerfiles/debian-mipsel-cross.docker | 4 ++++ tests/docker/dockerfiles/debian-ppc64el-cross.docker | 4 ++++ tests/docker/dockerfiles/debian-s390x-cross.docker | 4 ++++ tests/docker/dockerfiles/debian.docker | 3 +++ tests/docker/dockerfiles/fedora-win64-cross.docker | 3 +++ tests/docker/dockerfiles/fedora.docker | 3 +++ tests/docker/dockerfiles/opensuse-leap.docker | 2 ++ tests/docker/dockerfiles/ubuntu2204.docker | 3 +++ tests/lcitool/libvirt-ci | 2 +- tests/lcitool/projects/qemu.yml | 3 +++ tests/vm/generated/freebsd.json | 2 ++ 24 files changed, 71 insertions(+), 4 deletions(-) diff --git a/.gitlab-ci.d/cirrus/freebsd-13.vars b/.gitlab-ci.d/cirrus/freebsd-13.vars index 3785afca36..8c3b02d089 100644 --- a/.gitlab-ci.d/cirrus/freebsd-13.vars +++ b/.gitlab-ci.d/cirrus/freebsd-13.vars @@ -11,6 +11,6 @@ MAKE='/usr/local/bin/gmake' NINJA='/usr/local/bin/ninja' PACKAGING_COMMAND='pkg' PIP3='/usr/local/bin/pip-3.8' -PKGS='alsa-lib bash bison bzip2 ca_root_nss capstone4 ccache cmocka ctags curl cyrus-sasl dbus diffutils dtc flex fusefs-libs3 gettext git glib gmake gnutls gsed gtk3 json-c libepoxy libffi libgcrypt libjpeg-turbo libnfs libslirp libspice-server libssh libtasn1 llvm lzo2 meson mtools ncurses nettle ninja opencv pixman pkgconf png py39-numpy py39-pillow py39-pip py39-sphinx py39-sphinx_rtd_theme py39-tomli py39-yaml python3 rpm2cpio sdl2 sdl2_image snappy sndio socat spice-protocol tesseract usbredir virglrenderer vte3 xorriso zstd' +PKGS='alsa-lib bash bison bzip2 ca_root_nss capstone4 ccache cmocka ctags curl cyrus-sasl dbus diffutils dtc flex fusefs-libs3 gettext git glib gmake gnutls gsed gtk3 json-c libepoxy libffi libgcrypt libjpeg-turbo libnfs libslirp libspice-server libssh libtasn1 llvm lzo2 meson mtools ncurses nettle ninja opencv pixman pkgconf png py39-numpy py39-pillow py39-pip py39-sphinx py39-sphinx_rtd_theme py39-tomli py39-yaml python3 rpm2cpio rust rust-bindgen-cli sdl2 sdl2_image snappy sndio socat spice-protocol tesseract usbredir virglrenderer vte3 xorriso zstd' PYPI_PKGS='' PYTHON='/usr/local/bin/python3' diff --git a/.gitlab-ci.d/cirrus/macos-13.vars b/.gitlab-ci.d/cirrus/macos-13.vars index 534f029956..3c8ba1a277 100644 --- a/.gitlab-ci.d/cirrus/macos-13.vars +++ b/.gitlab-ci.d/cirrus/macos-13.vars @@ -11,6 +11,6 @@ MAKE='/opt/homebrew/bin/gmake' NINJA='/opt/homebrew/bin/ninja' PACKAGING_COMMAND='brew' PIP3='/opt/homebrew/bin/pip3' -PKGS='bash bc bison bzip2 capstone ccache cmocka ctags curl dbus diffutils dtc flex gcovr gettext git glib gnu-sed gnutls gtk+3 jemalloc jpeg-turbo json-c libepoxy libffi libgcrypt libiscsi libnfs libpng libslirp libssh libtasn1 libusb llvm lzo make meson mtools ncurses nettle ninja pixman pkg-config python3 rpm2cpio sdl2 sdl2_image snappy socat sparse 
spice-protocol swtpm tesseract usbredir vde vte3 xorriso zlib zstd' +PKGS='bash bc bindgen bison bzip2 capstone ccache cmocka ctags curl dbus diffutils dtc flex gcovr gettext git glib gnu-sed gnutls gtk+3 jemalloc jpeg-turbo json-c libepoxy libffi libgcrypt libiscsi libnfs libpng libslirp libssh libtasn1 libusb llvm lzo make meson mtools ncurses nettle ninja pixman pkg-config python3 rpm2cpio rust sdl2 sdl2_image snappy socat sparse spice-protocol swtpm tesseract usbredir vde vte3 xorriso zlib zstd' PYPI_PKGS='PyYAML numpy pillow sphinx sphinx-rtd-theme tomli' PYTHON='/opt/homebrew/bin/python3' diff --git a/.gitlab-ci.d/cirrus/macos-14.vars b/.gitlab-ci.d/cirrus/macos-14.vars index 43070f4a26..d227c5deca 100644 --- a/.gitlab-ci.d/cirrus/macos-14.vars +++ b/.gitlab-ci.d/cirrus/macos-14.vars @@ -11,6 +11,6 @@ MAKE='/opt/homebrew/bin/gmake' NINJA='/opt/homebrew/bin/ninja' PACKAGING_COMMAND='brew' PIP3='/opt/homebrew/bin/pip3' -PKGS='bash bc bison bzip2 capstone ccache cmocka ctags curl dbus diffutils dtc flex gcovr gettext git glib gnu-sed gnutls gtk+3 jemalloc jpeg-turbo json-c libepoxy libffi libgcrypt libiscsi libnfs libpng libslirp libssh libtasn1 libusb llvm lzo make meson mtools ncurses nettle ninja pixman pkg-config python3 rpm2cpio sdl2 sdl2_image snappy socat sparse spice-protocol swtpm tesseract usbredir vde vte3 xorriso zlib zstd' +PKGS='bash bc bindgen bison bzip2 capstone ccache cmocka ctags curl dbus diffutils dtc flex gcovr gettext git glib gnu-sed gnutls gtk+3 jemalloc jpeg-turbo json-c libepoxy libffi libgcrypt libiscsi libnfs libpng libslirp libssh libtasn1 libusb llvm lzo make meson mtools ncurses nettle ninja pixman pkg-config python3 rpm2cpio rust sdl2 sdl2_image snappy socat sparse spice-protocol swtpm tesseract usbredir vde vte3 xorriso zlib zstd' PYPI_PKGS='PyYAML numpy pillow sphinx sphinx-rtd-theme tomli' PYTHON='/opt/homebrew/bin/python3' diff --git a/scripts/ci/setup/ubuntu/ubuntu-2204-aarch64.yaml b/scripts/ci/setup/ubuntu/ubuntu-2204-aarch64.yaml index fd5489cd82..6f856a5da2 100644 --- a/scripts/ci/setup/ubuntu/ubuntu-2204-aarch64.yaml +++ b/scripts/ci/setup/ubuntu/ubuntu-2204-aarch64.yaml @@ -7,10 +7,12 @@ packages: - bash - bc + - bindgen - bison - bsdextrautils - bzip2 - ca-certificates + - cargo - ccache - clang - dbus @@ -112,6 +114,7 @@ packages: - python3-venv - python3-yaml - rpm2cpio + - rustc - sed - socat - sparse diff --git a/scripts/ci/setup/ubuntu/ubuntu-2204-s390x.yaml b/scripts/ci/setup/ubuntu/ubuntu-2204-s390x.yaml index afa04502cf..217515f90d 100644 --- a/scripts/ci/setup/ubuntu/ubuntu-2204-s390x.yaml +++ b/scripts/ci/setup/ubuntu/ubuntu-2204-s390x.yaml @@ -7,10 +7,12 @@ packages: - bash - bc + - bindgen - bison - bsdextrautils - bzip2 - ca-certificates + - cargo - ccache - clang - dbus @@ -110,6 +112,7 @@ packages: - python3-venv - python3-yaml - rpm2cpio + - rustc - sed - socat - sparse diff --git a/tests/docker/dockerfiles/alpine.docker b/tests/docker/dockerfiles/alpine.docker index b079a83fe2..fc488c94ac 100644 --- a/tests/docker/dockerfiles/alpine.docker +++ b/tests/docker/dockerfiles/alpine.docker @@ -18,6 +18,7 @@ RUN apk update && \ bzip2-dev \ ca-certificates \ capstone-dev \ + cargo \ ccache \ ceph-dev \ clang \ @@ -89,6 +90,8 @@ RUN apk update && \ py3-yaml \ python3 \ rpm2cpio \ + rust \ + rust-bindgen \ samurai \ sdl2-dev \ sdl2_image-dev \ diff --git a/tests/docker/dockerfiles/centos9.docker b/tests/docker/dockerfiles/centos9.docker index 0256865b9e..e6a69c56f8 100644 --- a/tests/docker/dockerfiles/centos9.docker +++ 
b/tests/docker/dockerfiles/centos9.docker @@ -16,12 +16,14 @@ RUN dnf distro-sync -y && \ alsa-lib-devel \ bash \ bc \ + bindgen-cli \ bison \ brlapi-devel \ bzip2 \ bzip2-devel \ ca-certificates \ capstone-devel \ + cargo \ ccache \ clang \ ctags \ @@ -102,6 +104,7 @@ RUN dnf distro-sync -y && \ python3-sphinx_rtd_theme \ python3-tomli \ rdma-core-devel \ + rust \ sed \ snappy-devel \ socat \ diff --git a/tests/docker/dockerfiles/debian-amd64-cross.docker b/tests/docker/dockerfiles/debian-amd64-cross.docker index 8058695979..62f8c000ed 100644 --- a/tests/docker/dockerfiles/debian-amd64-cross.docker +++ b/tests/docker/dockerfiles/debian-amd64-cross.docker @@ -13,10 +13,12 @@ RUN export DEBIAN_FRONTEND=noninteractive && \ eatmydata apt-get install --no-install-recommends -y \ bash \ bc \ + bindgen \ bison \ bsdextrautils \ bzip2 \ ca-certificates \ + cargo \ ccache \ dbus \ debianutils \ @@ -52,6 +54,7 @@ RUN export DEBIAN_FRONTEND=noninteractive && \ python3-venv \ python3-yaml \ rpm2cpio \ + rustc \ sed \ socat \ sparse \ @@ -169,6 +172,7 @@ endian = 'little'\n" > /usr/local/share/meson/cross/x86_64-linux-gnu && \ ENV ABI "x86_64-linux-gnu" ENV MESON_OPTS "--cross-file=x86_64-linux-gnu" +ENV RUST_TARGET "x86_64-unknown-linux-gnu" ENV QEMU_CONFIGURE_OPTS --cross-prefix=x86_64-linux-gnu- ENV DEF_TARGET_LIST x86_64-softmmu,x86_64-linux-user,i386-softmmu,i386-linux-user # As a final step configure the user (if env is defined) diff --git a/tests/docker/dockerfiles/debian-arm64-cross.docker b/tests/docker/dockerfiles/debian-arm64-cross.docker index 15457d7657..42523e9113 100644 --- a/tests/docker/dockerfiles/debian-arm64-cross.docker +++ b/tests/docker/dockerfiles/debian-arm64-cross.docker @@ -13,10 +13,12 @@ RUN export DEBIAN_FRONTEND=noninteractive && \ eatmydata apt-get install --no-install-recommends -y \ bash \ bc \ + bindgen \ bison \ bsdextrautils \ bzip2 \ ca-certificates \ + cargo \ ccache \ dbus \ debianutils \ @@ -52,6 +54,7 @@ RUN export DEBIAN_FRONTEND=noninteractive && \ python3-venv \ python3-yaml \ rpm2cpio \ + rustc \ sed \ socat \ sparse \ @@ -168,6 +171,7 @@ endian = 'little'\n" > /usr/local/share/meson/cross/aarch64-linux-gnu && \ ENV ABI "aarch64-linux-gnu" ENV MESON_OPTS "--cross-file=aarch64-linux-gnu" +ENV RUST_TARGET "aarch64-unknown-linux-gnu" ENV QEMU_CONFIGURE_OPTS --cross-prefix=aarch64-linux-gnu- ENV DEF_TARGET_LIST aarch64-softmmu,aarch64-linux-user # As a final step configure the user (if env is defined) diff --git a/tests/docker/dockerfiles/debian-armel-cross.docker b/tests/docker/dockerfiles/debian-armel-cross.docker index c26ffc2e9e..35d42dba97 100644 --- a/tests/docker/dockerfiles/debian-armel-cross.docker +++ b/tests/docker/dockerfiles/debian-armel-cross.docker @@ -13,10 +13,12 @@ RUN export DEBIAN_FRONTEND=noninteractive && \ eatmydata apt-get install --no-install-recommends -y \ bash \ bc \ + bindgen \ bison \ bsdextrautils \ bzip2 \ ca-certificates \ + cargo \ ccache \ dbus \ debianutils \ @@ -54,6 +56,7 @@ RUN export DEBIAN_FRONTEND=noninteractive && \ python3-wheel \ python3-yaml \ rpm2cpio \ + rustc \ sed \ socat \ sparse \ @@ -169,6 +172,7 @@ endian = 'little'\n" > /usr/local/share/meson/cross/arm-linux-gnueabi && \ ENV ABI "arm-linux-gnueabi" ENV MESON_OPTS "--cross-file=arm-linux-gnueabi" +ENV RUST_TARGET "armv5te-unknown-linux-gnueabi" ENV QEMU_CONFIGURE_OPTS --cross-prefix=arm-linux-gnueabi- ENV DEF_TARGET_LIST arm-softmmu,arm-linux-user,armeb-linux-user # As a final step configure the user (if env is defined) diff --git 
a/tests/docker/dockerfiles/debian-armhf-cross.docker b/tests/docker/dockerfiles/debian-armhf-cross.docker index 8f87656d89..1b2c260e5a 100644 --- a/tests/docker/dockerfiles/debian-armhf-cross.docker +++ b/tests/docker/dockerfiles/debian-armhf-cross.docker @@ -13,10 +13,12 @@ RUN export DEBIAN_FRONTEND=noninteractive && \ eatmydata apt-get install --no-install-recommends -y \ bash \ bc \ + bindgen \ bison \ bsdextrautils \ bzip2 \ ca-certificates \ + cargo \ ccache \ dbus \ debianutils \ @@ -52,6 +54,7 @@ RUN export DEBIAN_FRONTEND=noninteractive && \ python3-venv \ python3-yaml \ rpm2cpio \ + rustc \ sed \ socat \ sparse \ @@ -168,6 +171,7 @@ endian = 'little'\n" > /usr/local/share/meson/cross/arm-linux-gnueabihf && \ ENV ABI "arm-linux-gnueabihf" ENV MESON_OPTS "--cross-file=arm-linux-gnueabihf" +ENV RUST_TARGET "armv7-unknown-linux-gnueabihf" ENV QEMU_CONFIGURE_OPTS --cross-prefix=arm-linux-gnueabihf- ENV DEF_TARGET_LIST arm-softmmu,arm-linux-user # As a final step configure the user (if env is defined) diff --git a/tests/docker/dockerfiles/debian-i686-cross.docker b/tests/docker/dockerfiles/debian-i686-cross.docker index f4ef054a2e..50ad6179a2 100644 --- a/tests/docker/dockerfiles/debian-i686-cross.docker +++ b/tests/docker/dockerfiles/debian-i686-cross.docker @@ -13,10 +13,12 @@ RUN export DEBIAN_FRONTEND=noninteractive && \ eatmydata apt-get install --no-install-recommends -y \ bash \ bc \ + bindgen \ bison \ bsdextrautils \ bzip2 \ ca-certificates \ + cargo \ ccache \ dbus \ debianutils \ @@ -54,6 +56,7 @@ RUN export DEBIAN_FRONTEND=noninteractive && \ python3-wheel \ python3-yaml \ rpm2cpio \ + rustc \ sed \ socat \ sparse \ @@ -169,6 +172,7 @@ endian = 'little'\n" > /usr/local/share/meson/cross/i686-linux-gnu && \ ENV ABI "i686-linux-gnu" ENV MESON_OPTS "--cross-file=i686-linux-gnu" +ENV RUST_TARGET "i686-unknown-linux-gnu" ENV QEMU_CONFIGURE_OPTS --cross-prefix=i686-linux-gnu- ENV DEF_TARGET_LIST x86_64-softmmu,x86_64-linux-user,i386-softmmu,i386-linux-user # As a final step configure the user (if env is defined) diff --git a/tests/docker/dockerfiles/debian-mips64el-cross.docker b/tests/docker/dockerfiles/debian-mips64el-cross.docker index 59c4c68dce..27db4509b6 100644 --- a/tests/docker/dockerfiles/debian-mips64el-cross.docker +++ b/tests/docker/dockerfiles/debian-mips64el-cross.docker @@ -13,10 +13,12 @@ RUN export DEBIAN_FRONTEND=noninteractive && \ eatmydata apt-get install --no-install-recommends -y \ bash \ bc \ + bindgen \ bison \ bsdextrautils \ bzip2 \ ca-certificates \ + cargo \ ccache \ dbus \ debianutils \ @@ -54,6 +56,7 @@ RUN export DEBIAN_FRONTEND=noninteractive && \ python3-wheel \ python3-yaml \ rpm2cpio \ + rustc \ sed \ socat \ sparse \ @@ -167,6 +170,7 @@ endian = 'little'\n" > /usr/local/share/meson/cross/mips64el-linux-gnuabi64 && \ ENV ABI "mips64el-linux-gnuabi64" ENV MESON_OPTS "--cross-file=mips64el-linux-gnuabi64" +ENV RUST_TARGET "mips64el-unknown-linux-gnuabi64" ENV QEMU_CONFIGURE_OPTS --cross-prefix=mips64el-linux-gnuabi64- ENV DEF_TARGET_LIST mips64el-softmmu,mips64el-linux-user # As a final step configure the user (if env is defined) diff --git a/tests/docker/dockerfiles/debian-mipsel-cross.docker b/tests/docker/dockerfiles/debian-mipsel-cross.docker index 880c774f1c..725a632e4b 100644 --- a/tests/docker/dockerfiles/debian-mipsel-cross.docker +++ b/tests/docker/dockerfiles/debian-mipsel-cross.docker @@ -13,10 +13,12 @@ RUN export DEBIAN_FRONTEND=noninteractive && \ eatmydata apt-get install --no-install-recommends -y \ bash \ bc \ + bindgen \ bison 
\ bsdextrautils \ bzip2 \ ca-certificates \ + cargo \ ccache \ dbus \ debianutils \ @@ -54,6 +56,7 @@ RUN export DEBIAN_FRONTEND=noninteractive && \ python3-wheel \ python3-yaml \ rpm2cpio \ + rustc \ sed \ socat \ sparse \ @@ -167,6 +170,7 @@ endian = 'little'\n" > /usr/local/share/meson/cross/mipsel-linux-gnu && \ ENV ABI "mipsel-linux-gnu" ENV MESON_OPTS "--cross-file=mipsel-linux-gnu" +ENV RUST_TARGET "mipsel-unknown-linux-gnu" ENV QEMU_CONFIGURE_OPTS --cross-prefix=mipsel-linux-gnu- ENV DEF_TARGET_LIST mipsel-softmmu,mipsel-linux-user # As a final step configure the user (if env is defined) diff --git a/tests/docker/dockerfiles/debian-ppc64el-cross.docker b/tests/docker/dockerfiles/debian-ppc64el-cross.docker index 1d55b9514c..c85b43704c 100644 --- a/tests/docker/dockerfiles/debian-ppc64el-cross.docker +++ b/tests/docker/dockerfiles/debian-ppc64el-cross.docker @@ -13,10 +13,12 @@ RUN export DEBIAN_FRONTEND=noninteractive && \ eatmydata apt-get install --no-install-recommends -y \ bash \ bc \ + bindgen \ bison \ bsdextrautils \ bzip2 \ ca-certificates \ + cargo \ ccache \ dbus \ debianutils \ @@ -52,6 +54,7 @@ RUN export DEBIAN_FRONTEND=noninteractive && \ python3-venv \ python3-yaml \ rpm2cpio \ + rustc \ sed \ socat \ sparse \ @@ -167,6 +170,7 @@ endian = 'little'\n" > /usr/local/share/meson/cross/powerpc64le-linux-gnu && \ ENV ABI "powerpc64le-linux-gnu" ENV MESON_OPTS "--cross-file=powerpc64le-linux-gnu" +ENV RUST_TARGET "powerpc64le-unknown-linux-gnu" ENV QEMU_CONFIGURE_OPTS --cross-prefix=powerpc64le-linux-gnu- ENV DEF_TARGET_LIST ppc64-softmmu,ppc64-linux-user # As a final step configure the user (if env is defined) diff --git a/tests/docker/dockerfiles/debian-s390x-cross.docker b/tests/docker/dockerfiles/debian-s390x-cross.docker index 62ccda6ab1..022c84b0da 100644 --- a/tests/docker/dockerfiles/debian-s390x-cross.docker +++ b/tests/docker/dockerfiles/debian-s390x-cross.docker @@ -13,10 +13,12 @@ RUN export DEBIAN_FRONTEND=noninteractive && \ eatmydata apt-get install --no-install-recommends -y \ bash \ bc \ + bindgen \ bison \ bsdextrautils \ bzip2 \ ca-certificates \ + cargo \ ccache \ dbus \ debianutils \ @@ -52,6 +54,7 @@ RUN export DEBIAN_FRONTEND=noninteractive && \ python3-venv \ python3-yaml \ rpm2cpio \ + rustc \ sed \ socat \ sparse \ @@ -166,6 +169,7 @@ endian = 'big'\n" > /usr/local/share/meson/cross/s390x-linux-gnu && \ ENV ABI "s390x-linux-gnu" ENV MESON_OPTS "--cross-file=s390x-linux-gnu" +ENV RUST_TARGET "s390x-unknown-linux-gnu" ENV QEMU_CONFIGURE_OPTS --cross-prefix=s390x-linux-gnu- ENV DEF_TARGET_LIST s390x-softmmu,s390x-linux-user # As a final step configure the user (if env is defined) diff --git a/tests/docker/dockerfiles/debian.docker b/tests/docker/dockerfiles/debian.docker index 0d1d401eb8..c30fab88f7 100644 --- a/tests/docker/dockerfiles/debian.docker +++ b/tests/docker/dockerfiles/debian.docker @@ -13,10 +13,12 @@ RUN export DEBIAN_FRONTEND=noninteractive && \ eatmydata apt-get install --no-install-recommends -y \ bash \ bc \ + bindgen \ bison \ bsdextrautils \ bzip2 \ ca-certificates \ + cargo \ ccache \ clang \ dbus \ @@ -119,6 +121,7 @@ RUN export DEBIAN_FRONTEND=noninteractive && \ python3-venv \ python3-yaml \ rpm2cpio \ + rustc \ sed \ socat \ sparse \ diff --git a/tests/docker/dockerfiles/fedora-win64-cross.docker b/tests/docker/dockerfiles/fedora-win64-cross.docker index 007e1574bd..d1a1922b35 100644 --- a/tests/docker/dockerfiles/fedora-win64-cross.docker +++ b/tests/docker/dockerfiles/fedora-win64-cross.docker @@ -20,9 +20,11 @@ exec "$@"\n' 
> /usr/bin/nosync && \ nosync dnf install -y \ bash \ bc \ + bindgen-cli \ bison \ bzip2 \ ca-certificates \ + cargo \ ccache \ ctags \ dbus-daemon \ @@ -52,6 +54,7 @@ exec "$@"\n' > /usr/bin/nosync && \ python3-sphinx \ python3-sphinx_rtd_theme \ python3-zombie-imp \ + rust \ sed \ socat \ sparse \ diff --git a/tests/docker/dockerfiles/fedora.docker b/tests/docker/dockerfiles/fedora.docker index 44f239c088..bb05ca4172 100644 --- a/tests/docker/dockerfiles/fedora.docker +++ b/tests/docker/dockerfiles/fedora.docker @@ -23,12 +23,14 @@ exec "$@"\n' > /usr/bin/nosync && \ alsa-lib-devel \ bash \ bc \ + bindgen-cli \ bison \ brlapi-devel \ bzip2 \ bzip2-devel \ ca-certificates \ capstone-devel \ + cargo \ ccache \ clang \ ctags \ @@ -112,6 +114,7 @@ exec "$@"\n' > /usr/bin/nosync && \ python3-sphinx_rtd_theme \ python3-zombie-imp \ rdma-core-devel \ + rust \ sed \ snappy-devel \ socat \ diff --git a/tests/docker/dockerfiles/opensuse-leap.docker b/tests/docker/dockerfiles/opensuse-leap.docker index 836f531ac1..3e8bc8447e 100644 --- a/tests/docker/dockerfiles/opensuse-leap.docker +++ b/tests/docker/dockerfiles/opensuse-leap.docker @@ -16,6 +16,7 @@ RUN zypper update -y && \ brlapi-devel \ bzip2 \ ca-certificates \ + cargo \ ccache \ clang \ ctags \ @@ -94,6 +95,7 @@ RUN zypper update -y && \ python311-pip \ python311-setuptools \ rdma-core-devel \ + rust \ sed \ snappy-devel \ sndio-devel \ diff --git a/tests/docker/dockerfiles/ubuntu2204.docker b/tests/docker/dockerfiles/ubuntu2204.docker index beeb44fc28..98ee55f7e0 100644 --- a/tests/docker/dockerfiles/ubuntu2204.docker +++ b/tests/docker/dockerfiles/ubuntu2204.docker @@ -13,10 +13,12 @@ RUN export DEBIAN_FRONTEND=noninteractive && \ eatmydata apt-get install --no-install-recommends -y \ bash \ bc \ + bindgen \ bison \ bsdextrautils \ bzip2 \ ca-certificates \ + cargo \ ccache \ clang \ dbus \ @@ -119,6 +121,7 @@ RUN export DEBIAN_FRONTEND=noninteractive && \ python3-venv \ python3-yaml \ rpm2cpio \ + rustc \ sed \ socat \ sparse \ diff --git a/tests/lcitool/libvirt-ci b/tests/lcitool/libvirt-ci index 0e9490cebc..9b2b2ceb80 160000 --- a/tests/lcitool/libvirt-ci +++ b/tests/lcitool/libvirt-ci @@ -1 +1 @@ -Subproject commit 0e9490cebc726ef772b6c9e27dac32e7ae99f9b2 +Subproject commit 9b2b2ceb80a7a215bdfb9000bcc2c1a30457ec69 diff --git a/tests/lcitool/projects/qemu.yml b/tests/lcitool/projects/qemu.yml index 0c85784259..7e85b27b66 100644 --- a/tests/lcitool/projects/qemu.yml +++ b/tests/lcitool/projects/qemu.yml @@ -3,11 +3,13 @@ packages: - alsa - bash - bc + - bindgen - bison - brlapi - bzip2 - bzip2-libs - capstone + - cargo - ccache - clang - cmocka @@ -100,6 +102,7 @@ packages: - python3-tomli - python3-venv - rpm2cpio + - rust - sdl2 - sdl2-image - sed diff --git a/tests/vm/generated/freebsd.json b/tests/vm/generated/freebsd.json index 2d5895ebed..3d36c8af7c 100644 --- a/tests/vm/generated/freebsd.json +++ b/tests/vm/generated/freebsd.json @@ -60,6 +60,8 @@ "py39-yaml", "python3", "rpm2cpio", + "rust", + "rust-bindgen-cli", "sdl2", "sdl2_image", "snappy", From patchwork Mon Jul 22 11:43:33 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Manos Pitsidianakis X-Patchwork-Id: 1963204 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@legolas.ozlabs.org Authentication-Results: legolas.ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=linaro.org header.i=@linaro.org header.a=rsa-sha256 header.s=google header.b=UrNCAABs; 
From: Manos Pitsidianakis
To: qemu-devel@nongnu.org
Cc: Stefan Hajnoczi, Mads Ynddal, Peter Maydell, Alex Bennée,
    Daniel P. Berrangé, Marc-André Lureau, Thomas Huth, Markus Armbruster,
    Philippe Mathieu-Daudé, Zhao Liu, Gustavo Romero, Pierrick Bouvier,
    rowan.hart@intel.com, Richard Henderson, Wainer dos Santos Moschetta,
    Beraldo Leal
Subject: [RFC PATCH v5 3/8] CI: Add build-system-rust-debian job
Date: Mon, 22 Jul 2024 14:43:33 +0300

Add a job that builds with Rust support enabled on Debian.

Signed-off-by: Manos Pitsidianakis
Reviewed-by: Richard Henderson
---
 .gitlab-ci.d/buildtest.yml | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/.gitlab-ci.d/buildtest.yml b/.gitlab-ci.d/buildtest.yml
index e3a0758bd9..e025e2cbf6 100644
--- a/.gitlab-ci.d/buildtest.yml
+++ b/.gitlab-ci.d/buildtest.yml
@@ -107,6 +107,17 @@ crash-test-debian:
     - make NINJA=":" check-venv
     - pyvenv/bin/python3 scripts/device-crash-test -q --tcg-only ./qemu-system-i386
 
+build-system-rust-debian:
+  extends:
+    - .native_build_job_template
+    - .native_build_artifact_template
+  needs:
+    job: amd64-debian-container
+  variables:
+    IMAGE: debian
+    CONFIGURE_ARGS: --enable-rust
+    TARGETS: aarch64-softmmu
+
 build-system-fedora:
   extends:
     - .native_build_job_template
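The job stops at the build step. Assuming the shared native-build template
drives tests through a MAKE_CHECK_ARGS variable, as the other build-system-*
jobs in this file appear to do, a purely hypothetical follow-up (not part of
this patch) could extend it like:

    build-system-rust-debian:
      ...
      variables:
        IMAGE: debian
        CONFIGURE_ARGS: --enable-rust
        TARGETS: aarch64-softmmu
        MAKE_CHECK_ARGS: check-build

Whether check-build (or a fuller check target) is the right choice once Rust
code actually lands is left open here.
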
From patchwork Mon Jul 22 11:43:34 2024
X-Patchwork-Submitter: Manos Pitsidianakis
X-Patchwork-Id: 1963205
From: Manos Pitsidianakis
To: qemu-devel@nongnu.org
Cc: Stefan Hajnoczi, Mads Ynddal, Peter Maydell, Alex Bennée,
    Daniel P. Berrangé, Marc-André Lureau, Thomas Huth, Markus Armbruster,
    Philippe Mathieu-Daudé, Zhao Liu, Gustavo Romero, Pierrick Bouvier,
    rowan.hart@intel.com, Richard Henderson, Paolo Bonzini, John Snow,
    Cleber Rosa
Subject: [RFC PATCH v5 4/8] rust: add bindgen step as a meson dependency
Date: Mon, 22 Jul 2024 14:43:34 +0300

Add a bindings_rs target for generating Rust bindings to target-independent
QEMU C APIs. The bindings need to be created before any Rust crate that uses
them is compiled.

The bindings.rs file will end up in BUILDDIR/bindings.rs and has the same
name as a target:

    ninja bindings.rs

Signed-off-by: Manos Pitsidianakis
---
 MAINTAINERS           |  4 +++
 meson.build           | 56 ++++++++++++++++++++++++++++
 rust/wrapper.h        | 39 +++++++++++++++++++
 rust/.gitignore       |  3 ++
 rust/meson.build      |  0
 scripts/rustc_args.py | 84 ++++++++++++++++++++++++++++++++++++++++
 6 files changed, 186 insertions(+)
 create mode 100644 rust/wrapper.h
 create mode 100644 rust/.gitignore
 create mode 100644 rust/meson.build
 create mode 100644 scripts/rustc_args.py

diff --git a/MAINTAINERS b/MAINTAINERS
index d427f13b79..5f0b586dd4 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -4231,7 +4231,11 @@ F: docs/devel/docs.rst
 Rust build system integration
 M: Manos Pitsidianakis
 S: Maintained
+F: scripts/rustc_args.py
+F: rust/.gitignore
 F: rust/Kconfig
+F: rust/meson.build
+F: rust/wrapper.h
 
 Miscellaneous
 -------------

diff --git a/meson.build b/meson.build
index a3f346ab3c..a34c5ebb4a 100644
--- a/meson.build
+++ b/meson.build
@@ -298,6 +298,20 @@ foreach lang : all_languages
   endif
 endforeach
 
+if have_rust
+  rust_args = []
+
+  if rustc.version().version_compare('<1.77.2')
+    error('rustc version ' + rustc.version() + ' is unsupported: Please upgrade to at least 1.77.2')
+  endif
+
+  if get_option('debug')
+    rust_args += ['-g']
+  endif
+  if get_option('optimization') not in ['0', '1', 'g']
+    rust_args += ['-O']
+  endif
+endif
 
 # default flags for all hosts
 # We use -fwrapv to tell the compiler that we require a C dialect where
 # left shift of signed integers is well defined and has the expected
@@ -3828,6 +3842,48 @@ common_all = static_library('common',
                             implicit_include_directories: false,
                             dependencies: common_ss.all_dependencies())
 
+if have_rust and have_system
+  rust_args += run_command(
+    meson.global_source_root() / 'scripts/rustc_args.py',
+    '--config-headers', meson.project_build_root() / 'config-host.h',
+    capture : true,
+    check: true).stdout().strip().split()
+
+  bindings_rs = import('rust').bindgen(
+    input: 'rust/wrapper.h',
+    dependencies: common_ss.all_dependencies(),
+    output: 'bindings.rs',
+    include_directories: include_directories('.', 'include'),
+    bindgen_version: ['>=0.69.4'],
+    args: [
+      '--raw-line', '#![allow(non_camel_case_types)]',
+      '--raw-line', '#![allow(non_snake_case)]',
+      '--raw-line', '#![allow(non_upper_case_globals)]',
+      '--raw-line', '#![allow(improper_ctypes_definitions)]',
+      '--raw-line', '#![allow(improper_ctypes)]',
+      '--raw-line', 'unsafe impl Send for Property {}',
+      '--raw-line', 'unsafe impl Sync for Property {}',
+      '--raw-line', 'unsafe impl Sync for TypeInfo {}',
+      '--raw-line', 'unsafe impl Sync for VMStateDescription {}',
+      '--ctypes-prefix', 'core::ffi',
+      '--formatter', 'rustfmt',
+      '--generate-block',
+      '--generate-cstr',
+      '--impl-debug',
+      '--merge-extern-blocks',
+      '--no-doc-comments',
+      '--no-include-path-detection',
+      '--use-core',
+      '--with-derive-default',
+      '--allowlist-file', meson.project_source_root() + '/include/.*',
+      '--allowlist-file', meson.project_source_root() + '/.*',
+      '--allowlist-file', meson.project_build_root() + '/.*'
+    ],
+  )
+  subdir('rust')
+endif
+
 
 feature_to_c = find_program('scripts/feature_to_c.py')
 
 if host_os == 'darwin'

diff --git a/rust/wrapper.h b/rust/wrapper.h
new file mode 100644
index 0000000000..51985f0ef1
--- /dev/null
+++ b/rust/wrapper.h
@@ -0,0 +1,39 @@
+/*
+ * QEMU System Emulator
+ *
+ * Copyright 2024 Manos Pitsidianakis
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to deal
+ * in the Software without restriction, including without limitation the rights
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ * copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ * THE SOFTWARE.
+ */
+
+#include "qemu/osdep.h"
+#include "qemu/module.h"
+#include "qemu-io.h"
+#include "sysemu/sysemu.h"
+#include "hw/sysbus.h"
+#include "exec/memory.h"
+#include "chardev/char-fe.h"
+#include "hw/clock.h"
+#include "hw/qdev-clock.h"
+#include "hw/qdev-properties.h"
+#include "hw/qdev-properties-system.h"
+#include "hw/irq.h"
+#include "qapi/error.h"
+#include "migration/vmstate.h"
+#include "chardev/char-serial.h"

diff --git a/rust/.gitignore b/rust/.gitignore
new file mode 100644
index 0000000000..1bf71b1f68
--- /dev/null
+++ b/rust/.gitignore
@@ -0,0 +1,3 @@
+# Ignore any cargo development build artifacts; for qemu-wide builds, all build
+# artifacts will go to the meson build directory.
+target

diff --git a/rust/meson.build b/rust/meson.build
new file mode 100644
index 0000000000..e69de29bb2

diff --git a/scripts/rustc_args.py b/scripts/rustc_args.py
new file mode 100644
index 0000000000..e4cc9720e1
--- /dev/null
+++ b/scripts/rustc_args.py
@@ -0,0 +1,84 @@
+#!/usr/bin/env python3
+
+"""Generate rustc arguments for meson rust builds.
+
+This program generates --cfg compile flags for the configuration headers passed
+as arguments.
+
+Copyright (c) 2024 Linaro Ltd.
+
+Authors:
+ Manos Pitsidianakis
+
+This program is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 2 of the License, or
+(at your option) any later version.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with this program.  If not, see <https://www.gnu.org/licenses/>.
+"""
+
+import argparse
+import logging
+
+from typing import List
+
+
+def generate_cfg_flags(header: str) -> List[str]:
+    """Converts defines from config[..].h headers to rustc --cfg flags."""
+
+    def cfg_name(name: str) -> str:
+        """Filter function for C #defines"""
+        if (
+            name.startswith("CONFIG_")
+            or name.startswith("TARGET_")
+            or name.startswith("HAVE_")
+        ):
+            return name
+        return ""
+
+    with open(header, encoding="utf-8") as cfg:
+        config = [l.split()[1:] for l in cfg if l.startswith("#define")]
+
+    cfg_list = []
+    for cfg in config:
+        name = cfg_name(cfg[0])
+        if not name:
+            continue
+        if len(cfg) >= 2 and cfg[1] != "1":
+            continue
+        cfg_list.append("--cfg")
+        cfg_list.append(name)
+    return cfg_list
+
+
+def main() -> None:
+    # pylint: disable=missing-function-docstring
+    parser = argparse.ArgumentParser()
+    parser.add_argument("-v", "--verbose", action="store_true")
+    parser.add_argument(
+        "--config-headers",
+        metavar="CONFIG_HEADER",
+        action="append",
+        dest="config_headers",
+        help="paths to any configuration C headers (*.h files), if any",
+        required=False,
+        default=[],
+    )
+    args = parser.parse_args()
+    if args.verbose:
+        logging.basicConfig(level=logging.DEBUG)
+    logging.debug("args: %s", args)
+    for header in args.config_headers:
+        for tok in generate_cfg_flags(header):
+            print(tok)
+
+
+if __name__ == "__main__":
+    main()
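Both new pieces can be exercised by hand from a configured build tree; the
paths below assume a build directory literally named build and are only
illustrative:

    # Build just the generated bindings; as the commit message notes, the
    # output file doubles as a ninja target name.
    ninja -C build bindings.rs

    # Print the --cfg tokens that meson splices into rust_args, one per line,
    # e.g. --cfg / CONFIG_HAVE_RUST and similar host CONFIG_* switches.
    python3 scripts/rustc_args.py --config-headers build/config-host.h

The script only forwards #define names starting with CONFIG_/TARGET_/HAVE_
whose value is 1 (or that carry no value at all); every other define in the
header is ignored.
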
[37.6.1.231]) by smtp.gmail.com with ESMTPSA id ffacd0b85a97d-368787eba8csm8323513f8f.91.2024.07.22.04.44.02 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 22 Jul 2024 04:44:04 -0700 (PDT) From: Manos Pitsidianakis To: qemu-devel@nongnu.org Cc: Stefan Hajnoczi , Mads Ynddal , Peter Maydell , =?utf-8?q?Alex_Benn=C3=A9e?= , =?utf-8?q?Daniel_P?= =?utf-8?q?=2E_Berrang=C3=A9?= , =?utf-8?q?Marc-Andr?= =?utf-8?q?=C3=A9_Lureau?= , Thomas Huth , Markus Armbruster , =?utf-8?q?Philippe_Mathieu-Daud=C3=A9?= , Zhao Liu , Gustavo Romero , Pierrick Bouvier , rowan.hart@intel.com, Richard Henderson Subject: [RFC PATCH v5 5/8] .gitattributes: add Rust diff and merge attributes Date: Mon, 22 Jul 2024 14:43:35 +0300 Message-ID: <990592c7c93a3b2b692dd773c4c9191a82146a80.1721648163.git.manos.pitsidianakis@linaro.org> X-Mailer: git-send-email 2.44.0 In-Reply-To: References: MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::42f; envelope-from=manos.pitsidianakis@linaro.org; helo=mail-wr1-x42f.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org Sender: qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org Set rust source code to diff=rust (built-in with new git versions) and merge=binary for Cargo.lock files (they should not be merged but auto-generated by cargo) Reviewed-by: Alex Bennée Signed-off-by: Manos Pitsidianakis Reviewed-by: Zhao Liu --- .gitattributes | 3 +++ 1 file changed, 3 insertions(+) diff --git a/.gitattributes b/.gitattributes index a217cb7bfe..6dc6383d3d 100644 --- a/.gitattributes +++ b/.gitattributes @@ -2,3 +2,6 @@ *.h.inc diff=c *.m diff=objc *.py diff=python +*.rs diff=rust +*.rs.inc diff=rust +Cargo.lock diff=toml merge=binary From patchwork Mon Jul 22 11:43:36 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Manos Pitsidianakis X-Patchwork-Id: 1963202 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@legolas.ozlabs.org Authentication-Results: legolas.ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=linaro.org header.i=@linaro.org header.a=rsa-sha256 header.s=google header.b=a4oAHV0c; dkim-atps=neutral Authentication-Results: legolas.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=nongnu.org (client-ip=209.51.188.17; helo=lists.gnu.org; envelope-from=qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org; receiver=patchwork.ozlabs.org) Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-ECDSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by legolas.ozlabs.org (Postfix) with ESMTPS id 4WSJNm5Dhxz20Dw for ; Mon, 22 Jul 2024 21:44:56 +1000 (AEST) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1sVrSk-0000ZO-R5; Mon, 22 Jul 2024 07:44:19 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 
[37.6.1.231]) by smtp.gmail.com with ESMTPSA id ffacd0b85a97d-368787eba8csm8323513f8f.91.2024.07.22.04.44.04 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 22 Jul 2024 04:44:06 -0700 (PDT) From: Manos Pitsidianakis To: qemu-devel@nongnu.org Cc: Stefan Hajnoczi , Mads Ynddal , Peter Maydell , =?utf-8?q?Alex_Benn=C3=A9e?= , =?utf-8?q?Daniel_P?= =?utf-8?q?=2E_Berrang=C3=A9?= , =?utf-8?q?Marc-Andr?= =?utf-8?q?=C3=A9_Lureau?= , Thomas Huth , Markus Armbruster , =?utf-8?q?Philippe_Mathieu-Daud=C3=A9?= , Zhao Liu , Gustavo Romero , Pierrick Bouvier , rowan.hart@intel.com, Richard Henderson Subject: [RFC PATCH v5 6/8] rust: add crate to expose bindings and interfaces Date: Mon, 22 Jul 2024 14:43:36 +0300 Message-ID: <80d1a9356de96979a67cd6eaf0d37be6a6a553bd.1721648163.git.manos.pitsidianakis@linaro.org> X-Mailer: git-send-email 2.44.0 In-Reply-To: References: MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::42f; envelope-from=manos.pitsidianakis@linaro.org; helo=mail-wr1-x42f.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org Sender: qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org Add rust/qemu-api, which exposes rust-bindgen generated FFI bindings and provides some declaration macros for symbols visible to the rest of QEMU. Signed-off-by: Manos Pitsidianakis --- MAINTAINERS | 6 ++ rust/meson.build | 13 +++ rust/qemu-api/.gitignore | 2 + rust/qemu-api/Cargo.lock | 7 ++ rust/qemu-api/Cargo.toml | 23 ++++++ rust/qemu-api/README.md | 17 ++++ rust/qemu-api/build.rs | 13 +++ rust/qemu-api/meson.build | 19 +++++ rust/qemu-api/rustfmt.toml | 1 + rust/qemu-api/src/bindings.rs | 7 ++ rust/qemu-api/src/definitions.rs | 107 +++++++++++++++++++++++++ rust/qemu-api/src/device_class.rs | 128 ++++++++++++++++++++++++++++++ rust/qemu-api/src/lib.rs | 100 +++++++++++++++++++++++ rust/qemu-api/src/tests.rs | 48 +++++++++++ rust/rustfmt.toml | 7 ++ 15 files changed, 498 insertions(+) create mode 100644 rust/qemu-api/.gitignore create mode 100644 rust/qemu-api/Cargo.lock create mode 100644 rust/qemu-api/Cargo.toml create mode 100644 rust/qemu-api/README.md create mode 100644 rust/qemu-api/build.rs create mode 100644 rust/qemu-api/meson.build create mode 120000 rust/qemu-api/rustfmt.toml create mode 100644 rust/qemu-api/src/bindings.rs create mode 100644 rust/qemu-api/src/definitions.rs create mode 100644 rust/qemu-api/src/device_class.rs create mode 100644 rust/qemu-api/src/lib.rs create mode 100644 rust/qemu-api/src/tests.rs create mode 100644 rust/rustfmt.toml diff --git a/MAINTAINERS b/MAINTAINERS index 5f0b586dd4..1789bcfd9b 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -3342,6 +3342,12 @@ F: hw/core/register.c F: include/hw/register.h F: include/hw/registerfields.h +Rust +M: Manos Pitsidianakis +S: Maintained +F: rust/qemu-api +F: rust/rustfmt.toml + SLIRP M: Samuel Thibault S: Maintained diff --git a/rust/meson.build b/rust/meson.build index e69de29bb2..a903c7c602 100644 --- a/rust/meson.build +++ b/rust/meson.build @@ -0,0 +1,13 @@ +add_languages('rust', required: true) + 
+_lib_bindings_rs = static_library( + '_bindings_rs', + bindings_rs, + gnu_symbol_visibility: 'hidden', + rust_abi: 'rust', + rust_args: rust_args + [ + '--edition', '2021', + ], +) + +subdir('qemu-api') diff --git a/rust/qemu-api/.gitignore b/rust/qemu-api/.gitignore new file mode 100644 index 0000000000..71eaff2035 --- /dev/null +++ b/rust/qemu-api/.gitignore @@ -0,0 +1,2 @@ +# Ignore generated bindings file overrides. +src/bindings.rs.inc diff --git a/rust/qemu-api/Cargo.lock b/rust/qemu-api/Cargo.lock new file mode 100644 index 0000000000..e9c51a243a --- /dev/null +++ b/rust/qemu-api/Cargo.lock @@ -0,0 +1,7 @@ +# This file is automatically @generated by Cargo. +# It is not intended for manual editing. +version = 3 + +[[package]] +name = "qemu_api" +version = "0.1.0" diff --git a/rust/qemu-api/Cargo.toml b/rust/qemu-api/Cargo.toml new file mode 100644 index 0000000000..51260cbe42 --- /dev/null +++ b/rust/qemu-api/Cargo.toml @@ -0,0 +1,23 @@ +[package] +name = "qemu_api" +version = "0.1.0" +edition = "2021" +authors = ["Manos Pitsidianakis "] +license = "GPL-2.0 OR GPL-3.0-or-later" +readme = "README.md" +homepage = "https://www.qemu.org" +description = "Rust bindings for QEMU" +repository = "https://gitlab.com/qemu-project/qemu/" +resolver = "2" +publish = false +keywords = [] +categories = [] + +[dependencies] + +[features] +default = ["allocator"] +allocator = [] + +# Do not include in any global workspace +[workspace] diff --git a/rust/qemu-api/README.md b/rust/qemu-api/README.md new file mode 100644 index 0000000000..7588fa29ef --- /dev/null +++ b/rust/qemu-api/README.md @@ -0,0 +1,17 @@ +# QEMU bindings and API wrappers + +This library exports helper Rust types, Rust macros and C FFI bindings for internal QEMU APIs. + +The C bindings can be generated with `bindgen`, using this build target: + +```console +$ ninja bindings.rs +``` + +## Generate Rust documentation + +To generate docs for this crate, including private items: + +```sh +cargo doc --no-deps --document-private-items +``` diff --git a/rust/qemu-api/build.rs b/rust/qemu-api/build.rs new file mode 100644 index 0000000000..2f57c2b3d4 --- /dev/null +++ b/rust/qemu-api/build.rs @@ -0,0 +1,13 @@ +// Copyright 2024 Manos Pitsidianakis +// SPDX-License-Identifier: GPL-2.0 OR GPL-3.0-or-later + +use std::path::Path; + +fn main() { + if !Path::new("src/bindings.rs.inc").exists() { + panic!( + "No generated C bindings found! Either build them manually with bindgen or with meson \ + (`ninja bindings.rs`) and copy them to src/bindings.rs.inc, or build through meson." 
+ ); + } +} diff --git a/rust/qemu-api/meson.build b/rust/qemu-api/meson.build new file mode 100644 index 0000000000..7992dc64ce --- /dev/null +++ b/rust/qemu-api/meson.build @@ -0,0 +1,19 @@ +add_languages('rust', required: true) + +_qemu_api_rs = static_library( + 'qemu_api', + [files('src/lib.rs')], + gnu_symbol_visibility: 'hidden', + rust_abi: 'rust', + rust_args: rust_args + [ + '--edition', '2021', + '--cfg', 'MESON_BINDINGS_RS', + ], + link_with: [ + _lib_bindings_rs, + ], +) + +qemu_api = declare_dependency( + link_with: _qemu_api_rs, +) diff --git a/rust/qemu-api/rustfmt.toml b/rust/qemu-api/rustfmt.toml new file mode 120000 index 0000000000..39f97b043b --- /dev/null +++ b/rust/qemu-api/rustfmt.toml @@ -0,0 +1 @@ +../rustfmt.toml \ No newline at end of file diff --git a/rust/qemu-api/src/bindings.rs b/rust/qemu-api/src/bindings.rs new file mode 100644 index 0000000000..fb595f1469 --- /dev/null +++ b/rust/qemu-api/src/bindings.rs @@ -0,0 +1,7 @@ +// Copyright 2024 Manos Pitsidianakis +// SPDX-License-Identifier: GPL-2.0 OR GPL-3.0-or-later +#[cfg(not(MESON_BINDINGS_RS))] +include!("bindings.rs.inc"); + +#[cfg(MESON_BINDINGS_RS)] +pub use ::_bindings_rs::*; diff --git a/rust/qemu-api/src/definitions.rs b/rust/qemu-api/src/definitions.rs new file mode 100644 index 0000000000..5b9e92fda8 --- /dev/null +++ b/rust/qemu-api/src/definitions.rs @@ -0,0 +1,107 @@ +// Copyright 2024 Manos Pitsidianakis +// SPDX-License-Identifier: GPL-2.0 OR GPL-3.0-or-later + +//! Definitions required by QEMU when registering a device. + +#[macro_export] +macro_rules! module_init { + ($func:expr, $type:expr) => { + #[used] + #[cfg_attr(target_os = "linux", link_section = ".ctors")] + #[cfg_attr(target_os = "macos", link_section = "__DATA,__mod_init_func")] + #[cfg_attr(target_os = "windows", link_section = ".CRT$XCU")] + pub static LOAD_MODULE: extern "C" fn() = { + assert!($type < $crate::bindings::module_init_type_MODULE_INIT_MAX); + + extern "C" fn __load() { + // ::std::panic::set_hook(::std::boxed::Box::new(|_| {})); + + unsafe { + $crate::bindings::register_module_init(Some($func), $type); + } + } + + __load + }; + }; + (qom: $func:ident => $body:block) => { + // NOTE: To have custom identifiers for the ctor func we need to either supply + // them directly as a macro argument or create them with a proc macro. + #[used] + #[cfg_attr(target_os = "linux", link_section = ".ctors")] + #[cfg_attr(target_os = "macos", link_section = "__DATA,__mod_init_func")] + #[cfg_attr(target_os = "windows", link_section = ".CRT$XCU")] + pub static LOAD_MODULE: extern "C" fn() = { + extern "C" fn __load() { + // ::std::panic::set_hook(::std::boxed::Box::new(|_| {})); + #[no_mangle] + unsafe extern "C" fn $func() { + $body + } + + unsafe { + $crate::bindings::register_module_init( + Some($func), + $crate::bindings::module_init_type_MODULE_INIT_QOM, + ); + } + } + + __load + }; + }; +} + +#[macro_export] +macro_rules! 
type_info { + ($(#[$outer:meta])* + $name:ident: $t:ty, + $(name: $tname:expr,)* + $(parent: $pname:expr,)* + $(instance_init: $ii_fn:expr,)* + $(instance_post_init: $ipi_fn:expr,)* + $(instance_finalize: $if_fn:expr,)* + $(abstract_: $a_val:expr,)* + $(class_init: $ci_fn:expr,)* + $(class_base_init: $cbi_fn:expr,)* + ) => { + #[used] + $(#[$outer])* + pub static $name: $crate::bindings::TypeInfo = $crate::bindings::TypeInfo { + $(name: { + #[used] + static TYPE_NAME: &::core::ffi::CStr = $tname; + $tname.as_ptr() + },)* + $(parent: { + #[used] + static PARENT_TYPE_NAME: &::core::ffi::CStr = $pname; + $pname.as_ptr() + },)* + instance_size: ::core::mem::size_of::<$t>(), + instance_align: ::core::mem::align_of::<$t>(), + $( + instance_init: $ii_fn, + )* + $( + instance_post_init: $ipi_fn, + )* + $( + instance_finalize: $if_fn, + )* + $( + abstract_: $a_val, + )* + class_size: 0, + $( + class_init: $ci_fn, + )* + $( + class_base_init: $cbi_fn, + )* + class_data: core::ptr::null_mut(), + interfaces: core::ptr::null_mut(), + ..unsafe { MaybeUninit::<$crate::bindings::TypeInfo>::zeroed().assume_init() } + }; + } +} diff --git a/rust/qemu-api/src/device_class.rs b/rust/qemu-api/src/device_class.rs new file mode 100644 index 0000000000..f8d4c04e03 --- /dev/null +++ b/rust/qemu-api/src/device_class.rs @@ -0,0 +1,128 @@ +// Copyright 2024 Manos Pitsidianakis +// SPDX-License-Identifier: GPL-2.0 OR GPL-3.0-or-later + +use std::sync::OnceLock; + +use crate::bindings::Property; + +#[macro_export] +macro_rules! device_class_init { + ($func:ident, props => $props:ident, realize_fn => $realize_fn:expr, reset_fn => $reset_fn:expr, vmsd => $vmsd:ident$(,)*) => { + #[no_mangle] + pub unsafe extern "C" fn $func( + klass: *mut $crate::bindings::ObjectClass, + _: *mut ::core::ffi::c_void, + ) { + let mut dc = + ::core::ptr::NonNull::new(klass.cast::<$crate::bindings::DeviceClass>()).unwrap(); + dc.as_mut().realize = $realize_fn; + dc.as_mut().reset = $reset_fn; + dc.as_mut().vmsd = &$vmsd; + $crate::bindings::device_class_set_props(dc.as_mut(), $props.as_mut_ptr()); + } + }; +} + +#[macro_export] +macro_rules! define_property { + ($name:expr, $state:ty, $field:expr, $prop:expr, $type:expr, default = $defval:expr$(,)*) => { + $crate::bindings::Property { + name: { + #[used] + static _TEMP: &::core::ffi::CStr = $name; + _TEMP.as_ptr() + }, + info: $prop, + offset: ::core::mem::offset_of!($state, $field) + .try_into() + .expect("Could not fit offset value to type"), + bitnr: 0, + bitmask: 0, + set_default: true, + defval: $crate::bindings::Property__bindgen_ty_1 { u: $defval.into() }, + arrayoffset: 0, + arrayinfo: ::core::ptr::null(), + arrayfieldsize: 0, + link_type: ::core::ptr::null(), + } + }; + ($name:expr, $state:ty, $field:expr, $prop:expr, $type:expr$(,)*) => { + $crate::bindings::Property { + name: { + #[used] + static _TEMP: &::core::ffi::CStr = $name; + _TEMP.as_ptr() + }, + info: $prop, + offset: ::core::mem::offset_of!($state, $field) + .try_into() + .expect("Could not fit offset value to type"), + bitnr: 0, + bitmask: 0, + set_default: false, + defval: $crate::bindings::Property__bindgen_ty_1 { i: 0 }, + arrayoffset: 0, + arrayinfo: ::core::ptr::null(), + arrayfieldsize: 0, + link_type: ::core::ptr::null(), + } + }; +} + +#[repr(C)] +pub struct Properties(pub OnceLock<[Property; N]>, pub fn() -> [Property; N]); + +impl Properties { + pub fn as_mut_ptr(&mut self) -> *mut Property { + _ = self.0.get_or_init(self.1); + self.0.get_mut().unwrap().as_mut_ptr() + } +} + +#[macro_export] +macro_rules! 
declare_properties { + ($ident:ident, $($prop:expr),*$(,)*) => { + + const fn _calc_prop_len() -> usize { + let mut len = 1; + $({ + _ = stringify!($prop); + len += 1; + })* + len + } + const PROP_LEN: usize = _calc_prop_len(); + + #[no_mangle] + fn _make_properties() -> [$crate::bindings::Property; PROP_LEN] { + [ + $($prop),*, + unsafe { ::core::mem::MaybeUninit::<$crate::bindings::Property>::zeroed().assume_init() }, + ] + } + + #[no_mangle] + pub static mut $ident: $crate::device_class::Properties = $crate::device_class::Properties(::std::sync::OnceLock::new(), _make_properties); + }; +} + +#[macro_export] +macro_rules! vm_state_description { + ($(#[$outer:meta])* + $name:ident, + $(name: $vname:expr,)* + $(unmigratable: $um_val:expr,)* + ) => { + #[used] + $(#[$outer])* + pub static $name: $crate::bindings::VMStateDescription = $crate::bindings::VMStateDescription { + $(name: { + #[used] + static VMSTATE_NAME: &::core::ffi::CStr = $vname; + $vname.as_ptr() + },)* + unmigratable: true, + ..unsafe { ::core::mem::MaybeUninit::<$crate::bindings::VMStateDescription>::zeroed().assume_init() } + }; + } +} diff --git a/rust/qemu-api/src/lib.rs b/rust/qemu-api/src/lib.rs new file mode 100644 index 0000000000..6f02ac45e2 --- /dev/null +++ b/rust/qemu-api/src/lib.rs @@ -0,0 +1,100 @@ +// Copyright 2024 Manos Pitsidianakis +// SPDX-License-Identifier: GPL-2.0 OR GPL-3.0-or-later + +#![doc = include_str!("../README.md")] + +#[cfg(MESON_BINDINGS_RS)] +extern crate _bindings_rs; + +#[cfg_attr(not(MESON_BINDINGS_RS), allow( + improper_ctypes_definitions, + improper_ctypes, + non_camel_case_types, + non_snake_case, + non_upper_case_globals +))] +#[cfg_attr(not(MESON_BINDINGS_RS), allow( + clippy::missing_const_for_fn, + clippy::too_many_arguments, + clippy::approx_constant, + clippy::use_self, + clippy::useless_transmute, + clippy::missing_safety_doc, +))] +#[cfg_attr(not(MESON_BINDINGS_RS), rustfmt::skip)] +pub mod bindings; + +pub mod definitions; +pub mod device_class; + +#[cfg(test)] +mod tests; + +use std::alloc::{GlobalAlloc, Layout}; + +extern "C" { + pub fn g_aligned_alloc0( + n_blocks: bindings::gsize, + n_block_bytes: bindings::gsize, + alignment: bindings::gsize, + ) -> bindings::gpointer; + pub fn g_aligned_free(mem: bindings::gpointer); + pub fn g_malloc0(n_bytes: bindings::gsize) -> bindings::gpointer; + pub fn g_free(mem: bindings::gpointer); +} + +/// An allocator that uses the same allocator as QEMU in C. +/// +/// It is enabled by default with the `allocator` feature. 
+/// +/// To set it up manually as a global allocator in your crate: +/// +/// ```ignore +/// use qemu_api::QemuAllocator; +/// +/// #[global_allocator] +/// static GLOBAL: QemuAllocator = QemuAllocator::new(); +/// ``` +#[derive(Clone, Copy, Debug)] +#[repr(C)] +pub struct QemuAllocator { + _unused: [u8; 0], +} + +#[cfg_attr(feature = "allocator", global_allocator)] +pub static GLOBAL: QemuAllocator = QemuAllocator::new(); + +impl QemuAllocator { + pub const fn new() -> Self { + Self { _unused: [] } + } +} + +impl Default for QemuAllocator { + fn default() -> Self { + Self::new() + } +} + +unsafe impl GlobalAlloc for QemuAllocator { + unsafe fn alloc(&self, layout: Layout) -> *mut u8 { + if layout.align() == 0 { + g_malloc0(layout.size().try_into().unwrap()).cast::() + } else { + g_aligned_alloc0( + layout.size().try_into().unwrap(), + 1, + layout.align().try_into().unwrap(), + ) + .cast::() + } + } + + unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) { + if layout.align() == 0 { + g_free(ptr.cast::<_>()) + } else { + g_aligned_free(ptr.cast::<_>()) + } + } +} diff --git a/rust/qemu-api/src/tests.rs b/rust/qemu-api/src/tests.rs new file mode 100644 index 0000000000..88c26308ee --- /dev/null +++ b/rust/qemu-api/src/tests.rs @@ -0,0 +1,48 @@ +// Copyright 2024 Manos Pitsidianakis +// SPDX-License-Identifier: GPL-2.0 OR GPL-3.0-or-later + +use crate::{ + bindings::*, declare_properties, define_property, device_class_init, vm_state_description, +}; + +#[test] +fn test_device_decl_macros() { + // Test that macros can compile. + vm_state_description! { + VMSTATE, + name: c"name", + unmigratable: true, + } + + #[repr(C)] + pub struct DummyState { + pub char_backend: CharBackend, + pub migrate_clock: bool, + } + + declare_properties! { + DUMMY_PROPERTIES, + define_property!( + c"chardev", + DummyState, + char_backend, + unsafe { &qdev_prop_chr }, + CharBackend + ), + define_property!( + c"migrate-clk", + DummyState, + migrate_clock, + unsafe { &qdev_prop_bool }, + bool + ), + } + + device_class_init! 
{ + dummy_class_init, + props => DUMMY_PROPERTIES, + realize_fn => None, + reset_fn => None, + vmsd => VMSTATE, + } +} diff --git a/rust/rustfmt.toml b/rust/rustfmt.toml new file mode 100644 index 0000000000..ebecb99fe0 --- /dev/null +++ b/rust/rustfmt.toml @@ -0,0 +1,7 @@ +edition = "2021" +format_generated_files = false +format_code_in_doc_comments = true +format_strings = true +imports_granularity = "Crate" +group_imports = "StdExternalCrate" +wrap_comments = true From patchwork Mon Jul 22 11:43:37 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Manos Pitsidianakis X-Patchwork-Id: 1963208 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@legolas.ozlabs.org Authentication-Results: legolas.ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=linaro.org header.i=@linaro.org header.a=rsa-sha256 header.s=google header.b=Y6FeaJGM; dkim-atps=neutral Authentication-Results: legolas.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=nongnu.org (client-ip=209.51.188.17; helo=lists.gnu.org; envelope-from=qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org; receiver=patchwork.ozlabs.org) Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-ECDSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by legolas.ozlabs.org (Postfix) with ESMTPS id 4WSJQ024xbz1yZ7 for ; Mon, 22 Jul 2024 21:46:00 +1000 (AEST) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1sVrSr-0000zg-4w; Mon, 22 Jul 2024 07:44:25 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1sVrSj-0000Ye-4u for qemu-devel@nongnu.org; Mon, 22 Jul 2024 07:44:18 -0400 Received: from mail-wr1-x435.google.com ([2a00:1450:4864:20::435]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.90_1) (envelope-from ) id 1sVrSe-0002QH-D4 for qemu-devel@nongnu.org; Mon, 22 Jul 2024 07:44:16 -0400 Received: by mail-wr1-x435.google.com with SMTP id ffacd0b85a97d-3686b554cfcso1929409f8f.1 for ; Mon, 22 Jul 2024 04:44:12 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; t=1721648650; x=1722253450; darn=nongnu.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=OVLGD6KIEzAVggABsFYZeBPl3B+hTeGtDK8Ro+VkMEw=; b=Y6FeaJGMVo0Pk2s6+ZwZjpn6fn+wwkvw0iyMEvEvx1iBHFbILy18ffMK++WMEPOb5A Y0nmZvhHbvO7fU05n1n4TpvhO8snGlLV8APS98NEmjCSgkLzQSOwA81I+QAuERz7QAVt vaajnVRl1bcTu2zlCYBAG0dOZrk6QKdplcdc/yu54LBfSKtrOhp7FsJqrb3ucW+bbgOG VjcPD/C3mBclEFypsuQQr/bCh1IwowgZF00Nt823RnvsePWhZt3QljIHniweHlnQEIUO w7QdJEJ2OJkfqOWGwb2UdynceaD7VvVPSeaxXlvF1rRex91UNLPX5uEnNiImaFuH0Gsd JjWw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1721648650; x=1722253450; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=OVLGD6KIEzAVggABsFYZeBPl3B+hTeGtDK8Ro+VkMEw=; b=bsScChN/2NR3PF2tgA9g1N7S7NglnPSBq9FhRDLNyzqihnsKLdS4bOik4LY82zC2MV LQ3Dm6qlgVjAVK+6V8nivVWd0kCcv5zFQJpCeV42olX7v6sqgmYun+At74UoFORNGvlC hL6kCFU76doeInTOcXwzBVcD4GDV9Z91DBhQkyT3yA4R2N2NIkoVL3w5rn1k95c97nXt 
ZCZXR/76Mq/rgWL1sfLoTlaCiQozuCiAIyZRB51SSRdmUFHW837sVOI2W0HAjzhnWqBR vD4CyhyRKtsnoqNHz8Up643uek/U4W5cck8dJknu5jbdpUR8Hl2bNKS3pAAVrtlVm49g hL4g== X-Gm-Message-State: AOJu0YzlAxYLev3OvmLG29AKHt/n4AEBIuCejycHMYZCkcMzWZhWjEK5 I1KKeQw91OgDLhpFb5k273WMtPTKAlEc/BTWSzlC31ShFoc5B8SxeWs2XmMVQ8E6rHglo8ezxv/ 4rh0= X-Google-Smtp-Source: AGHT+IFOjG96H258P5BGAEOpEnwrH+tHdP4vI7NIc3XHb9cX2gkQUmoT4hURn2SjXhYpaQ/f72dSxw== X-Received: by 2002:a5d:42c3:0:b0:368:48e6:5056 with SMTP id ffacd0b85a97d-369bbbbf9dfmr3658676f8f.22.1721648649723; Mon, 22 Jul 2024 04:44:09 -0700 (PDT) Received: from localhost.localdomain (adsl-231.37.6.1.tellas.gr. [37.6.1.231]) by smtp.gmail.com with ESMTPSA id ffacd0b85a97d-368787eba8csm8323513f8f.91.2024.07.22.04.44.07 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 22 Jul 2024 04:44:09 -0700 (PDT) From: Manos Pitsidianakis To: qemu-devel@nongnu.org Cc: Stefan Hajnoczi , Mads Ynddal , Peter Maydell , =?utf-8?q?Alex_Benn=C3=A9e?= , =?utf-8?q?Daniel_P?= =?utf-8?q?=2E_Berrang=C3=A9?= , =?utf-8?q?Marc-Andr?= =?utf-8?q?=C3=A9_Lureau?= , Thomas Huth , Markus Armbruster , =?utf-8?q?Philippe_Mathieu-Daud=C3=A9?= , Zhao Liu , Gustavo Romero , Pierrick Bouvier , rowan.hart@intel.com, Richard Henderson , Paolo Bonzini , qemu-arm@nongnu.org Subject: [RFC PATCH v5 7/8] rust: add PL011 device model Date: Mon, 22 Jul 2024 14:43:37 +0300 Message-ID: X-Mailer: git-send-email 2.44.0 In-Reply-To: References: MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::435; envelope-from=manos.pitsidianakis@linaro.org; helo=mail-wr1-x435.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org Sender: qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org This commit adds a re-implementation of hw/char/pl011.c in Rust. How to build: 1. Configure a QEMU build with: --enable-system --target-list=aarch64-softmmu --enable-rust 2. 
Launching a VM with qemu-system-aarch64 should use the Rust version of the pl011 device Signed-off-by: Manos Pitsidianakis --- MAINTAINERS | 5 + hw/arm/Kconfig | 33 +- rust/Kconfig | 1 + rust/hw/Kconfig | 2 + rust/hw/char/Kconfig | 3 + rust/hw/char/meson.build | 1 + rust/hw/char/pl011/.gitignore | 2 + rust/hw/char/pl011/Cargo.lock | 123 ++++++ rust/hw/char/pl011/Cargo.toml | 26 ++ rust/hw/char/pl011/README.md | 31 ++ rust/hw/char/pl011/meson.build | 24 + rust/hw/char/pl011/rustfmt.toml | 1 + rust/hw/char/pl011/src/definitions.rs | 48 ++ rust/hw/char/pl011/src/device.rs | 541 +++++++++++++++++++++++ rust/hw/char/pl011/src/device_class.rs | 58 +++ rust/hw/char/pl011/src/lib.rs | 584 +++++++++++++++++++++++++ rust/hw/char/pl011/src/memory_ops.rs | 45 ++ rust/hw/meson.build | 1 + rust/meson.build | 2 + 19 files changed, 1520 insertions(+), 11 deletions(-) create mode 100644 rust/hw/Kconfig create mode 100644 rust/hw/char/Kconfig create mode 100644 rust/hw/char/meson.build create mode 100644 rust/hw/char/pl011/.gitignore create mode 100644 rust/hw/char/pl011/Cargo.lock create mode 100644 rust/hw/char/pl011/Cargo.toml create mode 100644 rust/hw/char/pl011/README.md create mode 100644 rust/hw/char/pl011/meson.build create mode 120000 rust/hw/char/pl011/rustfmt.toml create mode 100644 rust/hw/char/pl011/src/definitions.rs create mode 100644 rust/hw/char/pl011/src/device.rs create mode 100644 rust/hw/char/pl011/src/device_class.rs create mode 100644 rust/hw/char/pl011/src/lib.rs create mode 100644 rust/hw/char/pl011/src/memory_ops.rs create mode 100644 rust/hw/meson.build diff --git a/MAINTAINERS b/MAINTAINERS index 1789bcfd9b..0af4ef58ed 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -1187,6 +1187,11 @@ F: include/hw/*/microbit*.h F: tests/qtest/microbit-test.c F: docs/system/arm/nrf.rst +ARM PL011 Rust device +M: Manos Pitsidianakis +S: Maintained +F: rust/hw/char/pl011/ + AVR Machines ------------- diff --git a/hw/arm/Kconfig b/hw/arm/Kconfig index 1ad60da7aa..45438c1bc4 100644 --- a/hw/arm/Kconfig +++ b/hw/arm/Kconfig @@ -20,7 +20,8 @@ config ARM_VIRT select PCI_EXPRESS select PCI_EXPRESS_GENERIC_BRIDGE select PFLASH_CFI01 - select PL011 # UART + select PL011 if !HAVE_RUST # UART + select X_PL011_RUST if HAVE_RUST # UART select PL031 # RTC select PL061 # GPIO select GPIO_PWR @@ -80,7 +81,8 @@ config HIGHBANK select AHCI select ARM_TIMER # sp804 select ARM_V7M - select PL011 # UART + select PL011 if !HAVE_RUST # UART + select X_PL011_RUST if HAVE_RUST # UART select PL022 # SPI select PL031 # RTC select PL061 # GPIO @@ -93,7 +95,8 @@ config INTEGRATOR depends on TCG && ARM select ARM_TIMER select INTEGRATOR_DEBUG - select PL011 # UART + select PL011 if !HAVE_RUST # UART + select X_PL011_RUST if HAVE_RUST # UART select PL031 # RTC select PL041 # audio select PL050 # keyboard/mouse @@ -119,7 +122,8 @@ config MUSCA default y depends on TCG && ARM select ARMSSE - select PL011 + select PL011 if !HAVE_RUST # UART + select X_PL011_RUST if HAVE_RUST # UART select PL031 select SPLIT_IRQ select UNIMP @@ -228,7 +232,8 @@ config Z2 depends on TCG && ARM select PFLASH_CFI01 select WM8750 - select PL011 # UART + select PL011 if !HAVE_RUST # UART + select X_PL011_RUST if HAVE_RUST # UART select PXA2XX config REALVIEW @@ -248,7 +253,8 @@ config REALVIEW select WM8750 # audio codec select LSI_SCSI_PCI select PCI - select PL011 # UART + select PL011 if !HAVE_RUST # UART + select X_PL011_RUST if HAVE_RUST # UART select PL031 # RTC select PL041 # audio codec select PL050 # keyboard/mouse @@ -273,7 +279,8 @@ config 
SBSA_REF select PCI_EXPRESS select PCI_EXPRESS_GENERIC_BRIDGE select PFLASH_CFI01 - select PL011 # UART + select PL011 if !HAVE_RUST # UART + select X_PL011_RUST if HAVE_RUST # UART select PL031 # RTC select PL061 # GPIO select USB_XHCI_SYSBUS @@ -297,7 +304,8 @@ config STELLARIS select ARM_V7M select CMSDK_APB_WATCHDOG select I2C - select PL011 # UART + select PL011 if !HAVE_RUST # UART + select X_PL011_RUST if HAVE_RUST # UART select PL022 # SPI select PL061 # GPIO select SSD0303 # OLED display @@ -356,7 +364,8 @@ config VEXPRESS select ARM_TIMER # sp804 select LAN9118 select PFLASH_CFI01 - select PL011 # UART + select PL011 if !HAVE_RUST # UART + select X_PL011_RUST if HAVE_RUST # UART select PL041 # audio codec select PL181 # display select REALVIEW @@ -440,7 +449,8 @@ config RASPI default y depends on TCG && ARM select FRAMEBUFFER - select PL011 # UART + select PL011 if !HAVE_RUST # UART + select X_PL011_RUST if HAVE_RUST # UART select SDHCI select USB_DWC2 select BCM2835_SPI @@ -515,7 +525,8 @@ config XLNX_VERSAL select ARM_GIC select CPU_CLUSTER select DEVICE_TREE - select PL011 + select PL011 if !HAVE_RUST # UART + select X_PL011_RUST if HAVE_RUST # UART select CADENCE select VIRTIO_MMIO select UNIMP diff --git a/rust/Kconfig b/rust/Kconfig index e69de29bb2..f9f5c39098 100644 --- a/rust/Kconfig +++ b/rust/Kconfig @@ -0,0 +1 @@ +source hw/Kconfig diff --git a/rust/hw/Kconfig b/rust/hw/Kconfig new file mode 100644 index 0000000000..4d934f30af --- /dev/null +++ b/rust/hw/Kconfig @@ -0,0 +1,2 @@ +# devices Kconfig +source char/Kconfig diff --git a/rust/hw/char/Kconfig b/rust/hw/char/Kconfig new file mode 100644 index 0000000000..a1732a9e97 --- /dev/null +++ b/rust/hw/char/Kconfig @@ -0,0 +1,3 @@ +config X_PL011_RUST + bool + default y if HAVE_RUST diff --git a/rust/hw/char/meson.build b/rust/hw/char/meson.build new file mode 100644 index 0000000000..5716dc43ef --- /dev/null +++ b/rust/hw/char/meson.build @@ -0,0 +1 @@ +subdir('pl011') diff --git a/rust/hw/char/pl011/.gitignore b/rust/hw/char/pl011/.gitignore new file mode 100644 index 0000000000..71eaff2035 --- /dev/null +++ b/rust/hw/char/pl011/.gitignore @@ -0,0 +1,2 @@ +# Ignore generated bindings file overrides. +src/bindings.rs.inc diff --git a/rust/hw/char/pl011/Cargo.lock b/rust/hw/char/pl011/Cargo.lock new file mode 100644 index 0000000000..1af92fb16b --- /dev/null +++ b/rust/hw/char/pl011/Cargo.lock @@ -0,0 +1,123 @@ +# This file is automatically @generated by Cargo. +# It is not intended for manual editing. 
+version = 3 + +[[package]] +name = "arbitrary-int" +version = "1.2.7" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "c84fc003e338a6f69fbd4f7fe9f92b535ff13e9af8997f3b14b6ddff8b1df46d" + +[[package]] +name = "bilge" +version = "0.2.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "dc707ed8ebf81de5cd6c7f48f54b4c8621760926cdf35a57000747c512e67b57" +dependencies = [ + "arbitrary-int", + "bilge-impl", +] + +[[package]] +name = "bilge-impl" +version = "0.2.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "feb11e002038ad243af39c2068c8a72bcf147acf05025dcdb916fcc000adb2d8" +dependencies = [ + "itertools", + "proc-macro-error", + "proc-macro2", + "quote", + "syn", +] + +[[package]] +name = "either" +version = "1.12.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "3dca9240753cf90908d7e4aac30f630662b02aebaa1b58a3cadabdb23385b58b" + +[[package]] +name = "itertools" +version = "0.11.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "b1c173a5686ce8bfa551b3563d0c2170bf24ca44da99c7ca4bfdab5418c3fe57" +dependencies = [ + "either", +] + +[[package]] +name = "pl011" +version = "0.1.0" +dependencies = [ + "bilge", + "qemu_api", +] + +[[package]] +name = "proc-macro-error" +version = "1.0.4" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "da25490ff9892aab3fcf7c36f08cfb902dd3e71ca0f9f9517bea02a73a5ce38c" +dependencies = [ + "proc-macro-error-attr", + "proc-macro2", + "quote", + "version_check", +] + +[[package]] +name = "proc-macro-error-attr" +version = "1.0.4" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "a1be40180e52ecc98ad80b184934baf3d0d29f979574e439af5a55274b35f869" +dependencies = [ + "proc-macro2", + "quote", + "version_check", +] + +[[package]] +name = "proc-macro2" +version = "1.0.84" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "ec96c6a92621310b51366f1e28d05ef11489516e93be030060e5fc12024a49d6" +dependencies = [ + "unicode-ident", +] + +[[package]] +name = "qemu_api" +version = "0.1.0" + +[[package]] +name = "quote" +version = "1.0.36" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "0fa76aaf39101c457836aec0ce2316dbdc3ab723cdda1c6bd4e6ad4208acaca7" +dependencies = [ + "proc-macro2", +] + +[[package]] +name = "syn" +version = "2.0.66" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "c42f3f41a2de00b01c0aaad383c5a45241efc8b2d1eda5661812fda5f3cdcff5" +dependencies = [ + "proc-macro2", + "quote", + "unicode-ident", +] + +[[package]] +name = "unicode-ident" +version = "1.0.12" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "3354b9ac3fae1ff6755cb6db53683adb661634f67557942dea4facebec0fee4b" + +[[package]] +name = "version_check" +version = "0.9.4" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "49874b5167b65d7193b8aba1567f5c7d93d001cafc34600cee003eda787e483f" diff --git a/rust/hw/char/pl011/Cargo.toml b/rust/hw/char/pl011/Cargo.toml new file mode 100644 index 0000000000..67a6973da6 --- /dev/null +++ b/rust/hw/char/pl011/Cargo.toml @@ -0,0 +1,26 @@ +[package] +name = "pl011" +version = "0.1.0" +edition = "2021" +authors = ["Manos Pitsidianakis "] +license = "GPL-2.0 OR GPL-3.0-or-later" +readme = "README.md" +homepage = "https://www.qemu.org" +description = "pl011 device model for QEMU" +repository = 
"https://gitlab.com/epilys/rust-for-qemu" +resolver = "2" +publish = false +keywords = [] +categories = [] + +[lib] +crate-type = ["staticlib"] + +[dependencies] +arbitrary-int = { version = "1.2.7" } +bilge = { version = "0.2.0" } +bilge-impl = { version = "0.2.0" } +qemu_api = { path = "../../../qemu-api" } + +# Do not include in any global workspace +[workspace] diff --git a/rust/hw/char/pl011/README.md b/rust/hw/char/pl011/README.md new file mode 100644 index 0000000000..cd7dea3163 --- /dev/null +++ b/rust/hw/char/pl011/README.md @@ -0,0 +1,31 @@ +# PL011 QEMU Device Model + +This library implements a device model for the PrimeCell® UART (PL011) +device in QEMU. + +## Build static lib + +Host build target must be explicitly specified: + +```sh +cargo build --target x86_64-unknown-linux-gnu +``` + +Replace host target triplet if necessary. + +## Generate Rust documentation + +To generate docs for this crate, including private items: + +```sh +cargo doc --no-deps --document-private-items --target x86_64-unknown-linux-gnu +``` + +To include direct dependencies like `bilge` (bitmaps for register types): + +```sh +cargo tree --depth 1 -e normal --prefix none \ + | cut -d' ' -f1 \ + | xargs printf -- '-p %s\n' \ + | xargs cargo doc --no-deps --document-private-items --target x86_64-unknown-linux-gnu +``` diff --git a/rust/hw/char/pl011/meson.build b/rust/hw/char/pl011/meson.build new file mode 100644 index 0000000000..427788da6e --- /dev/null +++ b/rust/hw/char/pl011/meson.build @@ -0,0 +1,24 @@ +if not config_host_data.get('CONFIG_HAVE_RUST') + subdir_done() +endif + +add_languages('rust', required: true) +subdir('vendor') + +_libpl011_rs = static_library( + 'pl011', + files('src/lib.rs'), + rust_abi: 'c', + rust_args: rust_args + [ + '--edition', '2021', + ], + dependencies: [ + dep_bilge, + dep_bilge_impl, + qemu_api, + ], +) + +specific_ss.add(when: 'CONFIG_X_PL011_RUST', if_true: [declare_dependency( + link_whole: [_libpl011_rs], +)]) diff --git a/rust/hw/char/pl011/rustfmt.toml b/rust/hw/char/pl011/rustfmt.toml new file mode 120000 index 0000000000..39f97b043b --- /dev/null +++ b/rust/hw/char/pl011/rustfmt.toml @@ -0,0 +1 @@ +../rustfmt.toml \ No newline at end of file diff --git a/rust/hw/char/pl011/src/definitions.rs b/rust/hw/char/pl011/src/definitions.rs new file mode 100644 index 0000000000..d7958baa83 --- /dev/null +++ b/rust/hw/char/pl011/src/definitions.rs @@ -0,0 +1,48 @@ +// Copyright 2024 Manos Pitsidianakis +// SPDX-License-Identifier: GPL-2.0 OR GPL-3.0-or-later + +//! Definitions required by QEMU when registering the device. + +use core::{mem::MaybeUninit, ptr::NonNull}; + +use qemu_api::bindings::*; + +use crate::{device::PL011State, device_class::pl011_class_init}; + +#[used] +#[no_mangle] +pub static TYPE_PL011: &std::ffi::CStr = c"pl011"; + +qemu_api::type_info! { + PL011_ARM_INFO: PL011State, + name: c"pl011", + parent: TYPE_SYS_BUS_DEVICE, + instance_init: Some(pl011_init), + abstract_: false, + class_init: Some(pl011_class_init), +} + +#[used] +pub static VMSTATE_PL011: VMStateDescription = VMStateDescription { + name: PL011_ARM_INFO.name, + unmigratable: true, + ..unsafe { MaybeUninit::::zeroed().assume_init() } +}; + +/// # Safety +/// +/// We expect the FFI user of this function to pass a valid pointer, that has +/// the same size as [`PL011State`]. We also expect the device is +/// readable/writeable from one thread at any time. 
+#[no_mangle] +pub unsafe extern "C" fn pl011_init(obj: *mut Object) { + assert!(!obj.is_null()); + let mut state = NonNull::new_unchecked(obj.cast::()); + state.as_mut().init(); +} + +qemu_api::module_init! { + qom: register_type => { + type_register_static(&PL011_ARM_INFO); + } +} diff --git a/rust/hw/char/pl011/src/device.rs b/rust/hw/char/pl011/src/device.rs new file mode 100644 index 0000000000..3643b7bdee --- /dev/null +++ b/rust/hw/char/pl011/src/device.rs @@ -0,0 +1,541 @@ +// Copyright 2024 Manos Pitsidianakis +// SPDX-License-Identifier: GPL-2.0 OR GPL-3.0-or-later + +use core::{ + ffi::{c_int, c_uchar, c_uint, c_void, CStr}, + ptr::{addr_of, addr_of_mut, NonNull}, +}; + +use qemu_api::bindings::{self, *}; + +use crate::{ + definitions::PL011_ARM_INFO, + memory_ops::PL011_OPS, + registers::{self, Interrupt}, + RegisterOffset, +}; + +static PL011_ID_ARM: [c_uchar; 8] = [0x11, 0x10, 0x14, 0x00, 0x0d, 0xf0, 0x05, 0xb1]; + +const DATA_BREAK: u32 = 1 << 10; + +/// QEMU sourced constant. +pub const PL011_FIFO_DEPTH: usize = 16_usize; + +#[repr(C)] +#[derive(Debug)] +/// PL011 Device Model in QEMU +pub struct PL011State { + pub parent_obj: SysBusDevice, + pub iomem: MemoryRegion, + pub readbuff: u32, + #[doc(alias = "fr")] + pub flags: registers::Flags, + #[doc(alias = "lcr")] + pub line_control: registers::LineControl, + #[doc(alias = "rsr")] + pub receive_status_error_clear: registers::ReceiveStatusErrorClear, + #[doc(alias = "cr")] + pub control: registers::Control, + pub dmacr: u32, + pub int_enabled: u32, + pub int_level: u32, + pub read_fifo: [u32; PL011_FIFO_DEPTH], + pub ilpr: u32, + pub ibrd: u32, + pub fbrd: u32, + pub ifl: u32, + pub read_pos: usize, + pub read_count: usize, + pub read_trigger: usize, + #[doc(alias = "chr")] + pub char_backend: CharBackend, + /// QEMU interrupts + /// + /// ```text + /// * sysbus MMIO region 0: device registers + /// * sysbus IRQ 0: `UARTINTR` (combined interrupt line) + /// * sysbus IRQ 1: `UARTRXINTR` (receive FIFO interrupt line) + /// * sysbus IRQ 2: `UARTTXINTR` (transmit FIFO interrupt line) + /// * sysbus IRQ 3: `UARTRTINTR` (receive timeout interrupt line) + /// * sysbus IRQ 4: `UARTMSINTR` (momem status interrupt line) + /// * sysbus IRQ 5: `UARTEINTR` (error interrupt line) + /// ``` + #[doc(alias = "irq")] + pub interrupts: [qemu_irq; 6usize], + #[doc(alias = "clk")] + pub clock: NonNull, + #[doc(alias = "migrate_clk")] + pub migrate_clock: bool, +} + +#[used] +pub static CLK_NAME: &CStr = c"clk"; + +impl PL011State { + pub fn init(&mut self) { + // SAFETY: + // + // self and self.iomem are guaranteed to be valid at this point since callers + // must make sure the `self` reference is valid. 
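+        // The unsafe block below performs the same setup as the C model's
+        // init function: it creates the MMIO region backing the device
+        // registers, registers that region and the six interrupt lines with
+        // sysbus, and requests the "clk" input clock (its update callback is
+        // not wired up yet, hence the `None`).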
+ unsafe { + memory_region_init_io( + addr_of_mut!(self.iomem), + addr_of_mut!(*self).cast::(), + &PL011_OPS, + addr_of_mut!(*self).cast::(), + PL011_ARM_INFO.name, + 0x1000, + ); + let sbd = addr_of_mut!(*self).cast::(); + let dev = addr_of_mut!(*self).cast::(); + sysbus_init_mmio(sbd, addr_of_mut!(self.iomem)); + for irq in self.interrupts.iter_mut() { + sysbus_init_irq(sbd, irq); + } + self.clock = NonNull::new(qdev_init_clock_in( + dev, + CLK_NAME.as_ptr(), + None, /* pl011_clock_update */ + addr_of_mut!(*self).cast::(), + ClockEvent_ClockUpdate, + )) + .unwrap(); + } + } + + pub fn read(&mut self, offset: hwaddr, _size: core::ffi::c_uint) -> u64 { + use RegisterOffset::*; + + match RegisterOffset::try_from(offset) { + Err(v) if (0x3f8..0x400).contains(&v) => { + u64::from(PL011_ID_ARM[((offset - 0xfe0) >> 2) as usize]) + } + Err(_) => { + // qemu_log_mask(LOG_GUEST_ERROR, "pl011_read: Bad offset 0x%x\n", (int)offset); + 0 + } + Ok(DR) => { + // s->flags &= ~PL011_FLAG_RXFF; + self.flags.set_receive_fifo_full(false); + let c = self.read_fifo[self.read_pos]; + if self.read_count > 0 { + self.read_count -= 1; + self.read_pos = (self.read_pos + 1) & (self.fifo_depth() - 1); + } + if self.read_count == 0 { + // self.flags |= PL011_FLAG_RXFE; + self.flags.set_receive_fifo_empty(true); + } + if self.read_count + 1 == self.read_trigger { + //self.int_level &= ~ INT_RX; + self.int_level &= !registers::INT_RX; + } + // Update error bits. + self.receive_status_error_clear = c.to_be_bytes()[3].into(); + self.update(); + // SAFETY: self.char_backend is a valid CharBackend instance after it's been + // initialized in realize(). + unsafe { qemu_chr_fe_accept_input(&mut self.char_backend) }; + c.into() + } + Ok(RSR) => u8::from(self.receive_status_error_clear).into(), + Ok(FR) => u16::from(self.flags).into(), + Ok(FBRD) => self.fbrd.into(), + Ok(ILPR) => self.ilpr.into(), + Ok(IBRD) => self.ibrd.into(), + Ok(LCR_H) => u16::from(self.line_control).into(), + Ok(CR) => { + // We exercise our self-control. + u16::from(self.control).into() + } + Ok(FLS) => self.ifl.into(), + Ok(IMSC) => self.int_enabled.into(), + Ok(RIS) => self.int_level.into(), + Ok(MIS) => u64::from(self.int_level & self.int_enabled), + Ok(ICR) => { + // "The UARTICR Register is the interrupt clear register and is write-only" + // Source: ARM DDI 0183G 3.3.13 Interrupt Clear Register, UARTICR + 0 + } + Ok(DMACR) => self.dmacr.into(), + } + } + + pub fn write(&mut self, offset: hwaddr, value: u64) { + // eprintln!("write offset {offset} value {value}"); + use RegisterOffset::*; + let value: u32 = value as u32; + match RegisterOffset::try_from(offset) { + Err(_bad_offset) => { + eprintln!("write bad offset {offset} value {value}"); + } + Ok(DR) => { + // ??? Check if transmitter is enabled. + let ch: u8 = value as u8; + // XXX this blocks entire thread. Rewrite to use + // qemu_chr_fe_write and background I/O callbacks + + // SAFETY: self.char_backend is a valid CharBackend instance after it's been + // initialized in realize(). 
+ unsafe { + qemu_chr_fe_write_all(addr_of_mut!(self.char_backend), &ch, 1); + } + self.loopback_tx(value); + self.int_level |= registers::INT_TX; + self.update(); + } + Ok(RSR) => { + self.receive_status_error_clear = 0.into(); + } + Ok(FR) => { + // flag writes are ignored + } + Ok(ILPR) => { + self.ilpr = value; + } + Ok(IBRD) => { + self.ibrd = value; + } + Ok(FBRD) => { + self.fbrd = value; + } + Ok(LCR_H) => { + let value = value as u16; + let new_val: registers::LineControl = value.into(); + // Reset the FIFO state on FIFO enable or disable + if bool::from(self.line_control.fifos_enabled()) + ^ bool::from(new_val.fifos_enabled()) + { + self.reset_fifo(); + } + if self.line_control.send_break() ^ new_val.send_break() { + let mut break_enable: c_int = new_val.send_break().into(); + // SAFETY: self.char_backend is a valid CharBackend instance after it's been + // initialized in realize(). + unsafe { + qemu_chr_fe_ioctl( + addr_of_mut!(self.char_backend), + CHR_IOCTL_SERIAL_SET_BREAK as i32, + addr_of_mut!(break_enable).cast::(), + ); + } + self.loopback_break(break_enable > 0); + } + self.line_control = new_val; + self.set_read_trigger(); + } + Ok(CR) => { + // ??? Need to implement the enable bit. + let value = value as u16; + self.control = value.into(); + self.loopback_mdmctrl(); + } + Ok(FLS) => { + self.ifl = value; + self.set_read_trigger(); + } + Ok(IMSC) => { + self.int_enabled = value; + self.update(); + } + Ok(RIS) => {} + Ok(MIS) => {} + Ok(ICR) => { + self.int_level &= !value; + self.update(); + } + Ok(DMACR) => { + self.dmacr = value; + if value & 3 > 0 { + // qemu_log_mask(LOG_UNIMP, "pl011: DMA not implemented\n"); + eprintln!("pl011: DMA not implemented"); + } + } + } + } + + #[inline] + fn loopback_tx(&mut self, value: u32) { + if !self.loopback_enabled() { + return; + } + + // Caveat: + // + // In real hardware, TX loopback happens at the serial-bit level + // and then reassembled by the RX logics back into bytes and placed + // into the RX fifo. That is, loopback happens after TX fifo. + // + // Because the real hardware TX fifo is time-drained at the frame + // rate governed by the configured serial format, some loopback + // bytes in TX fifo may still be able to get into the RX fifo + // that could be full at times while being drained at software + // pace. + // + // In such scenario, the RX draining pace is the major factor + // deciding which loopback bytes get into the RX fifo, unless + // hardware flow-control is enabled. + // + // For simplicity, the above described is not emulated. + self.put_fifo(value); + } + + fn loopback_mdmctrl(&mut self) { + if !self.loopback_enabled() { + return; + } + + /* + * Loopback software-driven modem control outputs to modem status inputs: + * FR.RI <= CR.Out2 + * FR.DCD <= CR.Out1 + * FR.CTS <= CR.RTS + * FR.DSR <= CR.DTR + * + * The loopback happens immediately even if this call is triggered + * by setting only CR.LBE. + * + * CTS/RTS updates due to enabled hardware flow controls are not + * dealt with here. + */ + + //fr = s->flags & ~(PL011_FLAG_RI | PL011_FLAG_DCD | + // PL011_FLAG_DSR | PL011_FLAG_CTS); + //fr |= (cr & CR_OUT2) ? PL011_FLAG_RI : 0; + //fr |= (cr & CR_OUT1) ? PL011_FLAG_DCD : 0; + //fr |= (cr & CR_RTS) ? PL011_FLAG_CTS : 0; + //fr |= (cr & CR_DTR) ? 
PL011_FLAG_DSR : 0; + // + self.flags.set_ring_indicator(self.control.out_2()); + self.flags.set_data_carrier_detect(self.control.out_1()); + self.flags.set_clear_to_send(self.control.request_to_send()); + self.flags + .set_data_set_ready(self.control.data_transmit_ready()); + + // Change interrupts based on updated FR + let mut il = self.int_level; + + il &= !Interrupt::MS; + //il |= (fr & PL011_FLAG_DSR) ? INT_DSR : 0; + //il |= (fr & PL011_FLAG_DCD) ? INT_DCD : 0; + //il |= (fr & PL011_FLAG_CTS) ? INT_CTS : 0; + //il |= (fr & PL011_FLAG_RI) ? INT_RI : 0; + + if self.flags.data_set_ready() { + il |= Interrupt::DSR as u32; + } + if self.flags.data_carrier_detect() { + il |= Interrupt::DCD as u32; + } + if self.flags.clear_to_send() { + il |= Interrupt::CTS as u32; + } + if self.flags.ring_indicator() { + il |= Interrupt::RI as u32; + } + self.int_level = il; + self.update(); + } + + fn loopback_break(&mut self, enable: bool) { + if enable { + self.loopback_tx(DATA_BREAK); + } + } + + fn set_read_trigger(&mut self) { + //#if 0 + // /* The docs say the RX interrupt is triggered when the FIFO exceeds + // the threshold. However linux only reads the FIFO in response to an + // interrupt. Triggering the interrupt when the FIFO is non-empty seems + // to make things work. */ + // if (s->lcr & LCR_FEN) + // s->read_trigger = (s->ifl >> 1) & 0x1c; + // else + //#endif + self.read_trigger = 1; + } + + pub fn realize(&mut self) { + // SAFETY: self.char_backend has the correct size and alignment for a + // CharBackend object, and its callbacks are of the correct types. + unsafe { + qemu_chr_fe_set_handlers( + addr_of_mut!(self.char_backend), + Some(pl011_can_receive), + Some(pl011_receive), + Some(pl011_event), + None, + addr_of_mut!(*self).cast::(), + core::ptr::null_mut(), + true, + ); + } + } + + pub fn reset(&mut self) { + self.line_control.reset(); + self.receive_status_error_clear.reset(); + self.dmacr = 0; + self.int_enabled = 0; + self.int_level = 0; + self.ilpr = 0; + self.ibrd = 0; + self.fbrd = 0; + self.read_trigger = 1; + self.ifl = 0x12; + self.control.reset(); + self.flags = 0.into(); + self.reset_fifo(); + } + + pub fn reset_fifo(&mut self) { + self.read_count = 0; + self.read_pos = 0; + + /* Reset FIFO flags */ + self.flags.reset(); + } + + pub fn can_receive(&self) -> bool { + // trace_pl011_can_receive(s->lcr, s->read_count, r); + self.read_count < self.fifo_depth() + } + + pub fn event(&mut self, event: QEMUChrEvent) { + if event == bindings::QEMUChrEvent_CHR_EVENT_BREAK && !self.fifo_enabled() { + self.put_fifo(DATA_BREAK); + self.receive_status_error_clear.set_break_error(true); + } + } + + #[inline] + pub fn fifo_enabled(&self) -> bool { + matches!(self.line_control.fifos_enabled(), registers::Mode::FIFO) + } + + #[inline] + pub fn loopback_enabled(&self) -> bool { + self.control.enable_loopback() + } + + #[inline] + pub fn fifo_depth(&self) -> usize { + // Note: FIFO depth is expected to be power-of-2 + if self.fifo_enabled() { + return PL011_FIFO_DEPTH; + } + 1 + } + + pub fn put_fifo(&mut self, value: c_uint) { + let depth = self.fifo_depth(); + assert!(depth > 0); + let slot = (self.read_pos + self.read_count) & (depth - 1); + self.read_fifo[slot] = value; + self.read_count += 1; + // s->flags &= ~PL011_FLAG_RXFE; + self.flags.set_receive_fifo_empty(false); + if self.read_count == depth { + //s->flags |= PL011_FLAG_RXFF; + self.flags.set_receive_fifo_full(true); + } + + if self.read_count == self.read_trigger { + self.int_level |= registers::INT_RX; + self.update(); + } + 
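+        // Note on the index arithmetic above: `fifo_depth()` is either
+        // PL011_FIFO_DEPTH (16) or 1, both powers of two, so masking with
+        // `depth - 1` wraps the ring-buffer slot without a modulo; with the
+        // FIFO enabled, slot 15 + 1 wraps back to 0.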
+ + pub fn update(&mut self) { + let flags = self.int_level & self.int_enabled; + for (irq, i) in self.interrupts.iter().zip(IRQMASK) { + // SAFETY: self.interrupts have been initialized in init(). + unsafe { qemu_set_irq(*irq, i32::from(flags & i != 0)) }; + } + } +} + +/// Which bits in the interrupt status matter for each outbound IRQ line? +pub const IRQMASK: [u32; 6] = [ + /* combined IRQ */ + Interrupt::E + | Interrupt::MS + | Interrupt::RT as u32 + | Interrupt::TX as u32 + | Interrupt::RX as u32, + Interrupt::RX as u32, + Interrupt::TX as u32, + Interrupt::RT as u32, + Interrupt::MS, + Interrupt::E, +]; + +/// # Safety +/// +/// We expect the FFI user of this function to pass a valid pointer, that has +/// the same size as [`PL011State`]. We also expect the device is +/// readable/writeable from one thread at any time. +#[no_mangle] +pub unsafe extern "C" fn pl011_can_receive(opaque: *mut c_void) -> c_int { + assert!(!opaque.is_null()); + let state = NonNull::new_unchecked(opaque.cast::<PL011State>()); + state.as_ref().can_receive().into() +} + +/// # Safety +/// +/// We expect the FFI user of this function to pass a valid pointer, that has +/// the same size as [`PL011State`]. We also expect the device is +/// readable/writeable from one thread at any time. +/// +/// The buffer and size arguments must also be valid. +#[no_mangle] +pub unsafe extern "C" fn pl011_receive( + opaque: *mut core::ffi::c_void, + buf: *const u8, + size: core::ffi::c_int, +) { + assert!(!opaque.is_null()); + let mut state = NonNull::new_unchecked(opaque.cast::<PL011State>()); + if state.as_ref().loopback_enabled() { + return; + } + if size > 0 { + assert!(!buf.is_null()); + state.as_mut().put_fifo(*buf.cast::<c_uint>()) + } +} + +/// # Safety +/// +/// We expect the FFI user of this function to pass a valid pointer, that has +/// the same size as [`PL011State`]. We also expect the device is +/// readable/writeable from one thread at any time. +#[no_mangle] +pub unsafe extern "C" fn pl011_event(opaque: *mut core::ffi::c_void, event: QEMUChrEvent) { + assert!(!opaque.is_null()); + let mut state = NonNull::new_unchecked(opaque.cast::<PL011State>()); + state.as_mut().event(event) +} + +/// # Safety +/// +/// We expect the FFI user of this function to pass a valid pointer for `chr`. +#[no_mangle] +pub unsafe extern "C" fn pl011_create( + addr: u64, + irq: qemu_irq, + chr: *mut Chardev, +) -> *mut DeviceState { + let dev: *mut DeviceState = unsafe { qdev_new(PL011_ARM_INFO.name) }; + assert!(!dev.is_null()); + let sysbus: *mut SysBusDevice = dev as *mut SysBusDevice; + + qdev_prop_set_chr(dev, bindings::TYPE_CHARDEV.as_ptr(), chr); + sysbus_realize_and_unref(sysbus, addr_of!(error_fatal) as *mut *mut Error); + sysbus_mmio_map(sysbus, 0, addr); + sysbus_connect_irq(sysbus, 0, irq); + dev +}
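All of the extern "C" entry points above follow the same shape: assert that the opaque pointer handed over by the C side is non-null, rebuild a typed `NonNull<PL011State>` from it, and forward to a safe method on the state. A stripped-down sketch of that trampoline pattern is below; `Counter` and `counter_tick` are made-up names for illustration and appear neither in this patch nor in qemu_api.

    use core::ffi::{c_int, c_void};
    use core::ptr::NonNull;

    #[repr(C)]
    struct Counter {
        ticks: c_int,
    }

    /// # Safety
    /// `opaque` must point to a valid `Counter` that is not accessed from
    /// anywhere else for the duration of the call, mirroring the contract
    /// documented on the pl011 callbacks.
    unsafe extern "C" fn counter_tick(opaque: *mut c_void) -> c_int {
        assert!(!opaque.is_null());
        let mut state = NonNull::new_unchecked(opaque.cast::<Counter>());
        state.as_mut().ticks += 1;
        state.as_ref().ticks
    }

    fn main() {
        let mut counter = Counter { ticks: 0 };
        // QEMU would stash this pointer via qemu_chr_fe_set_handlers();
        // here we simply call the trampoline directly.
        let opaque: *mut c_void = (&mut counter as *mut Counter).cast();
        unsafe {
            assert_eq!(counter_tick(opaque), 1);
            assert_eq!(counter_tick(opaque), 2);
        }
    }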
diff --git a/rust/hw/char/pl011/src/device_class.rs b/rust/hw/char/pl011/src/device_class.rs new file mode 100644 index 0000000000..6b99239133 --- /dev/null +++ b/rust/hw/char/pl011/src/device_class.rs @@ -0,0 +1,58 @@ +// Copyright 2024 Manos Pitsidianakis +// SPDX-License-Identifier: GPL-2.0 OR GPL-3.0-or-later + +use core::ptr::NonNull; + +use qemu_api::bindings::*; + +use crate::{definitions::VMSTATE_PL011, device::PL011State}; + +qemu_api::declare_properties! { + PL011_PROPERTIES, + qemu_api::define_property!( + c"chardev", + PL011State, + char_backend, + unsafe { &qdev_prop_chr }, + CharBackend + ), + qemu_api::define_property!( + c"migrate-clk", + PL011State, + migrate_clock, + unsafe { &qdev_prop_bool }, + bool + ), +} + +qemu_api::device_class_init! { + pl011_class_init, + props => PL011_PROPERTIES, + realize_fn => Some(pl011_realize), + reset_fn => Some(pl011_reset), + vmsd => VMSTATE_PL011, +} + +/// # Safety +/// +/// We expect the FFI user of this function to pass a valid pointer, that has +/// the same size as [`PL011State`]. We also expect the device is +/// readable/writeable from one thread at any time. +#[no_mangle] +pub unsafe extern "C" fn pl011_realize(dev: *mut DeviceState, _errp: *mut *mut Error) { + assert!(!dev.is_null()); + let mut state = NonNull::new_unchecked(dev.cast::<PL011State>()); + state.as_mut().realize(); +} + +/// # Safety +/// +/// We expect the FFI user of this function to pass a valid pointer, that has +/// the same size as [`PL011State`]. We also expect the device is +/// readable/writeable from one thread at any time. +#[no_mangle] +pub unsafe extern "C" fn pl011_reset(dev: *mut DeviceState) { + assert!(!dev.is_null()); + let mut state = NonNull::new_unchecked(dev.cast::<PL011State>()); + state.as_mut().reset(); +} diff --git a/rust/hw/char/pl011/src/lib.rs b/rust/hw/char/pl011/src/lib.rs new file mode 100644 index 0000000000..697d2ef5a6 --- /dev/null +++ b/rust/hw/char/pl011/src/lib.rs @@ -0,0 +1,584 @@ +// Copyright 2024 Manos Pitsidianakis +// SPDX-License-Identifier: GPL-2.0 OR GPL-3.0-or-later +// +// PL011 QEMU Device Model +// +// This library implements a device model for the PrimeCell® UART (PL011) +// device in QEMU. +// +#![doc = include_str!("../README.md")] +//! # Library crate +//! +//! See [`PL011State`](crate::device::PL011State) for the device model type and +//! the [`registers`] module for register types. + +#![deny( + rustdoc::broken_intra_doc_links, + rustdoc::redundant_explicit_links, + clippy::correctness, + clippy::suspicious, + clippy::complexity, + clippy::perf, + clippy::cargo, + clippy::nursery, + clippy::style, + // restriction group + clippy::dbg_macro, + clippy::as_underscore, + clippy::assertions_on_result_states, + // pedantic group + clippy::doc_markdown, + clippy::borrow_as_ptr, + clippy::cast_lossless, + clippy::option_if_let_else, + clippy::missing_const_for_fn, + clippy::cognitive_complexity, + clippy::missing_safety_doc, + )] + +extern crate bilge; +extern crate bilge_impl; +extern crate qemu_api; + +pub mod definitions; +pub mod device; +pub mod device_class; +pub mod memory_ops; + +/// Offset of each register from the base memory address of the device. +/// +/// # Source +/// ARM DDI 0183G, Table 3-1 p.3-3 +#[doc(alias = "offset")] +#[allow(non_camel_case_types)] +#[repr(u64)] +#[derive(Debug)] +pub enum RegisterOffset { + /// Data Register + /// + /// A write to this register initiates the actual data transmission + #[doc(alias = "UARTDR")] + DR = 0x000, + /// Receive Status Register or Error Clear Register + #[doc(alias = "UARTRSR")] + #[doc(alias = "UARTECR")] + RSR = 0x004, + /// Flag Register + /// + /// A read of this register shows if transmission is complete + #[doc(alias = "UARTFR")] + FR = 0x018, + /// Fractional Baud Rate Register + /// + /// responsible for baud rate speed + #[doc(alias = "UARTFBRD")] + FBRD = 0x028, + /// `IrDA` Low-Power Counter Register + #[doc(alias = "UARTILPR")] + ILPR = 0x020, + /// Integer Baud Rate Register + /// + /// Responsible for baud rate speed + #[doc(alias = "UARTIBRD")] + IBRD = 0x024, + /// line control register (data frame format) + #[doc(alias = "UARTLCR_H")] + LCR_H = 0x02C, + /// Toggle UART, transmission or reception + #[doc(alias = "UARTCR")] + CR = 0x030, + /// Interrupt FIFO Level Select Register + #[doc(alias = "UARTIFLS")] + FLS = 0x034, + /// Interrupt Mask Set/Clear Register + #[doc(alias = "UARTIMSC")] + IMSC = 0x038, + /// Raw Interrupt Status Register + #[doc(alias = "UARTRIS")] + RIS = 0x03C, + /// Masked Interrupt Status Register + #[doc(alias = "UARTMIS")] + MIS = 0x040, + /// Interrupt Clear Register + #[doc(alias = "UARTICR")] + ICR = 0x044, + /// DMA control Register + #[doc(alias = "UARTDMACR")] + DMACR = 0x048, + ///// Reserved, offsets `0x04C` to `0x07C`. + //Reserved = 0x04C, +} + +impl core::convert::TryFrom<u64> for RegisterOffset { + type Error = u64; + + fn try_from(value: u64) -> Result<Self, Self::Error> { + macro_rules! case { + ($($discriminant:ident),*$(,)*) => { + /* check that matching on all macro arguments compiles, which means we are not + * missing any enum value; if the type definition ever changes this will stop + * compiling. + */ + const fn _assert_exhaustive(val: RegisterOffset) { + match val { + $(RegisterOffset::$discriminant => (),)* + } + } + + match value { + $(x if x == Self::$discriminant as u64 => Ok(Self::$discriminant),)* + _ => Err(value), + } + } + } + case! { DR, RSR, FR, FBRD, ILPR, IBRD, LCR_H, CR, FLS, IMSC, RIS, MIS, ICR, DMACR } + } +}
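Thanks to the exhaustiveness check built into the `case!` macro, turning a guest MMIO offset back into a `RegisterOffset` is a single `try_from` call. A short usage sketch follows, assuming the `RegisterOffset` type above is in scope; `decode` is a made-up helper for this note and not a function in the patch.

    fn decode(offset: u64) -> &'static str {
        // Unknown or reserved offsets come back as Err(offset), so an MMIO
        // handler can log the access and otherwise ignore it.
        match RegisterOffset::try_from(offset) {
            Ok(RegisterOffset::DR) => "data register",
            Ok(RegisterOffset::FR) => "flag register",
            Ok(_) => "some other known register",
            Err(_) => "reserved or unknown offset",
        }
    }

    fn main() {
        assert_eq!(decode(0x000), "data register");
        assert_eq!(decode(0x018), "flag register");
        assert_eq!(decode(0x04C), "reserved or unknown offset");
    }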
+ +pub mod registers { + //! Device registers exposed as typed structs which are backed by arbitrary + //! integer bitmaps. [`Data`], [`Control`], [`LineControl`], etc. + //! + //! All PL011 registers are essentially 32-bit wide, but are typed here as + //! bitmaps with only the necessary width. That is, if a struct bitmap + //! in this module is for example 16 bits long, it should be conceived + //! as a 32-bit register where the unmentioned higher bits are always + //! unused thus treated as zero when read or written. + use bilge::prelude::*; + + // TODO: FIFO Mode has different semantics + /// Data Register, `UARTDR` + /// + /// The `UARTDR` register is the data register. + /// + /// For words to be transmitted: + /// + /// - if the FIFOs are enabled, data written to this location is pushed onto + /// the transmit + /// FIFO + /// - if the FIFOs are not enabled, data is stored in the transmitter + /// holding register (the + /// bottom word of the transmit FIFO). + /// + /// The write operation initiates transmission from the UART. The data is + /// prefixed with a start bit, appended with the appropriate parity bit + /// (if parity is enabled), and a stop bit. The resultant word is then + /// transmitted.
+ /// + /// For received words: + /// + /// - if the FIFOs are enabled, the data byte and the 4-bit status (break, + /// frame, parity, + /// and overrun) is pushed onto the 12-bit wide receive FIFO + /// - if the FIFOs are not enabled, the data byte and status are stored in + /// the receiving + /// holding register (the bottom word of the receive FIFO). + /// + /// The received data byte is read by performing reads from the `UARTDR` + /// register along with the corresponding status information. The status + /// information can also be read by a read of the `UARTRSR/UARTECR` + /// register. + /// + /// # Note + /// + /// You must disable the UART before any of the control registers are + /// reprogrammed. When the UART is disabled in the middle of + /// transmission or reception, it completes the current character before + /// stopping. + /// + /// # Source + /// ARM DDI 0183G 3.3.1 Data Register, UARTDR + #[bitsize(16)] + #[derive(Clone, Copy, DebugBits, FromBits)] + #[doc(alias = "UARTDR")] + pub struct Data { + _reserved: u4, + pub data: u8, + pub framing_error: bool, + pub parity_error: bool, + pub break_error: bool, + pub overrun_error: bool, + } + + // TODO: FIFO Mode has different semantics + /// Receive Status Register / Error Clear Register, `UARTRSR/UARTECR` + /// + /// The UARTRSR/UARTECR register is the receive status register/error clear + /// register. Receive status can also be read from the `UARTRSR` + /// register. If the status is read from this register, then the status + /// information for break, framing and parity corresponds to the + /// data character read from the [Data register](Data), `UARTDR` prior to + /// reading the UARTRSR register. The status information for overrun is + /// set immediately when an overrun condition occurs. + /// + /// + /// # Note + /// The received data character must be read first from the [Data + /// Register](Data), `UARTDR` before reading the error status associated + /// with that data character from the `UARTRSR` register. This read + /// sequence cannot be reversed, because the `UARTRSR` register is + /// updated only when a read occurs from the `UARTDR` register. However, + /// the status information can also be obtained by reading the `UARTDR` + /// register + /// + /// # Source + /// ARM DDI 0183G 3.3.2 Receive Status Register/Error Clear Register, + /// UARTRSR/UARTECR + #[bitsize(8)] + #[derive(Clone, Copy, DebugBits, FromBits)] + pub struct ReceiveStatusErrorClear { + pub framing_error: bool, + pub parity_error: bool, + pub break_error: bool, + pub overrun_error: bool, + _reserved_unpredictable: u4, + } + + impl ReceiveStatusErrorClear { + pub fn reset(&mut self) { + // All the bits are cleared to 0 on reset. + *self = 0.into(); + } + } + + impl Default for ReceiveStatusErrorClear { + fn default() -> Self { + 0.into() + } + } + + #[bitsize(16)] + #[derive(Clone, Copy, DebugBits, FromBits)] + /// Flag Register, `UARTFR` + #[doc(alias = "UARTFR")] + pub struct Flags { + /// CTS Clear to send. This bit is the complement of the UART clear to + /// send, `nUARTCTS`, modem status input. That is, the bit is 1 + /// when `nUARTCTS` is LOW. + pub clear_to_send: bool, + /// DSR Data set ready. This bit is the complement of the UART data set + /// ready, `nUARTDSR`, modem status input. That is, the bit is 1 when + /// `nUARTDSR` is LOW. + pub data_set_ready: bool, + /// DCD Data carrier detect. This bit is the complement of the UART data + /// carrier detect, `nUARTDCD`, modem status input. 
That is, the bit is + /// 1 when `nUARTDCD` is LOW. + pub data_carrier_detect: bool, + /// BUSY UART busy. If this bit is set to 1, the UART is busy + /// transmitting data. This bit remains set until the complete + /// byte, including all the stop bits, has been sent from the + /// shift register. This bit is set as soon as the transmit FIFO + /// becomes non-empty, regardless of whether the UART is enabled + /// or not. + pub busy: bool, + /// RXFE Receive FIFO empty. The meaning of this bit depends on the + /// state of the FEN bit in the UARTLCR_H register. If the FIFO + /// is disabled, this bit is set when the receive holding + /// register is empty. If the FIFO is enabled, the RXFE bit is + /// set when the receive FIFO is empty. + pub receive_fifo_empty: bool, + /// TXFF Transmit FIFO full. The meaning of this bit depends on the + /// state of the FEN bit in the UARTLCR_H register. If the FIFO + /// is disabled, this bit is set when the transmit holding + /// register is full. If the FIFO is enabled, the TXFF bit is + /// set when the transmit FIFO is full. + pub transmit_fifo_full: bool, + /// RXFF Receive FIFO full. The meaning of this bit depends on the state + /// of the FEN bit in the UARTLCR_H register. If the FIFO is + /// disabled, this bit is set when the receive holding register + /// is full. If the FIFO is enabled, the RXFF bit is set when + /// the receive FIFO is full. + pub receive_fifo_full: bool, + /// Transmit FIFO empty. The meaning of this bit depends on the state of + /// the FEN bit in the [Line Control register](LineControl), + /// `UARTLCR_H`. If the FIFO is disabled, this bit is set when the + /// transmit holding register is empty. If the FIFO is enabled, + /// the TXFE bit is set when the transmit FIFO is empty. This + /// bit does not indicate if there is data in the transmit shift + /// register. + pub transmit_fifo_empty: bool, + /// `RI`, is `true` when `nUARTRI` is `LOW`. + pub ring_indicator: bool, + _reserved_zero_no_modify: u7, + } + + impl Flags { + pub fn reset(&mut self) { + // After reset TXFF, RXFF, and BUSY are 0, and TXFE and RXFE are 1 + self.set_receive_fifo_full(false); + self.set_transmit_fifo_full(false); + self.set_busy(false); + self.set_receive_fifo_empty(true); + self.set_transmit_fifo_empty(true); + } + } + + impl Default for Flags { + fn default() -> Self { + let mut ret: Self = 0.into(); + ret.reset(); + ret + } + } + + #[bitsize(16)] + #[derive(Clone, Copy, DebugBits, FromBits)] + /// Line Control Register, `UARTLCR_H` + #[doc(alias = "UARTLCR_H")] + pub struct LineControl { + /// 15:8 - Reserved, do not modify, read as zero. + _reserved_zero_no_modify: u8, + /// 7 SPS Stick parity select. + /// 0 = stick parity is disabled + /// 1 = either: + /// • if the EPS bit is 0 then the parity bit is transmitted and checked + /// as a 1 • if the EPS bit is 1 then the parity bit is + /// transmitted and checked as a 0. This bit has no effect when + /// the PEN bit disables parity checking and generation. See Table 3-11 + /// on page 3-14 for the parity truth table. + pub sticky_parity: bool, + /// WLEN Word length. These bits indicate the number of data bits + /// transmitted or received in a frame as follows: b11 = 8 bits + /// b10 = 7 bits + /// b01 = 6 bits + /// b00 = 5 bits. + pub word_length: WordLength, + /// FEN Enable FIFOs: + /// 0 = FIFOs are disabled (character mode) that is, the FIFOs become + /// 1-byte-deep holding registers 1 = transmit and receive FIFO + /// buffers are enabled (FIFO mode). 
+ pub fifos_enabled: Mode, + /// 3 STP2 Two stop bits select. If this bit is set to 1, two stop bits + /// are transmitted at the end of the frame. The receive + /// logic does not check for two stop bits being received. + pub two_stops_bits: bool, + /// EPS Even parity select. Controls the type of parity the UART uses + /// during transmission and reception: + /// - 0 = odd parity. The UART generates or checks for an odd number of + /// 1s in the data and parity bits. + /// - 1 = even parity. The UART generates or checks for an even number + /// of 1s in the data and parity bits. + /// This bit has no effect when the `PEN` bit disables parity checking + /// and generation. See Table 3-11 on page 3-14 for the parity + /// truth table. + pub parity: Parity, + /// 1 PEN Parity enable: + /// + /// - 0 = parity is disabled and no parity bit added to the data frame + /// - 1 = parity checking and generation is enabled. + /// + /// See Table 3-11 on page 3-14 for the parity truth table. + pub parity_enabled: bool, + /// BRK Send break. + /// + /// If this bit is set to `1`, a low-level is continually output on the + /// `UARTTXD` output, after completing transmission of the + /// current character. For the proper execution of the break command, + /// the software must set this bit for at least two complete + /// frames. For normal use, this bit must be cleared to `0`. + pub send_break: bool, + } + + impl LineControl { + pub fn reset(&mut self) { + // All the bits are cleared to 0 when reset. + *self = 0.into(); + } + } + + impl Default for LineControl { + fn default() -> Self { + 0.into() + } + } + + #[bitsize(1)] + #[derive(Clone, Copy, Debug, Eq, FromBits, PartialEq)] + /// `EPS` "Even parity select", field of [Line Control + /// register](LineControl). + pub enum Parity { + /// - 0 = odd parity. The UART generates or checks for an odd number of + /// 1s in the data and parity bits. + Odd = 0, + /// - 1 = even parity. The UART generates or checks for an even number + /// of 1s in the data and parity bits. + Even = 1, + } + + #[bitsize(1)] + #[derive(Clone, Copy, Debug, Eq, FromBits, PartialEq)] + /// `FEN` "Enable FIFOs" or Device mode, field of [Line Control + /// register](LineControl). + pub enum Mode { + /// 0 = FIFOs are disabled (character mode) that is, the FIFOs become + /// 1-byte-deep holding registers + Character = 0, + /// 1 = transmit and receive FIFO buffers are enabled (FIFO mode). + FIFO = 1, + } + + impl From<Mode> for bool { + fn from(val: Mode) -> Self { + matches!(val, Mode::FIFO) + } + } + + #[bitsize(2)] + #[derive(Clone, Copy, Debug, Eq, FromBits, PartialEq)] + /// `WLEN` Word length, field of [Line Control register](LineControl). + /// + /// These bits indicate the number of data bits transmitted or received in a + /// frame as follows: + pub enum WordLength { + /// b11 = 8 bits + _8Bits = 0b11, + /// b10 = 7 bits + _7Bits = 0b10, + /// b01 = 6 bits + _6Bits = 0b01, + /// b00 = 5 bits. + _5Bits = 0b00, + } + + /// Control Register, `UARTCR` + /// + /// The `UARTCR` register is the control register. All the bits are cleared + /// to `0` on reset except for bits `9` and `8` that are set to `1`. + /// + /// # Source + /// ARM DDI 0183G, 3.3.8 Control Register, `UARTCR`, Table 3-12 + #[bitsize(16)] + #[doc(alias = "UARTCR")] + #[derive(Clone, Copy, DebugBits, FromBits)] + pub struct Control { + /// `UARTEN` UART enable: 0 = UART is disabled. If the UART is disabled + /// in the middle of transmission or reception, it completes the current + /// character before stopping.
1 = the UART is enabled. Data + /// transmission and reception occurs for either UART signals or SIR + /// signals depending on the setting of the SIREN bit. + pub enable_uart: bool, + /// `SIREN` `SIR` enable: 0 = IrDA SIR ENDEC is disabled. `nSIROUT` + /// remains LOW (no light pulse generated), and signal transitions on + /// SIRIN have no effect. 1 = IrDA SIR ENDEC is enabled. Data is + /// transmitted and received on nSIROUT and SIRIN. UARTTXD remains HIGH, + /// in the marking state. Signal transitions on UARTRXD or modem status + /// inputs have no effect. This bit has no effect if the UARTEN bit + /// disables the UART. + pub enable_sir: bool, + /// `SIRLP` SIR low-power IrDA mode. This bit selects the IrDA encoding + /// mode. If this bit is cleared to 0, low-level bits are transmitted as + /// an active high pulse with a width of 3/ 16th of the bit period. If + /// this bit is set to 1, low-level bits are transmitted with a pulse + /// width that is 3 times the period of the IrLPBaud16 input signal, + /// regardless of the selected bit rate. Setting this bit uses less + /// power, but might reduce transmission distances. + pub sir_lowpower_irda_mode: u1, + /// Reserved, do not modify, read as zero. + _reserved_zero_no_modify: u4, + /// `LBE` Loopback enable. If this bit is set to 1 and the SIREN bit is + /// set to 1 and the SIRTEST bit in the Test Control register, UARTTCR + /// on page 4-5 is set to 1, then the nSIROUT path is inverted, and fed + /// through to the SIRIN path. The SIRTEST bit in the test register must + /// be set to 1 to override the normal half-duplex SIR operation. This + /// must be the requirement for accessing the test registers during + /// normal operation, and SIRTEST must be cleared to 0 when loopback + /// testing is finished. This feature reduces the amount of external + /// coupling required during system test. If this bit is set to 1, and + /// the SIRTEST bit is set to 0, the UARTTXD path is fed through to the + /// UARTRXD path. In either SIR mode or UART mode, when this bit is set, + /// the modem outputs are also fed through to the modem inputs. This bit + /// is cleared to 0 on reset, to disable loopback. + pub enable_loopback: bool, + /// `TXE` Transmit enable. If this bit is set to 1, the transmit section + /// of the UART is enabled. Data transmission occurs for either UART + /// signals, or SIR signals depending on the setting of the SIREN bit. + /// When the UART is disabled in the middle of transmission, it + /// completes the current character before stopping. + pub enable_transmit: bool, + /// `RXE` Receive enable. If this bit is set to 1, the receive section + /// of the UART is enabled. Data reception occurs for either UART + /// signals or SIR signals depending on the setting of the SIREN bit. + /// When the UART is disabled in the middle of reception, it completes + /// the current character before stopping. + pub enable_receive: bool, + /// `DTR` Data transmit ready. This bit is the complement of the UART + /// data transmit ready, `nUARTDTR`, modem status output. That is, when + /// the bit is programmed to a 1 then `nUARTDTR` is LOW. + pub data_transmit_ready: bool, + /// `RTS` Request to send. This bit is the complement of the UART + /// request to send, `nUARTRTS`, modem status output. That is, when the + /// bit is programmed to a 1 then `nUARTRTS` is LOW. + pub request_to_send: bool, + /// `Out1` This bit is the complement of the UART Out1 (`nUARTOut1`) + /// modem status output. 
That is, when the bit is programmed to a 1 the + /// output is 0. For DTE this can be used as Data Carrier Detect (DCD). + pub out_1: bool, + /// `Out2` This bit is the complement of the UART Out2 (`nUARTOut2`) + /// modem status output. That is, when the bit is programmed to a 1, the + /// output is 0. For DTE this can be used as Ring Indicator (RI). + pub out_2: bool, + /// `RTSEn` RTS hardware flow control enable. If this bit is set to 1, + /// RTS hardware flow control is enabled. Data is only requested when + /// there is space in the receive FIFO for it to be received. + pub rts_hardware_flow_control_enable: bool, + /// `CTSEn` CTS hardware flow control enable. If this bit is set to 1, + /// CTS hardware flow control is enabled. Data is only transmitted when + /// the `nUARTCTS` signal is asserted. + pub cts_hardware_flow_control_enable: bool, + } + + impl Control { + pub fn reset(&mut self) { + *self = 0.into(); + self.set_enable_receive(true); + self.set_enable_transmit(true); + } + } + + impl Default for Control { + fn default() -> Self { + let mut ret: Self = 0.into(); + ret.reset(); + ret + } + } + + /// Interrupt status bits in UARTRIS, UARTMIS, UARTIMSC + pub const INT_OE: u32 = 1 << 10; + pub const INT_BE: u32 = 1 << 9; + pub const INT_PE: u32 = 1 << 8; + pub const INT_FE: u32 = 1 << 7; + pub const INT_RT: u32 = 1 << 6; + pub const INT_TX: u32 = 1 << 5; + pub const INT_RX: u32 = 1 << 4; + pub const INT_DSR: u32 = 1 << 3; + pub const INT_DCD: u32 = 1 << 2; + pub const INT_CTS: u32 = 1 << 1; + pub const INT_RI: u32 = 1 << 0; + pub const INT_E: u32 = INT_OE | INT_BE | INT_PE | INT_FE; + pub const INT_MS: u32 = INT_RI | INT_DSR | INT_DCD | INT_CTS; + + #[repr(u32)] + pub enum Interrupt { + OE = 1 << 10, + BE = 1 << 9, + PE = 1 << 8, + FE = 1 << 7, + RT = 1 << 6, + TX = 1 << 5, + RX = 1 << 4, + DSR = 1 << 3, + DCD = 1 << 2, + CTS = 1 << 1, + RI = 1 << 0, + } + + impl Interrupt { + pub const E: u32 = INT_OE | INT_BE | INT_PE | INT_FE; + pub const MS: u32 = INT_RI | INT_DSR | INT_DCD | INT_CTS; + } +} + +// TODO: You must disable the UART before any of the control registers are +// reprogrammed. 
When the UART is disabled in the middle of transmission or +// reception, it completes the current character before stopping diff --git a/rust/hw/char/pl011/src/memory_ops.rs b/rust/hw/char/pl011/src/memory_ops.rs new file mode 100644 index 0000000000..6144d28586 --- /dev/null +++ b/rust/hw/char/pl011/src/memory_ops.rs @@ -0,0 +1,45 @@ +// Copyright 2024 Manos Pitsidianakis +// SPDX-License-Identifier: GPL-2.0 OR GPL-3.0-or-later + +use core::{mem::MaybeUninit, ptr::NonNull}; + +use qemu_api::bindings::*; + +use crate::device::PL011State; + +pub static PL011_OPS: MemoryRegionOps = MemoryRegionOps { + read: Some(pl011_read), + write: Some(pl011_write), + read_with_attrs: None, + write_with_attrs: None, + endianness: device_endian_DEVICE_NATIVE_ENDIAN, + valid: unsafe { MaybeUninit::<MemoryRegionOps__bindgen_ty_1>::zeroed().assume_init() }, + impl_: MemoryRegionOps__bindgen_ty_2 { + min_access_size: 4, + max_access_size: 4, + ..unsafe { MaybeUninit::<MemoryRegionOps__bindgen_ty_2>::zeroed().assume_init() } + }, +}; + +#[no_mangle] +unsafe extern "C" fn pl011_read( + opaque: *mut core::ffi::c_void, + addr: hwaddr, + size: core::ffi::c_uint, +) -> u64 { + assert!(!opaque.is_null()); + let mut state = NonNull::new_unchecked(opaque.cast::<PL011State>()); + state.as_mut().read(addr, size) +} + +#[no_mangle] +unsafe extern "C" fn pl011_write( + opaque: *mut core::ffi::c_void, + addr: hwaddr, + data: u64, + _size: core::ffi::c_uint, +) { + assert!(!opaque.is_null()); + let mut state = NonNull::new_unchecked(opaque.cast::<PL011State>()); + state.as_mut().write(addr, data) +} diff --git a/rust/hw/meson.build b/rust/hw/meson.build new file mode 100644 index 0000000000..860196645e --- /dev/null +++ b/rust/hw/meson.build @@ -0,0 +1 @@ +subdir('char') diff --git a/rust/meson.build b/rust/meson.build index a903c7c602..4c858e9d1f 100644 --- a/rust/meson.build +++ b/rust/meson.build @@ -11,3 +11,5 @@ _lib_bindings_rs = static_library( ) subdir('qemu-api') + +subdir('hw') From patchwork Mon Jul 22 11:43:38 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Manos Pitsidianakis X-Patchwork-Id: 1963207 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@legolas.ozlabs.org Authentication-Results: legolas.ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=linaro.org header.i=@linaro.org header.a=rsa-sha256 header.s=google header.b=G0tfUt2b; dkim-atps=neutral Authentication-Results: legolas.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=nongnu.org (client-ip=209.51.188.17; helo=lists.gnu.org; envelope-from=qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org; receiver=patchwork.ozlabs.org) Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-ECDSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by legolas.ozlabs.org (Postfix) with ESMTPS id 4WSJPS6Lz6z1yZ7 for ; Mon, 22 Jul 2024 21:45:32 +1000 (AEST) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1sVrTK-0002EL-Uh; Mon, 22 Jul 2024 07:44:55 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1sVrTF-0001xm-K5 for qemu-devel@nongnu.org; Mon, 22 Jul 2024 07:44:50 -0400 Received: from mail-wr1-x42b.google.com ([2a00:1450:4864:20::42b]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.90_1) (envelope-from ) id 1sVrT1-0002T3-5U for qemu-devel@nongnu.org; Mon,
22 Jul 2024 07:44:49 -0400 Received: by mail-wr1-x42b.google.com with SMTP id ffacd0b85a97d-367ab76d5e1so1434878f8f.3 for ; Mon, 22 Jul 2024 04:44:34 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; t=1721648673; x=1722253473; darn=nongnu.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=S/edxaR0inGKNqbLNa9KIUUyyOfX2ns3wekE/yVwiSI=; b=G0tfUt2bfCch6ahB+hM5N+DiYl2cF+6/+U+64UCIBRITsOjRn66iOMETRSVyMkw0Ql Sao5Z+VSnXlOFfoJ1qkOoONjsNDlB1ELF84u+dVnu25syB15DJ3WYc2ysp9iokOdaStG kVkr09SDnE/gOTFzS9WjTVafRFjRo6FoQEgzJOK0ClbYwz46nfaaV4R3SoOPe3TRZOmo W61jyLxXbIayimS67pAAVNCc1QDQ3evtNllYzwadQwW/bHNpGT/QYT3zNb7cWgzvuk12 QDHPoEYzL644cDBd1YWWwrg1Ac2Di05iH2l0XSJd4pjYUTh2SRPP7c1d0KnjeuS7grPl Vi6Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1721648673; x=1722253473; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=S/edxaR0inGKNqbLNa9KIUUyyOfX2ns3wekE/yVwiSI=; b=gZRUaMu0rDWNYMc7aNwEtA8xrtSZGm6G7PqIKA7QrgVf8QpffBds7ghj++xvqGfdLA 4Ae31w4K/BCoPlGc347+9bU8IT3zcSiuxkzSdR9JSVuH2tv5BzuCPN39c62moUxJVnIm PT6n2SWjHzLfLojxsz33EAgvPleGWDEffP/TOSNeXSk/10m1l8fVqnjyOEcr5JQJd1s4 jlgXNCR/bfK4y4rdoerQkrTIOvPP7+rziOno+Sj57/Y5s6EjXE0Ufsr1P5vybkFU0gy/ NuXfqCiNIc4MEQYJZuhBrCBoGyTh86uT0j3V9xV34qj9u/w++vz9HF4IaucgzOFQinRA Kf4g== X-Gm-Message-State: AOJu0YxE8WiQBYC1ksIj6TmjKzNmd8nXfCt2KZDNiChmHkKBe6QeEnqZ 7YNhnVTH3N2i2JJsklFceAeHRdsyh3I2EpjGCuldezSVfQ3HW5Dvib4mC8HMJzBy5PybpxDs4sw 1Td8= X-Google-Smtp-Source: AGHT+IFhBH4CR1+nR/b+75Z/Yrt3sG8XKP1jTg/vP+KLNyxGqwOleLL2Sy7rpY+wJMZ5jf5IJaFNng== X-Received: by 2002:a5d:6dad:0:b0:368:31c7:19dd with SMTP id ffacd0b85a97d-369bbbb3030mr3838718f8f.5.1721648667457; Mon, 22 Jul 2024 04:44:27 -0700 (PDT) Received: from localhost.localdomain (adsl-231.37.6.1.tellas.gr. 
[37.6.1.231]) by smtp.gmail.com with ESMTPSA id ffacd0b85a97d-368787eba8csm8323513f8f.91.2024.07.22.04.44.16 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 22 Jul 2024 04:44:25 -0700 (PDT) From: Manos Pitsidianakis To: qemu-devel@nongnu.org Cc: Stefan Hajnoczi , Mads Ynddal , Peter Maydell , =?utf-8?q?Alex_Benn=C3=A9e?= , =?utf-8?q?Daniel_P?= =?utf-8?q?=2E_Berrang=C3=A9?= , =?utf-8?q?Marc-Andr?= =?utf-8?q?=C3=A9_Lureau?= , Thomas Huth , Markus Armbruster , =?utf-8?q?Philippe_Mathieu-Daud=C3=A9?= , Zhao Liu , Gustavo Romero , Pierrick Bouvier , rowan.hart@intel.com, Richard Henderson Subject: [RFC PATCH v5 8/8] rust/pl011: vendor dependencies Date: Mon, 22 Jul 2024 14:43:38 +0300 Message-ID: <43d9c3f65224f28f78f28d15e67b999d84a3b66f.1721648163.git.manos.pitsidianakis@linaro.org> X-Mailer: git-send-email 2.44.0 In-Reply-To: References: MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::42b; envelope-from=manos.pitsidianakis@linaro.org; helo=mail-wr1-x42b.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org Sender: qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org Signed-off-by: Manos Pitsidianakis --- rust/hw/char/pl011/vendor/either/README.rst | 185 + .../vendor/arbitrary-int/.cargo-checksum.json | 1 + .../pl011/vendor/arbitrary-int/CHANGELOG.md | 47 + .../pl011/vendor/arbitrary-int/Cargo.toml | 54 + .../pl011/vendor/arbitrary-int/LICENSE.txt | 21 + .../char/pl011/vendor/arbitrary-int/README.md | 72 + .../pl011/vendor/arbitrary-int/meson.build | 14 + .../pl011/vendor/arbitrary-int/src/lib.rs | 1489 +++++ .../pl011/vendor/arbitrary-int/tests/tests.rs | 1913 ++++++ .../vendor/bilge-impl/.cargo-checksum.json | 1 + .../char/pl011/vendor/bilge-impl/Cargo.toml | 54 + .../hw/char/pl011/vendor/bilge-impl/README.md | 327 ++ .../char/pl011/vendor/bilge-impl/meson.build | 24 + .../pl011/vendor/bilge-impl/src/bitsize.rs | 187 + .../vendor/bilge-impl/src/bitsize/split.rs | 185 + .../vendor/bilge-impl/src/bitsize_internal.rs | 235 + .../src/bitsize_internal/struct_gen.rs | 402 ++ .../pl011/vendor/bilge-impl/src/debug_bits.rs | 55 + .../vendor/bilge-impl/src/default_bits.rs | 92 + .../pl011/vendor/bilge-impl/src/fmt_bits.rs | 112 + .../pl011/vendor/bilge-impl/src/from_bits.rs | 222 + .../char/pl011/vendor/bilge-impl/src/lib.rs | 79 + .../pl011/vendor/bilge-impl/src/shared.rs | 196 + .../src/shared/discriminant_assigner.rs | 56 + .../vendor/bilge-impl/src/shared/fallback.rs | 92 + .../vendor/bilge-impl/src/shared/util.rs | 91 + .../vendor/bilge-impl/src/try_from_bits.rs | 143 + .../pl011/vendor/bilge/.cargo-checksum.json | 1 + rust/hw/char/pl011/vendor/bilge/Cargo.toml | 69 + .../hw/char/pl011/vendor/bilge/LICENSE-APACHE | 176 + rust/hw/char/pl011/vendor/bilge/LICENSE-MIT | 17 + rust/hw/char/pl011/vendor/bilge/README.md | 327 ++ rust/hw/char/pl011/vendor/bilge/meson.build | 17 + rust/hw/char/pl011/vendor/bilge/src/lib.rs | 80 + .../pl011/vendor/either/.cargo-checksum.json | 1 + rust/hw/char/pl011/vendor/either/Cargo.toml | 54 + .../char/pl011/vendor/either/LICENSE-APACHE | 201 + 
rust/hw/char/pl011/vendor/either/LICENSE-MIT | 25 + .../pl011/vendor/either/README-crates.io.md | 10 + rust/hw/char/pl011/vendor/either/meson.build | 16 + .../pl011/vendor/either/src/into_either.rs | 64 + .../char/pl011/vendor/either/src/iterator.rs | 315 + rust/hw/char/pl011/vendor/either/src/lib.rs | 1519 +++++ .../pl011/vendor/either/src/serde_untagged.rs | 69 + .../either/src/serde_untagged_optional.rs | 74 + .../vendor/itertools/.cargo-checksum.json | 1 + .../char/pl011/vendor/itertools/CHANGELOG.md | 409 ++ .../hw/char/pl011/vendor/itertools/Cargo.lock | 681 +++ .../hw/char/pl011/vendor/itertools/Cargo.toml | 101 + .../pl011/vendor/itertools/LICENSE-APACHE | 201 + .../char/pl011/vendor/itertools/LICENSE-MIT | 25 + rust/hw/char/pl011/vendor/itertools/README.md | 44 + .../pl011/vendor/itertools/benches/bench1.rs | 877 +++ .../vendor/itertools/benches/combinations.rs | 125 + .../benches/combinations_with_replacement.rs | 40 + .../vendor/itertools/benches/extra/mod.rs | 2 + .../itertools/benches/extra/zipslices.rs | 188 + .../itertools/benches/fold_specialization.rs | 73 + .../vendor/itertools/benches/powerset.rs | 44 + .../vendor/itertools/benches/tree_fold1.rs | 144 + .../itertools/benches/tuple_combinations.rs | 113 + .../pl011/vendor/itertools/benches/tuples.rs | 213 + .../pl011/vendor/itertools/examples/iris.data | 150 + .../pl011/vendor/itertools/examples/iris.rs | 137 + .../char/pl011/vendor/itertools/meson.build | 18 + .../vendor/itertools/src/adaptors/coalesce.rs | 235 + .../vendor/itertools/src/adaptors/map.rs | 124 + .../vendor/itertools/src/adaptors/mod.rs | 1151 ++++ .../itertools/src/adaptors/multi_product.rs | 230 + .../vendor/itertools/src/combinations.rs | 128 + .../src/combinations_with_replacement.rs | 109 + .../pl011/vendor/itertools/src/concat_impl.rs | 23 + .../vendor/itertools/src/cons_tuples_impl.rs | 64 + .../char/pl011/vendor/itertools/src/diff.rs | 61 + .../vendor/itertools/src/duplicates_impl.rs | 216 + .../vendor/itertools/src/either_or_both.rs | 495 ++ .../vendor/itertools/src/exactly_one_err.rs | 110 + .../pl011/vendor/itertools/src/extrema_set.rs | 48 + .../pl011/vendor/itertools/src/flatten_ok.rs | 165 + .../char/pl011/vendor/itertools/src/format.rs | 168 + .../char/pl011/vendor/itertools/src/free.rs | 286 + .../pl011/vendor/itertools/src/group_map.rs | 32 + .../pl011/vendor/itertools/src/groupbylazy.rs | 579 ++ .../vendor/itertools/src/grouping_map.rs | 535 ++ .../pl011/vendor/itertools/src/impl_macros.rs | 29 + .../pl011/vendor/itertools/src/intersperse.rs | 118 + .../pl011/vendor/itertools/src/k_smallest.rs | 20 + .../pl011/vendor/itertools/src/kmerge_impl.rs | 227 + .../pl011/vendor/itertools/src/lazy_buffer.rs | 63 + .../hw/char/pl011/vendor/itertools/src/lib.rs | 3967 +++++++++++++ .../pl011/vendor/itertools/src/merge_join.rs | 220 + .../char/pl011/vendor/itertools/src/minmax.rs | 115 + .../vendor/itertools/src/multipeek_impl.rs | 101 + .../pl011/vendor/itertools/src/pad_tail.rs | 96 + .../pl011/vendor/itertools/src/peek_nth.rs | 102 + .../itertools/src/peeking_take_while.rs | 177 + .../vendor/itertools/src/permutations.rs | 277 + .../pl011/vendor/itertools/src/powerset.rs | 90 + .../itertools/src/process_results_impl.rs | 68 + .../vendor/itertools/src/put_back_n_impl.rs | 61 + .../pl011/vendor/itertools/src/rciter_impl.rs | 99 + .../pl011/vendor/itertools/src/repeatn.rs | 59 + .../pl011/vendor/itertools/src/size_hint.rs | 119 + .../pl011/vendor/itertools/src/sources.rs | 183 + .../itertools/src/take_while_inclusive.rs | 68 + 
.../hw/char/pl011/vendor/itertools/src/tee.rs | 78 + .../pl011/vendor/itertools/src/tuple_impl.rs | 331 ++ .../pl011/vendor/itertools/src/unique_impl.rs | 179 + .../pl011/vendor/itertools/src/unziptuple.rs | 80 + .../vendor/itertools/src/with_position.rs | 88 + .../pl011/vendor/itertools/src/zip_eq_impl.rs | 60 + .../pl011/vendor/itertools/src/zip_longest.rs | 83 + .../pl011/vendor/itertools/src/ziptuple.rs | 138 + .../itertools/tests/adaptors_no_collect.rs | 46 + .../vendor/itertools/tests/flatten_ok.rs | 76 + .../vendor/itertools/tests/macros_hygiene.rs | 13 + .../vendor/itertools/tests/merge_join.rs | 108 + .../itertools/tests/peeking_take_while.rs | 69 + .../pl011/vendor/itertools/tests/quick.rs | 1849 ++++++ .../vendor/itertools/tests/specializations.rs | 153 + .../pl011/vendor/itertools/tests/test_core.rs | 317 + .../pl011/vendor/itertools/tests/test_std.rs | 1184 ++++ .../pl011/vendor/itertools/tests/tuples.rs | 86 + .../char/pl011/vendor/itertools/tests/zip.rs | 77 + rust/hw/char/pl011/vendor/meson.build | 18 + .../.cargo-checksum.json | 1 + .../vendor/proc-macro-error-attr/Cargo.toml | 33 + .../proc-macro-error-attr/LICENSE-APACHE | 201 + .../vendor/proc-macro-error-attr/LICENSE-MIT | 21 + .../vendor/proc-macro-error-attr/build.rs | 5 + .../vendor/proc-macro-error-attr/meson.build | 20 + .../vendor/proc-macro-error-attr/src/lib.rs | 121 + .../vendor/proc-macro-error-attr/src/parse.rs | 89 + .../proc-macro-error-attr/src/settings.rs | 72 + .../proc-macro-error/.cargo-checksum.json | 1 + .../vendor/proc-macro-error/CHANGELOG.md | 162 + .../pl011/vendor/proc-macro-error/Cargo.toml | 56 + .../vendor/proc-macro-error/LICENSE-APACHE | 201 + .../pl011/vendor/proc-macro-error/LICENSE-MIT | 21 + .../pl011/vendor/proc-macro-error/README.md | 258 + .../pl011/vendor/proc-macro-error/build.rs | 11 + .../pl011/vendor/proc-macro-error/meson.build | 22 + .../vendor/proc-macro-error/src/diagnostic.rs | 349 ++ .../vendor/proc-macro-error/src/dummy.rs | 150 + .../proc-macro-error/src/imp/delegate.rs | 69 + .../proc-macro-error/src/imp/fallback.rs | 30 + .../pl011/vendor/proc-macro-error/src/lib.rs | 560 ++ .../vendor/proc-macro-error/src/macros.rs | 288 + .../vendor/proc-macro-error/src/sealed.rs | 3 + .../proc-macro-error/tests/macro-errors.rs | 8 + .../pl011/vendor/proc-macro-error/tests/ok.rs | 10 + .../proc-macro-error/tests/runtime-errors.rs | 13 + .../vendor/proc-macro-error/tests/ui/abort.rs | 11 + .../proc-macro-error/tests/ui/abort.stderr | 48 + .../proc-macro-error/tests/ui/append_dummy.rs | 13 + .../tests/ui/append_dummy.stderr | 5 + .../tests/ui/children_messages.rs | 6 + .../tests/ui/children_messages.stderr | 23 + .../vendor/proc-macro-error/tests/ui/dummy.rs | 13 + .../proc-macro-error/tests/ui/dummy.stderr | 5 + .../vendor/proc-macro-error/tests/ui/emit.rs | 7 + .../proc-macro-error/tests/ui/emit.stderr | 48 + .../tests/ui/explicit_span_range.rs | 6 + .../tests/ui/explicit_span_range.stderr | 5 + .../proc-macro-error/tests/ui/misuse.rs | 11 + .../proc-macro-error/tests/ui/misuse.stderr | 13 + .../tests/ui/multiple_tokens.rs | 6 + .../tests/ui/multiple_tokens.stderr | 5 + .../tests/ui/not_proc_macro.rs | 4 + .../tests/ui/not_proc_macro.stderr | 10 + .../proc-macro-error/tests/ui/option_ext.rs | 6 + .../tests/ui/option_ext.stderr | 7 + .../tests/ui/proc_macro_hack.rs | 10 + .../tests/ui/proc_macro_hack.stderr | 26 + .../proc-macro-error/tests/ui/result_ext.rs | 7 + .../tests/ui/result_ext.stderr | 11 + .../tests/ui/to_tokens_span.rs | 6 + .../tests/ui/to_tokens_span.stderr | 11 + 
.../tests/ui/unknown_setting.rs | 4 + .../tests/ui/unknown_setting.stderr | 5 + .../tests/ui/unrelated_panic.rs | 6 + .../tests/ui/unrelated_panic.stderr | 7 + .../vendor/proc-macro2/.cargo-checksum.json | 1 + .../char/pl011/vendor/proc-macro2/Cargo.toml | 104 + .../pl011/vendor/proc-macro2/LICENSE-APACHE | 176 + .../char/pl011/vendor/proc-macro2/LICENSE-MIT | 23 + .../char/pl011/vendor/proc-macro2/README.md | 94 + .../hw/char/pl011/vendor/proc-macro2/build.rs | 227 + .../pl011/vendor/proc-macro2/build/probe.rs | 25 + .../char/pl011/vendor/proc-macro2/meson.build | 19 + .../vendor/proc-macro2/rust-toolchain.toml | 2 + .../pl011/vendor/proc-macro2/src/detection.rs | 75 + .../pl011/vendor/proc-macro2/src/extra.rs | 151 + .../pl011/vendor/proc-macro2/src/fallback.rs | 1226 ++++ .../char/pl011/vendor/proc-macro2/src/lib.rs | 1369 +++++ .../pl011/vendor/proc-macro2/src/location.rs | 29 + .../pl011/vendor/proc-macro2/src/marker.rs | 17 + .../pl011/vendor/proc-macro2/src/parse.rs | 996 ++++ .../pl011/vendor/proc-macro2/src/rcvec.rs | 145 + .../pl011/vendor/proc-macro2/src/wrapper.rs | 993 ++++ .../vendor/proc-macro2/tests/comments.rs | 105 + .../vendor/proc-macro2/tests/features.rs | 8 + .../pl011/vendor/proc-macro2/tests/marker.rs | 100 + .../pl011/vendor/proc-macro2/tests/test.rs | 905 +++ .../vendor/proc-macro2/tests/test_fmt.rs | 28 + .../vendor/proc-macro2/tests/test_size.rs | 73 + .../pl011/vendor/quote/.cargo-checksum.json | 1 + rust/hw/char/pl011/vendor/quote/Cargo.toml | 50 + .../hw/char/pl011/vendor/quote/LICENSE-APACHE | 176 + rust/hw/char/pl011/vendor/quote/LICENSE-MIT | 23 + rust/hw/char/pl011/vendor/quote/README.md | 272 + rust/hw/char/pl011/vendor/quote/meson.build | 17 + .../pl011/vendor/quote/rust-toolchain.toml | 2 + rust/hw/char/pl011/vendor/quote/src/ext.rs | 110 + rust/hw/char/pl011/vendor/quote/src/format.rs | 168 + .../pl011/vendor/quote/src/ident_fragment.rs | 88 + rust/hw/char/pl011/vendor/quote/src/lib.rs | 1464 +++++ .../hw/char/pl011/vendor/quote/src/runtime.rs | 530 ++ .../hw/char/pl011/vendor/quote/src/spanned.rs | 50 + .../char/pl011/vendor/quote/src/to_tokens.rs | 209 + .../pl011/vendor/quote/tests/compiletest.rs | 7 + rust/hw/char/pl011/vendor/quote/tests/test.rs | 549 ++ .../ui/does-not-have-iter-interpolated-dup.rs | 9 + ...does-not-have-iter-interpolated-dup.stderr | 11 + .../ui/does-not-have-iter-interpolated.rs | 9 + .../ui/does-not-have-iter-interpolated.stderr | 11 + .../tests/ui/does-not-have-iter-separated.rs | 5 + .../ui/does-not-have-iter-separated.stderr | 10 + .../quote/tests/ui/does-not-have-iter.rs | 5 + .../quote/tests/ui/does-not-have-iter.stderr | 10 + .../vendor/quote/tests/ui/not-quotable.rs | 7 + .../vendor/quote/tests/ui/not-quotable.stderr | 20 + .../vendor/quote/tests/ui/not-repeatable.rs | 8 + .../quote/tests/ui/not-repeatable.stderr | 34 + .../vendor/quote/tests/ui/wrong-type-span.rs | 7 + .../quote/tests/ui/wrong-type-span.stderr | 10 + .../pl011/vendor/syn/.cargo-checksum.json | 1 + rust/hw/char/pl011/vendor/syn/Cargo.toml | 260 + rust/hw/char/pl011/vendor/syn/LICENSE-APACHE | 176 + rust/hw/char/pl011/vendor/syn/LICENSE-MIT | 23 + rust/hw/char/pl011/vendor/syn/README.md | 284 + rust/hw/char/pl011/vendor/syn/benches/file.rs | 57 + rust/hw/char/pl011/vendor/syn/benches/rust.rs | 182 + rust/hw/char/pl011/vendor/syn/meson.build | 24 + rust/hw/char/pl011/vendor/syn/src/attr.rs | 793 +++ rust/hw/char/pl011/vendor/syn/src/bigint.rs | 66 + rust/hw/char/pl011/vendor/syn/src/buffer.rs | 434 ++ rust/hw/char/pl011/vendor/syn/src/classify.rs | 377 
++ .../pl011/vendor/syn/src/custom_keyword.rs | 260 + .../vendor/syn/src/custom_punctuation.rs | 304 + rust/hw/char/pl011/vendor/syn/src/data.rs | 423 ++ rust/hw/char/pl011/vendor/syn/src/derive.rs | 259 + .../char/pl011/vendor/syn/src/discouraged.rs | 225 + rust/hw/char/pl011/vendor/syn/src/drops.rs | 58 + rust/hw/char/pl011/vendor/syn/src/error.rs | 467 ++ rust/hw/char/pl011/vendor/syn/src/export.rs | 73 + rust/hw/char/pl011/vendor/syn/src/expr.rs | 3960 +++++++++++++ rust/hw/char/pl011/vendor/syn/src/ext.rs | 136 + rust/hw/char/pl011/vendor/syn/src/file.rs | 130 + rust/hw/char/pl011/vendor/syn/src/fixup.rs | 218 + .../hw/char/pl011/vendor/syn/src/gen/clone.rs | 2209 +++++++ .../hw/char/pl011/vendor/syn/src/gen/debug.rs | 3160 ++++++++++ rust/hw/char/pl011/vendor/syn/src/gen/eq.rs | 2242 +++++++ rust/hw/char/pl011/vendor/syn/src/gen/fold.rs | 3779 ++++++++++++ rust/hw/char/pl011/vendor/syn/src/gen/hash.rs | 2807 +++++++++ .../hw/char/pl011/vendor/syn/src/gen/visit.rs | 3858 ++++++++++++ .../pl011/vendor/syn/src/gen/visit_mut.rs | 3855 ++++++++++++ rust/hw/char/pl011/vendor/syn/src/generics.rs | 1286 ++++ rust/hw/char/pl011/vendor/syn/src/group.rs | 291 + rust/hw/char/pl011/vendor/syn/src/ident.rs | 108 + rust/hw/char/pl011/vendor/syn/src/item.rs | 3441 +++++++++++ rust/hw/char/pl011/vendor/syn/src/lib.rs | 1019 ++++ rust/hw/char/pl011/vendor/syn/src/lifetime.rs | 156 + rust/hw/char/pl011/vendor/syn/src/lit.rs | 1830 ++++++ .../hw/char/pl011/vendor/syn/src/lookahead.rs | 169 + rust/hw/char/pl011/vendor/syn/src/mac.rs | 223 + rust/hw/char/pl011/vendor/syn/src/macros.rs | 166 + rust/hw/char/pl011/vendor/syn/src/meta.rs | 427 ++ rust/hw/char/pl011/vendor/syn/src/op.rs | 219 + rust/hw/char/pl011/vendor/syn/src/parse.rs | 1397 +++++ .../pl011/vendor/syn/src/parse_macro_input.rs | 128 + .../char/pl011/vendor/syn/src/parse_quote.rs | 210 + rust/hw/char/pl011/vendor/syn/src/pat.rs | 953 +++ rust/hw/char/pl011/vendor/syn/src/path.rs | 886 +++ .../char/pl011/vendor/syn/src/precedence.rs | 163 + rust/hw/char/pl011/vendor/syn/src/print.rs | 16 + .../char/pl011/vendor/syn/src/punctuated.rs | 1132 ++++ .../char/pl011/vendor/syn/src/restriction.rs | 176 + rust/hw/char/pl011/vendor/syn/src/sealed.rs | 4 + rust/hw/char/pl011/vendor/syn/src/span.rs | 63 + rust/hw/char/pl011/vendor/syn/src/spanned.rs | 118 + rust/hw/char/pl011/vendor/syn/src/stmt.rs | 481 ++ rust/hw/char/pl011/vendor/syn/src/thread.rs | 60 + rust/hw/char/pl011/vendor/syn/src/token.rs | 1138 ++++ rust/hw/char/pl011/vendor/syn/src/tt.rs | 107 + rust/hw/char/pl011/vendor/syn/src/ty.rs | 1216 ++++ rust/hw/char/pl011/vendor/syn/src/verbatim.rs | 33 + .../char/pl011/vendor/syn/src/whitespace.rs | 65 + .../char/pl011/vendor/syn/tests/common/eq.rs | 900 +++ .../char/pl011/vendor/syn/tests/common/mod.rs | 28 + .../pl011/vendor/syn/tests/common/parse.rs | 49 + .../char/pl011/vendor/syn/tests/debug/gen.rs | 5163 +++++++++++++++++ .../char/pl011/vendor/syn/tests/debug/mod.rs | 147 + .../char/pl011/vendor/syn/tests/macros/mod.rs | 93 + .../char/pl011/vendor/syn/tests/regression.rs | 5 + .../vendor/syn/tests/regression/issue1108.rs | 5 + .../vendor/syn/tests/regression/issue1235.rs | 32 + .../char/pl011/vendor/syn/tests/repo/mod.rs | 461 ++ .../pl011/vendor/syn/tests/repo/progress.rs | 37 + .../pl011/vendor/syn/tests/test_asyncness.rs | 43 + .../pl011/vendor/syn/tests/test_attribute.rs | 225 + .../vendor/syn/tests/test_derive_input.rs | 781 +++ .../char/pl011/vendor/syn/tests/test_expr.rs | 692 +++ .../pl011/vendor/syn/tests/test_generics.rs | 282 + 
.../pl011/vendor/syn/tests/test_grouping.rs | 53 + .../char/pl011/vendor/syn/tests/test_ident.rs | 87 + .../char/pl011/vendor/syn/tests/test_item.rs | 332 ++ .../pl011/vendor/syn/tests/test_iterators.rs | 70 + .../char/pl011/vendor/syn/tests/test_lit.rs | 331 ++ .../char/pl011/vendor/syn/tests/test_meta.rs | 154 + .../vendor/syn/tests/test_parse_buffer.rs | 103 + .../vendor/syn/tests/test_parse_quote.rs | 166 + .../vendor/syn/tests/test_parse_stream.rs | 187 + .../char/pl011/vendor/syn/tests/test_pat.rs | 152 + .../char/pl011/vendor/syn/tests/test_path.rs | 130 + .../pl011/vendor/syn/tests/test_precedence.rs | 537 ++ .../pl011/vendor/syn/tests/test_receiver.rs | 321 + .../pl011/vendor/syn/tests/test_round_trip.rs | 253 + .../pl011/vendor/syn/tests/test_shebang.rs | 67 + .../char/pl011/vendor/syn/tests/test_size.rs | 36 + .../char/pl011/vendor/syn/tests/test_stmt.rs | 322 + .../vendor/syn/tests/test_token_trees.rs | 32 + .../hw/char/pl011/vendor/syn/tests/test_ty.rs | 397 ++ .../pl011/vendor/syn/tests/test_visibility.rs | 185 + .../char/pl011/vendor/syn/tests/zzz_stable.rs | 33 + .../vendor/unicode-ident/.cargo-checksum.json | 1 + .../pl011/vendor/unicode-ident/Cargo.toml | 63 + .../pl011/vendor/unicode-ident/LICENSE-APACHE | 176 + .../pl011/vendor/unicode-ident/LICENSE-MIT | 23 + .../vendor/unicode-ident/LICENSE-UNICODE | 46 + .../char/pl011/vendor/unicode-ident/README.md | 283 + .../pl011/vendor/unicode-ident/benches/xid.rs | 124 + .../pl011/vendor/unicode-ident/meson.build | 14 + .../pl011/vendor/unicode-ident/src/lib.rs | 269 + .../pl011/vendor/unicode-ident/src/tables.rs | 651 +++ .../vendor/unicode-ident/tests/compare.rs | 67 + .../vendor/unicode-ident/tests/fst/mod.rs | 11 + .../unicode-ident/tests/fst/xid_continue.fst | Bin 0 -> 73249 bytes .../unicode-ident/tests/fst/xid_start.fst | Bin 0 -> 65487 bytes .../vendor/unicode-ident/tests/roaring/mod.rs | 21 + .../vendor/unicode-ident/tests/static_size.rs | 95 + .../vendor/unicode-ident/tests/tables/mod.rs | 7 + .../unicode-ident/tests/tables/tables.rs | 347 ++ .../vendor/unicode-ident/tests/trie/mod.rs | 7 + .../vendor/unicode-ident/tests/trie/trie.rs | 445 ++ .../vendor/version_check/.cargo-checksum.json | 1 + .../pl011/vendor/version_check/Cargo.toml | 24 + .../pl011/vendor/version_check/LICENSE-APACHE | 201 + .../pl011/vendor/version_check/LICENSE-MIT | 19 + .../char/pl011/vendor/version_check/README.md | 80 + .../pl011/vendor/version_check/meson.build | 14 + .../pl011/vendor/version_check/src/channel.rs | 193 + .../pl011/vendor/version_check/src/date.rs | 203 + .../pl011/vendor/version_check/src/lib.rs | 493 ++ .../pl011/vendor/version_check/src/version.rs | 316 + 365 files changed, 108770 insertions(+) create mode 100644 rust/hw/char/pl011/vendor/either/README.rst create mode 100644 rust/hw/char/pl011/vendor/arbitrary-int/.cargo-checksum.json create mode 100644 rust/hw/char/pl011/vendor/arbitrary-int/CHANGELOG.md create mode 100644 rust/hw/char/pl011/vendor/arbitrary-int/Cargo.toml create mode 100644 rust/hw/char/pl011/vendor/arbitrary-int/LICENSE.txt create mode 100644 rust/hw/char/pl011/vendor/arbitrary-int/README.md create mode 100644 rust/hw/char/pl011/vendor/arbitrary-int/meson.build create mode 100644 rust/hw/char/pl011/vendor/arbitrary-int/src/lib.rs create mode 100644 rust/hw/char/pl011/vendor/arbitrary-int/tests/tests.rs create mode 100644 rust/hw/char/pl011/vendor/bilge-impl/.cargo-checksum.json create mode 100644 rust/hw/char/pl011/vendor/bilge-impl/Cargo.toml create mode 100644 
rust/hw/char/pl011/vendor/bilge-impl/README.md create mode 100644 rust/hw/char/pl011/vendor/bilge-impl/meson.build create mode 100644 rust/hw/char/pl011/vendor/bilge-impl/src/bitsize.rs create mode 100644 rust/hw/char/pl011/vendor/bilge-impl/src/bitsize/split.rs create mode 100644 rust/hw/char/pl011/vendor/bilge-impl/src/bitsize_internal.rs create mode 100644 rust/hw/char/pl011/vendor/bilge-impl/src/bitsize_internal/struct_gen.rs create mode 100644 rust/hw/char/pl011/vendor/bilge-impl/src/debug_bits.rs create mode 100644 rust/hw/char/pl011/vendor/bilge-impl/src/default_bits.rs create mode 100644 rust/hw/char/pl011/vendor/bilge-impl/src/fmt_bits.rs create mode 100644 rust/hw/char/pl011/vendor/bilge-impl/src/from_bits.rs create mode 100644 rust/hw/char/pl011/vendor/bilge-impl/src/lib.rs create mode 100644 rust/hw/char/pl011/vendor/bilge-impl/src/shared.rs create mode 100644 rust/hw/char/pl011/vendor/bilge-impl/src/shared/discriminant_assigner.rs create mode 100644 rust/hw/char/pl011/vendor/bilge-impl/src/shared/fallback.rs create mode 100644 rust/hw/char/pl011/vendor/bilge-impl/src/shared/util.rs create mode 100644 rust/hw/char/pl011/vendor/bilge-impl/src/try_from_bits.rs create mode 100644 rust/hw/char/pl011/vendor/bilge/.cargo-checksum.json create mode 100644 rust/hw/char/pl011/vendor/bilge/Cargo.toml create mode 100644 rust/hw/char/pl011/vendor/bilge/LICENSE-APACHE create mode 100644 rust/hw/char/pl011/vendor/bilge/LICENSE-MIT create mode 100644 rust/hw/char/pl011/vendor/bilge/README.md create mode 100644 rust/hw/char/pl011/vendor/bilge/meson.build create mode 100644 rust/hw/char/pl011/vendor/bilge/src/lib.rs create mode 100644 rust/hw/char/pl011/vendor/either/.cargo-checksum.json create mode 100644 rust/hw/char/pl011/vendor/either/Cargo.toml create mode 100644 rust/hw/char/pl011/vendor/either/LICENSE-APACHE create mode 100644 rust/hw/char/pl011/vendor/either/LICENSE-MIT create mode 100644 rust/hw/char/pl011/vendor/either/README-crates.io.md create mode 100644 rust/hw/char/pl011/vendor/either/meson.build create mode 100644 rust/hw/char/pl011/vendor/either/src/into_either.rs create mode 100644 rust/hw/char/pl011/vendor/either/src/iterator.rs create mode 100644 rust/hw/char/pl011/vendor/either/src/lib.rs create mode 100644 rust/hw/char/pl011/vendor/either/src/serde_untagged.rs create mode 100644 rust/hw/char/pl011/vendor/either/src/serde_untagged_optional.rs create mode 100644 rust/hw/char/pl011/vendor/itertools/.cargo-checksum.json create mode 100644 rust/hw/char/pl011/vendor/itertools/CHANGELOG.md create mode 100644 rust/hw/char/pl011/vendor/itertools/Cargo.lock create mode 100644 rust/hw/char/pl011/vendor/itertools/Cargo.toml create mode 100644 rust/hw/char/pl011/vendor/itertools/LICENSE-APACHE create mode 100644 rust/hw/char/pl011/vendor/itertools/LICENSE-MIT create mode 100644 rust/hw/char/pl011/vendor/itertools/README.md create mode 100644 rust/hw/char/pl011/vendor/itertools/benches/bench1.rs create mode 100644 rust/hw/char/pl011/vendor/itertools/benches/combinations.rs create mode 100644 rust/hw/char/pl011/vendor/itertools/benches/combinations_with_replacement.rs create mode 100644 rust/hw/char/pl011/vendor/itertools/benches/extra/mod.rs create mode 100644 rust/hw/char/pl011/vendor/itertools/benches/extra/zipslices.rs create mode 100644 rust/hw/char/pl011/vendor/itertools/benches/fold_specialization.rs create mode 100644 rust/hw/char/pl011/vendor/itertools/benches/powerset.rs create mode 100644 rust/hw/char/pl011/vendor/itertools/benches/tree_fold1.rs create mode 100644 
rust/hw/char/pl011/vendor/itertools/benches/tuple_combinations.rs create mode 100644 rust/hw/char/pl011/vendor/itertools/benches/tuples.rs create mode 100644 rust/hw/char/pl011/vendor/itertools/examples/iris.data create mode 100644 rust/hw/char/pl011/vendor/itertools/examples/iris.rs create mode 100644 rust/hw/char/pl011/vendor/itertools/meson.build create mode 100644 rust/hw/char/pl011/vendor/itertools/src/adaptors/coalesce.rs create mode 100644 rust/hw/char/pl011/vendor/itertools/src/adaptors/map.rs create mode 100644 rust/hw/char/pl011/vendor/itertools/src/adaptors/mod.rs create mode 100644 rust/hw/char/pl011/vendor/itertools/src/adaptors/multi_product.rs create mode 100644 rust/hw/char/pl011/vendor/itertools/src/combinations.rs create mode 100644 rust/hw/char/pl011/vendor/itertools/src/combinations_with_replacement.rs create mode 100644 rust/hw/char/pl011/vendor/itertools/src/concat_impl.rs create mode 100644 rust/hw/char/pl011/vendor/itertools/src/cons_tuples_impl.rs create mode 100644 rust/hw/char/pl011/vendor/itertools/src/diff.rs create mode 100644 rust/hw/char/pl011/vendor/itertools/src/duplicates_impl.rs create mode 100644 rust/hw/char/pl011/vendor/itertools/src/either_or_both.rs create mode 100644 rust/hw/char/pl011/vendor/itertools/src/exactly_one_err.rs create mode 100644 rust/hw/char/pl011/vendor/itertools/src/extrema_set.rs create mode 100644 rust/hw/char/pl011/vendor/itertools/src/flatten_ok.rs create mode 100644 rust/hw/char/pl011/vendor/itertools/src/format.rs create mode 100644 rust/hw/char/pl011/vendor/itertools/src/free.rs create mode 100644 rust/hw/char/pl011/vendor/itertools/src/group_map.rs create mode 100644 rust/hw/char/pl011/vendor/itertools/src/groupbylazy.rs create mode 100644 rust/hw/char/pl011/vendor/itertools/src/grouping_map.rs create mode 100644 rust/hw/char/pl011/vendor/itertools/src/impl_macros.rs create mode 100644 rust/hw/char/pl011/vendor/itertools/src/intersperse.rs create mode 100644 rust/hw/char/pl011/vendor/itertools/src/k_smallest.rs create mode 100644 rust/hw/char/pl011/vendor/itertools/src/kmerge_impl.rs create mode 100644 rust/hw/char/pl011/vendor/itertools/src/lazy_buffer.rs create mode 100644 rust/hw/char/pl011/vendor/itertools/src/lib.rs create mode 100644 rust/hw/char/pl011/vendor/itertools/src/merge_join.rs create mode 100644 rust/hw/char/pl011/vendor/itertools/src/minmax.rs create mode 100644 rust/hw/char/pl011/vendor/itertools/src/multipeek_impl.rs create mode 100644 rust/hw/char/pl011/vendor/itertools/src/pad_tail.rs create mode 100644 rust/hw/char/pl011/vendor/itertools/src/peek_nth.rs create mode 100644 rust/hw/char/pl011/vendor/itertools/src/peeking_take_while.rs create mode 100644 rust/hw/char/pl011/vendor/itertools/src/permutations.rs create mode 100644 rust/hw/char/pl011/vendor/itertools/src/powerset.rs create mode 100644 rust/hw/char/pl011/vendor/itertools/src/process_results_impl.rs create mode 100644 rust/hw/char/pl011/vendor/itertools/src/put_back_n_impl.rs create mode 100644 rust/hw/char/pl011/vendor/itertools/src/rciter_impl.rs create mode 100644 rust/hw/char/pl011/vendor/itertools/src/repeatn.rs create mode 100644 rust/hw/char/pl011/vendor/itertools/src/size_hint.rs create mode 100644 rust/hw/char/pl011/vendor/itertools/src/sources.rs create mode 100644 rust/hw/char/pl011/vendor/itertools/src/take_while_inclusive.rs create mode 100644 rust/hw/char/pl011/vendor/itertools/src/tee.rs create mode 100644 rust/hw/char/pl011/vendor/itertools/src/tuple_impl.rs create mode 100644 
rust/hw/char/pl011/vendor/itertools/src/unique_impl.rs create mode 100644 rust/hw/char/pl011/vendor/itertools/src/unziptuple.rs create mode 100644 rust/hw/char/pl011/vendor/itertools/src/with_position.rs create mode 100644 rust/hw/char/pl011/vendor/itertools/src/zip_eq_impl.rs create mode 100644 rust/hw/char/pl011/vendor/itertools/src/zip_longest.rs create mode 100644 rust/hw/char/pl011/vendor/itertools/src/ziptuple.rs create mode 100644 rust/hw/char/pl011/vendor/itertools/tests/adaptors_no_collect.rs create mode 100644 rust/hw/char/pl011/vendor/itertools/tests/flatten_ok.rs create mode 100644 rust/hw/char/pl011/vendor/itertools/tests/macros_hygiene.rs create mode 100644 rust/hw/char/pl011/vendor/itertools/tests/merge_join.rs create mode 100644 rust/hw/char/pl011/vendor/itertools/tests/peeking_take_while.rs create mode 100644 rust/hw/char/pl011/vendor/itertools/tests/quick.rs create mode 100644 rust/hw/char/pl011/vendor/itertools/tests/specializations.rs create mode 100644 rust/hw/char/pl011/vendor/itertools/tests/test_core.rs create mode 100644 rust/hw/char/pl011/vendor/itertools/tests/test_std.rs create mode 100644 rust/hw/char/pl011/vendor/itertools/tests/tuples.rs create mode 100644 rust/hw/char/pl011/vendor/itertools/tests/zip.rs create mode 100644 rust/hw/char/pl011/vendor/meson.build create mode 100644 rust/hw/char/pl011/vendor/proc-macro-error-attr/.cargo-checksum.json create mode 100644 rust/hw/char/pl011/vendor/proc-macro-error-attr/Cargo.toml create mode 100644 rust/hw/char/pl011/vendor/proc-macro-error-attr/LICENSE-APACHE create mode 100644 rust/hw/char/pl011/vendor/proc-macro-error-attr/LICENSE-MIT create mode 100644 rust/hw/char/pl011/vendor/proc-macro-error-attr/build.rs create mode 100644 rust/hw/char/pl011/vendor/proc-macro-error-attr/meson.build create mode 100644 rust/hw/char/pl011/vendor/proc-macro-error-attr/src/lib.rs create mode 100644 rust/hw/char/pl011/vendor/proc-macro-error-attr/src/parse.rs create mode 100644 rust/hw/char/pl011/vendor/proc-macro-error-attr/src/settings.rs create mode 100644 rust/hw/char/pl011/vendor/proc-macro-error/.cargo-checksum.json create mode 100644 rust/hw/char/pl011/vendor/proc-macro-error/CHANGELOG.md create mode 100644 rust/hw/char/pl011/vendor/proc-macro-error/Cargo.toml create mode 100644 rust/hw/char/pl011/vendor/proc-macro-error/LICENSE-APACHE create mode 100644 rust/hw/char/pl011/vendor/proc-macro-error/LICENSE-MIT create mode 100644 rust/hw/char/pl011/vendor/proc-macro-error/README.md create mode 100644 rust/hw/char/pl011/vendor/proc-macro-error/build.rs create mode 100644 rust/hw/char/pl011/vendor/proc-macro-error/meson.build create mode 100644 rust/hw/char/pl011/vendor/proc-macro-error/src/diagnostic.rs create mode 100644 rust/hw/char/pl011/vendor/proc-macro-error/src/dummy.rs create mode 100644 rust/hw/char/pl011/vendor/proc-macro-error/src/imp/delegate.rs create mode 100644 rust/hw/char/pl011/vendor/proc-macro-error/src/imp/fallback.rs create mode 100644 rust/hw/char/pl011/vendor/proc-macro-error/src/lib.rs create mode 100644 rust/hw/char/pl011/vendor/proc-macro-error/src/macros.rs create mode 100644 rust/hw/char/pl011/vendor/proc-macro-error/src/sealed.rs create mode 100644 rust/hw/char/pl011/vendor/proc-macro-error/tests/macro-errors.rs create mode 100644 rust/hw/char/pl011/vendor/proc-macro-error/tests/ok.rs create mode 100644 rust/hw/char/pl011/vendor/proc-macro-error/tests/runtime-errors.rs create mode 100644 rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/abort.rs create mode 100644 
rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/abort.stderr create mode 100644 rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/append_dummy.rs create mode 100644 rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/append_dummy.stderr create mode 100644 rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/children_messages.rs create mode 100644 rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/children_messages.stderr create mode 100644 rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/dummy.rs create mode 100644 rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/dummy.stderr create mode 100644 rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/emit.rs create mode 100644 rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/emit.stderr create mode 100644 rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/explicit_span_range.rs create mode 100644 rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/explicit_span_range.stderr create mode 100644 rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/misuse.rs create mode 100644 rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/misuse.stderr create mode 100644 rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/multiple_tokens.rs create mode 100644 rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/multiple_tokens.stderr create mode 100644 rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/not_proc_macro.rs create mode 100644 rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/not_proc_macro.stderr create mode 100644 rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/option_ext.rs create mode 100644 rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/option_ext.stderr create mode 100644 rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/proc_macro_hack.rs create mode 100644 rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/proc_macro_hack.stderr create mode 100644 rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/result_ext.rs create mode 100644 rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/result_ext.stderr create mode 100644 rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/to_tokens_span.rs create mode 100644 rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/to_tokens_span.stderr create mode 100644 rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/unknown_setting.rs create mode 100644 rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/unknown_setting.stderr create mode 100644 rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/unrelated_panic.rs create mode 100644 rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/unrelated_panic.stderr create mode 100644 rust/hw/char/pl011/vendor/proc-macro2/.cargo-checksum.json create mode 100644 rust/hw/char/pl011/vendor/proc-macro2/Cargo.toml create mode 100644 rust/hw/char/pl011/vendor/proc-macro2/LICENSE-APACHE create mode 100644 rust/hw/char/pl011/vendor/proc-macro2/LICENSE-MIT create mode 100644 rust/hw/char/pl011/vendor/proc-macro2/README.md create mode 100644 rust/hw/char/pl011/vendor/proc-macro2/build.rs create mode 100644 rust/hw/char/pl011/vendor/proc-macro2/build/probe.rs create mode 100644 rust/hw/char/pl011/vendor/proc-macro2/meson.build create mode 100644 rust/hw/char/pl011/vendor/proc-macro2/rust-toolchain.toml create mode 100644 rust/hw/char/pl011/vendor/proc-macro2/src/detection.rs create mode 100644 rust/hw/char/pl011/vendor/proc-macro2/src/extra.rs create mode 100644 rust/hw/char/pl011/vendor/proc-macro2/src/fallback.rs create mode 100644 rust/hw/char/pl011/vendor/proc-macro2/src/lib.rs create mode 100644 
rust/hw/char/pl011/vendor/proc-macro2/src/location.rs create mode 100644 rust/hw/char/pl011/vendor/proc-macro2/src/marker.rs create mode 100644 rust/hw/char/pl011/vendor/proc-macro2/src/parse.rs create mode 100644 rust/hw/char/pl011/vendor/proc-macro2/src/rcvec.rs create mode 100644 rust/hw/char/pl011/vendor/proc-macro2/src/wrapper.rs create mode 100644 rust/hw/char/pl011/vendor/proc-macro2/tests/comments.rs create mode 100644 rust/hw/char/pl011/vendor/proc-macro2/tests/features.rs create mode 100644 rust/hw/char/pl011/vendor/proc-macro2/tests/marker.rs create mode 100644 rust/hw/char/pl011/vendor/proc-macro2/tests/test.rs create mode 100644 rust/hw/char/pl011/vendor/proc-macro2/tests/test_fmt.rs create mode 100644 rust/hw/char/pl011/vendor/proc-macro2/tests/test_size.rs create mode 100644 rust/hw/char/pl011/vendor/quote/.cargo-checksum.json create mode 100644 rust/hw/char/pl011/vendor/quote/Cargo.toml create mode 100644 rust/hw/char/pl011/vendor/quote/LICENSE-APACHE create mode 100644 rust/hw/char/pl011/vendor/quote/LICENSE-MIT create mode 100644 rust/hw/char/pl011/vendor/quote/README.md create mode 100644 rust/hw/char/pl011/vendor/quote/meson.build create mode 100644 rust/hw/char/pl011/vendor/quote/rust-toolchain.toml create mode 100644 rust/hw/char/pl011/vendor/quote/src/ext.rs create mode 100644 rust/hw/char/pl011/vendor/quote/src/format.rs create mode 100644 rust/hw/char/pl011/vendor/quote/src/ident_fragment.rs create mode 100644 rust/hw/char/pl011/vendor/quote/src/lib.rs create mode 100644 rust/hw/char/pl011/vendor/quote/src/runtime.rs create mode 100644 rust/hw/char/pl011/vendor/quote/src/spanned.rs create mode 100644 rust/hw/char/pl011/vendor/quote/src/to_tokens.rs create mode 100644 rust/hw/char/pl011/vendor/quote/tests/compiletest.rs create mode 100644 rust/hw/char/pl011/vendor/quote/tests/test.rs create mode 100644 rust/hw/char/pl011/vendor/quote/tests/ui/does-not-have-iter-interpolated-dup.rs create mode 100644 rust/hw/char/pl011/vendor/quote/tests/ui/does-not-have-iter-interpolated-dup.stderr create mode 100644 rust/hw/char/pl011/vendor/quote/tests/ui/does-not-have-iter-interpolated.rs create mode 100644 rust/hw/char/pl011/vendor/quote/tests/ui/does-not-have-iter-interpolated.stderr create mode 100644 rust/hw/char/pl011/vendor/quote/tests/ui/does-not-have-iter-separated.rs create mode 100644 rust/hw/char/pl011/vendor/quote/tests/ui/does-not-have-iter-separated.stderr create mode 100644 rust/hw/char/pl011/vendor/quote/tests/ui/does-not-have-iter.rs create mode 100644 rust/hw/char/pl011/vendor/quote/tests/ui/does-not-have-iter.stderr create mode 100644 rust/hw/char/pl011/vendor/quote/tests/ui/not-quotable.rs create mode 100644 rust/hw/char/pl011/vendor/quote/tests/ui/not-quotable.stderr create mode 100644 rust/hw/char/pl011/vendor/quote/tests/ui/not-repeatable.rs create mode 100644 rust/hw/char/pl011/vendor/quote/tests/ui/not-repeatable.stderr create mode 100644 rust/hw/char/pl011/vendor/quote/tests/ui/wrong-type-span.rs create mode 100644 rust/hw/char/pl011/vendor/quote/tests/ui/wrong-type-span.stderr create mode 100644 rust/hw/char/pl011/vendor/syn/.cargo-checksum.json create mode 100644 rust/hw/char/pl011/vendor/syn/Cargo.toml create mode 100644 rust/hw/char/pl011/vendor/syn/LICENSE-APACHE create mode 100644 rust/hw/char/pl011/vendor/syn/LICENSE-MIT create mode 100644 rust/hw/char/pl011/vendor/syn/README.md create mode 100644 rust/hw/char/pl011/vendor/syn/benches/file.rs create mode 100644 rust/hw/char/pl011/vendor/syn/benches/rust.rs create mode 100644 
rust/hw/char/pl011/vendor/syn/meson.build create mode 100644 rust/hw/char/pl011/vendor/syn/src/attr.rs create mode 100644 rust/hw/char/pl011/vendor/syn/src/bigint.rs create mode 100644 rust/hw/char/pl011/vendor/syn/src/buffer.rs create mode 100644 rust/hw/char/pl011/vendor/syn/src/classify.rs create mode 100644 rust/hw/char/pl011/vendor/syn/src/custom_keyword.rs create mode 100644 rust/hw/char/pl011/vendor/syn/src/custom_punctuation.rs create mode 100644 rust/hw/char/pl011/vendor/syn/src/data.rs create mode 100644 rust/hw/char/pl011/vendor/syn/src/derive.rs create mode 100644 rust/hw/char/pl011/vendor/syn/src/discouraged.rs create mode 100644 rust/hw/char/pl011/vendor/syn/src/drops.rs create mode 100644 rust/hw/char/pl011/vendor/syn/src/error.rs create mode 100644 rust/hw/char/pl011/vendor/syn/src/export.rs create mode 100644 rust/hw/char/pl011/vendor/syn/src/expr.rs create mode 100644 rust/hw/char/pl011/vendor/syn/src/ext.rs create mode 100644 rust/hw/char/pl011/vendor/syn/src/file.rs create mode 100644 rust/hw/char/pl011/vendor/syn/src/fixup.rs create mode 100644 rust/hw/char/pl011/vendor/syn/src/gen/clone.rs create mode 100644 rust/hw/char/pl011/vendor/syn/src/gen/debug.rs create mode 100644 rust/hw/char/pl011/vendor/syn/src/gen/eq.rs create mode 100644 rust/hw/char/pl011/vendor/syn/src/gen/fold.rs create mode 100644 rust/hw/char/pl011/vendor/syn/src/gen/hash.rs create mode 100644 rust/hw/char/pl011/vendor/syn/src/gen/visit.rs create mode 100644 rust/hw/char/pl011/vendor/syn/src/gen/visit_mut.rs create mode 100644 rust/hw/char/pl011/vendor/syn/src/generics.rs create mode 100644 rust/hw/char/pl011/vendor/syn/src/group.rs create mode 100644 rust/hw/char/pl011/vendor/syn/src/ident.rs create mode 100644 rust/hw/char/pl011/vendor/syn/src/item.rs create mode 100644 rust/hw/char/pl011/vendor/syn/src/lib.rs create mode 100644 rust/hw/char/pl011/vendor/syn/src/lifetime.rs create mode 100644 rust/hw/char/pl011/vendor/syn/src/lit.rs create mode 100644 rust/hw/char/pl011/vendor/syn/src/lookahead.rs create mode 100644 rust/hw/char/pl011/vendor/syn/src/mac.rs create mode 100644 rust/hw/char/pl011/vendor/syn/src/macros.rs create mode 100644 rust/hw/char/pl011/vendor/syn/src/meta.rs create mode 100644 rust/hw/char/pl011/vendor/syn/src/op.rs create mode 100644 rust/hw/char/pl011/vendor/syn/src/parse.rs create mode 100644 rust/hw/char/pl011/vendor/syn/src/parse_macro_input.rs create mode 100644 rust/hw/char/pl011/vendor/syn/src/parse_quote.rs create mode 100644 rust/hw/char/pl011/vendor/syn/src/pat.rs create mode 100644 rust/hw/char/pl011/vendor/syn/src/path.rs create mode 100644 rust/hw/char/pl011/vendor/syn/src/precedence.rs create mode 100644 rust/hw/char/pl011/vendor/syn/src/print.rs create mode 100644 rust/hw/char/pl011/vendor/syn/src/punctuated.rs create mode 100644 rust/hw/char/pl011/vendor/syn/src/restriction.rs create mode 100644 rust/hw/char/pl011/vendor/syn/src/sealed.rs create mode 100644 rust/hw/char/pl011/vendor/syn/src/span.rs create mode 100644 rust/hw/char/pl011/vendor/syn/src/spanned.rs create mode 100644 rust/hw/char/pl011/vendor/syn/src/stmt.rs create mode 100644 rust/hw/char/pl011/vendor/syn/src/thread.rs create mode 100644 rust/hw/char/pl011/vendor/syn/src/token.rs create mode 100644 rust/hw/char/pl011/vendor/syn/src/tt.rs create mode 100644 rust/hw/char/pl011/vendor/syn/src/ty.rs create mode 100644 rust/hw/char/pl011/vendor/syn/src/verbatim.rs create mode 100644 rust/hw/char/pl011/vendor/syn/src/whitespace.rs create mode 100644 rust/hw/char/pl011/vendor/syn/tests/common/eq.rs 
create mode 100644 rust/hw/char/pl011/vendor/syn/tests/common/mod.rs create mode 100644 rust/hw/char/pl011/vendor/syn/tests/common/parse.rs create mode 100644 rust/hw/char/pl011/vendor/syn/tests/debug/gen.rs create mode 100644 rust/hw/char/pl011/vendor/syn/tests/debug/mod.rs create mode 100644 rust/hw/char/pl011/vendor/syn/tests/macros/mod.rs create mode 100644 rust/hw/char/pl011/vendor/syn/tests/regression.rs create mode 100644 rust/hw/char/pl011/vendor/syn/tests/regression/issue1108.rs create mode 100644 rust/hw/char/pl011/vendor/syn/tests/regression/issue1235.rs create mode 100644 rust/hw/char/pl011/vendor/syn/tests/repo/mod.rs create mode 100644 rust/hw/char/pl011/vendor/syn/tests/repo/progress.rs create mode 100644 rust/hw/char/pl011/vendor/syn/tests/test_asyncness.rs create mode 100644 rust/hw/char/pl011/vendor/syn/tests/test_attribute.rs create mode 100644 rust/hw/char/pl011/vendor/syn/tests/test_derive_input.rs create mode 100644 rust/hw/char/pl011/vendor/syn/tests/test_expr.rs create mode 100644 rust/hw/char/pl011/vendor/syn/tests/test_generics.rs create mode 100644 rust/hw/char/pl011/vendor/syn/tests/test_grouping.rs create mode 100644 rust/hw/char/pl011/vendor/syn/tests/test_ident.rs create mode 100644 rust/hw/char/pl011/vendor/syn/tests/test_item.rs create mode 100644 rust/hw/char/pl011/vendor/syn/tests/test_iterators.rs create mode 100644 rust/hw/char/pl011/vendor/syn/tests/test_lit.rs create mode 100644 rust/hw/char/pl011/vendor/syn/tests/test_meta.rs create mode 100644 rust/hw/char/pl011/vendor/syn/tests/test_parse_buffer.rs create mode 100644 rust/hw/char/pl011/vendor/syn/tests/test_parse_quote.rs create mode 100644 rust/hw/char/pl011/vendor/syn/tests/test_parse_stream.rs create mode 100644 rust/hw/char/pl011/vendor/syn/tests/test_pat.rs create mode 100644 rust/hw/char/pl011/vendor/syn/tests/test_path.rs create mode 100644 rust/hw/char/pl011/vendor/syn/tests/test_precedence.rs create mode 100644 rust/hw/char/pl011/vendor/syn/tests/test_receiver.rs create mode 100644 rust/hw/char/pl011/vendor/syn/tests/test_round_trip.rs create mode 100644 rust/hw/char/pl011/vendor/syn/tests/test_shebang.rs create mode 100644 rust/hw/char/pl011/vendor/syn/tests/test_size.rs create mode 100644 rust/hw/char/pl011/vendor/syn/tests/test_stmt.rs create mode 100644 rust/hw/char/pl011/vendor/syn/tests/test_token_trees.rs create mode 100644 rust/hw/char/pl011/vendor/syn/tests/test_ty.rs create mode 100644 rust/hw/char/pl011/vendor/syn/tests/test_visibility.rs create mode 100644 rust/hw/char/pl011/vendor/syn/tests/zzz_stable.rs create mode 100644 rust/hw/char/pl011/vendor/unicode-ident/.cargo-checksum.json create mode 100644 rust/hw/char/pl011/vendor/unicode-ident/Cargo.toml create mode 100644 rust/hw/char/pl011/vendor/unicode-ident/LICENSE-APACHE create mode 100644 rust/hw/char/pl011/vendor/unicode-ident/LICENSE-MIT create mode 100644 rust/hw/char/pl011/vendor/unicode-ident/LICENSE-UNICODE create mode 100644 rust/hw/char/pl011/vendor/unicode-ident/README.md create mode 100644 rust/hw/char/pl011/vendor/unicode-ident/benches/xid.rs create mode 100644 rust/hw/char/pl011/vendor/unicode-ident/meson.build create mode 100644 rust/hw/char/pl011/vendor/unicode-ident/src/lib.rs create mode 100644 rust/hw/char/pl011/vendor/unicode-ident/src/tables.rs create mode 100644 rust/hw/char/pl011/vendor/unicode-ident/tests/compare.rs create mode 100644 rust/hw/char/pl011/vendor/unicode-ident/tests/fst/mod.rs create mode 100644 rust/hw/char/pl011/vendor/unicode-ident/tests/fst/xid_continue.fst create mode 100644 
rust/hw/char/pl011/vendor/unicode-ident/tests/fst/xid_start.fst create mode 100644 rust/hw/char/pl011/vendor/unicode-ident/tests/roaring/mod.rs create mode 100644 rust/hw/char/pl011/vendor/unicode-ident/tests/static_size.rs create mode 100644 rust/hw/char/pl011/vendor/unicode-ident/tests/tables/mod.rs create mode 100644 rust/hw/char/pl011/vendor/unicode-ident/tests/tables/tables.rs create mode 100644 rust/hw/char/pl011/vendor/unicode-ident/tests/trie/mod.rs create mode 100644 rust/hw/char/pl011/vendor/unicode-ident/tests/trie/trie.rs create mode 100644 rust/hw/char/pl011/vendor/version_check/.cargo-checksum.json create mode 100644 rust/hw/char/pl011/vendor/version_check/Cargo.toml create mode 100644 rust/hw/char/pl011/vendor/version_check/LICENSE-APACHE create mode 100644 rust/hw/char/pl011/vendor/version_check/LICENSE-MIT create mode 100644 rust/hw/char/pl011/vendor/version_check/README.md create mode 100644 rust/hw/char/pl011/vendor/version_check/meson.build create mode 100644 rust/hw/char/pl011/vendor/version_check/src/channel.rs create mode 100644 rust/hw/char/pl011/vendor/version_check/src/date.rs create mode 100644 rust/hw/char/pl011/vendor/version_check/src/lib.rs create mode 100644 rust/hw/char/pl011/vendor/version_check/src/version.rs diff --git a/rust/hw/char/pl011/vendor/either/README.rst b/rust/hw/char/pl011/vendor/either/README.rst new file mode 100644 index 0000000000..659257fdcd --- /dev/null +++ b/rust/hw/char/pl011/vendor/either/README.rst @@ -0,0 +1,185 @@ + +Either +====== + +The enum ``Either`` with variants ``Left`` and ``Right`` and trait +implementations including Iterator, Read, Write. + +Either has methods that are similar to Option and Result. + +Includes convenience macros ``try_left!()`` and ``try_right!()`` to use for +short-circuiting logic. + +Please read the `API documentation here`__ + +__ https://docs.rs/either/ + +|build_status|_ |crates|_ + +.. |build_status| image:: https://github.com/rayon-rs/either/workflows/CI/badge.svg?branch=main +.. _build_status: https://github.com/rayon-rs/either/actions + +.. |crates| image:: https://img.shields.io/crates/v/either.svg +.. _crates: https://crates.io/crates/either + +How to use with cargo:: + + [dependencies] + either = "1.12" + + +Recent Changes +-------------- + +- 1.12.0 + + - **MSRV**: ``either`` now requires Rust 1.37 or later. + + - Specialize ``nth_back`` for ``Either`` and ``IterEither``, by @cuviper (#106) + +- 1.11.0 + + - Add new trait ``IntoEither`` that is useful to convert to ``Either`` in method chains, + by @SFM61319 (#101) + +- 1.10.0 + + - Add new methods ``.factor_iter()``, ``.factor_iter_mut()``, and ``.factor_into_iter()`` + that return ``Either`` items, plus ``.iter()`` and ``.iter_mut()`` to convert to direct + referene iterators; by @aj-bagwell and @cuviper (#91) + +- 1.9.0 + + - Add new methods ``.map_either()`` and ``.map_either_with()``, by @nasadorian (#82) + +- 1.8.1 + + - Clarified that the multiple licenses are combined with OR. + +- 1.8.0 + + - **MSRV**: ``either`` now requires Rust 1.36 or later. + + - Add new methods ``.as_pin_ref()`` and ``.as_pin_mut()`` to project a + pinned ``Either`` as inner ``Pin`` variants, by @cuviper (#77) + + - Implement the ``Future`` trait, by @cuviper (#77) + + - Specialize more methods of the ``io`` traits, by @Kixunil and @cuviper (#75) + +- 1.7.0 + + - **MSRV**: ``either`` now requires Rust 1.31 or later. 
+ + - Export the macro ``for_both!``, by @thomaseizinger (#58) + + - Implement the ``io::Seek`` trait, by @Kerollmops (#60) + + - Add new method ``.either_into()`` for ``Into`` conversion, by @TonalidadeHidrica (#63) + + - Add new methods ``.factor_ok()``, ``.factor_err()``, and ``.factor_none()``, + by @zachs18 (#67) + + - Specialize ``source`` in the ``Error`` implementation, by @thomaseizinger (#69) + + - Specialize more iterator methods and implement the ``FusedIterator`` trait, + by @Ten0 (#66) and @cuviper (#71) + + - Specialize ``Clone::clone_from``, by @cuviper (#72) + +- 1.6.1 + + - Add new methods ``.expect_left()``, ``.unwrap_left()``, + and equivalents on the right, by @spenserblack (#51) + +- 1.6.0 + + - Add new modules ``serde_untagged`` and ``serde_untagged_optional`` to customize + how ``Either`` fields are serialized in other types, by @MikailBag (#49) + +- 1.5.3 + + - Add new method ``.map()`` for ``Either`` by @nvzqz (#40). + +- 1.5.2 + + - Add new methods ``.left_or()``, ``.left_or_default()``, ``.left_or_else()``, + and equivalents on the right, by @DCjanus (#36) + +- 1.5.1 + + - Add ``AsRef`` and ``AsMut`` implementations for common unsized types: + ``str``, ``[T]``, ``CStr``, ``OsStr``, and ``Path``, by @mexus (#29) + +- 1.5.0 + + - Add new methods ``.factor_first()``, ``.factor_second()`` and ``.into_inner()`` + by @mathstuf (#19) + +- 1.4.0 + + - Add inherent method ``.into_iter()`` by @cuviper (#12) + +- 1.3.0 + + - Add opt-in serde support by @hcpl + +- 1.2.0 + + - Add method ``.either_with()`` by @Twey (#13) + +- 1.1.0 + + - Add methods ``left_and_then``, ``right_and_then`` by @rampantmonkey + - Include license files in the repository and released crate + +- 1.0.3 + + - Add crate categories + +- 1.0.2 + + - Forward more ``Iterator`` methods + - Implement ``Extend`` for ``Either`` if ``L, R`` do. + +- 1.0.1 + + - Fix ``Iterator`` impl for ``Either`` to forward ``.fold()``. + +- 1.0.0 + + - Add default crate feature ``use_std`` so that you can opt out of linking to + std. + +- 0.1.7 + + - Add methods ``.map_left()``, ``.map_right()`` and ``.either()``. + - Add more documentation + +- 0.1.3 + + - Implement Display, Error + +- 0.1.2 + + - Add macros ``try_left!`` and ``try_right!``. + +- 0.1.1 + + - Implement Deref, DerefMut + +- 0.1.0 + + - Initial release + - Support Iterator, Read, Write + +License +------- + +Dual-licensed to be compatible with the Rust project. + +Licensed under the Apache License, Version 2.0 +https://www.apache.org/licenses/LICENSE-2.0 or the MIT license +https://opensource.org/licenses/MIT, at your +option. This file may not be copied, modified, or distributed +except according to those terms. 
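To make the ``Either`` summary above concrete, here is a minimal, illustrative sketch (not part of the vendored crate; the ``digits`` helper and its values are made up) of the common pattern of returning one of two iterator types behind a single concrete ``Either`` type:

```rust
use either::Either;

// Either<L, R> implements Iterator when both L and R are iterators with the
// same Item, so one concrete type can stand in for two different branches.
fn digits(reverse: bool) -> impl Iterator<Item = u32> {
    if reverse {
        Either::Left((0..10u32).rev())
    } else {
        Either::Right(0..10u32)
    }
}

fn main() {
    let collected: Vec<u32> = digits(true).collect();
    assert_eq!(collected.first(), Some(&9));
}
```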
diff --git a/rust/hw/char/pl011/vendor/arbitrary-int/.cargo-checksum.json b/rust/hw/char/pl011/vendor/arbitrary-int/.cargo-checksum.json new file mode 100644 index 0000000000..39c2d4d0e0 --- /dev/null +++ b/rust/hw/char/pl011/vendor/arbitrary-int/.cargo-checksum.json @@ -0,0 +1 @@ +{"files":{"CHANGELOG.md":"d34e39d5bd6b0ba740cae9b7afe9fdf73ae1bedc080de338d238ef577cffe963","Cargo.toml":"0a410a8ab28d72b00c04feeb289be7b725347732443cee6e1a91fb3f193e907b","LICENSE.txt":"6982f0cd109b04512cbb5f0e0f0ef82154f33a57d2127afe058ecc72039ab88c","README.md":"c3ee6e3ec5365bd9f6daddacf2b49204d7d777d09afe896b57451bb0365bea21","src/lib.rs":"9bda88688cfebe72e386d9fbb0bd4570a7631ccc20eef58a0e14b6aadd4724ea","tests/tests.rs":"116002067e9b697d4f22b5f28f23363ade2ed9dd6b59661388272f7c6d4b20f1"},"package":"c84fc003e338a6f69fbd4f7fe9f92b535ff13e9af8997f3b14b6ddff8b1df46d"} \ No newline at end of file diff --git a/rust/hw/char/pl011/vendor/arbitrary-int/CHANGELOG.md b/rust/hw/char/pl011/vendor/arbitrary-int/CHANGELOG.md new file mode 100644 index 0000000000..a31fa94c96 --- /dev/null +++ b/rust/hw/char/pl011/vendor/arbitrary-int/CHANGELOG.md @@ -0,0 +1,47 @@ +# Changelog + +## arbitrary-int 1.2.7 + +### Added + +- Support `Step` so that arbitrary-int can be used in a range expression, e.g. `for n in u3::MIN..=u3::MAX { println!("{n}") }`. Note this trait is currently unstable, and so is only usable in nightly. Enable this feature with `step_trait`. +- Support formatting via [defmt](https://crates.io/crates/defmt). Enable the option `defmt` feature +- Support serializing and deserializing via [serde](https://crates.io/crates/serde). Enable the option `serde` feature +- Support `Mul`, `MulAssign`, `Div`, `DivAssign` +- The following new methods were implemented to make arbitrary ints feel more like built-in types: + * `wrapping_add`, `wrapping_sub`, `wrapping_mul`, `wrapping_div`, `wrapping_shl`, `wrapping_shr` + * `saturating_add`, `saturating_sub`, `saturating_mul`, `saturating_div`, `saturating_pow` + * `checked_add`, `checked_sub`, `checked_mul`, `checked_div`, `checked_shl`, `checked_shr` + * `overflowing_add`, `overflowing_sub`, `overflowing_mul`, `overflowing_div`, `overflowing_shl`, `overflowing_shr` + +### Changed +- In debug builds, `<<` (`Shl`, `ShlAssign`) and `>>` (`Shr`, `ShrAssign`) now bounds-check the shift amount using the same semantics as built-in shifts. For example, shifting a u5 by 5 or more bits will now panic as expected. + +## arbitrary-int 1.2.6 + +### Added + +- Support `LowerHex`, `UpperHex`, `Octal`, `Binary` so that arbitrary-int can be printed via e.g. `format!("{:x}", u4::new(12))` +- Support `Hash` so that arbitrary-int can be used in hash tables + +### Changed + +- As support for `[const_trait]` has recently been removed from structs like `From` in upstream Rust, opting-in to the `nightly` feature no longer enables this behavior as that would break the build. To continue using this feature with older compiler versions, use `const_convert_and_const_trait_impl` instead. + +## arbitrary-int 1.2.5 + +### Added + +- Types that can be expressed as full bytes (e.g. 
u24, u48) have the following new methods: + * `swap_bytes()` + * `to_le_bytes()` + * `to_be_bytes()` + * `to_ne_bytes()` + * `to_be()` + * `to_le()` + +### Changed + +- `#[inline]` is specified in more places + +### Fixed diff --git a/rust/hw/char/pl011/vendor/arbitrary-int/Cargo.toml b/rust/hw/char/pl011/vendor/arbitrary-int/Cargo.toml new file mode 100644 index 0000000000..810071d602 --- /dev/null +++ b/rust/hw/char/pl011/vendor/arbitrary-int/Cargo.toml @@ -0,0 +1,54 @@ +# THIS FILE IS AUTOMATICALLY GENERATED BY CARGO +# +# When uploading crates to the registry Cargo will automatically +# "normalize" Cargo.toml files for maximal compatibility +# with all versions of Cargo and also rewrite `path` dependencies +# to registry (e.g., crates.io) dependencies. +# +# If you are reading this file be aware that the original Cargo.toml +# will likely look very different (and much more reasonable). +# See Cargo.toml.orig for the original contents. + +[package] +edition = "2021" +name = "arbitrary-int" +version = "1.2.7" +authors = ["Daniel Lehmann "] +description = "Modern and lightweight implementation of u2, u3, u4, ..., u127." +readme = "README.md" +keywords = [ + "integer", + "unaligned", + "misaligned", +] +categories = [ + "embedded", + "no-std", + "data-structures", +] +license = "MIT" +repository = "https://github.com/danlehmann/arbitrary-int" + +[dependencies.defmt] +version = "0.3.5" +optional = true + +[dependencies.num-traits] +version = "0.2.17" +optional = true +default-features = false + +[dependencies.serde] +version = "1.0" +optional = true +default-features = false + +[dev-dependencies.serde_test] +version = "1.0" + +[features] +const_convert_and_const_trait_impl = [] +defmt = ["dep:defmt"] +serde = ["dep:serde"] +std = [] +step_trait = [] diff --git a/rust/hw/char/pl011/vendor/arbitrary-int/LICENSE.txt b/rust/hw/char/pl011/vendor/arbitrary-int/LICENSE.txt new file mode 100644 index 0000000000..eb8c29c461 --- /dev/null +++ b/rust/hw/char/pl011/vendor/arbitrary-int/LICENSE.txt @@ -0,0 +1,21 @@ +MIT License + +Copyright (c) 2022 Daniel Lehmann + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. \ No newline at end of file diff --git a/rust/hw/char/pl011/vendor/arbitrary-int/README.md b/rust/hw/char/pl011/vendor/arbitrary-int/README.md new file mode 100644 index 0000000000..d34676fd93 --- /dev/null +++ b/rust/hw/char/pl011/vendor/arbitrary-int/README.md @@ -0,0 +1,72 @@ +# arbitrary-int + +This crate implements arbitrary numbers for Rust. Once included, you can use types like `u5` or `u120`. 
+ +## Why yet another arbitrary integer crate? + +There are quite a few similar crates to this one (the most famous being https://crates.io/crates/ux). After trying out a +few of them I just realized that they are all very heavy: They create a ton of classes and take seconds to compile. + +This crate is designed to be very short, using const generics. Instead of introducing ~123 new structs, this crate only +introduces 5 (one for `u8`, `u16`, `u32`, `u64`, `u128`) and uses const generics for the specific bit depth. +It does introduce 123 new type aliases (`u1`, `u2`, etc.), but these don't stress the compiler nearly as much. + +Additionally, most of its functions are const, so that they can be used in const contexts. + +## How to use + +Unlike primitive data types like `u32`, there is no intrinsic syntax (Rust does not allow that). An instance is created as +follows: + +```rust +let value9 = u9::new(30); +``` + +This will create a value with 9 bits. If the value passed into `new()` doesn't fit, a panic! will be raised. This means +that a function that accepts a `u9` as an argument can be certain that its contents are never larger than a `u9`. + +Standard operators are all overloaded, so it is possible to perform calculations using this type. Note that addition +and subtraction (at least in debug mode) perform a bounds check. If this is undesired, see the num-traits chapter below. + +Internally, `u9` will hold its data in a `u16`. It is possible to get this value: + +```rust +let value9 = u9::new(30).value(); +``` + +## Underlying data type + +This crate defines types `u1`, `u2`, .., `u126`, `u127` (skipping the normal `u8`, `u16`, `u32`, `u64`, `u128`). Each of those types holds +its actual data in the next larger data type (e.g. a `u14` internally has a `u16`, a `u120` internally has a `u128`). However, +`uXX` are just type aliases; it is also possible to use the actual underlying generic struct: + +```rust +let a = UInt::<u8, 5>::new(0b10101); +let b = UInt::<u32, 5>::new(0b10101); +``` + +In this example, `a` will have 5 bits and be represented by a `u8`. This is identical to `u5`. `b` however is represented by a +`u32`, so it is a different type from `u5`. + +## Extract + +A common source for arbitrary integers is extracting them from bitfields. For example, if `data` contained 32 bits and +we want to extract bits `4..=9`, we could perform the following: + +```rust +let a = u6::new(((data >> 4) & 0b111111) as u8); +``` + +This is a pretty common operation, but it's easy to get it wrong: the number of 1 bits in the mask and the width of `u6` have to match. Also, `new()` +will internally perform a bounds check, which can panic. Thirdly, a type cast is often needed. +To make this easier, various extract methods exist that handle the shifting and masking, for example: + +```rust +let a = u6::extract_u32(data, 4); +let b = u12::extract_u128(data2, 63); +``` + +## num-traits + +By default, arbitrary-int doesn't require any other crates. It has optional support for num-traits, however: it +implements `WrappingAdd` and `WrappingSub`, which (unlike the regular addition and subtraction) don't perform bounds checks.
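As a quick illustration of the wrapping arithmetic introduced in 1.2.7 and the extraction helpers described above, a minimal sketch (the constants are made up for illustration and only the crate's default features are assumed):

```rust
use arbitrary_int::{u4, u6};

fn main() {
    // wrapping_add stays inside the 4-bit range instead of panicking.
    let a = u4::new(0b1111);                // 15, the maximum u4 value
    let b = a.wrapping_add(u4::new(1));     // 15 + 1 wraps around to 0
    assert_eq!(b.value(), 0);

    // extract_u32 replaces the manual shift-and-mask shown in the README.
    let data: u32 = 0b0011_1010_0000;
    let field = u6::extract_u32(data, 4);   // bits 4..=9 of `data`
    assert_eq!(field.value(), 0b11_1010);
}
```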
diff --git a/rust/hw/char/pl011/vendor/arbitrary-int/meson.build b/rust/hw/char/pl011/vendor/arbitrary-int/meson.build new file mode 100644 index 0000000000..e02139a5bc --- /dev/null +++ b/rust/hw/char/pl011/vendor/arbitrary-int/meson.build @@ -0,0 +1,14 @@ +_arbitrary_int_rs = static_library( + 'arbitrary_int', + files('src/lib.rs'), + gnu_symbol_visibility: 'hidden', + rust_abi: 'rust', + rust_args: rust_args + [ + '--edition', '2021', + ], + dependencies: [], +) + +dep_arbitrary_int = declare_dependency( + link_with: _arbitrary_int_rs, +) diff --git a/rust/hw/char/pl011/vendor/arbitrary-int/src/lib.rs b/rust/hw/char/pl011/vendor/arbitrary-int/src/lib.rs new file mode 100644 index 0000000000..4c2b9c3997 --- /dev/null +++ b/rust/hw/char/pl011/vendor/arbitrary-int/src/lib.rs @@ -0,0 +1,1489 @@ +#![cfg_attr(not(feature = "std"), no_std)] +#![cfg_attr( + feature = "const_convert_and_const_trait_impl", + feature(const_convert, const_trait_impl) +)] +#![cfg_attr(feature = "step_trait", feature(step_trait))] + +use core::fmt::{Binary, Debug, Display, Formatter, LowerHex, Octal, UpperHex}; +use core::hash::{Hash, Hasher}; +#[cfg(feature = "step_trait")] +use core::iter::Step; +#[cfg(feature = "num-traits")] +use core::num::Wrapping; +use core::ops::{ + Add, AddAssign, BitAnd, BitAndAssign, BitOr, BitOrAssign, BitXor, BitXorAssign, Div, DivAssign, + Mul, MulAssign, Not, Shl, ShlAssign, Shr, ShrAssign, Sub, SubAssign, +}; +#[cfg(feature = "serde")] +use serde::{Deserialize, Deserializer, Serialize, Serializer}; + +#[derive(Debug, Clone, Eq, PartialEq)] +pub struct TryNewError; + +impl Display for TryNewError { + fn fmt(&self, f: &mut Formatter) -> core::fmt::Result { + write!(f, "Value too large to fit within this integer type") + } +} + +#[cfg_attr(feature = "const_convert_and_const_trait_impl", const_trait)] +pub trait Number: Sized { + type UnderlyingType: Debug + + From + + TryFrom + + TryFrom + + TryFrom + + TryFrom; + + /// Number of bits that can fit in this type + const BITS: usize; + + /// Minimum value that can be represented by this type + const MIN: Self; + + /// Maximum value that can be represented by this type + const MAX: Self; + + fn new(value: Self::UnderlyingType) -> Self; + + fn try_new(value: Self::UnderlyingType) -> Result; + + fn value(self) -> Self::UnderlyingType; +} + +#[cfg(feature = "const_convert_and_const_trait_impl")] +macro_rules! impl_number_native { + ($( $type:ty ),+) => { + $( + impl const Number for $type { + type UnderlyingType = $type; + const BITS: usize = Self::BITS as usize; + const MIN: Self = Self::MIN; + const MAX: Self = Self::MAX; + + #[inline] + fn new(value: Self::UnderlyingType) -> Self { value } + + #[inline] + fn try_new(value: Self::UnderlyingType) -> Result { Ok(value) } + + #[inline] + fn value(self) -> Self::UnderlyingType { self } + } + )+ + }; +} + +#[cfg(not(feature = "const_convert_and_const_trait_impl"))] +macro_rules! 
impl_number_native { + ($( $type:ty ),+) => { + $( + impl Number for $type { + type UnderlyingType = $type; + const BITS: usize = Self::BITS as usize; + const MIN: Self = Self::MIN; + const MAX: Self = Self::MAX; + + #[inline] + fn new(value: Self::UnderlyingType) -> Self { value } + + #[inline] + fn try_new(value: Self::UnderlyingType) -> Result { Ok(value) } + + #[inline] + fn value(self) -> Self::UnderlyingType { self } + } + )+ + }; +} + +impl_number_native!(u8, u16, u32, u64, u128); + +struct CompileTimeAssert {} + +impl CompileTimeAssert { + pub const SMALLER_OR_EQUAL: () = { + assert!(A <= B); + }; +} + +#[derive(Copy, Clone, Eq, PartialEq, Default, Ord, PartialOrd)] +pub struct UInt { + value: T, +} + +impl UInt { + pub const BITS: usize = BITS; + + /// Returns the type as a fundamental data type + #[inline] + pub const fn value(self) -> T { + self.value + } + + /// Initializes a new value without checking the bounds + /// + /// # Safety + /// Must only be called with a value less than or equal to [Self::MAX](Self::MAX) value. + #[inline] + pub const unsafe fn new_unchecked(value: T) -> Self { + Self { value } + } +} + +impl UInt +where + Self: Number, + T: Copy, +{ + pub const MASK: T = Self::MAX.value; +} + +// Next are specific implementations for u8, u16, u32, u64 and u128. A couple notes: +// - The existence of MAX also serves as a neat bounds-check for BITS: If BITS is too large, +// the subtraction overflows which will fail to compile. This simplifies things a lot. +// However, that only works if every constructor also uses MAX somehow (doing let _ = MAX is enough) + +#[cfg(feature = "const_convert_and_const_trait_impl")] +macro_rules! uint_impl_num { + ($($type:ident),+) => { + $( + impl const Number for UInt<$type, BITS> { + type UnderlyingType = $type; + + const BITS: usize = BITS; + + const MIN: Self = Self { value: 0 }; + + // The existence of MAX also serves as a bounds check: If NUM_BITS is > available bits, + // we will get a compiler error right here + const MAX: Self = Self { value: (<$type as Number>::MAX >> (<$type as Number>::BITS - Self::BITS)) }; + + #[inline] + fn try_new(value: Self::UnderlyingType) -> Result { + if value <= Self::MAX.value { + Ok(Self { value }) + } else { + Err(TryNewError{}) + } + } + + #[inline] + fn new(value: $type) -> Self { + assert!(value <= Self::MAX.value); + + Self { value } + } + + #[inline] + fn value(self) -> $type { + self.value + } + } + )+ + }; +} + +#[cfg(not(feature = "const_convert_and_const_trait_impl"))] +macro_rules! uint_impl_num { + ($($type:ident),+) => { + $( + impl Number for UInt<$type, BITS> { + type UnderlyingType = $type; + + const BITS: usize = BITS; + + const MIN: Self = Self { value: 0 }; + + // The existence of MAX also serves as a bounds check: If NUM_BITS is > available bits, + // we will get a compiler error right here + const MAX: Self = Self { value: (<$type as Number>::MAX >> (<$type as Number>::BITS - Self::BITS)) }; + + #[inline] + fn try_new(value: Self::UnderlyingType) -> Result { + if value <= Self::MAX.value { + Ok(Self { value }) + } else { + Err(TryNewError{}) + } + } + + #[inline] + fn new(value: $type) -> Self { + assert!(value <= Self::MAX.value); + + Self { value } + } + + #[inline] + fn value(self) -> $type { + self.value + } + } + )+ + }; +} + +uint_impl_num!(u8, u16, u32, u64, u128); + +macro_rules! uint_impl { + ($($type:ident),+) => { + $( + impl UInt<$type, BITS> { + /// Creates an instance. 
Panics if the given value is outside of the valid range + #[inline] + pub const fn new(value: $type) -> Self { + assert!(value <= Self::MAX.value); + + Self { value } + } + + /// Creates an instance or an error if the given value is outside of the valid range + #[inline] + pub const fn try_new(value: $type) -> Result { + if value <= Self::MAX.value { + Ok(Self { value }) + } else { + Err(TryNewError {}) + } + } + + #[deprecated(note = "Use one of the specific functions like extract_u32")] + pub const fn extract(value: $type, start_bit: usize) -> Self { + assert!(start_bit + BITS <= $type::BITS as usize); + // Query MAX to ensure that we get a compiler error if the current definition is bogus (e.g. ) + let _ = Self::MAX; + + Self { + value: (value >> start_bit) & Self::MAX.value, + } + } + + /// Extracts bits from a given value. The extract is equivalent to: `new((value >> start_bit) & MASK)` + /// Unlike new, extract doesn't perform range-checking so it is slightly more efficient. + /// panics if start_bit+ doesn't fit within an u8, e.g. u5::extract_u8(8, 4); + #[inline] + pub const fn extract_u8(value: u8, start_bit: usize) -> Self { + assert!(start_bit + BITS <= 8); + // Query MAX to ensure that we get a compiler error if the current definition is bogus (e.g. ) + let _ = Self::MAX; + + Self { + value: ((value >> start_bit) as $type) & Self::MAX.value, + } + } + + /// Extracts bits from a given value. The extract is equivalent to: `new((value >> start_bit) & MASK)` + /// Unlike new, extract doesn't perform range-checking so it is slightly more efficient + /// panics if start_bit+ doesn't fit within a u16, e.g. u15::extract_u16(8, 2); + #[inline] + pub const fn extract_u16(value: u16, start_bit: usize) -> Self { + assert!(start_bit + BITS <= 16); + // Query MAX to ensure that we get a compiler error if the current definition is bogus (e.g. ) + let _ = Self::MAX; + + Self { + value: ((value >> start_bit) as $type) & Self::MAX.value, + } + } + + /// Extracts bits from a given value. The extract is equivalent to: `new((value >> start_bit) & MASK)` + /// Unlike new, extract doesn't perform range-checking so it is slightly more efficient + /// panics if start_bit+ doesn't fit within a u32, e.g. u30::extract_u32(8, 4); + #[inline] + pub const fn extract_u32(value: u32, start_bit: usize) -> Self { + assert!(start_bit + BITS <= 32); + // Query MAX to ensure that we get a compiler error if the current definition is bogus (e.g. ) + let _ = Self::MAX; + + Self { + value: ((value >> start_bit) as $type) & Self::MAX.value, + } + } + + /// Extracts bits from a given value. The extract is equivalent to: `new((value >> start_bit) & MASK)` + /// Unlike new, extract doesn't perform range-checking so it is slightly more efficient + /// panics if start_bit+ doesn't fit within a u64, e.g. u60::extract_u64(8, 5); + #[inline] + pub const fn extract_u64(value: u64, start_bit: usize) -> Self { + assert!(start_bit + BITS <= 64); + // Query MAX to ensure that we get a compiler error if the current definition is bogus (e.g. ) + let _ = Self::MAX; + + Self { + value: ((value >> start_bit) as $type) & Self::MAX.value, + } + } + + /// Extracts bits from a given value. The extract is equivalent to: `new((value >> start_bit) & MASK)` + /// Unlike new, extract doesn't perform range-checking so it is slightly more efficient + /// panics if start_bit+ doesn't fit within a u128, e.g. 
u120::extract_u64(8, 9); + #[inline] + pub const fn extract_u128(value: u128, start_bit: usize) -> Self { + assert!(start_bit + BITS <= 128); + // Query MAX to ensure that we get a compiler error if the current definition is bogus (e.g. ) + let _ = Self::MAX; + + Self { + value: ((value >> start_bit) as $type) & Self::MAX.value, + } + } + + /// Returns a UInt with a wider bit depth but with the same base data type + pub const fn widen( + self, + ) -> UInt<$type, BITS_RESULT> { + let _ = CompileTimeAssert::::SMALLER_OR_EQUAL; + // Query MAX of the result to ensure we get a compiler error if the current definition is bogus (e.g. ) + let _ = UInt::<$type, BITS_RESULT>::MAX; + UInt::<$type, BITS_RESULT> { value: self.value } + } + + pub const fn wrapping_add(self, rhs: Self) -> Self { + let sum = self.value.wrapping_add(rhs.value); + Self { + value: sum & Self::MASK, + } + } + + pub const fn wrapping_sub(self, rhs: Self) -> Self { + let sum = self.value.wrapping_sub(rhs.value); + Self { + value: sum & Self::MASK, + } + } + + pub const fn wrapping_mul(self, rhs: Self) -> Self { + let sum = self.value.wrapping_mul(rhs.value); + Self { + value: sum & Self::MASK, + } + } + + pub const fn wrapping_div(self, rhs: Self) -> Self { + let sum = self.value.wrapping_div(rhs.value); + Self { + // No need to mask here - divisions always produce a result that is <= self + value: sum, + } + } + + pub const fn wrapping_shl(self, rhs: u32) -> Self { + // modulo is expensive on some platforms, so only do it when necessary + let shift_amount = if rhs >= (BITS as u32) { + rhs % (BITS as u32) + } else { + rhs + }; + + Self { + // We could use wrapping_shl here to make Debug builds slightly smaller; + // the downside would be that on weird CPUs that don't do wrapping_shl by + // default release builds would get slightly worse. Using << should give + // good release performance everywere + value: (self.value << shift_amount) & Self::MASK, + } + } + + pub const fn wrapping_shr(self, rhs: u32) -> Self { + // modulo is expensive on some platforms, so only do it when necessary + let shift_amount = if rhs >= (BITS as u32) { + rhs % (BITS as u32) + } else { + rhs + }; + + Self { + value: (self.value >> shift_amount), + } + } + + pub const fn saturating_add(self, rhs: Self) -> Self { + let saturated = if core::mem::size_of::<$type>() << 3 == BITS { + // We are something like a UInt::. We can fallback to the base implementation + self.value.saturating_add(rhs.value) + } else { + // We're dealing with fewer bits than the underlying type (e.g. u7). + // That means the addition can never overflow the underlying type + let sum = self.value.wrapping_add(rhs.value); + let max = Self::MAX.value(); + if sum > max { max } else { sum } + }; + Self { + value: saturated, + } + } + + pub const fn saturating_sub(self, rhs: Self) -> Self { + // For unsigned numbers, the only difference is when we reach 0 - which is the same + // no matter the data size + Self { + value: self.value.saturating_sub(rhs.value), + } + } + + pub const fn saturating_mul(self, rhs: Self) -> Self { + let product = if BITS << 1 <= (core::mem::size_of::<$type>() << 3) { + // We have half the bits (e.g. u4 * u4) of the base type, so we can't overflow the base type + // wrapping_mul likely provides the best performance on all cpus + self.value.wrapping_mul(rhs.value) + } else { + // We have more than half the bits (e.g. 
u6 * u6) + self.value.saturating_mul(rhs.value) + }; + + let max = Self::MAX.value(); + let saturated = if product > max { max } else { product }; + Self { + value: saturated, + } + } + + pub const fn saturating_div(self, rhs: Self) -> Self { + // When dividing unsigned numbers, we never need to saturate. + // Divison by zero in saturating_div throws an exception (in debug and release mode), + // so no need to do anything special there either + Self { + value: self.value.saturating_div(rhs.value), + } + } + + pub const fn saturating_pow(self, exp: u32) -> Self { + // It might be possible to handwrite this to be slightly faster as both + // saturating_pow has to do a bounds-check and then we do second one + let powed = self.value.saturating_pow(exp); + let max = Self::MAX.value(); + let saturated = if powed > max { max } else { powed }; + Self { + value: saturated, + } + } + + pub const fn checked_add(self, rhs: Self) -> Option { + if core::mem::size_of::<$type>() << 3 == BITS { + // We are something like a UInt::. We can fallback to the base implementation + match self.value.checked_add(rhs.value) { + Some(value) => Some(Self { value }), + None => None + } + } else { + // We're dealing with fewer bits than the underlying type (e.g. u7). + // That means the addition can never overflow the underlying type + let sum = self.value.wrapping_add(rhs.value); + if sum > Self::MAX.value() { None } else { Some(Self { value: sum })} + } + } + + pub const fn checked_sub(self, rhs: Self) -> Option { + match self.value.checked_sub(rhs.value) { + Some(value) => Some(Self { value }), + None => None + } + } + + pub const fn checked_mul(self, rhs: Self) -> Option { + let product = if BITS << 1 <= (core::mem::size_of::<$type>() << 3) { + // We have half the bits (e.g. u4 * u4) of the base type, so we can't overflow the base type + // wrapping_mul likely provides the best performance on all cpus + Some(self.value.wrapping_mul(rhs.value)) + } else { + // We have more than half the bits (e.g. u6 * u6) + self.value.checked_mul(rhs.value) + }; + + match product { + Some(value) => { + if value > Self::MAX.value() { + None + } else { + Some(Self {value}) + } + } + None => None + } + } + + pub const fn checked_div(self, rhs: Self) -> Option { + match self.value.checked_div(rhs.value) { + Some(value) => Some(Self { value }), + None => None + } + } + + pub const fn checked_shl(self, rhs: u32) -> Option { + if rhs >= (BITS as u32) { + None + } else { + Some(Self { + value: (self.value << rhs) & Self::MASK, + }) + } + } + + pub const fn checked_shr(self, rhs: u32) -> Option { + if rhs >= (BITS as u32) { + None + } else { + Some(Self { + value: (self.value >> rhs), + }) + } + } + + pub const fn overflowing_add(self, rhs: Self) -> (Self, bool) { + let (value, overflow) = if core::mem::size_of::<$type>() << 3 == BITS { + // We are something like a UInt::. We can fallback to the base implementation + self.value.overflowing_add(rhs.value) + } else { + // We're dealing with fewer bits than the underlying type (e.g. u7). + // That means the addition can never overflow the underlying type + let sum = self.value.wrapping_add(rhs.value); + let masked = sum & Self::MASK; + (masked, masked != sum) + }; + (Self { value }, overflow) + } + + pub const fn overflowing_sub(self, rhs: Self) -> (Self, bool) { + // For unsigned numbers, the only difference is when we reach 0 - which is the same + // no matter the data size. 
In the case of overflow we do have the mask the result though + let (value, overflow) = self.value.overflowing_sub(rhs.value); + (Self { value: value & Self::MASK }, overflow) + } + + pub const fn overflowing_mul(self, rhs: Self) -> (Self, bool) { + let (wrapping_product, overflow) = if BITS << 1 <= (core::mem::size_of::<$type>() << 3) { + // We have half the bits (e.g. u4 * u4) of the base type, so we can't overflow the base type + // wrapping_mul likely provides the best performance on all cpus + self.value.overflowing_mul(rhs.value) + } else { + // We have more than half the bits (e.g. u6 * u6) + self.value.overflowing_mul(rhs.value) + }; + + let masked = wrapping_product & Self::MASK; + let overflow2 = masked != wrapping_product; + (Self { value: masked }, overflow || overflow2 ) + } + + pub const fn overflowing_div(self, rhs: Self) -> (Self, bool) { + let value = self.value.wrapping_div(rhs.value); + (Self { value }, false ) + } + + pub const fn overflowing_shl(self, rhs: u32) -> (Self, bool) { + if rhs >= (BITS as u32) { + (Self { value: self.value << (rhs % (BITS as u32)) }, true) + } else { + (Self { value: self.value << rhs }, false) + } + } + + pub const fn overflowing_shr(self, rhs: u32) -> (Self, bool) { + if rhs >= (BITS as u32) { + (Self { value: self.value >> (rhs % (BITS as u32)) }, true) + } else { + (Self { value: self.value >> rhs }, false) + } + } + + /// Reverses the order of bits in the integer. The least significant bit becomes the most significant bit, second least-significant bit becomes second most-significant bit, etc. + pub const fn reverse_bits(self) -> Self { + let shift_right = (core::mem::size_of::<$type>() << 3) - BITS; + Self { value: self.value.reverse_bits() >> shift_right } + } + + /// Returns the number of ones in the binary representation of self. + pub const fn count_ones(self) -> u32 { + // The upper bits are zero, so we can ignore them + self.value.count_ones() + } + + /// Returns the number of zeros in the binary representation of self. + pub const fn count_zeros(self) -> u32 { + // The upper bits are zero, so we can have to subtract them from the result + let filler_bits = ((core::mem::size_of::<$type>() << 3) - BITS) as u32; + self.value.count_zeros() - filler_bits + } + + /// Returns the number of leading ones in the binary representation of self. + pub const fn leading_ones(self) -> u32 { + let shift = ((core::mem::size_of::<$type>() << 3) - BITS) as u32; + (self.value << shift).leading_ones() + } + + /// Returns the number of leading zeros in the binary representation of self. + pub const fn leading_zeros(self) -> u32 { + let shift = ((core::mem::size_of::<$type>() << 3) - BITS) as u32; + (self.value << shift).leading_zeros() + } + + /// Returns the number of leading ones in the binary representation of self. + pub const fn trailing_ones(self) -> u32 { + self.value.trailing_ones() + } + + /// Returns the number of leading zeros in the binary representation of self. + pub const fn trailing_zeros(self) -> u32 { + self.value.trailing_zeros() + } + + /// Shifts the bits to the left by a specified amount, n, wrapping the truncated bits to the end of the resulting integer. + /// Please note this isn't the same operation as the << shifting operator! 
+ pub const fn rotate_left(self, n: u32) -> Self { + let b = BITS as u32; + let n = if n >= b { n % b } else { n }; + + let moved_bits = (self.value << n) & Self::MASK; + let truncated_bits = self.value >> (b - n); + Self { value: moved_bits | truncated_bits } + } + + /// Shifts the bits to the right by a specified amount, n, wrapping the truncated bits to the beginning of the resulting integer. + /// Please note this isn't the same operation as the >> shifting operator! + pub const fn rotate_right(self, n: u32) -> Self { + let b = BITS as u32; + let n = if n >= b { n % b } else { n }; + + let moved_bits = self.value >> n; + let truncated_bits = (self.value << (b - n)) & Self::MASK; + Self { value: moved_bits | truncated_bits } + } + } + )+ + }; +} + +uint_impl!(u8, u16, u32, u64, u128); + +// Arithmetic implementations +impl Add for UInt +where + Self: Number, + T: PartialEq + + Copy + + BitAnd + + Not + + Add + + Sub + + From, +{ + type Output = UInt; + + fn add(self, rhs: Self) -> Self::Output { + let sum = self.value + rhs.value; + #[cfg(debug_assertions)] + if (sum & !Self::MASK) != T::from(0) { + panic!("attempt to add with overflow"); + } + Self { + value: sum & Self::MASK, + } + } +} + +impl AddAssign for UInt +where + Self: Number, + T: PartialEq + + Eq + + Not + + Copy + + AddAssign + + BitAnd + + BitAndAssign + + From, +{ + fn add_assign(&mut self, rhs: Self) { + self.value += rhs.value; + #[cfg(debug_assertions)] + if (self.value & !Self::MASK) != T::from(0) { + panic!("attempt to add with overflow"); + } + self.value &= Self::MASK; + } +} + +impl Sub for UInt +where + Self: Number, + T: Copy + BitAnd + Sub, +{ + type Output = UInt; + + fn sub(self, rhs: Self) -> Self::Output { + // No need for extra overflow checking as the regular minus operator already handles it for us + Self { + value: (self.value - rhs.value) & Self::MASK, + } + } +} + +impl SubAssign for UInt +where + Self: Number, + T: Copy + SubAssign + BitAnd + BitAndAssign + Sub, +{ + fn sub_assign(&mut self, rhs: Self) { + // No need for extra overflow checking as the regular minus operator already handles it for us + self.value -= rhs.value; + self.value &= Self::MASK; + } +} + +impl Mul for UInt +where + Self: Number, + T: PartialEq + Copy + BitAnd + Not + Mul + From, +{ + type Output = UInt; + + fn mul(self, rhs: Self) -> Self::Output { + // In debug builds, this will perform two bounds checks: Initial multiplication, followed by + // our bounds check. As wrapping_mul isn't available as a trait bound (in regular Rust), this + // is unavoidable + let product = self.value * rhs.value; + #[cfg(debug_assertions)] + if (product & !Self::MASK) != T::from(0) { + panic!("attempt to multiply with overflow"); + } + Self { + value: product & Self::MASK, + } + } +} + +impl MulAssign for UInt +where + Self: Number, + T: PartialEq + + Eq + + Not + + Copy + + MulAssign + + BitAnd + + BitAndAssign + + From, +{ + fn mul_assign(&mut self, rhs: Self) { + self.value *= rhs.value; + #[cfg(debug_assertions)] + if (self.value & !Self::MASK) != T::from(0) { + panic!("attempt to multiply with overflow"); + } + self.value &= Self::MASK; + } +} + +impl Div for UInt +where + Self: Number, + T: PartialEq + Div, +{ + type Output = UInt; + + fn div(self, rhs: Self) -> Self::Output { + // Integer division can only make the value smaller. 
And as the result is same type as + // Self, there's no need to range-check or mask + Self { + value: self.value / rhs.value, + } + } +} + +impl DivAssign for UInt +where + Self: Number, + T: PartialEq + DivAssign, +{ + fn div_assign(&mut self, rhs: Self) { + self.value /= rhs.value; + } +} + +impl BitAnd for UInt +where + Self: Number, + T: Copy + + BitAnd + + Sub + + Shl + + Shr + + From, +{ + type Output = UInt; + + fn bitand(self, rhs: Self) -> Self::Output { + Self { + value: self.value & rhs.value, + } + } +} + +impl BitAndAssign for UInt +where + T: Copy + BitAndAssign + Sub + Shl + From, +{ + fn bitand_assign(&mut self, rhs: Self) { + self.value &= rhs.value; + } +} + +impl BitOr for UInt +where + T: Copy + BitOr + Sub + Shl + From, +{ + type Output = UInt; + + fn bitor(self, rhs: Self) -> Self::Output { + Self { + value: self.value | rhs.value, + } + } +} + +impl BitOrAssign for UInt +where + T: Copy + BitOrAssign + Sub + Shl + From, +{ + fn bitor_assign(&mut self, rhs: Self) { + self.value |= rhs.value; + } +} + +impl BitXor for UInt +where + T: Copy + BitXor + Sub + Shl + From, +{ + type Output = UInt; + + fn bitxor(self, rhs: Self) -> Self::Output { + Self { + value: self.value ^ rhs.value, + } + } +} + +impl BitXorAssign for UInt +where + T: Copy + BitXorAssign + Sub + Shl + From, +{ + fn bitxor_assign(&mut self, rhs: Self) { + self.value ^= rhs.value; + } +} + +impl Not for UInt +where + Self: Number, + T: Copy + + BitAnd + + BitXor + + Sub + + Shl + + Shr + + From, +{ + type Output = UInt; + + fn not(self) -> Self::Output { + Self { + value: self.value ^ Self::MASK, + } + } +} + +impl Shl for UInt +where + Self: Number, + T: Copy + + BitAnd + + Shl + + Sub + + Shl + + Shr + + From, + TSHIFTBITS: TryInto + Copy, +{ + type Output = UInt; + + fn shl(self, rhs: TSHIFTBITS) -> Self::Output { + // With debug assertions, the << and >> operators throw an exception if the shift amount + // is larger than the number of bits (in which case the result would always be 0) + #[cfg(debug_assertions)] + if rhs.try_into().unwrap_or(usize::MAX) >= BITS { + panic!("attempt to shift left with overflow") + } + + Self { + value: (self.value << rhs) & Self::MASK, + } + } +} + +impl ShlAssign for UInt +where + Self: Number, + T: Copy + + BitAnd + + BitAndAssign + + ShlAssign + + Sub + + Shr + + Shl + + From, + TSHIFTBITS: TryInto + Copy, +{ + fn shl_assign(&mut self, rhs: TSHIFTBITS) { + // With debug assertions, the << and >> operators throw an exception if the shift amount + // is larger than the number of bits (in which case the result would always be 0) + #[cfg(debug_assertions)] + if rhs.try_into().unwrap_or(usize::MAX) >= BITS { + panic!("attempt to shift left with overflow") + } + self.value <<= rhs; + self.value &= Self::MASK; + } +} + +impl Shr for UInt +where + T: Copy + Shr + Sub + Shl + From, + TSHIFTBITS: TryInto + Copy, +{ + type Output = UInt; + + fn shr(self, rhs: TSHIFTBITS) -> Self::Output { + // With debug assertions, the << and >> operators throw an exception if the shift amount + // is larger than the number of bits (in which case the result would always be 0) + #[cfg(debug_assertions)] + if rhs.try_into().unwrap_or(usize::MAX) >= BITS { + panic!("attempt to shift left with overflow") + } + Self { + value: self.value >> rhs, + } + } +} + +impl ShrAssign for UInt +where + T: Copy + ShrAssign + Sub + Shl + From, + TSHIFTBITS: TryInto + Copy, +{ + fn shr_assign(&mut self, rhs: TSHIFTBITS) { + // With debug assertions, the << and >> operators throw an exception if the shift amount + 
// is larger than the number of bits (in which case the result would always be 0) + #[cfg(debug_assertions)] + if rhs.try_into().unwrap_or(usize::MAX) >= BITS { + panic!("attempt to shift left with overflow") + } + self.value >>= rhs; + } +} + +impl Display for UInt +where + T: Display, +{ + #[inline] + fn fmt(&self, f: &mut Formatter<'_>) -> core::fmt::Result { + self.value.fmt(f) + } +} + +impl Debug for UInt +where + T: Debug, +{ + #[inline] + fn fmt(&self, f: &mut Formatter<'_>) -> core::fmt::Result { + self.value.fmt(f) + } +} + +impl LowerHex for UInt +where + T: LowerHex, +{ + #[inline] + fn fmt(&self, f: &mut Formatter<'_>) -> core::fmt::Result { + self.value.fmt(f) + } +} + +impl UpperHex for UInt +where + T: UpperHex, +{ + #[inline] + fn fmt(&self, f: &mut Formatter<'_>) -> core::fmt::Result { + self.value.fmt(f) + } +} + +impl Octal for UInt +where + T: Octal, +{ + #[inline] + fn fmt(&self, f: &mut Formatter<'_>) -> core::fmt::Result { + self.value.fmt(f) + } +} + +impl Binary for UInt +where + T: Binary, +{ + #[inline] + fn fmt(&self, f: &mut Formatter<'_>) -> core::fmt::Result { + self.value.fmt(f) + } +} + +#[cfg(feature = "defmt")] +impl defmt::Format for UInt +where + T: defmt::Format, +{ + #[inline] + fn format(&self, f: defmt::Formatter) { + self.value.format(f) + } +} + +#[cfg(feature = "serde")] +impl Serialize for UInt +where + T: Serialize, +{ + fn serialize(&self, serializer: S) -> Result { + self.value.serialize(serializer) + } +} + +// Serde's invalid_value error (https://rust-lang.github.io/hashbrown/serde/de/trait.Error.html#method.invalid_value) +// takes an Unexpected (https://rust-lang.github.io/hashbrown/serde/de/enum.Unexpected.html) which only accepts a 64 bit +// unsigned integer. This is a problem for us because we want to support 128 bit unsigned integers. To work around this +// we define our own error type using the UInt's underlying type which implements Display and then use +// serde::de::Error::custom to create an error with our custom type. 
+#[cfg(feature = "serde")] +struct InvalidUIntValueError { + value: T, + max: T, +} + +#[cfg(feature = "serde")] +impl Display for InvalidUIntValueError { + fn fmt(&self, f: &mut Formatter<'_>) -> core::fmt::Result { + write!( + f, + "invalid value: integer `{}`, expected a value between `0` and `{}`", + self.value, self.max + ) + } +} + +#[cfg(feature = "serde")] +impl<'de, T: Display, const BITS: usize> Deserialize<'de> for UInt +where + Self: Number, + T: Deserialize<'de> + PartialOrd, +{ + fn deserialize>(deserializer: D) -> Result { + let value = T::deserialize(deserializer)?; + + if value <= Self::MAX.value { + Ok(Self { value }) + } else { + Err(serde::de::Error::custom(InvalidUIntValueError { + value, + max: Self::MAX.value, + })) + } + } +} + +impl Hash for UInt +where + T: Hash, +{ + #[inline] + fn hash(&self, state: &mut H) { + self.value.hash(state) + } +} + +#[cfg(feature = "step_trait")] +impl Step for UInt +where + Self: Number, + T: Copy + Step, +{ + #[inline] + fn steps_between(start: &Self, end: &Self) -> Option { + Step::steps_between(&start.value(), &end.value()) + } + + #[inline] + fn forward_checked(start: Self, count: usize) -> Option { + if let Some(res) = Step::forward_checked(start.value(), count) { + Self::try_new(res).ok() + } else { + None + } + } + + #[inline] + fn backward_checked(start: Self, count: usize) -> Option { + if let Some(res) = Step::backward_checked(start.value(), count) { + Self::try_new(res).ok() + } else { + None + } + } +} + +#[cfg(feature = "num-traits")] +impl num_traits::WrappingAdd for UInt +where + Self: Number, + T: PartialEq + + Eq + + Copy + + Add + + Sub + + BitAnd + + Not + + Shr + + Shl + + From, + Wrapping: Add, Output = Wrapping>, +{ + #[inline] + fn wrapping_add(&self, rhs: &Self) -> Self { + let sum = (Wrapping(self.value) + Wrapping(rhs.value)).0; + Self { + value: sum & Self::MASK, + } + } +} + +#[cfg(feature = "num-traits")] +impl num_traits::WrappingSub for UInt +where + Self: Number, + T: PartialEq + + Eq + + Copy + + Add + + Sub + + BitAnd + + Not + + Shr + + Shl + + From, + Wrapping: Sub, Output = Wrapping>, +{ + #[inline] + fn wrapping_sub(&self, rhs: &Self) -> Self { + let sum = (Wrapping(self.value) - Wrapping(rhs.value)).0; + Self { + value: sum & Self::MASK, + } + } +} + +#[cfg(feature = "num-traits")] +impl num_traits::bounds::Bounded for UInt +where + Self: Number, +{ + fn min_value() -> Self { + Self::MIN + } + + fn max_value() -> Self { + Self::MAX + } +} + +macro_rules! bytes_operation_impl { + ($base_data_type:ty, $bits:expr, [$($indices:expr),+]) => { + impl UInt<$base_data_type, $bits> + { + /// Reverses the byte order of the integer. + #[inline] + pub const fn swap_bytes(&self) -> Self { + // swap_bytes() of the underlying type does most of the work. 
Then, we just need to shift + const SHIFT_RIGHT: usize = (core::mem::size_of::<$base_data_type>() << 3) - $bits; + Self { value: self.value.swap_bytes() >> SHIFT_RIGHT } + } + + pub const fn to_le_bytes(&self) -> [u8; $bits >> 3] { + let v = self.value(); + + [ $( (v >> ($indices << 3)) as u8, )+ ] + } + + pub const fn from_le_bytes(from: [u8; $bits >> 3]) -> Self { + let value = { 0 $( | (from[$indices] as $base_data_type) << ($indices << 3))+ }; + Self { value } + } + + pub const fn to_be_bytes(&self) -> [u8; $bits >> 3] { + let v = self.value(); + + [ $( (v >> ($bits - 8 - ($indices << 3))) as u8, )+ ] + } + + pub const fn from_be_bytes(from: [u8; $bits >> 3]) -> Self { + let value = { 0 $( | (from[$indices] as $base_data_type) << ($bits - 8 - ($indices << 3)))+ }; + Self { value } + } + + #[inline] + pub const fn to_ne_bytes(&self) -> [u8; $bits >> 3] { + #[cfg(target_endian = "little")] + { + self.to_le_bytes() + } + #[cfg(target_endian = "big")] + { + self.to_be_bytes() + } + } + + #[inline] + pub const fn from_ne_bytes(bytes: [u8; $bits >> 3]) -> Self { + #[cfg(target_endian = "little")] + { + Self::from_le_bytes(bytes) + } + #[cfg(target_endian = "big")] + { + Self::from_be_bytes(bytes) + } + } + + #[inline] + pub const fn to_le(self) -> Self { + #[cfg(target_endian = "little")] + { + self + } + #[cfg(target_endian = "big")] + { + self.swap_bytes() + } + } + + #[inline] + pub const fn to_be(self) -> Self { + #[cfg(target_endian = "little")] + { + self.swap_bytes() + } + #[cfg(target_endian = "big")] + { + self + } + } + + #[inline] + pub const fn from_le(value: Self) -> Self { + value.to_le() + } + + #[inline] + pub const fn from_be(value: Self) -> Self { + value.to_be() + } + } + }; +} + +bytes_operation_impl!(u32, 24, [0, 1, 2]); +bytes_operation_impl!(u64, 24, [0, 1, 2]); +bytes_operation_impl!(u128, 24, [0, 1, 2]); +bytes_operation_impl!(u64, 40, [0, 1, 2, 3, 4]); +bytes_operation_impl!(u128, 40, [0, 1, 2, 3, 4]); +bytes_operation_impl!(u64, 48, [0, 1, 2, 3, 4, 5]); +bytes_operation_impl!(u128, 48, [0, 1, 2, 3, 4, 5]); +bytes_operation_impl!(u64, 56, [0, 1, 2, 3, 4, 5, 6]); +bytes_operation_impl!(u128, 56, [0, 1, 2, 3, 4, 5, 6]); +bytes_operation_impl!(u128, 72, [0, 1, 2, 3, 4, 5, 6, 7, 8]); +bytes_operation_impl!(u128, 80, [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]); +bytes_operation_impl!(u128, 88, [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]); +bytes_operation_impl!(u128, 96, [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]); +bytes_operation_impl!(u128, 104, [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]); +bytes_operation_impl!(u128, 112, [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13]); +bytes_operation_impl!( + u128, + 120, + [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14] +); + +// Conversions + +#[cfg(feature = "const_convert_and_const_trait_impl")] +macro_rules! from_arbitrary_int_impl { + ($from:ty, [$($into:ty),+]) => { + $( + impl const From> + for UInt<$into, BITS> + { + #[inline] + fn from(item: UInt<$from, BITS_FROM>) -> Self { + let _ = CompileTimeAssert::::SMALLER_OR_EQUAL; + Self { value: item.value as $into } + } + } + )+ + }; +} + +#[cfg(not(feature = "const_convert_and_const_trait_impl"))] +macro_rules! from_arbitrary_int_impl { + ($from:ty, [$($into:ty),+]) => { + $( + impl From> + for UInt<$into, BITS> + { + #[inline] + fn from(item: UInt<$from, BITS_FROM>) -> Self { + let _ = CompileTimeAssert::::SMALLER_OR_EQUAL; + Self { value: item.value as $into } + } + } + )+ + }; +} + +#[cfg(feature = "const_convert_and_const_trait_impl")] +macro_rules! 
from_native_impl { + ($from:ty, [$($into:ty),+]) => { + $( + impl const From<$from> for UInt<$into, BITS> { + #[inline] + fn from(from: $from) -> Self { + let _ = CompileTimeAssert::<{ <$from>::BITS as usize }, BITS>::SMALLER_OR_EQUAL; + Self { value: from as $into } + } + } + + impl const From> for $into { + #[inline] + fn from(from: UInt<$from, BITS>) -> Self { + let _ = CompileTimeAssert::::BITS as usize }>::SMALLER_OR_EQUAL; + from.value as $into + } + } + )+ + }; +} + +#[cfg(not(feature = "const_convert_and_const_trait_impl"))] +macro_rules! from_native_impl { + ($from:ty, [$($into:ty),+]) => { + $( + impl From<$from> for UInt<$into, BITS> { + #[inline] + fn from(from: $from) -> Self { + let _ = CompileTimeAssert::<{ <$from>::BITS as usize }, BITS>::SMALLER_OR_EQUAL; + Self { value: from as $into } + } + } + + impl From> for $into { + #[inline] + fn from(from: UInt<$from, BITS>) -> Self { + let _ = CompileTimeAssert::::BITS as usize }>::SMALLER_OR_EQUAL; + from.value as $into + } + } + )+ + }; +} + +from_arbitrary_int_impl!(u8, [u16, u32, u64, u128]); +from_arbitrary_int_impl!(u16, [u8, u32, u64, u128]); +from_arbitrary_int_impl!(u32, [u8, u16, u64, u128]); +from_arbitrary_int_impl!(u64, [u8, u16, u32, u128]); +from_arbitrary_int_impl!(u128, [u8, u32, u64, u16]); + +from_native_impl!(u8, [u8, u16, u32, u64, u128]); +from_native_impl!(u16, [u8, u16, u32, u64, u128]); +from_native_impl!(u32, [u8, u16, u32, u64, u128]); +from_native_impl!(u64, [u8, u16, u32, u64, u128]); +from_native_impl!(u128, [u8, u16, u32, u64, u128]); + +// Define type aliases like u1, u63 and u80 using the smallest possible underlying data type. +// These are for convenience only - UInt is still legal +macro_rules! type_alias { + ($storage:ty, $(($name:ident, $bits:expr)),+) => { + $( pub type $name = crate::UInt<$storage, $bits>; )+ + } +} + +pub use aliases::*; + +#[allow(non_camel_case_types)] +#[rustfmt::skip] +mod aliases { + type_alias!(u8, (u1, 1), (u2, 2), (u3, 3), (u4, 4), (u5, 5), (u6, 6), (u7, 7)); + type_alias!(u16, (u9, 9), (u10, 10), (u11, 11), (u12, 12), (u13, 13), (u14, 14), (u15, 15)); + type_alias!(u32, (u17, 17), (u18, 18), (u19, 19), (u20, 20), (u21, 21), (u22, 22), (u23, 23), (u24, 24), (u25, 25), (u26, 26), (u27, 27), (u28, 28), (u29, 29), (u30, 30), (u31, 31)); + type_alias!(u64, (u33, 33), (u34, 34), (u35, 35), (u36, 36), (u37, 37), (u38, 38), (u39, 39), (u40, 40), (u41, 41), (u42, 42), (u43, 43), (u44, 44), (u45, 45), (u46, 46), (u47, 47), (u48, 48), (u49, 49), (u50, 50), (u51, 51), (u52, 52), (u53, 53), (u54, 54), (u55, 55), (u56, 56), (u57, 57), (u58, 58), (u59, 59), (u60, 60), (u61, 61), (u62, 62), (u63, 63)); + type_alias!(u128, (u65, 65), (u66, 66), (u67, 67), (u68, 68), (u69, 69), (u70, 70), (u71, 71), (u72, 72), (u73, 73), (u74, 74), (u75, 75), (u76, 76), (u77, 77), (u78, 78), (u79, 79), (u80, 80), (u81, 81), (u82, 82), (u83, 83), (u84, 84), (u85, 85), (u86, 86), (u87, 87), (u88, 88), (u89, 89), (u90, 90), (u91, 91), (u92, 92), (u93, 93), (u94, 94), (u95, 95), (u96, 96), (u97, 97), (u98, 98), (u99, 99), (u100, 100), (u101, 101), (u102, 102), (u103, 103), (u104, 104), (u105, 105), (u106, 106), (u107, 107), (u108, 108), (u109, 109), (u110, 110), (u111, 111), (u112, 112), (u113, 113), (u114, 114), (u115, 115), (u116, 116), (u117, 117), (u118, 118), (u119, 119), (u120, 120), (u121, 121), (u122, 122), (u123, 123), (u124, 124), (u125, 125), (u126, 126), (u127, 127)); +} + +// We need to wrap this in a macro, currently: https://github.com/rust-lang/rust/issues/67792#issuecomment-1130369066 
+ +#[cfg(feature = "const_convert_and_const_trait_impl")] +macro_rules! boolu1 { + () => { + impl const From for u1 { + #[inline] + fn from(value: bool) -> Self { + u1::new(value as u8) + } + } + impl const From for bool { + #[inline] + fn from(value: u1) -> Self { + match value.value() { + 0 => false, + 1 => true, + _ => panic!("arbitrary_int_type already validates that this is unreachable"), //TODO: unreachable!() is not const yet + } + } + } + }; +} + +#[cfg(not(feature = "const_convert_and_const_trait_impl"))] +macro_rules! boolu1 { + () => { + impl From for u1 { + #[inline] + fn from(value: bool) -> Self { + u1::new(value as u8) + } + } + impl From for bool { + #[inline] + fn from(value: u1) -> Self { + match value.value() { + 0 => false, + 1 => true, + _ => panic!("arbitrary_int_type already validates that this is unreachable"), //TODO: unreachable!() is not const yet + } + } + } + }; +} + +boolu1!(); diff --git a/rust/hw/char/pl011/vendor/arbitrary-int/tests/tests.rs b/rust/hw/char/pl011/vendor/arbitrary-int/tests/tests.rs new file mode 100644 index 0000000000..e050f00c99 --- /dev/null +++ b/rust/hw/char/pl011/vendor/arbitrary-int/tests/tests.rs @@ -0,0 +1,1913 @@ +#![cfg_attr(feature = "step_trait", feature(step_trait))] + +extern crate core; + +use arbitrary_int::*; +use std::collections::HashMap; +#[cfg(feature = "step_trait")] +use std::iter::Step; + +#[test] +fn constants() { + // Make a constant to ensure new().value() works in a const-context + const TEST_CONSTANT: u8 = u7::new(127).value(); + assert_eq!(TEST_CONSTANT, 127u8); + + // Same with widen() + const TEST_CONSTANT2: u7 = u6::new(63).widen(); + assert_eq!(TEST_CONSTANT2, u7::new(63)); + + // Same with widen() + const TEST_CONSTANT3A: Result = u6::try_new(62); + assert_eq!(TEST_CONSTANT3A, Ok(u6::new(62))); + const TEST_CONSTANT3B: Result = u6::try_new(64); + assert!(TEST_CONSTANT3B.is_err()); +} + +#[test] +fn create_simple() { + let value7 = u7::new(123); + let value8 = UInt::::new(189); + + let value13 = u13::new(123); + let value16 = UInt::::new(60000); + + let value23 = u23::new(123); + let value67 = u67::new(123); + + assert_eq!(value7.value(), 123); + assert_eq!(value8.value(), 189); + + assert_eq!(value13.value(), 123); + assert_eq!(value16.value(), 60000); + + assert_eq!(value23.value(), 123); + assert_eq!(value67.value(), 123); +} + +#[test] +fn create_try_new() { + assert_eq!(u7::new(123).value(), 123); + assert_eq!(u7::try_new(190).expect_err("No error seen"), TryNewError {}); +} + +#[test] +#[should_panic] +fn create_panic_u7() { + u7::new(128); +} + +#[test] +#[should_panic] +fn create_panic_u15() { + u15::new(32768); +} + +#[test] +#[should_panic] +fn create_panic_u31() { + u31::new(2147483648); +} + +#[test] +#[should_panic] +fn create_panic_u63() { + u63::new(0x8000_0000_0000_0000); +} + +#[test] +#[should_panic] +fn create_panic_u127() { + u127::new(0x8000_0000_0000_0000_0000_0000_0000_0000); +} + +#[test] +fn add() { + assert_eq!(u7::new(10) + u7::new(20), u7::new(30)); + assert_eq!(u7::new(100) + u7::new(27), u7::new(127)); +} + +#[cfg(debug_assertions)] +#[test] +#[should_panic] +fn add_overflow() { + let _ = u7::new(127) + u7::new(3); +} + +#[cfg(not(debug_assertions))] +#[test] +fn add_no_overflow() { + let _ = u7::new(127) + u7::new(3); +} + +#[cfg(feature = "num-traits")] +#[test] +fn num_traits_add_wrapping() { + let v1 = u7::new(120); + let v2 = u7::new(10); + let v3 = num_traits::WrappingAdd::wrapping_add(&v1, &v2); + assert_eq!(v3, u7::new(2)); +} + +#[cfg(feature = "num-traits")] +#[test] 
+fn num_traits_sub_wrapping() { + let v1 = u7::new(15); + let v2 = u7::new(20); + let v3 = num_traits::WrappingSub::wrapping_sub(&v1, &v2); + assert_eq!(v3, u7::new(123)); +} + +#[cfg(feature = "num-traits")] +#[test] +fn num_traits_bounded() { + use num_traits::bounds::Bounded; + assert_eq!(u7::MAX, u7::max_value()); + assert_eq!(u119::MAX, u119::max_value()); + assert_eq!(u7::new(0), u7::min_value()); + assert_eq!(u119::new(0), u119::min_value()); +} + +#[test] +fn addassign() { + let mut value = u9::new(500); + value += u9::new(11); + assert_eq!(value, u9::new(511)); +} + +#[cfg(debug_assertions)] +#[test] +#[should_panic] +fn addassign_overflow() { + let mut value = u9::new(500); + value += u9::new(40); +} + +#[cfg(not(debug_assertions))] +#[test] +fn addassign_no_overflow() { + let mut value = u9::new(500); + value += u9::new(28); + assert_eq!(value, u9::new(16)); +} + +#[test] +fn sub() { + assert_eq!(u7::new(22) - u7::new(10), u7::new(12)); + assert_eq!(u7::new(127) - u7::new(127), u7::new(0)); +} + +#[cfg(debug_assertions)] +#[test] +#[should_panic] +fn sub_overflow() { + let _ = u7::new(100) - u7::new(127); +} + +#[cfg(not(debug_assertions))] +#[test] +fn sub_no_overflow() { + let value = u7::new(100) - u7::new(127); + assert_eq!(value, u7::new(101)); +} + +#[test] +fn subassign() { + let mut value = u9::new(500); + value -= u9::new(11); + assert_eq!(value, u9::new(489)); +} + +#[cfg(debug_assertions)] +#[test] +#[should_panic] +fn subassign_overflow() { + let mut value = u9::new(30); + value -= u9::new(40); +} + +#[cfg(not(debug_assertions))] +#[test] +fn subassign_no_overflow() { + let mut value = u9::new(30); + value -= u9::new(40); + assert_eq!(value, u9::new(502)); +} + +#[test] +fn mul() { + assert_eq!(u7::new(22) * u7::new(4), u7::new(88)); + assert_eq!(u7::new(127) * u7::new(0), u7::new(0)); +} + +#[cfg(debug_assertions)] +#[test] +#[should_panic] +fn mul_overflow() { + let _ = u7::new(100) * u7::new(2); +} + +#[cfg(not(debug_assertions))] +#[test] +fn mul_no_overflow() { + let result = u7::new(100) * u7::new(2); + assert_eq!(result, u7::new(72)); +} + +#[test] +fn mulassign() { + let mut value = u9::new(240); + value *= u9::new(2); + assert_eq!(value, u9::new(480)); +} + +#[cfg(debug_assertions)] +#[test] +#[should_panic] +fn mulassign_overflow() { + let mut value = u9::new(500); + value *= u9::new(2); +} + +#[cfg(not(debug_assertions))] +#[test] +fn mulassign_no_overflow() { + let mut value = u9::new(500); + value *= u9::new(40); + assert_eq!(value, u9::new(32)); +} + +#[test] +fn div() { + // div just forwards to the underlying type, so there isn't much to do + assert_eq!(u7::new(22) / u7::new(4), u7::new(5)); + assert_eq!(u7::new(127) / u7::new(1), u7::new(127)); + assert_eq!(u7::new(127) / u7::new(127), u7::new(1)); +} + +#[should_panic] +#[test] +fn div_by_zero() { + let _ = u7::new(22) / u7::new(0); +} + +#[test] +fn divassign() { + let mut value = u9::new(240); + value /= u9::new(2); + assert_eq!(value, u9::new(120)); +} + +#[should_panic] +#[test] +fn divassign_by_zero() { + let mut value = u9::new(240); + value /= u9::new(0); +} + +#[test] +fn bitand() { + assert_eq!( + u17::new(0b11001100) & u17::new(0b01101001), + u17::new(0b01001000) + ); + assert_eq!(u17::new(0b11001100) & u17::new(0), u17::new(0)); + assert_eq!( + u17::new(0b11001100) & u17::new(0x1_FFFF), + u17::new(0b11001100) + ); +} + +#[test] +fn bitandassign() { + let mut value = u4::new(0b0101); + value &= u4::new(0b0110); + assert_eq!(value, u4::new(0b0100)); +} + +#[test] +fn bitor() { + assert_eq!( 
+ u17::new(0b11001100) | u17::new(0b01101001), + u17::new(0b11101101) + ); + assert_eq!(u17::new(0b11001100) | u17::new(0), u17::new(0b11001100)); + assert_eq!( + u17::new(0b11001100) | u17::new(0x1_FFFF), + u17::new(0x1_FFFF) + ); +} + +#[test] +fn bitorassign() { + let mut value = u4::new(0b0101); + value |= u4::new(0b0110); + assert_eq!(value, u4::new(0b0111)); +} + +#[test] +fn bitxor() { + assert_eq!( + u17::new(0b11001100) ^ u17::new(0b01101001), + u17::new(0b10100101) + ); + assert_eq!(u17::new(0b11001100) ^ u17::new(0), u17::new(0b11001100)); + assert_eq!( + u17::new(0b11001100) ^ u17::new(0x1_FFFF), + u17::new(0b1_11111111_00110011) + ); +} + +#[test] +fn bitxorassign() { + let mut value = u4::new(0b0101); + value ^= u4::new(0b0110); + assert_eq!(value, u4::new(0b0011)); +} + +#[test] +fn not() { + assert_eq!(!u17::new(0), u17::new(0b1_11111111_11111111)); + assert_eq!(!u5::new(0b10101), u5::new(0b01010)); +} + +#[test] +fn shl() { + assert_eq!(u17::new(0b1) << 5u8, u17::new(0b100000)); + // Ensure bits on the left are shifted out + assert_eq!(u9::new(0b11110000) << 3u64, u9::new(0b1_10000000)); +} + +#[cfg(debug_assertions)] +#[test] +#[should_panic] +fn shl_too_much8() { + let _ = u53::new(123) << 53u8; +} + +#[cfg(debug_assertions)] +#[test] +#[should_panic] +fn shl_too_much16() { + let _ = u53::new(123) << 53u16; +} + +#[cfg(debug_assertions)] +#[test] +#[should_panic] +fn shl_too_much32() { + let _ = u53::new(123) << 53u32; +} + +#[cfg(debug_assertions)] +#[test] +#[should_panic] +fn shl_too_much64() { + let _ = u53::new(123) << 53u64; +} + +#[cfg(debug_assertions)] +#[test] +#[should_panic] +fn shl_too_much128() { + let _ = u53::new(123) << 53u128; +} + +#[cfg(debug_assertions)] +#[test] +#[should_panic] +fn shl_too_much_usize() { + let _ = u53::new(123) << 53usize; +} + +#[test] +fn shlassign() { + let mut value = u9::new(0b11110000); + value <<= 3; + assert_eq!(value, u9::new(0b1_10000000)); +} + +#[cfg(debug_assertions)] +#[test] +#[should_panic] +fn shlassign_too_much() { + let mut value = u9::new(0b11110000); + value <<= 9; +} + +#[cfg(debug_assertions)] +#[test] +#[should_panic] +fn shlassign_too_much2() { + let mut value = u9::new(0b11110000); + value <<= 10; +} + +#[test] +fn shr() { + assert_eq!(u17::new(0b100110) >> 5usize, u17::new(1)); + + // Ensure there's no sign extension + assert_eq!(u17::new(0b1_11111111_11111111) >> 8, u17::new(0b1_11111111)); +} + +#[test] +fn shrassign() { + let mut value = u9::new(0b1_11110000); + value >>= 6; + assert_eq!(value, u9::new(0b0_00000111)); +} + +#[test] +fn compare() { + assert_eq!(true, u4::new(0b1100) > u4::new(0b0011)); + assert_eq!(true, u4::new(0b1100) >= u4::new(0b0011)); + assert_eq!(false, u4::new(0b1100) < u4::new(0b0011)); + assert_eq!(false, u4::new(0b1100) <= u4::new(0b0011)); + assert_eq!(true, u4::new(0b1100) != u4::new(0b0011)); + assert_eq!(false, u4::new(0b1100) == u4::new(0b0011)); + + assert_eq!(false, u4::new(0b1100) > u4::new(0b1100)); + assert_eq!(true, u4::new(0b1100) >= u4::new(0b1100)); + assert_eq!(false, u4::new(0b1100) < u4::new(0b1100)); + assert_eq!(true, u4::new(0b1100) <= u4::new(0b1100)); + assert_eq!(false, u4::new(0b1100) != u4::new(0b1100)); + assert_eq!(true, u4::new(0b1100) == u4::new(0b1100)); + + assert_eq!(false, u4::new(0b0011) > u4::new(0b1100)); + assert_eq!(false, u4::new(0b0011) >= u4::new(0b1100)); + assert_eq!(true, u4::new(0b0011) < u4::new(0b1100)); + assert_eq!(true, u4::new(0b0011) <= u4::new(0b1100)); + assert_eq!(true, u4::new(0b0011) != u4::new(0b1100)); + 
assert_eq!(false, u4::new(0b0011) == u4::new(0b1100)); +} + +#[test] +fn min_max() { + assert_eq!(0, u4::MIN.value()); + assert_eq!(0b1111, u4::MAX.value()); + assert_eq!(u4::new(0b1111), u4::MAX); + + assert_eq!(0, u15::MIN.value()); + assert_eq!(32767, u15::MAX.value()); + assert_eq!(u15::new(32767), u15::MAX); + + assert_eq!(0, u31::MIN.value()); + assert_eq!(2147483647, u31::MAX.value()); + + assert_eq!(0, u63::MIN.value()); + assert_eq!(0x7FFF_FFFF_FFFF_FFFF, u63::MAX.value()); + + assert_eq!(0, u127::MIN.value()); + assert_eq!(0x7FFF_FFFF_FFFF_FFFF_FFFF_FFFF_FFFF_FFFF, u127::MAX.value()); +} + +#[test] +fn bits() { + assert_eq!(4, u4::BITS); + assert_eq!(12, u12::BITS); + assert_eq!(120, u120::BITS); + assert_eq!(13, UInt::::BITS); + + assert_eq!(8, u8::BITS); + assert_eq!(16, u16::BITS); +} + +#[test] +fn mask() { + assert_eq!(0x1u8, u1::MASK); + assert_eq!(0xFu8, u4::MASK); + assert_eq!(0x3FFFFu32, u18::MASK); + assert_eq!(0x7FFFFFFF_FFFFFFFF_FFFFFFFF_FFFFFFFFu128, u127::MASK); + assert_eq!(0x7FFFFFFF_FFFFFFFF_FFFFFFFF_FFFFFFFFu128, u127::MASK); + assert_eq!(0xFFFFFFFF_FFFFFFFF_FFFFFFFF_FFFFFFFFu128, u128::MAX); +} + +#[test] +fn min_max_fullwidth() { + assert_eq!(u8::MIN, UInt::::MIN.value()); + assert_eq!(u8::MAX, UInt::::MAX.value()); + + assert_eq!(u16::MIN, UInt::::MIN.value()); + assert_eq!(u16::MAX, UInt::::MAX.value()); + + assert_eq!(u32::MIN, UInt::::MIN.value()); + assert_eq!(u32::MAX, UInt::::MAX.value()); + + assert_eq!(u64::MIN, UInt::::MIN.value()); + assert_eq!(u64::MAX, UInt::::MAX.value()); + + assert_eq!(u128::MIN, UInt::::MIN.value()); + assert_eq!(u128::MAX, UInt::::MAX.value()); +} + +#[allow(deprecated)] +#[test] +fn extract() { + assert_eq!(u5::new(0b10000), u5::extract(0b11110000, 0)); + assert_eq!(u5::new(0b11100), u5::extract(0b11110000, 2)); + assert_eq!(u5::new(0b11110), u5::extract(0b11110000, 3)); + + // Use extract with a custom type (5 bits of u32) + assert_eq!( + UInt::::new(0b11110), + UInt::::extract(0b11110000, 3) + ); + assert_eq!( + u5::new(0b11110), + UInt::::extract(0b11110000, 3).into() + ); +} + +#[test] +fn extract_typed() { + assert_eq!(u5::new(0b10000), u5::extract_u8(0b11110000, 0)); + assert_eq!(u5::new(0b00011), u5::extract_u16(0b11110000_11110110, 6)); + assert_eq!( + u5::new(0b01011), + u5::extract_u32(0b11110010_11110110_00000000_00000000, 22) + ); + assert_eq!( + u5::new(0b01011), + u5::extract_u64( + 0b11110010_11110110_00000000_00000000_00000000_00000000_00000000_00000000, + 54 + ) + ); + assert_eq!(u5::new(0b01011), u5::extract_u128(0b11110010_11110110_00000000_00000000_00000000_00000000_00000000_00000000_00000000_00000000_00000000_00000000_00000000_00000000_00000000_00000000, 118)); +} + +#[test] +fn extract_full_width_typed() { + assert_eq!( + 0b1010_0011, + UInt::::extract_u8(0b1010_0011, 0).value() + ); + assert_eq!( + 0b1010_0011, + UInt::::extract_u16(0b1111_1111_1010_0011, 0).value() + ); +} + +#[test] +#[should_panic] +fn extract_not_enough_bits_8() { + let _ = u5::extract_u8(0b11110000, 4); +} + +#[test] +#[should_panic] +fn extract_not_enough_bits_8_full_width() { + let _ = UInt::::extract_u8(0b11110000, 1); +} + +#[test] +#[should_panic] +fn extract_not_enough_bits_16() { + let _ = u5::extract_u16(0b11110000, 12); +} + +#[test] +#[should_panic] +fn extract_not_enough_bits_32() { + let _ = u5::extract_u32(0b11110000, 28); +} + +#[test] +#[should_panic] +fn extract_not_enough_bits_64() { + let _ = u5::extract_u64(0b11110000, 60); +} + +#[test] +#[should_panic] +fn extract_not_enough_bits_128() { + let _ = 
u5::extract_u128(0b11110000, 124); +} + +#[test] +fn from_same_bit_widths() { + assert_eq!(u5::from(UInt::::new(0b10101)), u5::new(0b10101)); + assert_eq!(u5::from(UInt::::new(0b10101)), u5::new(0b10101)); + assert_eq!(u5::from(UInt::::new(0b10101)), u5::new(0b10101)); + assert_eq!(u5::from(UInt::::new(0b10101)), u5::new(0b10101)); + assert_eq!(u5::from(UInt::::new(0b10101)), u5::new(0b10101)); + + assert_eq!( + UInt::::from(UInt::::new(0b1110_0101)), + UInt::::new(0b1110_0101) + ); + + assert_eq!( + UInt::::from(UInt::::new(0b10101)), + UInt::::new(0b10101) + ); + assert_eq!(u15::from(UInt::::new(0b10101)), u15::new(0b10101)); + assert_eq!(u15::from(UInt::::new(0b10101)), u15::new(0b10101)); + assert_eq!(u15::from(UInt::::new(0b10101)), u15::new(0b10101)); + assert_eq!(u15::from(UInt::::new(0b10101)), u15::new(0b10101)); + + assert_eq!( + UInt::::from(u6::new(0b10101)), + UInt::::new(0b10101) + ); + assert_eq!( + UInt::::from(u14::new(0b10101)), + UInt::::new(0b10101) + ); + assert_eq!(u30::from(UInt::::new(0b10101)), u30::new(0b10101)); + assert_eq!(u30::from(UInt::::new(0b10101)), u30::new(0b10101)); + assert_eq!(u30::from(UInt::::new(0b10101)), u30::new(0b10101)); + + assert_eq!( + UInt::::from(UInt::::new(0b10101)), + UInt::::new(0b10101) + ); + assert_eq!( + UInt::::from(UInt::::new(0b10101)), + UInt::::new(0b10101) + ); + assert_eq!( + UInt::::from(UInt::::new(0b10101)), + UInt::::new(0b10101) + ); + assert_eq!(u60::from(u60::new(0b10101)), u60::new(0b10101)); + assert_eq!(u60::from(UInt::::new(0b10101)), u60::new(0b10101)); + + assert_eq!( + UInt::::from(UInt::::new(0b10101)), + UInt::::new(0b10101) + ); + assert_eq!( + UInt::::from(UInt::::new(0b10101)), + UInt::::new(0b10101) + ); + assert_eq!( + UInt::::from(UInt::::new(0b10101)), + UInt::::new(0b10101) + ); + assert_eq!( + UInt::::from(UInt::::new(0b10101)), + UInt::::new(0b10101) + ); + assert_eq!( + u120::from(UInt::::new(0b10101)), + u120::new(0b10101) + ); +} + +#[cfg(feature = "num-traits")] +#[test] +fn calculation_with_number_trait() { + fn increment_by_1(foo: T) -> T { + foo.wrapping_add(&T::new(1.into())) + } + + fn increment_by_512( + foo: T, + ) -> Result::UnderlyingType as TryFrom>::Error> + where + <::UnderlyingType as TryFrom>::Error: core::fmt::Debug, + { + Ok(foo.wrapping_add(&T::new(512u32.try_into()?))) + } + + assert_eq!(increment_by_1(0u16), 1u16); + assert_eq!(increment_by_1(u7::new(3)), u7::new(4)); + assert_eq!(increment_by_1(u15::new(3)), u15::new(4)); + + assert_eq!(increment_by_512(0u16), Ok(512u16)); + assert!(increment_by_512(u7::new(3)).is_err()); + assert_eq!(increment_by_512(u15::new(3)), Ok(u15::new(515))); +} + +#[test] +fn from_smaller_bit_widths() { + // The code to get more bits from fewer bits (through From) is the same as the code above + // for identical bitwidths. Therefore just do a few point checks to ensure things compile + + // There are compile-breakers for the opposite direction (e.g. 
tryint to do u5 = From(u17), + // but we can't test compile failures here + + // from is not yet supported if the bitcounts are different but the base data types are the same (need + // fancier Rust features to support that) + assert_eq!(u6::from(UInt::::new(0b10101)), u6::new(0b10101)); + assert_eq!(u6::from(UInt::::new(0b10101)), u6::new(0b10101)); + assert_eq!(u6::from(UInt::::new(0b10101)), u6::new(0b10101)); + assert_eq!(u6::from(UInt::::new(0b10101)), u6::new(0b10101)); + + assert_eq!(u15::from(UInt::::new(0b10101)), u15::new(0b10101)); + //assert_eq!(u15::from(UInt::::new(0b10101)), u15::new(0b10101)); + assert_eq!(u15::from(UInt::::new(0b10101)), u15::new(0b10101)); + assert_eq!(u15::from(UInt::::new(0b10101)), u15::new(0b10101)); + assert_eq!(u15::from(UInt::::new(0b10101)), u15::new(0b10101)); +} + +#[allow(non_camel_case_types)] +#[test] +fn from_native_ints_same_bits() { + use std::primitive; + + type u8 = UInt; + type u16 = UInt; + type u32 = UInt; + type u64 = UInt; + type u128 = UInt; + + assert_eq!(u8::from(0x80_u8), u8::new(0x80)); + assert_eq!(u16::from(0x8000_u16), u16::new(0x8000)); + assert_eq!(u32::from(0x8000_0000_u32), u32::new(0x8000_0000)); + assert_eq!( + u64::from(0x8000_0000_0000_0000_u64), + u64::new(0x8000_0000_0000_0000) + ); + assert_eq!( + u128::from(0x8000_0000_0000_0000_0000_0000_0000_0000_u128), + u128::new(0x8000_0000_0000_0000_0000_0000_0000_0000) + ); +} + +#[test] +fn from_native_ints_fewer_bits() { + assert_eq!(u9::from(0x80_u8), u9::new(0x80)); + + assert_eq!(u17::from(0x80_u8), u17::new(0x80)); + assert_eq!(u17::from(0x8000_u16), u17::new(0x8000)); + + assert_eq!(u33::from(0x80_u8), u33::new(0x80)); + assert_eq!(u33::from(0x8000_u16), u33::new(0x8000)); + assert_eq!(u33::from(0x8000_0000_u32), u33::new(0x8000_0000)); + + assert_eq!(u65::from(0x80_u8), u65::new(0x80)); + assert_eq!(u65::from(0x8000_u16), u65::new(0x8000)); + assert_eq!(u65::from(0x8000_0000_u32), u65::new(0x8000_0000)); + assert_eq!( + u65::from(0x8000_0000_0000_0000_u64), + u65::new(0x8000_0000_0000_0000) + ); +} + +#[allow(non_camel_case_types)] +#[test] +fn into_native_ints_same_bits() { + assert_eq!(u8::from(UInt::::new(0x80)), 0x80); + assert_eq!(u16::from(UInt::::new(0x8000)), 0x8000); + assert_eq!(u32::from(UInt::::new(0x8000_0000)), 0x8000_0000); + assert_eq!( + u64::from(UInt::::new(0x8000_0000_0000_0000)), + 0x8000_0000_0000_0000 + ); + assert_eq!( + u128::from(UInt::::new( + 0x8000_0000_0000_0000_0000_0000_0000_0000 + )), + 0x8000_0000_0000_0000_0000_0000_0000_0000 + ); +} + +#[test] +fn into_native_ints_fewer_bits() { + assert_eq!(u8::from(u7::new(0x40)), 0x40); + assert_eq!(u16::from(u15::new(0x4000)), 0x4000); + assert_eq!(u32::from(u31::new(0x4000_0000)), 0x4000_0000); + assert_eq!( + u64::from(u63::new(0x4000_0000_0000_0000)), + 0x4000_0000_0000_0000 + ); + assert_eq!( + u128::from(u127::new(0x4000_0000_0000_0000_0000_0000_0000_0000)), + 0x4000_0000_0000_0000_0000_0000_0000_0000 + ); +} + +#[test] +fn from_into_bool() { + assert_eq!(u1::from(true), u1::new(1)); + assert_eq!(u1::from(false), u1::new(0)); + assert_eq!(bool::from(u1::new(1)), true); + assert_eq!(bool::from(u1::new(0)), false); +} + +#[test] +fn widen() { + // As From() can't be used while keeping the base-data-type, there's widen + + assert_eq!(u5::new(0b11011).widen::<6>(), u6::new(0b11011)); + assert_eq!(u5::new(0b11011).widen::<8>(), UInt::::new(0b11011)); + assert_eq!(u10::new(0b11011).widen::<11>(), u11::new(0b11011)); + assert_eq!(u20::new(0b11011).widen::<24>(), u24::new(0b11011)); + 
assert_eq!(u60::new(0b11011).widen::<61>(), u61::new(0b11011)); + assert_eq!(u80::new(0b11011).widen::<127>().value(), 0b11011); +} + +#[test] +fn to_string() { + assert_eq!("Value: 5", format!("Value: {}", 5u32.to_string())); + assert_eq!("Value: 5", format!("Value: {}", u5::new(5).to_string())); + assert_eq!("Value: 5", format!("Value: {}", u11::new(5).to_string())); + assert_eq!("Value: 5", format!("Value: {}", u17::new(5).to_string())); + assert_eq!("Value: 5", format!("Value: {}", u38::new(5).to_string())); + assert_eq!("Value: 60", format!("Value: {}", u65::new(60).to_string())); +} + +#[test] +fn display() { + assert_eq!("Value: 5", format!("Value: {}", 5u32)); + assert_eq!("Value: 5", format!("Value: {}", u5::new(5))); + assert_eq!("Value: 5", format!("Value: {}", u11::new(5))); + assert_eq!("Value: 5", format!("Value: {}", u17::new(5))); + assert_eq!("Value: 5", format!("Value: {}", u38::new(5))); + assert_eq!("Value: 60", format!("Value: {}", u65::new(60))); +} + +#[test] +fn debug() { + assert_eq!("Value: 5", format!("Value: {:?}", 5u32)); + assert_eq!("Value: 5", format!("Value: {:?}", u5::new(5))); + assert_eq!("Value: 5", format!("Value: {:?}", u11::new(5))); + assert_eq!("Value: 5", format!("Value: {:?}", u17::new(5))); + assert_eq!("Value: 5", format!("Value: {:?}", u38::new(5))); + assert_eq!("Value: 60", format!("Value: {:?}", u65::new(60))); +} + +#[test] +fn lower_hex() { + assert_eq!("Value: a", format!("Value: {:x}", 10u32)); + assert_eq!("Value: a", format!("Value: {:x}", u5::new(10))); + assert_eq!("Value: a", format!("Value: {:x}", u11::new(10))); + assert_eq!("Value: a", format!("Value: {:x}", u17::new(10))); + assert_eq!("Value: a", format!("Value: {:x}", u38::new(10))); + assert_eq!("Value: 3c", format!("Value: {:x}", 60)); + assert_eq!("Value: 3c", format!("Value: {:x}", u65::new(60))); +} + +#[test] +fn upper_hex() { + assert_eq!("Value: A", format!("Value: {:X}", 10u32)); + assert_eq!("Value: A", format!("Value: {:X}", u5::new(10))); + assert_eq!("Value: A", format!("Value: {:X}", u11::new(10))); + assert_eq!("Value: A", format!("Value: {:X}", u17::new(10))); + assert_eq!("Value: A", format!("Value: {:X}", u38::new(10))); + assert_eq!("Value: 3C", format!("Value: {:X}", 60)); + assert_eq!("Value: 3C", format!("Value: {:X}", u65::new(60))); +} + +#[test] +fn lower_hex_fancy() { + assert_eq!("Value: 0xa", format!("Value: {:#x}", 10u32)); + assert_eq!("Value: 0xa", format!("Value: {:#x}", u5::new(10))); + assert_eq!("Value: 0xa", format!("Value: {:#x}", u11::new(10))); + assert_eq!("Value: 0xa", format!("Value: {:#x}", u17::new(10))); + assert_eq!("Value: 0xa", format!("Value: {:#x}", u38::new(10))); + assert_eq!("Value: 0x3c", format!("Value: {:#x}", 60)); + assert_eq!("Value: 0x3c", format!("Value: {:#x}", u65::new(60))); +} + +#[test] +fn upper_hex_fancy() { + assert_eq!("Value: 0xA", format!("Value: {:#X}", 10u32)); + assert_eq!("Value: 0xA", format!("Value: {:#X}", u5::new(10))); + assert_eq!("Value: 0xA", format!("Value: {:#X}", u11::new(10))); + assert_eq!("Value: 0xA", format!("Value: {:#X}", u17::new(10))); + assert_eq!("Value: 0xA", format!("Value: {:#X}", u38::new(10))); + assert_eq!("Value: 0x3C", format!("Value: {:#X}", 60)); + assert_eq!("Value: 0x3C", format!("Value: {:#X}", u65::new(60))); +} + +#[test] +fn debug_lower_hex_fancy() { + assert_eq!("Value: 0xa", format!("Value: {:#x?}", 10u32)); + assert_eq!("Value: 0xa", format!("Value: {:#x?}", u5::new(10))); + assert_eq!("Value: 0xa", format!("Value: {:#x?}", u11::new(10))); + assert_eq!("Value: 
0xa", format!("Value: {:#x?}", u17::new(10))); + assert_eq!("Value: 0xa", format!("Value: {:#x?}", u38::new(10))); + assert_eq!("Value: 0x3c", format!("Value: {:#x?}", 60)); + assert_eq!("Value: 0x3c", format!("Value: {:#x?}", u65::new(60))); +} + +#[test] +fn debug_upper_hex_fancy() { + assert_eq!("Value: 0xA", format!("Value: {:#X?}", 10u32)); + assert_eq!("Value: 0xA", format!("Value: {:#X?}", u5::new(10))); + assert_eq!("Value: 0xA", format!("Value: {:#X?}", u11::new(10))); + assert_eq!("Value: 0xA", format!("Value: {:#X?}", u17::new(10))); + assert_eq!("Value: 0xA", format!("Value: {:#X?}", u38::new(10))); + assert_eq!("Value: 0x3C", format!("Value: {:#X?}", 60)); + assert_eq!("Value: 0x3C", format!("Value: {:#X?}", u65::new(60))); +} + +#[test] +fn octal() { + assert_eq!("Value: 12", format!("Value: {:o}", 10u32)); + assert_eq!("Value: 12", format!("Value: {:o}", u5::new(10))); + assert_eq!("Value: 12", format!("Value: {:o}", u11::new(10))); + assert_eq!("Value: 12", format!("Value: {:o}", u17::new(10))); + assert_eq!("Value: 12", format!("Value: {:o}", u38::new(10))); + assert_eq!("Value: 74", format!("Value: {:o}", 0o74)); + assert_eq!("Value: 74", format!("Value: {:o}", u65::new(0o74))); +} + +#[test] +fn binary() { + assert_eq!("Value: 1010", format!("Value: {:b}", 10u32)); + assert_eq!("Value: 1010", format!("Value: {:b}", u5::new(10))); + assert_eq!("Value: 1010", format!("Value: {:b}", u11::new(10))); + assert_eq!("Value: 1010", format!("Value: {:b}", u17::new(10))); + assert_eq!("Value: 1010", format!("Value: {:b}", u38::new(10))); + assert_eq!("Value: 111100", format!("Value: {:b}", 0b111100)); + assert_eq!("Value: 111100", format!("Value: {:b}", u65::new(0b111100))); +} + +#[test] +fn hash() { + let mut hashmap = HashMap::::new(); + + hashmap.insert(u5::new(11), u7::new(9)); + + assert_eq!(Some(&u7::new(9)), hashmap.get(&u5::new(11))); + assert_eq!(None, hashmap.get(&u5::new(12))); +} + +#[test] +fn swap_bytes() { + assert_eq!(u24::new(0x12_34_56).swap_bytes(), u24::new(0x56_34_12)); + assert_eq!( + UInt::::new(0x12_34_56).swap_bytes(), + UInt::::new(0x56_34_12) + ); + assert_eq!( + UInt::::new(0x12_34_56).swap_bytes(), + UInt::::new(0x56_34_12) + ); + + assert_eq!( + u40::new(0x12_34_56_78_9A).swap_bytes(), + u40::new(0x9A_78_56_34_12) + ); + assert_eq!( + UInt::::new(0x12_34_56_78_9A).swap_bytes(), + UInt::::new(0x9A_78_56_34_12) + ); + + assert_eq!( + u48::new(0x12_34_56_78_9A_BC).swap_bytes(), + u48::new(0xBC_9A_78_56_34_12) + ); + assert_eq!( + UInt::::new(0x12_34_56_78_9A_BC).swap_bytes(), + UInt::::new(0xBC_9A_78_56_34_12) + ); + + assert_eq!( + u56::new(0x12_34_56_78_9A_BC_DE).swap_bytes(), + u56::new(0xDE_BC_9A_78_56_34_12) + ); + assert_eq!( + UInt::::new(0x12_34_56_78_9A_BC_DE).swap_bytes(), + UInt::::new(0xDE_BC_9A_78_56_34_12) + ); + + assert_eq!( + u72::new(0x12_34_56_78_9A_BC_DE_FE_DC).swap_bytes(), + u72::new(0xDC_FE_DE_BC_9A_78_56_34_12) + ); + + assert_eq!( + u80::new(0x12_34_56_78_9A_BC_DE_FE_DC_BA).swap_bytes(), + u80::new(0xBA_DC_FE_DE_BC_9A_78_56_34_12) + ); + + assert_eq!( + u88::new(0x12_34_56_78_9A_BC_DE_FE_DC_BA_98).swap_bytes(), + u88::new(0x98_BA_DC_FE_DE_BC_9A_78_56_34_12) + ); + + assert_eq!( + u96::new(0x12_34_56_78_9A_BC_DE_FE_DC_BA_98_76).swap_bytes(), + u96::new(0x76_98_BA_DC_FE_DE_BC_9A_78_56_34_12) + ); + + assert_eq!( + u104::new(0x12_34_56_78_9A_BC_DE_FE_DC_BA_98_76_54).swap_bytes(), + u104::new(0x54_76_98_BA_DC_FE_DE_BC_9A_78_56_34_12) + ); + + assert_eq!( + u112::new(0x12_34_56_78_9A_BC_DE_FE_DC_BA_98_76_54_32).swap_bytes(), + 
u112::new(0x32_54_76_98_BA_DC_FE_DE_BC_9A_78_56_34_12) + ); + + assert_eq!( + u120::new(0x12_34_56_78_9A_BC_DE_FE_DC_BA_98_76_54_32_10).swap_bytes(), + u120::new(0x10_32_54_76_98_BA_DC_FE_DE_BC_9A_78_56_34_12) + ); +} + +#[test] +fn to_le_and_be_bytes() { + assert_eq!(u24::new(0x12_34_56).to_le_bytes(), [0x56, 0x34, 0x12]); + assert_eq!( + UInt::::new(0x12_34_56).to_le_bytes(), + [0x56, 0x34, 0x12] + ); + assert_eq!( + UInt::::new(0x12_34_56).to_le_bytes(), + [0x56, 0x34, 0x12] + ); + + assert_eq!(u24::new(0x12_34_56).to_be_bytes(), [0x12, 0x34, 0x56]); + assert_eq!( + UInt::::new(0x12_34_56).to_be_bytes(), + [0x12, 0x34, 0x56] + ); + assert_eq!( + UInt::::new(0x12_34_56).to_be_bytes(), + [0x12, 0x34, 0x56] + ); + + assert_eq!( + u40::new(0x12_34_56_78_9A).to_le_bytes(), + [0x9A, 0x78, 0x56, 0x34, 0x12] + ); + assert_eq!( + UInt::::new(0x12_34_56_78_9A).to_le_bytes(), + [0x9A, 0x78, 0x56, 0x34, 0x12] + ); + + assert_eq!( + u40::new(0x12_34_56_78_9A).to_be_bytes(), + [0x12, 0x34, 0x56, 0x78, 0x9A] + ); + assert_eq!( + UInt::::new(0x12_34_56_78_9A).to_be_bytes(), + [0x12, 0x34, 0x56, 0x78, 0x9A] + ); + + assert_eq!( + u48::new(0x12_34_56_78_9A_BC).to_le_bytes(), + [0xBC, 0x9A, 0x78, 0x56, 0x34, 0x12] + ); + assert_eq!( + UInt::::new(0x12_34_56_78_9A_BC).to_le_bytes(), + [0xBC, 0x9A, 0x78, 0x56, 0x34, 0x12] + ); + + assert_eq!( + u48::new(0x12_34_56_78_9A_BC).to_be_bytes(), + [0x12, 0x34, 0x56, 0x78, 0x9A, 0xBC] + ); + assert_eq!( + UInt::::new(0x12_34_56_78_9A_BC).to_be_bytes(), + [0x12, 0x34, 0x56, 0x78, 0x9A, 0xBC] + ); + + assert_eq!( + u56::new(0x12_34_56_78_9A_BC_DE).to_le_bytes(), + [0xDE, 0xBC, 0x9A, 0x78, 0x56, 0x34, 0x12] + ); + assert_eq!( + UInt::::new(0x12_34_56_78_9A_BC_DE).to_le_bytes(), + [0xDE, 0xBC, 0x9A, 0x78, 0x56, 0x34, 0x12] + ); + + assert_eq!( + u56::new(0x12_34_56_78_9A_BC_DE).to_be_bytes(), + [0x12, 0x34, 0x56, 0x78, 0x9A, 0xBC, 0xDE] + ); + assert_eq!( + UInt::::new(0x12_34_56_78_9A_BC_DE).to_be_bytes(), + [0x12, 0x34, 0x56, 0x78, 0x9A, 0xBC, 0xDE] + ); + + assert_eq!( + u72::new(0x12_34_56_78_9A_BC_DE_FE_DC).to_le_bytes(), + [0xDC, 0xFE, 0xDE, 0xBC, 0x9A, 0x78, 0x56, 0x34, 0x12] + ); + + assert_eq!( + u72::new(0x12_34_56_78_9A_BC_DE_FE_DC).to_be_bytes(), + [0x12, 0x34, 0x56, 0x78, 0x9A, 0xBC, 0xDE, 0xFE, 0xDC] + ); + + assert_eq!( + u80::new(0x12_34_56_78_9A_BC_DE_FE_DC_BA).to_le_bytes(), + [0xBA, 0xDC, 0xFE, 0xDE, 0xBC, 0x9A, 0x78, 0x56, 0x34, 0x12] + ); + + assert_eq!( + u80::new(0x12_34_56_78_9A_BC_DE_FE_DC_BA).to_be_bytes(), + [0x12, 0x34, 0x56, 0x78, 0x9A, 0xBC, 0xDE, 0xFE, 0xDC, 0xBA] + ); + + assert_eq!( + u88::new(0x12_34_56_78_9A_BC_DE_FE_DC_BA_98).to_le_bytes(), + [0x98, 0xBA, 0xDC, 0xFE, 0xDE, 0xBC, 0x9A, 0x78, 0x56, 0x34, 0x12] + ); + + assert_eq!( + u88::new(0x12_34_56_78_9A_BC_DE_FE_DC_BA_98).to_be_bytes(), + [0x12, 0x34, 0x56, 0x78, 0x9A, 0xBC, 0xDE, 0xFE, 0xDC, 0xBA, 0x98] + ); + + assert_eq!( + u96::new(0x12_34_56_78_9A_BC_DE_FE_DC_BA_98_76).to_le_bytes(), + [0x76, 0x98, 0xBA, 0xDC, 0xFE, 0xDE, 0xBC, 0x9A, 0x78, 0x56, 0x34, 0x12] + ); + + assert_eq!( + u96::new(0x12_34_56_78_9A_BC_DE_FE_DC_BA_98_76).to_be_bytes(), + [0x12, 0x34, 0x56, 0x78, 0x9A, 0xBC, 0xDE, 0xFE, 0xDC, 0xBA, 0x98, 0x76] + ); + + assert_eq!( + u104::new(0x12_34_56_78_9A_BC_DE_FE_DC_BA_98_76_54).to_le_bytes(), + [0x54, 0x76, 0x98, 0xBA, 0xDC, 0xFE, 0xDE, 0xBC, 0x9A, 0x78, 0x56, 0x34, 0x12] + ); + + assert_eq!( + u104::new(0x12_34_56_78_9A_BC_DE_FE_DC_BA_98_76_54).to_be_bytes(), + [0x12, 0x34, 0x56, 0x78, 0x9A, 0xBC, 0xDE, 0xFE, 0xDC, 0xBA, 0x98, 0x76, 0x54] + ); + + assert_eq!( + 
u112::new(0x12_34_56_78_9A_BC_DE_FE_DC_BA_98_76_54_32).to_le_bytes(), + [0x32, 0x54, 0x76, 0x98, 0xBA, 0xDC, 0xFE, 0xDE, 0xBC, 0x9A, 0x78, 0x56, 0x34, 0x12] + ); + + assert_eq!( + u112::new(0x12_34_56_78_9A_BC_DE_FE_DC_BA_98_76_54_32).to_be_bytes(), + [0x12, 0x34, 0x56, 0x78, 0x9A, 0xBC, 0xDE, 0xFE, 0xDC, 0xBA, 0x98, 0x76, 0x54, 0x32] + ); + + assert_eq!( + u120::new(0x12_34_56_78_9A_BC_DE_FE_DC_BA_98_76_54_32_10).to_le_bytes(), + [ + 0x10, 0x32, 0x54, 0x76, 0x98, 0xBA, 0xDC, 0xFE, 0xDE, 0xBC, 0x9A, 0x78, 0x56, 0x34, + 0x12 + ] + ); + + assert_eq!( + u120::new(0x12_34_56_78_9A_BC_DE_FE_DC_BA_98_76_54_32_10).to_be_bytes(), + [ + 0x12, 0x34, 0x56, 0x78, 0x9A, 0xBC, 0xDE, 0xFE, 0xDC, 0xBA, 0x98, 0x76, 0x54, 0x32, + 0x10 + ] + ); +} + +#[test] +fn from_le_and_be_bytes() { + assert_eq!(u24::from_le_bytes([0x56, 0x34, 0x12]), u24::new(0x12_34_56)); + assert_eq!( + UInt::::from_le_bytes([0x56, 0x34, 0x12]), + UInt::::new(0x12_34_56) + ); + assert_eq!( + UInt::::from_le_bytes([0x56, 0x34, 0x12]), + UInt::::new(0x12_34_56) + ); + + assert_eq!(u24::from_be_bytes([0x12, 0x34, 0x56]), u24::new(0x12_34_56)); + assert_eq!( + UInt::::from_be_bytes([0x12, 0x34, 0x56]), + UInt::::new(0x12_34_56) + ); + assert_eq!( + UInt::::from_be_bytes([0x12, 0x34, 0x56]), + UInt::::new(0x12_34_56) + ); + + assert_eq!( + u40::from_le_bytes([0x9A, 0x78, 0x56, 0x34, 0x12]), + u40::new(0x12_34_56_78_9A) + ); + assert_eq!( + UInt::::from_le_bytes([0x9A, 0x78, 0x56, 0x34, 0x12]), + UInt::::new(0x12_34_56_78_9A) + ); + + assert_eq!( + u40::from_be_bytes([0x12, 0x34, 0x56, 0x78, 0x9A]), + u40::new(0x12_34_56_78_9A) + ); + assert_eq!( + UInt::::from_be_bytes([0x12, 0x34, 0x56, 0x78, 0x9A]), + UInt::::new(0x12_34_56_78_9A) + ); + + assert_eq!( + u48::from_le_bytes([0xBC, 0x9A, 0x78, 0x56, 0x34, 0x12]), + u48::new(0x12_34_56_78_9A_BC) + ); + assert_eq!( + UInt::::from_le_bytes([0xBC, 0x9A, 0x78, 0x56, 0x34, 0x12]), + UInt::::new(0x12_34_56_78_9A_BC) + ); + + assert_eq!( + u48::from_be_bytes([0x12, 0x34, 0x56, 0x78, 0x9A, 0xBC]), + u48::new(0x12_34_56_78_9A_BC) + ); + assert_eq!( + UInt::::from_be_bytes([0x12, 0x34, 0x56, 0x78, 0x9A, 0xBC]), + UInt::::new(0x12_34_56_78_9A_BC) + ); + + assert_eq!( + u56::from_le_bytes([0xDE, 0xBC, 0x9A, 0x78, 0x56, 0x34, 0x12]), + u56::new(0x12_34_56_78_9A_BC_DE) + ); + assert_eq!( + UInt::::from_le_bytes([0xDE, 0xBC, 0x9A, 0x78, 0x56, 0x34, 0x12]), + UInt::::new(0x12_34_56_78_9A_BC_DE) + ); + + assert_eq!( + u56::from_be_bytes([0x12, 0x34, 0x56, 0x78, 0x9A, 0xBC, 0xDE]), + u56::new(0x12_34_56_78_9A_BC_DE) + ); + assert_eq!( + UInt::::from_be_bytes([0x12, 0x34, 0x56, 0x78, 0x9A, 0xBC, 0xDE]), + UInt::::new(0x12_34_56_78_9A_BC_DE) + ); + + assert_eq!( + u72::from_le_bytes([0xDC, 0xFE, 0xDE, 0xBC, 0x9A, 0x78, 0x56, 0x34, 0x12]), + u72::new(0x12_34_56_78_9A_BC_DE_FE_DC) + ); + + assert_eq!( + u72::from_be_bytes([0x12, 0x34, 0x56, 0x78, 0x9A, 0xBC, 0xDE, 0xFE, 0xDC]), + u72::new(0x12_34_56_78_9A_BC_DE_FE_DC) + ); + + assert_eq!( + u80::from_le_bytes([0xBA, 0xDC, 0xFE, 0xDE, 0xBC, 0x9A, 0x78, 0x56, 0x34, 0x12]), + u80::new(0x12_34_56_78_9A_BC_DE_FE_DC_BA) + ); + + assert_eq!( + u80::from_be_bytes([0x12, 0x34, 0x56, 0x78, 0x9A, 0xBC, 0xDE, 0xFE, 0xDC, 0xBA]), + u80::new(0x12_34_56_78_9A_BC_DE_FE_DC_BA) + ); + + assert_eq!( + u88::from_le_bytes([0x98, 0xBA, 0xDC, 0xFE, 0xDE, 0xBC, 0x9A, 0x78, 0x56, 0x34, 0x12]), + u88::new(0x12_34_56_78_9A_BC_DE_FE_DC_BA_98) + ); + + assert_eq!( + u88::from_be_bytes([0x12, 0x34, 0x56, 0x78, 0x9A, 0xBC, 0xDE, 0xFE, 0xDC, 0xBA, 0x98]), + 
u88::new(0x12_34_56_78_9A_BC_DE_FE_DC_BA_98) + ); + + assert_eq!( + u96::from_le_bytes([ + 0x76, 0x98, 0xBA, 0xDC, 0xFE, 0xDE, 0xBC, 0x9A, 0x78, 0x56, 0x34, 0x12 + ]), + u96::new(0x12_34_56_78_9A_BC_DE_FE_DC_BA_98_76) + ); + + assert_eq!( + u96::from_be_bytes([ + 0x12, 0x34, 0x56, 0x78, 0x9A, 0xBC, 0xDE, 0xFE, 0xDC, 0xBA, 0x98, 0x76 + ]), + u96::new(0x12_34_56_78_9A_BC_DE_FE_DC_BA_98_76) + ); + + assert_eq!( + u104::from_le_bytes([ + 0x54, 0x76, 0x98, 0xBA, 0xDC, 0xFE, 0xDE, 0xBC, 0x9A, 0x78, 0x56, 0x34, 0x12 + ]), + u104::new(0x12_34_56_78_9A_BC_DE_FE_DC_BA_98_76_54) + ); + + assert_eq!( + u104::from_be_bytes([ + 0x12, 0x34, 0x56, 0x78, 0x9A, 0xBC, 0xDE, 0xFE, 0xDC, 0xBA, 0x98, 0x76, 0x54 + ]), + u104::new(0x12_34_56_78_9A_BC_DE_FE_DC_BA_98_76_54) + ); + + assert_eq!( + u112::from_le_bytes([ + 0x32, 0x54, 0x76, 0x98, 0xBA, 0xDC, 0xFE, 0xDE, 0xBC, 0x9A, 0x78, 0x56, 0x34, 0x12 + ]), + u112::new(0x12_34_56_78_9A_BC_DE_FE_DC_BA_98_76_54_32) + ); + + assert_eq!( + u112::from_be_bytes([ + 0x12, 0x34, 0x56, 0x78, 0x9A, 0xBC, 0xDE, 0xFE, 0xDC, 0xBA, 0x98, 0x76, 0x54, 0x32 + ]), + u112::new(0x12_34_56_78_9A_BC_DE_FE_DC_BA_98_76_54_32) + ); + + assert_eq!( + u120::from_le_bytes([ + 0x10, 0x32, 0x54, 0x76, 0x98, 0xBA, 0xDC, 0xFE, 0xDE, 0xBC, 0x9A, 0x78, 0x56, 0x34, + 0x12 + ]), + u120::new(0x12_34_56_78_9A_BC_DE_FE_DC_BA_98_76_54_32_10) + ); + + assert_eq!( + u120::from_be_bytes([ + 0x12, 0x34, 0x56, 0x78, 0x9A, 0xBC, 0xDE, 0xFE, 0xDC, 0xBA, 0x98, 0x76, 0x54, 0x32, + 0x10 + ]), + u120::new(0x12_34_56_78_9A_BC_DE_FE_DC_BA_98_76_54_32_10) + ); +} + +#[test] +fn to_ne_bytes() { + if cfg!(target_endian = "little") { + assert_eq!( + u40::new(0x12_34_56_78_9A).to_ne_bytes(), + [0x9A, 0x78, 0x56, 0x34, 0x12] + ); + } else { + assert_eq!( + u40::new(0x12_34_56_78_9A).to_ne_bytes(), + [0x12, 0x34, 0x56, 0x78, 0x9A] + ); + } +} + +#[test] +fn from_ne_bytes() { + if cfg!(target_endian = "little") { + assert_eq!( + u40::from_ne_bytes([0x9A, 0x78, 0x56, 0x34, 0x12]), + u40::new(0x12_34_56_78_9A) + ); + } else { + assert_eq!( + u40::from_ne_bytes([0x12, 0x34, 0x56, 0x78, 0x9A]), + u40::new(0x12_34_56_78_9A) + ); + } +} + +#[test] +fn simple_le_be() { + const REGULAR: u40 = u40::new(0x12_34_56_78_9A); + const SWAPPED: u40 = u40::new(0x9A_78_56_34_12); + if cfg!(target_endian = "little") { + assert_eq!(REGULAR.to_le(), REGULAR); + assert_eq!(REGULAR.to_be(), SWAPPED); + assert_eq!(u40::from_le(REGULAR), REGULAR); + assert_eq!(u40::from_be(REGULAR), SWAPPED); + } else { + assert_eq!(REGULAR.to_le(), SWAPPED); + assert_eq!(REGULAR.to_be(), REGULAR); + assert_eq!(u40::from_le(REGULAR), SWAPPED); + assert_eq!(u40::from_be(REGULAR), REGULAR); + } +} + +#[test] +fn wrapping_add() { + assert_eq!(u7::new(120).wrapping_add(u7::new(1)), u7::new(121)); + assert_eq!(u7::new(120).wrapping_add(u7::new(10)), u7::new(2)); + assert_eq!(u7::new(127).wrapping_add(u7::new(127)), u7::new(126)); +} + +#[test] +fn wrapping_sub() { + assert_eq!(u7::new(120).wrapping_sub(u7::new(1)), u7::new(119)); + assert_eq!(u7::new(10).wrapping_sub(u7::new(20)), u7::new(118)); + assert_eq!(u7::new(0).wrapping_sub(u7::new(1)), u7::new(127)); +} + +#[test] +fn wrapping_mul() { + assert_eq!(u7::new(120).wrapping_mul(u7::new(0)), u7::new(0)); + assert_eq!(u7::new(120).wrapping_mul(u7::new(1)), u7::new(120)); + + // Overflow u7 + assert_eq!(u7::new(120).wrapping_mul(u7::new(2)), u7::new(112)); + + // Overflow the underlying type + assert_eq!(u7::new(120).wrapping_mul(u7::new(3)), u7::new(104)); +} + +#[test] +fn wrapping_div() { + 
assert_eq!(u7::new(120).wrapping_div(u7::new(1)), u7::new(120)); + assert_eq!(u7::new(120).wrapping_div(u7::new(2)), u7::new(60)); + assert_eq!(u7::new(120).wrapping_div(u7::new(120)), u7::new(1)); + assert_eq!(u7::new(120).wrapping_div(u7::new(121)), u7::new(0)); +} + +#[should_panic] +#[test] +fn wrapping_div_by_zero() { + let _ = u7::new(120).wrapping_div(u7::new(0)); +} + +#[test] +fn wrapping_shl() { + assert_eq!(u7::new(0b010_1101).wrapping_shl(0), u7::new(0b010_1101)); + assert_eq!(u7::new(0b010_1101).wrapping_shl(1), u7::new(0b101_1010)); + assert_eq!(u7::new(0b010_1101).wrapping_shl(6), u7::new(0b100_0000)); + assert_eq!(u7::new(0b010_1101).wrapping_shl(7), u7::new(0b010_1101)); + assert_eq!(u7::new(0b010_1101).wrapping_shl(8), u7::new(0b101_1010)); + assert_eq!(u7::new(0b010_1101).wrapping_shl(14), u7::new(0b010_1101)); + assert_eq!(u7::new(0b010_1101).wrapping_shl(15), u7::new(0b101_1010)); +} + +#[test] +fn wrapping_shr() { + assert_eq!(u7::new(0b010_1101).wrapping_shr(0), u7::new(0b010_1101)); + assert_eq!(u7::new(0b010_1101).wrapping_shr(1), u7::new(0b001_0110)); + assert_eq!(u7::new(0b010_1101).wrapping_shr(5), u7::new(0b000_0001)); + assert_eq!(u7::new(0b010_1101).wrapping_shr(7), u7::new(0b010_1101)); + assert_eq!(u7::new(0b010_1101).wrapping_shr(8), u7::new(0b001_0110)); + assert_eq!(u7::new(0b010_1101).wrapping_shr(14), u7::new(0b010_1101)); + assert_eq!(u7::new(0b010_1101).wrapping_shr(15), u7::new(0b001_0110)); +} + +#[test] +fn saturating_add() { + assert_eq!(u7::new(120).saturating_add(u7::new(1)), u7::new(121)); + assert_eq!(u7::new(120).saturating_add(u7::new(10)), u7::new(127)); + assert_eq!(u7::new(127).saturating_add(u7::new(127)), u7::new(127)); + assert_eq!( + UInt::::new(250).saturating_add(UInt::::new(10)), + UInt::::new(255) + ); +} + +#[test] +fn saturating_sub() { + assert_eq!(u7::new(120).saturating_sub(u7::new(30)), u7::new(90)); + assert_eq!(u7::new(120).saturating_sub(u7::new(119)), u7::new(1)); + assert_eq!(u7::new(120).saturating_sub(u7::new(120)), u7::new(0)); + assert_eq!(u7::new(120).saturating_sub(u7::new(121)), u7::new(0)); + assert_eq!(u7::new(0).saturating_sub(u7::new(127)), u7::new(0)); +} + +#[test] +fn saturating_mul() { + // Fast-path: Only the arbitrary int is bounds checked + assert_eq!(u4::new(5).saturating_mul(u4::new(2)), u4::new(10)); + assert_eq!(u4::new(5).saturating_mul(u4::new(3)), u4::new(15)); + assert_eq!(u4::new(5).saturating_mul(u4::new(4)), u4::new(15)); + assert_eq!(u4::new(5).saturating_mul(u4::new(5)), u4::new(15)); + assert_eq!(u4::new(5).saturating_mul(u4::new(6)), u4::new(15)); + assert_eq!(u4::new(5).saturating_mul(u4::new(7)), u4::new(15)); + + // Slow-path (well, one more comparison) + assert_eq!(u5::new(5).saturating_mul(u5::new(2)), u5::new(10)); + assert_eq!(u5::new(5).saturating_mul(u5::new(3)), u5::new(15)); + assert_eq!(u5::new(5).saturating_mul(u5::new(4)), u5::new(20)); + assert_eq!(u5::new(5).saturating_mul(u5::new(5)), u5::new(25)); + assert_eq!(u5::new(5).saturating_mul(u5::new(6)), u5::new(30)); + assert_eq!(u5::new(5).saturating_mul(u5::new(7)), u5::new(31)); + assert_eq!(u5::new(30).saturating_mul(u5::new(1)), u5::new(30)); + assert_eq!(u5::new(30).saturating_mul(u5::new(2)), u5::new(31)); + assert_eq!(u5::new(30).saturating_mul(u5::new(10)), u5::new(31)); +} + +#[test] +fn saturating_div() { + assert_eq!(u4::new(5).saturating_div(u4::new(1)), u4::new(5)); + assert_eq!(u4::new(5).saturating_div(u4::new(2)), u4::new(2)); + assert_eq!(u4::new(5).saturating_div(u4::new(3)), u4::new(1)); + 
assert_eq!(u4::new(5).saturating_div(u4::new(4)), u4::new(1)); + assert_eq!(u4::new(5).saturating_div(u4::new(5)), u4::new(1)); +} + +#[test] +#[should_panic] +fn saturating_divby0() { + // saturating_div throws an exception on zero + let _ = u4::new(5).saturating_div(u4::new(0)); +} + +#[test] +fn saturating_pow() { + assert_eq!(u7::new(5).saturating_pow(0), u7::new(1)); + assert_eq!(u7::new(5).saturating_pow(1), u7::new(5)); + assert_eq!(u7::new(5).saturating_pow(2), u7::new(25)); + assert_eq!(u7::new(5).saturating_pow(3), u7::new(125)); + assert_eq!(u7::new(5).saturating_pow(4), u7::new(127)); + assert_eq!(u7::new(5).saturating_pow(255), u7::new(127)); +} + +#[test] +fn checked_add() { + assert_eq!(u7::new(120).checked_add(u7::new(1)), Some(u7::new(121))); + assert_eq!(u7::new(120).checked_add(u7::new(7)), Some(u7::new(127))); + assert_eq!(u7::new(120).checked_add(u7::new(10)), None); + assert_eq!(u7::new(127).checked_add(u7::new(127)), None); + assert_eq!( + UInt::::new(250).checked_add(UInt::::new(10)), + None + ); +} + +#[test] +fn checked_sub() { + assert_eq!(u7::new(120).checked_sub(u7::new(30)), Some(u7::new(90))); + assert_eq!(u7::new(120).checked_sub(u7::new(119)), Some(u7::new(1))); + assert_eq!(u7::new(120).checked_sub(u7::new(120)), Some(u7::new(0))); + assert_eq!(u7::new(120).checked_sub(u7::new(121)), None); + assert_eq!(u7::new(0).checked_sub(u7::new(127)), None); +} + +#[test] +fn checked_mul() { + // Fast-path: Only the arbitrary int is bounds checked + assert_eq!(u4::new(5).checked_mul(u4::new(2)), Some(u4::new(10))); + assert_eq!(u4::new(5).checked_mul(u4::new(3)), Some(u4::new(15))); + assert_eq!(u4::new(5).checked_mul(u4::new(4)), None); + assert_eq!(u4::new(5).checked_mul(u4::new(5)), None); + assert_eq!(u4::new(5).checked_mul(u4::new(6)), None); + assert_eq!(u4::new(5).checked_mul(u4::new(7)), None); + + // Slow-path (well, one more comparison) + assert_eq!(u5::new(5).checked_mul(u5::new(2)), Some(u5::new(10))); + assert_eq!(u5::new(5).checked_mul(u5::new(3)), Some(u5::new(15))); + assert_eq!(u5::new(5).checked_mul(u5::new(4)), Some(u5::new(20))); + assert_eq!(u5::new(5).checked_mul(u5::new(5)), Some(u5::new(25))); + assert_eq!(u5::new(5).checked_mul(u5::new(6)), Some(u5::new(30))); + assert_eq!(u5::new(5).checked_mul(u5::new(7)), None); + assert_eq!(u5::new(30).checked_mul(u5::new(1)), Some(u5::new(30))); + assert_eq!(u5::new(30).checked_mul(u5::new(2)), None); + assert_eq!(u5::new(30).checked_mul(u5::new(10)), None); +} + +#[test] +fn checked_div() { + // checked_div handles division by zero without exception, unlike saturating_div + assert_eq!(u4::new(5).checked_div(u4::new(0)), None); + assert_eq!(u4::new(5).checked_div(u4::new(1)), Some(u4::new(5))); + assert_eq!(u4::new(5).checked_div(u4::new(2)), Some(u4::new(2))); + assert_eq!(u4::new(5).checked_div(u4::new(3)), Some(u4::new(1))); + assert_eq!(u4::new(5).checked_div(u4::new(4)), Some(u4::new(1))); + assert_eq!(u4::new(5).checked_div(u4::new(5)), Some(u4::new(1))); +} + +#[test] +fn checked_shl() { + assert_eq!( + u7::new(0b010_1101).checked_shl(0), + Some(u7::new(0b010_1101)) + ); + assert_eq!( + u7::new(0b010_1101).checked_shl(1), + Some(u7::new(0b101_1010)) + ); + assert_eq!( + u7::new(0b010_1101).checked_shl(6), + Some(u7::new(0b100_0000)) + ); + assert_eq!(u7::new(0b010_1101).checked_shl(7), None); + assert_eq!(u7::new(0b010_1101).checked_shl(8), None); + assert_eq!(u7::new(0b010_1101).checked_shl(14), None); + assert_eq!(u7::new(0b010_1101).checked_shl(15), None); +} + +#[test] +fn checked_shr() { + 
assert_eq!( + u7::new(0b010_1101).checked_shr(0), + Some(u7::new(0b010_1101)) + ); + assert_eq!( + u7::new(0b010_1101).checked_shr(1), + Some(u7::new(0b001_0110)) + ); + assert_eq!( + u7::new(0b010_1101).checked_shr(5), + Some(u7::new(0b000_0001)) + ); + assert_eq!(u7::new(0b010_1101).checked_shr(7), None); + assert_eq!(u7::new(0b010_1101).checked_shr(8), None); + assert_eq!(u7::new(0b010_1101).checked_shr(14), None); + assert_eq!(u7::new(0b010_1101).checked_shr(15), None); +} + +#[test] +fn overflowing_add() { + assert_eq!( + u7::new(120).overflowing_add(u7::new(1)), + (u7::new(121), false) + ); + assert_eq!( + u7::new(120).overflowing_add(u7::new(7)), + (u7::new(127), false) + ); + assert_eq!( + u7::new(120).overflowing_add(u7::new(10)), + (u7::new(2), true) + ); + assert_eq!( + u7::new(127).overflowing_add(u7::new(127)), + (u7::new(126), true) + ); + assert_eq!( + UInt::::new(250).overflowing_add(UInt::::new(5)), + (UInt::::new(255), false) + ); + assert_eq!( + UInt::::new(250).overflowing_add(UInt::::new(10)), + (UInt::::new(4), true) + ); +} + +#[test] +fn overflowing_sub() { + assert_eq!( + u7::new(120).overflowing_sub(u7::new(30)), + (u7::new(90), false) + ); + assert_eq!( + u7::new(120).overflowing_sub(u7::new(119)), + (u7::new(1), false) + ); + assert_eq!( + u7::new(120).overflowing_sub(u7::new(120)), + (u7::new(0), false) + ); + assert_eq!( + u7::new(120).overflowing_sub(u7::new(121)), + (u7::new(127), true) + ); + assert_eq!(u7::new(0).overflowing_sub(u7::new(127)), (u7::new(1), true)); +} + +#[test] +fn overflowing_mul() { + // Fast-path: Only the arbitrary int is bounds checked + assert_eq!(u4::new(5).overflowing_mul(u4::new(2)), (u4::new(10), false)); + assert_eq!(u4::new(5).overflowing_mul(u4::new(3)), (u4::new(15), false)); + assert_eq!(u4::new(5).overflowing_mul(u4::new(4)), (u4::new(4), true)); + assert_eq!(u4::new(5).overflowing_mul(u4::new(5)), (u4::new(9), true)); + assert_eq!(u4::new(5).overflowing_mul(u4::new(6)), (u4::new(14), true)); + assert_eq!(u4::new(5).overflowing_mul(u4::new(7)), (u4::new(3), true)); + + // Slow-path (well, one more comparison) + assert_eq!(u5::new(5).overflowing_mul(u5::new(2)), (u5::new(10), false)); + assert_eq!(u5::new(5).overflowing_mul(u5::new(3)), (u5::new(15), false)); + assert_eq!(u5::new(5).overflowing_mul(u5::new(4)), (u5::new(20), false)); + assert_eq!(u5::new(5).overflowing_mul(u5::new(5)), (u5::new(25), false)); + assert_eq!(u5::new(5).overflowing_mul(u5::new(6)), (u5::new(30), false)); + assert_eq!(u5::new(5).overflowing_mul(u5::new(7)), (u5::new(3), true)); + assert_eq!( + u5::new(30).overflowing_mul(u5::new(1)), + (u5::new(30), false) + ); + assert_eq!(u5::new(30).overflowing_mul(u5::new(2)), (u5::new(28), true)); + assert_eq!( + u5::new(30).overflowing_mul(u5::new(10)), + (u5::new(12), true) + ); +} + +#[test] +fn overflowing_div() { + assert_eq!(u4::new(5).overflowing_div(u4::new(1)), (u4::new(5), false)); + assert_eq!(u4::new(5).overflowing_div(u4::new(2)), (u4::new(2), false)); + assert_eq!(u4::new(5).overflowing_div(u4::new(3)), (u4::new(1), false)); + assert_eq!(u4::new(5).overflowing_div(u4::new(4)), (u4::new(1), false)); + assert_eq!(u4::new(5).overflowing_div(u4::new(5)), (u4::new(1), false)); +} + +#[should_panic] +#[test] +fn overflowing_div_by_zero() { + let _ = u4::new(5).overflowing_div(u4::new(0)); +} + +#[test] +fn overflowing_shl() { + assert_eq!( + u7::new(0b010_1101).overflowing_shl(0), + (u7::new(0b010_1101), false) + ); + assert_eq!( + u7::new(0b010_1101).overflowing_shl(1), + (u7::new(0b101_1010), false) 
+ ); + assert_eq!( + u7::new(0b010_1101).overflowing_shl(6), + (u7::new(0b100_0000), false) + ); + assert_eq!( + u7::new(0b010_1101).overflowing_shl(7), + (u7::new(0b010_1101), true) + ); + assert_eq!( + u7::new(0b010_1101).overflowing_shl(8), + (u7::new(0b101_1010), true) + ); + assert_eq!( + u7::new(0b010_1101).overflowing_shl(14), + (u7::new(0b010_1101), true) + ); + assert_eq!( + u7::new(0b010_1101).overflowing_shl(15), + (u7::new(0b101_1010), true) + ); +} + +#[test] +fn overflowing_shr() { + assert_eq!( + u7::new(0b010_1101).overflowing_shr(0), + (u7::new(0b010_1101), false) + ); + assert_eq!( + u7::new(0b010_1101).overflowing_shr(1), + (u7::new(0b001_0110), false) + ); + assert_eq!( + u7::new(0b010_1101).overflowing_shr(5), + (u7::new(0b000_0001), false) + ); + assert_eq!( + u7::new(0b010_1101).overflowing_shr(7), + (u7::new(0b010_1101), true) + ); + assert_eq!( + u7::new(0b010_1101).overflowing_shr(8), + (u7::new(0b001_0110), true) + ); + assert_eq!( + u7::new(0b010_1101).overflowing_shr(14), + (u7::new(0b010_1101), true) + ); + assert_eq!( + u7::new(0b010_1101).overflowing_shr(15), + (u7::new(0b001_0110), true) + ); +} + +#[test] +fn reverse_bits() { + const A: u5 = u5::new(0b11101); + const B: u5 = A.reverse_bits(); + assert_eq!(B, u5::new(0b10111)); + + assert_eq!( + UInt::::new(0b101011), + UInt::::new(0b110101).reverse_bits() + ); + + assert_eq!(u1::new(1).reverse_bits().value(), 1); + assert_eq!(u1::new(0).reverse_bits().value(), 0); +} + +#[test] +fn count_ones_and_zeros() { + assert_eq!(4, u5::new(0b10111).count_ones()); + assert_eq!(1, u5::new(0b10111).count_zeros()); + assert_eq!(1, u5::new(0b10111).leading_ones()); + assert_eq!(0, u5::new(0b10111).leading_zeros()); + assert_eq!(3, u5::new(0b10111).trailing_ones()); + assert_eq!(0, u5::new(0b10111).trailing_zeros()); + + assert_eq!(2, u5::new(0b10100).trailing_zeros()); + assert_eq!(3, u5::new(0b00011).leading_zeros()); + + assert_eq!(0, u5::new(0b00000).count_ones()); + assert_eq!(5, u5::new(0b00000).count_zeros()); + + assert_eq!(5, u5::new(0b11111).count_ones()); + assert_eq!(0, u5::new(0b11111).count_zeros()); + + assert_eq!(3, u127::new(0b111).count_ones()); + assert_eq!(124, u127::new(0b111).count_zeros()); +} + +#[test] +fn rotate_left() { + assert_eq!(u1::new(0b1), u1::new(0b1).rotate_left(1)); + assert_eq!(u2::new(0b01), u2::new(0b10).rotate_left(1)); + + assert_eq!(u5::new(0b10111), u5::new(0b10111).rotate_left(0)); + assert_eq!(u5::new(0b01111), u5::new(0b10111).rotate_left(1)); + assert_eq!(u5::new(0b11110), u5::new(0b10111).rotate_left(2)); + assert_eq!(u5::new(0b11101), u5::new(0b10111).rotate_left(3)); + assert_eq!(u5::new(0b11011), u5::new(0b10111).rotate_left(4)); + assert_eq!(u5::new(0b10111), u5::new(0b10111).rotate_left(5)); + assert_eq!(u5::new(0b01111), u5::new(0b10111).rotate_left(6)); + assert_eq!(u5::new(0b01111), u5::new(0b10111).rotate_left(556)); + + assert_eq!(u24::new(0x0FFEEC), u24::new(0xC0FFEE).rotate_left(4)); +} + +#[test] +fn rotate_right() { + assert_eq!(u1::new(0b1), u1::new(0b1).rotate_right(1)); + assert_eq!(u2::new(0b01), u2::new(0b10).rotate_right(1)); + + assert_eq!(u5::new(0b10011), u5::new(0b10011).rotate_right(0)); + assert_eq!(u5::new(0b11001), u5::new(0b10011).rotate_right(1)); + assert_eq!(u5::new(0b11100), u5::new(0b10011).rotate_right(2)); + assert_eq!(u5::new(0b01110), u5::new(0b10011).rotate_right(3)); + assert_eq!(u5::new(0b00111), u5::new(0b10011).rotate_right(4)); + assert_eq!(u5::new(0b10011), u5::new(0b10011).rotate_right(5)); + assert_eq!(u5::new(0b11001), 
u5::new(0b10011).rotate_right(6)); + + assert_eq!(u24::new(0xEC0FFE), u24::new(0xC0FFEE).rotate_right(4)); +} + +#[cfg(feature = "step_trait")] +#[test] +fn range_agrees_with_underlying() { + compare_range(u19::MIN, u19::MAX); + compare_range(u37::new(95_993), u37::new(1_994_910)); + compare_range(u68::new(58_858_348), u68::new(58_860_000)); + compare_range(u122::new(111_222_333_444), u122::new(111_222_444_555)); + compare_range(u5::MIN, u5::MAX); + compare_range(u23::MIN, u23::MAX); + compare_range(u48::new(999_444), u48::new(1_005_000)); + compare_range(u99::new(12345), u99::new(54321)); + + fn compare_range(arb_start: UInt, arb_end: UInt) + where + T: Copy + Step, + UInt: Step, + { + let arbint_range = (arb_start..=arb_end).map(UInt::value); + let underlying_range = arb_start.value()..=arb_end.value(); + + assert!(arbint_range.eq(underlying_range)); + } +} + +#[cfg(feature = "step_trait")] +#[test] +fn forward_checked() { + // In range + assert_eq!(Some(u7::new(121)), Step::forward_checked(u7::new(120), 1)); + assert_eq!(Some(u7::new(127)), Step::forward_checked(u7::new(120), 7)); + + // Out of range + assert_eq!(None, Step::forward_checked(u7::new(120), 8)); + + // Out of range for the underlying type + assert_eq!(None, Step::forward_checked(u7::new(120), 140)); +} + +#[cfg(feature = "step_trait")] +#[test] +fn backward_checked() { + // In range + assert_eq!(Some(u7::new(1)), Step::backward_checked(u7::new(10), 9)); + assert_eq!(Some(u7::new(0)), Step::backward_checked(u7::new(10), 10)); + + // Out of range (for both the arbitrary int and and the underlying type) + assert_eq!(None, Step::backward_checked(u7::new(10), 11)); +} + +#[cfg(feature = "step_trait")] +#[test] +fn steps_between() { + assert_eq!(Some(0), Step::steps_between(&u50::new(50), &u50::new(50))); + + assert_eq!(Some(4), Step::steps_between(&u24::new(5), &u24::new(9))); + assert_eq!(None, Step::steps_between(&u24::new(9), &u24::new(5))); + + // this assumes usize is <= 64 bits. a test like this one exists in `core::iter::step`. 
+ assert_eq!( + Some(usize::MAX), + Step::steps_between(&u125::new(0x7), &u125::new(0x1_0000_0000_0000_0006)) + ); + assert_eq!( + None, + Step::steps_between(&u125::new(0x7), &u125::new(0x1_0000_0000_0000_0007)) + ); +} + +#[cfg(feature = "serde")] +#[test] +fn serde() { + use serde_test::{assert_de_tokens_error, assert_tokens, Token}; + + let a = u7::new(0b0101_0101); + assert_tokens(&a, &[Token::U8(0b0101_0101)]); + + let b = u63::new(0x1234_5678_9ABC_DEFE); + assert_tokens(&b, &[Token::U64(0x1234_5678_9ABC_DEFE)]); + + // This requires https://github.com/serde-rs/test/issues/18 (Add Token::I128 and Token::U128 to serde_test) + // let c = u127::new(0x1234_5678_9ABC_DEFE_DCBA_9876_5432_1010); + // assert_tokens(&c, &[Token::U128(0x1234_5678_9ABC_DEFE_DCBA_9876_5432_1010)]); + + assert_de_tokens_error::( + &[Token::U8(0b0101_0101)], + "invalid value: integer `85`, expected a value between `0` and `3`", + ); + + assert_de_tokens_error::( + &[Token::I64(-1)], + "invalid value: integer `-1`, expected u128", + ); +} diff --git a/rust/hw/char/pl011/vendor/bilge-impl/.cargo-checksum.json b/rust/hw/char/pl011/vendor/bilge-impl/.cargo-checksum.json new file mode 100644 index 0000000000..304736708c --- /dev/null +++ b/rust/hw/char/pl011/vendor/bilge-impl/.cargo-checksum.json @@ -0,0 +1 @@ +{"files":{"Cargo.toml":"b1ebf0b5d89b3e8387d70b589b2557196f0dc902364900889acabed886b3ce1f","README.md":"6d4fcc631ed47bbe8e654649185ce987e9630192ea25c84edd264674e30efa4d","src/bitsize.rs":"8a0878699f18889c987954e1ab918d37c434a387a5dec0f1da8864596fcb14b4","src/bitsize/split.rs":"a59469023f48c5675159b6b46c3655033b4d9adefaba3575301fb485b4868e3d","src/bitsize_internal.rs":"30e67efe8e7baff1514b1a63f1a470701dfcfbf9933cc28aaccef663069a37b0","src/bitsize_internal/struct_gen.rs":"e04bd0346cd393467b3821e977c5deddfde11603ccfa9b63b5e1e60d726d51bb","src/debug_bits.rs":"e28a9e9101c2b365d21c2f6777a389d72dde03f2fcf5fc5add2c7aed278fc1a1","src/default_bits.rs":"bd1943f685f590cdb740b0071de302725dd9c8696d5ca83c7ce9e1dea967d573","src/fmt_bits.rs":"e656c5c019081a6322a678b4bc8c259493081b5888be3a982a12b896ce63deb7","src/from_bits.rs":"fa0acec12ccf1692f47f1b44d6b8ecce0f7da5bfdb465a85546304ede15efd4f","src/lib.rs":"e402a6aabc5b3715a1be94e022f27bf8a3760248ac62d3de7b4c0112cf53b7a2","src/shared.rs":"ac0fb16da63e96d7916f3d8e43e65895c0f0bf14f1afdb2196ec0a7ae5aa2aa2","src/shared/discriminant_assigner.rs":"1d719c4c1d8e1111888d32e930dbaf83a532b4df2b774faa8a0f8cdc6050682a","src/shared/fallback.rs":"8e8af0f66991fd93891d0d9eb1379ed7ead68725100568211677529c9007162c","src/shared/util.rs":"3c191d8585837b2ef391c05df1c201c4beedef0161f0bf37e19b292feef7ef5f","src/try_from_bits.rs":"bda602a90dd6df33e308f8ba9433032cd409213649ab5a0d0297199f4d93b2dd"},"package":"feb11e002038ad243af39c2068c8a72bcf147acf05025dcdb916fcc000adb2d8"} \ No newline at end of file diff --git a/rust/hw/char/pl011/vendor/bilge-impl/Cargo.toml b/rust/hw/char/pl011/vendor/bilge-impl/Cargo.toml new file mode 100644 index 0000000000..4cf7c59505 --- /dev/null +++ b/rust/hw/char/pl011/vendor/bilge-impl/Cargo.toml @@ -0,0 +1,54 @@ +# THIS FILE IS AUTOMATICALLY GENERATED BY CARGO +# +# When uploading crates to the registry Cargo will automatically +# "normalize" Cargo.toml files for maximal compatibility +# with all versions of Cargo and also rewrite `path` dependencies +# to registry (e.g., crates.io) dependencies. +# +# If you are reading this file be aware that the original Cargo.toml +# will likely look very different (and much more reasonable). 
+# See Cargo.toml.orig for the original contents. + +[package] +edition = "2021" +name = "bilge-impl" +version = "0.2.0" +authors = ["Hecatia Elegua"] +description = "Use bitsized types as if they were a feature of rust." +documentation = "https://docs.rs/bilge" +readme = "README.md" +keywords = [ + "bilge", + "bitfield", + "bits", + "register", +] +license = "MIT OR Apache-2.0" +repository = "https://github.com/hecatia-elegua/bilge" + +[lib] +proc-macro = true + +[dependencies.itertools] +version = "0.11.0" + +[dependencies.proc-macro-error] +version = "1.0" +default-features = false + +[dependencies.proc-macro2] +version = "1.0" + +[dependencies.quote] +version = "1.0" + +[dependencies.syn] +version = "2.0" +features = ["full"] + +[dev-dependencies.syn-path] +version = "2.0" + +[features] +default = [] +nightly = [] diff --git a/rust/hw/char/pl011/vendor/bilge-impl/README.md b/rust/hw/char/pl011/vendor/bilge-impl/README.md new file mode 100644 index 0000000000..48daad0fcb --- /dev/null +++ b/rust/hw/char/pl011/vendor/bilge-impl/README.md @@ -0,0 +1,327 @@ +# bilge: the most readable bitfields + +[![crates.io](https://img.shields.io/crates/v/bilge.svg)](https://crates.io/crates/bilge) +[![docs.rs](https://docs.rs/bilge/badge.svg)](https://docs.rs/bilge) +[![loc](https://tokei.rs/b1/github/hecatia-elegua/bilge?category=code)](https://github.com/Aaronepower/tokei#badges) + +_Y e s_, this is yet another bitfield crate, but hear me out: + +This is a _**bit**_ better than what we had before. + +I wanted a design fitting rust: + +- **safe** + - types model as much of the functionality as possible and don't allow false usage +- **fast** + - like handwritten bit fiddling code +- **simple to complex** + - obvious and readable basic frontend, like normal structs + - only minimally and gradually introduce advanced concepts + - provide extension mechanisms + +The lib is **no-std** (and fully `const` behind a `"nightly"` feature gate). + +For some more explanations on the "why" and "how": [blog post](https://hecatia-elegua.github.io/blog/no-more-bit-fiddling/) and [reddit comments](https://www.reddit.com/r/rust/comments/13ic0mf/no_more_bit_fiddling_and_introducing_bilge/). + +## WARNING + +Our current version is still pre 1.0, which means nothing is completely stable. + +However, constructors, getters, setters and From/TryFrom should stay the same, since their semantics are very clear. + +[//]: # (keep this fixed to the version in .github/workflows/ci.yml, rust-toolchain.toml) + +The nightly feature is tested on `nightly-2022-11-03` and [will not work on the newest nightly until const_convert comes back](https://github.com/rust-lang/rust/issues/110395#issuecomment-1524775763). + +## Usage + +To make your life easier: + +```rust +use bilge::prelude::*; +``` + +### Infallible (From) + +You can just specify bitsized fields like normal fields: + +```rust +#[bitsize(14)] +struct Register { + header: u4, + body: u7, + footer: Footer, +} +``` + +The attribute `bitsize` generates the bitfield, while `14` works as a failsafe, emitting a compile error if your struct definition doesn't declare 14 bits. +Let's define the nested struct `Footer` as well: + +```rust +#[bitsize(3)] +#[derive(FromBits)] +struct Footer { + is_last: bool, + code: Code, +} +``` + +As you can see, we added `#[derive(FromBits)]`, which is needed for `Register`'s getters and setters. +Due to how rust macros work (outside-in), it needs to be below `#[bitsize]`. +Also, `bool` can be used as one bit. 
+ +`Code` is another nesting, this time an enum: + +```rust +#[bitsize(2)] +#[derive(FromBits)] +enum Code { Success, Error, IoError, GoodExample } +``` + +Now we can construct `Register`: + +```rust +let reg1 = Register::new( + u4::new(0b1010), + u7::new(0b010_1010), + Footer::new(true, Code::GoodExample) +); +``` + +Or, if we add `#[derive(FromBits)]` to `Register` and want to parse a raw register value: + +```rust +let mut reg2 = Register::from(u14::new(0b11_1_0101010_1010)); +``` + +And getting and setting fields is done like this: + +```rust +let header = reg2.header(); +reg2.set_footer(Footer::new(false, Code::Success)); +``` + +Any kinds of tuple and array are also supported: + +```rust +#[bitsize(32)] +#[derive(FromBits)] +struct InterruptSetEnables([bool; 32]); +``` + +Which produces the usual getter and setter, but also element accessors: + +```rust +let mut ise = InterruptSetEnables::from(0b0000_0000_0000_0000_0000_0000_0001_0000); +let ise5 = ise.val_0_at(4); +ise.set_val_0_at(2, ise5); +assert_eq!(0b0000_0000_0000_0000_0000_0000_0001_0100, ise.value); +``` + +Depending on what you're working with, only a subset of enum values might be clear, or some values might be reserved. +In that case, you can use a fallback variant, defined like this: + +```rust +#[bitsize(32)] +#[derive(FromBits, Debug, PartialEq)] +enum Subclass { + Mouse, + Keyboard, + Speakers, + #[fallback] + Reserved, +} +``` + +which will convert any undeclared bits to `Reserved`: + +```rust +assert_eq!(Subclass::Reserved, Subclass::from(3)); +assert_eq!(Subclass::Reserved, Subclass::from(42)); +let num = u32::from(Subclass::from(42)); +assert_eq!(3, num); +assert_ne!(42, num); +``` + +or, if you need to keep the exact number saved, use: + +```rust +#[fallback] +Reserved(u32), +``` + +```rust +assert_eq!(Subclass2::Reserved(3), Subclass2::from(3)); +assert_eq!(Subclass2::Reserved(42), Subclass2::from(42)); +let num = u32::from(Subclass2::from(42)); +assert_eq!(42, num); +assert_ne!(3, num); +``` + +### Fallible (TryFrom) + +In contrast to structs, enums don't have to declare all of their bits: + +```rust +#[bitsize(2)] +#[derive(TryFromBits)] +enum Class { + Mobile, Semimobile, /* 0x2 undefined */ Stationary = 0x3 +} +``` + +meaning this will work: + +```rust +let class = Class::try_from(u2::new(2)); +assert!(class.is_err()); +``` + +except we first need to `#[derive(Debug, PartialEq)]` on `Class`, since `assert_eq!` needs those. + +Let's do that, and use `Class` as a field: + +```rust +#[bitsize(8)] +#[derive(TryFromBits)] +struct Device { + reserved: u2, + class: Class, + reserved: u4, +} +``` + +This shows `TryFrom` being propagated upward. There's also another small help: `reserved` fields (which are often used in registers) can all have the same name. + +Again, let's try to print this: + +```rust +println!("{:?}", Device::try_from(0b0000_11_00)); +println!("{:?}", Device::new(Class::Mobile)); +``` + +And again, `Device` doesn't implement `Debug`: + +### DebugBits + +For structs, you need to add `#[derive(DebugBits)]` to get an output like this: + +```rust +Ok(Device { reserved_i: 0, class: Stationary, reserved_ii: 0 }) +Device { reserved_i: 0, class: Mobile, reserved_ii: 0 } +``` + +For testing + overview, the full readme example code is in `/examples/readme.rs`. + +### Custom -Bits derives + +One of the main advantages of our approach is that we can keep `#[bitsize]` pretty slim, offloading all the other features to derive macros. 
+Besides the derive macros shown above, you can extend `bilge` with your own derive crates working on bitfields. +An example of this is given in [`/tests/custom_derive.rs`](https://github.com/hecatia-elegua/bilge/blob/main/tests/custom_derive.rs), with its implementation in [`tests/custom_bits`](https://github.com/hecatia-elegua/bilge/blob/1dfb6cf7d278d102d3f96ac31a9374e2b27fafc7/tests/custom_bits/custom_bits_derive/src/lib.rs). + +## Back- and Forwards Compatibility + +The syntax is kept very similar to usual rust structs for a simple reason: + +The endgoal of this library is to support the adoption of LLVM's arbitrary bitwidth integers into rust, +thereby allowing rust-native bitfields. +Until then, bilge is using the wonderful [`arbitrary-int` crate by danlehmann](https://github.com/danlehmann/arbitrary-int). + +After all attribute expansions, our generated bitfield contains a single field, somewhat like: + +```rust +struct Register { value: u14 } +``` + +This means you _could_ modify the inner value directly, but it breaks type safety guarantees (e.g. unfilled or read-only fields). +So if you need to modify the whole field, instead use the type-safe conversions `u14::from(register)` and `Register::from(u14)`. +It is possible that this inner type will be made private. + +For some more examples and an overview of functionality, take a look at `/examples` and `/tests`. + +## Alternatives + +### benchmarks, performance, asm line count + +First of all, [basic benchmarking](https://github.com/hecatia-elegua/bilge/blob/main/benches/compared/main.rs) reveals that all alternatives mentioned here (besides deku) have about the same performance and line count. This includes a handwritten version. + +### build-time + +Measuring build time of the crate inself (both with its dependencies and without), yields these numbers on my machine: +| | debug | debug single crate | release | release single crate | +|-----------------------|-------|--------------------|-----------|----------------------| +| bilge 1.67-nightly | 8 | 1.8 | 6 | 0.8 | +| bitbybit 1.69 | 4.5 | 1.3 | 13.5 [^*] | 9.5 [^*] | +| modular-bitfield 1.69 | 8 | 2.2 | 7.2 | 1.6 | + +[^*]: This is just a weird rustc regression or my setup or sth, not representative. + +This was measured with `cargo clean && cargo build [--release] --quiet --timings`. +Of course, the actual codegen time on an example project needs to be measured, too. + + +### handwritten implementation + +The common handwritten implementation pattern for bitfields in rust looks [somewhat like benches/compared/handmade.rs](https://github.com/hecatia-elegua/bilge/blob/main/benches/compared/handmade.rs), sometimes also throwing around a lot of consts for field offsets. The problems with this approach are: +- readability suffers +- offset, cast or masking errors could go unnoticed +- bit fiddling, shifting and masking is done all over the place, in contrast to bitfields +- beginners suffer, although I would argue even seniors, since it's more like: "Why do we need to learn and debug bit fiddling if we can get most of it done by using structs?" 
+- reimplementing different kinds of _fallible nested-struct enum-tuple array field access_ might not be so fun + +### modular-bitfield + +The often used and very inspiring [`modular-bitfield`](https://github.com/robbepop/modular-bitfield) has a few +problems: +- it is unmaintained and has a quirky structure +- constructors use the builder pattern + - makes user code unreadable if you have many fields + - can accidentally leave things uninitialized +- `from_bytes` can easily take invalid arguments, which turns verification inside-out: + - modular-bitfield flow: `u16` -> `PackedData::from_bytes([u16])` -> `PackedData::status_or_err()?` + - needs to check for `Err` on every single access + - adds duplicate getters and setters with postfix `_or_err` + - reinvents `From`/`TryFrom` as a kind of hybrid + - bilge: usual type-system centric flow: `u16` -> `PackedData::try_from(u16)?` -> `PackedData::status()` + - just works, needs to check nothing on access + - some more general info on this: [Parse, don't validate](https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-validate/) +- big god-macro + - powerful, but less readable to the devs of modular-bitfield + - needs to cover many derives in itself, like `impl Debug` (other bitfield crates do this as well) + - bilge: solves this by providing a kind of scope for `-Bits`-derives + +and implementation differences: +- underlying type is a byte array + - can be useful for bitfields larger than u128 + - bilge: if your bitfields get larger than u128, you can most often split them into multiple bitfields of a primitive size (like u64) and put those in a parent struct which is not a bitfield + +Still, modular-bitfield is pretty good and I had set out to build something equal or hopefully better than it. +Tell me where I can do better, I will try. + +### bitbybit + +One of the libs inspired by the same crate is [`bitbybit`](https://github.com/danlehmann/bitfield), which is much more readable and up-to-date. Actually, I even helped and am still helping on that one as well. After experimenting and hacking around in their code though, I realized it would need to be severely changed for the features and structure I had in mind. + +implementation differences (as of 26.04.23): +- it can do read/write-only, array strides and repeat the same bits for multiple fields + - bilge: these will be added the moment someone needs it +- redundant bit-offset specification, which can help or annoy, the same way bilge's `reserved` fields can help or annoy + +### deku + +After looking at a ton of bitfield libs on crates.io, I _didn't_ find [`deku`](https://github.com/sharksforarms/deku). +I will still mention it here because it uses a very interesting crate underneath (bitvec). +Currently (as of 26.04.23), it generates far more assembly and takes longer to run, since parts of the API are not `const`. +I've opened an issue on their repo about that. + +### most others + +Besides that, many bitfield libs try to imitate or look like C bitfields, even though these are hated by many. +I argue most beginners would have the idea to specify bits with basic primitives like u1, u2, ... +This also opens up some possibilities for calculation and conversion on those primitives. + +Something similar can be said about `bitflags`, which, under this model, can be turned into simple structs with bools and enums. + +Basically, `bilge` tries to convert bit fiddling, shifting and masking into more widely known concepts like struct access. 
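+
+As a purely illustrative sketch (the `Status` and `mode` names are invented for this example, not taken from any real device), the same 3-bit field read by hand and with `bilge` looks like this:
+
+```rust
+// handwritten equivalent: let mode = (raw >> 2) & 0b111;
+#[bitsize(8)]
+#[derive(FromBits)]
+struct Status {
+    enabled: u2,
+    mode: u3,
+    reserved: u3,
+}
+
+let status = Status::from(0b000_101_10u8);
+assert_eq!(u3::new(0b101), status.mode());
+```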
+ +About the name: a bilge is one of the "lowest" parts of a ship, nothing else to it :) diff --git a/rust/hw/char/pl011/vendor/bilge-impl/meson.build b/rust/hw/char/pl011/vendor/bilge-impl/meson.build new file mode 100644 index 0000000000..11f3dd186f --- /dev/null +++ b/rust/hw/char/pl011/vendor/bilge-impl/meson.build @@ -0,0 +1,24 @@ +rust = import('rust') + +_bilge_impl_rs = rust.proc_macro( + 'bilge_impl', + files('src/lib.rs'), + rust_args: rust_args + [ + '--edition', '2021', + '--cfg', 'use_fallback', + '--cfg', 'feature="syn-error"', + '--cfg', 'feature="proc-macro"', + ], + dependencies: [ + dep_itertools, + dep_proc_macro_error_attr, + dep_proc_macro_error, + dep_quote, + dep_syn, + dep_proc_macro2, + ], +) + +dep_bilge_impl = declare_dependency( + link_with: _bilge_impl_rs, +) diff --git a/rust/hw/char/pl011/vendor/bilge-impl/src/bitsize.rs b/rust/hw/char/pl011/vendor/bilge-impl/src/bitsize.rs new file mode 100644 index 0000000000..66660c3e30 --- /dev/null +++ b/rust/hw/char/pl011/vendor/bilge-impl/src/bitsize.rs @@ -0,0 +1,187 @@ +mod split; + +use proc_macro2::{Ident, TokenStream}; +use proc_macro_error::{abort, abort_call_site}; +use quote::quote; +use split::SplitAttributes; +use syn::{punctuated::Iter, spanned::Spanned, Fields, Item, ItemEnum, ItemStruct, Type, Variant}; + +use crate::shared::{self, enum_fills_bitsize, is_fallback_attribute, unreachable, BitSize, MAX_ENUM_BIT_SIZE}; + +/// Intermediate Representation, just for bundling these together +struct ItemIr { + /// generated item (and size check) + expanded: TokenStream, +} + +pub(super) fn bitsize(args: TokenStream, item: TokenStream) -> TokenStream { + let (item, declared_bitsize) = parse(item, args); + let attrs = SplitAttributes::from_item(&item); + let ir = match item { + Item::Struct(mut item) => { + modify_special_field_names(&mut item.fields); + analyze_struct(&item.fields); + let expanded = generate_struct(&item, declared_bitsize); + ItemIr { expanded } + } + Item::Enum(item) => { + analyze_enum(declared_bitsize, item.variants.iter()); + let expanded = generate_enum(&item); + ItemIr { expanded } + } + _ => unreachable(()), + }; + generate_common(ir, attrs, declared_bitsize) +} + +fn parse(item: TokenStream, args: TokenStream) -> (Item, BitSize) { + let item = syn::parse2(item).unwrap_or_else(unreachable); + + if args.is_empty() { + abort_call_site!("missing attribute value"; help = "you need to define the size like this: `#[bitsize(32)]`") + } + + let (declared_bitsize, _arb_int) = shared::bitsize_and_arbitrary_int_from(args); + (item, declared_bitsize) +} + +fn check_type_is_supported(ty: &Type) { + use Type::*; + match ty { + Tuple(tuple) => tuple.elems.iter().for_each(check_type_is_supported), + Array(array) => check_type_is_supported(&array.elem), + // Probably okay (compilation would validate that this type is also Bitsized) + Path(_) => (), + // These don't work with structs or aren't useful in bitfields. + BareFn(_) | Group(_) | ImplTrait(_) | Infer(_) | Macro(_) | Never(_) | + // We could provide some info on error as to why Ptr/Reference won't work due to safety. + Ptr(_) | Reference(_) | + // The bitsize must be known at compile time. + Slice(_) | + // Something to investigate, but doesn't seem useful/usable here either. + TraitObject(_) | + // I have no idea where this is used. 
+ Verbatim(_) | Paren(_) => abort!(ty, "This field type is not supported"), + _ => abort!(ty, "This field type is currently not supported"), + } +} + +/// Allows you to give multiple fields the name `reserved` or `padding` +/// by numbering them for you. +fn modify_special_field_names(fields: &mut Fields) { + // We could have just counted up, i.e. `reserved_0`, but people might interpret this as "reserved to zero". + // Using some other, more useful unique info as postfix would be nice. + // Also, it might be useful to generate no getters or setters for these fields and skipping some calc. + let mut reserved_count = 0; + let mut padding_count = 0; + let field_idents_mut = fields.iter_mut().filter_map(|field| field.ident.as_mut()); + for ident in field_idents_mut { + if ident == "reserved" || ident == "_reserved" { + reserved_count += 1; + let span = ident.span(); + let name = format!("reserved_{}", "i".repeat(reserved_count)); + *ident = Ident::new(&name, span) + } else if ident == "padding" || ident == "_padding" { + padding_count += 1; + let span = ident.span(); + let name = format!("padding_{}", "i".repeat(padding_count)); + *ident = Ident::new(&name, span) + } + } +} + +fn analyze_struct(fields: &Fields) { + if fields.is_empty() { + abort_call_site!("structs without fields are not supported") + } + + // don't move this. we validate all nested field types here as well + // and later assume this was checked. + for field in fields { + check_type_is_supported(&field.ty) + } +} + +fn analyze_enum(bitsize: BitSize, variants: Iter) { + if bitsize > MAX_ENUM_BIT_SIZE { + abort_call_site!("enum bitsize is limited to {}", MAX_ENUM_BIT_SIZE) + } + + let variant_count = variants.clone().count(); + if variant_count == 0 { + abort_call_site!("empty enums are not supported"); + } + + let has_fallback = variants.flat_map(|variant| &variant.attrs).any(is_fallback_attribute); + + if !has_fallback { + // this has a side-effect of validating the enum count + let _ = enum_fills_bitsize(bitsize, variant_count); + } +} + +fn generate_struct(item: &ItemStruct, declared_bitsize: u8) -> TokenStream { + let ItemStruct { vis, ident, fields, .. } = item; + let declared_bitsize = declared_bitsize as usize; + + let computed_bitsize = fields.iter().fold(quote!(0), |acc, next| { + let field_size = shared::generate_type_bitsize(&next.ty); + quote!(#acc + #field_size) + }); + + // we could remove this if the whole struct gets passed + let is_tuple_struct = fields.iter().any(|field| field.ident.is_none()); + let fields_def = if is_tuple_struct { + let fields = fields.iter(); + quote! { + ( #(#fields,)* ); + } + } else { + let fields = fields.iter(); + quote! { + { #(#fields,)* } + } + }; + + quote! { + #vis struct #ident #fields_def + + // constness: when we get const blocks evaluated at compile time, add a const computed_bitsize + const _: () = assert!( + (#computed_bitsize) == (#declared_bitsize), + concat!("struct size and declared bit size differ: ", + // stringify!(#computed_bitsize), + " != ", + stringify!(#declared_bitsize)) + ); + } +} + +// attributes are handled in `generate_common` +fn generate_enum(item: &ItemEnum) -> TokenStream { + let ItemEnum { vis, ident, variants, .. } = item; + quote! { + #vis enum #ident { + #variants + } + } +} + +/// we have _one_ generate_common function, which holds everything that struct and enum have _in common_. +/// Everything else has its own generate_ functions. 
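+///
+/// The emitted order is: the `before_compression` attributes, then the
+/// `#[::bilge::bitsize_internal(...)]` marker, then the `after_compression`
+/// attributes, and finally the expanded (size-checked) item.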
+fn generate_common(ir: ItemIr, attrs: SplitAttributes, declared_bitsize: u8) -> TokenStream { + let ItemIr { expanded } = ir; + let SplitAttributes { + before_compression, + after_compression, + } = attrs; + + let bitsize_internal_attr = quote! {#[::bilge::bitsize_internal(#declared_bitsize)]}; + + quote! { + #(#before_compression)* + #bitsize_internal_attr + #(#after_compression)* + #expanded + } +} diff --git a/rust/hw/char/pl011/vendor/bilge-impl/src/bitsize/split.rs b/rust/hw/char/pl011/vendor/bilge-impl/src/bitsize/split.rs new file mode 100644 index 0000000000..3848ba2c24 --- /dev/null +++ b/rust/hw/char/pl011/vendor/bilge-impl/src/bitsize/split.rs @@ -0,0 +1,185 @@ +use proc_macro_error::{abort, abort_call_site}; +use quote::ToTokens; +use syn::{meta::ParseNestedMeta, parse_quote, Attribute, Item, Meta, Path}; + +use crate::shared::{unreachable, util::PathExt}; + +/// Since we want to be maximally interoperable, we need to handle attributes in a special way. +/// We use `#[bitsize]` as a sort of scope for all attributes below it and +/// the whole family of `-Bits` macros only works when used in that scope. +/// +/// Let's visualize why this is the case, starting with some user-code: +/// ```ignore +/// #[bitsize(6)] +/// #[derive(Clone, Copy, PartialEq, DebugBits, FromBits)] +/// struct Example { +/// field1: u2, +/// field2: u4, +/// } +/// ``` +/// First, the attributes get sorted, depending on their name. +/// Every attribute in need of field information gets resolved first, +/// in this case `DebugBits` and `FromBits`. +/// +/// Now, after resolving all `before_compression` attributes, the halfway-resolved +/// code looks like this: +/// ```ignore +/// #[::bilge::bitsize_internal(6)] +/// #[derive(Clone, Copy, PartialEq)] +/// struct Example { +/// field1: u2, +/// field2: u4, +/// } +/// ``` +/// This `#[bitsize_internal]` attribute is the one actually doing the compression and generating +/// all the getters, setters and a constructor. +/// +/// Finally, the struct ends up like this (excluding the generated impl blocks): +/// ```ignore +/// struct Example { +/// value: u6, +/// } +/// ``` +pub struct SplitAttributes { + pub before_compression: Vec, + pub after_compression: Vec, +} + +impl SplitAttributes { + /// Split item attributes into those applied before bitfield-compression and those applied after. + /// Also, abort on any invalid configuration. + /// + /// Any derives with suffix `Bits` will be able to access field information. + /// This way, users of `bilge` can define their own derives working on the uncompressed bitfield. 
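+    ///
+    /// For example, a hypothetical user derive named `MyCustomBits` ends up in
+    /// `before_compression`, because any derive whose name ends in `Bits` is
+    /// treated as a bitfield-aware derive (see `Derive::is_custom_bitfield_derive`).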
+ pub fn from_item(item: &Item) -> SplitAttributes { + let attrs = match item { + Item::Enum(item) => &item.attrs, + Item::Struct(item) => &item.attrs, + _ => abort_call_site!("item is not a struct or enum"; help = "`#[bitsize]` can only be used on structs and enums"), + }; + + let parsed = attrs.iter().map(parse_attribute); + + let is_struct = matches!(item, Item::Struct(..)); + + let mut from_bytes = None; + let mut has_frombits = false; + + let mut before_compression = vec![]; + let mut after_compression = vec![]; + + for parsed_attr in parsed { + match parsed_attr { + ParsedAttribute::DeriveList(derives) => { + for mut derive in derives { + if derive.matches(&["zerocopy", "FromBytes"]) { + from_bytes = Some(derive.clone()); + } else if derive.matches(&["bilge", "FromBits"]) { + has_frombits = true; + } else if derive.matches_core_or_std(&["fmt", "Debug"]) && is_struct { + abort!(derive.0, "use derive(DebugBits) for structs") + } else if derive.matches_core_or_std(&["default", "Default"]) && is_struct { + // emit_warning!(derive.0, "use derive(DefaultBits) for structs") + derive.0 = syn::parse_quote!(::bilge::DefaultBits); + } + + if derive.is_custom_bitfield_derive() { + before_compression.push(derive.into_attribute()); + } else { + // It is most probable that basic derive macros work if we put them on after compression + after_compression.push(derive.into_attribute()); + } + } + } + + ParsedAttribute::BitsizeInternal(attr) => { + abort!(attr, "remove bitsize_internal"; help = "attribute bitsize_internal can only be applied internally by the bitsize macros") + } + + ParsedAttribute::Other(attr) => { + // I don't know with which attrs I can hit Path and NameValue, + // so let's just put them on after compression. + after_compression.push(attr.to_owned()) + } + }; + } + + if let Some(from_bytes) = from_bytes { + if !has_frombits { + abort!(from_bytes.0, "a bitfield with zerocopy::FromBytes also needs to have FromBits") + } + } + + // currently, enums don't need special handling - so just put all attributes before compression + if !is_struct { + before_compression.append(&mut after_compression) + } + + SplitAttributes { + before_compression, + after_compression, + } + } +} + +fn parse_attribute(attribute: &Attribute) -> ParsedAttribute { + match &attribute.meta { + Meta::List(list) if list.path.is_ident("derive") => { + let mut derives = Vec::new(); + let add_derive = |meta: ParseNestedMeta| { + let derive = Derive(meta.path); + derives.push(derive); + + Ok(()) + }; + + list.parse_nested_meta(add_derive) + .unwrap_or_else(|e| abort!(list.tokens, "failed to parse derive: {}", e)); + + ParsedAttribute::DeriveList(derives) + } + + meta if contains_anywhere(meta, "bitsize_internal") => ParsedAttribute::BitsizeInternal(attribute), + + _ => ParsedAttribute::Other(attribute), + } +} + +/// a crude approximation of things we currently consider in item attributes +enum ParsedAttribute<'attr> { + DeriveList(Vec), + BitsizeInternal(&'attr Attribute), + Other(&'attr Attribute), +} + +/// the path of a single derive attribute, parsed from a list which may have contained several +#[derive(Clone)] +struct Derive(Path); + +impl Derive { + /// a new `#[derive]` attribute containing only this derive + fn into_attribute(self) -> Attribute { + let path = self.0; + parse_quote! 
{ #[derive(#path)] } + } + + /// by `bilge` convention, any derive satisfying this condition is able + /// to access bitfield structure information pre-compression, + /// allowing for user derives + fn is_custom_bitfield_derive(&self) -> bool { + let last_segment = self.0.segments.last().unwrap_or_else(|| unreachable(())); + + last_segment.ident.to_string().ends_with("Bits") + } +} + +impl PathExt for Derive { + fn matches(&self, str_segments: &[&str]) -> bool { + self.0.matches(str_segments) + } +} + +/// slightly hacky. attempts to recognize cases where an ident is deeply-nested in the meta. +fn contains_anywhere(meta: &Meta, ident: &str) -> bool { + meta.to_token_stream().to_string().contains(ident) +} diff --git a/rust/hw/char/pl011/vendor/bilge-impl/src/bitsize_internal.rs b/rust/hw/char/pl011/vendor/bilge-impl/src/bitsize_internal.rs new file mode 100644 index 0000000000..ad10350372 --- /dev/null +++ b/rust/hw/char/pl011/vendor/bilge-impl/src/bitsize_internal.rs @@ -0,0 +1,235 @@ +use proc_macro2::{Ident, TokenStream}; +use quote::quote; +use syn::{Attribute, Field, Item, ItemEnum, ItemStruct, Type}; + +use crate::shared::{self, unreachable}; + +pub(crate) mod struct_gen; + +/// Intermediate Representation, just for bundling these together +struct ItemIr<'a> { + attrs: &'a Vec, + name: &'a Ident, + /// generated item (and setters, getters, constructor, impl Bitsized) + expanded: TokenStream, +} + +pub(super) fn bitsize_internal(args: TokenStream, item: TokenStream) -> TokenStream { + let (item, arb_int) = parse(item, args); + let ir = match item { + Item::Struct(ref item) => { + let expanded = generate_struct(item, &arb_int); + let attrs = &item.attrs; + let name = &item.ident; + ItemIr { attrs, name, expanded } + } + Item::Enum(ref item) => { + let expanded = generate_enum(item); + let attrs = &item.attrs; + let name = &item.ident; + ItemIr { attrs, name, expanded } + } + _ => unreachable(()), + }; + generate_common(ir, &arb_int) +} + +fn parse(item: TokenStream, args: TokenStream) -> (Item, TokenStream) { + let item = syn::parse2(item).unwrap_or_else(unreachable); + let (_declared_bitsize, arb_int) = shared::bitsize_and_arbitrary_int_from(args); + (item, arb_int) +} + +fn generate_struct(struct_data: &ItemStruct, arb_int: &TokenStream) -> TokenStream { + let ItemStruct { vis, ident, fields, .. } = struct_data; + + let mut fieldless_next_int = 0; + let mut previous_field_sizes = vec![]; + let (accessors, (constructor_args, constructor_parts)): (Vec, (Vec, Vec)) = fields + .iter() + .map(|field| { + // offset is needed for bit-shifting + // struct Example { field1: u8, field2: u4, field3: u4 } + // previous_field_sizes = [] -> unwrap_or_else -> field_offset = 0 + // previous_field_sizes = [8] -> reduce -> field_offset = 0 + 8 = 8 + // previous_field_sizes = [8, 4] -> reduce -> field_offset = 0 + 8 + 4 = 12 + let field_offset = previous_field_sizes + .iter() + .cloned() + .reduce(|acc, next| quote!(#acc + #next)) + .unwrap_or_else(|| quote!(0)); + let field_size = shared::generate_type_bitsize(&field.ty); + previous_field_sizes.push(field_size); + generate_field(field, &field_offset, &mut fieldless_next_int) + }) + .unzip(); + + let const_ = if cfg!(feature = "nightly") { quote!(const) } else { quote!() }; + + quote! 
{ + #vis struct #ident { + /// WARNING: modifying this value directly can break invariants + value: #arb_int, + } + impl #ident { + // #[inline] + #[allow(clippy::too_many_arguments, clippy::type_complexity, unused_parens)] + pub #const_ fn new(#( #constructor_args )*) -> Self { + type ArbIntOf = ::ArbitraryInt; + type BaseIntOf = as Number>::UnderlyingType; + + let mut offset = 0; + let raw_value = #( #constructor_parts )|*; + let value = #arb_int::new(raw_value); + Self { value } + } + #( #accessors )* + } + } +} + +fn generate_field(field: &Field, field_offset: &TokenStream, fieldless_next_int: &mut usize) -> (TokenStream, (TokenStream, TokenStream)) { + let Field { ident, ty, .. } = field; + let name = if let Some(ident) = ident { + ident.clone() + } else { + let name = format!("val_{fieldless_next_int}"); + *fieldless_next_int += 1; + syn::parse_str(&name).unwrap_or_else(unreachable) + }; + + // skip reserved fields in constructors and setters + let name_str = name.to_string(); + if name_str.contains("reserved_") || name_str.contains("padding_") { + // needed for `DebugBits` + let getter = generate_getter(field, field_offset, &name); + let size = shared::generate_type_bitsize(ty); + let accessors = quote!(#getter); + let constructor_arg = quote!(); + let constructor_part = quote! { { + // we still need to shift by the element's size + offset += #size; + 0 + } }; + return (accessors, (constructor_arg, constructor_part)); + } + + let getter = generate_getter(field, field_offset, &name); + let setter = generate_setter(field, field_offset, &name); + let (constructor_arg, constructor_part) = generate_constructor_stuff(ty, &name); + + let accessors = quote! { + #getter + #setter + }; + + (accessors, (constructor_arg, constructor_part)) +} + +fn generate_getter(field: &Field, offset: &TokenStream, name: &Ident) -> TokenStream { + let Field { attrs, vis, ty, .. } = field; + + let getter_value = struct_gen::generate_getter_value(ty, offset, false); + + let const_ = if cfg!(feature = "nightly") { quote!(const) } else { quote!() }; + + let array_at = if let Type::Array(array) = ty { + let elem_ty = &array.elem; + let len_expr = &array.len; + let name: Ident = syn::parse_str(&format!("{name}_at")).unwrap_or_else(unreachable); + let getter_value = struct_gen::generate_getter_value(elem_ty, offset, true); + quote! { + // #[inline] + #(#attrs)* + #[allow(clippy::type_complexity, unused_parens)] + #vis #const_ fn #name(&self, index: usize) -> #elem_ty { + ::core::assert!(index < #len_expr); + #getter_value + } + } + } else { + quote!() + }; + + quote! { + // #[inline] + #(#attrs)* + #[allow(clippy::type_complexity, unused_parens)] + #vis #const_ fn #name(&self) -> #ty { + #getter_value + } + + #array_at + } +} + +fn generate_setter(field: &Field, offset: &TokenStream, name: &Ident) -> TokenStream { + let Field { attrs, vis, ty, .. } = field; + let setter_value = struct_gen::generate_setter_value(ty, offset, false); + + let name: Ident = syn::parse_str(&format!("set_{name}")).unwrap_or_else(unreachable); + + let const_ = if cfg!(feature = "nightly") { quote!(const) } else { quote!() }; + + let array_at = if let Type::Array(array) = ty { + let elem_ty = &array.elem; + let len_expr = &array.len; + let name: Ident = syn::parse_str(&format!("{name}_at")).unwrap_or_else(unreachable); + let setter_value = struct_gen::generate_setter_value(elem_ty, offset, true); + quote! 
{ + // #[inline] + #(#attrs)* + #[allow(clippy::type_complexity, unused_parens)] + #vis #const_ fn #name(&mut self, index: usize, value: #elem_ty) { + ::core::assert!(index < #len_expr); + #setter_value + } + } + } else { + quote!() + }; + + quote! { + // #[inline] + #(#attrs)* + #[allow(clippy::type_complexity, unused_parens)] + #vis #const_ fn #name(&mut self, value: #ty) { + #setter_value + } + + #array_at + } +} + +fn generate_constructor_stuff(ty: &Type, name: &Ident) -> (TokenStream, TokenStream) { + let constructor_arg = quote! { + #name: #ty, + }; + let constructor_part = struct_gen::generate_constructor_part(ty, name); + (constructor_arg, constructor_part) +} + +fn generate_enum(enum_data: &ItemEnum) -> TokenStream { + let ItemEnum { vis, ident, variants, .. } = enum_data; + quote! { + #vis enum #ident { + #variants + } + } +} + +/// We have _one_ `generate_common` function, which holds everything struct and enum have _in common_. +/// Everything else has its own `generate_` functions. +fn generate_common(ir: ItemIr, arb_int: &TokenStream) -> TokenStream { + let ItemIr { attrs, name, expanded } = ir; + + quote! { + #(#attrs)* + #expanded + impl ::bilge::Bitsized for #name { + type ArbitraryInt = #arb_int; + const BITS: usize = ::BITS; + const MAX: Self::ArbitraryInt = ::MAX; + } + } +} diff --git a/rust/hw/char/pl011/vendor/bilge-impl/src/bitsize_internal/struct_gen.rs b/rust/hw/char/pl011/vendor/bilge-impl/src/bitsize_internal/struct_gen.rs new file mode 100644 index 0000000000..74cd65fec1 --- /dev/null +++ b/rust/hw/char/pl011/vendor/bilge-impl/src/bitsize_internal/struct_gen.rs @@ -0,0 +1,402 @@ +//! We're keeping most of the generating together, to ease reading here and in `cargo_expand`. +//! For this reason, we also use more locals and types. +//! These locals, types, casts should be optimized away. +//! In simple cases they indeed are optimized away, but if some case is not, please report. +//! +//! ## Important +//! +//! We often do thing like: +//! ```ignore +//! quote! { +//! #value_shifted +//! value_shifted +//! } +//! ``` +//! By convention, `#value_shifted` has its name because we define a `let value_shifted` inside that `TokenStream`. +//! So the above code means we're returning the value of `let value_shifted`. +//! Earlier on, we would have done something like this: +//! ```ignore +//! quote! { +//! let value_shifted = { #value_shifted }; +//! value_shifted +//! } +//! ``` +//! which aids in reading this here macro code, but doesn't help reading the generated code since it introduces +//! lots of new scopes (curly brackets). We need the scope since `#value_shifted` expands to multiple lines. +use super::*; + +/// Top-level function which initializes the cursor and offsets it to what we want to read +/// +/// `is_array_elem_getter` allows us to generate an array_at getter more easily +pub(crate) fn generate_getter_value(ty: &Type, offset: &TokenStream, is_array_elem_getter: bool) -> TokenStream { + // if we generate `fn array_at(index)`, we need to offset to the array element + let elem_offset = if is_array_elem_getter { + let size = shared::generate_type_bitsize(ty); + quote! { + let size = #size; + // cursor now starts at this element + cursor >>= size * index; + } + } else { + quote!() + }; + + let inner = generate_getter_inner(ty, true); + quote! 
{ + // for ease of reading + type ArbIntOf = ::ArbitraryInt; + type BaseIntOf = as Number>::UnderlyingType; + // cursor is the value we read from and starts at the struct's first field + let mut cursor = self.value.value(); + // this field's offset + let field_offset = #offset; + // cursor now starts at this field + cursor >>= field_offset; + #elem_offset + + #inner + } +} + +/// We heavily rely on the fact that transmuting into a nested array [[T; N1]; N2] can +/// be done in the same way as transmuting into an array [T; N1*N2]. +/// Otherwise, nested arrays would generate even more code. +/// +/// `is_getter` allows us to generate a try_from impl more easily +pub(crate) fn generate_getter_inner(ty: &Type, is_getter: bool) -> TokenStream { + use Type::*; + match ty { + Tuple(tuple) => { + let unbraced = tuple + .elems + .iter() + .map(|elem| { + // for every tuple element, generate its getter code + let getter = generate_getter_inner(elem, is_getter); + // and add a scope around it + quote! { {#getter} } + }) + .reduce(|acc, next| { + // join all getter codes with: + if is_getter { + // comma, to later produce (val_1, val_2, ...) + quote!(#acc, #next) + } else { + // bool-and, since for try_from we just generate bools + quote!(#acc && #next) + } + }) + // `field: (),` will be handled like this: + .unwrap_or_else(|| quote!()); + // add tuple braces, to produce (val_1, val_2, ...) + quote! { (#unbraced) } + } + Array(array) => { + // [[T; N1]; N2] -> (N1*N2, T) + let (len_expr, elem_ty) = length_and_type_of_nested_array(array); + // generate the getter code for one array element + let array_elem = generate_getter_inner(&elem_ty, is_getter); + // either generate an array or only check each value + if is_getter { + quote! { + // constness: iter, array::from_fn, for-loop, range are not const, so we're using while loops + // Modified version of the array init example in [`MaybeUninit`]: + let array = { + // [T; N1*N2] + let mut array: [::core::mem::MaybeUninit<#elem_ty>; #len_expr] = unsafe { + ::core::mem::MaybeUninit::uninit().assume_init() + }; + let mut i = 0; + while i < #len_expr { + // for every element, get its value + let elem_value = { + #array_elem + }; + // and write it to the output array + array[i].write(elem_value); + i += 1; + } + // [T; N1*N2] -> [[T; N1]; N2] + unsafe { ::core::mem::transmute(array) } + }; + array + } + } else { + quote! { { + let mut is_filled = true; + let mut i = 0; + // TODO: this could be simplified for always-filled values + while i < #len_expr { + // for every element, get its filled check + let elem_filled = { + #array_elem + }; + // and join it with the others + is_filled = is_filled && elem_filled; + i += 1; + } + is_filled + } } + } + } + Path(_) => { + // get the size, so we can shift to the next element's offset + let size = shared::generate_type_bitsize(ty); + // get the mask, so we can get this element's value + let mask = generate_ty_mask(ty); + + // do all steps until conversion + let elem_value = quote! { + // the element's mask + let mask = #mask; + // the cursor starts at this element's offset, now get its value + let raw_value = cursor & mask; + // after getting the value, we can shift by the element's size + // TODO: we could move this into tuple/array (and try_from, below) + let size = #size; + cursor = cursor.wrapping_shr(size as u32); + // cast the element value (e.g. u32 -> u8), + let raw_value: BaseIntOf<#ty> = raw_value as BaseIntOf<#ty>; + // which allows it to be used here (e.g. 
u4::new(u8)) + let elem_value = <#ty as Bitsized>::ArbitraryInt::new(raw_value); + }; + + if is_getter { + // generate the real value from the arbint `elem_value` + quote! { + #elem_value + match #ty::try_from(elem_value) { + Ok(v) => v, + Err(_) => panic!("unreachable"), + } + } + } else { + // generate only the filled check + if shared::is_always_filled(ty) { + // skip the obviously filled values + quote! { + // we still need to shift by the element's size + let size = #size; + cursor = cursor.wrapping_shr(size as u32); + true + } + } else { + // handle structs, enums - everything which can be unfilled + quote! { { + #elem_value + // so, has try_from impl + // note this is available even if the type is `From` + #ty::try_from(elem_value).is_ok() + } } + } + } + } + _ => unreachable(()), + } +} + +/// Top-level function which initializes the offset, masks other values and combines the final value +/// +/// `is_array_elem_setter` allows us to generate a set_array_at setter more easily +pub(crate) fn generate_setter_value(ty: &Type, offset: &TokenStream, is_array_elem_setter: bool) -> TokenStream { + // if we generate `fn set_array_at(index, value)`, we need to offset to the array element + let elem_offset = if is_array_elem_setter { + let size = shared::generate_type_bitsize(ty); + quote! { + let size = #size; + // offset now starts at this element + offset += size * index; + } + } else { + quote!() + }; + + let value_shifted = generate_setter_inner(ty); + // get the mask, so we can set this field's value + let mask = generate_ty_mask(ty); + quote! { + type ArbIntOf = ::ArbitraryInt; + type BaseIntOf = as Number>::UnderlyingType; + + // offset now starts at this field + let mut offset = #offset; + #elem_offset + + let field_mask = #mask; + // shift the mask into place + let field_mask: BaseIntOf = field_mask << offset; + // all other fields as a mask + let others_mask: BaseIntOf = !field_mask; + // the current struct value + let struct_value: BaseIntOf = self.value.value(); + // mask off the field getting set + let others_values: BaseIntOf = struct_value & others_mask; + + // get the new field value, shifted into place + #value_shifted + + // join the values using bit-or + let new_struct_value = others_values | value_shifted; + self.value = >::new(new_struct_value); + } +} + +/// We heavily rely on the fact that transmuting into a nested array [[T; N1]; N2] can +/// be done in the same way as transmuting into an array [T; N1*N2]. +/// Otherwise, nested arrays would generate even more code. +fn generate_setter_inner(ty: &Type) -> TokenStream { + use Type::*; + match ty { + Tuple(tuple) => { + // to index into the tuple value + let mut tuple_index = syn::Index::from(0); + let value_shifted = tuple + .elems + .iter() + .map(|elem| { + let elem_name = quote!(value.#tuple_index); + tuple_index.index += 1; + // for every tuple element, generate its setter code + let value_shifted = generate_setter_inner(elem); + // set the value and add a scope around it + quote! { { + let value = #elem_name; + #value_shifted + value_shifted + } } + }) + // join all setter codes with bit-or + .reduce(|acc, next| quote!(#acc | #next)) + // `field: (),` will be handled like this: + .unwrap_or_else(|| quote!(0)); + quote! { + let value_shifted = #value_shifted; + } + } + Array(array) => { + // [[T; N1]; N2] -> (N1*N2, T) + let (len_expr, elem_ty) = length_and_type_of_nested_array(array); + // generate the setter code for one array element + let value_shifted = generate_setter_inner(&elem_ty); + quote! 
{ + // [[T; N1]; N2] -> [T; N1*N2], for example: [[(u2, u2); 3]; 4] -> [(u2, u2); 12] + #[allow(clippy::useless_transmute)] + let value: [#elem_ty; #len_expr] = unsafe { ::core::mem::transmute(value) }; + // constness: iter, for-loop, range are not const, so we're using while loops + // [u4; 8] -> u32 + let mut acc = 0; + let mut i = 0; + while i < #len_expr { + let value = value[i]; + // for every element, shift its value into its place + #value_shifted + // and bit-or them together + acc |= value_shifted; + i += 1; + } + let value_shifted = acc; + } + } + Path(_) => { + // get the size, so we can reach the next element afterwards + let size = shared::generate_type_bitsize(ty); + quote! { + // the element's value as it's underlying type + let value: BaseIntOf<#ty> = >::from(value).value(); + // cast the element value (e.g. u8 -> u32), + // which allows it to be combined with the struct's value later + let value: BaseIntOf = value as BaseIntOf; + let value_shifted = value << offset; + // increase the offset to allow the next element to be read + offset += #size; + } + } + _ => unreachable(()), + } +} + +/// The constructor code just needs every field setter. +/// +/// [`super::generate_struct`] contains the initialization of `offset`. +pub(crate) fn generate_constructor_part(ty: &Type, name: &Ident) -> TokenStream { + let value_shifted = generate_setter_inner(ty); + // setters look like this: `fn set_field1(&mut self, value: u3)` + // constructors like this: `fn new(field1: u3, field2: u4) -> Self` + // so we need to rename `field1` -> `value` and put this in a scope + quote! { { + let value = #name; + #value_shifted + value_shifted + } } +} + +/// We mostly need this in [`generate_setter_value`], to mask the whole field. +/// It basically combines a bunch of `Bitsized::MAX` values into a mask. +fn generate_ty_mask(ty: &Type) -> TokenStream { + use Type::*; + match ty { + Tuple(tuple) => { + let mut previous_elem_sizes = vec![]; + tuple + .elems + .iter() + .map(|elem| { + // for every element, generate a mask + let mask = generate_ty_mask(elem); + // get it's size + let elem_size = shared::generate_type_bitsize(elem); + // generate it's offset from all previous sizes + let elem_offset = previous_elem_sizes.iter().cloned().reduce(|acc, next| quote!((#acc + #next))); + previous_elem_sizes.push(elem_size); + // the first field doesn't need to be shifted + if let Some(elem_offset) = elem_offset { + quote!(#mask << #elem_offset) + } else { + quote!(#mask) + } + }) + // join all shifted masks with bit-or + .reduce(|acc, next| quote!(#acc | #next)) + // `field: (),` will be handled like this: + .unwrap_or_else(|| quote!(0)) + } + Array(array) => { + let elem_ty = &array.elem; + let len_expr = &array.len; + // generate the mask for one array element + let mask = generate_ty_mask(elem_ty); + // and the size + let ty_size = shared::generate_type_bitsize(elem_ty); + quote! { { + let mask = #mask; + let mut field_mask = 0; + let mut i = 0; + while i < #len_expr { + // for every element, shift its mask into its place + // and bit-or them together + field_mask |= mask << (i * #ty_size); + i += 1; + } + field_mask + } } + } + Path(_) => quote! { + // Casting this is needed in some places, but it might not be needed in some others. + // (u2, u12) -> u8 << 0 | u16 << 2 -> u8 | u16 not possible + (<#ty as Bitsized>::MAX.value() as BaseIntOf) + }, + _ => unreachable(()), + } +} + +/// We compute nested length here, to fold [[T; N]; M] to [T; N * M]. 
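+/// For example, `[[u2; 3]; 4]` yields the length expression `(4) * (3)` and the element type `u2`.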
+fn length_and_type_of_nested_array(array: &syn::TypeArray) -> (TokenStream, Type) { + let elem_ty = &array.elem; + let len_expr = &array.len; + if let Type::Array(array) = &**elem_ty { + let (child_len, child_ty) = length_and_type_of_nested_array(array); + (quote!((#len_expr) * (#child_len)), child_ty) + } else { + (quote!(#len_expr), *elem_ty.clone()) + } +} diff --git a/rust/hw/char/pl011/vendor/bilge-impl/src/debug_bits.rs b/rust/hw/char/pl011/vendor/bilge-impl/src/debug_bits.rs new file mode 100644 index 0000000000..95ba9c73c1 --- /dev/null +++ b/rust/hw/char/pl011/vendor/bilge-impl/src/debug_bits.rs @@ -0,0 +1,55 @@ +use proc_macro2::{Ident, TokenStream}; +use proc_macro_error::abort_call_site; +use quote::quote; +use syn::{Data, Fields}; + +use crate::shared::{self, unreachable}; + +pub(super) fn debug_bits(item: TokenStream) -> TokenStream { + let derive_input = shared::parse_derive(item); + let name = &derive_input.ident; + let name_str = name.to_string(); + let mut fieldless_next_int = 0; + let struct_data = match derive_input.data { + Data::Struct(s) => s, + Data::Enum(_) => abort_call_site!("use derive(Debug) for enums"), + Data::Union(_) => unreachable(()), + }; + + let fmt_impl = match struct_data.fields { + Fields::Named(fields) => { + let calls = fields.named.iter().map(|f| { + // We can unwrap since this is a named field + let call = f.ident.as_ref().unwrap(); + let name = call.to_string(); + quote!(.field(#name, &self.#call())) + }); + quote! { + f.debug_struct(#name_str) + // .field("field1", &self.field1()).field("field2", &self.field2()).field("field3", &self.field3()).finish() + #(#calls)*.finish() + } + } + Fields::Unnamed(fields) => { + let calls = fields.unnamed.iter().map(|_| { + let call: Ident = syn::parse_str(&format!("val_{}", fieldless_next_int)).unwrap_or_else(unreachable); + fieldless_next_int += 1; + quote!(.field(&self.#call())) + }); + quote! { + f.debug_tuple(#name_str) + // .field(&self.val0()).field(&self.val1()).finish() + #(#calls)*.finish() + } + } + Fields::Unit => todo!("this is a unit struct, which is not supported right now"), + }; + + quote! { + impl ::core::fmt::Debug for #name { + fn fmt(&self, f: &mut ::core::fmt::Formatter<'_>) -> ::core::fmt::Result { + #fmt_impl + } + } + } +} diff --git a/rust/hw/char/pl011/vendor/bilge-impl/src/default_bits.rs b/rust/hw/char/pl011/vendor/bilge-impl/src/default_bits.rs new file mode 100644 index 0000000000..f664accf36 --- /dev/null +++ b/rust/hw/char/pl011/vendor/bilge-impl/src/default_bits.rs @@ -0,0 +1,92 @@ +use proc_macro2::{Ident, TokenStream}; +use proc_macro_error::abort_call_site; +use quote::quote; +use syn::{Data, DeriveInput, Fields, Type}; + +use crate::shared::{self, fallback::Fallback, unreachable, BitSize}; + +pub(crate) fn default_bits(item: TokenStream) -> TokenStream { + let derive_input = parse(item); + //TODO: does fallback need handling? + let (derive_data, _, name, ..) = analyze(&derive_input); + + match derive_data { + Data::Struct(data) => generate_struct_default_impl(name, &data.fields), + Data::Enum(_) => abort_call_site!("use derive(Default) for enums"), + _ => unreachable(()), + } +} + +fn generate_struct_default_impl(struct_name: &Ident, fields: &Fields) -> TokenStream { + let default_value = fields + .iter() + .map(|field| generate_default_inner(&field.ty)) + .reduce(|acc, next| quote!(#acc | #next)); + + quote! 
{ + impl ::core::default::Default for #struct_name { + fn default() -> Self { + let mut offset = 0; + let value = #default_value; + let value = <#struct_name as Bitsized>::ArbitraryInt::new(value); + Self { value } + } + } + } +} + +fn generate_default_inner(ty: &Type) -> TokenStream { + use Type::*; + match ty { + // TODO?: we could optimize nested arrays here like in `struct_gen.rs` + // NOTE: in std, Default is only derived for arrays with up to 32 elements, but we allow more + Array(array) => { + let len_expr = &array.len; + let elem_ty = &*array.elem; + // generate the default value code for one array element + let value_shifted = generate_default_inner(elem_ty); + quote! {{ + // constness: iter, array::from_fn, for-loop, range are not const, so we're using while loops + let mut acc = 0; + let mut i = 0; + while i < #len_expr { + // for every element, shift its value into its place + let value_shifted = #value_shifted; + // and bit-or them together + acc |= value_shifted; + i += 1; + } + acc + }} + } + Path(path) => { + let field_size = shared::generate_type_bitsize(ty); + // u2::from(HaveFun::default()).value() as u32; + quote! {{ + let as_int = <#path as Bitsized>::ArbitraryInt::from(<#path as ::core::default::Default>::default()).value(); + let as_base_int = as_int as <::ArbitraryInt as Number>::UnderlyingType; + let shifted = as_base_int << offset; + offset += #field_size; + shifted + }} + } + Tuple(tuple) => { + tuple + .elems + .iter() + .map(generate_default_inner) + .reduce(|acc, next| quote!(#acc | #next)) + // `field: (),` will be handled like this: + .unwrap_or_else(|| quote!(0)) + } + _ => unreachable(()), + } +} + +fn parse(item: TokenStream) -> DeriveInput { + shared::parse_derive(item) +} + +fn analyze(derive_input: &DeriveInput) -> (&Data, TokenStream, &Ident, BitSize, Option) { + shared::analyze_derive(derive_input, false) +} diff --git a/rust/hw/char/pl011/vendor/bilge-impl/src/fmt_bits.rs b/rust/hw/char/pl011/vendor/bilge-impl/src/fmt_bits.rs new file mode 100644 index 0000000000..527691ed65 --- /dev/null +++ b/rust/hw/char/pl011/vendor/bilge-impl/src/fmt_bits.rs @@ -0,0 +1,112 @@ +use proc_macro2::{Ident, TokenStream}; +use quote::quote; +use syn::{punctuated::Iter, Data, DeriveInput, Fields, Variant}; + +use crate::shared::{self, discriminant_assigner::DiscriminantAssigner, fallback::Fallback, unreachable, BitSize}; + +pub(crate) fn binary(item: TokenStream) -> TokenStream { + let derive_input = parse(item); + let (derive_data, arb_int, name, bitsize, fallback) = analyze(&derive_input); + + match derive_data { + Data::Struct(data) => generate_struct_binary_impl(name, &data.fields), + Data::Enum(data) => generate_enum_binary_impl(name, data.variants.iter(), arb_int, bitsize, fallback), + _ => unreachable(()), + } +} + +fn generate_struct_binary_impl(struct_name: &Ident, fields: &Fields) -> TokenStream { + let write_underscore = quote! { write!(f, "_")?; }; + + // fields are printed from most significant to least significant, separated by an underscore + let writes = fields + .iter() + .rev() + .map(|field| { + let field_size = shared::generate_type_bitsize(&field.ty); + + // `extracted` is `field_size` bits of `value`, starting from index `first_bit_pos` (counting from LSB) + quote! 
{ + let field_size = #field_size; + let field_mask = mask >> (struct_size - field_size); + let first_bit_pos = last_bit_pos - field_size; + last_bit_pos -= field_size; + let extracted = field_mask & (self.value >> first_bit_pos); + write!(f, "{:0width$b}", extracted, width = field_size)?; + } + }) + .reduce(|acc, next| quote!(#acc #write_underscore #next)); + + quote! { + impl ::core::fmt::Binary for #struct_name { + fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { + let struct_size = <#struct_name as Bitsized>::BITS; + let mut last_bit_pos = struct_size; + let mask = <#struct_name as Bitsized>::MAX; + #writes + Ok(()) + } + } + } +} + +fn generate_enum_binary_impl( + enum_name: &Ident, variants: Iter, arb_int: TokenStream, bitsize: BitSize, fallback: Option, +) -> TokenStream { + let to_int_match_arms = generate_to_int_match_arms(variants, enum_name, bitsize, arb_int, fallback); + + let body = if to_int_match_arms.is_empty() { + quote! { Ok(()) } + } else { + quote! { + let value = match self { + #( #to_int_match_arms )* + }; + write!(f, "{:0width$b}", value, width = <#enum_name as Bitsized>::BITS) + } + }; + + quote! { + impl ::core::fmt::Binary for #enum_name { + fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { + #body + } + } + } +} + +/// generates the arms for an (infallible) conversion from an enum to the enum's underlying arbitrary_int +fn generate_to_int_match_arms( + variants: Iter, enum_name: &Ident, bitsize: BitSize, arb_int: TokenStream, fallback: Option, +) -> Vec { + let is_value_fallback = |variant_name| { + if let Some(Fallback::WithValue(name)) = &fallback { + variant_name == name + } else { + false + } + }; + + let mut assigner = DiscriminantAssigner::new(bitsize); + + variants + .map(|variant| { + let variant_name = &variant.ident; + let variant_value = assigner.assign_unsuffixed(variant); + + if is_value_fallback(variant_name) { + quote! 
{ #enum_name::#variant_name(number) => *number, } + } else { + shared::to_int_match_arm(enum_name, variant_name, &arb_int, variant_value) + } + }) + .collect() +} + +fn parse(item: TokenStream) -> DeriveInput { + shared::parse_derive(item) +} + +fn analyze(derive_input: &DeriveInput) -> (&Data, TokenStream, &Ident, BitSize, Option) { + shared::analyze_derive(derive_input, false) +} diff --git a/rust/hw/char/pl011/vendor/bilge-impl/src/from_bits.rs b/rust/hw/char/pl011/vendor/bilge-impl/src/from_bits.rs new file mode 100644 index 0000000000..e58b921521 --- /dev/null +++ b/rust/hw/char/pl011/vendor/bilge-impl/src/from_bits.rs @@ -0,0 +1,222 @@ +use itertools::Itertools; +use proc_macro2::{Ident, TokenStream}; +use proc_macro_error::{abort, abort_call_site}; +use quote::quote; +use syn::{punctuated::Iter, Data, DeriveInput, Fields, Type, Variant}; + +use crate::shared::{ + self, discriminant_assigner::DiscriminantAssigner, enum_fills_bitsize, fallback::Fallback, + unreachable, BitSize, +}; + +pub(super) fn from_bits(item: TokenStream) -> TokenStream { + let derive_input = parse(item); + let (derive_data, arb_int, name, internal_bitsize, fallback) = analyze(&derive_input); + let expanded = match &derive_data { + Data::Struct(struct_data) => generate_struct(arb_int, name, &struct_data.fields), + Data::Enum(enum_data) => { + let variants = enum_data.variants.iter(); + let match_arms = analyze_enum( + variants, + name, + internal_bitsize, + fallback.as_ref(), + &arb_int, + ); + generate_enum(arb_int, name, match_arms, fallback) + } + _ => unreachable(()), + }; + generate_common(expanded) +} + +fn parse(item: TokenStream) -> DeriveInput { + shared::parse_derive(item) +} + +fn analyze( + derive_input: &DeriveInput, +) -> (&syn::Data, TokenStream, &Ident, BitSize, Option) { + shared::analyze_derive(derive_input, false) +} + +fn analyze_enum( + variants: Iter, + name: &Ident, + internal_bitsize: BitSize, + fallback: Option<&Fallback>, + arb_int: &TokenStream, +) -> (Vec, Vec) { + validate_enum_variants(variants.clone(), fallback); + + let enum_is_filled = enum_fills_bitsize(internal_bitsize, variants.len()); + if !enum_is_filled && fallback.is_none() { + abort_call_site!("enum doesn't fill its bitsize"; help = "you need to use `#[derive(TryFromBits)]` instead, or specify one of the variants as #[fallback]") + } + if enum_is_filled && fallback.is_some() { + // NOTE: I've shortly tried pointing to `#[fallback]` here but it wasn't easy enough + abort_call_site!("enum already has {} variants", variants.len(); help = "remove the `#[fallback]` attribute") + } + + let mut assigner = DiscriminantAssigner::new(internal_bitsize); + + let is_fallback = |variant_name| { + if let Some(Fallback::Unit(name) | Fallback::WithValue(name)) = fallback { + variant_name == name + } else { + false + } + }; + + let is_value_fallback = |variant_name| { + if let Some(Fallback::WithValue(name)) = fallback { + variant_name == name + } else { + false + } + }; + + variants + .map(|variant| { + let variant_name = &variant.ident; + let variant_value = assigner.assign_unsuffixed(variant); + + let from_int_match_arm = if is_fallback(variant_name) { + // this value will be handled by the catch-all arm + quote!() + } else { + quote! { #variant_value => Self::#variant_name, } + }; + + let to_int_match_arm = if is_value_fallback(variant_name) { + quote! 
{ #name::#variant_name(number) => number, } + } else { + shared::to_int_match_arm(name, variant_name, arb_int, variant_value) + }; + + (from_int_match_arm, to_int_match_arm) + }) + .unzip() +} + +fn generate_enum( + arb_int: TokenStream, + enum_type: &Ident, + match_arms: (Vec, Vec), + fallback: Option, +) -> TokenStream { + let (from_int_match_arms, to_int_match_arms) = match_arms; + + let const_ = if cfg!(feature = "nightly") { + quote!(const) + } else { + quote!() + }; + + let from_enum_impl = + shared::generate_from_enum_impl(&arb_int, enum_type, to_int_match_arms, &const_); + + let catch_all_arm = match fallback { + Some(Fallback::WithValue(fallback_ident)) => quote! { + _ => Self::#fallback_ident(number), + }, + Some(Fallback::Unit(fallback_ident)) => quote! { + _ => Self::#fallback_ident, + }, + None => quote! { + // constness: unreachable!() is not const yet + _ => ::core::panic!("unreachable: arbitrary_int already validates that this is unreachable") + }, + }; + + quote! { + impl #const_ ::core::convert::From<#arb_int> for #enum_type { + fn from(number: #arb_int) -> Self { + match number.value() { + #( #from_int_match_arms )* + #catch_all_arm + } + } + } + #from_enum_impl + } +} + +/// a type is considered "filled" if it implements `Bitsized` with `BITS == N`, +/// and additionally is allowed to have any unsigned value from `0` to `2^N - 1`. +/// such a type can then safely implement `From`. +/// a filled type automatically implements the trait `Filled` thanks to a blanket impl. +/// the check generated by this function will prevent compilation if `ty` is not `Filled`. +fn generate_filled_check_for(ty: &Type, vec: &mut Vec) { + use Type::*; + match ty { + Path(_) => { + let assume = quote! { ::bilge::assume_filled::<#ty>(); }; + vec.push(assume); + } + Tuple(tuple) => { + for elem in &tuple.elems { + generate_filled_check_for(elem, vec) + } + } + Array(array) => generate_filled_check_for(&array.elem, vec), + _ => unreachable(()), + } +} + +fn generate_struct(arb_int: TokenStream, struct_type: &Ident, fields: &Fields) -> TokenStream { + let const_ = if cfg!(feature = "nightly") { + quote!(const) + } else { + quote!() + }; + + let mut assumes = Vec::new(); + for field in fields { + generate_filled_check_for(&field.ty, &mut assumes) + } + + // a single check per type is enough, so the checks can be deduped + let assumes = assumes.into_iter().unique_by(TokenStream::to_string); + + quote! { + impl #const_ ::core::convert::From<#arb_int> for #struct_type { + fn from(value: #arb_int) -> Self { + #( #assumes )* + Self { value } + } + } + impl #const_ ::core::convert::From<#struct_type> for #arb_int { + fn from(value: #struct_type) -> Self { + value.value + } + } + } +} + +fn generate_common(expanded: TokenStream) -> TokenStream { + quote! { + #expanded + } +} + +fn validate_enum_variants(variants: Iter, fallback: Option<&Fallback>) { + for variant in variants { + // we've already validated the correctness of the fallback variant, and that there's at most one such variant. + // this means we can safely skip a fallback variant if we find one. 
+ if let Some(fallback) = &fallback { + if fallback.is_fallback_variant(&variant.ident) { + continue; + } + } + + if !matches!(variant.fields, Fields::Unit) { + let help_message = if fallback.is_some() { + "change this variant to a unit" + } else { + "add a fallback variant or change this variant to a unit" + }; + abort!(variant, "FromBits only supports unit variants for variants without `#[fallback]`"; help = help_message); + } + } +} diff --git a/rust/hw/char/pl011/vendor/bilge-impl/src/lib.rs b/rust/hw/char/pl011/vendor/bilge-impl/src/lib.rs new file mode 100644 index 0000000000..4b34b4f306 --- /dev/null +++ b/rust/hw/char/pl011/vendor/bilge-impl/src/lib.rs @@ -0,0 +1,79 @@ +extern crate itertools; +extern crate proc_macro_error; +extern crate proc_macro_error_attr; +use proc_macro::TokenStream; +use proc_macro_error::proc_macro_error; + +mod bitsize; +mod bitsize_internal; +mod debug_bits; +mod default_bits; +mod fmt_bits; +mod from_bits; +mod try_from_bits; + +mod shared; + +/// Defines the bitsize of a struct or an enum. +/// +/// e.g. `#[bitsize(4)]` represents the item as a u4, which is UInt underneath. +/// The size of structs is currently limited to 128 bits. +/// The size of enums is limited to 64 bits. +/// Please open an issue if you have a usecase for bigger bitfields. +#[proc_macro_error] +#[proc_macro_attribute] +pub fn bitsize(args: TokenStream, item: TokenStream) -> TokenStream { + bitsize::bitsize(args.into(), item.into()).into() +} + +/// This is internally used, not to be used by anything besides `bitsize`. +/// No guarantees are given. +#[proc_macro_error] +#[proc_macro_attribute] +pub fn bitsize_internal(args: TokenStream, item: TokenStream) -> TokenStream { + bitsize_internal::bitsize_internal(args.into(), item.into()).into() +} + +/// Generate an `impl TryFrom` for unfilled bitfields. +/// +/// This should be used when your enum or enums nested in +/// a struct don't fill their given `bitsize`. +#[proc_macro_error] +#[proc_macro_derive(TryFromBits, attributes(bitsize_internal, fallback))] +pub fn derive_try_from_bits(item: TokenStream) -> TokenStream { + try_from_bits::try_from_bits(item.into()).into() +} + +/// Generate an `impl From` for filled bitfields. +/// +/// This should be used when your enum or enums nested in +/// a struct fill their given `bitsize` or if you're not +/// using enums. +#[proc_macro_error] +#[proc_macro_derive(FromBits, attributes(bitsize_internal, fallback))] +pub fn derive_from_bits(item: TokenStream) -> TokenStream { + from_bits::from_bits(item.into()).into() +} + +/// Generate an `impl core::fmt::Debug` for bitfield structs. +/// +/// Please use normal #[derive(Debug)] for enums. +#[proc_macro_error] +#[proc_macro_derive(DebugBits, attributes(bitsize_internal))] +pub fn debug_bits(item: TokenStream) -> TokenStream { + debug_bits::debug_bits(item.into()).into() +} + +/// Generate an `impl core::fmt::Binary` for bitfields. +#[proc_macro_error] +#[proc_macro_derive(BinaryBits)] +pub fn derive_binary_bits(item: TokenStream) -> TokenStream { + fmt_bits::binary(item.into()).into() +} + +/// Generate an `impl core::default::Default` for bitfield structs. 
+#[proc_macro_error] +#[proc_macro_derive(DefaultBits)] +pub fn derive_default_bits(item: TokenStream) -> TokenStream { + default_bits::default_bits(item.into()).into() +} diff --git a/rust/hw/char/pl011/vendor/bilge-impl/src/shared.rs b/rust/hw/char/pl011/vendor/bilge-impl/src/shared.rs new file mode 100644 index 0000000000..2e54e0d787 --- /dev/null +++ b/rust/hw/char/pl011/vendor/bilge-impl/src/shared.rs @@ -0,0 +1,196 @@ +pub mod discriminant_assigner; +pub mod fallback; +pub mod util; + +use fallback::{fallback_variant, Fallback}; +use proc_macro2::{Ident, Literal, TokenStream}; +use proc_macro_error::{abort, abort_call_site}; +use quote::quote; +use syn::{Attribute, DeriveInput, LitInt, Meta, Type}; +use util::PathExt; + +/// As arbitrary_int is limited to basic rust primitives, the maximum is u128. +/// Is there a true usecase for bitfields above this size? +/// This would also be change-worthy when rust starts supporting LLVM's arbitrary integers. +pub const MAX_STRUCT_BIT_SIZE: BitSize = 128; +/// As `#[repr(u128)]` is unstable and currently no real usecase for higher sizes exists, the maximum is u64. +pub const MAX_ENUM_BIT_SIZE: BitSize = 64; +pub type BitSize = u8; + +pub(crate) fn parse_derive(item: TokenStream) -> DeriveInput { + syn::parse2(item).unwrap_or_else(unreachable) +} + +// allow since we want `if try_from` blocks to stand out +#[allow(clippy::collapsible_if)] +pub(crate) fn analyze_derive(derive_input: &DeriveInput, try_from: bool) -> (&syn::Data, TokenStream, &Ident, BitSize, Option) { + let DeriveInput { + attrs, + ident, + // generics, + data, + .. + } = derive_input; + + if !try_from { + if attrs.iter().any(is_non_exhaustive_attribute) { + abort_call_site!("Item can't be FromBits and non_exhaustive"; help = "remove #[non_exhaustive] or derive(FromBits) here") + } + } else { + // currently not allowed, would need some thinking: + if let syn::Data::Struct(_) = data { + if attrs.iter().any(is_non_exhaustive_attribute) { + abort_call_site!("Using #[non_exhaustive] on structs is currently not supported"; help = "open an issue on our repository if needed") + } + } + } + + // parsing the #[bitsize_internal(num)] attribute macro + let args = attrs + .iter() + .find_map(bitsize_internal_arg) + .unwrap_or_else(|| abort_call_site!("add #[bitsize] attribute above your derive attribute")); + let (bitsize, arb_int) = bitsize_and_arbitrary_int_from(args); + + let fallback = fallback_variant(data, bitsize); + if fallback.is_some() && try_from { + abort_call_site!("fallback is not allowed with `TryFromBits`"; help = "use `#[derive(FromBits)]` or remove this `#[fallback]`") + } + + (data, arb_int, ident, bitsize, fallback) +} + +// If we want to support bitsize(u4) besides bitsize(4), do that here. 
+pub fn bitsize_and_arbitrary_int_from(bitsize_arg: TokenStream) -> (BitSize, TokenStream) { + let bitsize: LitInt = syn::parse2(bitsize_arg.clone()) + .unwrap_or_else(|_| abort!(bitsize_arg, "attribute value is not a number"; help = "you need to define the size like this: `#[bitsize(32)]`")); + // without postfix + let bitsize = bitsize + .base10_parse() + .ok() + .filter(|&n| n != 0 && n <= MAX_STRUCT_BIT_SIZE) + .unwrap_or_else(|| abort!(bitsize_arg, "attribute value is not a valid number"; help = "currently, numbers from 1 to {} are allowed", MAX_STRUCT_BIT_SIZE)); + let arb_int = syn::parse_str(&format!("u{bitsize}")).unwrap_or_else(unreachable); + (bitsize, arb_int) +} + +pub fn generate_type_bitsize(ty: &Type) -> TokenStream { + use Type::*; + match ty { + Tuple(tuple) => { + tuple + .elems + .iter() + .map(generate_type_bitsize) + .reduce(|acc, next| quote!((#acc + #next))) + // `field: (),` will be handled like this: + .unwrap_or_else(|| quote!(0)) + } + Array(array) => { + let elem_bitsize = generate_type_bitsize(&array.elem); + let len_expr = &array.len; + quote!((#elem_bitsize * #len_expr)) + } + Path(_) => { + quote!(<#ty as Bitsized>::BITS) + } + _ => unreachable(()), + } +} + +pub(crate) fn generate_from_enum_impl( + arb_int: &TokenStream, enum_type: &Ident, to_int_match_arms: Vec, const_: &TokenStream, +) -> TokenStream { + quote! { + impl #const_ ::core::convert::From<#enum_type> for #arb_int { + fn from(enum_value: #enum_type) -> Self { + match enum_value { + #( #to_int_match_arms )* + } + } + } + } +} + +/// Filters fields which are always `FILLED`, meaning all bit-patterns are possible, +/// meaning they are (should be) From, not TryFrom +/// +/// Currently, this is exactly the set of types we can extract a bitsize out of, just by looking at their ident: `uN` and `bool`. +pub fn is_always_filled(ty: &Type) -> bool { + last_ident_of_path(ty).and_then(bitsize_from_type_ident).is_some() +} + +pub fn last_ident_of_path(ty: &Type) -> Option<&Ident> { + if let Type::Path(type_path) = ty { + // the type may have a qualified path, so I don't think we can use `get_ident()` here + let last_segment = type_path.path.segments.last()?; + Some(&last_segment.ident) + } else { + None + } +} + +/// in enums, internal_bitsize <= 64; u64::MAX + 1 = u128 +/// therefore the bitshift would not overflow. +pub fn enum_fills_bitsize(bitsize: u8, variants_count: usize) -> bool { + let max_variants_count = 1u128 << bitsize; + if variants_count as u128 > max_variants_count { + abort_call_site!("enum overflows its bitsize"; help = "there should only be at most {} variants defined", max_variants_count); + } + variants_count as u128 == max_variants_count +} + +#[inline] +pub fn unreachable(_: T) -> U { + unreachable!("should have already been validated") +} + +pub fn is_attribute(attr: &Attribute, name: &str) -> bool { + if let Meta::Path(path) = &attr.meta { + path.is_ident(name) + } else { + false + } +} + +fn is_non_exhaustive_attribute(attr: &Attribute) -> bool { + is_attribute(attr, "non_exhaustive") +} + +pub(crate) fn is_fallback_attribute(attr: &Attribute) -> bool { + is_attribute(attr, "fallback") +} + +/// attempts to extract the bitsize from an ident equal to `uN` or `bool`. +/// should return `Result` instead of `Option`, if we decide to add more descriptive error handling. 
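+/// For example (illustrative): an ident of `u8` maps to `Some(8)`, `bool` maps to `Some(1)`, and a non-integer ident such as `String` maps to `None`.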
+pub fn bitsize_from_type_ident(type_name: &Ident) -> Option { + let type_name = type_name.to_string(); + + if type_name == "bool" { + Some(1) + } else if let Some(suffix) = type_name.strip_prefix('u') { + // characters which may appear in this suffix are digits, letters and underscores. + // parse() will reject letters and underscores, so this should be correct. + let bitsize = suffix.parse().ok(); + + // the namespace contains u2 up to u{MAX_STRUCT_BIT_SIZE}. can't make assumptions about larger values + bitsize.filter(|&n| n <= MAX_STRUCT_BIT_SIZE) + } else { + None + } +} + +pub fn to_int_match_arm(enum_name: &Ident, variant_name: &Ident, arb_int: &TokenStream, variant_value: Literal) -> TokenStream { + quote! { #enum_name::#variant_name => #arb_int::new(#variant_value), } +} + +pub(crate) fn bitsize_internal_arg(attr: &Attribute) -> Option { + if let Meta::List(list) = &attr.meta { + if list.path.matches(&["bilge", "bitsize_internal"]) { + let arg = list.tokens.to_owned(); + return Some(arg); + } + } + + None +} diff --git a/rust/hw/char/pl011/vendor/bilge-impl/src/shared/discriminant_assigner.rs b/rust/hw/char/pl011/vendor/bilge-impl/src/shared/discriminant_assigner.rs new file mode 100644 index 0000000000..5825baa4f1 --- /dev/null +++ b/rust/hw/char/pl011/vendor/bilge-impl/src/shared/discriminant_assigner.rs @@ -0,0 +1,56 @@ +use proc_macro2::Literal; +use proc_macro_error::abort; +use syn::{Expr, ExprLit, Lit, Variant}; + +use super::{unreachable, BitSize}; + +pub(crate) struct DiscriminantAssigner { + bitsize: BitSize, + next_expected_assignment: u128, +} + +impl DiscriminantAssigner { + pub fn new(bitsize: u8) -> DiscriminantAssigner { + DiscriminantAssigner { + bitsize, + next_expected_assignment: 0, + } + } + + fn max_value(&self) -> u128 { + (1u128 << self.bitsize) - 1 + } + + fn value_from_discriminant(&self, variant: &Variant) -> Option { + let discriminant = variant.discriminant.as_ref()?; + let discriminant_expr = &discriminant.1; + let variant_name = &variant.ident; + + let Expr::Lit(ExprLit { lit: Lit::Int(int), .. }) = discriminant_expr else { + abort!( + discriminant_expr, + "variant `{}` is not a number", variant_name; + help = "only literal integers currently supported" + ) + }; + + let discriminant_value: u128 = int.base10_parse().unwrap_or_else(unreachable); + if discriminant_value > self.max_value() { + abort!(variant, "Value of variant exceeds the given number of bits") + } + + Some(discriminant_value) + } + + fn assign(&mut self, variant: &Variant) -> u128 { + let value = self.value_from_discriminant(variant).unwrap_or(self.next_expected_assignment); + self.next_expected_assignment = value + 1; + value + } + + /// syn adds a suffix when printing Rust integers. 
we use an unsuffixed `Literal` for better-looking codegen + pub fn assign_unsuffixed(&mut self, variant: &Variant) -> Literal { + let next = self.assign(variant); + Literal::u128_unsuffixed(next) + } +} diff --git a/rust/hw/char/pl011/vendor/bilge-impl/src/shared/fallback.rs b/rust/hw/char/pl011/vendor/bilge-impl/src/shared/fallback.rs new file mode 100644 index 0000000000..893919659e --- /dev/null +++ b/rust/hw/char/pl011/vendor/bilge-impl/src/shared/fallback.rs @@ -0,0 +1,92 @@ +use itertools::Itertools; +use proc_macro2::Ident; +use proc_macro_error::{abort, abort_call_site}; +use syn::{Data, Variant}; + +use super::{bitsize_from_type_ident, is_fallback_attribute, last_ident_of_path, unreachable, BitSize}; + +pub enum Fallback { + Unit(Ident), + WithValue(Ident), +} + +impl Fallback { + fn from_variant(variant: &Variant, enum_bitsize: BitSize, is_last_variant: bool) -> Fallback { + use syn::Fields::*; + + let ident = variant.ident.to_owned(); + + match &variant.fields { + Named(_) => { + abort!(variant, "`#[fallback]` does not support variants with named fields"; help = "use a tuple variant or remove this `#[fallback]`") + } + Unnamed(fields) => { + let variant_fields = fields.unnamed.iter(); + let Ok(fallback_value) = variant_fields.exactly_one() else { + abort!(variant, "fallback variant must have exactly one field"; help = "use only one field or change to a unit variant") + }; + + if !is_last_variant { + abort!(variant, "value fallback is not the last variant"; help = "a fallback variant with value must be the last variant of the enum") + } + + // here we validate that the fallback variant field type matches the bitsize + let size_from_type = last_ident_of_path(&fallback_value.ty).and_then(bitsize_from_type_ident); + + match size_from_type { + Some(bitsize) if bitsize == enum_bitsize => Fallback::WithValue(ident), + Some(bitsize) => abort!( + variant.fields, + "bitsize of fallback field ({}) does not match bitsize of enum ({})", + bitsize, + enum_bitsize + ), + None => abort!(variant.fields, "`#[fallback]` only supports arbitrary_int or bool types"), + } + } + Unit => Fallback::Unit(ident), + } + } + + pub fn is_fallback_variant(&self, variant_ident: &Ident) -> bool { + matches!(self, Fallback::Unit(fallback_ident) | Fallback::WithValue(fallback_ident) if variant_ident == fallback_ident) + } +} + +/// finds a single enum variant with the attribute "fallback". +/// a "fallback variant" may come in one of two forms: +/// 1. `#[fallback] Foo`, which we map to `Fallback::Unit` +/// 2. 
`#[fallback] Foo(uN)`, where `N` is the enum's bitsize and `Foo` is the enum's last variant, +/// which we map to `Fallback::WithValue` +pub fn fallback_variant(data: &Data, enum_bitsize: BitSize) -> Option { + match data { + Data::Enum(enum_data) => { + let variants_with_fallback = enum_data + .variants + .iter() + .filter(|variant| variant.attrs.iter().any(is_fallback_attribute)); + + match variants_with_fallback.at_most_one() { + Ok(None) => None, + Ok(Some(variant)) => { + let is_last_variant = variant.ident == enum_data.variants.last().unwrap().ident; + let fallback = Fallback::from_variant(variant, enum_bitsize, is_last_variant); + Some(fallback) + } + Err(_) => { + abort_call_site!("only one enum variant may be `#[fallback]`"; help = "remove #[fallback] attributes until you only have one") + } + } + } + Data::Struct(struct_data) => { + let mut field_attrs = struct_data.fields.iter().flat_map(|field| &field.attrs); + + if field_attrs.any(is_fallback_attribute) { + abort_call_site!("`#[fallback]` is only applicable to enums"; help = "remove all `#[fallback]` from this struct") + } else { + None + } + } + _ => unreachable(()), + } +} diff --git a/rust/hw/char/pl011/vendor/bilge-impl/src/shared/util.rs b/rust/hw/char/pl011/vendor/bilge-impl/src/shared/util.rs new file mode 100644 index 0000000000..31a9be1f2a --- /dev/null +++ b/rust/hw/char/pl011/vendor/bilge-impl/src/shared/util.rs @@ -0,0 +1,91 @@ +use syn::Path; +#[cfg(test)] +use syn_path::path; + +pub trait PathExt { + /// match path segments. `str_segments` should contain the entire + /// qualified path from the crate root, for example `["bilge", "FromBits"]`. + /// allows partial matches - `["std", "default", "Default"]` will also match + /// the paths `Default` or `default::Default`. + fn matches(&self, str_segments: &[&str]) -> bool; + + /// match path segments, but also allow first segment to be either "core" or "std" + fn matches_core_or_std(&self, str_segments: &[&str]) -> bool { + let mut str_segments = str_segments.to_owned(); + + // try matching with "std" as first segment + // first, make "std" the first segment + match str_segments.first().copied() { + None => return false, // since path is non-empty, this is trivially false + Some("std") => (), + _ => str_segments.insert(0, "std"), + }; + + if self.matches(&str_segments) { + return true; + } + + // try matching with "core" as first segment + str_segments[0] = "core"; + self.matches(&str_segments) + } +} + +impl PathExt for Path { + fn matches(&self, str_segments: &[&str]) -> bool { + if self.segments.len() > str_segments.len() { + return false; + } + + let segments = self.segments.iter().map(|seg| seg.ident.to_string()).rev(); + let str_segments = str_segments.iter().copied().rev(); + + segments.zip(str_segments).all(|(a, b)| a == b) + } +} + +#[test] +fn path_matching() { + let paths = [ + path!(::std::default::Default), + path!(std::default::Default), + path!(default::Default), + path!(Default), + ]; + + let str_segments = &["std", "default", "Default"]; + + for path in paths { + assert!(path.matches(str_segments)); + } +} + +#[test] +fn partial_does_not_match() { + let full_path = path!(std::foo::bar::fizz::Buzz); + + let str_segments = ["std", "foo", "bar", "fizz", "Buzz"]; + + for i in 1..str_segments.len() { + let partial_str_segments = &str_segments[i..]; + assert!(!full_path.matches(partial_str_segments)) + } +} + +#[test] +fn path_matching_without_root() { + let paths = [ + path!(::core::fmt::Debug), + path!(core::fmt::Debug), + path!(::std::fmt::Debug), + 
path!(std::fmt::Debug), + path!(fmt::Debug), + path!(Debug), + ]; + + let str_segments_without_root = &["fmt", "Debug"]; + + for path in paths { + assert!(path.matches_core_or_std(str_segments_without_root)); + } +} diff --git a/rust/hw/char/pl011/vendor/bilge-impl/src/try_from_bits.rs b/rust/hw/char/pl011/vendor/bilge-impl/src/try_from_bits.rs new file mode 100644 index 0000000000..b27a5567c5 --- /dev/null +++ b/rust/hw/char/pl011/vendor/bilge-impl/src/try_from_bits.rs @@ -0,0 +1,143 @@ +use proc_macro2::{Ident, TokenStream}; +use proc_macro_error::{abort, emit_call_site_warning}; +use quote::quote; +use syn::{punctuated::Iter, Data, DeriveInput, Fields, Type, Variant}; + +use crate::shared::{self, discriminant_assigner::DiscriminantAssigner, enum_fills_bitsize, fallback::Fallback, unreachable, BitSize}; +use crate::shared::{bitsize_from_type_ident, last_ident_of_path}; + +pub(super) fn try_from_bits(item: TokenStream) -> TokenStream { + let derive_input = parse(item); + let (derive_data, arb_int, name, internal_bitsize, ..) = analyze(&derive_input); + match derive_data { + Data::Struct(ref data) => codegen_struct(arb_int, name, &data.fields), + Data::Enum(ref enum_data) => { + let variants = enum_data.variants.iter(); + let match_arms = analyze_enum(variants, name, internal_bitsize, &arb_int); + codegen_enum(arb_int, name, match_arms) + } + _ => unreachable(()), + } +} + +fn parse(item: TokenStream) -> DeriveInput { + shared::parse_derive(item) +} + +fn analyze(derive_input: &DeriveInput) -> (&syn::Data, TokenStream, &Ident, BitSize, Option) { + shared::analyze_derive(derive_input, true) +} + +fn analyze_enum(variants: Iter, name: &Ident, internal_bitsize: BitSize, arb_int: &TokenStream) -> (Vec, Vec) { + validate_enum_variants(variants.clone()); + + if enum_fills_bitsize(internal_bitsize, variants.len()) { + emit_call_site_warning!("enum fills its bitsize"; help = "you can use `#[derive(FromBits)]` instead, rust will provide `TryFrom` for you (so you don't necessarily have to update call-sites)"); + } + + let mut assigner = DiscriminantAssigner::new(internal_bitsize); + + variants + .map(|variant| { + let variant_name = &variant.ident; + let variant_value = assigner.assign_unsuffixed(variant); + + let from_int_match_arm = quote! { + #variant_value => Ok(Self::#variant_name), + }; + + let to_int_match_arm = shared::to_int_match_arm(name, variant_name, arb_int, variant_value); + + (from_int_match_arm, to_int_match_arm) + }) + .unzip() +} + +fn codegen_enum(arb_int: TokenStream, enum_type: &Ident, match_arms: (Vec, Vec)) -> TokenStream { + let (from_int_match_arms, to_int_match_arms) = match_arms; + + let const_ = if cfg!(feature = "nightly") { quote!(const) } else { quote!() }; + + let from_enum_impl = shared::generate_from_enum_impl(&arb_int, enum_type, to_int_match_arms, &const_); + quote! { + impl #const_ ::core::convert::TryFrom<#arb_int> for #enum_type { + type Error = ::bilge::BitsError; + + fn try_from(number: #arb_int) -> ::core::result::Result { + match number.value() { + #( #from_int_match_arms )* + i => Err(::bilge::give_me_error()), + } + } + } + + // this other direction is needed for get/set/new + #from_enum_impl + } +} + +fn generate_field_check(ty: &Type) -> TokenStream { + // Yes, this is hacky module management. 
+ crate::bitsize_internal::struct_gen::generate_getter_inner(ty, false) +} + +fn codegen_struct(arb_int: TokenStream, struct_type: &Ident, fields: &Fields) -> TokenStream { + let is_ok: TokenStream = fields + .iter() + .map(|field| { + let ty = &field.ty; + let size_from_type = last_ident_of_path(ty).and_then(bitsize_from_type_ident); + if let Some(size) = size_from_type { + quote! { { + // we still need to shift by the element's size + let size = #size; + cursor = cursor.wrapping_shr(size as u32); + true + } } + } else { + generate_field_check(ty) + } + }) + .reduce(|acc, next| quote!((#acc && #next))) + // `Struct {}` would be handled like this: + .unwrap_or_else(|| quote!(true)); + + let const_ = if cfg!(feature = "nightly") { quote!(const) } else { quote!() }; + + quote! { + impl #const_ ::core::convert::TryFrom<#arb_int> for #struct_type { + type Error = ::bilge::BitsError; + + // validates all values, which means enums, even in inner structs (TODO: and reserved fields?) + fn try_from(value: #arb_int) -> ::core::result::Result { + type ArbIntOf = ::ArbitraryInt; + type BaseIntOf = as Number>::UnderlyingType; + + // cursor starts at value's first field + let mut cursor = value.value(); + + let is_ok: bool = {#is_ok}; + + if is_ok { + Ok(Self { value }) + } else { + Err(::bilge::give_me_error()) + } + } + } + + impl #const_ ::core::convert::From<#struct_type> for #arb_int { + fn from(struct_value: #struct_type) -> Self { + struct_value.value + } + } + } +} + +fn validate_enum_variants(variants: Iter) { + for variant in variants { + if !matches!(variant.fields, Fields::Unit) { + abort!(variant, "TryFromBits only supports unit variants in enums"; help = "change this variant to a unit"); + } + } +} diff --git a/rust/hw/char/pl011/vendor/bilge/.cargo-checksum.json b/rust/hw/char/pl011/vendor/bilge/.cargo-checksum.json new file mode 100644 index 0000000000..39c4922340 --- /dev/null +++ b/rust/hw/char/pl011/vendor/bilge/.cargo-checksum.json @@ -0,0 +1 @@ +{"files":{"Cargo.toml":"3bb4a52531b944f44649567e4308c98efe1a908ca15558eaf9139fe260c22184","LICENSE-APACHE":"2514772e5475f208616174f81b67168179a7c51bdcb9570a96a9dc5962b83116","LICENSE-MIT":"7363fc7e2596998f3fc0109b6908575bf1cd8f6fa2fc97aff6bd9d17177f50bb","README.md":"6d4fcc631ed47bbe8e654649185ce987e9630192ea25c84edd264674e30efa4d","src/lib.rs":"4c8546a19b3255895058b4d5a2e8f17b36d196275bbc6831fe1a8b8cbeb258dc"},"package":"dc707ed8ebf81de5cd6c7f48f54b4c8621760926cdf35a57000747c512e67b57"} \ No newline at end of file diff --git a/rust/hw/char/pl011/vendor/bilge/Cargo.toml b/rust/hw/char/pl011/vendor/bilge/Cargo.toml new file mode 100644 index 0000000000..3e4900f08c --- /dev/null +++ b/rust/hw/char/pl011/vendor/bilge/Cargo.toml @@ -0,0 +1,69 @@ +# THIS FILE IS AUTOMATICALLY GENERATED BY CARGO +# +# When uploading crates to the registry Cargo will automatically +# "normalize" Cargo.toml files for maximal compatibility +# with all versions of Cargo and also rewrite `path` dependencies +# to registry (e.g., crates.io) dependencies. +# +# If you are reading this file be aware that the original Cargo.toml +# will likely look very different (and much more reasonable). +# See Cargo.toml.orig for the original contents. + +[package] +edition = "2021" +name = "bilge" +version = "0.2.0" +authors = ["Hecatia Elegua"] +include = [ + "src/lib.rs", + "LICENSE-*", + "README.md", +] +description = "Use bitsized types as if they were a feature of rust." 
+documentation = "https://docs.rs/bilge" +readme = "README.md" +keywords = [ + "bilge", + "bitfield", + "bits", + "register", +] +license = "MIT OR Apache-2.0" +repository = "https://github.com/hecatia-elegua/bilge" + +[lib] +bench = false + +[[bench]] +name = "compared" +path = "benches/compared/main.rs" +bench = false +harness = false + +[dependencies.arbitrary-int] +version = "1.2.6" + +[dependencies.bilge-impl] +version = "=0.2.0" + +[dev-dependencies.assert_matches] +version = "1.5.0" + +[dev-dependencies.rustversion] +version = "1.0" + +[dev-dependencies.trybuild] +version = "1.0" + +[dev-dependencies.volatile] +version = "0.5.1" + +[dev-dependencies.zerocopy] +version = "0.5.0" + +[features] +default = [] +nightly = [ + "arbitrary-int/const_convert_and_const_trait_impl", + "bilge-impl/nightly", +] diff --git a/rust/hw/char/pl011/vendor/bilge/LICENSE-APACHE b/rust/hw/char/pl011/vendor/bilge/LICENSE-APACHE new file mode 100644 index 0000000000..21254fc75d --- /dev/null +++ b/rust/hw/char/pl011/vendor/bilge/LICENSE-APACHE @@ -0,0 +1,176 @@ + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. 
+ + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. 
You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. 
In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS \ No newline at end of file diff --git a/rust/hw/char/pl011/vendor/bilge/LICENSE-MIT b/rust/hw/char/pl011/vendor/bilge/LICENSE-MIT new file mode 100644 index 0000000000..2b1af07674 --- /dev/null +++ b/rust/hw/char/pl011/vendor/bilge/LICENSE-MIT @@ -0,0 +1,17 @@ +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. \ No newline at end of file diff --git a/rust/hw/char/pl011/vendor/bilge/README.md b/rust/hw/char/pl011/vendor/bilge/README.md new file mode 100644 index 0000000000..48daad0fcb --- /dev/null +++ b/rust/hw/char/pl011/vendor/bilge/README.md @@ -0,0 +1,327 @@ +# bilge: the most readable bitfields + +[![crates.io](https://img.shields.io/crates/v/bilge.svg)](https://crates.io/crates/bilge) +[![docs.rs](https://docs.rs/bilge/badge.svg)](https://docs.rs/bilge) +[![loc](https://tokei.rs/b1/github/hecatia-elegua/bilge?category=code)](https://github.com/Aaronepower/tokei#badges) + +_Y e s_, this is yet another bitfield crate, but hear me out: + +This is a _**bit**_ better than what we had before. 
+ +I wanted a design fitting rust: + +- **safe** + - types model as much of the functionality as possible and don't allow false usage +- **fast** + - like handwritten bit fiddling code +- **simple to complex** + - obvious and readable basic frontend, like normal structs + - only minimally and gradually introduce advanced concepts + - provide extension mechanisms + +The lib is **no-std** (and fully `const` behind a `"nightly"` feature gate). + +For some more explanations on the "why" and "how": [blog post](https://hecatia-elegua.github.io/blog/no-more-bit-fiddling/) and [reddit comments](https://www.reddit.com/r/rust/comments/13ic0mf/no_more_bit_fiddling_and_introducing_bilge/). + +## WARNING + +Our current version is still pre 1.0, which means nothing is completely stable. + +However, constructors, getters, setters and From/TryFrom should stay the same, since their semantics are very clear. + +[//]: # (keep this fixed to the version in .github/workflows/ci.yml, rust-toolchain.toml) + +The nightly feature is tested on `nightly-2022-11-03` and [will not work on the newest nightly until const_convert comes back](https://github.com/rust-lang/rust/issues/110395#issuecomment-1524775763). + +## Usage + +To make your life easier: + +```rust +use bilge::prelude::*; +``` + +### Infallible (From) + +You can just specify bitsized fields like normal fields: + +```rust +#[bitsize(14)] +struct Register { + header: u4, + body: u7, + footer: Footer, +} +``` + +The attribute `bitsize` generates the bitfield, while `14` works as a failsafe, emitting a compile error if your struct definition doesn't declare 14 bits. +Let's define the nested struct `Footer` as well: + +```rust +#[bitsize(3)] +#[derive(FromBits)] +struct Footer { + is_last: bool, + code: Code, +} +``` + +As you can see, we added `#[derive(FromBits)]`, which is needed for `Register`'s getters and setters. +Due to how rust macros work (outside-in), it needs to be below `#[bitsize]`. +Also, `bool` can be used as one bit. + +`Code` is another nesting, this time an enum: + +```rust +#[bitsize(2)] +#[derive(FromBits)] +enum Code { Success, Error, IoError, GoodExample } +``` + +Now we can construct `Register`: + +```rust +let reg1 = Register::new( + u4::new(0b1010), + u7::new(0b010_1010), + Footer::new(true, Code::GoodExample) +); +``` + +Or, if we add `#[derive(FromBits)]` to `Register` and want to parse a raw register value: + +```rust +let mut reg2 = Register::from(u14::new(0b11_1_0101010_1010)); +``` + +And getting and setting fields is done like this: + +```rust +let header = reg2.header(); +reg2.set_footer(Footer::new(false, Code::Success)); +``` + +Any kinds of tuple and array are also supported: + +```rust +#[bitsize(32)] +#[derive(FromBits)] +struct InterruptSetEnables([bool; 32]); +``` + +Which produces the usual getter and setter, but also element accessors: + +```rust +let mut ise = InterruptSetEnables::from(0b0000_0000_0000_0000_0000_0000_0001_0000); +let ise5 = ise.val_0_at(4); +ise.set_val_0_at(2, ise5); +assert_eq!(0b0000_0000_0000_0000_0000_0000_0001_0100, ise.value); +``` + +Depending on what you're working with, only a subset of enum values might be clear, or some values might be reserved. 
+In that case, you can use a fallback variant, defined like this: + +```rust +#[bitsize(32)] +#[derive(FromBits, Debug, PartialEq)] +enum Subclass { + Mouse, + Keyboard, + Speakers, + #[fallback] + Reserved, +} +``` + +which will convert any undeclared bits to `Reserved`: + +```rust +assert_eq!(Subclass::Reserved, Subclass::from(3)); +assert_eq!(Subclass::Reserved, Subclass::from(42)); +let num = u32::from(Subclass::from(42)); +assert_eq!(3, num); +assert_ne!(42, num); +``` + +or, if you need to keep the exact number saved, use: + +```rust +#[fallback] +Reserved(u32), +``` + +```rust +assert_eq!(Subclass2::Reserved(3), Subclass2::from(3)); +assert_eq!(Subclass2::Reserved(42), Subclass2::from(42)); +let num = u32::from(Subclass2::from(42)); +assert_eq!(42, num); +assert_ne!(3, num); +``` + +### Fallible (TryFrom) + +In contrast to structs, enums don't have to declare all of their bits: + +```rust +#[bitsize(2)] +#[derive(TryFromBits)] +enum Class { + Mobile, Semimobile, /* 0x2 undefined */ Stationary = 0x3 +} +``` + +meaning this will work: + +```rust +let class = Class::try_from(u2::new(2)); +assert!(class.is_err()); +``` + +except we first need to `#[derive(Debug, PartialEq)]` on `Class`, since `assert_eq!` needs those. + +Let's do that, and use `Class` as a field: + +```rust +#[bitsize(8)] +#[derive(TryFromBits)] +struct Device { + reserved: u2, + class: Class, + reserved: u4, +} +``` + +This shows `TryFrom` being propagated upward. There's also another small help: `reserved` fields (which are often used in registers) can all have the same name. + +Again, let's try to print this: + +```rust +println!("{:?}", Device::try_from(0b0000_11_00)); +println!("{:?}", Device::new(Class::Mobile)); +``` + +And again, `Device` doesn't implement `Debug`: + +### DebugBits + +For structs, you need to add `#[derive(DebugBits)]` to get an output like this: + +```rust +Ok(Device { reserved_i: 0, class: Stationary, reserved_ii: 0 }) +Device { reserved_i: 0, class: Mobile, reserved_ii: 0 } +``` + +For testing + overview, the full readme example code is in `/examples/readme.rs`. + +### Custom -Bits derives + +One of the main advantages of our approach is that we can keep `#[bitsize]` pretty slim, offloading all the other features to derive macros. +Besides the derive macros shown above, you can extend `bilge` with your own derive crates working on bitfields. +An example of this is given in [`/tests/custom_derive.rs`](https://github.com/hecatia-elegua/bilge/blob/main/tests/custom_derive.rs), with its implementation in [`tests/custom_bits`](https://github.com/hecatia-elegua/bilge/blob/1dfb6cf7d278d102d3f96ac31a9374e2b27fafc7/tests/custom_bits/custom_bits_derive/src/lib.rs). + +## Back- and Forwards Compatibility + +The syntax is kept very similar to usual rust structs for a simple reason: + +The endgoal of this library is to support the adoption of LLVM's arbitrary bitwidth integers into rust, +thereby allowing rust-native bitfields. +Until then, bilge is using the wonderful [`arbitrary-int` crate by danlehmann](https://github.com/danlehmann/arbitrary-int). + +After all attribute expansions, our generated bitfield contains a single field, somewhat like: + +```rust +struct Register { value: u14 } +``` + +This means you _could_ modify the inner value directly, but it breaks type safety guarantees (e.g. unfilled or read-only fields). +So if you need to modify the whole field, instead use the type-safe conversions `u14::from(register)` and `Register::from(u14)`. 
+It is possible that this inner type will be made private.
+
+For some more examples and an overview of functionality, take a look at `/examples` and `/tests`.
+
+## Alternatives
+
+### benchmarks, performance, asm line count
+
+First of all, [basic benchmarking](https://github.com/hecatia-elegua/bilge/blob/main/benches/compared/main.rs) reveals that all alternatives mentioned here (besides deku) have about the same performance and line count. This includes a handwritten version.
+
+### build-time
+
+Measuring the build time of the crate itself (both with and without its dependencies) yields these numbers on my machine:
+
+| crate                 | debug | debug single crate | release   | release single crate |
+|-----------------------|-------|--------------------|-----------|----------------------|
+| bilge 1.67-nightly    | 8     | 1.8                | 6         | 0.8                  |
+| bitbybit 1.69         | 4.5   | 1.3                | 13.5 [^*] | 9.5 [^*]             |
+| modular-bitfield 1.69 | 8     | 2.2                | 7.2       | 1.6                  |
+
+[^*]: This is probably just a weird rustc regression or something in my setup; not representative.
+
+This was measured with `cargo clean && cargo build [--release] --quiet --timings`.
+Of course, the actual codegen time on an example project needs to be measured, too.
+
+
+### handwritten implementation
+
+The common handwritten implementation pattern for bitfields in rust looks [somewhat like benches/compared/handmade.rs](https://github.com/hecatia-elegua/bilge/blob/main/benches/compared/handmade.rs), sometimes also throwing around a lot of consts for field offsets. The problems with this approach are:
+- readability suffers
+- offset, cast or masking errors could go unnoticed
+- bit fiddling, shifting and masking is done all over the place, in contrast to bitfields
+- beginners suffer, and I would argue even seniors do, since the question becomes: "Why do we need to learn and debug bit fiddling if we can get most of it done by using structs?"
+- reimplementing different kinds of _fallible nested-struct enum-tuple array field access_ might not be so fun + +### modular-bitfield + +The often used and very inspiring [`modular-bitfield`](https://github.com/robbepop/modular-bitfield) has a few +problems: +- it is unmaintained and has a quirky structure +- constructors use the builder pattern + - makes user code unreadable if you have many fields + - can accidentally leave things uninitialized +- `from_bytes` can easily take invalid arguments, which turns verification inside-out: + - modular-bitfield flow: `u16` -> `PackedData::from_bytes([u16])` -> `PackedData::status_or_err()?` + - needs to check for `Err` on every single access + - adds duplicate getters and setters with postfix `_or_err` + - reinvents `From`/`TryFrom` as a kind of hybrid + - bilge: usual type-system centric flow: `u16` -> `PackedData::try_from(u16)?` -> `PackedData::status()` + - just works, needs to check nothing on access + - some more general info on this: [Parse, don't validate](https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-validate/) +- big god-macro + - powerful, but less readable to the devs of modular-bitfield + - needs to cover many derives in itself, like `impl Debug` (other bitfield crates do this as well) + - bilge: solves this by providing a kind of scope for `-Bits`-derives + +and implementation differences: +- underlying type is a byte array + - can be useful for bitfields larger than u128 + - bilge: if your bitfields get larger than u128, you can most often split them into multiple bitfields of a primitive size (like u64) and put those in a parent struct which is not a bitfield + +Still, modular-bitfield is pretty good and I had set out to build something equal or hopefully better than it. +Tell me where I can do better, I will try. + +### bitbybit + +One of the libs inspired by the same crate is [`bitbybit`](https://github.com/danlehmann/bitfield), which is much more readable and up-to-date. Actually, I even helped and am still helping on that one as well. After experimenting and hacking around in their code though, I realized it would need to be severely changed for the features and structure I had in mind. + +implementation differences (as of 26.04.23): +- it can do read/write-only, array strides and repeat the same bits for multiple fields + - bilge: these will be added the moment someone needs it +- redundant bit-offset specification, which can help or annoy, the same way bilge's `reserved` fields can help or annoy + +### deku + +After looking at a ton of bitfield libs on crates.io, I _didn't_ find [`deku`](https://github.com/sharksforarms/deku). +I will still mention it here because it uses a very interesting crate underneath (bitvec). +Currently (as of 26.04.23), it generates far more assembly and takes longer to run, since parts of the API are not `const`. +I've opened an issue on their repo about that. + +### most others + +Besides that, many bitfield libs try to imitate or look like C bitfields, even though these are hated by many. +I argue most beginners would have the idea to specify bits with basic primitives like u1, u2, ... +This also opens up some possibilities for calculation and conversion on those primitives. + +Something similar can be said about `bitflags`, which, under this model, can be turned into simple structs with bools and enums. + +Basically, `bilge` tries to convert bit fiddling, shifting and masking into more widely known concepts like struct access. 
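+
+As a rough, hypothetical sketch of that contrast (the `Status` register and its fields below are made up purely for illustration, using only the bilge API shown above):
+
+```rust
+use bilge::prelude::*;
+
+// Bitfield style: the layout is declared once, in one place.
+#[bitsize(8)]
+#[derive(FromBits)]
+struct Status {
+    ready: bool,
+    error: bool,
+    code: u4,
+    reserved: u2,
+}
+
+let raw: u8 = 0b0101_1110;
+
+// Handwritten style: the offset and mask are repeated at every use site.
+let code_manual = (raw >> 2) & 0b1111;
+
+// Bitfield style: plain struct access, offsets live in the declaration above.
+let code_typed = Status::from(raw).code();
+
+assert_eq!(code_typed.value(), code_manual);
+```
+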
+ +About the name: a bilge is one of the "lowest" parts of a ship, nothing else to it :) diff --git a/rust/hw/char/pl011/vendor/bilge/meson.build b/rust/hw/char/pl011/vendor/bilge/meson.build new file mode 100644 index 0000000000..906cec7764 --- /dev/null +++ b/rust/hw/char/pl011/vendor/bilge/meson.build @@ -0,0 +1,17 @@ +_bilge_rs = static_library( + 'bilge', + files('src/lib.rs'), + gnu_symbol_visibility: 'hidden', + rust_abi: 'rust', + rust_args: rust_args + [ + '--edition', '2021', + ], + dependencies: [ + dep_arbitrary_int, + dep_bilge_impl, + ], +) + +dep_bilge = declare_dependency( + link_with: _bilge_rs, +) diff --git a/rust/hw/char/pl011/vendor/bilge/src/lib.rs b/rust/hw/char/pl011/vendor/bilge/src/lib.rs new file mode 100644 index 0000000000..c6c9752ea5 --- /dev/null +++ b/rust/hw/char/pl011/vendor/bilge/src/lib.rs @@ -0,0 +1,80 @@ +#![cfg_attr(not(doctest), doc = include_str!("../README.md"))] +#![no_std] + +#[doc(no_inline)] +pub use arbitrary_int; +pub use bilge_impl::{bitsize, bitsize_internal, BinaryBits, DebugBits, DefaultBits, FromBits, TryFromBits}; + +/// used for `use bilge::prelude::*;` +pub mod prelude { + #[rustfmt::skip] + #[doc(no_inline)] + pub use super::{ + bitsize, Bitsized, + FromBits, TryFromBits, DebugBits, BinaryBits, DefaultBits, + // we control the version, so this should not be a problem + arbitrary_int::*, + }; +} + +/// This is internally used, but might be useful. No guarantees are given (for now). +pub trait Bitsized { + type ArbitraryInt; + const BITS: usize; + const MAX: Self::ArbitraryInt; +} + +/// Internally used marker trait. +/// # Safety +/// +/// Avoid implementing this for your types. Implementing this trait could break invariants. +pub unsafe trait Filled: Bitsized {} +unsafe impl Filled for T where T: Bitsized + From<::ArbitraryInt> {} + +/// This is generated to statically validate that a type implements `FromBits`. +pub const fn assume_filled() {} + +#[non_exhaustive] +#[derive(Debug, PartialEq)] +pub struct BitsError; + +/// Internally used for generating the `Result::Err` type in `TryFrom`. +/// +/// This is needed since we don't want users to be able to create `BitsError` right now. +/// We'll be able to turn `BitsError` into an enum later, or anything else really. +pub const fn give_me_error() -> BitsError { + BitsError +} + +/// Only basing this on Number did not work, as bool and others are not Number. +/// We could remove the whole macro_rules thing if it worked, though. +/// Maybe there is some way to do this, I'm not deep into types. +/// Finding some way to combine Number and Bitsized would be good as well. +impl Bitsized for arbitrary_int::UInt +where + arbitrary_int::UInt: arbitrary_int::Number, +{ + type ArbitraryInt = Self; + const BITS: usize = BITS; + const MAX: Self::ArbitraryInt = ::MAX; +} + +macro_rules! 
bitsized_impl { + ($(($name:ident, $bits:expr)),+) => { + $( + impl Bitsized for $name { + type ArbitraryInt = Self; + const BITS: usize = $bits; + const MAX: Self::ArbitraryInt = ::MAX; + } + )+ + }; +} +bitsized_impl!((u8, 8), (u16, 16), (u32, 32), (u64, 64), (u128, 128)); + +/// Handle bool as a u1 +impl Bitsized for bool { + type ArbitraryInt = arbitrary_int::u1; + const BITS: usize = 1; + const MAX: Self::ArbitraryInt = ::MAX; +} diff --git a/rust/hw/char/pl011/vendor/either/.cargo-checksum.json b/rust/hw/char/pl011/vendor/either/.cargo-checksum.json new file mode 100644 index 0000000000..d145aae980 --- /dev/null +++ b/rust/hw/char/pl011/vendor/either/.cargo-checksum.json @@ -0,0 +1 @@ +{"files":{"Cargo.toml":"96ca858a773ab30021cc60d1838bfccfc83b10e1279d8148187c8a049f18dbd6","LICENSE-APACHE":"a60eea817514531668d7e00765731449fe14d059d3249e0bc93b36de45f759f2","LICENSE-MIT":"7576269ea71f767b99297934c0b2367532690f8c4badc695edf8e04ab6a1e545","README-crates.io.md":"b775991a01ab4a0a8de6169f597775319d9ce8178f5c74ccdc634f13a286b20c","README.rst":"4fef58c3451b2eac9fd941f1fa0135d5df8183c124d75681497fa14bd1872b8b","src/into_either.rs":"0477f226bbba78ef017de08b87d421d3cd99fbc95b90ba4e6e3e803e3d15254e","src/iterator.rs":"fa2a6d14141980ce8a0bfcf7df2113d1e056d0f9815773dc9c2fb92a88923f4a","src/lib.rs":"4fbfa03b22b84d877610dfce1c7f279c97d80f4dc2c079c7dda364e4cf56ef13","src/serde_untagged.rs":"e826ee0ab31616e49c3e3f3711c8441001ee424b3e7a8c4c466cfcc4f8a7701a","src/serde_untagged_optional.rs":"86265f09d0795428bb2ce013b070d1badf1e2210217844a9ff3f04b2795868ab"},"package":"3dca9240753cf90908d7e4aac30f630662b02aebaa1b58a3cadabdb23385b58b"} \ No newline at end of file diff --git a/rust/hw/char/pl011/vendor/either/Cargo.toml b/rust/hw/char/pl011/vendor/either/Cargo.toml new file mode 100644 index 0000000000..1bfc7d42f1 --- /dev/null +++ b/rust/hw/char/pl011/vendor/either/Cargo.toml @@ -0,0 +1,54 @@ +# THIS FILE IS AUTOMATICALLY GENERATED BY CARGO +# +# When uploading crates to the registry Cargo will automatically +# "normalize" Cargo.toml files for maximal compatibility +# with all versions of Cargo and also rewrite `path` dependencies +# to registry (e.g., crates.io) dependencies. +# +# If you are reading this file be aware that the original Cargo.toml +# will likely look very different (and much more reasonable). +# See Cargo.toml.orig for the original contents. + +[package] +edition = "2018" +rust-version = "1.37" +name = "either" +version = "1.12.0" +authors = ["bluss"] +description = """ +The enum `Either` with variants `Left` and `Right` is a general purpose sum type with two cases. 
+""" +documentation = "https://docs.rs/either/1/" +readme = "README-crates.io.md" +keywords = [ + "data-structure", + "no_std", +] +categories = [ + "data-structures", + "no-std", +] +license = "MIT OR Apache-2.0" +repository = "https://github.com/rayon-rs/either" + +[package.metadata.docs.rs] +features = ["serde"] + +[package.metadata.playground] +features = ["serde"] + +[package.metadata.release] +no-dev-version = true +tag-name = "{{version}}" + +[dependencies.serde] +version = "1.0" +features = ["derive"] +optional = true + +[dev-dependencies.serde_json] +version = "1.0.0" + +[features] +default = ["use_std"] +use_std = [] diff --git a/rust/hw/char/pl011/vendor/either/LICENSE-APACHE b/rust/hw/char/pl011/vendor/either/LICENSE-APACHE new file mode 100644 index 0000000000..16fe87b06e --- /dev/null +++ b/rust/hw/char/pl011/vendor/either/LICENSE-APACHE @@ -0,0 +1,201 @@ + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + +TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + +1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. 
For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + +2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + +3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + +4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. 
The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + +5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + +6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + +7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + +8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + +9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. 
+ +END OF TERMS AND CONDITIONS + +APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + +Copyright [yyyy] [name of copyright owner] + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. diff --git a/rust/hw/char/pl011/vendor/either/LICENSE-MIT b/rust/hw/char/pl011/vendor/either/LICENSE-MIT new file mode 100644 index 0000000000..9203baa055 --- /dev/null +++ b/rust/hw/char/pl011/vendor/either/LICENSE-MIT @@ -0,0 +1,25 @@ +Copyright (c) 2015 + +Permission is hereby granted, free of charge, to any +person obtaining a copy of this software and associated +documentation files (the "Software"), to deal in the +Software without restriction, including without +limitation the rights to use, copy, modify, merge, +publish, distribute, sublicense, and/or sell copies of +the Software, and to permit persons to whom the Software +is furnished to do so, subject to the following +conditions: + +The above copyright notice and this permission notice +shall be included in all copies or substantial portions +of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF +ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED +TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A +PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT +SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY +CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION +OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR +IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER +DEALINGS IN THE SOFTWARE. diff --git a/rust/hw/char/pl011/vendor/either/README-crates.io.md b/rust/hw/char/pl011/vendor/either/README-crates.io.md new file mode 100644 index 0000000000..d36890278b --- /dev/null +++ b/rust/hw/char/pl011/vendor/either/README-crates.io.md @@ -0,0 +1,10 @@ +The enum `Either` with variants `Left` and `Right` is a general purpose +sum type with two cases. + +Either has methods that are similar to Option and Result, and it also implements +traits like `Iterator`. + +Includes macros `try_left!()` and `try_right!()` to use for +short-circuiting logic, similar to how the `?` operator is used with `Result`. +Note that `Either` is general purpose. For describing success or error, use the +regular `Result`. 
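+
+A small sketch of the iterator support mentioned above (the concrete `Vec`/`Range` types here are chosen only for illustration):
+
+```rust
+use either::{Either, Left};
+
+// Both sides yield the same item type, so the `Either` can be consumed
+// uniformly no matter which variant is actually present.
+let numbers: Either<Vec<u32>, std::ops::Range<u32>> = Left(vec![1, 2, 3]);
+let sum: u32 = numbers.into_iter().sum();
+assert_eq!(sum, 6);
+```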
diff --git a/rust/hw/char/pl011/vendor/either/meson.build b/rust/hw/char/pl011/vendor/either/meson.build new file mode 100644 index 0000000000..2d2d3057bc --- /dev/null +++ b/rust/hw/char/pl011/vendor/either/meson.build @@ -0,0 +1,16 @@ +_either_rs = static_library( + 'either', + files('src/lib.rs'), + gnu_symbol_visibility: 'hidden', + rust_abi: 'rust', + rust_args: rust_args + [ + '--edition', '2018', + '--cfg', 'feature="use_std"', + '--cfg', 'feature="use_alloc"', + ], + dependencies: [], +) + +dep_either = declare_dependency( + link_with: _either_rs, +) diff --git a/rust/hw/char/pl011/vendor/either/src/into_either.rs b/rust/hw/char/pl011/vendor/either/src/into_either.rs new file mode 100644 index 0000000000..73746c80f1 --- /dev/null +++ b/rust/hw/char/pl011/vendor/either/src/into_either.rs @@ -0,0 +1,64 @@ +//! The trait [`IntoEither`] provides methods for converting a type `Self`, whose +//! size is constant and known at compile-time, into an [`Either`] variant. + +use super::{Either, Left, Right}; + +/// Provides methods for converting a type `Self` into either a [`Left`] or [`Right`] +/// variant of [`Either`](Either). +/// +/// The [`into_either`](IntoEither::into_either) method takes a [`bool`] to determine +/// whether to convert to [`Left`] or [`Right`]. +/// +/// The [`into_either_with`](IntoEither::into_either_with) method takes a +/// [predicate function](FnOnce) to determine whether to convert to [`Left`] or [`Right`]. +pub trait IntoEither: Sized { + /// Converts `self` into a [`Left`] variant of [`Either`](Either) + /// if `into_left` is `true`. + /// Converts `self` into a [`Right`] variant of [`Either`](Either) + /// otherwise. + /// + /// # Examples + /// + /// ``` + /// use either::{IntoEither, Left, Right}; + /// + /// let x = 0; + /// assert_eq!(x.into_either(true), Left(x)); + /// assert_eq!(x.into_either(false), Right(x)); + /// ``` + fn into_either(self, into_left: bool) -> Either { + if into_left { + Left(self) + } else { + Right(self) + } + } + + /// Converts `self` into a [`Left`] variant of [`Either`](Either) + /// if `into_left(&self)` returns `true`. + /// Converts `self` into a [`Right`] variant of [`Either`](Either) + /// otherwise. + /// + /// # Examples + /// + /// ``` + /// use either::{IntoEither, Left, Right}; + /// + /// fn is_even(x: &u8) -> bool { + /// x % 2 == 0 + /// } + /// + /// let x = 0; + /// assert_eq!(x.into_either_with(is_even), Left(x)); + /// assert_eq!(x.into_either_with(|x| !is_even(x)), Right(x)); + /// ``` + fn into_either_with(self, into_left: F) -> Either + where + F: FnOnce(&Self) -> bool, + { + let into_left = into_left(&self); + self.into_either(into_left) + } +} + +impl IntoEither for T {} diff --git a/rust/hw/char/pl011/vendor/either/src/iterator.rs b/rust/hw/char/pl011/vendor/either/src/iterator.rs new file mode 100644 index 0000000000..9c5a83f9a5 --- /dev/null +++ b/rust/hw/char/pl011/vendor/either/src/iterator.rs @@ -0,0 +1,315 @@ +use super::{for_both, Either, Left, Right}; +use core::iter; + +macro_rules! wrap_either { + ($value:expr => $( $tail:tt )*) => { + match $value { + Left(inner) => inner.map(Left) $($tail)*, + Right(inner) => inner.map(Right) $($tail)*, + } + }; +} + +/// Iterator that maps left or right iterators to corresponding `Either`-wrapped items. +/// +/// This struct is created by the [`Either::factor_into_iter`], +/// [`factor_iter`][Either::factor_iter], +/// and [`factor_iter_mut`][Either::factor_iter_mut] methods. 
+#[derive(Clone, Debug)] +pub struct IterEither { + inner: Either, +} + +impl IterEither { + pub(crate) fn new(inner: Either) -> Self { + IterEither { inner } + } +} + +impl Extend for Either +where + L: Extend, + R: Extend, +{ + fn extend(&mut self, iter: T) + where + T: IntoIterator, + { + for_both!(*self, ref mut inner => inner.extend(iter)) + } +} + +/// `Either` is an iterator if both `L` and `R` are iterators. +impl Iterator for Either +where + L: Iterator, + R: Iterator, +{ + type Item = L::Item; + + fn next(&mut self) -> Option { + for_both!(*self, ref mut inner => inner.next()) + } + + fn size_hint(&self) -> (usize, Option) { + for_both!(*self, ref inner => inner.size_hint()) + } + + fn fold(self, init: Acc, f: G) -> Acc + where + G: FnMut(Acc, Self::Item) -> Acc, + { + for_both!(self, inner => inner.fold(init, f)) + } + + fn for_each(self, f: F) + where + F: FnMut(Self::Item), + { + for_both!(self, inner => inner.for_each(f)) + } + + fn count(self) -> usize { + for_both!(self, inner => inner.count()) + } + + fn last(self) -> Option { + for_both!(self, inner => inner.last()) + } + + fn nth(&mut self, n: usize) -> Option { + for_both!(*self, ref mut inner => inner.nth(n)) + } + + fn collect(self) -> B + where + B: iter::FromIterator, + { + for_both!(self, inner => inner.collect()) + } + + fn partition(self, f: F) -> (B, B) + where + B: Default + Extend, + F: FnMut(&Self::Item) -> bool, + { + for_both!(self, inner => inner.partition(f)) + } + + fn all(&mut self, f: F) -> bool + where + F: FnMut(Self::Item) -> bool, + { + for_both!(*self, ref mut inner => inner.all(f)) + } + + fn any(&mut self, f: F) -> bool + where + F: FnMut(Self::Item) -> bool, + { + for_both!(*self, ref mut inner => inner.any(f)) + } + + fn find

(&mut self, predicate: P) -> Option + where + P: FnMut(&Self::Item) -> bool, + { + for_both!(*self, ref mut inner => inner.find(predicate)) + } + + fn find_map(&mut self, f: F) -> Option + where + F: FnMut(Self::Item) -> Option, + { + for_both!(*self, ref mut inner => inner.find_map(f)) + } + + fn position

(&mut self, predicate: P) -> Option + where + P: FnMut(Self::Item) -> bool, + { + for_both!(*self, ref mut inner => inner.position(predicate)) + } +} + +impl DoubleEndedIterator for Either +where + L: DoubleEndedIterator, + R: DoubleEndedIterator, +{ + fn next_back(&mut self) -> Option { + for_both!(*self, ref mut inner => inner.next_back()) + } + + fn nth_back(&mut self, n: usize) -> Option { + for_both!(*self, ref mut inner => inner.nth_back(n)) + } + + fn rfold(self, init: Acc, f: G) -> Acc + where + G: FnMut(Acc, Self::Item) -> Acc, + { + for_both!(self, inner => inner.rfold(init, f)) + } + + fn rfind

(&mut self, predicate: P) -> Option + where + P: FnMut(&Self::Item) -> bool, + { + for_both!(*self, ref mut inner => inner.rfind(predicate)) + } +} + +impl ExactSizeIterator for Either +where + L: ExactSizeIterator, + R: ExactSizeIterator, +{ + fn len(&self) -> usize { + for_both!(*self, ref inner => inner.len()) + } +} + +impl iter::FusedIterator for Either +where + L: iter::FusedIterator, + R: iter::FusedIterator, +{ +} + +impl Iterator for IterEither +where + L: Iterator, + R: Iterator, +{ + type Item = Either; + + fn next(&mut self) -> Option { + Some(map_either!(self.inner, ref mut inner => inner.next()?)) + } + + fn size_hint(&self) -> (usize, Option) { + for_both!(self.inner, ref inner => inner.size_hint()) + } + + fn fold(self, init: Acc, f: G) -> Acc + where + G: FnMut(Acc, Self::Item) -> Acc, + { + wrap_either!(self.inner => .fold(init, f)) + } + + fn for_each(self, f: F) + where + F: FnMut(Self::Item), + { + wrap_either!(self.inner => .for_each(f)) + } + + fn count(self) -> usize { + for_both!(self.inner, inner => inner.count()) + } + + fn last(self) -> Option { + Some(map_either!(self.inner, inner => inner.last()?)) + } + + fn nth(&mut self, n: usize) -> Option { + Some(map_either!(self.inner, ref mut inner => inner.nth(n)?)) + } + + fn collect(self) -> B + where + B: iter::FromIterator, + { + wrap_either!(self.inner => .collect()) + } + + fn partition(self, f: F) -> (B, B) + where + B: Default + Extend, + F: FnMut(&Self::Item) -> bool, + { + wrap_either!(self.inner => .partition(f)) + } + + fn all(&mut self, f: F) -> bool + where + F: FnMut(Self::Item) -> bool, + { + wrap_either!(&mut self.inner => .all(f)) + } + + fn any(&mut self, f: F) -> bool + where + F: FnMut(Self::Item) -> bool, + { + wrap_either!(&mut self.inner => .any(f)) + } + + fn find

(&mut self, predicate: P) -> Option + where + P: FnMut(&Self::Item) -> bool, + { + wrap_either!(&mut self.inner => .find(predicate)) + } + + fn find_map(&mut self, f: F) -> Option + where + F: FnMut(Self::Item) -> Option, + { + wrap_either!(&mut self.inner => .find_map(f)) + } + + fn position

(&mut self, predicate: P) -> Option + where + P: FnMut(Self::Item) -> bool, + { + wrap_either!(&mut self.inner => .position(predicate)) + } +} + +impl DoubleEndedIterator for IterEither +where + L: DoubleEndedIterator, + R: DoubleEndedIterator, +{ + fn next_back(&mut self) -> Option { + Some(map_either!(self.inner, ref mut inner => inner.next_back()?)) + } + + fn nth_back(&mut self, n: usize) -> Option { + Some(map_either!(self.inner, ref mut inner => inner.nth_back(n)?)) + } + + fn rfold(self, init: Acc, f: G) -> Acc + where + G: FnMut(Acc, Self::Item) -> Acc, + { + wrap_either!(self.inner => .rfold(init, f)) + } + + fn rfind

(&mut self, predicate: P) -> Option + where + P: FnMut(&Self::Item) -> bool, + { + wrap_either!(&mut self.inner => .rfind(predicate)) + } +} + +impl ExactSizeIterator for IterEither +where + L: ExactSizeIterator, + R: ExactSizeIterator, +{ + fn len(&self) -> usize { + for_both!(self.inner, ref inner => inner.len()) + } +} + +impl iter::FusedIterator for IterEither +where + L: iter::FusedIterator, + R: iter::FusedIterator, +{ +} diff --git a/rust/hw/char/pl011/vendor/either/src/lib.rs b/rust/hw/char/pl011/vendor/either/src/lib.rs new file mode 100644 index 0000000000..e0792f2631 --- /dev/null +++ b/rust/hw/char/pl011/vendor/either/src/lib.rs @@ -0,0 +1,1519 @@ +//! The enum [`Either`] with variants `Left` and `Right` is a general purpose +//! sum type with two cases. +//! +//! [`Either`]: enum.Either.html +//! +//! **Crate features:** +//! +//! * `"use_std"` +//! Enabled by default. Disable to make the library `#![no_std]`. +//! +//! * `"serde"` +//! Disabled by default. Enable to `#[derive(Serialize, Deserialize)]` for `Either` +//! + +#![doc(html_root_url = "https://docs.rs/either/1/")] +#![no_std] + +#[cfg(any(test, feature = "use_std"))] +extern crate std; + +#[cfg(feature = "serde")] +pub mod serde_untagged; + +#[cfg(feature = "serde")] +pub mod serde_untagged_optional; + +use core::convert::{AsMut, AsRef}; +use core::fmt; +use core::future::Future; +use core::ops::Deref; +use core::ops::DerefMut; +use core::pin::Pin; + +#[cfg(any(test, feature = "use_std"))] +use std::error::Error; +#[cfg(any(test, feature = "use_std"))] +use std::io::{self, BufRead, Read, Seek, SeekFrom, Write}; + +pub use crate::Either::{Left, Right}; + +/// The enum `Either` with variants `Left` and `Right` is a general purpose +/// sum type with two cases. +/// +/// The `Either` type is symmetric and treats its variants the same way, without +/// preference. +/// (For representing success or error, use the regular `Result` enum instead.) +#[cfg_attr(feature = "serde", derive(serde::Serialize, serde::Deserialize))] +#[derive(Copy, PartialEq, Eq, PartialOrd, Ord, Hash, Debug)] +pub enum Either { + /// A value of type `L`. + Left(L), + /// A value of type `R`. + Right(R), +} + +/// Evaluate the provided expression for both [`Either::Left`] and [`Either::Right`]. +/// +/// This macro is useful in cases where both sides of [`Either`] can be interacted with +/// in the same way even though the don't share the same type. +/// +/// Syntax: `either::for_both!(` *expression* `,` *pattern* `=>` *expression* `)` +/// +/// # Example +/// +/// ``` +/// use either::Either; +/// +/// fn length(owned_or_borrowed: Either) -> usize { +/// either::for_both!(owned_or_borrowed, s => s.len()) +/// } +/// +/// fn main() { +/// let borrowed = Either::Right("Hello world!"); +/// let owned = Either::Left("Hello world!".to_owned()); +/// +/// assert_eq!(length(borrowed), 12); +/// assert_eq!(length(owned), 12); +/// } +/// ``` +#[macro_export] +macro_rules! for_both { + ($value:expr, $pattern:pat => $result:expr) => { + match $value { + $crate::Either::Left($pattern) => $result, + $crate::Either::Right($pattern) => $result, + } + }; +} + +/// Macro for unwrapping the left side of an [`Either`], which fails early +/// with the opposite side. Can only be used in functions that return +/// `Either` because of the early return of `Right` that it provides. +/// +/// See also [`try_right!`] for its dual, which applies the same just to the +/// right side. 
+/// +/// # Example +/// +/// ``` +/// use either::{Either, Left, Right}; +/// +/// fn twice(wrapper: Either) -> Either { +/// let value = either::try_left!(wrapper); +/// Left(value * 2) +/// } +/// +/// fn main() { +/// assert_eq!(twice(Left(2)), Left(4)); +/// assert_eq!(twice(Right("ups")), Right("ups")); +/// } +/// ``` +#[macro_export] +macro_rules! try_left { + ($expr:expr) => { + match $expr { + $crate::Left(val) => val, + $crate::Right(err) => return $crate::Right(::core::convert::From::from(err)), + } + }; +} + +/// Dual to [`try_left!`], see its documentation for more information. +#[macro_export] +macro_rules! try_right { + ($expr:expr) => { + match $expr { + $crate::Left(err) => return $crate::Left(::core::convert::From::from(err)), + $crate::Right(val) => val, + } + }; +} + +macro_rules! map_either { + ($value:expr, $pattern:pat => $result:expr) => { + match $value { + Left($pattern) => Left($result), + Right($pattern) => Right($result), + } + }; +} + +mod iterator; +pub use self::iterator::IterEither; + +mod into_either; +pub use self::into_either::IntoEither; + +impl Clone for Either { + fn clone(&self) -> Self { + match self { + Left(inner) => Left(inner.clone()), + Right(inner) => Right(inner.clone()), + } + } + + fn clone_from(&mut self, source: &Self) { + match (self, source) { + (Left(dest), Left(source)) => dest.clone_from(source), + (Right(dest), Right(source)) => dest.clone_from(source), + (dest, source) => *dest = source.clone(), + } + } +} + +impl Either { + /// Return true if the value is the `Left` variant. + /// + /// ``` + /// use either::*; + /// + /// let values = [Left(1), Right("the right value")]; + /// assert_eq!(values[0].is_left(), true); + /// assert_eq!(values[1].is_left(), false); + /// ``` + pub fn is_left(&self) -> bool { + match *self { + Left(_) => true, + Right(_) => false, + } + } + + /// Return true if the value is the `Right` variant. + /// + /// ``` + /// use either::*; + /// + /// let values = [Left(1), Right("the right value")]; + /// assert_eq!(values[0].is_right(), false); + /// assert_eq!(values[1].is_right(), true); + /// ``` + pub fn is_right(&self) -> bool { + !self.is_left() + } + + /// Convert the left side of `Either` to an `Option`. + /// + /// ``` + /// use either::*; + /// + /// let left: Either<_, ()> = Left("some value"); + /// assert_eq!(left.left(), Some("some value")); + /// + /// let right: Either<(), _> = Right(321); + /// assert_eq!(right.left(), None); + /// ``` + pub fn left(self) -> Option { + match self { + Left(l) => Some(l), + Right(_) => None, + } + } + + /// Convert the right side of `Either` to an `Option`. + /// + /// ``` + /// use either::*; + /// + /// let left: Either<_, ()> = Left("some value"); + /// assert_eq!(left.right(), None); + /// + /// let right: Either<(), _> = Right(321); + /// assert_eq!(right.right(), Some(321)); + /// ``` + pub fn right(self) -> Option { + match self { + Left(_) => None, + Right(r) => Some(r), + } + } + + /// Convert `&Either` to `Either<&L, &R>`. + /// + /// ``` + /// use either::*; + /// + /// let left: Either<_, ()> = Left("some value"); + /// assert_eq!(left.as_ref(), Left(&"some value")); + /// + /// let right: Either<(), _> = Right("some value"); + /// assert_eq!(right.as_ref(), Right(&"some value")); + /// ``` + pub fn as_ref(&self) -> Either<&L, &R> { + match *self { + Left(ref inner) => Left(inner), + Right(ref inner) => Right(inner), + } + } + + /// Convert `&mut Either` to `Either<&mut L, &mut R>`. 
+ /// + /// ``` + /// use either::*; + /// + /// fn mutate_left(value: &mut Either) { + /// if let Some(l) = value.as_mut().left() { + /// *l = 999; + /// } + /// } + /// + /// let mut left = Left(123); + /// let mut right = Right(123); + /// mutate_left(&mut left); + /// mutate_left(&mut right); + /// assert_eq!(left, Left(999)); + /// assert_eq!(right, Right(123)); + /// ``` + pub fn as_mut(&mut self) -> Either<&mut L, &mut R> { + match *self { + Left(ref mut inner) => Left(inner), + Right(ref mut inner) => Right(inner), + } + } + + /// Convert `Pin<&Either>` to `Either, Pin<&R>>`, + /// pinned projections of the inner variants. + pub fn as_pin_ref(self: Pin<&Self>) -> Either, Pin<&R>> { + // SAFETY: We can use `new_unchecked` because the `inner` parts are + // guaranteed to be pinned, as they come from `self` which is pinned. + unsafe { + match *Pin::get_ref(self) { + Left(ref inner) => Left(Pin::new_unchecked(inner)), + Right(ref inner) => Right(Pin::new_unchecked(inner)), + } + } + } + + /// Convert `Pin<&mut Either>` to `Either, Pin<&mut R>>`, + /// pinned projections of the inner variants. + pub fn as_pin_mut(self: Pin<&mut Self>) -> Either, Pin<&mut R>> { + // SAFETY: `get_unchecked_mut` is fine because we don't move anything. + // We can use `new_unchecked` because the `inner` parts are guaranteed + // to be pinned, as they come from `self` which is pinned, and we never + // offer an unpinned `&mut L` or `&mut R` through `Pin<&mut Self>`. We + // also don't have an implementation of `Drop`, nor manual `Unpin`. + unsafe { + match *Pin::get_unchecked_mut(self) { + Left(ref mut inner) => Left(Pin::new_unchecked(inner)), + Right(ref mut inner) => Right(Pin::new_unchecked(inner)), + } + } + } + + /// Convert `Either` to `Either`. + /// + /// ``` + /// use either::*; + /// + /// let left: Either<_, ()> = Left(123); + /// assert_eq!(left.flip(), Right(123)); + /// + /// let right: Either<(), _> = Right("some value"); + /// assert_eq!(right.flip(), Left("some value")); + /// ``` + pub fn flip(self) -> Either { + match self { + Left(l) => Right(l), + Right(r) => Left(r), + } + } + + /// Apply the function `f` on the value in the `Left` variant if it is present rewrapping the + /// result in `Left`. + /// + /// ``` + /// use either::*; + /// + /// let left: Either<_, u32> = Left(123); + /// assert_eq!(left.map_left(|x| x * 2), Left(246)); + /// + /// let right: Either = Right(123); + /// assert_eq!(right.map_left(|x| x * 2), Right(123)); + /// ``` + pub fn map_left(self, f: F) -> Either + where + F: FnOnce(L) -> M, + { + match self { + Left(l) => Left(f(l)), + Right(r) => Right(r), + } + } + + /// Apply the function `f` on the value in the `Right` variant if it is present rewrapping the + /// result in `Right`. + /// + /// ``` + /// use either::*; + /// + /// let left: Either<_, u32> = Left(123); + /// assert_eq!(left.map_right(|x| x * 2), Left(123)); + /// + /// let right: Either = Right(123); + /// assert_eq!(right.map_right(|x| x * 2), Right(246)); + /// ``` + pub fn map_right(self, f: F) -> Either + where + F: FnOnce(R) -> S, + { + match self { + Left(l) => Left(l), + Right(r) => Right(f(r)), + } + } + + /// Apply the functions `f` and `g` to the `Left` and `Right` variants + /// respectively. This is equivalent to + /// [bimap](https://hackage.haskell.org/package/bifunctors-5/docs/Data-Bifunctor.html) + /// in functional programming. 
+ /// + /// ``` + /// use either::*; + /// + /// let f = |s: String| s.len(); + /// let g = |u: u8| u.to_string(); + /// + /// let left: Either = Left("loopy".into()); + /// assert_eq!(left.map_either(f, g), Left(5)); + /// + /// let right: Either = Right(42); + /// assert_eq!(right.map_either(f, g), Right("42".into())); + /// ``` + pub fn map_either(self, f: F, g: G) -> Either + where + F: FnOnce(L) -> M, + G: FnOnce(R) -> S, + { + match self { + Left(l) => Left(f(l)), + Right(r) => Right(g(r)), + } + } + + /// Similar to [`map_either`][Self::map_either], with an added context `ctx` accessible to + /// both functions. + /// + /// ``` + /// use either::*; + /// + /// let mut sum = 0; + /// + /// // Both closures want to update the same value, so pass it as context. + /// let mut f = |sum: &mut usize, s: String| { *sum += s.len(); s.to_uppercase() }; + /// let mut g = |sum: &mut usize, u: usize| { *sum += u; u.to_string() }; + /// + /// let left: Either = Left("loopy".into()); + /// assert_eq!(left.map_either_with(&mut sum, &mut f, &mut g), Left("LOOPY".into())); + /// + /// let right: Either = Right(42); + /// assert_eq!(right.map_either_with(&mut sum, &mut f, &mut g), Right("42".into())); + /// + /// assert_eq!(sum, 47); + /// ``` + pub fn map_either_with(self, ctx: Ctx, f: F, g: G) -> Either + where + F: FnOnce(Ctx, L) -> M, + G: FnOnce(Ctx, R) -> S, + { + match self { + Left(l) => Left(f(ctx, l)), + Right(r) => Right(g(ctx, r)), + } + } + + /// Apply one of two functions depending on contents, unifying their result. If the value is + /// `Left(L)` then the first function `f` is applied; if it is `Right(R)` then the second + /// function `g` is applied. + /// + /// ``` + /// use either::*; + /// + /// fn square(n: u32) -> i32 { (n * n) as i32 } + /// fn negate(n: i32) -> i32 { -n } + /// + /// let left: Either = Left(4); + /// assert_eq!(left.either(square, negate), 16); + /// + /// let right: Either = Right(-4); + /// assert_eq!(right.either(square, negate), 4); + /// ``` + pub fn either(self, f: F, g: G) -> T + where + F: FnOnce(L) -> T, + G: FnOnce(R) -> T, + { + match self { + Left(l) => f(l), + Right(r) => g(r), + } + } + + /// Like [`either`][Self::either], but provide some context to whichever of the + /// functions ends up being called. + /// + /// ``` + /// // In this example, the context is a mutable reference + /// use either::*; + /// + /// let mut result = Vec::new(); + /// + /// let values = vec![Left(2), Right(2.7)]; + /// + /// for value in values { + /// value.either_with(&mut result, + /// |ctx, integer| ctx.push(integer), + /// |ctx, real| ctx.push(f64::round(real) as i32)); + /// } + /// + /// assert_eq!(result, vec![2, 3]); + /// ``` + pub fn either_with(self, ctx: Ctx, f: F, g: G) -> T + where + F: FnOnce(Ctx, L) -> T, + G: FnOnce(Ctx, R) -> T, + { + match self { + Left(l) => f(ctx, l), + Right(r) => g(ctx, r), + } + } + + /// Apply the function `f` on the value in the `Left` variant if it is present. + /// + /// ``` + /// use either::*; + /// + /// let left: Either<_, u32> = Left(123); + /// assert_eq!(left.left_and_then::<_,()>(|x| Right(x * 2)), Right(246)); + /// + /// let right: Either = Right(123); + /// assert_eq!(right.left_and_then(|x| Right::<(), _>(x * 2)), Right(123)); + /// ``` + pub fn left_and_then(self, f: F) -> Either + where + F: FnOnce(L) -> Either, + { + match self { + Left(l) => f(l), + Right(r) => Right(r), + } + } + + /// Apply the function `f` on the value in the `Right` variant if it is present. 
+ /// + /// ``` + /// use either::*; + /// + /// let left: Either<_, u32> = Left(123); + /// assert_eq!(left.right_and_then(|x| Right(x * 2)), Left(123)); + /// + /// let right: Either = Right(123); + /// assert_eq!(right.right_and_then(|x| Right(x * 2)), Right(246)); + /// ``` + pub fn right_and_then(self, f: F) -> Either + where + F: FnOnce(R) -> Either, + { + match self { + Left(l) => Left(l), + Right(r) => f(r), + } + } + + /// Convert the inner value to an iterator. + /// + /// This requires the `Left` and `Right` iterators to have the same item type. + /// See [`factor_into_iter`][Either::factor_into_iter] to iterate different types. + /// + /// ``` + /// use either::*; + /// + /// let left: Either<_, Vec> = Left(vec![1, 2, 3, 4, 5]); + /// let mut right: Either, _> = Right(vec![]); + /// right.extend(left.into_iter()); + /// assert_eq!(right, Right(vec![1, 2, 3, 4, 5])); + /// ``` + #[allow(clippy::should_implement_trait)] + pub fn into_iter(self) -> Either + where + L: IntoIterator, + R: IntoIterator, + { + map_either!(self, inner => inner.into_iter()) + } + + /// Borrow the inner value as an iterator. + /// + /// This requires the `Left` and `Right` iterators to have the same item type. + /// See [`factor_iter`][Either::factor_iter] to iterate different types. + /// + /// ``` + /// use either::*; + /// + /// let left: Either<_, &[u32]> = Left(vec![2, 3]); + /// let mut right: Either, _> = Right(&[4, 5][..]); + /// let mut all = vec![1]; + /// all.extend(left.iter()); + /// all.extend(right.iter()); + /// assert_eq!(all, vec![1, 2, 3, 4, 5]); + /// ``` + pub fn iter(&self) -> Either<<&L as IntoIterator>::IntoIter, <&R as IntoIterator>::IntoIter> + where + for<'a> &'a L: IntoIterator, + for<'a> &'a R: IntoIterator::Item>, + { + map_either!(self, inner => inner.into_iter()) + } + + /// Mutably borrow the inner value as an iterator. + /// + /// This requires the `Left` and `Right` iterators to have the same item type. + /// See [`factor_iter_mut`][Either::factor_iter_mut] to iterate different types. + /// + /// ``` + /// use either::*; + /// + /// let mut left: Either<_, &mut [u32]> = Left(vec![2, 3]); + /// for l in left.iter_mut() { + /// *l *= *l + /// } + /// assert_eq!(left, Left(vec![4, 9])); + /// + /// let mut inner = [4, 5]; + /// let mut right: Either, _> = Right(&mut inner[..]); + /// for r in right.iter_mut() { + /// *r *= *r + /// } + /// assert_eq!(inner, [16, 25]); + /// ``` + pub fn iter_mut( + &mut self, + ) -> Either<<&mut L as IntoIterator>::IntoIter, <&mut R as IntoIterator>::IntoIter> + where + for<'a> &'a mut L: IntoIterator, + for<'a> &'a mut R: IntoIterator::Item>, + { + map_either!(self, inner => inner.into_iter()) + } + + /// Converts an `Either` of `Iterator`s to be an `Iterator` of `Either`s + /// + /// Unlike [`into_iter`][Either::into_iter], this does not require the + /// `Left` and `Right` iterators to have the same item type. 
+ /// + /// ``` + /// use either::*; + /// let left: Either<_, Vec> = Left(&["hello"]); + /// assert_eq!(left.factor_into_iter().next(), Some(Left(&"hello"))); + + /// let right: Either<&[&str], _> = Right(vec![0, 1]); + /// assert_eq!(right.factor_into_iter().collect::>(), vec![Right(0), Right(1)]); + /// + /// ``` + // TODO(MSRV): doc(alias) was stabilized in Rust 1.48 + // #[doc(alias = "transpose")] + pub fn factor_into_iter(self) -> IterEither + where + L: IntoIterator, + R: IntoIterator, + { + IterEither::new(map_either!(self, inner => inner.into_iter())) + } + + /// Borrows an `Either` of `Iterator`s to be an `Iterator` of `Either`s + /// + /// Unlike [`iter`][Either::iter], this does not require the + /// `Left` and `Right` iterators to have the same item type. + /// + /// ``` + /// use either::*; + /// let left: Either<_, Vec> = Left(["hello"]); + /// assert_eq!(left.factor_iter().next(), Some(Left(&"hello"))); + + /// let right: Either<[&str; 2], _> = Right(vec![0, 1]); + /// assert_eq!(right.factor_iter().collect::>(), vec![Right(&0), Right(&1)]); + /// + /// ``` + pub fn factor_iter( + &self, + ) -> IterEither<<&L as IntoIterator>::IntoIter, <&R as IntoIterator>::IntoIter> + where + for<'a> &'a L: IntoIterator, + for<'a> &'a R: IntoIterator, + { + IterEither::new(map_either!(self, inner => inner.into_iter())) + } + + /// Mutably borrows an `Either` of `Iterator`s to be an `Iterator` of `Either`s + /// + /// Unlike [`iter_mut`][Either::iter_mut], this does not require the + /// `Left` and `Right` iterators to have the same item type. + /// + /// ``` + /// use either::*; + /// let mut left: Either<_, Vec> = Left(["hello"]); + /// left.factor_iter_mut().for_each(|x| *x.unwrap_left() = "goodbye"); + /// assert_eq!(left, Left(["goodbye"])); + + /// let mut right: Either<[&str; 2], _> = Right(vec![0, 1, 2]); + /// right.factor_iter_mut().for_each(|x| if let Right(r) = x { *r = -*r; }); + /// assert_eq!(right, Right(vec![0, -1, -2])); + /// + /// ``` + pub fn factor_iter_mut( + &mut self, + ) -> IterEither<<&mut L as IntoIterator>::IntoIter, <&mut R as IntoIterator>::IntoIter> + where + for<'a> &'a mut L: IntoIterator, + for<'a> &'a mut R: IntoIterator, + { + IterEither::new(map_either!(self, inner => inner.into_iter())) + } + + /// Return left value or given value + /// + /// Arguments passed to `left_or` are eagerly evaluated; if you are passing + /// the result of a function call, it is recommended to use + /// [`left_or_else`][Self::left_or_else], which is lazily evaluated. 
+ /// + /// # Examples + /// + /// ``` + /// # use either::*; + /// let left: Either<&str, &str> = Left("left"); + /// assert_eq!(left.left_or("foo"), "left"); + /// + /// let right: Either<&str, &str> = Right("right"); + /// assert_eq!(right.left_or("left"), "left"); + /// ``` + pub fn left_or(self, other: L) -> L { + match self { + Either::Left(l) => l, + Either::Right(_) => other, + } + } + + /// Return left or a default + /// + /// # Examples + /// + /// ``` + /// # use either::*; + /// let left: Either = Left("left".to_string()); + /// assert_eq!(left.left_or_default(), "left"); + /// + /// let right: Either = Right(42); + /// assert_eq!(right.left_or_default(), String::default()); + /// ``` + pub fn left_or_default(self) -> L + where + L: Default, + { + match self { + Either::Left(l) => l, + Either::Right(_) => L::default(), + } + } + + /// Returns left value or computes it from a closure + /// + /// # Examples + /// + /// ``` + /// # use either::*; + /// let left: Either = Left("3".to_string()); + /// assert_eq!(left.left_or_else(|_| unreachable!()), "3"); + /// + /// let right: Either = Right(3); + /// assert_eq!(right.left_or_else(|x| x.to_string()), "3"); + /// ``` + pub fn left_or_else(self, f: F) -> L + where + F: FnOnce(R) -> L, + { + match self { + Either::Left(l) => l, + Either::Right(r) => f(r), + } + } + + /// Return right value or given value + /// + /// Arguments passed to `right_or` are eagerly evaluated; if you are passing + /// the result of a function call, it is recommended to use + /// [`right_or_else`][Self::right_or_else], which is lazily evaluated. + /// + /// # Examples + /// + /// ``` + /// # use either::*; + /// let right: Either<&str, &str> = Right("right"); + /// assert_eq!(right.right_or("foo"), "right"); + /// + /// let left: Either<&str, &str> = Left("left"); + /// assert_eq!(left.right_or("right"), "right"); + /// ``` + pub fn right_or(self, other: R) -> R { + match self { + Either::Left(_) => other, + Either::Right(r) => r, + } + } + + /// Return right or a default + /// + /// # Examples + /// + /// ``` + /// # use either::*; + /// let left: Either = Left("left".to_string()); + /// assert_eq!(left.right_or_default(), u32::default()); + /// + /// let right: Either = Right(42); + /// assert_eq!(right.right_or_default(), 42); + /// ``` + pub fn right_or_default(self) -> R + where + R: Default, + { + match self { + Either::Left(_) => R::default(), + Either::Right(r) => r, + } + } + + /// Returns right value or computes it from a closure + /// + /// # Examples + /// + /// ``` + /// # use either::*; + /// let left: Either = Left("3".to_string()); + /// assert_eq!(left.right_or_else(|x| x.parse().unwrap()), 3); + /// + /// let right: Either = Right(3); + /// assert_eq!(right.right_or_else(|_| unreachable!()), 3); + /// ``` + pub fn right_or_else(self, f: F) -> R + where + F: FnOnce(L) -> R, + { + match self { + Either::Left(l) => f(l), + Either::Right(r) => r, + } + } + + /// Returns the left value + /// + /// # Examples + /// + /// ``` + /// # use either::*; + /// let left: Either<_, ()> = Left(3); + /// assert_eq!(left.unwrap_left(), 3); + /// ``` + /// + /// # Panics + /// + /// When `Either` is a `Right` value + /// + /// ```should_panic + /// # use either::*; + /// let right: Either<(), _> = Right(3); + /// right.unwrap_left(); + /// ``` + pub fn unwrap_left(self) -> L + where + R: core::fmt::Debug, + { + match self { + Either::Left(l) => l, + Either::Right(r) => { + panic!("called `Either::unwrap_left()` on a `Right` value: {:?}", r) + } + } + } + + /// 
Returns the right value + /// + /// # Examples + /// + /// ``` + /// # use either::*; + /// let right: Either<(), _> = Right(3); + /// assert_eq!(right.unwrap_right(), 3); + /// ``` + /// + /// # Panics + /// + /// When `Either` is a `Left` value + /// + /// ```should_panic + /// # use either::*; + /// let left: Either<_, ()> = Left(3); + /// left.unwrap_right(); + /// ``` + pub fn unwrap_right(self) -> R + where + L: core::fmt::Debug, + { + match self { + Either::Right(r) => r, + Either::Left(l) => panic!("called `Either::unwrap_right()` on a `Left` value: {:?}", l), + } + } + + /// Returns the left value + /// + /// # Examples + /// + /// ``` + /// # use either::*; + /// let left: Either<_, ()> = Left(3); + /// assert_eq!(left.expect_left("value was Right"), 3); + /// ``` + /// + /// # Panics + /// + /// When `Either` is a `Right` value + /// + /// ```should_panic + /// # use either::*; + /// let right: Either<(), _> = Right(3); + /// right.expect_left("value was Right"); + /// ``` + pub fn expect_left(self, msg: &str) -> L + where + R: core::fmt::Debug, + { + match self { + Either::Left(l) => l, + Either::Right(r) => panic!("{}: {:?}", msg, r), + } + } + + /// Returns the right value + /// + /// # Examples + /// + /// ``` + /// # use either::*; + /// let right: Either<(), _> = Right(3); + /// assert_eq!(right.expect_right("value was Left"), 3); + /// ``` + /// + /// # Panics + /// + /// When `Either` is a `Left` value + /// + /// ```should_panic + /// # use either::*; + /// let left: Either<_, ()> = Left(3); + /// left.expect_right("value was Right"); + /// ``` + pub fn expect_right(self, msg: &str) -> R + where + L: core::fmt::Debug, + { + match self { + Either::Right(r) => r, + Either::Left(l) => panic!("{}: {:?}", msg, l), + } + } + + /// Convert the contained value into `T` + /// + /// # Examples + /// + /// ``` + /// # use either::*; + /// // Both u16 and u32 can be converted to u64. + /// let left: Either = Left(3u16); + /// assert_eq!(left.either_into::(), 3u64); + /// let right: Either = Right(7u32); + /// assert_eq!(right.either_into::(), 7u64); + /// ``` + pub fn either_into(self) -> T + where + L: Into, + R: Into, + { + match self { + Either::Left(l) => l.into(), + Either::Right(r) => r.into(), + } + } +} + +impl Either, Option> { + /// Factors out `None` from an `Either` of [`Option`]. + /// + /// ``` + /// use either::*; + /// let left: Either<_, Option> = Left(Some(vec![0])); + /// assert_eq!(left.factor_none(), Some(Left(vec![0]))); + /// + /// let right: Either>, _> = Right(Some(String::new())); + /// assert_eq!(right.factor_none(), Some(Right(String::new()))); + /// ``` + // TODO(MSRV): doc(alias) was stabilized in Rust 1.48 + // #[doc(alias = "transpose")] + pub fn factor_none(self) -> Option> { + match self { + Left(l) => l.map(Either::Left), + Right(r) => r.map(Either::Right), + } + } +} + +impl Either, Result> { + /// Factors out a homogenous type from an `Either` of [`Result`]. + /// + /// Here, the homogeneous type is the `Err` type of the [`Result`]. 
+ /// + /// ``` + /// use either::*; + /// let left: Either<_, Result> = Left(Ok(vec![0])); + /// assert_eq!(left.factor_err(), Ok(Left(vec![0]))); + /// + /// let right: Either, u32>, _> = Right(Ok(String::new())); + /// assert_eq!(right.factor_err(), Ok(Right(String::new()))); + /// ``` + // TODO(MSRV): doc(alias) was stabilized in Rust 1.48 + // #[doc(alias = "transpose")] + pub fn factor_err(self) -> Result, E> { + match self { + Left(l) => l.map(Either::Left), + Right(r) => r.map(Either::Right), + } + } +} + +impl Either, Result> { + /// Factors out a homogenous type from an `Either` of [`Result`]. + /// + /// Here, the homogeneous type is the `Ok` type of the [`Result`]. + /// + /// ``` + /// use either::*; + /// let left: Either<_, Result> = Left(Err(vec![0])); + /// assert_eq!(left.factor_ok(), Err(Left(vec![0]))); + /// + /// let right: Either>, _> = Right(Err(String::new())); + /// assert_eq!(right.factor_ok(), Err(Right(String::new()))); + /// ``` + // TODO(MSRV): doc(alias) was stabilized in Rust 1.48 + // #[doc(alias = "transpose")] + pub fn factor_ok(self) -> Result> { + match self { + Left(l) => l.map_err(Either::Left), + Right(r) => r.map_err(Either::Right), + } + } +} + +impl Either<(T, L), (T, R)> { + /// Factor out a homogeneous type from an either of pairs. + /// + /// Here, the homogeneous type is the first element of the pairs. + /// + /// ``` + /// use either::*; + /// let left: Either<_, (u32, String)> = Left((123, vec![0])); + /// assert_eq!(left.factor_first().0, 123); + /// + /// let right: Either<(u32, Vec), _> = Right((123, String::new())); + /// assert_eq!(right.factor_first().0, 123); + /// ``` + pub fn factor_first(self) -> (T, Either) { + match self { + Left((t, l)) => (t, Left(l)), + Right((t, r)) => (t, Right(r)), + } + } +} + +impl Either<(L, T), (R, T)> { + /// Factor out a homogeneous type from an either of pairs. + /// + /// Here, the homogeneous type is the second element of the pairs. + /// + /// ``` + /// use either::*; + /// let left: Either<_, (String, u32)> = Left((vec![0], 123)); + /// assert_eq!(left.factor_second().1, 123); + /// + /// let right: Either<(Vec, u32), _> = Right((String::new(), 123)); + /// assert_eq!(right.factor_second().1, 123); + /// ``` + pub fn factor_second(self) -> (Either, T) { + match self { + Left((l, t)) => (Left(l), t), + Right((r, t)) => (Right(r), t), + } + } +} + +impl Either { + /// Extract the value of an either over two equivalent types. + /// + /// ``` + /// use either::*; + /// + /// let left: Either<_, u32> = Left(123); + /// assert_eq!(left.into_inner(), 123); + /// + /// let right: Either = Right(123); + /// assert_eq!(right.into_inner(), 123); + /// ``` + pub fn into_inner(self) -> T { + for_both!(self, inner => inner) + } + + /// Map `f` over the contained value and return the result in the + /// corresponding variant. + /// + /// ``` + /// use either::*; + /// + /// let value: Either<_, i32> = Right(42); + /// + /// let other = value.map(|x| x * 2); + /// assert_eq!(other, Right(84)); + /// ``` + pub fn map(self, f: F) -> Either + where + F: FnOnce(T) -> M, + { + match self { + Left(l) => Left(f(l)), + Right(r) => Right(f(r)), + } + } +} + +/// Convert from `Result` to `Either` with `Ok => Right` and `Err => Left`. +impl From> for Either { + fn from(r: Result) -> Self { + match r { + Err(e) => Left(e), + Ok(o) => Right(o), + } + } +} + +/// Convert from `Either` to `Result` with `Right => Ok` and `Left => Err`. 
+#[allow(clippy::from_over_into)] // From requires RFC 2451, Rust 1.41
+impl<L, R> Into<Result<R, L>> for Either<L, R> {
+    fn into(self) -> Result<R, L> {
+        match self {
+            Left(l) => Err(l),
+            Right(r) => Ok(r),
+        }
+    }
+}
+
+/// `Either` is a future if both `L` and `R` are futures.
+impl<L, R> Future for Either<L, R>
+where
+    L: Future,
+    R: Future<Output = L::Output>,
+{
+    type Output = L::Output;
+
+    fn poll(
+        self: Pin<&mut Self>,
+        cx: &mut core::task::Context<'_>,
+    ) -> core::task::Poll<Self::Output> {
+        for_both!(self.as_pin_mut(), inner => inner.poll(cx))
+    }
+}
+
+#[cfg(any(test, feature = "use_std"))]
+/// `Either` implements `Read` if both `L` and `R` do.
+///
+/// Requires crate feature `"use_std"`
+impl<L, R> Read for Either<L, R>
+where
+    L: Read,
+    R: Read,
+{
+    fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
+        for_both!(*self, ref mut inner => inner.read(buf))
+    }
+
+    fn read_exact(&mut self, buf: &mut [u8]) -> io::Result<()> {
+        for_both!(*self, ref mut inner => inner.read_exact(buf))
+    }
+
+    fn read_to_end(&mut self, buf: &mut std::vec::Vec<u8>) -> io::Result<usize> {
+        for_both!(*self, ref mut inner => inner.read_to_end(buf))
+    }
+
+    fn read_to_string(&mut self, buf: &mut std::string::String) -> io::Result<usize> {
+        for_both!(*self, ref mut inner => inner.read_to_string(buf))
+    }
+}
+
+#[cfg(any(test, feature = "use_std"))]
+/// `Either` implements `Seek` if both `L` and `R` do.
+///
+/// Requires crate feature `"use_std"`
+impl<L, R> Seek for Either<L, R>
+where
+    L: Seek,
+    R: Seek,
+{
+    fn seek(&mut self, pos: SeekFrom) -> io::Result<u64> {
+        for_both!(*self, ref mut inner => inner.seek(pos))
+    }
+}
+
+#[cfg(any(test, feature = "use_std"))]
+/// Requires crate feature `"use_std"`
+impl<L, R> BufRead for Either<L, R>
+where
+    L: BufRead,
+    R: BufRead,
+{
+    fn fill_buf(&mut self) -> io::Result<&[u8]> {
+        for_both!(*self, ref mut inner => inner.fill_buf())
+    }
+
+    fn consume(&mut self, amt: usize) {
+        for_both!(*self, ref mut inner => inner.consume(amt))
+    }
+
+    fn read_until(&mut self, byte: u8, buf: &mut std::vec::Vec<u8>) -> io::Result<usize> {
+        for_both!(*self, ref mut inner => inner.read_until(byte, buf))
+    }
+
+    fn read_line(&mut self, buf: &mut std::string::String) -> io::Result<usize> {
+        for_both!(*self, ref mut inner => inner.read_line(buf))
+    }
+}
+
+#[cfg(any(test, feature = "use_std"))]
+/// `Either` implements `Write` if both `L` and `R` do.
+///
+/// Requires crate feature `"use_std"`
+impl<L, R> Write for Either<L, R>
+where
+    L: Write,
+    R: Write,
+{
+    fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
+        for_both!(*self, ref mut inner => inner.write(buf))
+    }
+
+    fn write_all(&mut self, buf: &[u8]) -> io::Result<()> {
+        for_both!(*self, ref mut inner => inner.write_all(buf))
+    }
+
+    fn write_fmt(&mut self, fmt: fmt::Arguments<'_>) -> io::Result<()> {
+        for_both!(*self, ref mut inner => inner.write_fmt(fmt))
+    }
+
+    fn flush(&mut self) -> io::Result<()> {
+        for_both!(*self, ref mut inner => inner.flush())
+    }
+}
+
+impl<Target, L, R> AsRef<Target> for Either<L, R>
+where
+    L: AsRef<Target>,
+    R: AsRef<Target>,
+{
+    fn as_ref(&self) -> &Target {
+        for_both!(*self, ref inner => inner.as_ref())
+    }
+}
+
+macro_rules! impl_specific_ref_and_mut {
+    ($t:ty, $($attr:meta),* ) => {
+        $(#[$attr])*
+        impl<L, R> AsRef<$t> for Either<L, R>
+            where L: AsRef<$t>, R: AsRef<$t>
+        {
+            fn as_ref(&self) -> &$t {
+                for_both!(*self, ref inner => inner.as_ref())
+            }
+        }
+
+        $(#[$attr])*
+        impl<L, R> AsMut<$t> for Either<L, R>
+            where L: AsMut<$t>, R: AsMut<$t>
+        {
+            fn as_mut(&mut self) -> &mut $t {
+                for_both!(*self, ref mut inner => inner.as_mut())
+            }
+        }
+    };
+}
+
+impl_specific_ref_and_mut!(str,);
+impl_specific_ref_and_mut!(
+    ::std::path::Path,
+    cfg(feature = "use_std"),
+    doc = "Requires crate feature `use_std`."
+);
+impl_specific_ref_and_mut!(
+    ::std::ffi::OsStr,
+    cfg(feature = "use_std"),
+    doc = "Requires crate feature `use_std`."
+);
+impl_specific_ref_and_mut!(
+    ::std::ffi::CStr,
+    cfg(feature = "use_std"),
+    doc = "Requires crate feature `use_std`."
+);
+
+impl<Target, L, R> AsRef<[Target]> for Either<L, R>
+where
+    L: AsRef<[Target]>,
+    R: AsRef<[Target]>,
+{
+    fn as_ref(&self) -> &[Target] {
+        for_both!(*self, ref inner => inner.as_ref())
+    }
+}
+
+impl<Target, L, R> AsMut<Target> for Either<L, R>
+where
+    L: AsMut<Target>,
+    R: AsMut<Target>,
+{
+    fn as_mut(&mut self) -> &mut Target {
+        for_both!(*self, ref mut inner => inner.as_mut())
+    }
+}
+
+impl<Target, L, R> AsMut<[Target]> for Either<L, R>
+where
+    L: AsMut<[Target]>,
+    R: AsMut<[Target]>,
+{
+    fn as_mut(&mut self) -> &mut [Target] {
+        for_both!(*self, ref mut inner => inner.as_mut())
+    }
+}
+
+impl<L, R> Deref for Either<L, R>
+where
+    L: Deref,
+    R: Deref<Target = L::Target>,
+{
+    type Target = L::Target;
+
+    fn deref(&self) -> &Self::Target {
+        for_both!(*self, ref inner => &**inner)
+    }
+}
+
+impl<L, R> DerefMut for Either<L, R>
+where
+    L: DerefMut,
+    R: DerefMut<Target = L::Target>,
+{
+    fn deref_mut(&mut self) -> &mut Self::Target {
+        for_both!(*self, ref mut inner => &mut *inner)
+    }
+}
+
+#[cfg(any(test, feature = "use_std"))]
+/// `Either` implements `Error` if *both* `L` and `R` implement it.
+/// +/// Requires crate feature `"use_std"` +impl Error for Either +where + L: Error, + R: Error, +{ + fn source(&self) -> Option<&(dyn Error + 'static)> { + for_both!(*self, ref inner => inner.source()) + } + + #[allow(deprecated)] + fn description(&self) -> &str { + for_both!(*self, ref inner => inner.description()) + } + + #[allow(deprecated)] + fn cause(&self) -> Option<&dyn Error> { + for_both!(*self, ref inner => inner.cause()) + } +} + +impl fmt::Display for Either +where + L: fmt::Display, + R: fmt::Display, +{ + fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { + for_both!(*self, ref inner => inner.fmt(f)) + } +} + +#[test] +fn basic() { + let mut e = Left(2); + let r = Right(2); + assert_eq!(e, Left(2)); + e = r; + assert_eq!(e, Right(2)); + assert_eq!(e.left(), None); + assert_eq!(e.right(), Some(2)); + assert_eq!(e.as_ref().right(), Some(&2)); + assert_eq!(e.as_mut().right(), Some(&mut 2)); +} + +#[test] +fn macros() { + use std::string::String; + + fn a() -> Either { + let x: u32 = try_left!(Right(1337u32)); + Left(x * 2) + } + assert_eq!(a(), Right(1337)); + + fn b() -> Either { + Right(try_right!(Left("foo bar"))) + } + assert_eq!(b(), Left(String::from("foo bar"))); +} + +#[test] +fn deref() { + use std::string::String; + + fn is_str(_: &str) {} + let value: Either = Left(String::from("test")); + is_str(&*value); +} + +#[test] +fn iter() { + let x = 3; + let mut iter = match x { + 3 => Left(0..10), + _ => Right(17..), + }; + + assert_eq!(iter.next(), Some(0)); + assert_eq!(iter.count(), 9); +} + +#[test] +fn seek() { + use std::io; + + let use_empty = false; + let mut mockdata = [0x00; 256]; + for i in 0..256 { + mockdata[i] = i as u8; + } + + let mut reader = if use_empty { + // Empty didn't impl Seek until Rust 1.51 + Left(io::Cursor::new([])) + } else { + Right(io::Cursor::new(&mockdata[..])) + }; + + let mut buf = [0u8; 16]; + assert_eq!(reader.read(&mut buf).unwrap(), buf.len()); + assert_eq!(buf, mockdata[..buf.len()]); + + // the first read should advance the cursor and return the next 16 bytes thus the `ne` + assert_eq!(reader.read(&mut buf).unwrap(), buf.len()); + assert_ne!(buf, mockdata[..buf.len()]); + + // if the seek operation fails it should read 16..31 instead of 0..15 + reader.seek(io::SeekFrom::Start(0)).unwrap(); + assert_eq!(reader.read(&mut buf).unwrap(), buf.len()); + assert_eq!(buf, mockdata[..buf.len()]); +} + +#[test] +fn read_write() { + use std::io; + + let use_stdio = false; + let mockdata = [0xff; 256]; + + let mut reader = if use_stdio { + Left(io::stdin()) + } else { + Right(&mockdata[..]) + }; + + let mut buf = [0u8; 16]; + assert_eq!(reader.read(&mut buf).unwrap(), buf.len()); + assert_eq!(&buf, &mockdata[..buf.len()]); + + let mut mockbuf = [0u8; 256]; + let mut writer = if use_stdio { + Left(io::stdout()) + } else { + Right(&mut mockbuf[..]) + }; + + let buf = [1u8; 16]; + assert_eq!(writer.write(&buf).unwrap(), buf.len()); +} + +#[test] +fn error() { + let invalid_utf8 = b"\xff"; + #[allow(invalid_from_utf8)] + let res = if let Err(error) = ::std::str::from_utf8(invalid_utf8) { + Err(Left(error)) + } else if let Err(error) = "x".parse::() { + Err(Right(error)) + } else { + Ok(()) + }; + assert!(res.is_err()); + #[allow(deprecated)] + res.unwrap_err().description(); // make sure this can be called +} + +/// A helper macro to check if AsRef and AsMut are implemented for a given type. +macro_rules! 
check_t { + ($t:ty) => {{ + fn check_ref>() {} + fn propagate_ref, T2: AsRef<$t>>() { + check_ref::>() + } + fn check_mut>() {} + fn propagate_mut, T2: AsMut<$t>>() { + check_mut::>() + } + }}; +} + +// This "unused" method is here to ensure that compilation doesn't fail on given types. +fn _unsized_ref_propagation() { + check_t!(str); + + fn check_array_ref, Item>() {} + fn check_array_mut, Item>() {} + + fn propagate_array_ref, T2: AsRef<[Item]>, Item>() { + check_array_ref::, _>() + } + + fn propagate_array_mut, T2: AsMut<[Item]>, Item>() { + check_array_mut::, _>() + } +} + +// This "unused" method is here to ensure that compilation doesn't fail on given types. +#[cfg(feature = "use_std")] +fn _unsized_std_propagation() { + check_t!(::std::path::Path); + check_t!(::std::ffi::OsStr); + check_t!(::std::ffi::CStr); +} diff --git a/rust/hw/char/pl011/vendor/either/src/serde_untagged.rs b/rust/hw/char/pl011/vendor/either/src/serde_untagged.rs new file mode 100644 index 0000000000..72078c3ec8 --- /dev/null +++ b/rust/hw/char/pl011/vendor/either/src/serde_untagged.rs @@ -0,0 +1,69 @@ +//! Untagged serialization/deserialization support for Either. +//! +//! `Either` uses default, externally-tagged representation. +//! However, sometimes it is useful to support several alternative types. +//! For example, we may have a field which is generally Map +//! but in typical cases Vec would suffice, too. +//! +//! ```rust +//! # fn main() -> Result<(), Box> { +//! use either::Either; +//! use std::collections::HashMap; +//! +//! #[derive(serde::Serialize, serde::Deserialize, Debug)] +//! #[serde(transparent)] +//! struct IntOrString { +//! #[serde(with = "either::serde_untagged")] +//! inner: Either, HashMap> +//! }; +//! +//! // serialization +//! let data = IntOrString { +//! inner: Either::Left(vec!["Hello".to_string()]) +//! }; +//! // notice: no tags are emitted. +//! assert_eq!(serde_json::to_string(&data)?, r#"["Hello"]"#); +//! +//! // deserialization +//! let data: IntOrString = serde_json::from_str( +//! r#"{"a": 0, "b": 14}"# +//! )?; +//! println!("found {:?}", data); +//! # Ok(()) +//! # } +//! ``` + +use serde::{Deserialize, Deserializer, Serialize, Serializer}; + +#[derive(serde::Serialize, serde::Deserialize)] +#[serde(untagged)] +enum Either { + Left(L), + Right(R), +} + +pub fn serialize(this: &super::Either, serializer: S) -> Result +where + S: Serializer, + L: Serialize, + R: Serialize, +{ + let untagged = match this { + super::Either::Left(left) => Either::Left(left), + super::Either::Right(right) => Either::Right(right), + }; + untagged.serialize(serializer) +} + +pub fn deserialize<'de, L, R, D>(deserializer: D) -> Result, D::Error> +where + D: Deserializer<'de>, + L: Deserialize<'de>, + R: Deserialize<'de>, +{ + match Either::deserialize(deserializer) { + Ok(Either::Left(left)) => Ok(super::Either::Left(left)), + Ok(Either::Right(right)) => Ok(super::Either::Right(right)), + Err(error) => Err(error), + } +} diff --git a/rust/hw/char/pl011/vendor/either/src/serde_untagged_optional.rs b/rust/hw/char/pl011/vendor/either/src/serde_untagged_optional.rs new file mode 100644 index 0000000000..fb3239ace1 --- /dev/null +++ b/rust/hw/char/pl011/vendor/either/src/serde_untagged_optional.rs @@ -0,0 +1,74 @@ +//! Untagged serialization/deserialization support for Option>. +//! +//! `Either` uses default, externally-tagged representation. +//! However, sometimes it is useful to support several alternative types. +//! For example, we may have a field which is generally Map +//! 
but in typical cases Vec would suffice, too. +//! +//! ```rust +//! # fn main() -> Result<(), Box> { +//! use either::Either; +//! use std::collections::HashMap; +//! +//! #[derive(serde::Serialize, serde::Deserialize, Debug)] +//! #[serde(transparent)] +//! struct IntOrString { +//! #[serde(with = "either::serde_untagged_optional")] +//! inner: Option, HashMap>> +//! }; +//! +//! // serialization +//! let data = IntOrString { +//! inner: Some(Either::Left(vec!["Hello".to_string()])) +//! }; +//! // notice: no tags are emitted. +//! assert_eq!(serde_json::to_string(&data)?, r#"["Hello"]"#); +//! +//! // deserialization +//! let data: IntOrString = serde_json::from_str( +//! r#"{"a": 0, "b": 14}"# +//! )?; +//! println!("found {:?}", data); +//! # Ok(()) +//! # } +//! ``` + +use serde::{Deserialize, Deserializer, Serialize, Serializer}; + +#[derive(Serialize, Deserialize)] +#[serde(untagged)] +enum Either { + Left(L), + Right(R), +} + +pub fn serialize( + this: &Option>, + serializer: S, +) -> Result +where + S: Serializer, + L: Serialize, + R: Serialize, +{ + let untagged = match this { + Some(super::Either::Left(left)) => Some(Either::Left(left)), + Some(super::Either::Right(right)) => Some(Either::Right(right)), + None => None, + }; + untagged.serialize(serializer) +} + +pub fn deserialize<'de, L, R, D>(deserializer: D) -> Result>, D::Error> +where + D: Deserializer<'de>, + L: Deserialize<'de>, + R: Deserialize<'de>, +{ + match Option::deserialize(deserializer) { + Ok(Some(Either::Left(left))) => Ok(Some(super::Either::Left(left))), + Ok(Some(Either::Right(right))) => Ok(Some(super::Either::Right(right))), + Ok(None) => Ok(None), + Err(error) => Err(error), + } +} diff --git a/rust/hw/char/pl011/vendor/itertools/.cargo-checksum.json b/rust/hw/char/pl011/vendor/itertools/.cargo-checksum.json new file mode 100644 index 0000000000..327f66ceb5 --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/.cargo-checksum.json @@ -0,0 +1 @@ 
+{"files":{"CHANGELOG.md":"9f94a3c5bdd8dd758864440205c84d73005b8619cd20833449db54f1f484c6bf","Cargo.lock":"b0443f54560491073ca861d8ed664a07a8039872568a527b2add8f362dd9734b","Cargo.toml":"e64e6e088ab537ba843f25a111af102dd434fd58cea3d446dff314cf42ad33e2","LICENSE-APACHE":"a60eea817514531668d7e00765731449fe14d059d3249e0bc93b36de45f759f2","LICENSE-MIT":"7576269ea71f767b99297934c0b2367532690f8c4badc695edf8e04ab6a1e545","README.md":"9de81a03c86ca4573d5d0a98eaa4d938bc6c538816f421d1b7499301efb5a454","benches/bench1.rs":"bb06f39db0544b1380cd4929139ccf521a9eecab7ca3f910b9499f965ec0a047","benches/combinations.rs":"51523ee1ca438a56f14711f0b04ee943895062d35859fbe23a2714d2fca3289d","benches/combinations_with_replacement.rs":"11f29160652a2d90ce7ca4b1c339c4457888ab6867e2456ce1c62e3adf9be737","benches/extra/mod.rs":"6ca290d72302a1945078621610b5788060b0de29639decebbdc557a80044aa97","benches/extra/zipslices.rs":"40e9f68a7c00f8429193fca463caef18851fa49b33355cc136bad3ccc840d655","benches/fold_specialization.rs":"5a517bbe29d366a15f6f751660e17ab1aa3e7b21552a1983048c662e34f0d69e","benches/powerset.rs":"6fd9d69a3483b37dc2411f99fb4efa6131577696f2dbdc8d1de9e4d7642fe3a3","benches/tree_fold1.rs":"539232e74f9aaea295a42069ac5af707811e90dc1c71c6e0a9064ffc731999de","benches/tuple_combinations.rs":"16366158743307a0289fc1df423a3cec45009807d410a9fe9922d5b6f8b7d002","benches/tuples.rs":"5a620783ae203e9ff9623d10d2c7fe9911d8b6c811cbad7613afa30e390c759d","examples/iris.data":"596ffd580471ca4d4880f8e439c7281f3b50d8249a5960353cb200b1490f63a0","examples/iris.rs":"1b465ed6a417180913104bc95a545fd9d1a3d67d121871ab737ad87e31b8be37","src/adaptors/coalesce.rs":"a0073325d40f297d29101538d18a267aef81889a999338dc09cb43a31cb4ec8b","src/adaptors/map.rs":"241971e856e468d71323071fb4a09867fbcedb83877320be132dc03516fe60e8","src/adaptors/mod.rs":"7f3bd7d011a348ce5e4bea486ef2e6346b64c7fe27540334d56d3f147f981d59","src/adaptors/multi_product.rs":"bb43e6dce68c815c21006d5b01c56e038d54b0c3bb8ee6bb8a4de11e2952c7ad","src/combinations.rs":"fb25babb459389093f886721016c72bf9f00e51d02735f638d871bb3a447ffd0","src/combinations_with_replacement.rs":"463011a574facbdd84278386b533a90e4dd517f0417e05adb82d182049db1f50","src/concat_impl.rs":"03b1ed61cbed242c286c3c4c5c848dbd57e02ab83fcef264f3a592b58107f324","src/cons_tuples_impl.rs":"c253d03b861831c01d62cacc57b49715ee62f6171e69f6886bb5a6ca0863bc3a","src/diff.rs":"a7800e9ce7a87b53ebe2338481335751fb43d44fa6a1ca719aceaaab40e5c8fe","src/duplicates_impl.rs":"f62fe4b642f501f785721ce5a505cf622a771e457210726dd0fb8b30be7ebbbc","src/either_or_both.rs":"c7ffe60772350c470fb42a5e4ff5087587985206733fe9814eeefa249983239a","src/exactly_one_err.rs":"aa50081f6a31b5109b30e3ed305e3ec2413c6908dedc8990ec5378a99cee2b39","src/extrema_set.rs":"2a25b0b86eed2fd5d05622d591a3085cab823973d450816c2c3b8cb76e9c187e","src/flatten_ok.rs":"fe209fd886ecd9cb98d99625aa0c7274af7e644eff4a10de15b4dec8bbbc934a","src/format.rs":"20fbbe35a98315ceb77ad910ff92319e163ae16452b0c24a8f1eccbc71c9e171","src/free.rs":"dfc57b7f56a08d4986a96b679018b41346576a7a34b668e008cc01109e728750","src/group_map.rs":"f7b02c964f63505d3e36280cfdc1755e05287714201efe983dacf702eee61434","src/groupbylazy.rs":"57ebf7d8a5a752045f94b76db8b80073f46964c28cc0919510fbdea102244918","src/grouping_map.rs":"cbc45ac563345c96f3ac50c78f73c83d870523436a7ab88c1c9a685d204461d3","src/impl_macros.rs":"4f829b458873bed556f1aff2ae4e88dbd576766e2b5bcc07ff3ac8756758e6f4","src/intersperse.rs":"b9717242495846a4a979c95d93d5681caccb7c07a0d889eab763ad3d49a46125","src/k_smallest.rs":"603eb34314c01769ff7f6def2a24cf7a7b38507e
6f3658b7aafc23a3b2e9b322","src/kmerge_impl.rs":"a347b0f6fa7715afd8a54d85ce139ed5b14c9e58a16c2b3648f5b288fdb5375f","src/lazy_buffer.rs":"834f6ef7fdf9f00c8a6329beb38eaefb706847ceeec309c221dce705c2c1e05b","src/lib.rs":"703fa755955007c2ddf1c1abe6a20e9a762ba09746c4eeae905e6d417bf3bf31","src/merge_join.rs":"20574fbb0ca610a6ac0ad89fb7e856a629235a14f285954760386cd0de3dc687","src/minmax.rs":"96d3897c28c8c63284d4729becc9ada6855e0953cac6e1bd35cf6f38c50b0ec0","src/multipeek_impl.rs":"35162bca4456bfa20a08e8d40e4d1cc6783dc662778789fdcded60371e975122","src/pad_tail.rs":"04be2ca73abb85815b06b5524c99d6feb2919180c486a4646f9cc6c87462f67b","src/peek_nth.rs":"6a0a51f2f373ce14d3d58595c46464878a14976bf00841a7396c03f9f9ab07ac","src/peeking_take_while.rs":"2293eaba60142f427a8bd1fa6d347b21469cadaaef69a70f28daed3a4166c1b4","src/permutations.rs":"97831e7e26904c3cae68c97e74f7c6981ceb2fb2f2217282a0e5e54083a565fc","src/powerset.rs":"e0ee6b1316b4dd314c1e81502b90ae8113e1cda12168322520c5a65410e584b2","src/process_results_impl.rs":"fd51b2a4785c3b65145703dea4c088c822e5592de939cf228917c6275bee0778","src/put_back_n_impl.rs":"821e047fecd6ca0036290029f4febe7638a3abf1faa05e1e747a3bf9d80ff464","src/rciter_impl.rs":"5b156082ef2d25a94a4ad01d94cba2813c4b3e72e212515a8ad0fc8588f8045d","src/repeatn.rs":"bfc8f9145c9d8a3ea651f012b7d5a8d2fbbcbefdee76eafd098d02e7c54cda90","src/size_hint.rs":"021e57aad7df8f1e70ef588e9e9b8a1695aab183b1098f1848561f96c5dc9bcb","src/sources.rs":"61637f32c2cea2290ecfc1980c0b2d0f68463839ac09bd81006f8258ab8ecaae","src/take_while_inclusive.rs":"f567e91a7f25ed785c3132ff408e3f17b59dce98909041a8c40cd14c0f350f55","src/tee.rs":"665832aa547389a420c3441470ff2494249f0ed2841be0c6a578367fe9dbd381","src/tuple_impl.rs":"8d6c52850bf7f3b9d03fcbaed0e60e5a5becc2f8421ca4bc79e876659804a258","src/unique_impl.rs":"3b89cdd668b74cc0a0eabb1522489e2305a0d2d8da25d6a1884e8626bbdb5959","src/unziptuple.rs":"84b50e5d29b9ddbf21a46a1cc2fd7877729c7f7da9bdc8ae1966dbaf2d2f6f60","src/with_position.rs":"a3652e3e97de78c5c7eeb9a5306225b5ce517d6165b96663820b5f00fae1bff9","src/zip_eq_impl.rs":"4a41dc6dfe99359585d50ce648bdc85f15276c602048872b1d152e90841d8cad","src/zip_longest.rs":"f7cf5fffc3ca053ee80b410a05b27de1a475021f6de3181aea981010d7e8453f","src/ziptuple.rs":"7f9df12bf6556f382bbd4ad8cf17eb8b60c1c47fadbce016141133ba0f3384a1","tests/adaptors_no_collect.rs":"f459f36d54f5d475b2b2e83f5a1c98109c15062756ae822fa379486f3eeed666","tests/flatten_ok.rs":"b7894874132918b8229c7150b2637511d8e3e14197d8eeb9382d46b2a514efa2","tests/macros_hygiene.rs":"522afa0106e3f11a5149e9218f89c2329e405546d2ef0ea756d6a27e8a0e9ca3","tests/merge_join.rs":"b08c4ee6529d234c68d411a413b8781455d18a1eab17872d1828bb75a4fcf79b","tests/peeking_take_while.rs":"f834361c5520dda15eb9e9ebe87507c905462201412b21859d9f83dab91d0e0b","tests/quick.rs":"203619d7de9ae068a5c0c61c398f65f15a878b6ac759cc4575d19f0c90dfd9fa","tests/specializations.rs":"fdd16dc663330033fedcc478609b393d4aa369dc07dc8cda31a75219fb793087","tests/test_core.rs":"32576ba90aa8e5db985b6e6ffe30e3046bc6a11d392db8f6b4bdd2ba48d9b24d","tests/test_std.rs":"16a03cfe359a570685b48b80473d1947a89a49ec9ef744ea175252e2b95c0336","tests/tuples.rs":"014e4da776174bfe923270e2a359cd9c95b372fce4b952b8138909d6e2c52762","tests/zip.rs":"99af365fe6054ef1c6089d3e604e34da8fea66e55861ae4be9e7336ec8de4b56"},"package":"b1c173a5686ce8bfa551b3563d0c2170bf24ca44da99c7ca4bfdab5418c3fe57"} \ No newline at end of file diff --git a/rust/hw/char/pl011/vendor/itertools/CHANGELOG.md b/rust/hw/char/pl011/vendor/itertools/CHANGELOG.md new file mode 100644 index 
0000000000..8d7404e759 --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/CHANGELOG.md @@ -0,0 +1,409 @@ +# Changelog + +## 0.11.0 + +### Breaking +- Make `Itertools::merge_join_by` also accept functions returning bool (#704) +- Implement `PeekingNext` transitively over mutable references (#643) +- Change `with_position` to yield `(Position, Item)` instead of `Position` (#699) + +### Added +- Add `Itertools::take_while_inclusive` (#616) +- Implement `PeekingNext` for `PeekingTakeWhile` (#644) +- Add `EitherOrBoth::{just_left, just_right, into_left, into_right, as_deref, as_deref_mut, left_or_insert, right_or_insert, left_or_insert_with, right_or_insert_with, insert_left, insert_right, insert_both}` (#629) +- Implement `Clone` for `CircularTupleWindows` (#686) +- Implement `Clone` for `Chunks` (#683) +- Add `Itertools::process_results` (#680) + +### Changed +- Use `Cell` instead of `RefCell` in `Format` and `FormatWith` (#608) +- CI tweaks (#674, #675) +- Document and test the difference between stable and unstable sorts (#653) +- Fix documentation error on `Itertools::max_set_by_key` (#692) +- Move MSRV metadata to `Cargo.toml` (#672) +- Implement `equal` with `Iterator::eq` (#591) + +## 0.10.5 + - Maintenance + +## 0.10.4 + - Add `EitherOrBoth::or` and `EitherOrBoth::or_else` (#593) + - Add `min_set`, `max_set` et al. (#613, #323) + - Use `either/use_std` (#628) + - Documentation fixes (#612, #625, #632, #633, #634, #638) + - Code maintenance (#623, #624, #627, #630) + +## 0.10.3 + - Maintenance + +## 0.10.2 + - Add `Itertools::multiunzip` (#362, #565) + - Add `intersperse` and `intersperse_with` free functions (#555) + - Add `Itertools::sorted_by_cached_key` (#424, #575) + - Specialize `ProcessResults::fold` (#563) + - Fix subtraction overflow in `DuplicatesBy::size_hint` (#552) + - Fix specialization tests (#574) + - More `Debug` impls (#573) + - Deprecate `fold1` (use `reduce` instead) (#580) + - Documentation fixes (`HomogenousTuple`, `into_group_map`, `into_group_map_by`, `MultiPeek::peek`) (#543 et al.) 
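A brief editorial illustration of the 0.11.0 `with_position` change noted above; this sketch is not part of the vendored file and assumes the itertools 0.11 API added by this patch, where the adaptor yields `(Position, item)` pairs instead of a wrapped item:

```rust
use itertools::{Itertools, Position};

fn main() {
    // Each element arrives together with its position in the sequence.
    for (pos, ch) in "qemu".chars().with_position() {
        match pos {
            Position::First => println!("first: {ch}"),
            Position::Middle => println!("middle: {ch}"),
            Position::Last => println!("last: {ch}"),
            Position::Only => println!("only: {ch}"),
        }
    }
}
```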
+ +## 0.10.1 + - Add `Itertools::contains` (#514) + - Add `Itertools::counts_by` (#515) + - Add `Itertools::partition_result` (#511) + - Add `Itertools::all_unique` (#241) + - Add `Itertools::duplicates` and `Itertools::duplicates_by` (#502) + - Add `chain!` (#525) + - Add `Itertools::at_most_one` (#523) + - Add `Itertools::flatten_ok` (#527) + - Add `EitherOrBoth::or_default` (#583) + - Add `Itertools::find_or_last` and `Itertools::find_or_first` (#535) + - Implement `FusedIterator` for `FilterOk`, `FilterMapOk`, `InterleaveShortest`, `KMergeBy`, `MergeBy`, `PadUsing`, `Positions`, `Product` , `RcIter`, `TupleWindows`, `Unique`, `UniqueBy`, `Update`, `WhileSome`, `Combinations`, `CombinationsWithReplacement`, `Powerset`, `RepeatN`, and `WithPosition` (#550) + - Implement `FusedIterator` for `Interleave`, `IntersperseWith`, and `ZipLongest` (#548) + +## 0.10.0 + - **Increase minimum supported Rust version to 1.32.0** + - Improve macro hygiene (#507) + - Add `Itertools::powerset` (#335) + - Add `Itertools::sorted_unstable`, `Itertools::sorted_unstable_by`, and `Itertools::sorted_unstable_by_key` (#494) + - Implement `Error` for `ExactlyOneError` (#484) + - Undeprecate `Itertools::fold_while` (#476) + - Tuple-related adapters work for tuples of arity up to 12 (#475) + - `use_alloc` feature for users who have `alloc`, but not `std` (#474) + - Add `Itertools::k_smallest` (#473) + - Add `Itertools::into_grouping_map` and `GroupingMap` (#465) + - Add `Itertools::into_grouping_map_by` and `GroupingMapBy` (#465) + - Add `Itertools::counts` (#468) + - Add implementation of `DoubleEndedIterator` for `Unique` (#442) + - Add implementation of `DoubleEndedIterator` for `UniqueBy` (#442) + - Add implementation of `DoubleEndedIterator` for `Zip` (#346) + - Add `Itertools::multipeek` (#435) + - Add `Itertools::dedup_with_count` and `DedupWithCount` (#423) + - Add `Itertools::dedup_by_with_count` and `DedupByWithCount` (#423) + - Add `Itertools::intersperse_with` and `IntersperseWith` (#381) + - Add `Itertools::filter_ok` and `FilterOk` (#377) + - Add `Itertools::filter_map_ok` and `FilterMapOk` (#377) + - Deprecate `Itertools::fold_results`, use `Itertools::fold_ok` instead (#377) + - Deprecate `Itertools::map_results`, use `Itertools::map_ok` instead (#377) + - Deprecate `FoldResults`, use `FoldOk` instead (#377) + - Deprecate `MapResults`, use `MapOk` instead (#377) + - Add `Itertools::circular_tuple_windows` and `CircularTupleWindows` (#350) + - Add `peek_nth` and `PeekNth` (#303) + +## 0.9.0 + - Fix potential overflow in `MergeJoinBy::size_hint` (#385) + - Add `derive(Clone)` where possible (#382) + - Add `try_collect` method (#394) + - Add `HomogeneousTuple` trait (#389) + - Fix `combinations(0)` and `combinations_with_replacement(0)` (#383) + - Don't require `ParitalEq` to the `Item` of `DedupBy` (#397) + - Implement missing specializations on the `PutBack` adaptor and on the `MergeJoinBy` iterator (#372) + - Add `position_*` methods (#412) + - Derive `Hash` for `EitherOrBoth` (#417) + - Increase minimum supported Rust version to 1.32.0 + +## 0.8.2 + - Use `slice::iter` instead of `into_iter` to avoid future breakage (#378, by @LukasKalbertodt) +## 0.8.1 + - Added a [`.exactly_one()`](https://docs.rs/itertools/0.8.1/itertools/trait.Itertools.html#method.exactly_one) iterator method that, on success, extracts the single value of an iterator ; by @Xaeroxe + - Added combinatory iterator adaptors: + - 
[`.permutations(k)`](https://docs.rs/itertools/0.8.1/itertools/trait.Itertools.html#method.permutations): + + `[0, 1, 2].iter().permutations(2)` yields + + ```rust + [ + vec![0, 1], + vec![0, 2], + vec![1, 0], + vec![1, 2], + vec![2, 0], + vec![2, 1], + ] + ``` + + ; by @tobz1000 + + - [`.combinations_with_replacement(k)`](https://docs.rs/itertools/0.8.1/itertools/trait.Itertools.html#method.combinations_with_replacement): + + `[0, 1, 2].iter().combinations_with_replacement(2)` yields + + ```rust + [ + vec![0, 0], + vec![0, 1], + vec![0, 2], + vec![1, 1], + vec![1, 2], + vec![2, 2], + ] + ``` + + ; by @tommilligan + + - For reference, these methods join the already existing [`.combinations(k)`](https://docs.rs/itertools/0.8.1/itertools/trait.Itertools.html#method.combinations): + + `[0, 1, 2].iter().combinations(2)` yields + + ```rust + [ + vec![0, 1], + vec![0, 2], + vec![1, 2], + ] + ``` + + - Improved the performance of [`.fold()`](https://docs.rs/itertools/0.8.1/itertools/trait.Itertools.html#method.fold)-based internal iteration for the [`.intersperse()`](https://docs.rs/itertools/0.8.1/itertools/trait.Itertools.html#method.intersperse) iterator ; by @jswrenn + - Added [`.dedup_by()`](https://docs.rs/itertools/0.8.1/itertools/trait.Itertools.html#method.dedup_by), [`.merge_by()`](https://docs.rs/itertools/0.8.1/itertools/trait.Itertools.html#method.merge_by) and [`.kmerge_by()`](https://docs.rs/itertools/0.8.1/itertools/trait.Itertools.html#method.kmerge_by) adaptors that work like [`.dedup()`](https://docs.rs/itertools/0.8.1/itertools/trait.Itertools.html#method.dedup), [`.merge()`](https://docs.rs/itertools/0.8.1/itertools/trait.Itertools.html#method.merge) and [`.kmerge()`](https://docs.rs/itertools/0.8.1/itertools/trait.Itertools.html#method.kmerge), but taking an additional custom comparison closure parameter. ; by @phimuemue + - Improved the performance of [`.all_equal()`](https://docs.rs/itertools/0.8.1/itertools/trait.Itertools.html#method.all_equal) ; by @fyrchik + - Loosened the bounds on [`.partition_map()`](https://docs.rs/itertools/0.8.1/itertools/trait.Itertools.html#method.partition_map) to take just a `FnMut` closure rather than a `Fn` closure, and made its implementation use internal iteration for better performance ; by @danielhenrymantilla + - Added convenience methods to [`EitherOrBoth`](https://docs.rs/itertools/0.8.1/itertools/enum.EitherOrBoth.html) elements yielded from the [`.zip_longest()`](https://docs.rs/itertools/0.8.1/itertools/trait.Itertools.html#method.zip_longest) iterator adaptor ; by @Avi-D-coder + - Added [`.sum1()`](https://docs.rs/itertools/0.8.1/itertools/trait.Itertools.html#method.sum1) and [`.product1()`](https://docs.rs/itertools/0.8.1/itertools/trait.Itertools.html#method.product1) iterator methods that respectively try to return the sum and the product of the elements of an iterator **when it is not empty**, otherwise they return `None` ; by @Emerentius +## 0.8.0 + - Added new adaptor `.map_into()` for conversions using `Into` by @vorner + - Improved `Itertools` docs by @JohnHeitmann + - The return type of `.sorted_by_by_key()` is now an iterator, not a Vec. + - The return type of the `izip!(x, y)` macro with exactly two arguments is now the usual `Iterator::zip`. 
+ - Remove `.flatten()` in favour of std's `.flatten()` + - Deprecate `.foreach()` in favour of std's `.for_each()` + - Deprecate `.step()` in favour of std's `.step_by()` + - Deprecate `repeat_call` in favour of std's `repeat_with` + - Deprecate `.fold_while()` in favour of std's `.try_fold()` + - Require Rust 1.24 as minimal version. +## 0.7.11 + - Add convenience methods to `EitherOrBoth`, making it more similar to `Option` and `Either` by @jethrogb +## 0.7.10 + - No changes. +## 0.7.9 + - New inclusion policy: See the readme about suggesting features for std before accepting them in itertools. + - The `FoldWhile` type now implements `Eq` and `PartialEq` by @jturner314 +## 0.7.8 + - Add new iterator method `.tree_fold1()` which is like `.fold1()` except items are combined in a tree structure (see its docs). By @scottmcm + - Add more `Debug` impls by @phimuemue: KMerge, KMergeBy, MergeJoinBy, ConsTuples, Intersperse, ProcessResults, RcIter, Tee, TupleWindows, Tee, ZipLongest, ZipEq, Zip. +## 0.7.7 + - Add new iterator method `.into_group_map() -> HashMap>` which turns an iterator of `(K, V)` elements into such a hash table, where values are grouped by key. By @tobz1000 + - Add new free function `flatten` for the `.flatten()` adaptor. **NOTE:** recent Rust nightlies have `Iterator::flatten` and thus a clash with our flatten adaptor. One workaround is to use the itertools `flatten` free function. +## 0.7.6 + - Add new adaptor `.multi_cartesian_product()` which is an n-ary product iterator by @tobz1000 + - Add new method `.sorted_by_key()` by @Xion + - Provide simpler and faster `.count()` for `.unique()` and `.unique_by()` +## 0.7.5 + - `.multipeek()` now implements `PeekingNext`, by @nicopap. +## 0.7.4 + - Add new adaptor `.update()` by @lucasem; this adaptor is used to modify an element before passing it on in an iterator chain. +## 0.7.3 + - Add new method `.collect_tuple()` by @matklad; it makes a tuple out of the iterator's elements if the number of them matches **exactly**. + - Implement `fold` and `collect` for `.map_results()` which means it reuses the code of the standard `.map()` for these methods. +## 0.7.2 + - Add new adaptor `.merge_join_by` by @srijs; a heterogeneous merge join for two ordered sequences. +## 0.7.1 + - Iterator adaptors and iterators in itertools now use the same `must_use` reminder that the standard library adaptors do, by @matematikaedit and @bluss *“iterator adaptors are lazy and do nothing unless consumed”*. +## 0.7.0 + - Faster `izip!()` by @krdln + - `izip!()` is now a wrapper for repeated regular `.zip()` and a single `.map()`. This means it optimizes as well as the standard library `.zip()` it uses. **Note:** `multizip` and `izip!()` are now different! The former has a named type but the latter optimizes better. + - Faster `.unique()` + - `no_std` support, which is opt-in! + - Many lovable features are still there without std, like `izip!()` or `.format()` or `.merge()`, but not those that use collections. + - Trait bounds were required up front instead of just on the type: `group_by`'s `PartialEq` by @Phlosioneer and `repeat_call`'s `FnMut`. + - Removed deprecated constructor `Zip::new` — use `izip!()` or `multizip()` +## 0.6.5 + - Fix bug in `.cartesian_product()`'s fold (which only was visible for unfused iterators). +## 0.6.4 + - Add specific `fold` implementations for `.cartesian_product()` and `cons_tuples()`, which improves their performance in fold, foreach, and iterator consumers derived from them. 
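To make the 0.7.7 entry above concrete, here is a minimal sketch of `.into_group_map()` collecting `(K, V)` pairs into a `HashMap<K, Vec<V>>`; it is an illustration only, not vendored code:

```rust
use itertools::Itertools;
use std::collections::HashMap;

fn main() {
    let pairs = vec![("even", 2), ("odd", 1), ("even", 4), ("odd", 3)];
    // Values are grouped by key, preserving their iteration order.
    let grouped: HashMap<&str, Vec<i32>> = pairs.into_iter().into_group_map();
    assert_eq!(grouped["even"], vec![2, 4]);
    assert_eq!(grouped["odd"], vec![1, 3]);
}
```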
+## 0.6.3 + - Add iterator adaptor `.positions(predicate)` by @tmccombs +## 0.6.2 + - Add function `process_results` which can “lift” a function of the regular values of an iterator so that it can process the `Ok` values from an iterator of `Results` instead, by @shepmaster + - Add iterator method `.concat()` which combines all iterator elements into a single collection using the `Extend` trait, by @srijs +## 0.6.1 + - Better size hint testing and subsequent size hint bugfixes by @rkarp. Fixes bugs in product, `interleave_shortest` size hints. + - New iterator method `.all_equal()` by @phimuemue +## 0.6.0 + - Deprecated names were removed in favour of their replacements + - `.flatten()` does not implement double ended iteration anymore + - `.fold_while()` uses `&mut self` and returns `FoldWhile`, for composability #168 + - `.foreach()` and `.fold1()` use `self`, like `.fold()` does. + - `.combinations(0)` now produces a single empty vector. #174 +## 0.5.10 + - Add itertools method `.kmerge_by()` (and corresponding free function) + - Relaxed trait requirement of `.kmerge()` and `.minmax()` to PartialOrd. +## 0.5.9 + - Add multipeek method `.reset_peek()` + - Add categories +## 0.5.8 + - Add iterator adaptor `.peeking_take_while()` and its trait `PeekingNext`. +## 0.5.7 + - Add iterator adaptor `.with_position()` + - Fix multipeek's performance for long peeks by using `VecDeque`. +## 0.5.6 + - Add `.map_results()` +## 0.5.5 + - Many more adaptors now implement `Debug` + - Add free function constructor `repeat_n`. `RepeatN::new` is now deprecated. +## 0.5.4 + - Add infinite generator function `iterate`, that takes a seed and a closure. +## 0.5.3 + - Special-cased `.fold()` for flatten and put back. `.foreach()` now uses fold on the iterator, to pick up any iterator specific loop implementation. + - `.combinations(n)` asserts up front that `n != 0`, instead of running into an error on the second iterator element. +## 0.5.2 + - Add `.tuples::()` that iterates by two, three or four elements at a time (where `T` is a tuple type). + - Add `.tuple_windows::()` that iterates using a window of the two, three or four most recent elements. + - Add `.next_tuple::()` method, that picks the next two, three or four elements in one go. + - `.interleave()` now has an accurate size hint. +## 0.5.1 + - Workaround module/function name clash that made racer crash on completing itertools. Only internal changes needed. 
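The 0.5.2 entry above introduces `.tuple_windows()`, which yields a sliding window of the most recent elements as a tuple. A small sketch of the idea using pairs (illustration only, assuming the usual `Itertools` API):

```rust
use itertools::Itertools;

fn main() {
    // Differences between consecutive elements via a sliding window of pairs.
    let deltas: Vec<i32> = [1, 4, 9, 16]
        .iter()
        .copied()
        .tuple_windows()
        .map(|(a, b)| b - a)
        .collect();
    assert_eq!(deltas, vec![3, 5, 7]);
}
```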
+## 0.5.0 + - [Release announcement](https://bluss.github.io/rust/2016/09/26/itertools-0.5.0/) + - Renamed: + - `combinations` is now `tuple_combinations` + - `combinations_n` to `combinations` + - `group_by_lazy`, `chunks_lazy` to `group_by`, `chunks` + - `Unfold::new` to `unfold()` + - `RepeatCall::new` to `repeat_call()` + - `Zip::new` to `multizip` + - `PutBack::new`, `PutBackN::new` to `put_back`, `put_back_n` + - `PutBack::with_value` is now a builder setter, not a constructor + - `MultiPeek::new`, `.multipeek()` to `multipeek()` + - `format` to `format_with` and `format_default` to `format` + - `.into_rc()` to `rciter` + - `Partition` enum is now `Either` + - Module reorganization: + - All iterator structs are under `itertools::structs` but also reexported to the top level, for backwards compatibility + - All free functions are reexported at the root, `itertools::free` will be removed in the next version + - Removed: + - `ZipSlices`, use `.zip()` instead + - `.enumerate_from()`, `ZipTrusted`, due to being unstable + - `.mend_slices()`, moved to crate `odds` + - Stride, StrideMut, moved to crate `odds` + - `linspace()`, moved to crate `itertools-num` + - `.sort_by()`, use `.sorted_by()` + - `.is_empty_hint()`, use `.size_hint()` + - `.dropn()`, use `.dropping()` + - `.map_fn()`, use `.map()` + - `.slice()`, use `.take()` / `.skip()` + - helper traits in `misc` + - `new` constructors on iterator structs, use `Itertools` trait or free functions instead + - `itertools::size_hint` is now private + - Behaviour changes: + - `format` and `format_with` helpers now panic if you try to format them more than once. + - `repeat_call` is not double ended anymore + - New features: + - tuple flattening iterator is constructible with `cons_tuples` + - itertools reexports `Either` from the `either` crate. `Either` is an iterator when `L, R` are. + - `MinMaxResult` now implements `Copy` and `Clone` + - `tuple_combinations` supports 1-4 tuples of combinations (previously just 2) +## 0.4.19 + - Add `.minmax_by()` + - Add `itertools::free::cloned` + - Add `itertools::free::rciter` + - Improve `.step(n)` slightly to take advantage of specialized Fuse better. +## 0.4.18 + - Only changes related to the "unstable" crate feature. This feature is more or less deprecated. + - Use deprecated warnings when unstable is enabled. `.enumerate_from()` will be removed imminently since it's using a deprecated libstd trait. +## 0.4.17 + - Fix bug in `.kmerge()` that caused it to often produce the wrong order #134 +## 0.4.16 + - Improve precision of the `interleave_shortest` adaptor's size hint (it is now computed exactly when possible). +## 0.4.15 + - Fixup on top of the workaround in 0.4.14. A function in `itertools::free` was removed by mistake and now it is added back again. +## 0.4.14 + - Workaround an upstream regression in a Rust nightly build that broke compilation of of `itertools::free::{interleave, merge}` +## 0.4.13 + - Add `.minmax()` and `.minmax_by_key()`, iterator methods for finding both minimum and maximum in one scan. + - Add `.format_default()`, a simpler version of `.format()` (lazy formatting for iterators). +## 0.4.12 + - Add `.zip_eq()`, an adaptor like `.zip()` except it ensures iterators of inequal length don't pass silently (instead it panics). + - Add `.fold_while()`, an iterator method that is a fold that can short-circuit. + - Add `.partition_map()`, an iterator method that can separate elements into two collections. 
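The 0.4.12 entry above describes `.partition_map()`, which routes each element into one of two collections via an `Either` returned from a closure, tying back to the `either` crate vendored earlier in this series. A minimal sketch, not part of the vendored sources:

```rust
use itertools::{Either, Itertools};

fn main() {
    // Split one iterator into two collections in a single pass.
    let (evens, odds): (Vec<i32>, Vec<i32>) = (0..10).partition_map(|n| {
        if n % 2 == 0 {
            Either::Left(n)
        } else {
            Either::Right(n)
        }
    });
    assert_eq!(evens, vec![0, 2, 4, 6, 8]);
    assert_eq!(odds, vec![1, 3, 5, 7, 9]);
}
```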
+## 0.4.11 + - Add `.get()` for `Stride{,Mut}` and `.get_mut()` for `StrideMut` +## 0.4.10 + - Improve performance of `.kmerge()` +## 0.4.9 + - Add k-ary merge adaptor `.kmerge()` + - Fix a bug in `.islice()` with ranges `a..b` where a `> b`. +## 0.4.8 + - Implement `Clone`, `Debug` for `Linspace` +## 0.4.7 + - Add function `diff_with()` that compares two iterators + - Add `.combinations_n()`, an n-ary combinations iterator + - Add methods `PutBack::with_value` and `PutBack::into_parts`. +## 0.4.6 + - Add method `.sorted()` + - Add module `itertools::free` with free function variants of common iterator adaptors and methods. For example `enumerate(iterable)`, `rev(iterable)`, and so on. +## 0.4.5 + - Add `.flatten()` +## 0.4.4 + - Allow composing `ZipSlices` with itself +## 0.4.3 + - Write `iproduct!()` as a single expression; this allows temporary values in its arguments. +## 0.4.2 + - Add `.fold_options()` + - Require Rust 1.1 or later +## 0.4.1 + - Update `.dropping()` to take advantage of `.nth()` +## 0.4.0 + - `.merge()`, `.unique()` and `.dedup()` now perform better due to not using function pointers + - Add free functions `enumerate()` and `rev()` + - Breaking changes: + - Return types of `.merge()` and `.merge_by()` renamed and changed + - Method `Merge::new` removed + - `.merge_by()` now takes a closure that returns bool. + - Return type of `.dedup()` changed + - Return type of `.mend_slices()` changed + - Return type of `.unique()` changed + - Removed function `times()`, struct `Times`: use a range instead + - Removed deprecated macro `icompr!()` + - Removed deprecated `FnMap` and method `.fn_map()`: use `.map_fn()` + - `.interleave_shortest()` is no longer guaranteed to act like fused +## 0.3.25 + - Rename `.sort_by()` to `.sorted_by()`. Old name is deprecated. + - Fix well-formedness warnings from RFC 1214, no user visible impact +## 0.3.24 + - Improve performance of `.merge()`'s ordering function slightly +## 0.3.23 + - Added `.chunks()`, similar to (and based on) `.group_by_lazy()`. + - Tweak linspace to match numpy.linspace and make it double ended. +## 0.3.22 + - Added `ZipSlices`, a fast zip for slices +## 0.3.21 + - Remove `Debug` impl for `Format`, it will have different use later +## 0.3.20 + - Optimize `.group_by_lazy()` +## 0.3.19 + - Added `.group_by_lazy()`, a possibly nonallocating group by + - Added `.format()`, a nonallocating formatting helper for iterators + - Remove uses of `RandomAccessIterator` since it has been deprecated in Rust. +## 0.3.17 + - Added (adopted) `Unfold` from Rust +## 0.3.16 + - Added adaptors `.unique()`, `.unique_by()` +## 0.3.15 + - Added method `.sort_by()` +## 0.3.14 + - Added adaptor `.while_some()` +## 0.3.13 + - Added adaptor `.interleave_shortest()` + - Added adaptor `.pad_using()` +## 0.3.11 + - Added `assert_equal` function +## 0.3.10 + - Bugfix `.combinations()` `size_hint`. +## 0.3.8 + - Added source `RepeatCall` +## 0.3.7 + - Added adaptor `PutBackN` + - Added adaptor `.combinations()` +## 0.3.6 + - Added `itertools::partition`, partition a sequence in place based on a predicate. + - Deprecate `icompr!()` with no replacement. +## 0.3.5 + - `.map_fn()` replaces deprecated `.fn_map()`. 
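The 0.3.19 entry above adds `.format()`, a non-allocating, lazy formatting helper (which, per the 0.5.0 notes, may only be formatted once). A short usage sketch for illustration:

```rust
use itertools::Itertools;

fn main() {
    // `.format(", ")` returns a lazy Display adapter; the elements are only
    // joined when the value is actually formatted, with no intermediate String.
    let rendered = format!("[{}]", [1, 2, 3].iter().format(", "));
    assert_eq!(rendered, "[1, 2, 3]");
}
```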
+## 0.3.4 + - `.take_while_ref()` *by-ref adaptor* + - `.coalesce()` *adaptor* + - `.mend_slices()` *adaptor* +## 0.3.3 + - `.dropping_back()` *method* + - `.fold1()` *method* + - `.is_empty_hint()` *method* diff --git a/rust/hw/char/pl011/vendor/itertools/Cargo.lock b/rust/hw/char/pl011/vendor/itertools/Cargo.lock new file mode 100644 index 0000000000..76936c9eea --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/Cargo.lock @@ -0,0 +1,681 @@ +# This file is automatically @generated by Cargo. +# It is not intended for manual editing. +version = 3 + +[[package]] +name = "anes" +version = "0.1.6" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "4b46cbb362ab8752921c97e041f5e366ee6297bd428a31275b9fcf1e380f7299" + +[[package]] +name = "atty" +version = "0.2.14" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "d9b39be18770d11421cdb1b9947a45dd3f37e93092cbf377614828a319d5fee8" +dependencies = [ + "hermit-abi", + "libc", + "winapi", +] + +[[package]] +name = "autocfg" +version = "1.1.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "d468802bab17cbc0cc575e9b053f41e72aa36bfa6b7f55e3529ffa43161b97fa" + +[[package]] +name = "bitflags" +version = "1.3.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "bef38d45163c2f1dde094a7dfd33ccf595c92905c8f8f4fdc18d06fb1037718a" + +[[package]] +name = "bumpalo" +version = "3.11.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "c1ad822118d20d2c234f427000d5acc36eabe1e29a348c89b63dd60b13f28e5d" + +[[package]] +name = "cast" +version = "0.3.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "37b2a672a2cb129a2e41c10b1224bb368f9f37a2b16b612598138befd7b37eb5" + +[[package]] +name = "cfg-if" +version = "1.0.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "baf1de4339761588bc0619e3cbc0120ee582ebb74b53b4efbf79117bd2da40fd" + +[[package]] +name = "ciborium" +version = "0.2.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "b0c137568cc60b904a7724001b35ce2630fd00d5d84805fbb608ab89509d788f" +dependencies = [ + "ciborium-io", + "ciborium-ll", + "serde", +] + +[[package]] +name = "ciborium-io" +version = "0.2.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "346de753af073cc87b52b2083a506b38ac176a44cfb05497b622e27be899b369" + +[[package]] +name = "ciborium-ll" +version = "0.2.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "213030a2b5a4e0c0892b6652260cf6ccac84827b83a85a534e178e3906c4cf1b" +dependencies = [ + "ciborium-io", + "half", +] + +[[package]] +name = "clap" +version = "3.2.22" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "86447ad904c7fb335a790c9d7fe3d0d971dc523b8ccd1561a520de9a85302750" +dependencies = [ + "bitflags", + "clap_lex", + "indexmap", + "textwrap", +] + +[[package]] +name = "clap_lex" +version = "0.2.4" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "2850f2f5a82cbf437dd5af4d49848fbdfc27c157c3d010345776f952765261c5" +dependencies = [ + "os_str_bytes", +] + +[[package]] +name = "criterion" +version = "0.4.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "e7c76e09c1aae2bc52b3d2f29e13c6572553b30c4aa1b8a49fd70de6412654cb" +dependencies = [ + "anes", + "atty", + "cast", + "ciborium", + "clap", + "criterion-plot", + "itertools 0.10.4", + "lazy_static", + 
"num-traits", + "oorandom", + "plotters", + "rayon", + "regex", + "serde", + "serde_derive", + "serde_json", + "tinytemplate", + "walkdir", +] + +[[package]] +name = "criterion-plot" +version = "0.5.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "6b50826342786a51a89e2da3a28f1c32b06e387201bc2d19791f622c673706b1" +dependencies = [ + "cast", + "itertools 0.10.4", +] + +[[package]] +name = "crossbeam-channel" +version = "0.5.6" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "c2dd04ddaf88237dc3b8d8f9a3c1004b506b54b3313403944054d23c0870c521" +dependencies = [ + "cfg-if", + "crossbeam-utils", +] + +[[package]] +name = "crossbeam-deque" +version = "0.8.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "715e8152b692bba2d374b53d4875445368fdf21a94751410af607a5ac677d1fc" +dependencies = [ + "cfg-if", + "crossbeam-epoch", + "crossbeam-utils", +] + +[[package]] +name = "crossbeam-epoch" +version = "0.9.10" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "045ebe27666471bb549370b4b0b3e51b07f56325befa4284db65fc89c02511b1" +dependencies = [ + "autocfg", + "cfg-if", + "crossbeam-utils", + "memoffset", + "once_cell", + "scopeguard", +] + +[[package]] +name = "crossbeam-utils" +version = "0.8.11" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "51887d4adc7b564537b15adcfb307936f8075dfcd5f00dde9a9f1d29383682bc" +dependencies = [ + "cfg-if", + "once_cell", +] + +[[package]] +name = "either" +version = "1.8.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "90e5c1c8368803113bf0c9584fc495a58b86dc8a29edbf8fe877d21d9507e797" + +[[package]] +name = "getrandom" +version = "0.1.16" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "8fc3cb4d91f53b50155bdcfd23f6a4c39ae1969c2ae85982b135750cccaf5fce" +dependencies = [ + "cfg-if", + "libc", + "wasi", +] + +[[package]] +name = "half" +version = "1.8.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "eabb4a44450da02c90444cf74558da904edde8fb4e9035a9a6a4e15445af0bd7" + +[[package]] +name = "hashbrown" +version = "0.12.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "8a9ee70c43aaf417c914396645a0fa852624801b24ebb7ae78fe8272889ac888" + +[[package]] +name = "hermit-abi" +version = "0.1.19" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "62b467343b94ba476dcb2500d242dadbb39557df889310ac77c5d99100aaac33" +dependencies = [ + "libc", +] + +[[package]] +name = "indexmap" +version = "1.9.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "10a35a97730320ffe8e2d410b5d3b69279b98d2c14bdb8b70ea89ecf7888d41e" +dependencies = [ + "autocfg", + "hashbrown", +] + +[[package]] +name = "itertools" +version = "0.10.4" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "d8bf247779e67a9082a4790b45e71ac7cfd1321331a5c856a74a9faebdab78d0" +dependencies = [ + "either", +] + +[[package]] +name = "itertools" +version = "0.11.0" +dependencies = [ + "criterion", + "either", + "paste", + "permutohedron", + "quickcheck", + "rand", +] + +[[package]] +name = "itoa" +version = "1.0.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "6c8af84674fe1f223a982c933a0ee1086ac4d4052aa0fb8060c12c6ad838e754" + +[[package]] +name = "js-sys" +version = "0.3.60" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = 
"49409df3e3bf0856b916e2ceaca09ee28e6871cf7d9ce97a692cacfdb2a25a47" +dependencies = [ + "wasm-bindgen", +] + +[[package]] +name = "lazy_static" +version = "1.4.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "e2abad23fbc42b3700f2f279844dc832adb2b2eb069b2df918f455c4e18cc646" + +[[package]] +name = "libc" +version = "0.2.133" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "c0f80d65747a3e43d1596c7c5492d95d5edddaabd45a7fcdb02b95f644164966" + +[[package]] +name = "log" +version = "0.4.17" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "abb12e687cfb44aa40f41fc3978ef76448f9b6038cad6aef4259d3c095a2382e" +dependencies = [ + "cfg-if", +] + +[[package]] +name = "memoffset" +version = "0.6.5" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "5aa361d4faea93603064a027415f07bd8e1d5c88c9fbf68bf56a285428fd79ce" +dependencies = [ + "autocfg", +] + +[[package]] +name = "num-traits" +version = "0.2.15" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "578ede34cf02f8924ab9447f50c28075b4d3e5b269972345e7e0372b38c6cdcd" +dependencies = [ + "autocfg", +] + +[[package]] +name = "num_cpus" +version = "1.13.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "19e64526ebdee182341572e50e9ad03965aa510cd94427a4549448f285e957a1" +dependencies = [ + "hermit-abi", + "libc", +] + +[[package]] +name = "once_cell" +version = "1.14.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "2f7254b99e31cad77da24b08ebf628882739a608578bb1bcdfc1f9c21260d7c0" + +[[package]] +name = "oorandom" +version = "11.1.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "0ab1bc2a289d34bd04a330323ac98a1b4bc82c9d9fcb1e66b63caa84da26b575" + +[[package]] +name = "os_str_bytes" +version = "6.3.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "9ff7415e9ae3fff1225851df9e0d9e4e5479f947619774677a63572e55e80eff" + +[[package]] +name = "paste" +version = "1.0.9" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "b1de2e551fb905ac83f73f7aedf2f0cb4a0da7e35efa24a202a936269f1f18e1" + +[[package]] +name = "permutohedron" +version = "0.2.4" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "b687ff7b5da449d39e418ad391e5e08da53ec334903ddbb921db208908fc372c" + +[[package]] +name = "plotters" +version = "0.3.4" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "2538b639e642295546c50fcd545198c9d64ee2a38620a628724a3b266d5fbf97" +dependencies = [ + "num-traits", + "plotters-backend", + "plotters-svg", + "wasm-bindgen", + "web-sys", +] + +[[package]] +name = "plotters-backend" +version = "0.3.4" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "193228616381fecdc1224c62e96946dfbc73ff4384fba576e052ff8c1bea8142" + +[[package]] +name = "plotters-svg" +version = "0.3.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "f9a81d2759aae1dae668f783c308bc5c8ebd191ff4184aaa1b37f65a6ae5a56f" +dependencies = [ + "plotters-backend", +] + +[[package]] +name = "ppv-lite86" +version = "0.2.16" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "eb9f9e6e233e5c4a35559a617bf40a4ec447db2e84c20b55a6f83167b7e57872" + +[[package]] +name = "proc-macro2" +version = "1.0.43" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = 
"0a2ca2c61bc9f3d74d2886294ab7b9853abd9c1ad903a3ac7815c58989bb7bab" +dependencies = [ + "unicode-ident", +] + +[[package]] +name = "quickcheck" +version = "0.9.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "a44883e74aa97ad63db83c4bf8ca490f02b2fc02f92575e720c8551e843c945f" +dependencies = [ + "rand", + "rand_core", +] + +[[package]] +name = "quote" +version = "1.0.21" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "bbe448f377a7d6961e30f5955f9b8d106c3f5e449d493ee1b125c1d43c2b5179" +dependencies = [ + "proc-macro2", +] + +[[package]] +name = "rand" +version = "0.7.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "6a6b1679d49b24bbfe0c803429aa1874472f50d9b363131f0e89fc356b544d03" +dependencies = [ + "getrandom", + "libc", + "rand_chacha", + "rand_core", + "rand_hc", +] + +[[package]] +name = "rand_chacha" +version = "0.2.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "f4c8ed856279c9737206bf725bf36935d8666ead7aa69b52be55af369d193402" +dependencies = [ + "ppv-lite86", + "rand_core", +] + +[[package]] +name = "rand_core" +version = "0.5.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "90bde5296fc891b0cef12a6d03ddccc162ce7b2aff54160af9338f8d40df6d19" +dependencies = [ + "getrandom", +] + +[[package]] +name = "rand_hc" +version = "0.2.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "ca3129af7b92a17112d59ad498c6f81eaf463253766b90396d39ea7a39d6613c" +dependencies = [ + "rand_core", +] + +[[package]] +name = "rayon" +version = "1.5.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "bd99e5772ead8baa5215278c9b15bf92087709e9c1b2d1f97cdb5a183c933a7d" +dependencies = [ + "autocfg", + "crossbeam-deque", + "either", + "rayon-core", +] + +[[package]] +name = "rayon-core" +version = "1.9.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "258bcdb5ac6dad48491bb2992db6b7cf74878b0384908af124823d118c99683f" +dependencies = [ + "crossbeam-channel", + "crossbeam-deque", + "crossbeam-utils", + "num_cpus", +] + +[[package]] +name = "regex" +version = "1.6.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "4c4eb3267174b8c6c2f654116623910a0fef09c4753f8dd83db29c48a0df988b" +dependencies = [ + "regex-syntax", +] + +[[package]] +name = "regex-syntax" +version = "0.6.27" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "a3f87b73ce11b1619a3c6332f45341e0047173771e8b8b73f87bfeefb7b56244" + +[[package]] +name = "ryu" +version = "1.0.11" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "4501abdff3ae82a1c1b477a17252eb69cee9e66eb915c1abaa4f44d873df9f09" + +[[package]] +name = "same-file" +version = "1.0.6" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "93fc1dc3aaa9bfed95e02e6eadabb4baf7e3078b0bd1b4d7b6b0b68378900502" +dependencies = [ + "winapi-util", +] + +[[package]] +name = "scopeguard" +version = "1.1.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "d29ab0c6d3fc0ee92fe66e2d99f700eab17a8d57d1c1d3b748380fb20baa78cd" + +[[package]] +name = "serde" +version = "1.0.144" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "0f747710de3dcd43b88c9168773254e809d8ddbdf9653b84e2554ab219f17860" +dependencies = [ + "serde_derive", +] + +[[package]] +name = "serde_derive" +version = "1.0.144" +source = 
"registry+https://github.com/rust-lang/crates.io-index" +checksum = "94ed3a816fb1d101812f83e789f888322c34e291f894f19590dc310963e87a00" +dependencies = [ + "proc-macro2", + "quote", + "syn", +] + +[[package]] +name = "serde_json" +version = "1.0.85" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "e55a28e3aaef9d5ce0506d0a14dbba8054ddc7e499ef522dd8b26859ec9d4a44" +dependencies = [ + "itoa", + "ryu", + "serde", +] + +[[package]] +name = "syn" +version = "1.0.100" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "52205623b1b0f064a4e71182c3b18ae902267282930c6d5462c91b859668426e" +dependencies = [ + "proc-macro2", + "quote", + "unicode-ident", +] + +[[package]] +name = "textwrap" +version = "0.15.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "949517c0cf1bf4ee812e2e07e08ab448e3ae0d23472aee8a06c985f0c8815b16" + +[[package]] +name = "tinytemplate" +version = "1.2.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "be4d6b5f19ff7664e8c98d03e2139cb510db9b0a60b55f8e8709b689d939b6bc" +dependencies = [ + "serde", + "serde_json", +] + +[[package]] +name = "unicode-ident" +version = "1.0.4" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "dcc811dc4066ac62f84f11307873c4850cb653bfa9b1719cee2bd2204a4bc5dd" + +[[package]] +name = "walkdir" +version = "2.3.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "808cf2735cd4b6866113f648b791c6adc5714537bc222d9347bb203386ffda56" +dependencies = [ + "same-file", + "winapi", + "winapi-util", +] + +[[package]] +name = "wasi" +version = "0.9.0+wasi-snapshot-preview1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "cccddf32554fecc6acb585f82a32a72e28b48f8c4c1883ddfeeeaa96f7d8e519" + +[[package]] +name = "wasm-bindgen" +version = "0.2.83" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "eaf9f5aceeec8be17c128b2e93e031fb8a4d469bb9c4ae2d7dc1888b26887268" +dependencies = [ + "cfg-if", + "wasm-bindgen-macro", +] + +[[package]] +name = "wasm-bindgen-backend" +version = "0.2.83" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "4c8ffb332579b0557b52d268b91feab8df3615f265d5270fec2a8c95b17c1142" +dependencies = [ + "bumpalo", + "log", + "once_cell", + "proc-macro2", + "quote", + "syn", + "wasm-bindgen-shared", +] + +[[package]] +name = "wasm-bindgen-macro" +version = "0.2.83" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "052be0f94026e6cbc75cdefc9bae13fd6052cdcaf532fa6c45e7ae33a1e6c810" +dependencies = [ + "quote", + "wasm-bindgen-macro-support", +] + +[[package]] +name = "wasm-bindgen-macro-support" +version = "0.2.83" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "07bc0c051dc5f23e307b13285f9d75df86bfdf816c5721e573dec1f9b8aa193c" +dependencies = [ + "proc-macro2", + "quote", + "syn", + "wasm-bindgen-backend", + "wasm-bindgen-shared", +] + +[[package]] +name = "wasm-bindgen-shared" +version = "0.2.83" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "1c38c045535d93ec4f0b4defec448e4291638ee608530863b1e2ba115d4fff7f" + +[[package]] +name = "web-sys" +version = "0.3.60" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "bcda906d8be16e728fd5adc5b729afad4e444e106ab28cd1c7256e54fa61510f" +dependencies = [ + "js-sys", + "wasm-bindgen", +] + +[[package]] +name = "winapi" +version = "0.3.9" +source = 
"registry+https://github.com/rust-lang/crates.io-index" +checksum = "5c839a674fcd7a98952e593242ea400abe93992746761e38641405d28b00f419" +dependencies = [ + "winapi-i686-pc-windows-gnu", + "winapi-x86_64-pc-windows-gnu", +] + +[[package]] +name = "winapi-i686-pc-windows-gnu" +version = "0.4.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "ac3b87c63620426dd9b991e5ce0329eff545bccbbb34f3be09ff6fb6ab51b7b6" + +[[package]] +name = "winapi-util" +version = "0.1.5" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "70ec6ce85bb158151cae5e5c87f95a8e97d2c0c4b001223f33a334e3ce5de178" +dependencies = [ + "winapi", +] + +[[package]] +name = "winapi-x86_64-pc-windows-gnu" +version = "0.4.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "712e227841d057c1ee1cd2fb22fa7e5a5461ae8e48fa2ca79ec42cfc1931183f" diff --git a/rust/hw/char/pl011/vendor/itertools/Cargo.toml b/rust/hw/char/pl011/vendor/itertools/Cargo.toml new file mode 100644 index 0000000000..df3cbd8fd3 --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/Cargo.toml @@ -0,0 +1,101 @@ +# THIS FILE IS AUTOMATICALLY GENERATED BY CARGO +# +# When uploading crates to the registry Cargo will automatically +# "normalize" Cargo.toml files for maximal compatibility +# with all versions of Cargo and also rewrite `path` dependencies +# to registry (e.g., crates.io) dependencies. +# +# If you are reading this file be aware that the original Cargo.toml +# will likely look very different (and much more reasonable). +# See Cargo.toml.orig for the original contents. + +[package] +edition = "2018" +rust-version = "1.36.0" +name = "itertools" +version = "0.11.0" +authors = ["bluss"] +exclude = ["/bors.toml"] +description = "Extra iterator adaptors, iterator methods, free functions, and macros." +documentation = "https://docs.rs/itertools/" +readme = "README.md" +keywords = [ + "iterator", + "data-structure", + "zip", + "product", + "group-by", +] +categories = [ + "algorithms", + "rust-patterns", +] +license = "MIT OR Apache-2.0" +repository = "https://github.com/rust-itertools/itertools" + +[profile.bench] +debug = 2 + +[lib] +test = false +bench = false + +[[bench]] +name = "tuple_combinations" +harness = false + +[[bench]] +name = "tuples" +harness = false + +[[bench]] +name = "fold_specialization" +harness = false + +[[bench]] +name = "combinations_with_replacement" +harness = false + +[[bench]] +name = "tree_fold1" +harness = false + +[[bench]] +name = "bench1" +harness = false + +[[bench]] +name = "combinations" +harness = false + +[[bench]] +name = "powerset" +harness = false + +[dependencies.either] +version = "1.0" +default-features = false + +[dev-dependencies.criterion] +version = "0.4.0" + +[dev-dependencies.paste] +version = "1.0.0" + +[dev-dependencies.permutohedron] +version = "0.2" + +[dev-dependencies.quickcheck] +version = "0.9" +default_features = false + +[dev-dependencies.rand] +version = "0.7" + +[features] +default = ["use_std"] +use_alloc = [] +use_std = [ + "use_alloc", + "either/use_std", +] diff --git a/rust/hw/char/pl011/vendor/itertools/LICENSE-APACHE b/rust/hw/char/pl011/vendor/itertools/LICENSE-APACHE new file mode 100644 index 0000000000..16fe87b06e --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/LICENSE-APACHE @@ -0,0 +1,201 @@ + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + +TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + +1. Definitions. 
+ + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + +2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + +3. Grant of Patent License. 
Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + +4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + +5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + +6. Trademarks. 
This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + +7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + +8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + +9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + +END OF TERMS AND CONDITIONS + +APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + +Copyright [yyyy] [name of copyright owner] + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
diff --git a/rust/hw/char/pl011/vendor/itertools/LICENSE-MIT b/rust/hw/char/pl011/vendor/itertools/LICENSE-MIT new file mode 100644 index 0000000000..9203baa055 --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/LICENSE-MIT @@ -0,0 +1,25 @@ +Copyright (c) 2015 + +Permission is hereby granted, free of charge, to any +person obtaining a copy of this software and associated +documentation files (the "Software"), to deal in the +Software without restriction, including without +limitation the rights to use, copy, modify, merge, +publish, distribute, sublicense, and/or sell copies of +the Software, and to permit persons to whom the Software +is furnished to do so, subject to the following +conditions: + +The above copyright notice and this permission notice +shall be included in all copies or substantial portions +of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF +ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED +TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A +PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT +SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY +CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION +OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR +IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER +DEALINGS IN THE SOFTWARE. diff --git a/rust/hw/char/pl011/vendor/itertools/README.md b/rust/hw/char/pl011/vendor/itertools/README.md new file mode 100644 index 0000000000..626d10d0d0 --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/README.md @@ -0,0 +1,44 @@ +# Itertools + +Extra iterator adaptors, functions and macros. + +Please read the [API documentation here](https://docs.rs/itertools/). + +[![build_status](https://github.com/rust-itertools/itertools/actions/workflows/ci.yml/badge.svg)](https://github.com/rust-itertools/itertools/actions) +[![crates.io](https://img.shields.io/crates/v/itertools.svg)](https://crates.io/crates/itertools) + +How to use with Cargo: + +```toml +[dependencies] +itertools = "0.11.0" +``` + +How to use in your crate: + +```rust +use itertools::Itertools; +``` + +## How to contribute + +- Fix a bug or implement a new thing +- Include tests for your new feature, preferably a QuickCheck test +- Make a Pull Request + +For new features, please first consider filing a PR to [rust-lang/rust](https://github.com/rust-lang/rust), +adding your new feature to the `Iterator` trait of the standard library, if you believe it is reasonable. +If it isn't accepted there, proposing it for inclusion in ``itertools`` is a good idea. +The reason for doing is this is so that we avoid future breakage as with ``.flatten()``. +However, if your feature involves heap allocation, such as storing elements in a ``Vec``, +then it can't be accepted into ``libcore``, and you should propose it for ``itertools`` directly instead. + +## License + +Dual-licensed to be compatible with the Rust project. + +Licensed under the Apache License, Version 2.0 +https://www.apache.org/licenses/LICENSE-2.0 or the MIT license +https://opensource.org/licenses/MIT, at your +option. This file may not be copied, modified, or distributed +except according to those terms. 
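As a brief illustration of the vendored crate's API beyond the README snippet above, the following minimal, self-contained sketch exercises two itertools 0.11 adaptors (`tuple_windows` and `join`). It is illustrative only; it is not part of the vendored files or of this patch, and the identifiers in it are example values.

```rust
// Minimal sketch of itertools 0.11 usage (illustrative, not part of the patch).
use itertools::Itertools;

fn main() {
    // tuple_windows(): yields overlapping pairs of adjacent items.
    let pairs: Vec<(i32, i32)> = [1, 2, 3, 4].iter().copied().tuple_windows().collect();
    assert_eq!(pairs, vec![(1, 2), (2, 3), (3, 4)]);

    // join(): concatenates Display-able items with a separator string.
    let csv = [1, 2, 3].iter().join(",");
    assert_eq!(csv, "1,2,3");
}
```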
diff --git a/rust/hw/char/pl011/vendor/itertools/benches/bench1.rs b/rust/hw/char/pl011/vendor/itertools/benches/bench1.rs new file mode 100644 index 0000000000..71278d17b6 --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/benches/bench1.rs @@ -0,0 +1,877 @@ +use criterion::{black_box, criterion_group, criterion_main, Criterion}; +use itertools::Itertools; +use itertools::free::cloned; +use itertools::iproduct; + +use std::iter::repeat; +use std::cmp; +use std::ops::{Add, Range}; + +mod extra; + +use crate::extra::ZipSlices; + +fn slice_iter(c: &mut Criterion) { + let xs: Vec<_> = repeat(1i32).take(20).collect(); + + c.bench_function("slice iter", move |b| { + b.iter(|| for elt in xs.iter() { + black_box(elt); + }) + }); +} + +fn slice_iter_rev(c: &mut Criterion) { + let xs: Vec<_> = repeat(1i32).take(20).collect(); + + c.bench_function("slice iter rev", move |b| { + b.iter(|| for elt in xs.iter().rev() { + black_box(elt); + }) + }); +} + +fn zip_default_zip(c: &mut Criterion) { + let xs = vec![0; 1024]; + let ys = vec![0; 768]; + let xs = black_box(xs); + let ys = black_box(ys); + + c.bench_function("zip default zip", move |b| { + b.iter(|| { + for (&x, &y) in xs.iter().zip(&ys) { + black_box(x); + black_box(y); + } + }) + }); +} + +fn zipdot_i32_default_zip(c: &mut Criterion) { + let xs = vec![2; 1024]; + let ys = vec![2; 768]; + let xs = black_box(xs); + let ys = black_box(ys); + + c.bench_function("zipdot i32 default zip", move |b| { + b.iter(|| { + let mut s = 0; + for (&x, &y) in xs.iter().zip(&ys) { + s += x * y; + } + s + }) + }); +} + +fn zipdot_f32_default_zip(c: &mut Criterion) { + let xs = vec![2f32; 1024]; + let ys = vec![2f32; 768]; + let xs = black_box(xs); + let ys = black_box(ys); + + c.bench_function("zipdot f32 default zip", move |b| { + b.iter(|| { + let mut s = 0.; + for (&x, &y) in xs.iter().zip(&ys) { + s += x * y; + } + s + }) + }); +} + +fn zip_default_zip3(c: &mut Criterion) { + let xs = vec![0; 1024]; + let ys = vec![0; 768]; + let zs = vec![0; 766]; + let xs = black_box(xs); + let ys = black_box(ys); + let zs = black_box(zs); + + c.bench_function("zip default zip3", move |b| { + b.iter(|| { + for ((&x, &y), &z) in xs.iter().zip(&ys).zip(&zs) { + black_box(x); + black_box(y); + black_box(z); + } + }) + }); +} + +fn zip_slices_ziptuple(c: &mut Criterion) { + let xs = vec![0; 1024]; + let ys = vec![0; 768]; + + c.bench_function("zip slices ziptuple", move |b| { + b.iter(|| { + let xs = black_box(&xs); + let ys = black_box(&ys); + for (&x, &y) in itertools::multizip((xs, ys)) { + black_box(x); + black_box(y); + } + }) + }); +} + +fn zipslices(c: &mut Criterion) { + let xs = vec![0; 1024]; + let ys = vec![0; 768]; + let xs = black_box(xs); + let ys = black_box(ys); + + c.bench_function("zipslices", move |b| { + b.iter(|| { + for (&x, &y) in ZipSlices::new(&xs, &ys) { + black_box(x); + black_box(y); + } + }) + }); +} + +fn zipslices_mut(c: &mut Criterion) { + let xs = vec![0; 1024]; + let ys = vec![0; 768]; + let xs = black_box(xs); + let mut ys = black_box(ys); + + c.bench_function("zipslices mut", move |b| { + b.iter(|| { + for (&x, &mut y) in ZipSlices::from_slices(&xs[..], &mut ys[..]) { + black_box(x); + black_box(y); + } + }) + }); +} + +fn zipdot_i32_zipslices(c: &mut Criterion) { + let xs = vec![2; 1024]; + let ys = vec![2; 768]; + let xs = black_box(xs); + let ys = black_box(ys); + + c.bench_function("zipdot i32 zipslices", move |b| { + b.iter(|| { + let mut s = 0i32; + for (&x, &y) in ZipSlices::new(&xs, &ys) { + s += x * y; + } + s + }) + }); +} + +fn 
zipdot_f32_zipslices(c: &mut Criterion) { + let xs = vec![2f32; 1024]; + let ys = vec![2f32; 768]; + let xs = black_box(xs); + let ys = black_box(ys); + + c.bench_function("zipdot f32 zipslices", move |b| { + b.iter(|| { + let mut s = 0.; + for (&x, &y) in ZipSlices::new(&xs, &ys) { + s += x * y; + } + s + }) + }); +} + +fn zip_checked_counted_loop(c: &mut Criterion) { + let xs = vec![0; 1024]; + let ys = vec![0; 768]; + let xs = black_box(xs); + let ys = black_box(ys); + + c.bench_function("zip checked counted loop", move |b| { + b.iter(|| { + // Must slice to equal lengths, and then bounds checks are eliminated! + let len = cmp::min(xs.len(), ys.len()); + let xs = &xs[..len]; + let ys = &ys[..len]; + + for i in 0..len { + let x = xs[i]; + let y = ys[i]; + black_box(x); + black_box(y); + } + }) + }); +} + +fn zipdot_i32_checked_counted_loop(c: &mut Criterion) { + let xs = vec![2; 1024]; + let ys = vec![2; 768]; + let xs = black_box(xs); + let ys = black_box(ys); + + c.bench_function("zipdot i32 checked counted loop", move |b| { + b.iter(|| { + // Must slice to equal lengths, and then bounds checks are eliminated! + let len = cmp::min(xs.len(), ys.len()); + let xs = &xs[..len]; + let ys = &ys[..len]; + + let mut s = 0i32; + + for i in 0..len { + s += xs[i] * ys[i]; + } + s + }) + }); +} + +fn zipdot_f32_checked_counted_loop(c: &mut Criterion) { + let xs = vec![2f32; 1024]; + let ys = vec![2f32; 768]; + let xs = black_box(xs); + let ys = black_box(ys); + + c.bench_function("zipdot f32 checked counted loop", move |b| { + b.iter(|| { + // Must slice to equal lengths, and then bounds checks are eliminated! + let len = cmp::min(xs.len(), ys.len()); + let xs = &xs[..len]; + let ys = &ys[..len]; + + let mut s = 0.; + + for i in 0..len { + s += xs[i] * ys[i]; + } + s + }) + }); +} + +fn zipdot_f32_checked_counted_unrolled_loop(c: &mut Criterion) { + let xs = vec![2f32; 1024]; + let ys = vec![2f32; 768]; + let xs = black_box(xs); + let ys = black_box(ys); + + c.bench_function("zipdot f32 checked counted unrolled loop", move |b| { + b.iter(|| { + // Must slice to equal lengths, and then bounds checks are eliminated! 
+ let len = cmp::min(xs.len(), ys.len()); + let mut xs = &xs[..len]; + let mut ys = &ys[..len]; + + let mut s = 0.; + let (mut p0, mut p1, mut p2, mut p3, mut p4, mut p5, mut p6, mut p7) = + (0., 0., 0., 0., 0., 0., 0., 0.); + + // how to unroll and have bounds checks eliminated (by cristicbz) + // split sum into eight parts to enable vectorization (by bluss) + while xs.len() >= 8 { + p0 += xs[0] * ys[0]; + p1 += xs[1] * ys[1]; + p2 += xs[2] * ys[2]; + p3 += xs[3] * ys[3]; + p4 += xs[4] * ys[4]; + p5 += xs[5] * ys[5]; + p6 += xs[6] * ys[6]; + p7 += xs[7] * ys[7]; + + xs = &xs[8..]; + ys = &ys[8..]; + } + s += p0 + p4; + s += p1 + p5; + s += p2 + p6; + s += p3 + p7; + + for i in 0..xs.len() { + s += xs[i] * ys[i]; + } + s + }) + }); +} + +fn zip_unchecked_counted_loop(c: &mut Criterion) { + let xs = vec![0; 1024]; + let ys = vec![0; 768]; + let xs = black_box(xs); + let ys = black_box(ys); + + c.bench_function("zip unchecked counted loop", move |b| { + b.iter(|| { + let len = cmp::min(xs.len(), ys.len()); + for i in 0..len { + unsafe { + let x = *xs.get_unchecked(i); + let y = *ys.get_unchecked(i); + black_box(x); + black_box(y); + } + } + }) + }); +} + +fn zipdot_i32_unchecked_counted_loop(c: &mut Criterion) { + let xs = vec![2; 1024]; + let ys = vec![2; 768]; + let xs = black_box(xs); + let ys = black_box(ys); + + c.bench_function("zipdot i32 unchecked counted loop", move |b| { + b.iter(|| { + let len = cmp::min(xs.len(), ys.len()); + let mut s = 0i32; + for i in 0..len { + unsafe { + let x = *xs.get_unchecked(i); + let y = *ys.get_unchecked(i); + s += x * y; + } + } + s + }) + }); +} + +fn zipdot_f32_unchecked_counted_loop(c: &mut Criterion) { + let xs = vec![2.; 1024]; + let ys = vec![2.; 768]; + let xs = black_box(xs); + let ys = black_box(ys); + + c.bench_function("zipdot f32 unchecked counted loop", move |b| { + b.iter(|| { + let len = cmp::min(xs.len(), ys.len()); + let mut s = 0f32; + for i in 0..len { + unsafe { + let x = *xs.get_unchecked(i); + let y = *ys.get_unchecked(i); + s += x * y; + } + } + s + }) + }); +} + +fn zip_unchecked_counted_loop3(c: &mut Criterion) { + let xs = vec![0; 1024]; + let ys = vec![0; 768]; + let zs = vec![0; 766]; + let xs = black_box(xs); + let ys = black_box(ys); + let zs = black_box(zs); + + c.bench_function("zip unchecked counted loop3", move |b| { + b.iter(|| { + let len = cmp::min(xs.len(), cmp::min(ys.len(), zs.len())); + for i in 0..len { + unsafe { + let x = *xs.get_unchecked(i); + let y = *ys.get_unchecked(i); + let z = *zs.get_unchecked(i); + black_box(x); + black_box(y); + black_box(z); + } + } + }) + }); +} + +fn group_by_lazy_1(c: &mut Criterion) { + let mut data = vec![0; 1024]; + for (index, elt) in data.iter_mut().enumerate() { + *elt = index / 10; + } + + let data = black_box(data); + + c.bench_function("group by lazy 1", move |b| { + b.iter(|| { + for (_key, group) in &data.iter().group_by(|elt| **elt) { + for elt in group { + black_box(elt); + } + } + }) + }); +} + +fn group_by_lazy_2(c: &mut Criterion) { + let mut data = vec![0; 1024]; + for (index, elt) in data.iter_mut().enumerate() { + *elt = index / 2; + } + + let data = black_box(data); + + c.bench_function("group by lazy 2", move |b| { + b.iter(|| { + for (_key, group) in &data.iter().group_by(|elt| **elt) { + for elt in group { + black_box(elt); + } + } + }) + }); +} + +fn slice_chunks(c: &mut Criterion) { + let data = vec![0; 1024]; + + let data = black_box(data); + let sz = black_box(10); + + c.bench_function("slice chunks", move |b| { + b.iter(|| { + for group in 
data.chunks(sz) { + for elt in group { + black_box(elt); + } + } + }) + }); +} + +fn chunks_lazy_1(c: &mut Criterion) { + let data = vec![0; 1024]; + + let data = black_box(data); + let sz = black_box(10); + + c.bench_function("chunks lazy 1", move |b| { + b.iter(|| { + for group in &data.iter().chunks(sz) { + for elt in group { + black_box(elt); + } + } + }) + }); +} + +fn equal(c: &mut Criterion) { + let data = vec![7; 1024]; + let l = data.len(); + let alpha = black_box(&data[1..]); + let beta = black_box(&data[..l - 1]); + + c.bench_function("equal", move |b| { + b.iter(|| { + itertools::equal(alpha, beta) + }) + }); +} + +fn merge_default(c: &mut Criterion) { + let mut data1 = vec![0; 1024]; + let mut data2 = vec![0; 800]; + let mut x = 0; + for (_, elt) in data1.iter_mut().enumerate() { + *elt = x; + x += 1; + } + + let mut y = 0; + for (i, elt) in data2.iter_mut().enumerate() { + *elt += y; + if i % 3 == 0 { + y += 3; + } else { + y += 0; + } + } + let data1 = black_box(data1); + let data2 = black_box(data2); + + c.bench_function("merge default", move |b| { + b.iter(|| { + data1.iter().merge(&data2).count() + }) + }); +} + +fn merge_by_cmp(c: &mut Criterion) { + let mut data1 = vec![0; 1024]; + let mut data2 = vec![0; 800]; + let mut x = 0; + for (_, elt) in data1.iter_mut().enumerate() { + *elt = x; + x += 1; + } + + let mut y = 0; + for (i, elt) in data2.iter_mut().enumerate() { + *elt += y; + if i % 3 == 0 { + y += 3; + } else { + y += 0; + } + } + let data1 = black_box(data1); + let data2 = black_box(data2); + + c.bench_function("merge by cmp", move |b| { + b.iter(|| { + data1.iter().merge_by(&data2, PartialOrd::le).count() + }) + }); +} + +fn merge_by_lt(c: &mut Criterion) { + let mut data1 = vec![0; 1024]; + let mut data2 = vec![0; 800]; + let mut x = 0; + for (_, elt) in data1.iter_mut().enumerate() { + *elt = x; + x += 1; + } + + let mut y = 0; + for (i, elt) in data2.iter_mut().enumerate() { + *elt += y; + if i % 3 == 0 { + y += 3; + } else { + y += 0; + } + } + let data1 = black_box(data1); + let data2 = black_box(data2); + + c.bench_function("merge by lt", move |b| { + b.iter(|| { + data1.iter().merge_by(&data2, |a, b| a <= b).count() + }) + }); +} + +fn kmerge_default(c: &mut Criterion) { + let mut data1 = vec![0; 1024]; + let mut data2 = vec![0; 800]; + let mut x = 0; + for (_, elt) in data1.iter_mut().enumerate() { + *elt = x; + x += 1; + } + + let mut y = 0; + for (i, elt) in data2.iter_mut().enumerate() { + *elt += y; + if i % 3 == 0 { + y += 3; + } else { + y += 0; + } + } + let data1 = black_box(data1); + let data2 = black_box(data2); + let its = &[data1.iter(), data2.iter()]; + + c.bench_function("kmerge default", move |b| { + b.iter(|| { + its.iter().cloned().kmerge().count() + }) + }); +} + +fn kmerge_tenway(c: &mut Criterion) { + let mut data = vec![0; 10240]; + + let mut state = 1729u16; + fn rng(state: &mut u16) -> u16 { + let new = state.wrapping_mul(31421) + 6927; + *state = new; + new + } + + for elt in &mut data { + *elt = rng(&mut state); + } + + let mut chunks = Vec::new(); + let mut rest = &mut data[..]; + while rest.len() > 0 { + let chunk_len = 1 + rng(&mut state) % 512; + let chunk_len = cmp::min(rest.len(), chunk_len as usize); + let (fst, tail) = {rest}.split_at_mut(chunk_len); + fst.sort(); + chunks.push(fst.iter().cloned()); + rest = tail; + } + + // println!("Chunk lengths: {}", chunks.iter().format_with(", ", |elt, f| f(&elt.len()))); + + c.bench_function("kmerge tenway", move |b| { + b.iter(|| { + chunks.iter().cloned().kmerge().count() + }) 
+ }); +} + +fn fast_integer_sum(iter: I) -> I::Item + where I: IntoIterator, + I::Item: Default + Add +{ + iter.into_iter().fold(<_>::default(), |x, y| x + y) +} + +fn step_vec_2(c: &mut Criterion) { + let v = vec![0; 1024]; + + c.bench_function("step vec 2", move |b| { + b.iter(|| { + fast_integer_sum(cloned(v.iter().step_by(2))) + }) + }); +} + +fn step_vec_10(c: &mut Criterion) { + let v = vec![0; 1024]; + + c.bench_function("step vec 10", move |b| { + b.iter(|| { + fast_integer_sum(cloned(v.iter().step_by(10))) + }) + }); +} + +fn step_range_2(c: &mut Criterion) { + let v = black_box(0..1024); + + c.bench_function("step range 2", move |b| { + b.iter(|| { + fast_integer_sum(v.clone().step_by(2)) + }) + }); +} + +fn step_range_10(c: &mut Criterion) { + let v = black_box(0..1024); + + c.bench_function("step range 10", move |b| { + b.iter(|| { + fast_integer_sum(v.clone().step_by(10)) + }) + }); +} + +fn cartesian_product_iterator(c: &mut Criterion) { + let xs = vec![0; 16]; + + c.bench_function("cartesian product iterator", move |b| { + b.iter(|| { + let mut sum = 0; + for (&x, &y, &z) in iproduct!(&xs, &xs, &xs) { + sum += x; + sum += y; + sum += z; + } + sum + }) + }); +} + +fn cartesian_product_fold(c: &mut Criterion) { + let xs = vec![0; 16]; + + c.bench_function("cartesian product fold", move |b| { + b.iter(|| { + let mut sum = 0; + iproduct!(&xs, &xs, &xs).fold((), |(), (&x, &y, &z)| { + sum += x; + sum += y; + sum += z; + }); + sum + }) + }); +} + +fn multi_cartesian_product_iterator(c: &mut Criterion) { + let xs = [vec![0; 16], vec![0; 16], vec![0; 16]]; + + c.bench_function("multi cartesian product iterator", move |b| { + b.iter(|| { + let mut sum = 0; + for x in xs.iter().multi_cartesian_product() { + sum += x[0]; + sum += x[1]; + sum += x[2]; + } + sum + }) + }); +} + +fn multi_cartesian_product_fold(c: &mut Criterion) { + let xs = [vec![0; 16], vec![0; 16], vec![0; 16]]; + + c.bench_function("multi cartesian product fold", move |b| { + b.iter(|| { + let mut sum = 0; + xs.iter().multi_cartesian_product().fold((), |(), x| { + sum += x[0]; + sum += x[1]; + sum += x[2]; + }); + sum + }) + }); +} + +fn cartesian_product_nested_for(c: &mut Criterion) { + let xs = vec![0; 16]; + + c.bench_function("cartesian product nested for", move |b| { + b.iter(|| { + let mut sum = 0; + for &x in &xs { + for &y in &xs { + for &z in &xs { + sum += x; + sum += y; + sum += z; + } + } + } + sum + }) + }); +} + +fn all_equal(c: &mut Criterion) { + let mut xs = vec![0; 5_000_000]; + xs.extend(vec![1; 5_000_000]); + + c.bench_function("all equal", move |b| { + b.iter(|| xs.iter().all_equal()) + }); +} + +fn all_equal_for(c: &mut Criterion) { + let mut xs = vec![0; 5_000_000]; + xs.extend(vec![1; 5_000_000]); + + c.bench_function("all equal for", move |b| { + b.iter(|| { + for &x in &xs { + if x != xs[0] { + return false; + } + } + true + }) + }); +} + +fn all_equal_default(c: &mut Criterion) { + let mut xs = vec![0; 5_000_000]; + xs.extend(vec![1; 5_000_000]); + + c.bench_function("all equal default", move |b| { + b.iter(|| xs.iter().dedup().nth(1).is_none()) + }); +} + +const PERM_COUNT: usize = 6; + +fn permutations_iter(c: &mut Criterion) { + struct NewIterator(Range); + + impl Iterator for NewIterator { + type Item = usize; + + fn next(&mut self) -> Option { + self.0.next() + } + } + + c.bench_function("permutations iter", move |b| { + b.iter(|| { + for _ in NewIterator(0..PERM_COUNT).permutations(PERM_COUNT) { + + } + }) + }); +} + +fn permutations_range(c: &mut Criterion) { + 
c.bench_function("permutations range", move |b| { + b.iter(|| { + for _ in (0..PERM_COUNT).permutations(PERM_COUNT) { + + } + }) + }); +} + +fn permutations_slice(c: &mut Criterion) { + let v = (0..PERM_COUNT).collect_vec(); + + c.bench_function("permutations slice", move |b| { + b.iter(|| { + for _ in v.as_slice().iter().permutations(PERM_COUNT) { + + } + }) + }); +} + +criterion_group!( + benches, + slice_iter, + slice_iter_rev, + zip_default_zip, + zipdot_i32_default_zip, + zipdot_f32_default_zip, + zip_default_zip3, + zip_slices_ziptuple, + zipslices, + zipslices_mut, + zipdot_i32_zipslices, + zipdot_f32_zipslices, + zip_checked_counted_loop, + zipdot_i32_checked_counted_loop, + zipdot_f32_checked_counted_loop, + zipdot_f32_checked_counted_unrolled_loop, + zip_unchecked_counted_loop, + zipdot_i32_unchecked_counted_loop, + zipdot_f32_unchecked_counted_loop, + zip_unchecked_counted_loop3, + group_by_lazy_1, + group_by_lazy_2, + slice_chunks, + chunks_lazy_1, + equal, + merge_default, + merge_by_cmp, + merge_by_lt, + kmerge_default, + kmerge_tenway, + step_vec_2, + step_vec_10, + step_range_2, + step_range_10, + cartesian_product_iterator, + cartesian_product_fold, + multi_cartesian_product_iterator, + multi_cartesian_product_fold, + cartesian_product_nested_for, + all_equal, + all_equal_for, + all_equal_default, + permutations_iter, + permutations_range, + permutations_slice, +); +criterion_main!(benches); diff --git a/rust/hw/char/pl011/vendor/itertools/benches/combinations.rs b/rust/hw/char/pl011/vendor/itertools/benches/combinations.rs new file mode 100644 index 0000000000..e7433a4cb0 --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/benches/combinations.rs @@ -0,0 +1,125 @@ +use criterion::{black_box, criterion_group, criterion_main, Criterion}; +use itertools::Itertools; + +// approximate 100_000 iterations for each combination +const N1: usize = 100_000; +const N2: usize = 448; +const N3: usize = 86; +const N4: usize = 41; +const N14: usize = 21; + +fn comb_for1(c: &mut Criterion) { + c.bench_function("comb for1", move |b| { + b.iter(|| { + for i in 0..N1 { + black_box(vec![i]); + } + }) + }); +} + +fn comb_for2(c: &mut Criterion) { + c.bench_function("comb for2", move |b| { + b.iter(|| { + for i in 0..N2 { + for j in (i + 1)..N2 { + black_box(vec![i, j]); + } + } + }) + }); +} + +fn comb_for3(c: &mut Criterion) { + c.bench_function("comb for3", move |b| { + b.iter(|| { + for i in 0..N3 { + for j in (i + 1)..N3 { + for k in (j + 1)..N3 { + black_box(vec![i, j, k]); + } + } + } + }) + }); +} + +fn comb_for4(c: &mut Criterion) { + c.bench_function("comb for4", move |b| { + b.iter(|| { + for i in 0..N4 { + for j in (i + 1)..N4 { + for k in (j + 1)..N4 { + for l in (k + 1)..N4 { + black_box(vec![i, j, k, l]); + } + } + } + } + }) + }); +} + +fn comb_c1(c: &mut Criterion) { + c.bench_function("comb c1", move |b| { + b.iter(|| { + for combo in (0..N1).combinations(1) { + black_box(combo); + } + }) + }); +} + +fn comb_c2(c: &mut Criterion) { + c.bench_function("comb c2", move |b| { + b.iter(|| { + for combo in (0..N2).combinations(2) { + black_box(combo); + } + }) + }); +} + +fn comb_c3(c: &mut Criterion) { + c.bench_function("comb c3", move |b| { + b.iter(|| { + for combo in (0..N3).combinations(3) { + black_box(combo); + } + }) + }); +} + +fn comb_c4(c: &mut Criterion) { + c.bench_function("comb c4", move |b| { + b.iter(|| { + for combo in (0..N4).combinations(4) { + black_box(combo); + } + }) + }); +} + +fn comb_c14(c: &mut Criterion) { + c.bench_function("comb c14", move |b| { + 
b.iter(|| { + for combo in (0..N14).combinations(14) { + black_box(combo); + } + }) + }); +} + +criterion_group!( + benches, + comb_for1, + comb_for2, + comb_for3, + comb_for4, + comb_c1, + comb_c2, + comb_c3, + comb_c4, + comb_c14, +); +criterion_main!(benches); diff --git a/rust/hw/char/pl011/vendor/itertools/benches/combinations_with_replacement.rs b/rust/hw/char/pl011/vendor/itertools/benches/combinations_with_replacement.rs new file mode 100644 index 0000000000..8e4fa3dc3b --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/benches/combinations_with_replacement.rs @@ -0,0 +1,40 @@ +use criterion::{black_box, criterion_group, criterion_main, Criterion}; +use itertools::Itertools; + +fn comb_replacement_n10_k5(c: &mut Criterion) { + c.bench_function("comb replacement n10k5", move |b| { + b.iter(|| { + for i in (0..10).combinations_with_replacement(5) { + black_box(i); + } + }) + }); +} + +fn comb_replacement_n5_k10(c: &mut Criterion) { + c.bench_function("comb replacement n5 k10", move |b| { + b.iter(|| { + for i in (0..5).combinations_with_replacement(10) { + black_box(i); + } + }) + }); +} + +fn comb_replacement_n10_k10(c: &mut Criterion) { + c.bench_function("comb replacement n10 k10", move |b| { + b.iter(|| { + for i in (0..10).combinations_with_replacement(10) { + black_box(i); + } + }) + }); +} + +criterion_group!( + benches, + comb_replacement_n10_k5, + comb_replacement_n5_k10, + comb_replacement_n10_k10, +); +criterion_main!(benches); diff --git a/rust/hw/char/pl011/vendor/itertools/benches/extra/mod.rs b/rust/hw/char/pl011/vendor/itertools/benches/extra/mod.rs new file mode 100644 index 0000000000..52fe5cc3fe --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/benches/extra/mod.rs @@ -0,0 +1,2 @@ +pub use self::zipslices::ZipSlices; +mod zipslices; diff --git a/rust/hw/char/pl011/vendor/itertools/benches/extra/zipslices.rs b/rust/hw/char/pl011/vendor/itertools/benches/extra/zipslices.rs new file mode 100644 index 0000000000..633be59068 --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/benches/extra/zipslices.rs @@ -0,0 +1,188 @@ +use std::cmp; + +// Note: There are different ways to implement ZipSlices. +// This version performed the best in benchmarks. +// +// I also implemented a version with three pointers (tptr, tend, uptr), +// that mimiced slice::Iter and only checked bounds by using tptr == tend, +// but that was inferior to this solution. + +/// An iterator which iterates two slices simultaneously. +/// +/// `ZipSlices` acts like a double-ended `.zip()` iterator. +/// +/// It was intended to be more efficient than `.zip()`, and it was, then +/// rustc changed how it optimizes so it can not promise improved performance +/// at this time. +/// +/// Note that elements past the end of the shortest of the two slices are ignored. +/// +/// Iterator element type for `ZipSlices` is `(T::Item, U::Item)`. For example, +/// for a `ZipSlices<&'a [A], &'b mut [B]>`, the element type is `(&'a A, &'b mut B)`. +#[derive(Clone)] +pub struct ZipSlices { + t: T, + u: U, + len: usize, + index: usize, +} + +impl<'a, 'b, A, B> ZipSlices<&'a [A], &'b [B]> { + /// Create a new `ZipSlices` from slices `a` and `b`. + /// + /// Act like a double-ended `.zip()` iterator, but more efficiently. + /// + /// Note that elements past the end of the shortest of the two slices are ignored. 
+ #[inline(always)] + pub fn new(a: &'a [A], b: &'b [B]) -> Self { + let minl = cmp::min(a.len(), b.len()); + ZipSlices { + t: a, + u: b, + len: minl, + index: 0, + } + } +} + +impl ZipSlices + where T: Slice, + U: Slice +{ + /// Create a new `ZipSlices` from slices `a` and `b`. + /// + /// Act like a double-ended `.zip()` iterator, but more efficiently. + /// + /// Note that elements past the end of the shortest of the two slices are ignored. + #[inline(always)] + pub fn from_slices(a: T, b: U) -> Self { + let minl = cmp::min(a.len(), b.len()); + ZipSlices { + t: a, + u: b, + len: minl, + index: 0, + } + } +} + +impl Iterator for ZipSlices + where T: Slice, + U: Slice +{ + type Item = (T::Item, U::Item); + + #[inline(always)] + fn next(&mut self) -> Option { + unsafe { + if self.index >= self.len { + None + } else { + let i = self.index; + self.index += 1; + Some(( + self.t.get_unchecked(i), + self.u.get_unchecked(i))) + } + } + } + + #[inline] + fn size_hint(&self) -> (usize, Option) { + let len = self.len - self.index; + (len, Some(len)) + } +} + +impl DoubleEndedIterator for ZipSlices + where T: Slice, + U: Slice +{ + #[inline(always)] + fn next_back(&mut self) -> Option { + unsafe { + if self.index >= self.len { + None + } else { + self.len -= 1; + let i = self.len; + Some(( + self.t.get_unchecked(i), + self.u.get_unchecked(i))) + } + } + } +} + +impl ExactSizeIterator for ZipSlices + where T: Slice, + U: Slice +{} + +unsafe impl Slice for ZipSlices + where T: Slice, + U: Slice +{ + type Item = (T::Item, U::Item); + + fn len(&self) -> usize { + self.len - self.index + } + + unsafe fn get_unchecked(&mut self, i: usize) -> Self::Item { + (self.t.get_unchecked(i), + self.u.get_unchecked(i)) + } +} + +/// A helper trait to let `ZipSlices` accept both `&[T]` and `&mut [T]`. +/// +/// Unsafe trait because: +/// +/// - Implementors must guarantee that `get_unchecked` is valid for all indices `0..len()`. 
+pub unsafe trait Slice { + /// The type of a reference to the slice's elements + type Item; + #[doc(hidden)] + fn len(&self) -> usize; + #[doc(hidden)] + unsafe fn get_unchecked(&mut self, i: usize) -> Self::Item; +} + +unsafe impl<'a, T> Slice for &'a [T] { + type Item = &'a T; + #[inline(always)] + fn len(&self) -> usize { (**self).len() } + #[inline(always)] + unsafe fn get_unchecked(&mut self, i: usize) -> &'a T { + debug_assert!(i < self.len()); + (**self).get_unchecked(i) + } +} + +unsafe impl<'a, T> Slice for &'a mut [T] { + type Item = &'a mut T; + #[inline(always)] + fn len(&self) -> usize { (**self).len() } + #[inline(always)] + unsafe fn get_unchecked(&mut self, i: usize) -> &'a mut T { + debug_assert!(i < self.len()); + // override the lifetime constraints of &mut &'a mut [T] + (*(*self as *mut [T])).get_unchecked_mut(i) + } +} + +#[test] +fn zipslices() { + + let xs = [1, 2, 3, 4, 5, 6]; + let ys = [1, 2, 3, 7]; + ::itertools::assert_equal(ZipSlices::new(&xs, &ys), xs.iter().zip(&ys)); + + let xs = [1, 2, 3, 4, 5, 6]; + let mut ys = [0; 6]; + for (x, y) in ZipSlices::from_slices(&xs[..], &mut ys[..]) { + *y = *x; + } + ::itertools::assert_equal(&xs, &ys); +} diff --git a/rust/hw/char/pl011/vendor/itertools/benches/fold_specialization.rs b/rust/hw/char/pl011/vendor/itertools/benches/fold_specialization.rs new file mode 100644 index 0000000000..5de4671e98 --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/benches/fold_specialization.rs @@ -0,0 +1,73 @@ +use criterion::{criterion_group, criterion_main, Criterion}; +use itertools::Itertools; + +struct Unspecialized(I); + +impl Iterator for Unspecialized +where I: Iterator +{ + type Item = I::Item; + + #[inline(always)] + fn next(&mut self) -> Option { + self.0.next() + } + + #[inline(always)] + fn size_hint(&self) -> (usize, Option) { + self.0.size_hint() + } +} + +mod specialization { + use super::*; + + pub mod intersperse { + use super::*; + + pub fn external(c: &mut Criterion) + { + let arr = [1; 1024]; + + c.bench_function("external", move |b| { + b.iter(|| { + let mut sum = 0; + for &x in arr.iter().intersperse(&0) { + sum += x; + } + sum + }) + }); + } + + pub fn internal_specialized(c: &mut Criterion) + { + let arr = [1; 1024]; + + c.bench_function("internal specialized", move |b| { + b.iter(|| { + arr.iter().intersperse(&0).fold(0, |acc, x| acc + x) + }) + }); + } + + pub fn internal_unspecialized(c: &mut Criterion) + { + let arr = [1; 1024]; + + c.bench_function("internal unspecialized", move |b| { + b.iter(|| { + Unspecialized(arr.iter().intersperse(&0)).fold(0, |acc, x| acc + x) + }) + }); + } + } +} + +criterion_group!( + benches, + specialization::intersperse::external, + specialization::intersperse::internal_specialized, + specialization::intersperse::internal_unspecialized, +); +criterion_main!(benches); diff --git a/rust/hw/char/pl011/vendor/itertools/benches/powerset.rs b/rust/hw/char/pl011/vendor/itertools/benches/powerset.rs new file mode 100644 index 0000000000..074550bc44 --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/benches/powerset.rs @@ -0,0 +1,44 @@ +use criterion::{black_box, criterion_group, criterion_main, Criterion}; +use itertools::Itertools; + +// Keep aggregate generated elements the same, regardless of powerset length. 
+const TOTAL_ELEMENTS: usize = 1 << 12; +const fn calc_iters(n: usize) -> usize { + TOTAL_ELEMENTS / (1 << n) +} + +fn powerset_n(c: &mut Criterion, n: usize) { + let id = format!("powerset {}", n); + c.bench_function(id.as_str(), move |b| { + b.iter(|| { + for _ in 0..calc_iters(n) { + for elt in (0..n).powerset() { + black_box(elt); + } + } + }) + }); +} + +fn powerset_0(c: &mut Criterion) { powerset_n(c, 0); } + +fn powerset_1(c: &mut Criterion) { powerset_n(c, 1); } + +fn powerset_2(c: &mut Criterion) { powerset_n(c, 2); } + +fn powerset_4(c: &mut Criterion) { powerset_n(c, 4); } + +fn powerset_8(c: &mut Criterion) { powerset_n(c, 8); } + +fn powerset_12(c: &mut Criterion) { powerset_n(c, 12); } + +criterion_group!( + benches, + powerset_0, + powerset_1, + powerset_2, + powerset_4, + powerset_8, + powerset_12, +); +criterion_main!(benches); \ No newline at end of file diff --git a/rust/hw/char/pl011/vendor/itertools/benches/tree_fold1.rs b/rust/hw/char/pl011/vendor/itertools/benches/tree_fold1.rs new file mode 100644 index 0000000000..f12995db8e --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/benches/tree_fold1.rs @@ -0,0 +1,144 @@ +use criterion::{criterion_group, criterion_main, Criterion}; +use itertools::{Itertools, cloned}; + +trait IterEx : Iterator { + // Another efficient implementation against which to compare, + // but needs `std` so is less desirable. + fn tree_fold1_vec(self, mut f: F) -> Option + where F: FnMut(Self::Item, Self::Item) -> Self::Item, + Self: Sized, + { + let hint = self.size_hint().0; + let cap = std::mem::size_of::() * 8 - hint.leading_zeros() as usize; + let mut stack = Vec::with_capacity(cap); + self.enumerate().for_each(|(mut i, mut x)| { + while (i & 1) != 0 { + x = f(stack.pop().unwrap(), x); + i >>= 1; + } + stack.push(x); + }); + stack.into_iter().fold1(f) + } +} +impl IterEx for T {} + +macro_rules! def_benchs { + ($N:expr, + $FUN:ident, + $BENCH_NAME:ident, + ) => ( + mod $BENCH_NAME { + use super::*; + + pub fn sum(c: &mut Criterion) { + let v: Vec = (0.. $N).collect(); + + c.bench_function(&(stringify!($BENCH_NAME).replace('_', " ") + " sum"), move |b| { + b.iter(|| { + cloned(&v).$FUN(|x, y| x + y) + }) + }); + } + + pub fn complex_iter(c: &mut Criterion) { + let u = (3..).take($N / 2); + let v = (5..).take($N / 2); + let it = u.chain(v); + + c.bench_function(&(stringify!($BENCH_NAME).replace('_', " ") + " complex iter"), move |b| { + b.iter(|| { + it.clone().map(|x| x as f32).$FUN(f32::atan2) + }) + }); + } + + pub fn string_format(c: &mut Criterion) { + // This goes quadratic with linear `fold1`, so use a smaller + // size to not waste too much time in travis. The allocations + // in here are so expensive anyway that it'll still take + // way longer per iteration than the other two benchmarks. + let v: Vec = (0.. 
($N/4)).collect(); + + c.bench_function(&(stringify!($BENCH_NAME).replace('_', " ") + " string format"), move |b| { + b.iter(|| { + cloned(&v).map(|x| x.to_string()).$FUN(|x, y| format!("{} + {}", x, y)) + }) + }); + } + } + + criterion_group!( + $BENCH_NAME, + $BENCH_NAME::sum, + $BENCH_NAME::complex_iter, + $BENCH_NAME::string_format, + ); + ) +} + +def_benchs!{ + 10_000, + fold1, + fold1_10k, +} + +def_benchs!{ + 10_000, + tree_fold1, + tree_fold1_stack_10k, +} + +def_benchs!{ + 10_000, + tree_fold1_vec, + tree_fold1_vec_10k, +} + +def_benchs!{ + 100, + fold1, + fold1_100, +} + +def_benchs!{ + 100, + tree_fold1, + tree_fold1_stack_100, +} + +def_benchs!{ + 100, + tree_fold1_vec, + tree_fold1_vec_100, +} + +def_benchs!{ + 8, + fold1, + fold1_08, +} + +def_benchs!{ + 8, + tree_fold1, + tree_fold1_stack_08, +} + +def_benchs!{ + 8, + tree_fold1_vec, + tree_fold1_vec_08, +} + +criterion_main!( + fold1_10k, + tree_fold1_stack_10k, + tree_fold1_vec_10k, + fold1_100, + tree_fold1_stack_100, + tree_fold1_vec_100, + fold1_08, + tree_fold1_stack_08, + tree_fold1_vec_08, +); diff --git a/rust/hw/char/pl011/vendor/itertools/benches/tuple_combinations.rs b/rust/hw/char/pl011/vendor/itertools/benches/tuple_combinations.rs new file mode 100644 index 0000000000..4e26b282e8 --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/benches/tuple_combinations.rs @@ -0,0 +1,113 @@ +use criterion::{black_box, criterion_group, criterion_main, Criterion}; +use itertools::Itertools; + +// approximate 100_000 iterations for each combination +const N1: usize = 100_000; +const N2: usize = 448; +const N3: usize = 86; +const N4: usize = 41; + +fn tuple_comb_for1(c: &mut Criterion) { + c.bench_function("tuple comb for1", move |b| { + b.iter(|| { + for i in 0..N1 { + black_box(i); + } + }) + }); +} + +fn tuple_comb_for2(c: &mut Criterion) { + c.bench_function("tuple comb for2", move |b| { + b.iter(|| { + for i in 0..N2 { + for j in (i + 1)..N2 { + black_box(i + j); + } + } + }) + }); +} + +fn tuple_comb_for3(c: &mut Criterion) { + c.bench_function("tuple comb for3", move |b| { + b.iter(|| { + for i in 0..N3 { + for j in (i + 1)..N3 { + for k in (j + 1)..N3 { + black_box(i + j + k); + } + } + } + }) + }); +} + +fn tuple_comb_for4(c: &mut Criterion) { + c.bench_function("tuple comb for4", move |b| { + b.iter(|| { + for i in 0..N4 { + for j in (i + 1)..N4 { + for k in (j + 1)..N4 { + for l in (k + 1)..N4 { + black_box(i + j + k + l); + } + } + } + } + }) + }); +} + +fn tuple_comb_c1(c: &mut Criterion) { + c.bench_function("tuple comb c1", move |b| { + b.iter(|| { + for (i,) in (0..N1).tuple_combinations() { + black_box(i); + } + }) + }); +} + +fn tuple_comb_c2(c: &mut Criterion) { + c.bench_function("tuple comb c2", move |b| { + b.iter(|| { + for (i, j) in (0..N2).tuple_combinations() { + black_box(i + j); + } + }) + }); +} + +fn tuple_comb_c3(c: &mut Criterion) { + c.bench_function("tuple comb c3", move |b| { + b.iter(|| { + for (i, j, k) in (0..N3).tuple_combinations() { + black_box(i + j + k); + } + }) + }); +} + +fn tuple_comb_c4(c: &mut Criterion) { + c.bench_function("tuple comb c4", move |b| { + b.iter(|| { + for (i, j, k, l) in (0..N4).tuple_combinations() { + black_box(i + j + k + l); + } + }) + }); +} + +criterion_group!( + benches, + tuple_comb_for1, + tuple_comb_for2, + tuple_comb_for3, + tuple_comb_for4, + tuple_comb_c1, + tuple_comb_c2, + tuple_comb_c3, + tuple_comb_c4, +); +criterion_main!(benches); diff --git a/rust/hw/char/pl011/vendor/itertools/benches/tuples.rs 
b/rust/hw/char/pl011/vendor/itertools/benches/tuples.rs new file mode 100644 index 0000000000..ea50aaaee1 --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/benches/tuples.rs @@ -0,0 +1,213 @@ +use criterion::{criterion_group, criterion_main, Criterion}; +use itertools::Itertools; + +fn s1(a: u32) -> u32 { + a +} + +fn s2(a: u32, b: u32) -> u32 { + a + b +} + +fn s3(a: u32, b: u32, c: u32) -> u32 { + a + b + c +} + +fn s4(a: u32, b: u32, c: u32, d: u32) -> u32 { + a + b + c + d +} + +fn sum_s1(s: &[u32]) -> u32 { + s1(s[0]) +} + +fn sum_s2(s: &[u32]) -> u32 { + s2(s[0], s[1]) +} + +fn sum_s3(s: &[u32]) -> u32 { + s3(s[0], s[1], s[2]) +} + +fn sum_s4(s: &[u32]) -> u32 { + s4(s[0], s[1], s[2], s[3]) +} + +fn sum_t1(s: &(&u32, )) -> u32 { + s1(*s.0) +} + +fn sum_t2(s: &(&u32, &u32)) -> u32 { + s2(*s.0, *s.1) +} + +fn sum_t3(s: &(&u32, &u32, &u32)) -> u32 { + s3(*s.0, *s.1, *s.2) +} + +fn sum_t4(s: &(&u32, &u32, &u32, &u32)) -> u32 { + s4(*s.0, *s.1, *s.2, *s.3) +} + +macro_rules! def_benchs { + ($N:expr; + $BENCH_GROUP:ident, + $TUPLE_FUN:ident, + $TUPLES:ident, + $TUPLE_WINDOWS:ident; + $SLICE_FUN:ident, + $CHUNKS:ident, + $WINDOWS:ident; + $FOR_CHUNKS:ident, + $FOR_WINDOWS:ident + ) => ( + fn $FOR_CHUNKS(c: &mut Criterion) { + let v: Vec = (0.. $N * 1_000).collect(); + let mut s = 0; + c.bench_function(&stringify!($FOR_CHUNKS).replace('_', " "), move |b| { + b.iter(|| { + let mut j = 0; + for _ in 0..1_000 { + s += $SLICE_FUN(&v[j..(j + $N)]); + j += $N; + } + s + }) + }); + } + + fn $FOR_WINDOWS(c: &mut Criterion) { + let v: Vec = (0..1_000).collect(); + let mut s = 0; + c.bench_function(&stringify!($FOR_WINDOWS).replace('_', " "), move |b| { + b.iter(|| { + for i in 0..(1_000 - $N) { + s += $SLICE_FUN(&v[i..(i + $N)]); + } + s + }) + }); + } + + fn $TUPLES(c: &mut Criterion) { + let v: Vec = (0.. $N * 1_000).collect(); + let mut s = 0; + c.bench_function(&stringify!($TUPLES).replace('_', " "), move |b| { + b.iter(|| { + for x in v.iter().tuples() { + s += $TUPLE_FUN(&x); + } + s + }) + }); + } + + fn $CHUNKS(c: &mut Criterion) { + let v: Vec = (0.. 
$N * 1_000).collect(); + let mut s = 0; + c.bench_function(&stringify!($CHUNKS).replace('_', " "), move |b| { + b.iter(|| { + for x in v.chunks($N) { + s += $SLICE_FUN(x); + } + s + }) + }); + } + + fn $TUPLE_WINDOWS(c: &mut Criterion) { + let v: Vec = (0..1_000).collect(); + let mut s = 0; + c.bench_function(&stringify!($TUPLE_WINDOWS).replace('_', " "), move |b| { + b.iter(|| { + for x in v.iter().tuple_windows() { + s += $TUPLE_FUN(&x); + } + s + }) + }); + } + + fn $WINDOWS(c: &mut Criterion) { + let v: Vec = (0..1_000).collect(); + let mut s = 0; + c.bench_function(&stringify!($WINDOWS).replace('_', " "), move |b| { + b.iter(|| { + for x in v.windows($N) { + s += $SLICE_FUN(x); + } + s + }) + }); + } + + criterion_group!( + $BENCH_GROUP, + $FOR_CHUNKS, + $FOR_WINDOWS, + $TUPLES, + $CHUNKS, + $TUPLE_WINDOWS, + $WINDOWS, + ); + ) +} + +def_benchs!{ + 1; + benches_1, + sum_t1, + tuple_chunks_1, + tuple_windows_1; + sum_s1, + slice_chunks_1, + slice_windows_1; + for_chunks_1, + for_windows_1 +} + +def_benchs!{ + 2; + benches_2, + sum_t2, + tuple_chunks_2, + tuple_windows_2; + sum_s2, + slice_chunks_2, + slice_windows_2; + for_chunks_2, + for_windows_2 +} + +def_benchs!{ + 3; + benches_3, + sum_t3, + tuple_chunks_3, + tuple_windows_3; + sum_s3, + slice_chunks_3, + slice_windows_3; + for_chunks_3, + for_windows_3 +} + +def_benchs!{ + 4; + benches_4, + sum_t4, + tuple_chunks_4, + tuple_windows_4; + sum_s4, + slice_chunks_4, + slice_windows_4; + for_chunks_4, + for_windows_4 +} + +criterion_main!( + benches_1, + benches_2, + benches_3, + benches_4, +); diff --git a/rust/hw/char/pl011/vendor/itertools/examples/iris.data b/rust/hw/char/pl011/vendor/itertools/examples/iris.data new file mode 100644 index 0000000000..a3490e0e07 --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/examples/iris.data @@ -0,0 +1,150 @@ +5.1,3.5,1.4,0.2,Iris-setosa +4.9,3.0,1.4,0.2,Iris-setosa +4.7,3.2,1.3,0.2,Iris-setosa +4.6,3.1,1.5,0.2,Iris-setosa +5.0,3.6,1.4,0.2,Iris-setosa +5.4,3.9,1.7,0.4,Iris-setosa +4.6,3.4,1.4,0.3,Iris-setosa +5.0,3.4,1.5,0.2,Iris-setosa +4.4,2.9,1.4,0.2,Iris-setosa +4.9,3.1,1.5,0.1,Iris-setosa +5.4,3.7,1.5,0.2,Iris-setosa +4.8,3.4,1.6,0.2,Iris-setosa +4.8,3.0,1.4,0.1,Iris-setosa +4.3,3.0,1.1,0.1,Iris-setosa +5.8,4.0,1.2,0.2,Iris-setosa +5.7,4.4,1.5,0.4,Iris-setosa +5.4,3.9,1.3,0.4,Iris-setosa +5.1,3.5,1.4,0.3,Iris-setosa +5.7,3.8,1.7,0.3,Iris-setosa +5.1,3.8,1.5,0.3,Iris-setosa +5.4,3.4,1.7,0.2,Iris-setosa +5.1,3.7,1.5,0.4,Iris-setosa +4.6,3.6,1.0,0.2,Iris-setosa +5.1,3.3,1.7,0.5,Iris-setosa +4.8,3.4,1.9,0.2,Iris-setosa +5.0,3.0,1.6,0.2,Iris-setosa +5.0,3.4,1.6,0.4,Iris-setosa +5.2,3.5,1.5,0.2,Iris-setosa +5.2,3.4,1.4,0.2,Iris-setosa +4.7,3.2,1.6,0.2,Iris-setosa +4.8,3.1,1.6,0.2,Iris-setosa +5.4,3.4,1.5,0.4,Iris-setosa +5.2,4.1,1.5,0.1,Iris-setosa +5.5,4.2,1.4,0.2,Iris-setosa +4.9,3.1,1.5,0.1,Iris-setosa +5.0,3.2,1.2,0.2,Iris-setosa +5.5,3.5,1.3,0.2,Iris-setosa +4.9,3.1,1.5,0.1,Iris-setosa +4.4,3.0,1.3,0.2,Iris-setosa +5.1,3.4,1.5,0.2,Iris-setosa +5.0,3.5,1.3,0.3,Iris-setosa +4.5,2.3,1.3,0.3,Iris-setosa +4.4,3.2,1.3,0.2,Iris-setosa +5.0,3.5,1.6,0.6,Iris-setosa +5.1,3.8,1.9,0.4,Iris-setosa +4.8,3.0,1.4,0.3,Iris-setosa +5.1,3.8,1.6,0.2,Iris-setosa +4.6,3.2,1.4,0.2,Iris-setosa +5.3,3.7,1.5,0.2,Iris-setosa +5.0,3.3,1.4,0.2,Iris-setosa +7.0,3.2,4.7,1.4,Iris-versicolor +6.4,3.2,4.5,1.5,Iris-versicolor +6.9,3.1,4.9,1.5,Iris-versicolor +5.5,2.3,4.0,1.3,Iris-versicolor +6.5,2.8,4.6,1.5,Iris-versicolor +5.7,2.8,4.5,1.3,Iris-versicolor +6.3,3.3,4.7,1.6,Iris-versicolor 
+4.9,2.4,3.3,1.0,Iris-versicolor +6.6,2.9,4.6,1.3,Iris-versicolor +5.2,2.7,3.9,1.4,Iris-versicolor +5.0,2.0,3.5,1.0,Iris-versicolor +5.9,3.0,4.2,1.5,Iris-versicolor +6.0,2.2,4.0,1.0,Iris-versicolor +6.1,2.9,4.7,1.4,Iris-versicolor +5.6,2.9,3.6,1.3,Iris-versicolor +6.7,3.1,4.4,1.4,Iris-versicolor +5.6,3.0,4.5,1.5,Iris-versicolor +5.8,2.7,4.1,1.0,Iris-versicolor +6.2,2.2,4.5,1.5,Iris-versicolor +5.6,2.5,3.9,1.1,Iris-versicolor +5.9,3.2,4.8,1.8,Iris-versicolor +6.1,2.8,4.0,1.3,Iris-versicolor +6.3,2.5,4.9,1.5,Iris-versicolor +6.1,2.8,4.7,1.2,Iris-versicolor +6.4,2.9,4.3,1.3,Iris-versicolor +6.6,3.0,4.4,1.4,Iris-versicolor +6.8,2.8,4.8,1.4,Iris-versicolor +6.7,3.0,5.0,1.7,Iris-versicolor +6.0,2.9,4.5,1.5,Iris-versicolor +5.7,2.6,3.5,1.0,Iris-versicolor +5.5,2.4,3.8,1.1,Iris-versicolor +5.5,2.4,3.7,1.0,Iris-versicolor +5.8,2.7,3.9,1.2,Iris-versicolor +6.0,2.7,5.1,1.6,Iris-versicolor +5.4,3.0,4.5,1.5,Iris-versicolor +6.0,3.4,4.5,1.6,Iris-versicolor +6.7,3.1,4.7,1.5,Iris-versicolor +6.3,2.3,4.4,1.3,Iris-versicolor +5.6,3.0,4.1,1.3,Iris-versicolor +5.5,2.5,4.0,1.3,Iris-versicolor +5.5,2.6,4.4,1.2,Iris-versicolor +6.1,3.0,4.6,1.4,Iris-versicolor +5.8,2.6,4.0,1.2,Iris-versicolor +5.0,2.3,3.3,1.0,Iris-versicolor +5.6,2.7,4.2,1.3,Iris-versicolor +5.7,3.0,4.2,1.2,Iris-versicolor +5.7,2.9,4.2,1.3,Iris-versicolor +6.2,2.9,4.3,1.3,Iris-versicolor +5.1,2.5,3.0,1.1,Iris-versicolor +5.7,2.8,4.1,1.3,Iris-versicolor +6.3,3.3,6.0,2.5,Iris-virginica +5.8,2.7,5.1,1.9,Iris-virginica +7.1,3.0,5.9,2.1,Iris-virginica +6.3,2.9,5.6,1.8,Iris-virginica +6.5,3.0,5.8,2.2,Iris-virginica +7.6,3.0,6.6,2.1,Iris-virginica +4.9,2.5,4.5,1.7,Iris-virginica +7.3,2.9,6.3,1.8,Iris-virginica +6.7,2.5,5.8,1.8,Iris-virginica +7.2,3.6,6.1,2.5,Iris-virginica +6.5,3.2,5.1,2.0,Iris-virginica +6.4,2.7,5.3,1.9,Iris-virginica +6.8,3.0,5.5,2.1,Iris-virginica +5.7,2.5,5.0,2.0,Iris-virginica +5.8,2.8,5.1,2.4,Iris-virginica +6.4,3.2,5.3,2.3,Iris-virginica +6.5,3.0,5.5,1.8,Iris-virginica +7.7,3.8,6.7,2.2,Iris-virginica +7.7,2.6,6.9,2.3,Iris-virginica +6.0,2.2,5.0,1.5,Iris-virginica +6.9,3.2,5.7,2.3,Iris-virginica +5.6,2.8,4.9,2.0,Iris-virginica +7.7,2.8,6.7,2.0,Iris-virginica +6.3,2.7,4.9,1.8,Iris-virginica +6.7,3.3,5.7,2.1,Iris-virginica +7.2,3.2,6.0,1.8,Iris-virginica +6.2,2.8,4.8,1.8,Iris-virginica +6.1,3.0,4.9,1.8,Iris-virginica +6.4,2.8,5.6,2.1,Iris-virginica +7.2,3.0,5.8,1.6,Iris-virginica +7.4,2.8,6.1,1.9,Iris-virginica +7.9,3.8,6.4,2.0,Iris-virginica +6.4,2.8,5.6,2.2,Iris-virginica +6.3,2.8,5.1,1.5,Iris-virginica +6.1,2.6,5.6,1.4,Iris-virginica +7.7,3.0,6.1,2.3,Iris-virginica +6.3,3.4,5.6,2.4,Iris-virginica +6.4,3.1,5.5,1.8,Iris-virginica +6.0,3.0,4.8,1.8,Iris-virginica +6.9,3.1,5.4,2.1,Iris-virginica +6.7,3.1,5.6,2.4,Iris-virginica +6.9,3.1,5.1,2.3,Iris-virginica +5.8,2.7,5.1,1.9,Iris-virginica +6.8,3.2,5.9,2.3,Iris-virginica +6.7,3.3,5.7,2.5,Iris-virginica +6.7,3.0,5.2,2.3,Iris-virginica +6.3,2.5,5.0,1.9,Iris-virginica +6.5,3.0,5.2,2.0,Iris-virginica +6.2,3.4,5.4,2.3,Iris-virginica +5.9,3.0,5.1,1.8,Iris-virginica diff --git a/rust/hw/char/pl011/vendor/itertools/examples/iris.rs b/rust/hw/char/pl011/vendor/itertools/examples/iris.rs new file mode 100644 index 0000000000..987d9e9cba --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/examples/iris.rs @@ -0,0 +1,137 @@ +/// +/// This example parses, sorts and groups the iris dataset +/// and does some simple manipulations. +/// +/// Iterators and itertools functionality are used throughout. 
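// The example below leans on `Itertools::fold_ok` to stop at the first parse error
// and otherwise accumulate the parsed values. A minimal sketch of that pattern
// (editor's illustration, not part of the vendored example; it assumes `fold_ok`
// behaves as in upstream itertools):
//
// fn parse_all(input: &str) -> Result<Vec<f32>, std::num::ParseFloatError> {
//     use itertools::Itertools;
//     input
//         .lines()
//         .map(str::parse::<f32>)
//         .fold_ok(Vec::new(), |mut acc, x| {
//             acc.push(x);
//             acc
//         })
// }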
+ +use itertools::Itertools; +use std::collections::HashMap; +use std::iter::repeat; +use std::num::ParseFloatError; +use std::str::FromStr; + +static DATA: &'static str = include_str!("iris.data"); + +#[derive(Clone, Debug)] +struct Iris { + name: String, + data: [f32; 4], +} + +#[derive(Clone, Debug)] +enum ParseError { + Numeric(ParseFloatError), + Other(&'static str), +} + +impl From for ParseError { + fn from(err: ParseFloatError) -> Self { + ParseError::Numeric(err) + } +} + +/// Parse an Iris from a comma-separated line +impl FromStr for Iris { + type Err = ParseError; + + fn from_str(s: &str) -> Result { + let mut iris = Iris { name: "".into(), data: [0.; 4] }; + let mut parts = s.split(",").map(str::trim); + + // using Iterator::by_ref() + for (index, part) in parts.by_ref().take(4).enumerate() { + iris.data[index] = part.parse::()?; + } + if let Some(name) = parts.next() { + iris.name = name.into(); + } else { + return Err(ParseError::Other("Missing name")) + } + Ok(iris) + } +} + +fn main() { + // using Itertools::fold_results to create the result of parsing + let irises = DATA.lines() + .map(str::parse) + .fold_ok(Vec::new(), |mut v, iris: Iris| { + v.push(iris); + v + }); + let mut irises = match irises { + Err(e) => { + println!("Error parsing: {:?}", e); + std::process::exit(1); + } + Ok(data) => data, + }; + + // Sort them and group them + irises.sort_by(|a, b| Ord::cmp(&a.name, &b.name)); + + // using Iterator::cycle() + let mut plot_symbols = "+ox".chars().cycle(); + let mut symbolmap = HashMap::new(); + + // using Itertools::group_by + for (species, species_group) in &irises.iter().group_by(|iris| &iris.name) { + // assign a plot symbol + symbolmap.entry(species).or_insert_with(|| { + plot_symbols.next().unwrap() + }); + println!("{} (symbol={})", species, symbolmap[species]); + + for iris in species_group { + // using Itertools::format for lazy formatting + println!("{:>3.1}", iris.data.iter().format(", ")); + } + + } + + // Look at all combinations of the four columns + // + // See https://en.wikipedia.org/wiki/Iris_flower_data_set + // + let n = 30; // plot size + let mut plot = vec![' '; n * n]; + + // using Itertools::tuple_combinations + for (a, b) in (0..4).tuple_combinations() { + println!("Column {} vs {}:", a, b); + + // Clear plot + // + // using std::iter::repeat; + // using Itertools::set_from + plot.iter_mut().set_from(repeat(' ')); + + // using Itertools::minmax + let min_max = |data: &[Iris], col| { + data.iter() + .map(|iris| iris.data[col]) + .minmax() + .into_option() + .expect("Can't find min/max of empty iterator") + }; + let (min_x, max_x) = min_max(&irises, a); + let (min_y, max_y) = min_max(&irises, b); + + // Plot the data points + let round_to_grid = |x, min, max| ((x - min) / (max - min) * ((n - 1) as f32)) as usize; + let flip = |ix| n - 1 - ix; // reverse axis direction + + for iris in &irises { + let ix = round_to_grid(iris.data[a], min_x, max_x); + let iy = flip(round_to_grid(iris.data[b], min_y, max_y)); + plot[n * iy + ix] = symbolmap[&iris.name]; + } + + // render plot + // + // using Itertools::join + for line in plot.chunks(n) { + println!("{}", line.iter().join(" ")) + } + } +} diff --git a/rust/hw/char/pl011/vendor/itertools/meson.build b/rust/hw/char/pl011/vendor/itertools/meson.build new file mode 100644 index 0000000000..3fb976c06d --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/meson.build @@ -0,0 +1,18 @@ +_itertools_rs = static_library( + 'itertools', + files('src/lib.rs'), + gnu_symbol_visibility: 'hidden', + rust_abi: 
'rust', + rust_args: rust_args + [ + '--edition', '2018', + '--cfg', 'feature="use_std"', + '--cfg', 'feature="use_alloc"', + ], + dependencies: [ + dep_either, + ], +) + +dep_itertools = declare_dependency( + link_with: _itertools_rs, +) diff --git a/rust/hw/char/pl011/vendor/itertools/src/adaptors/coalesce.rs b/rust/hw/char/pl011/vendor/itertools/src/adaptors/coalesce.rs new file mode 100644 index 0000000000..3df7cc5823 --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/src/adaptors/coalesce.rs @@ -0,0 +1,235 @@ +use std::fmt; +use std::iter::FusedIterator; + +use crate::size_hint; + +#[must_use = "iterator adaptors are lazy and do nothing unless consumed"] +pub struct CoalesceBy +where + I: Iterator, +{ + iter: I, + last: Option, + f: F, +} + +impl Clone for CoalesceBy +where + I: Iterator, +{ + clone_fields!(last, iter, f); +} + +impl fmt::Debug for CoalesceBy +where + I: Iterator + fmt::Debug, + T: fmt::Debug, +{ + debug_fmt_fields!(CoalesceBy, iter); +} + +pub trait CoalescePredicate { + fn coalesce_pair(&mut self, t: T, item: Item) -> Result; +} + +impl Iterator for CoalesceBy +where + I: Iterator, + F: CoalescePredicate, +{ + type Item = T; + + fn next(&mut self) -> Option { + // this fuses the iterator + let last = self.last.take()?; + + let self_last = &mut self.last; + let self_f = &mut self.f; + Some( + self.iter + .try_fold(last, |last, next| match self_f.coalesce_pair(last, next) { + Ok(joined) => Ok(joined), + Err((last_, next_)) => { + *self_last = Some(next_); + Err(last_) + } + }) + .unwrap_or_else(|x| x), + ) + } + + fn size_hint(&self) -> (usize, Option) { + let (low, hi) = size_hint::add_scalar(self.iter.size_hint(), self.last.is_some() as usize); + ((low > 0) as usize, hi) + } + + fn fold(self, acc: Acc, mut fn_acc: FnAcc) -> Acc + where + FnAcc: FnMut(Acc, Self::Item) -> Acc, + { + if let Some(last) = self.last { + let mut f = self.f; + let (last, acc) = self.iter.fold((last, acc), |(last, acc), elt| { + match f.coalesce_pair(last, elt) { + Ok(joined) => (joined, acc), + Err((last_, next_)) => (next_, fn_acc(acc, last_)), + } + }); + fn_acc(acc, last) + } else { + acc + } + } +} + +impl, T> FusedIterator for CoalesceBy {} + +/// An iterator adaptor that may join together adjacent elements. +/// +/// See [`.coalesce()`](crate::Itertools::coalesce) for more information. +pub type Coalesce = CoalesceBy::Item>; + +impl CoalescePredicate for F +where + F: FnMut(T, Item) -> Result, +{ + fn coalesce_pair(&mut self, t: T, item: Item) -> Result { + self(t, item) + } +} + +/// Create a new `Coalesce`. +pub fn coalesce(mut iter: I, f: F) -> Coalesce +where + I: Iterator, +{ + Coalesce { + last: iter.next(), + iter, + f, + } +} + +/// An iterator adaptor that removes repeated duplicates, determining equality using a comparison function. +/// +/// See [`.dedup_by()`](crate::Itertools::dedup_by) or [`.dedup()`](crate::Itertools::dedup) for more information. 
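// A quick illustration of the adaptors defined in this file (editor's sketch, not
// part of the vendored sources); it assumes the public `Itertools::coalesce` and
// `Itertools::dedup` wrappers around `CoalesceBy`:
//
// fn coalesce_and_dedup_demo() {
//     use itertools::Itertools;
//
//     // `dedup` collapses *consecutive* equal elements.
//     let deduped: Vec<i32> = vec![1, 1, 2, 2, 2, 3].into_iter().dedup().collect();
//     assert_eq!(deduped, vec![1, 2, 3]);
//
//     // `coalesce` keeps merging adjacent pairs while the closure returns `Ok`.
//     let coalesced: Vec<i32> = vec![1, 2, 3, 10, 11]
//         .into_iter()
//         .coalesce(|a, b| if b - a <= 1 { Ok(b) } else { Err((a, b)) })
//         .collect();
//     assert_eq!(coalesced, vec![3, 11]);
// }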
+pub type DedupBy = CoalesceBy, ::Item>; + +#[derive(Clone)] +pub struct DedupPred2CoalescePred(DP); + +impl fmt::Debug for DedupPred2CoalescePred { + debug_fmt_fields!(DedupPred2CoalescePred,); +} + +pub trait DedupPredicate { + // TODO replace by Fn(&T, &T)->bool once Rust supports it + fn dedup_pair(&mut self, a: &T, b: &T) -> bool; +} + +impl CoalescePredicate for DedupPred2CoalescePred +where + DP: DedupPredicate, +{ + fn coalesce_pair(&mut self, t: T, item: T) -> Result { + if self.0.dedup_pair(&t, &item) { + Ok(t) + } else { + Err((t, item)) + } + } +} + +#[derive(Clone, Debug)] +pub struct DedupEq; + +impl DedupPredicate for DedupEq { + fn dedup_pair(&mut self, a: &T, b: &T) -> bool { + a == b + } +} + +impl bool> DedupPredicate for F { + fn dedup_pair(&mut self, a: &T, b: &T) -> bool { + self(a, b) + } +} + +/// Create a new `DedupBy`. +pub fn dedup_by(mut iter: I, dedup_pred: Pred) -> DedupBy +where + I: Iterator, +{ + DedupBy { + last: iter.next(), + iter, + f: DedupPred2CoalescePred(dedup_pred), + } +} + +/// An iterator adaptor that removes repeated duplicates. +/// +/// See [`.dedup()`](crate::Itertools::dedup) for more information. +pub type Dedup = DedupBy; + +/// Create a new `Dedup`. +pub fn dedup(iter: I) -> Dedup +where + I: Iterator, +{ + dedup_by(iter, DedupEq) +} + +/// An iterator adaptor that removes repeated duplicates, while keeping a count of how many +/// repeated elements were present. This will determine equality using a comparison function. +/// +/// See [`.dedup_by_with_count()`](crate::Itertools::dedup_by_with_count) or +/// [`.dedup_with_count()`](crate::Itertools::dedup_with_count) for more information. +pub type DedupByWithCount = + CoalesceBy, (usize, ::Item)>; + +#[derive(Clone, Debug)] +pub struct DedupPredWithCount2CoalescePred(DP); + +impl CoalescePredicate for DedupPredWithCount2CoalescePred +where + DP: DedupPredicate, +{ + fn coalesce_pair( + &mut self, + (c, t): (usize, T), + item: T, + ) -> Result<(usize, T), ((usize, T), (usize, T))> { + if self.0.dedup_pair(&t, &item) { + Ok((c + 1, t)) + } else { + Err(((c, t), (1, item))) + } + } +} + +/// An iterator adaptor that removes repeated duplicates, while keeping a count of how many +/// repeated elements were present. +/// +/// See [`.dedup_with_count()`](crate::Itertools::dedup_with_count) for more information. +pub type DedupWithCount = DedupByWithCount; + +/// Create a new `DedupByWithCount`. +pub fn dedup_by_with_count(mut iter: I, dedup_pred: Pred) -> DedupByWithCount +where + I: Iterator, +{ + DedupByWithCount { + last: iter.next().map(|v| (1, v)), + iter, + f: DedupPredWithCount2CoalescePred(dedup_pred), + } +} + +/// Create a new `DedupWithCount`. 
+pub fn dedup_with_count(iter: I) -> DedupWithCount +where + I: Iterator, +{ + dedup_by_with_count(iter, DedupEq) +} diff --git a/rust/hw/char/pl011/vendor/itertools/src/adaptors/map.rs b/rust/hw/char/pl011/vendor/itertools/src/adaptors/map.rs new file mode 100644 index 0000000000..cf5e5a00d5 --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/src/adaptors/map.rs @@ -0,0 +1,124 @@ +use std::iter::FromIterator; +use std::marker::PhantomData; + +#[derive(Clone, Debug)] +#[must_use = "iterator adaptors are lazy and do nothing unless consumed"] +pub struct MapSpecialCase { + iter: I, + f: F, +} + +pub trait MapSpecialCaseFn { + type Out; + fn call(&mut self, t: T) -> Self::Out; +} + +impl Iterator for MapSpecialCase +where + I: Iterator, + R: MapSpecialCaseFn, +{ + type Item = R::Out; + + fn next(&mut self) -> Option { + self.iter.next().map(|i| self.f.call(i)) + } + + fn size_hint(&self) -> (usize, Option) { + self.iter.size_hint() + } + + fn fold(self, init: Acc, mut fold_f: Fold) -> Acc + where + Fold: FnMut(Acc, Self::Item) -> Acc, + { + let mut f = self.f; + self.iter.fold(init, move |acc, v| fold_f(acc, f.call(v))) + } + + fn collect(self) -> C + where + C: FromIterator, + { + let mut f = self.f; + self.iter.map(move |v| f.call(v)).collect() + } +} + +impl DoubleEndedIterator for MapSpecialCase +where + I: DoubleEndedIterator, + R: MapSpecialCaseFn, +{ + fn next_back(&mut self) -> Option { + self.iter.next_back().map(|i| self.f.call(i)) + } +} + +impl ExactSizeIterator for MapSpecialCase +where + I: ExactSizeIterator, + R: MapSpecialCaseFn, +{ +} + +/// An iterator adapter to apply a transformation within a nested `Result::Ok`. +/// +/// See [`.map_ok()`](crate::Itertools::map_ok) for more information. +pub type MapOk = MapSpecialCase>; + +/// See [`MapOk`]. +#[deprecated(note = "Use MapOk instead", since = "0.10.0")] +pub type MapResults = MapOk; + +impl MapSpecialCaseFn> for MapSpecialCaseFnOk +where + F: FnMut(T) -> U, +{ + type Out = Result; + fn call(&mut self, t: Result) -> Self::Out { + t.map(|v| self.0(v)) + } +} + +#[derive(Clone)] +pub struct MapSpecialCaseFnOk(F); + +impl std::fmt::Debug for MapSpecialCaseFnOk { + debug_fmt_fields!(MapSpecialCaseFnOk,); +} + +/// Create a new `MapOk` iterator. +pub fn map_ok(iter: I, f: F) -> MapOk +where + I: Iterator>, + F: FnMut(T) -> U, +{ + MapSpecialCase { + iter, + f: MapSpecialCaseFnOk(f), + } +} + +/// An iterator adapter to apply `Into` conversion to each element. +/// +/// See [`.map_into()`](crate::Itertools::map_into) for more information. +pub type MapInto = MapSpecialCase>; + +impl, U> MapSpecialCaseFn for MapSpecialCaseFnInto { + type Out = U; + fn call(&mut self, t: T) -> Self::Out { + t.into() + } +} + +#[derive(Clone, Debug)] +pub struct MapSpecialCaseFnInto(PhantomData); + +/// Create a new [`MapInto`] iterator. +pub fn map_into(iter: I) -> MapInto { + MapSpecialCase { + iter, + f: MapSpecialCaseFnInto(PhantomData), + } +} diff --git a/rust/hw/char/pl011/vendor/itertools/src/adaptors/mod.rs b/rust/hw/char/pl011/vendor/itertools/src/adaptors/mod.rs new file mode 100644 index 0000000000..1695bbd655 --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/src/adaptors/mod.rs @@ -0,0 +1,1151 @@ +//! Licensed under the Apache License, Version 2.0 +//! or the MIT license +//! , at your +//! option. This file may not be copied, modified, or distributed +//! except according to those terms. 
+ +mod coalesce; +mod map; +mod multi_product; +pub use self::coalesce::*; +pub use self::map::{map_into, map_ok, MapInto, MapOk}; +#[allow(deprecated)] +pub use self::map::MapResults; +#[cfg(feature = "use_alloc")] +pub use self::multi_product::*; + +use std::fmt; +use std::iter::{Fuse, Peekable, FromIterator, FusedIterator}; +use std::marker::PhantomData; +use crate::size_hint; + +/// An iterator adaptor that alternates elements from two iterators until both +/// run out. +/// +/// This iterator is *fused*. +/// +/// See [`.interleave()`](crate::Itertools::interleave) for more information. +#[derive(Clone, Debug)] +#[must_use = "iterator adaptors are lazy and do nothing unless consumed"] +pub struct Interleave { + a: Fuse, + b: Fuse, + flag: bool, +} + +/// Create an iterator that interleaves elements in `i` and `j`. +/// +/// [`IntoIterator`] enabled version of `[Itertools::interleave]`. +pub fn interleave(i: I, j: J) -> Interleave<::IntoIter, ::IntoIter> + where I: IntoIterator, + J: IntoIterator +{ + Interleave { + a: i.into_iter().fuse(), + b: j.into_iter().fuse(), + flag: false, + } +} + +impl Iterator for Interleave + where I: Iterator, + J: Iterator +{ + type Item = I::Item; + #[inline] + fn next(&mut self) -> Option { + self.flag = !self.flag; + if self.flag { + match self.a.next() { + None => self.b.next(), + r => r, + } + } else { + match self.b.next() { + None => self.a.next(), + r => r, + } + } + } + + fn size_hint(&self) -> (usize, Option) { + size_hint::add(self.a.size_hint(), self.b.size_hint()) + } +} + +impl FusedIterator for Interleave + where I: Iterator, + J: Iterator +{} + +/// An iterator adaptor that alternates elements from the two iterators until +/// one of them runs out. +/// +/// This iterator is *fused*. +/// +/// See [`.interleave_shortest()`](crate::Itertools::interleave_shortest) +/// for more information. +#[derive(Clone, Debug)] +#[must_use = "iterator adaptors are lazy and do nothing unless consumed"] +pub struct InterleaveShortest + where I: Iterator, + J: Iterator +{ + it0: I, + it1: J, + phase: bool, // false ==> it0, true ==> it1 +} + +/// Create a new `InterleaveShortest` iterator. 
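// The two interleaving adaptors differ only in when they stop. A small sketch of
// that difference (editor's illustration, not part of the vendored sources; it
// assumes the public `Itertools::interleave` / `interleave_shortest` methods):
//
// fn interleave_demo() {
//     use itertools::Itertools;
//
//     // `interleave` keeps going until *both* inputs are exhausted.
//     let all: Vec<i32> = vec![1, 2, 3].into_iter().interleave(vec![7, 8, 9, 10]).collect();
//     assert_eq!(all, vec![1, 7, 2, 8, 3, 9, 10]);
//
//     // `interleave_shortest` stops as soon as the iterator whose turn it is runs out.
//     let short: Vec<i32> = vec![1, 2, 3].into_iter().interleave_shortest(vec![7, 8, 9, 10]).collect();
//     assert_eq!(short, vec![1, 7, 2, 8, 3, 9]);
// }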
+pub fn interleave_shortest(a: I, b: J) -> InterleaveShortest + where I: Iterator, + J: Iterator +{ + InterleaveShortest { + it0: a, + it1: b, + phase: false, + } +} + +impl Iterator for InterleaveShortest + where I: Iterator, + J: Iterator +{ + type Item = I::Item; + + #[inline] + fn next(&mut self) -> Option { + let e = if self.phase { self.it1.next() } else { self.it0.next() }; + if e.is_some() { + self.phase = !self.phase; + } + e + } + + #[inline] + fn size_hint(&self) -> (usize, Option) { + let (curr_hint, next_hint) = { + let it0_hint = self.it0.size_hint(); + let it1_hint = self.it1.size_hint(); + if self.phase { + (it1_hint, it0_hint) + } else { + (it0_hint, it1_hint) + } + }; + let (curr_lower, curr_upper) = curr_hint; + let (next_lower, next_upper) = next_hint; + let (combined_lower, combined_upper) = + size_hint::mul_scalar(size_hint::min(curr_hint, next_hint), 2); + let lower = + if curr_lower > next_lower { + combined_lower + 1 + } else { + combined_lower + }; + let upper = { + let extra_elem = match (curr_upper, next_upper) { + (_, None) => false, + (None, Some(_)) => true, + (Some(curr_max), Some(next_max)) => curr_max > next_max, + }; + if extra_elem { + combined_upper.and_then(|x| x.checked_add(1)) + } else { + combined_upper + } + }; + (lower, upper) + } +} + +impl FusedIterator for InterleaveShortest + where I: FusedIterator, + J: FusedIterator +{} + +#[derive(Clone, Debug)] +/// An iterator adaptor that allows putting back a single +/// item to the front of the iterator. +/// +/// Iterator element type is `I::Item`. +pub struct PutBack + where I: Iterator +{ + top: Option, + iter: I, +} + +/// Create an iterator where you can put back a single item +pub fn put_back(iterable: I) -> PutBack + where I: IntoIterator +{ + PutBack { + top: None, + iter: iterable.into_iter(), + } +} + +impl PutBack + where I: Iterator +{ + /// put back value `value` (builder method) + pub fn with_value(mut self, value: I::Item) -> Self { + self.put_back(value); + self + } + + /// Split the `PutBack` into its parts. + #[inline] + pub fn into_parts(self) -> (Option, I) { + let PutBack{top, iter} = self; + (top, iter) + } + + /// Put back a single value to the front of the iterator. + /// + /// If a value is already in the put back slot, it is overwritten. 
+ #[inline] + pub fn put_back(&mut self, x: I::Item) { + self.top = Some(x); + } +} + +impl Iterator for PutBack + where I: Iterator +{ + type Item = I::Item; + #[inline] + fn next(&mut self) -> Option { + match self.top { + None => self.iter.next(), + ref mut some => some.take(), + } + } + #[inline] + fn size_hint(&self) -> (usize, Option) { + // Not ExactSizeIterator because size may be larger than usize + size_hint::add_scalar(self.iter.size_hint(), self.top.is_some() as usize) + } + + fn count(self) -> usize { + self.iter.count() + (self.top.is_some() as usize) + } + + fn last(self) -> Option { + self.iter.last().or(self.top) + } + + fn nth(&mut self, n: usize) -> Option { + match self.top { + None => self.iter.nth(n), + ref mut some => { + if n == 0 { + some.take() + } else { + *some = None; + self.iter.nth(n - 1) + } + } + } + } + + fn all(&mut self, mut f: G) -> bool + where G: FnMut(Self::Item) -> bool + { + if let Some(elt) = self.top.take() { + if !f(elt) { + return false; + } + } + self.iter.all(f) + } + + fn fold(mut self, init: Acc, mut f: G) -> Acc + where G: FnMut(Acc, Self::Item) -> Acc, + { + let mut accum = init; + if let Some(elt) = self.top.take() { + accum = f(accum, elt); + } + self.iter.fold(accum, f) + } +} + +#[derive(Debug, Clone)] +/// An iterator adaptor that iterates over the cartesian product of +/// the element sets of two iterators `I` and `J`. +/// +/// Iterator element type is `(I::Item, J::Item)`. +/// +/// See [`.cartesian_product()`](crate::Itertools::cartesian_product) for more information. +#[must_use = "iterator adaptors are lazy and do nothing unless consumed"] +pub struct Product + where I: Iterator +{ + a: I, + a_cur: Option, + b: J, + b_orig: J, +} + +/// Create a new cartesian product iterator +/// +/// Iterator element type is `(I::Item, J::Item)`. +pub fn cartesian_product(mut i: I, j: J) -> Product + where I: Iterator, + J: Clone + Iterator, + I::Item: Clone +{ + Product { + a_cur: i.next(), + a: i, + b: j.clone(), + b_orig: j, + } +} + +impl Iterator for Product + where I: Iterator, + J: Clone + Iterator, + I::Item: Clone +{ + type Item = (I::Item, J::Item); + + fn next(&mut self) -> Option { + let elt_b = match self.b.next() { + None => { + self.b = self.b_orig.clone(); + match self.b.next() { + None => return None, + Some(x) => { + self.a_cur = self.a.next(); + x + } + } + } + Some(x) => x + }; + self.a_cur.as_ref().map(|a| (a.clone(), elt_b)) + } + + fn size_hint(&self) -> (usize, Option) { + let has_cur = self.a_cur.is_some() as usize; + // Not ExactSizeIterator because size may be larger than usize + let (b_min, b_max) = self.b.size_hint(); + + // Compute a * b_orig + b for both lower and upper bound + size_hint::add( + size_hint::mul(self.a.size_hint(), self.b_orig.size_hint()), + (b_min * has_cur, b_max.map(move |x| x * has_cur))) + } + + fn fold(mut self, mut accum: Acc, mut f: G) -> Acc + where G: FnMut(Acc, Self::Item) -> Acc, + { + // use a split loop to handle the loose a_cur as well as avoiding to + // clone b_orig at the end. + if let Some(mut a) = self.a_cur.take() { + let mut b = self.b; + loop { + accum = b.fold(accum, |acc, elt| f(acc, (a.clone(), elt))); + + // we can only continue iterating a if we had a first element; + if let Some(next_a) = self.a.next() { + b = self.b_orig.clone(); + a = next_a; + } else { + break; + } + } + } + accum + } +} + +impl FusedIterator for Product + where I: FusedIterator, + J: Clone + FusedIterator, + I::Item: Clone +{} + +/// A “meta iterator adaptor”. 
Its closure receives a reference to the iterator +/// and may pick off as many elements as it likes, to produce the next iterator element. +/// +/// Iterator element type is *X*, if the return type of `F` is *Option\*. +/// +/// See [`.batching()`](crate::Itertools::batching) for more information. +#[derive(Clone)] +#[must_use = "iterator adaptors are lazy and do nothing unless consumed"] +pub struct Batching { + f: F, + iter: I, +} + +impl fmt::Debug for Batching where I: fmt::Debug { + debug_fmt_fields!(Batching, iter); +} + +/// Create a new Batching iterator. +pub fn batching(iter: I, f: F) -> Batching { + Batching { f, iter } +} + +impl Iterator for Batching + where I: Iterator, + F: FnMut(&mut I) -> Option +{ + type Item = B; + #[inline] + fn next(&mut self) -> Option { + (self.f)(&mut self.iter) + } +} + +/// An iterator adaptor that steps a number elements in the base iterator +/// for each iteration. +/// +/// The iterator steps by yielding the next element from the base iterator, +/// then skipping forward *n-1* elements. +/// +/// See [`.step()`](crate::Itertools::step) for more information. +#[deprecated(note="Use std .step_by() instead", since="0.8.0")] +#[allow(deprecated)] +#[derive(Clone, Debug)] +#[must_use = "iterator adaptors are lazy and do nothing unless consumed"] +pub struct Step { + iter: Fuse, + skip: usize, +} + +/// Create a `Step` iterator. +/// +/// **Panics** if the step is 0. +#[allow(deprecated)] +pub fn step(iter: I, step: usize) -> Step + where I: Iterator +{ + assert!(step != 0); + Step { + iter: iter.fuse(), + skip: step - 1, + } +} + +#[allow(deprecated)] +impl Iterator for Step + where I: Iterator +{ + type Item = I::Item; + #[inline] + fn next(&mut self) -> Option { + let elt = self.iter.next(); + if self.skip > 0 { + self.iter.nth(self.skip - 1); + } + elt + } + + fn size_hint(&self) -> (usize, Option) { + let (low, high) = self.iter.size_hint(); + let div = |x: usize| { + if x == 0 { + 0 + } else { + 1 + (x - 1) / (self.skip + 1) + } + }; + (div(low), high.map(div)) + } +} + +// known size +#[allow(deprecated)] +impl ExactSizeIterator for Step + where I: ExactSizeIterator +{} + +pub trait MergePredicate { + fn merge_pred(&mut self, a: &T, b: &T) -> bool; +} + +#[derive(Clone, Debug)] +pub struct MergeLte; + +impl MergePredicate for MergeLte { + fn merge_pred(&mut self, a: &T, b: &T) -> bool { + a <= b + } +} + +/// An iterator adaptor that merges the two base iterators in ascending order. +/// If both base iterators are sorted (ascending), the result is sorted. +/// +/// Iterator element type is `I::Item`. +/// +/// See [`.merge()`](crate::Itertools::merge_by) for more information. +pub type Merge = MergeBy; + +/// Create an iterator that merges elements in `i` and `j`. +/// +/// [`IntoIterator`] enabled version of [`Itertools::merge`](crate::Itertools::merge). +/// +/// ``` +/// use itertools::merge; +/// +/// for elt in merge(&[1, 2, 3], &[2, 3, 4]) { +/// /* loop body */ +/// } +/// ``` +pub fn merge(i: I, j: J) -> Merge<::IntoIter, ::IntoIter> + where I: IntoIterator, + J: IntoIterator, + I::Item: PartialOrd +{ + merge_by_new(i, j, MergeLte) +} + +/// An iterator adaptor that merges the two base iterators in ascending order. +/// If both base iterators are sorted (ascending), the result is sorted. +/// +/// Iterator element type is `I::Item`. +/// +/// See [`.merge_by()`](crate::Itertools::merge_by) for more information. 
+#[must_use = "iterator adaptors are lazy and do nothing unless consumed"] +pub struct MergeBy + where I: Iterator, + J: Iterator +{ + a: Peekable, + b: Peekable, + fused: Option, + cmp: F, +} + +impl fmt::Debug for MergeBy + where I: Iterator + fmt::Debug, J: Iterator + fmt::Debug, + I::Item: fmt::Debug, +{ + debug_fmt_fields!(MergeBy, a, b); +} + +implbool> MergePredicate for F { + fn merge_pred(&mut self, a: &T, b: &T) -> bool { + self(a, b) + } +} + +/// Create a `MergeBy` iterator. +pub fn merge_by_new(a: I, b: J, cmp: F) -> MergeBy + where I: IntoIterator, + J: IntoIterator, + F: MergePredicate, +{ + MergeBy { + a: a.into_iter().peekable(), + b: b.into_iter().peekable(), + fused: None, + cmp, + } +} + +impl Clone for MergeBy + where I: Iterator, + J: Iterator, + Peekable: Clone, + Peekable: Clone, + F: Clone +{ + clone_fields!(a, b, fused, cmp); +} + +impl Iterator for MergeBy + where I: Iterator, + J: Iterator, + F: MergePredicate +{ + type Item = I::Item; + + fn next(&mut self) -> Option { + let less_than = match self.fused { + Some(lt) => lt, + None => match (self.a.peek(), self.b.peek()) { + (Some(a), Some(b)) => self.cmp.merge_pred(a, b), + (Some(_), None) => { + self.fused = Some(true); + true + } + (None, Some(_)) => { + self.fused = Some(false); + false + } + (None, None) => return None, + } + }; + if less_than { + self.a.next() + } else { + self.b.next() + } + } + + fn size_hint(&self) -> (usize, Option) { + // Not ExactSizeIterator because size may be larger than usize + size_hint::add(self.a.size_hint(), self.b.size_hint()) + } +} + +impl FusedIterator for MergeBy + where I: FusedIterator, + J: FusedIterator, + F: MergePredicate +{} + +/// An iterator adaptor that borrows from a `Clone`-able iterator +/// to only pick off elements while the predicate returns `true`. +/// +/// See [`.take_while_ref()`](crate::Itertools::take_while_ref) for more information. +#[must_use = "iterator adaptors are lazy and do nothing unless consumed"] +pub struct TakeWhileRef<'a, I: 'a, F> { + iter: &'a mut I, + f: F, +} + +impl<'a, I, F> fmt::Debug for TakeWhileRef<'a, I, F> + where I: Iterator + fmt::Debug, +{ + debug_fmt_fields!(TakeWhileRef, iter); +} + +/// Create a new `TakeWhileRef` from a reference to clonable iterator. +pub fn take_while_ref(iter: &mut I, f: F) -> TakeWhileRef + where I: Iterator + Clone +{ + TakeWhileRef { iter, f } +} + +impl<'a, I, F> Iterator for TakeWhileRef<'a, I, F> + where I: Iterator + Clone, + F: FnMut(&I::Item) -> bool +{ + type Item = I::Item; + + fn next(&mut self) -> Option { + let old = self.iter.clone(); + match self.iter.next() { + None => None, + Some(elt) => { + if (self.f)(&elt) { + Some(elt) + } else { + *self.iter = old; + None + } + } + } + } + + fn size_hint(&self) -> (usize, Option) { + (0, self.iter.size_hint().1) + } +} + +/// An iterator adaptor that filters `Option` iterator elements +/// and produces `A`. Stops on the first `None` encountered. +/// +/// See [`.while_some()`](crate::Itertools::while_some) for more information. +#[derive(Clone, Debug)] +#[must_use = "iterator adaptors are lazy and do nothing unless consumed"] +pub struct WhileSome { + iter: I, +} + +/// Create a new `WhileSome`. 
+pub fn while_some(iter: I) -> WhileSome { + WhileSome { iter } +} + +impl Iterator for WhileSome + where I: Iterator> +{ + type Item = A; + + fn next(&mut self) -> Option { + match self.iter.next() { + None | Some(None) => None, + Some(elt) => elt, + } + } + + fn size_hint(&self) -> (usize, Option) { + (0, self.iter.size_hint().1) + } +} + +/// An iterator to iterate through all combinations in a `Clone`-able iterator that produces tuples +/// of a specific size. +/// +/// See [`.tuple_combinations()`](crate::Itertools::tuple_combinations) for more +/// information. +#[derive(Clone, Debug)] +#[must_use = "iterator adaptors are lazy and do nothing unless consumed"] +pub struct TupleCombinations + where I: Iterator, + T: HasCombination +{ + iter: T::Combination, + _mi: PhantomData, +} + +pub trait HasCombination: Sized { + type Combination: From + Iterator; +} + +/// Create a new `TupleCombinations` from a clonable iterator. +pub fn tuple_combinations(iter: I) -> TupleCombinations + where I: Iterator + Clone, + I::Item: Clone, + T: HasCombination, +{ + TupleCombinations { + iter: T::Combination::from(iter), + _mi: PhantomData, + } +} + +impl Iterator for TupleCombinations + where I: Iterator, + T: HasCombination, +{ + type Item = T; + + fn next(&mut self) -> Option { + self.iter.next() + } +} + +impl FusedIterator for TupleCombinations + where I: FusedIterator, + T: HasCombination, +{} + +#[derive(Clone, Debug)] +pub struct Tuple1Combination { + iter: I, +} + +impl From for Tuple1Combination { + fn from(iter: I) -> Self { + Tuple1Combination { iter } + } +} + +impl Iterator for Tuple1Combination { + type Item = (I::Item,); + + fn next(&mut self) -> Option { + self.iter.next().map(|x| (x,)) + } +} + +impl HasCombination for (I::Item,) { + type Combination = Tuple1Combination; +} + +macro_rules! impl_tuple_combination { + ($C:ident $P:ident ; $($X:ident)*) => ( + #[derive(Clone, Debug)] + pub struct $C { + item: Option, + iter: I, + c: $P, + } + + impl From for $C { + fn from(mut iter: I) -> Self { + Self { + item: iter.next(), + iter: iter.clone(), + c: iter.into(), + } + } + } + + impl From for $C> { + fn from(iter: I) -> Self { + Self::from(iter.fuse()) + } + } + + impl Iterator for $C + where I: Iterator + Clone, + I::Item: Clone + { + type Item = (A, $(ignore_ident!($X, A)),*); + + fn next(&mut self) -> Option { + if let Some(($($X),*,)) = self.c.next() { + let z = self.item.clone().unwrap(); + Some((z, $($X),*)) + } else { + self.item = self.iter.next(); + self.item.clone().and_then(|z| { + self.c = self.iter.clone().into(); + self.c.next().map(|($($X),*,)| (z, $($X),*)) + }) + } + } + } + + impl HasCombination for (A, $(ignore_ident!($X, A)),*) + where I: Iterator + Clone, + I::Item: Clone + { + type Combination = $C>; + } + ) +} + +// This snippet generates the twelve `impl_tuple_combination!` invocations: +// use core::iter; +// use itertools::Itertools; +// +// for i in 2..=12 { +// println!("impl_tuple_combination!(Tuple{arity}Combination Tuple{prev}Combination; {idents});", +// arity = i, +// prev = i - 1, +// idents = ('a'..'z').take(i - 1).join(" "), +// ); +// } +// It could probably be replaced by a bit more macro cleverness. 
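// For reference, this is the kind of output the generated Tuple*Combination
// adaptors produce (editor's illustration, not part of the vendored sources; it
// assumes the public `Itertools::tuple_combinations` entry point):
//
// fn tuple_combinations_demo() {
//     use itertools::Itertools;
//     let pairs: Vec<(i32, i32)> = (1..4).tuple_combinations().collect();
//     assert_eq!(pairs, vec![(1, 2), (1, 3), (2, 3)]);
// }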
+impl_tuple_combination!(Tuple2Combination Tuple1Combination; a); +impl_tuple_combination!(Tuple3Combination Tuple2Combination; a b); +impl_tuple_combination!(Tuple4Combination Tuple3Combination; a b c); +impl_tuple_combination!(Tuple5Combination Tuple4Combination; a b c d); +impl_tuple_combination!(Tuple6Combination Tuple5Combination; a b c d e); +impl_tuple_combination!(Tuple7Combination Tuple6Combination; a b c d e f); +impl_tuple_combination!(Tuple8Combination Tuple7Combination; a b c d e f g); +impl_tuple_combination!(Tuple9Combination Tuple8Combination; a b c d e f g h); +impl_tuple_combination!(Tuple10Combination Tuple9Combination; a b c d e f g h i); +impl_tuple_combination!(Tuple11Combination Tuple10Combination; a b c d e f g h i j); +impl_tuple_combination!(Tuple12Combination Tuple11Combination; a b c d e f g h i j k); + +/// An iterator adapter to filter values within a nested `Result::Ok`. +/// +/// See [`.filter_ok()`](crate::Itertools::filter_ok) for more information. +#[derive(Clone)] +#[must_use = "iterator adaptors are lazy and do nothing unless consumed"] +pub struct FilterOk { + iter: I, + f: F +} + +impl fmt::Debug for FilterOk +where + I: fmt::Debug, +{ + debug_fmt_fields!(FilterOk, iter); +} + +/// Create a new `FilterOk` iterator. +pub fn filter_ok(iter: I, f: F) -> FilterOk + where I: Iterator>, + F: FnMut(&T) -> bool, +{ + FilterOk { + iter, + f, + } +} + +impl Iterator for FilterOk + where I: Iterator>, + F: FnMut(&T) -> bool, +{ + type Item = Result; + + fn next(&mut self) -> Option { + loop { + match self.iter.next() { + Some(Ok(v)) => { + if (self.f)(&v) { + return Some(Ok(v)); + } + }, + Some(Err(e)) => return Some(Err(e)), + None => return None, + } + } + } + + fn size_hint(&self) -> (usize, Option) { + (0, self.iter.size_hint().1) + } + + fn fold(self, init: Acc, fold_f: Fold) -> Acc + where Fold: FnMut(Acc, Self::Item) -> Acc, + { + let mut f = self.f; + self.iter.filter(|v| { + v.as_ref().map(&mut f).unwrap_or(true) + }).fold(init, fold_f) + } + + fn collect(self) -> C + where C: FromIterator + { + let mut f = self.f; + self.iter.filter(|v| { + v.as_ref().map(&mut f).unwrap_or(true) + }).collect() + } +} + +impl FusedIterator for FilterOk + where I: FusedIterator>, + F: FnMut(&T) -> bool, +{} + +/// An iterator adapter to filter and apply a transformation on values within a nested `Result::Ok`. +/// +/// See [`.filter_map_ok()`](crate::Itertools::filter_map_ok) for more information. +#[must_use = "iterator adaptors are lazy and do nothing unless consumed"] +pub struct FilterMapOk { + iter: I, + f: F +} + +impl fmt::Debug for FilterMapOk +where + I: fmt::Debug, +{ + debug_fmt_fields!(FilterMapOk, iter); +} + +fn transpose_result(result: Result, E>) -> Option> { + match result { + Ok(Some(v)) => Some(Ok(v)), + Ok(None) => None, + Err(e) => Some(Err(e)), + } +} + +/// Create a new `FilterOk` iterator. 
+pub fn filter_map_ok(iter: I, f: F) -> FilterMapOk + where I: Iterator>, + F: FnMut(T) -> Option, +{ + FilterMapOk { + iter, + f, + } +} + +impl Iterator for FilterMapOk + where I: Iterator>, + F: FnMut(T) -> Option, +{ + type Item = Result; + + fn next(&mut self) -> Option { + loop { + match self.iter.next() { + Some(Ok(v)) => { + if let Some(v) = (self.f)(v) { + return Some(Ok(v)); + } + }, + Some(Err(e)) => return Some(Err(e)), + None => return None, + } + } + } + + fn size_hint(&self) -> (usize, Option) { + (0, self.iter.size_hint().1) + } + + fn fold(self, init: Acc, fold_f: Fold) -> Acc + where Fold: FnMut(Acc, Self::Item) -> Acc, + { + let mut f = self.f; + self.iter.filter_map(|v| { + transpose_result(v.map(&mut f)) + }).fold(init, fold_f) + } + + fn collect(self) -> C + where C: FromIterator + { + let mut f = self.f; + self.iter.filter_map(|v| { + transpose_result(v.map(&mut f)) + }).collect() + } +} + +impl FusedIterator for FilterMapOk + where I: FusedIterator>, + F: FnMut(T) -> Option, +{} + +/// An iterator adapter to get the positions of each element that matches a predicate. +/// +/// See [`.positions()`](crate::Itertools::positions) for more information. +#[derive(Clone)] +#[must_use = "iterator adaptors are lazy and do nothing unless consumed"] +pub struct Positions { + iter: I, + f: F, + count: usize, +} + +impl fmt::Debug for Positions +where + I: fmt::Debug, +{ + debug_fmt_fields!(Positions, iter, count); +} + +/// Create a new `Positions` iterator. +pub fn positions(iter: I, f: F) -> Positions + where I: Iterator, + F: FnMut(I::Item) -> bool, +{ + Positions { + iter, + f, + count: 0 + } +} + +impl Iterator for Positions + where I: Iterator, + F: FnMut(I::Item) -> bool, +{ + type Item = usize; + + fn next(&mut self) -> Option { + while let Some(v) = self.iter.next() { + let i = self.count; + self.count = i + 1; + if (self.f)(v) { + return Some(i); + } + } + None + } + + fn size_hint(&self) -> (usize, Option) { + (0, self.iter.size_hint().1) + } +} + +impl DoubleEndedIterator for Positions + where I: DoubleEndedIterator + ExactSizeIterator, + F: FnMut(I::Item) -> bool, +{ + fn next_back(&mut self) -> Option { + while let Some(v) = self.iter.next_back() { + if (self.f)(v) { + return Some(self.count + self.iter.len()) + } + } + None + } +} + +impl FusedIterator for Positions + where I: FusedIterator, + F: FnMut(I::Item) -> bool, +{} + +/// An iterator adapter to apply a mutating function to each element before yielding it. +/// +/// See [`.update()`](crate::Itertools::update) for more information. +#[derive(Clone)] +#[must_use = "iterator adaptors are lazy and do nothing unless consumed"] +pub struct Update { + iter: I, + f: F, +} + +impl fmt::Debug for Update +where + I: fmt::Debug, +{ + debug_fmt_fields!(Update, iter); +} + +/// Create a new `Update` iterator. 
+pub fn update(iter: I, f: F) -> Update +where + I: Iterator, + F: FnMut(&mut I::Item), +{ + Update { iter, f } +} + +impl Iterator for Update +where + I: Iterator, + F: FnMut(&mut I::Item), +{ + type Item = I::Item; + + fn next(&mut self) -> Option { + if let Some(mut v) = self.iter.next() { + (self.f)(&mut v); + Some(v) + } else { + None + } + } + + fn size_hint(&self) -> (usize, Option) { + self.iter.size_hint() + } + + fn fold(self, init: Acc, mut g: G) -> Acc + where G: FnMut(Acc, Self::Item) -> Acc, + { + let mut f = self.f; + self.iter.fold(init, move |acc, mut v| { f(&mut v); g(acc, v) }) + } + + // if possible, re-use inner iterator specializations in collect + fn collect(self) -> C + where C: FromIterator + { + let mut f = self.f; + self.iter.map(move |mut v| { f(&mut v); v }).collect() + } +} + +impl ExactSizeIterator for Update +where + I: ExactSizeIterator, + F: FnMut(&mut I::Item), +{} + +impl DoubleEndedIterator for Update +where + I: DoubleEndedIterator, + F: FnMut(&mut I::Item), +{ + fn next_back(&mut self) -> Option { + if let Some(mut v) = self.iter.next_back() { + (self.f)(&mut v); + Some(v) + } else { + None + } + } +} + +impl FusedIterator for Update +where + I: FusedIterator, + F: FnMut(&mut I::Item), +{} diff --git a/rust/hw/char/pl011/vendor/itertools/src/adaptors/multi_product.rs b/rust/hw/char/pl011/vendor/itertools/src/adaptors/multi_product.rs new file mode 100644 index 0000000000..0b38406987 --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/src/adaptors/multi_product.rs @@ -0,0 +1,230 @@ +#![cfg(feature = "use_alloc")] + +use crate::size_hint; +use crate::Itertools; + +use alloc::vec::Vec; + +#[derive(Clone)] +/// An iterator adaptor that iterates over the cartesian product of +/// multiple iterators of type `I`. +/// +/// An iterator element type is `Vec`. +/// +/// See [`.multi_cartesian_product()`](crate::Itertools::multi_cartesian_product) +/// for more information. +#[must_use = "iterator adaptors are lazy and do nothing unless consumed"] +pub struct MultiProduct(Vec>) + where I: Iterator + Clone, + I::Item: Clone; + +impl std::fmt::Debug for MultiProduct +where + I: Iterator + Clone + std::fmt::Debug, + I::Item: Clone + std::fmt::Debug, +{ + debug_fmt_fields!(CoalesceBy, 0); +} + +/// Create a new cartesian product iterator over an arbitrary number +/// of iterators of the same type. +/// +/// Iterator element is of type `Vec`. +pub fn multi_cartesian_product(iters: H) -> MultiProduct<::IntoIter> + where H: Iterator, + H::Item: IntoIterator, + ::IntoIter: Clone, + ::Item: Clone +{ + MultiProduct(iters.map(|i| MultiProductIter::new(i.into_iter())).collect()) +} + +#[derive(Clone, Debug)] +/// Holds the state of a single iterator within a `MultiProduct`. +struct MultiProductIter + where I: Iterator + Clone, + I::Item: Clone +{ + cur: Option, + iter: I, + iter_orig: I, +} + +/// Holds the current state during an iteration of a `MultiProduct`. +#[derive(Debug)] +enum MultiProductIterState { + StartOfIter, + MidIter { on_first_iter: bool }, +} + +impl MultiProduct + where I: Iterator + Clone, + I::Item: Clone +{ + /// Iterates the rightmost iterator, then recursively iterates iterators + /// to the left if necessary. + /// + /// Returns true if the iteration succeeded, else false. 
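// The odometer-style iteration implemented below (rightmost iterator advances
// fastest, carrying left on wrap-around) shows up in the output order. Editor's
// illustration, not part of the vendored sources; it assumes the public
// `Itertools::multi_cartesian_product` wrapper:
//
// fn multi_product_demo() {
//     use itertools::Itertools;
//     let rows: Vec<Vec<i32>> = (0..2).map(|_| 0..2).multi_cartesian_product().collect();
//     assert_eq!(rows, vec![vec![0, 0], vec![0, 1], vec![1, 0], vec![1, 1]]);
// }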
+ fn iterate_last( + multi_iters: &mut [MultiProductIter], + mut state: MultiProductIterState + ) -> bool { + use self::MultiProductIterState::*; + + if let Some((last, rest)) = multi_iters.split_last_mut() { + let on_first_iter = match state { + StartOfIter => { + let on_first_iter = !last.in_progress(); + state = MidIter { on_first_iter }; + on_first_iter + }, + MidIter { on_first_iter } => on_first_iter + }; + + if !on_first_iter { + last.iterate(); + } + + if last.in_progress() { + true + } else if MultiProduct::iterate_last(rest, state) { + last.reset(); + last.iterate(); + // If iterator is None twice consecutively, then iterator is + // empty; whole product is empty. + last.in_progress() + } else { + false + } + } else { + // Reached end of iterator list. On initialisation, return true. + // At end of iteration (final iterator finishes), finish. + match state { + StartOfIter => false, + MidIter { on_first_iter } => on_first_iter + } + } + } + + /// Returns the unwrapped value of the next iteration. + fn curr_iterator(&self) -> Vec { + self.0.iter().map(|multi_iter| { + multi_iter.cur.clone().unwrap() + }).collect() + } + + /// Returns true if iteration has started and has not yet finished; false + /// otherwise. + fn in_progress(&self) -> bool { + if let Some(last) = self.0.last() { + last.in_progress() + } else { + false + } + } +} + +impl MultiProductIter + where I: Iterator + Clone, + I::Item: Clone +{ + fn new(iter: I) -> Self { + MultiProductIter { + cur: None, + iter: iter.clone(), + iter_orig: iter + } + } + + /// Iterate the managed iterator. + fn iterate(&mut self) { + self.cur = self.iter.next(); + } + + /// Reset the managed iterator. + fn reset(&mut self) { + self.iter = self.iter_orig.clone(); + } + + /// Returns true if the current iterator has been started and has not yet + /// finished; false otherwise. 
+ fn in_progress(&self) -> bool { + self.cur.is_some() + } +} + +impl Iterator for MultiProduct + where I: Iterator + Clone, + I::Item: Clone +{ + type Item = Vec; + + fn next(&mut self) -> Option { + if MultiProduct::iterate_last( + &mut self.0, + MultiProductIterState::StartOfIter + ) { + Some(self.curr_iterator()) + } else { + None + } + } + + fn count(self) -> usize { + if self.0.is_empty() { + return 0; + } + + if !self.in_progress() { + return self.0.into_iter().fold(1, |acc, multi_iter| { + acc * multi_iter.iter.count() + }); + } + + self.0.into_iter().fold( + 0, + |acc, MultiProductIter { iter, iter_orig, cur: _ }| { + let total_count = iter_orig.count(); + let cur_count = iter.count(); + acc * total_count + cur_count + } + ) + } + + fn size_hint(&self) -> (usize, Option) { + // Not ExactSizeIterator because size may be larger than usize + if self.0.is_empty() { + return (0, Some(0)); + } + + if !self.in_progress() { + return self.0.iter().fold((1, Some(1)), |acc, multi_iter| { + size_hint::mul(acc, multi_iter.iter.size_hint()) + }); + } + + self.0.iter().fold( + (0, Some(0)), + |acc, &MultiProductIter { ref iter, ref iter_orig, cur: _ }| { + let cur_size = iter.size_hint(); + let total_size = iter_orig.size_hint(); + size_hint::add(size_hint::mul(acc, total_size), cur_size) + } + ) + } + + fn last(self) -> Option { + let iter_count = self.0.len(); + + let lasts: Self::Item = self.0.into_iter() + .map(|multi_iter| multi_iter.iter.last()) + .while_some() + .collect(); + + if lasts.len() == iter_count { + Some(lasts) + } else { + None + } + } +} diff --git a/rust/hw/char/pl011/vendor/itertools/src/combinations.rs b/rust/hw/char/pl011/vendor/itertools/src/combinations.rs new file mode 100644 index 0000000000..68a59c5e4d --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/src/combinations.rs @@ -0,0 +1,128 @@ +use std::fmt; +use std::iter::FusedIterator; + +use super::lazy_buffer::LazyBuffer; +use alloc::vec::Vec; + +/// An iterator to iterate through all the `k`-length combinations in an iterator. +/// +/// See [`.combinations()`](crate::Itertools::combinations) for more information. +#[must_use = "iterator adaptors are lazy and do nothing unless consumed"] +pub struct Combinations { + indices: Vec, + pool: LazyBuffer, + first: bool, +} + +impl Clone for Combinations + where I: Clone + Iterator, + I::Item: Clone, +{ + clone_fields!(indices, pool, first); +} + +impl fmt::Debug for Combinations + where I: Iterator + fmt::Debug, + I::Item: fmt::Debug, +{ + debug_fmt_fields!(Combinations, indices, pool, first); +} + +/// Create a new `Combinations` from a clonable iterator. +pub fn combinations(iter: I, k: usize) -> Combinations + where I: Iterator +{ + let mut pool = LazyBuffer::new(iter); + pool.prefill(k); + + Combinations { + indices: (0..k).collect(), + pool, + first: true, + } +} + +impl Combinations { + /// Returns the length of a combination produced by this iterator. + #[inline] + pub fn k(&self) -> usize { self.indices.len() } + + /// Returns the (current) length of the pool from which combination elements are + /// selected. This value can change between invocations of [`next`](Combinations::next). + #[inline] + pub fn n(&self) -> usize { self.pool.len() } + + /// Returns a reference to the source iterator. + #[inline] + pub(crate) fn src(&self) -> &I { &self.pool.it } + + /// Resets this `Combinations` back to an initial state for combinations of length + /// `k` over the same pool data source. 
If `k` is larger than the current length + /// of the data pool an attempt is made to prefill the pool so that it holds `k` + /// elements. + pub(crate) fn reset(&mut self, k: usize) { + self.first = true; + + if k < self.indices.len() { + self.indices.truncate(k); + for i in 0..k { + self.indices[i] = i; + } + + } else { + for i in 0..self.indices.len() { + self.indices[i] = i; + } + self.indices.extend(self.indices.len()..k); + self.pool.prefill(k); + } + } +} + +impl Iterator for Combinations + where I: Iterator, + I::Item: Clone +{ + type Item = Vec; + fn next(&mut self) -> Option { + if self.first { + if self.k() > self.n() { + return None; + } + self.first = false; + } else if self.indices.is_empty() { + return None; + } else { + // Scan from the end, looking for an index to increment + let mut i: usize = self.indices.len() - 1; + + // Check if we need to consume more from the iterator + if self.indices[i] == self.pool.len() - 1 { + self.pool.get_next(); // may change pool size + } + + while self.indices[i] == i + self.pool.len() - self.indices.len() { + if i > 0 { + i -= 1; + } else { + // Reached the last combination + return None; + } + } + + // Increment index, and reset the ones to its right + self.indices[i] += 1; + for j in i+1..self.indices.len() { + self.indices[j] = self.indices[j - 1] + 1; + } + } + + // Create result vector based on the indices + Some(self.indices.iter().map(|i| self.pool[*i].clone()).collect()) + } +} + +impl FusedIterator for Combinations + where I: Iterator, + I::Item: Clone +{} diff --git a/rust/hw/char/pl011/vendor/itertools/src/combinations_with_replacement.rs b/rust/hw/char/pl011/vendor/itertools/src/combinations_with_replacement.rs new file mode 100644 index 0000000000..0fec9671ac --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/src/combinations_with_replacement.rs @@ -0,0 +1,109 @@ +use alloc::vec::Vec; +use std::fmt; +use std::iter::FusedIterator; + +use super::lazy_buffer::LazyBuffer; + +/// An iterator to iterate through all the `n`-length combinations in an iterator, with replacement. +/// +/// See [`.combinations_with_replacement()`](crate::Itertools::combinations_with_replacement) +/// for more information. +#[derive(Clone)] +pub struct CombinationsWithReplacement +where + I: Iterator, + I::Item: Clone, +{ + indices: Vec, + pool: LazyBuffer, + first: bool, +} + +impl fmt::Debug for CombinationsWithReplacement +where + I: Iterator + fmt::Debug, + I::Item: fmt::Debug + Clone, +{ + debug_fmt_fields!(Combinations, indices, pool, first); +} + +impl CombinationsWithReplacement +where + I: Iterator, + I::Item: Clone, +{ + /// Map the current mask over the pool to get an output combination + fn current(&self) -> Vec { + self.indices.iter().map(|i| self.pool[*i].clone()).collect() + } +} + +/// Create a new `CombinationsWithReplacement` from a clonable iterator. 
+pub fn combinations_with_replacement(iter: I, k: usize) -> CombinationsWithReplacement +where + I: Iterator, + I::Item: Clone, +{ + let indices: Vec = alloc::vec![0; k]; + let pool: LazyBuffer = LazyBuffer::new(iter); + + CombinationsWithReplacement { + indices, + pool, + first: true, + } +} + +impl Iterator for CombinationsWithReplacement +where + I: Iterator, + I::Item: Clone, +{ + type Item = Vec; + fn next(&mut self) -> Option { + // If this is the first iteration, return early + if self.first { + // In empty edge cases, stop iterating immediately + return if !(self.indices.is_empty() || self.pool.get_next()) { + None + // Otherwise, yield the initial state + } else { + self.first = false; + Some(self.current()) + }; + } + + // Check if we need to consume more from the iterator + // This will run while we increment our first index digit + self.pool.get_next(); + + // Work out where we need to update our indices + let mut increment: Option<(usize, usize)> = None; + for (i, indices_int) in self.indices.iter().enumerate().rev() { + if *indices_int < self.pool.len()-1 { + increment = Some((i, indices_int + 1)); + break; + } + } + + match increment { + // If we can update the indices further + Some((increment_from, increment_value)) => { + // We need to update the rightmost non-max value + // and all those to the right + for indices_index in increment_from..self.indices.len() { + self.indices[indices_index] = increment_value; + } + Some(self.current()) + } + // Otherwise, we're done + None => None, + } + } +} + +impl FusedIterator for CombinationsWithReplacement +where + I: Iterator, + I::Item: Clone, +{} diff --git a/rust/hw/char/pl011/vendor/itertools/src/concat_impl.rs b/rust/hw/char/pl011/vendor/itertools/src/concat_impl.rs new file mode 100644 index 0000000000..f022ec90af --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/src/concat_impl.rs @@ -0,0 +1,23 @@ +use crate::Itertools; + +/// Combine all an iterator's elements into one element by using [`Extend`]. +/// +/// [`IntoIterator`]-enabled version of [`Itertools::concat`]. +/// +/// This combinator will extend the first item with each of the rest of the +/// items of the iterator. If the iterator is empty, the default value of +/// `I::Item` is returned. +/// +/// ```rust +/// use itertools::concat; +/// +/// let input = vec![vec![1], vec![2, 3], vec![4, 5, 6]]; +/// assert_eq!(concat(input), vec![1, 2, 3, 4, 5, 6]); +/// ``` +pub fn concat(iterable: I) -> I::Item + where I: IntoIterator, + I::Item: Extend<<::Item as IntoIterator>::Item> + IntoIterator + Default +{ + #[allow(deprecated)] //TODO: once msrv hits 1.51. replace `fold1` with `reduce` + iterable.into_iter().fold1(|mut a, b| { a.extend(b); a }).unwrap_or_default() +} diff --git a/rust/hw/char/pl011/vendor/itertools/src/cons_tuples_impl.rs b/rust/hw/char/pl011/vendor/itertools/src/cons_tuples_impl.rs new file mode 100644 index 0000000000..ae0f48f349 --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/src/cons_tuples_impl.rs @@ -0,0 +1,64 @@ + +macro_rules! 
impl_cons_iter( + ($_A:ident, $_B:ident, ) => (); // stop + + ($A:ident, $($B:ident,)*) => ( + impl_cons_iter!($($B,)*); + #[allow(non_snake_case)] + impl Iterator for ConsTuples + where Iter: Iterator, + { + type Item = ($($B,)* X, ); + fn next(&mut self) -> Option { + self.iter.next().map(|(($($B,)*), x)| ($($B,)* x, )) + } + + fn size_hint(&self) -> (usize, Option) { + self.iter.size_hint() + } + fn fold(self, accum: Acc, mut f: Fold) -> Acc + where Fold: FnMut(Acc, Self::Item) -> Acc, + { + self.iter.fold(accum, move |acc, (($($B,)*), x)| f(acc, ($($B,)* x, ))) + } + } + + #[allow(non_snake_case)] + impl DoubleEndedIterator for ConsTuples + where Iter: DoubleEndedIterator, + { + fn next_back(&mut self) -> Option { + self.iter.next().map(|(($($B,)*), x)| ($($B,)* x, )) + } + } + + ); +); + +impl_cons_iter!(A, B, C, D, E, F, G, H, I, J, K, L,); + +/// An iterator that maps an iterator of tuples like +/// `((A, B), C)` to an iterator of `(A, B, C)`. +/// +/// Used by the `iproduct!()` macro. +#[must_use = "iterator adaptors are lazy and do nothing unless consumed"] +#[derive(Debug)] +pub struct ConsTuples + where I: Iterator, +{ + iter: I, +} + +impl Clone for ConsTuples + where I: Clone + Iterator, +{ + clone_fields!(iter); +} + +/// Create an iterator that maps for example iterators of +/// `((A, B), C)` to `(A, B, C)`. +pub fn cons_tuples(iterable: I) -> ConsTuples + where I: IntoIterator +{ + ConsTuples { iter: iterable.into_iter() } +} diff --git a/rust/hw/char/pl011/vendor/itertools/src/diff.rs b/rust/hw/char/pl011/vendor/itertools/src/diff.rs new file mode 100644 index 0000000000..1731f0639f --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/src/diff.rs @@ -0,0 +1,61 @@ +//! "Diff"ing iterators for caching elements to sequential collections without requiring the new +//! elements' iterator to be `Clone`. +//! +//! - [`Diff`] (produced by the [`diff_with`] function) +//! describes the difference between two non-`Clone` iterators `I` and `J` after breaking ASAP from +//! a lock-step comparison. + +use crate::free::put_back; +use crate::structs::PutBack; + +/// A type returned by the [`diff_with`] function. +/// +/// `Diff` represents the way in which the elements yielded by the iterator `I` differ to some +/// iterator `J`. +pub enum Diff + where I: Iterator, + J: Iterator +{ + /// The index of the first non-matching element along with both iterator's remaining elements + /// starting with the first mis-match. + FirstMismatch(usize, PutBack, PutBack), + /// The total number of elements that were in `J` along with the remaining elements of `I`. + Shorter(usize, PutBack), + /// The total number of elements that were in `I` along with the remaining elements of `J`. + Longer(usize, PutBack), +} + +/// Compares every element yielded by both `i` and `j` with the given function in lock-step and +/// returns a [`Diff`] which describes how `j` differs from `i`. +/// +/// If the number of elements yielded by `j` is less than the number of elements yielded by `i`, +/// the number of `j` elements yielded will be returned along with `i`'s remaining elements as +/// `Diff::Shorter`. +/// +/// If the two elements of a step differ, the index of those elements along with the remaining +/// elements of both `i` and `j` are returned as `Diff::FirstMismatch`. +/// +/// If `i` becomes exhausted before `j` becomes exhausted, the number of elements in `i` along with +/// the remaining `j` elements will be returned as `Diff::Longer`. 
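+/// A short illustrative sketch (editorial addition, not from the upstream docs):
+///
+/// ```
+/// use itertools::{diff_with, Diff};
+///
+/// let a = [1, 2, 3];
+/// let b = [1, 2, 3, 4];
+/// match diff_with(&a, &b, |x, y| x == y) {
+///     // `b` has one extra element, so all 3 elements of `a` were consumed.
+///     Some(Diff::Longer(count, _rest_of_b)) => assert_eq!(count, 3),
+///     _ => panic!("expected `b` to be the longer sequence"),
+/// }
+/// ```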
+pub fn diff_with(i: I, j: J, is_equal: F) + -> Option> + where I: IntoIterator, + J: IntoIterator, + F: Fn(&I::Item, &J::Item) -> bool +{ + let mut i = i.into_iter(); + let mut j = j.into_iter(); + let mut idx = 0; + while let Some(i_elem) = i.next() { + match j.next() { + None => return Some(Diff::Shorter(idx, put_back(i).with_value(i_elem))), + Some(j_elem) => if !is_equal(&i_elem, &j_elem) { + let remaining_i = put_back(i).with_value(i_elem); + let remaining_j = put_back(j).with_value(j_elem); + return Some(Diff::FirstMismatch(idx, remaining_i, remaining_j)); + }, + } + idx += 1; + } + j.next().map(|j_elem| Diff::Longer(idx, put_back(j).with_value(j_elem))) +} diff --git a/rust/hw/char/pl011/vendor/itertools/src/duplicates_impl.rs b/rust/hw/char/pl011/vendor/itertools/src/duplicates_impl.rs new file mode 100644 index 0000000000..28eda44a97 --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/src/duplicates_impl.rs @@ -0,0 +1,216 @@ +use std::hash::Hash; + +mod private { + use std::collections::HashMap; + use std::hash::Hash; + use std::fmt; + + #[derive(Clone)] + #[must_use = "iterator adaptors are lazy and do nothing unless consumed"] + pub struct DuplicatesBy { + pub(crate) iter: I, + pub(crate) meta: Meta, + } + + impl fmt::Debug for DuplicatesBy + where + I: Iterator + fmt::Debug, + V: fmt::Debug + Hash + Eq, + { + debug_fmt_fields!(DuplicatesBy, iter, meta.used); + } + + impl DuplicatesBy { + pub(crate) fn new(iter: I, key_method: F) -> Self { + DuplicatesBy { + iter, + meta: Meta { + used: HashMap::new(), + pending: 0, + key_method, + }, + } + } + } + + #[derive(Clone)] + pub struct Meta { + used: HashMap, + pending: usize, + key_method: F, + } + + impl Meta + where + Key: Eq + Hash, + { + /// Takes an item and returns it back to the caller if it's the second time we see it. 
+ /// Otherwise the item is consumed and None is returned + #[inline(always)] + fn filter(&mut self, item: I) -> Option + where + F: KeyMethod, + { + let kv = self.key_method.make(item); + match self.used.get_mut(kv.key_ref()) { + None => { + self.used.insert(kv.key(), false); + self.pending += 1; + None + } + Some(true) => None, + Some(produced) => { + *produced = true; + self.pending -= 1; + Some(kv.value()) + } + } + } + } + + impl Iterator for DuplicatesBy + where + I: Iterator, + Key: Eq + Hash, + F: KeyMethod, + { + type Item = I::Item; + + fn next(&mut self) -> Option { + let DuplicatesBy { iter, meta } = self; + iter.find_map(|v| meta.filter(v)) + } + + #[inline] + fn size_hint(&self) -> (usize, Option) { + let (_, hi) = self.iter.size_hint(); + let hi = hi.map(|hi| { + if hi <= self.meta.pending { + // fewer or equally many iter-remaining elements than pending elements + // => at most, each iter-remaining element is matched + hi + } else { + // fewer pending elements than iter-remaining elements + // => at most: + // * each pending element is matched + // * the other iter-remaining elements come in pairs + self.meta.pending + (hi - self.meta.pending) / 2 + } + }); + // The lower bound is always 0 since we might only get unique items from now on + (0, hi) + } + } + + impl DoubleEndedIterator for DuplicatesBy + where + I: DoubleEndedIterator, + Key: Eq + Hash, + F: KeyMethod, + { + fn next_back(&mut self) -> Option { + let DuplicatesBy { iter, meta } = self; + iter.rev().find_map(|v| meta.filter(v)) + } + } + + /// A keying method for use with `DuplicatesBy` + pub trait KeyMethod { + type Container: KeyXorValue; + + fn make(&mut self, value: V) -> Self::Container; + } + + /// Apply the identity function to elements before checking them for equality. + #[derive(Debug)] + pub struct ById; + impl KeyMethod for ById { + type Container = JustValue; + + fn make(&mut self, v: V) -> Self::Container { + JustValue(v) + } + } + + /// Apply a user-supplied function to elements before checking them for equality. + pub struct ByFn(pub(crate) F); + impl fmt::Debug for ByFn { + debug_fmt_fields!(ByFn,); + } + impl KeyMethod for ByFn + where + F: FnMut(&V) -> K, + { + type Container = KeyValue; + + fn make(&mut self, v: V) -> Self::Container { + KeyValue((self.0)(&v), v) + } + } + + // Implementors of this trait can hold onto a key and a value but only give access to one of them + // at a time. This allows the key and the value to be the same value internally + pub trait KeyXorValue { + fn key_ref(&self) -> &K; + fn key(self) -> K; + fn value(self) -> V; + } + + #[derive(Debug)] + pub struct KeyValue(K, V); + impl KeyXorValue for KeyValue { + fn key_ref(&self) -> &K { + &self.0 + } + fn key(self) -> K { + self.0 + } + fn value(self) -> V { + self.1 + } + } + + #[derive(Debug)] + pub struct JustValue(V); + impl KeyXorValue for JustValue { + fn key_ref(&self) -> &V { + &self.0 + } + fn key(self) -> V { + self.0 + } + fn value(self) -> V { + self.0 + } + } +} + +/// An iterator adapter to filter for duplicate elements. +/// +/// See [`.duplicates_by()`](crate::Itertools::duplicates_by) for more information. +pub type DuplicatesBy = private::DuplicatesBy>; + +/// Create a new `DuplicatesBy` iterator. +pub fn duplicates_by(iter: I, f: F) -> DuplicatesBy +where + Key: Eq + Hash, + F: FnMut(&I::Item) -> Key, + I: Iterator, +{ + DuplicatesBy::new(iter, private::ByFn(f)) +} + +/// An iterator adapter to filter out duplicate elements. 
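+/// A brief usage sketch (editorial addition): each duplicated value is yielded
+/// exactly once, on its second occurrence.
+///
+/// ```
+/// use itertools::Itertools;
+///
+/// let dupes: Vec<_> = vec![1, 2, 1, 3, 2, 1].into_iter().duplicates().collect();
+/// assert_eq!(dupes, vec![1, 2]);
+/// ```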
+/// +/// See [`.duplicates()`](crate::Itertools::duplicates) for more information. +pub type Duplicates = private::DuplicatesBy::Item, private::ById>; + +/// Create a new `Duplicates` iterator. +pub fn duplicates(iter: I) -> Duplicates +where + I: Iterator, + I::Item: Eq + Hash, +{ + Duplicates::new(iter, private::ById) +} + diff --git a/rust/hw/char/pl011/vendor/itertools/src/either_or_both.rs b/rust/hw/char/pl011/vendor/itertools/src/either_or_both.rs new file mode 100644 index 0000000000..cf65fe7885 --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/src/either_or_both.rs @@ -0,0 +1,495 @@ +use core::ops::{Deref, DerefMut}; + +use crate::EitherOrBoth::*; + +use either::Either; + +/// Value that either holds a single A or B, or both. +#[derive(Clone, PartialEq, Eq, Hash, Debug)] +pub enum EitherOrBoth { + /// Both values are present. + Both(A, B), + /// Only the left value of type `A` is present. + Left(A), + /// Only the right value of type `B` is present. + Right(B), +} + +impl EitherOrBoth { + /// If `Left`, or `Both`, return true. Otherwise, return false. + pub fn has_left(&self) -> bool { + self.as_ref().left().is_some() + } + + /// If `Right`, or `Both`, return true, otherwise, return false. + pub fn has_right(&self) -> bool { + self.as_ref().right().is_some() + } + + /// If `Left`, return true. Otherwise, return false. + /// Exclusive version of [`has_left`](EitherOrBoth::has_left). + pub fn is_left(&self) -> bool { + match *self { + Left(_) => true, + _ => false, + } + } + + /// If `Right`, return true. Otherwise, return false. + /// Exclusive version of [`has_right`](EitherOrBoth::has_right). + pub fn is_right(&self) -> bool { + match *self { + Right(_) => true, + _ => false, + } + } + + /// If `Both`, return true. Otherwise, return false. + pub fn is_both(&self) -> bool { + self.as_ref().both().is_some() + } + + /// If `Left`, or `Both`, return `Some` with the left value. Otherwise, return `None`. + pub fn left(self) -> Option { + match self { + Left(left) | Both(left, _) => Some(left), + _ => None, + } + } + + /// If `Right`, or `Both`, return `Some` with the right value. Otherwise, return `None`. + pub fn right(self) -> Option { + match self { + Right(right) | Both(_, right) => Some(right), + _ => None, + } + } + + /// If `Left`, return `Some` with the left value. If `Right` or `Both`, return `None`. + /// + /// # Examples + /// + /// ``` + /// // On the `Left` variant. + /// # use itertools::{EitherOrBoth, EitherOrBoth::{Left, Right, Both}}; + /// let x: EitherOrBoth<_, ()> = Left("bonjour"); + /// assert_eq!(x.just_left(), Some("bonjour")); + /// + /// // On the `Right` variant. + /// let x: EitherOrBoth<(), _> = Right("hola"); + /// assert_eq!(x.just_left(), None); + /// + /// // On the `Both` variant. + /// let x = Both("bonjour", "hola"); + /// assert_eq!(x.just_left(), None); + /// ``` + pub fn just_left(self) -> Option { + match self { + Left(left) => Some(left), + _ => None, + } + } + + /// If `Right`, return `Some` with the right value. If `Left` or `Both`, return `None`. + /// + /// # Examples + /// + /// ``` + /// // On the `Left` variant. + /// # use itertools::{EitherOrBoth::{Left, Right, Both}, EitherOrBoth}; + /// let x: EitherOrBoth<_, ()> = Left("auf wiedersehen"); + /// assert_eq!(x.just_left(), Some("auf wiedersehen")); + /// + /// // On the `Right` variant. + /// let x: EitherOrBoth<(), _> = Right("adios"); + /// assert_eq!(x.just_left(), None); + /// + /// // On the `Both` variant. 
+ /// let x = Both("auf wiedersehen", "adios"); + /// assert_eq!(x.just_left(), None); + /// ``` + pub fn just_right(self) -> Option { + match self { + Right(right) => Some(right), + _ => None, + } + } + + /// If `Both`, return `Some` containing the left and right values. Otherwise, return `None`. + pub fn both(self) -> Option<(A, B)> { + match self { + Both(a, b) => Some((a, b)), + _ => None, + } + } + + /// If `Left` or `Both`, return the left value. Otherwise, convert the right value and return it. + pub fn into_left(self) -> A + where + B: Into, + { + match self { + Left(a) | Both(a, _) => a, + Right(b) => b.into(), + } + } + + /// If `Right` or `Both`, return the right value. Otherwise, convert the left value and return it. + pub fn into_right(self) -> B + where + A: Into, + { + match self { + Right(b) | Both(_, b) => b, + Left(a) => a.into(), + } + } + + /// Converts from `&EitherOrBoth` to `EitherOrBoth<&A, &B>`. + pub fn as_ref(&self) -> EitherOrBoth<&A, &B> { + match *self { + Left(ref left) => Left(left), + Right(ref right) => Right(right), + Both(ref left, ref right) => Both(left, right), + } + } + + /// Converts from `&mut EitherOrBoth` to `EitherOrBoth<&mut A, &mut B>`. + pub fn as_mut(&mut self) -> EitherOrBoth<&mut A, &mut B> { + match *self { + Left(ref mut left) => Left(left), + Right(ref mut right) => Right(right), + Both(ref mut left, ref mut right) => Both(left, right), + } + } + + /// Converts from `&EitherOrBoth` to `EitherOrBoth<&_, &_>` using the [`Deref`] trait. + pub fn as_deref(&self) -> EitherOrBoth<&A::Target, &B::Target> + where + A: Deref, + B: Deref, + { + match *self { + Left(ref left) => Left(left), + Right(ref right) => Right(right), + Both(ref left, ref right) => Both(left, right), + } + } + + /// Converts from `&mut EitherOrBoth` to `EitherOrBoth<&mut _, &mut _>` using the [`DerefMut`] trait. + pub fn as_deref_mut(&mut self) -> EitherOrBoth<&mut A::Target, &mut B::Target> + where + A: DerefMut, + B: DerefMut, + { + match *self { + Left(ref mut left) => Left(left), + Right(ref mut right) => Right(right), + Both(ref mut left, ref mut right) => Both(left, right), + } + } + + /// Convert `EitherOrBoth` to `EitherOrBoth`. + pub fn flip(self) -> EitherOrBoth { + match self { + Left(a) => Right(a), + Right(b) => Left(b), + Both(a, b) => Both(b, a), + } + } + + /// Apply the function `f` on the value `a` in `Left(a)` or `Both(a, b)` variants. If it is + /// present rewrapping the result in `self`'s original variant. + pub fn map_left(self, f: F) -> EitherOrBoth + where + F: FnOnce(A) -> M, + { + match self { + Both(a, b) => Both(f(a), b), + Left(a) => Left(f(a)), + Right(b) => Right(b), + } + } + + /// Apply the function `f` on the value `b` in `Right(b)` or `Both(a, b)` variants. + /// If it is present rewrapping the result in `self`'s original variant. + pub fn map_right(self, f: F) -> EitherOrBoth + where + F: FnOnce(B) -> M, + { + match self { + Left(a) => Left(a), + Right(b) => Right(f(b)), + Both(a, b) => Both(a, f(b)), + } + } + + /// Apply the functions `f` and `g` on the value `a` and `b` respectively; + /// found in `Left(a)`, `Right(b)`, or `Both(a, b)` variants. + /// The Result is rewrapped `self`'s original variant. + pub fn map_any(self, f: F, g: G) -> EitherOrBoth + where + F: FnOnce(A) -> L, + G: FnOnce(B) -> R, + { + match self { + Left(a) => Left(f(a)), + Right(b) => Right(g(b)), + Both(a, b) => Both(f(a), g(b)), + } + } + + /// Apply the function `f` on the value `a` in `Left(a)` or `Both(a, _)` variants if it is + /// present. 
+ pub fn left_and_then(self, f: F) -> EitherOrBoth + where + F: FnOnce(A) -> EitherOrBoth, + { + match self { + Left(a) | Both(a, _) => f(a), + Right(b) => Right(b), + } + } + + /// Apply the function `f` on the value `b` + /// in `Right(b)` or `Both(_, b)` variants if it is present. + pub fn right_and_then(self, f: F) -> EitherOrBoth + where + F: FnOnce(B) -> EitherOrBoth, + { + match self { + Left(a) => Left(a), + Right(b) | Both(_, b) => f(b), + } + } + + /// Returns a tuple consisting of the `l` and `r` in `Both(l, r)`, if present. + /// Otherwise, returns the wrapped value for the present element, and the supplied + /// value for the other. The first (`l`) argument is used for a missing `Left` + /// value. The second (`r`) argument is used for a missing `Right` value. + /// + /// Arguments passed to `or` are eagerly evaluated; if you are passing + /// the result of a function call, it is recommended to use [`or_else`], + /// which is lazily evaluated. + /// + /// [`or_else`]: EitherOrBoth::or_else + /// + /// # Examples + /// + /// ``` + /// # use itertools::EitherOrBoth; + /// assert_eq!(EitherOrBoth::Both("tree", 1).or("stone", 5), ("tree", 1)); + /// assert_eq!(EitherOrBoth::Left("tree").or("stone", 5), ("tree", 5)); + /// assert_eq!(EitherOrBoth::Right(1).or("stone", 5), ("stone", 1)); + /// ``` + pub fn or(self, l: A, r: B) -> (A, B) { + match self { + Left(inner_l) => (inner_l, r), + Right(inner_r) => (l, inner_r), + Both(inner_l, inner_r) => (inner_l, inner_r), + } + } + + /// Returns a tuple consisting of the `l` and `r` in `Both(l, r)`, if present. + /// Otherwise, returns the wrapped value for the present element, and the [`default`](Default::default) + /// for the other. + pub fn or_default(self) -> (A, B) + where + A: Default, + B: Default, + { + match self { + EitherOrBoth::Left(l) => (l, B::default()), + EitherOrBoth::Right(r) => (A::default(), r), + EitherOrBoth::Both(l, r) => (l, r), + } + } + + /// Returns a tuple consisting of the `l` and `r` in `Both(l, r)`, if present. + /// Otherwise, returns the wrapped value for the present element, and computes the + /// missing value with the supplied closure. The first argument (`l`) is used for a + /// missing `Left` value. The second argument (`r`) is used for a missing `Right` value. + /// + /// # Examples + /// + /// ``` + /// # use itertools::EitherOrBoth; + /// let k = 10; + /// assert_eq!(EitherOrBoth::Both("tree", 1).or_else(|| "stone", || 2 * k), ("tree", 1)); + /// assert_eq!(EitherOrBoth::Left("tree").or_else(|| "stone", || 2 * k), ("tree", 20)); + /// assert_eq!(EitherOrBoth::Right(1).or_else(|| "stone", || 2 * k), ("stone", 1)); + /// ``` + pub fn or_else A, R: FnOnce() -> B>(self, l: L, r: R) -> (A, B) { + match self { + Left(inner_l) => (inner_l, r()), + Right(inner_r) => (l(), inner_r), + Both(inner_l, inner_r) => (inner_l, inner_r), + } + } + + /// Returns a mutable reference to the left value. If the left value is not present, + /// it is replaced with `val`. + pub fn left_or_insert(&mut self, val: A) -> &mut A { + self.left_or_insert_with(|| val) + } + + /// Returns a mutable reference to the right value. If the right value is not present, + /// it is replaced with `val`. + pub fn right_or_insert(&mut self, val: B) -> &mut B { + self.right_or_insert_with(|| val) + } + + /// If the left value is not present, replace it the value computed by the closure `f`. + /// Returns a mutable reference to the now-present left value. 
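+    /// A small illustrative sketch (editorial addition):
+    ///
+    /// ```
+    /// # use itertools::{EitherOrBoth, EitherOrBoth::{Right, Both}};
+    /// let mut x: EitherOrBoth<u32, &str> = Right("hi");
+    /// *x.left_or_insert_with(|| 5) += 1;
+    /// assert_eq!(x, Both(6, "hi"));
+    /// ```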
+ pub fn left_or_insert_with(&mut self, f: F) -> &mut A + where + F: FnOnce() -> A, + { + match self { + Left(left) | Both(left, _) => left, + Right(_) => self.insert_left(f()), + } + } + + /// If the right value is not present, replace it the value computed by the closure `f`. + /// Returns a mutable reference to the now-present right value. + pub fn right_or_insert_with(&mut self, f: F) -> &mut B + where + F: FnOnce() -> B, + { + match self { + Right(right) | Both(_, right) => right, + Left(_) => self.insert_right(f()), + } + } + + /// Sets the `left` value of this instance, and returns a mutable reference to it. + /// Does not affect the `right` value. + /// + /// # Examples + /// ``` + /// # use itertools::{EitherOrBoth, EitherOrBoth::{Left, Right, Both}}; + /// + /// // Overwriting a pre-existing value. + /// let mut either: EitherOrBoth<_, ()> = Left(0_u32); + /// assert_eq!(*either.insert_left(69), 69); + /// + /// // Inserting a second value. + /// let mut either = Right("no"); + /// assert_eq!(*either.insert_left("yes"), "yes"); + /// assert_eq!(either, Both("yes", "no")); + /// ``` + pub fn insert_left(&mut self, val: A) -> &mut A { + match self { + Left(left) | Both(left, _) => { + *left = val; + left + } + Right(right) => { + // This is like a map in place operation. We move out of the reference, + // change the value, and then move back into the reference. + unsafe { + // SAFETY: We know this pointer is valid for reading since we got it from a reference. + let right = std::ptr::read(right as *mut _); + // SAFETY: Again, we know the pointer is valid since we got it from a reference. + std::ptr::write(self as *mut _, Both(val, right)); + } + + if let Both(left, _) = self { + left + } else { + // SAFETY: The above pattern will always match, since we just + // set `self` equal to `Both`. + unsafe { std::hint::unreachable_unchecked() } + } + } + } + } + + /// Sets the `right` value of this instance, and returns a mutable reference to it. + /// Does not affect the `left` value. + /// + /// # Examples + /// ``` + /// # use itertools::{EitherOrBoth, EitherOrBoth::{Left, Both}}; + /// // Overwriting a pre-existing value. + /// let mut either: EitherOrBoth<_, ()> = Left(0_u32); + /// assert_eq!(*either.insert_left(69), 69); + /// + /// // Inserting a second value. + /// let mut either = Left("what's"); + /// assert_eq!(*either.insert_right(9 + 10), 21 - 2); + /// assert_eq!(either, Both("what's", 9+10)); + /// ``` + pub fn insert_right(&mut self, val: B) -> &mut B { + match self { + Right(right) | Both(_, right) => { + *right = val; + right + } + Left(left) => { + // This is like a map in place operation. We move out of the reference, + // change the value, and then move back into the reference. + unsafe { + // SAFETY: We know this pointer is valid for reading since we got it from a reference. + let left = std::ptr::read(left as *mut _); + // SAFETY: Again, we know the pointer is valid since we got it from a reference. + std::ptr::write(self as *mut _, Both(left, val)); + } + if let Both(_, right) = self { + right + } else { + // SAFETY: The above pattern will always match, since we just + // set `self` equal to `Both`. + unsafe { std::hint::unreachable_unchecked() } + } + } + } + } + + /// Set `self` to `Both(..)`, containing the specified left and right values, + /// and returns a mutable reference to those values. 
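+    /// A small illustrative sketch (editorial addition):
+    ///
+    /// ```
+    /// # use itertools::{EitherOrBoth, EitherOrBoth::{Left, Both}};
+    /// let mut x: EitherOrBoth<u32, u32> = Left(1);
+    /// x.insert_both(2, 3);
+    /// assert_eq!(x, Both(2, 3));
+    /// ```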
+ pub fn insert_both(&mut self, left: A, right: B) -> (&mut A, &mut B) { + *self = Both(left, right); + if let Both(left, right) = self { + (left, right) + } else { + // SAFETY: The above pattern will always match, since we just + // set `self` equal to `Both`. + unsafe { std::hint::unreachable_unchecked() } + } + } +} + +impl EitherOrBoth { + /// Return either value of left, right, or apply a function `f` to both values if both are present. + /// The input function has to return the same type as both Right and Left carry. + /// + /// # Examples + /// ``` + /// # use itertools::EitherOrBoth; + /// assert_eq!(EitherOrBoth::Both(3, 7).reduce(u32::max), 7); + /// assert_eq!(EitherOrBoth::Left(3).reduce(u32::max), 3); + /// assert_eq!(EitherOrBoth::Right(7).reduce(u32::max), 7); + /// ``` + pub fn reduce(self, f: F) -> T + where + F: FnOnce(T, T) -> T, + { + match self { + Left(a) => a, + Right(b) => b, + Both(a, b) => f(a, b), + } + } +} + +impl Into>> for EitherOrBoth { + fn into(self) -> Option> { + match self { + EitherOrBoth::Left(l) => Some(Either::Left(l)), + EitherOrBoth::Right(r) => Some(Either::Right(r)), + _ => None, + } + } +} diff --git a/rust/hw/char/pl011/vendor/itertools/src/exactly_one_err.rs b/rust/hw/char/pl011/vendor/itertools/src/exactly_one_err.rs new file mode 100644 index 0000000000..c54ae77ca9 --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/src/exactly_one_err.rs @@ -0,0 +1,110 @@ +#[cfg(feature = "use_std")] +use std::error::Error; +use std::fmt::{Debug, Display, Formatter, Result as FmtResult}; + +use std::iter::ExactSizeIterator; + +use either::Either; + +use crate::size_hint; + +/// Iterator returned for the error case of `IterTools::exactly_one()` +/// This iterator yields exactly the same elements as the input iterator. +/// +/// During the execution of `exactly_one` the iterator must be mutated. This wrapper +/// effectively "restores" the state of the input iterator when it's handed back. +/// +/// This is very similar to `PutBackN` except this iterator only supports 0-2 elements and does not +/// use a `Vec`. +#[derive(Clone)] +pub struct ExactlyOneError +where + I: Iterator, +{ + first_two: Option>, + inner: I, +} + +impl ExactlyOneError +where + I: Iterator, +{ + /// Creates a new `ExactlyOneErr` iterator. 
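+    /// A hedged usage sketch (editorial addition; shows how this error type behaves
+    /// when `Itertools::exactly_one` finds more than one element):
+    ///
+    /// ```
+    /// use itertools::Itertools;
+    ///
+    /// let err = vec![1, 2, 3].into_iter().exactly_one().unwrap_err();
+    /// // The error iterator hands back the original elements unchanged.
+    /// itertools::assert_equal(err, vec![1, 2, 3]);
+    /// ```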
+ pub(crate) fn new(first_two: Option>, inner: I) -> Self { + Self { first_two, inner } + } + + fn additional_len(&self) -> usize { + match self.first_two { + Some(Either::Left(_)) => 2, + Some(Either::Right(_)) => 1, + None => 0, + } + } +} + +impl Iterator for ExactlyOneError +where + I: Iterator, +{ + type Item = I::Item; + + fn next(&mut self) -> Option { + match self.first_two.take() { + Some(Either::Left([first, second])) => { + self.first_two = Some(Either::Right(second)); + Some(first) + }, + Some(Either::Right(second)) => { + Some(second) + } + None => { + self.inner.next() + } + } + } + + fn size_hint(&self) -> (usize, Option) { + size_hint::add_scalar(self.inner.size_hint(), self.additional_len()) + } +} + + +impl ExactSizeIterator for ExactlyOneError where I: ExactSizeIterator {} + +impl Display for ExactlyOneError + where I: Iterator, +{ + fn fmt(&self, f: &mut Formatter) -> FmtResult { + let additional = self.additional_len(); + if additional > 0 { + write!(f, "got at least 2 elements when exactly one was expected") + } else { + write!(f, "got zero elements when exactly one was expected") + } + } +} + +impl Debug for ExactlyOneError + where I: Iterator + Debug, + I::Item: Debug, +{ + fn fmt(&self, f: &mut Formatter) -> FmtResult { + match &self.first_two { + Some(Either::Left([first, second])) => { + write!(f, "ExactlyOneError[First: {:?}, Second: {:?}, RemainingIter: {:?}]", first, second, self.inner) + }, + Some(Either::Right(second)) => { + write!(f, "ExactlyOneError[Second: {:?}, RemainingIter: {:?}]", second, self.inner) + } + None => { + write!(f, "ExactlyOneError[RemainingIter: {:?}]", self.inner) + } + } + } +} + +#[cfg(feature = "use_std")] +impl Error for ExactlyOneError where I: Iterator + Debug, I::Item: Debug, {} + + diff --git a/rust/hw/char/pl011/vendor/itertools/src/extrema_set.rs b/rust/hw/char/pl011/vendor/itertools/src/extrema_set.rs new file mode 100644 index 0000000000..ae128364c5 --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/src/extrema_set.rs @@ -0,0 +1,48 @@ +use std::cmp::Ordering; + +/// Implementation guts for `min_set`, `min_set_by`, and `min_set_by_key`. +pub fn min_set_impl( + mut it: I, + mut key_for: F, + mut compare: Compare, +) -> Vec +where + I: Iterator, + F: FnMut(&I::Item) -> K, + Compare: FnMut(&I::Item, &I::Item, &K, &K) -> Ordering, +{ + match it.next() { + None => Vec::new(), + Some(element) => { + let mut current_key = key_for(&element); + let mut result = vec![element]; + it.for_each(|element| { + let key = key_for(&element); + match compare(&element, &result[0], &key, ¤t_key) { + Ordering::Less => { + result.clear(); + result.push(element); + current_key = key; + } + Ordering::Equal => { + result.push(element); + } + Ordering::Greater => {} + } + }); + result + } + } +} + +/// Implementation guts for `ax_set`, `max_set_by`, and `max_set_by_key`. 
+pub fn max_set_impl(it: I, key_for: F, mut compare: Compare) -> Vec +where + I: Iterator, + F: FnMut(&I::Item) -> K, + Compare: FnMut(&I::Item, &I::Item, &K, &K) -> Ordering, +{ + min_set_impl(it, key_for, |it1, it2, key1, key2| { + compare(it2, it1, key2, key1) + }) +} diff --git a/rust/hw/char/pl011/vendor/itertools/src/flatten_ok.rs b/rust/hw/char/pl011/vendor/itertools/src/flatten_ok.rs new file mode 100644 index 0000000000..21ae1f7223 --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/src/flatten_ok.rs @@ -0,0 +1,165 @@ +use crate::size_hint; +use std::{ + fmt, + iter::{DoubleEndedIterator, FusedIterator}, +}; + +pub fn flatten_ok(iter: I) -> FlattenOk +where + I: Iterator>, + T: IntoIterator, +{ + FlattenOk { + iter, + inner_front: None, + inner_back: None, + } +} + +/// An iterator adaptor that flattens `Result::Ok` values and +/// allows `Result::Err` values through unchanged. +/// +/// See [`.flatten_ok()`](crate::Itertools::flatten_ok) for more information. +#[must_use = "iterator adaptors are lazy and do nothing unless consumed"] +pub struct FlattenOk +where + I: Iterator>, + T: IntoIterator, +{ + iter: I, + inner_front: Option, + inner_back: Option, +} + +impl Iterator for FlattenOk +where + I: Iterator>, + T: IntoIterator, +{ + type Item = Result; + + fn next(&mut self) -> Option { + loop { + // Handle the front inner iterator. + if let Some(inner) = &mut self.inner_front { + if let Some(item) = inner.next() { + return Some(Ok(item)); + } + + // This is necessary for the iterator to implement `FusedIterator` + // with only the original iterator being fused. + self.inner_front = None; + } + + match self.iter.next() { + Some(Ok(ok)) => self.inner_front = Some(ok.into_iter()), + Some(Err(e)) => return Some(Err(e)), + None => { + // Handle the back inner iterator. + if let Some(inner) = &mut self.inner_back { + if let Some(item) = inner.next() { + return Some(Ok(item)); + } + + // This is necessary for the iterator to implement `FusedIterator` + // with only the original iterator being fused. + self.inner_back = None; + } else { + return None; + } + } + } + } + } + + fn size_hint(&self) -> (usize, Option) { + let inner_hint = |inner: &Option| { + inner + .as_ref() + .map(Iterator::size_hint) + .unwrap_or((0, Some(0))) + }; + let inner_front = inner_hint(&self.inner_front); + let inner_back = inner_hint(&self.inner_back); + // The outer iterator `Ok` case could be (0, None) as we don't know its size_hint yet. + let outer = match self.iter.size_hint() { + (0, Some(0)) => (0, Some(0)), + _ => (0, None), + }; + + size_hint::add(size_hint::add(inner_front, inner_back), outer) + } +} + +impl DoubleEndedIterator for FlattenOk +where + I: DoubleEndedIterator>, + T: IntoIterator, + T::IntoIter: DoubleEndedIterator, +{ + fn next_back(&mut self) -> Option { + loop { + // Handle the back inner iterator. + if let Some(inner) = &mut self.inner_back { + if let Some(item) = inner.next_back() { + return Some(Ok(item)); + } + + // This is necessary for the iterator to implement `FusedIterator` + // with only the original iterator being fused. + self.inner_back = None; + } + + match self.iter.next_back() { + Some(Ok(ok)) => self.inner_back = Some(ok.into_iter()), + Some(Err(e)) => return Some(Err(e)), + None => { + // Handle the front inner iterator. + if let Some(inner) = &mut self.inner_front { + if let Some(item) = inner.next_back() { + return Some(Ok(item)); + } + + // This is necessary for the iterator to implement `FusedIterator` + // with only the original iterator being fused. 
+ self.inner_front = None; + } else { + return None; + } + } + } + } + } +} + +impl Clone for FlattenOk +where + I: Iterator> + Clone, + T: IntoIterator, + T::IntoIter: Clone, +{ + clone_fields!(iter, inner_front, inner_back); +} + +impl fmt::Debug for FlattenOk +where + I: Iterator> + fmt::Debug, + T: IntoIterator, + T::IntoIter: fmt::Debug, +{ + fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { + f.debug_struct("FlattenOk") + .field("iter", &self.iter) + .field("inner_front", &self.inner_front) + .field("inner_back", &self.inner_back) + .finish() + } +} + +/// Only the iterator being flattened needs to implement [`FusedIterator`]. +impl FusedIterator for FlattenOk +where + I: FusedIterator>, + T: IntoIterator, +{ +} diff --git a/rust/hw/char/pl011/vendor/itertools/src/format.rs b/rust/hw/char/pl011/vendor/itertools/src/format.rs new file mode 100644 index 0000000000..c4cb65dcb2 --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/src/format.rs @@ -0,0 +1,168 @@ +use std::cell::Cell; +use std::fmt; + +/// Format all iterator elements lazily, separated by `sep`. +/// +/// The format value can only be formatted once, after that the iterator is +/// exhausted. +/// +/// See [`.format_with()`](crate::Itertools::format_with) for more information. +pub struct FormatWith<'a, I, F> { + sep: &'a str, + /// FormatWith uses interior mutability because Display::fmt takes &self. + inner: Cell>, +} + +/// Format all iterator elements lazily, separated by `sep`. +/// +/// The format value can only be formatted once, after that the iterator is +/// exhausted. +/// +/// See [`.format()`](crate::Itertools::format) +/// for more information. +pub struct Format<'a, I> { + sep: &'a str, + /// Format uses interior mutability because Display::fmt takes &self. + inner: Cell>, +} + +pub fn new_format(iter: I, separator: &str, f: F) -> FormatWith<'_, I, F> +where + I: Iterator, + F: FnMut(I::Item, &mut dyn FnMut(&dyn fmt::Display) -> fmt::Result) -> fmt::Result, +{ + FormatWith { + sep: separator, + inner: Cell::new(Some((iter, f))), + } +} + +pub fn new_format_default(iter: I, separator: &str) -> Format<'_, I> +where + I: Iterator, +{ + Format { + sep: separator, + inner: Cell::new(Some(iter)), + } +} + +impl<'a, I, F> fmt::Display for FormatWith<'a, I, F> +where + I: Iterator, + F: FnMut(I::Item, &mut dyn FnMut(&dyn fmt::Display) -> fmt::Result) -> fmt::Result, +{ + fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { + let (mut iter, mut format) = match self.inner.take() { + Some(t) => t, + None => panic!("FormatWith: was already formatted once"), + }; + + if let Some(fst) = iter.next() { + format(fst, &mut |disp: &dyn fmt::Display| disp.fmt(f))?; + iter.try_for_each(|elt| { + if !self.sep.is_empty() { + f.write_str(self.sep)?; + } + format(elt, &mut |disp: &dyn fmt::Display| disp.fmt(f)) + })?; + } + Ok(()) + } +} + +impl<'a, I> Format<'a, I> +where + I: Iterator, +{ + fn format( + &self, + f: &mut fmt::Formatter, + cb: fn(&I::Item, &mut fmt::Formatter) -> fmt::Result, + ) -> fmt::Result { + let mut iter = match self.inner.take() { + Some(t) => t, + None => panic!("Format: was already formatted once"), + }; + + if let Some(fst) = iter.next() { + cb(&fst, f)?; + iter.try_for_each(|elt| { + if !self.sep.is_empty() { + f.write_str(self.sep)?; + } + cb(&elt, f) + })?; + } + Ok(()) + } +} + +macro_rules! 
impl_format { + ($($fmt_trait:ident)*) => { + $( + impl<'a, I> fmt::$fmt_trait for Format<'a, I> + where I: Iterator, + I::Item: fmt::$fmt_trait, + { + fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { + self.format(f, fmt::$fmt_trait::fmt) + } + } + )* + } +} + +impl_format! {Display Debug UpperExp LowerExp UpperHex LowerHex Octal Binary Pointer} + +impl<'a, I, F> Clone for FormatWith<'a, I, F> +where + (I, F): Clone, +{ + fn clone(&self) -> Self { + struct PutBackOnDrop<'r, 'a, I, F> { + into: &'r FormatWith<'a, I, F>, + inner: Option<(I, F)>, + } + // This ensures we preserve the state of the original `FormatWith` if `Clone` panics + impl<'r, 'a, I, F> Drop for PutBackOnDrop<'r, 'a, I, F> { + fn drop(&mut self) { + self.into.inner.set(self.inner.take()) + } + } + let pbod = PutBackOnDrop { + inner: self.inner.take(), + into: self, + }; + Self { + inner: Cell::new(pbod.inner.clone()), + sep: self.sep, + } + } +} + +impl<'a, I> Clone for Format<'a, I> +where + I: Clone, +{ + fn clone(&self) -> Self { + struct PutBackOnDrop<'r, 'a, I> { + into: &'r Format<'a, I>, + inner: Option, + } + // This ensures we preserve the state of the original `FormatWith` if `Clone` panics + impl<'r, 'a, I> Drop for PutBackOnDrop<'r, 'a, I> { + fn drop(&mut self) { + self.into.inner.set(self.inner.take()) + } + } + let pbod = PutBackOnDrop { + inner: self.inner.take(), + into: self, + }; + Self { + inner: Cell::new(pbod.inner.clone()), + sep: self.sep, + } + } +} diff --git a/rust/hw/char/pl011/vendor/itertools/src/free.rs b/rust/hw/char/pl011/vendor/itertools/src/free.rs new file mode 100644 index 0000000000..19e3e28694 --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/src/free.rs @@ -0,0 +1,286 @@ +//! Free functions that create iterator adaptors or call iterator methods. +//! +//! The benefit of free functions is that they accept any [`IntoIterator`] as +//! argument, so the resulting code may be easier to read. + +#[cfg(feature = "use_alloc")] +use std::fmt::Display; +use std::iter::{self, Zip}; +#[cfg(feature = "use_alloc")] +type VecIntoIter = alloc::vec::IntoIter; + +#[cfg(feature = "use_alloc")] +use alloc::{ + string::String, +}; + +use crate::Itertools; +use crate::intersperse::{Intersperse, IntersperseWith}; + +pub use crate::adaptors::{ + interleave, + merge, + put_back, +}; +#[cfg(feature = "use_alloc")] +pub use crate::put_back_n_impl::put_back_n; +#[cfg(feature = "use_alloc")] +pub use crate::multipeek_impl::multipeek; +#[cfg(feature = "use_alloc")] +pub use crate::peek_nth::peek_nth; +#[cfg(feature = "use_alloc")] +pub use crate::kmerge_impl::kmerge; +pub use crate::zip_eq_impl::zip_eq; +pub use crate::merge_join::merge_join_by; +#[cfg(feature = "use_alloc")] +pub use crate::rciter_impl::rciter; + +/// Iterate `iterable` with a particular value inserted between each element. +/// +/// [`IntoIterator`] enabled version of [`Iterator::intersperse`]. +/// +/// ``` +/// use itertools::intersperse; +/// +/// itertools::assert_equal(intersperse((0..3), 8), vec![0, 8, 1, 8, 2]); +/// ``` +pub fn intersperse(iterable: I, element: I::Item) -> Intersperse + where I: IntoIterator, + ::Item: Clone +{ + Itertools::intersperse(iterable.into_iter(), element) +} + +/// Iterate `iterable` with a particular value created by a function inserted +/// between each element. +/// +/// [`IntoIterator`] enabled version of [`Iterator::intersperse_with`]. 
+/// +/// ``` +/// use itertools::intersperse_with; +/// +/// let mut i = 10; +/// itertools::assert_equal(intersperse_with((0..3), || { i -= 1; i }), vec![0, 9, 1, 8, 2]); +/// assert_eq!(i, 8); +/// ``` +pub fn intersperse_with(iterable: I, element: F) -> IntersperseWith + where I: IntoIterator, + F: FnMut() -> I::Item +{ + Itertools::intersperse_with(iterable.into_iter(), element) +} + +/// Iterate `iterable` with a running index. +/// +/// [`IntoIterator`] enabled version of [`Iterator::enumerate`]. +/// +/// ``` +/// use itertools::enumerate; +/// +/// for (i, elt) in enumerate(&[1, 2, 3]) { +/// /* loop body */ +/// } +/// ``` +pub fn enumerate(iterable: I) -> iter::Enumerate + where I: IntoIterator +{ + iterable.into_iter().enumerate() +} + +/// Iterate `iterable` in reverse. +/// +/// [`IntoIterator`] enabled version of [`Iterator::rev`]. +/// +/// ``` +/// use itertools::rev; +/// +/// for elt in rev(&[1, 2, 3]) { +/// /* loop body */ +/// } +/// ``` +pub fn rev(iterable: I) -> iter::Rev + where I: IntoIterator, + I::IntoIter: DoubleEndedIterator +{ + iterable.into_iter().rev() +} + +/// Converts the arguments to iterators and zips them. +/// +/// [`IntoIterator`] enabled version of [`Iterator::zip`]. +/// +/// ## Example +/// +/// ``` +/// use itertools::zip; +/// +/// let mut result: Vec<(i32, char)> = Vec::new(); +/// +/// for (a, b) in zip(&[1, 2, 3, 4, 5], &['a', 'b', 'c']) { +/// result.push((*a, *b)); +/// } +/// assert_eq!(result, vec![(1, 'a'),(2, 'b'),(3, 'c')]); +/// ``` +#[deprecated(note="Use [std::iter::zip](https://doc.rust-lang.org/std/iter/fn.zip.html) instead", since="0.10.4")] +pub fn zip(i: I, j: J) -> Zip + where I: IntoIterator, + J: IntoIterator +{ + i.into_iter().zip(j) +} + + +/// Takes two iterables and creates a new iterator over both in sequence. +/// +/// [`IntoIterator`] enabled version of [`Iterator::chain`]. +/// +/// ## Example +/// ``` +/// use itertools::chain; +/// +/// let mut result:Vec = Vec::new(); +/// +/// for element in chain(&[1, 2, 3], &[4]) { +/// result.push(*element); +/// } +/// assert_eq!(result, vec![1, 2, 3, 4]); +/// ``` +pub fn chain(i: I, j: J) -> iter::Chain<::IntoIter, ::IntoIter> + where I: IntoIterator, + J: IntoIterator +{ + i.into_iter().chain(j) +} + +/// Create an iterator that clones each element from &T to T +/// +/// [`IntoIterator`] enabled version of [`Iterator::cloned`]. +/// +/// ``` +/// use itertools::cloned; +/// +/// assert_eq!(cloned(b"abc").next(), Some(b'a')); +/// ``` +pub fn cloned<'a, I, T: 'a>(iterable: I) -> iter::Cloned + where I: IntoIterator, + T: Clone, +{ + iterable.into_iter().cloned() +} + +/// Perform a fold operation over the iterable. +/// +/// [`IntoIterator`] enabled version of [`Iterator::fold`]. +/// +/// ``` +/// use itertools::fold; +/// +/// assert_eq!(fold(&[1., 2., 3.], 0., |a, &b| f32::max(a, b)), 3.); +/// ``` +pub fn fold(iterable: I, init: B, f: F) -> B + where I: IntoIterator, + F: FnMut(B, I::Item) -> B +{ + iterable.into_iter().fold(init, f) +} + +/// Test whether the predicate holds for all elements in the iterable. +/// +/// [`IntoIterator`] enabled version of [`Iterator::all`]. +/// +/// ``` +/// use itertools::all; +/// +/// assert!(all(&[1, 2, 3], |elt| *elt > 0)); +/// ``` +pub fn all(iterable: I, f: F) -> bool + where I: IntoIterator, + F: FnMut(I::Item) -> bool +{ + iterable.into_iter().all(f) +} + +/// Test whether the predicate holds for any elements in the iterable. +/// +/// [`IntoIterator`] enabled version of [`Iterator::any`]. 
+/// +/// ``` +/// use itertools::any; +/// +/// assert!(any(&[0, -1, 2], |elt| *elt > 0)); +/// ``` +pub fn any(iterable: I, f: F) -> bool + where I: IntoIterator, + F: FnMut(I::Item) -> bool +{ + iterable.into_iter().any(f) +} + +/// Return the maximum value of the iterable. +/// +/// [`IntoIterator`] enabled version of [`Iterator::max`]. +/// +/// ``` +/// use itertools::max; +/// +/// assert_eq!(max(0..10), Some(9)); +/// ``` +pub fn max(iterable: I) -> Option + where I: IntoIterator, + I::Item: Ord +{ + iterable.into_iter().max() +} + +/// Return the minimum value of the iterable. +/// +/// [`IntoIterator`] enabled version of [`Iterator::min`]. +/// +/// ``` +/// use itertools::min; +/// +/// assert_eq!(min(0..10), Some(0)); +/// ``` +pub fn min(iterable: I) -> Option + where I: IntoIterator, + I::Item: Ord +{ + iterable.into_iter().min() +} + + +/// Combine all iterator elements into one String, separated by `sep`. +/// +/// [`IntoIterator`] enabled version of [`Itertools::join`]. +/// +/// ``` +/// use itertools::join; +/// +/// assert_eq!(join(&[1, 2, 3], ", "), "1, 2, 3"); +/// ``` +#[cfg(feature = "use_alloc")] +pub fn join(iterable: I, sep: &str) -> String + where I: IntoIterator, + I::Item: Display +{ + iterable.into_iter().join(sep) +} + +/// Sort all iterator elements into a new iterator in ascending order. +/// +/// [`IntoIterator`] enabled version of [`Itertools::sorted`]. +/// +/// ``` +/// use itertools::sorted; +/// use itertools::assert_equal; +/// +/// assert_equal(sorted("rust".chars()), "rstu".chars()); +/// ``` +#[cfg(feature = "use_alloc")] +pub fn sorted(iterable: I) -> VecIntoIter + where I: IntoIterator, + I::Item: Ord +{ + iterable.into_iter().sorted() +} + diff --git a/rust/hw/char/pl011/vendor/itertools/src/group_map.rs b/rust/hw/char/pl011/vendor/itertools/src/group_map.rs new file mode 100644 index 0000000000..a2d0ebb2ab --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/src/group_map.rs @@ -0,0 +1,32 @@ +#![cfg(feature = "use_std")] + +use std::collections::HashMap; +use std::hash::Hash; +use std::iter::Iterator; + +/// Return a `HashMap` of keys mapped to a list of their corresponding values. +/// +/// See [`.into_group_map()`](crate::Itertools::into_group_map) +/// for more information. 
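+/// A minimal usage sketch (editorial addition, via the public trait method):
+///
+/// ```
+/// use itertools::Itertools;
+///
+/// let lookup = vec![(0, 'a'), (1, 'b'), (0, 'c')]
+///     .into_iter()
+///     .into_group_map();
+///
+/// assert_eq!(lookup[&0], vec!['a', 'c']);
+/// assert_eq!(lookup[&1], vec!['b']);
+/// ```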
+pub fn into_group_map(iter: I) -> HashMap> + where I: Iterator, + K: Hash + Eq, +{ + let mut lookup = HashMap::new(); + + iter.for_each(|(key, val)| { + lookup.entry(key).or_insert_with(Vec::new).push(val); + }); + + lookup +} + +pub fn into_group_map_by(iter: I, f: impl Fn(&V) -> K) -> HashMap> + where + I: Iterator, + K: Hash + Eq, +{ + into_group_map( + iter.map(|v| (f(&v), v)) + ) +} diff --git a/rust/hw/char/pl011/vendor/itertools/src/groupbylazy.rs b/rust/hw/char/pl011/vendor/itertools/src/groupbylazy.rs new file mode 100644 index 0000000000..80c6f09f32 --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/src/groupbylazy.rs @@ -0,0 +1,579 @@ +use std::cell::{Cell, RefCell}; +use alloc::vec::{self, Vec}; + +/// A trait to unify `FnMut` for `GroupBy` with the chunk key in `IntoChunks` +trait KeyFunction { + type Key; + fn call_mut(&mut self, arg: A) -> Self::Key; +} + +impl KeyFunction for F + where F: FnMut(A) -> K +{ + type Key = K; + #[inline] + fn call_mut(&mut self, arg: A) -> Self::Key { + (*self)(arg) + } +} + + +/// `ChunkIndex` acts like the grouping key function for `IntoChunks` +#[derive(Debug, Clone)] +struct ChunkIndex { + size: usize, + index: usize, + key: usize, +} + +impl ChunkIndex { + #[inline(always)] + fn new(size: usize) -> Self { + ChunkIndex { + size, + index: 0, + key: 0, + } + } +} + +impl KeyFunction for ChunkIndex { + type Key = usize; + #[inline(always)] + fn call_mut(&mut self, _arg: A) -> Self::Key { + if self.index == self.size { + self.key += 1; + self.index = 0; + } + self.index += 1; + self.key + } +} + +#[derive(Clone)] +struct GroupInner + where I: Iterator +{ + key: F, + iter: I, + current_key: Option, + current_elt: Option, + /// flag set if iterator is exhausted + done: bool, + /// Index of group we are currently buffering or visiting + top_group: usize, + /// Least index for which we still have elements buffered + oldest_buffered_group: usize, + /// Group index for `buffer[0]` -- the slots + /// bottom_group..oldest_buffered_group are unused and will be erased when + /// that range is large enough. + bottom_group: usize, + /// Buffered groups, from `bottom_group` (index 0) to `top_group`. 
+ buffer: Vec>, + /// index of last group iter that was dropped, usize::MAX == none + dropped_group: usize, +} + +impl GroupInner + where I: Iterator, + F: for<'a> KeyFunction<&'a I::Item, Key=K>, + K: PartialEq, +{ + /// `client`: Index of group that requests next element + #[inline(always)] + fn step(&mut self, client: usize) -> Option { + /* + println!("client={}, bottom_group={}, oldest_buffered_group={}, top_group={}, buffers=[{}]", + client, self.bottom_group, self.oldest_buffered_group, + self.top_group, + self.buffer.iter().map(|elt| elt.len()).format(", ")); + */ + if client < self.oldest_buffered_group { + None + } else if client < self.top_group || + (client == self.top_group && + self.buffer.len() > self.top_group - self.bottom_group) + { + self.lookup_buffer(client) + } else if self.done { + None + } else if self.top_group == client { + self.step_current() + } else { + self.step_buffering(client) + } + } + + #[inline(never)] + fn lookup_buffer(&mut self, client: usize) -> Option { + // if `bufidx` doesn't exist in self.buffer, it might be empty + let bufidx = client - self.bottom_group; + if client < self.oldest_buffered_group { + return None; + } + let elt = self.buffer.get_mut(bufidx).and_then(|queue| queue.next()); + if elt.is_none() && client == self.oldest_buffered_group { + // FIXME: VecDeque is unfortunately not zero allocation when empty, + // so we do this job manually. + // `bottom_group..oldest_buffered_group` is unused, and if it's large enough, erase it. + self.oldest_buffered_group += 1; + // skip forward further empty queues too + while self.buffer.get(self.oldest_buffered_group - self.bottom_group) + .map_or(false, |buf| buf.len() == 0) + { + self.oldest_buffered_group += 1; + } + + let nclear = self.oldest_buffered_group - self.bottom_group; + if nclear > 0 && nclear >= self.buffer.len() / 2 { + let mut i = 0; + self.buffer.retain(|buf| { + i += 1; + debug_assert!(buf.len() == 0 || i > nclear); + i > nclear + }); + self.bottom_group = self.oldest_buffered_group; + } + } + elt + } + + /// Take the next element from the iterator, and set the done + /// flag if exhausted. Must not be called after done. + #[inline(always)] + fn next_element(&mut self) -> Option { + debug_assert!(!self.done); + match self.iter.next() { + None => { self.done = true; None } + otherwise => otherwise, + } + } + + + #[inline(never)] + fn step_buffering(&mut self, client: usize) -> Option { + // requested a later group -- walk through the current group up to + // the requested group index, and buffer the elements (unless + // the group is marked as dropped). + // Because the `Groups` iterator is always the first to request + // each group index, client is the next index efter top_group. 
+ debug_assert!(self.top_group + 1 == client); + let mut group = Vec::new(); + + if let Some(elt) = self.current_elt.take() { + if self.top_group != self.dropped_group { + group.push(elt); + } + } + let mut first_elt = None; // first element of the next group + + while let Some(elt) = self.next_element() { + let key = self.key.call_mut(&elt); + match self.current_key.take() { + None => {} + Some(old_key) => if old_key != key { + self.current_key = Some(key); + first_elt = Some(elt); + break; + }, + } + self.current_key = Some(key); + if self.top_group != self.dropped_group { + group.push(elt); + } + } + + if self.top_group != self.dropped_group { + self.push_next_group(group); + } + if first_elt.is_some() { + self.top_group += 1; + debug_assert!(self.top_group == client); + } + first_elt + } + + fn push_next_group(&mut self, group: Vec) { + // When we add a new buffered group, fill up slots between oldest_buffered_group and top_group + while self.top_group - self.bottom_group > self.buffer.len() { + if self.buffer.is_empty() { + self.bottom_group += 1; + self.oldest_buffered_group += 1; + } else { + self.buffer.push(Vec::new().into_iter()); + } + } + self.buffer.push(group.into_iter()); + debug_assert!(self.top_group + 1 - self.bottom_group == self.buffer.len()); + } + + /// This is the immediate case, where we use no buffering + #[inline] + fn step_current(&mut self) -> Option { + debug_assert!(!self.done); + if let elt @ Some(..) = self.current_elt.take() { + return elt; + } + match self.next_element() { + None => None, + Some(elt) => { + let key = self.key.call_mut(&elt); + match self.current_key.take() { + None => {} + Some(old_key) => if old_key != key { + self.current_key = Some(key); + self.current_elt = Some(elt); + self.top_group += 1; + return None; + }, + } + self.current_key = Some(key); + Some(elt) + } + } + } + + /// Request the just started groups' key. + /// + /// `client`: Index of group + /// + /// **Panics** if no group key is available. + fn group_key(&mut self, client: usize) -> K { + // This can only be called after we have just returned the first + // element of a group. + // Perform this by simply buffering one more element, grabbing the + // next key. + debug_assert!(!self.done); + debug_assert!(client == self.top_group); + debug_assert!(self.current_key.is_some()); + debug_assert!(self.current_elt.is_none()); + let old_key = self.current_key.take().unwrap(); + if let Some(elt) = self.next_element() { + let key = self.key.call_mut(&elt); + if old_key != key { + self.top_group += 1; + } + self.current_key = Some(key); + self.current_elt = Some(elt); + } + old_key + } +} + +impl GroupInner + where I: Iterator, +{ + /// Called when a group is dropped + fn drop_group(&mut self, client: usize) { + // It's only useful to track the maximal index + if self.dropped_group == !0 || client > self.dropped_group { + self.dropped_group = client; + } + } +} + +/// `GroupBy` is the storage for the lazy grouping operation. +/// +/// If the groups are consumed in their original order, or if each +/// group is dropped without keeping it around, then `GroupBy` uses +/// no allocations. It needs allocations only if several group iterators +/// are alive at the same time. +/// +/// This type implements [`IntoIterator`] (it is **not** an iterator +/// itself), because the group iterators need to borrow from this +/// value. It should be stored in a local variable or temporary and +/// iterated. +/// +/// See [`.group_by()`](crate::Itertools::group_by) for more information. 
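+/// A brief usage sketch (editorial addition; note that `GroupBy` is iterated by reference):
+///
+/// ```
+/// use itertools::Itertools;
+///
+/// let data = [1, 2, -3, -4, 5];
+/// let groups = data.iter().group_by(|&&x| x >= 0);
+/// let summary: Vec<(bool, Vec<i32>)> = groups
+///     .into_iter()
+///     .map(|(key, group)| (key, group.copied().collect()))
+///     .collect();
+///
+/// assert_eq!(summary, vec![(true, vec![1, 2]), (false, vec![-3, -4]), (true, vec![5])]);
+/// ```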
+#[must_use = "iterator adaptors are lazy and do nothing unless consumed"] +pub struct GroupBy + where I: Iterator, +{ + inner: RefCell>, + // the group iterator's current index. Keep this in the main value + // so that simultaneous iterators all use the same state. + index: Cell, +} + +/// Create a new +pub fn new(iter: J, f: F) -> GroupBy + where J: IntoIterator, + F: FnMut(&J::Item) -> K, +{ + GroupBy { + inner: RefCell::new(GroupInner { + key: f, + iter: iter.into_iter(), + current_key: None, + current_elt: None, + done: false, + top_group: 0, + oldest_buffered_group: 0, + bottom_group: 0, + buffer: Vec::new(), + dropped_group: !0, + }), + index: Cell::new(0), + } +} + +impl GroupBy + where I: Iterator, +{ + /// `client`: Index of group that requests next element + fn step(&self, client: usize) -> Option + where F: FnMut(&I::Item) -> K, + K: PartialEq, + { + self.inner.borrow_mut().step(client) + } + + /// `client`: Index of group + fn drop_group(&self, client: usize) { + self.inner.borrow_mut().drop_group(client); + } +} + +impl<'a, K, I, F> IntoIterator for &'a GroupBy + where I: Iterator, + I::Item: 'a, + F: FnMut(&I::Item) -> K, + K: PartialEq +{ + type Item = (K, Group<'a, K, I, F>); + type IntoIter = Groups<'a, K, I, F>; + + fn into_iter(self) -> Self::IntoIter { + Groups { parent: self } + } +} + + +/// An iterator that yields the Group iterators. +/// +/// Iterator element type is `(K, Group)`: +/// the group's key `K` and the group's iterator. +/// +/// See [`.group_by()`](crate::Itertools::group_by) for more information. +#[must_use = "iterator adaptors are lazy and do nothing unless consumed"] +pub struct Groups<'a, K: 'a, I: 'a, F: 'a> + where I: Iterator, + I::Item: 'a +{ + parent: &'a GroupBy, +} + +impl<'a, K, I, F> Iterator for Groups<'a, K, I, F> + where I: Iterator, + I::Item: 'a, + F: FnMut(&I::Item) -> K, + K: PartialEq +{ + type Item = (K, Group<'a, K, I, F>); + + #[inline] + fn next(&mut self) -> Option { + let index = self.parent.index.get(); + self.parent.index.set(index + 1); + let inner = &mut *self.parent.inner.borrow_mut(); + inner.step(index).map(|elt| { + let key = inner.group_key(index); + (key, Group { + parent: self.parent, + index, + first: Some(elt), + }) + }) + } +} + +/// An iterator for the elements in a single group. +/// +/// Iterator element type is `I::Item`. +pub struct Group<'a, K: 'a, I: 'a, F: 'a> + where I: Iterator, + I::Item: 'a, +{ + parent: &'a GroupBy, + index: usize, + first: Option, +} + +impl<'a, K, I, F> Drop for Group<'a, K, I, F> + where I: Iterator, + I::Item: 'a, +{ + fn drop(&mut self) { + self.parent.drop_group(self.index); + } +} + +impl<'a, K, I, F> Iterator for Group<'a, K, I, F> + where I: Iterator, + I::Item: 'a, + F: FnMut(&I::Item) -> K, + K: PartialEq, +{ + type Item = I::Item; + #[inline] + fn next(&mut self) -> Option { + if let elt @ Some(..) = self.first.take() { + return elt; + } + self.parent.step(self.index) + } +} + +///// IntoChunks ///// + +/// Create a new +pub fn new_chunks(iter: J, size: usize) -> IntoChunks + where J: IntoIterator, +{ + IntoChunks { + inner: RefCell::new(GroupInner { + key: ChunkIndex::new(size), + iter: iter.into_iter(), + current_key: None, + current_elt: None, + done: false, + top_group: 0, + oldest_buffered_group: 0, + bottom_group: 0, + buffer: Vec::new(), + dropped_group: !0, + }), + index: Cell::new(0), + } +} + + +/// `ChunkLazy` is the storage for a lazy chunking operation. 
+/// +/// `IntoChunks` behaves just like `GroupBy`: it is iterable, and +/// it only buffers if several chunk iterators are alive at the same time. +/// +/// This type implements [`IntoIterator`] (it is **not** an iterator +/// itself), because the chunk iterators need to borrow from this +/// value. It should be stored in a local variable or temporary and +/// iterated. +/// +/// Iterator element type is `Chunk`, each chunk's iterator. +/// +/// See [`.chunks()`](crate::Itertools::chunks) for more information. +#[must_use = "iterator adaptors are lazy and do nothing unless consumed"] +pub struct IntoChunks + where I: Iterator, +{ + inner: RefCell>, + // the chunk iterator's current index. Keep this in the main value + // so that simultaneous iterators all use the same state. + index: Cell, +} + +impl Clone for IntoChunks + where I: Clone + Iterator, + I::Item: Clone, +{ + clone_fields!(inner, index); +} + + +impl IntoChunks + where I: Iterator, +{ + /// `client`: Index of chunk that requests next element + fn step(&self, client: usize) -> Option { + self.inner.borrow_mut().step(client) + } + + /// `client`: Index of chunk + fn drop_group(&self, client: usize) { + self.inner.borrow_mut().drop_group(client); + } +} + +impl<'a, I> IntoIterator for &'a IntoChunks + where I: Iterator, + I::Item: 'a, +{ + type Item = Chunk<'a, I>; + type IntoIter = Chunks<'a, I>; + + fn into_iter(self) -> Self::IntoIter { + Chunks { + parent: self, + } + } +} + + +/// An iterator that yields the Chunk iterators. +/// +/// Iterator element type is `Chunk`. +/// +/// See [`.chunks()`](crate::Itertools::chunks) for more information. +#[must_use = "iterator adaptors are lazy and do nothing unless consumed"] +#[derive(Clone)] +pub struct Chunks<'a, I: 'a> + where I: Iterator, + I::Item: 'a, +{ + parent: &'a IntoChunks, +} + +impl<'a, I> Iterator for Chunks<'a, I> + where I: Iterator, + I::Item: 'a, +{ + type Item = Chunk<'a, I>; + + #[inline] + fn next(&mut self) -> Option { + let index = self.parent.index.get(); + self.parent.index.set(index + 1); + let inner = &mut *self.parent.inner.borrow_mut(); + inner.step(index).map(|elt| { + Chunk { + parent: self.parent, + index, + first: Some(elt), + } + }) + } +} + +/// An iterator for the elements in a single chunk. +/// +/// Iterator element type is `I::Item`. +pub struct Chunk<'a, I: 'a> + where I: Iterator, + I::Item: 'a, +{ + parent: &'a IntoChunks, + index: usize, + first: Option, +} + +impl<'a, I> Drop for Chunk<'a, I> + where I: Iterator, + I::Item: 'a, +{ + fn drop(&mut self) { + self.parent.drop_group(self.index); + } +} + +impl<'a, I> Iterator for Chunk<'a, I> + where I: Iterator, + I::Item: 'a, +{ + type Item = I::Item; + #[inline] + fn next(&mut self) -> Option { + if let elt @ Some(..) 
= self.first.take() { + return elt; + } + self.parent.step(self.index) + } +} diff --git a/rust/hw/char/pl011/vendor/itertools/src/grouping_map.rs b/rust/hw/char/pl011/vendor/itertools/src/grouping_map.rs new file mode 100644 index 0000000000..bb5b582c92 --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/src/grouping_map.rs @@ -0,0 +1,535 @@ +#![cfg(feature = "use_std")] + +use crate::MinMaxResult; +use std::collections::HashMap; +use std::cmp::Ordering; +use std::hash::Hash; +use std::iter::Iterator; +use std::ops::{Add, Mul}; + +/// A wrapper to allow for an easy [`into_grouping_map_by`](crate::Itertools::into_grouping_map_by) +#[derive(Clone, Debug)] +pub struct MapForGrouping(I, F); + +impl MapForGrouping { + pub(crate) fn new(iter: I, key_mapper: F) -> Self { + Self(iter, key_mapper) + } +} + +impl Iterator for MapForGrouping + where I: Iterator, + K: Hash + Eq, + F: FnMut(&V) -> K, +{ + type Item = (K, V); + fn next(&mut self) -> Option { + self.0.next().map(|val| ((self.1)(&val), val)) + } +} + +/// Creates a new `GroupingMap` from `iter` +pub fn new(iter: I) -> GroupingMap + where I: Iterator, + K: Hash + Eq, +{ + GroupingMap { iter } +} + +/// `GroupingMapBy` is an intermediate struct for efficient group-and-fold operations. +/// +/// See [`GroupingMap`] for more informations. +pub type GroupingMapBy = GroupingMap>; + +/// `GroupingMap` is an intermediate struct for efficient group-and-fold operations. +/// It groups elements by their key and at the same time fold each group +/// using some aggregating operation. +/// +/// No method on this struct performs temporary allocations. +#[derive(Clone, Debug)] +#[must_use = "GroupingMap is lazy and do nothing unless consumed"] +pub struct GroupingMap { + iter: I, +} + +impl GroupingMap + where I: Iterator, + K: Hash + Eq, +{ + /// This is the generic way to perform any operation on a `GroupingMap`. + /// It's suggested to use this method only to implement custom operations + /// when the already provided ones are not enough. + /// + /// Groups elements from the `GroupingMap` source by key and applies `operation` to the elements + /// of each group sequentially, passing the previously accumulated value, a reference to the key + /// and the current element as arguments, and stores the results in an `HashMap`. + /// + /// The `operation` function is invoked on each element with the following parameters: + /// - the current value of the accumulator of the group if there is currently one; + /// - a reference to the key of the group this element belongs to; + /// - the element from the source being aggregated; + /// + /// If `operation` returns `Some(element)` then the accumulator is updated with `element`, + /// otherwise the previous accumulation is discarded. + /// + /// Return a `HashMap` associating the key of each group with the result of aggregation of + /// that group's elements. If the aggregation of the last element of a group discards the + /// accumulator then there won't be an entry associated to that group's key. 
+ /// + /// ``` + /// use itertools::Itertools; + /// + /// let data = vec![2, 8, 5, 7, 9, 0, 4, 10]; + /// let lookup = data.into_iter() + /// .into_grouping_map_by(|&n| n % 4) + /// .aggregate(|acc, _key, val| { + /// if val == 0 || val == 10 { + /// None + /// } else { + /// Some(acc.unwrap_or(0) + val) + /// } + /// }); + /// + /// assert_eq!(lookup[&0], 4); // 0 resets the accumulator so only 4 is summed + /// assert_eq!(lookup[&1], 5 + 9); + /// assert_eq!(lookup.get(&2), None); // 10 resets the accumulator and nothing is summed afterward + /// assert_eq!(lookup[&3], 7); + /// assert_eq!(lookup.len(), 3); // The final keys are only 0, 1 and 2 + /// ``` + pub fn aggregate(self, mut operation: FO) -> HashMap + where FO: FnMut(Option, &K, V) -> Option, + { + let mut destination_map = HashMap::new(); + + self.iter.for_each(|(key, val)| { + let acc = destination_map.remove(&key); + if let Some(op_res) = operation(acc, &key, val) { + destination_map.insert(key, op_res); + } + }); + + destination_map + } + + /// Groups elements from the `GroupingMap` source by key and applies `operation` to the elements + /// of each group sequentially, passing the previously accumulated value, a reference to the key + /// and the current element as arguments, and stores the results in a new map. + /// + /// `init` is the value from which will be cloned the initial value of each accumulator. + /// + /// `operation` is a function that is invoked on each element with the following parameters: + /// - the current value of the accumulator of the group; + /// - a reference to the key of the group this element belongs to; + /// - the element from the source being accumulated. + /// + /// Return a `HashMap` associating the key of each group with the result of folding that group's elements. + /// + /// ``` + /// use itertools::Itertools; + /// + /// let lookup = (1..=7) + /// .into_grouping_map_by(|&n| n % 3) + /// .fold(0, |acc, _key, val| acc + val); + /// + /// assert_eq!(lookup[&0], 3 + 6); + /// assert_eq!(lookup[&1], 1 + 4 + 7); + /// assert_eq!(lookup[&2], 2 + 5); + /// assert_eq!(lookup.len(), 3); + /// ``` + pub fn fold(self, init: R, mut operation: FO) -> HashMap + where R: Clone, + FO: FnMut(R, &K, V) -> R, + { + self.aggregate(|acc, key, val| { + let acc = acc.unwrap_or_else(|| init.clone()); + Some(operation(acc, key, val)) + }) + } + + /// Groups elements from the `GroupingMap` source by key and applies `operation` to the elements + /// of each group sequentially, passing the previously accumulated value, a reference to the key + /// and the current element as arguments, and stores the results in a new map. + /// + /// This is similar to [`fold`] but the initial value of the accumulator is the first element of the group. + /// + /// `operation` is a function that is invoked on each element with the following parameters: + /// - the current value of the accumulator of the group; + /// - a reference to the key of the group this element belongs to; + /// - the element from the source being accumulated. + /// + /// Return a `HashMap` associating the key of each group with the result of folding that group's elements. 
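// Editorial sketch (illustrative only, not part of the vendored file): the
// `aggregate` method defined above is the building block for custom grouped
// reductions; for example, a per-key element count can be written with it
// directly when none of the provided shorthands fits.
fn aggregate_count_sketch() {
    use itertools::Itertools;

    let counts = vec![1, 3, 4, 6, 7].into_iter()
        .into_grouping_map_by(|&n| n % 2)
        .aggregate(|acc, _key, _val| Some(acc.unwrap_or(0usize) + 1));

    assert_eq!(counts[&0], 2); // 4 and 6
    assert_eq!(counts[&1], 3); // 1, 3 and 7
}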
+ /// + /// [`fold`]: GroupingMap::fold + /// + /// ``` + /// use itertools::Itertools; + /// + /// let lookup = (1..=7) + /// .into_grouping_map_by(|&n| n % 3) + /// .fold_first(|acc, _key, val| acc + val); + /// + /// assert_eq!(lookup[&0], 3 + 6); + /// assert_eq!(lookup[&1], 1 + 4 + 7); + /// assert_eq!(lookup[&2], 2 + 5); + /// assert_eq!(lookup.len(), 3); + /// ``` + pub fn fold_first(self, mut operation: FO) -> HashMap + where FO: FnMut(V, &K, V) -> V, + { + self.aggregate(|acc, key, val| { + Some(match acc { + Some(acc) => operation(acc, key, val), + None => val, + }) + }) + } + + /// Groups elements from the `GroupingMap` source by key and collects the elements of each group in + /// an instance of `C`. The iteration order is preserved when inserting elements. + /// + /// Return a `HashMap` associating the key of each group with the collection containing that group's elements. + /// + /// ``` + /// use itertools::Itertools; + /// use std::collections::HashSet; + /// + /// let lookup = vec![0, 1, 2, 3, 4, 5, 6, 2, 3, 6].into_iter() + /// .into_grouping_map_by(|&n| n % 3) + /// .collect::>(); + /// + /// assert_eq!(lookup[&0], vec![0, 3, 6].into_iter().collect::>()); + /// assert_eq!(lookup[&1], vec![1, 4].into_iter().collect::>()); + /// assert_eq!(lookup[&2], vec![2, 5].into_iter().collect::>()); + /// assert_eq!(lookup.len(), 3); + /// ``` + pub fn collect(self) -> HashMap + where C: Default + Extend, + { + let mut destination_map = HashMap::new(); + + self.iter.for_each(|(key, val)| { + destination_map.entry(key).or_insert_with(C::default).extend(Some(val)); + }); + + destination_map + } + + /// Groups elements from the `GroupingMap` source by key and finds the maximum of each group. + /// + /// If several elements are equally maximum, the last element is picked. + /// + /// Returns a `HashMap` associating the key of each group with the maximum of that group's elements. + /// + /// ``` + /// use itertools::Itertools; + /// + /// let lookup = vec![1, 3, 4, 5, 7, 8, 9, 12].into_iter() + /// .into_grouping_map_by(|&n| n % 3) + /// .max(); + /// + /// assert_eq!(lookup[&0], 12); + /// assert_eq!(lookup[&1], 7); + /// assert_eq!(lookup[&2], 8); + /// assert_eq!(lookup.len(), 3); + /// ``` + pub fn max(self) -> HashMap + where V: Ord, + { + self.max_by(|_, v1, v2| V::cmp(v1, v2)) + } + + /// Groups elements from the `GroupingMap` source by key and finds the maximum of each group + /// with respect to the specified comparison function. + /// + /// If several elements are equally maximum, the last element is picked. + /// + /// Returns a `HashMap` associating the key of each group with the maximum of that group's elements. + /// + /// ``` + /// use itertools::Itertools; + /// + /// let lookup = vec![1, 3, 4, 5, 7, 8, 9, 12].into_iter() + /// .into_grouping_map_by(|&n| n % 3) + /// .max_by(|_key, x, y| y.cmp(x)); + /// + /// assert_eq!(lookup[&0], 3); + /// assert_eq!(lookup[&1], 1); + /// assert_eq!(lookup[&2], 5); + /// assert_eq!(lookup.len(), 3); + /// ``` + pub fn max_by(self, mut compare: F) -> HashMap + where F: FnMut(&K, &V, &V) -> Ordering, + { + self.fold_first(|acc, key, val| match compare(key, &acc, &val) { + Ordering::Less | Ordering::Equal => val, + Ordering::Greater => acc + }) + } + + /// Groups elements from the `GroupingMap` source by key and finds the element of each group + /// that gives the maximum from the specified function. + /// + /// If several elements are equally maximum, the last element is picked. 
+ /// + /// Returns a `HashMap` associating the key of each group with the maximum of that group's elements. + /// + /// ``` + /// use itertools::Itertools; + /// + /// let lookup = vec![1, 3, 4, 5, 7, 8, 9, 12].into_iter() + /// .into_grouping_map_by(|&n| n % 3) + /// .max_by_key(|_key, &val| val % 4); + /// + /// assert_eq!(lookup[&0], 3); + /// assert_eq!(lookup[&1], 7); + /// assert_eq!(lookup[&2], 5); + /// assert_eq!(lookup.len(), 3); + /// ``` + pub fn max_by_key(self, mut f: F) -> HashMap + where F: FnMut(&K, &V) -> CK, + CK: Ord, + { + self.max_by(|key, v1, v2| f(key, v1).cmp(&f(key, v2))) + } + + /// Groups elements from the `GroupingMap` source by key and finds the minimum of each group. + /// + /// If several elements are equally minimum, the first element is picked. + /// + /// Returns a `HashMap` associating the key of each group with the minimum of that group's elements. + /// + /// ``` + /// use itertools::Itertools; + /// + /// let lookup = vec![1, 3, 4, 5, 7, 8, 9, 12].into_iter() + /// .into_grouping_map_by(|&n| n % 3) + /// .min(); + /// + /// assert_eq!(lookup[&0], 3); + /// assert_eq!(lookup[&1], 1); + /// assert_eq!(lookup[&2], 5); + /// assert_eq!(lookup.len(), 3); + /// ``` + pub fn min(self) -> HashMap + where V: Ord, + { + self.min_by(|_, v1, v2| V::cmp(v1, v2)) + } + + /// Groups elements from the `GroupingMap` source by key and finds the minimum of each group + /// with respect to the specified comparison function. + /// + /// If several elements are equally minimum, the first element is picked. + /// + /// Returns a `HashMap` associating the key of each group with the minimum of that group's elements. + /// + /// ``` + /// use itertools::Itertools; + /// + /// let lookup = vec![1, 3, 4, 5, 7, 8, 9, 12].into_iter() + /// .into_grouping_map_by(|&n| n % 3) + /// .min_by(|_key, x, y| y.cmp(x)); + /// + /// assert_eq!(lookup[&0], 12); + /// assert_eq!(lookup[&1], 7); + /// assert_eq!(lookup[&2], 8); + /// assert_eq!(lookup.len(), 3); + /// ``` + pub fn min_by(self, mut compare: F) -> HashMap + where F: FnMut(&K, &V, &V) -> Ordering, + { + self.fold_first(|acc, key, val| match compare(key, &acc, &val) { + Ordering::Less | Ordering::Equal => acc, + Ordering::Greater => val + }) + } + + /// Groups elements from the `GroupingMap` source by key and finds the element of each group + /// that gives the minimum from the specified function. + /// + /// If several elements are equally minimum, the first element is picked. + /// + /// Returns a `HashMap` associating the key of each group with the minimum of that group's elements. + /// + /// ``` + /// use itertools::Itertools; + /// + /// let lookup = vec![1, 3, 4, 5, 7, 8, 9, 12].into_iter() + /// .into_grouping_map_by(|&n| n % 3) + /// .min_by_key(|_key, &val| val % 4); + /// + /// assert_eq!(lookup[&0], 12); + /// assert_eq!(lookup[&1], 4); + /// assert_eq!(lookup[&2], 8); + /// assert_eq!(lookup.len(), 3); + /// ``` + pub fn min_by_key(self, mut f: F) -> HashMap + where F: FnMut(&K, &V) -> CK, + CK: Ord, + { + self.min_by(|key, v1, v2| f(key, v1).cmp(&f(key, v2))) + } + + /// Groups elements from the `GroupingMap` source by key and find the maximum and minimum of + /// each group. + /// + /// If several elements are equally maximum, the last element is picked. + /// If several elements are equally minimum, the first element is picked. + /// + /// See [.minmax()](crate::Itertools::minmax) for the non-grouping version. 
+ /// + /// Differences from the non grouping version: + /// - It never produces a `MinMaxResult::NoElements` + /// - It doesn't have any speedup + /// + /// Returns a `HashMap` associating the key of each group with the minimum and maximum of that group's elements. + /// + /// ``` + /// use itertools::Itertools; + /// use itertools::MinMaxResult::{OneElement, MinMax}; + /// + /// let lookup = vec![1, 3, 4, 5, 7, 9, 12].into_iter() + /// .into_grouping_map_by(|&n| n % 3) + /// .minmax(); + /// + /// assert_eq!(lookup[&0], MinMax(3, 12)); + /// assert_eq!(lookup[&1], MinMax(1, 7)); + /// assert_eq!(lookup[&2], OneElement(5)); + /// assert_eq!(lookup.len(), 3); + /// ``` + pub fn minmax(self) -> HashMap> + where V: Ord, + { + self.minmax_by(|_, v1, v2| V::cmp(v1, v2)) + } + + /// Groups elements from the `GroupingMap` source by key and find the maximum and minimum of + /// each group with respect to the specified comparison function. + /// + /// If several elements are equally maximum, the last element is picked. + /// If several elements are equally minimum, the first element is picked. + /// + /// It has the same differences from the non-grouping version as `minmax`. + /// + /// Returns a `HashMap` associating the key of each group with the minimum and maximum of that group's elements. + /// + /// ``` + /// use itertools::Itertools; + /// use itertools::MinMaxResult::{OneElement, MinMax}; + /// + /// let lookup = vec![1, 3, 4, 5, 7, 9, 12].into_iter() + /// .into_grouping_map_by(|&n| n % 3) + /// .minmax_by(|_key, x, y| y.cmp(x)); + /// + /// assert_eq!(lookup[&0], MinMax(12, 3)); + /// assert_eq!(lookup[&1], MinMax(7, 1)); + /// assert_eq!(lookup[&2], OneElement(5)); + /// assert_eq!(lookup.len(), 3); + /// ``` + pub fn minmax_by(self, mut compare: F) -> HashMap> + where F: FnMut(&K, &V, &V) -> Ordering, + { + self.aggregate(|acc, key, val| { + Some(match acc { + Some(MinMaxResult::OneElement(e)) => { + if compare(key, &val, &e) == Ordering::Less { + MinMaxResult::MinMax(val, e) + } else { + MinMaxResult::MinMax(e, val) + } + } + Some(MinMaxResult::MinMax(min, max)) => { + if compare(key, &val, &min) == Ordering::Less { + MinMaxResult::MinMax(val, max) + } else if compare(key, &val, &max) != Ordering::Less { + MinMaxResult::MinMax(min, val) + } else { + MinMaxResult::MinMax(min, max) + } + } + None => MinMaxResult::OneElement(val), + Some(MinMaxResult::NoElements) => unreachable!(), + }) + }) + } + + /// Groups elements from the `GroupingMap` source by key and find the elements of each group + /// that gives the minimum and maximum from the specified function. + /// + /// If several elements are equally maximum, the last element is picked. + /// If several elements are equally minimum, the first element is picked. + /// + /// It has the same differences from the non-grouping version as `minmax`. + /// + /// Returns a `HashMap` associating the key of each group with the minimum and maximum of that group's elements. 
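// Editorial sketch (illustrative only, not part of the vendored file): the
// aggregate-based state machine in `minmax_by` above starts each group as
// OneElement and upgrades it to MinMax as soon as a second element arrives.
fn grouped_minmax_sketch() {
    use itertools::Itertools;
    use itertools::MinMaxResult::{MinMax, OneElement};

    let lookup = vec![3, 7, 10].into_iter()
        .into_grouping_map_by(|&n| n % 2)
        .minmax();

    assert_eq!(lookup[&1], MinMax(3, 7));   // two odd elements
    assert_eq!(lookup[&0], OneElement(10)); // a single even element
}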
+ /// + /// ``` + /// use itertools::Itertools; + /// use itertools::MinMaxResult::{OneElement, MinMax}; + /// + /// let lookup = vec![1, 3, 4, 5, 7, 9, 12].into_iter() + /// .into_grouping_map_by(|&n| n % 3) + /// .minmax_by_key(|_key, &val| val % 4); + /// + /// assert_eq!(lookup[&0], MinMax(12, 3)); + /// assert_eq!(lookup[&1], MinMax(4, 7)); + /// assert_eq!(lookup[&2], OneElement(5)); + /// assert_eq!(lookup.len(), 3); + /// ``` + pub fn minmax_by_key(self, mut f: F) -> HashMap> + where F: FnMut(&K, &V) -> CK, + CK: Ord, + { + self.minmax_by(|key, v1, v2| f(key, v1).cmp(&f(key, v2))) + } + + /// Groups elements from the `GroupingMap` source by key and sums them. + /// + /// This is just a shorthand for `self.fold_first(|acc, _, val| acc + val)`. + /// It is more limited than `Iterator::sum` since it doesn't use the `Sum` trait. + /// + /// Returns a `HashMap` associating the key of each group with the sum of that group's elements. + /// + /// ``` + /// use itertools::Itertools; + /// + /// let lookup = vec![1, 3, 4, 5, 7, 8, 9, 12].into_iter() + /// .into_grouping_map_by(|&n| n % 3) + /// .sum(); + /// + /// assert_eq!(lookup[&0], 3 + 9 + 12); + /// assert_eq!(lookup[&1], 1 + 4 + 7); + /// assert_eq!(lookup[&2], 5 + 8); + /// assert_eq!(lookup.len(), 3); + /// ``` + pub fn sum(self) -> HashMap + where V: Add + { + self.fold_first(|acc, _, val| acc + val) + } + + /// Groups elements from the `GroupingMap` source by key and multiply them. + /// + /// This is just a shorthand for `self.fold_first(|acc, _, val| acc * val)`. + /// It is more limited than `Iterator::product` since it doesn't use the `Product` trait. + /// + /// Returns a `HashMap` associating the key of each group with the product of that group's elements. + /// + /// ``` + /// use itertools::Itertools; + /// + /// let lookup = vec![1, 3, 4, 5, 7, 8, 9, 12].into_iter() + /// .into_grouping_map_by(|&n| n % 3) + /// .product(); + /// + /// assert_eq!(lookup[&0], 3 * 9 * 12); + /// assert_eq!(lookup[&1], 1 * 4 * 7); + /// assert_eq!(lookup[&2], 5 * 8); + /// assert_eq!(lookup.len(), 3); + /// ``` + pub fn product(self) -> HashMap + where V: Mul, + { + self.fold_first(|acc, _, val| acc * val) + } +} diff --git a/rust/hw/char/pl011/vendor/itertools/src/impl_macros.rs b/rust/hw/char/pl011/vendor/itertools/src/impl_macros.rs new file mode 100644 index 0000000000..a029843b05 --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/src/impl_macros.rs @@ -0,0 +1,29 @@ +//! +//! Implementation's internal macros + +macro_rules! debug_fmt_fields { + ($tyname:ident, $($($field:tt/*TODO ideally we would accept ident or tuple element here*/).+),*) => { + fn fmt(&self, f: &mut ::std::fmt::Formatter) -> ::std::fmt::Result { + f.debug_struct(stringify!($tyname)) + $( + .field(stringify!($($field).+), &self.$($field).+) + )* + .finish() + } + } +} + +macro_rules! clone_fields { + ($($field:ident),*) => { + #[inline] // TODO is this sensible? + fn clone(&self) -> Self { + Self { + $($field: self.$field.clone(),)* + } + } + } +} + +macro_rules! 
ignore_ident{
+    ($id:ident, $($t:tt)*) => {$($t)*};
+}
diff --git a/rust/hw/char/pl011/vendor/itertools/src/intersperse.rs b/rust/hw/char/pl011/vendor/itertools/src/intersperse.rs
new file mode 100644
index 0000000000..10a3a5389c
--- /dev/null
+++ b/rust/hw/char/pl011/vendor/itertools/src/intersperse.rs
@@ -0,0 +1,118 @@
+use std::iter::{Fuse, FusedIterator};
+use super::size_hint;
+
+pub trait IntersperseElement<Item> {
+    fn generate(&mut self) -> Item;
+}
+
+#[derive(Debug, Clone)]
+pub struct IntersperseElementSimple<Item>(Item);
+
+impl<Item: Clone> IntersperseElement<Item> for IntersperseElementSimple<Item> {
+    fn generate(&mut self) -> Item {
+        self.0.clone()
+    }
+}
+
+/// An iterator adaptor to insert a particular value
+/// between each element of the adapted iterator.
+///
+/// Iterator element type is `I::Item`
+///
+/// This iterator is *fused*.
+///
+/// See [`.intersperse()`](crate::Itertools::intersperse) for more information.
+pub type Intersperse<I> = IntersperseWith<I, IntersperseElementSimple<<I as Iterator>::Item>>;
+
+/// Create a new Intersperse iterator
+pub fn intersperse<I>(iter: I, elt: I::Item) -> Intersperse<I>
+    where I: Iterator,
+{
+    intersperse_with(iter, IntersperseElementSimple(elt))
+}
+
+impl<Item, F: FnMut() -> Item> IntersperseElement<Item> for F {
+    fn generate(&mut self) -> Item {
+        self()
+    }
+}
+
+/// An iterator adaptor to insert a particular value created by a function
+/// between each element of the adapted iterator.
+///
+/// Iterator element type is `I::Item`
+///
+/// This iterator is *fused*.
+///
+/// See [`.intersperse_with()`](crate::Itertools::intersperse_with) for more information.
+#[must_use = "iterator adaptors are lazy and do nothing unless consumed"]
+#[derive(Clone, Debug)]
+pub struct IntersperseWith<I, ElemF>
+    where I: Iterator,
+{
+    element: ElemF,
+    iter: Fuse<I>,
+    peek: Option<I::Item>,
+}
+
+/// Create a new `IntersperseWith` iterator
+pub fn intersperse_with<I, ElemF>(iter: I, elt: ElemF) -> IntersperseWith<I, ElemF>
+    where I: Iterator,
+{
+    let mut iter = iter.fuse();
+    IntersperseWith {
+        peek: iter.next(),
+        iter,
+        element: elt,
+    }
+}
+
+impl<I, ElemF> Iterator for IntersperseWith<I, ElemF>
+    where I: Iterator,
+          ElemF: IntersperseElement<I::Item>
+{
+    type Item = I::Item;
+    #[inline]
+    fn next(&mut self) -> Option<I::Item> {
+        if self.peek.is_some() {
+            self.peek.take()
+        } else {
+            self.peek = self.iter.next();
+            if self.peek.is_some() {
+                Some(self.element.generate())
+            } else {
+                None
+            }
+        }
+    }
+
+    fn size_hint(&self) -> (usize, Option<usize>) {
+        // 2 * SH + { 1 or 0 }
+        let has_peek = self.peek.is_some() as usize;
+        let sh = self.iter.size_hint();
+        size_hint::add_scalar(size_hint::add(sh, sh), has_peek)
+    }
+
+    fn fold<B, F>(mut self, init: B, mut f: F) -> B where
+        Self: Sized, F: FnMut(B, Self::Item) -> B,
+    {
+        let mut accum = init;
+
+        if let Some(x) = self.peek.take() {
+            accum = f(accum, x);
+        }
+
+        let element = &mut self.element;
+
+        self.iter.fold(accum,
+            |accum, x| {
+                let accum = f(accum, element.generate());
+                f(accum, x)
+            })
+    }
+}
+
+impl<I, ElemF> FusedIterator for IntersperseWith<I, ElemF>
+    where I: Iterator,
+          ElemF: IntersperseElement<I::Item>
+{}
diff --git a/rust/hw/char/pl011/vendor/itertools/src/k_smallest.rs b/rust/hw/char/pl011/vendor/itertools/src/k_smallest.rs
new file mode 100644
index 0000000000..acaea5941c
--- /dev/null
+++ b/rust/hw/char/pl011/vendor/itertools/src/k_smallest.rs
@@ -0,0 +1,20 @@
+use alloc::collections::BinaryHeap;
+use core::cmp::Ord;
+
+pub(crate) fn k_smallest<T: Ord, I: Iterator<Item = T>>(mut iter: I, k: usize) -> BinaryHeap<T> {
+    if k == 0 { return BinaryHeap::new(); }
+
+    let mut heap = iter.by_ref().take(k).collect::<BinaryHeap<_>>();
+
+    iter.for_each(|i| {
+        debug_assert_eq!(heap.len(), k);
+        // Equivalent to heap.push(min(i,
heap.pop())) but more efficient. + // This should be done with a single `.peek_mut().unwrap()` but + // `PeekMut` sifts-down unconditionally on Rust 1.46.0 and prior. + if *heap.peek().unwrap() > i { + *heap.peek_mut().unwrap() = i; + } + }); + + heap +} diff --git a/rust/hw/char/pl011/vendor/itertools/src/kmerge_impl.rs b/rust/hw/char/pl011/vendor/itertools/src/kmerge_impl.rs new file mode 100644 index 0000000000..509d5fc6a3 --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/src/kmerge_impl.rs @@ -0,0 +1,227 @@ +use crate::size_hint; +use crate::Itertools; + +use alloc::vec::Vec; +use std::iter::FusedIterator; +use std::mem::replace; +use std::fmt; + +/// Head element and Tail iterator pair +/// +/// `PartialEq`, `Eq`, `PartialOrd` and `Ord` are implemented by comparing sequences based on +/// first items (which are guaranteed to exist). +/// +/// The meanings of `PartialOrd` and `Ord` are reversed so as to turn the heap used in +/// `KMerge` into a min-heap. +#[derive(Debug)] +struct HeadTail + where I: Iterator +{ + head: I::Item, + tail: I, +} + +impl HeadTail + where I: Iterator +{ + /// Constructs a `HeadTail` from an `Iterator`. Returns `None` if the `Iterator` is empty. + fn new(mut it: I) -> Option> { + let head = it.next(); + head.map(|h| { + HeadTail { + head: h, + tail: it, + } + }) + } + + /// Get the next element and update `head`, returning the old head in `Some`. + /// + /// Returns `None` when the tail is exhausted (only `head` then remains). + fn next(&mut self) -> Option { + if let Some(next) = self.tail.next() { + Some(replace(&mut self.head, next)) + } else { + None + } + } + + /// Hints at the size of the sequence, same as the `Iterator` method. + fn size_hint(&self) -> (usize, Option) { + size_hint::add_scalar(self.tail.size_hint(), 1) + } +} + +impl Clone for HeadTail + where I: Iterator + Clone, + I::Item: Clone +{ + clone_fields!(head, tail); +} + +/// Make `data` a heap (min-heap w.r.t the sorting). +fn heapify(data: &mut [T], mut less_than: S) + where S: FnMut(&T, &T) -> bool +{ + for i in (0..data.len() / 2).rev() { + sift_down(data, i, &mut less_than); + } +} + +/// Sift down element at `index` (`heap` is a min-heap wrt the ordering) +fn sift_down(heap: &mut [T], index: usize, mut less_than: S) + where S: FnMut(&T, &T) -> bool +{ + debug_assert!(index <= heap.len()); + let mut pos = index; + let mut child = 2 * pos + 1; + // Require the right child to be present + // This allows to find the index of the smallest child without a branch + // that wouldn't be predicted if present + while child + 1 < heap.len() { + // pick the smaller of the two children + // use arithmetic to avoid an unpredictable branch + child += less_than(&heap[child+1], &heap[child]) as usize; + + // sift down is done if we are already in order + if !less_than(&heap[child], &heap[pos]) { + return; + } + heap.swap(pos, child); + pos = child; + child = 2 * pos + 1; + } + // Check if the last (left) child was an only child + // if it is then it has to be compared with the parent + if child + 1 == heap.len() && less_than(&heap[child], &heap[pos]) { + heap.swap(pos, child); + } +} + +/// An iterator adaptor that merges an abitrary number of base iterators in ascending order. +/// If all base iterators are sorted (ascending), the result is sorted. +/// +/// Iterator element type is `I::Item`. +/// +/// See [`.kmerge()`](crate::Itertools::kmerge) for more information. 
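// Editorial sketch (illustrative only, not part of the vendored files): the
// bounded max-heap kept by the k_smallest helper in k_smallest.rs above backs
// the public Itertools::k_smallest adaptor, which yields the k smallest
// elements in ascending order.
fn k_smallest_sketch() {
    use itertools::Itertools;

    let smallest: Vec<_> = (0..100).rev().k_smallest(3).collect();
    assert_eq!(smallest, vec![0, 1, 2]);
}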
+pub type KMerge = KMergeBy; + +pub trait KMergePredicate { + fn kmerge_pred(&mut self, a: &T, b: &T) -> bool; +} + +#[derive(Clone, Debug)] +pub struct KMergeByLt; + +impl KMergePredicate for KMergeByLt { + fn kmerge_pred(&mut self, a: &T, b: &T) -> bool { + a < b + } +} + +implbool> KMergePredicate for F { + fn kmerge_pred(&mut self, a: &T, b: &T) -> bool { + self(a, b) + } +} + +/// Create an iterator that merges elements of the contained iterators using +/// the ordering function. +/// +/// [`IntoIterator`] enabled version of [`Itertools::kmerge`]. +/// +/// ``` +/// use itertools::kmerge; +/// +/// for elt in kmerge(vec![vec![0, 2, 4], vec![1, 3, 5], vec![6, 7]]) { +/// /* loop body */ +/// } +/// ``` +pub fn kmerge(iterable: I) -> KMerge<::IntoIter> + where I: IntoIterator, + I::Item: IntoIterator, + <::Item as IntoIterator>::Item: PartialOrd +{ + kmerge_by(iterable, KMergeByLt) +} + +/// An iterator adaptor that merges an abitrary number of base iterators +/// according to an ordering function. +/// +/// Iterator element type is `I::Item`. +/// +/// See [`.kmerge_by()`](crate::Itertools::kmerge_by) for more +/// information. +#[must_use = "iterator adaptors are lazy and do nothing unless consumed"] +pub struct KMergeBy + where I: Iterator, +{ + heap: Vec>, + less_than: F, +} + +impl fmt::Debug for KMergeBy + where I: Iterator + fmt::Debug, + I::Item: fmt::Debug, +{ + debug_fmt_fields!(KMergeBy, heap); +} + +/// Create an iterator that merges elements of the contained iterators. +/// +/// [`IntoIterator`] enabled version of [`Itertools::kmerge_by`]. +pub fn kmerge_by(iterable: I, mut less_than: F) + -> KMergeBy<::IntoIter, F> + where I: IntoIterator, + I::Item: IntoIterator, + F: KMergePredicate<<::Item as IntoIterator>::Item>, +{ + let iter = iterable.into_iter(); + let (lower, _) = iter.size_hint(); + let mut heap: Vec<_> = Vec::with_capacity(lower); + heap.extend(iter.filter_map(|it| HeadTail::new(it.into_iter()))); + heapify(&mut heap, |a, b| less_than.kmerge_pred(&a.head, &b.head)); + KMergeBy { heap, less_than } +} + +impl Clone for KMergeBy + where I: Iterator + Clone, + I::Item: Clone, + F: Clone, +{ + clone_fields!(heap, less_than); +} + +impl Iterator for KMergeBy + where I: Iterator, + F: KMergePredicate +{ + type Item = I::Item; + + fn next(&mut self) -> Option { + if self.heap.is_empty() { + return None; + } + let result = if let Some(next) = self.heap[0].next() { + next + } else { + self.heap.swap_remove(0).head + }; + let less_than = &mut self.less_than; + sift_down(&mut self.heap, 0, |a, b| less_than.kmerge_pred(&a.head, &b.head)); + Some(result) + } + + fn size_hint(&self) -> (usize, Option) { + #[allow(deprecated)] //TODO: once msrv hits 1.51. 
replace `fold1` with `reduce` + self.heap.iter() + .map(|i| i.size_hint()) + .fold1(size_hint::add) + .unwrap_or((0, Some(0))) + } +} + +impl FusedIterator for KMergeBy + where I: Iterator, + F: KMergePredicate +{} diff --git a/rust/hw/char/pl011/vendor/itertools/src/lazy_buffer.rs b/rust/hw/char/pl011/vendor/itertools/src/lazy_buffer.rs new file mode 100644 index 0000000000..ca24062aab --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/src/lazy_buffer.rs @@ -0,0 +1,63 @@ +use std::ops::Index; +use alloc::vec::Vec; + +#[derive(Debug, Clone)] +pub struct LazyBuffer { + pub it: I, + done: bool, + buffer: Vec, +} + +impl LazyBuffer +where + I: Iterator, +{ + pub fn new(it: I) -> LazyBuffer { + LazyBuffer { + it, + done: false, + buffer: Vec::new(), + } + } + + pub fn len(&self) -> usize { + self.buffer.len() + } + + pub fn get_next(&mut self) -> bool { + if self.done { + return false; + } + if let Some(x) = self.it.next() { + self.buffer.push(x); + true + } else { + self.done = true; + false + } + } + + pub fn prefill(&mut self, len: usize) { + let buffer_len = self.buffer.len(); + + if !self.done && len > buffer_len { + let delta = len - buffer_len; + + self.buffer.extend(self.it.by_ref().take(delta)); + self.done = self.buffer.len() < len; + } + } +} + +impl Index for LazyBuffer +where + I: Iterator, + I::Item: Sized, + Vec: Index +{ + type Output = as Index>::Output; + + fn index(&self, index: J) -> &Self::Output { + self.buffer.index(index) + } +} diff --git a/rust/hw/char/pl011/vendor/itertools/src/lib.rs b/rust/hw/char/pl011/vendor/itertools/src/lib.rs new file mode 100644 index 0000000000..c23a65db5c --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/src/lib.rs @@ -0,0 +1,3967 @@ +#![warn(missing_docs)] +#![crate_name="itertools"] +#![cfg_attr(not(feature = "use_std"), no_std)] + +//! Extra iterator adaptors, functions and macros. +//! +//! To extend [`Iterator`] with methods in this crate, import +//! the [`Itertools`] trait: +//! +//! ``` +//! use itertools::Itertools; +//! ``` +//! +//! Now, new methods like [`interleave`](Itertools::interleave) +//! are available on all iterators: +//! +//! ``` +//! use itertools::Itertools; +//! +//! let it = (1..3).interleave(vec![-1, -2]); +//! itertools::assert_equal(it, vec![1, -1, 2, -2]); +//! ``` +//! +//! Most iterator methods are also provided as functions (with the benefit +//! that they convert parameters using [`IntoIterator`]): +//! +//! ``` +//! use itertools::interleave; +//! +//! for elt in interleave(&[1, 2, 3], &[2, 3, 4]) { +//! /* loop body */ +//! } +//! ``` +//! +//! ## Crate Features +//! +//! - `use_std` +//! - Enabled by default. +//! - Disable to compile itertools using `#![no_std]`. This disables +//! any items that depend on collections (like `group_by`, `unique`, +//! `kmerge`, `join` and many more). +//! +//! ## Rust Version +//! +//! This version of itertools requires Rust 1.32 or later. 
+#![doc(html_root_url="https://docs.rs/itertools/0.8/")] + +#[cfg(not(feature = "use_std"))] +extern crate core as std; + +#[cfg(feature = "use_alloc")] +extern crate alloc; + +#[cfg(feature = "use_alloc")] +use alloc::{ + string::String, + vec::Vec, +}; + +pub use either::Either; + +use core::borrow::Borrow; +#[cfg(feature = "use_std")] +use std::collections::HashMap; +use std::iter::{IntoIterator, once}; +use std::cmp::Ordering; +use std::fmt; +#[cfg(feature = "use_std")] +use std::collections::HashSet; +#[cfg(feature = "use_std")] +use std::hash::Hash; +#[cfg(feature = "use_alloc")] +use std::fmt::Write; +#[cfg(feature = "use_alloc")] +type VecIntoIter = alloc::vec::IntoIter; +#[cfg(feature = "use_alloc")] +use std::iter::FromIterator; + +#[macro_use] +mod impl_macros; + +// for compatibility with no std and macros +#[doc(hidden)] +pub use std::iter as __std_iter; + +/// The concrete iterator types. +pub mod structs { + pub use crate::adaptors::{ + Dedup, + DedupBy, + DedupWithCount, + DedupByWithCount, + Interleave, + InterleaveShortest, + FilterMapOk, + FilterOk, + Product, + PutBack, + Batching, + MapInto, + MapOk, + Merge, + MergeBy, + TakeWhileRef, + WhileSome, + Coalesce, + TupleCombinations, + Positions, + Update, + }; + #[allow(deprecated)] + pub use crate::adaptors::{MapResults, Step}; + #[cfg(feature = "use_alloc")] + pub use crate::adaptors::MultiProduct; + #[cfg(feature = "use_alloc")] + pub use crate::combinations::Combinations; + #[cfg(feature = "use_alloc")] + pub use crate::combinations_with_replacement::CombinationsWithReplacement; + pub use crate::cons_tuples_impl::ConsTuples; + pub use crate::exactly_one_err::ExactlyOneError; + pub use crate::format::{Format, FormatWith}; + pub use crate::flatten_ok::FlattenOk; + #[cfg(feature = "use_std")] + pub use crate::grouping_map::{GroupingMap, GroupingMapBy}; + #[cfg(feature = "use_alloc")] + pub use crate::groupbylazy::{IntoChunks, Chunk, Chunks, GroupBy, Group, Groups}; + pub use crate::intersperse::{Intersperse, IntersperseWith}; + #[cfg(feature = "use_alloc")] + pub use crate::kmerge_impl::{KMerge, KMergeBy}; + pub use crate::merge_join::MergeJoinBy; + #[cfg(feature = "use_alloc")] + pub use crate::multipeek_impl::MultiPeek; + #[cfg(feature = "use_alloc")] + pub use crate::peek_nth::PeekNth; + pub use crate::pad_tail::PadUsing; + pub use crate::peeking_take_while::PeekingTakeWhile; + #[cfg(feature = "use_alloc")] + pub use crate::permutations::Permutations; + pub use crate::process_results_impl::ProcessResults; + #[cfg(feature = "use_alloc")] + pub use crate::powerset::Powerset; + #[cfg(feature = "use_alloc")] + pub use crate::put_back_n_impl::PutBackN; + #[cfg(feature = "use_alloc")] + pub use crate::rciter_impl::RcIter; + pub use crate::repeatn::RepeatN; + #[allow(deprecated)] + pub use crate::sources::{RepeatCall, Unfold, Iterate}; + pub use crate::take_while_inclusive::TakeWhileInclusive; + #[cfg(feature = "use_alloc")] + pub use crate::tee::Tee; + pub use crate::tuple_impl::{TupleBuffer, TupleWindows, CircularTupleWindows, Tuples}; + #[cfg(feature = "use_std")] + pub use crate::duplicates_impl::{Duplicates, DuplicatesBy}; + #[cfg(feature = "use_std")] + pub use crate::unique_impl::{Unique, UniqueBy}; + pub use crate::with_position::WithPosition; + pub use crate::zip_eq_impl::ZipEq; + pub use crate::zip_longest::ZipLongest; + pub use crate::ziptuple::Zip; +} + +/// Traits helpful for using certain `Itertools` methods in generic contexts. 
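// Editorial sketch (illustrative only, not part of the vendored file): the
// `structs` module above re-exports each adaptor's concrete type so that it
// can be named in signatures, for example when an adaptor is returned from a
// helper function.
fn structs_module_sketch() {
    use itertools::structs::Intersperse;
    use itertools::Itertools;

    fn spaced(words: std::vec::IntoIter<&'static str>) -> Intersperse<std::vec::IntoIter<&'static str>> {
        // The fully qualified call avoids the name collision with the
        // unstable std `Iterator::intersperse`.
        Itertools::intersperse(words, ", ")
    }

    let joined: String = spaced(vec!["a", "b", "c"].into_iter()).collect();
    assert_eq!(joined, "a, b, c");
}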
+pub mod traits { + pub use crate::tuple_impl::HomogeneousTuple; +} + +#[allow(deprecated)] +pub use crate::structs::*; +pub use crate::concat_impl::concat; +pub use crate::cons_tuples_impl::cons_tuples; +pub use crate::diff::diff_with; +pub use crate::diff::Diff; +#[cfg(feature = "use_alloc")] +pub use crate::kmerge_impl::{kmerge_by}; +pub use crate::minmax::MinMaxResult; +pub use crate::peeking_take_while::PeekingNext; +pub use crate::process_results_impl::process_results; +pub use crate::repeatn::repeat_n; +#[allow(deprecated)] +pub use crate::sources::{repeat_call, unfold, iterate}; +pub use crate::with_position::Position; +pub use crate::unziptuple::{multiunzip, MultiUnzip}; +pub use crate::ziptuple::multizip; +mod adaptors; +mod either_or_both; +pub use crate::either_or_both::EitherOrBoth; +#[doc(hidden)] +pub mod free; +#[doc(inline)] +pub use crate::free::*; +mod concat_impl; +mod cons_tuples_impl; +#[cfg(feature = "use_alloc")] +mod combinations; +#[cfg(feature = "use_alloc")] +mod combinations_with_replacement; +mod exactly_one_err; +mod diff; +mod flatten_ok; +#[cfg(feature = "use_std")] +mod extrema_set; +mod format; +#[cfg(feature = "use_std")] +mod grouping_map; +#[cfg(feature = "use_alloc")] +mod group_map; +#[cfg(feature = "use_alloc")] +mod groupbylazy; +mod intersperse; +#[cfg(feature = "use_alloc")] +mod k_smallest; +#[cfg(feature = "use_alloc")] +mod kmerge_impl; +#[cfg(feature = "use_alloc")] +mod lazy_buffer; +mod merge_join; +mod minmax; +#[cfg(feature = "use_alloc")] +mod multipeek_impl; +mod pad_tail; +#[cfg(feature = "use_alloc")] +mod peek_nth; +mod peeking_take_while; +#[cfg(feature = "use_alloc")] +mod permutations; +#[cfg(feature = "use_alloc")] +mod powerset; +mod process_results_impl; +#[cfg(feature = "use_alloc")] +mod put_back_n_impl; +#[cfg(feature = "use_alloc")] +mod rciter_impl; +mod repeatn; +mod size_hint; +mod sources; +mod take_while_inclusive; +#[cfg(feature = "use_alloc")] +mod tee; +mod tuple_impl; +#[cfg(feature = "use_std")] +mod duplicates_impl; +#[cfg(feature = "use_std")] +mod unique_impl; +mod unziptuple; +mod with_position; +mod zip_eq_impl; +mod zip_longest; +mod ziptuple; + +#[macro_export] +/// Create an iterator over the “cartesian product” of iterators. +/// +/// Iterator element type is like `(A, B, ..., E)` if formed +/// from iterators `(I, J, ..., M)` with element types `I::Item = A`, `J::Item = B`, etc. +/// +/// ``` +/// # use itertools::iproduct; +/// # +/// # fn main() { +/// // Iterate over the coordinates of a 4 x 4 x 4 grid +/// // from (0, 0, 0), (0, 0, 1), .., (0, 1, 0), (0, 1, 1), .. etc until (3, 3, 3) +/// for (i, j, k) in iproduct!(0..4, 0..4, 0..4) { +/// // .. +/// } +/// # } +/// ``` +macro_rules! iproduct { + (@flatten $I:expr,) => ( + $I + ); + (@flatten $I:expr, $J:expr, $($K:expr,)*) => ( + $crate::iproduct!(@flatten $crate::cons_tuples($crate::iproduct!($I, $J)), $($K,)*) + ); + ($I:expr) => ( + $crate::__std_iter::IntoIterator::into_iter($I) + ); + ($I:expr, $J:expr) => ( + $crate::Itertools::cartesian_product($crate::iproduct!($I), $crate::iproduct!($J)) + ); + ($I:expr, $J:expr, $($K:expr),+) => ( + $crate::iproduct!(@flatten $crate::iproduct!($I, $J), $($K,)+) + ); +} + +#[macro_export] +/// Create an iterator running multiple iterators in lockstep. +/// +/// The `izip!` iterator yields elements until any subiterator +/// returns `None`. +/// +/// This is a version of the standard ``.zip()`` that's supporting more than +/// two iterators. 
The iterator element type is a tuple with one element +/// from each of the input iterators. Just like ``.zip()``, the iteration stops +/// when the shortest of the inputs reaches its end. +/// +/// **Note:** The result of this macro is in the general case an iterator +/// composed of repeated `.zip()` and a `.map()`; it has an anonymous type. +/// The special cases of one and two arguments produce the equivalent of +/// `$a.into_iter()` and `$a.into_iter().zip($b)` respectively. +/// +/// Prefer this macro `izip!()` over [`multizip`] for the performance benefits +/// of using the standard library `.zip()`. +/// +/// ``` +/// # use itertools::izip; +/// # +/// # fn main() { +/// +/// // iterate over three sequences side-by-side +/// let mut results = [0, 0, 0, 0]; +/// let inputs = [3, 7, 9, 6]; +/// +/// for (r, index, input) in izip!(&mut results, 0..10, &inputs) { +/// *r = index * 10 + input; +/// } +/// +/// assert_eq!(results, [0 + 3, 10 + 7, 29, 36]); +/// # } +/// ``` +macro_rules! izip { + // @closure creates a tuple-flattening closure for .map() call. usage: + // @closure partial_pattern => partial_tuple , rest , of , iterators + // eg. izip!( @closure ((a, b), c) => (a, b, c) , dd , ee ) + ( @closure $p:pat => $tup:expr ) => { + |$p| $tup + }; + + // The "b" identifier is a different identifier on each recursion level thanks to hygiene. + ( @closure $p:pat => ( $($tup:tt)* ) , $_iter:expr $( , $tail:expr )* ) => { + $crate::izip!(@closure ($p, b) => ( $($tup)*, b ) $( , $tail )*) + }; + + // unary + ($first:expr $(,)*) => { + $crate::__std_iter::IntoIterator::into_iter($first) + }; + + // binary + ($first:expr, $second:expr $(,)*) => { + $crate::izip!($first) + .zip($second) + }; + + // n-ary where n > 2 + ( $first:expr $( , $rest:expr )* $(,)* ) => { + $crate::izip!($first) + $( + .zip($rest) + )* + .map( + $crate::izip!(@closure a => (a) $( , $rest )*) + ) + }; +} + +#[macro_export] +/// [Chain][`chain`] zero or more iterators together into one sequence. +/// +/// The comma-separated arguments must implement [`IntoIterator`]. +/// The final argument may be followed by a trailing comma. +/// +/// [`chain`]: Iterator::chain +/// +/// # Examples +/// +/// Empty invocations of `chain!` expand to an invocation of [`std::iter::empty`]: +/// ``` +/// use std::iter; +/// use itertools::chain; +/// +/// let _: iter::Empty<()> = chain!(); +/// let _: iter::Empty = chain!(); +/// ``` +/// +/// Invocations of `chain!` with one argument expand to [`arg.into_iter()`](IntoIterator): +/// ``` +/// use std::{ops::Range, slice}; +/// use itertools::chain; +/// let _: as IntoIterator>::IntoIter = chain!((2..6),); // trailing comma optional! +/// let _: <&[_] as IntoIterator>::IntoIter = chain!(&[2, 3, 4]); +/// ``` +/// +/// Invocations of `chain!` with multiple arguments [`.into_iter()`](IntoIterator) each +/// argument, and then [`chain`] them together: +/// ``` +/// use std::{iter::*, ops::Range, slice}; +/// use itertools::{assert_equal, chain}; +/// +/// // e.g., this: +/// let with_macro: Chain, Take>>, slice::Iter<_>> = +/// chain![once(&0), repeat(&1).take(2), &[2, 3, 5],]; +/// +/// // ...is equivalent to this: +/// let with_method: Chain, Take>>, slice::Iter<_>> = +/// once(&0) +/// .chain(repeat(&1).take(2)) +/// .chain(&[2, 3, 5]); +/// +/// assert_equal(with_macro, with_method); +/// ``` +macro_rules! chain { + () => { + core::iter::empty() + }; + ($first:expr $(, $rest:expr )* $(,)?) 
=> { + { + let iter = core::iter::IntoIterator::into_iter($first); + $( + let iter = + core::iter::Iterator::chain( + iter, + core::iter::IntoIterator::into_iter($rest)); + )* + iter + } + }; +} + +/// An [`Iterator`] blanket implementation that provides extra adaptors and +/// methods. +/// +/// This trait defines a number of methods. They are divided into two groups: +/// +/// * *Adaptors* take an iterator and parameter as input, and return +/// a new iterator value. These are listed first in the trait. An example +/// of an adaptor is [`.interleave()`](Itertools::interleave) +/// +/// * *Regular methods* are those that don't return iterators and instead +/// return a regular value of some other kind. +/// [`.next_tuple()`](Itertools::next_tuple) is an example and the first regular +/// method in the list. +pub trait Itertools : Iterator { + // adaptors + + /// Alternate elements from two iterators until both have run out. + /// + /// Iterator element type is `Self::Item`. + /// + /// This iterator is *fused*. + /// + /// ``` + /// use itertools::Itertools; + /// + /// let it = (1..7).interleave(vec![-1, -2]); + /// itertools::assert_equal(it, vec![1, -1, 2, -2, 3, 4, 5, 6]); + /// ``` + fn interleave(self, other: J) -> Interleave + where J: IntoIterator, + Self: Sized + { + interleave(self, other) + } + + /// Alternate elements from two iterators until at least one of them has run + /// out. + /// + /// Iterator element type is `Self::Item`. + /// + /// ``` + /// use itertools::Itertools; + /// + /// let it = (1..7).interleave_shortest(vec![-1, -2]); + /// itertools::assert_equal(it, vec![1, -1, 2, -2, 3]); + /// ``` + fn interleave_shortest(self, other: J) -> InterleaveShortest + where J: IntoIterator, + Self: Sized + { + adaptors::interleave_shortest(self, other.into_iter()) + } + + /// An iterator adaptor to insert a particular value + /// between each element of the adapted iterator. + /// + /// Iterator element type is `Self::Item`. + /// + /// This iterator is *fused*. + /// + /// ``` + /// use itertools::Itertools; + /// + /// itertools::assert_equal((0..3).intersperse(8), vec![0, 8, 1, 8, 2]); + /// ``` + fn intersperse(self, element: Self::Item) -> Intersperse + where Self: Sized, + Self::Item: Clone + { + intersperse::intersperse(self, element) + } + + /// An iterator adaptor to insert a particular value created by a function + /// between each element of the adapted iterator. + /// + /// Iterator element type is `Self::Item`. + /// + /// This iterator is *fused*. + /// + /// ``` + /// use itertools::Itertools; + /// + /// let mut i = 10; + /// itertools::assert_equal((0..3).intersperse_with(|| { i -= 1; i }), vec![0, 9, 1, 8, 2]); + /// assert_eq!(i, 8); + /// ``` + fn intersperse_with(self, element: F) -> IntersperseWith + where Self: Sized, + F: FnMut() -> Self::Item + { + intersperse::intersperse_with(self, element) + } + + /// Create an iterator which iterates over both this and the specified + /// iterator simultaneously, yielding pairs of two optional elements. + /// + /// This iterator is *fused*. + /// + /// As long as neither input iterator is exhausted yet, it yields two values + /// via `EitherOrBoth::Both`. + /// + /// When the parameter iterator is exhausted, it only yields a value from the + /// `self` iterator via `EitherOrBoth::Left`. + /// + /// When the `self` iterator is exhausted, it only yields a value from the + /// parameter iterator via `EitherOrBoth::Right`. 
+ /// + /// When both iterators return `None`, all further invocations of `.next()` + /// will return `None`. + /// + /// Iterator element type is + /// [`EitherOrBoth`](EitherOrBoth). + /// + /// ```rust + /// use itertools::EitherOrBoth::{Both, Right}; + /// use itertools::Itertools; + /// let it = (0..1).zip_longest(1..3); + /// itertools::assert_equal(it, vec![Both(0, 1), Right(2)]); + /// ``` + #[inline] + fn zip_longest(self, other: J) -> ZipLongest + where J: IntoIterator, + Self: Sized + { + zip_longest::zip_longest(self, other.into_iter()) + } + + /// Create an iterator which iterates over both this and the specified + /// iterator simultaneously, yielding pairs of elements. + /// + /// **Panics** if the iterators reach an end and they are not of equal + /// lengths. + #[inline] + fn zip_eq(self, other: J) -> ZipEq + where J: IntoIterator, + Self: Sized + { + zip_eq(self, other) + } + + /// A “meta iterator adaptor”. Its closure receives a reference to the + /// iterator and may pick off as many elements as it likes, to produce the + /// next iterator element. + /// + /// Iterator element type is `B`. + /// + /// ``` + /// use itertools::Itertools; + /// + /// // An adaptor that gathers elements in pairs + /// let pit = (0..4).batching(|it| { + /// match it.next() { + /// None => None, + /// Some(x) => match it.next() { + /// None => None, + /// Some(y) => Some((x, y)), + /// } + /// } + /// }); + /// + /// itertools::assert_equal(pit, vec![(0, 1), (2, 3)]); + /// ``` + /// + fn batching(self, f: F) -> Batching + where F: FnMut(&mut Self) -> Option, + Self: Sized + { + adaptors::batching(self, f) + } + + /// Return an *iterable* that can group iterator elements. + /// Consecutive elements that map to the same key (“runs”), are assigned + /// to the same group. + /// + /// `GroupBy` is the storage for the lazy grouping operation. + /// + /// If the groups are consumed in order, or if each group's iterator is + /// dropped without keeping it around, then `GroupBy` uses no + /// allocations. It needs allocations only if several group iterators + /// are alive at the same time. + /// + /// This type implements [`IntoIterator`] (it is **not** an iterator + /// itself), because the group iterators need to borrow from this + /// value. It should be stored in a local variable or temporary and + /// iterated. + /// + /// Iterator element type is `(K, Group)`: the group's key and the + /// group iterator. + /// + /// ``` + /// use itertools::Itertools; + /// + /// // group data into runs of larger than zero or not. + /// let data = vec![1, 3, -2, -2, 1, 0, 1, 2]; + /// // groups: |---->|------>|--------->| + /// + /// // Note: The `&` is significant here, `GroupBy` is iterable + /// // only by reference. You can also call `.into_iter()` explicitly. + /// let mut data_grouped = Vec::new(); + /// for (key, group) in &data.into_iter().group_by(|elt| *elt >= 0) { + /// data_grouped.push((key, group.collect())); + /// } + /// assert_eq!(data_grouped, vec![(true, vec![1, 3]), (false, vec![-2, -2]), (true, vec![1, 0, 1, 2])]); + /// ``` + #[cfg(feature = "use_alloc")] + fn group_by(self, key: F) -> GroupBy + where Self: Sized, + F: FnMut(&Self::Item) -> K, + K: PartialEq, + { + groupbylazy::new(self, key) + } + + /// Return an *iterable* that can chunk the iterator. + /// + /// Yield subiterators (chunks) that each yield a fixed number elements, + /// determined by `size`. The last chunk will be shorter if there aren't + /// enough elements. 
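// Editorial sketch (illustrative only, not part of the vendored file):
// `zip_eq`, defined a little further up, zips like `.zip()` but panics on a
// length mismatch instead of silently stopping at the shorter input.
fn zip_eq_sketch() {
    use itertools::Itertools;

    let pairs: Vec<_> = [1, 2, 3].iter().zip_eq(&[10, 20, 30]).collect();
    assert_eq!(pairs, vec![(&1, &10), (&2, &20), (&3, &30)]);
    // [1, 2].iter().zip_eq(&[10, 20, 30]) would panic at the mismatch.
}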
+ /// + /// `IntoChunks` is based on `GroupBy`: it is iterable (implements + /// `IntoIterator`, **not** `Iterator`), and it only buffers if several + /// chunk iterators are alive at the same time. + /// + /// Iterator element type is `Chunk`, each chunk's iterator. + /// + /// **Panics** if `size` is 0. + /// + /// ``` + /// use itertools::Itertools; + /// + /// let data = vec![1, 1, 2, -2, 6, 0, 3, 1]; + /// //chunk size=3 |------->|-------->|--->| + /// + /// // Note: The `&` is significant here, `IntoChunks` is iterable + /// // only by reference. You can also call `.into_iter()` explicitly. + /// for chunk in &data.into_iter().chunks(3) { + /// // Check that the sum of each chunk is 4. + /// assert_eq!(4, chunk.sum()); + /// } + /// ``` + #[cfg(feature = "use_alloc")] + fn chunks(self, size: usize) -> IntoChunks + where Self: Sized, + { + assert!(size != 0); + groupbylazy::new_chunks(self, size) + } + + /// Return an iterator over all contiguous windows producing tuples of + /// a specific size (up to 12). + /// + /// `tuple_windows` clones the iterator elements so that they can be + /// part of successive windows, this makes it most suited for iterators + /// of references and other values that are cheap to copy. + /// + /// ``` + /// use itertools::Itertools; + /// let mut v = Vec::new(); + /// + /// // pairwise iteration + /// for (a, b) in (1..5).tuple_windows() { + /// v.push((a, b)); + /// } + /// assert_eq!(v, vec![(1, 2), (2, 3), (3, 4)]); + /// + /// let mut it = (1..5).tuple_windows(); + /// assert_eq!(Some((1, 2, 3)), it.next()); + /// assert_eq!(Some((2, 3, 4)), it.next()); + /// assert_eq!(None, it.next()); + /// + /// // this requires a type hint + /// let it = (1..5).tuple_windows::<(_, _, _)>(); + /// itertools::assert_equal(it, vec![(1, 2, 3), (2, 3, 4)]); + /// + /// // you can also specify the complete type + /// use itertools::TupleWindows; + /// use std::ops::Range; + /// + /// let it: TupleWindows, (u32, u32, u32)> = (1..5).tuple_windows(); + /// itertools::assert_equal(it, vec![(1, 2, 3), (2, 3, 4)]); + /// ``` + fn tuple_windows(self) -> TupleWindows + where Self: Sized + Iterator, + T: traits::HomogeneousTuple, + T::Item: Clone + { + tuple_impl::tuple_windows(self) + } + + /// Return an iterator over all windows, wrapping back to the first + /// elements when the window would otherwise exceed the length of the + /// iterator, producing tuples of a specific size (up to 12). + /// + /// `circular_tuple_windows` clones the iterator elements so that they can be + /// part of successive windows, this makes it most suited for iterators + /// of references and other values that are cheap to copy. 
+ /// + /// ``` + /// use itertools::Itertools; + /// let mut v = Vec::new(); + /// for (a, b) in (1..5).circular_tuple_windows() { + /// v.push((a, b)); + /// } + /// assert_eq!(v, vec![(1, 2), (2, 3), (3, 4), (4, 1)]); + /// + /// let mut it = (1..5).circular_tuple_windows(); + /// assert_eq!(Some((1, 2, 3)), it.next()); + /// assert_eq!(Some((2, 3, 4)), it.next()); + /// assert_eq!(Some((3, 4, 1)), it.next()); + /// assert_eq!(Some((4, 1, 2)), it.next()); + /// assert_eq!(None, it.next()); + /// + /// // this requires a type hint + /// let it = (1..5).circular_tuple_windows::<(_, _, _)>(); + /// itertools::assert_equal(it, vec![(1, 2, 3), (2, 3, 4), (3, 4, 1), (4, 1, 2)]); + /// ``` + fn circular_tuple_windows(self) -> CircularTupleWindows + where Self: Sized + Clone + Iterator + ExactSizeIterator, + T: tuple_impl::TupleCollect + Clone, + T::Item: Clone + { + tuple_impl::circular_tuple_windows(self) + } + /// Return an iterator that groups the items in tuples of a specific size + /// (up to 12). + /// + /// See also the method [`.next_tuple()`](Itertools::next_tuple). + /// + /// ``` + /// use itertools::Itertools; + /// let mut v = Vec::new(); + /// for (a, b) in (1..5).tuples() { + /// v.push((a, b)); + /// } + /// assert_eq!(v, vec![(1, 2), (3, 4)]); + /// + /// let mut it = (1..7).tuples(); + /// assert_eq!(Some((1, 2, 3)), it.next()); + /// assert_eq!(Some((4, 5, 6)), it.next()); + /// assert_eq!(None, it.next()); + /// + /// // this requires a type hint + /// let it = (1..7).tuples::<(_, _, _)>(); + /// itertools::assert_equal(it, vec![(1, 2, 3), (4, 5, 6)]); + /// + /// // you can also specify the complete type + /// use itertools::Tuples; + /// use std::ops::Range; + /// + /// let it: Tuples, (u32, u32, u32)> = (1..7).tuples(); + /// itertools::assert_equal(it, vec![(1, 2, 3), (4, 5, 6)]); + /// ``` + /// + /// See also [`Tuples::into_buffer`]. + fn tuples(self) -> Tuples + where Self: Sized + Iterator, + T: traits::HomogeneousTuple + { + tuple_impl::tuples(self) + } + + /// Split into an iterator pair that both yield all elements from + /// the original iterator. + /// + /// **Note:** If the iterator is clonable, prefer using that instead + /// of using this method. Cloning is likely to be more efficient. + /// + /// Iterator element type is `Self::Item`. + /// + /// ``` + /// use itertools::Itertools; + /// let xs = vec![0, 1, 2, 3]; + /// + /// let (mut t1, t2) = xs.into_iter().tee(); + /// itertools::assert_equal(t1.next(), Some(0)); + /// itertools::assert_equal(t2, 0..4); + /// itertools::assert_equal(t1, 1..4); + /// ``` + #[cfg(feature = "use_alloc")] + fn tee(self) -> (Tee, Tee) + where Self: Sized, + Self::Item: Clone + { + tee::new(self) + } + + /// Return an iterator adaptor that steps `n` elements in the base iterator + /// for each iteration. + /// + /// The iterator steps by yielding the next element from the base iterator, + /// then skipping forward `n - 1` elements. + /// + /// Iterator element type is `Self::Item`. + /// + /// **Panics** if the step is 0. + /// + /// ``` + /// use itertools::Itertools; + /// + /// let it = (0..8).step(3); + /// itertools::assert_equal(it, vec![0, 3, 6]); + /// ``` + #[deprecated(note="Use std .step_by() instead", since="0.8.0")] + #[allow(deprecated)] + fn step(self, n: usize) -> Step + where Self: Sized + { + adaptors::step(self, n) + } + + /// Convert each item of the iterator using the [`Into`] trait. 
+ /// + /// ```rust + /// use itertools::Itertools; + /// + /// (1i32..42i32).map_into::().collect_vec(); + /// ``` + fn map_into(self) -> MapInto + where Self: Sized, + Self::Item: Into, + { + adaptors::map_into(self) + } + + /// See [`.map_ok()`](Itertools::map_ok). + #[deprecated(note="Use .map_ok() instead", since="0.10.0")] + fn map_results(self, f: F) -> MapOk + where Self: Iterator> + Sized, + F: FnMut(T) -> U, + { + self.map_ok(f) + } + + /// Return an iterator adaptor that applies the provided closure + /// to every `Result::Ok` value. `Result::Err` values are + /// unchanged. + /// + /// ``` + /// use itertools::Itertools; + /// + /// let input = vec![Ok(41), Err(false), Ok(11)]; + /// let it = input.into_iter().map_ok(|i| i + 1); + /// itertools::assert_equal(it, vec![Ok(42), Err(false), Ok(12)]); + /// ``` + fn map_ok(self, f: F) -> MapOk + where Self: Iterator> + Sized, + F: FnMut(T) -> U, + { + adaptors::map_ok(self, f) + } + + /// Return an iterator adaptor that filters every `Result::Ok` + /// value with the provided closure. `Result::Err` values are + /// unchanged. + /// + /// ``` + /// use itertools::Itertools; + /// + /// let input = vec![Ok(22), Err(false), Ok(11)]; + /// let it = input.into_iter().filter_ok(|&i| i > 20); + /// itertools::assert_equal(it, vec![Ok(22), Err(false)]); + /// ``` + fn filter_ok(self, f: F) -> FilterOk + where Self: Iterator> + Sized, + F: FnMut(&T) -> bool, + { + adaptors::filter_ok(self, f) + } + + /// Return an iterator adaptor that filters and transforms every + /// `Result::Ok` value with the provided closure. `Result::Err` + /// values are unchanged. + /// + /// ``` + /// use itertools::Itertools; + /// + /// let input = vec![Ok(22), Err(false), Ok(11)]; + /// let it = input.into_iter().filter_map_ok(|i| if i > 20 { Some(i * 2) } else { None }); + /// itertools::assert_equal(it, vec![Ok(44), Err(false)]); + /// ``` + fn filter_map_ok(self, f: F) -> FilterMapOk + where Self: Iterator> + Sized, + F: FnMut(T) -> Option, + { + adaptors::filter_map_ok(self, f) + } + + /// Return an iterator adaptor that flattens every `Result::Ok` value into + /// a series of `Result::Ok` values. `Result::Err` values are unchanged. + /// + /// This is useful when you have some common error type for your crate and + /// need to propagate it upwards, but the `Result::Ok` case needs to be flattened. + /// + /// ``` + /// use itertools::Itertools; + /// + /// let input = vec![Ok(0..2), Err(false), Ok(2..4)]; + /// let it = input.iter().cloned().flatten_ok(); + /// itertools::assert_equal(it.clone(), vec![Ok(0), Ok(1), Err(false), Ok(2), Ok(3)]); + /// + /// // This can also be used to propagate errors when collecting. + /// let output_result: Result, bool> = it.collect(); + /// assert_eq!(output_result, Err(false)); + /// ``` + fn flatten_ok(self) -> FlattenOk + where Self: Iterator> + Sized, + T: IntoIterator + { + flatten_ok::flatten_ok(self) + } + + /// “Lift” a function of the values of the current iterator so as to process + /// an iterator of `Result` values instead. + /// + /// `processor` is a closure that receives an adapted version of the iterator + /// as the only argument — the adapted iterator produces elements of type `T`, + /// as long as the original iterator produces `Ok` values. + /// + /// If the original iterable produces an error at any point, the adapted + /// iterator ends and it will return the error iself. + /// + /// Otherwise, the return value from the closure is returned wrapped + /// inside `Ok`. 
+ /// + /// # Example + /// + /// ``` + /// use itertools::Itertools; + /// + /// type Item = Result; + /// + /// let first_values: Vec = vec![Ok(1), Ok(0), Ok(3)]; + /// let second_values: Vec = vec![Ok(2), Ok(1), Err("overflow")]; + /// + /// // “Lift” the iterator .max() method to work on the Ok-values. + /// let first_max = first_values.into_iter().process_results(|iter| iter.max().unwrap_or(0)); + /// let second_max = second_values.into_iter().process_results(|iter| iter.max().unwrap_or(0)); + /// + /// assert_eq!(first_max, Ok(3)); + /// assert!(second_max.is_err()); + /// ``` + fn process_results(self, processor: F) -> Result + where Self: Iterator> + Sized, + F: FnOnce(ProcessResults) -> R + { + process_results(self, processor) + } + + /// Return an iterator adaptor that merges the two base iterators in + /// ascending order. If both base iterators are sorted (ascending), the + /// result is sorted. + /// + /// Iterator element type is `Self::Item`. + /// + /// ``` + /// use itertools::Itertools; + /// + /// let a = (0..11).step_by(3); + /// let b = (0..11).step_by(5); + /// let it = a.merge(b); + /// itertools::assert_equal(it, vec![0, 0, 3, 5, 6, 9, 10]); + /// ``` + fn merge(self, other: J) -> Merge + where Self: Sized, + Self::Item: PartialOrd, + J: IntoIterator + { + merge(self, other) + } + + /// Return an iterator adaptor that merges the two base iterators in order. + /// This is much like [`.merge()`](Itertools::merge) but allows for a custom ordering. + /// + /// This can be especially useful for sequences of tuples. + /// + /// Iterator element type is `Self::Item`. + /// + /// ``` + /// use itertools::Itertools; + /// + /// let a = (0..).zip("bc".chars()); + /// let b = (0..).zip("ad".chars()); + /// let it = a.merge_by(b, |x, y| x.1 <= y.1); + /// itertools::assert_equal(it, vec![(0, 'a'), (0, 'b'), (1, 'c'), (1, 'd')]); + /// ``` + + fn merge_by(self, other: J, is_first: F) -> MergeBy + where Self: Sized, + J: IntoIterator, + F: FnMut(&Self::Item, &Self::Item) -> bool + { + adaptors::merge_by_new(self, other.into_iter(), is_first) + } + + /// Create an iterator that merges items from both this and the specified + /// iterator in ascending order. + /// + /// The function can either return an `Ordering` variant or a boolean. + /// + /// If `cmp_fn` returns `Ordering`, + /// it chooses whether to pair elements based on the `Ordering` returned by the + /// specified compare function. At any point, inspecting the tip of the + /// iterators `I` and `J` as items `i` of type `I::Item` and `j` of type + /// `J::Item` respectively, the resulting iterator will: + /// + /// - Emit `EitherOrBoth::Left(i)` when `i < j`, + /// and remove `i` from its source iterator + /// - Emit `EitherOrBoth::Right(j)` when `i > j`, + /// and remove `j` from its source iterator + /// - Emit `EitherOrBoth::Both(i, j)` when `i == j`, + /// and remove both `i` and `j` from their respective source iterators + /// + /// ``` + /// use itertools::Itertools; + /// use itertools::EitherOrBoth::{Left, Right, Both}; + /// + /// let a = vec![0, 2, 4, 6, 1].into_iter(); + /// let b = (0..10).step_by(3); + /// + /// itertools::assert_equal( + /// a.merge_join_by(b, |i, j| i.cmp(j)), + /// vec![Both(0, 0), Left(2), Right(3), Left(4), Both(6, 6), Left(1), Right(9)] + /// ); + /// ``` + /// + /// If `cmp_fn` returns `bool`, + /// it chooses whether to pair elements based on the boolean returned by the + /// specified function. 
At any point, inspecting the tip of the + /// iterators `I` and `J` as items `i` of type `I::Item` and `j` of type + /// `J::Item` respectively, the resulting iterator will: + /// + /// - Emit `Either::Left(i)` when `true`, + /// and remove `i` from its source iterator + /// - Emit `Either::Right(j)` when `false`, + /// and remove `j` from its source iterator + /// + /// It is similar to the `Ordering` case if the first argument is considered + /// "less" than the second argument. + /// + /// ``` + /// use itertools::Itertools; + /// use itertools::Either::{Left, Right}; + /// + /// let a = vec![0, 2, 4, 6, 1].into_iter(); + /// let b = (0..10).step_by(3); + /// + /// itertools::assert_equal( + /// a.merge_join_by(b, |i, j| i <= j), + /// vec![Left(0), Right(0), Left(2), Right(3), Left(4), Left(6), Left(1), Right(6), Right(9)] + /// ); + /// ``` + #[inline] + fn merge_join_by(self, other: J, cmp_fn: F) -> MergeJoinBy + where J: IntoIterator, + F: FnMut(&Self::Item, &J::Item) -> T, + T: merge_join::OrderingOrBool, + Self: Sized + { + merge_join_by(self, other, cmp_fn) + } + + /// Return an iterator adaptor that flattens an iterator of iterators by + /// merging them in ascending order. + /// + /// If all base iterators are sorted (ascending), the result is sorted. + /// + /// Iterator element type is `Self::Item`. + /// + /// ``` + /// use itertools::Itertools; + /// + /// let a = (0..6).step_by(3); + /// let b = (1..6).step_by(3); + /// let c = (2..6).step_by(3); + /// let it = vec![a, b, c].into_iter().kmerge(); + /// itertools::assert_equal(it, vec![0, 1, 2, 3, 4, 5]); + /// ``` + #[cfg(feature = "use_alloc")] + fn kmerge(self) -> KMerge<::IntoIter> + where Self: Sized, + Self::Item: IntoIterator, + ::Item: PartialOrd, + { + kmerge(self) + } + + /// Return an iterator adaptor that flattens an iterator of iterators by + /// merging them according to the given closure. + /// + /// The closure `first` is called with two elements *a*, *b* and should + /// return `true` if *a* is ordered before *b*. + /// + /// If all base iterators are sorted according to `first`, the result is + /// sorted. + /// + /// Iterator element type is `Self::Item`. + /// + /// ``` + /// use itertools::Itertools; + /// + /// let a = vec![-1f64, 2., 3., -5., 6., -7.]; + /// let b = vec![0., 2., -4.]; + /// let mut it = vec![a, b].into_iter().kmerge_by(|a, b| a.abs() < b.abs()); + /// assert_eq!(it.next(), Some(0.)); + /// assert_eq!(it.last(), Some(-7.)); + /// ``` + #[cfg(feature = "use_alloc")] + fn kmerge_by(self, first: F) + -> KMergeBy<::IntoIter, F> + where Self: Sized, + Self::Item: IntoIterator, + F: FnMut(&::Item, + &::Item) -> bool + { + kmerge_by(self, first) + } + + /// Return an iterator adaptor that iterates over the cartesian product of + /// the element sets of two iterators `self` and `J`. + /// + /// Iterator element type is `(Self::Item, J::Item)`. + /// + /// ``` + /// use itertools::Itertools; + /// + /// let it = (0..2).cartesian_product("αβ".chars()); + /// itertools::assert_equal(it, vec![(0, 'α'), (0, 'β'), (1, 'α'), (1, 'β')]); + /// ``` + fn cartesian_product(self, other: J) -> Product + where Self: Sized, + Self::Item: Clone, + J: IntoIterator, + J::IntoIter: Clone + { + adaptors::cartesian_product(self, other.into_iter()) + } + + /// Return an iterator adaptor that iterates over the cartesian product of + /// all subiterators returned by meta-iterator `self`. + /// + /// All provided iterators must yield the same `Item` type. 
To generate + /// the product of iterators yielding multiple types, use the + /// [`iproduct`] macro instead. + /// + /// + /// The iterator element type is `Vec`, where `T` is the iterator element + /// of the subiterators. + /// + /// ``` + /// use itertools::Itertools; + /// let mut multi_prod = (0..3).map(|i| (i * 2)..(i * 2 + 2)) + /// .multi_cartesian_product(); + /// assert_eq!(multi_prod.next(), Some(vec![0, 2, 4])); + /// assert_eq!(multi_prod.next(), Some(vec![0, 2, 5])); + /// assert_eq!(multi_prod.next(), Some(vec![0, 3, 4])); + /// assert_eq!(multi_prod.next(), Some(vec![0, 3, 5])); + /// assert_eq!(multi_prod.next(), Some(vec![1, 2, 4])); + /// assert_eq!(multi_prod.next(), Some(vec![1, 2, 5])); + /// assert_eq!(multi_prod.next(), Some(vec![1, 3, 4])); + /// assert_eq!(multi_prod.next(), Some(vec![1, 3, 5])); + /// assert_eq!(multi_prod.next(), None); + /// ``` + #[cfg(feature = "use_alloc")] + fn multi_cartesian_product(self) -> MultiProduct<::IntoIter> + where Self: Sized, + Self::Item: IntoIterator, + ::IntoIter: Clone, + ::Item: Clone + { + adaptors::multi_cartesian_product(self) + } + + /// Return an iterator adaptor that uses the passed-in closure to + /// optionally merge together consecutive elements. + /// + /// The closure `f` is passed two elements, `previous` and `current` and may + /// return either (1) `Ok(combined)` to merge the two values or + /// (2) `Err((previous', current'))` to indicate they can't be merged. + /// In (2), the value `previous'` is emitted by the iterator. + /// Either (1) `combined` or (2) `current'` becomes the previous value + /// when coalesce continues with the next pair of elements to merge. The + /// value that remains at the end is also emitted by the iterator. + /// + /// Iterator element type is `Self::Item`. + /// + /// This iterator is *fused*. + /// + /// ``` + /// use itertools::Itertools; + /// + /// // sum same-sign runs together + /// let data = vec![-1., -2., -3., 3., 1., 0., -1.]; + /// itertools::assert_equal(data.into_iter().coalesce(|x, y| + /// if (x >= 0.) == (y >= 0.) { + /// Ok(x + y) + /// } else { + /// Err((x, y)) + /// }), + /// vec![-6., 4., -1.]); + /// ``` + fn coalesce(self, f: F) -> Coalesce + where Self: Sized, + F: FnMut(Self::Item, Self::Item) + -> Result + { + adaptors::coalesce(self, f) + } + + /// Remove duplicates from sections of consecutive identical elements. + /// If the iterator is sorted, all elements will be unique. + /// + /// Iterator element type is `Self::Item`. + /// + /// This iterator is *fused*. + /// + /// ``` + /// use itertools::Itertools; + /// + /// let data = vec![1., 1., 2., 3., 3., 2., 2.]; + /// itertools::assert_equal(data.into_iter().dedup(), + /// vec![1., 2., 3., 2.]); + /// ``` + fn dedup(self) -> Dedup + where Self: Sized, + Self::Item: PartialEq, + { + adaptors::dedup(self) + } + + /// Remove duplicates from sections of consecutive identical elements, + /// determining equality using a comparison function. + /// If the iterator is sorted, all elements will be unique. + /// + /// Iterator element type is `Self::Item`. + /// + /// This iterator is *fused*. 
+ /// + /// ``` + /// use itertools::Itertools; + /// + /// let data = vec![(0, 1.), (1, 1.), (0, 2.), (0, 3.), (1, 3.), (1, 2.), (2, 2.)]; + /// itertools::assert_equal(data.into_iter().dedup_by(|x, y| x.1 == y.1), + /// vec![(0, 1.), (0, 2.), (0, 3.), (1, 2.)]); + /// ``` + fn dedup_by(self, cmp: Cmp) -> DedupBy + where Self: Sized, + Cmp: FnMut(&Self::Item, &Self::Item)->bool, + { + adaptors::dedup_by(self, cmp) + } + + /// Remove duplicates from sections of consecutive identical elements, while keeping a count of + /// how many repeated elements were present. + /// If the iterator is sorted, all elements will be unique. + /// + /// Iterator element type is `(usize, Self::Item)`. + /// + /// This iterator is *fused*. + /// + /// ``` + /// use itertools::Itertools; + /// + /// let data = vec!['a', 'a', 'b', 'c', 'c', 'b', 'b']; + /// itertools::assert_equal(data.into_iter().dedup_with_count(), + /// vec![(2, 'a'), (1, 'b'), (2, 'c'), (2, 'b')]); + /// ``` + fn dedup_with_count(self) -> DedupWithCount + where + Self: Sized, + { + adaptors::dedup_with_count(self) + } + + /// Remove duplicates from sections of consecutive identical elements, while keeping a count of + /// how many repeated elements were present. + /// This will determine equality using a comparison function. + /// If the iterator is sorted, all elements will be unique. + /// + /// Iterator element type is `(usize, Self::Item)`. + /// + /// This iterator is *fused*. + /// + /// ``` + /// use itertools::Itertools; + /// + /// let data = vec![(0, 'a'), (1, 'a'), (0, 'b'), (0, 'c'), (1, 'c'), (1, 'b'), (2, 'b')]; + /// itertools::assert_equal(data.into_iter().dedup_by_with_count(|x, y| x.1 == y.1), + /// vec![(2, (0, 'a')), (1, (0, 'b')), (2, (0, 'c')), (2, (1, 'b'))]); + /// ``` + fn dedup_by_with_count(self, cmp: Cmp) -> DedupByWithCount + where + Self: Sized, + Cmp: FnMut(&Self::Item, &Self::Item) -> bool, + { + adaptors::dedup_by_with_count(self, cmp) + } + + /// Return an iterator adaptor that produces elements that appear more than once during the + /// iteration. Duplicates are detected using hash and equality. + /// + /// The iterator is stable, returning the duplicate items in the order in which they occur in + /// the adapted iterator. Each duplicate item is returned exactly once. If an item appears more + /// than twice, the second item is the item retained and the rest are discarded. + /// + /// ``` + /// use itertools::Itertools; + /// + /// let data = vec![10, 20, 30, 20, 40, 10, 50]; + /// itertools::assert_equal(data.into_iter().duplicates(), + /// vec![20, 10]); + /// ``` + #[cfg(feature = "use_std")] + fn duplicates(self) -> Duplicates + where Self: Sized, + Self::Item: Eq + Hash + { + duplicates_impl::duplicates(self) + } + + /// Return an iterator adaptor that produces elements that appear more than once during the + /// iteration. Duplicates are detected using hash and equality. + /// + /// Duplicates are detected by comparing the key they map to with the keying function `f` by + /// hash and equality. The keys are stored in a hash map in the iterator. + /// + /// The iterator is stable, returning the duplicate items in the order in which they occur in + /// the adapted iterator. Each duplicate item is returned exactly once. If an item appears more + /// than twice, the second item is the item retained and the rest are discarded. 
+ /// + /// ``` + /// use itertools::Itertools; + /// + /// let data = vec!["a", "bb", "aa", "c", "ccc"]; + /// itertools::assert_equal(data.into_iter().duplicates_by(|s| s.len()), + /// vec!["aa", "c"]); + /// ``` + #[cfg(feature = "use_std")] + fn duplicates_by(self, f: F) -> DuplicatesBy + where Self: Sized, + V: Eq + Hash, + F: FnMut(&Self::Item) -> V + { + duplicates_impl::duplicates_by(self, f) + } + + /// Return an iterator adaptor that filters out elements that have + /// already been produced once during the iteration. Duplicates + /// are detected using hash and equality. + /// + /// Clones of visited elements are stored in a hash set in the + /// iterator. + /// + /// The iterator is stable, returning the non-duplicate items in the order + /// in which they occur in the adapted iterator. In a set of duplicate + /// items, the first item encountered is the item retained. + /// + /// ``` + /// use itertools::Itertools; + /// + /// let data = vec![10, 20, 30, 20, 40, 10, 50]; + /// itertools::assert_equal(data.into_iter().unique(), + /// vec![10, 20, 30, 40, 50]); + /// ``` + #[cfg(feature = "use_std")] + fn unique(self) -> Unique + where Self: Sized, + Self::Item: Clone + Eq + Hash + { + unique_impl::unique(self) + } + + /// Return an iterator adaptor that filters out elements that have + /// already been produced once during the iteration. + /// + /// Duplicates are detected by comparing the key they map to + /// with the keying function `f` by hash and equality. + /// The keys are stored in a hash set in the iterator. + /// + /// The iterator is stable, returning the non-duplicate items in the order + /// in which they occur in the adapted iterator. In a set of duplicate + /// items, the first item encountered is the item retained. + /// + /// ``` + /// use itertools::Itertools; + /// + /// let data = vec!["a", "bb", "aa", "c", "ccc"]; + /// itertools::assert_equal(data.into_iter().unique_by(|s| s.len()), + /// vec!["a", "bb", "ccc"]); + /// ``` + #[cfg(feature = "use_std")] + fn unique_by(self, f: F) -> UniqueBy + where Self: Sized, + V: Eq + Hash, + F: FnMut(&Self::Item) -> V + { + unique_impl::unique_by(self, f) + } + + /// Return an iterator adaptor that borrows from this iterator and + /// takes items while the closure `accept` returns `true`. + /// + /// This adaptor can only be used on iterators that implement `PeekingNext` + /// like `.peekable()`, `put_back` and a few other collection iterators. + /// + /// The last and rejected element (first `false`) is still available when + /// `peeking_take_while` is done. + /// + /// + /// See also [`.take_while_ref()`](Itertools::take_while_ref) + /// which is a similar adaptor. + fn peeking_take_while(&mut self, accept: F) -> PeekingTakeWhile + where Self: Sized + PeekingNext, + F: FnMut(&Self::Item) -> bool, + { + peeking_take_while::peeking_take_while(self, accept) + } + + /// Return an iterator adaptor that borrows from a `Clone`-able iterator + /// to only pick off elements while the predicate `accept` returns `true`. + /// + /// It uses the `Clone` trait to restore the original iterator so that the + /// last and rejected element (first `false`) is still available when + /// `take_while_ref` is done. 
+ /// + /// ``` + /// use itertools::Itertools; + /// + /// let mut hexadecimals = "0123456789abcdef".chars(); + /// + /// let decimals = hexadecimals.take_while_ref(|c| c.is_numeric()) + /// .collect::(); + /// assert_eq!(decimals, "0123456789"); + /// assert_eq!(hexadecimals.next(), Some('a')); + /// + /// ``` + fn take_while_ref(&mut self, accept: F) -> TakeWhileRef + where Self: Clone, + F: FnMut(&Self::Item) -> bool + { + adaptors::take_while_ref(self, accept) + } + + /// Returns an iterator adaptor that consumes elements while the given + /// predicate is `true`, *including* the element for which the predicate + /// first returned `false`. + /// + /// The [`.take_while()`][std::iter::Iterator::take_while] adaptor is useful + /// when you want items satisfying a predicate, but to know when to stop + /// taking elements, we have to consume that first element that doesn't + /// satisfy the predicate. This adaptor includes that element where + /// [`.take_while()`][std::iter::Iterator::take_while] would drop it. + /// + /// The [`.take_while_ref()`][crate::Itertools::take_while_ref] adaptor + /// serves a similar purpose, but this adaptor doesn't require [`Clone`]ing + /// the underlying elements. + /// + /// ```rust + /// # use itertools::Itertools; + /// let items = vec![1, 2, 3, 4, 5]; + /// let filtered: Vec<_> = items + /// .into_iter() + /// .take_while_inclusive(|&n| n % 3 != 0) + /// .collect(); + /// + /// assert_eq!(filtered, vec![1, 2, 3]); + /// ``` + /// + /// ```rust + /// # use itertools::Itertools; + /// let items = vec![1, 2, 3, 4, 5]; + /// + /// let take_while_inclusive_result: Vec<_> = items + /// .iter() + /// .copied() + /// .take_while_inclusive(|&n| n % 3 != 0) + /// .collect(); + /// let take_while_result: Vec<_> = items + /// .into_iter() + /// .take_while(|&n| n % 3 != 0) + /// .collect(); + /// + /// assert_eq!(take_while_inclusive_result, vec![1, 2, 3]); + /// assert_eq!(take_while_result, vec![1, 2]); + /// // both iterators have the same items remaining at this point---the 3 + /// // is lost from the `take_while` vec + /// ``` + /// + /// ```rust + /// # use itertools::Itertools; + /// #[derive(Debug, PartialEq)] + /// struct NoCloneImpl(i32); + /// + /// let non_clonable_items: Vec<_> = vec![1, 2, 3, 4, 5] + /// .into_iter() + /// .map(NoCloneImpl) + /// .collect(); + /// let filtered: Vec<_> = non_clonable_items + /// .into_iter() + /// .take_while_inclusive(|n| n.0 % 3 != 0) + /// .collect(); + /// let expected: Vec<_> = vec![1, 2, 3].into_iter().map(NoCloneImpl).collect(); + /// assert_eq!(filtered, expected); + fn take_while_inclusive(&mut self, accept: F) -> TakeWhileInclusive + where + Self: Sized, + F: FnMut(&Self::Item) -> bool, + { + take_while_inclusive::TakeWhileInclusive::new(self, accept) + } + + /// Return an iterator adaptor that filters `Option` iterator elements + /// and produces `A`. Stops on the first `None` encountered. + /// + /// Iterator element type is `A`, the unwrapped element. + /// + /// ``` + /// use itertools::Itertools; + /// + /// // List all hexadecimal digits + /// itertools::assert_equal( + /// (0..).map(|i| std::char::from_digit(i, 16)).while_some(), + /// "0123456789abcdef".chars()); + /// + /// ``` + fn while_some(self) -> WhileSome + where Self: Sized + Iterator> + { + adaptors::while_some(self) + } + + /// Return an iterator adaptor that iterates over the combinations of the + /// elements from an iterator. + /// + /// Iterator element can be any homogeneous tuple of type `Self::Item` with + /// size up to 12. 
+ /// + /// ``` + /// use itertools::Itertools; + /// + /// let mut v = Vec::new(); + /// for (a, b) in (1..5).tuple_combinations() { + /// v.push((a, b)); + /// } + /// assert_eq!(v, vec![(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]); + /// + /// let mut it = (1..5).tuple_combinations(); + /// assert_eq!(Some((1, 2, 3)), it.next()); + /// assert_eq!(Some((1, 2, 4)), it.next()); + /// assert_eq!(Some((1, 3, 4)), it.next()); + /// assert_eq!(Some((2, 3, 4)), it.next()); + /// assert_eq!(None, it.next()); + /// + /// // this requires a type hint + /// let it = (1..5).tuple_combinations::<(_, _, _)>(); + /// itertools::assert_equal(it, vec![(1, 2, 3), (1, 2, 4), (1, 3, 4), (2, 3, 4)]); + /// + /// // you can also specify the complete type + /// use itertools::TupleCombinations; + /// use std::ops::Range; + /// + /// let it: TupleCombinations, (u32, u32, u32)> = (1..5).tuple_combinations(); + /// itertools::assert_equal(it, vec![(1, 2, 3), (1, 2, 4), (1, 3, 4), (2, 3, 4)]); + /// ``` + fn tuple_combinations(self) -> TupleCombinations + where Self: Sized + Clone, + Self::Item: Clone, + T: adaptors::HasCombination, + { + adaptors::tuple_combinations(self) + } + + /// Return an iterator adaptor that iterates over the `k`-length combinations of + /// the elements from an iterator. + /// + /// Iterator element type is `Vec`. The iterator produces a new Vec per iteration, + /// and clones the iterator elements. + /// + /// ``` + /// use itertools::Itertools; + /// + /// let it = (1..5).combinations(3); + /// itertools::assert_equal(it, vec![ + /// vec![1, 2, 3], + /// vec![1, 2, 4], + /// vec![1, 3, 4], + /// vec![2, 3, 4], + /// ]); + /// ``` + /// + /// Note: Combinations does not take into account the equality of the iterated values. + /// ``` + /// use itertools::Itertools; + /// + /// let it = vec![1, 2, 2].into_iter().combinations(2); + /// itertools::assert_equal(it, vec![ + /// vec![1, 2], // Note: these are the same + /// vec![1, 2], // Note: these are the same + /// vec![2, 2], + /// ]); + /// ``` + #[cfg(feature = "use_alloc")] + fn combinations(self, k: usize) -> Combinations + where Self: Sized, + Self::Item: Clone + { + combinations::combinations(self, k) + } + + /// Return an iterator that iterates over the `k`-length combinations of + /// the elements from an iterator, with replacement. + /// + /// Iterator element type is `Vec`. The iterator produces a new Vec per iteration, + /// and clones the iterator elements. + /// + /// ``` + /// use itertools::Itertools; + /// + /// let it = (1..4).combinations_with_replacement(2); + /// itertools::assert_equal(it, vec![ + /// vec![1, 1], + /// vec![1, 2], + /// vec![1, 3], + /// vec![2, 2], + /// vec![2, 3], + /// vec![3, 3], + /// ]); + /// ``` + #[cfg(feature = "use_alloc")] + fn combinations_with_replacement(self, k: usize) -> CombinationsWithReplacement + where + Self: Sized, + Self::Item: Clone, + { + combinations_with_replacement::combinations_with_replacement(self, k) + } + + /// Return an iterator adaptor that iterates over all k-permutations of the + /// elements from an iterator. + /// + /// Iterator element type is `Vec` with length `k`. The iterator + /// produces a new Vec per iteration, and clones the iterator elements. + /// + /// If `k` is greater than the length of the input iterator, the resultant + /// iterator adaptor will be empty. 
+ /// + /// ``` + /// use itertools::Itertools; + /// + /// let perms = (5..8).permutations(2); + /// itertools::assert_equal(perms, vec![ + /// vec![5, 6], + /// vec![5, 7], + /// vec![6, 5], + /// vec![6, 7], + /// vec![7, 5], + /// vec![7, 6], + /// ]); + /// ``` + /// + /// Note: Permutations does not take into account the equality of the iterated values. + /// + /// ``` + /// use itertools::Itertools; + /// + /// let it = vec![2, 2].into_iter().permutations(2); + /// itertools::assert_equal(it, vec![ + /// vec![2, 2], // Note: these are the same + /// vec![2, 2], // Note: these are the same + /// ]); + /// ``` + /// + /// Note: The source iterator is collected lazily, and will not be + /// re-iterated if the permutations adaptor is completed and re-iterated. + #[cfg(feature = "use_alloc")] + fn permutations(self, k: usize) -> Permutations + where Self: Sized, + Self::Item: Clone + { + permutations::permutations(self, k) + } + + /// Return an iterator that iterates through the powerset of the elements from an + /// iterator. + /// + /// Iterator element type is `Vec`. The iterator produces a new `Vec` + /// per iteration, and clones the iterator elements. + /// + /// The powerset of a set contains all subsets including the empty set and the full + /// input set. A powerset has length _2^n_ where _n_ is the length of the input + /// set. + /// + /// Each `Vec` produced by this iterator represents a subset of the elements + /// produced by the source iterator. + /// + /// ``` + /// use itertools::Itertools; + /// + /// let sets = (1..4).powerset().collect::>(); + /// itertools::assert_equal(sets, vec![ + /// vec![], + /// vec![1], + /// vec![2], + /// vec![3], + /// vec![1, 2], + /// vec![1, 3], + /// vec![2, 3], + /// vec![1, 2, 3], + /// ]); + /// ``` + #[cfg(feature = "use_alloc")] + fn powerset(self) -> Powerset + where Self: Sized, + Self::Item: Clone, + { + powerset::powerset(self) + } + + /// Return an iterator adaptor that pads the sequence to a minimum length of + /// `min` by filling missing elements using a closure `f`. + /// + /// Iterator element type is `Self::Item`. + /// + /// ``` + /// use itertools::Itertools; + /// + /// let it = (0..5).pad_using(10, |i| 2*i); + /// itertools::assert_equal(it, vec![0, 1, 2, 3, 4, 10, 12, 14, 16, 18]); + /// + /// let it = (0..10).pad_using(5, |i| 2*i); + /// itertools::assert_equal(it, vec![0, 1, 2, 3, 4, 5, 6, 7, 8, 9]); + /// + /// let it = (0..5).pad_using(10, |i| 2*i).rev(); + /// itertools::assert_equal(it, vec![18, 16, 14, 12, 10, 4, 3, 2, 1, 0]); + /// ``` + fn pad_using(self, min: usize, f: F) -> PadUsing + where Self: Sized, + F: FnMut(usize) -> Self::Item + { + pad_tail::pad_using(self, min, f) + } + + /// Return an iterator adaptor that combines each element with a `Position` to + /// ease special-case handling of the first or last elements. 
+ /// + /// Iterator element type is + /// [`(Position, Self::Item)`](Position) + /// + /// ``` + /// use itertools::{Itertools, Position}; + /// + /// let it = (0..4).with_position(); + /// itertools::assert_equal(it, + /// vec![(Position::First, 0), + /// (Position::Middle, 1), + /// (Position::Middle, 2), + /// (Position::Last, 3)]); + /// + /// let it = (0..1).with_position(); + /// itertools::assert_equal(it, vec![(Position::Only, 0)]); + /// ``` + fn with_position(self) -> WithPosition + where Self: Sized, + { + with_position::with_position(self) + } + + /// Return an iterator adaptor that yields the indices of all elements + /// satisfying a predicate, counted from the start of the iterator. + /// + /// Equivalent to `iter.enumerate().filter(|(_, v)| predicate(v)).map(|(i, _)| i)`. + /// + /// ``` + /// use itertools::Itertools; + /// + /// let data = vec![1, 2, 3, 3, 4, 6, 7, 9]; + /// itertools::assert_equal(data.iter().positions(|v| v % 2 == 0), vec![1, 4, 5]); + /// + /// itertools::assert_equal(data.iter().positions(|v| v % 2 == 1).rev(), vec![7, 6, 3, 2, 0]); + /// ``` + fn positions
<P>
(self, predicate: P) -> Positions + where Self: Sized, + P: FnMut(Self::Item) -> bool, + { + adaptors::positions(self, predicate) + } + + /// Return an iterator adaptor that applies a mutating function + /// to each element before yielding it. + /// + /// ``` + /// use itertools::Itertools; + /// + /// let input = vec![vec![1], vec![3, 2, 1]]; + /// let it = input.into_iter().update(|mut v| v.push(0)); + /// itertools::assert_equal(it, vec![vec![1, 0], vec![3, 2, 1, 0]]); + /// ``` + fn update(self, updater: F) -> Update + where Self: Sized, + F: FnMut(&mut Self::Item), + { + adaptors::update(self, updater) + } + + // non-adaptor methods + /// Advances the iterator and returns the next items grouped in a tuple of + /// a specific size (up to 12). + /// + /// If there are enough elements to be grouped in a tuple, then the tuple is + /// returned inside `Some`, otherwise `None` is returned. + /// + /// ``` + /// use itertools::Itertools; + /// + /// let mut iter = 1..5; + /// + /// assert_eq!(Some((1, 2)), iter.next_tuple()); + /// ``` + fn next_tuple(&mut self) -> Option + where Self: Sized + Iterator, + T: traits::HomogeneousTuple + { + T::collect_from_iter_no_buf(self) + } + + /// Collects all items from the iterator into a tuple of a specific size + /// (up to 12). + /// + /// If the number of elements inside the iterator is **exactly** equal to + /// the tuple size, then the tuple is returned inside `Some`, otherwise + /// `None` is returned. + /// + /// ``` + /// use itertools::Itertools; + /// + /// let iter = 1..3; + /// + /// if let Some((x, y)) = iter.collect_tuple() { + /// assert_eq!((x, y), (1, 2)) + /// } else { + /// panic!("Expected two elements") + /// } + /// ``` + fn collect_tuple(mut self) -> Option + where Self: Sized + Iterator, + T: traits::HomogeneousTuple + { + match self.next_tuple() { + elt @ Some(_) => match self.next() { + Some(_) => None, + None => elt, + }, + _ => None + } + } + + + /// Find the position and value of the first element satisfying a predicate. + /// + /// The iterator is not advanced past the first element found. + /// + /// ``` + /// use itertools::Itertools; + /// + /// let text = "Hα"; + /// assert_eq!(text.chars().find_position(|ch| ch.is_lowercase()), Some((1, 'α'))); + /// ``` + fn find_position
<P>
(&mut self, mut pred: P) -> Option<(usize, Self::Item)> + where P: FnMut(&Self::Item) -> bool + { + for (index, elt) in self.enumerate() { + if pred(&elt) { + return Some((index, elt)); + } + } + None + } + /// Find the value of the first element satisfying a predicate or return the last element, if any. + /// + /// The iterator is not advanced past the first element found. + /// + /// ``` + /// use itertools::Itertools; + /// + /// let numbers = [1, 2, 3, 4]; + /// assert_eq!(numbers.iter().find_or_last(|&&x| x > 5), Some(&4)); + /// assert_eq!(numbers.iter().find_or_last(|&&x| x > 2), Some(&3)); + /// assert_eq!(std::iter::empty::().find_or_last(|&x| x > 5), None); + /// ``` + fn find_or_last
<P>
(mut self, mut predicate: P) -> Option + where Self: Sized, + P: FnMut(&Self::Item) -> bool, + { + let mut prev = None; + self.find_map(|x| if predicate(&x) { Some(x) } else { prev = Some(x); None }) + .or(prev) + } + /// Find the value of the first element satisfying a predicate or return the first element, if any. + /// + /// The iterator is not advanced past the first element found. + /// + /// ``` + /// use itertools::Itertools; + /// + /// let numbers = [1, 2, 3, 4]; + /// assert_eq!(numbers.iter().find_or_first(|&&x| x > 5), Some(&1)); + /// assert_eq!(numbers.iter().find_or_first(|&&x| x > 2), Some(&3)); + /// assert_eq!(std::iter::empty::().find_or_first(|&x| x > 5), None); + /// ``` + fn find_or_first
<P>
(mut self, mut predicate: P) -> Option + where Self: Sized, + P: FnMut(&Self::Item) -> bool, + { + let first = self.next()?; + Some(if predicate(&first) { + first + } else { + self.find(|x| predicate(x)).unwrap_or(first) + }) + } + /// Returns `true` if the given item is present in this iterator. + /// + /// This method is short-circuiting. If the given item is present in this + /// iterator, this method will consume the iterator up-to-and-including + /// the item. If the given item is not present in this iterator, the + /// iterator will be exhausted. + /// + /// ``` + /// use itertools::Itertools; + /// + /// #[derive(PartialEq, Debug)] + /// enum Enum { A, B, C, D, E, } + /// + /// let mut iter = vec![Enum::A, Enum::B, Enum::C, Enum::D].into_iter(); + /// + /// // search `iter` for `B` + /// assert_eq!(iter.contains(&Enum::B), true); + /// // `B` was found, so the iterator now rests at the item after `B` (i.e, `C`). + /// assert_eq!(iter.next(), Some(Enum::C)); + /// + /// // search `iter` for `E` + /// assert_eq!(iter.contains(&Enum::E), false); + /// // `E` wasn't found, so `iter` is now exhausted + /// assert_eq!(iter.next(), None); + /// ``` + fn contains(&mut self, query: &Q) -> bool + where + Self: Sized, + Self::Item: Borrow, + Q: PartialEq, + { + self.any(|x| x.borrow() == query) + } + + /// Check whether all elements compare equal. + /// + /// Empty iterators are considered to have equal elements: + /// + /// ``` + /// use itertools::Itertools; + /// + /// let data = vec![1, 1, 1, 2, 2, 3, 3, 3, 4, 5, 5]; + /// assert!(!data.iter().all_equal()); + /// assert!(data[0..3].iter().all_equal()); + /// assert!(data[3..5].iter().all_equal()); + /// assert!(data[5..8].iter().all_equal()); + /// + /// let data : Option = None; + /// assert!(data.into_iter().all_equal()); + /// ``` + fn all_equal(&mut self) -> bool + where Self: Sized, + Self::Item: PartialEq, + { + match self.next() { + None => true, + Some(a) => self.all(|x| a == x), + } + } + + /// If there are elements and they are all equal, return a single copy of that element. + /// If there are no elements, return an Error containing None. + /// If there are elements and they are not all equal, return a tuple containing the first + /// two non-equal elements found. + /// + /// ``` + /// use itertools::Itertools; + /// + /// let data = vec![1, 1, 1, 2, 2, 3, 3, 3, 4, 5, 5]; + /// assert_eq!(data.iter().all_equal_value(), Err(Some((&1, &2)))); + /// assert_eq!(data[0..3].iter().all_equal_value(), Ok(&1)); + /// assert_eq!(data[3..5].iter().all_equal_value(), Ok(&2)); + /// assert_eq!(data[5..8].iter().all_equal_value(), Ok(&3)); + /// + /// let data : Option = None; + /// assert_eq!(data.into_iter().all_equal_value(), Err(None)); + /// ``` + fn all_equal_value(&mut self) -> Result> + where + Self: Sized, + Self::Item: PartialEq + { + let first = self.next().ok_or(None)?; + let other = self.find(|x| x != &first); + if let Some(other) = other { + Err(Some((first, other))) + } else { + Ok(first) + } + } + + /// Check whether all elements are unique (non equal). 
+ /// + /// Empty iterators are considered to have unique elements: + /// + /// ``` + /// use itertools::Itertools; + /// + /// let data = vec![1, 2, 3, 4, 1, 5]; + /// assert!(!data.iter().all_unique()); + /// assert!(data[0..4].iter().all_unique()); + /// assert!(data[1..6].iter().all_unique()); + /// + /// let data : Option = None; + /// assert!(data.into_iter().all_unique()); + /// ``` + #[cfg(feature = "use_std")] + fn all_unique(&mut self) -> bool + where Self: Sized, + Self::Item: Eq + Hash + { + let mut used = HashSet::new(); + self.all(move |elt| used.insert(elt)) + } + + /// Consume the first `n` elements from the iterator eagerly, + /// and return the same iterator again. + /// + /// It works similarly to *.skip(* `n` *)* except it is eager and + /// preserves the iterator type. + /// + /// ``` + /// use itertools::Itertools; + /// + /// let mut iter = "αβγ".chars().dropping(2); + /// itertools::assert_equal(iter, "γ".chars()); + /// ``` + /// + /// *Fusing notes: if the iterator is exhausted by dropping, + /// the result of calling `.next()` again depends on the iterator implementation.* + fn dropping(mut self, n: usize) -> Self + where Self: Sized + { + if n > 0 { + self.nth(n - 1); + } + self + } + + /// Consume the last `n` elements from the iterator eagerly, + /// and return the same iterator again. + /// + /// This is only possible on double ended iterators. `n` may be + /// larger than the number of elements. + /// + /// Note: This method is eager, dropping the back elements immediately and + /// preserves the iterator type. + /// + /// ``` + /// use itertools::Itertools; + /// + /// let init = vec![0, 3, 6, 9].into_iter().dropping_back(1); + /// itertools::assert_equal(init, vec![0, 3, 6]); + /// ``` + fn dropping_back(mut self, n: usize) -> Self + where Self: Sized, + Self: DoubleEndedIterator + { + if n > 0 { + (&mut self).rev().nth(n - 1); + } + self + } + + /// Run the closure `f` eagerly on each element of the iterator. + /// + /// Consumes the iterator until its end. + /// + /// ``` + /// use std::sync::mpsc::channel; + /// use itertools::Itertools; + /// + /// let (tx, rx) = channel(); + /// + /// // use .foreach() to apply a function to each value -- sending it + /// (0..5).map(|x| x * 2 + 1).foreach(|x| { tx.send(x).unwrap(); } ); + /// + /// drop(tx); + /// + /// itertools::assert_equal(rx.iter(), vec![1, 3, 5, 7, 9]); + /// ``` + #[deprecated(note="Use .for_each() instead", since="0.8.0")] + fn foreach(self, f: F) + where F: FnMut(Self::Item), + Self: Sized, + { + self.for_each(f); + } + + /// Combine all an iterator's elements into one element by using [`Extend`]. + /// + /// This combinator will extend the first item with each of the rest of the + /// items of the iterator. If the iterator is empty, the default value of + /// `I::Item` is returned. + /// + /// ```rust + /// use itertools::Itertools; + /// + /// let input = vec![vec![1], vec![2, 3], vec![4, 5, 6]]; + /// assert_eq!(input.into_iter().concat(), + /// vec![1, 2, 3, 4, 5, 6]); + /// ``` + fn concat(self) -> Self::Item + where Self: Sized, + Self::Item: Extend<<::Item as IntoIterator>::Item> + IntoIterator + Default + { + concat(self) + } + + /// `.collect_vec()` is simply a type specialization of [`Iterator::collect`], + /// for convenience. 
+ #[cfg(feature = "use_alloc")] + fn collect_vec(self) -> Vec + where Self: Sized + { + self.collect() + } + + /// `.try_collect()` is more convenient way of writing + /// `.collect::>()` + /// + /// # Example + /// + /// ``` + /// use std::{fs, io}; + /// use itertools::Itertools; + /// + /// fn process_dir_entries(entries: &[fs::DirEntry]) { + /// // ... + /// } + /// + /// fn do_stuff() -> std::io::Result<()> { + /// let entries: Vec<_> = fs::read_dir(".")?.try_collect()?; + /// process_dir_entries(&entries); + /// + /// Ok(()) + /// } + /// ``` + #[cfg(feature = "use_alloc")] + fn try_collect(self) -> Result + where + Self: Sized + Iterator>, + Result: FromIterator>, + { + self.collect() + } + + /// Assign to each reference in `self` from the `from` iterator, + /// stopping at the shortest of the two iterators. + /// + /// The `from` iterator is queried for its next element before the `self` + /// iterator, and if either is exhausted the method is done. + /// + /// Return the number of elements written. + /// + /// ``` + /// use itertools::Itertools; + /// + /// let mut xs = [0; 4]; + /// xs.iter_mut().set_from(1..); + /// assert_eq!(xs, [1, 2, 3, 4]); + /// ``` + #[inline] + fn set_from<'a, A: 'a, J>(&mut self, from: J) -> usize + where Self: Iterator, + J: IntoIterator + { + let mut count = 0; + for elt in from { + match self.next() { + None => break, + Some(ptr) => *ptr = elt, + } + count += 1; + } + count + } + + /// Combine all iterator elements into one String, separated by `sep`. + /// + /// Use the `Display` implementation of each element. + /// + /// ``` + /// use itertools::Itertools; + /// + /// assert_eq!(["a", "b", "c"].iter().join(", "), "a, b, c"); + /// assert_eq!([1, 2, 3].iter().join(", "), "1, 2, 3"); + /// ``` + #[cfg(feature = "use_alloc")] + fn join(&mut self, sep: &str) -> String + where Self::Item: std::fmt::Display + { + match self.next() { + None => String::new(), + Some(first_elt) => { + // estimate lower bound of capacity needed + let (lower, _) = self.size_hint(); + let mut result = String::with_capacity(sep.len() * lower); + write!(&mut result, "{}", first_elt).unwrap(); + self.for_each(|elt| { + result.push_str(sep); + write!(&mut result, "{}", elt).unwrap(); + }); + result + } + } + } + + /// Format all iterator elements, separated by `sep`. + /// + /// All elements are formatted (any formatting trait) + /// with `sep` inserted between each element. + /// + /// **Panics** if the formatter helper is formatted more than once. + /// + /// ``` + /// use itertools::Itertools; + /// + /// let data = [1.1, 2.71828, -3.]; + /// assert_eq!( + /// format!("{:.2}", data.iter().format(", ")), + /// "1.10, 2.72, -3.00"); + /// ``` + fn format(self, sep: &str) -> Format + where Self: Sized, + { + format::new_format_default(self, sep) + } + + /// Format all iterator elements, separated by `sep`. + /// + /// This is a customizable version of [`.format()`](Itertools::format). + /// + /// The supplied closure `format` is called once per iterator element, + /// with two arguments: the element and a callback that takes a + /// `&Display` value, i.e. any reference to type that implements `Display`. + /// + /// Using `&format_args!(...)` is the most versatile way to apply custom + /// element formatting. The callback can be called multiple times if needed. + /// + /// **Panics** if the formatter helper is formatted more than once. 
+ /// + /// ``` + /// use itertools::Itertools; + /// + /// let data = [1.1, 2.71828, -3.]; + /// let data_formatter = data.iter().format_with(", ", |elt, f| f(&format_args!("{:.2}", elt))); + /// assert_eq!(format!("{}", data_formatter), + /// "1.10, 2.72, -3.00"); + /// + /// // .format_with() is recursively composable + /// let matrix = [[1., 2., 3.], + /// [4., 5., 6.]]; + /// let matrix_formatter = matrix.iter().format_with("\n", |row, f| { + /// f(&row.iter().format_with(", ", |elt, g| g(&elt))) + /// }); + /// assert_eq!(format!("{}", matrix_formatter), + /// "1, 2, 3\n4, 5, 6"); + /// + /// + /// ``` + fn format_with(self, sep: &str, format: F) -> FormatWith + where Self: Sized, + F: FnMut(Self::Item, &mut dyn FnMut(&dyn fmt::Display) -> fmt::Result) -> fmt::Result, + { + format::new_format(self, sep, format) + } + + /// See [`.fold_ok()`](Itertools::fold_ok). + #[deprecated(note="Use .fold_ok() instead", since="0.10.0")] + fn fold_results(&mut self, start: B, f: F) -> Result + where Self: Iterator>, + F: FnMut(B, A) -> B + { + self.fold_ok(start, f) + } + + /// Fold `Result` values from an iterator. + /// + /// Only `Ok` values are folded. If no error is encountered, the folded + /// value is returned inside `Ok`. Otherwise, the operation terminates + /// and returns the first `Err` value it encounters. No iterator elements are + /// consumed after the first error. + /// + /// The first accumulator value is the `start` parameter. + /// Each iteration passes the accumulator value and the next value inside `Ok` + /// to the fold function `f` and its return value becomes the new accumulator value. + /// + /// For example the sequence *Ok(1), Ok(2), Ok(3)* will result in a + /// computation like this: + /// + /// ```ignore + /// let mut accum = start; + /// accum = f(accum, 1); + /// accum = f(accum, 2); + /// accum = f(accum, 3); + /// ``` + /// + /// With a `start` value of 0 and an addition as folding function, + /// this effectively results in *((0 + 1) + 2) + 3* + /// + /// ``` + /// use std::ops::Add; + /// use itertools::Itertools; + /// + /// let values = [1, 2, -2, -1, 2, 1]; + /// assert_eq!( + /// values.iter() + /// .map(Ok::<_, ()>) + /// .fold_ok(0, Add::add), + /// Ok(3) + /// ); + /// assert!( + /// values.iter() + /// .map(|&x| if x >= 0 { Ok(x) } else { Err("Negative number") }) + /// .fold_ok(0, Add::add) + /// .is_err() + /// ); + /// ``` + fn fold_ok(&mut self, mut start: B, mut f: F) -> Result + where Self: Iterator>, + F: FnMut(B, A) -> B + { + for elt in self { + match elt { + Ok(v) => start = f(start, v), + Err(u) => return Err(u), + } + } + Ok(start) + } + + /// Fold `Option` values from an iterator. + /// + /// Only `Some` values are folded. If no `None` is encountered, the folded + /// value is returned inside `Some`. Otherwise, the operation terminates + /// and returns `None`. No iterator elements are consumed after the `None`. + /// + /// This is the `Option` equivalent to [`fold_ok`](Itertools::fold_ok). 
+ /// + /// ``` + /// use std::ops::Add; + /// use itertools::Itertools; + /// + /// let mut values = vec![Some(1), Some(2), Some(-2)].into_iter(); + /// assert_eq!(values.fold_options(5, Add::add), Some(5 + 1 + 2 - 2)); + /// + /// let mut more_values = vec![Some(2), None, Some(0)].into_iter(); + /// assert!(more_values.fold_options(0, Add::add).is_none()); + /// assert_eq!(more_values.next().unwrap(), Some(0)); + /// ``` + fn fold_options(&mut self, mut start: B, mut f: F) -> Option + where Self: Iterator>, + F: FnMut(B, A) -> B + { + for elt in self { + match elt { + Some(v) => start = f(start, v), + None => return None, + } + } + Some(start) + } + + /// Accumulator of the elements in the iterator. + /// + /// Like `.fold()`, without a base case. If the iterator is + /// empty, return `None`. With just one element, return it. + /// Otherwise elements are accumulated in sequence using the closure `f`. + /// + /// ``` + /// use itertools::Itertools; + /// + /// assert_eq!((0..10).fold1(|x, y| x + y).unwrap_or(0), 45); + /// assert_eq!((0..0).fold1(|x, y| x * y), None); + /// ``` + #[deprecated(since = "0.10.2", note = "Use `Iterator::reduce` instead")] + fn fold1(mut self, f: F) -> Option + where F: FnMut(Self::Item, Self::Item) -> Self::Item, + Self: Sized, + { + self.next().map(move |x| self.fold(x, f)) + } + + /// Accumulate the elements in the iterator in a tree-like manner. + /// + /// You can think of it as, while there's more than one item, repeatedly + /// combining adjacent items. It does so in bottom-up-merge-sort order, + /// however, so that it needs only logarithmic stack space. + /// + /// This produces a call tree like the following (where the calls under + /// an item are done after reading that item): + /// + /// ```text + /// 1 2 3 4 5 6 7 + /// │ │ │ │ │ │ │ + /// └─f └─f └─f │ + /// │ │ │ │ + /// └───f └─f + /// │ │ + /// └─────f + /// ``` + /// + /// Which, for non-associative functions, will typically produce a different + /// result than the linear call tree used by [`Iterator::reduce`]: + /// + /// ```text + /// 1 2 3 4 5 6 7 + /// │ │ │ │ │ │ │ + /// └─f─f─f─f─f─f + /// ``` + /// + /// If `f` is associative, prefer the normal [`Iterator::reduce`] instead. + /// + /// ``` + /// use itertools::Itertools; + /// + /// // The same tree as above + /// let num_strings = (1..8).map(|x| x.to_string()); + /// assert_eq!(num_strings.tree_fold1(|x, y| format!("f({}, {})", x, y)), + /// Some(String::from("f(f(f(1, 2), f(3, 4)), f(f(5, 6), 7))"))); + /// + /// // Like fold1, an empty iterator produces None + /// assert_eq!((0..0).tree_fold1(|x, y| x * y), None); + /// + /// // tree_fold1 matches fold1 for associative operations... + /// assert_eq!((0..10).tree_fold1(|x, y| x + y), + /// (0..10).fold1(|x, y| x + y)); + /// // ...but not for non-associative ones + /// assert_ne!((0..10).tree_fold1(|x, y| x - y), + /// (0..10).fold1(|x, y| x - y)); + /// ``` + fn tree_fold1(mut self, mut f: F) -> Option + where F: FnMut(Self::Item, Self::Item) -> Self::Item, + Self: Sized, + { + type State = Result>; + + fn inner0(it: &mut II, f: &mut FF) -> State + where + II: Iterator, + FF: FnMut(T, T) -> T + { + // This function could be replaced with `it.next().ok_or(None)`, + // but half the useful tree_fold1 work is combining adjacent items, + // so put that in a form that LLVM is more likely to optimize well. 
+ + let a = + if let Some(v) = it.next() { v } + else { return Err(None) }; + let b = + if let Some(v) = it.next() { v } + else { return Err(Some(a)) }; + Ok(f(a, b)) + } + + fn inner(stop: usize, it: &mut II, f: &mut FF) -> State + where + II: Iterator, + FF: FnMut(T, T) -> T + { + let mut x = inner0(it, f)?; + for height in 0..stop { + // Try to get another tree the same size with which to combine it, + // creating a new tree that's twice as big for next time around. + let next = + if height == 0 { + inner0(it, f) + } else { + inner(height, it, f) + }; + match next { + Ok(y) => x = f(x, y), + + // If we ran out of items, combine whatever we did manage + // to get. It's better combined with the current value + // than something in a parent frame, because the tree in + // the parent is always as least as big as this one. + Err(None) => return Err(Some(x)), + Err(Some(y)) => return Err(Some(f(x, y))), + } + } + Ok(x) + } + + match inner(usize::max_value(), &mut self, &mut f) { + Err(x) => x, + _ => unreachable!(), + } + } + + /// An iterator method that applies a function, producing a single, final value. + /// + /// `fold_while()` is basically equivalent to [`Iterator::fold`] but with additional support for + /// early exit via short-circuiting. + /// + /// ``` + /// use itertools::Itertools; + /// use itertools::FoldWhile::{Continue, Done}; + /// + /// let numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]; + /// + /// let mut result = 0; + /// + /// // for loop: + /// for i in &numbers { + /// if *i > 5 { + /// break; + /// } + /// result = result + i; + /// } + /// + /// // fold: + /// let result2 = numbers.iter().fold(0, |acc, x| { + /// if *x > 5 { acc } else { acc + x } + /// }); + /// + /// // fold_while: + /// let result3 = numbers.iter().fold_while(0, |acc, x| { + /// if *x > 5 { Done(acc) } else { Continue(acc + x) } + /// }).into_inner(); + /// + /// // they're the same + /// assert_eq!(result, result2); + /// assert_eq!(result2, result3); + /// ``` + /// + /// The big difference between the computations of `result2` and `result3` is that while + /// `fold()` called the provided closure for every item of the callee iterator, + /// `fold_while()` actually stopped iterating as soon as it encountered `Fold::Done(_)`. + fn fold_while(&mut self, init: B, mut f: F) -> FoldWhile + where Self: Sized, + F: FnMut(B, Self::Item) -> FoldWhile + { + use Result::{ + Ok as Continue, + Err as Break, + }; + + let result = self.try_fold(init, #[inline(always)] |acc, v| + match f(acc, v) { + FoldWhile::Continue(acc) => Continue(acc), + FoldWhile::Done(acc) => Break(acc), + } + ); + + match result { + Continue(acc) => FoldWhile::Continue(acc), + Break(acc) => FoldWhile::Done(acc), + } + } + + /// Iterate over the entire iterator and add all the elements. + /// + /// An empty iterator returns `None`, otherwise `Some(sum)`. + /// + /// # Panics + /// + /// When calling `sum1()` and a primitive integer type is being returned, this + /// method will panic if the computation overflows and debug assertions are + /// enabled. + /// + /// # Examples + /// + /// ``` + /// use itertools::Itertools; + /// + /// let empty_sum = (1..1).sum1::(); + /// assert_eq!(empty_sum, None); + /// + /// let nonempty_sum = (1..11).sum1::(); + /// assert_eq!(nonempty_sum, Some(55)); + /// ``` + fn sum1(mut self) -> Option + where Self: Sized, + S: std::iter::Sum, + { + self.next() + .map(|first| once(first).chain(self).sum()) + } + + /// Iterate over the entire iterator and multiply all the elements. 
+ /// + /// An empty iterator returns `None`, otherwise `Some(product)`. + /// + /// # Panics + /// + /// When calling `product1()` and a primitive integer type is being returned, + /// method will panic if the computation overflows and debug assertions are + /// enabled. + /// + /// # Examples + /// ``` + /// use itertools::Itertools; + /// + /// let empty_product = (1..1).product1::(); + /// assert_eq!(empty_product, None); + /// + /// let nonempty_product = (1..11).product1::(); + /// assert_eq!(nonempty_product, Some(3628800)); + /// ``` + fn product1

<P>(mut self) -> Option<P>

+ where Self: Sized, + P: std::iter::Product, + { + self.next() + .map(|first| once(first).chain(self).product()) + } + + /// Sort all iterator elements into a new iterator in ascending order. + /// + /// **Note:** This consumes the entire iterator, uses the + /// [`slice::sort_unstable`] method and returns the result as a new + /// iterator that owns its elements. + /// + /// This sort is unstable (i.e., may reorder equal elements). + /// + /// The sorted iterator, if directly collected to a `Vec`, is converted + /// without any extra copying or allocation cost. + /// + /// ``` + /// use itertools::Itertools; + /// + /// // sort the letters of the text in ascending order + /// let text = "bdacfe"; + /// itertools::assert_equal(text.chars().sorted_unstable(), + /// "abcdef".chars()); + /// ``` + #[cfg(feature = "use_alloc")] + fn sorted_unstable(self) -> VecIntoIter + where Self: Sized, + Self::Item: Ord + { + // Use .sort_unstable() directly since it is not quite identical with + // .sort_by(Ord::cmp) + let mut v = Vec::from_iter(self); + v.sort_unstable(); + v.into_iter() + } + + /// Sort all iterator elements into a new iterator in ascending order. + /// + /// **Note:** This consumes the entire iterator, uses the + /// [`slice::sort_unstable_by`] method and returns the result as a new + /// iterator that owns its elements. + /// + /// This sort is unstable (i.e., may reorder equal elements). + /// + /// The sorted iterator, if directly collected to a `Vec`, is converted + /// without any extra copying or allocation cost. + /// + /// ``` + /// use itertools::Itertools; + /// + /// // sort people in descending order by age + /// let people = vec![("Jane", 20), ("John", 18), ("Jill", 30), ("Jack", 27)]; + /// + /// let oldest_people_first = people + /// .into_iter() + /// .sorted_unstable_by(|a, b| Ord::cmp(&b.1, &a.1)) + /// .map(|(person, _age)| person); + /// + /// itertools::assert_equal(oldest_people_first, + /// vec!["Jill", "Jack", "Jane", "John"]); + /// ``` + #[cfg(feature = "use_alloc")] + fn sorted_unstable_by(self, cmp: F) -> VecIntoIter + where Self: Sized, + F: FnMut(&Self::Item, &Self::Item) -> Ordering, + { + let mut v = Vec::from_iter(self); + v.sort_unstable_by(cmp); + v.into_iter() + } + + /// Sort all iterator elements into a new iterator in ascending order. + /// + /// **Note:** This consumes the entire iterator, uses the + /// [`slice::sort_unstable_by_key`] method and returns the result as a new + /// iterator that owns its elements. + /// + /// This sort is unstable (i.e., may reorder equal elements). + /// + /// The sorted iterator, if directly collected to a `Vec`, is converted + /// without any extra copying or allocation cost. + /// + /// ``` + /// use itertools::Itertools; + /// + /// // sort people in descending order by age + /// let people = vec![("Jane", 20), ("John", 18), ("Jill", 30), ("Jack", 27)]; + /// + /// let oldest_people_first = people + /// .into_iter() + /// .sorted_unstable_by_key(|x| -x.1) + /// .map(|(person, _age)| person); + /// + /// itertools::assert_equal(oldest_people_first, + /// vec!["Jill", "Jack", "Jane", "John"]); + /// ``` + #[cfg(feature = "use_alloc")] + fn sorted_unstable_by_key(self, f: F) -> VecIntoIter + where Self: Sized, + K: Ord, + F: FnMut(&Self::Item) -> K, + { + let mut v = Vec::from_iter(self); + v.sort_unstable_by_key(f); + v.into_iter() + } + + /// Sort all iterator elements into a new iterator in ascending order. 
+ /// + /// **Note:** This consumes the entire iterator, uses the + /// [`slice::sort`] method and returns the result as a new + /// iterator that owns its elements. + /// + /// This sort is stable (i.e., does not reorder equal elements). + /// + /// The sorted iterator, if directly collected to a `Vec`, is converted + /// without any extra copying or allocation cost. + /// + /// ``` + /// use itertools::Itertools; + /// + /// // sort the letters of the text in ascending order + /// let text = "bdacfe"; + /// itertools::assert_equal(text.chars().sorted(), + /// "abcdef".chars()); + /// ``` + #[cfg(feature = "use_alloc")] + fn sorted(self) -> VecIntoIter + where Self: Sized, + Self::Item: Ord + { + // Use .sort() directly since it is not quite identical with + // .sort_by(Ord::cmp) + let mut v = Vec::from_iter(self); + v.sort(); + v.into_iter() + } + + /// Sort all iterator elements into a new iterator in ascending order. + /// + /// **Note:** This consumes the entire iterator, uses the + /// [`slice::sort_by`] method and returns the result as a new + /// iterator that owns its elements. + /// + /// This sort is stable (i.e., does not reorder equal elements). + /// + /// The sorted iterator, if directly collected to a `Vec`, is converted + /// without any extra copying or allocation cost. + /// + /// ``` + /// use itertools::Itertools; + /// + /// // sort people in descending order by age + /// let people = vec![("Jane", 20), ("John", 18), ("Jill", 30), ("Jack", 30)]; + /// + /// let oldest_people_first = people + /// .into_iter() + /// .sorted_by(|a, b| Ord::cmp(&b.1, &a.1)) + /// .map(|(person, _age)| person); + /// + /// itertools::assert_equal(oldest_people_first, + /// vec!["Jill", "Jack", "Jane", "John"]); + /// ``` + #[cfg(feature = "use_alloc")] + fn sorted_by(self, cmp: F) -> VecIntoIter + where Self: Sized, + F: FnMut(&Self::Item, &Self::Item) -> Ordering, + { + let mut v = Vec::from_iter(self); + v.sort_by(cmp); + v.into_iter() + } + + /// Sort all iterator elements into a new iterator in ascending order. + /// + /// **Note:** This consumes the entire iterator, uses the + /// [`slice::sort_by_key`] method and returns the result as a new + /// iterator that owns its elements. + /// + /// This sort is stable (i.e., does not reorder equal elements). + /// + /// The sorted iterator, if directly collected to a `Vec`, is converted + /// without any extra copying or allocation cost. + /// + /// ``` + /// use itertools::Itertools; + /// + /// // sort people in descending order by age + /// let people = vec![("Jane", 20), ("John", 18), ("Jill", 30), ("Jack", 30)]; + /// + /// let oldest_people_first = people + /// .into_iter() + /// .sorted_by_key(|x| -x.1) + /// .map(|(person, _age)| person); + /// + /// itertools::assert_equal(oldest_people_first, + /// vec!["Jill", "Jack", "Jane", "John"]); + /// ``` + #[cfg(feature = "use_alloc")] + fn sorted_by_key(self, f: F) -> VecIntoIter + where Self: Sized, + K: Ord, + F: FnMut(&Self::Item) -> K, + { + let mut v = Vec::from_iter(self); + v.sort_by_key(f); + v.into_iter() + } + + /// Sort all iterator elements into a new iterator in ascending order. The key function is + /// called exactly once per key. + /// + /// **Note:** This consumes the entire iterator, uses the + /// [`slice::sort_by_cached_key`] method and returns the result as a new + /// iterator that owns its elements. + /// + /// This sort is stable (i.e., does not reorder equal elements). 
+ /// + /// The sorted iterator, if directly collected to a `Vec`, is converted + /// without any extra copying or allocation cost. + /// + /// ``` + /// use itertools::Itertools; + /// + /// // sort people in descending order by age + /// let people = vec![("Jane", 20), ("John", 18), ("Jill", 30), ("Jack", 30)]; + /// + /// let oldest_people_first = people + /// .into_iter() + /// .sorted_by_cached_key(|x| -x.1) + /// .map(|(person, _age)| person); + /// + /// itertools::assert_equal(oldest_people_first, + /// vec!["Jill", "Jack", "Jane", "John"]); + /// ``` + #[cfg(feature = "use_alloc")] + fn sorted_by_cached_key(self, f: F) -> VecIntoIter + where + Self: Sized, + K: Ord, + F: FnMut(&Self::Item) -> K, + { + let mut v = Vec::from_iter(self); + v.sort_by_cached_key(f); + v.into_iter() + } + + /// Sort the k smallest elements into a new iterator, in ascending order. + /// + /// **Note:** This consumes the entire iterator, and returns the result + /// as a new iterator that owns its elements. If the input contains + /// less than k elements, the result is equivalent to `self.sorted()`. + /// + /// This is guaranteed to use `k * sizeof(Self::Item) + O(1)` memory + /// and `O(n log k)` time, with `n` the number of elements in the input. + /// + /// The sorted iterator, if directly collected to a `Vec`, is converted + /// without any extra copying or allocation cost. + /// + /// **Note:** This is functionally-equivalent to `self.sorted().take(k)` + /// but much more efficient. + /// + /// ``` + /// use itertools::Itertools; + /// + /// // A random permutation of 0..15 + /// let numbers = vec![6, 9, 1, 14, 0, 4, 8, 7, 11, 2, 10, 3, 13, 12, 5]; + /// + /// let five_smallest = numbers + /// .into_iter() + /// .k_smallest(5); + /// + /// itertools::assert_equal(five_smallest, 0..5); + /// ``` + #[cfg(feature = "use_alloc")] + fn k_smallest(self, k: usize) -> VecIntoIter + where Self: Sized, + Self::Item: Ord + { + crate::k_smallest::k_smallest(self, k) + .into_sorted_vec() + .into_iter() + } + + /// Collect all iterator elements into one of two + /// partitions. Unlike [`Iterator::partition`], each partition may + /// have a distinct type. + /// + /// ``` + /// use itertools::{Itertools, Either}; + /// + /// let successes_and_failures = vec![Ok(1), Err(false), Err(true), Ok(2)]; + /// + /// let (successes, failures): (Vec<_>, Vec<_>) = successes_and_failures + /// .into_iter() + /// .partition_map(|r| { + /// match r { + /// Ok(v) => Either::Left(v), + /// Err(v) => Either::Right(v), + /// } + /// }); + /// + /// assert_eq!(successes, [1, 2]); + /// assert_eq!(failures, [false, true]); + /// ``` + fn partition_map(self, mut predicate: F) -> (A, B) + where Self: Sized, + F: FnMut(Self::Item) -> Either, + A: Default + Extend, + B: Default + Extend, + { + let mut left = A::default(); + let mut right = B::default(); + + self.for_each(|val| match predicate(val) { + Either::Left(v) => left.extend(Some(v)), + Either::Right(v) => right.extend(Some(v)), + }); + + (left, right) + } + + /// Partition a sequence of `Result`s into one list of all the `Ok` elements + /// and another list of all the `Err` elements. 
+ /// + /// ``` + /// use itertools::Itertools; + /// + /// let successes_and_failures = vec![Ok(1), Err(false), Err(true), Ok(2)]; + /// + /// let (successes, failures): (Vec<_>, Vec<_>) = successes_and_failures + /// .into_iter() + /// .partition_result(); + /// + /// assert_eq!(successes, [1, 2]); + /// assert_eq!(failures, [false, true]); + /// ``` + fn partition_result(self) -> (A, B) + where + Self: Iterator> + Sized, + A: Default + Extend, + B: Default + Extend, + { + self.partition_map(|r| match r { + Ok(v) => Either::Left(v), + Err(v) => Either::Right(v), + }) + } + + /// Return a `HashMap` of keys mapped to `Vec`s of values. Keys and values + /// are taken from `(Key, Value)` tuple pairs yielded by the input iterator. + /// + /// Essentially a shorthand for `.into_grouping_map().collect::>()`. + /// + /// ``` + /// use itertools::Itertools; + /// + /// let data = vec![(0, 10), (2, 12), (3, 13), (0, 20), (3, 33), (2, 42)]; + /// let lookup = data.into_iter().into_group_map(); + /// + /// assert_eq!(lookup[&0], vec![10, 20]); + /// assert_eq!(lookup.get(&1), None); + /// assert_eq!(lookup[&2], vec![12, 42]); + /// assert_eq!(lookup[&3], vec![13, 33]); + /// ``` + #[cfg(feature = "use_std")] + fn into_group_map(self) -> HashMap> + where Self: Iterator + Sized, + K: Hash + Eq, + { + group_map::into_group_map(self) + } + + /// Return an `Iterator` on a `HashMap`. Keys mapped to `Vec`s of values. The key is specified + /// in the closure. + /// + /// Essentially a shorthand for `.into_grouping_map_by(f).collect::>()`. + /// + /// ``` + /// use itertools::Itertools; + /// use std::collections::HashMap; + /// + /// let data = vec![(0, 10), (2, 12), (3, 13), (0, 20), (3, 33), (2, 42)]; + /// let lookup: HashMap> = + /// data.clone().into_iter().into_group_map_by(|a| a.0); + /// + /// assert_eq!(lookup[&0], vec![(0,10),(0,20)]); + /// assert_eq!(lookup.get(&1), None); + /// assert_eq!(lookup[&2], vec![(2,12), (2,42)]); + /// assert_eq!(lookup[&3], vec![(3,13), (3,33)]); + /// + /// assert_eq!( + /// data.into_iter() + /// .into_group_map_by(|x| x.0) + /// .into_iter() + /// .map(|(key, values)| (key, values.into_iter().fold(0,|acc, (_,v)| acc + v ))) + /// .collect::>()[&0], + /// 30, + /// ); + /// ``` + #[cfg(feature = "use_std")] + fn into_group_map_by(self, f: F) -> HashMap> + where + Self: Iterator + Sized, + K: Hash + Eq, + F: Fn(&V) -> K, + { + group_map::into_group_map_by(self, f) + } + + /// Constructs a `GroupingMap` to be used later with one of the efficient + /// group-and-fold operations it allows to perform. + /// + /// The input iterator must yield item in the form of `(K, V)` where the + /// value of type `K` will be used as key to identify the groups and the + /// value of type `V` as value for the folding operation. + /// + /// See [`GroupingMap`] for more informations + /// on what operations are available. + #[cfg(feature = "use_std")] + fn into_grouping_map(self) -> GroupingMap + where Self: Iterator + Sized, + K: Hash + Eq, + { + grouping_map::new(self) + } + + /// Constructs a `GroupingMap` to be used later with one of the efficient + /// group-and-fold operations it allows to perform. + /// + /// The values from this iterator will be used as values for the folding operation + /// while the keys will be obtained from the values by calling `key_mapper`. + /// + /// See [`GroupingMap`] for more informations + /// on what operations are available. 
+ #[cfg(feature = "use_std")] + fn into_grouping_map_by(self, key_mapper: F) -> GroupingMapBy + where Self: Iterator + Sized, + K: Hash + Eq, + F: FnMut(&V) -> K + { + grouping_map::new(grouping_map::MapForGrouping::new(self, key_mapper)) + } + + /// Return all minimum elements of an iterator. + /// + /// # Examples + /// + /// ``` + /// use itertools::Itertools; + /// + /// let a: [i32; 0] = []; + /// assert_eq!(a.iter().min_set(), Vec::<&i32>::new()); + /// + /// let a = [1]; + /// assert_eq!(a.iter().min_set(), vec![&1]); + /// + /// let a = [1, 2, 3, 4, 5]; + /// assert_eq!(a.iter().min_set(), vec![&1]); + /// + /// let a = [1, 1, 1, 1]; + /// assert_eq!(a.iter().min_set(), vec![&1, &1, &1, &1]); + /// ``` + /// + /// The elements can be floats but no particular result is guaranteed + /// if an element is NaN. + #[cfg(feature = "use_std")] + fn min_set(self) -> Vec + where Self: Sized, Self::Item: Ord + { + extrema_set::min_set_impl(self, |_| (), |x, y, _, _| x.cmp(y)) + } + + /// Return all minimum elements of an iterator, as determined by + /// the specified function. + /// + /// # Examples + /// + /// ``` + /// # use std::cmp::Ordering; + /// use itertools::Itertools; + /// + /// let a: [(i32, i32); 0] = []; + /// assert_eq!(a.iter().min_set_by(|_, _| Ordering::Equal), Vec::<&(i32, i32)>::new()); + /// + /// let a = [(1, 2)]; + /// assert_eq!(a.iter().min_set_by(|&&(k1,_), &&(k2, _)| k1.cmp(&k2)), vec![&(1, 2)]); + /// + /// let a = [(1, 2), (2, 2), (3, 9), (4, 8), (5, 9)]; + /// assert_eq!(a.iter().min_set_by(|&&(_,k1), &&(_,k2)| k1.cmp(&k2)), vec![&(1, 2), &(2, 2)]); + /// + /// let a = [(1, 2), (1, 3), (1, 4), (1, 5)]; + /// assert_eq!(a.iter().min_set_by(|&&(k1,_), &&(k2, _)| k1.cmp(&k2)), vec![&(1, 2), &(1, 3), &(1, 4), &(1, 5)]); + /// ``` + /// + /// The elements can be floats but no particular result is guaranteed + /// if an element is NaN. + #[cfg(feature = "use_std")] + fn min_set_by(self, mut compare: F) -> Vec + where Self: Sized, F: FnMut(&Self::Item, &Self::Item) -> Ordering + { + extrema_set::min_set_impl( + self, + |_| (), + |x, y, _, _| compare(x, y) + ) + } + + /// Return all minimum elements of an iterator, as determined by + /// the specified function. + /// + /// # Examples + /// + /// ``` + /// use itertools::Itertools; + /// + /// let a: [(i32, i32); 0] = []; + /// assert_eq!(a.iter().min_set_by_key(|_| ()), Vec::<&(i32, i32)>::new()); + /// + /// let a = [(1, 2)]; + /// assert_eq!(a.iter().min_set_by_key(|&&(k,_)| k), vec![&(1, 2)]); + /// + /// let a = [(1, 2), (2, 2), (3, 9), (4, 8), (5, 9)]; + /// assert_eq!(a.iter().min_set_by_key(|&&(_, k)| k), vec![&(1, 2), &(2, 2)]); + /// + /// let a = [(1, 2), (1, 3), (1, 4), (1, 5)]; + /// assert_eq!(a.iter().min_set_by_key(|&&(k, _)| k), vec![&(1, 2), &(1, 3), &(1, 4), &(1, 5)]); + /// ``` + /// + /// The elements can be floats but no particular result is guaranteed + /// if an element is NaN. + #[cfg(feature = "use_std")] + fn min_set_by_key(self, key: F) -> Vec + where Self: Sized, K: Ord, F: FnMut(&Self::Item) -> K + { + extrema_set::min_set_impl(self, key, |_, _, kx, ky| kx.cmp(ky)) + } + + /// Return all maximum elements of an iterator. 
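Neither `into_grouping_map` nor `into_grouping_map_by` above carries a doctest, so a brief usage sketch may help. It assumes the `sum` and `max` reducers that `GroupingMap` provides elsewhere in the crate, and the data values are made up purely for illustration:

```
use itertools::Itertools;

// Sum the values in each group keyed by the first tuple element.
let data = vec![(0, 10), (2, 12), (0, 20), (2, 30)];
let sums = data.into_iter().into_grouping_map().sum();
assert_eq!(sums[&0], 30);
assert_eq!(sums[&2], 42);

// Derive the group key from the value itself with a closure.
let maxima = (1..=10).into_grouping_map_by(|n| n % 3).max();
assert_eq!(maxima[&1], 10);
assert_eq!(maxima[&2], 8);
```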
+ /// + /// # Examples + /// + /// ``` + /// use itertools::Itertools; + /// + /// let a: [i32; 0] = []; + /// assert_eq!(a.iter().max_set(), Vec::<&i32>::new()); + /// + /// let a = [1]; + /// assert_eq!(a.iter().max_set(), vec![&1]); + /// + /// let a = [1, 2, 3, 4, 5]; + /// assert_eq!(a.iter().max_set(), vec![&5]); + /// + /// let a = [1, 1, 1, 1]; + /// assert_eq!(a.iter().max_set(), vec![&1, &1, &1, &1]); + /// ``` + /// + /// The elements can be floats but no particular result is guaranteed + /// if an element is NaN. + #[cfg(feature = "use_std")] + fn max_set(self) -> Vec + where Self: Sized, Self::Item: Ord + { + extrema_set::max_set_impl(self, |_| (), |x, y, _, _| x.cmp(y)) + } + + /// Return all maximum elements of an iterator, as determined by + /// the specified function. + /// + /// # Examples + /// + /// ``` + /// # use std::cmp::Ordering; + /// use itertools::Itertools; + /// + /// let a: [(i32, i32); 0] = []; + /// assert_eq!(a.iter().max_set_by(|_, _| Ordering::Equal), Vec::<&(i32, i32)>::new()); + /// + /// let a = [(1, 2)]; + /// assert_eq!(a.iter().max_set_by(|&&(k1,_), &&(k2, _)| k1.cmp(&k2)), vec![&(1, 2)]); + /// + /// let a = [(1, 2), (2, 2), (3, 9), (4, 8), (5, 9)]; + /// assert_eq!(a.iter().max_set_by(|&&(_,k1), &&(_,k2)| k1.cmp(&k2)), vec![&(3, 9), &(5, 9)]); + /// + /// let a = [(1, 2), (1, 3), (1, 4), (1, 5)]; + /// assert_eq!(a.iter().max_set_by(|&&(k1,_), &&(k2, _)| k1.cmp(&k2)), vec![&(1, 2), &(1, 3), &(1, 4), &(1, 5)]); + /// ``` + /// + /// The elements can be floats but no particular result is guaranteed + /// if an element is NaN. + #[cfg(feature = "use_std")] + fn max_set_by(self, mut compare: F) -> Vec + where Self: Sized, F: FnMut(&Self::Item, &Self::Item) -> Ordering + { + extrema_set::max_set_impl( + self, + |_| (), + |x, y, _, _| compare(x, y) + ) + } + + /// Return all maximum elements of an iterator, as determined by + /// the specified function. + /// + /// # Examples + /// + /// ``` + /// use itertools::Itertools; + /// + /// let a: [(i32, i32); 0] = []; + /// assert_eq!(a.iter().max_set_by_key(|_| ()), Vec::<&(i32, i32)>::new()); + /// + /// let a = [(1, 2)]; + /// assert_eq!(a.iter().max_set_by_key(|&&(k,_)| k), vec![&(1, 2)]); + /// + /// let a = [(1, 2), (2, 2), (3, 9), (4, 8), (5, 9)]; + /// assert_eq!(a.iter().max_set_by_key(|&&(_, k)| k), vec![&(3, 9), &(5, 9)]); + /// + /// let a = [(1, 2), (1, 3), (1, 4), (1, 5)]; + /// assert_eq!(a.iter().max_set_by_key(|&&(k, _)| k), vec![&(1, 2), &(1, 3), &(1, 4), &(1, 5)]); + /// ``` + /// + /// The elements can be floats but no particular result is guaranteed + /// if an element is NaN. + #[cfg(feature = "use_std")] + fn max_set_by_key(self, key: F) -> Vec + where Self: Sized, K: Ord, F: FnMut(&Self::Item) -> K + { + extrema_set::max_set_impl(self, key, |_, _, kx, ky| kx.cmp(ky)) + } + + /// Return the minimum and maximum elements in the iterator. + /// + /// The return type `MinMaxResult` is an enum of three variants: + /// + /// - `NoElements` if the iterator is empty. + /// - `OneElement(x)` if the iterator has exactly one element. + /// - `MinMax(x, y)` is returned otherwise, where `x <= y`. Two + /// values are equal if and only if there is more than one + /// element in the iterator and all elements are equal. + /// + /// On an iterator of length `n`, `minmax` does `1.5 * n` comparisons, + /// and so is faster than calling `min` and `max` separately which does + /// `2 * n` comparisons. 
+ /// + /// # Examples + /// + /// ``` + /// use itertools::Itertools; + /// use itertools::MinMaxResult::{NoElements, OneElement, MinMax}; + /// + /// let a: [i32; 0] = []; + /// assert_eq!(a.iter().minmax(), NoElements); + /// + /// let a = [1]; + /// assert_eq!(a.iter().minmax(), OneElement(&1)); + /// + /// let a = [1, 2, 3, 4, 5]; + /// assert_eq!(a.iter().minmax(), MinMax(&1, &5)); + /// + /// let a = [1, 1, 1, 1]; + /// assert_eq!(a.iter().minmax(), MinMax(&1, &1)); + /// ``` + /// + /// The elements can be floats but no particular result is guaranteed + /// if an element is NaN. + fn minmax(self) -> MinMaxResult + where Self: Sized, Self::Item: PartialOrd + { + minmax::minmax_impl(self, |_| (), |x, y, _, _| x < y) + } + + /// Return the minimum and maximum element of an iterator, as determined by + /// the specified function. + /// + /// The return value is a variant of [`MinMaxResult`] like for [`.minmax()`](Itertools::minmax). + /// + /// For the minimum, the first minimal element is returned. For the maximum, + /// the last maximal element wins. This matches the behavior of the standard + /// [`Iterator::min`] and [`Iterator::max`] methods. + /// + /// The keys can be floats but no particular result is guaranteed + /// if a key is NaN. + fn minmax_by_key(self, key: F) -> MinMaxResult + where Self: Sized, K: PartialOrd, F: FnMut(&Self::Item) -> K + { + minmax::minmax_impl(self, key, |_, _, xk, yk| xk < yk) + } + + /// Return the minimum and maximum element of an iterator, as determined by + /// the specified comparison function. + /// + /// The return value is a variant of [`MinMaxResult`] like for [`.minmax()`](Itertools::minmax). + /// + /// For the minimum, the first minimal element is returned. For the maximum, + /// the last maximal element wins. This matches the behavior of the standard + /// [`Iterator::min`] and [`Iterator::max`] methods. + fn minmax_by(self, mut compare: F) -> MinMaxResult + where Self: Sized, F: FnMut(&Self::Item, &Self::Item) -> Ordering + { + minmax::minmax_impl( + self, + |_| (), + |x, y, _, _| Ordering::Less == compare(x, y) + ) + } + + /// Return the position of the maximum element in the iterator. + /// + /// If several elements are equally maximum, the position of the + /// last of them is returned. + /// + /// # Examples + /// + /// ``` + /// use itertools::Itertools; + /// + /// let a: [i32; 0] = []; + /// assert_eq!(a.iter().position_max(), None); + /// + /// let a = [-3, 0, 1, 5, -10]; + /// assert_eq!(a.iter().position_max(), Some(3)); + /// + /// let a = [1, 1, -1, -1]; + /// assert_eq!(a.iter().position_max(), Some(1)); + /// ``` + fn position_max(self) -> Option + where Self: Sized, Self::Item: Ord + { + self.enumerate() + .max_by(|x, y| Ord::cmp(&x.1, &y.1)) + .map(|x| x.0) + } + + /// Return the position of the maximum element in the iterator, as + /// determined by the specified function. + /// + /// If several elements are equally maximum, the position of the + /// last of them is returned. 
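`minmax_by_key` and `minmax_by` above are documented without a doctest; the sketch below (an illustrative word list, not taken from the crate's own tests) shows the keyed and comparator forms side by side:

```
use itertools::Itertools;
use itertools::MinMaxResult::MinMax;

// Shortest and longest word by length; ties resolve as described above
// (first minimum wins, last maximum wins).
let words = ["ox", "cat", "mouse", "dog", "elk"];
assert_eq!(words.iter().minmax_by_key(|w| w.len()), MinMax(&"ox", &"mouse"));
assert_eq!(words.iter().minmax_by(|a, b| a.len().cmp(&b.len())),
           MinMax(&"ox", &"mouse"));
```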
+ /// + /// # Examples + /// + /// ``` + /// use itertools::Itertools; + /// + /// let a: [i32; 0] = []; + /// assert_eq!(a.iter().position_max_by_key(|x| x.abs()), None); + /// + /// let a = [-3_i32, 0, 1, 5, -10]; + /// assert_eq!(a.iter().position_max_by_key(|x| x.abs()), Some(4)); + /// + /// let a = [1_i32, 1, -1, -1]; + /// assert_eq!(a.iter().position_max_by_key(|x| x.abs()), Some(3)); + /// ``` + fn position_max_by_key(self, mut key: F) -> Option + where Self: Sized, K: Ord, F: FnMut(&Self::Item) -> K + { + self.enumerate() + .max_by(|x, y| Ord::cmp(&key(&x.1), &key(&y.1))) + .map(|x| x.0) + } + + /// Return the position of the maximum element in the iterator, as + /// determined by the specified comparison function. + /// + /// If several elements are equally maximum, the position of the + /// last of them is returned. + /// + /// # Examples + /// + /// ``` + /// use itertools::Itertools; + /// + /// let a: [i32; 0] = []; + /// assert_eq!(a.iter().position_max_by(|x, y| x.cmp(y)), None); + /// + /// let a = [-3_i32, 0, 1, 5, -10]; + /// assert_eq!(a.iter().position_max_by(|x, y| x.cmp(y)), Some(3)); + /// + /// let a = [1_i32, 1, -1, -1]; + /// assert_eq!(a.iter().position_max_by(|x, y| x.cmp(y)), Some(1)); + /// ``` + fn position_max_by(self, mut compare: F) -> Option + where Self: Sized, F: FnMut(&Self::Item, &Self::Item) -> Ordering + { + self.enumerate() + .max_by(|x, y| compare(&x.1, &y.1)) + .map(|x| x.0) + } + + /// Return the position of the minimum element in the iterator. + /// + /// If several elements are equally minimum, the position of the + /// first of them is returned. + /// + /// # Examples + /// + /// ``` + /// use itertools::Itertools; + /// + /// let a: [i32; 0] = []; + /// assert_eq!(a.iter().position_min(), None); + /// + /// let a = [-3, 0, 1, 5, -10]; + /// assert_eq!(a.iter().position_min(), Some(4)); + /// + /// let a = [1, 1, -1, -1]; + /// assert_eq!(a.iter().position_min(), Some(2)); + /// ``` + fn position_min(self) -> Option + where Self: Sized, Self::Item: Ord + { + self.enumerate() + .min_by(|x, y| Ord::cmp(&x.1, &y.1)) + .map(|x| x.0) + } + + /// Return the position of the minimum element in the iterator, as + /// determined by the specified function. + /// + /// If several elements are equally minimum, the position of the + /// first of them is returned. + /// + /// # Examples + /// + /// ``` + /// use itertools::Itertools; + /// + /// let a: [i32; 0] = []; + /// assert_eq!(a.iter().position_min_by_key(|x| x.abs()), None); + /// + /// let a = [-3_i32, 0, 1, 5, -10]; + /// assert_eq!(a.iter().position_min_by_key(|x| x.abs()), Some(1)); + /// + /// let a = [1_i32, 1, -1, -1]; + /// assert_eq!(a.iter().position_min_by_key(|x| x.abs()), Some(0)); + /// ``` + fn position_min_by_key(self, mut key: F) -> Option + where Self: Sized, K: Ord, F: FnMut(&Self::Item) -> K + { + self.enumerate() + .min_by(|x, y| Ord::cmp(&key(&x.1), &key(&y.1))) + .map(|x| x.0) + } + + /// Return the position of the minimum element in the iterator, as + /// determined by the specified comparison function. + /// + /// If several elements are equally minimum, the position of the + /// first of them is returned. 
+ /// + /// # Examples + /// + /// ``` + /// use itertools::Itertools; + /// + /// let a: [i32; 0] = []; + /// assert_eq!(a.iter().position_min_by(|x, y| x.cmp(y)), None); + /// + /// let a = [-3_i32, 0, 1, 5, -10]; + /// assert_eq!(a.iter().position_min_by(|x, y| x.cmp(y)), Some(4)); + /// + /// let a = [1_i32, 1, -1, -1]; + /// assert_eq!(a.iter().position_min_by(|x, y| x.cmp(y)), Some(2)); + /// ``` + fn position_min_by(self, mut compare: F) -> Option + where Self: Sized, F: FnMut(&Self::Item, &Self::Item) -> Ordering + { + self.enumerate() + .min_by(|x, y| compare(&x.1, &y.1)) + .map(|x| x.0) + } + + /// Return the positions of the minimum and maximum elements in + /// the iterator. + /// + /// The return type [`MinMaxResult`] is an enum of three variants: + /// + /// - `NoElements` if the iterator is empty. + /// - `OneElement(xpos)` if the iterator has exactly one element. + /// - `MinMax(xpos, ypos)` is returned otherwise, where the + /// element at `xpos` ≤ the element at `ypos`. While the + /// referenced elements themselves may be equal, `xpos` cannot + /// be equal to `ypos`. + /// + /// On an iterator of length `n`, `position_minmax` does `1.5 * n` + /// comparisons, and so is faster than calling `position_min` and + /// `position_max` separately which does `2 * n` comparisons. + /// + /// For the minimum, if several elements are equally minimum, the + /// position of the first of them is returned. For the maximum, if + /// several elements are equally maximum, the position of the last + /// of them is returned. + /// + /// The elements can be floats but no particular result is + /// guaranteed if an element is NaN. + /// + /// # Examples + /// + /// ``` + /// use itertools::Itertools; + /// use itertools::MinMaxResult::{NoElements, OneElement, MinMax}; + /// + /// let a: [i32; 0] = []; + /// assert_eq!(a.iter().position_minmax(), NoElements); + /// + /// let a = [10]; + /// assert_eq!(a.iter().position_minmax(), OneElement(0)); + /// + /// let a = [-3, 0, 1, 5, -10]; + /// assert_eq!(a.iter().position_minmax(), MinMax(4, 3)); + /// + /// let a = [1, 1, -1, -1]; + /// assert_eq!(a.iter().position_minmax(), MinMax(2, 1)); + /// ``` + fn position_minmax(self) -> MinMaxResult + where Self: Sized, Self::Item: PartialOrd + { + use crate::MinMaxResult::{NoElements, OneElement, MinMax}; + match minmax::minmax_impl(self.enumerate(), |_| (), |x, y, _, _| x.1 < y.1) { + NoElements => NoElements, + OneElement(x) => OneElement(x.0), + MinMax(x, y) => MinMax(x.0, y.0), + } + } + + /// Return the postions of the minimum and maximum elements of an + /// iterator, as determined by the specified function. + /// + /// The return value is a variant of [`MinMaxResult`] like for + /// [`position_minmax`]. + /// + /// For the minimum, if several elements are equally minimum, the + /// position of the first of them is returned. For the maximum, if + /// several elements are equally maximum, the position of the last + /// of them is returned. + /// + /// The keys can be floats but no particular result is guaranteed + /// if a key is NaN. 
+ /// + /// # Examples + /// + /// ``` + /// use itertools::Itertools; + /// use itertools::MinMaxResult::{NoElements, OneElement, MinMax}; + /// + /// let a: [i32; 0] = []; + /// assert_eq!(a.iter().position_minmax_by_key(|x| x.abs()), NoElements); + /// + /// let a = [10_i32]; + /// assert_eq!(a.iter().position_minmax_by_key(|x| x.abs()), OneElement(0)); + /// + /// let a = [-3_i32, 0, 1, 5, -10]; + /// assert_eq!(a.iter().position_minmax_by_key(|x| x.abs()), MinMax(1, 4)); + /// + /// let a = [1_i32, 1, -1, -1]; + /// assert_eq!(a.iter().position_minmax_by_key(|x| x.abs()), MinMax(0, 3)); + /// ``` + /// + /// [`position_minmax`]: Self::position_minmax + fn position_minmax_by_key(self, mut key: F) -> MinMaxResult + where Self: Sized, K: PartialOrd, F: FnMut(&Self::Item) -> K + { + use crate::MinMaxResult::{NoElements, OneElement, MinMax}; + match self.enumerate().minmax_by_key(|e| key(&e.1)) { + NoElements => NoElements, + OneElement(x) => OneElement(x.0), + MinMax(x, y) => MinMax(x.0, y.0), + } + } + + /// Return the postions of the minimum and maximum elements of an + /// iterator, as determined by the specified comparison function. + /// + /// The return value is a variant of [`MinMaxResult`] like for + /// [`position_minmax`]. + /// + /// For the minimum, if several elements are equally minimum, the + /// position of the first of them is returned. For the maximum, if + /// several elements are equally maximum, the position of the last + /// of them is returned. + /// + /// # Examples + /// + /// ``` + /// use itertools::Itertools; + /// use itertools::MinMaxResult::{NoElements, OneElement, MinMax}; + /// + /// let a: [i32; 0] = []; + /// assert_eq!(a.iter().position_minmax_by(|x, y| x.cmp(y)), NoElements); + /// + /// let a = [10_i32]; + /// assert_eq!(a.iter().position_minmax_by(|x, y| x.cmp(y)), OneElement(0)); + /// + /// let a = [-3_i32, 0, 1, 5, -10]; + /// assert_eq!(a.iter().position_minmax_by(|x, y| x.cmp(y)), MinMax(4, 3)); + /// + /// let a = [1_i32, 1, -1, -1]; + /// assert_eq!(a.iter().position_minmax_by(|x, y| x.cmp(y)), MinMax(2, 1)); + /// ``` + /// + /// [`position_minmax`]: Self::position_minmax + fn position_minmax_by(self, mut compare: F) -> MinMaxResult + where Self: Sized, F: FnMut(&Self::Item, &Self::Item) -> Ordering + { + use crate::MinMaxResult::{NoElements, OneElement, MinMax}; + match self.enumerate().minmax_by(|x, y| compare(&x.1, &y.1)) { + NoElements => NoElements, + OneElement(x) => OneElement(x.0), + MinMax(x, y) => MinMax(x.0, y.0), + } + } + + /// If the iterator yields exactly one element, that element will be returned, otherwise + /// an error will be returned containing an iterator that has the same output as the input + /// iterator. + /// + /// This provides an additional layer of validation over just calling `Iterator::next()`. + /// If your assumption that there should only be one element yielded is false this provides + /// the opportunity to detect and handle that, preventing errors at a distance. 
+ /// + /// # Examples + /// ``` + /// use itertools::Itertools; + /// + /// assert_eq!((0..10).filter(|&x| x == 2).exactly_one().unwrap(), 2); + /// assert!((0..10).filter(|&x| x > 1 && x < 4).exactly_one().unwrap_err().eq(2..4)); + /// assert!((0..10).filter(|&x| x > 1 && x < 5).exactly_one().unwrap_err().eq(2..5)); + /// assert!((0..10).filter(|&_| false).exactly_one().unwrap_err().eq(0..0)); + /// ``` + fn exactly_one(mut self) -> Result> + where + Self: Sized, + { + match self.next() { + Some(first) => { + match self.next() { + Some(second) => { + Err(ExactlyOneError::new(Some(Either::Left([first, second])), self)) + } + None => { + Ok(first) + } + } + } + None => Err(ExactlyOneError::new(None, self)), + } + } + + /// If the iterator yields no elements, Ok(None) will be returned. If the iterator yields + /// exactly one element, that element will be returned, otherwise an error will be returned + /// containing an iterator that has the same output as the input iterator. + /// + /// This provides an additional layer of validation over just calling `Iterator::next()`. + /// If your assumption that there should be at most one element yielded is false this provides + /// the opportunity to detect and handle that, preventing errors at a distance. + /// + /// # Examples + /// ``` + /// use itertools::Itertools; + /// + /// assert_eq!((0..10).filter(|&x| x == 2).at_most_one().unwrap(), Some(2)); + /// assert!((0..10).filter(|&x| x > 1 && x < 4).at_most_one().unwrap_err().eq(2..4)); + /// assert!((0..10).filter(|&x| x > 1 && x < 5).at_most_one().unwrap_err().eq(2..5)); + /// assert_eq!((0..10).filter(|&_| false).at_most_one().unwrap(), None); + /// ``` + fn at_most_one(mut self) -> Result, ExactlyOneError> + where + Self: Sized, + { + match self.next() { + Some(first) => { + match self.next() { + Some(second) => { + Err(ExactlyOneError::new(Some(Either::Left([first, second])), self)) + } + None => { + Ok(Some(first)) + } + } + } + None => Ok(None), + } + } + + /// An iterator adaptor that allows the user to peek at multiple `.next()` + /// values without advancing the base iterator. + /// + /// # Examples + /// ``` + /// use itertools::Itertools; + /// + /// let mut iter = (0..10).multipeek(); + /// assert_eq!(iter.peek(), Some(&0)); + /// assert_eq!(iter.peek(), Some(&1)); + /// assert_eq!(iter.peek(), Some(&2)); + /// assert_eq!(iter.next(), Some(0)); + /// assert_eq!(iter.peek(), Some(&1)); + /// ``` + #[cfg(feature = "use_alloc")] + fn multipeek(self) -> MultiPeek + where + Self: Sized, + { + multipeek_impl::multipeek(self) + } + + /// Collect the items in this iterator and return a `HashMap` which + /// contains each item that appears in the iterator and the number + /// of times it appears. + /// + /// # Examples + /// ``` + /// # use itertools::Itertools; + /// let counts = [1, 1, 1, 3, 3, 5].into_iter().counts(); + /// assert_eq!(counts[&1], 3); + /// assert_eq!(counts[&3], 2); + /// assert_eq!(counts[&5], 1); + /// assert_eq!(counts.get(&0), None); + /// ``` + #[cfg(feature = "use_std")] + fn counts(self) -> HashMap + where + Self: Sized, + Self::Item: Eq + Hash, + { + let mut counts = HashMap::new(); + self.for_each(|item| *counts.entry(item).or_default() += 1); + counts + } + + /// Collect the items in this iterator and return a `HashMap` which + /// contains each item that appears in the iterator and the number + /// of times it appears, + /// determining identity using a keying function. 
+ /// + /// ``` + /// # use itertools::Itertools; + /// struct Character { + /// first_name: &'static str, + /// last_name: &'static str, + /// } + /// + /// let characters = + /// vec![ + /// Character { first_name: "Amy", last_name: "Pond" }, + /// Character { first_name: "Amy", last_name: "Wong" }, + /// Character { first_name: "Amy", last_name: "Santiago" }, + /// Character { first_name: "James", last_name: "Bond" }, + /// Character { first_name: "James", last_name: "Sullivan" }, + /// Character { first_name: "James", last_name: "Norington" }, + /// Character { first_name: "James", last_name: "Kirk" }, + /// ]; + /// + /// let first_name_frequency = + /// characters + /// .into_iter() + /// .counts_by(|c| c.first_name); + /// + /// assert_eq!(first_name_frequency["Amy"], 3); + /// assert_eq!(first_name_frequency["James"], 4); + /// assert_eq!(first_name_frequency.contains_key("Asha"), false); + /// ``` + #[cfg(feature = "use_std")] + fn counts_by(self, f: F) -> HashMap + where + Self: Sized, + K: Eq + Hash, + F: FnMut(Self::Item) -> K, + { + self.map(f).counts() + } + + /// Converts an iterator of tuples into a tuple of containers. + /// + /// `unzip()` consumes an entire iterator of n-ary tuples, producing `n` collections, one for each + /// column. + /// + /// This function is, in some sense, the opposite of [`multizip`]. + /// + /// ``` + /// use itertools::Itertools; + /// + /// let inputs = vec![(1, 2, 3), (4, 5, 6), (7, 8, 9)]; + /// + /// let (a, b, c): (Vec<_>, Vec<_>, Vec<_>) = inputs + /// .into_iter() + /// .multiunzip(); + /// + /// assert_eq!(a, vec![1, 4, 7]); + /// assert_eq!(b, vec![2, 5, 8]); + /// assert_eq!(c, vec![3, 6, 9]); + /// ``` + fn multiunzip(self) -> FromI + where + Self: Sized + MultiUnzip, + { + MultiUnzip::multiunzip(self) + } +} + +impl Itertools for T where T: Iterator { } + +/// Return `true` if both iterables produce equal sequences +/// (elements pairwise equal and sequences of the same length), +/// `false` otherwise. +/// +/// [`IntoIterator`] enabled version of [`Iterator::eq`]. +/// +/// ``` +/// assert!(itertools::equal(vec![1, 2, 3], 1..4)); +/// assert!(!itertools::equal(&[0, 0], &[0, 0, 0])); +/// ``` +pub fn equal(a: I, b: J) -> bool + where I: IntoIterator, + J: IntoIterator, + I::Item: PartialEq +{ + a.into_iter().eq(b) +} + +/// Assert that two iterables produce equal sequences, with the same +/// semantics as [`equal(a, b)`](equal). +/// +/// **Panics** on assertion failure with a message that shows the +/// two iteration elements. +/// +/// ```ignore +/// assert_equal("exceed".split('c'), "excess".split('c')); +/// // ^PANIC: panicked at 'Failed assertion Some("eed") == Some("ess") for iteration 1', +/// ``` +pub fn assert_equal(a: I, b: J) + where I: IntoIterator, + J: IntoIterator, + I::Item: fmt::Debug + PartialEq, + J::Item: fmt::Debug, +{ + let mut ia = a.into_iter(); + let mut ib = b.into_iter(); + let mut i = 0; + loop { + match (ia.next(), ib.next()) { + (None, None) => return, + (a, b) => { + let equal = match (&a, &b) { + (&Some(ref a), &Some(ref b)) => a == b, + _ => false, + }; + assert!(equal, "Failed assertion {a:?} == {b:?} for iteration {i}", + i=i, a=a, b=b); + i += 1; + } + } + } +} + +/// Partition a sequence using predicate `pred` so that elements +/// that map to `true` are placed before elements which map to `false`. +/// +/// The order within the partitions is arbitrary. +/// +/// Return the index of the split point. 
+/// +/// ``` +/// use itertools::partition; +/// +/// # // use repeated numbers to not promise any ordering +/// let mut data = [7, 1, 1, 7, 1, 1, 7]; +/// let split_index = partition(&mut data, |elt| *elt >= 3); +/// +/// assert_eq!(data, [7, 7, 7, 1, 1, 1, 1]); +/// assert_eq!(split_index, 3); +/// ``` +pub fn partition<'a, A: 'a, I, F>(iter: I, mut pred: F) -> usize + where I: IntoIterator, + I::IntoIter: DoubleEndedIterator, + F: FnMut(&A) -> bool +{ + let mut split_index = 0; + let mut iter = iter.into_iter(); + 'main: while let Some(front) = iter.next() { + if !pred(front) { + loop { + match iter.next_back() { + Some(back) => if pred(back) { + std::mem::swap(front, back); + break; + }, + None => break 'main, + } + } + } + split_index += 1; + } + split_index +} + +/// An enum used for controlling the execution of `fold_while`. +/// +/// See [`.fold_while()`](Itertools::fold_while) for more information. +#[derive(Copy, Clone, Debug, Eq, PartialEq)] +pub enum FoldWhile { + /// Continue folding with this value + Continue(T), + /// Fold is complete and will return this value + Done(T), +} + +impl FoldWhile { + /// Return the value in the continue or done. + pub fn into_inner(self) -> T { + match self { + FoldWhile::Continue(x) | FoldWhile::Done(x) => x, + } + } + + /// Return true if `self` is `Done`, false if it is `Continue`. + pub fn is_done(&self) -> bool { + match *self { + FoldWhile::Continue(_) => false, + FoldWhile::Done(_) => true, + } + } +} diff --git a/rust/hw/char/pl011/vendor/itertools/src/merge_join.rs b/rust/hw/char/pl011/vendor/itertools/src/merge_join.rs new file mode 100644 index 0000000000..84f7d03338 --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/src/merge_join.rs @@ -0,0 +1,220 @@ +use std::cmp::Ordering; +use std::iter::Fuse; +use std::fmt; + +use either::Either; + +use super::adaptors::{PutBack, put_back}; +use crate::either_or_both::EitherOrBoth; +use crate::size_hint::{self, SizeHint}; +#[cfg(doc)] +use crate::Itertools; + +/// Return an iterator adaptor that merge-joins items from the two base iterators in ascending order. +/// +/// [`IntoIterator`] enabled version of [`Itertools::merge_join_by`]. +pub fn merge_join_by(left: I, right: J, cmp_fn: F) + -> MergeJoinBy + where I: IntoIterator, + J: IntoIterator, + F: FnMut(&I::Item, &J::Item) -> T, + T: OrderingOrBool, +{ + MergeJoinBy { + left: put_back(left.into_iter().fuse()), + right: put_back(right.into_iter().fuse()), + cmp_fn, + } +} + +/// An iterator adaptor that merge-joins items from the two base iterators in ascending order. +/// +/// See [`.merge_join_by()`](crate::Itertools::merge_join_by) for more information. +#[must_use = "iterator adaptors are lazy and do nothing unless consumed"] +pub struct MergeJoinBy { + left: PutBack>, + right: PutBack>, + cmp_fn: F, +} + +pub trait OrderingOrBool { + type MergeResult; + fn left(left: L) -> Self::MergeResult; + fn right(right: R) -> Self::MergeResult; + // "merge" never returns (Some(...), Some(...), ...) so Option> + // is appealing but it is always followed by two put_backs, so we think the compiler is + // smart enough to optimize it. Or we could move put_backs into "merge". 
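The `OrderingOrBool` plumbing above is easiest to see from the caller's side. A minimal sketch of `merge_join_by` with an `Ordering` comparator (the input vectors are arbitrary); equal keys are paired up as `Both`:

```
use itertools::Itertools;
use itertools::EitherOrBoth::{Both, Left, Right};

let a = vec![1, 3, 5];
let b = vec![2, 3, 4];

// Both inputs are sorted; the comparator decides which side advances.
let merged: Vec<_> = a.iter()
    .merge_join_by(b.iter(), |x, y| x.cmp(y))
    .collect();

assert_eq!(merged,
           vec![Left(&1), Right(&2), Both(&3, &3), Right(&4), Left(&5)]);
```

With a `bool` comparator the same adaptor yields plain `Either` values instead, as the second `OrderingOrBool` impl below shows.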
+ fn merge(self, left: L, right: R) -> (Option, Option, Self::MergeResult); + fn size_hint(left: SizeHint, right: SizeHint) -> SizeHint; +} + +impl OrderingOrBool for Ordering { + type MergeResult = EitherOrBoth; + fn left(left: L) -> Self::MergeResult { + EitherOrBoth::Left(left) + } + fn right(right: R) -> Self::MergeResult { + EitherOrBoth::Right(right) + } + fn merge(self, left: L, right: R) -> (Option, Option, Self::MergeResult) { + match self { + Ordering::Equal => (None, None, EitherOrBoth::Both(left, right)), + Ordering::Less => (None, Some(right), EitherOrBoth::Left(left)), + Ordering::Greater => (Some(left), None, EitherOrBoth::Right(right)), + } + } + fn size_hint(left: SizeHint, right: SizeHint) -> SizeHint { + let (a_lower, a_upper) = left; + let (b_lower, b_upper) = right; + let lower = ::std::cmp::max(a_lower, b_lower); + let upper = match (a_upper, b_upper) { + (Some(x), Some(y)) => x.checked_add(y), + _ => None, + }; + (lower, upper) + } +} + +impl OrderingOrBool for bool { + type MergeResult = Either; + fn left(left: L) -> Self::MergeResult { + Either::Left(left) + } + fn right(right: R) -> Self::MergeResult { + Either::Right(right) + } + fn merge(self, left: L, right: R) -> (Option, Option, Self::MergeResult) { + if self { + (None, Some(right), Either::Left(left)) + } else { + (Some(left), None, Either::Right(right)) + } + } + fn size_hint(left: SizeHint, right: SizeHint) -> SizeHint { + // Not ExactSizeIterator because size may be larger than usize + size_hint::add(left, right) + } +} + +impl Clone for MergeJoinBy + where I: Iterator, + J: Iterator, + PutBack>: Clone, + PutBack>: Clone, + F: Clone, +{ + clone_fields!(left, right, cmp_fn); +} + +impl fmt::Debug for MergeJoinBy + where I: Iterator + fmt::Debug, + I::Item: fmt::Debug, + J: Iterator + fmt::Debug, + J::Item: fmt::Debug, +{ + debug_fmt_fields!(MergeJoinBy, left, right); +} + +impl Iterator for MergeJoinBy + where I: Iterator, + J: Iterator, + F: FnMut(&I::Item, &J::Item) -> T, + T: OrderingOrBool, +{ + type Item = T::MergeResult; + + fn next(&mut self) -> Option { + match (self.left.next(), self.right.next()) { + (None, None) => None, + (Some(left), None) => Some(T::left(left)), + (None, Some(right)) => Some(T::right(right)), + (Some(left), Some(right)) => { + let (left, right, next) = (self.cmp_fn)(&left, &right).merge(left, right); + if let Some(left) = left { + self.left.put_back(left); + } + if let Some(right) = right { + self.right.put_back(right); + } + Some(next) + } + } + } + + fn size_hint(&self) -> SizeHint { + T::size_hint(self.left.size_hint(), self.right.size_hint()) + } + + fn count(mut self) -> usize { + let mut count = 0; + loop { + match (self.left.next(), self.right.next()) { + (None, None) => break count, + (Some(_left), None) => break count + 1 + self.left.into_parts().1.count(), + (None, Some(_right)) => break count + 1 + self.right.into_parts().1.count(), + (Some(left), Some(right)) => { + count += 1; + let (left, right, _) = (self.cmp_fn)(&left, &right).merge(left, right); + if let Some(left) = left { + self.left.put_back(left); + } + if let Some(right) = right { + self.right.put_back(right); + } + } + } + } + } + + fn last(mut self) -> Option { + let mut previous_element = None; + loop { + match (self.left.next(), self.right.next()) { + (None, None) => break previous_element, + (Some(left), None) => { + break Some(T::left( + self.left.into_parts().1.last().unwrap_or(left), + )) + } + (None, Some(right)) => { + break Some(T::right( + self.right.into_parts().1.last().unwrap_or(right), + 
)) + } + (Some(left), Some(right)) => { + let (left, right, elem) = (self.cmp_fn)(&left, &right).merge(left, right); + if let Some(left) = left { + self.left.put_back(left); + } + if let Some(right) = right { + self.right.put_back(right); + } + previous_element = Some(elem); + } + } + } + } + + fn nth(&mut self, mut n: usize) -> Option { + loop { + if n == 0 { + break self.next(); + } + n -= 1; + match (self.left.next(), self.right.next()) { + (None, None) => break None, + (Some(_left), None) => break self.left.nth(n).map(T::left), + (None, Some(_right)) => break self.right.nth(n).map(T::right), + (Some(left), Some(right)) => { + let (left, right, _) = (self.cmp_fn)(&left, &right).merge(left, right); + if let Some(left) = left { + self.left.put_back(left); + } + if let Some(right) = right { + self.right.put_back(right); + } + } + } + } + } +} diff --git a/rust/hw/char/pl011/vendor/itertools/src/minmax.rs b/rust/hw/char/pl011/vendor/itertools/src/minmax.rs new file mode 100644 index 0000000000..52b2f115dd --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/src/minmax.rs @@ -0,0 +1,115 @@ + +/// `MinMaxResult` is an enum returned by `minmax`. +/// +/// See [`.minmax()`](crate::Itertools::minmax) for more detail. +#[derive(Copy, Clone, PartialEq, Debug)] +pub enum MinMaxResult { + /// Empty iterator + NoElements, + + /// Iterator with one element, so the minimum and maximum are the same + OneElement(T), + + /// More than one element in the iterator, the first element is not larger + /// than the second + MinMax(T, T) +} + +impl MinMaxResult { + /// `into_option` creates an `Option` of type `(T, T)`. The returned `Option` + /// has variant `None` if and only if the `MinMaxResult` has variant + /// `NoElements`. Otherwise `Some((x, y))` is returned where `x <= y`. + /// If the `MinMaxResult` has variant `OneElement(x)`, performing this + /// operation will make one clone of `x`. + /// + /// # Examples + /// + /// ``` + /// use itertools::MinMaxResult::{self, NoElements, OneElement, MinMax}; + /// + /// let r: MinMaxResult = NoElements; + /// assert_eq!(r.into_option(), None); + /// + /// let r = OneElement(1); + /// assert_eq!(r.into_option(), Some((1, 1))); + /// + /// let r = MinMax(1, 2); + /// assert_eq!(r.into_option(), Some((1, 2))); + /// ``` + pub fn into_option(self) -> Option<(T,T)> { + match self { + MinMaxResult::NoElements => None, + MinMaxResult::OneElement(x) => Some((x.clone(), x)), + MinMaxResult::MinMax(x, y) => Some((x, y)) + } + } +} + +/// Implementation guts for `minmax` and `minmax_by_key`. +pub fn minmax_impl(mut it: I, mut key_for: F, + mut lt: L) -> MinMaxResult + where I: Iterator, + F: FnMut(&I::Item) -> K, + L: FnMut(&I::Item, &I::Item, &K, &K) -> bool, +{ + let (mut min, mut max, mut min_key, mut max_key) = match it.next() { + None => return MinMaxResult::NoElements, + Some(x) => { + match it.next() { + None => return MinMaxResult::OneElement(x), + Some(y) => { + let xk = key_for(&x); + let yk = key_for(&y); + if !lt(&y, &x, &yk, &xk) {(x, y, xk, yk)} else {(y, x, yk, xk)} + } + } + } + }; + + loop { + // `first` and `second` are the two next elements we want to look + // at. We first compare `first` and `second` (#1). The smaller one + // is then compared to current minimum (#2). The larger one is + // compared to current maximum (#3). This way we do 3 comparisons + // for 2 elements. 
+ let first = match it.next() { + None => break, + Some(x) => x + }; + let second = match it.next() { + None => { + let first_key = key_for(&first); + if lt(&first, &min, &first_key, &min_key) { + min = first; + } else if !lt(&first, &max, &first_key, &max_key) { + max = first; + } + break; + } + Some(x) => x + }; + let first_key = key_for(&first); + let second_key = key_for(&second); + if !lt(&second, &first, &second_key, &first_key) { + if lt(&first, &min, &first_key, &min_key) { + min = first; + min_key = first_key; + } + if !lt(&second, &max, &second_key, &max_key) { + max = second; + max_key = second_key; + } + } else { + if lt(&second, &min, &second_key, &min_key) { + min = second; + min_key = second_key; + } + if !lt(&first, &max, &first_key, &max_key) { + max = first; + max_key = first_key; + } + } + } + + MinMaxResult::MinMax(min, max) +} diff --git a/rust/hw/char/pl011/vendor/itertools/src/multipeek_impl.rs b/rust/hw/char/pl011/vendor/itertools/src/multipeek_impl.rs new file mode 100644 index 0000000000..8b49c695eb --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/src/multipeek_impl.rs @@ -0,0 +1,101 @@ +use std::iter::Fuse; +use alloc::collections::VecDeque; +use crate::size_hint; +use crate::PeekingNext; +#[cfg(doc)] +use crate::Itertools; + +/// See [`multipeek()`] for more information. +#[derive(Clone, Debug)] +pub struct MultiPeek + where I: Iterator +{ + iter: Fuse, + buf: VecDeque, + index: usize, +} + +/// An iterator adaptor that allows the user to peek at multiple `.next()` +/// values without advancing the base iterator. +/// +/// [`IntoIterator`] enabled version of [`Itertools::multipeek`]. +pub fn multipeek(iterable: I) -> MultiPeek + where I: IntoIterator +{ + MultiPeek { + iter: iterable.into_iter().fuse(), + buf: VecDeque::new(), + index: 0, + } +} + +impl MultiPeek + where I: Iterator +{ + /// Reset the peeking “cursor” + pub fn reset_peek(&mut self) { + self.index = 0; + } +} + +impl MultiPeek { + /// Works exactly like `.next()` with the only difference that it doesn't + /// advance itself. `.peek()` can be called multiple times, to peek + /// further ahead. + /// When `.next()` is called, reset the peeking “cursor”. 
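`reset_peek` above has no example of its own; a short sketch of how the peeking cursor moves and resets (the values are arbitrary):

```
use itertools::Itertools;

let mut it = (0..5).multipeek();
assert_eq!(it.peek(), Some(&0));
assert_eq!(it.peek(), Some(&1)); // the cursor advanced past the first element
it.reset_peek();
assert_eq!(it.peek(), Some(&0)); // back to the front of the buffer
assert_eq!(it.next(), Some(0));  // next() consumes and also resets the cursor
```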
+ pub fn peek(&mut self) -> Option<&I::Item> { + let ret = if self.index < self.buf.len() { + Some(&self.buf[self.index]) + } else { + match self.iter.next() { + Some(x) => { + self.buf.push_back(x); + Some(&self.buf[self.index]) + } + None => return None, + } + }; + + self.index += 1; + ret + } +} + +impl PeekingNext for MultiPeek + where I: Iterator, +{ + fn peeking_next(&mut self, accept: F) -> Option + where F: FnOnce(&Self::Item) -> bool + { + if self.buf.is_empty() { + if let Some(r) = self.peek() { + if !accept(r) { return None } + } + } else if let Some(r) = self.buf.get(0) { + if !accept(r) { return None } + } + self.next() + } +} + +impl Iterator for MultiPeek + where I: Iterator +{ + type Item = I::Item; + + fn next(&mut self) -> Option { + self.index = 0; + self.buf.pop_front().or_else(|| self.iter.next()) + } + + fn size_hint(&self) -> (usize, Option) { + size_hint::add_scalar(self.iter.size_hint(), self.buf.len()) + } +} + +// Same size +impl ExactSizeIterator for MultiPeek + where I: ExactSizeIterator +{} + + diff --git a/rust/hw/char/pl011/vendor/itertools/src/pad_tail.rs b/rust/hw/char/pl011/vendor/itertools/src/pad_tail.rs new file mode 100644 index 0000000000..248a432436 --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/src/pad_tail.rs @@ -0,0 +1,96 @@ +use std::iter::{Fuse, FusedIterator}; +use crate::size_hint; + +/// An iterator adaptor that pads a sequence to a minimum length by filling +/// missing elements using a closure. +/// +/// Iterator element type is `I::Item`. +/// +/// See [`.pad_using()`](crate::Itertools::pad_using) for more information. +#[derive(Clone)] +#[must_use = "iterator adaptors are lazy and do nothing unless consumed"] +pub struct PadUsing { + iter: Fuse, + min: usize, + pos: usize, + filler: F, +} + +impl std::fmt::Debug for PadUsing +where + I: std::fmt::Debug, +{ + debug_fmt_fields!(PadUsing, iter, min, pos); +} + +/// Create a new `PadUsing` iterator. 
+pub fn pad_using(iter: I, min: usize, filler: F) -> PadUsing + where I: Iterator, + F: FnMut(usize) -> I::Item +{ + PadUsing { + iter: iter.fuse(), + min, + pos: 0, + filler, + } +} + +impl Iterator for PadUsing + where I: Iterator, + F: FnMut(usize) -> I::Item +{ + type Item = I::Item; + + #[inline] + fn next(&mut self) -> Option { + match self.iter.next() { + None => { + if self.pos < self.min { + let e = Some((self.filler)(self.pos)); + self.pos += 1; + e + } else { + None + } + }, + e => { + self.pos += 1; + e + } + } + } + + fn size_hint(&self) -> (usize, Option) { + let tail = self.min.saturating_sub(self.pos); + size_hint::max(self.iter.size_hint(), (tail, Some(tail))) + } +} + +impl DoubleEndedIterator for PadUsing + where I: DoubleEndedIterator + ExactSizeIterator, + F: FnMut(usize) -> I::Item +{ + fn next_back(&mut self) -> Option { + if self.min == 0 { + self.iter.next_back() + } else if self.iter.len() >= self.min { + self.min -= 1; + self.iter.next_back() + } else { + self.min -= 1; + Some((self.filler)(self.min)) + } + } +} + +impl ExactSizeIterator for PadUsing + where I: ExactSizeIterator, + F: FnMut(usize) -> I::Item +{} + + +impl FusedIterator for PadUsing + where I: FusedIterator, + F: FnMut(usize) -> I::Item +{} diff --git a/rust/hw/char/pl011/vendor/itertools/src/peek_nth.rs b/rust/hw/char/pl011/vendor/itertools/src/peek_nth.rs new file mode 100644 index 0000000000..bcca45838e --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/src/peek_nth.rs @@ -0,0 +1,102 @@ +use crate::size_hint; +use crate::PeekingNext; +use alloc::collections::VecDeque; +use std::iter::Fuse; + +/// See [`peek_nth()`] for more information. +#[derive(Clone, Debug)] +pub struct PeekNth +where + I: Iterator, +{ + iter: Fuse, + buf: VecDeque, +} + +/// A drop-in replacement for [`std::iter::Peekable`] which adds a `peek_nth` +/// method allowing the user to `peek` at a value several iterations forward +/// without advancing the base iterator. +/// +/// This differs from `multipeek` in that subsequent calls to `peek` or +/// `peek_nth` will always return the same value until `next` is called +/// (making `reset_peek` unnecessary). +pub fn peek_nth(iterable: I) -> PeekNth +where + I: IntoIterator, +{ + PeekNth { + iter: iterable.into_iter().fuse(), + buf: VecDeque::new(), + } +} + +impl PeekNth +where + I: Iterator, +{ + /// Works exactly like the `peek` method in `std::iter::Peekable` + pub fn peek(&mut self) -> Option<&I::Item> { + self.peek_nth(0) + } + + /// Returns a reference to the `nth` value without advancing the iterator. 
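`PadUsing` is easiest to picture with a concrete run. A sketch using the `Itertools::pad_using` wrapper (assumed here; only the adaptor itself is vendored in this hunk), with arbitrary numbers:

```
use itertools::Itertools;

// Pad a three-element sequence out to length five; the filler closure
// receives the position being filled.
let padded: Vec<usize> = (0..3).pad_using(5, |pos| pos * 100).collect();
assert_eq!(padded, vec![0, 1, 2, 300, 400]);
```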
+ /// + /// # Examples + /// + /// Basic usage: + /// + /// ```rust + /// use itertools::peek_nth; + /// + /// let xs = vec![1,2,3]; + /// let mut iter = peek_nth(xs.iter()); + /// + /// assert_eq!(iter.peek_nth(0), Some(&&1)); + /// assert_eq!(iter.next(), Some(&1)); + /// + /// // The iterator does not advance even if we call `peek_nth` multiple times + /// assert_eq!(iter.peek_nth(0), Some(&&2)); + /// assert_eq!(iter.peek_nth(1), Some(&&3)); + /// assert_eq!(iter.next(), Some(&2)); + /// + /// // Calling `peek_nth` past the end of the iterator will return `None` + /// assert_eq!(iter.peek_nth(1), None); + /// ``` + pub fn peek_nth(&mut self, n: usize) -> Option<&I::Item> { + let unbuffered_items = (n + 1).saturating_sub(self.buf.len()); + + self.buf.extend(self.iter.by_ref().take(unbuffered_items)); + + self.buf.get(n) + } +} + +impl Iterator for PeekNth +where + I: Iterator, +{ + type Item = I::Item; + + fn next(&mut self) -> Option { + self.buf.pop_front().or_else(|| self.iter.next()) + } + + fn size_hint(&self) -> (usize, Option) { + size_hint::add_scalar(self.iter.size_hint(), self.buf.len()) + } +} + +impl ExactSizeIterator for PeekNth where I: ExactSizeIterator {} + +impl PeekingNext for PeekNth +where + I: Iterator, +{ + fn peeking_next(&mut self, accept: F) -> Option + where + F: FnOnce(&Self::Item) -> bool, + { + self.peek().filter(|item| accept(item))?; + self.next() + } +} diff --git a/rust/hw/char/pl011/vendor/itertools/src/peeking_take_while.rs b/rust/hw/char/pl011/vendor/itertools/src/peeking_take_while.rs new file mode 100644 index 0000000000..3a37228122 --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/src/peeking_take_while.rs @@ -0,0 +1,177 @@ +use std::iter::Peekable; +use crate::PutBack; +#[cfg(feature = "use_alloc")] +use crate::PutBackN; + +/// An iterator that allows peeking at an element before deciding to accept it. +/// +/// See [`.peeking_take_while()`](crate::Itertools::peeking_take_while) +/// for more information. +/// +/// This is implemented by peeking adaptors like peekable and put back, +/// but also by a few iterators that can be peeked natively, like the slice’s +/// by reference iterator (`std::slice::Iter`). +pub trait PeekingNext : Iterator { + /// Pass a reference to the next iterator element to the closure `accept`; + /// if `accept` returns true, return it as the next element, + /// else None. 
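The difference between `peeking_next` and a plain `next` is that a rejected element is not lost. A sketch with a slice iterator, which gets its `PeekingNext` impl from the clone-based macro later in this file:

```
use itertools::Itertools;

let data = [1, 2, 3, 10, 4];
let mut it = data.iter();

// Take elements while they are small; the first rejected element (10)
// is only peeked at, so the underlying iterator still yields it.
let small: Vec<i32> = it.peeking_take_while(|&&x| x < 5).copied().collect();
assert_eq!(small, vec![1, 2, 3]);
assert_eq!(it.next(), Some(&10));
```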
+ fn peeking_next(&mut self, accept: F) -> Option + where Self: Sized, + F: FnOnce(&Self::Item) -> bool; +} + +impl<'a, I> PeekingNext for &'a mut I + where I: PeekingNext, +{ + fn peeking_next(&mut self, accept: F) -> Option + where F: FnOnce(&Self::Item) -> bool + { + (*self).peeking_next(accept) + } +} + +impl PeekingNext for Peekable + where I: Iterator, +{ + fn peeking_next(&mut self, accept: F) -> Option + where F: FnOnce(&Self::Item) -> bool + { + if let Some(r) = self.peek() { + if !accept(r) { + return None; + } + } + self.next() + } +} + +impl PeekingNext for PutBack + where I: Iterator, +{ + fn peeking_next(&mut self, accept: F) -> Option + where F: FnOnce(&Self::Item) -> bool + { + if let Some(r) = self.next() { + if !accept(&r) { + self.put_back(r); + return None; + } + Some(r) + } else { + None + } + } +} + +#[cfg(feature = "use_alloc")] +impl PeekingNext for PutBackN + where I: Iterator, +{ + fn peeking_next(&mut self, accept: F) -> Option + where F: FnOnce(&Self::Item) -> bool + { + if let Some(r) = self.next() { + if !accept(&r) { + self.put_back(r); + return None; + } + Some(r) + } else { + None + } + } +} + +/// An iterator adaptor that takes items while a closure returns `true`. +/// +/// See [`.peeking_take_while()`](crate::Itertools::peeking_take_while) +/// for more information. +#[must_use = "iterator adaptors are lazy and do nothing unless consumed"] +pub struct PeekingTakeWhile<'a, I: 'a, F> + where I: Iterator, +{ + iter: &'a mut I, + f: F, +} + +impl<'a, I: 'a, F> std::fmt::Debug for PeekingTakeWhile<'a, I, F> +where + I: Iterator + std::fmt::Debug, +{ + debug_fmt_fields!(PeekingTakeWhile, iter); +} + +/// Create a `PeekingTakeWhile` +pub fn peeking_take_while(iter: &mut I, f: F) -> PeekingTakeWhile + where I: Iterator, +{ + PeekingTakeWhile { + iter, + f, + } +} + +impl<'a, I, F> Iterator for PeekingTakeWhile<'a, I, F> + where I: PeekingNext, + F: FnMut(&I::Item) -> bool, + +{ + type Item = I::Item; + fn next(&mut self) -> Option { + self.iter.peeking_next(&mut self.f) + } + + fn size_hint(&self) -> (usize, Option) { + (0, self.iter.size_hint().1) + } +} + +impl<'a, I, F> PeekingNext for PeekingTakeWhile<'a, I, F> + where I: PeekingNext, + F: FnMut(&I::Item) -> bool, +{ + fn peeking_next(&mut self, g: G) -> Option + where G: FnOnce(&Self::Item) -> bool, + { + let f = &mut self.f; + self.iter.peeking_next(|r| f(r) && g(r)) + } +} + +// Some iterators are so lightweight we can simply clone them to save their +// state and use that for peeking. +macro_rules! peeking_next_by_clone { + ([$($typarm:tt)*] $type_:ty) => { + impl<$($typarm)*> PeekingNext for $type_ { + fn peeking_next(&mut self, accept: F) -> Option + where F: FnOnce(&Self::Item) -> bool + { + let saved_state = self.clone(); + if let Some(r) = self.next() { + if !accept(&r) { + *self = saved_state; + } else { + return Some(r) + } + } + None + } + } + } +} + +peeking_next_by_clone! { ['a, T] ::std::slice::Iter<'a, T> } +peeking_next_by_clone! { ['a] ::std::str::Chars<'a> } +peeking_next_by_clone! { ['a] ::std::str::CharIndices<'a> } +peeking_next_by_clone! { ['a] ::std::str::Bytes<'a> } +peeking_next_by_clone! { ['a, T] ::std::option::Iter<'a, T> } +peeking_next_by_clone! { ['a, T] ::std::result::Iter<'a, T> } +peeking_next_by_clone! { [T] ::std::iter::Empty } +#[cfg(feature = "use_alloc")] +peeking_next_by_clone! { ['a, T] alloc::collections::linked_list::Iter<'a, T> } +#[cfg(feature = "use_alloc")] +peeking_next_by_clone! 
{ ['a, T] alloc::collections::vec_deque::Iter<'a, T> } + +// cloning a Rev has no extra overhead; peekable and put backs are never DEI. +peeking_next_by_clone! { [I: Clone + PeekingNext + DoubleEndedIterator] + ::std::iter::Rev } diff --git a/rust/hw/char/pl011/vendor/itertools/src/permutations.rs b/rust/hw/char/pl011/vendor/itertools/src/permutations.rs new file mode 100644 index 0000000000..d03b852626 --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/src/permutations.rs @@ -0,0 +1,277 @@ +use alloc::vec::Vec; +use std::fmt; +use std::iter::once; + +use super::lazy_buffer::LazyBuffer; + +/// An iterator adaptor that iterates through all the `k`-permutations of the +/// elements from an iterator. +/// +/// See [`.permutations()`](crate::Itertools::permutations) for +/// more information. +#[must_use = "iterator adaptors are lazy and do nothing unless consumed"] +pub struct Permutations { + vals: LazyBuffer, + state: PermutationState, +} + +impl Clone for Permutations + where I: Clone + Iterator, + I::Item: Clone, +{ + clone_fields!(vals, state); +} + +#[derive(Clone, Debug)] +enum PermutationState { + StartUnknownLen { + k: usize, + }, + OngoingUnknownLen { + k: usize, + min_n: usize, + }, + Complete(CompleteState), + Empty, +} + +#[derive(Clone, Debug)] +enum CompleteState { + Start { + n: usize, + k: usize, + }, + Ongoing { + indices: Vec, + cycles: Vec, + } +} + +enum CompleteStateRemaining { + Known(usize), + Overflow, +} + +impl fmt::Debug for Permutations + where I: Iterator + fmt::Debug, + I::Item: fmt::Debug, +{ + debug_fmt_fields!(Permutations, vals, state); +} + +pub fn permutations(iter: I, k: usize) -> Permutations { + let mut vals = LazyBuffer::new(iter); + + if k == 0 { + // Special case, yields single empty vec; `n` is irrelevant + let state = PermutationState::Complete(CompleteState::Start { n: 0, k: 0 }); + + return Permutations { + vals, + state + }; + } + + let mut enough_vals = true; + + while vals.len() < k { + if !vals.get_next() { + enough_vals = false; + break; + } + } + + let state = if enough_vals { + PermutationState::StartUnknownLen { k } + } else { + PermutationState::Empty + }; + + Permutations { + vals, + state + } +} + +impl Iterator for Permutations +where + I: Iterator, + I::Item: Clone +{ + type Item = Vec; + + fn next(&mut self) -> Option { + self.advance(); + + let &mut Permutations { ref vals, ref state } = self; + + match *state { + PermutationState::StartUnknownLen { .. } => panic!("unexpected iterator state"), + PermutationState::OngoingUnknownLen { k, min_n } => { + let latest_idx = min_n - 1; + let indices = (0..(k - 1)).chain(once(latest_idx)); + + Some(indices.map(|i| vals[i].clone()).collect()) + } + PermutationState::Complete(CompleteState::Ongoing { ref indices, ref cycles }) => { + let k = cycles.len(); + Some(indices[0..k].iter().map(|&i| vals[i].clone()).collect()) + }, + PermutationState::Complete(CompleteState::Start { .. 
}) | PermutationState::Empty => None + } + } + + fn count(self) -> usize { + fn from_complete(complete_state: CompleteState) -> usize { + match complete_state.remaining() { + CompleteStateRemaining::Known(count) => count, + CompleteStateRemaining::Overflow => { + panic!("Iterator count greater than usize::MAX"); + } + } + } + + let Permutations { vals, state } = self; + match state { + PermutationState::StartUnknownLen { k } => { + let n = vals.len() + vals.it.count(); + let complete_state = CompleteState::Start { n, k }; + + from_complete(complete_state) + } + PermutationState::OngoingUnknownLen { k, min_n } => { + let prev_iteration_count = min_n - k + 1; + let n = vals.len() + vals.it.count(); + let complete_state = CompleteState::Start { n, k }; + + from_complete(complete_state) - prev_iteration_count + }, + PermutationState::Complete(state) => from_complete(state), + PermutationState::Empty => 0 + } + } + + fn size_hint(&self) -> (usize, Option) { + match self.state { + PermutationState::StartUnknownLen { .. } | + PermutationState::OngoingUnknownLen { .. } => (0, None), // TODO can we improve this lower bound? + PermutationState::Complete(ref state) => match state.remaining() { + CompleteStateRemaining::Known(count) => (count, Some(count)), + CompleteStateRemaining::Overflow => (::std::usize::MAX, None) + } + PermutationState::Empty => (0, Some(0)) + } + } +} + +impl Permutations +where + I: Iterator, + I::Item: Clone +{ + fn advance(&mut self) { + let &mut Permutations { ref mut vals, ref mut state } = self; + + *state = match *state { + PermutationState::StartUnknownLen { k } => { + PermutationState::OngoingUnknownLen { k, min_n: k } + } + PermutationState::OngoingUnknownLen { k, min_n } => { + if vals.get_next() { + PermutationState::OngoingUnknownLen { k, min_n: min_n + 1 } + } else { + let n = min_n; + let prev_iteration_count = n - k + 1; + let mut complete_state = CompleteState::Start { n, k }; + + // Advance the complete-state iterator to the correct point + for _ in 0..(prev_iteration_count + 1) { + complete_state.advance(); + } + + PermutationState::Complete(complete_state) + } + } + PermutationState::Complete(ref mut state) => { + state.advance(); + + return; + } + PermutationState::Empty => { return; } + }; + } +} + +impl CompleteState { + fn advance(&mut self) { + *self = match *self { + CompleteState::Start { n, k } => { + let indices = (0..n).collect(); + let cycles = ((n - k)..n).rev().collect(); + + CompleteState::Ongoing { + cycles, + indices + } + }, + CompleteState::Ongoing { ref mut indices, ref mut cycles } => { + let n = indices.len(); + let k = cycles.len(); + + for i in (0..k).rev() { + if cycles[i] == 0 { + cycles[i] = n - i - 1; + + let to_push = indices.remove(i); + indices.push(to_push); + } else { + let swap_index = n - cycles[i]; + indices.swap(i, swap_index); + + cycles[i] -= 1; + return; + } + } + + CompleteState::Start { n, k } + } + } + } + + fn remaining(&self) -> CompleteStateRemaining { + use self::CompleteStateRemaining::{Known, Overflow}; + + match *self { + CompleteState::Start { n, k } => { + if n < k { + return Known(0); + } + + let count: Option = (n - k + 1..n + 1).fold(Some(1), |acc, i| { + acc.and_then(|acc| acc.checked_mul(i)) + }); + + match count { + Some(count) => Known(count), + None => Overflow + } + } + CompleteState::Ongoing { ref indices, ref cycles } => { + let mut count: usize = 0; + + for (i, &c) in cycles.iter().enumerate() { + let radix = indices.len() - i; + let next_count = count.checked_mul(radix) + .and_then(|count| 
count.checked_add(c)); + + count = match next_count { + Some(count) => count, + None => { return Overflow; } + }; + } + + Known(count) + } + } + } +} diff --git a/rust/hw/char/pl011/vendor/itertools/src/powerset.rs b/rust/hw/char/pl011/vendor/itertools/src/powerset.rs new file mode 100644 index 0000000000..4d7685b12a --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/src/powerset.rs @@ -0,0 +1,90 @@ +use std::fmt; +use std::iter::FusedIterator; +use std::usize; +use alloc::vec::Vec; + +use super::combinations::{Combinations, combinations}; +use super::size_hint; + +/// An iterator to iterate through the powerset of the elements from an iterator. +/// +/// See [`.powerset()`](crate::Itertools::powerset) for more +/// information. +#[must_use = "iterator adaptors are lazy and do nothing unless consumed"] +pub struct Powerset { + combs: Combinations, + // Iterator `position` (equal to count of yielded elements). + pos: usize, +} + +impl Clone for Powerset + where I: Clone + Iterator, + I::Item: Clone, +{ + clone_fields!(combs, pos); +} + +impl fmt::Debug for Powerset + where I: Iterator + fmt::Debug, + I::Item: fmt::Debug, +{ + debug_fmt_fields!(Powerset, combs, pos); +} + +/// Create a new `Powerset` from a clonable iterator. +pub fn powerset(src: I) -> Powerset + where I: Iterator, + I::Item: Clone, +{ + Powerset { + combs: combinations(src, 0), + pos: 0, + } +} + +impl Iterator for Powerset + where + I: Iterator, + I::Item: Clone, +{ + type Item = Vec; + + fn next(&mut self) -> Option { + if let Some(elt) = self.combs.next() { + self.pos = self.pos.saturating_add(1); + Some(elt) + } else if self.combs.k() < self.combs.n() + || self.combs.k() == 0 + { + self.combs.reset(self.combs.k() + 1); + self.combs.next().map(|elt| { + self.pos = self.pos.saturating_add(1); + elt + }) + } else { + None + } + } + + fn size_hint(&self) -> (usize, Option) { + // Total bounds for source iterator. + let src_total = size_hint::add_scalar(self.combs.src().size_hint(), self.combs.n()); + + // Total bounds for self ( length(powerset(set) == 2 ^ length(set) ) + let self_total = size_hint::pow_scalar_base(2, src_total); + + if self.pos < usize::MAX { + // Subtract count of elements already yielded from total. + size_hint::sub_scalar(self_total, self.pos) + } else { + // Fallback: self.pos is saturated and no longer reliable. + (0, self_total.1) + } + } +} + +impl FusedIterator for Powerset + where + I: Iterator, + I::Item: Clone, +{} diff --git a/rust/hw/char/pl011/vendor/itertools/src/process_results_impl.rs b/rust/hw/char/pl011/vendor/itertools/src/process_results_impl.rs new file mode 100644 index 0000000000..713db45514 --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/src/process_results_impl.rs @@ -0,0 +1,68 @@ +#[cfg(doc)] +use crate::Itertools; + +/// An iterator that produces only the `T` values as long as the +/// inner iterator produces `Ok(T)`. +/// +/// Used by [`process_results`](crate::process_results), see its docs +/// for more information. 
+#[must_use = "iterator adaptors are lazy and do nothing unless consumed"] +#[derive(Debug)] +pub struct ProcessResults<'a, I, E: 'a> { + error: &'a mut Result<(), E>, + iter: I, +} + +impl<'a, I, T, E> Iterator for ProcessResults<'a, I, E> + where I: Iterator> +{ + type Item = T; + + fn next(&mut self) -> Option { + match self.iter.next() { + Some(Ok(x)) => Some(x), + Some(Err(e)) => { + *self.error = Err(e); + None + } + None => None, + } + } + + fn size_hint(&self) -> (usize, Option) { + (0, self.iter.size_hint().1) + } + + fn fold(mut self, init: B, mut f: F) -> B + where + Self: Sized, + F: FnMut(B, Self::Item) -> B, + { + let error = self.error; + self.iter + .try_fold(init, |acc, opt| match opt { + Ok(x) => Ok(f(acc, x)), + Err(e) => { + *error = Err(e); + Err(acc) + } + }) + .unwrap_or_else(|e| e) + } +} + +/// “Lift” a function of the values of an iterator so that it can process +/// an iterator of `Result` values instead. +/// +/// [`IntoIterator`] enabled version of [`Itertools::process_results`]. +pub fn process_results(iterable: I, processor: F) -> Result + where I: IntoIterator>, + F: FnOnce(ProcessResults) -> R +{ + let iter = iterable.into_iter(); + let mut error = Ok(()); + + let result = processor(ProcessResults { error: &mut error, iter }); + + error.map(|_| result) +} diff --git a/rust/hw/char/pl011/vendor/itertools/src/put_back_n_impl.rs b/rust/hw/char/pl011/vendor/itertools/src/put_back_n_impl.rs new file mode 100644 index 0000000000..60ea8e6495 --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/src/put_back_n_impl.rs @@ -0,0 +1,61 @@ +use alloc::vec::Vec; + +use crate::size_hint; + +/// An iterator adaptor that allows putting multiple +/// items in front of the iterator. +/// +/// Iterator element type is `I::Item`. +#[derive(Debug, Clone)] +pub struct PutBackN { + top: Vec, + iter: I, +} + +/// Create an iterator where you can put back multiple values to the front +/// of the iteration. +/// +/// Iterator element type is `I::Item`. +pub fn put_back_n(iterable: I) -> PutBackN + where I: IntoIterator +{ + PutBackN { + top: Vec::new(), + iter: iterable.into_iter(), + } +} + +impl PutBackN { + /// Puts x in front of the iterator. + /// The values are yielded in order of the most recently put back + /// values first. + /// + /// ```rust + /// use itertools::put_back_n; + /// + /// let mut it = put_back_n(1..5); + /// it.next(); + /// it.put_back(1); + /// it.put_back(0); + /// + /// assert!(itertools::equal(it, 0..5)); + /// ``` + #[inline] + pub fn put_back(&mut self, x: I::Item) { + self.top.push(x); + } +} + +impl Iterator for PutBackN { + type Item = I::Item; + #[inline] + fn next(&mut self) -> Option { + self.top.pop().or_else(|| self.iter.next()) + } + + #[inline] + fn size_hint(&self) -> (usize, Option) { + size_hint::add_scalar(self.iter.size_hint(), self.top.len()) + } +} + diff --git a/rust/hw/char/pl011/vendor/itertools/src/rciter_impl.rs b/rust/hw/char/pl011/vendor/itertools/src/rciter_impl.rs new file mode 100644 index 0000000000..7298350a88 --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/src/rciter_impl.rs @@ -0,0 +1,99 @@ + +use std::iter::{FusedIterator, IntoIterator}; +use alloc::rc::Rc; +use std::cell::RefCell; + +/// A wrapper for `Rc>`, that implements the `Iterator` trait. +#[derive(Debug)] +pub struct RcIter { + /// The boxed iterator. + pub rciter: Rc>, +} + +/// Return an iterator inside a `Rc>` wrapper. +/// +/// The returned `RcIter` can be cloned, and each clone will refer back to the +/// same original iterator. 
+/// +/// `RcIter` allows doing interesting things like using `.zip()` on an iterator with +/// itself, at the cost of runtime borrow checking which may have a performance +/// penalty. +/// +/// Iterator element type is `Self::Item`. +/// +/// ``` +/// use itertools::rciter; +/// use itertools::zip; +/// +/// // In this example a range iterator is created and we iterate it using +/// // three separate handles (two of them given to zip). +/// // We also use the IntoIterator implementation for `&RcIter`. +/// +/// let mut iter = rciter(0..9); +/// let mut z = zip(&iter, &iter); +/// +/// assert_eq!(z.next(), Some((0, 1))); +/// assert_eq!(z.next(), Some((2, 3))); +/// assert_eq!(z.next(), Some((4, 5))); +/// assert_eq!(iter.next(), Some(6)); +/// assert_eq!(z.next(), Some((7, 8))); +/// assert_eq!(z.next(), None); +/// ``` +/// +/// **Panics** in iterator methods if a borrow error is encountered in the +/// iterator methods. It can only happen if the `RcIter` is reentered in +/// `.next()`, i.e. if it somehow participates in an “iterator knot” +/// where it is an adaptor of itself. +pub fn rciter(iterable: I) -> RcIter + where I: IntoIterator +{ + RcIter { rciter: Rc::new(RefCell::new(iterable.into_iter())) } +} + +impl Clone for RcIter { + clone_fields!(rciter); +} + +impl Iterator for RcIter + where I: Iterator +{ + type Item = A; + #[inline] + fn next(&mut self) -> Option { + self.rciter.borrow_mut().next() + } + + #[inline] + fn size_hint(&self) -> (usize, Option) { + // To work sanely with other API that assume they own an iterator, + // so it can't change in other places, we can't guarantee as much + // in our size_hint. Other clones may drain values under our feet. + (0, self.rciter.borrow().size_hint().1) + } +} + +impl DoubleEndedIterator for RcIter + where I: DoubleEndedIterator +{ + #[inline] + fn next_back(&mut self) -> Option { + self.rciter.borrow_mut().next_back() + } +} + +/// Return an iterator from `&RcIter` (by simply cloning it). +impl<'a, I> IntoIterator for &'a RcIter + where I: Iterator +{ + type Item = I::Item; + type IntoIter = RcIter; + + fn into_iter(self) -> RcIter { + self.clone() + } +} + + +impl FusedIterator for RcIter + where I: FusedIterator +{} diff --git a/rust/hw/char/pl011/vendor/itertools/src/repeatn.rs b/rust/hw/char/pl011/vendor/itertools/src/repeatn.rs new file mode 100644 index 0000000000..e025f6f6a5 --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/src/repeatn.rs @@ -0,0 +1,59 @@ +use std::iter::FusedIterator; + +/// An iterator that produces *n* repetitions of an element. +/// +/// See [`repeat_n()`](crate::repeat_n) for more information. +#[must_use = "iterators are lazy and do nothing unless consumed"] +#[derive(Clone, Debug)] +pub struct RepeatN { + elt: Option, + n: usize, +} + +/// Create an iterator that produces `n` repetitions of `element`. 
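A short usage sketch of `repeat_n`, the constructor described above; the `main` function is illustrative only and assumes the public `itertools::repeat_n` re-export.

use itertools::repeat_n;

fn main() {
    // Exactly five repetitions; the final one hands out the owned value
    // instead of a clone, as the implementation below shows.
    let xs: Vec<i32> = repeat_n(7, 5).collect();
    assert_eq!(xs, vec![7, 7, 7, 7, 7]);

    // n == 0 yields nothing at all.
    assert_eq!(repeat_n(7, 0).count(), 0);
}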
+pub fn repeat_n(element: A, n: usize) -> RepeatN + where A: Clone, +{ + if n == 0 { + RepeatN { elt: None, n, } + } else { + RepeatN { elt: Some(element), n, } + } +} + +impl Iterator for RepeatN + where A: Clone +{ + type Item = A; + + fn next(&mut self) -> Option { + if self.n > 1 { + self.n -= 1; + self.elt.as_ref().cloned() + } else { + self.n = 0; + self.elt.take() + } + } + + fn size_hint(&self) -> (usize, Option) { + (self.n, Some(self.n)) + } +} + +impl DoubleEndedIterator for RepeatN + where A: Clone +{ + #[inline] + fn next_back(&mut self) -> Option { + self.next() + } +} + +impl ExactSizeIterator for RepeatN + where A: Clone +{} + +impl FusedIterator for RepeatN + where A: Clone +{} diff --git a/rust/hw/char/pl011/vendor/itertools/src/size_hint.rs b/rust/hw/char/pl011/vendor/itertools/src/size_hint.rs new file mode 100644 index 0000000000..71ea1412b5 --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/src/size_hint.rs @@ -0,0 +1,119 @@ +//! Arithmetic on `Iterator.size_hint()` values. +//! + +use std::usize; +use std::cmp; +use std::u32; + +/// `SizeHint` is the return type of `Iterator::size_hint()`. +pub type SizeHint = (usize, Option); + +/// Add `SizeHint` correctly. +#[inline] +pub fn add(a: SizeHint, b: SizeHint) -> SizeHint { + let min = a.0.saturating_add(b.0); + let max = match (a.1, b.1) { + (Some(x), Some(y)) => x.checked_add(y), + _ => None, + }; + + (min, max) +} + +/// Add `x` correctly to a `SizeHint`. +#[inline] +pub fn add_scalar(sh: SizeHint, x: usize) -> SizeHint { + let (mut low, mut hi) = sh; + low = low.saturating_add(x); + hi = hi.and_then(|elt| elt.checked_add(x)); + (low, hi) +} + +/// Subtract `x` correctly from a `SizeHint`. +#[inline] +#[allow(dead_code)] +pub fn sub_scalar(sh: SizeHint, x: usize) -> SizeHint { + let (mut low, mut hi) = sh; + low = low.saturating_sub(x); + hi = hi.map(|elt| elt.saturating_sub(x)); + (low, hi) +} + + +/// Multiply `SizeHint` correctly +/// +/// ```ignore +/// use std::usize; +/// use itertools::size_hint; +/// +/// assert_eq!(size_hint::mul((3, Some(4)), (3, Some(4))), +/// (9, Some(16))); +/// +/// assert_eq!(size_hint::mul((3, Some(4)), (usize::MAX, None)), +/// (usize::MAX, None)); +/// +/// assert_eq!(size_hint::mul((3, None), (0, Some(0))), +/// (0, Some(0))); +/// ``` +#[inline] +pub fn mul(a: SizeHint, b: SizeHint) -> SizeHint { + let low = a.0.saturating_mul(b.0); + let hi = match (a.1, b.1) { + (Some(x), Some(y)) => x.checked_mul(y), + (Some(0), None) | (None, Some(0)) => Some(0), + _ => None, + }; + (low, hi) +} + +/// Multiply `x` correctly with a `SizeHint`. +#[inline] +pub fn mul_scalar(sh: SizeHint, x: usize) -> SizeHint { + let (mut low, mut hi) = sh; + low = low.saturating_mul(x); + hi = hi.and_then(|elt| elt.checked_mul(x)); + (low, hi) +} + +/// Raise `base` correctly by a `SizeHint` exponent. 
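The size-hint helpers above are crate-internal, so they cannot be imported directly; the standalone sketch below merely restates the `add` rule (saturating lower bound, checked upper bound) under that assumption.

type SizeHint = (usize, Option<usize>);

// Same arithmetic as the vendored `size_hint::add`: lower bounds saturate,
// upper bounds become "unknown" (None) if either side is unbounded or the
// checked addition overflows.
fn add(a: SizeHint, b: SizeHint) -> SizeHint {
    (a.0.saturating_add(b.0),
     a.1.zip(b.1).and_then(|(x, y)| x.checked_add(y)))
}

fn main() {
    assert_eq!(add((3, Some(4)), (5, Some(6))), (8, Some(10)));
    assert_eq!(add((usize::MAX, None), (1, Some(1))), (usize::MAX, None));
}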
+#[inline] +pub fn pow_scalar_base(base: usize, exp: SizeHint) -> SizeHint { + let exp_low = cmp::min(exp.0, u32::MAX as usize) as u32; + let low = base.saturating_pow(exp_low); + + let hi = exp.1.and_then(|exp| { + let exp_hi = cmp::min(exp, u32::MAX as usize) as u32; + base.checked_pow(exp_hi) + }); + + (low, hi) +} + +/// Return the maximum +#[inline] +pub fn max(a: SizeHint, b: SizeHint) -> SizeHint { + let (a_lower, a_upper) = a; + let (b_lower, b_upper) = b; + + let lower = cmp::max(a_lower, b_lower); + + let upper = match (a_upper, b_upper) { + (Some(x), Some(y)) => Some(cmp::max(x, y)), + _ => None, + }; + + (lower, upper) +} + +/// Return the minimum +#[inline] +pub fn min(a: SizeHint, b: SizeHint) -> SizeHint { + let (a_lower, a_upper) = a; + let (b_lower, b_upper) = b; + let lower = cmp::min(a_lower, b_lower); + let upper = match (a_upper, b_upper) { + (Some(u1), Some(u2)) => Some(cmp::min(u1, u2)), + _ => a_upper.or(b_upper), + }; + (lower, upper) +} diff --git a/rust/hw/char/pl011/vendor/itertools/src/sources.rs b/rust/hw/char/pl011/vendor/itertools/src/sources.rs new file mode 100644 index 0000000000..3877ce3c8b --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/src/sources.rs @@ -0,0 +1,183 @@ +//! Iterators that are sources (produce elements from parameters, +//! not from another iterator). +#![allow(deprecated)] + +use std::fmt; +use std::mem; + +/// See [`repeat_call`](crate::repeat_call) for more information. +#[derive(Clone)] +#[deprecated(note="Use std repeat_with() instead", since="0.8.0")] +pub struct RepeatCall { + f: F, +} + +impl fmt::Debug for RepeatCall +{ + debug_fmt_fields!(RepeatCall, ); +} + +/// An iterator source that produces elements indefinitely by calling +/// a given closure. +/// +/// Iterator element type is the return type of the closure. +/// +/// ``` +/// use itertools::repeat_call; +/// use itertools::Itertools; +/// use std::collections::BinaryHeap; +/// +/// let mut heap = BinaryHeap::from(vec![2, 5, 3, 7, 8]); +/// +/// // extract each element in sorted order +/// for element in repeat_call(|| heap.pop()).while_some() { +/// print!("{}", element); +/// } +/// +/// itertools::assert_equal( +/// repeat_call(|| 1).take(5), +/// vec![1, 1, 1, 1, 1] +/// ); +/// ``` +#[deprecated(note="Use std repeat_with() instead", since="0.8.0")] +pub fn repeat_call(function: F) -> RepeatCall + where F: FnMut() -> A +{ + RepeatCall { f: function } +} + +impl Iterator for RepeatCall + where F: FnMut() -> A +{ + type Item = A; + + #[inline] + fn next(&mut self) -> Option { + Some((self.f)()) + } + + fn size_hint(&self) -> (usize, Option) { + (usize::max_value(), None) + } +} + +/// Creates a new unfold source with the specified closure as the "iterator +/// function" and an initial state to eventually pass to the closure +/// +/// `unfold` is a general iterator builder: it has a mutable state value, +/// and a closure with access to the state that produces the next value. +/// +/// This more or less equivalent to a regular struct with an [`Iterator`] +/// implementation, and is useful for one-off iterators. +/// +/// ``` +/// // an iterator that yields sequential Fibonacci numbers, +/// // and stops at the maximum representable value. 
+/// +/// use itertools::unfold; +/// +/// let mut fibonacci = unfold((1u32, 1u32), |(x1, x2)| { +/// // Attempt to get the next Fibonacci number +/// let next = x1.saturating_add(*x2); +/// +/// // Shift left: ret <- x1 <- x2 <- next +/// let ret = *x1; +/// *x1 = *x2; +/// *x2 = next; +/// +/// // If addition has saturated at the maximum, we are finished +/// if ret == *x1 && ret > 1 { +/// None +/// } else { +/// Some(ret) +/// } +/// }); +/// +/// itertools::assert_equal(fibonacci.by_ref().take(8), +/// vec![1, 1, 2, 3, 5, 8, 13, 21]); +/// assert_eq!(fibonacci.last(), Some(2_971_215_073)) +/// ``` +pub fn unfold(initial_state: St, f: F) -> Unfold + where F: FnMut(&mut St) -> Option +{ + Unfold { + f, + state: initial_state, + } +} + +impl fmt::Debug for Unfold + where St: fmt::Debug, +{ + debug_fmt_fields!(Unfold, state); +} + +/// See [`unfold`](crate::unfold) for more information. +#[derive(Clone)] +#[must_use = "iterators are lazy and do nothing unless consumed"] +pub struct Unfold { + f: F, + /// Internal state that will be passed to the closure on the next iteration + pub state: St, +} + +impl Iterator for Unfold + where F: FnMut(&mut St) -> Option +{ + type Item = A; + + #[inline] + fn next(&mut self) -> Option { + (self.f)(&mut self.state) + } +} + +/// An iterator that infinitely applies function to value and yields results. +/// +/// This `struct` is created by the [`iterate()`](crate::iterate) function. +/// See its documentation for more. +#[derive(Clone)] +#[must_use = "iterators are lazy and do nothing unless consumed"] +pub struct Iterate { + state: St, + f: F, +} + +impl fmt::Debug for Iterate + where St: fmt::Debug, +{ + debug_fmt_fields!(Iterate, state); +} + +impl Iterator for Iterate + where F: FnMut(&St) -> St +{ + type Item = St; + + #[inline] + fn next(&mut self) -> Option { + let next_state = (self.f)(&self.state); + Some(mem::replace(&mut self.state, next_state)) + } + + #[inline] + fn size_hint(&self) -> (usize, Option) { + (usize::max_value(), None) + } +} + +/// Creates a new iterator that infinitely applies function to value and yields results. +/// +/// ``` +/// use itertools::iterate; +/// +/// itertools::assert_equal(iterate(1, |&i| i * 3).take(5), vec![1, 3, 9, 27, 81]); +/// ``` +pub fn iterate(initial_value: St, f: F) -> Iterate + where F: FnMut(&St) -> St +{ + Iterate { + state: initial_value, + f, + } +} diff --git a/rust/hw/char/pl011/vendor/itertools/src/take_while_inclusive.rs b/rust/hw/char/pl011/vendor/itertools/src/take_while_inclusive.rs new file mode 100644 index 0000000000..e2a7479e0b --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/src/take_while_inclusive.rs @@ -0,0 +1,68 @@ +use core::iter::FusedIterator; +use std::fmt; + +/// An iterator adaptor that consumes elements while the given predicate is +/// `true`, including the element for which the predicate first returned +/// `false`. +/// +/// See [`.take_while_inclusive()`](crate::Itertools::take_while_inclusive) +/// for more information. +#[must_use = "iterator adaptors are lazy and do nothing unless consumed"] +pub struct TakeWhileInclusive<'a, I: 'a, F> { + iter: &'a mut I, + predicate: F, + done: bool, +} + +impl<'a, I, F> TakeWhileInclusive<'a, I, F> +where + I: Iterator, + F: FnMut(&I::Item) -> bool, +{ + /// Create a new [`TakeWhileInclusive`] from an iterator and a predicate. 
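The adaptor above has no inline doc example, so a minimal usage sketch follows; the `main` function and sample data are illustrative only and assume the `Itertools::take_while_inclusive` method that wraps this adaptor.

use itertools::Itertools;

fn main() {
    // Unlike take_while, the first element that fails the predicate is
    // still yielded before iteration stops.
    let mut it = vec![1, 2, 3, 10, 4].into_iter();
    let taken: Vec<_> = it.take_while_inclusive(|&x| x < 3).collect();
    assert_eq!(taken, vec![1, 2, 3]);

    // The borrowed source iterator remains usable afterwards.
    assert_eq!(it.next(), Some(10));
}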
+ pub fn new(iter: &'a mut I, predicate: F) -> Self { + Self { iter, predicate, done: false} + } +} + +impl<'a, I, F> fmt::Debug for TakeWhileInclusive<'a, I, F> + where I: Iterator + fmt::Debug, +{ + debug_fmt_fields!(TakeWhileInclusive, iter); +} + +impl<'a, I, F> Iterator for TakeWhileInclusive<'a, I, F> +where + I: Iterator, + F: FnMut(&I::Item) -> bool +{ + type Item = I::Item; + + fn next(&mut self) -> Option { + if self.done { + None + } else { + self.iter.next().map(|item| { + if !(self.predicate)(&item) { + self.done = true; + } + item + }) + } + } + + fn size_hint(&self) -> (usize, Option) { + if self.done { + (0, Some(0)) + } else { + (0, self.iter.size_hint().1) + } + } +} + +impl FusedIterator for TakeWhileInclusive<'_, I, F> +where + I: Iterator, + F: FnMut(&I::Item) -> bool +{ +} \ No newline at end of file diff --git a/rust/hw/char/pl011/vendor/itertools/src/tee.rs b/rust/hw/char/pl011/vendor/itertools/src/tee.rs new file mode 100644 index 0000000000..ea4752906f --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/src/tee.rs @@ -0,0 +1,78 @@ +use super::size_hint; + +use std::cell::RefCell; +use alloc::collections::VecDeque; +use alloc::rc::Rc; + +/// Common buffer object for the two tee halves +#[derive(Debug)] +struct TeeBuffer { + backlog: VecDeque, + iter: I, + /// The owner field indicates which id should read from the backlog + owner: bool, +} + +/// One half of an iterator pair where both return the same elements. +/// +/// See [`.tee()`](crate::Itertools::tee) for more information. +#[must_use = "iterator adaptors are lazy and do nothing unless consumed"] +#[derive(Debug)] +pub struct Tee + where I: Iterator +{ + rcbuffer: Rc>>, + id: bool, +} + +pub fn new(iter: I) -> (Tee, Tee) + where I: Iterator +{ + let buffer = TeeBuffer{backlog: VecDeque::new(), iter, owner: false}; + let t1 = Tee{rcbuffer: Rc::new(RefCell::new(buffer)), id: true}; + let t2 = Tee{rcbuffer: t1.rcbuffer.clone(), id: false}; + (t1, t2) +} + +impl Iterator for Tee + where I: Iterator, + I::Item: Clone +{ + type Item = I::Item; + fn next(&mut self) -> Option { + // .borrow_mut may fail here -- but only if the user has tied some kind of weird + // knot where the iterator refers back to itself. + let mut buffer = self.rcbuffer.borrow_mut(); + if buffer.owner == self.id { + match buffer.backlog.pop_front() { + None => {} + some_elt => return some_elt, + } + } + match buffer.iter.next() { + None => None, + Some(elt) => { + buffer.backlog.push_back(elt.clone()); + buffer.owner = !self.id; + Some(elt) + } + } + } + + fn size_hint(&self) -> (usize, Option) { + let buffer = self.rcbuffer.borrow(); + let sh = buffer.iter.size_hint(); + + if buffer.owner == self.id { + let log_len = buffer.backlog.len(); + size_hint::add_scalar(sh, log_len) + } else { + sh + } + } +} + +impl ExactSizeIterator for Tee + where I: ExactSizeIterator, + I::Item: Clone +{} diff --git a/rust/hw/char/pl011/vendor/itertools/src/tuple_impl.rs b/rust/hw/char/pl011/vendor/itertools/src/tuple_impl.rs new file mode 100644 index 0000000000..fdf0865856 --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/src/tuple_impl.rs @@ -0,0 +1,331 @@ +//! Some iterator that produces tuples + +use std::iter::Fuse; +use std::iter::FusedIterator; +use std::iter::Take; +use std::iter::Cycle; +use std::marker::PhantomData; + +// `HomogeneousTuple` is a public facade for `TupleCollect`, allowing +// tuple-related methods to be used by clients in generic contexts, while +// hiding the implementation details of `TupleCollect`. 
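As a usage sketch for the `Tee` adaptor defined just above (the `main` function and values are illustrative only, assuming the `Itertools::tee` method documented there):

use itertools::Itertools;

fn main() {
    // Two handles over one source; the shared VecDeque buffers elements
    // for whichever handle is behind, cloning items only when needed.
    let (a, b) = (1..=4).tee();
    assert_eq!(a.collect::<Vec<_>>(), vec![1, 2, 3, 4]);
    assert_eq!(b.collect::<Vec<_>>(), vec![1, 2, 3, 4]);
}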
+// See https://github.com/rust-itertools/itertools/issues/387 + +/// Implemented for homogeneous tuples of size up to 12. +pub trait HomogeneousTuple + : TupleCollect +{} + +impl HomogeneousTuple for T {} + +/// An iterator over a incomplete tuple. +/// +/// See [`.tuples()`](crate::Itertools::tuples) and +/// [`Tuples::into_buffer()`]. +#[derive(Clone, Debug)] +pub struct TupleBuffer + where T: HomogeneousTuple +{ + cur: usize, + buf: T::Buffer, +} + +impl TupleBuffer + where T: HomogeneousTuple +{ + fn new(buf: T::Buffer) -> Self { + TupleBuffer { + cur: 0, + buf, + } + } +} + +impl Iterator for TupleBuffer + where T: HomogeneousTuple +{ + type Item = T::Item; + + fn next(&mut self) -> Option { + let s = self.buf.as_mut(); + if let Some(ref mut item) = s.get_mut(self.cur) { + self.cur += 1; + item.take() + } else { + None + } + } + + fn size_hint(&self) -> (usize, Option) { + let buffer = &self.buf.as_ref()[self.cur..]; + let len = if buffer.is_empty() { + 0 + } else { + buffer.iter() + .position(|x| x.is_none()) + .unwrap_or_else(|| buffer.len()) + }; + (len, Some(len)) + } +} + +impl ExactSizeIterator for TupleBuffer + where T: HomogeneousTuple +{ +} + +/// An iterator that groups the items in tuples of a specific size. +/// +/// See [`.tuples()`](crate::Itertools::tuples) for more information. +#[derive(Clone, Debug)] +#[must_use = "iterator adaptors are lazy and do nothing unless consumed"] +pub struct Tuples + where I: Iterator, + T: HomogeneousTuple +{ + iter: Fuse, + buf: T::Buffer, +} + +/// Create a new tuples iterator. +pub fn tuples(iter: I) -> Tuples + where I: Iterator, + T: HomogeneousTuple +{ + Tuples { + iter: iter.fuse(), + buf: Default::default(), + } +} + +impl Iterator for Tuples + where I: Iterator, + T: HomogeneousTuple +{ + type Item = T; + + fn next(&mut self) -> Option { + T::collect_from_iter(&mut self.iter, &mut self.buf) + } +} + +impl Tuples + where I: Iterator, + T: HomogeneousTuple +{ + /// Return a buffer with the produced items that was not enough to be grouped in a tuple. + /// + /// ``` + /// use itertools::Itertools; + /// + /// let mut iter = (0..5).tuples(); + /// assert_eq!(Some((0, 1, 2)), iter.next()); + /// assert_eq!(None, iter.next()); + /// itertools::assert_equal(vec![3, 4], iter.into_buffer()); + /// ``` + pub fn into_buffer(self) -> TupleBuffer { + TupleBuffer::new(self.buf) + } +} + + +/// An iterator over all contiguous windows that produces tuples of a specific size. +/// +/// See [`.tuple_windows()`](crate::Itertools::tuple_windows) for more +/// information. +#[must_use = "iterator adaptors are lazy and do nothing unless consumed"] +#[derive(Clone, Debug)] +pub struct TupleWindows + where I: Iterator, + T: HomogeneousTuple +{ + iter: I, + last: Option, +} + +/// Create a new tuple windows iterator. +pub fn tuple_windows(mut iter: I) -> TupleWindows + where I: Iterator, + T: HomogeneousTuple, + T::Item: Clone +{ + use std::iter::once; + + let mut last = None; + if T::num_items() != 1 { + // put in a duplicate item in front of the tuple; this simplifies + // .next() function. 
+ if let Some(item) = iter.next() { + let iter = once(item.clone()).chain(once(item)).chain(&mut iter); + last = T::collect_from_iter_no_buf(iter); + } + } + + TupleWindows { + iter, + last, + } +} + +impl Iterator for TupleWindows + where I: Iterator, + T: HomogeneousTuple + Clone, + T::Item: Clone +{ + type Item = T; + + fn next(&mut self) -> Option { + if T::num_items() == 1 { + return T::collect_from_iter_no_buf(&mut self.iter) + } + if let Some(ref mut last) = self.last { + if let Some(new) = self.iter.next() { + last.left_shift_push(new); + return Some(last.clone()); + } + } + None + } +} + +impl FusedIterator for TupleWindows + where I: FusedIterator, + T: HomogeneousTuple + Clone, + T::Item: Clone +{} + +/// An iterator over all windows, wrapping back to the first elements when the +/// window would otherwise exceed the length of the iterator, producing tuples +/// of a specific size. +/// +/// See [`.circular_tuple_windows()`](crate::Itertools::circular_tuple_windows) for more +/// information. +#[must_use = "iterator adaptors are lazy and do nothing unless consumed"] +#[derive(Debug, Clone)] +pub struct CircularTupleWindows + where I: Iterator + Clone, + T: TupleCollect + Clone +{ + iter: Take, T>>, + phantom_data: PhantomData +} + +pub fn circular_tuple_windows(iter: I) -> CircularTupleWindows + where I: Iterator + Clone + ExactSizeIterator, + T: TupleCollect + Clone, + T::Item: Clone +{ + let len = iter.len(); + let iter = tuple_windows(iter.cycle()).take(len); + + CircularTupleWindows { + iter, + phantom_data: PhantomData{} + } +} + +impl Iterator for CircularTupleWindows + where I: Iterator + Clone, + T: TupleCollect + Clone, + T::Item: Clone +{ + type Item = T; + + fn next(&mut self) -> Option { + self.iter.next() + } +} + +pub trait TupleCollect: Sized { + type Item; + type Buffer: Default + AsRef<[Option]> + AsMut<[Option]>; + + fn collect_from_iter(iter: I, buf: &mut Self::Buffer) -> Option + where I: IntoIterator; + + fn collect_from_iter_no_buf(iter: I) -> Option + where I: IntoIterator; + + fn num_items() -> usize; + + fn left_shift_push(&mut self, item: Self::Item); +} + +macro_rules! count_ident{ + () => {0}; + ($i0:ident, $($i:ident,)*) => {1 + count_ident!($($i,)*)}; +} +macro_rules! rev_for_each_ident{ + ($m:ident, ) => {}; + ($m:ident, $i0:ident, $($i:ident,)*) => { + rev_for_each_ident!($m, $($i,)*); + $m!($i0); + }; +} + +macro_rules! impl_tuple_collect { + ($dummy:ident,) => {}; // stop + ($dummy:ident, $($Y:ident,)*) => ( + impl_tuple_collect!($($Y,)*); + impl TupleCollect for ($(ignore_ident!($Y, A),)*) { + type Item = A; + type Buffer = [Option; count_ident!($($Y,)*) - 1]; + + #[allow(unused_assignments, unused_mut)] + fn collect_from_iter(iter: I, buf: &mut Self::Buffer) -> Option + where I: IntoIterator + { + let mut iter = iter.into_iter(); + $( + let mut $Y = None; + )* + + loop { + $( + $Y = iter.next(); + if $Y.is_none() { + break + } + )* + return Some(($($Y.unwrap()),*,)) + } + + let mut i = 0; + let mut s = buf.as_mut(); + $( + if i < s.len() { + s[i] = $Y; + i += 1; + } + )* + return None; + } + + fn collect_from_iter_no_buf(iter: I) -> Option + where I: IntoIterator + { + let mut iter = iter.into_iter(); + + Some(($( + { let $Y = iter.next()?; $Y }, + )*)) + } + + fn num_items() -> usize { + count_ident!($($Y,)*) + } + + fn left_shift_push(&mut self, mut item: A) { + use std::mem::replace; + + let &mut ($(ref mut $Y),*,) = self; + macro_rules! 
replace_item{($i:ident) => { + item = replace($i, item); + }} + rev_for_each_ident!(replace_item, $($Y,)*); + drop(item); + } + } + ) +} +impl_tuple_collect!(dummy, a, b, c, d, e, f, g, h, i, j, k, l,); diff --git a/rust/hw/char/pl011/vendor/itertools/src/unique_impl.rs b/rust/hw/char/pl011/vendor/itertools/src/unique_impl.rs new file mode 100644 index 0000000000..4e81e78ec0 --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/src/unique_impl.rs @@ -0,0 +1,179 @@ +use std::collections::HashMap; +use std::collections::hash_map::Entry; +use std::hash::Hash; +use std::fmt; +use std::iter::FusedIterator; + +/// An iterator adapter to filter out duplicate elements. +/// +/// See [`.unique_by()`](crate::Itertools::unique) for more information. +#[derive(Clone)] +#[must_use = "iterator adaptors are lazy and do nothing unless consumed"] +pub struct UniqueBy { + iter: I, + // Use a Hashmap for the Entry API in order to prevent hashing twice. + // This can maybe be replaced with a HashSet once `get_or_insert_with` + // or a proper Entry API for Hashset is stable and meets this msrv + used: HashMap, + f: F, +} + +impl fmt::Debug for UniqueBy + where I: Iterator + fmt::Debug, + V: fmt::Debug + Hash + Eq, +{ + debug_fmt_fields!(UniqueBy, iter, used); +} + +/// Create a new `UniqueBy` iterator. +pub fn unique_by(iter: I, f: F) -> UniqueBy + where V: Eq + Hash, + F: FnMut(&I::Item) -> V, + I: Iterator, +{ + UniqueBy { + iter, + used: HashMap::new(), + f, + } +} + +// count the number of new unique keys in iterable (`used` is the set already seen) +fn count_new_keys(mut used: HashMap, iterable: I) -> usize + where I: IntoIterator, + K: Hash + Eq, +{ + let iter = iterable.into_iter(); + let current_used = used.len(); + used.extend(iter.map(|key| (key, ()))); + used.len() - current_used +} + +impl Iterator for UniqueBy + where I: Iterator, + V: Eq + Hash, + F: FnMut(&I::Item) -> V +{ + type Item = I::Item; + + fn next(&mut self) -> Option { + while let Some(v) = self.iter.next() { + let key = (self.f)(&v); + if self.used.insert(key, ()).is_none() { + return Some(v); + } + } + None + } + + #[inline] + fn size_hint(&self) -> (usize, Option) { + let (low, hi) = self.iter.size_hint(); + ((low > 0 && self.used.is_empty()) as usize, hi) + } + + fn count(self) -> usize { + let mut key_f = self.f; + count_new_keys(self.used, self.iter.map(move |elt| key_f(&elt))) + } +} + +impl DoubleEndedIterator for UniqueBy + where I: DoubleEndedIterator, + V: Eq + Hash, + F: FnMut(&I::Item) -> V +{ + fn next_back(&mut self) -> Option { + while let Some(v) = self.iter.next_back() { + let key = (self.f)(&v); + if self.used.insert(key, ()).is_none() { + return Some(v); + } + } + None + } +} + +impl FusedIterator for UniqueBy + where I: FusedIterator, + V: Eq + Hash, + F: FnMut(&I::Item) -> V +{} + +impl Iterator for Unique + where I: Iterator, + I::Item: Eq + Hash + Clone +{ + type Item = I::Item; + + fn next(&mut self) -> Option { + while let Some(v) = self.iter.iter.next() { + if let Entry::Vacant(entry) = self.iter.used.entry(v) { + let elt = entry.key().clone(); + entry.insert(()); + return Some(elt); + } + } + None + } + + #[inline] + fn size_hint(&self) -> (usize, Option) { + let (low, hi) = self.iter.iter.size_hint(); + ((low > 0 && self.iter.used.is_empty()) as usize, hi) + } + + fn count(self) -> usize { + count_new_keys(self.iter.used, self.iter.iter) + } +} + +impl DoubleEndedIterator for Unique + where I: DoubleEndedIterator, + I::Item: Eq + Hash + Clone +{ + fn next_back(&mut self) -> Option { + while let Some(v) = 
self.iter.iter.next_back() { + if let Entry::Vacant(entry) = self.iter.used.entry(v) { + let elt = entry.key().clone(); + entry.insert(()); + return Some(elt); + } + } + None + } +} + +impl FusedIterator for Unique + where I: FusedIterator, + I::Item: Eq + Hash + Clone +{} + +/// An iterator adapter to filter out duplicate elements. +/// +/// See [`.unique()`](crate::Itertools::unique) for more information. +#[derive(Clone)] +#[must_use = "iterator adaptors are lazy and do nothing unless consumed"] +pub struct Unique { + iter: UniqueBy, +} + +impl fmt::Debug for Unique + where I: Iterator + fmt::Debug, + I::Item: Hash + Eq + fmt::Debug, +{ + debug_fmt_fields!(Unique, iter); +} + +pub fn unique(iter: I) -> Unique + where I: Iterator, + I::Item: Eq + Hash, +{ + Unique { + iter: UniqueBy { + iter, + used: HashMap::new(), + f: (), + } + } +} diff --git a/rust/hw/char/pl011/vendor/itertools/src/unziptuple.rs b/rust/hw/char/pl011/vendor/itertools/src/unziptuple.rs new file mode 100644 index 0000000000..7af29ec4ab --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/src/unziptuple.rs @@ -0,0 +1,80 @@ +/// Converts an iterator of tuples into a tuple of containers. +/// +/// `unzip()` consumes an entire iterator of n-ary tuples, producing `n` collections, one for each +/// column. +/// +/// This function is, in some sense, the opposite of [`multizip`]. +/// +/// ``` +/// use itertools::multiunzip; +/// +/// let inputs = vec![(1, 2, 3), (4, 5, 6), (7, 8, 9)]; +/// +/// let (a, b, c): (Vec<_>, Vec<_>, Vec<_>) = multiunzip(inputs); +/// +/// assert_eq!(a, vec![1, 4, 7]); +/// assert_eq!(b, vec![2, 5, 8]); +/// assert_eq!(c, vec![3, 6, 9]); +/// ``` +/// +/// [`multizip`]: crate::multizip +pub fn multiunzip(i: I) -> FromI +where + I: IntoIterator, + I::IntoIter: MultiUnzip, +{ + i.into_iter().multiunzip() +} + +/// An iterator that can be unzipped into multiple collections. +/// +/// See [`.multiunzip()`](crate::Itertools::multiunzip) for more information. +pub trait MultiUnzip: Iterator { + /// Unzip this iterator into multiple collections. + fn multiunzip(self) -> FromI; +} + +macro_rules! impl_unzip_iter { + ($($T:ident => $FromT:ident),*) => ( + #[allow(non_snake_case)] + impl, $($T, $FromT: Default + Extend<$T>),* > MultiUnzip<($($FromT,)*)> for IT { + fn multiunzip(self) -> ($($FromT,)*) { + // This implementation mirrors the logic of Iterator::unzip resp. Extend for (A, B) as close as possible. + // Unfortunately a lot of the used api there is still unstable (https://github.com/rust-lang/rust/issues/72631). 
+ // + // Iterator::unzip: https://doc.rust-lang.org/src/core/iter/traits/iterator.rs.html#2825-2865 + // Extend for (A, B): https://doc.rust-lang.org/src/core/iter/traits/collect.rs.html#370-411 + + let mut res = ($($FromT::default(),)*); + let ($($FromT,)*) = &mut res; + + // Still unstable #72631 + // let (lower_bound, _) = self.size_hint(); + // if lower_bound > 0 { + // $($FromT.extend_reserve(lower_bound);)* + // } + + self.fold((), |(), ($($T,)*)| { + // Still unstable #72631 + // $( $FromT.extend_one($T); )* + $( $FromT.extend(std::iter::once($T)); )* + }); + res + } + } + ); +} + +impl_unzip_iter!(); +impl_unzip_iter!(A => FromA); +impl_unzip_iter!(A => FromA, B => FromB); +impl_unzip_iter!(A => FromA, B => FromB, C => FromC); +impl_unzip_iter!(A => FromA, B => FromB, C => FromC, D => FromD); +impl_unzip_iter!(A => FromA, B => FromB, C => FromC, D => FromD, E => FromE); +impl_unzip_iter!(A => FromA, B => FromB, C => FromC, D => FromD, E => FromE, F => FromF); +impl_unzip_iter!(A => FromA, B => FromB, C => FromC, D => FromD, E => FromE, F => FromF, G => FromG); +impl_unzip_iter!(A => FromA, B => FromB, C => FromC, D => FromD, E => FromE, F => FromF, G => FromG, H => FromH); +impl_unzip_iter!(A => FromA, B => FromB, C => FromC, D => FromD, E => FromE, F => FromF, G => FromG, H => FromH, I => FromI); +impl_unzip_iter!(A => FromA, B => FromB, C => FromC, D => FromD, E => FromE, F => FromF, G => FromG, H => FromH, I => FromI, J => FromJ); +impl_unzip_iter!(A => FromA, B => FromB, C => FromC, D => FromD, E => FromE, F => FromF, G => FromG, H => FromH, I => FromI, J => FromJ, K => FromK); +impl_unzip_iter!(A => FromA, B => FromB, C => FromC, D => FromD, E => FromE, F => FromF, G => FromG, H => FromH, I => FromI, J => FromJ, K => FromK, L => FromL); diff --git a/rust/hw/char/pl011/vendor/itertools/src/with_position.rs b/rust/hw/char/pl011/vendor/itertools/src/with_position.rs new file mode 100644 index 0000000000..dda9b25dc3 --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/src/with_position.rs @@ -0,0 +1,88 @@ +use std::iter::{Fuse,Peekable, FusedIterator}; + +/// An iterator adaptor that wraps each element in an [`Position`]. +/// +/// Iterator element type is `(Position, I::Item)`. +/// +/// See [`.with_position()`](crate::Itertools::with_position) for more information. +#[must_use = "iterator adaptors are lazy and do nothing unless consumed"] +pub struct WithPosition + where I: Iterator, +{ + handled_first: bool, + peekable: Peekable>, +} + +impl Clone for WithPosition + where I: Clone + Iterator, + I::Item: Clone, +{ + clone_fields!(handled_first, peekable); +} + +/// Create a new `WithPosition` iterator. +pub fn with_position(iter: I) -> WithPosition + where I: Iterator, +{ + WithPosition { + handled_first: false, + peekable: iter.fuse().peekable(), + } +} + +/// The first component of the value yielded by `WithPosition`. +/// Indicates the position of this element in the iterator results. +/// +/// See [`.with_position()`](crate::Itertools::with_position) for more information. +#[derive(Copy, Clone, Debug, PartialEq)] +pub enum Position { + /// This is the first element. + First, + /// This is neither the first nor the last element. + Middle, + /// This is the last element. + Last, + /// This is the only element. + Only, +} + +impl Iterator for WithPosition { + type Item = (Position, I::Item); + + fn next(&mut self) -> Option { + match self.peekable.next() { + Some(item) => { + if !self.handled_first { + // Haven't seen the first item yet, and there is one to give. 
+ self.handled_first = true; + // Peek to see if this is also the last item, + // in which case tag it as `Only`. + match self.peekable.peek() { + Some(_) => Some((Position::First, item)), + None => Some((Position::Only, item)), + } + } else { + // Have seen the first item, and there's something left. + // Peek to see if this is the last item. + match self.peekable.peek() { + Some(_) => Some((Position::Middle, item)), + None => Some((Position::Last, item)), + } + } + } + // Iterator is finished. + None => None, + } + } + + fn size_hint(&self) -> (usize, Option) { + self.peekable.size_hint() + } +} + +impl ExactSizeIterator for WithPosition + where I: ExactSizeIterator, +{ } + +impl FusedIterator for WithPosition +{} diff --git a/rust/hw/char/pl011/vendor/itertools/src/zip_eq_impl.rs b/rust/hw/char/pl011/vendor/itertools/src/zip_eq_impl.rs new file mode 100644 index 0000000000..a079b326a4 --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/src/zip_eq_impl.rs @@ -0,0 +1,60 @@ +use super::size_hint; + +/// An iterator which iterates two other iterators simultaneously +/// +/// See [`.zip_eq()`](crate::Itertools::zip_eq) for more information. +#[derive(Clone, Debug)] +#[must_use = "iterator adaptors are lazy and do nothing unless consumed"] +pub struct ZipEq { + a: I, + b: J, +} + +/// Iterate `i` and `j` in lock step. +/// +/// **Panics** if the iterators are not of the same length. +/// +/// [`IntoIterator`] enabled version of [`Itertools::zip_eq`](crate::Itertools::zip_eq). +/// +/// ``` +/// use itertools::zip_eq; +/// +/// let data = [1, 2, 3, 4, 5]; +/// for (a, b) in zip_eq(&data[..data.len() - 1], &data[1..]) { +/// /* loop body */ +/// } +/// ``` +pub fn zip_eq(i: I, j: J) -> ZipEq + where I: IntoIterator, + J: IntoIterator +{ + ZipEq { + a: i.into_iter(), + b: j.into_iter(), + } +} + +impl Iterator for ZipEq + where I: Iterator, + J: Iterator +{ + type Item = (I::Item, J::Item); + + fn next(&mut self) -> Option { + match (self.a.next(), self.b.next()) { + (None, None) => None, + (Some(a), Some(b)) => Some((a, b)), + (None, Some(_)) | (Some(_), None) => + panic!("itertools: .zip_eq() reached end of one iterator before the other") + } + } + + fn size_hint(&self) -> (usize, Option) { + size_hint::min(self.a.size_hint(), self.b.size_hint()) + } +} + +impl ExactSizeIterator for ZipEq + where I: ExactSizeIterator, + J: ExactSizeIterator +{} diff --git a/rust/hw/char/pl011/vendor/itertools/src/zip_longest.rs b/rust/hw/char/pl011/vendor/itertools/src/zip_longest.rs new file mode 100644 index 0000000000..cb9a7bacb2 --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/src/zip_longest.rs @@ -0,0 +1,83 @@ +use std::cmp::Ordering::{Equal, Greater, Less}; +use super::size_hint; +use std::iter::{Fuse, FusedIterator}; + +use crate::either_or_both::EitherOrBoth; + +// ZipLongest originally written by SimonSapin, +// and dedicated to itertools https://github.com/rust-lang/rust/pull/19283 + +/// An iterator which iterates two other iterators simultaneously +/// +/// This iterator is *fused*. +/// +/// See [`.zip_longest()`](crate::Itertools::zip_longest) for more information. +#[derive(Clone, Debug)] +#[must_use = "iterator adaptors are lazy and do nothing unless consumed"] +pub struct ZipLongest { + a: Fuse, + b: Fuse, +} + +/// Create a new `ZipLongest` iterator. 
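A brief usage sketch of `zip_longest`, which the code below constructs; the `main` function is illustrative only and assumes the `Itertools::zip_longest` method and the `EitherOrBoth` re-export documented in the vendored sources.

use itertools::{EitherOrBoth, Itertools};

fn main() {
    // Pairs elements until both sides are exhausted, tagging which side
    // produced each value.
    let out: Vec<_> = (0..3).zip_longest(0..5).collect();
    assert_eq!(out, vec![
        EitherOrBoth::Both(0, 0),
        EitherOrBoth::Both(1, 1),
        EitherOrBoth::Both(2, 2),
        EitherOrBoth::Right(3),
        EitherOrBoth::Right(4),
    ]);
}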
+pub fn zip_longest(a: T, b: U) -> ZipLongest + where T: Iterator, + U: Iterator +{ + ZipLongest { + a: a.fuse(), + b: b.fuse(), + } +} + +impl Iterator for ZipLongest + where T: Iterator, + U: Iterator +{ + type Item = EitherOrBoth; + + #[inline] + fn next(&mut self) -> Option { + match (self.a.next(), self.b.next()) { + (None, None) => None, + (Some(a), None) => Some(EitherOrBoth::Left(a)), + (None, Some(b)) => Some(EitherOrBoth::Right(b)), + (Some(a), Some(b)) => Some(EitherOrBoth::Both(a, b)), + } + } + + #[inline] + fn size_hint(&self) -> (usize, Option) { + size_hint::max(self.a.size_hint(), self.b.size_hint()) + } +} + +impl DoubleEndedIterator for ZipLongest + where T: DoubleEndedIterator + ExactSizeIterator, + U: DoubleEndedIterator + ExactSizeIterator +{ + #[inline] + fn next_back(&mut self) -> Option { + match self.a.len().cmp(&self.b.len()) { + Equal => match (self.a.next_back(), self.b.next_back()) { + (None, None) => None, + (Some(a), Some(b)) => Some(EitherOrBoth::Both(a, b)), + // These can only happen if .len() is inconsistent with .next_back() + (Some(a), None) => Some(EitherOrBoth::Left(a)), + (None, Some(b)) => Some(EitherOrBoth::Right(b)), + }, + Greater => self.a.next_back().map(EitherOrBoth::Left), + Less => self.b.next_back().map(EitherOrBoth::Right), + } + } +} + +impl ExactSizeIterator for ZipLongest + where T: ExactSizeIterator, + U: ExactSizeIterator +{} + +impl FusedIterator for ZipLongest + where T: Iterator, + U: Iterator +{} diff --git a/rust/hw/char/pl011/vendor/itertools/src/ziptuple.rs b/rust/hw/char/pl011/vendor/itertools/src/ziptuple.rs new file mode 100644 index 0000000000..6d3a584c49 --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/src/ziptuple.rs @@ -0,0 +1,138 @@ +use super::size_hint; + +/// See [`multizip`] for more information. +#[derive(Clone, Debug)] +#[must_use = "iterator adaptors are lazy and do nothing unless consumed"] +pub struct Zip { + t: T, +} + +/// An iterator that generalizes *.zip()* and allows running multiple iterators in lockstep. +/// +/// The iterator `Zip<(I, J, ..., M)>` is formed from a tuple of iterators (or values that +/// implement [`IntoIterator`]) and yields elements +/// until any of the subiterators yields `None`. +/// +/// The iterator element type is a tuple like like `(A, B, ..., E)` where `A` to `E` are the +/// element types of the subiterator. +/// +/// **Note:** The result of this macro is a value of a named type (`Zip<(I, J, +/// ..)>` of each component iterator `I, J, ...`) if each component iterator is +/// nameable. +/// +/// Prefer [`izip!()`] over `multizip` for the performance benefits of using the +/// standard library `.zip()`. Prefer `multizip` if a nameable type is needed. +/// +/// ``` +/// use itertools::multizip; +/// +/// // iterate over three sequences side-by-side +/// let mut results = [0, 0, 0, 0]; +/// let inputs = [3, 7, 9, 6]; +/// +/// for (r, index, input) in multizip((&mut results, 0..10, &inputs)) { +/// *r = index * 10 + input; +/// } +/// +/// assert_eq!(results, [0 + 3, 10 + 7, 29, 36]); +/// ``` +/// [`izip!()`]: crate::izip +pub fn multizip(t: U) -> Zip + where Zip: From, + Zip: Iterator, +{ + Zip::from(t) +} + +macro_rules! 
impl_zip_iter { + ($($B:ident),*) => ( + #[allow(non_snake_case)] + impl<$($B: IntoIterator),*> From<($($B,)*)> for Zip<($($B::IntoIter,)*)> { + fn from(t: ($($B,)*)) -> Self { + let ($($B,)*) = t; + Zip { t: ($($B.into_iter(),)*) } + } + } + + #[allow(non_snake_case)] + #[allow(unused_assignments)] + impl<$($B),*> Iterator for Zip<($($B,)*)> + where + $( + $B: Iterator, + )* + { + type Item = ($($B::Item,)*); + + fn next(&mut self) -> Option + { + let ($(ref mut $B,)*) = self.t; + + // NOTE: Just like iter::Zip, we check the iterators + // for None in order. We may finish unevenly (some + // iterators gave n + 1 elements, some only n). + $( + let $B = match $B.next() { + None => return None, + Some(elt) => elt + }; + )* + Some(($($B,)*)) + } + + fn size_hint(&self) -> (usize, Option) + { + let sh = (::std::usize::MAX, None); + let ($(ref $B,)*) = self.t; + $( + let sh = size_hint::min($B.size_hint(), sh); + )* + sh + } + } + + #[allow(non_snake_case)] + impl<$($B),*> ExactSizeIterator for Zip<($($B,)*)> where + $( + $B: ExactSizeIterator, + )* + { } + + #[allow(non_snake_case)] + impl<$($B),*> DoubleEndedIterator for Zip<($($B,)*)> where + $( + $B: DoubleEndedIterator + ExactSizeIterator, + )* + { + #[inline] + fn next_back(&mut self) -> Option { + let ($(ref mut $B,)*) = self.t; + let size = *[$( $B.len(), )*].iter().min().unwrap(); + + $( + if $B.len() != size { + for _ in 0..$B.len() - size { $B.next_back(); } + } + )* + + match ($($B.next_back(),)*) { + ($(Some($B),)*) => Some(($($B,)*)), + _ => None, + } + } + } + ); +} + +impl_zip_iter!(A); +impl_zip_iter!(A, B); +impl_zip_iter!(A, B, C); +impl_zip_iter!(A, B, C, D); +impl_zip_iter!(A, B, C, D, E); +impl_zip_iter!(A, B, C, D, E, F); +impl_zip_iter!(A, B, C, D, E, F, G); +impl_zip_iter!(A, B, C, D, E, F, G, H); +impl_zip_iter!(A, B, C, D, E, F, G, H, I); +impl_zip_iter!(A, B, C, D, E, F, G, H, I, J); +impl_zip_iter!(A, B, C, D, E, F, G, H, I, J, K); +impl_zip_iter!(A, B, C, D, E, F, G, H, I, J, K, L); diff --git a/rust/hw/char/pl011/vendor/itertools/tests/adaptors_no_collect.rs b/rust/hw/char/pl011/vendor/itertools/tests/adaptors_no_collect.rs new file mode 100644 index 0000000000..103db23f1e --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/tests/adaptors_no_collect.rs @@ -0,0 +1,46 @@ +use itertools::Itertools; + +struct PanickingCounter { + curr: usize, + max: usize, +} + +impl Iterator for PanickingCounter { + type Item = (); + + fn next(&mut self) -> Option { + self.curr += 1; + + assert_ne!( + self.curr, self.max, + "Input iterator reached maximum of {} suggesting collection by adaptor", + self.max + ); + + Some(()) + } +} + +fn no_collect_test(to_adaptor: T) + where A: Iterator, T: Fn(PanickingCounter) -> A +{ + let counter = PanickingCounter { curr: 0, max: 10_000 }; + let adaptor = to_adaptor(counter); + + for _ in adaptor.take(5) {} +} + +#[test] +fn permutations_no_collect() { + no_collect_test(|iter| iter.permutations(5)) +} + +#[test] +fn combinations_no_collect() { + no_collect_test(|iter| iter.combinations(5)) +} + +#[test] +fn combinations_with_replacement_no_collect() { + no_collect_test(|iter| iter.combinations_with_replacement(5)) +} \ No newline at end of file diff --git a/rust/hw/char/pl011/vendor/itertools/tests/flatten_ok.rs b/rust/hw/char/pl011/vendor/itertools/tests/flatten_ok.rs new file mode 100644 index 0000000000..bf835b5d70 --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/tests/flatten_ok.rs @@ -0,0 +1,76 @@ +use itertools::{assert_equal, Itertools}; +use std::{ops::Range, vec::IntoIter}; + 
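The tests that follow exercise `flatten_ok`, whose implementation is not part of this section; as a hedged reference, the sketch below shows the behavior they rely on (illustrative `main` and data, assuming the `Itertools::flatten_ok` method).

use itertools::Itertools;

fn main() {
    // Ok values are flattened element by element; Err values pass through.
    let data = vec![Ok(vec![1, 2]), Err("e"), Ok(vec![3])];
    let flat: Vec<Result<i32, &str>> = data.into_iter().flatten_ok().collect();
    assert_eq!(flat, vec![Ok(1), Ok(2), Err("e"), Ok(3)]);
}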
+fn mix_data() -> IntoIter, bool>> { + vec![Ok(0..2), Err(false), Ok(2..4), Err(true), Ok(4..6)].into_iter() +} + +fn ok_data() -> IntoIter, bool>> { + vec![Ok(0..2), Ok(2..4), Ok(4..6)].into_iter() +} + +#[test] +fn flatten_ok_mixed_expected_forward() { + assert_equal( + mix_data().flatten_ok(), + vec![ + Ok(0), + Ok(1), + Err(false), + Ok(2), + Ok(3), + Err(true), + Ok(4), + Ok(5), + ], + ); +} + +#[test] +fn flatten_ok_mixed_expected_reverse() { + assert_equal( + mix_data().flatten_ok().rev(), + vec![ + Ok(5), + Ok(4), + Err(true), + Ok(3), + Ok(2), + Err(false), + Ok(1), + Ok(0), + ], + ); +} + +#[test] +fn flatten_ok_collect_mixed_forward() { + assert_eq!( + mix_data().flatten_ok().collect::, _>>(), + Err(false) + ); +} + +#[test] +fn flatten_ok_collect_mixed_reverse() { + assert_eq!( + mix_data().flatten_ok().rev().collect::, _>>(), + Err(true) + ); +} + +#[test] +fn flatten_ok_collect_ok_forward() { + assert_eq!( + ok_data().flatten_ok().collect::, _>>(), + Ok((0..6).collect()) + ); +} + +#[test] +fn flatten_ok_collect_ok_reverse() { + assert_eq!( + ok_data().flatten_ok().rev().collect::, _>>(), + Ok((0..6).rev().collect()) + ); +} diff --git a/rust/hw/char/pl011/vendor/itertools/tests/macros_hygiene.rs b/rust/hw/char/pl011/vendor/itertools/tests/macros_hygiene.rs new file mode 100644 index 0000000000..d1111245d6 --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/tests/macros_hygiene.rs @@ -0,0 +1,13 @@ +#[test] +fn iproduct_hygiene() { + let _ = itertools::iproduct!(0..6); + let _ = itertools::iproduct!(0..6, 0..9); + let _ = itertools::iproduct!(0..6, 0..9, 0..12); +} + +#[test] +fn izip_hygiene() { + let _ = itertools::izip!(0..6); + let _ = itertools::izip!(0..6, 0..9); + let _ = itertools::izip!(0..6, 0..9, 0..12); +} diff --git a/rust/hw/char/pl011/vendor/itertools/tests/merge_join.rs b/rust/hw/char/pl011/vendor/itertools/tests/merge_join.rs new file mode 100644 index 0000000000..3280b7d4ec --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/tests/merge_join.rs @@ -0,0 +1,108 @@ +use itertools::EitherOrBoth; +use itertools::free::merge_join_by; + +#[test] +fn empty() { + let left: Vec = vec![]; + let right: Vec = vec![]; + let expected_result: Vec> = vec![]; + let actual_result = merge_join_by(left, right, |l, r| l.cmp(r)) + .collect::>(); + assert_eq!(expected_result, actual_result); +} + +#[test] +fn left_only() { + let left: Vec = vec![1,2,3]; + let right: Vec = vec![]; + let expected_result: Vec> = vec![ + EitherOrBoth::Left(1), + EitherOrBoth::Left(2), + EitherOrBoth::Left(3) + ]; + let actual_result = merge_join_by(left, right, |l, r| l.cmp(r)) + .collect::>(); + assert_eq!(expected_result, actual_result); +} + +#[test] +fn right_only() { + let left: Vec = vec![]; + let right: Vec = vec![1,2,3]; + let expected_result: Vec> = vec![ + EitherOrBoth::Right(1), + EitherOrBoth::Right(2), + EitherOrBoth::Right(3) + ]; + let actual_result = merge_join_by(left, right, |l, r| l.cmp(r)) + .collect::>(); + assert_eq!(expected_result, actual_result); +} + +#[test] +fn first_left_then_right() { + let left: Vec = vec![1,2,3]; + let right: Vec = vec![4,5,6]; + let expected_result: Vec> = vec![ + EitherOrBoth::Left(1), + EitherOrBoth::Left(2), + EitherOrBoth::Left(3), + EitherOrBoth::Right(4), + EitherOrBoth::Right(5), + EitherOrBoth::Right(6) + ]; + let actual_result = merge_join_by(left, right, |l, r| l.cmp(r)) + .collect::>(); + assert_eq!(expected_result, actual_result); +} + +#[test] +fn first_right_then_left() { + let left: Vec = vec![4,5,6]; + let right: Vec = 
vec![1,2,3]; + let expected_result: Vec> = vec![ + EitherOrBoth::Right(1), + EitherOrBoth::Right(2), + EitherOrBoth::Right(3), + EitherOrBoth::Left(4), + EitherOrBoth::Left(5), + EitherOrBoth::Left(6) + ]; + let actual_result = merge_join_by(left, right, |l, r| l.cmp(r)) + .collect::>(); + assert_eq!(expected_result, actual_result); +} + +#[test] +fn interspersed_left_and_right() { + let left: Vec = vec![1,3,5]; + let right: Vec = vec![2,4,6]; + let expected_result: Vec> = vec![ + EitherOrBoth::Left(1), + EitherOrBoth::Right(2), + EitherOrBoth::Left(3), + EitherOrBoth::Right(4), + EitherOrBoth::Left(5), + EitherOrBoth::Right(6) + ]; + let actual_result = merge_join_by(left, right, |l, r| l.cmp(r)) + .collect::>(); + assert_eq!(expected_result, actual_result); +} + +#[test] +fn overlapping_left_and_right() { + let left: Vec = vec![1,3,4,6]; + let right: Vec = vec![2,3,4,5]; + let expected_result: Vec> = vec![ + EitherOrBoth::Left(1), + EitherOrBoth::Right(2), + EitherOrBoth::Both(3, 3), + EitherOrBoth::Both(4, 4), + EitherOrBoth::Right(5), + EitherOrBoth::Left(6) + ]; + let actual_result = merge_join_by(left, right, |l, r| l.cmp(r)) + .collect::>(); + assert_eq!(expected_result, actual_result); +} diff --git a/rust/hw/char/pl011/vendor/itertools/tests/peeking_take_while.rs b/rust/hw/char/pl011/vendor/itertools/tests/peeking_take_while.rs new file mode 100644 index 0000000000..5be97271dd --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/tests/peeking_take_while.rs @@ -0,0 +1,69 @@ +use itertools::Itertools; +use itertools::{put_back, put_back_n}; + +#[test] +fn peeking_take_while_peekable() { + let mut r = (0..10).peekable(); + r.peeking_take_while(|x| *x <= 3).count(); + assert_eq!(r.next(), Some(4)); +} + +#[test] +fn peeking_take_while_put_back() { + let mut r = put_back(0..10); + r.peeking_take_while(|x| *x <= 3).count(); + assert_eq!(r.next(), Some(4)); + r.peeking_take_while(|_| true).count(); + assert_eq!(r.next(), None); +} + +#[test] +fn peeking_take_while_put_back_n() { + let mut r = put_back_n(6..10); + for elt in (0..6).rev() { + r.put_back(elt); + } + r.peeking_take_while(|x| *x <= 3).count(); + assert_eq!(r.next(), Some(4)); + r.peeking_take_while(|_| true).count(); + assert_eq!(r.next(), None); +} + +#[test] +fn peeking_take_while_slice_iter() { + let v = [1, 2, 3, 4, 5, 6]; + let mut r = v.iter(); + r.peeking_take_while(|x| **x <= 3).count(); + assert_eq!(r.next(), Some(&4)); + r.peeking_take_while(|_| true).count(); + assert_eq!(r.next(), None); +} + +#[test] +fn peeking_take_while_slice_iter_rev() { + let v = [1, 2, 3, 4, 5, 6]; + let mut r = v.iter().rev(); + r.peeking_take_while(|x| **x >= 3).count(); + assert_eq!(r.next(), Some(&2)); + r.peeking_take_while(|_| true).count(); + assert_eq!(r.next(), None); +} + +#[test] +fn peeking_take_while_nested() { + let mut xs = (0..10).peekable(); + let ys: Vec<_> = xs + .peeking_take_while(|x| *x < 6) + .peeking_take_while(|x| *x != 3) + .collect(); + assert_eq!(ys, vec![0, 1, 2]); + assert_eq!(xs.next(), Some(3)); + + let mut xs = (4..10).peekable(); + let ys: Vec<_> = xs + .peeking_take_while(|x| *x != 3) + .peeking_take_while(|x| *x < 6) + .collect(); + assert_eq!(ys, vec![4, 5]); + assert_eq!(xs.next(), Some(6)); +} diff --git a/rust/hw/char/pl011/vendor/itertools/tests/quick.rs b/rust/hw/char/pl011/vendor/itertools/tests/quick.rs new file mode 100644 index 0000000000..c19af6c1ea --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/tests/quick.rs @@ -0,0 +1,1849 @@ +//! 
The purpose of these tests is to cover corner cases of iterators +//! and adaptors. +//! +//! In particular we test the tedious size_hint and exact size correctness. + +use quickcheck as qc; +use std::default::Default; +use std::num::Wrapping; +use std::ops::Range; +use std::cmp::{max, min, Ordering}; +use std::collections::{HashMap, HashSet}; +use itertools::Itertools; +use itertools::{ + multizip, + EitherOrBoth, + iproduct, + izip, +}; +use itertools::free::{ + cloned, + enumerate, + multipeek, + peek_nth, + put_back, + put_back_n, + rciter, + zip, + zip_eq, +}; + +use rand::Rng; +use rand::seq::SliceRandom; +use quickcheck::TestResult; + +/// Trait for size hint modifier types +trait HintKind: Copy + Send + qc::Arbitrary { + fn loosen_bounds(&self, org_hint: (usize, Option)) -> (usize, Option); +} + +/// Exact size hint variant that leaves hints unchanged +#[derive(Clone, Copy, Debug)] +struct Exact {} + +impl HintKind for Exact { + fn loosen_bounds(&self, org_hint: (usize, Option)) -> (usize, Option) { + org_hint + } +} + +impl qc::Arbitrary for Exact { + fn arbitrary(_: &mut G) -> Self { + Exact {} + } +} + +/// Inexact size hint variant to simulate imprecise (but valid) size hints +/// +/// Will always decrease the lower bound and increase the upper bound +/// of the size hint by set amounts. +#[derive(Clone, Copy, Debug)] +struct Inexact { + underestimate: usize, + overestimate: usize, +} + +impl HintKind for Inexact { + fn loosen_bounds(&self, org_hint: (usize, Option)) -> (usize, Option) { + let (org_lower, org_upper) = org_hint; + (org_lower.saturating_sub(self.underestimate), + org_upper.and_then(move |x| x.checked_add(self.overestimate))) + } +} + +impl qc::Arbitrary for Inexact { + fn arbitrary(g: &mut G) -> Self { + let ue_value = usize::arbitrary(g); + let oe_value = usize::arbitrary(g); + // Compensate for quickcheck using extreme values too rarely + let ue_choices = &[0, ue_value, usize::max_value()]; + let oe_choices = &[0, oe_value, usize::max_value()]; + Inexact { + underestimate: *ue_choices.choose(g).unwrap(), + overestimate: *oe_choices.choose(g).unwrap(), + } + } + + fn shrink(&self) -> Box> { + let underestimate_value = self.underestimate; + let overestimate_value = self.overestimate; + Box::new( + underestimate_value.shrink().flat_map(move |ue_value| + overestimate_value.shrink().map(move |oe_value| + Inexact { + underestimate: ue_value, + overestimate: oe_value, + } + ) + ) + ) + } +} + +/// Our base iterator that we can impl Arbitrary for +/// +/// By default we'll return inexact bounds estimates for size_hint +/// to make tests harder to pass. +/// +/// NOTE: Iter is tricky and is not fused, to help catch bugs. +/// At the end it will return None once, then return Some(0), +/// then return None again. 
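+/// Concretely, once `Iter::new(0..1, Exact {})` has yielded `0`, repeated
+/// `next()` calls give `None`, then `Some(0)`, then `None`; the fused_*
+/// properties further below therefore wrap such iterators in `.fuse()` when
+/// they need `None` to be returned forever after the first `None`.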
+#[derive(Clone, Debug)] +struct Iter { + iterator: Range, + // fuse/done flag + fuse_flag: i32, + hint_kind: SK, +} + +impl Iter where HK: HintKind +{ + fn new(it: Range, hint_kind: HK) -> Self { + Iter { + iterator: it, + fuse_flag: 0, + hint_kind, + } + } +} + +impl Iterator for Iter + where Range: Iterator, + as Iterator>::Item: Default, + HK: HintKind, +{ + type Item = as Iterator>::Item; + + fn next(&mut self) -> Option + { + let elt = self.iterator.next(); + if elt.is_none() { + self.fuse_flag += 1; + // check fuse flag + if self.fuse_flag == 2 { + return Some(Default::default()) + } + } + elt + } + + fn size_hint(&self) -> (usize, Option) + { + let org_hint = self.iterator.size_hint(); + self.hint_kind.loosen_bounds(org_hint) + } +} + +impl DoubleEndedIterator for Iter + where Range: DoubleEndedIterator, + as Iterator>::Item: Default, + HK: HintKind +{ + fn next_back(&mut self) -> Option { self.iterator.next_back() } +} + +impl ExactSizeIterator for Iter where Range: ExactSizeIterator, + as Iterator>::Item: Default, +{ } + +impl qc::Arbitrary for Iter + where T: qc::Arbitrary, + HK: HintKind, +{ + fn arbitrary(g: &mut G) -> Self + { + Iter::new(T::arbitrary(g)..T::arbitrary(g), HK::arbitrary(g)) + } + + fn shrink(&self) -> Box>> + { + let r = self.iterator.clone(); + let hint_kind = self.hint_kind; + Box::new( + r.start.shrink().flat_map(move |a| + r.end.shrink().map(move |b| + Iter::new(a.clone()..b, hint_kind) + ) + ) + ) + } +} + +/// A meta-iterator which yields `Iter`s whose start/endpoints are +/// increased or decreased linearly on each iteration. +#[derive(Clone, Debug)] +struct ShiftRange { + range_start: i32, + range_end: i32, + start_step: i32, + end_step: i32, + iter_count: u32, + hint_kind: HK, +} + +impl Iterator for ShiftRange where HK: HintKind { + type Item = Iter; + + fn next(&mut self) -> Option { + if self.iter_count == 0 { + return None; + } + + let iter = Iter::new(self.range_start..self.range_end, self.hint_kind); + + self.range_start += self.start_step; + self.range_end += self.end_step; + self.iter_count -= 1; + + Some(iter) + } +} + +impl ExactSizeIterator for ShiftRange { } + +impl qc::Arbitrary for ShiftRange + where HK: HintKind +{ + fn arbitrary(g: &mut G) -> Self { + const MAX_STARTING_RANGE_DIFF: i32 = 32; + const MAX_STEP_MODULO: i32 = 8; + const MAX_ITER_COUNT: u32 = 3; + + let range_start = qc::Arbitrary::arbitrary(g); + let range_end = range_start + g.gen_range(0, MAX_STARTING_RANGE_DIFF + 1); + let start_step = g.gen_range(-MAX_STEP_MODULO, MAX_STEP_MODULO + 1); + let end_step = g.gen_range(-MAX_STEP_MODULO, MAX_STEP_MODULO + 1); + let iter_count = g.gen_range(0, MAX_ITER_COUNT + 1); + let hint_kind = qc::Arbitrary::arbitrary(g); + + ShiftRange { + range_start, + range_end, + start_step, + end_step, + iter_count, + hint_kind, + } + } +} + +fn correct_count(get_it: F) -> bool +where + I: Iterator, + F: Fn() -> I +{ + let mut counts = vec![get_it().count()]; + + 'outer: loop { + let mut it = get_it(); + + for _ in 0..(counts.len() - 1) { + #[allow(clippy::manual_assert)] + if it.next().is_none() { + panic!("Iterator shouldn't be finished, may not be deterministic"); + } + } + + if it.next().is_none() { + break 'outer; + } + + counts.push(it.count()); + } + + let total_actual_count = counts.len() - 1; + + for (i, returned_count) in counts.into_iter().enumerate() { + let actual_count = total_actual_count - i; + if actual_count != returned_count { + println!("Total iterations: {} True count: {} returned count: {}", i, actual_count, returned_count); + 
+ return false; + } + } + + true +} + +fn correct_size_hint(mut it: I) -> bool { + // record size hint at each iteration + let initial_hint = it.size_hint(); + let mut hints = Vec::with_capacity(initial_hint.0 + 1); + hints.push(initial_hint); + while let Some(_) = it.next() { + hints.push(it.size_hint()) + } + + let mut true_count = hints.len(); // start off +1 too much + + // check all the size hints + for &(low, hi) in &hints { + true_count -= 1; + if low > true_count || + (hi.is_some() && hi.unwrap() < true_count) + { + println!("True size: {:?}, size hint: {:?}", true_count, (low, hi)); + //println!("All hints: {:?}", hints); + return false + } + } + true +} + +fn exact_size(mut it: I) -> bool { + // check every iteration + let (mut low, mut hi) = it.size_hint(); + if Some(low) != hi { return false; } + while let Some(_) = it.next() { + let (xlow, xhi) = it.size_hint(); + if low != xlow + 1 { return false; } + low = xlow; + hi = xhi; + if Some(low) != hi { return false; } + } + let (low, hi) = it.size_hint(); + low == 0 && hi == Some(0) +} + +// Exact size for this case, without ExactSizeIterator +fn exact_size_for_this(mut it: I) -> bool { + // check every iteration + let (mut low, mut hi) = it.size_hint(); + if Some(low) != hi { return false; } + while let Some(_) = it.next() { + let (xlow, xhi) = it.size_hint(); + if low != xlow + 1 { return false; } + low = xlow; + hi = xhi; + if Some(low) != hi { return false; } + } + let (low, hi) = it.size_hint(); + low == 0 && hi == Some(0) +} + +/* + * NOTE: Range is broken! + * (all signed ranges are) +#[quickcheck] +fn size_range_i8(a: Iter) -> bool { + exact_size(a) +} + +#[quickcheck] +fn size_range_i16(a: Iter) -> bool { + exact_size(a) +} + +#[quickcheck] +fn size_range_u8(a: Iter) -> bool { + exact_size(a) +} + */ + +macro_rules! quickcheck { + // accept several property function definitions + // The property functions can use pattern matching and `mut` as usual + // in the function arguments, but the functions can not be generic. + {$($(#$attr:tt)* fn $fn_name:ident($($arg:tt)*) -> $ret:ty { $($code:tt)* })*} => ( + $( + #[test] + $(#$attr)* + fn $fn_name() { + fn prop($($arg)*) -> $ret { + $($code)* + } + ::quickcheck::quickcheck(quickcheck!(@fn prop [] $($arg)*)); + } + )* + ); + // parse argument list (with patterns allowed) into prop as fn(_, _) -> _ + (@fn $f:ident [$($t:tt)*]) => { + $f as fn($($t),*) -> _ + }; + (@fn $f:ident [$($p:tt)*] : $($tail:tt)*) => { + quickcheck!(@fn $f [$($p)* _] $($tail)*) + }; + (@fn $f:ident [$($p:tt)*] $t:tt $($tail:tt)*) => { + quickcheck!(@fn $f [$($p)*] $($tail)*) + }; +} + +quickcheck! { + + fn size_product(a: Iter, b: Iter) -> bool { + correct_size_hint(a.cartesian_product(b)) + } + fn size_product3(a: Iter, b: Iter, c: Iter) -> bool { + correct_size_hint(iproduct!(a, b, c)) + } + + fn correct_cartesian_product3(a: Iter, b: Iter, c: Iter, + take_manual: usize) -> () + { + // test correctness of iproduct through regular iteration (take) + // and through fold. 
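+        // (Draining the first `take_manual` items with `next` via `take` and
+        // the rest with `fold` exercises both code paths, since itertools
+        // adaptors often specialize `fold` separately from `next`; both must
+        // agree with the flat_map-based reference answer.)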
+ let ac = a.clone(); + let br = &b.clone(); + let cr = &c.clone(); + let answer: Vec<_> = ac.flat_map(move |ea| br.clone().flat_map(move |eb| cr.clone().map(move |ec| (ea, eb, ec)))).collect(); + let mut product_iter = iproduct!(a, b, c); + let mut actual = Vec::new(); + + actual.extend((&mut product_iter).take(take_manual)); + if actual.len() == take_manual { + product_iter.fold((), |(), elt| actual.push(elt)); + } + assert_eq!(answer, actual); + } + + fn size_multi_product(a: ShiftRange) -> bool { + correct_size_hint(a.multi_cartesian_product()) + } + fn correct_multi_product3(a: ShiftRange, take_manual: usize) -> () { + // Fix no. of iterators at 3 + let a = ShiftRange { iter_count: 3, ..a }; + + // test correctness of MultiProduct through regular iteration (take) + // and through fold. + let mut iters = a.clone(); + let i0 = iters.next().unwrap(); + let i1r = &iters.next().unwrap(); + let i2r = &iters.next().unwrap(); + let answer: Vec<_> = i0.flat_map(move |ei0| i1r.clone().flat_map(move |ei1| i2r.clone().map(move |ei2| vec![ei0, ei1, ei2]))).collect(); + let mut multi_product = a.clone().multi_cartesian_product(); + let mut actual = Vec::new(); + + actual.extend((&mut multi_product).take(take_manual)); + if actual.len() == take_manual { + multi_product.fold((), |(), elt| actual.push(elt)); + } + assert_eq!(answer, actual); + + assert_eq!(answer.into_iter().last(), a.multi_cartesian_product().last()); + } + + #[allow(deprecated)] + fn size_step(a: Iter, s: usize) -> bool { + let mut s = s; + if s == 0 { + s += 1; // never zero + } + let filt = a.clone().dedup(); + correct_size_hint(filt.step(s)) && + exact_size(a.step(s)) + } + + #[allow(deprecated)] + fn equal_step(a: Iter, s: usize) -> bool { + let mut s = s; + if s == 0 { + s += 1; // never zero + } + let mut i = 0; + itertools::equal(a.clone().step(s), a.filter(|_| { + let keep = i % s == 0; + i += 1; + keep + })) + } + + #[allow(deprecated)] + fn equal_step_vec(a: Vec, s: usize) -> bool { + let mut s = s; + if s == 0 { + s += 1; // never zero + } + let mut i = 0; + itertools::equal(a.iter().step(s), a.iter().filter(|_| { + let keep = i % s == 0; + i += 1; + keep + })) + } + + fn size_multipeek(a: Iter, s: u8) -> bool { + let mut it = multipeek(a); + // peek a few times + for _ in 0..s { + it.peek(); + } + exact_size(it) + } + + fn size_peek_nth(a: Iter, s: u8) -> bool { + let mut it = peek_nth(a); + // peek a few times + for n in 0..s { + it.peek_nth(n as usize); + } + exact_size(it) + } + + fn equal_merge(mut a: Vec, mut b: Vec) -> bool { + a.sort(); + b.sort(); + let mut merged = a.clone(); + merged.extend(b.iter().cloned()); + merged.sort(); + itertools::equal(&merged, a.iter().merge(&b)) + } + fn size_merge(a: Iter, b: Iter) -> bool { + correct_size_hint(a.merge(b)) + } + fn size_zip(a: Iter, b: Iter, c: Iter) -> bool { + let filt = a.clone().dedup(); + correct_size_hint(multizip((filt, b.clone(), c.clone()))) && + exact_size(multizip((a, b, c))) + } + fn size_zip_rc(a: Iter, b: Iter) -> bool { + let rc = rciter(a); + correct_size_hint(multizip((&rc, &rc, b))) + } + + fn size_zip_macro(a: Iter, b: Iter, c: Iter) -> bool { + let filt = a.clone().dedup(); + correct_size_hint(izip!(filt, b.clone(), c.clone())) && + exact_size(izip!(a, b, c)) + } + fn equal_kmerge(mut a: Vec, mut b: Vec, mut c: Vec) -> bool { + use itertools::free::kmerge; + a.sort(); + b.sort(); + c.sort(); + let mut merged = a.clone(); + merged.extend(b.iter().cloned()); + merged.extend(c.iter().cloned()); + merged.sort(); + 
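+        // kmerge expects each input iterator to be sorted and merges them
+        // into one ascending sequence, so the result must equal sorting the
+        // concatenation of the inputs.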
itertools::equal(merged.into_iter(), kmerge(vec![a, b, c])) + } + + // Any number of input iterators + fn equal_kmerge_2(mut inputs: Vec>) -> bool { + use itertools::free::kmerge; + // sort the inputs + for input in &mut inputs { + input.sort(); + } + let mut merged = inputs.concat(); + merged.sort(); + itertools::equal(merged.into_iter(), kmerge(inputs)) + } + + // Any number of input iterators + fn equal_kmerge_by_ge(mut inputs: Vec>) -> bool { + // sort the inputs + for input in &mut inputs { + input.sort(); + input.reverse(); + } + let mut merged = inputs.concat(); + merged.sort(); + merged.reverse(); + itertools::equal(merged.into_iter(), + inputs.into_iter().kmerge_by(|x, y| x >= y)) + } + + // Any number of input iterators + fn equal_kmerge_by_lt(mut inputs: Vec>) -> bool { + // sort the inputs + for input in &mut inputs { + input.sort(); + } + let mut merged = inputs.concat(); + merged.sort(); + itertools::equal(merged.into_iter(), + inputs.into_iter().kmerge_by(|x, y| x < y)) + } + + // Any number of input iterators + fn equal_kmerge_by_le(mut inputs: Vec>) -> bool { + // sort the inputs + for input in &mut inputs { + input.sort(); + } + let mut merged = inputs.concat(); + merged.sort(); + itertools::equal(merged.into_iter(), + inputs.into_iter().kmerge_by(|x, y| x <= y)) + } + fn size_kmerge(a: Iter, b: Iter, c: Iter) -> bool { + use itertools::free::kmerge; + correct_size_hint(kmerge(vec![a, b, c])) + } + fn equal_zip_eq(a: Vec, b: Vec) -> bool { + let len = std::cmp::min(a.len(), b.len()); + let a = &a[..len]; + let b = &b[..len]; + itertools::equal(zip_eq(a, b), zip(a, b)) + } + fn size_zip_longest(a: Iter, b: Iter) -> bool { + let filt = a.clone().dedup(); + let filt2 = b.clone().dedup(); + correct_size_hint(filt.zip_longest(b.clone())) && + correct_size_hint(a.clone().zip_longest(filt2)) && + exact_size(a.zip_longest(b)) + } + fn size_2_zip_longest(a: Iter, b: Iter) -> bool { + let it = a.clone().zip_longest(b.clone()); + let jt = a.clone().zip_longest(b.clone()); + itertools::equal(a, + it.filter_map(|elt| match elt { + EitherOrBoth::Both(x, _) => Some(x), + EitherOrBoth::Left(x) => Some(x), + _ => None, + } + )) + && + itertools::equal(b, + jt.filter_map(|elt| match elt { + EitherOrBoth::Both(_, y) => Some(y), + EitherOrBoth::Right(y) => Some(y), + _ => None, + } + )) + } + fn size_interleave(a: Iter, b: Iter) -> bool { + correct_size_hint(a.interleave(b)) + } + fn exact_interleave(a: Iter, b: Iter) -> bool { + exact_size_for_this(a.interleave(b)) + } + fn size_interleave_shortest(a: Iter, b: Iter) -> bool { + correct_size_hint(a.interleave_shortest(b)) + } + fn exact_interleave_shortest(a: Vec<()>, b: Vec<()>) -> bool { + exact_size_for_this(a.iter().interleave_shortest(&b)) + } + fn size_intersperse(a: Iter, x: i16) -> bool { + correct_size_hint(a.intersperse(x)) + } + fn equal_intersperse(a: Vec, x: i32) -> bool { + let mut inter = false; + let mut i = 0; + for elt in a.iter().cloned().intersperse(x) { + if inter { + if elt != x { return false } + } else { + if elt != a[i] { return false } + i += 1; + } + inter = !inter; + } + true + } + + fn equal_combinations_2(a: Vec) -> bool { + let mut v = Vec::new(); + for (i, x) in enumerate(&a) { + for y in &a[i + 1..] 
{ + v.push((x, y)); + } + } + itertools::equal(a.iter().tuple_combinations::<(_, _)>(), v) + } + + fn collect_tuple_matches_size(a: Iter) -> bool { + let size = a.clone().count(); + a.collect_tuple::<(_, _, _)>().is_some() == (size == 3) + } + + fn correct_permutations(vals: HashSet, k: usize) -> () { + // Test permutations only on iterators of distinct integers, to prevent + // false positives. + + const MAX_N: usize = 5; + + let n = min(vals.len(), MAX_N); + let vals: HashSet = vals.into_iter().take(n).collect(); + + let perms = vals.iter().permutations(k); + + let mut actual = HashSet::new(); + + for perm in perms { + assert_eq!(perm.len(), k); + + let all_items_valid = perm.iter().all(|p| vals.contains(p)); + assert!(all_items_valid, "perm contains value not from input: {:?}", perm); + + // Check that all perm items are distinct + let distinct_len = { + let perm_set: HashSet<_> = perm.iter().collect(); + perm_set.len() + }; + assert_eq!(perm.len(), distinct_len); + + // Check that the perm is new + assert!(actual.insert(perm.clone()), "perm already encountered: {:?}", perm); + } + } + + fn permutations_lexic_order(a: usize, b: usize) -> () { + let a = a % 6; + let b = b % 6; + + let n = max(a, b); + let k = min (a, b); + + let expected_first: Vec = (0..k).collect(); + let expected_last: Vec = ((n - k)..n).rev().collect(); + + let mut perms = (0..n).permutations(k); + + let mut curr_perm = match perms.next() { + Some(p) => p, + None => { return; } + }; + + assert_eq!(expected_first, curr_perm); + + for next_perm in perms { + assert!( + next_perm > curr_perm, + "next perm isn't greater-than current; next_perm={:?} curr_perm={:?} n={}", + next_perm, curr_perm, n + ); + + curr_perm = next_perm; + } + + assert_eq!(expected_last, curr_perm); + + } + + fn permutations_count(n: usize, k: usize) -> bool { + let n = n % 6; + + correct_count(|| (0..n).permutations(k)) + } + + fn permutations_size(a: Iter, k: usize) -> bool { + correct_size_hint(a.take(5).permutations(k)) + } + + fn permutations_k0_yields_once(n: usize) -> () { + let k = 0; + let expected: Vec> = vec![vec![]]; + let actual = (0..n).permutations(k).collect_vec(); + + assert_eq!(expected, actual); + } +} + +quickcheck! { + fn dedup_via_coalesce(a: Vec) -> bool { + let mut b = a.clone(); + b.dedup(); + itertools::equal( + &b, + a + .iter() + .coalesce(|x, y| { + if x==y { + Ok(x) + } else { + Err((x, y)) + } + }) + .fold(vec![], |mut v, n| { + v.push(n); + v + }) + ) + } +} + +quickcheck! { + fn equal_dedup(a: Vec) -> bool { + let mut b = a.clone(); + b.dedup(); + itertools::equal(&b, a.iter().dedup()) + } +} + +quickcheck! { + fn equal_dedup_by(a: Vec<(i32, i32)>) -> bool { + let mut b = a.clone(); + b.dedup_by(|x, y| x.0==y.0); + itertools::equal(&b, a.iter().dedup_by(|x, y| x.0==y.0)) + } +} + +quickcheck! { + fn size_dedup(a: Vec) -> bool { + correct_size_hint(a.iter().dedup()) + } +} + +quickcheck! { + fn size_dedup_by(a: Vec<(i32, i32)>) -> bool { + correct_size_hint(a.iter().dedup_by(|x, y| x.0==y.0)) + } +} + +quickcheck! { + fn exact_repeatn((n, x): (usize, i32)) -> bool { + let it = itertools::repeat_n(x, n); + exact_size(it) + } +} + +quickcheck! { + fn size_put_back(a: Vec, x: Option) -> bool { + let mut it = put_back(a.into_iter()); + match x { + Some(t) => it.put_back(t), + None => {} + } + correct_size_hint(it) + } +} + +quickcheck! { + fn size_put_backn(a: Vec, b: Vec) -> bool { + let mut it = put_back_n(a.into_iter()); + for elt in b { + it.put_back(elt) + } + correct_size_hint(it) + } +} + +quickcheck! 
{ + fn merge_join_by_ordering_vs_bool(a: Vec, b: Vec) -> bool { + use either::Either; + use itertools::free::merge_join_by; + let mut has_equal = false; + let it_ord = merge_join_by(a.clone(), b.clone(), Ord::cmp).flat_map(|v| match v { + EitherOrBoth::Both(l, r) => { + has_equal = true; + vec![Either::Left(l), Either::Right(r)] + } + EitherOrBoth::Left(l) => vec![Either::Left(l)], + EitherOrBoth::Right(r) => vec![Either::Right(r)], + }); + let it_bool = merge_join_by(a, b, PartialOrd::le); + itertools::equal(it_ord, it_bool) || has_equal + } + fn merge_join_by_bool_unwrapped_is_merge_by(a: Vec, b: Vec) -> bool { + use either::Either; + use itertools::free::merge_join_by; + let it = a.clone().into_iter().merge_by(b.clone(), PartialOrd::ge); + let it_join = merge_join_by(a, b, PartialOrd::ge).map(Either::into_inner); + itertools::equal(it, it_join) + } +} + +quickcheck! { + fn size_tee(a: Vec) -> bool { + let (mut t1, mut t2) = a.iter().tee(); + t1.next(); + t1.next(); + t2.next(); + exact_size(t1) && exact_size(t2) + } +} + +quickcheck! { + fn size_tee_2(a: Vec) -> bool { + let (mut t1, mut t2) = a.iter().dedup().tee(); + t1.next(); + t1.next(); + t2.next(); + correct_size_hint(t1) && correct_size_hint(t2) + } +} + +quickcheck! { + fn size_take_while_ref(a: Vec, stop: u8) -> bool { + correct_size_hint(a.iter().take_while_ref(|x| **x != stop)) + } +} + +quickcheck! { + fn equal_partition(a: Vec) -> bool { + let mut a = a; + let mut ap = a.clone(); + let split_index = itertools::partition(&mut ap, |x| *x >= 0); + let parted = (0..split_index).all(|i| ap[i] >= 0) && + (split_index..a.len()).all(|i| ap[i] < 0); + + a.sort(); + ap.sort(); + parted && (a == ap) + } +} + +quickcheck! { + fn size_combinations(it: Iter) -> bool { + correct_size_hint(it.tuple_combinations::<(_, _)>()) + } +} + +quickcheck! { + fn equal_combinations(it: Iter) -> bool { + let values = it.clone().collect_vec(); + let mut cmb = it.tuple_combinations(); + for i in 0..values.len() { + for j in i+1..values.len() { + let pair = (values[i], values[j]); + if pair != cmb.next().unwrap() { + return false; + } + } + } + cmb.next() == None + } +} + +quickcheck! { + fn size_pad_tail(it: Iter, pad: u8) -> bool { + correct_size_hint(it.clone().pad_using(pad as usize, |_| 0)) && + correct_size_hint(it.dropping(1).rev().pad_using(pad as usize, |_| 0)) + } +} + +quickcheck! { + fn size_pad_tail2(it: Iter, pad: u8) -> bool { + exact_size(it.pad_using(pad as usize, |_| 0)) + } +} + +quickcheck! { + fn size_powerset(it: Iter) -> bool { + // Powerset cardinality gets large very quickly, limit input to keep test fast. + correct_size_hint(it.take(12).powerset()) + } +} + +quickcheck! { + fn size_duplicates(it: Iter) -> bool { + correct_size_hint(it.duplicates()) + } +} + +quickcheck! { + fn size_unique(it: Iter) -> bool { + correct_size_hint(it.unique()) + } + + fn count_unique(it: Vec, take_first: u8) -> () { + let answer = { + let mut v = it.clone(); + v.sort(); v.dedup(); + v.len() + }; + let mut iter = cloned(&it).unique(); + let first_count = (&mut iter).take(take_first as usize).count(); + let rest_count = iter.count(); + assert_eq!(answer, first_count + rest_count); + } +} + +quickcheck! { + fn fuzz_group_by_lazy_1(it: Iter) -> bool { + let jt = it.clone(); + let groups = it.group_by(|k| *k); + itertools::equal(jt, groups.into_iter().flat_map(|(_, x)| x)) + } +} + +quickcheck! 
{ + fn fuzz_group_by_lazy_2(data: Vec) -> bool { + let groups = data.iter().group_by(|k| *k / 10); + let res = itertools::equal(data.iter(), groups.into_iter().flat_map(|(_, x)| x)); + res + } +} + +quickcheck! { + fn fuzz_group_by_lazy_3(data: Vec) -> bool { + let grouper = data.iter().group_by(|k| *k / 10); + let groups = grouper.into_iter().collect_vec(); + let res = itertools::equal(data.iter(), groups.into_iter().flat_map(|(_, x)| x)); + res + } +} + +quickcheck! { + fn fuzz_group_by_lazy_duo(data: Vec, order: Vec<(bool, bool)>) -> bool { + let grouper = data.iter().group_by(|k| *k / 3); + let mut groups1 = grouper.into_iter(); + let mut groups2 = grouper.into_iter(); + let mut elts = Vec::<&u8>::new(); + let mut old_groups = Vec::new(); + + let tup1 = |(_, b)| b; + for &(ord, consume_now) in &order { + let iter = &mut [&mut groups1, &mut groups2][ord as usize]; + match iter.next() { + Some((_, gr)) => if consume_now { + for og in old_groups.drain(..) { + elts.extend(og); + } + elts.extend(gr); + } else { + old_groups.push(gr); + }, + None => break, + } + } + for og in old_groups.drain(..) { + elts.extend(og); + } + for gr in groups1.map(&tup1) { elts.extend(gr); } + for gr in groups2.map(&tup1) { elts.extend(gr); } + itertools::assert_equal(&data, elts); + true + } +} + +quickcheck! { + fn chunk_clone_equal(a: Vec, size: u8) -> () { + let mut size = size; + if size == 0 { + size += 1; + } + let it = a.chunks(size as usize); + itertools::assert_equal(it.clone(), it); + } +} + +quickcheck! { + fn equal_chunks_lazy(a: Vec, size: u8) -> bool { + let mut size = size; + if size == 0 { + size += 1; + } + let chunks = a.iter().chunks(size as usize); + let it = a.chunks(size as usize); + for (a, b) in chunks.into_iter().zip(it) { + if !itertools::equal(a, b) { + return false; + } + } + true + } +} + +// tuple iterators +quickcheck! 
{ + fn equal_circular_tuple_windows_1(a: Vec) -> bool { + let x = a.iter().map(|e| (e,) ); + let y = a.iter().circular_tuple_windows::<(_,)>(); + itertools::assert_equal(x,y); + true + } + + fn equal_circular_tuple_windows_2(a: Vec) -> bool { + let x = (0..a.len()).map(|start_idx| ( + &a[start_idx], + &a[(start_idx + 1) % a.len()], + )); + let y = a.iter().circular_tuple_windows::<(_, _)>(); + itertools::assert_equal(x,y); + true + } + + fn equal_circular_tuple_windows_3(a: Vec) -> bool { + let x = (0..a.len()).map(|start_idx| ( + &a[start_idx], + &a[(start_idx + 1) % a.len()], + &a[(start_idx + 2) % a.len()], + )); + let y = a.iter().circular_tuple_windows::<(_, _, _)>(); + itertools::assert_equal(x,y); + true + } + + fn equal_circular_tuple_windows_4(a: Vec) -> bool { + let x = (0..a.len()).map(|start_idx| ( + &a[start_idx], + &a[(start_idx + 1) % a.len()], + &a[(start_idx + 2) % a.len()], + &a[(start_idx + 3) % a.len()], + )); + let y = a.iter().circular_tuple_windows::<(_, _, _, _)>(); + itertools::assert_equal(x,y); + true + } + + fn equal_cloned_circular_tuple_windows(a: Vec) -> bool { + let x = a.iter().circular_tuple_windows::<(_, _, _, _)>(); + let y = x.clone(); + itertools::assert_equal(x,y); + true + } + + fn equal_cloned_circular_tuple_windows_noninitial(a: Vec) -> bool { + let mut x = a.iter().circular_tuple_windows::<(_, _, _, _)>(); + let _ = x.next(); + let y = x.clone(); + itertools::assert_equal(x,y); + true + } + + fn equal_cloned_circular_tuple_windows_complete(a: Vec) -> bool { + let mut x = a.iter().circular_tuple_windows::<(_, _, _, _)>(); + for _ in x.by_ref() {} + let y = x.clone(); + itertools::assert_equal(x,y); + true + } + + fn equal_tuple_windows_1(a: Vec) -> bool { + let x = a.windows(1).map(|s| (&s[0], )); + let y = a.iter().tuple_windows::<(_,)>(); + itertools::equal(x, y) + } + + fn equal_tuple_windows_2(a: Vec) -> bool { + let x = a.windows(2).map(|s| (&s[0], &s[1])); + let y = a.iter().tuple_windows::<(_, _)>(); + itertools::equal(x, y) + } + + fn equal_tuple_windows_3(a: Vec) -> bool { + let x = a.windows(3).map(|s| (&s[0], &s[1], &s[2])); + let y = a.iter().tuple_windows::<(_, _, _)>(); + itertools::equal(x, y) + } + + fn equal_tuple_windows_4(a: Vec) -> bool { + let x = a.windows(4).map(|s| (&s[0], &s[1], &s[2], &s[3])); + let y = a.iter().tuple_windows::<(_, _, _, _)>(); + itertools::equal(x, y) + } + + fn equal_tuples_1(a: Vec) -> bool { + let x = a.chunks(1).map(|s| (&s[0], )); + let y = a.iter().tuples::<(_,)>(); + itertools::equal(x, y) + } + + fn equal_tuples_2(a: Vec) -> bool { + let x = a.chunks(2).filter(|s| s.len() == 2).map(|s| (&s[0], &s[1])); + let y = a.iter().tuples::<(_, _)>(); + itertools::equal(x, y) + } + + fn equal_tuples_3(a: Vec) -> bool { + let x = a.chunks(3).filter(|s| s.len() == 3).map(|s| (&s[0], &s[1], &s[2])); + let y = a.iter().tuples::<(_, _, _)>(); + itertools::equal(x, y) + } + + fn equal_tuples_4(a: Vec) -> bool { + let x = a.chunks(4).filter(|s| s.len() == 4).map(|s| (&s[0], &s[1], &s[2], &s[3])); + let y = a.iter().tuples::<(_, _, _, _)>(); + itertools::equal(x, y) + } + + fn exact_tuple_buffer(a: Vec) -> bool { + let mut iter = a.iter().tuples::<(_, _, _, _)>(); + (&mut iter).last(); + let buffer = iter.into_buffer(); + assert_eq!(buffer.len(), a.len() % 4); + exact_size(buffer) + } +} + +// with_position +quickcheck! 
{ + fn with_position_exact_size_1(a: Vec) -> bool { + exact_size_for_this(a.iter().with_position()) + } + fn with_position_exact_size_2(a: Iter) -> bool { + exact_size_for_this(a.with_position()) + } +} + +quickcheck! { + fn correct_group_map_modulo_key(a: Vec, modulo: u8) -> () { + let modulo = if modulo == 0 { 1 } else { modulo }; // Avoid `% 0` + let count = a.len(); + let lookup = a.into_iter().map(|i| (i % modulo, i)).into_group_map(); + + assert_eq!(lookup.values().flat_map(|vals| vals.iter()).count(), count); + + for (&key, vals) in lookup.iter() { + assert!(vals.iter().all(|&val| val % modulo == key)); + } + } +} + +/// A peculiar type: Equality compares both tuple items, but ordering only the +/// first item. This is so we can check the stability property easily. +#[derive(Clone, Debug, PartialEq, Eq)] +struct Val(u32, u32); + +impl PartialOrd for Val { + fn partial_cmp(&self, other: &Val) -> Option { + self.0.partial_cmp(&other.0) + } +} + +impl Ord for Val { + fn cmp(&self, other: &Val) -> Ordering { + self.0.cmp(&other.0) + } +} + +impl qc::Arbitrary for Val { + fn arbitrary(g: &mut G) -> Self { + let (x, y) = <(u32, u32)>::arbitrary(g); + Val(x, y) + } + fn shrink(&self) -> Box> { + Box::new((self.0, self.1).shrink().map(|(x, y)| Val(x, y))) + } +} + +quickcheck! { + fn minmax(a: Vec) -> bool { + use itertools::MinMaxResult; + + + let minmax = a.iter().minmax(); + let expected = match a.len() { + 0 => MinMaxResult::NoElements, + 1 => MinMaxResult::OneElement(&a[0]), + _ => MinMaxResult::MinMax(a.iter().min().unwrap(), + a.iter().max().unwrap()), + }; + minmax == expected + } +} + +quickcheck! { + fn minmax_f64(a: Vec) -> TestResult { + use itertools::MinMaxResult; + + if a.iter().any(|x| x.is_nan()) { + return TestResult::discard(); + } + + let min = cloned(&a).fold1(f64::min); + let max = cloned(&a).fold1(f64::max); + + let minmax = cloned(&a).minmax(); + let expected = match a.len() { + 0 => MinMaxResult::NoElements, + 1 => MinMaxResult::OneElement(min.unwrap()), + _ => MinMaxResult::MinMax(min.unwrap(), max.unwrap()), + }; + TestResult::from_bool(minmax == expected) + } +} + +quickcheck! { + #[allow(deprecated)] + fn tree_fold1_f64(mut a: Vec) -> TestResult { + fn collapse_adjacent(x: Vec, mut f: F) -> Vec + where F: FnMut(f64, f64) -> f64 + { + let mut out = Vec::new(); + for i in (0..x.len()).step(2) { + if i == x.len()-1 { + out.push(x[i]) + } else { + out.push(f(x[i], x[i+1])); + } + } + out + } + + if a.iter().any(|x| x.is_nan()) { + return TestResult::discard(); + } + + let actual = a.iter().cloned().tree_fold1(f64::atan2); + + while a.len() > 1 { + a = collapse_adjacent(a, f64::atan2); + } + let expected = a.pop(); + + TestResult::from_bool(actual == expected) + } +} + +quickcheck! { + fn exactly_one_i32(a: Vec) -> TestResult { + let ret = a.iter().cloned().exactly_one(); + match a.len() { + 1 => TestResult::from_bool(ret.unwrap() == a[0]), + _ => TestResult::from_bool(ret.unwrap_err().eq(a.iter().cloned())), + } + } +} + +quickcheck! { + fn at_most_one_i32(a: Vec) -> TestResult { + let ret = a.iter().cloned().at_most_one(); + match a.len() { + 0 => TestResult::from_bool(ret.unwrap() == None), + 1 => TestResult::from_bool(ret.unwrap() == Some(a[0])), + _ => TestResult::from_bool(ret.unwrap_err().eq(a.iter().cloned())), + } + } +} + +quickcheck! 
{ + fn consistent_grouping_map_with_by(a: Vec, modulo: u8) -> () { + let modulo = if modulo == 0 { 1 } else { modulo }; // Avoid `% 0` + + let lookup_grouping_map = a.iter().copied().map(|i| (i % modulo, i)).into_grouping_map().collect::>(); + let lookup_grouping_map_by = a.iter().copied().into_grouping_map_by(|i| i % modulo).collect::>(); + + assert_eq!(lookup_grouping_map, lookup_grouping_map_by); + } + + fn correct_grouping_map_by_aggregate_modulo_key(a: Vec, modulo: u8) -> () { + let modulo = if modulo < 2 { 2 } else { modulo } as u64; // Avoid `% 0` + let lookup = a.iter() + .map(|&b| b as u64) // Avoid overflows + .into_grouping_map_by(|i| i % modulo) + .aggregate(|acc, &key, val| { + assert!(val % modulo == key); + if val % (modulo - 1) == 0 { + None + } else { + Some(acc.unwrap_or(0) + val) + } + }); + + let group_map_lookup = a.iter() + .map(|&b| b as u64) + .map(|i| (i % modulo, i)) + .into_group_map() + .into_iter() + .filter_map(|(key, vals)| { + vals.into_iter().fold(None, |acc, val| { + if val % (modulo - 1) == 0 { + None + } else { + Some(acc.unwrap_or(0) + val) + } + }).map(|new_val| (key, new_val)) + }) + .collect::>(); + assert_eq!(lookup, group_map_lookup); + + for m in 0..modulo { + assert_eq!( + lookup.get(&m).copied(), + a.iter() + .map(|&b| b as u64) + .filter(|&val| val % modulo == m) + .fold(None, |acc, val| { + if val % (modulo - 1) == 0 { + None + } else { + Some(acc.unwrap_or(0) + val) + } + }) + ); + } + } + + fn correct_grouping_map_by_fold_modulo_key(a: Vec, modulo: u8) -> () { + let modulo = if modulo == 0 { 1 } else { modulo } as u64; // Avoid `% 0` + let lookup = a.iter().map(|&b| b as u64) // Avoid overflows + .into_grouping_map_by(|i| i % modulo) + .fold(0u64, |acc, &key, val| { + assert!(val % modulo == key); + acc + val + }); + + let group_map_lookup = a.iter() + .map(|&b| b as u64) + .map(|i| (i % modulo, i)) + .into_group_map() + .into_iter() + .map(|(key, vals)| (key, vals.into_iter().sum())) + .collect::>(); + assert_eq!(lookup, group_map_lookup); + + for (&key, &sum) in lookup.iter() { + assert_eq!(sum, a.iter().map(|&b| b as u64).filter(|&val| val % modulo == key).sum::()); + } + } + + fn correct_grouping_map_by_fold_first_modulo_key(a: Vec, modulo: u8) -> () { + let modulo = if modulo == 0 { 1 } else { modulo } as u64; // Avoid `% 0` + let lookup = a.iter().map(|&b| b as u64) // Avoid overflows + .into_grouping_map_by(|i| i % modulo) + .fold_first(|acc, &key, val| { + assert!(val % modulo == key); + acc + val + }); + + // TODO: Swap `fold1` with stdlib's `fold_first` when it's stabilized + let group_map_lookup = a.iter() + .map(|&b| b as u64) + .map(|i| (i % modulo, i)) + .into_group_map() + .into_iter() + .map(|(key, vals)| (key, vals.into_iter().fold1(|acc, val| acc + val).unwrap())) + .collect::>(); + assert_eq!(lookup, group_map_lookup); + + for (&key, &sum) in lookup.iter() { + assert_eq!(sum, a.iter().map(|&b| b as u64).filter(|&val| val % modulo == key).sum::()); + } + } + + fn correct_grouping_map_by_collect_modulo_key(a: Vec, modulo: u8) -> () { + let modulo = if modulo == 0 { 1 } else { modulo }; // Avoid `% 0` + let lookup_grouping_map = a.iter().copied().into_grouping_map_by(|i| i % modulo).collect::>(); + let lookup_group_map = a.iter().copied().map(|i| (i % modulo, i)).into_group_map(); + + assert_eq!(lookup_grouping_map, lookup_group_map); + } + + fn correct_grouping_map_by_max_modulo_key(a: Vec, modulo: u8) -> () { + let modulo = if modulo == 0 { 1 } else { modulo }; // Avoid `% 0` + let lookup = 
a.iter().copied().into_grouping_map_by(|i| i % modulo).max(); + + let group_map_lookup = a.iter().copied() + .map(|i| (i % modulo, i)) + .into_group_map() + .into_iter() + .map(|(key, vals)| (key, vals.into_iter().max().unwrap())) + .collect::>(); + assert_eq!(lookup, group_map_lookup); + + for (&key, &max) in lookup.iter() { + assert_eq!(Some(max), a.iter().copied().filter(|&val| val % modulo == key).max()); + } + } + + fn correct_grouping_map_by_max_by_modulo_key(a: Vec, modulo: u8) -> () { + let modulo = if modulo == 0 { 1 } else { modulo }; // Avoid `% 0` + let lookup = a.iter().copied().into_grouping_map_by(|i| i % modulo).max_by(|_, v1, v2| v1.cmp(v2)); + + let group_map_lookup = a.iter().copied() + .map(|i| (i % modulo, i)) + .into_group_map() + .into_iter() + .map(|(key, vals)| (key, vals.into_iter().max_by(|v1, v2| v1.cmp(v2)).unwrap())) + .collect::>(); + assert_eq!(lookup, group_map_lookup); + + for (&key, &max) in lookup.iter() { + assert_eq!(Some(max), a.iter().copied().filter(|&val| val % modulo == key).max_by(|v1, v2| v1.cmp(v2))); + } + } + + fn correct_grouping_map_by_max_by_key_modulo_key(a: Vec, modulo: u8) -> () { + let modulo = if modulo == 0 { 1 } else { modulo }; // Avoid `% 0` + let lookup = a.iter().copied().into_grouping_map_by(|i| i % modulo).max_by_key(|_, &val| val); + + let group_map_lookup = a.iter().copied() + .map(|i| (i % modulo, i)) + .into_group_map() + .into_iter() + .map(|(key, vals)| (key, vals.into_iter().max_by_key(|&val| val).unwrap())) + .collect::>(); + assert_eq!(lookup, group_map_lookup); + + for (&key, &max) in lookup.iter() { + assert_eq!(Some(max), a.iter().copied().filter(|&val| val % modulo == key).max_by_key(|&val| val)); + } + } + + fn correct_grouping_map_by_min_modulo_key(a: Vec, modulo: u8) -> () { + let modulo = if modulo == 0 { 1 } else { modulo }; // Avoid `% 0` + let lookup = a.iter().copied().into_grouping_map_by(|i| i % modulo).min(); + + let group_map_lookup = a.iter().copied() + .map(|i| (i % modulo, i)) + .into_group_map() + .into_iter() + .map(|(key, vals)| (key, vals.into_iter().min().unwrap())) + .collect::>(); + assert_eq!(lookup, group_map_lookup); + + for (&key, &min) in lookup.iter() { + assert_eq!(Some(min), a.iter().copied().filter(|&val| val % modulo == key).min()); + } + } + + fn correct_grouping_map_by_min_by_modulo_key(a: Vec, modulo: u8) -> () { + let modulo = if modulo == 0 { 1 } else { modulo }; // Avoid `% 0` + let lookup = a.iter().copied().into_grouping_map_by(|i| i % modulo).min_by(|_, v1, v2| v1.cmp(v2)); + + let group_map_lookup = a.iter().copied() + .map(|i| (i % modulo, i)) + .into_group_map() + .into_iter() + .map(|(key, vals)| (key, vals.into_iter().min_by(|v1, v2| v1.cmp(v2)).unwrap())) + .collect::>(); + assert_eq!(lookup, group_map_lookup); + + for (&key, &min) in lookup.iter() { + assert_eq!(Some(min), a.iter().copied().filter(|&val| val % modulo == key).min_by(|v1, v2| v1.cmp(v2))); + } + } + + fn correct_grouping_map_by_min_by_key_modulo_key(a: Vec, modulo: u8) -> () { + let modulo = if modulo == 0 { 1 } else { modulo }; // Avoid `% 0` + let lookup = a.iter().copied().into_grouping_map_by(|i| i % modulo).min_by_key(|_, &val| val); + + let group_map_lookup = a.iter().copied() + .map(|i| (i % modulo, i)) + .into_group_map() + .into_iter() + .map(|(key, vals)| (key, vals.into_iter().min_by_key(|&val| val).unwrap())) + .collect::>(); + assert_eq!(lookup, group_map_lookup); + + for (&key, &min) in lookup.iter() { + assert_eq!(Some(min), a.iter().copied().filter(|&val| val % modulo == 
key).min_by_key(|&val| val)); + } + } + + fn correct_grouping_map_by_minmax_modulo_key(a: Vec, modulo: u8) -> () { + let modulo = if modulo == 0 { 1 } else { modulo }; // Avoid `% 0` + let lookup = a.iter().copied().into_grouping_map_by(|i| i % modulo).minmax(); + + let group_map_lookup = a.iter().copied() + .map(|i| (i % modulo, i)) + .into_group_map() + .into_iter() + .map(|(key, vals)| (key, vals.into_iter().minmax())) + .collect::>(); + assert_eq!(lookup, group_map_lookup); + + for (&key, &minmax) in lookup.iter() { + assert_eq!(minmax, a.iter().copied().filter(|&val| val % modulo == key).minmax()); + } + } + + fn correct_grouping_map_by_minmax_by_modulo_key(a: Vec, modulo: u8) -> () { + let modulo = if modulo == 0 { 1 } else { modulo }; // Avoid `% 0` + let lookup = a.iter().copied().into_grouping_map_by(|i| i % modulo).minmax_by(|_, v1, v2| v1.cmp(v2)); + + let group_map_lookup = a.iter().copied() + .map(|i| (i % modulo, i)) + .into_group_map() + .into_iter() + .map(|(key, vals)| (key, vals.into_iter().minmax_by(|v1, v2| v1.cmp(v2)))) + .collect::>(); + assert_eq!(lookup, group_map_lookup); + + for (&key, &minmax) in lookup.iter() { + assert_eq!(minmax, a.iter().copied().filter(|&val| val % modulo == key).minmax_by(|v1, v2| v1.cmp(v2))); + } + } + + fn correct_grouping_map_by_minmax_by_key_modulo_key(a: Vec, modulo: u8) -> () { + let modulo = if modulo == 0 { 1 } else { modulo }; // Avoid `% 0` + let lookup = a.iter().copied().into_grouping_map_by(|i| i % modulo).minmax_by_key(|_, &val| val); + + let group_map_lookup = a.iter().copied() + .map(|i| (i % modulo, i)) + .into_group_map() + .into_iter() + .map(|(key, vals)| (key, vals.into_iter().minmax_by_key(|&val| val))) + .collect::>(); + assert_eq!(lookup, group_map_lookup); + + for (&key, &minmax) in lookup.iter() { + assert_eq!(minmax, a.iter().copied().filter(|&val| val % modulo == key).minmax_by_key(|&val| val)); + } + } + + fn correct_grouping_map_by_sum_modulo_key(a: Vec, modulo: u8) -> () { + let modulo = if modulo == 0 { 1 } else { modulo } as u64; // Avoid `% 0` + let lookup = a.iter().map(|&b| b as u64) // Avoid overflows + .into_grouping_map_by(|i| i % modulo) + .sum(); + + let group_map_lookup = a.iter().map(|&b| b as u64) + .map(|i| (i % modulo, i)) + .into_group_map() + .into_iter() + .map(|(key, vals)| (key, vals.into_iter().sum())) + .collect::>(); + assert_eq!(lookup, group_map_lookup); + + for (&key, &sum) in lookup.iter() { + assert_eq!(sum, a.iter().map(|&b| b as u64).filter(|&val| val % modulo == key).sum::()); + } + } + + fn correct_grouping_map_by_product_modulo_key(a: Vec, modulo: u8) -> () { + let modulo = Wrapping(if modulo == 0 { 1 } else { modulo } as u64); // Avoid `% 0` + let lookup = a.iter().map(|&b| Wrapping(b as u64)) // Avoid overflows + .into_grouping_map_by(|i| i % modulo) + .product(); + + let group_map_lookup = a.iter().map(|&b| Wrapping(b as u64)) + .map(|i| (i % modulo, i)) + .into_group_map() + .into_iter() + .map(|(key, vals)| (key, vals.into_iter().product::>())) + .collect::>(); + assert_eq!(lookup, group_map_lookup); + + for (&key, &prod) in lookup.iter() { + assert_eq!( + prod, + a.iter() + .map(|&b| Wrapping(b as u64)) + .filter(|&val| val % modulo == key) + .product::>() + ); + } + } + + // This should check that if multiple elements are equally minimum or maximum + // then `max`, `min` and `minmax` pick the first minimum and the last maximum. + // This is to be consistent with `std::iter::max` and `std::iter::min`. 
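+    // (This mirrors std, where `Iterator::min` keeps the first of equal
+    // elements and `Iterator::max` keeps the last.)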
+ fn correct_grouping_map_by_min_max_minmax_order_modulo_key() -> () { + use itertools::MinMaxResult; + + let lookup = (0..=10) + .into_grouping_map_by(|_| 0) + .max_by(|_, _, _| Ordering::Equal); + + assert_eq!(lookup[&0], 10); + + let lookup = (0..=10) + .into_grouping_map_by(|_| 0) + .min_by(|_, _, _| Ordering::Equal); + + assert_eq!(lookup[&0], 0); + + let lookup = (0..=10) + .into_grouping_map_by(|_| 0) + .minmax_by(|_, _, _| Ordering::Equal); + + assert_eq!(lookup[&0], MinMaxResult::MinMax(0, 10)); + } +} + +quickcheck! { + fn counts(nums: Vec) -> TestResult { + let counts = nums.iter().counts(); + for (&item, &count) in counts.iter() { + #[allow(clippy::absurd_extreme_comparisons)] + if count <= 0 { + return TestResult::failed(); + } + if count != nums.iter().filter(|&x| x == item).count() { + return TestResult::failed(); + } + } + for item in nums.iter() { + if !counts.contains_key(item) { + return TestResult::failed(); + } + } + TestResult::passed() + } +} + +quickcheck! { + fn test_double_ended_zip_2(a: Vec, b: Vec) -> TestResult { + let mut x = + multizip((a.clone().into_iter(), b.clone().into_iter())) + .collect_vec(); + x.reverse(); + + let y = + multizip((a.into_iter(), b.into_iter())) + .rfold(Vec::new(), |mut vec, e| { vec.push(e); vec }); + + TestResult::from_bool(itertools::equal(x, y)) + } + + fn test_double_ended_zip_3(a: Vec, b: Vec, c: Vec) -> TestResult { + let mut x = + multizip((a.clone().into_iter(), b.clone().into_iter(), c.clone().into_iter())) + .collect_vec(); + x.reverse(); + + let y = + multizip((a.into_iter(), b.into_iter(), c.into_iter())) + .rfold(Vec::new(), |mut vec, e| { vec.push(e); vec }); + + TestResult::from_bool(itertools::equal(x, y)) + } +} + + +fn is_fused(mut it: I) -> bool +{ + for _ in it.by_ref() {} + for _ in 0..10{ + if it.next().is_some(){ + return false; + } + } + true +} + +quickcheck! 
{ + fn fused_combination(a: Iter) -> bool + { + is_fused(a.clone().combinations(1)) && + is_fused(a.combinations(3)) + } + + fn fused_combination_with_replacement(a: Iter) -> bool + { + is_fused(a.clone().combinations_with_replacement(1)) && + is_fused(a.combinations_with_replacement(3)) + } + + fn fused_tuple_combination(a: Iter) -> bool + { + is_fused(a.clone().fuse().tuple_combinations::<(_,)>()) && + is_fused(a.fuse().tuple_combinations::<(_,_,_)>()) + } + + fn fused_unique(a: Iter) -> bool + { + is_fused(a.fuse().unique()) + } + + fn fused_unique_by(a: Iter) -> bool + { + is_fused(a.fuse().unique_by(|x| x % 100)) + } + + fn fused_interleave_shortest(a: Iter, b: Iter) -> bool + { + !is_fused(a.clone().interleave_shortest(b.clone())) && + is_fused(a.fuse().interleave_shortest(b.fuse())) + } + + fn fused_product(a: Iter, b: Iter) -> bool + { + is_fused(a.fuse().cartesian_product(b.fuse())) + } + + fn fused_merge(a: Iter, b: Iter) -> bool + { + is_fused(a.fuse().merge(b.fuse())) + } + + fn fused_filter_ok(a: Iter) -> bool + { + is_fused(a.map(|x| if x % 2 == 0 {Ok(x)} else {Err(x)} ) + .filter_ok(|x| x % 3 == 0) + .fuse()) + } + + fn fused_filter_map_ok(a: Iter) -> bool + { + is_fused(a.map(|x| if x % 2 == 0 {Ok(x)} else {Err(x)} ) + .filter_map_ok(|x| if x % 3 == 0 {Some(x / 3)} else {None}) + .fuse()) + } + + fn fused_positions(a: Iter) -> bool + { + !is_fused(a.clone().positions(|x|x%2==0)) && + is_fused(a.fuse().positions(|x|x%2==0)) + } + + fn fused_update(a: Iter) -> bool + { + !is_fused(a.clone().update(|x|*x+=1)) && + is_fused(a.fuse().update(|x|*x+=1)) + } + + fn fused_tuple_windows(a: Iter) -> bool + { + is_fused(a.fuse().tuple_windows::<(_,_)>()) + } + + fn fused_pad_using(a: Iter) -> bool + { + is_fused(a.fuse().pad_using(100,|_|0)) + } +} + +quickcheck! 
{ + fn min_set_contains_min(a: Vec<(usize, char)>) -> bool { + let result_set = a.iter().min_set(); + if let Some(result_element) = a.iter().min() { + result_set.contains(&result_element) + } else { + result_set.is_empty() + } + } + + fn min_set_by_contains_min(a: Vec<(usize, char)>) -> bool { + let compare = |x: &&(usize, char), y: &&(usize, char)| x.1.cmp(&y.1); + let result_set = a.iter().min_set_by(compare); + if let Some(result_element) = a.iter().min_by(compare) { + result_set.contains(&result_element) + } else { + result_set.is_empty() + } + } + + fn min_set_by_key_contains_min(a: Vec<(usize, char)>) -> bool { + let key = |x: &&(usize, char)| x.1; + let result_set = a.iter().min_set_by_key(&key); + if let Some(result_element) = a.iter().min_by_key(&key) { + result_set.contains(&result_element) + } else { + result_set.is_empty() + } + } + + fn max_set_contains_max(a: Vec<(usize, char)>) -> bool { + let result_set = a.iter().max_set(); + if let Some(result_element) = a.iter().max() { + result_set.contains(&result_element) + } else { + result_set.is_empty() + } + } + + fn max_set_by_contains_max(a: Vec<(usize, char)>) -> bool { + let compare = |x: &&(usize, char), y: &&(usize, char)| x.1.cmp(&y.1); + let result_set = a.iter().max_set_by(compare); + if let Some(result_element) = a.iter().max_by(compare) { + result_set.contains(&result_element) + } else { + result_set.is_empty() + } + } + + fn max_set_by_key_contains_max(a: Vec<(usize, char)>) -> bool { + let key = |x: &&(usize, char)| x.1; + let result_set = a.iter().max_set_by_key(&key); + if let Some(result_element) = a.iter().max_by_key(&key) { + result_set.contains(&result_element) + } else { + result_set.is_empty() + } + } +} diff --git a/rust/hw/char/pl011/vendor/itertools/tests/specializations.rs b/rust/hw/char/pl011/vendor/itertools/tests/specializations.rs new file mode 100644 index 0000000000..057e11c9f6 --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/tests/specializations.rs @@ -0,0 +1,153 @@ +use itertools::Itertools; +use std::fmt::Debug; +use quickcheck::quickcheck; + +struct Unspecialized(I); +impl Iterator for Unspecialized +where + I: Iterator, +{ + type Item = I::Item; + + #[inline(always)] + fn next(&mut self) -> Option { + self.0.next() + } +} + +macro_rules! 
check_specialized { + ($src:expr, |$it:pat| $closure:expr) => { + let $it = $src.clone(); + let v1 = $closure; + + let $it = Unspecialized($src.clone()); + let v2 = $closure; + + assert_eq!(v1, v2); + } +} + +fn test_specializations( + it: &Iter, +) where + IterItem: Eq + Debug + Clone, + Iter: Iterator + Clone, +{ + check_specialized!(it, |i| i.count()); + check_specialized!(it, |i| i.last()); + check_specialized!(it, |i| i.collect::>()); + check_specialized!(it, |i| { + let mut parameters_from_fold = vec![]; + let fold_result = i.fold(vec![], |mut acc, v: IterItem| { + parameters_from_fold.push((acc.clone(), v.clone())); + acc.push(v); + acc + }); + (parameters_from_fold, fold_result) + }); + check_specialized!(it, |mut i| { + let mut parameters_from_all = vec![]; + let first = i.next(); + let all_result = i.all(|x| { + parameters_from_all.push(x.clone()); + Some(x)==first + }); + (parameters_from_all, all_result) + }); + let size = it.clone().count(); + for n in 0..size + 2 { + check_specialized!(it, |mut i| i.nth(n)); + } + // size_hint is a bit harder to check + let mut it_sh = it.clone(); + for n in 0..size + 2 { + let len = it_sh.clone().count(); + let (min, max) = it_sh.size_hint(); + assert_eq!(size - n.min(size), len); + assert!(min <= len); + if let Some(max) = max { + assert!(len <= max); + } + it_sh.next(); + } +} + +quickcheck! { + fn intersperse(v: Vec) -> () { + test_specializations(&v.into_iter().intersperse(0)); + } +} + +quickcheck! { + fn put_back_qc(test_vec: Vec) -> () { + test_specializations(&itertools::put_back(test_vec.iter())); + let mut pb = itertools::put_back(test_vec.into_iter()); + pb.put_back(1); + test_specializations(&pb); + } +} + +quickcheck! { + fn merge_join_by_qc(i1: Vec, i2: Vec) -> () { + test_specializations(&i1.into_iter().merge_join_by(i2.into_iter(), std::cmp::Ord::cmp)); + } +} + +quickcheck! { + fn map_into(v: Vec) -> () { + test_specializations(&v.into_iter().map_into::()); + } +} + +quickcheck! { + fn map_ok(v: Vec>) -> () { + test_specializations(&v.into_iter().map_ok(|u| u.checked_add(1))); + } +} + +quickcheck! { + fn process_results(v: Vec>) -> () { + helper(v.iter().copied()); + helper(v.iter().copied().filter(Result::is_ok)); + + fn helper(it: impl Iterator> + Clone) { + macro_rules! 
check_results_specialized { + ($src:expr, |$it:pat| $closure:expr) => { + assert_eq!( + itertools::process_results($src.clone(), |$it| $closure), + itertools::process_results($src.clone(), |i| { + let $it = Unspecialized(i); + $closure + }), + ) + } + } + + check_results_specialized!(it, |i| i.count()); + check_results_specialized!(it, |i| i.last()); + check_results_specialized!(it, |i| i.collect::>()); + check_results_specialized!(it, |i| { + let mut parameters_from_fold = vec![]; + let fold_result = i.fold(vec![], |mut acc, v| { + parameters_from_fold.push((acc.clone(), v)); + acc.push(v); + acc + }); + (parameters_from_fold, fold_result) + }); + check_results_specialized!(it, |mut i| { + let mut parameters_from_all = vec![]; + let first = i.next(); + let all_result = i.all(|x| { + parameters_from_all.push(x); + Some(x)==first + }); + (parameters_from_all, all_result) + }); + let size = it.clone().count(); + for n in 0..size + 2 { + check_results_specialized!(it, |mut i| i.nth(n)); + } + } + } +} diff --git a/rust/hw/char/pl011/vendor/itertools/tests/test_core.rs b/rust/hw/char/pl011/vendor/itertools/tests/test_core.rs new file mode 100644 index 0000000000..df94eb665f --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/tests/test_core.rs @@ -0,0 +1,317 @@ +//! Licensed under the Apache License, Version 2.0 +//! https://www.apache.org/licenses/LICENSE-2.0 or the MIT license +//! https://opensource.org/licenses/MIT, at your +//! option. This file may not be copied, modified, or distributed +//! except according to those terms. +#![no_std] + +use core::iter; +use itertools as it; +use crate::it::Itertools; +use crate::it::interleave; +use crate::it::intersperse; +use crate::it::intersperse_with; +use crate::it::multizip; +use crate::it::free::put_back; +use crate::it::iproduct; +use crate::it::izip; +use crate::it::chain; + +#[test] +fn product2() { + let s = "αβ"; + + let mut prod = iproduct!(s.chars(), 0..2); + assert!(prod.next() == Some(('α', 0))); + assert!(prod.next() == Some(('α', 1))); + assert!(prod.next() == Some(('β', 0))); + assert!(prod.next() == Some(('β', 1))); + assert!(prod.next() == None); +} + +#[test] +fn product_temporary() { + for (_x, _y, _z) in iproduct!( + [0, 1, 2].iter().cloned(), + [0, 1, 2].iter().cloned(), + [0, 1, 2].iter().cloned()) + { + // ok + } +} + + +#[test] +fn izip_macro() { + let mut zip = izip!(2..3); + assert!(zip.next() == Some(2)); + assert!(zip.next().is_none()); + + let mut zip = izip!(0..3, 0..2, 0..2i8); + for i in 0..2 { + assert!((i as usize, i, i as i8) == zip.next().unwrap()); + } + assert!(zip.next().is_none()); + + let xs: [isize; 0] = []; + let mut zip = izip!(0..3, 0..2, 0..2i8, &xs); + assert!(zip.next().is_none()); +} + +#[test] +fn izip2() { + let _zip1: iter::Zip<_, _> = izip!(1.., 2..); + let _zip2: iter::Zip<_, _> = izip!(1.., 2.., ); +} + +#[test] +fn izip3() { + let mut zip: iter::Map, _> = izip!(0..3, 0..2, 0..2i8); + for i in 0..2 { + assert!((i as usize, i, i as i8) == zip.next().unwrap()); + } + assert!(zip.next().is_none()); +} + +#[test] +fn multizip3() { + let mut zip = multizip((0..3, 0..2, 0..2i8)); + for i in 0..2 { + assert!((i as usize, i, i as i8) == zip.next().unwrap()); + } + assert!(zip.next().is_none()); + + let xs: [isize; 0] = []; + let mut zip = multizip((0..3, 0..2, 0..2i8, xs.iter())); + assert!(zip.next().is_none()); + + for (_, _, _, _, _) in multizip((0..3, 0..2, xs.iter(), &xs, xs.to_vec())) { + /* test compiles */ + } +} + +#[test] +fn chain_macro() { + let mut chain = chain!(2..3); + 
assert!(chain.next() == Some(2)); + assert!(chain.next().is_none()); + + let mut chain = chain!(0..2, 2..3, 3..5i8); + for i in 0..5i8 { + assert_eq!(Some(i), chain.next()); + } + assert!(chain.next().is_none()); + + let mut chain = chain!(); + assert_eq!(chain.next(), Option::<()>::None); +} + +#[test] +fn chain2() { + let _ = chain!(1.., 2..); + let _ = chain!(1.., 2.., ); +} + +#[test] +fn write_to() { + let xs = [7, 9, 8]; + let mut ys = [0; 5]; + let cnt = ys.iter_mut().set_from(xs.iter().copied()); + assert!(cnt == xs.len()); + assert!(ys == [7, 9, 8, 0, 0]); + + let cnt = ys.iter_mut().set_from(0..10); + assert!(cnt == ys.len()); + assert!(ys == [0, 1, 2, 3, 4]); +} + +#[test] +fn test_interleave() { + let xs: [u8; 0] = []; + let ys = [7u8, 9, 8, 10]; + let zs = [2u8, 77]; + let it = interleave(xs.iter(), ys.iter()); + it::assert_equal(it, ys.iter()); + + let rs = [7u8, 2, 9, 77, 8, 10]; + let it = interleave(ys.iter(), zs.iter()); + it::assert_equal(it, rs.iter()); +} + +#[test] +fn test_intersperse() { + let xs = [1u8, 2, 3]; + let ys = [1u8, 0, 2, 0, 3]; + let it = intersperse(&xs, &0); + it::assert_equal(it, ys.iter()); +} + +#[test] +fn test_intersperse_with() { + let xs = [1u8, 2, 3]; + let ys = [1u8, 10, 2, 10, 3]; + let i = 10; + let it = intersperse_with(&xs, || &i); + it::assert_equal(it, ys.iter()); +} + +#[allow(deprecated)] +#[test] +fn foreach() { + let xs = [1i32, 2, 3]; + let mut sum = 0; + xs.iter().foreach(|elt| sum += *elt); + assert!(sum == 6); +} + +#[test] +fn dropping() { + let xs = [1, 2, 3]; + let mut it = xs.iter().dropping(2); + assert_eq!(it.next(), Some(&3)); + assert!(it.next().is_none()); + let mut it = xs.iter().dropping(5); + assert!(it.next().is_none()); +} + +#[test] +fn batching() { + let xs = [0, 1, 2, 1, 3]; + let ys = [(0, 1), (2, 1)]; + + // An iterator that gathers elements up in pairs + let pit = xs + .iter() + .cloned() + .batching(|it| it.next().and_then(|x| it.next().map(|y| (x, y)))); + it::assert_equal(pit, ys.iter().cloned()); +} + +#[test] +fn test_put_back() { + let xs = [0, 1, 1, 1, 2, 1, 3, 3]; + let mut pb = put_back(xs.iter().cloned()); + pb.next(); + pb.put_back(1); + pb.put_back(0); + it::assert_equal(pb, xs.iter().cloned()); +} + +#[allow(deprecated)] +#[test] +fn step() { + it::assert_equal((0..10).step(1), 0..10); + it::assert_equal((0..10).step(2), (0..10).filter(|x: &i32| *x % 2 == 0)); + it::assert_equal((0..10).step(10), 0..1); +} + +#[allow(deprecated)] +#[test] +fn merge() { + it::assert_equal((0..10).step(2).merge((1..10).step(2)), 0..10); +} + + +#[test] +fn repeatn() { + let s = "α"; + let mut it = it::repeat_n(s, 3); + assert_eq!(it.len(), 3); + assert_eq!(it.next(), Some(s)); + assert_eq!(it.next(), Some(s)); + assert_eq!(it.next(), Some(s)); + assert_eq!(it.next(), None); + assert_eq!(it.next(), None); +} + +#[test] +fn count_clones() { + // Check that RepeatN only clones N - 1 times. 
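+    // (`repeat_n(value, n)` hands out clones for the first n - 1 items and
+    // yields the original value as the final item, which is why the last
+    // `Foo` observed below carries a clone count of n - 1.)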
+ + use core::cell::Cell; + #[derive(PartialEq, Debug)] + struct Foo { + n: Cell + } + + impl Clone for Foo + { + fn clone(&self) -> Self + { + let n = self.n.get(); + self.n.set(n + 1); + Foo { n: Cell::new(n + 1) } + } + } + + + for n in 0..10 { + let f = Foo{n: Cell::new(0)}; + let it = it::repeat_n(f, n); + // drain it + let last = it.last(); + if n == 0 { + assert_eq!(last, None); + } else { + assert_eq!(last, Some(Foo{n: Cell::new(n - 1)})); + } + } +} + +#[test] +fn part() { + let mut data = [7, 1, 1, 9, 1, 1, 3]; + let i = it::partition(&mut data, |elt| *elt >= 3); + assert_eq!(i, 3); + assert_eq!(data, [7, 3, 9, 1, 1, 1, 1]); + + let i = it::partition(&mut data, |elt| *elt == 1); + assert_eq!(i, 4); + assert_eq!(data, [1, 1, 1, 1, 9, 3, 7]); + + let mut data = [1, 2, 3, 4, 5, 6, 7, 8, 9]; + let i = it::partition(&mut data, |elt| *elt % 3 == 0); + assert_eq!(i, 3); + assert_eq!(data, [9, 6, 3, 4, 5, 2, 7, 8, 1]); +} + +#[test] +fn tree_fold1() { + for i in 0..100 { + assert_eq!((0..i).tree_fold1(|x, y| x + y), (0..i).fold1(|x, y| x + y)); + } +} + +#[test] +fn exactly_one() { + assert_eq!((0..10).filter(|&x| x == 2).exactly_one().unwrap(), 2); + assert!((0..10).filter(|&x| x > 1 && x < 4).exactly_one().unwrap_err().eq(2..4)); + assert!((0..10).filter(|&x| x > 1 && x < 5).exactly_one().unwrap_err().eq(2..5)); + assert!((0..10).filter(|&_| false).exactly_one().unwrap_err().eq(0..0)); +} + +#[test] +fn at_most_one() { + assert_eq!((0..10).filter(|&x| x == 2).at_most_one().unwrap(), Some(2)); + assert!((0..10).filter(|&x| x > 1 && x < 4).at_most_one().unwrap_err().eq(2..4)); + assert!((0..10).filter(|&x| x > 1 && x < 5).at_most_one().unwrap_err().eq(2..5)); + assert_eq!((0..10).filter(|&_| false).at_most_one().unwrap(), None); +} + +#[test] +fn sum1() { + let v: &[i32] = &[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]; + assert_eq!(v[..0].iter().cloned().sum1::(), None); + assert_eq!(v[1..2].iter().cloned().sum1::(), Some(1)); + assert_eq!(v[1..3].iter().cloned().sum1::(), Some(3)); + assert_eq!(v.iter().cloned().sum1::(), Some(55)); +} + +#[test] +fn product1() { + let v: &[i32] = &[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]; + assert_eq!(v[..0].iter().cloned().product1::(), None); + assert_eq!(v[..1].iter().cloned().product1::(), Some(0)); + assert_eq!(v[1..3].iter().cloned().product1::(), Some(2)); + assert_eq!(v[1..5].iter().cloned().product1::(), Some(24)); +} diff --git a/rust/hw/char/pl011/vendor/itertools/tests/test_std.rs b/rust/hw/char/pl011/vendor/itertools/tests/test_std.rs new file mode 100644 index 0000000000..77207d87e3 --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/tests/test_std.rs @@ -0,0 +1,1184 @@ +use quickcheck as qc; +use rand::{distributions::{Distribution, Standard}, Rng, SeedableRng, rngs::StdRng}; +use rand::{seq::SliceRandom, thread_rng}; +use std::{cmp::min, fmt::Debug, marker::PhantomData}; +use itertools as it; +use crate::it::Itertools; +use crate::it::ExactlyOneError; +use crate::it::multizip; +use crate::it::multipeek; +use crate::it::peek_nth; +use crate::it::free::rciter; +use crate::it::free::put_back_n; +use crate::it::FoldWhile; +use crate::it::cloned; +use crate::it::iproduct; +use crate::it::izip; + +#[test] +fn product3() { + let prod = iproduct!(0..3, 0..2, 0..2); + assert_eq!(prod.size_hint(), (12, Some(12))); + let v = prod.collect_vec(); + for i in 0..3 { + for j in 0..2 { + for k in 0..2 { + assert!((i, j, k) == v[(i * 2 * 2 + j * 2 + k) as usize]); + } + } + } + for (_, _, _, _) in iproduct!(0..3, 0..2, 0..2, 0..3) { + /* test compiles */ + } +} + 
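+// A minimal standalone sketch of the ordering `product3` relies on, assuming
+// the vendored itertools macros: `iproduct!` varies the rightmost range
+// fastest, which is what the `i * 2 * 2 + j * 2 + k` indexing above encodes.
+#[test]
+fn iproduct_order_sketch() {
+    let v: Vec<_> = itertools::iproduct!(0..2, 0..2).collect();
+    assert_eq!(v, vec![(0, 0), (0, 1), (1, 0), (1, 1)]);
+}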
+#[test] +fn interleave_shortest() { + let v0: Vec = vec![0, 2, 4]; + let v1: Vec = vec![1, 3, 5, 7]; + let it = v0.into_iter().interleave_shortest(v1.into_iter()); + assert_eq!(it.size_hint(), (6, Some(6))); + assert_eq!(it.collect_vec(), vec![0, 1, 2, 3, 4, 5]); + + let v0: Vec = vec![0, 2, 4, 6, 8]; + let v1: Vec = vec![1, 3, 5]; + let it = v0.into_iter().interleave_shortest(v1.into_iter()); + assert_eq!(it.size_hint(), (7, Some(7))); + assert_eq!(it.collect_vec(), vec![0, 1, 2, 3, 4, 5, 6]); + + let i0 = ::std::iter::repeat(0); + let v1: Vec<_> = vec![1, 3, 5]; + let it = i0.interleave_shortest(v1.into_iter()); + assert_eq!(it.size_hint(), (7, Some(7))); + + let v0: Vec<_> = vec![0, 2, 4]; + let i1 = ::std::iter::repeat(1); + let it = v0.into_iter().interleave_shortest(i1); + assert_eq!(it.size_hint(), (6, Some(6))); +} + +#[test] +fn duplicates_by() { + let xs = ["aaa", "bbbbb", "aa", "ccc", "bbbb", "aaaaa", "cccc"]; + let ys = ["aa", "bbbb", "cccc"]; + it::assert_equal(ys.iter(), xs.iter().duplicates_by(|x| x[..2].to_string())); + it::assert_equal(ys.iter(), xs.iter().rev().duplicates_by(|x| x[..2].to_string()).rev()); + let ys_rev = ["ccc", "aa", "bbbbb"]; + it::assert_equal(ys_rev.iter(), xs.iter().duplicates_by(|x| x[..2].to_string()).rev()); +} + +#[test] +fn duplicates() { + let xs = [0, 1, 2, 3, 2, 1, 3]; + let ys = [2, 1, 3]; + it::assert_equal(ys.iter(), xs.iter().duplicates()); + it::assert_equal(ys.iter(), xs.iter().rev().duplicates().rev()); + let ys_rev = [3, 2, 1]; + it::assert_equal(ys_rev.iter(), xs.iter().duplicates().rev()); + + let xs = [0, 1, 0, 1]; + let ys = [0, 1]; + it::assert_equal(ys.iter(), xs.iter().duplicates()); + it::assert_equal(ys.iter(), xs.iter().rev().duplicates().rev()); + let ys_rev = [1, 0]; + it::assert_equal(ys_rev.iter(), xs.iter().duplicates().rev()); + + let xs = vec![0, 1, 2, 1, 2]; + let ys = vec![1, 2]; + assert_eq!(ys, xs.iter().duplicates().cloned().collect_vec()); + assert_eq!(ys, xs.iter().rev().duplicates().rev().cloned().collect_vec()); + let ys_rev = vec![2, 1]; + assert_eq!(ys_rev, xs.iter().duplicates().rev().cloned().collect_vec()); +} + +#[test] +fn unique_by() { + let xs = ["aaa", "bbbbb", "aa", "ccc", "bbbb", "aaaaa", "cccc"]; + let ys = ["aaa", "bbbbb", "ccc"]; + it::assert_equal(ys.iter(), xs.iter().unique_by(|x| x[..2].to_string())); + it::assert_equal(ys.iter(), xs.iter().rev().unique_by(|x| x[..2].to_string()).rev()); + let ys_rev = ["cccc", "aaaaa", "bbbb"]; + it::assert_equal(ys_rev.iter(), xs.iter().unique_by(|x| x[..2].to_string()).rev()); +} + +#[test] +fn unique() { + let xs = [0, 1, 2, 3, 2, 1, 3]; + let ys = [0, 1, 2, 3]; + it::assert_equal(ys.iter(), xs.iter().unique()); + it::assert_equal(ys.iter(), xs.iter().rev().unique().rev()); + let ys_rev = [3, 1, 2, 0]; + it::assert_equal(ys_rev.iter(), xs.iter().unique().rev()); + + let xs = [0, 1]; + let ys = [0, 1]; + it::assert_equal(ys.iter(), xs.iter().unique()); + it::assert_equal(ys.iter(), xs.iter().rev().unique().rev()); + let ys_rev = [1, 0]; + it::assert_equal(ys_rev.iter(), xs.iter().unique().rev()); +} + +#[test] +fn intersperse() { + let xs = ["a", "", "b", "c"]; + let v: Vec<&str> = xs.iter().cloned().intersperse(", ").collect(); + let text: String = v.concat(); + assert_eq!(text, "a, , b, c".to_string()); + + let ys = [0, 1, 2, 3]; + let mut it = ys[..0].iter().copied().intersperse(1); + assert!(it.next() == None); +} + +#[test] +fn dedup() { + let xs = [0, 1, 1, 1, 2, 1, 3, 3]; + let ys = [0, 1, 2, 1, 3]; + it::assert_equal(ys.iter(), 
xs.iter().dedup()); + let xs = [0, 0, 0, 0, 0]; + let ys = [0]; + it::assert_equal(ys.iter(), xs.iter().dedup()); + + let xs = [0, 1, 1, 1, 2, 1, 3, 3]; + let ys = [0, 1, 2, 1, 3]; + let mut xs_d = Vec::new(); + xs.iter().dedup().fold((), |(), &elt| xs_d.push(elt)); + assert_eq!(&xs_d, &ys); +} + +#[test] +fn coalesce() { + let data = vec![-1., -2., -3., 3., 1., 0., -1.]; + let it = data.iter().cloned().coalesce(|x, y| + if (x >= 0.) == (y >= 0.) { + Ok(x + y) + } else { + Err((x, y)) + } + ); + itertools::assert_equal(it.clone(), vec![-6., 4., -1.]); + assert_eq!( + it.fold(vec![], |mut v, n| { + v.push(n); + v + }), + vec![-6., 4., -1.] + ); +} + +#[test] +fn dedup_by() { + let xs = [(0, 0), (0, 1), (1, 1), (2, 1), (0, 2), (3, 1), (0, 3), (1, 3)]; + let ys = [(0, 0), (0, 1), (0, 2), (3, 1), (0, 3)]; + it::assert_equal(ys.iter(), xs.iter().dedup_by(|x, y| x.1==y.1)); + let xs = [(0, 1), (0, 2), (0, 3), (0, 4), (0, 5)]; + let ys = [(0, 1)]; + it::assert_equal(ys.iter(), xs.iter().dedup_by(|x, y| x.0==y.0)); + + let xs = [(0, 0), (0, 1), (1, 1), (2, 1), (0, 2), (3, 1), (0, 3), (1, 3)]; + let ys = [(0, 0), (0, 1), (0, 2), (3, 1), (0, 3)]; + let mut xs_d = Vec::new(); + xs.iter().dedup_by(|x, y| x.1==y.1).fold((), |(), &elt| xs_d.push(elt)); + assert_eq!(&xs_d, &ys); +} + +#[test] +fn dedup_with_count() { + let xs: [i32; 8] = [0, 1, 1, 1, 2, 1, 3, 3]; + let ys: [(usize, &i32); 5] = [(1, &0), (3, &1), (1, &2), (1, &1), (2, &3)]; + + it::assert_equal(ys.iter().cloned(), xs.iter().dedup_with_count()); + + let xs: [i32; 5] = [0, 0, 0, 0, 0]; + let ys: [(usize, &i32); 1] = [(5, &0)]; + + it::assert_equal(ys.iter().cloned(), xs.iter().dedup_with_count()); +} + + +#[test] +fn dedup_by_with_count() { + let xs = [(0, 0), (0, 1), (1, 1), (2, 1), (0, 2), (3, 1), (0, 3), (1, 3)]; + let ys = [(1, &(0, 0)), (3, &(0, 1)), (1, &(0, 2)), (1, &(3, 1)), (2, &(0, 3))]; + + it::assert_equal(ys.iter().cloned(), xs.iter().dedup_by_with_count(|x, y| x.1==y.1)); + + let xs = [(0, 1), (0, 2), (0, 3), (0, 4), (0, 5)]; + let ys = [( 5, &(0, 1))]; + + it::assert_equal(ys.iter().cloned(), xs.iter().dedup_by_with_count(|x, y| x.0==y.0)); +} + +#[test] +fn all_equal() { + assert!("".chars().all_equal()); + assert!("A".chars().all_equal()); + assert!(!"AABBCCC".chars().all_equal()); + assert!("AAAAAAA".chars().all_equal()); + for (_key, mut sub) in &"AABBCCC".chars().group_by(|&x| x) { + assert!(sub.all_equal()); + } +} + +#[test] +fn all_equal_value() { + assert_eq!("".chars().all_equal_value(), Err(None)); + assert_eq!("A".chars().all_equal_value(), Ok('A')); + assert_eq!("AABBCCC".chars().all_equal_value(), Err(Some(('A', 'B')))); + assert_eq!("AAAAAAA".chars().all_equal_value(), Ok('A')); + { + let mut it = [1,2,3].iter().copied(); + let result = it.all_equal_value(); + assert_eq!(result, Err(Some((1, 2)))); + let remaining = it.next(); + assert_eq!(remaining, Some(3)); + assert!(it.next().is_none()); + } +} + +#[test] +fn all_unique() { + assert!("ABCDEFGH".chars().all_unique()); + assert!(!"ABCDEFGA".chars().all_unique()); + assert!(::std::iter::empty::().all_unique()); +} + +#[test] +fn test_put_back_n() { + let xs = [0, 1, 1, 1, 2, 1, 3, 3]; + let mut pb = put_back_n(xs.iter().cloned()); + pb.next(); + pb.next(); + pb.put_back(1); + pb.put_back(0); + it::assert_equal(pb, xs.iter().cloned()); +} + +#[test] +fn tee() { + let xs = [0, 1, 2, 3]; + let (mut t1, mut t2) = xs.iter().cloned().tee(); + assert_eq!(t1.next(), Some(0)); + assert_eq!(t2.next(), Some(0)); + assert_eq!(t1.next(), Some(1)); + assert_eq!(t1.next(), 
Some(2)); + assert_eq!(t1.next(), Some(3)); + assert_eq!(t1.next(), None); + assert_eq!(t2.next(), Some(1)); + assert_eq!(t2.next(), Some(2)); + assert_eq!(t1.next(), None); + assert_eq!(t2.next(), Some(3)); + assert_eq!(t2.next(), None); + assert_eq!(t1.next(), None); + assert_eq!(t2.next(), None); + + let (t1, t2) = xs.iter().cloned().tee(); + it::assert_equal(t1, xs.iter().cloned()); + it::assert_equal(t2, xs.iter().cloned()); + + let (t1, t2) = xs.iter().cloned().tee(); + it::assert_equal(t1.zip(t2), xs.iter().cloned().zip(xs.iter().cloned())); +} + + +#[test] +fn test_rciter() { + let xs = [0, 1, 1, 1, 2, 1, 3, 5, 6]; + + let mut r1 = rciter(xs.iter().cloned()); + let mut r2 = r1.clone(); + assert_eq!(r1.next(), Some(0)); + assert_eq!(r2.next(), Some(1)); + let mut z = r1.zip(r2); + assert_eq!(z.next(), Some((1, 1))); + assert_eq!(z.next(), Some((2, 1))); + assert_eq!(z.next(), Some((3, 5))); + assert_eq!(z.next(), None); + + // test intoiterator + let r1 = rciter(0..5); + let mut z = izip!(&r1, r1); + assert_eq!(z.next(), Some((0, 1))); +} + +#[allow(deprecated)] +#[test] +fn trait_pointers() { + struct ByRef<'r, I: ?Sized>(&'r mut I) ; + + impl<'r, X, I: ?Sized> Iterator for ByRef<'r, I> where + I: 'r + Iterator + { + type Item = X; + fn next(&mut self) -> Option + { + self.0.next() + } + } + + let mut it = Box::new(0..10) as Box>; + assert_eq!(it.next(), Some(0)); + + { + /* make sure foreach works on non-Sized */ + let jt: &mut dyn Iterator = &mut *it; + assert_eq!(jt.next(), Some(1)); + + { + let mut r = ByRef(jt); + assert_eq!(r.next(), Some(2)); + } + + assert_eq!(jt.find_position(|x| *x == 4), Some((1, 4))); + jt.foreach(|_| ()); + } +} + +#[test] +fn merge_by() { + let odd : Vec<(u32, &str)> = vec![(1, "hello"), (3, "world"), (5, "!")]; + let even = vec![(2, "foo"), (4, "bar"), (6, "baz")]; + let expected = vec![(1, "hello"), (2, "foo"), (3, "world"), (4, "bar"), (5, "!"), (6, "baz")]; + let results = odd.iter().merge_by(even.iter(), |a, b| a.0 <= b.0); + it::assert_equal(results, expected.iter()); +} + +#[test] +fn merge_by_btree() { + use std::collections::BTreeMap; + let mut bt1 = BTreeMap::new(); + bt1.insert("hello", 1); + bt1.insert("world", 3); + let mut bt2 = BTreeMap::new(); + bt2.insert("foo", 2); + bt2.insert("bar", 4); + let results = bt1.into_iter().merge_by(bt2.into_iter(), |a, b| a.0 <= b.0 ); + let expected = vec![("bar", 4), ("foo", 2), ("hello", 1), ("world", 3)]; + it::assert_equal(results, expected.into_iter()); +} + +#[allow(deprecated)] +#[test] +fn kmerge() { + let its = (0..4).map(|s| (s..10).step(4)); + + it::assert_equal(its.kmerge(), 0..10); +} + +#[allow(deprecated)] +#[test] +fn kmerge_2() { + let its = vec![3, 2, 1, 0].into_iter().map(|s| (s..10).step(4)); + + it::assert_equal(its.kmerge(), 0..10); +} + +#[test] +fn kmerge_empty() { + let its = (0..4).map(|_| 0..0); + assert_eq!(its.kmerge().next(), None); +} + +#[test] +fn kmerge_size_hint() { + let its = (0..5).map(|_| (0..10)); + assert_eq!(its.kmerge().size_hint(), (50, Some(50))); +} + +#[test] +fn kmerge_empty_size_hint() { + let its = (0..5).map(|_| (0..0)); + assert_eq!(its.kmerge().size_hint(), (0, Some(0))); +} + +#[test] +fn join() { + let many = [1, 2, 3]; + let one = [1]; + let none: Vec = vec![]; + + assert_eq!(many.iter().join(", "), "1, 2, 3"); + assert_eq!( one.iter().join(", "), "1"); + assert_eq!(none.iter().join(", "), ""); +} + +#[test] +fn sorted_unstable_by() { + let sc = [3, 4, 1, 2].iter().cloned().sorted_by(|&a, &b| { + a.cmp(&b) + }); + it::assert_equal(sc, vec![1, 2, 
3, 4]); + + let v = (0..5).sorted_unstable_by(|&a, &b| a.cmp(&b).reverse()); + it::assert_equal(v, vec![4, 3, 2, 1, 0]); +} + +#[test] +fn sorted_unstable_by_key() { + let sc = [3, 4, 1, 2].iter().cloned().sorted_unstable_by_key(|&x| x); + it::assert_equal(sc, vec![1, 2, 3, 4]); + + let v = (0..5).sorted_unstable_by_key(|&x| -x); + it::assert_equal(v, vec![4, 3, 2, 1, 0]); +} + +#[test] +fn sorted_by() { + let sc = [3, 4, 1, 2].iter().cloned().sorted_by(|&a, &b| { + a.cmp(&b) + }); + it::assert_equal(sc, vec![1, 2, 3, 4]); + + let v = (0..5).sorted_by(|&a, &b| a.cmp(&b).reverse()); + it::assert_equal(v, vec![4, 3, 2, 1, 0]); +} + +qc::quickcheck! { + fn k_smallest_range(n: u64, m: u16, k: u16) -> () { + // u16 is used to constrain k and m to 0..2¹⁶, + // otherwise the test could use too much memory. + let (k, m) = (k as u64, m as u64); + + // Generate a random permutation of n..n+m + let i = { + let mut v: Vec = (n..n.saturating_add(m)).collect(); + v.shuffle(&mut thread_rng()); + v.into_iter() + }; + + // Check that taking the k smallest elements yields n..n+min(k, m) + it::assert_equal( + i.k_smallest(k as usize), + n..n.saturating_add(min(k, m)) + ); + } +} + +#[derive(Clone, Debug)] +struct RandIter { + idx: usize, + len: usize, + rng: R, + _t: PhantomData +} + +impl Iterator for RandIter +where Standard: Distribution { + type Item = T; + fn next(&mut self) -> Option { + if self.idx == self.len { + None + } else { + self.idx += 1; + Some(self.rng.gen()) + } + } +} + +impl qc::Arbitrary for RandIter { + fn arbitrary(g: &mut G) -> Self { + RandIter { + idx: 0, + len: g.size(), + rng: R::seed_from_u64(g.next_u64()), + _t : PhantomData{}, + } + } +} + +// Check that taking the k smallest is the same as +// sorting then taking the k first elements +fn k_smallest_sort(i: I, k: u16) +where + I: Iterator + Clone, + I::Item: Ord + Debug, +{ + let j = i.clone(); + let k = k as usize; + it::assert_equal( + i.k_smallest(k), + j.sorted().take(k) + ) +} + +macro_rules! generic_test { + ($f:ident, $($t:ty),+) => { + $(paste::item! { + qc::quickcheck! 
{ + fn [< $f _ $t >](i: RandIter<$t>, k: u16) -> () { + $f(i, k) + } + } + })+ + }; +} + +generic_test!(k_smallest_sort, u8, u16, u32, u64, i8, i16, i32, i64); + +#[test] +fn sorted_by_key() { + let sc = [3, 4, 1, 2].iter().cloned().sorted_by_key(|&x| x); + it::assert_equal(sc, vec![1, 2, 3, 4]); + + let v = (0..5).sorted_by_key(|&x| -x); + it::assert_equal(v, vec![4, 3, 2, 1, 0]); +} + +#[test] +fn sorted_by_cached_key() { + // Track calls to key function + let mut ncalls = 0; + + let sorted = [3, 4, 1, 2].iter().cloned().sorted_by_cached_key(|&x| { + ncalls += 1; + x.to_string() + }); + it::assert_equal(sorted, vec![1, 2, 3, 4]); + // Check key function called once per element + assert_eq!(ncalls, 4); + + let mut ncalls = 0; + + let sorted = (0..5).sorted_by_cached_key(|&x| { + ncalls += 1; + -x + }); + it::assert_equal(sorted, vec![4, 3, 2, 1, 0]); + // Check key function called once per element + assert_eq!(ncalls, 5); +} + +#[test] +fn test_multipeek() { + let nums = vec![1u8,2,3,4,5]; + + let mp = multipeek(nums.iter().copied()); + assert_eq!(nums, mp.collect::>()); + + let mut mp = multipeek(nums.iter().copied()); + assert_eq!(mp.peek(), Some(&1)); + assert_eq!(mp.next(), Some(1)); + assert_eq!(mp.peek(), Some(&2)); + assert_eq!(mp.peek(), Some(&3)); + assert_eq!(mp.next(), Some(2)); + assert_eq!(mp.peek(), Some(&3)); + assert_eq!(mp.peek(), Some(&4)); + assert_eq!(mp.peek(), Some(&5)); + assert_eq!(mp.peek(), None); + assert_eq!(mp.next(), Some(3)); + assert_eq!(mp.next(), Some(4)); + assert_eq!(mp.peek(), Some(&5)); + assert_eq!(mp.peek(), None); + assert_eq!(mp.next(), Some(5)); + assert_eq!(mp.next(), None); + assert_eq!(mp.peek(), None); +} + +#[test] +fn test_multipeek_reset() { + let data = [1, 2, 3, 4]; + + let mut mp = multipeek(cloned(&data)); + assert_eq!(mp.peek(), Some(&1)); + assert_eq!(mp.next(), Some(1)); + assert_eq!(mp.peek(), Some(&2)); + assert_eq!(mp.peek(), Some(&3)); + mp.reset_peek(); + assert_eq!(mp.peek(), Some(&2)); + assert_eq!(mp.next(), Some(2)); +} + +#[test] +fn test_multipeek_peeking_next() { + use crate::it::PeekingNext; + let nums = vec![1u8,2,3,4,5,6,7]; + + let mut mp = multipeek(nums.iter().copied()); + assert_eq!(mp.peeking_next(|&x| x != 0), Some(1)); + assert_eq!(mp.next(), Some(2)); + assert_eq!(mp.peek(), Some(&3)); + assert_eq!(mp.peek(), Some(&4)); + assert_eq!(mp.peeking_next(|&x| x == 3), Some(3)); + assert_eq!(mp.peek(), Some(&4)); + assert_eq!(mp.peeking_next(|&x| x != 4), None); + assert_eq!(mp.peeking_next(|&x| x == 4), Some(4)); + assert_eq!(mp.peek(), Some(&5)); + assert_eq!(mp.peek(), Some(&6)); + assert_eq!(mp.peeking_next(|&x| x != 5), None); + assert_eq!(mp.peek(), Some(&7)); + assert_eq!(mp.peeking_next(|&x| x == 5), Some(5)); + assert_eq!(mp.peeking_next(|&x| x == 6), Some(6)); + assert_eq!(mp.peek(), Some(&7)); + assert_eq!(mp.peek(), None); + assert_eq!(mp.next(), Some(7)); + assert_eq!(mp.peek(), None); +} + +#[test] +fn test_peek_nth() { + let nums = vec![1u8,2,3,4,5]; + + let iter = peek_nth(nums.iter().copied()); + assert_eq!(nums, iter.collect::>()); + + let mut iter = peek_nth(nums.iter().copied()); + + assert_eq!(iter.peek_nth(0), Some(&1)); + assert_eq!(iter.peek_nth(0), Some(&1)); + assert_eq!(iter.next(), Some(1)); + + assert_eq!(iter.peek_nth(0), Some(&2)); + assert_eq!(iter.peek_nth(1), Some(&3)); + assert_eq!(iter.next(), Some(2)); + + assert_eq!(iter.peek_nth(0), Some(&3)); + assert_eq!(iter.peek_nth(1), Some(&4)); + assert_eq!(iter.peek_nth(2), Some(&5)); + assert_eq!(iter.peek_nth(3), None); + + 
assert_eq!(iter.next(), Some(3)); + assert_eq!(iter.next(), Some(4)); + + assert_eq!(iter.peek_nth(0), Some(&5)); + assert_eq!(iter.peek_nth(1), None); + assert_eq!(iter.next(), Some(5)); + assert_eq!(iter.next(), None); + + assert_eq!(iter.peek_nth(0), None); + assert_eq!(iter.peek_nth(1), None); +} + +#[test] +fn test_peek_nth_peeking_next() { + use it::PeekingNext; + let nums = vec![1u8,2,3,4,5,6,7]; + let mut iter = peek_nth(nums.iter().copied()); + + assert_eq!(iter.peeking_next(|&x| x != 0), Some(1)); + assert_eq!(iter.next(), Some(2)); + + assert_eq!(iter.peek_nth(0), Some(&3)); + assert_eq!(iter.peek_nth(1), Some(&4)); + assert_eq!(iter.peeking_next(|&x| x == 3), Some(3)); + assert_eq!(iter.peek(), Some(&4)); + + assert_eq!(iter.peeking_next(|&x| x != 4), None); + assert_eq!(iter.peeking_next(|&x| x == 4), Some(4)); + assert_eq!(iter.peek_nth(0), Some(&5)); + assert_eq!(iter.peek_nth(1), Some(&6)); + + assert_eq!(iter.peeking_next(|&x| x != 5), None); + assert_eq!(iter.peek(), Some(&5)); + + assert_eq!(iter.peeking_next(|&x| x == 5), Some(5)); + assert_eq!(iter.peeking_next(|&x| x == 6), Some(6)); + assert_eq!(iter.peek_nth(0), Some(&7)); + assert_eq!(iter.peek_nth(1), None); + assert_eq!(iter.next(), Some(7)); + assert_eq!(iter.peek(), None); +} + +#[test] +fn pad_using() { + it::assert_equal((0..0).pad_using(1, |_| 1), 1..2); + + let v: Vec = vec![0, 1, 2]; + let r = v.into_iter().pad_using(5, |n| n); + it::assert_equal(r, vec![0, 1, 2, 3, 4]); + + let v: Vec = vec![0, 1, 2]; + let r = v.into_iter().pad_using(1, |_| panic!()); + it::assert_equal(r, vec![0, 1, 2]); +} + +#[test] +fn group_by() { + for (ch1, sub) in &"AABBCCC".chars().group_by(|&x| x) { + for ch2 in sub { + assert_eq!(ch1, ch2); + } + } + + for (ch1, sub) in &"AAABBBCCCCDDDD".chars().group_by(|&x| x) { + for ch2 in sub { + assert_eq!(ch1, ch2); + if ch1 == 'C' { + break; + } + } + } + + let toupper = |ch: &char| ch.to_uppercase().next().unwrap(); + + // try all possible orderings + for indices in permutohedron::Heap::new(&mut [0, 1, 2, 3]) { + let groups = "AaaBbbccCcDDDD".chars().group_by(&toupper); + let mut subs = groups.into_iter().collect_vec(); + + for &idx in &indices[..] 
{ + let (key, text) = match idx { + 0 => ('A', "Aaa".chars()), + 1 => ('B', "Bbb".chars()), + 2 => ('C', "ccCc".chars()), + 3 => ('D', "DDDD".chars()), + _ => unreachable!(), + }; + assert_eq!(key, subs[idx].0); + it::assert_equal(&mut subs[idx].1, text); + } + } + + let groups = "AAABBBCCCCDDDD".chars().group_by(|&x| x); + let mut subs = groups.into_iter().map(|(_, g)| g).collect_vec(); + + let sd = subs.pop().unwrap(); + let sc = subs.pop().unwrap(); + let sb = subs.pop().unwrap(); + let sa = subs.pop().unwrap(); + for (a, b, c, d) in multizip((sa, sb, sc, sd)) { + assert_eq!(a, 'A'); + assert_eq!(b, 'B'); + assert_eq!(c, 'C'); + assert_eq!(d, 'D'); + } + + // check that the key closure is called exactly n times + { + let mut ntimes = 0; + let text = "AABCCC"; + for (_, sub) in &text.chars().group_by(|&x| { ntimes += 1; x}) { + for _ in sub { + } + } + assert_eq!(ntimes, text.len()); + } + + { + let mut ntimes = 0; + let text = "AABCCC"; + for _ in &text.chars().group_by(|&x| { ntimes += 1; x}) { + } + assert_eq!(ntimes, text.len()); + } + + { + let text = "ABCCCDEEFGHIJJKK"; + let gr = text.chars().group_by(|&x| x); + it::assert_equal(gr.into_iter().flat_map(|(_, sub)| sub), text.chars()); + } +} + +#[test] +fn group_by_lazy_2() { + let data = vec![0, 1]; + let groups = data.iter().group_by(|k| *k); + let gs = groups.into_iter().collect_vec(); + it::assert_equal(data.iter(), gs.into_iter().flat_map(|(_k, g)| g)); + + let data = vec![0, 1, 1, 0, 0]; + let groups = data.iter().group_by(|k| *k); + let mut gs = groups.into_iter().collect_vec(); + gs[1..].reverse(); + it::assert_equal(&[0, 0, 0, 1, 1], gs.into_iter().flat_map(|(_, g)| g)); + + let grouper = data.iter().group_by(|k| *k); + let mut groups = Vec::new(); + for (k, group) in &grouper { + if *k == 1 { + groups.push(group); + } + } + it::assert_equal(&mut groups[0], &[1, 1]); + + let data = vec![0, 0, 0, 1, 1, 0, 0, 2, 2, 3, 3]; + let grouper = data.iter().group_by(|k| *k); + let mut groups = Vec::new(); + for (i, (_, group)) in grouper.into_iter().enumerate() { + if i < 2 { + groups.push(group); + } else if i < 4 { + for _ in group { + } + } else { + groups.push(group); + } + } + it::assert_equal(&mut groups[0], &[0, 0, 0]); + it::assert_equal(&mut groups[1], &[1, 1]); + it::assert_equal(&mut groups[2], &[3, 3]); + + // use groups as chunks + let data = vec![0, 0, 0, 1, 1, 0, 0, 2, 2, 3, 3]; + let mut i = 0; + let grouper = data.iter().group_by(move |_| { let k = i / 3; i += 1; k }); + for (i, group) in &grouper { + match i { + 0 => it::assert_equal(group, &[0, 0, 0]), + 1 => it::assert_equal(group, &[1, 1, 0]), + 2 => it::assert_equal(group, &[0, 2, 2]), + 3 => it::assert_equal(group, &[3, 3]), + _ => unreachable!(), + } + } +} + +#[test] +fn group_by_lazy_3() { + // test consuming each group on the lap after it was produced + let data = vec![0, 0, 0, 1, 1, 0, 0, 1, 1, 2, 2]; + let grouper = data.iter().group_by(|elt| *elt); + let mut last = None; + for (key, group) in &grouper { + if let Some(gr) = last.take() { + for elt in gr { + assert!(elt != key && i32::abs(elt - key) == 1); + } + } + last = Some(group); + } +} + +#[test] +fn chunks() { + let data = vec![0, 0, 0, 1, 1, 0, 0, 2, 2, 3, 3]; + let grouper = data.iter().chunks(3); + for (i, chunk) in grouper.into_iter().enumerate() { + match i { + 0 => it::assert_equal(chunk, &[0, 0, 0]), + 1 => it::assert_equal(chunk, &[1, 1, 0]), + 2 => it::assert_equal(chunk, &[0, 2, 2]), + 3 => it::assert_equal(chunk, &[3, 3]), + _ => unreachable!(), + } + } +} + +#[test] +fn concat_empty() 
{ + let data: Vec> = Vec::new(); + assert_eq!(data.into_iter().concat(), Vec::new()) +} + +#[test] +fn concat_non_empty() { + let data = vec![vec![1,2,3], vec![4,5,6], vec![7,8,9]]; + assert_eq!(data.into_iter().concat(), vec![1,2,3,4,5,6,7,8,9]) +} + +#[test] +fn combinations() { + assert!((1..3).combinations(5).next().is_none()); + + let it = (1..3).combinations(2); + it::assert_equal(it, vec![ + vec![1, 2], + ]); + + let it = (1..5).combinations(2); + it::assert_equal(it, vec![ + vec![1, 2], + vec![1, 3], + vec![1, 4], + vec![2, 3], + vec![2, 4], + vec![3, 4], + ]); + + it::assert_equal((0..0).tuple_combinations::<(_, _)>(), >::new()); + it::assert_equal((0..1).tuple_combinations::<(_, _)>(), >::new()); + it::assert_equal((0..2).tuple_combinations::<(_, _)>(), vec![(0, 1)]); + + it::assert_equal((0..0).combinations(2), >>::new()); + it::assert_equal((0..1).combinations(1), vec![vec![0]]); + it::assert_equal((0..2).combinations(1), vec![vec![0], vec![1]]); + it::assert_equal((0..2).combinations(2), vec![vec![0, 1]]); +} + +#[test] +fn combinations_of_too_short() { + for i in 1..10 { + assert!((0..0).combinations(i).next().is_none()); + assert!((0..i - 1).combinations(i).next().is_none()); + } +} + + +#[test] +fn combinations_zero() { + it::assert_equal((1..3).combinations(0), vec![vec![]]); + it::assert_equal((0..0).combinations(0), vec![vec![]]); +} + +#[test] +fn permutations_zero() { + it::assert_equal((1..3).permutations(0), vec![vec![]]); + it::assert_equal((0..0).permutations(0), vec![vec![]]); +} + +#[test] +fn combinations_with_replacement() { + // Pool smaller than n + it::assert_equal((0..1).combinations_with_replacement(2), vec![vec![0, 0]]); + // Pool larger than n + it::assert_equal( + (0..3).combinations_with_replacement(2), + vec![ + vec![0, 0], + vec![0, 1], + vec![0, 2], + vec![1, 1], + vec![1, 2], + vec![2, 2], + ], + ); + // Zero size + it::assert_equal( + (0..3).combinations_with_replacement(0), + vec![vec![]], + ); + // Zero size on empty pool + it::assert_equal( + (0..0).combinations_with_replacement(0), + vec![vec![]], + ); + // Empty pool + it::assert_equal( + (0..0).combinations_with_replacement(2), + >>::new(), + ); +} + +#[test] +fn powerset() { + it::assert_equal((0..0).powerset(), vec![vec![]]); + it::assert_equal((0..1).powerset(), vec![vec![], vec![0]]); + it::assert_equal((0..2).powerset(), vec![vec![], vec![0], vec![1], vec![0, 1]]); + it::assert_equal((0..3).powerset(), vec![ + vec![], + vec![0], vec![1], vec![2], + vec![0, 1], vec![0, 2], vec![1, 2], + vec![0, 1, 2] + ]); + + assert_eq!((0..4).powerset().count(), 1 << 4); + assert_eq!((0..8).powerset().count(), 1 << 8); + assert_eq!((0..16).powerset().count(), 1 << 16); +} + +#[test] +fn diff_mismatch() { + let a = vec![1, 2, 3, 4]; + let b = vec![1.0, 5.0, 3.0, 4.0]; + let b_map = b.into_iter().map(|f| f as i32); + let diff = it::diff_with(a.iter(), b_map, |a, b| *a == b); + + assert!(match diff { + Some(it::Diff::FirstMismatch(1, _, from_diff)) => + from_diff.collect::>() == vec![5, 3, 4], + _ => false, + }); +} + +#[test] +fn diff_longer() { + let a = vec![1, 2, 3, 4]; + let b = vec![1.0, 2.0, 3.0, 4.0, 5.0, 6.0]; + let b_map = b.into_iter().map(|f| f as i32); + let diff = it::diff_with(a.iter(), b_map, |a, b| *a == b); + + assert!(match diff { + Some(it::Diff::Longer(_, remaining)) => + remaining.collect::>() == vec![5, 6], + _ => false, + }); +} + +#[test] +fn diff_shorter() { + let a = vec![1, 2, 3, 4]; + let b = vec![1.0, 2.0]; + let b_map = b.into_iter().map(|f| f as i32); + let diff = 
it::diff_with(a.iter(), b_map, |a, b| *a == b); + + assert!(match diff { + Some(it::Diff::Shorter(len, _)) => len == 2, + _ => false, + }); +} + +#[test] +fn extrema_set() { + use std::cmp::Ordering; + + // A peculiar type: Equality compares both tuple items, but ordering only the + // first item. Used to distinguish equal elements. + #[derive(Clone, Debug, PartialEq, Eq)] + struct Val(u32, u32); + + impl PartialOrd for Val { + fn partial_cmp(&self, other: &Val) -> Option<Ordering> { + self.0.partial_cmp(&other.0) + } + } + + impl Ord for Val { + fn cmp(&self, other: &Val) -> Ordering { + self.0.cmp(&other.0) + } + } + + assert_eq!(None::<u32>.iter().min_set(), Vec::<&u32>::new()); + assert_eq!(None::<u32>.iter().max_set(), Vec::<&u32>::new()); + + assert_eq!(Some(1u32).iter().min_set(), vec![&1]); + assert_eq!(Some(1u32).iter().max_set(), vec![&1]); + + let data = vec![Val(0, 1), Val(2, 0), Val(0, 2), Val(1, 0), Val(2, 1)]; + + let min_set = data.iter().min_set(); + assert_eq!(min_set, vec![&Val(0, 1), &Val(0, 2)]); + + let min_set_by_key = data.iter().min_set_by_key(|v| v.1); + assert_eq!(min_set_by_key, vec![&Val(2, 0), &Val(1, 0)]); + + let min_set_by = data.iter().min_set_by(|x, y| x.1.cmp(&y.1)); + assert_eq!(min_set_by, vec![&Val(2, 0), &Val(1, 0)]); + + let max_set = data.iter().max_set(); + assert_eq!(max_set, vec![&Val(2, 0), &Val(2, 1)]); + + let max_set_by_key = data.iter().max_set_by_key(|v| v.1); + assert_eq!(max_set_by_key, vec![&Val(0, 2)]); + + let max_set_by = data.iter().max_set_by(|x, y| x.1.cmp(&y.1)); + assert_eq!(max_set_by, vec![&Val(0, 2)]); +} + +#[test] +fn minmax() { + use std::cmp::Ordering; + use crate::it::MinMaxResult; + + // A peculiar type: Equality compares both tuple items, but ordering only the + // first item. This is so we can check the stability property easily.
+ #[derive(Clone, Debug, PartialEq, Eq)] + struct Val(u32, u32); + + impl PartialOrd for Val { + fn partial_cmp(&self, other: &Val) -> Option { + self.0.partial_cmp(&other.0) + } + } + + impl Ord for Val { + fn cmp(&self, other: &Val) -> Ordering { + self.0.cmp(&other.0) + } + } + + assert_eq!(None::>.iter().minmax(), MinMaxResult::NoElements); + + assert_eq!(Some(1u32).iter().minmax(), MinMaxResult::OneElement(&1)); + + let data = vec![Val(0, 1), Val(2, 0), Val(0, 2), Val(1, 0), Val(2, 1)]; + + let minmax = data.iter().minmax(); + assert_eq!(minmax, MinMaxResult::MinMax(&Val(0, 1), &Val(2, 1))); + + let (min, max) = data.iter().minmax_by_key(|v| v.1).into_option().unwrap(); + assert_eq!(min, &Val(2, 0)); + assert_eq!(max, &Val(0, 2)); + + let (min, max) = data.iter().minmax_by(|x, y| x.1.cmp(&y.1)).into_option().unwrap(); + assert_eq!(min, &Val(2, 0)); + assert_eq!(max, &Val(0, 2)); +} + +#[test] +fn format() { + let data = [0, 1, 2, 3]; + let ans1 = "0, 1, 2, 3"; + let ans2 = "0--1--2--3"; + + let t1 = format!("{}", data.iter().format(", ")); + assert_eq!(t1, ans1); + let t2 = format!("{:?}", data.iter().format("--")); + assert_eq!(t2, ans2); + + let dataf = [1.1, 5.71828, -22.]; + let t3 = format!("{:.2e}", dataf.iter().format(", ")); + assert_eq!(t3, "1.10e0, 5.72e0, -2.20e1"); +} + +#[test] +fn while_some() { + let ns = (1..10).map(|x| if x % 5 != 0 { Some(x) } else { None }) + .while_some(); + it::assert_equal(ns, vec![1, 2, 3, 4]); +} + +#[allow(deprecated)] +#[test] +fn fold_while() { + let mut iterations = 0; + let vec = vec![1, 2, 3, 4, 5, 6, 7, 8, 9, 10]; + let sum = vec.into_iter().fold_while(0, |acc, item| { + iterations += 1; + let new_sum = acc + item; + if new_sum <= 20 { + FoldWhile::Continue(new_sum) + } else { + FoldWhile::Done(acc) + } + }).into_inner(); + assert_eq!(iterations, 6); + assert_eq!(sum, 15); +} + +#[test] +fn tree_fold1() { + let x = [ + "", + "0", + "0 1 x", + "0 1 x 2 x", + "0 1 x 2 3 x x", + "0 1 x 2 3 x x 4 x", + "0 1 x 2 3 x x 4 5 x x", + "0 1 x 2 3 x x 4 5 x 6 x x", + "0 1 x 2 3 x x 4 5 x 6 7 x x x", + "0 1 x 2 3 x x 4 5 x 6 7 x x x 8 x", + "0 1 x 2 3 x x 4 5 x 6 7 x x x 8 9 x x", + "0 1 x 2 3 x x 4 5 x 6 7 x x x 8 9 x 10 x x", + "0 1 x 2 3 x x 4 5 x 6 7 x x x 8 9 x 10 11 x x x", + "0 1 x 2 3 x x 4 5 x 6 7 x x x 8 9 x 10 11 x x 12 x x", + "0 1 x 2 3 x x 4 5 x 6 7 x x x 8 9 x 10 11 x x 12 13 x x x", + "0 1 x 2 3 x x 4 5 x 6 7 x x x 8 9 x 10 11 x x 12 13 x 14 x x x", + "0 1 x 2 3 x x 4 5 x 6 7 x x x 8 9 x 10 11 x x 12 13 x 14 15 x x x x", + ]; + for (i, &s) in x.iter().enumerate() { + let expected = if s.is_empty() { None } else { Some(s.to_string()) }; + let num_strings = (0..i).map(|x| x.to_string()); + let actual = num_strings.tree_fold1(|a, b| format!("{} {} x", a, b)); + assert_eq!(actual, expected); + } +} + +#[test] +fn exactly_one_question_mark_syntax_works() { + exactly_one_question_mark_return().unwrap_err(); +} + +fn exactly_one_question_mark_return() -> Result<(), ExactlyOneError>> { + [].iter().exactly_one()?; + Ok(()) +} + +#[test] +fn multiunzip() { + let (a, b, c): (Vec<_>, Vec<_>, Vec<_>) = [(0, 1, 2), (3, 4, 5), (6, 7, 8)].iter().cloned().multiunzip(); + assert_eq!((a, b, c), (vec![0, 3, 6], vec![1, 4, 7], vec![2, 5, 8])); + let (): () = [(), (), ()].iter().cloned().multiunzip(); + let t: (Vec<_>, Vec<_>, Vec<_>, Vec<_>, Vec<_>, Vec<_>, Vec<_>, Vec<_>, Vec<_>, Vec<_>, Vec<_>, Vec<_>) = [(0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11)].iter().cloned().multiunzip(); + assert_eq!(t, (vec![0], vec![1], vec![2], vec![3], vec![4], vec![5], vec![6], 
vec![7], vec![8], vec![9], vec![10], vec![11])); +} diff --git a/rust/hw/char/pl011/vendor/itertools/tests/tuples.rs b/rust/hw/char/pl011/vendor/itertools/tests/tuples.rs new file mode 100644 index 0000000000..9fc8b3cc78 --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/tests/tuples.rs @@ -0,0 +1,86 @@ +use itertools::Itertools; + +#[test] +fn tuples() { + let v = [1, 2, 3, 4, 5]; + let mut iter = v.iter().cloned().tuples(); + assert_eq!(Some((1,)), iter.next()); + assert_eq!(Some((2,)), iter.next()); + assert_eq!(Some((3,)), iter.next()); + assert_eq!(Some((4,)), iter.next()); + assert_eq!(Some((5,)), iter.next()); + assert_eq!(None, iter.next()); + assert_eq!(None, iter.into_buffer().next()); + + let mut iter = v.iter().cloned().tuples(); + assert_eq!(Some((1, 2)), iter.next()); + assert_eq!(Some((3, 4)), iter.next()); + assert_eq!(None, iter.next()); + itertools::assert_equal(vec![5], iter.into_buffer()); + + let mut iter = v.iter().cloned().tuples(); + assert_eq!(Some((1, 2, 3)), iter.next()); + assert_eq!(None, iter.next()); + itertools::assert_equal(vec![4, 5], iter.into_buffer()); + + let mut iter = v.iter().cloned().tuples(); + assert_eq!(Some((1, 2, 3, 4)), iter.next()); + assert_eq!(None, iter.next()); + itertools::assert_equal(vec![5], iter.into_buffer()); +} + +#[test] +fn tuple_windows() { + let v = [1, 2, 3, 4, 5]; + + let mut iter = v.iter().cloned().tuple_windows(); + assert_eq!(Some((1,)), iter.next()); + assert_eq!(Some((2,)), iter.next()); + assert_eq!(Some((3,)), iter.next()); + + let mut iter = v.iter().cloned().tuple_windows(); + assert_eq!(Some((1, 2)), iter.next()); + assert_eq!(Some((2, 3)), iter.next()); + assert_eq!(Some((3, 4)), iter.next()); + assert_eq!(Some((4, 5)), iter.next()); + assert_eq!(None, iter.next()); + + let mut iter = v.iter().cloned().tuple_windows(); + assert_eq!(Some((1, 2, 3)), iter.next()); + assert_eq!(Some((2, 3, 4)), iter.next()); + assert_eq!(Some((3, 4, 5)), iter.next()); + assert_eq!(None, iter.next()); + + let mut iter = v.iter().cloned().tuple_windows(); + assert_eq!(Some((1, 2, 3, 4)), iter.next()); + assert_eq!(Some((2, 3, 4, 5)), iter.next()); + assert_eq!(None, iter.next()); + + let v = [1, 2, 3]; + let mut iter = v.iter().cloned().tuple_windows::<(_, _, _, _)>(); + assert_eq!(None, iter.next()); +} + +#[test] +fn next_tuple() { + let v = [1, 2, 3, 4, 5]; + let mut iter = v.iter(); + assert_eq!(iter.next_tuple().map(|(&x, &y)| (x, y)), Some((1, 2))); + assert_eq!(iter.next_tuple().map(|(&x, &y)| (x, y)), Some((3, 4))); + assert_eq!(iter.next_tuple::<(_, _)>(), None); +} + +#[test] +fn collect_tuple() { + let v = [1, 2]; + let iter = v.iter().cloned(); + assert_eq!(iter.collect_tuple(), Some((1, 2))); + + let v = [1]; + let iter = v.iter().cloned(); + assert_eq!(iter.collect_tuple::<(_, _)>(), None); + + let v = [1, 2, 3]; + let iter = v.iter().cloned(); + assert_eq!(iter.collect_tuple::<(_, _)>(), None); +} diff --git a/rust/hw/char/pl011/vendor/itertools/tests/zip.rs b/rust/hw/char/pl011/vendor/itertools/tests/zip.rs new file mode 100644 index 0000000000..75157d34f3 --- /dev/null +++ b/rust/hw/char/pl011/vendor/itertools/tests/zip.rs @@ -0,0 +1,77 @@ +use itertools::Itertools; +use itertools::EitherOrBoth::{Both, Left, Right}; +use itertools::free::zip_eq; +use itertools::multizip; + +#[test] +fn zip_longest_fused() { + let a = [Some(1), None, Some(3), Some(4)]; + let b = [1, 2, 3]; + + let unfused = a.iter().batching(|it| *it.next().unwrap()) + .zip_longest(b.iter().cloned()); + itertools::assert_equal(unfused, + 
vec![Both(1, 1), Right(2), Right(3)]); +} + +#[test] +fn test_zip_longest_size_hint() { + let c = (1..10).cycle(); + let v: &[_] = &[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]; + let v2 = &[10, 11, 12]; + + assert_eq!(c.zip_longest(v.iter()).size_hint(), (std::usize::MAX, None)); + + assert_eq!(v.iter().zip_longest(v2.iter()).size_hint(), (10, Some(10))); +} + +#[test] +fn test_double_ended_zip_longest() { + let xs = [1, 2, 3, 4, 5, 6]; + let ys = [1, 2, 3, 7]; + let a = xs.iter().copied(); + let b = ys.iter().copied(); + let mut it = a.zip_longest(b); + assert_eq!(it.next(), Some(Both(1, 1))); + assert_eq!(it.next(), Some(Both(2, 2))); + assert_eq!(it.next_back(), Some(Left(6))); + assert_eq!(it.next_back(), Some(Left(5))); + assert_eq!(it.next_back(), Some(Both(4, 7))); + assert_eq!(it.next(), Some(Both(3, 3))); + assert_eq!(it.next(), None); +} + +#[test] +fn test_double_ended_zip() { + let xs = [1, 2, 3, 4, 5, 6]; + let ys = [1, 2, 3, 7]; + let a = xs.iter().copied(); + let b = ys.iter().copied(); + let mut it = multizip((a, b)); + assert_eq!(it.next_back(), Some((4, 7))); + assert_eq!(it.next_back(), Some((3, 3))); + assert_eq!(it.next_back(), Some((2, 2))); + assert_eq!(it.next_back(), Some((1, 1))); + assert_eq!(it.next_back(), None); +} + + +#[should_panic] +#[test] +fn zip_eq_panic1() +{ + let a = [1, 2]; + let b = [1, 2, 3]; + + zip_eq(&a, &b).count(); +} + +#[should_panic] +#[test] +fn zip_eq_panic2() +{ + let a: [i32; 0] = []; + let b = [1, 2, 3]; + + zip_eq(&a, &b).count(); +} diff --git a/rust/hw/char/pl011/vendor/meson.build b/rust/hw/char/pl011/vendor/meson.build new file mode 100644 index 0000000000..4611d2f11d --- /dev/null +++ b/rust/hw/char/pl011/vendor/meson.build @@ -0,0 +1,18 @@ +subdir('arbitrary-int') +subdir('unicode-ident') +# subdir('version_check') +subdir('either') + +subdir('itertools') +subdir('proc-macro2') + +subdir('quote') + +subdir('proc-macro-error-attr') +subdir('syn') + +subdir('proc-macro-error') + +subdir('bilge-impl') + +subdir('bilge') diff --git a/rust/hw/char/pl011/vendor/proc-macro-error-attr/.cargo-checksum.json b/rust/hw/char/pl011/vendor/proc-macro-error-attr/.cargo-checksum.json new file mode 100644 index 0000000000..c30b5418a8 --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro-error-attr/.cargo-checksum.json @@ -0,0 +1 @@ +{"files":{"Cargo.toml":"fbd3ce928441a0b43859bbbe36549f05e7a1ebfee62e5982710671a8f41de527","LICENSE-APACHE":"6fd0f3522047150ca7c1939f02bc4a15662a4741a89bc03ae784eefa18caa299","LICENSE-MIT":"544b3aed1fd723d0cadea567affdcfe0431e43e18d997a718f9d67256b814fde","build.rs":"37b0aca3c4a14dfc050c2df38ae633311d7a1532cdbb8eb57182802c4a1983eb","src/lib.rs":"9e3d13c266376b688642572bb4091e094ff5277fce4bee72bcc3c5f982dd831c","src/parse.rs":"2d8f220f91235be8ed0ddcab55ec3699b9d3b28d538ed24197797cc20194c473","src/settings.rs":"be9382479d7a857b55e5a0b1014f72150c9ee7f2bbb5a5bdeabc0f8de2d95c26"},"package":"a1be40180e52ecc98ad80b184934baf3d0d29f979574e439af5a55274b35f869"} \ No newline at end of file diff --git a/rust/hw/char/pl011/vendor/proc-macro-error-attr/Cargo.toml b/rust/hw/char/pl011/vendor/proc-macro-error-attr/Cargo.toml new file mode 100644 index 0000000000..a2c766de9b --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro-error-attr/Cargo.toml @@ -0,0 +1,33 @@ +# THIS FILE IS AUTOMATICALLY GENERATED BY CARGO +# +# When uploading crates to the registry Cargo will automatically +# "normalize" Cargo.toml files for maximal compatibility +# with all versions of Cargo and also rewrite `path` dependencies +# to registry (e.g., crates.io) 
dependencies +# +# If you believe there's an error in this file please file an +# issue against the rust-lang/cargo repository. If you're +# editing this file be aware that the upstream Cargo.toml +# will likely look very different (and much more reasonable) + +[package] +edition = "2018" +name = "proc-macro-error-attr" +version = "1.0.4" +authors = ["CreepySkeleton "] +build = "build.rs" +description = "Attribute macro for proc-macro-error crate" +license = "MIT OR Apache-2.0" +repository = "https://gitlab.com/CreepySkeleton/proc-macro-error" +[package.metadata.docs.rs] +targets = ["x86_64-unknown-linux-gnu"] + +[lib] +proc-macro = true +[dependencies.proc-macro2] +version = "1" + +[dependencies.quote] +version = "1" +[build-dependencies.version_check] +version = "0.9" diff --git a/rust/hw/char/pl011/vendor/proc-macro-error-attr/LICENSE-APACHE b/rust/hw/char/pl011/vendor/proc-macro-error-attr/LICENSE-APACHE new file mode 100644 index 0000000000..658240a840 --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro-error-attr/LICENSE-APACHE @@ -0,0 +1,201 @@ + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + +TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + +1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. 
+ + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + +2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + +3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + +4. Redistribution. 
You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + +5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + +6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + +7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + +8. Limitation of Liability. 
In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + +9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + +END OF TERMS AND CONDITIONS + +APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + +Copyright 2019-2020 CreepySkeleton + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. diff --git a/rust/hw/char/pl011/vendor/proc-macro-error-attr/LICENSE-MIT b/rust/hw/char/pl011/vendor/proc-macro-error-attr/LICENSE-MIT new file mode 100644 index 0000000000..fc73e591d7 --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro-error-attr/LICENSE-MIT @@ -0,0 +1,21 @@ +MIT License + +Copyright (c) 2019-2020 CreepySkeleton + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. 
+ +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. diff --git a/rust/hw/char/pl011/vendor/proc-macro-error-attr/build.rs b/rust/hw/char/pl011/vendor/proc-macro-error-attr/build.rs new file mode 100644 index 0000000000..f2ac6a70ee --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro-error-attr/build.rs @@ -0,0 +1,5 @@ +fn main() { + if version_check::is_max_version("1.36.0").unwrap_or(false) { + println!("cargo:rustc-cfg=always_assert_unwind"); + } +} diff --git a/rust/hw/char/pl011/vendor/proc-macro-error-attr/meson.build b/rust/hw/char/pl011/vendor/proc-macro-error-attr/meson.build new file mode 100644 index 0000000000..63cd12ccf2 --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro-error-attr/meson.build @@ -0,0 +1,20 @@ +rust = import('rust') + +_proc_macro_error_attr_rs = rust.proc_macro( + 'proc_macro_error_attr', + files('src/lib.rs'), + rust_args: rust_args + [ + '--edition', '2018', + '--cfg', 'use_fallback', + '--cfg', 'feature="syn-error"', + '--cfg', 'feature="proc-macro"' + ], + dependencies: [ + dep_proc_macro2, + dep_quote, + ], +) + +dep_proc_macro_error_attr = declare_dependency( + link_with: _proc_macro_error_attr_rs, +) diff --git a/rust/hw/char/pl011/vendor/proc-macro-error-attr/src/lib.rs b/rust/hw/char/pl011/vendor/proc-macro-error-attr/src/lib.rs new file mode 100644 index 0000000000..ac0ac21a26 --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro-error-attr/src/lib.rs @@ -0,0 +1,121 @@ +//! This is `#[proc_macro_error]` attribute to be used with +//! [`proc-macro-error`](https://docs.rs/proc-macro-error/). There you go. 
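+//!
+//! Illustrative usage sketch (editorial note, not upstream documentation):
+//! the attribute is placed on a proc-macro entry point so that `abort!`-style
+//! calls from `proc-macro-error` surface as compile errors instead of panics.
+//!
+//! ```ignore
+//! use proc_macro::TokenStream;
+//! use proc_macro_error::{abort_call_site, proc_macro_error};
+//!
+//! #[proc_macro_error]
+//! #[proc_macro_derive(MyDerive)]
+//! pub fn my_derive(_input: TokenStream) -> TokenStream {
+//!     abort_call_site!("this derive is only a sketch");
+//! }
+//! ```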
+ +extern crate proc_macro; + +use crate::parse::parse_input; +use crate::parse::Attribute; +use proc_macro::TokenStream; +use proc_macro2::{Literal, Span, TokenStream as TokenStream2, TokenTree}; +use quote::{quote, quote_spanned}; + +use crate::settings::{Setting::*, *}; + +mod parse; +mod settings; + +type Result = std::result::Result; + +struct Error { + span: Span, + message: String, +} + +impl Error { + fn new(span: Span, message: String) -> Self { + Error { span, message } + } + + fn into_compile_error(self) -> TokenStream2 { + let mut message = Literal::string(&self.message); + message.set_span(self.span); + quote_spanned!(self.span=> compile_error!{#message}) + } +} + +#[proc_macro_attribute] +pub fn proc_macro_error(attr: TokenStream, input: TokenStream) -> TokenStream { + match impl_proc_macro_error(attr.into(), input.clone().into()) { + Ok(ts) => ts, + Err(e) => { + let error = e.into_compile_error(); + let input = TokenStream2::from(input); + + quote!(#input #error).into() + } + } +} + +fn impl_proc_macro_error(attr: TokenStream2, input: TokenStream2) -> Result { + let (attrs, signature, body) = parse_input(input)?; + let mut settings = parse_settings(attr)?; + + let is_proc_macro = is_proc_macro(&attrs); + if is_proc_macro { + settings.set(AssertUnwindSafe); + } + + if detect_proc_macro_hack(&attrs) { + settings.set(ProcMacroHack); + } + + if settings.is_set(ProcMacroHack) { + settings.set(AllowNotMacro); + } + + if !(settings.is_set(AllowNotMacro) || is_proc_macro) { + return Err(Error::new( + Span::call_site(), + "#[proc_macro_error] attribute can be used only with procedural macros\n\n \ + = hint: if you are really sure that #[proc_macro_error] should be applied \ + to this exact function, use #[proc_macro_error(allow_not_macro)]\n" + .into(), + )); + } + + let body = gen_body(body, settings); + + let res = quote! { + #(#attrs)* + #(#signature)* + { #body } + }; + Ok(res.into()) +} + +#[cfg(not(always_assert_unwind))] +fn gen_body(block: TokenTree, settings: Settings) -> proc_macro2::TokenStream { + let is_proc_macro_hack = settings.is_set(ProcMacroHack); + let closure = if settings.is_set(AssertUnwindSafe) { + quote!(::std::panic::AssertUnwindSafe(|| #block )) + } else { + quote!(|| #block) + }; + + quote!( ::proc_macro_error::entry_point(#closure, #is_proc_macro_hack) ) +} + +// FIXME: +// proc_macro::TokenStream does not implement UnwindSafe until 1.37.0. +// Considering this is the closure's return type the unwind safety check would fail +// for virtually every closure possible, the check is meaningless. 
+#[cfg(always_assert_unwind)] +fn gen_body(block: TokenTree, settings: Settings) -> proc_macro2::TokenStream { + let is_proc_macro_hack = settings.is_set(ProcMacroHack); + let closure = quote!(::std::panic::AssertUnwindSafe(|| #block )); + quote!( ::proc_macro_error::entry_point(#closure, #is_proc_macro_hack) ) +} + +fn detect_proc_macro_hack(attrs: &[Attribute]) -> bool { + attrs + .iter() + .any(|attr| attr.path_is_ident("proc_macro_hack")) +} + +fn is_proc_macro(attrs: &[Attribute]) -> bool { + attrs.iter().any(|attr| { + attr.path_is_ident("proc_macro") + || attr.path_is_ident("proc_macro_derive") + || attr.path_is_ident("proc_macro_attribute") + }) +} diff --git a/rust/hw/char/pl011/vendor/proc-macro-error-attr/src/parse.rs b/rust/hw/char/pl011/vendor/proc-macro-error-attr/src/parse.rs new file mode 100644 index 0000000000..6f4663f80e --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro-error-attr/src/parse.rs @@ -0,0 +1,89 @@ +use crate::{Error, Result}; +use proc_macro2::{Delimiter, Ident, Span, TokenStream, TokenTree}; +use quote::ToTokens; +use std::iter::Peekable; + +pub(crate) fn parse_input( + input: TokenStream, +) -> Result<(Vec, Vec, TokenTree)> { + let mut input = input.into_iter().peekable(); + let mut attrs = Vec::new(); + + while let Some(attr) = parse_next_attr(&mut input)? { + attrs.push(attr); + } + + let sig = parse_signature(&mut input); + let body = input.next().ok_or_else(|| { + Error::new( + Span::call_site(), + "`#[proc_macro_error]` can be applied only to functions".to_string(), + ) + })?; + + Ok((attrs, sig, body)) +} + +fn parse_next_attr( + input: &mut Peekable>, +) -> Result> { + let shebang = match input.peek() { + Some(TokenTree::Punct(ref punct)) if punct.as_char() == '#' => input.next().unwrap(), + _ => return Ok(None), + }; + + let group = match input.peek() { + Some(TokenTree::Group(ref group)) if group.delimiter() == Delimiter::Bracket => { + let res = group.clone(); + input.next(); + res + } + other => { + let span = other.map_or(Span::call_site(), |tt| tt.span()); + return Err(Error::new(span, "expected `[`".to_string())); + } + }; + + let path = match group.stream().into_iter().next() { + Some(TokenTree::Ident(ident)) => Some(ident), + _ => None, + }; + + Ok(Some(Attribute { + shebang, + group: TokenTree::Group(group), + path, + })) +} + +fn parse_signature(input: &mut Peekable>) -> Vec { + let mut sig = Vec::new(); + loop { + match input.peek() { + Some(TokenTree::Group(ref group)) if group.delimiter() == Delimiter::Brace => { + return sig; + } + None => return sig, + _ => sig.push(input.next().unwrap()), + } + } +} + +pub(crate) struct Attribute { + pub(crate) shebang: TokenTree, + pub(crate) group: TokenTree, + pub(crate) path: Option, +} + +impl Attribute { + pub(crate) fn path_is_ident(&self, ident: &str) -> bool { + self.path.as_ref().map_or(false, |p| *p == ident) + } +} + +impl ToTokens for Attribute { + fn to_tokens(&self, ts: &mut TokenStream) { + self.shebang.to_tokens(ts); + self.group.to_tokens(ts); + } +} diff --git a/rust/hw/char/pl011/vendor/proc-macro-error-attr/src/settings.rs b/rust/hw/char/pl011/vendor/proc-macro-error-attr/src/settings.rs new file mode 100644 index 0000000000..0b7ec766f6 --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro-error-attr/src/settings.rs @@ -0,0 +1,72 @@ +use crate::{Error, Result}; +use proc_macro2::{Ident, Span, TokenStream, TokenTree}; + +macro_rules! 
decl_settings { + ($($val:expr => $variant:ident),+ $(,)*) => { + #[derive(PartialEq)] + pub(crate) enum Setting { + $($variant),* + } + + fn ident_to_setting(ident: Ident) -> Result { + match &*ident.to_string() { + $($val => Ok(Setting::$variant),)* + _ => { + let possible_vals = [$($val),*] + .iter() + .map(|v| format!("`{}`", v)) + .collect::>() + .join(", "); + + Err(Error::new( + ident.span(), + format!("unknown setting `{}`, expected one of {}", ident, possible_vals))) + } + } + } + }; +} + +decl_settings! { + "assert_unwind_safe" => AssertUnwindSafe, + "allow_not_macro" => AllowNotMacro, + "proc_macro_hack" => ProcMacroHack, +} + +pub(crate) fn parse_settings(input: TokenStream) -> Result { + let mut input = input.into_iter(); + let mut res = Settings(Vec::new()); + loop { + match input.next() { + Some(TokenTree::Ident(ident)) => { + res.0.push(ident_to_setting(ident)?); + } + None => return Ok(res), + other => { + let span = other.map_or(Span::call_site(), |tt| tt.span()); + return Err(Error::new(span, "expected identifier".to_string())); + } + } + + match input.next() { + Some(TokenTree::Punct(ref punct)) if punct.as_char() == ',' => {} + None => return Ok(res), + other => { + let span = other.map_or(Span::call_site(), |tt| tt.span()); + return Err(Error::new(span, "expected `,`".to_string())); + } + } + } +} + +pub(crate) struct Settings(Vec); + +impl Settings { + pub(crate) fn is_set(&self, setting: Setting) -> bool { + self.0.iter().any(|s| *s == setting) + } + + pub(crate) fn set(&mut self, setting: Setting) { + self.0.push(setting) + } +} diff --git a/rust/hw/char/pl011/vendor/proc-macro-error/.cargo-checksum.json b/rust/hw/char/pl011/vendor/proc-macro-error/.cargo-checksum.json new file mode 100644 index 0000000000..79bcfa696f --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro-error/.cargo-checksum.json @@ -0,0 +1 @@ 
+{"files":{"CHANGELOG.md":"b84c4baa5fb093c6aaca44b98f9f28ef54d399ef6dd43c91f1dca618ab366b45","Cargo.toml":"50db093e1a4617606939dfb1f098cb59babbea0d7b390e973a3ed6bb1406170d","LICENSE-APACHE":"4665f973ccb9393807a7fb1264add6b3513d19abc6b357e4cb52c6fe59cc6a3b","LICENSE-MIT":"544b3aed1fd723d0cadea567affdcfe0431e43e18d997a718f9d67256b814fde","README.md":"72d59787d0a1f7bf161e292d0bc1bc25fdfb08cd6ad379a34cc3ed1b388d11fa","build.rs":"6238a0ad4f1146fbf55112419609e6449986154cf7ded1b5fdc978b06f4413b3","src/diagnostic.rs":"cb8724bb0bf9d2eee2f7119d0960fd5349edaa80e147abdef060ebf4572eca01","src/dummy.rs":"b44728091ddcdf9786523c01178efcedd83462bfe8bac6b97b1c2ffb19c96d09","src/imp/delegate.rs":"81da3a602a883240161dd98deb52b3b4ae29e626bfd2e1e07ef5e38d1be00679","src/imp/fallback.rs":"c3d333aba1122ac7e26f038f69750aa02e6a1222433a7cffd1c2f961befedd93","src/lib.rs":"e563d5dceaeb81551a5cb2610c1a3ad1a46200a6cbf8c3c3b394d8ac307b8cfa","src/macros.rs":"3be6feccd343cd9dc4bf03780f3107909bf70e02c6c7c72912e4b160dc6a68fc","src/sealed.rs":"dcf569c4c7ce1d372ff51b0fa73fa67f995bdca8e520cb225cde789c71276439","tests/macro-errors.rs":"7f793921dfbec692bfb2bbb067faf0480c0e7eeec83982b5e9fcddd817817616","tests/ok.rs":"a8c1925ac8647d185c7490ed1e33e3ce3203f5945bd3db4dcaf50ea55078df29","tests/runtime-errors.rs":"e53aa7d8e6c0e5128a90e856105eb05e4e7e72ea6db1bd580f3fe439bff62f24","tests/ui/abort.rs":"e209c8dd9dde6bde7440f8795624ad84b0f8486f563c8fe838238818f459bb67","tests/ui/abort.stderr":"dd0605e79be0309f92b251d055f087b0375c48ec60da978df354b48e8563fa10","tests/ui/append_dummy.rs":"ecaf939c8aabd94eef2dd1c10e9515489ba78e4db5b25195e19833b020d2483c","tests/ui/append_dummy.stderr":"ef03b01fc823aba8cfb9eb6d116640ca953fec569e61ed6ed6b7b7fa3bbad686","tests/ui/children_messages.rs":"32299679804133cb5295ed7a9341bf1ab86a4f1277679ee9738a83115c6b1d2b","tests/ui/children_messages.stderr":"dadeb86e1c7094d5fb38664b1766212b3d083fbe04962c137f8281fb3f5d162e","tests/ui/dummy.rs":"ba51c9158cef88ff2ddf0185be26fcd35a641d73c7124fab9ace0bbd546de847","tests/ui/dummy.stderr":"0635fd563d26691d07a2a875111f0b5e155caa45c37ad9cbaefe5fe617eac703","tests/ui/emit.rs":"82aaf06bcee56b7e139bbcba3a92c29448af14974d6806a28c9961aa764026e5","tests/ui/emit.stderr":"d3daa6d304453d436317495b7fc1d9d36bbebb7705bef75a5260d6d8fcfad5b1","tests/ui/explicit_span_range.rs":"3c37d5fc75b2bd460a091acd97a19acc80a40ba8d1d4ac7f36cd2f0e171bf5e7","tests/ui/explicit_span_range.stderr":"d7562847c326badbce2df8546e6f625eef0725b1dd2c786a037cc46357e4d2e8","tests/ui/misuse.rs":"0d66c61ab5c9723cf2f85cd12216751ab09722e9386cc27928034ee17f1c34e3","tests/ui/misuse.stderr":"52568a2208423e8e4050774559f269e79181a350f0805a34880bfa208e08c6bb","tests/ui/multiple_tokens.rs":"74997da1fdd3bce88a04ab42866c651723861fba4f59e826ee602d905398dcca","tests/ui/multiple_tokens.stderr":"e347ef1c18949711ce41957848e255095132f855c94db1e7e28d32e7d2c79a74","tests/ui/not_proc_macro.rs":"ca448d832ccf0cfdcda6f04281d8134a76c61b3ad96437e972b2cb5c6e0844c4","tests/ui/not_proc_macro.stderr":"a22c53a7dd5a03ddfaee5a7fb7fe5d61cb588b2d81a30c1e935b789baf0d2676","tests/ui/option_ext.rs":"1db81c17172f155c0ca8bcf92d55b5852b92107a3ba1d4b2ae6d14020df67f96","tests/ui/option_ext.stderr":"3b363759db60ee4f249dfde4d4571963032d5f0043249de40bd3b38eecc65404","tests/ui/proc_macro_hack.rs":"1d08c3e2c4c331e230c7cdaa2635ca1e43077252f90d3a430dcd091c646a842c","tests/ui/proc_macro_hack.stderr":"65e887dc208b92bfcd44405e76d5d05e315c3c5c5f637070954b7d593c723731","tests/ui/result_ext.rs":"ef398e76aab82a574ca5a988a91353e1a87fcfcb459d30314eceed3cbcf6fcd8","tests/ui/result_e
xt.stderr":"9e1e387b1378d9ec40ccb29be9f8cdaa5b42060c3f4f9b3c09fb307d5dcf7d85","tests/ui/to_tokens_span.rs":"d017a3c4cd583defe9806cdc51220bde89ced871ddd4d65b7cd089882feb1f61","tests/ui/to_tokens_span.stderr":"0b88e659ab214d6c7dfcd99274d327fe72da4b9bd009477e0e65165ddde65e02","tests/ui/unknown_setting.rs":"16fe9631b51023909497e857a6c674cd216ba9802fbdba360bb8273d6e00fa31","tests/ui/unknown_setting.stderr":"d605f151ce8eba5b2f867667394bd2d2adf0a233145516a9d6b801817521e587","tests/ui/unrelated_panic.rs":"438db25f8f14f1263152545a1c5135e20b3f5063dc4ab223fd8145b891039b24","tests/ui/unrelated_panic.stderr":"04cd814f2bd57d5271f93f90f0dd078b09ee3fd73137245a914d698e4a33ed57"},"package":"da25490ff9892aab3fcf7c36f08cfb902dd3e71ca0f9f9517bea02a73a5ce38c"} \ No newline at end of file diff --git a/rust/hw/char/pl011/vendor/proc-macro-error/CHANGELOG.md b/rust/hw/char/pl011/vendor/proc-macro-error/CHANGELOG.md new file mode 100644 index 0000000000..3c422f1c45 --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro-error/CHANGELOG.md @@ -0,0 +1,162 @@ +# v1.0.4 (2020-7-31) + +* `SpanRange` facility is now public. +* Docs have been improved. +* Introduced the `syn-error` feature so you can opt-out from the `syn` dependency. + +# v1.0.3 (2020-6-26) + +* Corrected a few typos. +* Fixed the `emit_call_site_warning` macro. + +# v1.0.2 (2020-4-9) + +* An obsolete note was removed from documentation. + +# v1.0.1 (2020-4-9) + +* `proc-macro-hack` is now well tested and supported. Not sure about `proc-macro-nested`, + please fill a request if you need it. +* Fixed `emit_call_site_error`. +* Documentation improvements. + +# v1.0.0 (2020-3-25) + +I believe the API can be considered stable because it's been a few months without +breaking changes, and I also don't think this crate will receive much further evolution. +It's perfect, admit it. + +Hence, meet the new, stable release! + +### Improvements + +* Supported nested `#[proc_macro_error]` attributes. Well, you aren't supposed to do that, + but I caught myself doing it by accident on one occasion and the behavior was... surprising. + Better to handle this smooth. + +# v0.4.12 (2020-3-23) + +* Error message on macros' misuse is now a bit more understandable. + +# v0.4.11 (2020-3-02) + +* `build.rs` no longer fails when `rustc` date could not be determined, + (thanks to [`Fabian Möller`](https://gitlab.com/CreepySkeleton/proc-macro-error/issues/8) + for noticing and to [`Igor Gnatenko`](https://gitlab.com/CreepySkeleton/proc-macro-error/-/merge_requests/25) + for fixing). + +# v0.4.10 (2020-2-29) + +* `proc-macro-error` doesn't depend on syn\[full\] anymore, the compilation + is \~30secs faster. + +# v0.4.9 (2020-2-13) + +* New function: `append_dummy`. + +# v0.4.8 (2020-2-01) + +* Support for children messages + +# v0.4.7 (2020-1-31) + +* Now any type that implements `quote::ToTokens` can be used instead of spans. + This allows for high quality error messages. + +# v0.4.6 (2020-1-31) + +* `From` implementation doesn't lose span info anymore, see + [#6](https://gitlab.com/CreepySkeleton/proc-macro-error/issues/6). + +# v0.4.5 (2020-1-20) +Just a small intermediate release. + +* Fix some bugs. +* Populate license files into subfolders. 
+ +# v0.4.4 (2019-11-13) +* Fix `abort_if_dirty` + warnings bug +* Allow trailing commas in macros + +# v0.4.2 (2019-11-7) +* FINALLY fixed `__pme__suggestions not found` bug + +# v0.4.1 (2019-11-7) YANKED +* Fixed `__pme__suggestions not found` bug +* Documentation improvements, links checked + +# v0.4.0 (2019-11-6) YANKED + +## New features +* "help" messages that can have their own span on nightly, they + inherit parent span on stable. + ```rust + let cond_help = if condition { Some("some help message") else { None } }; + abort!( + span, // parent span + "something's wrong, {} wrongs in total", 10; // main message + help = "here's a help for you, {}", "take it"; // unconditional help message + help =? cond_help; // conditional help message, must be Option + note = note_span => "don't forget the note, {}", "would you?" // notes can have their own span but it's effective only on nightly + ) + ``` +* Warnings via `emit_warning` and `emit_warning_call_site`. Nightly only, they're ignored on stable. +* Now `proc-macro-error` delegates to `proc_macro::Diagnostic` on nightly. + +## Breaking changes +* `MacroError` is now replaced by `Diagnostic`. Its API resembles `proc_macro::Diagnostic`. +* `Diagnostic` does not implement `From<&str/String>` so `Result::abort_or_exit()` + won't work anymore (nobody used it anyway). +* `macro_error!` macro is replaced with `diagnostic!`. + +## Improvements +* Now `proc-macro-error` renders notes exactly just like rustc does. +* We don't parse a body of a function annotated with `#[proc_macro_error]` anymore, + only looking at the signature. This should somewhat decrease expansion time for large functions. + +# v0.3.3 (2019-10-16) +* Now you can use any word instead of "help", undocumented. + +# v0.3.2 (2019-10-16) +* Introduced support for "help" messages, undocumented. + +# v0.3.0 (2019-10-8) + +## The crate has been completely rewritten from scratch! + +## Changes (most are breaking): +* Renamed macros: + * `span_error` => `abort` + * `call_site_error` => `abort_call_site` +* `filter_macro_errors` was replaced by `#[proc_macro_error]` attribute. +* `set_dummy` now takes `TokenStream` instead of `Option` +* Support for multiple errors via `emit_error` and `emit_call_site_error` +* New `macro_error` macro for building errors in format=like style. +* `MacroError` API had been reconsidered. It also now implements `quote::ToTokens`. + +# v0.2.6 (2019-09-02) +* Introduce support for dummy implementations via `dummy::set_dummy` +* `multi::*` is now deprecated, will be completely rewritten in v0.3 + +# v0.2.0 (2019-08-15) + +## Breaking changes +* `trigger_error` replaced with `MacroError::trigger` and `filter_macro_error_panics` + is hidden from docs. + This is not quite a breaking change since users weren't supposed to use these functions directly anyway. +* All dependencies are updated to `v1.*`. + +## New features +* Ability to stack multiple errors via `multi::MultiMacroErrors` and emit them at once. + +## Improvements +* Now `MacroError` implements `std::fmt::Display` instead of `std::string::ToString`. +* `MacroError::span` inherent method. +* `From for proc_macro/proc_macro2::TokenStream` implementations. +* `AsRef/AsMut for MacroError` implementations. + +# v0.1.x (2019-07-XX) + +## New features +* An easy way to report errors inside within a proc-macro via `span_error`, + `call_site_error` and `filter_macro_errors`. 
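(Illustrative aside, not part of the diff.) The `proc-macro-error-attr` code added earlier in this patch (`impl_proc_macro_error` and `gen_body`) rewrites the annotated function so that its body runs inside `::proc_macro_error::entry_point`. A rough sketch of the resulting expansion, assuming a plain `#[proc_macro]` function; the macro name and body here are made up:

```rust
use proc_macro::TokenStream;

// Hypothetical user macro, written as:
//
//     #[proc_macro]
//     #[proc_macro_error]
//     pub fn make_answer(input: TokenStream) -> TokenStream { /* body */ }
//
// After the attribute runs, the item looks roughly like this:
#[proc_macro]
pub fn make_answer(input: TokenStream) -> TokenStream {
    // AssertUnwindSafe is added because #[proc_macro] was detected on the item;
    // the trailing `false` is the is_proc_macro_hack flag.
    ::proc_macro_error::entry_point(
        ::std::panic::AssertUnwindSafe(|| {
            /* original body */
            unimplemented!()
        }),
        false,
    )
}
```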
diff --git a/rust/hw/char/pl011/vendor/proc-macro-error/Cargo.toml b/rust/hw/char/pl011/vendor/proc-macro-error/Cargo.toml new file mode 100644 index 0000000000..869585ffc2 --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro-error/Cargo.toml @@ -0,0 +1,56 @@ +# THIS FILE IS AUTOMATICALLY GENERATED BY CARGO +# +# When uploading crates to the registry Cargo will automatically +# "normalize" Cargo.toml files for maximal compatibility +# with all versions of Cargo and also rewrite `path` dependencies +# to registry (e.g., crates.io) dependencies +# +# If you believe there's an error in this file please file an +# issue against the rust-lang/cargo repository. If you're +# editing this file be aware that the upstream Cargo.toml +# will likely look very different (and much more reasonable) + +[package] +edition = "2018" +name = "proc-macro-error" +version = "1.0.4" +authors = ["CreepySkeleton "] +build = "build.rs" +description = "Almost drop-in replacement to panics in proc-macros" +readme = "README.md" +keywords = ["proc-macro", "error", "errors"] +categories = ["development-tools::procedural-macro-helpers"] +license = "MIT OR Apache-2.0" +repository = "https://gitlab.com/CreepySkeleton/proc-macro-error" +[package.metadata.docs.rs] +targets = ["x86_64-unknown-linux-gnu"] +[dependencies.proc-macro-error-attr] +version = "=1.0.4" + +[dependencies.proc-macro2] +version = "1" + +[dependencies.quote] +version = "1" + +[dependencies.syn] +version = "1" +optional = true +default-features = false +[dev-dependencies.serde_derive] +version = "=1.0.107" + +[dev-dependencies.toml] +version = "=0.5.2" + +[dev-dependencies.trybuild] +version = "1.0.19" +features = ["diff"] +[build-dependencies.version_check] +version = "0.9" + +[features] +default = ["syn-error"] +syn-error = ["syn"] +[badges.maintenance] +status = "passively-maintained" diff --git a/rust/hw/char/pl011/vendor/proc-macro-error/LICENSE-APACHE b/rust/hw/char/pl011/vendor/proc-macro-error/LICENSE-APACHE new file mode 100644 index 0000000000..cc17374b25 --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro-error/LICENSE-APACHE @@ -0,0 +1,201 @@ + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + +TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + +1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. 
+ + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + +2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + +3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + +4. Redistribution. 
You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + +5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + +6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + +7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + +8. Limitation of Liability. 
In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + +9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + +END OF TERMS AND CONDITIONS + +APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + +Copyright 2019-2020 CreepySkeleton + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. diff --git a/rust/hw/char/pl011/vendor/proc-macro-error/LICENSE-MIT b/rust/hw/char/pl011/vendor/proc-macro-error/LICENSE-MIT new file mode 100644 index 0000000000..fc73e591d7 --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro-error/LICENSE-MIT @@ -0,0 +1,21 @@ +MIT License + +Copyright (c) 2019-2020 CreepySkeleton + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. 
+ +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. diff --git a/rust/hw/char/pl011/vendor/proc-macro-error/README.md b/rust/hw/char/pl011/vendor/proc-macro-error/README.md new file mode 100644 index 0000000000..7fbe07c53a --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro-error/README.md @@ -0,0 +1,258 @@ +# Makes error reporting in procedural macros nice and easy + +[![travis ci](https://travis-ci.org/CreepySkeleton/proc-macro-error.svg?branch=master)](https://travis-ci.org/CreepySkeleton/proc-macro-error) +[![docs.rs](https://docs.rs/proc-macro-error/badge.svg)](https://docs.rs/proc-macro-error) +[![unsafe forbidden](https://img.shields.io/badge/unsafe-forbidden-success.svg)](https://github.com/rust-secure-code/safety-dance/) + +This crate aims to make error reporting in proc-macros simple and easy to use. +Migrate from `panic!`-based errors for as little effort as possible! + +Also, you can explicitly [append a dummy token stream][crate::dummy] to your errors. + +To achieve his, this crate serves as a tiny shim around `proc_macro::Diagnostic` and +`compile_error!`. It detects the most preferable way to emit errors based on compiler's version. +When the underlying diagnostic type is finally stabilized, this crate will be simply +delegating to it, requiring no changes in your code! + +So you can just use this crate and have *both* some of `proc_macro::Diagnostic` functionality +available on stable ahead of time and your error-reporting code future-proof. + +```toml +[dependencies] +proc-macro-error = "1.0" +``` + +*Supports rustc 1.31 and up* + +[Documentation and guide][guide] + +## Quick example + +Code: + +```rust +#[proc_macro] +#[proc_macro_error] +pub fn make_fn(input: TokenStream) -> TokenStream { + let mut input = TokenStream2::from(input).into_iter(); + let name = input.next().unwrap(); + if let Some(second) = input.next() { + abort! { second, + "I don't like this part!"; + note = "I see what you did there..."; + help = "I need only one part, you know?"; + } + } + + quote!( fn #name() {} ).into() +} +``` + +This is how the error is rendered in a terminal: + +

+(screenshot of the error rendered in a terminal: image omitted)
+
+And this is what your users will see in their IDE:
+
+(screenshot of the IDE rendering: image omitted)
+ +## Examples + +### Panic-like usage + +```rust +use proc_macro_error::{ + proc_macro_error, + abort, + abort_call_site, + ResultExt, + OptionExt, +}; +use proc_macro::TokenStream; +use syn::{DeriveInput, parse_macro_input}; +use quote::quote; + +// This is your main entry point +#[proc_macro] +// This attribute *MUST* be placed on top of the #[proc_macro] function +#[proc_macro_error] +pub fn make_answer(input: TokenStream) -> TokenStream { + let input = parse_macro_input!(input as DeriveInput); + + if let Err(err) = some_logic(&input) { + // we've got a span to blame, let's use it + // This immediately aborts the proc-macro and shows the error + // + // You can use `proc_macro::Span`, `proc_macro2::Span`, and + // anything that implements `quote::ToTokens` (almost every type from + // `syn` and `proc_macro2`) + abort!(err, "You made an error, go fix it: {}", err.msg); + } + + // `Result` has some handy shortcuts if your error type implements + // `Into`. `Option` has one unconditionally. + more_logic(&input).expect_or_abort("What a careless user, behave!"); + + if !more_logic_for_logic_god(&input) { + // We don't have an exact location this time, + // so just highlight the proc-macro invocation itself + abort_call_site!( + "Bad, bad user! Now go stand in the corner and think about what you did!"); + } + + // Now all the processing is done, return `proc_macro::TokenStream` + quote!(/* stuff */).into() +} +``` + +### `proc_macro::Diagnostic`-like usage + +```rust +use proc_macro_error::*; +use proc_macro::TokenStream; +use syn::{spanned::Spanned, DeriveInput, ItemStruct, Fields, Attribute , parse_macro_input}; +use quote::quote; + +fn process_attrs(attrs: &[Attribute]) -> Vec { + attrs + .iter() + .filter_map(|attr| match process_attr(attr) { + Ok(res) => Some(res), + Err(msg) => { + emit_error!(attr, "Invalid attribute: {}", msg); + None + } + }) + .collect() +} + +fn process_fields(_attrs: &Fields) -> Vec { + // processing fields in pretty much the same way as attributes + unimplemented!() +} + +#[proc_macro] +#[proc_macro_error] +pub fn make_answer(input: TokenStream) -> TokenStream { + let input = parse_macro_input!(input as ItemStruct); + let attrs = process_attrs(&input.attrs); + + // abort right now if some errors were encountered + // at the attributes processing stage + abort_if_dirty(); + + let fields = process_fields(&input.fields); + + // no need to think about emitted errors + // #[proc_macro_error] will handle them for you + // + // just return a TokenStream as you normally would + quote!(/* stuff */).into() +} +``` + +## Real world examples + +* [`structopt-derive`](https://github.com/TeXitoi/structopt/tree/master/structopt-derive) + (abort-like usage) +* [`auto-impl`](https://github.com/auto-impl-rs/auto_impl/) (emit-like usage) + +## Limitations + +- Warnings are emitted only on nightly, they are ignored on stable. +- "help" suggestions can't have their own span info on stable, + (essentially inheriting the parent span). +- If your macro happens to trigger a panic, no errors will be displayed. This is not a + technical limitation but rather intentional design. `panic` is not for error reporting. + +## MSRV policy + +`proc_macro_error` will always be compatible with proc-macro Holy Trinity: +`proc_macro2`, `syn`, `quote` crates. In other words, if the Trinity is available +to you - `proc_macro_error` is available too. 
+ +> **Important!** +> +> If you want to use `#[proc_macro_error]` with `synstructure`, you're going +> to have to put the attribute inside the `decl_derive!` invocation. Unfortunately, +> due to some bug in pre-1.34 rustc, putting proc-macro attributes inside macro +> invocations doesn't work, so your MSRV is effectively 1.34. + +## Motivation + +Error handling in proc-macros sucks. There's not much of a choice today: +you either "bubble up" the error up to the top-level of the macro and convert it to +a [`compile_error!`][compl_err] invocation or just use a good old panic. Both these ways suck: + +- Former sucks because it's quite redundant to unroll a proper error handling + just for critical errors that will crash the macro anyway; so people mostly + choose not to bother with it at all and use panic. Simple `.expect` is too tempting. + + Also, if you do decide to implement this `Result`-based architecture in your macro + you're going to have to rewrite it entirely once [`proc_macro::Diagnostic`][] is finally + stable. Not cool. + +- Later sucks because there's no way to carry out the span info via `panic!`. + `rustc` will highlight the invocation itself but not some specific token inside it. + + Furthermore, panics aren't for error-reporting at all; panics are for bug-detecting + (like unwrapping on `None` or out-of-range indexing) or for early development stages + when you need a prototype ASAP so error handling can wait. Mixing these usages only + messes things up. + +- There is [`proc_macro::Diagnostic`][] which is awesome but it has been experimental + for more than a year and is unlikely to be stabilized any time soon. + + This crate's API is intentionally designed to be compatible with `proc_macro::Diagnostic` + and delegates to it whenever possible. Once `Diagnostics` is stable this crate + will **always** delegate to it, no code changes will be required on user side. + +That said, we need a solution, but this solution must meet these conditions: + +- It must be better than `panic!`. The main point: it must offer a way to carry the span information + over to user. +- It must take as little effort as possible to migrate from `panic!`. Ideally, a new + macro with similar semantics plus ability to carry out span info. +- It must maintain compatibility with [`proc_macro::Diagnostic`][] . +- **It must be usable on stable**. + +This crate aims to provide such a mechanism. All you have to do is annotate your top-level +`#[proc_macro]` function with `#[proc_macro_error]` attribute and change panics to +[`abort!`]/[`abort_call_site!`] where appropriate, see [the Guide][guide]. + +## Disclaimer +Please note that **this crate is not intended to be used in any way other +than error reporting in procedural macros**, use `Result` and `?` (possibly along with one of the +many helpers out there) for anything else. + +
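(Illustrative aside, not part of the diff.) The Motivation section above contrasts bubbling a `Result` up to the entry point with `abort!`-style reporting. A minimal sketch of the two styles under those assumptions; the derive names and the `expand` helper are invented:

```rust
use proc_macro::TokenStream;
use proc_macro_error::{abort, proc_macro_error};
use quote::quote;
use syn::{parse_macro_input, DeriveInput};

// Style 1: bubble a syn::Error up and render it manually.
#[proc_macro_derive(Bubbled)]
pub fn bubbled(input: TokenStream) -> TokenStream {
    let input = parse_macro_input!(input as DeriveInput);
    match expand(&input) {
        Ok(ts) => ts.into(),
        Err(err) => err.to_compile_error().into(),
    }
}

// Style 2: abort! immediately; #[proc_macro_error] turns the abort into a
// compile_error! invocation with the span still pointing at the bad tokens.
#[proc_macro_derive(Aborted)]
#[proc_macro_error]
pub fn aborted(input: TokenStream) -> TokenStream {
    let input = parse_macro_input!(input as DeriveInput);
    if input.generics.params.is_empty() {
        abort!(input.ident, "expected at least one generic parameter");
    }
    quote!().into()
}

// Made-up helper for the Result-based style.
fn expand(input: &DeriveInput) -> syn::Result<proc_macro2::TokenStream> {
    Err(syn::Error::new_spanned(&input.ident, "not implemented"))
}
```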
+#### License
+
+Licensed under either of Apache License, Version
+2.0 or MIT license at your option.
+
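(Illustrative aside, not part of the diff.) Besides the macros, the `src/diagnostic.rs` file added below exposes a builder-style `Diagnostic` type. A small usage sketch; the function, span source, and messages are invented, and this only works when called from inside a `#[proc_macro_error]`-annotated macro:

```rust
use proc_macro2::Span;
use proc_macro_error::{Diagnostic, Level};

// Emit a structured error for a hypothetical unsupported field, then keep
// processing so further errors can be reported; compilation still fails.
fn report_unsupported_field(span: Span) {
    Diagnostic::spanned(span, Level::Error, "unsupported field type".to_string())
        .help("only integer fields are accepted here".to_string())
        .note("see the derive documentation for details".to_string())
        .emit(); // use .abort() instead to stop macro expansion right away
}
```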
+ + +Unless you explicitly state otherwise, any contribution intentionally submitted +for inclusion in this crate by you, as defined in the Apache-2.0 license, shall +be dual licensed as above, without any additional terms or conditions. + + + +[compl_err]: https://doc.rust-lang.org/std/macro.compile_error.html +[`proc_macro::Diagnostic`]: https://doc.rust-lang.org/proc_macro/struct.Diagnostic.html + +[crate::dummy]: https://docs.rs/proc-macro-error/1/proc_macro_error/dummy/index.html +[crate::multi]: https://docs.rs/proc-macro-error/1/proc_macro_error/multi/index.html + +[`abort_call_site!`]: https://docs.rs/proc-macro-error/1/proc_macro_error/macro.abort_call_site.html +[`abort!`]: https://docs.rs/proc-macro-error/1/proc_macro_error/macro.abort.html +[guide]: https://docs.rs/proc-macro-error diff --git a/rust/hw/char/pl011/vendor/proc-macro-error/build.rs b/rust/hw/char/pl011/vendor/proc-macro-error/build.rs new file mode 100644 index 0000000000..3c1196f269 --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro-error/build.rs @@ -0,0 +1,11 @@ +fn main() { + if !version_check::is_feature_flaggable().unwrap_or(false) { + println!("cargo:rustc-cfg=use_fallback"); + } + + if version_check::is_max_version("1.38.0").unwrap_or(false) + || !version_check::Channel::read().unwrap().is_stable() + { + println!("cargo:rustc-cfg=skip_ui_tests"); + } +} diff --git a/rust/hw/char/pl011/vendor/proc-macro-error/meson.build b/rust/hw/char/pl011/vendor/proc-macro-error/meson.build new file mode 100644 index 0000000000..db5d09f5db --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro-error/meson.build @@ -0,0 +1,22 @@ +_proc_macro_error_rs = static_library( + 'proc_macro_error', + files('src/lib.rs'), + rust_abi: 'rust', + rust_args: rust_args + [ + '--edition', '2018', + '--cfg', 'use_fallback', + '--cfg', 'feature="syn-error"', + '--cfg', 'feature="proc-macro"', + '-A', 'non_fmt_panics' + ], + dependencies: [ + dep_proc_macro_error_attr, + dep_proc_macro2, + dep_quote, + dep_syn, + ], +) + +dep_proc_macro_error = declare_dependency( + link_with: _proc_macro_error_rs, +) diff --git a/rust/hw/char/pl011/vendor/proc-macro-error/src/diagnostic.rs b/rust/hw/char/pl011/vendor/proc-macro-error/src/diagnostic.rs new file mode 100644 index 0000000000..983e6174fe --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro-error/src/diagnostic.rs @@ -0,0 +1,349 @@ +use crate::{abort_now, check_correctness, sealed::Sealed, SpanRange}; +use proc_macro2::Span; +use proc_macro2::TokenStream; + +use quote::{quote_spanned, ToTokens}; + +/// Represents a diagnostic level +/// +/// # Warnings +/// +/// Warnings are ignored on stable/beta +#[derive(Debug, PartialEq)] +pub enum Level { + Error, + Warning, + #[doc(hidden)] + NonExhaustive, +} + +/// Represents a single diagnostic message +#[derive(Debug)] +pub struct Diagnostic { + pub(crate) level: Level, + pub(crate) span_range: SpanRange, + pub(crate) msg: String, + pub(crate) suggestions: Vec<(SuggestionKind, String, Option)>, + pub(crate) children: Vec<(SpanRange, String)>, +} + +/// A collection of methods that do not exist in `proc_macro::Diagnostic` +/// but still useful to have around. +/// +/// This trait is sealed and cannot be implemented outside of `proc_macro_error`. +pub trait DiagnosticExt: Sealed { + /// Create a new diagnostic message that points to the `span_range`. + /// + /// This function is the same as `Diagnostic::spanned` but produces considerably + /// better error messages for multi-token spans on stable. 
+ fn spanned_range(span_range: SpanRange, level: Level, message: String) -> Self; + + /// Add another error message to self such that it will be emitted right after + /// the main message. + /// + /// This function is the same as `Diagnostic::span_error` but produces considerably + /// better error messages for multi-token spans on stable. + fn span_range_error(self, span_range: SpanRange, msg: String) -> Self; + + /// Attach a "help" note to your main message, the note will have it's own span on nightly. + /// + /// This function is the same as `Diagnostic::span_help` but produces considerably + /// better error messages for multi-token spans on stable. + /// + /// # Span + /// + /// The span is ignored on stable, the note effectively inherits its parent's (main message) span + fn span_range_help(self, span_range: SpanRange, msg: String) -> Self; + + /// Attach a note to your main message, the note will have it's own span on nightly. + /// + /// This function is the same as `Diagnostic::span_note` but produces considerably + /// better error messages for multi-token spans on stable. + /// + /// # Span + /// + /// The span is ignored on stable, the note effectively inherits its parent's (main message) span + fn span_range_note(self, span_range: SpanRange, msg: String) -> Self; +} + +impl DiagnosticExt for Diagnostic { + fn spanned_range(span_range: SpanRange, level: Level, message: String) -> Self { + Diagnostic { + level, + span_range, + msg: message, + suggestions: vec![], + children: vec![], + } + } + + fn span_range_error(mut self, span_range: SpanRange, msg: String) -> Self { + self.children.push((span_range, msg)); + self + } + + fn span_range_help(mut self, span_range: SpanRange, msg: String) -> Self { + self.suggestions + .push((SuggestionKind::Help, msg, Some(span_range))); + self + } + + fn span_range_note(mut self, span_range: SpanRange, msg: String) -> Self { + self.suggestions + .push((SuggestionKind::Note, msg, Some(span_range))); + self + } +} + +impl Diagnostic { + /// Create a new diagnostic message that points to `Span::call_site()` + pub fn new(level: Level, message: String) -> Self { + Diagnostic::spanned(Span::call_site(), level, message) + } + + /// Create a new diagnostic message that points to the `span` + pub fn spanned(span: Span, level: Level, message: String) -> Self { + Diagnostic::spanned_range( + SpanRange { + first: span, + last: span, + }, + level, + message, + ) + } + + /// Add another error message to self such that it will be emitted right after + /// the main message. + pub fn span_error(self, span: Span, msg: String) -> Self { + self.span_range_error( + SpanRange { + first: span, + last: span, + }, + msg, + ) + } + + /// Attach a "help" note to your main message, the note will have it's own span on nightly. + /// + /// # Span + /// + /// The span is ignored on stable, the note effectively inherits its parent's (main message) span + pub fn span_help(self, span: Span, msg: String) -> Self { + self.span_range_help( + SpanRange { + first: span, + last: span, + }, + msg, + ) + } + + /// Attach a "help" note to your main message. + pub fn help(mut self, msg: String) -> Self { + self.suggestions.push((SuggestionKind::Help, msg, None)); + self + } + + /// Attach a note to your main message, the note will have it's own span on nightly. 
+ /// + /// # Span + /// + /// The span is ignored on stable, the note effectively inherits its parent's (main message) span + pub fn span_note(self, span: Span, msg: String) -> Self { + self.span_range_note( + SpanRange { + first: span, + last: span, + }, + msg, + ) + } + + /// Attach a note to your main message + pub fn note(mut self, msg: String) -> Self { + self.suggestions.push((SuggestionKind::Note, msg, None)); + self + } + + /// The message of main warning/error (no notes attached) + pub fn message(&self) -> &str { + &self.msg + } + + /// Abort the proc-macro's execution and display the diagnostic. + /// + /// # Warnings + /// + /// Warnings are not emitted on stable and beta, but this function will abort anyway. + pub fn abort(self) -> ! { + self.emit(); + abort_now() + } + + /// Display the diagnostic while not aborting macro execution. + /// + /// # Warnings + /// + /// Warnings are ignored on stable/beta + pub fn emit(self) { + check_correctness(); + crate::imp::emit_diagnostic(self); + } +} + +/// **NOT PUBLIC API! NOTHING TO SEE HERE!!!** +#[doc(hidden)] +impl Diagnostic { + pub fn span_suggestion(self, span: Span, suggestion: &str, msg: String) -> Self { + match suggestion { + "help" | "hint" => self.span_help(span, msg), + _ => self.span_note(span, msg), + } + } + + pub fn suggestion(self, suggestion: &str, msg: String) -> Self { + match suggestion { + "help" | "hint" => self.help(msg), + _ => self.note(msg), + } + } +} + +impl ToTokens for Diagnostic { + fn to_tokens(&self, ts: &mut TokenStream) { + use std::borrow::Cow; + + fn ensure_lf(buf: &mut String, s: &str) { + if s.ends_with('\n') { + buf.push_str(s); + } else { + buf.push_str(s); + buf.push('\n'); + } + } + + fn diag_to_tokens( + span_range: SpanRange, + level: &Level, + msg: &str, + suggestions: &[(SuggestionKind, String, Option)], + ) -> TokenStream { + if *level == Level::Warning { + return TokenStream::new(); + } + + let message = if suggestions.is_empty() { + Cow::Borrowed(msg) + } else { + let mut message = String::new(); + ensure_lf(&mut message, msg); + message.push('\n'); + + for (kind, note, _span) in suggestions { + message.push_str(" = "); + message.push_str(kind.name()); + message.push_str(": "); + ensure_lf(&mut message, note); + } + message.push('\n'); + + Cow::Owned(message) + }; + + let mut msg = proc_macro2::Literal::string(&message); + msg.set_span(span_range.last); + let group = quote_spanned!(span_range.last=> { #msg } ); + quote_spanned!(span_range.first=> compile_error!#group) + } + + ts.extend(diag_to_tokens( + self.span_range, + &self.level, + &self.msg, + &self.suggestions, + )); + ts.extend( + self.children + .iter() + .map(|(span_range, msg)| diag_to_tokens(*span_range, &Level::Error, &msg, &[])), + ); + } +} + +#[derive(Debug)] +pub(crate) enum SuggestionKind { + Help, + Note, +} + +impl SuggestionKind { + fn name(&self) -> &'static str { + match self { + SuggestionKind::Note => "note", + SuggestionKind::Help => "help", + } + } +} + +#[cfg(feature = "syn-error")] +impl From for Diagnostic { + fn from(err: syn::Error) -> Self { + use proc_macro2::{Delimiter, TokenTree}; + + fn gut_error(ts: &mut impl Iterator) -> Option<(SpanRange, String)> { + let first = match ts.next() { + // compile_error + None => return None, + Some(tt) => tt.span(), + }; + ts.next().unwrap(); // ! 
+ + let lit = match ts.next().unwrap() { + TokenTree::Group(group) => { + // Currently `syn` builds `compile_error!` invocations + // exclusively in `ident{"..."}` (braced) form which is not + // followed by `;` (semicolon). + // + // But if it changes to `ident("...");` (parenthesized) + // or `ident["..."];` (bracketed) form, + // we will need to skip the `;` as well. + // Highly unlikely, but better safe than sorry. + + if group.delimiter() == Delimiter::Parenthesis + || group.delimiter() == Delimiter::Bracket + { + ts.next().unwrap(); // ; + } + + match group.stream().into_iter().next().unwrap() { + TokenTree::Literal(lit) => lit, + _ => unreachable!(), + } + } + _ => unreachable!(), + }; + + let last = lit.span(); + let mut msg = lit.to_string(); + + // "abc" => abc + msg.pop(); + msg.remove(0); + + Some((SpanRange { first, last }, msg)) + } + + let mut ts = err.to_compile_error().into_iter(); + + let (span_range, msg) = gut_error(&mut ts).unwrap(); + let mut res = Diagnostic::spanned_range(span_range, Level::Error, msg); + + while let Some((span_range, msg)) = gut_error(&mut ts) { + res = res.span_range_error(span_range, msg); + } + + res + } +} diff --git a/rust/hw/char/pl011/vendor/proc-macro-error/src/dummy.rs b/rust/hw/char/pl011/vendor/proc-macro-error/src/dummy.rs new file mode 100644 index 0000000000..571a595aa9 --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro-error/src/dummy.rs @@ -0,0 +1,150 @@ +//! Facility to emit dummy implementations (or whatever) in case +//! an error happen. +//! +//! `compile_error!` does not abort a compilation right away. This means +//! `rustc` doesn't just show you the error and abort, it carries on the +//! compilation process looking for other errors to report. +//! +//! Let's consider an example: +//! +//! ```rust,ignore +//! use proc_macro::TokenStream; +//! use proc_macro_error::*; +//! +//! trait MyTrait { +//! fn do_thing(); +//! } +//! +//! // this proc macro is supposed to generate MyTrait impl +//! #[proc_macro_derive(MyTrait)] +//! #[proc_macro_error] +//! fn example(input: TokenStream) -> TokenStream { +//! // somewhere deep inside +//! abort!(span, "something's wrong"); +//! +//! // this implementation will be generated if no error happened +//! quote! { +//! impl MyTrait for #name { +//! fn do_thing() {/* whatever */} +//! } +//! } +//! } +//! +//! // ================ +//! // in main.rs +//! +//! // this derive triggers an error +//! #[derive(MyTrait)] // first BOOM! +//! struct Foo; +//! +//! fn main() { +//! Foo::do_thing(); // second BOOM! +//! } +//! ``` +//! +//! The problem is: the generated token stream contains only `compile_error!` +//! invocation, the impl was not generated. That means user will see two compilation +//! errors: +//! +//! ```text +//! error: something's wrong +//! --> $DIR/probe.rs:9:10 +//! | +//! 9 |#[proc_macro_derive(MyTrait)] +//! | ^^^^^^^ +//! +//! error[E0599]: no function or associated item named `do_thing` found for type `Foo` in the current scope +//! --> src\main.rs:3:10 +//! | +//! 1 | struct Foo; +//! | ----------- function or associated item `do_thing` not found for this +//! 2 | fn main() { +//! 3 | Foo::do_thing(); // second BOOM! +//! | ^^^^^^^^ function or associated item not found in `Foo` +//! ``` +//! +//! But the second error is meaningless! We definitely need to fix this. +//! +//! Most used approach in cases like this is "dummy implementation" - +//! omit `impl MyTrait for #name` and fill functions bodies with `unimplemented!()`. +//! +//! This is how you do it: +//! +//! 
```rust,ignore +//! use proc_macro::TokenStream; +//! use proc_macro_error::*; +//! +//! trait MyTrait { +//! fn do_thing(); +//! } +//! +//! // this proc macro is supposed to generate MyTrait impl +//! #[proc_macro_derive(MyTrait)] +//! #[proc_macro_error] +//! fn example(input: TokenStream) -> TokenStream { +//! // first of all - we set a dummy impl which will be appended to +//! // `compile_error!` invocations in case a trigger does happen +//! set_dummy(quote! { +//! impl MyTrait for #name { +//! fn do_thing() { unimplemented!() } +//! } +//! }); +//! +//! // somewhere deep inside +//! abort!(span, "something's wrong"); +//! +//! // this implementation will be generated if no error happened +//! quote! { +//! impl MyTrait for #name { +//! fn do_thing() {/* whatever */} +//! } +//! } +//! } +//! +//! // ================ +//! // in main.rs +//! +//! // this derive triggers an error +//! #[derive(MyTrait)] // first BOOM! +//! struct Foo; +//! +//! fn main() { +//! Foo::do_thing(); // no more errors! +//! } +//! ``` + +use proc_macro2::TokenStream; +use std::cell::RefCell; + +use crate::check_correctness; + +thread_local! { + static DUMMY_IMPL: RefCell> = RefCell::new(None); +} + +/// Sets dummy token stream which will be appended to `compile_error!(msg);...` +/// invocations in case you'll emit any errors. +/// +/// See [guide](../index.html#guide). +pub fn set_dummy(dummy: TokenStream) -> Option { + check_correctness(); + DUMMY_IMPL.with(|old_dummy| old_dummy.replace(Some(dummy))) +} + +/// Same as [`set_dummy`] but, instead of resetting, appends tokens to the +/// existing dummy (if any). Behaves as `set_dummy` if no dummy is present. +pub fn append_dummy(dummy: TokenStream) { + check_correctness(); + DUMMY_IMPL.with(|old_dummy| { + let mut cell = old_dummy.borrow_mut(); + if let Some(ts) = cell.as_mut() { + ts.extend(dummy); + } else { + *cell = Some(dummy); + } + }); +} + +pub(crate) fn cleanup() -> Option { + DUMMY_IMPL.with(|old_dummy| old_dummy.replace(None)) +} diff --git a/rust/hw/char/pl011/vendor/proc-macro-error/src/imp/delegate.rs b/rust/hw/char/pl011/vendor/proc-macro-error/src/imp/delegate.rs new file mode 100644 index 0000000000..07def2b98e --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro-error/src/imp/delegate.rs @@ -0,0 +1,69 @@ +//! This implementation uses [`proc_macro::Diagnostic`], nightly only. 
+ +use std::cell::Cell; + +use proc_macro::{Diagnostic as PDiag, Level as PLevel}; + +use crate::{ + abort_now, check_correctness, + diagnostic::{Diagnostic, Level, SuggestionKind}, +}; + +pub fn abort_if_dirty() { + check_correctness(); + if IS_DIRTY.with(|c| c.get()) { + abort_now() + } +} + +pub(crate) fn cleanup() -> Vec { + IS_DIRTY.with(|c| c.set(false)); + vec![] +} + +pub(crate) fn emit_diagnostic(diag: Diagnostic) { + let Diagnostic { + level, + span_range, + msg, + suggestions, + children, + } = diag; + + let span = span_range.collapse().unwrap(); + + let level = match level { + Level::Warning => PLevel::Warning, + Level::Error => { + IS_DIRTY.with(|c| c.set(true)); + PLevel::Error + } + _ => unreachable!(), + }; + + let mut res = PDiag::spanned(span, level, msg); + + for (kind, msg, span) in suggestions { + res = match (kind, span) { + (SuggestionKind::Note, Some(span_range)) => { + res.span_note(span_range.collapse().unwrap(), msg) + } + (SuggestionKind::Help, Some(span_range)) => { + res.span_help(span_range.collapse().unwrap(), msg) + } + (SuggestionKind::Note, None) => res.note(msg), + (SuggestionKind::Help, None) => res.help(msg), + } + } + + for (span_range, msg) in children { + let span = span_range.collapse().unwrap(); + res = res.span_error(span, msg); + } + + res.emit() +} + +thread_local! { + static IS_DIRTY: Cell = Cell::new(false); +} diff --git a/rust/hw/char/pl011/vendor/proc-macro-error/src/imp/fallback.rs b/rust/hw/char/pl011/vendor/proc-macro-error/src/imp/fallback.rs new file mode 100644 index 0000000000..ad1f730bfc --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro-error/src/imp/fallback.rs @@ -0,0 +1,30 @@ +//! This implementation uses self-written stable facilities. + +use crate::{ + abort_now, check_correctness, + diagnostic::{Diagnostic, Level}, +}; +use std::cell::RefCell; + +pub fn abort_if_dirty() { + check_correctness(); + ERR_STORAGE.with(|storage| { + if !storage.borrow().is_empty() { + abort_now() + } + }); +} + +pub(crate) fn cleanup() -> Vec { + ERR_STORAGE.with(|storage| storage.replace(Vec::new())) +} + +pub(crate) fn emit_diagnostic(diag: Diagnostic) { + if diag.level == Level::Error { + ERR_STORAGE.with(|storage| storage.borrow_mut().push(diag)); + } +} + +thread_local! { + static ERR_STORAGE: RefCell> = RefCell::new(Vec::new()); +} diff --git a/rust/hw/char/pl011/vendor/proc-macro-error/src/lib.rs b/rust/hw/char/pl011/vendor/proc-macro-error/src/lib.rs new file mode 100644 index 0000000000..fb867fdc03 --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro-error/src/lib.rs @@ -0,0 +1,560 @@ +//! # proc-macro-error +//! +//! This crate aims to make error reporting in proc-macros simple and easy to use. +//! Migrate from `panic!`-based errors for as little effort as possible! +//! +//! (Also, you can explicitly [append a dummy token stream](dummy/index.html) to your errors). +//! +//! To achieve his, this crate serves as a tiny shim around `proc_macro::Diagnostic` and +//! `compile_error!`. It detects the best way of emitting available based on compiler's version. +//! When the underlying diagnostic type is finally stabilized, this crate will simply be +//! delegating to it requiring no changes in your code! +//! +//! So you can just use this crate and have *both* some of `proc_macro::Diagnostic` functionality +//! available on stable ahead of time *and* your error-reporting code future-proof. +//! +//! ## Cargo features +//! +//! This crate provides *enabled by default* `syn-error` feature that gates +//! 
`impl From for Diagnostic` conversion. If you don't use `syn` and want +//! to cut off some of compilation time, you can disable it via +//! +//! ```toml +//! [dependencies] +//! proc-macro-error = { version = "1", default-features = false } +//! ``` +//! +//! ***Please note that disabling this feature makes sense only if you don't depend on `syn` +//! directly or indirectly, and you very likely do.** +//! +//! ## Real world examples +//! +//! * [`structopt-derive`](https://github.com/TeXitoi/structopt/tree/master/structopt-derive) +//! (abort-like usage) +//! * [`auto-impl`](https://github.com/auto-impl-rs/auto_impl/) (emit-like usage) +//! +//! ## Limitations +//! +//! - Warnings are emitted only on nightly, they are ignored on stable. +//! - "help" suggestions can't have their own span info on stable, +//! (essentially inheriting the parent span). +//! - If a panic occurs somewhere in your macro no errors will be displayed. This is not a +//! technical limitation but rather intentional design. `panic` is not for error reporting. +//! +//! ### `#[proc_macro_error]` attribute +//! +//! **This attribute MUST be present on the top level of your macro** (the function +//! annotated with any of `#[proc_macro]`, `#[proc_macro_derive]`, `#[proc_macro_attribute]`). +//! +//! This attribute performs the setup and cleanup necessary to make things work. +//! +//! In most cases you'll need the simple `#[proc_macro_error]` form without any +//! additional settings. Feel free to [skip the "Syntax" section](#macros). +//! +//! #### Syntax +//! +//! `#[proc_macro_error]` or `#[proc_macro_error(settings...)]`, where `settings...` +//! is a comma-separated list of: +//! +//! - `proc_macro_hack`: +//! +//! In order to correctly cooperate with `#[proc_macro_hack]`, `#[proc_macro_error]` +//! attribute must be placed *before* (above) it, like this: +//! +//! ```no_run +//! # use proc_macro2::TokenStream; +//! # const IGNORE: &str = " +//! #[proc_macro_error] +//! #[proc_macro_hack] +//! #[proc_macro] +//! # "; +//! fn my_macro(input: TokenStream) -> TokenStream { +//! unimplemented!() +//! } +//! ``` +//! +//! If, for some reason, you can't place it like that you can use +//! `#[proc_macro_error(proc_macro_hack)]` instead. +//! +//! # Note +//! +//! If `proc-macro-hack` was detected (by any means) `allow_not_macro` +//! and `assert_unwind_safe` will be applied automatically. +//! +//! - `allow_not_macro`: +//! +//! By default, the attribute checks that it's applied to a proc-macro. +//! If none of `#[proc_macro]`, `#[proc_macro_derive]` nor `#[proc_macro_attribute]` are +//! present it will panic. It's the intention - this crate is supposed to be used only with +//! proc-macros. +//! +//! This setting is made to bypass the check, useful in certain circumstances. +//! +//! Pay attention: the function this attribute is applied to must return +//! `proc_macro::TokenStream`. +//! +//! This setting is implied if `proc-macro-hack` was detected. +//! +//! - `assert_unwind_safe`: +//! +//! By default, your code must be [unwind safe]. If your code is not unwind safe, +//! but you believe it's correct, you can use this setting to bypass the check. +//! You would need this for code that uses `lazy_static` or `thread_local` with +//! `Cell/RefCell` inside (and the like). +//! +//! This setting is implied if `#[proc_macro_error]` is applied to a function +//! marked as `#[proc_macro]`, `#[proc_macro_derive]` or `#[proc_macro_attribute]`. +//! +//! This setting is also implied if `proc-macro-hack` was detected. +//! +//! 
## Macros +//! +//! Most of the time you want to use the macros. Syntax is described in the next section below. +//! +//! You'll need to decide how you want to emit errors: +//! +//! * Emit the error and abort. Very much panic-like usage. Served by [`abort!`] and +//! [`abort_call_site!`]. +//! * Emit the error but do not abort right away, looking for other errors to report. +//! Served by [`emit_error!`] and [`emit_call_site_error!`]. +//! +//! You **can** mix these usages. +//! +//! `abort` and `emit_error` take a "source span" as the first argument. This source +//! will be used to highlight the place the error originates from. It must be one of: +//! +//! * *Something* that implements [`ToTokens`] (most types in `syn` and `proc-macro2` do). +//! This source is the preferable one since it doesn't lose span information on multi-token +//! spans, see [this issue](https://gitlab.com/CreepySkeleton/proc-macro-error/-/issues/6) +//! for details. +//! * [`proc_macro::Span`] +//! * [`proc-macro2::Span`] +//! +//! The rest is your message in format-like style. +//! +//! See [the next section](#syntax-1) for detailed syntax. +//! +//! - [`abort!`]: +//! +//! Very much panic-like usage - abort right away and show the error. +//! Expands to [`!`] (never type). +//! +//! - [`abort_call_site!`]: +//! +//! Shortcut for `abort!(Span::call_site(), ...)`. Expands to [`!`] (never type). +//! +//! - [`emit_error!`]: +//! +//! [`proc_macro::Diagnostic`]-like usage - emit the error but keep going, +//! looking for other errors to report. +//! The compilation will fail nonetheless. Expands to [`()`] (unit type). +//! +//! - [`emit_call_site_error!`]: +//! +//! Shortcut for `emit_error!(Span::call_site(), ...)`. Expands to [`()`] (unit type). +//! +//! - [`emit_warning!`]: +//! +//! Like `emit_error!` but emit a warning instead of error. The compilation won't fail +//! because of warnings. +//! Expands to [`()`] (unit type). +//! +//! **Beware**: warnings are nightly only, they are completely ignored on stable. +//! +//! - [`emit_call_site_warning!`]: +//! +//! Shortcut for `emit_warning!(Span::call_site(), ...)`. Expands to [`()`] (unit type). +//! +//! - [`diagnostic`]: +//! +//! Build an instance of `Diagnostic` in format-like style. +//! +//! #### Syntax +//! +//! All the macros have pretty much the same syntax: +//! +//! 1. ```ignore +//! abort!(single_expr) +//! ``` +//! Shortcut for `Diagnostic::from(expr).abort()`. +//! +//! 2. ```ignore +//! abort!(span, message) +//! ``` +//! The first argument is an expression the span info should be taken from. +//! +//! The second argument is the error message, it must implement [`ToString`]. +//! +//! 3. ```ignore +//! abort!(span, format_literal, format_args...) +//! ``` +//! +//! This form is pretty much the same as 2, except `format!(format_literal, format_args...)` +//! will be used to for the message instead of [`ToString`]. +//! +//! That's it. `abort!`, `emit_warning`, `emit_error` share this exact syntax. +//! +//! `abort_call_site!`, `emit_call_site_warning`, `emit_call_site_error` lack 1 form +//! and do not take span in 2'th and 3'th forms. Those are essentially shortcuts for +//! `macro!(Span::call_site(), args...)`. +//! +//! `diagnostic!` requires a [`Level`] instance between `span` and second argument +//! (1'th form is the same). +//! +//! > **Important!** +//! > +//! > If you have some type from `proc_macro` or `syn` to point to, do not call `.span()` +//! > on it but rather use it directly: +//! > ```no_run +//! 
> # use proc_macro_error::abort; +//! > # let input = proc_macro2::TokenStream::new(); +//! > let ty: syn::Type = syn::parse2(input).unwrap(); +//! > abort!(ty, "BOOM"); +//! > // ^^ <-- avoid .span() +//! > ``` +//! > +//! > `.span()` calls work too, but you may experience regressions in message quality. +//! +//! #### Note attachments +//! +//! 3. Every macro can have "note" attachments (only 2 and 3 form). +//! ```ignore +//! let opt_help = if have_some_info { Some("did you mean `this`?") } else { None }; +//! +//! abort!( +//! span, message; // <--- attachments start with `;` (semicolon) +//! +//! help = "format {} {}", "arg1", "arg2"; // <--- every attachment ends with `;`, +//! // maybe except the last one +//! +//! note = "to_string"; // <--- one arg uses `.to_string()` instead of `format!()` +//! +//! yay = "I see what {} did here", "you"; // <--- "help =" and "hint =" are mapped +//! // to Diagnostic::help, +//! // anything else is Diagnostic::note +//! +//! wow = note_span => "custom span"; // <--- attachments can have their own span +//! // it takes effect only on nightly though +//! +//! hint =? opt_help; // <-- "optional" attachment, get displayed only if `Some` +//! // must be single `Option` expression +//! +//! note =? note_span => opt_help // <-- optional attachments can have custom spans too +//! ); +//! ``` +//! + +//! ### Diagnostic type +//! +//! [`Diagnostic`] type is intentionally designed to be API compatible with [`proc_macro::Diagnostic`]. +//! Not all API is implemented, only the part that can be reasonably implemented on stable. +//! +//! +//! [`abort!`]: macro.abort.html +//! [`abort_call_site!`]: macro.abort_call_site.html +//! [`emit_warning!`]: macro.emit_warning.html +//! [`emit_error!`]: macro.emit_error.html +//! [`emit_call_site_warning!`]: macro.emit_call_site_error.html +//! [`emit_call_site_error!`]: macro.emit_call_site_warning.html +//! [`diagnostic!`]: macro.diagnostic.html +//! [`Diagnostic`]: struct.Diagnostic.html +//! +//! [`proc_macro::Span`]: https://doc.rust-lang.org/proc_macro/struct.Span.html +//! [`proc_macro::Diagnostic`]: https://doc.rust-lang.org/proc_macro/struct.Diagnostic.html +//! +//! [unwind safe]: https://doc.rust-lang.org/std/panic/trait.UnwindSafe.html#what-is-unwind-safety +//! [`!`]: https://doc.rust-lang.org/std/primitive.never.html +//! [`()`]: https://doc.rust-lang.org/std/primitive.unit.html +//! [`ToString`]: https://doc.rust-lang.org/std/string/trait.ToString.html +//! +//! [`proc-macro2::Span`]: https://docs.rs/proc-macro2/1.0.10/proc_macro2/struct.Span.html +//! [`ToTokens`]: https://docs.rs/quote/1.0.3/quote/trait.ToTokens.html +//! + +#![cfg_attr(not(use_fallback), feature(proc_macro_diagnostic))] +#![forbid(unsafe_code)] +#![allow(clippy::needless_doctest_main)] + +extern crate proc_macro; + +pub use crate::{ + diagnostic::{Diagnostic, DiagnosticExt, Level}, + dummy::{append_dummy, set_dummy}, +}; +pub use proc_macro_error_attr::proc_macro_error; + +use proc_macro2::Span; +use quote::{quote, ToTokens}; + +use std::cell::Cell; +use std::panic::{catch_unwind, resume_unwind, UnwindSafe}; + +pub mod dummy; + +mod diagnostic; +mod macros; +mod sealed; + +#[cfg(use_fallback)] +#[path = "imp/fallback.rs"] +mod imp; + +#[cfg(not(use_fallback))] +#[path = "imp/delegate.rs"] +mod imp; + +#[derive(Debug, Clone, Copy)] +pub struct SpanRange { + pub first: Span, + pub last: Span, +} + +impl SpanRange { + /// Create a range with the `first` and `last` spans being the same. 
+ pub fn single_span(span: Span) -> Self { + SpanRange { + first: span, + last: span, + } + } + + /// Create a `SpanRange` resolving at call site. + pub fn call_site() -> Self { + SpanRange::single_span(Span::call_site()) + } + + /// Construct span range from a `TokenStream`. This method always preserves all the + /// range. + /// + /// ### Note + /// + /// If the stream is empty, the result is `SpanRange::call_site()`. If the stream + /// consists of only one `TokenTree`, the result is `SpanRange::single_span(tt.span())` + /// that doesn't lose anything. + pub fn from_tokens(ts: &dyn ToTokens) -> Self { + let mut spans = ts.to_token_stream().into_iter().map(|tt| tt.span()); + let first = spans.next().unwrap_or_else(|| Span::call_site()); + let last = spans.last().unwrap_or(first); + + SpanRange { first, last } + } + + /// Join two span ranges. The resulting range will start at `self.first` and end at + /// `other.last`. + pub fn join_range(self, other: SpanRange) -> Self { + SpanRange { + first: self.first, + last: other.last, + } + } + + /// Collapse the range into single span, preserving as much information as possible. + pub fn collapse(self) -> Span { + self.first.join(self.last).unwrap_or(self.first) + } +} + +/// This traits expands `Result>` with some handy shortcuts. +pub trait ResultExt { + type Ok; + + /// Behaves like `Result::unwrap`: if self is `Ok` yield the contained value, + /// otherwise abort macro execution via `abort!`. + fn unwrap_or_abort(self) -> Self::Ok; + + /// Behaves like `Result::expect`: if self is `Ok` yield the contained value, + /// otherwise abort macro execution via `abort!`. + /// If it aborts then resulting error message will be preceded with `message`. + fn expect_or_abort(self, msg: &str) -> Self::Ok; +} + +/// This traits expands `Option` with some handy shortcuts. +pub trait OptionExt { + type Some; + + /// Behaves like `Option::expect`: if self is `Some` yield the contained value, + /// otherwise abort macro execution via `abort_call_site!`. + /// If it aborts the `message` will be used for [`compile_error!`][compl_err] invocation. + /// + /// [compl_err]: https://doc.rust-lang.org/std/macro.compile_error.html + fn expect_or_abort(self, msg: &str) -> Self::Some; +} + +/// Abort macro execution and display all the emitted errors, if any. +/// +/// Does nothing if no errors were emitted (warnings do not count). +pub fn abort_if_dirty() { + imp::abort_if_dirty(); +} + +impl> ResultExt for Result { + type Ok = T; + + fn unwrap_or_abort(self) -> T { + match self { + Ok(res) => res, + Err(e) => e.into().abort(), + } + } + + fn expect_or_abort(self, message: &str) -> T { + match self { + Ok(res) => res, + Err(e) => { + let mut e = e.into(); + e.msg = format!("{}: {}", message, e.msg); + e.abort() + } + } + } +} + +impl OptionExt for Option { + type Some = T; + + fn expect_or_abort(self, message: &str) -> T { + match self { + Some(res) => res, + None => abort_call_site!(message), + } + } +} + +/// This is the entry point for a proc-macro. +/// +/// **NOT PUBLIC API, SUBJECT TO CHANGE WITHOUT ANY NOTICE** +#[doc(hidden)] +pub fn entry_point(f: F, proc_macro_hack: bool) -> proc_macro::TokenStream +where + F: FnOnce() -> proc_macro::TokenStream + UnwindSafe, +{ + ENTERED_ENTRY_POINT.with(|flag| flag.set(flag.get() + 1)); + let caught = catch_unwind(f); + let dummy = dummy::cleanup(); + let err_storage = imp::cleanup(); + ENTERED_ENTRY_POINT.with(|flag| flag.set(flag.get() - 1)); + + let gen_error = || { + if proc_macro_hack { + quote! {{ + macro_rules! 
proc_macro_call { + () => ( unimplemented!() ) + } + + #(#err_storage)* + #dummy + + unimplemented!() + }} + } else { + quote!( #(#err_storage)* #dummy ) + } + }; + + match caught { + Ok(ts) => { + if err_storage.is_empty() { + ts + } else { + gen_error().into() + } + } + + Err(boxed) => match boxed.downcast::() { + Ok(_) => gen_error().into(), + Err(boxed) => resume_unwind(boxed), + }, + } +} + +fn abort_now() -> ! { + check_correctness(); + panic!(AbortNow) +} + +thread_local! { + static ENTERED_ENTRY_POINT: Cell = Cell::new(0); +} + +struct AbortNow; + +fn check_correctness() { + if ENTERED_ENTRY_POINT.with(|flag| flag.get()) == 0 { + panic!( + "proc-macro-error API cannot be used outside of `entry_point` invocation, \ + perhaps you forgot to annotate your #[proc_macro] function with `#[proc_macro_error]" + ); + } +} + +/// **ALL THE STUFF INSIDE IS NOT PUBLIC API!!!** +#[doc(hidden)] +pub mod __export { + // reexports for use in macros + pub extern crate proc_macro; + pub extern crate proc_macro2; + + use proc_macro2::Span; + use quote::ToTokens; + + use crate::SpanRange; + + // inspired by + // https://github.com/dtolnay/case-studies/blob/master/autoref-specialization/README.md#simple-application + + pub trait SpanAsSpanRange { + #[allow(non_snake_case)] + fn FIRST_ARG_MUST_EITHER_BE_Span_OR_IMPLEMENT_ToTokens_OR_BE_SpanRange(&self) -> SpanRange; + } + + pub trait Span2AsSpanRange { + #[allow(non_snake_case)] + fn FIRST_ARG_MUST_EITHER_BE_Span_OR_IMPLEMENT_ToTokens_OR_BE_SpanRange(&self) -> SpanRange; + } + + pub trait ToTokensAsSpanRange { + #[allow(non_snake_case)] + fn FIRST_ARG_MUST_EITHER_BE_Span_OR_IMPLEMENT_ToTokens_OR_BE_SpanRange(&self) -> SpanRange; + } + + pub trait SpanRangeAsSpanRange { + #[allow(non_snake_case)] + fn FIRST_ARG_MUST_EITHER_BE_Span_OR_IMPLEMENT_ToTokens_OR_BE_SpanRange(&self) -> SpanRange; + } + + impl ToTokensAsSpanRange for &T { + fn FIRST_ARG_MUST_EITHER_BE_Span_OR_IMPLEMENT_ToTokens_OR_BE_SpanRange(&self) -> SpanRange { + let mut ts = self.to_token_stream().into_iter(); + let first = ts + .next() + .map(|tt| tt.span()) + .unwrap_or_else(Span::call_site); + let last = ts.last().map(|tt| tt.span()).unwrap_or(first); + SpanRange { first, last } + } + } + + impl Span2AsSpanRange for Span { + fn FIRST_ARG_MUST_EITHER_BE_Span_OR_IMPLEMENT_ToTokens_OR_BE_SpanRange(&self) -> SpanRange { + SpanRange { + first: *self, + last: *self, + } + } + } + + impl SpanAsSpanRange for proc_macro::Span { + fn FIRST_ARG_MUST_EITHER_BE_Span_OR_IMPLEMENT_ToTokens_OR_BE_SpanRange(&self) -> SpanRange { + SpanRange { + first: self.clone().into(), + last: self.clone().into(), + } + } + } + + impl SpanRangeAsSpanRange for SpanRange { + fn FIRST_ARG_MUST_EITHER_BE_Span_OR_IMPLEMENT_ToTokens_OR_BE_SpanRange(&self) -> SpanRange { + *self + } + } +} diff --git a/rust/hw/char/pl011/vendor/proc-macro-error/src/macros.rs b/rust/hw/char/pl011/vendor/proc-macro-error/src/macros.rs new file mode 100644 index 0000000000..747b684d56 --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro-error/src/macros.rs @@ -0,0 +1,288 @@ +// FIXME: this can be greatly simplified via $()? +// as soon as MRSV hits 1.32 + +/// Build [`Diagnostic`](struct.Diagnostic.html) instance from provided arguments. +/// +/// # Syntax +/// +/// See [the guide](index.html#guide). +/// +#[macro_export] +macro_rules! 
diagnostic { + // from alias + ($err:expr) => { $crate::Diagnostic::from($err) }; + + // span, message, help + ($span:expr, $level:expr, $fmt:expr, $($args:expr),+ ; $($rest:tt)+) => {{ + #[allow(unused_imports)] + use $crate::__export::{ + ToTokensAsSpanRange, + Span2AsSpanRange, + SpanAsSpanRange, + SpanRangeAsSpanRange + }; + use $crate::DiagnosticExt; + let span_range = (&$span).FIRST_ARG_MUST_EITHER_BE_Span_OR_IMPLEMENT_ToTokens_OR_BE_SpanRange(); + + let diag = $crate::Diagnostic::spanned_range( + span_range, + $level, + format!($fmt, $($args),*) + ); + $crate::__pme__suggestions!(diag $($rest)*); + diag + }}; + + ($span:expr, $level:expr, $msg:expr ; $($rest:tt)+) => {{ + #[allow(unused_imports)] + use $crate::__export::{ + ToTokensAsSpanRange, + Span2AsSpanRange, + SpanAsSpanRange, + SpanRangeAsSpanRange + }; + use $crate::DiagnosticExt; + let span_range = (&$span).FIRST_ARG_MUST_EITHER_BE_Span_OR_IMPLEMENT_ToTokens_OR_BE_SpanRange(); + + let diag = $crate::Diagnostic::spanned_range(span_range, $level, $msg.to_string()); + $crate::__pme__suggestions!(diag $($rest)*); + diag + }}; + + // span, message, no help + ($span:expr, $level:expr, $fmt:expr, $($args:expr),+) => {{ + #[allow(unused_imports)] + use $crate::__export::{ + ToTokensAsSpanRange, + Span2AsSpanRange, + SpanAsSpanRange, + SpanRangeAsSpanRange + }; + use $crate::DiagnosticExt; + let span_range = (&$span).FIRST_ARG_MUST_EITHER_BE_Span_OR_IMPLEMENT_ToTokens_OR_BE_SpanRange(); + + $crate::Diagnostic::spanned_range( + span_range, + $level, + format!($fmt, $($args),*) + ) + }}; + + ($span:expr, $level:expr, $msg:expr) => {{ + #[allow(unused_imports)] + use $crate::__export::{ + ToTokensAsSpanRange, + Span2AsSpanRange, + SpanAsSpanRange, + SpanRangeAsSpanRange + }; + use $crate::DiagnosticExt; + let span_range = (&$span).FIRST_ARG_MUST_EITHER_BE_Span_OR_IMPLEMENT_ToTokens_OR_BE_SpanRange(); + + $crate::Diagnostic::spanned_range(span_range, $level, $msg.to_string()) + }}; + + + // trailing commas + + ($span:expr, $level:expr, $fmt:expr, $($args:expr),+, ; $($rest:tt)+) => { + $crate::diagnostic!($span, $level, $fmt, $($args),* ; $($rest)*) + }; + ($span:expr, $level:expr, $msg:expr, ; $($rest:tt)+) => { + $crate::diagnostic!($span, $level, $msg ; $($rest)*) + }; + ($span:expr, $level:expr, $fmt:expr, $($args:expr),+,) => { + $crate::diagnostic!($span, $level, $fmt, $($args),*) + }; + ($span:expr, $level:expr, $msg:expr,) => { + $crate::diagnostic!($span, $level, $msg) + }; + // ($err:expr,) => { $crate::diagnostic!($err) }; +} + +/// Abort proc-macro execution right now and display the error. +/// +/// # Syntax +/// +/// See [the guide](index.html#guide). +#[macro_export] +macro_rules! abort { + ($err:expr) => { + $crate::diagnostic!($err).abort() + }; + + ($span:expr, $($tts:tt)*) => { + $crate::diagnostic!($span, $crate::Level::Error, $($tts)*).abort() + }; +} + +/// Shortcut for `abort!(Span::call_site(), msg...)`. This macro +/// is still preferable over plain panic, panics are not for error reporting. +/// +/// # Syntax +/// +/// See [the guide](index.html#guide). +/// +#[macro_export] +macro_rules! abort_call_site { + ($($tts:tt)*) => { + $crate::abort!($crate::__export::proc_macro2::Span::call_site(), $($tts)*) + }; +} + +/// Emit an error while not aborting the proc-macro right away. +/// +/// # Syntax +/// +/// See [the guide](index.html#guide). +/// +#[macro_export] +macro_rules! 
emit_error { + ($err:expr) => { + $crate::diagnostic!($err).emit() + }; + + ($span:expr, $($tts:tt)*) => {{ + let level = $crate::Level::Error; + $crate::diagnostic!($span, level, $($tts)*).emit() + }}; +} + +/// Shortcut for `emit_error!(Span::call_site(), ...)`. This macro +/// is still preferable over plain panic, panics are not for error reporting.. +/// +/// # Syntax +/// +/// See [the guide](index.html#guide). +/// +#[macro_export] +macro_rules! emit_call_site_error { + ($($tts:tt)*) => { + $crate::emit_error!($crate::__export::proc_macro2::Span::call_site(), $($tts)*) + }; +} + +/// Emit a warning. Warnings are not errors and compilation won't fail because of them. +/// +/// **Does nothing on stable** +/// +/// # Syntax +/// +/// See [the guide](index.html#guide). +/// +#[macro_export] +macro_rules! emit_warning { + ($span:expr, $($tts:tt)*) => { + $crate::diagnostic!($span, $crate::Level::Warning, $($tts)*).emit() + }; +} + +/// Shortcut for `emit_warning!(Span::call_site(), ...)`. +/// +/// **Does nothing on stable** +/// +/// # Syntax +/// +/// See [the guide](index.html#guide). +/// +#[macro_export] +macro_rules! emit_call_site_warning { + ($($tts:tt)*) => {{ + $crate::emit_warning!($crate::__export::proc_macro2::Span::call_site(), $($tts)*) + }}; +} + +#[doc(hidden)] +#[macro_export] +macro_rules! __pme__suggestions { + ($var:ident) => (); + + ($var:ident $help:ident =? $msg:expr) => { + let $var = if let Some(msg) = $msg { + $var.suggestion(stringify!($help), msg.to_string()) + } else { + $var + }; + }; + ($var:ident $help:ident =? $span:expr => $msg:expr) => { + let $var = if let Some(msg) = $msg { + $var.span_suggestion($span.into(), stringify!($help), msg.to_string()) + } else { + $var + }; + }; + + ($var:ident $help:ident =? $msg:expr ; $($rest:tt)*) => { + $crate::__pme__suggestions!($var $help =? $msg); + $crate::__pme__suggestions!($var $($rest)*); + }; + ($var:ident $help:ident =? $span:expr => $msg:expr ; $($rest:tt)*) => { + $crate::__pme__suggestions!($var $help =? 
$span => $msg); + $crate::__pme__suggestions!($var $($rest)*); + }; + + + ($var:ident $help:ident = $msg:expr) => { + let $var = $var.suggestion(stringify!($help), $msg.to_string()); + }; + ($var:ident $help:ident = $fmt:expr, $($args:expr),+) => { + let $var = $var.suggestion( + stringify!($help), + format!($fmt, $($args),*) + ); + }; + ($var:ident $help:ident = $span:expr => $msg:expr) => { + let $var = $var.span_suggestion($span.into(), stringify!($help), $msg.to_string()); + }; + ($var:ident $help:ident = $span:expr => $fmt:expr, $($args:expr),+) => { + let $var = $var.span_suggestion( + $span.into(), + stringify!($help), + format!($fmt, $($args),*) + ); + }; + + ($var:ident $help:ident = $msg:expr ; $($rest:tt)*) => { + $crate::__pme__suggestions!($var $help = $msg); + $crate::__pme__suggestions!($var $($rest)*); + }; + ($var:ident $help:ident = $fmt:expr, $($args:expr),+ ; $($rest:tt)*) => { + $crate::__pme__suggestions!($var $help = $fmt, $($args),*); + $crate::__pme__suggestions!($var $($rest)*); + }; + ($var:ident $help:ident = $span:expr => $msg:expr ; $($rest:tt)*) => { + $crate::__pme__suggestions!($var $help = $span => $msg); + $crate::__pme__suggestions!($var $($rest)*); + }; + ($var:ident $help:ident = $span:expr => $fmt:expr, $($args:expr),+ ; $($rest:tt)*) => { + $crate::__pme__suggestions!($var $help = $span => $fmt, $($args),*); + $crate::__pme__suggestions!($var $($rest)*); + }; + + // trailing commas + + ($var:ident $help:ident = $msg:expr,) => { + $crate::__pme__suggestions!($var $help = $msg) + }; + ($var:ident $help:ident = $fmt:expr, $($args:expr),+,) => { + $crate::__pme__suggestions!($var $help = $fmt, $($args)*) + }; + ($var:ident $help:ident = $span:expr => $msg:expr,) => { + $crate::__pme__suggestions!($var $help = $span => $msg) + }; + ($var:ident $help:ident = $span:expr => $fmt:expr, $($args:expr),*,) => { + $crate::__pme__suggestions!($var $help = $span => $fmt, $($args)*) + }; + ($var:ident $help:ident = $msg:expr, ; $($rest:tt)*) => { + $crate::__pme__suggestions!($var $help = $msg; $($rest)*) + }; + ($var:ident $help:ident = $fmt:expr, $($args:expr),+, ; $($rest:tt)*) => { + $crate::__pme__suggestions!($var $help = $fmt, $($args),*; $($rest)*) + }; + ($var:ident $help:ident = $span:expr => $msg:expr, ; $($rest:tt)*) => { + $crate::__pme__suggestions!($var $help = $span => $msg; $($rest)*) + }; + ($var:ident $help:ident = $span:expr => $fmt:expr, $($args:expr),+, ; $($rest:tt)*) => { + $crate::__pme__suggestions!($var $help = $span => $fmt, $($args),*; $($rest)*) + }; +} diff --git a/rust/hw/char/pl011/vendor/proc-macro-error/src/sealed.rs b/rust/hw/char/pl011/vendor/proc-macro-error/src/sealed.rs new file mode 100644 index 0000000000..a2d5081e55 --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro-error/src/sealed.rs @@ -0,0 +1,3 @@ +pub trait Sealed {} + +impl Sealed for crate::Diagnostic {} diff --git a/rust/hw/char/pl011/vendor/proc-macro-error/tests/macro-errors.rs b/rust/hw/char/pl011/vendor/proc-macro-error/tests/macro-errors.rs new file mode 100644 index 0000000000..dd60f88a80 --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro-error/tests/macro-errors.rs @@ -0,0 +1,8 @@ +extern crate trybuild; + +#[cfg_attr(skip_ui_tests, ignore)] +#[test] +fn ui() { + let t = trybuild::TestCases::new(); + t.compile_fail("tests/ui/*.rs"); +} diff --git a/rust/hw/char/pl011/vendor/proc-macro-error/tests/ok.rs b/rust/hw/char/pl011/vendor/proc-macro-error/tests/ok.rs new file mode 100644 index 0000000000..cf64c027f8 --- /dev/null +++ 
b/rust/hw/char/pl011/vendor/proc-macro-error/tests/ok.rs @@ -0,0 +1,10 @@ +extern crate test_crate; + +use test_crate::*; + +ok!(it_works); + +#[test] +fn check_it_works() { + it_works(); +} diff --git a/rust/hw/char/pl011/vendor/proc-macro-error/tests/runtime-errors.rs b/rust/hw/char/pl011/vendor/proc-macro-error/tests/runtime-errors.rs new file mode 100644 index 0000000000..13108a2d91 --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro-error/tests/runtime-errors.rs @@ -0,0 +1,13 @@ +use proc_macro_error::*; + +#[test] +#[should_panic = "proc-macro-error API cannot be used outside of"] +fn missing_attr_emit() { + emit_call_site_error!("You won't see me"); +} + +#[test] +#[should_panic = "proc-macro-error API cannot be used outside of"] +fn missing_attr_abort() { + abort_call_site!("You won't see me"); +} diff --git a/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/abort.rs b/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/abort.rs new file mode 100644 index 0000000000..f63118251e --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/abort.rs @@ -0,0 +1,11 @@ +extern crate test_crate; +use test_crate::*; + +abort_from!(one, two); +abort_to_string!(one, two); +abort_format!(one, two); +direct_abort!(one, two); +abort_notes!(one, two); +abort_call_site_test!(one, two); + +fn main() {} diff --git a/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/abort.stderr b/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/abort.stderr new file mode 100644 index 0000000000..c5399d9d91 --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/abort.stderr @@ -0,0 +1,48 @@ +error: abort!(span, from) test + --> $DIR/abort.rs:4:13 + | +4 | abort_from!(one, two); + | ^^^ + +error: abort!(span, single_expr) test + --> $DIR/abort.rs:5:18 + | +5 | abort_to_string!(one, two); + | ^^^ + +error: abort!(span, expr1, expr2) test + --> $DIR/abort.rs:6:15 + | +6 | abort_format!(one, two); + | ^^^ + +error: Diagnostic::abort() test + --> $DIR/abort.rs:7:15 + | +7 | direct_abort!(one, two); + | ^^^ + +error: This is an error + + = note: simple note + = help: simple help + = help: simple hint + = note: simple yay + = note: format note + = note: Some note + = note: spanned simple note + = note: spanned format note + = note: Some note + + --> $DIR/abort.rs:8:14 + | +8 | abort_notes!(one, two); + | ^^^ + +error: abort_call_site! 
test + --> $DIR/abort.rs:9:1 + | +9 | abort_call_site_test!(one, two); + | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + | + = note: this error originates in a macro (in Nightly builds, run with -Z macro-backtrace for more info) diff --git a/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/append_dummy.rs b/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/append_dummy.rs new file mode 100644 index 0000000000..53d6feacc1 --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/append_dummy.rs @@ -0,0 +1,13 @@ +extern crate test_crate; +use test_crate::*; + +enum NeedDefault { + A, + B +} + +append_dummy!(need_default); + +fn main() { + let _ = NeedDefault::default(); +} diff --git a/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/append_dummy.stderr b/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/append_dummy.stderr new file mode 100644 index 0000000000..8a47ddaac4 --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/append_dummy.stderr @@ -0,0 +1,5 @@ +error: append_dummy test + --> $DIR/append_dummy.rs:9:15 + | +9 | append_dummy!(need_default); + | ^^^^^^^^^^^^ diff --git a/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/children_messages.rs b/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/children_messages.rs new file mode 100644 index 0000000000..fb9e6dc697 --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/children_messages.rs @@ -0,0 +1,6 @@ +extern crate test_crate; +use test_crate::*; + +children_messages!(one, two, three, four); + +fn main() {} diff --git a/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/children_messages.stderr b/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/children_messages.stderr new file mode 100644 index 0000000000..3b49d83165 --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/children_messages.stderr @@ -0,0 +1,23 @@ +error: main macro message + --> $DIR/children_messages.rs:4:20 + | +4 | children_messages!(one, two, three, four); + | ^^^ + +error: child message + --> $DIR/children_messages.rs:4:25 + | +4 | children_messages!(one, two, three, four); + | ^^^ + +error: main syn::Error + --> $DIR/children_messages.rs:4:30 + | +4 | children_messages!(one, two, three, four); + | ^^^^^ + +error: child syn::Error + --> $DIR/children_messages.rs:4:37 + | +4 | children_messages!(one, two, three, four); + | ^^^^ diff --git a/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/dummy.rs b/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/dummy.rs new file mode 100644 index 0000000000..caa4827886 --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/dummy.rs @@ -0,0 +1,13 @@ +extern crate test_crate; +use test_crate::*; + +enum NeedDefault { + A, + B +} + +dummy!(need_default); + +fn main() { + let _ = NeedDefault::default(); +} diff --git a/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/dummy.stderr b/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/dummy.stderr new file mode 100644 index 0000000000..bae078afa8 --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/dummy.stderr @@ -0,0 +1,5 @@ +error: set_dummy test + --> $DIR/dummy.rs:9:8 + | +9 | dummy!(need_default); + | ^^^^^^^^^^^^ diff --git a/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/emit.rs b/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/emit.rs new file mode 100644 index 0000000000..c5c7db095f --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/emit.rs @@ -0,0 +1,7 @@ +extern crate test_crate; +use test_crate::*; + +emit!(one, two, 
three, four, five); +emit_notes!(one, two); + +fn main() {} diff --git a/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/emit.stderr b/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/emit.stderr new file mode 100644 index 0000000000..9484bd628b --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/emit.stderr @@ -0,0 +1,48 @@ +error: emit!(span, from) test + --> $DIR/emit.rs:4:7 + | +4 | emit!(one, two, three, four, five); + | ^^^ + +error: emit!(span, expr1, expr2) test + --> $DIR/emit.rs:4:12 + | +4 | emit!(one, two, three, four, five); + | ^^^ + +error: emit!(span, single_expr) test + --> $DIR/emit.rs:4:17 + | +4 | emit!(one, two, three, four, five); + | ^^^^^ + +error: Diagnostic::emit() test + --> $DIR/emit.rs:4:24 + | +4 | emit!(one, two, three, four, five); + | ^^^^ + +error: emit_call_site_error!(expr) test + --> $DIR/emit.rs:4:1 + | +4 | emit!(one, two, three, four, five); + | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + | + = note: this error originates in a macro (in Nightly builds, run with -Z macro-backtrace for more info) + +error: This is an error + + = note: simple note + = help: simple help + = help: simple hint + = note: simple yay + = note: format note + = note: Some note + = note: spanned simple note + = note: spanned format note + = note: Some note + + --> $DIR/emit.rs:5:13 + | +5 | emit_notes!(one, two); + | ^^^ diff --git a/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/explicit_span_range.rs b/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/explicit_span_range.rs new file mode 100644 index 0000000000..82bbebcc55 --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/explicit_span_range.rs @@ -0,0 +1,6 @@ +extern crate test_crate; +use test_crate::*; + +explicit_span_range!(one, two, three, four); + +fn main() {} diff --git a/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/explicit_span_range.stderr b/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/explicit_span_range.stderr new file mode 100644 index 0000000000..781a71e76a --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/explicit_span_range.stderr @@ -0,0 +1,5 @@ +error: explicit SpanRange + --> $DIR/explicit_span_range.rs:4:22 + | +4 | explicit_span_range!(one, two, three, four); + | ^^^^^^^^^^^^^^^ diff --git a/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/misuse.rs b/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/misuse.rs new file mode 100644 index 0000000000..e6d2d24971 --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/misuse.rs @@ -0,0 +1,11 @@ +extern crate proc_macro_error; +use proc_macro_error::abort; + +struct Foo; + +#[allow(unused)] +fn foo() { + abort!(Foo, "BOOM"); +} + +fn main() {} diff --git a/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/misuse.stderr b/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/misuse.stderr new file mode 100644 index 0000000000..8eaf6456fd --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/misuse.stderr @@ -0,0 +1,13 @@ +error[E0599]: no method named `FIRST_ARG_MUST_EITHER_BE_Span_OR_IMPLEMENT_ToTokens_OR_BE_SpanRange` found for reference `&Foo` in the current scope + --> $DIR/misuse.rs:8:5 + | +4 | struct Foo; + | ----------- doesn't satisfy `Foo: quote::to_tokens::ToTokens` +... 
+8 | abort!(Foo, "BOOM"); + | ^^^^^^^^^^^^^^^^^^^^ method not found in `&Foo` + | + = note: the method `FIRST_ARG_MUST_EITHER_BE_Span_OR_IMPLEMENT_ToTokens_OR_BE_SpanRange` exists but the following trait bounds were not satisfied: + `Foo: quote::to_tokens::ToTokens` + which is required by `&Foo: proc_macro_error::__export::ToTokensAsSpanRange` + = note: this error originates in a macro (in Nightly builds, run with -Z macro-backtrace for more info) diff --git a/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/multiple_tokens.rs b/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/multiple_tokens.rs new file mode 100644 index 0000000000..215928f6f4 --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/multiple_tokens.rs @@ -0,0 +1,6 @@ +extern crate test_crate; + +#[test_crate::multiple_tokens] +type T = (); + +fn main() {} \ No newline at end of file diff --git a/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/multiple_tokens.stderr b/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/multiple_tokens.stderr new file mode 100644 index 0000000000..c6172c6cc6 --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/multiple_tokens.stderr @@ -0,0 +1,5 @@ +error: ... + --> $DIR/multiple_tokens.rs:4:1 + | +4 | type T = (); + | ^^^^^^^^^^^^ diff --git a/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/not_proc_macro.rs b/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/not_proc_macro.rs new file mode 100644 index 0000000000..e241c5cd28 --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/not_proc_macro.rs @@ -0,0 +1,4 @@ +use proc_macro_error::proc_macro_error; + +#[proc_macro_error] +fn main() {} diff --git a/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/not_proc_macro.stderr b/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/not_proc_macro.stderr new file mode 100644 index 0000000000..f19f01bd8e --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/not_proc_macro.stderr @@ -0,0 +1,10 @@ +error: #[proc_macro_error] attribute can be used only with procedural macros + + = hint: if you are really sure that #[proc_macro_error] should be applied to this exact function, use #[proc_macro_error(allow_not_macro)] + + --> $DIR/not_proc_macro.rs:3:1 + | +3 | #[proc_macro_error] + | ^^^^^^^^^^^^^^^^^^^ + | + = note: this error originates in an attribute macro (in Nightly builds, run with -Z macro-backtrace for more info) diff --git a/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/option_ext.rs b/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/option_ext.rs new file mode 100644 index 0000000000..dfbfc03835 --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/option_ext.rs @@ -0,0 +1,6 @@ +extern crate test_crate; +use test_crate::*; + +option_ext!(one, two); + +fn main() {} diff --git a/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/option_ext.stderr b/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/option_ext.stderr new file mode 100644 index 0000000000..91b151ec2f --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/option_ext.stderr @@ -0,0 +1,7 @@ +error: Option::expect_or_abort() test + --> $DIR/option_ext.rs:4:1 + | +4 | option_ext!(one, two); + | ^^^^^^^^^^^^^^^^^^^^^^ + | + = note: this error originates in a macro (in Nightly builds, run with -Z macro-backtrace for more info) diff --git a/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/proc_macro_hack.rs b/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/proc_macro_hack.rs new file mode 100644 
index 0000000000..2504bdd401 --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/proc_macro_hack.rs @@ -0,0 +1,10 @@ +// Adapted from https://github.com/dtolnay/proc-macro-hack/blob/master/example/src/main.rs +// Licensed under either of Apache License, Version 2.0 or MIT license at your option. + +use proc_macro_hack_test::add_one; + +fn main() { + let two = 2; + let nine = add_one!(two) + add_one!(2 + 3); + println!("nine = {}", nine); +} diff --git a/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/proc_macro_hack.stderr b/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/proc_macro_hack.stderr new file mode 100644 index 0000000000..0e984f918d --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/proc_macro_hack.stderr @@ -0,0 +1,26 @@ +error: BOOM + --> $DIR/proc_macro_hack.rs:8:25 + | +8 | let nine = add_one!(two) + add_one!(2 + 3); + | ^^^ + | + = note: this error originates in a macro (in Nightly builds, run with -Z macro-backtrace for more info) + +error: BOOM + --> $DIR/proc_macro_hack.rs:8:41 + | +8 | let nine = add_one!(two) + add_one!(2 + 3); + | ^^^^^ + | + = note: this error originates in a macro (in Nightly builds, run with -Z macro-backtrace for more info) + +warning: unreachable expression + --> $DIR/proc_macro_hack.rs:8:32 + | +8 | let nine = add_one!(two) + add_one!(2 + 3); + | ------------- ^^^^^^^^^^^^^^^ unreachable expression + | | + | any code following this expression is unreachable + | + = note: `#[warn(unreachable_code)]` on by default + = note: this warning originates in a macro (in Nightly builds, run with -Z macro-backtrace for more info) diff --git a/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/result_ext.rs b/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/result_ext.rs new file mode 100644 index 0000000000..bdd560dba9 --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/result_ext.rs @@ -0,0 +1,7 @@ +extern crate test_crate; +use test_crate::*; + +result_unwrap_or_abort!(one, two); +result_expect_or_abort!(one, two); + +fn main() {} diff --git a/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/result_ext.stderr b/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/result_ext.stderr new file mode 100644 index 0000000000..f2dc0e4235 --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/result_ext.stderr @@ -0,0 +1,11 @@ +error: Result::unwrap_or_abort() test + --> $DIR/result_ext.rs:4:25 + | +4 | result_unwrap_or_abort!(one, two); + | ^^^ + +error: BOOM: Result::expect_or_abort() test + --> $DIR/result_ext.rs:5:25 + | +5 | result_expect_or_abort!(one, two); + | ^^^ diff --git a/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/to_tokens_span.rs b/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/to_tokens_span.rs new file mode 100644 index 0000000000..a7c3fc976c --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/to_tokens_span.rs @@ -0,0 +1,6 @@ +extern crate test_crate; +use test_crate::*; + +to_tokens_span!(std::option::Option); + +fn main() {} diff --git a/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/to_tokens_span.stderr b/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/to_tokens_span.stderr new file mode 100644 index 0000000000..b8c4968263 --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/to_tokens_span.stderr @@ -0,0 +1,11 @@ +error: whole type + --> $DIR/to_tokens_span.rs:4:17 + | +4 | to_tokens_span!(std::option::Option); + | ^^^^^^^^^^^^^^^^^^^ + +error: explicit .span() + --> 
$DIR/to_tokens_span.rs:4:17 + | +4 | to_tokens_span!(std::option::Option); + | ^^^ diff --git a/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/unknown_setting.rs b/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/unknown_setting.rs new file mode 100644 index 0000000000..d8e58eaf87 --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/unknown_setting.rs @@ -0,0 +1,4 @@ +use proc_macro_error::proc_macro_error; + +#[proc_macro_error(allow_not_macro, assert_unwind_safe, trololo)] +fn main() {} diff --git a/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/unknown_setting.stderr b/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/unknown_setting.stderr new file mode 100644 index 0000000000..a55de0b31b --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/unknown_setting.stderr @@ -0,0 +1,5 @@ +error: unknown setting `trololo`, expected one of `assert_unwind_safe`, `allow_not_macro`, `proc_macro_hack` + --> $DIR/unknown_setting.rs:3:57 + | +3 | #[proc_macro_error(allow_not_macro, assert_unwind_safe, trololo)] + | ^^^^^^^ diff --git a/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/unrelated_panic.rs b/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/unrelated_panic.rs new file mode 100644 index 0000000000..c74e3e0623 --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/unrelated_panic.rs @@ -0,0 +1,6 @@ +extern crate test_crate; +use test_crate::*; + +unrelated_panic!(); + +fn main() {} diff --git a/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/unrelated_panic.stderr b/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/unrelated_panic.stderr new file mode 100644 index 0000000000..d46d689f2f --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro-error/tests/ui/unrelated_panic.stderr @@ -0,0 +1,7 @@ +error: proc macro panicked + --> $DIR/unrelated_panic.rs:4:1 + | +4 | unrelated_panic!(); + | ^^^^^^^^^^^^^^^^^^^ + | + = help: message: unrelated panic test diff --git a/rust/hw/char/pl011/vendor/proc-macro2/.cargo-checksum.json b/rust/hw/char/pl011/vendor/proc-macro2/.cargo-checksum.json new file mode 100644 index 0000000000..83f4c8a5ec --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro2/.cargo-checksum.json @@ -0,0 +1 @@ 
+{"files":{"Cargo.toml":"cdfebba5c7483fd052619894a923bd5aa5959a0fcfd7a2fc7e695c6a6231e87a","LICENSE-APACHE":"62c7a1e35f56406896d7aa7ca52d0cc0d272ac022b5d2796e7d6905db8a3636a","LICENSE-MIT":"23f18e03dc49df91622fe2a76176497404e46ced8a715d9d2b67a7446571cca3","README.md":"c609b6865476d6c35879784e9155367a97a0da496aa5c3c61488440a20f59883","build.rs":"c385804afdf08a6292ed1f44afec6cfd0d9600410030ab5dc5bba842fbf0b6b3","build/probe.rs":"971fd2178dc506ccdc5c2065c37b77696a4aee8e00330ca52625db4a857f68d3","rust-toolchain.toml":"6bbb61302978c736b2da03e4fb40e3beab908f85d533ab46fd541e637b5f3e0f","src/detection.rs":"ed9a5f9a979ab01247d7a68eeb1afa3c13209334c5bfff0f9289cb07e5bb4e8b","src/extra.rs":"29f094473279a29b71c3cc9f5fa27c2e2c30c670390cf7e4b7cf451486cc857e","src/fallback.rs":"be1ce5e32c88c29d41d2ab663375951817d52decce3dc9e335ec22378be8fa65","src/lib.rs":"4bd042e054d240332664d67f537419a4fa5e29a4c020d1fac3b6f1f58378ae49","src/location.rs":"9225c5a55f03b56cce42bc55ceb509e8216a5e0b24c94aa1cd071b04e3d6c15f","src/marker.rs":"c11c5a1be8bdf18be3fcd224393f350a9aae7ce282e19ce583c84910c6903a8f","src/parse.rs":"4b77cddbc2752bc4d38a65acd8b96b6786c5220d19b1e1b37810257b5d24132d","src/rcvec.rs":"1c3c48c4f819927cc445ae15ca3bb06775feff2fd1cb21901ae4c40c7e6b4e82","src/wrapper.rs":"e41df9abc846b40f0cf01150d22b91944d07cde93bc72aa34798101652675844","tests/comments.rs":"31115b3a56c83d93eef2fb4c9566bf4543e302560732986161b98aef504785ed","tests/features.rs":"a86deb8644992a4eb64d9fd493eff16f9cf9c5cb6ade3a634ce0c990cf87d559","tests/marker.rs":"473e962ee1aa0633dd5cf9a973b3bbd0ef43b740d4b7f6d008ff455a6b89d386","tests/test.rs":"2e7106f582367d168638be7364d4e9aadbe0affca8b51dd80f0b3977cc2fcf83","tests/test_fmt.rs":"b7743b612af65f2c88cbe109d50a093db7aa7e87f9e37bf45b7bbaeb240aa020","tests/test_size.rs":"08fb1d6bcf867707dfa18d30fceb18c58e8c44c89e058d8d6bfd2b281c77e14e"},"package":"ec96c6a92621310b51366f1e28d05ef11489516e93be030060e5fc12024a49d6"} \ No newline at end of file diff --git a/rust/hw/char/pl011/vendor/proc-macro2/Cargo.toml b/rust/hw/char/pl011/vendor/proc-macro2/Cargo.toml new file mode 100644 index 0000000000..193a898a8c --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro2/Cargo.toml @@ -0,0 +1,104 @@ +# THIS FILE IS AUTOMATICALLY GENERATED BY CARGO +# +# When uploading crates to the registry Cargo will automatically +# "normalize" Cargo.toml files for maximal compatibility +# with all versions of Cargo and also rewrite `path` dependencies +# to registry (e.g., crates.io) dependencies. +# +# If you are reading this file be aware that the original Cargo.toml +# will likely look very different (and much more reasonable). +# See Cargo.toml.orig for the original contents. + +[package] +edition = "2021" +rust-version = "1.56" +name = "proc-macro2" +version = "1.0.84" +authors = [ + "David Tolnay ", + "Alex Crichton ", +] +build = "build.rs" +autobins = false +autoexamples = false +autotests = false +autobenches = false +description = "A substitute implementation of the compiler's `proc_macro` API to decouple token-based libraries from the procedural macro use case." 
+documentation = "https://docs.rs/proc-macro2" +readme = "README.md" +keywords = [ + "macros", + "syn", +] +categories = ["development-tools::procedural-macro-helpers"] +license = "MIT OR Apache-2.0" +repository = "https://github.com/dtolnay/proc-macro2" + +[package.metadata.docs.rs] +rustc-args = [ + "--cfg", + "procmacro2_semver_exempt", +] +rustdoc-args = [ + "--cfg", + "procmacro2_semver_exempt", + "--generate-link-to-definition", +] +targets = ["x86_64-unknown-linux-gnu"] + +[package.metadata.playground] +features = ["span-locations"] + +[lib] +name = "proc_macro2" +path = "src/lib.rs" +doc-scrape-examples = false + +[[test]] +name = "comments" +path = "tests/comments.rs" + +[[test]] +name = "test_fmt" +path = "tests/test_fmt.rs" + +[[test]] +name = "features" +path = "tests/features.rs" + +[[test]] +name = "marker" +path = "tests/marker.rs" + +[[test]] +name = "test_size" +path = "tests/test_size.rs" + +[[test]] +name = "test" +path = "tests/test.rs" + +[dependencies.unicode-ident] +version = "1.0" + +[dev-dependencies.flate2] +version = "1.0" + +[dev-dependencies.quote] +version = "1.0" +default-features = false + +[dev-dependencies.rayon] +version = "1.0" + +[dev-dependencies.rustversion] +version = "1" + +[dev-dependencies.tar] +version = "0.4" + +[features] +default = ["proc-macro"] +nightly = [] +proc-macro = [] +span-locations = [] diff --git a/rust/hw/char/pl011/vendor/proc-macro2/LICENSE-APACHE b/rust/hw/char/pl011/vendor/proc-macro2/LICENSE-APACHE new file mode 100644 index 0000000000..1b5ec8b78e --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro2/LICENSE-APACHE @@ -0,0 +1,176 @@ + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + +TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + +1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). 
+ + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + +2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + +3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + +4. Redistribution. 
You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + +5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + +6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + +7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + +8. Limitation of Liability. 
In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + +9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + +END OF TERMS AND CONDITIONS diff --git a/rust/hw/char/pl011/vendor/proc-macro2/LICENSE-MIT b/rust/hw/char/pl011/vendor/proc-macro2/LICENSE-MIT new file mode 100644 index 0000000000..31aa79387f --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro2/LICENSE-MIT @@ -0,0 +1,23 @@ +Permission is hereby granted, free of charge, to any +person obtaining a copy of this software and associated +documentation files (the "Software"), to deal in the +Software without restriction, including without +limitation the rights to use, copy, modify, merge, +publish, distribute, sublicense, and/or sell copies of +the Software, and to permit persons to whom the Software +is furnished to do so, subject to the following +conditions: + +The above copyright notice and this permission notice +shall be included in all copies or substantial portions +of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF +ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED +TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A +PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT +SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY +CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION +OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR +IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER +DEALINGS IN THE SOFTWARE. diff --git a/rust/hw/char/pl011/vendor/proc-macro2/README.md b/rust/hw/char/pl011/vendor/proc-macro2/README.md new file mode 100644 index 0000000000..3a29ce8b89 --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro2/README.md @@ -0,0 +1,94 @@ +# proc-macro2 + +[github](https://github.com/dtolnay/proc-macro2) +[crates.io](https://crates.io/crates/proc-macro2) +[docs.rs](https://docs.rs/proc-macro2) +[build status](https://github.com/dtolnay/proc-macro2/actions?query=branch%3Amaster) + +A wrapper around the procedural macro API of the compiler's `proc_macro` crate. +This library serves two purposes: + +- **Bring proc-macro-like functionality to other contexts like build.rs and + main.rs.** Types from `proc_macro` are entirely specific to procedural macros + and cannot ever exist in code outside of a procedural macro. 
Meanwhile + `proc_macro2` types may exist anywhere including non-macro code. By developing + foundational libraries like [syn] and [quote] against `proc_macro2` rather + than `proc_macro`, the procedural macro ecosystem becomes easily applicable to + many other use cases and we avoid reimplementing non-macro equivalents of + those libraries. + +- **Make procedural macros unit testable.** As a consequence of being specific + to procedural macros, nothing that uses `proc_macro` can be executed from a + unit test. In order for helper libraries or components of a macro to be + testable in isolation, they must be implemented using `proc_macro2`. + +[syn]: https://github.com/dtolnay/syn +[quote]: https://github.com/dtolnay/quote + +## Usage + +```toml +[dependencies] +proc-macro2 = "1.0" +``` + +The skeleton of a typical procedural macro typically looks like this: + +```rust +extern crate proc_macro; + +#[proc_macro_derive(MyDerive)] +pub fn my_derive(input: proc_macro::TokenStream) -> proc_macro::TokenStream { + let input = proc_macro2::TokenStream::from(input); + + let output: proc_macro2::TokenStream = { + /* transform input */ + }; + + proc_macro::TokenStream::from(output) +} +``` + +If parsing with [Syn], you'll use [`parse_macro_input!`] instead to propagate +parse errors correctly back to the compiler when parsing fails. + +[`parse_macro_input!`]: https://docs.rs/syn/2.0/syn/macro.parse_macro_input.html + +## Unstable features + +The default feature set of proc-macro2 tracks the most recent stable compiler +API. Functionality in `proc_macro` that is not yet stable is not exposed by +proc-macro2 by default. + +To opt into the additional APIs available in the most recent nightly compiler, +the `procmacro2_semver_exempt` config flag must be passed to rustc. We will +polyfill those nightly-only APIs back to Rust 1.56.0. As these are unstable APIs +that track the nightly compiler, minor versions of proc-macro2 may make breaking +changes to them at any time. + +``` +RUSTFLAGS='--cfg procmacro2_semver_exempt' cargo build +``` + +Note that this must not only be done for your crate, but for any crate that +depends on your crate. This infectious nature is intentional, as it serves as a +reminder that you are outside of the normal semver guarantees. + +Semver exempt methods are marked as such in the proc-macro2 documentation. + +
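(Editorial addition, not part of the upstream README.) To make the unit-testing point above concrete, here is a minimal sketch of an ordinary `#[test]` that drives proc-macro2 outside of any macro expansion; the test name and source snippet are invented for illustration:

```rust
use std::str::FromStr;

use proc_macro2::TokenStream;

#[test]
fn token_stream_roundtrips_outside_a_macro() {
    // In a plain test binary the compiler's proc_macro API is unavailable,
    // so proc-macro2 transparently uses its own fallback lexer.
    let src = "fn add(a: u32, b: u32) -> u32 { a + b }";
    let tokens = TokenStream::from_str(src).expect("valid Rust tokens");
    assert!(!tokens.is_empty());

    // Display output is documented to re-parse into an equivalent stream
    // (modulo spans), which makes string-based assertions practical here.
    let reparsed = TokenStream::from_str(&tokens.to_string()).unwrap();
    assert_eq!(tokens.to_string(), reparsed.to_string());
}
```

The same test written directly against the compiler's `proc_macro` crate could not run at all, since that API only works inside an active macro expansion.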
+ +#### License + + +Licensed under either of Apache License, Version +2.0 or MIT license at your option. + + +
+ + +Unless you explicitly state otherwise, any contribution intentionally submitted +for inclusion in this crate by you, as defined in the Apache-2.0 license, shall +be dual licensed as above, without any additional terms or conditions. + diff --git a/rust/hw/char/pl011/vendor/proc-macro2/build.rs b/rust/hw/char/pl011/vendor/proc-macro2/build.rs new file mode 100644 index 0000000000..0a95c22661 --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro2/build.rs @@ -0,0 +1,227 @@ +// rustc-cfg emitted by the build script: +// +// "wrap_proc_macro" +// Wrap types from libproc_macro rather than polyfilling the whole API. +// Enabled on rustc 1.29+ as long as procmacro2_semver_exempt is not set, +// because we can't emulate the unstable API without emulating everything +// else. Also enabled unconditionally on nightly, in which case the +// procmacro2_semver_exempt surface area is implemented by using the +// nightly-only proc_macro API. +// +// "hygiene" +// Enable Span::mixed_site() and non-dummy behavior of Span::resolved_at +// and Span::located_at. Enabled on Rust 1.45+. +// +// "proc_macro_span" +// Enable non-dummy behavior of Span::start and Span::end methods which +// requires an unstable compiler feature. Enabled when building with +// nightly, unless `-Z allow-feature` in RUSTFLAGS disallows unstable +// features. +// +// "super_unstable" +// Implement the semver exempt API in terms of the nightly-only proc_macro +// API. Enabled when using procmacro2_semver_exempt on a nightly compiler. +// +// "span_locations" +// Provide methods Span::start and Span::end which give the line/column +// location of a token. Enabled by procmacro2_semver_exempt or the +// "span-locations" Cargo cfg. This is behind a cfg because tracking +// location inside spans is a performance hit. +// +// "is_available" +// Use proc_macro::is_available() to detect if the proc macro API is +// available or needs to be polyfilled instead of trying to use the proc +// macro API and catching a panic if it isn't available. Enabled on Rust +// 1.57+. 
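// (Editorial illustration; not part of the vendored build script.) The cfgs
// listed above are ordinary `--cfg` flags: the build script prints a
// `cargo:rustc-cfg=NAME` line and library code then selects an implementation
// with `#[cfg(NAME)]`. A minimal sketch of that pattern, reusing the
// `no_is_available` cfg that the script below emits for pre-1.57 compilers:
//
//     // hypothetical build.rs
//     fn main() {
//         let rustc_minor: u32 = 56; // normally probed via `rustc --version`
//         if rustc_minor < 57 {
//             // proc_macro::is_available() does not exist yet, so ask rustc to
//             // build the crate with `--cfg no_is_available`; src/detection.rs
//             // then compiles its panic-catching probe instead.
//             println!("cargo:rustc-cfg=no_is_available");
//         }
//     }
//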
+ +#![allow(unknown_lints)] +#![allow(unexpected_cfgs)] + +use std::env; +use std::ffi::OsString; +use std::iter; +use std::path::Path; +use std::process::{self, Command, Stdio}; +use std::str; + +fn main() { + let rustc = rustc_minor_version().unwrap_or(u32::MAX); + + if rustc >= 80 { + println!("cargo:rustc-check-cfg=cfg(fuzzing)"); + println!("cargo:rustc-check-cfg=cfg(no_is_available)"); + println!("cargo:rustc-check-cfg=cfg(no_literal_byte_character)"); + println!("cargo:rustc-check-cfg=cfg(no_literal_c_string)"); + println!("cargo:rustc-check-cfg=cfg(no_source_text)"); + println!("cargo:rustc-check-cfg=cfg(proc_macro_span)"); + println!("cargo:rustc-check-cfg=cfg(procmacro2_backtrace)"); + println!("cargo:rustc-check-cfg=cfg(procmacro2_nightly_testing)"); + println!("cargo:rustc-check-cfg=cfg(procmacro2_semver_exempt)"); + println!("cargo:rustc-check-cfg=cfg(randomize_layout)"); + println!("cargo:rustc-check-cfg=cfg(span_locations)"); + println!("cargo:rustc-check-cfg=cfg(super_unstable)"); + println!("cargo:rustc-check-cfg=cfg(wrap_proc_macro)"); + } + + let docs_rs = env::var_os("DOCS_RS").is_some(); + let semver_exempt = cfg!(procmacro2_semver_exempt) || docs_rs; + if semver_exempt { + // https://github.com/dtolnay/proc-macro2/issues/147 + println!("cargo:rustc-cfg=procmacro2_semver_exempt"); + } + + if semver_exempt || cfg!(feature = "span-locations") { + println!("cargo:rustc-cfg=span_locations"); + } + + if rustc < 57 { + println!("cargo:rustc-cfg=no_is_available"); + } + + if rustc < 66 { + println!("cargo:rustc-cfg=no_source_text"); + } + + if rustc < 79 { + println!("cargo:rustc-cfg=no_literal_byte_character"); + println!("cargo:rustc-cfg=no_literal_c_string"); + } + + if !cfg!(feature = "proc-macro") { + println!("cargo:rerun-if-changed=build.rs"); + return; + } + + println!("cargo:rerun-if-changed=build/probe.rs"); + + let proc_macro_span; + let consider_rustc_bootstrap; + if compile_probe(false) { + // This is a nightly or dev compiler, so it supports unstable features + // regardless of RUSTC_BOOTSTRAP. No need to rerun build script if + // RUSTC_BOOTSTRAP is changed. + proc_macro_span = true; + consider_rustc_bootstrap = false; + } else if let Some(rustc_bootstrap) = env::var_os("RUSTC_BOOTSTRAP") { + if compile_probe(true) { + // This is a stable or beta compiler for which the user has set + // RUSTC_BOOTSTRAP to turn on unstable features. Rerun build script + // if they change it. + proc_macro_span = true; + consider_rustc_bootstrap = true; + } else if rustc_bootstrap == "1" { + // This compiler does not support the proc macro Span API in the + // form that proc-macro2 expects. No need to pay attention to + // RUSTC_BOOTSTRAP. + proc_macro_span = false; + consider_rustc_bootstrap = false; + } else { + // This is a stable or beta compiler for which RUSTC_BOOTSTRAP is + // set to restrict the use of unstable features by this crate. + proc_macro_span = false; + consider_rustc_bootstrap = true; + } + } else { + // Without RUSTC_BOOTSTRAP, this compiler does not support the proc + // macro Span API in the form that proc-macro2 expects, but try again if + // the user turns on unstable features. 
+ proc_macro_span = false; + consider_rustc_bootstrap = true; + } + + if proc_macro_span || !semver_exempt { + println!("cargo:rustc-cfg=wrap_proc_macro"); + } + + if proc_macro_span { + println!("cargo:rustc-cfg=proc_macro_span"); + } + + if semver_exempt && proc_macro_span { + println!("cargo:rustc-cfg=super_unstable"); + } + + if consider_rustc_bootstrap { + println!("cargo:rerun-if-env-changed=RUSTC_BOOTSTRAP"); + } +} + +fn compile_probe(rustc_bootstrap: bool) -> bool { + if env::var_os("RUSTC_STAGE").is_some() { + // We are running inside rustc bootstrap. This is a highly non-standard + // environment with issues such as: + // + // https://github.com/rust-lang/cargo/issues/11138 + // https://github.com/rust-lang/rust/issues/114839 + // + // Let's just not use nightly features here. + return false; + } + + let rustc = cargo_env_var("RUSTC"); + let out_dir = cargo_env_var("OUT_DIR"); + let probefile = Path::new("build").join("probe.rs"); + + let rustc_wrapper = env::var_os("RUSTC_WRAPPER").filter(|wrapper| !wrapper.is_empty()); + let rustc_workspace_wrapper = + env::var_os("RUSTC_WORKSPACE_WRAPPER").filter(|wrapper| !wrapper.is_empty()); + let mut rustc = rustc_wrapper + .into_iter() + .chain(rustc_workspace_wrapper) + .chain(iter::once(rustc)); + let mut cmd = Command::new(rustc.next().unwrap()); + cmd.args(rustc); + + if !rustc_bootstrap { + cmd.env_remove("RUSTC_BOOTSTRAP"); + } + + cmd.stderr(Stdio::null()) + .arg("--edition=2021") + .arg("--crate-name=proc_macro2") + .arg("--crate-type=lib") + .arg("--cap-lints=allow") + .arg("--emit=dep-info,metadata") + .arg("--out-dir") + .arg(out_dir) + .arg(probefile); + + if let Some(target) = env::var_os("TARGET") { + cmd.arg("--target").arg(target); + } + + // If Cargo wants to set RUSTFLAGS, use that. + if let Ok(rustflags) = env::var("CARGO_ENCODED_RUSTFLAGS") { + if !rustflags.is_empty() { + for arg in rustflags.split('\x1f') { + cmd.arg(arg); + } + } + } + + match cmd.status() { + Ok(status) => status.success(), + Err(_) => false, + } +} + +fn rustc_minor_version() -> Option { + let rustc = cargo_env_var("RUSTC"); + let output = Command::new(rustc).arg("--version").output().ok()?; + let version = str::from_utf8(&output.stdout).ok()?; + let mut pieces = version.split('.'); + if pieces.next() != Some("rustc 1") { + return None; + } + pieces.next()?.parse().ok() +} + +fn cargo_env_var(key: &str) -> OsString { + env::var_os(key).unwrap_or_else(|| { + eprintln!( + "Environment variable ${} is not set during execution of build script", + key, + ); + process::exit(1); + }) +} diff --git a/rust/hw/char/pl011/vendor/proc-macro2/build/probe.rs b/rust/hw/char/pl011/vendor/proc-macro2/build/probe.rs new file mode 100644 index 0000000000..2c4947a0b8 --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro2/build/probe.rs @@ -0,0 +1,25 @@ +// This code exercises the surface area that we expect of Span's unstable API. +// If the current toolchain is able to compile it, then proc-macro2 is able to +// offer these APIs too. + +#![feature(proc_macro_span)] + +extern crate proc_macro; + +use core::ops::{Range, RangeBounds}; +use proc_macro::{Literal, Span}; + +pub fn byte_range(this: &Span) -> Range { + this.byte_range() +} + +pub fn join(this: &Span, other: Span) -> Option { + this.join(other) +} + +pub fn subspan>(this: &Literal, range: R) -> Option { + this.subspan(range) +} + +// Include in sccache cache key. 
+const _: Option<&str> = option_env!("RUSTC_BOOTSTRAP"); diff --git a/rust/hw/char/pl011/vendor/proc-macro2/meson.build b/rust/hw/char/pl011/vendor/proc-macro2/meson.build new file mode 100644 index 0000000000..2a97df4a70 --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro2/meson.build @@ -0,0 +1,19 @@ +_proc_macro2_rs = static_library( + 'proc_macro2', + files('src/lib.rs'), + gnu_symbol_visibility: 'hidden', + rust_abi: 'rust', + rust_args: rust_args + [ + '--edition', '2021', + '--cfg', 'feature="proc-macro"', + '--cfg', 'span_locations', + '--cfg', 'wrap_proc_macro', + ], + dependencies: [ + dep_unicode_ident, + ], +) + +dep_proc_macro2 = declare_dependency( + link_with: _proc_macro2_rs, +) diff --git a/rust/hw/char/pl011/vendor/proc-macro2/rust-toolchain.toml b/rust/hw/char/pl011/vendor/proc-macro2/rust-toolchain.toml new file mode 100644 index 0000000000..20fe888c30 --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro2/rust-toolchain.toml @@ -0,0 +1,2 @@ +[toolchain] +components = ["rust-src"] diff --git a/rust/hw/char/pl011/vendor/proc-macro2/src/detection.rs b/rust/hw/char/pl011/vendor/proc-macro2/src/detection.rs new file mode 100644 index 0000000000..beba7b2373 --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro2/src/detection.rs @@ -0,0 +1,75 @@ +use core::sync::atomic::{AtomicUsize, Ordering}; +use std::sync::Once; + +static WORKS: AtomicUsize = AtomicUsize::new(0); +static INIT: Once = Once::new(); + +pub(crate) fn inside_proc_macro() -> bool { + match WORKS.load(Ordering::Relaxed) { + 1 => return false, + 2 => return true, + _ => {} + } + + INIT.call_once(initialize); + inside_proc_macro() +} + +pub(crate) fn force_fallback() { + WORKS.store(1, Ordering::Relaxed); +} + +pub(crate) fn unforce_fallback() { + initialize(); +} + +#[cfg(not(no_is_available))] +fn initialize() { + let available = proc_macro::is_available(); + WORKS.store(available as usize + 1, Ordering::Relaxed); +} + +// Swap in a null panic hook to avoid printing "thread panicked" to stderr, +// then use catch_unwind to determine whether the compiler's proc_macro is +// working. When proc-macro2 is used from outside of a procedural macro all +// of the proc_macro crate's APIs currently panic. +// +// The Once is to prevent the possibility of this ordering: +// +// thread 1 calls take_hook, gets the user's original hook +// thread 1 calls set_hook with the null hook +// thread 2 calls take_hook, thinks null hook is the original hook +// thread 2 calls set_hook with the null hook +// thread 1 calls set_hook with the actual original hook +// thread 2 calls set_hook with what it thinks is the original hook +// +// in which the user's hook has been lost. +// +// There is still a race condition where a panic in a different thread can +// happen during the interval that the user's original panic hook is +// unregistered such that their hook is incorrectly not called. This is +// sufficiently unlikely and less bad than printing panic messages to stderr +// on correct use of this crate. Maybe there is a libstd feature request +// here. For now, if a user needs to guarantee that this failure mode does +// not occur, they need to call e.g. `proc_macro2::Span::call_site()` from +// the main thread before launching any other threads. 
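// (Editorial illustration; not part of the vendored file.) On toolchains old
// enough to compile this fallback, the workaround described above amounts to
// touching proc-macro2 once on the main thread before spawning workers:
//
//     fn main() {
//         // Runs the one-time detection (and its panic-hook swap) up front.
//         let _ = proc_macro2::Span::call_site();
//
//         let handles: Vec<_> = (0..4)
//             .map(|_| {
//                 std::thread::spawn(|| {
//                     // Detection is already cached, so no hook is touched here.
//                     let _ = "struct Demo;".parse::<proc_macro2::TokenStream>();
//                 })
//             })
//             .collect();
//         for handle in handles {
//             handle.join().unwrap();
//         }
//     }
//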
+#[cfg(no_is_available)] +fn initialize() { + use std::panic::{self, PanicInfo}; + + type PanicHook = dyn Fn(&PanicInfo) + Sync + Send + 'static; + + let null_hook: Box<PanicHook> = Box::new(|_panic_info| { /* ignore */ }); + let sanity_check = &*null_hook as *const PanicHook; + let original_hook = panic::take_hook(); + panic::set_hook(null_hook); + + let works = panic::catch_unwind(proc_macro::Span::call_site).is_ok(); + WORKS.store(works as usize + 1, Ordering::Relaxed); + + let hopefully_null_hook = panic::take_hook(); + panic::set_hook(original_hook); + if sanity_check != &*hopefully_null_hook { + panic!("observed race condition in proc_macro2::inside_proc_macro"); + } +} diff --git a/rust/hw/char/pl011/vendor/proc-macro2/src/extra.rs b/rust/hw/char/pl011/vendor/proc-macro2/src/extra.rs new file mode 100644 index 0000000000..522a90e136 --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro2/src/extra.rs @@ -0,0 +1,151 @@ +//! Items which do not have a correspondence to any API in the proc_macro crate, +//! but are necessary to include in proc-macro2. + +use crate::fallback; +use crate::imp; +use crate::marker::{ProcMacroAutoTraits, MARKER}; +use crate::Span; +use core::fmt::{self, Debug}; + +/// Invalidate any `proc_macro2::Span` that exist on the current thread. +/// +/// The implementation of `Span` uses thread-local data structures and this +/// function clears them. Calling any method on a `Span` on the current thread +/// created prior to the invalidation will return incorrect values or crash. +/// +/// This function is useful for programs that process more than 2<sup>32</sup> +/// bytes of Rust source code on the same thread. Just like rustc, proc-macro2 +/// uses 32-bit source locations, and these wrap around when the total source +/// code processed by the same thread exceeds 2<sup>32</sup> bytes (4 +/// gigabytes). After a wraparound, `Span` methods such as `source_text()` can +/// return wrong data. +/// +/// # Example +/// +/// As of late 2023, there is 200 GB of Rust code published on crates.io. +/// Looking at just the newest version of every crate, it is 16 GB of code. So a +/// workload that involves parsing it all would overflow a 32-bit source +/// location unless spans are being invalidated. +/// +/// ``` +/// use flate2::read::GzDecoder; +/// use std::ffi::OsStr; +/// use std::io::{BufReader, Read}; +/// use std::str::FromStr; +/// use tar::Archive; +/// +/// rayon::scope(|s| { +/// for krate in every_version_of_every_crate() { +/// s.spawn(move |_| { +/// proc_macro2::extra::invalidate_current_thread_spans(); +/// +/// let reader = BufReader::new(krate); +/// let tar = GzDecoder::new(reader); +/// let mut archive = Archive::new(tar); +/// for entry in archive.entries().unwrap() { +/// let mut entry = entry.unwrap(); +/// let path = entry.path().unwrap(); +/// if path.extension() != Some(OsStr::new("rs")) { +/// continue; +/// } +/// let mut content = String::new(); +/// entry.read_to_string(&mut content).unwrap(); +/// match proc_macro2::TokenStream::from_str(&content) { +/// Ok(tokens) => {/* ... */}, +/// Err(_) => continue, +/// } +/// } +/// }); +/// } +/// }); +/// # +/// # fn every_version_of_every_crate() -> Vec { +/// # Vec::new() +/// # } +/// ``` +/// +/// # Panics +/// +/// This function is not applicable to and will panic if called from a +/// procedural macro.
+#[cfg(span_locations)] +#[cfg_attr(docsrs, doc(cfg(feature = "span-locations")))] +pub fn invalidate_current_thread_spans() { + crate::imp::invalidate_current_thread_spans(); +} + +/// An object that holds a [`Group`]'s `span_open()` and `span_close()` together +/// in a more compact representation than holding those 2 spans individually. +/// +/// [`Group`]: crate::Group +#[derive(Copy, Clone)] +pub struct DelimSpan { + inner: DelimSpanEnum, + _marker: ProcMacroAutoTraits, +} + +#[derive(Copy, Clone)] +enum DelimSpanEnum { + #[cfg(wrap_proc_macro)] + Compiler { + join: proc_macro::Span, + open: proc_macro::Span, + close: proc_macro::Span, + }, + Fallback(fallback::Span), +} + +impl DelimSpan { + pub(crate) fn new(group: &imp::Group) -> Self { + #[cfg(wrap_proc_macro)] + let inner = match group { + imp::Group::Compiler(group) => DelimSpanEnum::Compiler { + join: group.span(), + open: group.span_open(), + close: group.span_close(), + }, + imp::Group::Fallback(group) => DelimSpanEnum::Fallback(group.span()), + }; + + #[cfg(not(wrap_proc_macro))] + let inner = DelimSpanEnum::Fallback(group.span()); + + DelimSpan { + inner, + _marker: MARKER, + } + } + + /// Returns a span covering the entire delimited group. + pub fn join(&self) -> Span { + match &self.inner { + #[cfg(wrap_proc_macro)] + DelimSpanEnum::Compiler { join, .. } => Span::_new(imp::Span::Compiler(*join)), + DelimSpanEnum::Fallback(span) => Span::_new_fallback(*span), + } + } + + /// Returns a span for the opening punctuation of the group only. + pub fn open(&self) -> Span { + match &self.inner { + #[cfg(wrap_proc_macro)] + DelimSpanEnum::Compiler { open, .. } => Span::_new(imp::Span::Compiler(*open)), + DelimSpanEnum::Fallback(span) => Span::_new_fallback(span.first_byte()), + } + } + + /// Returns a span for the closing punctuation of the group only. + pub fn close(&self) -> Span { + match &self.inner { + #[cfg(wrap_proc_macro)] + DelimSpanEnum::Compiler { close, .. } => Span::_new(imp::Span::Compiler(*close)), + DelimSpanEnum::Fallback(span) => Span::_new_fallback(span.last_byte()), + } + } +} + +impl Debug for DelimSpan { + fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { + Debug::fmt(&self.join(), f) + } +} diff --git a/rust/hw/char/pl011/vendor/proc-macro2/src/fallback.rs b/rust/hw/char/pl011/vendor/proc-macro2/src/fallback.rs new file mode 100644 index 0000000000..2d1c991997 --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro2/src/fallback.rs @@ -0,0 +1,1226 @@ +#[cfg(span_locations)] +use crate::location::LineColumn; +use crate::parse::{self, Cursor}; +use crate::rcvec::{RcVec, RcVecBuilder, RcVecIntoIter, RcVecMut}; +use crate::{Delimiter, Spacing, TokenTree}; +#[cfg(all(span_locations, not(fuzzing)))] +use alloc::collections::BTreeMap; +#[cfg(all(span_locations, not(fuzzing)))] +use core::cell::RefCell; +#[cfg(span_locations)] +use core::cmp; +use core::fmt::{self, Debug, Display, Write}; +use core::mem::ManuallyDrop; +#[cfg(span_locations)] +use core::ops::Range; +use core::ops::RangeBounds; +use core::ptr; +use core::str::{self, FromStr}; +use std::ffi::CStr; +#[cfg(procmacro2_semver_exempt)] +use std::path::PathBuf; + +/// Force use of proc-macro2's fallback implementation of the API for now, even +/// if the compiler's implementation is available. +pub fn force() { + #[cfg(wrap_proc_macro)] + crate::detection::force_fallback(); +} + +/// Resume using the compiler's implementation of the proc macro API if it is +/// available. 
+pub fn unforce() { + #[cfg(wrap_proc_macro)] + crate::detection::unforce_fallback(); +} + +#[derive(Clone)] +pub(crate) struct TokenStream { + inner: RcVec, +} + +#[derive(Debug)] +pub(crate) struct LexError { + pub(crate) span: Span, +} + +impl LexError { + pub(crate) fn span(&self) -> Span { + self.span + } + + pub(crate) fn call_site() -> Self { + LexError { + span: Span::call_site(), + } + } +} + +impl TokenStream { + pub fn new() -> Self { + TokenStream { + inner: RcVecBuilder::new().build(), + } + } + + pub fn is_empty(&self) -> bool { + self.inner.len() == 0 + } + + fn take_inner(self) -> RcVecBuilder { + let nodrop = ManuallyDrop::new(self); + unsafe { ptr::read(&nodrop.inner) }.make_owned() + } +} + +fn push_token_from_proc_macro(mut vec: RcVecMut, token: TokenTree) { + // https://github.com/dtolnay/proc-macro2/issues/235 + match token { + TokenTree::Literal(crate::Literal { + #[cfg(wrap_proc_macro)] + inner: crate::imp::Literal::Fallback(literal), + #[cfg(not(wrap_proc_macro))] + inner: literal, + .. + }) if literal.repr.starts_with('-') => { + push_negative_literal(vec, literal); + } + _ => vec.push(token), + } + + #[cold] + fn push_negative_literal(mut vec: RcVecMut, mut literal: Literal) { + literal.repr.remove(0); + let mut punct = crate::Punct::new('-', Spacing::Alone); + punct.set_span(crate::Span::_new_fallback(literal.span)); + vec.push(TokenTree::Punct(punct)); + vec.push(TokenTree::Literal(crate::Literal::_new_fallback(literal))); + } +} + +// Nonrecursive to prevent stack overflow. +impl Drop for TokenStream { + fn drop(&mut self) { + let mut inner = match self.inner.get_mut() { + Some(inner) => inner, + None => return, + }; + while let Some(token) = inner.pop() { + let group = match token { + TokenTree::Group(group) => group.inner, + _ => continue, + }; + #[cfg(wrap_proc_macro)] + let group = match group { + crate::imp::Group::Fallback(group) => group, + crate::imp::Group::Compiler(_) => continue, + }; + inner.extend(group.stream.take_inner()); + } + } +} + +pub(crate) struct TokenStreamBuilder { + inner: RcVecBuilder, +} + +impl TokenStreamBuilder { + pub fn new() -> Self { + TokenStreamBuilder { + inner: RcVecBuilder::new(), + } + } + + pub fn with_capacity(cap: usize) -> Self { + TokenStreamBuilder { + inner: RcVecBuilder::with_capacity(cap), + } + } + + pub fn push_token_from_parser(&mut self, tt: TokenTree) { + self.inner.push(tt); + } + + pub fn build(self) -> TokenStream { + TokenStream { + inner: self.inner.build(), + } + } +} + +#[cfg(span_locations)] +fn get_cursor(src: &str) -> Cursor { + #[cfg(fuzzing)] + return Cursor { rest: src, off: 1 }; + + // Create a dummy file & add it to the source map + #[cfg(not(fuzzing))] + SOURCE_MAP.with(|sm| { + let mut sm = sm.borrow_mut(); + let span = sm.add_file(src); + Cursor { + rest: src, + off: span.lo, + } + }) +} + +#[cfg(not(span_locations))] +fn get_cursor(src: &str) -> Cursor { + Cursor { rest: src } +} + +impl FromStr for TokenStream { + type Err = LexError; + + fn from_str(src: &str) -> Result { + // Create a dummy file & add it to the source map + let mut cursor = get_cursor(src); + + // Strip a byte order mark if present + const BYTE_ORDER_MARK: &str = "\u{feff}"; + if cursor.starts_with(BYTE_ORDER_MARK) { + cursor = cursor.advance(BYTE_ORDER_MARK.len()); + } + + parse::token_stream(cursor) + } +} + +impl Display for LexError { + fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { + f.write_str("cannot parse string into token stream") + } +} + +impl Display for TokenStream { + fn fmt(&self, f: &mut 
fmt::Formatter) -> fmt::Result { + let mut joint = false; + for (i, tt) in self.inner.iter().enumerate() { + if i != 0 && !joint { + write!(f, " ")?; + } + joint = false; + match tt { + TokenTree::Group(tt) => Display::fmt(tt, f), + TokenTree::Ident(tt) => Display::fmt(tt, f), + TokenTree::Punct(tt) => { + joint = tt.spacing() == Spacing::Joint; + Display::fmt(tt, f) + } + TokenTree::Literal(tt) => Display::fmt(tt, f), + }?; + } + + Ok(()) + } +} + +impl Debug for TokenStream { + fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { + f.write_str("TokenStream ")?; + f.debug_list().entries(self.clone()).finish() + } +} + +#[cfg(feature = "proc-macro")] +impl From for TokenStream { + fn from(inner: proc_macro::TokenStream) -> Self { + inner + .to_string() + .parse() + .expect("compiler token stream parse failed") + } +} + +#[cfg(feature = "proc-macro")] +impl From for proc_macro::TokenStream { + fn from(inner: TokenStream) -> Self { + inner + .to_string() + .parse() + .expect("failed to parse to compiler tokens") + } +} + +impl From for TokenStream { + fn from(tree: TokenTree) -> Self { + let mut stream = RcVecBuilder::new(); + push_token_from_proc_macro(stream.as_mut(), tree); + TokenStream { + inner: stream.build(), + } + } +} + +impl FromIterator for TokenStream { + fn from_iter>(tokens: I) -> Self { + let mut stream = TokenStream::new(); + stream.extend(tokens); + stream + } +} + +impl FromIterator for TokenStream { + fn from_iter>(streams: I) -> Self { + let mut v = RcVecBuilder::new(); + + for stream in streams { + v.extend(stream.take_inner()); + } + + TokenStream { inner: v.build() } + } +} + +impl Extend for TokenStream { + fn extend>(&mut self, tokens: I) { + let mut vec = self.inner.make_mut(); + tokens + .into_iter() + .for_each(|token| push_token_from_proc_macro(vec.as_mut(), token)); + } +} + +impl Extend for TokenStream { + fn extend>(&mut self, streams: I) { + self.inner.make_mut().extend(streams.into_iter().flatten()); + } +} + +pub(crate) type TokenTreeIter = RcVecIntoIter; + +impl IntoIterator for TokenStream { + type Item = TokenTree; + type IntoIter = TokenTreeIter; + + fn into_iter(self) -> TokenTreeIter { + self.take_inner().into_iter() + } +} + +#[cfg(procmacro2_semver_exempt)] +#[derive(Clone, PartialEq, Eq)] +pub(crate) struct SourceFile { + path: PathBuf, +} + +#[cfg(procmacro2_semver_exempt)] +impl SourceFile { + /// Get the path to this source file as a string. + pub fn path(&self) -> PathBuf { + self.path.clone() + } + + pub fn is_real(&self) -> bool { + false + } +} + +#[cfg(procmacro2_semver_exempt)] +impl Debug for SourceFile { + fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { + f.debug_struct("SourceFile") + .field("path", &self.path()) + .field("is_real", &self.is_real()) + .finish() + } +} + +#[cfg(all(span_locations, not(fuzzing)))] +thread_local! { + static SOURCE_MAP: RefCell = RefCell::new(SourceMap { + // Start with a single dummy file which all call_site() and def_site() + // spans reference. 
+ files: vec![FileInfo { + source_text: String::new(), + span: Span { lo: 0, hi: 0 }, + lines: vec![0], + char_index_to_byte_offset: BTreeMap::new(), + }], + }); +} + +#[cfg(span_locations)] +pub(crate) fn invalidate_current_thread_spans() { + #[cfg(not(fuzzing))] + SOURCE_MAP.with(|sm| sm.borrow_mut().files.truncate(1)); +} + +#[cfg(all(span_locations, not(fuzzing)))] +struct FileInfo { + source_text: String, + span: Span, + lines: Vec, + char_index_to_byte_offset: BTreeMap, +} + +#[cfg(all(span_locations, not(fuzzing)))] +impl FileInfo { + fn offset_line_column(&self, offset: usize) -> LineColumn { + assert!(self.span_within(Span { + lo: offset as u32, + hi: offset as u32, + })); + let offset = offset - self.span.lo as usize; + match self.lines.binary_search(&offset) { + Ok(found) => LineColumn { + line: found + 1, + column: 0, + }, + Err(idx) => LineColumn { + line: idx, + column: offset - self.lines[idx - 1], + }, + } + } + + fn span_within(&self, span: Span) -> bool { + span.lo >= self.span.lo && span.hi <= self.span.hi + } + + fn byte_range(&mut self, span: Span) -> Range { + let lo_char = (span.lo - self.span.lo) as usize; + + // Look up offset of the largest already-computed char index that is + // less than or equal to the current requested one. We resume counting + // chars from that point. + let (&last_char_index, &last_byte_offset) = self + .char_index_to_byte_offset + .range(..=lo_char) + .next_back() + .unwrap_or((&0, &0)); + + let lo_byte = if last_char_index == lo_char { + last_byte_offset + } else { + let total_byte_offset = match self.source_text[last_byte_offset..] + .char_indices() + .nth(lo_char - last_char_index) + { + Some((additional_offset, _ch)) => last_byte_offset + additional_offset, + None => self.source_text.len(), + }; + self.char_index_to_byte_offset + .insert(lo_char, total_byte_offset); + total_byte_offset + }; + + let trunc_lo = &self.source_text[lo_byte..]; + let char_len = (span.hi - span.lo) as usize; + lo_byte..match trunc_lo.char_indices().nth(char_len) { + Some((offset, _ch)) => lo_byte + offset, + None => self.source_text.len(), + } + } + + fn source_text(&mut self, span: Span) -> String { + let byte_range = self.byte_range(span); + self.source_text[byte_range].to_owned() + } +} + +/// Computes the offsets of each line in the given source string +/// and the total number of characters +#[cfg(all(span_locations, not(fuzzing)))] +fn lines_offsets(s: &str) -> (usize, Vec) { + let mut lines = vec![0]; + let mut total = 0; + + for ch in s.chars() { + total += 1; + if ch == '\n' { + lines.push(total); + } + } + + (total, lines) +} + +#[cfg(all(span_locations, not(fuzzing)))] +struct SourceMap { + files: Vec, +} + +#[cfg(all(span_locations, not(fuzzing)))] +impl SourceMap { + fn next_start_pos(&self) -> u32 { + // Add 1 so there's always space between files. + // + // We'll always have at least 1 file, as we initialize our files list + // with a dummy file. + self.files.last().unwrap().span.hi + 1 + } + + fn add_file(&mut self, src: &str) -> Span { + let (len, lines) = lines_offsets(src); + let lo = self.next_start_pos(); + let span = Span { + lo, + hi: lo + (len as u32), + }; + + self.files.push(FileInfo { + source_text: src.to_owned(), + span, + lines, + // Populated lazily by source_text(). 
+ char_index_to_byte_offset: BTreeMap::new(), + }); + + span + } + + #[cfg(procmacro2_semver_exempt)] + fn filepath(&self, span: Span) -> PathBuf { + for (i, file) in self.files.iter().enumerate() { + if file.span_within(span) { + return PathBuf::from(if i == 0 { + "".to_owned() + } else { + format!("", i) + }); + } + } + unreachable!("Invalid span with no related FileInfo!"); + } + + fn fileinfo(&self, span: Span) -> &FileInfo { + for file in &self.files { + if file.span_within(span) { + return file; + } + } + unreachable!("Invalid span with no related FileInfo!"); + } + + fn fileinfo_mut(&mut self, span: Span) -> &mut FileInfo { + for file in &mut self.files { + if file.span_within(span) { + return file; + } + } + unreachable!("Invalid span with no related FileInfo!"); + } +} + +#[derive(Clone, Copy, PartialEq, Eq)] +pub(crate) struct Span { + #[cfg(span_locations)] + pub(crate) lo: u32, + #[cfg(span_locations)] + pub(crate) hi: u32, +} + +impl Span { + #[cfg(not(span_locations))] + pub fn call_site() -> Self { + Span {} + } + + #[cfg(span_locations)] + pub fn call_site() -> Self { + Span { lo: 0, hi: 0 } + } + + pub fn mixed_site() -> Self { + Span::call_site() + } + + #[cfg(procmacro2_semver_exempt)] + pub fn def_site() -> Self { + Span::call_site() + } + + pub fn resolved_at(&self, _other: Span) -> Span { + // Stable spans consist only of line/column information, so + // `resolved_at` and `located_at` only select which span the + // caller wants line/column information from. + *self + } + + pub fn located_at(&self, other: Span) -> Span { + other + } + + #[cfg(procmacro2_semver_exempt)] + pub fn source_file(&self) -> SourceFile { + #[cfg(fuzzing)] + return SourceFile { + path: PathBuf::from(""), + }; + + #[cfg(not(fuzzing))] + SOURCE_MAP.with(|sm| { + let sm = sm.borrow(); + let path = sm.filepath(*self); + SourceFile { path } + }) + } + + #[cfg(span_locations)] + pub fn byte_range(&self) -> Range { + #[cfg(fuzzing)] + return 0..0; + + #[cfg(not(fuzzing))] + { + if self.is_call_site() { + 0..0 + } else { + SOURCE_MAP.with(|sm| sm.borrow_mut().fileinfo_mut(*self).byte_range(*self)) + } + } + } + + #[cfg(span_locations)] + pub fn start(&self) -> LineColumn { + #[cfg(fuzzing)] + return LineColumn { line: 0, column: 0 }; + + #[cfg(not(fuzzing))] + SOURCE_MAP.with(|sm| { + let sm = sm.borrow(); + let fi = sm.fileinfo(*self); + fi.offset_line_column(self.lo as usize) + }) + } + + #[cfg(span_locations)] + pub fn end(&self) -> LineColumn { + #[cfg(fuzzing)] + return LineColumn { line: 0, column: 0 }; + + #[cfg(not(fuzzing))] + SOURCE_MAP.with(|sm| { + let sm = sm.borrow(); + let fi = sm.fileinfo(*self); + fi.offset_line_column(self.hi as usize) + }) + } + + #[cfg(not(span_locations))] + pub fn join(&self, _other: Span) -> Option { + Some(Span {}) + } + + #[cfg(span_locations)] + pub fn join(&self, other: Span) -> Option { + #[cfg(fuzzing)] + return { + let _ = other; + None + }; + + #[cfg(not(fuzzing))] + SOURCE_MAP.with(|sm| { + let sm = sm.borrow(); + // If `other` is not within the same FileInfo as us, return None. 
+ if !sm.fileinfo(*self).span_within(other) { + return None; + } + Some(Span { + lo: cmp::min(self.lo, other.lo), + hi: cmp::max(self.hi, other.hi), + }) + }) + } + + #[cfg(not(span_locations))] + pub fn source_text(&self) -> Option { + None + } + + #[cfg(span_locations)] + pub fn source_text(&self) -> Option { + #[cfg(fuzzing)] + return None; + + #[cfg(not(fuzzing))] + { + if self.is_call_site() { + None + } else { + Some(SOURCE_MAP.with(|sm| sm.borrow_mut().fileinfo_mut(*self).source_text(*self))) + } + } + } + + #[cfg(not(span_locations))] + pub(crate) fn first_byte(self) -> Self { + self + } + + #[cfg(span_locations)] + pub(crate) fn first_byte(self) -> Self { + Span { + lo: self.lo, + hi: cmp::min(self.lo.saturating_add(1), self.hi), + } + } + + #[cfg(not(span_locations))] + pub(crate) fn last_byte(self) -> Self { + self + } + + #[cfg(span_locations)] + pub(crate) fn last_byte(self) -> Self { + Span { + lo: cmp::max(self.hi.saturating_sub(1), self.lo), + hi: self.hi, + } + } + + #[cfg(span_locations)] + fn is_call_site(&self) -> bool { + self.lo == 0 && self.hi == 0 + } +} + +impl Debug for Span { + fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { + #[cfg(span_locations)] + return write!(f, "bytes({}..{})", self.lo, self.hi); + + #[cfg(not(span_locations))] + write!(f, "Span") + } +} + +pub(crate) fn debug_span_field_if_nontrivial(debug: &mut fmt::DebugStruct, span: Span) { + #[cfg(span_locations)] + { + if span.is_call_site() { + return; + } + } + + if cfg!(span_locations) { + debug.field("span", &span); + } +} + +#[derive(Clone)] +pub(crate) struct Group { + delimiter: Delimiter, + stream: TokenStream, + span: Span, +} + +impl Group { + pub fn new(delimiter: Delimiter, stream: TokenStream) -> Self { + Group { + delimiter, + stream, + span: Span::call_site(), + } + } + + pub fn delimiter(&self) -> Delimiter { + self.delimiter + } + + pub fn stream(&self) -> TokenStream { + self.stream.clone() + } + + pub fn span(&self) -> Span { + self.span + } + + pub fn span_open(&self) -> Span { + self.span.first_byte() + } + + pub fn span_close(&self) -> Span { + self.span.last_byte() + } + + pub fn set_span(&mut self, span: Span) { + self.span = span; + } +} + +impl Display for Group { + // We attempt to match libproc_macro's formatting. + // Empty parens: () + // Nonempty parens: (...) + // Empty brackets: [] + // Nonempty brackets: [...] + // Empty braces: { } + // Nonempty braces: { ... 
} + fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { + let (open, close) = match self.delimiter { + Delimiter::Parenthesis => ("(", ")"), + Delimiter::Brace => ("{ ", "}"), + Delimiter::Bracket => ("[", "]"), + Delimiter::None => ("", ""), + }; + + f.write_str(open)?; + Display::fmt(&self.stream, f)?; + if self.delimiter == Delimiter::Brace && !self.stream.inner.is_empty() { + f.write_str(" ")?; + } + f.write_str(close)?; + + Ok(()) + } +} + +impl Debug for Group { + fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result { + let mut debug = fmt.debug_struct("Group"); + debug.field("delimiter", &self.delimiter); + debug.field("stream", &self.stream); + debug_span_field_if_nontrivial(&mut debug, self.span); + debug.finish() + } +} + +#[derive(Clone)] +pub(crate) struct Ident { + sym: Box, + span: Span, + raw: bool, +} + +impl Ident { + #[track_caller] + pub fn new_checked(string: &str, span: Span) -> Self { + validate_ident(string); + Ident::new_unchecked(string, span) + } + + pub fn new_unchecked(string: &str, span: Span) -> Self { + Ident { + sym: Box::from(string), + span, + raw: false, + } + } + + #[track_caller] + pub fn new_raw_checked(string: &str, span: Span) -> Self { + validate_ident_raw(string); + Ident::new_raw_unchecked(string, span) + } + + pub fn new_raw_unchecked(string: &str, span: Span) -> Self { + Ident { + sym: Box::from(string), + span, + raw: true, + } + } + + pub fn span(&self) -> Span { + self.span + } + + pub fn set_span(&mut self, span: Span) { + self.span = span; + } +} + +pub(crate) fn is_ident_start(c: char) -> bool { + c == '_' || unicode_ident::is_xid_start(c) +} + +pub(crate) fn is_ident_continue(c: char) -> bool { + unicode_ident::is_xid_continue(c) +} + +#[track_caller] +fn validate_ident(string: &str) { + if string.is_empty() { + panic!("Ident is not allowed to be empty; use Option"); + } + + if string.bytes().all(|digit| b'0' <= digit && digit <= b'9') { + panic!("Ident cannot be a number; use Literal instead"); + } + + fn ident_ok(string: &str) -> bool { + let mut chars = string.chars(); + let first = chars.next().unwrap(); + if !is_ident_start(first) { + return false; + } + for ch in chars { + if !is_ident_continue(ch) { + return false; + } + } + true + } + + if !ident_ok(string) { + panic!("{:?} is not a valid Ident", string); + } +} + +#[track_caller] +fn validate_ident_raw(string: &str) { + validate_ident(string); + + match string { + "_" | "super" | "self" | "Self" | "crate" => { + panic!("`r#{}` cannot be a raw identifier", string); + } + _ => {} + } +} + +impl PartialEq for Ident { + fn eq(&self, other: &Ident) -> bool { + self.sym == other.sym && self.raw == other.raw + } +} + +impl PartialEq for Ident +where + T: ?Sized + AsRef, +{ + fn eq(&self, other: &T) -> bool { + let other = other.as_ref(); + if self.raw { + other.starts_with("r#") && *self.sym == other[2..] 
+ } else { + *self.sym == *other + } + } +} + +impl Display for Ident { + fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { + if self.raw { + f.write_str("r#")?; + } + Display::fmt(&self.sym, f) + } +} + +#[allow(clippy::missing_fields_in_debug)] +impl Debug for Ident { + // Ident(proc_macro), Ident(r#union) + #[cfg(not(span_locations))] + fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { + let mut debug = f.debug_tuple("Ident"); + debug.field(&format_args!("{}", self)); + debug.finish() + } + + // Ident { + // sym: proc_macro, + // span: bytes(128..138) + // } + #[cfg(span_locations)] + fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { + let mut debug = f.debug_struct("Ident"); + debug.field("sym", &format_args!("{}", self)); + debug_span_field_if_nontrivial(&mut debug, self.span); + debug.finish() + } +} + +#[derive(Clone)] +pub(crate) struct Literal { + pub(crate) repr: String, + span: Span, +} + +macro_rules! suffixed_numbers { + ($($name:ident => $kind:ident,)*) => ($( + pub fn $name(n: $kind) -> Literal { + Literal::_new(format!(concat!("{}", stringify!($kind)), n)) + } + )*) +} + +macro_rules! unsuffixed_numbers { + ($($name:ident => $kind:ident,)*) => ($( + pub fn $name(n: $kind) -> Literal { + Literal::_new(n.to_string()) + } + )*) +} + +impl Literal { + pub(crate) fn _new(repr: String) -> Self { + Literal { + repr, + span: Span::call_site(), + } + } + + pub(crate) unsafe fn from_str_unchecked(repr: &str) -> Self { + Literal::_new(repr.to_owned()) + } + + suffixed_numbers! { + u8_suffixed => u8, + u16_suffixed => u16, + u32_suffixed => u32, + u64_suffixed => u64, + u128_suffixed => u128, + usize_suffixed => usize, + i8_suffixed => i8, + i16_suffixed => i16, + i32_suffixed => i32, + i64_suffixed => i64, + i128_suffixed => i128, + isize_suffixed => isize, + + f32_suffixed => f32, + f64_suffixed => f64, + } + + unsuffixed_numbers! { + u8_unsuffixed => u8, + u16_unsuffixed => u16, + u32_unsuffixed => u32, + u64_unsuffixed => u64, + u128_unsuffixed => u128, + usize_unsuffixed => usize, + i8_unsuffixed => i8, + i16_unsuffixed => i16, + i32_unsuffixed => i32, + i64_unsuffixed => i64, + i128_unsuffixed => i128, + isize_unsuffixed => isize, + } + + pub fn f32_unsuffixed(f: f32) -> Literal { + let mut s = f.to_string(); + if !s.contains('.') { + s.push_str(".0"); + } + Literal::_new(s) + } + + pub fn f64_unsuffixed(f: f64) -> Literal { + let mut s = f.to_string(); + if !s.contains('.') { + s.push_str(".0"); + } + Literal::_new(s) + } + + pub fn string(string: &str) -> Literal { + let mut repr = String::with_capacity(string.len() + 2); + repr.push('"'); + escape_utf8(string, &mut repr); + repr.push('"'); + Literal::_new(repr) + } + + pub fn character(ch: char) -> Literal { + let mut repr = String::new(); + repr.push('\''); + if ch == '"' { + // escape_debug turns this into '\"' which is unnecessary. 
+ repr.push(ch); + } else { + repr.extend(ch.escape_debug()); + } + repr.push('\''); + Literal::_new(repr) + } + + pub fn byte_character(byte: u8) -> Literal { + let mut repr = "b'".to_string(); + #[allow(clippy::match_overlapping_arm)] + match byte { + b'\0' => repr.push_str(r"\0"), + b'\t' => repr.push_str(r"\t"), + b'\n' => repr.push_str(r"\n"), + b'\r' => repr.push_str(r"\r"), + b'\'' => repr.push_str(r"\'"), + b'\\' => repr.push_str(r"\\"), + b'\x20'..=b'\x7E' => repr.push(byte as char), + _ => { + let _ = write!(repr, r"\x{:02X}", byte); + } + } + repr.push('\''); + Literal::_new(repr) + } + + pub fn byte_string(bytes: &[u8]) -> Literal { + let mut repr = "b\"".to_string(); + let mut bytes = bytes.iter(); + while let Some(&b) = bytes.next() { + #[allow(clippy::match_overlapping_arm)] + match b { + b'\0' => repr.push_str(match bytes.as_slice().first() { + // circumvent clippy::octal_escapes lint + Some(b'0'..=b'7') => r"\x00", + _ => r"\0", + }), + b'\t' => repr.push_str(r"\t"), + b'\n' => repr.push_str(r"\n"), + b'\r' => repr.push_str(r"\r"), + b'"' => repr.push_str("\\\""), + b'\\' => repr.push_str(r"\\"), + b'\x20'..=b'\x7E' => repr.push(b as char), + _ => { + let _ = write!(repr, r"\x{:02X}", b); + } + } + } + repr.push('"'); + Literal::_new(repr) + } + + pub fn c_string(string: &CStr) -> Literal { + let mut repr = "c\"".to_string(); + let mut bytes = string.to_bytes(); + while !bytes.is_empty() { + let (valid, invalid) = match str::from_utf8(bytes) { + Ok(all_valid) => { + bytes = b""; + (all_valid, bytes) + } + Err(utf8_error) => { + let (valid, rest) = bytes.split_at(utf8_error.valid_up_to()); + let valid = str::from_utf8(valid).unwrap(); + let invalid = utf8_error + .error_len() + .map_or(rest, |error_len| &rest[..error_len]); + bytes = &bytes[valid.len() + invalid.len()..]; + (valid, invalid) + } + }; + escape_utf8(valid, &mut repr); + for &byte in invalid { + let _ = write!(repr, r"\x{:02X}", byte); + } + } + repr.push('"'); + Literal::_new(repr) + } + + pub fn span(&self) -> Span { + self.span + } + + pub fn set_span(&mut self, span: Span) { + self.span = span; + } + + pub fn subspan>(&self, range: R) -> Option { + #[cfg(not(span_locations))] + { + let _ = range; + None + } + + #[cfg(span_locations)] + { + use core::ops::Bound; + + let lo = match range.start_bound() { + Bound::Included(start) => { + let start = u32::try_from(*start).ok()?; + self.span.lo.checked_add(start)? + } + Bound::Excluded(start) => { + let start = u32::try_from(*start).ok()?; + self.span.lo.checked_add(start)?.checked_add(1)? + } + Bound::Unbounded => self.span.lo, + }; + let hi = match range.end_bound() { + Bound::Included(end) => { + let end = u32::try_from(*end).ok()?; + self.span.lo.checked_add(end)?.checked_add(1)? + } + Bound::Excluded(end) => { + let end = u32::try_from(*end).ok()?; + self.span.lo.checked_add(end)? 
+ } + Bound::Unbounded => self.span.hi, + }; + if lo <= hi && hi <= self.span.hi { + Some(Span { lo, hi }) + } else { + None + } + } + } +} + +impl FromStr for Literal { + type Err = LexError; + + fn from_str(repr: &str) -> Result { + let mut cursor = get_cursor(repr); + #[cfg(span_locations)] + let lo = cursor.off; + + let negative = cursor.starts_with_char('-'); + if negative { + cursor = cursor.advance(1); + if !cursor.starts_with_fn(|ch| ch.is_ascii_digit()) { + return Err(LexError::call_site()); + } + } + + if let Ok((rest, mut literal)) = parse::literal(cursor) { + if rest.is_empty() { + if negative { + literal.repr.insert(0, '-'); + } + literal.span = Span { + #[cfg(span_locations)] + lo, + #[cfg(span_locations)] + hi: rest.off, + }; + return Ok(literal); + } + } + Err(LexError::call_site()) + } +} + +impl Display for Literal { + fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { + Display::fmt(&self.repr, f) + } +} + +impl Debug for Literal { + fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result { + let mut debug = fmt.debug_struct("Literal"); + debug.field("lit", &format_args!("{}", self.repr)); + debug_span_field_if_nontrivial(&mut debug, self.span); + debug.finish() + } +} + +fn escape_utf8(string: &str, repr: &mut String) { + let mut chars = string.chars(); + while let Some(ch) = chars.next() { + if ch == '\0' { + repr.push_str( + if chars + .as_str() + .starts_with(|next| '0' <= next && next <= '7') + { + // circumvent clippy::octal_escapes lint + r"\x00" + } else { + r"\0" + }, + ); + } else if ch == '\'' { + // escape_debug turns this into "\'" which is unnecessary. + repr.push(ch); + } else { + repr.extend(ch.escape_debug()); + } + } +} diff --git a/rust/hw/char/pl011/vendor/proc-macro2/src/lib.rs b/rust/hw/char/pl011/vendor/proc-macro2/src/lib.rs new file mode 100644 index 0000000000..d7bfa50f4f --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro2/src/lib.rs @@ -0,0 +1,1369 @@ +//! [![github]](https://github.com/dtolnay/proc-macro2) [![crates-io]](https://crates.io/crates/proc-macro2) [![docs-rs]](crate) +//! +//! [github]: https://img.shields.io/badge/github-8da0cb?style=for-the-badge&labelColor=555555&logo=github +//! [crates-io]: https://img.shields.io/badge/crates.io-fc8d62?style=for-the-badge&labelColor=555555&logo=rust +//! [docs-rs]: https://img.shields.io/badge/docs.rs-66c2a5?style=for-the-badge&labelColor=555555&logo=docs.rs +//! +//!
+//! +//! A wrapper around the procedural macro API of the compiler's [`proc_macro`] +//! crate. This library serves two purposes: +//! +//! [`proc_macro`]: https://doc.rust-lang.org/proc_macro/ +//! +//! - **Bring proc-macro-like functionality to other contexts like build.rs and +//! main.rs.** Types from `proc_macro` are entirely specific to procedural +//! macros and cannot ever exist in code outside of a procedural macro. +//! Meanwhile `proc_macro2` types may exist anywhere including non-macro code. +//! By developing foundational libraries like [syn] and [quote] against +//! `proc_macro2` rather than `proc_macro`, the procedural macro ecosystem +//! becomes easily applicable to many other use cases and we avoid +//! reimplementing non-macro equivalents of those libraries. +//! +//! - **Make procedural macros unit testable.** As a consequence of being +//! specific to procedural macros, nothing that uses `proc_macro` can be +//! executed from a unit test. In order for helper libraries or components of +//! a macro to be testable in isolation, they must be implemented using +//! `proc_macro2`. +//! +//! [syn]: https://github.com/dtolnay/syn +//! [quote]: https://github.com/dtolnay/quote +//! +//! # Usage +//! +//! The skeleton of a typical procedural macro typically looks like this: +//! +//! ``` +//! extern crate proc_macro; +//! +//! # const IGNORE: &str = stringify! { +//! #[proc_macro_derive(MyDerive)] +//! # }; +//! # #[cfg(wrap_proc_macro)] +//! pub fn my_derive(input: proc_macro::TokenStream) -> proc_macro::TokenStream { +//! let input = proc_macro2::TokenStream::from(input); +//! +//! let output: proc_macro2::TokenStream = { +//! /* transform input */ +//! # input +//! }; +//! +//! proc_macro::TokenStream::from(output) +//! } +//! ``` +//! +//! If parsing with [Syn], you'll use [`parse_macro_input!`] instead to +//! propagate parse errors correctly back to the compiler when parsing fails. +//! +//! [`parse_macro_input!`]: https://docs.rs/syn/2.0/syn/macro.parse_macro_input.html +//! +//! # Unstable features +//! +//! The default feature set of proc-macro2 tracks the most recent stable +//! compiler API. Functionality in `proc_macro` that is not yet stable is not +//! exposed by proc-macro2 by default. +//! +//! To opt into the additional APIs available in the most recent nightly +//! compiler, the `procmacro2_semver_exempt` config flag must be passed to +//! rustc. We will polyfill those nightly-only APIs back to Rust 1.56.0. As +//! these are unstable APIs that track the nightly compiler, minor versions of +//! proc-macro2 may make breaking changes to them at any time. +//! +//! ```sh +//! RUSTFLAGS='--cfg procmacro2_semver_exempt' cargo build +//! ``` +//! +//! Note that this must not only be done for your crate, but for any crate that +//! depends on your crate. This infectious nature is intentional, as it serves +//! as a reminder that you are outside of the normal semver guarantees. +//! +//! Semver exempt methods are marked as such in the proc-macro2 documentation. +//! +//! # Thread-Safety +//! +//! Most types in this crate are `!Sync` because the underlying compiler +//! types make use of thread-local memory, meaning they cannot be accessed from +//! a different thread. + +// Proc-macro2 types in rustdoc of other crates get linked to here. 
+#![doc(html_root_url = "https://docs.rs/proc-macro2/1.0.84")] +#![cfg_attr(any(proc_macro_span, super_unstable), feature(proc_macro_span))] +#![cfg_attr(super_unstable, feature(proc_macro_def_site))] +#![cfg_attr(docsrs, feature(doc_cfg))] +#![deny(unsafe_op_in_unsafe_fn)] +#![allow( + clippy::cast_lossless, + clippy::cast_possible_truncation, + clippy::checked_conversions, + clippy::doc_markdown, + clippy::incompatible_msrv, + clippy::items_after_statements, + clippy::iter_without_into_iter, + clippy::let_underscore_untyped, + clippy::manual_assert, + clippy::manual_range_contains, + clippy::missing_safety_doc, + clippy::must_use_candidate, + clippy::needless_doctest_main, + clippy::new_without_default, + clippy::return_self_not_must_use, + clippy::shadow_unrelated, + clippy::trivially_copy_pass_by_ref, + clippy::unnecessary_wraps, + clippy::unused_self, + clippy::used_underscore_binding, + clippy::vec_init_then_push +)] + +#[cfg(all(procmacro2_semver_exempt, wrap_proc_macro, not(super_unstable)))] +compile_error! {"\ + Something is not right. If you've tried to turn on \ + procmacro2_semver_exempt, you need to ensure that it \ + is turned on for the compilation of the proc-macro2 \ + build script as well. +"} + +#[cfg(all( + procmacro2_nightly_testing, + feature = "proc-macro", + not(proc_macro_span) +))] +compile_error! {"\ + Build script probe failed to compile. +"} + +extern crate alloc; + +#[cfg(feature = "proc-macro")] +extern crate proc_macro; + +mod marker; +mod parse; +mod rcvec; + +#[cfg(wrap_proc_macro)] +mod detection; + +// Public for proc_macro2::fallback::force() and unforce(), but those are quite +// a niche use case so we omit it from rustdoc. +#[doc(hidden)] +pub mod fallback; + +pub mod extra; + +#[cfg(not(wrap_proc_macro))] +use crate::fallback as imp; +#[path = "wrapper.rs"] +#[cfg(wrap_proc_macro)] +mod imp; + +#[cfg(span_locations)] +mod location; + +use crate::extra::DelimSpan; +use crate::marker::{ProcMacroAutoTraits, MARKER}; +use core::cmp::Ordering; +use core::fmt::{self, Debug, Display}; +use core::hash::{Hash, Hasher}; +#[cfg(span_locations)] +use core::ops::Range; +use core::ops::RangeBounds; +use core::str::FromStr; +use std::error::Error; +use std::ffi::CStr; +#[cfg(procmacro2_semver_exempt)] +use std::path::PathBuf; + +#[cfg(span_locations)] +#[cfg_attr(docsrs, doc(cfg(feature = "span-locations")))] +pub use crate::location::LineColumn; + +/// An abstract stream of tokens, or more concretely a sequence of token trees. +/// +/// This type provides interfaces for iterating over token trees and for +/// collecting token trees into one stream. +/// +/// Token stream is both the input and output of `#[proc_macro]`, +/// `#[proc_macro_attribute]` and `#[proc_macro_derive]` definitions. +#[derive(Clone)] +pub struct TokenStream { + inner: imp::TokenStream, + _marker: ProcMacroAutoTraits, +} + +/// Error returned from `TokenStream::from_str`. +pub struct LexError { + inner: imp::LexError, + _marker: ProcMacroAutoTraits, +} + +impl TokenStream { + fn _new(inner: imp::TokenStream) -> Self { + TokenStream { + inner, + _marker: MARKER, + } + } + + fn _new_fallback(inner: fallback::TokenStream) -> Self { + TokenStream { + inner: inner.into(), + _marker: MARKER, + } + } + + /// Returns an empty `TokenStream` containing no token trees. + pub fn new() -> Self { + TokenStream::_new(imp::TokenStream::new()) + } + + /// Checks if this `TokenStream` is empty. 
+ pub fn is_empty(&self) -> bool { + self.inner.is_empty() + } +} + +/// `TokenStream::default()` returns an empty stream, +/// i.e. this is equivalent with `TokenStream::new()`. +impl Default for TokenStream { + fn default() -> Self { + TokenStream::new() + } +} + +/// Attempts to break the string into tokens and parse those tokens into a token +/// stream. +/// +/// May fail for a number of reasons, for example, if the string contains +/// unbalanced delimiters or characters not existing in the language. +/// +/// NOTE: Some errors may cause panics instead of returning `LexError`. We +/// reserve the right to change these errors into `LexError`s later. +impl FromStr for TokenStream { + type Err = LexError; + + fn from_str(src: &str) -> Result { + let e = src.parse().map_err(|e| LexError { + inner: e, + _marker: MARKER, + })?; + Ok(TokenStream::_new(e)) + } +} + +#[cfg(feature = "proc-macro")] +#[cfg_attr(docsrs, doc(cfg(feature = "proc-macro")))] +impl From for TokenStream { + fn from(inner: proc_macro::TokenStream) -> Self { + TokenStream::_new(inner.into()) + } +} + +#[cfg(feature = "proc-macro")] +#[cfg_attr(docsrs, doc(cfg(feature = "proc-macro")))] +impl From for proc_macro::TokenStream { + fn from(inner: TokenStream) -> Self { + inner.inner.into() + } +} + +impl From for TokenStream { + fn from(token: TokenTree) -> Self { + TokenStream::_new(imp::TokenStream::from(token)) + } +} + +impl Extend for TokenStream { + fn extend>(&mut self, streams: I) { + self.inner.extend(streams); + } +} + +impl Extend for TokenStream { + fn extend>(&mut self, streams: I) { + self.inner + .extend(streams.into_iter().map(|stream| stream.inner)); + } +} + +/// Collects a number of token trees into a single stream. +impl FromIterator for TokenStream { + fn from_iter>(streams: I) -> Self { + TokenStream::_new(streams.into_iter().collect()) + } +} +impl FromIterator for TokenStream { + fn from_iter>(streams: I) -> Self { + TokenStream::_new(streams.into_iter().map(|i| i.inner).collect()) + } +} + +/// Prints the token stream as a string that is supposed to be losslessly +/// convertible back into the same token stream (modulo spans), except for +/// possibly `TokenTree::Group`s with `Delimiter::None` delimiters and negative +/// numeric literals. +impl Display for TokenStream { + fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { + Display::fmt(&self.inner, f) + } +} + +/// Prints token in a form convenient for debugging. +impl Debug for TokenStream { + fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { + Debug::fmt(&self.inner, f) + } +} + +impl LexError { + pub fn span(&self) -> Span { + Span::_new(self.inner.span()) + } +} + +impl Debug for LexError { + fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { + Debug::fmt(&self.inner, f) + } +} + +impl Display for LexError { + fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { + Display::fmt(&self.inner, f) + } +} + +impl Error for LexError {} + +/// The source file of a given `Span`. +/// +/// This type is semver exempt and not exposed by default. +#[cfg(all(procmacro2_semver_exempt, any(not(wrap_proc_macro), super_unstable)))] +#[cfg_attr(docsrs, doc(cfg(procmacro2_semver_exempt)))] +#[derive(Clone, PartialEq, Eq)] +pub struct SourceFile { + inner: imp::SourceFile, + _marker: ProcMacroAutoTraits, +} + +#[cfg(all(procmacro2_semver_exempt, any(not(wrap_proc_macro), super_unstable)))] +impl SourceFile { + fn _new(inner: imp::SourceFile) -> Self { + SourceFile { + inner, + _marker: MARKER, + } + } + + /// Get the path to this source file. 
+ /// + /// ### Note + /// + /// If the code span associated with this `SourceFile` was generated by an + /// external macro, this may not be an actual path on the filesystem. Use + /// [`is_real`] to check. + /// + /// Also note that even if `is_real` returns `true`, if + /// `--remap-path-prefix` was passed on the command line, the path as given + /// may not actually be valid. + /// + /// [`is_real`]: #method.is_real + pub fn path(&self) -> PathBuf { + self.inner.path() + } + + /// Returns `true` if this source file is a real source file, and not + /// generated by an external macro's expansion. + pub fn is_real(&self) -> bool { + self.inner.is_real() + } +} + +#[cfg(all(procmacro2_semver_exempt, any(not(wrap_proc_macro), super_unstable)))] +impl Debug for SourceFile { + fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { + Debug::fmt(&self.inner, f) + } +} + +/// A region of source code, along with macro expansion information. +#[derive(Copy, Clone)] +pub struct Span { + inner: imp::Span, + _marker: ProcMacroAutoTraits, +} + +impl Span { + fn _new(inner: imp::Span) -> Self { + Span { + inner, + _marker: MARKER, + } + } + + fn _new_fallback(inner: fallback::Span) -> Self { + Span { + inner: inner.into(), + _marker: MARKER, + } + } + + /// The span of the invocation of the current procedural macro. + /// + /// Identifiers created with this span will be resolved as if they were + /// written directly at the macro call location (call-site hygiene) and + /// other code at the macro call site will be able to refer to them as well. + pub fn call_site() -> Self { + Span::_new(imp::Span::call_site()) + } + + /// The span located at the invocation of the procedural macro, but with + /// local variables, labels, and `$crate` resolved at the definition site + /// of the macro. This is the same hygiene behavior as `macro_rules`. + pub fn mixed_site() -> Self { + Span::_new(imp::Span::mixed_site()) + } + + /// A span that resolves at the macro definition site. + /// + /// This method is semver exempt and not exposed by default. + #[cfg(procmacro2_semver_exempt)] + #[cfg_attr(docsrs, doc(cfg(procmacro2_semver_exempt)))] + pub fn def_site() -> Self { + Span::_new(imp::Span::def_site()) + } + + /// Creates a new span with the same line/column information as `self` but + /// that resolves symbols as though it were at `other`. + pub fn resolved_at(&self, other: Span) -> Span { + Span::_new(self.inner.resolved_at(other.inner)) + } + + /// Creates a new span with the same name resolution behavior as `self` but + /// with the line/column information of `other`. + pub fn located_at(&self, other: Span) -> Span { + Span::_new(self.inner.located_at(other.inner)) + } + + /// Convert `proc_macro2::Span` to `proc_macro::Span`. + /// + /// This method is available when building with a nightly compiler, or when + /// building with rustc 1.29+ *without* semver exempt features. + /// + /// # Panics + /// + /// Panics if called from outside of a procedural macro. Unlike + /// `proc_macro2::Span`, the `proc_macro::Span` type can only exist within + /// the context of a procedural macro invocation. + #[cfg(wrap_proc_macro)] + pub fn unwrap(self) -> proc_macro::Span { + self.inner.unwrap() + } + + // Soft deprecated. Please use Span::unwrap. + #[cfg(wrap_proc_macro)] + #[doc(hidden)] + pub fn unstable(self) -> proc_macro::Span { + self.unwrap() + } + + /// The original source file into which this span points. + /// + /// This method is semver exempt and not exposed by default. 
+ #[cfg(all(procmacro2_semver_exempt, any(not(wrap_proc_macro), super_unstable)))] + #[cfg_attr(docsrs, doc(cfg(procmacro2_semver_exempt)))] + pub fn source_file(&self) -> SourceFile { + SourceFile::_new(self.inner.source_file()) + } + + /// Returns the span's byte position range in the source file. + /// + /// This method requires the `"span-locations"` feature to be enabled. + /// + /// When executing in a procedural macro context, the returned range is only + /// accurate if compiled with a nightly toolchain. The stable toolchain does + /// not have this information available. When executing outside of a + /// procedural macro, such as main.rs or build.rs, the byte range is always + /// accurate regardless of toolchain. + #[cfg(span_locations)] + #[cfg_attr(docsrs, doc(cfg(feature = "span-locations")))] + pub fn byte_range(&self) -> Range { + self.inner.byte_range() + } + + /// Get the starting line/column in the source file for this span. + /// + /// This method requires the `"span-locations"` feature to be enabled. + /// + /// When executing in a procedural macro context, the returned line/column + /// are only meaningful if compiled with a nightly toolchain. The stable + /// toolchain does not have this information available. When executing + /// outside of a procedural macro, such as main.rs or build.rs, the + /// line/column are always meaningful regardless of toolchain. + #[cfg(span_locations)] + #[cfg_attr(docsrs, doc(cfg(feature = "span-locations")))] + pub fn start(&self) -> LineColumn { + self.inner.start() + } + + /// Get the ending line/column in the source file for this span. + /// + /// This method requires the `"span-locations"` feature to be enabled. + /// + /// When executing in a procedural macro context, the returned line/column + /// are only meaningful if compiled with a nightly toolchain. The stable + /// toolchain does not have this information available. When executing + /// outside of a procedural macro, such as main.rs or build.rs, the + /// line/column are always meaningful regardless of toolchain. + #[cfg(span_locations)] + #[cfg_attr(docsrs, doc(cfg(feature = "span-locations")))] + pub fn end(&self) -> LineColumn { + self.inner.end() + } + + /// Create a new span encompassing `self` and `other`. + /// + /// Returns `None` if `self` and `other` are from different files. + /// + /// Warning: the underlying [`proc_macro::Span::join`] method is + /// nightly-only. When called from within a procedural macro not using a + /// nightly compiler, this method will always return `None`. + /// + /// [`proc_macro::Span::join`]: https://doc.rust-lang.org/proc_macro/struct.Span.html#method.join + pub fn join(&self, other: Span) -> Option { + self.inner.join(other.inner).map(Span::_new) + } + + /// Compares two spans to see if they're equal. + /// + /// This method is semver exempt and not exposed by default. + #[cfg(procmacro2_semver_exempt)] + #[cfg_attr(docsrs, doc(cfg(procmacro2_semver_exempt)))] + pub fn eq(&self, other: &Span) -> bool { + self.inner.eq(&other.inner) + } + + /// Returns the source text behind a span. This preserves the original + /// source code, including spaces and comments. It only returns a result if + /// the span corresponds to real source code. + /// + /// Note: The observable result of a macro should only rely on the tokens + /// and not on this source text. The result of this function is a best + /// effort to be used for diagnostics only. 
+ pub fn source_text(&self) -> Option { + self.inner.source_text() + } +} + +/// Prints a span in a form convenient for debugging. +impl Debug for Span { + fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { + Debug::fmt(&self.inner, f) + } +} + +/// A single token or a delimited sequence of token trees (e.g. `[1, (), ..]`). +#[derive(Clone)] +pub enum TokenTree { + /// A token stream surrounded by bracket delimiters. + Group(Group), + /// An identifier. + Ident(Ident), + /// A single punctuation character (`+`, `,`, `$`, etc.). + Punct(Punct), + /// A literal character (`'a'`), string (`"hello"`), number (`2.3`), etc. + Literal(Literal), +} + +impl TokenTree { + /// Returns the span of this tree, delegating to the `span` method of + /// the contained token or a delimited stream. + pub fn span(&self) -> Span { + match self { + TokenTree::Group(t) => t.span(), + TokenTree::Ident(t) => t.span(), + TokenTree::Punct(t) => t.span(), + TokenTree::Literal(t) => t.span(), + } + } + + /// Configures the span for *only this token*. + /// + /// Note that if this token is a `Group` then this method will not configure + /// the span of each of the internal tokens, this will simply delegate to + /// the `set_span` method of each variant. + pub fn set_span(&mut self, span: Span) { + match self { + TokenTree::Group(t) => t.set_span(span), + TokenTree::Ident(t) => t.set_span(span), + TokenTree::Punct(t) => t.set_span(span), + TokenTree::Literal(t) => t.set_span(span), + } + } +} + +impl From for TokenTree { + fn from(g: Group) -> Self { + TokenTree::Group(g) + } +} + +impl From for TokenTree { + fn from(g: Ident) -> Self { + TokenTree::Ident(g) + } +} + +impl From for TokenTree { + fn from(g: Punct) -> Self { + TokenTree::Punct(g) + } +} + +impl From for TokenTree { + fn from(g: Literal) -> Self { + TokenTree::Literal(g) + } +} + +/// Prints the token tree as a string that is supposed to be losslessly +/// convertible back into the same token tree (modulo spans), except for +/// possibly `TokenTree::Group`s with `Delimiter::None` delimiters and negative +/// numeric literals. +impl Display for TokenTree { + fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { + match self { + TokenTree::Group(t) => Display::fmt(t, f), + TokenTree::Ident(t) => Display::fmt(t, f), + TokenTree::Punct(t) => Display::fmt(t, f), + TokenTree::Literal(t) => Display::fmt(t, f), + } + } +} + +/// Prints token tree in a form convenient for debugging. +impl Debug for TokenTree { + fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { + // Each of these has the name in the struct type in the derived debug, + // so don't bother with an extra layer of indirection + match self { + TokenTree::Group(t) => Debug::fmt(t, f), + TokenTree::Ident(t) => { + let mut debug = f.debug_struct("Ident"); + debug.field("sym", &format_args!("{}", t)); + imp::debug_span_field_if_nontrivial(&mut debug, t.span().inner); + debug.finish() + } + TokenTree::Punct(t) => Debug::fmt(t, f), + TokenTree::Literal(t) => Debug::fmt(t, f), + } + } +} + +/// A delimited token stream. +/// +/// A `Group` internally contains a `TokenStream` which is surrounded by +/// `Delimiter`s. +#[derive(Clone)] +pub struct Group { + inner: imp::Group, +} + +/// Describes how a sequence of token trees is delimited. +#[derive(Copy, Clone, Debug, Eq, PartialEq)] +pub enum Delimiter { + /// `( ... )` + Parenthesis, + /// `{ ... }` + Brace, + /// `[ ... ]` + Bracket, + /// `∅ ... 
∅` + /// + /// An invisible delimiter, that may, for example, appear around tokens + /// coming from a "macro variable" `$var`. It is important to preserve + /// operator priorities in cases like `$var * 3` where `$var` is `1 + 2`. + /// Invisible delimiters may not survive roundtrip of a token stream through + /// a string. + /// + ///
+ /// + /// Note: rustc currently can ignore the grouping of tokens delimited by `None` in the output + /// of a proc_macro. Only `None`-delimited groups created by a macro_rules macro in the input + /// of a proc_macro macro are preserved, and only in very specific circumstances. + /// Any `None`-delimited groups (re)created by a proc_macro will therefore not preserve + /// operator priorities as indicated above. The other `Delimiter` variants should be used + /// instead in this context. This is a rustc bug. For details, see + /// [rust-lang/rust#67062](https://github.com/rust-lang/rust/issues/67062). + /// + ///
+ None, +} + +impl Group { + fn _new(inner: imp::Group) -> Self { + Group { inner } + } + + fn _new_fallback(inner: fallback::Group) -> Self { + Group { + inner: inner.into(), + } + } + + /// Creates a new `Group` with the given delimiter and token stream. + /// + /// This constructor will set the span for this group to + /// `Span::call_site()`. To change the span you can use the `set_span` + /// method below. + pub fn new(delimiter: Delimiter, stream: TokenStream) -> Self { + Group { + inner: imp::Group::new(delimiter, stream.inner), + } + } + + /// Returns the punctuation used as the delimiter for this group: a set of + /// parentheses, square brackets, or curly braces. + pub fn delimiter(&self) -> Delimiter { + self.inner.delimiter() + } + + /// Returns the `TokenStream` of tokens that are delimited in this `Group`. + /// + /// Note that the returned token stream does not include the delimiter + /// returned above. + pub fn stream(&self) -> TokenStream { + TokenStream::_new(self.inner.stream()) + } + + /// Returns the span for the delimiters of this token stream, spanning the + /// entire `Group`. + /// + /// ```text + /// pub fn span(&self) -> Span { + /// ^^^^^^^ + /// ``` + pub fn span(&self) -> Span { + Span::_new(self.inner.span()) + } + + /// Returns the span pointing to the opening delimiter of this group. + /// + /// ```text + /// pub fn span_open(&self) -> Span { + /// ^ + /// ``` + pub fn span_open(&self) -> Span { + Span::_new(self.inner.span_open()) + } + + /// Returns the span pointing to the closing delimiter of this group. + /// + /// ```text + /// pub fn span_close(&self) -> Span { + /// ^ + /// ``` + pub fn span_close(&self) -> Span { + Span::_new(self.inner.span_close()) + } + + /// Returns an object that holds this group's `span_open()` and + /// `span_close()` together (in a more compact representation than holding + /// those 2 spans individually). + pub fn delim_span(&self) -> DelimSpan { + DelimSpan::new(&self.inner) + } + + /// Configures the span for this `Group`'s delimiters, but not its internal + /// tokens. + /// + /// This method will **not** set the span of all the internal tokens spanned + /// by this group, but rather it will only set the span of the delimiter + /// tokens at the level of the `Group`. + pub fn set_span(&mut self, span: Span) { + self.inner.set_span(span.inner); + } +} + +/// Prints the group as a string that should be losslessly convertible back +/// into the same group (modulo spans), except for possibly `TokenTree::Group`s +/// with `Delimiter::None` delimiters. +impl Display for Group { + fn fmt(&self, formatter: &mut fmt::Formatter) -> fmt::Result { + Display::fmt(&self.inner, formatter) + } +} + +impl Debug for Group { + fn fmt(&self, formatter: &mut fmt::Formatter) -> fmt::Result { + Debug::fmt(&self.inner, formatter) + } +} + +/// A `Punct` is a single punctuation character like `+`, `-` or `#`. +/// +/// Multicharacter operators like `+=` are represented as two instances of +/// `Punct` with different forms of `Spacing` returned. +#[derive(Clone)] +pub struct Punct { + ch: char, + spacing: Spacing, + span: Span, +} + +/// Whether a `Punct` is followed immediately by another `Punct` or followed by +/// another token or whitespace. +#[derive(Copy, Clone, Debug, Eq, PartialEq)] +pub enum Spacing { + /// E.g. `+` is `Alone` in `+ =`, `+ident` or `+()`. + Alone, + /// E.g. `+` is `Joint` in `+=` or `'` is `Joint` in `'#`. + /// + /// Additionally, single quote `'` can join with identifiers to form + /// lifetimes `'ident`. 
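+    // Editor's note, illustration only, not part of upstream proc-macro2:
+    // tokenizing `a += 1` yields Punct('+', Spacing::Joint) immediately
+    // followed by Punct('=', Spacing::Alone), while `a + = 1` yields two
+    // `Alone` puncts, so the `+=` operator is never formed.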
+ Joint, +} + +impl Punct { + /// Creates a new `Punct` from the given character and spacing. + /// + /// The `ch` argument must be a valid punctuation character permitted by the + /// language, otherwise the function will panic. + /// + /// The returned `Punct` will have the default span of `Span::call_site()` + /// which can be further configured with the `set_span` method below. + pub fn new(ch: char, spacing: Spacing) -> Self { + Punct { + ch, + spacing, + span: Span::call_site(), + } + } + + /// Returns the value of this punctuation character as `char`. + pub fn as_char(&self) -> char { + self.ch + } + + /// Returns the spacing of this punctuation character, indicating whether + /// it's immediately followed by another `Punct` in the token stream, so + /// they can potentially be combined into a multicharacter operator + /// (`Joint`), or it's followed by some other token or whitespace (`Alone`) + /// so the operator has certainly ended. + pub fn spacing(&self) -> Spacing { + self.spacing + } + + /// Returns the span for this punctuation character. + pub fn span(&self) -> Span { + self.span + } + + /// Configure the span for this punctuation character. + pub fn set_span(&mut self, span: Span) { + self.span = span; + } +} + +/// Prints the punctuation character as a string that should be losslessly +/// convertible back into the same character. +impl Display for Punct { + fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { + Display::fmt(&self.ch, f) + } +} + +impl Debug for Punct { + fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result { + let mut debug = fmt.debug_struct("Punct"); + debug.field("char", &self.ch); + debug.field("spacing", &self.spacing); + imp::debug_span_field_if_nontrivial(&mut debug, self.span.inner); + debug.finish() + } +} + +/// A word of Rust code, which may be a keyword or legal variable name. +/// +/// An identifier consists of at least one Unicode code point, the first of +/// which has the XID_Start property and the rest of which have the XID_Continue +/// property. +/// +/// - The empty string is not an identifier. Use `Option`. +/// - A lifetime is not an identifier. Use `syn::Lifetime` instead. +/// +/// An identifier constructed with `Ident::new` is permitted to be a Rust +/// keyword, though parsing one through its [`Parse`] implementation rejects +/// Rust keywords. Use `input.call(Ident::parse_any)` when parsing to match the +/// behaviour of `Ident::new`. +/// +/// [`Parse`]: https://docs.rs/syn/2.0/syn/parse/trait.Parse.html +/// +/// # Examples +/// +/// A new ident can be created from a string using the `Ident::new` function. +/// A span must be provided explicitly which governs the name resolution +/// behavior of the resulting identifier. +/// +/// ``` +/// use proc_macro2::{Ident, Span}; +/// +/// fn main() { +/// let call_ident = Ident::new("calligraphy", Span::call_site()); +/// +/// println!("{}", call_ident); +/// } +/// ``` +/// +/// An ident can be interpolated into a token stream using the `quote!` macro. +/// +/// ``` +/// use proc_macro2::{Ident, Span}; +/// use quote::quote; +/// +/// fn main() { +/// let ident = Ident::new("demo", Span::call_site()); +/// +/// // Create a variable binding whose name is this ident. +/// let expanded = quote! { let #ident = 10; }; +/// +/// // Create a variable binding with a slightly different name. +/// let temp_ident = Ident::new(&format!("new_{}", ident), Span::call_site()); +/// let expanded = quote! 
{ let #temp_ident = 10; }; +/// } +/// ``` +/// +/// A string representation of the ident is available through the `to_string()` +/// method. +/// +/// ``` +/// # use proc_macro2::{Ident, Span}; +/// # +/// # let ident = Ident::new("another_identifier", Span::call_site()); +/// # +/// // Examine the ident as a string. +/// let ident_string = ident.to_string(); +/// if ident_string.len() > 60 { +/// println!("Very long identifier: {}", ident_string) +/// } +/// ``` +#[derive(Clone)] +pub struct Ident { + inner: imp::Ident, + _marker: ProcMacroAutoTraits, +} + +impl Ident { + fn _new(inner: imp::Ident) -> Self { + Ident { + inner, + _marker: MARKER, + } + } + + /// Creates a new `Ident` with the given `string` as well as the specified + /// `span`. + /// + /// The `string` argument must be a valid identifier permitted by the + /// language, otherwise the function will panic. + /// + /// Note that `span`, currently in rustc, configures the hygiene information + /// for this identifier. + /// + /// As of this time `Span::call_site()` explicitly opts-in to "call-site" + /// hygiene meaning that identifiers created with this span will be resolved + /// as if they were written directly at the location of the macro call, and + /// other code at the macro call site will be able to refer to them as well. + /// + /// Later spans like `Span::def_site()` will allow to opt-in to + /// "definition-site" hygiene meaning that identifiers created with this + /// span will be resolved at the location of the macro definition and other + /// code at the macro call site will not be able to refer to them. + /// + /// Due to the current importance of hygiene this constructor, unlike other + /// tokens, requires a `Span` to be specified at construction. + /// + /// # Panics + /// + /// Panics if the input string is neither a keyword nor a legal variable + /// name. If you are not sure whether the string contains an identifier and + /// need to handle an error case, use + /// syn::parse_str::<Ident> + /// rather than `Ident::new`. + #[track_caller] + pub fn new(string: &str, span: Span) -> Self { + Ident::_new(imp::Ident::new_checked(string, span.inner)) + } + + /// Same as `Ident::new`, but creates a raw identifier (`r#ident`). The + /// `string` argument must be a valid identifier permitted by the language + /// (including keywords, e.g. `fn`). Keywords which are usable in path + /// segments (e.g. `self`, `super`) are not supported, and will cause a + /// panic. + #[track_caller] + pub fn new_raw(string: &str, span: Span) -> Self { + Ident::_new(imp::Ident::new_raw_checked(string, span.inner)) + } + + /// Returns the span of this `Ident`. + pub fn span(&self) -> Span { + Span::_new(self.inner.span()) + } + + /// Configures the span of this `Ident`, possibly changing its hygiene + /// context. 
+ pub fn set_span(&mut self, span: Span) { + self.inner.set_span(span.inner); + } +} + +impl PartialEq for Ident { + fn eq(&self, other: &Ident) -> bool { + self.inner == other.inner + } +} + +impl PartialEq for Ident +where + T: ?Sized + AsRef, +{ + fn eq(&self, other: &T) -> bool { + self.inner == other + } +} + +impl Eq for Ident {} + +impl PartialOrd for Ident { + fn partial_cmp(&self, other: &Ident) -> Option { + Some(self.cmp(other)) + } +} + +impl Ord for Ident { + fn cmp(&self, other: &Ident) -> Ordering { + self.to_string().cmp(&other.to_string()) + } +} + +impl Hash for Ident { + fn hash(&self, hasher: &mut H) { + self.to_string().hash(hasher); + } +} + +/// Prints the identifier as a string that should be losslessly convertible back +/// into the same identifier. +impl Display for Ident { + fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { + Display::fmt(&self.inner, f) + } +} + +impl Debug for Ident { + fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { + Debug::fmt(&self.inner, f) + } +} + +/// A literal string (`"hello"`), byte string (`b"hello"`), character (`'a'`), +/// byte character (`b'a'`), an integer or floating point number with or without +/// a suffix (`1`, `1u8`, `2.3`, `2.3f32`). +/// +/// Boolean literals like `true` and `false` do not belong here, they are +/// `Ident`s. +#[derive(Clone)] +pub struct Literal { + inner: imp::Literal, + _marker: ProcMacroAutoTraits, +} + +macro_rules! suffixed_int_literals { + ($($name:ident => $kind:ident,)*) => ($( + /// Creates a new suffixed integer literal with the specified value. + /// + /// This function will create an integer like `1u32` where the integer + /// value specified is the first part of the token and the integral is + /// also suffixed at the end. Literals created from negative numbers may + /// not survive roundtrips through `TokenStream` or strings and may be + /// broken into two tokens (`-` and positive literal). + /// + /// Literals created through this method have the `Span::call_site()` + /// span by default, which can be configured with the `set_span` method + /// below. + pub fn $name(n: $kind) -> Literal { + Literal::_new(imp::Literal::$name(n)) + } + )*) +} + +macro_rules! unsuffixed_int_literals { + ($($name:ident => $kind:ident,)*) => ($( + /// Creates a new unsuffixed integer literal with the specified value. + /// + /// This function will create an integer like `1` where the integer + /// value specified is the first part of the token. No suffix is + /// specified on this token, meaning that invocations like + /// `Literal::i8_unsuffixed(1)` are equivalent to + /// `Literal::u32_unsuffixed(1)`. Literals created from negative numbers + /// may not survive roundtrips through `TokenStream` or strings and may + /// be broken into two tokens (`-` and positive literal). + /// + /// Literals created through this method have the `Span::call_site()` + /// span by default, which can be configured with the `set_span` method + /// below. + pub fn $name(n: $kind) -> Literal { + Literal::_new(imp::Literal::$name(n)) + } + )*) +} + +impl Literal { + fn _new(inner: imp::Literal) -> Self { + Literal { + inner, + _marker: MARKER, + } + } + + fn _new_fallback(inner: fallback::Literal) -> Self { + Literal { + inner: inner.into(), + _marker: MARKER, + } + } + + suffixed_int_literals! 
{ + u8_suffixed => u8, + u16_suffixed => u16, + u32_suffixed => u32, + u64_suffixed => u64, + u128_suffixed => u128, + usize_suffixed => usize, + i8_suffixed => i8, + i16_suffixed => i16, + i32_suffixed => i32, + i64_suffixed => i64, + i128_suffixed => i128, + isize_suffixed => isize, + } + + unsuffixed_int_literals! { + u8_unsuffixed => u8, + u16_unsuffixed => u16, + u32_unsuffixed => u32, + u64_unsuffixed => u64, + u128_unsuffixed => u128, + usize_unsuffixed => usize, + i8_unsuffixed => i8, + i16_unsuffixed => i16, + i32_unsuffixed => i32, + i64_unsuffixed => i64, + i128_unsuffixed => i128, + isize_unsuffixed => isize, + } + + /// Creates a new unsuffixed floating-point literal. + /// + /// This constructor is similar to those like `Literal::i8_unsuffixed` where + /// the float's value is emitted directly into the token but no suffix is + /// used, so it may be inferred to be a `f64` later in the compiler. + /// Literals created from negative numbers may not survive round-trips + /// through `TokenStream` or strings and may be broken into two tokens (`-` + /// and positive literal). + /// + /// # Panics + /// + /// This function requires that the specified float is finite, for example + /// if it is infinity or NaN this function will panic. + pub fn f64_unsuffixed(f: f64) -> Literal { + assert!(f.is_finite()); + Literal::_new(imp::Literal::f64_unsuffixed(f)) + } + + /// Creates a new suffixed floating-point literal. + /// + /// This constructor will create a literal like `1.0f64` where the value + /// specified is the preceding part of the token and `f64` is the suffix of + /// the token. This token will always be inferred to be an `f64` in the + /// compiler. Literals created from negative numbers may not survive + /// round-trips through `TokenStream` or strings and may be broken into two + /// tokens (`-` and positive literal). + /// + /// # Panics + /// + /// This function requires that the specified float is finite, for example + /// if it is infinity or NaN this function will panic. + pub fn f64_suffixed(f: f64) -> Literal { + assert!(f.is_finite()); + Literal::_new(imp::Literal::f64_suffixed(f)) + } + + /// Creates a new unsuffixed floating-point literal. + /// + /// This constructor is similar to those like `Literal::i8_unsuffixed` where + /// the float's value is emitted directly into the token but no suffix is + /// used, so it may be inferred to be a `f64` later in the compiler. + /// Literals created from negative numbers may not survive round-trips + /// through `TokenStream` or strings and may be broken into two tokens (`-` + /// and positive literal). + /// + /// # Panics + /// + /// This function requires that the specified float is finite, for example + /// if it is infinity or NaN this function will panic. + pub fn f32_unsuffixed(f: f32) -> Literal { + assert!(f.is_finite()); + Literal::_new(imp::Literal::f32_unsuffixed(f)) + } + + /// Creates a new suffixed floating-point literal. + /// + /// This constructor will create a literal like `1.0f32` where the value + /// specified is the preceding part of the token and `f32` is the suffix of + /// the token. This token will always be inferred to be an `f32` in the + /// compiler. Literals created from negative numbers may not survive + /// round-trips through `TokenStream` or strings and may be broken into two + /// tokens (`-` and positive literal). + /// + /// # Panics + /// + /// This function requires that the specified float is finite, for example + /// if it is infinity or NaN this function will panic. 
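+    // Editor's note, illustration only, not part of upstream proc-macro2:
+    // Literal::f64_unsuffixed(2.5) prints as the token `2.5`, leaving the
+    // concrete type for the compiler to infer later, whereas
+    // Literal::f32_suffixed(2.5) prints as `2.5f32` and is always an f32.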
+ pub fn f32_suffixed(f: f32) -> Literal { + assert!(f.is_finite()); + Literal::_new(imp::Literal::f32_suffixed(f)) + } + + /// String literal. + pub fn string(string: &str) -> Literal { + Literal::_new(imp::Literal::string(string)) + } + + /// Character literal. + pub fn character(ch: char) -> Literal { + Literal::_new(imp::Literal::character(ch)) + } + + /// Byte character literal. + pub fn byte_character(byte: u8) -> Literal { + Literal::_new(imp::Literal::byte_character(byte)) + } + + /// Byte string literal. + pub fn byte_string(bytes: &[u8]) -> Literal { + Literal::_new(imp::Literal::byte_string(bytes)) + } + + /// C string literal. + pub fn c_string(string: &CStr) -> Literal { + Literal::_new(imp::Literal::c_string(string)) + } + + /// Returns the span encompassing this literal. + pub fn span(&self) -> Span { + Span::_new(self.inner.span()) + } + + /// Configures the span associated for this literal. + pub fn set_span(&mut self, span: Span) { + self.inner.set_span(span.inner); + } + + /// Returns a `Span` that is a subset of `self.span()` containing only + /// the source bytes in range `range`. Returns `None` if the would-be + /// trimmed span is outside the bounds of `self`. + /// + /// Warning: the underlying [`proc_macro::Literal::subspan`] method is + /// nightly-only. When called from within a procedural macro not using a + /// nightly compiler, this method will always return `None`. + /// + /// [`proc_macro::Literal::subspan`]: https://doc.rust-lang.org/proc_macro/struct.Literal.html#method.subspan + pub fn subspan>(&self, range: R) -> Option { + self.inner.subspan(range).map(Span::_new) + } + + // Intended for the `quote!` macro to use when constructing a proc-macro2 + // token out of a macro_rules $:literal token, which is already known to be + // a valid literal. This avoids reparsing/validating the literal's string + // representation. This is not public API other than for quote. + #[doc(hidden)] + pub unsafe fn from_str_unchecked(repr: &str) -> Self { + Literal::_new(unsafe { imp::Literal::from_str_unchecked(repr) }) + } +} + +impl FromStr for Literal { + type Err = LexError; + + fn from_str(repr: &str) -> Result { + repr.parse().map(Literal::_new).map_err(|inner| LexError { + inner, + _marker: MARKER, + }) + } +} + +impl Debug for Literal { + fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { + Debug::fmt(&self.inner, f) + } +} + +impl Display for Literal { + fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { + Display::fmt(&self.inner, f) + } +} + +/// Public implementation details for the `TokenStream` type, such as iterators. +pub mod token_stream { + use crate::marker::{ProcMacroAutoTraits, MARKER}; + use crate::{imp, TokenTree}; + use core::fmt::{self, Debug}; + + pub use crate::TokenStream; + + /// An iterator over `TokenStream`'s `TokenTree`s. + /// + /// The iteration is "shallow", e.g. the iterator doesn't recurse into + /// delimited groups, and returns whole groups as token trees. 
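+    // Editor's note, illustration only, not part of upstream proc-macro2:
+    // iterating the TokenStream parsed from "a (b c)" yields exactly two
+    // top-level TokenTrees, an Ident and a Group; the `b` and `c` inside
+    // the parentheses are only reached by iterating group.stream().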
+ #[derive(Clone)] + pub struct IntoIter { + inner: imp::TokenTreeIter, + _marker: ProcMacroAutoTraits, + } + + impl Iterator for IntoIter { + type Item = TokenTree; + + fn next(&mut self) -> Option { + self.inner.next() + } + + fn size_hint(&self) -> (usize, Option) { + self.inner.size_hint() + } + } + + impl Debug for IntoIter { + fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { + f.write_str("TokenStream ")?; + f.debug_list().entries(self.clone()).finish() + } + } + + impl IntoIterator for TokenStream { + type Item = TokenTree; + type IntoIter = IntoIter; + + fn into_iter(self) -> IntoIter { + IntoIter { + inner: self.inner.into_iter(), + _marker: MARKER, + } + } + } +} diff --git a/rust/hw/char/pl011/vendor/proc-macro2/src/location.rs b/rust/hw/char/pl011/vendor/proc-macro2/src/location.rs new file mode 100644 index 0000000000..7190e2d052 --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro2/src/location.rs @@ -0,0 +1,29 @@ +use core::cmp::Ordering; + +/// A line-column pair representing the start or end of a `Span`. +/// +/// This type is semver exempt and not exposed by default. +#[cfg_attr(docsrs, doc(cfg(feature = "span-locations")))] +#[derive(Copy, Clone, Debug, PartialEq, Eq, Hash)] +pub struct LineColumn { + /// The 1-indexed line in the source file on which the span starts or ends + /// (inclusive). + pub line: usize, + /// The 0-indexed column (in UTF-8 characters) in the source file on which + /// the span starts or ends (inclusive). + pub column: usize, +} + +impl Ord for LineColumn { + fn cmp(&self, other: &Self) -> Ordering { + self.line + .cmp(&other.line) + .then(self.column.cmp(&other.column)) + } +} + +impl PartialOrd for LineColumn { + fn partial_cmp(&self, other: &Self) -> Option { + Some(self.cmp(other)) + } +} diff --git a/rust/hw/char/pl011/vendor/proc-macro2/src/marker.rs b/rust/hw/char/pl011/vendor/proc-macro2/src/marker.rs new file mode 100644 index 0000000000..23b94ce6fa --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro2/src/marker.rs @@ -0,0 +1,17 @@ +use alloc::rc::Rc; +use core::marker::PhantomData; +use core::panic::{RefUnwindSafe, UnwindSafe}; + +// Zero sized marker with the correct set of autotrait impls we want all proc +// macro types to have. 
+#[derive(Copy, Clone)] +#[cfg_attr( + all(procmacro2_semver_exempt, any(not(wrap_proc_macro), super_unstable)), + derive(PartialEq, Eq) +)] +pub(crate) struct ProcMacroAutoTraits(PhantomData>); + +pub(crate) const MARKER: ProcMacroAutoTraits = ProcMacroAutoTraits(PhantomData); + +impl UnwindSafe for ProcMacroAutoTraits {} +impl RefUnwindSafe for ProcMacroAutoTraits {} diff --git a/rust/hw/char/pl011/vendor/proc-macro2/src/parse.rs b/rust/hw/char/pl011/vendor/proc-macro2/src/parse.rs new file mode 100644 index 0000000000..07239bc3ad --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro2/src/parse.rs @@ -0,0 +1,996 @@ +use crate::fallback::{ + self, is_ident_continue, is_ident_start, Group, LexError, Literal, Span, TokenStream, + TokenStreamBuilder, +}; +use crate::{Delimiter, Punct, Spacing, TokenTree}; +use core::char; +use core::str::{Bytes, CharIndices, Chars}; + +#[derive(Copy, Clone, Eq, PartialEq)] +pub(crate) struct Cursor<'a> { + pub rest: &'a str, + #[cfg(span_locations)] + pub off: u32, +} + +impl<'a> Cursor<'a> { + pub fn advance(&self, bytes: usize) -> Cursor<'a> { + let (_front, rest) = self.rest.split_at(bytes); + Cursor { + rest, + #[cfg(span_locations)] + off: self.off + _front.chars().count() as u32, + } + } + + pub fn starts_with(&self, s: &str) -> bool { + self.rest.starts_with(s) + } + + pub fn starts_with_char(&self, ch: char) -> bool { + self.rest.starts_with(ch) + } + + pub fn starts_with_fn(&self, f: Pattern) -> bool + where + Pattern: FnMut(char) -> bool, + { + self.rest.starts_with(f) + } + + pub fn is_empty(&self) -> bool { + self.rest.is_empty() + } + + fn len(&self) -> usize { + self.rest.len() + } + + fn as_bytes(&self) -> &'a [u8] { + self.rest.as_bytes() + } + + fn bytes(&self) -> Bytes<'a> { + self.rest.bytes() + } + + fn chars(&self) -> Chars<'a> { + self.rest.chars() + } + + fn char_indices(&self) -> CharIndices<'a> { + self.rest.char_indices() + } + + fn parse(&self, tag: &str) -> Result, Reject> { + if self.starts_with(tag) { + Ok(self.advance(tag.len())) + } else { + Err(Reject) + } + } +} + +pub(crate) struct Reject; +type PResult<'a, O> = Result<(Cursor<'a>, O), Reject>; + +fn skip_whitespace(input: Cursor) -> Cursor { + let mut s = input; + + while !s.is_empty() { + let byte = s.as_bytes()[0]; + if byte == b'/' { + if s.starts_with("//") + && (!s.starts_with("///") || s.starts_with("////")) + && !s.starts_with("//!") + { + let (cursor, _) = take_until_newline_or_eof(s); + s = cursor; + continue; + } else if s.starts_with("/**/") { + s = s.advance(4); + continue; + } else if s.starts_with("/*") + && (!s.starts_with("/**") || s.starts_with("/***")) + && !s.starts_with("/*!") + { + match block_comment(s) { + Ok((rest, _)) => { + s = rest; + continue; + } + Err(Reject) => return s, + } + } + } + match byte { + b' ' | 0x09..=0x0d => { + s = s.advance(1); + continue; + } + b if b.is_ascii() => {} + _ => { + let ch = s.chars().next().unwrap(); + if is_whitespace(ch) { + s = s.advance(ch.len_utf8()); + continue; + } + } + } + return s; + } + s +} + +fn block_comment(input: Cursor) -> PResult<&str> { + if !input.starts_with("/*") { + return Err(Reject); + } + + let mut depth = 0usize; + let bytes = input.as_bytes(); + let mut i = 0usize; + let upper = bytes.len() - 1; + + while i < upper { + if bytes[i] == b'/' && bytes[i + 1] == b'*' { + depth += 1; + i += 1; // eat '*' + } else if bytes[i] == b'*' && bytes[i + 1] == b'/' { + depth -= 1; + if depth == 0 { + return Ok((input.advance(i + 2), &input.rest[..i + 2])); + } + i += 1; // eat '/' + } + i += 1; + 
} + + Err(Reject) +} + +fn is_whitespace(ch: char) -> bool { + // Rust treats left-to-right mark and right-to-left mark as whitespace + ch.is_whitespace() || ch == '\u{200e}' || ch == '\u{200f}' +} + +fn word_break(input: Cursor) -> Result { + match input.chars().next() { + Some(ch) if is_ident_continue(ch) => Err(Reject), + Some(_) | None => Ok(input), + } +} + +// Rustc's representation of a macro expansion error in expression position or +// type position. +const ERROR: &str = "(/*ERROR*/)"; + +pub(crate) fn token_stream(mut input: Cursor) -> Result { + let mut trees = TokenStreamBuilder::new(); + let mut stack = Vec::new(); + + loop { + input = skip_whitespace(input); + + if let Ok((rest, ())) = doc_comment(input, &mut trees) { + input = rest; + continue; + } + + #[cfg(span_locations)] + let lo = input.off; + + let first = match input.bytes().next() { + Some(first) => first, + None => match stack.last() { + None => return Ok(trees.build()), + #[cfg(span_locations)] + Some((lo, _frame)) => { + return Err(LexError { + span: Span { lo: *lo, hi: *lo }, + }) + } + #[cfg(not(span_locations))] + Some(_frame) => return Err(LexError { span: Span {} }), + }, + }; + + if let Some(open_delimiter) = match first { + b'(' if !input.starts_with(ERROR) => Some(Delimiter::Parenthesis), + b'[' => Some(Delimiter::Bracket), + b'{' => Some(Delimiter::Brace), + _ => None, + } { + input = input.advance(1); + let frame = (open_delimiter, trees); + #[cfg(span_locations)] + let frame = (lo, frame); + stack.push(frame); + trees = TokenStreamBuilder::new(); + } else if let Some(close_delimiter) = match first { + b')' => Some(Delimiter::Parenthesis), + b']' => Some(Delimiter::Bracket), + b'}' => Some(Delimiter::Brace), + _ => None, + } { + let frame = match stack.pop() { + Some(frame) => frame, + None => return Err(lex_error(input)), + }; + #[cfg(span_locations)] + let (lo, frame) = frame; + let (open_delimiter, outer) = frame; + if open_delimiter != close_delimiter { + return Err(lex_error(input)); + } + input = input.advance(1); + let mut g = Group::new(open_delimiter, trees.build()); + g.set_span(Span { + #[cfg(span_locations)] + lo, + #[cfg(span_locations)] + hi: input.off, + }); + trees = outer; + trees.push_token_from_parser(TokenTree::Group(crate::Group::_new_fallback(g))); + } else { + let (rest, mut tt) = match leaf_token(input) { + Ok((rest, tt)) => (rest, tt), + Err(Reject) => return Err(lex_error(input)), + }; + tt.set_span(crate::Span::_new_fallback(Span { + #[cfg(span_locations)] + lo, + #[cfg(span_locations)] + hi: rest.off, + })); + trees.push_token_from_parser(tt); + input = rest; + } + } +} + +fn lex_error(cursor: Cursor) -> LexError { + #[cfg(not(span_locations))] + let _ = cursor; + LexError { + span: Span { + #[cfg(span_locations)] + lo: cursor.off, + #[cfg(span_locations)] + hi: cursor.off, + }, + } +} + +fn leaf_token(input: Cursor) -> PResult { + if let Ok((input, l)) = literal(input) { + // must be parsed before ident + Ok((input, TokenTree::Literal(crate::Literal::_new_fallback(l)))) + } else if let Ok((input, p)) = punct(input) { + Ok((input, TokenTree::Punct(p))) + } else if let Ok((input, i)) = ident(input) { + Ok((input, TokenTree::Ident(i))) + } else if input.starts_with(ERROR) { + let rest = input.advance(ERROR.len()); + let repr = crate::Literal::_new_fallback(Literal::_new(ERROR.to_owned())); + Ok((rest, TokenTree::Literal(repr))) + } else { + Err(Reject) + } +} + +fn ident(input: Cursor) -> PResult { + if [ + "r\"", "r#\"", "r##", "b\"", "b\'", "br\"", "br#", "c\"", "cr\"", "cr#", 
+ ] + .iter() + .any(|prefix| input.starts_with(prefix)) + { + Err(Reject) + } else { + ident_any(input) + } +} + +fn ident_any(input: Cursor) -> PResult { + let raw = input.starts_with("r#"); + let rest = input.advance((raw as usize) << 1); + + let (rest, sym) = ident_not_raw(rest)?; + + if !raw { + let ident = crate::Ident::_new(crate::imp::Ident::new_unchecked( + sym, + fallback::Span::call_site(), + )); + return Ok((rest, ident)); + } + + match sym { + "_" | "super" | "self" | "Self" | "crate" => return Err(Reject), + _ => {} + } + + let ident = crate::Ident::_new(crate::imp::Ident::new_raw_unchecked( + sym, + fallback::Span::call_site(), + )); + Ok((rest, ident)) +} + +fn ident_not_raw(input: Cursor) -> PResult<&str> { + let mut chars = input.char_indices(); + + match chars.next() { + Some((_, ch)) if is_ident_start(ch) => {} + _ => return Err(Reject), + } + + let mut end = input.len(); + for (i, ch) in chars { + if !is_ident_continue(ch) { + end = i; + break; + } + } + + Ok((input.advance(end), &input.rest[..end])) +} + +pub(crate) fn literal(input: Cursor) -> PResult { + let rest = literal_nocapture(input)?; + let end = input.len() - rest.len(); + Ok((rest, Literal::_new(input.rest[..end].to_string()))) +} + +fn literal_nocapture(input: Cursor) -> Result { + if let Ok(ok) = string(input) { + Ok(ok) + } else if let Ok(ok) = byte_string(input) { + Ok(ok) + } else if let Ok(ok) = c_string(input) { + Ok(ok) + } else if let Ok(ok) = byte(input) { + Ok(ok) + } else if let Ok(ok) = character(input) { + Ok(ok) + } else if let Ok(ok) = float(input) { + Ok(ok) + } else if let Ok(ok) = int(input) { + Ok(ok) + } else { + Err(Reject) + } +} + +fn literal_suffix(input: Cursor) -> Cursor { + match ident_not_raw(input) { + Ok((input, _)) => input, + Err(Reject) => input, + } +} + +fn string(input: Cursor) -> Result { + if let Ok(input) = input.parse("\"") { + cooked_string(input) + } else if let Ok(input) = input.parse("r") { + raw_string(input) + } else { + Err(Reject) + } +} + +fn cooked_string(mut input: Cursor) -> Result { + let mut chars = input.char_indices(); + + while let Some((i, ch)) = chars.next() { + match ch { + '"' => { + let input = input.advance(i + 1); + return Ok(literal_suffix(input)); + } + '\r' => match chars.next() { + Some((_, '\n')) => {} + _ => break, + }, + '\\' => match chars.next() { + Some((_, 'x')) => { + backslash_x_char(&mut chars)?; + } + Some((_, 'n' | 'r' | 't' | '\\' | '\'' | '"' | '0')) => {} + Some((_, 'u')) => { + backslash_u(&mut chars)?; + } + Some((newline, ch @ ('\n' | '\r'))) => { + input = input.advance(newline + 1); + trailing_backslash(&mut input, ch as u8)?; + chars = input.char_indices(); + } + _ => break, + }, + _ch => {} + } + } + Err(Reject) +} + +fn raw_string(input: Cursor) -> Result { + let (input, delimiter) = delimiter_of_raw_string(input)?; + let mut bytes = input.bytes().enumerate(); + while let Some((i, byte)) = bytes.next() { + match byte { + b'"' if input.rest[i + 1..].starts_with(delimiter) => { + let rest = input.advance(i + 1 + delimiter.len()); + return Ok(literal_suffix(rest)); + } + b'\r' => match bytes.next() { + Some((_, b'\n')) => {} + _ => break, + }, + _ => {} + } + } + Err(Reject) +} + +fn byte_string(input: Cursor) -> Result { + if let Ok(input) = input.parse("b\"") { + cooked_byte_string(input) + } else if let Ok(input) = input.parse("br") { + raw_byte_string(input) + } else { + Err(Reject) + } +} + +fn cooked_byte_string(mut input: Cursor) -> Result { + let mut bytes = input.bytes().enumerate(); + while let Some((offset, 
b)) = bytes.next() { + match b { + b'"' => { + let input = input.advance(offset + 1); + return Ok(literal_suffix(input)); + } + b'\r' => match bytes.next() { + Some((_, b'\n')) => {} + _ => break, + }, + b'\\' => match bytes.next() { + Some((_, b'x')) => { + backslash_x_byte(&mut bytes)?; + } + Some((_, b'n' | b'r' | b't' | b'\\' | b'0' | b'\'' | b'"')) => {} + Some((newline, b @ (b'\n' | b'\r'))) => { + input = input.advance(newline + 1); + trailing_backslash(&mut input, b)?; + bytes = input.bytes().enumerate(); + } + _ => break, + }, + b if b.is_ascii() => {} + _ => break, + } + } + Err(Reject) +} + +fn delimiter_of_raw_string(input: Cursor) -> PResult<&str> { + for (i, byte) in input.bytes().enumerate() { + match byte { + b'"' => { + if i > 255 { + // https://github.com/rust-lang/rust/pull/95251 + return Err(Reject); + } + return Ok((input.advance(i + 1), &input.rest[..i])); + } + b'#' => {} + _ => break, + } + } + Err(Reject) +} + +fn raw_byte_string(input: Cursor) -> Result { + let (input, delimiter) = delimiter_of_raw_string(input)?; + let mut bytes = input.bytes().enumerate(); + while let Some((i, byte)) = bytes.next() { + match byte { + b'"' if input.rest[i + 1..].starts_with(delimiter) => { + let rest = input.advance(i + 1 + delimiter.len()); + return Ok(literal_suffix(rest)); + } + b'\r' => match bytes.next() { + Some((_, b'\n')) => {} + _ => break, + }, + other => { + if !other.is_ascii() { + break; + } + } + } + } + Err(Reject) +} + +fn c_string(input: Cursor) -> Result { + if let Ok(input) = input.parse("c\"") { + cooked_c_string(input) + } else if let Ok(input) = input.parse("cr") { + raw_c_string(input) + } else { + Err(Reject) + } +} + +fn raw_c_string(input: Cursor) -> Result { + let (input, delimiter) = delimiter_of_raw_string(input)?; + let mut bytes = input.bytes().enumerate(); + while let Some((i, byte)) = bytes.next() { + match byte { + b'"' if input.rest[i + 1..].starts_with(delimiter) => { + let rest = input.advance(i + 1 + delimiter.len()); + return Ok(literal_suffix(rest)); + } + b'\r' => match bytes.next() { + Some((_, b'\n')) => {} + _ => break, + }, + b'\0' => break, + _ => {} + } + } + Err(Reject) +} + +fn cooked_c_string(mut input: Cursor) -> Result { + let mut chars = input.char_indices(); + + while let Some((i, ch)) = chars.next() { + match ch { + '"' => { + let input = input.advance(i + 1); + return Ok(literal_suffix(input)); + } + '\r' => match chars.next() { + Some((_, '\n')) => {} + _ => break, + }, + '\\' => match chars.next() { + Some((_, 'x')) => { + backslash_x_nonzero(&mut chars)?; + } + Some((_, 'n' | 'r' | 't' | '\\' | '\'' | '"')) => {} + Some((_, 'u')) => { + if backslash_u(&mut chars)? 
== '\0' { + break; + } + } + Some((newline, ch @ ('\n' | '\r'))) => { + input = input.advance(newline + 1); + trailing_backslash(&mut input, ch as u8)?; + chars = input.char_indices(); + } + _ => break, + }, + '\0' => break, + _ch => {} + } + } + Err(Reject) +} + +fn byte(input: Cursor) -> Result { + let input = input.parse("b'")?; + let mut bytes = input.bytes().enumerate(); + let ok = match bytes.next().map(|(_, b)| b) { + Some(b'\\') => match bytes.next().map(|(_, b)| b) { + Some(b'x') => backslash_x_byte(&mut bytes).is_ok(), + Some(b'n' | b'r' | b't' | b'\\' | b'0' | b'\'' | b'"') => true, + _ => false, + }, + b => b.is_some(), + }; + if !ok { + return Err(Reject); + } + let (offset, _) = bytes.next().ok_or(Reject)?; + if !input.chars().as_str().is_char_boundary(offset) { + return Err(Reject); + } + let input = input.advance(offset).parse("'")?; + Ok(literal_suffix(input)) +} + +fn character(input: Cursor) -> Result { + let input = input.parse("'")?; + let mut chars = input.char_indices(); + let ok = match chars.next().map(|(_, ch)| ch) { + Some('\\') => match chars.next().map(|(_, ch)| ch) { + Some('x') => backslash_x_char(&mut chars).is_ok(), + Some('u') => backslash_u(&mut chars).is_ok(), + Some('n' | 'r' | 't' | '\\' | '0' | '\'' | '"') => true, + _ => false, + }, + ch => ch.is_some(), + }; + if !ok { + return Err(Reject); + } + let (idx, _) = chars.next().ok_or(Reject)?; + let input = input.advance(idx).parse("'")?; + Ok(literal_suffix(input)) +} + +macro_rules! next_ch { + ($chars:ident @ $pat:pat) => { + match $chars.next() { + Some((_, ch)) => match ch { + $pat => ch, + _ => return Err(Reject), + }, + None => return Err(Reject), + } + }; +} + +fn backslash_x_char(chars: &mut I) -> Result<(), Reject> +where + I: Iterator, +{ + next_ch!(chars @ '0'..='7'); + next_ch!(chars @ '0'..='9' | 'a'..='f' | 'A'..='F'); + Ok(()) +} + +fn backslash_x_byte(chars: &mut I) -> Result<(), Reject> +where + I: Iterator, +{ + next_ch!(chars @ b'0'..=b'9' | b'a'..=b'f' | b'A'..=b'F'); + next_ch!(chars @ b'0'..=b'9' | b'a'..=b'f' | b'A'..=b'F'); + Ok(()) +} + +fn backslash_x_nonzero(chars: &mut I) -> Result<(), Reject> +where + I: Iterator, +{ + let first = next_ch!(chars @ '0'..='9' | 'a'..='f' | 'A'..='F'); + let second = next_ch!(chars @ '0'..='9' | 'a'..='f' | 'A'..='F'); + if first == '0' && second == '0' { + Err(Reject) + } else { + Ok(()) + } +} + +fn backslash_u(chars: &mut I) -> Result +where + I: Iterator, +{ + next_ch!(chars @ '{'); + let mut value = 0; + let mut len = 0; + for (_, ch) in chars { + let digit = match ch { + '0'..='9' => ch as u8 - b'0', + 'a'..='f' => 10 + ch as u8 - b'a', + 'A'..='F' => 10 + ch as u8 - b'A', + '_' if len > 0 => continue, + '}' if len > 0 => return char::from_u32(value).ok_or(Reject), + _ => break, + }; + if len == 6 { + break; + } + value *= 0x10; + value += u32::from(digit); + len += 1; + } + Err(Reject) +} + +fn trailing_backslash(input: &mut Cursor, mut last: u8) -> Result<(), Reject> { + let mut whitespace = input.bytes().enumerate(); + loop { + if last == b'\r' && whitespace.next().map_or(true, |(_, b)| b != b'\n') { + return Err(Reject); + } + match whitespace.next() { + Some((_, b @ (b' ' | b'\t' | b'\n' | b'\r'))) => { + last = b; + } + Some((offset, _)) => { + *input = input.advance(offset); + return Ok(()); + } + None => return Err(Reject), + } + } +} + +fn float(input: Cursor) -> Result { + let mut rest = float_digits(input)?; + if let Some(ch) = rest.chars().next() { + if is_ident_start(ch) { + rest = ident_not_raw(rest)?.0; + } + } + 
word_break(rest) +} + +fn float_digits(input: Cursor) -> Result { + let mut chars = input.chars().peekable(); + match chars.next() { + Some(ch) if '0' <= ch && ch <= '9' => {} + _ => return Err(Reject), + } + + let mut len = 1; + let mut has_dot = false; + let mut has_exp = false; + while let Some(&ch) = chars.peek() { + match ch { + '0'..='9' | '_' => { + chars.next(); + len += 1; + } + '.' => { + if has_dot { + break; + } + chars.next(); + if chars + .peek() + .map_or(false, |&ch| ch == '.' || is_ident_start(ch)) + { + return Err(Reject); + } + len += 1; + has_dot = true; + } + 'e' | 'E' => { + chars.next(); + len += 1; + has_exp = true; + break; + } + _ => break, + } + } + + if !(has_dot || has_exp) { + return Err(Reject); + } + + if has_exp { + let token_before_exp = if has_dot { + Ok(input.advance(len - 1)) + } else { + Err(Reject) + }; + let mut has_sign = false; + let mut has_exp_value = false; + while let Some(&ch) = chars.peek() { + match ch { + '+' | '-' => { + if has_exp_value { + break; + } + if has_sign { + return token_before_exp; + } + chars.next(); + len += 1; + has_sign = true; + } + '0'..='9' => { + chars.next(); + len += 1; + has_exp_value = true; + } + '_' => { + chars.next(); + len += 1; + } + _ => break, + } + } + if !has_exp_value { + return token_before_exp; + } + } + + Ok(input.advance(len)) +} + +fn int(input: Cursor) -> Result { + let mut rest = digits(input)?; + if let Some(ch) = rest.chars().next() { + if is_ident_start(ch) { + rest = ident_not_raw(rest)?.0; + } + } + word_break(rest) +} + +fn digits(mut input: Cursor) -> Result { + let base = if input.starts_with("0x") { + input = input.advance(2); + 16 + } else if input.starts_with("0o") { + input = input.advance(2); + 8 + } else if input.starts_with("0b") { + input = input.advance(2); + 2 + } else { + 10 + }; + + let mut len = 0; + let mut empty = true; + for b in input.bytes() { + match b { + b'0'..=b'9' => { + let digit = (b - b'0') as u64; + if digit >= base { + return Err(Reject); + } + } + b'a'..=b'f' => { + let digit = 10 + (b - b'a') as u64; + if digit >= base { + break; + } + } + b'A'..=b'F' => { + let digit = 10 + (b - b'A') as u64; + if digit >= base { + break; + } + } + b'_' => { + if empty && base == 10 { + return Err(Reject); + } + len += 1; + continue; + } + _ => break, + }; + len += 1; + empty = false; + } + if empty { + Err(Reject) + } else { + Ok(input.advance(len)) + } +} + +fn punct(input: Cursor) -> PResult { + let (rest, ch) = punct_char(input)?; + if ch == '\'' { + if ident_any(rest)?.0.starts_with_char('\'') { + Err(Reject) + } else { + Ok((rest, Punct::new('\'', Spacing::Joint))) + } + } else { + let kind = match punct_char(rest) { + Ok(_) => Spacing::Joint, + Err(Reject) => Spacing::Alone, + }; + Ok((rest, Punct::new(ch, kind))) + } +} + +fn punct_char(input: Cursor) -> PResult { + if input.starts_with("//") || input.starts_with("/*") { + // Do not accept `/` of a comment as a punct. 
+ return Err(Reject); + } + + let mut chars = input.chars(); + let first = match chars.next() { + Some(ch) => ch, + None => { + return Err(Reject); + } + }; + let recognized = "~!@#$%^&*-=+|;:,<.>/?'"; + if recognized.contains(first) { + Ok((input.advance(first.len_utf8()), first)) + } else { + Err(Reject) + } +} + +fn doc_comment<'a>(input: Cursor<'a>, trees: &mut TokenStreamBuilder) -> PResult<'a, ()> { + #[cfg(span_locations)] + let lo = input.off; + let (rest, (comment, inner)) = doc_comment_contents(input)?; + let fallback_span = Span { + #[cfg(span_locations)] + lo, + #[cfg(span_locations)] + hi: rest.off, + }; + let span = crate::Span::_new_fallback(fallback_span); + + let mut scan_for_bare_cr = comment; + while let Some(cr) = scan_for_bare_cr.find('\r') { + let rest = &scan_for_bare_cr[cr + 1..]; + if !rest.starts_with('\n') { + return Err(Reject); + } + scan_for_bare_cr = rest; + } + + let mut pound = Punct::new('#', Spacing::Alone); + pound.set_span(span); + trees.push_token_from_parser(TokenTree::Punct(pound)); + + if inner { + let mut bang = Punct::new('!', Spacing::Alone); + bang.set_span(span); + trees.push_token_from_parser(TokenTree::Punct(bang)); + } + + let doc_ident = crate::Ident::_new(crate::imp::Ident::new_unchecked("doc", fallback_span)); + let mut equal = Punct::new('=', Spacing::Alone); + equal.set_span(span); + let mut literal = crate::Literal::string(comment); + literal.set_span(span); + let mut bracketed = TokenStreamBuilder::with_capacity(3); + bracketed.push_token_from_parser(TokenTree::Ident(doc_ident)); + bracketed.push_token_from_parser(TokenTree::Punct(equal)); + bracketed.push_token_from_parser(TokenTree::Literal(literal)); + let group = Group::new(Delimiter::Bracket, bracketed.build()); + let mut group = crate::Group::_new_fallback(group); + group.set_span(span); + trees.push_token_from_parser(TokenTree::Group(group)); + + Ok((rest, ())) +} + +fn doc_comment_contents(input: Cursor) -> PResult<(&str, bool)> { + if input.starts_with("//!") { + let input = input.advance(3); + let (input, s) = take_until_newline_or_eof(input); + Ok((input, (s, true))) + } else if input.starts_with("/*!") { + let (input, s) = block_comment(input)?; + Ok((input, (&s[3..s.len() - 2], true))) + } else if input.starts_with("///") { + let input = input.advance(3); + if input.starts_with_char('/') { + return Err(Reject); + } + let (input, s) = take_until_newline_or_eof(input); + Ok((input, (s, false))) + } else if input.starts_with("/**") && !input.rest[3..].starts_with('*') { + let (input, s) = block_comment(input)?; + Ok((input, (&s[3..s.len() - 2], false))) + } else { + Err(Reject) + } +} + +fn take_until_newline_or_eof(input: Cursor) -> (Cursor, &str) { + let chars = input.char_indices(); + + for (i, ch) in chars { + if ch == '\n' { + return (input.advance(i), &input.rest[..i]); + } else if ch == '\r' && input.rest[i + 1..].starts_with('\n') { + return (input.advance(i + 1), &input.rest[..i]); + } + } + + (input.advance(input.len()), input.rest) +} diff --git a/rust/hw/char/pl011/vendor/proc-macro2/src/rcvec.rs b/rust/hw/char/pl011/vendor/proc-macro2/src/rcvec.rs new file mode 100644 index 0000000000..37955afb11 --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro2/src/rcvec.rs @@ -0,0 +1,145 @@ +use alloc::rc::Rc; +use alloc::vec; +use core::mem; +use core::panic::RefUnwindSafe; +use core::slice; + +pub(crate) struct RcVec { + inner: Rc>, +} + +pub(crate) struct RcVecBuilder { + inner: Vec, +} + +pub(crate) struct RcVecMut<'a, T> { + inner: &'a mut Vec, +} + 
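+// Editor's sketch, not part of upstream proc-macro2: RcVec is a cheaply
+// clonable, copy-on-write vector shared through an Rc. Cloning only bumps
+// the reference count, and make_owned() copies the underlying Vec only when
+// it is actually shared. A minimal, hypothetical test module exercising the
+// API defined in this file (module and test names are the editor's own):
+#[cfg(test)]
+mod rcvec_sketch {
+    use super::RcVecBuilder;
+
+    #[test]
+    fn clone_is_cheap_and_mutation_is_isolated() {
+        let mut builder = RcVecBuilder::new();
+        builder.push(1u32);
+        let shared = builder.build();
+        let handle = shared.clone();          // bumps the Rc refcount only
+        let mut owned = handle.make_owned();  // Vec is cloned because it is shared
+        owned.push(2);
+        assert_eq!(shared.len(), 1);          // the original is unaffected
+    }
+}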
+#[derive(Clone)] +pub(crate) struct RcVecIntoIter { + inner: vec::IntoIter, +} + +impl RcVec { + pub fn is_empty(&self) -> bool { + self.inner.is_empty() + } + + pub fn len(&self) -> usize { + self.inner.len() + } + + pub fn iter(&self) -> slice::Iter { + self.inner.iter() + } + + pub fn make_mut(&mut self) -> RcVecMut + where + T: Clone, + { + RcVecMut { + inner: Rc::make_mut(&mut self.inner), + } + } + + pub fn get_mut(&mut self) -> Option> { + let inner = Rc::get_mut(&mut self.inner)?; + Some(RcVecMut { inner }) + } + + pub fn make_owned(mut self) -> RcVecBuilder + where + T: Clone, + { + let vec = if let Some(owned) = Rc::get_mut(&mut self.inner) { + mem::take(owned) + } else { + Vec::clone(&self.inner) + }; + RcVecBuilder { inner: vec } + } +} + +impl RcVecBuilder { + pub fn new() -> Self { + RcVecBuilder { inner: Vec::new() } + } + + pub fn with_capacity(cap: usize) -> Self { + RcVecBuilder { + inner: Vec::with_capacity(cap), + } + } + + pub fn push(&mut self, element: T) { + self.inner.push(element); + } + + pub fn extend(&mut self, iter: impl IntoIterator) { + self.inner.extend(iter); + } + + pub fn as_mut(&mut self) -> RcVecMut { + RcVecMut { + inner: &mut self.inner, + } + } + + pub fn build(self) -> RcVec { + RcVec { + inner: Rc::new(self.inner), + } + } +} + +impl<'a, T> RcVecMut<'a, T> { + pub fn push(&mut self, element: T) { + self.inner.push(element); + } + + pub fn extend(&mut self, iter: impl IntoIterator) { + self.inner.extend(iter); + } + + pub fn pop(&mut self) -> Option { + self.inner.pop() + } + + pub fn as_mut(&mut self) -> RcVecMut { + RcVecMut { inner: self.inner } + } +} + +impl Clone for RcVec { + fn clone(&self) -> Self { + RcVec { + inner: Rc::clone(&self.inner), + } + } +} + +impl IntoIterator for RcVecBuilder { + type Item = T; + type IntoIter = RcVecIntoIter; + + fn into_iter(self) -> Self::IntoIter { + RcVecIntoIter { + inner: self.inner.into_iter(), + } + } +} + +impl Iterator for RcVecIntoIter { + type Item = T; + + fn next(&mut self) -> Option { + self.inner.next() + } + + fn size_hint(&self) -> (usize, Option) { + self.inner.size_hint() + } +} + +impl RefUnwindSafe for RcVec where T: RefUnwindSafe {} diff --git a/rust/hw/char/pl011/vendor/proc-macro2/src/wrapper.rs b/rust/hw/char/pl011/vendor/proc-macro2/src/wrapper.rs new file mode 100644 index 0000000000..87e348dbb3 --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro2/src/wrapper.rs @@ -0,0 +1,993 @@ +use crate::detection::inside_proc_macro; +#[cfg(span_locations)] +use crate::location::LineColumn; +use crate::{fallback, Delimiter, Punct, Spacing, TokenTree}; +use core::fmt::{self, Debug, Display}; +#[cfg(span_locations)] +use core::ops::Range; +use core::ops::RangeBounds; +use core::str::FromStr; +use std::ffi::CStr; +use std::panic; +#[cfg(super_unstable)] +use std::path::PathBuf; + +#[derive(Clone)] +pub(crate) enum TokenStream { + Compiler(DeferredTokenStream), + Fallback(fallback::TokenStream), +} + +// Work around https://github.com/rust-lang/rust/issues/65080. +// In `impl Extend for TokenStream` which is used heavily by quote, +// we hold on to the appended tokens and do proc_macro::TokenStream::extend as +// late as possible to batch together consecutive uses of the Extend impl. +#[derive(Clone)] +pub(crate) struct DeferredTokenStream { + stream: proc_macro::TokenStream, + extra: Vec, +} + +pub(crate) enum LexError { + Compiler(proc_macro::LexError), + Fallback(fallback::LexError), + + // Rustc was supposed to return a LexError, but it panicked instead. 
+ // https://github.com/rust-lang/rust/issues/58736 + CompilerPanic, +} + +#[cold] +fn mismatch(line: u32) -> ! { + #[cfg(procmacro2_backtrace)] + { + let backtrace = std::backtrace::Backtrace::force_capture(); + panic!("compiler/fallback mismatch #{}\n\n{}", line, backtrace) + } + #[cfg(not(procmacro2_backtrace))] + { + panic!("compiler/fallback mismatch #{}", line) + } +} + +impl DeferredTokenStream { + fn new(stream: proc_macro::TokenStream) -> Self { + DeferredTokenStream { + stream, + extra: Vec::new(), + } + } + + fn is_empty(&self) -> bool { + self.stream.is_empty() && self.extra.is_empty() + } + + fn evaluate_now(&mut self) { + // If-check provides a fast short circuit for the common case of `extra` + // being empty, which saves a round trip over the proc macro bridge. + // Improves macro expansion time in winrt by 6% in debug mode. + if !self.extra.is_empty() { + self.stream.extend(self.extra.drain(..)); + } + } + + fn into_token_stream(mut self) -> proc_macro::TokenStream { + self.evaluate_now(); + self.stream + } +} + +impl TokenStream { + pub fn new() -> Self { + if inside_proc_macro() { + TokenStream::Compiler(DeferredTokenStream::new(proc_macro::TokenStream::new())) + } else { + TokenStream::Fallback(fallback::TokenStream::new()) + } + } + + pub fn is_empty(&self) -> bool { + match self { + TokenStream::Compiler(tts) => tts.is_empty(), + TokenStream::Fallback(tts) => tts.is_empty(), + } + } + + fn unwrap_nightly(self) -> proc_macro::TokenStream { + match self { + TokenStream::Compiler(s) => s.into_token_stream(), + TokenStream::Fallback(_) => mismatch(line!()), + } + } + + fn unwrap_stable(self) -> fallback::TokenStream { + match self { + TokenStream::Compiler(_) => mismatch(line!()), + TokenStream::Fallback(s) => s, + } + } +} + +impl FromStr for TokenStream { + type Err = LexError; + + fn from_str(src: &str) -> Result { + if inside_proc_macro() { + Ok(TokenStream::Compiler(DeferredTokenStream::new( + proc_macro_parse(src)?, + ))) + } else { + Ok(TokenStream::Fallback(src.parse()?)) + } + } +} + +// Work around https://github.com/rust-lang/rust/issues/58736. +fn proc_macro_parse(src: &str) -> Result { + let result = panic::catch_unwind(|| src.parse().map_err(LexError::Compiler)); + result.unwrap_or_else(|_| Err(LexError::CompilerPanic)) +} + +impl Display for TokenStream { + fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { + match self { + TokenStream::Compiler(tts) => Display::fmt(&tts.clone().into_token_stream(), f), + TokenStream::Fallback(tts) => Display::fmt(tts, f), + } + } +} + +impl From for TokenStream { + fn from(inner: proc_macro::TokenStream) -> Self { + TokenStream::Compiler(DeferredTokenStream::new(inner)) + } +} + +impl From for proc_macro::TokenStream { + fn from(inner: TokenStream) -> Self { + match inner { + TokenStream::Compiler(inner) => inner.into_token_stream(), + TokenStream::Fallback(inner) => inner.to_string().parse().unwrap(), + } + } +} + +impl From for TokenStream { + fn from(inner: fallback::TokenStream) -> Self { + TokenStream::Fallback(inner) + } +} + +// Assumes inside_proc_macro(). 
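+// Every nested token must also be on the Compiler side: the
+// unwrap_nightly() calls below hit mismatch() if a fallback-built
+// token somehow ends up being converted here.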
+fn into_compiler_token(token: TokenTree) -> proc_macro::TokenTree { + match token { + TokenTree::Group(tt) => tt.inner.unwrap_nightly().into(), + TokenTree::Punct(tt) => { + let spacing = match tt.spacing() { + Spacing::Joint => proc_macro::Spacing::Joint, + Spacing::Alone => proc_macro::Spacing::Alone, + }; + let mut punct = proc_macro::Punct::new(tt.as_char(), spacing); + punct.set_span(tt.span().inner.unwrap_nightly()); + punct.into() + } + TokenTree::Ident(tt) => tt.inner.unwrap_nightly().into(), + TokenTree::Literal(tt) => tt.inner.unwrap_nightly().into(), + } +} + +impl From for TokenStream { + fn from(token: TokenTree) -> Self { + if inside_proc_macro() { + TokenStream::Compiler(DeferredTokenStream::new(into_compiler_token(token).into())) + } else { + TokenStream::Fallback(token.into()) + } + } +} + +impl FromIterator for TokenStream { + fn from_iter>(trees: I) -> Self { + if inside_proc_macro() { + TokenStream::Compiler(DeferredTokenStream::new( + trees.into_iter().map(into_compiler_token).collect(), + )) + } else { + TokenStream::Fallback(trees.into_iter().collect()) + } + } +} + +impl FromIterator for TokenStream { + fn from_iter>(streams: I) -> Self { + let mut streams = streams.into_iter(); + match streams.next() { + Some(TokenStream::Compiler(mut first)) => { + first.evaluate_now(); + first.stream.extend(streams.map(|s| match s { + TokenStream::Compiler(s) => s.into_token_stream(), + TokenStream::Fallback(_) => mismatch(line!()), + })); + TokenStream::Compiler(first) + } + Some(TokenStream::Fallback(mut first)) => { + first.extend(streams.map(|s| match s { + TokenStream::Fallback(s) => s, + TokenStream::Compiler(_) => mismatch(line!()), + })); + TokenStream::Fallback(first) + } + None => TokenStream::new(), + } + } +} + +impl Extend for TokenStream { + fn extend>(&mut self, stream: I) { + match self { + TokenStream::Compiler(tts) => { + // Here is the reason for DeferredTokenStream. 
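+                // Rather than crossing the proc macro bridge once per
+                // token, buffer everything in `extra`; evaluate_now()
+                // later flushes the whole batch with a single call to
+                // proc_macro::TokenStream::extend().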
+ for token in stream { + tts.extra.push(into_compiler_token(token)); + } + } + TokenStream::Fallback(tts) => tts.extend(stream), + } + } +} + +impl Extend for TokenStream { + fn extend>(&mut self, streams: I) { + match self { + TokenStream::Compiler(tts) => { + tts.evaluate_now(); + tts.stream + .extend(streams.into_iter().map(TokenStream::unwrap_nightly)); + } + TokenStream::Fallback(tts) => { + tts.extend(streams.into_iter().map(TokenStream::unwrap_stable)); + } + } + } +} + +impl Debug for TokenStream { + fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { + match self { + TokenStream::Compiler(tts) => Debug::fmt(&tts.clone().into_token_stream(), f), + TokenStream::Fallback(tts) => Debug::fmt(tts, f), + } + } +} + +impl LexError { + pub(crate) fn span(&self) -> Span { + match self { + LexError::Compiler(_) | LexError::CompilerPanic => Span::call_site(), + LexError::Fallback(e) => Span::Fallback(e.span()), + } + } +} + +impl From for LexError { + fn from(e: proc_macro::LexError) -> Self { + LexError::Compiler(e) + } +} + +impl From for LexError { + fn from(e: fallback::LexError) -> Self { + LexError::Fallback(e) + } +} + +impl Debug for LexError { + fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { + match self { + LexError::Compiler(e) => Debug::fmt(e, f), + LexError::Fallback(e) => Debug::fmt(e, f), + LexError::CompilerPanic => { + let fallback = fallback::LexError::call_site(); + Debug::fmt(&fallback, f) + } + } + } +} + +impl Display for LexError { + fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { + match self { + LexError::Compiler(e) => Display::fmt(e, f), + LexError::Fallback(e) => Display::fmt(e, f), + LexError::CompilerPanic => { + let fallback = fallback::LexError::call_site(); + Display::fmt(&fallback, f) + } + } + } +} + +#[derive(Clone)] +pub(crate) enum TokenTreeIter { + Compiler(proc_macro::token_stream::IntoIter), + Fallback(fallback::TokenTreeIter), +} + +impl IntoIterator for TokenStream { + type Item = TokenTree; + type IntoIter = TokenTreeIter; + + fn into_iter(self) -> TokenTreeIter { + match self { + TokenStream::Compiler(tts) => { + TokenTreeIter::Compiler(tts.into_token_stream().into_iter()) + } + TokenStream::Fallback(tts) => TokenTreeIter::Fallback(tts.into_iter()), + } + } +} + +impl Iterator for TokenTreeIter { + type Item = TokenTree; + + fn next(&mut self) -> Option { + let token = match self { + TokenTreeIter::Compiler(iter) => iter.next()?, + TokenTreeIter::Fallback(iter) => return iter.next(), + }; + Some(match token { + proc_macro::TokenTree::Group(tt) => crate::Group::_new(Group::Compiler(tt)).into(), + proc_macro::TokenTree::Punct(tt) => { + let spacing = match tt.spacing() { + proc_macro::Spacing::Joint => Spacing::Joint, + proc_macro::Spacing::Alone => Spacing::Alone, + }; + let mut o = Punct::new(tt.as_char(), spacing); + o.set_span(crate::Span::_new(Span::Compiler(tt.span()))); + o.into() + } + proc_macro::TokenTree::Ident(s) => crate::Ident::_new(Ident::Compiler(s)).into(), + proc_macro::TokenTree::Literal(l) => crate::Literal::_new(Literal::Compiler(l)).into(), + }) + } + + fn size_hint(&self) -> (usize, Option) { + match self { + TokenTreeIter::Compiler(tts) => tts.size_hint(), + TokenTreeIter::Fallback(tts) => tts.size_hint(), + } + } +} + +#[derive(Clone, PartialEq, Eq)] +#[cfg(super_unstable)] +pub(crate) enum SourceFile { + Compiler(proc_macro::SourceFile), + Fallback(fallback::SourceFile), +} + +#[cfg(super_unstable)] +impl SourceFile { + fn nightly(sf: proc_macro::SourceFile) -> Self { + SourceFile::Compiler(sf) + } + + /// 
Get the path to this source file as a string. + pub fn path(&self) -> PathBuf { + match self { + SourceFile::Compiler(a) => a.path(), + SourceFile::Fallback(a) => a.path(), + } + } + + pub fn is_real(&self) -> bool { + match self { + SourceFile::Compiler(a) => a.is_real(), + SourceFile::Fallback(a) => a.is_real(), + } + } +} + +#[cfg(super_unstable)] +impl Debug for SourceFile { + fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { + match self { + SourceFile::Compiler(a) => Debug::fmt(a, f), + SourceFile::Fallback(a) => Debug::fmt(a, f), + } + } +} + +#[derive(Copy, Clone)] +pub(crate) enum Span { + Compiler(proc_macro::Span), + Fallback(fallback::Span), +} + +impl Span { + pub fn call_site() -> Self { + if inside_proc_macro() { + Span::Compiler(proc_macro::Span::call_site()) + } else { + Span::Fallback(fallback::Span::call_site()) + } + } + + pub fn mixed_site() -> Self { + if inside_proc_macro() { + Span::Compiler(proc_macro::Span::mixed_site()) + } else { + Span::Fallback(fallback::Span::mixed_site()) + } + } + + #[cfg(super_unstable)] + pub fn def_site() -> Self { + if inside_proc_macro() { + Span::Compiler(proc_macro::Span::def_site()) + } else { + Span::Fallback(fallback::Span::def_site()) + } + } + + pub fn resolved_at(&self, other: Span) -> Span { + match (self, other) { + (Span::Compiler(a), Span::Compiler(b)) => Span::Compiler(a.resolved_at(b)), + (Span::Fallback(a), Span::Fallback(b)) => Span::Fallback(a.resolved_at(b)), + (Span::Compiler(_), Span::Fallback(_)) => mismatch(line!()), + (Span::Fallback(_), Span::Compiler(_)) => mismatch(line!()), + } + } + + pub fn located_at(&self, other: Span) -> Span { + match (self, other) { + (Span::Compiler(a), Span::Compiler(b)) => Span::Compiler(a.located_at(b)), + (Span::Fallback(a), Span::Fallback(b)) => Span::Fallback(a.located_at(b)), + (Span::Compiler(_), Span::Fallback(_)) => mismatch(line!()), + (Span::Fallback(_), Span::Compiler(_)) => mismatch(line!()), + } + } + + pub fn unwrap(self) -> proc_macro::Span { + match self { + Span::Compiler(s) => s, + Span::Fallback(_) => panic!("proc_macro::Span is only available in procedural macros"), + } + } + + #[cfg(super_unstable)] + pub fn source_file(&self) -> SourceFile { + match self { + Span::Compiler(s) => SourceFile::nightly(s.source_file()), + Span::Fallback(s) => SourceFile::Fallback(s.source_file()), + } + } + + #[cfg(span_locations)] + pub fn byte_range(&self) -> Range { + match self { + #[cfg(proc_macro_span)] + Span::Compiler(s) => s.byte_range(), + #[cfg(not(proc_macro_span))] + Span::Compiler(_) => 0..0, + Span::Fallback(s) => s.byte_range(), + } + } + + #[cfg(span_locations)] + pub fn start(&self) -> LineColumn { + match self { + Span::Compiler(_) => LineColumn { line: 0, column: 0 }, + Span::Fallback(s) => s.start(), + } + } + + #[cfg(span_locations)] + pub fn end(&self) -> LineColumn { + match self { + Span::Compiler(_) => LineColumn { line: 0, column: 0 }, + Span::Fallback(s) => s.end(), + } + } + + pub fn join(&self, other: Span) -> Option { + let ret = match (self, other) { + #[cfg(proc_macro_span)] + (Span::Compiler(a), Span::Compiler(b)) => Span::Compiler(a.join(b)?), + (Span::Fallback(a), Span::Fallback(b)) => Span::Fallback(a.join(b)?), + _ => return None, + }; + Some(ret) + } + + #[cfg(super_unstable)] + pub fn eq(&self, other: &Span) -> bool { + match (self, other) { + (Span::Compiler(a), Span::Compiler(b)) => a.eq(b), + (Span::Fallback(a), Span::Fallback(b)) => a.eq(b), + _ => false, + } + } + + pub fn source_text(&self) -> Option { + match self { + 
#[cfg(not(no_source_text))] + Span::Compiler(s) => s.source_text(), + #[cfg(no_source_text)] + Span::Compiler(_) => None, + Span::Fallback(s) => s.source_text(), + } + } + + fn unwrap_nightly(self) -> proc_macro::Span { + match self { + Span::Compiler(s) => s, + Span::Fallback(_) => mismatch(line!()), + } + } +} + +impl From for crate::Span { + fn from(proc_span: proc_macro::Span) -> Self { + crate::Span::_new(Span::Compiler(proc_span)) + } +} + +impl From for Span { + fn from(inner: fallback::Span) -> Self { + Span::Fallback(inner) + } +} + +impl Debug for Span { + fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { + match self { + Span::Compiler(s) => Debug::fmt(s, f), + Span::Fallback(s) => Debug::fmt(s, f), + } + } +} + +pub(crate) fn debug_span_field_if_nontrivial(debug: &mut fmt::DebugStruct, span: Span) { + match span { + Span::Compiler(s) => { + debug.field("span", &s); + } + Span::Fallback(s) => fallback::debug_span_field_if_nontrivial(debug, s), + } +} + +#[derive(Clone)] +pub(crate) enum Group { + Compiler(proc_macro::Group), + Fallback(fallback::Group), +} + +impl Group { + pub fn new(delimiter: Delimiter, stream: TokenStream) -> Self { + match stream { + TokenStream::Compiler(tts) => { + let delimiter = match delimiter { + Delimiter::Parenthesis => proc_macro::Delimiter::Parenthesis, + Delimiter::Bracket => proc_macro::Delimiter::Bracket, + Delimiter::Brace => proc_macro::Delimiter::Brace, + Delimiter::None => proc_macro::Delimiter::None, + }; + Group::Compiler(proc_macro::Group::new(delimiter, tts.into_token_stream())) + } + TokenStream::Fallback(stream) => { + Group::Fallback(fallback::Group::new(delimiter, stream)) + } + } + } + + pub fn delimiter(&self) -> Delimiter { + match self { + Group::Compiler(g) => match g.delimiter() { + proc_macro::Delimiter::Parenthesis => Delimiter::Parenthesis, + proc_macro::Delimiter::Bracket => Delimiter::Bracket, + proc_macro::Delimiter::Brace => Delimiter::Brace, + proc_macro::Delimiter::None => Delimiter::None, + }, + Group::Fallback(g) => g.delimiter(), + } + } + + pub fn stream(&self) -> TokenStream { + match self { + Group::Compiler(g) => TokenStream::Compiler(DeferredTokenStream::new(g.stream())), + Group::Fallback(g) => TokenStream::Fallback(g.stream()), + } + } + + pub fn span(&self) -> Span { + match self { + Group::Compiler(g) => Span::Compiler(g.span()), + Group::Fallback(g) => Span::Fallback(g.span()), + } + } + + pub fn span_open(&self) -> Span { + match self { + Group::Compiler(g) => Span::Compiler(g.span_open()), + Group::Fallback(g) => Span::Fallback(g.span_open()), + } + } + + pub fn span_close(&self) -> Span { + match self { + Group::Compiler(g) => Span::Compiler(g.span_close()), + Group::Fallback(g) => Span::Fallback(g.span_close()), + } + } + + pub fn set_span(&mut self, span: Span) { + match (self, span) { + (Group::Compiler(g), Span::Compiler(s)) => g.set_span(s), + (Group::Fallback(g), Span::Fallback(s)) => g.set_span(s), + (Group::Compiler(_), Span::Fallback(_)) => mismatch(line!()), + (Group::Fallback(_), Span::Compiler(_)) => mismatch(line!()), + } + } + + fn unwrap_nightly(self) -> proc_macro::Group { + match self { + Group::Compiler(g) => g, + Group::Fallback(_) => mismatch(line!()), + } + } +} + +impl From for Group { + fn from(g: fallback::Group) -> Self { + Group::Fallback(g) + } +} + +impl Display for Group { + fn fmt(&self, formatter: &mut fmt::Formatter) -> fmt::Result { + match self { + Group::Compiler(group) => Display::fmt(group, formatter), + Group::Fallback(group) => Display::fmt(group, formatter), 
+ } + } +} + +impl Debug for Group { + fn fmt(&self, formatter: &mut fmt::Formatter) -> fmt::Result { + match self { + Group::Compiler(group) => Debug::fmt(group, formatter), + Group::Fallback(group) => Debug::fmt(group, formatter), + } + } +} + +#[derive(Clone)] +pub(crate) enum Ident { + Compiler(proc_macro::Ident), + Fallback(fallback::Ident), +} + +impl Ident { + #[track_caller] + pub fn new_checked(string: &str, span: Span) -> Self { + match span { + Span::Compiler(s) => Ident::Compiler(proc_macro::Ident::new(string, s)), + Span::Fallback(s) => Ident::Fallback(fallback::Ident::new_checked(string, s)), + } + } + + pub fn new_unchecked(string: &str, span: fallback::Span) -> Self { + Ident::Fallback(fallback::Ident::new_unchecked(string, span)) + } + + #[track_caller] + pub fn new_raw_checked(string: &str, span: Span) -> Self { + match span { + Span::Compiler(s) => Ident::Compiler(proc_macro::Ident::new_raw(string, s)), + Span::Fallback(s) => Ident::Fallback(fallback::Ident::new_raw_checked(string, s)), + } + } + + pub fn new_raw_unchecked(string: &str, span: fallback::Span) -> Self { + Ident::Fallback(fallback::Ident::new_raw_unchecked(string, span)) + } + + pub fn span(&self) -> Span { + match self { + Ident::Compiler(t) => Span::Compiler(t.span()), + Ident::Fallback(t) => Span::Fallback(t.span()), + } + } + + pub fn set_span(&mut self, span: Span) { + match (self, span) { + (Ident::Compiler(t), Span::Compiler(s)) => t.set_span(s), + (Ident::Fallback(t), Span::Fallback(s)) => t.set_span(s), + (Ident::Compiler(_), Span::Fallback(_)) => mismatch(line!()), + (Ident::Fallback(_), Span::Compiler(_)) => mismatch(line!()), + } + } + + fn unwrap_nightly(self) -> proc_macro::Ident { + match self { + Ident::Compiler(s) => s, + Ident::Fallback(_) => mismatch(line!()), + } + } +} + +impl PartialEq for Ident { + fn eq(&self, other: &Ident) -> bool { + match (self, other) { + (Ident::Compiler(t), Ident::Compiler(o)) => t.to_string() == o.to_string(), + (Ident::Fallback(t), Ident::Fallback(o)) => t == o, + (Ident::Compiler(_), Ident::Fallback(_)) => mismatch(line!()), + (Ident::Fallback(_), Ident::Compiler(_)) => mismatch(line!()), + } + } +} + +impl PartialEq for Ident +where + T: ?Sized + AsRef, +{ + fn eq(&self, other: &T) -> bool { + let other = other.as_ref(); + match self { + Ident::Compiler(t) => t.to_string() == other, + Ident::Fallback(t) => t == other, + } + } +} + +impl Display for Ident { + fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { + match self { + Ident::Compiler(t) => Display::fmt(t, f), + Ident::Fallback(t) => Display::fmt(t, f), + } + } +} + +impl Debug for Ident { + fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { + match self { + Ident::Compiler(t) => Debug::fmt(t, f), + Ident::Fallback(t) => Debug::fmt(t, f), + } + } +} + +#[derive(Clone)] +pub(crate) enum Literal { + Compiler(proc_macro::Literal), + Fallback(fallback::Literal), +} + +macro_rules! suffixed_numbers { + ($($name:ident => $kind:ident,)*) => ($( + pub fn $name(n: $kind) -> Literal { + if inside_proc_macro() { + Literal::Compiler(proc_macro::Literal::$name(n)) + } else { + Literal::Fallback(fallback::Literal::$name(n)) + } + } + )*) +} + +macro_rules! 
unsuffixed_integers { + ($($name:ident => $kind:ident,)*) => ($( + pub fn $name(n: $kind) -> Literal { + if inside_proc_macro() { + Literal::Compiler(proc_macro::Literal::$name(n)) + } else { + Literal::Fallback(fallback::Literal::$name(n)) + } + } + )*) +} + +impl Literal { + pub unsafe fn from_str_unchecked(repr: &str) -> Self { + if inside_proc_macro() { + Literal::Compiler(proc_macro::Literal::from_str(repr).expect("invalid literal")) + } else { + Literal::Fallback(unsafe { fallback::Literal::from_str_unchecked(repr) }) + } + } + + suffixed_numbers! { + u8_suffixed => u8, + u16_suffixed => u16, + u32_suffixed => u32, + u64_suffixed => u64, + u128_suffixed => u128, + usize_suffixed => usize, + i8_suffixed => i8, + i16_suffixed => i16, + i32_suffixed => i32, + i64_suffixed => i64, + i128_suffixed => i128, + isize_suffixed => isize, + + f32_suffixed => f32, + f64_suffixed => f64, + } + + unsuffixed_integers! { + u8_unsuffixed => u8, + u16_unsuffixed => u16, + u32_unsuffixed => u32, + u64_unsuffixed => u64, + u128_unsuffixed => u128, + usize_unsuffixed => usize, + i8_unsuffixed => i8, + i16_unsuffixed => i16, + i32_unsuffixed => i32, + i64_unsuffixed => i64, + i128_unsuffixed => i128, + isize_unsuffixed => isize, + } + + pub fn f32_unsuffixed(f: f32) -> Literal { + if inside_proc_macro() { + Literal::Compiler(proc_macro::Literal::f32_unsuffixed(f)) + } else { + Literal::Fallback(fallback::Literal::f32_unsuffixed(f)) + } + } + + pub fn f64_unsuffixed(f: f64) -> Literal { + if inside_proc_macro() { + Literal::Compiler(proc_macro::Literal::f64_unsuffixed(f)) + } else { + Literal::Fallback(fallback::Literal::f64_unsuffixed(f)) + } + } + + pub fn string(string: &str) -> Literal { + if inside_proc_macro() { + Literal::Compiler(proc_macro::Literal::string(string)) + } else { + Literal::Fallback(fallback::Literal::string(string)) + } + } + + pub fn character(ch: char) -> Literal { + if inside_proc_macro() { + Literal::Compiler(proc_macro::Literal::character(ch)) + } else { + Literal::Fallback(fallback::Literal::character(ch)) + } + } + + pub fn byte_character(byte: u8) -> Literal { + if inside_proc_macro() { + Literal::Compiler({ + #[cfg(not(no_literal_byte_character))] + { + proc_macro::Literal::byte_character(byte) + } + + #[cfg(no_literal_byte_character)] + { + let fallback = fallback::Literal::byte_character(byte); + fallback.repr.parse::().unwrap() + } + }) + } else { + Literal::Fallback(fallback::Literal::byte_character(byte)) + } + } + + pub fn byte_string(bytes: &[u8]) -> Literal { + if inside_proc_macro() { + Literal::Compiler(proc_macro::Literal::byte_string(bytes)) + } else { + Literal::Fallback(fallback::Literal::byte_string(bytes)) + } + } + + pub fn c_string(string: &CStr) -> Literal { + if inside_proc_macro() { + Literal::Compiler({ + #[cfg(not(no_literal_c_string))] + { + proc_macro::Literal::c_string(string) + } + + #[cfg(no_literal_c_string)] + { + let fallback = fallback::Literal::c_string(string); + fallback.repr.parse::().unwrap() + } + }) + } else { + Literal::Fallback(fallback::Literal::c_string(string)) + } + } + + pub fn span(&self) -> Span { + match self { + Literal::Compiler(lit) => Span::Compiler(lit.span()), + Literal::Fallback(lit) => Span::Fallback(lit.span()), + } + } + + pub fn set_span(&mut self, span: Span) { + match (self, span) { + (Literal::Compiler(lit), Span::Compiler(s)) => lit.set_span(s), + (Literal::Fallback(lit), Span::Fallback(s)) => lit.set_span(s), + (Literal::Compiler(_), Span::Fallback(_)) => mismatch(line!()), + (Literal::Fallback(_), 
Span::Compiler(_)) => mismatch(line!()), + } + } + + pub fn subspan>(&self, range: R) -> Option { + match self { + #[cfg(proc_macro_span)] + Literal::Compiler(lit) => lit.subspan(range).map(Span::Compiler), + #[cfg(not(proc_macro_span))] + Literal::Compiler(_lit) => None, + Literal::Fallback(lit) => lit.subspan(range).map(Span::Fallback), + } + } + + fn unwrap_nightly(self) -> proc_macro::Literal { + match self { + Literal::Compiler(s) => s, + Literal::Fallback(_) => mismatch(line!()), + } + } +} + +impl From for Literal { + fn from(s: fallback::Literal) -> Self { + Literal::Fallback(s) + } +} + +impl FromStr for Literal { + type Err = LexError; + + fn from_str(repr: &str) -> Result { + if inside_proc_macro() { + let literal = proc_macro::Literal::from_str(repr)?; + Ok(Literal::Compiler(literal)) + } else { + let literal = fallback::Literal::from_str(repr)?; + Ok(Literal::Fallback(literal)) + } + } +} + +impl Display for Literal { + fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { + match self { + Literal::Compiler(t) => Display::fmt(t, f), + Literal::Fallback(t) => Display::fmt(t, f), + } + } +} + +impl Debug for Literal { + fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { + match self { + Literal::Compiler(t) => Debug::fmt(t, f), + Literal::Fallback(t) => Debug::fmt(t, f), + } + } +} + +#[cfg(span_locations)] +pub(crate) fn invalidate_current_thread_spans() { + if inside_proc_macro() { + panic!( + "proc_macro2::extra::invalidate_current_thread_spans is not available in procedural macros" + ); + } else { + crate::fallback::invalidate_current_thread_spans(); + } +} diff --git a/rust/hw/char/pl011/vendor/proc-macro2/tests/comments.rs b/rust/hw/char/pl011/vendor/proc-macro2/tests/comments.rs new file mode 100644 index 0000000000..4f7236dea9 --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro2/tests/comments.rs @@ -0,0 +1,105 @@ +#![allow(clippy::assertions_on_result_states)] + +use proc_macro2::{Delimiter, Literal, Spacing, TokenStream, TokenTree}; + +// #[doc = "..."] -> "..." +fn lit_of_outer_doc_comment(tokens: &TokenStream) -> Literal { + lit_of_doc_comment(tokens, false) +} + +// #![doc = "..."] -> "..." 
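+// Doc comments lex into attribute-shaped token trees: `/// doc` becomes
+// `#`, `[doc = " doc"]`, and `//! doc` becomes `#`, `!`, `[doc = " doc"]`.
+// These helpers walk that shape and return the string literal.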
+fn lit_of_inner_doc_comment(tokens: &TokenStream) -> Literal { + lit_of_doc_comment(tokens, true) +} + +fn lit_of_doc_comment(tokens: &TokenStream, inner: bool) -> Literal { + let mut iter = tokens.clone().into_iter(); + match iter.next().unwrap() { + TokenTree::Punct(punct) => { + assert_eq!(punct.as_char(), '#'); + assert_eq!(punct.spacing(), Spacing::Alone); + } + _ => panic!("wrong token {:?}", tokens), + } + if inner { + match iter.next().unwrap() { + TokenTree::Punct(punct) => { + assert_eq!(punct.as_char(), '!'); + assert_eq!(punct.spacing(), Spacing::Alone); + } + _ => panic!("wrong token {:?}", tokens), + } + } + iter = match iter.next().unwrap() { + TokenTree::Group(group) => { + assert_eq!(group.delimiter(), Delimiter::Bracket); + assert!(iter.next().is_none(), "unexpected token {:?}", tokens); + group.stream().into_iter() + } + _ => panic!("wrong token {:?}", tokens), + }; + match iter.next().unwrap() { + TokenTree::Ident(ident) => assert_eq!(ident.to_string(), "doc"), + _ => panic!("wrong token {:?}", tokens), + } + match iter.next().unwrap() { + TokenTree::Punct(punct) => { + assert_eq!(punct.as_char(), '='); + assert_eq!(punct.spacing(), Spacing::Alone); + } + _ => panic!("wrong token {:?}", tokens), + } + match iter.next().unwrap() { + TokenTree::Literal(literal) => { + assert!(iter.next().is_none(), "unexpected token {:?}", tokens); + literal + } + _ => panic!("wrong token {:?}", tokens), + } +} + +#[test] +fn closed_immediately() { + let stream = "/**/".parse::().unwrap(); + let tokens = stream.into_iter().collect::>(); + assert!(tokens.is_empty(), "not empty -- {:?}", tokens); +} + +#[test] +fn incomplete() { + assert!("/*/".parse::().is_err()); +} + +#[test] +fn lit() { + let stream = "/// doc".parse::().unwrap(); + let lit = lit_of_outer_doc_comment(&stream); + assert_eq!(lit.to_string(), "\" doc\""); + + let stream = "//! doc".parse::().unwrap(); + let lit = lit_of_inner_doc_comment(&stream); + assert_eq!(lit.to_string(), "\" doc\""); + + let stream = "/** doc */".parse::().unwrap(); + let lit = lit_of_outer_doc_comment(&stream); + assert_eq!(lit.to_string(), "\" doc \""); + + let stream = "/*! doc */".parse::().unwrap(); + let lit = lit_of_inner_doc_comment(&stream); + assert_eq!(lit.to_string(), "\" doc \""); +} + +#[test] +fn carriage_return() { + let stream = "///\r\n".parse::().unwrap(); + let lit = lit_of_outer_doc_comment(&stream); + assert_eq!(lit.to_string(), "\"\""); + + let stream = "/**\r\n*/".parse::().unwrap(); + let lit = lit_of_outer_doc_comment(&stream); + assert_eq!(lit.to_string(), "\"\\r\\n\""); + + "///\r".parse::().unwrap_err(); + "///\r \n".parse::().unwrap_err(); + "/**\r \n*/".parse::().unwrap_err(); +} diff --git a/rust/hw/char/pl011/vendor/proc-macro2/tests/features.rs b/rust/hw/char/pl011/vendor/proc-macro2/tests/features.rs new file mode 100644 index 0000000000..073f6e60fb --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro2/tests/features.rs @@ -0,0 +1,8 @@ +#[test] +#[ignore] +fn make_sure_no_proc_macro() { + assert!( + !cfg!(feature = "proc-macro"), + "still compiled with proc_macro?" 
+ ); +} diff --git a/rust/hw/char/pl011/vendor/proc-macro2/tests/marker.rs b/rust/hw/char/pl011/vendor/proc-macro2/tests/marker.rs new file mode 100644 index 0000000000..99f64c068f --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro2/tests/marker.rs @@ -0,0 +1,100 @@ +#![allow(clippy::extra_unused_type_parameters)] + +use proc_macro2::{ + Delimiter, Group, Ident, LexError, Literal, Punct, Spacing, Span, TokenStream, TokenTree, +}; + +macro_rules! assert_impl { + ($ty:ident is $($marker:ident) and +) => { + #[test] + #[allow(non_snake_case)] + fn $ty() { + fn assert_implemented() {} + assert_implemented::<$ty>(); + } + }; + + ($ty:ident is not $($marker:ident) or +) => { + #[test] + #[allow(non_snake_case)] + fn $ty() { + $( + { + // Implemented for types that implement $marker. + #[allow(dead_code)] + trait IsNotImplemented { + fn assert_not_implemented() {} + } + impl IsNotImplemented for T {} + + // Implemented for the type being tested. + trait IsImplemented { + fn assert_not_implemented() {} + } + impl IsImplemented for $ty {} + + // If $ty does not implement $marker, there is no ambiguity + // in the following trait method call. + <$ty>::assert_not_implemented(); + } + )+ + } + }; +} + +assert_impl!(Delimiter is Send and Sync); +assert_impl!(Spacing is Send and Sync); + +assert_impl!(Group is not Send or Sync); +assert_impl!(Ident is not Send or Sync); +assert_impl!(LexError is not Send or Sync); +assert_impl!(Literal is not Send or Sync); +assert_impl!(Punct is not Send or Sync); +assert_impl!(Span is not Send or Sync); +assert_impl!(TokenStream is not Send or Sync); +assert_impl!(TokenTree is not Send or Sync); + +#[cfg(procmacro2_semver_exempt)] +mod semver_exempt { + use proc_macro2::{LineColumn, SourceFile}; + + assert_impl!(LineColumn is Send and Sync); + + assert_impl!(SourceFile is not Send or Sync); +} + +mod unwind_safe { + use proc_macro2::{ + Delimiter, Group, Ident, LexError, Literal, Punct, Spacing, Span, TokenStream, TokenTree, + }; + #[cfg(procmacro2_semver_exempt)] + use proc_macro2::{LineColumn, SourceFile}; + use std::panic::{RefUnwindSafe, UnwindSafe}; + + macro_rules! assert_unwind_safe { + ($($types:ident)*) => { + $( + assert_impl!($types is UnwindSafe and RefUnwindSafe); + )* + }; + } + + assert_unwind_safe! { + Delimiter + Group + Ident + LexError + Literal + Punct + Spacing + Span + TokenStream + TokenTree + } + + #[cfg(procmacro2_semver_exempt)] + assert_unwind_safe! 
{ + LineColumn + SourceFile + } +} diff --git a/rust/hw/char/pl011/vendor/proc-macro2/tests/test.rs b/rust/hw/char/pl011/vendor/proc-macro2/tests/test.rs new file mode 100644 index 0000000000..0d7c88d3df --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro2/tests/test.rs @@ -0,0 +1,905 @@ +#![allow( + clippy::assertions_on_result_states, + clippy::items_after_statements, + clippy::needless_pass_by_value, + clippy::needless_raw_string_hashes, + clippy::non_ascii_literal, + clippy::octal_escapes +)] + +use proc_macro2::{Ident, Literal, Punct, Spacing, Span, TokenStream, TokenTree}; +use std::ffi::CStr; +use std::iter; +use std::str::{self, FromStr}; + +#[test] +fn idents() { + assert_eq!( + Ident::new("String", Span::call_site()).to_string(), + "String" + ); + assert_eq!(Ident::new("fn", Span::call_site()).to_string(), "fn"); + assert_eq!(Ident::new("_", Span::call_site()).to_string(), "_"); +} + +#[test] +fn raw_idents() { + assert_eq!( + Ident::new_raw("String", Span::call_site()).to_string(), + "r#String" + ); + assert_eq!(Ident::new_raw("fn", Span::call_site()).to_string(), "r#fn"); +} + +#[test] +#[should_panic(expected = "`r#_` cannot be a raw identifier")] +fn ident_raw_underscore() { + Ident::new_raw("_", Span::call_site()); +} + +#[test] +#[should_panic(expected = "`r#super` cannot be a raw identifier")] +fn ident_raw_reserved() { + Ident::new_raw("super", Span::call_site()); +} + +#[test] +#[should_panic(expected = "Ident is not allowed to be empty; use Option")] +fn ident_empty() { + Ident::new("", Span::call_site()); +} + +#[test] +#[should_panic(expected = "Ident cannot be a number; use Literal instead")] +fn ident_number() { + Ident::new("255", Span::call_site()); +} + +#[test] +#[should_panic(expected = "\"a#\" is not a valid Ident")] +fn ident_invalid() { + Ident::new("a#", Span::call_site()); +} + +#[test] +#[should_panic(expected = "not a valid Ident")] +fn raw_ident_empty() { + Ident::new("r#", Span::call_site()); +} + +#[test] +#[should_panic(expected = "not a valid Ident")] +fn raw_ident_number() { + Ident::new("r#255", Span::call_site()); +} + +#[test] +#[should_panic(expected = "\"r#a#\" is not a valid Ident")] +fn raw_ident_invalid() { + Ident::new("r#a#", Span::call_site()); +} + +#[test] +#[should_panic(expected = "not a valid Ident")] +fn lifetime_empty() { + Ident::new("'", Span::call_site()); +} + +#[test] +#[should_panic(expected = "not a valid Ident")] +fn lifetime_number() { + Ident::new("'255", Span::call_site()); +} + +#[test] +#[should_panic(expected = r#""'a#" is not a valid Ident"#)] +fn lifetime_invalid() { + Ident::new("'a#", Span::call_site()); +} + +#[test] +fn literal_string() { + #[track_caller] + fn assert(literal: Literal, expected: &str) { + assert_eq!(literal.to_string(), expected.trim()); + } + + assert(Literal::string(""), r#" "" "#); + assert(Literal::string("aA"), r#" "aA" "#); + assert(Literal::string("\t"), r#" "\t" "#); + assert(Literal::string("❤"), r#" "❤" "#); + assert(Literal::string("'"), r#" "'" "#); + assert(Literal::string("\""), r#" "\"" "#); + assert(Literal::string("\0"), r#" "\0" "#); + assert(Literal::string("\u{1}"), r#" "\u{1}" "#); + assert( + Literal::string("a\00b\07c\08d\0e\0"), + r#" "a\x000b\x007c\08d\0e\0" "#, + ); + + "\"\\\r\n x\"".parse::().unwrap(); + "\"\\\r\n \rx\"".parse::().unwrap_err(); +} + +#[test] +fn literal_raw_string() { + "r\"\r\n\"".parse::().unwrap(); + + fn raw_string_literal_with_hashes(n: usize) -> String { + let mut literal = String::new(); + literal.push('r'); + 
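+        // e.g. n == 2 builds the empty raw string literal r##""##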
literal.extend(iter::repeat('#').take(n)); + literal.push('"'); + literal.push('"'); + literal.extend(iter::repeat('#').take(n)); + literal + } + + raw_string_literal_with_hashes(255) + .parse::() + .unwrap(); + + // https://github.com/rust-lang/rust/pull/95251 + raw_string_literal_with_hashes(256) + .parse::() + .unwrap_err(); +} + +#[test] +fn literal_byte_character() { + #[track_caller] + fn assert(literal: Literal, expected: &str) { + assert_eq!(literal.to_string(), expected.trim()); + } + + assert(Literal::byte_character(b'a'), r#" b'a' "#); + assert(Literal::byte_character(b'\0'), r#" b'\0' "#); + assert(Literal::byte_character(b'\t'), r#" b'\t' "#); + assert(Literal::byte_character(b'\n'), r#" b'\n' "#); + assert(Literal::byte_character(b'\r'), r#" b'\r' "#); + assert(Literal::byte_character(b'\''), r#" b'\'' "#); + assert(Literal::byte_character(b'\\'), r#" b'\\' "#); + assert(Literal::byte_character(b'\x1f'), r#" b'\x1F' "#); + assert(Literal::byte_character(b'"'), r#" b'"' "#); +} + +#[test] +fn literal_byte_string() { + #[track_caller] + fn assert(literal: Literal, expected: &str) { + assert_eq!(literal.to_string(), expected.trim()); + } + + assert(Literal::byte_string(b""), r#" b"" "#); + assert(Literal::byte_string(b"\0"), r#" b"\0" "#); + assert(Literal::byte_string(b"\t"), r#" b"\t" "#); + assert(Literal::byte_string(b"\n"), r#" b"\n" "#); + assert(Literal::byte_string(b"\r"), r#" b"\r" "#); + assert(Literal::byte_string(b"\""), r#" b"\"" "#); + assert(Literal::byte_string(b"\\"), r#" b"\\" "#); + assert(Literal::byte_string(b"\x1f"), r#" b"\x1F" "#); + assert(Literal::byte_string(b"'"), r#" b"'" "#); + assert( + Literal::byte_string(b"a\00b\07c\08d\0e\0"), + r#" b"a\x000b\x007c\08d\0e\0" "#, + ); + + "b\"\\\r\n x\"".parse::().unwrap(); + "b\"\\\r\n \rx\"".parse::().unwrap_err(); + "b\"\\\r\n \u{a0}x\"".parse::().unwrap_err(); + "br\"\u{a0}\"".parse::().unwrap_err(); +} + +#[test] +fn literal_c_string() { + #[track_caller] + fn assert(literal: Literal, expected: &str) { + assert_eq!(literal.to_string(), expected.trim()); + } + + assert(Literal::c_string(<&CStr>::default()), r#" c"" "#); + assert( + Literal::c_string(CStr::from_bytes_with_nul(b"aA\0").unwrap()), + r#" c"aA" "#, + ); + assert( + Literal::c_string(CStr::from_bytes_with_nul(b"aA\0").unwrap()), + r#" c"aA" "#, + ); + assert( + Literal::c_string(CStr::from_bytes_with_nul(b"\t\0").unwrap()), + r#" c"\t" "#, + ); + assert( + Literal::c_string(CStr::from_bytes_with_nul(b"\xE2\x9D\xA4\0").unwrap()), + r#" c"❤" "#, + ); + assert( + Literal::c_string(CStr::from_bytes_with_nul(b"'\0").unwrap()), + r#" c"'" "#, + ); + assert( + Literal::c_string(CStr::from_bytes_with_nul(b"\"\0").unwrap()), + r#" c"\"" "#, + ); + assert( + Literal::c_string(CStr::from_bytes_with_nul(b"\x7F\xFF\xFE\xCC\xB3\0").unwrap()), + r#" c"\u{7f}\xFF\xFE\u{333}" "#, + ); + + let strings = r###" + c"hello\x80我叫\u{1F980}" // from the RFC + cr"\" + cr##"Hello "world"!"## + c"\t\n\r\"\\" + "###; + + let mut tokens = strings.parse::().unwrap().into_iter(); + + for expected in &[ + r#"c"hello\x80我叫\u{1F980}""#, + r#"cr"\""#, + r###"cr##"Hello "world"!"##"###, + r#"c"\t\n\r\"\\""#, + ] { + match tokens.next().unwrap() { + TokenTree::Literal(literal) => { + assert_eq!(literal.to_string(), *expected); + } + unexpected => panic!("unexpected token: {:?}", unexpected), + } + } + + if let Some(unexpected) = tokens.next() { + panic!("unexpected token: {:?}", unexpected); + } + + for invalid in &[r#"c"\0""#, r#"c"\x00""#, r#"c"\u{0}""#, "c\"\0\""] { + if let 
Ok(unexpected) = invalid.parse::() { + panic!("unexpected token: {:?}", unexpected); + } + } +} + +#[test] +fn literal_character() { + #[track_caller] + fn assert(literal: Literal, expected: &str) { + assert_eq!(literal.to_string(), expected.trim()); + } + + assert(Literal::character('a'), r#" 'a' "#); + assert(Literal::character('\t'), r#" '\t' "#); + assert(Literal::character('❤'), r#" '❤' "#); + assert(Literal::character('\''), r#" '\'' "#); + assert(Literal::character('"'), r#" '"' "#); + assert(Literal::character('\0'), r#" '\0' "#); + assert(Literal::character('\u{1}'), r#" '\u{1}' "#); +} + +#[test] +fn literal_integer() { + #[track_caller] + fn assert(literal: Literal, expected: &str) { + assert_eq!(literal.to_string(), expected); + } + + assert(Literal::u8_suffixed(10), "10u8"); + assert(Literal::u16_suffixed(10), "10u16"); + assert(Literal::u32_suffixed(10), "10u32"); + assert(Literal::u64_suffixed(10), "10u64"); + assert(Literal::u128_suffixed(10), "10u128"); + assert(Literal::usize_suffixed(10), "10usize"); + + assert(Literal::i8_suffixed(10), "10i8"); + assert(Literal::i16_suffixed(10), "10i16"); + assert(Literal::i32_suffixed(10), "10i32"); + assert(Literal::i64_suffixed(10), "10i64"); + assert(Literal::i128_suffixed(10), "10i128"); + assert(Literal::isize_suffixed(10), "10isize"); + + assert(Literal::u8_unsuffixed(10), "10"); + assert(Literal::u16_unsuffixed(10), "10"); + assert(Literal::u32_unsuffixed(10), "10"); + assert(Literal::u64_unsuffixed(10), "10"); + assert(Literal::u128_unsuffixed(10), "10"); + assert(Literal::usize_unsuffixed(10), "10"); + + assert(Literal::i8_unsuffixed(10), "10"); + assert(Literal::i16_unsuffixed(10), "10"); + assert(Literal::i32_unsuffixed(10), "10"); + assert(Literal::i64_unsuffixed(10), "10"); + assert(Literal::i128_unsuffixed(10), "10"); + assert(Literal::isize_unsuffixed(10), "10"); + + assert(Literal::i32_suffixed(-10), "-10i32"); + assert(Literal::i32_unsuffixed(-10), "-10"); +} + +#[test] +fn literal_float() { + #[track_caller] + fn assert(literal: Literal, expected: &str) { + assert_eq!(literal.to_string(), expected); + } + + assert(Literal::f32_suffixed(10.0), "10f32"); + assert(Literal::f32_suffixed(-10.0), "-10f32"); + assert(Literal::f64_suffixed(10.0), "10f64"); + assert(Literal::f64_suffixed(-10.0), "-10f64"); + + assert(Literal::f32_unsuffixed(10.0), "10.0"); + assert(Literal::f32_unsuffixed(-10.0), "-10.0"); + assert(Literal::f64_unsuffixed(10.0), "10.0"); + assert(Literal::f64_unsuffixed(-10.0), "-10.0"); + + assert( + Literal::f64_unsuffixed(1e100), + "10000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000.0", + ); +} + +#[test] +fn literal_suffix() { + fn token_count(p: &str) -> usize { + p.parse::().unwrap().into_iter().count() + } + + assert_eq!(token_count("999u256"), 1); + assert_eq!(token_count("999r#u256"), 3); + assert_eq!(token_count("1."), 1); + assert_eq!(token_count("1.f32"), 3); + assert_eq!(token_count("1.0_0"), 1); + assert_eq!(token_count("1._0"), 3); + assert_eq!(token_count("1._m"), 3); + assert_eq!(token_count("\"\"s"), 1); + assert_eq!(token_count("r\"\"r"), 1); + assert_eq!(token_count("r#\"\"#r"), 1); + assert_eq!(token_count("b\"\"b"), 1); + assert_eq!(token_count("br\"\"br"), 1); + assert_eq!(token_count("br#\"\"#br"), 1); + assert_eq!(token_count("c\"\"c"), 1); + assert_eq!(token_count("cr\"\"cr"), 1); + assert_eq!(token_count("cr#\"\"#cr"), 1); + assert_eq!(token_count("'c'c"), 1); + assert_eq!(token_count("b'b'b"), 1); + assert_eq!(token_count("0E"), 
1); + assert_eq!(token_count("0o0A"), 1); + assert_eq!(token_count("0E--0"), 4); + assert_eq!(token_count("0.0ECMA"), 1); +} + +#[test] +fn literal_iter_negative() { + let negative_literal = Literal::i32_suffixed(-3); + let tokens = TokenStream::from(TokenTree::Literal(negative_literal)); + let mut iter = tokens.into_iter(); + match iter.next().unwrap() { + TokenTree::Punct(punct) => { + assert_eq!(punct.as_char(), '-'); + assert_eq!(punct.spacing(), Spacing::Alone); + } + unexpected => panic!("unexpected token {:?}", unexpected), + } + match iter.next().unwrap() { + TokenTree::Literal(literal) => { + assert_eq!(literal.to_string(), "3i32"); + } + unexpected => panic!("unexpected token {:?}", unexpected), + } + assert!(iter.next().is_none()); +} + +#[test] +fn literal_parse() { + assert!("1".parse::().is_ok()); + assert!("-1".parse::().is_ok()); + assert!("-1u12".parse::().is_ok()); + assert!("1.0".parse::().is_ok()); + assert!("-1.0".parse::().is_ok()); + assert!("-1.0f12".parse::().is_ok()); + assert!("'a'".parse::().is_ok()); + assert!("\"\n\"".parse::().is_ok()); + assert!("0 1".parse::().is_err()); + assert!(" 0".parse::().is_err()); + assert!("0 ".parse::().is_err()); + assert!("/* comment */0".parse::().is_err()); + assert!("0/* comment */".parse::().is_err()); + assert!("0// comment".parse::().is_err()); + assert!("- 1".parse::().is_err()); + assert!("- 1.0".parse::().is_err()); + assert!("-\"\"".parse::().is_err()); +} + +#[test] +fn literal_span() { + let positive = "0.1".parse::().unwrap(); + let negative = "-0.1".parse::().unwrap(); + let subspan = positive.subspan(1..2); + + #[cfg(not(span_locations))] + { + let _ = negative; + assert!(subspan.is_none()); + } + + #[cfg(span_locations)] + { + assert_eq!(positive.span().start().column, 0); + assert_eq!(positive.span().end().column, 3); + assert_eq!(negative.span().start().column, 0); + assert_eq!(negative.span().end().column, 4); + assert_eq!(subspan.unwrap().source_text().unwrap(), "."); + } + + assert!(positive.subspan(1..4).is_none()); +} + +#[cfg(span_locations)] +#[test] +fn source_text() { + let input = " 𓀕 a z "; + let mut tokens = input + .parse::() + .unwrap() + .into_iter(); + + let first = tokens.next().unwrap(); + assert_eq!("𓀕", first.span().source_text().unwrap()); + + let second = tokens.next().unwrap(); + let third = tokens.next().unwrap(); + assert_eq!("z", third.span().source_text().unwrap()); + assert_eq!("a", second.span().source_text().unwrap()); +} + +#[test] +fn roundtrip() { + fn roundtrip(p: &str) { + println!("parse: {}", p); + let s = p.parse::().unwrap().to_string(); + println!("first: {}", s); + let s2 = s.parse::().unwrap().to_string(); + assert_eq!(s, s2); + } + roundtrip("a"); + roundtrip("<<"); + roundtrip("<<="); + roundtrip( + " + 1 + 1.0 + 1f32 + 2f64 + 1usize + 4isize + 4e10 + 1_000 + 1_0i32 + 8u8 + 9 + 0 + 0xffffffffffffffffffffffffffffffff + 1x + 1u80 + 1f320 + ", + ); + roundtrip("'a"); + roundtrip("'_"); + roundtrip("'static"); + roundtrip(r"'\u{10__FFFF}'"); + roundtrip("\"\\u{10_F0FF__}foo\\u{1_0_0_0__}\""); +} + +#[test] +fn fail() { + fn fail(p: &str) { + if let Ok(s) = p.parse::() { + panic!("should have failed to parse: {}\n{:#?}", p, s); + } + } + fail("' static"); + fail("r#1"); + fail("r#_"); + fail("\"\\u{0000000}\""); // overlong unicode escape (rust allows at most 6 hex digits) + fail("\"\\u{999999}\""); // outside of valid range of char + fail("\"\\u{_0}\""); // leading underscore + fail("\"\\u{}\""); // empty + fail("b\"\r\""); // bare carriage return in byte string + 
fail("r\"\r\""); // bare carriage return in raw string + fail("\"\\\r \""); // backslash carriage return + fail("'aa'aa"); + fail("br##\"\"#"); + fail("cr##\"\"#"); + fail("\"\\\n\u{85}\r\""); +} + +#[cfg(span_locations)] +#[test] +fn span_test() { + check_spans( + "\ +/// This is a document comment +testing 123 +{ + testing 234 +}", + &[ + (1, 0, 1, 30), // # + (1, 0, 1, 30), // [ ... ] + (1, 0, 1, 30), // doc + (1, 0, 1, 30), // = + (1, 0, 1, 30), // "This is..." + (2, 0, 2, 7), // testing + (2, 8, 2, 11), // 123 + (3, 0, 5, 1), // { ... } + (4, 2, 4, 9), // testing + (4, 10, 4, 13), // 234 + ], + ); +} + +#[cfg(procmacro2_semver_exempt)] +#[test] +fn default_span() { + let start = Span::call_site().start(); + assert_eq!(start.line, 1); + assert_eq!(start.column, 0); + let end = Span::call_site().end(); + assert_eq!(end.line, 1); + assert_eq!(end.column, 0); + let source_file = Span::call_site().source_file(); + assert_eq!(source_file.path().to_string_lossy(), ""); + assert!(!source_file.is_real()); +} + +#[cfg(procmacro2_semver_exempt)] +#[test] +fn span_join() { + let source1 = "aaa\nbbb" + .parse::() + .unwrap() + .into_iter() + .collect::>(); + let source2 = "ccc\nddd" + .parse::() + .unwrap() + .into_iter() + .collect::>(); + + assert!(source1[0].span().source_file() != source2[0].span().source_file()); + assert_eq!( + source1[0].span().source_file(), + source1[1].span().source_file() + ); + + let joined1 = source1[0].span().join(source1[1].span()); + let joined2 = source1[0].span().join(source2[0].span()); + assert!(joined1.is_some()); + assert!(joined2.is_none()); + + let start = joined1.unwrap().start(); + let end = joined1.unwrap().end(); + assert_eq!(start.line, 1); + assert_eq!(start.column, 0); + assert_eq!(end.line, 2); + assert_eq!(end.column, 3); + + assert_eq!( + joined1.unwrap().source_file(), + source1[0].span().source_file() + ); +} + +#[test] +fn no_panic() { + let s = str::from_utf8(b"b\'\xc2\x86 \x00\x00\x00^\"").unwrap(); + assert!(s.parse::().is_err()); +} + +#[test] +fn punct_before_comment() { + let mut tts = TokenStream::from_str("~// comment").unwrap().into_iter(); + match tts.next().unwrap() { + TokenTree::Punct(tt) => { + assert_eq!(tt.as_char(), '~'); + assert_eq!(tt.spacing(), Spacing::Alone); + } + wrong => panic!("wrong token {:?}", wrong), + } +} + +#[test] +fn joint_last_token() { + // This test verifies that we match the behavior of libproc_macro *not* in + // the range nightly-2020-09-06 through nightly-2020-09-10, in which this + // behavior was temporarily broken. 
+ // See https://github.com/rust-lang/rust/issues/76399 + + let joint_punct = Punct::new(':', Spacing::Joint); + let stream = TokenStream::from(TokenTree::Punct(joint_punct)); + let punct = match stream.into_iter().next().unwrap() { + TokenTree::Punct(punct) => punct, + _ => unreachable!(), + }; + assert_eq!(punct.spacing(), Spacing::Joint); +} + +#[test] +fn raw_identifier() { + let mut tts = TokenStream::from_str("r#dyn").unwrap().into_iter(); + match tts.next().unwrap() { + TokenTree::Ident(raw) => assert_eq!("r#dyn", raw.to_string()), + wrong => panic!("wrong token {:?}", wrong), + } + assert!(tts.next().is_none()); +} + +#[test] +fn test_debug_ident() { + let ident = Ident::new("proc_macro", Span::call_site()); + + #[cfg(not(span_locations))] + let expected = "Ident(proc_macro)"; + + #[cfg(span_locations)] + let expected = "Ident { sym: proc_macro }"; + + assert_eq!(expected, format!("{:?}", ident)); +} + +#[test] +fn test_debug_tokenstream() { + let tts = TokenStream::from_str("[a + 1]").unwrap(); + + #[cfg(not(span_locations))] + let expected = "\ +TokenStream [ + Group { + delimiter: Bracket, + stream: TokenStream [ + Ident { + sym: a, + }, + Punct { + char: '+', + spacing: Alone, + }, + Literal { + lit: 1, + }, + ], + }, +]\ + "; + + #[cfg(not(span_locations))] + let expected_before_trailing_commas = "\ +TokenStream [ + Group { + delimiter: Bracket, + stream: TokenStream [ + Ident { + sym: a + }, + Punct { + char: '+', + spacing: Alone + }, + Literal { + lit: 1 + } + ] + } +]\ + "; + + #[cfg(span_locations)] + let expected = "\ +TokenStream [ + Group { + delimiter: Bracket, + stream: TokenStream [ + Ident { + sym: a, + span: bytes(2..3), + }, + Punct { + char: '+', + spacing: Alone, + span: bytes(4..5), + }, + Literal { + lit: 1, + span: bytes(6..7), + }, + ], + span: bytes(1..8), + }, +]\ + "; + + #[cfg(span_locations)] + let expected_before_trailing_commas = "\ +TokenStream [ + Group { + delimiter: Bracket, + stream: TokenStream [ + Ident { + sym: a, + span: bytes(2..3) + }, + Punct { + char: '+', + spacing: Alone, + span: bytes(4..5) + }, + Literal { + lit: 1, + span: bytes(6..7) + } + ], + span: bytes(1..8) + } +]\ + "; + + let actual = format!("{:#?}", tts); + if actual.ends_with(",\n]") { + assert_eq!(expected, actual); + } else { + assert_eq!(expected_before_trailing_commas, actual); + } +} + +#[test] +fn default_tokenstream_is_empty() { + let default_token_stream = ::default(); + + assert!(default_token_stream.is_empty()); +} + +#[test] +fn tokenstream_size_hint() { + let tokens = "a b (c d) e".parse::().unwrap(); + + assert_eq!(tokens.into_iter().size_hint(), (4, Some(4))); +} + +#[test] +fn tuple_indexing() { + // This behavior may change depending on https://github.com/rust-lang/rust/pull/71322 + let mut tokens = "tuple.0.0".parse::().unwrap().into_iter(); + assert_eq!("tuple", tokens.next().unwrap().to_string()); + assert_eq!(".", tokens.next().unwrap().to_string()); + assert_eq!("0.0", tokens.next().unwrap().to_string()); + assert!(tokens.next().is_none()); +} + +#[cfg(span_locations)] +#[test] +fn non_ascii_tokens() { + check_spans("// abc", &[]); + check_spans("// ábc", &[]); + check_spans("// abc x", &[]); + check_spans("// ábc x", &[]); + check_spans("/* abc */ x", &[(1, 10, 1, 11)]); + check_spans("/* ábc */ x", &[(1, 10, 1, 11)]); + check_spans("/* ab\nc */ x", &[(2, 5, 2, 6)]); + check_spans("/* áb\nc */ x", &[(2, 5, 2, 6)]); + check_spans("/*** abc */ x", &[(1, 12, 1, 13)]); + check_spans("/*** ábc */ x", &[(1, 12, 1, 13)]); + check_spans(r#""abc""#, &[(1, 0, 
1, 5)]); + check_spans(r#""ábc""#, &[(1, 0, 1, 5)]); + check_spans(r##"r#"abc"#"##, &[(1, 0, 1, 8)]); + check_spans(r##"r#"ábc"#"##, &[(1, 0, 1, 8)]); + check_spans("r#\"a\nc\"#", &[(1, 0, 2, 3)]); + check_spans("r#\"á\nc\"#", &[(1, 0, 2, 3)]); + check_spans("'a'", &[(1, 0, 1, 3)]); + check_spans("'á'", &[(1, 0, 1, 3)]); + check_spans("//! abc", &[(1, 0, 1, 7), (1, 0, 1, 7), (1, 0, 1, 7)]); + check_spans("//! ábc", &[(1, 0, 1, 7), (1, 0, 1, 7), (1, 0, 1, 7)]); + check_spans("//! abc\n", &[(1, 0, 1, 7), (1, 0, 1, 7), (1, 0, 1, 7)]); + check_spans("//! ábc\n", &[(1, 0, 1, 7), (1, 0, 1, 7), (1, 0, 1, 7)]); + check_spans("/*! abc */", &[(1, 0, 1, 10), (1, 0, 1, 10), (1, 0, 1, 10)]); + check_spans("/*! ábc */", &[(1, 0, 1, 10), (1, 0, 1, 10), (1, 0, 1, 10)]); + check_spans("/*! a\nc */", &[(1, 0, 2, 4), (1, 0, 2, 4), (1, 0, 2, 4)]); + check_spans("/*! á\nc */", &[(1, 0, 2, 4), (1, 0, 2, 4), (1, 0, 2, 4)]); + check_spans("abc", &[(1, 0, 1, 3)]); + check_spans("ábc", &[(1, 0, 1, 3)]); + check_spans("ábć", &[(1, 0, 1, 3)]); + check_spans("abc// foo", &[(1, 0, 1, 3)]); + check_spans("ábc// foo", &[(1, 0, 1, 3)]); + check_spans("ábć// foo", &[(1, 0, 1, 3)]); + check_spans("b\"a\\\n c\"", &[(1, 0, 2, 3)]); +} + +#[cfg(span_locations)] +fn check_spans(p: &str, mut lines: &[(usize, usize, usize, usize)]) { + let ts = p.parse::().unwrap(); + check_spans_internal(ts, &mut lines); + assert!(lines.is_empty(), "leftover ranges: {:?}", lines); +} + +#[cfg(span_locations)] +fn check_spans_internal(ts: TokenStream, lines: &mut &[(usize, usize, usize, usize)]) { + for i in ts { + if let Some((&(sline, scol, eline, ecol), rest)) = lines.split_first() { + *lines = rest; + + let start = i.span().start(); + assert_eq!(start.line, sline, "sline did not match for {}", i); + assert_eq!(start.column, scol, "scol did not match for {}", i); + + let end = i.span().end(); + assert_eq!(end.line, eline, "eline did not match for {}", i); + assert_eq!(end.column, ecol, "ecol did not match for {}", i); + + if let TokenTree::Group(g) = i { + check_spans_internal(g.stream().clone(), lines); + } + } + } +} + +#[test] +fn whitespace() { + // space, horizontal tab, vertical tab, form feed, carriage return, line + // feed, non-breaking space, left-to-right mark, right-to-left mark + let various_spaces = " \t\u{b}\u{c}\r\n\u{a0}\u{200e}\u{200f}"; + let tokens = various_spaces.parse::().unwrap(); + assert_eq!(tokens.into_iter().count(), 0); + + let lone_carriage_returns = " \r \r\r\n "; + lone_carriage_returns.parse::().unwrap(); +} + +#[test] +fn byte_order_mark() { + let string = "\u{feff}foo"; + let tokens = string.parse::().unwrap(); + match tokens.into_iter().next().unwrap() { + TokenTree::Ident(ident) => assert_eq!(ident, "foo"), + _ => unreachable!(), + } + + let string = "foo\u{feff}"; + string.parse::().unwrap_err(); +} + +#[cfg(span_locations)] +fn create_span() -> proc_macro2::Span { + let tts: TokenStream = "1".parse().unwrap(); + match tts.into_iter().next().unwrap() { + TokenTree::Literal(literal) => literal.span(), + _ => unreachable!(), + } +} + +#[cfg(span_locations)] +#[test] +fn test_invalidate_current_thread_spans() { + let actual = format!("{:#?}", create_span()); + assert_eq!(actual, "bytes(1..2)"); + let actual = format!("{:#?}", create_span()); + assert_eq!(actual, "bytes(3..4)"); + + proc_macro2::extra::invalidate_current_thread_spans(); + + let actual = format!("{:#?}", create_span()); + // Test that span offsets have been reset after the call + // to invalidate_current_thread_spans() + assert_eq!(actual, 
"bytes(1..2)"); +} + +#[cfg(span_locations)] +#[test] +#[should_panic(expected = "Invalid span with no related FileInfo!")] +fn test_use_span_after_invalidation() { + let span = create_span(); + + proc_macro2::extra::invalidate_current_thread_spans(); + + span.source_text(); +} diff --git a/rust/hw/char/pl011/vendor/proc-macro2/tests/test_fmt.rs b/rust/hw/char/pl011/vendor/proc-macro2/tests/test_fmt.rs new file mode 100644 index 0000000000..86a4c38763 --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro2/tests/test_fmt.rs @@ -0,0 +1,28 @@ +#![allow(clippy::from_iter_instead_of_collect)] + +use proc_macro2::{Delimiter, Group, Ident, Span, TokenStream, TokenTree}; +use std::iter; + +#[test] +fn test_fmt_group() { + let ident = Ident::new("x", Span::call_site()); + let inner = TokenStream::from_iter(iter::once(TokenTree::Ident(ident))); + let parens_empty = Group::new(Delimiter::Parenthesis, TokenStream::new()); + let parens_nonempty = Group::new(Delimiter::Parenthesis, inner.clone()); + let brackets_empty = Group::new(Delimiter::Bracket, TokenStream::new()); + let brackets_nonempty = Group::new(Delimiter::Bracket, inner.clone()); + let braces_empty = Group::new(Delimiter::Brace, TokenStream::new()); + let braces_nonempty = Group::new(Delimiter::Brace, inner.clone()); + let none_empty = Group::new(Delimiter::None, TokenStream::new()); + let none_nonempty = Group::new(Delimiter::None, inner); + + // Matches libproc_macro. + assert_eq!("()", parens_empty.to_string()); + assert_eq!("(x)", parens_nonempty.to_string()); + assert_eq!("[]", brackets_empty.to_string()); + assert_eq!("[x]", brackets_nonempty.to_string()); + assert_eq!("{ }", braces_empty.to_string()); + assert_eq!("{ x }", braces_nonempty.to_string()); + assert_eq!("", none_empty.to_string()); + assert_eq!("x", none_nonempty.to_string()); +} diff --git a/rust/hw/char/pl011/vendor/proc-macro2/tests/test_size.rs b/rust/hw/char/pl011/vendor/proc-macro2/tests/test_size.rs new file mode 100644 index 0000000000..7b0739023a --- /dev/null +++ b/rust/hw/char/pl011/vendor/proc-macro2/tests/test_size.rs @@ -0,0 +1,73 @@ +#![cfg(not(randomize_layout))] + +extern crate proc_macro; + +use std::mem; + +#[rustversion::attr(before(1.64), ignore)] +#[test] +fn test_proc_macro_size() { + assert_eq!(mem::size_of::(), 4); + assert_eq!(mem::size_of::>(), 4); + assert_eq!(mem::size_of::(), 20); + assert_eq!(mem::size_of::(), 12); + assert_eq!(mem::size_of::(), 8); + assert_eq!(mem::size_of::(), 16); + assert_eq!(mem::size_of::(), 4); +} + +#[cfg_attr(not(all(not(wrap_proc_macro), not(span_locations))), ignore)] +#[test] +fn test_proc_macro2_fallback_size_without_locations() { + assert_eq!(mem::size_of::(), 0); + assert_eq!(mem::size_of::>(), 1); + assert_eq!(mem::size_of::(), 16); + assert_eq!(mem::size_of::(), 24); + assert_eq!(mem::size_of::(), 8); + assert_eq!(mem::size_of::(), 24); + assert_eq!(mem::size_of::(), 8); +} + +#[cfg_attr(not(all(not(wrap_proc_macro), span_locations)), ignore)] +#[test] +fn test_proc_macro2_fallback_size_with_locations() { + assert_eq!(mem::size_of::(), 8); + assert_eq!(mem::size_of::>(), 12); + assert_eq!(mem::size_of::(), 24); + assert_eq!(mem::size_of::(), 32); + assert_eq!(mem::size_of::(), 16); + assert_eq!(mem::size_of::(), 32); + assert_eq!(mem::size_of::(), 8); +} + +#[rustversion::attr(before(1.71), ignore)] +#[rustversion::attr( + since(1.71), + cfg_attr(not(all(wrap_proc_macro, not(span_locations))), ignore) +)] +#[test] +fn test_proc_macro2_wrapper_size_without_locations() { + assert_eq!(mem::size_of::(), 4); 
+ assert_eq!(mem::size_of::>(), 8); + assert_eq!(mem::size_of::(), 24); + assert_eq!(mem::size_of::(), 24); + assert_eq!(mem::size_of::(), 12); + assert_eq!(mem::size_of::(), 24); + assert_eq!(mem::size_of::(), 32); +} + +#[rustversion::attr(before(1.65), ignore)] +#[rustversion::attr( + since(1.65), + cfg_attr(not(all(wrap_proc_macro, span_locations)), ignore) +)] +#[test] +fn test_proc_macro2_wrapper_size_with_locations() { + assert_eq!(mem::size_of::(), 12); + assert_eq!(mem::size_of::>(), 12); + assert_eq!(mem::size_of::(), 32); + assert_eq!(mem::size_of::(), 32); + assert_eq!(mem::size_of::(), 20); + assert_eq!(mem::size_of::(), 32); + assert_eq!(mem::size_of::(), 32); +} diff --git a/rust/hw/char/pl011/vendor/quote/.cargo-checksum.json b/rust/hw/char/pl011/vendor/quote/.cargo-checksum.json new file mode 100644 index 0000000000..dcfc52a21e --- /dev/null +++ b/rust/hw/char/pl011/vendor/quote/.cargo-checksum.json @@ -0,0 +1 @@ +{"files":{"Cargo.toml":"0a98ab1241e7b64caa29c6ff868e2e96e0f74c1ef8b265727f1863a960fa322c","LICENSE-APACHE":"62c7a1e35f56406896d7aa7ca52d0cc0d272ac022b5d2796e7d6905db8a3636a","LICENSE-MIT":"23f18e03dc49df91622fe2a76176497404e46ced8a715d9d2b67a7446571cca3","README.md":"626e7079eab0baacf0fcaf3e244f407b2014ebaeca45905d72e8fb8bed18aaea","rust-toolchain.toml":"6bbb61302978c736b2da03e4fb40e3beab908f85d533ab46fd541e637b5f3e0f","src/ext.rs":"9881576cac3e476a4bf04f9b601cf9a53b79399fb0ca9634e8b861ac91709843","src/format.rs":"c595015418f35e6992e710441b9999f09b2afe4678b138039d670d100c0bdd86","src/ident_fragment.rs":"0b3e6c2129e55910fd2d240e1e7efba6f1796801d24352d1c0bfbceb0e8b678f","src/lib.rs":"abbc178821e46d0bcd224904a7542ac4582d189f57cd4daf02a54fd772e52a55","src/runtime.rs":"7f37326edaeac2c42ed806b447eeba12e36dd4b1bc25fbf52f8eb23140f3be7a","src/spanned.rs":"3ccf5120593f35787442c0a37d243e802c5262e7f8b35aed503873008ec035c5","src/to_tokens.rs":"1c76311fcc82098e630056d71fd6f3929194ee31b0840e2aa643ed7e78026e3e","tests/compiletest.rs":"022a8e400ef813d7ea1875b944549cee5125f6a995dc33e93b48cba3e1b57bd1","tests/test.rs":"3be80741f84a707376c230d9cf70ce9537caa359691d8d4c34968e28175e4ad7","tests/ui/does-not-have-iter-interpolated-dup.rs":"ad13eea21d4cdd2ab6c082f633392e1ff20fb0d1af5f2177041e0bf7f30da695","tests/ui/does-not-have-iter-interpolated-dup.stderr":"90a4bdb9267535f5d2785940148338d6b7d905548051d2c9c5dcbd58f2c11d8e","tests/ui/does-not-have-iter-interpolated.rs":"83a5b3f240651adcbe4b6e51076d76d653ad439b37442cf4054f1fd3c073f3b7","tests/ui/does-not-have-iter-interpolated.stderr":"ae7c2739554c862b331705e82781aa4687a4375210cef6ae899a4be4a4ec2d97","tests/ui/does-not-have-iter-separated.rs":"fe413c48331d5e3a7ae5fef6a5892a90c72f610d54595879eb49d0a94154ba3f","tests/ui/does-not-have-iter-separated.stderr":"03fd560979ebcd5aa6f83858bc2c3c01ba6546c16335101275505304895c1ae9","tests/ui/does-not-have-iter.rs":"09dc9499d861b63cebb0848b855b78e2dc9497bfde37ba6339f3625ae009a62f","tests/ui/does-not-have-iter.stderr":"d6da483c29e232ced72059bbdf05d31afb1df9e02954edaa9cfaea1ec6df72dc","tests/ui/not-quotable.rs":"5759d0884943417609f28faadc70254a3e2fd3d9bd6ff7297a3fb70a77fafd8a","tests/ui/not-quotable.stderr":"1b5ad13712a35f2f25a159c003956762941b111d540b20ad6a258cdb079a9c95","tests/ui/not-repeatable.rs":"a4b115c04e4e41049a05f5b69450503fbffeba031218b4189cb931839f7f9a9c","tests/ui/not-repeatable.stderr":"bbfb702638374001061251f81d63476851ac28ed743f13db9d65e30dd9bdcf52","tests/ui/wrong-type-span.rs":"6195e35ea844c0c52ba1cff5d790c3a371af6915d137d377834ad984229ef9ea","tests/ui/wrong-type-span.stderr":"cad072e40e
0ecc04f375122ae41aede2f0da2a9244492b3fcf70249e59d1b128"},"package":"0fa76aaf39101c457836aec0ce2316dbdc3ab723cdda1c6bd4e6ad4208acaca7"} \ No newline at end of file diff --git a/rust/hw/char/pl011/vendor/quote/Cargo.toml b/rust/hw/char/pl011/vendor/quote/Cargo.toml new file mode 100644 index 0000000000..5b521762bc --- /dev/null +++ b/rust/hw/char/pl011/vendor/quote/Cargo.toml @@ -0,0 +1,50 @@ +# THIS FILE IS AUTOMATICALLY GENERATED BY CARGO +# +# When uploading crates to the registry Cargo will automatically +# "normalize" Cargo.toml files for maximal compatibility +# with all versions of Cargo and also rewrite `path` dependencies +# to registry (e.g., crates.io) dependencies. +# +# If you are reading this file be aware that the original Cargo.toml +# will likely look very different (and much more reasonable). +# See Cargo.toml.orig for the original contents. + +[package] +edition = "2018" +rust-version = "1.56" +name = "quote" +version = "1.0.36" +authors = ["David Tolnay "] +autobenches = false +description = "Quasi-quoting macro quote!(...)" +documentation = "https://docs.rs/quote/" +readme = "README.md" +keywords = [ + "macros", + "syn", +] +categories = ["development-tools::procedural-macro-helpers"] +license = "MIT OR Apache-2.0" +repository = "https://github.com/dtolnay/quote" + +[package.metadata.docs.rs] +rustdoc-args = ["--generate-link-to-definition"] +targets = ["x86_64-unknown-linux-gnu"] + +[lib] +doc-scrape-examples = false + +[dependencies.proc-macro2] +version = "1.0.74" +default-features = false + +[dev-dependencies.rustversion] +version = "1.0" + +[dev-dependencies.trybuild] +version = "1.0.66" +features = ["diff"] + +[features] +default = ["proc-macro"] +proc-macro = ["proc-macro2/proc-macro"] diff --git a/rust/hw/char/pl011/vendor/quote/LICENSE-APACHE b/rust/hw/char/pl011/vendor/quote/LICENSE-APACHE new file mode 100644 index 0000000000..1b5ec8b78e --- /dev/null +++ b/rust/hw/char/pl011/vendor/quote/LICENSE-APACHE @@ -0,0 +1,176 @@ + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + +TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + +1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. 
+ + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + +2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + +3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + +4. Redistribution. 
You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + +5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + +6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + +7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + +8. Limitation of Liability. 
In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + +9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + +END OF TERMS AND CONDITIONS diff --git a/rust/hw/char/pl011/vendor/quote/LICENSE-MIT b/rust/hw/char/pl011/vendor/quote/LICENSE-MIT new file mode 100644 index 0000000000..31aa79387f --- /dev/null +++ b/rust/hw/char/pl011/vendor/quote/LICENSE-MIT @@ -0,0 +1,23 @@ +Permission is hereby granted, free of charge, to any +person obtaining a copy of this software and associated +documentation files (the "Software"), to deal in the +Software without restriction, including without +limitation the rights to use, copy, modify, merge, +publish, distribute, sublicense, and/or sell copies of +the Software, and to permit persons to whom the Software +is furnished to do so, subject to the following +conditions: + +The above copyright notice and this permission notice +shall be included in all copies or substantial portions +of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF +ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED +TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A +PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT +SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY +CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION +OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR +IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER +DEALINGS IN THE SOFTWARE. diff --git a/rust/hw/char/pl011/vendor/quote/README.md b/rust/hw/char/pl011/vendor/quote/README.md new file mode 100644 index 0000000000..bfc91a9753 --- /dev/null +++ b/rust/hw/char/pl011/vendor/quote/README.md @@ -0,0 +1,272 @@ +Rust Quasi-Quoting +================== + +[github](https://github.com/dtolnay/quote) +[crates.io](https://crates.io/crates/quote) +[docs.rs](https://docs.rs/quote) +[build status](https://github.com/dtolnay/quote/actions?query=branch%3Amaster) + +This crate provides the [`quote!`] macro for turning Rust syntax tree data +structures into tokens of source code. 
+ +[`quote!`]: https://docs.rs/quote/1.0/quote/macro.quote.html + +Procedural macros in Rust receive a stream of tokens as input, execute arbitrary +Rust code to determine how to manipulate those tokens, and produce a stream of +tokens to hand back to the compiler to compile into the caller's crate. +Quasi-quoting is a solution to one piece of that — producing tokens to +return to the compiler. + +The idea of quasi-quoting is that we write *code* that we treat as *data*. +Within the `quote!` macro, we can write what looks like code to our text editor +or IDE. We get all the benefits of the editor's brace matching, syntax +highlighting, indentation, and maybe autocompletion. But rather than compiling +that as code into the current crate, we can treat it as data, pass it around, +mutate it, and eventually hand it back to the compiler as tokens to compile into +the macro caller's crate. + +This crate is motivated by the procedural macro use case, but is a +general-purpose Rust quasi-quoting library and is not specific to procedural +macros. + +```toml +[dependencies] +quote = "1.0" +``` + +*Version requirement: Quote supports rustc 1.56 and up.*
+[*Release notes*](https://github.com/dtolnay/quote/releases) + +
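As a concrete taste of the above, here is a minimal sketch of quasi-quoting outside of any procedural macro, assuming only a `quote = "1.0"` dependency; the type name and constant are invented for illustration.

```rust
use quote::quote;

fn main() {
    let name = quote!(PL011State);  // a fragment of Rust syntax, held as data
    let value = quote!(0x11u32);    // another fragment
    // Splice the fragments into a larger one with `#var` interpolation.
    let tokens = quote! {
        impl #name {
            pub const PERIPHERAL_ID: u32 = #value;
        }
    };
    // The result is ordinary data (a proc_macro2::TokenStream) that can be
    // printed, inspected, or handed back to the compiler from inside a macro.
    println!("{}", tokens);
}
```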
+ +## Syntax + +The quote crate provides a [`quote!`] macro within which you can write Rust code +that gets packaged into a [`TokenStream`] and can be treated as data. You should +think of `TokenStream` as representing a fragment of Rust source code. + +[`TokenStream`]: https://docs.rs/proc-macro2/1.0/proc_macro2/struct.TokenStream.html + +Within the `quote!` macro, interpolation is done with `#var`. Any type +implementing the [`quote::ToTokens`] trait can be interpolated. This includes +most Rust primitive types as well as most of the syntax tree types from [`syn`]. + +[`quote::ToTokens`]: https://docs.rs/quote/1.0/quote/trait.ToTokens.html +[`syn`]: https://github.com/dtolnay/syn + +```rust +let tokens = quote! { + struct SerializeWith #generics #where_clause { + value: &'a #field_ty, + phantom: core::marker::PhantomData<#item_ty>, + } + + impl #generics serde::Serialize for SerializeWith #generics #where_clause { + fn serialize(&self, serializer: S) -> Result + where + S: serde::Serializer, + { + #path(self.value, serializer) + } + } + + SerializeWith { + value: #value, + phantom: core::marker::PhantomData::<#item_ty>, + } +}; +``` + +
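The interpolation described above extends to user-defined types once they implement `ToTokens`. A minimal sketch, with an invented `RegisterOffset` type that emits itself as an unsuffixed integer literal:

```rust
use proc_macro2::{Literal, TokenStream};
use quote::{quote, ToTokens, TokenStreamExt};

struct RegisterOffset(u64);

impl ToTokens for RegisterOffset {
    fn to_tokens(&self, tokens: &mut TokenStream) {
        // Append a single literal token carrying the offset value.
        tokens.append(Literal::u64_unsuffixed(self.0));
    }
}

fn main() {
    let offset = RegisterOffset(0x18);
    let tokens = quote! { const UARTFR: u64 = #offset; };
    println!("{}", tokens); // prints roughly: const UARTFR : u64 = 24 ;
}
```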
+ +## Repetition + +Repetition is done using `#(...)*` or `#(...),*` similar to `macro_rules!`. This +iterates through the elements of any variable interpolated within the repetition +and inserts a copy of the repetition body for each one. The variables in an +interpolation may be anything that implements `IntoIterator`, including `Vec` or +a pre-existing iterator. + +- `#(#var)*` — no separators +- `#(#var),*` — the character before the asterisk is used as a separator +- `#( struct #var; )*` — the repetition can contain other things +- `#( #k => println!("{}", #v), )*` — even multiple interpolations + +Note that there is a difference between `#(#var ,)*` and `#(#var),*`—the latter +does not produce a trailing comma. This matches the behavior of delimiters in +`macro_rules!`. + +
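A short sketch of the repetition forms listed above; the register names are invented and nothing here is specific to this crate's internals:

```rust
use quote::{format_ident, quote};

fn main() {
    let fields = vec![
        format_ident!("flags"),
        format_ident!("control"),
        format_ident!("dmacr"),
    ];

    // `#( ... )*` inserts one copy of the body per element, no separator.
    let strukt = quote! {
        pub struct Registers {
            #(pub #fields: u32,)*
        }
    };

    // `#( ... ),*` uses the comma as a separator and omits the trailing one.
    let array = quote! { [ #(#fields),* ] };

    println!("{}", strukt);
    println!("{}", array);
}
```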
+ +## Returning tokens to the compiler + +The `quote!` macro evaluates to an expression of type +`proc_macro2::TokenStream`. Meanwhile Rust procedural macros are expected to +return the type `proc_macro::TokenStream`. + +The difference between the two types is that `proc_macro` types are entirely +specific to procedural macros and cannot ever exist in code outside of a +procedural macro, while `proc_macro2` types may exist anywhere including tests +and non-macro code like main.rs and build.rs. This is why even the procedural +macro ecosystem is largely built around `proc_macro2`, because that ensures the +libraries are unit testable and accessible in non-macro contexts. + +There is a [`From`]-conversion in both directions so returning the output of +`quote!` from a procedural macro usually looks like `tokens.into()` or +`proc_macro::TokenStream::from(tokens)`. + +[`From`]: https://doc.rust-lang.org/std/convert/trait.From.html + +
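A hedged sketch of that boundary in a derive macro, assuming a crate built with `proc-macro = true` and dependencies on `proc-macro2` and `quote`; the derive name is invented:

```rust
extern crate proc_macro;

use proc_macro::TokenStream;
use quote::quote;

#[proc_macro_derive(RegisterBlock)]
pub fn derive_register_block(input: TokenStream) -> TokenStream {
    // Move into the proc_macro2 world so the interesting logic stays
    // unit-testable outside of a macro invocation.
    let input = proc_macro2::TokenStream::from(input);
    let _ = input; // a real macro would parse this, e.g. with syn

    let expanded = quote! {
        // generated items would go here
    };

    // `quote!` yields proc_macro2::TokenStream; convert back for the compiler.
    expanded.into()
}
```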
+ +## Examples + +### Combining quoted fragments + +Usually you don't end up constructing an entire final `TokenStream` in one +piece. Different parts may come from different helper functions. The tokens +produced by `quote!` themselves implement `ToTokens` and so can be interpolated +into later `quote!` invocations to build up a final result. + +```rust +let type_definition = quote! {...}; +let methods = quote! {...}; + +let tokens = quote! { + #type_definition + #methods +}; +``` + +### Constructing identifiers + +Suppose we have an identifier `ident` which came from somewhere in a macro +input and we need to modify it in some way for the macro output. Let's consider +prepending the identifier with an underscore. + +Simply interpolating the identifier next to an underscore will not have the +behavior of concatenating them. The underscore and the identifier will continue +to be two separate tokens as if you had written `_ x`. + +```rust +// incorrect +quote! { + let mut _#ident = 0; +} +``` + +The solution is to build a new identifier token with the correct value. As this +is such a common case, the `format_ident!` macro provides a convenient utility +for doing so correctly. + +```rust +let varname = format_ident!("_{}", ident); +quote! { + let mut #varname = 0; +} +``` + +Alternatively, the APIs provided by Syn and proc-macro2 can be used to directly +build the identifier. This is roughly equivalent to the above, but will not +handle `ident` being a raw identifier. + +```rust +let concatenated = format!("_{}", ident); +let varname = syn::Ident::new(&concatenated, ident.span()); +quote! { + let mut #varname = 0; +} +``` + +### Making method calls + +Let's say our macro requires some type specified in the macro input to have a +constructor called `new`. We have the type in a variable called `field_type` of +type `syn::Type` and want to invoke the constructor. + +```rust +// incorrect +quote! { + let value = #field_type::new(); +} +``` + +This works only sometimes. If `field_type` is `String`, the expanded code +contains `String::new()` which is fine. But if `field_type` is something like +`Vec` then the expanded code is `Vec::new()` which is invalid syntax. +Ordinarily in handwritten Rust we would write `Vec::::new()` but for macros +often the following is more convenient. + +```rust +quote! { + let value = <#field_type>::new(); +} +``` + +This expands to `>::new()` which behaves correctly. + +A similar pattern is appropriate for trait methods. + +```rust +quote! { + let value = <#field_type as core::default::Default>::default(); +} +``` + +
+ +## Hygiene + +Any interpolated tokens preserve the `Span` information provided by their +`ToTokens` implementation. Tokens that originate within a `quote!` invocation +are spanned with [`Span::call_site()`]. + +[`Span::call_site()`]: https://docs.rs/proc-macro2/1.0/proc_macro2/struct.Span.html#method.call_site + +A different span can be provided explicitly through the [`quote_spanned!`] +macro. + +[`quote_spanned!`]: https://docs.rs/quote/1.0/quote/macro.quote_spanned.html + +
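A minimal sketch of overriding the default span, assuming a `syn` dependency for the input type; `user_ty` and `_AssertCopy` are invented names, mirroring the `_AssertSync` pattern shown later in the `quote_spanned!` documentation:

```rust
use proc_macro2::TokenStream;
use quote::quote_spanned;
use syn::spanned::Spanned;

fn assert_copy(user_ty: &syn::Type) -> TokenStream {
    let span = user_ty.span();
    // Tokens produced by this invocation carry `span`, so a missing `Copy`
    // impl is reported on the user's own type, not inside the macro.
    quote_spanned! {span=>
        struct _AssertCopy where #user_ty: Copy;
    }
}
```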
+ +## Non-macro code generators + +When using `quote` in a build.rs or main.rs and writing the output out to a +file, consider having the code generator pass the tokens through [prettyplease] +before writing. This way if an error occurs in the generated code it is +convenient for a human to read and debug. + +Be aware that no kind of hygiene or span information is retained when tokens are +written to a file; the conversion from tokens to source code is lossy. + +Example usage in build.rs: + +```rust +let output = quote! { ... }; +let syntax_tree = syn::parse2(output).unwrap(); +let formatted = prettyplease::unparse(&syntax_tree); + +let out_dir = env::var_os("OUT_DIR").unwrap(); +let dest_path = Path::new(&out_dir).join("out.rs"); +fs::write(dest_path, formatted).unwrap(); +``` + +[prettyplease]: https://github.com/dtolnay/prettyplease + +
+ +#### License + + +Licensed under either of Apache License, Version +2.0 or MIT license at your option. + + +
+ + +Unless you explicitly state otherwise, any contribution intentionally submitted +for inclusion in this crate by you, as defined in the Apache-2.0 license, shall +be dual licensed as above, without any additional terms or conditions. + diff --git a/rust/hw/char/pl011/vendor/quote/meson.build b/rust/hw/char/pl011/vendor/quote/meson.build new file mode 100644 index 0000000000..11b83932f6 --- /dev/null +++ b/rust/hw/char/pl011/vendor/quote/meson.build @@ -0,0 +1,17 @@ +_quote_rs = static_library( + 'quote', + files('src/lib.rs'), + gnu_symbol_visibility: 'hidden', + rust_abi: 'rust', + rust_args: rust_args + [ + '--edition', '2021', + '--cfg', 'feature="proc-macro"', + ], + dependencies: [ + dep_proc_macro2, + ], +) + +dep_quote = declare_dependency( + link_with: _quote_rs, +) diff --git a/rust/hw/char/pl011/vendor/quote/rust-toolchain.toml b/rust/hw/char/pl011/vendor/quote/rust-toolchain.toml new file mode 100644 index 0000000000..20fe888c30 --- /dev/null +++ b/rust/hw/char/pl011/vendor/quote/rust-toolchain.toml @@ -0,0 +1,2 @@ +[toolchain] +components = ["rust-src"] diff --git a/rust/hw/char/pl011/vendor/quote/src/ext.rs b/rust/hw/char/pl011/vendor/quote/src/ext.rs new file mode 100644 index 0000000000..92c2315b18 --- /dev/null +++ b/rust/hw/char/pl011/vendor/quote/src/ext.rs @@ -0,0 +1,110 @@ +use super::ToTokens; +use core::iter; +use proc_macro2::{TokenStream, TokenTree}; + +/// TokenStream extension trait with methods for appending tokens. +/// +/// This trait is sealed and cannot be implemented outside of the `quote` crate. +pub trait TokenStreamExt: private::Sealed { + /// For use by `ToTokens` implementations. + /// + /// Appends the token specified to this list of tokens. + fn append(&mut self, token: U) + where + U: Into; + + /// For use by `ToTokens` implementations. + /// + /// ``` + /// # use quote::{quote, TokenStreamExt, ToTokens}; + /// # use proc_macro2::TokenStream; + /// # + /// struct X; + /// + /// impl ToTokens for X { + /// fn to_tokens(&self, tokens: &mut TokenStream) { + /// tokens.append_all(&[true, false]); + /// } + /// } + /// + /// let tokens = quote!(#X); + /// assert_eq!(tokens.to_string(), "true false"); + /// ``` + fn append_all(&mut self, iter: I) + where + I: IntoIterator, + I::Item: ToTokens; + + /// For use by `ToTokens` implementations. + /// + /// Appends all of the items in the iterator `I`, separated by the tokens + /// `U`. + fn append_separated(&mut self, iter: I, op: U) + where + I: IntoIterator, + I::Item: ToTokens, + U: ToTokens; + + /// For use by `ToTokens` implementations. + /// + /// Appends all tokens in the iterator `I`, appending `U` after each + /// element, including after the last element of the iterator. 
+ fn append_terminated(&mut self, iter: I, term: U) + where + I: IntoIterator, + I::Item: ToTokens, + U: ToTokens; +} + +impl TokenStreamExt for TokenStream { + fn append(&mut self, token: U) + where + U: Into, + { + self.extend(iter::once(token.into())); + } + + fn append_all(&mut self, iter: I) + where + I: IntoIterator, + I::Item: ToTokens, + { + for token in iter { + token.to_tokens(self); + } + } + + fn append_separated(&mut self, iter: I, op: U) + where + I: IntoIterator, + I::Item: ToTokens, + U: ToTokens, + { + for (i, token) in iter.into_iter().enumerate() { + if i > 0 { + op.to_tokens(self); + } + token.to_tokens(self); + } + } + + fn append_terminated(&mut self, iter: I, term: U) + where + I: IntoIterator, + I::Item: ToTokens, + U: ToTokens, + { + for token in iter { + token.to_tokens(self); + term.to_tokens(self); + } + } +} + +mod private { + use proc_macro2::TokenStream; + + pub trait Sealed {} + + impl Sealed for TokenStream {} +} diff --git a/rust/hw/char/pl011/vendor/quote/src/format.rs b/rust/hw/char/pl011/vendor/quote/src/format.rs new file mode 100644 index 0000000000..3cddbd2819 --- /dev/null +++ b/rust/hw/char/pl011/vendor/quote/src/format.rs @@ -0,0 +1,168 @@ +/// Formatting macro for constructing `Ident`s. +/// +///
+/// +/// # Syntax +/// +/// Syntax is copied from the [`format!`] macro, supporting both positional and +/// named arguments. +/// +/// Only a limited set of formatting traits are supported. The current mapping +/// of format types to traits is: +/// +/// * `{}` ⇒ [`IdentFragment`] +/// * `{:o}` ⇒ [`Octal`](std::fmt::Octal) +/// * `{:x}` ⇒ [`LowerHex`](std::fmt::LowerHex) +/// * `{:X}` ⇒ [`UpperHex`](std::fmt::UpperHex) +/// * `{:b}` ⇒ [`Binary`](std::fmt::Binary) +/// +/// See [`std::fmt`] for more information. +/// +///
+/// +/// # IdentFragment +/// +/// Unlike `format!`, this macro uses the [`IdentFragment`] formatting trait by +/// default. This trait is like `Display`, with a few differences: +/// +/// * `IdentFragment` is only implemented for a limited set of types, such as +/// unsigned integers and strings. +/// * [`Ident`] arguments will have their `r#` prefixes stripped, if present. +/// +/// [`IdentFragment`]: crate::IdentFragment +/// [`Ident`]: proc_macro2::Ident +/// +///
+/// +/// # Hygiene +/// +/// The [`Span`] of the first `Ident` argument is used as the span of the final +/// identifier, falling back to [`Span::call_site`] when no identifiers are +/// provided. +/// +/// ``` +/// # use quote::format_ident; +/// # let ident = format_ident!("Ident"); +/// // If `ident` is an Ident, the span of `my_ident` will be inherited from it. +/// let my_ident = format_ident!("My{}{}", ident, "IsCool"); +/// assert_eq!(my_ident, "MyIdentIsCool"); +/// ``` +/// +/// Alternatively, the span can be overridden by passing the `span` named +/// argument. +/// +/// ``` +/// # use quote::format_ident; +/// # const IGNORE_TOKENS: &'static str = stringify! { +/// let my_span = /* ... */; +/// # }; +/// # let my_span = proc_macro2::Span::call_site(); +/// format_ident!("MyIdent", span = my_span); +/// ``` +/// +/// [`Span`]: proc_macro2::Span +/// [`Span::call_site`]: proc_macro2::Span::call_site +/// +///
+/// +/// # Panics +/// +/// This method will panic if the resulting formatted string is not a valid +/// identifier. +/// +///
+/// +/// # Examples +/// +/// Composing raw and non-raw identifiers: +/// ``` +/// # use quote::format_ident; +/// let my_ident = format_ident!("My{}", "Ident"); +/// assert_eq!(my_ident, "MyIdent"); +/// +/// let raw = format_ident!("r#Raw"); +/// assert_eq!(raw, "r#Raw"); +/// +/// let my_ident_raw = format_ident!("{}Is{}", my_ident, raw); +/// assert_eq!(my_ident_raw, "MyIdentIsRaw"); +/// ``` +/// +/// Integer formatting options: +/// ``` +/// # use quote::format_ident; +/// let num: u32 = 10; +/// +/// let decimal = format_ident!("Id_{}", num); +/// assert_eq!(decimal, "Id_10"); +/// +/// let octal = format_ident!("Id_{:o}", num); +/// assert_eq!(octal, "Id_12"); +/// +/// let binary = format_ident!("Id_{:b}", num); +/// assert_eq!(binary, "Id_1010"); +/// +/// let lower_hex = format_ident!("Id_{:x}", num); +/// assert_eq!(lower_hex, "Id_a"); +/// +/// let upper_hex = format_ident!("Id_{:X}", num); +/// assert_eq!(upper_hex, "Id_A"); +/// ``` +#[macro_export] +macro_rules! format_ident { + ($fmt:expr) => { + $crate::format_ident_impl!([ + $crate::__private::Option::None, + $fmt + ]) + }; + + ($fmt:expr, $($rest:tt)*) => { + $crate::format_ident_impl!([ + $crate::__private::Option::None, + $fmt + ] $($rest)*) + }; +} + +#[macro_export] +#[doc(hidden)] +macro_rules! format_ident_impl { + // Final state + ([$span:expr, $($fmt:tt)*]) => { + $crate::__private::mk_ident( + &$crate::__private::format!($($fmt)*), + $span, + ) + }; + + // Span argument + ([$old:expr, $($fmt:tt)*] span = $span:expr) => { + $crate::format_ident_impl!([$old, $($fmt)*] span = $span,) + }; + ([$old:expr, $($fmt:tt)*] span = $span:expr, $($rest:tt)*) => { + $crate::format_ident_impl!([ + $crate::__private::Option::Some::<$crate::__private::Span>($span), + $($fmt)* + ] $($rest)*) + }; + + // Named argument + ([$span:expr, $($fmt:tt)*] $name:ident = $arg:expr) => { + $crate::format_ident_impl!([$span, $($fmt)*] $name = $arg,) + }; + ([$span:expr, $($fmt:tt)*] $name:ident = $arg:expr, $($rest:tt)*) => { + match $crate::__private::IdentFragmentAdapter(&$arg) { + arg => $crate::format_ident_impl!([$span.or(arg.span()), $($fmt)*, $name = arg] $($rest)*), + } + }; + + // Positional argument + ([$span:expr, $($fmt:tt)*] $arg:expr) => { + $crate::format_ident_impl!([$span, $($fmt)*] $arg,) + }; + ([$span:expr, $($fmt:tt)*] $arg:expr, $($rest:tt)*) => { + match $crate::__private::IdentFragmentAdapter(&$arg) { + arg => $crate::format_ident_impl!([$span.or(arg.span()), $($fmt)*, arg] $($rest)*), + } + }; +} diff --git a/rust/hw/char/pl011/vendor/quote/src/ident_fragment.rs b/rust/hw/char/pl011/vendor/quote/src/ident_fragment.rs new file mode 100644 index 0000000000..6c2a9a87ac --- /dev/null +++ b/rust/hw/char/pl011/vendor/quote/src/ident_fragment.rs @@ -0,0 +1,88 @@ +use alloc::borrow::Cow; +use core::fmt; +use proc_macro2::{Ident, Span}; + +/// Specialized formatting trait used by `format_ident!`. +/// +/// [`Ident`] arguments formatted using this trait will have their `r#` prefix +/// stripped, if present. +/// +/// See [`format_ident!`] for more information. +/// +/// [`format_ident!`]: crate::format_ident +pub trait IdentFragment { + /// Format this value as an identifier fragment. + fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result; + + /// Span associated with this `IdentFragment`. + /// + /// If non-`None`, may be inherited by formatted identifiers. 
+ fn span(&self) -> Option { + None + } +} + +impl IdentFragment for &T { + fn span(&self) -> Option { + ::span(*self) + } + + fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { + IdentFragment::fmt(*self, f) + } +} + +impl IdentFragment for &mut T { + fn span(&self) -> Option { + ::span(*self) + } + + fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { + IdentFragment::fmt(*self, f) + } +} + +impl IdentFragment for Ident { + fn span(&self) -> Option { + Some(self.span()) + } + + fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { + let id = self.to_string(); + if let Some(id) = id.strip_prefix("r#") { + fmt::Display::fmt(id, f) + } else { + fmt::Display::fmt(&id[..], f) + } + } +} + +impl IdentFragment for Cow<'_, T> +where + T: IdentFragment + ToOwned + ?Sized, +{ + fn span(&self) -> Option { + T::span(self) + } + + fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { + T::fmt(self, f) + } +} + +// Limited set of types which this is implemented for, as we want to avoid types +// which will often include non-identifier characters in their `Display` impl. +macro_rules! ident_fragment_display { + ($($T:ty),*) => { + $( + impl IdentFragment for $T { + fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { + fmt::Display::fmt(self, f) + } + } + )* + }; +} + +ident_fragment_display!(bool, str, String, char); +ident_fragment_display!(u8, u16, u32, u64, u128, usize); diff --git a/rust/hw/char/pl011/vendor/quote/src/lib.rs b/rust/hw/char/pl011/vendor/quote/src/lib.rs new file mode 100644 index 0000000000..4d198cb2e7 --- /dev/null +++ b/rust/hw/char/pl011/vendor/quote/src/lib.rs @@ -0,0 +1,1464 @@ +//! [![github]](https://github.com/dtolnay/quote) [![crates-io]](https://crates.io/crates/quote) [![docs-rs]](https://docs.rs/quote) +//! +//! [github]: https://img.shields.io/badge/github-8da0cb?style=for-the-badge&labelColor=555555&logo=github +//! [crates-io]: https://img.shields.io/badge/crates.io-fc8d62?style=for-the-badge&labelColor=555555&logo=rust +//! [docs-rs]: https://img.shields.io/badge/docs.rs-66c2a5?style=for-the-badge&labelColor=555555&logo=docs.rs +//! +//!
+//! +//! This crate provides the [`quote!`] macro for turning Rust syntax tree data +//! structures into tokens of source code. +//! +//! [`quote!`]: macro.quote.html +//! +//! Procedural macros in Rust receive a stream of tokens as input, execute +//! arbitrary Rust code to determine how to manipulate those tokens, and produce +//! a stream of tokens to hand back to the compiler to compile into the caller's +//! crate. Quasi-quoting is a solution to one piece of that — producing +//! tokens to return to the compiler. +//! +//! The idea of quasi-quoting is that we write *code* that we treat as *data*. +//! Within the `quote!` macro, we can write what looks like code to our text +//! editor or IDE. We get all the benefits of the editor's brace matching, +//! syntax highlighting, indentation, and maybe autocompletion. But rather than +//! compiling that as code into the current crate, we can treat it as data, pass +//! it around, mutate it, and eventually hand it back to the compiler as tokens +//! to compile into the macro caller's crate. +//! +//! This crate is motivated by the procedural macro use case, but is a +//! general-purpose Rust quasi-quoting library and is not specific to procedural +//! macros. +//! +//! ```toml +//! [dependencies] +//! quote = "1.0" +//! ``` +//! +//!
+//! +//! # Example +//! +//! The following quasi-quoted block of code is something you might find in [a] +//! procedural macro having to do with data structure serialization. The `#var` +//! syntax performs interpolation of runtime variables into the quoted tokens. +//! Check out the documentation of the [`quote!`] macro for more detail about +//! the syntax. See also the [`quote_spanned!`] macro which is important for +//! implementing hygienic procedural macros. +//! +//! [a]: https://serde.rs/ +//! [`quote_spanned!`]: macro.quote_spanned.html +//! +//! ``` +//! # use quote::quote; +//! # +//! # let generics = ""; +//! # let where_clause = ""; +//! # let field_ty = ""; +//! # let item_ty = ""; +//! # let path = ""; +//! # let value = ""; +//! # +//! let tokens = quote! { +//! struct SerializeWith #generics #where_clause { +//! value: &'a #field_ty, +//! phantom: core::marker::PhantomData<#item_ty>, +//! } +//! +//! impl #generics serde::Serialize for SerializeWith #generics #where_clause { +//! fn serialize(&self, serializer: S) -> Result +//! where +//! S: serde::Serializer, +//! { +//! #path(self.value, serializer) +//! } +//! } +//! +//! SerializeWith { +//! value: #value, +//! phantom: core::marker::PhantomData::<#item_ty>, +//! } +//! }; +//! ``` +//! +//!
+//! +//! # Non-macro code generators +//! +//! When using `quote` in a build.rs or main.rs and writing the output out to a +//! file, consider having the code generator pass the tokens through +//! [prettyplease] before writing. This way if an error occurs in the generated +//! code it is convenient for a human to read and debug. +//! +//! [prettyplease]: https://github.com/dtolnay/prettyplease + +// Quote types in rustdoc of other crates get linked to here. +#![doc(html_root_url = "https://docs.rs/quote/1.0.36")] +#![allow( + clippy::doc_markdown, + clippy::missing_errors_doc, + clippy::missing_panics_doc, + clippy::module_name_repetitions, + // false positive https://github.com/rust-lang/rust-clippy/issues/6983 + clippy::wrong_self_convention, +)] + +extern crate alloc; + +#[cfg(feature = "proc-macro")] +extern crate proc_macro; + +mod ext; +mod format; +mod ident_fragment; +mod to_tokens; + +// Not public API. +#[doc(hidden)] +#[path = "runtime.rs"] +pub mod __private; + +pub use crate::ext::TokenStreamExt; +pub use crate::ident_fragment::IdentFragment; +pub use crate::to_tokens::ToTokens; + +// Not public API. +#[doc(hidden)] +pub mod spanned; + +macro_rules! __quote { + ($quote:item) => { + /// The whole point. + /// + /// Performs variable interpolation against the input and produces it as + /// [`proc_macro2::TokenStream`]. + /// + /// Note: for returning tokens to the compiler in a procedural macro, use + /// `.into()` on the result to convert to [`proc_macro::TokenStream`]. + /// + /// [`TokenStream`]: https://docs.rs/proc-macro2/1.0/proc_macro2/struct.TokenStream.html + /// + ///
+ /// + /// # Interpolation + /// + /// Variable interpolation is done with `#var` (similar to `$var` in + /// `macro_rules!` macros). This grabs the `var` variable that is currently in + /// scope and inserts it in that location in the output tokens. Any type + /// implementing the [`ToTokens`] trait can be interpolated. This includes most + /// Rust primitive types as well as most of the syntax tree types from the [Syn] + /// crate. + /// + /// [`ToTokens`]: trait.ToTokens.html + /// [Syn]: https://github.com/dtolnay/syn + /// + /// Repetition is done using `#(...)*` or `#(...),*` again similar to + /// `macro_rules!`. This iterates through the elements of any variable + /// interpolated within the repetition and inserts a copy of the repetition body + /// for each one. The variables in an interpolation may be a `Vec`, slice, + /// `BTreeSet`, or any `Iterator`. + /// + /// - `#(#var)*` — no separators + /// - `#(#var),*` — the character before the asterisk is used as a separator + /// - `#( struct #var; )*` — the repetition can contain other tokens + /// - `#( #k => println!("{}", #v), )*` — even multiple interpolations + /// + ///
+ /// + /// # Hygiene + /// + /// Any interpolated tokens preserve the `Span` information provided by their + /// `ToTokens` implementation. Tokens that originate within the `quote!` + /// invocation are spanned with [`Span::call_site()`]. + /// + /// [`Span::call_site()`]: https://docs.rs/proc-macro2/1.0/proc_macro2/struct.Span.html#method.call_site + /// + /// A different span can be provided through the [`quote_spanned!`] macro. + /// + /// [`quote_spanned!`]: macro.quote_spanned.html + /// + ///
+ /// + /// # Return type + /// + /// The macro evaluates to an expression of type `proc_macro2::TokenStream`. + /// Meanwhile Rust procedural macros are expected to return the type + /// `proc_macro::TokenStream`. + /// + /// The difference between the two types is that `proc_macro` types are entirely + /// specific to procedural macros and cannot ever exist in code outside of a + /// procedural macro, while `proc_macro2` types may exist anywhere including + /// tests and non-macro code like main.rs and build.rs. This is why even the + /// procedural macro ecosystem is largely built around `proc_macro2`, because + /// that ensures the libraries are unit testable and accessible in non-macro + /// contexts. + /// + /// There is a [`From`]-conversion in both directions so returning the output of + /// `quote!` from a procedural macro usually looks like `tokens.into()` or + /// `proc_macro::TokenStream::from(tokens)`. + /// + /// [`From`]: https://doc.rust-lang.org/std/convert/trait.From.html + /// + ///
+ /// + /// # Examples + /// + /// ### Procedural macro + /// + /// The structure of a basic procedural macro is as follows. Refer to the [Syn] + /// crate for further useful guidance on using `quote!` as part of a procedural + /// macro. + /// + /// [Syn]: https://github.com/dtolnay/syn + /// + /// ``` + /// # #[cfg(any())] + /// extern crate proc_macro; + /// # extern crate proc_macro2; + /// + /// # #[cfg(any())] + /// use proc_macro::TokenStream; + /// # use proc_macro2::TokenStream; + /// use quote::quote; + /// + /// # const IGNORE_TOKENS: &'static str = stringify! { + /// #[proc_macro_derive(HeapSize)] + /// # }; + /// pub fn derive_heap_size(input: TokenStream) -> TokenStream { + /// // Parse the input and figure out what implementation to generate... + /// # const IGNORE_TOKENS: &'static str = stringify! { + /// let name = /* ... */; + /// let expr = /* ... */; + /// # }; + /// # + /// # let name = 0; + /// # let expr = 0; + /// + /// let expanded = quote! { + /// // The generated impl. + /// impl heapsize::HeapSize for #name { + /// fn heap_size_of_children(&self) -> usize { + /// #expr + /// } + /// } + /// }; + /// + /// // Hand the output tokens back to the compiler. + /// TokenStream::from(expanded) + /// } + /// ``` + /// + ///
+ /// + /// ### Combining quoted fragments + /// + /// Usually you don't end up constructing an entire final `TokenStream` in one + /// piece. Different parts may come from different helper functions. The tokens + /// produced by `quote!` themselves implement `ToTokens` and so can be + /// interpolated into later `quote!` invocations to build up a final result. + /// + /// ``` + /// # use quote::quote; + /// # + /// let type_definition = quote! {...}; + /// let methods = quote! {...}; + /// + /// let tokens = quote! { + /// #type_definition + /// #methods + /// }; + /// ``` + /// + ///
+ /// + /// ### Constructing identifiers + /// + /// Suppose we have an identifier `ident` which came from somewhere in a macro + /// input and we need to modify it in some way for the macro output. Let's + /// consider prepending the identifier with an underscore. + /// + /// Simply interpolating the identifier next to an underscore will not have the + /// behavior of concatenating them. The underscore and the identifier will + /// continue to be two separate tokens as if you had written `_ x`. + /// + /// ``` + /// # use proc_macro2::{self as syn, Span}; + /// # use quote::quote; + /// # + /// # let ident = syn::Ident::new("i", Span::call_site()); + /// # + /// // incorrect + /// quote! { + /// let mut _#ident = 0; + /// } + /// # ; + /// ``` + /// + /// The solution is to build a new identifier token with the correct value. As + /// this is such a common case, the [`format_ident!`] macro provides a + /// convenient utility for doing so correctly. + /// + /// ``` + /// # use proc_macro2::{Ident, Span}; + /// # use quote::{format_ident, quote}; + /// # + /// # let ident = Ident::new("i", Span::call_site()); + /// # + /// let varname = format_ident!("_{}", ident); + /// quote! { + /// let mut #varname = 0; + /// } + /// # ; + /// ``` + /// + /// Alternatively, the APIs provided by Syn and proc-macro2 can be used to + /// directly build the identifier. This is roughly equivalent to the above, but + /// will not handle `ident` being a raw identifier. + /// + /// ``` + /// # use proc_macro2::{self as syn, Span}; + /// # use quote::quote; + /// # + /// # let ident = syn::Ident::new("i", Span::call_site()); + /// # + /// let concatenated = format!("_{}", ident); + /// let varname = syn::Ident::new(&concatenated, ident.span()); + /// quote! { + /// let mut #varname = 0; + /// } + /// # ; + /// ``` + /// + ///
+ /// + /// ### Making method calls + /// + /// Let's say our macro requires some type specified in the macro input to have + /// a constructor called `new`. We have the type in a variable called + /// `field_type` of type `syn::Type` and want to invoke the constructor. + /// + /// ``` + /// # use quote::quote; + /// # + /// # let field_type = quote!(...); + /// # + /// // incorrect + /// quote! { + /// let value = #field_type::new(); + /// } + /// # ; + /// ``` + /// + /// This works only sometimes. If `field_type` is `String`, the expanded code + /// contains `String::new()` which is fine. But if `field_type` is something + /// like `Vec` then the expanded code is `Vec::new()` which is invalid + /// syntax. Ordinarily in handwritten Rust we would write `Vec::::new()` + /// but for macros often the following is more convenient. + /// + /// ``` + /// # use quote::quote; + /// # + /// # let field_type = quote!(...); + /// # + /// quote! { + /// let value = <#field_type>::new(); + /// } + /// # ; + /// ``` + /// + /// This expands to `>::new()` which behaves correctly. + /// + /// A similar pattern is appropriate for trait methods. + /// + /// ``` + /// # use quote::quote; + /// # + /// # let field_type = quote!(...); + /// # + /// quote! { + /// let value = <#field_type as core::default::Default>::default(); + /// } + /// # ; + /// ``` + /// + ///
+ /// + /// ### Interpolating text inside of doc comments + /// + /// Neither doc comments nor string literals get interpolation behavior in + /// quote: + /// + /// ```compile_fail + /// quote! { + /// /// try to interpolate: #ident + /// /// + /// /// ... + /// } + /// ``` + /// + /// ```compile_fail + /// quote! { + /// #[doc = "try to interpolate: #ident"] + /// } + /// ``` + /// + /// Instead the best way to build doc comments that involve variables is by + /// formatting the doc string literal outside of quote. + /// + /// ```rust + /// # use proc_macro2::{Ident, Span}; + /// # use quote::quote; + /// # + /// # const IGNORE: &str = stringify! { + /// let msg = format!(...); + /// # }; + /// # + /// # let ident = Ident::new("var", Span::call_site()); + /// # let msg = format!("try to interpolate: {}", ident); + /// quote! { + /// #[doc = #msg] + /// /// + /// /// ... + /// } + /// # ; + /// ``` + /// + ///
+ /// + /// ### Indexing into a tuple struct + /// + /// When interpolating indices of a tuple or tuple struct, we need them not to + /// appears suffixed as integer literals by interpolating them as [`syn::Index`] + /// instead. + /// + /// [`syn::Index`]: https://docs.rs/syn/2.0/syn/struct.Index.html + /// + /// ```compile_fail + /// let i = 0usize..self.fields.len(); + /// + /// // expands to 0 + self.0usize.heap_size() + self.1usize.heap_size() + ... + /// // which is not valid syntax + /// quote! { + /// 0 #( + self.#i.heap_size() )* + /// } + /// ``` + /// + /// ``` + /// # use proc_macro2::{Ident, TokenStream}; + /// # use quote::quote; + /// # + /// # mod syn { + /// # use proc_macro2::{Literal, TokenStream}; + /// # use quote::{ToTokens, TokenStreamExt}; + /// # + /// # pub struct Index(usize); + /// # + /// # impl From for Index { + /// # fn from(i: usize) -> Self { + /// # Index(i) + /// # } + /// # } + /// # + /// # impl ToTokens for Index { + /// # fn to_tokens(&self, tokens: &mut TokenStream) { + /// # tokens.append(Literal::usize_unsuffixed(self.0)); + /// # } + /// # } + /// # } + /// # + /// # struct Struct { + /// # fields: Vec, + /// # } + /// # + /// # impl Struct { + /// # fn example(&self) -> TokenStream { + /// let i = (0..self.fields.len()).map(syn::Index::from); + /// + /// // expands to 0 + self.0.heap_size() + self.1.heap_size() + ... + /// quote! { + /// 0 #( + self.#i.heap_size() )* + /// } + /// # } + /// # } + /// ``` + $quote + }; +} + +#[cfg(doc)] +__quote![ + #[macro_export] + macro_rules! quote { + ($($tt:tt)*) => { + ... + }; + } +]; + +#[cfg(not(doc))] +__quote![ + #[macro_export] + macro_rules! quote { + () => { + $crate::__private::TokenStream::new() + }; + + // Special case rule for a single tt, for performance. + ($tt:tt) => {{ + let mut _s = $crate::__private::TokenStream::new(); + $crate::quote_token!{$tt _s} + _s + }}; + + // Special case rules for two tts, for performance. + (# $var:ident) => {{ + let mut _s = $crate::__private::TokenStream::new(); + $crate::ToTokens::to_tokens(&$var, &mut _s); + _s + }}; + ($tt1:tt $tt2:tt) => {{ + let mut _s = $crate::__private::TokenStream::new(); + $crate::quote_token!{$tt1 _s} + $crate::quote_token!{$tt2 _s} + _s + }}; + + // Rule for any other number of tokens. + ($($tt:tt)*) => {{ + let mut _s = $crate::__private::TokenStream::new(); + $crate::quote_each_token!{_s $($tt)*} + _s + }}; + } +]; + +macro_rules! __quote_spanned { + ($quote_spanned:item) => { + /// Same as `quote!`, but applies a given span to all tokens originating within + /// the macro invocation. + /// + ///
+ /// + /// # Syntax + /// + /// A span expression of type [`Span`], followed by `=>`, followed by the tokens + /// to quote. The span expression should be brief — use a variable for + /// anything more than a few characters. There should be no space before the + /// `=>` token. + /// + /// [`Span`]: https://docs.rs/proc-macro2/1.0/proc_macro2/struct.Span.html + /// + /// ``` + /// # use proc_macro2::Span; + /// # use quote::quote_spanned; + /// # + /// # const IGNORE_TOKENS: &'static str = stringify! { + /// let span = /* ... */; + /// # }; + /// # let span = Span::call_site(); + /// # let init = 0; + /// + /// // On one line, use parentheses. + /// let tokens = quote_spanned!(span=> Box::into_raw(Box::new(#init))); + /// + /// // On multiple lines, place the span at the top and use braces. + /// let tokens = quote_spanned! {span=> + /// Box::into_raw(Box::new(#init)) + /// }; + /// ``` + /// + /// The lack of space before the `=>` should look jarring to Rust programmers + /// and this is intentional. The formatting is designed to be visibly + /// off-balance and draw the eye a particular way, due to the span expression + /// being evaluated in the context of the procedural macro and the remaining + /// tokens being evaluated in the generated code. + /// + ///
+ /// + /// # Hygiene + /// + /// Any interpolated tokens preserve the `Span` information provided by their + /// `ToTokens` implementation. Tokens that originate within the `quote_spanned!` + /// invocation are spanned with the given span argument. + /// + ///
+ /// + /// # Example + /// + /// The following procedural macro code uses `quote_spanned!` to assert that a + /// particular Rust type implements the [`Sync`] trait so that references can be + /// safely shared between threads. + /// + /// [`Sync`]: https://doc.rust-lang.org/std/marker/trait.Sync.html + /// + /// ``` + /// # use quote::{quote_spanned, TokenStreamExt, ToTokens}; + /// # use proc_macro2::{Span, TokenStream}; + /// # + /// # struct Type; + /// # + /// # impl Type { + /// # fn span(&self) -> Span { + /// # Span::call_site() + /// # } + /// # } + /// # + /// # impl ToTokens for Type { + /// # fn to_tokens(&self, _tokens: &mut TokenStream) {} + /// # } + /// # + /// # let ty = Type; + /// # let call_site = Span::call_site(); + /// # + /// let ty_span = ty.span(); + /// let assert_sync = quote_spanned! {ty_span=> + /// struct _AssertSync where #ty: Sync; + /// }; + /// ``` + /// + /// If the assertion fails, the user will see an error like the following. The + /// input span of their type is highlighted in the error. + /// + /// ```text + /// error[E0277]: the trait bound `*const (): std::marker::Sync` is not satisfied + /// --> src/main.rs:10:21 + /// | + /// 10 | static ref PTR: *const () = &(); + /// | ^^^^^^^^^ `*const ()` cannot be shared between threads safely + /// ``` + /// + /// In this example it is important for the where-clause to be spanned with the + /// line/column information of the user's input type so that error messages are + /// placed appropriately by the compiler. + $quote_spanned + }; +} + +#[cfg(doc)] +__quote_spanned![ + #[macro_export] + macro_rules! quote_spanned { + ($span:expr=> $($tt:tt)*) => { + ... + }; + } +]; + +#[cfg(not(doc))] +__quote_spanned![ + #[macro_export] + macro_rules! quote_spanned { + ($span:expr=>) => {{ + let _: $crate::__private::Span = $crate::__private::get_span($span).__into_span(); + $crate::__private::TokenStream::new() + }}; + + // Special case rule for a single tt, for performance. + ($span:expr=> $tt:tt) => {{ + let mut _s = $crate::__private::TokenStream::new(); + let _span: $crate::__private::Span = $crate::__private::get_span($span).__into_span(); + $crate::quote_token_spanned!{$tt _s _span} + _s + }}; + + // Special case rules for two tts, for performance. + ($span:expr=> # $var:ident) => {{ + let mut _s = $crate::__private::TokenStream::new(); + let _: $crate::__private::Span = $crate::__private::get_span($span).__into_span(); + $crate::ToTokens::to_tokens(&$var, &mut _s); + _s + }}; + ($span:expr=> $tt1:tt $tt2:tt) => {{ + let mut _s = $crate::__private::TokenStream::new(); + let _span: $crate::__private::Span = $crate::__private::get_span($span).__into_span(); + $crate::quote_token_spanned!{$tt1 _s _span} + $crate::quote_token_spanned!{$tt2 _s _span} + _s + }}; + + // Rule for any other number of tokens. + ($span:expr=> $($tt:tt)*) => {{ + let mut _s = $crate::__private::TokenStream::new(); + let _span: $crate::__private::Span = $crate::__private::get_span($span).__into_span(); + $crate::quote_each_token_spanned!{_s _span $($tt)*} + _s + }}; + } +]; + +// Extract the names of all #metavariables and pass them to the $call macro. +// +// in: pounded_var_names!(then!(...) a #b c #( #d )* #e) +// out: then!(... b); +// then!(... d); +// then!(... e); +#[macro_export] +#[doc(hidden)] +macro_rules! pounded_var_names { + ($call:ident! $extra:tt $($tts:tt)*) => { + $crate::pounded_var_names_with_context!{$call! $extra + (@ $($tts)*) + ($($tts)* @) + } + }; +} + +#[macro_export] +#[doc(hidden)] +macro_rules! 
pounded_var_names_with_context { + ($call:ident! $extra:tt ($($b1:tt)*) ($($curr:tt)*)) => { + $( + $crate::pounded_var_with_context!{$call! $extra $b1 $curr} + )* + }; +} + +#[macro_export] +#[doc(hidden)] +macro_rules! pounded_var_with_context { + ($call:ident! $extra:tt $b1:tt ( $($inner:tt)* )) => { + $crate::pounded_var_names!{$call! $extra $($inner)*} + }; + + ($call:ident! $extra:tt $b1:tt [ $($inner:tt)* ]) => { + $crate::pounded_var_names!{$call! $extra $($inner)*} + }; + + ($call:ident! $extra:tt $b1:tt { $($inner:tt)* }) => { + $crate::pounded_var_names!{$call! $extra $($inner)*} + }; + + ($call:ident!($($extra:tt)*) # $var:ident) => { + $crate::$call!($($extra)* $var); + }; + + ($call:ident! $extra:tt $b1:tt $curr:tt) => {}; +} + +#[macro_export] +#[doc(hidden)] +macro_rules! quote_bind_into_iter { + ($has_iter:ident $var:ident) => { + // `mut` may be unused if $var occurs multiple times in the list. + #[allow(unused_mut)] + let (mut $var, i) = $var.quote_into_iter(); + let $has_iter = $has_iter | i; + }; +} + +#[macro_export] +#[doc(hidden)] +macro_rules! quote_bind_next_or_break { + ($var:ident) => { + let $var = match $var.next() { + Some(_x) => $crate::__private::RepInterp(_x), + None => break, + }; + }; +} + +// The obvious way to write this macro is as a tt muncher. This implementation +// does something more complex for two reasons. +// +// - With a tt muncher it's easy to hit Rust's built-in recursion_limit, which +// this implementation avoids because it isn't tail recursive. +// +// - Compile times for a tt muncher are quadratic relative to the length of +// the input. This implementation is linear, so it will be faster +// (potentially much faster) for big inputs. However, the constant factors +// of this implementation are higher than that of a tt muncher, so it is +// somewhat slower than a tt muncher if there are many invocations with +// short inputs. +// +// An invocation like this: +// +// quote_each_token!(_s a b c d e f g h i j); +// +// expands to this: +// +// quote_tokens_with_context!(_s +// (@ @ @ @ @ @ a b c d e f g h i j) +// (@ @ @ @ @ a b c d e f g h i j @) +// (@ @ @ @ a b c d e f g h i j @ @) +// (@ @ @ (a) (b) (c) (d) (e) (f) (g) (h) (i) (j) @ @ @) +// (@ @ a b c d e f g h i j @ @ @ @) +// (@ a b c d e f g h i j @ @ @ @ @) +// (a b c d e f g h i j @ @ @ @ @ @) +// ); +// +// which gets transposed and expanded to this: +// +// quote_token_with_context!(_s @ @ @ @ @ @ a); +// quote_token_with_context!(_s @ @ @ @ @ a b); +// quote_token_with_context!(_s @ @ @ @ a b c); +// quote_token_with_context!(_s @ @ @ (a) b c d); +// quote_token_with_context!(_s @ @ a (b) c d e); +// quote_token_with_context!(_s @ a b (c) d e f); +// quote_token_with_context!(_s a b c (d) e f g); +// quote_token_with_context!(_s b c d (e) f g h); +// quote_token_with_context!(_s c d e (f) g h i); +// quote_token_with_context!(_s d e f (g) h i j); +// quote_token_with_context!(_s e f g (h) i j @); +// quote_token_with_context!(_s f g h (i) j @ @); +// quote_token_with_context!(_s g h i (j) @ @ @); +// quote_token_with_context!(_s h i j @ @ @ @); +// quote_token_with_context!(_s i j @ @ @ @ @); +// quote_token_with_context!(_s j @ @ @ @ @ @); +// +// Without having used muncher-style recursion, we get one invocation of +// quote_token_with_context for each original tt, with three tts of context on +// either side. 
This is enough for the longest possible interpolation form (a +// repetition with separator, as in `# (#var) , *`) to be fully represented with +// the first or last tt in the middle. +// +// The middle tt (surrounded by parentheses) is the tt being processed. +// +// - When it is a `#`, quote_token_with_context can do an interpolation. The +// interpolation kind will depend on the three subsequent tts. +// +// - When it is within a later part of an interpolation, it can be ignored +// because the interpolation has already been done. +// +// - When it is not part of an interpolation it can be pushed as a single +// token into the output. +// +// - When the middle token is an unparenthesized `@`, that call is one of the +// first 3 or last 3 calls of quote_token_with_context and does not +// correspond to one of the original input tokens, so turns into nothing. +#[macro_export] +#[doc(hidden)] +macro_rules! quote_each_token { + ($tokens:ident $($tts:tt)*) => { + $crate::quote_tokens_with_context!{$tokens + (@ @ @ @ @ @ $($tts)*) + (@ @ @ @ @ $($tts)* @) + (@ @ @ @ $($tts)* @ @) + (@ @ @ $(($tts))* @ @ @) + (@ @ $($tts)* @ @ @ @) + (@ $($tts)* @ @ @ @ @) + ($($tts)* @ @ @ @ @ @) + } + }; +} + +// See the explanation on quote_each_token. +#[macro_export] +#[doc(hidden)] +macro_rules! quote_each_token_spanned { + ($tokens:ident $span:ident $($tts:tt)*) => { + $crate::quote_tokens_with_context_spanned!{$tokens $span + (@ @ @ @ @ @ $($tts)*) + (@ @ @ @ @ $($tts)* @) + (@ @ @ @ $($tts)* @ @) + (@ @ @ $(($tts))* @ @ @) + (@ @ $($tts)* @ @ @ @) + (@ $($tts)* @ @ @ @ @) + ($($tts)* @ @ @ @ @ @) + } + }; +} + +// See the explanation on quote_each_token. +#[macro_export] +#[doc(hidden)] +macro_rules! quote_tokens_with_context { + ($tokens:ident + ($($b3:tt)*) ($($b2:tt)*) ($($b1:tt)*) + ($($curr:tt)*) + ($($a1:tt)*) ($($a2:tt)*) ($($a3:tt)*) + ) => { + $( + $crate::quote_token_with_context!{$tokens $b3 $b2 $b1 $curr $a1 $a2 $a3} + )* + }; +} + +// See the explanation on quote_each_token. +#[macro_export] +#[doc(hidden)] +macro_rules! quote_tokens_with_context_spanned { + ($tokens:ident $span:ident + ($($b3:tt)*) ($($b2:tt)*) ($($b1:tt)*) + ($($curr:tt)*) + ($($a1:tt)*) ($($a2:tt)*) ($($a3:tt)*) + ) => { + $( + $crate::quote_token_with_context_spanned!{$tokens $span $b3 $b2 $b1 $curr $a1 $a2 $a3} + )* + }; +} + +// See the explanation on quote_each_token. +#[macro_export] +#[doc(hidden)] +macro_rules! quote_token_with_context { + // Unparenthesized `@` indicates this call does not correspond to one of the + // original input tokens. Ignore it. + ($tokens:ident $b3:tt $b2:tt $b1:tt @ $a1:tt $a2:tt $a3:tt) => {}; + + // A repetition with no separator. + ($tokens:ident $b3:tt $b2:tt $b1:tt (#) ( $($inner:tt)* ) * $a3:tt) => {{ + use $crate::__private::ext::*; + let has_iter = $crate::__private::ThereIsNoIteratorInRepetition; + $crate::pounded_var_names!{quote_bind_into_iter!(has_iter) () $($inner)*} + let _: $crate::__private::HasIterator = has_iter; + // This is `while true` instead of `loop` because if there are no + // iterators used inside of this repetition then the body would not + // contain any `break`, so the compiler would emit unreachable code + // warnings on anything below the loop. We use has_iter to detect and + // fail to compile when there are no iterators, so here we just work + // around the unneeded extra warning. + while true { + $crate::pounded_var_names!{quote_bind_next_or_break!() () $($inner)*} + $crate::quote_each_token!{$tokens $($inner)*} + } + }}; + // ... and one step later. 
+ ($tokens:ident $b3:tt $b2:tt # (( $($inner:tt)* )) * $a2:tt $a3:tt) => {}; + // ... and one step later. + ($tokens:ident $b3:tt # ( $($inner:tt)* ) (*) $a1:tt $a2:tt $a3:tt) => {}; + + // A repetition with separator. + ($tokens:ident $b3:tt $b2:tt $b1:tt (#) ( $($inner:tt)* ) $sep:tt *) => {{ + use $crate::__private::ext::*; + let mut _i = 0usize; + let has_iter = $crate::__private::ThereIsNoIteratorInRepetition; + $crate::pounded_var_names!{quote_bind_into_iter!(has_iter) () $($inner)*} + let _: $crate::__private::HasIterator = has_iter; + while true { + $crate::pounded_var_names!{quote_bind_next_or_break!() () $($inner)*} + if _i > 0 { + $crate::quote_token!{$sep $tokens} + } + _i += 1; + $crate::quote_each_token!{$tokens $($inner)*} + } + }}; + // ... and one step later. + ($tokens:ident $b3:tt $b2:tt # (( $($inner:tt)* )) $sep:tt * $a3:tt) => {}; + // ... and one step later. + ($tokens:ident $b3:tt # ( $($inner:tt)* ) ($sep:tt) * $a2:tt $a3:tt) => {}; + // (A special case for `#(var)**`, where the first `*` is treated as the + // repetition symbol and the second `*` is treated as an ordinary token.) + ($tokens:ident # ( $($inner:tt)* ) * (*) $a1:tt $a2:tt $a3:tt) => { + // https://github.com/dtolnay/quote/issues/130 + $crate::quote_token!{* $tokens} + }; + // ... and one step later. + ($tokens:ident # ( $($inner:tt)* ) $sep:tt (*) $a1:tt $a2:tt $a3:tt) => {}; + + // A non-repetition interpolation. + ($tokens:ident $b3:tt $b2:tt $b1:tt (#) $var:ident $a2:tt $a3:tt) => { + $crate::ToTokens::to_tokens(&$var, &mut $tokens); + }; + // ... and one step later. + ($tokens:ident $b3:tt $b2:tt # ($var:ident) $a1:tt $a2:tt $a3:tt) => {}; + + // An ordinary token, not part of any interpolation. + ($tokens:ident $b3:tt $b2:tt $b1:tt ($curr:tt) $a1:tt $a2:tt $a3:tt) => { + $crate::quote_token!{$curr $tokens} + }; +} + +// See the explanation on quote_each_token, and on the individual rules of +// quote_token_with_context. +#[macro_export] +#[doc(hidden)] +macro_rules! 
quote_token_with_context_spanned { + ($tokens:ident $span:ident $b3:tt $b2:tt $b1:tt @ $a1:tt $a2:tt $a3:tt) => {}; + + ($tokens:ident $span:ident $b3:tt $b2:tt $b1:tt (#) ( $($inner:tt)* ) * $a3:tt) => {{ + use $crate::__private::ext::*; + let has_iter = $crate::__private::ThereIsNoIteratorInRepetition; + $crate::pounded_var_names!{quote_bind_into_iter!(has_iter) () $($inner)*} + let _: $crate::__private::HasIterator = has_iter; + while true { + $crate::pounded_var_names!{quote_bind_next_or_break!() () $($inner)*} + $crate::quote_each_token_spanned!{$tokens $span $($inner)*} + } + }}; + ($tokens:ident $span:ident $b3:tt $b2:tt # (( $($inner:tt)* )) * $a2:tt $a3:tt) => {}; + ($tokens:ident $span:ident $b3:tt # ( $($inner:tt)* ) (*) $a1:tt $a2:tt $a3:tt) => {}; + + ($tokens:ident $span:ident $b3:tt $b2:tt $b1:tt (#) ( $($inner:tt)* ) $sep:tt *) => {{ + use $crate::__private::ext::*; + let mut _i = 0usize; + let has_iter = $crate::__private::ThereIsNoIteratorInRepetition; + $crate::pounded_var_names!{quote_bind_into_iter!(has_iter) () $($inner)*} + let _: $crate::__private::HasIterator = has_iter; + while true { + $crate::pounded_var_names!{quote_bind_next_or_break!() () $($inner)*} + if _i > 0 { + $crate::quote_token_spanned!{$sep $tokens $span} + } + _i += 1; + $crate::quote_each_token_spanned!{$tokens $span $($inner)*} + } + }}; + ($tokens:ident $span:ident $b3:tt $b2:tt # (( $($inner:tt)* )) $sep:tt * $a3:tt) => {}; + ($tokens:ident $span:ident $b3:tt # ( $($inner:tt)* ) ($sep:tt) * $a2:tt $a3:tt) => {}; + ($tokens:ident $span:ident # ( $($inner:tt)* ) * (*) $a1:tt $a2:tt $a3:tt) => { + // https://github.com/dtolnay/quote/issues/130 + $crate::quote_token_spanned!{* $tokens $span} + }; + ($tokens:ident $span:ident # ( $($inner:tt)* ) $sep:tt (*) $a1:tt $a2:tt $a3:tt) => {}; + + ($tokens:ident $span:ident $b3:tt $b2:tt $b1:tt (#) $var:ident $a2:tt $a3:tt) => { + $crate::ToTokens::to_tokens(&$var, &mut $tokens); + }; + ($tokens:ident $span:ident $b3:tt $b2:tt # ($var:ident) $a1:tt $a2:tt $a3:tt) => {}; + + ($tokens:ident $span:ident $b3:tt $b2:tt $b1:tt ($curr:tt) $a1:tt $a2:tt $a3:tt) => { + $crate::quote_token_spanned!{$curr $tokens $span} + }; +} + +// These rules are ordered by approximate token frequency, at least for the +// first 10 or so, to improve compile times. Having `ident` first is by far the +// most important because it's typically 2-3x more common than the next most +// common token. +// +// Separately, we put the token being matched in the very front so that failing +// rules may fail to match as quickly as possible. +#[macro_export] +#[doc(hidden)] +macro_rules! quote_token { + ($ident:ident $tokens:ident) => { + $crate::__private::push_ident(&mut $tokens, stringify!($ident)); + }; + + (:: $tokens:ident) => { + $crate::__private::push_colon2(&mut $tokens); + }; + + (( $($inner:tt)* ) $tokens:ident) => { + $crate::__private::push_group( + &mut $tokens, + $crate::__private::Delimiter::Parenthesis, + $crate::quote!($($inner)*), + ); + }; + + ([ $($inner:tt)* ] $tokens:ident) => { + $crate::__private::push_group( + &mut $tokens, + $crate::__private::Delimiter::Bracket, + $crate::quote!($($inner)*), + ); + }; + + ({ $($inner:tt)* } $tokens:ident) => { + $crate::__private::push_group( + &mut $tokens, + $crate::__private::Delimiter::Brace, + $crate::quote!($($inner)*), + ); + }; + + (# $tokens:ident) => { + $crate::__private::push_pound(&mut $tokens); + }; + + (, $tokens:ident) => { + $crate::__private::push_comma(&mut $tokens); + }; + + (. 
$tokens:ident) => { + $crate::__private::push_dot(&mut $tokens); + }; + + (; $tokens:ident) => { + $crate::__private::push_semi(&mut $tokens); + }; + + (: $tokens:ident) => { + $crate::__private::push_colon(&mut $tokens); + }; + + (+ $tokens:ident) => { + $crate::__private::push_add(&mut $tokens); + }; + + (+= $tokens:ident) => { + $crate::__private::push_add_eq(&mut $tokens); + }; + + (& $tokens:ident) => { + $crate::__private::push_and(&mut $tokens); + }; + + (&& $tokens:ident) => { + $crate::__private::push_and_and(&mut $tokens); + }; + + (&= $tokens:ident) => { + $crate::__private::push_and_eq(&mut $tokens); + }; + + (@ $tokens:ident) => { + $crate::__private::push_at(&mut $tokens); + }; + + (! $tokens:ident) => { + $crate::__private::push_bang(&mut $tokens); + }; + + (^ $tokens:ident) => { + $crate::__private::push_caret(&mut $tokens); + }; + + (^= $tokens:ident) => { + $crate::__private::push_caret_eq(&mut $tokens); + }; + + (/ $tokens:ident) => { + $crate::__private::push_div(&mut $tokens); + }; + + (/= $tokens:ident) => { + $crate::__private::push_div_eq(&mut $tokens); + }; + + (.. $tokens:ident) => { + $crate::__private::push_dot2(&mut $tokens); + }; + + (... $tokens:ident) => { + $crate::__private::push_dot3(&mut $tokens); + }; + + (..= $tokens:ident) => { + $crate::__private::push_dot_dot_eq(&mut $tokens); + }; + + (= $tokens:ident) => { + $crate::__private::push_eq(&mut $tokens); + }; + + (== $tokens:ident) => { + $crate::__private::push_eq_eq(&mut $tokens); + }; + + (>= $tokens:ident) => { + $crate::__private::push_ge(&mut $tokens); + }; + + (> $tokens:ident) => { + $crate::__private::push_gt(&mut $tokens); + }; + + (<= $tokens:ident) => { + $crate::__private::push_le(&mut $tokens); + }; + + (< $tokens:ident) => { + $crate::__private::push_lt(&mut $tokens); + }; + + (*= $tokens:ident) => { + $crate::__private::push_mul_eq(&mut $tokens); + }; + + (!= $tokens:ident) => { + $crate::__private::push_ne(&mut $tokens); + }; + + (| $tokens:ident) => { + $crate::__private::push_or(&mut $tokens); + }; + + (|= $tokens:ident) => { + $crate::__private::push_or_eq(&mut $tokens); + }; + + (|| $tokens:ident) => { + $crate::__private::push_or_or(&mut $tokens); + }; + + (? 
$tokens:ident) => { + $crate::__private::push_question(&mut $tokens); + }; + + (-> $tokens:ident) => { + $crate::__private::push_rarrow(&mut $tokens); + }; + + (<- $tokens:ident) => { + $crate::__private::push_larrow(&mut $tokens); + }; + + (% $tokens:ident) => { + $crate::__private::push_rem(&mut $tokens); + }; + + (%= $tokens:ident) => { + $crate::__private::push_rem_eq(&mut $tokens); + }; + + (=> $tokens:ident) => { + $crate::__private::push_fat_arrow(&mut $tokens); + }; + + (<< $tokens:ident) => { + $crate::__private::push_shl(&mut $tokens); + }; + + (<<= $tokens:ident) => { + $crate::__private::push_shl_eq(&mut $tokens); + }; + + (>> $tokens:ident) => { + $crate::__private::push_shr(&mut $tokens); + }; + + (>>= $tokens:ident) => { + $crate::__private::push_shr_eq(&mut $tokens); + }; + + (* $tokens:ident) => { + $crate::__private::push_star(&mut $tokens); + }; + + (- $tokens:ident) => { + $crate::__private::push_sub(&mut $tokens); + }; + + (-= $tokens:ident) => { + $crate::__private::push_sub_eq(&mut $tokens); + }; + + ($lifetime:lifetime $tokens:ident) => { + $crate::__private::push_lifetime(&mut $tokens, stringify!($lifetime)); + }; + + (_ $tokens:ident) => { + $crate::__private::push_underscore(&mut $tokens); + }; + + ($other:tt $tokens:ident) => { + $crate::__private::parse(&mut $tokens, stringify!($other)); + }; +} + +// See the comment above `quote_token!` about the rule ordering. +#[macro_export] +#[doc(hidden)] +macro_rules! quote_token_spanned { + ($ident:ident $tokens:ident $span:ident) => { + $crate::__private::push_ident_spanned(&mut $tokens, $span, stringify!($ident)); + }; + + (:: $tokens:ident $span:ident) => { + $crate::__private::push_colon2_spanned(&mut $tokens, $span); + }; + + (( $($inner:tt)* ) $tokens:ident $span:ident) => { + $crate::__private::push_group_spanned( + &mut $tokens, + $span, + $crate::__private::Delimiter::Parenthesis, + $crate::quote_spanned!($span=> $($inner)*), + ); + }; + + ([ $($inner:tt)* ] $tokens:ident $span:ident) => { + $crate::__private::push_group_spanned( + &mut $tokens, + $span, + $crate::__private::Delimiter::Bracket, + $crate::quote_spanned!($span=> $($inner)*), + ); + }; + + ({ $($inner:tt)* } $tokens:ident $span:ident) => { + $crate::__private::push_group_spanned( + &mut $tokens, + $span, + $crate::__private::Delimiter::Brace, + $crate::quote_spanned!($span=> $($inner)*), + ); + }; + + (# $tokens:ident $span:ident) => { + $crate::__private::push_pound_spanned(&mut $tokens, $span); + }; + + (, $tokens:ident $span:ident) => { + $crate::__private::push_comma_spanned(&mut $tokens, $span); + }; + + (. $tokens:ident $span:ident) => { + $crate::__private::push_dot_spanned(&mut $tokens, $span); + }; + + (; $tokens:ident $span:ident) => { + $crate::__private::push_semi_spanned(&mut $tokens, $span); + }; + + (: $tokens:ident $span:ident) => { + $crate::__private::push_colon_spanned(&mut $tokens, $span); + }; + + (+ $tokens:ident $span:ident) => { + $crate::__private::push_add_spanned(&mut $tokens, $span); + }; + + (+= $tokens:ident $span:ident) => { + $crate::__private::push_add_eq_spanned(&mut $tokens, $span); + }; + + (& $tokens:ident $span:ident) => { + $crate::__private::push_and_spanned(&mut $tokens, $span); + }; + + (&& $tokens:ident $span:ident) => { + $crate::__private::push_and_and_spanned(&mut $tokens, $span); + }; + + (&= $tokens:ident $span:ident) => { + $crate::__private::push_and_eq_spanned(&mut $tokens, $span); + }; + + (@ $tokens:ident $span:ident) => { + $crate::__private::push_at_spanned(&mut $tokens, $span); + }; + + (! 
$tokens:ident $span:ident) => { + $crate::__private::push_bang_spanned(&mut $tokens, $span); + }; + + (^ $tokens:ident $span:ident) => { + $crate::__private::push_caret_spanned(&mut $tokens, $span); + }; + + (^= $tokens:ident $span:ident) => { + $crate::__private::push_caret_eq_spanned(&mut $tokens, $span); + }; + + (/ $tokens:ident $span:ident) => { + $crate::__private::push_div_spanned(&mut $tokens, $span); + }; + + (/= $tokens:ident $span:ident) => { + $crate::__private::push_div_eq_spanned(&mut $tokens, $span); + }; + + (.. $tokens:ident $span:ident) => { + $crate::__private::push_dot2_spanned(&mut $tokens, $span); + }; + + (... $tokens:ident $span:ident) => { + $crate::__private::push_dot3_spanned(&mut $tokens, $span); + }; + + (..= $tokens:ident $span:ident) => { + $crate::__private::push_dot_dot_eq_spanned(&mut $tokens, $span); + }; + + (= $tokens:ident $span:ident) => { + $crate::__private::push_eq_spanned(&mut $tokens, $span); + }; + + (== $tokens:ident $span:ident) => { + $crate::__private::push_eq_eq_spanned(&mut $tokens, $span); + }; + + (>= $tokens:ident $span:ident) => { + $crate::__private::push_ge_spanned(&mut $tokens, $span); + }; + + (> $tokens:ident $span:ident) => { + $crate::__private::push_gt_spanned(&mut $tokens, $span); + }; + + (<= $tokens:ident $span:ident) => { + $crate::__private::push_le_spanned(&mut $tokens, $span); + }; + + (< $tokens:ident $span:ident) => { + $crate::__private::push_lt_spanned(&mut $tokens, $span); + }; + + (*= $tokens:ident $span:ident) => { + $crate::__private::push_mul_eq_spanned(&mut $tokens, $span); + }; + + (!= $tokens:ident $span:ident) => { + $crate::__private::push_ne_spanned(&mut $tokens, $span); + }; + + (| $tokens:ident $span:ident) => { + $crate::__private::push_or_spanned(&mut $tokens, $span); + }; + + (|= $tokens:ident $span:ident) => { + $crate::__private::push_or_eq_spanned(&mut $tokens, $span); + }; + + (|| $tokens:ident $span:ident) => { + $crate::__private::push_or_or_spanned(&mut $tokens, $span); + }; + + (? 
$tokens:ident $span:ident) => { + $crate::__private::push_question_spanned(&mut $tokens, $span); + }; + + (-> $tokens:ident $span:ident) => { + $crate::__private::push_rarrow_spanned(&mut $tokens, $span); + }; + + (<- $tokens:ident $span:ident) => { + $crate::__private::push_larrow_spanned(&mut $tokens, $span); + }; + + (% $tokens:ident $span:ident) => { + $crate::__private::push_rem_spanned(&mut $tokens, $span); + }; + + (%= $tokens:ident $span:ident) => { + $crate::__private::push_rem_eq_spanned(&mut $tokens, $span); + }; + + (=> $tokens:ident $span:ident) => { + $crate::__private::push_fat_arrow_spanned(&mut $tokens, $span); + }; + + (<< $tokens:ident $span:ident) => { + $crate::__private::push_shl_spanned(&mut $tokens, $span); + }; + + (<<= $tokens:ident $span:ident) => { + $crate::__private::push_shl_eq_spanned(&mut $tokens, $span); + }; + + (>> $tokens:ident $span:ident) => { + $crate::__private::push_shr_spanned(&mut $tokens, $span); + }; + + (>>= $tokens:ident $span:ident) => { + $crate::__private::push_shr_eq_spanned(&mut $tokens, $span); + }; + + (* $tokens:ident $span:ident) => { + $crate::__private::push_star_spanned(&mut $tokens, $span); + }; + + (- $tokens:ident $span:ident) => { + $crate::__private::push_sub_spanned(&mut $tokens, $span); + }; + + (-= $tokens:ident $span:ident) => { + $crate::__private::push_sub_eq_spanned(&mut $tokens, $span); + }; + + ($lifetime:lifetime $tokens:ident $span:ident) => { + $crate::__private::push_lifetime_spanned(&mut $tokens, $span, stringify!($lifetime)); + }; + + (_ $tokens:ident $span:ident) => { + $crate::__private::push_underscore_spanned(&mut $tokens, $span); + }; + + ($other:tt $tokens:ident $span:ident) => { + $crate::__private::parse_spanned(&mut $tokens, $span, stringify!($other)); + }; +} diff --git a/rust/hw/char/pl011/vendor/quote/src/runtime.rs b/rust/hw/char/pl011/vendor/quote/src/runtime.rs new file mode 100644 index 0000000000..eff044a957 --- /dev/null +++ b/rust/hw/char/pl011/vendor/quote/src/runtime.rs @@ -0,0 +1,530 @@ +use self::get_span::{GetSpan, GetSpanBase, GetSpanInner}; +use crate::{IdentFragment, ToTokens, TokenStreamExt}; +use core::fmt; +use core::iter; +use core::ops::BitOr; +use proc_macro2::{Group, Ident, Punct, Spacing, TokenTree}; + +#[doc(hidden)] +pub use alloc::format; +#[doc(hidden)] +pub use core::option::Option; + +#[doc(hidden)] +pub type Delimiter = proc_macro2::Delimiter; +#[doc(hidden)] +pub type Span = proc_macro2::Span; +#[doc(hidden)] +pub type TokenStream = proc_macro2::TokenStream; + +#[doc(hidden)] +pub struct HasIterator; // True +#[doc(hidden)] +pub struct ThereIsNoIteratorInRepetition; // False + +impl BitOr for ThereIsNoIteratorInRepetition { + type Output = ThereIsNoIteratorInRepetition; + fn bitor(self, _rhs: ThereIsNoIteratorInRepetition) -> ThereIsNoIteratorInRepetition { + ThereIsNoIteratorInRepetition + } +} + +impl BitOr for HasIterator { + type Output = HasIterator; + fn bitor(self, _rhs: ThereIsNoIteratorInRepetition) -> HasIterator { + HasIterator + } +} + +impl BitOr for ThereIsNoIteratorInRepetition { + type Output = HasIterator; + fn bitor(self, _rhs: HasIterator) -> HasIterator { + HasIterator + } +} + +impl BitOr for HasIterator { + type Output = HasIterator; + fn bitor(self, _rhs: HasIterator) -> HasIterator { + HasIterator + } +} + +/// Extension traits used by the implementation of `quote!`. These are defined +/// in separate traits, rather than as a single trait due to ambiguity issues. 
+///
+/// These traits expose a `quote_into_iter` method which should allow calling
+/// whichever impl happens to be applicable. Calling that method repeatedly on
+/// the returned value should be idempotent.
+#[doc(hidden)]
+pub mod ext {
+    use super::RepInterp;
+    use super::{HasIterator as HasIter, ThereIsNoIteratorInRepetition as DoesNotHaveIter};
+    use crate::ToTokens;
+    use alloc::collections::btree_set::{self, BTreeSet};
+    use core::slice;
+
+    /// Extension trait providing the `quote_into_iter` method on iterators.
+    #[doc(hidden)]
+    pub trait RepIteratorExt: Iterator + Sized {
+        fn quote_into_iter(self) -> (Self, HasIter) {
+            (self, HasIter)
+        }
+    }
+
+    impl<T: Iterator> RepIteratorExt for T {}
+
+    /// Extension trait providing the `quote_into_iter` method for
+    /// non-iterable types. These types interpolate the same value in each
+    /// iteration of the repetition.
+    #[doc(hidden)]
+    pub trait RepToTokensExt {
+        /// Pretend to be an iterator for the purposes of `quote_into_iter`.
+        /// This allows repeated calls to `quote_into_iter` to continue
+        /// correctly returning DoesNotHaveIter.
+        fn next(&self) -> Option<&Self> {
+            Some(self)
+        }
+
+        fn quote_into_iter(&self) -> (&Self, DoesNotHaveIter) {
+            (self, DoesNotHaveIter)
+        }
+    }
+
+    impl<T: ToTokens + ?Sized> RepToTokensExt for T {}
+
+    /// Extension trait providing the `quote_into_iter` method for types that
+    /// can be referenced as an iterator.
+    #[doc(hidden)]
+    pub trait RepAsIteratorExt<'q> {
+        type Iter: Iterator;
+
+        fn quote_into_iter(&'q self) -> (Self::Iter, HasIter);
+    }
+
+    impl<'q, 'a, T: RepAsIteratorExt<'q> + ?Sized> RepAsIteratorExt<'q> for &'a T {
+        type Iter = T::Iter;
+
+        fn quote_into_iter(&'q self) -> (Self::Iter, HasIter) {
+            <T as RepAsIteratorExt>::quote_into_iter(*self)
+        }
+    }
+
+    impl<'q, 'a, T: RepAsIteratorExt<'q> + ?Sized> RepAsIteratorExt<'q> for &'a mut T {
+        type Iter = T::Iter;
+
+        fn quote_into_iter(&'q self) -> (Self::Iter, HasIter) {
+            <T as RepAsIteratorExt>::quote_into_iter(*self)
+        }
+    }
+
+    impl<'q, T: 'q> RepAsIteratorExt<'q> for [T] {
+        type Iter = slice::Iter<'q, T>;
+
+        fn quote_into_iter(&'q self) -> (Self::Iter, HasIter) {
+            (self.iter(), HasIter)
+        }
+    }
+
+    impl<'q, T: 'q> RepAsIteratorExt<'q> for Vec<T> {
+        type Iter = slice::Iter<'q, T>;
+
+        fn quote_into_iter(&'q self) -> (Self::Iter, HasIter) {
+            (self.iter(), HasIter)
+        }
+    }
+
+    impl<'q, T: 'q> RepAsIteratorExt<'q> for BTreeSet<T> {
+        type Iter = btree_set::Iter<'q, T>;
+
+        fn quote_into_iter(&'q self) -> (Self::Iter, HasIter) {
+            (self.iter(), HasIter)
+        }
+    }
+
+    impl<'q, T: RepAsIteratorExt<'q>> RepAsIteratorExt<'q> for RepInterp<T> {
+        type Iter = T::Iter;
+
+        fn quote_into_iter(&'q self) -> (Self::Iter, HasIter) {
+            self.0.quote_into_iter()
+        }
+    }
+}
+
+// Helper type used within interpolations to allow for repeated binding names.
+// Implements the relevant traits, and exports a dummy `next()` method.
+#[derive(Copy, Clone)]
+#[doc(hidden)]
+pub struct RepInterp<T>(pub T);
+
+impl<T> RepInterp<T> {
+    // This method is intended to look like `Iterator::next`, and is called when
+    // a name is bound multiple times, as the previous binding will shadow the
+    // original `Iterator` object. This allows us to avoid advancing the
+    // iterator multiple times per iteration.
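Before the method itself, a self-contained re-creation of the trick it enables (a sketch with a local wrapper type, independent of the hidden `__private` items in this file): within one loop iteration the first `next()` advances the real iterator, while the second occurrence resolves to the wrapper's dummy `next()` and simply hands the same value back.

```
#[derive(Copy, Clone)]
struct RepInterp<T>(T);

impl<T> RepInterp<T> {
    // Looks like Iterator::next, but never advances anything.
    fn next(self) -> Option<T> {
        Some(self.0)
    }
}

fn main() {
    let mut x = vec![1, 2, 3].into_iter();
    let mut seen = Vec::new();
    loop {
        // First occurrence of #x in a repetition body: advance the iterator.
        let x = match x.next() {
            Some(v) => RepInterp(v),
            None => break,
        };
        // A second occurrence rebinds `x`; this "next()" is RepInterp::next
        // and returns the same value instead of advancing the iterator.
        let x = match x.next() {
            Some(v) => RepInterp(v),
            None => break,
        };
        seen.push(x.0);
    }
    assert_eq!(seen, [1, 2, 3]);
}
```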
+    pub fn next(self) -> Option<T> {
+        Some(self.0)
+    }
+}
+
+impl<T: Iterator> Iterator for RepInterp<T> {
+    type Item = T::Item;
+
+    fn next(&mut self) -> Option<Self::Item> {
+        self.0.next()
+    }
+}
+
+impl<T: ToTokens> ToTokens for RepInterp<T> {
+    fn to_tokens(&self, tokens: &mut TokenStream) {
+        self.0.to_tokens(tokens);
+    }
+}
+
+#[doc(hidden)]
+#[inline]
+pub fn get_span<T>(span: T) -> GetSpan<T> {
+    GetSpan(GetSpanInner(GetSpanBase(span)))
+}
+
+mod get_span {
+    use core::ops::Deref;
+    use proc_macro2::extra::DelimSpan;
+    use proc_macro2::Span;
+
+    pub struct GetSpan<T>(pub(crate) GetSpanInner<T>);
+
+    pub struct GetSpanInner<T>(pub(crate) GetSpanBase<T>);
+
+    pub struct GetSpanBase<T>(pub(crate) T);
+
+    impl GetSpan<Span> {
+        #[inline]
+        pub fn __into_span(self) -> Span {
+            ((self.0).0).0
+        }
+    }
+
+    impl GetSpanInner<DelimSpan> {
+        #[inline]
+        pub fn __into_span(&self) -> Span {
+            (self.0).0.join()
+        }
+    }
+
+    impl<T> GetSpanBase<T> {
+        #[allow(clippy::unused_self)]
+        pub fn __into_span(&self) -> T {
+            unreachable!()
+        }
+    }
+
+    impl<T> Deref for GetSpan<T> {
+        type Target = GetSpanInner<T>;
+
+        #[inline]
+        fn deref(&self) -> &Self::Target {
+            &self.0
+        }
+    }
+
+    impl<T> Deref for GetSpanInner<T> {
+        type Target = GetSpanBase<T>;
+
+        #[inline]
+        fn deref(&self) -> &Self::Target {
+            &self.0
+        }
+    }
+}
+
+#[doc(hidden)]
+pub fn push_group(tokens: &mut TokenStream, delimiter: Delimiter, inner: TokenStream) {
+    tokens.append(Group::new(delimiter, inner));
+}
+
+#[doc(hidden)]
+pub fn push_group_spanned(
+    tokens: &mut TokenStream,
+    span: Span,
+    delimiter: Delimiter,
+    inner: TokenStream,
+) {
+    let mut g = Group::new(delimiter, inner);
+    g.set_span(span);
+    tokens.append(g);
+}
+
+#[doc(hidden)]
+pub fn parse(tokens: &mut TokenStream, s: &str) {
+    let s: TokenStream = s.parse().expect("invalid token stream");
+    tokens.extend(iter::once(s));
+}
+
+#[doc(hidden)]
+pub fn parse_spanned(tokens: &mut TokenStream, span: Span, s: &str) {
+    let s: TokenStream = s.parse().expect("invalid token stream");
+    tokens.extend(s.into_iter().map(|t| respan_token_tree(t, span)));
+}
+
+// Token tree with every span replaced by the given one.
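The function defined next implements that respanning for the vendored crate. For readers who need the same behaviour outside this crate, a rough equivalent using only the public proc_macro2 API (a sketch, not the vendored code itself) looks like this:

```
use proc_macro2::{Group, Span, TokenStream, TokenTree};

// Recursively replace every span in `tokens` with `span`, rebuilding groups
// because a Group's inner stream cannot be mutated in place.
fn respan_all(tokens: TokenStream, span: Span) -> TokenStream {
    tokens
        .into_iter()
        .map(|mut tt| {
            if let TokenTree::Group(g) = &tt {
                let mut group = Group::new(g.delimiter(), respan_all(g.stream(), span));
                group.set_span(span);
                return TokenTree::Group(group);
            }
            tt.set_span(span);
            tt
        })
        .collect()
}
```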
+fn respan_token_tree(mut token: TokenTree, span: Span) -> TokenTree { + match &mut token { + TokenTree::Group(g) => { + let stream = g + .stream() + .into_iter() + .map(|token| respan_token_tree(token, span)) + .collect(); + *g = Group::new(g.delimiter(), stream); + g.set_span(span); + } + other => other.set_span(span), + } + token +} + +#[doc(hidden)] +pub fn push_ident(tokens: &mut TokenStream, s: &str) { + let span = Span::call_site(); + push_ident_spanned(tokens, span, s); +} + +#[doc(hidden)] +pub fn push_ident_spanned(tokens: &mut TokenStream, span: Span, s: &str) { + tokens.append(ident_maybe_raw(s, span)); +} + +#[doc(hidden)] +pub fn push_lifetime(tokens: &mut TokenStream, lifetime: &str) { + struct Lifetime<'a> { + name: &'a str, + state: u8, + } + + impl<'a> Iterator for Lifetime<'a> { + type Item = TokenTree; + + fn next(&mut self) -> Option { + match self.state { + 0 => { + self.state = 1; + Some(TokenTree::Punct(Punct::new('\'', Spacing::Joint))) + } + 1 => { + self.state = 2; + Some(TokenTree::Ident(Ident::new(self.name, Span::call_site()))) + } + _ => None, + } + } + } + + tokens.extend(Lifetime { + name: &lifetime[1..], + state: 0, + }); +} + +#[doc(hidden)] +pub fn push_lifetime_spanned(tokens: &mut TokenStream, span: Span, lifetime: &str) { + struct Lifetime<'a> { + name: &'a str, + span: Span, + state: u8, + } + + impl<'a> Iterator for Lifetime<'a> { + type Item = TokenTree; + + fn next(&mut self) -> Option { + match self.state { + 0 => { + self.state = 1; + let mut apostrophe = Punct::new('\'', Spacing::Joint); + apostrophe.set_span(self.span); + Some(TokenTree::Punct(apostrophe)) + } + 1 => { + self.state = 2; + Some(TokenTree::Ident(Ident::new(self.name, self.span))) + } + _ => None, + } + } + } + + tokens.extend(Lifetime { + name: &lifetime[1..], + span, + state: 0, + }); +} + +macro_rules! 
push_punct { + ($name:ident $spanned:ident $char1:tt) => { + #[doc(hidden)] + pub fn $name(tokens: &mut TokenStream) { + tokens.append(Punct::new($char1, Spacing::Alone)); + } + #[doc(hidden)] + pub fn $spanned(tokens: &mut TokenStream, span: Span) { + let mut punct = Punct::new($char1, Spacing::Alone); + punct.set_span(span); + tokens.append(punct); + } + }; + ($name:ident $spanned:ident $char1:tt $char2:tt) => { + #[doc(hidden)] + pub fn $name(tokens: &mut TokenStream) { + tokens.append(Punct::new($char1, Spacing::Joint)); + tokens.append(Punct::new($char2, Spacing::Alone)); + } + #[doc(hidden)] + pub fn $spanned(tokens: &mut TokenStream, span: Span) { + let mut punct = Punct::new($char1, Spacing::Joint); + punct.set_span(span); + tokens.append(punct); + let mut punct = Punct::new($char2, Spacing::Alone); + punct.set_span(span); + tokens.append(punct); + } + }; + ($name:ident $spanned:ident $char1:tt $char2:tt $char3:tt) => { + #[doc(hidden)] + pub fn $name(tokens: &mut TokenStream) { + tokens.append(Punct::new($char1, Spacing::Joint)); + tokens.append(Punct::new($char2, Spacing::Joint)); + tokens.append(Punct::new($char3, Spacing::Alone)); + } + #[doc(hidden)] + pub fn $spanned(tokens: &mut TokenStream, span: Span) { + let mut punct = Punct::new($char1, Spacing::Joint); + punct.set_span(span); + tokens.append(punct); + let mut punct = Punct::new($char2, Spacing::Joint); + punct.set_span(span); + tokens.append(punct); + let mut punct = Punct::new($char3, Spacing::Alone); + punct.set_span(span); + tokens.append(punct); + } + }; +} + +push_punct!(push_add push_add_spanned '+'); +push_punct!(push_add_eq push_add_eq_spanned '+' '='); +push_punct!(push_and push_and_spanned '&'); +push_punct!(push_and_and push_and_and_spanned '&' '&'); +push_punct!(push_and_eq push_and_eq_spanned '&' '='); +push_punct!(push_at push_at_spanned '@'); +push_punct!(push_bang push_bang_spanned '!'); +push_punct!(push_caret push_caret_spanned '^'); +push_punct!(push_caret_eq push_caret_eq_spanned '^' '='); +push_punct!(push_colon push_colon_spanned ':'); +push_punct!(push_colon2 push_colon2_spanned ':' ':'); +push_punct!(push_comma push_comma_spanned ','); +push_punct!(push_div push_div_spanned '/'); +push_punct!(push_div_eq push_div_eq_spanned '/' '='); +push_punct!(push_dot push_dot_spanned '.'); +push_punct!(push_dot2 push_dot2_spanned '.' '.'); +push_punct!(push_dot3 push_dot3_spanned '.' '.' '.'); +push_punct!(push_dot_dot_eq push_dot_dot_eq_spanned '.' '.' '='); +push_punct!(push_eq push_eq_spanned '='); +push_punct!(push_eq_eq push_eq_eq_spanned '=' '='); +push_punct!(push_ge push_ge_spanned '>' '='); +push_punct!(push_gt push_gt_spanned '>'); +push_punct!(push_le push_le_spanned '<' '='); +push_punct!(push_lt push_lt_spanned '<'); +push_punct!(push_mul_eq push_mul_eq_spanned '*' '='); +push_punct!(push_ne push_ne_spanned '!' 
'='); +push_punct!(push_or push_or_spanned '|'); +push_punct!(push_or_eq push_or_eq_spanned '|' '='); +push_punct!(push_or_or push_or_or_spanned '|' '|'); +push_punct!(push_pound push_pound_spanned '#'); +push_punct!(push_question push_question_spanned '?'); +push_punct!(push_rarrow push_rarrow_spanned '-' '>'); +push_punct!(push_larrow push_larrow_spanned '<' '-'); +push_punct!(push_rem push_rem_spanned '%'); +push_punct!(push_rem_eq push_rem_eq_spanned '%' '='); +push_punct!(push_fat_arrow push_fat_arrow_spanned '=' '>'); +push_punct!(push_semi push_semi_spanned ';'); +push_punct!(push_shl push_shl_spanned '<' '<'); +push_punct!(push_shl_eq push_shl_eq_spanned '<' '<' '='); +push_punct!(push_shr push_shr_spanned '>' '>'); +push_punct!(push_shr_eq push_shr_eq_spanned '>' '>' '='); +push_punct!(push_star push_star_spanned '*'); +push_punct!(push_sub push_sub_spanned '-'); +push_punct!(push_sub_eq push_sub_eq_spanned '-' '='); + +#[doc(hidden)] +pub fn push_underscore(tokens: &mut TokenStream) { + push_underscore_spanned(tokens, Span::call_site()); +} + +#[doc(hidden)] +pub fn push_underscore_spanned(tokens: &mut TokenStream, span: Span) { + tokens.append(Ident::new("_", span)); +} + +// Helper method for constructing identifiers from the `format_ident!` macro, +// handling `r#` prefixes. +#[doc(hidden)] +pub fn mk_ident(id: &str, span: Option) -> Ident { + let span = span.unwrap_or_else(Span::call_site); + ident_maybe_raw(id, span) +} + +fn ident_maybe_raw(id: &str, span: Span) -> Ident { + if let Some(id) = id.strip_prefix("r#") { + Ident::new_raw(id, span) + } else { + Ident::new(id, span) + } +} + +// Adapts from `IdentFragment` to `fmt::Display` for use by the `format_ident!` +// macro, and exposes span information from these fragments. +// +// This struct also has forwarding implementations of the formatting traits +// `Octal`, `LowerHex`, `UpperHex`, and `Binary` to allow for their use within +// `format_ident!`. +#[derive(Copy, Clone)] +#[doc(hidden)] +pub struct IdentFragmentAdapter(pub T); + +impl IdentFragmentAdapter { + pub fn span(&self) -> Option { + self.0.span() + } +} + +impl fmt::Display for IdentFragmentAdapter { + fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { + IdentFragment::fmt(&self.0, f) + } +} + +impl fmt::Octal for IdentFragmentAdapter { + fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { + fmt::Octal::fmt(&self.0, f) + } +} + +impl fmt::LowerHex for IdentFragmentAdapter { + fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { + fmt::LowerHex::fmt(&self.0, f) + } +} + +impl fmt::UpperHex for IdentFragmentAdapter { + fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { + fmt::UpperHex::fmt(&self.0, f) + } +} + +impl fmt::Binary for IdentFragmentAdapter { + fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { + fmt::Binary::fmt(&self.0, f) + } +} diff --git a/rust/hw/char/pl011/vendor/quote/src/spanned.rs b/rust/hw/char/pl011/vendor/quote/src/spanned.rs new file mode 100644 index 0000000000..6eba64445d --- /dev/null +++ b/rust/hw/char/pl011/vendor/quote/src/spanned.rs @@ -0,0 +1,50 @@ +use crate::ToTokens; +use proc_macro2::extra::DelimSpan; +use proc_macro2::{Span, TokenStream}; + +// Not public API other than via the syn crate. Use syn::spanned::Spanned. 
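The trait defined below is what `syn::spanned::Spanned` forwards to. A typical downstream pattern combining it with `quote_spanned!` (a sketch; it assumes the `syn` crate, which is not part of this file) emits a compile-time assertion whose error points at the user's type:

```
use proc_macro2::TokenStream;
use quote::quote_spanned;
use syn::spanned::Spanned;

fn assert_impls_default(ty: &syn::Type) -> TokenStream {
    quote_spanned! {ty.span()=>
        struct _AssertDefault where #ty: ::core::default::Default;
    }
}
```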
+pub trait Spanned: private::Sealed { + fn __span(&self) -> Span; +} + +impl Spanned for Span { + fn __span(&self) -> Span { + *self + } +} + +impl Spanned for DelimSpan { + fn __span(&self) -> Span { + self.join() + } +} + +impl Spanned for T { + fn __span(&self) -> Span { + join_spans(self.into_token_stream()) + } +} + +fn join_spans(tokens: TokenStream) -> Span { + let mut iter = tokens.into_iter().map(|tt| tt.span()); + + let first = match iter.next() { + Some(span) => span, + None => return Span::call_site(), + }; + + iter.fold(None, |_prev, next| Some(next)) + .and_then(|last| first.join(last)) + .unwrap_or(first) +} + +mod private { + use crate::ToTokens; + use proc_macro2::extra::DelimSpan; + use proc_macro2::Span; + + pub trait Sealed {} + impl Sealed for Span {} + impl Sealed for DelimSpan {} + impl Sealed for T {} +} diff --git a/rust/hw/char/pl011/vendor/quote/src/to_tokens.rs b/rust/hw/char/pl011/vendor/quote/src/to_tokens.rs new file mode 100644 index 0000000000..23b6ec2c08 --- /dev/null +++ b/rust/hw/char/pl011/vendor/quote/src/to_tokens.rs @@ -0,0 +1,209 @@ +use super::TokenStreamExt; +use alloc::borrow::Cow; +use alloc::rc::Rc; +use core::iter; +use proc_macro2::{Group, Ident, Literal, Punct, Span, TokenStream, TokenTree}; + +/// Types that can be interpolated inside a `quote!` invocation. +/// +/// [`quote!`]: macro.quote.html +pub trait ToTokens { + /// Write `self` to the given `TokenStream`. + /// + /// The token append methods provided by the [`TokenStreamExt`] extension + /// trait may be useful for implementing `ToTokens`. + /// + /// [`TokenStreamExt`]: trait.TokenStreamExt.html + /// + /// # Example + /// + /// Example implementation for a struct representing Rust paths like + /// `std::cmp::PartialEq`: + /// + /// ``` + /// use proc_macro2::{TokenTree, Spacing, Span, Punct, TokenStream}; + /// use quote::{TokenStreamExt, ToTokens}; + /// + /// pub struct Path { + /// pub global: bool, + /// pub segments: Vec, + /// } + /// + /// impl ToTokens for Path { + /// fn to_tokens(&self, tokens: &mut TokenStream) { + /// for (i, segment) in self.segments.iter().enumerate() { + /// if i > 0 || self.global { + /// // Double colon `::` + /// tokens.append(Punct::new(':', Spacing::Joint)); + /// tokens.append(Punct::new(':', Spacing::Alone)); + /// } + /// segment.to_tokens(tokens); + /// } + /// } + /// } + /// # + /// # pub struct PathSegment; + /// # + /// # impl ToTokens for PathSegment { + /// # fn to_tokens(&self, tokens: &mut TokenStream) { + /// # unimplemented!() + /// # } + /// # } + /// ``` + fn to_tokens(&self, tokens: &mut TokenStream); + + /// Convert `self` directly into a `TokenStream` object. + /// + /// This method is implicitly implemented using `to_tokens`, and acts as a + /// convenience method for consumers of the `ToTokens` trait. + fn to_token_stream(&self) -> TokenStream { + let mut tokens = TokenStream::new(); + self.to_tokens(&mut tokens); + tokens + } + + /// Convert `self` directly into a `TokenStream` object. + /// + /// This method is implicitly implemented using `to_tokens`, and acts as a + /// convenience method for consumers of the `ToTokens` trait. 
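A short usage sketch of the two conversion helpers documented above (the helper function names here are made up; any `ToTokens` type works):

```
use quote::ToTokens;

// Borrowing conversion: the value can still be used afterwards.
fn render<T: ToTokens>(value: &T) -> String {
    value.to_token_stream().to_string()
}

// Consuming conversion: hands the value off when it is no longer needed.
fn render_owned<T: ToTokens>(value: T) -> String {
    value.into_token_stream().to_string()
}
```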
+ fn into_token_stream(self) -> TokenStream + where + Self: Sized, + { + self.to_token_stream() + } +} + +impl<'a, T: ?Sized + ToTokens> ToTokens for &'a T { + fn to_tokens(&self, tokens: &mut TokenStream) { + (**self).to_tokens(tokens); + } +} + +impl<'a, T: ?Sized + ToTokens> ToTokens for &'a mut T { + fn to_tokens(&self, tokens: &mut TokenStream) { + (**self).to_tokens(tokens); + } +} + +impl<'a, T: ?Sized + ToOwned + ToTokens> ToTokens for Cow<'a, T> { + fn to_tokens(&self, tokens: &mut TokenStream) { + (**self).to_tokens(tokens); + } +} + +impl ToTokens for Box { + fn to_tokens(&self, tokens: &mut TokenStream) { + (**self).to_tokens(tokens); + } +} + +impl ToTokens for Rc { + fn to_tokens(&self, tokens: &mut TokenStream) { + (**self).to_tokens(tokens); + } +} + +impl ToTokens for Option { + fn to_tokens(&self, tokens: &mut TokenStream) { + if let Some(ref t) = *self { + t.to_tokens(tokens); + } + } +} + +impl ToTokens for str { + fn to_tokens(&self, tokens: &mut TokenStream) { + tokens.append(Literal::string(self)); + } +} + +impl ToTokens for String { + fn to_tokens(&self, tokens: &mut TokenStream) { + self.as_str().to_tokens(tokens); + } +} + +macro_rules! primitive { + ($($t:ident => $name:ident)*) => { + $( + impl ToTokens for $t { + fn to_tokens(&self, tokens: &mut TokenStream) { + tokens.append(Literal::$name(*self)); + } + } + )* + }; +} + +primitive! { + i8 => i8_suffixed + i16 => i16_suffixed + i32 => i32_suffixed + i64 => i64_suffixed + i128 => i128_suffixed + isize => isize_suffixed + + u8 => u8_suffixed + u16 => u16_suffixed + u32 => u32_suffixed + u64 => u64_suffixed + u128 => u128_suffixed + usize => usize_suffixed + + f32 => f32_suffixed + f64 => f64_suffixed +} + +impl ToTokens for char { + fn to_tokens(&self, tokens: &mut TokenStream) { + tokens.append(Literal::character(*self)); + } +} + +impl ToTokens for bool { + fn to_tokens(&self, tokens: &mut TokenStream) { + let word = if *self { "true" } else { "false" }; + tokens.append(Ident::new(word, Span::call_site())); + } +} + +impl ToTokens for Group { + fn to_tokens(&self, tokens: &mut TokenStream) { + tokens.append(self.clone()); + } +} + +impl ToTokens for Ident { + fn to_tokens(&self, tokens: &mut TokenStream) { + tokens.append(self.clone()); + } +} + +impl ToTokens for Punct { + fn to_tokens(&self, tokens: &mut TokenStream) { + tokens.append(self.clone()); + } +} + +impl ToTokens for Literal { + fn to_tokens(&self, tokens: &mut TokenStream) { + tokens.append(self.clone()); + } +} + +impl ToTokens for TokenTree { + fn to_tokens(&self, dst: &mut TokenStream) { + dst.append(self.clone()); + } +} + +impl ToTokens for TokenStream { + fn to_tokens(&self, dst: &mut TokenStream) { + dst.extend(iter::once(self.clone())); + } + + fn into_token_stream(self) -> TokenStream { + self + } +} diff --git a/rust/hw/char/pl011/vendor/quote/tests/compiletest.rs b/rust/hw/char/pl011/vendor/quote/tests/compiletest.rs new file mode 100644 index 0000000000..7974a6249e --- /dev/null +++ b/rust/hw/char/pl011/vendor/quote/tests/compiletest.rs @@ -0,0 +1,7 @@ +#[rustversion::attr(not(nightly), ignore)] +#[cfg_attr(miri, ignore)] +#[test] +fn ui() { + let t = trybuild::TestCases::new(); + t.compile_fail("tests/ui/*.rs"); +} diff --git a/rust/hw/char/pl011/vendor/quote/tests/test.rs b/rust/hw/char/pl011/vendor/quote/tests/test.rs new file mode 100644 index 0000000000..eab4f55aa8 --- /dev/null +++ b/rust/hw/char/pl011/vendor/quote/tests/test.rs @@ -0,0 +1,549 @@ +#![allow( + clippy::disallowed_names, + clippy::let_underscore_untyped, + 
clippy::shadow_unrelated, + clippy::unseparated_literal_suffix, + clippy::used_underscore_binding +)] + +extern crate proc_macro; + +use std::borrow::Cow; +use std::collections::BTreeSet; + +use proc_macro2::{Delimiter, Group, Ident, Span, TokenStream}; +use quote::{format_ident, quote, quote_spanned, TokenStreamExt}; + +struct X; + +impl quote::ToTokens for X { + fn to_tokens(&self, tokens: &mut TokenStream) { + tokens.append(Ident::new("X", Span::call_site())); + } +} + +#[test] +fn test_quote_impl() { + let tokens = quote! { + impl<'a, T: ToTokens> ToTokens for &'a T { + fn to_tokens(&self, tokens: &mut TokenStream) { + (**self).to_tokens(tokens) + } + } + }; + + let expected = concat!( + "impl < 'a , T : ToTokens > ToTokens for & 'a T { ", + "fn to_tokens (& self , tokens : & mut TokenStream) { ", + "(* * self) . to_tokens (tokens) ", + "} ", + "}" + ); + + assert_eq!(expected, tokens.to_string()); +} + +#[test] +fn test_quote_spanned_impl() { + let span = Span::call_site(); + let tokens = quote_spanned! {span=> + impl<'a, T: ToTokens> ToTokens for &'a T { + fn to_tokens(&self, tokens: &mut TokenStream) { + (**self).to_tokens(tokens) + } + } + }; + + let expected = concat!( + "impl < 'a , T : ToTokens > ToTokens for & 'a T { ", + "fn to_tokens (& self , tokens : & mut TokenStream) { ", + "(* * self) . to_tokens (tokens) ", + "} ", + "}" + ); + + assert_eq!(expected, tokens.to_string()); +} + +#[test] +fn test_substitution() { + let x = X; + let tokens = quote!(#x <#x> (#x) [#x] {#x}); + + let expected = "X < X > (X) [X] { X }"; + + assert_eq!(expected, tokens.to_string()); +} + +#[test] +fn test_iter() { + let primes = &[X, X, X, X]; + + assert_eq!("X X X X", quote!(#(#primes)*).to_string()); + + assert_eq!("X , X , X , X ,", quote!(#(#primes,)*).to_string()); + + assert_eq!("X , X , X , X", quote!(#(#primes),*).to_string()); +} + +#[test] +fn test_array() { + let array: [u8; 40] = [0; 40]; + let _ = quote!(#(#array #array)*); + + let ref_array: &[u8; 40] = &[0; 40]; + let _ = quote!(#(#ref_array #ref_array)*); + + let ref_slice: &[u8] = &[0; 40]; + let _ = quote!(#(#ref_slice #ref_slice)*); + + let array: [X; 2] = [X, X]; // !Copy + let _ = quote!(#(#array #array)*); + + let ref_array: &[X; 2] = &[X, X]; + let _ = quote!(#(#ref_array #ref_array)*); + + let ref_slice: &[X] = &[X, X]; + let _ = quote!(#(#ref_slice #ref_slice)*); +} + +#[test] +fn test_advanced() { + let generics = quote!( <'a, T> ); + + let where_clause = quote!( where T: Serialize ); + + let field_ty = quote!(String); + + let item_ty = quote!(Cow<'a, str>); + + let path = quote!(SomeTrait::serialize_with); + + let value = quote!(self.x); + + let tokens = quote! 
{ + struct SerializeWith #generics #where_clause { + value: &'a #field_ty, + phantom: ::std::marker::PhantomData<#item_ty>, + } + + impl #generics ::serde::Serialize for SerializeWith #generics #where_clause { + fn serialize(&self, s: &mut S) -> Result<(), S::Error> + where S: ::serde::Serializer + { + #path(self.value, s) + } + } + + SerializeWith { + value: #value, + phantom: ::std::marker::PhantomData::<#item_ty>, + } + }; + + let expected = concat!( + "struct SerializeWith < 'a , T > where T : Serialize { ", + "value : & 'a String , ", + "phantom : :: std :: marker :: PhantomData < Cow < 'a , str > > , ", + "} ", + "impl < 'a , T > :: serde :: Serialize for SerializeWith < 'a , T > where T : Serialize { ", + "fn serialize < S > (& self , s : & mut S) -> Result < () , S :: Error > ", + "where S : :: serde :: Serializer ", + "{ ", + "SomeTrait :: serialize_with (self . value , s) ", + "} ", + "} ", + "SerializeWith { ", + "value : self . x , ", + "phantom : :: std :: marker :: PhantomData :: < Cow < 'a , str > > , ", + "}" + ); + + assert_eq!(expected, tokens.to_string()); +} + +#[test] +fn test_integer() { + let ii8 = -1i8; + let ii16 = -1i16; + let ii32 = -1i32; + let ii64 = -1i64; + let ii128 = -1i128; + let iisize = -1isize; + let uu8 = 1u8; + let uu16 = 1u16; + let uu32 = 1u32; + let uu64 = 1u64; + let uu128 = 1u128; + let uusize = 1usize; + + let tokens = quote! { + 1 1i32 1u256 + #ii8 #ii16 #ii32 #ii64 #ii128 #iisize + #uu8 #uu16 #uu32 #uu64 #uu128 #uusize + }; + let expected = + "1 1i32 1u256 - 1i8 - 1i16 - 1i32 - 1i64 - 1i128 - 1isize 1u8 1u16 1u32 1u64 1u128 1usize"; + assert_eq!(expected, tokens.to_string()); +} + +#[test] +fn test_floating() { + let e32 = 2.345f32; + + let e64 = 2.345f64; + + let tokens = quote! { + #e32 + #e64 + }; + let expected = concat!("2.345f32 2.345f64"); + assert_eq!(expected, tokens.to_string()); +} + +#[test] +fn test_char() { + let zero = '\u{1}'; + let pound = '#'; + let quote = '"'; + let apost = '\''; + let newline = '\n'; + let heart = '\u{2764}'; + + let tokens = quote! { + #zero #pound #quote #apost #newline #heart + }; + let expected = "'\\u{1}' '#' '\"' '\\'' '\\n' '\u{2764}'"; + assert_eq!(expected, tokens.to_string()); +} + +#[test] +fn test_str() { + let s = "\u{1} a 'b \" c"; + let tokens = quote!(#s); + let expected = "\"\\u{1} a 'b \\\" c\""; + assert_eq!(expected, tokens.to_string()); +} + +#[test] +fn test_string() { + let s = "\u{1} a 'b \" c".to_string(); + let tokens = quote!(#s); + let expected = "\"\\u{1} a 'b \\\" c\""; + assert_eq!(expected, tokens.to_string()); +} + +#[test] +fn test_interpolated_literal() { + macro_rules! 
m { + ($literal:literal) => { + quote!($literal) + }; + } + + let tokens = m!(1); + let expected = "1"; + assert_eq!(expected, tokens.to_string()); + + let tokens = m!(-1); + let expected = "- 1"; + assert_eq!(expected, tokens.to_string()); + + let tokens = m!(true); + let expected = "true"; + assert_eq!(expected, tokens.to_string()); + + let tokens = m!(-true); + let expected = "- true"; + assert_eq!(expected, tokens.to_string()); +} + +#[test] +fn test_ident() { + let foo = Ident::new("Foo", Span::call_site()); + let bar = Ident::new(&format!("Bar{}", 7), Span::call_site()); + let tokens = quote!(struct #foo; enum #bar {}); + let expected = "struct Foo ; enum Bar7 { }"; + assert_eq!(expected, tokens.to_string()); +} + +#[test] +fn test_underscore() { + let tokens = quote!(let _;); + let expected = "let _ ;"; + assert_eq!(expected, tokens.to_string()); +} + +#[test] +fn test_duplicate() { + let ch = 'x'; + + let tokens = quote!(#ch #ch); + + let expected = "'x' 'x'"; + assert_eq!(expected, tokens.to_string()); +} + +#[test] +fn test_fancy_repetition() { + let foo = vec!["a", "b"]; + let bar = vec![true, false]; + + let tokens = quote! { + #(#foo: #bar),* + }; + + let expected = r#""a" : true , "b" : false"#; + assert_eq!(expected, tokens.to_string()); +} + +#[test] +fn test_nested_fancy_repetition() { + let nested = vec![vec!['a', 'b', 'c'], vec!['x', 'y', 'z']]; + + let tokens = quote! { + #( + #(#nested)* + ),* + }; + + let expected = "'a' 'b' 'c' , 'x' 'y' 'z'"; + assert_eq!(expected, tokens.to_string()); +} + +#[test] +fn test_duplicate_name_repetition() { + let foo = &["a", "b"]; + + let tokens = quote! { + #(#foo: #foo),* + #(#foo: #foo),* + }; + + let expected = r#""a" : "a" , "b" : "b" "a" : "a" , "b" : "b""#; + assert_eq!(expected, tokens.to_string()); +} + +#[test] +fn test_duplicate_name_repetition_no_copy() { + let foo = vec!["a".to_owned(), "b".to_owned()]; + + let tokens = quote! { + #(#foo: #foo),* + }; + + let expected = r#""a" : "a" , "b" : "b""#; + assert_eq!(expected, tokens.to_string()); +} + +#[test] +fn test_btreeset_repetition() { + let mut set = BTreeSet::new(); + set.insert("a".to_owned()); + set.insert("b".to_owned()); + + let tokens = quote! { + #(#set: #set),* + }; + + let expected = r#""a" : "a" , "b" : "b""#; + assert_eq!(expected, tokens.to_string()); +} + +#[test] +fn test_variable_name_conflict() { + // The implementation of `#(...),*` uses the variable `_i` but it should be + // fine, if a little confusing when debugging. + let _i = vec!['a', 'b']; + let tokens = quote! { #(#_i),* }; + let expected = "'a' , 'b'"; + assert_eq!(expected, tokens.to_string()); +} + +#[test] +fn test_nonrep_in_repetition() { + let rep = vec!["a", "b"]; + let nonrep = "c"; + + let tokens = quote! { + #(#rep #rep : #nonrep #nonrep),* + }; + + let expected = r#""a" "a" : "c" "c" , "b" "b" : "c" "c""#; + assert_eq!(expected, tokens.to_string()); +} + +#[test] +fn test_empty_quote() { + let tokens = quote!(); + assert_eq!("", tokens.to_string()); +} + +#[test] +fn test_box_str() { + let b = "str".to_owned().into_boxed_str(); + let tokens = quote! { #b }; + assert_eq!("\"str\"", tokens.to_string()); +} + +#[test] +fn test_cow() { + let owned: Cow = Cow::Owned(Ident::new("owned", Span::call_site())); + + let ident = Ident::new("borrowed", Span::call_site()); + let borrowed = Cow::Borrowed(&ident); + + let tokens = quote! 
{ #owned #borrowed }; + assert_eq!("owned borrowed", tokens.to_string()); +} + +#[test] +fn test_closure() { + fn field_i(i: usize) -> Ident { + format_ident!("__field{}", i) + } + + let fields = (0usize..3) + .map(field_i as fn(_) -> _) + .map(|var| quote! { #var }); + + let tokens = quote! { #(#fields)* }; + assert_eq!("__field0 __field1 __field2", tokens.to_string()); +} + +#[test] +fn test_append_tokens() { + let mut a = quote!(a); + let b = quote!(b); + a.append_all(b); + assert_eq!("a b", a.to_string()); +} + +#[test] +fn test_format_ident() { + let id0 = format_ident!("Aa"); + let id1 = format_ident!("Hello{x}", x = id0); + let id2 = format_ident!("Hello{x}", x = 5usize); + let id3 = format_ident!("Hello{}_{x}", id0, x = 10usize); + let id4 = format_ident!("Aa", span = Span::call_site()); + let id5 = format_ident!("Hello{}", Cow::Borrowed("World")); + + assert_eq!(id0, "Aa"); + assert_eq!(id1, "HelloAa"); + assert_eq!(id2, "Hello5"); + assert_eq!(id3, "HelloAa_10"); + assert_eq!(id4, "Aa"); + assert_eq!(id5, "HelloWorld"); +} + +#[test] +fn test_format_ident_strip_raw() { + let id = format_ident!("r#struct"); + let my_id = format_ident!("MyId{}", id); + let raw_my_id = format_ident!("r#MyId{}", id); + + assert_eq!(id, "r#struct"); + assert_eq!(my_id, "MyIdstruct"); + assert_eq!(raw_my_id, "r#MyIdstruct"); +} + +#[test] +fn test_outer_line_comment() { + let tokens = quote! { + /// doc + }; + let expected = "# [doc = r\" doc\"]"; + assert_eq!(expected, tokens.to_string()); +} + +#[test] +fn test_inner_line_comment() { + let tokens = quote! { + //! doc + }; + let expected = "# ! [doc = r\" doc\"]"; + assert_eq!(expected, tokens.to_string()); +} + +#[test] +fn test_outer_block_comment() { + let tokens = quote! { + /** doc */ + }; + let expected = "# [doc = r\" doc \"]"; + assert_eq!(expected, tokens.to_string()); +} + +#[test] +fn test_inner_block_comment() { + let tokens = quote! { + /*! doc */ + }; + let expected = "# ! [doc = r\" doc \"]"; + assert_eq!(expected, tokens.to_string()); +} + +#[test] +fn test_outer_attr() { + let tokens = quote! { + #[inline] + }; + let expected = "# [inline]"; + assert_eq!(expected, tokens.to_string()); +} + +#[test] +fn test_inner_attr() { + let tokens = quote! { + #![no_std] + }; + let expected = "# ! [no_std]"; + assert_eq!(expected, tokens.to_string()); +} + +// https://github.com/dtolnay/quote/issues/130 +#[test] +fn test_star_after_repetition() { + let c = vec!['0', '1']; + let tokens = quote! 
{ + #( + f(#c); + )* + *out = None; + }; + let expected = "f ('0') ; f ('1') ; * out = None ;"; + assert_eq!(expected, tokens.to_string()); +} + +#[test] +fn test_quote_raw_id() { + let id = quote!(r#raw_id); + assert_eq!(id.to_string(), "r#raw_id"); +} + +#[test] +fn test_type_inference_for_span() { + trait CallSite { + fn get() -> Self; + } + + impl CallSite for Span { + fn get() -> Self { + Span::call_site() + } + } + + let span = Span::call_site(); + let _ = quote_spanned!(span=> ...); + + let delim_span = Group::new(Delimiter::Parenthesis, TokenStream::new()).delim_span(); + let _ = quote_spanned!(delim_span=> ...); + + let inferred = CallSite::get(); + let _ = quote_spanned!(inferred=> ...); + + if false { + let proc_macro_span = proc_macro::Span::call_site(); + let _ = quote_spanned!(proc_macro_span.into()=> ...); + } +} diff --git a/rust/hw/char/pl011/vendor/quote/tests/ui/does-not-have-iter-interpolated-dup.rs b/rust/hw/char/pl011/vendor/quote/tests/ui/does-not-have-iter-interpolated-dup.rs new file mode 100644 index 0000000000..0a39f41507 --- /dev/null +++ b/rust/hw/char/pl011/vendor/quote/tests/ui/does-not-have-iter-interpolated-dup.rs @@ -0,0 +1,9 @@ +use quote::quote; + +fn main() { + let nonrep = ""; + + // Without some protection against repetitions with no iterator somewhere + // inside, this would loop infinitely. + quote!(#(#nonrep #nonrep)*); +} diff --git a/rust/hw/char/pl011/vendor/quote/tests/ui/does-not-have-iter-interpolated-dup.stderr b/rust/hw/char/pl011/vendor/quote/tests/ui/does-not-have-iter-interpolated-dup.stderr new file mode 100644 index 0000000000..99c20a5676 --- /dev/null +++ b/rust/hw/char/pl011/vendor/quote/tests/ui/does-not-have-iter-interpolated-dup.stderr @@ -0,0 +1,11 @@ +error[E0308]: mismatched types + --> tests/ui/does-not-have-iter-interpolated-dup.rs:8:5 + | +8 | quote!(#(#nonrep #nonrep)*); + | ^^^^^^^^^^^^^^^^^^^^^^^^^^^ + | | + | expected `HasIterator`, found `ThereIsNoIteratorInRepetition` + | expected due to this + | here the type of `has_iter` is inferred to be `ThereIsNoIteratorInRepetition` + | + = note: this error originates in the macro `$crate::quote_token_with_context` which comes from the expansion of the macro `quote` (in Nightly builds, run with -Z macro-backtrace for more info) diff --git a/rust/hw/char/pl011/vendor/quote/tests/ui/does-not-have-iter-interpolated.rs b/rust/hw/char/pl011/vendor/quote/tests/ui/does-not-have-iter-interpolated.rs new file mode 100644 index 0000000000..2c740cc083 --- /dev/null +++ b/rust/hw/char/pl011/vendor/quote/tests/ui/does-not-have-iter-interpolated.rs @@ -0,0 +1,9 @@ +use quote::quote; + +fn main() { + let nonrep = ""; + + // Without some protection against repetitions with no iterator somewhere + // inside, this would loop infinitely. 
+ quote!(#(#nonrep)*); +} diff --git a/rust/hw/char/pl011/vendor/quote/tests/ui/does-not-have-iter-interpolated.stderr b/rust/hw/char/pl011/vendor/quote/tests/ui/does-not-have-iter-interpolated.stderr new file mode 100644 index 0000000000..ef908131ba --- /dev/null +++ b/rust/hw/char/pl011/vendor/quote/tests/ui/does-not-have-iter-interpolated.stderr @@ -0,0 +1,11 @@ +error[E0308]: mismatched types + --> tests/ui/does-not-have-iter-interpolated.rs:8:5 + | +8 | quote!(#(#nonrep)*); + | ^^^^^^^^^^^^^^^^^^^ + | | + | expected `HasIterator`, found `ThereIsNoIteratorInRepetition` + | expected due to this + | here the type of `has_iter` is inferred to be `ThereIsNoIteratorInRepetition` + | + = note: this error originates in the macro `$crate::quote_token_with_context` which comes from the expansion of the macro `quote` (in Nightly builds, run with -Z macro-backtrace for more info) diff --git a/rust/hw/char/pl011/vendor/quote/tests/ui/does-not-have-iter-separated.rs b/rust/hw/char/pl011/vendor/quote/tests/ui/does-not-have-iter-separated.rs new file mode 100644 index 0000000000..c027243dda --- /dev/null +++ b/rust/hw/char/pl011/vendor/quote/tests/ui/does-not-have-iter-separated.rs @@ -0,0 +1,5 @@ +use quote::quote; + +fn main() { + quote!(#(a b),*); +} diff --git a/rust/hw/char/pl011/vendor/quote/tests/ui/does-not-have-iter-separated.stderr b/rust/hw/char/pl011/vendor/quote/tests/ui/does-not-have-iter-separated.stderr new file mode 100644 index 0000000000..7c6e30f2b8 --- /dev/null +++ b/rust/hw/char/pl011/vendor/quote/tests/ui/does-not-have-iter-separated.stderr @@ -0,0 +1,10 @@ +error[E0308]: mismatched types + --> tests/ui/does-not-have-iter-separated.rs:4:5 + | +4 | quote!(#(a b),*); + | ^^^^^^^^^^^^^^^^ + | | + | expected `HasIterator`, found `ThereIsNoIteratorInRepetition` + | expected due to this + | + = note: this error originates in the macro `$crate::quote_token_with_context` which comes from the expansion of the macro `quote` (in Nightly builds, run with -Z macro-backtrace for more info) diff --git a/rust/hw/char/pl011/vendor/quote/tests/ui/does-not-have-iter.rs b/rust/hw/char/pl011/vendor/quote/tests/ui/does-not-have-iter.rs new file mode 100644 index 0000000000..8908353b57 --- /dev/null +++ b/rust/hw/char/pl011/vendor/quote/tests/ui/does-not-have-iter.rs @@ -0,0 +1,5 @@ +use quote::quote; + +fn main() { + quote!(#(a b)*); +} diff --git a/rust/hw/char/pl011/vendor/quote/tests/ui/does-not-have-iter.stderr b/rust/hw/char/pl011/vendor/quote/tests/ui/does-not-have-iter.stderr new file mode 100644 index 0000000000..0b13e5cb78 --- /dev/null +++ b/rust/hw/char/pl011/vendor/quote/tests/ui/does-not-have-iter.stderr @@ -0,0 +1,10 @@ +error[E0308]: mismatched types + --> tests/ui/does-not-have-iter.rs:4:5 + | +4 | quote!(#(a b)*); + | ^^^^^^^^^^^^^^^ + | | + | expected `HasIterator`, found `ThereIsNoIteratorInRepetition` + | expected due to this + | + = note: this error originates in the macro `$crate::quote_token_with_context` which comes from the expansion of the macro `quote` (in Nightly builds, run with -Z macro-backtrace for more info) diff --git a/rust/hw/char/pl011/vendor/quote/tests/ui/not-quotable.rs b/rust/hw/char/pl011/vendor/quote/tests/ui/not-quotable.rs new file mode 100644 index 0000000000..f991c1883d --- /dev/null +++ b/rust/hw/char/pl011/vendor/quote/tests/ui/not-quotable.rs @@ -0,0 +1,7 @@ +use quote::quote; +use std::net::Ipv4Addr; + +fn main() { + let ip = Ipv4Addr::LOCALHOST; + let _ = quote! 
{ #ip }; +} diff --git a/rust/hw/char/pl011/vendor/quote/tests/ui/not-quotable.stderr b/rust/hw/char/pl011/vendor/quote/tests/ui/not-quotable.stderr new file mode 100644 index 0000000000..7bd20707e7 --- /dev/null +++ b/rust/hw/char/pl011/vendor/quote/tests/ui/not-quotable.stderr @@ -0,0 +1,20 @@ +error[E0277]: the trait bound `Ipv4Addr: ToTokens` is not satisfied + --> tests/ui/not-quotable.rs:6:13 + | +6 | let _ = quote! { #ip }; + | ^^^^^^^^^^^^^^ + | | + | the trait `ToTokens` is not implemented for `Ipv4Addr` + | required by a bound introduced by this call + | + = help: the following other types implement trait `ToTokens`: + &'a T + &'a mut T + Box + Cow<'a, T> + Option + Rc + RepInterp + String + and $N others + = note: this error originates in the macro `quote` (in Nightly builds, run with -Z macro-backtrace for more info) diff --git a/rust/hw/char/pl011/vendor/quote/tests/ui/not-repeatable.rs b/rust/hw/char/pl011/vendor/quote/tests/ui/not-repeatable.rs new file mode 100644 index 0000000000..a8f0fe773c --- /dev/null +++ b/rust/hw/char/pl011/vendor/quote/tests/ui/not-repeatable.rs @@ -0,0 +1,8 @@ +use quote::quote; + +struct Ipv4Addr; + +fn main() { + let ip = Ipv4Addr; + let _ = quote! { #(#ip)* }; +} diff --git a/rust/hw/char/pl011/vendor/quote/tests/ui/not-repeatable.stderr b/rust/hw/char/pl011/vendor/quote/tests/ui/not-repeatable.stderr new file mode 100644 index 0000000000..26932bbf67 --- /dev/null +++ b/rust/hw/char/pl011/vendor/quote/tests/ui/not-repeatable.stderr @@ -0,0 +1,34 @@ +error[E0599]: the method `quote_into_iter` exists for struct `Ipv4Addr`, but its trait bounds were not satisfied + --> tests/ui/not-repeatable.rs:7:13 + | +3 | struct Ipv4Addr; + | --------------- method `quote_into_iter` not found for this struct because it doesn't satisfy `Ipv4Addr: Iterator`, `Ipv4Addr: ToTokens`, `Ipv4Addr: ext::RepIteratorExt` or `Ipv4Addr: ext::RepToTokensExt` +... +7 | let _ = quote! 
{ #(#ip)* }; + | ^^^^^^^^^^^^^^^^^^ method cannot be called on `Ipv4Addr` due to unsatisfied trait bounds + | + = note: the following trait bounds were not satisfied: + `Ipv4Addr: Iterator` + which is required by `Ipv4Addr: ext::RepIteratorExt` + `&Ipv4Addr: Iterator` + which is required by `&Ipv4Addr: ext::RepIteratorExt` + `Ipv4Addr: ToTokens` + which is required by `Ipv4Addr: ext::RepToTokensExt` + `&mut Ipv4Addr: Iterator` + which is required by `&mut Ipv4Addr: ext::RepIteratorExt` +note: the traits `Iterator` and `ToTokens` must be implemented + --> src/to_tokens.rs + | + | pub trait ToTokens { + | ^^^^^^^^^^^^^^^^^^ + | + ::: $RUST/core/src/iter/traits/iterator.rs + | + | pub trait Iterator { + | ^^^^^^^^^^^^^^^^^^ + = help: items from traits can only be used if the trait is implemented and in scope + = note: the following traits define an item `quote_into_iter`, perhaps you need to implement one of them: + candidate #1: `ext::RepAsIteratorExt` + candidate #2: `ext::RepIteratorExt` + candidate #3: `ext::RepToTokensExt` + = note: this error originates in the macro `$crate::quote_bind_into_iter` which comes from the expansion of the macro `quote` (in Nightly builds, run with -Z macro-backtrace for more info) diff --git a/rust/hw/char/pl011/vendor/quote/tests/ui/wrong-type-span.rs b/rust/hw/char/pl011/vendor/quote/tests/ui/wrong-type-span.rs new file mode 100644 index 0000000000..d5601c8a06 --- /dev/null +++ b/rust/hw/char/pl011/vendor/quote/tests/ui/wrong-type-span.rs @@ -0,0 +1,7 @@ +use quote::quote_spanned; + +fn main() { + let span = ""; + let x = 0i32; + quote_spanned!(span=> #x); +} diff --git a/rust/hw/char/pl011/vendor/quote/tests/ui/wrong-type-span.stderr b/rust/hw/char/pl011/vendor/quote/tests/ui/wrong-type-span.stderr new file mode 100644 index 0000000000..12ad307703 --- /dev/null +++ b/rust/hw/char/pl011/vendor/quote/tests/ui/wrong-type-span.stderr @@ -0,0 +1,10 @@ +error[E0308]: mismatched types + --> tests/ui/wrong-type-span.rs:6:5 + | +6 | quote_spanned!(span=> #x); + | ^^^^^^^^^^^^^^^^^^^^^^^^^ + | | + | expected `Span`, found `&str` + | expected due to this + | + = note: this error originates in the macro `quote_spanned` (in Nightly builds, run with -Z macro-backtrace for more info) diff --git a/rust/hw/char/pl011/vendor/syn/.cargo-checksum.json b/rust/hw/char/pl011/vendor/syn/.cargo-checksum.json new file mode 100644 index 0000000000..abc9f4cabd --- /dev/null +++ b/rust/hw/char/pl011/vendor/syn/.cargo-checksum.json @@ -0,0 +1 @@ 
+{"files":{"Cargo.toml":"9ed91391aa48c48ccc7b52a0f196ee2e5d3b4905cc390d682dc3517957d2ff40","LICENSE-APACHE":"62c7a1e35f56406896d7aa7ca52d0cc0d272ac022b5d2796e7d6905db8a3636a","LICENSE-MIT":"23f18e03dc49df91622fe2a76176497404e46ced8a715d9d2b67a7446571cca3","README.md":"684c75ce029ed7c4ad57cb12fcecfee86d521699c3332c6c4c57af98f6e3b276","benches/file.rs":"0a0527c78d849148cbb6118b4d36f72da7d4add865ba1a410e0a1be9e8dbfe0e","benches/rust.rs":"77342ecd278686080c6390e84d70ecb4a31cc696d45dc4b8ddcc5ca65a25bcfb","src/attr.rs":"bd959c93f997d5d77ec08a5f580d1d38a391978b916b5bbe82bad4c03e694563","src/bigint.rs":"0299829b2f7a1a798fe2f7bc1680e4a10f9b6f4a852d09af4da2deab466c4242","src/buffer.rs":"8f05a11b2c6fcfca3c52a70695de982b0afba7d41d4c0d4ca4192589179ca626","src/classify.rs":"192835587f4585d84abdcc76ee00892f2dd39da6c16911aca073e24b44bddc35","src/custom_keyword.rs":"322114e36ae43a2f8605506fb4568efdbc2986853e2fee74bd10a4ca0fb60c69","src/custom_punctuation.rs":"2ae2339c29b1aff3ab16157d51a3a07bfca594aa38586981534fe07a62cdd9d1","src/data.rs":"7ad7f3c55d9f0b2ff223f3f30602ffa82013c0e7366006b0a9a01193de753db0","src/derive.rs":"f54f8cf9386a2d45186ff3c86ade5dae59e0e337b0198532449190ae8520cff8","src/discouraged.rs":"c246e96f7b25cc0af11944766e1ce176be8c396447bb696769b73c4d9bb5cfcf","src/drops.rs":"013385f1dd95663f1afab41abc1e2eea04181998644828935ca564c74d6462ae","src/error.rs":"3b03fd75eee8b0bb646eaf20f7e287345bdc7515ad5286024a2dd1e53c1e7bf2","src/export.rs":"b260cc49da1da3489e7755832bc8015cfad79e84f6c74e237f65ae25a2385e56","src/expr.rs":"35cc7d3ba13bad9355756d3903d8156c8fec8e203ff016e5ff0a8acb252b9dd5","src/ext.rs":"ed143b029af286e62ceb4310286a4ce894792dd588465face042b4199b39d329","src/file.rs":"39d4ed9c56a7dc0d83259843732c434cd187248a4cde3dba4a98c3b92df6d08f","src/fixup.rs":"ec3f84bb2e09a00d460399ee009c582ce468d294c7a7c930f7677571b5bc985e","src/gen/clone.rs":"36491f5f9e9cad6c4eb354b3331ec2b672607bb26429eba6379d1e9a4919170f","src/gen/debug.rs":"c9b2547663ed9025ba614fb1a70810df1b25f471ebb57abb01de5ab8e4fa8bf0","src/gen/eq.rs":"b5fffca0c3b6c31b3fcc80a7bd8fec65baed982a4e2fb4c8862db6059ab7dea1","src/gen/fold.rs":"345e6a6d9a7d2d09e09cd5857fc903af4202df42f0759a3da118556e98829fd2","src/gen/hash.rs":"447e840245178d0777b4e143b54c356b88962456e80282dcaad1763093709c13","src/gen/visit.rs":"178a6841d7d1974bff8c8c2f9a18e9b77384956841861a50828252bcaef67c18","src/gen/visit_mut.rs":"2a8f9a1c0259060f3fa1d6cab8a1924c1b07d713561aa9562cde8e79a39e66d5","src/generics.rs":"53ba17d13f89d316387c7debb302f703f99d5038921a7272a6d25e6f8bec42ec","src/group.rs":"911dd046f5a043d1c984b1b0b893addd152a9065457908fa4515722e96663d79","src/ident.rs":"d6061030fadae9c7dc847e1ee46178d9657d782aad108c7197e8cafe765b3eaa","src/item.rs":"81e2f57843cf2f8df82e314da940a6dda9bfa54e0641dbabe3ca2a7a68c9c5a8","src/lib.rs":"3a3e7d3d191d4411c9cf98e31a5efb562bab63facba27d12f54fc333a1ecd158","src/lifetime.rs":"5787d5a5dc7e5332b03283a25ae0a9e826464242ca2d149b1a19e7cae9cee34d","src/lit.rs":"205bb729cdf82ce0ec727304878132da0290b3042d41f4ccf0dbb7250fe91c51","src/lookahead.rs":"289dbd9048a74d75e5c3fa66045e72805ff8a2d3cf7aab685b6d7e136faba248","src/mac.rs":"cd85132ab4d302333f771be7a9b40f9281781ae9bcaee0607e0a25547352baaa","src/macros.rs":"e0587f60d510fd0079c60521f6898b61da5857664bd8b450154f83c85c4403c5","src/meta.rs":"969d8ccbdbc6ea2e4928a21831b791c57447b231e1373149e4c63b46f3951801","src/op.rs":"a61757370f802e44efa3c4a1057ae2cd26e64e273f7d76c06d5ffb49602319e2","src/parse.rs":"2b5d032212425a220e910ca5ebe3eb84ee9e2892e25d4b5a34ae7e9384e5ff2a","src/parse_macro_input.rs":"e4e22b63d0496d
06a4ca17742a22467ed93f08a739081324773828bad63175ee","src/parse_quote.rs":"50bfd45f176d10562cb5f4e53af9034b8e1506081bf6cb5f507ce42d24d81d7d","src/pat.rs":"e552911a1712508c672eca35abdf9f81bad3a960f21522eddbc411a6a7070445","src/path.rs":"d77045e5dad382056d67fe31a42bc45b6a02ce044c43287bd38a95e32fd6aead","src/precedence.rs":"abd13523c4e72c555d68e734d11b779ba16e33a214cf00bf9a993d3abff34638","src/print.rs":"22910bf0521ab868ebd7c62601c55912d12cfb400c65723e08e5cfa3a2d111c0","src/punctuated.rs":"19f762231c8ea46b1610fc2a293329c85f0ce82f1819f1607e71634634d43212","src/restriction.rs":"eabb012634ef67aa3c3849c905ab549189461df7fefde2a4b90161c8685f31b2","src/sealed.rs":"6ece3b3dcb30f6bb98b93d83759ca7712ee8592bef9c0511141039c38765db0e","src/span.rs":"0a48e375e5c9768f6f64174a91ba6a255f4b021e2fb3548d8494e617f142601b","src/spanned.rs":"4b9bd65f60ab81922adfd0be8f03b6d50e98da3a5f525f242f9639aec4beac79","src/stmt.rs":"bb4cd196ce23c3fc07fefa47e67a0cd815db4f02ce1192625379d60bd657ffd2","src/thread.rs":"1f1deb1272525ab2af9a36aac4bce8f65b0e315adb1656641fd7075662f49222","src/token.rs":"25df9f6a305c1be58eb4f2454b6ab35c6bef703bf4954fcfed2108b27723cb16","src/tt.rs":"a58303a95d08d6bf3f3e09715b9b70a57b91b54774cfc1f00f2848034d2ff5c7","src/ty.rs":"90af4ce1911c91bdfd9ae431def641640daeb0c788c39a2ef024926485e7b2b2","src/verbatim.rs":"87cbe82a90f48efb57ffd09141042698b3e011a21d0d5412154d80324b0a5ef0","src/whitespace.rs":"9cdcbfe9045b259046329a795bc1105ab5a871471a6d3f7318d275ee53f7a825","tests/common/eq.rs":"1a754d31cd6acd15cd17d7cc8e6afe918f2a11334fe6fc46c92ab887a470d838","tests/common/mod.rs":"64fb893bc0e7148395fd9ce1f67432b3d8406be29cbd664e2b73585da5ee5719","tests/common/parse.rs":"fff650bb98a9382beefbd22d2a89c0c8f90501dd6d58abc4d12b29cb4f647dc7","tests/debug/gen.rs":"3ca161a049fe72ff73ead99fbfe78335fdb2ac7c41085fe8cd0c9a0b29995151","tests/debug/mod.rs":"b56136586267ae1812a937b69215dd053ada2c21717771d89dcd3ce52bcb27f5","tests/macros/mod.rs":"64b0da858096e7cf0f772e66bc1787a867e45897d7677de580c0a1f35c0f6852","tests/regression.rs":"e9565ea0efecb4136f099164ffcfa26e1996b0a27fb9c6659e90ad9bdd42e7b6","tests/regression/issue1108.rs":"f32db35244a674e22ff824ca9e5bbec2184e287b59f022db68c418b5878a2edc","tests/regression/issue1235.rs":"a2266b10c3f7c7af5734817ab0a3e8b309b51e7d177b63f26e67e6b744d280b0","tests/repo/mod.rs":"a463bc4786fa211005ef93bf2257d89c8ccd0be621275d9689396a53cb9bf425","tests/repo/progress.rs":"c08d0314a7f3ecf760d471f27da3cd2a500aeb9f1c8331bffb2aa648f9fabf3f","tests/test_asyncness.rs":"8982f6bc4e36510f924e288247473403e72697389ce9dda4e4b5ab0a8e49259f","tests/test_attribute.rs":"b35550a43bbd187bb330997ba36f90c65d8fc489135b1d32ef4547f145cb7612","tests/test_derive_input.rs":"99c4e6e45e3322ea9e269b309059c8a00fda1dcc03aed41f6e7d8c7e0a72fa2b","tests/test_expr.rs":"59843a1534d5a84fd302a815523eef9d5177f7323b8be48e75f2d9d970950751","tests/test_generics.rs":"2fcc8575d695b568f3724b3b33d853b8fa6d9864eb816b5e3ca82420682e6155","tests/test_grouping.rs":"1bd63c8ca0b90bd493fb3f927079ab9ddf74d2a78da82db2f638e652d22305d5","tests/test_ident.rs":"d5850e817720e774cd397a46dbc5298c57933823c18e20805e84503fc9387e8f","tests/test_item.rs":"1b8412a5581adf93eaa215785a592f139af8511c954dee283d52dff2718a6cc2","tests/test_iterators.rs":"f4dacb5f3a8e0473dfb0d27f05270d41e79eddb4759b1fad3e88e379b4731e17","tests/test_lit.rs":"01b0acfe03cff16e7c1a45ceb7f4b637e5cbc6145840886ba981b7ed8e83691c","tests/test_meta.rs":"4ae570333f849ed8edec5dd957111a2deb721ede360f1e1ffeeab75380578ad4","tests/test_parse_buffer.rs":"0de6af13ba0345986b18d495063f9b75a1018e8569c34b277f9522c63a
6c0941","tests/test_parse_quote.rs":"928176da6ebb449ef01a798f3352c9b181d3077c1266eb008df73876f4013c47","tests/test_parse_stream.rs":"b6b533432173123d6d01d8d2cb33714bc50b30b16ffbb6116f93937221ad4594","tests/test_pat.rs":"f6954a50e62a97ac2bc1ba0cb7a5a1fc53b7b01fb55ffe0176bee3fe1955d460","tests/test_path.rs":"d54350aa91508f8d301f5be3e3a34e03b0615b1a04e8fbbab9840da20161838b","tests/test_precedence.rs":"62484c9a04778b506c183b06cae5f0c460a581e3c3b6baf4ff2cff0827698c3f","tests/test_receiver.rs":"af64117acd66fbf42edc476f731ecd20c88009d9cb641dbd7a1d6384ae99ae73","tests/test_round_trip.rs":"c9aae3a76ee801b9fb7ce2f2732aa9e1bf1b8f43f317ec1bfd0f8e5765c4e39c","tests/test_shebang.rs":"98e8a6690c04e0aad2893b747593620b51836fe704f50f5c6fe352609837138a","tests/test_size.rs":"57c83ebf1a4d4fb910b4db16566c611b08428271da30a278fab749b2f2177459","tests/test_stmt.rs":"bbc305ea888254798b6faf285187d8bc7a955e4402d9a497d4b9d361e0436691","tests/test_token_trees.rs":"d012da9c3c861073711b006bf6ffdc073821fb9fb0a08733628cdae57124d1f5","tests/test_ty.rs":"49fbb880891d4c2e21350e35b914d92aa9a056fbaad9c4afa5242802848fe9c4","tests/test_visibility.rs":"7bd239aef6f6d8173462dbd869064f3fdb9ba71644ac1f62c5d2fbb2568fb986","tests/zzz_stable.rs":"2a862e59cb446235ed99aec0e6ada8e16d3ecc30229b29d825b7c0bbc2602989"},"package":"c42f3f41a2de00b01c0aaad383c5a45241efc8b2d1eda5661812fda5f3cdcff5"} \ No newline at end of file diff --git a/rust/hw/char/pl011/vendor/syn/Cargo.toml b/rust/hw/char/pl011/vendor/syn/Cargo.toml new file mode 100644 index 0000000000..1d42e2745f --- /dev/null +++ b/rust/hw/char/pl011/vendor/syn/Cargo.toml @@ -0,0 +1,260 @@ +# THIS FILE IS AUTOMATICALLY GENERATED BY CARGO +# +# When uploading crates to the registry Cargo will automatically +# "normalize" Cargo.toml files for maximal compatibility +# with all versions of Cargo and also rewrite `path` dependencies +# to registry (e.g., crates.io) dependencies. +# +# If you are reading this file be aware that the original Cargo.toml +# will likely look very different (and much more reasonable). +# See Cargo.toml.orig for the original contents. 
+ +[package] +edition = "2021" +rust-version = "1.60" +name = "syn" +version = "2.0.66" +authors = ["David Tolnay "] +build = false +include = [ + "/benches/**", + "/Cargo.toml", + "/LICENSE-APACHE", + "/LICENSE-MIT", + "/README.md", + "/src/**", + "/tests/**", +] +autobins = false +autoexamples = false +autotests = false +autobenches = false +description = "Parser for Rust source code" +documentation = "https://docs.rs/syn" +readme = "README.md" +keywords = [ + "macros", + "syn", +] +categories = [ + "development-tools::procedural-macro-helpers", + "parser-implementations", +] +license = "MIT OR Apache-2.0" +repository = "https://github.com/dtolnay/syn" + +[package.metadata.docs.rs] +all-features = true +rustdoc-args = ["--generate-link-to-definition"] +targets = ["x86_64-unknown-linux-gnu"] + +[package.metadata.playground] +features = [ + "full", + "visit", + "visit-mut", + "fold", + "extra-traits", +] + +[lib] +name = "syn" +path = "src/lib.rs" +doc-scrape-examples = false + +[[test]] +name = "test_meta" +path = "tests/test_meta.rs" + +[[test]] +name = "test_stmt" +path = "tests/test_stmt.rs" + +[[test]] +name = "test_receiver" +path = "tests/test_receiver.rs" + +[[test]] +name = "regression" +path = "tests/regression.rs" + +[[test]] +name = "test_generics" +path = "tests/test_generics.rs" + +[[test]] +name = "test_grouping" +path = "tests/test_grouping.rs" + +[[test]] +name = "test_parse_stream" +path = "tests/test_parse_stream.rs" + +[[test]] +name = "test_round_trip" +path = "tests/test_round_trip.rs" + +[[test]] +name = "test_derive_input" +path = "tests/test_derive_input.rs" + +[[test]] +name = "test_visibility" +path = "tests/test_visibility.rs" + +[[test]] +name = "test_pat" +path = "tests/test_pat.rs" + +[[test]] +name = "zzz_stable" +path = "tests/zzz_stable.rs" + +[[test]] +name = "test_item" +path = "tests/test_item.rs" + +[[test]] +name = "test_iterators" +path = "tests/test_iterators.rs" + +[[test]] +name = "test_path" +path = "tests/test_path.rs" + +[[test]] +name = "test_lit" +path = "tests/test_lit.rs" + +[[test]] +name = "test_ident" +path = "tests/test_ident.rs" + +[[test]] +name = "test_parse_quote" +path = "tests/test_parse_quote.rs" + +[[test]] +name = "test_size" +path = "tests/test_size.rs" + +[[test]] +name = "test_ty" +path = "tests/test_ty.rs" + +[[test]] +name = "test_shebang" +path = "tests/test_shebang.rs" + +[[test]] +name = "test_attribute" +path = "tests/test_attribute.rs" + +[[test]] +name = "test_asyncness" +path = "tests/test_asyncness.rs" + +[[test]] +name = "test_expr" +path = "tests/test_expr.rs" + +[[test]] +name = "test_token_trees" +path = "tests/test_token_trees.rs" + +[[test]] +name = "test_parse_buffer" +path = "tests/test_parse_buffer.rs" + +[[test]] +name = "test_precedence" +path = "tests/test_precedence.rs" + +[[bench]] +name = "rust" +path = "benches/rust.rs" +harness = false +required-features = [ + "full", + "parsing", +] + +[[bench]] +name = "file" +path = "benches/file.rs" +required-features = [ + "full", + "parsing", +] + +[dependencies.proc-macro2] +version = "1.0.83" +default-features = false + +[dependencies.quote] +version = "1.0.35" +optional = true +default-features = false + +[dependencies.unicode-ident] +version = "1" + +[dev-dependencies.anyhow] +version = "1" + +[dev-dependencies.automod] +version = "1" + +[dev-dependencies.flate2] +version = "1" + +[dev-dependencies.insta] +version = "1" + +[dev-dependencies.rayon] +version = "1" + +[dev-dependencies.ref-cast] +version = "1" + +[dev-dependencies.reqwest] +version = "0.12" 
+features = ["blocking"] + +[dev-dependencies.rustversion] +version = "1" + +[dev-dependencies.syn-test-suite] +version = "0" + +[dev-dependencies.tar] +version = "0.4.16" + +[dev-dependencies.termcolor] +version = "1" + +[dev-dependencies.walkdir] +version = "2.3.2" + +[features] +clone-impls = [] +default = [ + "derive", + "parsing", + "printing", + "clone-impls", + "proc-macro", +] +derive = [] +extra-traits = [] +fold = [] +full = [] +parsing = [] +printing = ["dep:quote"] +proc-macro = [ + "proc-macro2/proc-macro", + "quote?/proc-macro", +] +test = ["syn-test-suite/all-features"] +visit = [] +visit-mut = [] diff --git a/rust/hw/char/pl011/vendor/syn/LICENSE-APACHE b/rust/hw/char/pl011/vendor/syn/LICENSE-APACHE new file mode 100644 index 0000000000..1b5ec8b78e --- /dev/null +++ b/rust/hw/char/pl011/vendor/syn/LICENSE-APACHE @@ -0,0 +1,176 @@ + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + +TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + +1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. 
For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + +2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + +3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + +4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. 
The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + +5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + +6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + +7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + +8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + +9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. 
+ +END OF TERMS AND CONDITIONS diff --git a/rust/hw/char/pl011/vendor/syn/LICENSE-MIT b/rust/hw/char/pl011/vendor/syn/LICENSE-MIT new file mode 100644 index 0000000000..31aa79387f --- /dev/null +++ b/rust/hw/char/pl011/vendor/syn/LICENSE-MIT @@ -0,0 +1,23 @@ +Permission is hereby granted, free of charge, to any +person obtaining a copy of this software and associated +documentation files (the "Software"), to deal in the +Software without restriction, including without +limitation the rights to use, copy, modify, merge, +publish, distribute, sublicense, and/or sell copies of +the Software, and to permit persons to whom the Software +is furnished to do so, subject to the following +conditions: + +The above copyright notice and this permission notice +shall be included in all copies or substantial portions +of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF +ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED +TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A +PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT +SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY +CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION +OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR +IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER +DEALINGS IN THE SOFTWARE. diff --git a/rust/hw/char/pl011/vendor/syn/README.md b/rust/hw/char/pl011/vendor/syn/README.md new file mode 100644 index 0000000000..04f9bf6cb1 --- /dev/null +++ b/rust/hw/char/pl011/vendor/syn/README.md @@ -0,0 +1,284 @@ +Parser for Rust source code +=========================== + +[github](https://github.com/dtolnay/syn) +[crates.io](https://crates.io/crates/syn) +[docs.rs](https://docs.rs/syn) +[build status](https://github.com/dtolnay/syn/actions?query=branch%3Amaster) + +Syn is a parsing library for parsing a stream of Rust tokens into a syntax tree +of Rust source code. + +Currently this library is geared toward use in Rust procedural macros, but +contains some APIs that may be useful more generally. + +- **Data structures** — Syn provides a complete syntax tree that can represent + any valid Rust source code. The syntax tree is rooted at [`syn::File`] which + represents a full source file, but there are other entry points that may be + useful to procedural macros including [`syn::Item`], [`syn::Expr`] and + [`syn::Type`]. + +- **Derives** — Of particular interest to derive macros is [`syn::DeriveInput`] + which is any of the three legal input items to a derive macro. An example + below shows using this type in a library that can derive implementations of a + user-defined trait. + +- **Parsing** — Parsing in Syn is built around [parser functions] with the + signature `fn(ParseStream) -> Result`. Every syntax tree node defined by + Syn is individually parsable and may be used as a building block for custom + syntaxes, or you may dream up your own brand new syntax without involving any + of our syntax tree types. + +- **Location information** — Every token parsed by Syn is associated with a + `Span` that tracks line and column information back to the source of that + token. These spans allow a procedural macro to display detailed error messages + pointing to all the right places in the user's code. There is an example of + this below. + +- **Feature flags** — Functionality is aggressively feature gated so your + procedural macros enable only what they need, and do not pay in compile time + for all the rest. 
+ +[`syn::File`]: https://docs.rs/syn/2.0/syn/struct.File.html +[`syn::Item`]: https://docs.rs/syn/2.0/syn/enum.Item.html +[`syn::Expr`]: https://docs.rs/syn/2.0/syn/enum.Expr.html +[`syn::Type`]: https://docs.rs/syn/2.0/syn/enum.Type.html +[`syn::DeriveInput`]: https://docs.rs/syn/2.0/syn/struct.DeriveInput.html +[parser functions]: https://docs.rs/syn/2.0/syn/parse/index.html + +*Version requirement: Syn supports rustc 1.60 and up.* + +[*Release notes*](https://github.com/dtolnay/syn/releases) + +
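As an aside to the entry points listed above, here is a minimal sketch of driving syn outside a macro: parse a whole source file and count its top-level functions. It assumes syn is built with the `full` and `parsing` features; the `count_fns` helper is illustrative.

```rust
use syn::{File, Item};

// Parse a source string into a `syn::File` and count top-level `fn` items.
fn count_fns(source: &str) -> syn::Result<usize> {
    let file: File = syn::parse_file(source)?;
    Ok(file
        .items
        .iter()
        .filter(|item| matches!(item, Item::Fn(_)))
        .count())
}

fn main() -> syn::Result<()> {
    assert_eq!(count_fns("fn a() {}\nfn b() {}\nstruct S;")?, 2);
    Ok(())
}
```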
+ +## Resources + +The best way to learn about procedural macros is by writing some. Consider +working through [this procedural macro workshop][workshop] to get familiar with +the different types of procedural macros. The workshop contains relevant links +into the Syn documentation as you work through each project. + +[workshop]: https://github.com/dtolnay/proc-macro-workshop + +
+ +## Example of a derive macro + +The canonical derive macro using Syn looks like this. We write an ordinary Rust +function tagged with a `proc_macro_derive` attribute and the name of the trait +we are deriving. Any time that derive appears in the user's code, the Rust +compiler passes their data structure as tokens into our macro. We get to execute +arbitrary Rust code to figure out what to do with those tokens, then hand some +tokens back to the compiler to compile into the user's crate. + +[`TokenStream`]: https://doc.rust-lang.org/proc_macro/struct.TokenStream.html + +```toml +[dependencies] +syn = "2.0" +quote = "1.0" + +[lib] +proc-macro = true +``` + +```rust +use proc_macro::TokenStream; +use quote::quote; +use syn::{parse_macro_input, DeriveInput}; + +#[proc_macro_derive(MyMacro)] +pub fn my_macro(input: TokenStream) -> TokenStream { + // Parse the input tokens into a syntax tree + let input = parse_macro_input!(input as DeriveInput); + + // Build the output, possibly using quasi-quotation + let expanded = quote! { + // ... + }; + + // Hand the output tokens back to the compiler + TokenStream::from(expanded) +} +``` + +The [`heapsize`] example directory shows a complete working implementation of a +derive macro. The example derives a `HeapSize` trait which computes an estimate +of the amount of heap memory owned by a value. + +[`heapsize`]: examples/heapsize + +```rust +pub trait HeapSize { + /// Total number of bytes of heap memory owned by `self`. + fn heap_size_of_children(&self) -> usize; +} +``` + +The derive macro allows users to write `#[derive(HeapSize)]` on data structures +in their program. + +```rust +#[derive(HeapSize)] +struct Demo<'a, T: ?Sized> { + a: Box, + b: u8, + c: &'a str, + d: String, +} +``` + +
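To make the "hand some tokens back to the compiler" step a bit more concrete, here is a hedged sketch, in the spirit of the heapsize example, of how a derive body might assemble an impl for the parsed input. The `expand_heap_size` name and the placeholder body are illustrative, not the example's actual code.

```rust
use proc_macro2::TokenStream;
use quote::quote;
use syn::DeriveInput;

// Build an `impl HeapSize` block for whatever type the user derived on,
// reusing the input's generics so `Demo<'a, T>` keeps its parameters.
fn expand_heap_size(input: DeriveInput) -> TokenStream {
    let name = input.ident;
    let (impl_generics, ty_generics, where_clause) = input.generics.split_for_impl();
    quote! {
        impl #impl_generics HeapSize for #name #ty_generics #where_clause {
            fn heap_size_of_children(&self) -> usize {
                0 // placeholder; a real derive would sum each field's heap size
            }
        }
    }
}
```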
+ +## Spans and error reporting + +The token-based procedural macro API provides great control over where the +compiler's error messages are displayed in user code. Consider the error the +user sees if one of their field types does not implement `HeapSize`. + +```rust +#[derive(HeapSize)] +struct Broken { + ok: String, + bad: std::thread::Thread, +} +``` + +By tracking span information all the way through the expansion of a procedural +macro as shown in the `heapsize` example, token-based macros in Syn are able to +trigger errors that directly pinpoint the source of the problem. + +```console +error[E0277]: the trait bound `std::thread::Thread: HeapSize` is not satisfied + --> src/main.rs:7:5 + | +7 | bad: std::thread::Thread, + | ^^^^^^^^^^^^^^^^^^^^^^^^ the trait `HeapSize` is not implemented for `std::thread::Thread` +``` + +
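The per-field spans in the error above come from generating each piece of output with the span of the corresponding piece of input. A hedged sketch of that idea, with an illustrative `sum_fields` helper rather than the example's real code:

```rust
use proc_macro2::TokenStream;
use quote::{quote, quote_spanned};
use syn::spanned::Spanned;
use syn::FieldsNamed;

// Emit one `heap_size_of_children` call per named field, each spanned to the
// field's type, so a missing `HeapSize` impl is reported at that field.
fn sum_fields(fields: &FieldsNamed) -> TokenStream {
    let terms = fields.named.iter().map(|f| {
        let name = &f.ident;
        quote_spanned! {f.ty.span()=>
            HeapSize::heap_size_of_children(&self.#name)
        }
    });
    quote! { 0 #(+ #terms)* }
}
```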
+ +## Parsing a custom syntax + +The [`lazy-static`] example directory shows the implementation of a +`functionlike!(...)` procedural macro in which the input tokens are parsed using +Syn's parsing API. + +[`lazy-static`]: examples/lazy-static + +The example reimplements the popular `lazy_static` crate from crates.io as a +procedural macro. + +```rust +lazy_static! { + static ref USERNAME: Regex = Regex::new("^[a-z0-9_-]{3,16}$").unwrap(); +} +``` + +The implementation shows how to trigger custom warnings and error messages on +the macro input. + +```console +warning: come on, pick a more creative name + --> src/main.rs:10:16 + | +10 | static ref FOO: String = "lazy_static".to_owned(); + | ^^^ +``` + +
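A hedged sketch of how such input might be parsed with syn's `Parse` trait, assuming the `full` feature; the `LazyStatic` struct and its field names are illustrative rather than the example's actual code:

```rust
use syn::parse::{Parse, ParseStream};
use syn::{Expr, Ident, Result, Token, Type, Visibility};

// Parses input shaped like: `static ref NAME: Ty = expr;`
struct LazyStatic {
    visibility: Visibility,
    name: Ident,
    ty: Type,
    init: Expr,
}

impl Parse for LazyStatic {
    fn parse(input: ParseStream) -> Result<Self> {
        let visibility: Visibility = input.parse()?;
        input.parse::<Token![static]>()?;
        input.parse::<Token![ref]>()?;
        let name: Ident = input.parse()?;
        input.parse::<Token![:]>()?;
        let ty: Type = input.parse()?;
        input.parse::<Token![=]>()?;
        let init: Expr = input.parse()?;
        input.parse::<Token![;]>()?;
        Ok(LazyStatic { visibility, name, ty, init })
    }
}
```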
+ +## Testing + +When testing macros, we often care not just that the macro can be used +successfully but also that when the macro is provided with invalid input it +produces maximally helpful error messages. Consider using the [`trybuild`] crate +to write tests for errors that are emitted by your macro or errors detected by +the Rust compiler in the expanded code following misuse of the macro. Such tests +help avoid regressions from later refactors that mistakenly make an error no +longer trigger or be less helpful than it used to be. + +[`trybuild`]: https://github.com/dtolnay/trybuild + +
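A minimal sketch of such a test harness, assuming `trybuild` as a dev-dependency and a `tests/ui/` directory of intentionally broken inputs (the same layout the vendored quote tests above use):

```rust
// Compile each file under tests/ui/ and compare the compiler's output
// against the checked-in *.stderr snapshots.
#[test]
fn ui() {
    let t = trybuild::TestCases::new();
    t.compile_fail("tests/ui/*.rs");
}
```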
+ +## Debugging + +When developing a procedural macro it can be helpful to look at what the +generated code looks like. Use `cargo rustc -- -Zunstable-options +--pretty=expanded` or the [`cargo expand`] subcommand. + +[`cargo expand`]: https://github.com/dtolnay/cargo-expand + +To show the expanded code for some crate that uses your procedural macro, run +`cargo expand` from that crate. To show the expanded code for one of your own +test cases, run `cargo expand --test the_test_case` where the last argument is +the name of the test file without the `.rs` extension. + +This write-up by Brandon W Maister discusses debugging in more detail: +[Debugging Rust's new Custom Derive system][debugging]. + +[debugging]: https://quodlibetor.github.io/posts/debugging-rusts-new-custom-derive-system/ + +
+ +## Optional features + +Syn puts a lot of functionality behind optional features in order to optimize +compile time for the most common use cases. The following features are +available. + +- **`derive`** *(enabled by default)* — Data structures for representing the + possible input to a derive macro, including structs and enums and types. +- **`full`** — Data structures for representing the syntax tree of all valid + Rust source code, including items and expressions. +- **`parsing`** *(enabled by default)* — Ability to parse input tokens into a + syntax tree node of a chosen type. +- **`printing`** *(enabled by default)* — Ability to print a syntax tree node as + tokens of Rust source code. +- **`visit`** — Trait for traversing a syntax tree. +- **`visit-mut`** — Trait for traversing and mutating in place a syntax tree. +- **`fold`** — Trait for transforming an owned syntax tree. +- **`clone-impls`** *(enabled by default)* — Clone impls for all syntax tree + types. +- **`extra-traits`** — Debug, Eq, PartialEq, Hash impls for all syntax tree + types. +- **`proc-macro`** *(enabled by default)* — Runtime dependency on the dynamic + library libproc_macro from rustc toolchain. + +
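As a rough illustration of what the `visit` feature unlocks, here is a small visitor that counts type paths in a parsed file; it assumes syn is built with the `full` and `visit` features, and the `TypeCounter` name is illustrative:

```rust
use syn::visit::{self, Visit};

// Counts every type path (`Vec<String>`, `String`, `u8`, ...) in a file.
struct TypeCounter {
    count: usize,
}

impl<'ast> Visit<'ast> for TypeCounter {
    fn visit_type_path(&mut self, node: &'ast syn::TypePath) {
        self.count += 1;
        // Delegate to the default traversal so nested types are visited too.
        visit::visit_type_path(self, node);
    }
}

fn main() {
    let file: syn::File = syn::parse_file("fn f(x: Vec<String>) -> u8 { 0 }").unwrap();
    let mut counter = TypeCounter { count: 0 };
    counter.visit_file(&file);
    assert_eq!(counter.count, 3); // Vec<String>, String, u8
}
```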
+ +## Proc macro shim + +Syn operates on the token representation provided by the [proc-macro2] crate +from crates.io rather than using the compiler's built in proc-macro crate +directly. This enables code using Syn to execute outside of the context of a +procedural macro, such as in unit tests or build.rs, and we avoid needing +incompatible ecosystems for proc macros vs non-macro use cases. + +In general all of your code should be written against proc-macro2 rather than +proc-macro. The one exception is in the signatures of procedural macro entry +points, which are required by the language to use `proc_macro::TokenStream`. + +The proc-macro2 crate will automatically detect and use the compiler's data +structures when a procedural macro is active. + +[proc-macro2]: https://docs.rs/proc-macro2/1.0/proc_macro2/ + +
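A minimal sketch of that convention: keep the `proc_macro` types at the entry point only and convert to proc-macro2 types immediately, so the interesting code stays unit-testable. The `my_macro` and `expand` names are illustrative.

```rust
use proc_macro::TokenStream;

#[proc_macro]
pub fn my_macro(input: TokenStream) -> TokenStream {
    // Convert at the boundary; everything else works on proc-macro2 types.
    expand(input.into()).into()
}

fn expand(input: proc_macro2::TokenStream) -> proc_macro2::TokenStream {
    // Callable from unit tests or build.rs without the compiler's runtime.
    input
}
```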
+ +#### License + + +Licensed under either of Apache License, Version +2.0 or MIT license at your option. + + +
+ + +Unless you explicitly state otherwise, any contribution intentionally submitted +for inclusion in this crate by you, as defined in the Apache-2.0 license, shall +be dual licensed as above, without any additional terms or conditions. + diff --git a/rust/hw/char/pl011/vendor/syn/benches/file.rs b/rust/hw/char/pl011/vendor/syn/benches/file.rs new file mode 100644 index 0000000000..b424723966 --- /dev/null +++ b/rust/hw/char/pl011/vendor/syn/benches/file.rs @@ -0,0 +1,57 @@ +// $ cargo bench --features full,test --bench file + +#![feature(rustc_private, test)] +#![recursion_limit = "1024"] +#![allow( + clippy::items_after_statements, + clippy::manual_let_else, + clippy::match_like_matches_macro, + clippy::missing_panics_doc, + clippy::must_use_candidate, + clippy::uninlined_format_args +)] + +extern crate test; + +#[macro_use] +#[path = "../tests/macros/mod.rs"] +mod macros; + +#[allow(dead_code)] +#[path = "../tests/repo/mod.rs"] +mod repo; + +use proc_macro2::{Span, TokenStream}; +use std::fs; +use std::str::FromStr; +use syn::parse::{ParseStream, Parser}; +use test::Bencher; + +const FILE: &str = "tests/rust/library/core/src/str/mod.rs"; + +fn get_tokens() -> TokenStream { + repo::clone_rust(); + let content = fs::read_to_string(FILE).unwrap(); + TokenStream::from_str(&content).unwrap() +} + +#[bench] +fn baseline(b: &mut Bencher) { + let tokens = get_tokens(); + b.iter(|| drop(tokens.clone())); +} + +#[bench] +fn create_token_buffer(b: &mut Bencher) { + let tokens = get_tokens(); + fn immediate_fail(_input: ParseStream) -> syn::Result<()> { + Err(syn::Error::new(Span::call_site(), "")) + } + b.iter(|| immediate_fail.parse2(tokens.clone())); +} + +#[bench] +fn parse_file(b: &mut Bencher) { + let tokens = get_tokens(); + b.iter(|| syn::parse2::(tokens.clone())); +} diff --git a/rust/hw/char/pl011/vendor/syn/benches/rust.rs b/rust/hw/char/pl011/vendor/syn/benches/rust.rs new file mode 100644 index 0000000000..bfa3a17f4a --- /dev/null +++ b/rust/hw/char/pl011/vendor/syn/benches/rust.rs @@ -0,0 +1,182 @@ +// $ cargo bench --features full,test --bench rust +// +// Syn only, useful for profiling: +// $ RUSTFLAGS='--cfg syn_only' cargo build --release --features full,test --bench rust + +#![cfg_attr(not(syn_only), feature(rustc_private))] +#![recursion_limit = "1024"] +#![allow( + clippy::arc_with_non_send_sync, + clippy::cast_lossless, + clippy::let_underscore_untyped, + clippy::manual_let_else, + clippy::match_like_matches_macro, + clippy::uninlined_format_args, + clippy::unnecessary_wraps +)] + +#[macro_use] +#[path = "../tests/macros/mod.rs"] +mod macros; + +#[allow(dead_code)] +#[path = "../tests/repo/mod.rs"] +mod repo; + +use std::fs; +use std::time::{Duration, Instant}; + +#[cfg(not(syn_only))] +mod tokenstream_parse { + use proc_macro2::TokenStream; + use std::str::FromStr; + + pub fn bench(content: &str) -> Result<(), ()> { + TokenStream::from_str(content).map(drop).map_err(drop) + } +} + +mod syn_parse { + pub fn bench(content: &str) -> Result<(), ()> { + syn::parse_file(content).map(drop).map_err(drop) + } +} + +#[cfg(not(syn_only))] +mod librustc_parse { + extern crate rustc_data_structures; + extern crate rustc_driver; + extern crate rustc_error_messages; + extern crate rustc_errors; + extern crate rustc_parse; + extern crate rustc_session; + extern crate rustc_span; + + use rustc_data_structures::sync::Lrc; + use rustc_error_messages::FluentBundle; + use rustc_errors::{emitter::Emitter, translation::Translate, DiagCtxt, DiagInner}; + use rustc_session::parse::ParseSess; + use 
rustc_span::source_map::{FilePathMapping, SourceMap}; + use rustc_span::{edition::Edition, FileName}; + + pub fn bench(content: &str) -> Result<(), ()> { + struct SilentEmitter; + + impl Emitter for SilentEmitter { + fn emit_diagnostic(&mut self, _diag: DiagInner) {} + fn source_map(&self) -> Option<&Lrc> { + None + } + } + + impl Translate for SilentEmitter { + fn fluent_bundle(&self) -> Option<&Lrc> { + None + } + fn fallback_fluent_bundle(&self) -> &FluentBundle { + panic!("silent emitter attempted to translate a diagnostic"); + } + } + + rustc_span::create_session_if_not_set_then(Edition::Edition2018, |_| { + let source_map = Lrc::new(SourceMap::new(FilePathMapping::empty())); + let emitter = Box::new(SilentEmitter); + let handler = DiagCtxt::new(emitter); + let sess = ParseSess::with_dcx(handler, source_map); + if let Err(diagnostic) = rustc_parse::parse_crate_from_source_str( + FileName::Custom("bench".to_owned()), + content.to_owned(), + &sess, + ) { + diagnostic.cancel(); + return Err(()); + }; + Ok(()) + }) + } +} + +#[cfg(not(syn_only))] +mod read_from_disk { + pub fn bench(content: &str) -> Result<(), ()> { + let _ = content; + Ok(()) + } +} + +fn exec(mut codepath: impl FnMut(&str) -> Result<(), ()>) -> Duration { + let begin = Instant::now(); + let mut success = 0; + let mut total = 0; + + ["tests/rust/compiler", "tests/rust/library"] + .iter() + .flat_map(|dir| { + walkdir::WalkDir::new(dir) + .into_iter() + .filter_entry(repo::base_dir_filter) + }) + .for_each(|entry| { + let entry = entry.unwrap(); + let path = entry.path(); + if path.is_dir() { + return; + } + let content = fs::read_to_string(path).unwrap(); + let ok = codepath(&content).is_ok(); + success += ok as usize; + total += 1; + if !ok { + eprintln!("FAIL {}", path.display()); + } + }); + + assert_eq!(success, total); + begin.elapsed() +} + +fn main() { + repo::clone_rust(); + + macro_rules! 
testcases { + ($($(#[$cfg:meta])* $name:ident,)*) => { + [ + $( + $(#[$cfg])* + (stringify!($name), $name::bench as fn(&str) -> Result<(), ()>), + )* + ] + }; + } + + #[cfg(not(syn_only))] + { + let mut lines = 0; + let mut files = 0; + exec(|content| { + lines += content.lines().count(); + files += 1; + Ok(()) + }); + eprintln!("\n{} lines in {} files", lines, files); + } + + for (name, f) in testcases!( + #[cfg(not(syn_only))] + read_from_disk, + #[cfg(not(syn_only))] + tokenstream_parse, + syn_parse, + #[cfg(not(syn_only))] + librustc_parse, + ) { + eprint!("{:20}", format!("{}:", name)); + let elapsed = exec(f); + eprintln!( + "elapsed={}.{:03}s", + elapsed.as_secs(), + elapsed.subsec_millis(), + ); + } + eprintln!(); +} diff --git a/rust/hw/char/pl011/vendor/syn/meson.build b/rust/hw/char/pl011/vendor/syn/meson.build new file mode 100644 index 0000000000..87b9628671 --- /dev/null +++ b/rust/hw/char/pl011/vendor/syn/meson.build @@ -0,0 +1,24 @@ +_syn_rs = static_library( + 'syn', + files('src/lib.rs'), + gnu_symbol_visibility: 'hidden', + rust_abi: 'rust', + rust_args: rust_args + [ + '--edition', '2021', + '--cfg', 'feature="full"', + '--cfg', 'feature="derive"', + '--cfg', 'feature="parsing"', + '--cfg', 'feature="printing"', + '--cfg', 'feature="clone-impls"', + '--cfg', 'feature="proc-macro"', + ], + dependencies: [ + dep_quote, + dep_proc_macro2, + dep_unicode_ident, + ], +) + +dep_syn = declare_dependency( + link_with: _syn_rs, +) diff --git a/rust/hw/char/pl011/vendor/syn/src/attr.rs b/rust/hw/char/pl011/vendor/syn/src/attr.rs new file mode 100644 index 0000000000..c19715cb3b --- /dev/null +++ b/rust/hw/char/pl011/vendor/syn/src/attr.rs @@ -0,0 +1,793 @@ +#[cfg(feature = "parsing")] +use crate::error::Error; +#[cfg(feature = "parsing")] +use crate::error::Result; +use crate::expr::Expr; +use crate::mac::MacroDelimiter; +#[cfg(feature = "parsing")] +use crate::meta::{self, ParseNestedMeta}; +#[cfg(feature = "parsing")] +use crate::parse::{Parse, ParseStream, Parser}; +use crate::path::Path; +use crate::token; +use proc_macro2::TokenStream; +#[cfg(feature = "printing")] +use std::iter; +#[cfg(feature = "printing")] +use std::slice; + +ast_struct! { + /// An attribute, like `#[repr(transparent)]`. + /// + ///
+ /// + /// # Syntax + /// + /// Rust has six types of attributes. + /// + /// - Outer attributes like `#[repr(transparent)]`. These appear outside or + /// in front of the item they describe. + /// + /// - Inner attributes like `#![feature(proc_macro)]`. These appear inside + /// of the item they describe, usually a module. + /// + /// - Outer one-line doc comments like `/// Example`. + /// + /// - Inner one-line doc comments like `//! Please file an issue`. + /// + /// - Outer documentation blocks `/** Example */`. + /// + /// - Inner documentation blocks `/*! Please file an issue */`. + /// + /// The `style` field of type `AttrStyle` distinguishes whether an attribute + /// is outer or inner. + /// + /// Every attribute has a `path` that indicates the intended interpretation + /// of the rest of the attribute's contents. The path and the optional + /// additional contents are represented together in the `meta` field of the + /// attribute in three possible varieties: + /// + /// - Meta::Path — attributes whose information content conveys just a + /// path, for example the `#[test]` attribute. + /// + /// - Meta::List — attributes that carry arbitrary tokens after the + /// path, surrounded by a delimiter (parenthesis, bracket, or brace). For + /// example `#[derive(Copy)]` or `#[precondition(x < 5)]`. + /// + /// - Meta::NameValue — attributes with an `=` sign after the path, + /// followed by a Rust expression. For example `#[path = + /// "sys/windows.rs"]`. + /// + /// All doc comments are represented in the NameValue style with a path of + /// "doc", as this is how they are processed by the compiler and by + /// `macro_rules!` macros. + /// + /// ```text + /// #[derive(Copy, Clone)] + /// ~~~~~~Path + /// ^^^^^^^^^^^^^^^^^^^Meta::List + /// + /// #[path = "sys/windows.rs"] + /// ~~~~Path + /// ^^^^^^^^^^^^^^^^^^^^^^^Meta::NameValue + /// + /// #[test] + /// ^^^^Meta::Path + /// ``` + /// + ///
+ /// + /// # Parsing from tokens to Attribute + /// + /// This type does not implement the [`Parse`] trait and thus cannot be + /// parsed directly by [`ParseStream::parse`]. Instead use + /// [`ParseStream::call`] with one of the two parser functions + /// [`Attribute::parse_outer`] or [`Attribute::parse_inner`] depending on + /// which you intend to parse. + /// + /// [`Parse`]: crate::parse::Parse + /// [`ParseStream::parse`]: crate::parse::ParseBuffer::parse + /// [`ParseStream::call`]: crate::parse::ParseBuffer::call + /// + /// ``` + /// use syn::{Attribute, Ident, Result, Token}; + /// use syn::parse::{Parse, ParseStream}; + /// + /// // Parses a unit struct with attributes. + /// // + /// // #[path = "s.tmpl"] + /// // struct S; + /// struct UnitStruct { + /// attrs: Vec, + /// struct_token: Token![struct], + /// name: Ident, + /// semi_token: Token![;], + /// } + /// + /// impl Parse for UnitStruct { + /// fn parse(input: ParseStream) -> Result { + /// Ok(UnitStruct { + /// attrs: input.call(Attribute::parse_outer)?, + /// struct_token: input.parse()?, + /// name: input.parse()?, + /// semi_token: input.parse()?, + /// }) + /// } + /// } + /// ``` + /// + ///


+ /// + /// # Parsing from Attribute to structured arguments + /// + /// The grammar of attributes in Rust is very flexible, which makes the + /// syntax tree not that useful on its own. In particular, arguments of the + /// `Meta::List` variety of attribute are held in an arbitrary `tokens: + /// TokenStream`. Macros are expected to check the `path` of the attribute, + /// decide whether they recognize it, and then parse the remaining tokens + /// according to whatever grammar they wish to require for that kind of + /// attribute. Use [`parse_args()`] to parse those tokens into the expected + /// data structure. + /// + /// [`parse_args()`]: Attribute::parse_args + /// + ///


+ /// + /// # Doc comments + /// + /// The compiler transforms doc comments, such as `/// comment` and `/*! + /// comment */`, into attributes before macros are expanded. Each comment is + /// expanded into an attribute of the form `#[doc = r"comment"]`. + /// + /// As an example, the following `mod` items are expanded identically: + /// + /// ``` + /// # use syn::{ItemMod, parse_quote}; + /// let doc: ItemMod = parse_quote! { + /// /// Single line doc comments + /// /// We write so many! + /// /** + /// * Multi-line comments... + /// * May span many lines + /// */ + /// mod example { + /// //! Of course, they can be inner too + /// /*! And fit in a single line */ + /// } + /// }; + /// let attr: ItemMod = parse_quote! { + /// #[doc = r" Single line doc comments"] + /// #[doc = r" We write so many!"] + /// #[doc = r" + /// * Multi-line comments... + /// * May span many lines + /// "] + /// mod example { + /// #![doc = r" Of course, they can be inner too"] + /// #![doc = r" And fit in a single line "] + /// } + /// }; + /// assert_eq!(doc, attr); + /// ``` + #[cfg_attr(docsrs, doc(cfg(any(feature = "full", feature = "derive"))))] + pub struct Attribute { + pub pound_token: Token![#], + pub style: AttrStyle, + pub bracket_token: token::Bracket, + pub meta: Meta, + } +} + +impl Attribute { + /// Returns the path that identifies the interpretation of this attribute. + /// + /// For example this would return the `test` in `#[test]`, the `derive` in + /// `#[derive(Copy)]`, and the `path` in `#[path = "sys/windows.rs"]`. + pub fn path(&self) -> &Path { + self.meta.path() + } + + /// Parse the arguments to the attribute as a syntax tree. + /// + /// This is similar to pulling out the `TokenStream` from `Meta::List` and + /// doing `syn::parse2::(meta_list.tokens)`, except that using + /// `parse_args` the error message has a more useful span when `tokens` is + /// empty. + /// + /// The surrounding delimiters are *not* included in the input to the + /// parser. + /// + /// ```text + /// #[my_attr(value < 5)] + /// ^^^^^^^^^ what gets parsed + /// ``` + /// + /// # Example + /// + /// ``` + /// use syn::{parse_quote, Attribute, Expr}; + /// + /// let attr: Attribute = parse_quote! { + /// #[precondition(value < 5)] + /// }; + /// + /// if attr.path().is_ident("precondition") { + /// let precondition: Expr = attr.parse_args()?; + /// // ... + /// } + /// # anyhow::Ok(()) + /// ``` + #[cfg(feature = "parsing")] + #[cfg_attr(docsrs, doc(cfg(feature = "parsing")))] + pub fn parse_args(&self) -> Result { + self.parse_args_with(T::parse) + } + + /// Parse the arguments to the attribute using the given parser. + /// + /// # Example + /// + /// ``` + /// use syn::{parse_quote, Attribute}; + /// + /// let attr: Attribute = parse_quote! 
{ + /// #[inception { #[brrrrrrraaaaawwwwrwrrrmrmrmmrmrmmmmm] }] + /// }; + /// + /// let bwom = attr.parse_args_with(Attribute::parse_outer)?; + /// + /// // Attribute does not have a Parse impl, so we couldn't directly do: + /// // let bwom: Attribute = attr.parse_args()?; + /// # anyhow::Ok(()) + /// ``` + #[cfg(feature = "parsing")] + #[cfg_attr(docsrs, doc(cfg(feature = "parsing")))] + pub fn parse_args_with(&self, parser: F) -> Result { + match &self.meta { + Meta::Path(path) => Err(crate::error::new2( + path.segments.first().unwrap().ident.span(), + path.segments.last().unwrap().ident.span(), + format!( + "expected attribute arguments in parentheses: {}[{}(...)]", + parsing::DisplayAttrStyle(&self.style), + parsing::DisplayPath(path), + ), + )), + Meta::NameValue(meta) => Err(Error::new( + meta.eq_token.span, + format_args!( + "expected parentheses: {}[{}(...)]", + parsing::DisplayAttrStyle(&self.style), + parsing::DisplayPath(&meta.path), + ), + )), + Meta::List(meta) => meta.parse_args_with(parser), + } + } + + /// Parse the arguments to the attribute, expecting it to follow the + /// conventional structure used by most of Rust's built-in attributes. + /// + /// The [*Meta Item Attribute Syntax*][syntax] section in the Rust reference + /// explains the convention in more detail. Not all attributes follow this + /// convention, so [`parse_args()`][Self::parse_args] is available if you + /// need to parse arbitrarily goofy attribute syntax. + /// + /// [syntax]: https://doc.rust-lang.org/reference/attributes.html#meta-item-attribute-syntax + /// + /// # Example + /// + /// We'll parse a struct, and then parse some of Rust's `#[repr]` attribute + /// syntax. + /// + /// ``` + /// use syn::{parenthesized, parse_quote, token, ItemStruct, LitInt}; + /// + /// let input: ItemStruct = parse_quote! { + /// #[repr(C, align(4))] + /// pub struct MyStruct(u16, u32); + /// }; + /// + /// let mut repr_c = false; + /// let mut repr_transparent = false; + /// let mut repr_align = None::; + /// let mut repr_packed = None::; + /// for attr in &input.attrs { + /// if attr.path().is_ident("repr") { + /// attr.parse_nested_meta(|meta| { + /// // #[repr(C)] + /// if meta.path.is_ident("C") { + /// repr_c = true; + /// return Ok(()); + /// } + /// + /// // #[repr(transparent)] + /// if meta.path.is_ident("transparent") { + /// repr_transparent = true; + /// return Ok(()); + /// } + /// + /// // #[repr(align(N))] + /// if meta.path.is_ident("align") { + /// let content; + /// parenthesized!(content in meta.input); + /// let lit: LitInt = content.parse()?; + /// let n: usize = lit.base10_parse()?; + /// repr_align = Some(n); + /// return Ok(()); + /// } + /// + /// // #[repr(packed)] or #[repr(packed(N))], omitted N means 1 + /// if meta.path.is_ident("packed") { + /// if meta.input.peek(token::Paren) { + /// let content; + /// parenthesized!(content in meta.input); + /// let lit: LitInt = content.parse()?; + /// let n: usize = lit.base10_parse()?; + /// repr_packed = Some(n); + /// } else { + /// repr_packed = Some(1); + /// } + /// return Ok(()); + /// } + /// + /// Err(meta.error("unrecognized repr")) + /// })?; + /// } + /// } + /// # anyhow::Ok(()) + /// ``` + /// + /// # Alternatives + /// + /// In some cases, for attributes which have nested layers of structured + /// content, the following less flexible approach might be more convenient: + /// + /// ``` + /// # use syn::{parse_quote, ItemStruct}; + /// # + /// # let input: ItemStruct = parse_quote! 
{ + /// # #[repr(C, align(4))] + /// # pub struct MyStruct(u16, u32); + /// # }; + /// # + /// use syn::punctuated::Punctuated; + /// use syn::{parenthesized, token, Error, LitInt, Meta, Token}; + /// + /// let mut repr_c = false; + /// let mut repr_transparent = false; + /// let mut repr_align = None::; + /// let mut repr_packed = None::; + /// for attr in &input.attrs { + /// if attr.path().is_ident("repr") { + /// let nested = attr.parse_args_with(Punctuated::::parse_terminated)?; + /// for meta in nested { + /// match meta { + /// // #[repr(C)] + /// Meta::Path(path) if path.is_ident("C") => { + /// repr_c = true; + /// } + /// + /// // #[repr(align(N))] + /// Meta::List(meta) if meta.path.is_ident("align") => { + /// let lit: LitInt = meta.parse_args()?; + /// let n: usize = lit.base10_parse()?; + /// repr_align = Some(n); + /// } + /// + /// /* ... */ + /// + /// _ => { + /// return Err(Error::new_spanned(meta, "unrecognized repr")); + /// } + /// } + /// } + /// } + /// } + /// # Ok(()) + /// ``` + #[cfg(feature = "parsing")] + #[cfg_attr(docsrs, doc(cfg(feature = "parsing")))] + pub fn parse_nested_meta( + &self, + logic: impl FnMut(ParseNestedMeta) -> Result<()>, + ) -> Result<()> { + self.parse_args_with(meta::parser(logic)) + } + + /// Parses zero or more outer attributes from the stream. + /// + /// # Example + /// + /// See + /// [*Parsing from tokens to Attribute*](#parsing-from-tokens-to-attribute). + #[cfg(feature = "parsing")] + #[cfg_attr(docsrs, doc(cfg(feature = "parsing")))] + pub fn parse_outer(input: ParseStream) -> Result> { + let mut attrs = Vec::new(); + while input.peek(Token![#]) { + attrs.push(input.call(parsing::single_parse_outer)?); + } + Ok(attrs) + } + + /// Parses zero or more inner attributes from the stream. + /// + /// # Example + /// + /// See + /// [*Parsing from tokens to Attribute*](#parsing-from-tokens-to-attribute). + #[cfg(feature = "parsing")] + #[cfg_attr(docsrs, doc(cfg(feature = "parsing")))] + pub fn parse_inner(input: ParseStream) -> Result> { + let mut attrs = Vec::new(); + parsing::parse_inner(input, &mut attrs)?; + Ok(attrs) + } +} + +ast_enum! { + /// Distinguishes between attributes that decorate an item and attributes + /// that are contained within an item. + /// + /// # Outer attributes + /// + /// - `#[repr(transparent)]` + /// - `/// # Example` + /// - `/** Please file an issue */` + /// + /// # Inner attributes + /// + /// - `#![feature(proc_macro)]` + /// - `//! # Example` + /// - `/*! Please file an issue */` + #[cfg_attr(docsrs, doc(cfg(any(feature = "full", feature = "derive"))))] + pub enum AttrStyle { + Outer, + Inner(Token![!]), + } +} + +ast_enum_of_structs! { + /// Content of a compile-time structured attribute. + /// + /// ## Path + /// + /// A meta path is like the `test` in `#[test]`. + /// + /// ## List + /// + /// A meta list is like the `derive(Copy)` in `#[derive(Copy)]`. + /// + /// ## NameValue + /// + /// A name-value meta is like the `path = "..."` in `#[path = + /// "sys/windows.rs"]`. + /// + /// # Syntax tree enum + /// + /// This type is a [syntax tree enum]. + /// + /// [syntax tree enum]: crate::expr::Expr#syntax-tree-enums + #[cfg_attr(docsrs, doc(cfg(any(feature = "full", feature = "derive"))))] + pub enum Meta { + Path(Path), + + /// A structured list within an attribute, like `derive(Copy, Clone)`. + List(MetaList), + + /// A name-value pair within an attribute, like `feature = "nightly"`. + NameValue(MetaNameValue), + } +} + +ast_struct! 
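// ---------------------------------------------------------------------------
// Editor's illustrative sketch, not part of the vendored syn sources or of
// this QEMU patch. It shows how downstream code typically matches on the
// three `Meta` variants defined above, here pulling the text out of
// `#[doc = "..."]` attributes (name-value metas whose value is a string
// literal expression).
fn doc_text(attr: &syn::Attribute) -> Option<String> {
    use syn::{Expr, Lit, Meta};
    match &attr.meta {
        // `#[doc = " some text"]`
        Meta::NameValue(nv) if nv.path.is_ident("doc") => match &nv.value {
            Expr::Lit(expr) => match &expr.lit {
                Lit::Str(s) => Some(s.value()),
                _ => None,
            },
            _ => None,
        },
        // `#[test]` (Meta::Path) and `#[derive(Copy)]` (Meta::List) carry no
        // doc text.
        _ => None,
    }
}
// ---------------------------------------------------------------------------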
{ + /// A structured list within an attribute, like `derive(Copy, Clone)`. + #[cfg_attr(docsrs, doc(cfg(any(feature = "full", feature = "derive"))))] + pub struct MetaList { + pub path: Path, + pub delimiter: MacroDelimiter, + pub tokens: TokenStream, + } +} + +ast_struct! { + /// A name-value pair within an attribute, like `feature = "nightly"`. + #[cfg_attr(docsrs, doc(cfg(any(feature = "full", feature = "derive"))))] + pub struct MetaNameValue { + pub path: Path, + pub eq_token: Token![=], + pub value: Expr, + } +} + +impl Meta { + /// Returns the path that begins this structured meta item. + /// + /// For example this would return the `test` in `#[test]`, the `derive` in + /// `#[derive(Copy)]`, and the `path` in `#[path = "sys/windows.rs"]`. + pub fn path(&self) -> &Path { + match self { + Meta::Path(path) => path, + Meta::List(meta) => &meta.path, + Meta::NameValue(meta) => &meta.path, + } + } + + /// Error if this is a `Meta::List` or `Meta::NameValue`. + #[cfg(feature = "parsing")] + #[cfg_attr(docsrs, doc(cfg(feature = "parsing")))] + pub fn require_path_only(&self) -> Result<&Path> { + let error_span = match self { + Meta::Path(path) => return Ok(path), + Meta::List(meta) => meta.delimiter.span().open(), + Meta::NameValue(meta) => meta.eq_token.span, + }; + Err(Error::new(error_span, "unexpected token in attribute")) + } + + /// Error if this is a `Meta::Path` or `Meta::NameValue`. + #[cfg(feature = "parsing")] + #[cfg_attr(docsrs, doc(cfg(feature = "parsing")))] + pub fn require_list(&self) -> Result<&MetaList> { + match self { + Meta::List(meta) => Ok(meta), + Meta::Path(path) => Err(crate::error::new2( + path.segments.first().unwrap().ident.span(), + path.segments.last().unwrap().ident.span(), + format!( + "expected attribute arguments in parentheses: `{}(...)`", + parsing::DisplayPath(path), + ), + )), + Meta::NameValue(meta) => Err(Error::new(meta.eq_token.span, "expected `(`")), + } + } + + /// Error if this is a `Meta::Path` or `Meta::List`. + #[cfg(feature = "parsing")] + #[cfg_attr(docsrs, doc(cfg(feature = "parsing")))] + pub fn require_name_value(&self) -> Result<&MetaNameValue> { + match self { + Meta::NameValue(meta) => Ok(meta), + Meta::Path(path) => Err(crate::error::new2( + path.segments.first().unwrap().ident.span(), + path.segments.last().unwrap().ident.span(), + format!( + "expected a value for this attribute: `{} = ...`", + parsing::DisplayPath(path), + ), + )), + Meta::List(meta) => Err(Error::new(meta.delimiter.span().open(), "expected `=`")), + } + } +} + +impl MetaList { + /// See [`Attribute::parse_args`]. + #[cfg(feature = "parsing")] + #[cfg_attr(docsrs, doc(cfg(feature = "parsing")))] + pub fn parse_args(&self) -> Result { + self.parse_args_with(T::parse) + } + + /// See [`Attribute::parse_args_with`]. + #[cfg(feature = "parsing")] + #[cfg_attr(docsrs, doc(cfg(feature = "parsing")))] + pub fn parse_args_with(&self, parser: F) -> Result { + let scope = self.delimiter.span().close(); + crate::parse::parse_scoped(parser, scope, self.tokens.clone()) + } + + /// See [`Attribute::parse_nested_meta`]. 
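// ---------------------------------------------------------------------------
// Editor's illustrative sketch, not part of the vendored syn sources or of
// this QEMU patch. `Meta::require_list` plus `MetaList::parse_args_with`
// (both defined above) give convenient access to list-style attributes such
// as `#[derive(Copy, Clone)]`.
fn derive_paths(attr: &syn::Attribute) -> syn::Result<Vec<syn::Path>> {
    use syn::punctuated::Punctuated;
    use syn::Token;
    // Errors with a useful span if the attribute is `#[derive]` or
    // `#[derive = "..."]` rather than `#[derive(...)]`.
    let list = attr.meta.require_list()?;
    let paths =
        list.parse_args_with(Punctuated::<syn::Path, Token![,]>::parse_terminated)?;
    Ok(paths.into_iter().collect())
}
// ---------------------------------------------------------------------------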
+ #[cfg(feature = "parsing")] + #[cfg_attr(docsrs, doc(cfg(feature = "parsing")))] + pub fn parse_nested_meta( + &self, + logic: impl FnMut(ParseNestedMeta) -> Result<()>, + ) -> Result<()> { + self.parse_args_with(meta::parser(logic)) + } +} + +#[cfg(feature = "printing")] +pub(crate) trait FilterAttrs<'a> { + type Ret: Iterator; + + fn outer(self) -> Self::Ret; + #[cfg(feature = "full")] + fn inner(self) -> Self::Ret; +} + +#[cfg(feature = "printing")] +impl<'a> FilterAttrs<'a> for &'a [Attribute] { + type Ret = iter::Filter, fn(&&Attribute) -> bool>; + + fn outer(self) -> Self::Ret { + fn is_outer(attr: &&Attribute) -> bool { + match attr.style { + AttrStyle::Outer => true, + AttrStyle::Inner(_) => false, + } + } + self.iter().filter(is_outer) + } + + #[cfg(feature = "full")] + fn inner(self) -> Self::Ret { + fn is_inner(attr: &&Attribute) -> bool { + match attr.style { + AttrStyle::Inner(_) => true, + AttrStyle::Outer => false, + } + } + self.iter().filter(is_inner) + } +} + +#[cfg(feature = "parsing")] +pub(crate) mod parsing { + use crate::attr::{AttrStyle, Attribute, Meta, MetaList, MetaNameValue}; + use crate::error::Result; + use crate::expr::{Expr, ExprLit}; + use crate::lit::Lit; + use crate::parse::discouraged::Speculative as _; + use crate::parse::{Parse, ParseStream}; + use crate::path::Path; + use crate::{mac, token}; + use std::fmt::{self, Display}; + + pub(crate) fn parse_inner(input: ParseStream, attrs: &mut Vec) -> Result<()> { + while input.peek(Token![#]) && input.peek2(Token![!]) { + attrs.push(input.call(single_parse_inner)?); + } + Ok(()) + } + + pub(crate) fn single_parse_inner(input: ParseStream) -> Result { + let content; + Ok(Attribute { + pound_token: input.parse()?, + style: AttrStyle::Inner(input.parse()?), + bracket_token: bracketed!(content in input), + meta: content.parse()?, + }) + } + + pub(crate) fn single_parse_outer(input: ParseStream) -> Result { + let content; + Ok(Attribute { + pound_token: input.parse()?, + style: AttrStyle::Outer, + bracket_token: bracketed!(content in input), + meta: content.parse()?, + }) + } + + #[cfg_attr(docsrs, doc(cfg(feature = "parsing")))] + impl Parse for Meta { + fn parse(input: ParseStream) -> Result { + let path = input.call(Path::parse_mod_style)?; + parse_meta_after_path(path, input) + } + } + + #[cfg_attr(docsrs, doc(cfg(feature = "parsing")))] + impl Parse for MetaList { + fn parse(input: ParseStream) -> Result { + let path = input.call(Path::parse_mod_style)?; + parse_meta_list_after_path(path, input) + } + } + + #[cfg_attr(docsrs, doc(cfg(feature = "parsing")))] + impl Parse for MetaNameValue { + fn parse(input: ParseStream) -> Result { + let path = input.call(Path::parse_mod_style)?; + parse_meta_name_value_after_path(path, input) + } + } + + pub(crate) fn parse_meta_after_path(path: Path, input: ParseStream) -> Result { + if input.peek(token::Paren) || input.peek(token::Bracket) || input.peek(token::Brace) { + parse_meta_list_after_path(path, input).map(Meta::List) + } else if input.peek(Token![=]) { + parse_meta_name_value_after_path(path, input).map(Meta::NameValue) + } else { + Ok(Meta::Path(path)) + } + } + + fn parse_meta_list_after_path(path: Path, input: ParseStream) -> Result { + let (delimiter, tokens) = mac::parse_delimiter(input)?; + Ok(MetaList { + path, + delimiter, + tokens, + }) + } + + fn parse_meta_name_value_after_path(path: Path, input: ParseStream) -> Result { + let eq_token: Token![=] = input.parse()?; + let ahead = input.fork(); + let lit: Option = ahead.parse()?; + let value = if let 
(Some(lit), true) = (lit, ahead.is_empty()) { + input.advance_to(&ahead); + Expr::Lit(ExprLit { + attrs: Vec::new(), + lit, + }) + } else if input.peek(Token![#]) && input.peek2(token::Bracket) { + return Err(input.error("unexpected attribute inside of attribute")); + } else { + input.parse()? + }; + Ok(MetaNameValue { + path, + eq_token, + value, + }) + } + + pub(super) struct DisplayAttrStyle<'a>(pub &'a AttrStyle); + + impl<'a> Display for DisplayAttrStyle<'a> { + fn fmt(&self, formatter: &mut fmt::Formatter) -> fmt::Result { + formatter.write_str(match self.0 { + AttrStyle::Outer => "#", + AttrStyle::Inner(_) => "#!", + }) + } + } + + pub(super) struct DisplayPath<'a>(pub &'a Path); + + impl<'a> Display for DisplayPath<'a> { + fn fmt(&self, formatter: &mut fmt::Formatter) -> fmt::Result { + for (i, segment) in self.0.segments.iter().enumerate() { + if i > 0 || self.0.leading_colon.is_some() { + formatter.write_str("::")?; + } + write!(formatter, "{}", segment.ident)?; + } + Ok(()) + } + } +} + +#[cfg(feature = "printing")] +mod printing { + use crate::attr::{AttrStyle, Attribute, MetaList, MetaNameValue}; + use proc_macro2::TokenStream; + use quote::ToTokens; + + #[cfg_attr(docsrs, doc(cfg(feature = "printing")))] + impl ToTokens for Attribute { + fn to_tokens(&self, tokens: &mut TokenStream) { + self.pound_token.to_tokens(tokens); + if let AttrStyle::Inner(b) = &self.style { + b.to_tokens(tokens); + } + self.bracket_token.surround(tokens, |tokens| { + self.meta.to_tokens(tokens); + }); + } + } + + #[cfg_attr(docsrs, doc(cfg(feature = "printing")))] + impl ToTokens for MetaList { + fn to_tokens(&self, tokens: &mut TokenStream) { + self.path.to_tokens(tokens); + self.delimiter.surround(tokens, self.tokens.clone()); + } + } + + #[cfg_attr(docsrs, doc(cfg(feature = "printing")))] + impl ToTokens for MetaNameValue { + fn to_tokens(&self, tokens: &mut TokenStream) { + self.path.to_tokens(tokens); + self.eq_token.to_tokens(tokens); + self.value.to_tokens(tokens); + } + } +} diff --git a/rust/hw/char/pl011/vendor/syn/src/bigint.rs b/rust/hw/char/pl011/vendor/syn/src/bigint.rs new file mode 100644 index 0000000000..66aaa93725 --- /dev/null +++ b/rust/hw/char/pl011/vendor/syn/src/bigint.rs @@ -0,0 +1,66 @@ +use std::ops::{AddAssign, MulAssign}; + +// For implementing base10_digits() accessor on LitInt. +pub(crate) struct BigInt { + digits: Vec, +} + +impl BigInt { + pub(crate) fn new() -> Self { + BigInt { digits: Vec::new() } + } + + pub(crate) fn to_string(&self) -> String { + let mut repr = String::with_capacity(self.digits.len()); + + let mut has_nonzero = false; + for digit in self.digits.iter().rev() { + has_nonzero |= *digit != 0; + if has_nonzero { + repr.push((*digit + b'0') as char); + } + } + + if repr.is_empty() { + repr.push('0'); + } + + repr + } + + fn reserve_two_digits(&mut self) { + let len = self.digits.len(); + let desired = + len + !self.digits.ends_with(&[0, 0]) as usize + !self.digits.ends_with(&[0]) as usize; + self.digits.resize(desired, 0); + } +} + +impl AddAssign for BigInt { + // Assumes increment <16. + fn add_assign(&mut self, mut increment: u8) { + self.reserve_two_digits(); + + let mut i = 0; + while increment > 0 { + let sum = self.digits[i] + increment; + self.digits[i] = sum % 10; + increment = sum / 10; + i += 1; + } + } +} + +impl MulAssign for BigInt { + // Assumes base <=16. 
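// ---------------------------------------------------------------------------
// Editor's illustrative sketch, not part of the vendored syn sources or of
// this QEMU patch. The private `BigInt` helper in this file backs the public
// `LitInt::base10_digits`/`base10_parse` API, which presents hex, octal and
// binary literals uniformly as base-10 digits.
fn demo_base10() -> syn::Result<()> {
    let lit: syn::LitInt = syn::parse_str("0xffff")?;
    assert_eq!(lit.base10_digits(), "65535");
    assert_eq!(lit.base10_parse::<u32>()?, 65535);
    Ok(())
}
// ---------------------------------------------------------------------------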
+ fn mul_assign(&mut self, base: u8) { + self.reserve_two_digits(); + + let mut carry = 0; + for digit in &mut self.digits { + let prod = *digit * base + carry; + *digit = prod % 10; + carry = prod / 10; + } + } +} diff --git a/rust/hw/char/pl011/vendor/syn/src/buffer.rs b/rust/hw/char/pl011/vendor/syn/src/buffer.rs new file mode 100644 index 0000000000..1686e28209 --- /dev/null +++ b/rust/hw/char/pl011/vendor/syn/src/buffer.rs @@ -0,0 +1,434 @@ +//! A stably addressed token buffer supporting efficient traversal based on a +//! cheaply copyable cursor. + +// This module is heavily commented as it contains most of the unsafe code in +// Syn, and caution should be used when editing it. The public-facing interface +// is 100% safe but the implementation is fragile internally. + +use crate::Lifetime; +use proc_macro2::extra::DelimSpan; +use proc_macro2::{Delimiter, Group, Ident, Literal, Punct, Spacing, Span, TokenStream, TokenTree}; +use std::cmp::Ordering; +use std::marker::PhantomData; + +/// Internal type which is used instead of `TokenTree` to represent a token tree +/// within a `TokenBuffer`. +enum Entry { + // Mimicking types from proc-macro. + // Group entries contain the offset to the matching End entry. + Group(Group, usize), + Ident(Ident), + Punct(Punct), + Literal(Literal), + // End entries contain the offset (negative) to the start of the buffer. + End(isize), +} + +/// A buffer that can be efficiently traversed multiple times, unlike +/// `TokenStream` which requires a deep copy in order to traverse more than +/// once. +pub struct TokenBuffer { + // NOTE: Do not implement clone on this - while the current design could be + // cloned, other designs which could be desirable may not be cloneable. + entries: Box<[Entry]>, +} + +impl TokenBuffer { + fn recursive_new(entries: &mut Vec, stream: TokenStream) { + for tt in stream { + match tt { + TokenTree::Ident(ident) => entries.push(Entry::Ident(ident)), + TokenTree::Punct(punct) => entries.push(Entry::Punct(punct)), + TokenTree::Literal(literal) => entries.push(Entry::Literal(literal)), + TokenTree::Group(group) => { + let group_start_index = entries.len(); + entries.push(Entry::End(0)); // we replace this below + Self::recursive_new(entries, group.stream()); + let group_end_index = entries.len(); + entries.push(Entry::End(-(group_end_index as isize))); + let group_end_offset = group_end_index - group_start_index; + entries[group_start_index] = Entry::Group(group, group_end_offset); + } + } + } + } + + /// Creates a `TokenBuffer` containing all the tokens from the input + /// `proc_macro::TokenStream`. + #[cfg(feature = "proc-macro")] + #[cfg_attr(docsrs, doc(cfg(feature = "proc-macro")))] + pub fn new(stream: proc_macro::TokenStream) -> Self { + Self::new2(stream.into()) + } + + /// Creates a `TokenBuffer` containing all the tokens from the input + /// `proc_macro2::TokenStream`. + pub fn new2(stream: TokenStream) -> Self { + let mut entries = Vec::new(); + Self::recursive_new(&mut entries, stream); + entries.push(Entry::End(-(entries.len() as isize))); + Self { + entries: entries.into_boxed_slice(), + } + } + + /// Creates a cursor referencing the first token in the buffer and able to + /// traverse until the end of the buffer. + pub fn begin(&self) -> Cursor { + let ptr = self.entries.as_ptr(); + unsafe { Cursor::create(ptr, ptr.add(self.entries.len() - 1)) } + } +} + +/// A cheaply copyable cursor into a `TokenBuffer`. 
+/// +/// This cursor holds a shared reference into the immutable data which is used +/// internally to represent a `TokenStream`, and can be efficiently manipulated +/// and copied around. +/// +/// An empty `Cursor` can be created directly, or one may create a `TokenBuffer` +/// object and get a cursor to its first token with `begin()`. +pub struct Cursor<'a> { + // The current entry which the `Cursor` is pointing at. + ptr: *const Entry, + // This is the only `Entry::End` object which this cursor is allowed to + // point at. All other `End` objects are skipped over in `Cursor::create`. + scope: *const Entry, + // Cursor is covariant in 'a. This field ensures that our pointers are still + // valid. + marker: PhantomData<&'a Entry>, +} + +impl<'a> Cursor<'a> { + /// Creates a cursor referencing a static empty TokenStream. + pub fn empty() -> Self { + // It's safe in this situation for us to put an `Entry` object in global + // storage, despite it not actually being safe to send across threads + // (`Ident` is a reference into a thread-local table). This is because + // this entry never includes a `Ident` object. + // + // This wrapper struct allows us to break the rules and put a `Sync` + // object in global storage. + struct UnsafeSyncEntry(Entry); + unsafe impl Sync for UnsafeSyncEntry {} + static EMPTY_ENTRY: UnsafeSyncEntry = UnsafeSyncEntry(Entry::End(0)); + + Cursor { + ptr: &EMPTY_ENTRY.0, + scope: &EMPTY_ENTRY.0, + marker: PhantomData, + } + } + + /// This create method intelligently exits non-explicitly-entered + /// `None`-delimited scopes when the cursor reaches the end of them, + /// allowing for them to be treated transparently. + unsafe fn create(mut ptr: *const Entry, scope: *const Entry) -> Self { + // NOTE: If we're looking at a `End`, we want to advance the cursor + // past it, unless `ptr == scope`, which means that we're at the edge of + // our cursor's scope. We should only have `ptr != scope` at the exit + // from None-delimited groups entered with `ignore_none`. + while let Entry::End(_) = unsafe { &*ptr } { + if ptr == scope { + break; + } + ptr = unsafe { ptr.add(1) }; + } + + Cursor { + ptr, + scope, + marker: PhantomData, + } + } + + /// Get the current entry. + fn entry(self) -> &'a Entry { + unsafe { &*self.ptr } + } + + /// Bump the cursor to point at the next token after the current one. This + /// is undefined behavior if the cursor is currently looking at an + /// `Entry::End`. + /// + /// If the cursor is looking at an `Entry::Group`, the bumped cursor will + /// point at the first token in the group (with the same scope end). + unsafe fn bump_ignore_group(self) -> Cursor<'a> { + unsafe { Cursor::create(self.ptr.offset(1), self.scope) } + } + + /// While the cursor is looking at a `None`-delimited group, move it to look + /// at the first token inside instead. If the group is empty, this will move + /// the cursor past the `None`-delimited group. + /// + /// WARNING: This mutates its argument. + fn ignore_none(&mut self) { + while let Entry::Group(group, _) = self.entry() { + if group.delimiter() == Delimiter::None { + unsafe { *self = self.bump_ignore_group() }; + } else { + break; + } + } + } + + /// Checks whether the cursor is currently pointing at the end of its valid + /// scope. + pub fn eof(self) -> bool { + // We're at eof if we're at the end of our scope. + self.ptr == self.scope + } + + /// If the cursor is pointing at a `Group` with the given delimiter, returns + /// a cursor into that group and one pointing to the next `TokenTree`. 
+ pub fn group(mut self, delim: Delimiter) -> Option<(Cursor<'a>, DelimSpan, Cursor<'a>)> { + // If we're not trying to enter a none-delimited group, we want to + // ignore them. We have to make sure to _not_ ignore them when we want + // to enter them, of course. For obvious reasons. + if delim != Delimiter::None { + self.ignore_none(); + } + + if let Entry::Group(group, end_offset) = self.entry() { + if group.delimiter() == delim { + let span = group.delim_span(); + let end_of_group = unsafe { self.ptr.add(*end_offset) }; + let inside_of_group = unsafe { Cursor::create(self.ptr.add(1), end_of_group) }; + let after_group = unsafe { Cursor::create(end_of_group, self.scope) }; + return Some((inside_of_group, span, after_group)); + } + } + + None + } + + pub(crate) fn any_group(self) -> Option<(Cursor<'a>, Delimiter, DelimSpan, Cursor<'a>)> { + if let Entry::Group(group, end_offset) = self.entry() { + let delimiter = group.delimiter(); + let span = group.delim_span(); + let end_of_group = unsafe { self.ptr.add(*end_offset) }; + let inside_of_group = unsafe { Cursor::create(self.ptr.add(1), end_of_group) }; + let after_group = unsafe { Cursor::create(end_of_group, self.scope) }; + return Some((inside_of_group, delimiter, span, after_group)); + } + + None + } + + pub(crate) fn any_group_token(self) -> Option<(Group, Cursor<'a>)> { + if let Entry::Group(group, end_offset) = self.entry() { + let end_of_group = unsafe { self.ptr.add(*end_offset) }; + let after_group = unsafe { Cursor::create(end_of_group, self.scope) }; + return Some((group.clone(), after_group)); + } + + None + } + + /// If the cursor is pointing at a `Ident`, returns it along with a cursor + /// pointing at the next `TokenTree`. + pub fn ident(mut self) -> Option<(Ident, Cursor<'a>)> { + self.ignore_none(); + match self.entry() { + Entry::Ident(ident) => Some((ident.clone(), unsafe { self.bump_ignore_group() })), + _ => None, + } + } + + /// If the cursor is pointing at a `Punct`, returns it along with a cursor + /// pointing at the next `TokenTree`. + pub fn punct(mut self) -> Option<(Punct, Cursor<'a>)> { + self.ignore_none(); + match self.entry() { + Entry::Punct(punct) if punct.as_char() != '\'' => { + Some((punct.clone(), unsafe { self.bump_ignore_group() })) + } + _ => None, + } + } + + /// If the cursor is pointing at a `Literal`, return it along with a cursor + /// pointing at the next `TokenTree`. + pub fn literal(mut self) -> Option<(Literal, Cursor<'a>)> { + self.ignore_none(); + match self.entry() { + Entry::Literal(literal) => Some((literal.clone(), unsafe { self.bump_ignore_group() })), + _ => None, + } + } + + /// If the cursor is pointing at a `Lifetime`, returns it along with a + /// cursor pointing at the next `TokenTree`. + pub fn lifetime(mut self) -> Option<(Lifetime, Cursor<'a>)> { + self.ignore_none(); + match self.entry() { + Entry::Punct(punct) if punct.as_char() == '\'' && punct.spacing() == Spacing::Joint => { + let next = unsafe { self.bump_ignore_group() }; + let (ident, rest) = next.ident()?; + let lifetime = Lifetime { + apostrophe: punct.span(), + ident, + }; + Some((lifetime, rest)) + } + _ => None, + } + } + + /// Copies all remaining tokens visible from this cursor into a + /// `TokenStream`. 
+ pub fn token_stream(self) -> TokenStream { + let mut tts = Vec::new(); + let mut cursor = self; + while let Some((tt, rest)) = cursor.token_tree() { + tts.push(tt); + cursor = rest; + } + tts.into_iter().collect() + } + + /// If the cursor is pointing at a `TokenTree`, returns it along with a + /// cursor pointing at the next `TokenTree`. + /// + /// Returns `None` if the cursor has reached the end of its stream. + /// + /// This method does not treat `None`-delimited groups as transparent, and + /// will return a `Group(None, ..)` if the cursor is looking at one. + pub fn token_tree(self) -> Option<(TokenTree, Cursor<'a>)> { + let (tree, len) = match self.entry() { + Entry::Group(group, end_offset) => (group.clone().into(), *end_offset), + Entry::Literal(literal) => (literal.clone().into(), 1), + Entry::Ident(ident) => (ident.clone().into(), 1), + Entry::Punct(punct) => (punct.clone().into(), 1), + Entry::End(_) => return None, + }; + + let rest = unsafe { Cursor::create(self.ptr.add(len), self.scope) }; + Some((tree, rest)) + } + + /// Returns the `Span` of the current token, or `Span::call_site()` if this + /// cursor points to eof. + pub fn span(self) -> Span { + match self.entry() { + Entry::Group(group, _) => group.span(), + Entry::Literal(literal) => literal.span(), + Entry::Ident(ident) => ident.span(), + Entry::Punct(punct) => punct.span(), + Entry::End(_) => Span::call_site(), + } + } + + /// Returns the `Span` of the token immediately prior to the position of + /// this cursor, or of the current token if there is no previous one. + #[cfg(any(feature = "full", feature = "derive"))] + pub(crate) fn prev_span(mut self) -> Span { + if start_of_buffer(self) < self.ptr { + self.ptr = unsafe { self.ptr.offset(-1) }; + if let Entry::End(_) = self.entry() { + // Locate the matching Group begin token. + let mut depth = 1; + loop { + self.ptr = unsafe { self.ptr.offset(-1) }; + match self.entry() { + Entry::Group(group, _) => { + depth -= 1; + if depth == 0 { + return group.span(); + } + } + Entry::End(_) => depth += 1, + Entry::Literal(_) | Entry::Ident(_) | Entry::Punct(_) => {} + } + } + } + } + self.span() + } + + /// Skip over the next token that is not a None-delimited group, without + /// cloning it. Returns `None` if this cursor points to eof. + /// + /// This method treats `'lifetimes` as a single token. + pub(crate) fn skip(mut self) -> Option> { + self.ignore_none(); + + let len = match self.entry() { + Entry::End(_) => return None, + + // Treat lifetimes as a single tt for the purposes of 'skip'. 
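// ---------------------------------------------------------------------------
// Editor's illustrative sketch, not part of the vendored syn sources or of
// this QEMU patch (assumes the usual proc-macro2 dependency). A typical
// read-only traversal with the `TokenBuffer`/`Cursor` API above: unlike a
// `TokenStream`, the buffer can be walked repeatedly from cheap `Copy`
// cursors.
fn count_top_level_idents(stream: proc_macro2::TokenStream) -> usize {
    let buffer = syn::buffer::TokenBuffer::new2(stream);
    let mut cursor = buffer.begin();
    let mut count = 0;
    // `token_tree` yields groups as single trees, so only idents at this
    // nesting level are counted.
    while let Some((tree, next)) = cursor.token_tree() {
        if matches!(tree, proc_macro2::TokenTree::Ident(_)) {
            count += 1;
        }
        cursor = next;
    }
    count
}
// ---------------------------------------------------------------------------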
+ Entry::Punct(punct) if punct.as_char() == '\'' && punct.spacing() == Spacing::Joint => { + match unsafe { &*self.ptr.add(1) } { + Entry::Ident(_) => 2, + _ => 1, + } + } + + Entry::Group(_, end_offset) => *end_offset, + _ => 1, + }; + + Some(unsafe { Cursor::create(self.ptr.add(len), self.scope) }) + } +} + +impl<'a> Copy for Cursor<'a> {} + +impl<'a> Clone for Cursor<'a> { + fn clone(&self) -> Self { + *self + } +} + +impl<'a> Eq for Cursor<'a> {} + +impl<'a> PartialEq for Cursor<'a> { + fn eq(&self, other: &Self) -> bool { + self.ptr == other.ptr + } +} + +impl<'a> PartialOrd for Cursor<'a> { + fn partial_cmp(&self, other: &Self) -> Option { + if same_buffer(*self, *other) { + Some(cmp_assuming_same_buffer(*self, *other)) + } else { + None + } + } +} + +pub(crate) fn same_scope(a: Cursor, b: Cursor) -> bool { + a.scope == b.scope +} + +pub(crate) fn same_buffer(a: Cursor, b: Cursor) -> bool { + start_of_buffer(a) == start_of_buffer(b) +} + +fn start_of_buffer(cursor: Cursor) -> *const Entry { + unsafe { + match &*cursor.scope { + Entry::End(offset) => cursor.scope.offset(*offset), + _ => unreachable!(), + } + } +} + +pub(crate) fn cmp_assuming_same_buffer(a: Cursor, b: Cursor) -> Ordering { + a.ptr.cmp(&b.ptr) +} + +pub(crate) fn open_span_of_group(cursor: Cursor) -> Span { + match cursor.entry() { + Entry::Group(group, _) => group.span_open(), + _ => cursor.span(), + } +} + +pub(crate) fn close_span_of_group(cursor: Cursor) -> Span { + match cursor.entry() { + Entry::Group(group, _) => group.span_close(), + _ => cursor.span(), + } +} diff --git a/rust/hw/char/pl011/vendor/syn/src/classify.rs b/rust/hw/char/pl011/vendor/syn/src/classify.rs new file mode 100644 index 0000000000..1b0ff30040 --- /dev/null +++ b/rust/hw/char/pl011/vendor/syn/src/classify.rs @@ -0,0 +1,377 @@ +use crate::expr::Expr; +use crate::generics::TypeParamBound; +use crate::path::{Path, PathArguments}; +use crate::punctuated::Punctuated; +use crate::ty::{ReturnType, Type}; +#[cfg(feature = "full")] +use proc_macro2::{Delimiter, TokenStream, TokenTree}; +use std::ops::ControlFlow; + +#[cfg(feature = "full")] +pub(crate) fn requires_semi_to_be_stmt(expr: &Expr) -> bool { + match expr { + Expr::Macro(expr) => !expr.mac.delimiter.is_brace(), + _ => requires_comma_to_be_match_arm(expr), + } +} + +#[cfg(feature = "full")] +pub(crate) fn requires_comma_to_be_match_arm(expr: &Expr) -> bool { + match expr { + Expr::If(_) + | Expr::Match(_) + | Expr::Block(_) | Expr::Unsafe(_) // both under ExprKind::Block in rustc + | Expr::While(_) + | Expr::Loop(_) + | Expr::ForLoop(_) + | Expr::TryBlock(_) + | Expr::Const(_) => false, + + Expr::Array(_) + | Expr::Assign(_) + | Expr::Async(_) + | Expr::Await(_) + | Expr::Binary(_) + | Expr::Break(_) + | Expr::Call(_) + | Expr::Cast(_) + | Expr::Closure(_) + | Expr::Continue(_) + | Expr::Field(_) + | Expr::Group(_) + | Expr::Index(_) + | Expr::Infer(_) + | Expr::Let(_) + | Expr::Lit(_) + | Expr::Macro(_) + | Expr::MethodCall(_) + | Expr::Paren(_) + | Expr::Path(_) + | Expr::Range(_) + | Expr::Reference(_) + | Expr::Repeat(_) + | Expr::Return(_) + | Expr::Struct(_) + | Expr::Try(_) + | Expr::Tuple(_) + | Expr::Unary(_) + | Expr::Yield(_) + | Expr::Verbatim(_) => true + } +} + +#[cfg(all(feature = "printing", feature = "full"))] +pub(crate) fn confusable_with_adjacent_block(mut expr: &Expr) -> bool { + let mut stack = Vec::new(); + + while let Some(next) = match expr { + Expr::Assign(e) => { + stack.push(&e.right); + Some(&e.left) + } + Expr::Await(e) => Some(&e.base), + Expr::Binary(e) => { 
+ stack.push(&e.right); + Some(&e.left) + } + Expr::Break(e) => { + if let Some(Expr::Block(_)) = e.expr.as_deref() { + return true; + } + stack.pop() + } + Expr::Call(e) => Some(&e.func), + Expr::Cast(e) => Some(&e.expr), + Expr::Closure(e) => Some(&e.body), + Expr::Field(e) => Some(&e.base), + Expr::Index(e) => Some(&e.expr), + Expr::MethodCall(e) => Some(&e.receiver), + Expr::Range(e) => { + if let Some(Expr::Block(_)) = e.end.as_deref() { + return true; + } + match (&e.start, &e.end) { + (Some(start), end) => { + stack.extend(end); + Some(start) + } + (None, Some(end)) => Some(end), + (None, None) => stack.pop(), + } + } + Expr::Reference(e) => Some(&e.expr), + Expr::Return(e) => { + if e.expr.is_none() && stack.is_empty() { + return true; + } + stack.pop() + } + Expr::Struct(_) => return true, + Expr::Try(e) => Some(&e.expr), + Expr::Unary(e) => Some(&e.expr), + Expr::Yield(e) => { + if e.expr.is_none() && stack.is_empty() { + return true; + } + stack.pop() + } + + Expr::Array(_) + | Expr::Async(_) + | Expr::Block(_) + | Expr::Const(_) + | Expr::Continue(_) + | Expr::ForLoop(_) + | Expr::Group(_) + | Expr::If(_) + | Expr::Infer(_) + | Expr::Let(_) + | Expr::Lit(_) + | Expr::Loop(_) + | Expr::Macro(_) + | Expr::Match(_) + | Expr::Paren(_) + | Expr::Path(_) + | Expr::Repeat(_) + | Expr::TryBlock(_) + | Expr::Tuple(_) + | Expr::Unsafe(_) + | Expr::Verbatim(_) + | Expr::While(_) => stack.pop(), + } { + expr = next; + } + + false +} + +#[cfg(feature = "printing")] +pub(crate) fn confusable_with_adjacent_lt(mut expr: &Expr) -> bool { + loop { + match expr { + Expr::Binary(e) => expr = &e.right, + Expr::Cast(e) => return trailing_unparameterized_path(&e.ty), + Expr::Reference(e) => expr = &e.expr, + Expr::Unary(e) => expr = &e.expr, + + Expr::Array(_) + | Expr::Assign(_) + | Expr::Async(_) + | Expr::Await(_) + | Expr::Block(_) + | Expr::Break(_) + | Expr::Call(_) + | Expr::Closure(_) + | Expr::Const(_) + | Expr::Continue(_) + | Expr::Field(_) + | Expr::ForLoop(_) + | Expr::Group(_) + | Expr::If(_) + | Expr::Index(_) + | Expr::Infer(_) + | Expr::Let(_) + | Expr::Lit(_) + | Expr::Loop(_) + | Expr::Macro(_) + | Expr::Match(_) + | Expr::MethodCall(_) + | Expr::Paren(_) + | Expr::Path(_) + | Expr::Range(_) + | Expr::Repeat(_) + | Expr::Return(_) + | Expr::Struct(_) + | Expr::Try(_) + | Expr::TryBlock(_) + | Expr::Tuple(_) + | Expr::Unsafe(_) + | Expr::Verbatim(_) + | Expr::While(_) + | Expr::Yield(_) => return false, + } + } + + fn trailing_unparameterized_path(mut ty: &Type) -> bool { + loop { + match ty { + Type::BareFn(t) => match &t.output { + ReturnType::Default => return false, + ReturnType::Type(_, ret) => ty = ret, + }, + Type::ImplTrait(t) => match last_type_in_bounds(&t.bounds) { + ControlFlow::Break(trailing_path) => return trailing_path, + ControlFlow::Continue(t) => ty = t, + }, + Type::Path(t) => match last_type_in_path(&t.path) { + ControlFlow::Break(trailing_path) => return trailing_path, + ControlFlow::Continue(t) => ty = t, + }, + Type::Ptr(t) => ty = &t.elem, + Type::Reference(t) => ty = &t.elem, + Type::TraitObject(t) => match last_type_in_bounds(&t.bounds) { + ControlFlow::Break(trailing_path) => return trailing_path, + ControlFlow::Continue(t) => ty = t, + }, + + Type::Array(_) + | Type::Group(_) + | Type::Infer(_) + | Type::Macro(_) + | Type::Never(_) + | Type::Paren(_) + | Type::Slice(_) + | Type::Tuple(_) + | Type::Verbatim(_) => return false, + } + } + } + + fn last_type_in_path(path: &Path) -> ControlFlow { + match &path.segments.last().unwrap().arguments { + 
PathArguments::None => ControlFlow::Break(true), + PathArguments::AngleBracketed(_) => ControlFlow::Break(false), + PathArguments::Parenthesized(arg) => match &arg.output { + ReturnType::Default => ControlFlow::Break(false), + ReturnType::Type(_, ret) => ControlFlow::Continue(ret), + }, + } + } + + fn last_type_in_bounds( + bounds: &Punctuated, + ) -> ControlFlow { + match bounds.last().unwrap() { + TypeParamBound::Trait(t) => last_type_in_path(&t.path), + TypeParamBound::Lifetime(_) | TypeParamBound::Verbatim(_) => ControlFlow::Break(false), + } + } +} + +/// Whether the expression's last token is `}`. +#[cfg(feature = "full")] +pub(crate) fn expr_trailing_brace(mut expr: &Expr) -> bool { + loop { + match expr { + Expr::Async(_) + | Expr::Block(_) + | Expr::Const(_) + | Expr::ForLoop(_) + | Expr::If(_) + | Expr::Loop(_) + | Expr::Match(_) + | Expr::Struct(_) + | Expr::TryBlock(_) + | Expr::Unsafe(_) + | Expr::While(_) => return true, + + Expr::Assign(e) => expr = &e.right, + Expr::Binary(e) => expr = &e.right, + Expr::Break(e) => match &e.expr { + Some(e) => expr = e, + None => return false, + }, + Expr::Cast(e) => return type_trailing_brace(&e.ty), + Expr::Closure(e) => expr = &e.body, + Expr::Let(e) => expr = &e.expr, + Expr::Macro(e) => return e.mac.delimiter.is_brace(), + Expr::Range(e) => match &e.end { + Some(end) => expr = end, + None => return false, + }, + Expr::Reference(e) => expr = &e.expr, + Expr::Return(e) => match &e.expr { + Some(e) => expr = e, + None => return false, + }, + Expr::Unary(e) => expr = &e.expr, + Expr::Verbatim(e) => return tokens_trailing_brace(e), + Expr::Yield(e) => match &e.expr { + Some(e) => expr = e, + None => return false, + }, + + Expr::Array(_) + | Expr::Await(_) + | Expr::Call(_) + | Expr::Continue(_) + | Expr::Field(_) + | Expr::Group(_) + | Expr::Index(_) + | Expr::Infer(_) + | Expr::Lit(_) + | Expr::MethodCall(_) + | Expr::Paren(_) + | Expr::Path(_) + | Expr::Repeat(_) + | Expr::Try(_) + | Expr::Tuple(_) => return false, + } + } + + fn type_trailing_brace(mut ty: &Type) -> bool { + loop { + match ty { + Type::BareFn(t) => match &t.output { + ReturnType::Default => return false, + ReturnType::Type(_, ret) => ty = ret, + }, + Type::ImplTrait(t) => match last_type_in_bounds(&t.bounds) { + ControlFlow::Break(trailing_brace) => return trailing_brace, + ControlFlow::Continue(t) => ty = t, + }, + Type::Macro(t) => return t.mac.delimiter.is_brace(), + Type::Path(t) => match last_type_in_path(&t.path) { + Some(t) => ty = t, + None => return false, + }, + Type::Ptr(t) => ty = &t.elem, + Type::Reference(t) => ty = &t.elem, + Type::TraitObject(t) => match last_type_in_bounds(&t.bounds) { + ControlFlow::Break(trailing_brace) => return trailing_brace, + ControlFlow::Continue(t) => ty = t, + }, + Type::Verbatim(t) => return tokens_trailing_brace(t), + + Type::Array(_) + | Type::Group(_) + | Type::Infer(_) + | Type::Never(_) + | Type::Paren(_) + | Type::Slice(_) + | Type::Tuple(_) => return false, + } + } + } + + fn last_type_in_path(path: &Path) -> Option<&Type> { + match &path.segments.last().unwrap().arguments { + PathArguments::None | PathArguments::AngleBracketed(_) => None, + PathArguments::Parenthesized(arg) => match &arg.output { + ReturnType::Default => None, + ReturnType::Type(_, ret) => Some(ret), + }, + } + } + + fn last_type_in_bounds( + bounds: &Punctuated, + ) -> ControlFlow { + match bounds.last().unwrap() { + TypeParamBound::Trait(t) => match last_type_in_path(&t.path) { + Some(t) => ControlFlow::Continue(t), + None => 
ControlFlow::Break(false), + }, + TypeParamBound::Lifetime(_) => ControlFlow::Break(false), + TypeParamBound::Verbatim(t) => ControlFlow::Break(tokens_trailing_brace(t)), + } + } + + fn tokens_trailing_brace(tokens: &TokenStream) -> bool { + if let Some(TokenTree::Group(last)) = tokens.clone().into_iter().last() { + last.delimiter() == Delimiter::Brace + } else { + false + } + } +} diff --git a/rust/hw/char/pl011/vendor/syn/src/custom_keyword.rs b/rust/hw/char/pl011/vendor/syn/src/custom_keyword.rs new file mode 100644 index 0000000000..cc4f632c98 --- /dev/null +++ b/rust/hw/char/pl011/vendor/syn/src/custom_keyword.rs @@ -0,0 +1,260 @@ +/// Define a type that supports parsing and printing a given identifier as if it +/// were a keyword. +/// +/// # Usage +/// +/// As a convention, it is recommended that this macro be invoked within a +/// module called `kw` or `keyword` and that the resulting parser be invoked +/// with a `kw::` or `keyword::` prefix. +/// +/// ``` +/// mod kw { +/// syn::custom_keyword!(whatever); +/// } +/// ``` +/// +/// The generated syntax tree node supports the following operations just like +/// any built-in keyword token. +/// +/// - [Peeking] — `input.peek(kw::whatever)` +/// +/// - [Parsing] — `input.parse::()?` +/// +/// - [Printing] — `quote!( ... #whatever_token ... )` +/// +/// - Construction from a [`Span`] — `let whatever_token = kw::whatever(sp)` +/// +/// - Field access to its span — `let sp = whatever_token.span` +/// +/// [Peeking]: crate::parse::ParseBuffer::peek +/// [Parsing]: crate::parse::ParseBuffer::parse +/// [Printing]: quote::ToTokens +/// [`Span`]: proc_macro2::Span +/// +/// # Example +/// +/// This example parses input that looks like `bool = true` or `str = "value"`. +/// The key must be either the identifier `bool` or the identifier `str`. If +/// `bool`, the value may be either `true` or `false`. If `str`, the value may +/// be any string literal. +/// +/// The symbols `bool` and `str` are not reserved keywords in Rust so these are +/// not considered keywords in the `syn::token` module. Like any other +/// identifier that is not a keyword, these can be declared as custom keywords +/// by crates that need to use them as such. +/// +/// ``` +/// use syn::{LitBool, LitStr, Result, Token}; +/// use syn::parse::{Parse, ParseStream}; +/// +/// mod kw { +/// syn::custom_keyword!(bool); +/// syn::custom_keyword!(str); +/// } +/// +/// enum Argument { +/// Bool { +/// bool_token: kw::bool, +/// eq_token: Token![=], +/// value: LitBool, +/// }, +/// Str { +/// str_token: kw::str, +/// eq_token: Token![=], +/// value: LitStr, +/// }, +/// } +/// +/// impl Parse for Argument { +/// fn parse(input: ParseStream) -> Result { +/// let lookahead = input.lookahead1(); +/// if lookahead.peek(kw::bool) { +/// Ok(Argument::Bool { +/// bool_token: input.parse::()?, +/// eq_token: input.parse()?, +/// value: input.parse()?, +/// }) +/// } else if lookahead.peek(kw::str) { +/// Ok(Argument::Str { +/// str_token: input.parse::()?, +/// eq_token: input.parse()?, +/// value: input.parse()?, +/// }) +/// } else { +/// Err(lookahead.error()) +/// } +/// } +/// } +/// ``` +#[macro_export] +macro_rules! 
custom_keyword { + ($ident:ident) => { + #[allow(non_camel_case_types)] + pub struct $ident { + #[allow(dead_code)] + pub span: $crate::__private::Span, + } + + #[doc(hidden)] + #[allow(dead_code, non_snake_case)] + pub fn $ident<__S: $crate::__private::IntoSpans<$crate::__private::Span>>( + span: __S, + ) -> $ident { + $ident { + span: $crate::__private::IntoSpans::into_spans(span), + } + } + + const _: () = { + impl $crate::__private::Default for $ident { + fn default() -> Self { + $ident { + span: $crate::__private::Span::call_site(), + } + } + } + + $crate::impl_parse_for_custom_keyword!($ident); + $crate::impl_to_tokens_for_custom_keyword!($ident); + $crate::impl_clone_for_custom_keyword!($ident); + $crate::impl_extra_traits_for_custom_keyword!($ident); + }; + }; +} + +// Not public API. +#[cfg(feature = "parsing")] +#[doc(hidden)] +#[macro_export] +macro_rules! impl_parse_for_custom_keyword { + ($ident:ident) => { + // For peek. + impl $crate::__private::CustomToken for $ident { + fn peek(cursor: $crate::buffer::Cursor) -> $crate::__private::bool { + if let $crate::__private::Some((ident, _rest)) = cursor.ident() { + ident == $crate::__private::stringify!($ident) + } else { + false + } + } + + fn display() -> &'static $crate::__private::str { + $crate::__private::concat!("`", $crate::__private::stringify!($ident), "`") + } + } + + impl $crate::parse::Parse for $ident { + fn parse(input: $crate::parse::ParseStream) -> $crate::parse::Result<$ident> { + input.step(|cursor| { + if let $crate::__private::Some((ident, rest)) = cursor.ident() { + if ident == $crate::__private::stringify!($ident) { + return $crate::__private::Ok(($ident { span: ident.span() }, rest)); + } + } + $crate::__private::Err(cursor.error($crate::__private::concat!( + "expected `", + $crate::__private::stringify!($ident), + "`", + ))) + }) + } + } + }; +} + +// Not public API. +#[cfg(not(feature = "parsing"))] +#[doc(hidden)] +#[macro_export] +macro_rules! impl_parse_for_custom_keyword { + ($ident:ident) => {}; +} + +// Not public API. +#[cfg(feature = "printing")] +#[doc(hidden)] +#[macro_export] +macro_rules! impl_to_tokens_for_custom_keyword { + ($ident:ident) => { + impl $crate::__private::ToTokens for $ident { + fn to_tokens(&self, tokens: &mut $crate::__private::TokenStream2) { + let ident = $crate::Ident::new($crate::__private::stringify!($ident), self.span); + $crate::__private::TokenStreamExt::append(tokens, ident); + } + } + }; +} + +// Not public API. +#[cfg(not(feature = "printing"))] +#[doc(hidden)] +#[macro_export] +macro_rules! impl_to_tokens_for_custom_keyword { + ($ident:ident) => {}; +} + +// Not public API. +#[cfg(feature = "clone-impls")] +#[doc(hidden)] +#[macro_export] +macro_rules! impl_clone_for_custom_keyword { + ($ident:ident) => { + impl $crate::__private::Copy for $ident {} + + #[allow(clippy::expl_impl_clone_on_copy)] + impl $crate::__private::Clone for $ident { + fn clone(&self) -> Self { + *self + } + } + }; +} + +// Not public API. +#[cfg(not(feature = "clone-impls"))] +#[doc(hidden)] +#[macro_export] +macro_rules! impl_clone_for_custom_keyword { + ($ident:ident) => {}; +} + +// Not public API. +#[cfg(feature = "extra-traits")] +#[doc(hidden)] +#[macro_export] +macro_rules! 
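// ---------------------------------------------------------------------------
// Editor's illustrative sketch, not part of the vendored syn sources or of
// this QEMU patch. A custom keyword (the name `opaque` here is made up) can
// be peeked for, so it works naturally as an optional modifier in a grammar.
mod example_kw {
    syn::custom_keyword!(opaque);
}

fn parse_optional_opaque(
    input: syn::parse::ParseStream,
) -> syn::Result<(bool, syn::Ident)> {
    let is_opaque = if input.peek(example_kw::opaque) {
        input.parse::<example_kw::opaque>()?;
        true
    } else {
        false
    };
    let name: syn::Ident = input.parse()?;
    Ok((is_opaque, name))
}
// ---------------------------------------------------------------------------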
impl_extra_traits_for_custom_keyword { + ($ident:ident) => { + impl $crate::__private::Debug for $ident { + fn fmt(&self, f: &mut $crate::__private::Formatter) -> $crate::__private::FmtResult { + $crate::__private::Formatter::write_str( + f, + $crate::__private::concat!( + "Keyword [", + $crate::__private::stringify!($ident), + "]", + ), + ) + } + } + + impl $crate::__private::Eq for $ident {} + + impl $crate::__private::PartialEq for $ident { + fn eq(&self, _other: &Self) -> $crate::__private::bool { + true + } + } + + impl $crate::__private::Hash for $ident { + fn hash<__H: $crate::__private::Hasher>(&self, _state: &mut __H) {} + } + }; +} + +// Not public API. +#[cfg(not(feature = "extra-traits"))] +#[doc(hidden)] +#[macro_export] +macro_rules! impl_extra_traits_for_custom_keyword { + ($ident:ident) => {}; +} diff --git a/rust/hw/char/pl011/vendor/syn/src/custom_punctuation.rs b/rust/hw/char/pl011/vendor/syn/src/custom_punctuation.rs new file mode 100644 index 0000000000..eef5f54584 --- /dev/null +++ b/rust/hw/char/pl011/vendor/syn/src/custom_punctuation.rs @@ -0,0 +1,304 @@ +/// Define a type that supports parsing and printing a multi-character symbol +/// as if it were a punctuation token. +/// +/// # Usage +/// +/// ``` +/// syn::custom_punctuation!(LeftRightArrow, <=>); +/// ``` +/// +/// The generated syntax tree node supports the following operations just like +/// any built-in punctuation token. +/// +/// - [Peeking] — `input.peek(LeftRightArrow)` +/// +/// - [Parsing] — `input.parse::()?` +/// +/// - [Printing] — `quote!( ... #lrarrow ... )` +/// +/// - Construction from a [`Span`] — `let lrarrow = LeftRightArrow(sp)` +/// +/// - Construction from multiple [`Span`] — `let lrarrow = LeftRightArrow([sp, sp, sp])` +/// +/// - Field access to its spans — `let spans = lrarrow.spans` +/// +/// [Peeking]: crate::parse::ParseBuffer::peek +/// [Parsing]: crate::parse::ParseBuffer::parse +/// [Printing]: quote::ToTokens +/// [`Span`]: proc_macro2::Span +/// +/// # Example +/// +/// ``` +/// use proc_macro2::{TokenStream, TokenTree}; +/// use syn::parse::{Parse, ParseStream, Peek, Result}; +/// use syn::punctuated::Punctuated; +/// use syn::Expr; +/// +/// syn::custom_punctuation!(PathSeparator, ); +/// +/// // expr expr expr ... +/// struct PathSegments { +/// segments: Punctuated, +/// } +/// +/// impl Parse for PathSegments { +/// fn parse(input: ParseStream) -> Result { +/// let mut segments = Punctuated::new(); +/// +/// let first = parse_until(input, PathSeparator)?; +/// segments.push_value(syn::parse2(first)?); +/// +/// while input.peek(PathSeparator) { +/// segments.push_punct(input.parse()?); +/// +/// let next = parse_until(input, PathSeparator)?; +/// segments.push_value(syn::parse2(next)?); +/// } +/// +/// Ok(PathSegments { segments }) +/// } +/// } +/// +/// fn parse_until(input: ParseStream, end: E) -> Result { +/// let mut tokens = TokenStream::new(); +/// while !input.is_empty() && !input.peek(end) { +/// let next: TokenTree = input.parse()?; +/// tokens.extend(Some(next)); +/// } +/// Ok(tokens) +/// } +/// +/// fn main() { +/// let input = r#" a::b c::d::e "#; +/// let _: PathSegments = syn::parse_str(input).unwrap(); +/// } +/// ``` +#[macro_export] +macro_rules! 
custom_punctuation { + ($ident:ident, $($tt:tt)+) => { + pub struct $ident { + #[allow(dead_code)] + pub spans: $crate::custom_punctuation_repr!($($tt)+), + } + + #[doc(hidden)] + #[allow(dead_code, non_snake_case)] + pub fn $ident<__S: $crate::__private::IntoSpans<$crate::custom_punctuation_repr!($($tt)+)>>( + spans: __S, + ) -> $ident { + let _validate_len = 0 $(+ $crate::custom_punctuation_len!(strict, $tt))*; + $ident { + spans: $crate::__private::IntoSpans::into_spans(spans) + } + } + + const _: () = { + impl $crate::__private::Default for $ident { + fn default() -> Self { + $ident($crate::__private::Span::call_site()) + } + } + + $crate::impl_parse_for_custom_punctuation!($ident, $($tt)+); + $crate::impl_to_tokens_for_custom_punctuation!($ident, $($tt)+); + $crate::impl_clone_for_custom_punctuation!($ident, $($tt)+); + $crate::impl_extra_traits_for_custom_punctuation!($ident, $($tt)+); + }; + }; +} + +// Not public API. +#[cfg(feature = "parsing")] +#[doc(hidden)] +#[macro_export] +macro_rules! impl_parse_for_custom_punctuation { + ($ident:ident, $($tt:tt)+) => { + impl $crate::__private::CustomToken for $ident { + fn peek(cursor: $crate::buffer::Cursor) -> $crate::__private::bool { + $crate::__private::peek_punct(cursor, $crate::stringify_punct!($($tt)+)) + } + + fn display() -> &'static $crate::__private::str { + $crate::__private::concat!("`", $crate::stringify_punct!($($tt)+), "`") + } + } + + impl $crate::parse::Parse for $ident { + fn parse(input: $crate::parse::ParseStream) -> $crate::parse::Result<$ident> { + let spans: $crate::custom_punctuation_repr!($($tt)+) = + $crate::__private::parse_punct(input, $crate::stringify_punct!($($tt)+))?; + Ok($ident(spans)) + } + } + }; +} + +// Not public API. +#[cfg(not(feature = "parsing"))] +#[doc(hidden)] +#[macro_export] +macro_rules! impl_parse_for_custom_punctuation { + ($ident:ident, $($tt:tt)+) => {}; +} + +// Not public API. +#[cfg(feature = "printing")] +#[doc(hidden)] +#[macro_export] +macro_rules! impl_to_tokens_for_custom_punctuation { + ($ident:ident, $($tt:tt)+) => { + impl $crate::__private::ToTokens for $ident { + fn to_tokens(&self, tokens: &mut $crate::__private::TokenStream2) { + $crate::__private::print_punct($crate::stringify_punct!($($tt)+), &self.spans, tokens) + } + } + }; +} + +// Not public API. +#[cfg(not(feature = "printing"))] +#[doc(hidden)] +#[macro_export] +macro_rules! impl_to_tokens_for_custom_punctuation { + ($ident:ident, $($tt:tt)+) => {}; +} + +// Not public API. +#[cfg(feature = "clone-impls")] +#[doc(hidden)] +#[macro_export] +macro_rules! impl_clone_for_custom_punctuation { + ($ident:ident, $($tt:tt)+) => { + impl $crate::__private::Copy for $ident {} + + #[allow(clippy::expl_impl_clone_on_copy)] + impl $crate::__private::Clone for $ident { + fn clone(&self) -> Self { + *self + } + } + }; +} + +// Not public API. +#[cfg(not(feature = "clone-impls"))] +#[doc(hidden)] +#[macro_export] +macro_rules! impl_clone_for_custom_punctuation { + ($ident:ident, $($tt:tt)+) => {}; +} + +// Not public API. +#[cfg(feature = "extra-traits")] +#[doc(hidden)] +#[macro_export] +macro_rules! 
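// ---------------------------------------------------------------------------
// Editor's illustrative sketch, not part of the vendored syn sources or of
// this QEMU patch. A custom multi-character punctuation token behaves like
// the built-in ones; `<=>` matches the `LeftRightArrow` usage shown in the
// docs above.
syn::custom_punctuation!(LeftRightArrow, <=>);

fn parse_arrow_pair(
    input: syn::parse::ParseStream,
) -> syn::Result<(syn::Ident, syn::Ident)> {
    let left: syn::Ident = input.parse()?;
    let _arrow: LeftRightArrow = input.parse()?;
    let right: syn::Ident = input.parse()?;
    Ok((left, right))
}
// ---------------------------------------------------------------------------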
impl_extra_traits_for_custom_punctuation { + ($ident:ident, $($tt:tt)+) => { + impl $crate::__private::Debug for $ident { + fn fmt(&self, f: &mut $crate::__private::Formatter) -> $crate::__private::FmtResult { + $crate::__private::Formatter::write_str(f, $crate::__private::stringify!($ident)) + } + } + + impl $crate::__private::Eq for $ident {} + + impl $crate::__private::PartialEq for $ident { + fn eq(&self, _other: &Self) -> $crate::__private::bool { + true + } + } + + impl $crate::__private::Hash for $ident { + fn hash<__H: $crate::__private::Hasher>(&self, _state: &mut __H) {} + } + }; +} + +// Not public API. +#[cfg(not(feature = "extra-traits"))] +#[doc(hidden)] +#[macro_export] +macro_rules! impl_extra_traits_for_custom_punctuation { + ($ident:ident, $($tt:tt)+) => {}; +} + +// Not public API. +#[doc(hidden)] +#[macro_export] +macro_rules! custom_punctuation_repr { + ($($tt:tt)+) => { + [$crate::__private::Span; 0 $(+ $crate::custom_punctuation_len!(lenient, $tt))+] + }; +} + +// Not public API. +#[doc(hidden)] +#[macro_export] +#[rustfmt::skip] +macro_rules! custom_punctuation_len { + ($mode:ident, &) => { 1 }; + ($mode:ident, &&) => { 2 }; + ($mode:ident, &=) => { 2 }; + ($mode:ident, @) => { 1 }; + ($mode:ident, ^) => { 1 }; + ($mode:ident, ^=) => { 2 }; + ($mode:ident, :) => { 1 }; + ($mode:ident, ,) => { 1 }; + ($mode:ident, $) => { 1 }; + ($mode:ident, .) => { 1 }; + ($mode:ident, ..) => { 2 }; + ($mode:ident, ...) => { 3 }; + ($mode:ident, ..=) => { 3 }; + ($mode:ident, =) => { 1 }; + ($mode:ident, ==) => { 2 }; + ($mode:ident, =>) => { 2 }; + ($mode:ident, >=) => { 2 }; + ($mode:ident, >) => { 1 }; + ($mode:ident, <-) => { 2 }; + ($mode:ident, <=) => { 2 }; + ($mode:ident, <) => { 1 }; + ($mode:ident, -) => { 1 }; + ($mode:ident, -=) => { 2 }; + ($mode:ident, !=) => { 2 }; + ($mode:ident, !) => { 1 }; + ($mode:ident, |) => { 1 }; + ($mode:ident, |=) => { 2 }; + ($mode:ident, ||) => { 2 }; + ($mode:ident, ::) => { 2 }; + ($mode:ident, %) => { 1 }; + ($mode:ident, %=) => { 2 }; + ($mode:ident, +) => { 1 }; + ($mode:ident, +=) => { 2 }; + ($mode:ident, #) => { 1 }; + ($mode:ident, ?) => { 1 }; + ($mode:ident, ->) => { 2 }; + ($mode:ident, ;) => { 1 }; + ($mode:ident, <<) => { 2 }; + ($mode:ident, <<=) => { 3 }; + ($mode:ident, >>) => { 2 }; + ($mode:ident, >>=) => { 3 }; + ($mode:ident, /) => { 1 }; + ($mode:ident, /=) => { 2 }; + ($mode:ident, *) => { 1 }; + ($mode:ident, *=) => { 2 }; + ($mode:ident, ~) => { 1 }; + (lenient, $tt:tt) => { 0 }; + (strict, $tt:tt) => {{ $crate::custom_punctuation_unexpected!($tt); 0 }}; +} + +// Not public API. +#[doc(hidden)] +#[macro_export] +macro_rules! custom_punctuation_unexpected { + () => {}; +} + +// Not public API. +#[doc(hidden)] +#[macro_export] +macro_rules! stringify_punct { + ($($tt:tt)+) => { + $crate::__private::concat!($($crate::__private::stringify!($tt)),+) + }; +} diff --git a/rust/hw/char/pl011/vendor/syn/src/data.rs b/rust/hw/char/pl011/vendor/syn/src/data.rs new file mode 100644 index 0000000000..a44cdf341d --- /dev/null +++ b/rust/hw/char/pl011/vendor/syn/src/data.rs @@ -0,0 +1,423 @@ +use crate::attr::Attribute; +use crate::expr::Expr; +use crate::ident::Ident; +use crate::punctuated::{self, Punctuated}; +use crate::restriction::{FieldMutability, Visibility}; +use crate::token; +use crate::ty::Type; + +ast_struct! { + /// An enum variant. + #[cfg_attr(docsrs, doc(cfg(any(feature = "full", feature = "derive"))))] + pub struct Variant { + pub attrs: Vec, + + /// Name of the variant. 
+ pub ident: Ident, + + /// Content stored in the variant. + pub fields: Fields, + + /// Explicit discriminant: `Variant = 1` + pub discriminant: Option<(Token![=], Expr)>, + } +} + +ast_enum_of_structs! { + /// Data stored within an enum variant or struct. + /// + /// # Syntax tree enum + /// + /// This type is a [syntax tree enum]. + /// + /// [syntax tree enum]: crate::expr::Expr#syntax-tree-enums + #[cfg_attr(docsrs, doc(cfg(any(feature = "full", feature = "derive"))))] + pub enum Fields { + /// Named fields of a struct or struct variant such as `Point { x: f64, + /// y: f64 }`. + Named(FieldsNamed), + + /// Unnamed fields of a tuple struct or tuple variant such as `Some(T)`. + Unnamed(FieldsUnnamed), + + /// Unit struct or unit variant such as `None`. + Unit, + } +} + +ast_struct! { + /// Named fields of a struct or struct variant such as `Point { x: f64, + /// y: f64 }`. + #[cfg_attr(docsrs, doc(cfg(any(feature = "full", feature = "derive"))))] + pub struct FieldsNamed { + pub brace_token: token::Brace, + pub named: Punctuated, + } +} + +ast_struct! { + /// Unnamed fields of a tuple struct or tuple variant such as `Some(T)`. + #[cfg_attr(docsrs, doc(cfg(any(feature = "full", feature = "derive"))))] + pub struct FieldsUnnamed { + pub paren_token: token::Paren, + pub unnamed: Punctuated, + } +} + +impl Fields { + /// Get an iterator over the borrowed [`Field`] items in this object. This + /// iterator can be used to iterate over a named or unnamed struct or + /// variant's fields uniformly. + pub fn iter(&self) -> punctuated::Iter { + match self { + Fields::Unit => crate::punctuated::empty_punctuated_iter(), + Fields::Named(f) => f.named.iter(), + Fields::Unnamed(f) => f.unnamed.iter(), + } + } + + /// Get an iterator over the mutably borrowed [`Field`] items in this + /// object. This iterator can be used to iterate over a named or unnamed + /// struct or variant's fields uniformly. + pub fn iter_mut(&mut self) -> punctuated::IterMut { + match self { + Fields::Unit => crate::punctuated::empty_punctuated_iter_mut(), + Fields::Named(f) => f.named.iter_mut(), + Fields::Unnamed(f) => f.unnamed.iter_mut(), + } + } + + /// Returns the number of fields. + pub fn len(&self) -> usize { + match self { + Fields::Unit => 0, + Fields::Named(f) => f.named.len(), + Fields::Unnamed(f) => f.unnamed.len(), + } + } + + /// Returns `true` if there are zero fields. + pub fn is_empty(&self) -> bool { + match self { + Fields::Unit => true, + Fields::Named(f) => f.named.is_empty(), + Fields::Unnamed(f) => f.unnamed.is_empty(), + } + } +} + +impl IntoIterator for Fields { + type Item = Field; + type IntoIter = punctuated::IntoIter; + + fn into_iter(self) -> Self::IntoIter { + match self { + Fields::Unit => Punctuated::::new().into_iter(), + Fields::Named(f) => f.named.into_iter(), + Fields::Unnamed(f) => f.unnamed.into_iter(), + } + } +} + +impl<'a> IntoIterator for &'a Fields { + type Item = &'a Field; + type IntoIter = punctuated::Iter<'a, Field>; + + fn into_iter(self) -> Self::IntoIter { + self.iter() + } +} + +impl<'a> IntoIterator for &'a mut Fields { + type Item = &'a mut Field; + type IntoIter = punctuated::IterMut<'a, Field>; + + fn into_iter(self) -> Self::IntoIter { + self.iter_mut() + } +} + +ast_struct! { + /// A field of a struct or enum variant. + #[cfg_attr(docsrs, doc(cfg(any(feature = "full", feature = "derive"))))] + pub struct Field { + pub attrs: Vec, + + pub vis: Visibility, + + pub mutability: FieldMutability, + + /// Name of the field, if any. 
+ /// + /// Fields of tuple structs have no names. + pub ident: Option, + + pub colon_token: Option, + + pub ty: Type, + } +} + +#[cfg(feature = "parsing")] +pub(crate) mod parsing { + use crate::attr::Attribute; + use crate::data::{Field, Fields, FieldsNamed, FieldsUnnamed, Variant}; + use crate::error::Result; + use crate::expr::Expr; + use crate::ext::IdentExt as _; + use crate::ident::Ident; + #[cfg(not(feature = "full"))] + use crate::parse::discouraged::Speculative as _; + use crate::parse::{Parse, ParseStream}; + use crate::restriction::{FieldMutability, Visibility}; + use crate::token; + use crate::ty::Type; + use crate::verbatim; + + #[cfg_attr(docsrs, doc(cfg(feature = "parsing")))] + impl Parse for Variant { + fn parse(input: ParseStream) -> Result { + let attrs = input.call(Attribute::parse_outer)?; + let _visibility: Visibility = input.parse()?; + let ident: Ident = input.parse()?; + let fields = if input.peek(token::Brace) { + Fields::Named(input.parse()?) + } else if input.peek(token::Paren) { + Fields::Unnamed(input.parse()?) + } else { + Fields::Unit + }; + let discriminant = if input.peek(Token![=]) { + let eq_token: Token![=] = input.parse()?; + #[cfg(feature = "full")] + let discriminant: Expr = input.parse()?; + #[cfg(not(feature = "full"))] + let discriminant = { + let begin = input.fork(); + let ahead = input.fork(); + let mut discriminant: Result = ahead.parse(); + if discriminant.is_ok() { + input.advance_to(&ahead); + } else if scan_lenient_discriminant(input).is_ok() { + discriminant = Ok(Expr::Verbatim(verbatim::between(&begin, input))); + } + discriminant? + }; + Some((eq_token, discriminant)) + } else { + None + }; + Ok(Variant { + attrs, + ident, + fields, + discriminant, + }) + } + } + + #[cfg(not(feature = "full"))] + pub(crate) fn scan_lenient_discriminant(input: ParseStream) -> Result<()> { + use crate::expr::Member; + use crate::lifetime::Lifetime; + use crate::lit::Lit; + use crate::lit::LitFloat; + use crate::op::{BinOp, UnOp}; + use crate::path::{self, AngleBracketedGenericArguments}; + use proc_macro2::Delimiter::{self, Brace, Bracket, Parenthesis}; + + let consume = |delimiter: Delimiter| { + Result::unwrap(input.step(|cursor| match cursor.group(delimiter) { + Some((_inside, _span, rest)) => Ok((true, rest)), + None => Ok((false, *cursor)), + })) + }; + + macro_rules! consume { + [$token:tt] => { + input.parse::>().unwrap().is_some() + }; + } + + let mut initial = true; + let mut depth = 0usize; + loop { + if initial { + if consume![&] { + input.parse::>()?; + } else if consume![if] || consume![match] || consume![while] { + depth += 1; + } else if input.parse::>()?.is_some() + || (consume(Brace) || consume(Bracket) || consume(Parenthesis)) + || (consume![async] || consume![const] || consume![loop] || consume![unsafe]) + && (consume(Brace) || break) + { + initial = false; + } else if consume![let] { + while !consume![=] { + if !((consume![|] || consume![ref] || consume![mut] || consume![@]) + || (consume![!] || input.parse::>()?.is_some()) + || (consume![..=] || consume![..] || consume![&] || consume![_]) + || (consume(Brace) || consume(Bracket) || consume(Parenthesis))) + { + path::parsing::qpath(input, true)?; + } + } + } else if input.parse::>()?.is_some() && !consume![:] { + break; + } else if input.parse::().is_err() { + path::parsing::qpath(input, true)?; + initial = consume![!] 
|| depth == 0 && input.peek(token::Brace); + } + } else if input.is_empty() || input.peek(Token![,]) { + return Ok(()); + } else if depth > 0 && consume(Brace) { + if consume![else] && !consume(Brace) { + initial = consume![if] || break; + } else { + depth -= 1; + } + } else if input.parse::().is_ok() || (consume![..] | consume![=]) { + initial = true; + } else if consume![.] { + if input.parse::>()?.is_none() + && (input.parse::()?.is_named() && consume![::]) + { + AngleBracketedGenericArguments::do_parse(None, input)?; + } + } else if consume![as] { + input.parse::()?; + } else if !(consume(Brace) || consume(Bracket) || consume(Parenthesis)) { + break; + } + } + + Err(input.error("unsupported expression")) + } + + #[cfg_attr(docsrs, doc(cfg(feature = "parsing")))] + impl Parse for FieldsNamed { + fn parse(input: ParseStream) -> Result { + let content; + Ok(FieldsNamed { + brace_token: braced!(content in input), + named: content.parse_terminated(Field::parse_named, Token![,])?, + }) + } + } + + #[cfg_attr(docsrs, doc(cfg(feature = "parsing")))] + impl Parse for FieldsUnnamed { + fn parse(input: ParseStream) -> Result { + let content; + Ok(FieldsUnnamed { + paren_token: parenthesized!(content in input), + unnamed: content.parse_terminated(Field::parse_unnamed, Token![,])?, + }) + } + } + + impl Field { + /// Parses a named (braced struct) field. + #[cfg_attr(docsrs, doc(cfg(feature = "parsing")))] + pub fn parse_named(input: ParseStream) -> Result { + let attrs = input.call(Attribute::parse_outer)?; + let vis: Visibility = input.parse()?; + + let unnamed_field = cfg!(feature = "full") && input.peek(Token![_]); + let ident = if unnamed_field { + input.call(Ident::parse_any) + } else { + input.parse() + }?; + + let colon_token: Token![:] = input.parse()?; + + let ty: Type = if unnamed_field + && (input.peek(Token![struct]) + || input.peek(Token![union]) && input.peek2(token::Brace)) + { + let begin = input.fork(); + input.call(Ident::parse_any)?; + input.parse::()?; + Type::Verbatim(verbatim::between(&begin, input)) + } else { + input.parse()? + }; + + Ok(Field { + attrs, + vis, + mutability: FieldMutability::None, + ident: Some(ident), + colon_token: Some(colon_token), + ty, + }) + } + + /// Parses an unnamed (tuple struct) field. 
+ #[cfg_attr(docsrs, doc(cfg(feature = "parsing")))] + pub fn parse_unnamed(input: ParseStream) -> Result { + Ok(Field { + attrs: input.call(Attribute::parse_outer)?, + vis: input.parse()?, + mutability: FieldMutability::None, + ident: None, + colon_token: None, + ty: input.parse()?, + }) + } + } +} + +#[cfg(feature = "printing")] +mod printing { + use crate::data::{Field, FieldsNamed, FieldsUnnamed, Variant}; + use crate::print::TokensOrDefault; + use proc_macro2::TokenStream; + use quote::{ToTokens, TokenStreamExt}; + + #[cfg_attr(docsrs, doc(cfg(feature = "printing")))] + impl ToTokens for Variant { + fn to_tokens(&self, tokens: &mut TokenStream) { + tokens.append_all(&self.attrs); + self.ident.to_tokens(tokens); + self.fields.to_tokens(tokens); + if let Some((eq_token, disc)) = &self.discriminant { + eq_token.to_tokens(tokens); + disc.to_tokens(tokens); + } + } + } + + #[cfg_attr(docsrs, doc(cfg(feature = "printing")))] + impl ToTokens for FieldsNamed { + fn to_tokens(&self, tokens: &mut TokenStream) { + self.brace_token.surround(tokens, |tokens| { + self.named.to_tokens(tokens); + }); + } + } + + #[cfg_attr(docsrs, doc(cfg(feature = "printing")))] + impl ToTokens for FieldsUnnamed { + fn to_tokens(&self, tokens: &mut TokenStream) { + self.paren_token.surround(tokens, |tokens| { + self.unnamed.to_tokens(tokens); + }); + } + } + + #[cfg_attr(docsrs, doc(cfg(feature = "printing")))] + impl ToTokens for Field { + fn to_tokens(&self, tokens: &mut TokenStream) { + tokens.append_all(&self.attrs); + self.vis.to_tokens(tokens); + if let Some(ident) = &self.ident { + ident.to_tokens(tokens); + TokensOrDefault(&self.colon_token).to_tokens(tokens); + } + self.ty.to_tokens(tokens); + } + } +} diff --git a/rust/hw/char/pl011/vendor/syn/src/derive.rs b/rust/hw/char/pl011/vendor/syn/src/derive.rs new file mode 100644 index 0000000000..3443ecfc05 --- /dev/null +++ b/rust/hw/char/pl011/vendor/syn/src/derive.rs @@ -0,0 +1,259 @@ +use crate::attr::Attribute; +use crate::data::{Fields, FieldsNamed, Variant}; +use crate::generics::Generics; +use crate::ident::Ident; +use crate::punctuated::Punctuated; +use crate::restriction::Visibility; +use crate::token; + +ast_struct! { + /// Data structure sent to a `proc_macro_derive` macro. + #[cfg_attr(docsrs, doc(cfg(feature = "derive")))] + pub struct DeriveInput { + pub attrs: Vec, + pub vis: Visibility, + pub ident: Ident, + pub generics: Generics, + pub data: Data, + } +} + +ast_enum! { + /// The storage of a struct, enum or union data structure. + /// + /// # Syntax tree enum + /// + /// This type is a [syntax tree enum]. + /// + /// [syntax tree enum]: crate::expr::Expr#syntax-tree-enums + #[cfg_attr(docsrs, doc(cfg(feature = "derive")))] + pub enum Data { + Struct(DataStruct), + Enum(DataEnum), + Union(DataUnion), + } +} + +ast_struct! { + /// A struct input to a `proc_macro_derive` macro. + #[cfg_attr(docsrs, doc(cfg(feature = "derive")))] + pub struct DataStruct { + pub struct_token: Token![struct], + pub fields: Fields, + pub semi_token: Option, + } +} + +ast_struct! { + /// An enum input to a `proc_macro_derive` macro. + #[cfg_attr(docsrs, doc(cfg(feature = "derive")))] + pub struct DataEnum { + pub enum_token: Token![enum], + pub brace_token: token::Brace, + pub variants: Punctuated, + } +} + +ast_struct! { + /// An untagged union input to a `proc_macro_derive` macro. 
+ #[cfg_attr(docsrs, doc(cfg(feature = "derive")))] + pub struct DataUnion { + pub union_token: Token![union], + pub fields: FieldsNamed, + } +} + +#[cfg(feature = "parsing")] +pub(crate) mod parsing { + use crate::attr::Attribute; + use crate::data::{Fields, FieldsNamed, Variant}; + use crate::derive::{Data, DataEnum, DataStruct, DataUnion, DeriveInput}; + use crate::error::Result; + use crate::generics::{Generics, WhereClause}; + use crate::ident::Ident; + use crate::parse::{Parse, ParseStream}; + use crate::punctuated::Punctuated; + use crate::restriction::Visibility; + use crate::token; + + #[cfg_attr(docsrs, doc(cfg(feature = "parsing")))] + impl Parse for DeriveInput { + fn parse(input: ParseStream) -> Result { + let attrs = input.call(Attribute::parse_outer)?; + let vis = input.parse::()?; + + let lookahead = input.lookahead1(); + if lookahead.peek(Token![struct]) { + let struct_token = input.parse::()?; + let ident = input.parse::()?; + let generics = input.parse::()?; + let (where_clause, fields, semi) = data_struct(input)?; + Ok(DeriveInput { + attrs, + vis, + ident, + generics: Generics { + where_clause, + ..generics + }, + data: Data::Struct(DataStruct { + struct_token, + fields, + semi_token: semi, + }), + }) + } else if lookahead.peek(Token![enum]) { + let enum_token = input.parse::()?; + let ident = input.parse::()?; + let generics = input.parse::()?; + let (where_clause, brace, variants) = data_enum(input)?; + Ok(DeriveInput { + attrs, + vis, + ident, + generics: Generics { + where_clause, + ..generics + }, + data: Data::Enum(DataEnum { + enum_token, + brace_token: brace, + variants, + }), + }) + } else if lookahead.peek(Token![union]) { + let union_token = input.parse::()?; + let ident = input.parse::()?; + let generics = input.parse::()?; + let (where_clause, fields) = data_union(input)?; + Ok(DeriveInput { + attrs, + vis, + ident, + generics: Generics { + where_clause, + ..generics + }, + data: Data::Union(DataUnion { + union_token, + fields, + }), + }) + } else { + Err(lookahead.error()) + } + } + } + + pub(crate) fn data_struct( + input: ParseStream, + ) -> Result<(Option, Fields, Option)> { + let mut lookahead = input.lookahead1(); + let mut where_clause = None; + if lookahead.peek(Token![where]) { + where_clause = Some(input.parse()?); + lookahead = input.lookahead1(); + } + + if where_clause.is_none() && lookahead.peek(token::Paren) { + let fields = input.parse()?; + + lookahead = input.lookahead1(); + if lookahead.peek(Token![where]) { + where_clause = Some(input.parse()?); + lookahead = input.lookahead1(); + } + + if lookahead.peek(Token![;]) { + let semi = input.parse()?; + Ok((where_clause, Fields::Unnamed(fields), Some(semi))) + } else { + Err(lookahead.error()) + } + } else if lookahead.peek(token::Brace) { + let fields = input.parse()?; + Ok((where_clause, Fields::Named(fields), None)) + } else if lookahead.peek(Token![;]) { + let semi = input.parse()?; + Ok((where_clause, Fields::Unit, Some(semi))) + } else { + Err(lookahead.error()) + } + } + + pub(crate) fn data_enum( + input: ParseStream, + ) -> Result<( + Option, + token::Brace, + Punctuated, + )> { + let where_clause = input.parse()?; + + let content; + let brace = braced!(content in input); + let variants = content.parse_terminated(Variant::parse, Token![,])?; + + Ok((where_clause, brace, variants)) + } + + pub(crate) fn data_union(input: ParseStream) -> Result<(Option, FieldsNamed)> { + let where_clause = input.parse()?; + let fields = input.parse()?; + Ok((where_clause, fields)) + } +} + 
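
(Illustrative usage sketch, not part of the vendored file or of this patch: it shows roughly how a proc-macro crate consumes the DeriveInput parser defined above. The derive_object name and the generated impl are hypothetical examples, assuming the usual proc-macro2/quote setup.)

use proc_macro2::TokenStream;
use quote::quote;
use syn::{parse2, Data, DeriveInput, Error, Fields};

// Hypothetical derive-macro body: parse the input token stream into the
// DeriveInput tree defined above, accept only structs with named fields,
// and emit either the generated impl or a spanned compile_error!.
fn derive_object(input: TokenStream) -> TokenStream {
    let ast: DeriveInput = match parse2(input) {
        Ok(ast) => ast,
        Err(err) => return err.to_compile_error(),
    };
    match &ast.data {
        Data::Struct(data) if matches!(&data.fields, Fields::Named(_)) => {
            let ident = &ast.ident;
            quote! {
                impl #ident {
                    pub const TYPE_NAME: &'static str = stringify!(#ident);
                }
            }
        }
        _ => Error::new_spanned(&ast.ident, "expected a struct with named fields")
            .to_compile_error(),
    }
}

Returning the output of Error::to_compile_error() instead of panicking keeps the diagnostic spanned to the offending tokens, which is the pattern the error.rs documentation later in this patch recommends.
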
+#[cfg(feature = "printing")] +mod printing { + use crate::attr::FilterAttrs; + use crate::data::Fields; + use crate::derive::{Data, DeriveInput}; + use crate::print::TokensOrDefault; + use proc_macro2::TokenStream; + use quote::ToTokens; + + #[cfg_attr(docsrs, doc(cfg(feature = "printing")))] + impl ToTokens for DeriveInput { + fn to_tokens(&self, tokens: &mut TokenStream) { + for attr in self.attrs.outer() { + attr.to_tokens(tokens); + } + self.vis.to_tokens(tokens); + match &self.data { + Data::Struct(d) => d.struct_token.to_tokens(tokens), + Data::Enum(d) => d.enum_token.to_tokens(tokens), + Data::Union(d) => d.union_token.to_tokens(tokens), + } + self.ident.to_tokens(tokens); + self.generics.to_tokens(tokens); + match &self.data { + Data::Struct(data) => match &data.fields { + Fields::Named(fields) => { + self.generics.where_clause.to_tokens(tokens); + fields.to_tokens(tokens); + } + Fields::Unnamed(fields) => { + fields.to_tokens(tokens); + self.generics.where_clause.to_tokens(tokens); + TokensOrDefault(&data.semi_token).to_tokens(tokens); + } + Fields::Unit => { + self.generics.where_clause.to_tokens(tokens); + TokensOrDefault(&data.semi_token).to_tokens(tokens); + } + }, + Data::Enum(data) => { + self.generics.where_clause.to_tokens(tokens); + data.brace_token.surround(tokens, |tokens| { + data.variants.to_tokens(tokens); + }); + } + Data::Union(data) => { + self.generics.where_clause.to_tokens(tokens); + data.fields.to_tokens(tokens); + } + } + } + } +} diff --git a/rust/hw/char/pl011/vendor/syn/src/discouraged.rs b/rust/hw/char/pl011/vendor/syn/src/discouraged.rs new file mode 100644 index 0000000000..4109c670e7 --- /dev/null +++ b/rust/hw/char/pl011/vendor/syn/src/discouraged.rs @@ -0,0 +1,225 @@ +//! Extensions to the parsing API with niche applicability. + +use crate::buffer::Cursor; +use crate::error::Result; +use crate::parse::{inner_unexpected, ParseBuffer, Unexpected}; +use proc_macro2::extra::DelimSpan; +use proc_macro2::Delimiter; +use std::cell::Cell; +use std::mem; +use std::rc::Rc; + +/// Extensions to the `ParseStream` API to support speculative parsing. +pub trait Speculative { + /// Advance this parse stream to the position of a forked parse stream. + /// + /// This is the opposite operation to [`ParseStream::fork`]. You can fork a + /// parse stream, perform some speculative parsing, then join the original + /// stream to the fork to "commit" the parsing from the fork to the main + /// stream. + /// + /// If you can avoid doing this, you should, as it limits the ability to + /// generate useful errors. That said, it is often the only way to parse + /// syntax of the form `A* B*` for arbitrary syntax `A` and `B`. The problem + /// is that when the fork fails to parse an `A`, it's impossible to tell + /// whether that was because of a syntax error and the user meant to provide + /// an `A`, or that the `A`s are finished and it's time to start parsing + /// `B`s. Use with care. + /// + /// Also note that if `A` is a subset of `B`, `A* B*` can be parsed by + /// parsing `B*` and removing the leading members of `A` from the + /// repetition, bypassing the need to involve the downsides associated with + /// speculative parsing. + /// + /// [`ParseStream::fork`]: ParseBuffer::fork + /// + /// # Example + /// + /// There has been chatter about the possibility of making the colons in the + /// turbofish syntax like `path::to::` no longer required by accepting + /// `path::to` in expression position. 
Specifically, according to [RFC + /// 2544], [`PathSegment`] parsing should always try to consume a following + /// `<` token as the start of generic arguments, and reset to the `<` if + /// that fails (e.g. the token is acting as a less-than operator). + /// + /// This is the exact kind of parsing behavior which requires the "fork, + /// try, commit" behavior that [`ParseStream::fork`] discourages. With + /// `advance_to`, we can avoid having to parse the speculatively parsed + /// content a second time. + /// + /// This change in behavior can be implemented in syn by replacing just the + /// `Parse` implementation for `PathSegment`: + /// + /// ``` + /// # use syn::ext::IdentExt; + /// use syn::parse::discouraged::Speculative; + /// # use syn::parse::{Parse, ParseStream}; + /// # use syn::{Ident, PathArguments, Result, Token}; + /// + /// pub struct PathSegment { + /// pub ident: Ident, + /// pub arguments: PathArguments, + /// } + /// # + /// # impl From for PathSegment + /// # where + /// # T: Into, + /// # { + /// # fn from(ident: T) -> Self { + /// # PathSegment { + /// # ident: ident.into(), + /// # arguments: PathArguments::None, + /// # } + /// # } + /// # } + /// + /// impl Parse for PathSegment { + /// fn parse(input: ParseStream) -> Result { + /// if input.peek(Token![super]) + /// || input.peek(Token![self]) + /// || input.peek(Token![Self]) + /// || input.peek(Token![crate]) + /// { + /// let ident = input.call(Ident::parse_any)?; + /// return Ok(PathSegment::from(ident)); + /// } + /// + /// let ident = input.parse()?; + /// if input.peek(Token![::]) && input.peek3(Token![<]) { + /// return Ok(PathSegment { + /// ident, + /// arguments: PathArguments::AngleBracketed(input.parse()?), + /// }); + /// } + /// if input.peek(Token![<]) && !input.peek(Token![<=]) { + /// let fork = input.fork(); + /// if let Ok(arguments) = fork.parse() { + /// input.advance_to(&fork); + /// return Ok(PathSegment { + /// ident, + /// arguments: PathArguments::AngleBracketed(arguments), + /// }); + /// } + /// } + /// Ok(PathSegment::from(ident)) + /// } + /// } + /// + /// # syn::parse_str::("a").unwrap(); + /// ``` + /// + /// # Drawbacks + /// + /// The main drawback of this style of speculative parsing is in error + /// presentation. Even if the lookahead is the "correct" parse, the error + /// that is shown is that of the "fallback" parse. 
To use the same example + /// as the turbofish above, take the following unfinished "turbofish": + /// + /// ```text + /// let _ = f<&'a fn(), for<'a> serde::>(); + /// ``` + /// + /// If this is parsed as generic arguments, we can provide the error message + /// + /// ```text + /// error: expected identifier + /// --> src.rs:L:C + /// | + /// L | let _ = f<&'a fn(), for<'a> serde::>(); + /// | ^ + /// ``` + /// + /// but if parsed using the above speculative parsing, it falls back to + /// assuming that the `<` is a less-than when it fails to parse the generic + /// arguments, and tries to interpret the `&'a` as the start of a labelled + /// loop, resulting in the much less helpful error + /// + /// ```text + /// error: expected `:` + /// --> src.rs:L:C + /// | + /// L | let _ = f<&'a fn(), for<'a> serde::>(); + /// | ^^ + /// ``` + /// + /// This can be mitigated with various heuristics (two examples: show both + /// forks' parse errors, or show the one that consumed more tokens), but + /// when you can control the grammar, sticking to something that can be + /// parsed LL(3) and without the LL(*) speculative parsing this makes + /// possible, displaying reasonable errors becomes much more simple. + /// + /// [RFC 2544]: https://github.com/rust-lang/rfcs/pull/2544 + /// [`PathSegment`]: crate::PathSegment + /// + /// # Performance + /// + /// This method performs a cheap fixed amount of work that does not depend + /// on how far apart the two streams are positioned. + /// + /// # Panics + /// + /// The forked stream in the argument of `advance_to` must have been + /// obtained by forking `self`. Attempting to advance to any other stream + /// will cause a panic. + fn advance_to(&self, fork: &Self); +} + +impl<'a> Speculative for ParseBuffer<'a> { + fn advance_to(&self, fork: &Self) { + if !crate::buffer::same_scope(self.cursor(), fork.cursor()) { + panic!("fork was not derived from the advancing parse stream"); + } + + let (self_unexp, self_sp) = inner_unexpected(self); + let (fork_unexp, fork_sp) = inner_unexpected(fork); + if !Rc::ptr_eq(&self_unexp, &fork_unexp) { + match (fork_sp, self_sp) { + // Unexpected set on the fork, but not on `self`, copy it over. + (Some(span), None) => { + self_unexp.set(Unexpected::Some(span)); + } + // Unexpected unset. Use chain to propagate errors from fork. + (None, None) => { + fork_unexp.set(Unexpected::Chain(self_unexp)); + + // Ensure toplevel 'unexpected' tokens from the fork don't + // bubble up the chain by replacing the root `unexpected` + // pointer, only 'unexpected' tokens from existing group + // parsers should bubble. + fork.unexpected + .set(Some(Rc::new(Cell::new(Unexpected::None)))); + } + // Unexpected has been set on `self`. No changes needed. + (_, Some(_)) => {} + } + } + + // See comment on `cell` in the struct definition. + self.cell + .set(unsafe { mem::transmute::>(fork.cursor()) }); + } +} + +/// Extensions to the `ParseStream` API to support manipulating invisible +/// delimiters the same as if they were visible. +pub trait AnyDelimiter { + /// Returns the delimiter, the span of the delimiter token, and the nested + /// contents for further parsing. 
+ fn parse_any_delimiter(&self) -> Result<(Delimiter, DelimSpan, ParseBuffer)>; +} + +impl<'a> AnyDelimiter for ParseBuffer<'a> { + fn parse_any_delimiter(&self) -> Result<(Delimiter, DelimSpan, ParseBuffer)> { + self.step(|cursor| { + if let Some((content, delimiter, span, rest)) = cursor.any_group() { + let scope = crate::buffer::close_span_of_group(*cursor); + let nested = crate::parse::advance_step_cursor(cursor, content); + let unexpected = crate::parse::get_unexpected(self); + let content = crate::parse::new_parse_buffer(scope, nested, unexpected); + Ok(((delimiter, span, content), rest)) + } else { + Err(cursor.error("expected any delimiter")) + } + }) + } +} diff --git a/rust/hw/char/pl011/vendor/syn/src/drops.rs b/rust/hw/char/pl011/vendor/syn/src/drops.rs new file mode 100644 index 0000000000..89b42d82ef --- /dev/null +++ b/rust/hw/char/pl011/vendor/syn/src/drops.rs @@ -0,0 +1,58 @@ +use std::iter; +use std::mem::ManuallyDrop; +use std::ops::{Deref, DerefMut}; +use std::option; +use std::slice; + +#[repr(transparent)] +pub(crate) struct NoDrop(ManuallyDrop); + +impl NoDrop { + pub(crate) fn new(value: T) -> Self + where + T: TrivialDrop, + { + NoDrop(ManuallyDrop::new(value)) + } +} + +impl Deref for NoDrop { + type Target = T; + fn deref(&self) -> &Self::Target { + &self.0 + } +} + +impl DerefMut for NoDrop { + fn deref_mut(&mut self) -> &mut Self::Target { + &mut self.0 + } +} + +pub(crate) trait TrivialDrop {} + +impl TrivialDrop for iter::Empty {} +impl<'a, T> TrivialDrop for slice::Iter<'a, T> {} +impl<'a, T> TrivialDrop for slice::IterMut<'a, T> {} +impl<'a, T> TrivialDrop for option::IntoIter<&'a T> {} +impl<'a, T> TrivialDrop for option::IntoIter<&'a mut T> {} + +#[test] +fn test_needs_drop() { + use std::mem::needs_drop; + + struct NeedsDrop; + + impl Drop for NeedsDrop { + fn drop(&mut self) {} + } + + assert!(needs_drop::()); + + // Test each of the types with a handwritten TrivialDrop impl above. + assert!(!needs_drop::>()); + assert!(!needs_drop::>()); + assert!(!needs_drop::>()); + assert!(!needs_drop::>()); + assert!(!needs_drop::>()); +} diff --git a/rust/hw/char/pl011/vendor/syn/src/error.rs b/rust/hw/char/pl011/vendor/syn/src/error.rs new file mode 100644 index 0000000000..63310543a3 --- /dev/null +++ b/rust/hw/char/pl011/vendor/syn/src/error.rs @@ -0,0 +1,467 @@ +#[cfg(feature = "parsing")] +use crate::buffer::Cursor; +use crate::thread::ThreadBound; +use proc_macro2::{ + Delimiter, Group, Ident, LexError, Literal, Punct, Spacing, Span, TokenStream, TokenTree, +}; +#[cfg(feature = "printing")] +use quote::ToTokens; +use std::fmt::{self, Debug, Display}; +use std::slice; +use std::vec; + +/// The result of a Syn parser. +pub type Result = std::result::Result; + +/// Error returned when a Syn parser cannot parse the input tokens. +/// +/// # Error reporting in proc macros +/// +/// The correct way to report errors back to the compiler from a procedural +/// macro is by emitting an appropriately spanned invocation of +/// [`compile_error!`] in the generated code. This produces a better diagnostic +/// message than simply panicking the macro. +/// +/// [`compile_error!`]: std::compile_error! +/// +/// When parsing macro input, the [`parse_macro_input!`] macro handles the +/// conversion to `compile_error!` automatically. +/// +/// [`parse_macro_input!`]: crate::parse_macro_input! 
+/// +/// ``` +/// # extern crate proc_macro; +/// # +/// use proc_macro::TokenStream; +/// use syn::parse::{Parse, ParseStream, Result}; +/// use syn::{parse_macro_input, ItemFn}; +/// +/// # const IGNORE: &str = stringify! { +/// #[proc_macro_attribute] +/// # }; +/// pub fn my_attr(args: TokenStream, input: TokenStream) -> TokenStream { +/// let args = parse_macro_input!(args as MyAttrArgs); +/// let input = parse_macro_input!(input as ItemFn); +/// +/// /* ... */ +/// # TokenStream::new() +/// } +/// +/// struct MyAttrArgs { +/// # _k: [(); { stringify! { +/// ... +/// # }; 0 }] +/// } +/// +/// impl Parse for MyAttrArgs { +/// fn parse(input: ParseStream) -> Result { +/// # stringify! { +/// ... +/// # }; +/// # unimplemented!() +/// } +/// } +/// ``` +/// +/// For errors that arise later than the initial parsing stage, the +/// [`.to_compile_error()`] or [`.into_compile_error()`] methods can be used to +/// perform an explicit conversion to `compile_error!`. +/// +/// [`.to_compile_error()`]: Error::to_compile_error +/// [`.into_compile_error()`]: Error::into_compile_error +/// +/// ``` +/// # extern crate proc_macro; +/// # +/// # use proc_macro::TokenStream; +/// # use syn::{parse_macro_input, DeriveInput}; +/// # +/// # const IGNORE: &str = stringify! { +/// #[proc_macro_derive(MyDerive)] +/// # }; +/// pub fn my_derive(input: TokenStream) -> TokenStream { +/// let input = parse_macro_input!(input as DeriveInput); +/// +/// // fn(DeriveInput) -> syn::Result +/// expand::my_derive(input) +/// .unwrap_or_else(syn::Error::into_compile_error) +/// .into() +/// } +/// # +/// # mod expand { +/// # use proc_macro2::TokenStream; +/// # use syn::{DeriveInput, Result}; +/// # +/// # pub fn my_derive(input: DeriveInput) -> Result { +/// # unimplemented!() +/// # } +/// # } +/// ``` +pub struct Error { + messages: Vec, +} + +struct ErrorMessage { + // Span is implemented as an index into a thread-local interner to keep the + // size small. It is not safe to access from a different thread. We want + // errors to be Send and Sync to play nicely with ecosystem crates for error + // handling, so pin the span we're given to its original thread and assume + // it is Span::call_site if accessed from any other thread. + span: ThreadBound, + message: String, +} + +// Cannot use std::ops::Range because that does not implement Copy, +// whereas ThreadBound requires a Copy impl as a way to ensure no Drop impls +// are involved. +struct SpanRange { + start: Span, + end: Span, +} + +#[cfg(test)] +struct _Test +where + Error: Send + Sync; + +impl Error { + /// Usually the [`ParseStream::error`] method will be used instead, which + /// automatically uses the correct span from the current position of the + /// parse stream. + /// + /// Use `Error::new` when the error needs to be triggered on some span other + /// than where the parse stream is currently positioned. + /// + /// [`ParseStream::error`]: crate::parse::ParseBuffer::error + /// + /// # Example + /// + /// ``` + /// use syn::{Error, Ident, LitStr, Result, Token}; + /// use syn::parse::ParseStream; + /// + /// // Parses input that looks like `name = "string"` where the key must be + /// // the identifier `name` and the value may be any string literal. + /// // Returns the string literal. + /// fn parse_name(input: ParseStream) -> Result { + /// let name_token: Ident = input.parse()?; + /// if name_token != "name" { + /// // Trigger an error not on the current position of the stream, + /// // but on the position of the unexpected identifier. 
+ /// return Err(Error::new(name_token.span(), "expected `name`")); + /// } + /// input.parse::()?; + /// let s: LitStr = input.parse()?; + /// Ok(s) + /// } + /// ``` + pub fn new(span: Span, message: T) -> Self { + return new(span, message.to_string()); + + fn new(span: Span, message: String) -> Error { + Error { + messages: vec![ErrorMessage { + span: ThreadBound::new(SpanRange { + start: span, + end: span, + }), + message, + }], + } + } + } + + /// Creates an error with the specified message spanning the given syntax + /// tree node. + /// + /// Unlike the `Error::new` constructor, this constructor takes an argument + /// `tokens` which is a syntax tree node. This allows the resulting `Error` + /// to attempt to span all tokens inside of `tokens`. While you would + /// typically be able to use the `Spanned` trait with the above `Error::new` + /// constructor, implementation limitations today mean that + /// `Error::new_spanned` may provide a higher-quality error message on + /// stable Rust. + /// + /// When in doubt it's recommended to stick to `Error::new` (or + /// `ParseStream::error`)! + #[cfg(feature = "printing")] + #[cfg_attr(docsrs, doc(cfg(feature = "printing")))] + pub fn new_spanned(tokens: T, message: U) -> Self { + return new_spanned(tokens.into_token_stream(), message.to_string()); + + fn new_spanned(tokens: TokenStream, message: String) -> Error { + let mut iter = tokens.into_iter(); + let start = iter.next().map_or_else(Span::call_site, |t| t.span()); + let end = iter.last().map_or(start, |t| t.span()); + Error { + messages: vec![ErrorMessage { + span: ThreadBound::new(SpanRange { start, end }), + message, + }], + } + } + } + + /// The source location of the error. + /// + /// Spans are not thread-safe so this function returns `Span::call_site()` + /// if called from a different thread than the one on which the `Error` was + /// originally created. + pub fn span(&self) -> Span { + let SpanRange { start, end } = match self.messages[0].span.get() { + Some(span) => *span, + None => return Span::call_site(), + }; + start.join(end).unwrap_or(start) + } + + /// Render the error as an invocation of [`compile_error!`]. + /// + /// The [`parse_macro_input!`] macro provides a convenient way to invoke + /// this method correctly in a procedural macro. + /// + /// [`compile_error!`]: std::compile_error! + /// [`parse_macro_input!`]: crate::parse_macro_input! + pub fn to_compile_error(&self) -> TokenStream { + self.messages + .iter() + .map(ErrorMessage::to_compile_error) + .collect() + } + + /// Render the error as an invocation of [`compile_error!`]. + /// + /// [`compile_error!`]: std::compile_error! + /// + /// # Example + /// + /// ``` + /// # extern crate proc_macro; + /// # + /// use proc_macro::TokenStream; + /// use syn::{parse_macro_input, DeriveInput, Error}; + /// + /// # const _: &str = stringify! { + /// #[proc_macro_derive(MyTrait)] + /// # }; + /// pub fn derive_my_trait(input: TokenStream) -> TokenStream { + /// let input = parse_macro_input!(input as DeriveInput); + /// my_trait::expand(input) + /// .unwrap_or_else(Error::into_compile_error) + /// .into() + /// } + /// + /// mod my_trait { + /// use proc_macro2::TokenStream; + /// use syn::{DeriveInput, Result}; + /// + /// pub(crate) fn expand(input: DeriveInput) -> Result { + /// /* ... 
*/ + /// # unimplemented!() + /// } + /// } + /// ``` + pub fn into_compile_error(self) -> TokenStream { + self.to_compile_error() + } + + /// Add another error message to self such that when `to_compile_error()` is + /// called, both errors will be emitted together. + pub fn combine(&mut self, another: Error) { + self.messages.extend(another.messages); + } +} + +impl ErrorMessage { + fn to_compile_error(&self) -> TokenStream { + let (start, end) = match self.span.get() { + Some(range) => (range.start, range.end), + None => (Span::call_site(), Span::call_site()), + }; + + // ::core::compile_error!($message) + TokenStream::from_iter([ + TokenTree::Punct({ + let mut punct = Punct::new(':', Spacing::Joint); + punct.set_span(start); + punct + }), + TokenTree::Punct({ + let mut punct = Punct::new(':', Spacing::Alone); + punct.set_span(start); + punct + }), + TokenTree::Ident(Ident::new("core", start)), + TokenTree::Punct({ + let mut punct = Punct::new(':', Spacing::Joint); + punct.set_span(start); + punct + }), + TokenTree::Punct({ + let mut punct = Punct::new(':', Spacing::Alone); + punct.set_span(start); + punct + }), + TokenTree::Ident(Ident::new("compile_error", start)), + TokenTree::Punct({ + let mut punct = Punct::new('!', Spacing::Alone); + punct.set_span(start); + punct + }), + TokenTree::Group({ + let mut group = Group::new(Delimiter::Brace, { + TokenStream::from_iter([TokenTree::Literal({ + let mut string = Literal::string(&self.message); + string.set_span(end); + string + })]) + }); + group.set_span(end); + group + }), + ]) + } +} + +#[cfg(feature = "parsing")] +pub(crate) fn new_at(scope: Span, cursor: Cursor, message: T) -> Error { + if cursor.eof() { + Error::new(scope, format!("unexpected end of input, {}", message)) + } else { + let span = crate::buffer::open_span_of_group(cursor); + Error::new(span, message) + } +} + +#[cfg(all(feature = "parsing", any(feature = "full", feature = "derive")))] +pub(crate) fn new2(start: Span, end: Span, message: T) -> Error { + return new2(start, end, message.to_string()); + + fn new2(start: Span, end: Span, message: String) -> Error { + Error { + messages: vec![ErrorMessage { + span: ThreadBound::new(SpanRange { start, end }), + message, + }], + } + } +} + +impl Debug for Error { + fn fmt(&self, formatter: &mut fmt::Formatter) -> fmt::Result { + if self.messages.len() == 1 { + formatter + .debug_tuple("Error") + .field(&self.messages[0]) + .finish() + } else { + formatter + .debug_tuple("Error") + .field(&self.messages) + .finish() + } + } +} + +impl Debug for ErrorMessage { + fn fmt(&self, formatter: &mut fmt::Formatter) -> fmt::Result { + Debug::fmt(&self.message, formatter) + } +} + +impl Display for Error { + fn fmt(&self, formatter: &mut fmt::Formatter) -> fmt::Result { + formatter.write_str(&self.messages[0].message) + } +} + +impl Clone for Error { + fn clone(&self) -> Self { + Error { + messages: self.messages.clone(), + } + } +} + +impl Clone for ErrorMessage { + fn clone(&self) -> Self { + ErrorMessage { + span: self.span, + message: self.message.clone(), + } + } +} + +impl Clone for SpanRange { + fn clone(&self) -> Self { + *self + } +} + +impl Copy for SpanRange {} + +impl std::error::Error for Error {} + +impl From for Error { + fn from(err: LexError) -> Self { + Error::new(err.span(), err) + } +} + +impl IntoIterator for Error { + type Item = Error; + type IntoIter = IntoIter; + + fn into_iter(self) -> Self::IntoIter { + IntoIter { + messages: self.messages.into_iter(), + } + } +} + +pub struct IntoIter { + messages: 
vec::IntoIter, +} + +impl Iterator for IntoIter { + type Item = Error; + + fn next(&mut self) -> Option { + Some(Error { + messages: vec![self.messages.next()?], + }) + } +} + +impl<'a> IntoIterator for &'a Error { + type Item = Error; + type IntoIter = Iter<'a>; + + fn into_iter(self) -> Self::IntoIter { + Iter { + messages: self.messages.iter(), + } + } +} + +pub struct Iter<'a> { + messages: slice::Iter<'a, ErrorMessage>, +} + +impl<'a> Iterator for Iter<'a> { + type Item = Error; + + fn next(&mut self) -> Option { + Some(Error { + messages: vec![self.messages.next()?.clone()], + }) + } +} + +impl Extend for Error { + fn extend>(&mut self, iter: T) { + for err in iter { + self.combine(err); + } + } +} diff --git a/rust/hw/char/pl011/vendor/syn/src/export.rs b/rust/hw/char/pl011/vendor/syn/src/export.rs new file mode 100644 index 0000000000..b9ea5c747b --- /dev/null +++ b/rust/hw/char/pl011/vendor/syn/src/export.rs @@ -0,0 +1,73 @@ +#[doc(hidden)] +pub use std::clone::Clone; +#[doc(hidden)] +pub use std::cmp::{Eq, PartialEq}; +#[doc(hidden)] +pub use std::concat; +#[doc(hidden)] +pub use std::default::Default; +#[doc(hidden)] +pub use std::fmt::Debug; +#[doc(hidden)] +pub use std::hash::{Hash, Hasher}; +#[doc(hidden)] +pub use std::marker::Copy; +#[doc(hidden)] +pub use std::option::Option::{None, Some}; +#[doc(hidden)] +pub use std::result::Result::{Err, Ok}; +#[doc(hidden)] +pub use std::stringify; + +#[doc(hidden)] +pub type Formatter<'a> = std::fmt::Formatter<'a>; +#[doc(hidden)] +pub type FmtResult = std::fmt::Result; + +#[doc(hidden)] +pub type bool = std::primitive::bool; +#[doc(hidden)] +pub type str = std::primitive::str; + +#[cfg(feature = "printing")] +#[doc(hidden)] +pub use quote; + +#[doc(hidden)] +pub type Span = proc_macro2::Span; +#[doc(hidden)] +pub type TokenStream2 = proc_macro2::TokenStream; + +#[cfg(feature = "parsing")] +#[doc(hidden)] +pub use crate::group::{parse_braces, parse_brackets, parse_parens}; + +#[doc(hidden)] +pub use crate::span::IntoSpans; + +#[cfg(all(feature = "parsing", feature = "printing"))] +#[doc(hidden)] +pub use crate::parse_quote::parse as parse_quote; + +#[cfg(feature = "parsing")] +#[doc(hidden)] +pub use crate::token::parsing::{peek_punct, punct as parse_punct}; + +#[cfg(feature = "printing")] +#[doc(hidden)] +pub use crate::token::printing::punct as print_punct; + +#[cfg(feature = "parsing")] +#[doc(hidden)] +pub use crate::token::private::CustomToken; + +#[cfg(feature = "proc-macro")] +#[doc(hidden)] +pub type TokenStream = proc_macro::TokenStream; + +#[cfg(feature = "printing")] +#[doc(hidden)] +pub use quote::{ToTokens, TokenStreamExt}; + +#[doc(hidden)] +pub struct private(pub(crate) ()); diff --git a/rust/hw/char/pl011/vendor/syn/src/expr.rs b/rust/hw/char/pl011/vendor/syn/src/expr.rs new file mode 100644 index 0000000000..c60bcf4771 --- /dev/null +++ b/rust/hw/char/pl011/vendor/syn/src/expr.rs @@ -0,0 +1,3960 @@ +use crate::attr::Attribute; +#[cfg(all(feature = "parsing", feature = "full"))] +use crate::error::Result; +#[cfg(feature = "full")] +use crate::generics::BoundLifetimes; +use crate::ident::Ident; +#[cfg(feature = "full")] +use crate::lifetime::Lifetime; +use crate::lit::Lit; +use crate::mac::Macro; +use crate::op::{BinOp, UnOp}; +#[cfg(all(feature = "parsing", feature = "full"))] +use crate::parse::ParseStream; +#[cfg(feature = "full")] +use crate::pat::Pat; +use crate::path::{AngleBracketedGenericArguments, Path, QSelf}; +use crate::punctuated::Punctuated; +#[cfg(feature = "full")] +use crate::stmt::Block; +use 
crate::token; +#[cfg(feature = "full")] +use crate::ty::ReturnType; +use crate::ty::Type; +use proc_macro2::{Span, TokenStream}; +#[cfg(feature = "printing")] +use quote::IdentFragment; +#[cfg(feature = "printing")] +use std::fmt::{self, Display}; +use std::hash::{Hash, Hasher}; +#[cfg(all(feature = "parsing", feature = "full"))] +use std::mem; + +ast_enum_of_structs! { + /// A Rust expression. + /// + /// *This type is available only if Syn is built with the `"derive"` or `"full"` + /// feature, but most of the variants are not available unless "full" is enabled.* + /// + /// # Syntax tree enums + /// + /// This type is a syntax tree enum. In Syn this and other syntax tree enums + /// are designed to be traversed using the following rebinding idiom. + /// + /// ``` + /// # use syn::Expr; + /// # + /// # fn example(expr: Expr) { + /// # const IGNORE: &str = stringify! { + /// let expr: Expr = /* ... */; + /// # }; + /// match expr { + /// Expr::MethodCall(expr) => { + /// /* ... */ + /// } + /// Expr::Cast(expr) => { + /// /* ... */ + /// } + /// Expr::If(expr) => { + /// /* ... */ + /// } + /// + /// /* ... */ + /// # _ => {} + /// # } + /// # } + /// ``` + /// + /// We begin with a variable `expr` of type `Expr` that has no fields + /// (because it is an enum), and by matching on it and rebinding a variable + /// with the same name `expr` we effectively imbue our variable with all of + /// the data fields provided by the variant that it turned out to be. So for + /// example above if we ended up in the `MethodCall` case then we get to use + /// `expr.receiver`, `expr.args` etc; if we ended up in the `If` case we get + /// to use `expr.cond`, `expr.then_branch`, `expr.else_branch`. + /// + /// This approach avoids repeating the variant names twice on every line. + /// + /// ``` + /// # use syn::{Expr, ExprMethodCall}; + /// # + /// # fn example(expr: Expr) { + /// // Repetitive; recommend not doing this. + /// match expr { + /// Expr::MethodCall(ExprMethodCall { method, args, .. }) => { + /// # } + /// # _ => {} + /// # } + /// # } + /// ``` + /// + /// In general, the name to which a syntax tree enum variant is bound should + /// be a suitable name for the complete syntax tree enum type. + /// + /// ``` + /// # use syn::{Expr, ExprField}; + /// # + /// # fn example(discriminant: ExprField) { + /// // Binding is called `base` which is the name I would use if I were + /// // assigning `*discriminant.base` without an `if let`. + /// if let Expr::Tuple(base) = *discriminant.base { + /// # } + /// # } + /// ``` + /// + /// A sign that you may not be choosing the right variable names is if you + /// see names getting repeated in your code, like accessing + /// `receiver.receiver` or `pat.pat` or `cond.cond`. + #[cfg_attr(docsrs, doc(cfg(any(feature = "full", feature = "derive"))))] + #[non_exhaustive] + pub enum Expr { + /// A slice literal expression: `[a, b, c, d]`. + Array(ExprArray), + + /// An assignment expression: `a = compute()`. + Assign(ExprAssign), + + /// An async block: `async { ... }`. + Async(ExprAsync), + + /// An await expression: `fut.await`. + Await(ExprAwait), + + /// A binary operation: `a + b`, `a += b`. + Binary(ExprBinary), + + /// A blocked scope: `{ ... }`. + Block(ExprBlock), + + /// A `break`, with an optional label to break and an optional + /// expression. + Break(ExprBreak), + + /// A function call expression: `invoke(a, b)`. + Call(ExprCall), + + /// A cast expression: `foo as f64`. + Cast(ExprCast), + + /// A closure expression: `|a, b| a + b`. 
+ Closure(ExprClosure), + + /// A const block: `const { ... }`. + Const(ExprConst), + + /// A `continue`, with an optional label. + Continue(ExprContinue), + + /// Access of a named struct field (`obj.k`) or unnamed tuple struct + /// field (`obj.0`). + Field(ExprField), + + /// A for loop: `for pat in expr { ... }`. + ForLoop(ExprForLoop), + + /// An expression contained within invisible delimiters. + /// + /// This variant is important for faithfully representing the precedence + /// of expressions and is related to `None`-delimited spans in a + /// `TokenStream`. + Group(ExprGroup), + + /// An `if` expression with an optional `else` block: `if expr { ... } + /// else { ... }`. + /// + /// The `else` branch expression may only be an `If` or `Block` + /// expression, not any of the other types of expression. + If(ExprIf), + + /// A square bracketed indexing expression: `vector[2]`. + Index(ExprIndex), + + /// The inferred value of a const generic argument, denoted `_`. + Infer(ExprInfer), + + /// A `let` guard: `let Some(x) = opt`. + Let(ExprLet), + + /// A literal in place of an expression: `1`, `"foo"`. + Lit(ExprLit), + + /// Conditionless loop: `loop { ... }`. + Loop(ExprLoop), + + /// A macro invocation expression: `format!("{}", q)`. + Macro(ExprMacro), + + /// A `match` expression: `match n { Some(n) => {}, None => {} }`. + Match(ExprMatch), + + /// A method call expression: `x.foo::(a, b)`. + MethodCall(ExprMethodCall), + + /// A parenthesized expression: `(a + b)`. + Paren(ExprParen), + + /// A path like `std::mem::replace` possibly containing generic + /// parameters and a qualified self-type. + /// + /// A plain identifier like `x` is a path of length 1. + Path(ExprPath), + + /// A range expression: `1..2`, `1..`, `..2`, `1..=2`, `..=2`. + Range(ExprRange), + + /// A referencing operation: `&a` or `&mut a`. + Reference(ExprReference), + + /// An array literal constructed from one repeated element: `[0u8; N]`. + Repeat(ExprRepeat), + + /// A `return`, with an optional value to be returned. + Return(ExprReturn), + + /// A struct literal expression: `Point { x: 1, y: 1 }`. + /// + /// The `rest` provides the value of the remaining fields as in `S { a: + /// 1, b: 1, ..rest }`. + Struct(ExprStruct), + + /// A try-expression: `expr?`. + Try(ExprTry), + + /// A try block: `try { ... }`. + TryBlock(ExprTryBlock), + + /// A tuple expression: `(a, b, c, d)`. + Tuple(ExprTuple), + + /// A unary operation: `!x`, `*x`. + Unary(ExprUnary), + + /// An unsafe block: `unsafe { ... }`. + Unsafe(ExprUnsafe), + + /// Tokens in expression position not interpreted by Syn. + Verbatim(TokenStream), + + /// A while loop: `while expr { ... }`. + While(ExprWhile), + + /// A yield expression: `yield expr`. + Yield(ExprYield), + + // For testing exhaustiveness in downstream code, use the following idiom: + // + // match expr { + // #![cfg_attr(test, deny(non_exhaustive_omitted_patterns))] + // + // Expr::Array(expr) => {...} + // Expr::Assign(expr) => {...} + // ... + // Expr::Yield(expr) => {...} + // + // _ => { /* some sane fallback */ } + // } + // + // This way we fail your tests but don't break your library when adding + // a variant. You will be notified by a test failure when a variant is + // added, so that you can add code to handle it, but your library will + // continue to compile and work for downstream users in the interim. + } +} + +ast_struct! { + /// A slice literal expression: `[a, b, c, d]`. 
+    #[cfg_attr(docsrs, doc(cfg(feature = "full")))]
+    pub struct ExprArray #full {
+        pub attrs: Vec<Attribute>,
+        pub bracket_token: token::Bracket,
+        pub elems: Punctuated<Expr, Token![,]>,
+    }
+}
+
+ast_struct! {
+    /// An assignment expression: `a = compute()`.
+    #[cfg_attr(docsrs, doc(cfg(feature = "full")))]
+    pub struct ExprAssign #full {
+        pub attrs: Vec<Attribute>,
+        pub left: Box<Expr>,
+        pub eq_token: Token![=],
+        pub right: Box<Expr>,
+    }
+}
+
+ast_struct! {
+    /// An async block: `async { ... }`.
+    #[cfg_attr(docsrs, doc(cfg(feature = "full")))]
+    pub struct ExprAsync #full {
+        pub attrs: Vec<Attribute>,
+        pub async_token: Token![async],
+        pub capture: Option<Token![move]>,
+        pub block: Block,
+    }
+}
+
+ast_struct! {
+    /// An await expression: `fut.await`.
+    #[cfg_attr(docsrs, doc(cfg(feature = "full")))]
+    pub struct ExprAwait #full {
+        pub attrs: Vec<Attribute>,
+        pub base: Box<Expr>,
+        pub dot_token: Token![.],
+        pub await_token: Token![await],
+    }
+}
+
+ast_struct! {
+    /// A binary operation: `a + b`, `a += b`.
+    #[cfg_attr(docsrs, doc(cfg(any(feature = "full", feature = "derive"))))]
+    pub struct ExprBinary {
+        pub attrs: Vec<Attribute>,
+        pub left: Box<Expr>,
+        pub op: BinOp,
+        pub right: Box<Expr>,
+    }
+}
+
+ast_struct! {
+    /// A blocked scope: `{ ... }`.
+    #[cfg_attr(docsrs, doc(cfg(feature = "full")))]
+    pub struct ExprBlock #full {
+        pub attrs: Vec<Attribute>,
+        pub label: Option<Label>,