rand-0.8.4/.cargo_vcs_info.json
{ "git": { "sha1": "8792268dfe57e49bb4518190bf4fe66176759a44" } }

rand-0.8.4/CHANGELOG.md
# Changelog
All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/)
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

A [separate changelog is kept for rand_core](rand_core/CHANGELOG.md).

You may also find the [Upgrade Guide](https://rust-random.github.io/book/update.html) useful.

## [0.8.4] - 2021-06-15
### Additions
- Use const-generics to support arrays of all sizes (#1104)
- Implement `Clone` and `Copy` for `Alphanumeric` (#1126)
- Add `Distribution::map` to derive a distribution using a closure (#1129)
- Add `Slice` distribution (#1107)
- Add `DistString` trait with impls for `Standard` and `Alphanumeric` (#1133)

### Other
- Reorder asserts in `Uniform` float distributions for easier debugging of non-finite arguments (#1094, #1108)
- Add range overflow check in `Uniform` float distributions (#1108)
- Deprecate `rngs::adapter::ReadRng` (#1130)

## [0.8.3] - 2021-01-25
### Fixes
- Fix `no-std` + `alloc` build by gating `choose_multiple_weighted` on `std` (#1088)

## [0.8.2] - 2021-01-12
### Fixes
- Fix panic in `UniformInt::sample_single_inclusive` and `Rng::gen_range` when providing a full integer range (eg `0..=MAX`) (#1087)

## [0.8.1] - 2020-12-31
### Other
- Enable all stable features in the playground (#1081)

## [0.8.0] - 2020-12-18
### Platform support
- The minimum supported Rust version is now 1.36 (#1011)
- `getrandom` updated to v0.2 (#1041)
- Remove `wasm-bindgen` and `stdweb` feature flags. For details of WASM support, see the [getrandom documentation](https://docs.rs/getrandom/latest).
(#948)
- `ReadRng::next_u32` and `next_u64` now use little-endian conversion instead of native-endian, affecting results on big-endian platforms (#1061)
- The `nightly` feature no longer implies the `simd_support` feature (#1048)
- Fix `simd_support` feature to work on current nightlies (#1056)

### Rngs
- `ThreadRng` is no longer `Copy` to enable safe usage within thread-local destructors (#1035)
- `gen_range(a, b)` was replaced with `gen_range(a..b)`. `gen_range(a..=b)` is also supported. Note that `a` and `b` can no longer be references or SIMD types. (#744, #1003)
- Replace `AsByteSliceMut` with `Fill` and add support for `[bool], [char], [f32], [f64]` (#940)
- Restrict `rand::rngs::adapter` to `std` (#1027; see also #928)
- `StdRng`: add new `std_rng` feature flag (enabled by default, but might need to be used if disabling default crate features) (#948)
- `StdRng`: Switch from ChaCha20 to ChaCha12 for better performance (#1028)
- `SmallRng`: Replace PCG algorithm with xoshiro{128,256}++ (#1038)

### Sequences
- Add `IteratorRandom::choose_stable` as an alternative to `choose` which does not depend on size hints (#1057)
- Improve accuracy and performance of `IteratorRandom::choose` (#1059)
- Implement `IntoIterator` for `IndexVec`, replacing the `into_iter` method (#1007)
- Add value stability tests for `seq` module (#933)

### Misc
- Support `PartialEq` and `Eq` for `StdRng`, `SmallRng` and `StepRng` (#979)
- Added a `serde1` feature and added Serialize/Deserialize to `UniformInt` and `WeightedIndex` (#974)
- Drop some unsafe code (#962, #963, #1011)
- Reduce packaged crate size (#983)
- Migrate to GitHub Actions from Travis+AppVeyor (#1073)

### Distributions
- `Alphanumeric` samples bytes instead of chars (#935)
- `Uniform` now supports `char`, enabling `rng.gen_range('A'..='Z')` (#1068)
- Add `UniformSampler::sample_single_inclusive` (#1003)

#### Weighted sampling
- Implement weighted sampling without replacement (#976, #1013)
-
`rand::distributions::alias_method::WeightedIndex` was moved to `rand_distr::WeightedAliasIndex`. The simpler alternative `rand::distributions::WeightedIndex` remains. (#945)
- Improve treatment of rounding errors in `WeightedIndex::update_weights` (#956)
- `WeightedIndex`: return error on NaN instead of panic (#1005)

### Documentation
- Document types supported by `random` (#994)
- Document notes on password generation (#995)
- Note that `SmallRng` may not be the best choice for performance and in some other cases (#1038)
- Use `doc(cfg)` to annotate feature-gated items (#1019)
- Adjust README (#1065)

## [0.7.3] - 2020-01-10
### Fixes
- The `Bernoulli` distribution constructors now report an error on NaN and on `denominator == 0` (#925)
- Use `std::sync::Once` to register fork handler, avoiding possible atomicity violation (#928)
- Fix documentation on the precision of generated floating-point values

### Changes
- Unix: make libc dependency optional; only use fork protection with std feature (#928)

### Additions
- Implement `std::error::Error` for `BernoulliError` (#919)

## [0.7.2] - 2019-09-16
### Fixes
- Fix dependency on `rand_core` 0.5.1 (#890)

### Additions
- Unit tests for value stability of distributions added (#888)

## [0.7.1] - 2019-09-13
### Yanked
This release was yanked since it depends on `rand_core::OsRng` added in 0.5.1 but specifies a dependency on version 0.5.0 (#890), causing broken builds when updating from `rand 0.7.0` without also updating `rand_core`.

### Fixes
- Fix `no_std` behaviour, appropriately enable c2-chacha's `std` feature (#844)
- `alloc` feature in `no_std` is available since Rust 1.36 (#856)
- Fix or squelch issues from Clippy lints (#840)

### Additions
- Add a `no_std` target to CI to continuously evaluate `no_std` status (#844)
- `WeightedIndex`: allow adjusting a sub-set of weights (#866)

## [0.7.0] - 2019-06-28
### Fixes
- Fix incorrect pointer usages revealed by Miri testing (#780, #781)
- Fix (tiny!)
bias in `Uniform` for 8- and 16-bit ints (#809)

### Crate
- Bumped MSRV (min supported Rust version) to 1.32.0
- Updated to Rust Edition 2018 (#823, #824)
- Removed dependence on `rand_xorshift`, `rand_isaac`, `rand_jitter` crates (#759, #765)
- Remove dependency on `winapi` (#724)
- Removed all `build.rs` files (#824)
- Removed code already deprecated in version 0.6 (#757)
- Removed the serde1 feature (It's still available for backwards compatibility, but it does not do anything. #830)
- Many documentation changes

### rand_core
- Updated to `rand_core` 0.5.0
- `Error` type redesigned with new API (#800)
- Move `from_entropy` method to `SeedableRng` and remove `FromEntropy` (#800)
- `SeedableRng::from_rng` is now expected to be value-stable (#815)

### Standard RNGs
- OS interface moved from `rand_os` to new `getrandom` crate (#765, [getrandom](https://github.com/rust-random/getrandom))
- Use ChaCha for `StdRng` and `ThreadRng` (#792)
- Feature-gate `SmallRng` (#792)
- `ThreadRng` now supports `Copy` (#758)
- Deprecated `EntropyRng` (#765)
- Enable fork protection of ReseedingRng without `std` (#724)

### Distributions
- Many distributions have been moved to `rand_distr` (#761)
- `Bernoulli::new` constructor now returns a `Result` (#803)
- `Distribution::sample_iter` adjusted for more flexibility (#758)
- Added `distributions::weighted::alias_method::WeightedIndex` for `O(1)` sampling (#692)
- Support sampling `NonZeroU*` types with the `Standard` distribution (#728)
- Optimised `Binomial` distribution sampling (#735, #740, #752)
- Optimised SIMD float sampling (#739)

### Sequences
- Make results portable across 32- and 64-bit by using `u32` samples for `usize` where possible (#809)

## [0.6.5] - 2019-01-28
### Crates
- Update `rand_core` to 0.4 (#703)
- Move `JitterRng` to its own crate (#685)
- Add a wasm-bindgen test crate (#696)

### Platforms
- Fuchsia: Replaced fuchsia-zircon with fuchsia-cprng

### Doc
- Use RFC 1946 for doc links (#691)
- Fix some doc links and
notes (#711)

## [0.6.4] - 2019-01-08
### Fixes
- Move wasm-bindgen shims to correct crate (#686)
- Make `wasm32-unknown-unknown` compile but fail at run-time if missing bindings (#686)

## [0.6.3] - 2019-01-04
### Fixes
- Make the `std` feature require the optional `rand_os` dependency (#675)
- Re-export the optional WASM dependencies of `rand_os` from `rand` to avoid breakage (#674)

## [0.6.2] - 2019-01-04
### Additions
- Add `Default` for `ThreadRng` (#657)
- Move `rngs::OsRng` to `rand_os` sub-crate; clean up code; use as dependency (#643)
- Add `rand_xoshiro` sub-crate, plus benchmarks (#642, #668)

### Fixes
- Fix bias in `UniformInt::sample_single` (#662)
- Use `autocfg` instead of `rustc_version` for rustc version detection (#664)
- Disable `i128` and `u128` if the `target_os` is `emscripten` (#671: work-around Emscripten limitation)
- CI fixes (#660, #671)

### Optimisations
- Optimise memory usage of `UnitCircle` and `UnitSphereSurface` distributions (no PR)

## [0.6.1] - 2018-11-22
- Support sampling `Duration` also for `no_std` (only since Rust 1.25) (#649)
- Disable default features of `libc` (#647)

## [0.6.0] - 2018-11-14
### Project organisation
- Rand has moved from [rust-lang-nursery](https://github.com/rust-lang-nursery/rand) to [rust-random](https://github.com/rust-random/rand)! (#578)
- Created [The Rust Random Book](https://rust-random.github.io/book/) ([source](https://github.com/rust-random/book))
- Update copyright and licence notices (#591, #611)
- Migrate policy documentation from the wiki (#544)

### Platforms
- Add fork protection on Unix (#466)
- Added support for wasm-bindgen.
(#541, #559, #562, #600)
- Enable `OsRng` for powerpc64, sparc and sparc64 (#609)
- Use `syscall` from `libc` on Linux instead of redefining it (#629)

### RNGs
- Switch `SmallRng` to use PCG (#623)
- Implement `Pcg32` and `Pcg64Mcg` generators (#632)
- Move ISAAC RNGs to a dedicated crate (#551)
- Move Xorshift RNG to its own crate (#557)
- Move ChaCha and HC128 RNGs to dedicated crates (#607, #636)
- Remove usage of `Rc` from `ThreadRng` (#615)

### Sampling and distributions
- Implement `Rng.gen_ratio()` and `Bernoulli::new_ratio()` (#491)
- Make `Uniform` strictly respect `f32` / `f64` high/low bounds (#477)
- Allow `gen_range` and `Uniform` to work on non-`Copy` types (#506)
- `Uniform` supports inclusive ranges: `Uniform::from(a..=b)`. This is automatically enabled for Rust >= 1.27. (#566)
- Implement `TrustedLen` and `FusedIterator` for `DistIter` (#620)

#### New distributions
- Add the `Dirichlet` distribution (#485)
- Added sampling from the unit sphere and circle. (#567)
- Implement the triangular distribution (#575)
- Implement the Weibull distribution (#576)
- Implement the Beta distribution (#574)

#### Optimisations
- Optimise `Bernoulli::new` (#500)
- Optimise `char` sampling (#519)
- Optimise sampling of `std::time::Duration` (#583)

### Sequences
- Redesign the `seq` module (#483, #515)
- Add `WeightedIndex` and `choose_weighted` (#518, #547)
- Optimised and changed return type of the `sample_indices` function. (#479)
- Use `Iterator::size_hint()` to speed up `IteratorRandom::choose` (#593)

### SIMD
- Support for generating SIMD types (#523, #542, #561, #630)

### Other
- Revise CI scripts (#632, #635)
- Remove functionality already deprecated in 0.5 (#499)
- Support for `i128` and `u128` is automatically enabled for Rust >= 1.26. This renders the `i128_support` feature obsolete. It still exists for backwards compatibility but does not have any effect. This breaks programs using Rand with `i128_support` on nightlies older than Rust 1.26.
(#571)

## [0.5.5] - 2018-08-07
### Documentation
- Fix links in documentation (#582)

## [0.5.4] - 2018-07-11
### Platform support
- Make `OsRng` work via WASM/stdweb for WebWorkers

## [0.5.3] - 2018-06-26
### Platform support
- OpenBSD, Bitrig: fix compilation (broken in 0.5.1) (#530)

## [0.5.2] - 2018-06-18
### Platform support
- Hide `OsRng` and `JitterRng` on unsupported platforms (#512; fixes #503)

## [0.5.1] - 2018-06-08
### New distributions
- Added Cauchy distribution. (#474, #486)
- Added Pareto distribution. (#495)

### Platform support and `OsRng`
- Remove blanket Unix implementation. (#484)
- Remove Wasm unimplemented stub. (#484)
- Dragonfly BSD: read from `/dev/random`. (#484)
- Bitrig: use `getentropy` like OpenBSD. (#484)
- Solaris: (untested) use `getrandom` if available, otherwise `/dev/random`. (#484)
- Emscripten, `stdweb`: split the read up in chunks. (#484)
- Emscripten, Haiku: don't do an extra blocking read from `/dev/random`. (#484)
- Linux, NetBSD, Solaris: read in blocking mode on first use in `fill_bytes`. (#484)
- Fuchsia, CloudABI: fix compilation (broken in Rand 0.5). (#484)

## [0.5.0] - 2018-05-21
### Crate features and organisation
- Minimum Rust version update: 1.22.0. (#239)
- Create a separate `rand_core` crate. (#288)
- Deprecate `rand_derive`. (#256)
- Add `prelude` (and module reorganisation). (#435)
- Add `log` feature. Logging is now available in `JitterRng`, `OsRng`, `EntropyRng` and `ReseedingRng`. (#246)
- Add `serde1` feature for some PRNGs. (#189)
- `stdweb` feature for `OsRng` support on WASM via stdweb. (#272, #336)

### `Rng` trait
- Split `Rng` in `RngCore` and `Rng` extension trait. `next_u32`, `next_u64` and `fill_bytes` are now part of `RngCore`. (#265)
- Add `Rng::sample`. (#256)
- Deprecate `Rng::gen_weighted_bool`. (#308)
- Add `Rng::gen_bool`. (#308)
- Remove `Rng::next_f32` and `Rng::next_f64`. (#273)
- Add optimized `Rng::fill` and `Rng::try_fill` methods. (#247)
- Deprecate `Rng::gen_iter`.
(#286)
- Deprecate `Rng::gen_ascii_chars`. (#279)

### `rand_core` crate
- `rand` now depends on new `rand_core` crate (#288)
- `RngCore` and `SeedableRng` are now part of `rand_core`. (#288)
- Add modules to help implementing RNGs: `impls` and `le`. (#209, #228)
- Add `Error` and `ErrorKind`. (#225)
- Add `CryptoRng` marker trait. (#273)
- Add `BlockRngCore` trait. (#281)
- Add `BlockRng` and `BlockRng64` wrappers to help implementations. (#281, #325)
- Revise the `SeedableRng` trait. (#233)
- Remove default implementations for `RngCore::next_u64` and `RngCore::fill_bytes`. (#288)
- Add `RngCore::try_fill_bytes`. (#225)

### Other traits and types
- Add `FromEntropy` trait. (#233, #375)
- Add `SmallRng` wrapper. (#296)
- Rewrite `ReseedingRng` to only work with `BlockRngCore` (substantial performance improvement). (#281)
- Deprecate `weak_rng`. Use `SmallRng` instead. (#296)
- Deprecate `AsciiGenerator`. (#279)

### Random number generators
- Switch `StdRng` and `thread_rng` to HC-128. (#277)
- `StdRng` must now be created with `from_entropy` instead of `new`
- Change `thread_rng` reseeding threshold to 32 MiB. (#277)
- PRNGs no longer implement `Copy`. (#209)
- `Debug` implementations no longer show internals. (#209)
- Implement `Clone` for `ReseedingRng`, `JitterRng`, `OsRng`. (#383, #384)
- Implement serialization for `XorShiftRng`, `IsaacRng` and `Isaac64Rng` under the `serde1` feature. (#189)
- Implement `BlockRngCore` for `ChaChaCore` and `Hc128Core`. (#281)
- All PRNGs are now portable across big- and little-endian architectures. (#209)
- `Isaac64Rng::next_u32` no longer throws away half the results. (#209)
- Add `IsaacRng::new_from_u64` and `Isaac64Rng::new_from_u64`. (#209)
- Add the HC-128 CSPRNG `Hc128Rng`. (#210)
- Change ChaCha20 to have 64-bit counter and 64-bit stream. (#349)
- Changes to `JitterRng` to get its size down from 2112 to 24 bytes. (#251)
- Various performance improvements to all PRNGs.
### Platform support and `OsRng`
- Add support for CloudABI. (#224)
- Remove support for NaCl. (#225)
- WASM support for `OsRng` via stdweb, behind the `stdweb` feature. (#272, #336)
- Use `getrandom` on more platforms for Linux, and on Android. (#338)
- Use the `SecRandomCopyBytes` interface on macOS. (#322)
- On systems that do not have a syscall interface, only keep a single file descriptor open for `OsRng`. (#239)
- On Unix, first try a single read from `/dev/random`, then `/dev/urandom`. (#338)
- Better error handling and reporting in `OsRng` (using new error type). (#225)
- `OsRng` now uses non-blocking when available. (#225)
- Add `EntropyRng`, which provides `OsRng`, but has `JitterRng` as a fallback. (#235)

### Distributions
- New `Distribution` trait. (#256)
- Add `Distribution::sample_iter` and `Rng::sample_iter`. (#361)
- Deprecate `Rand`, `Sample` and `IndependentSample` traits. (#256)
- Add a `Standard` distribution (replaces most `Rand` implementations). (#256)
- Add `Binomial` and `Poisson` distributions. (#96)
- Add `Bernoulli` distribution. (#411)
- Add `Alphanumeric` distribution. (#279)
- Remove `Closed01` distribution, add `OpenClosed01`. (#274, #420)
- Rework `Range` type, making it possible to implement it for user types. (#274)
- Rename `Range` to `Uniform`. (#395)
- Add `Uniform::new_inclusive` for inclusive ranges. (#274)
- Use widening multiply method for much faster integer range reduction. (#274)
- `Standard` distribution for `char` uses `Uniform` internally. (#274)
- `Standard` distribution for `bool` uses sign test. (#274)
- Implement `Standard` distribution for `Wrapping`. (#436)
- Implement `Uniform` distribution for `Duration`.
(#427)

## [0.4.3] - 2018-08-16
### Fixed
- Use correct syscall number for PowerPC (#589)

## [0.4.2] - 2018-01-06
### Changed
- Use `winapi` on Windows
- Update for Fuchsia OS
- Remove dev-dependency on `log`

## [0.4.1] - 2017-12-17
### Added
- `no_std` support

## [0.4.0-pre.0] - 2017-12-11
### Added
- `JitterRng` added as a high-quality alternative entropy source using the system timer
- new `seq` module with `sample_iter`, `sample_slice`, etc.
- WASM support via dummy implementations (fail at run-time)
- Additional benchmarks, covering generators and new seq code

### Changed
- `thread_rng` uses `JitterRng` if seeding from system time fails (slower but more secure than previous method)

### Deprecated
- `sample` function deprecated (replaced by `sample_iter`)

## [0.3.20] - 2018-01-06
### Changed
- Remove dev-dependency on `log`
- Update `fuchsia-zircon` dependency to 0.3.2

## [0.3.19] - 2017-12-27
### Changed
- Require `log <= 0.3.8` for dev builds
- Update `fuchsia-zircon` dependency to 0.3
- Fix broken links in docs (to unblock compiler docs testing CI)

## [0.3.18] - 2017-11-06
### Changed
- `thread_rng` is seeded from the system time if `OsRng` fails
- `weak_rng` now uses `thread_rng` internally

## [0.3.17] - 2017-10-07
### Changed
- Fuchsia: Magenta was renamed Zircon

## [0.3.16] - 2017-07-27
### Added
- Implement Debug for more non-public types
- implement `Rand` for `i128`/`u128`
- Support for Fuchsia

### Changed
- Add inline attribute to SampleRange::construct_range. This improves the benchmark for sample by 11% and for shuffle by 16%.
- Use `RtlGenRandom` instead of `CryptGenRandom`

## [0.3.15] - 2016-11-26
### Added
- Add `Rng` trait method `choose_mut`
- Redox support

### Changed
- Use `arc4rand` for `OsRng` on FreeBSD.
- Use `arc4random(3)` for `OsRng` on OpenBSD.
### Fixed
- Fix filling buffers 4 GiB or larger with `OsRng::fill_bytes` on Windows

## [0.3.14] - 2016-02-13
### Fixed
- Inline definitions from winapi/advapi32, which decreases build times

## [0.3.13] - 2016-01-09
### Fixed
- Compatible with Rust 1.7.0-nightly (needed some extra type annotations)

## [0.3.12] - 2015-11-09
### Changed
- Replaced the methods in `next_f32` and `next_f64` with the technique described by Saito & Matsumoto at MCQMC'08. The new method should exhibit a slightly more uniform distribution.
- Depend on libc 0.2

### Fixed
- Fix iterator protocol issue in `rand::sample`

## [0.3.11] - 2015-08-31
### Added
- Implement `Rand` for arrays with n <= 32

## [0.3.10] - 2015-08-17
### Added
- Support for NaCl platforms

### Changed
- Allow `Rng` to be `?Sized`, impl for `&mut R` and `Box<R>` where `R: ?Sized + Rng`

## [0.3.9] - 2015-06-18
### Changed
- Use `winapi` for Windows API things

### Fixed
- Fixed test on stable/nightly
- Fix `getrandom` syscall number for aarch64-unknown-linux-gnu

## [0.3.8] - 2015-04-23
### Changed
- `log` is a dev dependency

### Fixed
- Fix race condition of atomics in `is_getrandom_available`

## [0.3.7] - 2015-04-03
### Fixed
- Derive Copy/Clone changes

## [0.3.6] - 2015-04-02
### Changed
- Move to stable Rust!
## [0.3.5] - 2015-04-01
### Fixed
- Compatible with Rust master

## [0.3.4] - 2015-03-31
### Added
- Implement Clone for `Weighted`

### Fixed
- Compatible with Rust master

## [0.3.3] - 2015-03-26
### Fixed
- Fix compile on Windows

## [0.3.2] - 2015-03-26

## [0.3.1] - 2015-03-26
### Fixed
- Fix compile on Windows

## [0.3.0] - 2015-03-25
### Changed
- Update to use log version 0.3.x

## [0.2.1] - 2015-03-22
### Fixed
- Compatible with Rust master
- Fixed iOS compilation

## [0.2.0] - 2015-03-06
### Fixed
- Compatible with Rust master (move from `old_io` to `std::io`)

## [0.1.4] - 2015-03-04
### Fixed
- Compatible with Rust master (use wrapping ops)

## [0.1.3] - 2015-02-20
### Fixed
- Compatible with Rust master

### Removed
- Removed Copy implementations from RNGs

## [0.1.2] - 2015-02-03
### Added
- Imported functionality from `std::rand`, including:
  - `StdRng`, `SeedableRng`, `ThreadRng`, `weak_rng()`
  - `ReaderRng`: A wrapper around any Reader to treat it as an RNG.
- Imported documentation from `std::rand`
- Imported tests from `std::rand`

## [0.1.1] - 2015-02-03
### Added
- Migrate to a cargo-compatible directory structure.

### Fixed
- Do not use entropy during `gen_weighted_bool(1)`

## [Rust 0.12.0] - 2014-10-09
### Added
- Impl Rand for tuples of arity 11 and 12
- Include ChaCha pseudorandom generator
- Add `next_f64` and `next_f32` to Rng
- Implement Clone for PRNGs

### Changed
- Rename `TaskRng` to `ThreadRng` and `task_rng` to `thread_rng` (since a runtime is removed from Rust).

### Fixed
- Improved performance of ISAAC and ISAAC64 by 30% and 12% respectively, by informing the optimiser that indexing is never out-of-bounds.
### Removed
- Removed the deprecated `choose_option`

## [Rust 0.11.0] - 2014-07-02
### Added
- document when to use `OSRng` in cryptographic context, and explain why we use `/dev/urandom` instead of `/dev/random`
- `Rng::gen_iter()` which will return an infinite stream of random values
- `Rng::gen_ascii_chars()` which will return an infinite stream of random ascii characters

### Changed
- Now only depends on libcore!
- Remove `Rng.choose()`, rename `Rng.choose_option()` to `.choose()`
- Rename OSRng to OsRng
- The WeightedChoice structure is no longer built with a `Vec<Weighted<T>>`, but rather a `&mut [Weighted<T>]`. This means that the WeightedChoice structure now has a lifetime associated with it.
- The `sample` method on `Rng` has been moved to a top-level function in the `rand` module due to its dependence on `Vec`.

### Removed
- `Rng::gen_vec()` was removed. Previous behavior can be regained with `rng.gen_iter().take(n).collect()`
- `Rng::gen_ascii_str()` was removed. Previous behavior can be regained with `rng.gen_ascii_chars().take(n).collect()`
- {IsaacRng, Isaac64Rng, XorShiftRng}::new() have all been removed. These all relied on being able to use an OSRng for seeding, but this is no longer available in librand (where these types are defined). To retain the same functionality, these types now implement the `Rand` trait so they can be generated with a random seed from another random number generator. This allows the stdlib to use an OSRng to create seeded instances of these RNGs.
- Rand implementations for `Box<T>` and `@T` were removed. These seemed to be pretty rare in the codebase, and it allows for librand to not depend on liballoc. Additionally, other pointer types like Rc and Arc were not supported.
- Remove a slew of old deprecated functions

## [Rust 0.10] - 2014-04-03
### Changed
- replace `Rng.shuffle`'s functionality with `.shuffle_mut`
- bubble up IO errors when creating an OSRng

### Fixed
- Use `fill()` instead of `read()`
- Rewrite OsRng in Rust for windows

## [0.10-pre] - 2014-03-02
### Added
- Separate `rand` out of the standard library

rand-0.8.4/COPYRIGHT
Copyrights in the Rand project are retained by their contributors. No
copyright assignment is required to contribute to the Rand project.

For full authorship information, see the version control history.

Except as otherwise noted (below and/or in individual files), Rand is
licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
https://www.apache.org/licenses/LICENSE-2.0> or the MIT license
<LICENSE-MIT or https://opensource.org/licenses/MIT>, at your option.

The Rand project includes code from the Rust project
published under these same licenses.

rand-0.8.4/Cargo.lock
# This file is automatically @generated by Cargo.
# It is not intended for manual editing.
[[package]] name = "bincode" version = "1.3.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "d175dfa69e619905c4c3cdb7c3c203fa3bdd5d51184e3afdb2742c0280493772" dependencies = [ "byteorder", "serde", ] [[package]] name = "byteorder" version = "1.3.4" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "08c48aae112d48ed9f069b33538ea9e3e90aa263cfa3d1c24309612b1f7472de" [[package]] name = "cfg-if" version = "0.1.10" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "4785bdd1c96b2a846b2bd7cc02e86b6b3dbf14e7e53446c4f54c92a361040822" [[package]] name = "cfg-if" version = "1.0.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "baf1de4339761588bc0619e3cbc0120ee582ebb74b53b4efbf79117bd2da40fd" [[package]] name = "getrandom" version = "0.2.2" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "c9495705279e7140bf035dde1f6e750c162df8b625267cd52cc44e0b156732c8" dependencies = [ "cfg-if 1.0.0", "libc", "wasi", ] [[package]] name = "libc" version = "0.2.91" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "8916b1f6ca17130ec6568feccee27c156ad12037880833a3b842a823236502e7" [[package]] name = "libm" version = "0.1.4" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "7fc7aa29613bd6a620df431842069224d8bc9011086b1db4c0e0cd47fa03ec9a" [[package]] name = "log" version = "0.4.14" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "51b9bbe6c47d51fc3e1a9b945965946b4c44142ab8792c50835a980d362c2710" dependencies = [ "cfg-if 1.0.0", ] [[package]] name = "packed_simd_2" version = "0.3.5" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "0e64858a2d3733fdd61adfdd6da89aa202f7ff0e741d2fc7ed1e452ba9dc99d7" dependencies = [ "cfg-if 0.1.10", "libm", ] [[package]] name = "ppv-lite86" version = "0.2.10" source = "registry+https://github.com/rust-lang/crates.io-index" 
checksum = "ac74c624d6b2d21f425f752262f42188365d7b8ff1aff74c82e45136510a4857" [[package]] name = "proc-macro2" version = "1.0.24" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "1e0704ee1a7e00d7bb417d0770ea303c1bccbabf0ef1667dae92b5967f5f8a71" dependencies = [ "unicode-xid", ] [[package]] name = "quote" version = "1.0.9" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "c3d0b9745dc2debf507c8422de05d7226cc1f0644216dfdfead988f9b1ab32a7" dependencies = [ "proc-macro2", ] [[package]] name = "rand" version = "0.8.4" dependencies = [ "bincode", "libc", "log", "packed_simd_2", "rand_chacha", "rand_core", "rand_hc", "rand_pcg", "serde", ] [[package]] name = "rand_chacha" version = "0.3.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "e6c10a63a0fa32252be49d21e7709d4d4baf8d231c2dbce1eaa8141b9b127d88" dependencies = [ "ppv-lite86", "rand_core", ] [[package]] name = "rand_core" version = "0.6.3" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "d34f1408f55294453790c48b2f1ebbb1c5b4b7563eb1f418bcfcfdbb06ebb4e7" dependencies = [ "getrandom", "serde", ] [[package]] name = "rand_hc" version = "0.3.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "d51e9f596de227fda2ea6c84607f5558e196eeaf43c986b724ba4fb8fdf497e7" dependencies = [ "rand_core", ] [[package]] name = "rand_pcg" version = "0.3.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "59cad018caf63deb318e5a4586d99a24424a364f40f1e5778c29aca23f4fc73e" dependencies = [ "rand_core", ] [[package]] name = "serde" version = "1.0.125" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "558dc50e1a5a5fa7112ca2ce4effcb321b0300c0d4ccf0776a9f60cd89031171" dependencies = [ "serde_derive", ] [[package]] name = "serde_derive" version = "1.0.125" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = 
"b093b7a2bb58203b5da3056c05b4ec1fed827dcfdb37347a8841695263b3d06d" dependencies = [ "proc-macro2", "quote", "syn", ] [[package]] name = "syn" version = "1.0.67" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "6498a9efc342871f91cc2d0d694c674368b4ceb40f62b65a7a08c3792935e702" dependencies = [ "proc-macro2", "quote", "unicode-xid", ] [[package]] name = "unicode-xid" version = "0.2.1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "f7fe0bb3479651439c9112f72b6c505038574c9fbb575ed1bf3b797fa39dd564" [[package]] name = "wasi" version = "0.10.2+wasi-snapshot-preview1" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "fd6fbd9a79829dd1ad0cc20627bf1ed606756a7f77edff7b66b7064f9cb327c6" rand-0.8.4/Cargo.toml0000644000000044230000000000000077710ustar # THIS FILE IS AUTOMATICALLY GENERATED BY CARGO # # When uploading crates to the registry Cargo will automatically # "normalize" Cargo.toml files for maximal compatibility # with all versions of Cargo and also rewrite `path` dependencies # to registry (e.g., crates.io) dependencies # # If you believe there's an error in this file please file an # issue against the rust-lang/cargo repository. 
If you're
# editing this file be aware that the upstream Cargo.toml
# will likely look very different (and much more reasonable)

[package]
edition = "2018"
name = "rand"
version = "0.8.4"
authors = ["The Rand Project Developers", "The Rust Project Developers"]
include = ["src/", "LICENSE-*", "README.md", "CHANGELOG.md", "COPYRIGHT"]
autobenches = true
description = "Random number generators and other randomness functionality.\n"
homepage = "https://rust-random.github.io/book"
documentation = "https://docs.rs/rand"
readme = "README.md"
keywords = ["random", "rng"]
categories = ["algorithms", "no-std"]
license = "MIT OR Apache-2.0"
repository = "https://github.com/rust-random/rand"

[package.metadata.docs.rs]
all-features = true
rustdoc-args = ["--cfg", "doc_cfg"]

[package.metadata.playground]
features = ["small_rng", "serde1"]

[dependencies.log]
version = "0.4.4"
optional = true

[dependencies.packed_simd]
version = "0.3.5"
features = ["into_bits"]
optional = true
package = "packed_simd_2"

[dependencies.rand_core]
version = "0.6.0"

[dependencies.serde]
version = "1.0.103"
features = ["derive"]
optional = true

[dev-dependencies.bincode]
version = "1.2.1"

[dev-dependencies.rand_hc]
version = "0.3.0"

[dev-dependencies.rand_pcg]
version = "0.3.0"

[features]
alloc = ["rand_core/alloc"]
default = ["std", "std_rng"]
getrandom = ["rand_core/getrandom"]
min_const_gen = []
nightly = []
serde1 = ["serde", "rand_core/serde1"]
simd_support = ["packed_simd"]
small_rng = []
std = ["rand_core/std", "rand_chacha/std", "alloc", "getrandom", "libc"]
std_rng = ["rand_chacha", "rand_hc"]

[target."cfg(not(target_os = \"emscripten\"))".dependencies.rand_chacha]
version = "0.3.0"
optional = true
default-features = false

[target."cfg(target_os = \"emscripten\")".dependencies.rand_hc]
version = "0.3.0"
optional = true

[target."cfg(unix)".dependencies.libc]
version = "0.2.22"
optional = true
default-features = false
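The feature flags in `[features]` above are selected by downstream crates, not inside this manifest. As a usage sketch (a hypothetical consumer's `Cargo.toml`, not part of this package), a `no_std` user disables the defaults and opts back in to what they need:

```toml
# Hypothetical downstream manifest fragment: no_std usage of rand 0.8,
# keeping only SmallRng plus OS-based seeding via getrandom.
[dependencies.rand]
version = "0.8.4"
default-features = false
features = ["small_rng", "getrandom"]
```

Note that disabling `default-features` also drops `std_rng`, so `StdRng` and `thread_rng` become unavailable unless that feature is re-enabled.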
rand-0.8.4/Cargo.toml.orig

[package]
name = "rand"
version = "0.8.4"
authors = ["The Rand Project Developers", "The Rust Project Developers"]
license = "MIT OR Apache-2.0"
readme = "README.md"
repository = "https://github.com/rust-random/rand"
documentation = "https://docs.rs/rand"
homepage = "https://rust-random.github.io/book"
description = """
Random number generators and other randomness functionality.
"""
keywords = ["random", "rng"]
categories = ["algorithms", "no-std"]
autobenches = true
edition = "2018"
include = ["src/", "LICENSE-*", "README.md", "CHANGELOG.md", "COPYRIGHT"]

[features]
# Meta-features:
default = ["std", "std_rng"]
nightly = [] # enables performance optimizations requiring nightly rust
serde1 = ["serde", "rand_core/serde1"]

# Option (enabled by default): without "std" rand uses libcore; this option
# enables functionality expected to be available on a standard platform.
std = ["rand_core/std", "rand_chacha/std", "alloc", "getrandom", "libc"]

# Option: "alloc" enables support for Vec and Box when not using "std"
alloc = ["rand_core/alloc"]

# Option: use getrandom package for seeding
getrandom = ["rand_core/getrandom"]

# Option (requires nightly): experimental SIMD support
simd_support = ["packed_simd"]

# Option (enabled by default): enable StdRng
std_rng = ["rand_chacha", "rand_hc"]

# Option: enable SmallRng
small_rng = []

# Option: for rustc >= 1.51, enable generating random arrays of any size
# using min-const-generics
min_const_gen = []

[workspace]
members = [
    "rand_core",
    "rand_distr",
    "rand_chacha",
    "rand_hc",
    "rand_pcg",
]

[dependencies]
rand_core = { path = "rand_core", version = "0.6.0" }
log = { version = "0.4.4", optional = true }
serde = { version = "1.0.103", features = ["derive"], optional = true }

[dependencies.packed_simd]
# NOTE: so far no version works reliably due to dependence on unstable features
package = "packed_simd_2"
version = "0.3.5"
optional = true
features = ["into_bits"]

[target.'cfg(unix)'.dependencies]
# Used for fork protection (reseeding.rs)
libc = { version = "0.2.22", optional = true, default-features = false }

# Emscripten does not support 128-bit integers, which are used by ChaCha code.
# We work around this by using a different RNG.
[target.'cfg(not(target_os = "emscripten"))'.dependencies]
rand_chacha = { path = "rand_chacha", version = "0.3.0", default-features = false, optional = true }

[target.'cfg(target_os = "emscripten")'.dependencies]
rand_hc = { path = "rand_hc", version = "0.3.0", optional = true }

[dev-dependencies]
rand_pcg = { path = "rand_pcg", version = "0.3.0" }
# Only for benches:
rand_hc = { path = "rand_hc", version = "0.3.0" }
# Only to test serde1
bincode = "1.2.1"

[package.metadata.docs.rs]
# To build locally:
# RUSTDOCFLAGS="--cfg doc_cfg" cargo +nightly doc --all-features --no-deps --open
all-features = true
rustdoc-args = ["--cfg", "doc_cfg"]

[package.metadata.playground]
features = ["small_rng", "serde1"]

rand-0.8.4/LICENSE-APACHE

                              Apache License
                        Version 2.0, January 2004
                     https://www.apache.org/licenses/

TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

1. Definitions.

"License" shall mean the terms and conditions for use, reproduction, and
distribution as defined by Sections 1 through 9 of this document.

"Licensor" shall mean the copyright owner or entity authorized by the
copyright owner that is granting the License.

"Legal Entity" shall mean the union of the acting entity and all other
entities that control, are controlled by, or are under common control with
that entity. For the purposes of this definition, "control" means (i) the
power, direct or indirect, to cause the direction or management of such
entity, whether by contract or otherwise, or (ii) ownership of fifty percent
(50%) or more of the outstanding shares, or (iii) beneficial ownership of
such entity.
"You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. 
For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. 
If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. 
You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. 
In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. 
Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. rand-0.8.4/LICENSE-MIT000064400000000000000000000021350000000000000121730ustar 00000000000000Copyright 2018 Developers of the Rand project Copyright (c) 2014 The Rust Project Developers Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
rand-0.8.4/README.md

# Rand

[![Test Status](https://github.com/rust-random/rand/workflows/Tests/badge.svg?event=push)](https://github.com/rust-random/rand/actions)
[![Crate](https://img.shields.io/crates/v/rand.svg)](https://crates.io/crates/rand)
[![Book](https://img.shields.io/badge/book-master-yellow.svg)](https://rust-random.github.io/book/)
[![API](https://img.shields.io/badge/api-master-yellow.svg)](https://rust-random.github.io/rand/rand)
[![API](https://docs.rs/rand/badge.svg)](https://docs.rs/rand)
[![Minimum rustc version](https://img.shields.io/badge/rustc-1.36+-lightgray.svg)](https://github.com/rust-random/rand#rust-version-requirements)

A Rust library for random number generation, featuring:

- Easy random value generation and usage via the [`Rng`](https://docs.rs/rand/*/rand/trait.Rng.html),
  [`SliceRandom`](https://docs.rs/rand/*/rand/seq/trait.SliceRandom.html) and
  [`IteratorRandom`](https://docs.rs/rand/*/rand/seq/trait.IteratorRandom.html) traits
- Secure seeding via the [`getrandom` crate](https://crates.io/crates/getrandom)
  and fast, convenient generation via [`thread_rng`](https://docs.rs/rand/*/rand/fn.thread_rng.html)
- A modular design built over [`rand_core`](https://crates.io/crates/rand_core)
  ([see the book](https://rust-random.github.io/book/crates.html))
- Fast implementations of the best-in-class [cryptographic](https://rust-random.github.io/book/guide-rngs.html#cryptographically-secure-pseudo-random-number-generators-csprngs) and
  [non-cryptographic](https://rust-random.github.io/book/guide-rngs.html#basic-pseudo-random-number-generators-prngs) generators
- A flexible [`distributions`](https://docs.rs/rand/*/rand/distributions/index.html) module
- Samplers for a large number of random number distributions via our own
  [`rand_distr`](https://docs.rs/rand_distr) and via
  the [`statrs`](https://docs.rs/statrs/0.13.0/statrs/) crate
- [Portably reproducible output](https://rust-random.github.io/book/portability.html)
- `#[no_std]` compatibility (partial)
- *Many* performance optimisations

It's also worth pointing out what `rand` *is not*:

- Small. Most low-level crates are small, but the higher-level `rand` and
  `rand_distr` each contain a lot of functionality.
- Simple (implementation). We have a strong focus on correctness, speed and
  flexibility, but not simplicity. If you prefer a small-and-simple library,
  there are alternatives including [fastrand](https://crates.io/crates/fastrand)
  and [oorandom](https://crates.io/crates/oorandom).
- Slow. We take performance seriously, with considerations also for set-up
  time of new distributions, commonly-used parameters, and parameters of the
  current sampler.

Documentation:

- [The Rust Rand Book](https://rust-random.github.io/book)
- [API reference (master branch)](https://rust-random.github.io/rand)
- [API reference (docs.rs)](https://docs.rs/rand)

## Usage

Add this to your `Cargo.toml`:

```toml
[dependencies]
rand = "0.8.0"
```

To get started using Rand, see [The Book](https://rust-random.github.io/book).

## Versions

Rand is *mature* (suitable for general usage, with infrequent breaking releases
which minimise breakage) but not yet at 1.0. We maintain compatibility with
pinned versions of the Rust compiler (see below).

Current Rand versions are:

- Version 0.7 was released in June 2019, moving most non-uniform distributions
  to an external crate, moving `from_entropy` to `SeedableRng`, and many small
  changes and fixes.
- Version 0.8 was released in December 2020 with many small changes.

A detailed [changelog](CHANGELOG.md) is available for releases.

When upgrading to the next minor series (especially 0.4 → 0.5), we recommend
reading the [Upgrade Guide](https://rust-random.github.io/book/update.html).
Rand has not yet reached 1.0 implying some breaking changes may arrive in the
future ([SemVer](https://semver.org/) allows each 0.x.0 release to include
breaking changes), but is considered *mature*: breaking changes are minimised
and breaking releases are infrequent.

Rand libs have inter-dependencies and make use of the
[semver trick](https://github.com/dtolnay/semver-trick/) in order to make traits
compatible across crate versions. (This is especially important for `RngCore`
and `SeedableRng`.) A few crate releases are thus compatibility shims,
depending on the *next* lib version (e.g. `rand_core` versions `0.2.2` and
`0.3.1`). This means, for example, that `rand_core_0_4_0::SeedableRng` and
`rand_core_0_3_0::SeedableRng` are distinct, incompatible traits, which can
cause build errors. Usually, running `cargo update` is enough to fix any issues.

### Yanked versions

Some versions of Rand crates have been yanked ("unreleased"). Where this occurs,
the crate's CHANGELOG *should* be updated with a rationale, and a search on the
issue tracker with the keyword `yank` *should* uncover the motivation.

### Rust version requirements

Since version 0.8, Rand requires **Rustc version 1.36 or greater**.
Rand 0.7 requires Rustc 1.32 or greater while versions 0.5 require Rustc 1.22 or
greater, and 0.4 and 0.3 (since approx. June 2017) require Rustc version 1.15 or
greater. Subsets of the Rand code may work with older Rust versions, but this is
not supported.

Continuous Integration (CI) will always test the minimum supported Rustc version
(the MSRV). The current policy is that this can be updated in any Rand release
if required, but the change must be noted in the changelog.
## Crate Features

Rand is built with these features enabled by default:

- `std` enables functionality dependent on the `std` lib
- `alloc` (implied by `std`) enables functionality requiring an allocator
- `getrandom` (implied by `std`) is an optional dependency providing the code
  behind `rngs::OsRng`
- `std_rng` enables inclusion of `StdRng`, `thread_rng` and `random`
  (the latter two *also* require that `std` be enabled)

Optionally, the following dependencies can be enabled:

- `log` enables logging via the `log` crate

Additionally, these features configure Rand:

- `small_rng` enables inclusion of the `SmallRng` PRNG
- `nightly` enables some optimizations requiring nightly Rust
- `simd_support` (experimental) enables sampling of SIMD values
  (uniformly random SIMD integers and floats), requiring nightly Rust

Note that nightly features are not stable and therefore not all library and
compiler versions will be compatible. This is especially true of Rand's
experimental `simd_support` feature.

Rand supports limited functionality in `no_std` mode (enabled via
`default-features = false`). In this case, `OsRng` and `from_entropy` are
unavailable (unless `getrandom` is enabled), large parts of `seq` are
unavailable (unless `alloc` is enabled), and `thread_rng` and `random` are
unavailable.

### WASM support

The WASM target `wasm32-unknown-unknown` is not *automatically* supported by
`rand` or `getrandom`. To solve this, either use a different target such as
`wasm32-wasi` or add a direct dependency on `getrandom` with the `js` feature
(if the target supports JavaScript). See
[getrandom#WebAssembly support](https://docs.rs/getrandom/latest/getrandom/#webassembly-support).

# License

Rand is distributed under the terms of both the MIT license and the
Apache License (Version 2.0). See [LICENSE-APACHE](LICENSE-APACHE) and
[LICENSE-MIT](LICENSE-MIT), and [COPYRIGHT](COPYRIGHT) for details.
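The file that follows implements the `Bernoulli` distribution by comparing a random `u64` against `p * 2^64`. As a self-contained sketch of that integer-comparison technique (using a toy SplitMix64 generator purely for illustration, not anything from `rand` itself):

```rust
// Integer-comparison Bernoulli sampling: precompute `p_int = p * 2^64`,
// then each sample is just `rng.next_u64() < p_int`.
// The RNG here is a toy SplitMix64 -- illustrative only, not rand's RNG.

const SCALE: f64 = 2.0 * (1u64 << 63) as f64; // 2^64, written to avoid overflow

struct SplitMix64(u64);

impl SplitMix64 {
    fn next_u64(&mut self) -> u64 {
        self.0 = self.0.wrapping_add(0x9E37_79B9_7F4A_7C15);
        let mut z = self.0;
        z = (z ^ (z >> 30)).wrapping_mul(0xBF58_476D_1CE4_E5B9);
        z = (z ^ (z >> 27)).wrapping_mul(0x94D0_49BB_1331_11EB);
        z ^ (z >> 31)
    }
}

fn bernoulli(rng: &mut SplitMix64, p_int: u64) -> bool {
    rng.next_u64() < p_int
}

fn main() {
    let p = 0.3;
    let p_int = (p * SCALE) as u64;
    let mut rng = SplitMix64(42);
    // By the law of large numbers the empirical frequency approaches p.
    let hits = (0..100_000).filter(|_| bernoulli(&mut rng, p_int)).count();
    let avg = hits as f64 / 100_000.0;
    assert!((avg - p).abs() < 0.01);
    println!("empirical p = {}", avg);
}
```

Note the same `p = 1.0` edge case the real implementation documents: `1.0 * 2^64` does not fit in a `u64`, which is why `bernoulli.rs` special-cases it as `ALWAYS_TRUE`.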
rand-0.8.4/src/distributions/bernoulli.rs

// Copyright 2018 Developers of the Rand project.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// https://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or https://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.

//! The Bernoulli distribution.

use crate::distributions::Distribution;
use crate::Rng;
use core::{fmt, u64};

#[cfg(feature = "serde1")]
use serde::{Serialize, Deserialize};

/// The Bernoulli distribution.
///
/// This is a special case of the Binomial distribution where `n = 1`.
///
/// # Example
///
/// ```rust
/// use rand::distributions::{Bernoulli, Distribution};
///
/// let d = Bernoulli::new(0.3).unwrap();
/// let v = d.sample(&mut rand::thread_rng());
/// println!("{} is from a Bernoulli distribution", v);
/// ```
///
/// # Precision
///
/// This `Bernoulli` distribution uses 64 bits from the RNG (a `u64`),
/// so only probabilities that are multiples of 2^-64 can be
/// represented.
#[derive(Clone, Copy, Debug)]
#[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))]
pub struct Bernoulli {
    /// Probability of success, relative to the maximal integer.
    p_int: u64,
}

// To sample from the Bernoulli distribution we use a method that compares a
// random `u64` value `v < (p * 2^64)`.
//
// If `p == 1.0`, the integer `v` to compare against can not be represented as
// a `u64`. We manually set it to `u64::MAX` instead (2^64 - 1 instead of 2^64).
// Note that a value of `p < 1.0` can never result in `u64::MAX`, because an
// `f64` only has 53 bits of precision, and the next largest value of `p` will
// result in `2^64 - 2048`.
//
// Also there is a 100% theoretical concern: if someone consistently wants to
// generate `true` using the Bernoulli distribution (i.e. by using a probability
// of `1.0`), just using `u64::MAX` is not enough. On average it would return
// false once every 2^64 iterations.
// Some people apparently care about this case.
//
// That is why we special-case `u64::MAX` to always return `true`, without using
// the RNG, and pay the performance price for all uses that *are* reasonable.
// Luckily, if `new()` and `sample` are close, the compiler can optimize out the
// extra check.
const ALWAYS_TRUE: u64 = u64::MAX;

// This is just `2.0.powi(64)`, but written this way because it is not available
// in `no_std` mode.
const SCALE: f64 = 2.0 * (1u64 << 63) as f64;

/// Error type returned from `Bernoulli::new`.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub enum BernoulliError {
    /// `p < 0` or `p > 1`.
    InvalidProbability,
}

impl fmt::Display for BernoulliError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.write_str(match self {
            BernoulliError::InvalidProbability => "p is outside [0, 1] in Bernoulli distribution",
        })
    }
}

#[cfg(feature = "std")]
impl ::std::error::Error for BernoulliError {}

impl Bernoulli {
    /// Construct a new `Bernoulli` with the given probability of success `p`.
    ///
    /// # Precision
    ///
    /// For `p = 1.0`, the resulting distribution will always generate true.
    /// For `p = 0.0`, the resulting distribution will always generate false.
    ///
    /// This method is accurate for any input `p` in the range `[0, 1]` which is
    /// a multiple of 2^-64. (Note that not all multiples of
    /// 2^-64 in `[0, 1]` can be represented as a `f64`.)
    #[inline]
    pub fn new(p: f64) -> Result<Bernoulli, BernoulliError> {
        if !(0.0..1.0).contains(&p) {
            if p == 1.0 {
                return Ok(Bernoulli { p_int: ALWAYS_TRUE });
            }
            return Err(BernoulliError::InvalidProbability);
        }
        Ok(Bernoulli {
            p_int: (p * SCALE) as u64,
        })
    }

    /// Construct a new `Bernoulli` with the probability of success of
    /// `numerator`-in-`denominator`. I.e. `from_ratio(2, 3)` will return
    /// a `Bernoulli` with a 2-in-3 chance, or about 67%, of returning `true`.
    ///
    /// If `numerator == denominator` it will always return `true`.
    /// If `numerator == 0` it will always return `false`.
    /// For `numerator > denominator` and `denominator == 0`, this returns an
    /// error.
    #[inline]
    pub fn from_ratio(numerator: u32, denominator: u32) -> Result<Bernoulli, BernoulliError> {
        if numerator > denominator || denominator == 0 {
            return Err(BernoulliError::InvalidProbability);
        }
        if numerator == denominator {
            return Ok(Bernoulli { p_int: ALWAYS_TRUE });
        }
        let p_int = ((f64::from(numerator) / f64::from(denominator)) * SCALE) as u64;
        Ok(Bernoulli { p_int })
    }
}

impl Distribution<bool> for Bernoulli {
    #[inline]
    fn sample<R: Rng + ?Sized>(&self, rng: &mut R) -> bool {
        // Make sure to always return true for p = 1.0.
        if self.p_int == ALWAYS_TRUE {
            return true;
        }
        let v: u64 = rng.gen();
        v < self.p_int
    }
}

#[cfg(test)]
mod test {
    use super::Bernoulli;
    use crate::distributions::Distribution;
    use crate::Rng;

    #[test]
    #[cfg(feature = "serde1")]
    fn test_serializing_deserializing_bernoulli() {
        let coin_flip = Bernoulli::new(0.5).unwrap();
        let de_coin_flip: Bernoulli =
            bincode::deserialize(&bincode::serialize(&coin_flip).unwrap()).unwrap();
        assert_eq!(coin_flip.p_int, de_coin_flip.p_int);
    }

    #[test]
    fn test_trivial() {
        // We prefer to be explicit here.
        #![allow(clippy::bool_assert_comparison)]

        let mut r = crate::test::rng(1);
        let always_false = Bernoulli::new(0.0).unwrap();
        let always_true = Bernoulli::new(1.0).unwrap();
        for _ in 0..5 {
            assert_eq!(r.sample::<bool, _>(&always_false), false);
            assert_eq!(r.sample::<bool, _>(&always_true), true);
            assert_eq!(Distribution::<bool>::sample(&always_false, &mut r), false);
            assert_eq!(Distribution::<bool>::sample(&always_true, &mut r), true);
        }
    }

    #[test]
    #[cfg_attr(miri, ignore)] // Miri is too slow
    fn test_average() {
        const P: f64 = 0.3;
        const NUM: u32 = 3;
        const DENOM: u32 = 10;
        let d1 = Bernoulli::new(P).unwrap();
        let d2 = Bernoulli::from_ratio(NUM, DENOM).unwrap();
        const N: u32 = 100_000;
        let mut sum1: u32 = 0;
        let mut sum2: u32 = 0;
        let mut rng = crate::test::rng(2);
        for _ in 0..N {
            if d1.sample(&mut rng) {
                sum1 += 1;
            }
            if d2.sample(&mut rng) {
                sum2 += 1;
            }
        }
        let avg1 = (sum1 as f64) / (N as f64);
        assert!((avg1 - P).abs() < 5e-3);

        let avg2 = (sum2 as f64) / (N as f64);
        assert!((avg2 - (NUM as f64) / (DENOM as f64)).abs() < 5e-3);
    }

    #[test]
    fn value_stability() {
        let mut rng = crate::test::rng(3);
        let distr = Bernoulli::new(0.4532).unwrap();
        let mut buf = [false; 10];
        for x in &mut buf {
            *x = rng.sample(&distr);
        }
        assert_eq!(buf, [
            true, false, false, true, false, false, true, true, true, true
        ]);
    }
}

rand-0.8.4/src/distributions/distribution.rs

// Copyright 2018 Developers of the Rand project.
// Copyright 2013-2017 The Rust Project Developers.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// https://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or https://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.

//! Distribution trait and associates

use crate::Rng;
use core::iter;
#[cfg(feature = "alloc")] use alloc::string::String;

/// Types (distributions) that can be used to create a random instance of `T`.
///
/// It is possible to sample from a distribution through both the
/// `Distribution` and [`Rng`] traits, via `distr.sample(&mut rng)` and
/// `rng.sample(distr)`. They also both offer the [`sample_iter`] method, which
/// produces an iterator that samples from the distribution.
///
/// All implementations are expected to be immutable; this has the significant
/// advantage of not needing to consider thread safety, and for most
/// distributions efficient state-less sampling algorithms are available.
///
/// Implementations are typically expected to be portable with reproducible
/// results when used with a PRNG with fixed seed; see the
/// [portability chapter](https://rust-random.github.io/book/portability.html)
/// of The Rust Rand Book. In some cases this does not apply, e.g. the `usize`
/// type requires different sampling on 32-bit and 64-bit machines.
///
/// [`sample_iter`]: Distribution::sample_iter
pub trait Distribution<T> {
    /// Generate a random value of `T`, using `rng` as the source of randomness.
    fn sample<R: Rng + ?Sized>(&self, rng: &mut R) -> T;

    /// Create an iterator that generates random values of `T`, using `rng` as
    /// the source of randomness.
    ///
    /// Note that this function takes `self` by value. This works since
    /// `Distribution<T>` is impl'd for `&D` where `D: Distribution<T>`,
    /// however borrowing is not automatic hence `distr.sample_iter(...)` may
    /// need to be replaced with `(&distr).sample_iter(...)` to borrow or
    /// `(&*distr).sample_iter(...)` to reborrow an existing reference.
    ///
    /// # Example
    ///
    /// ```
    /// use rand::thread_rng;
    /// use rand::distributions::{Distribution, Alphanumeric, Uniform, Standard};
    ///
    /// let mut rng = thread_rng();
    ///
    /// // Vec of 16 x f32:
    /// let v: Vec<f32> = Standard.sample_iter(&mut rng).take(16).collect();
    ///
    /// // String:
    /// let s: String = Alphanumeric
    ///     .sample_iter(&mut rng)
    ///     .take(7)
    ///     .map(char::from)
    ///     .collect();
    ///
    /// // Dice-rolling:
    /// let die_range = Uniform::new_inclusive(1, 6);
    /// let mut roll_die = die_range.sample_iter(&mut rng);
    /// while roll_die.next().unwrap() != 6 {
    ///     println!("Not a 6; rolling again!");
    /// }
    /// ```
    fn sample_iter<R>(self, rng: R) -> DistIter<Self, R, T>
    where
        R: Rng,
        Self: Sized,
    {
        DistIter {
            distr: self,
            rng,
            phantom: ::core::marker::PhantomData,
        }
    }

    /// Create a distribution of values of `S` by mapping the output of `Self`
    /// through the closure `F`
    ///
    /// # Example
    ///
    /// ```
    /// use rand::thread_rng;
    /// use rand::distributions::{Distribution, Uniform};
    ///
    /// let mut rng = thread_rng();
    ///
    /// let die = Uniform::new_inclusive(1, 6);
    /// let even_number = die.map(|num| num % 2 == 0);
    /// while !even_number.sample(&mut rng) {
    ///     println!("Still odd; rolling again!");
    /// }
    /// ```
    fn map<F, S>(self, func: F) -> DistMap<Self, F, T, S>
    where
        F: Fn(T) -> S,
        Self: Sized,
    {
        DistMap {
            distr: self,
            func,
            phantom: ::core::marker::PhantomData,
        }
    }
}

impl<'a, T, D: Distribution<T>> Distribution<T> for &'a D {
    fn sample<R: Rng + ?Sized>(&self, rng: &mut R) -> T {
        (*self).sample(rng)
    }
}

/// An iterator that generates random values of `T` with distribution `D`,
/// using `R` as the source of randomness.
///
/// This `struct` is created by the [`sample_iter`] method on [`Distribution`].
/// See its documentation for more.
///
/// [`sample_iter`]: Distribution::sample_iter
#[derive(Debug)]
pub struct DistIter<D, R, T> {
    distr: D,
    rng: R,
    phantom: ::core::marker::PhantomData<T>,
}

impl<D, R, T> Iterator for DistIter<D, R, T>
where
    D: Distribution<T>,
    R: Rng,
{
    type Item = T;

    #[inline(always)]
    fn next(&mut self) -> Option<T> {
        // Here, self.rng may be a reference, but we must take &mut anyway.
        // Even if sample could take an R: Rng by value, we would need to do this
        // since Rng is not copyable and we cannot enforce that this is "reborrowable".
        Some(self.distr.sample(&mut self.rng))
    }

    fn size_hint(&self) -> (usize, Option<usize>) {
        (usize::max_value(), None)
    }
}

impl<D, R, T> iter::FusedIterator for DistIter<D, R, T>
where
    D: Distribution<T>,
    R: Rng,
{
}

#[cfg(feature = "nightly")]
impl<D, R, T> iter::TrustedLen for DistIter<D, R, T>
where
    D: Distribution<T>,
    R: Rng,
{
}

/// A distribution of values of type `S` derived from the distribution `D`
/// by mapping its output of type `T` through the closure `F`.
///
/// This `struct` is created by the [`Distribution::map`] method.
/// See its documentation for more.
#[derive(Debug)]
pub struct DistMap<D, F, T, S> {
    distr: D,
    func: F,
    phantom: ::core::marker::PhantomData<fn(T) -> S>,
}

impl<D, F, T, S> Distribution<S> for DistMap<D, F, T, S>
where
    D: Distribution<T>,
    F: Fn(T) -> S,
{
    fn sample<R: Rng + ?Sized>(&self, rng: &mut R) -> S {
        (self.func)(self.distr.sample(rng))
    }
}

/// `String` sampler
///
/// Sampling a `String` of random characters is not quite the same as collecting
/// a sequence of chars. This trait contains some helpers.
#[cfg(feature = "alloc")]
pub trait DistString {
    /// Append `len` random chars to `string`
    fn append_string<R: Rng + ?Sized>(&self, rng: &mut R, string: &mut String, len: usize);

    /// Generate a `String` of `len` random chars
    #[inline]
    fn sample_string<R: Rng + ?Sized>(&self, rng: &mut R, len: usize) -> String {
        let mut s = String::new();
        self.append_string(rng, &mut s, len);
        s
    }
}

#[cfg(test)]
mod tests {
    use crate::distributions::{Alphanumeric, Distribution, Standard, Uniform};
    use crate::Rng;

    #[test]
    fn test_distributions_iter() {
        use crate::distributions::Open01;
        let mut rng = crate::test::rng(210);
        let distr = Open01;
        let mut iter = Distribution::<f32>::sample_iter(distr, &mut rng);
        let mut sum: f32 = 0.;
        for _ in 0..100 {
            sum += iter.next().unwrap();
        }
        assert!(0. < sum && sum < 100.);
    }

    #[test]
    fn test_distributions_map() {
        let dist = Uniform::new_inclusive(0, 5).map(|val| val + 15);

        let mut rng = crate::test::rng(212);
        let val = dist.sample(&mut rng);
        assert!(val >= 15 && val <= 20);
    }

    #[test]
    fn test_make_an_iter() {
        fn ten_dice_rolls_other_than_five<R: Rng>(
            rng: &mut R,
        ) -> impl Iterator<Item = i32> + '_ {
            Uniform::new_inclusive(1, 6)
                .sample_iter(rng)
                .filter(|x| *x != 5)
                .take(10)
        }

        let mut rng = crate::test::rng(211);
        let mut count = 0;
        for val in ten_dice_rolls_other_than_five(&mut rng) {
            assert!((1..=6).contains(&val) && val != 5);
            count += 1;
        }
        assert_eq!(count, 10);
    }

    #[test]
    #[cfg(feature = "alloc")]
    fn test_dist_string() {
        use core::str;
        use crate::distributions::DistString;
        let mut rng = crate::test::rng(213);

        let s1 = Alphanumeric.sample_string(&mut rng, 20);
        assert_eq!(s1.len(), 20);
        assert_eq!(str::from_utf8(s1.as_bytes()), Ok(s1.as_str()));

        let s2 = Standard.sample_string(&mut rng, 20);
        assert_eq!(s2.chars().count(), 20);
        assert_eq!(str::from_utf8(s2.as_bytes()), Ok(s2.as_str()));
    }
}

rand-0.8.4/src/distributions/float.rs

// Copyright 2018 Developers of the Rand project.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// https://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or https://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.

//! Basic floating-point number distributions

use crate::distributions::utils::FloatSIMDUtils;
use crate::distributions::{Distribution, Standard};
use crate::Rng;
use core::mem;
#[cfg(feature = "simd_support")] use packed_simd::*;

#[cfg(feature = "serde1")]
use serde::{Serialize, Deserialize};

/// A distribution to sample floating point numbers uniformly in the half-open
/// interval `(0, 1]`, i.e. including 1 but not 0.
///
/// All values that can be generated are of the form `n * Ξ΅/2`. For `f32`
/// the 24 most significant random bits of a `u32` are used and for `f64` the
/// 53 most significant bits of a `u64` are used. The conversion uses the
/// multiplicative method.
///
/// See also: [`Standard`] which samples from `[0, 1)`, [`Open01`]
/// which samples from `(0, 1)` and [`Uniform`] which samples from arbitrary
/// ranges.
///
/// # Example
/// ```
/// use rand::{thread_rng, Rng};
/// use rand::distributions::OpenClosed01;
///
/// let val: f32 = thread_rng().sample(OpenClosed01);
/// println!("f32 from (0, 1]: {}", val);
/// ```
///
/// [`Standard`]: crate::distributions::Standard
/// [`Open01`]: crate::distributions::Open01
/// [`Uniform`]: crate::distributions::uniform::Uniform
#[derive(Clone, Copy, Debug)]
#[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))]
pub struct OpenClosed01;

/// A distribution to sample floating point numbers uniformly in the open
/// interval `(0, 1)`, i.e. not including either endpoint.
///
/// All values that can be generated are of the form `n * Ξ΅ + Ξ΅/2`. For `f32`
/// the 23 most significant random bits of a `u32` are used, for `f64` 52 from
/// a `u64`. The conversion uses a transmute-based method.
///
/// See also: [`Standard`] which samples from `[0, 1)`, [`OpenClosed01`]
/// which samples from `(0, 1]` and [`Uniform`] which samples from arbitrary
/// ranges.
///
/// # Example
/// ```
/// use rand::{thread_rng, Rng};
/// use rand::distributions::Open01;
///
/// let val: f32 = thread_rng().sample(Open01);
/// println!("f32 from (0, 1): {}", val);
/// ```
///
/// [`Standard`]: crate::distributions::Standard
/// [`OpenClosed01`]: crate::distributions::OpenClosed01
/// [`Uniform`]: crate::distributions::uniform::Uniform
#[derive(Clone, Copy, Debug)]
#[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))]
pub struct Open01;

// This trait is needed by both this lib and rand_distr hence is a hidden export
#[doc(hidden)]
pub trait IntoFloat {
    type F;

    /// Helper method to combine the fraction and a constant exponent into a
    /// float.
    ///
    /// Only the least significant bits of `self` may be set, 23 for `f32` and
    /// 52 for `f64`.
    /// The resulting value will fall in a range that depends on the exponent.
    /// As an example the range with exponent 0 will be
    /// [2^0..2^1), which is [1..2).
    fn into_float_with_exponent(self, exponent: i32) -> Self::F;
}

macro_rules! float_impls {
    ($ty:ident, $uty:ident, $f_scalar:ident, $u_scalar:ty,
     $fraction_bits:expr, $exponent_bias:expr) => {
        impl IntoFloat for $uty {
            type F = $ty;
            #[inline(always)]
            fn into_float_with_exponent(self, exponent: i32) -> $ty {
                // The exponent is encoded using an offset-binary representation
                let exponent_bits: $u_scalar =
                    (($exponent_bias + exponent) as $u_scalar) << $fraction_bits;
                $ty::from_bits(self | exponent_bits)
            }
        }

        impl Distribution<$ty> for Standard {
            fn sample<R: Rng + ?Sized>(&self, rng: &mut R) -> $ty {
                // Multiply-based method; 24/53 random bits; [0, 1) interval.
                // We use the most significant bits because for simple RNGs
                // those are usually more random.
                let float_size = mem::size_of::<$f_scalar>() as u32 * 8;
                let precision = $fraction_bits + 1;
                let scale = 1.0 / ((1 as $u_scalar << precision) as $f_scalar);

                let value: $uty = rng.gen();
                let value = value >> (float_size - precision);
                scale * $ty::cast_from_int(value)
            }
        }

        impl Distribution<$ty> for OpenClosed01 {
            fn sample<R: Rng + ?Sized>(&self, rng: &mut R) -> $ty {
                // Multiply-based method; 24/53 random bits; (0, 1] interval.
                // We use the most significant bits because for simple RNGs
                // those are usually more random.
                let float_size = mem::size_of::<$f_scalar>() as u32 * 8;
                let precision = $fraction_bits + 1;
                let scale = 1.0 / ((1 as $u_scalar << precision) as $f_scalar);

                let value: $uty = rng.gen();
                let value = value >> (float_size - precision);
                // Add 1 to shift up; will not overflow because of right-shift:
                scale * $ty::cast_from_int(value + 1)
            }
        }

        impl Distribution<$ty> for Open01 {
            fn sample<R: Rng + ?Sized>(&self, rng: &mut R) -> $ty {
                // Transmute-based method; 23/52 random bits; (0, 1) interval.
                // We use the most significant bits because for simple RNGs
                // those are usually more random.
                use core::$f_scalar::EPSILON;
                let float_size = mem::size_of::<$f_scalar>() as u32 * 8;

                let value: $uty = rng.gen();
                let fraction = value >> (float_size - $fraction_bits);
                fraction.into_float_with_exponent(0) - (1.0 - EPSILON / 2.0)
            }
        }
    }
}

float_impls! { f32, u32, f32, u32, 23, 127 }
float_impls! { f64, u64, f64, u64, 52, 1023 }

#[cfg(feature = "simd_support")]
float_impls! { f32x2, u32x2, f32, u32, 23, 127 }
#[cfg(feature = "simd_support")]
float_impls! { f32x4, u32x4, f32, u32, 23, 127 }
#[cfg(feature = "simd_support")]
float_impls! { f32x8, u32x8, f32, u32, 23, 127 }
#[cfg(feature = "simd_support")]
float_impls! { f32x16, u32x16, f32, u32, 23, 127 }
#[cfg(feature = "simd_support")]
float_impls! { f64x2, u64x2, f64, u64, 52, 1023 }
#[cfg(feature = "simd_support")]
float_impls! { f64x4, u64x4, f64, u64, 52, 1023 }
#[cfg(feature = "simd_support")]
float_impls!
{ f64x8, u64x8, f64, u64, 52, 1023 } #[cfg(test)] mod tests { use super::*; use crate::rngs::mock::StepRng; const EPSILON32: f32 = ::core::f32::EPSILON; const EPSILON64: f64 = ::core::f64::EPSILON; macro_rules! test_f32 { ($fnn:ident, $ty:ident, $ZERO:expr, $EPSILON:expr) => { #[test] fn $fnn() { // Standard let mut zeros = StepRng::new(0, 0); assert_eq!(zeros.gen::<$ty>(), $ZERO); let mut one = StepRng::new(1 << 8 | 1 << (8 + 32), 0); assert_eq!(one.gen::<$ty>(), $EPSILON / 2.0); let mut max = StepRng::new(!0, 0); assert_eq!(max.gen::<$ty>(), 1.0 - $EPSILON / 2.0); // OpenClosed01 let mut zeros = StepRng::new(0, 0); assert_eq!(zeros.sample::<$ty, _>(OpenClosed01), 0.0 + $EPSILON / 2.0); let mut one = StepRng::new(1 << 8 | 1 << (8 + 32), 0); assert_eq!(one.sample::<$ty, _>(OpenClosed01), $EPSILON); let mut max = StepRng::new(!0, 0); assert_eq!(max.sample::<$ty, _>(OpenClosed01), $ZERO + 1.0); // Open01 let mut zeros = StepRng::new(0, 0); assert_eq!(zeros.sample::<$ty, _>(Open01), 0.0 + $EPSILON / 2.0); let mut one = StepRng::new(1 << 9 | 1 << (9 + 32), 0); assert_eq!(one.sample::<$ty, _>(Open01), $EPSILON / 2.0 * 3.0); let mut max = StepRng::new(!0, 0); assert_eq!(max.sample::<$ty, _>(Open01), 1.0 - $EPSILON / 2.0); } }; } test_f32! { f32_edge_cases, f32, 0.0, EPSILON32 } #[cfg(feature = "simd_support")] test_f32! { f32x2_edge_cases, f32x2, f32x2::splat(0.0), f32x2::splat(EPSILON32) } #[cfg(feature = "simd_support")] test_f32! { f32x4_edge_cases, f32x4, f32x4::splat(0.0), f32x4::splat(EPSILON32) } #[cfg(feature = "simd_support")] test_f32! { f32x8_edge_cases, f32x8, f32x8::splat(0.0), f32x8::splat(EPSILON32) } #[cfg(feature = "simd_support")] test_f32! { f32x16_edge_cases, f32x16, f32x16::splat(0.0), f32x16::splat(EPSILON32) } macro_rules! 
test_f64 { ($fnn:ident, $ty:ident, $ZERO:expr, $EPSILON:expr) => { #[test] fn $fnn() { // Standard let mut zeros = StepRng::new(0, 0); assert_eq!(zeros.gen::<$ty>(), $ZERO); let mut one = StepRng::new(1 << 11, 0); assert_eq!(one.gen::<$ty>(), $EPSILON / 2.0); let mut max = StepRng::new(!0, 0); assert_eq!(max.gen::<$ty>(), 1.0 - $EPSILON / 2.0); // OpenClosed01 let mut zeros = StepRng::new(0, 0); assert_eq!(zeros.sample::<$ty, _>(OpenClosed01), 0.0 + $EPSILON / 2.0); let mut one = StepRng::new(1 << 11, 0); assert_eq!(one.sample::<$ty, _>(OpenClosed01), $EPSILON); let mut max = StepRng::new(!0, 0); assert_eq!(max.sample::<$ty, _>(OpenClosed01), $ZERO + 1.0); // Open01 let mut zeros = StepRng::new(0, 0); assert_eq!(zeros.sample::<$ty, _>(Open01), 0.0 + $EPSILON / 2.0); let mut one = StepRng::new(1 << 12, 0); assert_eq!(one.sample::<$ty, _>(Open01), $EPSILON / 2.0 * 3.0); let mut max = StepRng::new(!0, 0); assert_eq!(max.sample::<$ty, _>(Open01), 1.0 - $EPSILON / 2.0); } }; } test_f64! { f64_edge_cases, f64, 0.0, EPSILON64 } #[cfg(feature = "simd_support")] test_f64! { f64x2_edge_cases, f64x2, f64x2::splat(0.0), f64x2::splat(EPSILON64) } #[cfg(feature = "simd_support")] test_f64! { f64x4_edge_cases, f64x4, f64x4::splat(0.0), f64x4::splat(EPSILON64) } #[cfg(feature = "simd_support")] test_f64! 
{ f64x8_edge_cases, f64x8, f64x8::splat(0.0), f64x8::splat(EPSILON64) }

    #[test]
    fn value_stability() {
        fn test_samples<T: Copy + core::fmt::Debug + PartialEq, D: Distribution<T>>(
            distr: &D, zero: T, expected: &[T],
        ) {
            let mut rng = crate::test::rng(0x6f44f5646c2a7334);
            let mut buf = [zero; 3];
            for x in &mut buf {
                *x = rng.sample(&distr);
            }
            assert_eq!(&buf, expected);
        }

        test_samples(&Standard, 0f32, &[0.0035963655, 0.7346052, 0.09778172]);
        test_samples(&Standard, 0f64, &[
            0.7346051961657583,
            0.20298547462974248,
            0.8166436635290655,
        ]);

        test_samples(&OpenClosed01, 0f32, &[0.003596425, 0.73460525, 0.09778178]);
        test_samples(&OpenClosed01, 0f64, &[
            0.7346051961657584,
            0.2029854746297426,
            0.8166436635290656,
        ]);

        test_samples(&Open01, 0f32, &[0.0035963655, 0.73460525, 0.09778172]);
        test_samples(&Open01, 0f64, &[
            0.7346051961657584,
            0.20298547462974248,
            0.8166436635290656,
        ]);

        #[cfg(feature = "simd_support")]
        {
            // We only test a sub-set of types here. Values are identical to
            // non-SIMD types; we assume this pattern continues across all
            // SIMD types.
            test_samples(&Standard, f32x2::new(0.0, 0.0), &[
                f32x2::new(0.0035963655, 0.7346052),
                f32x2::new(0.09778172, 0.20298547),
                f32x2::new(0.34296435, 0.81664366),
            ]);

            test_samples(&Standard, f64x2::new(0.0, 0.0), &[
                f64x2::new(0.7346051961657583, 0.20298547462974248),
                f64x2::new(0.8166436635290655, 0.7423708925400552),
                f64x2::new(0.16387782224016323, 0.9087068770169618),
            ]);
        }
    }
}
rand-0.8.4/src/distributions/integer.rs
// Copyright 2018 Developers of the Rand project.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// https://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or https://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.

//! The implementations of the `Standard` distribution for integer types.
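As an aside, the multiply-based conversion implemented by `float_impls!` above can be sketched as a standalone function. This is an illustrative re-derivation only, not part of the crate; the function name is invented. It keeps the 24 most significant bits of a `u32` and scales by Ρ/2 = 2^-24, matching the `StepRng` edge cases in the tests above.

```rust
/// Stand-alone sketch of the multiplicative `u32` -> `f32` conversion used
/// by the `Standard` float distribution: take the top 24 bits and scale by
/// Ξ΅/2 = 2^-24, yielding a value of the form `n * Ξ΅/2` in `[0, 1)`.
fn u32_to_f32_standard(x: u32) -> f32 {
    let float_size = 32u32;  // bits in an f32
    let precision = 23 + 1;  // fraction bits + implicit leading bit
    let scale = 1.0 / (1u32 << precision) as f32; // 2^-24
    scale * (x >> (float_size - precision)) as f32
}
```

With this sketch, an all-zero word maps to 0.0 and an all-ones word maps to `1.0 - f32::EPSILON / 2.0`, exactly the bounds asserted by `test_f32!` above.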
use crate::distributions::{Distribution, Standard};
use crate::Rng;
#[cfg(all(target_arch = "x86", feature = "simd_support"))]
use core::arch::x86::{__m128i, __m256i};
#[cfg(all(target_arch = "x86_64", feature = "simd_support"))]
use core::arch::x86_64::{__m128i, __m256i};
#[cfg(not(target_os = "emscripten"))] use core::num::NonZeroU128;
use core::num::{NonZeroU16, NonZeroU32, NonZeroU64, NonZeroU8, NonZeroUsize};
#[cfg(feature = "simd_support")] use packed_simd::*;

impl Distribution<u8> for Standard {
    #[inline]
    fn sample<R: Rng + ?Sized>(&self, rng: &mut R) -> u8 {
        rng.next_u32() as u8
    }
}

impl Distribution<u16> for Standard {
    #[inline]
    fn sample<R: Rng + ?Sized>(&self, rng: &mut R) -> u16 {
        rng.next_u32() as u16
    }
}

impl Distribution<u32> for Standard {
    #[inline]
    fn sample<R: Rng + ?Sized>(&self, rng: &mut R) -> u32 {
        rng.next_u32()
    }
}

impl Distribution<u64> for Standard {
    #[inline]
    fn sample<R: Rng + ?Sized>(&self, rng: &mut R) -> u64 {
        rng.next_u64()
    }
}

#[cfg(not(target_os = "emscripten"))]
impl Distribution<u128> for Standard {
    #[inline]
    fn sample<R: Rng + ?Sized>(&self, rng: &mut R) -> u128 {
        // Use LE; we explicitly generate one value before the next.
        let x = u128::from(rng.next_u64());
        let y = u128::from(rng.next_u64());
        (y << 64) | x
    }
}

impl Distribution<usize> for Standard {
    #[inline]
    #[cfg(any(target_pointer_width = "32", target_pointer_width = "16"))]
    fn sample<R: Rng + ?Sized>(&self, rng: &mut R) -> usize {
        rng.next_u32() as usize
    }

    #[inline]
    #[cfg(target_pointer_width = "64")]
    fn sample<R: Rng + ?Sized>(&self, rng: &mut R) -> usize {
        rng.next_u64() as usize
    }
}

macro_rules! impl_int_from_uint {
    ($ty:ty, $uty:ty) => {
        impl Distribution<$ty> for Standard {
            #[inline]
            fn sample<R: Rng + ?Sized>(&self, rng: &mut R) -> $ty {
                rng.gen::<$uty>() as $ty
            }
        }
    };
}

impl_int_from_uint! { i8, u8 }
impl_int_from_uint! { i16, u16 }
impl_int_from_uint! { i32, u32 }
impl_int_from_uint! { i64, u64 }
#[cfg(not(target_os = "emscripten"))] impl_int_from_uint! { i128, u128 }
impl_int_from_uint! { isize, usize }

macro_rules!
impl_nzint {
    ($ty:ty, $new:path) => {
        impl Distribution<$ty> for Standard {
            fn sample<R: Rng + ?Sized>(&self, rng: &mut R) -> $ty {
                loop {
                    if let Some(nz) = $new(rng.gen()) {
                        break nz;
                    }
                }
            }
        }
    };
}

impl_nzint!(NonZeroU8, NonZeroU8::new);
impl_nzint!(NonZeroU16, NonZeroU16::new);
impl_nzint!(NonZeroU32, NonZeroU32::new);
impl_nzint!(NonZeroU64, NonZeroU64::new);
#[cfg(not(target_os = "emscripten"))]
impl_nzint!(NonZeroU128, NonZeroU128::new);
impl_nzint!(NonZeroUsize, NonZeroUsize::new);

#[cfg(feature = "simd_support")]
macro_rules! simd_impl {
    ($(($intrinsic:ident, $vec:ty),)+) => {$(
        impl Distribution<$intrinsic> for Standard {
            #[inline]
            fn sample<R: Rng + ?Sized>(&self, rng: &mut R) -> $intrinsic {
                $intrinsic::from_bits(rng.gen::<$vec>())
            }
        }
    )+};

    ($bits:expr,) => {};
    ($bits:expr, $ty:ty, $($ty_more:ty,)*) => {
        simd_impl!($bits, $($ty_more,)*);

        impl Distribution<$ty> for Standard {
            #[inline]
            fn sample<R: Rng + ?Sized>(&self, rng: &mut R) -> $ty {
                let mut vec: $ty = Default::default();
                unsafe {
                    let ptr = &mut vec;
                    let b_ptr = &mut *(ptr as *mut $ty as *mut [u8; $bits/8]);
                    rng.fill_bytes(b_ptr);
                }
                vec.to_le()
            }
        }
    };
}

#[cfg(feature = "simd_support")] simd_impl!(16, u8x2, i8x2,);
#[cfg(feature = "simd_support")] simd_impl!(32, u8x4, i8x4, u16x2, i16x2,);
#[cfg(feature = "simd_support")] simd_impl!(64, u8x8, i8x8, u16x4, i16x4, u32x2, i32x2,);
#[cfg(feature = "simd_support")] simd_impl!(128, u8x16, i8x16, u16x8, i16x8, u32x4, i32x4, u64x2, i64x2,);
#[cfg(feature = "simd_support")] simd_impl!(256, u8x32, i8x32, u16x16, i16x16, u32x8, i32x8, u64x4, i64x4,);
#[cfg(feature = "simd_support")] simd_impl!(512, u8x64, i8x64, u16x32, i16x32, u32x16, i32x16, u64x8, i64x8,);
#[cfg(all(
    feature = "simd_support",
    any(target_arch = "x86", target_arch = "x86_64")
))]
simd_impl!((__m128i, u8x16), (__m256i, u8x32),);

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_integers() {
        let mut rng = crate::test::rng(806);

        rng.sample::<isize, _>(Standard);
        rng.sample::<i8, _>(Standard);
        rng.sample::<i16, _>(Standard);
        rng.sample::<i32, _>(Standard);
        rng.sample::<i64, _>(Standard);
        #[cfg(not(target_os = "emscripten"))]
        rng.sample::<i128, _>(Standard);

        rng.sample::<usize, _>(Standard);
        rng.sample::<u8, _>(Standard);
        rng.sample::<u16, _>(Standard);
        rng.sample::<u32, _>(Standard);
        rng.sample::<u64, _>(Standard);
        #[cfg(not(target_os = "emscripten"))]
        rng.sample::<u128, _>(Standard);
    }

    #[test]
    fn value_stability() {
        fn test_samples<T: Copy + PartialEq + core::fmt::Debug>(zero: T, expected: &[T])
        where Standard: Distribution<T> {
            let mut rng = crate::test::rng(807);
            let mut buf = [zero; 3];
            for x in &mut buf {
                *x = rng.sample(Standard);
            }
            assert_eq!(&buf, expected);
        }

        test_samples(0u8, &[9, 247, 111]);
        test_samples(0u16, &[32265, 42999, 38255]);
        test_samples(0u32, &[2220326409, 2575017975, 2018088303]);
        test_samples(0u64, &[
            11059617991457472009,
            16096616328739788143,
            1487364411147516184,
        ]);
        test_samples(0u128, &[
            296930161868957086625409848350820761097,
            145644820879247630242265036535529306392,
            111087889832015897993126088499035356354,
        ]);
        #[cfg(any(target_pointer_width = "32", target_pointer_width = "16"))]
        test_samples(0usize, &[2220326409, 2575017975, 2018088303]);
        #[cfg(target_pointer_width = "64")]
        test_samples(0usize, &[
            11059617991457472009,
            16096616328739788143,
            1487364411147516184,
        ]);

        test_samples(0i8, &[9, -9, 111]);
        // Skip further i* types: they are simple reinterpretation of u* samples

        #[cfg(feature = "simd_support")]
        {
            // We only test a sub-set of types here and make assumptions about the rest.
            test_samples(u8x2::default(), &[
                u8x2::new(9, 126),
                u8x2::new(247, 167),
                u8x2::new(111, 149),
            ]);
            test_samples(u8x4::default(), &[
                u8x4::new(9, 126, 87, 132),
                u8x4::new(247, 167, 123, 153),
                u8x4::new(111, 149, 73, 120),
            ]);
            test_samples(u8x8::default(), &[
                u8x8::new(9, 126, 87, 132, 247, 167, 123, 153),
                u8x8::new(111, 149, 73, 120, 68, 171, 98, 223),
                u8x8::new(24, 121, 1, 50, 13, 46, 164, 20),
            ]);

            test_samples(i64x8::default(), &[
                i64x8::new(
                    -7387126082252079607,
                    -2350127744969763473,
                    1487364411147516184,
                    7895421560427121838,
                    602190064936008898,
                    6022086574635100741,
                    -5080089175222015595,
                    -4066367846667249123,
                ),
                i64x8::new(
                    9180885022207963908,
                    3095981199532211089,
                    6586075293021332726,
                    419343203796414657,
                    3186951873057035255,
                    5287129228749947252,
                    444726432079249540,
                    -1587028029513790706,
                ),
                i64x8::new(
                    6075236523189346388,
                    1351763722368165432,
                    -6192309979959753740,
                    -7697775502176768592,
                    -4482022114172078123,
                    7522501477800909500,
                    -1837258847956201231,
                    -586926753024886735,
                ),
            ]);
        }
    }
}
rand-0.8.4/src/distributions/mod.rs
// Copyright 2018 Developers of the Rand project.
// Copyright 2013-2017 The Rust Project Developers.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// https://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or https://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.

//! Generating random samples from probability distributions
//!
//! This module is the home of the [`Distribution`] trait and several of its
//! implementations. It is the workhorse behind some of the convenient
//! functionality of the [`Rng`] trait, e.g. [`Rng::gen`] and of course
//! [`Rng::sample`].
//!
//! Abstractly, a [probability distribution] describes the probability of
//! occurrence of each value in its sample space.
//!
//! More concretely, an implementation of `Distribution<T>` for type `X` is an
//! algorithm for choosing values from the sample space (a subset of `T`)
//!
according to the distribution `X` represents, using an external source of
//! randomness (an RNG supplied to the `sample` function).
//!
//! A type `X` may implement `Distribution<T>` for multiple types `T`.
//! Any type implementing [`Distribution`] is stateless (i.e. immutable),
//! but it may have internal parameters set at construction time (for example,
//! [`Uniform`] allows specification of its sample space as a range within `T`).
//!
//!
//! # The `Standard` distribution
//!
//! The [`Standard`] distribution is important to mention. This is the
//! distribution used by [`Rng::gen`] and represents the "default" way to
//! produce a random value for many different types, including most primitive
//! types, tuples, arrays, and a few derived types. See the documentation of
//! [`Standard`] for more details.
//!
//! Implementing `Distribution<T>` for [`Standard`] for user types `T` makes it
//! possible to generate type `T` with [`Rng::gen`], and by extension also
//! with the [`random`] function.
//!
//! ## Random characters
//!
//! [`Alphanumeric`] is a simple distribution to sample random letters and
//! numbers of the `char` type; in contrast [`Standard`] may sample any valid
//! `char`.
//!
//!
//! # Uniform numeric ranges
//!
//! The [`Uniform`] distribution is more flexible than [`Standard`], but also
//! more specialised: it supports fewer target types, but allows the sample
//! space to be specified as an arbitrary range within its target type `T`.
//! Both [`Standard`] and [`Uniform`] are in some sense uniform distributions.
//!
//! Values may be sampled from this distribution using [`Rng::sample(Range)`] or
//! by creating a distribution object with [`Uniform::new`],
//! [`Uniform::new_inclusive`] or `From<Range>`. When the range limits are not
//! known at compile time it is typically faster to reuse an existing
//! `Uniform` object than to call [`Rng::sample(Range)`].
//!
//! User types `T` may also implement `Distribution<T>` for [`Uniform`],
//!
although this is less straightforward than for [`Standard`] (see the //! documentation in the [`uniform`] module). Doing so enables generation of //! values of type `T` with [`Rng::sample(Range)`]. //! //! ## Open and half-open ranges //! //! There are surprisingly many ways to uniformly generate random floats. A //! range between 0 and 1 is standard, but the exact bounds (open vs closed) //! and accuracy differ. In addition to the [`Standard`] distribution Rand offers //! [`Open01`] and [`OpenClosed01`]. See "Floating point implementation" section of //! [`Standard`] documentation for more details. //! //! # Non-uniform sampling //! //! Sampling a simple true/false outcome with a given probability has a name: //! the [`Bernoulli`] distribution (this is used by [`Rng::gen_bool`]). //! //! For weighted sampling from a sequence of discrete values, use the //! [`WeightedIndex`] distribution. //! //! This crate no longer includes other non-uniform distributions; instead //! it is recommended that you use either [`rand_distr`] or [`statrs`]. //! //! //! [probability distribution]: https://en.wikipedia.org/wiki/Probability_distribution //! [`rand_distr`]: https://crates.io/crates/rand_distr //! [`statrs`]: https://crates.io/crates/statrs //! [`random`]: crate::random //! [`rand_distr`]: https://crates.io/crates/rand_distr //! 
[`statrs`]: https://crates.io/crates/statrs

mod bernoulli;
mod distribution;
mod float;
mod integer;
mod other;
mod slice;
mod utils;
#[cfg(feature = "alloc")] mod weighted_index;

#[doc(hidden)]
pub mod hidden_export {
    pub use super::float::IntoFloat; // used by rand_distr
}
pub mod uniform;
#[deprecated(
    since = "0.8.0",
    note = "use rand::distributions::{WeightedIndex, WeightedError} instead"
)]
#[cfg(feature = "alloc")]
#[cfg_attr(doc_cfg, doc(cfg(feature = "alloc")))]
pub mod weighted;

pub use self::bernoulli::{Bernoulli, BernoulliError};
pub use self::distribution::{Distribution, DistIter, DistMap};
#[cfg(feature = "alloc")]
pub use self::distribution::DistString;
pub use self::float::{Open01, OpenClosed01};
pub use self::other::Alphanumeric;
pub use self::slice::Slice;
#[doc(inline)] pub use self::uniform::Uniform;
#[cfg(feature = "alloc")]
pub use self::weighted_index::{WeightedError, WeightedIndex};

#[allow(unused)]
use crate::Rng;

/// A generic random value distribution, implemented for many primitive types.
/// Usually generates values with a numerically uniform distribution, and with a
/// range appropriate to the type.
///
/// ## Provided implementations
///
/// Assuming the provided `Rng` is well-behaved, these implementations
/// generate values with the following ranges and distributions:
///
/// * Integers (`i32`, `u32`, `isize`, `usize`, etc.): Uniformly distributed
///   over all values of the type.
/// * `char`: Uniformly distributed over all Unicode scalar values, i.e. all
///   code points in the range `0...0x10_FFFF`, except for the range
///   `0xD800...0xDFFF` (the surrogate code points). This includes
///   unassigned/reserved code points.
/// * `bool`: Generates `false` or `true`, each with probability 0.5.
/// * Floating point types (`f32` and `f64`): Uniformly distributed in the
///   half-open range `[0, 1)`. See notes below.
/// * Wrapping integers (`Wrapping<T>`), besides the type identical to their
///   normal integer variants.
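The `char` bullet above works by skipping the surrogate "gap". A hedged standalone sketch of that decode step follows, mirroring the `Distribution<char>` implementation in `other.rs`; the helper name is an invention for illustration:

```rust
/// Map a uniform draw `n` from `[0x800, 0x11_0000)` onto a `char`, skipping
/// the surrogate code points `0xD800..=0xDFFF`: draws at or below 0xDFFF are
/// shifted down by the gap size (0x800); larger draws are used directly.
fn decode_char(n: u32) -> char {
    const GAP_SIZE: u32 = 0xDFFF - 0xD800 + 1; // 0x800
    assert!((GAP_SIZE..0x11_0000).contains(&n));
    let c = if n <= 0xDFFF { n - GAP_SIZE } else { n };
    char::from_u32(c).expect("surrogates were skipped, so c is a scalar value")
}
```

Every draw in the input range lands on a valid scalar value, so each `char` (including unassigned code points) is produced with equal probability.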
///
/// The `Standard` distribution also supports generation of the following
/// compound types where all component types are supported:
///
/// * Tuples (up to 12 elements): each element is generated sequentially.
/// * Arrays (up to 32 elements): each element is generated sequentially;
///   see also [`Rng::fill`] which supports arbitrary array length for integer
///   types and tends to be faster for `u32` and smaller types.
///   When using `rustc` β‰₯ 1.51, enable the `min_const_gen` feature to support
///   arrays larger than 32 elements.
///   Note that [`Rng::fill`] and `Standard`'s array support are *not* equivalent:
///   the former is optimised for integer types (using fewer RNG calls for
///   element types smaller than the RNG word size), while the latter supports
///   any element type supported by `Standard`.
/// * `Option<T>` first generates a `bool`, and if true generates and returns
///   `Some(value)` where `value: T`, otherwise returning `None`.
///
/// ## Custom implementations
///
/// The [`Standard`] distribution may be implemented for user types as follows:
///
/// ```
/// # #![allow(dead_code)]
/// use rand::Rng;
/// use rand::distributions::{Distribution, Standard};
///
/// struct MyF32 {
///     x: f32,
/// }
///
/// impl Distribution<MyF32> for Standard {
///     fn sample<R: Rng + ?Sized>(&self, rng: &mut R) -> MyF32 {
///         MyF32 { x: rng.gen() }
///     }
/// }
/// ```
///
/// ## Example usage
/// ```
/// use rand::prelude::*;
/// use rand::distributions::Standard;
///
/// let val: f32 = StdRng::from_entropy().sample(Standard);
/// println!("f32 from [0, 1): {}", val);
/// ```
///
/// # Floating point implementation
/// The floating point implementations for `Standard` generate a random value in
/// the half-open interval `[0, 1)`, i.e. including 0 but not 1.
///
/// All values that can be generated are of the form `n * Ξ΅/2`. For `f32`
/// the 24 most significant random bits of a `u32` are used and for `f64` the
/// 53 most significant bits of a `u64` are used.
The conversion uses the
/// multiplicative method: `(rng.gen::<$uty>() >> N) as $ty * (Ξ΅/2)`.
///
/// See also: [`Open01`] which samples from `(0, 1)`, [`OpenClosed01`] which
/// samples from `(0, 1]` and `Rng::gen_range(0..1)` which also samples from
/// `[0, 1)`. Note that `Open01` uses transmute-based methods which yield 1 bit
/// less precision but may perform faster on some architectures (on modern Intel
/// CPUs all methods have approximately equal performance).
///
/// [`Uniform`]: uniform::Uniform
#[derive(Clone, Copy, Debug)]
#[cfg_attr(feature = "serde1", derive(serde::Serialize, serde::Deserialize))]
pub struct Standard;
rand-0.8.4/src/distributions/other.rs
// Copyright 2018 Developers of the Rand project.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// https://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or https://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.

//! The implementations of the `Standard` distribution for other built-in types.

use core::char;
use core::num::Wrapping;
#[cfg(feature = "alloc")] use alloc::string::String;

use crate::distributions::{Distribution, Standard, Uniform};
#[cfg(feature = "alloc")]
use crate::distributions::DistString;
use crate::Rng;

#[cfg(feature = "serde1")]
use serde::{Serialize, Deserialize};
#[cfg(feature = "min_const_gen")]
use std::mem::{self, MaybeUninit};

// ----- Sampling distributions -----

/// Sample a `u8`, uniformly distributed over ASCII letters and numbers:
/// a-z, A-Z and 0-9.
///
/// # Example
///
/// ```
/// use std::iter;
/// use rand::{Rng, thread_rng};
/// use rand::distributions::Alphanumeric;
///
/// let mut rng = thread_rng();
/// let chars: String = iter::repeat(())
///         .map(|()| rng.sample(Alphanumeric))
///         .map(char::from)
///         .take(7)
///         .collect();
/// println!("Random chars: {}", chars);
/// ```
///
/// # Passwords
///
/// Users sometimes ask whether it is safe to use a string of random characters
/// as a password. In principle, all RNGs in Rand implementing `CryptoRng` are
/// suitable as a source of randomness for generating passwords (if they are
/// properly seeded), but it is more conservative to only use randomness
/// directly from the operating system via the `getrandom` crate, or the
/// corresponding bindings of a crypto library.
///
/// When generating passwords or keys, it is important to consider the threat
/// model and in some cases the memorability of the password. This is out of
/// scope of the Rand project, and therefore we defer to the following
/// references:
///
/// - [Wikipedia article on Password Strength](https://en.wikipedia.org/wiki/Password_strength)
/// - [Diceware for generating memorable passwords](https://en.wikipedia.org/wiki/Diceware)
#[derive(Debug, Clone, Copy)]
#[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))]
pub struct Alphanumeric;

// ----- Implementations of distributions -----

impl Distribution<char> for Standard {
    #[inline]
    fn sample<R: Rng + ?Sized>(&self, rng: &mut R) -> char {
        // A valid `char` is either in the interval `[0, 0xD800)` or
        // `(0xDFFF, 0x11_0000)`. All `char`s must therefore be in
        // `[0, 0x11_0000)` but not in the "gap" `[0xD800, 0xDFFF]` which is
        // reserved for surrogates. This is the size of that gap.
        const GAP_SIZE: u32 = 0xDFFF - 0xD800 + 1;

        // Uniform::new(0, 0x11_0000 - GAP_SIZE) can also be used but it
        // seemed slower.
        let range = Uniform::new(GAP_SIZE, 0x11_0000);

        let mut n = range.sample(rng);
        if n <= 0xDFFF {
            n -= GAP_SIZE;
        }
        unsafe { char::from_u32_unchecked(n) }
    }
}

/// Note: the `String` is potentially left with excess capacity; optionally the
/// user may call `string.shrink_to_fit()` afterwards.
#[cfg(feature = "alloc")]
impl DistString for Standard {
    fn append_string<R: Rng + ?Sized>(&self, rng: &mut R, s: &mut String, len: usize) {
        // A char is encoded with at most four bytes, thus this reservation is
        // guaranteed to be sufficient. We do not shrink_to_fit afterwards so
        // that repeated usage on the same `String` buffer does not reallocate.
        s.reserve(4 * len);
        s.extend(Distribution::<char>::sample_iter(self, rng).take(len));
    }
}

impl Distribution<u8> for Alphanumeric {
    fn sample<R: Rng + ?Sized>(&self, rng: &mut R) -> u8 {
        const RANGE: u32 = 26 + 26 + 10;
        const GEN_ASCII_STR_CHARSET: &[u8] = b"ABCDEFGHIJKLMNOPQRSTUVWXYZ\
                abcdefghijklmnopqrstuvwxyz\
                0123456789";
        // We can pick from 62 characters. This is so close to a power of 2, 64,
        // that we can do better than `Uniform`. Use a simple bitshift and
        // rejection sampling. We do not use a bitmask, because for small RNGs
        // the most significant bits are usually of higher quality.
        loop {
            let var = rng.next_u32() >> (32 - 6);
            if var < RANGE {
                return GEN_ASCII_STR_CHARSET[var as usize];
            }
        }
    }
}

#[cfg(feature = "alloc")]
impl DistString for Alphanumeric {
    fn append_string<R: Rng + ?Sized>(&self, rng: &mut R, string: &mut String, len: usize) {
        unsafe {
            let v = string.as_mut_vec();
            v.extend(self.sample_iter(rng).take(len));
        }
    }
}

impl Distribution<bool> for Standard {
    #[inline]
    fn sample<R: Rng + ?Sized>(&self, rng: &mut R) -> bool {
        // We can compare against an arbitrary bit of an u32 to get a bool.
        // Because the least significant bits of a lower quality RNG can have
        // simple patterns, we compare against the most significant bit. This is
        // easiest done using a sign test.
        (rng.next_u32() as i32) < 0
    }
}

macro_rules!
tuple_impl { // use variables to indicate the arity of the tuple ($($tyvar:ident),* ) => { // the trailing commas are for the 1 tuple impl< $( $tyvar ),* > Distribution<( $( $tyvar ),* , )> for Standard where $( Standard: Distribution<$tyvar> ),* { #[inline] fn sample(&self, _rng: &mut R) -> ( $( $tyvar ),* , ) { ( // use the $tyvar's to get the appropriate number of // repeats (they're not actually needed) $( _rng.gen::<$tyvar>() ),* , ) } } } } impl Distribution<()> for Standard { #[allow(clippy::unused_unit)] #[inline] fn sample(&self, _: &mut R) -> () { () } } tuple_impl! {A} tuple_impl! {A, B} tuple_impl! {A, B, C} tuple_impl! {A, B, C, D} tuple_impl! {A, B, C, D, E} tuple_impl! {A, B, C, D, E, F} tuple_impl! {A, B, C, D, E, F, G} tuple_impl! {A, B, C, D, E, F, G, H} tuple_impl! {A, B, C, D, E, F, G, H, I} tuple_impl! {A, B, C, D, E, F, G, H, I, J} tuple_impl! {A, B, C, D, E, F, G, H, I, J, K} tuple_impl! {A, B, C, D, E, F, G, H, I, J, K, L} #[cfg(feature = "min_const_gen")] impl Distribution<[T; N]> for Standard where Standard: Distribution { #[inline] fn sample(&self, _rng: &mut R) -> [T; N] { let mut buff: [MaybeUninit; N] = unsafe { MaybeUninit::uninit().assume_init() }; for elem in &mut buff { *elem = MaybeUninit::new(_rng.gen()); } unsafe { mem::transmute_copy::<_, _>(&buff) } } } #[cfg(not(feature = "min_const_gen"))] macro_rules! array_impl { // recursive, given at least one type parameter: {$n:expr, $t:ident, $($ts:ident,)*} => { array_impl!{($n - 1), $($ts,)*} impl Distribution<[T; $n]> for Standard where Standard: Distribution { #[inline] fn sample(&self, _rng: &mut R) -> [T; $n] { [_rng.gen::<$t>(), $(_rng.gen::<$ts>()),*] } } }; // empty case: {$n:expr,} => { impl Distribution<[T; $n]> for Standard { fn sample(&self, _rng: &mut R) -> [T; $n] { [] } } }; } #[cfg(not(feature = "min_const_gen"))] array_impl! 
{32, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T,} impl Distribution> for Standard where Standard: Distribution { #[inline] fn sample(&self, rng: &mut R) -> Option { // UFCS is needed here: https://github.com/rust-lang/rust/issues/24066 if rng.gen::() { Some(rng.gen()) } else { None } } } impl Distribution> for Standard where Standard: Distribution { #[inline] fn sample(&self, rng: &mut R) -> Wrapping { Wrapping(rng.gen()) } } #[cfg(test)] mod tests { use super::*; use crate::RngCore; #[cfg(feature = "alloc")] use alloc::string::String; #[test] fn test_misc() { let rng: &mut dyn RngCore = &mut crate::test::rng(820); rng.sample::(Standard); rng.sample::(Standard); } #[cfg(feature = "alloc")] #[test] fn test_chars() { use core::iter; let mut rng = crate::test::rng(805); // Test by generating a relatively large number of chars, so we also // take the rejection sampling path. let word: String = iter::repeat(()) .map(|()| rng.gen::()) .take(1000) .collect(); assert!(!word.is_empty()); } #[test] fn test_alphanumeric() { let mut rng = crate::test::rng(806); // Test by generating a relatively large number of chars, so we also // take the rejection sampling path. 
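The bitshift-and-rejection scheme used by `Alphanumeric` above can be sketched in isolation. This is a minimal illustration, not rand's actual code: the linear-congruential generator stands in for a real RNG and its constants are purely illustrative.

```rust
// A minimal sketch of `Alphanumeric`'s sampling loop: take the top 6 bits
// of a 32-bit output (values 0..64) and reject 62 and 63.
const CHARSET: &[u8] = b"ABCDEFGHIJKLMNOPQRSTUVWXYZ\
                         abcdefghijklmnopqrstuvwxyz\
                         0123456789";

/// Toy RNG (Numerical-Recipes-style LCG), for demonstration only.
fn next_u32(state: &mut u32) -> u32 {
    *state = state.wrapping_mul(1664525).wrapping_add(1013904223);
    *state
}

/// Rejection-sample one alphanumeric byte from the top 6 bits of the RNG.
fn sample_alnum(state: &mut u32) -> u8 {
    loop {
        let var = next_u32(state) >> (32 - 6); // 0..64
        if var < 62 {
            return CHARSET[var as usize]; // accept; otherwise retry
        }
    }
}

fn main() {
    let mut state = 0x1234_5678;
    let word: String = (0..16).map(|_| sample_alnum(&mut state) as char).collect();
    assert_eq!(word.len(), 16);
    assert!(word.chars().all(|c| c.is_ascii_alphanumeric()));
    println!("{}", word);
}
```

Using the most significant bits here mirrors the comment above: for weak RNGs the high bits are usually of better quality than the low bits.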
let mut incorrect = false; for _ in 0..100 { let c: char = rng.sample(Alphanumeric).into(); incorrect |= !(('0'..='9').contains(&c) || ('A'..='Z').contains(&c) || ('a'..='z').contains(&c) ); } assert!(!incorrect); } #[test] fn value_stability() { fn test_samples>( distr: &D, zero: T, expected: &[T], ) { let mut rng = crate::test::rng(807); let mut buf = [zero; 5]; for x in &mut buf { *x = rng.sample(&distr); } assert_eq!(&buf, expected); } test_samples(&Standard, 'a', &[ '\u{8cdac}', '\u{a346a}', '\u{80120}', '\u{ed692}', '\u{35888}', ]); test_samples(&Alphanumeric, 0, &[104, 109, 101, 51, 77]); test_samples(&Standard, false, &[true, true, false, true, false]); test_samples(&Standard, None as Option, &[ Some(true), None, Some(false), None, Some(false), ]); test_samples(&Standard, Wrapping(0i32), &[ Wrapping(-2074640887), Wrapping(-1719949321), Wrapping(2018088303), Wrapping(-547181756), Wrapping(838957336), ]); // We test only sub-sets of tuple and array impls test_samples(&Standard, (), &[(), (), (), (), ()]); test_samples(&Standard, (false,), &[ (true,), (true,), (false,), (true,), (false,), ]); test_samples(&Standard, (false, false), &[ (true, true), (false, true), (false, false), (true, false), (false, false), ]); test_samples(&Standard, [0u8; 0], &[[], [], [], [], []]); test_samples(&Standard, [0u8; 3], &[ [9, 247, 111], [68, 24, 13], [174, 19, 194], [172, 69, 213], [149, 207, 29], ]); } } rand-0.8.4/src/distributions/slice.rs000064400000000000000000000074500000000000000157020ustar 00000000000000// Copyright 2021 Developers of the Rand project. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. use crate::distributions::{Distribution, Uniform}; /// A distribution to sample items uniformly from a slice. 
/// /// [`Slice::new`] constructs a distribution referencing a slice and uniformly /// samples references from the items in the slice. It may do extra work up /// front to make sampling of multiple values faster; if only one sample from /// the slice is required, [`SliceRandom::choose`] can be more efficient. /// /// Steps are taken to avoid bias which might be present in naive /// implementations; for example `slice[rng.gen() % slice.len()]` samples from /// the slice, but may be more likely to select numbers in the low range than /// other values. /// /// This distribution samples with replacement; each sample is independent. /// Sampling without replacement requires state to be retained, and therefore /// cannot be handled by a distribution; you should instead consider methods /// on [`SliceRandom`], such as [`SliceRandom::choose_multiple`]. /// /// # Example /// /// ``` /// use rand::Rng; /// use rand::distributions::Slice; /// /// let vowels = ['a', 'e', 'i', 'o', 'u']; /// let vowels_dist = Slice::new(&vowels).unwrap(); /// let rng = rand::thread_rng(); /// /// // build a string of 10 vowels /// let vowel_string: String = rng /// .sample_iter(&vowels_dist) /// .take(10) /// .collect(); /// /// println!("{}", vowel_string); /// assert_eq!(vowel_string.len(), 10); /// assert!(vowel_string.chars().all(|c| vowels.contains(&c))); /// ``` /// /// For a single sample, [`SliceRandom::choose`][crate::seq::SliceRandom::choose] /// may be preferred: /// /// ``` /// use rand::seq::SliceRandom; /// /// let vowels = ['a', 'e', 'i', 'o', 'u']; /// let mut rng = rand::thread_rng(); /// /// println!("{}", vowels.choose(&mut rng).unwrap()) /// ``` /// /// [`SliceRandom`]: crate::seq::SliceRandom /// [`SliceRandom::choose`]: crate::seq::SliceRandom::choose /// [`SliceRandom::choose_multiple`]: crate::seq::SliceRandom::choose_multiple #[derive(Debug, Clone, Copy)] pub struct Slice<'a, T> { slice: &'a [T], range: Uniform, } impl<'a, T> Slice<'a, T> { /// Create a new `Slice` 
instance which samples uniformly from the slice. /// Returns `Err` if the slice is empty. pub fn new(slice: &'a [T]) -> Result { match slice.len() { 0 => Err(EmptySlice), len => Ok(Self { slice, range: Uniform::new(0, len), }), } } } impl<'a, T> Distribution<&'a T> for Slice<'a, T> { fn sample(&self, rng: &mut R) -> &'a T { let idx = self.range.sample(rng); debug_assert!( idx < self.slice.len(), "Uniform::new(0, {}) somehow returned {}", self.slice.len(), idx ); // Safety: at construction time, it was ensured that the slice was // non-empty, and that the `Uniform` range produces values in range // for the slice unsafe { self.slice.get_unchecked(idx) } } } /// Error type indicating that a [`Slice`] distribution was improperly /// constructed with an empty slice. #[derive(Debug, Clone, Copy)] pub struct EmptySlice; impl core::fmt::Display for EmptySlice { fn fmt(&self, f: &mut core::fmt::Formatter<'_>) -> core::fmt::Result { write!( f, "Tried to create a `distributions::Slice` with an empty slice" ) } } #[cfg(feature = "std")] impl std::error::Error for EmptySlice {} rand-0.8.4/src/distributions/uniform.rs000064400000000000000000001724610000000000000162670ustar 00000000000000// Copyright 2018-2020 Developers of the Rand project. // Copyright 2017 The Rust Project Developers. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. //! A distribution uniformly sampling numbers within a given range. //! //! [`Uniform`] is the standard distribution to sample uniformly from a range; //! e.g. `Uniform::new_inclusive(1, 6)` can sample integers from 1 to 6, like a //! standard die. [`Rng::gen_range`] supports any type supported by //! [`Uniform`]. //! //! This distribution is provided with support for several primitive types //! (all integer and floating-point types) as well as [`std::time::Duration`], //! 
and supports extension to user-defined types via a type-specific *back-end* //! implementation. //! //! The types [`UniformInt`], [`UniformFloat`] and [`UniformDuration`] are the //! back-ends supporting sampling from primitive integer and floating-point //! ranges as well as from [`std::time::Duration`]; these types do not normally //! need to be used directly (unless implementing a derived back-end). //! //! # Example usage //! //! ``` //! use rand::{Rng, thread_rng}; //! use rand::distributions::Uniform; //! //! let mut rng = thread_rng(); //! let side = Uniform::new(-10.0, 10.0); //! //! // sample between 1 and 10 points //! for _ in 0..rng.gen_range(1..=10) { //! // sample a point from the square with sides -10 - 10 in two dimensions //! let (x, y) = (rng.sample(side), rng.sample(side)); //! println!("Point: {}, {}", x, y); //! } //! ``` //! //! # Extending `Uniform` to support a custom type //! //! To extend [`Uniform`] to support your own types, write a back-end which //! implements the [`UniformSampler`] trait, then implement the [`SampleUniform`] //! helper trait to "register" your back-end. See the `MyF32` example below. //! //! At a minimum, the back-end needs to store any parameters needed for sampling //! (e.g. the target range) and implement `new`, `new_inclusive` and `sample`. //! Those methods should include an assert to check the range is valid (i.e. //! `low < high`). The example below merely wraps another back-end. //! //! The `new`, `new_inclusive` and `sample_single` functions use arguments of //! type SampleBorrow in order to support passing in values by reference or //! by value. In the implementation of these functions, you can choose to //! simply use the reference returned by [`SampleBorrow::borrow`], or you can choose //! to copy or clone the value, whatever is appropriate for your type. //! //! ``` //! use rand::prelude::*; //! use rand::distributions::uniform::{Uniform, SampleUniform, //! UniformSampler, UniformFloat, SampleBorrow}; //! 
//! struct MyF32(f32); //! //! #[derive(Clone, Copy, Debug)] //! struct UniformMyF32(UniformFloat); //! //! impl UniformSampler for UniformMyF32 { //! type X = MyF32; //! fn new(low: B1, high: B2) -> Self //! where B1: SampleBorrow + Sized, //! B2: SampleBorrow + Sized //! { //! UniformMyF32(UniformFloat::::new(low.borrow().0, high.borrow().0)) //! } //! fn new_inclusive(low: B1, high: B2) -> Self //! where B1: SampleBorrow + Sized, //! B2: SampleBorrow + Sized //! { //! UniformSampler::new(low, high) //! } //! fn sample(&self, rng: &mut R) -> Self::X { //! MyF32(self.0.sample(rng)) //! } //! } //! //! impl SampleUniform for MyF32 { //! type Sampler = UniformMyF32; //! } //! //! let (low, high) = (MyF32(17.0f32), MyF32(22.0f32)); //! let uniform = Uniform::new(low, high); //! let x = uniform.sample(&mut thread_rng()); //! ``` //! //! [`SampleUniform`]: crate::distributions::uniform::SampleUniform //! [`UniformSampler`]: crate::distributions::uniform::UniformSampler //! [`UniformInt`]: crate::distributions::uniform::UniformInt //! [`UniformFloat`]: crate::distributions::uniform::UniformFloat //! [`UniformDuration`]: crate::distributions::uniform::UniformDuration //! [`SampleBorrow::borrow`]: crate::distributions::uniform::SampleBorrow::borrow #[cfg(not(feature = "std"))] use core::time::Duration; #[cfg(feature = "std")] use std::time::Duration; use core::ops::{Range, RangeInclusive}; use crate::distributions::float::IntoFloat; use crate::distributions::utils::{BoolAsSIMD, FloatAsSIMD, FloatSIMDUtils, WideningMultiply}; use crate::distributions::Distribution; use crate::{Rng, RngCore}; #[cfg(not(feature = "std"))] #[allow(unused_imports)] // rustc doesn't detect that this is actually used use crate::distributions::utils::Float; #[cfg(feature = "simd_support")] use packed_simd::*; #[cfg(feature = "serde1")] use serde::{Serialize, Deserialize}; /// Sample values uniformly between two bounds. 
///
/// [`Uniform::new`] and [`Uniform::new_inclusive`] construct a uniform
/// distribution sampling from the given range; these functions may do extra
/// work up front to make sampling of multiple values faster. If only one sample
/// from the range is required, [`Rng::gen_range`] can be more efficient.
///
/// When sampling from a constant range, many calculations can happen at
/// compile-time and all methods should be fast; for floating-point ranges and
/// the full range of integer types this should have comparable performance to
/// the `Standard` distribution.
///
/// Steps are taken to avoid bias which might be present in naive
/// implementations; for example `rng.gen::<u8>() % 170` samples from the range
/// `[0, 169]` but is twice as likely to select numbers less than 86 as other
/// values. Further, the implementations here give more weight to the high bits
/// generated by the RNG than the low bits, since with some RNGs the low bits
/// are of lower quality than the high bits.
///
/// Implementations must sample in the `[low, high)` range for
/// `Uniform::new(low, high)`, i.e., excluding `high`. In particular, care must
/// be taken to ensure that rounding never results in values `< low` or `>= high`.
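The modulo bias mentioned above can be made concrete by exhausting the 8-bit case. This is a self-contained illustration (not rand code): since `256 = 170 + 86`, the residues `0..=85` of `v % 170` have two preimages each while `86..=169` have only one.

```rust
// Exhaustively map every `u8` value through `% 170` and count residues.
fn residue_counts() -> [u32; 170] {
    let mut counts = [0u32; 170];
    for v in 0u32..256 {
        counts[(v % 170) as usize] += 1;
    }
    counts
}

fn main() {
    let counts = residue_counts();
    assert!(counts[..86].iter().all(|&c| c == 2)); // low residues: doubled
    assert!(counts[86..].iter().all(|&c| c == 1)); // high residues: single
}
```

`Uniform` avoids this by rejecting part of the RNG's output space so the accepted space is an exact multiple of the range size.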
///
/// # Example
///
/// ```
/// use rand::distributions::{Distribution, Uniform};
///
/// let between = Uniform::from(10..10000);
/// let mut rng = rand::thread_rng();
/// let mut sum = 0;
/// for _ in 0..1000 {
///     sum += between.sample(&mut rng);
/// }
/// println!("{}", sum);
/// ```
///
/// For a single sample, [`Rng::gen_range`] may be preferred:
///
/// ```
/// use rand::Rng;
///
/// let mut rng = rand::thread_rng();
/// println!("{}", rng.gen_range(0..10));
/// ```
///
/// [`new`]: Uniform::new
/// [`new_inclusive`]: Uniform::new_inclusive
/// [`Rng::gen_range`]: Rng::gen_range
#[derive(Clone, Copy, Debug)]
#[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))]
pub struct Uniform<X: SampleUniform>(X::Sampler);

impl<X: SampleUniform> Uniform<X> {
    /// Create a new `Uniform` instance which samples uniformly from the half
    /// open range `[low, high)` (excluding `high`). Panics if `low >= high`.
    pub fn new<B1, B2>(low: B1, high: B2) -> Uniform<X>
    where
        B1: SampleBorrow<X> + Sized,
        B2: SampleBorrow<X> + Sized,
    {
        Uniform(X::Sampler::new(low, high))
    }

    /// Create a new `Uniform` instance which samples uniformly from the closed
    /// range `[low, high]` (inclusive). Panics if `low > high`.
    pub fn new_inclusive<B1, B2>(low: B1, high: B2) -> Uniform<X>
    where
        B1: SampleBorrow<X> + Sized,
        B2: SampleBorrow<X> + Sized,
    {
        Uniform(X::Sampler::new_inclusive(low, high))
    }
}

impl<X: SampleUniform> Distribution<X> for Uniform<X> {
    fn sample<R: Rng + ?Sized>(&self, rng: &mut R) -> X {
        self.0.sample(rng)
    }
}

/// Helper trait for creating objects using the correct implementation of
/// [`UniformSampler`] for the sampling type.
///
/// See the [module documentation] on how to implement [`Uniform`] range
/// sampling for a custom type.
///
/// [module documentation]: crate::distributions::uniform
pub trait SampleUniform: Sized {
    /// The `UniformSampler` implementation supporting type `X`.
    type Sampler: UniformSampler<X = Self>;
}

/// Helper trait handling actual uniform sampling.
///
/// See the [module documentation] on how to implement [`Uniform`] range
/// sampling for a custom type.
/// /// Implementation of [`sample_single`] is optional, and is only useful when /// the implementation can be faster than `Self::new(low, high).sample(rng)`. /// /// [module documentation]: crate::distributions::uniform /// [`sample_single`]: UniformSampler::sample_single pub trait UniformSampler: Sized { /// The type sampled by this implementation. type X; /// Construct self, with inclusive lower bound and exclusive upper bound /// `[low, high)`. /// /// Usually users should not call this directly but instead use /// `Uniform::new`, which asserts that `low < high` before calling this. fn new(low: B1, high: B2) -> Self where B1: SampleBorrow + Sized, B2: SampleBorrow + Sized; /// Construct self, with inclusive bounds `[low, high]`. /// /// Usually users should not call this directly but instead use /// `Uniform::new_inclusive`, which asserts that `low <= high` before /// calling this. fn new_inclusive(low: B1, high: B2) -> Self where B1: SampleBorrow + Sized, B2: SampleBorrow + Sized; /// Sample a value. fn sample(&self, rng: &mut R) -> Self::X; /// Sample a single value uniformly from a range with inclusive lower bound /// and exclusive upper bound `[low, high)`. /// /// By default this is implemented using /// `UniformSampler::new(low, high).sample(rng)`. However, for some types /// more optimal implementations for single usage may be provided via this /// method (which is the case for integers and floats). /// Results may not be identical. 
/// /// Note that to use this method in a generic context, the type needs to be /// retrieved via `SampleUniform::Sampler` as follows: /// ``` /// use rand::{thread_rng, distributions::uniform::{SampleUniform, UniformSampler}}; /// # #[allow(unused)] /// fn sample_from_range(lb: T, ub: T) -> T { /// let mut rng = thread_rng(); /// ::Sampler::sample_single(lb, ub, &mut rng) /// } /// ``` fn sample_single(low: B1, high: B2, rng: &mut R) -> Self::X where B1: SampleBorrow + Sized, B2: SampleBorrow + Sized, { let uniform: Self = UniformSampler::new(low, high); uniform.sample(rng) } /// Sample a single value uniformly from a range with inclusive lower bound /// and inclusive upper bound `[low, high]`. /// /// By default this is implemented using /// `UniformSampler::new_inclusive(low, high).sample(rng)`. However, for /// some types more optimal implementations for single usage may be provided /// via this method. /// Results may not be identical. fn sample_single_inclusive(low: B1, high: B2, rng: &mut R) -> Self::X where B1: SampleBorrow + Sized, B2: SampleBorrow + Sized { let uniform: Self = UniformSampler::new_inclusive(low, high); uniform.sample(rng) } } impl From> for Uniform { fn from(r: ::core::ops::Range) -> Uniform { Uniform::new(r.start, r.end) } } impl From> for Uniform { fn from(r: ::core::ops::RangeInclusive) -> Uniform { Uniform::new_inclusive(r.start(), r.end()) } } /// Helper trait similar to [`Borrow`] but implemented /// only for SampleUniform and references to SampleUniform in /// order to resolve ambiguity issues. /// /// [`Borrow`]: std::borrow::Borrow pub trait SampleBorrow { /// Immutably borrows from an owned value. 
See [`Borrow::borrow`].
    ///
    /// [`Borrow::borrow`]: std::borrow::Borrow::borrow
    fn borrow(&self) -> &Borrowed;
}

impl<Borrowed> SampleBorrow<Borrowed> for Borrowed
where Borrowed: SampleUniform
{
    #[inline(always)]
    fn borrow(&self) -> &Borrowed {
        self
    }
}

impl<'a, Borrowed> SampleBorrow<Borrowed> for &'a Borrowed
where Borrowed: SampleUniform
{
    #[inline(always)]
    fn borrow(&self) -> &Borrowed {
        *self
    }
}

/// Range that supports generating a single sample efficiently.
///
/// Any type implementing this trait can be used to specify the sampled range
/// for `Rng::gen_range`.
pub trait SampleRange<T> {
    /// Generate a sample from the given range.
    fn sample_single<R: RngCore + ?Sized>(self, rng: &mut R) -> T;

    /// Check whether the range is empty.
    fn is_empty(&self) -> bool;
}

impl<T: SampleUniform + PartialOrd> SampleRange<T> for Range<T> {
    #[inline]
    fn sample_single<R: RngCore + ?Sized>(self, rng: &mut R) -> T {
        T::Sampler::sample_single(self.start, self.end, rng)
    }

    #[inline]
    fn is_empty(&self) -> bool {
        !(self.start < self.end)
    }
}

impl<T: SampleUniform + PartialOrd> SampleRange<T> for RangeInclusive<T> {
    #[inline]
    fn sample_single<R: RngCore + ?Sized>(self, rng: &mut R) -> T {
        T::Sampler::sample_single_inclusive(self.start(), self.end(), rng)
    }

    #[inline]
    fn is_empty(&self) -> bool {
        !(self.start() <= self.end())
    }
}

////////////////////////////////////////////////////////////////////////////////

// What follows are all back-ends.

/// The back-end implementing [`UniformSampler`] for integer types.
///
/// Unless you are implementing [`UniformSampler`] for your own type, this type
/// should not be used directly, use [`Uniform`] instead.
///
/// # Implementation notes
///
/// For simplicity, we use the same generic struct `UniformInt<X>` for all
/// integer types `X`. This gives us only one field type, `X`; to store unsigned
/// values of this size, we use the fact that these conversions are no-ops.
///
/// For a closed range, the number of possible numbers we should generate is
/// `range = (high - low + 1)`.
To avoid bias, we must ensure that the size of /// our sample space, `zone`, is a multiple of `range`; other values must be /// rejected (by replacing with a new random sample). /// /// As a special case, we use `range = 0` to represent the full range of the /// result type (i.e. for `new_inclusive($ty::MIN, $ty::MAX)`). /// /// The optimum `zone` is the largest product of `range` which fits in our /// (unsigned) target type. We calculate this by calculating how many numbers we /// must reject: `reject = (MAX + 1) % range = (MAX - range + 1) % range`. Any (large) /// product of `range` will suffice, thus in `sample_single` we multiply by a /// power of 2 via bit-shifting (faster but may cause more rejections). /// /// The smallest integer PRNGs generate is `u32`. For 8- and 16-bit outputs we /// use `u32` for our `zone` and samples (because it's not slower and because /// it reduces the chance of having to reject a sample). In this case we cannot /// store `zone` in the target type since it is too large, however we know /// `ints_to_reject < range <= $unsigned::MAX`. /// /// An alternative to using a modulus is widening multiply: After a widening /// multiply by `range`, the result is in the high word. Then comparing the low /// word against `zone` makes sure our distribution is uniform. #[derive(Clone, Copy, Debug)] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] pub struct UniformInt { low: X, range: X, z: X, // either ints_to_reject or zone depending on implementation } macro_rules! uniform_int_impl { ($ty:ty, $unsigned:ident, $u_large:ident) => { impl SampleUniform for $ty { type Sampler = UniformInt<$ty>; } impl UniformSampler for UniformInt<$ty> { // We play free and fast with unsigned vs signed here // (when $ty is signed), but that's fine, since the // contract of this macro is for $ty and $unsigned to be // "bit-equal", so casting between them is a no-op. 
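The widening-multiply rejection scheme described in the notes above can be demonstrated on a scaled-down 8-bit word so every case is checked exhaustively. This is an illustrative sketch only; the real implementation works on `u32`/`u64` words.

```rust
// Compute the acceptance zone for an 8-bit word:
// reject = (MAX - range + 1) % range; zone = MAX - reject.
fn zone_for(range: u32) -> u32 {
    let ints_to_reject = (u8::MAX as u32 - range + 1) % range;
    u8::MAX as u32 - ints_to_reject
}

// Count how often each bucket in [0, range) is produced when every
// possible 8-bit sample is pushed through the widening multiply.
fn bucket_counts(range: u32) -> Vec<u32> {
    let zone = zone_for(range);
    let mut counts = vec![0u32; range as usize];
    for v in 0u32..256 {
        let wide = v * range; // widening multiply: the high byte selects the bucket
        let (hi, lo) = (wide >> 8, wide & 0xFF);
        if lo <= zone {
            counts[hi as usize] += 1; // accepted sample in [0, range)
        }
    }
    counts
}

fn main() {
    for range in 1..=16 {
        let counts = bucket_counts(range);
        // Every bucket is hit equally often: no bias remains.
        assert!(counts.iter().all(|&c| c == counts[0]));
    }
}
```

Comparing the low word against `zone` is what rejects the surplus values that a bare `wide >> 8` would otherwise distribute unevenly.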
type X = $ty; #[inline] // if the range is constant, this helps LLVM to do the // calculations at compile-time. fn new(low_b: B1, high_b: B2) -> Self where B1: SampleBorrow + Sized, B2: SampleBorrow + Sized, { let low = *low_b.borrow(); let high = *high_b.borrow(); assert!(low < high, "Uniform::new called with `low >= high`"); UniformSampler::new_inclusive(low, high - 1) } #[inline] // if the range is constant, this helps LLVM to do the // calculations at compile-time. fn new_inclusive(low_b: B1, high_b: B2) -> Self where B1: SampleBorrow + Sized, B2: SampleBorrow + Sized, { let low = *low_b.borrow(); let high = *high_b.borrow(); assert!( low <= high, "Uniform::new_inclusive called with `low > high`" ); let unsigned_max = ::core::$u_large::MAX; let range = high.wrapping_sub(low).wrapping_add(1) as $unsigned; let ints_to_reject = if range > 0 { let range = $u_large::from(range); (unsigned_max - range + 1) % range } else { 0 }; UniformInt { low, // These are really $unsigned values, but store as $ty: range: range as $ty, z: ints_to_reject as $unsigned as $ty, } } #[inline] fn sample(&self, rng: &mut R) -> Self::X { let range = self.range as $unsigned as $u_large; if range > 0 { let unsigned_max = ::core::$u_large::MAX; let zone = unsigned_max - (self.z as $unsigned as $u_large); loop { let v: $u_large = rng.gen(); let (hi, lo) = v.wmul(range); if lo <= zone { return self.low.wrapping_add(hi as $ty); } } } else { // Sample from the entire integer range. 
rng.gen() } } #[inline] fn sample_single(low_b: B1, high_b: B2, rng: &mut R) -> Self::X where B1: SampleBorrow + Sized, B2: SampleBorrow + Sized, { let low = *low_b.borrow(); let high = *high_b.borrow(); assert!(low < high, "UniformSampler::sample_single: low >= high"); Self::sample_single_inclusive(low, high - 1, rng) } #[inline] fn sample_single_inclusive(low_b: B1, high_b: B2, rng: &mut R) -> Self::X where B1: SampleBorrow + Sized, B2: SampleBorrow + Sized, { let low = *low_b.borrow(); let high = *high_b.borrow(); assert!(low <= high, "UniformSampler::sample_single_inclusive: low > high"); let range = high.wrapping_sub(low).wrapping_add(1) as $unsigned as $u_large; // If the above resulted in wrap-around to 0, the range is $ty::MIN..=$ty::MAX, // and any integer will do. if range == 0 { return rng.gen(); } let zone = if ::core::$unsigned::MAX <= ::core::u16::MAX as $unsigned { // Using a modulus is faster than the approximation for // i8 and i16. I suppose we trade the cost of one // modulus for near-perfect branch prediction. let unsigned_max: $u_large = ::core::$u_large::MAX; let ints_to_reject = (unsigned_max - range + 1) % range; unsigned_max - ints_to_reject } else { // conservative but fast approximation. `- 1` is necessary to allow the // same comparison without bias. (range << range.leading_zeros()).wrapping_sub(1) }; loop { let v: $u_large = rng.gen(); let (hi, lo) = v.wmul(range); if lo <= zone { return low.wrapping_add(hi as $ty); } } } } }; } uniform_int_impl! { i8, u8, u32 } uniform_int_impl! { i16, u16, u32 } uniform_int_impl! { i32, u32, u32 } uniform_int_impl! { i64, u64, u64 } #[cfg(not(target_os = "emscripten"))] uniform_int_impl! { i128, u128, u128 } uniform_int_impl! { isize, usize, usize } uniform_int_impl! { u8, u8, u32 } uniform_int_impl! { u16, u16, u32 } uniform_int_impl! { u32, u32, u32 } uniform_int_impl! { u64, u64, u64 } uniform_int_impl! { usize, usize, usize } #[cfg(not(target_os = "emscripten"))] uniform_int_impl! 
{ u128, u128, u128 } #[cfg(feature = "simd_support")] macro_rules! uniform_simd_int_impl { ($ty:ident, $unsigned:ident, $u_scalar:ident) => { // The "pick the largest zone that can fit in an `u32`" optimization // is less useful here. Multiple lanes complicate things, we don't // know the PRNG's minimal output size, and casting to a larger vector // is generally a bad idea for SIMD performance. The user can still // implement it manually. // TODO: look into `Uniform::::new(0u32, 100)` functionality // perhaps `impl SampleUniform for $u_scalar`? impl SampleUniform for $ty { type Sampler = UniformInt<$ty>; } impl UniformSampler for UniformInt<$ty> { type X = $ty; #[inline] // if the range is constant, this helps LLVM to do the // calculations at compile-time. fn new(low_b: B1, high_b: B2) -> Self where B1: SampleBorrow + Sized, B2: SampleBorrow + Sized { let low = *low_b.borrow(); let high = *high_b.borrow(); assert!(low.lt(high).all(), "Uniform::new called with `low >= high`"); UniformSampler::new_inclusive(low, high - 1) } #[inline] // if the range is constant, this helps LLVM to do the // calculations at compile-time. fn new_inclusive(low_b: B1, high_b: B2) -> Self where B1: SampleBorrow + Sized, B2: SampleBorrow + Sized { let low = *low_b.borrow(); let high = *high_b.borrow(); assert!(low.le(high).all(), "Uniform::new_inclusive called with `low > high`"); let unsigned_max = ::core::$u_scalar::MAX; // NOTE: these may need to be replaced with explicitly // wrapping operations if `packed_simd` changes let range: $unsigned = ((high - low) + 1).cast(); // `% 0` will panic at runtime. 
let not_full_range = range.gt($unsigned::splat(0)); // replacing 0 with `unsigned_max` allows a faster `select` // with bitwise OR let modulo = not_full_range.select(range, $unsigned::splat(unsigned_max)); // wrapping addition let ints_to_reject = (unsigned_max - range + 1) % modulo; // When `range` is 0, `lo` of `v.wmul(range)` will always be // zero which means only one sample is needed. let zone = unsigned_max - ints_to_reject; UniformInt { low, // These are really $unsigned values, but store as $ty: range: range.cast(), z: zone.cast(), } } fn sample(&self, rng: &mut R) -> Self::X { let range: $unsigned = self.range.cast(); let zone: $unsigned = self.z.cast(); // This might seem very slow, generating a whole new // SIMD vector for every sample rejection. For most uses // though, the chance of rejection is small and provides good // general performance. With multiple lanes, that chance is // multiplied. To mitigate this, we replace only the lanes of // the vector which fail, iteratively reducing the chance of // rejection. The replacement method does however add a little // overhead. Benchmarking or calculating probabilities might // reveal contexts where this replacement method is slower. let mut v: $unsigned = rng.gen(); loop { let (hi, lo) = v.wmul(range); let mask = lo.le(zone); if mask.all() { let hi: $ty = hi.cast(); // wrapping addition let result = self.low + hi; // `select` here compiles to a blend operation // When `range.eq(0).none()` the compare and blend // operations are avoided. let v: $ty = v.cast(); return range.gt($unsigned::splat(0)).select(result, v); } // Replace only the failing lanes v = mask.select(v, rng.gen()); } } } }; // bulk implementation ($(($unsigned:ident, $signed:ident),)+ $u_scalar:ident) => { $( uniform_simd_int_impl!($unsigned, $unsigned, $u_scalar); uniform_simd_int_impl!($signed, $unsigned, $u_scalar); )+ }; } #[cfg(feature = "simd_support")] uniform_simd_int_impl! 
{ (u64x2, i64x2), (u64x4, i64x4), (u64x8, i64x8), u64 } #[cfg(feature = "simd_support")] uniform_simd_int_impl! { (u32x2, i32x2), (u32x4, i32x4), (u32x8, i32x8), (u32x16, i32x16), u32 } #[cfg(feature = "simd_support")] uniform_simd_int_impl! { (u16x2, i16x2), (u16x4, i16x4), (u16x8, i16x8), (u16x16, i16x16), (u16x32, i16x32), u16 } #[cfg(feature = "simd_support")] uniform_simd_int_impl! { (u8x2, i8x2), (u8x4, i8x4), (u8x8, i8x8), (u8x16, i8x16), (u8x32, i8x32), (u8x64, i8x64), u8 } impl SampleUniform for char { type Sampler = UniformChar; } /// The back-end implementing [`UniformSampler`] for `char`. /// /// Unless you are implementing [`UniformSampler`] for your own type, this type /// should not be used directly, use [`Uniform`] instead. /// /// This differs from integer range sampling since the range `0xD800..=0xDFFF` /// are used for surrogate pairs in UCS and UTF-16, and consequently are not /// valid Unicode code points. We must therefore avoid sampling values in this /// range. #[derive(Clone, Copy, Debug)] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] pub struct UniformChar { sampler: UniformInt, } /// UTF-16 surrogate range start const CHAR_SURROGATE_START: u32 = 0xD800; /// UTF-16 surrogate range size const CHAR_SURROGATE_LEN: u32 = 0xE000 - CHAR_SURROGATE_START; /// Convert `char` to compressed `u32` fn char_to_comp_u32(c: char) -> u32 { match c as u32 { c if c >= CHAR_SURROGATE_START => c - CHAR_SURROGATE_LEN, c => c, } } impl UniformSampler for UniformChar { type X = char; #[inline] // if the range is constant, this helps LLVM to do the // calculations at compile-time. fn new(low_b: B1, high_b: B2) -> Self where B1: SampleBorrow + Sized, B2: SampleBorrow + Sized, { let low = char_to_comp_u32(*low_b.borrow()); let high = char_to_comp_u32(*high_b.borrow()); let sampler = UniformInt::::new(low, high); UniformChar { sampler } } #[inline] // if the range is constant, this helps LLVM to do the // calculations at compile-time. 
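The surrogate-gap compression used by `UniformChar` can be sketched on its own: code points at or above the surrogate block are shifted down so the sampled domain is contiguous, and shifted back up afterwards. This is an illustrative stand-alone version (using the checked `char::from_u32` rather than the unsafe conversion in the real code).

```rust
// Surrogate block bounds, as in `CHAR_SURROGATE_START`/`CHAR_SURROGATE_LEN` above.
const SURROGATE_START: u32 = 0xD800;
const SURROGATE_LEN: u32 = 0xE000 - 0xD800;

/// Map a `char` into the compressed, gap-free `u32` domain.
fn compress(c: char) -> u32 {
    let c = c as u32;
    if c >= SURROGATE_START { c - SURROGATE_LEN } else { c }
}

/// Map a compressed `u32` back to the `char` it came from.
fn decompress(x: u32) -> char {
    let x = if x >= SURROGATE_START { x + SURROGATE_LEN } else { x };
    char::from_u32(x).expect("decompressed value never lands in the surrogate gap")
}

fn main() {
    // Round-trips across both sides of the gap, including the boundaries.
    for c in ['A', '\u{D7FF}', '\u{E000}', '😀', char::MAX] {
        assert_eq!(decompress(compress(c)), c);
    }
    // The first char above the gap maps onto the first surrogate slot,
    // so the compressed domain is contiguous.
    assert_eq!(compress('\u{E000}'), 0xD800);
}
```

Sampling a `UniformInt<u32>` over the compressed domain and then decompressing is exactly why the back-end never produces a surrogate value.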
fn new_inclusive<B1, B2>(low_b: B1, high_b: B2) -> Self
    where
        B1: SampleBorrow<Self::X> + Sized,
        B2: SampleBorrow<Self::X> + Sized,
    {
        let low = char_to_comp_u32(*low_b.borrow());
        let high = char_to_comp_u32(*high_b.borrow());
        let sampler = UniformInt::<u32>::new_inclusive(low, high);
        UniformChar { sampler }
    }

    fn sample<R: Rng + ?Sized>(&self, rng: &mut R) -> Self::X {
        let mut x = self.sampler.sample(rng);
        if x >= CHAR_SURROGATE_START {
            x += CHAR_SURROGATE_LEN;
        }
        // SAFETY: x must not be in surrogate range or greater than char::MAX.
        // This relies on range constructors which accept char arguments.
        // Validity of input char values is assumed.
        unsafe { core::char::from_u32_unchecked(x) }
    }
}

/// The back-end implementing [`UniformSampler`] for floating-point types.
///
/// Unless you are implementing [`UniformSampler`] for your own type, this type
/// should not be used directly, use [`Uniform`] instead.
///
/// # Implementation notes
///
/// Instead of generating a float in the `[0, 1)` range using [`Standard`], the
/// `UniformFloat` implementation converts the output of a PRNG itself. This
/// way one or two steps can be optimized out.
///
/// The floats are first converted to a value in the `[1, 2)` interval using a
/// transmute-based method, and then mapped to the expected range with a
/// multiply and addition. Values produced this way have the equivalent of 23
/// bits of random digits for an `f32`, and 52 for an `f64`.
///
/// [`new`]: UniformSampler::new
/// [`new_inclusive`]: UniformSampler::new_inclusive
/// [`Standard`]: crate::distributions::Standard
#[derive(Clone, Copy, Debug)]
#[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))]
pub struct UniformFloat<X> {
    low: X,
    scale: X,
}

macro_rules!
uniform_float_impl {
    ($ty:ty, $uty:ident, $f_scalar:ident, $u_scalar:ident, $bits_to_discard:expr) => {
        impl SampleUniform for $ty {
            type Sampler = UniformFloat<$ty>;
        }

        impl UniformSampler for UniformFloat<$ty> {
            type X = $ty;

            fn new<B1, B2>(low_b: B1, high_b: B2) -> Self
            where
                B1: SampleBorrow<Self::X> + Sized,
                B2: SampleBorrow<Self::X> + Sized,
            {
                let low = *low_b.borrow();
                let high = *high_b.borrow();
                debug_assert!(
                    low.all_finite(),
                    "Uniform::new called with `low` non-finite."
                );
                debug_assert!(
                    high.all_finite(),
                    "Uniform::new called with `high` non-finite."
                );
                assert!(low.all_lt(high), "Uniform::new called with `low >= high`");
                let max_rand = <$ty>::splat(
                    (::core::$u_scalar::MAX >> $bits_to_discard).into_float_with_exponent(0) - 1.0,
                );

                let mut scale = high - low;
                assert!(scale.all_finite(), "Uniform::new: range overflow");

                loop {
                    let mask = (scale * max_rand + low).ge_mask(high);
                    if mask.none() {
                        break;
                    }
                    scale = scale.decrease_masked(mask);
                }

                debug_assert!(<$ty>::splat(0.0).all_le(scale));

                UniformFloat { low, scale }
            }

            fn new_inclusive<B1, B2>(low_b: B1, high_b: B2) -> Self
            where
                B1: SampleBorrow<Self::X> + Sized,
                B2: SampleBorrow<Self::X> + Sized,
            {
                let low = *low_b.borrow();
                let high = *high_b.borrow();
                debug_assert!(
                    low.all_finite(),
                    "Uniform::new_inclusive called with `low` non-finite."
                );
                debug_assert!(
                    high.all_finite(),
                    "Uniform::new_inclusive called with `high` non-finite."
                );
                assert!(
                    low.all_le(high),
                    "Uniform::new_inclusive called with `low > high`"
                );
                let max_rand = <$ty>::splat(
                    (::core::$u_scalar::MAX >> $bits_to_discard).into_float_with_exponent(0) - 1.0,
                );

                let mut scale = (high - low) / max_rand;
                assert!(scale.all_finite(), "Uniform::new_inclusive: range overflow");

                loop {
                    let mask = (scale * max_rand + low).gt_mask(high);
                    if mask.none() {
                        break;
                    }
                    scale = scale.decrease_masked(mask);
                }

                debug_assert!(<$ty>::splat(0.0).all_le(scale));

                UniformFloat { low, scale }
            }

            fn sample<R: Rng + ?Sized>(&self, rng: &mut R) -> Self::X {
                // Generate a value in the range [1, 2)
                let value1_2 = (rng.gen::<$uty>() >> $bits_to_discard).into_float_with_exponent(0);

                // Get a value in the range [0, 1) in order to avoid
                // overflowing into infinity when multiplying with scale
                let value0_1 = value1_2 - 1.0;

                // We don't use `f64::mul_add`, because it is not available with
                // `no_std`. Furthermore, it is slower for some targets (but
                // faster for others). However, the order of multiplication and
                // addition is important, because on some platforms (e.g. ARM)
                // it will be optimized to a single (non-FMA) instruction.
                value0_1 * self.scale + self.low
            }

            #[inline]
            fn sample_single<R: Rng + ?Sized, B1, B2>(low_b: B1, high_b: B2, rng: &mut R) -> Self::X
            where
                B1: SampleBorrow<Self::X> + Sized,
                B2: SampleBorrow<Self::X> + Sized,
            {
                let low = *low_b.borrow();
                let high = *high_b.borrow();
                debug_assert!(
                    low.all_finite(),
                    "UniformSampler::sample_single called with `low` non-finite."
                );
                debug_assert!(
                    high.all_finite(),
                    "UniformSampler::sample_single called with `high` non-finite."
                );
                assert!(
                    low.all_lt(high),
                    "UniformSampler::sample_single: low >= high"
                );
                let mut scale = high - low;
                assert!(scale.all_finite(), "UniformSampler::sample_single: range overflow");

                loop {
                    // Generate a value in the range [1, 2)
                    let value1_2 =
                        (rng.gen::<$uty>() >> $bits_to_discard).into_float_with_exponent(0);

                    // Get a value in the range [0, 1) in order to avoid
                    // overflowing into infinity when multiplying with scale
                    let value0_1 = value1_2 - 1.0;

                    // Doing multiply before addition allows some architectures
                    // to use a single instruction.
                    let res = value0_1 * scale + low;

                    debug_assert!(low.all_le(res) || !scale.all_finite());
                    if res.all_lt(high) {
                        return res;
                    }

                    // This handles a number of edge cases.
                    // * `low` or `high` is NaN. In this case `scale` and
                    //   `res` are going to end up as NaN.
                    // * `low` is negative infinity and `high` is finite.
                    //   `scale` is going to be infinite and `res` will be
                    //   NaN.
                    // * `high` is positive infinity and `low` is finite.
                    //   `scale` is going to be infinite and `res` will
                    //   be infinite or NaN (if value0_1 is 0).
                    // * `low` is negative infinity and `high` is positive
                    //   infinity. `scale` will be infinite and `res` will
                    //   be NaN.
                    // * `low` and `high` are finite, but `high - low`
                    //   overflows to infinite. `scale` will be infinite
                    //   and `res` will be infinite or NaN (if value0_1 is 0).
                    // So if `high` or `low` are non-finite, we are guaranteed
                    // to fail the `res < high` check above and end up here.
                    //
                    // While we technically should check for non-finite `low`
                    // and `high` before entering the loop, by doing the checks
                    // here instead, we allow the common case to avoid these
                    // checks. But we are still guaranteed that if `low` or
                    // `high` are non-finite we'll end up here and can do the
                    // appropriate checks.
                    //
                    // Likewise `high - low` overflowing to infinity is also
                    // rare, so handle it here after the common case.
                    let mask = !scale.finite_mask();
                    if mask.any() {
                        assert!(
                            low.all_finite() && high.all_finite(),
                            "Uniform::sample_single: low and high must be finite"
                        );
                        scale = scale.decrease_masked(mask);
                    }
                }
            }
        }
    };
}

uniform_float_impl! { f32, u32, f32, u32, 32 - 23 }
uniform_float_impl! { f64, u64, f64, u64, 64 - 52 }

#[cfg(feature = "simd_support")]
uniform_float_impl! { f32x2, u32x2, f32, u32, 32 - 23 }
#[cfg(feature = "simd_support")]
uniform_float_impl! { f32x4, u32x4, f32, u32, 32 - 23 }
#[cfg(feature = "simd_support")]
uniform_float_impl! { f32x8, u32x8, f32, u32, 32 - 23 }
#[cfg(feature = "simd_support")]
uniform_float_impl! { f32x16, u32x16, f32, u32, 32 - 23 }
#[cfg(feature = "simd_support")]
uniform_float_impl! { f64x2, u64x2, f64, u64, 64 - 52 }
#[cfg(feature = "simd_support")]
uniform_float_impl! { f64x4, u64x4, f64, u64, 64 - 52 }
#[cfg(feature = "simd_support")]
uniform_float_impl! { f64x8, u64x8, f64, u64, 64 - 52 }

/// The back-end implementing [`UniformSampler`] for `Duration`.
///
/// Unless you are implementing [`UniformSampler`] for your own types, this type
/// should not be used directly, use [`Uniform`] instead.
#[derive(Clone, Copy, Debug)]
#[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))]
pub struct UniformDuration {
    mode: UniformDurationMode,
    offset: u32,
}

#[derive(Debug, Copy, Clone)]
#[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))]
enum UniformDurationMode {
    Small {
        secs: u64,
        nanos: Uniform<u32>,
    },
    Medium {
        nanos: Uniform<u64>,
    },
    Large {
        max_secs: u64,
        max_nanos: u32,
        secs: Uniform<u64>,
    },
}

impl SampleUniform for Duration {
    type Sampler = UniformDuration;
}

impl UniformSampler for UniformDuration {
    type X = Duration;

    #[inline]
    fn new<B1, B2>(low_b: B1, high_b: B2) -> Self
    where
        B1: SampleBorrow<Self::X> + Sized,
        B2: SampleBorrow<Self::X> + Sized,
    {
        let low = *low_b.borrow();
        let high = *high_b.borrow();
        assert!(low < high, "Uniform::new called with `low >= high`");
        UniformDuration::new_inclusive(low, high - Duration::new(0, 1))
    }

    #[inline]
    fn new_inclusive<B1, B2>(low_b: B1, high_b: B2) -> Self
    where
        B1: SampleBorrow<Self::X> + Sized,
        B2: SampleBorrow<Self::X> + Sized,
    {
        let low = *low_b.borrow();
        let high = *high_b.borrow();
        assert!(
            low <= high,
            "Uniform::new_inclusive called with `low > high`"
        );

        let low_s = low.as_secs();
        let low_n = low.subsec_nanos();
        let mut high_s = high.as_secs();
        let mut high_n = high.subsec_nanos();

        if high_n < low_n {
            high_s -= 1;
            high_n += 1_000_000_000;
        }

        let mode = if low_s == high_s {
            UniformDurationMode::Small {
                secs: low_s,
                nanos: Uniform::new_inclusive(low_n, high_n),
            }
        } else {
            let max = high_s
                .checked_mul(1_000_000_000)
                .and_then(|n| n.checked_add(u64::from(high_n)));

            if let Some(higher_bound) = max {
                let lower_bound = low_s * 1_000_000_000 + u64::from(low_n);
                UniformDurationMode::Medium {
                    nanos: Uniform::new_inclusive(lower_bound, higher_bound),
                }
            } else {
                // An offset is applied to simplify generation of nanoseconds
                let max_nanos = high_n - low_n;
                UniformDurationMode::Large {
                    max_secs: high_s,
                    max_nanos,
                    secs: Uniform::new_inclusive(low_s, high_s),
                }
            }
        };

        UniformDuration {
            mode,
            offset: low_n,
        }
    }

    #[inline]
    fn sample<R: Rng + ?Sized>(&self, rng: &mut R) -> Duration {
        match
self.mode { UniformDurationMode::Small { secs, nanos } => { let n = nanos.sample(rng); Duration::new(secs, n) } UniformDurationMode::Medium { nanos } => { let nanos = nanos.sample(rng); Duration::new(nanos / 1_000_000_000, (nanos % 1_000_000_000) as u32) } UniformDurationMode::Large { max_secs, max_nanos, secs, } => { // constant folding means this is at least as fast as `Rng::sample(Range)` let nano_range = Uniform::new(0, 1_000_000_000); loop { let s = secs.sample(rng); let n = nano_range.sample(rng); if !(s == max_secs && n > max_nanos) { let sum = n + self.offset; break Duration::new(s, sum); } } } } } } #[cfg(test)] mod tests { use super::*; use crate::rngs::mock::StepRng; #[test] #[cfg(feature = "serde1")] fn test_serialization_uniform_duration() { let distr = UniformDuration::new(std::time::Duration::from_secs(10), std::time::Duration::from_secs(60)); let de_distr: UniformDuration = bincode::deserialize(&bincode::serialize(&distr).unwrap()).unwrap(); assert_eq!( distr.offset, de_distr.offset ); match (distr.mode, de_distr.mode) { (UniformDurationMode::Small {secs: a_secs, nanos: a_nanos}, UniformDurationMode::Small {secs, nanos}) => { assert_eq!(a_secs, secs); assert_eq!(a_nanos.0.low, nanos.0.low); assert_eq!(a_nanos.0.range, nanos.0.range); assert_eq!(a_nanos.0.z, nanos.0.z); } (UniformDurationMode::Medium {nanos: a_nanos} , UniformDurationMode::Medium {nanos}) => { assert_eq!(a_nanos.0.low, nanos.0.low); assert_eq!(a_nanos.0.range, nanos.0.range); assert_eq!(a_nanos.0.z, nanos.0.z); } (UniformDurationMode::Large {max_secs:a_max_secs, max_nanos:a_max_nanos, secs:a_secs}, UniformDurationMode::Large {max_secs, max_nanos, secs} ) => { assert_eq!(a_max_secs, max_secs); assert_eq!(a_max_nanos, max_nanos); assert_eq!(a_secs.0.low, secs.0.low); assert_eq!(a_secs.0.range, secs.0.range); assert_eq!(a_secs.0.z, secs.0.z); } _ => panic!("`UniformDurationMode` was not serialized/deserialized correctly") } } #[test] #[cfg(feature = "serde1")] fn 
test_uniform_serialization() {
    let unit_box: Uniform<i32> = Uniform::new(-1, 1);
    let de_unit_box: Uniform<i32> =
        bincode::deserialize(&bincode::serialize(&unit_box).unwrap()).unwrap();

    assert_eq!(unit_box.0.low, de_unit_box.0.low);
    assert_eq!(unit_box.0.range, de_unit_box.0.range);
    assert_eq!(unit_box.0.z, de_unit_box.0.z);

    let unit_box: Uniform<f32> = Uniform::new(-1., 1.);
    let de_unit_box: Uniform<f32> =
        bincode::deserialize(&bincode::serialize(&unit_box).unwrap()).unwrap();

    assert_eq!(unit_box.0.low, de_unit_box.0.low);
    assert_eq!(unit_box.0.scale, de_unit_box.0.scale);
}

#[should_panic]
#[test]
fn test_uniform_bad_limits_equal_int() {
    Uniform::new(10, 10);
}

#[test]
fn test_uniform_good_limits_equal_int() {
    let mut rng = crate::test::rng(804);
    let dist = Uniform::new_inclusive(10, 10);
    for _ in 0..20 {
        assert_eq!(rng.sample(dist), 10);
    }
}

#[should_panic]
#[test]
fn test_uniform_bad_limits_flipped_int() {
    Uniform::new(10, 5);
}

#[test]
#[cfg_attr(miri, ignore)] // Miri is too slow
fn test_integers() {
    #[cfg(not(target_os = "emscripten"))] use core::{i128, u128};
    use core::{i16, i32, i64, i8, isize};
    use core::{u16, u32, u64, u8, usize};

    let mut rng = crate::test::rng(251);
    macro_rules!
t { ($ty:ident, $v:expr, $le:expr, $lt:expr) => {{ for &(low, high) in $v.iter() { let my_uniform = Uniform::new(low, high); for _ in 0..1000 { let v: $ty = rng.sample(my_uniform); assert!($le(low, v) && $lt(v, high)); } let my_uniform = Uniform::new_inclusive(low, high); for _ in 0..1000 { let v: $ty = rng.sample(my_uniform); assert!($le(low, v) && $le(v, high)); } let my_uniform = Uniform::new(&low, high); for _ in 0..1000 { let v: $ty = rng.sample(my_uniform); assert!($le(low, v) && $lt(v, high)); } let my_uniform = Uniform::new_inclusive(&low, &high); for _ in 0..1000 { let v: $ty = rng.sample(my_uniform); assert!($le(low, v) && $le(v, high)); } for _ in 0..1000 { let v = <$ty as SampleUniform>::Sampler::sample_single(low, high, &mut rng); assert!($le(low, v) && $lt(v, high)); } for _ in 0..1000 { let v = <$ty as SampleUniform>::Sampler::sample_single_inclusive(low, high, &mut rng); assert!($le(low, v) && $le(v, high)); } } }}; // scalar bulk ($($ty:ident),*) => {{ $(t!( $ty, [(0, 10), (10, 127), ($ty::MIN, $ty::MAX)], |x, y| x <= y, |x, y| x < y );)* }}; // simd bulk ($($ty:ident),* => $scalar:ident) => {{ $(t!( $ty, [ ($ty::splat(0), $ty::splat(10)), ($ty::splat(10), $ty::splat(127)), ($ty::splat($scalar::MIN), $ty::splat($scalar::MAX)), ], |x: $ty, y| x.le(y).all(), |x: $ty, y| x.lt(y).all() );)* }}; } t!(i8, i16, i32, i64, isize, u8, u16, u32, u64, usize); #[cfg(not(target_os = "emscripten"))] t!(i128, u128); #[cfg(feature = "simd_support")] { t!(u8x2, u8x4, u8x8, u8x16, u8x32, u8x64 => u8); t!(i8x2, i8x4, i8x8, i8x16, i8x32, i8x64 => i8); t!(u16x2, u16x4, u16x8, u16x16, u16x32 => u16); t!(i16x2, i16x4, i16x8, i16x16, i16x32 => i16); t!(u32x2, u32x4, u32x8, u32x16 => u32); t!(i32x2, i32x4, i32x8, i32x16 => i32); t!(u64x2, u64x4, u64x8 => u64); t!(i64x2, i64x4, i64x8 => i64); } } #[test] #[cfg_attr(miri, ignore)] // Miri is too slow fn test_char() { let mut rng = crate::test::rng(891); let mut max = core::char::from_u32(0).unwrap(); for _ in 0..100 { let c = 
rng.gen_range('A'..='Z'); assert!(('A'..='Z').contains(&c)); max = max.max(c); } assert_eq!(max, 'Z'); let d = Uniform::new( core::char::from_u32(0xD7F0).unwrap(), core::char::from_u32(0xE010).unwrap(), ); for _ in 0..100 { let c = d.sample(&mut rng); assert!((c as u32) < 0xD800 || (c as u32) > 0xDFFF); } } #[test] #[cfg_attr(miri, ignore)] // Miri is too slow fn test_floats() { let mut rng = crate::test::rng(252); let mut zero_rng = StepRng::new(0, 0); let mut max_rng = StepRng::new(0xffff_ffff_ffff_ffff, 0); macro_rules! t { ($ty:ty, $f_scalar:ident, $bits_shifted:expr) => {{ let v: &[($f_scalar, $f_scalar)] = &[ (0.0, 100.0), (-1e35, -1e25), (1e-35, 1e-25), (-1e35, 1e35), (<$f_scalar>::from_bits(0), <$f_scalar>::from_bits(3)), (-<$f_scalar>::from_bits(10), -<$f_scalar>::from_bits(1)), (-<$f_scalar>::from_bits(5), 0.0), (-<$f_scalar>::from_bits(7), -0.0), (0.1 * ::core::$f_scalar::MAX, ::core::$f_scalar::MAX), (-::core::$f_scalar::MAX * 0.2, ::core::$f_scalar::MAX * 0.7), ]; for &(low_scalar, high_scalar) in v.iter() { for lane in 0..<$ty>::lanes() { let low = <$ty>::splat(0.0 as $f_scalar).replace(lane, low_scalar); let high = <$ty>::splat(1.0 as $f_scalar).replace(lane, high_scalar); let my_uniform = Uniform::new(low, high); let my_incl_uniform = Uniform::new_inclusive(low, high); for _ in 0..100 { let v = rng.sample(my_uniform).extract(lane); assert!(low_scalar <= v && v < high_scalar); let v = rng.sample(my_incl_uniform).extract(lane); assert!(low_scalar <= v && v <= high_scalar); let v = <$ty as SampleUniform>::Sampler ::sample_single(low, high, &mut rng).extract(lane); assert!(low_scalar <= v && v < high_scalar); } assert_eq!( rng.sample(Uniform::new_inclusive(low, low)).extract(lane), low_scalar ); assert_eq!(zero_rng.sample(my_uniform).extract(lane), low_scalar); assert_eq!(zero_rng.sample(my_incl_uniform).extract(lane), low_scalar); assert_eq!(<$ty as SampleUniform>::Sampler ::sample_single(low, high, &mut zero_rng) .extract(lane), low_scalar); 
assert!(max_rng.sample(my_uniform).extract(lane) < high_scalar); assert!(max_rng.sample(my_incl_uniform).extract(lane) <= high_scalar); // Don't run this test for really tiny differences between high and low // since for those rounding might result in selecting high for a very // long time. if (high_scalar - low_scalar) > 0.0001 { let mut lowering_max_rng = StepRng::new( 0xffff_ffff_ffff_ffff, (-1i64 << $bits_shifted) as u64, ); assert!( <$ty as SampleUniform>::Sampler ::sample_single(low, high, &mut lowering_max_rng) .extract(lane) < high_scalar ); } } } assert_eq!( rng.sample(Uniform::new_inclusive( ::core::$f_scalar::MAX, ::core::$f_scalar::MAX )), ::core::$f_scalar::MAX ); assert_eq!( rng.sample(Uniform::new_inclusive( -::core::$f_scalar::MAX, -::core::$f_scalar::MAX )), -::core::$f_scalar::MAX ); }}; } t!(f32, f32, 32 - 23); t!(f64, f64, 64 - 52); #[cfg(feature = "simd_support")] { t!(f32x2, f32, 32 - 23); t!(f32x4, f32, 32 - 23); t!(f32x8, f32, 32 - 23); t!(f32x16, f32, 32 - 23); t!(f64x2, f64, 64 - 52); t!(f64x4, f64, 64 - 52); t!(f64x8, f64, 64 - 52); } } #[test] #[should_panic] fn test_float_overflow() { Uniform::from(::core::f64::MIN..::core::f64::MAX); } #[test] #[should_panic] fn test_float_overflow_single() { let mut rng = crate::test::rng(252); rng.gen_range(::core::f64::MIN..::core::f64::MAX); } #[test] #[cfg(all( feature = "std", not(target_arch = "wasm32"), not(target_arch = "asmjs") ))] fn test_float_assertions() { use super::SampleUniform; use std::panic::catch_unwind; fn range(low: T, high: T) { let mut rng = crate::test::rng(253); T::Sampler::sample_single(low, high, &mut rng); } macro_rules! 
t { ($ty:ident, $f_scalar:ident) => {{ let v: &[($f_scalar, $f_scalar)] = &[ (::std::$f_scalar::NAN, 0.0), (1.0, ::std::$f_scalar::NAN), (::std::$f_scalar::NAN, ::std::$f_scalar::NAN), (1.0, 0.5), (::std::$f_scalar::MAX, -::std::$f_scalar::MAX), (::std::$f_scalar::INFINITY, ::std::$f_scalar::INFINITY), ( ::std::$f_scalar::NEG_INFINITY, ::std::$f_scalar::NEG_INFINITY, ), (::std::$f_scalar::NEG_INFINITY, 5.0), (5.0, ::std::$f_scalar::INFINITY), (::std::$f_scalar::NAN, ::std::$f_scalar::INFINITY), (::std::$f_scalar::NEG_INFINITY, ::std::$f_scalar::NAN), (::std::$f_scalar::NEG_INFINITY, ::std::$f_scalar::INFINITY), ]; for &(low_scalar, high_scalar) in v.iter() { for lane in 0..<$ty>::lanes() { let low = <$ty>::splat(0.0 as $f_scalar).replace(lane, low_scalar); let high = <$ty>::splat(1.0 as $f_scalar).replace(lane, high_scalar); assert!(catch_unwind(|| range(low, high)).is_err()); assert!(catch_unwind(|| Uniform::new(low, high)).is_err()); assert!(catch_unwind(|| Uniform::new_inclusive(low, high)).is_err()); assert!(catch_unwind(|| range(low, low)).is_err()); assert!(catch_unwind(|| Uniform::new(low, low)).is_err()); } } }}; } t!(f32, f32); t!(f64, f64); #[cfg(feature = "simd_support")] { t!(f32x2, f32); t!(f32x4, f32); t!(f32x8, f32); t!(f32x16, f32); t!(f64x2, f64); t!(f64x4, f64); t!(f64x8, f64); } } #[test] #[cfg_attr(miri, ignore)] // Miri is too slow fn test_durations() { #[cfg(not(feature = "std"))] use core::time::Duration; #[cfg(feature = "std")] use std::time::Duration; let mut rng = crate::test::rng(253); let v = &[ (Duration::new(10, 50000), Duration::new(100, 1234)), (Duration::new(0, 100), Duration::new(1, 50)), ( Duration::new(0, 0), Duration::new(u64::max_value(), 999_999_999), ), ]; for &(low, high) in v.iter() { let my_uniform = Uniform::new(low, high); for _ in 0..1000 { let v = rng.sample(my_uniform); assert!(low <= v && v < high); } } } #[test] fn test_custom_uniform() { use crate::distributions::uniform::{ SampleBorrow, SampleUniform, 
UniformFloat, UniformSampler, }; #[derive(Clone, Copy, PartialEq, PartialOrd)] struct MyF32 { x: f32, } #[derive(Clone, Copy, Debug)] struct UniformMyF32(UniformFloat); impl UniformSampler for UniformMyF32 { type X = MyF32; fn new(low: B1, high: B2) -> Self where B1: SampleBorrow + Sized, B2: SampleBorrow + Sized, { UniformMyF32(UniformFloat::::new(low.borrow().x, high.borrow().x)) } fn new_inclusive(low: B1, high: B2) -> Self where B1: SampleBorrow + Sized, B2: SampleBorrow + Sized, { UniformSampler::new(low, high) } fn sample(&self, rng: &mut R) -> Self::X { MyF32 { x: self.0.sample(rng), } } } impl SampleUniform for MyF32 { type Sampler = UniformMyF32; } let (low, high) = (MyF32 { x: 17.0f32 }, MyF32 { x: 22.0f32 }); let uniform = Uniform::new(low, high); let mut rng = crate::test::rng(804); for _ in 0..100 { let x: MyF32 = rng.sample(uniform); assert!(low <= x && x < high); } } #[test] fn test_uniform_from_std_range() { let r = Uniform::from(2u32..7); assert_eq!(r.0.low, 2); assert_eq!(r.0.range, 5); let r = Uniform::from(2.0f64..7.0); assert_eq!(r.0.low, 2.0); assert_eq!(r.0.scale, 5.0); } #[test] fn test_uniform_from_std_range_inclusive() { let r = Uniform::from(2u32..=6); assert_eq!(r.0.low, 2); assert_eq!(r.0.range, 5); let r = Uniform::from(2.0f64..=7.0); assert_eq!(r.0.low, 2.0); assert!(r.0.scale > 5.0); assert!(r.0.scale < 5.0 + 1e-14); } #[test] fn value_stability() { fn test_samples( lb: T, ub: T, expected_single: &[T], expected_multiple: &[T], ) where Uniform: Distribution { let mut rng = crate::test::rng(897); let mut buf = [lb; 3]; for x in &mut buf { *x = T::Sampler::sample_single(lb, ub, &mut rng); } assert_eq!(&buf, expected_single); let distr = Uniform::new(lb, ub); for x in &mut buf { *x = rng.sample(&distr); } assert_eq!(&buf, expected_multiple); } // We test on a sub-set of types; possibly we should do more. 
// TODO: SIMD types test_samples(11u8, 219, &[17, 66, 214], &[181, 93, 165]); test_samples(11u32, 219, &[17, 66, 214], &[181, 93, 165]); test_samples(0f32, 1e-2f32, &[0.0003070104, 0.0026630748, 0.00979833], &[ 0.008194133, 0.00398172, 0.007428536, ]); test_samples( -1e10f64, 1e10f64, &[-4673848682.871551, 6388267422.932352, 4857075081.198343], &[1173375212.1808167, 1917642852.109581, 2365076174.3153973], ); test_samples( Duration::new(2, 0), Duration::new(4, 0), &[ Duration::new(2, 532615131), Duration::new(3, 638826742), Duration::new(3, 485707508), ], &[ Duration::new(3, 117337521), Duration::new(3, 191764285), Duration::new(3, 236507617), ], ); } } rand-0.8.4/src/distributions/utils.rs000064400000000000000000000325670000000000000157520ustar 00000000000000// Copyright 2018 Developers of the Rand project. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. //! Math helper functions #[cfg(feature = "simd_support")] use packed_simd::*; pub(crate) trait WideningMultiply { type Output; fn wmul(self, x: RHS) -> Self::Output; } macro_rules! wmul_impl { ($ty:ty, $wide:ty, $shift:expr) => { impl WideningMultiply for $ty { type Output = ($ty, $ty); #[inline(always)] fn wmul(self, x: $ty) -> Self::Output { let tmp = (self as $wide) * (x as $wide); ((tmp >> $shift) as $ty, tmp as $ty) } } }; // simd bulk implementation ($(($ty:ident, $wide:ident),)+, $shift:expr) => { $( impl WideningMultiply for $ty { type Output = ($ty, $ty); #[inline(always)] fn wmul(self, x: $ty) -> Self::Output { // For supported vectors, this should compile to a couple // supported multiply & swizzle instructions (no actual // casting). // TODO: optimize let y: $wide = self.cast(); let x: $wide = x.cast(); let tmp = y * x; let hi: $ty = (tmp >> $shift).cast(); let lo: $ty = tmp.cast(); (hi, lo) } } )+ }; } wmul_impl! { u8, u16, 8 } wmul_impl! { u16, u32, 16 } wmul_impl! 
{ u32, u64, 32 } #[cfg(not(target_os = "emscripten"))] wmul_impl! { u64, u128, 64 } // This code is a translation of the __mulddi3 function in LLVM's // compiler-rt. It is an optimised variant of the common method // `(a + b) * (c + d) = ac + ad + bc + bd`. // // For some reason LLVM can optimise the C version very well, but // keeps shuffling registers in this Rust translation. macro_rules! wmul_impl_large { ($ty:ty, $half:expr) => { impl WideningMultiply for $ty { type Output = ($ty, $ty); #[inline(always)] fn wmul(self, b: $ty) -> Self::Output { const LOWER_MASK: $ty = !0 >> $half; let mut low = (self & LOWER_MASK).wrapping_mul(b & LOWER_MASK); let mut t = low >> $half; low &= LOWER_MASK; t += (self >> $half).wrapping_mul(b & LOWER_MASK); low += (t & LOWER_MASK) << $half; let mut high = t >> $half; t = low >> $half; low &= LOWER_MASK; t += (b >> $half).wrapping_mul(self & LOWER_MASK); low += (t & LOWER_MASK) << $half; high += t >> $half; high += (self >> $half).wrapping_mul(b >> $half); (high, low) } } }; // simd bulk implementation (($($ty:ty,)+) $scalar:ty, $half:expr) => { $( impl WideningMultiply for $ty { type Output = ($ty, $ty); #[inline(always)] fn wmul(self, b: $ty) -> Self::Output { // needs wrapping multiplication const LOWER_MASK: $scalar = !0 >> $half; let mut low = (self & LOWER_MASK) * (b & LOWER_MASK); let mut t = low >> $half; low &= LOWER_MASK; t += (self >> $half) * (b & LOWER_MASK); low += (t & LOWER_MASK) << $half; let mut high = t >> $half; t = low >> $half; low &= LOWER_MASK; t += (b >> $half) * (self & LOWER_MASK); low += (t & LOWER_MASK) << $half; high += t >> $half; high += (self >> $half) * (b >> $half); (high, low) } } )+ }; } #[cfg(target_os = "emscripten")] wmul_impl_large! { u64, 32 } #[cfg(not(target_os = "emscripten"))] wmul_impl_large! { u128, 64 } macro_rules! 
wmul_impl_usize { ($ty:ty) => { impl WideningMultiply for usize { type Output = (usize, usize); #[inline(always)] fn wmul(self, x: usize) -> Self::Output { let (high, low) = (self as $ty).wmul(x as $ty); (high as usize, low as usize) } } }; } #[cfg(target_pointer_width = "32")] wmul_impl_usize! { u32 } #[cfg(target_pointer_width = "64")] wmul_impl_usize! { u64 } #[cfg(feature = "simd_support")] mod simd_wmul { use super::*; #[cfg(target_arch = "x86")] use core::arch::x86::*; #[cfg(target_arch = "x86_64")] use core::arch::x86_64::*; wmul_impl! { (u8x2, u16x2), (u8x4, u16x4), (u8x8, u16x8), (u8x16, u16x16), (u8x32, u16x32),, 8 } wmul_impl! { (u16x2, u32x2),, 16 } wmul_impl! { (u16x4, u32x4),, 16 } #[cfg(not(target_feature = "sse2"))] wmul_impl! { (u16x8, u32x8),, 16 } #[cfg(not(target_feature = "avx2"))] wmul_impl! { (u16x16, u32x16),, 16 } // 16-bit lane widths allow use of the x86 `mulhi` instructions, which // means `wmul` can be implemented with only two instructions. #[allow(unused_macros)] macro_rules! wmul_impl_16 { ($ty:ident, $intrinsic:ident, $mulhi:ident, $mullo:ident) => { impl WideningMultiply for $ty { type Output = ($ty, $ty); #[inline(always)] fn wmul(self, x: $ty) -> Self::Output { let b = $intrinsic::from_bits(x); let a = $intrinsic::from_bits(self); let hi = $ty::from_bits(unsafe { $mulhi(a, b) }); let lo = $ty::from_bits(unsafe { $mullo(a, b) }); (hi, lo) } } }; } #[cfg(target_feature = "sse2")] wmul_impl_16! { u16x8, __m128i, _mm_mulhi_epu16, _mm_mullo_epi16 } #[cfg(target_feature = "avx2")] wmul_impl_16! { u16x16, __m256i, _mm256_mulhi_epu16, _mm256_mullo_epi16 } // FIXME: there are no `__m512i` types in stdsimd yet, so `wmul::` // cannot use the same implementation. wmul_impl! { (u32x2, u64x2), (u32x4, u64x4), (u32x8, u64x8),, 32 } // TODO: optimize, this seems to seriously slow things down wmul_impl_large! { (u8x64,) u8, 4 } wmul_impl_large! { (u16x32,) u16, 8 } wmul_impl_large! { (u32x16,) u32, 16 } wmul_impl_large! 
{ (u64x2, u64x4, u64x8,) u64, 32 } } /// Helper trait when dealing with scalar and SIMD floating point types. pub(crate) trait FloatSIMDUtils { // `PartialOrd` for vectors compares lexicographically. We want to compare all // the individual SIMD lanes instead, and get the combined result over all // lanes. This is possible using something like `a.lt(b).all()`, but we // implement it as a trait so we can write the same code for `f32` and `f64`. // Only the comparison functions we need are implemented. fn all_lt(self, other: Self) -> bool; fn all_le(self, other: Self) -> bool; fn all_finite(self) -> bool; type Mask; fn finite_mask(self) -> Self::Mask; fn gt_mask(self, other: Self) -> Self::Mask; fn ge_mask(self, other: Self) -> Self::Mask; // Decrease all lanes where the mask is `true` to the next lower value // representable by the floating-point type. At least one of the lanes // must be set. fn decrease_masked(self, mask: Self::Mask) -> Self; // Convert from int value. Conversion is done while retaining the numerical // value, not by retaining the binary representation. type UInt; fn cast_from_int(i: Self::UInt) -> Self; } /// Implement functions available in std builds but missing from core primitives #[cfg(not(std))] // False positive: We are following `std` here. 
#[allow(clippy::wrong_self_convention)] pub(crate) trait Float: Sized { fn is_nan(self) -> bool; fn is_infinite(self) -> bool; fn is_finite(self) -> bool; } /// Implement functions on f32/f64 to give them APIs similar to SIMD types pub(crate) trait FloatAsSIMD: Sized { #[inline(always)] fn lanes() -> usize { 1 } #[inline(always)] fn splat(scalar: Self) -> Self { scalar } #[inline(always)] fn extract(self, index: usize) -> Self { debug_assert_eq!(index, 0); self } #[inline(always)] fn replace(self, index: usize, new_value: Self) -> Self { debug_assert_eq!(index, 0); new_value } } pub(crate) trait BoolAsSIMD: Sized { fn any(self) -> bool; fn all(self) -> bool; fn none(self) -> bool; } impl BoolAsSIMD for bool { #[inline(always)] fn any(self) -> bool { self } #[inline(always)] fn all(self) -> bool { self } #[inline(always)] fn none(self) -> bool { !self } } macro_rules! scalar_float_impl { ($ty:ident, $uty:ident) => { #[cfg(not(std))] impl Float for $ty { #[inline] fn is_nan(self) -> bool { self != self } #[inline] fn is_infinite(self) -> bool { self == ::core::$ty::INFINITY || self == ::core::$ty::NEG_INFINITY } #[inline] fn is_finite(self) -> bool { !(self.is_nan() || self.is_infinite()) } } impl FloatSIMDUtils for $ty { type Mask = bool; type UInt = $uty; #[inline(always)] fn all_lt(self, other: Self) -> bool { self < other } #[inline(always)] fn all_le(self, other: Self) -> bool { self <= other } #[inline(always)] fn all_finite(self) -> bool { self.is_finite() } #[inline(always)] fn finite_mask(self) -> Self::Mask { self.is_finite() } #[inline(always)] fn gt_mask(self, other: Self) -> Self::Mask { self > other } #[inline(always)] fn ge_mask(self, other: Self) -> Self::Mask { self >= other } #[inline(always)] fn decrease_masked(self, mask: Self::Mask) -> Self { debug_assert!(mask, "At least one lane must be set"); <$ty>::from_bits(self.to_bits() - 1) } #[inline] fn cast_from_int(i: Self::UInt) -> Self { i as $ty } } impl FloatAsSIMD for $ty {} }; } 
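The two bit-level tricks that the scalar implementations above rely on — splicing random bits into a float's mantissa to get a value in `[1, 2)` (what `into_float_with_exponent(0)` does elsewhere in this crate), and stepping a positive float down to its predecessor by subtracting 1 from its bit pattern (`decrease_masked`) — can be sketched for `f64` with the standard library only. The helper names here are hypothetical, not part of the crate:

```rust
// Std-only sketch (hypothetical helpers) of the float bit tricks above.

// Splice the top 52 random bits into the mantissa of a float with unbiased
// exponent 0, giving a value in [1, 2); subtracting 1.0 shifts it to [0, 1).
fn u64_to_unit_f64(bits: u64) -> f64 {
    let fraction = bits >> (64 - 52); // keep the top 52 bits for the mantissa
    let exponent_one = 1023u64 << 52; // biased exponent encoding 2^0
    f64::from_bits(exponent_one | fraction) - 1.0
}

// Scalar `decrease_masked`: subtracting 1 from the bit pattern of a positive,
// finite (or infinite) float yields the next lower representable value.
fn next_down(x: f64) -> f64 {
    f64::from_bits(x.to_bits() - 1)
}

fn main() {
    assert_eq!(u64_to_unit_f64(0), 0.0);
    assert_eq!(u64_to_unit_f64(1 << 63), 0.5);
    assert!(u64_to_unit_f64(u64::MAX) < 1.0); // largest output is 1 - 2^-52

    assert_eq!(next_down(1.0), 1.0 - f64::EPSILON / 2.0); // predecessor of 1.0
    assert_eq!(next_down(f64::INFINITY), f64::MAX); // works even from infinity
}
```

This is why `decrease_masked` needs no special case for infinity: the bit pattern of `+inf` is exactly one above `f64::MAX`.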
scalar_float_impl!(f32, u32); scalar_float_impl!(f64, u64); #[cfg(feature = "simd_support")] macro_rules! simd_impl { ($ty:ident, $f_scalar:ident, $mty:ident, $uty:ident) => { impl FloatSIMDUtils for $ty { type Mask = $mty; type UInt = $uty; #[inline(always)] fn all_lt(self, other: Self) -> bool { self.lt(other).all() } #[inline(always)] fn all_le(self, other: Self) -> bool { self.le(other).all() } #[inline(always)] fn all_finite(self) -> bool { self.finite_mask().all() } #[inline(always)] fn finite_mask(self) -> Self::Mask { // This can possibly be done faster by checking bit patterns let neg_inf = $ty::splat(::core::$f_scalar::NEG_INFINITY); let pos_inf = $ty::splat(::core::$f_scalar::INFINITY); self.gt(neg_inf) & self.lt(pos_inf) } #[inline(always)] fn gt_mask(self, other: Self) -> Self::Mask { self.gt(other) } #[inline(always)] fn ge_mask(self, other: Self) -> Self::Mask { self.ge(other) } #[inline(always)] fn decrease_masked(self, mask: Self::Mask) -> Self { // Casting a mask into ints will produce all bits set for // true, and 0 for false. Adding that to the binary // representation of a float means subtracting one from // the binary representation, resulting in the next lower // value representable by $ty. This works even when the // current value is infinity. debug_assert!(mask.any(), "At least one lane must be set"); <$ty>::from_bits(<$uty>::from_bits(self) + <$uty>::from_bits(mask)) } #[inline] fn cast_from_int(i: Self::UInt) -> Self { i.cast() } } }; } #[cfg(feature="simd_support")] simd_impl! { f32x2, f32, m32x2, u32x2 } #[cfg(feature="simd_support")] simd_impl! { f32x4, f32, m32x4, u32x4 } #[cfg(feature="simd_support")] simd_impl! { f32x8, f32, m32x8, u32x8 } #[cfg(feature="simd_support")] simd_impl! { f32x16, f32, m32x16, u32x16 } #[cfg(feature="simd_support")] simd_impl! { f64x2, f64, m64x2, u64x2 } #[cfg(feature="simd_support")] simd_impl! { f64x4, f64, m64x4, u64x4 } #[cfg(feature="simd_support")] simd_impl! 
{ f64x8, f64, m64x8, u64x8 } rand-0.8.4/src/distributions/weighted.rs000064400000000000000000000030520000000000000163750ustar 00000000000000// Copyright 2018 Developers of the Rand project. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. //! Weighted index sampling //! //! This module is deprecated. Use [`crate::distributions::WeightedIndex`] and //! [`crate::distributions::WeightedError`] instead. pub use super::{WeightedIndex, WeightedError}; #[allow(missing_docs)] #[deprecated(since = "0.8.0", note = "moved to rand_distr crate")] pub mod alias_method { // This module exists to provide a deprecation warning which minimises // compile errors, but still fails to compile if ever used. use core::marker::PhantomData; use alloc::vec::Vec; use super::WeightedError; #[derive(Debug)] pub struct WeightedIndex { _phantom: PhantomData, } impl WeightedIndex { pub fn new(_weights: Vec) -> Result { Err(WeightedError::NoItem) } } pub trait Weight {} macro_rules! impl_weight { () => {}; ($T:ident, $($more:ident,)*) => { impl Weight for $T {} impl_weight!($($more,)*); }; } impl_weight!(f64, f32,); impl_weight!(u8, u16, u32, u64, usize,); impl_weight!(i8, i16, i32, i64, isize,); #[cfg(not(target_os = "emscripten"))] impl_weight!(u128, i128,); } rand-0.8.4/src/distributions/weighted_index.rs000064400000000000000000000373500000000000000175740ustar 00000000000000// Copyright 2018 Developers of the Rand project. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. //! 
Weighted index sampling use crate::distributions::uniform::{SampleBorrow, SampleUniform, UniformSampler}; use crate::distributions::Distribution; use crate::Rng; use core::cmp::PartialOrd; use core::fmt; // Note that this whole module is only imported if feature="alloc" is enabled. use alloc::vec::Vec; #[cfg(feature = "serde1")] use serde::{Serialize, Deserialize}; /// A distribution using weighted sampling of discrete items /// /// Sampling a `WeightedIndex` distribution returns the index of a randomly /// selected element from the iterator used when the `WeightedIndex` was /// created. The chance of a given element being picked is proportional to the /// value of the element. The weights can use any type `X` for which an /// implementation of [`Uniform`] exists. /// /// # Performance /// /// Time complexity of sampling from `WeightedIndex` is `O(log N)` where /// `N` is the number of weights. As an alternative, /// [`rand_distr::weighted_alias`](https://docs.rs/rand_distr/*/rand_distr/weighted_alias/index.html) /// supports `O(1)` sampling, but with much higher initialisation cost. /// /// A `WeightedIndex` contains a `Vec` and a [`Uniform`] and so its /// size is the sum of the size of those objects, possibly plus some alignment. /// /// Creating a `WeightedIndex` will allocate enough space to hold `N - 1` /// weights of type `X`, where `N` is the number of weights. However, since /// `Vec` doesn't guarantee a particular growth strategy, additional memory /// might be allocated but not used. Since the `WeightedIndex` object also /// contains, this might cause additional allocations, though for primitive /// types, [`Uniform`] doesn't allocate any memory. /// /// Sampling from `WeightedIndex` will result in a single call to /// `Uniform::sample` (method of the [`Distribution`] trait), which typically /// will request a single value from the underlying [`RngCore`], though the /// exact number depends on the implementation of `Uniform::sample`. 
/// /// # Example /// /// ``` /// use rand::prelude::*; /// use rand::distributions::WeightedIndex; /// /// let choices = ['a', 'b', 'c']; /// let weights = [2, 1, 1]; /// let dist = WeightedIndex::new(&weights).unwrap(); /// let mut rng = thread_rng(); /// for _ in 0..100 { /// // 50% chance to print 'a', 25% chance to print 'b', 25% chance to print 'c' /// println!("{}", choices[dist.sample(&mut rng)]); /// } /// /// let items = [('a', 0), ('b', 3), ('c', 7)]; /// let dist2 = WeightedIndex::new(items.iter().map(|item| item.1)).unwrap(); /// for _ in 0..100 { /// // 0% chance to print 'a', 30% chance to print 'b', 70% chance to print 'c' /// println!("{}", items[dist2.sample(&mut rng)].0); /// } /// ``` /// /// [`Uniform`]: crate::distributions::Uniform /// [`RngCore`]: crate::RngCore #[derive(Debug, Clone)] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] #[cfg_attr(doc_cfg, doc(cfg(feature = "alloc")))] pub struct WeightedIndex { cumulative_weights: Vec, total_weight: X, weight_distribution: X::Sampler, } impl WeightedIndex { /// Creates a new a `WeightedIndex` [`Distribution`] using the values /// in `weights`. The weights can use any type `X` for which an /// implementation of [`Uniform`] exists. /// /// Returns an error if the iterator is empty, if any weight is `< 0`, or /// if its total value is 0. /// /// [`Uniform`]: crate::distributions::uniform::Uniform pub fn new(weights: I) -> Result, WeightedError> where I: IntoIterator, I::Item: SampleBorrow, X: for<'a> ::core::ops::AddAssign<&'a X> + Clone + Default, { let mut iter = weights.into_iter(); let mut total_weight: X = iter.next().ok_or(WeightedError::NoItem)?.borrow().clone(); let zero = ::default(); if !(total_weight >= zero) { return Err(WeightedError::InvalidWeight); } let mut weights = Vec::::with_capacity(iter.size_hint().0); for w in iter { // Note that `!(w >= x)` is not equivalent to `w < x` for partially // ordered types due to NaNs which are equal to nothing. 
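The comment above about partially ordered types can be verified directly: NaN compares false against everything, so `!(w >= zero)` and `w < zero` give different answers for it.

```rust
// NaN makes `>=` and `<` disagree under negation: NaN compares false with
// everything, so only the negated form `!(w >= zero)` rejects it.
fn main() {
    let zero = 0.0f32;
    assert!(!(f32::NAN >= zero)); // the negated check rejects NaN...
    assert!(!(f32::NAN < zero)); // ...while a plain `< zero` test would accept it
    assert!(!(-1.0f32 >= zero)); // ordinary negative weights fail either way
    assert!(-1.0f32 < zero);
    println!("ok");
}
```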
if !(w.borrow() >= &zero) { return Err(WeightedError::InvalidWeight); } weights.push(total_weight.clone()); total_weight += w.borrow(); } if total_weight == zero { return Err(WeightedError::AllWeightsZero); } let distr = X::Sampler::new(zero, total_weight.clone()); Ok(WeightedIndex { cumulative_weights: weights, total_weight, weight_distribution: distr, }) } /// Update a subset of weights, without changing the number of weights. /// /// `new_weights` must be sorted by the index. /// /// Using this method instead of `new` might be more efficient if only a small number of /// weights is modified. No allocations are performed, unless the weight type `X` uses /// allocation internally. /// /// In case of error, `self` is not modified. pub fn update_weights(&mut self, new_weights: &[(usize, &X)]) -> Result<(), WeightedError> where X: for<'a> ::core::ops::AddAssign<&'a X> + for<'a> ::core::ops::SubAssign<&'a X> + Clone + Default { if new_weights.is_empty() { return Ok(()); } let zero = ::default(); let mut total_weight = self.total_weight.clone(); // Check for errors first, so we don't modify `self` in case something // goes wrong. let mut prev_i = None; for &(i, w) in new_weights { if let Some(old_i) = prev_i { if old_i >= i { return Err(WeightedError::InvalidWeight); } } if !(*w >= zero) { return Err(WeightedError::InvalidWeight); } if i > self.cumulative_weights.len() { return Err(WeightedError::TooMany); } let mut old_w = if i < self.cumulative_weights.len() { self.cumulative_weights[i].clone() } else { self.total_weight.clone() }; if i > 0 { old_w -= &self.cumulative_weights[i - 1]; } total_weight -= &old_w; total_weight += w; prev_i = Some(i); } if total_weight <= zero { return Err(WeightedError::AllWeightsZero); } // Update the weights. Because we checked all the preconditions in the // previous loop, this should never panic. 
let mut iter = new_weights.iter(); let mut prev_weight = zero.clone(); let mut next_new_weight = iter.next(); let &(first_new_index, _) = next_new_weight.unwrap(); let mut cumulative_weight = if first_new_index > 0 { self.cumulative_weights[first_new_index - 1].clone() } else { zero.clone() }; for i in first_new_index..self.cumulative_weights.len() { match next_new_weight { Some(&(j, w)) if i == j => { cumulative_weight += w; next_new_weight = iter.next(); } _ => { let mut tmp = self.cumulative_weights[i].clone(); tmp -= &prev_weight; // We know this is positive. cumulative_weight += &tmp; } } prev_weight = cumulative_weight.clone(); core::mem::swap(&mut prev_weight, &mut self.cumulative_weights[i]); } self.total_weight = total_weight; self.weight_distribution = X::Sampler::new(zero, self.total_weight.clone()); Ok(()) } } impl Distribution for WeightedIndex where X: SampleUniform + PartialOrd { fn sample(&self, rng: &mut R) -> usize { use ::core::cmp::Ordering; let chosen_weight = self.weight_distribution.sample(rng); // Find the first item which has a weight *higher* than the chosen weight. 
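The comment above describes finding the first cumulative weight strictly above the chosen value. With `slice::binary_search_by`, a comparator that never returns `Ordering::Equal` forces the search to fail, and the `Err` payload is precisely that insertion index; this standalone sketch shows the idiom.

```rust
// The "always Err" binary-search idiom: by never returning Ordering::Equal,
// binary_search_by always fails, and unwrap_err() yields the insertion
// point, i.e. the first index whose element is strictly greater than target.
use std::cmp::Ordering;

fn first_greater(cumulative: &[u32], target: u32) -> usize {
    cumulative
        .binary_search_by(|w| {
            if *w <= target {
                Ordering::Less
            } else {
                Ordering::Greater
            }
        })
        .unwrap_err()
}

fn main() {
    let cumulative = [2u32, 3, 6]; // weights 2, 1, 3
    assert_eq!(first_greater(&cumulative, 0), 0); // 0 falls in the first bucket
    assert_eq!(first_greater(&cumulative, 1), 0);
    assert_eq!(first_greater(&cumulative, 2), 1); // 2 is not > 2, so move on
    assert_eq!(first_greater(&cumulative, 5), 2);
    println!("ok");
}
```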
self.cumulative_weights .binary_search_by(|w| { if *w <= chosen_weight { Ordering::Less } else { Ordering::Greater } }) .unwrap_err() } } #[cfg(test)] mod test { use super::*; #[cfg(feature = "serde1")] #[test] fn test_weightedindex_serde1() { let weighted_index = WeightedIndex::new(&[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]).unwrap(); let ser_weighted_index = bincode::serialize(&weighted_index).unwrap(); let de_weighted_index: WeightedIndex = bincode::deserialize(&ser_weighted_index).unwrap(); assert_eq!( de_weighted_index.cumulative_weights, weighted_index.cumulative_weights ); assert_eq!(de_weighted_index.total_weight, weighted_index.total_weight); } #[test] fn test_accepting_nan(){ assert_eq!( WeightedIndex::new(&[core::f32::NAN, 0.5]).unwrap_err(), WeightedError::InvalidWeight, ); assert_eq!( WeightedIndex::new(&[core::f32::NAN]).unwrap_err(), WeightedError::InvalidWeight, ); assert_eq!( WeightedIndex::new(&[0.5, core::f32::NAN]).unwrap_err(), WeightedError::InvalidWeight, ); assert_eq!( WeightedIndex::new(&[0.5, 7.0]) .unwrap() .update_weights(&[(0, &core::f32::NAN)]) .unwrap_err(), WeightedError::InvalidWeight, ) } #[test] #[cfg_attr(miri, ignore)] // Miri is too slow fn test_weightedindex() { let mut r = crate::test::rng(700); const N_REPS: u32 = 5000; let weights = [1u32, 2, 3, 0, 5, 6, 7, 1, 2, 3, 4, 5, 6, 7]; let total_weight = weights.iter().sum::() as f32; let verify = |result: [i32; 14]| { for (i, count) in result.iter().enumerate() { let exp = (weights[i] * N_REPS) as f32 / total_weight; let mut err = (*count as f32 - exp).abs(); if err != 0.0 { err /= exp; } assert!(err <= 0.25); } }; // WeightedIndex from vec let mut chosen = [0i32; 14]; let distr = WeightedIndex::new(weights.to_vec()).unwrap(); for _ in 0..N_REPS { chosen[distr.sample(&mut r)] += 1; } verify(chosen); // WeightedIndex from slice chosen = [0i32; 14]; let distr = WeightedIndex::new(&weights[..]).unwrap(); for _ in 0..N_REPS { chosen[distr.sample(&mut r)] += 1; } verify(chosen); // 
WeightedIndex from iterator chosen = [0i32; 14]; let distr = WeightedIndex::new(weights.iter()).unwrap(); for _ in 0..N_REPS { chosen[distr.sample(&mut r)] += 1; } verify(chosen); for _ in 0..5 { assert_eq!(WeightedIndex::new(&[0, 1]).unwrap().sample(&mut r), 1); assert_eq!(WeightedIndex::new(&[1, 0]).unwrap().sample(&mut r), 0); assert_eq!( WeightedIndex::new(&[0, 0, 0, 0, 10, 0]) .unwrap() .sample(&mut r), 4 ); } assert_eq!( WeightedIndex::new(&[10][0..0]).unwrap_err(), WeightedError::NoItem ); assert_eq!( WeightedIndex::new(&[0]).unwrap_err(), WeightedError::AllWeightsZero ); assert_eq!( WeightedIndex::new(&[10, 20, -1, 30]).unwrap_err(), WeightedError::InvalidWeight ); assert_eq!( WeightedIndex::new(&[-10, 20, 1, 30]).unwrap_err(), WeightedError::InvalidWeight ); assert_eq!( WeightedIndex::new(&[-10]).unwrap_err(), WeightedError::InvalidWeight ); } #[test] fn test_update_weights() { let data = [ ( &[10u32, 2, 3, 4][..], &[(1, &100), (2, &4)][..], // positive change &[10, 100, 4, 4][..], ), ( &[1u32, 2, 3, 0, 5, 6, 7, 1, 2, 3, 4, 5, 6, 7][..], &[(2, &1), (5, &1), (13, &100)][..], // negative change and last element &[1u32, 2, 1, 0, 5, 1, 7, 1, 2, 3, 4, 5, 6, 100][..], ), ]; for (weights, update, expected_weights) in data.iter() { let total_weight = weights.iter().sum::(); let mut distr = WeightedIndex::new(weights.to_vec()).unwrap(); assert_eq!(distr.total_weight, total_weight); distr.update_weights(update).unwrap(); let expected_total_weight = expected_weights.iter().sum::(); let expected_distr = WeightedIndex::new(expected_weights.to_vec()).unwrap(); assert_eq!(distr.total_weight, expected_total_weight); assert_eq!(distr.total_weight, expected_distr.total_weight); assert_eq!(distr.cumulative_weights, expected_distr.cumulative_weights); } } #[test] fn value_stability() { fn test_samples( weights: I, buf: &mut [usize], expected: &[usize], ) where I: IntoIterator, I::Item: SampleBorrow, X: for<'a> ::core::ops::AddAssign<&'a X> + Clone + Default, { 
assert_eq!(buf.len(), expected.len()); let distr = WeightedIndex::new(weights).unwrap(); let mut rng = crate::test::rng(701); for r in buf.iter_mut() { *r = rng.sample(&distr); } assert_eq!(buf, expected); } let mut buf = [0; 10]; test_samples(&[1i32, 1, 1, 1, 1, 1, 1, 1, 1], &mut buf, &[ 0, 6, 2, 6, 3, 4, 7, 8, 2, 5, ]); test_samples(&[0.7f32, 0.1, 0.1, 0.1], &mut buf, &[ 0, 0, 0, 1, 0, 0, 2, 3, 0, 0, ]); test_samples(&[1.0f64, 0.999, 0.998, 0.997], &mut buf, &[ 2, 2, 1, 3, 2, 1, 3, 3, 2, 1, ]); } } /// Error type returned from `WeightedIndex::new`. #[cfg_attr(doc_cfg, doc(cfg(feature = "alloc")))] #[derive(Debug, Clone, Copy, PartialEq, Eq)] pub enum WeightedError { /// The provided weight collection contains no items. NoItem, /// A weight is either less than zero, greater than the supported maximum, /// NaN, or otherwise invalid. InvalidWeight, /// All items in the provided weight collection are zero. AllWeightsZero, /// Too many weights are provided (length greater than `u32::MAX`) TooMany, } #[cfg(feature = "std")] impl std::error::Error for WeightedError {} impl fmt::Display for WeightedError { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { f.write_str(match *self { WeightedError::NoItem => "No weights provided in distribution", WeightedError::InvalidWeight => "A weight is invalid in distribution", WeightedError::AllWeightsZero => "All weights are zero in distribution", WeightedError::TooMany => "Too many weights (hit u32::MAX) in distribution", }) } } rand-0.8.4/src/lib.rs000064400000000000000000000150710000000000000124450ustar 00000000000000// Copyright 2018 Developers of the Rand project. // Copyright 2013-2017 The Rust Project Developers. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. //! Utilities for random number generation //! //! Rand provides utilities to generate random numbers, to convert them to //! 
useful types and distributions, and some randomness-related algorithms. //! //! # Quick Start //! //! To get you started quickly, the easiest and highest-level way to get //! a random value is to use [`random()`]; alternatively you can use //! [`thread_rng()`]. The [`Rng`] trait provides a useful API on all RNGs, while //! the [`distributions`] and [`seq`] modules provide further //! functionality on top of RNGs. //! //! ``` //! use rand::prelude::*; //! //! if rand::random() { // generates a boolean //! // Try printing a random unicode code point (probably a bad idea)! //! println!("char: {}", rand::random::()); //! } //! //! let mut rng = rand::thread_rng(); //! let y: f64 = rng.gen(); // generates a float between 0 and 1 //! //! let mut nums: Vec = (1..100).collect(); //! nums.shuffle(&mut rng); //! ``` //! //! # The Book //! //! For the user guide and further documentation, please read //! [The Rust Rand Book](https://rust-random.github.io/book). #![doc( html_logo_url = "https://www.rust-lang.org/logos/rust-logo-128x128-blk.png", html_favicon_url = "https://www.rust-lang.org/favicon.ico", html_root_url = "https://rust-random.github.io/rand/" )] #![deny(missing_docs)] #![deny(missing_debug_implementations)] #![doc(test(attr(allow(unused_variables), deny(warnings))))] #![no_std] #![cfg_attr(feature = "simd_support", feature(stdsimd))] #![cfg_attr(feature = "nightly", feature(slice_partition_at_index))] #![cfg_attr(doc_cfg, feature(doc_cfg))] #![allow( clippy::float_cmp, clippy::neg_cmp_op_on_partial_ord, )] #[cfg(feature = "std")] extern crate std; #[cfg(feature = "alloc")] extern crate alloc; #[allow(unused)] macro_rules! trace { ($($x:tt)*) => ( #[cfg(feature = "log")] { log::trace!($($x)*) } ) } #[allow(unused)] macro_rules! debug { ($($x:tt)*) => ( #[cfg(feature = "log")] { log::debug!($($x)*) } ) } #[allow(unused)] macro_rules! info { ($($x:tt)*) => ( #[cfg(feature = "log")] { log::info!($($x)*) } ) } #[allow(unused)] macro_rules! 
warn { ($($x:tt)*) => ( #[cfg(feature = "log")] { log::warn!($($x)*) } ) } #[allow(unused)] macro_rules! error { ($($x:tt)*) => ( #[cfg(feature = "log")] { log::error!($($x)*) } ) } // Re-exports from rand_core pub use rand_core::{CryptoRng, Error, RngCore, SeedableRng}; // Public modules pub mod distributions; pub mod prelude; mod rng; pub mod rngs; pub mod seq; // Public exports #[cfg(all(feature = "std", feature = "std_rng"))] pub use crate::rngs::thread::thread_rng; pub use rng::{Fill, Rng}; #[cfg(all(feature = "std", feature = "std_rng"))] use crate::distributions::{Distribution, Standard}; /// Generates a random value using the thread-local random number generator. /// /// This is simply a shortcut for `thread_rng().gen()`. See [`thread_rng`] for /// documentation of the entropy source and [`Standard`] for documentation of /// distributions and type-specific generation. /// /// # Provided implementations /// /// The following types have provided implementations that /// generate values with the following ranges and distributions: /// /// * Integers (`i32`, `u32`, `isize`, `usize`, etc.): Uniformly distributed /// over all values of the type. /// * `char`: Uniformly distributed over all Unicode scalar values, i.e. all /// code points in the range `0...0x10_FFFF`, except for the range /// `0xD800...0xDFFF` (the surrogate code points). This includes /// unassigned/reserved code points. /// * `bool`: Generates `false` or `true`, each with probability 0.5. /// * Floating point types (`f32` and `f64`): Uniformly distributed in the /// half-open range `[0, 1)`. See notes below. /// * Wrapping integers (`Wrapping`), besides the type identical to their /// normal integer variants. /// /// Also supported is the generation of the following /// compound types where all component types are supported: /// /// * Tuples (up to 12 elements): each element is generated sequentially. 
/// * Arrays (up to 32 elements): each element is generated sequentially; /// see also [`Rng::fill`] which supports arbitrary array length for integer /// types and tends to be faster for `u32` and smaller types. /// * `Option` first generates a `bool`, and if true generates and returns /// `Some(value)` where `value: T`, otherwise returning `None`. /// /// # Examples /// /// ``` /// let x = rand::random::(); /// println!("{}", x); /// /// let y = rand::random::(); /// println!("{}", y); /// /// if rand::random() { // generates a boolean /// println!("Better lucky than good!"); /// } /// ``` /// /// If you're calling `random()` in a loop, caching the generator as in the /// following example can increase performance. /// /// ``` /// use rand::Rng; /// /// let mut v = vec![1, 2, 3]; /// /// for x in v.iter_mut() { /// *x = rand::random() /// } /// /// // can be made faster by caching thread_rng /// /// let mut rng = rand::thread_rng(); /// /// for x in v.iter_mut() { /// *x = rng.gen(); /// } /// ``` /// /// [`Standard`]: distributions::Standard #[cfg(all(feature = "std", feature = "std_rng"))] #[cfg_attr(doc_cfg, doc(cfg(all(feature = "std", feature = "std_rng"))))] #[inline] pub fn random() -> T where Standard: Distribution { thread_rng().gen() } #[cfg(test)] mod test { use super::*; /// Construct a deterministic RNG with the given seed pub fn rng(seed: u64) -> impl RngCore { // For tests, we want a statistically good, fast, reproducible RNG. // PCG32 will do fine, and will be easy to embed if we ever need to. 
const INC: u64 = 11634580027462260723; rand_pcg::Pcg32::new(seed, INC) } #[test] #[cfg(all(feature = "std", feature = "std_rng"))] fn test_random() { let _n: usize = random(); let _f: f32 = random(); let _o: Option> = random(); #[allow(clippy::type_complexity)] let _many: ( (), (usize, isize, Option<(u32, (bool,))>), (u8, i8, u16, i16, u32, i32, u64, i64), (f32, (f64, (f64,))), ) = random(); } } rand-0.8.4/src/prelude.rs000064400000000000000000000024040000000000000133330ustar 00000000000000// Copyright 2018 Developers of the Rand project. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. //! Convenience re-export of common members //! //! Like the standard library's prelude, this module simplifies importing of //! common items. Unlike the standard prelude, the contents of this module must //! be imported manually: //! //! ``` //! use rand::prelude::*; //! # let mut r = StdRng::from_rng(thread_rng()).unwrap(); //! # let _: f32 = r.gen(); //! ``` #[doc(no_inline)] pub use crate::distributions::Distribution; #[cfg(feature = "small_rng")] #[doc(no_inline)] pub use crate::rngs::SmallRng; #[cfg(feature = "std_rng")] #[doc(no_inline)] pub use crate::rngs::StdRng; #[doc(no_inline)] #[cfg(all(feature = "std", feature = "std_rng"))] pub use crate::rngs::ThreadRng; #[doc(no_inline)] pub use crate::seq::{IteratorRandom, SliceRandom}; #[doc(no_inline)] #[cfg(all(feature = "std", feature = "std_rng"))] pub use crate::{random, thread_rng}; #[doc(no_inline)] pub use crate::{CryptoRng, Rng, RngCore, SeedableRng}; rand-0.8.4/src/rng.rs000064400000000000000000000450640000000000000124720ustar 00000000000000// Copyright 2018 Developers of the Rand project. // Copyright 2013-2017 The Rust Project Developers. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. 
This file may not be copied, modified, or distributed // except according to those terms. //! [`Rng`] trait use rand_core::{Error, RngCore}; use crate::distributions::uniform::{SampleRange, SampleUniform}; use crate::distributions::{self, Distribution, Standard}; use core::num::Wrapping; use core::{mem, slice}; /// An automatically-implemented extension trait on [`RngCore`] providing high-level /// generic methods for sampling values and other convenience methods. /// /// This is the primary trait to use when generating random values. /// /// # Generic usage /// /// The basic pattern is `fn foo(rng: &mut R)`. Some /// things are worth noting here: /// /// - Since `Rng: RngCore` and every `RngCore` implements `Rng`, it makes no /// difference whether we use `R: Rng` or `R: RngCore`. /// - The `+ ?Sized` un-bounding allows functions to be called directly on /// type-erased references; i.e. `foo(r)` where `r: &mut dyn RngCore`. Without /// this it would be necessary to write `foo(&mut r)`. /// /// An alternative pattern is possible: `fn foo(rng: R)`. This has some /// trade-offs. It allows the argument to be consumed directly without a `&mut` /// (which is how `from_rng(thread_rng())` works); also it still works directly /// on references (including type-erased references). Unfortunately within the /// function `foo` it is not known whether `rng` is a reference type or not, /// hence many uses of `rng` require an extra reference, either explicitly /// (`distr.sample(&mut rng)`) or implicitly (`rng.gen()`); one may hope the /// optimiser can remove redundant references later. /// /// Example: /// /// ``` /// # use rand::thread_rng; /// use rand::Rng; /// /// fn foo(rng: &mut R) -> f32 { /// rng.gen() /// } /// /// # let v = foo(&mut thread_rng()); /// ``` pub trait Rng: RngCore { /// Return a random value supporting the [`Standard`] distribution. 
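The `fn foo<R: Rng + ?Sized>(rng: &mut R)` pattern discussed above can be mimicked with a minimal stand-in trait; `MiniRng` and `Counter` are invented for this example and are not rand types.

```rust
// Illustrating the `&mut R` with `?Sized` pattern: the same generic function
// accepts both a concrete RNG and a type-erased `&mut dyn` reference.
trait MiniRng {
    fn next_u32(&mut self) -> u32;
}

struct Counter(u32);
impl MiniRng for Counter {
    fn next_u32(&mut self) -> u32 {
        self.0 = self.0.wrapping_add(1);
        self.0
    }
}

// Mirrors rand's blanket `impl<R: RngCore + ?Sized> Rng for R {}` spirit:
// a mutable reference to any MiniRng is itself usable as a MiniRng.
impl<R: MiniRng + ?Sized> MiniRng for &mut R {
    fn next_u32(&mut self) -> u32 {
        (**self).next_u32()
    }
}

fn roll<R: MiniRng + ?Sized>(rng: &mut R) -> u32 {
    rng.next_u32() % 6 + 1
}

fn main() {
    let mut rng = Counter(0);
    assert_eq!(roll(&mut rng), 2); // next_u32 -> 1; 1 % 6 + 1 = 2
    let erased: &mut dyn MiniRng = &mut rng;
    let v = roll(erased); // works directly, no extra `&mut`, thanks to `?Sized`
    assert!((1..=6).contains(&v));
    println!("ok");
}
```

Without `?Sized`, the `roll(erased)` call would require wrapping the trait object in another reference, which is exactly the ergonomic cost the bound avoids.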
/// /// # Example /// /// ``` /// use rand::{thread_rng, Rng}; /// /// let mut rng = thread_rng(); /// let x: u32 = rng.gen(); /// println!("{}", x); /// println!("{:?}", rng.gen::<(f64, bool)>()); /// ``` /// /// # Arrays and tuples /// /// The `rng.gen()` method is able to generate arrays (up to 32 elements) /// and tuples (up to 12 elements), so long as all element types can be /// generated. /// When using `rustc` β‰₯ 1.51, enable the `min_const_gen` feature to support /// arrays larger than 32 elements. /// /// For arrays of integers, especially for those with small element types /// (< 64 bit), it will likely be faster to instead use [`Rng::fill`]. /// /// ``` /// use rand::{thread_rng, Rng}; /// /// let mut rng = thread_rng(); /// let tuple: (u8, i32, char) = rng.gen(); // arbitrary tuple support /// /// let arr1: [f32; 32] = rng.gen(); // array construction /// let mut arr2 = [0u8; 128]; /// rng.fill(&mut arr2); // array fill /// ``` /// /// [`Standard`]: distributions::Standard #[inline] fn gen(&mut self) -> T where Standard: Distribution { Standard.sample(self) } /// Generate a random value in the given range. /// /// This function is optimised for the case that only a single sample is /// made from the given range. See also the [`Uniform`] distribution /// type which may be faster if sampling from the same range repeatedly. /// /// Only `gen_range(low..high)` and `gen_range(low..=high)` are supported. /// /// # Panics /// /// Panics if the range is empty. 
/// /// # Example /// /// ``` /// use rand::{thread_rng, Rng}; /// /// let mut rng = thread_rng(); /// /// // Exclusive range /// let n: u32 = rng.gen_range(0..10); /// println!("{}", n); /// let m: f64 = rng.gen_range(-40.0..1.3e5); /// println!("{}", m); /// /// // Inclusive range /// let n: u32 = rng.gen_range(0..=10); /// println!("{}", n); /// ``` /// /// [`Uniform`]: distributions::uniform::Uniform fn gen_range(&mut self, range: R) -> T where T: SampleUniform, R: SampleRange { assert!(!range.is_empty(), "cannot sample empty range"); range.sample_single(self) } /// Sample a new value, using the given distribution. /// /// ### Example /// /// ``` /// use rand::{thread_rng, Rng}; /// use rand::distributions::Uniform; /// /// let mut rng = thread_rng(); /// let x = rng.sample(Uniform::new(10u32, 15)); /// // Type annotation requires two types, the type and distribution; the /// // distribution can be inferred. /// let y = rng.sample::(Uniform::new(10, 15)); /// ``` fn sample>(&mut self, distr: D) -> T { distr.sample(self) } /// Create an iterator that generates values using the given distribution. /// /// Note that this function takes its arguments by value. This works since /// `(&mut R): Rng where R: Rng` and /// `(&D): Distribution where D: Distribution`, /// however borrowing is not automatic hence `rng.sample_iter(...)` may /// need to be replaced with `(&mut rng).sample_iter(...)`. 
/// /// # Example /// /// ``` /// use rand::{thread_rng, Rng}; /// use rand::distributions::{Alphanumeric, Uniform, Standard}; /// /// let mut rng = thread_rng(); /// /// // Vec of 16 x f32: /// let v: Vec = (&mut rng).sample_iter(Standard).take(16).collect(); /// /// // String: /// let s: String = (&mut rng).sample_iter(Alphanumeric) /// .take(7) /// .map(char::from) /// .collect(); /// /// // Combined values /// println!("{:?}", (&mut rng).sample_iter(Standard).take(5) /// .collect::>()); /// /// // Dice-rolling: /// let die_range = Uniform::new_inclusive(1, 6); /// let mut roll_die = (&mut rng).sample_iter(die_range); /// while roll_die.next().unwrap() != 6 { /// println!("Not a 6; rolling again!"); /// } /// ``` fn sample_iter(self, distr: D) -> distributions::DistIter where D: Distribution, Self: Sized, { distr.sample_iter(self) } /// Fill any type implementing [`Fill`] with random data /// /// The distribution is expected to be uniform with portable results, but /// this cannot be guaranteed for third-party implementations. /// /// This is identical to [`try_fill`] except that it panics on error. /// /// # Example /// /// ``` /// use rand::{thread_rng, Rng}; /// /// let mut arr = [0i8; 20]; /// thread_rng().fill(&mut arr[..]); /// ``` /// /// [`fill_bytes`]: RngCore::fill_bytes /// [`try_fill`]: Rng::try_fill fn fill(&mut self, dest: &mut T) { dest.try_fill(self).unwrap_or_else(|_| panic!("Rng::fill failed")) } /// Fill any type implementing [`Fill`] with random data /// /// The distribution is expected to be uniform with portable results, but /// this cannot be guaranteed for third-party implementations. /// /// This is identical to [`fill`] except that it forwards errors. 
/// /// # Example /// /// ``` /// # use rand::Error; /// use rand::{thread_rng, Rng}; /// /// # fn try_inner() -> Result<(), Error> { /// let mut arr = [0u64; 4]; /// thread_rng().try_fill(&mut arr[..])?; /// # Ok(()) /// # } /// /// # try_inner().unwrap() /// ``` /// /// [`try_fill_bytes`]: RngCore::try_fill_bytes /// [`fill`]: Rng::fill fn try_fill(&mut self, dest: &mut T) -> Result<(), Error> { dest.try_fill(self) } /// Return a bool with a probability `p` of being true. /// /// See also the [`Bernoulli`] distribution, which may be faster if /// sampling from the same probability repeatedly. /// /// # Example /// /// ``` /// use rand::{thread_rng, Rng}; /// /// let mut rng = thread_rng(); /// println!("{}", rng.gen_bool(1.0 / 3.0)); /// ``` /// /// # Panics /// /// If `p < 0` or `p > 1`. /// /// [`Bernoulli`]: distributions::Bernoulli #[inline] fn gen_bool(&mut self, p: f64) -> bool { let d = distributions::Bernoulli::new(p).unwrap(); self.sample(d) } /// Return a bool with a probability of `numerator/denominator` of being /// true. I.e. `gen_ratio(2, 3)` has chance of 2 in 3, or about 67%, of /// returning true. If `numerator == denominator`, then the returned value /// is guaranteed to be `true`. If `numerator == 0`, then the returned /// value is guaranteed to be `false`. /// /// See also the [`Bernoulli`] distribution, which may be faster if /// sampling from the same `numerator` and `denominator` repeatedly. /// /// # Panics /// /// If `denominator == 0` or `numerator > denominator`. 
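The `gen_ratio` contract described above reduces to "a uniform draw from `0..denominator` falls below `numerator`". The sketch below checks that contract; rand's `Bernoulli` instead precomputes a 64-bit acceptance threshold, and `draw % denominator` here is just a deterministic stand-in for a uniform draw.

```rust
// A naive equivalent of the gen_ratio contract: hit iff a uniform draw
// from 0..denominator is below numerator.
fn ratio_hit(draw: u32, numerator: u32, denominator: u32) -> bool {
    assert!(denominator != 0 && numerator <= denominator);
    (draw % denominator) < numerator
}

fn main() {
    // numerator == denominator is always true; numerator == 0 never is,
    // matching the guarantees documented above.
    for draw in 0..100u32 {
        assert!(ratio_hit(draw, 3, 3));
        assert!(!ratio_hit(draw, 0, 3));
    }
    // Over a whole cycle of draws, exactly 2 in 3 hit.
    let hits = (0..300u32).filter(|&d| ratio_hit(d, 2, 3)).count();
    assert_eq!(hits, 200);
    println!("{}", hits);
}
```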
/// /// # Example /// /// ``` /// use rand::{thread_rng, Rng}; /// /// let mut rng = thread_rng(); /// println!("{}", rng.gen_ratio(2, 3)); /// ``` /// /// [`Bernoulli`]: distributions::Bernoulli #[inline] fn gen_ratio(&mut self, numerator: u32, denominator: u32) -> bool { let d = distributions::Bernoulli::from_ratio(numerator, denominator).unwrap(); self.sample(d) } } impl Rng for R {} /// Types which may be filled with random data /// /// This trait allows arrays to be efficiently filled with random data. /// /// Implementations are expected to be portable across machines unless /// clearly documented otherwise (see the /// [Chapter on Portability](https://rust-random.github.io/book/portability.html)). pub trait Fill { /// Fill self with random data fn try_fill(&mut self, rng: &mut R) -> Result<(), Error>; } macro_rules! impl_fill_each { () => {}; ($t:ty) => { impl Fill for [$t] { fn try_fill(&mut self, rng: &mut R) -> Result<(), Error> { for elt in self.iter_mut() { *elt = rng.gen(); } Ok(()) } } }; ($t:ty, $($tt:ty,)*) => { impl_fill_each!($t); impl_fill_each!($($tt,)*); }; } impl_fill_each!(bool, char, f32, f64,); impl Fill for [u8] { fn try_fill(&mut self, rng: &mut R) -> Result<(), Error> { rng.try_fill_bytes(self) } } macro_rules! 
impl_fill { () => {}; ($t:ty) => { impl Fill for [$t] { #[inline(never)] // in micro benchmarks, this improves performance fn try_fill(&mut self, rng: &mut R) -> Result<(), Error> { if self.len() > 0 { rng.try_fill_bytes(unsafe { slice::from_raw_parts_mut(self.as_mut_ptr() as *mut u8, self.len() * mem::size_of::<$t>() ) })?; for x in self { *x = x.to_le(); } } Ok(()) } } impl Fill for [Wrapping<$t>] { #[inline(never)] fn try_fill(&mut self, rng: &mut R) -> Result<(), Error> { if self.len() > 0 { rng.try_fill_bytes(unsafe { slice::from_raw_parts_mut(self.as_mut_ptr() as *mut u8, self.len() * mem::size_of::<$t>() ) })?; for x in self { *x = Wrapping(x.0.to_le()); } } Ok(()) } } }; ($t:ty, $($tt:ty,)*) => { impl_fill!($t); // TODO: this could replace above impl once Rust #32463 is fixed // impl_fill!(Wrapping<$t>); impl_fill!($($tt,)*); } } impl_fill!(u16, u32, u64, usize,); #[cfg(not(target_os = "emscripten"))] impl_fill!(u128); impl_fill!(i8, i16, i32, i64, isize,); #[cfg(not(target_os = "emscripten"))] impl_fill!(i128); #[cfg(feature = "min_const_gen")] impl Fill for [T; N] where [T]: Fill { fn try_fill(&mut self, rng: &mut R) -> Result<(), Error> { self[..].try_fill(rng) } } #[cfg(not(feature = "min_const_gen"))] macro_rules! 
impl_fill_arrays { ($n:expr,) => {}; ($n:expr, $N:ident) => { impl Fill for [T; $n] where [T]: Fill { fn try_fill(&mut self, rng: &mut R) -> Result<(), Error> { self[..].try_fill(rng) } } }; ($n:expr, $N:ident, $($NN:ident,)*) => { impl_fill_arrays!($n, $N); impl_fill_arrays!($n - 1, $($NN,)*); }; (!div $n:expr,) => {}; (!div $n:expr, $N:ident, $($NN:ident,)*) => { impl_fill_arrays!($n, $N); impl_fill_arrays!(!div $n / 2, $($NN,)*); }; } #[cfg(not(feature = "min_const_gen"))] #[rustfmt::skip] impl_fill_arrays!(32, N,N,N,N,N,N,N,N,N,N,N,N,N,N,N,N,N,N,N,N,N,N,N,N,N,N,N,N,N,N,N,N,N,); #[cfg(not(feature = "min_const_gen"))] impl_fill_arrays!(!div 4096, N,N,N,N,N,N,N,); #[cfg(test)] mod test { use super::*; use crate::test::rng; use crate::rngs::mock::StepRng; #[cfg(feature = "alloc")] use alloc::boxed::Box; #[test] fn test_fill_bytes_default() { let mut r = StepRng::new(0x11_22_33_44_55_66_77_88, 0); // check every remainder mod 8, both in small and big vectors. let lengths = [0, 1, 2, 3, 4, 5, 6, 7, 80, 81, 82, 83, 84, 85, 86, 87]; for &n in lengths.iter() { let mut buffer = [0u8; 87]; let v = &mut buffer[0..n]; r.fill_bytes(v); // use this to get nicer error messages. for (i, &byte) in v.iter().enumerate() { if byte == 0 { panic!("byte {} of {} is zero", i, n) } } } } #[test] fn test_fill() { let x = 9041086907909331047; // a random u64 let mut rng = StepRng::new(x, 0); // Convert to byte sequence and back to u64; byte-swap twice if BE. 
let mut array = [0u64; 2]; rng.fill(&mut array[..]); assert_eq!(array, [x, x]); assert_eq!(rng.next_u64(), x); // Convert to bytes then u32 in LE order let mut array = [0u32; 2]; rng.fill(&mut array[..]); assert_eq!(array, [x as u32, (x >> 32) as u32]); assert_eq!(rng.next_u32(), x as u32); // Check equivalence using wrapped arrays let mut warray = [Wrapping(0u32); 2]; rng.fill(&mut warray[..]); assert_eq!(array[0], warray[0].0); assert_eq!(array[1], warray[1].0); // Check equivalence for generated floats let mut array = [0f32; 2]; rng.fill(&mut array); let gen: [f32; 2] = rng.gen(); assert_eq!(array, gen); } #[test] fn test_fill_empty() { let mut array = [0u32; 0]; let mut rng = StepRng::new(0, 1); rng.fill(&mut array); rng.fill(&mut array[..]); } #[test] fn test_gen_range_int() { let mut r = rng(101); for _ in 0..1000 { let a = r.gen_range(-4711..17); assert!((-4711..17).contains(&a)); let a: i8 = r.gen_range(-3..42); assert!((-3..42).contains(&a)); let a: u16 = r.gen_range(10..99); assert!((10..99).contains(&a)); let a: i32 = r.gen_range(-100..2000); assert!((-100..2000).contains(&a)); let a: u32 = r.gen_range(12..=24); assert!((12..=24).contains(&a)); assert_eq!(r.gen_range(0u32..1), 0u32); assert_eq!(r.gen_range(-12i64..-11), -12i64); assert_eq!(r.gen_range(3_000_000..3_000_001), 3_000_000); } } #[test] fn test_gen_range_float() { let mut r = rng(101); for _ in 0..1000 { let a = r.gen_range(-4.5..1.7); assert!((-4.5..1.7).contains(&a)); let a = r.gen_range(-1.1..=-0.3); assert!((-1.1..=-0.3).contains(&a)); assert_eq!(r.gen_range(0.0f32..=0.0), 0.); assert_eq!(r.gen_range(-11.0..=-11.0), -11.); assert_eq!(r.gen_range(3_000_000.0..=3_000_000.0), 3_000_000.); } } #[test] #[should_panic] fn test_gen_range_panic_int() { #![allow(clippy::reversed_empty_ranges)] let mut r = rng(102); r.gen_range(5..-2); } #[test] #[should_panic] fn test_gen_range_panic_usize() { #![allow(clippy::reversed_empty_ranges)] let mut r = rng(103); r.gen_range(5..2); } #[test] fn 
test_gen_bool() {
        #![allow(clippy::bool_assert_comparison)]
        let mut r = rng(105);
        for _ in 0..5 {
            assert_eq!(r.gen_bool(0.0), false);
            assert_eq!(r.gen_bool(1.0), true);
        }
    }

    #[test]
    fn test_rng_trait_object() {
        use crate::distributions::{Distribution, Standard};
        let mut rng = rng(109);
        let mut r = &mut rng as &mut dyn RngCore;
        r.next_u32();
        r.gen::<i32>();
        assert_eq!(r.gen_range(0..1), 0);
        let _c: u8 = Standard.sample(&mut r);
    }

    #[test]
    #[cfg(feature = "alloc")]
    fn test_rng_boxed_trait() {
        use crate::distributions::{Distribution, Standard};
        let rng = rng(110);
        let mut r = Box::new(rng) as Box<dyn RngCore>;
        r.next_u32();
        r.gen::<i32>();
        assert_eq!(r.gen_range(0..1), 0);
        let _c: u8 = Standard.sample(&mut r);
    }

    #[test]
    #[cfg_attr(miri, ignore)] // Miri is too slow
    fn test_gen_ratio_average() {
        const NUM: u32 = 3;
        const DENOM: u32 = 10;
        const N: u32 = 100_000;
        let mut sum: u32 = 0;
        let mut rng = rng(111);
        for _ in 0..N {
            if rng.gen_ratio(NUM, DENOM) {
                sum += 1;
            }
        }
        // Have Binomial(N, NUM/DENOM) distribution
        let expected = (NUM * N) / DENOM; // exact integer
        assert!(((sum - expected) as i32).abs() < 500);
    }
}

rand-0.8.4/src/rngs/adapter/mod.rs

// Copyright 2018 Developers of the Rand project.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// https://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or https://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.

//! Wrappers / adapters forming RNGs

mod read;
mod reseeding;

#[allow(deprecated)]
pub use self::read::{ReadError, ReadRng};
pub use self::reseeding::ReseedingRng;

rand-0.8.4/src/rngs/adapter/read.rs

// Copyright 2018 Developers of the Rand project.
// Copyright 2013 The Rust Project Developers.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// https://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or https://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.

//!
A wrapper around any Read to treat it as an RNG.

#![allow(deprecated)]

use std::fmt;
use std::io::Read;

use rand_core::{impls, Error, RngCore};

/// An RNG that reads random bytes straight from any type supporting
/// [`std::io::Read`], for example files.
///
/// This will work best with an infinite reader, but that is not required.
///
/// This can be used with `/dev/urandom` on Unix, but it is recommended to use
/// [`OsRng`] instead.
///
/// # Panics
///
/// `ReadRng` uses [`std::io::Read::read_exact`], which retries on interrupts.
/// All other errors from the underlying reader, including when it does not
/// have enough data, will only be reported through [`try_fill_bytes`].
/// The other [`RngCore`] methods will panic in case of an error.
///
/// [`OsRng`]: crate::rngs::OsRng
/// [`try_fill_bytes`]: RngCore::try_fill_bytes
#[derive(Debug)]
#[deprecated(since = "0.8.4", note = "removal due to lack of usage")]
pub struct ReadRng<R> {
    reader: R,
}

impl<R: Read> ReadRng<R> {
    /// Create a new `ReadRng` from a `Read`.
    pub fn new(r: R) -> ReadRng<R> {
        ReadRng { reader: r }
    }
}

impl<R: Read> RngCore for ReadRng<R> {
    fn next_u32(&mut self) -> u32 {
        impls::next_u32_via_fill(self)
    }

    fn next_u64(&mut self) -> u64 {
        impls::next_u64_via_fill(self)
    }

    fn fill_bytes(&mut self, dest: &mut [u8]) {
        self.try_fill_bytes(dest).unwrap_or_else(|err| {
            panic!(
                "reading random bytes from Read implementation failed; error: {}",
                err
            )
        });
    }

    fn try_fill_bytes(&mut self, dest: &mut [u8]) -> Result<(), Error> {
        if dest.is_empty() {
            return Ok(());
        }
        // Use `std::io::read_exact`, which retries on `ErrorKind::Interrupted`.
self.reader .read_exact(dest) .map_err(|e| Error::new(ReadError(e))) } } /// `ReadRng` error type #[derive(Debug)] #[deprecated(since="0.8.4")] pub struct ReadError(std::io::Error); impl fmt::Display for ReadError { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { write!(f, "ReadError: {}", self.0) } } impl std::error::Error for ReadError { fn source(&self) -> Option<&(dyn std::error::Error + 'static)> { Some(&self.0) } } #[cfg(test)] mod test { use std::println; use super::ReadRng; use crate::RngCore; #[test] fn test_reader_rng_u64() { // transmute from the target to avoid endianness concerns. #[rustfmt::skip] let v = [0u8, 0, 0, 0, 0, 0, 0, 1, 0, 4, 0, 0, 3, 0, 0, 2, 5, 0, 0, 0, 0, 0, 0, 0]; let mut rng = ReadRng::new(&v[..]); assert_eq!(rng.next_u64(), 1 << 56); assert_eq!(rng.next_u64(), (2 << 56) + (3 << 32) + (4 << 8)); assert_eq!(rng.next_u64(), 5); } #[test] fn test_reader_rng_u32() { let v = [0u8, 0, 0, 1, 0, 0, 2, 0, 3, 0, 0, 0]; let mut rng = ReadRng::new(&v[..]); assert_eq!(rng.next_u32(), 1 << 24); assert_eq!(rng.next_u32(), 2 << 16); assert_eq!(rng.next_u32(), 3); } #[test] fn test_reader_rng_fill_bytes() { let v = [1u8, 2, 3, 4, 5, 6, 7, 8]; let mut w = [0u8; 8]; let mut rng = ReadRng::new(&v[..]); rng.fill_bytes(&mut w); assert!(v == w); } #[test] fn test_reader_rng_insufficient_bytes() { let v = [1u8, 2, 3, 4, 5, 6, 7, 8]; let mut w = [0u8; 9]; let mut rng = ReadRng::new(&v[..]); let result = rng.try_fill_bytes(&mut w); assert!(result.is_err()); println!("Error: {}", result.unwrap_err()); } } rand-0.8.4/src/rngs/adapter/reseeding.rs000064400000000000000000000276330000000000000162440ustar 00000000000000// Copyright 2018 Developers of the Rand project. // Copyright 2013 The Rust Project Developers. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. //! A wrapper around another PRNG that reseeds it after it //! 
generates a certain number of random bytes. use core::mem::size_of; use rand_core::block::{BlockRng, BlockRngCore}; use rand_core::{CryptoRng, Error, RngCore, SeedableRng}; /// A wrapper around any PRNG that implements [`BlockRngCore`], that adds the /// ability to reseed it. /// /// `ReseedingRng` reseeds the underlying PRNG in the following cases: /// /// - On a manual call to [`reseed()`]. /// - After `clone()`, the clone will be reseeded on first use. /// - After a process is forked, the RNG in the child process is reseeded within /// the next few generated values, depending on the block size of the /// underlying PRNG. For ChaCha and Hc128 this is a maximum of /// 15 `u32` values before reseeding. /// - After the PRNG has generated a configurable number of random bytes. /// /// # When should reseeding after a fixed number of generated bytes be used? /// /// Reseeding after a fixed number of generated bytes is never strictly /// *necessary*. Cryptographic PRNGs don't have a limited number of bytes they /// can output, or at least not a limit reachable in any practical way. There is /// no such thing as 'running out of entropy'. /// /// Occasionally reseeding can be seen as some form of 'security in depth'. Even /// if in the future a cryptographic weakness is found in the CSPRNG being used, /// or a flaw in the implementation, occasionally reseeding should make /// exploiting it much more difficult or even impossible. /// /// Use [`ReseedingRng::new`] with a `threshold` of `0` to disable reseeding /// after a fixed number of generated bytes. /// /// # Error handling /// /// Although unlikely, reseeding the wrapped PRNG can fail. `ReseedingRng` will /// never panic but try to handle the error intelligently through some /// combination of retrying and delaying reseeding until later. /// If handling the source error fails `ReseedingRng` will continue generating /// data from the wrapped PRNG without reseeding. 
///
/// Manually calling [`reseed()`] will not have this retry or delay logic, but
/// reports the error.
///
/// # Example
///
/// ```
/// use rand::prelude::*;
/// use rand_chacha::ChaCha20Core; // Internal part of ChaChaRng that
///                                // implements BlockRngCore
/// use rand::rngs::OsRng;
/// use rand::rngs::adapter::ReseedingRng;
///
/// let prng = ChaCha20Core::from_entropy();
/// let mut reseeding_rng = ReseedingRng::new(prng, 0, OsRng);
///
/// println!("{}", reseeding_rng.gen::<u64>());
///
/// let mut cloned_rng = reseeding_rng.clone();
/// assert!(reseeding_rng.gen::<u64>() != cloned_rng.gen::<u64>());
/// ```
///
/// [`BlockRngCore`]: rand_core::block::BlockRngCore
/// [`ReseedingRng::new`]: ReseedingRng::new
/// [`reseed()`]: ReseedingRng::reseed
#[derive(Debug)]
pub struct ReseedingRng<R, Rsdr>(BlockRng<ReseedingCore<R, Rsdr>>)
where
    R: BlockRngCore + SeedableRng,
    Rsdr: RngCore;

impl<R, Rsdr> ReseedingRng<R, Rsdr>
where
    R: BlockRngCore + SeedableRng,
    Rsdr: RngCore,
{
    /// Create a new `ReseedingRng` from an existing PRNG, combined with a RNG
    /// to use as reseeder.
    ///
    /// `threshold` sets the number of generated bytes after which to reseed the
    /// PRNG. Set it to zero to never reseed based on the number of generated
    /// values.
    pub fn new(rng: R, threshold: u64, reseeder: Rsdr) -> Self {
        ReseedingRng(BlockRng::new(ReseedingCore::new(rng, threshold, reseeder)))
    }

    /// Reseed the internal PRNG.
pub fn reseed(&mut self) -> Result<(), Error> {
        self.0.core.reseed()
    }
}

// TODO: this should be implemented for any type where the inner type
// implements RngCore, but we can't specify that because ReseedingCore is private
impl<R, Rsdr: RngCore> RngCore for ReseedingRng<R, Rsdr>
where
    R: BlockRngCore<Item = u32> + SeedableRng,
    <R as BlockRngCore>::Results: AsRef<[u32]> + AsMut<[u32]>,
{
    #[inline(always)]
    fn next_u32(&mut self) -> u32 {
        self.0.next_u32()
    }

    #[inline(always)]
    fn next_u64(&mut self) -> u64 {
        self.0.next_u64()
    }

    fn fill_bytes(&mut self, dest: &mut [u8]) {
        self.0.fill_bytes(dest)
    }

    fn try_fill_bytes(&mut self, dest: &mut [u8]) -> Result<(), Error> {
        self.0.try_fill_bytes(dest)
    }
}

impl<R, Rsdr> Clone for ReseedingRng<R, Rsdr>
where
    R: BlockRngCore + SeedableRng + Clone,
    Rsdr: RngCore + Clone,
{
    fn clone(&self) -> ReseedingRng<R, Rsdr> {
        // Recreating `BlockRng` seems easier than cloning it and resetting
        // the index.
        ReseedingRng(BlockRng::new(self.0.core.clone()))
    }
}

impl<R, Rsdr> CryptoRng for ReseedingRng<R, Rsdr>
where
    R: BlockRngCore + SeedableRng + CryptoRng,
    Rsdr: RngCore + CryptoRng,
{
}

#[derive(Debug)]
struct ReseedingCore<R, Rsdr> {
    inner: R,
    reseeder: Rsdr,
    threshold: i64,
    bytes_until_reseed: i64,
    fork_counter: usize,
}

impl<R, Rsdr> BlockRngCore for ReseedingCore<R, Rsdr>
where
    R: BlockRngCore + SeedableRng,
    Rsdr: RngCore,
{
    type Item = <R as BlockRngCore>::Item;
    type Results = <R as BlockRngCore>::Results;

    fn generate(&mut self, results: &mut Self::Results) {
        let global_fork_counter = fork::get_fork_counter();
        if self.bytes_until_reseed <= 0 || self.is_forked(global_fork_counter) {
            // We get better performance by not calling only `reseed` here
            // and continuing with the rest of the function, but by directly
            // returning from a non-inlined function.
            return self.reseed_and_generate(results, global_fork_counter);
        }
        let num_bytes = results.as_ref().len() * size_of::<Self::Item>();
        self.bytes_until_reseed -= num_bytes as i64;
        self.inner.generate(results);
    }
}

impl<R, Rsdr> ReseedingCore<R, Rsdr>
where
    R: BlockRngCore + SeedableRng,
    Rsdr: RngCore,
{
    /// Create a new `ReseedingCore`.
fn new(rng: R, threshold: u64, reseeder: Rsdr) -> Self { use ::core::i64::MAX; fork::register_fork_handler(); // Because generating more values than `i64::MAX` takes centuries on // current hardware, we just clamp to that value. // Also we set a threshold of 0, which indicates no limit, to that // value. let threshold = if threshold == 0 { MAX } else if threshold <= MAX as u64 { threshold as i64 } else { MAX }; ReseedingCore { inner: rng, reseeder, threshold: threshold as i64, bytes_until_reseed: threshold as i64, fork_counter: 0, } } /// Reseed the internal PRNG. fn reseed(&mut self) -> Result<(), Error> { R::from_rng(&mut self.reseeder).map(|result| { self.bytes_until_reseed = self.threshold; self.inner = result }) } fn is_forked(&self, global_fork_counter: usize) -> bool { // In theory, on 32-bit platforms, it is possible for // `global_fork_counter` to wrap around after ~4e9 forks. // // This check will detect a fork in the normal case where // `fork_counter < global_fork_counter`, and also when the difference // between both is greater than `isize::MAX` (wrapped around). // // It will still fail to detect a fork if there have been more than // `isize::MAX` forks, without any reseed in between. Seems unlikely // enough. 
(self.fork_counter.wrapping_sub(global_fork_counter) as isize) < 0
    }

    #[inline(never)]
    fn reseed_and_generate(
        &mut self,
        results: &mut <Self as BlockRngCore>::Results,
        global_fork_counter: usize,
    ) {
        #![allow(clippy::if_same_then_else)] // false positive
        if self.is_forked(global_fork_counter) {
            info!("Fork detected, reseeding RNG");
        } else {
            trace!("Reseeding RNG (periodic reseed)");
        }

        let num_bytes = results.as_ref().len() * size_of::<<R as BlockRngCore>::Item>();

        if let Err(e) = self.reseed() {
            warn!("Reseeding RNG failed: {}", e);
            let _ = e;
        }
        self.fork_counter = global_fork_counter;

        self.bytes_until_reseed = self.threshold - num_bytes as i64;
        self.inner.generate(results);
    }
}

impl<R, Rsdr> Clone for ReseedingCore<R, Rsdr>
where
    R: BlockRngCore + SeedableRng + Clone,
    Rsdr: RngCore + Clone,
{
    fn clone(&self) -> ReseedingCore<R, Rsdr> {
        ReseedingCore {
            inner: self.inner.clone(),
            reseeder: self.reseeder.clone(),
            threshold: self.threshold,
            bytes_until_reseed: 0, // reseed clone on first use
            fork_counter: self.fork_counter,
        }
    }
}

impl<R, Rsdr> CryptoRng for ReseedingCore<R, Rsdr>
where
    R: BlockRngCore + SeedableRng + CryptoRng,
    Rsdr: RngCore + CryptoRng,
{
}

#[cfg(all(unix, not(target_os = "emscripten")))]
mod fork {
    use core::sync::atomic::{AtomicUsize, Ordering};
    use std::sync::Once;

    // Fork protection
    //
    // We implement fork protection on Unix using `pthread_atfork`.
    // When the process is forked, we increment `RESEEDING_RNG_FORK_COUNTER`.
    // Every `ReseedingRng` stores the last known value of the static in
    // `fork_counter`. If the cached `fork_counter` is less than
    // `RESEEDING_RNG_FORK_COUNTER`, it is time to reseed this RNG.
    //
    // If reseeding fails, we don't deal with this by setting a delay, but just
    // don't update `fork_counter`, so a reseed is attempted as soon as
    // possible.
static RESEEDING_RNG_FORK_COUNTER: AtomicUsize = AtomicUsize::new(0);

    pub fn get_fork_counter() -> usize {
        RESEEDING_RNG_FORK_COUNTER.load(Ordering::Relaxed)
    }

    extern "C" fn fork_handler() {
        // Note: fetch_add is defined to wrap on overflow
        // (which is what we want).
        RESEEDING_RNG_FORK_COUNTER.fetch_add(1, Ordering::Relaxed);
    }

    pub fn register_fork_handler() {
        static REGISTER: Once = Once::new();
        REGISTER.call_once(|| unsafe {
            libc::pthread_atfork(None, None, Some(fork_handler));
        });
    }
}

#[cfg(not(all(unix, not(target_os = "emscripten"))))]
mod fork {
    pub fn get_fork_counter() -> usize {
        0
    }
    pub fn register_fork_handler() {}
}

#[cfg(feature = "std_rng")]
#[cfg(test)]
mod test {
    use super::ReseedingRng;
    use crate::rngs::mock::StepRng;
    use crate::rngs::std::Core;
    use crate::{Rng, SeedableRng};

    #[test]
    fn test_reseeding() {
        let mut zero = StepRng::new(0, 0);
        let rng = Core::from_rng(&mut zero).unwrap();
        let thresh = 1; // reseed every time the buffer is exhausted
        let mut reseeding = ReseedingRng::new(rng, thresh, zero);

        // RNG buffer size is [u32; 64]
        // Debug is only implemented up to length 32 so use two arrays
        let mut buf = ([0u32; 32], [0u32; 32]);
        reseeding.fill(&mut buf.0);
        reseeding.fill(&mut buf.1);
        let seq = buf;
        for _ in 0..10 {
            reseeding.fill(&mut buf.0);
            reseeding.fill(&mut buf.1);
            assert_eq!(buf, seq);
        }
    }

    #[test]
    fn test_clone_reseeding() {
        #![allow(clippy::redundant_clone)]
        let mut zero = StepRng::new(0, 0);
        let rng = Core::from_rng(&mut zero).unwrap();
        let mut rng1 = ReseedingRng::new(rng, 32 * 4, zero);

        let first: u32 = rng1.gen();
        for _ in 0..10 {
            let _ = rng1.gen::<u32>();
        }

        let mut rng2 = rng1.clone();
        assert_eq!(first, rng2.gen::<u32>());
    }
}

rand-0.8.4/src/rngs/mock.rs

// Copyright 2018 Developers of the Rand project.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// https://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or https://opensource.org/licenses/MIT>, at your
// option.
This file may not be copied, modified, or distributed // except according to those terms. //! Mock random number generator use rand_core::{impls, Error, RngCore}; #[cfg(feature = "serde1")] use serde::{Serialize, Deserialize}; /// A simple implementation of `RngCore` for testing purposes. /// /// This generates an arithmetic sequence (i.e. adds a constant each step) /// over a `u64` number, using wrapping arithmetic. If the increment is 0 /// the generator yields a constant. /// /// ``` /// use rand::Rng; /// use rand::rngs::mock::StepRng; /// /// let mut my_rng = StepRng::new(2, 1); /// let sample: [u64; 3] = my_rng.gen(); /// assert_eq!(sample, [2, 3, 4]); /// ``` #[derive(Debug, Clone, PartialEq, Eq)] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] pub struct StepRng { v: u64, a: u64, } impl StepRng { /// Create a `StepRng`, yielding an arithmetic sequence starting with /// `initial` and incremented by `increment` each time. pub fn new(initial: u64, increment: u64) -> Self { StepRng { v: initial, a: increment, } } } impl RngCore for StepRng { #[inline] fn next_u32(&mut self) -> u32 { self.next_u64() as u32 } #[inline] fn next_u64(&mut self) -> u64 { let result = self.v; self.v = self.v.wrapping_add(self.a); result } #[inline] fn fill_bytes(&mut self, dest: &mut [u8]) { impls::fill_bytes_via_next(self, dest); } #[inline] fn try_fill_bytes(&mut self, dest: &mut [u8]) -> Result<(), Error> { self.fill_bytes(dest); Ok(()) } } #[cfg(test)] mod tests { #[test] #[cfg(feature = "serde1")] fn test_serialization_step_rng() { use super::StepRng; let some_rng = StepRng::new(42, 7); let de_some_rng: StepRng = bincode::deserialize(&bincode::serialize(&some_rng).unwrap()).unwrap(); assert_eq!(some_rng.v, de_some_rng.v); assert_eq!(some_rng.a, de_some_rng.a); } } rand-0.8.4/src/rngs/mod.rs000064400000000000000000000135110000000000000134240ustar 00000000000000// Copyright 2018 Developers of the Rand project. 
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// https://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or https://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.

//! Random number generators and adapters
//!
//! ## Background: Random number generators (RNGs)
//!
//! Computers cannot produce random numbers from nowhere. We classify
//! random number generators as follows:
//!
//! - "True" random number generators (TRNGs) use hard-to-predict data sources
//!   (e.g. the high-resolution parts of event timings and sensor jitter) to
//!   harvest random bit-sequences, apply algorithms to remove bias and
//!   estimate available entropy, then combine these bits into a byte-sequence
//!   or an entropy pool. This job is usually done by the operating system or
//!   a hardware generator (HRNG).
//! - "Pseudo"-random number generators (PRNGs) use algorithms to transform a
//!   seed into a sequence of pseudo-random numbers. These generators can be
//!   fast and produce well-distributed unpredictable random numbers (or not).
//!   They are usually deterministic: given the algorithm and seed, the output
//!   sequence can be reproduced. They have a finite period and eventually
//!   loop; with many algorithms this period is fixed and can be proven
//!   sufficiently long, while others are chaotic and the period depends on
//!   the seed.
//! - "Cryptographically secure" pseudo-random number generators (CSPRNGs)
//!   are the sub-set of PRNGs which are secure. Security of the generator
//!   relies both on hiding the internal state and using a strong algorithm.
//!
//! ## Traits and functionality
//!
//! All RNGs implement the [`RngCore`] trait, as a consequence of which the
//! [`Rng`] extension trait is automatically implemented. Secure RNGs may
//! additionally implement the [`CryptoRng`] trait.
//!
//! All PRNGs require a seed to produce their random number sequence. The
//! [`SeedableRng`] trait provides four ways of constructing PRNGs:
//!
//!
- `from_seed` accepts a type specific to the PRNG //! - `from_rng` allows a PRNG to be seeded from any other RNG //! - `seed_from_u64` allows any PRNG to be seeded from a `u64` insecurely //! - `from_entropy` securely seeds a PRNG from fresh entropy //! //! Use the [`rand_core`] crate when implementing your own RNGs. //! //! ## Our generators //! //! This crate provides several random number generators: //! //! - [`OsRng`] is an interface to the operating system's random number //! source. Typically the operating system uses a CSPRNG with entropy //! provided by a TRNG and some type of on-going re-seeding. //! - [`ThreadRng`], provided by the [`thread_rng`] function, is a handle to a //! thread-local CSPRNG with periodic seeding from [`OsRng`]. Because this //! is local, it is typically much faster than [`OsRng`]. It should be //! secure, though the paranoid may prefer [`OsRng`]. //! - [`StdRng`] is a CSPRNG chosen for good performance and trust of security //! (based on reviews, maturity and usage). The current algorithm is ChaCha12, //! which is well established and rigorously analysed. //! [`StdRng`] provides the algorithm used by [`ThreadRng`] but without //! periodic reseeding. //! - [`SmallRng`] is an **insecure** PRNG designed to be fast, simple, require //! little memory, and have good output quality. //! //! The algorithms selected for [`StdRng`] and [`SmallRng`] may change in any //! release and may be platform-dependent, therefore they should be considered //! **not reproducible**. //! //! ## Additional generators //! //! **TRNGs**: The [`rdrand`] crate provides an interface to the RDRAND and //! RDSEED instructions available in modern Intel and AMD CPUs. //! The [`rand_jitter`] crate provides a user-space implementation of //! entropy harvesting from CPU timer jitter, but is very slow and has //! [security issues](https://github.com/rust-random/rand/issues/699). //! //! **PRNGs**: Several companion crates are available, providing individual or //! 
families of PRNG algorithms. These provide the implementations behind //! [`StdRng`] and [`SmallRng`] but can also be used directly, indeed *should* //! be used directly when **reproducibility** matters. //! Some suggestions are: [`rand_chacha`], [`rand_pcg`], [`rand_xoshiro`]. //! A full list can be found by searching for crates with the [`rng` tag]. //! //! [`Rng`]: crate::Rng //! [`RngCore`]: crate::RngCore //! [`CryptoRng`]: crate::CryptoRng //! [`SeedableRng`]: crate::SeedableRng //! [`thread_rng`]: crate::thread_rng //! [`rdrand`]: https://crates.io/crates/rdrand //! [`rand_jitter`]: https://crates.io/crates/rand_jitter //! [`rand_chacha`]: https://crates.io/crates/rand_chacha //! [`rand_pcg`]: https://crates.io/crates/rand_pcg //! [`rand_xoshiro`]: https://crates.io/crates/rand_xoshiro //! [`rng` tag]: https://crates.io/keywords/rng #[cfg_attr(doc_cfg, doc(cfg(feature = "std")))] #[cfg(feature = "std")] pub mod adapter; pub mod mock; // Public so we don't export `StepRng` directly, making it a bit // more clear it is intended for testing. #[cfg(all(feature = "small_rng", target_pointer_width = "64"))] mod xoshiro256plusplus; #[cfg(all(feature = "small_rng", not(target_pointer_width = "64")))] mod xoshiro128plusplus; #[cfg(feature = "small_rng")] mod small; #[cfg(feature = "std_rng")] mod std; #[cfg(all(feature = "std", feature = "std_rng"))] pub(crate) mod thread; #[cfg(feature = "small_rng")] pub use self::small::SmallRng; #[cfg(feature = "std_rng")] pub use self::std::StdRng; #[cfg(all(feature = "std", feature = "std_rng"))] pub use self::thread::ThreadRng; #[cfg_attr(doc_cfg, doc(cfg(feature = "getrandom")))] #[cfg(feature = "getrandom")] pub use rand_core::OsRng; rand-0.8.4/src/rngs/small.rs000064400000000000000000000103300000000000000137510ustar 00000000000000// Copyright 2018 Developers of the Rand project. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. 
This file may not be copied, modified, or distributed // except according to those terms. //! A small fast RNG use rand_core::{Error, RngCore, SeedableRng}; #[cfg(target_pointer_width = "64")] type Rng = super::xoshiro256plusplus::Xoshiro256PlusPlus; #[cfg(not(target_pointer_width = "64"))] type Rng = super::xoshiro128plusplus::Xoshiro128PlusPlus; /// A small-state, fast non-crypto PRNG /// /// `SmallRng` may be a good choice when a PRNG with small state, cheap /// initialization, good statistical quality and good performance are required. /// Note that depending on the application, [`StdRng`] may be faster on many /// modern platforms while providing higher-quality randomness. Furthermore, /// `SmallRng` is **not** a good choice when: /// - Security against prediction is important. Use [`StdRng`] instead. /// - Seeds with many zeros are provided. In such cases, it takes `SmallRng` /// about 10 samples to produce 0 and 1 bits with equal probability. Either /// provide seeds with an approximately equal number of 0 and 1 (for example /// by using [`SeedableRng::from_entropy`] or [`SeedableRng::seed_from_u64`]), /// or use [`StdRng`] instead. /// /// The algorithm is deterministic but should not be considered reproducible /// due to dependence on platform and possible replacement in future /// library versions. For a reproducible generator, use a named PRNG from an /// external crate, e.g. [rand_xoshiro] or [rand_chacha]. /// Refer also to [The Book](https://rust-random.github.io/book/guide-rngs.html). /// /// The PRNG algorithm in `SmallRng` is chosen to be efficient on the current /// platform, without consideration for cryptography or security. The size of /// its state is much smaller than [`StdRng`]. The current algorithm is /// `Xoshiro256PlusPlus` on 64-bit platforms and `Xoshiro128PlusPlus` on 32-bit /// platforms. Both are also implemented by the [rand_xoshiro] crate. 
///
/// # Examples
///
/// Initializing `SmallRng` with a random seed can be done using [`SeedableRng::from_entropy`]:
///
/// ```
/// use rand::{Rng, SeedableRng};
/// use rand::rngs::SmallRng;
///
/// // Create small, cheap to initialize and fast RNG with a random seed.
/// // The randomness is supplied by the operating system.
/// let mut small_rng = SmallRng::from_entropy();
/// # let v: u32 = small_rng.gen();
/// ```
///
/// When initializing a lot of `SmallRng`'s, using [`thread_rng`] can be more
/// efficient:
///
/// ```
/// use rand::{SeedableRng, thread_rng};
/// use rand::rngs::SmallRng;
///
/// // Create a big, expensive to initialize and slower, but unpredictable RNG.
/// // This is cached and done only once per thread.
/// let mut thread_rng = thread_rng();
/// // Create small, cheap to initialize and fast RNGs with random seeds.
/// // One can generally assume this won't fail.
/// let rngs: Vec<SmallRng> = (0..10)
///     .map(|_| SmallRng::from_rng(&mut thread_rng).unwrap())
///     .collect();
/// ```
///
/// [`StdRng`]: crate::rngs::StdRng
/// [`thread_rng`]: crate::thread_rng
/// [rand_chacha]: https://crates.io/crates/rand_chacha
/// [rand_xoshiro]: https://crates.io/crates/rand_xoshiro
#[cfg_attr(doc_cfg, doc(cfg(feature = "small_rng")))]
#[derive(Clone, Debug, PartialEq, Eq)]
pub struct SmallRng(Rng);

impl RngCore for SmallRng {
    #[inline(always)]
    fn next_u32(&mut self) -> u32 {
        self.0.next_u32()
    }

    #[inline(always)]
    fn next_u64(&mut self) -> u64 {
        self.0.next_u64()
    }

    #[inline(always)]
    fn fill_bytes(&mut self, dest: &mut [u8]) {
        self.0.fill_bytes(dest);
    }

    #[inline(always)]
    fn try_fill_bytes(&mut self, dest: &mut [u8]) -> Result<(), Error> {
        self.0.try_fill_bytes(dest)
    }
}

impl SeedableRng for SmallRng {
    type Seed = <Rng as SeedableRng>::Seed;

    #[inline(always)]
    fn from_seed(seed: Self::Seed) -> Self {
        SmallRng(Rng::from_seed(seed))
    }

    #[inline(always)]
    fn from_rng<R: RngCore>(rng: R) -> Result<Self, Error> {
        Rng::from_rng(rng).map(SmallRng)
    }
}
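The doc comment above states that `SmallRng` is backed by `Xoshiro256PlusPlus` on 64-bit platforms. As a hedged, dependency-free sketch (not part of this crate), the core step of that published algorithm can be written in a few lines. The `Sketch256PlusPlus` type and its trivial seeding are illustrative stand-ins only and do not implement `SeedableRng` semantics.

```rust
// Illustrative sketch of the xoshiro256++ step (Blackman & Vigna) that backs
// `SmallRng` on 64-bit targets. Not part of this crate; `from_nonzero_seed`
// is a hypothetical helper, not a real rand API.
struct Sketch256PlusPlus {
    s: [u64; 4],
}

impl Sketch256PlusPlus {
    // The state must not be all zeros, or the generator is stuck at zero.
    fn from_nonzero_seed(seed: [u64; 4]) -> Self {
        assert!(seed.iter().any(|&w| w != 0), "state must not be all zero");
        Sketch256PlusPlus { s: seed }
    }

    // One xoshiro256++ step: output rotl(s0 + s3, 23) + s0, then apply the
    // xoshiro256 linear state transition.
    fn next_u64(&mut self) -> u64 {
        let result = self.s[0]
            .wrapping_add(self.s[3])
            .rotate_left(23)
            .wrapping_add(self.s[0]);
        let t = self.s[1] << 17;
        self.s[2] ^= self.s[0];
        self.s[3] ^= self.s[1];
        self.s[1] ^= self.s[2];
        self.s[0] ^= self.s[3];
        self.s[2] ^= t;
        self.s[3] = self.s[3].rotate_left(45);
        result
    }
}
```

Two instances started from the same state yield identical streams, which is why `SmallRng` is deterministic for a fixed seed; it is still documented as non-reproducible because the backing algorithm may change between library versions and platforms.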
rand-0.8.4/src/rngs/std.rs000064400000000000000000000062510000000000000134420ustar 00000000000000// Copyright 2018 Developers of the Rand project. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. //! The standard RNG use crate::{CryptoRng, Error, RngCore, SeedableRng}; #[cfg(all(any(test, feature = "std"), not(target_os = "emscripten")))] pub(crate) use rand_chacha::ChaCha12Core as Core; #[cfg(all(any(test, feature = "std"), target_os = "emscripten"))] pub(crate) use rand_hc::Hc128Core as Core; #[cfg(not(target_os = "emscripten"))] use rand_chacha::ChaCha12Rng as Rng; #[cfg(target_os = "emscripten")] use rand_hc::Hc128Rng as Rng; /// The standard RNG. The PRNG algorithm in `StdRng` is chosen to be efficient /// on the current platform, to be statistically strong and unpredictable /// (meaning a cryptographically secure PRNG). /// /// The current algorithm used is the ChaCha block cipher with 12 rounds. Please /// see this relevant [rand issue] for the discussion. This may change as new /// evidence of cipher security and performance becomes available. /// /// The algorithm is deterministic but should not be considered reproducible /// due to dependence on configuration and possible replacement in future /// library versions. For a secure reproducible generator, we recommend use of /// the [rand_chacha] crate directly. 
///
/// [rand_chacha]: https://crates.io/crates/rand_chacha
/// [rand issue]: https://github.com/rust-random/rand/issues/932
#[cfg_attr(doc_cfg, doc(cfg(feature = "std_rng")))]
#[derive(Clone, Debug, PartialEq, Eq)]
pub struct StdRng(Rng);

impl RngCore for StdRng {
    #[inline(always)]
    fn next_u32(&mut self) -> u32 {
        self.0.next_u32()
    }

    #[inline(always)]
    fn next_u64(&mut self) -> u64 {
        self.0.next_u64()
    }

    #[inline(always)]
    fn fill_bytes(&mut self, dest: &mut [u8]) {
        self.0.fill_bytes(dest);
    }

    #[inline(always)]
    fn try_fill_bytes(&mut self, dest: &mut [u8]) -> Result<(), Error> {
        self.0.try_fill_bytes(dest)
    }
}

impl SeedableRng for StdRng {
    type Seed = <Rng as SeedableRng>::Seed;

    #[inline(always)]
    fn from_seed(seed: Self::Seed) -> Self {
        StdRng(Rng::from_seed(seed))
    }

    #[inline(always)]
    fn from_rng<R: RngCore>(rng: R) -> Result<Self, Error> {
        Rng::from_rng(rng).map(StdRng)
    }
}

impl CryptoRng for StdRng {}

#[cfg(test)]
mod test {
    use crate::rngs::StdRng;
    use crate::{RngCore, SeedableRng};

    #[test]
    fn test_stdrng_construction() {
        // Test value-stability of StdRng. This is expected to break any time
        // the algorithm is changed.
        #[rustfmt::skip]
        let seed = [1,0,0,0, 23,0,0,0, 200,1,0,0, 210,30,0,0,
                    0,0,0,0, 0,0,0,0, 0,0,0,0, 0,0,0,0];

        let target = [10719222850664546238, 14064965282130556830];

        let mut rng0 = StdRng::from_seed(seed);
        let x0 = rng0.next_u64();

        let mut rng1 = StdRng::from_rng(rng0).unwrap();
        let x1 = rng1.next_u64();

        assert_eq!([x0, x1], target);
    }
}

rand-0.8.4/src/rngs/thread.rs

// Copyright 2018 Developers of the Rand project.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// https://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or https://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.

//!
Thread-local random number generator use core::cell::UnsafeCell; use std::rc::Rc; use std::thread_local; use super::std::Core; use crate::rngs::adapter::ReseedingRng; use crate::rngs::OsRng; use crate::{CryptoRng, Error, RngCore, SeedableRng}; // Rationale for using `UnsafeCell` in `ThreadRng`: // // Previously we used a `RefCell`, with an overhead of ~15%. There will only // ever be one mutable reference to the interior of the `UnsafeCell`, because // we only have such a reference inside `next_u32`, `next_u64`, etc. Within a // single thread (which is the definition of `ThreadRng`), there will only ever // be one of these methods active at a time. // // A possible scenario where there could be multiple mutable references is if // `ThreadRng` is used inside `next_u32` and co. But the implementation is // completely under our control. We just have to ensure none of them use // `ThreadRng` internally, which is nonsensical anyway. We should also never run // `ThreadRng` in destructors of its implementation, which is also nonsensical. // Number of generated bytes after which to reseed `ThreadRng`. // According to benchmarks, reseeding has a noticable impact with thresholds // of 32 kB and less. We choose 64 kB to avoid significant overhead. const THREAD_RNG_RESEED_THRESHOLD: u64 = 1024 * 64; /// A reference to the thread-local generator /// /// An instance can be obtained via [`thread_rng`] or via `ThreadRng::default()`. /// This handle is safe to use everywhere (including thread-local destructors) /// but cannot be passed between threads (is not `Send` or `Sync`). /// /// `ThreadRng` uses the same PRNG as [`StdRng`] for security and performance /// and is automatically seeded from [`OsRng`]. /// /// Unlike `StdRng`, `ThreadRng` uses the [`ReseedingRng`] wrapper to reseed /// the PRNG from fresh entropy every 64 kiB of random data as well as after a /// fork on Unix (though not quite immediately; see documentation of /// [`ReseedingRng`]). 
/// Note that the reseeding is done as an extra precaution against side-channel /// attacks and mis-use (e.g. if somehow weak entropy were supplied initially). /// The PRNG algorithms used are assumed to be secure. /// /// [`ReseedingRng`]: crate::rngs::adapter::ReseedingRng /// [`StdRng`]: crate::rngs::StdRng #[cfg_attr(doc_cfg, doc(cfg(all(feature = "std", feature = "std_rng"))))] #[derive(Clone, Debug)] pub struct ThreadRng { // Rc is explictly !Send and !Sync rng: Rc>>, } thread_local!( // We require Rc<..> to avoid premature freeing when thread_rng is used // within thread-local destructors. See #968. static THREAD_RNG_KEY: Rc>> = { let r = Core::from_rng(OsRng).unwrap_or_else(|err| panic!("could not initialize thread_rng: {}", err)); let rng = ReseedingRng::new(r, THREAD_RNG_RESEED_THRESHOLD, OsRng); Rc::new(UnsafeCell::new(rng)) } ); /// Retrieve the lazily-initialized thread-local random number generator, /// seeded by the system. Intended to be used in method chaining style, /// e.g. `thread_rng().gen::()`, or cached locally, e.g. /// `let mut rng = thread_rng();`. Invoked by the `Default` trait, making /// `ThreadRng::default()` equivalent. /// /// For more information see [`ThreadRng`]. 
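The handle pattern described above — a cheap, clonable, non-`Send` reference to per-thread PRNG state — can be sketched without `unsafe` using `Rc<Cell<..>>` around a toy generator. All names here (`ToyRng`, `toy_thread_rng`, the LCG constants from Knuth's MMIX) are illustrative stand-ins, not part of rand:

```rust
use std::cell::Cell;
use std::rc::Rc;

// Toy linear-congruential generator standing in for the reseeding ChaCha core.
// Rc is !Send and !Sync, so handles cannot cross threads; Cell gives cheap
// interior mutability without RefCell's runtime borrow bookkeeping.
#[derive(Clone)]
pub struct ToyRng(Rc<Cell<u64>>);

impl ToyRng {
    pub fn next_u32(&mut self) -> u32 {
        // Knuth MMIX LCG step; upper bits are the output.
        let s = self
            .0
            .get()
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        self.0.set(s);
        (s >> 32) as u32
    }
}

thread_local!(
    // One shared state per thread; every handle is a clone of this Rc.
    static TOY_RNG_KEY: Rc<Cell<u64>> = Rc::new(Cell::new(1));
);

pub fn toy_thread_rng() -> ToyRng {
    ToyRng(TOY_RNG_KEY.with(|t| t.clone()))
}
```

Two handles obtained on the same thread share state, mirroring how every `thread_rng()` call on a thread sees the same underlying PRNG.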
#[cfg_attr(doc_cfg, doc(cfg(all(feature = "std", feature = "std_rng"))))] pub fn thread_rng() -> ThreadRng { let rng = THREAD_RNG_KEY.with(|t| t.clone()); ThreadRng { rng } } impl Default for ThreadRng { fn default() -> ThreadRng { crate::prelude::thread_rng() } } impl RngCore for ThreadRng { #[inline(always)] fn next_u32(&mut self) -> u32 { // SAFETY: We must make sure to stop using `rng` before anyone else // creates another mutable reference let rng = unsafe { &mut *self.rng.get() }; rng.next_u32() } #[inline(always)] fn next_u64(&mut self) -> u64 { // SAFETY: We must make sure to stop using `rng` before anyone else // creates another mutable reference let rng = unsafe { &mut *self.rng.get() }; rng.next_u64() } fn fill_bytes(&mut self, dest: &mut [u8]) { // SAFETY: We must make sure to stop using `rng` before anyone else // creates another mutable reference let rng = unsafe { &mut *self.rng.get() }; rng.fill_bytes(dest) } fn try_fill_bytes(&mut self, dest: &mut [u8]) -> Result<(), Error> { // SAFETY: We must make sure to stop using `rng` before anyone else // creates another mutable reference let rng = unsafe { &mut *self.rng.get() }; rng.try_fill_bytes(dest) } } impl CryptoRng for ThreadRng {} #[cfg(test)] mod test { #[test] fn test_thread_rng() { use crate::Rng; let mut r = crate::thread_rng(); r.gen::(); assert_eq!(r.gen_range(0..1), 0); } } rand-0.8.4/src/rngs/xoshiro128plusplus.rs000064400000000000000000000070770000000000000164150ustar 00000000000000// Copyright 2018 Developers of the Rand project. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. #[cfg(feature="serde1")] use serde::{Serialize, Deserialize}; use rand_core::impls::{next_u64_via_u32, fill_bytes_via_next}; use rand_core::le::read_u32_into; use rand_core::{SeedableRng, RngCore, Error}; /// A xoshiro128++ random number generator. 
/// /// The xoshiro128++ algorithm is not suitable for cryptographic purposes, but /// is very fast and has excellent statistical properties. /// /// The algorithm used here is translated from [the `xoshiro128plusplus.c` /// reference source code](http://xoshiro.di.unimi.it/xoshiro128plusplus.c) by /// David Blackman and Sebastiano Vigna. #[derive(Debug, Clone, PartialEq, Eq)] #[cfg_attr(feature="serde1", derive(Serialize, Deserialize))] pub struct Xoshiro128PlusPlus { s: [u32; 4], } impl SeedableRng for Xoshiro128PlusPlus { type Seed = [u8; 16]; /// Create a new `Xoshiro128PlusPlus`. If `seed` is entirely 0, it will be /// mapped to a different seed. #[inline] fn from_seed(seed: [u8; 16]) -> Xoshiro128PlusPlus { if seed.iter().all(|&x| x == 0) { return Self::seed_from_u64(0); } let mut state = [0; 4]; read_u32_into(&seed, &mut state); Xoshiro128PlusPlus { s: state } } /// Create a new `Xoshiro128PlusPlus` from a `u64` seed. /// /// This uses the SplitMix64 generator internally. fn seed_from_u64(mut state: u64) -> Self { const PHI: u64 = 0x9e3779b97f4a7c15; let mut seed = Self::Seed::default(); for chunk in seed.as_mut().chunks_mut(8) { state = state.wrapping_add(PHI); let mut z = state; z = (z ^ (z >> 30)).wrapping_mul(0xbf58476d1ce4e5b9); z = (z ^ (z >> 27)).wrapping_mul(0x94d049bb133111eb); z = z ^ (z >> 31); chunk.copy_from_slice(&z.to_le_bytes()); } Self::from_seed(seed) } } impl RngCore for Xoshiro128PlusPlus { #[inline] fn next_u32(&mut self) -> u32 { let result_starstar = self.s[0] .wrapping_add(self.s[3]) .rotate_left(7) .wrapping_add(self.s[0]); let t = self.s[1] << 9; self.s[2] ^= self.s[0]; self.s[3] ^= self.s[1]; self.s[1] ^= self.s[2]; self.s[0] ^= self.s[3]; self.s[2] ^= t; self.s[3] = self.s[3].rotate_left(11); result_starstar } #[inline] fn next_u64(&mut self) -> u64 { next_u64_via_u32(self) } #[inline] fn fill_bytes(&mut self, dest: &mut [u8]) { fill_bytes_via_next(self, dest); } #[inline] fn try_fill_bytes(&mut self, dest: &mut [u8]) -> 
Result<(), Error> { self.fill_bytes(dest); Ok(()) } } #[cfg(test)] mod tests { use super::*; #[test] fn reference() { let mut rng = Xoshiro128PlusPlus::from_seed( [1, 0, 0, 0, 2, 0, 0, 0, 3, 0, 0, 0, 4, 0, 0, 0]); // These values were produced with the reference implementation: // http://xoshiro.di.unimi.it/xoshiro128plusplus.c let expected = [ 641, 1573767, 3222811527, 3517856514, 836907274, 4247214768, 3867114732, 1355841295, 495546011, 621204420, ]; for &e in &expected { assert_eq!(rng.next_u32(), e); } } } rand-0.8.4/src/rngs/xoshiro256plusplus.rs000064400000000000000000000074640000000000000164170ustar 00000000000000// Copyright 2018 Developers of the Rand project. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. #[cfg(feature="serde1")] use serde::{Serialize, Deserialize}; use rand_core::impls::fill_bytes_via_next; use rand_core::le::read_u64_into; use rand_core::{SeedableRng, RngCore, Error}; /// A xoshiro256++ random number generator. /// /// The xoshiro256++ algorithm is not suitable for cryptographic purposes, but /// is very fast and has excellent statistical properties. /// /// The algorithm used here is translated from [the `xoshiro256plusplus.c` /// reference source code](http://xoshiro.di.unimi.it/xoshiro256plusplus.c) by /// David Blackman and Sebastiano Vigna. #[derive(Debug, Clone, PartialEq, Eq)] #[cfg_attr(feature="serde1", derive(Serialize, Deserialize))] pub struct Xoshiro256PlusPlus { s: [u64; 4], } impl SeedableRng for Xoshiro256PlusPlus { type Seed = [u8; 32]; /// Create a new `Xoshiro256PlusPlus`. If `seed` is entirely 0, it will be /// mapped to a different seed. 
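The xoshiro256++ output and state-update step can be checked in isolation against the reference vector used by the test at the end of this file. This standalone sketch (our own struct name) reimplements just the core transform:

```rust
// Standalone sketch of the xoshiro256++ state transition, matching the
// algorithm implemented by Xoshiro256PlusPlus::next_u64.
pub struct Xoshiro256Sketch {
    s: [u64; 4],
}

impl Xoshiro256Sketch {
    pub fn next_u64(&mut self) -> u64 {
        // Output function: the "++" scrambler over s[0] and s[3].
        let result = self.s[0]
            .wrapping_add(self.s[3])
            .rotate_left(23)
            .wrapping_add(self.s[0]);

        // Linear state update (xor-shift-rotate).
        let t = self.s[1] << 17;
        self.s[2] ^= self.s[0];
        self.s[3] ^= self.s[1];
        self.s[1] ^= self.s[2];
        self.s[0] ^= self.s[3];
        self.s[2] ^= t;
        self.s[3] = self.s[3].rotate_left(45);
        result
    }
}
```

With state `[1, 2, 3, 4]` (the little-endian reading of the seed bytes used in the reference test), the first outputs match the reference implementation's `41943041, 58720359, ...`.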
#[inline] fn from_seed(seed: [u8; 32]) -> Xoshiro256PlusPlus { if seed.iter().all(|&x| x == 0) { return Self::seed_from_u64(0); } let mut state = [0; 4]; read_u64_into(&seed, &mut state); Xoshiro256PlusPlus { s: state } } /// Create a new `Xoshiro256PlusPlus` from a `u64` seed. /// /// This uses the SplitMix64 generator internally. fn seed_from_u64(mut state: u64) -> Self { const PHI: u64 = 0x9e3779b97f4a7c15; let mut seed = Self::Seed::default(); for chunk in seed.as_mut().chunks_mut(8) { state = state.wrapping_add(PHI); let mut z = state; z = (z ^ (z >> 30)).wrapping_mul(0xbf58476d1ce4e5b9); z = (z ^ (z >> 27)).wrapping_mul(0x94d049bb133111eb); z = z ^ (z >> 31); chunk.copy_from_slice(&z.to_le_bytes()); } Self::from_seed(seed) } } impl RngCore for Xoshiro256PlusPlus { #[inline] fn next_u32(&mut self) -> u32 { // The lowest bits have some linear dependencies, so we use the // upper bits instead. (self.next_u64() >> 32) as u32 } #[inline] fn next_u64(&mut self) -> u64 { let result_plusplus = self.s[0] .wrapping_add(self.s[3]) .rotate_left(23) .wrapping_add(self.s[0]); let t = self.s[1] << 17; self.s[2] ^= self.s[0]; self.s[3] ^= self.s[1]; self.s[1] ^= self.s[2]; self.s[0] ^= self.s[3]; self.s[2] ^= t; self.s[3] = self.s[3].rotate_left(45); result_plusplus } #[inline] fn fill_bytes(&mut self, dest: &mut [u8]) { fill_bytes_via_next(self, dest); } #[inline] fn try_fill_bytes(&mut self, dest: &mut [u8]) -> Result<(), Error> { self.fill_bytes(dest); Ok(()) } } #[cfg(test)] mod tests { use super::*; #[test] fn reference() { let mut rng = Xoshiro256PlusPlus::from_seed( [1, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 4, 0, 0, 0, 0, 0, 0, 0]); // These values were produced with the reference implementation: // http://xoshiro.di.unimi.it/xoshiro256plusplus.c let expected = [ 41943041, 58720359, 3588806011781223, 3591011842654386, 9228616714210784205, 9973669472204895162, 14011001112246962877, 12406186145184390807, 15849039046786891736, 
10450023813501588000, ]; for &e in &expected { assert_eq!(rng.next_u64(), e); } } } rand-0.8.4/src/seq/index.rs000064400000000000000000000540410000000000000135760ustar 00000000000000// Copyright 2018 Developers of the Rand project. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. //! Low-level API for sampling indices #[cfg(feature = "alloc")] use core::slice; #[cfg(feature = "alloc")] use alloc::vec::{self, Vec}; // BTreeMap is not as fast in tests, but better than nothing. #[cfg(all(feature = "alloc", not(feature = "std")))] use alloc::collections::BTreeSet; #[cfg(feature = "std")] use std::collections::HashSet; #[cfg(feature = "alloc")] use crate::distributions::{uniform::SampleUniform, Distribution, Uniform}; #[cfg(feature = "std")] use crate::distributions::WeightedError; use crate::Rng; #[cfg(feature = "serde1")] use serde::{Serialize, Deserialize}; /// A vector of indices. /// /// Multiple internal representations are possible. #[derive(Clone, Debug)] #[cfg_attr(feature = "serde1", derive(Serialize, Deserialize))] pub enum IndexVec { #[doc(hidden)] U32(Vec), #[doc(hidden)] USize(Vec), } impl IndexVec { /// Returns the number of indices #[inline] pub fn len(&self) -> usize { match *self { IndexVec::U32(ref v) => v.len(), IndexVec::USize(ref v) => v.len(), } } /// Returns `true` if the length is 0. #[inline] pub fn is_empty(&self) -> bool { match *self { IndexVec::U32(ref v) => v.is_empty(), IndexVec::USize(ref v) => v.is_empty(), } } /// Return the value at the given `index`. /// /// (Note: we cannot implement [`std::ops::Index`] because of lifetime /// restrictions.) #[inline] pub fn index(&self, index: usize) -> usize { match *self { IndexVec::U32(ref v) => v[index] as usize, IndexVec::USize(ref v) => v[index], } } /// Return result as a `Vec`. Conversion may or may not be trivial. 
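The two-representation scheme described above can be sketched in miniature (our own `IdxVec` name, not rand's type): indices are stored as `u32` where possible for size and speed, and `into_vec` widens only in that case:

```rust
// Two-representation index vector: u32 where possible (smaller, faster),
// usize otherwise; `into_vec` widens the u32 case element by element.
pub enum IdxVec {
    U32(Vec<u32>),
    USize(Vec<usize>),
}

impl IdxVec {
    pub fn into_vec(self) -> Vec<usize> {
        match self {
            // Non-trivial conversion: each element is widened.
            IdxVec::U32(v) => v.into_iter().map(|i| i as usize).collect(),
            // Trivial conversion: the allocation is returned as-is.
            IdxVec::USize(v) => v,
        }
    }
}
```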
#[inline] pub fn into_vec(self) -> Vec { match self { IndexVec::U32(v) => v.into_iter().map(|i| i as usize).collect(), IndexVec::USize(v) => v, } } /// Iterate over the indices as a sequence of `usize` values #[inline] pub fn iter(&self) -> IndexVecIter<'_> { match *self { IndexVec::U32(ref v) => IndexVecIter::U32(v.iter()), IndexVec::USize(ref v) => IndexVecIter::USize(v.iter()), } } } impl IntoIterator for IndexVec { type Item = usize; type IntoIter = IndexVecIntoIter; /// Convert into an iterator over the indices as a sequence of `usize` values #[inline] fn into_iter(self) -> IndexVecIntoIter { match self { IndexVec::U32(v) => IndexVecIntoIter::U32(v.into_iter()), IndexVec::USize(v) => IndexVecIntoIter::USize(v.into_iter()), } } } impl PartialEq for IndexVec { fn eq(&self, other: &IndexVec) -> bool { use self::IndexVec::*; match (self, other) { (&U32(ref v1), &U32(ref v2)) => v1 == v2, (&USize(ref v1), &USize(ref v2)) => v1 == v2, (&U32(ref v1), &USize(ref v2)) => { (v1.len() == v2.len()) && (v1.iter().zip(v2.iter()).all(|(x, y)| *x as usize == *y)) } (&USize(ref v1), &U32(ref v2)) => { (v1.len() == v2.len()) && (v1.iter().zip(v2.iter()).all(|(x, y)| *x == *y as usize)) } } } } impl From> for IndexVec { #[inline] fn from(v: Vec) -> Self { IndexVec::U32(v) } } impl From> for IndexVec { #[inline] fn from(v: Vec) -> Self { IndexVec::USize(v) } } /// Return type of `IndexVec::iter`. 
#[derive(Debug)] pub enum IndexVecIter<'a> { #[doc(hidden)] U32(slice::Iter<'a, u32>), #[doc(hidden)] USize(slice::Iter<'a, usize>), } impl<'a> Iterator for IndexVecIter<'a> { type Item = usize; #[inline] fn next(&mut self) -> Option { use self::IndexVecIter::*; match *self { U32(ref mut iter) => iter.next().map(|i| *i as usize), USize(ref mut iter) => iter.next().cloned(), } } #[inline] fn size_hint(&self) -> (usize, Option) { match *self { IndexVecIter::U32(ref v) => v.size_hint(), IndexVecIter::USize(ref v) => v.size_hint(), } } } impl<'a> ExactSizeIterator for IndexVecIter<'a> {} /// Return type of `IndexVec::into_iter`. #[derive(Clone, Debug)] pub enum IndexVecIntoIter { #[doc(hidden)] U32(vec::IntoIter), #[doc(hidden)] USize(vec::IntoIter), } impl Iterator for IndexVecIntoIter { type Item = usize; #[inline] fn next(&mut self) -> Option { use self::IndexVecIntoIter::*; match *self { U32(ref mut v) => v.next().map(|i| i as usize), USize(ref mut v) => v.next(), } } #[inline] fn size_hint(&self) -> (usize, Option) { use self::IndexVecIntoIter::*; match *self { U32(ref v) => v.size_hint(), USize(ref v) => v.size_hint(), } } } impl ExactSizeIterator for IndexVecIntoIter {} /// Randomly sample exactly `amount` distinct indices from `0..length`, and /// return them in random order (fully shuffled). /// /// This method is used internally by the slice sampling methods, but it can /// sometimes be useful to have the indices themselves so this is provided as /// an alternative. /// /// The implementation used is not specified; we automatically select the /// fastest available algorithm for the `length` and `amount` parameters /// (based on detailed profiling on an Intel Haswell CPU). Roughly speaking, /// complexity is `O(amount)`, except that when `amount` is small, performance /// is closer to `O(amount^2)`, and when `length` is close to `amount` then /// `O(length)`. /// /// Note that performance is significantly better over `u32` indices than over /// `u64` indices. 
Because of this we hide the underlying type behind an /// abstraction, `IndexVec`. /// /// If an allocation-free `no_std` function is required, it is suggested /// to adapt the internal `sample_floyd` implementation. /// /// Panics if `amount > length`. pub fn sample(rng: &mut R, length: usize, amount: usize) -> IndexVec where R: Rng + ?Sized { if amount > length { panic!("`amount` of samples must be less than or equal to `length`"); } if length > (::core::u32::MAX as usize) { // We never want to use inplace here, but could use floyd's alg // Lazy version: always use the cache alg. return sample_rejection(rng, length, amount); } let amount = amount as u32; let length = length as u32; // Choice of algorithm here depends on both length and amount. See: // https://github.com/rust-random/rand/pull/479 // We do some calculations with f32. Accuracy is not very important. if amount < 163 { const C: [[f32; 2]; 2] = [[1.6, 8.0 / 45.0], [10.0, 70.0 / 9.0]]; let j = if length < 500_000 { 0 } else { 1 }; let amount_fp = amount as f32; let m4 = C[0][j] * amount_fp; // Short-cut: when amount < 12, floyd's is always faster if amount > 11 && (length as f32) < (C[1][j] + m4) * amount_fp { sample_inplace(rng, length, amount) } else { sample_floyd(rng, length, amount) } } else { const C: [f32; 2] = [270.0, 330.0 / 9.0]; let j = if length < 500_000 { 0 } else { 1 }; if (length as f32) < C[j] * (amount as f32) { sample_inplace(rng, length, amount) } else { sample_rejection(rng, length, amount) } } } /// Randomly sample exactly `amount` distinct indices from `0..length`, and /// return them in an arbitrary order (there is no guarantee of shuffling or /// ordering). The weights are to be provided by the input function `weights`, /// which will be called once for each index. /// /// This method is used internally by the slice sampling methods, but it can /// sometimes be useful to have the indices themselves so this is provided as /// an alternative. 
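The dispatch logic in `sample` can be factored into a pure function for illustration (the function name and `Alg` enum are ours; the thresholds reproduce the constants used in `sample` for `u32`-range lengths):

```rust
#[derive(Debug, PartialEq, Eq)]
pub enum Alg {
    Floyd,
    Inplace,
    Rejection,
}

// Mirror of the algorithm-selection heuristic in `sample`.
pub fn choose_alg(length: u32, amount: u32) -> Alg {
    if amount < 163 {
        const C: [[f32; 2]; 2] = [[1.6, 8.0 / 45.0], [10.0, 70.0 / 9.0]];
        let j = if length < 500_000 { 0 } else { 1 };
        let amount_fp = amount as f32;
        let m4 = C[0][j] * amount_fp;
        // Short-cut: when amount < 12, Floyd's is always faster.
        if amount > 11 && (length as f32) < (C[1][j] + m4) * amount_fp {
            Alg::Inplace
        } else {
            Alg::Floyd
        }
    } else {
        const C: [f32; 2] = [270.0, 330.0 / 9.0];
        let j = if length < 500_000 { 0 } else { 1 };
        if (length as f32) < C[j] * (amount as f32) {
            Alg::Inplace
        } else {
            Alg::Rejection
        }
    }
}
```

The three cases exercised by `test_sample_alg` below — `(100, 50)` inplace, `(1 << 20, 50)` Floyd, `(1 << 20, 600)` rejection — follow directly from these thresholds.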
/// /// This implementation uses `O(length + amount)` space and `O(length)` time /// if the "nightly" feature is enabled, or `O(length)` space and /// `O(length + amount * log length)` time otherwise. /// /// Panics if `amount > length`. #[cfg(feature = "std")] #[cfg_attr(doc_cfg, doc(cfg(feature = "std")))] pub fn sample_weighted( rng: &mut R, length: usize, weight: F, amount: usize, ) -> Result where R: Rng + ?Sized, F: Fn(usize) -> X, X: Into, { if length > (core::u32::MAX as usize) { sample_efraimidis_spirakis(rng, length, weight, amount) } else { assert!(amount <= core::u32::MAX as usize); let amount = amount as u32; let length = length as u32; sample_efraimidis_spirakis(rng, length, weight, amount) } } /// Randomly sample exactly `amount` distinct indices from `0..length`, and /// return them in an arbitrary order (there is no guarantee of shuffling or /// ordering). The weights are to be provided by the input function `weights`, /// which will be called once for each index. /// /// This implementation uses the algorithm described by Efraimidis and Spirakis /// in this paper: https://doi.org/10.1016/j.ipl.2005.11.003 /// It uses `O(length + amount)` space and `O(length)` time if the /// "nightly" feature is enabled, or `O(length)` space and `O(length /// + amount * log length)` time otherwise. /// /// Panics if `amount > length`. 
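The Efraimidis-Spirakis approach implemented below (non-nightly `BinaryHeap` variant) can be sketched standalone: each index gets key `u^(1/w)` for a uniform `u` in (0,1), and the `amount` largest keys win. Here a caller-supplied `uniforms` slice replaces the RNG draws so the sketch stays deterministic; the names are ours:

```rust
use std::collections::BinaryHeap;

// Key-holding element, ordered by key only. Weights are assumed finite and
// non-negative, so partial_cmp never returns None here.
struct Keyed {
    index: usize,
    key: f64,
}
impl PartialEq for Keyed {
    fn eq(&self, o: &Self) -> bool { self.key == o.key }
}
impl Eq for Keyed {}
impl PartialOrd for Keyed {
    fn partial_cmp(&self, o: &Self) -> Option<std::cmp::Ordering> {
        self.key.partial_cmp(&o.key)
    }
}
impl Ord for Keyed {
    fn cmp(&self, o: &Self) -> std::cmp::Ordering {
        self.partial_cmp(o).unwrap()
    }
}

// A-Res weighted sampling without replacement: pop the `amount` largest keys.
pub fn sample_weighted_sketch(weights: &[f64], uniforms: &[f64], amount: usize) -> Vec<usize> {
    assert!(amount <= weights.len());
    let mut heap: BinaryHeap<Keyed> = weights
        .iter()
        .zip(uniforms)
        .enumerate()
        .map(|(index, (&w, &u))| Keyed { index, key: u.powf(1.0 / w) })
        .collect();
    (0..amount).map(|_| heap.pop().unwrap().index).collect()
}
```

A zero weight yields key `u^inf = 0` for `u` in (0,1), so such an index can never beat a positive-weight one while positive-weight candidates remain.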
#[cfg(feature = "std")] fn sample_efraimidis_spirakis( rng: &mut R, length: N, weight: F, amount: N, ) -> Result where R: Rng + ?Sized, F: Fn(usize) -> X, X: Into, N: UInt, IndexVec: From>, { if amount == N::zero() { return Ok(IndexVec::U32(Vec::new())); } if amount > length { panic!("`amount` of samples must be less than or equal to `length`"); } struct Element { index: N, key: f64, } impl PartialOrd for Element { fn partial_cmp(&self, other: &Self) -> Option { self.key.partial_cmp(&other.key) } } impl Ord for Element { fn cmp(&self, other: &Self) -> core::cmp::Ordering { // partial_cmp will always produce a value, // because we check that the weights are not nan self.partial_cmp(other).unwrap() } } impl PartialEq for Element { fn eq(&self, other: &Self) -> bool { self.key == other.key } } impl Eq for Element {} #[cfg(feature = "nightly")] { let mut candidates = Vec::with_capacity(length.as_usize()); let mut index = N::zero(); while index < length { let weight = weight(index.as_usize()).into(); if !(weight >= 0.) { return Err(WeightedError::InvalidWeight); } let key = rng.gen::().powf(1.0 / weight); candidates.push(Element { index, key }); index += N::one(); } // Partially sort the array to find the `amount` elements with the greatest // keys. Do this by using `select_nth_unstable` to put the elements with // the *smallest* keys at the beginning of the list in `O(n)` time, which // provides equivalent information about the elements with the *greatest* keys. let (_, mid, greater) = candidates.select_nth_unstable(length.as_usize() - amount.as_usize()); let mut result: Vec = Vec::with_capacity(amount.as_usize()); result.push(mid.index); for element in greater { result.push(element.index); } Ok(IndexVec::from(result)) } #[cfg(not(feature = "nightly"))] { use std::collections::BinaryHeap; // Partially sort the array such that the `amount` elements with the largest // keys are first using a binary max heap. 
let mut candidates = BinaryHeap::with_capacity(length.as_usize()); let mut index = N::zero(); while index < length { let weight = weight(index.as_usize()).into(); if !(weight >= 0.) { return Err(WeightedError::InvalidWeight); } let key = rng.gen::().powf(1.0 / weight); candidates.push(Element { index, key }); index += N::one(); } let mut result: Vec = Vec::with_capacity(amount.as_usize()); while result.len() < amount.as_usize() { result.push(candidates.pop().unwrap().index); } Ok(IndexVec::from(result)) } } /// Randomly sample exactly `amount` indices from `0..length`, using Floyd's /// combination algorithm. /// /// The output values are fully shuffled. (Overhead is under 50%.) /// /// This implementation uses `O(amount)` memory and `O(amount^2)` time. fn sample_floyd(rng: &mut R, length: u32, amount: u32) -> IndexVec where R: Rng + ?Sized { // For small amount we use Floyd's fully-shuffled variant. For larger // amounts this is slow due to Vec::insert performance, so we shuffle // afterwards. Benchmarks show little overhead from extra logic. let floyd_shuffle = amount < 50; debug_assert!(amount <= length); let mut indices = Vec::with_capacity(amount as usize); for j in length - amount..length { let t = rng.gen_range(0..=j); if floyd_shuffle { if let Some(pos) = indices.iter().position(|&x| x == t) { indices.insert(pos, j); continue; } } else if indices.contains(&t) { indices.push(j); continue; } indices.push(t); } if !floyd_shuffle { // Reimplement SliceRandom::shuffle with smaller indices for i in (1..amount).rev() { // invariant: elements with index > i have been locked in place. indices.swap(i as usize, rng.gen_range(0..=i) as usize); } } IndexVec::from(indices) } /// Randomly sample exactly `amount` indices from `0..length`, using an inplace /// partial Fisher-Yates method. /// Sample an amount of indices using an inplace partial fisher yates method. /// /// This allocates the entire `length` of indices and randomizes only the first `amount`. 
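A stripped-down sketch of this partial Fisher-Yates approach (the RNG is abstracted as a `gen_range` closure so the sketch is self-contained and deterministic; names are ours, not rand's):

```rust
// Partial Fisher-Yates: allocate all of 0..length, shuffle only the first
// `amount` positions, then truncate. The first `amount` slots end up holding
// `amount` distinct values because they form a prefix of a permutation.
pub fn inplace_sketch(
    mut gen_range: impl FnMut(u32, u32) -> u32, // uniform draw from lo..hi
    length: u32,
    amount: u32,
) -> Vec<u32> {
    debug_assert!(amount <= length);
    let mut indices: Vec<u32> = (0..length).collect();
    for i in 0..amount {
        // Swap slot i with a uniformly chosen slot in i..length.
        let j = gen_range(i, length);
        indices.swap(i as usize, j as usize);
    }
    indices.truncate(amount as usize);
    indices
}
```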
/// It then truncates to `amount` and returns. /// /// This method is not appropriate for large `length` and potentially uses a lot /// of memory; because of this we only implement for `u32` index (which improves /// performance in all cases). /// /// Set-up is `O(length)` time and memory and shuffling is `O(amount)` time. fn sample_inplace(rng: &mut R, length: u32, amount: u32) -> IndexVec where R: Rng + ?Sized { debug_assert!(amount <= length); let mut indices: Vec = Vec::with_capacity(length as usize); indices.extend(0..length); for i in 0..amount { let j: u32 = rng.gen_range(i..length); indices.swap(i as usize, j as usize); } indices.truncate(amount as usize); debug_assert_eq!(indices.len(), amount as usize); IndexVec::from(indices) } trait UInt: Copy + PartialOrd + Ord + PartialEq + Eq + SampleUniform + core::hash::Hash + core::ops::AddAssign { fn zero() -> Self; fn one() -> Self; fn as_usize(self) -> usize; } impl UInt for u32 { #[inline] fn zero() -> Self { 0 } #[inline] fn one() -> Self { 1 } #[inline] fn as_usize(self) -> usize { self as usize } } impl UInt for usize { #[inline] fn zero() -> Self { 0 } #[inline] fn one() -> Self { 1 } #[inline] fn as_usize(self) -> usize { self } } /// Randomly sample exactly `amount` indices from `0..length`, using rejection /// sampling. /// /// Since `amount <<< length` there is a low chance of a random sample in /// `0..length` being a duplicate. We test for duplicates and resample where /// necessary. The algorithm is `O(amount)` time and memory. /// /// This function is generic over X primarily so that results are value-stable /// over 32-bit and 64-bit platforms. 
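The rejection strategy implemented below — draw until a fresh index appears, which is cheap when `amount` is much smaller than `length` — can be sketched standalone with a deterministic generator (names and LCG constants are illustrative, not rand's):

```rust
use std::collections::HashSet;

// Rejection sampling of `amount` distinct indices from 0..length.
pub fn rejection_sketch(length: u64, amount: usize, seed: u64) -> Vec<u64> {
    assert!((amount as u64) < length);
    let mut state = seed;
    let mut draw = |bound: u64| {
        // Deterministic LCG stand-in for an RNG.
        state = state.wrapping_mul(6364136223846793005).wrapping_add(1442695040888963407);
        (state >> 11) % bound // modulo bias is acceptable for a sketch
    };
    let mut cache = HashSet::with_capacity(amount);
    let mut indices = Vec::with_capacity(amount);
    for _ in 0..amount {
        let mut pos = draw(length);
        while !cache.insert(pos) {
            pos = draw(length); // duplicate: resample
        }
        indices.push(pos);
    }
    indices
}
```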
fn sample_rejection(rng: &mut R, length: X, amount: X) -> IndexVec where R: Rng + ?Sized, IndexVec: From>, { debug_assert!(amount < length); #[cfg(feature = "std")] let mut cache = HashSet::with_capacity(amount.as_usize()); #[cfg(not(feature = "std"))] let mut cache = BTreeSet::new(); let distr = Uniform::new(X::zero(), length); let mut indices = Vec::with_capacity(amount.as_usize()); for _ in 0..amount.as_usize() { let mut pos = distr.sample(rng); while !cache.insert(pos) { pos = distr.sample(rng); } indices.push(pos); } debug_assert_eq!(indices.len(), amount.as_usize()); IndexVec::from(indices) } #[cfg(test)] mod test { use super::*; #[test] #[cfg(feature = "serde1")] fn test_serialization_index_vec() { let some_index_vec = IndexVec::from(vec![254_usize, 234, 2, 1]); let de_some_index_vec: IndexVec = bincode::deserialize(&bincode::serialize(&some_index_vec).unwrap()).unwrap(); match (some_index_vec, de_some_index_vec) { (IndexVec::U32(a), IndexVec::U32(b)) => { assert_eq!(a, b); }, (IndexVec::USize(a), IndexVec::USize(b)) => { assert_eq!(a, b); }, _ => {panic!("failed to seralize/deserialize `IndexVec`")} } } #[cfg(feature = "alloc")] use alloc::vec; #[test] fn test_sample_boundaries() { let mut r = crate::test::rng(404); assert_eq!(sample_inplace(&mut r, 0, 0).len(), 0); assert_eq!(sample_inplace(&mut r, 1, 0).len(), 0); assert_eq!(sample_inplace(&mut r, 1, 1).into_vec(), vec![0]); assert_eq!(sample_rejection(&mut r, 1u32, 0).len(), 0); assert_eq!(sample_floyd(&mut r, 0, 0).len(), 0); assert_eq!(sample_floyd(&mut r, 1, 0).len(), 0); assert_eq!(sample_floyd(&mut r, 1, 1).into_vec(), vec![0]); // These algorithms should be fast with big numbers. Test average. 
let sum: usize = sample_rejection(&mut r, 1 << 25, 10u32).into_iter().sum(); assert!(1 << 25 < sum && sum < (1 << 25) * 25); let sum: usize = sample_floyd(&mut r, 1 << 25, 10).into_iter().sum(); assert!(1 << 25 < sum && sum < (1 << 25) * 25); } #[test] #[cfg_attr(miri, ignore)] // Miri is too slow fn test_sample_alg() { let seed_rng = crate::test::rng; // We can't test which algorithm is used directly, but Floyd's alg // should produce different results from the others. (Also, `inplace` // and `cached` currently use different sizes thus produce different results.) // A small length and relatively large amount should use inplace let (length, amount): (usize, usize) = (100, 50); let v1 = sample(&mut seed_rng(420), length, amount); let v2 = sample_inplace(&mut seed_rng(420), length as u32, amount as u32); assert!(v1.iter().all(|e| e < length)); assert_eq!(v1, v2); // Test Floyd's alg does produce different results let v3 = sample_floyd(&mut seed_rng(420), length as u32, amount as u32); assert!(v1 != v3); // A large length and small amount should use Floyd let (length, amount): (usize, usize) = (1 << 20, 50); let v1 = sample(&mut seed_rng(421), length, amount); let v2 = sample_floyd(&mut seed_rng(421), length as u32, amount as u32); assert!(v1.iter().all(|e| e < length)); assert_eq!(v1, v2); // A large length and larger amount should use cache let (length, amount): (usize, usize) = (1 << 20, 600); let v1 = sample(&mut seed_rng(422), length, amount); let v2 = sample_rejection(&mut seed_rng(422), length as u32, amount as u32); assert!(v1.iter().all(|e| e < length)); assert_eq!(v1, v2); } #[cfg(feature = "std")] #[test] fn test_sample_weighted() { let seed_rng = crate::test::rng; for &(amount, len) in &[(0, 10), (5, 10), (10, 10)] { let v = sample_weighted(&mut seed_rng(423), len, |i| i as f64, amount).unwrap(); match v { IndexVec::U32(mut indices) => { assert_eq!(indices.len(), amount); indices.sort_unstable(); indices.dedup(); assert_eq!(indices.len(), amount); for &i 
in &indices { assert!((i as usize) < len); } }, IndexVec::USize(_) => panic!("expected `IndexVec::U32`"), } } } #[test] fn value_stability_sample() { let do_test = |length, amount, values: &[u32]| { let mut buf = [0u32; 8]; let mut rng = crate::test::rng(410); let res = sample(&mut rng, length, amount); let len = res.len().min(buf.len()); for (x, y) in res.into_iter().zip(buf.iter_mut()) { *y = x as u32; } assert_eq!( &buf[0..len], values, "failed sampling {}, {}", length, amount ); }; do_test(10, 6, &[8, 0, 3, 5, 9, 6]); // floyd do_test(25, 10, &[18, 15, 14, 9, 0, 13, 5, 24]); // floyd do_test(300, 8, &[30, 283, 150, 1, 73, 13, 285, 35]); // floyd do_test(300, 80, &[31, 289, 248, 154, 5, 78, 19, 286]); // inplace do_test(300, 180, &[31, 289, 248, 154, 5, 78, 19, 286]); // inplace do_test(1_000_000, 8, &[ 103717, 963485, 826422, 509101, 736394, 807035, 5327, 632573, ]); // floyd do_test(1_000_000, 180, &[ 103718, 963490, 826426, 509103, 736396, 807036, 5327, 632573, ]); // rejection } } rand-0.8.4/src/seq/mod.rs000064400000000000000000001325230000000000000132500ustar 00000000000000// Copyright 2018 Developers of the Rand project. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. //! Sequence-related functionality //! //! This module provides: //! //! * [`SliceRandom`] slice sampling and mutation //! * [`IteratorRandom`] iterator sampling //! * [`index::sample`] low-level API to choose multiple indices from //! `0..length` //! //! Also see: //! //! * [`crate::distributions::WeightedIndex`] distribution which provides //! weighted index sampling. //! //! In order to make results reproducible across 32-64 bit architectures, all //! `usize` indices are sampled as a `u32` where possible (also providing a //! small performance boost in some cases). 
#[cfg(feature = "alloc")] #[cfg_attr(doc_cfg, doc(cfg(feature = "alloc")))] pub mod index; #[cfg(feature = "alloc")] use core::ops::Index; #[cfg(feature = "alloc")] use alloc::vec::Vec; #[cfg(feature = "alloc")] use crate::distributions::uniform::{SampleBorrow, SampleUniform}; #[cfg(feature = "alloc")] use crate::distributions::WeightedError; use crate::Rng; /// Extension trait on slices, providing random mutation and sampling methods. /// /// This trait is implemented on all `[T]` slice types, providing several /// methods for choosing and shuffling elements. You must `use` this trait: /// /// ``` /// use rand::seq::SliceRandom; /// /// let mut rng = rand::thread_rng(); /// let mut bytes = "Hello, random!".to_string().into_bytes(); /// bytes.shuffle(&mut rng); /// let str = String::from_utf8(bytes).unwrap(); /// println!("{}", str); /// ``` /// Example output (non-deterministic): /// ```none /// l,nmroHado !le /// ``` pub trait SliceRandom { /// The element type. type Item; /// Returns a reference to one random element of the slice, or `None` if the /// slice is empty. /// /// For slices, complexity is `O(1)`. /// /// # Example /// /// ``` /// use rand::thread_rng; /// use rand::seq::SliceRandom; /// /// let choices = [1, 2, 4, 8, 16, 32]; /// let mut rng = thread_rng(); /// println!("{:?}", choices.choose(&mut rng)); /// assert_eq!(choices[..0].choose(&mut rng), None); /// ``` fn choose(&self, rng: &mut R) -> Option<&Self::Item> where R: Rng + ?Sized; /// Returns a mutable reference to one random element of the slice, or /// `None` if the slice is empty. /// /// For slices, complexity is `O(1)`. fn choose_mut(&mut self, rng: &mut R) -> Option<&mut Self::Item> where R: Rng + ?Sized; /// Chooses `amount` elements from the slice at random, without repetition, /// and in random order. The returned iterator is appropriate both for /// collection into a `Vec` and filling an existing buffer (see example). 
    ///
    /// In case this API is not sufficiently flexible, use [`index::sample`].
    ///
    /// For slices, complexity is the same as [`index::sample`].
    ///
    /// # Example
    /// ```
    /// use rand::seq::SliceRandom;
    ///
    /// let mut rng = &mut rand::thread_rng();
    /// let sample = "Hello, audience!".as_bytes();
    ///
    /// // collect the results into a vector:
    /// let v: Vec<u8> = sample.choose_multiple(&mut rng, 3).cloned().collect();
    ///
    /// // store in a buffer:
    /// let mut buf = [0u8; 5];
    /// for (b, slot) in sample.choose_multiple(&mut rng, buf.len()).zip(buf.iter_mut()) {
    ///     *slot = *b;
    /// }
    /// ```
    #[cfg(feature = "alloc")]
    #[cfg_attr(doc_cfg, doc(cfg(feature = "alloc")))]
    fn choose_multiple<R>(&self, rng: &mut R, amount: usize) -> SliceChooseIter<'_, Self, Self::Item>
    where R: Rng + ?Sized;

    /// Similar to [`choose`], but where the likelihood of each outcome may be
    /// specified.
    ///
    /// The specified function `weight` maps each item `x` to a relative
    /// likelihood `weight(x)`. The probability of each item being selected is
    /// therefore `weight(x) / s`, where `s` is the sum of all `weight(x)`.
    ///
    /// For slices of length `n`, complexity is `O(n)`.
    /// See also [`choose_weighted_mut`], [`distributions::weighted`].
    ///
    /// # Example
    ///
    /// ```
    /// use rand::prelude::*;
    ///
    /// let choices = [('a', 2), ('b', 1), ('c', 1)];
    /// let mut rng = thread_rng();
    /// // 50% chance to print 'a', 25% chance to print 'b', 25% chance to print 'c'
    /// println!("{:?}", choices.choose_weighted(&mut rng, |item| item.1).unwrap().0);
    /// ```
    /// [`choose`]: SliceRandom::choose
    /// [`choose_weighted_mut`]: SliceRandom::choose_weighted_mut
    /// [`distributions::weighted`]: crate::distributions::weighted
    #[cfg(feature = "alloc")]
    #[cfg_attr(doc_cfg, doc(cfg(feature = "alloc")))]
    fn choose_weighted<R, F, B, X>(
        &self, rng: &mut R, weight: F,
    ) -> Result<&Self::Item, WeightedError>
    where
        R: Rng + ?Sized,
        F: Fn(&Self::Item) -> B,
        B: SampleBorrow<X>,
        X: SampleUniform
            + for<'a> ::core::ops::AddAssign<&'a X>
            + ::core::cmp::PartialOrd<X>
            + Clone
            + Default;

    /// Similar to [`choose_mut`], but where the likelihood of each outcome may
    /// be specified.
    ///
    /// The specified function `weight` maps each item `x` to a relative
    /// likelihood `weight(x)`. The probability of each item being selected is
    /// therefore `weight(x) / s`, where `s` is the sum of all `weight(x)`.
    ///
    /// For slices of length `n`, complexity is `O(n)`.
    /// See also [`choose_weighted`], [`distributions::weighted`].
    ///
    /// [`choose_mut`]: SliceRandom::choose_mut
    /// [`choose_weighted`]: SliceRandom::choose_weighted
    /// [`distributions::weighted`]: crate::distributions::weighted
    #[cfg(feature = "alloc")]
    #[cfg_attr(doc_cfg, doc(cfg(feature = "alloc")))]
    fn choose_weighted_mut<R, F, B, X>(
        &mut self, rng: &mut R, weight: F,
    ) -> Result<&mut Self::Item, WeightedError>
    where
        R: Rng + ?Sized,
        F: Fn(&Self::Item) -> B,
        B: SampleBorrow<X>,
        X: SampleUniform
            + for<'a> ::core::ops::AddAssign<&'a X>
            + ::core::cmp::PartialOrd<X>
            + Clone
            + Default;

    /// Similar to [`choose_multiple`], but where the likelihood of each element's
    /// inclusion in the output may be specified. The elements are returned in an
    /// arbitrary, unspecified order.
    ///
    /// The specified function `weight` maps each item `x` to a relative
    /// likelihood `weight(x)`. The probability of each item being selected is
    /// therefore `weight(x) / s`, where `s` is the sum of all `weight(x)`.
    ///
    /// If all of the weights are equal, even if they are all zero, each element has
    /// an equal likelihood of being selected.
    ///
    /// The complexity of this method depends on the feature `partition_at_index`.
    /// If the feature is enabled, then for slices of length `n`, the complexity
    /// is `O(n)` space and `O(n)` time. Otherwise, the complexity is `O(n)` space and
    /// `O(n * log amount)` time.
    ///
    /// # Example
    ///
    /// ```
    /// use rand::prelude::*;
    ///
    /// let choices = [('a', 2), ('b', 1), ('c', 1)];
    /// let mut rng = thread_rng();
    /// // First Draw * Second Draw = total odds
    /// // -----------------------
    /// // (50% * 50%) + (25% * 67%) = 41.7% chance that the output is `['a', 'b']` in some order.
    /// // (50% * 50%) + (25% * 67%) = 41.7% chance that the output is `['a', 'c']` in some order.
    /// // (25% * 33%) + (25% * 33%) = 16.6% chance that the output is `['b', 'c']` in some order.
    /// println!("{:?}", choices.choose_multiple_weighted(&mut rng, 2, |item| item.1).unwrap().collect::<Vec<_>>());
    /// ```
    /// [`choose_multiple`]: SliceRandom::choose_multiple
    //
    // Note: this is feature-gated on std due to usage of f64::powf.
    // If necessary, we may use alloc+libm as an alternative (see PR #1089).
    #[cfg(feature = "std")]
    #[cfg_attr(doc_cfg, doc(cfg(feature = "std")))]
    fn choose_multiple_weighted<R, F, X>(
        &self, rng: &mut R, amount: usize, weight: F,
    ) -> Result<SliceChooseIter<'_, Self, Self::Item>, WeightedError>
    where
        R: Rng + ?Sized,
        F: Fn(&Self::Item) -> X,
        X: Into<f64>;

    /// Shuffle a mutable slice in place.
    ///
    /// For slices of length `n`, complexity is `O(n)`.
    ///
    /// # Example
    ///
    /// ```
    /// use rand::seq::SliceRandom;
    /// use rand::thread_rng;
    ///
    /// let mut rng = thread_rng();
    /// let mut y = [1, 2, 3, 4, 5];
    /// println!("Unshuffled: {:?}", y);
    /// y.shuffle(&mut rng);
    /// println!("Shuffled: {:?}", y);
    /// ```
    fn shuffle<R>(&mut self, rng: &mut R)
    where R: Rng + ?Sized;

    /// Shuffle a slice in place, but exit early.
    ///
    /// Returns two mutable slices from the source slice. The first contains
    /// `amount` elements randomly permuted. The second has the remaining
    /// elements that are not fully shuffled.
    ///
    /// This is an efficient method to select `amount` elements at random from
    /// the slice, provided the slice may be mutated.
    ///
    /// If you only need to choose elements randomly and `amount > self.len()/2`
    /// then you may improve performance by taking
    /// `amount = values.len() - amount` and using only the second slice.
    ///
    /// If `amount` is greater than the number of elements in the slice, this
    /// will perform a full shuffle.
    ///
    /// For slices, complexity is `O(m)` where `m = amount`.
    fn partial_shuffle<R>(
        &mut self, rng: &mut R, amount: usize,
    ) -> (&mut [Self::Item], &mut [Self::Item])
    where R: Rng + ?Sized;
}

/// Extension trait on iterators, providing random sampling methods.
///
/// This trait is implemented on all iterators `I` where `I: Iterator + Sized`
/// and provides methods for
/// choosing one or more elements. You must `use` this trait:
///
/// ```
/// use rand::seq::IteratorRandom;
///
/// let mut rng = rand::thread_rng();
///
/// let faces = "πŸ˜€πŸ˜ŽπŸ˜πŸ˜•πŸ˜ πŸ˜’";
/// println!("I am {}!", faces.chars().choose(&mut rng).unwrap());
/// ```
/// Example output (non-deterministic):
/// ```none
/// I am πŸ˜€!
/// ```
pub trait IteratorRandom: Iterator + Sized {
    /// Choose one element at random from the iterator.
    ///
    /// Returns `None` if and only if the iterator is empty.
    ///
    /// This method uses [`Iterator::size_hint`] for optimisation.
With an
    /// accurate hint and where [`Iterator::nth`] is a constant-time operation
    /// this method can offer `O(1)` performance. Where no size hint is
    /// available, complexity is `O(n)` where `n` is the iterator length.
    /// Partial hints (where `lower > 0`) also improve performance.
    ///
    /// Note that the output values and the number of RNG samples used
    /// depends on size hints. In particular, `Iterator` combinators that don't
    /// change the values yielded but change the size hints may result in
    /// `choose` returning different elements. If you want consistent results
    /// and RNG usage consider using [`IteratorRandom::choose_stable`].
    fn choose<R>(mut self, rng: &mut R) -> Option<Self::Item>
    where R: Rng + ?Sized {
        let (mut lower, mut upper) = self.size_hint();
        let mut consumed = 0;
        let mut result = None;

        // Handling for this condition outside the loop allows the optimizer to eliminate the loop
        // when the Iterator is an ExactSizeIterator. This has a large performance impact on e.g.
        // seq_iter_choose_from_1000.
        if upper == Some(lower) {
            return if lower == 0 {
                None
            } else {
                self.nth(gen_index(rng, lower))
            };
        }

        // Continue until the iterator is exhausted
        loop {
            if lower > 1 {
                let ix = gen_index(rng, lower + consumed);
                let skip = if ix < lower {
                    result = self.nth(ix);
                    lower - (ix + 1)
                } else {
                    lower
                };
                if upper == Some(lower) {
                    return result;
                }
                consumed += lower;
                if skip > 0 {
                    self.nth(skip - 1);
                }
            } else {
                let elem = self.next();
                if elem.is_none() {
                    return result;
                }
                consumed += 1;
                if gen_index(rng, consumed) == 0 {
                    result = elem;
                }
            }

            let hint = self.size_hint();
            lower = hint.0;
            upper = hint.1;
        }
    }

    /// Choose one element at random from the iterator.
    ///
    /// Returns `None` if and only if the iterator is empty.
    ///
    /// This method is very similar to [`choose`] except that the result
    /// only depends on the length of the iterator and the values produced by
    /// `rng`.
Notably for any iterator of a given length this will make the
    /// same requests to `rng` and if the same sequence of values are produced
    /// the same index will be selected from `self`. This may be useful if you
    /// need consistent results no matter what type of iterator you are working
    /// with. If you do not need this stability prefer [`choose`].
    ///
    /// Note that this method still uses [`Iterator::size_hint`] to skip
    /// constructing elements where possible, however the selection and `rng`
    /// calls are the same in the face of this optimization. If you want to
    /// force every element to be created regardless call `.inspect(|e| ())`.
    ///
    /// [`choose`]: IteratorRandom::choose
    fn choose_stable<R>(mut self, rng: &mut R) -> Option<Self::Item>
    where R: Rng + ?Sized {
        let mut consumed = 0;
        let mut result = None;

        loop {
            // Currently the only way to skip elements is `nth()`. So we need to
            // store what index to access next here.
            // This should be replaced by `advance_by()` once it is stable:
            // https://github.com/rust-lang/rust/issues/77404
            let mut next = 0;

            let (lower, _) = self.size_hint();
            if lower >= 2 {
                let highest_selected = (0..lower)
                    .filter(|ix| gen_index(rng, consumed + ix + 1) == 0)
                    .last();

                consumed += lower;
                next = lower;

                if let Some(ix) = highest_selected {
                    result = self.nth(ix);
                    next -= ix + 1;
                    debug_assert!(result.is_some(), "iterator shorter than size_hint().0");
                }
            }

            let elem = self.nth(next);
            if elem.is_none() {
                return result;
            }

            if gen_index(rng, consumed + 1) == 0 {
                result = elem;
            }
            consumed += 1;
        }
    }

    /// Collects values at random from the iterator into a supplied buffer
    /// until that buffer is filled.
    ///
    /// Although the elements are selected randomly, the order of elements in
    /// the buffer is neither stable nor fully random. If random ordering is
    /// desired, shuffle the result.
    ///
    /// Returns the number of elements added to the buffer.
This equals the length
    /// of the buffer unless the iterator contains insufficient elements, in which
    /// case this equals the number of elements available.
    ///
    /// Complexity is `O(n)` where `n` is the length of the iterator.
    /// For slices, prefer [`SliceRandom::choose_multiple`].
    fn choose_multiple_fill<R>(mut self, rng: &mut R, buf: &mut [Self::Item]) -> usize
    where R: Rng + ?Sized {
        let amount = buf.len();
        let mut len = 0;
        while len < amount {
            if let Some(elem) = self.next() {
                buf[len] = elem;
                len += 1;
            } else {
                // Iterator exhausted; stop early
                return len;
            }
        }

        // Continue, since the iterator was not exhausted
        for (i, elem) in self.enumerate() {
            let k = gen_index(rng, i + 1 + amount);
            if let Some(slot) = buf.get_mut(k) {
                *slot = elem;
            }
        }
        len
    }

    /// Collects `amount` values at random from the iterator into a vector.
    ///
    /// This is equivalent to `choose_multiple_fill` except for the result type.
    ///
    /// Although the elements are selected randomly, the order of elements in
    /// the buffer is neither stable nor fully random. If random ordering is
    /// desired, shuffle the result.
    ///
    /// The length of the returned vector equals `amount` unless the iterator
    /// contains insufficient elements, in which case it equals the number of
    /// elements available.
    ///
    /// Complexity is `O(n)` where `n` is the length of the iterator.
    /// For slices, prefer [`SliceRandom::choose_multiple`].
    #[cfg(feature = "alloc")]
    #[cfg_attr(doc_cfg, doc(cfg(feature = "alloc")))]
    fn choose_multiple<R>(mut self, rng: &mut R, amount: usize) -> Vec<Self::Item>
    where R: Rng + ?Sized {
        let mut reservoir = Vec::with_capacity(amount);
        reservoir.extend(self.by_ref().take(amount));

        // Continue unless the iterator was exhausted
        //
        // note: this prevents iterators that "restart" from causing problems.
        // If the iterator stops once, then so do we.
if reservoir.len() == amount {
            for (i, elem) in self.enumerate() {
                let k = gen_index(rng, i + 1 + amount);
                if let Some(slot) = reservoir.get_mut(k) {
                    *slot = elem;
                }
            }
        } else {
            // Don't hang onto extra memory. There is a corner case where
            // `amount` was much less than `self.len()`.
            reservoir.shrink_to_fit();
        }
        reservoir
    }
}

impl<T> SliceRandom for [T] {
    type Item = T;

    fn choose<R>(&self, rng: &mut R) -> Option<&Self::Item>
    where R: Rng + ?Sized {
        if self.is_empty() {
            None
        } else {
            Some(&self[gen_index(rng, self.len())])
        }
    }

    fn choose_mut<R>(&mut self, rng: &mut R) -> Option<&mut Self::Item>
    where R: Rng + ?Sized {
        if self.is_empty() {
            None
        } else {
            let len = self.len();
            Some(&mut self[gen_index(rng, len)])
        }
    }

    #[cfg(feature = "alloc")]
    fn choose_multiple<R>(&self, rng: &mut R, amount: usize) -> SliceChooseIter<'_, Self, Self::Item>
    where R: Rng + ?Sized {
        let amount = ::core::cmp::min(amount, self.len());
        SliceChooseIter {
            slice: self,
            _phantom: Default::default(),
            indices: index::sample(rng, self.len(), amount).into_iter(),
        }
    }

    #[cfg(feature = "alloc")]
    fn choose_weighted<R, F, B, X>(
        &self, rng: &mut R, weight: F,
    ) -> Result<&Self::Item, WeightedError>
    where
        R: Rng + ?Sized,
        F: Fn(&Self::Item) -> B,
        B: SampleBorrow<X>,
        X: SampleUniform
            + for<'a> ::core::ops::AddAssign<&'a X>
            + ::core::cmp::PartialOrd<X>
            + Clone
            + Default,
    {
        use crate::distributions::{Distribution, WeightedIndex};
        let distr = WeightedIndex::new(self.iter().map(weight))?;
        Ok(&self[distr.sample(rng)])
    }

    #[cfg(feature = "alloc")]
    fn choose_weighted_mut<R, F, B, X>(
        &mut self, rng: &mut R, weight: F,
    ) -> Result<&mut Self::Item, WeightedError>
    where
        R: Rng + ?Sized,
        F: Fn(&Self::Item) -> B,
        B: SampleBorrow<X>,
        X: SampleUniform
            + for<'a> ::core::ops::AddAssign<&'a X>
            + ::core::cmp::PartialOrd<X>
            + Clone
            + Default,
    {
        use crate::distributions::{Distribution, WeightedIndex};
        let distr = WeightedIndex::new(self.iter().map(weight))?;
        Ok(&mut self[distr.sample(rng)])
    }

    #[cfg(feature = "std")]
    fn choose_multiple_weighted<R, F, X>(
        &self, rng: &mut R, amount: usize, weight: F,
    ) -> Result<SliceChooseIter<'_, Self, Self::Item>, WeightedError>
    where
        R: Rng + ?Sized,
        F: Fn(&Self::Item) -> X,
        X: Into<f64>,
    {
        let amount = ::core::cmp::min(amount, self.len());
        Ok(SliceChooseIter {
            slice: self,
            _phantom: Default::default(),
            indices: index::sample_weighted(
                rng,
                self.len(),
                |idx| weight(&self[idx]).into(),
                amount,
            )?
            .into_iter(),
        })
    }

    fn shuffle<R>(&mut self, rng: &mut R)
    where R: Rng + ?Sized {
        for i in (1..self.len()).rev() {
            // invariant: elements with index > i have been locked in place.
            self.swap(i, gen_index(rng, i + 1));
        }
    }

    fn partial_shuffle<R>(
        &mut self, rng: &mut R, amount: usize,
    ) -> (&mut [Self::Item], &mut [Self::Item])
    where R: Rng + ?Sized {
        // This applies Durstenfeld's algorithm for the
        // [Fisher–Yates shuffle](https://en.wikipedia.org/wiki/Fisher%E2%80%93Yates_shuffle#The_modern_algorithm)
        // for an unbiased permutation, but exits early after choosing `amount`
        // elements.

        let len = self.len();
        let end = if amount >= len { 0 } else { len - amount };

        for i in (end..len).rev() {
            // invariant: elements with index > i have been locked in place.
            self.swap(i, gen_index(rng, i + 1));
        }
        let r = self.split_at_mut(end);
        (r.1, r.0)
    }
}

impl<I> IteratorRandom for I where I: Iterator + Sized {}

/// An iterator over multiple slice elements.
///
/// This struct is created by
/// [`SliceRandom::choose_multiple`](trait.SliceRandom.html#tymethod.choose_multiple).
#[cfg(feature = "alloc")]
#[cfg_attr(doc_cfg, doc(cfg(feature = "alloc")))]
#[derive(Debug)]
pub struct SliceChooseIter<'a, S: ?Sized + 'a, T: 'a> {
    slice: &'a S,
    _phantom: ::core::marker::PhantomData<T>,
    indices: index::IndexVecIntoIter,
}

#[cfg(feature = "alloc")]
impl<'a, S: Index<usize, Output = T> + ?Sized + 'a, T: 'a> Iterator for SliceChooseIter<'a, S, T> {
    type Item = &'a T;

    fn next(&mut self) -> Option<Self::Item> {
        // TODO: investigate using SliceIndex::get_unchecked when stable
        self.indices.next().map(|i| &self.slice[i as usize])
    }

    fn size_hint(&self) -> (usize, Option<usize>) {
        (self.indices.len(), Some(self.indices.len()))
    }
}

#[cfg(feature = "alloc")]
impl<'a, S: Index<usize, Output = T> + ?Sized + 'a, T: 'a> ExactSizeIterator
    for SliceChooseIter<'a, S, T>
{
    fn len(&self) -> usize {
        self.indices.len()
    }
}

// Sample a number uniformly between 0 and `ubound`. Uses 32-bit sampling where
// possible, primarily in order to produce the same output on 32-bit and 64-bit
// platforms.
#[inline]
fn gen_index<R: Rng + ?Sized>(rng: &mut R, ubound: usize) -> usize {
    if ubound <= (core::u32::MAX as usize) {
        rng.gen_range(0..ubound as u32) as usize
    } else {
        rng.gen_range(0..ubound)
    }
}

#[cfg(test)]
mod test {
    use super::*;
    #[cfg(feature = "alloc")] use crate::Rng;
    #[cfg(all(feature = "alloc", not(feature = "std")))] use alloc::vec::Vec;

    #[test]
    fn test_slice_choose() {
        let mut r = crate::test::rng(107);

        let chars = [
            'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n',
        ];
        let mut chosen = [0i32; 14];
        // The below all use a binomial distribution with n=1000, p=1/14.
        // binocdf(40, 1000, 1/14) ~= 2e-5; 1-binocdf(106, ..)
~= 2e-5 for _ in 0..1000 { let picked = *chars.choose(&mut r).unwrap(); chosen[(picked as usize) - ('a' as usize)] += 1; } for count in chosen.iter() { assert!(40 < *count && *count < 106); } chosen.iter_mut().for_each(|x| *x = 0); for _ in 0..1000 { *chosen.choose_mut(&mut r).unwrap() += 1; } for count in chosen.iter() { assert!(40 < *count && *count < 106); } let mut v: [isize; 0] = []; assert_eq!(v.choose(&mut r), None); assert_eq!(v.choose_mut(&mut r), None); } #[test] fn value_stability_slice() { let mut r = crate::test::rng(413); let chars = [ 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', ]; let mut nums = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]; assert_eq!(chars.choose(&mut r), Some(&'l')); assert_eq!(nums.choose_mut(&mut r), Some(&mut 10)); #[cfg(feature = "alloc")] assert_eq!( &chars .choose_multiple(&mut r, 8) .cloned() .collect::>(), &['d', 'm', 'b', 'n', 'c', 'k', 'h', 'e'] ); #[cfg(feature = "alloc")] assert_eq!(chars.choose_weighted(&mut r, |_| 1), Ok(&'f')); #[cfg(feature = "alloc")] assert_eq!(nums.choose_weighted_mut(&mut r, |_| 1), Ok(&mut 5)); let mut r = crate::test::rng(414); nums.shuffle(&mut r); assert_eq!(nums, [9, 5, 3, 10, 7, 12, 8, 11, 6, 4, 0, 2, 1]); nums = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]; let res = nums.partial_shuffle(&mut r, 6); assert_eq!(res.0, &mut [7, 4, 8, 6, 9, 3]); assert_eq!(res.1, &mut [0, 1, 2, 12, 11, 5, 10]); } #[derive(Clone)] struct UnhintedIterator { iter: I, } impl Iterator for UnhintedIterator { type Item = I::Item; fn next(&mut self) -> Option { self.iter.next() } } #[derive(Clone)] struct ChunkHintedIterator { iter: I, chunk_remaining: usize, chunk_size: usize, hint_total_size: bool, } impl Iterator for ChunkHintedIterator { type Item = I::Item; fn next(&mut self) -> Option { if self.chunk_remaining == 0 { self.chunk_remaining = ::core::cmp::min(self.chunk_size, self.iter.len()); } self.chunk_remaining = self.chunk_remaining.saturating_sub(1); self.iter.next() } fn 
size_hint(&self) -> (usize, Option) { ( self.chunk_remaining, if self.hint_total_size { Some(self.iter.len()) } else { None }, ) } } #[derive(Clone)] struct WindowHintedIterator { iter: I, window_size: usize, hint_total_size: bool, } impl Iterator for WindowHintedIterator { type Item = I::Item; fn next(&mut self) -> Option { self.iter.next() } fn size_hint(&self) -> (usize, Option) { ( ::core::cmp::min(self.iter.len(), self.window_size), if self.hint_total_size { Some(self.iter.len()) } else { None }, ) } } #[test] #[cfg_attr(miri, ignore)] // Miri is too slow fn test_iterator_choose() { let r = &mut crate::test::rng(109); fn test_iter + Clone>(r: &mut R, iter: Iter) { let mut chosen = [0i32; 9]; for _ in 0..1000 { let picked = iter.clone().choose(r).unwrap(); chosen[picked] += 1; } for count in chosen.iter() { // Samples should follow Binomial(1000, 1/9) // Octave: binopdf(x, 1000, 1/9) gives the prob of *count == x // Note: have seen 153, which is unlikely but not impossible. assert!( 72 < *count && *count < 154, "count not close to 1000/9: {}", count ); } } test_iter(r, 0..9); test_iter(r, [0, 1, 2, 3, 4, 5, 6, 7, 8].iter().cloned()); #[cfg(feature = "alloc")] test_iter(r, (0..9).collect::>().into_iter()); test_iter(r, UnhintedIterator { iter: 0..9 }); test_iter(r, ChunkHintedIterator { iter: 0..9, chunk_size: 4, chunk_remaining: 4, hint_total_size: false, }); test_iter(r, ChunkHintedIterator { iter: 0..9, chunk_size: 4, chunk_remaining: 4, hint_total_size: true, }); test_iter(r, WindowHintedIterator { iter: 0..9, window_size: 2, hint_total_size: false, }); test_iter(r, WindowHintedIterator { iter: 0..9, window_size: 2, hint_total_size: true, }); assert_eq!((0..0).choose(r), None); assert_eq!(UnhintedIterator { iter: 0..0 }.choose(r), None); } #[test] #[cfg_attr(miri, ignore)] // Miri is too slow fn test_iterator_choose_stable() { let r = &mut crate::test::rng(109); fn test_iter + Clone>(r: &mut R, iter: Iter) { let mut chosen = [0i32; 9]; for _ in 0..1000 { let 
picked = iter.clone().choose_stable(r).unwrap(); chosen[picked] += 1; } for count in chosen.iter() { // Samples should follow Binomial(1000, 1/9) // Octave: binopdf(x, 1000, 1/9) gives the prob of *count == x // Note: have seen 153, which is unlikely but not impossible. assert!( 72 < *count && *count < 154, "count not close to 1000/9: {}", count ); } } test_iter(r, 0..9); test_iter(r, [0, 1, 2, 3, 4, 5, 6, 7, 8].iter().cloned()); #[cfg(feature = "alloc")] test_iter(r, (0..9).collect::>().into_iter()); test_iter(r, UnhintedIterator { iter: 0..9 }); test_iter(r, ChunkHintedIterator { iter: 0..9, chunk_size: 4, chunk_remaining: 4, hint_total_size: false, }); test_iter(r, ChunkHintedIterator { iter: 0..9, chunk_size: 4, chunk_remaining: 4, hint_total_size: true, }); test_iter(r, WindowHintedIterator { iter: 0..9, window_size: 2, hint_total_size: false, }); test_iter(r, WindowHintedIterator { iter: 0..9, window_size: 2, hint_total_size: true, }); assert_eq!((0..0).choose(r), None); assert_eq!(UnhintedIterator { iter: 0..0 }.choose(r), None); } #[test] #[cfg_attr(miri, ignore)] // Miri is too slow fn test_iterator_choose_stable_stability() { fn test_iter(iter: impl Iterator + Clone) -> [i32; 9] { let r = &mut crate::test::rng(109); let mut chosen = [0i32; 9]; for _ in 0..1000 { let picked = iter.clone().choose_stable(r).unwrap(); chosen[picked] += 1; } chosen } let reference = test_iter(0..9); assert_eq!(test_iter([0, 1, 2, 3, 4, 5, 6, 7, 8].iter().cloned()), reference); #[cfg(feature = "alloc")] assert_eq!(test_iter((0..9).collect::>().into_iter()), reference); assert_eq!(test_iter(UnhintedIterator { iter: 0..9 }), reference); assert_eq!(test_iter(ChunkHintedIterator { iter: 0..9, chunk_size: 4, chunk_remaining: 4, hint_total_size: false, }), reference); assert_eq!(test_iter(ChunkHintedIterator { iter: 0..9, chunk_size: 4, chunk_remaining: 4, hint_total_size: true, }), reference); assert_eq!(test_iter(WindowHintedIterator { iter: 0..9, window_size: 2, hint_total_size: 
false, }), reference); assert_eq!(test_iter(WindowHintedIterator { iter: 0..9, window_size: 2, hint_total_size: true, }), reference); } #[test] #[cfg_attr(miri, ignore)] // Miri is too slow fn test_shuffle() { let mut r = crate::test::rng(108); let empty: &mut [isize] = &mut []; empty.shuffle(&mut r); let mut one = [1]; one.shuffle(&mut r); let b: &[_] = &[1]; assert_eq!(one, b); let mut two = [1, 2]; two.shuffle(&mut r); assert!(two == [1, 2] || two == [2, 1]); fn move_last(slice: &mut [usize], pos: usize) { // use slice[pos..].rotate_left(1); once we can use that let last_val = slice[pos]; for i in pos..slice.len() - 1 { slice[i] = slice[i + 1]; } *slice.last_mut().unwrap() = last_val; } let mut counts = [0i32; 24]; for _ in 0..10000 { let mut arr: [usize; 4] = [0, 1, 2, 3]; arr.shuffle(&mut r); let mut permutation = 0usize; let mut pos_value = counts.len(); for i in 0..4 { pos_value /= 4 - i; let pos = arr.iter().position(|&x| x == i).unwrap(); assert!(pos < (4 - i)); permutation += pos * pos_value; move_last(&mut arr, pos); assert_eq!(arr[3], i); } for (i, &a) in arr.iter().enumerate() { assert_eq!(a, i); } counts[permutation] += 1; } for count in counts.iter() { // Binomial(10000, 1/24) with average 416.667 // Octave: binocdf(n, 10000, 1/24) // 99.9% chance samples lie within this range: assert!(352 <= *count && *count <= 483, "count: {}", count); } } #[test] fn test_partial_shuffle() { let mut r = crate::test::rng(118); let mut empty: [u32; 0] = []; let res = empty.partial_shuffle(&mut r, 10); assert_eq!((res.0.len(), res.1.len()), (0, 0)); let mut v = [1, 2, 3, 4, 5]; let res = v.partial_shuffle(&mut r, 2); assert_eq!((res.0.len(), res.1.len()), (2, 3)); assert!(res.0[0] != res.0[1]); // First elements are only modified if selected, so at least one isn't modified: assert!(res.1[0] == 1 || res.1[1] == 2 || res.1[2] == 3); } #[test] #[cfg(feature = "alloc")] fn test_sample_iter() { let min_val = 1; let max_val = 100; let mut r = crate::test::rng(401); let vals 
= (min_val..max_val).collect::>(); let small_sample = vals.iter().choose_multiple(&mut r, 5); let large_sample = vals.iter().choose_multiple(&mut r, vals.len() + 5); assert_eq!(small_sample.len(), 5); assert_eq!(large_sample.len(), vals.len()); // no randomization happens when amount >= len assert_eq!(large_sample, vals.iter().collect::>()); assert!(small_sample .iter() .all(|e| { **e >= min_val && **e <= max_val })); } #[test] #[cfg(feature = "alloc")] #[cfg_attr(miri, ignore)] // Miri is too slow fn test_weighted() { let mut r = crate::test::rng(406); const N_REPS: u32 = 3000; let weights = [1u32, 2, 3, 0, 5, 6, 7, 1, 2, 3, 4, 5, 6, 7]; let total_weight = weights.iter().sum::() as f32; let verify = |result: [i32; 14]| { for (i, count) in result.iter().enumerate() { let exp = (weights[i] * N_REPS) as f32 / total_weight; let mut err = (*count as f32 - exp).abs(); if err != 0.0 { err /= exp; } assert!(err <= 0.25); } }; // choose_weighted fn get_weight(item: &(u32, T)) -> u32 { item.0 } let mut chosen = [0i32; 14]; let mut items = [(0u32, 0usize); 14]; // (weight, index) for (i, item) in items.iter_mut().enumerate() { *item = (weights[i], i); } for _ in 0..N_REPS { let item = items.choose_weighted(&mut r, get_weight).unwrap(); chosen[item.1] += 1; } verify(chosen); // choose_weighted_mut let mut items = [(0u32, 0i32); 14]; // (weight, count) for (i, item) in items.iter_mut().enumerate() { *item = (weights[i], 0); } for _ in 0..N_REPS { items.choose_weighted_mut(&mut r, get_weight).unwrap().1 += 1; } for (ch, item) in chosen.iter_mut().zip(items.iter()) { *ch = item.1; } verify(chosen); // Check error cases let empty_slice = &mut [10][0..0]; assert_eq!( empty_slice.choose_weighted(&mut r, |_| 1), Err(WeightedError::NoItem) ); assert_eq!( empty_slice.choose_weighted_mut(&mut r, |_| 1), Err(WeightedError::NoItem) ); assert_eq!( ['x'].choose_weighted_mut(&mut r, |_| 0), Err(WeightedError::AllWeightsZero) ); assert_eq!( [0, -1].choose_weighted_mut(&mut r, |x| *x), 
Err(WeightedError::InvalidWeight) ); assert_eq!( [-1, 0].choose_weighted_mut(&mut r, |x| *x), Err(WeightedError::InvalidWeight) ); } #[test] fn value_stability_choose() { fn choose>(iter: I) -> Option { let mut rng = crate::test::rng(411); iter.choose(&mut rng) } assert_eq!(choose([].iter().cloned()), None); assert_eq!(choose(0..100), Some(33)); assert_eq!(choose(UnhintedIterator { iter: 0..100 }), Some(40)); assert_eq!( choose(ChunkHintedIterator { iter: 0..100, chunk_size: 32, chunk_remaining: 32, hint_total_size: false, }), Some(39) ); assert_eq!( choose(ChunkHintedIterator { iter: 0..100, chunk_size: 32, chunk_remaining: 32, hint_total_size: true, }), Some(39) ); assert_eq!( choose(WindowHintedIterator { iter: 0..100, window_size: 32, hint_total_size: false, }), Some(90) ); assert_eq!( choose(WindowHintedIterator { iter: 0..100, window_size: 32, hint_total_size: true, }), Some(90) ); } #[test] fn value_stability_choose_stable() { fn choose>(iter: I) -> Option { let mut rng = crate::test::rng(411); iter.choose_stable(&mut rng) } assert_eq!(choose([].iter().cloned()), None); assert_eq!(choose(0..100), Some(40)); assert_eq!(choose(UnhintedIterator { iter: 0..100 }), Some(40)); assert_eq!( choose(ChunkHintedIterator { iter: 0..100, chunk_size: 32, chunk_remaining: 32, hint_total_size: false, }), Some(40) ); assert_eq!( choose(ChunkHintedIterator { iter: 0..100, chunk_size: 32, chunk_remaining: 32, hint_total_size: true, }), Some(40) ); assert_eq!( choose(WindowHintedIterator { iter: 0..100, window_size: 32, hint_total_size: false, }), Some(40) ); assert_eq!( choose(WindowHintedIterator { iter: 0..100, window_size: 32, hint_total_size: true, }), Some(40) ); } #[test] fn value_stability_choose_multiple() { fn do_test>(iter: I, v: &[u32]) { let mut rng = crate::test::rng(412); let mut buf = [0u32; 8]; assert_eq!(iter.choose_multiple_fill(&mut rng, &mut buf), v.len()); assert_eq!(&buf[0..v.len()], v); } do_test(0..4, &[0, 1, 2, 3]); do_test(0..8, &[0, 1, 2, 3, 4, 5, 6, 
7]); do_test(0..100, &[58, 78, 80, 92, 43, 8, 96, 7]); #[cfg(feature = "alloc")] { fn do_test>(iter: I, v: &[u32]) { let mut rng = crate::test::rng(412); assert_eq!(iter.choose_multiple(&mut rng, v.len()), v); } do_test(0..4, &[0, 1, 2, 3]); do_test(0..8, &[0, 1, 2, 3, 4, 5, 6, 7]); do_test(0..100, &[58, 78, 80, 92, 43, 8, 96, 7]); } } #[test] #[cfg(feature = "std")] fn test_multiple_weighted_edge_cases() { use super::*; let mut rng = crate::test::rng(413); // Case 1: One of the weights is 0 let choices = [('a', 2), ('b', 1), ('c', 0)]; for _ in 0..100 { let result = choices .choose_multiple_weighted(&mut rng, 2, |item| item.1) .unwrap() .collect::>(); assert_eq!(result.len(), 2); assert!(!result.iter().any(|val| val.0 == 'c')); } // Case 2: All of the weights are 0 let choices = [('a', 0), ('b', 0), ('c', 0)]; let result = choices .choose_multiple_weighted(&mut rng, 2, |item| item.1) .unwrap() .collect::>(); assert_eq!(result.len(), 2); // Case 3: Negative weights let choices = [('a', -1), ('b', 1), ('c', 1)]; assert_eq!( choices .choose_multiple_weighted(&mut rng, 2, |item| item.1) .unwrap_err(), WeightedError::InvalidWeight ); // Case 4: Empty list let choices = []; let result = choices .choose_multiple_weighted(&mut rng, 0, |_: &()| 0) .unwrap() .collect::>(); assert_eq!(result.len(), 0); // Case 5: NaN weights let choices = [('a', core::f64::NAN), ('b', 1.0), ('c', 1.0)]; assert_eq!( choices .choose_multiple_weighted(&mut rng, 2, |item| item.1) .unwrap_err(), WeightedError::InvalidWeight ); // Case 6: +infinity weights let choices = [('a', core::f64::INFINITY), ('b', 1.0), ('c', 1.0)]; for _ in 0..100 { let result = choices .choose_multiple_weighted(&mut rng, 2, |item| item.1) .unwrap() .collect::>(); assert_eq!(result.len(), 2); assert!(result.iter().any(|val| val.0 == 'a')); } // Case 7: -infinity weights let choices = [('a', core::f64::NEG_INFINITY), ('b', 1.0), ('c', 1.0)]; assert_eq!( choices .choose_multiple_weighted(&mut rng, 2, |item| item.1) 
.unwrap_err(), WeightedError::InvalidWeight ); // Case 8: -0 weights let choices = [('a', -0.0), ('b', 1.0), ('c', 1.0)]; assert!(choices .choose_multiple_weighted(&mut rng, 2, |item| item.1) .is_ok()); } #[test] #[cfg(feature = "std")] fn test_multiple_weighted_distributions() { use super::*; // The theoretical probabilities of the different outcomes are: // AB: 0.5 * 0.5 = 0.250 // AC: 0.5 * 0.5 = 0.250 // BA: 0.25 * 0.67 = 0.167 // BC: 0.25 * 0.33 = 0.082 // CA: 0.25 * 0.67 = 0.167 // CB: 0.25 * 0.33 = 0.082 let choices = [('a', 2), ('b', 1), ('c', 1)]; let mut rng = crate::test::rng(414); let mut results = [0i32; 3]; let expected_results = [4167, 4167, 1666]; for _ in 0..10000 { let result = choices .choose_multiple_weighted(&mut rng, 2, |item| item.1) .unwrap() .collect::>(); assert_eq!(result.len(), 2); match (result[0].0, result[1].0) { ('a', 'b') | ('b', 'a') => { results[0] += 1; } ('a', 'c') | ('c', 'a') => { results[1] += 1; } ('b', 'c') | ('c', 'b') => { results[2] += 1; } (_, _) => panic!("unexpected result"), } } let mut diffs = results .iter() .zip(&expected_results) .map(|(a, b)| (a - b).abs()); assert!(!diffs.any(|deviation| deviation > 100)); } }
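The Durstenfeld variant of the Fisher–Yates shuffle that `shuffle` and `partial_shuffle` rely on can be demonstrated stand-alone. This is a hypothetical sketch mirroring only the loop structure: `XorShift64` is a stand-in for the crate's `Rng`, and the modulo index reduction is a simplification of the unbiased `gen_range` used by the real code.

```rust
// Hypothetical stand-in generator (xorshift64); not the crate's `Rng` trait.
struct XorShift64(u64);

impl XorShift64 {
    fn next_u64(&mut self) -> u64 {
        let mut x = self.0;
        x ^= x << 13;
        x ^= x >> 7;
        x ^= x << 17;
        self.0 = x;
        x
    }

    // Index in 0..=bound (modulo bias tolerated in this sketch).
    fn index(&mut self, bound: usize) -> usize {
        (self.next_u64() % (bound as u64 + 1)) as usize
    }
}

// Durstenfeld's Fisher–Yates: walk from the back, swapping each position
// with a uniformly chosen position at or before it.
fn shuffle<T>(slice: &mut [T], rng: &mut XorShift64) {
    for i in (1..slice.len()).rev() {
        // Invariant: elements with index > i are already locked in place.
        slice.swap(i, rng.index(i));
    }
}

fn main() {
    let mut rng = XorShift64(0x9E3779B97F4A7C15);
    let mut v: Vec<u32> = (0..10).collect();
    shuffle(&mut v, &mut rng);

    // A shuffle must leave a permutation: same elements, possibly new order.
    let mut sorted = v.clone();
    sorted.sort_unstable();
    assert_eq!(sorted, (0..10).collect::<Vec<u32>>());
    println!("{:?}", v);
}
```

Stopping this loop after `amount` iterations yields exactly the early-exit behaviour of `partial_shuffle`: the last `amount` positions hold a uniform random selection in random order, which is why the method returns that region as its first slice.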